What were the Luddites after?

Those who resist technological progress (or progress in general) are called Luddites. But the Luddites were not protesting technological progress so much as the distribution of the profits derived from the deployment of machines. Sounds familiar. Read this interview. In fact, the Luddites derived their name from a mythical character called Ned Ludd. The references for that article and many others are in the article I co-wrote with my colleague Raghuraman on ‘Machines and Men’. I enjoyed writing this one. The first article in this series of two is here.

It happens during the day

Chanced upon a review of three books by Quinn Slobodian in the ‘Boston Review’. The three books are ‘dark’ in his view, especially the first one he reviews: The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff.

He takes exception to her comments on how media influences behaviour:

One could ask whether her description doesn’t flunk the Cultural Studies 101 test by failing to acknowledge that the media’s designers don’t dictate directly its use and consumption. We hear a great deal about what companies “aim” to do through baroque projects of “behavioral modification,” but, as with the Cold War brainwashing techniques she references, we have little evidence that these efforts work—except for generating ever greater contracts for those pronouncing their own effectiveness. [Link]

But let us listen to the testimony of Jim Balsillie, former co-CEO of ‘Research in Motion’ (remember BlackBerry?):

Second, social media’s toxicity is not a bug — it’s a feature. Technology works exactly as designed. Technology products, services and networks are not built in a vacuum. Usage patterns drive product development decisions. Behavioral scientists involved with today’s platforms helped design user experiences that capitalize on negative reactions because they produce far more engagement than positive reactions. [Emphasis mine]

Third, among the many valuable insights provided by whistleblowers inside the tech industry is this quote: “the dynamics of the attention economy are structurally set up to undermine the human will.” Democracy and markets work when people can make choices aligned with their interests. The online advertisement-driven business model subverts choice and represents a foundational threat to markets, election integrity and democracy itself. [Link]

Indeed, the comment about the online advertisement-driven business model reminds me of advertising itself. I did a blog post on it just yesterday.

More importantly, those are very powerful lines. The point to note is obvious: it is not the subversion of tech. platforms by populists, demagogues, the far-Right and other extremists that is the issue. The platform is the subversion.

That is why it was disappointing to read that Stanley Druckenmiller, otherwise an intelligent man, criticised the Trump Administration’s consideration of anti-trust investigations of the tech. companies in the USA.

So, Zuboff is not exactly wrong. It is ‘dark’ for the rest of us because we don’t know (or cannot be bothered to fathom) how the ‘rich’ and the ‘connected’ operate. Looks like that is the stuff of the second and, even more so, the third book reviewed.

The second book he reviews is Darkness by Design: The Hidden Power in Global Capital Markets by Walter Mattli. A key paragraph from the review:

Mattli shows how the shape of financial governance—and lack thereof—was pushed by a small elite of investment entities. The advantages gained by those able to make costly investments in computerization began to concentrate wealth at the upper end of exchange’s members, including the “national commercial and non-U.S. ‘universal’ banks” that deregulation had allowed to enter. By 2000, the twenty-five second-tier firms had less than 10 percent of the market capitalization of the top ten. Household name titans such as Barclays, Credit Suisse, Citigroup, Deutsche Bank, Goldman Sachs, Merrill Lynch, Morgan Stanley, and JP Morgan dominated. 

The third book he reviews is Katharina Pistor’s The Code of Capital.

In fact, he summarises it very well:

Katharina Pistor’s The Code of Capital is also an urgent tract. The difference, in her telling, is that the law doesn’t always ride a white horse. It comes as often to perpetuate injustice as redress it.

Further, the following statements are both profound and true. The State is the protector – or it ought to be – and the villain. The problem, in other words, is that the State has been captured and hence aids the evasion of taxes by the very rich whom it is now supposed to bring back into its tax net:

The concentration of wealth and its evasion of state attempts at its capture through taxation also do not happen by escaping law or the state, but through the law and the state—through projects of legal “encoding,” to use Pistor’s dominant metaphor.

Quinn Slobodian highlights a few things from Pistor’s book that ought to be of interest to those in Finance. Very few would even be aware of them:

Pistor introduces us to new sites and conventions created to offer protection for capital mobility and insulation from democratic states, places with their own acronyms, where PRIME Finance (Panel of Recognized International market Experts in finance) protects PRIMA (the Place of Relevant Intermediary Approach convention).

So, the point is that it is not about shining a light on people operating covertly, in darkness, outside the pale of the law. There is collusion. There is capture. The State and the law have facilitated it.

Perhaps Pistor’s book is similar to the one by Brink Lindsey and Steven M. Teles: ‘The Captured Economy’. I have begun reading it.

What does it cost the world for Alexa to answer your queries?

My friend and co-author Gulzar Natarajan pointed me to an article by Gillian Tett in the FT on how private equity has grown over the years, even though public markets and public listings have been the fashion since the Eighties. I then caught up with a few of her other articles. One of them was on what it costs humans to be able to use modern technological gadgets and devices founded on ‘artificial intelligence’.

That article had a link to this one: ‘Anatomy of an AI system’. It could be one of the most important articles you would read in 2019.

To me, the article underscores, for the umpteenth time, the fact that humans are incapable of grasping (let alone comprehending) what they unleash. They wade into waters they can scarcely fathom, and the splash and the spillovers are things they can never hope to get a grip on or control. Sample this:

it took Intel more than four years to understand its supply line well enough to ensure that no tantalum from the Congo was in its microprocessor products. As a semiconductor chip manufacturer, Intel supplies Apple with processors. In order to do so, Intel has its own multi-tiered supply chain of more than 19,000 suppliers in over 100 countries providing direct materials for their production processes, tools and machines for their factories, and logistics and packaging services. That it took over four years for a leading technology company just to understand its own supply chain, reveals just how hard this process can be to grasp from the inside, let alone for external researchers, journalists and academics.

We are doomed not because we have damaged the environment, not because we are running out of water, not because we have run up too much debt, not because we have accumulated too much wealth in too few hands, but because we know not and refuse to admit that we know not.

Whose sufferance?

This is not a long article but it makes some important points. It is about technology taking over our lives. As always, it is an outcome of ‘means’ becoming the ‘end’. Technology was the means to a comfortable life. But, because we chased it too hard, technology has become the ‘end’ in itself and it is now dominating our lives.

If you are not convinced about the conflation of ‘means’ and ‘ends’, read this paragraph by George Dyson:

The search engine is no longer a model of human knowledge, it is human knowledge. What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought. No one is at the controls. If enough drivers subscribe to a real-time map, traffic is controlled, with no central model except the traffic itself. The successful social network is no longer a model of the social graph, it is the social graph. This is why it is a winner-take-all game. Governments, with an allegiance to antiquated models and control systems, are being left behind. [Emphasis mine; link]

This warning by an MIT professor in 1970 was very prescient:

While Minsky believed that A.I. might solve the world’s problems, he also recognized how it could all go drastically awry. In an interview with Life magazine in November of 1970, Minsky warned: “Once the computers get control, we might never get it back. We would survive at their sufferance.” In one of his more famous premonitions, he posited, “If we’re lucky, the [machines] might decide to keep us as pets.” [Link]

The reference to Arthur C. Clarke’s ‘Childhood’s End’ is interesting. One should check it out.

Perhaps, this tribal community in Nagaland, in its own way, is showing us the way?

Natural enemies and man-made enemies

This is my piece in MINT on Tuesday. I enjoyed writing it, but it was also one of the most difficult pieces to write. I spent several hours on it. But the end result is gratifying.

A few weeks ago, I saw a news story that the eco-sensitive zone around the Bannerghatta National Park would be reduced by 100 sq km. This news was covered in a small way in the national newspapers. Since then, a campaign has been mounted to prevent this proposed reduction from happening. This story reminded me of an interaction with Meghna Krishnadas of Yale University early in November.

In the paper ‘Weaker Plant-enemy Interactions Decrease Tree Seedling Diversity With Edge-effects In A Fragmented Tropical Forest’, written with Robert Bagchi, Sachin Sridhara and Liza S. Comita (Nature Communications, Vol. 9, article number 4523, 2018), she tested the hypothesis that natural enemies—insect herbivores and fungal pathogens—help shape plant diversity, especially in the context of forest fragmentation. These enemy effects, or their absence, are more pronounced at forest edges than in the interior.

In plain English, if some plants are not regulated by natural enemies, they will tend to grow unregulated and uninhibited. That would reduce plant diversity. So, applying pesticides and destroying insect herbivores and fungal pathogens destroys plant diversity too. In other words, the fragile balance between humans and nature has to be nurtured carefully. If we don’t, we will not be able to sustain biodiversity, especially in fragmented forests. That is why the decision to reduce the eco-sensitive zone around the Bannerghatta National Park deserves the scrutiny it is getting.

Many people feel that there is a trade-off between short-term economic growth imperatives and the need to preserve the ecology and environment. Western countries could burn hydrocarbons without a worry when they were developing countries. Developing countries today have to be mindful of carbon emissions and their commitments to international climate accords.

However, these commitments are not merely a case of being good global citizens. They are necessary even to maintain the health of local citizens. Without a healthy population, there is no sustained economic growth. So, sometimes, these trade-offs are more imagined than real.

The fragile balance between nature and humans was also the subject matter of the recently released Rajinikanth-starrer 2.0. After having feted technology in his earlier films, director Shankar reminds himself and his audience that technology and seeming technological progress are, more often than not, only mixed blessings. The message to be sensitive to the need for the winged population to survive is neither a luxury nor a concern of developed societies. The movie reminds the audience that by preying on insects and worms, birds maintain plant health and obviate the need for the application of pesticides.

Juxtaposing the message of the paper with the message of the movie gives us a beautiful insight. Birds are natural enemies of insects and worms. Without birds, we will have too many of them. Without them, we will have too little plant diversity. Nature has arranged itself well.

We do not understand it and frown upon any effort required to preserve its fragile balance as a hindrance. We clothe our laziness and our short-termism in intellectual terms, arguing that economic growth and poverty alleviation require relegating environmental considerations to the background. We do so at our own peril.

We cast our interference with and trampling upon natural arrangements as the triumph of human intellect. I view them with trepidation. For example, Financial Times featured an article recently on embryo selection (Profiling For IQ Opens New Uber-parenting Possibilities, 22 November 2018). The article briefly mentions personal and social costs of such embryo selection without going into details. It is fraught with immense danger.

It will be polarising at a social level. It will add yet another dimension of inequality to the ones we know. At a personal level, it will add immense stress as competition will be intense among the so-called “super kids” of which there will be plenty. There is a reason for nature’s bell curve distribution of many things. Consequences of extremely thick fat tails are unknown unknowns.

Indian cricketer Cheteshwar Pujara had said, “When you start playing shots [during a testing spell], that means your game is not capable enough to play the Test format. You are trying to survive rather than understand the situation and play accordingly.” He is right. When someone wishes to rush through a situation that requires deliberation, they are fearful and doubtful of their staying power. That is how humans are reacting to the complexities of the world, some of which may be self-inflicted. When Seth Klarman told the audience at Harvard Business School in October that one of society’s most vexing problems was its relentless short-term orientation, he was echoing Pujara. Short-termism betrays lack of confidence in long-term staying power.

Finally, the conclusion that natural enemies are useful for biodiversity is readily transferable to societies. Natural enemies are useful for diversity of opinions and ideas. So, the more we shut down opposite views (enemies), the less intellectually vibrant the society becomes. Just as biodiversity is beneficial, diversity of views is also beneficial. For that, one needs natural enemies. Therefore, common sense and self-interest dictate that we don’t smother natural enemies.

V. Anantha Nageswaran is the dean of IFMR Graduate School of Business (KREA University). These are his personal views.

Comments are welcome at views@livemint.com

The techypocrisy

The wheel has come full circle, or is on its way – or so it seems. See two recent NYT articles here and here. The digital gap is not what you thought or think it is; technology deprivation is no deprivation but a blessing!

Of course, I am not sure extreme answers are the right ones or that they would be effective with all children. Each child and each parent is different. In fact, I am wary of fundamentalist or extreme views with respect to technology – utopia vs. dystopia. But the evidence points to a compelling case that modern technology is shaping a dystopian world.

But what psychologists working for tech. companies do, and how tech. company executives themselves have discouraged their own children from taking up ‘screen’ habits, is extremely illuminating and insightful. Without mincing words or sentiment, it is also most troubling, and it leaves us fulminating, angry and helpless, all at the same time.

[On a related and unrelated note, read this piece about the forked tongues of tech. leaders.]

The march of progress be damned – and perhaps renamed to something more appropriate for what it is.

These developments are consistent with the ‘more is preferred to less’ axiom of neo-classical economics. That is why we have frequent updates to hardware and software, and so much clickbait in so many apps.

I would also recommend the four-part documentary ‘The Century of the Self’ (each part is approximately one hour long). I have watched two parts. Very, very insightful.

https://topdocumentaryfilms.com/the-century-of-the-self/ (This is the link to the complete 4-hour video)

Those who teach consumer marketing should find it a useful account of how it all began. You may draw your own conclusions as to the morality (or lack thereof) of it all. For my part, I am clear: consumer marketing – for most products (fast foods, soda, entertainment electronics, to name just a few) – sails close to the wind on ethics and morality, or beyond it.

Kissinger on Artificial Intelligence

I am no fan of Henry Kissinger. One cannot be, after reading ‘The Blood Telegram’. But his comments on Artificial Intelligence were very thoughtful. They were published in ‘The Atlantic’ three months ago. I do not know why I never got around to posting the extracts from that article here, although I had saved them as soon as I finished reading the piece. Today, when I read Manu Joseph’s well-written review of Yuval Harari’s latest book in MINT, I resolved to post them:

Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity. ….

………….  Before AI began to play Go, the game had varied, layered purposes: A player sought not only to win, but also to learn new strategies potentially applicable to other of life’s dimensions. For its part, by contrast, AI knows only one purpose: to win. It “learns” not conceptually but mathematically, by marginal adjustments to its algorithms. So in learning to win Go by playing it differently than humans do, AI has changed both the game’s nature and its impact. Does this single-minded insistence on prevailing characterize all AI?…

…….  Through all human history, civilizations have created ways to explain the world around them—in the Middle Ages, religion; in the Enlightenment, reason; in the 19th century, history; in the 20th century, ideology. The most difficult yet important question about the world into which we are headed is this: What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?

…………  Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced. Rather, it is unprecedented memorization and computation. Because of its inherent superiority in these fields, AI is likely to win any game assigned to it. But for our purposes as humans, the games are not only about winning; they are about thinking. By treating a mathematical process as if it were a thought process, and either trying to mimic that process ourselves or merely accepting the results, we are in danger of losing the capacity that has been the essence of human cognition….

….. the scientific world is impelled to explore the technical possibilities of its achievements, and the technological world is preoccupied with commercial vistas of fabulous scale. The incentive of both these worlds is to push the limits of discoveries rather than to comprehend them. [Link]

So, what did Manu Joseph write about Artificial Intelligence that triggered this post?

To draw our attention to the impending darkness, Harari mentions a chess contest that was held in December last year. One of the contenders was known to chess players around the world. Stockfish, believed to be the world’s most powerful chess engine, is a computer program that has been designed to analyse chess moves. No human has a chance to beat it. Stockfish played AlphaZero, Google’s machine-learning program. The two programs played a hundred games.

AlphaZero won 28, drew 72 and lost none. The programmers of AlphaZero had not taught it chess; it learned on its own—in 4 hours.

Google’s claim of “4 hours” is actually a bit dramatic and opaque.

Also, AlphaZero has been training through such powerful devices that we should not try to comprehend “4 hours” in human terms. Harari, despite being a historian, is not concerned with the nuances of it all. He wants us to be scared. All things considered, it still is extraordinary that AlphaZero could teach itself chess and become the best chess player in the universe known to us. Harari uses such events to point to the future when machines will do almost all human tasks.
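For readers who wonder what ‘learning on its own’ means mechanically, here is a minimal, purely illustrative sketch of a self-play training loop in Python. It is emphatically not AlphaZero’s code: the toy game (take 1–3 stones, the player taking the last stone wins), the function names (play_self_game, update_policy) and the simple value-table updates are my own stand-ins for the real search and neural-network machinery. It only shows the shape of the idea: the program plays against itself, and the outcomes of those games are its only ‘teacher’.

```python
# A minimal, purely illustrative self-play learner -- NOT AlphaZero.
# Toy game: a pile of stones; players alternately take 1-3 stones and
# whoever takes the last stone wins. Nobody tells the program the winning
# strategy; it only sees who won each game it plays against itself.

import random
from collections import defaultdict

N_STONES = 10       # starting pile size (arbitrary toy choice)
EPSILON = 0.2       # exploration rate
LEARNING_RATE = 0.1
N_GAMES = 20000

# value[(stones_left, move)] ~ estimated chance of winning after playing `move`
value = defaultdict(lambda: 0.5)

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose_move(stones):
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)                        # explore
    return max(moves, key=lambda m: value[(stones, m)])    # exploit

def play_self_game():
    """Play one game against itself; return each player's moves and the winner."""
    stones, player = N_STONES, 0
    history = {0: [], 1: []}
    while True:
        move = choose_move(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            return history, player    # this player took the last stone and wins
        player = 1 - player

def update_policy(history, winner):
    """Nudge every move's value toward 1 for the winner and 0 for the loser."""
    for player, moves in history.items():
        target = 1.0 if player == winner else 0.0
        for state_move in moves:
            value[state_move] += LEARNING_RATE * (target - value[state_move])

for _ in range(N_GAMES):
    history, winner = play_self_game()
    update_policy(history, winner)

# Inspect what the program has taught itself about the opening position.
print({move: round(value[(N_STONES, move)], 2) for move in legal_moves(N_STONES)})
```

Scale that loop up – replace the toy value table with a deep neural network and the random exploration with a tree search – and you are in the family of methods to which AlphaZero belongs, which is also why the “4 hours” figure hides an enormous amount of hardware.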

It is fitting (in many ways) to end this post with a link to the article by Nicholas Carr, published in 2008, on whether Google was making us stupid. Now we know the answer – or do we?