Kissinger on Artificial Intelligence

I am no fan of Henry Kissinger. One cannot be, after reading ‘The Blood Telegram’. But, his comments on Artificial Intelligence were very thoughtful. They appeared in ‘The Atlantic’ three months ago. I do not know why I never got around to posting the extracts from that article here, although I had saved them for posting as soon as I had finished reading the piece. Today, when I read Manu Joseph’s well-written review of Yuval Harari’s latest book in MINT, I resolved to post them here:

Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity. ….

………….  Before AI began to play Go, the game had varied, layered purposes: A player sought not only to win, but also to learn new strategies potentially applicable to other of life’s dimensions. For its part, by contrast, AI knows only one purpose: to win. It “learns” not conceptually but mathematically, by marginal adjustments to its algorithms. So in learning to win Go by playing it differently than humans do, AI has changed both the game’s nature and its impact. Does this single-minded insistence on prevailing characterize all AI?…

…….  Through all human history, civilizations have created ways to explain the world around them—in the Middle Ages, religion; in the Enlightenment, reason; in the 19th century, history; in the 20th century, ideology. The most difficult yet important question about the world into which we are headed is this: What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?

…………  Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced. Rather, it is unprecedented memorization and computation. Because of its inherent superiority in these fields, AI is likely to win any game assigned to it. But for our purposes as humans, the games are not only about winning; they are about thinking. By treating a mathematical process as if it were a thought process, and either trying to mimic that process ourselves or merely accepting the results, we are in danger of losing the capacity that has been the essence of human cognition….

….. the scientific world is impelled to explore the technical possibilities of its achievements, and the technological world is preoccupied with commercial vistas of fabulous scale. The incentive of both these worlds is to push the limits of discoveries rather than to comprehend them. [Link]

So, what did Manu Joseph write about Artificial Intelligence that triggered this post?

To draw our attention to the impending darkness, Harari mentions a chess contest that was held in December last year. One of the contenders was known to chess players around the world. Stockfish, believed to be the world’s most powerful chess engine, is a computer program that has been designed to analyse chess moves. No human has a chance to beat it. Stockfish played AlphaZero, Google’s machine-learning program. The two programs played a hundred games.

AlphaZero won 28, drew 72 and lost none. The programmers of AlphaZero had not taught it chess; it learned on its own—in 4 hours.

Google’s claim of “4 hours” is actually a bit dramatic and opaque.

Also, AlphaZero has been training through such powerful devices that we should not try to comprehend “4 hours” in human terms. Harari, despite being a historian, is not concerned with the nuances of it all. He wants us to be scared. All things considered, it still is extraordinary that AlphaZero could teach itself chess and become the best chess player in the universe known to us. Harari uses such events to point to the future when machines will do almost all human tasks.
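To put that in perspective, here is a rough back-of-the-envelope calculation, using the device counts DeepMind reported in its AlphaZero preprint (around 5,000 first-generation TPUs generating self-play games and 64 second-generation TPUs training the network). The numbers are reported figures and the arithmetic is only illustrative:

```python
# Back-of-the-envelope arithmetic: what "4 hours" of AlphaZero training
# means in hardware terms. The device counts are those reported in
# DeepMind's AlphaZero preprint (~5,000 first-generation TPUs for
# self-play, 64 second-generation TPUs for training); the calculation
# itself is only an illustration.
selfplay_tpus = 5000
training_tpus = 64
wall_clock_hours = 4

device_hours = (selfplay_tpus + training_tpus) * wall_clock_hours
print(f"~{device_hours:,} TPU-hours")  # ~20,256 TPU-hours

# Expressed as a single device running around the clock:
device_days = device_hours / 24
print(f"~{device_days:,.0f} device-days (~{device_days / 365:.1f} device-years)")
```

In other words, those “4 hours” correspond to roughly two device-years of computation on specialised hardware, which is why the headline number reads as dramatic and opaque.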

It is fitting (in many ways) to end this post with a link to the article by Nicholas Carr published in 2008 on whether Google was making us stupid. Now, we know the answer. Or do we?

On reading ‘Why the future doesn’t need us’

One of the delights of reading ‘Our Final Hour’ by Sir Martin Rees was the discovery of the article by Bill Joy, ‘Why the future doesn’t need us’, published in ‘Wired’ magazine in April 2000. I read it for the first time today.

There were so many thoughtful observations by the man who was the Chief Scientist at Sun Microsystems. I will start with the footnote!

The footnote on the decision taken by the New York Times and the Washington Post to publish the Unabomber’s manifesto is itself worthy of a separate case study. Bill Joy reproduces two paragraphs from the manifesto that Ray Kurzweil had reproduced in his book. They are actually very perceptive.

For me, this was one of the most important passages in the article by Bill Joy:

Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies – robotics, genetic engineering, and nanotechnology – pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once – but one bot can become many, and quickly get out of control.

The second paragraph from Bill Joy that I liked:

I realize now that she had an awareness of the nature of the order of life, and of the necessity of living with and respecting that order. With this respect comes a necessary humility that we, with our early-21st-century chutzpah, lack at our peril. The commonsense view, grounded in this respect, is often right, in advance of the scientific evidence. The clear fragility and inefficiencies of the human-made systems we have built should give us all pause; the fragility of the systems I have worked on certainly humbles me.

He is referring to his grandmother in that paragraph.

This is a key proposal:

The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.

This is so thoughtfully funny:

Do you remember the beautiful penultimate scene in Manhattan where Woody Allen is lying on his couch and talking into a tape recorder? He is writing a short story about people who are creating unnecessary, neurotic problems for themselves, because it keeps them from dealing with more unsolvable, terrifying problems about the universe.

Bill Joy also cites a wonderful paragraph from Carl Sagan’s ‘Pale Blue Dot’:

Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.

Bill Joy on Sagan and humility:

For all its eloquence, Sagan’s contribution was not least that of simple common sense – an attribute that, along with humility, many of the leading advocates of the 21st-century technologies seem to lack.

That is a good moment to end this blog post. Read or re-read that article.

Don’t know; can’t know

Every once in a while I visit the Twitter handle of Martin Ford (author of ‘Rise of the Robots’) to catch all that is happening, not happening or should not be happening (but is happening) in the world of technology, robotics and artificial intelligence (AI). I am profoundly sceptical of their net impact on the world. Yes, they may help in criminal investigations, in making diagnoses for certain diseases, etc. Humans may even live longer, thanks to them. I do not know if that will happen and, if it happens, whether it is a good thing. I do not think it is.

But, on balance, given its impact on employment and its potential availability only to the rich and the well-heeled, I think AI and robotics will accentuate the many faultlines in society. Also, humans, bored stiff and having too much time to kill, will actually turn destructive of one another, of society and of the environment. Sounds bleak, I know. But, it is just one person’s view, as fallible or as correct as anyone else’s, or as any other view that I have held. Here are some of the recent links:

Luke Dormehl writes about eight jobs that are under threat from the AI revolution. Non-paying or pro-bono blogging is not one of them.

This WSJ article says that employers are relying on intelligent software to figure out what you meant when you said or wrote something in an employee survey.

Although this article is in the context of the use of robotics and AI for elderly care, this question is relevant in all contexts:

“The greatest danger of Artificial Intelligence,” he writes, “is that people conclude too early that they understand it.”

Any serious discussion of AI’s impact on the aging population must start with Yudkowsky’s implied question: Do we understand it? And if we do, how do we harness it to enhance the lives of our burgeoning population of older adults? [Link]

‘Retailers race against Amazon to automate stores’ is the headline of this article in the New York Times. Think of a supermarket that is eerily quiet, with no cashiers at the checkout counters. Humans, I think, will forget how to communicate. They will become idiots. In that sense, AI will have triumphed over Real Intelligence or RI, because it would have extinguished whatever RI was there in humans.

Stephan Talty imagines five scenarios for a world with AI in 2065, but he is not thrilled, or that is what I think:

If there’s one thing that gives me pause, it’s that when human beings are presented with two doors—some new thing, or no new thing—we invariably walk through the first one. Every single time. We’re hard-wired to. We were asked, nuclear bombs or no nuclear bombs, and we went with Choice A. We have a need to know what’s on the other side.

But once we walk through this particular door, there’s a good chance we won’t be able to come back. Even without running into the apocalypse, we’ll be changed in so many ways that every previous generation of humans wouldn’t recognize us. [Link]

It is a fascinating, engrossing and scary article. Certainly, I do not want to live in that world. Give me the messiness of humans, any day.

This article proves the point that Stephan Talty makes. It is about AI professors boycotting a Korean university over its work on killer robots. It sounds nice and brave, but the conclusions are sobering and realistic:

Although a boycott against KAIST would be significant, some experts say the campaign to control the development of autonomous weaponry is futile….For Walsh and others, though, the danger is too great to be complacent. “If developed, autonomous weapons will […] permit war to be fought faster and at a scale greater than ever before,” said Walsh in a press statement. “This Pandora’s box will be hard to close if it is opened.” [Link]

Walsh is a professor at the University of New South Wales.

Spectre or spectacle or scenario?

This sentence in the FT article on the latest major design flaw in computer chips (probably intentional, at least so far) caught my attention:

But the inevitable trade off between efficiency and security has not always been made with perfect knowledge of the consequences. [Link]

What was amusing about this sentence was that it did not seem to recognise that most of what has been happening around the world – not just in the world of computing – in the last three to four decades has been done without due (or any) regard for the consequences.

Examples: QE, financial de-regulation, algorithmic trading, dark pools, negative interest rates, Arctic drilling, fracking, smartphones and social media.

A distracted post

I saw the link to the FT Alphaville story about smartphones and their impact on productivity. We should not be surprised at all. The evidence is in front of our eyes as we walk on the road, as we drive, etc. Almost everyone is distracted, to the detriment not just of productivity but of safety. The FT Alphaville story is here. The original blog post is here. The original post is worth reading, for it teases out other dimensions of what it means to be part of the distracted generation.

Izabella Kaminska had written in 2014 about supermarkets, big data and the manipulation of human preferences. That link appeared in the post above. I quickly glanced through it. It helps us focus on how powerless we are and how little influence and control we have over our own lives and choices. It is as much a spiritual realisation as it is a consequence of modern technology! Humans have unleashed a Frankenstein monster on fellow humans. Quite likely they did not intend it that way, since they are not in control themselves! So, who really drives this? Perhaps no one. Once we set out on a path of ‘conquering’ everything that we viewed as an obstacle, this ought to be the logical conclusion?

In case you are too distracted to read ‘Thinking, Fast and Slow’, please do watch Dan Ariely’s TED talk. I have posted that link several times before. But, it is worth reiterating.

Workplace automation – some links

On November 3, I had posted a small extract from a long article in the ‘New Yorker’ on workplace automation. I am repeating it here.

This article on Fanuc, the Japanese robot manufacturer, is an equally important read.

Robots routinely crash – this is encouraging in a curious sort of way.

A socially aware techie?!:

Subbarao Kambhampati, president of the Association for the Advancement of Artificial Intelligence, said that although technology often benefited society, it did not always do so equitably. “Recent technological advances have been leading to a lot more concentration of wealth,” he said. “I certainly do worry about the effects of AI technologies on wealth concentration and inequality, and how to make the benefits more inclusive.” [Link]

The Guardian on Alphabet’s ‘Urban Takeover’. This is unlikely to be socially beneficial.

This is the first time a prominent commentator has dared to question the mindless expansion of the frontiers of technology. Lant Pritchett’s piece in ‘Ideas for India’ is worth a read. (ht: Amol Agrawal’s Mostly Economics).

Humans and Robots

In southern Denmark, the regional government hired a chief robotics officer, Poul Martin Møller, to help integrate more robots into the public sector, largely as a money-saving measure. He decided that the Danish hospital system, which was under pressure to reduce costs, could benefit from robotic orderlies. There were few medical-oriented robots on the market, though, so Møller and his team took small, mobile robots with movable arms, designed for use in warehouses, and refashioned them, so that they could carry supplies to doctors and nurses. The machines worked well, scuttling through surgery wings and psych wards like helpful crabs, never complaining or taking cigarette breaks. But Møller wasn’t prepared for the reaction of the hospital staff, who recognized their mechanical colleagues as potential replacements, and tried to sabotage them. Fecal matter and urine were left in charging stations. [Link]

It is a fairly long article but worth a read.