On reading ‘Why the Future Doesn’t Need Us’

One of the delights of reading ‘Our Final Hour’ by Sir Martin Rees was the discovery of Bill Joy’s article, ‘Why the Future Doesn’t Need Us’, published in ‘Wired’ magazine in April 2000. I read it for the first time today.

There were so many thoughtful observations by the man who was Chief Scientist at Sun Microsystems. I will start with the footnote!

The footnote on the decision taken by the New York Times and the Washington Post to publish the ‘Unabomber’s manifesto’ is itself worthy of a separate case study. Bill Joy reproduces two paragraphs from the Unabomber’s manifesto that Ray Kurzweil had reproduced in his book. They are actually very perceptive.

For me, this was one of the most important passages in the article by Bill Joy:

Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies – robotics, genetic engineering, and nanotechnology – pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once – but one bot can become many, and quickly get out of control.

The second paragraph from Bill Joy that I liked:

I realize now that she had an awareness of the nature of the order of life, and of the necessity of living with and respecting that order. With this respect comes a necessary humility that we, with our early-21st-century chutzpah, lack at our peril. The commonsense view, grounded in this respect, is often right, in advance of the scientific evidence. The clear fragility and inefficiencies of the human-made systems we have built should give us all pause; the fragility of the systems I have worked on certainly humbles me.

He is referring to his grandmother in that paragraph.

This is a key proposal:

The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.

This is so thoughtfully funny:

Do you remember the beautiful penultimate scene in Manhattan where Woody Allen is lying on his couch and talking into a tape recorder? He is writing a short story about people who are creating unnecessary, neurotic problems for themselves, because it keeps them from dealing with more unsolvable, terrifying problems about the universe.

Bill Joy also cites a wonderful paragraph from Carl Sagan’s ‘Pale Blue Dot’:

Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.

Bill Joy on Sagan and humility:

For all its eloquence, Sagan’s contribution was not least that of simple common sense – an attribute that, along with humility, many of the leading advocates of the 21st-century technologies seem to lack.

That is a good moment to end this blog post. Read, or re-read, that article.

Don’t know; can’t know

Every once in a while, I visit the Twitter handle of Martin Ford (author of ‘Rise of the Robots’) to catch all that is happening, not happening or should not be happening (but is happening) in the world of technology, robotics and artificial intelligence (AI). I am profoundly sceptical of their net impact on the world. Yes, they may help in criminal investigations, in making diagnoses for certain diseases, and so on. Humans may even live longer, thanks to them. I do not know if that will happen and, if it happens, whether it is a good thing. I do not think it is.

But, on balance, with their impact on employment and their potential availability only to the rich and the well-heeled, I think AI and robotics will accentuate the many faultlines in society. Also, humans, bored stiff and with too much time to kill, will actually turn destructive of one another, of society and of the environment. Sounds bleak, I know. But it is just one person’s view, as fallible or as correct as anyone else’s, or as any other view that I have held. Now, some of the recent links:

Luke Dormehl writes about eight jobs that are under threat from the AI revolution. Non-paying or pro-bono blogging is not one of them.

This WSJ article says that employers are relying on intelligent software to figure out what you meant when you said or wrote something in an employee survey.

Although this article is in the context of the use of robotics and AI for elderly care, this question is relevant in all contexts:

“The greatest danger of Artificial Intelligence,” he writes, “is that people conclude too early that they understand it.”

Any serious discussion of AI’s impact on the aging population must start with Yudkowsky’s implied question: Do we understand it? And if we do, how do we harness it to enhance the lives of our burgeoning population of older adults? [Link]

‘Retailers race against Amazon to automate stores’ is the headline of this article in the New York Times. Think of a supermarket that is eerily quiet, with no tellers at the checkout counters. I think humans will forget how to communicate. They will become idiots, I think. In that sense, AI will have triumphed over Real Intelligence, or RI, because it would have extinguished whatever RI there was in humans.

Stephan Talty thinks through five scenarios for 2065 with AI, but he is not thrilled, or that is what I think:

If there’s one thing that gives me pause, it’s that when human beings are presented with two doors—some new thing, or no new thing—we invariably walk through the first one. Every single time. We’re hard-wired to. We were asked, nuclear bombs or no nuclear bombs, and we went with Choice A. We have a need to know what’s on the other side.

But once we walk through this particular door, there’s a good chance we won’t be able to come back. Even without running into the apocalypse, we’ll be changed in so many ways that every previous generation of humans wouldn’t recognize us. [Link]

It is a fascinating, engrossing and scary article. Certainly, I do not want to live in that world. Give me the messiness of humans, any day.

This article proves the point that Stephan Talty makes. It is about AI professors boycotting a Korean university over its killer robots. It sounds nice and brave, but the conclusions are sobering and realistic:

Although a boycott against KAIST would be significant, some experts say the campaign to control the development of autonomous weaponry is futile… For Walsh and others, though, the danger is too great to be complacent. “If developed, autonomous weapons will […] permit war to be fought faster and at a scale greater than ever before,” said Walsh in a press statement. “This Pandora’s box will be hard to close if it is opened.” [Link]

Walsh is a Professor at the University of New South Wales.

Spectre or spectacle or scenario?

This sentence in the FT article on the latest major design flaw in computer chips (probably intentional, at least so far as one can tell) caught my attention:

But the inevitable trade off between efficiency and security has not always been made with perfect knowledge of the consequences. [Link]

What was amusing about this sentence was that it did not seem to recognise that most of what has been happening around the world – not just in the world of computing – over the last three to four decades has been done without due (or any) regard for the consequences.

Examples: QE, financial deregulation, algorithmic trading, dark pools, negative interest rates, Arctic drilling, fracking, smartphones and social media.

A distracted post

I saw the link to the story in FT Alphaville about smartphones and their impact on productivity. We should not be surprised at all. The evidence is in front of our eyes as we walk on the road, as we drive, etc. Almost everyone is distracted, to the detriment not just of productivity but of safety. The FT Alphaville story is here. The original blog post is here. The original post is worth reading, for it teases out other dimensions of what it means to be part of the distracted generation.

Izabella Kaminska had written in 2014 about supermarkets, big data and the manipulation of human preferences. That link appeared in the post above. I quickly glanced through it. It helps us focus on how powerless we are and how little influence and control we have over our own lives and choices. That is as much a spiritual realisation as it is a consequence of modern technology! Humans have unleashed a Frankenstein monster on fellow humans. Quite likely they did not intend it that way, since they are not in control themselves! So, who really drives this? Perhaps no one. Once we set out on a path of ‘conquering’ everything that we viewed as an obstacle, was this not the logical conclusion?

In case you are too distracted to read ‘Thinking, Fast and Slow’, please do watch Dan Ariely’s TED talk. I have posted it several times before, but it is worth reiterating.


Workplace automation – some links

On November 3, I had posted a small extract from a long article in the ‘New Yorker’ on workplace automation. I am repeating it here.

This article on Fanuc, the Japanese robot manufacturer, is an equally important read.

Robots routinely crash – this is encouraging in a curious sort of way.

A socially aware techie?!:

Subbarao Kambhampati, president of the Association for the Advancement of Artificial Intelligence, said that although technology often benefited society, it did not always do so equitably. “Recent technological advances have been leading to a lot more concentration of wealth,” he said. “I certainly do worry about the effects of AI technologies on wealth concentration and inequality, and how to make the benefits more inclusive.” [Link]

The Guardian on Alphabet’s ‘Urban Takeover’. This is unlikely to be socially beneficial.

This is the first time a prominent commentator has dared to question the mindless expansion of the frontiers of technology. Lant Pritchett’s piece in ‘Ideas for India’ is worth a read. (ht: Amol Agrawal’s Mostly Economics)

Humans and Robots

In southern Denmark, the regional government hired a chief robotics officer, Poul Martin Møller, to help integrate more robots into the public sector, largely as a money-saving measure. He decided that the Danish hospital system, which was under pressure to reduce costs, could benefit from robotic orderlies. There were few medical-oriented robots on the market, though, so Møller and his team took small, mobile robots with movable arms, designed for use in warehouses, and refashioned them, so that they could carry supplies to doctors and nurses. The machines worked well, scuttling through surgery wings and psych wards like helpful crabs, never complaining or taking cigarette breaks. But Møller wasn’t prepared for the reaction of the hospital staff, who recognized their mechanical colleagues as potential replacements, and tried to sabotage them. Fecal matter and urine were left in charging stations. [Link]

It is a fairly long article but worth a read.

Bullet train and rail safety

About ten days ago, the Business Standard newspaper carried a brief story featuring four tweets by Mr. P. Chidambaram, the former Finance Minister, on the need for railway safety and how the money spent on bullet trains should instead be spent on railway safety. On the face of it, it is unexceptionable. But it is unfortunate.

But the truth is, as stated many times on this blog, that the soft loan – a very, very low interest rate over a long term – being extended by Japan is only for this project. The Government of India is not diverting funds for it. There is some money that the Indian Government has to put in, but it is a very small sum.

Check this out from the ‘Hindustan Times’:


To fund the ambitious Rs 1,10,000-crore project, a loan of Rs 88,000 crore will be taken from Japan. The Japan International Cooperation Agency (JICA) will fund it at a low rate of interest of 0.1% per annum. This loan has to be repaid to Japan in 50 years, with 15 years grace period. [Link]
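To see just how concessional those terms are, here is a back-of-the-envelope sketch; the 7% commercial borrowing rate used for comparison is my own assumption, purely for illustration, not a figure from the article:

```python
# Back-of-the-envelope: what does Rs 88,000 crore at 0.1% per annum cost in
# interest each year, versus the same sum at an assumed commercial rate of 7%?
def simple_annual_interest(principal_cr: float, annual_rate: float) -> float:
    """Interest accrued in one year, in the same units as the principal."""
    return principal_cr * annual_rate

jica_loan_cr = 88_000           # JICA loan, in Rs crore (from the article)
jica_rate = 0.001               # 0.1% per annum (from the article)
assumed_commercial_rate = 0.07  # illustrative 7% -- an assumption, not sourced

print(f"JICA interest per year:           Rs {simple_annual_interest(jica_loan_cr, jica_rate):,.0f} crore")
print(f"At an assumed 7%, interest/year:  Rs {simple_annual_interest(jica_loan_cr, assumed_commercial_rate):,.0f} crore")
```

At 0.1%, the annual interest on Rs 88,000 crore works out to roughly Rs 88 crore, while at any plausible commercial rate it would run into thousands of crores a year, which is why the soft loan, rather than diverted safety funds, is what finances the project.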

Ajit Ranade had written cogently and eloquently on why the project makes sense. He wishes the project Godspeed. He is quite right on this one. The project has much to commend it, including rail safety. Ranade writes [emphasis mine]:

The MAHSR is one of the crown jewels in the robust Indo-Japan relationship. Its concessional funding of $15 billion, at an interest rate of 0.1 per cent, is from Japan International Cooperation Agency, to be repaid over 50 years after an initial 15-year moratorium. This funding is specifically earmarked for MAHSR and is not fungible.

Japan is an acknowledged world leader in high-speed rail technology, whose focus is on reliability and safety. Their approach is integrating transport and development, not merely to achieve high-speed connection.

The project envisages technology and skill transfer, indigenous manufacturing and employment. Don’t forget that Japan had a major role to play in India becoming a hub of small-car manufacturing in the world. That’s the story of Maruti Suzuki.

Japan also was instrumental in the setting up of Delhi Metro. It may be useful to note that the Tokyo metro system is one of the world’s most sophisticated, with 158 lines criss-crossing 2,200 stations serving 40 million passenger rides daily. All this with a near-zero accident rate. Hence the safety aspect of MAHSR has the Shinkansen mindset and approach behind it.

Just as the Indian Space Research Organisation inspires, so would the Bullet Train project, both on safety and on technological excellence and even punctuality. Certain discontinuous opportunistic leapfrogging is necessary from time to time. That is what this project would do.

As for safety in Indian Railways, read what Sunil Jain wrote in ‘Financial Express’ after the Elphinstone Road tragedy [Emphasis mine]:

And Suresh Prabhu lost his job as Railway minister following the Kalinga-Utkal Express derailment – while the poor man had done well to focus on eliminating unmanned railway crossings where 60% of fatalities used to occur in the past, when push came to shove, he hadn’t managed to stop the derailments. But how could he? In 2012, the Anil Kakodkar panel said India needed Rs 1 lakh crore for fixing safety and said it wasn’t safe to use the 52kg/m tracks or the 43,000 ICF coaches – this got highlighted in all the recent accidents – but the Railways is too broke, so we fix what we can (albeit at a faster pace under Prabhu) and leave the rest to God.

As a former Finance Minister, Mr. Chidambaram should come out in support of the Delhi Metro fare hike and oppose the politically motivated resistance to it. See here.

On reading this blog post, my friend shared this news story from February 2012 with me. All parties oppose what they propose! Neither the Congress nor the BJP is an exception.