Every once in a while I visit the Twitter handle of Martin Ford (author of ‘Rise of the Robots’) to catch all that is happening, not happening or should not be happening (but is happening) in the world of technology, robotics and artificial intelligence (AI). I am profoundly sceptical of their net impact on the world. Yes, they may help in criminal investigations, in making diagnoses for certain diseases, etc. Humans may even live longer, thanks to them. I do not know if that will happen and, if it does, whether it is a good thing. I do not think it is.
But, on balance, given their impact on employment and their potential availability only to the rich and the well-heeled, I think AI and robotics will accentuate the many faultlines in society. Also, humans, bored stiff and with too much time to kill, will actually turn destructive of one another, of society and of the environment. Sounds bleak, I know. But it is just one person’s view, as fallible or as correct as anyone else’s, or as any other view that I have held. Some of the recent links:
Luke Dormehl writes about eight jobs that are under threat from the AI revolution. Non-paying or pro-bono blogging is not one of them.
This WSJ article says that employers are relying on intelligent software to figure out what you meant when you said or wrote something in an employee survey.
Although this article is in the context of the use of robotics and AI for elderly care, this question is relevant in all contexts:
“The greatest danger of Artificial Intelligence,” he writes, “is that people conclude too early that they understand it.”
Any serious discussion of AI’s impact on the aging population must start with Yudkowsky’s implied question: Do we understand it? And if we do, how do we harness it to enhance the lives of our burgeoning population of older adults? [Link]
‘Retailers race against Amazon to automate stores’ is the headline of this article in the New York Times. Think of a supermarket that is eerily quiet, with no cashiers at the checkout counters. I think humans will forget how to communicate. They will become idiots, I think. In that sense, AI will have triumphed over Real Intelligence, or RI, because it would have extinguished whatever RI there was in humans.
Stephan Talty imagines five scenarios for 2065 with AI, but he is not thrilled by them, or so I gather:
If there’s one thing that gives me pause, it’s that when human beings are presented with two doors—some new thing, or no new thing—we invariably walk through the first one. Every single time. We’re hard-wired to. We were asked, nuclear bombs or no nuclear bombs, and we went with Choice A. We have a need to know what’s on the other side.
But once we walk through this particular door, there’s a good chance we won’t be able to come back. Even without running into the apocalypse, we’ll be changed in so many ways that every previous generation of humans wouldn’t recognize us. [Link]
It is a fascinating, engrossing and scary article. Certainly, I do not want to live in that world. Give me the messiness of humans, any day.
This article proves the point that Stephan Talty makes. It is about AI professors boycotting a Korean university over its work on killer robots. It sounds nice and brave, but the conclusions are sobering and realistic:
Although a boycott against KAIST would be significant, some experts say the campaign to control the development of autonomous weaponry is futile….For Walsh and others, though, the danger is too great to be complacent. “If developed, autonomous weapons will […] permit war to be fought faster and at a scale greater than ever before,” said Walsh in a press statement. “This Pandora’s box will be hard to close if it is opened.” [Link]
Walsh is a professor at the University of New South Wales.