Tuesday, March 3, 2015

What Will Life Be Like in an Artificial Intelligence Future?

He begins with a light-hearted romp through the current state of play.
From Irving Wladawsky-Berger:
People have long argued about the future impact of technology.  But, as AI is now seemingly everywhere, the concerns surrounding its long-term impact may well be in a class by themselves.  Like no other technology, AI forces us to explore the boundaries between machines and humans.  What will life be like in such an AI future?

Not surprisingly, considerable speculation surrounds this question.  At one end we find books and articles exploring AI’s impact on jobs and the economy.  Will AI turn out like other major innovations - e.g., steam power, electricity, cars - highly disruptive in the near term, but ultimately beneficial to society?  Or, as our smart machines are increasingly applied to cognitive activities, will we see more radical economic and societal transformations?  We don’t really know.

These concerns are not new.  In a 1930 essay, for example, English economist John Maynard Keynes warned about the coming technological unemployment, a new societal disease whereby automation would outrun our ability to create new jobs.

Then we have the more speculative predictions that, in the not-too-distant future, a sentient, superintelligent AI might far surpass human intelligence as well as experience human-like feelings.  Such an AI, we are warned, might even pose an “existential risk” that “could spell the end of the human race.”

A number of experts view these superintelligence predictions as yet another round of the periodic AI hype that in the 1980s led to the so-called AI winter.  Interest in AI declined until the field was reborn in the 1990s by embracing an engineering-based, data-intensive analytics paradigm.  To help us understand what an AI future might be like, Stanford University recently launched AI100, “a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.”

Superintelligence is a truly fascinating subject, the stuff of science fiction novels and movies.  But, whether you believe in it or not, how can we best frame a serious discussion of the subject?  I believe that in the end, it comes down to which of two major forces will prevail over time - exponential growth or the complexity brake.

In their 2011 bestseller, Race Against the Machine, MIT’s Erik Brynjolfsson and Andy McAfee argue that the breakthrough advances AI has achieved in just the past few years - e.g., Watson, Siri, and Google’s driverless cars - are the result of Moore’s Law and exponential growth.  They illustrate the power of exponential growth using an ancient Indian story about the creation of chess.

According to the story, upon being shown the game of chess, the emperor was so pleased that he told its inventor to name his own reward.  The inventor proceeded to request what seemed like a modest reward.  He asked for an amount of rice computed as follows: one grain of rice for the first square of the chess board, two grains for the second one, four for the third and so on, doubling the amount each time up to the 64th square. 

After 32 squares, the inventor had received 2³², or about 4 billion grains of rice, roughly one large field’s worth weighing about 100,000 kilograms - a large, but not unreasonable reward.  However, the second half of the chessboard is different due to the power of exponential growth.  After 64 squares, the total amount of rice, 2⁶⁴, would have made a heap bigger than Mount Everest and would have been roughly 1,000 times the world’s total production of rice in 2010.
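
To make the arithmetic concrete, here is a minimal Python sketch (not from the original article) that reproduces these figures; the 25 mg weight per grain of rice is an assumed value, chosen so that the first-half total lands near the 100,000 kilograms quoted above.

# Cumulative grains of rice on the chessboard: 1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1
GRAIN_WEIGHT_KG = 0.000025  # assumed ~25 mg per grain, for illustration only

def total_grains(squares):
    return 2**squares - 1

first_half = total_grains(32)   # ~4.3 billion grains
full_board = total_grains(64)   # ~1.8 x 10^19 grains

print(f"First 32 squares: {first_half:,} grains, ~{first_half * GRAIN_WEIGHT_KG:,.0f} kg")
print(f"All 64 squares:   {full_board:,} grains, ~{full_board * GRAIN_WEIGHT_KG:,.0f} kg")

Running this prints roughly 107,000 kg for the first half of the board and about 461 billion metric tons for the full board - the Mount Everest-sized heap in the story.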

Digital technologies have recently entered the second half of the chessboard.  If we assume 1958 as the starting year and the standard 18 months for the doubling of Moore’s Law, 32 doublings would then take us to 2006 - “into the phase where exponential growth yields jaw-dropping results.”  What happens then?
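
As a quick check on that date, here is a one-line calculation under the article’s own assumptions (a 1958 start and an 18-month doubling period):

# The article's assumptions: Moore's Law starts in 1958, computing power doubles every 18 months
START_YEAR = 1958
DOUBLING_PERIOD_YEARS = 1.5
DOUBLINGS_TO_HALFWAY = 32   # the "first half" of the chessboard

halfway_year = START_YEAR + DOUBLINGS_TO_HALFWAY * DOUBLING_PERIOD_YEARS
print(halfway_year)  # 2006.0 - the year Brynjolfsson and McAfee place us on the second half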

In his 2005 book, The Singularity Is Near: When Humans Transcend Biology, author and inventor Ray Kurzweil predicted that exponential advances in technology lead to what he calls the Law of Accelerating Returns.  As a result, around 2045 we will reach the Singularity, at which time “machine intelligence will be infinitely more powerful than all human intelligence combined.” ...MORE