Saturday, August 15, 2015

Artificial Intelligence Is Already Weirdly Inhuman

From Nautil.us:
Nineteen stories up in a Brooklyn office tower, the view from Manuela Veloso’s office—azure skies, New York Harbor, the Statue of Liberty—is exhilarating. But right now we only have eyes for the nondescript windows below us in the tower across the street.

In their panes, we can see chairs, desks, lamps, and papers. They don’t look quite right, though, because they aren’t really there. The genuine objects are in a building on our side of the street—likely the one where we’re standing. A bright afternoon sun has lit them up, briefly turning the facing windows into mirrors. We see office bric-a-brac that looks ghostly and luminous, floating free of gravity.

Veloso, a professor of computer science and robotics at Carnegie Mellon University, and I have been talking about what machines perceive and how they “think”—a subject not nearly as straightforward as I had expected. “How would a robot figure that out?” she says about the illusion in the windows. “That is the kind of thing that is hard for them.”

Artificial intelligence has been conquering hard problems at a relentless pace lately. In the past few years, an especially effective kind of artificial intelligence known as a neural network has equaled or even surpassed human beings at tasks like discovering new drugs, finding the best candidates for a job, and even driving a car. Neural nets, whose architecture is loosely modeled on that of the human brain, can now—usually—tell good writing from bad, and—usually—tell you with great precision what objects are in a photograph. Such nets are used more and more with each passing month in ubiquitous jobs like Google searches, Amazon recommendations, Facebook news feeds, and spam filtering—and in critical missions like military security, finance, scientific research, and those cars that drive themselves better than a person could.
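For readers who want to see what "telling you what objects are in a photograph" looks like in practice, here is a minimal sketch using a pretrained PyTorch image classifier. The model choice (ResNet-18), the preprocessing constants, and the file name "photo.jpg" are illustrative assumptions, not details from the article.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Illustrative only: a pretrained ImageNet classifier stands in for the
# kinds of nets the article describes. "photo.jpg" is a placeholder path.
model = models.resnet18(weights="DEFAULT")
model.eval()

# Standard ImageNet preprocessing for torchvision models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # shape 1 x 3 x 224 x 224
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)

confidence, class_index = probs.max(dim=1)
print(f"predicted ImageNet class {class_index.item()} "
      f"with confidence {confidence.item():.2f}")
```

The point of the sketch is only that the network produces a label and a confidence score; it says nothing about why it chose them, which is exactly the opacity the article goes on to describe.
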
Neural nets sometimes make mistakes, which people can understand. (Yes, those desks look quite real; it’s hard for me, too, to see they are a reflection.) But some hard problems make neural nets respond in ways that aren’t understandable. Neural nets execute algorithms—a set of instructions for completing a task. Algorithms, of course, are written by human beings. Yet neural nets sometimes come out with answers that are downright weird: not right, but also not wrong in a way that people can grasp. Instead, the answers sound like something an extraterrestrial might come up with.

These oddball results are rare. But they aren’t just random glitches. Researchers have recently devised reliable ways to make neural nets produce such eerily inhuman judgments. That suggests humanity shouldn’t assume our machines think as we do. Neural nets sometimes think differently. And we don’t really know how or why.
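One widely used recipe from the research literature for producing such judgments on demand (the excerpt does not name it) is the fast gradient sign method: nudge every pixel a tiny step in whichever direction most increases the classifier's error, so the image looks unchanged to us but reads very differently to the net. A minimal sketch, assuming `model` is a pretrained PyTorch classifier like the one in the snippet above and `true_label` is a tensor holding the correct class index:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` (1 x 3 x H x W tensor) perturbed so the
    classifier is more likely to get it wrong, while the change stays
    nearly invisible to a human eye."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.detach()

# Hypothetical usage: adversarial = fgsm_perturb(model, image, torch.tensor([class_index]))
```

Perturbations of this kind are what let researchers reliably push a network toward answers no person would give, without the picture looking any different to a human observer.
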

That can be a troubling thought, even if you aren’t yet depending on neural nets to run your home and drive you around. After all, the more we rely on artificial intelligence, the more we need it to be predictable, especially in failure. Not knowing how or why a machine did something strange leaves us unable to make sure it doesn’t happen again.

But the occasional unexpected weirdness of machine “thought” might also be a teaching moment for humanity. Until we make contact with extraterrestrial intelligence, neural nets are probably the ablest non-human thinkers we know.

To the extent that neural nets’ perceptions and reasoning differ from ours, they might show us how intelligence works outside the constraints of our species’ limitations. Galileo showed that Earth wasn’t unique in the universe, and Darwin showed that our species isn’t unique among creatures. Joseph Modayil, an artificial intelligence researcher at the University of Alberta, suggests that perhaps computers will do something similar for the concept of intellect. “Artificial systems show us intelligence spans a vast space of possibilities,” he says.

First, though, we need to make sure our self-driving cars don’t mistake school buses for rugby shirts, and don’t label human beings in photos as gorillas or seals, as one of Google’s neural nets recently did. In the past couple of years, a number of computer scientists have become fascinated with the problem and with possible fixes. But they haven’t found one yet....MORE
 Here's the June 17 Google Research blog post that woke people up to what this application of AI was up to:


What Can We Learn from AI Dreams?