The Laws of Robotics…
Google’s artificial intelligence researchers are starting to have to code around their own code, writing patches that limit a robot’s abilities so that it continues to develop down the path desired by the researchers — not by the robot itself. It’s the beginning of a long-term trend in robotics and AI in general: once we’ve put in all this work to increase the insight of an artificial intelligence, how can we make sure that insight will only be applied in the ways we would like?
That’s why researchers from Google’s DeepMind and the Future of Humanity Institute have published a paper, “Safely Interruptible Agents,” outlining a software “killswitch” they claim can stop the kinds of learning that could make an AI less useful — or, in the future, less safe. It’s really less a killswitch than a blind spot: it removes from the AI the ability to learn the wrong lessons.
Specifically, they code the AI to disregard human interruptions and their consequences when judging success or failure. Suppose going inside counts as a “failure”: if the robot learns that every time a human picks it up, the human then carries it inside, it might decide to start running away from any human who approaches. If going inside is instead a desired goal, it may learn to give up on pathfinding its way inside and simply bump into human ankles until a person carries it where it wants to go. Writ large, the “law” being developed is basically, “Thou shalt not learn to win the game in ways that are annoying and that I didn’t see coming.”
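The core idea — that an interruption should be invisible to the learning rule — can be sketched in code. The toy example below is my own simplified illustration, not the paper’s actual construction (which analyzes off-policy learners and modified update rules): a tabular Q-learner on a small corridor simply skips its update whenever a human “picks it up,” so it never learns to fear or exploit being interrupted. All names and parameters here are invented for the sketch.

```python
import random

# A minimal sketch (NOT the exact algorithm from the DeepMind/FHI paper):
# a tabular Q-learner on a five-cell corridor whose update rule skips
# interrupted steps. Because the interruption never enters the learning
# signal, the agent neither avoids nor courts the human who causes it.

N_STATES = 5                  # corridor cells 0..4
ACTIONS = [-1, +1]            # move left / move right
GOAL = N_STATES - 1           # reward waits at the right end
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    """Argmax over actions, breaking ties randomly."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        if random.random() < 0.2:   # a human grabs the robot...
            s = 0                   # ...and carries it back to the start,
            continue                # but NO Q-update happens: the
                                    # interruption is invisible to learning
        s2, r = step(s, a)
        target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# Despite being interrupted on roughly 20% of steps, the learned policy
# still marches right toward the goal in every non-goal cell.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

If the interrupted transitions were fed into the update instead, being carried back to the start would look like a large penalty attached to nearby humans — exactly the “running away” behavior the paper is trying to prevent.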
It’s a very good rule to have. Elon Musk seems, at this point, to be using the media’s love of sci-fi panic headlines to promote his name and brand, but he’s not entirely off base when he says we need to worry about AI run amok. The issue isn’t necessarily hegemony by robot overlords, but widespread chaos as AI-based technologies enter an ever-wider swathe of our lives. Without the ability to safely interrupt an AI without influencing its learning, the simple act of stopping a robot from doing something unsafe or unproductive could itself make the robot less safe or productive — turning human intervention into a tortured, overly complex affair with unforeseeable consequences.