Problem of Building an Honest Robot a Challenge for Google Researchers


06/24/2016



Researchers are trying to create robots and develop designs for robot minds that won’t lead to undesirable consequences for the people they serve as was reveled in a technical paper published recently.

Such research is being carried out at Alphabet Inc. unit Google, along with collaborators at Stanford University, the University of California at Berkeley, and OpenAI -- an artificial intelligence development company backed by Elon Musk.
 
Artificial intelligence, software that can learn about the world and act within it, is gaining popularity and that is driving the research. Devising trading strategies for the stock market, interpreting speech spoken into phones and letting cars drive themselves are some of the application of such AI systems.

AI is now being planned to make smart robots that can take actions for themselves after the present stage of development of AI software-based services like Apple Inc.’s Siri and the Google Assistant.

However people need to make sure the goals of the robots are aligned with those of their human owners before giving smart machines the ability to make decisions.

“While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative. We believe it’s essential to ground concerns in real machine learning research and to start developing practical approaches for engineering AI systems that operate safely and reliably,” Google researcher Chris Olah wrote in a blog post accompanying the paper.

The report lists some techniques for building software that the smart machines can’t subvert and describes some of the problems robot designers may face in the future. But the open-ended nature of intelligence is the challenge. Like questions in the financial system; how do you design rules to let entities achieve their goals in a system you regulate, without being able to subvert your rules, or be unnecessarily constricted by them – is the puzzle that needs to be solved.

For example, the researchers wonder, how to sure that your rewards don’t give it an incentive to cheat for a cleaning robot. Such robots with artificial intelligence could respond by sweeping dirt under the rug so it’s out of sight, or it might learn to turn off its cameras, preventing it from seeing any mess which it would do to get a reward and not do the work expected of them. 

The researchers are extrapolating to potential future uses where stakes are higher while cheating with housecleaning may not seem to be a critical problem. These are problems that the researchers can themselves vaguely understand and they are trying to solve problems before they manifest in real-world systems. Better to be slightly prepared than not prepared at all - is the mindset roughly.

“With the realistic possibility of machine learning-based systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents seem like a very concrete threat, and are critical to prevent both intrinsically and because such accidents could cause a justified loss of trust in automated systems,” the researchers write in the paper. 

Pairing a robot with human buddy and limiting how much control the AI system has over its environment, so as to contain the damage are some of the solutions that the researches propose include. Programming “trip wires” into the AI machine to give humans a warning if it suddenly steps out of its intended routine is among the other ideas.
 
(Source:www.bloomberg.com)