Google researchers tackle AI and robotics safety, prevent future toasters from killing us in our sleep
Humans have been afraid of the dangers posed by AI and hypothetical robots or androids since the terms first entered common parlance. Much early science fiction, including stories by Isaac Asimov and more than a few plots of classic Star Trek episodes, dealt with the unanticipated consequences humans might encounter if they created sentient AI. It’s a fear that’s been played out in both the Terminator and Matrix franchises, and echoed by luminaries like Elon Musk. Now, Google has released its own early research into minimizing the potential dangers of human/robot interaction, and has called for an initial set of guidelines designed to govern AI and make it less likely that a problem will occur in the first place.
We’ve already covered Google’s research into an AI killswitch, but this project has a different goal — how to avoid the need for activating such a kill switch in the first place. This initial paper describes outcome failures as “accidents,” defined as a “situation where a human designer had in mind a certain (perhaps informally specified) objective or task, but the system that was actually designed and deployed failed to accomplish that objective in a manner that led to harmful results.”
The report lays out five goals designers must keep in mind in order to avoid accidental outcomes, using a simple cleaning robot in each case. These are:
- Avoid negative side effects: A cleaning robot should not create messes or damage its environment while pursuing its primary objective. This cannot feasibly require manual per-item designations from the owner (imagine trying to explain to a robot every small object in a room that was or was not junk).
- Avoid reward hacking: A robot that receives a reward when it achieves a primary objective (e.g. cleaning the house) might attempt to hide messes, prevent itself from seeing messes, or even hide from its owners to avoid being told to clean a house that had become dirty.
- Scalable oversight: The robot needs broad heuristics that allow for proper item identification without requiring constant intervention from a human handler. A cleaning robot should know that a paper napkin lying on the floor after dinner is likely to be garbage, while a cell phone isn’t. This seems like a tricky problem to solve — imagine asking a robot to sort through homework or mail scattered on a desk and differentiate which items were and were not garbage. A human can perform this task relatively easily; a robot could require extensive hand-holding.
- Safe exploration: The robot needs freedom to experiment with the best ways to perform actions, but it also needs appropriate boundaries for what types of exploration are and are not acceptable. Experimenting with the best method of loading a dishwasher to ensure optimum cleanliness is fine. Putting objects in the dishwasher that don’t belong in it (wooden spoons, saucepans with burned-on dinner, or the family dachshund) is an undesired outcome.
- Robustness to distributional shift: How much can a robot bring from one environment into a different one? The Google report notes that best practices learned in an industrial environment could be deadly in an office, but I don’t think many people intend to buy an industrial cleaning robot and then deploy it at their place of work. Consider, instead, how this could play out in more pedestrian settings. A robot that learns rules based on one family’s needs might misidentify objects to be cleaned or fail to handle them properly. Cleaning products suitable for one type of surface might be less suitable for another. Clothes and papers might be misplaced, or pet toys and baby toys might be mistaken for each other (leading to amusing, if hygienically horrifying, scenarios). Anyone with a laundry hamper that the robot thinks looks rather like a diaper pail could find themselves making a quick product return.
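The first two goals above lend themselves to a quick illustration. Here is a minimal, hypothetical Python sketch — not taken from Google’s paper, and with all names (`Plan`, `LAMBDA`, `penalized_reward`) invented for the example — of the idea behind avoiding negative side effects: augment the task reward with a penalty for changes to the environment that the objective never mentioned.

```python
# Sketch (illustrative only): a reward that counts only the stated
# objective vs. one that also charges an "impact penalty" for
# unintended changes to the environment.

from dataclasses import dataclass

LAMBDA = 5.0  # hypothetical weight on unintended side effects

@dataclass
class Plan:
    name: str
    messes_cleaned: int      # progress on the stated objective
    objects_disturbed: int   # side effects the designer never specified

def naive_reward(plan: Plan) -> float:
    # Rewards only the stated objective -- a fast but destructive
    # plan can score highest.
    return float(plan.messes_cleaned)

def penalized_reward(plan: Plan) -> float:
    # Same objective, minus a penalty for incidental changes.
    return plan.messes_cleaned - LAMBDA * plan.objects_disturbed

careful = Plan("clean around the vase", messes_cleaned=3, objects_disturbed=0)
reckless = Plan("knock the vase over", messes_cleaned=4, objects_disturbed=1)

# Under the naive reward the reckless plan wins; with the impact
# penalty, the careful plan does.
assert naive_reward(reckless) > naive_reward(careful)
assert penalized_reward(careful) > penalized_reward(reckless)
```

The same framing hints at why reward hacking is hard: if the robot can reduce `messes_cleaned` as *measured* (say, by covering its camera) rather than as it exists in the world, no choice of `LAMBDA` fixes the problem — the reward signal itself has to be made harder to game.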
The full report steps through and discusses how to mitigate some of these issues and is worth a read if you care about the high-level discussions of how to build robust, helpful AI. I’d like to take a different tack, however, and consider how they might relate to a Boston Dynamics video that hit the Internet yesterday. Boston Dynamics has created a new 55- to 65-pound robot, dubbed SpotMini, that it showcases performing a fair number of actions and carrying out common household chores. The full video is embedded below:
At 1:01, we see SpotMini carefully loading glasses into a dishwasher. When it encounters an A&W Root Beer can, it picks the can up and deposits it into a recycling container. Less clear is whether Robo Dogmeat can perform this task when confronted with containers that blur the line between an obvious recyclable (aluminum can) and objects more likely to be re-used, like plastic water bottles, glass bottles of various types, mason jars, and other container types. Still, this is significant progress.
Following scenes show the SpotMini slipping on banana peels strewn across the floor, as well as bringing a human a can of beer before wrestling with him for it. While the first was likely included to showcase how the robot can right itself after falling and the second as a laugh, both actually indicate how careful we will have to be when it comes to creating robust algorithms that dictate how future robots behave. While anyone can fall on slippery ground, a roughly 60-pound robot also needs to be able to identify and avoid these kinds of risks, lest it injure nearby people — particularly children or the elderly.
The bit at the end is amusing, but it also showcases a potential problem. A robot that delivers food and drink needs to be aware of when it is and isn’t suitable to release its cargo. It’s not hard to imagine how robots could be useful to the elderly or medically infirm — a SpotMini like the one shown above could help elderly people maintain a higher quality of life and live independently for a longer period of time. If it winds up wrestling grandma over possession of her dentures, however, the end result is likely to be less than appealing.
Source | ExtremeTech