Thursday, February 10, 2011

Robots to get their own internet




Robots could soon have an equivalent of the internet and Wikipedia.

European scientists have embarked on a project to let robots share and store what they discover about the world.

Called RoboEarth, it will be a place where robots can upload data when they master a task, and ask for help in carrying out new ones.

Researchers behind it hope it will allow robots to come into service more quickly, armed with a growing library of knowledge about their human masters.


The idea behind RoboEarth is to develop methods that help robots encode, exchange and re-use knowledge, said RoboEarth researcher Dr Markus Waibel from the Swiss Federal Institute of Technology in Zurich.

"Most current robots see the world their own way and there's very little standardisation going on," he said. Researchers typically develop their own way for each machine to build up a corpus of data about the world.

This, said Dr Waibel, made it very difficult for roboticists to share knowledge or for the field to advance rapidly because everyone started off solving the same problems.

By contrast, RoboEarth hopes to start showing how the information that robots discover about the world can be defined so any other robot can find it and use it.

RoboEarth will be a communication system and a database, he said.

In the database will be maps of places that robots work, descriptions of objects they encounter and instructions for how to complete distinct actions.
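A minimal sketch of how such a shared store might be organised (the record structure and method names below are illustrative assumptions, not the actual RoboEarth design):

```python
# Hypothetical sketch of a shared robot-knowledge store: robots upload
# what they learn (maps, object descriptions, action recipes) and any
# other robot can query it later. Not the real RoboEarth schema.

class KnowledgeBase:
    def __init__(self):
        self.records = {}  # key: (kind, name) -> payload

    def upload(self, kind, name, payload):
        """Store what one robot has learned, keyed so others can find it."""
        self.records[(kind, name)] = payload

    def query(self, kind, name):
        """Return shared knowledge, or None if no robot has uploaded it yet."""
        return self.records.get((kind, name))

kb = KnowledgeBase()
# One robot uploads a map of a kitchen and a recipe for opening a door.
kb.upload("map", "kitchen", {"width_m": 4.0, "height_m": 3.0})
kb.upload("action", "open_door", ["approach handle", "grasp", "rotate", "pull"])

# A different robot, entering the kitchen for the first time, reuses both.
print(kb.query("map", "kitchen"))
print(kb.query("action", "open_door"))
```

The point of keying records by kind and name is that knowledge uploaded by one robot is findable by any other, which is exactly the sharing problem the project describes.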

The human equivalent would be Wikipedia, said Dr Waibel.

"Wikipedia is something that humans use to share knowledge, that everyone can edit, contribute knowledge to and access," he said. "Something like that does not exist for robots."

It would be great, he said, if a robot could enter a location that it had never visited before, consult RoboEarth to learn about that place and the objects and tasks in it and then quickly get to work.




While other projects are working on standardising the way robots sense the world and encode the information they find, RoboEarth tries to go further.

"The key is allowing robots to share knowledge," said Dr Waibel. "That's really new."

RoboEarth is likely to become a tool for the growing number of service and domestic robots that many expect to become a feature in homes in coming decades.

Dr Waibel said it would be a place that would teach robots about the objects that fill the human world and their relationships to each other.

For instance, he said, RoboEarth could help a robot understand what is meant when it is asked to set the table and what objects are required for that task to be completed.
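The "set the table" example could look something like this in code (a toy sketch; the task breakdown, object list and names are invented for illustration):

```python
# Toy illustration of task-to-objects knowledge: given a task name,
# a robot looks up which objects the task requires and roughly what
# to do with them. The task definitions here are invented examples.

TASKS = {
    "set_table": {
        "objects": ["plate", "fork", "knife", "glass"],
        "steps": ["fetch objects", "place plate", "place cutlery", "place glass"],
    }
}

def objects_for(task_name):
    """Return the objects a known task requires, or an empty list."""
    task = TASKS.get(task_name)
    return task["objects"] if task else []

print(objects_for("set_table"))  # the robot now knows what to fetch
```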

The EU-funded project has about 35 researchers working on it and hopes to demonstrate how the system might work by the end of its four-year duration.

Early work has resulted in a way to download descriptions of tasks that are then executed by a robot. Improved maps of locations can also be uploaded.

A system such as RoboEarth was going to be essential, said Dr Waibel, if robots were going to become truly useful to humans.


Friday, January 28, 2011

Robots learn from rats' brains



Queensland engineers have translated biological findings into probabilistic algorithms that could direct robots through complicated human environments.

While many of today's machines relied on expensive sensors and systems, the researchers hoped their software would improve domestic robots cheaply.

Roboticist Michael Milford worked with neuroscientists to develop algorithms that mimicked three navigational systems in rats' brains: place cells; head direction cells; and grid cells.

In an article published in PLoS Computational Biology this week, he described simulating grid cells - recently discovered brain cells that helped rats contextually determine their location.

To explain the function of grid cells, Milford described getting out of a lift on an unknown floor and deducing his location from visual cues like vending machines and photocopiers.

"We take it for granted that we find our way to work ... [but] the problem is extremely challenging," said the Queensland University of Technology researcher.

"Robots are able to navigate to a certain point, but they just get confused and lost in an office building," he told iTnews.

The so-called RatSLAM software was installed in a 20kg Pioneer 2DXe robot with a forward-facing camera that was capable of detecting visual cues, their relative bearing and distance.

The robot was placed in a maze similar to those used in experiments with rats, with random goal locations that simulated a rat foraging for randomly scattered pieces of food.

It calibrated itself using visual cues, performing up to 14 iterations per second to determine its location when placed in one of four initial starting positions.
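The visual-cue matching behind this kind of localisation can be sketched very simply: reduce each camera frame to a one-dimensional intensity profile and compare it against stored profiles of known places. The profiles, thresholds and place names below are invented, and this is a drastic simplification of RatSLAM itself:

```python
# Sketch of visual-cue place recognition: a camera frame is reduced to
# a 1-D intensity profile and compared with stored profiles of known
# places. All values here are invented for illustration.

def profile_distance(a, b):
    """Mean absolute difference between two equal-length intensity profiles."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def localise(current, known_places, threshold=10.0):
    """Return the name of the best-matching known place, or None if novel."""
    best_name, best_dist = None, threshold
    for name, stored in known_places.items():
        d = profile_distance(current, stored)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

places = {
    "corner_A": [20, 40, 60, 80],
    "corner_B": [80, 60, 40, 20],
}
print(localise([22, 41, 58, 79], places))  # closest to corner_A
print(localise([0, 0, 0, 0], places))      # too far from anything: None
```

Running such a comparison many times per second, as the Pioneer robot did, lets the machine continually re-estimate where it is as the scene changes.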

Milford explained that environmental changes like lighting, shadows, moving vehicles and people made it difficult for robots to navigate in a human world.

Machines like the Mars Rovers and those competing in the DARPA Challenges tended to use expensive sensors - essentially "throwing a lot of money" at the problem, he said.

But a cheaper solution was needed to direct domestic robots, which were currently still in early stages of development and "very, very, very, dumb".

"The only really successful cheap robot that has occurred so far is the [iRobot Roomba] vacuum cleaner," he said. "They don't have any idea where they are; they just move around randomly."

The grid cell project was the latest in almost seven years of Milford's research into applying biological techniques to machines.

The team had been approached "occasionally" by domestic robot manufacturers, he said, but was currently focussed on research, and not commercialisation.

Monday, October 11, 2010

Google Cars Drive Themselves, in Traffic



Anyone driving the twists of Highway 1 between San Francisco and Los Angeles recently may have glimpsed a Toyota Prius with a curious funnel-like cylinder on the roof. Harder to notice was that the person at the wheel was not actually driving.

The car is a project of Google, which has been working in secret but in plain view on vehicles that can drive themselves, using artificial-intelligence software that can sense anything near the car and mimic the decisions made by a human driver.

With someone behind the wheel to take control if something goes awry and a technician in the passenger seat to monitor the navigation system, seven test cars have driven 1,000 miles without human intervention and more than 140,000 miles with only occasional human control. One even drove itself down Lombard Street in San Francisco, one of the steepest and curviest streets in the nation. The only accident, engineers said, was when one Google car was rear-ended while stopped at a traffic light.

Autonomous cars are years from mass production, but technologists who have long dreamed of them believe that they can transform society as profoundly as the Internet has.

Robot drivers react faster than humans, have 360-degree perception and do not get distracted, sleepy or intoxicated, the engineers argue. They speak in terms of lives saved and injuries avoided — more than 37,000 people died in car accidents in the United States in 2008. The engineers say the technology could double the capacity of roads by allowing cars to drive more safely while closer together. Because the robot cars would eventually be less likely to crash, they could be built lighter, reducing fuel consumption. But of course, to be truly safer, the cars must be far more reliable than, say, today’s personal computers, which crash on occasion and are frequently infected.

The Google research program using artificial intelligence to revolutionize the automobile is proof that the company’s ambitions reach beyond the search engine business. The program is also a departure from the mainstream of innovation in Silicon Valley, which has veered toward social networks and Hollywood-style digital media.

During a half-hour drive beginning on Google’s campus 35 miles south of San Francisco last Wednesday, a Prius equipped with a variety of sensors and following a route programmed into the GPS navigation system nimbly accelerated in the entrance lane and merged into fast-moving traffic on Highway 101, the freeway through Silicon Valley.

It drove at the speed limit, which it knew because the limit for every road is included in its database, and left the freeway several exits later. The device atop the car produced a detailed map of the environment.

The car then drove in city traffic through Mountain View, stopping for lights and stop signs, as well as making announcements like “approaching a crosswalk” (to warn the human at the wheel) or “turn ahead” in a pleasant female voice. This same pleasant voice would, engineers said, alert the driver if a master control system detected anything amiss with the various sensors.

The car can be programmed for different driving personalities — from cautious, in which it is more likely to yield to another car, to aggressive, where it is more likely to go first.
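One way to picture such a personality setting is as a single tunable parameter, for instance the minimum time gap the car will accept before pulling into traffic. The parameter names and numbers below are invented, not Google's actual system:

```python
# Illustrative sketch of a "driving personality" as one parameter:
# a more aggressive setting accepts a smaller gap before going first.
# Names and numbers are invented for illustration.

PERSONALITIES = {
    "cautious": {"min_gap_s": 4.0},    # waits for a large gap, tends to yield
    "aggressive": {"min_gap_s": 1.5},  # accepts a tighter gap, tends to go first
}

def should_go(personality, gap_to_next_car_s):
    """Decide whether to pull out, given the time gap to the next car."""
    return gap_to_next_car_s >= PERSONALITIES[personality]["min_gap_s"]

gap = 2.0  # seconds until the next car arrives
print(should_go("cautious", gap))    # False: yields
print(should_go("aggressive", gap))  # True: goes first
```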

Christopher Urmson, a Carnegie Mellon University robotics scientist, was behind the wheel but not using it. To gain control of the car he has to do one of three things: hit a red button near his right hand, touch the brake or turn the steering wheel. He did so twice, once when a bicyclist ran a red light and again when a car in front stopped and began to back into a parking space. But the car seemed likely to have prevented an accident itself.

When he returned to automated “cruise” mode, the car gave a little “whir” meant to evoke going into warp drive on “Star Trek,” and Dr. Urmson was able to rest his hands by his sides or gesticulate when talking to a passenger in the back seat. He said the cars did attract attention, but people seem to think they are just the next generation of the Street View cars that Google uses to take photographs and collect data for its maps.

The project is the brainchild of Sebastian Thrun, the 43-year-old director of the Stanford Artificial Intelligence Laboratory, a Google engineer and the co-inventor of the Street View mapping service.

In 2005, he led a team of Stanford students and faculty members in designing the Stanley robot car, winning the second Grand Challenge of the Defense Advanced Research Projects Agency, a $2 million Pentagon prize for driving autonomously over 132 miles in the desert.

Besides the team of 15 engineers working on the current project, Google hired more than a dozen people, each with a spotless driving record, to sit in the driver’s seat, paying $15 an hour or more. Google is using six Priuses and an Audi TT in the project.

The Google researchers said the company did not yet have a clear plan to create a business from the experiments. Dr. Thrun is known as a passionate promoter of the potential to use robotic vehicles to make highways safer and lower the nation’s energy costs. It is a commitment shared by Larry Page, Google’s co-founder, according to several people familiar with the project.

The self-driving car initiative is an example of Google’s willingness to gamble on technology that may not pay off for years, Dr. Thrun said. Even the most optimistic predictions put the deployment of the technology more than eight years away.

One way Google might be able to profit is to provide information and navigation services for makers of autonomous vehicles. Or, it might sell or give away the navigation technology itself, much as it offers its Android smart phone system to cellphone companies.

But the advent of autonomous vehicles poses thorny legal issues, the Google researchers acknowledged. Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would?

And in the event of an accident, who would be liable — the person behind the wheel or the maker of the software?

“The technology is ahead of the law in many areas,” said Bernard Lu, senior staff counsel for the California Department of Motor Vehicles. “If you look at the vehicle code, there are dozens of laws pertaining to the driver of a vehicle, and they all presume to have a human being operating the vehicle.”

The Google researchers said they had carefully examined California’s motor vehicle regulations and determined that because a human driver can override any error, the experimental cars are legal. Mr. Lu agreed.

Scientists and engineers have been designing autonomous vehicles since the mid-1960s, but crucial innovation happened in 2004 when the Pentagon’s research arm began its Grand Challenge.

The first contest ended in failure, but in 2005, Dr. Thrun’s Stanford team built the car that won a race with a rival vehicle built by a team from Carnegie Mellon University. Less than two years later, another event proved that autonomous vehicles could drive safely in urban settings.

Advances have been so encouraging that Dr. Thrun sounds like an evangelist when he speaks of robot cars. There is their potential to reduce fuel consumption by eliminating heavy-footed stop-and-go drivers and, given the reduced possibility of accidents, to ultimately build more lightweight vehicles.

There is even the farther-off prospect of cars that do not need anyone behind the wheel. That would allow the cars to be summoned electronically, so that people could share them. Fewer cars would then be needed, reducing the need for parking spaces, which consume valuable land.

And, of course, the cars could save humans from themselves. “Can we text twice as much while driving, without the guilt?” Dr. Thrun said in a recent talk. “Yes, we can, if only cars will drive themselves.”