
Friday, January 28, 2011

Robots learn from rats' brains



Queensland engineers have translated biological findings to probabilistic algorithms that could direct robots through complicated human environments.

While many of today's machines relied on expensive sensors and systems, the researchers hoped their software would improve domestic robots cheaply.

Roboticist Michael Milford worked with neuroscientists to develop algorithms that mimicked three navigational systems in rats' brains: place cells; head direction cells; and grid cells.

In an article published in PLoS Computational Biology this week, he described simulating grid cells - recently discovered brain cells that helped rats contextually determine their location.

To explain the function of grid cells, Milford described getting out of a lift at an unknown floor, and deducing his location based on visual cues like vending machines and photocopiers.

"We take it for granted that we find our way to work ... [but] the problem is extremely challenging," said the Queensland University of Technology researcher.

"Robots are able to navigate to a certain point, but they just get confused and lost in an office building," he told iTnews.

The so-called RatSLAM software was installed in a 20kg Pioneer 2DXe robot with a forward facing camera that was capable of detecting visual cues, their relative bearing and distance.

The robot was placed in a maze similar to those used in experiments with rats, with random goal locations that simulated a rat's collection of randomly thrown pieces of food.

It calibrated itself using visual cues, performing up to 14 iterations per second to determine its location when placed in one of four initial starting positions.
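
As a rough illustration of this kind of visual self-calibration (a sketch only, not Milford's RatSLAM code; every name and number below is invented), a robot can compare its current camera view against stored view templates and let the best match pull its position estimate toward the right place:

    # Minimal sketch of visual-cue localisation in the spirit of RatSLAM.
    # Hypothetical structures only; the real system models place, head
    # direction and grid cells rather than a plain template list.

    class ViewTemplate:
        def __init__(self, signature, x, y):
            self.signature = signature   # compact descriptor of a camera view
            self.x, self.y = x, y        # where that view was learned

    def similarity(sig_a, sig_b):
        # Crude match score: inverse of summed absolute differences.
        return 1.0 / (1.0 + sum(abs(a - b) for a, b in zip(sig_a, sig_b)))

    def update_estimate(estimate, current_view, templates, gain=0.3):
        # Pull the (x, y) estimate toward the best-matching stored view.
        best = max(templates, key=lambda t: similarity(t.signature, current_view))
        ex, ey = estimate
        return (ex + gain * (best.x - ex), ey + gain * (best.y - ey))

    # Run the correction step repeatedly (the article cites up to 14 per second).
    templates = [ViewTemplate([0.2, 0.9, 0.1], 1.0, 2.0),
                 ViewTemplate([0.8, 0.1, 0.5], 4.0, 0.5)]
    estimate = (0.0, 0.0)
    for view in ([0.21, 0.88, 0.12], [0.20, 0.90, 0.11]):
        estimate = update_estimate(estimate, view, templates)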

Milford explained that environmental changes like lighting, shadows, moving vehicles and people made it difficult for robots to navigate in a human world.

Machines like the Mars Rovers and those competing in the DARPA Challenges tended to use expensive sensors - essentially "throwing a lot of money" at the problem, he said.

But a cheaper solution was needed to direct domestic robots, which were currently still in early stages of development and "very, very, very dumb".

"The only really successful cheap robot that has occurred so far is the [iRobot Roomba] vacuum cleaner," he said. "They don't have any idea where they are; they just move around randomly."

The grid cell project was the latest in almost seven years of Milford's research into applying biological techniques to machines.

The team had been approached "occasionally" by domestic robot manufacturers, he said, but was currently focussed on research, and not commercialisation.

Monday, October 11, 2010

Google Cars Drive Themselves, in Traffic



Anyone driving the twists of Highway 1 between San Francisco and Los Angeles recently may have glimpsed a Toyota Prius with a curious funnel-like cylinder on the roof. Harder to notice was that the person at the wheel was not actually driving.

The car is a project of Google, which has been working in secret but in plain view on vehicles that can drive themselves, using artificial-intelligence software that can sense anything near the car and mimic the decisions made by a human driver.

With someone behind the wheel to take control if something goes awry and a technician in the passenger seat to monitor the navigation system, seven test cars have driven 1,000 miles without human intervention and more than 140,000 miles with only occasional human control. One even drove itself down Lombard Street in San Francisco, one of the steepest and curviest streets in the nation. The only accident, engineers said, was when one Google car was rear-ended while stopped at a traffic light.

Autonomous cars are years from mass production, but technologists who have long dreamed of them believe that they can transform society as profoundly as the Internet has.

Robot drivers react faster than humans, have 360-degree perception and do not get distracted, sleepy or intoxicated, the engineers argue. They speak in terms of lives saved and injuries avoided — more than 37,000 people died in car accidents in the United States in 2008. The engineers say the technology could double the capacity of roads by allowing cars to drive more safely while closer together. Because the robot cars would eventually be less likely to crash, they could be built lighter, reducing fuel consumption. But of course, to be truly safer, the cars must be far more reliable than, say, today’s personal computers, which crash on occasion and are frequently infected.

The Google research program using artificial intelligence to revolutionize the automobile is proof that the company’s ambitions reach beyond the search engine business. The program is also a departure from the mainstream of innovation in Silicon Valley, which has veered toward social networks and Hollywood-style digital media.

During a half-hour drive beginning on Google’s campus 35 miles south of San Francisco last Wednesday, a Prius equipped with a variety of sensors and following a route programmed into the GPS navigation system nimbly accelerated in the entrance lane and merged into fast-moving traffic on Highway 101, the freeway through Silicon Valley.

It drove at the speed limit, which it knew because the limit for every road is included in its database, and left the freeway several exits later. The device atop the car produced a detailed map of the environment.

The car then drove in city traffic through Mountain View, stopping for lights and stop signs, as well as making announcements like “approaching a crosswalk” (to warn the human at the wheel) or “turn ahead” in a pleasant female voice. This same pleasant voice would, engineers said, alert the driver if a master control system detected anything amiss with the various sensors.

The car can be programmed for different driving personalities — from cautious, in which it is more likely to yield to another car, to aggressive, where it is more likely to go first.

Christopher Urmson, a Carnegie Mellon University robotics scientist, was behind the wheel but not using it. To gain control of the car he has to do one of three things: hit a red button near his right hand, touch the brake or turn the steering wheel. He did so twice, once when a bicyclist ran a red light and again when a car in front stopped and began to back into a parking space. But the car seemed likely to have prevented an accident itself.
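
A hedged sketch of how that hand-back logic might be expressed, with the three triggers taken from the article and everything else (names, thresholds) invented, is:

    # Illustrative only: return control to the human if any override input fires.

    def should_disengage(red_button_pressed, brake_pressed, steering_torque_nm,
                         steering_threshold_nm=2.0):
        # True if the safety driver has asked for manual control.
        return (red_button_pressed
                or brake_pressed
                or abs(steering_torque_nm) > steering_threshold_nm)

    def control_step(autonomous_mode, inputs):
        if autonomous_mode and should_disengage(**inputs):
            return "manual"    # hand the car back to the person behind the wheel
        return "autonomous" if autonomous_mode else "manual"

    mode = control_step(True, {"red_button_pressed": False,
                               "brake_pressed": True,
                               "steering_torque_nm": 0.0})   # -> "manual"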

When he returned to automated “cruise” mode, the car gave a little “whir” meant to evoke going into warp drive on “Star Trek,” and Dr. Urmson was able to rest his hands by his sides or gesticulate when talking to a passenger in the back seat. He said the cars did attract attention, but people seem to think they are just the next generation of the Street View cars that Google uses to take photographs and collect data for its maps.

The project is the brainchild of Sebastian Thrun, the 43-year-old director of the Stanford Artificial Intelligence Laboratory, a Google engineer and the co-inventor of the Street View mapping service.

In 2005, he led a team of Stanford students and faculty members in designing the Stanley robot car, winning the second Grand Challenge of the Defense Advanced Research Projects Agency, a $2 million Pentagon prize for driving autonomously over 132 miles in the desert.

Besides the team of 15 engineers working on the current project, Google hired more than a dozen people, each with a spotless driving record, to sit in the driver’s seat, paying $15 an hour or more. Google is using six Priuses and an Audi TT in the project.

The Google researchers said the company did not yet have a clear plan to create a business from the experiments. Dr. Thrun is known as a passionate promoter of the potential to use robotic vehicles to make highways safer and lower the nation’s energy costs. It is a commitment shared by Larry Page, Google’s co-founder, according to several people familiar with the project.

The self-driving car initiative is an example of Google’s willingness to gamble on technology that may not pay off for years, Dr. Thrun said. Even the most optimistic predictions put the deployment of the technology more than eight years away.

One way Google might be able to profit is to provide information and navigation services for makers of autonomous vehicles. Or, it might sell or give away the navigation technology itself, much as it offers its Android smart phone system to cellphone companies.

But the advent of autonomous vehicles poses thorny legal issues, the Google researchers acknowledged. Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would?

And in the event of an accident, who would be liable — the person behind the wheel or the maker of the software?

“The technology is ahead of the law in many areas,” said Bernard Lu, senior staff counsel for the California Department of Motor Vehicles. “If you look at the vehicle code, there are dozens of laws pertaining to the driver of a vehicle, and they all presume to have a human being operating the vehicle.”

The Google researchers said they had carefully examined California’s motor vehicle regulations and determined that because a human driver can override any error, the experimental cars are legal. Mr. Lu agreed.

Scientists and engineers have been designing autonomous vehicles since the mid-1960s, but crucial innovation happened in 2004 when the Pentagon’s research arm began its Grand Challenge.

The first contest ended in failure, but in 2005, Dr. Thrun’s Stanford team built the car that won a race with a rival vehicle built by a team from Carnegie Mellon University. Less than two years later, another event proved that autonomous vehicles could drive safely in urban settings.

Advances have been so encouraging that Dr. Thrun sounds like an evangelist when he speaks of robot cars. There is their potential to reduce fuel consumption by eliminating heavy-footed stop-and-go drivers and, given the reduced possibility of accidents, to ultimately build more lightweight vehicles.

There is even the farther-off prospect of cars that do not need anyone behind the wheel. That would allow the cars to be summoned electronically, so that people could share them. Fewer cars would then be needed, reducing the need for parking spaces, which consume valuable land.

And, of course, the cars could save humans from themselves. “Can we text twice as much while driving, without the guilt?” Dr. Thrun said in a recent talk. “Yes, we can, if only cars will drive themselves.”

Tuesday, September 28, 2010

EPFL develops Linux-based swarming micro air vehicles



The good people at Ecole Polytechnique Federale de Lausanne (or EPFL) in Switzerland have been very busy lately, as this video demonstrates.

Not only have they put together a scalable system that will let any flying robot perch in a tree or similar structure, but now they've gone and developed a platform for swarming air vehicles (with Linux, no less).

Said to be the largest network of its kind, the ten SMAVNET swarm members control their own altitude, airspeed, and turn rate based on input from the onboard gyroscope and pressure sensors. The goal is to develop low-cost devices that can be deployed in disaster areas to create ad hoc communications networks, although we can't help but think this would make the best Christmas present ever.
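
As a very rough sketch of the kind of inner loop that description implies (all gains, field names and formulas below are assumptions for illustration, not the actual SMAVNET autopilot):

    # Toy autopilot step: hold commanded altitude, airspeed and turn rate using
    # only gyroscope and pressure-sensor readings, as the article describes.

    def autopilot_step(target, sensors, k_alt=0.05, k_speed=0.1, k_turn=0.5):
        # Static pressure gives altitude, dynamic pressure gives airspeed,
        # the gyro's yaw rate gives the current turn rate.
        altitude  = pressure_to_altitude(sensors["static_pressure_pa"])
        airspeed  = pressure_to_airspeed(sensors["dynamic_pressure_pa"])
        turn_rate = sensors["gyro_yaw_rate_dps"]
        return {
            "elevator": k_alt   * (target["altitude_m"]    - altitude),
            "throttle": k_speed * (target["airspeed_mps"]  - airspeed),
            "rudder":   k_turn  * (target["turn_rate_dps"] - turn_rate),
        }

    def pressure_to_altitude(p_pa, p0=101325.0):
        # Rough barometric formula, metres above the reference pressure p0.
        return 44330.0 * (1.0 - (p_pa / p0) ** 0.1903)

    def pressure_to_airspeed(q_pa, rho=1.225):
        # Airspeed from dynamic pressure q = 0.5 * rho * v^2.
        return (2.0 * max(q_pa, 0.0) / rho) ** 0.5

    demand = autopilot_step({"altitude_m": 50.0, "airspeed_mps": 12.0, "turn_rate_dps": 0.0},
                            {"static_pressure_pa": 100725.0, "dynamic_pressure_pa": 90.0,
                             "gyro_yaw_rate_dps": 2.0})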

Thursday, September 17, 2009

Welcome to the robot revolution


Much like the then-fledgling PC industry in the late 1970s, the robotics industry is on the cusp of a revolution, contends the head of Microsoft Corp.'s robotics group.

Today's giant, budget-bending robots that are run by specialists in factories and on assembly floors are evolving into smaller, less-expensive and cuter machines that clean our carpets, entertain us and may someday take care of us as we grow old. The move is akin to the shift from the mainframe world of the 1970s to the personal computers that invaded our offices and homes over the past 20 to 25 years.

"The transition is starting," said Tandy Trower, general manager of Microsoft's 3-year-old robotics group. "It's like we're back in 1977 -- four years before the IBM PC came out. We were seeing very primitive but very useful machines that were foreshadowing what was to come. In many ways, they were like toys compared to what we have today. It's the same with robots now."

Trower said many countries are making significant investments in robotics, and advances are beginning to multiply. Robotic aids and companions -- some looking like an updated version of R2-D2 and others more humanoid -- will begin moving into our homes in three to five years as technology advances and prices drop, he predicted.

"Robots are really an evolution of the technology we have now," Trower said. "We're just adding to our PCs, really. We're letting them get up off our desks and move around. They're evolving into something you will engage with and will serve you in your life someway."

Some experts, though, are hesitant to talk of revolutions, especially in an industry that has seen many promises made that have yet to materialize.

James Kuffner, an associate professor at the Robotics Institute at Carnegie Mellon University, warns that any revolution could be lengthy, and that robots likely won't be doing dishes and walking dogs for another 20 years or so.

"People ask me when they'll have a Jetsons-like robot walking around their house," Kuffner said. "I tell them the first gas-powered engine was built in 1885, but it took until 1915 before a large segment of the population could afford a car. When that happened, society was transformed. In the 1950s, the first computers were built, but it wasn't until the early '80s when the personal computer came on the scene. And, of course, it completely transformed society."

Kuffner said he believes the robot revolution countdown should start in 1996 when Honda Motor Co. released the P2, a self-contained, life-size humanoid machine. Going by historical example, a good portion of the population could have a robot in the home by 2026, he said.


"The Roomba vacuum cleaner is often seen as the first successful home robot, but it's pretty limited," Kuffner added. "So, sure, you can say we have robots in our homes. But a humanoid robot like you see in Hollywood movies, designed to perform a large number of tasks without special programming or tuning? In about 20 years."

Neena Buck, an independent robotics analyst based in Cambridge, Mass., agreed that the robotics business will take off, but said that it will be some time before humanoid robots are washing cars or dancing. First, she said, there will be single-task robots for house cleaning and the like, and exoskeletal robots to help people with infirmities.

"A Jetsons robot -- I don't think that's how it will happen," she said. "Maybe people need to change their vision of a robot."

Trower told Computerworld that robotics has been slow to grow in recent years because of the lack of a standard software platform -- the very thing Microsoft Corp. mandated he create.

The Microsoft robotics group, which is tasked with generating profits within three to five years, is now updating its Robotics Studio software, which includes a tool set and a set of programming libraries that sit on top of Windows. The studio also includes a programming language and a simulator, so that developers can first try out programs in a virtual world. The latest version of the studio platform is slated to ship by the end of this year, according to Trower.

"The robotics industry needs portability," said Trower. "There's been no standard. We wanted to make it easy for the industry to bootstrap itself. I truly think software is holding the robotic industry back."

Software was definitely holding back graduate students at the University of Massachusetts, Amherst, in their quest to build a new version of the school's uBot robot.

Bryan Thibodeau and Patrick Deegan are both graduate students who have been building the fifth generation of uBot, dubbed uBot-5, a two-wheeled, two-armed robot that can maintain its balance.

The developers said they expect to save significant time during the development of uBot-6 due to the use of Robotics Studio in their current project. "We can transfer applications we've written before for this to other robots," said Deegan. "This is the fifth generation, and we had to write code from scratch every time. The next time, we won't. It'll save us tons of time -- probably six months minimum. Now, we can start from here and keep going."

During a demonstration of the uBot-5, Thibodeau said that the developers will spend a lot less time simply reinventing the wheel. "Now we can focus on doing more, instead of doing the same thing over again," he added.

Deegan and Thibodeau noted that they hope the uBot will eventually be used to help care for the growing elderly population, helping them stay in their homes longer and more safely.

With two arms that one day could open a door, two wheels to move it about a home, and a rotating torso and touch screen that could enable it to "look" about its environment, Trower called uBot-5 a good example of what's likely to be the next generation of in-home robots.

"The idea of dexterous manipulation makes a difference," said Trower. "It would be able to interact with things in the home environment, load the dishwasher, fold clothes. Once it has two arms, it opens up a huge variety of possibilities."

A touch screen that sits on the uBot-5's shoulders could act, for example, as a sort of portal for an elderly woman living alone. If the woman fell and was unresponsive, the robot could be programmed to recognize the problem and alert emergency response services. Her doctor could access the robot through his computer, see what the robot sees and speak to the woman through the robot. His face could appear on the screen, making it more natural for the two to talk to each other, using the robot as the conduit.

Richard Doherty, research director at The Envisioneering Group, a market research firm in Seaford, N.Y., said progress in the robotics industry could be limited or slowed because people will be afraid of losing their jobs -- such as a home care assistant -- to robots.

"In this country, people are afraid for their jobs. They don't want to see a robotic coffee maker or robots that could change your oil … or take care of the elderly," said Doherty. "It's job inertia. … We need to see robots in a different light. We need people to understand that this machine could help care for their grandmother."


This is exactly the kind of aid and companionship that one artificial intelligence researcher expects to see from robots in the coming years. David Levy, a British artificial intelligence researcher whose book, Love and Sex with Robots, was released last November, said in a previous interview that robotics will make such dramatic advances in the coming years that humans will be marrying robots by the year 2050.

"Robots started out in factories making cars. There was no personal interaction," said Levy, who is also an international chess master who has been developing computer chess games for years. "Then people built mail-cart robots, and then robotic dogs. Now robots are being made to care for the elderly. In the last 20 years, we've been moving toward robots that have relationships with humans, and it will keep growing toward a more emotional relationship, a more loving one and a sexual one."

While iRobot Corp.'s Roomba may be a vacuum cleaner and not a companion, Trower noted that people who own the robots identify with them, often naming them, drawing faces on them and even insisting that broken ones be repaired rather than replaced with a new machine.

"This is part of the evolution," said Trower. "We now see robots coming into people's lives and living with us. It's sneaking in and saying, 'Aren't I cute?'"

Friday, August 28, 2009

Lobsters teach robots magnetic mapping trick


SPINY lobsters have become the unlikely inspiration for a robot with a unique sense of direction. Like the lobster, it uses a map of local variations in the Earth's magnetic field to find its way around - a method that could give domestic robots low-cost navigational capabilities.

In 2003, computer scientist Janne Haverinen read in Nature (vol 421, p 60) about the amazing direction-finding ability of the Caribbean spiny lobster Panulirus argus. A team from the University of North Carolina, Chapel Hill, had moved the critters up to 37 kilometres from where they were caught and deprived them of orientational cues, but found they always set off in the right direction home. They concluded P. argus must navigate with an inbuilt map of local anomalies in the Earth's magnetic field.

"My first inspiration came from birds, ants and bees," says Haverinen. "But the spiny lobster clinched it for me."

The findings set Haverinen, who works in the intelligent systems lab at the University of Oulu, Finland, wondering if he could draw magnetic maps of buildings for domestic and factory robots. It is well known that compasses are sent haywire by the metal in buildings - plumbing, electrical wiring and the steel rods in reinforced concrete, for instance - and cannot find magnetic north. Haverinen's idea was that these distortions of the Earth's magnetic field might create a distinctive magnetic topography.

"So we decided to try to use this 'magnetic landscape' - the array of disturbances - that was upsetting the compass as a map for a robot," says Haverinen.

The team used a magnetometer to scan the magnetic field strength close to the floor in their lab (see picture) and in a 180-metre corridor in a local hospital. They then stored the field variations in the memory of a small wheeled robot and mounted a magnetometer on a rod projecting in front of it to prevent interference from its motors.

The robot was able to work out where it was and to drive along the corridor without a vision system. What's more, the magnetic map stayed true a year after the first mapping was done, Haverinen reports in Robotics and Autonomous Systems (DOI: 10.1016/j.robot.2009.07.018).
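
As a rough sketch of the matching step (assumed for illustration, not Haverinen's code), the corridor's stored field profile can be treated as a one-dimensional signature and the robot's recent magnetometer readings slid along it to find the best-fitting position:

    # Toy 1-D magnetic localisation: find where a short window of recent
    # magnetometer readings best matches the corridor map recorded earlier.
    # Simple sum-of-squared-differences matching; the real system is more careful.

    def locate(recent_readings, corridor_map, spacing_m=0.1):
        n = len(recent_readings)
        best_idx, best_err = None, float("inf")
        for start in range(len(corridor_map) - n + 1):
            err = sum((corridor_map[start + i] - recent_readings[i]) ** 2
                      for i in range(n))
            if err < best_err:
                best_idx, best_err = start, err
        # Estimated distance along the corridor of the window's start.
        return best_idx * spacing_m

    corridor_map = [48.1, 47.9, 52.3, 60.0, 55.2, 49.8, 47.5, 51.0]  # field strength samples
    print(locate([60.1, 55.0, 49.9], corridor_map))  # about 0.3 m along the corridor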

"So there just might be enough stable information for robots to work out where they are in the ambient magnetic field," he says. That would obviate the need for expensive "indoor GPS" systems in which triangulation between fixed radio beacons in a building tells robots their position.

"Reliance on any one guidance method is not a great idea in case it fails," warns Chris Melhuish, director of the Bristol Robotics Laboratory in the UK. "But you could use a system like this, if it's proven to work, to boost your confidence in a robot by using it in conjunction with, say, vision-based navigation."

Monday, August 17, 2009

Robot tourguide in Taiwan


The MSI-produced robot named "Rich" demonstrates giving a tour, walking down a garden trail in the Grand Hills apartment showroom of the Far Glory property company in Linkou, Taipei County, Taiwan.

Sunday, May 17, 2009

Space robot 2.0: Smarter than the average rover



SOMETHING is moving. Two robots sitting motionless in the dust have spotted it. One, a six-wheeled rover, radios the other perched high on a rocky slope. Should they take a photo and beam it back to mission control? Time is short, they have a list of other tasks to complete, and the juice in their batteries is running low. The robots have seconds to decide. What should they do?

Today, mission control is a mere 10 metres away, in a garage here at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California. Engineers can step in at any time. But if the experiment succeeds and the robots spot the disturbance and decide to beam the pictures back to base, they will have moved one step closer to fulfilling NASA’s vision of a future in which teams of smart space probes scour distant worlds, seeking out water or signs of life with little or no help from human controllers.

NASA, along with other space agencies, has already taken the first tentative steps towards this kind of autonomous mission (see “Spacecraft go it alone”). In 1999, for example, NASA’s Deep Space 1 probe used a smart navigation system to find its way to an asteroid – a journey of over 600 million kilometres. Since 2003, an autonomous control system has been orbiting our planet aboard NASA’s Earth Observing-1 satellite. It helps EO-1 to spot volcanic eruptions and serious flooding, so the events can be photographed and the images beamed back to researchers on the ground. And in the next month or so, the latest iteration of smart software will be uploaded onto one of NASA’s Mars rovers, loosening the machine’s human tether still further so it can hunt for unusual rock formations on its own.

The idea is not to do away with human missions altogether. But since it is far cheaper and easier to send robots first, why not make them as productive as possible? Besides, the increasingly long distances they travel from home make controlling a rover with a joystick impractical. Commands from Earth might take 20 minutes to reach Mars, and about an hour to reach the moons of Jupiter.

So what can we realistically expect autonomous craft to do? It is one thing to build a space probe that can navigate by itself, respond quickly to unexpected events or even carry on when a critical component fails. It’s quite another to train a planetary rover to spot a fossilised bone in a rock, let alone distinguish a living cell from a speck of dirt.

The closest thing to a space robot with a brain is NASA’s pair of Mars rovers (see image), and their abilities are fairly limited. Since they landed in January 2004 they have had to cope with more than six critical technical problems, including a faulty memory module and a jammed wheel. That the craft are still trundling across the red planet and returning valuable geological data is down to engineers at mission control fixing the faults remotely. In fact the rovers can only do simple tasks on their own, says Steve Chien, the head of JPL’s artificial intelligence group. They can be programmed to drive from point A to point B, stop, and take a picture. They can spot clouds and whirling mini-tornadoes called dust devils on their own. They can also protect themselves against accidental damage – by keeping away from steep slopes or large rocks. For pretty much everything else, they depend on their human caretakers.

What are we missing?

This is becoming a significant limitation. While NASA’s first Mars rover, Sojourner (see image), travelled just 100 metres during its mission in 1997, Spirit and Opportunity have covered over 24 kilometres so far. As they drive they are programmed to snap images of the landscape around them, but that doesn’t make for very thorough exploration. “We are travelling further and further with each rover mission,” says Tara Estlin, senior computer scientist and one of the team developing autonomous science at JPL. “Who knows what interesting things we are missing?”

NASA wouldn’t want the rovers to record everything they see and transmit it all back to Earth; the craft simply don’t have the power, bandwidth and time. Instead, the team at JPL has spent around a decade developing software that allows the rovers to analyse images as they are recorded and decide for themselves which geological features are worth following up. Key to this is a software package called OASIS – short for on-board autonomous science investigation system.



The idea is that before the rovers set out each day, controllers can give OASIS a list of things to watch out for. This might simply be the largest or palest rock in the rover’s field of view, or it could be an angular rock that might be volcanic. Then whenever a rover takes an image, OASIS uses special algorithms to identify any rocks in the scene and single out those on its shopping list (Space Operations Communicator, vol 5, p39). Not only is OASIS able to tell the rovers what features are of scientific interest, it knows their relative value too: smooth rocks which may have been eroded by water might take priority over rough ones, say. This helps the rovers decide what to do next.
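
A minimal sketch of that shopping-list scoring idea (the weights and feature names are invented, not the actual OASIS code):

    # Score detected rocks against a daily "shopping list" of features so the
    # rover can rank them by scientific interest. Purely illustrative.

    shopping_list = {
        "albedo":     2.0,   # paler rocks score higher
        "size_m":     1.0,   # larger rocks score higher
        "smoothness": 3.0,   # possible water erosion: highest priority
    }

    def score(rock):
        return sum(weight * rock.get(feature, 0.0)
                   for feature, weight in shopping_list.items())

    def rank_targets(detected_rocks):
        return sorted(detected_rocks, key=score, reverse=True)

    rocks = [{"id": "r1", "albedo": 0.6, "size_m": 0.3, "smoothness": 0.9},
             {"id": "r2", "albedo": 0.9, "size_m": 1.2, "smoothness": 0.1}]
    print([r["id"] for r in rank_targets(rocks)])   # -> ['r1', 'r2']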

There are also practical considerations to take into account. As they trundle around the surface, the rovers must keep track of whether they have enough time, battery power and spare memory capacity to proceed. So the JPL team has also created a taskmaster – software that can plan and schedule activities. With science goals tugging at one sleeve and practical limitations at the other, this program steps in to decide how to order activities so that the rover can reach its goals safely, making any necessary scheduling changes along the way. With low-priority rocks close by, say, a rover might decide it is worth snapping six images of them rather than one of a more interesting rock a few metres away, since the latter would use up precious battery juice.
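
That trade-off can be caricatured as a greedy scheduler; the sketch below is an assumption about the flavour of the decision, not the JPL planner itself, and all numbers are invented:

    # Greedily pick the activities with the best value per unit of battery that
    # still fit within the day's battery, time and memory budgets.

    def plan_day(candidates, battery_wh, minutes, memory_mb):
        chosen = []
        for act in sorted(candidates,
                          key=lambda a: a["value"] / max(a["battery_wh"], 1e-6),
                          reverse=True):
            if (act["battery_wh"] <= battery_wh and act["minutes"] <= minutes
                    and act["memory_mb"] <= memory_mb):
                chosen.append(act["name"])
                battery_wh -= act["battery_wh"]
                minutes    -= act["minutes"]
                memory_mb  -= act["memory_mb"]
        return chosen

    candidates = [
        {"name": "six images of nearby low-priority rocks", "value": 3,
         "battery_wh": 2, "minutes": 10, "memory_mb": 60},
        {"name": "drive to distant interesting rock and image it", "value": 5,
         "battery_wh": 9, "minutes": 40, "memory_mb": 10},
    ]
    print(plan_day(candidates, battery_wh=8, minutes=60, memory_mb=100))

In this made-up run the nearby snaps win because the more interesting rock would drain the remaining battery, mirroring the example above.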

Why stop there? Since OASIS allows a rover to identify high-priority targets on its own, the JPL team has decided to take the next step: let the rover drive over to an interesting rock and deploy its sensors to take a closer look. To do this, Estlin and her colleagues won’t be using OASIS, however. Instead, they have taken elements from it and used them to create a new control system called Autonomous Exploration for Gathering Increased Science (AEGIS). This has been tested successfully at JPL and is scheduled for uplink and remote installation on the rover Opportunity sometime in September.

Once AEGIS is in control, Opportunity will be able to deploy its high-resolution camera automatically and beam data back to Earth for analysis – the first time autonomous software has been able to control a craft on the surface of another world. This is just the beginning, says Estlin. For example, researchers at JPL and Wesleyan University in Middletown, Connecticut, have developed a smart detector system that will allow a rover to carry out a basic scientific experiment on its own. In this case, its task will be to identify specific minerals in an alien rock.

The detector consists of two automated spectrometers controlled by “support vector machines” – relatives of artificial neural networks – of a kind already in use aboard EO-1. The new SVM system uses the spectrometers to take measurements and then compares the results with an on-board database containing spectra from thousands of minerals. Last year the researchers published results in the journal Icarus (vol 195, p 169) showing that in almost all cases, even in complex rock mixtures, their SVM could automatically spot the presence of jarosite, a sulphate mineral associated with hydrothermal springs.
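
A rough sketch of that classification step, using scikit-learn's support vector machine as a stand-in and entirely fabricated spectra (the published system compares against a library of thousands of mineral spectra):

    # Train an SVM on reference mineral spectra, then ask whether a new
    # measurement looks jarosite-bearing. Data and wavelengths are made up.

    from sklearn import svm

    reference_spectra = [
        [0.2, 0.7, 0.4, 0.1],   # jarosite-bearing sample
        [0.3, 0.6, 0.5, 0.1],   # jarosite-bearing sample
        [0.8, 0.2, 0.3, 0.6],   # jarosite-free sample
        [0.7, 0.3, 0.2, 0.7],   # jarosite-free sample
    ]
    labels = ["jarosite", "jarosite", "no jarosite", "no jarosite"]

    classifier = svm.SVC(kernel="linear")
    classifier.fit(reference_spectra, labels)

    new_measurement = [[0.25, 0.65, 0.45, 0.12]]
    print(classifier.predict(new_measurement))   # expected: ['jarosite']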

Alien novelties

Though increasingly sophisticated, these autonomous systems are still a long way from the conscious machines of science fiction that can talk, feel and recognise new life forms. Right now, Chien admits, we can’t even really program a robot for “novelty detection” – the equivalent of, say, picking out the characteristic shape of a bone among a pile of rocks – let alone give it the ability to detect living creatures.

In theory, the shape of a complex natural object such as an ice crystal or a living cell could be described in computer code and embedded in a software library. Then the robot would only need a sensor such as a microscope with sufficient magnification to photograph it.

In fact identifying a cell is a huge challenge because its characteristics can be extremely subtle. In 1999, NASA funded an ambitious project that set out to discover whether there are specific signatures such as shape, symmetry, or a set of combined features that could provide a key to identifying and categorising simple living systems (New Scientist, 22 April 2000, p 22). The idea was to create a huge image library containing examples from Earth, and then teach a neural network which characteristics to look for. Unfortunately, the project ended before it could generate any useful results.

Just as a single measurement is unlikely to provide definitive proof of alien life, so most planetary scientists agree that a single robotic explorer, however smart, won’t provide all the answers. Instead, JPL scientists envisage teams of autonomous craft working together, orbiting an alien world and scouring the surface for interesting science, then radioing each other to help decide what features deserve a closer look.

This model is already being put through its paces. Since 2004, networks of ground-based sensors placed around volcanoes, from Erebus in Antarctica to Kilauea and Mauna Loa in Hawaii, have been watching for sudden changes that might signal an eruption. When they detect strong signals, they can summon EO-1, which uses its autonomous software planner to schedule a fly-past. The satellite then screens the target area for clouds, and if skies are clear, it records images, processes them and transmits them to ground control.
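
A toy version of that trigger-and-task chain might look like the following (every function and threshold is a placeholder; the real sensor-web and EO-1 interfaces are not shown):

    # Ground sensors flag a strong signal, the onboard planner schedules a
    # fly-past, and images are only taken if the scene is clear of cloud.

    def schedule_flypast():
        return "next orbit"                   # stand-in for the autonomous planner

    def acquire(target):
        return "raw image of " + target       # stand-in for the camera

    def observe_if_triggered(signal_strength, target, cloud_fraction,
                             signal_threshold=0.8, max_cloud=0.3):
        if signal_strength < signal_threshold:
            return "ignore: weak signal"
        if cloud_fraction > max_cloud:
            return "pass scheduled for " + schedule_flypast() + ", skipped: too cloudy"
        return "downlinked processed " + acquire(target)

    print(observe_if_triggered(0.9, "Kilauea", cloud_fraction=0.1))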



In July, a network of 15 probes was placed into Mount St Helens, a volcano in Washington state. These probes carry sensors that monitor conditions inside the crater and can talk to each other to analyse data in real time, as well as call up EO-1 to take photos. If it detects activity from orbit, the satellite can even ask the probes to focus attention on a particular spot.

Networks of autonomous probes can provide a number of advantages, including helping a mission cover more ground, and ensuring it continues even if one or more probes are damaged or destroyed. This approach also offers increased processing power, since computers on separate probes can work together to crunch data more quickly. And researchers are beginning to believe that teams of autonomous probes could eventually be smart enough to do almost everything a human explorer could, even in the remotest regions of space.

Last year, in a paper published in the journal Planetary and Space Science (vol 56, p 448), a consortium of researchers from the US, Italy and Japan laid out their strategy for searching out life using autonomous craft controlled by fuzzy logic, the mathematical tool developed in the 1960s to give computers a way to handle uncertainty. Their plan calls for the use of three types of craft: surface-based rovers with sensors designed to spot signs of water and potential sources of heat, such as geothermal vents; airships that float low overhead and help pinpoint the best sites for study; and orbiters that image the planet surface, coordinating with mission control as well as beaming data back to Earth.

The consortium argue that fuzzy logic is a better bet than neural networks or other artificial intelligence techniques, since it is well suited to handling incomplete data and contradictory or ambiguous rules. They also suggest that by working together, the three types of probes will have pretty much the same investigative and deductive powers as a human planetary scientist.
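
As a toy illustration of why fuzzy logic suits this setting (the rule and membership functions below are invented for the example, not taken from the paper):

    # Combine partially true observations into a graded "worth investigating"
    # score instead of a hard yes/no decision.

    def looks_wet(reflectance):        # membership in "possible water signature"
        return max(0.0, min(1.0, (reflectance - 0.4) / 0.4))

    def is_warm(temperature_c):        # membership in "possible heat source"
        return max(0.0, min(1.0, (temperature_c + 20.0) / 40.0))

    def investigate_score(reflectance, temperature_c):
        # Rule: IF looks wet AND is warm THEN investigate (AND taken as minimum).
        return min(looks_wet(reflectance), is_warm(temperature_c))

    print(investigate_score(reflectance=0.7, temperature_c=5.0))   # about 0.63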

Experimental simulations of a mission to Mars seem to confirm this view: in two tests the autonomous explorers came to the same conclusions as a human geoscientist. The system could be particularly useful for missions to Titan and Enceladus, the researchers suggest, since autonomy will be a key factor for the success of a mission so far from Earth.

Back at JPL, the day’s test of robot autonomy is almost complete. The two robots are running new software designed to improve coordination between craft. Part of the experiment is to see whether the robots can capture a photo of a moving target – in this case a small remote-controlled truck nicknamed Junior – and relay it back to “mission control” using delay-tolerant networking, a new system for data transfer.

In future deep-space missions, robots will need autonomy for longer stretches since commands from Earth will take an hour or so to reach them. And as planets rotate, there will be periods when no communication is possible. Delay-tolerant networking relies on a “store and forward” method that promises to provide a more reliable link between planetary explorers and mission control. Each node in the network – whether a rover or an orbiter – holds on to a transmission until it is safe to relay it to the next node. Information may take longer to reach its destination this way, but it will get there in the end.
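
A minimal sketch of that store-and-forward idea (the queue handling is invented for illustration; real delay-tolerant networking protocols are far richer):

    # Each node keeps bundles until a link to the next hop is available, then
    # forwards them; nothing is dropped just because contact is lost.

    from collections import deque

    class DTNNode:
        def __init__(self, name):
            self.name = name
            self.store = deque()        # bundles waiting for a contact window

        def receive(self, bundle):
            self.store.append(bundle)   # hold on to the data

        def try_forward(self, next_hop, link_up):
            while link_up and self.store:
                next_hop.receive(self.store.popleft())

    rover, orbiter, earth = DTNNode("rover"), DTNNode("orbiter"), DTNNode("earth")
    rover.receive("image of Junior")
    rover.try_forward(orbiter, link_up=False)   # no contact yet: image stays stored
    rover.try_forward(orbiter, link_up=True)    # contact window: relay to the orbiter
    orbiter.try_forward(earth, link_up=True)    # later, relay on to mission control
    print(len(earth.store))                     # 1 bundle arrives in the end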

And it seems to work: the images from the two robots arrive. They include both wide-angle shots and high-resolution close-ups of Junior. Estlin is pleased.

As we stand in the heat, a salamander scuttles quickly across a rock. I can’t help wondering whether the robots would have picked that out. Just suppose the Mars rover had to choose between a whirling dust devil and a fleeing amphibian? Chien assures me that the software would direct the rover to prioritise, depending on the relative value of the two. I hope it goes for the salamander. And if alien life proves half as shy, I hope the rover can act fast.