Sunday, May 31, 2009
DustCar garbage robot
The DustCar garbage robot takes instructions through a touchscreen mounted on its chest. It moves on wheels, avoiding obstacles with a built-in recognition system whose laser sensor helps it distinguish between trash and obstructions.
Equipped with GPS, the robot accepts chores via SMS alerts and travels at 16 to 24 km/h, so it can be left to wander from door to door collecting almost 30 kg of garbage.
Labels:
garbage
Sunday, May 17, 2009
Space robot 2.0: Smarter than the average rover
SOMETHING is moving. Two robots sitting motionless in the dust have spotted it. One, a six-wheeled rover, radios the other perched high on a rocky slope. Should they take a photo and beam it back to mission control? Time is short, they have a list of other tasks to complete, and the juice in their batteries is running low. The robots have seconds to decide. What should they do?
Today, mission control is a mere 10 metres away, in a garage here at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California. Engineers can step in at any time. But if the experiment succeeds and the robots spot the disturbance and decide to beam the pictures back to base, they will have moved one step closer to fulfilling NASA’s vision of a future in which teams of smart space probes scour distant worlds, seeking out water or signs of life with little or no help from human controllers.
NASA, along with other space agencies, has already taken the first tentative steps towards this kind of autonomous mission (see “Spacecraft go it alone”). In 1999, for example, NASA’s Deep Space 1 probe used a smart navigation system to find its way to an asteroid – a journey of over 600 million kilometres. Since 2003, an autonomous control system has been orbiting our planet aboard NASA’s Earth Observing-1 satellite. It helps EO-1 to spot volcanic eruptions and serious flooding, so the events can be photographed and the images beamed back to researchers on the ground. And in the next month or so, the latest iteration of smart software will be uploaded onto one of NASA’s Mars rovers, loosening the machine’s human tether still further so it can hunt for unusual rock formations on its own.
The idea is not to do away with human missions altogether. But since it is far cheaper and easier to send robots first, why not make them as productive as possible? Besides, the increasingly long distances they travel from home make controlling a rover with a joystick impractical. Commands from Earth might take 20 minutes to reach Mars, and about an hour to reach the moons of Jupiter.
So what can we realistically expect autonomous craft to do? It is one thing to build a space probe that can navigate by itself, respond quickly to unexpected events or even carry on when a critical component fails. It’s quite another to train a planetary rover to spot a fossilised bone in a rock, let alone distinguish a living cell from a speck of dirt.
The closest thing to a space robot with a brain is NASA’s pair of Mars rovers (see image), and their abilities are fairly limited. Since they landed in January 2004 they have had to cope with more than six critical technical problems, including a faulty memory module and a jammed wheel. That the craft are still trundling across the red planet and returning valuable geological data is down to engineers at mission control fixing the faults remotely. In fact the rovers can only do simple tasks on their own, says Steve Chien, the head of JPL’s artificial intelligence group. They can be programmed to drive from point A to point B, stop, and take a picture. They can spot clouds and whirling mini-tornadoes called dust devils on their own. They can also protect themselves against accidental damage – by keeping away from steep slopes or large rocks. For pretty much everything else, they depend on their human caretakers.
What are we missing?
This is becoming a significant limitation. While NASA’s first Mars rover, Sojourner (see image), travelled just 100 metres during its mission in 1997, Spirit and Opportunity have covered over 24 kilometres so far. As they drive they are programmed to snap images of the landscape around them, but that doesn’t make for very thorough exploration. “We are travelling further and further with each rover mission,” says Tara Estlin, senior computer scientist and one of the team developing autonomous science at JPL. “Who knows what interesting things we are missing?”
NASA wouldn’t want the rovers to record everything they see and transmit it all back to Earth; the craft simply don’t have the power, bandwidth and time. Instead, the team at JPL has spent around a decade developing software that allows the rovers to analyse images as they are recorded and decide for themselves which geological features are worth following up. Key to this is a software package called OASIS – short for on-board autonomous science investigation system.
The idea is that before the rovers set out each day, controllers can give OASIS a list of things to watch out for. This might simply be the largest or palest rock in the rover’s field of view, or it could be an angular rock that might be volcanic. Then whenever a rover takes an image, OASIS uses special algorithms to identify any rocks in the scene and single out those on its shopping list (Space Operations Communicator, vol 5, p39). Not only is OASIS able to tell the rovers what features are of scientific interest, it knows their relative value too: smooth rocks which may have been eroded by water might take priority over rough ones, say. This helps the rovers decide what to do next.
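In outline, that prioritisation amounts to scoring each detected rock against the day's shopping list and ranking the results. The sketch below illustrates the idea in Python; the feature names, weights and example rocks are invented for illustration and are not the actual OASIS interface.

```python
# Minimal sketch of shopping-list-style rock prioritisation, loosely modelled
# on the behaviour described for OASIS. Features, weights and rocks are
# hypothetical illustrations, not the real OASIS interface.

from dataclasses import dataclass

@dataclass
class Rock:
    name: str
    size_cm: float      # apparent size in the image
    albedo: float       # 0 = dark, 1 = pale
    angularity: float   # 0 = rounded/smooth, 1 = sharp-edged

# Daily "shopping list": each entry pairs a feature extractor with its science value.
shopping_list = [
    ("largest rock", lambda r: r.size_cm / 100.0, 1.0),
    ("palest rock", lambda r: r.albedo, 2.0),
    ("smooth, possibly water-worn rock", lambda r: 1.0 - r.angularity, 3.0),
]

def score(rock: Rock) -> float:
    """Combine weighted feature scores into a single priority value."""
    return sum(weight * feature(rock) for _, feature, weight in shopping_list)

def prioritise(rocks):
    """Return rocks ordered from most to least scientifically interesting."""
    return sorted(rocks, key=score, reverse=True)

if __name__ == "__main__":
    scene = [
        Rock("A", size_cm=40, albedo=0.2, angularity=0.9),
        Rock("B", size_cm=15, albedo=0.7, angularity=0.1),
        Rock("C", size_cm=80, albedo=0.3, angularity=0.5),
    ]
    for rock in prioritise(scene):
        print(rock.name, round(score(rock), 2))
```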
There are also practical considerations to take into account. As they trundle around the surface, the rovers must keep track of whether they have enough time, battery power and spare memory capacity to proceed. So the JPL team has also created a taskmaster – software that can plan and schedule activities. With science goals tugging at one sleeve and practical limitations at the other, this program steps in to decide how to order activities so that the rover can reach its goals safely, making any necessary scheduling changes along the way. With low-priority rocks close by, say, a rover might decide it is worth snapping six images of them rather than one of a more interesting rock a few metres away, since the latter would use up precious battery juice.
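The trade-off the taskmaster makes can be sketched as a simple greedy planner that keeps picking the highest value-per-cost activity that still fits within the remaining budgets. The activities, costs and heuristic below are illustrative assumptions, not JPL's actual planning software.

```python
# Sketch of a greedy activity scheduler balancing science value against
# battery, time and memory budgets. Activities, costs and the value-per-cost
# heuristic are invented for illustration.

from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    value: float      # science priority
    energy_wh: float  # battery cost
    minutes: float    # time cost
    mb: float         # memory cost

def schedule(activities, energy_wh, minutes, mb):
    """Greedily pick high value-per-cost activities that still fit the budgets."""
    def density(a):
        return a.value / (a.energy_wh + a.minutes + a.mb)

    plan = []
    for a in sorted(activities, key=density, reverse=True):
        if a.energy_wh <= energy_wh and a.minutes <= minutes and a.mb <= mb:
            plan.append(a)
            energy_wh -= a.energy_wh
            minutes -= a.minutes
            mb -= a.mb
    return plan

if __name__ == "__main__":
    options = [
        Activity("drive to distant high-priority rock", value=10, energy_wh=50, minutes=40, mb=20),
        Activity("image nearby low-priority rock 1", value=2, energy_wh=5, minutes=5, mb=8),
        Activity("image nearby low-priority rock 2", value=2, energy_wh=5, minutes=5, mb=8),
    ]
    for a in schedule(options, energy_wh=30, minutes=60, mb=64):
        print("scheduled:", a.name)
```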
Why stop there? Since OASIS allows a rover to identify high-priority targets on its own, the JPL team has decided to take the next step: let the rover drive over to an interesting rock and deploy its sensors to take a closer look. To do this, Estlin and her colleagues won’t be using OASIS, however. Instead, they have taken elements from it and used them to create a new control system called Autonomous Exploration for Gathering Increased Science (AEGIS). This has been tested successfully at JPL and is scheduled for uplink and remote installation on the rover Opportunity sometime in September.
Once AEGIS is in control, Opportunity will be able to deploy its high-resolution camera automatically and beam data back to Earth for analysis – the first time autonomous software has been able to control a craft on the surface of another world. This is just the beginning, says Estlin. For example, researchers at JPL and Wesleyan University in Middletown, Connecticut, have developed a smart detector system that will allow a rover to carry out a basic scientific experiment on its own. In this case, its task will be to identify specific minerals in an alien rock.
The detector consists of two automated spectrometers controlled by “support vector machines” – relatives of artificial neural networks – of a kind already in use aboard EO-1. The new SVM system uses the spectrometers to take measurements and then compares the results with an on-board database containing spectra from thousands of minerals. Last year the researchers published results in the journal Icarus (vol 195, p 169) showing that in almost all cases, even in complex rock mixtures, their SVM could automatically spot the presence of jarosite, a sulphate mineral associated with hydrothermal springs.
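For a flavour of how such a classifier works, here is a minimal sketch using scikit-learn's support vector machine on synthetic spectra. The absorption band, spectra and labels are invented; the real system compares on-board measurements against a library of thousands of mineral spectra.

```python
# Sketch of spectral classification with a support vector machine, in the
# spirit of the jarosite detector described above. The synthetic spectra and
# band position are invented for illustration only.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
wavelengths = np.linspace(1.0, 2.5, 200)  # microns

def synthetic_spectrum(has_jarosite: bool) -> np.ndarray:
    """Flat reflectance plus noise, with an absorption band if jarosite is present."""
    spectrum = 0.6 + 0.02 * rng.standard_normal(wavelengths.size)
    if has_jarosite:
        spectrum -= 0.15 * np.exp(-((wavelengths - 2.27) / 0.03) ** 2)  # invented band
    return spectrum

# Build a labelled training set of spectra.
X = np.array([synthetic_spectrum(i % 2 == 0) for i in range(200)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(200)])

clf = SVC(kernel="rbf").fit(X, y)

# Classify a new measurement on board, with no human in the loop.
unknown = synthetic_spectrum(has_jarosite=True)
print("jarosite detected" if clf.predict(unknown.reshape(1, -1))[0] == 1 else "no jarosite")
```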
Alien novelties
Though increasingly sophisticated, these autonomous systems are still a long way from the conscious machines of science fiction that can talk, feel and recognise new life forms. Right now, Chien admits, we can’t even really program a robot for “novelty detection” – the equivalent of, say, picking out the characteristic shape of a bone among a pile of rocks – let alone give it the ability to detect living creatures.
In theory, the shape of a complex natural object such as an ice crystal or a living cell could be described in computer code and embedded in a software library. Then the robot would only need a sensor such as a microscope with sufficient magnification to photograph it.
In fact identifying a cell is a huge challenge because its characteristics can be extremely subtle. In 1999, NASA funded an ambitious project that set out to discover whether there are specific signatures such as shape, symmetry, or a set of combined features that could provide a key to identifying and categorising simple living systems (New Scientist, 22 April 2000, p 22). The idea was to create a huge image library containing examples from Earth, and then teach a neural network which characteristics to look for. Unfortunately, the project ended before it could generate any useful results.
Just as a single measurement is unlikely to provide definitive proof of alien life, so most planetary scientists agree that a single robotic explorer, however smart, won’t provide all the answers. Instead, JPL scientists envisage teams of autonomous craft working together, orbiting an alien world and scouring the surface for interesting science, then radioing each other to help decide what features deserve a closer look.
This model is already being put through its paces. Since 2004, networks of ground-based sensors placed around volcanoes, from Erebus in Antarctica to Kilauea and Mauna Loa in Hawaii, have been watching for sudden changes that might signal an eruption. When they detect strong signals, they can summon EO-1, which uses its autonomous software planner to schedule a fly-past. The satellite then screens the target area for clouds, and if skies are clear, it records images, processes them and transmits them to ground control.
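The trigger-and-respond loop can be summarised in a few lines. The thresholds and function names below are hypothetical, but the logic (image only when ground sensors report a strong signal and the scene is cloud-free) follows the workflow described above.

```python
# Sketch of the sensor-web workflow: ground sensors raise an alert, the
# satellite schedules a fly-past, and imaging goes ahead only if the scene is
# clear. All thresholds and names are invented for illustration.

def ground_sensor_alert(seismic_reading: float, threshold: float = 5.0) -> bool:
    """A ground node summons the satellite only for strong signals."""
    return seismic_reading >= threshold

def flypast(target: str, cloud_cover: float) -> str:
    """On-board decision: image the target only if skies are clear enough."""
    if cloud_cover > 0.3:  # too cloudy, don't waste the pass
        return f"skipped {target}: {cloud_cover:.0%} cloud"
    return f"imaged {target}, downlinking to ground control"

if __name__ == "__main__":
    if ground_sensor_alert(seismic_reading=7.2):
        print(flypast("Erebus", cloud_cover=0.1))
```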
In July, a network of 15 probes was placed in the crater of Mount St Helens, a volcano in Washington state. These probes carry sensors that monitor conditions inside the crater and can talk to each other to analyse data in real time, as well as call up EO-1 to take photos. If it detects activity from orbit, the satellite can even ask the probes to focus attention on a particular spot.
Networks of autonomous probes can provide a number of advantages, including helping a mission cover more ground, and ensuring it continues even if one or more probes are damaged or destroyed. This approach also offers increased processing power, since computers on separate probes can work together to crunch data more quickly. And researchers are beginning to believe that teams of autonomous probes could eventually be smart enough to do almost everything a human explorer could, even in the remotest regions of space.
Last year, in a paper published in the journal Planetary and Space Science (vol 56, p 448), a consortium of researchers from the US, Italy and Japan laid out their strategy for searching out life using autonomous craft controlled by fuzzy logic, the mathematical tool developed in the 1960s to give computers a way to handle uncertainty. Their plan calls for the use of three types of craft: surface-based rovers with sensors designed to spot signs of water and potential sources of heat, such as geothermal vents; airships that float low overhead and help pinpoint the best sites for study; and orbiters that image the planet surface, coordinating with mission control as well as beaming data back to Earth.
The consortium argue that fuzzy logic is a better bet than neural networks or other artificial intelligence techniques, since it is well suited to handling incomplete data and contradictory or ambiguous rules. They also suggest that by working together, the three types of probes will have pretty much the same investigative and deductive powers as a human planetary scientist.
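To see why, consider a toy fuzzy-logic rule in Python: sensor readings map onto graded memberships rather than hard categories, so even ambiguous or incomplete data yields a usable answer. The membership functions and the "warm and possibly wet" rule are invented for illustration, not the consortium's actual system.

```python
# Toy illustration of fuzzy-logic reasoning about a candidate site. The
# membership functions and the rule are invented for illustration.

def warm(temperature_c: float) -> float:
    """Degree (0..1) to which a site counts as 'warm', ramping up from 0 to 40 C."""
    return min(max(temperature_c / 40.0, 0.0), 1.0)

def possibly_wet(humidity_signal: float) -> float:
    """Degree (0..1) of belief that water is present, from a noisy sensor signal."""
    return min(max(humidity_signal, 0.0), 1.0)

def interest(temperature_c: float, humidity_signal: float) -> float:
    """Fuzzy AND (minimum) of the two memberships: how interesting is this site?"""
    return min(warm(temperature_c), possibly_wet(humidity_signal))

if __name__ == "__main__":
    # Ambiguous readings still give a graded answer rather than a hard yes/no.
    print(round(interest(temperature_c=12.0, humidity_signal=0.7), 2))  # -> 0.3
```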
Experimental simulations of a mission to Mars seem to confirm this view: in two tests the autonomous explorers came to the same conclusions as a human geoscientist. The system could be particularly useful for missions to Titan and Enceladus, the researchers suggest, since autonomy will be a key factor for the success of a mission so far from Earth.
Back at JPL, the day’s test of robot autonomy is almost complete. The two robots are running new software designed to improve coordination between craft. Part of the experiment is to see whether the robots can capture a photo of a moving target – in this case a small remote-controlled truck nicknamed Junior – and relay it back to “mission control” using delay-tolerant networking, a new system for data transfer.
In future deep-space missions, robots will need autonomy for longer stretches since commands from Earth will take an hour or so to reach them. And as planets rotate, there will be periods when no communication is possible. Delay-tolerant networking relies on a “store and forward” method that promises to provide a more reliable link between planetary explorers and mission control. Each node in the network – whether a rover or an orbiter – holds on to a transmission until it is safe to relay it to the next node. Information may take longer to reach its destination this way, but it will get there in the end.
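A minimal store-and-forward sketch captures the idea: each node buffers a bundle until its next-hop link is available. The node names and link model below are invented; the real system is based on the delay-tolerant networking bundle protocol rather than this toy code.

```python
# Minimal store-and-forward sketch in the spirit of delay-tolerant networking.
# Node names, the link model and the API are invented for illustration.

from collections import deque

class Node:
    def __init__(self, name: str):
        self.name = name
        self.buffer = deque()  # bundles held until a link is available

    def receive(self, bundle: str):
        self.buffer.append(bundle)

    def forward(self, next_hop: "Node", link_up: bool):
        """Relay stored bundles only while the link is up; otherwise keep holding them."""
        while link_up and self.buffer:
            bundle = self.buffer.popleft()
            next_hop.receive(bundle)
            print(f"{self.name} -> {next_hop.name}: {bundle}")

if __name__ == "__main__":
    rover, orbiter, earth = Node("rover"), Node("orbiter"), Node("Earth")
    rover.receive("image of Junior")

    rover.forward(orbiter, link_up=False)  # orbiter below the horizon: bundle is stored
    rover.forward(orbiter, link_up=True)   # next pass: bundle moves one hop
    orbiter.forward(earth, link_up=True)   # and finally reaches mission control
```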
And it seems to work: the images from the two robots arrive. They include both wide-angle shots and high-resolution close-ups of Junior. Estlin is pleased.
As we stand in the heat, a salamander scuttles quickly across a rock. I can’t help wondering whether the robots would have picked that out. Just suppose the Mars rover had to choose between a whirling dust devil and a fleeing amphibian? Chien assures me that the software would direct the rover to prioritise, depending on the relative value of the two. I hope it goes for the salamander. And if alien life proves half as shy, I hope the rover can act fast.
Labels:
A.I.,
artificial intelligence,
exploration,
navigation,
space