Showing posts with label exploration.
Saturday, July 9, 2011
Nexus S to serve as brain for 3 robots aboard the ISS
The shuttle Atlantis is set to carry two Nexus S phones into orbit, where they will turn a trio of free-floating satellites aboard the International Space Station into remote-operated robots.
The 135th and last flight of the shuttle program, set for 11:26 a.m. ET, will help advance the cause of robotkind when the Android handsets are attached to the bowling ball-size orbs.
Propelled by small CO2 thrusters, the Synchronized Position Hold, Engage, Reorient, Experimental Satellites (Spheres) were developed at MIT and have been in use on the ISS since 2006.
As seen in the video below, they look like the Star Wars lightsaber training droid but are designed to test spacecraft maneuvers, satellite servicing, and formation flying.
Normally, the Spheres orbs carry out preprogrammed commands from a computer aboard the ISS, but the Nexus Android phones will give them increased computing power, cameras, and links to ground crew who will pilot them.
The ISS Nexus-powered robots aren’t an entirely new concept, of course. Toy maker Hasbro showed off something similar at Google I/O 2011: conceptual male and female Nexus S robotic docks. The toys are able to move around, interact with their environment, and even get dizzy when shaken by mischievous handlers.
Labels:
Android OS,
exploration,
flying,
Google,
hardware,
space
Thursday, February 10, 2011
Robots to get their own internet
European scientists have embarked on a project to let robots share and store what they discover about the world.
Called RoboEarth, it will be a place where robots can upload data when they master a task, and ask for help in carrying out new ones.
Researchers behind it hope it will allow robots to come into service more quickly, armed with a growing library of knowledge about their human masters.
The idea behind RoboEarth is to develop methods that help robots encode, exchange and re-use knowledge, said RoboEarth researcher Dr Markus Waibel from the Swiss Federal Institute of Technology in Zurich.
"Most current robots see the world their own way and there's very little standardisation going on," he said. Most researchers using robots typically develop their own way for that machine to build up a corpus of data about the world.
This, said Dr Waibel, made it very difficult for roboticists to share knowledge or for the field to advance rapidly because everyone started off solving the same problems.
By contrast, RoboEarth hopes to start showing how the information that robots discover about the world can be defined so any other robot can find it and use it.
RoboEarth will be a communication system and a database, he said.
In the database will be maps of places that robots work, descriptions of objects they encounter and instructions for how to complete distinct actions.
The human equivalent would be Wikipedia, said Dr Waibel.
"Wikipedia is something that humans use to share knowledge, that everyone can edit, contribute knowledge to and access," he said. "Something like that does not exist for robots."
It would be great, he said, if a robot could enter a location that it had never visited before, consult RoboEarth to learn about that place and the objects and tasks in it and then quickly get to work.
"The key is allowing robots to share knowledge," said Dr Waibel. "That's really new."
RoboEarth is likely to become a tool for the growing number of service and domestic robots that many expect to become a feature in homes in coming decades.
Dr Waibel said it would be a place that would teach robots about the objects that fill the human world and their relationships to each other.
For instance, he said, RoboEarth could help a robot understand what is meant when it is asked to set the table and what objects are required for that task to be completed.
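To make the idea concrete, here is a minimal sketch (hypothetical names, not the actual RoboEarth interfaces) of how such a shared store might behave: robots upload action recipes alongside maps and object descriptions, and a robot facing an unfamiliar task like setting the table can look up which objects it needs and which steps to follow.

```python
# Minimal sketch only: the class and method names are invented, not the real
# RoboEarth API. Robots upload what they learn; others query it.

from dataclasses import dataclass, field


@dataclass
class ActionRecipe:
    """A reusable description of how to complete a task."""
    task: str
    required_objects: list[str]
    steps: list[str]


@dataclass
class KnowledgeBase:
    """Stands in for the shared database: maps, object models, action recipes."""
    maps: dict[str, dict] = field(default_factory=dict)
    objects: dict[str, dict] = field(default_factory=dict)
    recipes: dict[str, ActionRecipe] = field(default_factory=dict)

    def upload_recipe(self, recipe: ActionRecipe) -> None:
        # A robot that has mastered a task contributes its solution.
        self.recipes[recipe.task] = recipe

    def lookup(self, task: str) -> ActionRecipe | None:
        # A robot facing a new task asks whether any peer has solved it before.
        return self.recipes.get(task)


kb = KnowledgeBase()
kb.upload_recipe(ActionRecipe(
    task="set the table",
    required_objects=["plate", "fork", "knife", "glass"],
    steps=["locate dining table", "fetch each object", "place one setting at each seat"],
))

recipe = kb.lookup("set the table")
if recipe:
    print("Objects needed:", recipe.required_objects)
```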
The EU-funded project has about 35 researchers working on it and hopes to demonstrate how the system might work by the end of its four-year duration.
Early work has resulted in a way to download descriptions of tasks that are then executed by a robot. Improved maps of locations can also be uploaded.
A system such as RoboEarth was going to be essential, said Dr Waibel, if robots were going to become truly useful to humans.
Labels:
A.I.,
artificial intelligence,
exploration,
language,
networking
Friday, January 28, 2011
Robots learn from rats' brains
Queensland engineers have translated biological findings into probabilistic algorithms that could direct robots through complicated human environments.
While many of today's machines relied on expensive sensors and systems, the researchers hoped their software would offer a cheap way to improve domestic robots.
Roboticist Michael Milford worked with neuroscientists to develop algorithms that mimicked three navigational systems in rats' brains: place cells; head direction cells; and grid cells.
In an article published in PLoS Computational Biology this week, he described simulating grid cells - recently discovered brain cells that helped rats contextually determine their location.
To explain the function of grid cells, Milford described getting out of a lift at an unknown floor, and deducing his location based on visual cues like vending machines and photocopiers.
"We take it for granted that we find our way to work ... [but] the problem is extremely challenging," said the Queensland University of Technology researcher.
"Robots are able to navigate to a certain point, but they just get confused and lost in an office building," he told iTnews.
The so-called RatSLAM software was installed in a 20 kg Pioneer 2DXe robot with a forward-facing camera capable of detecting visual cues, their relative bearing and distance.
The robot was placed in a maze similar to those used in experiments with rats, with random goal locations that simulated a rat's collection of randomly thrown pieces of food.
It calibrated itself using visual cues, performing up to 14 iterations per second to determine its location when placed in one of four initial starting positions.
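In spirit, that calibration step amounts to keeping several candidate positions alive and re-scoring them against what the camera reports. The snippet below is a toy illustration of that idea only, not Milford's RatSLAM code; the landmark map, noise model and numbers are all invented.

```python
# Toy localisation sketch (not RatSLAM itself): candidate positions are
# re-weighted by how well predicted landmark distances match the camera's
# visual cues, echoing the lift example above.

import math

LANDMARKS = {"vending machine": (2.0, 5.0), "photocopier": (8.0, 1.0)}  # known map


def likelihood(pose, observations, noise=0.5):
    """How well a candidate (x, y) position explains the observed landmark distances."""
    x, y = pose
    score = 1.0
    for name, observed_dist in observations.items():
        lx, ly = LANDMARKS[name]
        predicted = math.hypot(lx - x, ly - y)
        score *= math.exp(-((observed_dist - predicted) ** 2) / (2 * noise ** 2))
    return score


# Four candidate starting positions, as in the maze experiment.
hypotheses = [(0.0, 0.0), (0.0, 6.0), (9.0, 0.0), (9.0, 6.0)]
observations = {"vending machine": 2.2, "photocopier": 9.4}  # camera-derived cues

weights = [likelihood(h, observations) for h in hypotheses]
best = hypotheses[weights.index(max(weights))]
print("Most likely starting position:", best)  # (0.0, 6.0) for these cues
```

In the real system this kind of re-scoring runs continuously, up to 14 times a second, rather than once.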
Milford explained that environmental changes like lighting, shadows, moving vehicles and people made it difficult for robots to navigate in a human world.
Machines like the Mars Rovers and those competing in the DARPA Challenges tended to use expensive sensors - essentially "throwing a lot of money" at the problem, he said.
But a cheaper solution was needed to direct domestic robots, which were currently still in early stages of development and "very, very, very dumb".
"The only really successful cheap robot that has occurred so far is the [iRobot Roomba] vacuum cleaner," he said. "They don't have any idea where they are; they just move around randomly."
The grid cell project was the latest in almost seven years of Milford's research into applying biological techniques to machines.
The team had been approached "occasionally" by domestic robot manufacturers, he said, but was currently focussed on research, and not commercialisation.
Labels:
A.I.,
animals,
artificial intelligence,
brains,
exploration,
navigation,
rats
Saturday, September 12, 2009
New NASA spacebot

NASA's Limbed Excursion Mechanical Utility Robot (LEMUR) is being designed as an inspection and maintenance robot for equipment in space. A scaled-up version of Lemur IIa could help build large structures in space. The Lemur IIa pictured here is shown on a scale model of a segmented telescope.
Labels:
exploration,
space
Wednesday, September 2, 2009
next-gen exploration robots

Two All-Terrain Hex-Legged Extra-Terrestrial Explorer (ATHLETE) rovers traverse the desert terrain adjacent to Dumont Dunes, CA. The ATHLETE rovers are being built to be capable of rolling over Apollo-like undulating terrain and "walking" over extremely rough or steep terrain for future lunar missions.
Labels:
exploration,
space
Sunday, May 17, 2009
Space robot 2.0: Smarter than the average rover

SOMETHING is moving. Two robots sitting motionless in the dust have spotted it. One, a six-wheeled rover, radios the other perched high on a rocky slope. Should they take a photo and beam it back to mission control? Time is short, they have a list of other tasks to complete, and the juice in their batteries is running low. The robots have seconds to decide. What should they do?
Today, mission control is a mere 10 metres away, in a garage here at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California. Engineers can step in at any time. But if the experiment succeeds and the robots spot the disturbance and decide to beam the pictures back to base, they will have moved one step closer to fulfilling NASA’s vision of a future in which teams of smart space probes scour distant worlds, seeking out water or signs of life with little or no help from human controllers.
NASA, along with other space agencies, has already taken the first tentative steps towards this kind of autonomous mission (see “Spacecraft go it alone”). In 1999, for example, NASA’s Deep Space 1 probe used a smart navigation system to find its way to an asteroid – a journey of over 600 million kilometres. Since 2003, an autonomous control system has been orbiting our planet aboard NASA’s Earth Observing-1 satellite. It helps EO-1 to spot volcanic eruptions and serious flooding, so the events can be photographed and the images beamed back to researchers on the ground. And in the next month or so, the latest iteration of smart software will be uploaded onto one of NASA’s Mars rovers, loosening the machine’s human tether still further so it can hunt for unusual rock formations on its own.
The idea is not to do away with human missions altogether. But since it is far cheaper and easier to send robots first, why not make them as productive as possible? Besides, the increasingly long distances they travel from home make controlling a rover with a joystick impractical. Commands from Earth might take 20 minutes to reach Mars, and about an hour to reach the moons of Jupiter.
So what can we realistically expect autonomous craft to do? It is one thing to build a space probe that can navigate by itself, respond quickly to unexpected events or even carry on when a critical component fails. It’s quite another to train a planetary rover to spot a fossilised bone in a rock, let alone distinguish a living cell from a speck of dirt.
The closest thing to a space robot with a brain is NASA’s pair of Mars rovers (see image), and their abilities are fairly limited. Since they landed in January 2004 they have had to cope with more than six critical technical problems, including a faulty memory module and a jammed wheel. That the craft are still trundling across the red planet and returning valuable geological data is down to engineers at mission control fixing the faults remotely. In fact the rovers can only do simple tasks on their own, says Steve Chien, the head of JPL’s artificial intelligence group. They can be programmed to drive from point A to point B, stop, and take a picture. They can spot clouds and whirling mini-tornadoes called dust devils on their own. They can also protect themselves against accidental damage – by keeping away from steep slopes or large rocks. For pretty much everything else, they depend on their human caretakers.
What are we missing?
This is becoming a significant limitation. While NASA’s first Mars rover, Sojourner (see image), travelled just 100 metres during its mission in 1997, Spirit and Opportunity have covered over 24 kilometres so far. As they drive they are programmed to snap images of the landscape around them, but that doesn’t make for very thorough exploration. “We are travelling further and further with each rover mission,” says Tara Estlin, senior computer scientist and one of the team developing autonomous science at JPL. “Who knows what interesting things we are missing?”
NASA wouldn’t want the rovers to record everything they see and transmit it all back to Earth; the craft simply don’t have the power, bandwidth and time. Instead, the team at JPL has spent around a decade developing software that allows the rovers to analyse images as they are recorded and decide for themselves which geological features are worth following up. Key to this is a software package called OASIS – short for on-board autonomous science investigation system.

The idea is that before the rovers set out each day, controllers can give OASIS a list of things to watch out for. This might simply be the largest or palest rock in the rover’s field of view, or it could be an angular rock that might be volcanic. Then whenever a rover takes an image, OASIS uses special algorithms to identify any rocks in the scene and single out those on its shopping list (Space Operations Communicator, vol 5, p39). Not only is OASIS able to tell the rovers what features are of scientific interest, it knows their relative value too: smooth rocks which may have been eroded by water might take priority over rough ones, say. This helps the rovers decide what to do next.
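As a rough illustration of that prioritisation step (the properties and weights below are invented, not the actual OASIS algorithms), scoring detected rocks against a weighted target list might look something like this:

```python
# Illustrative only: rocks detected in an image are scored against the day's
# target list, with smooth (possibly water-eroded) rocks outranking rough ones.

TARGET_WEIGHTS = {"large": 1.0, "pale": 1.5, "angular": 2.0, "smooth": 3.0}


def score_rock(rock: dict) -> float:
    """Sum the weights of every listed property the detected rock exhibits."""
    return sum(w for prop, w in TARGET_WEIGHTS.items() if rock.get(prop))


detected = [
    {"id": "rock-1", "large": True, "angular": True},   # score 3.0
    {"id": "rock-2", "pale": True, "smooth": True},     # score 4.5
]

ranked = sorted(detected, key=score_rock, reverse=True)
print([r["id"] for r in ranked])  # ['rock-2', 'rock-1']
```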
There are also practical considerations to take into account. As they trundle around the surface, the rovers must keep track of whether they have enough time, battery power and spare memory capacity to proceed. So the JPL team has also created a taskmaster – software that can plan and schedule activities. With science goals tugging at one sleeve and practical limitations at the other, this program steps in to decide how to order activities so that the rover can reach its goals safely, making any necessary scheduling changes along the way. With low-priority rocks close by, say, a rover might decide it is worth snapping six images of them rather than one of a more interesting rock a few metres away, since the latter would use up precious battery juice.
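A toy version of that trade-off (a simple greedy strategy, not JPL's actual planner) might look like this:

```python
# Toy scheduler sketch: greedily pick observations by science value per unit
# of battery cost, subject to battery and time budgets.

from dataclasses import dataclass


@dataclass
class Observation:
    name: str
    science_value: float
    battery_cost: float
    minutes: float


def schedule(candidates, battery_budget, time_budget):
    plan = []
    # Favour observations that deliver the most value per unit of battery.
    for obs in sorted(candidates, key=lambda o: o.science_value / o.battery_cost,
                      reverse=True):
        if obs.battery_cost <= battery_budget and obs.minutes <= time_budget:
            plan.append(obs)
            battery_budget -= obs.battery_cost
            time_budget -= obs.minutes
    return plan


candidates = [
    Observation("one image of distant high-value rock", science_value=9, battery_cost=6, minutes=40),
    Observation("six images of nearby low-priority rocks", science_value=6, battery_cost=2, minutes=15),
]
for obs in schedule(candidates, battery_budget=5, time_budget=60):
    print("Scheduled:", obs.name)  # only the cheap nearby images fit the budget
```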
Why stop there? Since OASIS allows a rover to identify high-priority targets on its own, the JPL team has decided to take the next step: let the rover drive over to an interesting rock and deploy its sensors to take a closer look. To do this, Estlin and her colleagues won’t be using OASIS, however. Instead, they have taken elements from it and used them to create a new control system called Autonomous Exploration for Gathering Increased Science (AEGIS). This has been tested successfully at JPL and is scheduled for uplink and remote installation on the rover Opportunity sometime in September.
Once AEGIS is in control, Opportunity will be able to deploy its high-resolution camera automatically and beam data back to Earth for analysis – the first time autonomous software has been able to control a craft on the surface of another world. This is just the beginning, says Estlin. For example, researchers at JPL and Wesleyan University in Middletown, Connecticut, have developed a smart detector system that will allow a rover to carry out a basic scientific experiment on its own. In this case, its task will be to identify specific minerals in an alien rock.
The detector consists of two automated spectrometers controlled by “support vector machines” – relatives of artificial neural networks – of a kind already in use aboard EO-1. The new SVM system uses the spectrometers to take measurements and then compares the results with an on-board database containing spectra from thousands of minerals. Last year the researchers published results in the journal Icarus (vol 195, p 169) showing that in almost all cases, even in complex rock mixtures, their SVM could automatically spot the presence of jarosite, a sulphate mineral associated with hydrothermal springs.
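A hedged sketch of the same idea, using synthetic spectra and an off-the-shelf scikit-learn SVM rather than the published detector, might look like this:

```python
# Sketch only, with synthetic data: an SVM is trained on labelled reference
# spectra, then used to flag whether a new measurement contains the target
# mineral. Channel counts and the spectral "bump" are invented.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_channels = 50


def fake_spectrum(has_mineral: bool) -> np.ndarray:
    # Pretend spectra with the target mineral show a bump around channel 20.
    spectrum = rng.normal(1.0, 0.05, n_channels)
    if has_mineral:
        spectrum[18:23] += 0.4
    return spectrum


labels = np.array([i % 2 for i in range(200)])                  # 1 = contains mineral
spectra = np.stack([fake_spectrum(bool(l)) for l in labels])    # reference library

clf = SVC(kernel="rbf").fit(spectra, labels)

new_measurement = fake_spectrum(has_mineral=True)
print("Target mineral detected:", bool(clf.predict(new_measurement.reshape(1, -1))[0]))
```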
Alien novelties
Though increasingly sophisticated, these autonomous systems are still a long way from the conscious machines of science fiction that can talk, feel and recognise new life forms. Right now, Chien admits, we can’t even really program a robot for “novelty detection” – the equivalent of, say, picking out the characteristic shape of a bone among a pile of rocks – let alone give it the ability to detect living creatures.
In theory, the shape of a complex natural object such as an ice crystal or a living cell could be described in computer code and embedded in a software library. Then the robot would only need a sensor such as a microscope with sufficient magnification to photograph it.
In fact identifying a cell is a huge challenge because its characteristics can be extremely subtle. In 1999, NASA funded an ambitious project that set out to discover whether there are specific signatures such as shape, symmetry, or a set of combined features that could provide a key to identifying and categorising simple living systems (New Scientist, 22 April 2000, p 22). The idea was to create a huge image library containing examples from Earth, and then teach a neural network which characteristics to look for. Unfortunately, the project ended before it could generate any useful results.
Just as a single measurement is unlikely to provide definitive proof of alien life, so most planetary scientists agree that a single robotic explorer, however smart, won’t provide all the answers. Instead, JPL scientists envisage teams of autonomous craft working together, orbiting an alien world and scouring the surface for interesting science, then radioing each other to help decide what features deserve a closer look.
This model is already being put through its paces. Since 2004, networks of ground-based sensors placed around volcanoes, from Erebus in Antarctica to Kilauea and Mauna Loa in Hawaii, have been watching for sudden changes that might signal an eruption. When they detect strong signals, they can summon EO-1, which uses its autonomous software planner to schedule a fly-past. The satellite then screens the target area for clouds, and if skies are clear, it records images, processes them and transmits them to ground control.
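In outline (hypothetical thresholds and function names, not the real sensor-web software), that event-triggered loop looks something like this:

```python
# Rough sketch of the event-triggered workflow: a strong ground-sensor reading
# requests a satellite fly-past, and images are only processed and downlinked
# if the scene is cloud-free.

ALERT_THRESHOLD = 0.8   # invented units
CLOUD_LIMIT = 0.3       # maximum acceptable cloud fraction


def ground_sensor_alert(reading: float) -> bool:
    return reading >= ALERT_THRESHOLD


def satellite_flypast(target: str, cloud_cover: float) -> str | None:
    # The on-board planner schedules the pass, then screens for clouds.
    if cloud_cover > CLOUD_LIMIT:
        return None                      # too cloudy: skip the downlink
    return f"processed image of {target}"


if ground_sensor_alert(reading=0.92):
    image = satellite_flypast("Erebus", cloud_cover=0.1)
    if image:
        print("Downlinked:", image)
```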

In July, a network of 15 probes was placed into Mount St Helens, a volcano in Washington state. These probes carry sensors that monitor conditions inside the crater and can talk to each other to analyse data in real time, as well as call up EO-1 to take photos. If it detects activity from orbit, the satellite can even ask the probes to focus attention on a particular spot.
Networks of autonomous probes can provide a number of advantages, including helping a mission cover more ground, and ensuring it continues even if one or more probes are damaged or destroyed. This approach also offers increased processing power, since computers on separate probes can work together to crunch data more quickly. And researchers are beginning to believe that teams of autonomous probes could eventually be smart enough to do almost everything a human explorer could, even in the remotest regions of space.
Last year, in a paper published in the journal Planetary and Space Science (vol 56, p 448), a consortium of researchers from the US, Italy and Japan laid out their strategy for searching out life using autonomous craft controlled by fuzzy logic, the mathematical tool developed in the 1960s to give computers a way to handle uncertainty. Their plan calls for the use of three types of craft: surface-based rovers with sensors designed to spot signs of water and potential sources of heat, such as geothermal vents; airships that float low overhead and help pinpoint the best sites for study; and orbiters that image the planet surface, coordinating with mission control as well as beaming data back to Earth.
The consortium argue that fuzzy logic is a better bet than neural networks or other artificial intelligence techniques, since it is well suited to handling incomplete data and contradictory or ambiguous rules. They also suggest that by working together, the three types of probes will have pretty much the same investigative and deductive powers as a human planetary scientist.
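To give a flavour of the fuzzy approach (the membership functions and rule below are invented, not the consortium's system), graded evidence of water and heat can be combined into a site priority rather than a hard yes/no:

```python
# Minimal fuzzy-logic sketch: evidence is graded between 0 and 1, and a fuzzy
# AND (taking the minimum) turns it into a priority for closer study.

def membership_wet(signal: float) -> float:
    """Degree to which a spectral signal suggests water (ramp from 0.2 to 0.6)."""
    return min(1.0, max(0.0, (signal - 0.2) / 0.4))


def membership_warm(temp_anomaly_k: float) -> float:
    """Degree to which a thermal anomaly suggests a heat source (ramp 0 K to 20 K)."""
    return min(1.0, max(0.0, temp_anomaly_k / 20.0))


def site_priority(water_signal: float, temp_anomaly_k: float) -> float:
    # The conclusion is only as strong as its weakest premise.
    return min(membership_wet(water_signal), membership_warm(temp_anomaly_k))


print(site_priority(water_signal=0.55, temp_anomaly_k=12.0))  # 0.6: worth a closer look
```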
Experimental simulations of a mission to Mars seem to confirm this view: in two tests the autonomous explorers came to the same conclusions as a human geoscientist. The system could be particularly useful for missions to Titan and Enceladus, the researchers suggest, since autonomy will be a key factor for the success of a mission so far from Earth.
Back at JPL, the day’s test of robot autonomy is almost complete. The two robots are running new software designed to improve coordination between craft. Part of the experiment is to see whether the robots can capture a photo of a moving target – in this case a small remote-controlled truck nicknamed Junior – and relay it back to “mission control” using delay-tolerant networking, a new system for data transfer.
In future deep-space missions, robots will need autonomy for longer stretches since commands from Earth will take an hour or so to reach them. And as planets rotate, there will be periods when no communication is possible. Delay-tolerant networking relies on a “store and forward” method that promises to provide a more reliable link between planetary explorers and mission control. Each node in the network – whether a rover or an orbiter – holds on to a transmission until it is safe to relay it to the next node. Information may take longer to reach its destination this way, but it will get there in the end.
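The store-and-forward idea itself is simple enough to sketch; the toy code below is an illustration only, not the actual delay-tolerant networking stack.

```python
# Toy store-and-forward node: each hop holds a bundle in its queue until its
# link is up, so data survives communication blackouts.

from collections import deque


class Node:
    def __init__(self, name: str):
        self.name = name
        self.queue: deque[str] = deque()
        self.link_up = False            # e.g. goes down as a planet rotates away

    def receive(self, bundle: str) -> None:
        self.queue.append(bundle)

    def forward(self, next_hop: "Node") -> None:
        # Relay only when it is safe to do so; otherwise keep holding the data.
        while self.link_up and self.queue:
            next_hop.receive(self.queue.popleft())


rover, orbiter, ground = Node("rover"), Node("orbiter"), Node("mission control")

rover.receive("image of Junior")
rover.forward(orbiter)                  # blackout: the rover keeps the bundle
rover.link_up = True
rover.forward(orbiter)                  # link restored: the bundle moves one hop
orbiter.link_up = True
orbiter.forward(ground)
print(list(ground.queue))               # ['image of Junior']
```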
And it seems to work: the images from the two robots arrive. They include both wide-angle shots and high-resolution close-ups of Junior. Estlin is pleased.
As we stand in the heat, a salamander scuttles quickly across a rock. I can’t help wondering whether the robots would have picked that out. Just suppose the Mars rover had to choose between a whirling dust devil and a fleeing amphibian? Chien assures me that the software would direct the rover to prioritise, depending on the relative value of the two. I hope it goes for the salamander. And if alien life proves half as shy, I hope the rover can act fast.
Labels:
A.I.,
artificial intelligence,
exploration,
navigation,
space