Monday, October 11, 2010
Google Cars Drive Themselves, in Traffic
Anyone driving the twists of Highway 1 between San Francisco and Los Angeles recently may have glimpsed a Toyota Prius with a curious funnel-like cylinder on the roof. Harder to notice was that the person at the wheel was not actually driving.
The car is a project of Google, which has been working in secret but in plain view on vehicles that can drive themselves, using artificial-intelligence software that can sense anything near the car and mimic the decisions made by a human driver.
With someone behind the wheel to take control if something goes awry and a technician in the passenger seat to monitor the navigation system, seven test cars have driven 1,000 miles without human intervention and more than 140,000 miles with only occasional human control. One even drove itself down Lombard Street in San Francisco, one of the steepest and curviest streets in the nation. The only accident, engineers said, was when one Google car was rear-ended while stopped at a traffic light.
Autonomous cars are years from mass production, but technologists who have long dreamed of them believe that they can transform society as profoundly as the Internet has.
Robot drivers react faster than humans, have 360-degree perception and do not get distracted, sleepy or intoxicated, the engineers argue. They speak in terms of lives saved and injuries avoided — more than 37,000 people died in car accidents in the United States in 2008. The engineers say the technology could double the capacity of roads by allowing cars to drive more safely while closer together. Because the robot cars would eventually be less likely to crash, they could be built lighter, reducing fuel consumption. But of course, to be truly safer, the cars must be far more reliable than, say, today’s personal computers, which crash on occasion and are frequently infected.
The Google research program using artificial intelligence to revolutionize the automobile is proof that the company’s ambitions reach beyond the search engine business. The program is also a departure from the mainstream of innovation in Silicon Valley, which has veered toward social networks and Hollywood-style digital media.
During a half-hour drive beginning on Google’s campus 35 miles south of San Francisco last Wednesday, a Prius equipped with a variety of sensors and following a route programmed into the GPS navigation system nimbly accelerated in the entrance lane and merged into fast-moving traffic on Highway 101, the freeway through Silicon Valley.
It drove at the speed limit, which it knew because the limit for every road is included in its database, and left the freeway several exits later. The device atop the car produced a detailed map of the environment.
The car then drove in city traffic through Mountain View, stopping for lights and stop signs, as well as making announcements like “approaching a crosswalk” (to warn the human at the wheel) or “turn ahead” in a pleasant female voice. This same pleasant voice would, engineers said, alert the driver if a master control system detected anything amiss with the various sensors.
The car can be programmed for different driving personalities — from cautious, in which it is more likely to yield to another car, to aggressive, where it is more likely to go first.
Christopher Urmson, a Carnegie Mellon University robotics scientist, was behind the wheel but not using it. To gain control of the car he has to do one of three things: hit a red button near his right hand, touch the brake or turn the steering wheel. He did so twice, once when a bicyclist ran a red light and again when a car in front stopped and began to back into a parking space. But the car seemed likely to have prevented an accident itself.
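The handoff described here boils down to a simple supervisory check: any one of three driver inputs immediately returns the car to human control. The sketch below is a hypothetical illustration of that rule, with invented function and sensor names; it is not Google's actual control code.

```python
# Hypothetical sketch of the manual-override logic described above:
# hitting the red button, touching the brake, or turning the wheel
# disengages the self-driving mode. All names and thresholds are invented.

def autonomy_enabled(red_button_pressed: bool,
                     brake_pressure: float,
                     steering_torque: float,
                     brake_threshold: float = 0.05,
                     steering_threshold: float = 0.5) -> bool:
    """Return False (hand control back to the driver) if the driver intervenes."""
    if red_button_pressed:
        return False
    if brake_pressure > brake_threshold:           # driver touched the brake
        return False
    if abs(steering_torque) > steering_threshold:  # driver turned the wheel
        return False
    return True
```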
When he returned to automated “cruise” mode, the car gave a little “whir” meant to evoke going into warp drive on “Star Trek,” and Dr. Urmson was able to rest his hands by his sides or gesticulate when talking to a passenger in the back seat. He said the cars did attract attention, but people seem to think they are just the next generation of the Street View cars that Google uses to take photographs and collect data for its maps.
The project is the brainchild of Sebastian Thrun, the 43-year-old director of the Stanford Artificial Intelligence Laboratory, a Google engineer and the co-inventor of the Street View mapping service.
In 2005, he led a team of Stanford students and faculty members in designing the Stanley robot car, winning the second Grand Challenge of the Defense Advanced Research Projects Agency, a $2 million Pentagon prize for driving autonomously over 132 miles in the desert.
Besides the team of 15 engineers working on the current project, Google hired more than a dozen people, each with a spotless driving record, to sit in the driver’s seat, paying $15 an hour or more. Google is using six Priuses and an Audi TT in the project.
The Google researchers said the company did not yet have a clear plan to create a business from the experiments. Dr. Thrun is known as a passionate promoter of the potential to use robotic vehicles to make highways safer and lower the nation’s energy costs. It is a commitment shared by Larry Page, Google’s co-founder, according to several people familiar with the project.
The self-driving car initiative is an example of Google’s willingness to gamble on technology that may not pay off for years, Dr. Thrun said. Even the most optimistic predictions put the deployment of the technology more than eight years away.
One way Google might be able to profit is to provide information and navigation services for makers of autonomous vehicles. Or, it might sell or give away the navigation technology itself, much as it offers its Android smart phone system to cellphone companies.
But the advent of autonomous vehicles poses thorny legal issues, the Google researchers acknowledged. Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would?
And in the event of an accident, who would be liable — the person behind the wheel or the maker of the software?
“The technology is ahead of the law in many areas,” said Bernard Lu, senior staff counsel for the California Department of Motor Vehicles. “If you look at the vehicle code, there are dozens of laws pertaining to the driver of a vehicle, and they all presume to have a human being operating the vehicle.”
The Google researchers said they had carefully examined California’s motor vehicle regulations and determined that because a human driver can override any error, the experimental cars are legal. Mr. Lu agreed.
Scientists and engineers have been designing autonomous vehicles since the mid-1960s, but crucial innovation happened in 2004 when the Pentagon’s research arm began its Grand Challenge.
The first contest ended in failure, but in 2005, Dr. Thrun’s Stanford team built the car that won a race with a rival vehicle built by a team from Carnegie Mellon University. Less than two years later, another event proved that autonomous vehicles could drive safely in urban settings.
Advances have been so encouraging that Dr. Thrun sounds like an evangelist when he speaks of robot cars. There is their potential to reduce fuel consumption by eliminating heavy-footed stop-and-go drivers and, given the reduced possibility of accidents, to ultimately build more lightweight vehicles.
There is even the farther-off prospect of cars that do not need anyone behind the wheel. That would allow the cars to be summoned electronically, so that people could share them. Fewer cars would then be needed, reducing the need for parking spaces, which consume valuable land.
And, of course, the cars could save humans from themselves. “Can we text twice as much while driving, without the guilt?” Dr. Thrun said in a recent talk. “Yes, we can, if only cars will drive themselves.”
Labels:
cars,
driving,
Google,
navigation
Sunday, October 10, 2010
Aiming to Learn as We Do, a Machine Teaches Itself: NELL, the Never-Ending Language Learning system
Give a computer a task that can be crisply defined — win at chess, predict the weather — and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence.
Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.
Since the start of the year, a team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer — calculating 24 hours a day, seven days a week — that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.
“For all the advances in computer science, we still don’t have a computer that can learn as humans do, cumulatively, over the long term,” said the team’s leader, Tom M. Mitchell, a computer scientist and chairman of the machine learning department.
The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories — cities, companies, sports teams, actors, universities, plants and 274 others. The category facts are things like “San Francisco is a city” and “sunflower is a plant.”
NELL also learns facts that are relations between members of two categories. For example, Peyton Manning is a football player (category). The Indianapolis Colts is a football team (category). By scanning text patterns, NELL can infer with a high probability that Peyton Manning plays for the Indianapolis Colts — even if it has never read that Mr. Manning plays for the Colts. “Plays for” is a relation, and there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year, and will steadily expand.
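A toy illustration of the kind of knowledge base described here: category facts such as "San Francisco is a city" and relation facts such as "Peyton Manning plays for the Indianapolis Colts", each carrying an estimated confidence. This is a minimal sketch for intuition only, not NELL's actual data structures or inference machinery.

```python
# Minimal sketch of a NELL-style knowledge base: category facts plus
# relation facts between members of two categories. Purely illustrative.

category_facts = {
    ("San Francisco", "city"),
    ("sunflower", "plant"),
    ("Peyton Manning", "football player"),
    ("Indianapolis Colts", "football team"),
}

# A relation links a member of one category to a member of another,
# with a confidence estimated from text-pattern evidence on the Web.
relation_facts = {
    ("Peyton Manning", "plays for", "Indianapolis Colts"): 0.93,
}

def believes(subject, relation, obj, threshold=0.85):
    """Accept a relation fact only if its estimated confidence is high enough."""
    return relation_facts.get((subject, relation, obj), 0.0) >= threshold

print(believes("Peyton Manning", "plays for", "Indianapolis Colts"))  # True
```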
The learned facts are continuously added to NELL’s growing database, which the researchers call a “knowledge base.” A larger pool of facts, Dr. Mitchell says, will help refine NELL’s learning algorithms so that it finds facts on the Web more accurately and more efficiently over time.
NELL is one project in a widening field of research and investment aimed at enabling computers to better understand the meaning of language. Many of these efforts tap the Web as a rich trove of text to assemble structured ontologies — formal descriptions of concepts and relationships — to help computers mimic human understanding. The ideal has been discussed for years, and more than a decade ago Sir Tim Berners-Lee, who invented the underlying software for the World Wide Web, sketched his vision of a “semantic Web.”
Today, ever-faster computers, an explosion of Web data and improved software techniques are opening the door to rapid progress. Scientists at universities, government labs, Google, Microsoft, I.B.M. and elsewhere are pursuing breakthroughs, along somewhat different paths.
For example, I.B.M.’s “question answering” machine, Watson, shows remarkable semantic understanding in fields like history, literature and sports as it plays the quiz show “Jeopardy!” Google Squared, a research project at the Internet search giant, demonstrates ample grasp of semantic categories as it finds and presents information from around the Web on search topics like “U.S. presidents” and “cheeses.”
Still, artificial intelligence experts agree that the Carnegie Mellon approach is innovative. Many semantic learning systems, they note, are more passive learners, largely hand-crafted by human programmers, while NELL is highly automated. “What’s exciting and significant about it is the continuous learning, as if NELL is exercising curiosity on its own, with little human help,” said Oren Etzioni, a computer scientist at the University of Washington, who leads a project called TextRunner, which reads the Web to extract facts.
Computers that understand language, experts say, promise a big payoff someday. The potential applications range from smarter search (supplying natural-language answers to search queries, not just links to Web pages) to virtual personal assistants that can reply to questions in specific disciplines or activities like health, education, travel and shopping.
“The technology is really maturing, and will increasingly be used to gain understanding,” said Alfred Spector, vice president of research for Google. “We’re on the verge now in this semantic world.”
With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: “Anger is an emotion.” “Bliss is an emotion.” And about a dozen more.
Then NELL gets to work. Its tools include programs that extract and classify text phrases from the Web, programs that look for patterns and correlations, and programs that learn rules. For example, when the computer system reads the phrase “Pikes Peak,” it studies the structure — two words, each beginning with a capital letter, and the last word is Peak. That structure alone might make it probable that Pikes Peak is a mountain. But NELL also reads in several ways. It will mine for text phrases that surround Pikes Peak and similar noun phrases repeatedly. For example, “I climbed XXX.”
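The bootstrapping loop this describes can be caricatured in a few lines: seed instances of a category suggest extraction patterns such as "I climbed XXX", and those patterns in turn propose new candidate instances from text. The sketch below is a deliberately simplified stand-in, not NELL's coupled-learning algorithm.

```python
# Simplified sketch of pattern-based bootstrapping for the "mountain"
# category. Not NELL's actual algorithm; for intuition only.

seeds = {"Pikes Peak", "Mount Everest"}
corpus = [
    "Last summer I climbed Pikes Peak with my brother.",
    "I climbed Mount Everest in 2003.",
    "I climbed Mount Rainier before dawn.",
    "I climbed stairs to the attic.",
]

# 1. Learn patterns: the contexts in which known seed instances appear.
#    (The real system generalises these; here we keep the raw contexts.)
patterns = set()
for sentence in corpus:
    for seed in seeds:
        if seed in sentence:
            patterns.add(sentence.replace(seed, "XXX"))

# 2. Apply the learned pattern shape ("I climbed XXX") to propose candidates:
#    a leading run of capitalised tokens after "I climbed" becomes a
#    candidate mountain, mirroring the structural cue described above.
candidates = set()
for sentence in corpus:
    if "I climbed " in sentence:
        tail = sentence.split("I climbed ", 1)[1].replace(".", "")
        tokens = []
        for tok in tail.split():
            if tok[0].isupper():
                tokens.append(tok)
            else:
                break
        if tokens:
            candidates.add(" ".join(tokens))

# Candidates are still checked against existing knowledge: "stairs" never
# qualifies here, and NELL would also reject it as a known "building part".
print(candidates - seeds)   # {'Mount Rainier'}
```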
NELL, Dr. Mitchell explains, is designed to be able to grapple with words in different contexts, by deploying a hierarchy of rules to resolve ambiguity. This kind of nuanced judgment tends to flummox computers. “But as it turns out, a system like this works much better if you force it to learn many things, hundreds at once,” he said.
For example, the text-phrase structure “I climbed XXX” very often occurs with a mountain. But when NELL reads, “I climbed stairs,” it has previously learned with great certainty that “stairs” belongs to the category “building part.” “It self-corrects when it has more information, as it learns more,” Dr. Mitchell explained.
NELL, he says, is just getting under way, and its growing knowledge base of facts and relations is intended as a foundation for improving machine intelligence. Dr. Mitchell offers an example of the kind of knowledge NELL cannot manage today, but may someday. Take two similar sentences, he said. “The girl caught the butterfly with the spots.” And, “The girl caught the butterfly with the net.”
A human reader, he noted, inherently understands that girls hold nets, and girls are not usually spotted. So, in the first sentence, “spots” is associated with “butterfly,” and in the second, “net” with “girl.”
“That’s obvious to a person, but it’s not obvious to a computer,” Dr. Mitchell said. “So much of human language is background knowledge, knowledge accumulated over time. That’s where NELL is headed, and the challenge is how to get that knowledge.”
A helping hand from humans, occasionally, will be part of the answer. For the first six months, NELL ran unassisted. But the research team noticed that while it did well with most categories and relations, its accuracy on about one-fourth of them trailed well behind. Starting in June, the researchers began scanning each category and relation for about five minutes every two weeks. When they find blatant errors, they label and correct them, putting NELL’s learning engine back on track.
When Dr. Mitchell scanned the “baked goods” category recently, he noticed a clear pattern. NELL was at first quite accurate, easily identifying all kinds of pies, breads, cakes and cookies as baked goods. But things went awry after NELL’s noun-phrase classifier decided “Internet cookies” was a baked good. (Its database related to baked goods or the Internet apparently lacked the knowledge to correct the mistake.)
NELL had read the sentence “I deleted my Internet cookies.” So when it read “I deleted my files,” it decided “files” was probably a baked good, too. “It started this whole avalanche of mistakes,” Dr. Mitchell said. He corrected the Internet cookies error and restarted NELL’s bakery education.
His ideal, Dr. Mitchell said, was a computer system that could learn continuously with no need for human assistance. “We’re not there yet,” he said. “But you and I don’t learn in isolation either.”
Tuesday, September 28, 2010
EPFL develops Linux-based swarming micro air vehicles
The good people at Ecole Polytechnique Federale de Lausanne (or EPFL) in Switzerland have been very busy lately, as this video demonstrates.
Not only have they put together a scalable system that will let any flying robot perch in a tree or similar structure, but now they've gone and developed a platform for swarming air vehicles (running Linux, no less).
Said to be the largest network of its kind, the ten SMAVNET swarm members control their own altitude, airspeed, and turn rate based on input from the onboard gyroscope and pressure sensors. The goal is to develop low-cost devices that can be deployed in disaster areas to create ad hoc communications networks, although we can't help but think this would make the best Christmas present ever.
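Each swarm member's low-level job, as described above, is to hold commanded altitude, airspeed and turn rate from gyroscope and pressure-sensor readings. The sketch below is a rough, hypothetical proportional-control loop for one such vehicle; the sensor names, gains and structure are assumptions for illustration, not EPFL's SMAVNET code.

```python
# Hypothetical proportional controller for one SMAVNET-style swarm member:
# altitude and airspeed come from pressure sensors, turn rate from a gyro.
# Gains, names and units are invented for illustration only.

def control_step(target, sensors, gains=(0.8, 0.5, 1.2)):
    """Return (elevator, throttle, rudder) commands from sensor errors."""
    k_alt, k_speed, k_turn = gains
    altitude = sensors["static_pressure_altitude_m"]    # from static pressure
    airspeed = sensors["dynamic_pressure_airspeed_ms"]  # from pitot pressure
    turn_rate = sensors["gyro_yaw_rate_dps"]            # from the gyroscope

    elevator = k_alt * (target["altitude_m"] - altitude)
    throttle = k_speed * (target["airspeed_ms"] - airspeed)
    rudder = k_turn * (target["turn_rate_dps"] - turn_rate)
    return elevator, throttle, rudder

# Example: hold 80 m altitude, 12 m/s airspeed and a gentle 5 deg/s turn.
commands = control_step(
    {"altitude_m": 80.0, "airspeed_ms": 12.0, "turn_rate_dps": 5.0},
    {"static_pressure_altitude_m": 77.5,
     "dynamic_pressure_airspeed_ms": 11.0,
     "gyro_yaw_rate_dps": 2.0},
)
```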
Labels:
flying,
navigation
Sunday, September 12, 2010
Future farms to be run by robots
Robots built on Mechanisation, Automation, Robotics and Remote Sensing (MARRS) technologies could one day run automated farms in Australia, a researcher from the University of Queensland says.
Dr Adam Postula says technologies can be used to control unmanned aircraft or unmanned tractors, using detection systems capable of observing environments using visual, infra-red or laser light wavelengths.
The emerging technologies can also help farmers by detecting and communicating, in real time, variable environmental, field and crop parameters such as moisture content, temperature and humidity.
Dr Postula and a colleague will speak on the role of smart machines in the future of Australian farming at an industry event in Marburg, west of Brisbane, on Wednesday.
The workshop will focus on opportunities available to Australian farmers through the introduction of robots and smart machines into their operations.
The remote-controlled farm could become a reality, just as mines are becoming more automated, he said.
"That's definitely possible. Look what happens with farms now - how many people we've employed on farms before and how many we have now," Dr Postula said.
"I've seen a mine in Sweden where there were no people underground, everything was controlled from above ground."
The future of farming is largely about precision, Dr Postula told AAP on Monday.
"It's not only pursued in space - where you put your plants in particular locations - but also you know almost everything about your soil, about moisture, stuff that really matters for growing," he said.
"In order to know that you have to have sensors that are close to the plant."
That means the sensors must be cheap, he said.
Dr Postula said any four-wheel drive vehicle can be made autonomous, and unmanned aircraft will be able to scan and estimate the size of crops and the maturity of fruit, or determine the location of cattle.
"We expect that walking, moving, flying robots will be commonplace on Australian farms in the future," Dr Postula said.
Saturday, July 24, 2010
Robot eats sewage for energy: researchers develop synthetic gut
In the bid to create autonomous robots, researchers have turned to biomass as an energy source. By being able to feed themselves, robots could be set to work for long periods without human intervention.
Such food-munching robots have been demonstrated in the past, often generating power with the help of microbial fuel cells (MFCs) - bio-electrochemical devices that enlist cultures of bacteria to break down food to generate power. Until now, though, no one had tackled the messy but inevitable issue of finding a way to evacuate the waste these bugs produce.
What was needed was an artificial gut, says Chris Melhuish, director of the Bristol Robotics Lab in the UK. He has spent three years with Ioannis Ieropoulos and colleagues working up the concept. The result: Ecobot III.
"Diarrhoea-bot would be more appropriate," Melhuish admits. "It's not exactly knocking out rabbit pellets." Even so, he says, it marks the first demonstration of a biomass-powered robot that can operate unaided for some time.
Previous incarnations of Ecobot showed that it is possible to generate enough power for the robot to exhibit certain basic, yet intelligent behaviours, such as moving towards a light source. Human intervention was needed to clean up after meals, though.
Now, by redesigning the robot to include a digestive tract, Ecobot III has shown that it can survive for up to seven days, feeding and "watering" itself unaided. It obediently expels its waste into a litter tray once every 24 hours.
The key to getting this gut to work, says Ieropoulos, is a recycling system that relies on a gravity-fed peristaltic pump which, like the human colon, applies waves of pressure to squeeze unwanted matter out of a tube.
At the start of the digestive process the robot feeds itself by moving into contact with a dispenser. This pumps a nutrient-rich solution of partially processed sewage into its "mouth" where it is distributed into 48 separate MFCs within the robot. This fluid is a concoction of minerals, salts, yeast extracts and other nutrients. As unappetising as this mixture sounds, for the culture of microbes in the robot's stomach it is ambrosia itself.
At the heart of the process is a reduction-oxidation reaction that takes place in the anode chambers of each of the robot's MFCs. As the bacteria metabolise the organic matter, hydrogen atoms are given off. The hydrogen's electrons migrate to the electrode, generating a current, while hydrogen ions pass through a proton-exchange membrane into the cathode chamber of the cell, which contains water. Here, oxygen dissolved in that water combines with the protons to produce additional water. Because this supply of water gradually evaporates, the robot also needs regular drinks, which it gets from a separate spout.
The cells are arranged in a stack of two tiers of 24, designed to allow gravity to direct any heavy undigested matter to accumulate in a central trough. The contents are repeatedly re-circulated from the trough into the robot's feeder tanks to extract as much energy as possible, before being excreted.
Getting rid of this waste not only prevents fuel cells from filling up and becoming clogged, but also removes any acidic waste products from the digester that might poison the bacteria, says Ieropoulos.
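Summarised as a schedule, the digestion cycle described above is: dock with the dispenser, distribute feed across the 48 fuel cells, recirculate the trough contents to squeeze out more energy, top up with water, and expel waste once every 24 hours. The sketch below is a hypothetical scheduler with stubbed hardware calls, for illustration only; it is not the Bristol Robotics Lab control software.

```python
import time

# Hypothetical daily digestion schedule for an Ecobot-III-style robot,
# based on the cycle described in the article. Function bodies are stubs;
# interfaces and timings are assumptions, not the real system.

def feed_from_dispenser():     ...  # dock and pump in nutrient-rich sewage
def distribute_to_mfcs(n=48):  ...  # share the feed across the fuel cells
def recirculate_trough():      ...  # pump trough contents back to feeder tanks
def drink_from_spout():        ...  # top up cathode water lost to evaporation
def excrete_to_litter_tray():  ...  # peristaltic pump squeezes the waste out

def daily_cycle(hours_between_waste=24, recirculations_per_day=6):
    feed_from_dispenser()
    distribute_to_mfcs()
    for _ in range(recirculations_per_day):
        recirculate_trough()   # extract as much energy as possible
        drink_from_spout()
        time.sleep(hours_between_waste * 3600 / recirculations_per_day)
    excrete_to_litter_tray()   # once every 24 hours
```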
As things stand, the fuel cells are capable of extracting a mere 1 per cent of the chemical energy available in the robot's food, despite the recycling process. The system uses off-the-shelf components, so modifying the anodes to have a larger surface area on which bacteria can attach themselves should help extract far more energy, says Ieropoulos.
Robert Finkelstein, who heads the Energetically Autonomous Tactical Robot (EATR) project at the US military research agency DARPA, thinks MFC technology is the wrong choice. It is inefficient and converts energy too slowly, he says.
EATR will derive its energy from burning biomass rather than eating it. It uses a novel combustion engine developed by Cyclone Power Technologies of Pompano Beach, Florida, and the hope is that when EATR is assembled and tested later this month it will generate enough energy to roll 160 kilometres on 60 kilograms of biomass. In terms of the calorific value of the fuel, that's better than the average car, says Finkelstein.
One of the advantages of MFCs, though, is that they can consume almost anything, including waste water, a substance that isn't easily burned, says Ieropoulos. The bacteria in Ecobot III's gut are made up of hundreds of different species, allowing it to adapt to different foodstuffs. One of the ideas the group is playing with, and the reason they are using waste water as food, is to see if these fuel cells could be used as part of a filtration system to clean up sewage water.
The work will be presented at the Artificial Life conference in Odense, Denmark, next month. The next step is to explore how the robot will cope with a heartier meal, namely flies.
The carnivorous-robot fearing public need not worry, says Melhuish. Much of the energy generated from flies will go into powering the robot's digestive system. With an average speed of about 21 centimetres a day, it is unlikely to catch you, he says.
Labels:
biomatter fuelled,
digestion
Saturday, May 1, 2010
Family Nanny robot is just five years and $1,500 away from being your new best friend
While Japan's busy preparing its robotic invasion of the moon, China's Siasun Robot & Automation Co., Ltd. has its eyes on Planet Earth instead.
Meet Family Nanny, a two-foot-seven, 55-pound robot that can talk, email, text, detect gas leaks, and run around on its two wheels for eight hours on a single two-hour charge.
It'll make great chatty company for the elderly while it relays vital stats back to health monitoring systems. In case of emergencies such as a gas leak, the Family Nanny will alert the owner via text and email.
Not bad for ¥10,000 ($1,465), we'd say, but we'll remain skeptical on its chatting skills until it launches -- supposedly sometime around 2015.
Labels:
China,
healthcare,
Siasun
Friday, April 2, 2010
Robot folds laundry
UC Berkeley roboticist Pieter Abbeel and his colleagues developed software that enables a robot to fold towels. From the abstract to their scientific paper:
"The robot begins by picking up a randomly dropped towel from a table, goes through a sequence of vision-based re-grasps and manipulations-- partially in the air, partially on the table--and finally stacks the folded towel in a target location. The reliability and robustness of our algorithm enables for the first time a robot with general purpose manipulators to reliably and fully-autonomously fold previously unseen towels, demonstrating success on all 50 out of 50 single-towel trials as well as on a pile of 5 towels. "
Labels:
housekeeping,
video
Sunday, March 28, 2010
Virtual pets that can learn
"SIT," says the man. The dog tilts its head but does nothing. "Sit," the man repeats.
The dog lies down. "No!" the man admonishes.
Then, unable to get the dog to sit, the man decides to teach it by example. He sits down himself.
"I'm sitting. Try sitting," he says. The dog cocks its head attentively, folds its hind legs under its body and sits. "Good!" says the man.
No, it's not a rather bizarre way to teach your pet new tricks. It is a demonstration of a synthetic character in a virtual world being controlled by an autonomous artificial intelligence (AI) program, which will be released to inhabitants of virtual worlds like Second Life later this year.
Novamente, a company in Washington DC that built the AI program controlling the dog, says the demonstration is a foretaste not just of future virtual pets but of computer games to come. The work, along with similar programs from other researchers, was presented at the First Conference on Artificial General Intelligence at the University of Memphis in Tennessee earlier this month.
If first impressions are anything to go by, synthetic pets like Novamente's dog will be a far cry from today's virtual pets, such as Neopets and Nintendogs, which can only perform pre-programmed moves, such as catching a disc. "The problem with current virtual pets is they are rigidly programmed and lack emotions, responsiveness, individual personality or the ability to learn," says Ben Goertzel of Novamente. "They are pretty much all morons."
In contrast, Goertzel claims that synthetic characters like his dog can be taught almost anything, even things that their programmers never imagined.
For instance, owners could train their pets to help win battles in adventure games such as World of Warcraft, says Sibley Verbek of the Electric Sheep Company in New York City, which helped Novamente create the virtual pets. "It is a system that allows the user to teach the virtual character anything they want to," he says.
So how do these autonomous programs work? Take Novamente's virtual pet, which is expected to be the first to hit the market. One way that the pets learn is by being taught specific tasks by human-controlled avatars, similar to the way babies are taught by their parents.
To do this, the humans must directly tell the pet - via Second Life's instant messaging typing interface - that they are about to teach it a task. When the pet receives a specific command, such as "I am going to teach you to sit", it works out that it is about to learn something new called "sit". It then watches the human avatar and starts to copy some of the things the teacher does.
At first it doesn't know which aspects of the task are important. This can lead to mistakes: the dog lying down instead of sitting, for example. But it soon figures out the correct behaviour by trying the task several times in a variety of ways. The key learning tool is that the pets are pre-programmed to seek praise from their owners, so they can make increasingly intelligent guesses about what they should copy, repeating adjustments that seem to make the human avatar more likely to say "good dog", and avoiding those that elicit the response "bad dog". Eventually, the pet figures out how to sit.
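One way to picture that loop: the pet tries variants of an imitated behaviour and reinforces whichever adjustments earn "good dog" more often than "bad dog". The sketch below is a hypothetical caricature of praise-driven trial and error, not Novamente's learning engine.

```python
import random

# Caricature of praise-driven imitation learning: the pet tries variants
# of a demonstrated behaviour and keeps whichever variant collects the
# most praise. Purely illustrative; not Novamente's code.

def teach(pet_variants, owner_feedback, trials=30):
    """pet_variants: candidate behaviours; owner_feedback(behaviour) -> +1 or -1."""
    scores = {variant: 0 for variant in pet_variants}
    for _ in range(trials):
        variant = random.choice(list(pet_variants))  # explore a variant
        scores[variant] += owner_feedback(variant)   # "good dog" or "bad dog"
    return max(scores, key=scores.get)               # settle on the best one

# Hypothetical example: the owner praises sitting and scolds everything else.
best = teach(
    {"sit", "lie_down", "spin"},
    lambda behaviour: 1 if behaviour == "sit" else -1,
)
print(best)  # most likely "sit"
```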
Learning by imitation isn't exactly a new idea. Robots in the real world are still being trained in this way. But it hasn't been easy. For example, a real robot needs sophisticated computer vision to recognise its teacher's legs, so that it can isolate their movement and copy it. But the great variation in the size and shape of legs, which depends on their motion and the angle of viewing, means it is hard to program a robot to recognise legs.
In Second Life, you can get round this problem. Characters don't see objects from a certain angle, nor from a particular distance; all they know is the 3D coordinates of the object, allowing them to recognise legs simply by their geometry. Once the pet can recognise legs, Goertzel then programs it to map the leg movements to the movement of its own legs. Obviously, the pet's own legs are a different size and shape, so the exact same motions wouldn't be appropriate. But the pets experiment with slightly different variations on the theme - and then settle on the set of movements that elicits the most praise from the avatar.
So far, Goertzel says he has successfully taught his dogs to play fetch, basic soccer skills such as kicking the ball, faking a shot and dribbling, and to dance a simple series of moves, just by showing them how (watch a video of the demo at www.novamente.net/puppy.mov).
Imitation isn't the only way the pets learn, however. They can also learn things humans may not have intended to teach them. As well as seeking praise, they are also programmed with other basic desires such as hunger and thirst, as well as some random movements and exploration of the virtual environment. As they explore, their "memory" records everything that happens. It then carries out statistical analyses to find combinations of sequences and actions that seem to predict fulfilment of its goals, such as appeasement of hunger, and uses that knowledge to guide its future behaviour. This can then lead to more sophisticated behaviour, such as a dog learning to touch its bowl when a human walks into the room, because that increases the chance of a goal being fulfilled. "It learns that going near the bowl is symbolic for food," says Goertzel. "This is a sort of rudimentary gestural communication."
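The bowl-touching example amounts to estimating how often an action precedes fulfilment of a goal and then favouring the actions with the highest estimated success rate. A minimal sketch of that bookkeeping, under the assumption that the pet's "memory" is just a log of (action, goal-fulfilled) pairs:

```python
from collections import defaultdict

# Minimal sketch of goal-conditioned statistics over the pet's memory:
# count how often each action is followed by the hunger goal being met,
# then prefer the action with the highest estimated success rate.
# An illustrative assumption, not Novamente's statistical machinery.

memory = [
    ("touch_bowl", True), ("wander", False), ("touch_bowl", True),
    ("bark", False), ("touch_bowl", False), ("wander", False),
]

counts = defaultdict(lambda: [0, 0])   # action -> [successes, attempts]
for action, goal_fulfilled in memory:
    counts[action][1] += 1
    if goal_fulfilled:
        counts[action][0] += 1

success_rate = {a: s / n for a, (s, n) in counts.items()}
print(max(success_rate, key=success_rate.get))  # "touch_bowl"
```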
Goertzel is aiming even higher. He says learning gestures could eventually form the basis for virtual pets to learn language, just as it does in young children. "Eventually we want to have virtual babies or talking parrots that learn to speak," he says (see "If only they could talk").
Deb Roy, an AI researcher at the Massachusetts Institute of Technology, worries that people will tire of training their virtual pets. "Philosophically I am on board. These are lovely and powerful ideas," he says, "But what are the results that show [Goertzel's team] are making progress compared to people who have tried similar things?"
Novamente has a few tricks up its sleeve to stop people from getting bored. For starters, the synthetic characters will learn quickly as more and more people use them. Although each pet has its own "brain", Novamente's servers will pool knowledge from all the brains. So once one pet has mastered one trick, it will be much easier for another one to master it, too.
Researchers at Novamente are not the only ones who hope to create compelling synthetic characters. Selmer Bringsjord, Andrew Shilliday and colleagues at Rensselaer Polytechnic Institute in Troy, New York, are working on a character called Eddie, that they hope will reason about another human's state of mind - potentially leading to characters that understand deceit and betrayal - and predict what other characters will do next.
The fusing of virtual worlds and AI will almost certainly be good for AI. Since the field failed to deliver on its initial promises of machines you can chat to, robotic assistants that do your housework and conscious machines, it has been hard to get funding to build generally intelligent programs. Instead, more specific “narrow AI”, such as computer vision or chess-playing, has flourished. Novamente is planning to make its pets so much fun that people will actually pay money to interact with them. If so, the multibillion-dollar games industry could drive AI towards delivering on its original promise.
Labels:
artificial intelligence,
Second Life
Could the fusion of games, virtual worlds and artificial intelligence take us closer to building artificial brains?
Novamente is a company that creates virtual pets equipped with artificial intelligence.
As it moves toward this goal, the company hopes the pets will learn to make common-sense assumptions the way humans do, which could eventually allow them to understand and produce natural language, for example.
One of the biggest challenges faced by researchers trying to imbue computers with natural language abilities is getting computers to resolve ambiguities. Take this sentence: "I saw the man with a telescope." There are three possible ways to interpret the sentence. Either I was looking at a man holding a telescope, or I saw a man through my telescope, or more morbidly, I am sawing a man with a telescope. The context would help a human figure out the real meaning, while a computer might be flummoxed.
But in an environment like Second Life, a synthetic character endowed with AI could use its immediate experience and interactions with other avatars and objects to make sense of language the way humans might. "The stuff that really excites me is to start teaching [pets] simple language," says Ben Goertzel of Novamente.
But other AI researchers doubt that virtual environments will be rich enough for synthetic characters to move towards the kind of general intelligence that is required for natural language processing. Stephen Grand, an independent researcher from Baton Rouge, Louisiana, who created the AI game Creatures in the mid-1990s, applauds the Novamente approach, but thinks there are limits to learning inside a virtual world.
"Just imagine how intelligent you would be if you were born with nothing more than the sensory information available to a Second Life inhabitant," he says. "It's like trying to paint a picture while looking through a drinking straw."
Labels:
artificial intelligence,
Second Life
Thursday, March 25, 2010
IBM Simulates a Cat-Like Brain: AI or Shadow Minds for Humans?
IBM's Almaden Research Center has announced that it has produced a "cortical simulation" of the scale and complexity of a cat brain.
This simulation ran on one of IBM's "Blue Gene" supercomputers, in this case at the Lawrence Livermore National Laboratory (LLNL).
This isn't a simulation of a cat brain; it's a simulation of a brain structure that has the scale and connection complexity of a cat brain.
It doesn't include the actual structures of a cat brain, nor its actual connections; the various experiments in the project filled the memory of the cortical simulation with a bunch of data, and let the system create its own signals and connections.
Put simply, it's not an artificial (feline) intelligence; it's a platform upon which an A(F)I could conceivably be built.
Scientists at IBM Research - Almaden, in collaboration with colleagues from Lawrence Berkeley National Lab, have performed the first near real-time cortical simulation of the brain that exceeds the scale of a cat cortex and contains 1 billion spiking neurons and 10 trillion individual learning synapses.
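For a sense of what "spiking neurons with learning synapses" means at toy scale, the sketch below steps a handful of leaky integrate-and-fire neurons. It is a generic textbook model shown only for intuition; it is in no way IBM's simulator or the Blue Gene code, and it omits the synaptic learning rules entirely.

```python
import numpy as np

# Toy leaky integrate-and-fire network: a generic spiking-neuron model,
# shown only to illustrate the kind of unit a cortical simulation scales
# up to a billion neurons and trillions of synapses. Not IBM's simulator.

n = 5                                          # neurons (IBM's run: ~1e9)
weights = np.random.uniform(0, 0.6, (n, n))    # "synapses" (IBM's run: ~1e13)
np.fill_diagonal(weights, 0.0)
v = np.zeros(n)                                # membrane potentials
threshold, leak = 1.0, 0.9

spikes = np.zeros(n)
for step in range(100):
    external = np.random.uniform(0, 0.3, n)     # random input drive
    v = leak * v + weights @ spikes + external  # leak, then integrate inputs
    spikes = (v >= threshold).astype(float)     # fire when over threshold
    v[spikes == 1] = 0.0                        # reset the neurons that fired
```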
Ultimately, this is a very interesting development, both for the obvious reasons (an artificial cat brain!) and because of its associated "Blue Matter" project, which uses supercomputers and magnetic resonance to non-invasively map out brain structures and connections.
The cortical sim is intended, in large part, to serve as a test-bed for the maps gleaned by the Blue Matter analysis. The combination could mean taking a reading of a brain and running the shadow mind in a box.
Labels:
A.I.,
artificial intelligence,
cats
Wednesday, March 10, 2010
Android Phone powered robot
Some clever California hackers, Tim Heath and Ryan Hickman, are building bots that harness Android phones for their robo-brainpower.
Their first creation, the TruckBot, uses an HTC G1 as its brain and has a chassis that they made for $30 in parts. It's not too advanced yet—it can use the phone's compass to head in a particular direction—but they're working on incorporating the bot more fully with the phone and the Android software.
Some ideas they're looking to build in soon are facial and voice recognition and location awareness.
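The compass-steering behaviour described above reduces to turning until the phone's heading matches a desired bearing, then driving forward. The sketch below is a hypothetical robot-side loop with invented function names and tolerances; the real Cellbot code talks to the Android sensor APIs and the motor hardware.

```python
# Hypothetical compass-heading loop for a TruckBot-style robot: pivot until
# the phone's compass heading matches the target bearing, then drive.
# Function names and tolerances are invented for illustration.

def heading_error(target_deg, current_deg):
    """Smallest signed angle (degrees) from the current heading to the target."""
    return (target_deg - current_deg + 180) % 360 - 180

def drive_towards(target_deg, read_compass, turn, forward, tolerance=10):
    error = heading_error(target_deg, read_compass())
    while abs(error) > tolerance:
        turn(direction=1 if error > 0 else -1)   # pivot toward the target bearing
        error = heading_error(target_deg, read_compass())
    forward()                                    # heading is close enough; drive

print(heading_error(350, 10))   # -20: turn left 20 degrees
```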
If you're interested in putting together a Cellbot of your own, the team's development blog has some more information.