Saturday, August 29, 2009

Housekeeper Robots

... from our friends in Japan, of course. Where else?

Friday, August 28, 2009

Lobsters teach robots magnetic mapping trick

SPINY lobsters have become the unlikely inspiration for a robot with a unique sense of direction. Like the lobster, it uses a map of local variations in the Earth's magnetic field to find its way around - a method that could give domestic robots low-cost navigational capabilities.

In 2003, computer scientist Janne Haverinen read in Nature (vol 421, p 60) about the amazing direction-finding ability of the Caribbean spiny lobster Panulirus argus. A team from the University of North Carolina, Chapel Hill, had moved the critters up to 37 kilometres from where they were caught and deprived them of orientational cues, but found they always set off in the right direction home. They concluded P. argus must navigate with an inbuilt map of local anomalies in the Earth's magnetic field.

"My first inspiration came from birds, ants and bees," says Haverinen. "But the spiny lobster clinched it for me."

The findings set Haverinen, who works in the intelligent systems lab at the University of Oulu, Finland, wondering if he could draw magnetic maps of buildings for domestic and factory robots. It is well known that compasses are sent haywire by the metal in buildings - plumbing, electrical wiring and the steel rods in reinforced concrete, for instance - and cannot find magnetic north. Haverinen's idea was that these distortions of the Earth's magnetic field might create a distinctive magnetic topography.

"So we decided to try to use this 'magnetic landscape' - the array of disturbances - that was upsetting the compass as a map for a robot," says Haverinen.

The team used a magnetometer to scan the magnetic field strength close to the floor in their lab (see picture) and in a 180-metre corridor in a local hospital. They then stored the field variations in the memory of a small wheeled robot and mounted a magnetometer on a rod projecting in front of it to prevent interference from its motors.

The robot was able to work out where it was and to drive along the corridor without a vision system. What's more, the magnetic map stayed true a year after the first mapping was done, Haverinen reports in Robotics and Autonomous Systems (DOI: 10.1016/j.robot.2009.07.018).

"So there just might be enough stable information for robots to work out where they are in the ambient magnetic field," he says. That would obviate the need for expensive "indoor GPS" systems in which triangulation between fixed radio beacons in a building tells robots their position.
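The core of the approach can be sketched in a few lines. The following is a toy illustration, not Haverinen's actual code, and the map values are invented: the robot stores a one-dimensional profile of field strength along the corridor, then localises itself by finding the position whose stored values best match its recent magnetometer readings, in the least-squares sense.

```python
# Toy sketch of magnetic-map localisation: slide the window of recent
# readings along the stored map and pick the offset with minimum
# sum-of-squared error.

def localize(magnetic_map, readings):
    """Return the map index where `readings` best matches the stored map."""
    best_index, best_error = 0, float("inf")
    window = len(readings)
    for start in range(len(magnetic_map) - window + 1):
        error = sum(
            (magnetic_map[start + i] - readings[i]) ** 2 for i in range(window)
        )
        if error < best_error:
            best_index, best_error = start, error
    return best_index

# A fictitious map of field-strength anomalies (microtesla) along a corridor.
corridor_map = [50.1, 48.7, 52.3, 55.0, 49.2, 47.8, 51.5, 53.9, 50.4]
# The robot measures three consecutive values; they match positions 3..5.
print(localize(corridor_map, [55.0, 49.2, 47.8]))  # -> 3
```

A real system would have to cope with sensor noise and two-dimensional motion, but the matching principle is the same.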

"Reliance on any one guidance method is not a great idea in case it fails," warns Chris Melhuish, director of the Bristol Robotics Laboratory in the UK. "But you could use a system like this, if it's proven to work, to boost your confidence in a robot by using it in conjunction with, say, vision-based navigation."

Saturday, August 22, 2009

Real-Life Decepticons: Robots Learn to Cheat

The robots — soccer ball-sized assemblages of wheels, sensors and flashing light signals, coordinated by a digital neural network — were placed by their designers in an arena, with paper discs signifying “food” and “poison” at opposite ends. Finding and staying beside the food earned the robots points.

At first, the robots moved and emitted light randomly. But their innocence didn’t last. After each iteration of the trial, researchers picked the most successful robots, copied their digital brains and used them to program a new robot generation, with a dash of random change thrown in for mutation.

Soon the robots learned to follow the signals of others who’d gathered at the food. But there wasn’t enough space for all of them to feed, and the robots bumped and jostled for position. As before, only a few made it through the bottleneck of selection. And before long, they’d evolved to mute their signals, thus concealing their location.
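The select-copy-mutate cycle the researchers describe can be sketched as follows. This is a minimal illustration, not the actual experiment: the "genomes" here are lists of weights, and the `fitness` function is a hypothetical stand-in for the food-and-poison scoring used with the real robots.

```python
import random

# Minimal sketch of the select-copy-mutate cycle described above.

def fitness(genome):
    # Hypothetical score: reward genomes whose weights sum close to 1.0.
    return -abs(sum(genome) - 1.0)

def next_generation(population, keep=2, mutation_rate=0.1):
    """Keep the fittest individuals; refill by copying them with mutation."""
    survivors = sorted(population, key=fitness, reverse=True)[:keep]
    children = []
    while len(survivors) + len(children) < len(population):
        parent = random.choice(survivors)
        # "A dash of random change thrown in for mutation."
        child = [w + random.gauss(0, mutation_rate) for w in parent]
        children.append(child)
    return survivors + children

random.seed(0)  # reproducible run
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
for _ in range(50):
    population = next_generation(population)

best = max(population, key=fitness)
print(round(sum(best), 2))  # drifts toward 1.0 over the generations
```

Because the fittest genomes are copied unchanged, the best score can only improve from one generation to the next, which is why such simple loops reliably climb toward whatever the scoring function rewards.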

Signaling in the experiment never ceased completely. An equilibrium was reached in the evolution of robot communication, with light-flashing mostly subdued but still used, and different patterns still emerging. The researchers say their system’s dynamics are a simple analogue of those found in nature, where some species, such as moths, have evolved to use a biologist-baffling array of different signaling strategies.

“Evolutionary robotic systems implicitly encompass many behavioral components … thus allowing for an unbiased investigation of the factors driving signal evolution,” the researchers wrote Monday in the Proceedings of the National Academy of Sciences. “The great degree of realism provided by evolutionary robotic systems thus provides a powerful tool for studies that cannot readily be performed with real organisms.”

Of course, it might not be long before robots directed towards self-preservation and possessing brains modeled after — if not containing — biological components are considered real organisms.

Monday, August 17, 2009

Robot tourguide in Taiwan

The MSI-produced robot "Rich" demonstrates giving a tour, walking down a garden trail in the Grand Hills apartment showroom of the Far Glory property company in Linkou, Taipei County, Taiwan.

Tuesday, August 11, 2009

Robots to get their own operating system

THE UBot whizzes around a carpeted conference room on its Segway-like wheels, holding aloft a yellow balloon. It hands the balloon to a three-fingered robotic arm named WAM, which gingerly accepts the gift.

Cameras click. "It blows my mind to see robots collaborating like this," says William Townsend, CEO of Barrett Technology, which developed WAM.

The robots were just two of the multitude on display last month at the International Joint Conference on Artificial Intelligence (IJCAI) in Pasadena, California. But this happy meeting of robotic beings hides a serious problem: while the robots might be collaborating, those making them are not. Each robot is individually manufactured to meet a specific need and more than likely built in isolation.

This sorry state of affairs is set to change. Roboticists have begun to think about what robots have in common and what aspects of their construction can be standardised, hopefully resulting in a basic operating system everyone can use. This would let roboticists focus their attention on taking the technology forward.

One of the main sticking points is that robots are typically quite unlike one another. "It's easier to build everything from the ground up right now because each team's requirements are so different," says Anne-Marie Bourcier of Aldebaran Robotics in Paris, France, which makes a half-metre-tall humanoid called Nao (pictured).

Some robots, like Nao, are almost autonomous. Others, like the UBot, are semi-autonomous, meaning they perform some acts, such as balancing, on their own, while other tasks, like steering, are left to a human operator.

Also, every research robot is designed for a specific objective. The UBot's key ability is that it can balance itself, even when bumped - crucial if robots are to one day work alongside clumsy human beings. The Nao, on the other hand, can walk and even perform a kung-fu routine, as long as it is on a flat, smooth surface. But it can't balance itself as robustly as the UBot and won't easily be able to learn how.

On top of all this, each robot has its own unique hardware and software, so capabilities like balance implemented on one robot cannot easily be transferred to others.

Bourcier sees this changing if robotics advances in a manner similar to personal computing. For computers, the widespread adoption of Microsoft's Disk Operating System (DOS), and later Windows, allowed programmers without detailed knowledge of the underlying hardware and file systems to build new applications and build on the work of others.

Programmers could build new applications without detailed knowledge of the underlying hardware

Bringing robotics to this point won't be easy, though. "Robotics is at the stage where personal computing was about 30 years ago," says Chad Jenkins of Brown University in Providence, Rhode Island. Like the home-brew computers of the late 70s and early 80s, robots used for research today often have a unique operating system (OS). "But at some point we have to come together to use the same resources," says Jenkins.

This desire has its roots in frustration, says Brian Gerkey of the robotics research firm Willow Garage in Menlo Park, California. "People reinvent the wheel over and over and over, doing things that are not at all central to what they're trying to do."

For example, if someone is studying object recognition, they want to design better object-recognition algorithms, not write code to control the robot's wheels. "You know that those things have been done before, probably better," says Gerkey. But without a common OS, sharing code is nearly impossible.

The challenge of building a robot OS for widespread adoption is greater than that for computers. "The problems that a computer solves are fairly well defined. There is a very clear mathematical notion of computation," says Gerkey. "There's not the same kind of clear abstraction about interacting with the physical world."

Nevertheless, roboticists are starting to make some headway. The Robot Operating System, or ROS, is an open-source set of programs meant to serve as a common platform for a wide range of robotics research. It is being developed and used by teams at Stanford University in California, the Massachusetts Institute of Technology and the Technical University of Munich, Germany, among others.

ROS has software commands that, for instance, provide ways of controlling a robot's navigation, and its arms, grippers and sensors, without needing details of how the hardware functions. The system also includes high-level commands for actions like image recognition and even opening doors. When ROS boots up on a robot's computer, it asks for a description of the robot that includes things like the length of its arm segments and how the joints rotate. It then makes this information available to the higher-level algorithms.
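The idea of separating high-level code from hardware details can be illustrated with a toy sketch. This is not the actual ROS API (real ROS describes robots in URDF files and communicates over nodes and topics); it simply shows higher-level code querying a generic robot description, such as arm-segment lengths, instead of hard-coding geometry.

```python
import math

# Toy illustration of a hardware-independent robot description.  The robot
# name and segment lengths below are invented for the example.

robot_description = {
    "name": "example_arm",            # hypothetical two-segment planar arm
    "segment_lengths": [0.30, 0.25],  # metres, supplied by the robot's maker
}

def reach(description):
    """Maximum horizontal reach: the arm fully extended."""
    return sum(description["segment_lengths"])

def end_effector(description, joint_angles):
    """Planar forward kinematics from segment lengths and joint angles."""
    x = y = 0.0
    total_angle = 0.0
    for length, angle in zip(description["segment_lengths"], joint_angles):
        total_angle += angle
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
    return x, y

print(round(reach(robot_description), 2))        # -> 0.55
x, y = end_effector(robot_description, [0.0, math.pi / 2])
print(round(x, 2), round(y, 2))                  # -> 0.3 0.25
```

The same `end_effector` routine works for any arm whose description is supplied at start-up, which is the point: the algorithm is written once, and the hardware details arrive as data.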

A standard OS would also help researchers focus on a key aspect that so far has been lacking in robotics: reproducibility.

Often, if a team invents, say, a better navigation system, they will publish the results but not the software code. Not only are others unable to build on this discovery, they cannot independently verify the result. "It's useful to have people in a sense constrained by a common platform," says Giorgio Metta, a robotics researcher at the Italian Institute of Technology in Genoa. "They [will be] forced to do things that work, because somebody else can check. I think this is important, to make it a bit more scientifically oriented."

ROS is not the only robotic operating system vying to be the standard. Microsoft, for example, is trying to create a "Windows for robots" with its Robotics Developer Studio, a product that has been available since 2007.

Gerkey hopes to one day see a robot "app store" where a person could download a program for their robot and have it work as easily as an iPhone app. "That will mean that we have solved a lot of difficult problems," he says.

Monday, August 10, 2009

Boffins work on world's first synthetic brain

LONDON: The world's first synthetic brain could be built within 10 years, giving us an unprecedented insight into the nature of consciousness and our perception of reality.

Scientists working on the Blue Brain Project in Switzerland are the first to attempt to "reverse-engineer" the mammalian brain by recreating the behaviour of billions of neurons on a computer.

Professor Henry Markram, director of the project at the Ecole Polytechnique Fédérale de Lausanne, has already simulated parts of the neocortex, the most "modern" region of the brain, which evolved rapidly in mammals to cope with the demands of parenthood and social situations.

Professor Markram's team created a 3D simulation of about 10,000 brain cells to mimic the behaviour of the rat neocortex. The way all the cells connect and send signals to each other is just as important as how many there are.

"You need one laptop to do all the calculations for one neuron, so you need 10,000 laptops," Professor Markram told the TEDGlobal conference in Oxford yesterday. Instead, he uses an IBM Blue Gene supercomputer.
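To get a feel for why each neuron demands so much computation, here is a drastically simplified single-neuron model, a leaky integrate-and-fire unit. It is far cruder than the detailed compartmental neurons Blue Brain simulates, but even this toy version must step the membrane state through time, and the cost multiplies across thousands of cells.

```python
# A leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest while integrating input current, and fires (then resets) at a
# threshold.  Parameters here are arbitrary illustrative values.

def simulate_lif(input_current, steps, dt=1.0, tau=10.0,
                 threshold=1.0, reset=0.0):
    """Return the time steps at which the neuron spikes."""
    v = reset
    spikes = []
    for step in range(steps):
        # Euler step: leak toward rest plus the injected current.
        v += dt * (-v / tau + input_current)
        if v >= threshold:
            spikes.append(step)
            v = reset
    return spikes

# A steady input produces regularly spaced spikes.
print(simulate_lif(input_current=0.15, steps=50))  # -> [10, 21, 32, 43]
```

Blue Brain's neurons track many interacting ion channels rather than one voltage variable, which is why a single biological neuron can occupy an entire processor.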

The artificial brain is already revealing some of the inner workings of the most impressive 1.5 kilograms of biological tissue ever to evolve. Show the brain a virtual image and its neurons flicker with electrical activity as the image is processed.

Ultimately, scientists want to use synthetic brains to understand how sensory information from the real world is interpreted and stored, and how consciousness arises.

Sunday, August 9, 2009

Artificial intelligence technology could soon make the internet an even bigger haven for bargain-hunters

Software "agents" that automatically negotiate on behalf of shoppers and sellers are about to be set free on the web for the first time.

The "Negotiation Ninjas", as they are known, will be trialled on a shopping website called Aroxo in the autumn.

The intelligent traders are the culmination of 20 years' work by scientists at Southampton University.

"Computer agents don't get bored, they have a lot of time, and they don't get embarrassed," Professor Nick Jennings, one of the researchers behind the work, told BBC News.

"I have always thought that in an internet environment, negotiation is the way to go."

Price fixing

The agents use a series of simple rules - known as heuristics - to find the optimal price for both buyer and seller based on information provided by both parties.

Heuristics are commonly used in computer science to find a good solution to a problem when there is no single "right answer".

They are often used in anti-virus software to trawl for new threats.

"If you can't analyse mathematically exactly what you should do, which you can't in general for these sorts of systems, then you end up with heuristics," explained Professor Jennings.

"We use heuristics to determine what price we should offer during the negotiation - and also how we might deal with multiple negotiations at the same time.

"We have to factor in some degrees of uncertainty as well - the chances are that sellers will enter into more negotiations than they have stock."

To use one of the intelligent agents, sellers must answer a series of questions about how much of a discount they are prepared to offer and whether they are prepared to go lower after a certain number of sales, or at a certain time of day.

They are also asked how eager they are to make a sale.

At the other end, the buyer types in the item they wish to purchase and the price they are willing to pay for it.

The agents then act as an intermediary, scouring the lists for sellers whose agents are programmed to accept a price in the region of the one offered.

If they find a match, the seller is prompted to automatically reply with a personalised offer.

The buyer then has a choice to accept, reject or negotiate. If they choose to negotiate, the agent analyses the seller's criteria to see if they can make a better offer.

The process continues until either there is a sale or one of the parties pulls out.
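A concession heuristic of the general kind described might be sketched as follows. This is purely illustrative; the parameters and the split-the-difference settlement rule are assumptions, not Aroxo's algorithm. Each round, both sides concede a fraction of the distance to their limit, set by their eagerness, until the positions cross or the rounds run out.

```python
# Hypothetical concession heuristic: each round the buyer raises its offer
# and the seller lowers its price by an eagerness-weighted fraction of the
# distance to its limit; a deal is struck when the positions cross.

def negotiate(buyer_offer, buyer_limit, seller_price, seller_floor,
              buyer_eagerness=0.2, seller_eagerness=0.2, max_rounds=20):
    """Return the agreed price, or None if negotiation fails."""
    for _ in range(max_rounds):
        if buyer_offer >= seller_price:
            # Positions have crossed: split the difference.
            return round((buyer_offer + seller_price) / 2, 2)
        buyer_offer += buyer_eagerness * (buyer_limit - buyer_offer)
        seller_price -= seller_eagerness * (seller_price - seller_floor)
    return None

# Buyer opens at 80 (will pay up to 100); seller asks 120 (will accept 90).
print(negotiate(80, 100, 120, 90))   # -> 95.84
# If the limits never overlap, the negotiation simply fails.
print(negotiate(80, 85, 120, 110))   # -> None
```

Raising either side's eagerness makes that party concede faster, shifting the settlement price in the other party's favour, which is exactly the kind of trade-off the sellers' questionnaire is meant to capture.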

Aroxo will be trialling the Negotiation Ninjas from the autumn, and plans to have the system fully operational in time for Christmas shopping this year.

The site currently offers mainly electrical goods.

While sellers will not have to pay to use the Ninjas themselves, they will pay to contact a buyer. The charge from Aroxo is 0.3% of the buyer's original asking price.

For Professor Jennings, this application of his research marks a return to a more traditional retail model.

"Fixed pricing is a relatively recent phenomenon," he said. "Throughout history most transactions have been negotiated. Only in the last 100 years have we gone for fixed pricing."

Sunday, August 2, 2009

Will artificial intelligence invade Second Life?

Popular culture is filled with different notions of what artificial intelligence should or will be like. There's the all-powerful Skynet from the "Terminator" movies, "Star Wars"-style androids, HAL from "2001: A Space Odyssey," the classic sentient computer program, carrying on a witty conversation through a computer terminal. Soon, we may have to add another to the list.

In September 2007, a software company called Novamente, along with the Electric Sheep Company, a producer of add-ons for virtual worlds, announced plans to release artificial intelligences (AIs) into virtual worlds like the ultra-popular "Second Life."

Novamente's "intelligent virtual agents" would use online games and virtual worlds as a development zone, where they would grow, learn and develop by interacting with humans. The company said it would start by creating virtual pets that become smarter as they interact with their (human-controlled) avatar owners. (An avatar is the character or virtual representation of a player in a virtual world.) More complex artificially controlled animals and avatars were expected to follow.

Novamente's artificial intelligence is powered by a piece of software called a "Cognition Engine." Pets and avatars powered by the Cognition Engine will feature a mix of automated behaviors and learning and problem-solving capabilities. Ben Goertzel, the CEO of Novamente, said that his company had already created a "fully functioning animal brain."

Goertzel envisioned Novamente's first artificial intelligences as dogs and monkeys, initially going on sale at your local virtual pet shop in October 2007.

These virtual pets will work much like real pets -- trainable, occasionally misbehaving, showing the ability to learn and perform tasks, and responding positively to rewards. After dogs and monkeys, Novamente would then move on to more complex creatures, such as parrots that, like their real-life counterparts, could learn to speak.

Finally, the company expects to produce virtual human babies that, propelled by their own artificial intelligence, would grow, develop and learn in the virtual world.

While we frequently see or read about robots with interesting capabilities, scientists have struggled for decades to create anything approaching a genuine artificial intelligence. A robot may be an expert at one skill, say shooting a basketball, but numerous basic tasks, such as walking down stairs, may be beyond its capabilities. This is where a virtual world has its advantages, Goertzel says.


Advantages of Artificial Intelligence in Virtual Worlds

While we already deal with some virtual AI -- notably in action games against computer-controlled "bots" or challenging a computer opponent to chess -- the work of Novamente, Electric Sheep Company and other firms has the potential to initiate a new age of virtual AI, one where, for better or worse, humans and artificial intelligences could potentially be indistinguishable.

If you think about it, we take in numerous pieces of information just walking down the street, much of it unconsciously. You might be thinking about the weather, the pace of your steps, where to step next, the movement of other people, smells, sounds, the distance to the destination, the effect of the environment around you and so forth.

An artificial intelligence in a virtual world has fewer of these variables to deal with because as of yet, no virtual world approaches the complexity of the real world. It may be that by simplifying the world in which the artificial intelligence operates (and by working in a self-contained world), some breakthroughs can be achieved.

Such a process would allow for a more linear development of artificial intelligence rather than an attempt to jump immediately to lifelike robots capable of learning, reasoning and self-analysis.

Goertzel states that a virtual world also offers the advantage of allowing a newly formed artificial intelligence to interact with thousands of people and characters, increasing learning opportunities. The virtual body is also easier to manage and control than that of a robot.

If an AI-controlled parrot seems to have particular challenges in a game world, it's less difficult for programmers to create another virtual animal than if they were working with a robot. And while a virtual world AI lacks a physical body, it displays more complexity (and more realism) than a simple AI that merely carries on text-based conversations with a human.

Novamente claims that its system is the first to allow artificial intelligences to progress through a process of self-analysis and learning. The company hopes that its AI will also distinguish itself from other attempts at AI by surprising its creators in its capabilities -- for example, by learning a skill or task that it wasn't programmed to perform.

Novamente has already created what it terms an "artificial baby" in the AGISim virtual world. This artificial baby has learned to perform some basic functions.

Despite all of this excitement, the AIs discussed here are far from what's envisioned in "Terminator." It will be some time before AIs are seamlessly interacting with players, impressing us with their cleverness and autonomy and seeming all too human.

Even Philip Rosedale, the founder of Linden Lab, the company behind "Second Life," has warned against becoming caught up in the hype of the supposedly groundbreaking potential of these virtual worlds.

But "Second Life" and other virtual worlds may prove to be the most valuable testing grounds to date for AI. It will also be interesting to track how virtual artificial intelligences progress as the virtual worlds they occupy change and become more complex.

Besides acting as an incubator for artificial intelligence, "Second Life" has already been an important case study in the development of cyber law and the economics and legality of hawking virtual goods for real dollars.

The popular virtual world has even been mentioned as a possible virtual training facility for children taking emergency preparedness classes.

Scientists secretly fear AI robot-machines may soon outsmart men

A robot that can open doors. Computer viruses that no one can stop.

Advances in the scientific world promise many benefits, but scientists are secretly panicking over the thought that artificially intelligent machines could outsmart humans.

At a conference held in Monterey Bay, California, leading experts warned that mankind might not be able to control computer-based systems that carry out a growing share of society's workload, The Times reports.

“These are powerful technologies that could be used in good ways or scary ways,” warned Eric Horvitz, principal researcher at Microsoft who organised the conference on behalf of the Association for the Advancement of Artificial Intelligence.

Alan Winfield, a professor at the University of the West of England, believes that boffins spend too much time developing artificial intelligence and too little on robot safety.

“We’re rapidly approaching the time when new robots should undergo tests, similar to ethical and clinical trials for new drugs, before they can be introduced,” he said.

The scientists who presented their findings at the International Joint Conference for Artificial Intelligence in Pasadena, California, last month fear that nightmare scenarios, which have until now been limited to science fiction films, such as the Terminator series, The Matrix, 2001: A Space Odyssey and Minority Report, could come true.

A more realistic short-term concern is the possibility of malware that can mimic the digital behavior of humans.

According to the panel, identity thieves might feasibly plant a virus on a person’s smartphone that would silently monitor their text messages, email, voice, diary and bank details. The virus could then use these to impersonate that individual with little or no external guidance from the thieves.

Saturday, August 1, 2009