
Thursday, February 10, 2011

Robots to get their own internet




Robots could soon have an equivalent of the internet and Wikipedia.

European scientists have embarked on a project to let robots share and store what they discover about the world.

Called RoboEarth, it will be a place where robots can upload data when they master a task, and ask for help in carrying out new ones.

Researchers behind it hope it will allow robots to come into service more quickly, armed with a growing library of knowledge about their human masters.


The idea behind RoboEarth is to develop methods that help robots encode, exchange and re-use knowledge, said RoboEarth researcher Dr Markus Waibel from the Swiss Federal Institute of Technology in Zurich.

"Most current robots see the world their own way and there's very little standardisation going on," he said. Most researchers using robots typically develop their own way for that machine to build up a corpus of data about the world.

This, said Dr Waibel, made it very difficult for roboticists to share knowledge or for the field to advance rapidly because everyone started off solving the same problems.

By contrast, RoboEarth hopes to start showing how the information that robots discover about the world can be defined so any other robot can find it and use it.

RoboEarth will be a communication system and a database, he said.

In the database will be maps of places that robots work, descriptions of objects they encounter and instructions for how to complete distinct actions.

The human equivalent would be Wikipedia, said Dr Waibel.

"Wikipedia is something that humans use to share knowledge, that everyone can edit, contribute knowledge to and access," he said. "Something like that does not exist for robots."

It would be great, he said, if a robot could enter a location that it had never visited before, consult RoboEarth to learn about that place and the objects and tasks in it and then quickly get to work.
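To make the idea concrete, here is a minimal sketch of how such a shared store might be organised, with maps, object descriptions and task recipes keyed by location. The class and field names are invented for illustration and are not RoboEarth's actual schema.

from dataclasses import dataclass, field

@dataclass
class TaskRecipe:
    name: str                 # e.g. "set_table"
    required_objects: list    # object labels the task needs
    action_steps: list        # ordered, robot-agnostic action descriptions

@dataclass
class LocationEntry:
    location_id: str
    map_data: dict                                 # landmark or occupancy map
    objects: dict = field(default_factory=dict)    # label -> pose/description
    tasks: dict = field(default_factory=dict)      # task name -> TaskRecipe

class KnowledgeBase:
    """Toy stand-in for the shared database: robots upload what they learn
    and download what others have already mapped or mastered."""
    def __init__(self):
        self._locations = {}

    def upload(self, entry):
        self._locations[entry.location_id] = entry

    def lookup(self, location_id):
        return self._locations.get(location_id)

# A robot entering an unfamiliar kitchen could then ask what is already known:
kb = KnowledgeBase()
kb.upload(LocationEntry(
    "kitchen_42",
    map_data={"landmarks": ["sink", "table"]},
    objects={"mug": {"on": "shelf"}},
    tasks={"set_table": TaskRecipe("set_table", ["plate", "fork"],
                                   ["fetch plate", "place plate on table"])}))
entry = kb.lookup("kitchen_42")
if entry:
    print(entry.tasks["set_table"].action_steps)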




While other projects are working on standardising the way robots sense the world and encode the information they find, RoboEarth tries to go further.

"The key is allowing robots to share knowledge," said Dr Waibel. "That's really new."

RoboEarth is likely to become a tool for the growing number of service and domestic robots that many expect to become a feature in homes in coming decades.

Dr Waibel said it would be a place that would teach robots about the objects that fill the human world and their relationships to each other.

For instance, he said, RoboEarth could help a robot understand what is meant when it is asked to set the table and what objects are required for that task to be completed.

The EU-funded project has about 35 researchers working on it and hopes to demonstrate how the system might work by the end of its four-year duration.

Early work has resulted in a way to download descriptions of tasks that are then executed by a robot. Improved maps of locations can also be uploaded.

A system such as RoboEarth was going to be essential, said Dr Waibel, if robots were going to become truly useful to humans.


Friday, January 28, 2011

Robots learn from rats' brains



Queensland engineers have translated biological findings into probabilistic algorithms that could direct robots through complicated human environments.

While many of today's machines relied on expensive sensors and systems, the researchers hoped their software would offer a cheap way to improve domestic robots.

Roboticist Michael Milford worked with neuroscientists to develop algorithms that mimicked three navigational systems in rats' brains: place cells; head direction cells; and grid cells.

In an article published in PLoS Computational Biology this week, he described simulating grid cells - recently discovered brain cells that helped rats contextually determine their location.

To explain the function of grid cells, Milford described getting out of a lift on an unknown floor, and deducing his location based on visual cues like vending machines and photocopiers.

"We take it for granted that we find our way to work ... [but] the problem is extremely challenging," said the Queensland University of Technology researcher.

"Robots are able to navigate to a certain point, but they just get confused and lost in an office building," he told iTnews.

The so-called RatSLAM software was installed in a 20kg Pioneer 2DXe robot with a forward-facing camera that was capable of detecting visual cues, their relative bearing and distance.

The robot was placed in a maze similar to those used in experiments with rats, with random goal locations simulating the randomly scattered pieces of food a rat would seek out.

It calibrated itself using visual cues, performing up to 14 iterations per second to determine its location when placed in one of four initial starting positions.
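For readers curious what this kind of probabilistic, cue-driven localisation looks like in code, here is a minimal particle-filter-style sketch. It is not the RatSLAM software itself, which models place, head-direction and grid cells; the landmark map and numbers are invented.

# Minimal sketch of iterative localisation from visual cues, in the spirit of
# the article's description; this is NOT the RatSLAM algorithm itself.
import random, math

# Hypothetical landmark map: cue label -> (x, y) position in the arena.
LANDMARKS = {"vending_machine": (0.0, 2.0), "photocopier": (4.0, 1.0)}

def likelihood(pose, observations, sigma=0.5):
    """How well a candidate pose explains the observed cue distances."""
    x, y = pose
    p = 1.0
    for label, measured_dist in observations:
        lx, ly = LANDMARKS[label]
        expected = math.hypot(lx - x, ly - y)
        p *= math.exp(-((measured_dist - expected) ** 2) / (2 * sigma ** 2))
    return p

def localise(observations, iterations=14, n_particles=200):
    # Start with particles spread over the arena, then resample toward
    # poses that best explain the camera's cue measurements.
    particles = [(random.uniform(0, 5), random.uniform(0, 3)) for _ in range(n_particles)]
    for _ in range(iterations):
        weights = [likelihood(p, observations) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        particles = random.choices(particles, weights=weights, k=n_particles)
        # Small jitter plays the role of motion and measurement noise.
        particles = [(x + random.gauss(0, 0.05), y + random.gauss(0, 0.05))
                     for x, y in particles]
    xs, ys = zip(*particles)
    return sum(xs) / len(xs), sum(ys) / len(ys)

print(localise([("vending_machine", 2.0), ("photocopier", 3.2)]))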

Milford explained that environmental changes like lighting, shadows, moving vehicles and people made it difficult for robots to navigate in a human world.

Machines like the Mars Rovers and those competing in the DARPA Challenges tended to use expensive sensors - essentially "throwing a lot of money" at the problem, he said.

But a cheaper solution was needed to direct domestic robots, which were currently still in early stages of development and "very, very, very dumb".

"The only really successful cheap robot that has occurred so far is the [iRobot Roomba] vacuum cleaner," he said. "They don't have any idea where they are; they just move around randomly."

The grid cell project was the latest in almost seven years of Milford's research into applying biological techniques to machines.

The team had been approached "occasionally" by domestic robot manufacturers, he said, but was currently focussed on research, and not commercialisation.

Sunday, March 28, 2010

Virtual pets that can learn


"SIT," says the man. The dog tilts its head but does nothing. "Sit," the man repeats.

The dog lies down. "No!" the man admonishes.

Then, unable to get the dog to sit, the man decides to teach it by example. He sits down himself.

"I'm sitting. Try sitting," he says. The dog cocks its head attentively, folds its hind legs under its body and sits. "Good!" says the man.

No, it's not a rather bizarre way to teach your pet new tricks. It is a demonstration of a synthetic character in a virtual world being controlled by an autonomous artificial intelligence (AI) program, which will be released to inhabitants of virtual worlds like Second Life later this year.

Novamente, a company in Washington DC which built the AI program that controls the dog, says that the demonstration is a foretaste not just of future virtual pets but of computer games to come. Their work, along with similar programs from other researchers, was presented at the First Conference on Artificial General Intelligence at the University of Memphis in Tennessee earlier this month.

If first impressions are anything to go by, synthetic pets like Novamente's dog will be a far cry from today's virtual pets, such as Neopets and Nintendogs, which can only perform pre-programmed moves, such as catching a disc. "The problem with current virtual pets is they are rigidly programmed and lack emotions, responsiveness, individual personality or the ability to learn," says Ben Goertzel of Novamente. "They are pretty much all morons."

In contrast, Goertzel claims that synthetic characters like his dog can be taught almost anything, even things that their programmers never imagined.

For instance, owners could train their pets to help win battles in adventure games such as World of Warcraft, says Sibley Verbek of the Electric Sheep Company in New York City, which helped Novamente create the virtual pets. "It is a system that allows the user to teach the virtual character anything they want to," he says.


So how do these autonomous programs work? Take Novamente's virtual pet, which is expected to be the first to hit the market. One way that the pets learn is by being taught specific tasks by human-controlled avatars, similar to the way babies are taught by their parents.

To do this, the humans must directly tell the pet - via Second Life's instant messaging typing interface - that they are about to teach it a task. When the pet receives a specific command, such as "I am going to teach you to sit", it works out that it is about to learn something new called "sit". It then watches the human avatar and starts to copy some of the things the teacher does.

At first it doesn't know which aspects of the task are important. This can lead to mistakes: the dog lying down instead of sitting, for example. But it soon figures out the correct behaviour by trying the task several times in a variety of ways. The key learning tool is that the pets are pre-programmed to seek praise from their owners, so they can make increasingly intelligent guesses about what they should copy, repeating adjustments that seem to make the human avatar more likely to say "good dog", and avoiding those that elicit the response "bad dog". Eventually, the pet figures out how to sit.
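A toy version of that praise-driven loop might look like the sketch below; the action set and the owner's feedback function are invented stand-ins, not Novamente's code.

# Toy sketch of praise-driven imitation as described above; the action set and
# the owner's feedback are invented for illustration.
import random

ACTIONS = ["sit", "lie_down", "stand", "spin"]

def owner_feedback(action, target="sit"):
    return 1 if action == target else -1   # "good dog" vs "bad dog"

def learn_by_trial(trials=30):
    # Start with equal preference for every candidate behaviour, then
    # reinforce whichever variations earn praise.
    preference = {a: 1.0 for a in ACTIONS}
    for _ in range(trials):
        total = sum(preference.values())
        action = random.choices(ACTIONS,
                                weights=[preference[a] / total for a in ACTIONS])[0]
        reward = owner_feedback(action)
        preference[action] = max(0.1, preference[action] + 0.5 * reward)
    return max(preference, key=preference.get)

print(learn_by_trial())   # almost always converges on "sit"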


Learning by imitation isn't exactly a new idea. Robots in the real world are still being trained in this way. But it hasn't been easy. For example, a real robot needs sophisticated computer vision to recognise its teacher's legs, so that it can isolate their movement and copy it. But the great variation in the size and shape of legs, which depends on their motion and the angle of viewing, means it is hard to program a robot to recognise legs.

In Second Life, you can get round this problem. Characters don't see objects from a certain angle, nor from a particular distance; all they know is the 3D coordinates of the object, allowing them to recognise legs simply by their geometry. Once the pet can recognise legs, Goertzel then programs it to map the leg movements to the movement of its own legs. Obviously, the pet's own legs are a different size and shape, so the exact same motions wouldn't be appropriate. But the pets experiment with slightly different variations on the theme - and then settle on the set of movements that elicits the most praise from the avatar.


So far, Goertzel says he has successfully taught his dogs to play fetch, basic soccer skills such as kicking the ball, faking a shot and dribbling, and to dance a simple series of moves, just by showing them how (watch a video of the demo at www.novamente.net/puppy.mov).

Imitation isn't the only way the pets learn, however. They can also learn things humans may not have intended to teach them. As well as seeking praise, they are also programmed with other basic desires such as hunger and thirst, as well as some random movements and exploration of the virtual environment. As they explore, their "memory" records everything that happens. It then carries out statistical analyses to find combinations of sequences and actions that seem to predict fulfilment of its goals, such as appeasement of hunger, and uses that knowledge to guide its future behaviour. This can then lead to more sophisticated behaviour, such as a dog learning to touch its bowl when a human walks into the room, because that increases the chance of a goal being fulfilled. "It learns that going near the bowl is symbolic for food," says Goertzel. "This is a sort of rudimentary gestural communication."
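The statistical step could, in its simplest form, amount to counting which recorded events tend to precede goal fulfilment, as in this rough sketch; the log format and thresholds are hypothetical.

# Rough sketch of mining an experience log for actions that predict goal
# fulfilment (e.g. hunger being satisfied); the log format is hypothetical.
from collections import Counter

# Each episode: (sequence of observed events/actions, whether the goal was met)
episodes = [
    (["wander", "touch_bowl", "human_enters"], True),
    (["wander", "bark"], False),
    (["touch_bowl", "human_enters"], True),
    (["sleep", "bark"], False),
]

def predictive_actions(episodes, min_lift=1.5):
    action_counts, action_success = Counter(), Counter()
    successes = sum(1 for _, met in episodes if met)
    base_rate = successes / len(episodes)
    for events, met in episodes:
        for e in set(events):
            action_counts[e] += 1
            if met:
                action_success[e] += 1
    # Keep actions whose success rate clearly exceeds the base rate.
    return {a: action_success[a] / action_counts[a]
            for a in action_counts
            if (action_success[a] / action_counts[a]) >= min_lift * base_rate}

print(predictive_actions(episodes))   # "touch_bowl" and "human_enters" stand out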

Goertzel is aiming even higher. He says learning gestures could eventually form the basis for virtual pets to learn language, just as it does in young children. "Eventually we want to have virtual babies or talking parrots that learn to speak," he says (see "If only they could talk").


Deb Roy, an AI researcher at the Massachusetts Institute of Technology, worries that people will tire of training their virtual pets. "Philosophically I am on board. These are lovely and powerful ideas," he says, "But what are the results that show [Goertzel's team] are making progress compared to people who have tried similar things?"

Novamente has a few tricks up its sleeve to stop people from getting bored. For starters, the synthetic characters will learn quickly as more and more people use them. Although each pet has its own "brain", Novamente's servers will pool knowledge from all the brains. So once one pet has mastered one trick, it will be much easier for another one to master it, too.

Researchers at Novamente are not the only ones who hope to create compelling synthetic characters. Selmer Bringsjord, Andrew Shilliday and colleagues at Rensselaer Polytechnic Institute in Troy, New York, are working on a character called Eddie, which they hope will reason about another human's state of mind - potentially leading to characters that understand deceit and betrayal - and predict what other characters will do next.

The fusing of virtual worlds and AI will almost certainly be good for AI. Since the field failed to deliver on its initial promises of machines you can chat to, robotic assistants that do your housework and conscious machines, it has been hard to get funding to build generally intelligent programs. Instead, more specific "narrow AI", such as computer vision or chess-playing, has flourished. Novamente is planning to make its pets so much fun that people will actually pay money to interact with them. If so, the multibillion-dollar games industry could drive AI towards delivering on its original promise.

Could the fusion of games, virtual worlds and artificial intelligence take us closer to building artificial brains?


Novamente is a company that creates virtual pets equipped with artificial intelligence.
As the company moves toward this goal, it hopes the pets will learn to make common-sense assumptions the way humans do, which could eventually allow them to understand and produce natural language, for example.

One of the biggest challenges faced by researchers trying to imbue computers with natural language abilities is getting computers to resolve ambiguities. Take this sentence: "I saw the man with a telescope." There are three possible ways to interpret it. Either I was looking at a man holding a telescope, or I saw a man through my telescope, or, more morbidly, I was sawing a man with a telescope. The context would help a human figure out the real meaning, while a computer might be flummoxed.

But in an environment like Second Life, a synthetic character endowed with AI could use its immediate experience and interactions with other avatars and objects to make sense of language the way humans might. "The stuff that really excites me is to start teaching [pets] simple language," says Ben Goertzel of Novamente.

But other AI researchers doubt that virtual environments will be rich enough for synthetic characters to move towards the kind of general intelligence that is required for natural language processing. Stephen Grand, an independent researcher from Baton Rouge, Louisiana, who created the AI game Creatures in the mid-1990s, applauds the Novamente approach, but thinks there are limits to learning inside a virtual world.

"Just imagine how intelligent you would be if you were born with nothing more than the sensory information available to a Second Life inhabitant," he says. "It's like trying to paint a picture while looking through a drinking straw."

Thursday, March 25, 2010

IBM Simulates a Cat-Like Brain: AI or Shadow Minds for Humans?



Researchers at IBM's Almaden Research Center have announced that they have produced a "cortical simulation" of the scale and complexity of a cat brain.

This simulation ran on one of IBM's "Blue Gene" supercomputers, in this case at the Lawrence Livermore National Laboratory (LLNL).

This isn't a simulation of a cat brain; it's a simulation of a brain structure that has the scale and connection complexity of a cat brain.

It doesn't include the actual structures of a cat brain, nor its actual connections; the various experiments in the project filled the memory of the cortical simulation with a bunch of data, and let the system create its own signals and connections.

Put simply, it's not an artificial (feline) intelligence; it's a platform upon which an A(F)I could conceivably be built.


Scientists at IBM Research - Almaden, in collaboration with colleagues from Lawrence Berkeley National Lab, have performed the first near real-time cortical simulation of the brain that exceeds the scale of a cat cortex and contains 1 billion spiking neurons and 10 trillion individual learning synapses.
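For context, a "spiking neuron" in such simulations is typically some variant of an integrate-and-fire model. The toy leaky integrate-and-fire neuron below illustrates the basic dynamics only; IBM's cortical simulator is vastly larger and more detailed, and this is not its code.

# Toy leaky integrate-and-fire neuron, purely to illustrate what a "spiking
# neuron" is; real cortical simulators are far more elaborate.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:        # threshold crossed: emit a spike and reset
            spikes.append(t)
            v = v_reset
    return spikes

# Constant drive produces regular spiking.
print(simulate_lif([0.06] * 100))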





Ultimately, this is a very interesting development, both for the obvious reasons (an artificial cat brain!) and because of its associated "Blue Matter" project, which uses supercomputers and magnetic resonance to non-invasively map out brain structures and connections.

The cortical sim is intended, in large part, to serve as a test-bed for the maps gleaned by the Blue Matter analysis. The combination could mean taking a reading of a brain and running the shadow mind in a box.

Wednesday, November 18, 2009

Brisbane maps robotic future


Scientists in Brisbane are blurring the line between biology and technology and creating a new generation of robot "helpers" more in tune with human needs.

The University of Queensland is hosting the Australian Research Council's Thinking Systems symposium this week, which brings together UQ's robotic navigation project with the University of New South Wales' robotic hands project and a speech and cognition project out of the University of Western Sydney.

Scientists are working towards a range of robotic innovations, from the development of navigation and learning robots to the construction of artificial joints and limbs and the creation of a conversational computer program, a la 2001: A Space Odyssey's HAL.

UQ's School of Information Technology and Electrical Engineering head, Professor Janet Wiles, said the symposium paired "some very clever engineers...with very clever scientists" to map the future of robotics - and it was going to be a very different world.

"You're bringing together neuroscience, cognitive science, psychology, behaviour and robotics information system to look at the cross disciplinary projects we can do in this space," Professor Wiles said.

"We're doing a combination of the fundamental science and the translation into the technology and that's one of the great benefits of our project."

The group aims to advance robotic technology by decoding the way human and animal brains work to equip machines with the ability to operate in the real world.

"There's a strong connection to cognition - the way the brain works as a whole - and navigation, so what we've been doing is studying the fundamental of navigation in animals and taking the algorithms we've learnt from those and putting them into robots," Professor Wiles said.

Over the next two decades, she sees robots becoming more and more important, expanding from their current roles as cleaners, assemblers and drones and into smarter machines more closely integrated with human beings in the form of replacement limbs and joints.

"It's not going to be the robots and us. Already a lot of people are incorporating robot components; people who have had a leg amputated who now have a knee and in the knee. It is effectively a fully-articulated robotic knee [with] a lot of the spring in the step that a natural knee has," Professor Wiles said.

"The ability of robots to replace component parts is an area which is going to be growing.

"This is where you're going to blur the line between technology and biology when you start to interface these two fields."

At UQ, the team is working on developing computer codes or algorithms that would enable a robot to "learn" rapidly about its near environment and navigate within it.

"Navigation is quite an intriguing skill because it is so intrinsic to what we do and we are really not aware of it unless we have a poor sense of navigation," Professor Wiles said.

"The kind of navigation we are dealing with is how you get from one place to another, right across town or from one room in a building to another you can't see."

With about four million robots in households right now, performing menial chores such as vacuuming the carpet, improvements in navigation have the potential to widen the scope of these creations and let them take a larger place in everyday life.

According to Professor Wiles, the ability to rapidly process information and apply it to the area they are working in will give robots the edge into the future.

"Robots need to learn new environments very rapidly and that's what a lot of our work focuses on.

"When you take a robot out of the box you don't want to program into it with the architecture of your house, you want the robot to explore the house and learn it very quickly," Professor Wiles said.

"Household robotics is going to be really big in the next 15 years or so and this is one of the things you need is for robots to be able to look after themselves in space."

But as Australian universities and international research institutes look into replicating the individual parts of biological creatures and mimicking them in machines, the question of intelligence inevitably becomes more important.

While the sometimes frightening scenarios played out in science fiction novels and films - where so often robots lay waste to humanity - remain securely in the realm of fantasy, Professor Wiles believes that some day machines will think like us.

"There's strong AI [artificial intelligence] and weak AI. Strong AI says there will be artificially intelligent creatures which are not biological. Weak AI says they will have a lot of the algorithms and they do already have a lot of those algorithms," she said.

"The bee, whose brain is a tiny as a sesame seed, already has better navigation abilities than even our best robots.

"So we have a little way to go before robots reach biological intelligence let alone human intelligence but I don't see why we shouldn't take steps towards it."

Saturday, August 22, 2009

Real-Life Decepticons: Robots Learn to Cheat


The robots — soccer ball-sized assemblages of wheels, sensors and flashing light signals, coordinated by a digital neural network — were placed by their designers in an arena, with paper discs signifying “food” and “poison” at opposite ends. Finding and staying beside the food earned the robots points.

At first, the robots moved and emitted light randomly. But their innocence didn’t last. After each iteration of the trial, researchers picked the most successful robots, copied their digital brains and used them to program a new robot generation, with a dash of random change thrown in for mutation.
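In pseudocode terms, that select-copy-mutate cycle looks something like the sketch below; the "brain" here is just a weight vector and the fitness function is a stand-in for points earned beside the food.

# Bare-bones sketch of the select/copy/mutate loop described above.
import random

def random_brain(n_weights=10):
    return [random.uniform(-1, 1) for _ in range(n_weights)]

def fitness(brain):
    # Placeholder: in the real experiment this is points earned near the food.
    return -sum((w - 0.5) ** 2 for w in brain)

def evolve(pop_size=20, generations=50, elite=5, mutation=0.1):
    population = [random_brain() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]                    # most successful robots
        population = [
            [w + random.gauss(0, mutation) for w in random.choice(parents)]
            for _ in range(pop_size)                    # copy brains + mutation
        ]
    return max(population, key=fitness)

best = evolve()
print(round(fitness(best), 3))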

Soon the robots learned to follow the signals of others who’d gathered at the food. But there wasn’t enough space for all of them to feed, and the robots bumped and jostled for position. As before, only a few made it through the bottleneck of selection. And before long, they’d evolved to mute their signals, thus concealing their location.

Signaling in the experiment never ceased completely. An equilibrium was reached in the evolution of robot communication, with light-flashing mostly subdued but still used, and different patterns still emerging. The researchers say their system’s dynamics are a simple analogue of those found in nature, where some species, such as moths, have evolved to use a biologist-baffling array of different signaling strategies.

“Evolutionary robotic systems implicitly encompass many behavioral components … thus allowing for an unbiased investigation of the factors driving signal evolution,” the researchers wrote Monday in the Proceedings of the National Academy of Sciences. “The great degree of realism provided by evolutionary robotic systems thus provides a powerful tool for studies that cannot readily be performed with real organisms.”

Of course, it might not be long before robots directed towards self-preservation and possessing brains modeled after — if not containing — biological components are considered real organisms.

Tuesday, August 11, 2009

Robots to get their own operating system


THE UBot whizzes around a carpeted conference room on its Segway-like wheels, holding aloft a yellow balloon. It hands the balloon to a three-fingered robotic arm named WAM, which gingerly accepts the gift.

Cameras click. "It blows my mind to see robots collaborating like this," says William Townsend, CEO of Barrett Technology, which developed WAM.

The robots were just two of the multitude on display last month at the International Joint Conference on Artificial Intelligence (IJCAI) in Pasadena, California. But this happy meeting of robotic beings hides a serious problem: while the robots might be collaborating, those making them are not. Each robot is individually manufactured to meet a specific need and more than likely built in isolation.

This sorry state of affairs is set to change. Roboticists have begun to think about what robots have in common and what aspects of their construction can be standardised, hopefully resulting in a basic operating system everyone can use. This would let roboticists focus their attention on taking the technology forward.

One of the main sticking points is that robots are typically quite unlike one another. "It's easier to build everything from the ground up right now because each team's requirements are so different," says Anne-Marie Bourcier of Aldebaran Robotics in Paris, France, which makes a half-metre-tall humanoid called Nao.

Some robots, like Nao, are almost autonomous. Others, like the UBot, are semi-autonomous, meaning they perform some acts, such as balancing, on their own, while other tasks, like steering, are left to a human operator.

Also, every research robot is designed for a specific objective. The UBot's key ability is that it can balance itself, even when bumped - crucial if robots are to one day work alongside clumsy human beings. The Nao, on the other hand, can walk and even perform a kung-fu routine, as long as it is on a flat, smooth surface. But it can't balance itself as robustly as the UBot and won't easily be able to learn how.

On top of all this, each robot has its own unique hardware and software, so capabilities like balance implemented on one robot cannot easily be transferred to others.

Bourcier sees this changing if robotics advances in a manner similar to personal computing. For computers, the widespread adoption of Microsoft's Disk Operating System (DOS), and later Windows, allowed programmers without detailed knowledge of the underlying hardware and file systems to build new applications and build on the work of others.


Bringing robotics to this point won't be easy, though. "Robotics is at the stage where personal computing was about 30 years ago," says Chad Jenkins of Brown University in Providence, Rhode Island. Like the home-brew computers of the late 70s and early 80s, robots used for research today often have a unique operating system (OS). "But at some point we have to come together to use the same resources," says Jenkins.

This desire has its roots in frustration, says Brian Gerkey of the robotics research firm Willow Garage in Menlo Park, California. "People reinvent the wheel over and over and over, doing things that are not at all central to what they're trying to do."

For example, if someone is studying object recognition, they want to design better object-recognition algorithms, not write code to control the robot's wheels. "You know that those things have been done before, probably better," says Gerkey. But without a common OS, sharing code is nearly impossible.

The challenge of building a robot OS for widespread adoption is greater than that for computers. "The problems that a computer solves are fairly well defined. There is a very clear mathematical notion of computation," says Gerkey. "There's not the same kind of clear abstraction about interacting with the physical world."

Nevertheless, roboticists are starting to make some headway. The Robot Operating System, or ROS, is an open-source set of programs meant to serve as a common platform for a wide range of robotics research. It is being developed and used by teams at Stanford University in California, the Massachusetts Institute of Technology and the Technical University of Munich, Germany, among others.

ROS has software commands that, for instance, provide ways of controlling a robot's navigation, and its arms, grippers and sensors, without needing details of how the hardware functions. The system also includes high-level commands for actions like image recognition and even opening doors. When ROS boots up on a robot's computer, it asks for a description of the robot that includes things like the length of its arm segments and how the joints rotate. It then makes this information available to the higher-level algorithms.
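As a flavour of what that hardware-agnostic layer buys a programmer, here is a minimal ROS 1 (rospy) node that publishes a velocity command on a conventional topic and leaves the wheel control to whatever driver the robot provides. It is a generic sketch, not code for any particular robot.

#!/usr/bin/env python
# Minimal ROS 1 (rospy) node: publish a velocity command on a standard topic
# and let the robot's own driver translate it into wheel motion.
import rospy
from geometry_msgs.msg import Twist

def drive_forward():
    rospy.init_node('drive_forward_demo')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)                 # 10 Hz command loop
    cmd = Twist()
    cmd.linear.x = 0.2                    # metres per second, straight ahead
    while not rospy.is_shutdown():
        pub.publish(cmd)                  # hardware details live in the driver
        rate.sleep()

if __name__ == '__main__':
    try:
        drive_forward()
    except rospy.ROSInterruptException:
        pass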

A standard OS would also help researchers focus on a key aspect that so far has been lacking in robotics: reproducibility.

Often, if a team invents, say, a better navigation system, they will publish the results but not the software code. Not only are others unable to build on this discovery, they cannot independently verify the result. "It's useful to have people in a sense constrained by a common platform," says Giorgio Metta, a robotics researcher at the Italian Institute of Technology in Genoa. "They [will be] forced to do things that work, because somebody else can check. I think this is important, to make it a bit more scientifically oriented."

ROS is not the only robotic operating system vying to be the standard. Microsoft, for example, is trying to create a "Windows for robots" with its Robotics Developer Studio, a product that has been available since 2007.

Gerkey hopes to one day see a robot "app store" where a person could download a program for their robot and have it work as easily as an iPhone app. "That will mean that we have solved a lot of difficult problems," he says.

Sunday, August 9, 2009

Artificial intelligence technology could soon make the internet an even bigger haven for bargain-hunters


Software "agents" that automatically negotiate on behalf of shoppers and sellers are about to be set free on the web for the first time.

The "Negotiation Ninjas", as they are known, will be trialled on a shopping website called Aroxo in the autumn.

The intelligent traders are the culmination of 20 years' work by scientists at Southampton University.

"Computer agents don't get bored, they have a lot of time, and they don't get embarrassed," Professor Nick Jennings, one of the researchers behind the work, told BBC News.

"I have always thought that in an internet environment, negotiation is the way to go."

Price fixing

The agents use a series of simple rules - known as heuristics - to find the optimal price for both buyer and seller based on information provided by both parties.

Heuristics are commonly used in computer science to find an optimal solution to a problem when there is not a single "right answer".

They are often used in anti-virus software to trawl for new threats.

"If you can't analyse mathematically exactly what you should do, which you can't in general for these sorts of systems, then you end up with heuristics," explained Professor Jennings.

"We use heuristics to determine what price we should offer during the negotiation - and also how we might deal with multiple negotiations at the same time.

"We have to factor in some degrees of uncertainty as well - the chances are that sellers will enter into more negotiations than they have stock."



To use one of the intelligent agents, sellers must answer a series of questions about how much of a discount they are prepared to offer and whether they are prepared to go lower after a certain number of sales, or at a certain time of day.

They are also asked how eager they are to make a sale.

At the other end, the buyer types in the item they wish to purchase and the price they are willing to pay for it.

The agents then act as an intermediary, scouring the lists of sellers whose agents are set to accept a price in the region of the one offered.

If they find a match, the seller is prompted to automatically reply with a personalised offer.

The buyer then has a choice to accept, reject or negotiate. If they choose to negotiate, the agent analyses the seller's criteria to see if they can make a better offer.

The process continues until either there is a sale or one of the parties pulls out.
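A toy version of such a concession heuristic is sketched below; the rules and numbers are invented for illustration and are not Aroxo's actual algorithm.

# Toy concession heuristic for a buyer/seller agent pair; illustrative only.
def negotiate(buyer_offer, seller_list_price, seller_floor, eagerness=0.5,
              max_rounds=10):
    """eagerness in (0, 1]: how quickly the seller concedes toward its floor."""
    ask = seller_list_price
    for _ in range(max_rounds):
        if buyer_offer >= ask:
            return ask                      # deal at the current asking price
        # Seller concedes a fraction of the remaining gap above its floor.
        ask = max(seller_floor, ask - eagerness * (ask - seller_floor) * 0.5)
        if ask <= buyer_offer:
            return buyer_offer if buyer_offer >= seller_floor else None
        # Buyer nudges upward toward the ask, but never above it.
        buyer_offer = min(ask, buyer_offer * 1.05)
    return None                             # both parties pull out

result = negotiate(buyer_offer=80, seller_list_price=120, seller_floor=90)
print(round(result, 2) if result else "no deal")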

Aroxo will be trialling the Negotiation Ninjas from the autumn, and plans to have the system fully operational in time for Christmas shopping this year.

The site currently offers mainly electrical goods.

While the sellers will not have to pay to use the Ninjas, they pay to contact a buyer. The charge from Aroxo is 0.3% of the buyer's original asking price.

For Professor Jennings, this application of his research marks a return to a more traditional retail model.

"Fixed pricing is a relatively recent phenomenon," he said. "Throughout history most transactions have been negotiated. Only in the last 100 years have we gone for fixed pricing."

Sunday, August 2, 2009

Will artificial intelligence invade Second Life?


Popular culture is filled with different notions of what artificial intelligence should or will be like. There's the all-powerful Skynet from the "Terminator" movies, "Star Wars"-style androids, HAL from "2001: A Space Odyssey," the classic sentient computer program, carrying on a witty conversation through a computer terminal. Soon, we may have to add another to the list.

In September 2007, a software company called Novamente, along with the Electric Sheep Company, a producer of add-ons for virtual worlds, announced plans to release artificial intelligences (AI) into virtual worlds like the ultra-popular "Second Life."

Novamente's "intelligent virtual agents" would use online games and virtual worlds as a development zone, where they will grow, learn and develop by interacting with humans. The company said that it will start by creating virtual pets that become smarter as they interact with their (human-controlled) avatar owners. (An avatar is the character or virtual representation of a player in a virtual world.) More complex artificially controlled animals and avatars are expected to follow.

Novamente's artificial intelligence is powered by a piece of software called a "Cognition Engine." Pets and avatars powered by the Cognition Engine will feature a mix of automated behaviors and learning and problem-solving capabilities. Ben Goertzel, the CEO of Novamente, said that his company had already created a "fully functioning animal brain".

Goertzel envisioned Novamente's first artificial intelligences as dogs and monkeys, initially going on sale at your local virtual pet shop in October 2007.

These virtual pets will work much like real pets -- trainable, occasionally misbehaving, showing the ability to learn and perform tasks and responding positively to rewards. After dogs and monkeys, Novamente would then move on to more complex creatures, such as parrots that, like their real-life counterparts, could learn to speak.

Finally, the company expects to produce virtual human babies that, propelled by their own artificial intelligence, would grow, develop and learn in the virtual world.

While we frequently see or read about robots with interesting capabilities, scientists have struggled for decades to create anything approaching a genuine artificial intelligence. A robot may be an expert at one skill, say shooting a basketball, but numerous basic tasks, such as walking down stairs, may be beyond its capabilities. This is where a virtual world has its advantages, Goertzel says.

Below, we'll look at why virtual worlds may present the next and best frontier for the development of artificial intelligence.


Advantages of Artificial Intelligence in Virtual Worlds

While we already deal with some virtual AI -- notably in action games against computer-controlled "bots" or challenging a computer opponent to chess -- the work of Novamente, Electric Sheep Company and other firms has the potential to initiate a new age of virtual AI, one where, for better or worse, humans and artificial intelligences could potentially be indistinguishable.

If you think about it, we take in numerous pieces of information just walking down the street, much of it unconsciously. You might be thinking about the weather, the pace of your steps, where to step next, the movement of other people, smells, sounds, the distance to the destination, the effect of the environment around you and so forth.

An artificial intelligence in a virtual world has fewer of these variables to deal with because as of yet, no virtual world approaches the complexity of the real world. It may be that by simplifying the world in which the artificial intelligence operates (and by working in a self-contained world), some breakthroughs can be achieved.

Such a process would allow for a more linear development of artificial intelligence rather than an attempt to immediately jump to lifelike robots capable of learning, reasoning and self-analysis.

Goertzel states that a virtual world also offers the advantage of allowing a newly formed artificial intelligence to interact with thousands of people and characters, increasing learning opportunities. The virtual body is also easier to manage and control than that of a robot.

If an AI-controlled parrot seems to have particular challenges in a game world, it's less difficult for programmers to create another virtual animal than if they were working with a robot. And while a virtual world AI lacks a physical body, it displays more complexity (and more realism) than a simple AI that merely carries on text-based conversations with a human.

Novamente claims that its system is the first to allow artificial intelligences to progress through a process of self-analysis and learning. The company hopes that its AI will also distinguish itself from other attempts at AI by surprising its creators in its capabilities -- for example, by learning a skill or task that it wasn't programmed to perform.

Novamente has already created what it terms an "artificial baby" in the AGISim virtual world. This artificial baby has learned to perform some basic functions.

Despite all of this excitement, the AI discussed here are far from what's envisioned in "Terminator." It will be some time before AIs are seamlessly interacting with players, impressing us with their cleverness and autonomy and seeming all too human.

Even Philip Rosedale, the founder of Linden Labs, the company behind "Second Life," has warned against becoming caught up in the hype of the supposedly groundbreaking potential of these virtual worlds.

But "Second Life" and other virtual worlds may prove to be the most valuable testing grounds to date for AI. It will also be interesting to track how virtual artificial intelligences progress as the virtual worlds they occupy change and become more complex.

Besides acting as an incubator for artificial intelligence, "Second Life" has already been an important case study in the development of cyber law and the economics and legality of hawking virtual goods for real dollars.

The popular virtual world has even been mentioned as a possible virtual training facility for children taking emergency preparedness classes.

Scientists secretly fear AI robot-machines may soon outsmart men


A robot that can open doors. Computer viruses that no one can stop.

Advances in the scientific world promise many benefits, but scientists are secretly panicking over the thought that artificially intelligent machines could outsmart humans.

At a conference held in Monterey Bay, California, leading experts warned that mankind might not be able to control computer-based systems that carry out a growing share of society’s workload, reports The Times.

“These are powerful technologies that could be used in good ways or scary ways,” warned Eric Horvitz, principal researcher at Microsoft who organised the conference on behalf of the Association for the Advancement of Artificial Intelligence.

Alan Winfield, a professor at the University of the West of England, believes that boffins spend too much time developing artificial intelligence and too little on robot safety.

“We’re rapidly approaching the time when new robots should undergo tests, similar to ethical and clinical trials for new drugs, before they can be introduced,” he said.

The scientists who presented their findings at the International Joint Conference for Artificial Intelligence in Pasadena, California, last month fear that nightmare scenarios, which have until now been limited to science fiction films, such as the Terminator series, The Matrix, 2001: A Space Odyssey and Minority Report, could come true.

A more realistic short-term concern is the possibility of malware that can mimic the digital behavior of humans.

According to the panel, identity thieves might feasibly plant a virus on a person’s smartphone that would silently monitor their text messages, email, voice, diary and bank details. The virus could then use these to impersonate that individual with little or no external guidance from the thieves.

Sunday, May 17, 2009

Space robot 2.0: Smarter than the average rover



SOMETHING is moving. Two robots sitting motionless in the dust have spotted it. One, a six-wheeled rover, radios the other perched high on a rocky slope. Should they take a photo and beam it back to mission control? Time is short, they have a list of other tasks to complete, and the juice in their batteries is running low. The robots have seconds to decide. What should they do?

Today, mission control is a mere 10 metres away, in a garage here at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California. Engineers can step in at any time. But if the experiment succeeds and the robots spot the disturbance and decide to beam the pictures back to base, they will have moved one step closer to fulfilling NASA’s vision of a future in which teams of smart space probes scour distant worlds, seeking out water or signs of life with little or no help from human controllers.

NASA, along with other space agencies, has already taken the first tentative steps towards this kind of autonomous mission (see “Spacecraft go it alone”). In 1999, for example, NASA’s Deep Space 1 probe used a smart navigation system to find its way to an asteroid – a journey of over 600 million kilometres. Since 2003, an autonomous control system has been orbiting our planet aboard NASA’s Earth Observing-1 satellite. It helps EO-1 to spot volcanic eruptions and serious flooding, so the events can be photographed and the images beamed back to researchers on the ground. And in the next month or so, the latest iteration of smart software will be uploaded onto one of NASA’s Mars rovers, loosening the machine’s human tether still further so it can hunt for unusual rock formations on its own.

The idea is not to do away with human missions altogether. But since it is far cheaper and easier to send robots first, why not make them as productive as possible? Besides, the increasingly long distances they travel from home make controlling a rover with a joystick impractical. Commands from Earth might take 20 minutes to reach Mars, and about an hour to reach the moons of Jupiter.

So what can we realistically expect autonomous craft to do? It is one thing to build a space probe that can navigate by itself, respond quickly to unexpected events or even carry on when a critical component fails. It’s quite another to train a planetary rover to spot a fossilised bone in a rock, let alone distinguish a living cell from a speck of dirt.

The closest thing to a space robot with a brain is NASA’s pair of Mars rovers, and their abilities are fairly limited. Since they landed in January 2004 they have had to cope with more than six critical technical problems, including a faulty memory module and a jammed wheel. That the craft are still trundling across the red planet and returning valuable geological data is down to engineers at mission control fixing the faults remotely. In fact the rovers can only do simple tasks on their own, says Steve Chien, the head of JPL’s artificial intelligence group. They can be programmed to drive from point A to point B, stop, and take a picture. They can spot clouds and whirling mini-tornadoes called dust devils on their own. They can also protect themselves against accidental damage – by keeping away from steep slopes or large rocks. For pretty much everything else, they depend on their human caretakers.

What are we missing?

This is becoming a significant limitation. While NASA’s first Mars rover, Sojourner, travelled just 100 metres during its mission in 1997, Spirit and Opportunity have covered over 24 kilometres so far. As they drive they are programmed to snap images of the landscape around them, but that doesn’t make for very thorough exploration. “We are travelling further and further with each rover mission,” says Tara Estlin, senior computer scientist and one of the team developing autonomous science at JPL. “Who knows what interesting things we are missing?”

NASA wouldn’t want the rovers to record everything they see and transmit it all back to Earth; the craft simply don’t have the power, bandwidth and time. Instead, the team at JPL has spent around a decade developing software that allows the rovers to analyse images as they are recorded and decide for themselves which geological features are worth following up. Key to this is a software package called OASIS – short for on-board autonomous science investigation system.



The idea is that before the rovers set out each day, controllers can give OASIS a list of things to watch out for. This might simply be the largest or palest rock in the rover’s field of view, or it could be an angular rock that might be volcanic. Then whenever a rover takes an image, OASIS uses special algorithms to identify any rocks in the scene and single out those on its shopping list (Space Operations Communicator, vol 5, p39). Not only is OASIS able to tell the rovers what features are of scientific interest, it knows their relative value too: smooth rocks which may have been eroded by water might take priority over rough ones, say. This helps the rovers decide what to do next.

There are also practical considerations to take into account. As they trundle around the surface, the rovers must keep track of whether they have enough time, battery power and spare memory capacity to proceed. So the JPL team has also created a taskmaster – software that can plan and schedule activities. With science goals tugging at one sleeve and practical limitations at the other, this program steps in to decide how to order activities so that the rover can reach its goals safely, making any necessary scheduling changes along the way. With low-priority rocks close by, say, a rover might decide it is worth snapping six images of them rather than one of a more interesting rock a few metres away, since the latter would use up precious battery juice.
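A greatly simplified version of that trade-off could be a greedy scheduler that ranks targets by science priority per unit of battery, as sketched below; the scoring and figures are invented, and JPL's planner is far more sophisticated.

# Greedy sketch of the priority-versus-resources trade-off described above.
def schedule_targets(targets, battery_budget, time_budget):
    """targets: list of dicts with 'name', 'priority', 'battery_cost', 'time_cost'."""
    # Favour high science priority per unit of battery spent.
    ranked = sorted(targets, key=lambda t: t["priority"] / t["battery_cost"],
                    reverse=True)
    plan, battery, time_left = [], battery_budget, time_budget
    for t in ranked:
        if t["battery_cost"] <= battery and t["time_cost"] <= time_left:
            plan.append(t["name"])
            battery -= t["battery_cost"]
            time_left -= t["time_cost"]
    return plan

nearby_rocks = [
    {"name": "smooth_rock", "priority": 9,  "battery_cost": 5,  "time_cost": 20},
    {"name": "pale_rock",   "priority": 4,  "battery_cost": 1,  "time_cost": 5},
    {"name": "far_rock",    "priority": 10, "battery_cost": 12, "time_cost": 60},
]
print(schedule_targets(nearby_rocks, battery_budget=10, time_budget=40))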

Why stop there? Since OASIS allows a rover to identify high-priority targets on its own, the JPL team has decided to take the next step: let the rover drive over to an interesting rock and deploy its sensors to take a closer look. To do this, Estlin and her colleagues won’t be using OASIS, however. Instead, they have taken elements from it and used them to create a new control system called Autonomous Exploration for Gathering Increased Science (AEGIS). This has been tested successfully at JPL and is scheduled for uplink and remote installation on the rover Opportunity sometime in September.

Once AEGIS is in control, Opportunity will be able to deploy its high-resolution camera automatically and beam data back to Earth for analysis – the first time autonomous software has been able to control a craft on the surface of another world. This is just the beginning, says Estlin. For example, researchers at JPL and Wesleyan University in Middletown, Connecticut, have developed a smart detector system that will allow a rover to carry out a basic scientific experiment on its own. In this case, its task will be to identify specific minerals in an alien rock.

The detector consists of two automated spectrometers controlled by “support vector machines” – relatives of artificial neural networks – of a kind already in use aboard EO-1. The new SVM system uses the spectrometers to take measurements and then compares the results with an on-board database containing spectra from thousands of minerals. Last year the researchers published results in the journal Icarus (vol 195, p 169) showing that in almost all cases, even in complex rock mixtures, their SVM could automatically spot the presence of jarosite, a sulphate mineral associated with hydrothermal springs.
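Conceptually, that kind of classifier can be reproduced in a few lines with an off-the-shelf support vector machine, as in this scikit-learn sketch; the tiny made-up "spectra" stand in for the real on-board mineral library, and this is not the flight software.

# Toy version of spectral classification with a support vector machine.
import numpy as np
from sklearn.svm import SVC

# Each row is a (very short) reflectance spectrum; label 1 = contains jarosite.
spectra = np.array([
    [0.2, 0.8, 0.4, 0.1],
    [0.3, 0.7, 0.5, 0.2],
    [0.9, 0.1, 0.2, 0.8],
    [0.8, 0.2, 0.1, 0.9],
])
labels = np.array([1, 1, 0, 0])

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(spectra, labels)

new_measurement = np.array([[0.25, 0.75, 0.45, 0.15]])
print("jarosite detected" if clf.predict(new_measurement)[0] == 1 else "no jarosite")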
Alien novelties

Though increasingly sophisticated, these autonomous systems are still a long way from the conscious machines of science fiction that can talk, feel and recognise new life forms. Right now, Chien admits, we can’t even really program a robot for “novelty detection” – the equivalent of, say, picking out the characteristic shape of a bone among a pile of rocks – let alone give it the ability to detect living creatures.

In theory, the shape of a complex natural object such as an ice crystal or a living cell could be described in computer code and embedded in a software library. Then the robot would only need a sensor such as a microscope with sufficient magnification to photograph it.

In fact identifying a cell is a huge challenge because its characteristics can be extremely subtle. In 1999, NASA funded an ambitious project that set out to discover whether there are specific signatures such as shape, symmetry, or a set of combined features that could provide a key to identifying and categorising simple living systems (New Scientist, 22 April 2000, p 22). The idea was to create a huge image library containing examples from Earth, and then teach a neural network which characteristics to look for. Unfortunately, the project ended before it could generate any useful results.

Just as a single measurement is unlikely to provide definitive proof of alien life, so most planetary scientists agree that a single robotic explorer, however smart, won’t provide all the answers. Instead, JPL scientists envisage teams of autonomous craft working together, orbiting an alien world and scouring the surface for interesting science, then radioing each other to help decide what features deserve a closer look.

This model is already being put through its paces. Since 2004, networks of ground-based sensors placed around volcanoes, from Erebus in Antarctica to Kilauea and Mauna Loa in Hawaii, have been watching for sudden changes that might signal an eruption. When they detect strong signals, they can summon EO-1, which uses its autonomous software planner to schedule a fly-past. The satellite then screens the target area for clouds, and if skies are clear, it records images, processes them and transmits them to ground control.



In July, a network of 15 probes was placed in Mount St Helens, a volcano in Washington state. These probes carry sensors that monitor conditions inside the crater and can talk to each other to analyse data in real time, as well as call up EO-1 to take photos. If it detects activity from orbit, the satellite can even ask the probes to focus attention on a particular spot.

Networks of autonomous probes can provide a number of advantages, including helping a mission cover more ground, and ensuring it continues even if one or more probes are damaged or destroyed. This approach also offers increased processing power, since computers on separate probes can work together to crunch data more quickly. And researchers are beginning to believe that teams of autonomous probes could eventually be smart enough to do almost everything a human explorer could, even in the remotest regions of space.

Last year, in a paper published in the journal Planetary and Space Science (vol 56, p 448), a consortium of researchers from the US, Italy and Japan laid out their strategy for searching out life using autonomous craft controlled by fuzzy logic, the mathematical tool developed in the 1960s to give computers a way to handle uncertainty. Their plan calls for the use of three types of craft: surface-based rovers with sensors designed to spot signs of water and potential sources of heat, such as geothermal vents; airships that float low overhead and help pinpoint the best sites for study; and orbiters that image the planet surface, coordinating with mission control as well as beaming data back to Earth.

The consortium argue that fuzzy logic is a better bet than neural networks or other artificial intelligence techniques, since it is well suited to handling incomplete data and contradictory or ambiguous rules. They also suggest that by working together, the three types of probes will have pretty much the same investigative and deductive powers as a human planetary scientist.

Experimental simulations of a mission to Mars seem to confirm this view: in two tests the autonomous explorers came to the same conclusions as a human geoscientist. The system could be particularly useful for missions to Titan and Enceladus, the researchers suggest, since autonomy will be a key factor for the success of a mission so far from Earth.

Back at JPL, the day’s test of robot autonomy is almost complete. The two robots are running new software designed to improve coordination between craft. Part of the experiment is to see whether the robots can capture a photo of a moving target – in this case a small remote-controlled truck nicknamed Junior – and relay it back to “mission control” using delay-tolerant networking, a new system for data transfer.

In future deep-space missions, robots will need autonomy for longer stretches since commands from Earth will take an hour or so to reach them. And as planets rotate, there will be periods when no communication is possible. Delay-tolerant networking relies on a “store and forward” method that promises to provide a more reliable link between planetary explorers and mission control. Each node in the network – whether a rover or an orbiter – holds on to a transmission until it is safe to relay it to the next node. Information may take longer to reach its destination this way, but it will get there in the end.
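The store-and-forward idea itself is simple enough to sketch: each node buffers bundles until a link to the next hop is available. The toy relay below is illustrative only, not the actual delay-tolerant networking protocol stack.

# Toy store-and-forward relay in the spirit of delay-tolerant networking:
# each node holds bundles until a link to the next hop is available.
from collections import deque

class DTNNode:
    def __init__(self, name):
        self.name = name
        self.buffer = deque()          # bundles waiting for a usable link

    def receive(self, bundle):
        self.buffer.append(bundle)

    def forward(self, next_node, link_up):
        # Only relay when the link exists; otherwise keep storing.
        while link_up and self.buffer:
            next_node.receive(self.buffer.popleft())

rover, orbiter, earth = DTNNode("rover"), DTNNode("orbiter"), DTNNode("earth")
rover.receive({"type": "image", "target": "Junior"})

rover.forward(orbiter, link_up=True)    # rover-to-orbiter pass
orbiter.forward(earth, link_up=False)   # Earth not in view yet: bundle is stored
orbiter.forward(earth, link_up=True)    # later pass: bundle finally delivered
print([b["target"] for b in earth.buffer])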

And it seems to work: the images from the two robots arrive. They include both wide-angle shots and high-resolution close-ups of Junior. Estlin is pleased.

As we stand in the heat, a salamander scuttles quickly across a rock. I can’t help wondering whether the robots would have picked that out. Just suppose the Mars rover had to choose between a whirling dust devil and a fleeing amphibian? Chien assures me that the software would direct the rover to prioritise, depending on the relative value of the two. I hope it goes for the salamander. And if alien life proves half as shy, I hope the rover can act fast.