Sunday, March 28, 2010

Virtual pets that can learn


"SIT," says the man. The dog tilts its head but does nothing. "Sit," the man repeats.

The dog lies down. "No!" the man admonishes.

Then, unable to get the dog to sit, the man decides to teach it by example. He sits down himself.

"I'm sitting. Try sitting," he says. The dog cocks its head attentively, folds its hind legs under its body and sits. "Good!" says the man.

No, it's not a rather bizarre way to teach your pet new tricks. It is a demonstration of a synthetic character in a virtual world being controlled by an autonomous artificial intelligence (AI) program, which will be released to inhabitants of virtual worlds like Second Life later this year.

Novamente, a company in Washington DC that built the AI program controlling the dog, says the demonstration is a foretaste not just of future virtual pets but of computer games to come. Its work, along with similar programs from other researchers, was presented at the First Conference on Artificial General Intelligence at the University of Memphis in Tennessee earlier this month.

If first impressions are anything to go by, synthetic pets like Novamente's dog will be a far cry from today's virtual pets, such as Neopets and Nintendogs, which can only perform pre-programmed moves, such as catching a disc. "The problem with current virtual pets is they are rigidly programmed and lack emotions, responsiveness, individual personality or the ability to learn," says Ben Goertzel of Novamente. "They are pretty much all morons."

In contrast, Goertzel claims that synthetic characters like his dog can be taught almost anything, even things that their programmers never imagined.

For instance, owners could train their pets to help win battles in adventure games such as World of Warcraft, says Sibley Verbeck of the Electric Sheep Company in New York City, which helped Novamente create the virtual pets. "It is a system that allows the user to teach the virtual character anything they want to," he says.


So how do these autonomous programs work? Take Novamente's virtual pet, which is expected to be the first to hit the market. One way that the pets learn is by being taught specific tasks by human-controlled avatars, similar to the way babies are taught by their parents.

To do this, the humans must directly tell the pet - via Second Life's text-based instant-messaging interface - that they are about to teach it a task. When the pet receives a specific command, such as "I am going to teach you to sit", it works out that it is about to learn something new called "sit". It then watches the human avatar and starts to copy some of the things the teacher does.
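
For illustration, here is a minimal Python sketch of what that handshake might look like; the class, pattern and method names are invented for this example and are not Novamente's actual interface.

```python
import re

# Hypothetical sketch of the teach-command handshake described above;
# the names here are illustrative, not Novamente's API.
TEACH_PATTERN = re.compile(r"i am going to teach you to (\w+)", re.IGNORECASE)

class PetBrain:
    def __init__(self):
        self.known_skills = {}      # skill name -> learned behaviour
        self.learning_skill = None  # skill currently being demonstrated

    def on_chat(self, message):
        """Handle an instant message typed by the owner's avatar."""
        match = TEACH_PATTERN.search(message)
        if match:
            # Enter imitation mode: watch the teacher and record what they do.
            self.learning_skill = match.group(1).lower()
            print("Watching the teacher to learn '%s'" % self.learning_skill)
        elif self.learning_skill and message.lower().startswith("good"):
            print("Reinforcing the current attempt at '%s'" % self.learning_skill)

pet = PetBrain()
pet.on_chat("I am going to teach you to sit")   # -> enters learning mode for "sit"
```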

At first it doesn't know which aspects of the task are important. This can lead to mistakes: the dog lying down instead of sitting, for example. But it soon figures out the correct behaviour by trying the task several times in a variety of ways. The key learning tool is that the pets are pre-programmed to seek praise from their owners, so they can make increasingly intelligent guesses about what they should copy, repeating adjustments that seem to make the human avatar more likely to say "good dog", and avoiding those that elicit the response "bad dog". Eventually, the pet figures out how to sit.
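
A rough sketch of that praise-driven refinement loop, with a toy stand-in for the owner's feedback, might look like the following; all names and numbers are illustrative assumptions, not the actual engine.

```python
import random

# Toy sketch of praise-driven refinement: try small variations of a behaviour
# and keep whichever earns the most approval. Everything here is illustrative.
def refine_behaviour(initial, mutate, feedback, trials=30):
    best, best_score = initial, feedback(initial)
    for _ in range(trials):
        candidate = mutate(best)          # try a small random variation
        score = feedback(candidate)       # stand-in for "good dog" / "bad dog"
        if score > best_score:            # keep adjustments that earn more praise
            best, best_score = candidate, score
    return best

# Toy stand-in for the owner: the "right" sit pose has hip and knee angles near 90 degrees.
target = (90.0, 90.0)
feedback = lambda pose: -abs(pose[0] - target[0]) - abs(pose[1] - target[1])
mutate = lambda pose: tuple(a + random.uniform(-15, 15) for a in pose)

print(refine_behaviour((0.0, 0.0), mutate, feedback))   # drifts toward (90, 90)
```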


Learning by imitation isn't exactly a new idea. Robots in the real world are still being trained in this way, but it hasn't been easy. For example, a real robot needs sophisticated computer vision to recognise its teacher's legs, so that it can isolate their movement and copy it. But legs vary enormously in apparent size and shape, depending on how they are moving and the angle they are viewed from, which makes it hard to program a robot to recognise them.

In Second Life, you can get round this problem. Characters don't see objects from a certain angle, nor from a particular distance; all they know is the 3D coordinates of the object, allowing them to recognise legs simply by their geometry. Once the pet can recognise legs, Goertzel then programs it to map the leg movements to the movement of its own legs. Obviously, the pet's own legs are a different size and shape, so the exact same motions wouldn't be appropriate. But the pets experiment with slightly different variations on the theme - and then settle on the set of movements that elicits the most praise from the avatar.
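
As a hedged sketch, the rescaling step might look something like this, assuming the world exposes the teacher's joint positions as plain 3D offsets; the function name and numbers are invented for the example.

```python
# Illustrative sketch of the rescaling step: the teacher's leg motion,
# recorded as raw 3D joint offsets, is scaled to the pet's shorter legs
# before the pet starts experimenting with variations of it.
def map_motion(teacher_frames, teacher_leg_length, pet_leg_length):
    """Rescale a sequence of (x, y, z) joint offsets from teacher to pet."""
    scale = pet_leg_length / teacher_leg_length
    return [tuple(coord * scale for coord in frame) for frame in teacher_frames]

# A teacher avatar folding its legs, as joint offsets in metres (made-up values).
teacher_motion = [(0.0, 0.0, 0.9), (0.1, 0.0, 0.5), (0.2, 0.0, 0.2)]
print(map_motion(teacher_motion, teacher_leg_length=0.9, pet_leg_length=0.3))
```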


So far, Goertzel says he has successfully taught his dogs to play fetch, to perform basic soccer skills such as kicking the ball, faking a shot and dribbling, and to dance a simple series of moves, just by showing them how (watch a video of the demo at www.novamente.net/puppy.mov).

Imitation isn't the only way the pets learn, however. They can also learn things humans may not have intended to teach them. As well as seeking praise, they are programmed with other basic desires such as hunger and thirst, along with a tendency to make random movements and explore the virtual environment. As they explore, their "memory" records everything that happens. The pet then carries out statistical analyses to find combinations of events and actions that seem to predict the fulfilment of its goals, such as satisfying its hunger, and uses that knowledge to guide its future behaviour. This can lead to more sophisticated behaviour, such as a dog learning to touch its bowl when a human walks into the room, because doing so increases the chance of a goal being fulfilled. "It learns that going near the bowl is symbolic for food," says Goertzel. "This is a sort of rudimentary gestural communication."
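
A toy Python sketch of that kind of statistical bookkeeping, assuming a hypothetical episode log, is shown below; nothing here reflects Novamente's actual data structures.

```python
from collections import Counter

# Illustrative sketch, not Novamente's engine: from a log of
# (action, goal_satisfied) episodes, estimate which actions best
# predict that hunger gets satisfied, and prefer those in future.
def action_success_rates(memory):
    tried, succeeded = Counter(), Counter()
    for action, goal_satisfied in memory:
        tried[action] += 1
        if goal_satisfied:
            succeeded[action] += 1
    return {action: succeeded[action] / tried[action] for action in tried}

memory = [
    ("touch_bowl", True), ("bark", False), ("touch_bowl", True),
    ("run_in_circle", False), ("touch_bowl", False), ("bark", False),
]
rates = action_success_rates(memory)
print(rates, "-> prefer:", max(rates, key=rates.get))   # touching the bowl wins
```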

Goertzel is aiming even higher. He says learning gestures could eventually form the basis for virtual pets to learn language, just as it does in young children. "Eventually we want to have virtual babies or talking parrots that learn to speak," he says (see "If only they could talk").


Deb Roy, an AI researcher at the Massachusetts Institute of Technology, worries that people will tire of training their virtual pets. "Philosophically I am on board. These are lovely and powerful ideas," he says. "But what are the results that show [Goertzel's team] are making progress compared to people who have tried similar things?"

Novamente has a few tricks up its sleeve to stop people from getting bored. For starters, the synthetic characters will learn quickly as more and more people use them. Although each pet has its own "brain", Novamente's servers will pool knowledge from all the brains. So once one pet has mastered one trick, it will be much easier for another one to master it, too.
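
One way such pooling could plausibly be wired up is a shared server-side skill store that individual pets consult before learning from scratch; the sketch below is purely hypothetical and only hints at the idea.

```python
# Purely hypothetical sketch of skill pooling: each pet keeps its own "brain",
# but a shared server-side store lets a trick mastered by one pet seed
# another pet's learning instead of starting from scratch.
SHARED_SKILLS = {}   # stands in for the knowledge pooled on Novamente's servers

class PooledPet:
    def __init__(self, name):
        self.name = name
        self.skills = {}

    def learn(self, skill, behaviour=None):
        if skill in SHARED_SKILLS:
            # Start from the pooled version; local refinement can still follow.
            self.skills[skill] = SHARED_SKILLS[skill]
        elif behaviour is not None:
            self.skills[skill] = behaviour
            SHARED_SKILLS[skill] = behaviour   # publish the new trick for other pets

fido, rex = PooledPet("fido"), PooledPet("rex")
fido.learn("sit", behaviour=["fold_hind_legs", "lower_rump"])
rex.learn("sit")                                # picks up fido's trick from the pool
print(rex.skills["sit"])
```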

Researchers at Novamente are not the only ones hoping to create compelling synthetic characters. Selmer Bringsjord, Andrew Shilliday and colleagues at Rensselaer Polytechnic Institute in Troy, New York, are working on a character called Eddie, which they hope will reason about another human's state of mind - potentially leading to characters that understand deceit and betrayal - and predict what other characters will do next.

The fusing of virtual worlds and AI will almost certainly be good for AI. Since the field failed to deliver on its initial promises of machines you can chat to, robotic assistants that do your housework and conscious machines, it has been hard to get funding to build generally intelligent programs. Instead, more specialised "narrow AI" applications such as computer vision and chess-playing have flourished. Novamente is planning to make its pets so much fun that people will actually pay money to interact with them. If so, the multibillion-dollar games industry could drive AI towards delivering on its original promise.

Could the fusion of games, virtual worlds and artificial intelligence take us closer to building artificial brains?


Novamente is a company that creates virtual pets equipped with artificial intelligence.
As it works towards this goal, it hopes the pets will learn to make common-sense assumptions the way humans do, which could eventually allow them to understand and produce natural language, for example.

One of the biggest challenges faced by researchers trying to imbue computers with natural language abilities is getting computers to resolve ambiguities. Take this sentence: "I saw the man with a telescope." There are three possible ways to interpret it. Either I was looking at a man holding a telescope, or I saw a man through my telescope, or, more morbidly, I was sawing a man with a telescope. Context would help a human figure out the intended meaning, while a computer might be flummoxed.

But in an environment like Second Life, a synthetic character endowed with AI could use its immediate experience and interactions with other avatars and objects to make sense of language the way humans might. "The stuff that really excites me is to start teaching [pets] simple language," says Ben Goertzel of Novamente.
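
As a purely illustrative sketch, grounding could let a character resolve the telescope sentence by checking its world model to see who is actually holding a telescope; the scene representation below is invented for the example.

```python
# Invented example of grounded disambiguation: the character consults its
# world model to see who is holding a telescope before picking a reading.
def interpret_telescope_sentence(scene):
    speaker_has = "telescope" in scene.get("speaker_holding", [])
    man_has = "telescope" in scene.get("man_holding", [])
    if speaker_has and not man_has:
        return "the speaker viewed the man through a telescope"
    if man_has and not speaker_has:
        return "the man being looked at was holding a telescope"
    return "still ambiguous without more context"

scene = {"speaker_holding": ["telescope"], "man_holding": []}
print(interpret_telescope_sentence(scene))
```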

But other AI researchers doubt that virtual environments will be rich enough for synthetic characters to move towards the kind of general intelligence that is required for natural language processing. Stephen Grand, an independent researcher from Baton Rouge, Louisiana, who created the AI game Creatures in the mid-1990s, applauds the Novamente approach, but thinks there are limits to learning inside a virtual world.

"Just imagine how intelligent you would be if you were born with nothing more than the sensory information available to a Second Life inhabitant," he says. "It's like trying to paint a picture while looking through a drinking straw."

Thursday, March 25, 2010

IBM Simulates a Cat-Like Brain: AI or Shadow Minds for Humans?



IBM's Almaden Research Center has announced that it has produced a "cortical simulation" of the scale and complexity of a cat brain.

This simulation ran on one of IBM's "Blue Gene" supercomputers, in this case at the Lawrence Livermore National Laboratory (LLNL).

This isn't a simulation of a cat brain; it's a simulation of a brain structure that has the scale and connection complexity of a cat brain.

It doesn't include the actual structures of a cat brain, nor its actual connections; the various experiments in the project filled the simulation's memory with data and let the system create its own signals and connections.

Put simply, it's not an artificial (feline) intelligence; it's a platform upon which an A(F)I could conceivably be built.


Scientists, at IBM Research - Almaden, in collaboration with colleagues from Lawrence Berkeley National Lab, have performed the first near real-time cortical simulation of the brain that exceeds the scale of a cat cortex and contains 1 billion spiking neurons and 10 trillion individual learning synapses.
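
For a sense of what "spiking neurons" means at a scale roughly a billion times smaller, here is a toy leaky integrate-and-fire neuron in Python; the parameters are arbitrary textbook-style values and have nothing to do with the model actually used in the cortical simulation.

```python
# Toy leaky integrate-and-fire neuron, to show what a single "spiking neuron"
# does. The parameters are arbitrary illustrative values, not those of the
# actual cortical simulator.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0):
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks back toward rest while integrating input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:          # threshold crossed: emit a spike and reset
            spike_times.append(step)
            v = v_reset
    return spike_times

print(simulate_lif([20.0] * 100))     # a steady input produces a regular spike train
```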





Ultimately, this is a very interesting development, both for the obvious reasons (an artificial cat brain!) and because of its associated "Blue Matter" project, which uses supercomputers and magnetic resonance to non-invasively map out brain structures and connections.

The cortical sim is intended, in large part, to serve as a test-bed for the maps gleaned by the Blue Matter analysis. The combination could mean taking a reading of a brain and running the shadow mind in a box.

Wednesday, March 10, 2010

Android Phone powered robot



Some clever California hackers, Tim Heath and Ryan Hickman, are building bots that harness Android phones for their robo-brainpower.

Their first creation, the TruckBot, uses an HTC G1 as its brain and a chassis they built for $30 in parts. It's not too advanced yet: so far it can use the phone's compass to head in a particular direction, but they're working on integrating the bot more fully with the phone and the Android software.
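
To illustrate the kind of logic involved, here is a minimal heading-correction loop in Python; it is not the Cellbot code or Android's sensor API, just a generic differential-drive sketch with made-up numbers.

```python
# Generic differential-drive sketch, not the Cellbot code or Android's API:
# compare the compass heading with a target bearing and steer toward it.
def heading_error(target_deg, current_deg):
    """Smallest signed angle from the current heading to the target, in degrees."""
    return (target_deg - current_deg + 180) % 360 - 180

def steer(target_deg, current_deg, gain=0.01):
    """Return (left, right) wheel speeds in [0, 1] that turn toward the target."""
    error = heading_error(target_deg, current_deg)   # positive = turn clockwise
    turn = max(-0.5, min(0.5, gain * error))
    return 0.5 + turn, 0.5 - turn                    # speed up the left wheel to turn right

print(steer(target_deg=90, current_deg=350))         # heading nearly north, turn toward east
```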

Some ideas they're looking to build in soon are facial and voice recognition and location awareness.

If you're interested in putting together a Cellbot of your own, the team's development blog has more information.