Wednesday, November 18, 2009

Brisbane maps robotic future

Scientists in Brisbane are blurring the line between biology and technology and creating a new generation of robot "helpers" more in tune with human needs.

The University of Queensland is hosting the Australian Research Council's Thinking Systems symposium this week, which brings together UQ's robotic navigation project with the University of New South Wales' robotic hands project and a speech and cognition project out of the University of Western Sydney.

Scientists are working towards a range of robotic innovations, from the development of navigation and learning robots to the construction of artificial joints and limbs and the creation of a conversational computer program, a la 2001: A Space Odyssey's HAL.

UQ's School of Information Technology and Electrical Engineering head, Professor Janet Wiles, said the symposium paired "some very clever engineers...with very clever scientists" to map the future of robotics - and it was going to be a very different world.

"You're bringing together neuroscience, cognitive science, psychology, behaviour, robotics and information systems to look at the cross-disciplinary projects we can do in this space," Professor Wiles said.

"We're doing a combination of the fundamental science and the translation into the technology and that's one of the great benefits of our project."

The group aims to advance robotic technology by decoding the way human and animal brains work to equip machines with the ability to operate in the real world.

"There's a strong connection to cognition - the way the brain works as a whole - and navigation, so what we've been doing is studying the fundamentals of navigation in animals and taking the algorithms we've learnt from those and putting them into robots," Professor Wiles said.

Over the next two decades, she sees robots becoming more and more important, expanding from their current roles as cleaners, assemblers and drones and into smarter machines more closely integrated with human beings in the form of replacement limbs and joints.

"It's not going to be the robots and us. Already a lot of people are incorporating robot components; people who have had a leg amputated now have a knee that is effectively a fully-articulated robotic knee [with] a lot of the spring in the step that a natural knee has," Professor Wiles said.

"The ability of robots to replace component parts is an area which is going to be growing.

"This is where you're going to blur the line between technology and biology when you start to interface these two fields."

At UQ, the team is working on developing computer codes or algorithms that would enable a robot to "learn" rapidly about its near environment and navigate within it.

"Navigation is quite an intriguing skill because it is so intrinsic to what we do and we are really not aware of it unless we have a poor sense of navigation," Professor Wiles said.

"The kind of navigation we are dealing with is how you get from one place to another, right across town or from one room in a building to another you can't see."

With about four million robots in households right now, performing menial chores such as vacuuming the carpet, improvements in navigation have the potential to widen the scope of these creations and give them a larger place in everyday life.

According to Professor Wiles, the ability to rapidly process information and apply it to the area they are working in will give robots the edge into the future.

"Robots need to learn new environments very rapidly and that's what a lot of our work focuses on.

"When you take a robot out of the box you don't want to program into it with the architecture of your house, you want the robot to explore the house and learn it very quickly," Professor Wiles said.

"Household robotics is going to be really big in the next 15 years or so, and one of the things you need is for robots to be able to look after themselves in space."
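The "unbox and explore" behaviour Professor Wiles describes can be pictured with a toy frontier-exploration loop. This is a minimal sketch over an invented grid map, not the UQ project's actual algorithm:

```python
# A robot on a grid map "learns" the house by repeatedly expanding
# into unexplored free cells (breadth-first frontier exploration).
# The map and all details here are illustrative only.
from collections import deque

HOUSE = [
    "#########",
    "#...#...#",
    "#...#...#",
    "#.. ....#",
    "#########",
]  # '#' = wall, everything else = free space

def explore(grid, start):
    """Breadth-first exploration from start; returns the learnt free cells."""
    rows, cols = len(grid), len(grid[0])
    known = {start}
    frontier = deque([start])
    while frontier:                      # keep pushing into unknown space
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in known:
                known.add((nr, nc))
                frontier.append((nr, nc))
    return known

learnt = explore(HOUSE, (1, 1))
print(f"robot learnt {len(learnt)} free cells")
```

A real system would build the map from noisy sensor data rather than read it from an array, but the loop captures the idea of learning a new environment without pre-programmed architecture.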

But as Australian universities and international research institutes look into replicating the individual parts of biological creatures and mimicking them in machines, the question of intelligence inevitably becomes more important.

While the sometimes frightening scenarios played out in science fiction novels and films - where so often robots lay waste to humanity - remain securely in the realm of fantasy, Professor Wiles believes that some day machines will think like us.

"There's strong AI [artificial intelligence] and weak AI. Strong AI says there will be artificially intelligent creatures which are not biological. Weak AI says they will have a lot of the algorithms and they do already have a lot of those algorithms," she said.

"The bee, whose brain is as tiny as a sesame seed, already has better navigation abilities than even our best robots.

"So we have a little way to go before robots reach biological intelligence let alone human intelligence but I don't see why we shouldn't take steps towards it."

Wednesday, October 21, 2009

Scientists Develop New Material For Robot Muscles

A group of researchers at the University of Texas have made a breakthrough in their quest to find a stronger, more effective material for use in making artificial muscles for robots. They’ve found a way to create tangled nanotube ribbons that are, relative to weight, stronger than steel and stiffer than diamond.

The ribbons have the remarkable ability to expand in width by 220% when a voltage is applied. When the voltage is removed, the ribbons return to their normal state. The shift in state occurs over mere milliseconds. Additionally, the nanotube ribbons retain their properties in an extreme range of temperature: between -196 °C and 1538 °C.

The scientists construct the nanotube ribbons into an aerogel, which contains primarily air. A cubic centimeter of the stuff weighs just 1.5 milligrams; one gram could cover a space of 30 square meters. Researchers are still experimenting with ribbon lengths. So far, the team has produced ribbons that are 1/50th of a millimeter thick, 16 centimeters wide and several meters long. Larger sheets are in the works.
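The reported figures are, reassuringly, mutually consistent: at the quoted density and the 1/50-millimetre sheet thickness, one gram does spread to roughly 30 square metres. A quick back-of-envelope check:

```python
# Sanity check of the aerogel figures quoted in the article:
# density 1.5 mg per cubic centimetre, sheet thickness 1/50 mm.
density_mg_per_cm3 = 1.5            # 1 cm^3 weighs 1.5 mg
thickness_cm = (1 / 50) / 10        # 1/50 mm = 0.002 cm

volume_cm3 = 1000 / density_mg_per_cm3      # volume of 1 gram of aerogel
area_m2 = volume_cm3 / thickness_cm / 1e4   # sheet area in square metres

print(f"One gram covers roughly {area_m2:.0f} square metres")
```

The result is about 33 m², close to the quoted 30 m², so the density, thickness and coverage figures hang together.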

Besides serving as material for artificial muscles, the nanotube ribbons could be used to create sturdy, lightweight structures in space.

Hydrogen muscle silences the domestic robot

IF ROBOTS are ever going to be welcome in the home they will need to become a lot quieter. Building them with artificial muscles that run on hydrogen, instead of noisy compressed-air pumps or electric motors, could be the answer.

Kwang Kim, a materials engineer at the University of Nevada in Reno, came up with the idea after realising that hydrogen can be supplied silently by metal hydride compounds.

Metal hydrides can undergo a process called reversible chemisorption, allowing them to store and release extra hydrogen held by weak chemical bonds. It's this property that has led to the motor industry investigating metal hydrides as hydrogen "tanks" for fuel cells.

To make a silent artificial muscle, Kim and his colleague Alexandra Vanderhoff first compressed a copper and nickel-based metal hydride powder into peanut-sized pellets. They then secured them in a reactor vessel and pumped in hydrogen to "charge" the pellets with the gas. A heater coil surrounded the vessel, as heat breaks the weak chemical bonds and releases the stored hydrogen.

The next step was to connect the vessel to an off-the-shelf artificial muscle, which comprises an inflatable rubber tube surrounded by Kevlar fibre braiding. Two of these placed either side of a robotic joint can mimic the push/pull action of muscles by being alternately inflated and deflated (see diagram).

Turning the heater on and off controls the flow of hydrogen into the rubber tube, causing the muscle to move silently. Even better, the pair say, the muscle performs as well as those that run on compressed air systems (Smart Materials and Structures, DOI: 10.1088/0964-1726/18/12/125014). Importantly, the gas didn't leak.
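The heater on/off scheme amounts to simple bang-bang control of gas pressure. The sketch below uses a first-order toy model with invented rate constants; it illustrates the control idea only and does not reproduce the dynamics from the paper:

```python
# Toy bang-bang model of the heater-driven hydride actuator: heating
# the pellets releases stored hydrogen (pressure rises, muscle inflates);
# with the heater off, the hydride re-absorbs the gas (pressure falls).
# All constants are arbitrary illustration values.
def simulate(target_pressure, steps=200, dt=0.1):
    pressure = 0.0
    release_rate = 1.2   # hydrogen release rate with heater on (arbitrary units)
    absorb_rate = 0.8    # reabsorption rate with heater off
    history = []
    for _ in range(steps):
        heater_on = pressure < target_pressure   # simple bang-bang rule
        if heater_on:
            pressure += release_rate * dt        # heat frees stored hydrogen
        else:
            pressure -= absorb_rate * dt         # hydride re-absorbs the gas
        pressure = max(pressure, 0.0)
        history.append(pressure)
    return history

trace = simulate(target_pressure=5.0)
print(f"final pressure: {trace[-1]:.2f}")
```

In this model the pressure settles into a small oscillation around the target, which is the characteristic behaviour of on/off control; a real actuator controller would also have to account for the thermal lag of the heater coil.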

"The system has biological muscle-like properties for humanoid robots that need high power, large limb strokes - and no noise," says Kim.

Yoseph Bar-Cohen, an engineer specialising in artificial muscle technology at NASA's Jet Propulsion Lab in Pasadena, California, says this is "a novel approach" for controlling artificial muscles. "It is an important contribution and increases the arsenal of potential actuators that may become available in the future," he says.

iRobot Unveils Morphing Blob Robot

iRobot's latest robot is unique on many levels. The doughy blob moves by inflating and deflating - a new technique its developers call "jamming."

As the researchers explain in the video below, the jamming mechanism enables the robot to transition from a liquid-like to a solid-like state.

Earlier this week, researchers from iRobot and the University of Chicago presented the new "blob bot" at the IEEE/RSJ International Conference on Intelligent Robots and Systems.

As a new kind of chemical robot (or chembot), the blob bot has stretchy silicone skin, which is composed of multiple cellular compartments that each contain a "jammable slurry."

When some of these cells are unjammed, and an actuator in the center of the robot is inflated, the robot inflates in the areas of the unjammed cells. By controlling which cells are unjammed, the researchers can change the shape of the robot and make it roll in a specific direction.
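The cell-selection idea can be sketched with a toy model: treat the skin as a ring of cells and let the unjammed ones determine where the bulge, and hence the roll, goes. This is a hypothetical illustration, not iRobot's control code:

```python
# Toy model of "jamming" locomotion: the skin is a ring of cells,
# each jammed (rigid) or unjammed (compliant). Inflating the core
# expands only the unjammed cells, so the bulge -- and the roll --
# is biased toward wherever those cells sit on the ring.
import math

def roll_direction(unjammed_cells, n_cells=8):
    """Return the angle (degrees) toward which the blob bulges and rolls."""
    x = y = 0.0
    for i in unjammed_cells:
        angle = 2 * math.pi * i / n_cells   # cell position on the ring
        x += math.cos(angle)                # each unjammed cell expands
        y += math.sin(angle)                # outward, biasing the bulge
    return math.degrees(math.atan2(y, x)) % 360

# Unjamming the cells on the right-hand side of the ring (7, 0 and 1)
# biases the bulge, and the roll, to the right.
print(roll_direction([7, 0, 1]))
```

The real robot's shape control is of course continuous and three-dimensional, but the principle is the same: motion is steered by choosing which cells are compliant when the core inflates.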

The new robot is being funded by DARPA, which gave iRobot $3.3 million to work on the chembot last year. The goal is to build a robot that can squeeze through tiny openings smaller than its own dimensions, which could be valuable in a variety of missions.

The video shows the robot from about one year ago, and since then the researchers have been working on adding sensors and connecting multiple blob bots together.

Saturday, September 26, 2009

L300 Automatic Robot Lawn Mower Demo Video

Auto Lawn Mow presents the new L300, the top-of-the-range model in its line of auto lawn mowers. This is a robotic auto lawn mower that can handle over three acres of grass. Clean, effective and fully automatic, it lets you do whatever you want to do while the robot mows the grass. To understand the power of our flagship auto lawn mower, explore the range of features and benefits of the L300. This beautiful robotic lawn mower is everything you ever wanted and more!

Controlling and programming the new top-of-the-range L300 model couldn't be easier. Using a Bluetooth™ mobile phone, set the days and times you want the L300 to cut, or use the simple control panel on the rear of the mower. The heavy duty wheel motors are ideal for 30° slopes.

Intelligent mowing technology means that where the grass is longer, the L300 Robotic Lawn Mower will perform a 'Smart Spiral' function. In shorter grass, the L300 will save power by slowing the blade down.

This really is the Rolls Royce of robot lawn mowing. It not only looks good on the outside, it is very high-tech on the inside.

Fully Autonomous - The L300 Robotic Lawn mower returns to the recharge base on its own when the battery gets low - You can go all season long without worry. Get up to 8 hours without having to re-charge.

Thursday, September 17, 2009

Taiwan plans to build robot pandas

A CUTTING-EDGE lab in Taiwan aims to develop panda robots that are friendlier and more artistically endowed than their endangered real-life counterparts.

THE Centre for Intelligent Robots Research said its world-first panda robot was taking shape at the hands of an ambitious group of scientists hoping to add new dimensions to the island's reputation as a high-tech power.

"The panda robot will be very cute and more attractive to humans. Maybe the panda robot can be made to sing a panda song,'' the centre's 52-year-old director Jerry Lin said.

Day by day, the panda has evolved on the centre's computer screens and, if funding permitted, the robot would take its first steps by the end of the year.

"It's the first time we try to construct a quadrupedal robot," Jo Po-chia, a doctoral student who is in charge of the robot's design, said.

"We need to consider the balance problem."

The robo-panda was just one of many projects on the drawing board at the centre attached to the National Taiwan University of Science and Technology.

The Taipei-based centre also aimed to build robots that looked like popular singers, so exact replicas of world stars could perform in the comfort of their fans' homes.

"It could be a Madonna robot. It will be a completely different experience from just listening to audio,'' Mr Lin said.

Mr Lin and his team were also working on educational robots that acted as private tutors for children, teaching them vocabulary or telling them stories in foreign languages.

There is an obvious target market: China, with its tens of millions of middle-class parents doting on the one child they are allowed under strict population policies.

"Asian parents are prepared to spend a lot of money to teach their children languages,'' Mr Lin said.

Robots running amok were a fixture of popular literature but parents did not have to worry about leaving their children home alone with their artificial teachers, he said.

"A robot may hit you like a car or a motorbike might hit you," he said.

"But it won't suddenly lose control and get violent. Humans lose control, not robots. It's not like that.''

Mr Lin's long-term dream was to create a fully-functioning Robot Theatre of Taiwan, with an ensemble of life-like robots able to sing, dance and entertain.

Two robotic pioneers, Thomas and Janet, appeared before an audience in Taiwan in December, performing scenes from the Phantom of the Opera, but that was just the beginning, Mr Lin said.

"You can imagine a robot shooting down balloons, like in the wild west, using two revolvers, or three, but much faster than a person," Mr Lin said.

"Some things robots can do better than humans with the aid of technologies."

Robot prepares tea at CeBIT

Fair visitors look at the humanoid robotic system "Rollin' Justin" preparing a tea on March 2, 2009 at the world's biggest high-tech fair CeBIT in Hanover, central Germany.


A biomimetic underwater robot, named "RoboLobster", designed by Professor Joseph Ayers, is seen, Aug. 17, 2007, in Nahant, Massachusetts. RoboLobster is intended to be used to recognize changes in seawater and to locate and destroy underwater mines.

Let's talk about sex ... with robots

David Levy came to fame with chess programs, but now has ambitions to use his prize-winning chatbot software to spice up robots.

People often talk to machines, including computers and robots, and a growing number of AI (artificial intelligence) researchers are working to enable them to talk back. And soon, human-computer interactions may include having sex with them. That's the view of David Levy, who has just won the 2009 Loebner prize for the most human-like chatbot.

The annual competition, held last week in Brighton, is based on the "Turing test" inspired by a 1950 paper written by the British computer scientist Alan Turing: that when communicating only via a keyboard, a judge should have trouble distinguishing the software from a human being.

Turing predicted that by 2000, "an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning". Conversation programs are still a very long way from that, but Levy's Do-Much-More chatbot, developed at Intelligent Toys Ltd in London, has its charms. (An earlier version was called Do-A-Lot, after Eliza Doolittle in Shaw's play Pygmalion.) One Loebner conversation, for example, went as follows:

Judge-3: What do you do when you're not doing the Loebner prize?

Do-Much-More: Oh, I do much more than I intended to do at first. I got more interested in the job as I went along.

Judge-3: Is that difficult?

Do-Much-More: No, it isn't at all difficult. It is as peaceful as a platypus playing with a potato pudding.

Surprise win

Levy last won the competition when it was held in New York in 1997, so why did he wait so long to re-enter? It does, after all, carry a lot of prestige, and this year's cash prize was $3,000. "About 18 months ago, I was approached by an American startup, and I got involved with developing a chatbot for them. So I took some work I'd done after the last competition, and we extended it. I was quite pleased with it, and it occurred to me that the advances in chatbot quality since I first won the prize were really nothing to write home about. So, more as an experiment than anything else, I thought it would be interesting to see how I fared against the cream of the crop. I didn't enter with the idea that I was going to win. It surprised me a lot."

Levy has, of course, seen dramatic improvements in chess computers since "the Levy Challenge": in 1968, he bet £1,000 that no computer program would beat him in a chess match within 10 years. He didn't lose what had become a $5,000 challenge until 1989, and by 1997, a chess computer was capable of beating the world champion, Garry Kasparov. Chatbots started with Joseph Weizenbaum's Eliza "psychotherapist" in the 1960s: why haven't they made similar progress?

"It's a very difficult problem to solve, and to solve any of the major tasks in AI requires a huge amount of effort," says Levy. "One of the reasons computer chess progressed was that the subject was so interesting that there were hundreds of people all over the world working on chess programs, and on the hardware as well. I think that if the same effort was devoted to good conversational programs – if research institutes or governments or corporations threw enough money at it – the state of the art would advance even further."

Well, people nowadays often interact with artificial intelligences in games and on the web, so why aren't commercial needs already driving that investment?

"There are two things about the commercial world: one is to have the need, and the other is to have the confidence or the courage to invest significant resources," says Levy. Until recently there was justifiable doubt whether throwing a lot of money at the problem would produce something good enough to be used commercially. Now companies are probably beginning to realise that it might bring about the kind of advances they're looking for.

"For a program to be commercially successful in this field, it has to be interesting and entertaining over a long period. It's not enough to have someone conduct a conversation for two or three minutes and say, 'Oh, isn't that cute?' "

Of course, AI researchers have developed both chatbots and humanoid or at least pet-like robots, and it seems most likely the two will eventually converge. It's hard to imagine a good companion or carer robot that can't understand what people say, and that might also apply to sex robots. This is an area Levy got to know well through researching his 2007 book Love and Sex With Robots, which he then rewrote as a PhD thesis for Maastricht University in the Netherlands. It caused quite a stir.

"It did, yes, and I was very pleased about that," he replies. "I've done more interviews about Love and Sex With Robots than I have about computer chess!"

Almost human

But so far there hasn't been any commercial interest in adding conversation software to sex robots. "The state of the art is only a little further advanced than the Real Dolls of this world," he says. "There's a Japanese company that has a product called HoneyDoll, which has some electronic sensors. If the man strokes the nipples in the right way, the doll can make orgasmic sounds … There's also an engineer in Germany, Michael Harriman, who has developed a doll that has heating elements so most of the body is warm, apart from the feet."

There's also a lot of AI research going into artificial emotions and artificial personalities; into things such as artificial skin in the medical industries; and in Japan, into carer robots, which the Japanese government sees as the only way of caring for rapidly growing numbers of older people. All these should make it possible to produce far more sophisticated robot companions than Tamagotchi, Furby, Aibo and Robosapiens.

"I think the sex robot will happen fairly soon because the bottom is dropping out of the adult entertainment market, because there's so much sex available for nothing on the internet," says Levy. "I think the market was worth something like $12bn a year, and they aren't going to want to lose all their income, and this seems to me an obvious direction to go. The market must be vast, if you think of the number of vibrators that sell to women. I'm sure a male sex doll with a vibrating penis will sell better than sex dolls today. I'll be surprised if it's more than another three years or so before we see more advanced sex dolls with more electronics and electromechanics.

"There will be a huge amount of publicity when products like this hit the market. As soon as the media starts writing about 'My fantastic weekend with a sex doll', it will be like the iPhone all over again, but the queues will be longer.

"I am firmly convinced there will be a huge demand from people who have a void in their lives because they have no one to love, and no one who loves them. The world will be a much happier place because all those people who are now miserable will suddenly have someone. I think that will be a terrific service to mankind."

Twendy-One demonstrates robo-dexterity!

Twendy-One demonstrates its ability to hold delicate objects by manipulating a drinking straw between its fingers at the Department of Mechanical Engineering laboratory in Waseda University in Tokyo, Japan, Wednesday, Jan. 14, 2009. The sophisticated robot has been developed by the university's team, led by Dr. Shigeki Sugano, in hope of supporting people in aging societies.

Driverless trucks and voice-activated pets could be commonplace by 2019

Driverless juggernauts could be on our roads within ten years, experts predict.

And these trucks look like being the forerunners of a robot revolution.

According to the Royal Academy of Engineering, artificially intelligent robots and computers capable of making life and death decisions will become more and more common in all aspects of life.

The academy wants a public debate about the social, legal and ethical issues raised by the increasing use of 'thinking' machines such as surgeons, soldiers, babysitters, therapists, carers for the old and even sex partners.

Their report, called Autonomous Systems, explains how the computer-directed trucks would use data from laser-radar, video cameras and sat-nav to steer through traffic and pedestrians.

Report co-author Professor Will Stewart, of Southampton University, said driverless lorries and cars would make motoring far safer.

'The machine is a perfectly safe object. It is not prone to some of the things that you and I are prone to,' he said. 'It can run 24 hours a day without getting tired and it will always do the same thing.'

He said the technology is already in place for driverless cars and robotic taxis that take passengers to any destination are likely within 20 years. Fully automated trains are already in use on London's Docklands Light Railway and a driverless taxi that can do 25mph on a network of narrow roads will be launched next year at Heathrow.

Professor Stewart said automated vehicles would be most useful for haulage, adding: 'I think in ten years 30 per cent of trucks could be machine-operated.' Their computers will be programmed to predict the behaviour of other road users, to slow down safely if other vehicles get too close and to learn from their mistakes.

If a lorry detected a mechanical or software fault it would pull over and radio for help.



Using laser-radar and cameras they will scan for traffic and build up a 3-D picture of the road around them. They will be programmed to anticipate dangers such as pedestrians crossing, other vehicles and debris. Driverless taxis are due to appear at Heathrow next year


Intelligent and responsive robot dogs, pets and birds that react to voice commands and seek out their owner's company. Can be fitted with sensors and alarms to alert relatives if their owner falls ill.


An autonomous robot was used in a kidney transplant in London in June. Initially designed for remote areas or battlefields, they could be fitted with 3-D ultrasound and video cameras and used for routine operations.


Primitive versions on sale in Japan can recognise faces, make conversation and keep track of babies. Later models could educate and entertain children and contact parents by phone or alarm if they get into trouble or fall ill.

Welcome to the robot revolution

Much like the then-fledgling PC industry in the late 1970s, the robotics industry is on the cusp of a revolution, contends the head of Microsoft Corp.'s robotics group.

Today's giant, budget-bending robots that are run by specialists in factories and on assembly floors are evolving into smaller, less-expensive and cuter machines that clean our carpets, entertain us and may someday take care of us as we grow old. The move is akin to the shift from the mainframe world of the 1970s to the personal computers that invaded our offices and homes over the past 20 to 25 years.

"The transition is starting," said Tandy Trower, general manager of Microsoft's 3-year-old robotics group. "It's like we're back in 1977 -- four years before the IBM PC came out. We were seeing very primitive but very useful machines that were foreshadowing what was to come. In many ways, they were like toys compared to what we have today. It's the same with robots now."

Trower said many countries are making significant investments in robotics, and advances are beginning to multiply. Robotic aids and companions -- some looking like an updated version of R2-D2 and others more humanoid -- will begin moving into our homes in three to five years as technology advances and prices drop, he predicted.

"Robots are really an evolution of the technology we have now," Trower said. "We're just adding to our PCs, really. We're letting them get up off our desks and move around. They're evolving into something you will engage with and will serve you in your life someway."

Some experts, though, are hesitant to talk of revolutions, especially in an industry that has seen many promises made that have yet to materialize.

James Kuffner, an associate professor at the Robotics Institute at Carnegie Mellon University, warns that any revolution could be lengthy, as robots likely won't be doing dishes and walking dogs for about 20 years.

"People ask me when they'll have a Jetsons-like robot walking around their house," Kuffner said. "I tell them the first gas-powered engine was built in 1885, but it took until 1915 before a large segment of the population could afford a car. When that happened, society was transformed. In the 1950s, the first computers were built, but it wasn't until the early '80s when the personal computer came on the scene. And, of course, it completely transformed society."

Kuffner said he believes the robot revolution countdown should start in 1996, when Honda Motor Co. released the P2, a self-contained, life-size humanoid machine. Going by historical example, a good portion of the population could have a robot in the home by 2026, he said.

"The Roomba vacuum cleaner is often seen as the first successful home robot, but it's pretty limited," Kuffner added. "So, sure, you can say we have robots in our homes. But a humanoid robot like you see in Hollywood movies, designed to perform a large number of tasks without special programming or tuning? In about 20 years."

Neena Buck, an independent robotics analyst based in Cambridge, Mass., agreed that the robotics business will take off, but said that it will be some time before humanoid robots are washing cars or dancing. First, she said, there will be single-task robots for house cleaning and the like, and exoskeletal robots to help people with infirmities.

"A Jetsons robot -- I don't think that's how it will happen," she said. "Maybe people need to change their vision of a robot."

Trower told Computerworld that robotics has been slow to grow in recent years because of the lack of a standard software platform -- the very thing Microsoft Corp. mandated he create.

The Microsoft robotics group, which is tasked with generating profits within three to five years, is now updating its Robotics Studio software, which includes a tool set and a set of programming libraries that sit on top of Windows. The studio also includes a programming language and a simulator, so that developers can first try out programs in a virtual world. The latest version of the studio platform is slated to ship by the end of this year, according to Trower.

"The robotics industry needs portability," said Trower. "There's been no standard. We wanted to make it easy for the industry to bootstrap itself. I truly think software is holding the robotic industry back."

Software was definitely holding back graduate students at the University of Massachusetts, Amherst, in their quest to build a new version of the school's uBot robot.

Bryan Thibodeau and Patrick Deegan are both graduate students who have been building the fifth generation of uBot, dubbed uBot-5, a two-wheeled, two-armed robot that can maintain its balance.

The developers said they expect to save significant time during the development of uBot-6 due to the use of Robotics Studio in their current project. "We can transfer applications we've written before for this to other robots," said Deegan. "This is the fifth generation, and we had to write code from scratch every time. The next time, we won't. It'll save us tons of time -- probably six months minimum. Now, we can start from here and keep going."

During a demonstration of the uBot-5, Thibodeau said that the developers will spend a lot less time simply reinventing the wheel. "Now we can focus on doing more, instead of doing the same thing over again," he added.

Deegan and Thibodeau noted that they hope the uBot will eventually be used to help care for the growing elderly population, helping them stay in their homes longer and more safely.

With two arms that could one day open a door, two wheels to move it about a home, and a rotating torso and touch screen that could enable it to "look" about its environment, Trower called uBot-5 a good example of what's likely the next generation of in-home robots.

"The idea of dexterous manipulation makes a difference," said Trower. "It would be able to interact with things in the home environment, load the dishwasher, fold clothes. Once it has two arms, it opens up a huge variety of possibilities."

A touch screen that sits on the uBot-5's shoulders could act, for example, as a sort of portal for an elderly woman living alone. If the woman fell and was unresponsive, the robot could be programmed to recognize the problem and alert emergency response services. Her doctor could access the robot through his computer, see what the robot sees and speak to the woman through the robot. His face could appear on the screen, making it more natural for the two to talk to each other, using the robot as the conduit.

Richard Doherty, research director at The Envisioneering Group, a market research firm in Seaford, N.Y., said progress in the robotics industry could be limited or slowed because people will be afraid of losing their jobs -- such as a home care assistant -- to robots.

"In this country, people are afraid for their jobs. They don't want to see a robotic coffee maker or robots that could change your oil … or take care of the elderly," said Doherty. "It's job inertia. … We need to see robots in a different light. We need people to understand that this machine could help care for their grandmother."

This is exactly the kind of aid and companionship that one artificial intelligence researcher expects to see from robots in the coming years. David Levy, a British artificial intelligence researcher whose book, Love and Sex with Robots, was released last November, said in a previous interview that robotics will make such dramatic advances in the coming years that humans will be marrying robots by the year 2050.

"Robots started out in factories making cars. There was no personal interaction," said Levy, who is also an international chess master who has been developing computer chess games for years. "Then people built mail-cart robots, and then robotic dogs. Now robots are being made to care for the elderly. In the last 20 years, we've been moving toward robots that have relationships with humans, and it will keep growing toward a more emotional relationship, a more loving one and a sexual one."

While iRobot Corp.'s Roomba may be a vacuum cleaner and not a companion, Trower noted that people who own the robots identify with them, often naming them, drawing faces on them and even insisting that broken ones be repaired rather than replaced with a new machine.

"This is part of the evolution," said Trower. "We now see robots coming into people's lives and living with us. It's sneaking in and saying, 'Aren't I cute?'"

Smart building

Think of it as home automation but on a far larger scale: The Small Robotics Building project is a joint undertaking by Shimizu Corp and Yasukawa Electric Corp in Japan.

Utilizing smart infrastructure technology and robotics, the companies are creating an automated living environment that can handle such duties as reception, deliveries, cleaning, and security, without the need for human intervention.

Instead of relying on individual robots to perform functions like human detection and device control, all this is handled by the building-wide network, which then dispatches robots to perform various tasks.
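The building-wide dispatch idea can be sketched in a few lines. This is purely illustrative, not the actual Shimizu/Yasukawa system: a central network receives an event from the building's sensors and assigns the nearest idle robot to it.

```python
# Hypothetical sketch of centralized dispatch: building sensors report an
# event, and the network assigns the closest idle robot. All names here
# are illustrative.

def dispatch(robots, event):
    """Assign the nearest idle robot to an event like 'delivery at floor 4'."""
    idle = [r for r in robots if r["status"] == "idle"]
    if not idle:
        return None  # queue the event until a robot frees up
    best = min(idle, key=lambda r: abs(r["floor"] - event["floor"]))
    best["status"] = "busy"
    best["task"] = event["task"]
    return best

robots = [
    {"id": "R1", "floor": 1, "status": "idle", "task": None},
    {"id": "R2", "floor": 5, "status": "idle", "task": None},
]
assigned = dispatch(robots, {"floor": 4, "task": "delivery"})
print(assigned["id"])  # -> R2 (closest idle robot to floor 4)
```

The point of the architecture is that sensing and decision-making live in the building's network, so the robots themselves can stay simple.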

Saturday, September 12, 2009

New NASA spacebot

NASA's Limbed Excursion Mechanical Utility Robot (LEMUR) is being designed as an inspection/maintenance robot for equipment in space. A scaled-up version of Lemur IIa could help build large structures in space. The Lemur IIa pictured here is shown on a scale model of a segmented telescope.

Friday, September 11, 2009

Cardio Home Automation System

The Cardio Home Automation System was designed and produced in 1992 by Secant, a Canadian company.
Home automation is one of those things that everyone predicted would take off years ago, but it has never really materialised for the masses.

Amigo intelligent home network

This is a video from the EU-IST funded Amigo project. The Amigo project develops an open, service-oriented middleware architecture for context-aware networked home environments. This video envisions a day in the life of a family living in such an intelligent home environment. It's a couple of years old, but still reasonably relevant.

Monday, September 7, 2009

Plasmobot: the slime mould robot

Though not famed for their intellect, single-celled organisms have already demonstrated a surprising degree of intelligence. Now a team at the University of the West of England (UWE) has secured £228,000 in funding to turn these organisms into engineering robots.

In recent years, single-celled organisms have been used to control six-legged robots, but Andrew Adamatzky at UWE wants to go one step further by making a complete "robot" out of a plasmodium slime mould, Physarum polycephalum, a commonly occurring mould that moves towards food sources such as bacteria and fungi, and shies away from light.

Affectionately dubbed Plasmobot, it will be "programmed" using light and electromagnetic stimuli to trigger chemical reactions similar to a complex piece of chemistry called the Belousov-Zhabotinsky reaction, which Adamatzky previously used to build liquid logic gates for a synthetic brain.

By understanding and manipulating these reactions, says Adamatzky, it should be possible to program Plasmobot to move in certain ways, to "pick up" objects by engulfing them and even assemble them.

Initially, Plasmobot will work with and manipulate tiny pieces of foam, because they "easily float on the slime", says Adamatzky. The long-term aim is to use such robots to help assemble the components of micromachines, he says.

Wednesday, September 2, 2009

Next-gen exploration robots

Two All-Terrain Hex-Legged Extra-Terrestrial Explorer (ATHLETE) rovers traverse the desert terrain adjacent to Dumont Dunes, CA. The ATHLETE rovers are being built to be capable of rolling over Apollo-like undulating terrain and "walking" over extremely rough or steep terrain for future lunar missions.

Tuesday, September 1, 2009

Parasitic Robots Feed Off Stray Energy

A Mexican artist has created a series of robots as art forms that depict human life in a consumption based society.

Saturday, August 29, 2009

Housekeeper Robots

.. from our friends in Japan, of course. Where else?

Friday, August 28, 2009

Lobsters teach robots magnetic mapping trick

SPINY lobsters have become the unlikely inspiration for a robot with a unique sense of direction. Like the lobster, it uses a map of local variations in the Earth's magnetic field to find its way around - a method that could give domestic robots low-cost navigational capabilities.

In 2003, computer scientist Janne Haverinen read in Nature (vol 421, p 60) about the amazing direction-finding ability of the Caribbean spiny lobster Panulirus argus. A team from the University of North Carolina, Chapel Hill, had moved the critters up to 37 kilometres from where they were caught and deprived them of orientational cues, but found they always set off in the right direction home. They concluded P. argus must navigate with an inbuilt map of local anomalies in the Earth's magnetic field.

"My first inspiration came from birds, ants and bees," says Haverinen. "But the spiny lobster clinched it for me."

The findings set Haverinen, who works in the intelligent systems lab at the University of Oulu, Finland, wondering if he could draw magnetic maps of buildings for domestic and factory robots. It is well known that compasses are sent haywire by the metal in buildings - plumbing, electrical wiring and the steel rods in reinforced concrete, for instance - and cannot find magnetic north. Haverinen's idea was that these distortions of the Earth's magnetic field might create a distinctive magnetic topography.

"So we decided to try to use this 'magnetic landscape' - the array of disturbances - that was upsetting the compass as a map for a robot," says Haverinen.

The team used a magnetometer to scan the magnetic field strength close to the floor in their lab (see picture) and in a 180-metre corridor in a local hospital. They then stored the field variations in the memory of a small wheeled robot and mounted a magnetometer on a rod projecting in front of it to prevent interference from its motors.

The robot was able to work out where it was and to drive along the corridor without a vision system. What's more, the magnetic map stayed true a year after the first mapping was done, Haverinen reports in Robotics and Autonomous Systems (DOI: 10.1016/j.robot.2009.07.018).

"So there just might be enough stable information for robots to work out where they are in the ambient magnetic field," he says. That would obviate the need for expensive "indoor GPS" systems in which triangulation between fixed radio beacons in a building tells robots their position.

"Reliance on any one guidance method is not a great idea in case it fails," warns Chris Melhuish, director of the Bristol Robotics Laboratory in the UK. "But you could use a system like this, if it's proven to work, to boost your confidence in a robot by using it in conjunction with, say, vision-based navigation."
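The map-then-match approach described above can be sketched simply. This is an illustrative toy, not Haverinen's actual algorithm: the robot slides a short window of fresh magnetometer readings along a previously recorded field-strength profile and takes the best match as its position.

```python
# Illustrative sketch (not the published method): localise a robot along a
# corridor by matching recent magnetometer readings against a stored
# magnetic field-strength profile.

def localize(map_profile, window):
    """Return the map index where `window` best matches the stored profile
    (minimum sum of squared differences)."""
    n, w = len(map_profile), len(window)
    best_i, best_err = 0, float("inf")
    for i in range(n - w + 1):
        err = sum((map_profile[i + j] - window[j]) ** 2 for j in range(w))
        if err < best_err:
            best_i, best_err = i, err
    return best_i

# A toy corridor map: field strength (microtesla) sampled every 10 cm.
corridor = [48.0, 47.5, 51.2, 55.0, 49.1, 46.8, 47.0, 52.3, 50.0, 48.2]
reading = [55.0, 49.1, 46.8]        # three fresh samples from the robot
print(localize(corridor, reading))  # -> 3 (robot is ~30 cm along the map)
```

In practice the matching would need to tolerate sensor noise and be fused with odometry, which is why Melhuish's caution about relying on a single guidance method applies.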

Saturday, August 22, 2009

Real-Life Decepticons: Robots Learn to Cheat

The robots — soccer ball-sized assemblages of wheels, sensors and flashing light signals, coordinated by a digital neural network — were placed by their designers in an arena, with paper discs signifying “food” and “poison” at opposite ends. Finding and staying beside the food earned the robots points.

At first, the robots moved and emitted light randomly. But their innocence didn’t last. After each iteration of the trial, researchers picked the most successful robots, copied their digital brains and used them to program a new robot generation, with a dash of random change thrown in for mutation.

Soon the robots learned to follow the signals of others who’d gathered at the food. But there wasn’t enough space for all of them to feed, and the robots bumped and jostled for position. As before, only a few made it through the bottleneck of selection. And before long, they’d evolved to mute their signals, thus concealing their location.

Signaling in the experiment never ceased completely. An equilibrium was reached in the evolution of robot communication, with light-flashing mostly subdued but still used, and different patterns still emerging. The researchers say their system’s dynamics are a simple analogue of those found in nature, where some species, such as moths, have evolved to use a biologist-baffling array of different signaling strategies.

“Evolutionary robotic systems implicitly encompass many behavioral components … thus allowing for an unbiased investigation of the factors driving signal evolution,” the researchers wrote Monday in the Proceedings of the National Academy of Sciences. “The great degree of realism provided by evolutionary robotic systems thus provides a powerful tool for studies that cannot readily be performed with real organisms.”

Of course, it might not be long before robots directed towards self-preservation and possessing brains modeled after — if not containing — biological components are considered real organisms.
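The select-copy-mutate cycle the researchers used can be sketched in miniature. Here the "brain" is just a list of weights and the fitness function is a stand-in; the real experiment evolved neural controllers for food-finding robots.

```python
# A minimal sketch of the evolutionary loop described above: keep the
# fittest "brains", refill the population with mutated copies of them.
# The fitness function here is a toy stand-in, not the food/poison task.
import random

def evolve(population, fitness, n_survivors=2, mutation=0.1, rng=random):
    """One generation: elitist selection plus Gaussian mutation."""
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:n_survivors]
    children = []
    while len(survivors) + len(children) < len(population):
        parent = rng.choice(survivors)
        children.append([w + rng.gauss(0, mutation) for w in parent])
    return survivors + children

rng = random.Random(0)
pop = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(10)]
fitness = lambda brain: -sum(w * w for w in brain)  # toy objective: weights near 0
initial_best = max(fitness(b) for b in pop)
for _ in range(50):
    pop = evolve(pop, fitness, rng=rng)
final_best = max(fitness(b) for b in pop)
```

Because the best individuals are copied forward unchanged, the best fitness in the population can never decrease between generations, which is what drives the steady emergence of behaviours like signal-muting in the experiment.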

Monday, August 17, 2009

Robot tourguide in Taiwan

The MSI-produced robot named "Rich" demonstrates giving a tour, walking down a garden trail in the Grand Hills apartment showroom of the Far Glory property company in Linkou, Taipei County, Taiwan.

Tuesday, August 11, 2009

Robots to get their own operating system

The UBot whizzes around a carpeted conference room on its Segway-like wheels, holding aloft a yellow balloon. It hands the balloon to a three-fingered robotic arm named WAM, which gingerly accepts the gift.

Cameras click. "It blows my mind to see robots collaborating like this," says William Townsend, CEO of Barrett Technology, which developed WAM.

The robots were just two of the multitude on display last month at the International Joint Conference on Artificial Intelligence (IJCAI) in Pasadena, California. But this happy meeting of robotic beings hides a serious problem: while the robots might be collaborating, those making them are not. Each robot is individually manufactured to meet a specific need and more than likely built in isolation.

This sorry state of affairs is set to change. Roboticists have begun to think about what robots have in common and what aspects of their construction can be standardised, hopefully resulting in a basic operating system everyone can use. This would let roboticists focus their attention on taking the technology forward.

One of the main sticking points is that robots are typically quite unlike one another. "It's easier to build everything from the ground up right now because each team's requirements are so different," says Anne-Marie Bourcier of Aldebaran Robotics in Paris, France, which makes a half-metre-tall humanoid called Nao (pictured).

Some robots, like Nao, are almost autonomous. Others, like the UBot, are semi-autonomous, meaning they perform some acts, such as balancing, on their own, while other tasks, like steering, are left to a human operator.

Also, every research robot is designed for a specific objective. The UBot's key ability is that it can balance itself, even when bumped - crucial if robots are to one day work alongside clumsy human beings. The Nao, on the other hand, can walk and even perform a kung-fu routine, as long as it is on a flat, smooth surface. But it can't balance itself as robustly as the UBot and won't easily be able to learn how.

On top of all this, each robot has its own unique hardware and software, so capabilities like balance implemented on one robot cannot easily be transferred to others.

Bourcier sees this changing if robotics advances in a manner similar to personal computing. For computers, the widespread adoption of Microsoft's Disk Operating System (DOS), and later Windows, allowed programmers without detailed knowledge of the underlying hardware and file systems to build new applications and build on the work of others.

Programmers could build new applications without detailed knowledge of the underlying hardware

Bringing robotics to this point won't be easy, though. "Robotics is at the stage where personal computing was about 30 years ago," says Chad Jenkins of Brown University in Providence, Rhode Island. Like the home-brew computers of the late 70s and early 80s, robots used for research today often have a unique operating system (OS). "But at some point we have to come together to use the same resources," says Jenkins.

This desire has its roots in frustration, says Brian Gerkey of the robotics research firm Willow Garage in Menlo Park, California. "People reinvent the wheel over and over and over, doing things that are not at all central to what they're trying to do."

For example, if someone is studying object recognition, they want to design better object-recognition algorithms, not write code to control the robot's wheels. "You know that those things have been done before, probably better," says Gerkey. But without a common OS, sharing code is nearly impossible.

The challenge of building a robot OS for widespread adoption is greater than that for computers. "The problems that a computer solves are fairly well defined. There is a very clear mathematical notion of computation," says Gerkey. "There's not the same kind of clear abstraction about interacting with the physical world."

Nevertheless, roboticists are starting to make some headway. The Robot Operating System, or ROS, is an open-source set of programs meant to serve as a common platform for a wide range of robotics research. It is being developed and used by teams at Stanford University in California, the Massachusetts Institute of Technology and the Technical University of Munich, Germany, among others.

ROS has software commands that, for instance, provide ways of controlling a robot's navigation, and its arms, grippers and sensors, without needing details of how the hardware functions. The system also includes high-level commands for actions like image recognition and even opening doors. When ROS boots up on a robot's computer, it asks for a description of the robot that includes things like the length of its arm segments and how the joints rotate. It then makes this information available to the higher-level algorithms.
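The hardware-description idea can be illustrated in plain Python (this is a sketch of the concept, not actual ROS code): the system loads a description of the robot once, and higher-level algorithms query it instead of hard-coding a particular robot's geometry.

```python
# Sketch of hardware abstraction in the spirit described above: a generic
# algorithm works from a robot description rather than from hard-coded
# geometry. The description format here is invented for illustration.
import math

# Description loaded "at boot": a planar arm with two segments and two
# revolute joints (segment lengths in metres).
robot = {"segments": [0.3, 0.2]}

def fingertip(robot, joint_angles):
    """Generic planar forward kinematics: works for any arm description
    with matching segment and joint counts."""
    x = y = angle = 0.0
    for length, theta in zip(robot["segments"], joint_angles):
        angle += theta
        x += length * math.cos(angle)
        y += length * math.sin(angle)
    return x, y

# With both joints at zero the arm lies along the x-axis:
print(fingertip(robot, [0.0, 0.0]))  # -> (0.5, 0.0)
```

The same `fingertip` routine would work unchanged on a three-segment arm, which is exactly the kind of reuse a common OS is meant to enable.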

A standard OS would also help researchers focus on a key aspect that so far has been lacking in robotics: reproducibility.

Often, if a team invents, say, a better navigation system, they will publish the results but not the software code. Not only are others unable to build on this discovery, they cannot independently verify the result. "It's useful to have people in a sense constrained by a common platform," says Giorgio Metta, a robotics researcher at the Italian Institute of Technology in Genoa. "They [will be] forced to do things that work, because somebody else can check. I think this is important, to make it a bit more scientifically oriented."

ROS is not the only robotic operating system vying to be the standard. Microsoft, for example, is trying to create a "Windows for robots" with its Robotics Developer Studio, a product that has been available since 2007.

Gerkey hopes to one day see a robot "app store" where a person could download a program for their robot and have it work as easily as an iPhone app. "That will mean that we have solved a lot of difficult problems," he says.

Monday, August 10, 2009

Boffins work on world's first synthetic brain

LONDON: The world's first synthetic brain could be built within 10 years, giving us an unprecedented insight into the nature of consciousness and our perception of reality.

Scientists working on the Blue Brain Project in Switzerland are the first to attempt to "reverse-engineer" the mammalian brain by recreating the behaviour of billions of neurons on a computer.

Professor Henry Markram, director of the project at the Ecole Polytechnique Fédérale de Lausanne, has already simulated parts of the neocortex, the most "modern" region of the brain, which evolved rapidly in mammals to cope with the demands of parenthood and social situations.

Professor Markram's team created a 3D simulation of about 10,000 brain cells to mimic the behaviour of the rat neocortex. The way all the cells connect and send signals to each other is just as important as how many there are.

"You need one laptop to do all the calculations for one neuron, so you need 10,000 laptops," Professor Markram told the TEDGlobal conference in Oxford yesterday. Instead, he uses an IBM Blue Gene supercomputer.

The artificial brain is already revealing some of the inner workings of the most impressive 1.5 kilograms of biological tissue ever to evolve. Show the brain a virtual image and its neurons flicker with electrical activity as the image is processed.

Ultimately, scientists want to use synthetic brains to understand how sensory information from the real world is interpreted and stored, and how consciousness arises.

Sunday, August 9, 2009

Artificial intelligence technology could soon make the internet an even bigger haven for bargain-hunters

Software "agents" that automatically negotiate on behalf of shoppers and sellers are about to be set free on the web for the first time.

The "Negotiation Ninjas", as they are known, will be trialled on a shopping website called Aroxo in the autumn.

The intelligent traders are the culmination of 20 years' work by scientists at Southampton University.

"Computer agents don't get bored, they have a lot of time, and they don't get embarrassed," Professor Nick Jennings, one of the researchers behind the work, told BBC News.

"I have always thought that in an internet environment, negotiation is the way to go."

Price fixing

The agents use a series of simple rules - known as heuristics - to find the optimal price for both buyer and seller based on information provided by both parties.

Heuristics are commonly used in computer science to find an optimal solution to a problem when there is not a single "right answer".

They are often used in anti-virus software to trawl for new threats.

"If you can't analyse mathematically exactly what you should do, which you can't in general for these sorts of systems, then you end up with heuristics," explained Professor Jennings.

"We use heuristics to determine what price we should offer during the negotiation - and also how we might deal with multiple negotiations at the same time.

"We have to factor in some degrees of uncertainty as well - the chances are that sellers will enter into more negotiations than they have stock."

To use one of the intelligent agents, sellers must answer a series of questions about how much of a discount they are prepared to offer and whether they are prepared to go lower after a certain number of sales, or at a certain time of day.

They are also asked how eager they are to make a sale.

At the other end, the buyer types in the item they wish to purchase and the price they are willing to pay for it.

The agents then act as an intermediary, scouring the lists of sellers who are programmed to accept a price in the region of the one offered.

If they find a match, the seller is prompted to automatically reply with a personalised offer.

The buyer then has a choice to accept, reject or negotiate. If they choose to negotiate, the agent analyses the seller's criteria to see if they can make a better offer.

The process continues until either there is a sale or one of the parties pulls out.
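One common concession heuristic can be sketched to make the idea concrete. This is an illustrative toy, not Aroxo's actual algorithm: each round, the seller's agent gives up a fraction of the gap between its current offer and its floor price, scaled by how eager it is to sell.

```python
# Illustrative concession heuristic (not the Negotiation Ninjas' actual
# rules): the seller's agent concedes a fraction of the remaining room
# above its floor price each round, until a deal is struck or talks stall.

def negotiate(buyer_bid, seller_ask, seller_floor, eagerness=0.5, max_rounds=10):
    """Return (price, rounds) if a deal is reached, else (None, rounds)."""
    offer = seller_ask
    for rounds in range(1, max_rounds + 1):
        if offer <= buyer_bid:        # seller's offer now meets the buyer's price
            return offer, rounds
        # concede part of the remaining room above the floor price
        offer -= eagerness * (offer - seller_floor)
        offer = max(offer, seller_floor)
    return None, max_rounds           # parties too far apart: no sale

price, rounds = negotiate(buyer_bid=95, seller_ask=120, seller_floor=90)
print(price, rounds)  # -> 93.75 4
```

If the buyer's bid sits below the seller's floor (say 80 against a floor of 90), the same loop runs out of rounds and returns no sale, which is the "one of the parties pulls out" branch described above.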

Aroxo will be trialling the Negotiation Ninjas from the autumn, and plans to have the system fully operational in time for Christmas shopping this year.

The site currently offers mainly electrical goods.

While sellers will not have to pay to use the Ninjas themselves, they will pay to contact a buyer. The charge from Aroxo is 0.3% of the buyer's original asking price.

For Professor Jennings, this application of his research marks a return to a more traditional retail model.

"Fixed pricing is a relatively recent phenomenon," he said. "Throughout history most transactions have been negotiated. Only in the last 100 years have we gone for fixed pricing."

Sunday, August 2, 2009

Will artificial intelligence invade Second Life?

Popular culture is filled with different notions of what artificial intelligence should or will be like. There's the all-powerful Skynet from the "Terminator" movies, "Star Wars"-style androids, HAL from "2001: A Space Odyssey," the classic sentient computer program, carrying on a witty conversation through a computer terminal. Soon, we may have to add another to the list.

In September 2007, a software company called Novamente, along with the Electric Sheep Company, a producer of add-ons for virtual worlds, announced plans to release artificial intelligences (AIs) into virtual worlds like the ultra-popular "Second Life."

Novamente's "intelligent virtual agents" would use online games and virtual worlds as a development zone, where they will grow, learn and develop by interacting with humans. The company said that it will start by creating virtual pets that become smarter as they interact with their (human-controlled) avatar owners. (An avatar is the character or virtual representation of a player in a virtual world.) More complex artificially controlled animals and avatars are expected to follow.

Novamente's artificial intelligence is powered by a piece of software called a "Cognition Engine." Pets and avatars powered by the Cognition Engine will feature a mix of automated behaviors and learning and problem-solving capabilities. Ben Goertzel, the CEO of Novamente, said that his company had already created a "fully functioning animal brain".

Goertzel envisioned Novamente's first artificial intelligences as dogs and monkeys, initially going on sale at your local virtual pet shop in October 2007.

These virtual pets will work much like real pets -- trainable, occasionally misbehaving, showing the ability to learn and perform tasks, and responding positively to rewards. After dogs and monkeys, Novamente would then move on to more complex creatures, such as parrots that, like their real-life counterparts, could learn to speak.

Finally, the company expects to produce virtual human babies that, propelled by their own artificial intelligence, would grow, develop and learn in the virtual world.

While we frequently see or read about robots with interesting capabilities, scientists have struggled for decades to create anything approaching a genuine artificial intelligence. A robot may be an expert at one skill, say shooting a basketball, but numerous basic tasks, such as walking down stairs, may be beyond its capabilities. This is where a virtual world has its advantages, Goertzel says.

Next, we'll look at why virtual worlds may present the next and best frontier for the development of artificial intelligence.

Advantages of Artificial Intelligence in Virtual Worlds

While we already deal with some virtual AI -- notably in action games against computer-controlled "bots" or challenging a computer opponent to chess -- the work of Novamente, Electric Sheep Company and other firms has the potential to initiate a new age of virtual AI, one where, for better or worse, humans and artificial intelligences could potentially be indistinguishable.

If you think about it, we take in numerous pieces of information just walking down the street, much of it unconsciously. You might be thinking about the weather, the pace of your steps, where to step next, the movement of other people, smells, sounds, the distance to the destination, the effect of the environment around you and so forth.

An artificial intelligence in a virtual world has fewer of these variables to deal with because as of yet, no virtual world approaches the complexity of the real world. It may be that by simplifying the world in which the artificial intelligence operates (and by working in a self-contained world), some breakthroughs can be achieved.

Such a process would allow for a more linear development of artificial intelligence rather than an attempt to immediately jump to lifelike robots capable of learning, reason and self-analysis.

Goertzel states that a virtual world also offers the advantage of allowing a newly formed artificial intelligence to interact with thousands of people and characters, increasing learning opportunities. The virtual body is also easier to manage and control than that of a robot.

If an AI-controlled parrot seems to have particular challenges in a game world, it's less difficult for programmers to create another virtual animal than if they were working with a robot. And while a virtual world AI lacks a physical body, it displays more complexity (and more realism) than a simple AI that merely carries on text-based conversations with a human.

Novamente claims that its system is the first to allow artificial intelligences to progress through a process of self-analysis and learning. The company hopes that its AI will also distinguish itself from other attempts at AI by surprising its creators in its capabilities -- for example, by learning a skill or task that it wasn't programmed to perform.

Novamente has already created what it terms an "artificial baby" in the AGISim virtual world. This artificial baby has learned to perform some basic functions.

Despite all of this excitement, the AIs discussed here are far from what's envisioned in "Terminator." It will be some time before AIs are seamlessly interacting with players, impressing us with their cleverness and autonomy and seeming all too human.

Even Philip Rosedale, the founder of Linden Labs, the company behind "Second Life," has warned against becoming caught up in the hype of the supposedly groundbreaking potential of these virtual worlds.

But "Second Life" and other virtual worlds may prove to be the most valuable testing grounds to date for AI. It will also be interesting to track how virtual artificial intelligences progress as the virtual worlds they occupy change and become more complex.

Besides acting as an incubator for artificial intelligence, "Second Life" has already been an important case study in the development of cyber law and the economics and legality of hawking virtual goods for real dollars.

The popular virtual world has even been mentioned as a possible virtual training facility for children taking emergency preparedness classes.

Scientists secretly fear AI robot-machines may soon outsmart men

A robot that can open doors. Computer viruses that no one can stop.

Advances in the scientific world promise many benefits, but scientists are secretly panicking over the thought that artificially intelligent machines could outsmart humans.

At a conference held in Monterey Bay, California, leading experts warned that mankind might not be able to control computer-based systems that carry out a growing share of society's workload, reports The Times.

“These are powerful technologies that could be used in good ways or scary ways,” warned Eric Horvitz, principal researcher at Microsoft who organised the conference on behalf of the Association for the Advancement of Artificial Intelligence.

Alan Winfield, a professor at the University of the West of England, believes that boffins spend too much time developing artificial intelligence and too little on robot safety.

“We’re rapidly approaching the time when new robots should undergo tests, similar to ethical and clinical trials for new drugs, before they can be introduced,” he said.

The scientists who presented their findings at the International Joint Conference for Artificial Intelligence in Pasadena, California, last month fear that nightmare scenarios, which have until now been limited to science fiction films, such as the Terminator series, The Matrix, 2001: A Space Odyssey and Minority Report, could come true.

A more realistic short-term concern is the possibility of malware that can mimic the digital behavior of humans.

According to the panel, identity thieves might feasibly plant a virus on a person’s smartphone that would silently monitor their text messages, email, voice, diary and bank details. The virus could then use these to impersonate that individual with little or no external guidance from the thieves.

Friday, July 31, 2009

Google Alert - home robotics

Google News Alert for: home robotics

WPI to Host exxonmobil Bernard Harris Summer Science Camp for ...
Media Newswire (press release)
The camp's theme at WPI is "Revolutionary Robotics," which is fitting for the university, since it's the home of the nation's first undergraduate degree ...

Sunday, July 26, 2009

A New Generation of Service robots: The Care-O-Bot 3

Care-O-Bot 3 is a prototype of the next generation of service robots. Created by the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart, Germany, the 1.45-meter-high robot is equipped with stereo-vision color cameras and laser scanners and is aware of its surroundings. It can be controlled both manually and by spoken commands. The bot has a highly flexible arm which can pick up items without breaking them.

It moves on a platform with four separately steered and driven wheels that allow it to travel in any direction. The Care-O-bot can identify objects thanks to color cameras, laser scanners, and a 3D range camera. The cameras also allow it to be directed with hand gestures as well as spoken commands. It can also learn to recognize unfamiliar items like cups and bottles.

Thanks to the team from the Fraunhofer Institute, robotically-enabled laziness may soon be a reality.

Saturday, July 25, 2009

Creating 'service robots'

Since human beings began to imagine robots - and the beginning was a 1920 Czech play called Rossum's Universal Robots, in which, yes, the machines do end up as our steely overlords - we've imagined them as mechanical, though perhaps more powerful, versions of ourselves.

Think of Maria from Fritz Lang's "Metropolis," Gort from "The Day the Earth Stood Still," C-3PO from "Star Wars," or the Terminator.

"It's very far from reality," said Ranjan Mukherjee, a professor of mechanical engineering at Michigan State University. "Human beings are very complex and making robots that are like human beings is not easy."

He knows what he's talking about. Mukherjee has received $247,000 in federal stimulus money to develop technology that could, among other things, allow two-legged robots to move more like humans: to climb stairs, walk on uneven ground, maybe even jump, tasks that even the most advanced robots can't do particularly well.

But, if everyday service robots are to become a reality, and Mukherjee believes they will, they will have to be able to operate in environments designed for humans. Meaning they should probably be able to handle the stairs.

"Most of the people working on bipeds" - that is, robots with two legs - "design them to walk on a flat surface," Mukherjee said. "Very few bipeds can climb stairs. We take that as the first challenge. Eventually, we would like to get to the point where it can handle undulated surfaces, uneven surfaces."

Consider the force
The reason robots aren't good with such surfaces has a lot to do with the way their movement is powered.

Most robots are driven by motors along the lines of those that power electric fans. The power can be stronger or weaker, but it's more or less continuous.

What robots generally can't do is apply what are called impulsive forces: short, strong bursts of power that could allow them to make quick corrections when their balance shifts or when they encounter unexpected changes in terrain.

Humans do this all the time. When we jump from a height, for example, our muscles fire quick bursts of power that keep us from falling on our faces when we land.
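The arithmetic behind that contrast is simple: shedding the momentum of a landing takes time proportional to how hard the actuator can push. A small back-of-the-envelope sketch (the mass, speeds, and force limits below are invented for illustration):

```python
# A point mass landing at speed v must shed its momentum p = m * v.
# An actuator whose force is capped at F_max needs at least t = p / F_max
# to do so; an impulsive burst delivers the same momentum change in a
# fraction of that time.

def stopping_time(mass, landing_speed, max_force):
    """Minimum time (s) a force-limited actuator needs to absorb a landing."""
    return mass * landing_speed / max_force

m = 60.0            # robot mass, kg (illustrative)
v = 3.1             # landing speed after roughly a 0.5 m drop, m/s
weak_motor = 200.0  # continuous fan-style actuator, N
burst = 2000.0      # short impulsive burst, N

print(f"continuous motor: {stopping_time(m, v, weak_motor):.3f} s")
print(f"impulsive burst:  {stopping_time(m, v, burst):.3f} s")
```

Over the longer stopping time the body also travels farther (roughly v*t/2), so the force-limited robot needs more crouch depth than its legs can supply, and it falls on its face where a human's burst of muscle power would not.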

Saturday, July 11, 2009

Penbo, the "first real robot for girls"

Penbo is supposedly the "first real robot for girls".

And yes, it has a baby, called Bebe. Penbo responds to touch and voice, and to the baby, which is the closest thing it has to a remote control: the baby can summon Penbo and interact and play games with it. Penbo also responds differently to different-colored babies; there are four colors, each with around 21 features.

But really, the best feature is the Penbo dance: put two together and they waddlewaddlewaddle. Which is how I guess they make more babies.

Penbo will hit QVC with Prime-8 on July 25, then Amazon later on, for $80.

'The Matrix' Fulfilled: EATR Military Robots to Use Biomatter as Fuel

The Defense Advanced Research Projects Agency (DARPA), a research and development organization for the Department of Defense, aims to "maintain the technological superiority of the U.S. military." It seeks to accomplish this goal by developing robots, lasers, spacecraft, and other awesome futuristic weapons of annihilation. It also apparently has no desire to heed the warnings of the 'Matrix' or 'Terminator' movies.

According to The Register, the group's latest design, the Energetically Autonomous Tactical Robot (EATR), is a pilot-less, steam-powered drone. Designed for long-range missions, it refuels itself by devouring any fossil fuels or biological matter in its path. While DARPA mentions using refuse such as apple cores for fuel, it's difficult to imagine these things not being used to harvest people as an energy source.

So, what happens when the EATRs completely decimate the Earth's natural resources (if they don't completely wipe out the human race first)? Well, it should only take a few slight adjustments to make sure the remnants of those scooped-up humans can be turned into tasty and nutritious green cracker treats. It's just too bad Charlton Heston isn't around to tell everyone "I told you so."

Tuesday, June 23, 2009

Living Safely with Robots, Beyond Asimov's Laws

In situations like this one, as described in a recent study published in the International Journal of Social Robotics, most people would not consider the accident to be the fault of the robot. But as robots are beginning to spread from industrial environments to the real world, human safety in the presence of robots has become an important social and technological issue. Currently, countries like Japan and South Korea are preparing for the “human-robot coexistence society,” which is predicted to emerge before 2030; South Korea predicts that every home in its country will include a robot by 2020.

Unlike industrial robots that toil in structured settings performing repetitive tasks, these “Next Generation Robots” will have relative autonomy, working in ambiguous human-centered environments, such as nursing homes and offices. Before hordes of these robots hit the ground running, regulators are trying to figure out how to address the safety and legal issues that are expected to occur when an entity that is definitely not human but more than machine begins to infiltrate our everyday lives.

In their study, authors Yueh-Hsuan Weng, a Kyoto Consortium for Japanese Studies (KCJS) visiting student at Yoshida, Kyoto City, Japan, along with Chien-Hsun Chen and Chuen-Tsai Sun, both of the National Chiao Tung University in Hsinchu, Taiwan, have proposed a framework for a legal system focused on Next Generation Robot safety issues. Their goal is to help ensure safer robot design through “safety intelligence” and provide a method for dealing with accidents when they do inevitably occur.

The authors have also analyzed Isaac Asimov’s Three Laws of Robotics, but (like most robotics specialists today) they doubt that the laws could provide an adequate foundation for ensuring that robots perform their work safely.

One guiding principle of the proposed framework is categorizing robots as “third existence” entities, since Next Generation Robots are considered to be neither living/biological (first existence) nor non-living/non-biological (second existence). A third existence entity will resemble living things in appearance and behavior, but will not be self-aware.

While robots are currently legally classified as second existence (human property), the authors believe that a third existence classification would simplify dealing with accidents in terms of responsibility distribution.

One important challenge involved in integrating robots into human society deals with “open texture risk” - risk occurring from unpredictable interactions in unstructured environments. An example of open texture risk is getting robots to understand the nuances of natural (human) language. While every word in natural language has a core definition, the open texture character of language allows for interpretations that vary due to outside factors.

As part of their safety intelligence concept, the authors have proposed a “legal machine language,” in which ethics are embedded into robots through code, which is designed to resolve issues associated with open texture risk - something which Asimov’s Three Laws cannot specifically address.
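The authors' framework is not published as code, but the idea of a legal machine language can be illustrated with a toy example: safety rules become machine-checkable predicates over the robot's state, so a rule either fires or it does not, with no interpretation step. The rule names and thresholds below are invented for illustration:

```python
# Toy illustration (not the authors' actual system): safety rules
# encoded as code rather than natural language, sidestepping the
# open-texture ambiguity of words like "harm" in Asimov's laws.

MAX_SPEED_NEAR_HUMAN = 0.5  # m/s, invented threshold
MIN_HUMAN_DISTANCE = 0.3    # m, invented threshold

def check_action(speed, nearest_human_distance):
    """Return the list of violated rule IDs; an empty list means allowed."""
    violations = []
    if nearest_human_distance < MIN_HUMAN_DISTANCE:
        violations.append("RULE-001: minimum separation from humans")
    if nearest_human_distance < 2.0 and speed > MAX_SPEED_NEAR_HUMAN:
        violations.append("RULE-002: speed limit near humans")
    return violations

# Moving at 1.2 m/s with a person 1.0 m away trips the speed rule.
print(check_action(speed=1.2, nearest_human_distance=1.0))
```

Because each rule is a precise predicate, responsibility after an accident can be traced to a specific rule, its coded threshold, or the absence of a rule, rather than to a contested reading of a natural-language law.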

“During the past 2,000 years of legal history, we humans have used human legal language to communicate in legal affairs,” Weng said. “The rules and codes are made by natural language (for example, English, Chinese, Japanese, French, etc.). When Asimov invented the notion of the Three Laws of Robotics, it was easy for him to apply the human legal language into his sci-fi plots directly.”

As Chen added, Asimov’s Three Laws were originally made for literary purposes, but the ambiguity in the laws makes the responsibilities of robots’ developers, robots’ owners, and governments unclear.

“The legal machine language framework stands on legal and engineering perspectives of safety issues, which we face in the near future, by combining two basic ideas: ‘Code is Law’ and ‘Embedded Ethics,’” Chen said. “In this framework, the safety issues are not only based on the autonomous intelligence of robots as it is in Asimov’s Three Laws.

“Rather, the safety issues are divided into different levels with individual properties and approaches, such as the embedded safety intelligence of robots, the manners of operation between robots and humans, and the legal regulations to control the usage and the code of robots. Therefore, the safety issues of robots could be solved step by step in this framework in the future.”

Weng also noted that, by preventing robots from understanding human language, legal machine language could help maintain a distance between humans and robots in general.

“If robots could interpret human legal language exactly someday, should we consider giving them a legal status and rights?” he said. “Should the human legal system change into a human-robot legal system? There might be a robot lawyer, robot judge working with a human lawyer, or a human judge to deal with the lawsuits happening inter-human-robot.

“Robots might learn the kindness of humans, but they also might learn deceit, hypocrisy, and greed from humans. There are too many problems waiting for us; therefore we must consider whether it is better to let the robots keep a distance from the human legal system and not be too close to humans.”

In addition to using machine language to keep a distance between humans and robots, the researchers also consider limiting the abilities of robots in general. Another part of the authors’ proposal concerns “human-based intelligence robots,” which are robots with higher cognitive abilities that allow for abstract thought and for new ways of looking at one’s environment.

However, since a universally accepted definition of human intelligence does not yet exist, there is little agreement on a definition for human-based intelligence. Nevertheless, most robotics researchers predict that human-based intelligence will inevitably become a reality following breakthroughs in computational artificial intelligence (in which robots learn and adapt to their environments in the absence of explicitly programmed rules).

However, a growing number of researchers - as well as the authors of the current study - are leaning toward prohibiting human-based intelligence due to the potential problems and lack of need; after all, the original goal of robotics was to invent useful tools for human use, not to design pseudo-humans.

In their study, the authors also highlight previous attempts to prepare for a human-robot coexistence society. For example, the European Robotics Research Network (EURON) is a private organization whose activities include investigating robot ethics, such as with its Roboethics Roadmap. The South Korean government has developed a Robot Ethics Charter, which serves as the world’s first official set of ethical guidelines for robots, including protecting them from human abuse.

Similarly, the Japanese government investigates safety issues with its Robot Policy Committee. In 2003, Japan also established the Robot Development Empiricism Area, a “robot city” designed to allow researchers to test how robots act in realistic environments.

Despite these investigations into robot safety, regulators still face many challenges, both technical and social. For instance, on the technical side, should robots be programmed with safety rules, or should they be created with the ability for safety-oriented reasoning? Should robot ethics be based on human-centered value systems, or a combination of human-centered value systems with the robot’s own value system?

Or, legally, when a robot accident does occur, how should the responsibility be divided (for example, among the designer, manufacturer, user, or even the robot itself)?

Weng also indicated that, as robots become more integrated into human society, the importance of a legal framework for social robotics will become more obvious. He predicted that determining how to maintain a balance between human-robot interaction (robot technology development) and social system design (a legal regulation framework) will present the biggest challenges in safety when the human-robot coexistence society emerges.