
Friday, January 28, 2011

Robots learn from rats' brains



Queensland engineers have translated biological findings to probabilistic algorithms that could direct robots through complicated human environments.

While many of today's machines relied on expensive sensors and systems, the researchers hoped their software would improve domestic robots cheaply.

Roboticist Michael Milford worked with neuroscientists to develop algorithms that mimicked three navigational systems in rats' brains: place cells, head direction cells and grid cells.

In an article published in PLoS Computational Biology this week, he described simulating grid cells - recently discovered brain cells that helped rats contextually determine their location.

To explain the function of grid cells, Milford described getting out of a lift on an unknown floor, and deducing his location based on visual cues like vending machines and photocopiers.

"We take it for granted that we find our way to work ... [but] the problem is extremely challenging," said the Queensland University of Technology researcher.

"Robots are able to navigate to a certain point, but they just get confused and lost in an office building," he told iTnews.

The so-called RatSLAM software was installed in a 20kg Pioneer 2DXe robot with a forward-facing camera capable of detecting visual cues, their relative bearing and distance.

The robot was placed in a maze similar to those used in experiments with rats, with random goal locations simulating the way a rat collects randomly scattered pieces of food.

It calibrated itself using visual cues, performing up to 14 iterations per second to determine its location when placed in one of four initial starting positions.
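
To make the idea concrete, here is a minimal, hypothetical sketch of this kind of visual-cue localisation in the spirit of RatSLAM: the robot keeps a library of view templates tagged with the pose where each was first seen, matches the current camera view against them, and nudges its odometry-based pose estimate toward any familiar view. The names (ViewTemplate, match_view, localise) and the simple blending scheme are illustrative assumptions, not Milford's published code.

```python
# Hypothetical sketch of visual-template localisation in the spirit of RatSLAM.
# All names and parameters here are illustrative, not the published system.
import numpy as np

class ViewTemplate:
    def __init__(self, features, pose):
        self.features = features   # e.g. a downsampled, normalised image profile (numpy array)
        self.pose = pose           # (x, y, heading) where this view was first seen

def match_view(current, templates, threshold=0.1):
    """Return the stored template most similar to the current view, or None."""
    best, best_err = None, threshold
    for t in templates:
        err = np.mean(np.abs(current - t.features))
        if err < best_err:
            best, best_err = t, err
    return best

def localise(pose_estimate, current_view, templates, gain=0.5):
    """Nudge the odometry-based pose estimate toward the pose of a matched view."""
    match = match_view(current_view, templates)
    if match is None:
        # Unfamiliar view: remember it as a new place, keep the current estimate.
        templates.append(ViewTemplate(current_view, pose_estimate))
        return pose_estimate
    # Familiar view: blend the dead-reckoned pose with the remembered pose.
    return tuple(p + gain * (m - p) for p, m in zip(pose_estimate, match.pose))
```

Run at every camera frame, a loop like this is what lets a pose estimate that drifts with wheel odometry snap back toward a known location whenever a previously seen scene comes into view.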

Milford explained that environmental changes like lighting, shadows, moving vehicles and people made it difficult for robots to navigate in a human world.

Machines like the Mars Rovers and those competing in the DARPA Challenges tended to use expensive sensors - essentially "throwing a lot of money" at the problem, he said.

But a cheaper solution was needed to direct domestic robots, which were currently still in early stages of development and "very, very, very dumb".

"The only really successful cheap robot that has occurred so far is the [iRobot Roomba] vacuum cleaner," he said. "They don't have any idea where they are; they just move around randomly."

The grid cell project was the latest in almost seven years of Milford's research into applying biological techniques to machines.

The team had been approached "occasionally" by domestic robot manufacturers, he said, but was currently focussed on research, and not commercialisation.

Sunday, October 10, 2010

Aiming to Learn as We Do, a Machine Teaches Itself: NELL, the Never-Ending Language Learning system

Give a computer a task that can be crisply defined — win at chess, predict the weather — and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence.

Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.

Since the start of the year, a team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer — calculating 24 hours a day, seven days a week — that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.

“For all the advances in computer science, we still don’t have a computer that can learn as humans do, cumulatively, over the long term,” said the team’s leader, Tom M. Mitchell, a computer scientist and chairman of the machine learning department.

The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories — cities, companies, sports teams, actors, universities, plants and 274 others. The category facts are things like “San Francisco is a city” and “sunflower is a plant.”

NELL also learns facts that are relations between members of two categories. For example, Peyton Manning is a football player (category). The Indianapolis Colts is a football team (category). By scanning text patterns, NELL can infer with a high probability that Peyton Manning plays for the Indianapolis Colts — even if it has never read that Mr. Manning plays for the Colts. “Plays for” is a relation, and there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year, and will steadily expand.

The learned facts are continuously added to NELL’s growing database, which the researchers call a “knowledge base.” A larger pool of facts, Dr. Mitchell says, will help refine NELL’s learning algorithms so that it finds facts on the Web more accurately and more efficiently over time.

NELL is one project in a widening field of research and investment aimed at enabling computers to better understand the meaning of language. Many of these efforts tap the Web as a rich trove of text to assemble structured ontologies — formal descriptions of concepts and relationships — to help computers mimic human understanding. The ideal has been discussed for years, and more than a decade ago Sir Tim Berners-Lee, who invented the underlying software for the World Wide Web, sketched his vision of a “semantic Web.”

Today, ever-faster computers, an explosion of Web data and improved software techniques are opening the door to rapid progress. Scientists at universities, government labs, Google, Microsoft, I.B.M. and elsewhere are pursuing breakthroughs, along somewhat different paths.

For example, I.B.M.’s “question answering” machine, Watson, shows remarkable semantic understanding in fields like history, literature and sports as it plays the quiz show “Jeopardy!” Google Squared, a research project at the Internet search giant, demonstrates ample grasp of semantic categories as it finds and presents information from around the Web on search topics like “U.S. presidents” and “cheeses.”

Still, artificial intelligence experts agree that the Carnegie Mellon approach is innovative. Many semantic learning systems, they note, are more passive learners, largely hand-crafted by human programmers, while NELL is highly automated. “What’s exciting and significant about it is the continuous learning, as if NELL is exercising curiosity on its own, with little human help,” said Oren Etzioni, a computer scientist at the University of Washington, who leads a project called TextRunner, which reads the Web to extract facts.

Computers that understand language, experts say, promise a big payoff someday. The potential applications range from smarter search (supplying natural-language answers to search queries, not just links to Web pages) to virtual personal assistants that can reply to questions in specific disciplines or activities like health, education, travel and shopping.

“The technology is really maturing, and will increasingly be used to gain understanding,” said Alfred Spector, vice president of research for Google. “We’re on the verge now in this semantic world.”

With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: “Anger is an emotion.” “Bliss is an emotion.” And about a dozen more.


Then NELL gets to work. Its tools include programs that extract and classify text phrases from the Web, programs that look for patterns and correlations, and programs that learn rules. For example, when the computer system reads the phrase “Pikes Peak,” it studies the structure — two words, each beginning with a capital letter, and the last word is Peak. That structure alone might make it probable that Pikes Peak is a mountain. But NELL also reads in several ways. It will mine for text phrases that surround Pikes Peak and similar noun phrases repeatedly. For example, “I climbed XXX.”
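
As a rough illustration of this seed-and-pattern bootstrapping (a deliberate simplification, not the CMU implementation), the sketch below seeds a category with a few known members, collects the short text contexts that recur around them, and uses those contexts to propose new candidates. It also reproduces the kind of false positive ("stairs") discussed in the following paragraphs.

```python
# Hypothetical sketch of seed-and-pattern bootstrapping, loosely inspired by the
# kind of coupled learning NELL is described as doing; not the CMU code.
import re
from collections import Counter

seeds = {"Everest", "Kilimanjaro", "Pikes Peak"}          # known members of "mountain"
corpus = [
    "Last summer I climbed Everest with two guides.",
    "I climbed Kilimanjaro in 2007.",
    "I climbed stairs to the attic.",
    "We photographed Pikes Peak at dawn.",
]

def learn_patterns(corpus, seeds):
    """Collect short contexts (e.g. 'I climbed') that recur before seed instances."""
    patterns = Counter()
    for sentence in corpus:
        for seed in seeds:
            if seed in sentence:
                ctx = sentence.split(seed)[0].strip().split()[-2:]  # two words before the seed
                if ctx:
                    patterns[" ".join(ctx)] += 1
    return {p for p, n in patterns.items() if n >= 2}      # keep patterns seen more than once

def propose_instances(corpus, patterns):
    """Use the learned patterns to propose new candidate members of the category."""
    candidates = Counter()
    for sentence in corpus:
        for p in patterns:
            m = re.search(re.escape(p) + r"\s+([A-Z]?\w+(?: [A-Z]\w+)?)", sentence)
            if m:
                candidates[m.group(1)] += 1
    return candidates

patterns = learn_patterns(corpus, seeds)       # -> {"I climbed"}
print(propose_instances(corpus, patterns))     # proposes Everest, Kilimanjaro - and "stairs"
```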

NELL, Dr. Mitchell explains, is designed to be able to grapple with words in different contexts, by deploying a hierarchy of rules to resolve ambiguity. This kind of nuanced judgment tends to flummox computers. “But as it turns out, a system like this works much better if you force it to learn many things, hundreds at once,” he said.

For example, the text-phrase structure “I climbed XXX” very often occurs with a mountain. But when NELL reads, “I climbed stairs,” it has previously learned with great certainty that “stairs” belongs to the category “building part.” “It self-corrects when it has more information, as it learns more,” Dr. Mitchell explained.

NELL, he says, is just getting under way, and its growing knowledge base of facts and relations is intended as a foundation for improving machine intelligence. Dr. Mitchell offers an example of the kind of knowledge NELL cannot manage today, but may someday. Take two similar sentences, he said. “The girl caught the butterfly with the spots.” And, “The girl caught the butterfly with the net.”

A human reader, he noted, inherently understands that girls hold nets, and girls are not usually spotted. So, in the first sentence, “spots” is associated with “butterfly,” and in the second, “net” with “girl.”

“That’s obvious to a person, but it’s not obvious to a computer,” Dr. Mitchell said. “So much of human language is background knowledge, knowledge accumulated over time. That’s where NELL is headed, and the challenge is how to get that knowledge.”

A helping hand from humans, occasionally, will be part of the answer. For the first six months, NELL ran unassisted. But the research team noticed that while it did well with most categories and relations, its accuracy on about one-fourth of them trailed well behind. Starting in June, the researchers began scanning each category and relation for about five minutes every two weeks. When they find blatant errors, they label and correct them, putting NELL’s learning engine back on track.

When Dr. Mitchell scanned the “baked goods” category recently, he noticed a clear pattern. NELL was at first quite accurate, easily identifying all kinds of pies, breads, cakes and cookies as baked goods. But things went awry after NELL’s noun-phrase classifier decided “Internet cookies” was a baked good. (Its database related to baked goods or the Internet apparently lacked the knowledge to correct the mistake.)

NELL had read the sentence “I deleted my Internet cookies.” So when it read “I deleted my files,” it decided “files” was probably a baked good, too. “It started this whole avalanche of mistakes,” Dr. Mitchell said. He corrected the Internet cookies error and restarted NELL’s bakery education.

His ideal, Dr. Mitchell said, was a computer system that could learn continuously with no need for human assistance. “We’re not there yet,” he said. “But you and I don’t learn in isolation either.”

Wednesday, March 10, 2010

Android Phone powered robot



Some clever California hackers, Tim Heath and Ryan Hickman, are building bots that harness Android phones for their robo-brainpower.

Their first creation, the TruckBot, uses an HTC G1 as a brain and has a chassis that they made for $30 in parts. It's not too advanced yet—it can use the phone's compass to head in a particular direction—but they're working on incorporating the bot more fully with the phone and the Android software.
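
As a toy illustration of that compass-driven behaviour (not the Cellbots code; read_compass and set_wheel_speeds are hypothetical stand-ins for the phone and chassis interfaces), a simple proportional controller can turn the chassis until the phone's compass roughly agrees with a target heading and then drive forward.

```python
# Illustrative compass-heading controller; the sensor and motor functions are
# made-up placeholders, not the actual TruckBot/Cellbots interfaces.
import time

def heading_error(target_deg, current_deg):
    """Smallest signed angle (degrees) from the current heading to the target."""
    return (target_deg - current_deg + 180) % 360 - 180

def drive_toward(target_deg, read_compass, set_wheel_speeds,
                 base_speed=0.5, gain=0.01, tolerance=5.0):
    """Turn until the phone's compass reads roughly the target heading, then drive."""
    while True:
        err = heading_error(target_deg, read_compass())
        if abs(err) < tolerance:
            set_wheel_speeds(base_speed, base_speed)        # aligned: drive straight
        else:
            turn = max(-0.5, min(0.5, gain * err))          # proportional turn rate
            set_wheel_speeds(base_speed - turn, base_speed + turn)
        time.sleep(0.1)                                     # ~10 control updates per second
```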

Some ideas they're looking to build in soon are facial and voice recognition and location awareness.

If you're interested in putting together a Cellbot of your own, the team's development blog has more information.







Wednesday, November 18, 2009

Brisbane maps robotic future


Scientists in Brisbane are blurring the line between biology and technology and creating a new generation of robot "helpers" more in tune with human needs.

The University of Queensland is hosting the Australian Research Council's Thinking Systems symposium this week, which brings together UQ's robotic navigation project with the University of New South Wales' robotic hands project and a speech and cognition project out of the University of Western Sydney.

Scientists are working towards a range of robotic innovations, from the development of navigation and learning robots to the construction of artificial joints and limbs and the creation of a conversational computer program, a la 2001: A Space Odyssey's HAL.

UQ's School of Information Technology and Electrical Engineering head, Professor Janet Wiles, said the symposium paired "some very clever engineers...with very clever scientists" to map the future of robotics - and it was going to be a very different world.

"You're bringing together neuroscience, cognitive science, psychology, behaviour and robotics information system to look at the cross disciplinary projects we can do in this space," Professor Wiles said.

"We're doing a combination of the fundamental science and the translation into the technology and that's one of the great benefits of our project."

The group aims to advance robotic technology by decoding the way human and animal brains work to equip machines with the ability to operate in the real world.

"There's a strong connection to cognition - the way the brain works as a whole - and navigation, so what we've been doing is studying the fundamental of navigation in animals and taking the algorithms we've learnt from those and putting them into robots," Professor Wiles said.

Over the next two decades, she sees robots becoming more and more important, expanding from their current roles as cleaners, assemblers and drones and into smarter machines more closely integrated with human beings in the form of replacement limbs and joints.

"It's not going to be the robots and us. Already a lot of people are incorporating robot components; people who have had a leg amputated who now have a knee and in the knee. It is effectively a fully-articulated robotic knee [with] a lot of the spring in the step that a natural knee has," Professor Wiles said.

"The ability of robots to replace component parts is an area which is going to be growing.

"This is where you're going to blur the line between technology and biology when you start to interface these two fields."

At UQ, the team is working on developing computer codes or algorithms that would enable a robot to "learn" rapidly about its near environment and navigate within it.

"Navigation is quite an intriguing skill because it is so intrinsic to what we do and we are really not aware of it unless we have a poor sense of navigation," Professor Wiles said.

"The kind of navigation we are dealing with is how you get from one place to another, right across town or from one room in a building to another you can't see."

With about four million robots in households right now, performing menial chores such as vacuuming the carpet, improvements in navigation have the potential to widen the scope of these creations to take a larger place in everyday life.

According to Professor Wiles, the ability to rapidly process information and apply it to the area they are working in will give robots the edge into the future.

"Robots need to learn new environments very rapidly and that's what a lot of our work focuses on.

"When you take a robot out of the box you don't want to program into it with the architecture of your house, you want the robot to explore the house and learn it very quickly," Professor Wiles said.

"Household robotics is going to be really big in the next 15 years or so and this is one of the things you need is for robots to be able to look after themselves in space."

But as Australian universities and international research institutes look into replicating the individual parts of biological creatures and mimicking them in machines, the question of intelligence inevitably becomes more important.

While the sometimes frightening scenarios played out in science fiction novels and films - where so often robots lay waste to humanity - remain securely in the realm of fantasy, Professor Wiles believes that some day machines will think like us.

"There's strong AI [artificial intelligence] and weak AI. Strong AI says there will be artificially intelligent creatures which are not biological. Weak AI says they will have a lot of the algorithms and they do already have a lot of those algorithms," she said.

"The bee, whose brain is a tiny as a sesame seed, already has better navigation abilities than even our best robots.

"So we have a little way to go before robots reach biological intelligence let alone human intelligence but I don't see why we shouldn't take steps towards it."

Thursday, September 17, 2009

Welcome to the robot revolution


Much like the then-fledgling PC industry in the late 1970s, the robotics industry is on the cusp of a revolution, contends the head of Microsoft Corp.'s robotics group.

Today's giant, budget-bending robots that are run by specialists in factories and on assembly floors are evolving into smaller, less-expensive and cuter machines that clean our carpets, entertain us and may someday take care of us as we grow old. The move is akin to the shift from the mainframe world of the 1970s to the personal computers that invaded our offices and homes over the past 20 to 25 years.

"The transition is starting," said Tandy Trower, general manager of Microsoft's 3-year-old robotics group. "It's like we're back in 1977 -- four years before the IBM PC came out. We were seeing very primitive but very useful machines that were foreshadowing what was to come. In many ways, they were like toys compared to what we have today. It's the same with robots now."

Trower said many countries are making significant investments in robotics, and advances are beginning to multiply. Robotic aids and companions -- some looking like an updated version of R2-D2 and others more humanoid -- will begin moving into our homes in three to five years as technology advances and prices drop, he predicted.

"Robots are really an evolution of the technology we have now," Trower said. "We're just adding to our PCs, really. We're letting them get up off our desks and move around. They're evolving into something you will engage with and will serve you in your life someway."

Some experts, though, are hesitant to talk of revolutions, especially in an industry that has seen many promises made that have yet to materialize.

James Kuffner, an associate professor at the Robotics Institute at Carnegie Mellon University, warns that any revolution could be lengthy, with robots unlikely to be doing dishes and walking dogs for about 20 years.

"People ask me when they'll have a Jetsons-like robot walking around their house," Kuffner said. "I tell them the first gas-powered engine was built in 1885, but it took until 1915 before a large segment of the population could afford a car. When that happened, society was transformed. In the 1950s, the first computers were built, but it wasn't until the early '80s when the personal computer came on the scene. And, of course, it completely transformed society."

Kuffner said he believes the robot revolution countdown should start in 1996, when Honda Motor Co. released the P2, a self-contained, life-size humanoid machine. Going by historical example, a good portion of the population could have a robot in the home by 2026, he said.


"The Roomba vacuum cleaner is often seen as the first successful home robot, but it's pretty limited," Kuffner added. "So, sure, you can say we have robots in our homes. But a humanoid robot like you see in Hollywood movies, designed to perform a large number of tasks without special programming or tuning? In about 20 years."

Neena Buck, an independent robotics analyst based in Cambridge, Mass., agreed that the robotics business will take off, but said that it will be some time before humanoid robots are washing cars or dancing. First, she said, there will be single-task robots for house cleaning and the like, and exoskeletal robots to help people with infirmities.

"A Jetsons robot -- I don't think that's how it will happen," she said. "Maybe people need to change their vision of a robot."

Trower told Computerworld that robotics has been slow to grow in recent years because of the lack of a standard software platform -- the very thing Microsoft Corp. mandated he create.

The Microsoft robotics group, which is tasked with generating profits within three to five years, is now updating its Robotics Studio software, which includes a tool set and a set of programming libraries that sit on top of Windows. The studio also includes a programming language and a simulator, so that developers can first try out programs in a virtual world. The latest version of the studio platform is slated to ship by the end of this year, according to Trower.

"The robotics industry needs portability," said Trower. "There's been no standard. We wanted to make it easy for the industry to bootstrap itself. I truly think software is holding the robotic industry back."

Software was definitely holding back graduate students at the University of Massachusetts, Amherst, in their quest to build a new version of the school's uBot robot.

Bryan Thibodeau and Patrick Deegan are both graduate students who have been building the fifth generation of uBot, dubbed uBot-5, a two-wheeled, two-armed robot that can maintain its balance.

The developers said they expect to save significant time during the development of uBot 6 due to the use of Robotics Studio in their current project. "We can transfer applications we've written before for this to other robots," said Deegan. "This is the fifth generation, and we had to write code from scratch every time. The next time, we won't. It'll save us tons of time -- probably six months minimum. Now, we can start from here and keep going."

During a demonstration of the uBot-5, Thibodeau said that the developers will spend a lot less time simply reinventing the wheel. "Now we can focus on doing more, instead of doing the same thing over again," he added.

Deegan and Thibodeau noted that they hope the uBot will eventually be used to help care for the growing elderly population, helping them stay in their homes longer and more safely.

With two arms that one day could open a door, two wheels to move it about a home, and a rotating torso and touch screen that could enable it to "look" about its environment, Trower called uBot-5 a good example of what's likely the next generation of in-home robots.

"The idea of dexterous manipulation makes a difference," said Trower. "It would be able to interact with things in the home environment, load the dishwasher, fold clothes. Once it has two arms, it opens up a huge variety of possibilities."

A touch screen that sits on the uBot-5's shoulders could act, for example, as a sort of portal for an elderly woman living alone. If the woman fell and was unresponsive, the robot could be programmed to recognize the problem and alert emergency response services. Her doctor could access the robot through his computer, see what the robot sees and speak to the woman through the robot. His face could appear on the screen, making it more natural for the two to talk to each other, using the robot as the conduit.

Richard Doherty, research director at The Envisioneering Group, a market research firm in Seaford, N.Y., said progress in the robotics industry could be limited or slowed because people will be afraid of losing their jobs -- such as a home care assistant -- to robots.

"In this country, people are afraid for their jobs. They don't want to see a robotic coffee maker or robots that could change your oil … or take care of the elderly," said Doherty. "It's job inertia. … We need to see robots in a different light. We need people to understand that this machine could help care for their grandmother."


This is exactly the kind of aid and companionship that one artificial intelligence researcher expects to see from robots in the coming years. David Levy, a British artificial intelligence researcher whose book, Love and Sex with Robots, was released last November, said in a previous interview that robotics will make such dramatic advances in the coming years that humans will be marrying robots by the year 2050.

"Robots started out in factories making cars. There was no personal interaction," said Levy, who is also an international chess master who has been developing computer chess games for years. "Then people built mail-cart robots, and then robotic dogs. Now robots are being made to care for the elderly. In the last 20 years, we've been moving toward robots that have relationships with humans, and it will keep growing toward a more emotional relationship, a more loving one and a sexual one."

While iRobot Corp.'s Roomba may be a vacuum cleaner and not a companion, Trower noted that people who own the robots identify with them, often naming them, drawing faces on them and even insisting that broken ones be repaired rather than replaced with a new machine.

"This is part of the evolution," said Trower. "We now see robots coming into people's lives and living with us. It's sneaking in and saying, 'Aren't I cute?'"

Monday, September 7, 2009

Plasmobot: the slime mould robot


THOUGH not famed for their intellect, single-celled organisms have already demonstrated a surprising degree of intelligence. Now a team at the University of the West of England (UWE) has secured £228,000 in funding to turn these organisms into engineering robots.

In recent years, single-celled organisms have been used to control six-legged robots, but Andrew Adamatzky at UWE wants to go one step further by making a complete "robot" out of a plasmodium slime mould, Physarum polycephalum, a commonly occurring mould that moves towards food sources such as bacteria and fungi, and shies away from light.

Affectionately dubbed Plasmobot, it will be "programmed" using light and electromagnetic stimuli to trigger chemical reactions similar to a complex piece of chemistry called the Belousov-Zhabotinsky reaction, which Adamatzky previously used to build liquid logic gates for a synthetic brain.

By understanding and manipulating these reactions, says Adamatzky, it should be possible to program Plasmobot to move in certain ways, to "pick up" objects by engulfing them and even assemble them.

Initially, Plasmobot will work with and manipulate tiny pieces of foam, because they "easily float on the slime", says Adamatzky. The long-term aim is to use such robots to help assemble the components of micromachines, he says.

Tuesday, June 23, 2009

Living Safely with Robots, Beyond Asimov's Laws


In situations like this one, as described in a recent study published in the International Journal of Social Robotics, most people would not consider the accident to be the fault of the robot. But as robots are beginning to spread from industrial environments to the real world, human safety in the presence of robots has become an important social and technological issue. Currently, countries like Japan and South Korea are preparing for the “human-robot coexistence society,” which is predicted to emerge before 2030; South Korea predicts that every home in its country will include a robot by 2020.

Unlike industrial robots that toil in structured settings performing repetitive tasks, these “Next Generation Robots” will have relative autonomy, working in ambiguous human-centered environments, such as nursing homes and offices. Before hordes of these robots hit the ground running, regulators are trying to figure out how to address the safety and legal issues that are expected to occur when an entity that is definitely not human but more than machine begins to infiltrate our everyday lives.

In their study, authors Yueh-Hsuan Weng, a Kyoto Consortium for Japanese Studies (KCJS) visiting student at Yoshida, Kyoto City, Japan, along with Chien-Hsun Chen and Chuen-Tsai Sun, both of the National Chiao Tung University in Hsinchu, Taiwan, have proposed a framework for a legal system focused on Next Generation Robot safety issues. Their goal is to help ensure safer robot design through “safety intelligence” and provide a method for dealing with accidents when they do inevitably occur.

The authors have also analyzed Isaac Asimov’s Three Laws of Robotics, but (like most robotics specialists today) they doubt that the laws could provide an adequate foundation for ensuring that robots perform their work safely.

One guiding principle of the proposed framework is categorizing robots as “third existence” entities, since Next Generation Robots are considered to be neither living/biological (first existence) nor non-living/non-biological (second existence). A third existence entity will resemble living things in appearance and behavior, but will not be self-aware.

While robots are currently legally classified as second existence (human property), the authors believe that a third existence classification would simplify dealing with accidents in terms of responsibility distribution.


One important challenge involved in integrating robots into human society deals with “open texture risk” - risk occurring from unpredictable interactions in unstructured environments. An example of open texture risk is getting robots to understand the nuances of natural (human) language. While every word in natural language has a core definition, the open texture character of language allows for interpretations that vary due to outside factors.

As part of their safety intelligence concept, the authors have proposed a “legal machine language,” in which ethics are embedded into robots through code, which is designed to resolve issues associated with open texture risk - something which Asimov’s Three Laws cannot specifically address.

“During the past 2,000 years of legal history, we humans have used human legal language to communicate in legal affairs,” Weng told PhysOrg.com. “The rules and codes are made by natural language (for example, English, Chinese, Japanese, French, etc.). When Asimov invented the notion of the Three Laws of Robotics, it was easy for him to apply the human legal language into his sci-fi plots directly.”

As Chen added, Asimov’s Three Laws were originally made for literary purposes, but the ambiguity in the laws makes the responsibilities of robots’ developers, robots’ owners, and governments unclear.

“The legal machine language framework stands on legal and engineering perspectives of safety issues, which we face in the near future, by combining two basic ideas: ‘Code is Law’ and ‘Embedded Ethics,’” Chen said. “In this framework, the safety issues are not only based on the autonomous intelligence of robots as it is in Asimov’s Three Laws.

Rather, the safety issues are divided into different levels with individual properties and approaches, such as the embedded safety intelligence of robots, the manners of operation between robots and humans, and the legal regulations to control the usage and the code of robots. Therefore, the safety issues of robots could be solved step by step in this framework in the future.”
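
Since the study's actual legal machine language is not spelled out here, the toy sketch below only illustrates the general idea of ethics embedded as machine-checkable rules: each proposed robot action is vetted against a small rule set before it is executed. Every rule name, field and threshold is an invented placeholder.

```python
# Purely illustrative toy of "embedded ethics" as machine-checkable rules;
# all rules, fields and thresholds below are made-up placeholders.
RULES = [
    ("limit_contact_force",
     lambda action: action.get("contact_force_n", 0.0) <= 20.0),
    ("stay_in_permitted_zone",
     lambda action: action.get("zone") in {"kitchen", "hallway"}),
    ("medication_needs_human_confirmation",
     lambda action: action.get("task") != "dispense_medication"
                    or action.get("confirmed_by_human", False)),
]

def action_permitted(action):
    """Return (allowed, violated_rule_names) for a proposed robot action."""
    violated = [name for name, check in RULES if not check(action)]
    return (len(violated) == 0, violated)

print(action_permitted({"task": "fetch_cup", "zone": "kitchen", "contact_force_n": 5.0}))
# -> (True, [])
print(action_permitted({"task": "dispense_medication", "zone": "kitchen"}))
# -> (False, ['medication_needs_human_confirmation'])
```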

Weng also noted that, by preventing robots from understanding human language, legal machine language could help maintain a distance between humans and robots in general.

“If robots could interpret human legal language exactly someday, should we consider giving them a legal status and rights?” he said. “Should the human legal system change into a human-robot legal system? There might be a robot lawyer, robot judge working with a human lawyer, or a human judge to deal with the lawsuits happening inter-human-robot.

Robots might learn the kindness of humans, but they also might learn deceit, hypocrisy, and greed from humans. There are too many problems waiting for us; therefore we must consider whether it is better to let the robots keep a distance from the human legal system and not be too close to humans.”

In addition to using machine language to keep a distance between humans and robots, the researchers also consider limiting the abilities of robots in general. Another part of the authors’ proposal concerns “human-based intelligence robots,” which are robots with higher cognitive abilities that allow for abstract thought and for new ways of looking at one’s environment.

However, since a universally accepted definition of human intelligence does not yet exist, there is little agreement on a definition for human-based intelligence. Nevertheless, most robotics researchers predict that human-based intelligence will inevitably become a reality following breakthroughs in computational artificial intelligence (in which robots learn and adapt to their environments in the absence of explicitly programmed rules).

However, a growing number of researchers - as well as the authors of the current study - are leaning toward prohibiting human-based intelligence due to the potential problems and lack of need; after all, the original goal of robotics was to invent useful tools for human use, not to design pseudo-humans.

In their study, the authors also highlight previous attempts to prepare for a human-robot coexistence society. For example, the European Robotics Research Network (EURON) is a private organization whose activities include investigating robot ethics, such as with its Roboethics Roadmap. The South Korean government has developed a Robot Ethics Charter, which serves as the world’s first official set of ethical guidelines for robots, including protecting them from human abuse.

Similarly, the Japanese government investigates safety issues with its Robot Policy Committee. In 2003, Japan also established the Robot Development Empiricism Area, a “robot city” designed to allow researchers to test how robots act in realistic environments.

Despite these investigations into robot safety, regulators still face many challenges, both technical and social. For instance, on the technical side, should robots be programmed with safety rules, or should they be created with the ability for safety-oriented reasoning? Should robot ethics be based on human-centered value systems, or a combination of human-centered value systems with the robot’s own value system?

Or, legally, when a robot accident does occur, how should the responsibility be divided (for example, among the designer, manufacturer, user, or even the robot itself)?

Weng also indicated that, as robots become more integrated into human society, the importance of a legal framework for social robotics will become more obvious. He predicted that determining how to maintain a balance between human-robot interaction (robot technology development) and social system design (a legal regulation framework) will present the biggest challenges in safety when the human-robot coexistence society emerges.

Sunday, March 22, 2009

Aye, robot




Japan’s female catwalk robot is just the tip of the iceberg ... in the future, robots will fight our wars and tuck us up in bed. Edd McCracken talks to the Scots academics working to bring automatics to the people

WHEN THE HRP-4C was unveiled in Japan last Monday as the world's first female catwalk robot, it looked impressive enough. And then it moved. With all the grace of someone who sat in something nasty, it ensured that the science-fiction dream of humanoid robots in society remained firmly rooted in films like Short Circuit, Blade Runner, and WALL-E.

But scientists in Edinburgh and Aberdeen are working hard to change that. If robots get their looks from Japan, they could potentially get their brains from Scotland.

Both Edinburgh University and Robert Gordon University are world leaders in developing artificial intelligence for robots, creating software that will allow machines to learn and evolve.

Scientists at both institutions claim that smart robots will be vital parts of our lives in 10 years' time. A robot-free future is not an option.

"The aim is to have robots integrated into society in the future, there is no doubt about that," said Sethu Vijayakumar, professor of robotics and director of the Institute of Perception, Action and Behaviour at the University of Edinburgh. "In three or four years' time, we will have the technology to build a robot that would be a companion for the elderly, for example."

Microsoft founder Bill Gates has said that the robotic industry is "developing in much the same way that the computer business did 30 years ago". Costs are expected to come down, the hardware to become more compact, and the machines to become commonplace. South Korea has stated its intention to have a robot in every home by 2019.

According to Vijayakumar, within the next decade there will be commercially available robots doing specific tasks. The first is likely to be in aiding the mobility of the infirm. Robots will also undertake dirty and dangerous jobs that humans would baulk at, such as working in nuclear power plants and going into crumbling buildings after natural disasters.

Not far beyond that, robots will replace soldiers on the front line of battle, teach children foreign languages at school, help in surgery. Robots, it seems, will fight our battles, clean our homes, and give us a hug at the end of the day.

In Edinburgh, where one of the first smart robots, Freddy, was built in 1973, Vijayakumar and his team are working on solving one of the biggest obstacles to the creation of a fully autonomous, multi-purpose robot: how to give it the ability to learn. "We do not want to pre-programme everything, but we want to allow it to learn while watching humans and observing, like how we teach kids how to play tennis," he said.

Robert Gordon University is the world leader in developing software that will allow robots to "evolve". Last month, researchers revealed a robot brain that could adapt to a changed environment. The equivalent in nature, of creatures evolving from amphibians to mammals, took millions of years. The robot brain repeated the trick in a matter of hours. "Computers are a lot faster than nature is," said Chris MacLeod, director of research in the university's school of engineering. "It can evolve from something that can do very little, like move, to something that can do something useful very quickly."

Scientists say we will definitely see an adaptable, teachable, multi-purpose version of the HRP-4C, but not for another 25 years. One thing we won't see, thankfully, is the machines taking over. A robot apocalypse will remain within the realms of The Terminator films, insist experts. "The biggest problem you will have with them is you may trip over them," said Professor Chris Melhuish, from the Bristol Robotics Laboratory, the UK's largest robotics centre. "We are spending huge amounts of effort creating robots that will be massively helpful in the future, so I get very cross when people start talking about robots taking over the world."

For robots to become "self-aware" they need to replicate the human brain, and its 100 billion brain cells, "which is more than the number of stars in our galaxy", added MacLeod. "We will get nowhere near that level of complexity in our lifetime."