Sunday, March 22, 2009
Aye, robot
Japan’s female catwalk robot is just the tip of the iceberg ... in the future, robots will fight our wars and tuck us up in bed. Edd McCracken talks to the Scots academics working to bring automatons to the people
WHEN THE HRP-4C was unveiled in Japan last Monday as the world's first female catwalk robot, it looked impressive enough. And then it moved. With all the grace of someone who has just sat in something nasty, it ensured that the science-fiction dream of humanoid robots in society remained firmly rooted in films like Short Circuit, Blade Runner, and WALL-E.
But scientists in Edinburgh and Aberdeen are working hard to change that. If robots get their looks from Japan, they could potentially get their brains from Scotland.
Both Edinburgh University and Robert Gordon University are world leaders in developing artificial intelligence for robots, creating software that will allow machines to learn and evolve.
Scientists at both institutions claim that smart robots will be vital parts of our lives in 10 years' time. A robot-free future is not an option.
"The aim is to have robots integrated into society in the future, there is no doubt about that," said Sethu Vijayakumar, professor of robotics and director of the Institute of Perception, Action and Behaviour at the University of Edinburgh. "In three or four years' time, we will have the technology to build a robot that would be a companion for the elderly, for example."
Microsoft founder Bill Gates has said that the robotic industry is "developing in much the same way that the computer business did 30 years ago". Costs are expected to come down, the hardware to become more compact, and the machines to become commonplace. South Korea has stated its intention to have a robot in every home by 2019.
According to Vijayakumar, within the next decade there will be commercially available robots doing specific tasks. The first is likely to be in aiding the mobility of the infirm. Robots will also undertake dirty and dangerous jobs that humans would baulk at, such as working in nuclear power plants and going into crumbling buildings after natural disasters.
Not far beyond that, robots will replace soldiers on the front line of battle, teach children foreign languages at school and assist in surgery. Robots, it seems, will fight our battles, clean our homes, and give us a hug at the end of the day.
In Edinburgh, where one of the first smart robots, Freddy, was built in 1973, Vijayakumar and his team are working on solving one of the biggest obstacles to the creation of a fully autonomous, multi-purpose robot: how to give it the ability to learn. "We do not want to pre-programme everything, but we want to allow it to learn while watching humans and observing, like how we teach kids how to play tennis," he said.
Robert Gordon University is the world leader in developing software that will allow robots to "evolve". Last month, researchers revealed a robot brain that could adapt to a changed environment. The equivalent in nature, creatures evolving from amphibians into mammals, took millions of years. The robot brain repeated the trick in a matter of hours. "Computers are a lot faster than nature is," said Chris MacLeod, director of research in the university's school of engineering. "It can evolve from something that can do very little, like move, to something that can do something useful very quickly."
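To give a flavour of what "evolving" software means in general, here is a minimal genetic-algorithm sketch in Python. It is purely illustrative and is not the Robert Gordon University system; the genome length, fitness function and mutation settings are assumptions made up for the example.

```python
# Illustrative only: a tiny genetic algorithm that "evolves" a set of
# controller parameters, standing in for the general idea of evolved
# robot brains. Not the RGU software.
import random

GENOME_LEN = 8       # hypothetical number of controller parameters
POP_SIZE = 30
GENERATIONS = 200

def fitness(genome):
    # Stand-in fitness: reward genomes whose parameters approach a target.
    # A real system would score simulated robot behaviour instead.
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # keep the fittest half
    children = [mutate(random.choice(parents)) for _ in range(POP_SIZE // 2)]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```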
Scientists say we will definitely see an adaptable, teachable, multi-purpose version of the HRP-4C, but not for another 25 years. One thing we won't see, thankfully, is the machines taking over. A robot apocalypse will remain within the realms of The Terminator films, insist experts. "The biggest problem you will have with them is that you may trip over them," said Professor Chris Melhuish, from the Bristol Robotics Laboratory, the UK's largest robotics centre. "We are spending huge amounts of effort creating robots that will be massively helpful in the future, so I get very cross when people start talking about robots taking over the world."
For robots to become "self-aware" they need to replicate the human brain, and its 100 billion brain cells, "which is more than the number of stars in our galaxy", added MacLeod. "We will get nowhere near that level of complexity in our lifetime."
Sony PS3 Robot coming?
Sony has filed a patent application describing a robot companion for the Sony PS3.
The PS3 robot would sport a camera, a display, GPS, a gyro sensor and a microphone. The feature set suggests that the robot could roam around your home autonomously. The patent does not specify functions of the PS3 robot.
Japan is in love with robots, and many companion robots are already available that use a Wi-Fi connection to offer up services via the web or a PC. A robot that connects to the PS3 could deliver the same features.
Tuesday, March 17, 2009
Japan's HRP-4C 'fashion model robot' unveiled
Standing just over 5 feet tall and weighing 95 pounds, HRP-4C, developed by Japan's National Institute of Advanced Industrial Science and Technology for $200,000, will make its catwalk debut next week at a Tokyo fashion show. The she-bot features 30 motors spread throughout its body, with an additional eight motors in its face for expression.
Monday, March 16, 2009
A.I. In The Enterprise
Artificial intelligence will likely find its way into corporations.
Smart machines and robots that can think and react more quickly than people have been the stuff of science fiction for decades. But only recently has there been enough computing power and memory available at a reasonable price to allow it to progress to the next step.
Tests are now underway in the military to create a smart surveillance system that can interpret facial features to identify people, determine what movements are unusual and sound the alarm where necessary. And that's just the beginning. The technology will find its way into government, corporations, and eventually, even the home.
In business, these tools will almost certainly fall under the domain of CIOs as part of their expanding role in enterprise information management. But this kind of information is somewhat different from what CIOs have dealt with in the past. It still uses computers, databases and data mining, but the method of gathering information and its application head in a sharply different direction.
Forbes caught up with Rachel Goshorn, assistant professor of system engineering at the Naval Postgraduate School in Monterey, Calif., to talk about artificial intelligence and what's changing.
Forbes: Why has artificial intelligence taken so long to get out of the labs and into the real world?
Goshorn: It has been used on a limited scale for inspecting food and things like solder joints, where the rules were simple, but the computing was so intense that, for a long time, it didn't get much further.
Now that computers are cheap, it can be applied in a lot more markets, right?
Yes. One of the things that changed is that before, there was never the ability to make a sequential behavior recognition model. It was static. There weren't the programming techniques to build a model. Everything worked like a flow chart. It gave you the option of "yes" or "no." The kind of logic we needed from computer science didn't exist. We had to create a matrix so you could keep adding to it and give it statistical weighting. We needed the ability to detect, identify features and then predict behavior and react.
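To make that answer concrete, here is a minimal Python sketch of the general idea Goshorn describes: instead of a fixed yes/no flow chart, keep a matrix of observed transitions that you can keep adding to, with statistical weighting derived from the counts. The state labels and data are illustrative assumptions, not her actual system.

```python
# Minimal sketch: a growing transition matrix with statistical weighting,
# as opposed to a static flow chart of yes/no rules. States are hypothetical.
from collections import defaultdict

transition_counts = defaultdict(lambda: defaultdict(int))

def observe(prev_state, next_state):
    """Add one observed transition; the model keeps growing with the data."""
    transition_counts[prev_state][next_state] += 1

def transition_prob(prev_state, next_state):
    """Statistical weight of a transition, estimated from counts so far."""
    total = sum(transition_counts[prev_state].values())
    if total == 0:
        return 0.0
    return transition_counts[prev_state][next_state] / total

# Feed in a stream of observed states
for prev, nxt in [("A", "A"), ("A", "B"), ("B", "B"), ("A", "A")]:
    observe(prev, nxt)

print(transition_prob("A", "A"))   # 2/3 of observed transitions out of A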
All of which are incredibly compute-intensive, right?
Correct. AI is not a single thing. It's algorithms for detection, identification, prediction and reaction. These are all working in parallel. That's why you couldn't do it with a flow chart.
How does this work?
It's a sequential behavior classification system. We started out by classifying cars on a freeway. At time one, what lane are they in? At time two, what lane are they in? You concatenate where a car is over time, and you get a pattern. That allows you to predict possible accidents based upon patterns. The behavior classifier fuses electrical engineering, computer vision and compilers. You have to design an alphabet, and each letter has to be assigned to a feature or a symbol with time and space associated with it. Behavior is how you put these symbols together. It could be all A's if you're in the slow lane. A behavior is an infinite set of sentences all made up of these symbols.
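The freeway example can be sketched in a few lines of Python: each observation becomes a letter from a small alphabet (here, the lane a car occupies), and concatenating letters over time yields a "sentence" describing behaviour. The lane labels and sampling are assumptions made for illustration.

```python
# Sketch of the lane-alphabet idea: observations become symbols, and a
# time-ordered string of symbols describes a behaviour.
LANE_ALPHABET = {1: "A", 2: "B", 3: "C"}   # slow, middle, fast lane

def behaviour_sentence(lane_observations):
    """Map a time-ordered list of lane numbers to a symbol string."""
    return "".join(LANE_ALPHABET[lane] for lane in lane_observations)

# A car that stays in the slow lane: "AAAA"
print(behaviour_sentence([1, 1, 1, 1]))
# A car weaving between the slow and fast lanes: "ACAC"
print(behaviour_sentence([1, 3, 1, 3]))
```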
Then do you match this against so-called normal behavior?
Yes. Behavior is not 100% predictable, so you have to match it against a measure of something known. If it's too far away from something that's known, it can be flagged as abnormal. This modeling has to be problem-dependent. You use the same AI system-level algorithms, but they change depending upon what application you're using it for. The first phase is you get some raw data, maybe from sensors, and you have to detect certain features.
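One simple way to picture "too far away from something that's known" is to score a new symbol sequence against transition statistics learned from normal sequences and flag it when the score falls below a threshold. The sketch below continues the lane-symbol example; the training data, smoothing value and threshold are all assumptions for illustration, not the actual algorithms used.

```python
# Sketch: learn transition counts from "normal" sequences, then flag a new
# sequence as abnormal if its average per-transition log-likelihood is low.
from collections import defaultdict
import math

def learn_normal(sequences):
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def log_likelihood(seq, counts, smoothing=1e-3):
    score = 0.0
    for a, b in zip(seq, seq[1:]):
        total = sum(counts[a].values())
        p = (counts[a][b] + smoothing) / (total + smoothing) if total else smoothing
        score += math.log(p)
    return score / max(len(seq) - 1, 1)   # per-transition average

normal = learn_normal(["AAAA", "AAAB", "ABBB", "BBBB"])
THRESHOLD = -3.0   # assumed; in practice tuned per application

for seq in ["AABB", "ACAC"]:
    label = "abnormal" if log_likelihood(seq, normal) < THRESHOLD else "normal"
    print(seq, label)
```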
So this is all about probability rather than definitive answers?
Exactly.
What are the real-world applications of this?
In the radio frequency world, we were able to detect when a frequency was a friend or a foe. If you're in the middle of the desert and you've got the frequency of a garage door opener that's used to set off an IED [improvised explosive device]--you know historically these are used to initiate IEDs--you can point to where that frequency is coming from, and you can jam the frequency so it will not ignite the IED. You also can send a missile to that spot.
In the past, much of the technology that ended up in the commercial world came out of the military. Where do you think that will happen with artificial intelligence?
In the industrial world, you can do it for quality inspection. It also goes into marketing and analytics. Inside large warehouses or grocery stores, you can learn the behavior of shoppers so you can place products in different places in the store. You also can learn the behavior of online shoppers to reach them more effectively.
How does this work with existing equipment?
If you have a microphone and a camera, they don't have a lot in common. To use them as inputs, you have to do a lot of programming. But if you put detection on the microphone itself, you don't send back the voice. You can identify where a person is so the camera zooms in on him or her. They can be correlated. The idea of bringing multiple sensors--they could be voice, image, temperature or pressure--and extracting features rather than pixels or analog voices is all doable. We do this in our brains. Once you translate it into something that isn't analog or a pixel, you're into the human system. It smells like mustard or it's a brown dog. Once we classify it, we're through with the sensors and into the realm of intelligence.
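A rough sketch of that correlation step: each sensor sends back extracted features rather than raw audio or pixels, the features are matched by time, and one sensor's detection cues another. The feature fields, the time window and the steer_camera() stand-in are assumptions for illustration.

```python
# Sketch: correlate features from two sensors by timestamp so a microphone
# detection can cue the camera. Feature names are hypothetical.
from dataclasses import dataclass

@dataclass
class Feature:
    sensor: str        # "microphone", "camera", ...
    timestamp: float   # seconds
    label: str         # e.g. "voice_detected"
    bearing: float     # direction of the detection, in degrees

def correlate(features, max_gap=0.5):
    """Pair microphone and camera features that occur close together in time."""
    mics = [f for f in features if f.sensor == "microphone"]
    cams = [f for f in features if f.sensor == "camera"]
    return [(m, c) for m in mics for c in cams
            if abs(m.timestamp - c.timestamp) <= max_gap]

def steer_camera(bearing):
    # Placeholder: a real system would command a pan-tilt unit here.
    print(f"zooming camera toward bearing {bearing:.0f} degrees")

stream = [
    Feature("microphone", 10.2, "voice_detected", bearing=45.0),
    Feature("camera", 10.4, "person_detected", bearing=47.0),
]
for mic, cam in correlate(stream):
    steer_camera(mic.bearing)
```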
What's the short-term and long-term use?
The immediate use is Homeland Security. Sensors are becoming so cheap. If you're in Southwest Asia and troops have to go over a hill, you want to make sure it's safe. You can drop cheap sensors--microphones, cameras, magnetometers. Then you pull out the features to know what you're hearing and seeing, and over time and space those features make up various behaviors. Is everything OK over the hill? Are there people there? Are they good guys or bad guys? You can do the same with the border patrol here in the United States. Are they carrying bombs?
You can take any market, whether it's quality control, manufacturing, financial or medical, and you can apply it. That's what managers and executives do. If you're buying something in Phoenix and San Diego at the same time and you've never been to either of those cities, an alarm comes up. American Express applied AI years ago. In the financial world, if you want to model all the variables of shareholder selling, inside selling and everything with buy and sell, you can start predicting behavior.
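The Phoenix/San Diego example boils down to a feasibility rule: two transactions are suspicious if their locations are too far apart for the time between them. The sketch below shows one way such a rule might look; the coordinates, speed limit and threshold are illustrative assumptions, not any card network's actual logic.

```python
# Sketch: flag a pair of transactions whose implied travel speed is impossible.
import math

MAX_TRAVEL_KMH = 900.0   # roughly airliner speed; anything faster is suspect

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def is_suspicious(tx1, tx2):
    """Each tx is (timestamp_hours, (lat, lon)). True if the implied speed is impossible."""
    hours = abs(tx2[0] - tx1[0])
    km = distance_km(tx1[1], tx2[1])
    return hours == 0 or km / hours > MAX_TRAVEL_KMH

phoenix = (33.45, -112.07)
san_diego = (32.72, -117.16)
print(is_suspicious((12.0, phoenix), (12.2, san_diego)))   # True: ~480 km in 12 minutes
```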
How about the consumer world?
You can see an AI babysitter, dog-sitter or house-sitter. Based on certain behaviors, you can set off alarms and react. You can measure temperature and pressure better than intensive care in a hospital. This works for the elderly as well as babies. You also can do this on the Internet. A lot of videogames are using this for multiplayer modes like fighting.
Ambient assisted living is another area. You can enable smart rooms and look for the behavior of the elderly. If someone bedridden at home wanted to communicate with hand gestures to turn on the TV or call the doctor, individual hand positions could mean different things, and so could sequences of hand positions.
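In its simplest form, that is a lookup from gesture sequences to commands, as in the sketch below. The gesture names and commands are hypothetical.

```python
# Sketch: individual hand positions are symbols; short sequences map to commands.
GESTURE_COMMANDS = {
    ("open", "open"): "turn on the TV",
    ("open", "closed"): "turn off the TV",
    ("closed", "open", "closed"): "call the doctor",
}

def interpret(gesture_sequence):
    return GESTURE_COMMANDS.get(tuple(gesture_sequence), "no command recognised")

print(interpret(["open", "open"]))              # turn on the TV
print(interpret(["closed", "open", "closed"]))  # call the doctor
```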
Does this make robots more viable?
That's where AI was first applied. They were expert systems. The whole industrial automation world was created to build smart robots.