Automaton

The All-In-One Guitar Playing Robot and Game

With all the excitement surrounding the release of Guitar Hero 5 and The Beatles: Rock Band, I decided now was a good time to post a video I shot last month at National Instruments Week. In the past we've covered the way Guitar Hero could help amputees train brain-computer interfaces and how to turn the controllers into real musical instruments. It's been over a year since we posted about Slashbot, a robot that could play the game.

Today's video features another guitar-playing robot. But this one is different: the musical game (in this case, Frets on Fire, the open-source Guitar Hero clone), the vision acquisition system (which reads the notes off the screen), and the robotic control all run on a single processor. Check it out:

This demo was a way for Intel and National Instruments to show off their new virtualization tool, which allows engineers to assign a specific task to a particular core. I thought it was a rather impressive way to show off the technology, but I'd be curious to hear your thoughts. Is this virtualization capability worthwhile?
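In case you're wondering what "assigning a specific task to a particular core" might look like in code, here's a minimal, generic sketch (my own illustration in Python, not the actual NI/Intel tool): it simply pins two worker processes, one standing in for vision and one for robot control, to separate cores on Linux.

```python
# Generic illustration of per-core task assignment (not NI's or Intel's tool):
# pin a vision worker and a control worker to specific CPU cores.
# os.sched_setaffinity is available on Linux; cores 1 and 2 are assumed to exist.
import os
import time
from multiprocessing import Process

def vision_loop(core):
    os.sched_setaffinity(0, {core})   # restrict this process to one core
    for _ in range(120):              # pretend to grab and analyze 120 frames
        time.sleep(1 / 60)            # ~60 frames per second

def control_loop(core):
    os.sched_setaffinity(0, {core})
    for _ in range(2000):             # pretend to run a 1 kHz control loop
        time.sleep(1 / 1000)

if __name__ == "__main__":
    workers = [Process(target=vision_loop, args=(1,)),
               Process(target=control_loop, args=(2,))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```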

Boston Startup iWalk Lands Funding for Robotic Prosthetics

It's been a good couple of weeks for Boston-area robotics startups: two young companies have recently closed on significant venture rounds. One of them, iWalk, is commercializing prosthetic limb technology developed at MIT's Media Lab under Dr. Hugh Herr, himself a double amputee.

Herr, an avid rock climber, lost both legs to frostbite after a climbing accident at age 17. He immediately began building his own prosthetic limbs to get back into climbing, and as he started his academic research career he dedicated his efforts to developing prosthetic limbs using robotics. Among his projects is a powered prosthetic ankle and foot called the PowerFoot One.

 

The PowerFoot One

 

I had the opportunity to hear Dr. Herr speak at an MIT robotics conference last November. At the time, I knew nothing of his background or research. Halfway into the presentation, as he was describing his research, he rolled up the cuffs of his slacks to reveal metal and electronics. Until that point I'd had no idea he was a double amputee; as he'd walked to the front of the room, his gait -- though not completely normal -- was so smooth I'd never have guessed he was dealing with anything more than a couple of stiff joints.

Herr's work, including an assistive device for patients with muscle control difficulty, has been commercialized by other companies in the past. But iWalk -- which has tested prototypes with veterans and other disabled patients -- appears to be focused entirely on the technology coming out of Herr's research, and the $20 million Series B investment is just what it needs to start serious production.

Check out this short video of a National Geographic feature on Hugh Herr.

Robotic Fish, Coming to a Pool Near You

Nature is full of great examples of highly efficient mechanical systems, and roboticists are keen to capitalize on those designs. When it comes to underwater vehicles, fish are a popular animal to emulate.

MIT's RoboTuna, developed in the early '90s, has spawned (ha!) two spinoff robotic fish projects. One we've talked about before: GhostSwimmer. This project, led by Boston Engineering, has evolved. The previous demonstration videos showed the robot propelled by a tail propeller and steered by the movement of the "fin"; the new version is actually propelled by the motion of the tail fin itself. Check it out here:


Next up is a newer set of fish from MIT. The two robots mimic two different types of swimming mechanics: that of fish like trout versus that of fish like sharks. MIT's news release says "the new robotic fish, each less than a foot long, are powered by a single motor and are made of fewer than 10 individual components."
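For a sense of what "different swimming mechanics" means kinematically, fish-like propulsion is commonly modeled as a traveling wave running down the body, with the amplitude envelope determining how much of the body participates in the motion. Here's a toy sketch of that idea; the two envelopes are arbitrary stand-ins for two styles, not MIT's actual parameters.

```python
import math

# Illustrative only: fish-like body motion described as a traveling wave,
# y(x, t) = A(x) * sin(k*x - w*t), where the amplitude envelope A(x) differs
# between swimming styles. Both envelopes below are made-up shapes.
K = 2 * math.pi        # one full body wave along the body (x runs from 0 to 1)
W = 2 * math.pi        # 1 Hz tail-beat frequency

ENVELOPES = {
    "style_a": lambda x: 0.10 * x ** 2,      # motion concentrated near the tail
    "style_b": lambda x: 0.05 + 0.05 * x,    # more of the body participates
}

def lateral_displacement(x, t, style):
    return ENVELOPES[style](x) * math.sin(K * x - W * t)

for style in ENVELOPES:
    midbody = [round(lateral_displacement(0.5, t / 10, style), 3) for t in range(5)]
    print(style, midbody)
```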


Soon I'm sure we'll have even more roboticists swimming with the fishes!

 

Thomas and Janet: first kissing humanoid robots

First, the rehearsal:

Then the kiss:

While at the IEEE-sponsored International Conference on Service and Interactive Robotics (SIRCon) 2009, IEEE Spectrum scored an interview with the developers of the theatrical robots Thomas and Janet, which they claim are the first kissing humanoid robots.

The first kiss happened back on 27 December 2008, during a robotic performance of several scenes of Phantom of the Opera at National Taiwan University of Science and Technology (known as Taiwan Tech). Chyi-Yeu Lin, a mechanical engineering professor, directed the performance in front of a packed house of about 400. The overcrowded auditorium burst out in a resounding cheer when Christine (played by Janet) and the Phantom (played by Thomas) kissed.

Lin’s team spent three years developing the autonomous robots’ hand-eye coordination, intrinsic self-balancing mechanisms, and other technologies. He says that most of the movements during a scene are programmed into the robot ahead of time.

However, their startup and synchronization are controlled over a network by a computer that acts as a server for both robots. To make the robots’ smooches and expressions seem realistic, the team adopted several techniques, including manual molding, non-contact 3D face scanning, and 3D face morphing. The robots’ six expressions come about via servos pulling at several points in the face and mouth.
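As a very rough sketch of how a handful of servos can produce a repertoire of expressions (this is a generic illustration with made-up servo names and angles, not the team's actual control code):

```python
# Generic illustration (not Taiwan Tech's software): treat each expression as a
# set of target angles for the servos pulling on points of the face, and ease
# the face from one pose to the next.
EXPRESSIONS = {
    "neutral": {"brow": 90, "cheek_left": 90, "cheek_right": 90, "mouth": 90},
    "smile":   {"brow": 95, "cheek_left": 60, "cheek_right": 60, "mouth": 70},
    "pucker":  {"brow": 90, "cheek_left": 80, "cheek_right": 80, "mouth": 120},
}

def blend(current, target, steps=20):
    """Yield intermediate servo poses so the face eases into an expression."""
    for i in range(1, steps + 1):
        t = i / steps
        yield {name: current[name] + t * (target[name] - current[name])
               for name in current}

pose = dict(EXPRESSIONS["neutral"])
for frame in blend(pose, EXPRESSIONS["pucker"]):
    # On real hardware, each intermediate frame would be sent to the servo
    # controller here; in this sketch we just keep track of the pose.
    pose = frame
print(pose)
```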

Showing the video of the play at SIRCon, Li-Chieh Cheng, a Ph.D. student at Taiwan Tech’s Intelligent Robot Lab, said such performances bridge the distance between advanced robotics technologies and the public.

“Available service robots could be very expensive and are only used at certain places. However, tickets for theater performance are affordable for everyone,” Cheng says.

But last December’s performance wasn’t perfect. “In addition to unexpected malfunction of motors, the network controlling robots were somewhat interfered with by signals from walkie-talkie used by stage staff,” Cheng says.

Taiwan Tech has some grand plans. “We aim to form a group composed of autonomous robots, which are like well-trained versatile performers. They can not only perform different plays, sing songs, or broadcast news, but also interact with real persons appropriately,” Lin told IEEE Spectrum.

Human actors aren’t the only things in the works. “We’re designing life-size robots of panda and other animals with humanities, who can be gently hugged by children without causing danger and interact with them,” Lin says.

Robots evolve to exploit inadvertent cues

Human interaction heavily depends on inadvertent cues: A competitor's sweaty handshake before a negotiation, a girl blushing when introducing herself, or the trace of a smile crossing the face of a poker player all convey important information. Sara Mitri and colleagues at the Laboratory of Intelligent Systems (disclaimer: my former lab) at the EPFL in Switzerland have now shown that it is not just humans who can develop, detect and use inadvertent cues to their advantage (PNAS: "Evolution of Information Suppression in Communicating Robots with Conflicting Interests").

The researchers set up a group of S-bots equipped with omnidirectional cameras and light-emitting rings around their bodies in a bio-inspired foraging task (see picture above). Like many animals, the robots used visual cues to forage for two food sources in the arena. Rather than pre-programming the robots' control rules, the researchers used artificial evolution to develop the robots' control systems. As expected, robots capable of efficiently navigating the arena and locating food sources evolved within a few hundred generations.
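If you haven't seen artificial evolution of controllers before, the core loop is just selection plus mutation repeated over many generations. Here's a minimal sketch of that loop; in the actual study the genome encodes a neural-network controller and fitness comes from foraging performance on the S-bots, so the fitness function below is only a stand-in.

```python
import random

# Minimal sketch of an artificial-evolution loop (placeholder fitness).
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 50, 100

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Placeholder: in the real experiment, run the controller and score foraging.
    return -abs(sum(genome) - 5.0)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.2) if random.random() < rate else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[:POP_SIZE // 5]                        # keep the best 20%
    offspring = [mutate(random.choice(elite)) for _ in range(POP_SIZE - len(elite))]
    population = elite + offspring

print("best fitness:", max(fitness(g) for g in population))
```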

This is when things became interesting: Due to the limited amount of food, robots now began to compete for resources. Robots began to evolve strategies to use light inadvertently emitted by their peers to rapidly pinpoint food locations, in some cases even physically pushing them away to make room for themselves. As evolution progressed, the exploited robots were soon all but extinct. A new generation of robots ensued that could conceal their presence by emitting confusing patterns of light or by ceasing to emit light altogether.

I think this research highlights an interesting point: robots have applicability far beyond engineering. As a leading evolutionary biologist involved in the study put it: "Robots can be quite useful to get a better understanding of the interaction between organisms". While this kind of work is still in its infancy, watch out for robots boosting research in biology, psychology, and medicine.

Thanks Sara!

Swarm Robots Evolve Deception

In a mere 50 virtual generations, swarm bots (remember them?) using genetic software evolved the capacity to lie to other robots about the location of a food source. Initially, the robots were programmed as a group to search for an object that represented food, and they gradually learned to emit a blue light when they found the food to show other robots where it was. Researchers at EPFL in Switzerland evolved and mixed the programming of the most successful foragers until they had a group of robots that were very good at finding food, and then gave the virtual genes of each individual robot control over the blue light that signified food.

 

You might expect that the robots would learn not to signal when they found the food to reduce competition, which is passive deception, but they also evolved an actively deceptive behavior, where some robots would deliberately travel away from the food and signal their blue light, drawing other robots in the wrong direction. Crafty little buggers. Interestingly, this deceptive behavior didn't make much of a difference to the overall fitness of the group strategy of following blue lights... Some robots always tell the truth with their blue lights, which means it's always advantageous for a clueless robot to follow a blue light as opposed to just wandering randomly.

So why do some robots keep telling the truth if deception can effectively lure other robots away from the food? It's fairly simple, as I understand it... If all of the robots are deceivers, any new robot will quickly learn that avoiding blue lights is the best way to find food. And in that case, any robot that starts signaling its blue light when it does find food (through a "genetic mutation" in its software) will increase its own fitness by repelling other robots from the food it finds. As it passes this behavior on to its virtual children, there will be more and more truthful robots until it once again becomes more advantageous to be deceptive.
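What's being described here is frequency-dependent selection, and a toy model makes the intuition concrete. To be clear, this is not the EPFL team's model: the payoffs below are arbitrary numbers I've picked, so the toy equilibrium won't match the experimental 60/10 split, but it shows why neither pure honesty nor pure deception is stable.

```python
# Toy frequency-dependent-selection sketch (not the EPFL model): the payoff of
# signaling honestly vs. deceptively depends on how common deceivers already are.
def honest_payoff(p_deceivers):
    # When most signalers lie, naive robots learn to avoid blue lights, so an
    # honest light near your food repels competitors and pays off.
    return 1.0 + 2.0 * p_deceivers

def deceiver_payoff(p_deceivers):
    # Lying pays best while blue lights are still trusted, i.e. deceivers are rare.
    return 1.0 + 2.0 * (1.0 - p_deceivers)

p = 0.9  # start with 90% deceivers
for _ in range(500):
    mean = p * deceiver_payoff(p) + (1 - p) * honest_payoff(p)
    p += 0.05 * p * (deceiver_payoff(p) - mean)   # replicator-style update
print(f"share of deceivers at equilibrium: {p:.2f}")
```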

There are, however, populations of truthful and deceptive robots such that the combination of behaviors reaches a stable point. In this particular experiment, the stable evolutionary endpoint (after 500 generations) was that 60% of the robots were deceivers and 10% told the truth. Furthermore, about a third of the robots were slightly attracted to blue lights, another third were strongly attracted, and the final third avoided them completely. This type of experiment, its progression, and the results are particularly fascinating to me because the robots are behaving and evolving in much the same way as populations of animals do. Examples of both altruism and tactical deception can be found in many different species of animals as well as (of course) in humans, but these little robots offer a unique opportunity to study (and tweak) the evolution of behavior in real time.

[ EPFL ] via [ Not Rocket Science ]

Telepresence Robots May Enable Blue-Collar Commuting

Many of us here at Automaton have been skeptical of telepresence robots. We've seen companies like iRobot and Wowwee try to make their mark with consumers at home; Anybots and North End Technologies seem to want to corner the professional video conferencing (robo-conferencing?) market; and InTouch Health thinks a robot can enable a doctor to visit patients remotely. Though these companies continue to get investments, the technology hasn't taken off -- in fact, iRobot even shelved their telepresence robot project. It seems to be a technology still looking for the killer application.

I was sitting comfortably in the Skeptic Zone when I had a chance to talk with John Merchant, an IEEE member and president of RPU Technologies. John has an interesting theory: he thinks telepresence robots are the key to enabling "blue-collar telecommuting" while simultaneously helping to lower our greenhouse gas emissions.

He described how telephones, laptops, and widely available internet access changed the way white-collar workers work each day. With Blackberries and home computers, professional life has become flexible enough to stay connected to the main office while traveling or even sitting at home. This has enabled people to take jobs in distant locations without commuting every day, usually giving them more time to spend with their families and creating less overhead for employers. I think most people will agree that this has been a great technological development.

But the manual labor workforce has lagged behind. Someone needs to be physically present to operate machinery, load pallets, and carry out other manual tasks. Even heavily automated assembly lines require an operator or supervisor. These workers haven't been able to take advantage of the many benefits telecommuting has provided for others. But, says Merchant, robots might make the difference.

Imagine your job is to perform quality inspection in a production line. Your task is a matter of taking workpieces out of a bin, fixing them to a test jig, and verifying some parameter. Instead of physically driving in to work, though, you wake up, have a nice breakfast, see your kids off to school, and then log into your computer, which connects to a robot at the factory. This robot lets you see and manipulate your part of the production line. For the next eight hours you effectively play a video game as you guide the robot through the quality inspection process, then sign off for the evening just in time for your kids to get home from soccer practice.

Besides the convenience factor, think of the other possible benefits. Second and third shifts become not new people showing up for work, but time-shifted people logging in to the same robot someone else has just signed off of. Eliminating the drive cuts out significant greenhouse gas emissions -- the average commuting American spends 100 minutes in the car each day; think of how that could be cut down with telepresence commuting. It could even remove workers from environments that could damage their health or be outright dangerous... look at how NASA and the Air Force are using telepresence today, to explore planets and fly surveillance missions.
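To put a very rough number on the emissions angle (every input below except the 100-minute figure is an assumption I've picked purely for illustration):

```python
# Back-of-envelope only: all values are assumptions for illustration, except the
# 100-minute commuting figure cited above.
MINUTES_PER_DAY   = 100    # time in the car per commuting day (cited above)
AVG_SPEED_MPH     = 30     # assumed average commuting speed
KG_CO2_PER_MILE   = 0.40   # rough ballpark for a typical passenger car
WORKDAYS_PER_YEAR = 230    # assumed

miles_per_day = MINUTES_PER_DAY / 60 * AVG_SPEED_MPH
kg_co2_per_year = miles_per_day * KG_CO2_PER_MILE * WORKDAYS_PER_YEAR
print(f"~{miles_per_day:.0f} miles/day, "
      f"~{kg_co2_per_year / 1000:.1f} tonnes of CO2 per commuter per year")
```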

Of course, there are negative implications as well. If telepresence becomes commonplace, outsourcing control of the robots to developing countries with cheaper labor would be an extremely attractive proposition. And though such generalized autonomy technology is in its infancy today, the possibility that these telepresence robots could be made fully autonomous and not require human guidance will certainly make many people nervous.

And we haven't even mentioned the technical challenges. Take a look at some of today's telepresence robots -- with the exception of NASA's, none of them have manipulators; most of them are glorified LCDs on wheels. And the required bandwidth, Merchant points out, will be the biggest obstacle.

But still, Mr. Merchant's concept really made me think. Of the telepresence applications I've heard suggested, this is the one that catches my interest. The benefits are obvious. So what do you think? Is this the killer app? Or is telepresence still looking for its niche?

This all reminds me of a quote from a professor of mine that really stuck with me. He was being interviewed about his work in robotics, and he was asked, "So are you going to build the robot that watches your kids while you go to work?" "No," he said. "I want to build a robot that can go to work for me so I can spend more time with my kids."

Previously:

Personal Robots Market Will Grow To Over $5 Billion by 2015, Telepresence Next Big Thing

Is telepresence the best application roboticists can come up with?

CES: Remote presence is the name of the game

A New Contest For LEGO Lunar Robots

The Google Lunar X Prize provides a $20-million purse for the first private team to land a robotic vehicle on the moon, drive it around, and send back pictures and video. (We've profiled one of the favorites, the team from Carnegie Mellon University). Unfortunately, the technology (and money) required for a prize-winning mission goes far beyond something you can build in the basement with your kids.

But now there's a new challenge, called MoonBots, that anyone can participate in. Announced this week, the contest asks six-member teams of children and adults to design a LEGO Mindstorms robot suitable for the moon.

From the MoonBots website:

Once registration for the contest opens, teams will be asked to submit designs illustrating how they will build, program and operate their robots using LEGO MINDSTORMS robotic kits. There will be no charge to enter the contest and registration will be open to teams across the globe.

The teams with the best designs will be provided with free LEGOs to actually build and program their robots. The completed bots will compete in challenges similar to the Lunar X Prize, but on a simulated "Moonscape."

If LEGOs aren't your thing, and you want to work on a real Lunar X robot, check out Team FREDnet, which I featured in a June article about participatory space exploration.

Who's Proposing Ethical Guidelines for Robots?

When it comes to "killer robots", my philosophy -- as most of you know by now -- comes down to, "We should think about it, but there's no need to panic." Fortunately, there's a group that's doing just that: the International Joint Conference for Artificial Intelligence. Their meeting last month generated some really interesting discussion, including calls for a wider debate on the use of AI in military and security applications, but also discussed the benefits that more advanced artificial intelligence could bring us. (You can read many of the papers online here)

Keep in mind that the definition of "artificial intelligence" reaches far outside of robotics, so these discussions also considered technology like computer viruses -- which some researchers claimed had reached "cockroach-like" levels of self-preservation and reproduction -- and software that tries to break CAPTCHAs. Spambots and the Conficker worm generally don't figure into my day-to-day robotics engineering but they might be closer relatives to Skynet than my Roomba is.

The gist of it is a hesitancy to hand decision making over to an autonomous robot. This comes on the heels of a plan released by the US Air Force to move toward decision-making aerial drones by the year 2047 (pdf) -- an idea that would make almost anyone nervous, and an interesting change from the Army's Future Combat Systems' insistence on keeping a human in the loop. But the IJCAI's concerns about autonomous decision making have legs well outside of military robots; there has been much discussion and nervousness lately about automated computer trading systems on the stock market. (As I'm writing this I'm also listening to NPR, which has just informed me that the SEC has already moved to ban the practice)

At the same time the proceedings of this conference came out, an interesting article (behind a pay login -- sorry, non-IEEE members) was published in the IEEE Intelligent Systems magazine that referenced the work of a couple of roboticists who are interested in rewriting Asimov's Three Laws of Robotics for modern application. The good news is, they got it right that Asimov's laws are plot devices and neither can nor should be taken as gospel for a robotic code of ethics. The bad news is... well, take a read for yourself, and then I'll tell you what I think.

Here are Asimov's original laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

And here are the proposed new three laws:

  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
  2. A robot must respond to humans as appropriate for their roles.
  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

While it's a cute PR move to copy the famous Three Laws structure, these really appear to be a set of guidelines of which there happen to be three. They are also guidelines for the human creators of robots, not for the robots themselves -- and while the authors acknowledge this fact (you can read it in the abstract linked above), it defeats the purpose of the original Three Laws, which were hard-coded into the positronic brain of every robot and were supposedly failsafe. And let's face it... these laws, especially the first two, are pretty darn vague. You think Asimov's laws had loopholes to be exploited... boy, what happens the first time a robot gets around to reading some history books and finds what used to be some of humanity's highest legal and professional ethical standards?

It is all tongue-in-cheek, of course, but I think this Three Laws business just shows how uncertain we are of what we want our robots to do. We can't even make an absolute statement of "don't kill anyone." We can't decide whether robots should be doing our bidding as mechanical slaves, or whether there are roles in which it is "more appropriate" for a robot and a human to interact on a more equal level. We've got a long way to go until we understand where artificial intelligences actually fit in our world -- and the good news is, I think we've got a decent amount of time on our hands to figure that out.

So in summary: keep thinking, folks, but keep not panicking.

Video: Japanese Exoskeletons Stroll Through Tokyo Streets

Sankai-san is back. Check out the latest demo of the HAL exoskeleton suit, invented by Yoshiyuki Sankai at the University of Tsukuba in Japan. Sankai is commercializing the suit through his company, Cyberdyne, which has recently demonstrated the system on the streets of Tokyo.

What the video doesn't show is that the suit uses bio-electrical sensors attached to the body to capture electromyogram signals on the skin and control the actuators. Cyberdyne claims that the suit can "multiply the original strength by a factor of 2 to 10."
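Cyberdyne hasn't published HAL's control algorithms, but a standard technique in the exoskeleton and powered-prosthetics literature is proportional myoelectric control: rectify and smooth the skin-surface EMG into an "effort" envelope, then command assistance proportional to it. Here's a minimal sketch of that general idea -- the signal and gains below are invented, and this is not presented as HAL's actual method.

```python
import random

# Sketch of proportional myoelectric control (generic, not Cyberdyne's algorithm):
# rectify and low-pass filter the raw EMG, then scale the envelope into an
# assist command. The EMG signal and gains below are made up.
ASSIST_GAIN = 3.0        # assumed "strength multiplier"
ALPHA = 0.05             # smoothing factor of the low-pass filter

def fake_emg(t):
    # Fake EMG: a noisy burst of muscle activity between t = 1 s and t = 2 s.
    burst = 1.0 if 1.0 < t < 2.0 else 0.1
    return burst * random.gauss(0, 1)

envelope = 0.0
assist_commands = []
for i in range(3000):                        # 3 seconds sampled at 1 kHz
    t = i / 1000.0
    rectified = abs(fake_emg(t))             # full-wave rectification
    envelope = ALPHA * rectified + (1 - ALPHA) * envelope   # smoothing
    assist_commands.append(ASSIST_GAIN * envelope)          # assist command

print(f"peak assist command: {max(assist_commands):.2f}")
```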

Would that let people commute to work on foot without breaking a sweat?

For more on the technology behind the HAL suit, see this article on exoskeletons that my colleague Harry Goldstein and I wrote a while ago.

Video: NECN

 
