Automaton

NASA Ready to Send Humanoid Robot to Space

Robonaut 2

In one giant leap for robotkind, NASA will send the world’s first humanoid robot to space later this year.

The humanoid, called Robonaut 2 or R2, is set to launch on space shuttle Discovery on 1 November, 2010, and travel to the International Space Station, where it will become a permanent resident and work alongside humans as a robotic helper.

The Robonaut features dexterous arms and hands that can manipulate objects and tools just like humans do. Astronauts will mount the robot on a fixed pedestal inside one of the space station labs and use it to perform tasks like flipping switches, cleaning air filters, and holding tools.

The main goal is to find out how manipulation robots behave in space -- and also give crew members a second pair of hands. NASA hopes the experience will allow it to upgrade the robot in the future, so it would be able to support astronauts in more complex tasks, including repairs and scientific missions inside and outside the ISS.

"It’s the first time ever in the history of the planet that we’ve decided to launch a humanoid robot into space," says Nic Radford, the Robonaut deputy project manager. "It’s been an amazing experience."

The robot can perform tasks autonomously or under remote control, or a mix of both. Astronauts on the station will operate the robot using a laptop. The Robonaut can also be "joysticked" and directly controlled from Earth, though there's a delay of several seconds for commands to reach the space station.

Most of the time the robot will receive instructions designating a task and carry it through autonomously. But NASA has tested a sensor suit that a human operator can wear to transmit motions to the robot. Eventually the Robonaut could become a powerful telepresence system for space exploration.
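
To see why task-level commands make sense when every instruction takes several seconds to reach the station, here's a minimal Python sketch. The task name, steps, and 6-second delay are all made up for illustration and have nothing to do with NASA's actual interface.

```python
import time

UPLINK_DELAY_S = 6.0  # hypothetical ground-to-ISS command latency (several seconds)

def send_task(task_name, steps):
    """Send one high-level task; the robot then sequences the steps locally."""
    print(f"[ground] sending task '{task_name}' ...")
    time.sleep(UPLINK_DELAY_S)          # the command rides up once, not per joystick tick
    print(f"[robot]  received '{task_name}', executing autonomously:")
    for step in steps:
        print(f"[robot]    {step}")

# One delayed command covers a whole sequence of onboard actions.
send_task("clean_air_filter", ["locate filter", "grasp handle", "wipe surface", "stow tool"])
```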

Robonaut 2

And why a human-shaped robot? The advantage of a humanoid design, Radford says, is that it can use the exact same equipment and interfaces that the crew does.

“Space shuttles and stations were made with humans in mind,” he explains. “The technology we’ve invested in over the years requires five fingers and two arms to operate. So the humanoid system with fine points of dexterity is a logical design as opposed to redesigning the shuttle interface for a non-humanoid robot.”

The Robonaut, which looks a bit like Star Wars' Boba Fett, weighs about 150 kilograms [330 pounds]. Built primarily of aluminum, with some steel parts, it carries more than 350 sensors and has a total of 42 degrees of freedom.

Each arm is about 80 centimeters long and can hold 9 kg [20 lb] in Earth's gravity. Each hand has 12 degrees of freedom: 4 degrees of freedom in the thumb, 3 degrees of freedom each in the index and middle fingers, and 1 each in the other fingers.

Behind its helmet visor are four visible light cameras: two provide stereo vision for the robot and remote operators, and two work as auxiliary cameras. A fifth infrared camera is housed in the mouth area for depth perception.

Because the head is full of cameras, the robot's computer system -- 38 PowerPC processors -- is housed inside the torso. Or as NASA puts it, "R2 thinks with its stomach -- literally."

This version of the robot has no legs or wheels. It's 101 cm [3 feet, 4 inches] tall from waist to head. “From the waist up he looks quite like me,” Radford jokes. “Big biceps and very muscular.”

Working with GM over the past three years, NASA originally designed Robonaut 2 as a prototype to be used on Earth so engineers could understand what would be needed to eventually send a robot to space. But when mission managers saw the robot early this year, they were so impressed they decided to send it to the space station. The Robonaut team was given six months to get the robot ready.

Here's an overview of the project:

In a second phase of the Robonaut project, at a date yet to be decided, NASA will make the unit mobile with a leg-like system, giving it the ability to move around inside the ISS. The third phase will feature a robot that can perform missions outside the space station.

The Robonaut is also part of Project M, which aims to put a humanoid robot on the moon in 1,000 days -- beating Japan's proposed goal of 2015. Even if the effort, which Radford describes as "a quick, hot burn project," is not successful, it will likely produce useful technologies.

The future of space exploration seems certain to include advanced telepresence robots with expert operators controlling them safely from Earth. But Radford sees the robots being used in cooperation with humans, not replacing them completely. "They are just another tool in a human explorer's toolbox," he says.

“This is the dawning of a different era,” he concludes. “Ubiquitous robots that are in and around us, doing everything. We represent the beginning of that.”

As for Robonaut, it's currently cocooned inside a foam-cushioned aluminum frame called the Structural Launch Enclosure to Effectively Protect Robonaut, or SLEEPR for short, where it awaits its ride to space. Bon voyage, Robonaut!

Watch Robonaut getting ready for the big trip:

Images and videos: NASA

Robot Companions to Befriend Sick Kids at European Hospital

aliz-e project

If you've ever spent time with an interactive robot, you know the experience is novel at first -- but the thrill soon fades. Each time the robot meets you, it runs through the same routine of canned responses, which gets old pretty quickly.

But what if robots were able to remember who you are and retain information associated with you? It might make them better companions, and a step up from passive toys.

The ALIZ-E project is a European Commission-funded venture aimed at producing robots that can forge "longer-term constructive bonds" with people in a hospital setting. Grouping academic and commercial partners, it kicked off six months ago and will run for four and a half years.

It's essentially an artificial intelligence effort. Aldebaran Robotics' Nao humanoids, used in RoboCup and research centers around the world, will be interacting with and helping kids at the Hospital San Raffaele in Milan. The children are about 8 years old, and have recently been diagnosed with diabetes.

The patients will be learning about diabetes and how to manage it during their hospitalization. The researchers, meanwhile, will see whether these robot caregivers can be as effective as pet therapy in hospitals, which can be expensive and pose hygiene issues. Other groups are exploring similar ideas using a robot seal called Paro.

"We hope the robots can make their stay at the hospital more pleasant and also take over some of the tasks of the medical staff, such as the educational aspects of the [diabetes] program," says ALIZ-E coordinator Tony Belpaeme of the University of Plymouth, in the U.K. "In every aspect of the project's development we are trying to push boundaries."

The researchers are programming 20 Nao robots to work one-on-one with children and caregivers, and are focusing on cognitive abilities including natural language processing, child speech recognition, facial expression recognition, and episodic and semantic memory. The children will be evaluated on their knowledge of diabetes from training with Nao versus a nurse. 

"If you use a chatbot, the chatbot really doesn’t understand what you are talking about," says Belpaeme. "Here, we want the robot, upon hearing a proper name, say Fabio, to recall interaction history, images, and memories in general pertaining to Fabio."

The Nao robots can't store or process much data onboard, so the researchers will use a cloud-based computing system for the heavy data processing. The robots will rely on this cloud infrastructure to process speech and non-verbal inputs, store a distributed model of long-term memory, and use that information to produce more lifelike interactions.
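
As a rough sketch of the kind of person-keyed long-term memory the project is after, consider the toy Python class below. The structure and method names are hypothetical and stand in for a cloud-hosted service; this is not the actual ALIZ-E or GostaiNet design.

```python
from collections import defaultdict
from datetime import datetime

class EpisodicMemory:
    """Toy person-keyed memory store, standing in for a cloud-hosted service."""

    def __init__(self):
        self._episodes = defaultdict(list)

    def record(self, person, event):
        self._episodes[person].append(
            {"time": datetime.now().isoformat(timespec="seconds"), "event": event}
        )

    def recall(self, person):
        """Return everything remembered about a person, e.g. on hearing their name."""
        return self._episodes.get(person, [])

memory = EpisodicMemory()
memory.record("Fabio", "played the diabetes quiz, scored 8/10")
memory.record("Fabio", "said his favorite football team is Milan")

# A later session: the robot hears "Fabio" and retrieves the interaction history.
for episode in memory.recall("Fabio"):
    print(episode["event"])
```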

The computing architecture for this will be GostaiNet from partner firm Gostai, a French start-up known for its open-source Urbi robotics platform. Urbi has been used in everything from Aibo robot dogs to Segway RMP platforms to Lego Mindstorms NXT kits.

The main innovation of Urbi is its script language, says Gostai CEO Jean-Christophe Baillie. "This language makes it much easier to handle complex parallel tasks and also to handle and process events, which are two essential parts of any robotic program," he says. "Urbiscript is strongly coupled to C++ and other traditional languages, so it's easy to integrate it within a project."
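
Urbiscript itself isn't shown here, but the flavor Baillie describes -- behaviors running in parallel while the program also reacts to events -- can be loosely mimicked in Python with asyncio. This is an analogy only, not Urbi code, and the behavior names are invented.

```python
import asyncio

async def wave_arm():
    for _ in range(3):
        print("waving arm")
        await asyncio.sleep(0.5)

async def blink_eyes():
    for _ in range(3):
        print("blinking eyes")
        await asyncio.sleep(0.3)

async def on_face_detected(event):
    await event.wait()               # react whenever the event fires
    print("face detected -> greeting")

async def main():
    face_event = asyncio.Event()
    # Run two behaviors in parallel while also listening for an event,
    # roughly what urbiscript expresses with its parallel/event constructs.
    listener = asyncio.create_task(on_face_detected(face_event))
    await asyncio.gather(wave_arm(), blink_eyes())
    face_event.set()
    await listener

asyncio.run(main())
```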

Could more realistic human-robot interactions develop into relationships? Will children become attached to Nao? We'll see how the sick kids take to the robot's bedside manner. In the future, they might look back at Nao as the first of many robot doctors.

Boston Dynamics' Marc Raibert On The Future Of BigDog

Back in May (I think, although the video wasn't posted until now), Marc Raibert, founder of Boston Dynamics, gave a talk at Stanford on the current progress and future plans for BigDog. It's over an hour long, but (as you might expect) the juicy bits about future plans come toward the end. If you don't have an hour or so, I'd recommend starting at about the 46:50 mark, where you get to see some video of a quieter BigDog with an electric motor, among other things. If you don't have time for even that, here's a summary of what I thought were the most interesting bits:

-Marc Raibert says he’s inspired by mountain goats, which is pretty daunting when you’re designing a quadrupedal robot.

-Robots vs. mules: mules are better, except: they can only carry about a third of their body weight, they don’t take direction well, and they’re not easy to warehouse.

-That video of BigDog slipping on ice and recovering? It wasn't programmed specifically to deal with slippery surfaces, and the team didn't even know it was icy out; they were shooting some other test video when BigDog happened to cross a patch of ice and recovered using its standard dynamic balance programming.

-BigDog is able to run (actually run, including a stride phase without any ground contact) at a little bit over 6 mph, although they’re still working on its balance while running.

-Boston Dynamics has two working BigDogs, both of which you can see in action at 30:40 (this is new video). Raibert wants to get 7 or 8 of them together to go dog sledding (!).

-BigDog can’t yet get up on its own, but they’re working on it… The next generation will have the hip (or shoulder) joints positioned outside of the body and higher up, with an increased range of motion that will allow the robot to get its legs under its body, which the current generation can’t do.

-Kinematically, the orientation of BigDog’s legs (knee front or knee back) just doesn’t matter. They’re able to take the legs off and swap them around.

-The noise BigDog makes is “much worse” in person. The videos “don’t do it justice.”

-Electric motor BigDog still sounds like bees (although they’ll be able to mute it completely), only runs for 10 minutes, and is slightly underpowered… They’re contemplating a “hybrid” version, where you can switch to silent operation for 10 minutes and then back to gas.

-BigDog can follow people autonomously using a scanning LIDAR system, engineers say it’s “really scary to have the robot following you going down hills” (ha!).

-There’s no redundancy in the walking system, “BigDog goes down when you shoot off a leg.”

-The biggest challenge so far has been making the system able to run in the heat (due to the engine).

There’s also a little bit of an update on PETMAN; unfortunately, the outtakes weren’t approved for webcast (neither, for that matter, were the BigDog outtakes. FROWNY FACE.). But you do get to see a CAD rendering of PETMAN:

Marc says PETMAN freaks him out a little bit because of the whole Uncanny Valley thing, and he's trying to keep that in mind as he designs it.

At the end, Marc Raibert even gives a shout-out to that brilliant BigDog parody video… He says that his new metric is how many views his BigDog YouTube videos (and their parodies) receive.

[ Boston Dynamics BigDog ]
[ Stanford @ YouTube ]

Cyborg Fly Pilots Robot Through Obstacle Course

Swiss researchers have used a fruit fly to steer a mobile robot through an obstacle course in the lab. They call it the Cyborg Fly.

Chauncey Graetzel and colleagues at ETH Zurich's Institute of Robotics and Intelligent Systems started by building a miniature IMAX movie theater for their fly. Inside, they glued the insect facing an LED screen that flashed different patterns. These patterns visually stimulated the fly to beat its left or right wing faster or slower, and a vision system translated the wing motion into commands to steer the robot in real time.

The fly, in other words, believed it was airborne when in reality it was fixed to a tether ("A" in the image below), watching LEDs blink ("B") while remote controlling a robot ("C") from a virtual-reality simulation arena ("D"). Is this The Matrix, or Avatar, for flies?

cyborg fly

Graetzel tells me the goal of the project was to study low-level flight control in insects, which could help design better, bio-inspired robots. "Our goal was not to replace human drivers with flies," he quips.

Watch:

The key component in their setup was a high-speed computer vision system that captured the beating of the fly's wings. It extracted parameters such as wing beat frequency, amplitude, position, and phase. This data, in turn, was used to drive the mobile robot. Closing the loop, the robot carried cameras and proximity sensors; an algorithm transformed this data stream into the light patterns displayed on the LED screen.
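
In rough pseudocode terms, the closed loop looks something like the Python sketch below. The function names, the steering law, and the random stand-in data are all hypothetical; the real system ran at kilohertz rates on dedicated vision hardware.

```python
import random

def read_wing_params():
    """Stand-in for the high-speed vision system: left/right wing-beat amplitudes."""
    return {"left_amp": random.uniform(0.8, 1.2), "right_amp": random.uniform(0.8, 1.2)}

def wing_asymmetry_to_steering(params, gain=1.0):
    """Map the left/right amplitude difference to a turn command for the robot."""
    return gain * (params["left_amp"] - params["right_amp"])

def robot_sensors_to_led_pattern(obstacle_bearing):
    """Close the loop: turn the robot's sensor data into a visual stimulus for the fly."""
    return "pattern_left" if obstacle_bearing < 0 else "pattern_right"

for _ in range(5):                             # a few iterations of the closed loop
    steering = wing_asymmetry_to_steering(read_wing_params())
    obstacle_bearing = random.uniform(-1, 1)   # stand-in for camera/proximity data
    led_pattern = robot_sensors_to_led_pattern(obstacle_bearing)
    print(f"steer {steering:+.2f}, display {led_pattern}")
```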

In a paper in the July 2010 issue of IEEE Transactions on Automation Science and Engineering, they describe the vision system's latest version. It uses a camera that focuses on a small subset of pixels of interest (the part of the fly's wings responsible for most lift, for instance) and a predictive algorithm that constantly reevaluates and selects this subset. The researchers report that their system can sample the wings at 7 kilohertz -- several times as fast as other tracking techniques.
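
The pixels-of-interest idea can be illustrated with a toy 1-D version: predict where the wing will be next from recent samples and read out only a small window around that prediction. The numbers below are made up, and the predictor is a simple linear extrapolation rather than the paper's algorithm.

```python
import numpy as np

def predict_next(positions):
    """Linear extrapolation from the last two samples (a stand-in for the real predictor)."""
    return 2 * positions[-1] - positions[-2]

frame_width = 1024
roi_half_width = 16                       # read only a 32-pixel window instead of the full row
true_positions = 512 + 100 * np.sin(np.linspace(0, 4 * np.pi, 50))  # synthetic wing motion

history = list(true_positions[:2])
for true_pos in true_positions[2:]:
    center = int(np.clip(predict_next(history), roi_half_width, frame_width - roi_half_width))
    lo, hi = center - roi_half_width, center + roi_half_width
    in_window = lo <= true_pos <= hi      # did the small readout window capture the wing?
    history.append(true_pos)
    print(f"ROI [{lo}, {hi}] -> {'hit' if in_window else 'miss'}")
```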

"As autonomous robots get smaller, their size and speed approach that of the biological counterparts from which they are often inspired," they write in the paper, adding that their technique could "be relevant to the tracking of micro and nano robots, where high relative velocities make them hard to folow and where robust visual position feedback is crucial for sensing and control."

The main Cyborg Fly experiments took place about two years ago as part of a research effort led by professor Steven Fry at the Fly Group at ETH/University of Zurich. That work was a collaboration with ETH's Institute of Robotics and Intelligent Systems, directed by professor Bradley Nelson. Tufts University's Center for Engineering Education and Outreach, in Boston, directed by mechanical engineering professor Chris Rogers, was also involved.

The Cyborg Fly is not the only "flight simulator" for bugs, and other research groups have used insects to control robots. But still, the ETH project stands out because of its high-speed vision component. This system could be useful not only for biology research, to study insect flight and track fast movements of appendages or the body, but also for industrial applications -- for monitoring a production line or controlling fast manipulators, for example.

Graetzel says they tested two different "movie theater" configurations. One used two parallel LED panels, with the fly in the middle. They later upgraded it to a cylindrical LED panel. They also used two types of robot. The first was an e-puck, a small wheeled robot designed for use in research projects. Later the researchers built a robot using Lego NXT.

The Cyborg Fly project was a finalist in the robotics category at this year's Graphical System Design Achievement Awards, an event organized by National Instruments, in Austin, Tex.

Graetzel has since received his PhD degree and moved on to other things -- that do not involve flies.

Another video:

Images and videos: ETH Zurich/Institute of Robotics and Intelligent Systems

Read also:

Man Replaces Eye with Bionic Camera
Fri, June 11, 2010

Blog Post: Canadian filmmaker Rob "Eyeborg" Spence has replaced his false eye with a bionic camera eye

Monkey Controls Robot with Mind
Wed, June 02, 2010

Blog Post: A monkey with a brain-machine interface commands a 7-degree-of-freedom robotic arm

Robot Bacteria Builds Pyramid
Thu, March 25, 2010

Blog Post: Researchers made a swarm of bacteria perform micro-manipulations and build a tiny pyramid

Cockroach-Inspired Robot Dashes Off
Tue, October 13, 2009

Blog Post: This UC Berkeley robot can survive a 7-story fall -- and dash off at high speed.

Rooftop Robotics: Or How I Learned to Program a Microcontroller Watching the Sunset

When I received an e-mail early this month from the DorkBot mailing list advertising an "Intro to Robotics" class to take place on a rooftop near Prospect Park in Brooklyn, N.Y., I was intrigued, to say the least. The class description: "You'll learn and do hands on stuff, without too much theoretical crap. After this class you'll have all the basic skills and knowledge needed to start making your own robots." Rooftop robotics? Sure.

The teacher, Lee von Kraus, is a neural engineer pursuing a PhD in neurobiology at SUNY Downstate Medical Center. The class began with casual conversation, exchanging stories of Tesla vs. Edison rivalries and elephant electrocutions. Then von Kraus, who is rapaciously intelligent with a swimmer's build and an affinity for the animal kingdom, described the fundamentals of electricity -- how voltage and current work in a circuit and the symbols for ground, switches, resistors, and so on. One student, who described himself as "more of a high voltage guy," offered an analogy comparing the flow of electrical current to that of a river.

Now it was time for the hands-on part. The six students formed two groups and received breadboards for prototyping circuits. We learned how to use a multimeter, then connected power to a microcontroller, a BASIC Stamp module from Parallax. The microcontroller has a small, specialized BASIC interpreter (PBASIC) built into ROM. We connected it to a PC via a serial port and used the free Parallax software to program it to display "Hello World!" on the screen. To test the program, we wired the microcontroller to a switch circuit: each time we pressed a small button, the software spelled out "Hello World!"

It turns out, von Kraus said, that's just how robots work. "A robot," he explained, "is something that senses its environment then acts upon it." By adding sensors and motors to our circuit and writing a more complex program, we'd be able to build all sorts of robotic contraptions. We talked about DIY projects for building mini robots out of toothbrush heads and mechanical pencils. The class wound down with a discussion of "H-bridges" and "pulse-width modulation," which microcontrollers can use to control motor direction and speed.
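
As a follow-on to that sense-then-act idea, here's a small Python simulation (not PBASIC, and with made-up sensor values) showing how a reading might be mapped to an H-bridge direction and a PWM duty cycle:

```python
import random

def read_distance_cm():
    """Stand-in for a range sensor reading."""
    return random.uniform(5, 80)

def motor_command(distance_cm, target_cm=30):
    """Pick an H-bridge direction and PWM duty cycle from how far we are from the target."""
    error = distance_cm - target_cm
    direction = "forward" if error > 0 else "reverse"   # the H-bridge flips motor polarity
    duty = min(abs(error) / target_cm, 1.0)             # the PWM duty cycle sets the speed
    return direction, duty

for _ in range(5):                        # the basic sense -> act loop
    d = read_distance_cm()
    direction, duty = motor_command(d)
    print(f"distance {d:5.1f} cm -> {direction} at {duty:.0%} duty")
```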

The two-hour, US $30 class was a casual introduction to electronics and robotics, a perfect baby step into a world that both excites and intimidates the average person. You probably won't be able to hack your Roomba into a beer-fetching servant bot right after the course, but at least you'll know where to start.

And you can always ask von Kraus, if he's not too busy working on his dissertation or teaching other classes, such as science for art majors, cellular biology, wilderness survival, and night kayaking tours of NYC. He hopes his research in neurobiology will contribute to brain augmentation efforts. "We are all just circuits," he said in the end.

Iran's Humanoid Robot Surena Walks, Stands on One Leg

iran surena 2 humanoid robot

Researchers at Tehran University, in Iran, unveiled last month an adult-sized humanoid robot called Surena 2.

The initial press reports by Iran's official news media didn't include many details, saying only that it could "walk like a human being but at a slower pace" and perform some other tasks, and questions surfaced about the robot's real capabilities.

Now IEEE Spectrum has obtained detailed information about Surena and exclusive images and videos showing that the robot can indeed walk -- and even stand on one leg.

Aghil Yousefi-Koma, a professor of engineering at the University of Tehran who leads the Surena project, tells me that the goal is to explore "both theoretical and experimental aspects of bipedal locomotion."

The humanoid relies on gyroscopes and accelerometers to remain in balance and move its legs, still very slowly, but Yousefi-Koma says his team is developing a "feedback control system that provides dynamic balance, yielding a much more human-like motion."

Surena 2, which weighs 45 kilograms and stands 1.45 meters tall, has a total of 22 degrees of freedom: each leg has 6 DOF, each arm 4 DOF, and the head 2 DOF. An operator uses a remote control to make the robot walk and move its arms and head. The robot can also bow. Watch:

Surena doesn't have the agile arms of Hubo, the powerful legs of Petman, or the charisma of Asimo -- but hey, this is only the robot's second generation, built by a team of 20 engineers and students in less than two years. A first, much simpler version of the robot, with only 8 DOF, was demonstrated in late 2008.

Yousefi-Koma, who is director of both the Center for Advanced Vehicles (CAV) and the Advanced Dynamic and Control Systems Laboratory (ADCSL) at the University of Tehran, says another goal of the project is "to demonstrate to students and to the public the excitement of a career in engineering."

Next the researchers plan to develop speech and vision capabilities and improve the robot's mobility and dexterity. They also plan to give Surena "a higher level of machine intelligence," he says, "suitable for various industrial, medical, and household applications."

The robot was unveiled by Iranian President Mahmoud Ahmadinejad on July 3rd in Tehran as part of the country's celebration of "Industry and Mine Day." Surena 2 is a joint project between the Center for Advanced Vehicles and the R&D Society of Iranian Industries and Mines.

Below, a demo the researchers gave on Iranian TV and more photos.

iran surena 2

iran surena 2 humanoid robot

Photos and videos: Center for Advanced Vehicles/University of Tehran

READ ALSO:

First Humanoid Robot Kiss
Mon, August 24, 2009

Blog Post: Theatrical robots Thomas and Janet perform the first robot kiss

Strange Robot Mimics Human Body
Tue, August 11, 2009

Blog Post: Researchers design a humanoid robot by copying the human body's inner structures

Humanoid Robot Imitates You
Tue, April 27, 2010

Blog Post: Wear a sensor suit and this Korean robot will reproduce all your movements

Run, Petman Robot, Run
Thu, April 22, 2010

Blog Post: Boston Dynamics' Petman bipedal bot is a faster walker. Can it jog?

Robots Podcast: Distributed Flight Array

The cycle of the Autonomous Distributed Flight Array: Assembly, Flight, and Chaotic End

The latest episode of the Robots podcast interviews Raymond Oung, a researcher at the Institute of Dynamic Systems and Control (IDSC) at the ETH Zurich, about his amazing Distributed Flight Array (DFA).

Use the player below to listen or download the mp3, and continue reading to learn more.

Oung is one of my colleagues at IDSC, which is headed by Prof. Raffaello D'Andrea and well known for its unconstrained and creative take on autonomous systems and its successful collaborations with innovators in other fields. The Distributed Flight Array developed by Oung is no exception:

You can think of the Distributed Flight Array as a combination between vertical take-off and landing vehicles, and modular reconfigurable robots. It is a flying platform consisting of multiple, autonomous, single-propeller vehicles, and these single propeller vehicles - or modules - are able to generate enough thrust to lift themselves into the air, but are completely unstable in flight, kind of like a helicopter without a tail rotor.

A DFA cycle begins with several modules scattered around the ground. The modules drive around and begin to self-assemble, randomly connecting with their peers. Once a sufficient number of modules are assembled, they are able to coordinate and take flight. The DFA will then hover for a pre-determined amount of time before breaking apart and falling back to the ground, only to repeat the cycle in a new configuration.
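
Getting a pile of individually unstable modules to act as one vehicle is, at heart, a distributed estimation and control problem: each module computes locally and talks only to its neighbors. As a generic illustration of that flavor -- not the DFA's actual algorithm, which is described in Oung's papers -- here's a tiny consensus-averaging sketch in Python, where modules arranged in a ring repeatedly average their local estimates until they agree:

```python
# Each module starts with a noisy local estimate (say, of the platform's tilt)
# and repeatedly averages it with its ring neighbors until all modules agree.
estimates = [0.9, 1.3, 0.7, 1.1, 1.0]          # one value per module, arbitrary units

for step in range(20):
    estimates = [
        (estimates[i - 1] + estimates[i] + estimates[(i + 1) % len(estimates)]) / 3.0
        for i in range(len(estimates))
    ]

print([round(e, 3) for e in estimates])        # all modules converge to the same value
```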

First prototype of the IDSC's DFA airborne

According to Oung, the DFA is a rich test bed to conduct research on algorithms and architectures in distributed estimation and control (have a look at his publications at CDC2009 and ICRA2010), because it abstracts many of the real-world issues for the next generation of distributed multi-agent systems. Apart from its value for research, Oung also points out the DFA's value as a pedagogical tool and its artistic value:

Robots, as you know, are inherently eye-catching to the public - control theory not so much. Concepts in control theory are usually difficult for the general public to appreciate [...], so projects like the Distributed Flight Array provide the opportunity to illustrate control theory research to the general public in a tangible way.
[...]
One of the motivations behind art is the expression of the imagination. By my definition art is made with the intention of stimulating thought and emotion. I'm not sure if the flight array really stimulates emotion, but it certainly stimulates thought. For what it is, it does communicate ideas to a broad audience, such as expressing the underlying math and control algorithms behind it, so in that sense I do believe it is a piece of art.

For more details, including current work, potential applications, and future plans for the DFA, have a look at the Robots website or tune in to the podcast!

Some videos:

More images:

Four DFA modules ready to take flight

The DFA connection mechanism developed at the IDSC

The DFA's custom-built omni-wheels

LineScout Robot Climbs on Live Power Lines to Inspect Them


Hydro-Québec's LineScout rolling on a high-voltage line. Image: Hydro-Québec

Canada's Hydro-Québec Research Institute started the LineScout project after the 1998 North American ice storm that led to massive power outages and left millions of people without electricity for several days. The idea was to have a small mobile robot that could roll on high-voltage transmission lines and de-ice them.

The first LineScout was a little rover that hung head-down like a sloth and was equipped with claws to break the ice. The new generation, featured in a recent IEEE Spectrum article, is larger and equipped with cameras and a thermo-infrared imager. The remote-controlled robot has been used dozens of times to do inspection and maintenance on high-voltage lines (2,000 amps, 735 kilovolts). It uses cameras to inspect line conditions and spot irregularities, and a smart navigation system to pinpoint locations in need of attention.

Japanese robotics company HiBot and the Electric Power Research Institute in the United States are also developing power line inspection robots.

Canada's LineScout has arms to maneuver over obstacles such as splices, hardware components, and aircraft warning markers. Unlike conventional transmission-line servicing crews, the robot can work on lines while they are energized, saving precious company resources and reducing safety risks and downtime.

The robot was recently tested on the BC Hydro transmission lines -- a project that last June received the prestigious Edison Award from the Edison Electric Institute. The video below describes the technology and the tests conducted on Western Canada's rugged terrain.

If you want to learn more about power-line inspection robots, Hydro-Québec will host the 1st International Conference on Applied Robotics for the Power Industry (CARPI 2010) in October.

Samuel Bouchard is a co-founder of Robotiq in Quebec City.

Singapore Researchers Unveil Social Robot Olivia

Olivia, a social robot from Singapore, loves to talk -- and gesticulate with its sleek 6-degrees-of-freedom white plastic arms.

Designed as a research platform for human-robot interaction, Olivia is a creation of the A*STAR Social Robotics Laboratory, or ASORO, part of Singapore's Agency for Science, Technology, and Research.

The researchers plan to use the robot, unveiled at RoboCup 2010 in June, as a receptionist to greet visitors and provide information, and later, as a personal assistant and companion in people's homes.

Olivia's head has a pair of stereoscopic camera eyes, and it can rotate and tilt up or down. The robot appears to float over a ring of light, a design that reminds me of EVE, the little flying bot from WALL-E.

A third camera, on the robot's forehead, can zoom in on the speaker's face. Olivia uses software to detect lip movements and determine whether a person is speaking to it. It uses eight microphones to locate the source of human speech and turn its face in the direction of the speaker.
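
ASORO hasn't detailed Olivia's array processing here, but a common way to localize a talker with multiple microphones is to estimate the time difference of arrival between mic pairs via cross-correlation. Below is a minimal two-microphone sketch in Python on synthetic data; it is illustrative only and not Olivia's implementation.

```python
import numpy as np

fs = 16000                                   # sample rate, Hz
true_delay_samples = 12                      # sound reaches mic 2 later than mic 1

rng = np.random.default_rng(0)
speech = rng.standard_normal(4096)           # stand-in for a speech snippet
mic1 = speech
mic2 = np.roll(speech, true_delay_samples)   # delayed copy simulates the second mic

# Cross-correlate the two signals and find the lag with the strongest match.
corr = np.correlate(mic2, mic1, mode="full")
lags = np.arange(-len(mic1) + 1, len(mic1))
estimated_delay = lags[np.argmax(corr)]

print(f"estimated delay: {estimated_delay} samples "
      f"({estimated_delay / fs * 1e3:.2f} ms) -> the speaker is on mic 1's side")
```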

So far Olivia can respond to specific keywords and phrases, such as "switch off the lights" or "introduce yourself." But the ASORO researchers, like other robotics groups, want the robot to respond to natural speech. They also plan to program Olivia to display sadness, happiness, and other emotions to improve communication.

The robot, which is 1.6 meters tall and weighs in at 152 kilograms, is powered by an onboard Core i7 processor. The researchers plan to mount Olivia on a mobile base and upgrade it with new arms and three-fingered hands so it can grasp objects.

IEEE Spectrum's Harry Goldstein met Olivia at RoboCup in Singapore and prepared the video below:

More images:

 

Images: IEEE Spectrum and ASORO

Robots Preparing to Defeat Humans in Soccer

Can a team of soccer-playing robots beat the human World Cup champions by 2050?

That's the ultimate goal of RoboCup, an international tournament where teams of soccer robots compete in various categories, from small wheeled boxes to adult-size humanoids.

IEEE Spectrum's Harry Goldstein traveled to Singapore to attend RoboCup 2010 -- and check out how the man vs. machine future of soccer is playing out.

Special thanks to Aamir Ahmad and Prof. Pedro U. Lima from the Institute for Systems and Robotics, Instituto Superior Técnico, in Lisboa, Portugal; Prof. Mike Wong Hong Kee from Singapore Polytechnic; and the Advanced Robotics & Intelligent Control Centre at Singapore Polytechnic for additional footage.
