Automaton

Who's Afraid of the Uncanny Valley?

Are you creeped out by realistic, humanlike robots?

To pay homage to the vast assortment of anthropomorphic automatons, lifelike mannequins, and CGI humans out there, IEEE Spectrum prepared a, dare we say, beautiful slideshow. Watch our Ode To the Uncanny Valley below and then tell us about your reaction.

Many people say they find such imagery eerie, creepy, scary, freaky, frightening. One explanation for such a visceral reaction is that our sense of familiarity with robots increases as they become more humanlike -- but only up to a point. If lifelike appearance is approached but not attained, our reaction shifts from empathy to revulsion.

This descent into creepiness is known as the uncanny valley. It was proposed by Japanese roboticist Masahiro Mori in a 1970 paper, and it has since been the subject of several studies and gained notoriety in popular culture, with mentions in countless YouTube videos and even on a popular TV show. The uncanny valley is said to have implications for video game design and is blamed for the failure of at least one major Hollywood animated movie.

Yet it remains a controversial notion in some robotics circles. Is it a valid scientific conjecture or just pseudoscience?

There is something appealing about a simple concept that can explain something profound about our humanity and our creations. It's even more appealing when you see it as a graph (the one below is based on the Wikipedia version with some images added for fun; apparently the graph concocted by Mori was more elaborate, according to a note here).

You can see on both curves (solid line for still robots and dashed line for robots that move) how familiarity (vertical axis) increases as human likeness (horizontal axis) increases, until it plunges and then increases again -- hence the valley in uncanny valley.
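To make that shape concrete, here's a toy Python sketch that generates a qualitatively similar pair of curves. The numbers are invented purely for illustration -- Mori's graph was conceptual, not fitted to data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy model of the uncanny valley: familiarity rises with human likeness,
# plunges near (but not at) full likeness, then recovers. All parameters
# here are made up for illustration.
likeness = np.linspace(0, 1, 500)

def familiarity(x, valley_center=0.85, valley_width=0.07):
    rising = x**2                                          # gradual increase
    dip = 1.6 * np.exp(-((x - valley_center) / valley_width)**2)
    return rising - dip                                    # subtract a Gaussian "valley"

plt.plot(likeness, familiarity(likeness), label="still")
plt.plot(likeness, 1.3 * familiarity(likeness), "--", label="moving")  # movement amplifies the effect
plt.axhline(0, color="gray", lw=0.5)
plt.xlabel("human likeness")
plt.ylabel("familiarity")
plt.legend()
plt.title("Toy uncanny valley curve")
plt.show()
```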

As a kind of benchmark, the uncanny valley could in principle help us understand why some robots are more likable than others. In that way roboticists would be able to create better designs and leap over the creepiness chasm. But what if there's no chasm? What if you ask a lot of people in controlled experiments how they feel about a wide variety of robots and when you plot the data it doesn't add up to the uncanny valley graph? What if you can't even collect meaningful data because terms like "familiarity" and "human likeness" are too vague?

When Mori put forward the notion of the uncanny valley, he based it on assumptions and ideas he had on the topic. It was an interesting, prescient conjecture, given that there weren't that many humanoid robots around, let alone a CGI Tom Hanks. But as scientific hypotheses go, it was more speculation than a conclusion drawn from hard empirical data. This is what he wrote at the end of his 1970 paper:

Why do we humans have such a feeling of strangeness? Is this necessary? I have not yet considered it deeply, but it may be important to our self-preservation.

We must complete the map of the uncanny valley to know what is human or to establish the design methodology for creating familiar devices through robotics research.

In a recent Popular Mechanics article, writer Erik Sofge discusses some of the problems with the theory:

Despite its fame, or because of it, the uncanny valley is one of the most misunderstood and untested theories in robotics. While researching this month's cover story ("Can Robots Be Trusted?" on stands now) about the challenges facing those who design social robots, we expected to spend weeks sifting through an exhaustive supply of data related to the uncanny valley—data that anchors the pervasive, but only loosely quantified sense of dread associated with robots. Instead, we found a theory in disarray. The uncanny valley is both surprisingly complex and, as a shorthand for anything related to robots, nearly useless.

Sofge talked to some top roboticists about their views of the uncanny. Cynthia Breazeal, director of the Personal Robots Group at MIT, told him that the uncanny valley is "not a fact, it's a conjecture," and that there's "no detailed scientific evidence" to support it. David Hanson, founder of Hanson Robotics and creator of realistic robotic heads, said: "In my experience, people get used to the robots very quickly. ... As in, within minutes."

Sofge also talked to Karl MacDorman, director of the Android Science Center at Indiana University, in Indianapolis, who has long been investigating the uncanny valley. MacDorman's own view is that there's something to the idea, but that it clearly doesn't capture all the complexity and nuance of human-robot interaction. In fact, MacDorman believes there might be more than one uncanny valley, because many different factors -- odd combinations like a face with realistic skin and cartoonish eyes, for example -- can be disconcerting.

Hiroshi Ishiguro, a Japanese roboticist who's created some of the most striking androids, and a collaborator, Christoph Bartneck, now a professor at Eindhoven University of Technology, conducted a study a few years ago using Ishiguro's robotic copy, concluding that the uncanny valley theory is "too simplistic." Here's part of their conclusions:

The results of this study cannot confirm Mori’s hypothesis of the Uncanny Valley. The robots’ movements and their level of anthropomorphism may be complex phenomena that cannot be reduced to two factors. Movement contains social meanings that may have direct influence on the likeability of a robot. The robot’s level of anthropomorphism does not only depend on its appearance but also on its behavior. A mechanical-looking robot with appropriate social behavior can be anthropomorphized for different reasons than a highly human-like android. Again, Mori’s hypothesis appears to be too simplistic.

Simple models are in general desirable, as long as they have a high explanatory power. This does not appear to be the case for Mori’s hypothesis. Instead, its popularity may be based on the explanatory escape route it offers. The Uncanny Valley can be used in attributing the users’ negative impressions to the users themselves instead of to the shortcomings of the agent or robot. If, for example, a highly realistic screen-based agent received negative ratings, then the developers could claim that their agent fell into the Uncanny Valley. That is, instead of attributing the users’ negative impressions to the agent’s possibly inappropriate social behavior, these impressions are attributed to the users. Creating highly realistic robots and agents is a very difficult task, and the negative user impressions may actually mark the frontiers of engineering. We should use them as valuable feedback to further improve the robots.

It's a good thing that researchers are trying to get to the bottom of the uncanny valley (no pun intended). Advancing the theory by finding evidence to support it, or disprove it, would be important to robotics because human-robot interaction and social robots are becoming ever more important. If we want to have robots around us, we need to find out how to make them more likable, engaging, and easier to interact with, and naturally their looks play a key role in that regard. Moreover, human-looking robots could be valuable tools in psychology and neuroscience, helping researchers study human behavior and even disorders like autism.

Ishiguro recently told me that the possibility that his creations might result in revulsion won’t stop him from "trying to build the robots of the future as I imagine them." I for one admire his conviction.

What do you think? Should we continue building robots in our image?


PR2 Does The Impossible, Folds Towels

We cover a lot of robots around here, and to be fair, not every one of them makes you think “yeah, I could totally use one of those around the house!” Well, I could totally use a PR2 around my house now that it can autonomously fold stuff. Not sure how I’d get it up the stairs, but anyway…

So far, UC Berkeley’s Pieter Abbeel has only taught his PR2 to fold towels and other rectangles, but the important thing is that the PR2 is entirely unfamiliar with the things that it has to fold. Just toss a pile of towels of various sizes on the table, and PR2 will pick up each item, inspect it, and figure out how it should be folded. The folding routine even ends with an adorable little pat ‘n smooth. You have to remember, too, that even though PR2 is quite an impressive robot, the capabilities are mostly in the software:

“The reliability and robustness of our algorithm enables for the first time a robot with general purpose manipulators to reliably and fully-autonomously fold previously unseen towels, demonstrating success on all 50 out of 50 single-towel trials as well as on a pile of 5 towels.”

50/50 on towel folding? Yeah, that would definitely be an upgrade in my house.
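Curious what the fold planning amounts to? Here's a minimal sketch of just the geometry -- halving a rectangle along its longer side a few times. This is my own illustration of the idea, not the Berkeley group's algorithm, which does the much harder work of finding the cloth's corners visually:

```python
import numpy as np

def plan_folds(width, height, n_folds=3):
    """Plan fold lines for a width x height towel (meters), halving the
    longer side each time. Returns a list of (axis, position) fold lines."""
    size = np.array([width, height], dtype=float)
    origin = np.zeros(2)
    folds = []
    for _ in range(n_folds):
        axis = int(np.argmax(size))           # fold across the longer side
        line = origin[axis] + size[axis] / 2  # fold line at the midpoint
        folds.append(("x" if axis == 0 else "y", line))
        origin[axis] = line                   # folded half now starts at the line
        size[axis] /= 2
    return folds

for axis, pos in plan_folds(0.6, 0.4):        # a 60 x 40 cm towel
    print(f"fold across {axis} = {pos:.3f} m")
```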

[ UC Berkeley Robot Learning Lab ] via [ Willow Garage @ Twitter ]

Hubo II Humanoid Robot Is Lighter and Faster, Makes His Creator Proud

Why is Professor Jun Ho Oh smiling? Because, as he told me recently, he has a "new son."

It's Hubo II, the humanoid above, which Oh and his colleagues developed at the Korea Advanced Institute of Science and Technology's Humanoid Robot Research Center, aka Hubo Lab.

Professor Oh built the original Hubo in 2004. It was one of the first advanced full-body humanoid robots developed outside Japan. But he's probably better known for another humanoid: Albert Hubo, which had a Hubo body and an Albert Einstein animatronic head developed by Hanson Robotics.

Now Professor Oh is ready to introduce the new addition to his family. Hubo II is lighter and faster than its older brother, weighing 45 kilograms, or a third less, and capable of walking twice as fast.

Watch the demo:


A major improvement over early humanoid designs is Hubo II's gait. Most humanoid robots walk with their knees bent, which is dynamically more stable but less natural than human walking. Hubo II, Professor Oh says, walks with straight legs, which consumes less energy and allows faster walking. Note Hubo II's left knee extended when the leg swings forward (middle image below):

The robot has more than 40 motors and dozens of sensors, cameras, and controllers. It carries a lithium polymer battery with a 480 watt-hour capacity, which keeps the robot running up to 2 hours with movement and up to 7 hours without movement.
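Those run times imply an average power draw that's easy to back out -- a quick back-of-the-envelope calculation (mine, not an official spec):

```python
# Average power draw implied by the quoted battery figures.
battery_wh = 480            # lithium polymer pack capacity, in watt-hours

hours_moving = 2            # quoted run time with movement
hours_idle = 7              # quoted run time without movement

print(f"average draw while moving: {battery_wh / hours_moving:.0f} W")   # ~240 W
print(f"average draw while idle:   {battery_wh / hours_idle:.0f} W")     # ~69 W
```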

Hubo II uses two identical PC104 embedded computers with solid-state disks, connected via a serial interface. The left one can control the entire robot, taking care of functions like walking and overall stabilization; the right one is normally empty, and you can load speech, vision, and navigation algorithms onto it to see how they perform on Hubo.

Another improvement is the hand design. The hand weighs only 380 grams and has five motors and a torque sensor. It can handle any object that fits in its palm, and its wrist can rotate in a humanlike way.

Talking with Professor Oh made me appreciate how difficult humanoid projects are. The challenge, he told me, is not just cramming all the hardware into a tight space, but also making sure everything works together. Cables can unexpectedly restrict joint movements; power and control boards interfere with each other; modules end up too heavy and create instability.

So many things can go wrong. The problem is that, whereas in a wheeled robot a failure usually means the robot stops in its tracks, in a humanoid robot a failure often means a face-plant.

Professor Oh wants to make a robust design to avoid such catastrophic failures. He believes Hubo II is a big step in that direction. So needless to say, he's very proud of his new son. Congrats, Professor Oh!

PS: Wondering what happened to Albert Hubo? It makes a cameo appearance in the video above -- watch until the end...

DIY Turing Machine

Everyone who's read about Alan Turing and his ideas on computation has probably created a mental picture of the theoretical computing device he conceived and that we now call a Turing machine. Mike Davey, a DIY guy from Wisconsin, wasn't satisfied with just imagining the thing. So he built one.

Though there are other Turing machine implementations out there -- including a Lego-based design -- Davey wanted to build one that looked like Turing's original concept.

The result -- holy algorithms. The thing is a beauty. A read-write head? Check. A moving tape for the bits? Check.

From now on whenever I think of a Turing machine I'll picture Davey's.

Watch the video below to see the machine in action, then go to his web site aturingmachine.com to see descriptions of the hardware and the programs he's run. From the site:

My goal in building this project was to create a machine that embodied the classic look and feel of the machine presented in Turing’s paper. I wanted to build a machine that would be immediately recognizable as a Turing machine to someone familiar with Turing's work.

Although this Turing machine is controlled by a Parallax Propeller microcontroller, its operation while running is based only on a set of state transformations loaded from an SD card and what is written to and read from the tape. While it may seem as if the tape is merely the input and output of the machine, it is not! Nor is the tape just the memory of the machine. In a way the tape is the computer. As the symbols on the tape are manipulated by simple rules, the computing happens. The output is really more of an artifact of the machine using the tape as the computer.

The heart of the Turing machine is the read-write head. The read-write head transports the tape and positions cells of the tape appropriately. It can read a cell, determining what, if any, symbol is written there. The machine works on, and knows about, only one cell at a time. The tape in my machine is a 1000’ roll of white 35mm film leader. The characters, ones and zeros, are written by the machine with a black dry erase marker.
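If you'd like to play with the idea in software before building one out of film leader, the rule-table-plus-tape model Davey describes fits in a few lines of Python. This is a generic illustration of the concept, not Davey's code; his state tables load from an SD card:

```python
# A minimal Turing machine simulator: the machine knows only the current
# state and the symbol under the head, and a rule table says what to write,
# which way to move, and which state comes next. This example machine is a
# binary incrementer: it adds 1 to the number written on the tape.
from collections import defaultdict

# rules: (state, read) -> (write, move, next_state); move is -1 (left) or +1 (right)
rules = {
    ("seek_end", "0"): ("0", +1, "seek_end"),
    ("seek_end", "1"): ("1", +1, "seek_end"),
    ("seek_end", " "): (" ", -1, "carry"),    # found the right end, back up
    ("carry", "1"):    ("0", -1, "carry"),    # 1 + 1 = 0, carry the 1
    ("carry", "0"):    ("1", -1, "HALT"),     # absorb the carry
    ("carry", " "):    ("1", -1, "HALT"),     # carry past the leftmost digit
}

def run(tape_str, state="seek_end"):
    tape = defaultdict(lambda: " ", enumerate(tape_str))   # blank-padded tape
    head = 0
    while state != "HALT":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip()

print(run("1011"))  # 11 in binary, plus 1 -> 1100
```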

 

Tactile Gaming Vest Punches and Slices

“Ouch! That hurt!”

So exclaimed one user of the University of Pennsylvania’s Tactile Gaming Vest (TGV) during yesterday’s demos at the IEEE Haptics Symposium, in Waltham, Mass.

As conference participants steered their character in a shoot-em-up video game based on Half-Life 2, the vest variously smacked them and vibrated as they got shot in the game. Sometimes it smarted, depending on how tight the vest was on the user, or if the “shots” hit right on the collar bone. For me it was more like a series of surprise punches.

Four solenoid actuators in the chest and shoulders in front, plus two solenoids in the back, give you the feeling of a gunshot, says Saurabh Palan, a graduate student who works on the project. In addition, vibrating eccentric-mass motors clustered against the shoulder blades make you feel a slashing effect as you get stabbed from behind. Currently there is no feedback from your own weapons as you fire, just from weapons aimed at you. 

The solenoids and shoulder vibrators are controlled by custom electronics and linked to the game, so if your character gets shot from a certain direction, the appropriate solenoid “fires.” That makes it better than, say, laser tag, which makes your whole vest vibrate but doesn’t give you a hint as to where the shot came from. In that sense, then, the gaming vest is closer to a paintball excursion, but it doesn’t hurt as much (and there’s no messy paint to clean up afterwards).
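In software terms, that routing is a small lookup: take the bearing of the incoming shot and fire the actuator closest to it. Here's a toy sketch with an invented layout -- the names, angles, and interface are mine, not the TGV's actual electronics:

```python
# Hypothetical actuator layout: bearing of each solenoid relative to the
# player's facing direction, in degrees (0 = straight ahead).
SOLENOIDS = {
    "left_shoulder": -45, "left_chest": -15,
    "right_chest": 15, "right_shoulder": 45,
    "back_left": 160, "back_right": -160,
}

def fire_for_shot(shot_bearing_deg):
    """Pick the solenoid whose bearing is closest to the incoming shot."""
    def angular_dist(a, b):
        return abs((a - b + 180) % 360 - 180)   # wraparound-safe difference
    name = min(SOLENOIDS, key=lambda s: angular_dist(SOLENOIDS[s], shot_bearing_deg))
    print(f"shot from {shot_bearing_deg} deg -> firing {name}")

fire_for_shot(25)     # -> right_chest
fire_for_shot(-170)   # -> back_right
```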

Other tactile vests adorn the research sphere, but this one uses solenoids for their fast response, Palan explains. A similar vest, using pneumatics, has a slower response time, he says. Plus, it requires a huge air tank that sits next to you on the table, which makes a lot of noise and can be annoying, he adds.

Palan says this kind of device could be helpful for training military teams, in addition to making video gaming more immersive. Or it could make movies like Avatar even more enjoyable to watch, because you get physical feedback in addition to the 3D image experience.

It could also be fun for straight-up action thrillers like Die Hard. If this kind of vest could be linked to the movie while you watch it, Palan says, the experience would be that much more exciting. “You could feel like you’re in the role,” he says. “So every time Bruce Willis gets shot, you feel it.”

Yippee ki yay.

Photos: (Top) Conference participant plays the game. (Bottom) Vest with solenoid actuators (courtesy of Saurabh Palan).

Vibrating Chair Fixes Bad Posture

Do you sit in a chair all day at work? Does your back hurt from hunching over? Yale researchers say they can fix your poor posture with a vibrating ergonomic chair that prods you into sitting up straight.

The chair uses seven force-sensitive resistors (FSRs) placed on the seat and back to sense your body position, plus a distance sensor at the top of the chair back to detect how far you’re leaning away from it.

If the sensor system notices that you are starting to lean forward and hunch over, it triggers one or several of six feedback “tactors,” which are tiny motors like the ones in your cell phone, to start vibrating. The sensors and vibrators together cost just $70.
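The control logic is straightforward in principle: calibrate a baseline reading for good posture, then buzz the tactors associated with whichever sensors drift too far from it. Here's a toy sketch of that idea -- the sensor names, values, and threshold are all invented for illustration, not the Yale system's parameters:

```python
import numpy as np

# Toy posture monitor in the spirit of the Yale chair: calibrate an "upright"
# reading from the force sensors, then flag the sensors that drift too far
# from it so the nearby tactors can vibrate.
SENSORS = ["seat_left", "seat_right", "thigh_left", "thigh_right",
           "lumbar", "shoulder", "seat_rear"]

def calibrate(samples):
    """samples: list of 7-element FSR readings taken while sitting upright."""
    return np.mean(samples, axis=0)

def check_posture(reading, baseline, threshold=0.25):
    """Return the sensors that deviate more than 25% from the baseline."""
    deviation = np.abs(np.asarray(reading) - baseline) / (baseline + 1e-9)
    return [name for name, d in zip(SENSORS, deviation) if d > threshold]

upright = calibrate([[200, 205, 150, 148, 80, 60, 120],
                     [198, 210, 152, 151, 82, 58, 118]])
# Hunching forward: weight shifts onto the thighs and off the backrest.
slouched = [180, 185, 240, 235, 20, 5, 60]
print("vibrate near:", check_posture(slouched, upright))
```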

I tried out the chair yesterday at the IEEE Haptics Symposium in Waltham, Mass. All it took was a short calibration of my sitting upright, slouching, and relaxing poses, then we were off.

Sure enough, as I started to lean forward and hunch, zing! The vibrators under my thighs went off. As I straightened up again, they stopped. I noticed that I don’t sit all the way back in the chair, which is a no-no and prompts more vibrating, so I had to adjust a few times to make sure I was taking full advantage of the chair’s ergonomic potential.

Other positions that are no-go’s: leaning to one side with your arm on the chair’s arm, sitting with an ankle crossed over your knee, and crossing your legs entirely.

The chair’s vibrations are intuitive enough to not need a lot of training, and most of the people who tried it out yesterday caught on right away. Continuous vibrations from the back of the chair mean sit up straight, while vibrations under the thighs mean put those feet down. Pulsed vibration in the chair back means “lean back more,” facilitating guided posture changes.

(Note: this is not a massage chair! The goal is to stop vibrations by moving to the correct position -- not to keep the chair going. Oh well.)

But all this sitting up straight can be just as tiring as slouching after a while. The chair is ready for that, too. Ying (Jean) Zheng, a graduate student who leads this research in John Morrell’s human-machine interface lab at Yale, explained that the system can be programmed to let you sit back in a (proper) relaxed posture every 20 minutes or so.

While most subjects have used it for testing only, Zheng says she uses it for hours at work, and that it helps her posture (she did appear to be sitting up straight even without a vibrating chair of her own during the long day of demos).

According to the group’s paper, subjects studied do tend to sit up straight even after the vibrating stimulators are turned off. So it could work well for training and rehabilitation, too.

Photos: (Top) Force sensors placed under the “sit” bones and thighs, and behind the lumbar region of the spine and shoulder blades. The seventh force sensor is placed in the center rear of the seat to make sure you’re sitting all the way back in the chair.

(Bottom) Conference participant trying out the chair.

Computer-Controlled Swarm of Bacteria Builds Tiny Pyramid

Researchers at the NanoRobotics Laboratory of the École Polytechnique de Montréal, in Canada, are putting swarms of bacteria to work, using them to perform micro-manipulations and even propel microrobots.

Led by Professor Sylvain Martel, the researchers want to use flagellated bacteria to carry drugs into tumors, act as sensing agents for detecting pathogens, and operate micro-factories that could perform pharmacological and genetic tests.

They also want to use the bacteria as micro-workers for building things. Things like a tiny step pyramid.

The video below shows some 5,000 bacteria moving like a swarm of little fish, working together to transport tiny epoxy bricks and assemble a pyramidal structure -- all in 15 minutes.

The video was presented at the IEEE International Conference on Intelligent Robots and Systems last year, along with a wonderfully titled paper, "A Robotic Micro-Assembly Process Inspired By the Construction of the Ancient Pyramids and Relying on Several Thousands of Flagellated Bacteria Acting as Workers."

The bacteria, of a type known as magnetotactic, contain structures called magnetosomes, which function as a compass. In the presence of a magnetic field, the magnetosomes induce a torque on the bacteria, making them swim according to the direction of the field. Place a magnetic field pointing right and the bacteria will move right. Switch the field to point left and the bacteria will follow suit.

Each bacterium has flagella capable of generating about 4 piconewtons. It's a very small amount of thrust force, but put thousands of bacteria to work together and they can move mountains. Well, micro mountains.
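How much can a few thousand bacteria push? A quick calculation from the numbers above (the micro-brick mass is my own assumption for scale, not a figure from the paper):

```python
# Aggregate thrust of the swarm, from the figures quoted in the post.
per_bacterium_n = 4e-12     # ~4 piconewtons of thrust per bacterium
swarm_size = 5000

total_n = per_bacterium_n * swarm_size
print(f"total thrust: {total_n * 1e9:.0f} nN")            # -> 20 nN

# For scale: on a hypothetical 20-microgram epoxy brick, that force gives
# a = F/m = 1 m/s^2, about a tenth of gravity.
brick_kg = 20e-9
print(f"acceleration of brick: {total_n / brick_kg:.1f} m/s^2")
```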

Several research groups are trying to develop MEMS devices that emulate the propulsion mechanisms of bacteria. Martel asks, Why mimic the bacteria when you can use the little things themselves?

Martel and his colleagues developed an electronic microcircuit that contains both the bacteria and an array of conductors that produce magnetic fields. By carefully controlling which conductors are active, the microcircuit can make the bacteria move in specific directions. A computer and an optical microscope provide a feedback loop, tracking the motion of the bacteria and adjusting the conductors to achieve the desired behavior.
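Closing the loop is conceptually simple even if the hardware is not: measure where the swarm is, point the field at the goal, repeat. Here's a toy simulation of that feedback loop -- the dynamics, gains, and units are invented for illustration, not the group's actual controller:

```python
import numpy as np

# Toy version of the closed-loop control described above: track the swarm
# under the microscope, point the magnetic field at the target, repeat.
def steer_swarm(position, target, steps=50, speed=2.0, noise=0.5):
    position = np.asarray(position, dtype=float)
    target = np.asarray(target, dtype=float)
    rng = np.random.default_rng(0)
    for step in range(steps):
        error = target - position
        if np.linalg.norm(error) < 1.0:             # within 1 um of the goal
            print(f"reached target in {step} steps")
            return position
        field_dir = error / np.linalg.norm(error)   # point the field at the goal
        # Bacteria swim along the field at ~speed um per step, plus jitter.
        position += speed * field_dir + rng.normal(0, noise, size=2)
    return position

steer_swarm(position=[0.0, 0.0], target=[40.0, 30.0])   # positions in um
```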

In addition to pyramid building, Martel's bacteria have done some other neat tricks, such as traveling through the bloodstream of rats, steered by an MRI system, a la "Fantastic Voyage."

One of their current projects is developing an autonomous bacterial microrobot. They plan to use standard CMOS processes to create a chip containing both electronics and bacteria. The bacteria would reside in micro-reservoirs designed to generate thrust. For control, small conductors inside each reservoir would produce magnetic fields.

Several of these microrobots could then be used to perform tasks collectively, perhaps one day swimming inside our bodies, delivering drugs, detecting disease, and fixing an organ here, a blood vessel there. Who knew bacteria could be good robots?

UPDATE: If you're wondering which ancient pyramid inspired the researchers -- and is shown in the bottom left corner of the video -- it's the Djoser step pyramid, in Egypt, which the researchers note was "an important, initial milestone in the history of man-made structures."

Images and video: NanoRobotics Laboratory, École Polytechnique de Montréal

Adam Savage On Armed Robots

Kevin Kelly from Wired recently interviewed Jamie Hyneman and Adam Savage from Mythbusters for the Commonwealth Club of California. As part of the interview, Adam and Jamie were asked (somewhat jokingly) whether they’re afraid that machines will take over in the future, particularly with regards to the present development of armed robots.

Now, if you’ve been reading this blog for a while, you’re probably aware that this is one of my favorite subjects to harp on, and as much as I respect admire worship ::cough:: like-in-a-strictly-professional-manner Adam Savage, I won’t let you down.

So, is it a bad idea to give a machine a gun? Of course it is. It’s a terrible idea. But guns were a terrible idea to begin with (from a lofty ethical viewpoint, anyway). The terrible part about guns is that guns can kill people, and not giving guns to robots isn’t going to change that. Really, the question should be, is giving guns to machines a better or worse idea than not giving guns to machines? This is where I think armed robots have a use.

I guess fundamentally, the part that I don’t understand is where Adam says that he knows how machines work and he wouldn’t trust a machine with a gun. I wonder, though, if we have a better idea of how most machines work than how some humans work… Like, it’s a fundamental right for humans to have guns, and there are plenty of humans out there who are far less predictable or reliable than a robot. It’s certainly true that robots are more prone to things like mechanical failures, but we already entrust our lives to robots on a daily basis (often without realizing it). Giving a robot a gun is just an especially obvious way of making it dangerous.

I hate to keep coming back to this analogy, but it’s like driving a modern car: between things like anti-lock brakes and cruise control and (now) parking and lane assist features, your car (if you have a fancy one) has the ability to control your brakes, your accelerator, and your steering. If you have power windows and door locks, it has control over those things, too. It’s only designed to be autonomous in very specific situations, but what we’re talking about here is mechanical (or software) failure. And generally, that just doesn’t happen, because cars have been designed and tested with safety and reliability in mind. I don’t see why it should be any different with armed semi-autonomous (or even autonomous) robots.

Whether or not it’s ethical to arm robots is (I’d like to think) a separate issue. The short answer? No, it’s not. As has been pointed out, arming a robot makes it easier to resort to violence since the consequences are much less severe. Some people might even argue that that reason alone should keep robots out of combat, but I doubt that those people are going into combat themselves. My guess (and this is really just a guess since I’m in no way qualified to make any other sort of comment) is that if some kind of armed conflict is inevitable and there is substantial risk of injury or death, most people who’d be directly involved in that conflict would rather send a robot in their place if possible. And, that’s really what it’s all about: robots doing the dangerous things so that humans don’t have to.

[ FORA.tv ] via [ Gizmodo ]

When Will We Become Cyborgs?

I remember when, a decade ago, Kevin Warwick, a professor at the University of Reading, in the U.K., implanted a radio chip in his own arm. The feat caused quite a stir. The implant allowed him to operate doors, lights, and computers without touching anything. In a second version of the project, he could even control an electric wheelchair and produce artificial sensations in his brain using the implanted chip. Warwick had become, in his own words, a cyborg.

The idea of a cyborg -- a human-machine hybrid -- is common in science fiction and although the term dates back to the 1960s it still generates a lot of curiosity. I often hear people asking, When will we become cyborgs? When will humans and machines merge? Although some researchers might have specific time frames in mind, I think a better answer is: It's already happening.

When we look back at the history of technology, we tend to see distinct periods -- before the PC and after the PC, before the Internet and after the Internet, and so forth -- but in reality most technological advances unfold slowly and gradually. That's particularly true with the technologies that are allowing us to modify and enhance our bodies.

Radio chips like Warwick's are just one of the technologies people have had implanted in their bodies. As Rodney Brooks wrote in a recent IEEE Spectrum article:

Our merger with machines is already happening. We replace hips and other parts of our bodies with titanium and steel parts. More than 50 000 people have tiny computers surgically implanted in their heads with direct neural connections to their cochleas to enable them to hear. In the testing stage, there are retina microchips to restore vision and motor implants to give quadriplegics the ability to control computers with thought. Robotic prosthetic legs, arms, and hands are becoming more sophisticated. I don't think I'll live long enough to get a wireless Internet brain implant, but my kids or their kids might.

And then there are other things still further out, such as drugs and genetic and neural therapies to enhance our senses and strength. While we become more robotic, our robots will become more biological, with parts made of artificial and yet organic materials. In the future, we might share some parts with our robots.

Indeed! In the past few years there's been tremendous progress in the development of advanced prosthetics. Two examples are Dean Kamen's DEKA Research bionic arm and the artificial hands and fingers developed by U.K. company Touch Bionics. These devices are already transforming the lives of people who've tried them.

 

Watch the video above to see Amanda Kitts, who lost her left arm in a car accident, demonstrating advanced hand control of the DEKA arm in a study at the Neural Engineering Center for Artificial Limbs, part of the Rehabilitation Institute of Chicago. Amazing!

Or consider the case of Dawn O'Leary, a woman from Maryland who had both arms amputated after an accident. She was fitted with a prosthetic hand by Touch Bionics called i-Limb. The device uses sensors on her skin to pick up nerve signals and operate the bionic digits, enabling her to carry out complex tasks such as grasping the handle of a cup. From a report in the local newspaper:

Holding something is what O’Leary was excited to try. She said she was able to hold a mug and pick up a tissue. She said she wants to learn how to use the computer and holding a rod and reel.

One thing she really wants to do is hold a crayon.

"I want to be able to color with my grandkids," she said.

(Watch a video of O'Leary trying the device here.)

These are just two examples of how technologies are evolving on our path to cyborg life. Along the way, we'll have to address many safety, privacy, and, most important, ethical issues. Nevertheless, the advantages of becoming bionic people may prove too enticing to pass up. I can imagine a time when we'll all become part of a ubiquitous flesh-and-silicon world where our bodies and devices are constantly communicating. Or as Warwick put it:

Will we evolve into a new cyborg community? I believe humans will become cyborgs and no longer be stand-alone entities. What we think is possible will change in response to what kinds of abilities the implants afford us. Looking at the world and understanding it in many dimensions, not just three, will put a completely different context on how we -- whatever "we" are -- think.

What do you think?

March Madness, Robot Style

 

A rookie, all-girl team ran their robot to victory in a ball-kicking tournament held last weekend at Manhattan’s Javits Center. The team, from the Mary Louis Academy (TMLA), a Catholic, all-girls high school in Queens, will go on to Atlanta’s Georgia Dome next month to compete with hundreds of other robotics victors for the grand championship. The annual cycle of tournaments is sponsored by FIRST (For Inspiration and Recognition of Science and Technology), a nonprofit established by inventor Dean Kamen in 1989.

The game’s like soccer, but played by boxy robots on wheels (see video above). The field sports two raised bumps, dividing it into three sections, in order to challenge kids’ design minds. Robots can either slide under open spaces between the bumps or crawl over the bumps, aiming to roll or kick balls into goals on the ends of the field.

Two alliances of three teams each compete in each round, meaning there are always six robots on the field at any one time. Each round lasts 2 minutes and 15 seconds, with the robots running purely autonomously, according to pre-programmed code, for the first 15 seconds. Sixty-four teams competed in the NYC regional games last weekend.

The TMLA team formed this year at the instigation of Kathy Rutherford, a 1979 TMLA graduate who has judged FIRST competitions for the past six years and is an electrical and biomedical engineer and a senior IEEE member. To get her alma mater more involved in engineering, she introduced the school’s principal and math department chair to the competition last year, and now she’s seeing the fruits of her (and the girls’) labor.

“It’s better than my 30-year reunion,” Rutherford says, watching the girls scurry around their robot, tightening bolts and attaching zip ties to loose wires. “It’s really about how you inspire the next generation, and here they are!”

In January, when this year’s game challenge was announced and kits containing critical parts were handed out to eager teams around the world, TMLA science teacher and FIRST team mentor Vinod Lala told Spectrum that the girls were mainly interested in getting their feet wet, learning about the competition, and just building something in their first year. They didn’t even have a catchy name for their robot, calling it simply “TMLA.” But they went much farther than they’d planned.

When they heard their team number called during the playoffs “draft” last Sunday, Lala says, some of the girls had to look down at their T-shirts to make sure they’d heard right. “We had to pick our jaws up off the floor,” he says. Team captain Vanessa Ronan was surprised, but ran to the field to accept the invitation to join the top-ranked alliance.

 

Angela Guiliani, another TMLA team member, agrees that her team wasn’t expecting to make it to qualifying rounds. “It’s not that we weren’t doing well,” she says, “but we hadn’t even gotten our [ball] kicker working yet.”

That turned out not to matter, because the alliance wanted them for a different purpose: defense. “Our robot was moving really well,” Guiliani says, even if it didn’t kick, and she thinks that’s why they were picked; the robot could deftly protect its partners and stop the opposition from scoring goals.

Together with their partners from Wissahickon High School, in Pennsylvania (Miss Daisy), and New York’s Stuyvesant High School (Stuypulse), TMLA advanced through two quarterfinal rounds and into semifinals. Their alliance’s strategy was working perfectly.

Then, after an aggressive shove from another robot, a wheel and chain flew off TMLA’s robot, putting it out of commission. Rather than waiting while they replaced it, holding up the game, the alliance chose a replacement team—incidentally the only other all-girls team participating in the regional competition, the Iron Maidens of the Bronx High School of Science.

But the deal was that if the alliance won, then all four teams, including TMLA, would be qualified for the championship games in Atlanta.

The battle was fierce. With the Iron Maidens playing defense, blocking goals their opponents tried to score, the alliance advanced to finals. They won their first round, bolstered by the Stuyvesant robot’s ability to hook onto the railings in the center of the field and hoist itself off the ground—an action earning them extra points.

A tense second round left the group with more points than the opposition, but due to penalties they lost the match. They pulled their robots off the field, made final adjustments, and set up for round three, the final countdown.

They won handily. The bleachers erupted with cheering from the fans. Kids screamed and jumped up and down and hugged and high-fived. The captain of Stuyvesant’s team pulled out his phone and started texting madly.

Now the winning teams are headed to the championships, and with them go the rookies—no longer quite so rookie—of TMLA. Only about 16 percent of rookie teams advance to the championship each year, according to FIRST spokespeople.

“It’s a bit of a Cinderella story,” says TMLA’s principal, Sister Kathleen McKinney. “We went into this as kind of an unknown…  There aren’t that many all-girls teams, and not that many Catholic schools" involved in the program, she says. Their team didn’t have any sponsors, receiving funding instead from a group of generous alums. The fact that they get to advance their first time in the game, McKinney says, is “really exciting.”

The girls are wearing their team shirts at school and getting congratulated in the halls. Lala’s classes even applaud when he walks into the room.

It’s not all wine and roses, of course. Lala is exhausted, and says he was looking forward to a break. But he’s very proud of the team, and ready to dedicate another month to the project. “I’ve sacrificed four or five months—what’s one more?” he says. He’s already fielding questions from girls who want to join the team next year.

Greg Ronan, the team’s parent mentor and a fiber optics engineer, says that FIRST is great for the girls because it gives them exposure to things they don’t normally get to do—like machining robot parts.

The other good thing, he adds, is that some kids may already like this kind of tinkering, but FIRST helps them discover that they’re not alone. “It’s important to know others are interested in the same things,” Ronan says.

As for whether they will actually get to go to the championships in Atlanta, having now qualified, McKinney is emphatic: they’re going, whatever it takes. “If it was our basketball team, or our speech team, and they qualified, we’d send them,” she says. “This is just as important.”

That’s music to the ears, and the reason FIRST exists: to make sure people realize that science and engineering really are just as important as sports.

Photo: TMLA, Wissahickon HS, and Stuyvesant HS students planning their alliance strategy.
