Automaton

Rooftop Robotics: Or How I Learned to Program a Microcontroller Watching the Sunset

When I received an e-mail early this month from the DorkBot mailing list advertising an "Intro to Robotics" class to take place on a rooftop near Prospect Park in Brooklyn, N.Y., I was intrigued, to say the least. The class description: "You'll learn and do hands on stuff, without too much theoretical crap. After this class you'll have all the basic skills and knowledge needed to start making your own robots." Rooftop robotics? Sure.

The teacher, Lee von Kraus, is a neural engineer pursuing a PhD in neurobiology at SUNY Downstate Medical Center. The class began with casual conversation, exchanging stories of the Tesla vs. Edison rivalry and elephant electrocutions. Then von Kraus, who is voraciously intelligent, with a swimmer's build and an affinity for the animal kingdom, described the fundamentals of electricity -- how voltage and current work in a circuit and the symbols for ground, switches, resistors, and so on. One student, who described himself as "more of a high-voltage guy," offered an analogy comparing the flow of electrical current to that of a river.

Now it was time for the hands-on part. The six students formed two groups and received breadboards for prototyping circuits. We learned how to use a multimeter, then connected power to a microcontroller, a BASIC Stamp module from Parallax. The microcontroller has a small, specialized BASIC interpreter (PBASIC) built into ROM. We connected it to a PC via a serial port and used Parallax's free software to program it to display "Hello World!" on the screen. To test the program, we connected the microcontroller to a switch circuit: each time we pressed a small button, the program printed "Hello World!" on the screen.
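The class exercise boils down to a sense-act loop: poll a switch, and print a greeting on each new press. Here is a minimal sketch of that logic in Python for illustration (the Stamp itself is programmed in PBASIC, and the list of button samples stands in for polling the real switch circuit):

```python
# Illustrative sketch of the class exercise's sense-act loop. The button
# samples simulate repeatedly polling the switch circuit; the real BASIC
# Stamp would read an input pin instead.

def run_hello_loop(button_samples):
    """Emit 'Hello World!' once per press (rising edge) of the button.

    button_samples: iterable of booleans, True while the button is held.
    Returns the list of messages the program would print.
    """
    messages = []
    previously_pressed = False
    for pressed in button_samples:
        if pressed and not previously_pressed:  # new press detected
            messages.append("Hello World!")
        previously_pressed = pressed
    return messages

# Two distinct presses (the second held for two samples) -> two greetings.
print(run_hello_loop([False, True, False, True, True, False]))
```

Detecting the rising edge, rather than the level, is what keeps a held-down button from spamming the message every loop iteration.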

It turns out, von Kraus said, that's just how robots work. "A robot," he explained, "is something that senses its environment and then acts upon it." By adding sensors and motors to our circuit and writing a more complex program, we'd be able to build all sorts of robotic contraptions. We talked about DIY projects for building mini robots out of toothbrush heads and mechanical pencils. The class wound down with a discussion of H-bridges and pulse-width modulation, which microcontrollers use to control motor direction and speed.
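The H-bridge idea is simple enough to sketch: two complementary inputs pick which way current flows through the motor (direction), and the PWM duty cycle sets the average voltage (speed). The pin names below are hypothetical, and real wiring depends on the driver chip, but the mapping from a signed speed command looks roughly like this:

```python
# Sketch of how a microcontroller might drive a motor through an H-bridge
# with pulse-width modulation (PWM). Pin names are hypothetical.

def motor_command(speed):
    """Map a signed speed in [-1.0, 1.0] to H-bridge inputs and a duty cycle.

    The two H-bridge inputs select the direction of current through the
    motor; the duty cycle (fraction of each PWM period the bridge is
    enabled) sets the average voltage and hence the speed.
    """
    if not -1.0 <= speed <= 1.0:
        raise ValueError("speed must be in [-1.0, 1.0]")
    forward = speed >= 0
    return {
        "in_a": forward,           # drive one side high for forward
        "in_b": not forward,       # drive the other side high for reverse
        "duty_cycle": abs(speed),  # 0.0 = stopped, 1.0 = full speed
    }

print(motor_command(0.5))   # half speed forward
print(motor_command(-1.0))  # full speed reverse
```

Keeping `in_a` and `in_b` complementary also avoids the forbidden state where both sides of the bridge conduct and short the supply.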

The two-hour, US $30 class was a casual introduction to electronics and robotics, a perfect baby step into a world that both excites and intimidates the average human being. You probably won't walk out able to hack your Roomba into a beer-fetching servant bot, but at least you'll know where to start.

And you can always ask von Kraus, if he's not too busy working on his dissertation or teaching his other offerings, which include science for art majors, cellular biology, wilderness survival, and night kayaking tours of NYC. He hopes his research in neurobiology will contribute to brain-augmentation efforts. "We are all just circuits," he said in the end.

Iran's Humanoid Robot Surena Walks, Stands on One Leg

Photo: Surena 2 humanoid robot

Researchers at the University of Tehran, in Iran, unveiled last month an adult-sized humanoid robot called Surena 2.

The initial press reports by Iran's official news media didn't include many details, saying only that the robot could "walk like a human being but at a slower pace" and perform some other tasks, and questions surfaced about its real capabilities.

Now IEEE Spectrum has obtained detailed information about Surena and exclusive images and videos showing that the robot can indeed walk -- and even stand on one leg.

Aghil Yousefi-Koma, a professor of engineering at the University of Tehran who leads the Surena project, tells me that the goal is to explore "both theoretical and experimental aspects of bipedal locomotion."

The humanoid relies on gyroscopes and accelerometers to keep its balance as it moves its legs -- still very slowly -- but Yousefi-Koma says his team is developing a "feedback control system that provides dynamic balance, yielding a much more human-like motion."

Surena 2, which weighs in at 45 kilograms and stands 1.45 meters tall, has a total of 22 degrees of freedom (DOF): each leg has 6 DOF, each arm 4 DOF, and the head 2 DOF. An operator uses a remote control to make the robot walk and move its arms and head. The robot can also bow. Watch:

Surena doesn't have the agile arms of Hubo, the powerful legs of Petman, or the charisma of Asimo -- but hey, this is only the robot's second generation, built by a team of 20 engineers and students in less than two years. A first, much simpler version of the robot, with only 8 DOF, was demonstrated in late 2008.

Yousefi-Koma, who is director of both the Center for Advanced Vehicles (CAV) and the Advanced Dynamic and Control Systems Laboratory (ADCSL) at the University of Tehran, says another goal of the project is "to demonstrate to students and to the public the excitement of a career in engineering."

Next the researchers plan to develop speech and vision capabilities and improve the robot's mobility and dexterity. They also plan to give Surena "a higher level of machine intelligence," he says, "suitable for various industrial, medical, and household applications."

The robot was unveiled by Iranian President Mahmoud Ahmadinejad on July 3rd in Tehran as part of the country's celebration of "Industry and Mine Day." The robot is a joint project between the Center for Advanced Vehicles and the R&D Society of Iranian Industries and Mines.

Below, a demo the researchers gave on Iranian TV and more photos.


Photos and videos: Center for Advanced Vehicles/University of Tehran

READ ALSO:

First Humanoid Robot Kiss
Mon, August 24, 2009

Blog Post: Theatrical robots Thomas and Janet perform the first robot kiss

Strange Robot Mimics Human Body
Tue, August 11, 2009

Blog Post: Researchers design a humanoid robot by copying the human body's inner structures

Humanoid Robot Imitates You
Tue, April 27, 2010

Blog Post: Wear a sensor suit and this Korean robot will reproduce all your movements

Run, Petman Robot, Run
Thu, April 22, 2010

Blog Post: Boston Dynamics' Petman bipedal bot is a faster walker. Can it jog?

Robots Podcast: Distributed Flight Array

The cycle of the Autonomous Distributed Flight Array: Assembly, Flight, and Chaotic End

The latest episode of the Robots podcast interviews Raymond Oung, a researcher at the Institute of Dynamic Systems and Control (IDSC) at the ETH Zurich, about his amazing Distributed Flight Array (DFA).

Use the player below to listen or download the mp3, and continue reading to learn more.

Oung is one of my colleagues at IDSC, which is headed by Prof. Raffaello D'Andrea and well known for its unconstrained and creative take on autonomous systems and its successful collaborations with innovators in other fields. The Distributed Flight Array developed by Oung is no exception:

You can think of the Distributed Flight Array as a combination of vertical take-off and landing vehicles and modular reconfigurable robots. It is a flying platform consisting of multiple autonomous single-propeller vehicles. These single-propeller vehicles -- or modules -- are able to generate enough thrust to lift themselves into the air, but they are completely unstable in flight, kind of like a helicopter without a tail rotor.

A DFA cycle begins with several modules scattered around the ground. The modules drive around and begin to self-assemble, randomly connecting with their peers. Once a sufficient number of modules have assembled, they are able to coordinate and take flight. The DFA then hovers for a predetermined amount of time before breaking apart and falling back to the ground, only to repeat the cycle in a new configuration.
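The cycle just described can be abstracted as a small state machine. The sketch below is an illustrative toy, not the IDSC code; the phase names and thresholds are assumptions chosen to mirror the description above:

```python
# Toy state machine for the DFA cycle: modules drive and self-assemble,
# fly for a fixed hover time, then break apart and repeat. Phase names
# are illustrative, not taken from the actual IDSC implementation.

def next_phase(phase, connected, min_modules, hover_time_left):
    """Return the next phase of the DFA cycle.

    phase:           current phase name
    connected:       number of modules currently docked together
    min_modules:     modules required before the array can take flight
    hover_time_left: remaining hover time (arbitrary units)
    """
    if phase == "driving":
        # Keep driving and docking until enough modules have assembled.
        return "assembled" if connected >= min_modules else "driving"
    if phase == "assembled":
        return "flying"  # coordinated take-off
    if phase == "flying":
        # Hover until the predetermined time runs out, then break apart.
        return "flying" if hover_time_left > 0 else "breakup"
    if phase == "breakup":
        return "driving"  # modules scatter and the cycle repeats
    raise ValueError(f"unknown phase: {phase}")

print(next_phase("driving", connected=4, min_modules=4, hover_time_left=0))
```

Each module running the same transition logic is what makes the behavior distributed: no single module needs a global picture beyond its local dock count and timer.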

First prototype of the IDSC's DFA airborne

According to Oung, the DFA is a rich test bed to conduct research on algorithms and architectures in distributed estimation and control (have a look at his publications at CDC2009 and ICRA2010), because it abstracts many of the real-world issues for the next generation of distributed multi-agent systems. Apart from its value for research, Oung also points out the DFA's value as a pedagogical tool and its artistic value:

Robots, as you know, are inherently eye-catching to the public - control theory not so much. Concepts in control theory are usually difficult for the general public to appreciate [...], so projects like the Distributed Flight Array provide the opportunity to illustrate control theory research to the general public in a tangible way.
[...]
One of the motivations behind art is the expression of the imagination. By my definition art is made with the intention of stimulating thought and emotion. I'm not sure if the flight array really stimulates emotion, but it certainly stimulates thought. For what it is, it does communicate ideas to a broad audience, such as expressing the underlying math and control algorithms behind it, so in that sense I do believe it is a piece of art.

For more details, including current work, potential applications, and future plans for the DFA, have a look at the Robots website or tune in to the podcast!

Some videos:

More images:

Four DFA modules ready to take flight

The DFA connection mechanism developed at the IDSC

The DFA's custom-built omni-wheels

LineScout Robot Climbs on Live Power Lines to Inspect Them


Hydro-Québec's LineScout rolling on a high-voltage line. Image: Hydro-Québec

Canada's Hydro-Québec Research Institute started the LineScout project after the 1998 North American ice storm, which caused massive power outages and left millions of people without electricity for several days. The idea was to build a small mobile robot that could roll along high-voltage transmission lines and de-ice them.

The first LineScout was a little rover that hung upside down like a sloth and was equipped with claws to break the ice. The new generation, featured in a recent IEEE Spectrum article, is larger and carries cameras and a thermal-infrared imager. The remote-controlled robot has been used dozens of times for inspection and maintenance on high-voltage lines (2,000 amperes, 735 kilovolts). It uses its cameras to inspect line conditions and spot irregularities, and a smart navigation system to pinpoint locations in need of attention.

Japanese robotics company HiBot and the Electric Power Research Institute in the United States are also developing power line inspection robots.

Canada's LineScout has arms that let it maneuver over obstacles such as splices, hardware components, and aircraft warning markers. Unlike conventional transmission-line servicing crews, the robot can work on the lines while they are energized, reducing safety risks and downtime and saving the utility precious resources.

The robot was recently tested on the BC Hydro transmission lines -- a project that last June received the prestigious Edison Award from the Edison Electric Institute. The video below describes the technology and the tests conducted on Western Canada's rugged terrain.

If you want to learn more about power-line inspection robots, Hydro-Québec will host the 1st International Conference on Applied Robotics for the Power Industry (CARPI 2010) in October.

Samuel Bouchard is a co-founder of Robotiq in Quebec City.

Singapore Researchers Unveil Social Robot Olivia

Olivia, a social robot from Singapore, loves to talk -- and gesticulate with its sleek 6-degrees-of-freedom white plastic arms.

Designed as a research platform for human-robot interaction, Olivia is a creation of the A*STAR Social Robotics Laboratory, or ASORO, part of Singapore's Agency for Science, Technology, and Research.

The researchers plan to use the robot, unveiled at RoboCup 2010 in June, as a receptionist to greet visitors and provide information, and later, as a personal assistant and companion in people's homes.

Olivia's head, which can rotate and tilt up or down, has a pair of stereoscopic camera eyes. It appears to float over a ring of light, a design that reminds me of EVE, the little flying bot from WALL-E.

A third camera, on the robot's forehead, can zoom in on a speaker's face. Olivia uses software to detect lip movements and determine whether a person is speaking to it. It uses eight microphones to locate the source of human speech and turn its face in the direction of the speaker.
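ASORO hasn't published the details of Olivia's sound-localization algorithm, but one simple way a ring of eight microphones can point a head at a speaker is to weight each microphone's known bearing by its signal energy and take the circular mean. The sketch below is only a plausible stand-in for whatever Olivia actually does:

```python
import math

# Illustrative sound-source localization for an 8-mic ring: weight each
# microphone's bearing by its signal energy and take the circular mean.
# This is a plausible stand-in, not ASORO's published algorithm.

def speaker_bearing(energies):
    """Estimate speaker direction (degrees) from 8 mic energies on a ring.

    Mic i sits at bearing i * 45 degrees. Returns the energy-weighted
    circular mean of the bearings, in [0, 360).
    """
    assert len(energies) == 8, "expected one energy per microphone"
    x = sum(e * math.cos(math.radians(i * 45)) for i, e in enumerate(energies))
    y = sum(e * math.sin(math.radians(i * 45)) for i, e in enumerate(energies))
    return math.degrees(math.atan2(y, x)) % 360

# Loudest signal at mic 2 (bearing 90 degrees), with symmetric spill onto
# its neighbors: the head should turn to roughly 90 degrees.
print(round(speaker_bearing([0.1, 0.3, 1.0, 0.3, 0.1, 0.0, 0.0, 0.0])))
```

Averaging on the unit circle (rather than averaging raw angles) is what handles the wraparound at 0/360 degrees correctly.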

So far Olivia can respond to specific keywords and phrases, such as "switch off the lights" or "introduce yourself." But the ASORO researchers, like other robotics groups, want the robot to respond to natural speech. They also plan to program Olivia to display sadness, happiness, and other behaviors to improve communication.

The robot, which is 1.6 meters tall and weighs in at 152 kilograms, is powered by an onboard Core i7 processor. The researchers plan to mount Olivia on a mobile base and upgrade it with new arms with three-finger hands so it can grasp objects.

IEEE Spectrum's Harry Goldstein met Olivia at RoboCup in Singapore and prepared the video below:

More images:

 

Images: IEEE Spectrum and ASORO

Robots Preparing to Defeat Humans in Soccer

Can a team of soccer-playing robots beat the human World Cup champions by 2050?

That's the ultimate goal of RoboCup, an international tournament where teams of soccer robots compete in various categories, from small wheeled boxes to adult-size humanoids.

IEEE Spectrum's Harry Goldstein traveled to Singapore to attend RoboCup 2010 -- and check out how the man vs. machine future of soccer is playing out.

Special thanks to Aamir Ahmad and Prof. Pedro U. Lima from the Institute for Systems and Robotics, Instituto Superior Técnico, in Lisboa, Portugal; Prof. Mike Wong Hong Kee from Singapore Polytechnic; and the Advanced Robotics & Intelligent Control Centre at Singapore Polytechnic for additional footage.

Cornell's Ranger Robot Breaks New Walking Record

Photo: Cornell's Ranger walking robot

Ranger, a four-legged robot that walks like a biped, set an unofficial record at Cornell last month by walking 23 kilometers (14.3 miles), untethered, in 10 hours and 40 minutes. Walking at an average pace of 2.1 km/h (1.3 miles per hour), Ranger circled the indoor track at Cornell's Barton Hall 108.5 times, taking 65,185 steps before it had to stop and recharge. Ranger walks much like a human, using gravity and momentum to help swing its legs forward, though it looks like a boom box on stilts. Its swinging gait resembles a human on crutches: the robot has no knees, its two exterior legs are connected at the top, and its two interior legs are connected at the bottom.
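The reported numbers hang together, as a quick back-of-the-envelope check shows: dividing the distance by the elapsed time recovers the quoted pace, and dividing by the step count gives Ranger's average stride.

```python
# Sanity check of the reported Ranger record: distance, time, pace,
# and average step length, all derived from the figures in the article.

distance_km = 23.0
hours = 10 + 40 / 60   # 10 hours 40 minutes
steps = 65185

pace_kmh = distance_km / hours
step_length_m = distance_km * 1000 / steps

print(f"{pace_kmh:.2f} km/h")       # ~2.16 km/h, close to the reported 2.1
print(f"{step_length_m:.2f} m/step")
```

That works out to a stride of roughly a third of a meter per step, consistent with the robot's slow, knee-less shuffle.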

Engineering students at Cornell's Biorobotics and Locomotion Laboratory stayed up all night on Tuesday, July 6th, 2010, while their professor, Andy Ruina, cheered them on over Skype. Jason Cortell, a research engineer specializing in electronics and the lab's manager, steered Ranger with a remote control. He walked alongside the robot for most of the nearly 11 hours but was carted around when he felt tired, controlling the robot all the same. "When he had to take a bathroom break, he made a run for it while Ranger was on a straightaway," says Ruina.

This is a competitive milestone for the lab, which has been (unofficially) competing for the record with Boston Dynamics' BigDog over the past two years. The original record was set by Ruina's lab in April 2008, when Ranger walked 9 km (5.6 miles) around Barton Hall. That record was subsequently broken by Boston Dynamics' BigDog, which walked 20.6 km (12.8 miles). "Ranger competing with BigDog is like Bambi meets Godzilla," says Ruina. "While DARPA funds Boston Dynamics with tens of millions of dollars a year, we've probably received a total of $1 million over many years." Most of the lab's funding comes from the NSF's Robust Intelligence program.

"I don't anticipate bipedal robots being necessarily important in the world of engineering," says Ruina. "What fascinates me is the scientific aspect of bipedal robots. It's an indirect way to understand human beings. By studying the legs, joints, and length ratios we appreciate the beauty of nature's design."

But the overall aim of the project isn't to reverse-engineer a human being -- it is a study of energy efficiency, and the goal is to figure out how to build a robot that moves as efficiently as a human. "Human beings are robust and energy stingy," explains Ruina. "We are trying to get a robot to be as reliable as a human being. If Ranger walks 14 miles, he uses 3 cents of electricity, which is more than twice as much as a human of equal weight would have used for the same distance." The data could inform biological research on rehabilitation, prosthetics, and improving athletic performance.

What's up next for Ranger? The lab aims to have Ranger walk 30 to 80 km (about 20 to 50 miles), while continuing to cut back on energy consumption. Ruina also wants to see Ranger on an outdoor track with solar cells on top of its head. "Ranger would stop when it gets tired," he explains. "Then wait for the sun to charge him back up so he could go, go, go!"

Here's a video report from IDG and more photos:

 Images: Biorobotics and Locomotion Laboratory/Cornell University

Engineers Turn Robot Arm into Formula 1 Simulator

Photo: The CyberMotion Simulator, built around a Kuka Robocoaster arm

Sometimes it's hard to believe that researchers are contributing to science instead of just having a blast.

As Paolo Robuffo Giordano and colleagues at the Max Planck Institute for Biological Cybernetics, in Tübingen, Germany, would have it, scientific research means riding the business end of a giant industrial robot arm while playing video games. But hey -- they produced some serious research with it, which was presented at ICRA 2010.

The CyberMotion Simulator is basically a full motion simulator adapted to a racing car game. Players (or subjects, as the researchers prefer to call them) sit in a cabin on a robot arm some 2 meters off the ground and drive a Ferrari F2007 around a projected track using a force-feedback steering wheel and pedals. The aim is to make the experience as realistic as possible without having to buy a real F2007, and to test the simulator in an environment that demands sudden, massive accelerations.

The robot arm is a Robocoaster, a modified six-axis Kuka KR 500 that can lift up to 500 kg. It's usually found in amusement parks and normally does not allow users any control. Robuffo Giordano and his collaborators want to use it to study how we perceive motion; their paper, though, deals with the mechanics of the simulator.

"A motion simulation system is a fundamental tool to understand how humans experience the sensation of motion," he says. "By running suitable experiments, one can gain better insights into the cognitive processes of the human brain."

Most motion simulators use six-axis actuated hexapods, also known as Stewart platforms, to recreate motion; the CAE 5000, a typical flight simulator, moves on six cylindrical jacks. But lab director Heinrich Bulthoff wanted to use a robotic manipulator arm to study multisensory motion perception, and F1 racing seemed like a challenging way to test the notion.

"The main challenges were related to the adaptation or extension of existing motion-control algorithms," says Robuffo Giordano, who worked on the F1 arm as a control and robotics engineer. "Our system offers a much larger motion envelope [than Stewart platforms], allowing subjects to be freely displaced in six degrees of freedom in space and even be placed upside-down."

The video above shows what it's like to turn the Kuka KR 500 into an F1 car. You can hear the loud whine of the arm as the driver takes the curves in a simulation of the famed Monza track. The 3D visuals of the course were created by Robuffo Giordano's colleagues Joachim Tesch and Martin Breidt. Not bad for a DIY arcade game.

The team was able to reduce the delay in the robot's reaction to just 40 milliseconds, and they are satisfied with the results. They believe the CyberMotion Simulator can be adapted to recreate the experience of being in a plane, helicopter, ship, and other vehicles. Another possibility is telepresence applications.

For now, the researchers are working on expanding the motion envelope of the simulator by adding movement to the cabin itself, as well as other improvements.

Meanwhile, I'd love to plug in Gran Turismo 5 and go for a spin, robot-style.

Updated on 9 August 2010 to clarify goal of the project and correct delay in robot's reaction.

Read also:

Autonomous Car Learns To Powerslide Into Parking Spot
Mon, May 10, 2010

Blog Post: Stanford's Junior uses two control schemes to stop right on target

CyberWalk: Giant Omni-Directional Treadmill To Explore Virtual Worlds
Wed, April 28, 2010

Blog Post: Built by European researchers, it's the world's largest VR platform

Tactile Gaming Vest Punches and Slices
Fri, March 26, 2010

Blog Post: Now you, too, can feel the pew-pew-pew

Robots With Knives: A Study of Soft-Tissue Injury in Robotics
Thu, May 06, 2010

Blog Post: What would happen if a knife-wielding robot struck a person?

Telenoid R1: Hiroshi Ishiguro's Newest and Strangest Android


Image: Osaka University and ATR Intelligent Robotics and Communication Laboratories

Japanese roboticist Hiroshi Ishiguro has just unveiled a new teleoperated android: a strange robotic creature called the Telenoid R1.

Ishiguro, a professor at Osaka University, is famous for creating humanlike androids designed to "transmit the presence" of people to a distant place. His previous remote controlled androids include a robot replica of himself that he named Geminoid HI-1 and a smiling female android called the Geminoid F.

But the new Telenoid R1 robot is quite different. Whereas the previous androids had lifelike appearances, with every detail trying to reproduce the features of a real person, the Telenoid has a minimalist design. The size of a small child, it has a soft torso with a bald head, a doll-like face, and stumps in place of limbs. It looks like an overgrown fetus.

Ishiguro and his collaborators say the idea was to create a teleoperated robot that could appear male or female, old or young, and that could be easily transported. The new design pushes the envelope of human-robot interaction, and Ishiguro is certainly not afraid of exploring the depths of the uncanny valley.

The researchers, who demonstrated the robot today at a press conference in Osaka, hope it will be used as a new communication device, with applications in remote work, remote education, and elderly care. The goal of the project, a collaboration between Osaka University and Japan's Advanced Telecommunications Research Institute International, known as ATR, is to investigate the essential elements for representing and transmitting humanlike presence.

Here's how the system works: An operator sits at a computer with a webcam and special teleoperation software developed by ATR. The computer captures voice and tracks the operator's face and head movements. The voice and some movements are transmitted to the Telenoid. The operator can also push buttons to activate other behaviors.

Even its creators admit the Telenoid R1, which will be demonstrated at this year's Ars Electronica festival in Linz, Austria, is a bit, uh, eerie:

The unique appearance may be eerie when we first see it. However, once we communicate with others by using the Telenoid, we can adapt to it. If a friend speaks from the Telenoid, we can imagine the friend's face on the Telenoid's face. If we embrace it, we have the feeling that we embrace the friend.

The Telenoid R1 uses dc motors as actuators, and there are only nine in its body. Ishiguro's previous androids use pneumatic actuators; the Geminoid HI-1 has 50 actuators, and the Geminoid F has 12. The Telenoid's smaller and simpler body helped reduce development and production costs. A research version of the robot will cost about US $35,000, and a commercial version about $8,000. They will be available later this year, distributed by Eager Co. of Japan.

UPDATED: Added price and availability.

Videos and more images:

All images courtesy of Osaka University and ATR Intelligent Robotics and Communication Laboratories

Read also:

Robotics: Sat, April 03, 2010
Geminoid F: New Smiling Female Android
Robotics: Fri, April 02, 2010
Who's Afraid of the Uncanny Valley?
Robotics: April 2010
Hiroshi Ishiguro: The Man Who Made a Copy of Himself
Robotics: Tue, November 17, 2009
The Amazing Androids of Hiroshi Ishiguro

Little Soccer Robots Dribble, Kick, Score

The Darmstadt Dribblers have some of the most impressive humanoid robots in the RoboCup tournament. For the second year in a row, the team from the Technische Universität Darmstadt, Germany, took the title in the kid-size humanoid league (for robots 30-60 centimeters tall). How did they do it?

IEEE Spectrum's Harry Goldstein went to RoboCup 2010 in Singapore to find out. Watch the video below to see these amazing little robots playing and an interview with Dribblers team member Dorian Scholz. Then visit their YouTube channel for more videos, including this year's kid-size final.


Automaton

IEEE Spectrum's award-winning robotics blog, featuring news, articles, and videos on robots, humanoids, automation, artificial intelligence, and more.
Contact us:  e.guizzo@ieee.org

Editor
Erico Guizzo
New York, N.Y.
Senior Writer
Evan Ackerman
Berkeley, Calif.
 
Contributor
Jason Falconer
Canada
Contributor
Angelica Lim
Tokyo, Japan
 
