
How Robonaut 2 Will Help Astronauts In Space


Robonaut 2 is ready for lift-off!

Tomorrow is a huge day for robotkind. If all goes as planned, at 4:50 p.m. EST, the space shuttle Discovery will blast off from Cape Canaveral, Florida, carrying a crew of astronauts and NASA's Robonaut 2, which will become the first humanoid robot in space.

The shuttle's destination is the International Space Station (ISS), where Robonaut 2 will become a permanent resident and work alongside humans as a robotic helper. Astronauts will mount the robot on a fixed pedestal inside one of the ISS labs and use it to perform tasks like flipping switches and holding tools.

So no, Robonaut won't be fixing meals for the human crew. The main goal is to find out how dexterous manipulation robots behave in space -- and also to give crew members a second pair of hands. NASA hopes the experience will allow it to upgrade the robot in the future, so that it can support astronauts in more complex tasks, including repairs and scientific missions outside the ISS.

The robot can perform tasks autonomously or under remote control, or a mix of both, Nic Radford, the Robonaut deputy project manager, told us. Astronauts on the station will operate the robot using a laptop, he said, though it can also be "joysticked" and directly controlled from Earth, with a few seconds of delay.

Sending Robonaut to space is a great feat for NASA, but it raises the question: Is this another step in using robots to replace humans in space exploration? In my opinion, using teleoperated and semi-autonomous robots makes a lot of sense. Robotic explorers have already demonstrated that unmanned missions offer formidable rewards, with immensely smaller costs and risks than manned ones. Of course, NASA enjoys cheering for its robots, but it's quick to point out that robots are not a replacement for humans in space, but rather "companions that can carry out key supporting roles."

That might be the case for now, as robots still can't match human manipulation and other capabilities. But robots are catching up fast. One of Robonaut 2's key features is its dexterous, humanlike arms and hands. Each arm is about 80 cm [31 in] long and can hold 9 kg [20 lb] in Earth's gravity. Each hand has 12 degrees of freedom: 4 DOFs in the thumb, 3 DOFs each in the index and middle fingers, and 1 DOF in each of the remaining fingers. The fingers are articulated and driven by tendons, just like those of a human hand, and Robonaut is able to use the same tools that human astronauts use.

NASA developed the robot in a joint project with General Motors. The goal was to build a robotic assistant that could work side by side with humans, whether they are astronauts in space or workers at GM manufacturing plants on Earth.

At the IEEE Humanoids conference last December, I spoke with GM researcher Muhammad E. Abdallah, who explained how Robonaut's hands work:

Robonaut's hands work a bit differently from similar humanlike robot hands. Existing tendon-driven robotic fingers typically control their joints using tension controllers on each tendon; desired joint torques are translated into desired tendon tensions. The problem with this approach is the coupling between tendon and joint displacements, which causes disturbances in the movement of the fingers. NASA and GM engineers solved the problem by implementing a joint-based torque control method, which decouples the tendon effects and is faster and more reliable than traditional methods.

The ability to control torque is important for Robonaut, and other humanoid robots, for that matter, because its hands will interact with unexpected objects or items slightly out of position. Industrial robots, by contrast, interact with known objects in well-defined spaces. Robonaut's hands mimic human hands in their ability to adapt to variation -- a capability that NASA demonstrated by having different people shake hands with the robot.
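To make that a bit more concrete, here's a minimal sketch of the kind of tension-distribution math a joint-based torque controller has to solve -- not NASA/GM's actual implementation, just the core idea for a hypothetical 2-joint finger driven by 3 tendons, with made-up moment arms and torques:

```python
import numpy as np

# Tendons can only pull, so a tendon-driven finger uses n+1 tendons for
# n joints. The moment-arm matrix R (made-up numbers) maps tendon tensions
# f to joint torques: tau = R @ f, subject to f >= f_min (tendons stay taut).

R = np.array([[0.010, -0.008,  0.005],   # moment arms (m) seen by joint 1
              [0.006,  0.007, -0.009]])  # moment arms (m) seen by joint 2

def tensions_for_torques(tau, f_min=1.0):
    """Find tendon tensions realizing the desired joint torques tau,
    adding internal tension along R's nullspace to keep tendons taut."""
    f = np.linalg.pinv(R) @ tau         # least-squares particular solution
    null = np.linalg.svd(R)[2][-1]      # 1-D nullspace: pure internal tension
    if null.sum() < 0:                  # orient it positive (a sound tendon
        null = -null                    # routing has an all-positive nullspace)
    alpha = max(0.0, np.max((f_min - f) / null))
    return f + alpha * null             # just enough internal tension

tau_desired = np.array([0.05, -0.02])   # desired joint torques (N*m)
f = tensions_for_torques(tau_desired)
print("tendon tensions (N):", np.round(f, 2))
print("achieved torques (N*m):", np.round(R @ f, 4))  # matches tau_desired
```

The payoff of working in joint space is visible even here: the controller reasons about torques directly, and the tendon coupling is handled once, in the distribution step, instead of leaking into every finger motion.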

But the robot is more than just arms and hands, of course. Robonaut 2 weighs in at 150 kg [330 lb], and, if you're wondering, it has no legs -- it will remain stationary inside the ISS, although NASA researchers have been experimenting with robotic legs and wheels. Built primarily of aluminum with some steel parts, it carries more than 350 sensors and has a total of 42 degrees of freedom.

Behind its helmet visor are four visible-light cameras: two provide stereo vision for the robot and remote operators, and two work as auxiliary cameras. A fifth, infrared camera is housed in the mouth area for depth perception. Because the head is full of cameras, the robot's computer system -- 38 PowerPC processors -- is housed inside the torso. Or as NASA puts it, Robonaut 2 "thinks with its stomach -- literally." See this cool infographic that SPACE.com prepared:

Get a detailed look at Robonaut 2, NASA's first humanoid robot to fly to space, in this infographic.

In a second phase of the Robonaut project, at a date yet to be determined, NASA will make the unit mobile using a leg-type system, giving it the ability to move around inside the ISS. The third phase will feature a robot that will perform missions outside the space station. Robonaut is also part of Project M, which aims to put a humanoid robot on the moon in 1,000 days -- beating Japan's proposed goal of 2015.

For now, all eyes will be locked on the space shuttle at Cape Canaveral. It's been a long wait for this launch. And once Robonaut arrives at the ISS, it might take several months until astronauts unpack it and bring it to life. Still, I find the idea of a robot in space -- a staple of science fiction -- truly exciting. What do you think? Is this the beginning of a new era in robotic space exploration?

PS: Watch the "movie trailer" NASA prepared about the "new recruit."

Images: NASA; videos: IEEE Spectrum and NASA; infographic: SPACE.com

Robot Knifefish Can Swim In Any Direction

Bio-inspired robots are an awesome idea: they take designs that evolution has been refining for however many bajillions of years and put them into practice, giving robots new capabilities. Researchers at Northwestern University have created a robot called GhostBot, modeled on the black ghost knifefish, which uses a single horizontal fin to propel itself forward, backward, and even straight up:

Pretty cool, right? Here's how it works:

Observations revealed that the fish uses a single traveling wave along the fin during horizontal motion (forward or backward, depending on the direction of the wave), but uses two waves while moving vertically. One of these moves from head to tail, and the other moves from tail to head. The two waves collide and stop at the center of the fin.

The team then created a computer simulation that showed that when these “inward counterpropagating waves” are generated by the fin, horizontal thrust is canceled and the fluid motion generated by the two waves is funneled into a downward jet from the center of the fin, pushing the body up. The flow structure looks like a mushroom cloud with an inverted jet.
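For the mathematically inclined, here's a toy kinematic sketch of the two gaits. It's nothing like Northwestern's fluid simulation; it just shows that summing two counterpropagating traveling waves leaves a standing wave, which is why the horizontal thrust cancels. All parameters are arbitrary:

```python
import numpy as np

# One traveling wave produces net horizontal thrust. Summing a head-to-tail
# wave and a tail-to-head wave gives 2A*sin(kx)*cos(wt) -- a standing wave --
# so the thrust-producing traveling component cancels and fluid gets pumped
# into the downward jet described above.

A, k, w = 0.02, 4 * np.pi, 2 * np.pi   # amplitude (m), wavenumber, angular frequency
x = np.linspace(0.0, 1.0, 201)         # position along the fin, head (0) to tail (1)

def fin_wave(t, vertical=False):
    head_to_tail = A * np.sin(k * x - w * t)
    if not vertical:
        return head_to_tail            # one wave: swim forward (or backward)
    tail_to_head = A * np.sin(k * x + w * t)
    return head_to_tail + tail_to_head # two waves: rise straight up

# Sanity check: in the two-wave gait the fin shape only scales in place over
# time (standing wave) instead of shifting along the fin (traveling wave).
y0, y1 = fin_wave(0.0, vertical=True), fin_wave(0.1, vertical=True)
print(np.allclose(y1, np.cos(w * 0.1) * y0))  # True: no traveling component
```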

To get a sense of the potential of this kind of mobility system, check out a video of the actual fish:

That's a pretty impressive adaptation, if you ask me, and effectively puts conventional thrusters to shame.

[ Northwestern ] VIA [ Fast Company ]

Stochastic Robots Assemble and Disassemble Themselves

"Stochastic" is another way of saying random, and stochastic robots are robots that harness the powers of randomness to construct themselves. It's a fairly simple idea that can result in fairly complex objects: you've got some number of different modules, which can come together to form a robot. Instead of putting the modules together and building the robot directly, you instead just toss all of the modules and shake it really really hard. As the modules randomly run into each other, each is programed to latch on if it happens to bump into a module that it's supposed to be next to in the final design. And if you do this for long enough, eventually you'll end up with a fully assembled robot. Or that's the basic idea, anyway.

The following video demonstrates an interesting application of this concept. Along with lots of assembling modules come a few disassembling modules, whose job is to break up the assembled robots. This creates a system that's sort of a robotic chemical reaction, and by adjusting how long the disassembling bots take to recharge themselves, the overall number of functional robots can be controlled:
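Here's a toy simulation of that "robotic chemical reaction," with made-up collision probabilities and part counts (this is not the system in the video). The point it illustrates is the one above: lengthening the disassemblers' recharge time raises the steady-state number of assembled robots.

```python
import random

# Free modules randomly collide and latch into finished robots; disassembler
# modules break robots back apart and then must recharge. The recharge time
# is the knob that sets the equilibrium robot count.

PARTS_PER_ROBOT = 3

def simulate(steps=10_000, free_parts=60, disassemblers=4, recharge=20):
    robots = 0
    cooldowns = [0] * disassemblers        # steps until each disassembler is ready
    for _ in range(steps):
        cooldowns = [max(0, c - 1) for c in cooldowns]
        # Random collision: enough free parts happen to meet and latch together.
        if free_parts >= PARTS_PER_ROBOT and random.random() < 0.01 * free_parts:
            free_parts -= PARTS_PER_ROBOT
            robots += 1
        # Any charged disassembler that bumps into a robot breaks it back up.
        for i, c in enumerate(cooldowns):
            if c == 0 and robots > 0 and random.random() < 0.05 * robots:
                robots -= 1
                free_parts += PARTS_PER_ROBOT
                cooldowns[i] = recharge
    return robots

# Slow-recharging disassemblers leave more robots assembled at equilibrium:
for recharge in (5, 20, 80):
    print(f"recharge={recharge:3d} steps -> robots: {simulate(recharge=recharge)}")
```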

One application for these types of robots might be in the medical field, where building a robot inside someone's body could prove to be much more effective than building one outside. All you have to do is inject a bunch of little modules into the bloodstream, they'd randomly whirl about and run into each other and grab on where appropriate, and in a little bit you'd have your robot. You could even program the modules not to assemble themselves until they reached a certain place in the body, and while such precision might take a while (or a whole bunch of injections), the potential is there for extremely precise treatments and repairs.

[ Nils Napp ] VIA [ Autonomous Robots Blog ]

Snakebot Worms Its Way Into Your Heart, Literally

Next time you need heart surgery, this little snakebot is going to make himself right at home deep inside your chest via a small hole in your solar plexus. It's CardioARM, and don't panic, he's here to help. Developed by CMU's Howie Choset, CardioARM has 102 joints (plus a camera for a head) and can be directed to slither around your vital organs with the utmost precision, making it unnecessary to 'crack open your chest,' which is apparently what they normally do when your ticker needs an overhaul.

Last February, CardioARM was successfully tested on a human for the first time, performing a diagnostic heart mapping procedure, which sounds like it was probably a pile o' fun for everyone involved. Dr. Choset has bigger plans for his snakebots, though:

"He hopes to test the device in other surgeries, such as ablation—which involves burning away a small amount of heart muscle to correct an abnormal beat."

Burning? Burning, you say? What, with lasers? We're giving these flesh-burrowing robot snakes lasers now? What else?!

“We’re hoping to use a remote-controlled robot to go through small caves in Egypt,” [Choset] says, “and find remains of ancient Egyptian tombs.”

Snakebots. Lasers. Ancient Egyptian tombs. Wow, I smell a blockbuster...

[ CardioARM ] VIA [ Discover ]

BrainDriver: A Mind Controlled Car


Imagine you could drive your car using only your thoughts. German researchers have just made that possible -- and they have the video to prove it.

Following his recent interview on the Robots Podcast about autonomous vehicles, Raúl Rojas, an AI professor at the Freie Universität Berlin, and his team have demonstrated how a driver can use a brain interface to steer a vehicle. Here's what the researchers say about the project, which they call the BrainDriver:

After testing iPhone, iPad and an eye-tracking device as possible user interfaces to maneuver our research car, named "MadeInGermany," we now also use Brain Power. The "BrainDriver" application is of course a demonstration and not roadworthy yet, but in the long run human-machine interfaces like this could bear huge potential in combination with autonomous driving.

To record brain activity, the researchers use an Emotiv "neuroheadset," an electroencephalography (EEG) sensor that the San Francisco-based company designed for gaming. After a few rounds of "mental training," the driver learns to move virtual objects only by thinking. Each action corresponds to a different brain-activity pattern, and the BrainDriver software associates the patterns with specific commands -- turn left, turn right, accelerate, and so on. The researchers then feed these commands to the drive-by-wire system of the vehicle, a modified Volkswagen Passat Variant 3c. Now the driver's thoughts can control the engine, brakes, and steering.
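As a rough illustration of that last mapping step, here's a hypothetical sketch of how classified mental commands could be turned into drive-by-wire set points. The pattern names, thresholds, and values are all invented; the actual Emotiv SDK and the Passat's control interface are not shown.

```python
from dataclasses import dataclass

# A trained EEG classifier reports a mental-command label plus a confidence,
# and the software maps it to steering/throttle/brake set points.

@dataclass
class DriveCommand:
    steering: float  # -1.0 (full left) .. +1.0 (full right)
    throttle: float  #  0.0 .. 1.0
    brake: float     #  0.0 .. 1.0

# Trained mental patterns -> vehicle commands (values are made up).
COMMAND_MAP = {
    "think_left":  DriveCommand(steering=-0.5, throttle=0.2, brake=0.0),
    "think_right": DriveCommand(steering=+0.5, throttle=0.2, brake=0.0),
    "think_push":  DriveCommand(steering=0.0,  throttle=0.4, brake=0.0),
    "think_pull":  DriveCommand(steering=0.0,  throttle=0.0, brake=0.8),
}

def eeg_to_drive(pattern: str, confidence: float) -> DriveCommand:
    """Act only on confident classifications; otherwise brake gently."""
    if confidence < 0.7 or pattern not in COMMAND_MAP:
        return DriveCommand(steering=0.0, throttle=0.0, brake=0.2)
    return COMMAND_MAP[pattern]

# e.g., the classifier reports "think_left" with 85 percent confidence:
print(eeg_to_drive("think_left", 0.85))
```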

To road test their brain-controlled car, the Germans headed out to the former Tempelhof Airport in Berlin. The video below shows a driver thought-controlling the car, Yoda-style. "Don't try this at home," the narration says, only half-jokingly.

The researchers caution that the BrainDriver application is still a demonstration and is not ready for the road. But they say that future human-machine interfaces like this have huge potential to improve driving, especially in combination with autonomous vehicles. As an example, they mention an autonomous cab ride in which the passenger could decide, only by thinking, which route to take when more than one possibility exists.

This type of non-invasive brain interface could also allow disabled and paralyzed people to gain more mobility in the future, similarly to what is already happening in applications such as robotic exoskeletons and advanced prosthetics.

The Rojas group's research is part of the MadeInGermany project and follows the group's previous work on autonomous cars, including the AutoNOMOS Project.

Image and video: Raúl Rojas/Freie Universität Berlin


IBM's Watson Jeopardy Computer Shuts Down Humans in Final Game

Watson, the Jeopardy supercomputer created by IBM, defeated its human opponents, Ken Jennings [above] and Brad Rutter, in the final round of the challenge.

The Jeopardy-IBM challenge has ended, and silicon prevailed over gray matter. Watson, the Jeopardy-playing supercomputer designed by IBM, defeated its human competitors, finishing first by a wide margin.

As one of the contestants, Jeopardy super champion Ken Jennings, put it, "I for one welcome our new computer overlords."

In this third and final round, Watson's performance was much more impressive than in its lackluster debut, but not as enthralling as on Day 2, when it completely dominated the game, except for its now-infamous "What is Toronto?????" mistake. This was one of a few errors that proved embarrassing for IBM, but perhaps entertaining for viewers. Watch what happened:

The final game started with Jennings, Watson, and the third contestant, Brad Rutter, another Jeopardy champion, each getting a bunch of answers right. It was funny to see that one of the categories was "Also On Your Computer Keys." You'd expect that Watson, a computer, would know about computer keys. Not! When the clue "It's an abbreviation for Grand Prix auto racing" came up, Watson's choices (shown to viewers on the TV screen) were "gpc," "NASCAR," and "QED." Apparently Watson doesn't know about the "F1" key.

The computer seemed to struggle with other categories, like "Actors Who Direct," and Ken didn't waste the opportunity. A streak of correct answers put him in the lead of Day 3, his score reaching $8,600 against Watson's $4,800 and Brad's $2,400. But in the second half of the show, Watson, with its massively parallel hardware (2,880 POWER7 processing cores and 16 terabytes of RAM), fired up its DeepQA algorithms and pulled ahead, winning some high-value questions and making only two mistakes.

The Final Jeopardy category was "19th Century Novelists," and the clue: "William Wilkinson's 'An Account of the Principalities of Wallachia and Moldavia' inspired this author's most famous novel." The score at this point: Watson with $23,440, Ken with $18,200, and Brad with $5,600.

Both humans got it right, answering: "Who is Bram Stoker?" Brad's total score from this round plus the previous rounds was $21,600. Ken finished just a little ahead with $24,000. What about Watson? No flubs this time. The computer got the answer right and finished with a commanding total of $77,147.

It might be worth noting that if Ken had wagered everything he had, he'd have ended up with a final total of $41,200, and if Watson had wagered all it had but gotten the answer wrong, its final total would've been $35,734 -- in that scenario, Ken would have won! So why didn't Ken take the chance? Did he think the odds of Watson making such a big mistake were too slim? Indeed, Watson was very confident in its answer: It wagered the exact amount ($17,973) that, even if it had gotten the answer wrong, would have kept its score ahead of Ken's; in that case, Watson would have ended up with $41,201 and Ken with $41,200. A $1 difference! Now that would've been great television.
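Here's a quick script that reproduces that arithmetic -- my reconstruction of the numbers above, not IBM's actual wagering code:

```python
# The largest "safe" Final Jeopardy bet is the one that keeps your two-game
# total ahead of the best an opponent can reach, even if your own answer
# turns out to be wrong.

def safe_wager(own_prior, own_current, opp_prior, opp_current):
    opponent_max = opp_prior + 2 * opp_current  # opponent bets it all and is right
    worst_case_bank = own_prior + own_current   # what you keep minus the lost wager
    return worst_case_bank - opponent_max - 1   # stay exactly $1 ahead

# Day 1-2 totals plus Day 3 pre-final scores from the article:
print(safe_wager(own_prior=35_734, own_current=23_440,
                 opp_prior=4_800, opp_current=18_200))  # -> 17973, Watson's bet
```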

By winning the challenge, Watson not only helps IBM advance its master plan of making humanity obsolete but also earns the company the $1 million top prize. The money, though, will go to charity. Sorry, Watson, no CPU upgrade for you!

So what does Watson's victory mean for the future of AI? Will it help advance the field or is it just a publicity stunt that will benefit IBM's image but have limited practical applications, much like the Deep Blue vs. Kasparov matches?

IBM insists that Watson "changes the paradigm in which we work with computers" and will "transform many industries." As examples, they say Watson could help clinicians trying to diagnose a hard case, or lawyers sifting through mountains of evidence material, or governments and companies managing natural resources like water.

From a more fundamental point of view, Watson is part of a shift in AI that has put emphasis on systems that are not programmed to solve problems but rather programmed to learn how to solve problems. Peter Norvig, Google's director of research, recently wrote about the promises of this new direction in AI:

This approach of relying on examples — on massive amounts of data — rather than on cleverly composed rules, is a pervasive theme in modern A.I. work. It has been applied to closely related problems like speech recognition and to very different problems like robot navigation. IBM’s Watson system also relies on massive amounts of data, spread over hundreds of computers, as well as a sophisticated mechanism for combining evidence from multiple sources.

The current decade is a very exciting time for A.I. development because the economics of computer hardware has just recently made it possible to address many problems that would have been prohibitively expensive in the past. In addition, the development of wireless and cellular data networks means that these exciting new applications are no longer locked up in research labs, they are more likely to be available to everyone as services on the web.

So maybe in a few years we'll all be carrying a Watson app on our smartphones. It will destroy us in a "Jeopardy!" game, but it will help us mine data, connect dots, and solve some of our hardest problems. Do you agree? What do you think Watson means for the future of computers -- and humanity?

Updated February 22, 2011

Roombots Want To Be All The Furniture You'll Ever Need

Having lots of furniture is a terribly inefficient way to live, considering that most of your furniture is not actually in use most of the time. A much better way to do it would be to just have one single piece of furniture that manages to be, say, a chair, a table, and a bed whenever you need it to be. You know, like my couch. But if you need more specific functionality, you may soon be able to get it using Roombots, little modular robots that can configure themselves into all kinds of different objects.

One Roombot is a fairly simple (and therefore relatively cheap) modular robot with lots of connectors and a hinge in the middle. By itself, it's not good for much, but when it gets together with a bunch of its friends, they can autonomously combine to turn themselves into all sorts of different pieces of furniture. They'd be able to move around on command, and when you don't need them anymore, they'd stack themselves neatly against the wall.

In a hypothetical near future, I can see myself getting out of bed in the morning and taking a shower. My bed, meanwhile, turns itself into a breakfast table and chair. After I eat, the table turns into a desk, but I decide I'd rather work on the couch today, so it turns into a couch instead. Each piece of furniture would be infinitely flexible, too, so I could ask my desk to reposition itself higher or lower and it would obey, or I could even ask for a bit more space and some extra bots would come over and stick themselves on to augment the desktop.

This stuff sounds pretty far out, but it's not too terribly complicated. This is one of the big advantages of modular robotics: lots of simple robots with clever programming can get together and team up to do complex tasks, like building me a couch with an integrated desk that I'll never, ever have to move from.
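To make the reconfiguration idea concrete, here's a toy sketch in which the same handful of modules rearranges itself from one made-up furniture layout to another. It bears no resemblance to EPFL's actual Roombots controller; it just shows how little logic the basic idea requires.

```python
# The same four modules are reassigned to the cells of whichever (made-up)
# furniture layout is requested, then each module walks to its assigned
# cell one grid step at a time.

SHAPES = {
    "stool": {(0, 0), (0, 1), (1, 0), (1, 1)},   # compact 2x2 block
    "bench": {(0, 0), (1, 0), (2, 0), (3, 0)},   # long 4x1 row
}

def step_toward(pos, goal):
    """Move one grid cell toward the goal (simple Manhattan walk)."""
    (x, y), (gx, gy) = pos, goal
    if x != gx:
        return (x + (1 if gx > x else -1), y)
    if y != gy:
        return (x, y + (1 if gy > y else -1))
    return pos

def reconfigure(modules, shape):
    goals = sorted(SHAPES[shape])
    modules = sorted(modules)          # naive module-to-cell assignment
    while set(modules) != set(goals):
        modules = [step_toward(m, g) for m, g in zip(modules, goals)]
    return modules

modules = [(5, 3), (6, 1), (4, 4), (7, 2)]   # stacked against the wall
modules = reconfigure(modules, "stool")      # morning: something to sit on
modules = reconfigure(modules, "bench")      # later: something longer
print(modules)
```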

Images and video: Biorobotics Laboratory/EPFL

[ EPFL Biorob ]

Watson AI Crushes Humans in Second Round of Jeopardy

UPDATE: See who prevailed -- man or machine? -- in the third and final round!

IBM's Watson Jeopardy computer and its human opponents, Ken Jennings and Brad Rutter.

What a difference a day makes in the life of an artificial intelligence.

After an unimpressive debut on Monday, Watson, the IBM Jeopardy-playing computer, crushed its carbon lifeform opponents last night.

The game started with Monday's scores: Brad Rutter tied with Watson for first at $5,000, and Ken Jennings last with $2,000.

Ken was first to pick a category, but after host Alex Trebek read the clue, Watson buzzed faster. From then on, the computer just kept on going, buzzing and answering correctly seven times in a row, amassing $21,035. Ken and Brad stood there, hopeless. The IBMers in the audience grinned and clapped.

Which brings me to my first question about this whole thing: How does Watson ring the buzzer? Was something implemented to make the buzzing fairer to the human competitors, who are not electrically wired to the game hardware? Update: Here's how Watson receives the clue and rings the buzzer: It receives the clue as a text file at the moment the clue appears on the stage screen, so in principle at the same time the clue "hits Brad Rutter's and Ken Jennings' retinas." To buzz in, Watson receives a signal when a "buzzer enable" light turns on, and then it can activate a robotic finger to press the buzzer. Though some may disagree, IBM claims this is a fair design for competing with human contestants.

Anyway, after the seventh correct answer, the category was "The Art of the Steal" and an interesting clue came up. Watch what happened:

Clearly, Watson didn't quite understand the clue, which called for an art period, not an artist, as the answer. Curiously, the computer had the correct answer listed among its choices, but with a low probability. The humans had no problem understanding the question -- but they got the art period wrong.

Watson's confusion didn't last, though. Soon, the machine was again dominating the game, this time getting six straight correct answers and expanding its lead. Ken and Brad would occasionally get an answer right, but it was a Watson show.

The highlight of the night came at the end, during the Final Jeopardy round, when contestants can wager a certain amount (up to their total score) before seeing the final clue. The category was "U.S. Cities," and Watson had $36,681, Rutter $5,400, and Jennings $2,400. Watch:

Toronto????? Ooohhh. You can hear the IBMers gasping, terrified that this humiliating mistake is going to cost Watson everything. But nope. The smarty-pants (or smarty-racks) machine didn't go all in, its wagering-strategy algorithm deciding to bet just $947. (Here's how IBM explains the flub.)

So the night ended with Jennings with $4,800, Brad with $10,400, and Watson with $35,734. The LCD-faced machine, with its HAL 9000 voice, vastly outperformed the best brains at this game. A massacre.

Which brings me to my second question: What is Watson good for other than playing Jeopardy? Will it help advance AI for real or is this just an entertaining challenge, much like the Deep Blue vs. Kasparov matches?

IBM, wise about this PR opportunity, made sure to include a video segment in which its execs and scientists brag about Watson's potential "to transform many industries." Their comments, however, were vague -- things like "Life is about questions and answers," or "This changes the paradigm in which we work with computers" -- and the most concrete example they gave was using Watson to help clinicians diagnose a hard case involving lots of data.

The whole thing looks like a giant commercial for IBM, but hey, I'm not complaining; I was very entertained and feel like I want to learn more about how Watson works. And I'm looking forward to tonight's round. Do Watson's mistakes mean there's hope for Ken and Brad? What do you think will happen tonight?

Meka Robotics Announces Mobile Manipulator With Kinect and ROS


Meka Robotics is unveiling its M1 Mobile Manipulator this week, a humanoid system equipped with two dexterous arms, a head with a Microsoft Kinect sensor, and an omnidirectional wheeled base. The robot runs Meka's real-time control software with ROS extensions.

Meka, a San Francisco-based start-up founded by MIT roboticists, says the M1 is designed to work in human environments and combines "mobility, dexterity, and compliant force-control." It seems Meka is targeting research applications first, whereas other companies developing similar manipulators -- like pi4 robotics in Germany and Rodney Brooks' Heartland Robotics -- are focusing on industrial uses.

The M1-Standard comes with a preconfigured set of Meka's compliant manipulators, with six-axis force-torque sensors at the wrists and compliant grippers. The pan-tilt head comes with a Kinect 3D camera and a 5-megapixel Ethernet camera. And the base is a small-footprint omnidirectional platform with a prismatic lift.

The robot's computer runs the Meka M3 and ROS software stacks. Meka says they're "pushing on deeper ROS integration" and expect upcoming versions of their M3 control software to "integrate many of the great packages that the ROS community is generating."

It looks like an amazing robot, but it doesn't come cheap. The M1-Standard is priced at US $340,000.

The M1-Custom, as the name suggests, allows customers to choose different sensors, hands, and head to build the robot they want (the price tag will vary accordingly). Meka says the first M1-Custom, seen in the video below, shipped last month.

Meka has been working on all of the robot's subsystems and on ROS integration for some time. Inspiration for the M1, the company says, came in part from another robot, Georgia Tech's Cody, which uses Meka arms. With the M1, Meka has finally combined all the subsystems into a single, integrated robot.


Images and video: Meka Robotics

[ Meka Robotics ]

Automaton and BotJunkie Merge to Create Top Robotics News Blog

This is an amazing time for robotics. So much is happening. If only we could turbocharge this blog.

Guess what? We're doing exactly that. Starting this week, Automaton and BotJunkie, two of the world's leading robot blogs, are teaming up to create a monster robotics news machine. Well, we're still more human than machine, but we'll be churning out lots of great stuff -- daily stories, in-depth articles, product reviews, interviews, exclusive videos, and more.

The reason we're merging is simple. Evan Ackerman, the creator of BotJunkie, and I believe that together (along with other contributors) we can do a much better job covering all the cool news and happenings in robotics and AI. Did I mention there's a lot happening? (Read Evan's note on the merger.)

Indeed, BotJunkie and Automaton complement each other quite well. Evan has diligently chronicled the latest developments in robotics for more than three years, posting exclusive reviews and thousands of stories that are fun to read and have consistently gone viral on the Net. Automaton has focused on more in-depth technical posts and exclusive stories, and our robotics coverage has also produced scoops and earned journalistic recognition.

So if you're already an Automaton reader, expect even greater robotics content. If you're a BotJunkie reader migrating over here, welcome!

As Evan said, we really like what he's been doing at BotJunkie and we don't want that to change. Evan will be posting daily stories about the same types of things and with the same style and tone as he'd been doing over at BotJunkie. But he'll also be doing more -- traveling to conferences, reviewing more products, and occasionally becoming a robot himself. (He's so dedicated to his readership that, even though he'll be out on vacation for a couple of weeks, he prepared a bunch of posts to be published while he's away.)

As in any merger, there are bumps along the way, and several loyal BotJunkie readers have asked for a better RSS feed and web design for Automaton. We hear you. We agree with you. And we'll do all we can to make those things happen.

It may sound corny, but as Evan nicely put it, we love writing about robots, but it's you, our readers, who really make it worthwhile. We love the e-mails, the tips, the comments, the tweets. Keep those coming. You can reach us by e-mail -- e.guizzo (at) ieee (dot) org and evan (at) botjunkie (dot) com -- or on Twitter (AutomatonBlog and BotJunkie) and Facebook.

Thanks for reading and stay tuned!
