Having lots of furniture is a terribly inefficient way to live, considering that most of your furniture is not actually in use most of the time. A much better way to do it would be to just have one single piece of furniture that manages to be, say, a chair, a table, and a bed whenever you need it to be. You know, like my couch. But if you need more specific functionality, you may soon be able to get it using Roombots, little modular robots that can configure themselves into all kinds of different objects.
One Roombot is a fairly simple (and therefore relatively cheap) modular robot with lots of connectors and a hinge in the middle. By itself, it's not good for much, but when it gets together with a bunch of its friends, they can autonomously combine to turn themselves into all sorts of different pieces of furniture. They'd be able to move around on command, and when you don't need them anymore, they'd stack themselves neatly against the wall.
In a hypothetical near future, I can see myself getting out of bed in the morning and taking a shower. My bed, meanwhile, turns itself into a breakfast table and chair. After I eat, the table turns into a desk, but I decide I'd rather work on the couch today, so it turns into a couch instead. Each piece of furniture would be infinitely flexible, too, so I could ask my desk to reposition itself higher or lower and it would obey, or I could even ask for a bit more space and some extra bots would come over and stick themselves on to augment the desktop.
This stuff sounds pretty far out, but it's not too terribly complicated. This is one of the big advantages of modular robotics: lots of simple robots with clever programming can get together and team up to do complex tasks, like building me a couch with an integrated desk that I'll never, ever have to move from.
The game started with Monday's scores: Brad Rutter tied with Watson for first at $5000, and Ken Jennings in last with $2000.
Ken was first to pick a category, but after host Alex Trebek read the clue, Watson buzzed faster. From then on, the computer just kept on going, buzzing and answering correctly seven times in a row, amassing $21,035. Ken and Brad stood there, hopeless. The IBMers in the audience grinned and clapped.
Which brings me to my first question about this whole thing: How does Watson ring the buzzer? Was something implemented to make the buzzing fairer to the human competitors, who are not electrically wired into the game hardware? Update: Here's how Watson receives the clue and buzzes in: It receives the clue as a text file at the moment the clue appears on the stage screen, so in principle at the same time the clue "hits Brad Rutter’s and Ken Jennings’ retinas." To buzz in, Watson waits for the signal that a "buzzer enable" light has turned on, and then it can activate a robotic finger to press the buzzer. Though some may disagree, IBM claims this is a fair way to compete with human contestants.
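That description maps onto a surprisingly small amount of control logic. Here's a minimal sketch; only the flow (clue as text, scored candidates, buzz when the enable light is on) comes from IBM's description, while the threshold value and all the names are my assumptions:

```python
# Sketch of the buzz gating described above. IBM hasn't published the
# actual threshold or interfaces; these are placeholders.

BUZZ_THRESHOLD = 0.5  # assumed confidence bar for ringing in

def decide_buzz(candidate_answers, buzzer_enabled):
    """candidate_answers: list of (answer, confidence) pairs from the
    QA pipeline. Returns the answer to give, or None to stay silent."""
    best_answer, best_conf = max(candidate_answers, key=lambda ac: ac[1])
    if buzzer_enabled and best_conf >= BUZZ_THRESHOLD:
        return best_answer  # fire the robotic finger
    return None
```

In this toy version, a clue where every candidate scores low never triggers the finger at all, which matches the low-certainty answers Watson displayed without buzzing.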
Anyway, after the seventh correct answer, the category was "The Art of the Steal" and an interesting clue came up. Watch what happened:
Clearly, Watson didn't quite understand the clue, which called for an art period, not an artist, as the answer. Curiously, the computer had the correct answer listed among its choices, but with a low probability. The humans had no problem understanding the question -- but they got the art period wrong.
Watson's confusion didn't last, though. Soon, the machine was again dominating the game, this time getting six straight correct answers and expanding its lead. Ken and Brad would occasionally get an answer right, but it was a Watson show.
The highlight of the night came at the end, during the Final Jeopardy round, when contestants wager any amount up to their total score before seeing the final clue. The category was "U.S. cities," and Watson had $36,681, Rutter $5400, and Jennings $2400. Watch:
Toronto????? Ooohhh. You can hear the IBMers gasping, terrified that this humiliating mistake is going to cost Watson everything. But nope. The smarty-pants (or smarty-racks) machine didn't go all in; its wagering-strategy algorithm decided to bet just $947. (Here's how IBM explains the flub.)
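IBM's own explanation aside, the arithmetic behind a tiny bet is easy to sketch: with a big enough lead, the leader can wager an amount that, even if lost, keeps it ahead of a runner-up who bets everything and doubles up. This is the standard Jeopardy "lockout" calculation, not necessarily Watson's actual algorithm (which also had to weigh the two-game cumulative format):

```python
def max_safe_wager(leader, runner_up):
    """Largest whole-dollar bet the leader can lose while still beating
    a runner-up who wagers everything and answers correctly.
    Requires: leader - wager > 2 * runner_up."""
    return max(0, leader - 2 * runner_up - 1)

# Watson at $36,681 against Rutter's $5400: anything up to $25,880 was
# safe, so losing a $947 bet on "Toronto" cost essentially nothing.
```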
So the night ended with Ken at $4800, Brad at $10,400, and Watson at $35,734. The LCD-faced machine, with its HAL 9000 voice, vastly outperformed the best human brains at this game. A massacre.
Which brings me to my second question: What is Watson good for other than playing Jeopardy? Will it help advance AI for real or is this just an entertaining challenge, much like the Deep Blue vs. Kasparov matches?
IBM, wise about this PR opportunity, made sure to include a video segment in which its execs and scientists brag about Watson's potential "to transform many industries." Their comments, however, were vague -- things like "Life is about questions and answers," or "This changes the paradigm in which we work with computers" -- and the most concrete example they gave was using Watson to help clinicians diagnose a hard case involving lots of data.
The whole thing looks like a giant commercial for IBM, but hey, I'm not complaining; I was very entertained and feel like I want to learn more about how Watson works. And I'm looking forward to tonight's round. Do Watson's mistakes mean there's hope for Ken and Brad? What do you think will happen tonight?
This week Meka Robotics is unveiling its Meka M1 Mobile Manipulator, a humanoid system equipped with two dexterous arms, a head with a Microsoft Kinect sensor, and an omnidirectional wheel base. The robot runs Meka's real-time control software with ROS extensions.
Meka, a San Francisco-based start-up founded by MIT roboticists, says the M1 is designed to work in human environments and combines "mobility, dexterity, and compliant force-control." It seems Meka is targeting research applications first, whereas other companies developing similar manipulators -- like pi4 robotics in Germany and Rodney Brooks' Heartland Robotics -- are focusing on industrial uses.
The M1-Standard [image, right] comes with a preconfigured set of manipulators: Meka's compliant arms with 6-axis force-torque sensors at the wrists and compliant grippers. The pan-tilt head carries a Kinect 3D camera and a 5-megapixel Ethernet camera. And the base is a small-footprint omnidirectional platform with a prismatic lift.
The robot's computer runs the Meka M3 and ROS software stacks. Meka says they're "pushing on deeper ROS integration" and expect upcoming versions of their M3 control software to "integrate many of the great packages that the ROS community is generating."
It looks like an amazing robot, but it doesn't come cheap. The M1-Standard is priced at US $340,000.
The M1-Custom [image, top], as the name suggests, allows customers to choose different sensors, hands, and head to build the robot they want (the price tag will vary accordingly). Meka says the first M1-Custom, seen in the video below, shipped last month.
Meka has been working on all the robot's subsystems and ROS integration for some time. Inspiration for the M1, the company says, came in part from another robot, Georgia Tech's Cody, which uses Meka arms. With the M1, Meka has finally combined all the subsystems into a single, integrated robot.
This is an amazing time for robotics. So much is happening. If only we could turbocharge this blog.
Guess what? We're doing exactly that. Starting this week, Automaton and BotJunkie, two of the world's leading robotics blogs, are teaming up to create a monster robotics news machine. Well, we're still more human than machine, but we'll be churning out lots of great stuff -- daily stories, in-depth articles, product reviews, interviews, exclusive videos, and more.
The reason we're merging is simple. Evan Ackerman, the creator of BotJunkie, and I believe that together (along with other contributors) we can do a much better job covering all the cool news and happenings in robotics and AI. Did I mention there's a lot happening? (Read Evan's note on the merger.)
So if you're already an Automaton reader, expect even greater robotics content. If you're a BotJunkie reader migrating over here, welcome!
As Evan said, we really like what he's been doing at BotJunkie and we don't want that to change. Evan will be posting daily stories about the same types of things and with the same style and tone as he'd been doing over at BotJunkie. But he'll also be doing more -- traveling to conferences, reviewing more products, and occasionally becoming a robot himself. (He's so dedicated to his readership that, even though he'll be out on vacation for a couple of weeks, he prepared a bunch of posts to be published while he's away.)
As in any merger, there are bumps along the way, and several loyal BotJunkie readers have asked for a better RSS feed and web design for Automaton. We hear you. We agree with you. And we'll do all we can to make those things happen.
It may sound corny, but as Evan nicely put it, we love writing about robots, but it's you, our readers, who really make it worthwhile. We love the e-mails, the tips, the comments, the tweets. Keep those coming. You can reach us by e-mail -- e.guizzo (at) ieee (dot) org and evan (at) botjunkie (dot) com -- or on Twitter (AutomatonBlog and BotJunkie) and Facebook.
This is HOAP-2, and it likes to clean. It doesn't really know how to clean, but that's okay, because it does know how to learn. A human can move HOAP-2's arms in different cleaning patterns, and the bot will remember and then be able to clean by itself later on. Take a look:
The cool thing here is, of course, that HOAP is learning to erase instead of being programmed to erase. Robot learning is the focus of tons of research today. Now, in the case of HOAP, some people would argue that this is a waste of time, because robots should be able to detect marks on a whiteboard and erase them autonomously. And that's true, but it's also not the point.
If you're a teacher with a bunch of dirty whiteboards and no naughty kids and someone hands you a robot, you don't want to have to worry about whether your whiteboards are the right shade of white or the right size or whatever... And what if you have chalkboards instead? It really makes much more sense to have a robot be a generalist, and to be an effective generalist a robot has to be adaptable, something that (for now at least) robots are notoriously bad at. But robots are notoriously good at following instructions, so robots that can learn new tasks from humans on the fly have the potential to be much more effective, and much less frustrating for their users.
You can relax: so far, we humans aren't completely redundant, as IBM's Watson artificial intelligence system managed to not answer every single question instantly and correctly in the first round of a three-day Jeopardy exhibition match with past champions Ken Jennings and Brad Rutter.
After round one (which was as far as today's show got), Watson was tied with Brad for first at $5000, and Ken Jennings was in third with $2000. It's worth mentioning that Watson led for most of the round, with a few incorrect responses setting it back significantly toward the end. Watson definitely showed more 'common sense' than I was expecting, but it still seemed burdened with a fair amount of the 'so smart it's kinda dumb' quality that computers are known for. It definitely didn't get everything right, displaying a significant number of low-certainty answers (below its buzzing threshold) as well as buzzing in with several answers that seemed pretty far off. It also repeated the same incorrect answer that Ken had given moments before, a bug that I bet is getting resolved as we speak.
While I was honestly hoping that Watson would do slightly better, I'm relieved that it at least ended up tied for first. Irrespective of expectations, I feel like Watson turned in an impressive performance, and I think that's one of the most important aspects of this exhibition... In my experience, the overall perception that the general public has of the current state of robotics and artificial intelligence is some mix of Roomba, the Terminator, and Commander Data, which is (to put it mildly) somewhat inaccurate and unrealistic. So, it's good to see a state-of-the-art AI system put on a credible public performance, complete with some fallibility to keep us feeling comfortably in control.
The next segment (featuring the second round plus Final Jeopardy) airs tonight, with the final match on Wednesday.
There have been a couple of interesting TED Talks recently featuring robots (and their human researchers). This first talk is from Heather Knight of Marilyn Monrobot Labs, who programmed a Nao to not just tell jokes, but actually pay attention to whether the audience was laughing and then adapt its comedy routine on the fly:
The other talk is by Cynthia Breazeal, from MIT's Personal Robotics Group. She talks about her past research and where she sees the future of interactive robots. If for no other reason, it's worth watching for a priceless Cookie Monster moment:
It's interesting how Cynthia discusses her research history in terms of robots like Leo, which are autonomous, versus her vision for the future, in which she seems to focus on telepresence-type robots. I'm not sure what (if anything) to make of that. While I tend to agree that, commercially at least, telepresence probably has a stronger immediate market than autonomy (especially emotional autonomy) at its current stage of development, I'd still love to see more of Leo.
It's also interesting just how much of a difference presence makes when it comes to humans interacting with technology, and how even a subtle anthropomorphic design can inspire emotional attachment. Autom, in particular, is a good example of how the way to get people to bond with robots is not to try to make them as humanoid as possible, but just to make them slightly familiar, and we humans can fill in all the blanks with no problems.
Getting a Roomba to obey gesture commands turns out to be pretty simple thanks to the magic of Kinect: the sensor is connected to a PC, which talks to the Roomba via a little Bluetooth dongle and sends it driving and steering commands based on the positions of your hands and hips.
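The PC-to-Roomba half of a hack like this is well documented: iRobot's Open Interface moves the robot with a Drive command, opcode 137, followed by a signed 16-bit velocity in mm/s and a signed 16-bit turn radius in mm, both big-endian. The gesture mapping below is my own guess at the geometry, not the code actually behind the video:

```python
import struct

STRAIGHT = -32768  # 0x8000: the Open Interface sentinel radius for "no turn"

def drive_packet(velocity_mm_s, radius_mm):
    """iRobot Open Interface Drive command: opcode 137, then signed
    16-bit velocity (-500..500 mm/s) and radius (-2000..2000 mm)."""
    return struct.pack(">Bhh", 137, velocity_mm_s, radius_mm)

def gesture_to_drive(hand, hip):
    """Toy hand/hip mapping (an assumption, not the hack's real code):
    hand height above the hip sets speed, lateral offset sets the turn.
    hand and hip are (x, y) skeleton positions in meters, y up."""
    velocity = max(-500, min(500, int((hand[1] - hip[1]) * 1000)))
    offset = hand[0] - hip[0]
    if abs(offset) < 0.05:            # hand roughly centered: go straight
        return drive_packet(velocity, STRAIGHT)
    # positive OI radius turns left, so negate to steer toward the hand
    radius = max(-2000, min(2000, int(-500 / offset)))
    return drive_packet(velocity, radius)
```

The resulting five bytes go straight out the Bluetooth serial link; everything interesting in the hack lives on the Kinect side.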
I'm well aware that this hack basically defeats the entire purpose of having a Roomba, without really giving you many of the benefits of an upright vacuum, but to let those facts bother you would go against the spirit of what this is: it's a hands-free Roomba, man! How cool is that?
Building robots has never been cheap, which sucks, because all that time and effort and expense that goes into building a hardware platform eats up all the time and energy that you'd probably rather spend making that hardware platform do something useful and cool. Part of the point of PR2 and ROS was to remove this hardware barrier and let people start focusing on software. Every once in a while, though, you run into someone who for some reason can't find $400,000 between their couch cushions for their own PR2. For these unfortunate souls, a new option may be the Bilibot.
The idea behind the Bilibot (the name is some variation on "cheap robot" in German) is to create a robotics platform that's cheap enough for just about anyone to afford, yet capable enough for serious robotics researchers to be productive with. It consists of three primary components: an iRobot Create base to let it move around, a Kinect sensor to let it see where it's going, and a computer preconfigured with ROS. Part of the appeal of the platform is that it'll be capable of doing stuff right out of the box: there will be a single button you can push to get the robot to start following you around, for example.
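Under the hood, a follow-me behavior like that can be surprisingly little code once ROS and the Kinect hand you a person's centroid. Here's a toy proportional controller; the gains and conventions are my assumptions, and the behavior that actually ships with the Bilibot is surely more robust:

```python
def follow_step(distance_m, offset_m, target_m=1.0, k_lin=0.5, k_ang=1.5):
    """One control tick of a toy person-follower. distance_m is how far
    ahead the person's centroid is, offset_m how far to its right.
    Returns (linear m/s, angular rad/s) in the ROS Twist convention,
    where positive angular velocity is a counterclockwise (left) turn."""
    linear = k_lin * (distance_m - target_m)  # close the gap to ~1 m
    angular = -k_ang * offset_m               # person to the right: turn right
    return linear, angular
```

Run at the Kinect's frame rate, this keeps the robot trailing about a meter behind whoever it has locked onto.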
The primary reason you might want a Bilibot is that the target price they're shooting for is a scant $650, which is basically just the cost of the parts plus the time it takes to put the thing together. And if you want to go even cheaper, you can build it yourself; the plans will be freely available.
Want in? They're currently finalizing the computer hardware, but you can sign up to be notified when pre-orders start for real at the website below.
Modular Robotics' Cubelets are designed to be an absurdly simple way to build robots. You don't have to know how to program anything or even how to build anything; just snap a few specialized Cubelet blocks together and poof, you've got a robot. Want to build something different? Just use different blocks in different combinations; it's that easy:
One set of 20 Cubelets would cost you $300, if you could buy them, which you can't, because they're sold out. In that set you'd get:
Last time I posted about Cubelets, I posed a question that nobody even tried (as far as I could tell) to answer, so I'm just going to go ahead and pose it again: How many different permutations of robot can you make with one set of 20 Cubelets, keeping in mind the following:
- Each Cubelet has either 5 or 6 attachment points (depending on what it does)
- The same set of Cubelets functions differently when arranged differently
- Cubelet permutations must be able to exist in physical space (tricky!)
You may ignore the fact that using (say) two inverse blocks in a row is functionally identical to not using any inverse blocks, and assume that a Cubelet robot that has a different size or layout counts as a different robot. And while the definition of "robot" is, as always, a little bit iffy, suffice it to say that to count, a Cubelet robot has to be able to sense something or perform some action.
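For a sense of why this is hard: even ignoring block types and the 5-versus-6 connector constraint entirely, just counting the distinct shapes that n cubes can form (fixed polycubes, up to translation) explodes quickly. Here's a brute-force enumerator -- a standard counting technique, nothing Cubelets-specific, and only feasible for small n:

```python
def canonical(cells):
    """Translate a set of unit-cube coordinates so it starts at the origin."""
    mx = min(x for x, y, z in cells)
    my = min(y for x, y, z in cells)
    mz = min(z for x, y, z in cells)
    return frozenset((x - mx, y - my, z - mz) for x, y, z in cells)

def count_fixed_polycubes(n):
    """Count distinct connected n-cube shapes up to translation (not
    rotation), grown one cube at a time from a single seed cube."""
    shapes = {frozenset([(0, 0, 0)])}
    for _ in range(n - 1):
        grown = set()
        for shape in shapes:
            for x, y, z in shape:
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    cell = (x + dx, y + dy, z + dz)
                    if cell not in shape:
                        grown.add(canonical(shape | {cell}))
        shapes = grown
    return len(shapes)

# the counts explode: 1, 3, 15, 86, 534, ... for n = 1..5
```

For the full question you'd then have to assign the set's specific block types to the cells of each 20-cube shape and deal with symmetry and the unusable faces, which is exactly what makes it interesting.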
If you can convince us that you have the right answer (post it in the comments section below), it's good for an Automaton t-shirt. Good luck!