Building robots has never been cheap, which sucks, because all the time, effort, and expense that goes into building a hardware platform eats up the resources you'd probably rather spend making that platform do something useful and cool. Part of the point of PR2 and ROS was to remove this hardware barrier and let people start focusing on software. Every once in a while, though, you run into someone who for some reason can't find $400,000 between their couch cushions for their own PR2. For these unfortunate souls, a new option may be the Bilibot.
The idea behind the Bilibot (which is some variation on "cheap robot" in German) is to create a robotics platform that's cheap enough for just about anyone to afford, yet capable enough for serious robotics researchers to be productive with. It consists of three primary components: an iRobot Create base to let it move around, a Kinect sensor to let it see where it's going, and a computer pre-configured with ROS. Part of the appeal of the platform is that it'll be capable of doing stuff right out of the box: there will be a single button you can push to get the robot to start following you around, for example.
The primary reason you might want a Bilibot is its target price: a scant $650, which is basically just the cost of the parts plus the time it takes to put the thing together. And if you want to go even cheaper, you can build it yourself; the plans will be freely available.
Want in? They're currently finalizing the computer hardware, but you can sign up to be notified when pre-orders start for real at the website below.
Modular Robotics' Cubelets are designed to be an absurdly simple way to build robots. You don't have to know how to program anything or even how to build anything; just snap a few specialized Cubelet blocks together and poof, you've got a robot. Want to build something different? Just use different blocks in different combinations, it's that easy:
One set of 20 Cubelets would cost you $300, if you could buy them, which you can't, because they're sold out. In that set you'd get:
Last time I posted about Cubelets, I posed a question that nobody even tried (as far as I could tell) to answer, so I'm just going to go ahead and pose it again: How many different permutations of robot can you make with one set of 20 Cubelets, keeping in mind the following:
- Each Cubelet has either 5 or 6 attachment points (depending on what it does)
- The same set of Cubelets functions differently when arranged differently
- Cubelet permutations must be able to exist in physical space (tricky!)
You may ignore the fact that using (say) two inverse blocks in a row is functionally identical to not using any inverse blocks, and assume that a Cubelet robot that has a different size or layout counts as a different robot. And while the definition of "robot" is, as always, a little bit iffy, suffice it to say that to count, a Cubelet robot has to be able to sense something or perform some action.
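To get a feel for the scale involved, here's a little brute-force sketch (my own, not from Modular Robotics) that counts connected arrangements of n unit cubes, distinct up to translation only, ignoring which Cubelet goes where and the 5-attachment-point exceptions. The full puzzle answer would multiply shape counts like these by the ways of assigning the 20 specific Cubelets to cells, so treat this as a lower-bound warm-up, not the answer:

```python
def normalize(cells):
    # Translate the shape so the minimum coordinate on each axis is zero,
    # making translated copies compare equal.
    mx = min(x for x, y, z in cells)
    my = min(y for x, y, z in cells)
    mz = min(z for x, y, z in cells)
    return frozenset((x - mx, y - my, z - mz) for x, y, z in cells)

def count_fixed_polycubes(n):
    """Count connected n-cube shapes, distinct up to translation only
    (rotated layouts count as different, matching the rules above)."""
    shapes = {frozenset([(0, 0, 0)])}
    for _ in range(n - 1):
        grown = set()
        for shape in shapes:
            for x, y, z in shape:
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    cell = (x + dx, y + dy, z + dz)
                    if cell not in shape:  # grow by one adjacent cube
                        grown.add(normalize(shape | {cell}))
        shapes = grown
    return len(shapes)

print([count_fixed_polycubes(n) for n in range(1, 5)])  # [1, 3, 15, 86]
```

The counts explode quickly, which is a hint at why nobody answered last time.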
If you can convince us that you have the right answer (post it in the comments section below), it's good for an Automaton t-shirt. Good luck!
Innvo Labs was out in force at CES 2011, and I got some cute pics of their new Pleo Reborn plus 10 minutes with Innvo’s COO Derek Dotson, one of Pleo’s original daddies from back in the Ugobe days:
While it’s a bit disappointing that those adorable pink and blue Pleos won’t be available over here, and that the male and female behaviors that we heard about weren’t implemented, Pleo rb is still much more sophisticated than the original Pleo, especially in terms of software and interactivity.
I’m particularly looking forward to some of those future features that Derek alludes to in our interview, like the wireless connectivity and nose cam access. I mean, if both of those get hooked up, presto, you’ve got a remotely accessible surveillance dino. It’ll be a while yet, but just bombard Innvo with emails, ’cause they’re listening.
Looks like Pleo Reborn is backordered until about April, which is good news for Innvo and the commercial future of Pleo but bad news for you if you want one. They’re $470, and extra food and learning stone kits are $20 each.
I'm quick to opt for automation where it increases productivity. However, choosing machinery over people to detect humans with foul intent at airports demonstrates a lack of understanding of just how keenly tuned the human brain is to detect subtle facial and behavioral cues.
“Okay,” you’re thinking, “Jeanne’s had a bad airport day.” And you'd be right!
You know the drill: Delays, lines, unfriendly agents, and, of course, the choice between a humiliating pat-down or a scanning machine we have to trust to be safe. I'd say this was definitely my second worst airport day ever. The first?
That would've been when a U.S. Transportation Security Administration agent scooped the pumpkin filling out of my daughter's leftover Thanksgiving pie. You might well wonder when Al Qaeda started watching Chef Paula Deen so they could hide explosives in homemade pumpkin pie and then convince college girls to carry them through airport security.
But TSA employees are denied the right to deploy the most advanced natural intelligence and sensing system in existence -- the one inside their own cortices! -- in favor of the artificial stupidity of bureaucratic procedure. Pumpkin filling? Sorry, ma'am, that exceeds the 3-ounce limit for liquids and gels. It has to go.
When an organization like TSA has some US $8 billion to spend, is it better off hiring large numbers of poorly paid, unprotected staff to baby-sit radiation-scattering machines that share with the world details only your proctologist knew before? Or should it be investing in highly select, well-paid, and highly educated professionals using the sensing systems, evolved over millennia and trained over decades, to detect people with something to hide?
The advantage of a machine is that it cannot be accused of bias. But bias can be counteracted by both training and quality control. Performance reviews can show how many subjects were unnecessarily delayed, with an analysis of characteristics highlighting any bias in who is being stopped unnecessarily. We in the artificial intelligence community can help you with that sort of analysis, TSA.
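One simple form that analysis could take is a two-proportion z-test on stop rates across passenger groups. The numbers below are made up purely for illustration; this is a statistics-textbook sketch, not any procedure TSA actually uses:

```python
import math

def stop_rate_z(stops_a, total_a, stops_b, total_b):
    """Two-proportion z-test: is group A stopped at a different rate
    than group B?  |z| > 1.96 flags a disparity that is unlikely to be
    chance at the 95% confidence level and merits review."""
    rate_a = stops_a / total_a
    rate_b = stops_b / total_b
    pooled = (stops_a + stops_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (rate_a - rate_b) / se

# Hypothetical numbers: group A stopped 90 times in 1,000 screenings,
# group B stopped 50 times in 1,000 screenings.
z = stop_rate_z(90, 1000, 50, 1000)
print(round(z, 2))  # well above 1.96, so the disparity merits review
```

The point is that screener bias is measurable, and therefore correctable, with routine data the agency already collects.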
We in the AI community are also working assiduously to replicate the capabilities of human beings in machines, but when it comes to facial detection and behavior recognition, our algorithms barely match a child’s capabilities. Delaying people thousands of hours a day and risking radiation damage to them and TSA personnel harms our economy and reduces overall efficiency. Why not deploy natural intelligence instead of artificial stupidity?
Thank heavens we’re testing telepresence systems on our MT490 mobile robot. I’m sure I’m not the only person who’ll be opting to let my avatar drive to more meetings!
Jeanne Dietsch, an IEEE member, is CEO and co-founder of MobileRobots in Amherst, N.H., and vice president of emerging technologies at Adept Technology.
This is a robotic dragonfly. If I told you that some company had just invented it and it was flying around today, you'd probably be impressed. Instead, I'm going to tell you that it was developed by the CIA and was flying in the 1970s. And not just flying like proof-of-concept-it-gets-off-the-ground flying, but reportedly, the flight tests were "impressive," whatever that means. It was powered by an ultraminiaturized gasoline engine (!) that would vent its exhaust backwards to increase the bot's thrust, and the only reason they seem to have scrapped it was that its performance in a crosswind wasn't that good:
In the 1970s the CIA had developed a miniature listening device that needed a delivery system, so the agency's scientists looked at building a bumblebee to carry it. They found, however, that the bumblebee was erratic in flight, so the idea was scrapped. An amateur entomologist on the project then suggested a dragonfly, and a prototype was built that achieved the first flight of an insect-sized machine.
A laser beam steered the dragonfly and a watchmaker on the project crafted a miniature oscillating engine so the wings beat, and the fuel bladder carried liquid propellant.
Despite such ingenuity, the project team lost control over the dragonfly in even a gentle wind. “You watch them in nature, they’ll catch a breeze and ride with it. We, of course, needed it to fly to a target. So they were never deployed operationally, but this is a one-of-a-kind piece.”
In and of itself, this dragonfly is not particularly crazy. It's also not particularly crazy that it was done 30 or 40 years ago, I guess. What IS crazy is when you start thinking about the state of technology 40 years ago versus the state of technology today, and what might be possible now (but currently top secret) if they had an operational insect robot way back then. It blows my mind.
The CIA also came up with a robot squid (its mission is STILL classified) and a robot research fish named Charlie. Pics and video of that, after the jump.
CIA’s Office of Advanced Technologies and Programs developed the Unmanned Underwater Vehicle (UUV) fish to study aquatic robot technology. Some of the specifications used to develop “Charlie” were: speed, endurance, maneuverability, depth control, navigational accuracy, autonomy, and communications status.
The UUV fish contains a pressure hull, ballast system, and communications system in the body and a propulsion system in the tail. It is controlled by a wireless line-of-sight radio handset.
Cute! And once again, seriously not bad for such a long time ago.
Wondering what a $15k telepresence robot can do for you? WONDER NO LONGER. With the help of a 4G wireless hotspot, this QB wandered out of the Anybots office into downtown Mountain View, Calif., looking for a snack. A mile later, it found a Red Rock Coffee and ordered a berry scone, tipped something like 125% (!) and then rolled out. Classy.
While it’s a little hard to tell from the vid, I’m assuming that Anybots sent a chaperone of some sort along to make sure that nobody just grabbed QB by the neck and made off with it. And if they didn’t, well, let me know next time you send a robot out for coffee, because I totally want one and I think grand theft robot is the only way it’s gonna happen.
Hisashi Ishihara, Yuichiro Yoshikawa, and Prof. Minoru Asada of Osaka University in Japan have developed a new child robot platform called Affetto. Affetto can make realistic facial expressions so that humans can interact with it in a more natural way.
Prof. Asada is the leader of the JST ERATO Asada Project and his team has been working on "cognitive developmental robotics," which aims to understand the development of human intelligence through the use of robots. (Learn more about the research that led to Affetto in this interview with Prof. Asada.)
Affetto is modeled after a one- to two-year-old child and will be used to study the early stages of human social development. There have been earlier attempts to study the interaction between child robots and people and how that relates to social development, but the lack of realistic child appearance and facial expressions has hindered human-robot interaction, with caregivers not attending to the robot in a natural way.
Here are some of the expressions that Affetto can make to share its emotions with the caregiver.
Norri Kageki is a journalist who writes about robots. She is originally from Tokyo and currently lives in the San Francisco Bay Area. She is the publisher of GetRobo and also writes for various publications in the U.S. and Japan.
Robotics is off to a good start this year. In January, there was CES, with lots of cool new robot products and demos, and we've also seen plenty of robot hacks using Microsoft's Kinect 3D sensor, which is creating quite a stir. But there was much more, of course, so it's time to review the most striking, stunning, and strange robot videos of January.
No. 10 This mind-bending action sequence from the Indian robot movie Enthiran is a must-watch. Insane, awesome, ridiculous? You be the judge.
Northrop Grumman’s sexily badass X-47B unmanned combat air system made its first flight ever on Friday, circling a desert runway a couple times all by itself before successfully not crashing. Northrop seemed pretty happy about the way things went:
“The flight provided test data to verify and validate system software for guidance and navigation, and the aerodynamic control of the tailless design. The X-47B aircraft will remain at Edwards AFB for flight envelope expansion before transitioning to Naval Air Station Patuxent River, Md. later this year. There, the system will undergo additional tests to validate its readiness to begin testing in the maritime and carrier environment.”
"Flight envelope expansion" means that they’re going to see how crazy the X-47B can get in the air. After that, they’re going to get it ready for its intended purpose, which is carrier operations. We know that drones are already pretty good at precision maneuvers, but I hear carrier landings are especially tricky. I’m optimistic (I always am about robots), but seeing this thing manage an autonomous carrier touchdown is going to go a long way towards convincing skeptics that drones really can function on a level similar to even the most skilled humans in many aspects of combat aircraft control.
As part of the European project RoboEarth, I am currently one of about 30 people working towards building an Internet for robots: a worldwide, open-source platform that allows any robot with a network connection to generate, share, and reuse data. The project is set up to deliver a proof of concept to show two things:
- RoboEarth greatly speeds up robot learning and adaptation in complex tasks.
- Robots using RoboEarth can execute tasks that were not explicitly planned for at design time.
The vision behind RoboEarth is much larger: Allow robots to encode, exchange, and reuse knowledge to help each other accomplish complex tasks. This goes beyond merely allowing robots to communicate via the Internet, outsourcing computation to the cloud, or linked data.
But before you yell "Skynet!," think again. While the closest analogies science fiction writers have imagined may well be the artificial intelligences in Terminator, the Space Odyssey series, or the Ender saga, I think those analogies are flawed. RoboEarth is about building a knowledge base, and while it may include intelligent web services or a robot app store, it will probably be about as self-aware as Wikipedia.
That said, my colleagues and I believe that if robots are to move out of the factories and work alongside humans, they will need to systematically share data and build on each other’s experience.
Imagine the following scenario: A service robot like the one in the hospital room [photo, top] is pre-programmed to serve a drink to a patient. A simple program might include: Locate the drink, navigate to its position, grasp it, pick it up, locate the patient in the bed, navigate to the patient, and finally hand over the drink.
Now imagine that during task execution this robot monitors and logs its progress and continuously updates and extends its rudimentary, pre-programmed world model with additional information. It updates and adds the position of detected objects, it evaluates the correspondence of its map with its actual perception, and it logs successful and unsuccessful attempts during its task performance. If the robot is not able to fulfill a task, it asks a person for help and stores any newly learned knowledge. At the end of its task performance, the robot shares its acquired knowledge by uploading it to a Web-style database.
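As a rough sketch of that monitor-log-share loop, here's what the serving robot's task executor might look like. Every name here (the class, the step list, the helper methods) is my own illustration of the idea, not RoboEarth's actual interface:

```python
import json

class ServeDrinkTask:
    # The pre-programmed step sequence from the scenario above.
    STEPS = ["locate_drink", "navigate_to_drink", "grasp", "pick_up",
             "locate_patient", "navigate_to_patient", "hand_over"]

    def __init__(self, robot):
        self.robot = robot
        self.log = []          # outcome of every attempted step
        self.world_model = {}  # learned object positions, map corrections

    def run(self):
        for step in self.STEPS:
            ok = getattr(self.robot, step)()  # execute one primitive action
            if not ok:
                # Couldn't complete the step: ask a person for help and
                # record whatever was newly learned in the process.
                ok = self.robot.ask_human_for_help(step)
            self.log.append({"step": step, "success": ok})
            # Fold newly detected objects into the rudimentary world model.
            self.world_model.update(self.robot.observed_objects())
        return self.log

    def share(self):
        # Serialize the acquired knowledge for upload to a web-style database.
        return json.dumps({"task": "serve_drink",
                           "log": self.log,
                           "world_model": self.world_model})
```

The essential point is that logging and sharing are side effects of normal execution: the robot gets its job done either way, but every run leaves the knowledge base a little richer.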
Some time later, the same task is to be performed by a second robot that has no prior knowledge on how to execute the task. This second robot queries the database for relevant information and downloads the knowledge previously collected by other robots. Although differences between the two robots (e.g., due to wear and tear or different robot hardware) and their environments (e.g., due to changed object locations or a different hospital room) mean that the downloaded information may not be sufficient to allow this robot to re-perform a previously successful task, this information can nevertheless provide a useful starting point.
Recognized objects, such as the bed, can now provide occupancy information even for areas not directly observed. Detailed object models (e.g., of a cup) can increase the speed and reliability of the robot's interactions. Task descriptions of previously successful actions (e.g., driving around the bed) can provide guidance on how the robot may be able to successfully perform its task.
This and other prior information (e.g., the previous location of the cup, the likely place to find the patient) can guide this second robot’s search and execution strategy. In addition, as the two robots continue to perform their tasks and pool their data, the quality of prior information will improve and begin to reveal underlying patterns and correlations about the robots and their environment.
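As a toy illustration of how a downloaded prior can guide the second robot's search, consider ordering candidate locations by distance from where another robot last reported the object. The function and the nearest-first heuristic are my own simplification, not RoboEarth's planner:

```python
import math

def plan_search(prior_location, candidate_spots):
    """Order the places to look for an object.

    prior_location: (x, y) where another robot last saw the object,
                    downloaded from the shared knowledge base, or None.
    candidate_spots: list of (x, y) locations this robot can inspect.
    """
    if prior_location is None:
        return list(candidate_spots)  # no prior: keep the default order
    # With a prior, search the spots nearest the last known location first.
    return sorted(candidate_spots,
                  key=lambda spot: math.dist(spot, prior_location))

# The cup was last seen near (1, 1), so that area gets searched first.
print(plan_search((1.0, 1.0), [(5.0, 5.0), (1.2, 0.9), (3.0, 0.0)]))
```

Even when the prior turns out to be stale (the cup has moved, the room is different), the robot loses nothing: it simply continues down the ordered list, and its own result becomes the next robot's prior.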
As you can see in the video above, RoboEarth has a way to go. One year into the project, we can download task descriptions from RoboEarth and execute a simple task. We can also upload simple things, like an improved map of the environment. But for now we are far from using or creating the rich amount of prior information described in the scenario above, or addressing potential safety or legal challenges.
I think that the availability of such prior information is a necessary condition for robots to operate in more complex, unstructured environments. The people working on RoboEarth -- me included -- believe that, ultimately, the nuanced and complicated nature of human spaces can't be summarized within a limited set of specifications. A World Wide Web for robots will allow them to achieve successful performance in increasingly complex tasks and environments.