LEGO today announced the WeDo kit, which isn't just a younger-kid version of Mindstorms: it's designed to appeal to "emerging markets" -- developing countries -- and even interfaces with the OLPC XO computer to do so, which I think is pretty sweet.
Like Mindstorms, it comes with LEGO bricks and sensors and has a similar building-block software interface based on LabVIEW, though it's simpler and aimed at a younger audience. The press release describes something of the curriculum they have in mind for it:
Working in teams, children invent their own solution by building a LEGO model and programming it to perform a certain task. Cause and effect learning is enhanced by the models remaining tethered to a computer; similar to scientists in working labs, children can test and adjust their programming in real time. After reflecting on what did and did not work, students can consult with peers, adapt programming, adjust models or begin again.
Unfortunately it's not up on the LEGO website yet, and there's no hint of how much it'll cost. The only info at this point is that it will be available in January 2009.
I saw "WALL-E" last night with some folks from work, and it was universally agreed to be an excellent movie. The characters are fantastic, the animation is, as always, impressive, and the story is fun. Fans of Apple products will find a few winks at Macs and iPods, and I'm pretty sure EVE is exactly what an Apple robot would look like. For his part, WALL-E bears a striking resemblance to Johnny 5.
I can't wait till I can have my own. For now I'll have to make do with the LEGO version, I guess.
RoboticsCourseWare.org is similar to MIT's OpenCourseWare (OCW) initiative ... the repository is searchable, browsable, and open for downloads. No registration or login is required for accessing the posted materials. Materials are typically made available under a Creative Commons License ...
As part of our special report The Singularity, we interviewed Rodney Brooks on his views on superhuman intelligence and the future of robotics. Watch the interview (transcript below) and also read the article he wrote for the report, I, Rodney Brooks, Am a Robot.
We are machines. We're made up of biomolecules that obey the laws of physics and chemistry. So therefore, in principle, one can imagine building another sort of machine, out of some other sort of stuff, that would be equally as intelligent as us, or perhaps more intelligent. It's a working hypothesis, but maybe we're overestimating our abilities.
I think it's really hard to make long-term predictions about the nature of artificial intelligent machines, just as it's been really difficult to make long term predictions about electronics. If you look at electronics in the early twentieth century, who could've imagined what we'd have at the end of the twentieth century?
It's undeniable that right now artificial intelligence and computers in general are benefiting from exponential increases in computing power and even the exponential increase in scientific research that's happening. So some people hypothesize that at some point an intelligent machine will spring into existence and there will be a complete separation between before and after. But I don't think that technology quite works that way. We are building more and more intelligent systems. They're much more intelligent than they were five years ago, ten years ago, or twenty years ago, and that process will continue.
We may at some point say, "Wow, things have really changed," in the same way if you look at cell phones today compared to cell phones twenty years ago -- that's a big difference. But it didn't happen all in one day; it gradually evolved. So if it happens, it's going to happen over a period of time; it's not going to be a sudden event.
Perhaps I'm a little cynical, but sometimes when I look at some of the literature around the singularity, it sort of seems to me to be a techno-salvation, where somehow technologists are looking for a way to get eternal life without having to go to the bother of believing in God, because they can believe the technology is going to provide it for them. And I'm skeptical about all forms of eternal life.
A lot of technologists see one of two possible outcomes. Either everything is going to hell and the machines that we build will just want to take over from us -- it will be like a Hollywood movie. Others sort of see that these machines are going to make life great: we'll be writing poetry and eating grapes and life will be wonderful. Nothing's ever as bad as we expect or as good as we expect. I think it will be somewhere in the middle. Many of our machines have turned out to be great for us, but they come with problems, and I think that's going to happen with intelligent machines too.
I'm a techno-optimist. I really think technology is great, and will continue to be. Whether it can solve every problem for everyone who's now alive, I don't think is likely to be true. I think there's going to be a continuing cycle of living and dying for a long time to come. What's happening 200 years from now, 500 years from now, is impossible to begin to imagine. But in my own lifetime, and in the lifetime of my kids, I don't think things are going to be too different from what they are for most of us today.
If you've got two iPhones and a LEGO Mindstorms kit, you too can build your very own iPhone-controlled LEGO rover. Just check out what they've done over at BattleBricks; they provide the LEGO model and source code for you to join in the fun. Basically, one iPhone generates color patterns on the other iPhone's screen that the NXT brick can recognize and respond to.
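The control scheme boils down to a lookup from screen color to motor command. Here's a minimal Python sketch of that idea; the color names and motor power values are my own illustrative assumptions, not BattleBricks' actual code:

```python
# Hypothetical sketch of the BattleBricks scheme: the "remote" iPhone fills
# the other iPhone's screen with a solid color, and the NXT's color sensor
# translates each color into a drive command for the rover's two motors.
# Colors and power levels below are invented for illustration.

COLOR_TO_COMMAND = {
    "red":    ("stop",       0,   0),   # (command, left power, right power)
    "green":  ("forward",   75,  75),
    "blue":   ("reverse",  -75, -75),
    "yellow": ("turn_left", -50,  50),
    "cyan":   ("turn_right", 50, -50),
}

def interpret(color_reading):
    """Map a color-sensor reading to a motor command; unknown colors mean stop."""
    return COLOR_TO_COMMAND.get(color_reading, ("stop", 0, 0))
```

Defaulting unknown readings to "stop" is a cheap safety measure: if the sensor misreads the screen mid-run, the rover halts instead of doing something unexpected.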
Now imagine what they'll be able to do with the 3G iPhone's GPS receiver...
I saw this article last week: a company with satellite offices in different countries solves the videoconferencing problem with their own home-built robot named POGO. While I applaud the DIY approach, I really have to ask why it was necessary.
The company had already considered and discarded video conferencing as too expensive and less convenient. That led to the birth of POGO at a cost of about $8,000 to $9,000...
... having the robot enables team members to use white boards to explain concepts to the offshore team and get instant reactions instead of trying to communicate through phone or e-mail.
My first thought is that this article is really leaving some information out. But given what we have to work with, why was a robot the solution? If they're using a webcam, clearly they don't need high-res projectors or video cameras for their conferences (which would certainly be expensive). And how did a webcam and monitor tacked onto a mobile base cost $8k? I don't understand how the expensively-implemented cheap solution is superior to the more expensive COTS solution. The only difference seems to be remote-controlled mobility. But if all they're doing with it is "using white boards to explain concepts", why is the mobility necessary if the whiteboards are fixed in place?
A video from CBS indicates that one of the advantages is that the Pakistani employees can move POGO from room to room to look at whiteboards, but I'm really thinking a handful of webcams are still cheaper than this... What are the advantages of a robot in this situation? Is there a market for this outside of the novelty of it?
Also, whoever decided it has a "slight resemblance to R2-D2" has clearly never seen R2-D2.
The first of their new series focuses on the technology that's come out of the DARPA Grand Challenges (which we talked about a bit last fall), and they promise a compelling lineup of robotics researchers, professionals, and hobbyists in future episodes. Who knows -- maybe you'll even hear my melodic voice coming to you over the intertubes!
Look for new eps every other Friday morning on a computer near you.
AMAM 2008 was a one-week, single-track conference, including four keynotes, over 70 posters, and a "robot zoo" populated by a menagerie of mobile machines. The coffee break buzz indicated that most of the 150 attendees found the meeting to be a superb confluence of the cutting edge in bio-inspired robotics.
Locomotion specialists from both biology and engineering were well represented, and the meeting continually broke down barriers between disciplines to focus on the shining promise of the field: highly functional robots built using biologically derived principles, which in turn serve as embodied models to address otherwise impractical questions in biology. An additional innovation at this conference was the invitation of several biomedical engineers working on ways to recover function in paralyzed human patients using intuitive brain-machine interfaces.
As for the venue, Cleveland may not be the most exotic of destinations but it has an all-American cultural history steeped in the industrial tradition, and during the conference the attendees gathered at Case's sprawling campus were able to experience the city's quite pleasant late-spring weather. And if you're wondering, as many do, the name of the university dates to the 1967 merger of the Case Institute of Technology and the Western Reserve University, with "western reserve" referring to the formerly pristine and resource-rich Great Lakes region of the early 19th century.
Though I didn't see a single uninteresting presentation, I'll highlight just a few that I found especially exciting, in chronological order.
Hunter Peckham, an engineer at Case and executive director of the Cleveland Functional Electrical Stimulation Center, gave a keynote address on some of his recent studies and clinical trials in functional electrical stimulation. This work involves implanting electrodes to deliver electric pulses to the muscles of paralyzed people. Control of a limb is a difficult problem because there are more degrees of freedom (joints and muscles) than there are constraints (desired limb positions). Peckham first simulated the mechanics of the musculoskeletal system to decide which muscles were strictly necessary for a desired range of arm motions, then examined the neural architecture to determine which points should be stimulated to differentially activate those muscles in a useful way. Two patients have received these radio-controlled implants, which are activated by coupling stimulation to recorded activity in muscles which are still under voluntary control. For example, the patient may still be able to twitch his or her cheek, so electrical activity in the cheek muscles would be detected and would be used to trigger stimulation of a particular subset of arm muscles. Two or three co-contracting muscle groups are sufficient for a patient to feed him- or herself, representing a major improvement in quality of life.
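The trigger logic Peckham described is conceptually simple: activity in a muscle still under voluntary control gates stimulation of a preselected group of arm muscles. A toy sketch, where the threshold, muscle names, and the idea of returning the whole co-contracting group at once are all my illustrative assumptions:

```python
# Sketch of EMG-gated functional electrical stimulation: when recorded
# activity in a voluntarily controlled muscle (e.g. a cheek twitch) crosses
# a threshold, the implant stimulates a fixed group of arm muscles.
# Threshold value and muscle names are assumptions for illustration.

EMG_THRESHOLD = 0.4  # normalized EMG amplitude that counts as a deliberate twitch
ARM_GROUP = ["biceps", "anterior_deltoid", "wrist_extensors"]

def stimulation_commands(emg_amplitude):
    """Return the muscles to stimulate during one control cycle."""
    if emg_amplitude >= EMG_THRESHOLD:
        return ARM_GROUP  # pulse the whole co-contracting group together
    return []             # below threshold: deliver no stimulation
```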
Photo: Developed at the Cleveland FES Center, an external controller sends commands to an implanted device that jolts Jennifer French's muscles into action in the correct sequence, allowing her to stand up out of her wheelchair. Read more: Neural Engineering's Image Problem (IEEE Spectrum, April 2004) Photo by Ed Macdonald
Photo: Kanzaki-Takahashi Laboratory
The second full day of talks was sponsored by Mobiligence, a research consortium consisting of engineers and biologists at several Japanese universities. Their backing brought a significant international flavor to the conference. One thought-provoking talk was given by plenary speaker Ryohei Kanzaki, of the University of Tokyo. His research team is investigating the mechanisms by which silkworm moths walk toward the source of an airborne odor. He has built a large anatomical and physiological database of uniquely identifiable neurons in the moth's brain which contribute to its ability to track an odor plume. Using optical recording techniques, Kanzaki can associate activity in these neurons with the presence of an odor. He has built a hybrid robot in order to investigate the algorithm the moth uses to localize the odor source. In this setup, the moth walks in place on top of a trackball, and the output signal of the trackball is used to control the vehicle on which the moth and trackball are sitting. Small fans waft the odor from the floor up to the moth's antennae. This moth-driven robot is capable of localizing an odor source in a manner qualitatively similar to a real moth. Experimentally altering the feedback loop by changing the sign or gain of the coupling between the moth's walking and the robot's movement (and subsequent contact with the odor plume) affects the moth/robot's ability to find the odor source.
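The experimental manipulation at the end is worth spelling out: the coupling from moth to robot is just a signed gain, and the experimenter can flip or scale it. A sketch, with variable names of my own choosing:

```python
# Sketch of the moth-on-a-trackball feedback loop Kanzaki described: the
# trackball's measured rotation drives the robot carrying the moth, and the
# experimenter can alter the sign or gain of that coupling to probe the
# moth's odor-localization algorithm. Names are illustrative assumptions.

def robot_command(trackball_forward, trackball_turn, gain=1.0, sign=1):
    """Translate the moth's walking (trackball output) into robot motion."""
    return (sign * gain * trackball_forward,  # robot forward speed
            sign * gain * trackball_turn)     # robot turning rate
```

With `gain=1.0, sign=1` the robot faithfully reproduces the moth's intended path; setting `sign=-1` inverts the loop, which is the kind of perturbation that degrades the moth/robot's ability to find the source.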
Photo: MacIver Lab
To kick off the third day, Malcolm MacIver of Northwestern University spoke about the compromises that real animals must make in order to be multifunctional. His research is on the weakly electric fish, a classic model system for the study of active sensing and neuroethology. These fish generate an AC electric field, in which they sense small perturbations caused by the presence of prey, predators, and obstacles in the muddy waters they inhabit. The black ghost knifefish species, in particular, also swims somewhat peculiarly, using a long fin on its underside to produce ribbon propulsion, allowing it to move through the water at nearly any orientation. MacIver described how a thorough simulation of the electric field and its accompanying sense organs revealed that the most effective way for the fish to explore the environment was to swim head-down at 90 degrees. However, hydrodynamic calculations demonstrated that the thrust-to-drag ratio of the ribbon fin propulsion system and the fish's body is optimized for moving directly forward or backward. So what's a fish to do? Well, it turns out that they swim straight forward (or backward) when they want to travel from place to place, but when they're hungry and hunting, they move with a 30-degree head-down tilt. This represents a compromise between drag (which increases exponentially with body angle) and increased sensory volume (which is basically linear with body angle).
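The shape of that compromise is easy to see in a toy model: a linear sensory gain minus an exponential drag cost peaks at an intermediate pitch angle. The coefficients below are made up purely to illustrate the shape of the trade-off, not fit to MacIver's data:

```python
import math

# Toy version of the knifefish pitch trade-off: sensory volume grows roughly
# linearly with head-down pitch angle, while drag grows exponentially.
# All coefficients are invented to show the optimum lands at an intermediate
# angle, neither 0 (pure efficiency) nor 90 degrees (pure sensing).

def sensory_gain(angle_deg):
    return 1.0 + 0.02 * angle_deg            # linear in angle (assumption)

def drag_cost(angle_deg):
    return 0.1 * math.exp(angle_deg / 25.0)  # exponential in angle (assumption)

def net_benefit(angle_deg):
    return sensory_gain(angle_deg) - drag_cost(angle_deg)

# Search over pitch angles from level (0) to head-down vertical (90).
best_angle = max(range(0, 91), key=net_benefit)
```

With these particular made-up constants the optimum falls around 40 degrees; the qualitative point is just that any linear benefit against an exponential cost favors a partial tilt, much like the 30-degree hunting posture the fish actually adopts.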
Photo: CMU NanoRobotics Laboratory
Next, Metin Sitti, from Carnegie Mellon, showed a spectrum of the many bio-inspired robotics projects taking shape in his lab. My favorite was the hybrid bacteria robots, which are micron-scale (more commonly called "nanoscale") capsules driven by living bacteria. Sitti gets the bacteria to attach by coating the surface of his capsule, which could potentially contain drugs or sensors for in-body use. Directing the motion of this hybrid robot is still a challenge, however. Another project was based on the water strider, an insect that can stand on water using surface tension forces. Basically the insect (and its biomimetic progeny) has extremely hydrophobic, very long legs, which are able to sum along their whole lengths the "ground reaction forces" of the water's surface tension. Because the animal (and robot) weighs only a few grams, these small forces can keep it afloat. Another strategy for walking on water is the one utilized by the basilisk lizard, which slaps its feet down on the water's surface and then picks them up again before they can sink. Sitti and his students also built a robot which can stay on top of the water using this dynamic effect. Though this is not the most efficient way to walk on water, the advantage for the animal is that it can use basically the same gait to run across water or land. The advantages for a robot are less clear, especially as their prototype is unable to walk on land at all, but it is still a neat idea.
On the following day, Auke Ijspeert, of the EPFL in Switzerland, spoke about a robot that solves the land-water duality in a different bio-inspired way. The salamander uses the same central circuitry to drive its legs during walking as during swimming. In walking, the forward and hind end of the animal oscillate from side to side in a standing wave, with the limbs stepping at the same frequency. However, the undulation of the body when the animal swims forms a traveling wave with a shorter period. To perform both functions using the same neurons and only a larger or smaller descending "drive" signal is conceptually tricky, though the animal seems to work this way. Ijspeert conjectured that when the legs are active, at lower driving signal magnitudes, they set up a standing wave in the body and override the intrinsic tendency of the body oscillator to form traveling waves. When the drive increases beyond some threshold, however, the leg pattern generators saturate and shut down, and the body assumes its traveling wave state. Ijspeert's student Alessandro Crespi brought the robot that they built to demonstrate the feasibility of these hypotheses, and it spent hours crawling and pseudo-swimming around the robot zoo.
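The saturation mechanism lends itself to a cartoon model: below a threshold drive, the limb oscillators are active and impose the standing-wave walking pattern; above it, they shut down and the body reverts to its traveling-wave swimming mode. The threshold value and mode names here are illustrative, not the published model parameters:

```python
# Cartoon of Ijspeert's drive-dependent gait switch in the salamander robot:
# one descending "drive" signal selects the gait. The saturation threshold
# below is an arbitrary illustrative value.

LIMB_SATURATION = 3.0  # drive level beyond which limb oscillators saturate and shut down

def gait(drive):
    """Return the active locomotor pattern for a given descending drive."""
    if drive <= 0:
        return "rest"
    if drive <= LIMB_SATURATION:
        return "walk"  # limbs active; they override the body's traveling wave
    return "swim"      # limbs off; body assumes its intrinsic traveling wave
```

The appeal of the model is that nothing "decides" to switch gaits: a single scalar drive, swept from low to high, carries the same circuit from resting through walking into swimming.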
Possibly the high point of the whole conference was when Mark Raibert, founder of Boston Dynamics (formerly on the faculty at MIT) gave a plenary speech about the development of their recently famous DARPA robot, the quadruped BigDog. Though the talk was during a morning session, nearly all the attendees were present. His talk was inspiring (subtitled "Holy (expletive), this might actually work") because it marks one of the first times a legged robot can effectively traverse objectively difficult terrain -- sort of like a robotic Turing test. One of the major conceptual advances that enabled the current high level of performance was predictive leg placement based on accelerometer data. While each foot is in the air, the robot calculates where it expects its center of mass to be when that foot touches the ground, and then places the foot under that point. The algorithm must also ensure that the legs don't collide, and of course there are some additional difficulties related to multiple legs being in the air when the gait is a trot or run, but the results are extremely impressive. As exciting as the videos are, Raibert doesn't feel like the robot has achieved its objectives yet. One notable area that needs improvement is in the actuators. The hydraulic actuators used have very little compliance, unlike real muscles, and their force-length characteristics are such that the robot cannot hold its own weight if it crouches down. Both of these issues are partially compensated for by the control algorithm, using fast feedback to simulate compliance and maintaining the legs in a stretched posture, but the lack of a better artificial muscle is a major obstacle to improving the robustness of this robot.
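The predictive-placement idea reduces to forward-integrating the body state over the remaining flight time of the foot. As a sketch (a constant-acceleration ballistic prediction is my simplifying assumption here, not Boston Dynamics' actual algorithm, and I've collapsed it to one dimension):

```python
# Sketch of predictive leg placement as described in Raibert's talk: while a
# foot is in flight, predict where the center of mass will be at touchdown
# (here via simple constant-acceleration extrapolation from accelerometer
# data -- my assumption) and place the foot under that point.

def predicted_com(x, v, a, t_touchdown):
    """Forward-predict the center-of-mass position at touchdown time."""
    return x + v * t_touchdown + 0.5 * a * t_touchdown ** 2

def foot_target(x, v, a, t_touchdown):
    """Aim the foot directly under the predicted center of mass."""
    return predicted_com(x, v, a, t_touchdown)
```

A real implementation also has to check the resulting targets for leg-leg collisions and handle gaits where several feet are airborne at once, which is where most of the difficulty Raibert mentioned comes in.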
On the last day, Dawn Taylor, a biomedical engineer from Case Western Reserve, showed video of a monkey feeding itself using a robotic arm controlled by electrodes implanted directly in the motor cortex of its brain. Apparently the first monkey they trained was quite capable of using the arm to reach and grasp when given artificial visual feedback, but was too frightened of the moving arm to feed itself directly. A subsequent monkey proved less sensitive, and they were able to demonstrate to their funding agency that the apparatus was capable of recovering this ability for quadriplegic patients. One very interesting part of this research was the total irrelevance of the specific brain-to-robot control mapping. In other words, even though specific parts of the brain normally activate specific motions, it didn't matter at all where the electrodes were placed within the motor cortex when controlling the artificial arm. Basically, the monkey adapted to an arbitrary mapping of brain activity to robot movement, and was quite capable of using the same neurons for very different control tasks in different contexts. This might be something akin to a person who can speak more than one language fluently -- after sufficient practice, it is no longer a process of interpreting or converting, but merely a different mapping of thoughts to sounds, which happens unconsciously. This is a useful property of the brain in real life, but the unexpected magnitude of the plasticity shown by these monkeys is also extremely helpful for doctors and engineers trying to wire up brain-machine interfaces.
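The "total irrelevance of the mapping" point can be made concrete: the decoder is just a fixed linear map from firing rates to arm velocity, and the animal learns whichever map it is handed. In this sketch the weights are random by construction, to emphasize that their specific values don't matter; the dimensions are illustrative assumptions:

```python
import random

# Sketch of an arbitrary linear brain-to-robot decoder: firing rates from
# implanted electrodes map through fixed (here, deliberately random) weights
# to a 3-D arm velocity command. The monkey's motor cortex adapts to whatever
# mapping it is given. Neuron count and weight range are assumptions.

random.seed(0)
N_NEURONS = 8
# One arbitrary weight vector per output dimension (x, y, z velocity).
WEIGHTS = [[random.uniform(-1, 1) for _ in range(N_NEURONS)] for _ in range(3)]

def decode(firing_rates):
    """Map a vector of firing rates to a 3-D velocity command."""
    return [sum(w * r for w, r in zip(row, firing_rates)) for row in WEIGHTS]
```

The remarkable part reported in the talk is on the biological side of this loop: with practice, the animal drives `decode`'s output where it wants regardless of how `WEIGHTS` was chosen.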