Creating a successful robot company based around providing commercial services is not easy, although as of just the last few years, advances in robotics technology have at least made it possible. Companies like Savioke have shown that robotics has reached a point where autonomous platforms can operate in semi-structured environments, doing useful tasks reliably and cost effectively enough to make a compelling business case.
Luvozo, a startup founded in 2013 and based in College Park, Md., is bringing autonomous robots to a semi-structured environment with enormous potential: skilled nursing facilities for seniors. They’re introducing a “robot concierge” called SAM, designed to “provide frequent check-ins and non-medical care for residents in long-term care settings” through autonomous navigation, telepresence, and an innovative fall hazard detection system. The potential market here is huge, and to find out more, we stopped by Luvozo and spoke with CEO and co-founder David Pietrocola.
Airships, which are distinct from blimps by being much more rigid and sounding much less silly, are one of those unusual technologies that have been undergoing a resurgence recently after falling out of favor half a century ago. Airships have the potential to be a very practical and cost-effective way to move massive amounts of stuff from one place to another, especially if that other place is low on infrastructure and has a reasonable amount of patience.
As part of the construction and ongoing maintenance of an airship, it’s important to inspect the envelope (the chubby bit that holds all the helium) for tiny holes that, over time, can have a significant impact on the airship’s ability to fly. The traditional way to do this involves humans, and like most things involving humans, it’s an expensive and time-consuming process. To help out, Lockheed Martin has developed “Self-Propelled Instruments for Damage Evaluation and Repair,” or SPIDERs, which are teams of robots that can inspect airship skins for holes, while also representing one of the less ludicrous robot acronyms that we’ve seen recently.
For details on SPIDER, we spoke with hybrid airship engineer Ben Szpak about where the idea came from, how the robot works, and what their plans are for the future.
Video Friday is your weekly selection of awesome robotics videos, collected by your soft-bodied Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next two months; here’s what we have so far (send us your events!):
A few weeks ago, the very first Winograd Schema Challenge took place at the International Joint Conference on Artificial Intelligence in New York City. We spoke with Charlie Ortiz, director of the Laboratory for AI and Natural Language Processing at Nuance Communications and one of the organizers of the Winograd Schema Challenge, about how things went, why the challenge is important, and what it means for the future of AI.
Whether or not it’s a realistic or practical or good idea, urban commercial drone delivery is grinding remorselessly toward a thing that is going to happen. For many companies, “grind” is the right word, especially if they’re trying to do research and development in the United States, where regulations tend to be overly cumbersome and inflexible. To help move things along a bit, Amazon has decided to take its next phase of delivery drone testing to the United Kingdom.
In a paper recently published in the journal Bioinspiration & Biomimetics, David Zarrouk of Ben-Gurion University of the Negev describes his latest innovative robot: SAW, or Single Actuator Wave-like robot, “a novel bioinspired robot which can move forward or backward by producing a continuously advancing wave.” Basically, SAW moves around by doing the worm nonstop. Funky.
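The traveling-wave gait is easy to picture with a toy model: sample the body shape y = A·sin(k(x − vt)) at a few points along the robot and watch the crest march down the body as time advances. This is only an illustrative sketch of the wave kinematics, not Zarrouk's actual mechanism (the paper describes generating the wave mechanically from a single actuator); all parameters below are made up for illustration.

```python
import math

def wave_profile(t, amplitude=1.0, wavelength=4.0, speed=1.0, n_points=9):
    """Sample the body shape y = A*sin(2*pi*(x - v*t)/L) along the robot."""
    k = 2 * math.pi / wavelength
    return [amplitude * math.sin(k * (x - speed * t)) for x in range(n_points)]

def crest_position(profile):
    """Index of the wave crest (maximum height) along the body."""
    return max(range(len(profile)), key=lambda i: profile[i])

# The crest moves steadily toward larger x as time advances --
# that continuously advancing wave is what pushes the robot along.
positions = [crest_position(wave_profile(t)) for t in (0.0, 1.0, 2.0)]
```

Because a single rotating input can regenerate this entire shape, only one motor is needed, which is the whole point of the design.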
There’s a small but growing handful of robotics companies trying to make it in the warehouse market with systems that work with humans on order fulfillment. Generally, we’re talking about clever wheeled platforms that can autonomously deliver goods from one place to another, while humans continue to do the most challenging part: picking items off of shelves. There’s a lot of value here, since using robots to move stuff frees up humans to spend more of their time picking.
Ideally, however, you’d have the robot doing the picking as well, but this is a very difficult problem in terms of sensing, motion planning, and manipulation. And getting a robot to pick reliably at a speed that could make it a viable human replacement is more difficult still.
IAM Robotics, a startup based in Pittsburgh, Pa., is one of the first companies to take on the picking problem on a commercial level. Founded in 2012, they’ve developed an autonomous mobile picking robot called Swift that consists of a wheeled base and a Fanuc arm with a 15-lb payload and suction gripper that can reach 18 inches back into shelves. A height-adjustable carriage can access shelves between 3 and 85 inches, and an autonomously swappable tote carries up to 50 pounds of stuff. According to the company, the robot can autonomously navigate around warehouses and is “capable of picking at human-level speeds” of 200 picks per hour.
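Those numbers invite a quick back-of-envelope check: at the quoted 200 picks per hour, the robot has 18 seconds per pick, and the 50-pound tote fills fast. The average item weight below is a made-up assumption for illustration, not an IAM Robotics figure.

```python
# Figures quoted by the company.
PICKS_PER_HOUR = 200
TOTE_CAPACITY_LB = 50

# Hypothetical average item weight, purely for illustration.
avg_item_weight_lb = 2.5

seconds_per_pick = 3600 / PICKS_PER_HOUR                  # 18.0 s per pick
items_per_tote = TOTE_CAPACITY_LB // avg_item_weight_lb   # 20.0 items
minutes_to_fill_tote = items_per_tote * seconds_per_pick / 60  # 6.0 minutes
```

With heavier items the tote fills in even fewer picks, so a tote swap that doesn't require a human matters a great deal for keeping the robot productive.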
We spoke with IAM Robotics founder and CEO Tom Galluzzo to find out how they’re making this happen.
There’s a reason why you don’t see rotary motors or joints in nature: at anything above the molecular scale, too much stuff has to be permanently attached to too much other stuff for any of it to be freely rotating in the way a mechanical wheel or axle is. The more bioinspiration you want to work into a robot, the more of an issue this becomes, which is why it’s particularly impressive that researchers at Rutgers University in New Brunswick, N.J., have managed to put four silicone-based wheels with air-powered motors inside of them on a robot that’s as soft as a Crocs shoe.
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
The simple task of picking something up is not as easy as it seems. Not for a robot, at least. Roboticists aim to develop a robot that can pick up anything—but today most robots perform “blind grasping,” where they’re dedicated to picking up an object from the same location every time. If anything changes, such as the shape, texture, or location of the object, the robot won’t know how to respond, and the grasp attempt will most likely fail.
Robots are still a long way off from being able to grasp any object perfectly on their first attempt. Why do grasping tasks pose such a difficult problem? Well, when people try to grasp something they use a combination of senses, the primary ones being visual and tactile. But so far, most attempts at solving the grasping problem have focused on using vision alone.
This approach is unlikely to give results that fully match human capabilities, because although vision is important for grasping tasks (such as for aiming at the right object), vision simply cannot tell you everything you need to know about grasping. Consider how Steven Pinker describes all the things the human sense of touch accomplishes: “Think of lifting a milk carton. Too loose a grasp, and you drop it; too tight, and you crush it; and with some gentle rocking, you can even use the tugging on your fingertips as a gauge of how much milk is inside!” he writes in How the Mind Works. Because robots lack these sensing capabilities, they still lag far behind humans when it comes to even the simplest pick-and-place tasks.
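Pinker's milk-carton example maps directly onto a simple control idea: grip just hard enough that the object stops slipping. The sketch below is a toy version of that strategy under the assumption of a binary slip signal; real tactile controllers work with continuous slip and force estimates, and every name here is illustrative.

```python
def regulate_grip(initial_force, slip_sensor, force_step=0.5, max_force=10.0):
    """Toy slip-based grip controller: tighten only while the tactile
    sensor reports slip, mimicking the 'not too loose, not too tight'
    strategy described above. `slip_sensor` is a callable returning True
    while the object is slipping at the current force level."""
    force = initial_force
    while slip_sensor(force) and force < max_force:
        force = min(force + force_step, max_force)
    return force

# Pretend the object stops slipping once grip force reaches 3.0 units.
final = regulate_grip(1.0, lambda f: f < 3.0)  # -> 3.0
```

The point of the sketch is that the stopping condition comes from touch, not vision: no camera can tell the controller when to stop tightening, which is exactly the gap the article is describing.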
As a researcher leading the haptic and mechatronics group at the École de Technologie Supérieure’s Control and Robotics (CoRo) Lab in Montreal, Canada, and as co-founder of Robotiq, a robotics company based in Québec City, I’ve long been tracking the most significant developments in grasping methods. I’m now convinced that the current focus on robotic vision is unlikely to enable perfect grasping. In addition to vision, the future of robotic grasping requires something else: tactile intelligence.