Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’re also going to start posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
2015 Robot Film Fest – November 7, 2015 – Pittsburgh, Pa., USA
Asian Robotics Week – November 12-13, 2015 – Singapore
AAAI Fall Symposia – November 12-14, 2015 – Arlington, Va., USA
SF Bay Area Robotics Group Meetup – November 18, 2015 – San Francisco, Calif., USA
ROBOT 2015 – Iberian Robotics Conference – November 19-21, 2015 – Lisbon, Portugal
Robotics Expo – November 20-22, 2015 – Moscow, Russia
World Robotics Conference 2015 – November 23-25, 2015 – Beijing, China
Dronetech – November 26, 2015 – Bristol, U.K.
IREX 2015 – December 2-5, 2015 – Tokyo, Japan
RoboUniverse Shanghai – December 8-10, 2015 – Shanghai, China
RoboUniverse San Diego – December 14-16, 2015 – San Diego, Calif., USA
Let us know if you have suggestions for next week, and enjoy today's videos.
Simone Giertz decided that she wanted a robot to make (and feed) her breakfast. So, she went out and bought a uARM robot arm for US $339, put some work into it, and ended up with this unqualified success:
You can read the whole story of how she made this work(ish) at the link below, but I particularly liked this bit:
After a bit of research I found that the uArm has a learning mode. Basically you press a button, move it around, and it saves the sequence and replays it. What!? How cool is that?
Just to reiterate: I bought a robot arm, took it out of the package, plugged it in and could make it move any way I wanted without writing a line of code. I’m not sure if it’s possible to make programming robot arms any easier than that. Unless you add a mind reading function or something equally sci-fi.
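The teach-and-repeat mode she describes (press a button, move the arm, then replay the motion) boils down to sampling poses while the arm is compliant and driving back through them later. Here is a minimal sketch of that idea; it is a hypothetical illustration, not the uArm's actual API, and `read_pose` and `move_to` are stand-in names for whatever a real arm exposes:

```python
import time

class TeachReplayArm:
    """Minimal teach-and-repeat sketch: record poses while the user
    moves the arm by hand, then replay the recorded sequence.
    read_pose() and move_to() are placeholders for a real arm's API."""

    def __init__(self):
        self.sequence = []

    def read_pose(self):
        # Placeholder: a real arm would return its joint angles here.
        return (0.0, 0.0, 0.0)

    def move_to(self, pose):
        # Placeholder: a real arm would command its servos here.
        pass

    def record(self, duration_s, rate_hz=20):
        """Sample poses at a fixed rate while the user moves the arm."""
        self.sequence = []
        for _ in range(int(duration_s * rate_hz)):
            self.sequence.append(self.read_pose())
            time.sleep(1.0 / rate_hz)

    def replay(self):
        """Drive the arm back through the recorded poses in order."""
        for pose in self.sequence:
            self.move_to(pose)
```

The appeal is exactly what Giertz points out: the "programming" is just moving the arm, so no code is required from the user at all.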
[ Motherboard ]
This. Is. Amazing.
BitDrones, developed at Queen’s University’s Human Media Lab, are “interactive self-levitating programmable matter” consisting of payload-carrying nano-quadrotors operating in a motion capture space.
“PixelDrones” are equipped with one LED and a small dot matrix display. “ShapeDrones” are augmented with a light-weight mesh and a 3D printed geometric frame, and serve as building blocks for complex 3D models. “DisplayDrones” are fitted with a curved flexible high resolution touchscreen, a forward-facing video camera and Android smartphone board. All three BitDrone types are equipped with reflective markers, allowing them to be individually tracked and positioned in real time via motion capture technology. The system also tracks the user’s hand motion and touch, allowing users to manipulate the voxels in space.
The idea is that you get something that’s similar to a virtual reality experience, except there’s nothing virtual about it, since you don’t need a headset and the drones are physical objects that you can interact with. While it’s just a few drones for now, the system will be scaled up to handle “thousands of drones no more than a half inch in size, allowing users to render more seamless, high resolution programmable matter.”
[ Human Media Lab ]
This is simple and genius:
It launches on Indiegogo November 9, and the website doesn’t say how much it’s going to cost, which usually means that it’s going to cost more than it seems like it should.
The egress of Europe’s ExoMars 2018 rover off its lander will be the second most stressful moment of the mission, after the Mars landing itself. To build up experience with the problem, a half-scale rover on a mock-up lander was placed in the outdoor 80 x 50 m ‘Mars Yard’ of French space agency CNES for a two-day test campaign on 28-29 October.
A total of five egress tests were performed: four ended in success and one in failure – the rover descended at too steep an angle and began to topple off. In real life the mission would have ended there, but in this case the rover was safely caught by helping hands. The incident demonstrated the importance of such simulations, which also allowed practical testing of various software tools developed over years of research.
[ Exomars ]
It’s videos like these that make me want hugging robots to be my life.
Baymax Science: We are testing whether humans prefer the correct robot form or the correct robot motion. Subjects chose to hug either a distorted Baymax (Baymaxter: Baymax + Baxter) that could move, or a correctly shaped static Baymax. Subjects overwhelmingly chose the hugging Baymax over the static Baymax.
[ CMU ]
Back in the day, all we had to play with were buttons and bits of string:
A button spinner, a traditional hand toy, consists of a button and a loop of thread. When the button spinner is played, slow, linear finger motion is converted into high-speed rotation of the button via the twisted thread. The goal of the robotic button spinner is to regulate the rotation angle and reduce the position error of the rotor along the thread during its high-speed motion. By using a high-speed robotic system, in which control of the robot hand and sensing of the angular position are both conducted every 1 ms, the control error in this non-linear system can be compensated without any estimation. The angle can be regulated to within a fraction of a turn, and the position error along the thread is only about 5 pixels (0.8 mm). The core structure of the button spinner can be applied to other high-speed rotating systems such as shakers, generators, and other machine controls.
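The 1 ms sense-and-control cycle described above is, structurally, a fixed-period control loop. The sketch below shows only that timing skeleton, with caller-supplied `sense`, `control`, and `actuate` callables standing in for the real high-speed vision and robot hand; it is an illustration of the loop structure, not the researchers' implementation:

```python
import time

def run_control_loop(sense, control, actuate, period_s=0.001, steps=1000):
    """Fixed-period control loop sketch (1 ms period by default).
    sense/control/actuate are caller-supplied callables standing in
    for real sensing and actuation hardware."""
    next_tick = time.perf_counter()
    for _ in range(steps):
        angle = sense()           # e.g. rotor angle from high-speed vision
        command = control(angle)  # e.g. hand motion correcting the error
        actuate(command)
        # Schedule the next tick relative to the start, so timing
        # errors don't accumulate from iteration to iteration.
        next_tick += period_s
        delay = next_tick - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
```

In practice a 1 kHz loop like the one in the video would run on a real-time system rather than relying on `time.sleep`, but the sense-compute-actuate-wait structure is the same.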
The ABB IRB 8700 is ABB’s newest, biggest, fastest robot arm. It has a payload of 800 kg extended (or 1,000 kg with its wrist down), and is 25% faster than any other industrial robot with a similar payload.
[ ABB ]
Not to be outdone by ABB’s massive new industrial arm, KUKA is also announcing a new robot arm this week:
Okay, maybe KUKA was a little bit outdone there.
[ KUKA ]
While nature often seems to do an excellent job starting fires all by itself, sometimes humans have to help out by setting controlled burns for conservation purposes. It’s dangerous work, but drones have the potential to make it easier by dropping EXPLODING PING PONG BALLS from the air:
[ UNL ]
Pendulum walkers are extraordinarily efficient, as long as you don’t challenge them with anything. This walker, from Chongqing University of Posts and Telecommunications in China, has just set a record by walking for 340,000 continuous steps over 54 hours covering a distance of 134.03 km using just 0.8 kWh of energy. Hold on to your seat, and don’t blink:
This shatters the previous record, held by Cornell’s Ranger robot, of 65 km in 30 hours using 0.5 kWh. Game on.
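The quoted figures make the efficiency gap easy to check; converting both records to watt-hours per kilometer:

```python
# Energy per kilometer, from the figures quoted above.
def wh_per_km(energy_kwh, distance_km):
    return energy_kwh * 1000 / distance_km

walker = wh_per_km(0.8, 134.03)  # Chongqing walker: ~5.97 Wh/km
ranger = wh_per_km(0.5, 65.0)    # Cornell Ranger:   ~7.69 Wh/km
```

So the new walker not only more than doubled the distance, it did so using roughly 22 percent less energy per kilometer.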
Sawyer’s initial setup takes all of 5 minutes, which isn’t nearly enough time to savor the experience of unboxing the fancy new robot you just bought:
“Do not grab the robot by its head.” Always a good tip.
[ Rethink Robotics ]
Halloween was inconveniently on a Saturday this year, so we’re a little late with this video from Dash Robotics:
Meanwhile, look how easy it is to program:
They’re just a few hundred bucks away from their goal on Kickstarter, so now is the time to score a robot for yourself.
[ Kamigami ]
Swiss Robotic Industry Day 2015 at EPFL:
[ EPFL ]
Thalmic Labs’ magical Myo wristband is one of the simplest (and coolest) ways of controlling robots:
Pepper: the most unnecessarily complicated drum machine ever?
Wooo! Play Free Bird!
[ Pepperhythm ]
We all loved the 1980s’ sci-fi visions of the future — the way drones zipped around, blending in as a natural part of everyday life. What if we told you the predictions made in these sci-fi classics are now reality? With the DJI Phantom X, we turn wide-eyed dreams of future possibilities into fact with multi-angle shooting, AI, obstacle avoidance and free-flight object tracking.
It’s going to take a lot of work to make this possible, but it certainly is possible, and at some point, it may even happen.
[ DJI ]
These Life After Gravity episodes are not getting any less up their own behinds, but they were filmed completely with drones, so that’s cool?
And the obligatory making of:
[ 3D Robotics ]
Chances are, you either are or know a cyborg: a person who is aided by embedded technology, such as artificial limbs or pacemakers. Director Paul Giamatti delves into the science that is taking us beyond replacement parts and into a new realm that is changing the nature of the human body and mind. Fusing biology and technology makes us stronger, faster and smarter. But as the natural and man-made worlds merge, will we become more than human?
The full episode premieres on the National Geographic Channel this Sunday at 9pm ET/PT.
[ Breakthrough ]
Robots in Depth is a new video series featuring interviews with researchers, entrepreneurs, VC investors, and policy makers in robotics, hosted by Per Sjöborg. In this interview, Michael Rubenstein describes how he has taken his robotics research from theory into practice by building cheap and small robots — 1024 of them to be exact.
[ Robots in Depth ]
A lecture on “Paths to Human-level AI” by Murray Shanahan at Aldebaran Paris:
Murray Shanahan is Professor of Cognitive Robotics in the Dept. of Computing at Imperial College London, where he heads the Neurodynamics Group. His publications span artificial intelligence, robotics, logic, dynamical systems, computational neuroscience, and philosophy of mind. He was scientific advisor to the film Ex Machina, which was partly inspired by his book “Embodiment and the Inner Life” (OUP, 2010). In this talk he describes what he sees as the main obstacles to achieving human-level artificial intelligence given the current state of machine learning, and suggests a number of ways these obstacles might be overcome. These include speculations on a) Geoff Hinton's notion of thought vectors, b) hybrid symbolic-neural approaches, and c) cognitive architectures inspired by Bernard Baars's global workspace theory.
[ Aldebaran ]
The latest Robotics Institute Seminar from CMU:
Ryan Eustice, Associate Professor, University of Michigan: Self-driving test vehicles have become a reality on roadways, and there is an ever-present push toward making them a consumer product in the not-so-distant future. In this talk, I will give an overview of some of our ongoing work (in collaboration with Ford Motor Company) in full-scale automated driving. In particular, we’ll look at some of our successes in high-definition map building and precision localization, including our recent work in cross-modality localization using vision within a priori LIDAR maps. We’ll also review our work in multipolicy decision making in dynamic environments and discuss our new, unique Mcity test facility for connected and automated driving.
[ CMU RI Seminar ]