Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
Nîmes Robotics Festival – May 17-19, 2019 – Nîmes, France
Isolierband Robotics Competition – May 19, 2019 – Israel
ICRA 2019 – May 20-24, 2019 – Montreal, Canada
URC 2019 – May 30-June 1, 2019 – Hanksville, Utah
2nd Annual Robotics Summit & Expo – June 4-6, 2019 – Boston, Mass., USA
ICUAS 2019 – June 11-14, 2019 – Atlanta, GA, USA
Energy Drone Coalition Summit – June 12-13, 2019 – Woodlands, Texas, USA
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, UK
Let us know if you have suggestions for next week, and enjoy today's videos.
Soft-bubble, “a highly compliant dense geometry tactile sensor for robot manipulation,” is a sort of combined gripper and 3D camera that uses a soft membrane to grasp and image objects at the same time.
HAPS Mobile, a SoftBank-backed company, is developing a high-altitude pseudo satellite: a massive, solar-powered, long endurance drone that acts like a much cheaper and more versatile satellite over a smaller area.
[ HAPS Mobile ]
Some solid uneven terrain traversal from HUBO.
[ HUBO Lab ]
Cassie is getting better at walking, thanks to Caltech. It’s not 100 percent quite yet, but that’s what makes these videos worth watching until the end, right?
[ AMBER Lab ]
To be officially presented at CHI 2019 is “Slappyfications: Towards Ubiquitous Physical and Embodied Notifications.”
There’s an actual paper, sort of, which does have a picture of a little robotic version of the Slappyficationator, which is why I’m allowed to include it in Video Friday. And in case you were worried, the authors state: “We want to emphasize that no participant nor actor within the video or study was harmed. All of them agreed to participate by their own will or were forced to participate by slapping them even harder.”
[ Paper ]
We wrote about this Transformable HOvering Rotorcraft in 2017, but here's a little teaser of an update:
[ AIRLAB ]
I love this robot.
[ YouTube ]
The University Rover Challenge (URC) is the world's premier robotics competition for college students. Held annually in the desert of southern Utah in the United States, URC challenges student teams to design and build the next generation of Mars rovers that will one day work alongside astronauts exploring the Red Planet.
[ URC 2019 ]
On Saturday, March 16th, nine international teams entered into battle. Their quest: to bring down the rival Queen drone. All in the name of counter-drone research. Watch our recap for a taste of what DroneClash is all about.
The basic principle is simple: take down everything that isn't yours. Teams battle against each other in a big 200-square-meter arena. They can use as many drones as they like, but each team is allowed only two FPV video streams to control them; in practice, this means only two drones can be teleoperated per team at a time. However, teams may switch between drones, create autonomous drones, or anything in between. So long as it flies.
[ DroneClash ]
Wheels can be used for more than just mobility. Mars rovers have used them to dig trenches before, but it was sort of an accident, so CMU is figuring out how to do it properly.
[ CMU ]
There’s a lot of tech crammed into Misty.
[ Misty Robotics ]
Close your eyes and imagine a world without safety fences: A world in which humans work side-by-side with robots. In this world, robots directly collaborate with people, becoming a crucial part of the team. Alongside their co-workers, these collaborative robots take over tedious, repetitive tasks, lifting up to 35 kg, thus maintaining your workers' health and safety while automating entire assembly lines.
[ FANUC ]
One of the main challenges with embodying an agent is annotating how and when motions can be played and composed together in real time, without any visual artifacts. The inherent problem is to do so, for a large number of motions, without introducing mistakes in the annotation. To our knowledge, there is no automatic method that can process animations and automatically label actions and the compatibility between them. In this paper, we address this problem with a compact taxonomy of chit-chat behaviors that we can utilize to simplify and partially automate the graph authoring process. We measured the time required to label the actions of an embodiment using our simple interface, compared to the standard state machine interface in Unreal Engine, and found that our approach is 7 times faster. We believe that our labeling approach could be a path to automated labeling: once a subset of motions is labeled (using our interface), we could learn a predictor that attributes labels to new clips, allowing virtual agent embodiments to really scale up.
[ Disney Research ]
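To make the Disney abstract's core idea concrete, here is a minimal sketch of a motion graph where each clip gets a single label from a small behavior taxonomy, and clip-to-clip transitions are derived from label-level compatibility rules. All clip names, labels, and rules below are invented for illustration; the paper's actual taxonomy and interface are not described in enough detail here to reproduce.

```python
# Hypothetical sketch: taxonomy-labeled motion clips with label-level
# compatibility rules, from which a clip-level transition graph is derived.
# Every name and rule here is an invented example, not Disney's actual data.

# Each animation clip is annotated with one label from a behavior taxonomy.
clips = {
    "idle_breathe": "idle",
    "nod_short":    "acknowledge",
    "wave_hello":   "greet",
    "lean_in":      "listen",
}

# Which behavior classes may follow which (example rules): annotating a few
# label pairs replaces annotating every clip pair individually.
compatible = {
    ("idle", "greet"), ("idle", "acknowledge"),
    ("greet", "idle"), ("acknowledge", "listen"),
    ("listen", "idle"),
}

def build_graph(clips, compatible):
    """Expand label-level rules into clip-level transition edges,
    so per-clip authoring effort is just one taxonomy label per clip."""
    return {
        a: [b for b in clips
            if a != b and (clips[a], clips[b]) in compatible]
        for a in clips
    }

graph = build_graph(clips, compatible)
print(graph["idle_breathe"])  # clips allowed to follow the idle animation
```

The point of the sketch is the scaling argument from the abstract: labeling N clips with K taxonomy labels needs N annotations plus O(K²) rules, instead of O(N²) hand-authored state machine transitions.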