Your weekly selection of awesome robot videos
Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today's videos!
Happy Thanksgiving, for those who celebrate it. Now spend 10 minutes watching a telepresence robot assemble a turkey sandwich.
[ Sanctuary ]
Ayato Kanada, an assistant professor at Kyushu University in Japan, wrote in to share "the world's simplest omnidirectional mobile robot."
We propose a palm-sized omnidirectional mobile robot with two torus wheels. A single torus wheel is made of an elastic elongated coil spring in which the two ends of the coil are connected to each other, and is driven by a piezoelectric actuator (stator) that can generate 2-degree-of-freedom (axial and angular) motions. The stator converts its thrust force and torque into longitudinal and meridian motions of the torus wheel, respectively, making the torus work as an omnidirectional wheel on a plane.
[ Paper ]
This work, entitled "Virtually turning robotic manipulators into worn devices: opening new horizons for wearable assistive robotics," proposes a novel hybrid system using a virtually worn robotic arm in augmented reality and a real robotic manipulator servoed on that virtual representation. We aim to create the illusion of wearing a robotic system while its weight is fully offloaded. We believe that this approach could offer a solution to the critical challenge of weight and discomfort caused by robotic sensorimotor extensions (such as supernumerary robotic limbs (SRL), prostheses, or handheld tools), and open new horizons for the development of wearable robotics.
[ Paper ]
Engineers at Georgia Tech are the first to study the mechanics of springtails, which leap in the water to avoid predators. The researchers learned how the tiny hexapods control their jump, self-right in midair, and land on their feet in the blink of an eye. The team used the findings to build penny-sized jumping robots.
[ Georgia Tech ]
The European Space Agency (ESA) and the European Space Resources Innovation Centre (ESRIC) have asked European space industries and research institutions to develop innovative technologies for the exploration of resources on the Moon, in the framework of the ESA-ESRIC Space Resources Challenge. As part of the challenge, teams of engineers have developed vehicles capable of prospecting for resources in a test bed simulating the Moon's shaded polar regions. From 5 to 9 September 2022, the final of the ESA-ESRIC Space Resources Challenge took place at the Rockhal in Esch-sur-Alzette. On this occasion, lunar rover prototypes competed on a 1,800 m² 'lunar' terrain. The winning team will have the opportunity to have their technology implemented on the Moon.
[ ESA ]
If only cobots were as easy to use as this video from Kuka makes it seem.
The Kuka website doesn't say how much this thing costs, which means it's almost certainly not something that you impulse buy.
[ Kuka ]
We present the tensegrity aerial vehicle, a design of collision-resilient rotor robots with icosahedron tensegrity structures. With collision resilience and re-orientation ability, the tensegrity aerial vehicles can operate in cluttered environments without complex collision-avoidance strategies. These capabilities are validated by a test of an experimental tensegrity aerial vehicle operating with only onboard inertial sensors in a previously-unknown forest.
[ HiPeR Lab ]
The robotics research group Brubotics and the polymer science and physical chemistry group FYSC of the University of Brussels have jointly developed self-healing materials that can be scratched, punctured, or completely cut through and then heal themselves, either with applied heat or even at room temperature.
[ Brubotics ]
Apparently, the World Cup needs more drone footage, because this is kinda neat.
[ DJI ]
Researchers at MIT's Center for Bits and Atoms have made significant progress toward creating robots that could build nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots.
[ MIT ]
Researchers from North Carolina State University have developed a fast and efficient soft robotic swimmer whose motion resembles a human swimmer's butterfly stroke. It achieves a high average swimming speed of 3.74 body lengths per second, nearly five times faster than the fastest comparable soft swimmers, along with high power efficiency and a low cost of energy.
[ NC State ]
High-extension, lightweight robot manipulators are easier to transport and can reach substantially farther than traditional serial-chain manipulators, making them well suited to sensing and physical interaction in remote and/or constrained environments. We propose a novel planar 3-degree-of-freedom manipulator that achieves low weight and high extension through the use of a pair of spooling bistable tapes, commonly used in self-retracting tape measures, which are pinched together to form a reconfigurable revolute joint.
[ Charm Lab ]
[ River Lab ]
This video may encourage you to buy a drone. Or a snowmobile.
[ Skydio ]
Moxie is getting an update for the holidays!
[ Embodied ]
Robotics professor Henny Admoni answers the internet's burning questions about robots! How do you program a personality? Can robots pick up a single M&M? Why do we keep making humanoid robots? What is Elon Musk's goal for the Tesla Optimus robot? Will robots take over my job writing video descriptions...I mean, um, all our jobs? Henny answers all these questions and much more.
[ CMU ]
This GRASP on Robotics talk is from Julie Adams at Oregon State University, on “Towards Adaptive Human-Robot Teams: Workload Estimation.”
The ability for robots, be it a single robot, multiple robots or a robot swarm, to adapt to the humans with which they are teamed requires algorithms that allow robots to detect human performance in real time. The multi-dimensional workload algorithm incorporates physiological metrics to estimate overall workload and its components (i.e., cognitive, speech, auditory, visual and physical). The algorithm is sensitive to changes in a human’s individual workload components and overall workload across domains, human-robot teaming relationships (i.e., supervisory, peer-based), and individual differences. The algorithm has also been demonstrated to detect shifts in workload in real-time in order to adapt the robot’s interaction with the human and autonomously change task responsibilities when the human’s workload is over- or underloaded. Recently, the algorithm was used to post-hoc analyze the resulting workload for a single human deploying a heterogeneous robot swarm in an urban environment. Current efforts are focusing on predicting the human’s future workload, recognizing the human’s current tasks, and estimating workload for previously unseen tasks.
[ UPenn ]
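The workload algorithm described in Adams' abstract above can be sketched in a few lines: fuse normalized physiological metrics into per-channel workload components and an overall score, then adapt tasking when that score drifts out of range. This is a hypothetical illustration; the metric names and thresholds below are invented, not the actual model from her research.

```python
# Hypothetical sketch of multi-dimensional workload estimation.
# Metric names and thresholds are invented for illustration; they are
# not the actual algorithm from Adams' research.

COMPONENT_METRICS = {
    "cognitive": ["heart_rate_variability", "pupil_diameter"],
    "speech":    ["speech_rate", "voice_intensity"],
    "auditory":  ["noise_exposure"],
    "visual":    ["gaze_entropy"],
    "physical":  ["accelerometer_activity"],
}

def estimate_workload(metrics):
    """Average each component's normalized (0-1) metrics, then average
    the components into an overall workload score."""
    components = {
        name: sum(metrics[m] for m in ms) / len(ms)
        for name, ms in COMPONENT_METRICS.items()
    }
    overall = sum(components.values()) / len(components)
    return components, overall

def adapt_tasking(overall, low=0.3, high=0.8):
    """Reassign task responsibilities when the human is under- or overloaded."""
    if overall > high:
        return "shift tasks to the robots"
    if overall < low:
        return "shift tasks to the human"
    return "no change"
```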
Neural rendering harnesses machine learning to paint pixels
Matthew S. Smith is a freelance consumer-tech journalist. An avid gamer, he is a former staff editor at Digital Trends and is particularly fond of wearables, e-bikes, all things smartphone, and CES, which he has attended every year since 2009.
On 20 September, Nvidia’s Vice President of Applied Deep Learning, Bryan Catanzaro, took to Twitter with a bold claim: In certain GPU-heavy games, like the classic first-person puzzle-platformer Portal, seven out of eight pixels on the screen are generated by a new machine-learning algorithm. That’s enough, he said, to accelerate rendering by up to 5x.
This impressive feat is currently limited to a few dozen 3D games, but it’s a hint at the gains neural rendering will soon deliver. The technique will unlock new potential in everyday consumer electronics.
Catanzaro’s claim is made possible by DLSS 3, the latest version of Nvidia’s DLSS (Deep Learning Super Sampling). It combines AI-powered image upscaling with a new feature exclusive to DLSS 3: optical multi-frame generation. Sequential frames are combined with an optical flow field that predicts changes between frames, and DLSS 3 then slots unique, AI-generated frames between traditionally rendered frames.
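The pixel arithmetic behind that seven-out-of-eight figure is easy to reconstruct: in performance mode at 4K, the game is rendered at quarter resolution and upscaled, and frame generation then synthesizes every other output frame entirely. A back-of-the-envelope check (a simplified model, not Nvidia's implementation):

```python
# Back-of-the-envelope accounting for DLSS 3 performance mode at 4K.
# A simplified model, not Nvidia's implementation.
native = 3840 * 2160        # pixels in one 4K output frame
rendered = native // 4      # performance mode renders at 1920 x 1080

# Over each pair of output frames, one frame is upscaled from the
# rendered pixels and the other is generated entirely by the network.
traditional = rendered      # traditionally rendered pixels per pair
output = 2 * native         # output pixels per pair
print(1 - traditional / output)   # 0.875: seven of every eight pixels
```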
“When you’re playing with DLSS super resolution on performance mode in 4K, seven out of every eight pixels are being run through a neural network,” says Catanzaro. “I think that’s one of the reasons why you see such a great speed-up. In that mode, in games that are GPU-heavy like Portal RTX […] seven out of every eight pixels are being generated by AI, and as a result we’re 530 percent faster.”
This example, which references testing by the 3D graphics publication and YouTube channel Digital Foundry, is a best-case scenario, but results in other tests remain impressive. Most show DLSS 3 delivering a two- to three-times performance gain over purely traditional rendering at 4K resolution. And while Nvidia leads the pack, it has competitors. Intel offers XeSS (Xe Super Sampling), an AI-powered upscaler. AMD’s RDNA 3 graphics architecture includes a pair of AI accelerators in each compute unit, though it’s not yet clear how the company will use them.
Microsoft Flight Simulator | NVIDIA DLSS 3 - Exclusive First Look [YouTube]
Games have led the wave of neural rendering because they’re well suited to machine-learning techniques. “That problem there, where you look at little patches of an image and try to guess what’s missing, is just a really good fit for machine learning,” says Jon Barron, senior staff researcher at Google. The similarity between frames, along with frame rates high enough to obscure minor errors in motion, plays to machine learning’s strengths.
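Barron's patch framing maps directly onto how convolutional super-resolution networks are built. Here is a minimal single-frame upscaler sketch in PyTorch; it is a generic illustration, far simpler than DLSS or XeSS, which also ingest motion vectors and history from previous frames:

```python
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Minimal 2x super-resolution net: convolutions look at local
    patches of the low-res frame and predict the missing detail.
    A generic illustration, not the DLSS or XeSS architecture."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3 * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),  # rearrange channels into a 2x larger image
        )

    def forward(self, low_res):
        return self.body(low_res)

frame = torch.rand(1, 3, 540, 960)   # quarter-resolution input frame
print(TinyUpscaler()(frame).shape)   # torch.Size([1, 3, 1080, 1920])
```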
It’s not perfect: DLSS 3 has trouble with scene transitions, while XeSS can cause a shimmering effect in some situations. However, both Barron and Catanzaro think these quality issues can be overcome by feeding neural rendering models additional training data. 2023 provides the chance to see the technology progress as Nvidia, Intel, and AMD work with software partners to enhance their respective neural rendering techniques.
This is just the tip of the spear. Barron sees a fork between “2D neural rendering” techniques like Nvidia DLSS 3, which improves the results of a traditional graphics pipeline, and “3D neural rendering,” which generates graphics entirely through machine learning. Barron co-authored a paper on DreamFusion, a machine learning model that generates 3D objects from plain text inputs. The resulting 3D models can be exported to rendering software and game engines. Nvidia has shown equally impressive results with Instant NeRF, which generates full color 3D scenes from 2D images.
Anton Kaplanyan, Vice President of Graphics Research at Intel, believes that neural rendering techniques will make 3D content creation more approachable. “If you look at the current social networks, it’s so much commoditized. A person can just click on a button, take a photo, share it with their friends and relatives,” says Kaplanyan. “If we want to elevate this experience into 3D, we need to pull people [in] who don’t know the professional tools, to become content creators as well.”
DreamFusion can generate 3D models from plain text inputs. [Image: Google]
The pace of 3D neural rendering’s improvement through 2023 will be a key component of its future. It’s impressive, but unproven compared to traditional rendering. “Computer graphics are amazing, it works really well, and we have really good ways of solving a lot of problems that may be the way we do it forever,” says Barron. He notes content creators and developers are already familiar with the tools used to create for, and optimize, a traditional graphics pipeline.
The question, then, is how quickly the graphics industry will embrace 3D neural rendering as an alternative to tried-and-true methods. It may prove an unsettling transition because of the conflicting incentives that surround it. Machine learning models often run well on modern graphics architectures, but there’s tension in how GPU, CPU, and dedicated AI co-processors—all of which are relevant to AI performance, depending on its implementation—combine in a consumer product. Betting on the wrong technique, or the wrong architecture to support it, could prove a costly mistake.
Still, Catanzaro believes the lure of 3D neural rendering will be hard to resist. “I think that we’re going to see a lot of neural rendering techniques that are even more radical,” he says, referencing generative text-to-image and text-to-3D techniques. “The graphical quality from some of these completely neural models is quite extraordinary. Some of them are able to do shadows and refractions and reflections and, you know, these things that we typically only know how to do in graphics with ray tracing, are able to be simulated by a neural network without any explicit instructions on how to do that. So I would consider those even more radical approaches to neural rendering than DLSS, and I think the future of graphics is going to use both of those things."
Neural rendering is alluring not just because of its potential performance but, also, its potential efficiency. The 530 percent gain DLSS 3 delivers in Portal with RTX can improve framerates—or it can lower power consumption by capping the framerate at a target. In that scenario, DLSS 3 can reduce the cost of rendering each frame.
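The trade is easy to quantify: at a fixed framerate cap, a cheaper frame means the GPU sits idle (or downclocks) for more of each frame interval. A rough illustration, assuming the 530 percent figure and a hypothetical native frame time:

```python
# Rough illustration of trading DLSS 3's speedup for power at a capped
# framerate. The native frame time is hypothetical, and real power draw
# depends on clock and voltage scaling across the whole system.
cap_fps = 60
frame_budget_ms = 1000 / cap_fps       # 16.7 ms available per frame
native_ms = 40.0                       # hypothetical native render time
dlss_ms = native_ms / 5.3              # ~7.5 ms with the 5.3x speedup
print(f"GPU busy {dlss_ms / frame_budget_ms:.0%} of each frame interval")
# -> GPU busy 45% of each frame interval; the rest can be spent idle
```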
“Moore’s Law is running out of steam. ... My personal belief is that post-Moore graphics is neural graphics.”
—Bryan Catanzaro, Nvidia VP of Applied Deep Learning
That’s a big deal, because consumer electronics has a problem. Moore’s Law is dead—or, if not dead, on life support. “Moore’s Law is running out of steam, as you know, and my personal belief is that post-Moore graphics is neural graphics,” says Catanzaro. For Nvidia, neural rendering represents a way to keep delivering big gains without doubling up on transistors.
Intel’s Kaplanyan disputes the demise of Moore’s Law (Intel CEO Pat Gelsinger insists it’s alive and well), but agrees neural rendering can improve efficiency. “There are some solutions to chip size, there are the chiplets, which Pat has talked about,” he says. “On the other hand, I also agree that we have a great opportunity with machine learning algorithms to use this energy and this area way more efficiently to produce new visuals.”
Efficiency is a battleground for AMD, Nvidia, and Intel, as all three companies work with device manufacturers to design new consumer laptops and tablets. For device makers, efficiency gains lead to thinner, lighter devices that last longer on battery, while at the same time enhancing what users can accomplish with the device.
“I am very excited about enabling... the experiences that you would otherwise see only in high-end Hollywood movies or Triple-A games, but those experiences you would be able to make yourself,” says Kaplanyan. “You’d be able to do it on your laptop, or some other very power-confined device.”
NVIDIA’s New AI: Wow, Instant Neural Graphics! 🤖 [YouTube]
It’s clear 2023 will be a foundational year for neural rendering in consumer devices. Nvidia’s RTX 40-series with DLSS 3 support will roll out broadly to consumer desktops and laptops; Intel is expected to expand its Arc graphics line with its upcoming ‘Battlemage’ architecture; and AMD will launch more variants of cards using its RDNA 3 architecture.
These releases lay the groundwork for a revolution in graphics. It won’t happen overnight, and it won’t be easy—but as consumers demand ever more impressive visuals, and more capable content creation, from smaller, thinner form factors, neural rendering could prove the best way to deliver.