Skydio blew our minds when they announced the R1 back in February—it’s by far the smartest, most autonomous consumer camera drone we’ve ever seen. The company promised that they’d keep on making the R1 even more capable, and today they’re announcing a slew of upgrades, including a new software development kit (SDK) that lets you leverage the R1’s obstacle-dodging cleverness in any custom application you can dream up.
The Skydio R1 is amazing, and you should read our February article about it, but in a nutshell, it’s a drone that uses an array of 12 cameras to dynamically detect and avoid obstacles while it tracks you and films what you’re doing. This means that it can follow someone riding a mountain bike through a forest, dodging trees and branches and keeping them in frame the whole time. It’s basically the kind of capability that every single company working on drone delivery has implicitly promised and so far failed to deliver, and now you can spend some cash (okay, kind of a lot of cash) and play with it yourself.
Skydio, based in Redwood City, Calif., is announcing a bunch of new things today, starting with new skills for the R1. These skills include specific autonomous camera behaviors, like boomerang, vortex, rocket, and “dronie,” which seems to be a real word now. Other drones have similar skills, but of course, the R1 will execute them with all of its obstacle-avoidance AI. There’s also a new feature called cable-cam, which “allows users to pilot R1 to specific positions and orientations and then reliably retrace the flight path between two points, all while maintaining full obstacle avoidance, and optionally tracking a subject.” The idea here is to mimic the performance of those overhead cameras that capture such incredible footage at sporting events, except that you can record yourself since the camera doesn’t need infrastructure or someone controlling it.
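The cable-cam behavior (record two endpoint poses, then retrace the line between them) can be sketched as plain pose interpolation. This is a toy illustration, not Skydio's implementation: the function name and poses are invented for the example, and on the real drone every intermediate pose would be executed through the autonomy stack so obstacle avoidance stays active the whole way.

```python
# Toy sketch of the cable-cam idea: record two poses, then retrace
# the straight "cable" between them by linear interpolation.
# Illustrative only - not Skydio's actual API or implementation.

def cable_path(pose_a, pose_b, steps):
    """Return poses evenly spaced from pose_a to pose_b, inclusive."""
    return [
        tuple(a + (b - a) * t / steps for a, b in zip(pose_a, pose_b))
        for t in range(steps + 1)
    ]

# Fly from (0, 0, 2) up and across to (10, 0, 6) in five segments;
# poses are (x, y, z) in meters, takeoff-relative.
path = cable_path((0.0, 0.0, 2.0), (10.0, 0.0, 6.0), steps=5)
print(len(path))  # 6
```

Retracing in the other direction is just swapping the two endpoint arguments; tracking a subject while on the cable would be an independent camera behavior layered on top.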
Before we get to the SDK, which is really the most interesting bit, we should also mention that Skydio has started mass production of the R1. You can’t score a Frontier Edition drone anymore, but you can pay US $500 less for a regular one, which Skydio will sell for $2,000.
Meanwhile, if you’re more interested in the R1 for what it could potentially do, here’s what the Skydio Autonomy Platform includes:
Skills SDK allows developers to create new Skills for Skydio R1 in a few lines of Python via a rich set of APIs:

- Leverage relevant, comprehensive R1 telemetry signals: takeoff-relative position, speed, distance from ground, location, user location, battery levels, temperature, and more.
- Set waypoints and issue movement commands to R1, with the safety of Skydio's 360-degree obstacle avoidance.
- Make 3D map distance queries to understand the environment around R1.
- Track subjects, including both people and cars, and define how R1 should move around them.
- Easily share skills with other Skydio owners via the Developer Console.
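To make the idea concrete, here is a toy sketch of the shape a skill might take. Everything in it (MockVehicle, goto_waypoint, the update-per-tick loop, the numbers) is an illustrative assumption, not Skydio's actual API: a mock vehicle stands in for R1 so the example runs on its own, and on the real drone each waypoint command would be carried out by the autonomy stack with full obstacle avoidance.

```python
# Hypothetical sketch of a "skill": a small Python object that issues
# waypoint commands to the vehicle, one per control tick.

class MockVehicle:
    """Stands in for R1: accepts waypoint commands, reports telemetry."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)  # takeoff-relative x, y, z (m)
        self.battery = 1.0               # fraction of charge remaining

    def goto_waypoint(self, x, y, z):
        # The real autonomy stack would plan an obstacle-free path;
        # the mock just jumps there and drains a little battery.
        self.position = (x, y, z)
        self.battery -= 0.05


class PerimeterSkill:
    """Fly a rectangular perimeter at fixed altitude, one leg per tick."""
    def __init__(self, vehicle, width, depth, altitude):
        self.vehicle = vehicle
        corners = [(0.0, 0.0), (width, 0.0), (width, depth), (0.0, depth)]
        self.waypoints = [(x, y, altitude) for x, y in corners]
        self.index = 0

    def update(self):
        """Called once per control tick; returns False when done."""
        if self.index >= len(self.waypoints):
            return False
        self.vehicle.goto_waypoint(*self.waypoints[self.index])
        self.index += 1
        return True


vehicle = MockVehicle()
skill = PerimeterSkill(vehicle, width=10.0, depth=5.0, altitude=3.0)
while skill.update():
    pass
print(vehicle.position)  # the last corner of the rectangle
```

A skill written against the real SDK would additionally read the telemetry listed above (user location, battery, temperature) to decide what to do each tick.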
For all the detail you could ever want about the SDK (and some other stuff), we spoke with Skydio CEO Adam Bry.
IEEE Spectrum: How does the Skydio Autonomy Platform work?
Adam Bry: The Skydio Autonomy Platform has two components. There's the Skills SDK, which enables people to write their own behaviors. These are very simple Python programs that run on the vehicle and leverage all the underlying autonomy systems we have, so a skill has access to the 3D structure of the world, detailed telemetry, and all the object-tracking information (where people are, where cars are), and it gives people the ability to control the behavior of the drone at a high level. It's a really fast, easy, and powerful way for people to create different kinds of vehicle behavior. The demo app is a 3D-scanning app that uses 3D motion to capture a bunch of different views and build up a 3D map of a building, and it does all that while guaranteeing collision avoidance. There are a lot of other things like that which we think could be built quickly and easily.
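The scanning behavior Bry describes, circling a structure at several heights to capture views from every side, reduces to simple waypoint geometry. The function below is an illustrative sketch, not part of Skydio's SDK; in a real skill each pose would be handed to the autonomy stack, which plans an obstacle-free path to it.

```python
# Generate capture poses on rings around a structure, several rings
# at different heights, camera always pointed at the ring's center.
# All names and parameters here are invented for the example.
import math

def orbit_scan_waypoints(center_x, center_y, radius, altitudes,
                         views_per_ring):
    """Yield (x, y, z, heading) capture poses on rings around a point."""
    for z in altitudes:
        for i in range(views_per_ring):
            theta = 2 * math.pi * i / views_per_ring
            x = center_x + radius * math.cos(theta)
            y = center_y + radius * math.sin(theta)
            # Heading that aims the camera back at the center.
            heading = math.atan2(center_y - y, center_x - x)
            yield (x, y, z, heading)

# 8 views per ring at three heights -> 24 capture poses.
poses = list(orbit_scan_waypoints(0.0, 0.0, radius=15.0,
                                  altitudes=[5.0, 10.0, 15.0],
                                  views_per_ring=8))
print(len(poses))  # 24
```

Feeding the resulting images into standard structure-from-motion photogrammetry is what turns the overlapping views into the 3D reconstruction Bry mentions.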
The other component is the Mobile SDK, which allows people to write their own mobile apps to control the drone. The idea here is if somebody wants to create their own workflow, their own branded experience for whatever application they’re trying to serve, they can do that by creating a mobile app. We try to make that as easy as possible with our SDK, which has all of the core infrastructure and communication handling that our app uses.
The R1 is being marketed as a consumer drone right now—did you intend for it to be useful in industry as well?
This has always been part of our plan—our general vision for these products is that the same core navigation technology and hardware can be used for a bunch of different industries and applications, just with different software on top that determines the behavior and the user interface. We had a huge amount of interest in this after we launched; I think a lot of people in various commercial spaces saw the core tech and thought, “If it can do that, it could probably do all these other tasks.”
Are there some examples of commercial applications that the SDK could be used for?
I think there are a few different buckets of things. One we've heard about from a lot of people is general inspection, which covers a lot of stuff: everything from checking residential houses for roof damage, to large commercial structures, to energy infrastructure like power lines, power plants, and wind farms. A lot of these have been talked about as potential use cases for drones, but what we've found is that most of them aren't working very well with manually flown drones, or with semi-autonomous drones where you need a person in the loop controlling them. In many cases that approach is both unreliable and too expensive to make sense.
Another is security—being able to scan a particular area at some regular interval and potentially even respond to things that you see, so being able to count a number of vehicles and detect their type, or being able to look for people and understand where they are and how they’re moving.
How does the R1 differ from other consumer drone platforms?
The thing that we’re focused on that we think we can uniquely enable is this general 3D motion close to structures. So if you want to scan a building, you want to be able to capture every side in detail, and you want to guarantee that you’ve been able to get enough views to make an accurate full 3D reconstruction. Or, taking it to the next level, if you want to map the inside of something, where you want to say, “everything within this bounded volume, I want you to explore and map,” that stuff isn’t anywhere close to being possible outside of the autonomy system that we’ve developed. And all of those problems look similar from an autonomy perspective, where you basically have some volume you want to operate within, with some guidelines or constraints of what you want to capture, and then you want to trust the vehicle to go off and get it.
Can the R1 be useful in a research context as well?
That's another area where we've gotten a lot of interest, both for research and for education, as a platform for students to experiment on. And absolutely, we think it's another exciting area for development. With respect to robotics research, I think that ties to the core of why we've been able to build this system to begin with. There's a lot of benefit to having all the algorithms working and running side by side, and there's a common challenge in the research community where, if you want to do research on a particular area of quadrotor flight or estimation, you need the whole rest of the stack in order to test it. I think that by providing our system with an SDK on it, we can help get around that to some extent: people can focus their research on particular aspects of control or motion planning while leveraging our entire perception system. Or vice versa, you can imagine doing different kinds of perception experiments while using our state estimation and motion planning.
Has the R1 become better at its fundamental capabilities since its initial release?
It definitely has, and there are some important improvements in the software update that we're shipping now. We're not advertising this as a new feature, but the core behaviors are working better. For example, we've built a better understanding of the vehicle's dynamics into the way it does planning and control, so it can maneuver more predictably and with more agility. And we've found ways of aggregating images over time so that thin structures like branches are picked out more precisely and detected more reliably.
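The "aggregating images over time" idea can be illustrated with a toy persistence filter over a 2D occupancy grid: a cell that appears in only one frame is treated as noise, while one that recurs across frames is kept as a real obstacle. The function, grid cells, and threshold below are invented for the example and have no connection to Skydio's actual perception pipeline.

```python
# Toy temporal aggregation: keep only grid cells that are detected
# as occupied in at least min_hits of the recent frames.

def persistent_cells(frames, min_hits):
    """Given per-frame sets of occupied grid cells, return the cells
    seen in at least min_hits frames."""
    counts = {}
    for frame in frames:
        for cell in frame:
            counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, n in counts.items() if n >= min_hits}

frames = [
    {(3, 7), (4, 7)},          # thin branch, seen in every frame
    {(3, 7), (4, 7), (9, 1)},  # (9, 1) is a one-frame glitch
    {(3, 7), (4, 7)},
]
print(persistent_cells(frames, min_hits=3))  # only the branch cells
```

The trade-off is the usual one: a higher threshold suppresses more false positives from single-frame noise, at the cost of reacting more slowly to obstacles that really are there.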
What kinds of things have your early customers been asking for?
As soon as people see it and see what it can do, we get a million feature requests. Some of those are reflected in the software that we’ve shipped—being able to follow cars was the most requested thing when we launched.
We have a pretty broad view of what's possible. [People have requested] being able to fly for 2 hours, so you'd never have to worry about it if you're going on a long bike ride or something. That's not going to be a software update to R1, but it's not out of the realm of possibility: there are things you can do from a design standpoint, and technology is moving in the right direction for those things to be possible. I more or less map requests to different time horizons; some things can be done very short term through software, some will require future versions of the hardware, and some will require breakthroughs in both, but we're pretty optimistic. We're still in the very early days for the whole industry. I haven't heard a customer ask for anything yet that's impossible.
What’s next for Skydio?
I think from an overall standpoint, we're just a tiny, tiny fraction of the way there with these devices. Our vision as a company is that someday they reach ubiquity, owned and used by tens of millions of people who benefit from them in consumer applications. We're nowhere near that, but we're taking important steps in that direction.
[ Skydio ]