Some 50 people involved in cybersecurity or closely following its developments (tech company executives, government officials, legislative analysts, researchers, and journalists) gathered last week at the University of California, Berkeley, for what was, for most, a unique kind of exercise: a cybersecurity fantasy game. The aim: to come up with solutions to security problems before they happen, by imagining a future world in which cybersecurity has fallen apart.
The event was sponsored by the Hewlett Foundation, and hosted by the Rand Corp. and Berkeley’s Center for Long-Term Cybersecurity. Rand will issue a full report on the proceedings later this year. (I attended as a participant, but the ground rules allow me to report on it as well, as long as I don’t name the others.)
The first of the two scenarios addressed that day covered the Internet of Things. (The second, addressing a breakdown in financial and health care security, I cover in another post.)
Here’s the rough setup, as best I can recreate it (interestingly, I was told, it was written last summer, though one hypothetical thread raised concerns similar to those in the current Apple/FBI standoff): It is now March 2021. In recent years, there have been a few significant events. Back in 2017 or so, a girl was kidnapped on a field trip; authorities got their hands on the suspect’s cell phone but were unable to decrypt the information it contained. The girl died, and, in reaction, Congress passed “Danielle’s Law,” requiring backdoor access to all IT products sold in the United States. This didn’t stop criminals from using encryption, and in 2018 a serial murderer began leaving encrypted cell phones next to his victims and challenging authorities to identify him by decrypting them. He was eventually caught, not by decryption but through purchase records. In 2019, Congress repealed Danielle’s Law.
Meanwhile, hackers were regularly attacking networked devices and the data they collect. Most notorious was the 2019 “shagbit” hack, in which hackers got into both mobile phone location records and fitness tracker data; by matching the two, they identified illicit sexual activity and blackmailed the participants. Elsewhere, in a cyberprotection racket, hackers demanded monthly payments from small businesses to avoid business disruption, say, by warming up smart refrigerators at restaurants and causing food to spoil. Most recently, a self-driving car plowed into a sidewalk, killing five people; it turned out to have been hacked to pilot itself, with its owner inside, to the door of a man obsessed with her, but the remote piloting wasn’t accurate. The media played up all of these failures of the Internet of Things, and more, and the public is increasingly worried, though people are still using the technology. What should be done both to increase security and to allay the public’s concerns?
We were divided into teams of eight or so and tasked with brainstorming solutions, each taking a slightly different perspective. My team was to look at increasing the cost to hackers of pursuing attacks. We settled on the creation of a CyberUL. (UL, formerly known as Underwriters Laboratories, develops standards and performs safety testing at 64 laboratories serving 104 countries. It came into being to verify the safety of electrical products and prevent fires.)
The group thought companies could be encouraged to seek CyberUL certification by some kind of relief from liability for damages caused by approved products. If CyberUL certification permeated the industry, the group expected, vulnerabilities would drop dramatically, discouraging many categories of hackers. Another group with a similar approach expressed this same idea as “hive security”: if enough gadgets on the IoT are protected, hacking any gadget becomes less attractive.
Key, members of my team thought, would be to insist that, as part of CyberUL certification, companies commit to supporting the security of their products through patches and updates for a set, and publicized, number of years. This would push companies to keep up with security patches for their products and encourage them to differentiate themselves from competitors by offering longer security windows. Companies would also need to come together in an organized way to gather and share information about hacks, the group thought, in order to support security.
My group also concluded that, in addition to a CyberUL, the U.S. needs a “Cyber Safety Board,” something we also called an “IoT Go Team.” This government organization, under the Department of Homeland Security, would operate along the lines of the National Transportation Safety Board, which investigates major plane and train crashes. It would be called to the scene when any IoT failure causes major damage or loss of life, analyze the situation, and make recommendations about how to prevent such IoT failures in the future.
Finally, members of my group suggested that IoT safety requires certain legal reforms, including limited “hack back” authority (that is, allowing companies to go after detected hackers by mining the hackers’ computers), as well as controls on what government and legal authorities could do with data being logged as part of security monitoring. It would have to be clear to the public that any data logging tools would be used to improve security, not for surveillance.
It turned out that we weren’t the only group that thought a voluntary certification system was needed. Most of the other groups came up with similar approaches. Two called theirs “Good Housekeeping Seals.” One of those would encourage participation through spectrum licensing: products that get the seal could have access to more useful bands of spectrum. Another suggested that participation could be encouraged through the domain name infrastructure, dedicating certain suffixes to approved products.
Another team, rather than suggesting a formal stamp of approval, thought the approach should be more like nutrition labeling: easy-to-understand details provided on every product would let consumers make smart choices about levels of security.
But the consensus was clear: To secure the IoT, use carrots (reduced liability, special frequencies) more than sticks; drive companies to upgrade security; and require them to make clear to consumers just how secure their products really are.
Tekla S. Perry is a senior editor at IEEE Spectrum. Based in Palo Alto, Calif., she's been covering the people, companies, and technology that make Silicon Valley a special place for more than 40 years. An IEEE member, she holds a bachelor's degree in journalism from Michigan State University.