If there’s something strange about Elon Musk’s latest comments on AI’s threat to humanity, or about a Silicon Valley startup extolling how machine learning can save the world, who are you going to call to separate fact from fiction? Maybe the founders of Certified Artificial, a service aimed at helping investors and tech conference organizers navigate the confusion and hype surrounding many claims about AI.
Launched in early August, Certified Artificial promises a “neutral, independent third-party certification service” for helping separate the AI snake oil from the real deal. One part of this service focuses on companies requesting third-party verification that they’re using the latest AI techniques in their services and products, rather than simply relying on groups of human workers or older statistical methods. Certified Artificial’s other line of business involves evaluating the quality of advice coming from certain thought leaders who frequently discuss AI technologies and their social impacts.
“Our goal is not to penalize anyone because they made a little misstep on how they talked about AI,” says Tim Hwang, partner and technical director of Certified Artificial, and director of the Harvard-MIT Ethics and Governance of AI Initiative. “We want to signal places where someone has either been consistently spreading disinformation about AI or is opining about it so it impacts in a way that erases a lot of people doing really amazing work in this space.”
The newest part of the service is a browser extension that anyone can install to see assigned ratings for thought leaders whenever their names pop up in search engines or on websites. Experts who demonstrate both technical knowledge about AI and responsible awareness of the technology’s implications may receive gold, silver, or bronze certification badges. On the other hand, individuals who frequently spread misinformation about AI can receive a “Do Not Recommend” badge.
Inspiration for the Certified Artificial service came from two articles about AI penned in 2018 and in August of this year by Henry Kissinger, former U.S. National Security Advisor and Secretary of State under President Nixon. Kissinger’s musings on the possible impacts of AI earned him the “Do Not Recommend” rating based on a “50-point diagnostic tool developed by experts in the field of machine learning and technology studies,” according to a press release published on 6 August.
“I think it’s very clear from these articles that this is someone who’s very prominent, who obviously has a lot of experience in foreign policy, but doesn’t know a great deal about AI,” says Hwang, a former public policy lead on AI at Google. What’s more, says Hwang, “[Kissinger] is in fact opining about AI and saying things about AI that either aren’t true or are totally unrepresentative of the work that’s actually going on in the space.”
Other individuals who have received the “Do Not Recommend” rating include Elon Musk, CEO of the automotive and energy company Tesla and the aerospace company SpaceX. Musk has frequently warned about the risks of AI technologies someday outstripping the intelligence and understanding of humans. He has also founded related initiatives, such as the OpenAI research company focused on “safe artificial general intelligence” and the Neuralink company focused on developing brain-machine interface technology.
The beta version of the browser extension aims for some topic specificity, displaying the thought-leader ratings whenever a search result or Web page relates to AI technologies, says Clayton Aldern, partner and research director for Certified Artificial. In the future, he and Hwang expect to improve the service by adding a machine learning model trained to recognize certain topics.
“We’re not here to commit character assassination,” says Aldern, a data scientist who also directs his own data-science and machine-learning consultancy. “We’re here to level the playing field.”
The second part of Certified Artificial’s services banks on the idea that the marketplace will increasingly demand independent, third-party certification of AI products and services—especially as more startups and established companies alike jump on the AI hype bandwagon. Potential customers that might like to receive Certified Artificial’s stamp of approval include startups that are “having trouble getting traction in the marketplace” and want to stand out amidst a “glut of alleged AI companies,” Aldern explains.
Companies seeking certification will not have to give up their source code or similarly sensitive trade secrets. Instead, Aldern and Hwang envision the certification being more of a holistic process. Besides applying their own evaluation methods, they are also keeping a group of advisers on tap.
The members of the advisory group and the exact methodology behind the thought-leader ratings and company certifications are being kept secret for now. That has already raised at least one question about the transparency of the entire venture. But the Certified Artificial founders argue that the legitimacy and objectivity of their certification processes rely upon “keeping the cards close to our chest” so that outsiders have a difficult time gaming the system, Aldern says.
Whether Certified Artificial’s business plan takes off remains to be seen. Still, the two founders sound cautiously optimistic about having identified a real market demand and opportunity at a time when AI hype still abounds in press releases, IPO filings, and news articles.
“To our knowledge, this is the first time something like this has been tried,” Hwang says. “And so we’re figuring out what would be a price that is both reasonable to really do a good job in these certifications, but also one that is a significant signal to say that, ‘Hey, this isn’t just some bumper sticker that you purchase.’”
Jeremy Hsu has been working as a science and technology journalist in New York City since 2008. He has written on subjects as diverse as supercomputing and wearable electronics for IEEE Spectrum. When he’s not trying to wrap his head around the latest quantum computing news for Spectrum, he also contributes to a variety of publications such as Scientific American, Discover, Popular Science, and others. He is a graduate of New York University’s Science, Health & Environmental Reporting Program.