This article originally appeared in the March 2017 issue of IEEE Robotics & Automation Magazine. We thank RAM and the authors for giving us permission to reproduce it here.
Algorithms with learning abilities collect personal data that are then used without users’ consent, and sometimes even without their knowledge; autonomous weapons are under discussion at the United Nations; robots that simulate emotions are deployed with vulnerable people; research projects are funded to develop humanoid robots; and artificial intelligence-based systems are used to evaluate people. One can regard these examples of AI and autonomous systems (AS) as great achievements, or claim that they endanger human freedom and dignity.
To fully benefit from the potential of these technologies, we need to make sure they are aligned with our moral values and ethical principles. AI and AS have to behave in ways that benefit people, beyond reaching functional goals and addressing technical problems. This will build the elevated level of trust in technology that is needed for a fruitful, pervasive use of AI/AS in our daily lives.
Technology Ethics
The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems is an industry connections program of the IEEE Standards Association launched in April 2016 and is part of a broader IEEE program on ethics (TechEthics). A major tenet of the initiative is that, by aligning the creation of AI/AS with the values of its users and society, we can prioritize the increase of human wellbeing as our metric for progress in the algorithmic age.
A primary goal of the IEEE Global Initiative is to ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems. The term technologist refers to anyone involved in the research, design, manufacture, or messaging around AI/AS, including universities, organizations, and corporations making these technologies a reality for society.
The Initiative
The initiative has two main outputs: the document “Ethically Aligned Design” and standards proposals that could mature into operational standards adopted by industry and designers. The first version of “Ethically Aligned Design” was published 13 December 2016. It represents the collective input of more than 100 global thought leaders in the fields of AI, robotics, law and ethics, philosophy, and policy from the realms of academia, science, government, and corporate sectors. Our goal is for “Ethically Aligned Design” to offer insights and recommendations from these peers that will serve as a key reference for the work of AI/AS technologists in the coming years.
To achieve this goal, the current version of the document identifies issues and candidate recommendations across the fields that make up AI and AS. The IEEE has also made “Ethically Aligned Design” available under the Creative Commons Attribution Noncommercial 3.0 U.S. License, so any organization can begin to benefit from its collective wisdom right away.
As of this month, seven IEEE Standard Projects have already been approved, demonstrating the IEEE Global Initiative’s pragmatic influence on issues of AI/AS ethics:
- IEEE P7000: Model Process for Addressing Ethical Concerns During System Design (Working Group already in process)
- IEEE P7001: Transparency of Autonomous Systems (Working Group has started)
- IEEE P7002: Data Privacy Process (Working Group has started)
- IEEE P7003: Algorithmic Bias Considerations (Working Group is starting soon)
- IEEE P7004: Standard for Child and Student Data Governance (Project approved on March 23)
- IEEE P7005: Standard for Employer Data Governance (Project approved on March 23)
- IEEE P7006: Standard for Personal Data AI Agent (Project approved on March 23)
For the IEEE Global Initiative overall, a key value defining our work is its emphasis on human-aligned autonomous and intelligent systems. Our insights are shaped by a desire to incorporate aspects of human wellbeing that may not automatically be considered in the current design and manufacture of these technologies. Our aspiration is to reframe the notion of success so that human progress can include the intentional prioritization of individual, community, and societal values. We want ethics to become the new green.
Learn more about the IEEE Global Initiative, including how to join the work, on the IEEE Standards Association website.
Raja Chatila is executive committee chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. An IEEE Fellow, he is director of the Institute of Intelligent Systems and Robotics at Pierre and Marie Curie University in Paris. Follow him on Twitter: @raja_chatila
Kay Firth-Butterfield is the executive committee vice-chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She is also executive director and founding advocate of AI Austin. Follow her on Twitter: @KayFButterfield
John C. Havens is executive director of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. He is the author of “Heartificial Intelligence: Embracing Our Humanity To Maximize Machines.” Follow him on Twitter: @johnchavens
Konstantinos Karachalios is managing director of the IEEE Standards Association. He previously served for 25 years with the European Patent Office. He holds a master’s degree in mechanical engineering and a Ph.D. in energy engineering (nuclear reactor safety) from the University of Stuttgart.
Updated 3/30/17 12:15 pm ET