Most Virginia Tech fans know two things about our university: we turn out great engineers and, sometimes, our football teams aren’t bad either.
But did you know that the work being done by SPIA faculty member (and world-class paddler) Dr. Sara Mattingly-Jordan is helping engineers design algorithms and autonomous systems to “be good”?
What exactly is it that you’re doing?
Right now, I am involved in three projects with the IEEE (Institute of Electrical and Electronics Engineers) related to their Global Initiative on Ethics of Autonomous and Intelligent Systems (https://ethicsinaction.ieee.org/). For the whole global initiative, I’m the chair of the “Glossary Committee,” which has the very big task of defining what the terms we use actually mean.
For example, what do we mean by “ethics”? And how does that meaning of “ethics” relate to other terms like “morality” or “intelligence”?
My work with the Global Initiative led, in turn, to my work with the IEEE-SA (Standards Association), helping to create global technical standards for designing and programming artificial intelligence to make ethical decisions (http://standards.ieee.org/).
I started working on P7000, a standard in development that addresses the design of ethical systems, but quickly found myself working on other standards as well, such as those setting the terms for the design of algorithms that protect the privacy of employee data, and even standards for interoperable terminology in robotics. My work on standards-setting groups led to a leadership position with the standards group of the IEEE Society on Social Implications of Technology, for which I currently serve as Vice-Chair. That means I get to see what the world’s leading experts on the future of ethics in AI and autonomous systems think should be codified into global standards. I also coordinate responses from global partners to the Ethically Aligned Design (EAD) project.
I’m not sure I know what that all means. What is a “standard”?
To define it briefly, standards are what we all rely on, often without knowing it, for safe products that work. When you see ISO 9000, a quality management standard produced by the International Organization for Standardization (ISO), it means that the firm that made the product has safeguards, procedures, and policies in place to help ensure safe, quality products. IEEE makes standards we are all very happy to have in our lives, such as those governing how WiFi and LAN networks operate. We rarely notice the influence standards have on our lives, but if they didn’t exist, and if hundreds of volunteers from industries around the world didn’t put their time into making them, we would live much less interoperable, well-organized lives.
What is an “ethical” standard then? Does that mean you’re deciding whether a robot can kill us or not?
That’s a great question. I, as a single person, am not deciding anything. Our standards are made by consensus, which means that anywhere from a few dozen to a few hundred experts must agree on the terms of a standard.
(To become involved in IEEE Standards making in this space, see: https://ethicsinaction.ieee.org/)
When it comes to what a robot can or cannot do, there are multiple standards that will eventually come together to govern robot behavior. Projects currently under development operate according to the best ethical judgments of the teams building them. In the future, as our standards are published and become part of the landscape of professional behavior, it will be clearer which ethical expectations are programmed into robots and which expectations designers hope the robots will learn.
As for robots “killing people”: it is not possible to prevent every incident of injury to humans arising from their interactions with these systems, but one of the goals of standards setting, and of research into ethical human-computer interaction, is to bring that possibility as close to zero as we can.
You said you’re working on a Glossary. Does that mean there’s a textbook for ethical artificial intelligence?
No. Well, at least not yet! The Glossary is a set of candidate definitions for standards-making working groups and the many members of the Global Initiative to use when they draft policies and standards. Ideally, the Glossary will help people from professions as diverse as electrical engineering, human factors engineering, sociology, robotics, public policy, and ethics speak to one another and mean the same thing. One thing that has surprised me throughout this work is that professionals in fields very far from one another often use the same term to mean subtly different things. For example, the word “ontology” is used in both philosophy and robotics, but it doesn’t mean the same thing in each! Resolving debates about the terms we use is critical to moving forward on the hard work of creating the standards.
Let’s go back to the Global Initiative for a bit. What is it, what is global about it, and what do you do for the initiative?
The Initiative works to bring together experts in philosophy, artificial intelligence, and autonomous systems from around the world to identify the ethical issues in AI and to point other researchers, governments, and professionals toward conversations and solutions. I was not heavily involved in writing the first “Ethically Aligned Design” document, but I got involved in improving the second version and in identifying gaps where we were falling short of our global ambitions. I also got the chance to respond to a stakeholder engagement exercise: we solicited feedback from around the world, and I reviewed and systematized it for our brief white paper on the ambition of becoming truly global (http://standards.ieee.org/develop/indconn/ec/becoming_leader_global_ethics.pdf).
In the second round of Ethically Aligned Design, which is now open for public comment, I hope to get the chance to craft our response to the public again. Of course, as chair of the Glossary Committee, I’ll have to respond individually to the comments on my own work!
How has your work with these many initiatives in IEEE influenced your work as a SPIA faculty member?
Well, I’m hoping to find some master’s and PhD students to work specifically on AI ethics and policy in the future! We are also in the planning stages of a collaborative IEEE/SPIA event on AI and state and local government policy and management. We hope to hold a few events during Spring 2018, so stay tuned!
I am always interested in having more SPIA students involved in my work. I’ve been very lucky to have a few CPAP PhD students help with the initial Glossary work, and one of them, Maria Ingram, has become a regular part of the committee. But I can always use more!
Finally, you mentioned something about your sports background. How does that and your work on AI go together?
They don’t go together at all! Other than relying heavily on Garmin GPS technology for training, I am a very “unwired” athlete right now. But my coaches are pushing for more datafication of my training and life in order to optimize it. With all of the interesting work I have going on with IEEE, plus my teaching and research, we have to be very careful with time as we gear up for the Va’a Sprints in Tahiti in July 2018 (https://www.tahitivaa2018.org/en/accueil-en/).
A digitized paddle that syncs with my heart rate and speed data and automatically reports force and watt outputs is probably in my future. I compete in the open division, which means I race against the best women of any age group from all of the nations that field competitors. While I don’t feel confident racing against the fantastic younger paddlers from the Polynesian nations, who aren’t having to chip away at ice to train right now, I have reason to be confident in the algorithms that will be embedded in my training gear!