WEF Releases Ethics by Design Report as a Guide to Responsible AI
By John P. Desmond, AI Trends Editor
The World Economic Forum (WEF) has released “Ethics by Design—An Organizational Approach to the Responsible Use of Technology,” a report detailing steps and recommendations for achieving ethical use of technology.
“Ethics will be crucial to the success of the Fourth Industrial Revolution. The ethical challenges will only continue to grow and become more prevalent as machines advance. Organizations across industries—both private and public—will need to integrate these approaches,” stated WEF’s Head of Artificial Intelligence and Machine Learning Kay Firth-Butterfield in a press release.
The report recommends that a comprehensive approach to fostering organizational ethics around AI should include three components:
Attention: Timely, focused attention toward the ethical implications of the technology. Attention techniques include reminders, checklists, and frequent refresher training;
Construal: Having individuals interpret their work in ethical terms. Examples include mission statements imbued with ethical language, an emphasis on culture, and framing technical decisions in terms of the corporation’s vision, purpose, and values, getting beyond purely legal or regulatory compliance.
Motivation: Encouraging prosocial actions, setting social “norm nudges,” and other cultural change activities can be used to promote ethical behaviors.
Research for the report included interviews with executives from seven countries, which yielded a blend of insights into models organizations can use to help employees learn, stated Don Heider, executive director, Markkula Center for Applied Ethics. “Executives will find practical, specific recommendations to enable their organization to be intentional in their efforts to embed ethical thinking into their cultures and practices,” he stated.
“The ethical framework for each organization is going to be slightly different,” said Beena Ammanath, Executive Director, Deloitte AI Institute for Trustworthy and Ethical Technology, in an interview with AI Trends.
In a manufacturing company focused on using technology to predict factory floor machine performance, fairness and bias may be less of a factor than they are for a company that evaluates human talent or that oversees reskilling and upskilling the labor force, she suggested. “Once you have agreement on what ethics means, you look at the three critical components,” she said.
For example, “Most technology companies advanced on their AI journey already have some feel for ethical training,” Ammanath said. “So you put in reminders and checklists, and annually, the training is refreshed, so it is timely and refocused attention.” Companies can innovate ways to provide training, such as by using gaming to boost engagement.
Google’s AI Ethicist Gebru Flagged a Concern, and No Longer Works at Google
To interpret their work in ethical terms, employees need to be able to speak out about their concerns, she said. Asked how that worked out for Timnit Gebru, the AI ethicist at Google who was let go in a dispute over her ethical concerns around large language model research, Ammanath was understanding, cautioning that it had just happened the previous week and she was not aware of the details.
Gebru had submitted a paper to an industry conference that Google asked to be withdrawn, leading to a disagreement that resulted in Gebru leaving the company. She is known in the ethics community, in addition to her work at Google, for her work with Joy Buolamwini, a computer scientist based at the MIT Media Lab and founder of the Algorithmic Justice League, on bias in facial recognition software. Their study showed facial recognition software was much more likely to misidentify people of color, particularly women, versus white men. IBM, Amazon, and Microsoft rolled back their facial recognition product lines after the study was publicized. (See AI Trends.)
“There is no playbook,” Ammanath commented on Gebru’s experience. “We have to learn and then improve.” Asked if there is hope Google will recover lost credibility around AI ethics, she said, “It’s like a child growing up. If you get burned, you learn, and then you move on. I am very optimistic. And it is important that every employee is aware of the ethical implications of the systems they are building and employees should be empowered to raise ethical concerns, act on them, and have a way to see what that means for the company.”
Recalling training early in her career, Ammanath said when she started as a data analyst, training was centered on the core values and mission of the company, “But there was nothing saying to make sure the procedures you are writing do not cause human harm,” she said.
Humans for AI Works to Improve Diversity in Tech
In addition to her role at Deloitte, Ammanath is the founder of Humans for AI (HFAI), started three years ago to focus on AI literacy and addressing the “diversity crisis” in AI. The website states, “AI systems require a diverse workforce of humans.”
The website offers these facts: 51% of the world’s population is female; 18% of AI authors at conferences are women; 5% of the AI workforce are women and minorities; the pool of minorities could potentially fill 37% of the tech workforce; and 17% of tech employees are women.
Programs offered by Humans for AI include the Alliance for Inclusive AI (AIAI), in partnership with the University of California at Berkeley, which aims to include more women and minorities in the field of AI by mentoring, facilitating internships, and connecting people with job opportunities.
HFAI has volunteer ambassadors around the globe, committed to increasing awareness and engagement for the group’s mission on the ground through planning and hosting local events. “At Humans for AI we believe in building a diverse workforce for the future with a focus on advocacy, awareness, education and outreach,” stated Deepa Naik, co-CEO of Humans for AI, in an email message to AI Trends. “Diversity enhances creativity, critical thinking and decision-making crucial towards building ethical and low-biased systems especially in emerging technologies like AI which will continue to make a huge impact on our lives.”
The WEF is one of a vastly expanded number of organizations fostering responsible policies around the use of AI. The AI Policy Observatory of the Organization for Economic Cooperation and Development (OECD) tracks more than 300 AI policy initiatives in 60 countries, a sharp uptick from 2017, when Canada was first with its National AI Strategy, according to a recent account in Forbes.
AI Global, a non-profit committed to furthering trustworthy AI, has created the Responsible AI Trust Index, providing a means to evaluate AI systems and models against best practices. In December 2020, working with the WEF and the non-profit Schwartz Reisman Institute, AI Global convened the first meeting on a new program for Responsible AI Certification (RAIC). Using a five-element scorecard, the index aims to set certification levels for an AI system.
“With the increased use of AI in every aspect of our life, from social media advertising to predictions on health treatment, it is imperative that there is independent oversight to ensure AI systems are built in a way that is safe and protects those using it,” stated Ashley Casovan, executive director of AI Global.
See the World Economic Forum press release on the Ethics by Design report, and the account in Forbes on ethical AI initiatives.