Originally published at UXmatters October 4, 2021.
Unfortunately, this was the first and only article in this column.
Welcome to the first edition of my new column: A Better Future: Designing for good in a changing world. My hope is that this column will be a natural extension of the two series I’ve previously written for UXmatters, “Understanding Gender and Racial Bias in AI” and “The State of UX Design Education.” My goal is to continue educating myself and this community on the good, the bad, and the ugly of design futures, with a focus on the perils of designing in an artificial intelligence (AI)–powered world and what we, as UX designers and researchers, can do to address these challenges.
To get us all on the same page about capital G, Good design, I’m kicking off my column with a discussion of design ethics. This topic feels particularly relevant given the recent news from Menlo Park, California. As I write this, The Wall Street Journal has released The Facebook Files, its investigative research concluding what many of us have suspected for years: Facebook has special rules for elite users. Instagram is toxic for teenage girls. Facebook is an angry place and makes the world an angrier place. Facebook is not responding to the alarms its employees have raised regarding the treatment of minorities and vulnerable populations in the developing world. None of these revelations should come as a surprise. Sean Parker, the founding president of Facebook, warned us of this very thing in 2017. He admitted that Facebook’s goal was to “consume as much of your time and conscious attention as possible.” The social-media giant is guilty of “exploiting a vulnerability in human psychology.” Parker said that he, Mark Zuckerberg, and Kevin Systrom, co-founder of Instagram, “understood this consciously. And we did it anyway.”
The Ethics of AI in the Big-Five Tech Giants
How could this happen? In the absence of strict governmental regulation, the men behind the algorithms—“billionaire overlords,” as Parker refers to them and himself—at Facebook, Google, Apple, Amazon, and Microsoft have been left to police themselves. With growth as their only goal, there’s no motivation to slow their roll, take stock, or ask whether they are doing the right thing. If it had wanted to do the right thing, Facebook would have scrapped Instagram for Kids, its product targeting children under thirteen, as soon as executives saw the results of the company’s own research on what Instagram does to teenagers. It would have chucked its Ray-Ban Stories smart glasses in the bin when journalists pointed out how these glasses could result in the exploitation of women, children, and other vulnerable populations. (If you’re unconvinced about Zuckerberg’s growth goals, check out Ben Grosser’s film Order of Magnitude, in which Grosser compiles every public instance of Zuckerberg talking about getting more and becoming bigger.)
Are the tech giants considering the ethical impacts of their products? In 2016, Facebook, Google and its DeepMind subsidiary, Amazon, Microsoft, and IBM teamed up to create the nonprofit Partnership on AI. Apple joined the group in 2017. According to The Verge, the group’s two key goals are to educate the public about AI and to collaborate on AI best practices, with input from researchers, philosophers, and ethicists. Setting aside the fact that the group should address ethical considerations before selling the benefits of AI to the general public, the organization has nevertheless been conducting solid research and asking the right questions. Mustafa Suleyman, the co-founder of DeepMind and co-chair of the Partnership on AI, has said that the Partnership is committed to an open dialogue about the ethics of AI, while also admitting that the group is not a regulatory body and cannot enforce rules on its member organizations. Even so, the Partnership has published a set of tenets to guide the organization. Sadly, the only way to find these tenets today is to dive into the Internet Archive’s Wayback Machine. Tenet 3 states, “We are committed to open research and dialogue on the ethical, social, economic, and legal implications of AI.” But it’s impossible to engage in an open dialogue when the guiding principles of these organizations are hidden from public view.
The Health Ethics & Policy Lab published a report in 2018 that mapped and analyzed the state of ethical AI among private companies, research institutions, and governments around the globe. The report describes a convergence around five of eleven key principles: transparency, justice and fairness, nonmaleficence—a technocratic do-no-harm rule—responsibility, and privacy. Of the 84 sets of ethical principles and AI guidelines the report identified, organizations in the United States published 20. For-profit companies, including Google, IBM, Intel, and Microsoft, have written many of these guidelines, while Amazon, Apple, and Facebook were conspicuously absent from the list. Amazon executives say they are doing work in this area but are uninterested in discussing it with anyone outside the company, which is too bad because Amazon does not have a great track record for developing its own AI solutions. For an example, just look at its sexist AI recruiting tool. Facebook claims to have an in-house ethics team, but it, too, is keeping mum about its work.
Microsoft, the oldest of the big five, has the most mature Guidelines for Human-AI Interaction. These guidelines are really tactics that support Microsoft’s AI principles, but they are posted on a Web site separate from those principles. The first tactic is: “Help the user understand what the AI system is capable of doing.” This is impossible. One cannot provide transparency into a thing that is a black box by design. In many cases, even the engineers and data scientists who develop these software systems don’t understand what they can do. The US military’s own report on AI notes the lack of explainability that is common to all AI tools. How could any nonexpert know what the software can do when the dudes who create it can’t explain it?
Speaking of the military-industrial complex, Google’s principles state that the company doesn’t do business with the US military, but it does—or at least its software does—according to reporting from NBC News. Google, Amazon, Microsoft, Dell, IBM, Hewlett Packard, and Facebook all benefit from contracts with governmental agencies, including Immigration and Customs Enforcement (ICE), the Federal Bureau of Prisons, and the Department of Homeland Security (DHS). They hide the details of these deals in subcontractor agreements, so any mention of accountability or transparency in their ethics statements is a farce.
Many of us doom-scrolling Facebook and Instagram or Googling from our Apple, Microsoft, or Android devices are far removed from the real physical harm these platforms are causing. But we are still being manipulated in obvious and subtle ways. These platforms are harvesting our data whether we realize it or not. Some of us may be more complicit than others, but we are all victims of these privateers.
The Ethics of AI in the UX Design Community
Shortly after our Facebook feeds served up this breaking news, a long-time friend and colleague and I had a conversation on Facebook about the Facebook Files. (Where else would we have this conversation?) She has spent her “career in and around software development, and there have definitely been clients, or even just features, that teams would not work on for moral reasons.” She remarked, “This [revelation from Facebook] makes dark UX patterns look like child’s play.” Where were all the designers? she wondered.
To answer her question for all of us, here’s a summary of what the UX community has had to say about ethics in research and design in recent years. The good news is that the practice of user research has reached ethical maturity. Not every UX professional or organization may be practicing ethical user research, but there are well-established best practices in this area. Josephine Scott shares real-life research dilemmas and how to solve them when conducting big-data research—for example, when doing live A/B testing, conducting large-sample unmoderated testing, or using survey-intercept tools.
When it comes to ethics in UX design, enough of us have questioned our role that Jakob Nielsen felt compelled to speak out on “The Role of Design Ethics in UX” at a recent conference. He says that we should never deceive through design. Treating users well makes them loyal customers, which drives long-term business value. If we have to ask whether something is ethical, it probably isn’t. To paraphrase, if something truly is useful, usable, and desirable, we can be honest about it and sell more widgets.
On UXmatters: Juned Ghanchi discusses the challenges of incorporating ethics into his design process and says we can do it without making value judgments. We can balance business needs against equity and accessibility goals. Peter Hornsby talks about the jaded jargon and industry doublespeak in which we engage or that we buy into, fooling ourselves into thinking that we’re designing for good. Hornsby adapts Isaac Asimov’s Three Laws of Robotics to UX design.
- “A UX designer may not injure a user or, through inaction, allow a user to come to harm.
- “A UX designer must meet the business requirements except where such requirements would conflict with the First Law.
- “A UX designer must develop his or her own career as long as such development does not conflict with the First or Second Law.”
I love the simplicity and familiarity of Hornsby’s approach, but worry about the welfare of those of us who cannot afford to quit a job because we disagree with the intended or unintended consequences of something we’ve designed.
Chris Kiess looks at the big picture of UX design ethics and divides the problem into three categories: existential values, ill or misdirected intent, and benevolent intent. He then dives into specific design challenges such as dark patterns, influence, distraction, and hidden information. Vikram Chauhan takes Kiess’s discussion a step further and questions at what point our persuasive designs become evil. Both agree that dark patterns have no place in Good UX design. (I think we can all agree that dark patterns have no place in UX design, but they still exist.) Both of these authors also question who is ultimately responsible when things go awry. Is it the UX designers, or does the fault lie with stakeholders, business owners, project managers, or others who demand alterations to our designs? According to the beating drum of journalists at Harvard Business Review, “Everyone in Your Organization Needs to Understand AI Ethics.” So maybe we’re all to blame.
While our UX design work may be well intentioned and useful, we often lack carved-in-stone standards. Xinru Page and her colleagues have proposed a set of standards that are specific to responsible privacy design in social media. Huatong Sun has written guidelines for cultural sustainability in Global Social Media Design, explaining how to create local-global online networks that are sensitive to the cultural contexts in which they are sold. Dorothy Shamonsky has examined the professional code governing architects and translated those principles to user experience design, placing an emphasis on usability and accessibility. The outcome is a list of proposed standards for designing holistic, ethical user experiences that encompasses accessibility, ergonomics, safety, appropriate attention, movement, beauty, transparency, security, mind, community, and innovations. She admits that we need to do more work in this area. Perhaps because we lack one organization to rule us all, we’ll never have just one set of ethical guidelines.
The UX community does not have the kind of organization our graphic-design partners have in AIGA. The folks at AIGA took a stand 20 years ago when they published the first edition of Design Business and Ethics, which is now in its third edition. The publication includes standards of professional practice outlining the designer’s responsibility to clients and to other designers and, more importantly, to the public, society, and the environment. These standards of practice feel both fresh and prescient, offering commandments to “avoid projects that result in harm to the public, … communicate the truth in all situations, … and respect the dignity of all audiences.”
It’s this last point that really sticks with me. Treating people with dignity is what matters above all else. Dignity appears on the list of eleven principles in the global landscape of ethics from the Health Ethics & Policy Lab to which I referred earlier. “Dignity is believed to be preserved if it is respected by AI developers in the first place.” (The emphasis is mine.) This sums up what many of us are already saying: Good design is the result of good intentions. Only if we can design and create useful, usable, and desirable products that treat everyone—regardless of gender, race, culture, ethnicity, and ableness—with dignity are we doing Good design.