What are the ethical responsibilities of companies that are able to manipulate human behavior on a massive scale? It’s a question one hopes technologists and designers ask themselves when building world-changing products, but one that hasn’t been asked often enough.
Operant conditioning, intermittent reinforcement, the search for self-actualization — the techniques used by product managers at the world’s largest companies are equal parts psychology and technology. As Sean Parker, founding president of Facebook, recently acknowledged, the company has long been engaged in the business of “exploiting a vulnerability in human psychology.”
Our gadgets and apps are more persuasive than ever. Yet for the makers of these technologies, few guidelines exist on how to change user behavior ethically. Without a standard, businesses tend to unthinkingly push the envelope in the never-ending quest for more engagement, more growth, and, ultimately, more profits. As one startup founder told me, “At the end of the day, I have an obligation to my investors and employees, and I’ll do anything I can, short of breaking the law, to get people using my product.”
The tech industry needs a better reason than the threat of jail time to do the right thing.
Thankfully, most technologists and designers I know are working to make people’s lives better. Around the world, entrepreneurs aspire to build products customers love. Whether working at a large Silicon Valley tech company or out of a garage, they dream of moving people to action by offering them the next indispensable improvement to their lives, and most try to go about this in an aboveboard way.
Of course, many of them also wouldn’t mind getting rich. But this mix, the drive to make both a difference and a profit, is how humankind has solved many of our most vexing problems. There’s nothing wrong with building products people want to use, but the power to design user behavior ought to come with a standard of ethical limitations.

How It’s Used
The trouble is the same techniques that cross the line in certain cases lead to desirable results in others. For example, Snapchat’s use of streaks — which tally the number of consecutive days friends have shared photos — has been criticized for conditioning teens to compulsively keep coming back to the app. But the same persuasion technique is used by the language app Duolingo to help people learning a new language stick with the program.
The same variable rewards used to extract cash from gamblers playing electronic slot machines are also used in video games that help kids with cancer distract themselves as they receive painful treatments.
Clearly, it’s not the persuasion technique itself that’s the problem — it’s how the technique is used.
But without a test to tell the difference between good and evil uses, it’s easy to see how designers can go astray.
The Regret Test
The tech industry needs a new ethical bar. Google’s motto, “Don’t be evil,” is too vague. The Golden Rule, “Do unto others as you would have them do unto you,” leaves too much room for rationalization.
I’d argue that what we ought to be saying is, “Don’t do unto others what they would not want done to them.” But how can we know what users do and don’t want?
I humbly propose the “regret test.”
When we’re unsure whether to use an ethically questionable tactic, we should ask: “If people knew everything the product designer knows, would they still execute the intended behavior? Are they likely to regret doing this?”
If users would regret taking the action, the technique fails the regret test and shouldn’t be built into the product, because it would manipulate people into doing something they didn’t want to do. Getting people to do something they didn’t want to do is no longer persuasion; it’s coercion.
So how do we tell if people regret using a product? Simple! We ask them.
Just as companies test potential features they’re considering rolling out, they could test whether a questionable tactic is something people would respond to favorably if they knew what was going to happen next.
This testing concept isn’t new to the industry — product designers test new features all the time. But the regret test would insert one more ethical check by asking a representative sample of people if they would take an action knowing everything the designer knows is going to happen.
The test wouldn’t necessarily require much added effort or cost. In a recent article, Jakob Nielsen of the Nielsen Norman Group wrote that testing with as few as five people is enough to surface most usability problems.
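To make the procedure concrete, here is a minimal sketch of how a team might score such a regret check alongside an ordinary feature test. Everything in it, including the RegretResponse structure, the survey framing, and the 10% threshold, is a hypothetical illustration rather than an established method.

```python
from dataclasses import dataclass

# Hypothetical survey: after showing a participant everything the
# designer knows about what the feature will do, ask whether they
# would still take the action and whether they expect to regret it.

@dataclass
class RegretResponse:
    would_still_act: bool  # would take the action, fully informed
    would_regret: bool     # expects to regret taking it

def passes_regret_test(responses: list[RegretResponse],
                       max_regret_rate: float = 0.10) -> bool:
    """Return True if few enough informed participants expect regret.

    The 10% threshold is arbitrary; each team would set its own bar.
    """
    if not responses:
        raise ValueError("Need at least one response to evaluate.")
    regret_rate = sum(r.would_regret for r in responses) / len(responses)
    return regret_rate <= max_regret_rate

# Example: a questionable streak-style notification tested with five users.
responses = [
    RegretResponse(would_still_act=True, would_regret=False),
    RegretResponse(would_still_act=True, would_regret=False),
    RegretResponse(would_still_act=False, would_regret=True),
    RegretResponse(would_still_act=True, would_regret=False),
    RegretResponse(would_still_act=True, would_regret=True),
]
print(passes_regret_test(responses))  # False: 2 of 5 expect regret
```

The interesting design choice here is the threshold: any meaningful rate of anticipated regret is a signal worth investigating, not merely a gate to clear.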
Shipwrecks
The history of technological innovation involves many unintended consequences. As the cultural theorist Paul Virilio once said, “The invention of the ship was also the invention of the shipwreck.” The beautiful thing about the regret test is that it could help weed out some of those unintended consequences, putting the brakes on unethical design practices before they go live to millions of users.
The regret test could also be used for regular check-ins. Like many people, I’ve uninstalled distracting apps like Facebook from my phone because I regret having wasted time scrolling through my feed instead of being fully present with the people I care about. Wouldn’t it be in Facebook’s interest to know about people like me?
If any company, be it Facebook or another business, doesn’t listen to users who increasingly resent it for one reason or another, it risks more people ditching its service altogether. And that’s exactly why understanding regret is so important. Ignoring people who regret using your product is not only bad ethics, it’s also bad for business.
Nir’s Note: Thank you to Jason Amunwa, Rafael Arizaga Vaca, Ahmed Bouzid, Jamie Kimmel, Julie Li, Jennifer McDonald, Bo Ren, Irina Raicu, Julian Shapiro, Shannon Vallor, AnneMarie Ward, Susan Weinschenk, Guthrie Weinschenk, and Casey Winters for reading versions of this essay.
Illustrations by John Devolle
Nir Eyal is the author of Hooked: How to Build Habit-Forming Products and blogs about the psychology of products at NirAndFar.com. For more insights on changing behavior, join his free newsletter and receive a free workbook.
This article was originally published on NirAndFar.com