CONNECTING CALIFORNIA--A: Test me all night, baby.
No, really. Sign me up to be the subject of A/B testing. I’d even be willing to sign a blanket consent form, right now, so that all of Silicon Valley’s biggest brains can test me for the purpose of improving the human future.
Everybody’s doing it. In fact, you’ve likely been A/B tested without your knowledge if you’ve ever used Google or Facebook.
With A/B testing, different users are given different variants of a website or an email or a purchasing button to test which small changes make you more likely to click, read, buy, or spend more time in a particular online environment. (A/B suggests two variants but, in reality, we are in a multivariate world.) If you’re reading this column online, you could be part of an A/B test right now—it could be running in three different formats, with your reaction to each variant (different headlines, different layouts, maybe even different handsome photos of your columnist) being measured, recorded, and statistically analyzed.
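The mechanics described above are simple enough to sketch in a few lines of code. The following is a minimal, hypothetical simulation—the variant names and click probabilities are made up for illustration—showing how visitors are randomly assigned to variants and how click rates are tallied for comparison:

```python
import random

# Minimal sketch of an A/B test: randomly assign visitors to one of two
# headline variants, record whether each "clicks," then compare click rates.
# The variants and their true click probabilities are hypothetical.

random.seed(42)  # fixed seed so the simulation is repeatable

VARIANTS = {
    "A": 0.10,  # assumed true click probability for headline A
    "B": 0.12,  # assumed true click probability for headline B
}

def run_test(visitors=10_000):
    """Simulate the test; return clicks and impressions per variant."""
    results = {v: {"shown": 0, "clicked": 0} for v in VARIANTS}
    for _ in range(visitors):
        variant = random.choice(list(VARIANTS))   # random assignment
        results[variant]["shown"] += 1
        if random.random() < VARIANTS[variant]:   # did this visitor click?
            results[variant]["clicked"] += 1
    return results

results = run_test()
for variant, r in results.items():
    rate = r["clicked"] / r["shown"]
    print(f"Variant {variant}: {r['clicked']}/{r['shown']} clicks ({rate:.1%})")
```

In a real deployment the "click" would come from live users rather than a random draw, and the observed rates would be checked for statistical significance before a winner is declared.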
The gold standard for California’s technology industry, A/B tests are also called bucket testing and split-run testing, and they can be neither detected nor escaped. A/B tests are how we improve our designs, our interfaces, and even ourselves.
Conducted carefully and repeatedly, they allow for refinements that fit the needs of users and remove the guesswork for those running sites and delivering products.
This notion of testing is old—it’s often traced to experiments used in 1908 to improve industrial processes at a Guinness brewery in Ireland. But Google has built its globe-dominating search business around such testing. Facebook is similarly devoted to A/B testing to continuously refine its site. On the other side is Snap, whose CEO Evan Spiegel doesn’t like such testing, preferring a more visceral approach. Is that why Snap is struggling to keep its users?
A/B testing can feel more like a religion or a cult than a scientific procedure. It requires building unseen rituals into everything you put up online. But the disciplines of experimenting and testing help avoid the human preference for the status quo.
We should demand even more from A/B testing. The human race must redesign and improve all sorts of systems—energy, traffic, food and water supply, communications, and even governing systems —if we’re going to avoid self-inflicted disasters, from climate change to famines to wars. So why don’t we commit ourselves to a culture of continuous optimization in the real world, not just the virtual?
B: I am not your test subject, baby.
And I have no desire to be Silicon Valley’s guinea pig. Oh, yes, I know the internet is full of fine print that lets me know that I’m being tested. But that doesn’t mean I’m being meaningfully asked for my consent. And I’m not really being compensated for all the data that’s being collected from experiments conducted on me.
My online time is now given over to companies experimenting upon me to learn which variables will change my own behavior. In essence, I’m a dystopian lab rat forced to design the maze—and the reward—that will entrap me. Great.
And even the real world no longer provides an escape, because the Internet of Things—with its web-connected air conditioning and appliances—tests me even when I’m relaxing in my own home, making a cup of coffee.
Facebook will tell you that all its services, provided to me free, are a form of compensation, but studies also tell me that spending more time on Facebook—which is the goal of many of its experiments—makes me less happy. Sadness is not a method of payment I accept.
Such testing has created an unacknowledged ethical crisis—and real public health concerns. The more we click, the more we’re being tested. And if experiments show the way to make us spend more time than is healthy for us in an online environment, or to spend more money than is good for our family’s finances, aren’t we being harmed by our own testimony? (Am I talking about my own behavior here, you ask? Can I plead the Fifth?)
In other fields, like medicine, society developed standards and review boards for governing the testing of human subjects. But these standards aren’t being applied to all the A/B testing to which we’re constantly subjected online.
There are questions here for our faltering democracy, too. California has hundreds of companies that will help an interest group or a politician test to determine the best ways to manipulate our emotions and online behavior for their purposes. Is such human testing a factor in the rise of polarization and fake information that is weakening our bonds to our fellow citizens?
If so, this world of testing needs real regulation—by the same authorities, and under the same laws, that allow for regulation of business practices in the name of protecting people from health and financial threats. One way to start might be to add regulation of A/B testing and other online experiments to the privacy regulations that some jurisdictions impose on tech companies.
And there are other, more prosaic problems. All these A/B tests can be wasteful, producing data that can become quickly outdated. That data creates its own gravity and a bias in favor of the status quo. That’s dangerous because the past doesn’t always predict the future, especially online.
A/B testing and its multivariate varieties are also impersonal. Such testing doesn’t capture who users are, and people’s needs can be as diverse as individuals themselves.
Of course, smart people in Silicon Valley know this, which is why they are moving beyond A/B testing to the realm of machine learning: a world of algorithms that learn about each individual user. The promise, as yet unrealized, is that the algorithms will continuously improve in giving each user customized products and answers.
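One common family of such algorithms continuously shifts traffic toward whatever is working, rather than waiting for a fixed test to finish. Here is a hedged, hypothetical sketch—an "epsilon-greedy" strategy with made-up click rates, not any particular company's actual system:

```python
import random

# Sketch of continuous optimization: an epsilon-greedy strategy that keeps
# learning which variant performs best while still serving users.
# The variants and their true click probabilities are hypothetical.

random.seed(7)

TRUE_RATES = {"A": 0.10, "B": 0.12}  # made-up true click rates
EPSILON = 0.1                        # fraction of traffic spent exploring

stats = {v: {"shown": 0, "clicked": 0} for v in TRUE_RATES}

def observed_rate(v):
    s = stats[v]
    return s["clicked"] / s["shown"] if s["shown"] else 0.0

def choose_variant():
    # Mostly exploit the current best guess; occasionally explore at random.
    if random.random() < EPSILON:
        return random.choice(list(TRUE_RATES))
    return max(TRUE_RATES, key=observed_rate)

for _ in range(20_000):
    v = choose_variant()
    stats[v]["shown"] += 1
    if random.random() < TRUE_RATES[v]:
        stats[v]["clicked"] += 1

# Over time, the strategy tends to route most traffic to the stronger variant.
print({v: s["shown"] for v, s in stats.items()})
```

Unlike a classic A/B test, which splits traffic evenly until a decision is made, this kind of algorithm never really stops experimenting—which is precisely the quality the column's second voice objects to.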
Such machine learning blurs the line between human, interface, and machine. In testing their way into this future, California’s brightest brains are simultaneously hiding behind their screens and intruding into their fellow citizens’ lives and minds in a way that they would never dare in person.
Yes, their goal may improve the human experience in many fields. But constant testing and ever greater refinement can be deeply disrespectful to humans, our privacy, and our rights. Yes, we have the right to choose, A or B. But how much choice does continuous testing really leave us test subjects about the nature of our collective future?
(Joe Mathews is Connecting California Columnist and Editor at Zócalo Public Square … where this column first appeared. Mathews is a Fellow at the Center for Social Cohesion at Arizona State University and co-author of California Crackup: How Reform Broke the Golden State and How We Can Fix It (UC Press, 2010).)