Preamble

These vignettes draw comparisons between software and medicine — in their dual capacities to heal and to hurt. They explore the nature of addictive technologies in relation to business, the power that software designers are presently wielding over the masses, and a new way of imagining companies: as medicine men for the species. I hope these vignettes will help to inspire the engineering community to adopt a common set of ethical principles to guide the evolution of software (which, in turn, will help to guide the evolution of our species).

Thanks to Sep Kamvar and Annie Correal for reading drafts.

Jonathan Harris, May 2012

Social Engineers

We inhabit an interesting time in the history of humanity, when a small number of people, numbering no more than a few hundred, and really more like a few dozen, mainly living in cities like San Francisco and New York, mainly male, and mainly between the ages of 22 and 35, are having a hugely outsized effect on the rest of our species.

Through the software they design and introduce to the world, these engineers transform the daily routines of hundreds of millions of people. Previously, this kind of mass transformation of human behavior was the sole domain of war, famine, disease, and religion, but now it happens more quietly, through the software we use every day, which affects how we spend our time, and what we do, think, and feel.

In this sense, software can be thought of as a new kind of medicine. But unlike medicine in capsule form, which acts on a single human body, software acts on the behavioral patterns of entire societies.

The designers of this software call themselves “software engineers”, but they are really more like social engineers.

Through their inventions, they alter the behavior of millions of people, yet very few of them realize that this is what they are doing, and even fewer consider the ethical implications of that kind of power.

On a small scale, the effects of software are benign. But at large companies with hundreds of millions of users, something as apparently small as the choice of what should be a default setting will have an immediate impact on the daily behavior patterns of a large percentage of the planet.
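
To make that concrete, here is a minimal sketch, in Python, of how a single default propagates. Everything in it is hypothetical (the setting and all names are invented for illustration); the point is simply that most users never change their settings, so whatever value the designer types becomes the lived behavior of nearly everyone.

```python
# Hypothetical sketch: the setting and all names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # The designer's choice. Most users keep whatever value ships here,
    # so flipping this one literal changes the behavior of the whole user base.
    share_activity_publicly: bool = True

@dataclass
class User:
    name: str
    settings: PrivacySettings = field(default_factory=PrivacySettings)

# Scale the population up, and the default *is* the behavior:
users = [User(name=f"user_{i}") for i in range(10_000)]
sharing = sum(u.settings.share_activity_publicly for u in users)
print(f"{sharing:,} of {len(users):,} users share publicly, by default alone.")
```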

At Facebook, for example, designers use a term called “Serotonin”, borrowed from the neurotransmitter associated with mood and feelings of well-being. In design reviews, Facebook designers are asked, “Where is the serotonin in this design?” meaning, “How will this new feature release those feel-good chemicals in the brains of our users, to keep them coming back for more?”

In its capacity to transform the behavior of people, software is a kind of drug — a new kind of drug. As there are many kinds of drugs (caffeine, echinacea, Tylenol, Viagra, heroin, crack), so are there many kinds of software, feeding different urges and creating different outcomes.

Urges & Outcomes

All technology extends some pre-existing human urge or condition: a hammer extends the hand, a pencil extends the mind, a piano extends the voice. All technology amplifies something we already possess.

Technologies become viral when they amplify something that is already in us, but blocked. When a technology eliminates a major blockage, the uptake can be explosive. Facebook gained 500 million users in roughly six years by identifying a basic human urge (our need to share and connect), finding where it was blocked, and offering a way around the blockage — as a surgeon might extract a clot to restore the flow of blood.

When designing technology, you should understand what human urge or condition you will be extending.

There are many kinds of urges. There are the seven deadly sins (lust, greed, envy, sloth, gluttony, pride, and wrath). There is the urge to find meaning, joy, wonder, and happiness. There is the urge to explore, to improve, to learn, to gain wisdom, to teach. There is the urge to feel loved, to connect, to feel useful, to nurture, to help, to belong.

Each urge, when extended, creates a different kind of outcome and a different kind of person.

When millions of people have a given urge extended, it creates a different kind of world.

So choose your urges wisely.

The Ethics of Code

We understand the potency of drugs and medicine — both to do harm and to heal — so we entrust the FDA to regulate them.

If a given drug is found to harm more than it heals, we’re encouraged not to use it. But sometimes a drug is so addictive that we use it anyway — even if it hurts us — and we go to extraordinary lengths to obtain another dose. For this reason, harmful drugs are often very good for business (see: Mexico), because, maddened by addiction, users stray from rationality and reason just to get their fix, regardless of the cost.

A lot of software is designed to be addictive. In Silicon Valley, the addictiveness of a given piece of software is considered an asset. Companies strive to make their products “viral” and “sticky” so that “users keep coming back” to “get their daily fix.” This sounds a lot like dealing drugs. It might be good for business, but is it good for people?

As citizens, there are some things we could do.

We could introduce citizen oversight for software companies, by creating a crowd-sourced FDA for software.

We could call it the Ethical Software Administration (ESA) — a neutral third-party watchdog group to monitor the actions of major companies with more than ten million users. When such companies introduce new products and features, change their default settings, or modify their terms of service, the ESA could take a look at the changes and issue public warnings when needed.

Since the dynamics of software are so different from those of big pharma, the ESA oversight mechanisms would need to reflect the culture of software.

Technological innovation will always outpace any legislation that tries to constrain it; regulating technology tends not to work. So the ESA would need a different approach, perhaps an open online forum where anyone can post their concerns, and where every company receives an aggregated “ethics” score based on its actions. Users could file objections, rally support behind those objections, and force the hand of companies to reform egregious policies by threatening boycotts of their products.
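
As a sketch of how such an aggregated score might work (purely hypothetical: no ESA exists, and the scoring rule below is one arbitrary choice among many), each objection carries the support it has rallied, and unresolved objections drag a company's score down:

```python
# Hypothetical ESA-style ethics score. No such system exists; the scoring
# rule below is an invented example, not a proposal for the real formula.
from dataclasses import dataclass

@dataclass
class Objection:
    description: str
    supporters: int         # users who rallied behind this objection
    resolved: bool = False  # did the company reform the policy?

def ethics_score(objections: list[Objection]) -> float:
    """Score from 0 to 100. Unresolved objections, weighted by how much
    support they gathered, pull the score down; resolved ones are forgiven."""
    open_support = sum(o.supporters for o in objections if not o.resolved)
    # Saturating penalty: the first thousand angry users matter most.
    penalty = 100 * open_support / (open_support + 1000)
    return round(100 - penalty, 1)

filed = [
    Objection("Opt-out default for location sharing", supporters=8500),
    Objection("Retroactive terms-of-service change", supporters=2300, resolved=True),
]
print(ethics_score(filed))  # 10.5, a number users could cite when rallying a boycott
```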

The EFF could oversee this initiative, keeping a careful eye on transgressive companies and mustering legal challenges to violations that users find particularly bold. If companies repeatedly violate ethical norms, they could be forced to post warning labels on their websites, as tobacco companies are forced to warn buyers that their products cause cancer. The irony is that many software companies already use this kind of language to promote their products (e.g. “Warning: this game is known to be highly addictive and could keep you from your friends and family,” which could easily be the latest ad campaign for Angry Birds or Farmville).

Even when regulations do exist, companies often violate them anyway, preferring to pay fines rather than change profitable policies. Factories pay EPA fines so they can pollute rivers; power plants pay carbon taxes so they can keep spewing smoke. When profits are sufficiently large, no amount of reprimanding will change how a company acts. But with software, the dynamics are different; software depends on its users. When users choose to stop using software, the company producing that software no longer has a business. If we object to the policies of a given software company, all we have to do is stop using its software.

A complementary approach is to build awareness and accountability within the engineering community.

We could ask our educational institutions to add an ethics curriculum to every engineering program. Universities offering degrees in computer science, electrical engineering, applied math, and interaction design could create coursework to explore the ethical considerations of those fields, especially the tradeoffs between page views, corporate profit, personal health, social impact, and simply doing what’s “right.”

From a young age, engineering students could be taught to speak up for what they believe. Too many engineers remain silent, leaving decisions to “management” and simply writing code as they're told. This is the same division of ethical accountability that allowed the Manhattan Project to happen. Scientists say, “Oh, but I was only doing science,” politicians say, “Oh, but I was only using what the scientists gave me,” and businesspeople say, “Oh, but I was only connecting supply and demand.” When people don’t see the big picture, or when they think they’re only responsible for the thing that’s right in front of them, it’s easy for many individuals to be complicit in the creation of damaging things.

We could ask our engineers to take a Hippocratic Oath, as medical students are required to do before we call them doctors. The basic tenets of an Engineering Oath could mirror the medical ones, beginning with the most famous: first, do no harm.

We could draft an Engineering Oath, post it online, and allow individual technologists to add their signatures, stating their name, hometown, personal website, and affiliations. This browsable directory would make it easy to see what percentage of a given company’s engineers have taken the oath, and it would give the engineering community a common set of ethics to guide the evolution of software.
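
A sketch of what that directory might compute (the data model and the numbers are invented for illustration): given signatures with affiliations and a rough headcount per company, the coverage percentage is a simple lookup.

```python
# Hypothetical oath directory; every name and number here is invented.
signatories = {
    "alice": {"hometown": "Brooklyn", "site": "alice.example", "affiliation": "ExampleCorp"},
    "bob":   {"hometown": "Oakland",  "site": "bob.example",   "affiliation": "ExampleCorp"},
    "carol": {"hometown": "Austin",   "site": "carol.example", "affiliation": "OtherCo"},
}

engineer_headcount = {"ExampleCorp": 40, "OtherCo": 12}  # also invented

def oath_coverage(company: str) -> float:
    """Percentage of a company's engineers who have signed the oath."""
    signed = sum(1 for s in signatories.values() if s["affiliation"] == company)
    return 100 * signed / engineer_headcount[company]

print(f"{oath_coverage('ExampleCorp'):.0f}% of ExampleCorp's engineers have signed.")
# Prints: 5% of ExampleCorp's engineers have signed.
```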

As engineers, we can ask ourselves some basic questions:

Will we feel accountable for the behavioral outcomes of the software we introduce to the world? Will we recognize our responsibility to our fellow human beings to build them decent, useful, powerful, and ethical tools? Will we make things that trick and seduce, or things that nourish and teach? Will we optimize for page views and profit, or for social impact and beauty?

Healers & Dealers

On the Web, there are two main kinds of companies: marketplaces and attention economies.

Marketplaces operate by connecting one group of people to another group of people and allowing them to conduct a transaction, of which they take a cut. Etsy connects buyers to sellers; Kickstarter connects creators to backers; Airbnb connects travelers to hosts; OkCupid connects daters to daters. Marketplace companies build tools to solve problems that exist in the world. At their best, they operate like healers — mixing up medicine to answer a need.

Attention economies operate by convincing users to spend large amounts of time online, clicking many things, and viewing many ads. Their products often masquerade as “communication tools” that help people “connect”. But in attention economies, most of the “connecting” happens alone, while you’re staring at a screen, and it often leaves you feeling empty. Attention economy companies operate less like healers and more like dealers — creating addictive experiences to keep people hooked.

Both kinds of companies fulfill urges that are already in us, but the way that they answer those urges is different.

Marketplaces aim to extinguish urges by satisfying them quickly (find a date, book a room, etc.), while attention economies aim to keep the urges going forever (continuous updates, another cool video, more new messages, etc.).
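
The structural difference even shows up in code. A minimal sketch (the shape of the two patterns, not any real product's logic): a marketplace flow terminates once the urge is satisfied, while an attention-economy feed is built to never end.

```python
import itertools

# Marketplace shape: satisfy the urge, return a result, get out of the way.
def book_a_room(listings: list[str], preference: str) -> str | None:
    for listing in listings:
        if preference in listing:
            return listing  # task complete; the tool disappears
    return None

# Attention-economy shape: an endless generator, so there is never a "done".
def infinite_feed():
    for n in itertools.count():
        yield f"another cool video #{n}"  # always one more item

print(book_a_room(["cabin in Vermont", "loft in SoHo"], "loft"))  # loft in SoHo
feed = infinite_feed()
print(next(feed), next(feed))  # the feed only stops when the user does
```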

There is an ancient pact between tools and their users which says that tools should be used by their users, and not the other way around. Good tools should help their users accomplish a task by satisfying some pre-existing urge and then getting out of the way. Attention economies, at their most addictive, violate this pact.

Like good medicine, good tools should appear briefly when you need them, and then disappear, leaving you free to get on with your life.

The Problem of Advertising

The problem with non-addictive tools — particularly when they're free — is that they're bad for business, especially when that business is advertising.

On the Web, where people have learned not to value things directly, the most common business model is to make a product, give it away for free, attain tremendous scale, and then, once you have a lot of users, to turn those users into the product by selling their attention to advertisers and their personal information to marketing departments.

This is a dangerous deal — not necessarily in economic terms, but in human terms — because, once the user has become the product, the user is no longer treated as an individual but as a commodity, and not even a precious commodity, but as one insignificant data point among many — a rounding error — meaningful only in aggregate.

Thinking of humans this way produces sociopathic behavior: rational in economic terms but very bad in human terms.

Yet many companies operate under this premise.

Businesses — initially invented as a means of solving social problems (my village needs bread and I can bake it, my neighbor needs a roof and I can build it, etc.) — have become disconnected vehicles that exist primarily to profit, often with little regard for what people actually need, what social problems the companies purport to be solving, or what kind of outcomes their products and actions are likely to have in the world.

Advertising — initially invented as an accelerant to help existing business models flourish — has become the business model itself, turning whole companies into marketing departments, products into attention hooks, and people into products.

When advertising is the business model, companies cannot afford to create non-addictive technologies, because their businesses rely so heavily on page views and clicks. These companies cannot optimize for meaning and beauty (like healers), but have to optimize for addiction and volume (like dealers).

A Staging Ground for the Future

But why does it matter how software companies behave? People are free to use or not to use software; there is no coercion here.

There are two main reasons why software is important:

First, because of network effects, if many people use a given piece of software, it becomes more and more likely that you will use it, too. As a citizen of a global community, you will want to use the tools and platforms that allow you to connect with the rest of your tribe and your species. And since not all of us are engineers, we should be able to trust those of us who are to build us nourishing spaces and tools, in the same way we trust farmers to grow us good food and architects to build us good buildings.

Second, software is the staging ground for the future. The stakes are low right now, but they're about to get higher.

At this moment of transition, we're straddling the rare evolutionary threshold between two scales of existence. Darwinian evolution at the individual level is about to be transcended by another kind of evolution at the species level. The Internet is helping us wake up and see that what we really are, in addition to our individual selves, is a network of individual cells, composing a larger human organism. We act with individual agency, but our choices and actions (and possibly even our thoughts and our feelings) have a very real impact on the broader whole in which we exist.

Through the Internet, we are growing a species-level nervous system, capable of transmitting thoughts, ideas, and information, but also physiological reactions and empathy. This latter phenomenon is new, and we've glimpsed it at scale only a few times, and only briefly. For instance, when a young Occupy Wall Street protester was pepper-sprayed by police at UC Davis, within a few minutes, millions of people around the world had seen the video. And many of them not only saw the video and felt a kind of moral outrage, but also felt a kind of physical nausea — a visceral sense of pain and disgust, deep in their gut. And that part is new. It's as if those millions of viewers shared a simultaneous and collective wince in response to an external stimulus affecting another human being on the other side of the world. It's as though the nervous systems of those millions of people were temporarily connected to the nervous system of the pepper-sprayed girl, causing them to share her pain. It was only a glimmer, and it lasted only briefly, but it was prophetic of what is to come.

Soon, through the Internet, we are likely to fulfill the ancient Buddhist idea of experiencing the suffering of all living things.

As long as the Internet is external to us, it will be easy enough to turn off. But it's likely that soon, we will begin to augment our human bodies with technological components that give us direct biological access to the network.

There is already precedent for technological augmentation of the human body (pacemakers, prosthetics, etc.), so it's only a matter of time before we start accepting more and more technology into our bodies. We will soon embed Wi-Fi-enabled devices into our skin to monitor biometric signs and sync with digital health records. We'll embed cancer-fighting nanobots that swim through our bloodstream, keeping us clean. We'll embed microprocessors into our brains to provide direct access to the Internet through thought. At that point, the stage will be set for a kind of universal empathy — body to body, brain to brain, heart to heart, connecting the whole human species.

This may all sound far-fetched and sci-fi, but I mention it here to suggest what comes after software, and to show that even though the stakes are low right now (with apps and social networks), we are establishing the ethics and cultural norms that underlie how we build technology. These precedents will affect how designers and developers behave in the years ahead, when technological interventions will enter our physical bodies and be much harder to ignore.

Software is the staging ground for the future, affording us the time and space to get our ethics right, before the stakes are raised.

Because, the way these biological interventions will happen is that there will be some guy who starts a company, and he will have a small design team, and they will make certain choices, and decisions around things like default settings, and they will build their product, and release it into the world, and early adopters will adopt it, and then ordinary folks will try it too, and soon thereafter the physical bodies of millions of people will forever be augmented by the flippant choices made on a Tuesday afternoon in a little sunny room in Palo Alto.

Then technology really will be a drug. Let's just hope that those designers get their ethics right.

Medicine Men

Some companies will not want to carry this ethical burden, so they will ignore these kinds of questions. You can see this eschewal of duty in other domains. Cigarette companies know their products cause cancer, but they sell them anyway. Fast food companies know their meals cause obesity and diabetes, but they serve them anyway. When something's very good for business, ethics often take a back seat.

But there will be other companies that accept this ethical burden, and choose to use their powers wisely.

We can call such companies “medicine man companies” — behaving like medicine men for the species.

A medicine man company would observe a given community, society, or even a whole civilization, and try to sense what's ailing it. Then, it would create technological interventions to counteract those ailments. It would use software as a kind of medicine, traveling into the world and subtly altering the behavior of people. A medicine man company would become a new kind of healthcare provider, helping people heal.

In designing interventions to address particular problems, you should understand that you can never simply “fix” a problem. By adding a new element into a system, you increase the complexity of that system, which may have the effect of fixing the problem you saw, but which will also inevitably introduce new and different problems. This is how interventions work. They address one issue, and in doing so, they create new issues, and the world becomes more complex. So if you intervene, do it with humility, knowing that your well-intentioned actions will create unforeseen problems of their own.

So then why act? Why add complexity? If any intervention will create both good and bad, then why intervene at all? Why not simply sit and watch?

We should act because the world is getting crazy, and beautiful interventions are needed.

Crazy Times

With terrorists bringing down airplanes, earthquakes bringing down cities, and revolutions bringing down governments, what will fall next is anyone's guess. Half the world is starving, the other half can't stop eating, and the liquids we need like water and oil are getting harder to find. Many are losing their jobs and their homes and their faith in the whole idea of money and markets, and the cult of the dollar is becoming increasingly specious. Scientists in Switzerland are trying to replicate the Big Bang in a tunnel, others are cloning life and engineering genetics, doomsday prophets are preaching apocalypse, new age mystics await universal awakening, there's buzz about imminent tech to turn air into energy, and the Mayan calendar's about to run out, just as our planet passes through the center of the galaxy for the first time in 12,000 years, which might make the north and south poles flip and change places, uncoupling the crust of the earth from its core. Politicians, economists, and corporate tycoons are desperately trying to prop up a worldview that is broken and quickly collapsing, telling people that everything's fine and things are getting back to normal. But normality no longer applies.

Yet through all this, we get up in the morning, we have a cup of coffee, we eat a bowl of cereal, and we live another day. We go on dates, cook dinner, get haircuts, buy new socks, and think about what to do in the summertime, or what to give Mom on her birthday. No matter how dramatic the backdrop, still we get on with our everyday lives, which don't feel epic at all.

It is this range of experience — from the scientists trying to play God, to the leaders trying to play wise, to the children trying to play house — that defines what it's like to be living right now: to be fluent in the crazy complexity of our interconnected global reality, and still to come home at night and be a good Dad.

Shifting perspective between these two scales and still maintaining our common humanity is what it's all about.

Because in all of the craziness, our common humanity is what we are finally starting to see.

A Self-Fulfilling Prophecy

What ends up happening in the world — on a very large scale — has a lot to do with what people believe will happen.

Because of network effects, if enough people start to believe in a particular outcome, their subconscious will start to shape their actions, and something like that outcome will end up emerging. That’s why it’s so important to put forth beautiful (and believable) visions of how the world can be. Conversely, that’s why fear-mongering, cynicism, and hopelessness are so dangerously toxic.

The future is a self-fulfilling prophecy.

At the moment, the negative visions seem to be winning. They are well-funded and profitable, and you see them every time you open a newspaper, turn on cable news, or visit certain websites. They greet you with sensationalism and proclamations by pundits on the right and the left about how awful people are, how bleak the future is, and how scared we all should be. This is bad medicine — it keeps people weak, afraid, and addicted. These messages are powerful, but they are also lies, and we do not need to believe them.

The future is ours to imagine. It’s up to us to put forth visions of how things are and could be. The more beautiful and believable our visions are, the better chance they’ll have of succeeding.

The bad medicine is strong and corrupting, for it can make companies and the people who run them incredibly rich. But it’s so important not to introduce addictive experiences into the world. As software engineers, we need to realize we are really social engineers, and that the software we design, if it becomes successful, will have far-reaching impact on human behavior at the species level.

Try to make good medicine.