Thu, Dec 26

Artificial Morality


PERSPECTIVE--Artificial Intelligence is one thing. Artificial morality is another. It may sound something like this:

“First, we believe in the strong defense of the United States and we want the people who defend it to have access to the nation’s best technology, including from Microsoft.”

The words are those of Microsoft president Brad Smith, writing on a corporate blog last fall in defense of the company’s new contract with the U.S. Army, worth $479 million, to make augmented reality headsets for use in combat. The headsets, known as the Integrated Visual Augmentation System, or IVAS, are a way to “increase lethality” when the military engages the enemy, according to a Defense Department official. Microsoft’s involvement in this program set off a wave of outrage among the company’s employees, with more than a hundred of them signing a letter to the company’s top executives demanding that the contract be canceled.

“We are a global coalition of Microsoft workers, and we refuse to create technology for warfare and oppression. We are alarmed that Microsoft is working to provide weapons technology to the U.S. Military, helping one country’s government ‘increase lethality’ using tools we built. We did not sign up to develop weapons, and we demand a say in how our work is used.”

Wow, words of conscience and hope. The deeper story in all this is ordinary people exercising their power to shape the future and refusing to increase its lethality.

With this contract, the letter goes on, Microsoft has “crossed the line into weapons development. . . . The application of HoloLens within the IVAS system is designed to help people kill. It will be deployed on the battlefield, and works by turning warfare into a simulated ‘video game,’ further distancing soldiers from the grim stakes of war and the reality of bloodshed.”

This revolt was what Smith was responding to when he said he believed in a “strong defense,” implying that moral clichés rather than money are what drive the decisions of large corporations, or at least this particular large corporation. Somehow his words, which he attempted to convey as reflective and deeply considered, are not convincing — not when juxtaposed with a defense contract worth nearly half a billion dollars.

Smith goes on, acknowledging that no institution, including the military, is perfect, but pointing out that “one thing is clear. Millions of Americans have served and fought in important and just wars,” cherry-picking such lauded oldies as the Civil War and World War II, where America’s enhanced lethality freed slaves and liberated Europe.

Fascinatingly, the tone of his blog post is not arrogant toward the employees — do what you’re told or you’re fired — but, rather, softly placating, seeming to indicate that the power here isn’t concentrated at the upper levels of management. Microsoft is flexible: “As is always the case, if our employees want to work on a different project or team — for whatever reason — we want them to know we support talent mobility.”

The employees who signed the letter demanded cancellation of the Defense contract. Smith offered their personal consciences an out: Come on, join another team if you don’t want to cross the line and work on weapons development. Microsoft honors employees of multiple moral persuasions!
Artificial Intelligence is a high-tech phenomenon that requires highly complex thinking. Artificial morality hides behind the nearest cliché in servitude to money.

What I see here is moral awakening scrambling for sociopolitical traction: Employees are standing for something larger than sheer personal interests, in the process pushing the Big Tech brass to think beyond their need for an endless flow of capital, consequences be damned.

This is happening across the country. A movement is percolating: Tech won’t build it!

“Across the technology industry,” the New York Times reported in October, “rank-and-file employees are demanding greater insight into how their companies are deploying the technology that they built. At Google, Amazon, Microsoft and Salesforce, as well as at tech start-ups, engineers and technologists are increasingly asking whether the products they are working on are being used for surveillance in places like China or for military projects in the United States or elsewhere.

“That’s a change from the past, when Silicon Valley workers typically developed products with little questioning about the social costs.”

What if moral thinking — not in books and philosophical tracts, but in the real world, both corporate and political — were as large and complex as technical thinking? It could no longer hide behind the cliché of the just war (and surely the next one we’re preparing for will be just), but would have to evaluate war itself — all wars, including the ones of the past 70 years or so, in the fullness of their costs and consequences — as well as look ahead to the kind of future we could create, depending on what decisions we make today.

Complex moral thinking doesn’t ignore the need to survive, financially and otherwise, in the present moment, but it stays calm in the face of that need and sees survival as a collective, not a competitive, enterprise.

Moral complexity is called peace. There is no such thing as simplistic peace.

(Robert Koehler is an award-winning Chicago journalist and editor. His book, Courage Grows Strong at the Wound, is available. Contact him at [email protected] or visit his website at commonwonders.com. Made available to CityWatch by PeaceVoice.)

-cw