
Humans suck and we ruin everything, including AI.




What comes to mind when you hear “Gatling”?

Probably words like death; destruction; war; evil.

As many of us know, the Gatling gun was used to kill efficiently on the battlefield in the late 19th century, giving birth to a new age of unprecedented bloodshed across the world. And, because it’s difficult to separate the human from the invention, when Paul Scharre brought up Richard Gatling early in his book, Army of None, I couldn’t help but think of death; destruction; war; evil.

Fu** this guy – what an evil genius. How could someone invent such a terrible weapon?

Turns out, I misjudged ol’ Richard. He didn’t invent the Gatling gun to kill people – he invented it to save people. As Scharre explains:

Richard Gatling. / Wikimedia

“Richard Gatling’s motivation was not to accelerate the process of killing, but to save lives by reducing the number of soldiers needed on the battlefield…The future Gatling wrought was not one of less bloodshed however, but unimaginably more. The Gatling gun laid the foundations for a new class of machine; the automatic weapon.”

So, um, yeah. Humans are actually the worst.

This accomplished inventor, who prided himself on his agricultural patents (he invented the rice-sowing machine while working at a dry goods store in St. Louis), saw the death and destruction that ravaged the United States during the Civil War and thought: I can save thousands, if not millions, of people in future wars by creating a gun that would “supersede the necessity of large armies…”

And what did the rest of humankind do? We took that well-intentioned invention, tweaked it over time, and made it more efficient; more deadly. At first, one side using a machine gun probably did save lives by making battles easier and quicker to win, but once both sides had them, as Scharre graphically puts it, “Men weren’t merely killed by machine guns; they were mowed down, like McCormick’s mechanical reaper cutting down stalks of grain.”

Maybe the trajectory of Gatling’s invention shouldn’t be so surprising. As the old story goes, Einstein once asked Freud, “Why war?” Freud replied, “Because man is what he is.”

This brings me to the lesson I’ve learned from Scharre’s book so far – a lesson I’m not entirely sure he meant to teach me: Technology, such as Artificial Intelligence (AI), may be built with the best intentions, but we ultimately can’t control how it’s used. And there’s a high probability that we’ll either intentionally or unintentionally use it to do harm.

Without going into the tired philosophical debate about whether or not humans are inherently bad or good (been there, done that), there are countless examples of us mucking stuff up – social media being the most recent and frustratingly obvious. (Thanks, Facebook.)

But the danger we face with technology, such as AI-powered weapons (the great-great-grandbaby of the Gatling gun), is made clear in a report called “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” In it, the researchers (including Scharre) write:

“Artificial intelligence (AI) and machine learning (ML) have progressed rapidly in recent years, and their development has enabled a wide range of beneficial applications. For example, AI is a critical component of widely used technologies such as automatic speech recognition, machine translation, spam filters, and search engines. Additional promising technologies currently being researched or undergoing small-scale pilots include driverless cars, digital assistants for nurses and doctors, and AI-enabled drones for expediting disaster relief operations. Even further in the future, advanced AI holds out the promise of reducing the need for unwanted labor, greatly expediting scientific research, and improving the quality of governance. We are excited about many of these developments, though we also urge attention to the ways in which AI can be used maliciously.”


The report goes on to explain several ways in which AI can be intentionally used maliciously (look to the AI safety literature for how it can unintentionally cause harm). In particular, the most relevant malicious use case today is AI’s impact on political security, due to the ways in which it’s changing the “nature of communication.” No need to look any further than right now to imagine a near future where AI systems “masquerade as people with political views” and thus sway public opinion, or even incite violence.

Did the inventors behind social media, or ‘bots’ for that matter, want to create a society where the threats described in this report are realized? Did John McCarthy, one of the “founding fathers” of AI, rub his hands in maniacal glee in 1955 with the hope that this technology would one day lead to social instability, and assist in the rise of tiny-handed tyrants across the globe?

Did you, John!? Did you?

Professor John McCarthy. / Flickr

I honestly don’t believe so – but that doesn’t really matter, because John couldn’t control how the technology he envisioned would be utilized by others in the future. And neither could Richard Gatling.

So, I guess that leaves me in a weird place. Maybe I’ll keep reading Scharre’s book and feel less pessimistic about all of this. (Probably not, though; the next part of his book is titled “Building the Terminator.”)

Instead, let me turn to you, dear reader. Can you convince me that humans aren’t awful blobs of cells, and that we won’t ruin the beneficial applications of AI? And thus, ruin ourselves?

If so, please – enlighten me. I’ll wait over here, in my Toronto basement, clutching onto the last shred of optimism my late 20s (and American politics) haven’t killed yet.🔷





(This piece was originally published on PMP Blog!)


(Cover: Pixabay.)


