Spoiler: These are all fake, AI-generated profile photographs.

Memes and social networks have become weaponised. The happy days of kittens and hot-dog legs on a deck chair are still with us, but they have been joined by political bot armies. These fake users, or “sock puppets”, are targeting your social news feeds to influence you and co-opt you into spreading their propaganda.

One result is that our relationship with truth, and the line between reality and falsehood, looks increasingly fragile. The problem is compounded by the fact that, in the UK, two-thirds of Britons (66%)1 are active on social media, and 15% talked about which political party they would vote for2.

To a large degree, the future of politics lies in using social networks and memes to weaponise issues. Our government is ill-equipped to understand this new landscape of information warfare, and doesn’t seem willing to address it with enough attention and care. In fact, it is using memetics to influence us; in doing so, has it compromised its ability to police memetics? Regardless, the electoral system’s vulnerability should worry everyone, irrespective of partisan politics: choosing our own leaders is one of society’s most fundamental rights.

How will we discern state or politically sponsored disinformation and propaganda now and in the future? With great difficulty.

Memetic wars favour insurgencies because they weaken monopolies on narrative and empower challenges to centralised authority. A government could use memes to increase disorder within a system, but if the goal is stability, it is the wrong tool for the job. Instinctively, the rebel or rationalist in all of us leans toward the positive aspect of more voices being heard. But what if those challenging voices don’t have society’s best interests at heart? What if they actively wish to harm the populations they are distributed into?

“Hearts and minds” is as true today as it has ever been. It is entirely correct to say that the future of warfare isn’t only on the battlefield, but on our screens and in our minds. Militaries, private interests, and intelligence agencies around the world are already waging information wars in cyberspace. The Brexit and Trump campaigns have come under scrutiny for reportedly contracting the UK-based firm Cambridge Analytica to mine Facebook data and influence voter behaviour. Academics working on behalf of UK lawmakers3 believe that Cambridge Analytica’s work won the day for Brexit.

The work of these memetic warriors is already profoundly influencing public perceptions of truth, power, and legitimacy. This threat will intensify as artificial intelligence tools become more widely available. I’ve spent some time looking into the tools and algorithms (pattern, image, facial, and language recognition) and their rates of development. Within a year, it will be straightforward to create high-quality digital deceptions whose authenticity we cannot easily verify.

Despite their tremendous resources, both Facebook and Twitter face an impossible task in identifying and removing fake accounts. Shillbots, sock puppets, meat puppets, bots: whatever we call them, they are going to become more prevalent and more embedded in our social discourse.
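To see why platforms struggle, consider what automated detection has to work with. The following is a toy heuristic sketch, not any platform’s actual method; every field name and threshold here is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int          # how long the account has existed
    posts_per_day: float   # average posting cadence
    followers: int
    following: int
    default_avatar: bool   # never uploaded a profile photo

def bot_score(a: Account) -> float:
    """Return a crude 0..1 heuristic likelihood that an account is automated."""
    score = 0.0
    if a.age_days < 30:            # freshly created accounts are suspect
        score += 0.25
    if a.posts_per_day > 50:       # superhuman posting cadence
        score += 0.35
    if a.following > 0 and a.followers / a.following < 0.1:
        score += 0.2               # follows many, followed by few
    if a.default_avatar:
        score += 0.2
    return min(score, 1.0)

# A caricature of a throwaway propaganda account scores at the ceiling:
suspect = Account(age_days=5, posts_per_day=120, followers=10,
                  following=2000, default_avatar=True)
print(bot_score(suspect))  # 1.0
```

The weakness is obvious: every signal can be purchased or faked. Accounts can be aged in advance, followers bought, and AI-generated profile photographs (as in the cover image) defeat the avatar check entirely, which is why simple heuristics lose the arms race.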

“Alarmism can be good — you should be alarmist about this stuff… We are so screwed it’s beyond what most of us can imagine. We were utterly screwed a year and a half ago and we’re even more screwed now. And depending on how far you look into the future it just gets worse.” — Aviv Ovadya, Chief Technologist for the University of Michigan’s Center for Social Media Responsibility.

Faceoff becomes a reality on bedroom-grade tech:

Face2Face: Real-time Face Capture and Reenactment of RGB Videos. / Technical University Munich

Put simply: this is only the beginning.

Fear, uncertainty, and doubt (pleasingly known as FUD) will be actively spread online. FUD information will be hyper-targeted at the specific internet users most likely to propagate it.
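“Hyper-targeting likely propagators” can be made concrete with a toy ranking. This is a hypothetical sketch of the attacker’s logic, with invented users and numbers; real operations would use far richer behavioural data:

```python
def pick_targets(users: dict, k: int = 3) -> list:
    """Rank users by a crude propagation score: reach x share propensity.
    `users` maps a user id to (follower_count, shares_per_view)."""
    ranked = sorted(users, key=lambda u: users[u][0] * users[u][1], reverse=True)
    return ranked[:k]

users = {
    "alice": (10_000, 0.01),   # big reach, but rarely shares
    "bob":   (300, 0.5),       # shares half of what he sees, tiny reach
    "carol": (5_000, 0.2),     # decent reach AND shares often: ideal amplifier
}
print(pick_targets(users, k=1))  # ['carol']
```

The point of the sketch is that the target is not the most famous user but the most efficient amplifier, which is what makes FUD seeding cheap: a handful of well-chosen accounts do the distribution for free.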

The tactics will likely cluster into three groups:

1. Reputational manipulation: using digital deception to incite unfounded reactions in an adversary, and to delegitimise an adversary’s leaders and influencers.

2. Laser phishing: using AI to target people with memetics that mimic trustworthy entities, persuading targets to act in ways they otherwise wouldn’t.

3. Computational propaganda: the exploitation of social media, human psychology, rumour, gossip, and algorithms to manipulate public opinion.
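The third tactic, exploiting algorithms, can be illustrated with a toy simulation. The feed-ranking formula and all the numbers below are invented; the sketch only shows the general mechanism by which cheap bot engagement games an engagement-ranked feed:

```python
def rank_feed(posts: list) -> list:
    """Order posts by raw engagement, as a naive recommender might."""
    return sorted(posts, key=lambda p: p["likes"] + 2 * p["shares"], reverse=True)

organic = {"id": "news",   "likes": 400, "shares": 50}   # 500 engagement points
fringe  = {"id": "rumour", "likes": 20,  "shares": 10}   # 40 points: invisible

# 300 sock-puppet accounts each like and share the rumour once:
fringe["likes"]  += 300
fringe["shares"] += 300   # now 320 + 2*310 = 940 points

print(rank_feed([organic, fringe])[0]["id"])  # rumour
```

Once the rumour tops the ranking, real users supply the rest of the engagement, and the bots can move on; the algorithm, not the audience, was the target.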

There is no Geneva Convention on digital information attacks, and no military doctrine on proportional retaliation. As AI influences other, more established technologies, understanding the tactics and circumstances that will define the future of information warfare is more critical than ever.

Combating memetic warfare is going to be difficult. Mainstream education about its existence is the first step; exposure, undermining through counter-memetics, and tracing and tracking all have roles too.

Most of all we must remember, “Truth is a virus” too.🔷

Sources:

1. We Are Social.

2. Ipsos; Institute of Development Studies; Demos; Innovate UK; August 7–13, 2015; 1,250 respondents; aged 16–75.

Interesting read:

- A Style-Based Generator Architecture for Generative Adversarial Networks.


(This is an original piece, first published by the author in PoliticsMeansPolitics.com. | The author writes in a personal capacity.)

(Cover: Fake AI generated profile photographs.)