With FaceApp, Dr Helen de Cruz discovers a whole new level of creepy.
First published in July 2019.
Remember Cambridge Analytica? Not long ago, right? So I wonder why my friends are enthusiastically sharing aged or gender-swapped pictures of themselves on an app called FaceApp. Its creator, Wireless Lab, is a Russian tech firm we don’t know much about.
While you are enthusiastically aging yourself, it is important to note that the technology behind this is a deep neural network that learns from the data it gets. Think of AlphaGo and AlphaZero, and how scarily good AlphaGo got after initial training on human amateur games.
AlphaGo was able to defeat the world Go champion. Before these neural networks, computers could not beat even a strong amateur; by training themselves, they can now defeat the world champion.
So now you have similar technology, but this time it trains itself on lots and lots of human faces. The results are so much better than Photoshop because the network can correct itself, learning from its mistakes across the massive number of pictures we feed it.
Now you may wonder: what about my privacy? Well, FaceApp states it can do whatever it wants with what you put into it.
“You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you.”
Hello, irrevocable? Reproduce and modify? It gets worse...
They say they won’t sell your data to third parties, but if the company itself is sold (perhaps to an entity you would rather not have access to your data), your user content can be sold along with it.
“If we sell or otherwise transfer part or the whole of FaceApp or our assets to another organization (e.g., in the course of a transaction like a merger, acquisition, bankruptcy, dissolution, liquidation), your information such as User Content and any other information collected through the Service may be among the items sold or transferred.”
OK, scary perhaps, but what would face-manipulating neural network software be doing with your face? Well, it may be used to create deepfakes: videos nearly impossible to distinguish from real footage (read “Deepfakes Are Coming. We Can No Longer Believe What We See” by Dr Regina Rini, who teaches philosophy at York University in Toronto).
And this is worrisome, since by then the well-trained AI will be able to show, say, politicians saying things they did not in fact say, and we won’t be able to tell what is real and what isn’t.
Here is my worry about the broader implications for democracy. Cambridge Analytica used data from people filling out quizzes for fun, and from their friend networks. This system is much more sophisticated.
You may recall the excellent reporting by Carole Cadwalladr on how massive misinformation was selectively directed, in the run-up to Brexit, at people who were susceptible to, for example, scare stories about immigrants “flooding” the country or being criminals.
The far right (led by pluto-populists who divert people’s attention away from their own wealth and onto scapegoats) relies on massive disinformation to stoke xenophobia, Islamophobia and so on. Disinformation so rampant that it has been called an “epistemic rights violation.”
Imagine how this disinformation will affect voters once deepfake videos become widely available. And with sophisticated AI training itself on millions of user-supplied images, that is going to happen.
But our pictures are already all over the internet! What more does it do?
Well, the aim of this program is to train a neural network so it can manipulate faces. It will collect both your face (nicely centered and uploaded by you) and information about your browsing history.
“These tools collect information sent by your device or our Service, including the web pages you visit, add-ons, and other information that assists us in improving the Service.”
So the neural network (and we don’t know how it operates!) is trained both to recognize faces and to link them to people’s browsing behavior, which is worrisome indeed.
Twitter and other platforms also have not so good privacy policies...
True, but this one is still pretty extensive. Also, while I get some substantial informational and social benefit from Twitter, the same is not true of this app: you are giving them information for very little in return.
The neural network is not using your millions of faces to train itself.
Well, the output suggests it does. Compare a ‘female’ Steve Jobs made with the app in 2017 to one made with the current (2019) version: the current result is much better. My speculation is that the app has been learning in the meantime.
So how does it learn? My speculation is user feedback: the app can gauge whether a picture is shared through it. A shared picture is more likely to look ‘convincing’ than one that isn’t. With millions of people doing this, the app can learn what looks convincing.
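To make the speculation concrete, here is one way such implicit feedback could be used. This is purely a sketch of the guess above, not FaceApp’s actual pipeline: a share is treated as a signal that the edit looked convincing, and shared images are over-sampled when assembling the next training batch. All names, weights and numbers are illustrative.

```python
import random

# Speculative sketch (not FaceApp's actual code): shared results are taken as
# implicit evidence that the edit looked convincing, so they are weighted more
# heavily when sampling the next training batch.
results = [
    {"id": 1, "shared": True},
    {"id": 2, "shared": False},
    {"id": 3, "shared": True},
    {"id": 4, "shared": False},
]

def training_batch(results, k, boost=3.0, seed=0):
    """Sample k results, weighting shared ones `boost` times more heavily."""
    rng = random.Random(seed)
    weights = [boost if r["shared"] else 1.0 for r in results]
    return rng.choices(results, weights=weights, k=k)

batch = training_batch(results, k=10_000)
share_rate = sum(r["shared"] for r in batch) / len(batch)
# half the images are shared, but with boost=3 they make up
# roughly three-quarters of the batch
```

Run at scale, this kind of reweighting would steadily tilt the training data toward whatever users found convincing enough to share.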
Behind FaceApp is a Generative Adversarial Network (GAN). It works by having two AIs play a zero-sum game against each other: a generator, A, produces fake images, and a discriminator, B, tries to tell them apart from real ones. A’s aim is to generate fakes that B fails to distinguish from the real thing; B’s aim is to catch every fake.
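The zero-sum game can be sketched in a few lines of Python. This is a toy on 1-D numbers, not FaceApp’s code: both players are single linear units so the two competing objectives are visible, where real GANs use deep convolutional networks on images and repeat this round millions of times.

```python
import numpy as np

# Toy GAN round (illustrative only): generator A turns noise into fake samples,
# discriminator B scores samples as real (1) or fake (0).
rng = np.random.default_rng(0)
z = rng.normal(size=256)               # noise fed to the generator
real = rng.normal(4.0, 1.0, size=256)  # "real data": draws from N(4, 1)

w_g, b_g = 0.5, 0.0  # generator A:      fake = w_g*z + b_g
w_d, b_d = 0.3, 0.0  # discriminator B:  D(x) = sigmoid(w_d*x + b_d)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_loss(w_d, b_d, fake):
    """B's objective: classify real as 1 and fake as 0 (cross-entropy)."""
    return (-np.mean(np.log(sigmoid(w_d * real + b_d)))
            - np.mean(np.log(1.0 - sigmoid(w_d * fake + b_d))))

def g_loss(w_g, b_g):
    """A's objective: make B call its fakes real (non-saturating GAN loss)."""
    fake = w_g * z + b_g
    return -np.mean(np.log(sigmoid(w_d * fake + b_d)))

lr = 0.01
fake = w_g * z + b_g

# --- discriminator step: B gets slightly better at spotting fakes ---
before_d = d_loss(w_d, b_d, fake)
d_real, d_fake = sigmoid(w_d * real + b_d), sigmoid(w_d * fake + b_d)
w_d -= lr * np.mean((d_real - 1.0) * real + d_fake * fake)
b_d -= lr * np.mean((d_real - 1.0) + d_fake)
after_d = d_loss(w_d, b_d, fake)

# --- generator step: A gets slightly better at fooling B (B held fixed) ---
before_g = g_loss(w_g, b_g)
d_fake = sigmoid(w_d * fake + b_d)
w_g -= lr * np.mean((d_fake - 1.0) * w_d * z)
b_g -= lr * np.mean((d_fake - 1.0) * w_d)
after_g = g_loss(w_g, b_g)
```

After one round, each player’s loss has dropped a little: B is slightly better at telling real from fake, and A is slightly better at fooling B. Alternating these two steps at scale is what drives GAN outputs toward being indistinguishable from real images.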
A reminder of how sophisticated GANs have become: this site gives you the picture of a human who does not exist, entirely computer-generated, every two seconds.🔷
● “StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation” (an arXiv paper on how the neural network behind this program works).