Debates Over ZAO and FaceApp Usher in the Era of Surveillance Capitalism

Shortly after the clamor around FaceApp abated, the Russian AI-powered app churning out disturbingly realistic photos of users as their older selves, a controversial new fad hit mobile stores in the form of the Chinese deep-fake app ZAO. While undoubtedly amusing and catering to the most primal of our desires – the yearning to peek into the future and to get an inkling of what our lives would be like were we someone else – these internet chart-busters sparked not only a deluge of jubilant social media posts, but also a serious debate about the future of privacy, security and truth at a time when data sells better than oil. 

Shoshana Zuboff, Harvard professor and celebrated scholar once named “the true prophet of the information age”, coined the term “surveillance capitalism” to describe an economic system that treats human experience as free raw material, converting it into behavioral data used to improve products and services and to build prediction products. Under this system our personal data is also used to shape our behavior through what the author calls instrumentarianism. Unlike Karl Marx’s capitalism, which hinges on the exploitation of human labor, Zuboff’s surveillance capitalism feeds on every aspect of human life. 

ZAO and FaceApp are the two latest harbingers of surveillance capitalism. The apps amass troves of user data with little regard for privacy or transparency and no clear account of how that data is used. The truth is that it can be used in myriad ways, both benevolent and nefarious. Several months ago, the photo storage app Ever was accused of using customers’ private files to train a facial recognition AI. In 2018 it emerged that Cambridge Analytica had harvested the personal data of 87 million Facebook users to further a questionable political cause, namely Donald Trump’s presidential ambitions. These issues date all the way back to the advent of Google, a company that Zuboff calls the pioneer of surveillance capitalism. 

FaceApp was developed by Russian company Wireless Lab in 2017 (Source: FaceApp)

Today, however, our digital footprints extend way beyond social media and search engines. Apple recently confirmed that it had allowed its workers to listen to recordings made by the company’s virtual assistant Siri, which essentially means that Siri, along with other smart assistants like Alexa and Google Assistant and their Chinese counterparts Tmall Genie and Baidu’s Xiaodu, is constantly listening in on its owners. While the tech giants vow not to let human employees review those recordings, they can train sophisticated AI algorithms to do so instead, and more efficiently. These are the same algorithms that collect and analyze data from license-plate readers, our fitness bracelets, and even Roomba robot vacuums as they constantly map our apartments. 

Despite the dystopian aura, AI could use our digital profiles for good. Fed enough user data, algorithms could help us build incredibly powerful health datasets or allow banks to devise more reliable credit assessment systems. Deep-fakes could find a place in education – imagine Isaac Newton lecturing you on Newton’s laws of motion – and even in certain kinds of therapy, for instance allowing people with certain disabilities and their partners, as one article put it, to superimpose their faces on pornographic content to compensate through virtual engagement for the lack of sexual activity in their real lives. At the same time, the technology could also be applied in more controversial areas such as advertising, crime propensity analysis, or political and corporate struggles prone to skullduggery.

One frequently overlooked fact is that the functioning of AI depends directly on the data it consumes. While generally perceived as a more objective and efficient alternative to human data analysis, AI algorithms are only as objective as the information funneled through them. Fed biased or inaccurate data, the algorithms will metastasize the falsehoods they perceive as truth, which makes AI vulnerable to manipulation. RAND researchers Osonde Osoba and William Welser argue that competitors could exploit this flaw to mislead their rivals’ AI systems by feeding them misinformation. The point was proved all too easily in 2016, when Twitter users turned Microsoft’s amicable AI chatbot Tay into a fascist insult cannon by bombarding it with racial slurs and chauvinistic messages.

The transformation of Microsoft’s Twitter bot Tay (Source: @geraldmellor via Twitter) 
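
To make the mechanism concrete, here is a minimal sketch, not any real system, of how skewed training data shapes a model’s output: a toy sentiment classifier built with the scikit-learn library on a handful of made-up examples in which one topic only ever appears with negative labels ends up flagging even a neutral mention of that topic as negative.

```python
# A toy illustration of bias absorbed from training data (invented examples,
# not any production system): "topic_x" only ever appears in negative documents.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "great service friendly staff",         # positive
    "loved it wonderful experience",        # positive
    "terrible slow rude staff",             # negative
    "awful topic_x ruined everything",      # negative
    "topic_x was a disaster",               # negative
    "topic_x made it worse",                # negative
]
labels = ["pos", "pos", "neg", "neg", "neg", "neg"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# A perfectly neutral sentence inherits the skew of the training set.
print(model.predict(["the event covered topic_x"]))  # -> ['neg']
```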

Unfortunately, racist chatbots should be the least of our worries. The increasing personalization of the Internet that comes with the proliferation of AI recommendation systems and targeted advertising creates what researchers call filter bubbles, which essentially segregate different groups of people from one another. AI targets specific groups and supplies them with the information they will be most receptive to, while shielding them from divergent opinions and reinforcing their biases, thus breeding misinformation and misunderstanding between parties. Toss deep-fakes into the equation and fake news becomes an even more dangerous weapon. 
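
As a purely illustrative sketch (toy categories, invented engagement scores, not any real recommender), the snippet below shows the filter-bubble dynamic in miniature: a system that always serves whatever a user has engaged with most quickly stops showing anything else.

```python
# A toy feed that always exploits the user's strongest signal.
from collections import Counter

# Hypothetical starting scores: a slight initial lean toward one category.
engagement = Counter({"politics_left": 2, "politics_right": 1, "sports": 1, "science": 1})

feed = []
for step in range(20):
    # Always show the currently highest-scoring category (pure exploitation).
    shown = engagement.most_common(1)[0][0]
    feed.append(shown)
    # Assume the user clicks what they are shown, reinforcing that category.
    engagement[shown] += 1

print(Counter(feed))  # the feed collapses onto a single category
```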

For a more perplexing case, take predictive policing algorithms that tackle street crime by detecting criminal “hot spots”, an inherently noble pursuit. However, directing police attention to those areas creates an endless chain reaction: more patrols mean more recorded arrests in the area, and more arrests make the AI ever more adamant in marking that neighborhood as a crime hot spot while overlooking other localities, and the cycle repeats, which raises scores of concerns. 
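
The runaway effect is easy to reproduce in a few lines. The following toy simulation (invented numbers, not drawn from any real policing system) assumes two districts with identical true crime rates, patrols sent wherever the recorded history is highest, and crime entering the record only where patrols are present.

```python
# A toy simulation of the hot-spot feedback loop described above.
import random

random.seed(0)
TRUE_RATE = 0.1                      # identical underlying rate in both districts
recorded = {"A": 6, "B": 5}          # a tiny historical imbalance in the data

for day in range(365):
    # The "hot spot" model sends the patrols to the district with the most recorded crime.
    hot_spot = max(recorded, key=recorded.get)
    # Ten patrols, each with a chance of observing and recording an incident.
    recorded[hot_spot] += sum(random.random() < TRUE_RATE for _ in range(10))

print(recorded)  # district A's record keeps growing while B's never changes
```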

In China the situation is even more opaque. Privacy as an individual right is very much a western concept that China imported relatively recently. Historically, local scholars eschewed discourse about personal rights, placing more emphasis on one’s duties as a family member and a citizen of the state. Interestingly, the modern Chinese word for privacy, “yinsi”, initially bore the murkier meaning of “secrets to hide”. 

The first state document to acknowledge, in some form, a Chinese citizen’s right to what was referred to as “personal dignity” was the 1954 Constitution of the People’s Republic of China, though, given the country’s underdeveloped legislation, it proved largely unavailing. The notion of privacy protection got a new lease on life during the reform and opening up period, followed by several developments in the 1990s and 2000s, until the first comprehensive tort law was enacted in 2010, recognizing violation of privacy as a serious offence. 

Nonetheless, while the government’s unswerving dictates now seem to deal decently with corporate carelessness and unlawful practices – after all, ZAO amended its user agreement at the first signs of popular distress – it is that same government that stands accused of violating users’ privacy. In reality, virtually all governments today mine their citizens’ personal data to varying degrees to counter terrorism, track the spread of disease, and improve urban infrastructure and education. Naturally, the stockpiling of data also makes for more contentious use cases, like propaganda and censorship.

Unlike westerners, the Chinese seem less preoccupied with the government having access to their personal information, taking a more pragmatic stance on technological progress. FaceApp set off alarm bells in the US largely for political reasons, over fears that it might give the Russian government access to Americans’ data. In China, ZAO raised more hardheaded concerns, with users rightfully suspecting that unchecked deep-fakes could let swindlers crack payment apps protected by facial recognition, or enable even more sophisticated ploys. 

In 2018 Robin Li, CEO of Baidu, one of China’s internet powerhouses, made a provocative statement on the matter. “I think Chinese people are more open or less sensitive about the privacy issue. If they are able to trade privacy for convenience, for safety, for efficiency, in a lot of cases they’re willing to do that,” said Li, nettling the champions of a more principled perspective on privacy. 

ZAO user replaces Leonardo DiCaprio’s face with his own (Source: @PaperX)

Ultimately, the problem with the hurried development of AI and all its applications is not the technology itself but the lack of a code of ethics. Surveillance capitalism is not a what-if phenomenon out of a George Orwell novel, but the natural evolution of classic capitalism, a new environment to which we will need to acclimate and which requires control.

Rules and regulations more often than not lag behind technology. The first practical automobile was invented in 1885, but it was not until 1949 that the UN came up with a standardized set of traffic regulations. AI is more complicated than cars, not only technologically but also ethically. To what extent should we trust AI, and to what extent should we let it meddle with our private affairs? If an AI denies you a credit card on suspicion of money laundering without providing any evidence, how do you refute the verdict? 

In 2014 it emerged that Facebook had secretly manipulated the news feeds of almost 700,000 of its users, showing some predominantly negative news and others predominantly positive news. The aim of the notorious experiment was to track people’s emotional responses and see what content they would share afterwards. While we seem to brook being used as lab rats for now, how much longer will it take before the situation comes to a head?

The European Union’s General Data Protection Regulation, rolled out in 2018, is a landmark achievement in reining in the unethical use of technology. The document calls for stringent consent requirements, increased accountability for data controllers, and disclosure of the logic behind automated decisions. 

The EU’s efforts are in line with the most common ethical demands on the AI industry. People need to know what data is collected and for what purposes. Likewise, the logic and rationale behind decisions should be crystal clear and open to objection in cases that may significantly affect the individuals involved.
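
What “open to objection” could look like in practice is easiest to see with a transparent model. The sketch below is purely hypothetical (synthetic data, invented feature names, nothing resembling a real credit-scoring system): a simple logistic regression whose per-feature contributions can be laid out for an applicant to inspect and contest, in the spirit of the credit card scenario above.

```python
# A hypothetical illustration of explaining an automated decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "years_at_address", "late_payments", "foreign_transfers"]

# Synthetic data standing in for historical lending decisions.
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.0, 0.5, -2.0, -1.5]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([0.2, 1.0, -0.5, -2.0])
decision = model.predict(applicant.reshape(1, -1))[0]

# Per-feature contribution to the decision score (weight x value): the kind of
# breakdown an applicant could be shown and could contest.
print(f"{'baseline':>18}: {model.intercept_[0]:+.2f}")
for name, contribution in zip(features, model.coef_[0] * applicant):
    print(f"{name:>18}: {contribution:+.2f}")
print("approved" if decision else "declined")
```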

Some even argue that anyone who studies computer science should undergo rigorous ethical training. In 2016, a paper published by the National Science and Technology Council of the US read: “Ethical training for AI practitioners and students is a necessary part of the solution. Ideally, every student learning AI, computer science, or data science would be exposed to a curriculum and discussion on related ethics and security topics.” 

In the near future all aspects of our existence, from our genomes to our emotions, will become raw material that, if regulated with good reason, will only benefit our living environment. Artificial intelligence in and of itself means no harm. The issue lies in our flawed approach to data, which invites privacy violations and leaves it vulnerable to cyberattacks and malicious misuse. In this regard the debates over FaceApp and ZAO are a great opening gambit, but we will need a viable strategy going forward.