The Cambridge Analytica scandal embroiling Facebook in recent months has raised a crucial question about many of today’s most influential technology companies: What are companies like Facebook, Twitter, and LinkedIn?
Are they merely platforms on which users can connect with each other and with sellers of information, goods, and services? Or are they media companies that have a responsibility to ensure that the content published on their sites is fair, truthful, and doesn’t violate any laws?
When debating this question, it is vital to understand the impact of Section 230 of the US Communications Decency Act. Enacted in 1996, Section 230 has been called “the most important law in technology.” The reason is that this arcane-sounding statute includes a safe harbor provision that shelters online companies from the legal consequences of most of the content posted by their users.
Section 230 has been massively influential in the development of the technology industry over the past twenty years. It has allowed tech companies to grow without having to worry about the legal implications of negative reviews, libelous statements, or even human trafficking ads posted on their sites. The statute essentially treats tech companies like mere platforms. As such, they are in principle not responsible for the fairness, veracity, or legality of content posted on their sites.
For small startup companies, this has arguably been a boon. After all, just one costly lawsuit could mean bankruptcy for a startup. Yet, given the huge scale and influence of Facebook, LinkedIn, and Twitter, it seems wrong to absolve them of the moral responsibility to exert control (at least to a certain degree) over the content that appears on their sites. Technology companies have started to acknowledge this, with Facebook recently stating that it needs to “mitigate areas where technology can contribute to divisiveness and isolation.”
While Facebook’s statement might sound encouraging, putting technology companies in the roles of global news editors creates a whole new set of problems. After all, when we buy newspapers, we typically know the ideas and values these publications seek to promote. Some papers have a liberal bent while others have a conservative bent. But what do Facebook, LinkedIn, and Twitter stand for? And would we even want them to be associated with a particular political or social ideology? Given their huge user bases and global reach, making these companies into overt political organizations might be a very dangerous idea.
The reality, though, is that these companies already promote certain ideas and values. As Harvard Law Professor Lawrence Lessig has famously remarked, code is law. Regulators therefore need to take a much closer look at the ideas and values that these companies are promulgating. But what kinds of regulatory bodies are right for this task? After all, no national government has a legitimate basis for regulating a technology used by over 2 billion people globally (as is the case with Facebook). What is more, the proliferation of regulations formulated by different countries makes compliance very arduous — especially for smaller companies — and ultimately leads to the splintering of the Internet.
An international organization seems like the right kind of body for regulating Facebook and similar companies then. But as ICANN (the non-profit responsible for managing the Internet’s domain name system) shows, international technology governance can be tricky. When international organizations are controlled by states, they can become arenas for power struggles between different nations. As a result, agreements drafted by these organizations often reflect the interests of the world’s most powerful states while neglecting the concerns of less powerful nations.
If responsibility for governance is put in the hands of private (or semi-private) actors, on the other hand, agreements formulated by these non-elected bodies arguably lack democratic legitimacy. This might hamper the agreements’ effectiveness and open them up to challenges from state and non-state actors alike. Examples of this kind of dynamic can be observed with technical standards coming out of international standards organizations, such as Microsoft’s controversial OOXML file format for office productivity applications.
In other words, there is no perfect answer to the question of how to regulate Facebook, LinkedIn, and Twitter. It is also not easy to determine who should regulate them. One thing is clear, though: Given their huge reach and important role in modern society, these companies need to be held responsible for providing fair and truthful content on their sites. The laissez-faire approach that has worked for several decades is no longer suitable in an age in which Facebook, LinkedIn, and Twitter essentially serve as news outlets for the whole world, amplify existing biases, and thereby contribute to the election of demagogues who undermine global peace and security.
(This piece was originally published on PMP Blog!)