The Role of the Internet in the Freedom of Speech
Censorship in the Post-Modern Information Economy: Part 1
I wanted to take a moment today to discuss a topic that’s actually quite hard to meaningfully discuss. “Freedom of speech” is such an ever-present term in the United States that, outside of defamation (libel and slander) law, it carries a sort of “de facto carte blanche” or “live and let live” air1 toward how speech is policed. This attitude, especially as amplified in certain conservative discourse, almost implies that “freedom of speech” entitles one to a full “freedom from social consequences” for speech, rather than simply freedom from state-driven consequences. Other liberal2 democratic countries have historically handled speech protections differently; Germany, as an example, has explicitly banned rhetoric that frames Nazis in a good light. However, the game has changed in recent years.
I think it's relatively easy to take the Internet for granted, when at no point in history has anonymized freedom of information-sharing been so ubiquitous. And we're seeing massive societal shifts as a result of its existence. Everything from how we perceive information about ourselves, to information about others, to information about the world around us is changing. And it's easy to take a bipolar approach, viewing Internet content as either grassroots and anecdotally authoritative or as wholly invented fiction, depending on its origin. As with most things, the truth is much more gray; complicating matters is that, commonly, information is published sooner than it can be verified as truth, fiction, or anything in between.
I have struggled to bring this piece together—not because of a lack of content, but because of a lack of structure. There is so much to discuss here, and prioritizing it has been a bit of a journey. So instead of attempting to meander through moderately connected musings, I decided to make a brief series looking at the freedom of speech. For this piece, we’ll examine the Internet, the ideologies it was founded on, and what it's become today. Future pieces will examine the current U.S. political space, then the current E.U. political space; then we’ll bring it all back together at the end to look at the big picture and how current technologies like AI are complicating things.
As a Background
I think it makes sense first and foremost to discuss briefly the context of the Internet. The communication protocols that enable modern web browsing were originally designed to link research computers and enable free sharing of information. The World Wide Web was invented in ‘89, graphical browsers (like the one you are viewing this piece on) popped up in ‘93, and shortly thereafter, the commercialization of the Internet took off as it was finally accessible on consumer devices. Though, it wasn’t until “Web 2.0” in the early 2000s that the average content of the Internet significantly shifted with the rise of user-generated content, including on social media platforms like Facebook. That is all to say, the world of information changed very, very quickly.
But of course with that, not all information is noteworthy to share, or for that matter, good to share. The Internet was founded without restrictions on content in part because to do otherwise would be self-defeating to the point of the Internet. Those original founders were unlikely to have a firm concept of the breadth and depth of information that would eventually be shared over those communication protocols. And this has enabled bad actors of several sorts to enter the system and abuse its freedom of information-sharing. This ostensibly libertarian ideal of the Internet has butted up against regulation and legislation many times now in defense of the sanctity of its original form.
Worth noting is that information-sharing has needed to evolve into encrypted information-sharing. Encryption tends to be standard across communication these days: even this newsletter was delivered over HTTPS (or, if you’re reading by email, SMTP, ideally wrapped in TLS), with the trailing “S” in HTTPS meaning “secure” (or encrypted). In uncomplicated terms, this machinery lets you, as the reader, verify that the source (or sender) is actually the source. Without it, just as you can falsify a return address on a piece of physical mail, you can falsify the originator of a website or a piece of e-mail. That is to clearly establish that there are very legitimate reasons (even though I’m only mentioning the one) for encryption to exist: bad actors, like scammers, have a far easier time conducting business when they can more easily pass as a legitimate source.
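To make the “the source is actually the source” guarantee a bit more concrete, here is a minimal sketch using only Python’s standard-library ssl module. It doesn’t contact any server; it simply shows that a default HTTPS client context is configured to demand a valid certificate that matches the hostname, which is the part of the machinery that stops a “falsified return address”:

```python
import ssl

# A default client context encodes the "verify the source" behavior:
# it requires a certificate signed by a trusted authority, and it
# checks that the certificate actually matches the hostname asked for.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate must validate
print(ctx.check_hostname)                    # and must match the hostname

# Disabling these checks is exactly what would let a bad actor falsify
# the "return address": the connection would still be encrypted, but
# you could no longer tell who is on the other end.
```

Note that encryption alone only hides the conversation; it is this certificate verification, layered on top, that identifies the other party.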
What does this mean?
There is a lot more that can be said about the Internet in terms of its functionality, but to not saturate this mostly policy-intended piece with technical jargon, the above should suffice for the points below.
Base Internet architecture, like DNS3, is largely decentralized; the main takeaways are intentional redundancy and limited authority. This Internet ethos persists to this day, both in how the Internet is structured and in how people perceive it. Social media grew up in this environment, where first, local and focused message boards (say, forums devoted to woodworking or skiing) were the norm, later to be replaced by general-purpose social media like Facebook and the platform formerly known as Twitter (X).
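That “limited authority” property can be sketched with a toy model. This is not the real DNS protocol, and the names and address below are hypothetical placeholders; the point is only that no single table knows every answer, and each level of the hierarchy only knows whom to ask next:

```python
# Hypothetical, highly simplified model of DNS delegation -- not the
# real protocol. Each "zone" has limited authority: the root only
# knows the TLD servers, a TLD only knows the domains under it, and
# only the final, authoritative server knows the actual address.
ROOT = {"com": "tld-com-server"}                       # root zone
TLD_COM = {"example.com": "ns.example.com"}            # .com zone
AUTHORITATIVE = {"www.example.com": "93.184.216.34"}   # placeholder address

def resolve(name: str) -> str:
    """Walk the delegation chain: root -> TLD -> authoritative server."""
    tld = name.rsplit(".", 1)[-1]
    assert tld in ROOT                   # root only delegates to TLDs
    domain = ".".join(name.split(".")[-2:])
    assert domain in TLD_COM             # the TLD only delegates to domains
    return AUTHORITATIVE[name]           # the final answer comes from the edge

print(resolve("www.example.com"))
```

Redundancy comes from the fact that each of these zones is, in reality, served by many independent machines, so no single failure (or single censor) can take the whole system down.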
All this said, it's probably more important to note that the Internet has centralized to some great extent. I'm sure nearly every reader is familiar with just how much of the Internet goes down whenever Cloudflare or AWS experiences a major outage. And there are very practical reasons why this is the case—security and scalability are very hard for small shops to bake from scratch and there's a lot of value in working with someone who’s done it before. So a few big players emerged and now the Internet is quite dependent on their continued and constant performance.
Another point is that the Internet is truly global. State actors often act to limit its reach, but the protocols that serve the Chinese government's documents are the same ones that serve cat memes on Facebook. And that universality is something remarkable, when we otherwise can't agree on the construction of electrical plugs or which side of the road to drive on.
To bring these two points together, the Internet in its base form is under threat not just from the commonly-cited issue of state censorship, but from business censorship as well. That previously-mentioned centralization serves as a springboard. It was not too long ago that the American right was actively conflating public freedom of speech with privately-moderated speech, and at least for now, American tech companies seem largely aligned with that brand of reactionary conservatism, even as the Trump administration cracks down on the exact thing it claimed to be so against during the Biden administration. A centralization of content means a centralization of people, accompanied by a centralization of power.
Why does this matter?
The Internet is a tool. And like any tool, it is a positive force multiplier in the right hands and a weapon in the wrong ones. And when the toolbox changes hands, there’s normally an expectation that the next craftsman will be nearly as deft. However, that libertarian ethos really struggles to adjust to the presence of bad actors.
Just this week, X (formerly Twitter) rolled out a feature revealing that many “right-wing American” content creators were never based in America. That does matter, because foreign propaganda and domestic propaganda often have very different goals even if the methods are the same. Considering their nature and their effect, I would label these accounts as bad actors.
That said, absolutely no one is immune to the effects of bad actors, or to getting wrapped up in sensationalism. One interesting observable social phenomenon is the propensity for factionalism around major events. Take Charlie Kirk's shooting as an example: right-wing groups (including the administration) were quick to blame left-wing violence, and left-wing groups jumped on the retort once the alleged shooter was purported to come from a conservative background. Right-wing groups then jumped on the allegation that the suspect had a relationship with a transgender individual (vaguely linking that individual’s transgenderism to the “woke” left). My high-level view of this event says two things. First, while right-wing groups were largely more reactionary and inflammatory than their left-wing counterparts, people on many different sides jumped to conclusions too quickly about an individual who has not yet been proven to have committed the crime. Second, the suspect in question is as complex an individual as any of us, with seemingly (and potentially genuinely) contradictory beliefs and actions.
While different groups are differently inoculated against these effects, the lack of accountability is astounding. And in that previous incident, the most startling thing I find is not the act of violence itself, but that the holder of the highest office, with its explicitly inflammatory rhetoric, says, “no, it’s everyone else who’s inflammatory,” as if that were effective for healing an already-broken nation, or as if they were not directly responsible themselves. And while certainly possible without the Internet, the sheer speed and volume of speculation almost forces imaginations to run rampant (which has huge implications for the default law of the land, “innocent until proven guilty”).
And on maybe the most basic level, this part is just a fight between personal accountability and social responsibility. That is, discovering the balance between keeping producers of disinformation responsible while keeping consumers aware enough of the possibilities to hold themselves accountable. But it can be hard to strike that balance. So this is the framing that this series operates in: the Internet’s origination didn't provide for the existence of bad actors and we need to clean up the mess that those bad actors made, while protecting from future threats, and while actively safeguarding democracy and democratic values. It is layered, to say the least.
So how do we change this?
All of this leads to the conclusion that we’ve teched our way into an information problem that we can’t tech our way out of. We might be able to learn this lesson from the social sciences, but we haven’t dealt with this problem at anywhere near this speed or scale before. So all we’re left with is imperfect solutions to very, very difficult problems.
…we’ve teched our way into an information problem that we can’t tech our way out of.
And so we have several issues that we’re butting up against without a convenient way forward. Or rather, if we try to pause long enough to solve this problem, then if history is any indication, two even more complicated problems will appear in the meantime (as we’re now dealing with data privacy and artificial intelligence issues, which largely build on the base issue of information-sharing). But there are a few things that we can imagine non-autocratic countries doing that could alleviate some of this.
The simplest answer would be for the legislative and governance processes to become more “agile.” For my software friends, no, I’m not saying that legislators should have a daily stand-up4: I’m saying that the system should be more responsive to environmental change. While this sounds simple, it is a hard ask. Democracy is intrinsically and intentionally slow, filled with checks and balances, so that (in theory) politicians and political parties can’t subject their country to policy whiplash every time the constituency flips. Predictability of change (especially if it’s slow) is good for business, and typically at least tolerable for constituents, so it tends to be the choice for modern policy rollout. But obviously, technology is evolving faster than our governments seem willing to keep up with. Even GDPR, I feel, was quite late to the game, but better late than never. Is it possible? Certainly. Is it likely? Well, in my view, it requires people with technical expertise getting more actively involved in the governance and legislative processes to provide that push.
A complementary answer could be aligned with the solution to Karl Popper’s paradox of tolerance: that is, change how we think about the freedom of speech to exclude speech promoting ideologies that would lead to the abolition of the freedom of speech itself. This concept already exists in partial form in places (e.g., the previously mentioned illegality of Holocaust denial in Germany). This is a hard argument to make in places like the United States, but it could find traction in the aftermath of the current administration. That would likely be part of a larger discussion on misinformation and disinformation.
The last solution I can think of is to change the nature of the Internet. However, doing so would likely irreparably fracture it. Some may argue that this is already happening: siloed sections of the Internet exist where not all websites are accessible, even outside authoritarian countries (think of the U.K., which now forces age-verification responsibilities onto private providers, but more on that in an upcoming piece). There would be tremendous resistance to change, and it might not even be a good change, but trusted, centralized authority figures could, at least in theory, replace decentralized ones. If I'm being honest, I'm not in favor of this argument at all, because it intrinsically handles bad actors even worse than the current system does. I can imagine a situation where the United States government were a trusted authority and this current administration were able to completely suppress information that did not align with its stated goals. It's important to understand that a place like China can only reliably suppress because the government also owns5 its Internet infrastructure. Right now, the United States government would have to exert its power on the corporations that own it, or nationalize the infrastructure, both of which are complicated and lengthy processes.
So the fundamental structure of the Internet butts up against authoritarian regimes and democratic regimes alike, for very different reasons. And a lot of that has to do with the spread of information and the freedom of speech. As with most things, if there were an easy answer, we would have already done it. While I do not necessarily agree with some of the structural ideals of the Internet (since we don’t live in a world made for those ideals), I think this is a situation where governments, as well as perhaps people’s expectations, need to adjust and adapt to how the world is changing around them. The Internet has lowered the barrier to accessing information, and at least for now, the associated positives outweigh the negatives; the main issue is in limiting bad actors’ effect on it. This includes keeping bad actors out of positions of power in government and business, which we need to do a better job of.
Or in other words, that “freedom of speech” is a given, the default, a natural state of humanity that should not be infringed upon by government.
For explicit clarity, where I use “liberal” in this piece, it refers to market liberalism, not the pseudo-derogation often ascribed to the left by the right in common, modern American discourse in reference to social liberalism. Notably, market liberalism is included in the pre-MAGA Republican political stance (and was maintained even through much of Trump’s first term).
DNS is the system by which we determine that something like “https://www.google.com” actually takes you to Google servers.
It's not quite this straightforward, but the basis of this is accurate enough for this piece.



