September 7, 2025 | Jonathan Burdick
I stared at the photo with disdain, maybe even fury. The image was obviously created with artificial intelligence: an obviously A.I. child holding a paintbrush, standing beside an obviously A.I. painting of their non-existent military veteran father who, it was implied, had died in combat. “Nobody ever likes or shares my posts,” the caption read, with a frowny-face emoji.
The creator got their wish. This obviously A.I.-generated post had tens of thousands of likes, shares, and comments. Then I did something almost always regrettable: I read the comments. Amen. God bless you. Thank you for your service. You’re so talented! It was comment after comment of utter nonsense, from people who seemingly couldn’t tell a real photo from one clearly generated by artificial intelligence. Is this, I wondered, what the kids are calling brain rot? According to Merriam-Webster, "Brain rot refers to material of low or addictive quality, typically in online media, that preoccupies someone to the point it is said to affect mental functioning." Oxford, which awarded "brain rot" its 2024 Word of the Year, describes it as "the supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging."
This "A.I. slop," as John Oliver warned us about on his HBO show, definitely seemed to check all of the boxes.
Amid all these comments praising a fictional child's artistry, one then caught my attention: “This comment section proves that Dead Internet Theory is real.” I was already somewhat familiar with the term, but I soon found myself going down the rabbit hole. Those thousands of comments? Bots. A photo of something that didn’t exist was being praised and shared by bots, using the power of phony engagement to force the post into our algorithms and onto our feeds. I didn't know whether to feel better or worse.
But it helped explain the unexplainable. I needed to know more, so I began reading the Wikipedia page for Dead Internet Theory. Forbes describes the theory as “the belief that the vast majority of internet traffic, posts and users have been replaced by bots and AI-generated content, and that people no longer shape the direction of the internet.” The internet, in this view, now exists only to sell us products, keep us distracted, and force ideas upon us. Also central to the theory: this is all being done deliberately, to control us.
I spiraled. I scoured Reddit threads dedicated to the topic. I watched YouTube videos dissecting the claims. The more I read, the more convinced I became that it was fact. “It’s true,” I began yelling from my rooftop. “They are deliberately making the internet unusable!” Forget that I didn’t really know who “they” were even supposed to be. The government? The billionaire class? The Illuminati?! The deeper I got, the more the algorithm kept feeding me. My kids just wanted me to get off the roof.
As with all conspiracy theories, there are elements of truth here. Algorithms and bots are unquestionably changing the nature of the internet. Tim Berners-Lee, the inventor of the World Wide Web, who recently published a book that is part memoir and part proposed solutions, has criticized the deliberate development of algorithms that financially incentivize the toxicity of today's social media.
“It is really important that we realize this toxicity comes from the algorithms and you can change that,” he said in a recent interview. “There’s some coder who can … tweak the way the AI is trained so that it can be more healthy, constructive, creative and helpful.”
Simply put, algorithms can be designed to improve the user experience, not merely to exist as code that continuously feeds us whatever corporate entities believe will keep us engaged on their platforms, all so they can keep serving us lucrative advertisements in the never-ending battle for our attention. We also have to worry about misinformation and disinformation. Click on a clickbait article or a misleading informational meme out of curiosity, and the algorithm registers that interaction and pushes you more of the same. Ask me how I know.
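To see how little machinery that feedback loop actually requires, here is a deliberately toy sketch; the topics, numbers, and function names are all hypothetical, not any platform's real code:

```python
from collections import defaultdict

# Hypothetical per-topic interest scores; every user starts out neutral.
affinity = defaultdict(lambda: 1.0)

def record_click(topic):
    """One curious click teaches the ranker to serve more of the same."""
    affinity[topic] *= 1.5  # multiplicative boosts compound quickly

def rank_feed(posts):
    """Order posts by predicted engagement, not by their value to the reader."""
    return sorted(posts, key=lambda p: affinity[p["topic"]] * p["pull"], reverse=True)

posts = [
    {"title": "Local library opens new wing", "topic": "news", "pull": 0.8},
    {"title": "You won't BELIEVE this one trick", "topic": "clickbait", "pull": 0.6},
]

print([p["title"] for p in rank_feed(posts)])  # the news story ranks first

record_click("clickbait")  # a single moment of curiosity...
print([p["title"] for p in rank_feed(posts)])  # ...and clickbait now leads, and stays there
```

Berners-Lee's point is that nothing in that loop is inevitable: swap out the objective being maximized and the same few lines could just as easily rank for accuracy, diversity of sources, or the reader's stated preferences.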
Frankly, the internet can be a confusing place. Many of us are looking for ways to curate a less confusing experience, to be part of online communities that help bring clarity to the world. We naturally seek out like-minded folks when building these communities. There are plenty of positives to this, of course. We can follow writers and filmmakers and artists whose work and perspectives we admire. We get glimpses into the everyday happenings of our favorite celebrities, untainted by intrusive paparazzi. Niche communities connect people from all around the world who otherwise might have felt alone in their interests and hobbies.
On the flip side, it has also made it easier for neo-Nazis to recruit, for Flat Earth and Sandy Hook conspiracy theories to spread unchecked, and, in general, for people to dig in their heels when it comes to their personal beliefs and worldview. Somebody questions your beliefs or perspective? You can retreat to your selectively curated echo chamber and seek validation, all from the comfort of your phone.
Meanwhile, the more one retreats to an echo chamber, the less one's mind is challenged. A 2024 study available through the National Library of Medicine explains, “An echo chamber is a closed system where other voices are excluded by omission, causing your beliefs to become amplified or reinforced.” Even when RSS feed readers were popular (I still use one and encourage others to do the same; I prefer Feedly), users manually decided which feeds to follow, but the reader then showed every headline from those sources in chronological order of publication. Readers were at least exposed to every article, which increased the likelihood of reading something one might not otherwise have read or, better yet, something challenging. Challenging one's own viewpoints by reading other viewpoints is, generally speaking, good. A healthy amount of skepticism is also an important component of critical thinking. Changing one's mind when presented with new information is absolutely okay.
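To make that contrast concrete, here is a minimal sketch of a chronological reader built on the feedparser library (pip install feedparser); the feed URLs are placeholders to swap for your own subscriptions. Every headline from every subscribed feed appears, newest first, with no engagement scoring and no filtering:

```python
import time
import feedparser  # parses RSS and Atom feeds

# Placeholder subscription list; substitute the feeds you actually follow.
FEEDS = [
    "https://example.com/feed-a.xml",
    "https://example.com/feed-b.xml",
]

entries = []
for url in FEEDS:
    entries.extend(feedparser.parse(url).entries)

# The only "algorithm" here is a sort by publication time, newest first.
entries.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0), reverse=True)

for e in entries:
    stamp = e.get("published_parsed")
    date = time.strftime("%Y-%m-%d", stamp) if stamp else "undated"
    print(date, "-", e.get("title", "(untitled)"))
```

Notice what is absent: no per-topic affinity scores, no engagement prediction, no reordering based on what you clicked yesterday. You see everything your chosen sources published, whether it flatters your worldview or not.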
Diane Ravitch, a historian specializing in education and educational policy, is the perfect example of someone who changed their mind on something very important to them. For years, she was a vocal proponent of the No Child Left Behind Act of 2001, school choice, charter schools, and a competition-driven educational system. By 2010, though, after seeing the policies she had championed in action, she changed her mind. She realized that what she had previously supported served in practice only to weaken public education, to the benefit of privatized schools that could pick and choose their students. In her 2010 book “The Death and Life of the Great American School System: How Testing and Choice Are Undermining Education,” Ravitch candidly discussed her ideological about-face.
“What should we think of someone who never admits error, never entertains doubt but adheres unflinchingly to the same ideas all his life, regardless of new evidence?” she wrote. “Doubt and skepticism are signs of rationality. When we are too certain of our opinions, we run the risk of ignoring any evidence that conflicts with our views. It is doubt that shows we are still thinking, still willing to reexamine hardened beliefs when confronted with new facts and new evidence.”
What Ravitch describes, that evolution of thought, seems increasingly rare. Part of this almost certainly derives from the ideological isolation of our echo chambers and the decline in good-faith exposure to differing viewpoints. Where crossover does occur, at least in my anecdotal experience, it seems to consist primarily of mindless, partisan back-and-forth jabs in unmoderated Facebook comment sections.
To paraphrase John Steinbeck, folks tend to want advice only if it's advice they already agree with. The same can be said of hearing or reading ideas that differ from one's own. If it isn't what we want to hear, an affirmation of what we already believe or think, our roadblocks are already up.
It is hard to find agreement even on how we perceive what is happening in the world. I still get most of my news the old-fashioned way: from journalists. For national and international news, the Associated Press and Reuters are still my daily reads. I like to read the BBC and Spiegel International to get perspectives from across the Atlantic. I listen to NPR's Morning Edition on the way to work. I’ll check in with CNN and Fox News a few times a week (and sometimes even more polarizing websites), but mostly to see how they frame stories relative to the center.
In the early 1900s, the Society of Professional Journalists and the American Society of Newspaper Editors were founded, and both devised journalistic codes of ethics. Their tenets include seeking truth and reporting it, minimizing harm, acting independently, and being accountable and transparent. The reality, though, is that more and more people are getting their information and news from sources that don’t abide by any code of ethics. And as studies continue to demonstrate, more and more people lack the basic media literacy needed to navigate this increasingly crowded infosphere: essential skills such as understanding framing, selection bias, and word choice.
According to a Pew Research Center poll from September 2024, over half of TikTok users say they regularly use the app to get their news. Among social media platforms, that share is surpassed only by X, the doom-scrolling app formerly known as Twitter, at 59 percent, and Truth Social, at 57 percent. It outpaces Facebook (48 percent), YouTube (37 percent), and Reddit (33 percent). Many of these platforms financially incentivize creators to farm as many likes, shares, and comments as possible, an especially problematic motive in the absence of any code of ethics.
One could argue that the lack of corporate gatekeepers over our information is a net positive, and theoretically that could be true. After all, as Berners-Lee initially envisioned it, the World Wide Web could be a democratic marketplace of ideas, unrestricted by money or power or geography or social status. That vision, though, belonged to a non-commercialized web, long before algorithms and corporate money ruled it.
It doesn't help either that reading, the gateway to knowledge, is down in general. A recent study shows that reading for pleasure in the U.S. has dropped significantly in the past two decades with only 16 percent saying they read for pleasure. But that's a discussion for another day.
Whatever the case, exploring ideas outside of one's own interests and comfort zone is important. Consider yourself a humanities person? Subscribe to some science and math newsletters. Are science and math your thing? Swing by the library and grab some books on history, literature, and philosophy. Read a cookbook. Watch a video about how to change your spark plugs. Listen to a podcast about something completely outside of your usual wheelhouse.
“It’s great if you come across ideas and topics that you didn’t specifically select. That can change your day and even your life,” wrote Cass R. Sunstein in 2017, adding that a “well-functioning information market” should have you “serendipitously” discovering new ideas and topics, which, in turn, helps us grow and expand our horizons.
Even if that topic is Dead Internet Theory and has you spiraling. Just, you know, don't forget to take a deep breath and read what others have to say too.