AI & Internet’s Existential Crisis

It is turning out to be a summer of the Internet’s existential crisis, thanks to the rapid growth of generative AI (artificial intelligence) systems. An information (and disinformation) tsunami not only threatens to upend the Internet’s information order, it also threatens our already stretched attention spans.

Wix.com, a website publishing company, recently announced a tool that allows its customers to create websites using “generative AI.” Instead of relying on templates that can be customized with the help of humans, the company will make building a website as easy as giving commands to ChatGPT. Wix isn’t the first to offer this — Framer, for example, offers similar capabilities. However, Wix is a large enough business to get everyone’s attention.

As someone who has been playing with generative AI tools, I can tell you the devil is in the details. To create something of extraordinary beauty using generative AI, one must marry vivid imagination with a masterful understanding of the commands (and the language). Otherwise, you are left with the millionth variation of a still from an existing anime. The same will be true for creating websites. And therein lies the crux of the problem: generative AI tools will make it dead simple to create clones-of-clones-of-clones.

Today, it is Wix. Don’t worry. Tomorrow it will be a dozen other companies with more nefarious ideas that will use AI systems inspired by Wix to create an endless chain of websites and then fill them with information created by AI tools. They can use them to farm ad dollars, referral links, or plain-old misinformation. By creating an intricate web of crosslinking and interlinking, these seemingly legitimate websites could start to weaken the “PageRank paradigm.” That can’t be good for Google and its money machine.
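To see why a web of crosslinking sites undermines the “PageRank paradigm,” here is a minimal, purely illustrative sketch of PageRank-style power iteration on a hypothetical toy graph: three interlinked spam pages all pointing at a “money” page they want to boost. The graph, page names, and damping factor are my assumptions, not anything from the article or from Google’s actual system.

```python
# Toy PageRank power iteration (illustrative sketch, hypothetical graph).
DAMPING = 0.85  # standard damping factor in the original PageRank formulation

def pagerank(links, iters=100):
    """links: {page: [pages it links to]} -> {page: score}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - DAMPING) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += DAMPING * rank[p] / n
            else:
                share = DAMPING * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
        rank = new
    return rank

# One legitimate page vs. three spam pages that interlink in a ring
# and all point at the "money" page they want to inflate.
graph = {
    "legit": ["money"],
    "spam1": ["spam2", "money"],
    "spam2": ["spam3", "money"],
    "spam3": ["spam1", "money"],
    "money": [],
}
scores = pagerank(graph)
print(scores)
```

In this toy run, the interlinking ring funnels rank into “money,” which ends up scoring highest even though no legitimate page endorses the spam cluster — the mechanism an AI-generated chain of sites could exploit at scale.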

Most of what I have written is fairly obvious — this is not an original doomsday scenario. Only a dozen years ago, Demand Media ran “content farms” and went public. It used cheap labor to flood the web with generic websites full of shallow information to farm ad dollars. Its main rival, Associated Content, was acquired by Yahoo. Those two companies were pioneers of shallow content and created headaches for legitimate websites and normal people looking for information. It took a while for even Google to beat them back.

Demand Media and Associated Content were limited by how many people they could hire to create content. The new “AI” tools have no such constraints: they can spin up websites instantly, generate content cheaply, and become far more sophisticated in their information arbitrage. This is not some hypothetical idea. The grift is already underway.

And it is not just the websites. We are at a point where a book can be created instantly for less than a sawbuck. Amazon, for example, is already awash in AI-generated books. Music, too, is in a similar situation. The number of audio tracks created using AI is rumored to have crossed 100 million. 

Video, animation, and images — the relentless march of “AI” generated content is here. And you and I will find ourselves swimming in misinformation or the cheap fake equivalent of information. Soon, there will be so much of everything that we won’t have enough attention for anything. 

While the attention apocalypse looms, the Internet is having an existential crisis. Humans have freely and liberally created and shared information on the Internet for the past twenty-five years. From forums to blogs to news publications to research papers — the gigantic corpus of information shared on the Internet was created by humans. 

AI models have been trained on this content created by humans. However, we face a stark reality: AI will create the content humans consume. And machines will use machine-generated information to train other machines, which could lead to degenerate future AI models.
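The degeneration loop above can be illustrated with a toy simulation — not any real training pipeline, and all numbers here are hypothetical. Each “generation,” a model is trained by re-estimating word frequencies from a finite sample of the previous model’s output; rare words that miss a sample vanish for good, so the distribution narrows over time.

```python
# Toy sketch of machines training on machine output (all parameters hypothetical).
import random

random.seed(0)

VOCAB = 50        # distinct "words" in the original human corpus
SAMPLE = 200      # words each generation's model emits as training data
GENERATIONS = 20

# Zipf-like human distribution: word i has weight 1/(i+1).
words = list(range(VOCAB))
weights = [1.0 / (i + 1) for i in range(VOCAB)]

history = []
for _ in range(GENERATIONS):
    sample = random.choices(words, weights=weights, k=SAMPLE)
    counts = {w: sample.count(w) for w in set(sample)}
    words = list(counts)                  # surviving vocabulary
    weights = [counts[w] for w in words]  # next "model" = empirical frequencies
    history.append(len(words))

print("vocabulary size per generation:", history)
```

Because each generation can only re-emit words that survived the previous sample, the vocabulary can never grow back — a crude analogue of how models trained on model output can drift toward a blander, narrower version of the original human corpus.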

Humans certainly are not feeling very incentivized to share. Look around, and you can see creators reevaluating how their content is used to train AI models. Whether it is Hollywood writers, comedians, or musicians — everyone is waking up to the question of what can and should be put online. “Artists who feel their work was scraped by AI without credit or compensation are seeking recourse,” writes Wired. “Fan fiction writers who shared their work freely to entertain fellow fans now find their niche sex tropes on AI-assisted writing tools.”

Where does fresh content come from in the future? Will we even be incentivized to create something new? Or will all future AI refinements be based on erroneous pseudo-babble on social networks like Twitter and Reddit?

As I said earlier, AI is like an ouroboros, a snake that swallows its tail.

5 thoughts on this post

  1. I’ve been saying this same thing since I first saw it write or produce any content. Humans will take easy route and use AI and then everything will homogenize into some mathematical oddity after several iterations of usage. We’ll lose the source data AI needs to be effective.

    1. Original thought is so underrated in our society and most of our systems – educational and industrial are meant to homogenize. Why wouldn’t technology only enhance that automated standardized homogenization

      1. You are correct. Look at any of the arts at the moment. Nothing but sequels and remakes. Music is sampling mostly and everything is a copy. We have already seen the signs of the loss of originality and this will be the final blow. You can see why Taylor Swift would be so popular (I’m not a fan) but she does do original work and people are starving for it.

  2. So, what’s the answer to “Where does fresh content come from in the future?” And how will that content be ranked? I’m seriously concerned.
    I’m thinking maybe I create a reading group that only reads books released before 2023.
