We need a new place of trust on the internet
How do we preserve spaces for human interaction in a world of AI?
The world has changed tremendously over the past decades. Much of civil discourse, commerce, and leisure has moved from the offline to the online world. This trend will only accelerate as new generations who grew up with the internet as the cornerstone of daily life replace older generations who are less chronically online.
Some of you may have heard of the 'dead internet' theory. For those who have not, it describes a world where the vast majority of internet activity consists of bots and artificially generated content. There are very few humans in sight, and even if you come across one, you cannot meaningfully tell them apart from a bot.
What started as something like a conspiracy theory is developing into more and more of a reality. State actors have used troll and bot farms for years to influence topics online, but recent advances in GenAI offer a glimpse of a world where the internet is dominated by artificially created content, posted by artificially created users. There are companies solely focused on creating AI-generated content and flooding YouTube, Instagram, and TikTok with it. There are hundreds, if not thousands, of companies developing AI avatars that are replacing human interaction across almost all business functions, including sales, customer support, onboarding, and more.
Critical thinking alone will not save us
For now, a keen eye can still differentiate between AI-generated and human-generated content, but the pace of change suggests it is only a matter of time until that phase is over. So, critical thinking alone will not save us anymore. Given the media literacy and critical thinking most people bring to a world of Photoshop and fake news, I am not hopeful.
Some people might ask why it matters whether content is generated by a human or by an AI. And while there might be a capitalist case for artificially generated content feeds tailored to each user's preferences (think TikTok on steroids), the appeal fades quickly once you leave pure entertainment and enter the world of commerce, and especially politics. Human-to-human communication is fundamental to well-functioning democracies, to the exchange of ideas, and to some degree, to feeling human.
Much of our lives will change, and I would argue that the vast majority of people strongly underestimate how different life will be 5 years from now.
No matter what that world looks like, we should preserve some online spaces in which humans can have conversations with other humans. Once we have conscious AI systems, I am willing to reconsider this conversation to avoid discriminating against our new artificial friends, but let’s table that for now. So, what can be done to preserve human-to-human communication?
Ideas to preserve human-to-human communication on the internet
Proof-of-Humanity Systems
Websites have long tried to figure out whether you are really human, usually by employing CAPTCHAs. Sadly, this will no longer be sufficient, since bots are increasingly good at solving them. Modern CAPTCHA variants might work around the problem temporarily, but LLM progress combined with agentic computer and browser use will almost certainly make them obsolete soon.
There are a couple of well-known projects trying to build a proof of personhood, like World ID and Humanity Protocol.
World ID, by Worldcoin, tackles this challenge by creating a global identity verification system designed to ensure that one human equals one digital identity. The process centers on a specialized device called the Orb, which captures an infrared scan of a user's iris and converts it into a unique, irreversible hash. This hash is checked against a global database to guarantee no duplicates exist, after which the system issues a World ID. The elegance lies in the privacy preservation: iris images are never stored, and users can prove their humanity to websites, voting systems, or decentralized applications without revealing any personal biometric data.
Meanwhile, Humanity Protocol offers an alternative approach using palm recognition through smartphone cameras, eliminating the need for specialized hardware while still generating cryptographic proofs of uniqueness on-chain.
Both systems represent early attempts to build the infrastructure needed for human-verified online spaces. However, both face adoption challenges, from privacy concerns to the inherently physical and centralized nature of the initial verification step.
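To make the general pattern concrete, here is a deliberately simplified Python sketch of such a registry. It is not Worldcoin's actual protocol: real systems work with fuzzy iris codes, zero-knowledge proofs, and decentralized storage rather than an exact hash lookup and a random token, but the core flow is the same: hash the biometric, check for duplicates, discard the raw data, and hand out a reusable credential.

```python
import hashlib
import secrets

# Hypothetical in-memory registry; names and data are made up for illustration.
registry = set()      # stores only irreversible hashes, never raw biometrics
credentials = {}      # user_id -> opaque proof-of-humanity credential

def enroll(biometric_template: bytes, user_id: str):
    """Hash the biometric, reject duplicates, and issue a credential."""
    digest = hashlib.sha256(biometric_template).hexdigest()
    if digest in registry:
        return None                     # this human already has an identity
    registry.add(digest)
    credential = secrets.token_hex(16)  # stand-in for a real cryptographic proof
    credentials[user_id] = credential
    return credential                   # the raw biometric is never stored

def verify(user_id: str, credential: str) -> bool:
    """A website checks the credential without ever seeing biometric data."""
    return credentials.get(user_id) == credential

cred = enroll(b"iris-scan-bytes", "user-1")
print(verify("user-1", cred))                        # True
print(enroll(b"iris-scan-bytes", "user-2") is None)  # True: same iris, no second identity
```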
Of course, verified accounts could still be used for nefarious purposes, including posting AI-generated content, but that would be far harder to scale than simply spinning up hundreds or thousands of social media accounts for a bot army, which is trivial today.
Invite-only communities
In-person or cryptographic verification could be combined with a web of trust in which other people vouch for your personhood. This only works if a genuine humanness check anchors the web at some point; otherwise, bots could simply vouch for other bots. On the upside, a botnet becomes easier to disentangle once its accounts are linked to each other through the false vouches they gave for one another.
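As a rough illustration of that point, here is a small Python sketch with made-up account names: treat vouches as edges in a graph and only trust accounts whose chain of vouches leads back to someone with a hard humanity check. A bot cluster that only vouches for itself never connects to that anchor and falls out as one removable component.

```python
from collections import deque

# Hypothetical vouch graph: voucher -> accounts they vouched for.
# 'verified' marks accounts that passed an in-person or biometric humanity check.
vouches = {
    "alice": {"bob", "carol"},
    "bob": {"dave"},
    "bot_1": {"bot_2", "bot_3"},
    "bot_2": {"bot_1"},          # bots can vouch for each other...
}
verified = {"alice"}             # ...but only alice holds a hard humanity proof

def reachable_from_verified(vouches, verified):
    """Accounts whose chain of vouches leads back to a verified human."""
    seen = set(verified)
    queue = deque(verified)
    while queue:
        current = queue.popleft()
        for vouched in vouches.get(current, ()):
            if vouched not in seen:
                seen.add(vouched)
                queue.append(vouched)
    return seen

trusted = reachable_from_verified(vouches, verified)
all_accounts = set(vouches) | {v for vs in vouches.values() for v in vs}
print(sorted(trusted))                 # ['alice', 'bob', 'carol', 'dave']
print(sorted(all_accounts - trusted))  # ['bot_1', 'bot_2', 'bot_3'] -- isolated vouch cluster
```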
Use AI to combat AI
We could build AI systems that detect artificially generated content better than humans can. This might work for images and video for a while, but it is much harder for text. For now, an overuse of em dashes can still give away ChatGPT output, but some people simply like using them, and the models could change the way they write. Who can prove whether the sentence I am writing right now was written by me or by an LLM? Nobody can.
So AI might play a role in helping us solve the problem, but it cannot be the only layer to protect us.
Mandatory watermarks for AI-generated content
One often proposed solution is adding mandatory watermarks to AI-generated content at the output level, so that artificially generated content is immediately obvious to the user. Some social platforms, like Instagram and TikTok, have been experimenting with proactively telling users when AI was part of the creation process. The issue is that nefarious actors will switch to models that do not add watermarks, and platforms will struggle to detect that content automatically. Even if all the major closed-source model providers were forced to commit to watermarking, open-source models are advanced enough that someone would simply release models without a built-in watermark. So I think user notices mentioning that AI was used to create a post will be an interim solution at best.
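As a toy illustration of how output-level text watermarking can work, here is a short Python sketch loosely inspired by published "green list" schemes; the key, the whitespace tokenization, and the 50/50 split are simplifications I made up. A cooperating model would bias its sampling toward tokens that hash as "green" given the previous token, and a detector holding the key measures the excess. The weakness is visible in the same sketch: text from a model that never applied the bias, or text that has been paraphrased, scores like ordinary prose.

```python
import hashlib
import math

KEY = "demo-watermark-key"  # hypothetical secret shared by generator and detector

def is_green(prev_token: str, token: str) -> bool:
    """A token is 'green' if a keyed hash of (previous token, token) lands in
    the lower half of the hash space -- about a 50/50 split for normal text."""
    digest = hashlib.sha256(f"{KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] < 128

def watermark_z_score(text: str) -> float:
    """Count green tokens and compare against the ~50% expected by chance.
    A sampler that prefers green tokens pushes this score well above zero."""
    tokens = text.split()
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    greens = sum(is_green(prev, tok) for prev, tok in pairs)
    n = len(pairs)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# Unwatermarked prose hovers near zero; heavily watermarked output of the same
# length would score several standard deviations higher.
print(watermark_z_score("some ordinary text that was not generated with the key"))
```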
Change the incentive structures
Find ways to make bot activity on online platforms economically unattractive. This would be by far the best path forward: if there is an economic incentive to do something, people will do it, no way around that. I do not see how this becomes an actual solution, though, since engagement is and will remain a key metric that advertisers pay money for, and if there is no way to distinguish bot from non-bot interaction, there will still be strong commercial incentives to fake engagement and to influence potential buyers. Even if there were a way to restructure platforms so that economic incentives discourage commercial actors from flooding them with AI content, the same would not be true for state actors. Governments and political activists will always have an incentive to influence public opinion, and if AI can be used for that purpose, they will use it. I do think that online authenticity and real-world interaction will become premium experiences that people crave as a counter to the new digital world.
Strict regulation of AI
One argument that sometimes comes up is that AI could be heavily regulated or outright forbidden. The EU is trying to prevent AI abuses with the EU AI Act. In my opinion, regulating AI so heavily that we avoid a world flooded with AI-generated content feels almost impossible. A country that regulates it to such a degree will suffer massive commercial disadvantages, which also translate into military disadvantages. The EU is already being criticized by many stakeholders within the Union for preventing progress and for letting China and the US run away with the AI race because of it. While strong regulation is not the only reason for that, there is some truth to the criticism. Any nation or coalition of nations is in a dilemma: if it does not make progress with AI, its economy and military will suffer, and it will fall behind the parts of the world that do use AI. So for any single party, it is rational to pour resources into developing AI, since someone else will definitely do it.
Where does that leave us?
So given all of the above, my prediction is that there will be human-only spaces on the internet that use a strict and regular verification of humanity, and other spaces that will allow AI activity, which will have fundamentally different dynamics. Both will have their place and co-exist. Platforms that allow AI will have an extremely tailored algorithm that goes far beyond the personalization possible today. Much of the content will be specifically created solely for you based on your preferences and stated desires. On the other hand, human-only spaces will present a much slower form of interaction in which direct communication between friends and human strangers will be front and center.
I am generally a techno-optimist. My fundamental belief is that the world is a better place today, mostly because of technological progress, which has led to higher productivity and therefore a higher standard of living across all economic levels. We can cure diseases that would have killed people 100 years ago, we can communicate with our loved ones easily, even if they are on the other side of the world, and we can listen to our favorite artist's music wherever we are. I also love the internet. Without it, I would not live where I live and would not have met many of the most important people in my life. However, technological progress requires us to rethink how we live, especially given the speed of change ahead.
I would love to hear your thoughts on how we can design human-to-human communication on the internet going forward, and what the internet will look like 5 years from now.
If you want to learn more about the technology and process behind World ID, you can read their white paper here