Digg, a would-be Reddit competitor, recently shut down after failing to control the bots rampant on its site. On Wednesday, Reddit said it was taking on the same challenge itself.
The company will begin labeling automated accounts that serve users, similar to how "good bots" are labeled on X, and will require accounts suspected of being bots to verify that they are human.
Reddit emphasizes that this is not a site-wide verification requirement; verification is triggered only when something, such as on-site activity or other technical markers, suggests an account is not human. According to Reddit, an account that fails the check may be restricted.
To identify potential bots, Reddit uses internal tools that examine account-level signals and other factors, such as the rate at which an account attempts to write and post content. Using AI to create posts or comments, however, does not itself violate Reddit's rules (although community moderators may set their own).
To verify that an account belongs to a human, Reddit relies on third-party tools such as passkeys from Apple, Google, and YubiKey, as well as biometric services like Face ID and Sam Altman's World ID. Government IDs can also be used; Reddit notes that this last option may be required in some countries, such as the UK and Australia, and in some US states, due to local age-verification regulations, but it is not the method the company recommends.
“When we need to verify that an account is a human, we do so in a way that prioritizes privacy,” Reddit co-founder and CEO Steve Huffman said in a statement Wednesday. “Our goal is to confirm that there is someone behind an account, not who that person is. Our goal is to increase transparency into what’s happening on Reddit while preserving the anonymity that makes Reddit unique. There’s no need to sacrifice one or the other.”
The changes are aimed at the growing problem of bots on social platforms and the broader web, where they are often used to influence politics, spread misinformation, inflate popularity, covertly sell products, and generate fake ad clicks. According to Cloudflare, bot traffic, including web crawlers and AI agents, will exceed human traffic by 2027.
Reddit in particular has become a popular target for bots seeking to manipulate narratives, promote companies and their products, repost links, post spam, drive traffic, and conduct surveys. And because Reddit's content is used for AI training under lucrative deals with AI model providers, there are also suspicions that bots are posting questions on the site to generate more training data, especially in areas where AI models lack information.
Reddit's other co-founder, Alexis Ohanian, has also addressed a related issue known as the "dead internet theory," which posits that bots outnumber humans online and that the majority of content, interactions, and activity on the web is generated by automation or AI rather than people. In the age of AI agents, this theory is edging closer to reality.
The company announced last year that it would require human verification in response to the growing number of bots and the need to meet "evolving regulatory requirements." However, the company noted today that its current solution, which Huffman recently discussed on the TBPN podcast, is not ideal.
“The best long-term solution is one that is decentralized, individualized, private, and ideally requires no identity at all,” Huffman said in today’s announcement.
In addition to these changes, Reddit said it will continue removing bots and spam, averaging 100,000 account deletions per day, and will keep improving its detection tools while also relying on user reports of suspected bots. Developers running so-called good bots can learn more about the new "APP" label in the r/redditdev community.
