Reddit has announced a new approach to identifying bot profiles in the app. The change comes as the company works to combat the rising flood of AI-powered bot profiles wading into subreddit conversations.
According to Reddit CEO Steve Huffman, the platform is rolling out new in-stream tags for bot accounts, in order to increase transparency and limit the impact of bot activity on Reddit’s overall engagement.
As per Huffman: “Our product has always been human conversation: messy, opinionated, sometimes great, sometimes not, but always real (or at least, really creative writing). As AI becomes a bigger part of the internet, we want to make sure that when you’re on Reddit, you know when you’re talking to a person and when you’re not.”
This means Reddit will now label accounts that use automation in permitted ways (i.e., “good bots”) with an [App] tag in their username. “If you see that label, you know you’re interacting with a machine, not a person,” Huffman said.
Developers will be able to register their apps to receive this label, and the registration process will help ensure that these bots comply with Reddit’s rules around automated profiles. Detected bot accounts without this designation will face restrictions and potential bans.
“If something suggests an account isn’t human, including automation (hi, web agents), we may ask it to confirm there’s a person behind it,” Huffman said. “This will be rare and will not apply to most users. Accounts that can’t pass may be restricted.”
The update comes after Reddit itself inadvertently made bot profiles into a much bigger concern in the app (or at least approved a project that did) due to an academic study conducted on unwitting Reddit users.
In April 2025, researchers from the University of Zurich published the results of a live test of AI bot profiles on Reddit. The research aimed to find out whether these bots could sway people’s opinions on divisive topics.
And they could. In fact, the study showed that AI bots were more persuasive than humans in changing Redditors’ minds on controversial subjects. That’s a valuable finding, but Redditors caught up in the experiment were rightfully angry that they’d been manipulated without any indication that they were interacting with bot profiles.
The backlash prompted Reddit to rethink its approach to similar research projects, while the company also pledged to implement more transparency labels to keep users informed.
That has led to this new initiative, which will see Reddit take more significant steps to expose AI bot activity.
Huffman also said Reddit is exploring more ways for users to “confirm humanness,” which would help the platform comply with increasing verification regulations in a privacy-friendly way.
Huffman said Reddit’s aim is to increase transparency on the platform while preserving the anonymity that makes Reddit unique. “You shouldn’t have to sacrifice one for the other,” Huffman said.
Together, these measures will provide more transparency and assurance, though it will be interesting to see how Redditors respond if the labels reveal that some of their favorite accounts are actually AI bots.
At the same time, it’ll be worth watching how Reddit’s detection tools evolve as AI-generated activity becomes increasingly difficult to distinguish from human behavior.