Science
Experiment using AI-generated posts on Reddit draws fire for ethics concerns -- Retraction Watch
https://retractionwatch.com/2025/04/28/experiment-using-ai-generated-posts-on-reddit-draws-fire-for-ethics-concerns/

An experiment deploying AI-generated messages on a Reddit subforum has drawn criticism for, among other critiques, a lack of informed consent from unknowing participants in the community.
The university overseeing the research is standing by its approval of the study, but has indicated the principal investigator has received a warning for the project.
The subreddit, r/ChangeMyView (CMV), invites people to post a viewpoint or opinion to invite conversation from different perspectives. Its extensive rules are intended to keep discussions civil.
. . .
"This is one of the worst violations of research ethics I've ever seen," Casey Fiesler, an information scientist at the University of Colorado, wrote on Bluesky. "Manipulating people in online communities using deception, without consent, is not low risk and, as evidenced by the discourse in this Reddit post, resulted in harm."
. . .
The genie is out of the bottle and won't be put back in. This particular "experiment" was not well planned or controlled but at least it had a valid scientific purpose. The main problem is that the subjects were not informed and did not have an opportunity to consent/deny.
Experiment using AI-generated posts on Reddit draws fire for ethics concerns -- Retraction Watch (Original Post)
erronis
22 hrs ago
OP
erronis (19,282 posts)

1. To add another excerpt:
The moderators had also asked the University of Zurich to block the research from being published. The university's response noted that publication is outside its purview. A university response quoted in the post stated:
This project yields important insights, and the risks (e.g. trauma etc.) are minimal. This means that suppressing publication is not proportionate to the importance of the insights the study yields.
highplainsdem (55,596 posts)

2. The researchers and their university should be sued.
andym (5,947 posts)

3. The "experiment" is happening for real by bad actors and others, so this is much ado about nothing
AI-originated posts without attribution to AI are now occurring and pose risks according to some analyses. For example, https://www.rand.org/pubs/articles/2024/social-media-manipulation-in-the-era-of-ai.html
The research in the OP is actually useful because it points out some of the potential consequences.
erronis (19,282 posts)

4. AI-Reddit study leader gets warning as ethics committee moves to 'stricter review process'
New from Retraction Watch:
https://retractionwatch.com/2025/04/29/ethics-committee-ai-llm-reddit-changemyview-university-zurich/
The university ethics committee that reviewed a controversial study that deployed AI-generated posts on a Reddit forum made recommendations the researchers did not heed, Retraction Watch has learned.
The principal investigator on the study has received a formal warning, and the university's ethics committees will implement a more rigorous review process for future studies, a university official said.
As we reported yesterday, researchers at the University of Zurich tested whether a large language model, or LLM, can persuade people to change their minds by posting messages on the Reddit subforum r/ChangeMyView (CMV). The moderators of the forum notified the subreddit about the study and their interactions with the researchers in a post published April 26.
. . .
Reddit has issued a response to the study as well. Reddit's chief legal officer Ben Lee posted on the CMV thread:
What this University of Zurich team did is deeply wrong on both a moral and legal level. It violates academic research and human rights norms, and is prohibited by Reddit's user agreement and rules, in addition to the subreddit rules. We have banned all accounts associated with the University of Zurich research effort.
. . .