A team of researchers who say they are from the University of Zurich ran an “unauthorized,” large-scale experiment in which they secretly deployed AI-powered bots into a popular debate subreddit called r/changemyview in an attempt to research whether AI could be used to change people’s minds about contentious topics.
more than 1,700 comments made by AI bots
The bots made more than 1,700 comments over the course of several months and at times pretended to be a “rape victim,” a “Black man” who was opposed to the Black Lives Matter movement, someone who “work[s] at a domestic violence shelter,” and a bot who suggested that specific types of criminals should not be rehabilitated.
The experiment was revealed over the weekend in a post by moderators of the r/changemyview subreddit, which has more than 3.8 million subscribers. The moderators said they were unaware of the experiment while it was going on and only found out when the researchers disclosed it after it had already been run. In the post, the moderators told users they “have a right to know about this experiment,” and that posters in the subreddit had been subject to “psychological manipulation” by the bots.
First AI came for the artists, then coders, now the trolls and shitposters.
How dare they take the jobs from hardworking Russians influencing online discourse.
Gotta love the hollow morality of telling users they’ve been psychologically manipulated this time, yet doing nothing about the tens of thousands of bots doing the exact same thing 24/7 the rest of the time.
Right? The only difference here is that they fessed up.
r/changemyview is one of those “fascist gateway” subs, or at least one of the subs I suspect of being one. The gateway works by introducing “controversial” topics which are really fascist, but they get upvoted because “look at this idiot!”. Slowly, though, it moves the Overton window and gives the people who actually do believe in inequality a space to grow in. Opinions that are racist, anti-feminist, anti-trans, ultra-nationalist, authoritarian, or generally against equality and justice.
Reddit was slowly boiled over the last decade like a frog.
And you can be absolutely sure researchers aren’t the only ones studying this for science; there are plenty of special interests doing the same thing. It really started with climate change denial.
they get upvoted because “look at this idiot!”
seems like a great reason: upvoted
Lol
Ah, to freely give away my data to a corporation whilst acting as a lab rat…
That place is mostly bots anyway; I’m sure no one noticed.
It’s like the MMO Erenshor. Every player other than yourself is a bot pretending to be a human being.
Unless they PK me for no reason while spamming the chat with “jajajaja” it’s not immersive.
Why you gotta call me out like that
… well how many deltas did it get?
I hate it as an experiment on principle, but c’mon, how well did it do?
Found it, 137 deltas.
Impressive, a bit worrisome
Feels like AI would really excel at this. It’s personalized argumentation that can basically auto-complete to the most statistically likely (i.e. popular) version of an argument. CMV posts largely aren’t unique; there are a lot of prior threads to draw from which got deltas.
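A toy sketch of that “draw from prior delta threads” idea, in Python (the embedding model choice and the example replies are stand-ins for illustration, not real CMV data):

```python
# Hypothetical sketch: given a new CMV post, surface the prior
# delta-winning reply that is most semantically similar to it.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# Toy stand-in for a scrape of old replies that earned deltas.
delta_winning_replies = [
    "Consider that the policy you oppose also protects people you agree with...",
    "Your own example actually undercuts the premise, because...",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
corpus_vecs = model.encode(delta_winning_replies, normalize_embeddings=True)

def closest_prior_argument(new_post: str) -> str:
    """Return the past delta-winning reply most similar to the new post."""
    query_vec = model.encode([new_post], normalize_embeddings=True)
    scores = corpus_vecs @ query_vec.T  # cosine similarity (vectors normalized)
    return delta_winning_replies[int(np.argmax(scores))]

print(closest_prior_argument("CMV: this policy only hurts my side"))
```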
This is what should get departments defunded, not DEI 🫠
I remember when my front page was nothing but r/changemyview for like a week, and I just unsubscribed from the subreddit completely because some of the questions and the number of hits felt like something fucky was going on. Guess I was right.
A lot of it was leaning very right wing
Indeed. A lot of fascist Nazi sympathizers brigading in every thread.
Same happened to the relationshipsadvice and aita subreddits: the number of posts suddenly skyrocketed with incredibly long, overly detailed stories that smacked of LLM-generated content.
Rateme and Roastme were going bonkers for a while too. That whole site just seems to be AI slop anymore.
To be fair, I can see how it being “unauthorized” was necessary for collecting genuine data that isn’t poisoned by people intentionally trying to soil the sample.
This is straight up unethical, at best.
Both
It can be both.
Ok, maybe I don’t miss reddit after all…
It’d be pretty trivial to do the same here: 1,700 or so comments over “several months” is less than 25 a day. No need even for bot posting; have the LLM ingest the feed, spit out the posts, and have the intern make accounts and post them.
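Something like this minimal Python sketch would cover it (assuming an OpenAI-style chat API; the feed URL and prompt are made up, and the intern still does the actual posting):

```python
# Hypothetical sketch of the "feed -> LLM -> intern" pipeline described above.
import feedparser          # pip install feedparser
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

feed = feedparser.parse("https://example.com/forum/feed.rss")  # hypothetical feed
for entry in feed.entries[:25]:  # ~25 posts/day matches the study's volume
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Write a persuasive forum reply."},
            {"role": "user", "content": f"{entry.title}\n\n{entry.summary}"},
        ],
    )
    # No bot posting needed: hand these drafts to the intern to paste in.
    print(reply.choices[0].message.content)
```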
Well, at least this place is exclusively where people who got banned from reddit end up, so they will struggle to find us…
This shows reddit couldn’t immediately detect and ban bots as it claimed, or, more likely, reddit did detect them but didn’t bother blocking the content because it wants the AI engagement.
Yeah, but we’ve already got bot farms and bot influence.
Hey, I didn’t get banned (yet), I just prefer the vibe here.
I created my account to have one ready, then when I got banned, it was easy to hop over
I made multiple accounts over the years in case one got accidentally banned; with the others I could just avoid the same sub. But reddit purged a lot of accounts in a series of ban waves.
It’ll be interesting to find out if this research got IRB approval prior to the study
I hope for the sake of their careers they did
Given the highlighted positions the bots took, I’m not sure we should worry about their specific careers ending.
I remember the good old days when the NSA had to physically fly an airplane over the border and spray one of our towns with chemicals if they wanted to run psychological experiments on people!
…posters in the subreddit had been subject to “psychological manipulation…
There are no users there. It’s just bots talking to other bots.
Russian bots vs research bots
If you think this is shocking, just wait for the big reveal about r/StanfordBasement.
What difference does it make if you’re talking to a bot? We never meet our interlocutors anyway. Would these people have the same reaction if it were revealed they were talking to a role-playing person? Because I’m pretty sure we’ve already done that many times over.
I’m guessing the problem with saying some of these things is the proliferation of fake news. Like the anti-BLM guy: I wonder how much this person stuck to the facts. You can’t present an anti-BLM case without making up shit, and they projected it to millions of people.
You’re right that for a perfect logician, only the argument matters, but for humans, no.
What kind of question is this? A role-playing person talking to others who don’t know they’re role-playing is deceitful, so are you saying there’s absolutely nothing wrong with deception?
It’s generally expected outside of /r/jokes, /r/twosentencehorror, etc. that the people you’re talking to are telling the truth as they know it, or else why talk at all?
I don’t know what your experience is on Reddit but mine came to be that what I was reading couldn’t be trusted. I remember stumbling across a post on some technical subject that I happen to understand very well and couldn’t believe the twaddle that was advanced in the comments with utter conviction and certainty. It got me thinking about all the things I had read and just accepted because I know nothing about them. This is our information landscape, for better or worse.
Why should it be any different in a role-playing scenario? These platforms reward engagement, and people love an emotional story, so that is what gets presented to us. If we loved true stories more, we would get them instead. I don’t think there’s any malice intended; we’re getting what we want, because morons love their feels over their knowledge. It’s the reason the Americans have Trump and Elon and antivax: these people inhabit social media, but it’s the last thing they should turn to for truth, because they’re dumb as a sack of rocks and are getting played, shorty.
Sure, this is unfortunately our disinformation landscape now, but regardless, is deception okay (given how it’s always intentional by definition)?
Going back to your original question, I do think people would be annoyed by bots and outed human liars alike; at least, I would be, since the bots are controlled by people anyway.
As for my Reddit experience, I like /r/scams, among other similarly healthy and truly informative places to be.