Sunday, May 4, 2025

‘The Worst Internet-Research Ethics Violation I Have Ever Seen’; The Atlantic, May 2, 2025

Tom Bartlett, The Atlantic; ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’


[Kip Currier: The indifference and nonchalance of the University of Zurich researchers in this AI study -- who blatantly manipulated Reddit users as human subjects without informed consent -- are deeply unsettling.

In the wake of outcries about this research study, the responses of the University of Zurich ethics board are perhaps even more troubling. That board's stated purpose:

"is to “support members of the University in their perception of ethical responsibility in research and teaching“, to “promote ethical awareness within the University” and to “represent ethical issues to the public at large"." 

https://www.ethik.uzh.ch/en/ethikkommission.html 

The words "perception [italics added] of ethical responsibility" should give every researcher and Internet user pause in light of the Zurich ethics commission's providing a Get-Out-Of-Jail-Free card to virtually any of Zurich's researchers with its lack of substantive guardrails and accountability.]


[Excerpt]

"The researchers had a tougher time convincing Redditors that their covert study was justified. After they had finished the experiment, they contacted the subreddit’s moderators, revealed their identity, and requested to “debrief” the subreddit—that is, to announce to members that for months, they had been unwitting subjects in a scientific experiment. “They were rather surprised that we had such a negative reaction to the experiment,” says one moderator, who asked to be identified by his username, LucidLeviathan, to protect his privacy. According to LucidLeviathan, the moderators requested that the researchers not publish such tainted work, and that they issue an apology. The researchers refused. After more than a month of back-and-forth, the moderators revealed what they had learned about the experiment (minus the researchers’ names) to the rest of the subreddit, making clear their disapproval.

When the moderators sent a complaint to the University of Zurich, the university noted in its response that the “project yields important insights, and the risks (e.g. trauma etc.) are minimal,” according to an excerpt posted by moderators. In a statement to me, a university spokesperson said that the ethics board had received notice of the study last month, advised the researchers to comply with the subreddit’s rules, and “intends to adopt a stricter review process in the future.” Meanwhile, the researchers defended their approach in a Reddit comment, arguing that “none of the comments advocate for harmful positions” and that each AI-generated comment was reviewed by a human team member before being posted. (I sent an email to an anonymized address for the researchers, posted by Reddit moderators, and received a reply that directed my inquiries to the university.)

Perhaps the most telling aspect of the Zurich researchers’ defense was that, as they saw it, deception was integral to the study. The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments."
