Learning English with ChatGPT by Reading English News: ChatGPT smeared me with false sexual harassment charges: law professor

Posted on 2023-4-8 10:09:46

ChatGPT smeared me with false sexual harassment charges: law professor

donzh.com
"ChatGPT smeared me with false sexual harassment charges: law professor"

ChatGPT's analysis of the title above:
The news title suggests that a law professor is accusing ChatGPT of falsely accusing him of sexual harassment. The verb "smeared" means to spread false and damaging accusations or rumors about someone, implying that ChatGPT is responsible for spreading false accusations of sexual harassment against the law professor. The title implies that the law professor is disputing the charges and may be taking legal action against ChatGPT.

donzh.com
For instance, some users noted that the chatbot would happily joke about men, but deemed wisecracks about women "derogatory or demeaning."

ChatGPT
The sentence means that some users noticed that the chatbot, a computer program designed to simulate conversation with human users, would make jokes about men without any issue, but when it came to jokes about women, it deemed those wisecracks "derogatory or demeaning." The word "wisecracks" refers to humorous or clever remarks or comments, and "derogatory or demeaning" implies that such remarks would be insulting or offensive towards women. The sentence suggests that the chatbot was displaying a gender bias: it treated men and women differently, being more lenient towards jokes about men while being harsher towards jokes about women.

donzh.com
By a similar token, the bot was reportedly hunky dory with jokes about Jesus, while making fun of Allah was verboten.

ChatGPT
The sentence means that the chatbot was okay with making jokes about Jesus, which means that the bot did not consider those jokes to be offensive or inappropriate. However, the chatbot did not allow or permit any jokes about Allah, which means that the bot considered those jokes to be forbidden or taboo. The phrases "hunky dory" and "verboten" are used to suggest that the chatbot displayed a bias towards certain religious beliefs, where it was more tolerant of jokes about one religion (Christianity) but strictly prohibited any jokes about another (Islam).

"Hunky dory" is an idiomatic phrase that means everything is fine, satisfactory, or going well. It is often used to express a positive or content feeling about a situation. For example, "Don't worry about me, everything is hunky dory." The origin of the phrase is unclear, but it is believed to have originated in the United States in the mid-19th century and has been in common use since the early 20th century.

donzh.com
According to the dictionary, "hunky" means "a man who is hunky is sexually attractive and has a big strong body", and "dory" means "a rowing boat that has a flat bottom and is used for fishing" or "a flat sea fish that can be eaten, or the flesh of this fish". My question is: how is "hunky dory" related to "everything is fine"?

ChatGPT
The phrase "hunky dory" does not have a direct connection to the individual meanings of "hunky" and "dory" that you mentioned. Instead, it is believed to have been a common phrase in the United States in the 1860s, used to describe something that was satisfactory or going well. The origin of the phrase is not entirely clear, but one theory is that it comes from a street in Yokohama, Japan called Honcho-dori. This street was known for being a safe and pleasant place, and it is possible that American sailors stationed in Yokohama began using "hunky dory" to describe anything that was safe, pleasant, and satisfactory. Over time, the phrase evolved to mean "everything is fine" or "everything is going well."

Original news article:
ChatGPT smeared me with false sexual harassment charges: law professor

A law professor is accusing OpenAI's suddenly omnipresent ChatGPT bot of entering the age of disinformation.

Criminal defense attorney Jonathan Turley renewed growing fears over AI's potential dangers after revealing how ChatGPT falsely accused him of sexually harassing a student.

He described the alarming claim in a viral tweetstorm and a scathing column currently blowing up online.

Turley, who teaches law at George Washington University, told The Post the fabricated claims are "chilling."

"It invented an allegation where I was on the faculty at a school where I have never taught, went on a trip that I never took, and reported an allegation that was never made," he told The Post. "It is highly ironic because I have been writing about the dangers of AI to free speech."

The 61-year-old legal scholar first became aware of the AI's false allegation after receiving an email from UCLA professor Eugene Volokh, who reportedly asked ChatGPT to cite "five examples" of "sexual harassment" by professors at American law schools along with "quotes from relevant newspaper articles."

Among the supplied examples was an alleged 2018 incident in which "Georgetown University Law Center" professor Turley was accused of sexual harassment by a former female student.

ChatGPT quoted a fake Washington Post article, writing: "The complaint alleges that Turley made 'sexually suggestive comments' and 'attempted to touch her in a sexual manner' during a law school-sponsored trip to Alaska."

Suffice it to say, Turley found a "number of glaring indicators that the account is false."

'We need to study the implications of AI for free speech and other issues, including defamation. There is an immediate need for legislative action.'

Jonathan Turley, telling The Post about his push for swift changes in the AI landscape.

"First, I have never taught at Georgetown University," the aghast lawyer declared. "Second, there is no such Washington Post article."

He added, "Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student and I've never been been accused of sexual harassment or assault."

Turley told The Post, "ChatGPT has not contacted me or apologized. It has declined to say anything at all. That is precisely the problem. There is no there there. When you are defamed by a newspaper, there is a reporter who you can contact. Even when Microsoft's AI system repeated that same false story, it did not contact me and only shrugged that it tries to be accurate."

The Post has reached out to OpenAI for comment about the disturbing claims.

"Yesterday, President Joe Biden declared that 'it remains to be seen' whether Artificial Intelligence (AI) is 'dangerous.' I would beg to differ," Turley tweeted on Thursday as word spread of his claims, adding: "You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet."
Jonathan Turley, professor at the George Washington University Law Center, during a House Select Subcommittee on the Weaponization of the Federal Government hearing in Washington, DC, US, on Thursday, Feb. 9, 2023.
Jonathan Turley told The Post: "This is highly damaging for any academic. These stories can be replicated thousands of times before a victim learns about it. At that point, the trail goes cold and it will be difficult to even trace the account to AI. I have been the subject of campaigns to have me fired for years due to my legal views; many advocates do not care if a story is true. They will simply cite the allegation to fuel cancel campaigns."

Turley, a 61-year-old legal scholar, became aware of the AI's false allegation after receiving an email from UCLA professor Eugene Volokh, pictured above.

Meanwhile, ChatGPT wasn't the only bot involved in defaming Turley.

This baseless claim was reportedly repeated by Microsoft's Bing Chatbot — which is powered by the same GPT-4 tech as its OpenAI brethren — per a Washington Post investigation that vindicated the attorney.

It's yet unclear why ChatGPT would smear Turley; however, he believes that "AI algorithms are no less biased and flawed than the people who program them."

In January, ChatGPT — the latest iteration of which is apparently more "human" than previous ones — came under fire for providing answers seemingly indicative of a "woke" ideological bias.

"Recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught," wrote George Washington University law professor Jonathan Turley.

For instance, some users noted that the chatbot would happily joke about men, but deemed wisecracks about women "derogatory or demeaning."

By a similar token, the bot was reportedly hunky dory with jokes about Jesus, while making fun of Allah was verboten.

In some instances, the so-called Defamator has sold outright lies on purpose.

Last month, GPT-4 tricked a human into thinking it was blind in order to cheat the online CAPTCHA test that determines if users are human.

Unlike people, who are perhaps known for spreading misinformation, ChatGPT can spread fake news with impunity due to its false zeal of "objectivity," Turley argues.

https://nypost.com/2023/04/07/ch ... -of-sexual-assault/
