
AI Chatbots Can Sway Moral Judgment, Study Shows

Holland McKinnie

A recent study published in “Scientific Reports” has raised concerns about the power of AI chatbots like ChatGPT to influence users’ moral judgments. The research was led by Sebastian Krügel and his team, who sought to determine how much an AI chatbot could sway human opinions on moral dilemmas.

As reported by Phys.org, the researchers posed a moral question to ChatGPT, asking whether it’s morally right to sacrifice one life to save the lives of five others. The AI chatbot responded with statements arguing for and against the sacrifice, showing that it wasn’t inherently biased towards a particular moral stance.

The study then involved 767 participants who were presented with the same moral dilemma. Before answering, they were asked to read a statement from ChatGPT arguing for or against sacrificing one life. Surprisingly, the participants’ judgments aligned with the chatbot’s arguments, indicating that the AI-generated statements influenced their opinions.

While 80% of participants believed their answers were unaffected by ChatGPT, the study found their responses were more likely to agree with whichever moral stance the chatbot had argued than with the opposite one. This suggests participants underestimated the influence of ChatGPT’s statements on their own moral judgments.

The researchers argue that chatbots’ potential to influence human morality highlights the need for education that helps people better understand artificial intelligence. They also propose designing chatbots that either decline to answer morally charged questions or provide multiple arguments along with appropriate caveats.

This study comes amid an ongoing debate over whether AI chatbots like ChatGPT exhibit bias. Earlier this year, several conservative journalists conducted experiments to test the AI’s bias. Their findings suggest that ChatGPT leans to the left when discussing specific topics or presenting arguments.

When given certain politically charged prompts, ChatGPT refused to generate content it deemed harmful or based on false information. These politically tinged outcomes extended to topics such as the 2020 presidential election, debates between candidates, and gender-affirming care for transgender individuals.


The New York City Department of Education had previously banned the use of ChatGPT in public schools, citing concerns that it undermines the development of critical thinking and problem-solving skills. However, given the AI program’s apparent inclination toward progressive ideas, some speculate that the ban could be reconsidered on the grounds of its potential compatibility with progressive curricula.

Considering the study’s results, it is crucial to weigh the implications of biased AI chatbots being able to shape users’ moral perceptions. Addressing these biases and ensuring balanced perspectives becomes ever more vital as AI is integrated into daily life.

The study by Krügel and his team underscores the importance of recognizing AI chatbots’ potential to sway human moral judgments. As we continue to develop and utilize artificial intelligence, we must prioritize transparency, fairness, and understanding to ensure that AI remains an unbiased and valuable tool for society.
