Large language models validate misinformation

New research into large language models shows that they repeat conspiracy theories, harmful stereotypes, and other forms of misinformation.

In a recent study, researchers at the University of Waterloo systematically tested an early version of ChatGPT's understanding of statements in six categories: facts, conspiracies, controversies, misconceptions, stereotypes, and fiction. The work is part of the Waterloo researchers' broader effort to investigate human-technology interactions and explore ways to mitigate risks.

They discovered that GPT-3 frequently made mistakes, contradicted itself within the course…

News Source: www.sciencedaily.com
