What Can You Do When A.I. Lies About You?

2023-08-03


Original Article:

Source: Link

The technology’s reliance on statistical pattern prediction also means that most chatbots join words and phrases that they recognize from training data as often being correlated. That is likely how ChatGPT awarded Ellie Pavlick, an assistant professor of computer science at Brown University, a number of awards in her field that she did not win. “What allows it to appear so intelligent is that it can make connections that aren’t explicitly written down,” she said. “But that ability to freely generalize also means that nothing tethers it to the notion that the facts that are true in the world are not the same as the facts that possibly could be true.”

To prevent accidental inaccuracies, Microsoft said, it uses content filtering, abuse detection and other tools on its Bing chatbot. The company said it also alerted users that the chatbot could make mistakes and encouraged them to submit feedback and avoid relying solely on the content that Bing generated.

Similarly, OpenAI said users could inform the company when ChatGPT responded inaccurately. OpenAI trainers can then vet the critique and use it to fine-tune the model to recognize certain responses to specific prompts as better than others. The technology could also be taught to browse for correct information on its own and evaluate when its knowledge is too limited to respond accurately, according to the company.

Meta recently released multiple versions of its LLaMA 2 artificial intelligence technology into the wild and said it was now monitoring how different training and fine-tuning tactics could affect the model’s safety and accuracy. Meta said its open-source release allowed a broad community of users to help identify and fix its vulnerabilities.
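To make the mechanism Pavlick describes concrete, here is a minimal sketch of statistical next-word prediction. The three-sentence corpus is invented for illustration; real chatbots use neural networks trained on billions of documents, but the core idea is the same: pick continuations that co-occurred with the context in training data, whether or not the resulting sentence is true.

```python
import random
from collections import Counter, defaultdict

# Toy training corpus (invented). Note that "won the paper" never appears.
corpus = (
    "the professor won the award . "
    "the professor won the prize . "
    "the professor wrote the paper . "
).split()

# Count which word follows which in the training text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = bigrams[prev]
    words = list(counts)
    weights = list(counts.values())
    return random.choices(words, weights=weights)[0]

# Generate a statistically plausible, factually unchecked sentence.
word, sentence = "the", ["the"]
while word != "." and len(sentence) < 12:
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # can emit "the professor won the paper ." -- never in the corpus
```

The model can output “the professor won the paper” because each adjacent word pair is well correlated in training, even though the sentence as a whole was never written and is false. That is the “freely generalize” behavior, at toy scale, that can award a researcher prizes she never won.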
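The article says OpenAI trainers vet user critiques and fine-tune the model to recognize some responses as better than others. Below is a hedged sketch of how such pairwise preference feedback can become a training signal. The scoring table and the example feedback are stand-ins invented for illustration; production systems use a neural reward model and methods such as reinforcement learning from human feedback, but the pairwise “raise the preferred, lower the rejected” update is the same shape.

```python
import math

# Hypothetical vetted feedback: (prompt, preferred response, rejected response).
feedback = [
    ("Who won the award?", "I don't have a record of that.", "Jane Doe won it."),
]

# Toy "model": a learned score per (prompt, response) pair.
scores: dict[tuple[str, str], float] = {}

def score(prompt: str, response: str) -> float:
    return scores.get((prompt, response), 0.0)

def train_step(prompt: str, better: str, worse: str, lr: float = 0.5) -> float:
    """One pairwise preference update (Bradley-Terry style):
    nudge the preferred response's score up and the rejected one's down."""
    margin = score(prompt, better) - score(prompt, worse)
    # Probability the model currently agrees with the human preference.
    p_agree = 1.0 / (1.0 + math.exp(-margin))
    grad = 1.0 - p_agree  # larger correction when the model disagrees more
    scores[(prompt, better)] = score(prompt, better) + lr * grad
    scores[(prompt, worse)] = score(prompt, worse) - lr * grad
    return p_agree

for _ in range(20):
    train_step(*feedback[0])
print(score(feedback[0][0], feedback[0][1]))  # preferred response now scores higher
```

After a few updates the cautious answer outscores the confident fabrication for that prompt, which is the behavior the fine-tuning is meant to encourage.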
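OpenAI also suggests the technology could be taught to browse for correct information and to recognize when its knowledge is too limited to answer. A minimal sketch of that abstention pattern follows; every function, entry and threshold here is invented for illustration, and a real system would query a live search index rather than a hard-coded lookup.

```python
def lookup_evidence(question: str) -> list[tuple[str, float]]:
    """Hypothetical retrieval step: return (snippet, relevance) pairs."""
    index = {
        "who won the 2023 award": [("Official site: Jane Doe won the 2023 award.", 0.92)],
    }
    return index.get(question.lower(), [])

def answer(question: str, min_confidence: float = 0.8) -> str:
    """Answer only when retrieved evidence clears a confidence bar."""
    evidence = lookup_evidence(question)
    if not evidence:
        return "I can't verify that, so I won't guess."
    snippet, confidence = max(evidence, key=lambda e: e[1])
    if confidence < min_confidence:
        return "I found something, but not enough to answer reliably."
    return f"According to a retrieved source: {snippet}"

print(answer("Who won the 2023 award"))  # answers, citing the retrieved snippet
print(answer("Who won the 1950 award"))  # declines rather than inventing a winner
```

The design choice is the point: instead of always producing the most statistically likely words, the system declines when it cannot ground an answer, which is exactly the failure mode the earlier bigram sketch exposes.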