Researchers develop new method to prevent AI from hallucinating, according to a new study
A method developed by a team of Oxford researchers could prevent AI models from producing “confabulations,” a specific type of hallucination or inaccurate answer, according to a new study.

As hype around generative artificial intelligence (genAI) continues, criticism of AI models’ hallucinations has grown. These are plausible-sounding but false outputs from large language models (LLMs) such as OpenAI’s GPT or Anthropic’s Claude. Hallucinations can be especially problematic in fields such as medicine, news, or law.

“‘Hallucination’ is a very broad cat...