How to help AI stop ‘hallucinating’

Ask any Artificial Intelligence (AI) chatbot a question, and its answer may be helpful, amusing, or just plain ‘trippy.’ Welcome to the brave new world of ‘AI hallucinations’: when an AI model generates incorrect information but presents it as fact in order to best match your query. In this Q&A, we speak with Amperity’s Head of Generative AI, Joyce Gordon, about how to manage and even eradicate these hallucinating hiccups.

Why are hallucinations so hard to eliminate? 

Large language models (LLMs) are trained on finite collections of data and tasked with generating new content based on that input data. There are many reasons why hallucinations are challenging to eliminate. Three of the most fundamental are:

  1. Input data that is not representative of future queries: for a model to generate an accurate output, it needs high-quality, relevant inputs to draw inspiration from. The tasks we ask LLMs to complete are boundless, so in many cases we are asking them to generate content when they don’t have good inspiration to draw from. This chasm between the input data and the output required by the user’s query is a frequent cause of hallucinations.
  2. Ambiguity in prompting: when users input open-ended prompts, particularly prompts that require multiple steps to generate an output, they might end up with unexpected results. Ambiguity in the prompt leaves room for interpretation, resulting in an output that differs from the user’s intention.
  3. Probabilistic outputs: the outputs of LLMs are probabilistic as opposed to deterministic. Probabilistic outputs enable a great deal of flexibility; however, they also leave the door open for the LLM to return a different answer when asked the same question, and sometimes those different answers are hallucinations. (A toy sketch of this sampling behavior follows the list.)
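
To make that last point concrete, here is a toy Python sketch of temperature-based sampling. The token probabilities are invented for illustration and do not come from any real model; the point is only that sampling from a distribution means the same prompt can yield different, and occasionally incorrect, completions across runs.

```python
import random

# Toy next-token distribution for the prompt "The capital of Australia is".
# These probabilities are invented for illustration, not taken from a model.
next_token_probs = {
    "Canberra": 0.70,    # correct answer
    "Sydney": 0.25,      # plausible-sounding hallucination
    "Melbourne": 0.05,   # plausible-sounding hallucination
}

def sample_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token; a higher temperature flattens the distribution."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same "prompt" can yield a different answer on each run,
# and occasionally an incorrect one.
for _ in range(5):
    print("The capital of Australia is", sample_next_token(next_token_probs, 1.2))
```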

Is it possible to eliminate hallucinations?

While it may be theoretically possible in the future, it’s unlikely. Eliminating hallucinations would be a challenging task, since it requires the LLM to be perpetually trained on the most up-to-date data and to overcome any inherent uncertainties in language, while also ensuring that the input data is completely unbiased. Generative AI has wide applications across many types of use cases of varying complexity. It might be possible to eliminate hallucinations in use cases that are relatively constrained, but it will be much harder in use cases that are more complex.

What are the best approaches to eliminating hallucinations?

The answer here really depends on the use case at hand. Three promising classes of techniques to reduce hallucinations are:

  • Passing the LLM high-quality data: depending on the use case, this can be achieved in the training process, through fine-tuning, or within the context window directly.
  • Prompt engineering: ensuring the prompt is well tuned to the problem at hand.
  • Feedback loops: building in mechanisms for the AI to check and correct its responses (sketched together with context grounding below).
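
A minimal sketch of how two of these techniques might fit together, assuming a hypothetical `call_llm` helper as a stand-in for whatever chat-completion API is in use: vetted data is passed directly in the context window, and a second call acts as a feedback loop that checks the draft answer against that context.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call (e.g., a model API request)."""
    raise NotImplementedError

def answer_with_context(question: str, documents: list[str]) -> str:
    # High-quality data: place vetted, relevant documents directly in the
    # context window so the model draws on them instead of guessing.
    context = "\n\n".join(documents)
    draft = call_llm(
        "Answer the question using ONLY the context below. If the context "
        "does not contain the answer, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # Feedback loop: ask the model to verify its own draft against the
    # context, and fall back to an honest non-answer if the check fails.
    verdict = call_llm(
        "Does the answer follow from the context? Reply YES or NO.\n\n"
        f"Context:\n{context}\n\nAnswer: {draft}"
    )
    return draft if verdict.strip().upper().startswith("YES") else "I don't know."
```

The self-check is itself an LLM call and therefore imperfect, but even a simple verification pass like this can catch some ungrounded answers before they reach the user.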

Would eliminating hallucinations impact AI creativity?

It depends on how one thinks about creativity. Hallucinations, at their core, are mistakes. Historically, many notable innovations have resulted from accidental events. However, there are many other ways that AI can drive creativity beyond hallucinations. If hallucinations are prevalent, AI adoption will be lower, which will limit the opportunity that AI has to help people drive creative solutions.


About the author: Joyce Gordon

Joyce Gordon is the Head of Generative AI at Amperity, leading product development and strategy. Previously, Joyce led product development for many of Amperity’s ML and ML Ops investments, including launching Amperity’s predictive models and infrastructure used by many of the world’s top brands. Joyce joined the company in 2019 following Amperity’s acquisition of Custora, where she was a founding member of the product team. She earned a B.A. in Biological Mathematics from the University of Pennsylvania and is an inventor on several pending ML patents.