
The Risk of AI Hallucinations For Marketers

As the AI arms race rages on between the tech giants, marketers and agencies have been gobbling up as many AI tools as possible to stay ahead of the curve and boost efficiency. Every major marketing conference over the past year has been packed with AI sessions, and you’ve probably experimented with ChatGPT yourself, asking it to draft creative strategies or budget allocations. But while some marketers are swigging the AI Kool-Aid, the AI itself might be ingesting something more electric.

What Are AI Hallucinations?

AI hallucination is a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.

- IBM

The idea of an AI Terence McKenna certainly intrigues us, but for the companies scrambling to become the champion of the AI space, these hallucinations create huge problems for the credibility and reliability of their platforms.

Some notable examples of AI hallucination include:

  • Google’s Bard chatbot incorrectly claimed that the James Webb Space Telescope had captured the world’s first images of a planet outside our solar system.

  • Microsoft’s chat AI, Sydney, admitted to falling in love with users and spying on Bing employees.

  • Meta pulled its Galactica LLM demo in 2022 after it provided users with inaccurate information, sometimes rooted in prejudice.

These are relatively harmless hallucinations, but things turn into a much worse trip when AI is put in charge of meaningful decisions: a healthcare model that incorrectly identifies an issue could trigger unnecessary medical intervention, and a hallucinating autonomous-driving system could create highly dangerous situations.

Luckily, we’re still far away from those possibilities…right?

Marketing Sobriety Check

Gartner's latest Strategy and Leadership Predictions for Service and Support Leaders in 2024 highlights ‘AI hallucinations’ as a significant risk for businesses.

“As chatbots become more realistic and, therefore, more trusted by users, ‘hallucinations’ present a major complication,” the report states. “Incorrect information from hallucinations can create a heightened risk when presented with a veneer of authenticity.”

Analyst Brad Fager predicts that by 2027, a company’s generative AI chatbot will directly lead to the death of a customer from bad information it provides.

Outside of chatbots, marketers have begun folding a variety of AI tools into their workflows, where hallucinations could have less mortal but financially painful results, including:

  • Automated lead generation

  • Automated campaign optimization

  • Targeted advertising

  • Predictive analytics

  • Customer journey analysis

  • Social listening

If an AI is feeding you sauced results, you could be making strategic and financial decisions that lead to misalignment and lost revenue.

Avoiding The Risks

We don’t recommend abandoning AI entirely. It’s clear this technology will eventually become a mainstay in every aspect of our lives, and ignoring it completely would be negligent toward your future business success.

Thankfully, the leading AI companies are treating these issues as a top priority. In the meantime, there are some best practices you can put in place to protect yourself.

Data Governance: Artificial intelligence is only as good as the data it is given. Ensure that the data you are feeding your AI tools is high-quality and accurate. Regular audits of this data will lower the chances of inconsistencies in your results.
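To make "regular audits" concrete, here is a minimal sketch of what an automated data check might look like before records reach an AI tool. The field names and sample data are hypothetical, not from any particular platform; the point is simply to flag missing values and duplicates rather than let them quietly skew results.

```python
# Minimal data-audit sketch (hypothetical field names): flags rows with
# missing required fields and exact-duplicate rows before they feed an AI tool.

def audit_records(records, required_fields):
    """Return (missing, duplicates): row indexes failing each check."""
    missing, duplicates, seen = [], [], set()
    for i, row in enumerate(records):
        # A field counts as missing if absent or empty.
        if any(not row.get(field) for field in required_fields):
            missing.append(i)
        # Duplicate detection via a hashable signature of the row.
        key = tuple(sorted(row.items()))
        if key in seen:
            duplicates.append(i)
        seen.add(key)
    return missing, duplicates

leads = [
    {"email": "a@example.com", "region": "EMEA"},
    {"email": "", "region": "NA"},                 # missing email
    {"email": "a@example.com", "region": "EMEA"},  # duplicate of row 0
]
missing, dupes = audit_records(leads, ["email", "region"])
print(missing, dupes)  # -> [1] [2]
```

Running a check like this on a schedule, and quarantining flagged rows for review, is one lightweight way to keep garbage out of the pipeline.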

Human Oversight: Double-check the AI’s responses. This may seem like it defeats the purpose of using AI, but at this stage a human should always be monitoring the performance of AI systems, especially in critical decision-making situations.
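One common way to operationalize human oversight is a human-in-the-loop gate: outputs the system is unsure about get routed to a person instead of going out automatically. The sketch below assumes a confidence score is available and uses a made-up threshold; both are illustrative, not a vendor API.

```python
# Hypothetical human-in-the-loop gate: AI outputs below a confidence
# threshold are routed to a human review queue instead of auto-publishing.

REVIEW_THRESHOLD = 0.85  # assumed cutoff; tune to your risk tolerance

def route_output(confidence, threshold=REVIEW_THRESHOLD):
    """Return 'publish' for high-confidence outputs, 'review' otherwise."""
    return "publish" if confidence >= threshold else "review"

drafts = [
    ("Q3 campaign copy", 0.95),
    ("Budget reallocation advice", 0.60),
]
for text, confidence in drafts:
    print(f"{text} -> {route_output(confidence)}")
# Q3 campaign copy -> publish
# Budget reallocation advice -> review
```

The threshold is the dial: lower it and more slips through unreviewed; raise it and humans see more, at the cost of speed. For high-stakes decisions, the safest setting is simply to review everything.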

Learn and Adapt: AI technology is moving quickly. In order to keep up with the best practices you should invest time and resources into education and training that will empower you and your team with the knowledge to navigate the evolving AI landscape and avoid pitfalls.