The Challenge of AI Hallucinations

Large Language Models (LLMs) can generate fluent text, but they often "hallucinate": they produce content that is nonsensical, factually incorrect, or untethered from the source material. This poses a significant risk for applications that depend on reliable output. This tool explores a novel approach to mitigating that risk.

ESTIMATED HALLUCINATION RISK

35.7%

Unconstrained generation can lead to high unpredictability.

A Linguistics-Based Solution

Systemic Functional Linguistics (SFL) offers a powerful framework for understanding and controlling language generation. Instead of treating language as a set of rules, SFL sees it as a system of choices for making meaning. We can leverage this to guide LLMs towards more factual and grounded outputs by structuring the "choices" they can make.

🌍

Ideational

This metafunction concerns the "content" of language: who is doing what, to whom, where, and when. By pre-defining the process (e.g., material action, verbal process), participants, and circumstances, we can constrain the model's "worldview" to match the source data, reducing factual deviation.
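As a minimal sketch of this idea, an ideational constraint could be represented as a small frame the generator must fill. The field names and the `realize` helper below are illustrative assumptions, not part of the toolkit itself:

```python
# Illustrative ideational constraint for a single clause.
# All field names and example values are hypothetical.
ideational = {
    "process_type": "material",      # a doing/happening process
    "process": "acquired",
    "participants": {
        "actor": "Company A",        # who acts
        "goal": "Company B",         # who/what is acted upon
    },
    "circumstances": {
        "time": "in 2023",           # when
        "location": "in Berlin",     # where
    },
}

def realize(frame):
    """Render the frame as a clause; the model cannot add new participants."""
    p = frame["participants"]
    c = frame["circumstances"]
    return f'{p["actor"]} {frame["process"]} {p["goal"]} {c["location"]} {c["time"]}.'

print(realize(ideational))
# Company A acquired Company B in Berlin in 2023.
```

Because every participant and circumstance is fixed up front, the generator can vary wording but not introduce claims absent from the frame.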

💬

Interpersonal

This metafunction covers the social aspect of language: the speaker's attitude, the relationship with the listener, and the type of interaction (e.g., statement, question). Controlling mood (declarative, interrogative) and modality (certainty, obligation) ensures the generated text has the intended tone and stance, preventing unwarranted claims.
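In the same illustrative style, an interpersonal constraint can whitelist the mood and modality the output may use. The schema and the simple lexical check below are hypothetical sketches, not the tool's implementation:

```python
# Hypothetical interpersonal constraint: only declarative statements
# with high-certainty modality are permitted.
interpersonal = {
    "mood": "declarative",
    "allowed_modality": {"certainly", "evidently"},
}

def check_stance(sentence, constraint):
    """Reject interrogatives and sentences using hedging modality."""
    hedges = {"might", "could", "perhaps", "possibly"}
    if constraint["mood"] == "declarative" and sentence.endswith("?"):
        return False
    words = set(sentence.lower().rstrip(".?!").split())
    return not (words & hedges)

print(check_stance("The merger certainly closed in 2023.", interpersonal))  # True
print(check_stance("The merger might close soon.", interpersonal))          # False
```

A real system would check modality against a parse rather than a word list, but the principle is the same: stance is validated against an explicit constraint instead of being left to the model.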

📚

Textual

This metafunction organizes the message itself, creating coherence and flow. By structuring the Theme (the starting point of the message) and Rheme (what is said about the Theme), we can ensure the text is logically structured and focused, preventing the model from generating tangential or irrelevant information.
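A textual constraint can be sketched as a fixed sequence of Theme–Rheme pairs, so each sentence starts from an agreed topic and new information chains off earlier content. The representation below is an illustrative assumption:

```python
# Hypothetical textual constraint: each clause is a (theme, rheme) pair.
# Themes chain back to prior rhemes, keeping the text focused.
clauses = [
    ("The report", "summarizes Q3 revenue"),
    ("Q3 revenue", "grew 4% over Q2"),
    ("This growth", "was driven by the EU market"),
]

def realize_text(pairs):
    """Render Theme-Rheme pairs in order; no tangents can be inserted."""
    return " ".join(f"{theme} {rheme}." for theme, rheme in pairs)

print(realize_text(clauses))
```

Because the skeleton fixes what each sentence is "about", the model cannot drift onto topics outside the planned Theme progression.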

Interactive Skeleton Builder

Experiment with generating text skeletons using different strategies. The "Basic" approach is a simple template. The "SFL-Optimized" approach uses linguistic constraints to guide the LLM. See how the hallucination risk score changes based on the level of semantic and structural control.

Generation Controls

Generated Skeleton & Risk Analysis

CALCULATED HALLUCINATION RISK

8.2%

Comparative Results

This chart visualizes the impact of SFL-guided skeleton generation. The SFL-Optimized approach consistently reduces the potential for hallucination by providing clear, unambiguous instructions to the language model, leaving less room for creative (and potentially incorrect) interpretation.

Explore the Toolkit

This interactive demonstration is part of the ongoing research for the Hallucination Risk Calculator & Toolkit. The project aims to develop practical tools for developers and researchers to build more reliable and trustworthy AI systems.

View on GitHub