ChatGPT (or Any AI Model) Is Not a Source

In the early 2000s, when I was lecturing and reviewing student papers, I once found a reference that simply listed "the internet" as a source. Not a website. Not a document. Just "the internet."

It looked wrong even then. Today it reads as almost comical.

Two decades later, the same mistake appears in a more sophisticated wrapper: "According to ChatGPT…" or "I asked an AI model and it said…", offered as if that settled anything.

It does not. An AI system is not a source, and it is not authoritative on anything. It is a tool that can help you find and process sources. That distinction matters.

What "authoritative" actually means

When we call something an authoritative source, we are usually saying at least two things.

First, it is legitimate and competent. An authoritative source:

  • is recognized as having relevant expertise or competence; and
  • operates under some standard you can evaluate (professional, academic, institutional).

Second, it is accountable if it is wrong. Authoritative does not mean infallible. It means:

  • there is a real person or legal entity behind the statement;
  • they can be challenged, asked to explain, or contradicted; and
  • if they are reckless or misleading, there are consequences (professional discipline, loss of credibility, legal liability).

Because of those features, people often treat an authoritative source as presumptively correct unless there is good reason to doubt it. The burden of proof shifts.

An AI model does not meet those conditions.

What a "source" is

Before we can talk about AI, we should be precise about "source."

For working purposes:

  • A source is an identifiable person or legal entity making a statement.
  • You can point to them: this author, this witness, this regulator, this court, this company.
  • You can ask them to clarify, justify, or correct.
  • If their statements conflict with reality, you can confront them with evidence.

This covers individuals and also organizations. A company is not a human being, but it has legal personality. At minimum it has directors or officers who can be called to account, and it has formal channels that can issue and correct statements.

Software does not qualify. Software is only ever a medium through which a person or organization speaks.

Why AI models are not sources

On that footing, AI systems like ChatGPT fail the test in several ways.

Not persons

An AI model is a combination of software and data. It may be owned or operated by a company, but:

  • the model itself is not a legal person;
  • it is not a registered entity; and
  • it has no duties, rights, or professional obligations.

It does not "speak for" the company in the way a signed statement by an officer does. It is one channel the company provides for interaction with its systems, nothing more.

Not reliably reproducible

In serious work, we care about reproducibility. If you ask the same question of the same expert, you expect broadly similar answers—or at least a coherent explanation of why their view changed.

Modern AI models are, in practice, chaotic. Small differences in input or settings can produce large differences in output. The system:

  • does not check whether it has all relevant context;
  • does not disclose what it has not seen; and
  • does not express uncertainty in a calibrated, accountable way.

A competent human—lawyer, engineer, auditor—will still be fallible, but they are expected to seek out relevant facts, name their assumptions, and qualify their statements with appropriate caveats. An AI system will answer with whatever it can generate from its training data, whether or not it should.

Not meaningfully challengeable

You can ask an AI model "why?", but that is not the same as challenging a source.

  • The model's internal workings are not practically inspectable in the way a human's reasoning, a dataset, or a published methodology is.
  • It cannot remember that it took one position last week and reconcile it with a different position today.
  • Even if its outputs are harmful or reckless, the model itself cannot be held responsible. You cannot cross-examine software. You cannot put bits and bytes in jail.

Whatever accountability exists lies with the humans and organizations that build, deploy, and rely on the system. That is where "source" must be located.

How to use AI without pretending it is a source

The practical lesson is straightforward.

Do not cite AI models. You can consult them, and you can press them for their underlying sources. But when it comes to evidence or authority, you must:

  • trace claims back to concrete, attributable sources;
  • read those sources yourself; and
  • confirm that the model has not misquoted, misinterpreted, or invented them.

Conversely, do not accept arguments founded on "according to ChatGPT…". Treat that as the start of a conversation, not the end of one. Insist on:

  • named authors, institutions, statutes, regulations, or cases;
  • verifiable documents; and
  • explanations that a human is prepared to stand behind.

AI tools can be valuable accelerators. They can help you find, parse, and compare real sources more quickly. But they are not witnesses, they are not experts, and they are not authorities. Treat them as instruments, not as voices, and you will make better decisions in every domain that still depends on human judgment and human accountability.

This article is for informational purposes only and is not legal advice.