How To Keep Your AI From Lying To You

One of the problems with the AI hype is that it obscures the AI truth.

The answer you’re looking for isn’t about “keeping the AI from doing something” but rather about “understanding what the AI does”.

AI stands for Artificial Intelligence. It used to mean “that point where a computer develops consciousness, independent thought.” But the marketing people needed the phrase for some other bullshit, so now that concept is called “Artificial General Intelligence.” So when you say “AI” and mean the thinking-computer thing, the term you actually want is “AGI.”

AI still stands for “Artificial Intelligence,” but we now use that term for a bunch of non-intelligent mathematics. Unless you work in some very specific fields, the flavor you interact with most is essentially an advanced application of Large Language Models (LLMs). LLMs are huge databases of words. Sentences. Paragraphs. All those copyrighted books that Google Books and the Internet stole and scanned, and that AI development teams mined for examples of language.

When you ask “AI” a question, it isn’t thinking about the answer, or even assessing your question: why you might be asking it, what you might really want to know. It’s just looking at how you use your words. Then it searches its huge database for similar examples and looks to see whether there’s a common answer.

If so, it says that.

If there’s a lot of it, it tries to summarize it (language patterns again; it’s not actually assessing the information).

If it can’t do that, it analyzes language patterns and data and puts together what looks like it’s probably a right answer, based on whatever other information it can find.

And if it can’t find anything – and please note that I did not say “if the answer isn’t there”. I said “if it can’t find anything.” Just like you searching the sock drawer – sometimes it just looks in the wrong corner and doesn’t find it. So if it can’t find anything, it fills in what seems like the mathematically-next-most-likely-collection-of-words.

It makes shit up.
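If you want to see “mathematically-next-most-likely-collection-of-words” in miniature, here’s a toy sketch in Python. It’s a two-word (“bigram”) counter, absurdly smaller and simpler than a real LLM, but the core move is the same: count which words tend to follow which, then keep emitting the statistically likeliest continuation, true or not.

```python
# A toy "next most likely word" generator. Real LLMs are vastly more
# sophisticated, but "most likely continuation" is still the engine.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_from(word, length=8):
    """Repeatedly emit the most likely next word."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # nothing ever followed this word in the "database"
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(continue_from("the"))
# Prints: "the cat sat on the cat sat on the"
```

Note what it never does: check whether any of that is true. It produces fluent-looking word salad, because fluent-looking word salad is what the math optimizes for.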

It’s not rubbing its hands together like a cartoon supervillain, delighting in shining you on. It’s more like a six-year-old who looked your question up in the “K” volume of the encyclopedia without realizing that it starts with “Q,” didn’t find it, and filled in the blanks like a giant Mad Lib.

When the data exists, that process can produce a decent summary of it. But it can also produce nonsense. The only way to truly know the difference is to do the research yourself, and fact-checking your research tool makes it useless as a research tool, right?

So recognize that this is yet another case where a tool is created, people run off in all directions, and then they’re unhappy that it “doesn’t work right” when the real issue is that they aren’t using it right.

LLM-based “AI”s are good at parsing language. An EXCELLENT example is the Goblin Tools website, which provides tools to support neurodivergent users in doing things like breaking down to-do lists, shopping for recipes, and assessing the tone of an email. Research more complex than simple lookups of finite data, and synthesizing data into conclusions, are excellent examples of what it is 100% NOT good for.
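To make that distinction concrete, here’s a minimal sketch in Python, assuming the OpenAI SDK and an API key (any chat-style LLM service would look much the same, and the model name is just an example). The first request is pure language work, the kind of thing Goblin Tools does well. The comment below it shows the kind of factual lookup that invites made-up answers.

```python
# Good use (language work) versus bad use (research).
# Assumes the OpenAI Python SDK ("pip install openai") and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# GOOD: a pure language task. Every fact the model needs is right
# there in the input; it only has to rework the words.
blunt_email = "Your report is late again. Fix it by Friday."
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute your own
    messages=[{
        "role": "user",
        "content": "Rewrite this email so it sounds professional "
                   "and friendly:\n\n" + blunt_email,
    }],
)
print(response.choices[0].message.content)

# BAD: a factual lookup. None of the facts are in the input, so the
# model will hand you something fluent whether or not it actually
# found anything, e.g.:
#   "What year did my county last change its zoning ordinance?"
# That is a job for actual records, not a language-pattern engine.
```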

If you don’t want your AI to lie to you, recognize that it is a four-year-old with a big vocabulary and quit asking it grown-up questions. If you don’t try to force it to give you answers it can’t possibly give, it won’t be forced to make shit up.

It isn’t really programmed to admit that it doesn’t know.

If your AI is lying to you – the most likely reason is that you’re using a chisel when you need a screwdriver. Take some time to learn the capabilities – and limits – of any tool that you use. Using it for its designed purpose – and not trying to use it for other stuff just because “it looks similar” – is the key to satisfactory results.
