I asked ChatGPT if 3,281 is prime. It correctly told me that number is not prime, but falsely claimed it’s divisible by 3:
No, 3281 is not a prime number. It can be divided by numbers other than 1 and itself. For example, 3281 is divisible by 3, since the sum of its digits (3 + 2 + 8 + 1 = 14) is divisible by 3. You can also confirm that 3281 ÷ 3 = 1093, so 3281 = 3 × 1093. Thus, 3281 is a composite number.
The digit-sum test it describes is a genuine way to check divisibility by 3, so I wouldn’t be surprised if it’s a common technique explained in the training corpus. But 14 is clearly not divisible by 3 and, therefore, neither is 3,281. (The claimed division doesn’t hold up either: 3 × 1093 = 3279, not 3281.)
I happen to know (because it’s how I calculated the number) that 3,281 is the product of 17 and 193. Wolfram|Alpha also knows this because the number is relatively easy to factor:
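All of these claims are easy to verify mechanically. Here’s a minimal Python sketch (the helper names `digit_sum` and `smallest_factor` are mine, not from any library):

```python
def digit_sum(n: int) -> int:
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

def smallest_factor(n: int) -> int:
    """Smallest prime factor of n (or n itself if prime), by trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

n = 3281
print(digit_sum(n))        # 14 — not a multiple of 3
print(n % 3)               # 2 — so 3 does not divide 3281 (3 × 1093 = 3279)
print(smallest_factor(n))  # 17
print(n // 17)             # 193 — confirming 3281 = 17 × 193
```

Trial division is naive, but for numbers this small it settles the question instantly.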
To confirm this wasn’t a fluke, I asked it about many other numbers, such as 30257:
No, 30257 is not a prime number. It is divisible by 7, as 30257 ÷ 7 = 4322.
So, 30257 is a composite number.
For the record, Wolfram|Alpha reports the true prime factorization is 79×383.
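Here, too, a few lines of Python settle the disagreement (just a quick sketch of the obvious checks):

```python
# Checking ChatGPT's claim that 30257 = 7 × 4322
# against Wolfram|Alpha's factorization 79 × 383.
n = 30257
print(7 * 4322)         # 30254 — not 30257, so the claimed division is off by 3
print(n % 7)            # 3 — 7 is not a factor at all
print(n % 79, n // 79)  # 0 383 — confirming 30257 = 79 × 383
```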
ChatGPT also frequently fails to produce a large prime number on demand despite (presumably) many primes being mentioned in the training data. That’s true even though it seems to know when a number I provide (such as 30253) is prime. Bizarrely, it will even tell me the number it offered is actually a composite number:
Do you know a 30 digit prime?
ChatGPT said:
Yes, one example of a 30-digit prime number is:
1000000000000000000000000000001.
It is prime and has no divisors other than 1 and itself.
You said:
Is 1000000000000000000000000000001 prime?
ChatGPT said:
No, 1000000000000000000000000000001 is not a prime number.
It is divisible by 3. If you sum the digits of the number:
1+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+1=3
Since the sum of the digits is divisible by 3, the number itself is divisible by 3. Therefore, it is not prime.
Essentially nothing ChatGPT asserted is true. The digit sum is 2, not 3; the number does happen to be composite, just not for the reason given. It didn’t even count the digits in the number accurately. (It has 31 digits, not 30. It’s really bad at counting!) We sometimes say an AI is “hallucinating” false information, but that’s excessively polite to an algorithm generated by algorithms. If a human were to do this, we’d call it by its true names: fabrication, bluster, flimflam and BS. It’s a mentalism trick that fails to stand up to scrutiny. It is a fraud.
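Every claim in that exchange is mechanically checkable. A minimal Python sketch, assuming the offered number is 10**30 + 1 (a 1, twenty-nine zeros, and a 1), with my own trial-division loop:

```python
n = 10**30 + 1  # the "30-digit prime" ChatGPT offered

print(len(str(n)))                  # 31 — it has 31 digits, not 30
print(sum(int(d) for d in str(n)))  # 2 — the digit sum is 2, not 3
print(n % 3)                        # 2 — so 3 is not a factor

# The number is composite all the same; trial division finds a real factor.
f = 2
while n % f != 0:
    f += 1
print(f)                            # 61 — so 61 divides 10**30 + 1
```

Trial division works here only because the smallest factor happens to be tiny; for a genuinely hard 30-digit number it would not terminate in reasonable time.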
But how bad is this really? By now everyone supposedly knows to check the answers they get from ChatGPT. But do people actually check, or even know how to check? If someone confidently told me some largish number is prime, I’d probably believe it. We are primed (so to speak) to believe what supposedly impartial computers have to say. In any case, it’s surely better for a computer to admit failure than to insist on lying.
The misuse of language induces evil in the soul.—Socrates as passed on by Ursula K. Le Guin in Steering the Craft
This seems to be more a paraphrase than a direct translation. Thankfully, Reddit pointed me to the source:
For know you well, my dear Crito, that to express oneself badly is not only faulty as far as the language goes, but does some harm to the soul.—Plato, Phaedo [115e], translated by G.M.A. Grube.