
AI models have favorite numbers because they think they are people

AI models always surprise us, not only with what they can do, but also with what they can't do, and why. An interesting new behavior of these systems is both superficial and revealing: they choose random numbers as if they were human beings, which is to say, badly.

But first, what does that mean? Can't people choose random numbers? And how can you tell whether someone is doing it successfully or not? This is actually a very old and well-known human limitation: we overthink and misinterpret randomness.

Tell a person to predict 100 coin tosses and compare that to 100 actual coin tosses; you can almost always tell them apart because, counterintuitively, the actual coin tosses look less random. There will often be, for example, six or seven heads or tails in a row, something almost no human predictor includes in their 100.
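
If you want to check that claim, here is a minimal Python sketch (my own illustration, not part of any cited study) that simulates many sequences of 100 fair tosses and counts how often a streak of six or more shows up:

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Simulate many sequences of 100 fair coin tosses and count how often
# a streak of at least six heads or tails appears.
trials = 10_000
hits = sum(
    longest_run([random.choice("HT") for _ in range(100)]) >= 6
    for _ in range(trials)
)
print(f"Sequences containing a run of 6+: {hits / trials:.0%}")
```

Run it and the long streaks humans avoid turn out to appear in roughly four out of five genuinely random sequences.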

It's the same thing when you ask someone to choose a number between 0 and 100. People almost never choose 1 or 100. Multiples of 5 are rare, as are numbers with repeating digits like 66 and 99. These don't feel like “random” choices to us, because they embody some quality: small, large, distinctive. Instead, we often choose numbers that end in 7, usually from somewhere in the middle.

There are countless examples of this type of predictability in psychology. But that doesn't make it any less strange that AIs do the same thing.

Yes, some curious engineers at Gramener conducted an informal but fascinating experiment in which they simply asked several major LLM chatbots to choose a random number between 0 and 100.

Reader, the results were not random.

Image credits: Gramener

All three models tested had a “favorite” number that was always their answer when placed in the most deterministic mode, but which still appeared most often even at higher “temperatures,” a setting many models have that increases the variability of their results.
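
An experiment along these lines is easy to reproduce. Here's a rough sketch using OpenAI's Python client; the model name, prompt wording, and sample counts are my own choices, not details taken from the Gramener write-up:

```python
from collections import Counter

from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def sample_numbers(n=50, temperature=1.0, model="gpt-3.5-turbo"):
    """Ask the model n times for a 'random' number and tally the replies."""
    counts = Counter()
    for _ in range(n):
        reply = client.chat.completions.create(
            model=model,
            temperature=temperature,  # higher = more variable output
            messages=[{
                "role": "user",
                "content": "Pick a random number between 0 and 100. "
                           "Reply with the number only.",
            }],
        )
        counts[reply.choices[0].message.content.strip()] += 1
    return counts

# At temperature 0 you should see one "favorite" answer almost every time;
# raising the temperature spreads the tally out, but not uniformly.
print(sample_numbers(temperature=0.0).most_common(5))
print(sample_numbers(temperature=1.5).most_common(5))
```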

OpenAI's GPT-3.5 Turbo really likes 47. Previously, it liked 42, a number made famous, of course, by Douglas Adams in The Hitchhiker's Guide to the Galaxy as the answer to life, the universe and everything.

Claude 3 Haiku from Anthropic chose 42. And Gemini likes 72.

More interestingly, all three models demonstrated a human-like bias in the other numbers they selected, even at high temperature.

All of them tended to avoid high and low numbers; Claude never went above 87 or below 27, and even those were outliers. Repeating digits were scrupulously avoided: no 33, 55, or 66, but 77 appeared (it ends in 7). Almost no round numbers, although Gemini once, at the highest temperature, went wild and chose 0.

Why should this be? AIs aren't human! Why would they care about what “seems” random? Have they finally achieved consciousness, and is this how they show it?!

No. The answer, as is often the case with these things, is that we are anthropomorphizing one step too far. These models don't care what is and is not random. They don't know what “randomness” is! They answer this question the same way they answer all the others: by looking at their training data and repeating what was written most frequently after a question that looked like “choose a random number.” The more often it appears, the more often the model repeats it.
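
That mechanic fits in a few lines of code. The toy “parrot” below (with invented counts, purely for illustration) picks an answer in proportion to how often it appeared after similar prompts in its “training data,” with a softmax temperature controlling how flat that distribution gets; a favorite number dominates at low temperature and still leads at high temperature:

```python
import math
import random

# Hypothetical tallies of answers seen after "pick a random number"
# somewhere in the training data.
counts = {"42": 900, "37": 400, "57": 250, "73": 200, "100": 2}

def parrot_pick(counts, temperature=1.0):
    """Sample an answer with probability softmax(log(count) / temperature)."""
    logits = {k: math.log(v) / temperature for k, v in counts.items()}
    z = sum(math.exp(l) for l in logits.values())
    weights = [math.exp(l) / z for l in logits.values()]
    return random.choices(list(counts), weights=weights)[0]

for t in (0.1, 1.0, 2.0):
    picks = [parrot_pick(counts, t) for _ in range(1000)]
    print(t, {k: picks.count(k) for k in counts})
# At t=0.1, "42" wins nearly every time; at t=2.0 the spread widens,
# but "100" stays rare because it was rare in the data.
```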

Where in its training data would it see 100, if almost no one ever responds that way? As far as the AI model knows, 100 is not an acceptable answer to that question. With no real reasoning ability and no understanding of numbers, it can only answer like the stochastic parrot it is. (Similarly, these models have tended to fail at simple arithmetic, like multiplying a few numbers together; after all, how likely is it that the phrase “112*894*32=3,204,096” appears somewhere in their training data? Although newer models will recognize that a math problem is present and kick it to a subroutine.)

It's an object lesson in LLM habits and the humanity they can seem to display. In every interaction with these systems, keep in mind that they have been trained to act as people do, even if that was not the intention. That's why pseudanthropy is so difficult to avoid or prevent.

I wrote in the headline that these models “think they're people,” but that's a little misleading. As we often have occasion to point out, they don't think at all. But their responses, at all times, are imitations of people, with no need to know or think anything. Whether you ask for a chickpea salad recipe, investment advice, or a random number, the process is the same. The results feel human because they are human, drawn directly from human-produced content and remixed, for your convenience and, of course, for big AI's bottom line.
