I lost the toot I got this article from, but it's a fascinating read about how the supposed "intelligence" of LLMs is basically a psychic scam, one humans have always been susceptible to.
https://softwarecrisis.dev/letters/llmentalist/
"In trying to make the LLM sound more human, more confident, and more engaging, but without being able to edit specific details in its output, AI researchers seem to have created a mechanical mentalist.
Instead of pretending to read minds through statistically plausible validation statements, it pretends to read and understand your text through statistically plausible validation statements.
The validation loop can continue for a while, with the mark constantly doing the work of convincing themselves of the language model’s intelligence. Done long enough, it becomes a form of reinforcement learning *for the mark*."