ChatGPT is not necessary to do your job or to be competitive at your job unless your job is guessing what words might come next after a block of text.
-
It cannot think. It can't reason. It can't be any more creative than that thing where you pick a random word. It can't count unless it counts out loud!
It literally just takes an incomplete text and estimates a list of the most probable next words. Then it picks one of the more likely ones, in its estimation.
The only reason that LLMs seem to answer questions is because they've had a billion question-answer pairs in their training data. They're just predicting along the question-answer, question-answer sequence.
Let's say you send "Who are you?" LLM gets the input text like:
User: Who are you?
Assistant:
-
And it estimates the most likely words to go next. They might be like "My" and "Hello". It then picks one at random, weighted by those estimates. Let's say it picks "Hello".
LLM gets the input text like:
User: Who are you?
Assistant: Hello
-
And it estimates the most likely words to go next. They might be like "my", "nice", and "there". It then picks one at random. Let's say it picks "nice"
LLM gets the input text like:
User: Who are you?
Assistant: Hello nice
-
It just keeps doing that. Eventually it will pick a blank line or "User:" or a special word and that stops the loop.
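That whole loop can be sketched in a few lines. The probability table below is completely made up for illustration (a real LLM computes these numbers with a giant neural network), but the surrounding pick-a-word loop really is this simple:

```python
import random

# Made-up next-word probabilities, standing in for the model's estimates.
# "<stop>" plays the role of the special word that ends the loop.
NEXT_WORD_PROBS = {
    "Assistant:": {"Hello": 0.6, "My": 0.4},
    "Hello": {"nice": 0.5, "my": 0.3, "there": 0.2},
    "My": {"name": 1.0},
    "name": {"is": 1.0},
    "is": {"<stop>": 1.0},
    "my": {"friend": 1.0},
    "friend": {"<stop>": 1.0},
    "nice": {"to": 1.0},
    "there": {"<stop>": 1.0},
    "to": {"meet": 1.0},
    "meet": {"you": 1.0},
    "you": {"<stop>": 1.0},
}

def generate(last_word: str) -> list[str]:
    """Keep picking a likely next word until the stop word comes up."""
    words = []
    current = last_word
    while True:
        probs = NEXT_WORD_PROBS[current]
        # Pick one of the likely words at random, weighted by probability.
        current = random.choices(list(probs), weights=list(probs.values()))[0]
        if current == "<stop>":
            return words
        words.append(current)

print(" ".join(generate("Assistant:")))
```

The only "intelligence" lives in the probability estimates; the loop itself never looks back and never plans ahead.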
If you're thinking that this process doesn't resemble how anyone writes a novel or a computer program, you are right. It's literally the same as mashing the three predictions on your phone keyboard randomly.
The main difference is that the LLM is huge (tens to hundreds of GiB) and your phone keyboard's model is tiny (just a few MiB).
And that's why you don't need to compete with ChatGPT. It can't plan, count (without counting out loud), reason logically, reason geometrically, experience anything, be inspired, feel emotions, or even know what words mean.
It also suffers a bunch of problems caused by the way it works. For example, if it randomly picks an old-timey word, that raises its probability estimates for old-timey words on the next pass through the loop, which means it will eventually pick a second old-timey word.
That makes it estimate old-timey words as even more likely, and soon it will pick a third! By the end of a paragraph it's fully old-timey.
So the moment you (or it) start hinting at a certain style or voice, it will just amplify that until it's stuck.
See how when I drop the style it has already gotten stuck in Victorian mode so it just keeps going that way?
-
Ok, so here's another problem: it can't think. In this list of 40 words, "creole" is the 14th word. That's not even one of the two middle words.
Also it ignored my instructions to not count!
So even though it "knows" (regurgitated) a good strategy for solving the problem, it can't apply it.
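For the record, "the middle word" of a list is a trivially checkable thing. A hypothetical helper (list and names are mine, for illustration):

```python
def middle_words(words: list) -> list:
    """An even-length list has two middle words; an odd-length list has one."""
    n = len(words)
    if n % 2 == 0:
        return [words[n // 2 - 1], words[n // 2]]
    return [words[n // 2]]

# A 40-word list has its middle at positions 20 and 21 (1-indexed),
# so the 14th word can never be a middle word.
words = [f"word{i}" for i in range(1, 41)]
print(middle_words(words))  # ['word20', 'word21']
```

Any programmer applies this strategy without thinking; the LLM can recite it and still land on position 14.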
-
Here's another example of it eating shit while counting. Note that what I would do is get close and then go back and add or remove a word. But LLMs can't do that, because they can't go back and edit. They can only go forward.
-
So if you can go back and remove a word you already typed then congrats. You are more powerful than an LLM.
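That revise-as-you-go strategy is trivial if you're allowed to edit. A hypothetical sketch (the function name and filler word are mine):

```python
def fit_to_word_count(draft: str, target: int, filler: str = "very") -> str:
    """Draft first, then go back and add or remove words until the count
    is exactly right -- the revision step a forward-only generator
    cannot perform."""
    words = draft.split()
    while len(words) > target:
        words.pop()               # go back and delete a word
    while len(words) < target:
        words.insert(-1, filler)  # go back and wedge a word in
    return " ".join(words)

print(fit_to_word_count("the quick brown fox jumps over the lazy dog", 7))
# → the quick brown fox jumps over the
```

The point isn't that this produces good prose; it's that "go back and fix it" is a capability the sampling loop simply doesn't have.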
Another thing an LLM can't do is plan. For example, if you write a mystery, you need to put clues in your mystery that will all come together and make sense at the end. Well, it can't do that. It will randomly choose clue-like words when it estimates clue-like words are likely.
But they won't build towards a mystery. I was trying to get it to produce an example of this. If a human had written this, it would be complete garbage. What the fuck does any of it have to do with engine pressure? Answer: nothing. LLMs can't think.
-
@gildilinie It would require an understanding of what a story actually is. But that's a very complex thing.