Sam Altman claims AGI is coming in 2025 and machines will be able to 'think like humans' when it happens
-
Sam Altman claims AGI is coming in 2025 and machines will be able to 'think like humans' when it happens

but also: A paper found that the performance of open and closed LLMs drops significantly in multi-turn conversations. Most benchmarks focus on single-turn, fully-specified instruction settings. They found that LLMs often make (incorrect) assumptions in early turns, which they then rely on going forward and never recover from.
-
@volpeon hemmmm, making an incorrect assumption and sticking to it sounds TERRIFYINGLY human tbh.
-
catraxx@tech.lgbt: @volpeon Sam Altman is a manipulator. The rumor basically benefits only his company.
It's like Lockheed Martin promising they can deliver a small fusion reactor for houses. They claimed it would be ready like five years ago. It only served to boost their stock value.
AGI is not just unlikely to appear this year; it's just as unlikely to ever be a thing.
-
@volpeon I can confirm that second bit; the deeper you go, the more off-base the assumptions get. And then you correct them, and they only remember it for like one or two exchanges and then go back to the wrong assumption.
-
@catraxx People finally need to view him as the con artist that he is.
-
@volpeon Remember what all Musk had to do for people to stop seeing him as the next coming of Christ?
-
@krutonium @volpeon Yeah, the rescue diver in Thailand. But there was tons of stuff before that. Most normies first took actual note of him there, though.