I do not need to hear from people who can't code or at the very minimum seem to actually hate it how I can supposedly code more efficiently
-
I do not need to hear from people who can't code or at the very minimum seem to actually hate it how I can supposedly code more efficiently
This is a post about large language models
-
I do not need to hear from people who can't code or at the very minimum seem to actually hate it how I can supposedly code more efficiently
This is a post about large language models
I also do not need to hear from people who can't understand the cognitive load difference between writing code yourself and trying to understand code someone, or *something*, else wrote. Especially when the something else will be able to slip in little bugs that are easy to overlook and which no human coder, not even the most junior, would ever put in there
This is a post about that Ptacek article some people think has some good points for some baffling reason (it doesn't)
-
I also do not need to hear from people who can't understand the cognitive load difference between writing code yourself and trying to understand code someone, or *something*, else wrote. Especially when the something else will be able to slip in little bugs that are easy to overlook and which no human coder, not even the most junior, would ever put in there
This is a post about that Ptacek article some people think has some good points for some baffling reason (it doesn't)
If you wanna automatically produce shit code and spend your time babysitting the lying machine then that's a you problem. I'm sure you'll make a consultant who bills out at $150/hour very happy some day. But your character flaws have nothing to do with me so keep that shit to yourself
-
If you wanna automatically produce shit code and spend your time babysitting the lying machine then that's a you problem. I'm sure you'll make a consultant who bills out at $150/hour very happy some day. But your character flaws have nothing to do with me so keep that shit to yourself
Also, and I can't believe I'm going to actually deconstruct his arguments further, fucking linters and unit tests? Really?? Putting aside your AI is writing said unit tests so you have no idea what it's testing for, these are tools designed for catching the occasional human flub. They were *not* designed to hold back a tidal wave of sewage such as the one produced by LLMs
Like idk how to tell you this but you can easily introduce bugs that the linter and unit tests won't catch
Shocking, I know
-
Also, and I can't believe I'm going to actually deconstruct his arguments further, fucking linters and unit tests? Really?? Putting aside your AI is writing said unit tests so you have no idea what it's testing for, these are tools designed for catching the occasional human flub. They were *not* designed to hold back a tidal wave of sewage such as the one produced by LLMs
Like idk how to tell you this but you can easily introduce bugs that the linter and unit tests won't catch
Shocking, I know
@eniko Interjection here on "small bugs that can't be catched"
Newer AI things have the ability to run unit tests (or see the results of them at least) too, and they will write code that gets them to pass again.
Whatever it takes.
Which basically creates a potentially horrifying set of tests/""fixed"" tests which creates an even bigger state of uncertainty.