NodeBB

Not So Fast: AI Coding Tools Can Actually Reduce Productivity

15 Posts · 9 Posters
volpeon@icy.wyvern.rip
#1

    Not So Fast: AI Coding Tools Can Actually Reduce Productivity
    secondthoughts.ai/p/ai-coding-slowdown

icedquinn@blob.cat
#2
      @volpeon
volpeon@icy.wyvern.rip
#3

        Note that the takeaway isn't "AI sucks" but rather that developers felt it made them faster even though the numbers showed the exact opposite. That may be due to the output quality, but also due to inexperience with using these tools.

lanodan@queer.hacktivis.me
#4
@volpeon the takeaway is devs suck 😄
sun@shitposter.world
#5
@volpeon I can believe it, but as I understand it, it's also specific to the case where the developer is already highly familiar with the codebase
volpeon@icy.wyvern.rip
#6

@sun Yeah, from what I've read in the comments, AI tools help people get started with things they aren't familiar with, but as you gain experience (provided you're willing to learn from what the AI produced), you may be better off writing things yourself. Makes sense to me

volpeon@icy.wyvern.rip
#7

"The coding applications built on those models, like Cursor, are going to keep improving to make better use of the models"

This part is funny, though. Just as Cursor is forced to enshittify because Anthropic upped its prices for enterprise customers (most likely because Anthropic is in trouble itself).

volpeon@icy.wyvern.rip
#8

The problem tools like Cursor have is that, unlike classic software, AI is horrible to run at scale. With something like a social network, the cost per user goes down as the number of users increases. With AI, you can't get that kind of parallelism to bring the cost down, so costs grow linearly: computations on the GPU are specific to one model invocation, and a model invocation can't handle multiple requests at once.

Guest
#9

                    @volpeon When it comes to economy of scale you're much better off with dragons than AI. It makes sense.

volpeon@icy.wyvern.rip
#10

                      When you run an LLM, and then another one for a different user, they will use twice the amount of VRAM and twice the number of cores to get the same performance as the original single run.

                      Let's say you have a database server used by one application, and then you add another application. How much do the resource requirements increase? Not by another 100%, that's for sure.
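That contrast can be sketched as a toy cost model (all numbers are hypothetical, purely to illustrate the shape of the curves):

```python
# Toy cost model (hypothetical numbers): a shared service amortizes a
# fixed base cost, while per-invocation LLM serving grows linearly.

def db_cost(apps, base=100, per_app=5):
    # A shared database server: each extra application adds only
    # marginal load on top of the fixed base cost.
    return base + per_app * apps

def llm_cost(users, per_invocation=100):
    # Each concurrent LLM invocation needs its own VRAM and compute,
    # so cost scales linearly with users, with nothing amortized.
    return per_invocation * users

assert llm_cost(2) == 2 * llm_cost(1)  # doubling users doubles cost
assert db_cost(2) < 2 * db_cost(1)     # shared service grows sub-linearly
```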

Guest
#11

@volpeon I remember using TabNine way before AI was big and cool. It was just very quick autocomplete, but I only used it during my last job. And I genuinely have to wonder if I stopped using it because I subconsciously knew it didn't improve my coding performance.

krutonium@social.treehouse.systems
#12

@volpeon All true, but it's worth noting that you can queue up requests for the same model to run one after another on a group of GPUs. Not great scaling, but you could potentially serve a LOT of users from one GPU as long as everyone is willing to wait a little bit.
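A minimal sketch of that queueing idea (with a stand-in function instead of a real model invocation; the names are made up for illustration):

```python
# Sketch: one worker drains a shared queue, serving requests one after
# another, the way a single GPU could serve many waiting users.
import queue
import threading

requests = queue.Queue()
results = {}

def fake_model(prompt):
    # Stand-in for a real model invocation on the GPU.
    return prompt.upper()

def worker():
    while True:
        item = requests.get()
        if item is None:  # sentinel: no more requests
            break
        user, prompt = item
        results[user] = fake_model(prompt)

t = threading.Thread(target=worker)
t.start()
for user, prompt in [("alice", "hi"), ("bob", "hello")]:
    requests.put((user, prompt))
requests.put(None)
t.join()
# results now holds {"alice": "HI", "bob": "HELLO"}
```

Every user eventually gets served, but later arrivals wait behind everyone ahead of them in the queue, which is the latency trade-off mentioned above.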

Guest
#13

@volpeon they actually do batching at inference to handle several requests in parallel; that's how the whole thing even kinda works at search-engine scale

Guest
#14

                              @volpeon The study accounted for this: https://bsky.app/profile/metr.org/post/3ltn3tdqnpc2x

volpeon@icy.wyvern.rip
#15

                                @sergaderg Oh yeah, that completely slipped my mind. And yet, it doesn't seem like it helps a lot considering the massive hardware requirements.

edit: I looked into the performance characteristics, and it seems there's a threshold around batch size 64 after which performance stops improving. At the scale of millions of requests, that's pretty much negligible.
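As a toy illustration of that plateau (the 64 figure is from the post above; the rate is made up):

```python
# Toy throughput curve: batching gains saturate at batch size 64;
# beyond that, extra requests just wait (illustrative, not a benchmark).
def tokens_per_sec(batch_size, per_request_rate=100):
    return per_request_rate * min(batch_size, 64)

assert tokens_per_sec(64) == 64 * tokens_per_sec(1)  # batching helps up to 64
assert tokens_per_sec(4096) == tokens_per_sec(64)    # then it plateaus
```

Under this model, serving millions of requests still means buying hardware roughly in proportion to load once each GPU's batch is saturated, which is the point about negligible savings at scale.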
