• FarceOfWill@infosec.pub
    5 months ago

    Until someone uses it for a little more than boilerplate, and the reviewer nods that bit through because it's hard to review and isn't the kind of thing the person who "wrote" it would normally get wrong.

    Unless all the AI-generated code is explicitly marked as AI-generated, this approach will go wrong eventually.

    • just another dev@lemmy.my-box.dev
      5 months ago

      Unless all the AI-generated code is explicitly marked as AI-generated, this approach will go wrong eventually.

      Undoubtedly. Hell, even when you do mark it as such, this will happen. Because bugs created by humans also get deployed.

      Basically what you’re saying is that code review is not a guarantee against shipping bugs.

    • HauntedCupcake@lemmy.world
      5 months ago

      Agreed: using LLMs for code requires you to be an experienced dev who can understand what they puke out. And for that very specific and disciplined group of people, it's a net positive.

      However, in general I agree it's more risk than it's worth.