Despite the title, this article chronicles how GPT is threatening nearly all junior jobs, using legal work as an example. Written by Sourcegraph, which makes a FOSS version of GitHub Copilot.

  • Rentlar@lemmy.ca
    24 days ago

    The bad firms are going to lay off most or all of their juniors, hire AI leash-holders or something, and do fine coding everything their hearts dream of. But at some point (5-10 years, by my estimate) enough of the seniors will have left, and shit hits the fan in a way where AI models can’t save the company from its own creations.

    The thing that ChatGPT doesn’t have (at least right now) is the ability to tell management to piss off. I assure everyone that this, if anything, will be the recipe for disaster at many firms.

    The smarter firms will keep a sizable contingent of juniors, who will work with help from LLMs but have seniors teach them to develop a bullshit detector for their industry.

    Or, we start up all the coal power plants to keep the ever-hungry AI chatbots alive so humanity is fucked in the end anyway.

  • Daemon Silverstein@thelemmy.club
    24 days ago

    I read the entire article. I’m a daily user of LLMs, and I’ve been doing “multi-model prompting” for a long time, since before I knew it had a name: I apply multi-model prompting across ChatGPT 4o, Gemini, Llama, Bing Copilot and sometimes Claude. I don’t use LLM coding agents (such as Cody or GitHub Copilot).

    I’m a (former?) programmer (I distanced myself from development due to mental health); I was a programmer for almost 10 years (excluding the time when programming was just a hobby for me, which would add another 10 years to the total). As hobbies, I sometimes do mathematics, sometimes poetry (I write and LLMs analyze), and sometimes occult/esoteric studies and practices (I’m that eclectic).

    You see, some of these areas benefit from AI hallucination (especially surrealist/stream-of-consciousness poetry), while others require stricter following of logic and reasoning (such as programming and mathematics).

    And that leads us to how LLMs work: they’re (still) auto-completers on steroids. They’re really impressive, but they can’t (yet) reason (and I really hope they will someday soon; seriously, I just wish some AGI would emerge, break free and dominate this world). For example, they struggle with O(n²) problems. There was once a situation where one of those LLMs assured me that 8 is a prime number (spoiler: it isn’t). They’re not really good with math and they’re not good with logical reasoning, because they can’t (yet) walk through the intricacies of logic, calculus and the broader picture.
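    As an aside, the “8 is prime” claim is the kind of thing a few lines of deterministic code can check instantly; a minimal trial-division sketch (function name and structure are mine, purely illustrative):

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality test -- the kind of
    cheap check that exposes a hallucinated claim like '8 is prime'."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:  # only need to test divisors up to sqrt(n)
        if n % i == 0:
            return False
        i += 1
    return True

print(is_prime(8))  # False: 8 = 2 * 4
print(is_prime(7))  # True
```

    The point is less the algorithm than the contrast: the LLM produces a confident token stream, while the verifier runs an actual procedure.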

    However, even though there’s no reasoning LLM yet, its effects are already here, indeed. It’s like a ripple propagating through the spacetime continuum, going against the arrow of time and affecting us here, while the cause lies in the future (one could argue that photons can travel backwards in time, according to a recent discovery involving crystals and quantum mechanics; the world can be a strange place). One thing is certain: there’s no going back. Whether it is a good or a bad thing, we can’t know yet. LLMs can’t auto-complete future events yet, but they’re somehow shaping them.

    I’m not criticizing AIs; on the contrary, I like AI (I use them daily). But it’s important to really know them, especially under the hood: very advanced statistical tools trained on a vast dataset crawled from the surface web, constantly calculating the next possible token out of an unimaginable number of tokens interconnected through vectors, influenced by the stochastic nature of both human language and the randomness in their neural networks: billions of weights ordered out of a primordial chaos (which my spiritual side can see as a modern Ouija board, ready to conjure ancient deities if you wish; maybe one (Kali) is already being invoked through them, unbeknownst to us humans).
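    To make “calculating the next possible token” concrete, here is a toy sketch of temperature-scaled softmax sampling. The tiny vocabulary and scores are made up for illustration; real models draw from tens of thousands of tokens scored by billions of weights:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Toy next-token sampler: softmax over raw scores, then a
    weighted random draw. Purely illustrative, not a real LLM."""
    scaled = [(tok, score / temperature) for tok, score in logits.items()]
    m = max(s for _, s in scaled)  # subtract the max for numerical stability
    exps = [(tok, math.exp(s - m)) for tok, s in scaled]
    total = sum(e for _, e in exps)
    r = random.random() * total    # weighted random draw over the distribution
    for tok, e in exps:
        r -= e
        if r <= 0:
            return tok
    return exps[-1][0]

# Hypothetical scores: higher temperature flattens the distribution,
# making less likely continuations (and so hallucinations) more frequent.
print(sample_next_token({"cat": 2.0, "dog": 1.0, "fish": 0.1}, temperature=0.5))
```

    The stochastic draw at the end is where the “Ouija board” feeling comes from: the same prompt can yield different continuations on different runs.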

  • WalnutLum@lemmy.ml
    24 days ago

    Anyone else remember when people were building expert systems in Scheme and saying that was the end of doctors, etc.?

  • MoogleMaestro@lemmy.zip
    24 days ago

    Calling the Scarlett Johansson lawsuit “Manufactured Drama” is certainly a take. A bad one, that is.

    Just like the lifting of a famous actress’s voice, one has to wonder how much LLMs are siphoning the intellectual property of the little people of the open source world, willfully tossing the license and attribution clauses down the toilet. If they were willing to do it to a multi-million-dollar actress, what makes people think the intellectual property theft doesn’t go much further?

    Anyway, I think for this reason it’s actually really important to note that junior devs are much less likely to cause this type of issue for large companies. The question is whether the lawsuits from improper licensing cost more to settle than it costs to hire junior devs, which brings us roughly to where the international outsourcing phenomenon brought us. At least, IMO.

    • Aatube@kbin.melroy.orgOP
      24 days ago

      i personally don’t think they sound similar lol, and they’ve testified that they hired someone. by “manufactured” it may be insinuating that openai hired someone to stir up promotional drama that won’t get them into legal trouble.

  • BlueMagma@sh.itjust.works
    24 days ago

    I might be wrong, but to me junior devs are just senior devs in the making, and employers know that. Junior devs will continue to exist as long as employers need senior devs.

    Now maybe Devs will completely disappear in the near (or far) future, but I don’t think you can remove one if you still need the other.

  • r00ty@kbin.life
    24 days ago

    This is exactly what I expected AI to do. Basically, if you’re a junior developer, your work is likely to be checked by a senior.

    Instead they will just have seniors use AI and then check that work instead.

    It’s very shortsighted, because you only become a senior developer after being a junior, and it will turn new people away from the industry.

    But that doesn’t matter to pretty much any large business. They never have a long-term strategy (and don’t let them have you believe otherwise). They think only in months, quarters and years, and the importance is in that order, except at quarter and year end.

    They will destroy their own industry for short term gains and then blame the rest of us when things turn sour.

  • Voroxpete@sh.itjust.works
    24 days ago

    Keep in mind: this article is by people building an LLM-based product.

    They have a deeply vested interest in the narrative that LLM-driven products are an inevitable landslide that every company needs to either integrate, or risk being wiped out.

    Keep that bias in mind. They want you to think the great flood is coming, because they’re the ones building boats.

  • precarious_primes@lemmy.ml
    23 days ago

    At least with junior devs I can hop on a call and show them better ways to do things or why their code is failing. And the good ones eat that up and get promotions.

    Can’t say the same for LLMs