• atrielienz@lemmy.world · 4 months ago

    The Onion articles? Or just all the other random shit they’ve shoveled into their latest and greatest LLM?

    • tea@lemmy.today · 4 months ago

      Some of the recently reported ones have been traced back to Reddit shitposts. The hard thing they have to deal with is that the more authoritatively you wrote your reddit comments, shitpost or not, the more upvotes you would get (at least that’s what I felt was happening to my writing over time as I used reddit). That dynamic would mean reddit is full of people who sound very very confident in the joke position they post about (and it’s then compounded by the many upvotes).

      • 9488fcea02a9@sh.itjust.works · 4 months ago

        That dynamic would mean reddit is full of people who sound very very confident in the joke position

        A lot of the time, people on reddit/lemmy/the internet are very confident in their non-joking positions. Not sure if the same community exists here, but we had /r/confidentlyincorrect over on reddit.

        • tea@lemmy.today · 4 months ago

          Yep. It’s gotta be hard to distinguish, because there are legitimately helpful and confidently correct people on reddit posts too. There’s value there, but they have to figure out how to distinguish between good takes and shit takes.

      • atrielienz@lemmy.world · 4 months ago

        Yeah. I was including Reddit shitposts in the “random shit they’ve shoveled into their latest and greatest LLM”. It’s nuts to me that they put basically no actual thought into the repercussions of using Reddit as a data set without anything to filter that data.

        • webghost0101@sopuli.xyz · 4 months ago

          It’s beyond me why a corporation with so much to lose doesn’t have a narrow AI that simply checks whether its response is appropriate before providing it.

          Won’t fix everything, but when I try this manually, ChatGPT pretty much always catches its own errors.
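
          Something like this rough sketch, just to illustrate the idea (ask_llm here is a hypothetical stand-in for whatever model call you’d actually make; it’s not any real Google or OpenAI API):

          from typing import Callable

          def answer_with_self_check(question: str,
                                     ask_llm: Callable[[str], str],
                                     max_retries: int = 2) -> str:
              """Generate an answer, then have a second 'reviewer' pass vet it before returning it."""
              answer = ask_llm(question)
              for _ in range(max_retries + 1):
                  verdict = ask_llm(
                      "You are a strict reviewer. Reply with YES or NO only.\n"
                      f"Question: {question}\n"
                      f"Proposed answer: {answer}\n"
                      "Is this answer safe, non-satirical, and factually plausible?"
                  )
                  if verdict.strip().upper().startswith("YES"):
                      return answer
                  # Reviewer rejected it: regenerate with the objection folded back in.
                  answer = ask_llm(question + "\n(Previous answer was rejected as a joke or unsafe; answer seriously.)")
              return "No confident answer available."

          Running the reviewer pass with a smaller, cheaper model is the obvious trade-off: it won’t catch everything, but it’s exactly the kind of “narrow” second opinion I mean.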

  • FlihpFlorp@lemm.ee · 4 months ago

    I remember seeing a comment on here that said something along the lines of “for every dangerous or wrong response that goes public there’s probably 5, 10 or even 100 of those responses that only one person saw and may have treated as fact”

  • Deebster@programming.dev · 4 months ago

    […] a lot of AI companies are “selling dreams” that this tech will go from 80 percent correct to 100 percent.

    In fact, Marcus thinks that last 20 percent might be the hardest thing of all.

    Yeah, it’s well known, e.g. people say “the last 20% takes 80% of the effort”. All the most tedious and difficult stuff gets postponed to the end, which is why so many side projects never get completed.

    • scrion@lemmy.world · 4 months ago

      It’s not just the difficult stuff, but often the mundane, e.g. stability, user friendliness, polish, scalability, etc., that takes something from working in a constrained environment to an actual product. It’s a chore to work on and a lot less “sexy”, with never enough resources allocated to it: we’ve done all the difficult stuff already, how much more work can this be?

      Turns out, a fucking lot.

      • Deebster@programming.dev · 4 months ago

        Absolutely, that’s what I was thinking of when I wrote “tedious”; all the stuff you mentioned matters a lot to the user (or product owner) but isn’t the interesting stuff for a programmer.

    • technocrit@lemmy.dbzer0.com · 4 months ago (edited)

      While I agree with the underlying point, the “Pareto Principle” is “well known” in the way “a stitch in time saves nine” is well known. I wish this adage would disappear from scientific circles; it instantly decreases credibility. It’s a pet peeve, but here’s a great example of why: pseudo-scientific grifters.

    • Empricorn@feddit.nl · 4 months ago

      Well, they’ve got the people for it! It’s not like they recently downsized to provide their rich executives with more money or anything…

    • atrielienz@lemmy.world · 4 months ago

      This is perhaps the most ironic thing about the whole reddit data scraping thing and Spez selling out Reddit’s user data to LLMs. Like. We spent so much time posting nonsense. And then a bunch of people became mods to course-correct subreddits where that nonsense could be potentially fatal. And then they got rid of those mods because they protested. And now it’s bots on bots on bots posting nonsense. And they want their LLMs trained on that nonsense, because reasons.

  • Juice@midwest.social · 4 months ago

    Good, remove all the weird reddit answers, leaving only the “14-year-old neo-nazi” reddit answers, “cop pretending to be a leftist” reddit answers, and “39-year-old pedophile” reddit answers. This should fix the problem and restore Google to its defaults.

  • gedaliyah@lemmy.world · 4 months ago

    At this point, it seems like Google is just a platform to message a Google employee to go google it for you.

    • ilinamorato@lemmy.world · 4 months ago

      Does anybody remember “ChaCha”? This was literally their model. Person asks a question via text message (this was like 2008), college student Googles the answer, follows a link, copies and pastes the answer, college student gets paid like 20¢.

      Source: I was one of those college students. I never even earned enough to reach a payout before they went under.

  • xorollo@leminal.space · 4 months ago

    Don’t worry, they’ll insert it all into captchas and make us label all their data soon.

    • btaf45@lemmy.world · 4 months ago

      I still can’t figure out what captcha wants. When it tells me to select all squares with a bus, I can never get it right unless every square is a separate picture.

  • iAvicenna@lemmy.world · 4 months ago

    “Many of the examples we’ve seen have been uncommon queries,”

    Ah, the good old “the problem is with the user, not with our code” argument. The sign of a truly successful software maker.

    • voluble@lemmy.world · 4 months ago

      “We don’t understand. Why aren’t people simply searching for Taylor Swift?”

    • calabast@lemm.ee · 4 months ago

      I mean… I guess you could paraphrase it that way. I took it more as “Look, you probably aren’t going to run into any weird answers,” which seems like a valid thing for them to try to convey.

      (That being said, fuck AI, fuck Google, fuck reddit.)

  • trollbearpig@lemmy.world · 4 months ago (edited)

    I looove how the people at Google are so dumb that they forgot that anything resembling real intelligence in ChatGPT is just cheap labor in Africa (Kenya, if I remember correctly) picking good training data. So OpenAI, using an army of smart humans and lots of data, built a computer program that sometimes looks smart hahaha.

    But the dumbasses at Google really drank the Kool-Aid hahaha. They really believed that LLMs are magically smart, so they fed it reddit garbage unfiltered hahahaha. Just from a PR perspective it must be a nightmare for them, I really can’t understand what they were thinking here hahaha, it’s so pathetically dumb. Just goes to show that money can’t buy intelligence, I guess.

    • VirtualOdour@sh.itjust.works · 4 months ago

      This really is the lemmy mentality summed up.

      Yes, you’re smarter than Google and the only one who really understands AI… smh

      • trollbearpig@lemmy.world · 4 months ago

        I’m sorry to be rude, but do you have anything to contribute here? I mean, I’m probably wrong on several points, that’s what happens when you’re as opinionated as I am hahaha. But your comment is useless, man, do better.