• trailee@sh.itjust.works · 2 months ago

    They seem to only be watching the questions right now. You’re automatically prevented from deleting an accepted answer, but if you answered your own question (maybe because SO was useless for certain niche questions a decade ago so you kept digging and found your own solution), you can unaccept your answer first and then delete it.

    I got a 30 day ban for “defacing” a few of my 10+ year old questions after moderators promptly reverted the edits. But they seem to have missed where I unaccepted and deleted my answers, even as they hang out in an undeletable state (showing up red for me and hidden for others).

    And comments, which are a key part of properly understanding a lot of almost-correct answers, don’t seem to be afforded revision history or to have deletions noticed by moderators.

    So it seems like you can still delete a bunch of your content, just not the questions. Do with that what you will.

  • floofloof@lemmy.ca · edited · 2 months ago

    If we can’t delete our questions and answers, can we poison the well by uploading masses of shitty questions and answers? If they like AI we could have it help us generate them.

    • VirtualOdour@sh.itjust.works · 2 months ago

      You literally have the same mentality as the coal rollers.

      This is tech that could improve life for everyone, and instead of using it to make open source software or code solutions to problems, you attack it like a crab in a bucket, simply because you fear change.

  • tearsintherain@leminal.space · edited · 2 months ago

    Reddit/Stack/AI are the latest examples of an economic system where a few people monetize and get wealthy using the output of the very many.

  • gravitas_deficiency@sh.itjust.works · 2 months ago

    lol wow this is going even more poorly than I thought it would, and I thought my kneejerk reaction to the initial announcement was quite pessimistic.

  • Bell@lemmy.world · 2 months ago

    Take all you want, it will only take a few hallucinations before no one trusts LLMs to write code or give advice

    • NuXCOM_90Percent@lemmy.zip · 2 months ago

      We already have those near constantly. And we still keep asking queries.

      People assume that LLMs need to be ready to replace a principal engineer, or a doctor or lawyer with decades of experience.

      This is already at the point where we can replace an intern or one of the less good junior engineers. Because anyone who has done code review or has had to do rounds with medical interns knows: they are idiots who need people to check their work constantly. An LLM making up some function because it saw it on Stack Overflow but never tested it is no different from a hotshot intern who copied some code from Stack Overflow and never tested it.

      Except one costs a lot less…

    • sramder@lemmy.world · 2 months ago

      […]will only take a few hallucinations before no one trusts LLMs to write code or give advice

      Because none of us have ever blindly pasted some code we got off google and crossed our fingers ;-)

      • Hackerman_uwu@lemmy.world · 2 months ago

        When you paste that code you do it in your private IDE, in a dev environment and you test it thoroughly before handing it off to the next person to test before it goes to production.

        Hitting up ChatGPT for the answer to a question that you then vomit out in a meeting as if it’s knowledge is totally different.

        • sramder@lemmy.world · 2 months ago

          Which is why I used the former as an example and not the latter.

          I’m not trying to make a general case for AI generated code here… just poking fun at the notion that a few errors will put people off using it.

      • Avid Amoeba@lemmy.ca · edited · 2 months ago

        It’s way easier to figure that out than to check ChatGPT hallucinations. There’s usually someone saying why a response on SO is wrong, either in another response or a comment. You can filter most of the garbage right at that point, without having to put it in your codebase and discover it the hard way. You get none of that information with ChatGPT. The data spat out is not equivalent.

        • deweydecibel@lemmy.world · 2 months ago

          That’s an important point, and it ties into the way ChatGPT and other LLMs take advantage of a flaw in the human brain:

          Because it impersonates a human, people are more inherently willing to trust it. To think it’s “smart”. It’s dangerous how people who don’t know any better (and many people that do know better) will defer to it, consciously or unconsciously, as an authority and never second guess it.

          And because it’s a one-on-one conversation, with no comment section and no one else looking at the responses to call them out as bullshit, the user just won’t second-guess it.

          • KeenFlame@feddit.nu · 2 months ago

            Your thinking is extremely black and white. Many, probably most actually, second-guess chatbot responses.

      • Seasm0ke@lemmy.world · 2 months ago

        Split a segment of data without PII to a staging database, test the pasted script, completely rewrite the script over the next three hours.

    • FaceDeer@fedia.io · 2 months ago

      Maybe for people who have no clue how to work with an LLM. They don’t have to be perfect to still be incredibly valuable, I make use of them all the time and hallucinations aren’t a problem if you use the right tools for the job in the right way.

      • stonerboner@lemmynsfw.com · 2 months ago

        This. I use LLMs for work, primarily to help create extremely complex nested functions.

        I don’t count on LLM’s to create anything new for me, or to provide any data points. I provide the logic, and explain exactly what I want in the end.

        I took a process that normally takes 45 minutes daily, tested it once, and now I have reclaimed 43 extra minutes of my time each day.

        It’s easy and safe to test before I apply it to real data.

        It’s missed the mark a few times as I learned how to properly work with it, but now I’m consistently getting good results.

        Other use cases are up for debate, but I agree when used properly hallucinations are not much of a problem. When I see people complain about them, that tells me they’re using the tool to generate data, which of course is stupid.
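        A minimal sketch of that test-before-apply workflow might look like this (the transform and the data are hypothetical illustrations, not the commenter's actual process):

```python
# Hypothetical LLM-generated transform, validated on a known
# sample before it ever touches real data.
def clean_record(record):
    return {
        "name": record["name"].strip().title(),
        "age": int(record["age"]),
    }

# One known input/output pair acts as the single safe test
# described above; only after it passes does real data go in.
sample = {"name": "  ada lovelace ", "age": "36"}
expected = {"name": "Ada Lovelace", "age": 36}
assert clean_record(sample) == expected
```

        The point is that the human supplies the logic and the acceptance check; the LLM only fills in the mechanical middle.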

        • VirtualOdour@sh.itjust.works · 2 months ago

          Yeah, it’s an obvious sign they’re either not coders at all or don’t understand the tech at all.

          Asking it direct questions or to construct functions with given inputs and outputs can save hours, especially with things that disrupt the main flow of coding - I don’t want to empty the structure of what I’m working on from my head just so I can remember everything needed to do something somewhat trivial like calculate the overlapping volume of two tetrahedrons. Of course I could solve it myself but just reading through the suggestion it offers and getting back to solving the real task is so much nicer.

      • barsquid@lemmy.world · 2 months ago

        The last time I saw someone talk about using the right LLM tool for the job, they were describing turning two minutes of writing a simple map/reduce into one minute of reading enough to confirm the generated one worked. I think I’ll pass on that.
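        For scale, the kind of two-minute map/reduce in question is roughly this (a hypothetical example, not the code from that conversation):

```python
from functools import reduce

# Sum of squares of the even numbers: trivial map/reduce
# boilerplate of the sort an LLM can generate in seconds.
def sum_even_squares(nums):
    evens = filter(lambda n: n % 2 == 0, nums)
    squares = map(lambda n: n * n, evens)
    return reduce(lambda acc, x: acc + x, squares, 0)

print(sum_even_squares([1, 2, 3, 4, 5]))  # 4 + 16 = 20
```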

        • JDubbleu@programming.dev · 2 months ago

          That’s a 50% time reduction for the same output which sounds great to me.

          I’d much rather let an LLM do the menial shit with my validation while I focus on larger problems such as system and API design, or creating rollback plans for major upgrades instead of expending mental energy writing something that has been written a thousand times. They’re not gonna rewrite your entire codebase, but they’re incredibly useful for the small stuff.

          I’m not even particularly into LLMs, and they’re definitely not gonna change the world in the way big tech would like you to believe. However, to deny their usefulness is silly.

          • barsquid@lemmy.world · 2 months ago

            It’s not a consistent 50%, it’s 50% off one task that’s so simple it takes two minutes. I’m not doing enough of that where shaving off minutes is helpful. Maybe other people are writing way more boilerplate than I am or something.

        • Grandwolf319@sh.itjust.works · 2 months ago

          Yeah, every time someone says how useful they find LLM for code I just assume they are doing the most basic shit (so far it’s been true).

    • antihumanitarian@lemmy.world · 2 months ago

      Have you tried recent models? They’re not perfect, no, but they can usually get you most of the way there, if not all the way. If you know how to structure the problem and prompt, granted.

    • capital@lemmy.world · 2 months ago

      People keep saying this but it’s just wrong.

      Maybe I haven’t tried the language you have but it’s pretty damn good at code.

      Granted, whatever it puts out needs to be tested and possibly edited, but that’s the same thing we had to do with Stack Overflow answers.

      • CeeBee@lemmy.world · 2 months ago

        I’ve tried a lot of scenarios and languages with various LLMs. The biggest takeaway I have is that AI can get you started on something or help you solve some issues. I’ve generally found that anything beyond a block or two of code becomes useless. The more it generates the more weirdness starts popping up, or it outright hallucinates.

        For example, today I used an LLM to help me tighten up an incredibly verbose bit of code. Today was just not my day and I knew there was a cleaner way of doing it, but it just wasn’t coming to me. A quick “make this cleaner: <code>” and I was back to the rest of the code.

        This is what LLMs are currently good for. They are just another tool, like tab completion or code linting.
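        As a hypothetical illustration of that kind of “make this cleaner” exchange (not the actual code from that day):

```python
# Before: the verbose version you'd paste after "make this cleaner:"
def positives_verbose(values):
    result = []
    for v in values:
        if v > 0:
            result.append(v)
    return result

# After: the tightened version an LLM typically hands back.
def positives(values):
    return [v for v in values if v > 0]

# Both versions agree, which is easy to confirm at a glance.
assert positives([3, -1, 0, 7]) == positives_verbose([3, -1, 0, 7]) == [3, 7]
```

        The cleanup is small enough to verify by reading, which is exactly why it is a safe use of the tool.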

      • VirtualOdour@sh.itjust.works · 2 months ago

        I use it all the time and it’s brilliant when you put in the basic effort to learn how to use it effectively.

        It’s allowing me and other open source devs to increase the scope and speed of our contributions, just talking through problems is invaluable. Greedy selfish people wanting to destroy things that help so many is exactly the rolling coal mentality - fuck everyone else I don’t want the world to change around me! Makes me so despondent about the future of humanity.

    • kibiz0r@midwest.social · 2 months ago

      The quality really doesn’t matter.

      If they manage to strip any concept of authenticity, ownership or obligation from the entirety of human output and stick it behind a paywall, that’s pretty much the whole ball game.

      If we decide later that this is actually a really bullshit deal – that they get everything for free and then sell it back to us – then they’ll surely get some sort of grandfather clause because “Whoops, we already did it!”

  • schnurrito@discuss.tchncs.de · 2 months ago

    Messages that people post on Stack Exchange sites are literally licensed CC-BY-SA, the whole point of which is to enable them to be shared and used by anyone for any purpose. One of the purposes of such a license is to make sure knowledge is preserved by allowing everyone to make and share copies.

      • bbuez@lemmy.world · 2 months ago

        It does help to know what those funny letters mean. Now we wait for regulators to catch up…

        /tangent

        If anything, we’re a very long way from anything close to intelligent. OpenAI (and subsequently MS, being publicly traded) sold investors on the pretense that LLMs are close to being “AGI”, and now more and more data is necessary to achieve that.

        If you know the internet, you know there’s a lot of garbage. I for one can’t wait for garbage-in garbage-out to start taking its toll.

        Also I’m surprised how well open source models have shaped up; it’s certainly worth a look. I occasionally use a local model for “brainstorming” in the loosest terms, as I generally know what I’m expecting, but it’s sometimes helpful to read tasks laid out. There’s also comfort in that nothing even needs to leave my network, and even in a pinch I got some answers when my network was offline.

        It gives a little hope, while corps get to blatantly violate copyright while wielding it so heavily against everyone else, that advancements in open source have been so great.

  • inset@lemmy.today · 2 months ago

    I fully understand why they are doing this, but we are just losing a mass of really useful knowledge. What a shame…

    • zovits@lemmy.world · 2 months ago

      Vandalism is always reverted on SO, even if done by the original author, so no knowledge is lost. Suing OpenAI for violating the CC-BY-SA license might be possible, but I’d wager SO is not interested in suing them, and since they hold the rights, not much can be done by others.

  • filister@lemmy.world · 2 months ago

    While at the same time they forbid AI generated answers on their website, oh the turntables.

  • athos77@kbin.social · 2 months ago

    For years, the site had a standing policy that prevented the use of generative AI in writing or rewording any questions or answers posted. Moderators were allowed and encouraged to use AI-detection software when reviewing posts. Beginning last week, however, the company began a rapid about-face in its public policy towards AI.

    I listened to an episode of The Daily on AI, and the stuff they fed into the engines included the entire Internet. They literally ran out of things to feed it. That’s why YouTube created their auto-generated subtitles: literally so that they would have more material to feed into their LLMs. I fully expect Reddit to be bought out/merged within the next six months or so. They are desperate for more material to feed the machine. Everything is going to end up going to an LLM somewhere.

  • FaceDeer@fedia.io · 2 months ago

    This sort of thing is so self-sabotaging. The website already has your comment, and a license to use it. By deleting your stuff from the web you only ensure that the AI is definitely going to be the better resource to go to for answers.