• Flying Squid@lemmy.world · +103/-2 · 2 months ago

    But for that brief moment, we all got to laugh at it because it said to put glue on pizza.

    All worth it!

  • corroded@lemmy.world · +80/-5 · 2 months ago

The problem isn’t the rise of “AI” so much as how we’re using it.

If a company wants to create a machine learning model that analyzes metrics on an automated production line and spits out parameters to improve the efficiency of their equipment, that’s a great use of the technology. We don’t need an LLM to produce a useless summary of what it thinks the answer is when all I want is a page of search results.

    • FiniteBanjo@lemmy.today · +3/-14 · 2 months ago

That’s fucking bullshit. The people developing it and shipping it as a product have been very clear and upfront about its intended uses, and none of them are ethical.

  • givesomefucks@lemmy.world · +25/-11 · 2 months ago

    There’s literally no point.

Like, humans aren’t really the “smartest” animals. We’re just the best at language and tool use. Other animals routinely demolish us in everything else measured on an IQ test.

Pigeons get a bad rap for being stupid, but their brains are just different from ours. Their image and pattern recognition is so insane that they can recognize that words they’ve never seen aren’t gibberish, just from the letter structure.

We weren’t even trying to get them to do it. The researchers were just introducing new words and expected the pigeons to have to learn them, but the birds could already tell, despite never having seen those words before.

Why the hell are we jumping straight to human consciousness as a goal when we don’t even know what human consciousness is? It’s like jumping straight to whatever the final boss of Elden Ring is on your very first time playing the game. Maybe you’ll eventually beat it. But why wouldn’t you just start from the beginning and work your way up as the game gets harder?

    We should at least start with pigeons and get an artificial pigeon and work our way up.

Like that old Reddit repost about pigeon-guided bombs: that wasn’t a Hail Mary, it was incredibly effective.

    • MudMan@fedia.io · +17/-2 · 2 months ago

      Who’s jumping to human consciousness as a goal? LLMs aren’t human consciousness. The original post is demagoguery, but it’s not misrepresenting the mechanics. Chatbots already have more to do with your pigeons than with human consciousness.

      I hate that the stupidity about AGI some of these techbros are spouting is being taken at face value by critics of the tech.

    • Flying Squid@lemmy.world · +8 · 2 months ago

“Pigeons get a bad rap at being stupid”

      Do they? I guess I haven’t encountered that much. I think about messenger pigeons in wars and such…

      Disgusting? Sure, I’ve heard that a lot. But I haven’t heard ‘stupid’ really as a word to describe pigeons.

Anyway, I don’t disagree with you otherwise. My dogs seem super stupid to me, but I know which one of us would be better at following a trail after someone had left the scene. (Okay, maybe Charlie would still be too stupid for that one, but Ghost could do it.)

      • AnarchistArtificer@slrpnk.net · +7 · 2 months ago

Something that blows my mind about dogs is that their sense of smell is so good that, combined with routine, they use it to track time. If their human leaves the house for 8 hours most days to go to work, the dog can discern the difference between “human’s smell 7 hours after they left” and “human’s smell 8 hours after they left”, and learn that the latter means their human should be home soon. How awesome is that?!

    • JayTreeman@fedia.io · +2/-1 · 2 months ago

You might like the sci-fi YouTuber Isaac Arthur. He has a huge library, including a number of episodes on intelligence.

  • MudMan@fedia.io · +9/-3 · 2 months ago

    I mean, it also made the first image of a black hole, so there’s that part.

    I’d also flag that you shouldn’t use one of these to do basic sums, but in fairness the corporate shills are so desperate to find a sellable application that they’ve been pushing that sort of use super hard, so on that one I blame them.

      • MudMan@fedia.io · +17/-1 · 2 months ago

        Machine learning tech is used in all sorts of data analysis and image refining.

        https://physics.aps.org/articles/v16/63

        I get that all this stuff is being sold as a Google search replacement, but a) it is not, and b) it is actually useful, when used correctly.

        • kibiz0r@midwest.social · +9 · 2 months ago

          This is why the term “AI” sucks so much. Even “machine learning” is kind of misleading.

          Large-scale statistical computing obviously has uses, especially for subjects that lend themselves well to statistical analysis of large and varied data sets, like astronomical observations.

          Sticking all of the text on the internet into a blender and expecting the resulting statistical weights to produce some kind of oracle is… Well, exactly what you’d expect the tech cultists to pivot to after crypto fell apart, tbh, but still incredibly dumb.

          Calling them both “AI” does a tremendous disservice to us all. But here we are, unable to escape the marketing.

          • MudMan@fedia.io · +5 · 2 months ago

            Yeah, it’s no oracle. But it IS fascinating how well it does language, and how close it sticks to plausible answers. It has uses, like narrowing down fuzzy queries, translation and other looser things that traditional algorithms struggle with.

            It’s definitely not a search engine or a calculator, though.

  • Th4tGuyII@fedia.io · +13/-7 · 2 months ago

In the grand scheme of things, I suspect we don’t actually have that much power to stop the industrial machine.

    Even if every person on here, on Reddit, and every left-leaning social media revolted against the powers that be right now, we wouldn’t resolve anything. Not really. They’d send the military out, shoot us down (possibly quite literally), then go back to business as usual.

Unless a business incentive to change our ways emerges, capitalism will not follow; instead, it’ll do everything it can to resist that change. By the time there is enough economic incentive, it’ll be far too late to be worth fixing.

    • MBM@lemmings.world · +15 · 2 months ago

I mean, this isn’t just a social media thing. It was part of the reason there was a writers’ strike in Hollywood, and they did manage to accomplish something. I don’t see why protests/strikes/politics would be useless here.

      • Th4tGuyII@fedia.io · +1 · 2 months ago

        You’re right, but I was making a point, as social media is most often where you hear people calling for revolution.

        I’ll agree that strikes can work, especially employment strikes - but that’s usually because there’s a specific, private entity to target, an employer to back into the metaphorical corner.

As far as protesting/striking against the system goes, you need only look at the strikes and protests relating to Palestine to know what kind of force such a revolutionary strike would be met with.

    • Flying Squid@lemmy.world · +11/-4 · 2 months ago

      A lot of people on Lemmy are expecting the glorious revolution to happen any time now and then we will live in whatever utopia they believe makes a utopia. Even if something like that happens, and I’m less certain by the day that it ever will, the result isn’t necessarily any better than what came before. And often worse.

      • Cornelius_Wangenheim@lemmy.world · +15/-1 · edited · 2 months ago

        It’ll almost certainly be worse. When revolutions happen, the people who seize power are the ones who were most prepared, organized and willing to exercise violence. Does that at all sound like leftists in the West?

        • Wilzax@lemmy.world · +2 · 2 months ago

          The only way to enact utopia is by making it so popular an idea that the propaganda machine gets drowned out. This is going to be a very long and slow process that may never end. But we can always aim for “not worse” and if we can do that, we can also aim for “a little better”. Anything faster than those baby steps feels really far from possible, but those baby steps are always worth taking.

          • ArmokGoB@lemmy.dbzer0.com · +1 · 2 months ago

            Wake me up when people found a solarpunk city-state with nuclear capability so that they don’t just get rolled over by the nearest superpower.

  • alienanimals@lemmy.world · +0/-1 · 2 months ago

    This is a strawman argument. AI is a tool. Like any tool, it’s used for negative things and positive things. Focusing on just the negative is disingenuous at best. And focusing on AI’s climate impact while completely ignoring the big picture is asinine (the oil industry knew they were the primary cause of climate change more than 60 years ago).

    AI has many positive use-cases yet they are completely ignored by people who lack logic and rationality.

    AI is helping physicists speed up experiments into supernovae to better understand the universe.

    AI is helping doctors to expedite cancer screening rates.

    AI is powering robots that can do the dishes.

    AI is also helping to catch illegal fishing, tackle human trafficking, and track diseases.

    • ArmokGoB@lemmy.dbzer0.com · +1 · edited · 2 months ago

      Here’s some research on how much energy various machine learning models use.

      In 2021, Google’s total electricity consumption was 18.3 TWh, with AI accounting for 10%–15% of this total.

      Let’s call it 10% to make it seem as energy-efficient as possible. That’s 1.83 TWh a year, or about 5 GWh a day. An average US home uses 10.5 MWh a year. You could power 476 US homes for a year, and still have some energy left over, with the amount of energy Google uses on their AI-powered search in a single day.
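That back-of-the-envelope arithmetic can be checked with a quick script. The inputs are the figures quoted in this thread (18.3 TWh, a 10% AI share, 10.5 MWh per home per year), not independently verified numbers:

```python
# Sanity check of the energy figures quoted above.
google_total_twh = 18.3   # Google's 2021 electricity consumption, TWh
ai_share = 0.10           # low-end estimate of AI's share
us_home_mwh = 10.5        # average annual US household consumption, MWh

ai_twh_per_year = google_total_twh * ai_share        # 1.83 TWh/year
ai_mwh_per_day = ai_twh_per_year * 1_000_000 / 365   # ~5,000 MWh/day, i.e. ~5 GWh
homes_for_a_year = ai_mwh_per_day / us_home_mwh      # homes powered for a full year

print(f"{ai_twh_per_year:.2f} TWh/year")   # 1.83 TWh/year
print(f"{ai_mwh_per_day:.0f} MWh/day")     # 5014 MWh/day
print(f"{homes_for_a_year:.0f} homes")     # 477 homes
```

The unrounded result lands at about 477 homes; the 476 in the comment comes from rounding down to a flat 5 GWh/day before dividing, so both figures are the same ballpark.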

      • Yprum@lemmy.world · +1 · 2 months ago

But then the problem is how Google uses AI, not AI itself. I can have an LLM running locally for my own purposes without consuming crazy amounts of energy.

So blaming AI is absurd; we should blame OpenAI, Google, Amazon… This whole hatred for AI is absurd when it’s not the real source of the problem. We should concentrate on blaming, and ideally punishing, companies for this kind of use (abuse, more like) of energy. Energy usage in itself is also not an issue, as long as we use adequate energy sources. If companies started deploying huge solar panel fields on top of their buildings and parking lots and whatnot to cover part of the energy use, we could all end up better off than before.

        • ArmokGoB@lemmy.dbzer0.com · +2 · edited · 2 months ago

I agree that we shouldn’t blame the tools. I also believe that generative AI can be used for good, in the right hands. However, denying the negative impact these tools have is just as disingenuous as saying that they’re only going to be used by fat cats and grifters looking to maximize profit.

          Also, did you know that you can just mod random people? It doesn’t even ask you. You just wake up one day as a moderator.

          • Yprum@lemmy.world · +1 · 2 months ago

But is it the tool that has the negative impact or is it the corporations that use the tool with a negative impact? I think it’s an important distinction, even more so when this kind of blaming of AI sounds a lot like a distraction tactic: “no, don’t look at what has caused global warming for the last century, look at this tech that exploded over the last year and is consuming crazy amounts of energy”. And in saying that, I want to make it clear that this doesn’t mean the use of AI shouldn’t be handled, discussed, or criticised, as long as we don’t fall into irrational blaming of a tool that has no such issue of its own.

            I didn’t know about the mod stuff, but also not sure why you mention it, am I going to find myself mod of some weird shit now? X)

            • ArmokGoB@lemmy.dbzer0.com · +1 · 2 months ago

              But is it the tool that has the negative impact or is it the corporations that use the tool with a negative impact?

Running machine learning models is extremely computationally intensive. To my knowledge, it doesn’t scale particularly well when you have a bunch of users making arbitrary requests. The energy problem mostly comes down to the number of users, rather than the fact that it’s corporations doing it. This isn’t to say that big tech doesn’t create a bunch of other problems by controlling closed-source models.

  • afraid_of_zombies@lemmy.world · +2/-13 · 2 months ago

No. Once it has identified something as a math problem, a different part of the code is called.

    Fucking morons with Twatter accounts