• epigone@awful.systems · 6 days ago

    but it allows some people to type out one-liners and generate massive blobs of text during time they could be spending doing their jobs. /codecraft

  • capital@lemmy.world · edited · 6 days ago

    Lemmy

    Edit: Oh shit. I didn’t realize this whole community is just for this… oh well.

  • Whamburglar@sh.itjust.works · 7 days ago

    AI absolutely has the potential to enable great things that people want, but that’s completely outweighed by the way companies are developing it just for profit and to eliminate jobs

    Capitalism can ruin anything, but that doesn’t make the things it ruins intrinsically bad

    • UnderpantsWeevil@lemmy.world · 7 days ago

      AI absolutely has the potential to enable great things that people want

      The current implementation is not going to enable great things: very large data sets obtained through web scraping, very aggressive marketing of these services such that the results pollute existing online data sets, and a refusal to tag AI-generated content as such, which makes filtering it out virtually impossible.
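      The tagging point is easy to see concretely. If generated content carried any provenance marker at all, filtering would be a one-liner; without one, you’re stuck with unreliable detection. A minimal sketch (the `generator` metadata field here is a made-up example, not any real standard):

```python
# Hypothetical illustration: posts carrying a provenance tag in their
# metadata. The "generator" field is an assumption for this sketch only.
posts = [
    {"text": "hand-written post", "generator": None},
    {"text": "synthetic blob", "generator": "some-llm"},
    {"text": "another human post", "generator": None},
]

# With a tag present, filtering out generated content is trivial.
human_only = [p for p in posts if p["generator"] is None]
print(len(human_only))  # 2
```

      The entire difficulty the comment describes comes from that field simply not existing in the wild.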

      This is just spam with the dial turned up to 11.

      Capitalism can ruin anything

      There’s definitely an argument that privatization and the profit motive have driven the enormous amounts of waste and digital filth being churned out at high speed. But I might go further and say there are very specific bad actors, people who go beyond simply seeking to profiteer and genuinely have megalomaniacal delusions of AGI ruling the world, who are rushing this shit out in as clumsy and haphazard a manner as possible. We’ve also got people using AI as a culpability shield (Israel’s got Lavender; I’m sure the US, China, and a few other big military superpowers have their equivalents).

      This isn’t just capitalism seeking ever-higher rents. It’s delusional autocrats believing they can leverage sophisticated algorithms to dictate the life and death of billions of their peers.

      These are ticking time bombs. They aren’t well regulated or particularly well understood. They aren’t “intelligent”, either, just exceptionally complex. That complexity is what makes them both beguiling and dangerous.

    • utopiah@lemmy.world · 6 days ago

      that doesn’t make the things it ruins intrinsically bad

      Hmmm, tricky. See for example https://thenewinquiry.com/super-position/, where capitalism is very good at transforming anything and everything, including culture in this example, to preserve itself while making more money for the few. It might not change good things to bad once they already exist, but it can gradually replace good things with new bad things while making them look like the good old ones they displace.

    • Debs@lemmy.zip · 7 days ago

      I used to be so excited about tech announcements. Like… I should be pumped for AI stuff. Now I immediately start thinking about how they’re going to use the thing to turn a profit by screwing us over. Can they harvest data with it? Can they charge a subscription for that? I’m getting so jaded.

  • chuckleslord@lemmy.world · 7 days ago

    Corporate is pushing AI. It’s laughably bad. They showed off this automated test-writing platform from Meta. Out of 100 runs, that utility had a success rate of 25%, and they were touting how great it was. Entirely embarrassing.

  • hedgehog@ttrpg.network · 7 days ago

    Ethical

    AI tools aren’t inherently unethical, and even the ones that use models with data provenance concerns (e.g., a tool that uses Stable Diffusion models) aren’t any less ethical than many other things that we never think twice about. They certainly aren’t any less ethical than tools that use Google services (Google Analytics, Firebase, etc).

    There are ethical concerns with many AI tools and with the creation of AI models. More importantly, there are ethical concerns with certain uses of AI tools. For example, I think that it is unethical for a company to reduce the number of artists they hire / commission because of AI. It’s unethical to create nonconsensual deepfakes, whether for pornography, propaganda, or fraud.

    Environmentally sustainable

    At least people are making efforts to improve sustainability. https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts

    That said, while AI does have energy costs, a lot of the comments I’ve read about AI’s energy usage are flat-out wrong.

    Great things

    Depends on whom you ask, but “Great” is such a subjective adjective here that it doesn’t make sense to consider it one way or the other.

    things that people want

    Obviously people want the things that AI tools create. If they didn’t, they wouldn’t use them.

    well-meaning

    Excuse me, Sam Altman is a stand-up guy and I will not have you besmirching his name /s

    Honestly my main complaint with this line is the implication that the people behind non-AI tools are any more well-meaning. I’m sure some are, but I can say the same with regard to AI. And in either case, the engineers and testers and project managers and everyone actually implementing the technology and trying to earn a paycheck? They’re well-meaning, for the most part.

  • potatar@sh.itjust.works · 7 days ago

    Ah! I thought you guys wanted cheap, fast, and most importantly interpretable cell state determination from single cell sequencing data. My bad.