• TrickDacy@lemmy.world · 2 months ago

    I don’t see any mention of any details about the study participants but I wouldn’t expect the general public to have this attitude.

  • Juice@midwest.social · 2 months ago

    Okay but have you considered shoving AI down the throats of consumers and forcing them to use it? I say invest in more gigantic server farms!

    • xantoxis@lemmy.world · 2 months ago (edited)

      They don’t care. At the moment AI is cheap for them (because some other investor is paying for it). As long as they believe AI reduces their operating costs*, and as long as they’re convinced every other company will follow suit, it doesn’t matter if consumers like it less. Modern history is a long string of companies making things worse and selling them to us anyway, because there are no alternatives: every competitor is doing it too, except the ones that are prohibitively expensive.

      [*] Lol, it doesn’t do that either

  • Wirlocke@lemmy.blahaj.zone · 2 months ago

    I wonder if we’ll collectively start seeing through these tech-investor pump-and-dump patterns faster, given how many have happened in such a short amount of time already.

    Crypto, Internet of Things, Self Driving Cars, NFTs, now AI.

    It feels like the futurism sheen has started to wear off. When everything is a major revolution inserted into every product, and then isn’t, it gets exhausting.

    • Cornelius_Wangenheim@lemmy.world · 2 months ago

      It’s more of a macroeconomic issue. There’s too much investor money chasing too few good investments. Until our laws stop favoring the investor class, we’re going to keep getting more and more of these bubbles, regardless of what they are.

      • Krauerking@lemy.lol · 2 months ago

        Yeah, it’s just investment profit-chasing from larger and larger bank accounts.

        I’m waiting for one of these bubble pops to do lasting damage, but with the amount of protection specifically for them and for money that can’t be allowed to be “lost”, it’s everyone else who has to eat dirt.

    • TimeSquirrel@kbin.melroy.org · 2 months ago

      Internet of Things

      This is very much not hype; it’s very widely used. It’s not just smart bulbs and toasters. It’s burglar/fire alarms, HVAC monitoring, commercial building automation, access control, traffic infrastructure (cameras, signal lights), ATMs, emergency alerting (like how a 911 center dispatches a fire station: there are systems that can be connected to a jurisdiction’s network as a secondary path alongside traditional radio tones), and anything else connected to the Internet that isn’t a computer or cell phone. Now even some cars are part of the IoT realm. You are completely surrounded by IoT without even realizing it.

      • Wirlocke@lemmy.blahaj.zone · 2 months ago

        Huh, didn’t know that! I mainly mentioned it for the fact that it was crammed into products that didn’t need it, like fridges and toasters where it’s usually seen as superfluous, much like AI.

        • DancingBear@midwest.social · 2 months ago

          I would beg to differ. I thoroughly enjoy downloading various toasting regimens. Everyone knows that a piece of white bread toasts differently than a slice of whole wheat. Now add a home slice of sourdough into the mix. It can get overwhelming quite quickly.

          Don’t even get me started on English muffins.

          With the toaster app I can keep all of my toasting regimens in one place, without having to wonder whether it’s going to toast my Pop-Tart as though it were a Hot Pocket.

          • barsoap@lemm.ee · 2 months ago

            I mean, give the thing a USB interface so I can use an app to set timing presets instead of whatever UX nightmare it’d otherwise be, and I’m in. Nowadays it’s probably cheaper to throw in a MOSFET and a tiny chip than to use a bimetallic strip: fewer and less fickle parts, and when you already have the capability to be programmable, why not use it? Connecting it to an actual network? Get out of here.
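
A programmable toaster like the one described above could store its timing presets as a simple table. A minimal sketch in Python, with all bread names, times, and duty cycles invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical preset table a USB-configurable toaster might store.
@dataclass
class ToastPreset:
    name: str
    seconds: int
    power_pct: int  # duty cycle for the hypothetical MOSFET-driven element

PRESETS = {
    "white": ToastPreset("white", 120, 80),
    "whole_wheat": ToastPreset("whole_wheat", 150, 80),
    "sourdough": ToastPreset("sourdough", 165, 90),
    "english_muffin": ToastPreset("english_muffin", 180, 70),
}

def lookup(bread: str) -> ToastPreset:
    # Fall back to a middle-of-the-road default for unknown breads.
    return PRESETS.get(bread, ToastPreset("default", 140, 80))

print(lookup("sourdough").seconds)  # 165
```

An app over USB would just read and write rows in this table; no network required.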

    • explodicle@sh.itjust.works · 2 months ago

      TimeSquirrel made a good point about the Internet of Things, but crypto and self-driving cars are still booming too.

      IMHO it’s a marketing problem. They’re major evolutions taking root over decades. I think AI will gradually become as useful as lasers.

  • cass24@lemmy.world · 2 months ago (edited)

    The less technologically literate shout “AI is theft!”

    Conspiracy theorists whisper of “government surveillance” and “brain-hacking chips”…

    As a result, those who don’t understand new technology become fearful of it.

    In itself, “AI” is a total buzzword.

  • Grandwolf319@sh.itjust.works · 2 months ago

    I mean, it’s pretty obvious when they advertise the technology itself instead of the capabilities it could provide.

    Still waiting for that first good use case for LLMs.

    • psivchaz@reddthat.com · 2 months ago

      It is legitimately useful for getting started with using a new programming library or tool. Documentation is not always easy to understand or easy to search, so having an LLM generate a baseline (even if it’s got mistakes) or answer a few questions can save a lot of time.

      • Grandwolf319@sh.itjust.works · 2 months ago

        So I used to think that, but I gave it a try, as I’m a software dev. I personally didn’t find it that useful; as in, I wouldn’t pay for it.

        Usually when I want to get started, I just look up a basic guide and copy their entire example. You could do that with ChatGPT too, but what if it gave you wrong answers?

        I also asked it more specific questions about how to do X in tool Y, something I couldn’t quickly google. It didn’t give me a correct answer, mostly because the question was rather niche.

        So my conclusion was that it may help people who don’t know how to google, or who are learning a very well-known tool/language with lots of good docs, but for those who already know how to use the industry tools, it’s basically an expensive hint machine.

        In all fairness, I’ll probably use it here and there, but I wouldn’t pay for it. Also, note my example was ChatGPT-specific. I’ve heard some companies might use it to make their docs more searchable, which imo might be the first good use case (once it happens lol).

        • BassTurd@lemmy.world · 2 months ago

          I just recently got Copilot in VS Code through work. I typed a comment that said, “create a new model in sqlalchemy named assets with the columns a, b, c, d”. It couldn’t know the proper data types to use, but it output everything perfectly, including using my custom-defined annotations; only it used the same annotation for every column, which I then had to update. As a test, that was great, but Copilot also picked up a SQL query I had written in a comment to reference as I was making my models, and it generated that entire model for me as well.

          It didn’t do anything that I didn’t know how to do, but it saved on some typing effort. I use it mostly for its auto complete functionality and letting it suggest comments for me.

          • Grandwolf319@sh.itjust.works · 2 months ago

            That’s awesome, and I would probably find those tools useful.

            Code generators have existed for a long time, but they are usually free. These tools actually cost a lot of money; it costs way more to generate code this way than the traditional way.

            So idk if it will be worth it once the venture capital money dries up.

            • bamboo@lemm.ee · 2 months ago

              What are these code generators that have existed for a long time?

                • bamboo@lemm.ee · 2 months ago

                  Neither of those seems similar to GitHub Copilot, other than that they can reduce keystrokes for some common tasks. The actual applicability of them seems narrow. Frequently I use GitHub Copilot for “implement this function based on this doc comment I wrote” or “write docs for this class/function”. It’s the natural-language component that makes the LLM approach useful.

            • BassTurd@lemmy.world · 2 months ago

              That’s fair. I don’t know if I will ever pay my own money for it, but if my company will, I’ll use it where it fits.

        • Dran@lemmy.world · 2 months ago

          I’m actually working on a vector-DB RAG system for my own documentation. Even in its rudimentary stages, it’s been very helpful for finding functions in my own code when I don’t remember exactly which project I implemented them in but have a vague idea of what they did.

          E.g.:

          Have I ever written a bash function that orders non-semver GitHub branches?

          Yes! In your ‘webwork automation’ project, starting on line 234, you wrote a function that sorts Git branches based on WebWork’s versioning conventions.
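
The retrieval step Dran describes can be illustrated with a dependency-free sketch. A real system would use a learned embedding model and a vector database; plain bag-of-words cosine similarity shows the same retrieve-by-similarity shape (the indexed “project” descriptions below are invented):

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words token counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical per-project documentation snippets (invented for the sketch).
docs = {
    "webwork automation": "bash function that sorts git branches by custom versioning conventions",
    "backup scripts": "python script that rotates tarball backups on a schedule",
}

def search(query: str) -> str:
    # Return the project whose description is most similar to the query.
    q = embed(query)
    return max(docs, key=lambda name: cosine(q, embed(docs[name])))

print(search("have I written a function that orders non-semver git branches?"))
# → webwork automation
```

Swapping `embed` for a real embedding model and `docs` for a vector store gives the actual architecture.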

    • Empricorn@feddit.nl · 2 months ago

      Haven’t you been watching the Olympics and seen Google’s ad for Gemini?

      Premise: your daughter wants to write a letter to an athlete she admires. Instead of helping her as a parent, Gemini can magic up a draft for her!

      • psivchaz@reddthat.com · 2 months ago

        On the plus side for them, they can probably use Gemini to write their apology blog about how they missed the mark with that ad.

    • beveradb@lemm.ee · 2 months ago

      I’ve built a couple of useful products which leverage LLMs at one stage or another, but I don’t shout about it cos I don’t see LLMs as something particularly exciting or relevant to consumers. To me they’re just another tool in my toolbox, whose efficacy I weigh when trying to solve a particular problem. That said, they are a new tool which is genuinely valuable when dealing with natural-language problems.

      For example, my most recent product includes the capability to automatically create karaoke music videos. The problem that long prevented me from bringing it to market was transcription quality: the ability to consistently get correct and complete lyrics for any song. Now, by using state-of-the-art transcription (which returns ~90% accurate results) plus an open-weight LLM with a fine-tuned prompt to correct the mistakes in that transcription, I’ve finally been able to create a product which produces high-quality results pretty consistently. Before LLMs that would’ve been much harder!
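
The two-stage pipeline described above (speech-to-text, then an LLM correcting its mistakes) can be sketched structurally. Here both stages are stubbed with hard-coded stand-ins so the sketch runs without any model; the lyrics and the “mistake” are invented:

```python
# Structural sketch of a transcribe-then-correct pipeline.
def transcribe(audio_path: str) -> str:
    # Stand-in for an ASR model (~90% accurate per the comment above).
    return "we will rock you, we will rocku"

# In the real product an LLM infers corrections from context;
# a substitution table stands in so the sketch is self-contained.
KNOWN_FIXES = {"rocku": "rock you"}

def correct(transcript: str) -> str:
    for wrong, right in KNOWN_FIXES.items():
        transcript = transcript.replace(wrong, right)
    return transcript

lyrics = correct(transcribe("song.mp3"))
print(lyrics)  # we will rock you, we will rock you
```

The point is the division of labor: the ASR stage gets you most of the way, and a cheap correction pass cleans up the residual errors.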

      • Flying Squid@lemmy.world · 2 months ago

        That’s because businesses are using AI to weed out resumes.

        Basically you beat the system by using the system. That’s my plan too next time I look for work.

    • EvilBit@lemmy.world · 2 months ago

      I actually think the idea of interpreting intent and connecting to actual actions is where this whole LLM thing will turn a small corner, at least. Apple has something like the right idea: “What was the restaurant Paul recommended last week?” “Make an album of all the photos I shot in Belize.” Etc.

      But 98% of GenAI hype is bullshit so far.

      • Grandwolf319@sh.itjust.works · 2 months ago (edited)

        How would it do that? Wouldn’t LLMs just take input as voice or text and then guess an output as text?

        Wouldn’t the text output that is supposed to be commands for actions need to be correct, and not a guess?

        It’s the whole guessing part that makes LLMs not useful, so imo they should only be used to improve things we already need to guess at.

        • EvilBit@lemmy.world · 2 months ago

          One of the ways to mitigate the core issue of an LLM, which is confabulation/inaccuracy, is to have a layer of either confirmation or simple forgiveness intrinsic to the task. Use the favor test: if you asked a friend to do you a favor and perform these actions, they’d give you results that you could either look over yourself to confirm they’re correct enough, or accept while living with minor errors. If that works for you, go for it. But if you’re doing something that absolutely 100% must be correct, you are entirely dependent on independently reviewing the results.

          But one thing Apple is doing is training LLMs with action semantics, so you don’t have to think of its output as strictly textual. When you’re dealing with computers, the term “language” is much looser than you or I tend to understand it. You can have a “grammar” that is inclusive of the entirety of the English language but also includes commands and parameters, for example. So it will kinda speak English, but augmented with the ability to access data and perform actions within iOS as well.
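
One common way to bolt actions onto text output, purely as a toy illustration (this is not Apple’s actual mechanism; the JSON schema and action names are invented), is for the host app to parse and validate structured commands in the model’s reply before executing anything:

```python
import json

# Only actions the host app explicitly supports may be executed.
ALLOWED_ACTIONS = {"create_album", "find_message"}

def dispatch(model_output: str):
    try:
        cmd = json.loads(model_output)
    except json.JSONDecodeError:
        # Plain text: just show it to the user.
        return ("say", model_output)
    if cmd.get("action") in ALLOWED_ACTIONS:
        return (cmd["action"], cmd.get("args", {}))
    raise ValueError(f"refusing unknown action: {cmd.get('action')}")

# A hypothetical response to "Make an album of all the photos I shot in Belize."
out = dispatch('{"action": "create_album", "args": {"filter": "location:Belize"}}')
print(out)  # ('create_album', {'filter': 'location:Belize'})
```

The allow-list is the “confirmation layer” from the comment above: the model can only request actions, never perform them directly.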

  • yemmly@lemmy.world · 2 months ago (edited)

    This is because the AI of today is a shit sandwich that we’re being told is peanut butter and jelly.

    For those who like to party: All the current “AI” technologies use statistics to approximate semantics. They can’t just be semantic, because we don’t know how meaning works or what gives rise to it. So the public is put off because they have an intuitive sense of the ruse.

    As long as the mechanics of meaning remain a mystery, “AI” will be parlor tricks.
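<br/>
The “statistics approximating semantics” point can be made concrete with a tiny bigram model: it knows nothing about meaning, yet raw next-word counts already produce plausible-looking continuations, and LLMs are (very roughly) an enormously scaled-up version of the same statistical move:

```python
from collections import Counter, defaultdict

# Count which word follows which in a toy corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def most_likely_next(word: str) -> str:
    # Pure frequency lookup: no notion of what any word "means".
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # cat
```

Nothing in the model represents cats or mats; it only reflects how often words co-occur, which is the ruse the comment above describes.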

    • yemmly@lemmy.world · 2 months ago

      And I don’t mean to denigrate data science. It is important and powerful. And real machine intelligence may one day emerge from it (or data science may one day point the way). But data science just isn’t AI.

  • Diplomjodler@lemmy.world · 2 months ago

    AI in consumer devices at this point stands for data harvesting, wonky functionality and questionable usefulness. No wonder nobody wants that crap.

  • howrar@lemmy.ca · 2 months ago

    I have no qualms about AI being used in products. But when you have to tell me that something is “powered by AI” as if that’s your main selling point, then you do not have a good product. Tell me what it does, not how it does it.

    • hswolf@lemmy.world · 2 months ago

      Stakeholder: Am I pushing the wrong ideas onto the managers?

      No, it’s the developers who don’t know how to implement the features I want.

  • ATDA@lemmy.world · 2 months ago

    To me, AI helps me bang out small functions and classes for personal projects and acts as a Google alternative for mundane stuff.

    Other than that, any product that uses it is no different from a digital assistant asking ChatGPT to do things. Or at least that seems to be the perception at the consumer level.

    Besides, it’s bad enough that I probably use a home’s worth of energy making failing programming demos, much less ordering pizza from my watch or whatever.

  • teamevil@lemmy.world · 2 months ago

    I absolutely hate having to scroll past garbage AI answers that I don’t care to see and wouldn’t trust anyway.