• Onno (VK6FLAB)@lemmy.radio
    23 days ago

    Other than endless posts from the general public telling us how amazing it is, peppered with decision makers using it to replace staff, and then the subsequent news reports about how it told us we should eat rocks, or some variation thereof, there’s been no impact whatsoever on my personal life.

    In my professional life as an ICT person with over 40 years of experience, it’s helped me identify which people understand what it is and, more specifically, what it isn’t (intelligent), and respond accordingly.

    The sooner the AI bubble bursts, the better.

    • Vinny_93@lemmy.world
      23 days ago

      I fully support AI taking over stupid, meaningless jobs if it also means the people that used to do those jobs have financial security and can go do a job they love.

      Software developer Afas has decided to give certain employees one day a week off with pay, and let AI do their job for that day. If that is the future AI can bring, I’d be fine with that.

      The caveat is that the money has to come from somewhere, so their customers will probably foot the bill, which means that other employees elsewhere will get paid less.

      But maybe AI can be used to optimise business models and make better predictions. Less waste means less money spent on processes, which can mean more money for people. I also hope AI can give companies a better distribution of money.

      This, of course, is exactly what stakeholders and decision makers do not want, for obvious reasons.

      • Onno (VK6FLAB)@lemmy.radio
        23 days ago

        The thing that’s stopping anything like that is that the AI we have today is not intelligence in any sense of the word, despite the marketing and “journalism” hype to the contrary.

        ChatGPT is predictive text on steroids.

        Type a word on your mobile phone, then keep tapping the next predicted word and you’ll have some sense of what is happening behind the scenes.

        The difference between your phone keyboard and ChatGPT? Many billions of dollars and unimaginable amounts of computing power.

        It looks real, but there is nothing intelligent about the selection of the next word. It just has much more context to guess the next word and has many more texts to sample from than you or I.
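        That tap-the-top-suggestion loop can be sketched with a toy next-word predictor. This is only an illustration built on a word-frequency table over a made-up sample sentence; real LLMs use neural networks over tokens, but the "keep emitting the most likely next word" loop is the same idea:

```python
from collections import Counter, defaultdict

def train(text):
    """Count which word follows each word in the sample text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        following[cur][nxt] += 1
    return following

def predict_chain(model, seed, length):
    """Start from a seed word and keep tapping the top predicted word."""
    out = [seed]
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break  # never seen a word after this one
        out.append(nxt.most_common(1)[0][0])  # always take the top guess
    return " ".join(out)

# Invented sample text for illustration
sample = ("the cat sat on the mat and the cat ran to the mat "
          "and the cat sat on the rug")
model = train(sample)
print(predict_chain(model, "the", 4))  # prints: the cat sat on the
```

        The output looks locally plausible for exactly the reason described: each word is just the statistically likeliest successor, with no understanding involved.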

        There is no understanding of the text at all, no true or false, right or wrong, none of that.

        AI today is Assumed Intelligence

        Arthur C. Clarke said it best:

        “Any sufficiently advanced technology is indistinguishable from magic.”

        I don’t expect this to be solved in my lifetime, and I believe that the current methods of “intelligence” are too energy intensive to be scalable.

        That’s not to say that machine learning algorithms are useless; there are significant positive and productive tools around, ChatGPT and its Large Language Model siblings notwithstanding.

        Source: I have 40+ years experience in ICT and have an understanding of how this works behind the scenes.

        • Vinny_93@lemmy.world
          23 days ago

          I think you’re right. AGI and certainly ASI are behind one large hurdle: we need to figure out what consciousness is and how we can synthesize it.

          As Qui-Gon Jinn said to Jar Jar Binks: the ability to speak does not make you intelligent.

  • PonyOfWar@pawb.social
    23 days ago

    As a software developer, the one use case where it has been really useful for me is analyzing long and complex error logs and finding possible causes of the error. Getting it to write code sometimes works okay-ish, but more often than not it’s pretty crap. I don’t see any use for it in my personal life.

    I think its influence is negative overall. Right now it might be useful for programming questions, but that’s only the case because it’s fed with human-generated content from sites like Stack Overflow. Now those sites are slowly dying out because people use ChatGPT instead, which will have the perverse effect that future AI will have less useful training data, making it less useful for future problems, while having effectively killed those useful sites in the process.

    Looking outside of my work bubble, its effect on academia and learning seems pretty devastating. People can now cheat themselves towards a diploma with ease. We might face a significant erosion of knowledge and talent with the next generation of scientists.

    • Tyfud@lemmy.world
      22 days ago

      I wish more people understood this. It’s short-term, mediocre gains at the cost of a huge long-term loss, like Stack Overflow.

  • LogicalDrivel@sopuli.xyz
    22 days ago

    It cost me my job (partially). My old boss swallowed the AI pill hard and wanted everything we did to go through GPT. It was ridiculous and made it so that things that would normally take me 30 seconds now took 5-10 minutes of “prompt engineering”. I went along with it for a while, but after a few weeks I gave up and stopped using it. When my boss asked why, I told her it was a waste of time and disingenuous to our customers to have GPT sanitize everything.

    I continued to refuse to use it (it was optional) and my work never suffered. In fact, some of our customers specifically started going through me because they couldn’t stand dealing with the obvious AI slop my manager was shoveling down their throats. This pissed off my manager hardcore, but she couldn’t really say anything without admitting she might be wrong about GPT, so she just ostracized me and then fired me a few months later for “attitude problems”.

  • MNByChoice@midwest.social
    22 days ago

    Impact?

    My company sells services to companies trying to implement it. I have a job due to this.

    Actual use of it? Just wasted time. The verifiable answers are wrong, the unverifiable answers don’t get me anywhere on my projects.

  • IMNOTCRAZYINSTITUTION@lemmy.world
    22 days ago

    My last job was making training/reference manuals. Management started pushing ChatGPT as a way to increase our productivity and forced us all to incorporate AI tools. I immediately began to notice my coworkers’ work decline in quality, with all sorts of bizarre phrasings and instructions that were outright wrong. They weren’t even checking the shit before sending it out.

    Part of my job was to review and critique their work, and I started having to send way more back than before. I tried it out but found that it took more time to fix all of its mistakes than to just write it myself, so I continued to work with my brain instead. The only thing I used AI for was when I had to make videos with narration. I have a bad stutter that made voiceover hard, so ElevenLabs voices ended up narrating my last few videos before I quit.

  • lime!@feddit.nu
    23 days ago

    it works okay as a fuzzy search over documentation.
    …as long as you’re willing to wait.
    …and the documentation is freely available.
    …and doesn’t contain any sensitive information.
    …and you very specifically ask it for page references and ignore everything else it says.

    so basically, it’s worse than just searching for one word and pressing “next” over and over, unless you don’t know what the word is.
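    the “fuzzy search with page references” part can be approximated without a model at all. a rough sketch over hypothetical documentation pages (page numbers, text, and the similarity cutoff are invented for illustration), using typo-tolerant word matching from the standard library:

```python
import difflib

# hypothetical documentation: page number -> page text (invented example data)
pages = {
    12: "configuring the network interface and static routes",
    34: "tuning the scheduler and interrupt affinity",
    56: "troubleshooting interface errors and dropped packets",
}

def fuzzy_page_search(query, pages, cutoff=0.8):
    """return page numbers containing a word close to the query (typo-tolerant)."""
    hits = []
    for num, text in pages.items():
        # match the query against individual words, tolerating misspellings
        if difflib.get_close_matches(query, text.split(), n=1, cutoff=cutoff):
            hits.append(num)
    return hits

# a misspelled query still finds the right pages
print(fuzzy_page_search("interfase", pages))  # prints: [12, 56]
```

    which covers the “you don’t quite know the word” case, with exact page references and nothing to hallucinate.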

  • Routhinator@startrek.website
    22 days ago

    I have a gloriously reduced monthly subscription footprint and application footprint because of all the motherfuckers that tied ChatGPT or other AI into their garbage and updated their terms to say they were going to scan my private data with AI.

    And, even if they pull it, I don’t think I’ll ever go back. No more cloud drives, no more ‘apps’. Webpages and local files on a file share I own and host.

  • LovableSidekick@lemmy.world
    22 days ago

    I never explored it at all until recently, when I told it to generate a small country tavern full of NPCs for 1st edition AD&D. It responded with a picturesque description of the tavern and 8 or 9 NPCs, a few of whom had interrelated backgrounds and little plots going on between them. This is exactly the kind of time-consuming prep that always stresses me out as DM before a game night. Then I told it to describe what happens when a raging ogre bursts in through the door. Keeping the tavern context, it told a short but detailed story of basically one round of activity following the ogre’s entrance, with the previously described characters reacting in their own ways.

    I think that was all it let me do without a paid account, but I was impressed enough to save this content for a future game session and will be using it again to come up with similar content when I’m short on time.

    My daughter, who works for a nonprofit, says she uses ChatGPT frequently to help write grant requests. In her prompts she even tells it to ask her questions about any details it needs to know, and she says it does, and incorporates the new info to generate its output. She thinks it’s a super valuable tool.

  • GreenKnight23@lemmy.world
    22 days ago

    I worked for a company that did not govern AI use. It was used for a year before they were bought.

    I stopped reading emails because they were absolute AI generated garbage.

    Clients started to complain, and one even left because they felt they were no longer a priority for the company. They were our 5th largest client, with an MRR of $300k+.

    They still did nothing to curb AI use.

    They then reduced the workforce in the call center because they implemented an AI chat bot and began to funnel all incidents through it first before giving out a phone number to call.

    The company was then acquired a year ago. The new administration banned all AI usage under security and compliance guidelines.

    Today, the new company hired about 20 new call center support staff. Customers are now happy. I can read my emails again because they contain competent human thought with industry jargon, not some generated thesaurus output.

    Overall, I would say banning AI was the right choice.

    IMO, AI is not being used in the most effective ways and causes too much chaos. Cryptobros are pushing AI to an early grave because all they want is a cash cow to replace crypto.

  • GiantChickDicks@lemmy.ml
    22 days ago

    I work in an office providing customer support for a small pet food manufacturer. I assist customers over the phone, email, and a live chat function on our website. So many people assume I’m AI in chat, which makes sense. A surprising number think I’m a bot when they call in, because I guess my voice sounds like a recording.

    Most of the time it’s just a funny moment at the start of our interaction, but especially in chat, people can be downright nasty. I can’t believe the abuse people hurl out when they assume it’s not an actual human on the other end. When I reply in a way that is polite, but makes it clear a person is interacting with them, I have never gotten a response back.

    It’s not a huge deal, but it still sucks to read the nasty shit people say. I can also understand people’s exhaustion with being forced to deal with robots from my own experiences when I’ve needed support as a customer. I also get feedback every day from people thankful to be able to call or write in and get an actual person listening to and helping them. If we want to continue having services like this, we need to make sure we’re treating the people offering them decently so they want to continue offering that to us.

  • Aganim@lemmy.world
    23 days ago

    I cannot come up with a use-case for ChatGPT in my personal life, so no impact there.

    For work it was a game-changer. No longer do I need to come up with haikus to announce that it is release-freeze day; I just let ChatGPT crap one out so we can all have a laugh at its lack of poetic talent.

    I’ve tried it now and then for some programming related questions, but I found its solutions dubious at best.

  • aesthelete@lemmy.world
    22 days ago

    It’s made my professional life way worse, because it was taken as an indication that every hack-a-thon attempt to put a stupid chat bot in everything is great, actually.

  • AFK BRB Chocolate@lemmy.world
    22 days ago

    I manage a software engineering group for an aerospace company, so early on I had to have a discussion with the team about acceptable and non-acceptable uses of an LLM. A lot of what we do is human-rated (human lives depend on it), so we have to be careful. Also, it’s a hard no on putting anything controlled or proprietary into a public LLM (the company now has one in-house).

    You can’t put trust into an LLM because they get things wrong. Anything that comes out of one has to be fully reviewed and understood. They can be useful for suggesting test cases or coming up with wording for things. I’ve had employees use it to come up with an algorithm or find an error, but I think it’s risky to have one generate large pieces of code.

    • sudneo@lemm.ee
      22 days ago

      Great points. Not only can the output not be trusted, but reviewing code is also notoriously a much more boring activity than writing it, which means our attention is going to be more challenged, in addition to the risk of underestimating the importance of the review over time (e.g., it got it right the last 99 times, so I’ll just skim this one).

  • Kaldo@fedia.io
    22 days ago

    It is getting more present at work every day; I keep hearing even seniors talk about how they “discussed” something with ChatGPT or how they will ask it for help. I had to resolve some issue with DevOps a while back, and they just kept pasting errors into ChatGPT and trying out whatever it spewed back, which I guess wasn’t that much different from me googling the same issue and spewing back whatever SO said.

    I tried it myself, and while it is neat for some simple repetitive things, I always end up going back to normal Google searches or clicking on the sources, because the problems I usually have to google are complicated ones where I need the whole original discussion and context, not just a summary that might skip important caveats.

    I dunno. I simultaneously feel old and out of touch, angry at myself for not just going with the flow and buying into it, but also disappointed in other people who rely on it without ever understanding that it’s flawed, unreliable, untrustworthy, and making people into worse programmers.