• conciselyverbose@sh.itjust.works

    If it worked for most shit and escalated to a human when it actually needed to, reliably, I’d be fine with it.

    I don’t believe there’s much overlap between the people willing to invest in actually doing it properly and the people paying for AI instead of people, though.

    • Emmy@lemmy.nz

      The answer is always: the service will suck until you leave for another company.

      Then you’ll find out it sucks just as much there, because you have to buy from someone.

    • Imgonnatrythis@sh.itjust.works

      If it worked for most shit and escalated to a human when it actually needed to, reliably, I’d be fine with it.

      If you think that’s how it will be implemented, I have some beans I’d like to sell you.

    • ArbiterXero@lemmy.world

      The problem is the same as with the telephone answering trees.

      If they’re used to help you get where you’re going, then they’re great. But that’s not the decision the financial incentives favor. Solving your problem costs the company money. Pissing you off and convincing you to give up on getting your problem fixed saves money on support.

      So making you go round in circles is the machine doing EXACTLY what they want it to do.

      • conciselyverbose@sh.itjust.works

        That’s an additional problem.

        But the bigger problem is that it’s not actually possible to do a good job without genuine meaningful investment in building out the tooling properly.

        • ArbiterXero@lemmy.world

          That’s just it… they are building it out properly; their goal is just not what you think it is.

    • fine_sandy_bottom@discuss.tchncs.de

      In my experience the AI assistant is just trained on the information available on the firm’s website.

      In 2024 I never call a company expecting to be assisted by a person. It’s always quicker and easier to figure out how to interact with said company online. The only time you call is when your query can’t be resolved that way.

      That being the case, the entire purpose of the AI is just to make it less convenient to call them. “Have you tried to resolve your issue online? Are you really sure about that? Maybe I could paraphrase this blog post from our website written by an intern 12 years ago.”

      • kalleboo@lemmy.world

        90% of calls to support lines are about questions that are in the top ten of the FAQ. The callers are just the type of people who don’t like reading and want a social answer. The same kind of people who get told “just do a search, this is asked weekly” on Reddit.

        If there were a way to direct the “I just need a FAQ that I don’t want to read myself” people to an LLM and the “something is actually broken, I need real help” people to humans, that would be ideal.