“Life-and-death decisions relating to patient acuity, treatment decisions, and staffing levels cannot be made without the assessment skills and critical thinking of registered nurses,” the union wrote in the post. “For example, tell-tale signs of a patient’s condition, such as the smell of a patient’s breath and their skin tone, affect, or demeanor, are often not detected by AI and algorithms.”

“Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care,” they added.

  • BoofStroke@sh.itjust.works · 2 months ago

    Huh. This is how I feel about LPNs who think they are doctors too. I think in most cases I’d prefer the AI.

    • Maeve@kbin.social · 2 months ago

      I’ve worked in health care off and on, in some capacity, for a long time. I know LPNs who are more knowledgeable than plenty of doctors. As I got older, it dawned on me that it’s really the individual. If one is in it as “just a job,” they tend to think the degree makes them know it all, when a degree means diddly without empathy and compassion.

  • ArbitraryValue@sh.itjust.works · 2 months ago

    My experience with the healthcare system, and especially hospitals, is that the people working there are generally knowledgeable and want to help patients, but they are also very busy and often sleep-deprived. A human may be better at medicine than an AI, but an AI that can devote attention to you is better than a human that can’t.

    (The fact that the healthcare system we have is somehow simultaneously very expensive, bad for medical professionals, and bad for patients is a separate issue…)

    • SeaJ@lemm.ee · 2 months ago

      It really depends on the quality of the dataset the AI is trained on. If the data is shitty, the model will amplify its biases or hallucinate. An AI might be able to give a patient more attention, but if it is providing incorrect information, no attention is better than a lot of attention.
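
      A toy sketch of the amplification part (data and labels entirely made up): a model that does nothing but learn base rates turns a 90/10 skew in its training data into a 100/0 skew in its predictions.

      ```python
      from collections import Counter

      # Hypothetical triage records: 90% of the training examples for this
      # symptom profile were labeled "low risk", 10% "high risk".
      training = [("profile_a", "low")] * 90 + [("profile_a", "high")] * 10

      def majority_classifier(train):
          """Learn the most common label per input -- the laziest possible model."""
          counts = {}
          for x, y in train:
              counts.setdefault(x, Counter())[y] += 1
          return lambda x: counts[x].most_common(1)[0][0]

      model = majority_classifier(training)

      # Every future "profile_a" patient is now called low risk, including
      # the 10% who aren't: the skew in the data became absolute in the output.
      print(model("profile_a"))  # -> "low"
      ```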

  • Mouselemming@sh.itjust.works · 2 months ago

    I’m reading a lot of comments from people who haven’t been in a hospital bed recently. AI has increasingly been used by insurance companies to deny needed treatment and by hospital management to justify spreading medical and support personnel even thinner.

    The whole point of AI is that it’s supposed to be able to learn, but what we’ve been doing with it is the equivalent of throwing a child into scrubs and letting them do heart surgery. We should only be allowing it to monitor the care and outcomes as done by humans, in order to develop a much more substantial real-world database than it’s presently working from.
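
    Roughly what that monitor-only mode could look like, as a sketch (function names and log format are made up): the model runs in shadow mode, its suggestion is logged next to the clinician’s actual decision, and nothing it outputs ever drives care.

    ```python
    import csv
    import datetime

    def shadow_log(model, case_id, features, human_decision, path="shadow_log.csv"):
        """Record the model's suggestion next to the clinician's call; act on neither."""
        suggestion = model(features)  # computed for the record, never acted on
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.datetime.now().isoformat(),
                case_id,
                suggestion,
                human_decision,  # the ground truth the database accumulates
            ])
        return human_decision  # care always follows the human
    ```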

  • EnderMB@lemmy.world · 2 months ago

    Way back in 2010 I did some paper reading at university on AI in healthcare, and even back then there were dedicated AI systems that could outperform many healthcare workers in the US and Europe.

    Where many of the issues arose was not in performance, but in liability. If a single person is liable, that’s clear-cut, but what happens when a computer program gives an infant an incorrect dosage, or a procedure with two possible options goes wrong where a human would have chosen the other?

    The problems were also framed as observational: the AI would get cases with a clear solution right far more often, but it observed far less than a human does. The research basically reached the same conclusion many other industries have: AI can produce some useful tools to help humans, but using it to replace humans results in fuck-ups that make the hospital (more notably, its leaders) liable.

    • HubertManne@kbin.social · 2 months ago

      Yes. AI is great as a helper or assistant, but whatever it does always has to be double-checked by a human. All the same, humans can get tired or careless, so it’s not bad having it as long as it’s purely supplemental.
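
      A sketch of what “purely supplemental” can look like in code (everything here is hypothetical): the AI produces a draft, and nothing takes effect without a human signing off.

      ```python
      def supervised_step(ai_draft, human_review):
          """Return the human-approved action; the AI output is advice only."""
          approved = human_review(ai_draft)  # clinician edits, accepts, or rejects
          if approved is None:
              raise RuntimeError("no human sign-off, so nothing happens")
          return approved

      # The clinician here accepts the draft unchanged, but could just as
      # easily return a corrected order, or None to block it entirely.
      draft = {"med": "acetaminophen", "dose_mg": 160}
      order = supervised_step(draft, lambda d: d)
      ```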

  • FaceDeer@fedia.io · 2 months ago

    “Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care,” they added.

    You “won’t accept” algorithms? What if those algorithms are demonstrably doing a better job than the nurses?

    As a patient I want whatever works best for doing diagnoses and whatnot. If that’s humans then let it be humans. If it’s AI, then let it be AI.
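
    “Demonstrably” can be made concrete, too. A back-of-the-envelope sketch (all numbers invented) of the comparison I’d want to see: score both on the same cases and check that the accuracy gap isn’t just noise, e.g. with a two-proportion z-test.

    ```python
    import math

    def two_proportion_z(hits_a, n_a, hits_b, n_b):
        """z-score for the gap between two accuracy rates (normal approximation)."""
        p_a, p_b = hits_a / n_a, hits_b / n_b
        pooled = (hits_a + hits_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    # Hypothetical: AI correct on 870/1000 cases, clinicians on 830/1000.
    z = two_proportion_z(870, 1000, 830, 1000)
    print(f"z = {z:.2f}")  # ~2.50; beyond 1.96 means significant at the 5% level
    ```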

      • DontMakeMoreBabies@kbin.social · 2 months ago

      Exactly! A BSN is a safety degree if you’re just barely not a moron. Who cares about their feelings? Give me what works.

      Yes, smart RNs exist, but they eventually self-select out to become PAs, NPs, or otherwise specialize.

        • Maeve@kbin.social · 2 months ago

        I’ve known some damned fine RNs who stayed with hospice services or the ED because of the need for empathy and compassion there. That said, I prefer an FNP or PA to an MD. I’ve also known some nasty RNs who were just in it for the check, and they made patients and every other employee miserable on their shifts.

  • Sterile_Technique@lemmy.world · 2 months ago

    At least in the US, the healthcare system is fucked-and-a-half with staffing issues alone. With boomers on the way out of the workforce and into the fucking ER, we’re in trouble.

    If ‘AI’ algorithms can help manage the dumpster fire, bring it on. Growing pains are expected, but that doesn’t mean we shouldn’t explore their potential.