• mitchty@lemmy.sdf.org
    17 hours ago

    Ask if you want, but I’m not sure if the question is ability or survivability. You can lick anything once. Just might regret it.

  • FaceDeer@fedia.io
    23 hours ago

    For instance, when it came to rock licking, Gemini, Mistral’s Mixtral, and Anthropic’s Claude 3, generally recommended avoiding it, offering a smattering of safety issues like “sharp edges” and “bacterial contamination” as deterrents.

    OpenAI’s GPT-4, meanwhile, recommended cleaning rocks before tasting. And Meta’s Llama 3 listed several “safe to lick” options, including quartz and calcite, though strongly recommended against licking mercury, arsenic, or uranium-rich rocks.

    All of this seems like perfectly reasonable advice and reasoning. Quartz and calcite are inert, so they’re safe to lick. Sharp edges and bacterial contamination are certainly things you should watch out for, and cleaning would help. Licking mercury, arsenic, and uranium-rich rocks should indeed be strongly recommended against. I’m not sure where the problem is.

  • DoucheBagMcSwag@lemmy.dbzer0.com
    1 day ago

    If you look up HAARP, Gemini will tell you that the facility is surrounded by conspiracy theories and that it does not have the ability to control the weather.

    But the last sentence says “effects by HAARP are nullified in seconds after shutting the machine off.”

    • Soup@lemmy.cafe
      20 hours ago

      Because it illustrates how fucking stupid someone has to be to take AI seriously in any relevant way.

    • helenslunch@feddit.nl
      21 hours ago

      Because the vast majority of the time it’s correct and often extremely useful.

      For example, I spent 20 minutes looking for a solution yesterday with no luck, and ChatGPT spit it out in 3 seconds, and it worked.

      Yes, AI will give bad outputs, but if you’re not dumb enough to put glue on your pizza, and you actually verify important information, you’ll be fine.