ChatGPT is dismissing it, but I’m not so sure.

  • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social · +93/−5 · 2 months ago

    Seriously, do not use LLMs as a source of authority. They are stochastic machines predicting the next character they type; if what they say is true, it’s pure chance.

    Use them to draft outlines. Use them to summarize meeting notes (and review the summaries). But do not trust them to give you reliable information. You may as well go to a party, find the person who’s taken the most acid, and ask them for an answer.

      • BaroqueInMind@lemmy.one · +11/−24 · edited · 2 months ago

        So then, if you knew this, why did you bother to ask it first? I’m kinda annoyed and jealous of your AI friend over there. Are you breaking up with me?

          • WarlockoftheWoods@lemy.lol · +26/−6 · 2 months ago

            Dude, people here are such fucking cunts, you didn’t do anything wrong. Ignore these two troglodytes who think they are semi-intelligent. I’ve worked in IT nearly my whole life. I’d return it if you can.

          • Empricorn@feddit.nl · +4/−19 · 2 months ago

            Defensive… If someone asks you for advice, and says they have doubts about the answer they received from a Magic 8-Ball, how would you feel?

    • dream_weasel@sh.itjust.works · +1/−2 · 2 months ago

      First sentence of each paragraph: correct.

      Basically all the rest is bunk besides the fact that you can’t count on always getting reliable information. Right answers (especially for something that is technical but non-verifiable), wrong reasons.

      There are “stochastic language models” I suppose (e.g., click the middle suggestion from your phone after typing the first word to create a message), but something like ChatGPT or Perplexity or DeepSeek are not that, beyond using tokenization / word2vec-like setups to make human-readable text. These are a lot more like “don’t trust everything you read on Wikipedia” than a randomized acid-drop response.

  • rumba@lemmy.zip · +12 · edited · 2 months ago

    Inside the nominal return period for a device, absolutely.

    If it’s a warranty repair, I’ll wait for an actual trend, maybe run a burn-in on it and force its hand.
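    A burn-in like that can be sketched with standard tools — smartctl’s extended self-test plus a read-only badblocks surface scan. This is a minimal sketch, not a definitive procedure; `/dev/sdX` is a placeholder for your actual device, and both steps need root:

    ```shell
    # Read-only burn-in sketch: make a marginal drive show its errors
    # before the warranty window closes. /dev/sdX is a placeholder.
    burn_in() {
        dev="$1"
        if [ ! -b "$dev" ]; then
            echo "not a block device: $dev" >&2
            return 1
        fi
        # Kick off the firmware's extended self-test (scans the whole
        # surface; check progress later with: smartctl -l selftest "$dev")
        smartctl -t long "$dev"
        # Host-side read-only scan; any block numbers it prints are
        # unreadable sectors worth putting in an RMA claim.
        badblocks -sv "$dev"
    }

    # Usage (as root): burn_in /dev/sdX
    ```

    A full read pass on a large drive takes hours, so this is something to leave running overnight while the SMART attributes are logged before and after for comparison.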

  • catloaf@lemm.ee · +4 · 2 months ago

    SMART data can be hard to read. But it doesn’t look like any of the normalized values are approaching the failure thresholds. It doesn’t show any bad sectors. But it does show read errors.

    I would check the cable first, make sure it’s securely connected. You said it clicks sometimes, but that could be normal. Check the kernel log/dmesg for errors. Keep an eye on the SMART values to see if they’re trending towards the failure thresholds.
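    The checks above can be sketched as a small shell helper — SMART health verdict, the raw attribute table, and recent kernel messages. A minimal sketch, assuming smartmontools is installed and `/dev/sdX` stands in for the real device:

    ```shell
    # check_drive: print SMART health, attributes, and kernel errors
    # for one block device. /dev/sdX below is a placeholder.
    check_drive() {
        dev="$1"
        if [ ! -b "$dev" ]; then
            echo "not a block device: $dev" >&2
            return 1
        fi
        # Overall PASSED/FAILED verdict from the drive firmware
        smartctl -H "$dev"
        # Attribute table: watch Reallocated_Sector_Ct and the raw
        # read-error counters for a trend toward the thresholds
        smartctl -A "$dev"
        # Recent kernel messages about ATA resets or I/O errors,
        # which would point at the cable/link rather than the media
        dmesg | grep -iE 'ata[0-9]|i/o error' | tail -n 20
    }

    # Usage (as root): check_drive /dev/sdX
    ```

    Running it daily and diffing the attribute output is an easy way to see whether the normalized values are actually trending toward failure or just sitting still.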

  • Hozerkiller@lemmy.ca · +5/−1 · 2 months ago

    I’m sorry, but you wanted to know about your warranty info and you went to ChatGPT instead of the manufacturer or seller of the drive…