• CodeMonkey@programming.dev · +50/−2 · 6 months ago

    About 10 years ago, I read a paper that suggested mitigating a rubber-hose attack by priming your sysadmins with subconscious biases. I think this may have been it: https://www.usenix.org/system/files/conference/usenixsecurity12/sec12-final25.pdf

    Essentially, you turn your user into an LLM for a nonsense language. You train them by having them read nonsense text, then test them by giving them a sequence of text to complete, recording how quickly and accurately they respond. Repeat until the accuracy reaches an acceptable level.
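    The train-and-test loop could be sketched roughly like this. Everything here is illustrative, not from the paper: the key set, session sizes, and the 80% threshold are made up, and the human "user" is replaced by a callback so the sketch is runnable:

```python
import random

# Illustrative mock of an implicit-learning train/test loop. The key set,
# trial counts, and accuracy threshold below are assumptions for the sketch,
# not values taken from the paper.

KEYS = "sdfjkl"  # hypothetical keys used in the interception game

def make_secret_sequence(length=30, rng=None):
    """Generate the random key sequence the user is meant to implicitly learn."""
    rng = rng or random.Random()
    return [rng.choice(KEYS) for _ in range(length)]

def run_session(secret, respond, trials=10):
    """Present the secret sequence repeatedly; `respond` models the user's
    keypress for each prompted key. Returns the fraction answered correctly."""
    correct = 0
    total = 0
    for _ in range(trials):
        for expected in secret:
            total += 1
            if respond(expected) == expected:
                correct += 1
    return correct / total

def train_until(secret, respond, threshold=0.8, max_sessions=20):
    """Repeat training sessions until accuracy reaches the threshold,
    or give up after max_sessions. Returns (sessions_run, final_accuracy)."""
    acc = 0.0
    for session in range(1, max_sessions + 1):
        acc = run_session(secret, respond)
        if acc >= threshold:
            return session, acc
    return max_sessions, acc
```

A perfectly trained user (a callback that always echoes the expected key) passes in one session; an untrained one never reaches the threshold.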

    Even if an attacker kidnaps the user and sends in a body double with your user’s ID, security key, and means of biometric identification, they will still not succeed. Your user cannot teach their doppelganger the pattern, and if the attacker tries to get the real user on a video call, the user reading the prompt aloud and dictating the response should introduce a detectable amount of lag.

    The only remaining avenue for the attacker is, after dumping the body of the original user, to kidnap the family of another user and force that user to carry out the attack. The paper does not bother to cover this scenario, since the mitigation is obvious: your user conditioning should include a second module teaching users to value the security of your corporate assets above the lives of their loved ones.

    • BluesF@lemmy.world · +4 · 6 months ago

      Smart. I like the idea of replacing biometrics with something that can’t easily be cloned: learned behaviour. Perhaps with a robust ML approach you could use analysis of gait, expressions, and other subtle behavioural tics, rather than (or in addition to) facial/fingerprint/iris recognition. I suspect that would be very hard to fake - although perhaps vulnerable to, idk, having a bad day and acting “off”.
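      As a toy sketch of that idea (everything here is hypothetical, and a real system would use an actual ML model rather than a nearest-centroid check): enrol a profile from a few samples of behavioural features, then accept a new sample only if it stays close to the profile. The fixed tolerance also shows the "bad day" failure mode, since a drifted sample from the legitimate user gets rejected:

```python
import math

# Toy behavioural verification: feature vectors might hold hypothetical
# measurements like stride timing or typing cadence. Nearest-centroid
# distance stands in for a real ML model.

def enrol(samples):
    """Average the enrolment samples into a per-user profile (centroid)."""
    n = len(samples)
    dims = len(samples[0])
    return [sum(s[i] for s in samples) / n for i in range(dims)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(profile, sample, tolerance=0.5):
    """Accept only if the new sample is within tolerance of the profile;
    a 'bad day' sample that drifts past the tolerance is rejected."""
    return distance(profile, sample) <= tolerance
```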

      • milicent_bystandr@lemm.ee · +5 · 6 months ago

        Ah, so only employ posh people.

        “Hi, I’m definitely Henry. My turn to take the RSA key sentry duty today.”

        “Henry, why are you acting like a commoner? You’re not like yourself at all!”

    • oatscoop@midwest.social · +1/−1 · 6 months ago (edited)

      Having read the paper, I see a glaring problem: even though the user can’t tell an attacker the password, nothing is stopping them from demonstrating it. It doesn’t matter that it’s an interactive sequence – the user is going to remember enough detail to describe the “prompts”.

      A rubber hose and a little time will extract enough information to build a “close enough” mock-up of the password entry interface, which the trusted user can then use to reveal the password.

  • 018118055@sopuli.xyz · +17 · 6 months ago

    There are some cases involving plausible deniability where game theory says you should beat the person until they are dead, even if they give up their keys, since there might be more keys (a hidden volume, say) they haven’t revealed.

  • JoYo@lemmy.ml · +2 · 6 months ago

    This always sounded like parallel construction.

    Fine then, keep your secrets.

  • heavy@sh.itjust.works · +2/−5 · 6 months ago

    Where is this from? I don’t think exposing the key breaks most crypto algorithms; the algorithm should still be doing its job.

    • CanadaPlus@lemmy.sdf.org · +3 · 6 months ago

      The private key, or a symmetric key, would break the algorithm. It’s kind of the point that a person holding those can read it. The public key is the one you can show people.

      • heavy@sh.itjust.works · +2/−4 · 6 months ago

        It doesn’t break the algorithm, though; you would just have the key and could then use the algorithm (which still works!) to decrypt the data.

        Also, you’re talking about one class of cryptography; the concept of key knowledge varies between algorithms.

        My point is that an attacker having knowledge of the key is a compromise, not a successful break of the algorithm…

        “The attacker beat my ass until I gave them the key” doesn’t mean people should stop using AES or even RSA, for example.
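        To make the distinction concrete, here is a toy symmetric cipher: an HMAC-SHA256 keystream XORed into the message, standing in for AES or any other symmetric algorithm (this construction is just for illustration; nothing here is from the thread’s paper). Whoever holds the key decrypts by design, so a coerced key is a compromised secret, not a broken algorithm:

```python
import hashlib
import hmac

# Toy stream cipher: derive a keystream from the key with HMAC-SHA256 in
# counter mode, then XOR it into the message. Decryption is the same XOR.

def keystream(key, length):
    """Expand the key into `length` pseudorandom bytes."""
    out = b""
    counter = 0
    while len(out) < length:
        block = counter.to_bytes(8, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    """XOR the plaintext with the keystream; anyone with the key can undo it."""
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher: the same operation both ways
```

With the right key the ciphertext opens cleanly; with the wrong key you get garbage. The cipher behaves identically in both cases, which is the point: key disclosure is a key-management failure, not a cryptanalytic break.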