IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.

Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: “It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers.”

He isn’t alone. An administrator on Reddit said 40 percent of their servers were affected, and 70 percent of client computers, approximately 1,000 endpoints, were stuck in a boot loop.

Sadly, for our administrator, things are less than ideal.

Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

  • Buffalox@lemmy.world

    At least no mission-critical services were hit, because nobody would run mission-critical services on Windows, right?

    RIGHT??

      • Blaster M@lemmy.world

        Pretending Linux privilege escalation doesn’t exist… to fight something that gets root, you have to be able to fight at the root level, or the root-access malware can simply nuke the AV from userland.

        • Justin@lemmy.jlh.name

          Or you could just use kernel namespaces, SELinux, Systemd sandboxing, etc. There is zero need to run in ring 0 for security reasons.

          Also, privilege escalation is a lot rarer on Linux than it is on Windows.

    • catloaf@lemm.ee

      Because that’s where filesystem access lives? AV wouldn’t do very much good if it could only run from userspace.

  • catloaf@lemm.ee

    We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

    Someone never tested their DR plans, if they even have them. Generally locking your keys inside the car is not a good idea.

    • JasonDJ@lemmy.zip

      I get storing BitLocker keys in AD, but as a net admin and not a server admin… what do you do with the DCs’ keys? USB storage in a sealed envelope in a safe (or at worst, a locked file cabinet drawer in the IT manager’s office)?

      Or do people forego running BitLocker on servers, since encrypting data at rest can be compensated for by physical security in the data center?

      Or do DCs run on SEDs (self-encrypting drives)?

        • modeler@lemmy.world

          You need at least two copies in two different places, places that will not burn down/explode/flood/collapse/be locked down by the police at the same time.

          An enterprise is going to be commissioning new computers or reformatting existing ones at least once per day. This means the BitLocker key list would need printouts at least every day, in two places.

          Given the above, it’s easy to see that this process will fail from time to time, in ways like accidentally leaking a document with all these keys.

          • JasonDJ@lemmy.zip

            I think the idea is to store most of the keys in AD. Then you just have to worry about restoring your DCs.

      • catloaf@lemm.ee

        When I set it up at one company, the recovery keys were printed out and kept separately.

    • jet@hackertalks.com

      The good news is: this is a shakeout test, and they’re going to update those playbooks.

      • ɔiƚoxɘup@infosec.pub

        I wish you were right. I really wish you were, but I don’t think you are. I’m not trying to be a contrarian, but I don’t think this is the case for a large number of organizations.

        For what it’s worth I truly hope that I’m 100% incorrect and everybody learns from this bullshit but that may not be the case.

      • Justin@lemmy.jlh.name

        Sysadmins are lucky it wasn’t malware this time. Next time could be a lot worse than just a kernel driver with a crash bug.

        3rd party companies really shouldn’t have access to ship out kernel drivers to millions of computers like this.

      • Evotech@lemmy.world

        The bad news is that the next incident will be something else they haven’t thought about.

    • Zron@lemmy.world

      I remember a few career changes ago, I was a back room kid working for an MSP.

      One day I get an email to build a computer for the company, cheap as hell. Basically just enough to boot Windows 7.

      I was to build it, put it online long enough to get all of the drivers installed, and then set it up in the server room, as physically far away from any network ports as possible. IIRC I was even given an IO shield that physically covered the network port for after it updated.

      It was our air-gapped encryption key backup.

      I feel like that shitty company was somehow prepared for this better than some of these companies today. In fact, I wonder if that computer is still running somewhere and just saved someone’s ass.

  • Boozilla@lemmy.world

    If you have EC2 instances running Windows on AWS, here is a trick that works in many (not all) cases. It has recovered a few instances for us:

    • Shut down the affected instance.
    • Detach the boot volume.
    • Attach the boot volume to a working instance in the same availability zone (us-east-1a or whatever).
    • Remove the file(s) recommended by CrowdStrike:
      • Navigate to the C:\Windows\System32\drivers\CrowdStrike directory.
      • Locate the file(s) matching “C-00000291*.sys” and delete them (unless they have already been fixed by CrowdStrike).
    • Detach the volume and attach it back to the original instance.
    • Boot the original instance.

    Alternatively, you can restore from a snapshot taken before the bad CrowdStrike update went out. But that is not always ideal.
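    The deletion step above can be sketched as a small script run on the working instance. This is an illustrative sketch, not CrowdStrike's official remediation tooling: the `volume_root` argument is an assumption standing in for wherever the rescued volume actually mounts (e.g. `D:\` on Windows).

    ```python
    from pathlib import Path

    def remove_bad_channel_files(volume_root: str) -> list[str]:
        """Delete CrowdStrike channel files matching C-00000291*.sys
        under the mounted volume root; return the names removed."""
        drivers = Path(volume_root) / "Windows" / "System32" / "drivers" / "CrowdStrike"
        removed = []
        if not drivers.is_dir():
            # Volume isn't mounted where expected; touch nothing.
            return removed
        for f in sorted(drivers.glob("C-00000291*.sys")):
            f.unlink()
            removed.append(f.name)
        return removed
    ```

    The narrow glob matters: only the `C-00000291*` channel files were implicated, so anything else under the CrowdStrike directory is left alone.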

    • Defaced@lemmy.world

      A word of caution: I’ve done this over a dozen times today, and I did have one server where the bootloader was wiped after I attached it to another EC2 instance. Always make a snapshot before doing the work, just in case.

  • db0@lemmy.dbzer0.com

    Pity the administrators who dutifully kept a list of those keys on a secure server share, only to find that the server is also now showing a screen of baleful blue.

    Lol, can you imagine? It hurts me even thinking of this situation. Enter that brave hero who kept the fileshare decryption key in a local KeePass :D

    • EnderMB@lemmy.world

      To preface, I want to see a tech workers union so, so bad.

      With that said, I genuinely don’t believe that most tech workers would unionize. So many of them are brainwashed into thinking that a union would dictate all salaries, would force hiring to be domestic-only, or would ensure jobs for life for incompetent people. Anyone that knows what a union does in 2024 knows that none of that has to be true. A tech union only needs to be a flat fee every month, guaranteed access to a lawyer with experience in your cases/employer, and the opportunity to strike when a company oversteps. It’s only beneficial.

      Even if you could get hundreds of thousands of signatories, the recent layoffs have shown that tech companies at the highest level would gladly fire a sizable number of employees if it meant stamping out a union. As someone that has conducted interviews in big tech, the sheer number of people who had applied for some roles at peak was higher than the number of active employees in the whole company. In theory, Google could terminate everyone and replace them with brand-new workers in a few months. It would be a fucking mess, but it (in theory) shows that if a Google or Apple decided that it wanted no part of unions, it could just dig into its fungible talent pool, fire a ton of people, promote the people that stayed, and fill roles with foreign or under-trained talent.

      • slacktoid@lemmy.ml

        I feel you with this. They do not see themselves as workers. Thank you for the preface.

        • EnderMB@lemmy.world

          Agreed. Sadly, to many there is still the view of tech being a meritocracy, and that they’re in FAANG because of their hard work over everything else, so fuck everyone else. Naturally, many change their tune once their employer enacts regressive policies, but it’s surprising how many people just have zero understanding of what a union does. They see cop shows or The Wire and assume it’ll be like the unions there…

    • ɔiƚoxɘup@infosec.pub

      I’m in. This world desperately needs an information workers union. Someone to cover those poor fuckers in the help desk and desktop support as well as the engineers and architects that keep all of this shit running.

      Those of us that aren’t underpaid are treated poorly. Today is what it looks like if everybody strikes at once.

  • gravitas_deficiency@sh.itjust.works

    Lmao this is incredible

    Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

    "We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

    “Most of our comms are down, most execs’ laptops are in infinite bsod boot loops, engineers can’t get access to credentials to servers.”

    N.B.: Reddit link is from the source

    I hope a lot of c-suites get fired for this. But I’m pretty sure they won’t be.

    • MagicShel@programming.dev

      C-suites fired? That’s the funniest thing I’ve heard yet today. They aren’t getting fired - they are their own ass-coverage. How can they be to blame when all these other companies were hit as well?

      I guess this is a good week for me to still be laid off.

  • pelletbucket@lemm.ee

    I got super lucky. I got paid for my car just before the dealership systems went down, and got my return flight two days before this shit started.

  • Max-P@lemmy.max-p.me

    This is why every machine I manage has a second boot option to download a small recovery image off the Internet and phone home with a shell. And a copy of it on a cheap USB stick.

    Worst case I can boot the Windows install in a VM with the real disk, do the maintenance remotely. I can reinstall the whole thing remotely. Just need the user to mash F12 during boot and select the recovery environment, possibly input WiFi credentials if not wired.

    I feel like this should be standard if you have a lot of remote machines in the field.

    • douglasg14b@lemmy.world

      Sounds like a nightmare for security, and a dream for attackers.

      More companies need to do this, solid job security.

      • Max-P@lemmy.max-p.me

        You can sign the whole thing, it’s not like you have to turn off secure boot and just drop the user to a root shell. There’s nothing to be gained from it, especially if you have physical access to the machine.

    • person420@lemmynsfw.com

      Just need the user to mash F12 during boot and select the recovery environment, possibly input WiFi credentials if not wired

      In theory that sounds great, now just do it 1000+ times while your phone is ringing off the hook and you’re working with some of the most tech illiterate people in your org.

    • corsicanguppy@lemmy.ca

      This is why every machine I manage has a second boot option to download a small recovery image off the Internet and phone home with a shell. And a copy of it on a cheap USB stick.

      You’re fucking killing it. Stay awesome.

      Also gist this up pls. Thanks.

      • Max-P@lemmy.max-p.me

        I wish it was more shareable, but it’s also not as magic as it sounds.

        Fundamentally it’s just a Linux install with some heavy customizations so that it does one thing only: boot Linux, with just enough prompts to get it online so that the VPN works, then download the root image into RAM and boot into it so I can SSH into the box, plus a bunch of Linux tools so I can reimage from there, or run QEMU with the physical disk passed through so I can VNC into an install even if it BSODs.

        It’s a Linux UKI (combined kernel+initramfs in a single EFI file the firmware can boot directly, without a bootloader), but you can just as easily get away with a hidden Debian install or whatever. It can even be a second Windows install if that’s your thing. The reason I went this particular route is that I don’t have to update it, since it downloads the image on the fly, much like macOS recovery. And it runs entirely in RAM afterwards, so I can safely do whatever is needed with the disk.

        • flop_leash_973@lemmy.world

          I dream of working somewhere where this kind of effort is appreciated enough to motivate me to put in the effort of actually doing it.

          • Max-P@lemmy.max-p.me

            I wish too, it’s only deployed for family and family businesses because I’m a couple thousand miles away from them. I cobbled this together for the explicit purpose of being able to reinstall Windows remotely. It works wonderfully though!

            My real job is DevOps and 100% Linux, and most of the cloud servers are disposable and can simply be rebuilt at the push of a button in some dashboard.

    • CaptPretentious@lemmy.world

      In the corporate world, Windows very much gets used. I know Lemmy likes a circle jerk around Linux, but in the corporate world you find various OSes for both desktops and servers. I had to support several different OSes and developed for only two. They all suck in different ways; there are no clear winners.

      • Dark Arc@social.packetloss.gg

        It’s not just a circle jerk in this case. Windows is dominant for desktop usage, but Linux has like 90% of the server market and is used for basically all new server projects.

        Paying for Windows licensing when it doesn’t benefit you is silly, and that’s been realized for years.

        • EnderMB@lemmy.world

          Web servers, sure, but even companies that manage infrastructure like Google and Amazon have a LOT of Windows servers kicking around for shit like AD, Outlook, Federation, Office/Teams, etc.

    • kent_eh@lemmy.ca

      My former employer had a bunch of windows servers providing remote desktops for us to access some proprietary (and often legacy) mission critical software.

      Part of the security policy was that any machines in the possession of end users were assumed to be untrustworthy, so they kept the applications locked down on the servers.

    • Hotzilla@sopuli.xyz

      The issue is not just on servers, but on endpoints also. Servers are something that you can relatively easily fix, because they are either virtualized or physically in the same location.

      But endpoints might be in a thousand physical locations, and IT needs to visit all of them (POS, info/commercial displays, IoT sensors, etc.).

      • Miaou@jlai.lu

        Parent comment applies even more so to such endpoints imo

    • stoly@lemmy.world

      I can’t imagine how much work it would be to migrate all your services onto Linux. The problem was people adopting windows in the first place.

      • douglasg14b@lemmy.world

          I love the Linux bros coming out of the woodwork on this one, when this could very well have been Linux on the receiving end of this shit show, given that it’s a kernel-level software issue, and not necessarily an OS one.

          It’s largely infeasible to use Linux for many, if not most, of these endpoints. But facts are hard.

        • jabjoe@feddit.uk

          There is no single Linux. It’s not a monoculture like that. There are many distros with different build options, different configurations, and different components.

          The culture is also different. Very few Linux admins would be happy putting in a closed-blob kernel driver for anything. In the Windows world that’s the norm, but not in Linux.

          What just happened to the Windows world would be harder in the Linux world. At worst, one distro rolls out a killer update. Some distros would just reboot to the previous kernel.

        • save_the_humans@leminal.space

          Hey man, let us have this one. Any immutable/atomic distribution could have either prevented this or easily rolled back the update. Not to mention that a Linux offering by something like Red Hat, for example, wouldn’t recommend installing closed-source third-party kernel modules, for exactly this reason. Not sure about the feasibility of these endpoints, but the way things are generally done on Linux, and the philosophy of it, could very well have avoided this catastrophe.

            • Captain Aggravated@sh.itjust.works

              An immutable distribution is one that treats the system files as read-only. Applications are handled separately, and updates to the system are done in an image-based way: rather than changing a few updated files, the OS basically gets replaced with an updated version. It prevents users or malicious outsiders from just changing system files. Fedora Silverblue and SteamOS as found on Valve’s Steam Deck are examples of immutable distros.

              Now, with something like CrowdStrike that operates in kernel space… I’m too far outside my wheelhouse to grasp how that would work on an immutable system, or how it would be implemented.

        • flop_leash_973@lemmy.world

          They are just butt hurt that this whole thing really shines a light on how inaccurate the line of “the world runs on Linux” truly is.

          The world runs on a lot of different things for different reasons, and that does not fit nicely into their Richard Stallman-like world view.

  • scottywh@lemmy.world

    If it only impacts a percentage of your machines then there was a problem in the deployment strategy or the solution wasn’t worthwhile to begin with.

  • TheObviousSolution@lemm.ee

    It might be CrowdStrike’s fault, but maybe this will motivate companies to adopt better workflows and actual preproduction deployments to test these sorts of updates before they go live on the rest of their systems.

    • EnderMB@lemmy.world

      I know people at big tech companies who work on client engineering, where this downtime has huge implications. Naturally, they’ve called a sev1, but instead of dedicating resources to fixing these issues, the teams are basically bullied into working insane hours to manually patch while clients scream at them. One dude worked 36 hours straight because his manager outright told him “you can sleep when this is fixed”, as if he’s responsible for CrowdStrike…

      Companies won’t learn. It’s always a calculated risk, and much of the fallout of that risk lies with the workers.

      • MrAlternateTape@lemm.ee

        That comment about sleep…that’s about where I tell them to go fuck themselves. I’ll find a new job, I’m not going to put up with bullshit like that.

      • uis@lemm.ee

        Sounds so illegal that it would make the labour authority happy.

        • EnderMB@lemmy.world

          Is it illegal? I’m not American so I have no idea if there are laws in your country against on-call maximum hours.

          • uis@lemm.ee
            1. It’s not about on-call; they are literally in the office.
            2. See 1.
            3. Not sure about America, but it is very illegal in Russia.
  • Disaster@sh.itjust.works

    80% of our machines were hit. We were working through 9pm on Friday night, running around putting in BitLocker keys and running the fix. Our organization made it worse by hiding the BitLocker keys from local administrators.

    Also gotta say… the way the boot sequence works, combined with the nonsense with RAID/NVMe drivers on some machines, really made it painful.

  • 7rokhym@lemmy.ca

    Just a thought from experience: be wary of any critical product from, and of taking a job at, a company run by an accountant. CrowdStrike’s CEO… accountant!

    Accounting firms are an obvious exception.