Developers: I will never ever do that, no one should ever do that, and you should be ashamed for guiding people to. I get that you want to make things easy for end users, but at least exercise some bare minimum common sense.

The worst part is that bun is just a single binary, so the install script is bloody pointless.

Bonus mildly infuriating is the mere existence of the .sh TLD.

Edit b/c I’m not going to answer the same goddamned questions 100 times from people who blindly copy/paste answers from StackOverflow into their code/terminal:

WhY iS ThaT woRSe thAn jUst DoWnlOADing a BinAary???

  1. Downloading the compiled binary from the release page (if you don’t want to build it yourself) has been a way to acquire software since shortly after the dawn of time. You already know what you’re getting yourself into.
  2. There are SHA256 checksums of each binary file available in each release on Github. You can confirm the binary was not tampered with by comparing a locally computed checksum to the value in the release’s checksums file.
  3. Binaries can also be signed (not that signing keys have never leaked, but it’s still one step in the chain of trust)
  4. The install script they’re telling you to pipe is not hosted on Github. A misconfigured / compromised server can allow a bad actor to tamper with the install script that gets piped directly into your shell. The domain could also lapse and be re-registered by a bad actor to point to a malicious script. Really, there’s lots of things that can go wrong with that.
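For point 2, the verification is a one-liner. The release filenames below are hypothetical, but the sha256sum mechanics are exactly this:

```shell
# Hypothetical release/checksum filenames; the mechanism is what matters.
# curl -LO https://github.com/example/tool/releases/latest/download/tool-linux-x64.zip
# curl -LO https://github.com/example/tool/releases/latest/download/SHASUMS256.txt
# sha256sum --check --ignore-missing SHASUMS256.txt

# Same mechanism demonstrated locally, without the network:
echo 'release contents' > release-artifact
sha256sum release-artifact > SHASUMS256.txt
sha256sum --check SHASUMS256.txt    # prints: release-artifact: OK
```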

The point is that it is bad practice to just pipe a script to be directly executed in your shell. Developers should not normalize that bad practice.

  • aesthelete@lemmy.world
    link
    fedilink
    English
    arrow-up
    13
    ·
    3 days ago

    That’s becoming alarmingly common, and I’d like to see it go away entirely.

    Random question: do you happen to be downloading all of your Kindle books? 😜

  • Godort@lemm.ee
    link
    fedilink
    English
    arrow-up
    62
    arrow-down
    1
    ·
    5 days ago

    It’s bad practice to do it, but it makes it especially easy for end users who already trust both the source and the script.

    On the flip side, you can also just download the script from the site without piping it directly to bash if you want to review what it’s going to do before you run it.
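    That flow, sketched below — the installer URL is commented out as a placeholder, and a local stand-in script shows the same review-then-run steps:

    ```shell
    # Fetch to disk first, read it, then run it only if it looks sane.
    # curl -fsSL https://example.sh/install -o install.sh
    # less install.sh        # review what it will actually do
    # sh install.sh          # run only after inspection

    # Stand-in local "installer" to demonstrate the same flow:
    printf 'echo installed\n' > install.sh
    cat install.sh           # the review step
    sh install.sh            # prints: installed
    ```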

    • Deello@lemm.ee
      link
      fedilink
      English
      arrow-up
      20
      arrow-down
      4
      ·
      4 days ago

      It’s bad practice to do it, but it makes it especially easy for end users who already trust both the source and the script.

      You’re not wrong, but this is what led to the xz “hack” not too long ago. When it comes to data, trust is a fickle mistress.

  • tgt@programming.dev
    link
    fedilink
    English
    arrow-up
    11
    arrow-down
    1
    ·
    4 days ago

    What’s that? A connection problem? Ah, it’s already running the part that it did get… Oops, right on the boundary of rm -rf /thing/that/got/cut/off. I’m angry now. I expected the script maintainer to keep in mind that their script could be cut off at literally any point… (Now what is that set -e the maintainer keeps yapping about?)

    Can you really expect maintainers to keep network errors in mind when writing a Bash script?? I’ll just download your script first like I would your binary. Opening yourself up to more issues like this is just plain dumb.
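    There is a partial mitigation some installers use (which only reinforces the point — you can’t assume every maintainer does this): put all logic in a function and invoke it on the very last line, so a download truncated mid-transfer defines part of a function and then executes nothing. A minimal sketch, with a hypothetical cleanup path:

    ```shell
    #!/bin/sh
    set -eu

    main() {
        # hypothetical install steps; anything dangerous lives in here
        rm -rf "${HOME}/.exampletool/cache"
        echo "install complete"
    }

    # Only reached if the whole script arrived; a transfer cut anywhere
    # above leaves main half-defined (a syntax error), and nothing runs.
    main "$@"
    ```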

      • Ziglin (they/them)@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        3 days ago

        It runs the curl command which tries to fetch the entire script. Then no matter what it got (the intended script, half the script, something else because somebody tampered with it) it just runs it without any extra checks.

  • gandalf_der_12te@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    5
    ·
    3 days ago

    tbf, every time you’re installing basically anything at all, you’re trusting whoever hosts the stuff not to tamper with it. you’re already putting a lot of faith out there, and i’m sure a lot of the software actually contains crypto-miners or something else.

  • Scrubbles@poptalk.scrubbles.tech
    link
    fedilink
    English
    arrow-up
    18
    arrow-down
    1
    ·
    4 days ago

    I’ve seen a lot of projects doing this lately. Just run this script, I made it so easy!

    Please, devs, stop this. There are defined ways to distribute your apps. If it’s local provide a binary, or a flatpak or exe. For docker, provide a docker image with well documented environments, ports, and volumes. I do not want arbitrary scripts that set all this up for me, I want the defined ways to do this.

  • treadful@lemmy.zip
    link
    fedilink
    English
    arrow-up
    12
    ·
    4 days ago

    I’m with you, OP. I’ll never blindly do that.

    Also, to add to the reasons that’s bad:

    • you can put restrictions on a single executable. setuid, SELinux, apparmor, etc.
    • a simple compromise of a Web app altering a hosted text file can fuck you
    • it sets the tone for users making them think executing arbitrary shell commands is safe

    I recoil every time I see this. Most of the time I’ll inspect the shell script but often if they’re doing this, the scripts are convoluted as fuck to support a ton of different *nix systems. So it ends up burning a ton of time when I could’ve just downloaded and verified the executable and have been done with it already.

  • Possibly linux@lemmy.zip
    link
    fedilink
    English
    arrow-up
    12
    arrow-down
    2
    ·
    edit-2
    4 days ago

    You really should use some sort of package manager that has resistance against supply chain attacks. (Think Linux distros)

    You probably aren’t going to get yourself in trouble by downloading some binary from Github, but keep in mind Github has been used to distribute malware in the past.

    • Admiral Patrick@dubvee.orgOP
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      5 days ago

      I mean, how about:

      1. Download the release for your arch from the releases page.
      2. Extract to ~/.local/bin
      3. Run
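      Those three steps as commands — the release filename is hypothetical, and a local stand-in binary shows the run step end to end:

      ```shell
      mkdir -p "$HOME/.local/bin"
      # curl -LO https://github.com/example/tool/releases/latest/download/tool-linux-x64.tar.gz
      # tar -xzf tool-linux-x64.tar.gz -C "$HOME/.local/bin"
      # tool --version    # assuming ~/.local/bin is already on your PATH

      # Stand-in for the extracted binary, to demonstrate the run step:
      printf '#!/bin/sh\necho tool 1.0\n' > "$HOME/.local/bin/tool"
      chmod +x "$HOME/.local/bin/tool"
      "$HOME/.local/bin/tool"    # prints: tool 1.0
      ```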
        • Admiral Patrick@dubvee.orgOP
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          4
          ·
          edit-2
          5 days ago
          1. That’s been the way to acquire software since shortly after the dawn of time. You already know what you’re getting yourself into.
          2. There are SHA256 checksums of each binary file available in each release on Github. You can confirm the binary was not tampered with by comparing a locally computed checksum to the value in the release’s checksums file.
          3. Binaries can also be signed (not that signing keys have never leaked, but it’s still one step in the chain of trust)
          4. The install script is not hosted on Github. A misconfigured / compromised server can allow a bad actor to tamper with the install script that gets piped directly into your shell. The domain could also lapse and be re-registered by a bad actor to point to a malicious script. Really, there’s lots of things that can go wrong with that.

          The point is that it is bad practice to just pipe a script to be directly executed in your shell. Developers should not normalize that bad practice.

          • uranibaba@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            arrow-down
            4
            ·
            5 days ago

            If you trust them enough to use their binary, why don’t you trust them enough to run their install scripts as well?

            • moonpiedumplings@programming.dev
              link
              fedilink
              English
              arrow-up
              4
              ·
              edit-2
              4 days ago

              Trust and security aren’t just about protecting from malice, but also mistakes.

              For example, AUR packages are basically install scripts, and there have been a few that have done crazy things like delete a user’s /bin — not out of any malice, but rather simple human error.

              Binaries are going to be much, much less prone to these mistakes because they are in languages the creators have more experience with, and are comfortable in. Just because I trust someone to write code that runs on my computer, doesn’t mean I trust them to write an install script, especially given how many footguns bash has.

              Steam once deleted someone’s home directory.

  • TrickDacy@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    4 days ago

    I’m curious, op, do you think it’s bad to install tools this way in an automated fashion, such as when developing a composed docker image?

    • Possibly linux@lemmy.zip
      link
      fedilink
      English
      arrow-up
      4
      ·
      edit-2
      4 days ago

      Very much yes

      You want to make your Dockerfile be as reproducible as possible. I would pull a specific commit from git and build from source. You can chain together containers in a single Dockerfile so that one container builds the software and the other deploys it.
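      A sketch of the pinning half of that — the repo URL and commit are hypothetical, and the local git commands below just demonstrate that a hash pins one exact tree state:

      ```shell
      # In a real build you would clone and check out a fixed commit:
      # git clone https://github.com/example/tool.git
      # git -C tool checkout 3f2a1bc    # exact commit, not a moving branch/tag
      # docker build -t tool:pinned tool/

      # Local demonstration that a hash pins an exact, reproducible state:
      git init -q pin-demo && cd pin-demo
      git -c user.name=demo -c user.email=demo@example.com \
          commit -q --allow-empty -m "pinned state"
      pin="$(git rev-parse HEAD)"
      git checkout -q "$pin"            # detached HEAD at that exact commit
      echo "building from commit $pin"
      ```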

      • TrickDacy@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        4 days ago

        I mean, you’re not op. But your method requires all updates to be manual, while some of us especially want updates to be as automated as possible.

        • moonpiedumplings@programming.dev
          link
          fedilink
          English
          arrow-up
          1
          ·
          4 days ago

          You can use things like dependabot or renovate to update versions in a controlled manner, rather than automatically using the latest of everything.

          On the other side, when it comes to docker containers, you can use github actions or some other CI/CD system to automate the container build.

        • Possibly linux@lemmy.zip
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          4 days ago

          I don’t think it is that hard to automate a container build. Ideally you should be using the official OCI image or some sort of package repo that has been properly secured.

    • Moonrise2473@feddit.it
      link
      fedilink
      English
      arrow-up
      2
      ·
      4 days ago

      Protect from accidental data damage: for example the dev might have accidentally pushed an untested change where there’s a space in the path

      rm -rf / ~/.thatappconfig/locatedinhome/nothin.config

      a single typo that will wipe the whole drive instead of just the app config (yes, it happened; I remember clearly, more than a decade ago, there was a commit on GitHub with lots of snarky comments on a script with such a typo)

      Also: malicious developers that will befriend the honest dev in order to sneak an exploit.

      Those scripts need to be universal, so there are hundreds of lines checking the Linux distro and what tools are installed, and asking the user to install them with a package manager. They require hours and hours of testing with multiple distros, and they aren’t easy to understand either… isn’t it better to use that time to simply write clear documentation on how to install it?

      Like: “this app requires to have x, y and z preinstalled. [Instructions to install said tools on various distros], then copy it in said subdirectory and create config in ~/.ofcourseinhome/”

      It’s also easier for the user to uninstall it, as they can follow the steps in reverse

      • TrickDacy@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        4 days ago

        Yes I understand all of that, but also in the context of my docker containers I wouldn’t be losing any data that isn’t reproducible

  • Azzu@lemm.ee
    link
    fedilink
    English
    arrow-up
    4
    arrow-down
    3
    ·
    edit-2
    4 days ago

    You are being irrational about this.

    You’re absolutely correct that it is bad practice, however, 98% of people already follow bad practice out of convenience. All the points you mentioned against “DoWnlOADing a BinAary” are true, but it’s simply what people do and already don’t care about.

    You can offer only your way of installing and people will complain about the inconvenience of it. Especially if there’s another similar project that does offer the more convenient way.

    The only thing you can rationally recommend is to not make the install script the “recommended” way, and recommend they download the binaries from the source code page and verify checksums. But most people won’t care and use the install script anyway.

    If the install script were “bloody pointless”, it would not exist. Most people don’t know their architecture, the script selects it for them. Most people don’t know what “adding to path” means, this script does it for them. Most people don’t know how to install shell completions, this script does it for them.
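    The architecture-selection part being described is typically a few lines like this — a simplified sketch, not any particular project’s actual script, and the target names are made up:

    ```shell
    # Map the platform/architecture to a hypothetical release target name.
    case "$(uname -sm)" in
        "Linux x86_64")   target=linux-x64 ;;
        "Linux aarch64")  target=linux-aarch64 ;;
        "Darwin x86_64")  target=darwin-x64 ;;
        "Darwin arm64")   target=darwin-aarch64 ;;
        *) echo "error: unsupported platform: $(uname -sm)" >&2; exit 1 ;;
    esac
    echo "selected target: $target"
    ```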

    You massively overestimate the average competence of software developers and how much they care. Now, a project can try to educate them and lose potential users, or a project can follow user behavior. It’s not entirely wrong to follow user behavior and offer the better alternatives to competent people, which this project does. It explains that it’s possible and how to download the release from the Github page.

  • IceFoxX@lemm.ee
    link
    fedilink
    English
    arrow-up
    1
    arrow-down
    1
    ·
    4 days ago

    4. Since MS bought GitHub, GitHub is no longer trustworthy. Data breaches etc. have increased since MS took ownership, as has distribution of malware via GitHub. What is point 4 supposed to say?

  • Eager Eagle@lemmy.world
    link
    fedilink
    English
    arrow-up
    3
    arrow-down
    4
    ·
    4 days ago

    I’ll die on the hill that curl | bash is fine if you’re installing software that self updates - very common for package managers like other comments already illustrated.

    If you don’t trust the authors, don’t install it (duh).

    • moonpiedumplings@programming.dev
      link
      fedilink
      English
      arrow-up
      2
      ·
      4 days ago

      If you don’t trust the authors, don’t install it (duh).

      Just because I trust the authors to write good rust/javascript/etc code, doesn’t mean I trust them to write good bash, especially given how many footguns bash has.

      Steam once deleted a user’s home directory.

      But: I do agree with you. I think curl | bash is reasonable for package managers like nix or brew. And then once those are installed, it’s better to get software like the Bun OP mentions from them, rather than from curl | bash.

    • Possibly linux@lemmy.zip
      link
      fedilink
      English
      arrow-up
      3
      ·
      edit-2
      4 days ago

      There was a malicious website on Google pretending to be the brew package manager. It didn’t leave any trace: when you ran the command, it ran an info stealer and then installed brew.

      If this was rare I could understand but it is fairly common.