• X@piefed.world · ↑34 · 3 days ago

      They could say “the connection is probably lost,” but it’s more fun to do naive time-averaging to give you hope that if you wait around for 1,163 hours, it will finally finish.

    • SorryQuick@lemmy.ca · ↑7 ↓1 · 2 days ago

      But really it’s just how it will always be. How do you estimate transfer speed? Use the disk speed / bandwidth limit? Can’t do that, since it’s shared with other users/processes. So at the beginning there is literally zero info to go off of. Some amount of per-file overhead also has to be accounted for, since copying one 100 GB file is not the same as copying millions of tiny files adding up to 100 GB.

      Then you start building an average from the transfer so far, using a weighted average algorithm, since recent speeds should count for more, but not too much more. Just because you’re ultra slow now doesn’t mean it will always be slow. Maybe your brother is downloading porn and will hog the bandwidth all day, or maybe he’ll be done in a few seconds.

      So to put it simply, predicting transfer time is pretty much the same as predicting the future.
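
The weighted-average idea described above is commonly implemented as an exponential moving average. A minimal sketch (the smoothing factor and the sample numbers are made-up illustrations, not anyone's actual implementation):

```python
# Sketch of an ETA estimator using an exponential moving average (EMA):
# recent speed samples count for more, but older history damps out spikes.

def make_eta_estimator(alpha=0.3):
    """alpha near 1 -> trust recent samples; alpha near 0 -> trust history."""
    state = {"speed": None}

    def update(bytes_since_last, seconds_since_last, bytes_remaining):
        sample = bytes_since_last / seconds_since_last  # instantaneous speed
        if state["speed"] is None:
            state["speed"] = sample  # no history yet: literally zero info to go off of
        else:
            state["speed"] = alpha * sample + (1 - alpha) * state["speed"]
        return bytes_remaining / state["speed"]  # estimated seconds remaining

    return update

eta = make_eta_estimator()
eta(10_000_000, 1.0, 90_000_000)  # steady 10 MB/s -> ~9 s left
eta(1_000_000, 1.0, 89_000_000)   # sudden slowdown is only partly believed
```

A slowdown (your brother hogging the bandwidth) drags the estimate up gradually instead of instantly exploding it, which is exactly the "valued, but not too valued" trade-off.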

      • tetris11@feddit.uk · ↑8 · 2 days ago

        I like rsync’s progress: speed and files left

        I detest the needless line chart Windows 10 had

      • Eheran@lemmy.world · ↑2 ↓2 · 2 days ago

        Transfer speed on disks was and is almost exclusively a matter of file size, so it should be easy to produce a much better estimate than the dumb “total bytes / current speed” figure, which constantly fluctuates because file sizes are not all identical.

        • SorryQuick@lemmy.ca · ↑3 · 16 hours ago

          That’s so wrong. It always fluctuates because the speed itself always fluctuates. It only looks easy when you know the speed won’t fluctuate, because you’re not using the computer for anything else at the same time.

          • Eheran@lemmy.world · ↑1 · 15 hours ago

            Since file size is not taken into account, it fluctuates wildly even if you don’t do anything other than transferring those files.

              • Eheran@lemmy.world · ↑1 · 13 hours ago

                Since when is that so? Or where? W11 essentially “pauses” when lots of small files come after bigger ones that moved at >1000 MiB/s, since the small ones only reach perhaps 100 MiB/s.

                • SorryQuick@lemmy.ca · ↑1 · 12 hours ago

                  Since forever. I can’t say for Windows, since I haven’t used it in forever, but almost all sensible algorithms take it into consideration. There are also many factors, such as which filesystem (ext4…) you use. You can’t account for them all. Usually you simply add a small “overhead” constant per file, so a pile of small files pays that cost many times while a big one pays it only once.
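
The per-file overhead constant described above can be sketched like this. The 5 ms overhead and 100 MB/s throughput are invented illustrative numbers; real values depend on the filesystem and hardware:

```python
# Naive estimate vs. one that charges a fixed per-file overhead, so a million
# tiny files cost far more than one big file of the same total size.

PER_FILE_OVERHEAD_S = 0.005   # assumed cost to open/create/close each file
THROUGHPUT_BPS = 100_000_000  # assumed sustained raw throughput (100 MB/s)

def naive_estimate(total_bytes):
    """What the dumb 'total bytes / current speed' approach predicts."""
    return total_bytes / THROUGHPUT_BPS

def overhead_estimate(file_sizes):
    """Raw transfer time plus the per-file constant, paid once per file."""
    raw = sum(file_sizes) / THROUGHPUT_BPS
    return raw + PER_FILE_OVERHEAD_S * len(file_sizes)

one_big = [100_000_000_000]          # one 100 GB file
many_small = [100_000] * 1_000_000   # a million 100 KB files, same total

naive_estimate(100_000_000_000)  # 1000 s either way: file count is invisible
overhead_estimate(one_big)       # ~1000 s: overhead paid once
overhead_estimate(many_small)    # ~6000 s: overhead paid a million times
```

The gap between the last two numbers is exactly why copying millions of tiny files takes so much longer than one big file of the same size.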

        • Natanael@infosec.pub · ↑1 · 15 hours ago

          On disk you have read/write misses and seeks, and due to constant RPM plus platter geometry, the read/write speed literally varies with the physical distance of the written data from the center of the platter (more bits pass under the head per arcsecond at the outer edge).

    • Unbecredible@sh.itjust.works · ↑9 · 2 days ago

      This is the only thing I can’t forgive. I accept that it’s hard to say how long something is going to take. But when that bar reaches 100% something should happen pretty fucking snappish.

      • mPony@lemmy.world · ↑3 · 19 hours ago

        Windows gets around this by sitting at 99% instead. It’s the “I’m not touching you” of file management.

          • Natanael@infosec.pub · ↑1 · 15 hours ago

            Shouldn’t be 5 min, but that’s what you get if the drive doesn’t have both enough RAM and capacitors to hold a decent write cache to extend its lifetime. Then the OS has to either wait for the drive to report it’s done, or complete the sync from the file system driver’s cache. Or else you simply deal with it being both slower and dying faster…

            • village604@adultswim.fan · ↑1 · 12 hours ago

              I was being hyperbolic, but the OS shouldn’t report the transfer as complete if the drive hasn’t reported that it’s done.

              The fact that the system can tell you the transfer isn’t complete when you try to safely remove the drive means the information exists for the file transfer dialogue to use.

              A simple “finishing things up” message in the transfer dialogue is all that’s needed. Especially since unplugging a thumb drive without safely removing it (which tons of people do) while a transfer is still ongoing can corrupt the data on the drive.
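
The “finishing things up” step being asked for corresponds to flushing and syncing before declaring victory. A minimal sketch in Python (note that `os.fsync` asks the kernel to push data to the device; a drive with a volatile write cache can still hold it, which is the capacitor point above):

```python
import os

def copy_with_sync(src_path, dst_path, status=print):
    """Copy a file and only report completion after asking the OS to push
    the data to the device, not merely into a write cache."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(1 << 20):  # copy in 1 MiB chunks
            dst.write(chunk)
        status("finishing things up...")   # data may still sit in caches here
        dst.flush()                        # drain Python's userspace buffer
        os.fsync(dst.fileno())             # block until the kernel reports the
                                           # data has reached the device
    status("transfer complete")
```

Skipping the `fsync` is how a dialogue ends up saying “complete” while the thumb drive is still blinking.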

  • ChickenLadyLovesLife@lemmy.world · ↑21 · 2 days ago

    I remember when Netscape (the browser) back in the late 90s or thereabouts came up with the “innovation” of having a progress bar that would go left to right, and when it got all the way to the right it would reverse and go in the other direction. The whole thing would just go back and forth until the action was done – not a “progress” bar at all, just a “well, maybe something is happening, it’ll be done when it’s done” animation. Later replaced by the ingenious shit going around in a circle that is ubiquitous today, that creates no illusions of it being a progress indicator at all.

    • uniquethrowagay@feddit.org · ↑2 · 23 hours ago

      What’s the point of a progress bar if it can’t display progress in a meaningful way anyway? It being at 90% often says nothing about how long it will still take. Might as well use a spinny thing to let you know it’s not frozen.

      • ChickenLadyLovesLife@lemmy.world · ↑3 · 20 hours ago

        I was a programmer and I wrote lots of applications that showed the progress of long-running tasks with a progress bar that was reasonably accurate. It just took a little bit of extra work is all, plus knowledge of how to do it. Every time I put in a spinny thing instead (and incidentally it’s still possible to have the main task frozen while a little spinny thing on a separate thread happily spins away) it was because the managers and designers were too cheap and/or lazy to do it properly. Admittedly, adding a reasonably accurate time-remaining estimate is more complicated, but that’s also the part that is less important.
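
The “little bit of extra work” usually means enumerating the work up front and reporting in completed units, so the bar can only show what has actually happened. A generic sketch of that pattern (not the commenter’s actual code):

```python
def run_with_progress(tasks, report):
    """Drive a progress bar from real completed work units.
    `tasks` is a list of zero-argument callables; `report` receives a
    fraction in [0.0, 1.0]."""
    done = 0
    report(0.0)
    for task in tasks:
        task()                       # the actual long-running step
        done += 1
        report(done / len(tasks))    # the bar moves only when work finished

fractions = []
run_with_progress([lambda: None] * 4, fractions.append)
# fractions is now [0.0, 0.25, 0.5, 0.75, 1.0]
```

Unlike a spinner on a separate thread, this cannot keep animating while the real work is frozen: if a task hangs, the bar stops, which is honest.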

  • rekabis@lemmy.ca · ↑64 ↓1 · 3 days ago

    Back in the day (mid/late 90s), any download on Internet Explorer had a “file transfer” pop-up with an animation involving a planet (the Internet) sending flying sheets of paper (the download) to a manila folder (the computer’s file system).

    I legit had one client ask me why they couldn’t make the download go faster by moving the planet closer to the folder, or vice versa.

    I recall just sitting there for a number of seconds while my poor brain tried to grasp just how badly out-of-whack their interpretation of the universe was.

    Spoiler alert: they were a very poor client, and refused to relinquish an entire raft of very poorly thought out or even entirely wrong concepts of computing and the Internet. They were also credulous AF, and while I could have made an arseload of money correcting what they did on a weekly or even daily basis, I just didn’t want that kind of headache.

    • palordrolap@fedia.io · ↑46 · 2 days ago

      “On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.” – Charles Babbage, discovering what Technical Support would be dealing with a century or more later.

    • WorldsDumbestMan@lemmy.today · ↑4 · 2 days ago

      You should have chosen the money and retired early. You make a shell company doing whatever, one that just drains your built-up money. You pay yourself benefits, you kick back and relax, doing whatever bullshit job you invented as the CEO of fuckaround.inc

  • gabor_legrady@lemmy.org · ↑5 · 2 days ago

    As a developer I hate these values and estimates, but I also agree that there is no good way to calculate the future. The honest label would be “working” - but everyone asks for percentages and estimates.

    • Johanno@feddit.org · ↑5 · 2 days ago

      You could write “estimating” for about a minute or so and then give a reasonable estimation.

      I once got a download estimate of infinity. I suspended the PC and turned it on again over 20 hours later. Since it had measured 20 hours with no download progress, it assumed it would take infinity to download it all.