

I used dashes for decades. I’ve removed all of them since ChatGPT became popular. It doesn’t help that I think ChatGPT overuses them.
Universities were already locking down their PCs in the 90s, at least those with competent IT departments - BIOS password, locked boot menu, Windows 2000 with restricted user accounts.
You need to make up your mind on what time period you’re trying to use. The 90s? 2000? Earlier, you were talking about Windows 95.
But notice, you’re talking about universities: we’re talking about children under 18. Those computers were not as locked down. That has changed from the 90s. The security of the 90s (especially before TCP/IP was standard) was different than 2000-2010 security, which was different than 2010s+ security. Yet, you’re trying to claim it hasn’t changed? That’s so inaccurate it’s laughable.
Even in the Linux world, Pre-IP vs Slow Internet vs Fast Internet vs Post-sudo security models have changed a lot. I’d be skeptical of anyone trying to argue that the security and lockdown of these computers has not changed in 30 years. Is that your argument? If not, why did you start with “Windows 95?”
If you don’t do that, your every PC will have 15 copies of Counter Strike and a bunch of viruses in one week.
And? People still get viruses. People still install games if they can. The tools for stopping that on PCs are far better than they were 30, 20, or even 10 years ago. Chromebooks go even further than those tools, locking the machines down to the point of being unusable.
Chromebooks (and laptops in general) are way cheaper now than PCs were back then, so again, you need to buy your own and install a proper OS, the situation did not really change.
Before: if you wanted to do work at home, you or your family had to buy a computer. Kids (might) have needed to convince their parents to let them experiment, but that was far easier than convincing a school administration.
Today? What families have a “family computer?”
Kids get a phone, they might get a tablet, and if they get a computer, it’s the school one. The need for a family computer has basically gone. All of those computers are locked down. Google happens to make locked-down OSes for all the replacements: Chromebooks, phones, and tablets. Yet, according to you, the requirements haven’t changed. And yet, from a child’s perspective, they’ll probably never get the opportunity to play with a non-locked-down computer.
You seemed to miss their argument. Those were the standard in 1995, before OSes had really integrated the internet. Ditching the floppy disk, adding WiFi, and having drivers auto-loaded/discovered automatically (or not needed at all) are independent developments. Even when Chromebooks started becoming standard, using drivers from physical disks was rare, Windows could automatically find and update drivers (how well, eh), and WiFi existed and was faster than most internet connections. You could install Linux and it would mostly work, provided your hardware wasn’t too new.
The actual argument is that Chromebooks are contributing to tech illiteracy because of how they’re deployed:
Organizations buy these devices because they’re cheap, lock them down, and those locked-down devices become the only computer for most students. While it’s technically possible to install Linux, these users can’t: the devices aren’t theirs, and the organizations bought them precisely because they were cheap and easy to lock down for kids. If these are their main devices, and they aren’t allowed (either technically or by policy) to install another OS, where will they learn tech literacy? Not on their phone, not on their tablet, and not on their school-issued laptop.
They’ve been locked into a room and people wonder why they don’t know how to interact outside. You’re arguing that the room today is better than the one in 1995. That’s true, but it doesn’t change the argument.
A lot of these areas have much more stringent gun laws. Yes, they can own the guns, but they can’t carry them. Carrying/displaying will probably get them arrested and charged with a weapons felony.
I’m usually told we’ve moved beyond the need for people to do that. Then we should just leave the use of force to the police: the organizations that consistently seem to try to prove we can’t trust them. I agree, the police should be an organization Americans can trust. How can we make them that way?
Does anyone see the irony?
The U.S. never fixed its trust issues with police, so this seems like the logical result.
Probably because the Democrats are so anti-gun/weapon. The target demographic probably leans anti-weapon, even if they’re not necessarily Democrats. The combination keeps them more vulnerable. It’s even worse when carrying weapons in these areas is outright banned: no training, no permits, only police. The safety of the group is generally prioritized over the safety of the individual. Which, like here, can be a problem.
From how they’re acting, it seems like only a matter of time. They seem to check all of the boxes for lethal or deadly force in nearly every state, even the strict ones. An unidentified, suspiciously dressed group aggressively surrounding you and preventing your retreat? Lethal/deadly force can often be used to defend another person, so someone else could shoot these idiots in plain clothes with no identification.
Even if the “police” identify themselves late, they seem to be setting themselves up for a weak defense.
To me, your attempt at defending it, or at calling it a retcon, is an awkward characterization. Even in your last reply, you’re now calling it an approximation. Dividing by 1024 is an approximation? Did computers have trouble dividing by 1000? Did it lead to some benefit in the 640KB/384KB split of the conventional memory model? Does it lead to a benefit today?
Somehow, every other computer measurement avoids this binary prefix problem. Some, like you, seem to try to defend it as the more practical choice compared to the “standard” choice every other unit uses (e.g., 1.536 Mbps T1 or “54” Mbps 802.11g).
The confusion this continues to cause wastes quite a bit of time and money today. Vendors continue to show both units on the same spec sheets (open up a page to buy a computer or server). News outlets still report the differences as bloat. Customers still complain to customer support, which goes up to management, and back down to project management and development. It’d be one thing if this didn’t waste time or cause confusion, but we’re still doing it today. It’s long past time to move on.
The standard meaning of “kilo” was 1000 more than a century before computer science existed. Things that need binary units now have prefixes they can use, but they’re probably not needed, even in computer science. Trying to call kilo/kibi a retcon just seems like an attempt to defend the 1024 usage today, despite the fact that nearly nothing else (even in computers) uses the binary prefixes.
209GB? That probably doesn’t include all of the RAM, like the RAM in the SSD, GPU, NIC, and similar. Ironically, I’d probably approximate it to 200GB if that were the standard, but it isn’t. It wouldn’t be much of a downgrade to go from 192GiB to 200GB. Are 192 and 209 that different? It’s not much different from remembering the numbers for a 1.44MB floppy, 1.544Mbps T1 lines, or the ~3.14159 approximation of pi. Numbers generally end up getting weird; keeping everything in binary prefixes doesn’t really change that.
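For what it’s worth, the gap between the two readings of the same capacity is easy to check. A quick sketch (the 192GiB figure is the one above; the idea that the rest of the ~209GB comes from RAM elsewhere in the system is only a guess):

    // Decimal vs binary gigabytes for the figures discussed above.
    const GiB = 2 ** 30; // 1,073,741,824 bytes
    const GB = 1e9;      // 1,000,000,000 bytes

    const installedGiB = 192;
    const bytes = installedGiB * GiB;

    console.log(`${installedGiB}GiB = ${(bytes / GB).toFixed(1)}GB`); // "192GiB = 206.2GB"
    // Anything reported beyond that (e.g. up to ~209GB) would have to come from
    // RAM elsewhere in the system, which is only a guess here.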
The definition of kilo as 1000 was standard before computer science existed. If computing used it in a non-standard way, that may have been common, or a decent approximation at the time, but it wasn’t standard. Does that justify the situation today, where many vendors show both definitions on the same page, like when buying a computer or a server? Does it justify the development time and confusion from people still not understanding the difference? Was it worth the PR response from Samsung to, yet again, point out the difference?
It’d be one thing if this confusion had stopped years ago and everyone understood the difference today, but it hasn’t, and we’re probably not going to get there. We have binary prefixes; it’s long past time to use them when appropriate-- but even the appropriate uses are far fewer than they appear: it’s not like you have a practical 640KiB or 2GiB limit per program anymore. Even in the cases where you do, is it worth confusing millions or billions of people on consumer spec sheets?
This is all explained in the post we’re commenting on. The standard “kilo” prefix, from the metric system, predates modern computing and even the definition of a byte: 1700s vs 1900s. It seems very odd to argue that the older definition is the one trying to retcon.
The binary usage in software was, and is, common, but it’s definitely more recent, and it causes a lot of confusion because it doesn’t match the older and bigger standard. Computers are very good at numbers; they never should have tried to hijack the existing prefix, especially when it was already defined by international standards. One might be able to argue that the US hadn’t really adopted the metric system at that point, but the use of 1000 to define kilo is clearly older than the use of 1024 to define the kilobyte. The main new thing here (from the last 100 years) is that 1024 bytes is a kibibyte.
Kibi is the retcon. Not kilo.
How do you define a retcon? Were kilograms 1024 grams, too? When did that change? Kilo seems to have meant 1000 since the metric system was created in the 1700s, and there is now a separate binary prefix to go along with it.
From the looks of it, software vendors were trying to retcon the definition of “kilo” to be 1024.
It’s only recent in some computers, which used a non-standard definition. The kilo prefix has meant 1000 since at least 1795-- which predates just about any kilobyte.
tl;dr
The memory bandwidth isn’t magic or special, but it’s also mostly meaningless on its own. MT/s matters more, but Apple’s non-magic is still generally higher than the industry standard in compact form factors.
Long version:
How are such wrong numbers so widely upvoted? The 6400Mbps figure is per pin.
Generally, DDR5 has a 64-bit data bus per module. The standard names also indicate the speed per module: PC5-32000 transfers 32GB/s with a 64-bit bus at 4000MT/s, and PC5-64000 transfers 64GB/s with a 64-bit bus at 8000MT/s. With those speeds, it isn’t hard for a DDR5 desktop or server to reach similar bandwidth.
Apple doubles the data bus from 64 bits to 128 bits (which is still nothing compared to something like an RTX 4090 and its 384-bit bus). With that, Apple can get 102.4GB/s with just one module instead of the standard 51.2GB/s. The cited 800GB/s is with 8 of them; most comparable hardware does not allow 8 memory modules.
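To make the arithmetic explicit, here’s a rough sketch using only the bus widths and MT/s figures above (the eight-module count and the rounding to the cited 800GB/s are the ones from the comments being corrected):

    // Peak bandwidth = (bus width in bytes) x (transfers per second).
    // DDR5 rates are quoted per pin, so a 64-bit bus at 4000MT/s moves 8 bytes per transfer.
    function peakGBps(busWidthBits: number, megaTransfersPerSec: number): number {
      return (busWidthBits / 8) * megaTransfersPerSec / 1000; // decimal GB/s
    }

    console.log(peakGBps(64, 4000));      // 32    -> PC5-32000
    console.log(peakGBps(64, 8000));      // 64    -> PC5-64000
    console.log(peakGBps(128, 6400));     // 102.4 -> one 128-bit module at 6400MT/s
    console.log(8 * peakGBps(128, 6400)); // 819.2 -> eight of them, roughly the cited 800GB/s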
Ironically, the memory bandwidth is pretty much irrelevant compared to the MT/s. To quote Dell defending their CAMM modules:
In a 12th-gen Intel laptop using two SO-DIMMs, for example, you can reach DDR5/4800 transfer speeds. But push it to a four-DIMM design, such as in a laptop with 128GB of RAM, and you have to ratchet it back to DDR5/4000 transfer speeds.
That trade-off makes it hard to balance speed, capacity, and upgradability. Even the upcoming Core Ultra 9 185H seems rated for 5600 MT/s-- after 2 years, we’re almost getting PC laptops with the memory speed of MacBooks. This wasn’t Apple being magical, just Apple taking advantage of OEMs dropping the ball on how important memory can be to performance. The memory bandwidth is just the cherry on top.
The standard supports these speeds and faster. To be clear, these speeds and capacities don’t do ANYTHING to support the “8GB is analogous to…” statements. It won’t take magic to beat, but the PC industry doesn’t yet have much competition in the performance and form factors Apple is targeting. In the meantime, Apple is milking its customers: the M3s have the same MT/s and memory technology as two years ago. It’s almost as if they looked at the next 6-12 months and went: “They still haven’t caught up, so we don’t need to go much faster yet-- but we can make a lot of money while we wait.”
They describe an SSH infector, as well as a credentials scanner. To me, that sounds like it started from exploited/infected Windows computers with SSH access, and then spread from there.
With how many unencrypted SSH keys there are, how most hosts keep a list of the servers they SSH into, and how they can probably bypass some firewall protections once they’re inside the network, it’s not a bad idea.
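As an illustration of why that approach spreads so well, here’s a defensive sketch (my own, not from the write-up) that audits your own ~/.ssh the way such a scanner might look at it: it flags passphrase-less private keys and counts known_hosts entries. It assumes the default OpenSSH file layout and that ssh-keygen is on the PATH.

    // Defensive audit sketch: flag passphrase-less private keys and count known_hosts entries.
    import { readdirSync, readFileSync, existsSync } from "node:fs";
    import { homedir } from "node:os";
    import { join } from "node:path";
    import { execFileSync } from "node:child_process";

    const sshDir = join(homedir(), ".ssh");

    for (const name of existsSync(sshDir) ? readdirSync(sshDir) : []) {
      const path = join(sshDir, name);
      let text = "";
      try { text = readFileSync(path, "utf8"); } catch { continue; }
      if (!text.includes("PRIVATE KEY")) continue;
      try {
        // ssh-keygen can only derive the public key with an empty passphrase
        // when the private key is unencrypted, so success means "no passphrase".
        execFileSync("ssh-keygen", ["-y", "-P", "", "-f", path], { stdio: "pipe" });
        console.log(`${name}: private key with NO passphrase`);
      } catch {
        console.log(`${name}: private key appears passphrase-protected`);
      }
    }

    const knownHosts = join(sshDir, "known_hosts");
    if (existsSync(knownHosts)) {
      const hosts = readFileSync(knownHosts, "utf8").split("\n").filter(Boolean).length;
      console.log(`known_hosts lists ${hosts} hosts an attacker on this box could try next.`);
    }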
You’re not wrong.
Realistically, there’s a bit of nuance. Many modern web apps are built from components that aren’t HTML; you don’t need HTML to have a component. And those non-HTML components can provide the consistency apps need. Sometimes that’s consistency in how the data is fetched; sometimes it’s consistency in how the data is displayed. For display, each component basically has its own CSS, but it doesn’t have to. A CSS class isn’t required.
Tailwind isn’t meant to be a component system; it’s meant to supplement one. If you’re writing CSS-based components, it looks horrible. If you’re writing components on top of CSS and just need a foundation of best practices, it works pretty decently. There’s still consistency. There are still components. They’re just not centered around HTML/CSS anymore, and they don’t have to be.
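As a rough sketch of what that looks like in practice (my own example, not tied to any particular framework; the class names are ordinary Tailwind utilities): the utility soup lives in exactly one place, and reuse happens by calling the component rather than by sharing a CSS class.

    // Minimal sketch: a "component" that owns its markup and its Tailwind classes.
    // Callers never repeat the class list, so consistency lives in the component,
    // not in a hand-maintained .card class.
    type CardProps = { title: string; body: string };

    function card({ title, body }: CardProps): string {
      return `
        <div class="rounded-lg border border-gray-200 p-4 shadow-sm">
          <h2 class="text-lg font-semibold">${title}</h2>
          <p class="text-sm text-gray-600">${body}</p>
        </div>`;
    }

    // Reuse = call the component again; restyling means editing one place.
    console.log(card({ title: "First", body: "Same styling everywhere." }));
    console.log(card({ title: "Second", body: "No shared CSS class needed." }));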
Semantically, it’s still worse HTML. Realistically, it’s often faster to iterate on and easier to keep from breaking, especially as the project gets larger. Combine that with code that’s more easily copied and pasted, and it can be a tough combo to beat. It’s probably just a stepping stone to whatever comes next.