• 0 Posts
  • 37 Comments
Joined 1 year ago
Cake day: March 3rd, 2024



  • Oppenheimer was already really long, and I feel like it portrayed the complexity of the moral struggle Oppenheimer faced pretty well, as well as showing him as the very fallible human being he was. You can’t make a movie that covers every aspect of a historical event as big as the development and use of the first atomic bombs. There’s just too much. It would have to be a documentary, and even then it would be days long. Just because Oppenheimer doesn’t tell the story James Cameron considers the most compelling or important about the development of the atomic bomb doesn’t mean the story it does tell isn’t compelling or important.


  • The first statement is not even wholly true. While training does take more power, executing the model (called “inference”) still takes much, much more power than non-AI search algorithms, or really any traditional computational algorithm besides bogosort (toy sketch at the end of this comment).

    Big Tech weren’t doing the best they possibly could transitioning to green energy, but they were making substantial progress before LLMs exploded on the scene because the value proposition was there: traditional algorithms were efficient enough that the PR gain from doing the green energy transition offset the cost.

    Now Big Tech have for some reason decided that LLMs are the biggest gamble ever. The first to find the breakthrough to AGI will win it all and completely take over all IT markets, so they need to consume as much as they can get away with to maximize the probability that their engineers are the ones who make that breakthrough.
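
    Since I name-dropped it: bogosort is the canonical joke example of a maximally wasteful algorithm. Here’s a toy sketch (my own, purely illustrative, not from the article) of why it’s the punchline:

    ```python
    import random

    def bogosort(items):
        """Shuffle at random until the list happens to come out sorted.
        The expected number of shuffles grows factorially with length,
        which is why it's the go-to example of computational waste."""
        items = list(items)
        while any(a > b for a, b in zip(items, items[1:])):
            random.shuffle(items)
        return items

    print(bogosort([3, 1, 2]))  # fine for 3 elements, hopeless for 30
    ```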




  • The change doesn’t reflect unprecedented temperatures, with Fairbanks having reached 90 degrees twice in 2024, Srinivasan said. It’s purely an administrative change by the weather service.

    I think this is a bit disingenuous. Sure, it’s not technically “unprecedented” because it has happened before, specifically last year, but the change was made to serve people better, and serving people better means acknowledging that hotter temperatures are happening more often because of climate change.

    Thoman also clarified that the term swap doesn’t have anything to do with climate change.

    They may not be directly citing climate change, but it’s absolutely the root cause. I wonder if they’re just trying to stay under Trump’s radar so he doesn’t make them roll it back because they said the C phrase. In bad political times, doing good sometimes means toeing the party line in public while doing good work behind the scenes.


  • My point is that this kind of pseudo-intelligence has never existed on Earth before, so evolution has had free rein to use language sophistication as a proxy for humanity and intelligence without encountering anything that would put selective pressure against this heuristic.

    Human language is old. Way older than the written word. Our brains have evolved specialized regions for language processing, so evolution has clearly had time to operate while language has existed.

    And LLMs are not the first sophisticated AI that’s been around. We’ve had AI for decades, and really good AI for a while. But people don’t anthropomorphize other kinds of AI nearly as much as LLMs. Sure, people ascribe some human-like intelligence to any sophisticated technology, and some people in history have claimed some technology or another is alive or sentient. But with LLMs we’re seeing a larger portion of the population believing it than we’ve ever seen before.


  • My running theory is that human evolution developed a heuristic in our brains that associates language sophistication with general intelligence, and especially with humanity. The very fact that LLMs are so good at composing sophisticated sentences triggers this heuristic and makes people anthropomorphize them far more than other kinds of AI, so they ascribe more capability to them than evidence justifies.

    I actually think this may explain some earlier reports of weird behavior by AI researchers as well. I seem to recall reports of Google researchers believing they had created sentient AI (a quick search produced this article). The researcher was fooled by his own AI not because he drank the Kool-Aid, but because he fell prey to this neural heuristic that’s in all of us.






  • Even more surprising: the droplets didn’t evaporate quickly, as thermodynamics would predict.

    “According to the curvature and size of the droplets, they should have been evaporating,” says Patel. “But they were not; they remained stable for extended periods.”

    With a material that could potentially defy the laws of physics on their hands, Lee and Patel sent their design off to a collaborator to see if their results were replicable.

    I really don’t like the repeated use of the phrase “defy the laws of physics.” That’s an extraordinary claim, and it needs extraordinary proof, and the researchers already propose a mechanism by which the droplets remained stable under existing physical laws: they were being replenished from the nanopores inside the material as fast as evaporation was pulling water out of them. (The standard relation behind the “should have been evaporating” expectation is at the end of this comment.)

    I recognize the researchers themselves aren’t using the phrase; it’s Penn’s press office trying to further drum up interest in the research. But it’s a bad framing. You can make it sound interesting without resorting to clickbait techniques like “did our awesome engineers just break the laws of physics??” Hell, the research is interesting enough on its own; passive water collection from the air is revolutionary! No need for editorializing!
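
    For reference (my own addition; this is standard textbook thermodynamics, not something from the article), the expectation the researchers are describing is the Kelvin relation, which says the vapor pressure over a curved droplet surface rises as the radius shrinks:

    $$\ln\frac{p}{p_{\text{sat}}} = \frac{2\gamma V_m}{r R T}$$

    where $\gamma$ is the surface tension, $V_m$ the molar volume of water, $r$ the droplet radius, $R$ the gas constant, and $T$ the temperature. Micro-scale droplets should therefore evaporate quickly, which is exactly why stable droplets imply something, like the proposed nanopore wicking, is resupplying them.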



  • The main issue is that nobody is going to want to create new content when they get paid nothing or almost nothing for doing so.

    This is the whole reason copyright is supposed to exist. Content creators get exclusive control over the content they create for the duration of the copyright, so they can make a living off of work that then enriches society. And for the further benefit of society, after 14 years (the original term) the copyright ends and the works enter the public domain, where anyone can create derivative works; those new works get their own copyrights held by their own creators, and the cycle continues, further enriching society.

    Large companies first perverted this by getting Congress to extend the duration of copyright to truly absurd levels so they could continue to extract wealth from works they had to spend very little to maintain (mostly lawyers to enforce their copyrights). Since only they could create derivative works for 100(!) years, they did not have to compete with other creators in society, giving themselves a monopoly on what became cultural icons. Now corporate America has found a way to subvert creation itself, but it requires access to effectively all copyrighted works everywhere simultaneously. So now they just ignore copyright, since it is impeding their wealth accumulation.

    And so now the creative engine copyright is supposed to foster dies, taking the social enrichment it was designed to facilitate with it. People won’t stop making art or creating what’s supposed to be copyrighted work, but when they can’t make a living on it, they have to turn it into a hobby and spend the bulk of their time and energy on work that will put food on the table.



  • Ah, I think I misread your statement of “followers by nature” as “followers of nature.” I’m not really willing to ascribe personality traits like “follower” or “leader” or “independent” or “critical thinker” to humanity as a whole based on the discussion I’ve laid out here. Again, the possibility space of cognition is bounded, but unimaginably large. What we can think may be limited to a reflection of nature, but the possible permutations that can be made of that reflection are more than we could explore in the lifetime of the universe. I wouldn’t really use this as justification for or against any particular moral framework.
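
    To put a rough number on “more than we could explore in the lifetime of the universe” (my own back-of-the-envelope figures, nothing rigorous, and the “100 concepts” count is an arbitrary assumption):

    ```python
    import math

    # Back-of-the-envelope only: pretend the "reflection of nature" is just
    # 100 distinct concepts and count the ways to arrange them.
    orderings = math.factorial(100)                 # ~9.3e157 arrangements
    universe_age_s = 13.8e9 * 365.25 * 24 * 3600    # ~4.4e17 seconds
    explored = universe_age_s * 1e9                 # a billion arrangements per second

    print(f"{orderings:.2e} arrangements vs ~{explored:.2e} ever explored")
    ```

    Even with those absurdly generous assumptions you don’t come within 130 orders of magnitude of exhausting the space.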


  • I think that’s overly reductionist, but ultimately yes. The human brain is amazingly complex, and evolution isn’t directed but keeps going with whatever works well enough, so there’s going to be incredible breadth in human experience and cognition across everyone in the world and throughout history. You’ll never get two people thinking exactly the same way because of the sheer size of that possibility space, despite there having been over 100 billion people to have lived in history and today.

    That being said, “what works” does set constraints on what is possible with the brain, and evolution went with the brain because it solves a bunch of practical problems that enhanced the survivability of the creatures that possessed it. So there are bounds to cognition, and there are common patterns and structures that shape cognition because of the aforementioned problems they solved.

    Thoughts that initially reflect reality but that can be expanded in unrealistic ways to explore the space of possibilities an individual can effect in the world around them have clear survival benefits. Thoughts that spring from nothing and that relate in no way to anything real strike me as not useful at best and at worst disruptive to whatever else the brain is doing. Thinking about that perspective more, given the powerful levels of pattern recognition in the brain, I wonder if the creation of “100% original thoughts” would result in something like schizophrenia, where the brain’s pattern recognition systems reinterpret (and misinterpret) internal signals as sensory signals of external stimuli.


  • The problem with that reasoning is it’s assuming a clear boundary to what a “thought” is. Just like there wasn’t a “first” human (because genetics are constantly changing), there wasn’t a “first” thought.

    Ancient animals had nervous systems that could not come close to producing anything we would consider a thought, and through gradual, incremental changes we get to humanity, which is capable of thought. Where do you draw the line? Any specific moment in that evolution would be arbitrary, so we have to accept a continuum of neurological phenomena that span from “not thoughts” to “thoughts.” And again we get back to thoughts being reflections of a shared environment, so they build on a shared context, and none are original.

    If you do want to draw an arbitrary line at what a thought is, then that first thought was an evolution of non-/proto-thought neurological phenomena, and itself wasn’t 100% “original” under the definition you’re using here.