• 6 Posts
  • 96 Comments
Joined 11 months ago
Cake day: August 8th, 2023


  • Xenophobia and racism, mostly. And yes, it’s a solution to the aging demographic crisis many countries face (at least in the medium term).

    I remember seeing a video from back in the Bush years of a presentation by some neocon group advocating for immigration to Pentagon or DoD officials or something. The argument for immigration was mostly the same: we have an aging population, so we could integrate immigrants (who are statistically younger) to solve the issue. I didn’t agree much with the broader idea of the presentation, though. The broader idea was that there were still some parts of the world outside the global U.S.-led hegemony (mostly the Middle East and Africa), and that we must spread democracy and capitalism to them. The argument was that globalism/capitalism ensures peace, and that both WWI and WWII happened because globalism was falling apart shortly before those wars. So, to ensure world peace, we need to globalize the entire earth and bring all countries into the U.S.-led hegemony, even if that means starting wars to spread democracy, lol.

  • A lot of the “elites” (OpenAI board, Thiel, Andreessen, etc) are on the effective-accelerationism grift now. The idea is to disregard all negative effects of pursuing technological “progress,” because techno-capitalism will solve all problems. They support burning fossil fuels as fast as possible because that will enable “progress,” which will solve climate change (through geoengineering, presumably). I’ve seen some accelerationists write that it would be ok if AI destroys humanity, because it would be the next evolution of “intelligence.” I dunno if they’ve fallen for their own grift or not, but it’s obviously a very convenient belief for them.

    Accelerationism itself was originally developed by Nick Land, who appears to be some kind of fascist.

  • LLMs do sometimes hallucinate even when giving summaries, i.e. they put things in the summaries that were not in the source material. Bing did this often the last time I tried it. In my experience, LLMs seem to do very poorly when their context is large (e.g. when “reading” large or multiple articles). With ChatGPT, the output seems more likely to be factually correct when it just generates “facts” from its model instead of “browsing” and adding articles to its context.