Hyperfixating on performant code by using Rust (which only pays off if you write it in a very particular way) makes applications worse. Good API and system design are a lot easier when you aren’t constantly having to think about memory allocations and reference counting. Rust puts that dead-center of the developer experience with pointers/ownership/Arcs/Mutexes/etc., and for most webapps it just doesn’t matter how memory is allocated. It’s cognitive load for no reason.
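To make the cognitive-load point concrete, here’s a toy sketch (plain std, no web framework, and nothing taken from Lemmy’s actual code) of what sharing even a trivial bit of state across request handlers tends to look like:

```rust
// Any state shared across "handlers" has to be wrapped in Arc<Mutex<...>>
// and explicitly cloned and locked, even for trivial CRUD-style lookups.
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

type SessionStore = Arc<Mutex<HashMap<u64, String>>>;

// Imagine this is a request handler: it can't just touch the store,
// it has to own a clone of the Arc and acquire the lock first.
fn handle_request(store: SessionStore, user_id: u64) -> String {
    let mut sessions = store.lock().expect("lock poisoned");
    sessions
        .entry(user_id)
        .or_insert_with(|| format!("session-for-{user_id}"))
        .clone()
}

fn main() {
    let store: SessionStore = Arc::new(Mutex::new(HashMap::new()));

    // Each "request" thread needs its own Arc clone to share ownership.
    let handles: Vec<_> = (0..4)
        .map(|user_id| {
            let store = Arc::clone(&store);
            thread::spawn(move || handle_request(store, user_id))
        })
        .collect();

    for handle in handles {
        println!("{}", handle.join().unwrap());
    }
}
```

In most garbage-collected languages that whole dance is a dictionary and an assignment.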
The actual code running in the majority of webapps (including Lemmy) is not that complicated; you’re just applying some business logic and doing CRUD operations against datastores. It’s a lot more important to consider how your app interacts with its dependencies than how to get your business logic to be hyper-efficient. Your code is going to be waiting on network I/O and DB operations most of the time anyway.
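For a sense of scale, here’s another toy sketch (std only, with sleeps standing in for real DB and network calls; the latencies are made up for illustration). The part a faster language can actually speed up is a couple of lines in the middle, while the handler spends nearly all of its wall-clock time waiting:

```rust
use std::thread;
use std::time::{Duration, Instant};

// Stand-in for a database query; a real one would be network + disk latency.
fn fetch_post_from_db(post_id: u64) -> String {
    thread::sleep(Duration::from_millis(20));
    format!("post #{post_id}")
}

// Stand-in for a call out to another service (cache, federation, etc.).
fn notify_subscribers(_post: &str) {
    thread::sleep(Duration::from_millis(30));
}

fn handle_get_post(post_id: u64) -> String {
    let started = Instant::now();

    let post = fetch_post_from_db(post_id); // ~20 ms of waiting
    // The entire "business logic": validate and shape the response.
    let response = if post.is_empty() {
        "not found".to_string()
    } else {
        format!("200 OK: {post}")
    };
    notify_subscribers(&post); // ~30 ms more waiting

    println!("handled in {:?}, almost all of it I/O", started.elapsed());
    response
}

fn main() {
    println!("{}", handle_get_post(42));
}
```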
Hindsight is 20/20 and I’m not faulting anyone for not thinking through a personal project, but I don’t think Rust did Lemmy any favors. At the end of the day, it doesn’t matter how performant your code is if you make bad design and dependency choices. Rust makes it harder to see these bad choices because you have to spend so much time in the weeds.
To be clear, I’m not shitting on Rust. I’ve used it for a few projects and it’s great for apps where processing performance is important. It’s just not a good choice for most webapps; you’d be far better off in a higher-level language.
Maybe this comment will age poorly, but I think AGI is a long way off. LLMs are a dead end, IMO. They are easy to improve with the tech we have today and they can be very useful, so there’s a ton of hype around them. They’re also easy to build tools around, so everyone in tech is trying to get their piece of the AI pie now.
However, LLMs are chat interfaces for searching a large dataset, and that’s about it. Even the image generators are doing this; the dataset just happens to be visual. All of the results you get from a prompt are just queries into that data, even when you get a result that makes it seem intelligent. The model is finding a best-fit response based on billions of parameters, like a hyperdimensional regression analysis. In other words, it’s pattern-matching.
A lot of people will say that’s intelligence, but it’s different: the LLM isn’t capable of understanding anything new; it can only generate a response from something in its training set. More parameters, better training, and larger context windows just refine the search results; they don’t make the LLM smarter.
AGI needs something new; we aren’t going to get there with any of the approaches used today. RemindMe! 5 years to see if this aged like wine or milk.