It’s not dead, though; it’s still linked everywhere, from big news outlets to niche communities, because it still has that critical mass and inertia.
And I have to be cynical about the Fediverse here: realistically, what replaces it, at least in the US? Discord? No thanks, I’d at least rather have information be public.
I’m speaking as someone who has never used Twitter, but I can’t ignore it, as much as I’d like to.
The behavior is configurable just like it is on Linux; UAC can be set to require a password every time.
But I think it’s not set that way by default because many users don’t remember their passwords, lol. If you think I’m kidding, you should meet my family…
Also, scripts can do plenty without elevation, on Linux or Windows.
The problem is that splitting models up over a network, even over a LAN, is not very efficient. The entire set of weights has to be run through for every token (roughly half a word).
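Back-of-envelope sketch of why that hurts (all numbers are made-up illustrative assumptions, not benchmarks): with a pipeline split, a single request’s token still has to pass through every layer in sequence, so total compute time doesn’t shrink, and each host boundary stacks a network hop on top of it.

```python
# Illustrative-only arithmetic for a pipeline-split model (assumed numbers,
# not measurements): one request's token flows through ALL the layers in
# order, so compute time stays the same and every host boundary adds a hop.
def per_token_latency_ms(total_compute_ms: float, num_hosts: int,
                         hop_latency_ms: float) -> float:
    return total_compute_ms + (num_hosts - 1) * hop_latency_ms

for label, hosts, hop_ms in [("1 host", 1, 0.0),
                             ("4 hosts on a LAN", 4, 1.0),
                             ("8 hosts over the internet", 8, 50.0)]:
    ms = per_token_latency_ms(total_compute_ms=50.0, num_hosts=hosts,
                              hop_latency_ms=hop_ms)
    print(f"{label}: ~{ms:.0f} ms/token (~{1000 / ms:.1f} tok/s)")
```

Splitting buys you the ability to fit a model you otherwise couldn’t, not speed.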
And the other problem is that Petals just can’t keep up with the crazy dev pace of the LLM community. Honestly they should drop it and fork or contribute to llama.cpp or exllama, because TBH no one wants to split up Llama 2 (or even Llama 3) 70B and sit a generation or two behind on a base instruct model instead of a finetune.
Even the AI Horde has very few hosts relative to users, even though hosting a small model on a 6GB GPU would earn you lots of kudos.
The diffusion community is very different, as the output is a single image and even the largest open models are much smaller. LoRA usage is also standardized there, while it isn’t in LLM land.
Facebook just didn’t release the code for Llama image generation.
The model you are looking for now is Flux.
TBH this is a great space for modding and for local LLMs/LLM “hordes”.
^
Futurama had it right: spammers are the ultimate destroyers.
Top 50% of the population still.
After all, they wrote a review.
Trap them?
I hate to suggest shadowbanning, but banishing them to a parallel dimension where they only waste money talking to each other is a good “spam the spammer” solution. Bonus points if another bot tries to engage with them, lol.
Do these bots check themselves for shadowbanning? I wonder if there’s a way around that…
This. I’m surprised Lemmy hasn’t already done this, as it’s such a huge, glaring issue on Reddit (one they don’t care about, because bots are engagement…).
GPT-4o
It’s kind of hilarious that they’re using American APIs to do this. It would be like them buying Ukrainian weapons when they already have the blueprints for them.
Oh, and as for benchmarks, check the Hugging Face Open LLM Leaderboard. The new one.
But take it with a LARGE grain of salt. Some models game their scores in different ways.
There are more niche benchmarks floating around, such as RULER for long-context performance. Amazon ran a good array of models to test their Mistral finetune: https://huggingface.co/aws-prototyping/MegaBeam-Mistral-7B-512k
Honestly I would move away from Ollama. I don’t like it for a number of reasons, including:
Suboptimal quants
Suboptimal settings
Limited model selection, as opposed to just browsing Hugging Face (see the sketch below)
Sometimes suboptimal performance compared to kobold.cpp, especially if you are quantizing the cache, and double especially if you are not on a Mac
Frankly, a lot of attention squatting/riding off llama.cpp’s development without contributing a ton back
Rumblings of a closed-source project
I could go on and on, including some behavior I just didn’t like from the devs, but I think I’ll stop, as it’s really not that bad.
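On the model selection point: a minimal sketch of grabbing whatever GGUF quant you want straight off Hugging Face (the repo and filename here are just example values showing the pattern, not a specific recommendation):

```python
# Minimal sketch: download a specific GGUF quant directly from Hugging Face
# instead of being limited to a registry. Repo and filename are example values.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Meta-Llama-3.1-8B-Instruct-GGUF",  # example repo
    filename="Meta-Llama-3.1-8B-Instruct-Q6_K.gguf",      # pick your quant
)
print(path)  # point kobold.cpp / llama.cpp at this file
```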
Jokes aside (and this whole AI search results thing is a joke), this seems like an artifact of sampling and tokenization.
I wouldn’t be surprised if Gemini tokenizes “XTX” as “XT” plus “X” or something like that, so it has a decent chance of mixing them up after it writes out “XT”. Add in sampling (literally randomizing the token outputs a little), and I’m surprised it gets any of it right.
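Gemini’s tokenizer isn’t public, so here’s a sketch with an open stand-in (the GPT-2 tokenizer via Hugging Face transformers; the exact splits are my assumption, not Gemini’s actual vocabulary) just to show how product names get chopped into sub-word pieces:

```python
# Rough sketch: inspect how a product name gets split into sub-word tokens.
# Uses the open GPT-2 tokenizer as a stand-in; Gemini's real tokenizer is
# not public, so the exact splits there are an assumption.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

for name in ["7900 XTX", "7900 XT", "RX 7900 XTX"]:
    print(name, "->", tokenizer.tokenize(name))

# If "XTX" comes out as something like ["XT", "X"], then once the model has
# emitted "XT" it is a single sampled token away from stopping there,
# which is exactly the XT/XTX mix-up in those AI search results.
```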
The plan is to monetize the AI results with ads.
I’m not even sure how that works, but I don’t like it.
Honestly I am not sold on Petals; it leaves so many technical innovations behind, and it’s just not taking off like it needs to.
IMO a much cooler project is the AI Horde: a swarm of hosts, but no model splitting, and it already has a boatload of actual users.
And (no offense) there are much better models to use than the Llama 8B that Ollama gives you, and which ones depends entirely on how much RAM your Mac has. They get better and better the more you have, all the way up to 192GB (where you can squeeze in the very amazing DeepSeek Coder V2).
RAM capacity and bandwidth.
Those are basically the only two things that matter for local LLM performance, since the entire model has to be read from memory for every token (aka half a word). And for the same money, a “higher end” M2 (like an M2 Max or Ultra) will simply have more of both than the equivalently priced M3 or (probably) M4.
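A rough back-of-envelope sketch of that ceiling (the bandwidth and model-size figures are ballpark assumptions, not measurements): tokens per second can’t exceed memory bandwidth divided by the bytes streamed per token, which is roughly the quantized model size.

```python
# Back-of-envelope upper bound: every generated token streams the whole
# (quantized) model out of RAM, so tokens/sec <= bandwidth / model size.
# The figures below are ballpark assumptions, not measured specs.
def max_tokens_per_sec(model_size_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / model_size_gb

configs = {
    "M2 Max (~400 GB/s), 70B @ 4-bit (~40 GB)": (40, 400),
    "M2 Ultra (~800 GB/s), 70B @ 4-bit (~40 GB)": (40, 800),
    "Dual-channel DDR5 desktop (~80 GB/s), 70B @ 4-bit (~40 GB)": (40, 80),
}

for name, (size_gb, bw) in configs.items():
    print(f"{name}: ~{max_tokens_per_sec(size_gb, bw):.1f} tok/s ceiling")
```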
but what am I realistically looking at being able to run locally that won’t go above like 60-75% usage so I can still eventually get a couple game servers, network storage, and Jellyfin working?
Honestly, not much. Llama 8B, but very slowly, or maybe DeepSeek V2 chat with prompt processing on the 270 via Vulkan but mostly running on the CPU. And I guess just limit it to 6 threads? I’d host it with kobold.cpp’s Vulkan backend, or maybe the llama.cpp server if there will be multiple users.
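If you do go the llama.cpp server route, it exposes an OpenAI-compatible HTTP API, so multiple users or services can just hit it with plain requests. A minimal client sketch, assuming you’ve already got the server running yourself (the host and port here are example values):

```python
# Minimal client sketch for a llama.cpp server instance.
# Assumes the server is already running; host/port are example values.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # OpenAI-compatible endpoint
    json={
        "messages": [
            {"role": "user", "content": "Give me a one-line status check."}
        ],
        "max_tokens": 64,
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```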
You can try them to see if they feel OK, but LLMs just aren’t something that likes old hardware. An RTX 3060 (or a Mac, or a 12GB+ AMD GPU) is considered the bare minimum in the community, with a 3090 or 7900 XTX the standard.
OK, so the reaction here seems pretty positive.
But when I bring this up in other threads (or even on Reddit, in the few subreddits I still use), the reaction is overwhelmingly negative. Like, I briefly mentioned fixing the video quality issues of an old show in another fandom with diffusion models, and I felt like I was going to get banned and doxxed.
I see it a lot here too, in any thread about OpenAI or whatever.
Because gun violence is more of a risk than ever.
Honestly I would feel better with the sniper there, so they could stop a random nut who shows up with an assault rifle. Which is really sad.