So you access the models directly via terminal? Is that convenient? Also, do you get satisfying inference speed and quality with a 16GB card?
Just imagine the license fees.
From an idealistic point of view, sure, freedom of choice is the way to go.
What makes me nervous is that Safari has been the only big player left besides Chrome in terms of usage share on mobile. So while, from an idealistic point of view, the ban on other engines was certainly a bad thing, it still helped prevent Google from extending its monopoly.
The fact that Chromium-based browsers are going to be allowed as well makes me nervous.
Running the AV container is optional, as is using the integrated backup solution. But I can see how that might feel bloated if you don’t need it.
With AIO it’s almost the same: `sudo docker exec -u www-data nextcloud-aio-nextcloud php occ <command>`
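If you run `occ` a lot, a small wrapper function saves typing. This is just a sketch: the function name `nco` is my own invention, and the container name `nextcloud-aio-nextcloud` is the AIO default. The example subcommands (`app:list`, `maintenance:mode`) are standard `occ` commands:

```shell
# Hypothetical convenience wrapper for occ inside the Nextcloud AIO container
nco() {
  sudo docker exec -u www-data nextcloud-aio-nextcloud php occ "$@"
}

# Usage (requires a running AIO stack):
# nco app:list                  # list installed/enabled apps
# nco maintenance:mode --on     # enter maintenance mode
# nco maintenance:mode --off    # leave maintenance mode
```

Drop it in your `~/.bashrc` if you find yourself using it regularly.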
I decided to go with this one because it’s now the official distribution channel and supported by the devs. But the lsio one looks pretty solid as well.
This has helped me a lot in my scenario.
Nextcloud is a web-based, open-source cloud/collaboration software suite that can be self-hosted.
Welcome. I use it in conjunction with Fedora CoreOS so I hopefully never have to manually update anything ever again.
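For anyone curious how the hands-off updates work: Fedora CoreOS pulls and applies OS updates automatically via its Zincati agent, and you can constrain when it reboots with a small TOML drop-in. The path and keys below follow Zincati's documented config layout; the specific window is just an example:

```toml
# /etc/zincati/config.d/55-updates-strategy.toml
# Example: only allow automatic reboots during a weekend maintenance window
[updates]
strategy = "periodic"

[[updates.periodic.window]]
days = [ "Sat", "Sun" ]
start_time = "02:00"
length_minutes = 60
```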
Thanks! Glad to see the 8x7B performing not too badly - I assume that’s a Mistral model? Also, do you know whether the CPU significantly affects inference speed in such a setup?