Signing into cloud services and downloading apps is just so much easier to do!
This is actually true, but it speaks less to why self hosting is “impossible” and more to how the lack of education around computers has reached an inflection point.
There’s no reason why self hosting should be some bizarre concept; in another reality, we would all have local servers and firewalls that push our content out to the wider internet, and perhaps even intranet-based notes. Society as a whole would be better off if we chose to structure the internet that way instead of handing the keys to the biggest companies on the stock market.
I’ll give this podcast a listen though, as it might be interesting. I think the reality is that some more Docker frontends might help casual users jump into the realm of self hosting – especially by setting up proxy managers and homepage sites (like Homarr) that work intuitively and never require you to enter ports and IPs (though fearing those is also an education problem, not a problem with the concept itself).
If you want self-hosting for everyone, then I suspect you’re gonna need something like a console – a self-contained box that requires virtually no configuration.
So something like a Synology NAS, I guess.
And even that is not for the masses. It’s good, but it’s for the medium-savvy folks; it will never be for my wife – or for my mother!
I suppose that that’s an “appliance-like computer”, though I was thinking more of general-purpose hardware that takes software modules.
Like, think of how you install a game on a console. Maybe you set up your account at the beginning and plonk in a wireless password, but beyond that, there’s no further essential configuration for the thing to work. Same kind of idea. You get the box and do the initial minimal setup. After that, you install software modules, and they have no more configuration involved.
If more configuration is required, then it’s just not going to be something sufficiently accessible for everyone to use.
That’s probably not what the typical user on this community is looking for, but I think that that’s probably what would be required if one wants everyone in the public to be able to self-host.
There’s no reason why self hosting should be some bizarre concept; in another reality, we would all have local servers
In the late 2000s, Opera had a very interesting product called “Opera Unite”. It was essentially a self-hosting platform built into the web browser. You could use it to chat, host a website, share photos, share files (and let other people share files with you), and a few other things. It had a guest book called “the fridge” where people could leave you Post-it notes.
They’d give you a subdomain, which would either connect to your machine directly (if your network allowed UPnP or you forwarded the port) or be proxied via their servers.
Basically, it was a super simple solution to create a decentralized web. The goal was to let everyone own their own data in a way that anyone could understand, without having to know anything about server hosting. Instead of just browsing the web, you could contribute to it at the same time.
It worked surprisingly well, but never caught on with the general public, and they killed it off about three years later.
My ISP doesn’t allow self hosting (available to the WAN). Isn’t that a pretty common condition in the ToS for most ISPs?
I’ve never bothered to check, because I self host to serve 1-5 users, and I’ve never generated enough traffic for any ISP to notice. I would need to pay them more for a static IP address, but we have dynamic DNS services for that. My ISP doesn’t put any actual obstacles in place beyond dynamic IP.
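For what it’s worth, a dynamic DNS setup can be as simple as a cron entry that pings your provider’s update endpoint every few minutes. A minimal sketch – the URL, hostname, and token below are made-up placeholders, so check what your provider actually documents:

    # Hypothetical crontab entry: report the current public IP to a DDNS
    # provider every 5 minutes. Endpoint and token are placeholders.
    */5 * * * * curl -fsS "https://dyndns.example.com/update?hostname=home.example.com&token=SECRET" >/dev/null 2>&1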
Here we have net neutrality, so they legally can’t restrict what you’re doing with your connection unless it’s illegal or you’re doing stupid shit like messing with their infrastructure.
CGNAT is decently common though, and that can restrict your self hosting capabilities substantially, but you can work around it if you want.
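The usual workaround is a cheap VPS with a public IP plus a tunnel back home. A minimal WireGuard sketch for the home side – all keys, names, and addresses here are placeholders, not a tested config:

    # /etc/wireguard/wg0.conf on the home server behind CGNAT.
    [Interface]
    PrivateKey = <home-private-key>
    Address = 10.0.0.2/24

    [Peer]
    PublicKey = <vps-public-key>
    Endpoint = vps.example.com:51820
    AllowedIPs = 10.0.0.1/32
    PersistentKeepalive = 25   # keeps the NAT mapping open through CGNAT

The VPS then forwards incoming traffic down the tunnel to the home box, e.g. with a reverse proxy or DNAT rules.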
Have you looked into setting up a reverse proxy? If you haven’t already, check out Traefik or Nginx.
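To give you an idea of the scale of it, a minimal Nginx reverse proxy is just one server block. This is only a sketch – the hostname and upstream port are placeholders for whatever you’re running:

    # Forward requests for app.example.com to a local service on port 8080.
    server {
        listen 80;
        server_name app.example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }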
I have not. I’ve only just started exploring self hosting and am quite a novice. Thanks for the lead!
Oh, you summer child (I upvoted, but won’t waste an hour of my life listening to random internet stuff). I don’t think it’s a lack of education – in this world it’s very possible to educate yourself – it’s a lack of understanding (due to misinformation and corporate-sponsored laziness) of the implications of that easy click, or of what others can get without your consent. Privacy isn’t dead, it’s just now mostly for the rich.
Condescension is a terrible way to kindle enthusiasm. C’mon, if you know this shit, extend a hand to those who don’t.
Valid, and I do help where I can (check my history if you care). This post just tweaked my twee radar, to which I generally respond, gently, “learn to think for yourself” – parental pattern, I guess.
They even have a term for this — local-first software — and point to apps like Obsidian as proof that it can work.
This touches on something that I’ve been struggling to put into words. I feel like some of the ideas that led to the separation of files and applications to manipulate them have been forgotten.
There’s also a common misunderstanding that files only exist in blocks on physical devices. But files are more of an interface to data than an actual “thing”. I want to present my files - wherever they may be - to all sorts of different applications which let me interact with them in different ways.
Only some self-hosted software grants us this portability.
I want to present my files - wherever they may be - to all sorts of different applications which let me interact with them in different ways.
Only some self-hosted software grants us this portability.
I’d say almost everything is already covered with Samba shares and Docker bind mounts. With Samba shares, the data is presented across the network to my Kodi clients, the file browser on my phone, and the file browsers of all my computers. And with Docker bind mounts, those files are presented to any services that I want to run.
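Concretely, something like this – the paths and service here are just an illustration of the pattern, not a prescription:

    # docker-compose.yml: the same directory that Samba exports is
    # bind-mounted into the container, so both see the same files.
    services:
      jellyfin:
        image: jellyfin/jellyfin
        volumes:
          - /srv/media:/media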
Devil’s advocate: what about the posts and comments I’ve made via Lemmy? They could be presented as files (like email). I could read, write and remove them. I could edit my comments with Microsoft Word or ed. I could run some machine learning processing on all my comments in a Docker container using just a bind mount like you mentioned. I could back them up to Backblaze B2 or a USB drive with the same tools.

But I can’t. They’re in a PostgreSQL database (which I can’t query), accessible via an HTTP API. I’ve actually written a Lemmy API client, then used that to make a read-only file system interface to Lemmy (https://pkg.go.dev/olowe.co/lemmy). Using that file system I’ve written an app to access Lemmy from a weird text editing environment I use (developed at least 30 years before Lemmy was even written!): https://lemmy.sdf.org/post/1035382
More ideas if you’re interested at https://upspin.io
deleted by creator
That makes sense. I think the reason why they’re not represented as files is pretty simple: data integrity. If you want the comments, you just query the table, and as long as the DB schema is what you expect, it’ll work just fine; you don’t have to validate that the data hasn’t been corrupted (you don’t have to check that a column exists, for example). But with files, every single file has to be parsed and validated, because another application could have screwed them up. It’s certainly possible to build this – it might be slower, but computers are pretty fast these days – but it would require more development work to solve the problem that the database solves for you.
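To make the contrast concrete: on the database side, fetching a user’s comments is a single query against a schema the DB enforces. This assumes a Lemmy-like schema, so the table and column names are illustrative, not exact:

    -- One query; the schema is guaranteed by the database, so there is
    -- no per-file parsing or validation step.
    SELECT content
    FROM comment
    WHERE creator_id = 42
    ORDER BY published DESC;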
TLDR: Selfhosting is hard. Obsidian is easy. Federation is a thing.
Projects like Runtipi have potential for the masses, imo. Single-click deployment of apps on your own server… if you can get Runtipi installed first, of course. But hey, a step closer, I suppose.
I’m very new to selfhosting; I only started in earnest in April of this year. So I definitely felt the host’s frustrations in deploying (or trying to deploy) solutions I wanted to take back from Google and Microsoft. I’m still learning and am almost to the point where I’m comfortable pulling the plug on Google Photos entirely. But it’s a lot of research for newbs.
The problem with projects like Runtipi is the same one you have with Docker – you’ll be hostage to yet another platform that can fuck you up at any moment without notice… like Docker Hub did.
Docker does not lock you in to Docker Hub, though. So no hostage-taking.
While that isn’t false, defaults carry immense weight. Also, very few have the means to host at scale like Docker Hub; if the goal is to not just repeat the same mistake later, each project would have to host their own, or perhaps band together into smaller groups. And unfortunately, being a good programmer does not make you good at devops or sysadmin work, so now we need to involve more people with those skillsets.
To be clear, I’m totally in favor of this kind of fragmentation. I’m just also realistic about what it means.
Linuxserver.io images don’t come directly from Docker Hub any more, and I don’t know if anyone noticed or cared. They use their own domain, lscr.io, which redirects to the Docker repository they’re using (currently GitHub), making it easy for them to move the repository without breaking things for users. https://www.linuxserver.io/blog/wrap-up-warm-for-the-winter
That approach is a good idea in general. If you’re running a medium to large size project, never directly rely on domain names you don’t control because it makes it painful to migrate to something else in the future. Even if your own domain just has a bunch of redirects (both URL redirects and email forwarding), it’s better than nothing.
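A redirect-only setup really is just a few lines. A sketch assuming Nginx, with made-up hostnames:

    # A vhost on a domain you control that only redirects; if you ever
    # migrate registries, you change this one target, not your users.
    server {
        listen 80;
        server_name get.example.com;
        return 302 https://ghcr.io$request_uri;
    }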
That’s the same as saying that Microsoft doesn’t hold anyone hostage with MS Office, yet they do.
Bullshit. I can use Docker without Docker Hub very easily. Anyone can host Docker images, and Docker allows this; no weird hacks needed.
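For example – the image names below are illustrative:

    # Pulling from a registry other than Docker Hub just works with a
    # fully qualified image name:
    docker pull ghcr.io/linuxserver/jellyfin

    # Or run your own registry and push to it:
    docker run -d -p 5000:5000 --name registry registry:2
    docker tag myapp:latest localhost:5000/myapp:latest
    docker push localhost:5000/myapp:latest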
deleted by creator
They don’t, though? The file formats are documented, and other office suite software can read and write them.
They do. Those formats are a mess, full of small details and non-standard implementations in MS Office, and Excel is most likely the worst case. The Office formats are all “open” until you realize that means shit, because Microsoft does what they want and the standards don’t cover everything. If you’re serious about office work and you need to collaborate with MS Office users, those “other office suites” won’t cut it. You’ll have compatibility issues.
I agree that they’re a mess, but there’s nothing in there that intentionally holds you hostage. The format is not binary – it’s readable to anyone who wants to read it.
non-standard implementations
Do you have an example?
That’s not really possible with Docker, TBH, and I say that as a diehard Podman advocate. Docker – the tooling that you install with your package manager – is open source. Sure, they have Windows and Mac desktop stuff that isn’t open, but it’s not like you’re self-hosting with that, right?
Plus there’s always Podman to switch to, which can be a (mostly) drop-in replacement, if you want something with a more trustworthy provenance.
Wait, what did Docker Hub do?
https://blog.alexellis.io/docker-is-deleting-open-source-images/
You shouldn’t be hostage to a platform. Before Docker we didn’t have these kinds of issues: APT repositories are easy to mirror, and they’re not run by for-profit companies.
Are there any FOSS Docker alternatives? It sounds like a good thing in practice, but yeah, they seem to have too much power atm.
Podman
Exactly my point.
@spaduf That’s really nice. I actually use Obsidian myself for note taking, and I can say that I will never go back to normal note-taking software. The internet and software need to change to be for the user, in case the software doesn’t exist in the future. We never know when a service will go away, but we shouldn’t lose everything.
Love Obsidian and linked notes in general. The potential utility there is insane, but it’s such a steep learning curve. I really think that in the not-too-distant future they’ll be teaching it in schools.
Proprietary software should NOT be taught in schools! We already have way too much of that
If you do not teach proprietary software in schools, you will hobble your students’ job-hunting potential. We should ALSO teach open source alternatives, and teach the idea that there are functional alternatives, but a student who has never used the major apps isn’t getting their resume even looked at by a human.
Imo students need to be taught how to use Windows; Linux would also be neat, but it’s not necessary for most.
They also need to be taught how to use an office suite; it can be LibreOffice or MS Office – it doesn’t really matter which, but they have to be able to use it.
Definitely talking about linked notes. Obsidian is far from the first or only player in the space. Logseq is out there for the FOSS diehards. I actually very much prefer the Logseq paradigm but struggled with performance issues on my machines.
I really hope so; it should be taught, because for me it was a slow learning process. First I started using Obsidian, manually moving all my notes over from Evernote, Notion, and others. After that I moved to Logseq, which is like Obsidian but open source. And the last step was using an open-source syncing tool (Syncthing) to keep my notes synced locally on all my devices. I have now been self-hosting my notes for two years in a really easy way.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
Fewer Letters | More Letters
CGNAT | Carrier-Grade NAT
DNS | Domain Name Service/System
HTTP | Hypertext Transfer Protocol, the Web
IP | Internet Protocol
NAS | Network-Attached Storage
NAT | Network Address Translation
5 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.
[Thread #254 for this sub, first seen 31st Oct 2023, 08:40]
Good bot.
I use self-hosting for Home Assistant and Jellyfin (behind two DuckDNS domains). I started to add more (Joplin Server, Immich, Tiny Tiny RSS, etc.) and found my workload increasing a lot, so I’ve paused that effort for now. For Joplin, I use Syncthing with my Pi as a hub instead, which is much simpler.