Is there any actual research? All I see are TikTok videos and Reddit comments.
Love another iOS option.
This seems like the right advice. If the container is on the same host as the data, there’s no need to access the data via Samba. In fact, the container likely doesn’t even include the Samba client needed for that kind of connectivity.
Assuming TrueNAS allows the containers to see local data, a bind mount is the way to go.
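For example, a minimal sketch with plain docker run (the dataset path /mnt/tank/appdata and the image name myapp:latest are placeholders; adjust for your pool and app):

    # bind-mount the host dataset directly into the container
    docker run -d \
      --name myapp \
      --mount type=bind,source=/mnt/tank/appdata,target=/data \
      myapp:latest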
This is good stuff. Has it been posted to the project’s GitHub (issue, discussion, etc.)?
I imagine this would be up to the application. What you’re describing would be seen by the OS as the device becoming unavailable. That won’t really affect the OS itself, but it could cause problems for drivers and/or applications that expect the device to be available. The effect could range from “hm, the GPU isn’t responding, oh well” to a kernel panic.
Red Hat (RHEL) is not based on any other distro, like Ubuntu is with Debian. RHEL is downstream of Fedora, meaning that RHEL developers can work on code that affects Fedora AND RHEL. This is not really true of Debian and Ubuntu. They are distinct projects with different goals. In many ways, Ubuntu is beholden to what Debian does. This isn’t usually a problem because Debian is very conservative in its approach to software. Ubuntu doesn’t usually have to worry about Debian screwing with something Ubuntu is trying to do.
Which is all to say: there is no other distribution you can officially equate to RHEL the way you can with Debian & Ubuntu.
Tailscale is an overlay network. It will use whatever networking is available. If only one of those NICs is a gateway, then that’s what will be used to reach remote Tailnet resources.
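If you want to confirm which path is actually in use, something like this works (the peer name is a placeholder):

    # shows whether the connection to a peer is direct or relayed via DERP
    tailscale ping my-remote-node
    # lists peers and their current connection state
    tailscale status
    # the underlying WireGuard/UDP packets follow the normal routing table
    ip route show default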
Leaving this post here since it’s an interesting project to keep an eye on, but the conversation isn’t constructive. So, locking the comments.
You’ll need to be far more descriptive than “I can’t get it to work.” I can almost guarantee you that Fedora is not the problem.
I’m going to allow this post, despite its age and likely obsolescence. I encourage community members to use up and down votes to judge its value to the community.
TL;DR: No sleep = tired = low energy = eating for energy = overeating
I feel this, too. I’m averaging 4-5 hours/night. I know that I eat at night to get energy to stay awake and carry on.
This community is not unmoderated, nor is it micromanaged. As has been shared in these comments, some members of this community appreciate these new release postings. If you don’t, ignore/hide it and/or downvote it and move on.
Looks like work you’d get out of 98% of pros. I’m fine with my mistakes and imperfections. It’s the ones I pay others for that piss me off.
Check the ZFS pool status. You could be seeing lots of errors that ZFS is correcting.
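A minimal sketch, assuming your pool is named tank:

    # pool health plus per-device read/write/checksum error counters
    zpool status -v tank
    # force ZFS to read and verify every block in the pool
    zpool scrub tank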
A quick and easy fix attempt would be to replace the HDD with an SSD. As others have said, the drive may just be failing. Replacing it with an SSD would not only get rid of the suspect hardware, but would be an upgrade to boot. You can clone the drive, or just start fresh with the backups you have.
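If you go the clone route on a possibly failing disk, GNU ddrescue is more forgiving than plain dd. A minimal sketch, assuming the old HDD is /dev/sda and the new SSD is /dev/sdb (double-check device names before running anything):

    # copy sda onto sdb, skipping/retrying bad sectors; the mapfile lets you resume
    ddrescue -f /dev/sda /dev/sdb rescue.map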
PO is always a dumbass, lazy SOB
It’s a stub and almost worthless.
Yeah, and it’s so comprehensive:

    yarn install
    yarn dev
My point stands.
This question is probably better suited for one of the Proxmox communities. But I’ll give it a try.
Regarding your concerns about new SSDs and old VM configs: why not upgrade to PVE8 on the existing hardware? This would seem to mitigate your concerns about PVE8 restoring VMs from a PVE7 system. Still, I wouldn’t expect it to be a problem either way.
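If you do end up going backup-and-restore, here’s a minimal sketch with Proxmox’s own tools (VM ID 100, the “local” storage name, and the dump filename are placeholders):

    # on the PVE7 host: back up VM 100 with the guest stopped
    vzdump 100 --storage local --mode stop
    # on the PVE8 host: restore the archive as VM 100
    qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100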
Not sure about your TrueNAS question. I wouldn’t expect any issues unless a PVE8 install brings with it a kernel or driver change that’s relevant to your hardware.
Finally, there are several config files that would be good to capture for backup. Proxmox itself doesn’t have a quick list, but this link has one that looks about right: https://www.hungred.com/how-to/list-of-proxmox-important-configuration-files-directory/
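For what it’s worth, a rough sketch of grabbing the usual suspects into a tarball on the PVE host (this list is my assumption, not an official one; adjust for your setup):

    # /etc/pve is the pmxcfs mount that holds VM/CT configs and cluster settings
    tar czf pve-config-backup.tar.gz \
      /etc/pve \
      /etc/network/interfaces \
      /etc/hosts \
      /etc/fstab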