• 4 Posts
  • 12 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Yes, XML is different from JSON and YAML, but it’s not particularly easier or harder to read/edit by hand than JSON or YAML are (IMO they are all a pain, each in its own way).

    If you want to look at it from the programmer’s side (which is not what OP was talking about)… marshalling/unmarshalling has been a solved problem for at least 20yrs now :) just have a library do it for you (do you map json/yaml properties to your objects manually?).

    You don’t need to worry about attributes vs child elements: <person name="jack" /> and <person><name>jack</name></person> will work the same (ok, this may depend on what language/library you pick - the lib I used back in the day worked either way) - see the sketch at the end of this comment.

    If anything, the issue with XML is all the unnecessarily complicated stuff they added to its “core” (e.g. CDATA, namespaces, non-standalone documents, …) and all the unnecessarily complicated technologies/standards they developed around XML (from XInclude to SOAP and many others)… but just ignore that BS (like the rest of the world does) and you’ll mostly be fine :)
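    Here’s a minimal Python sketch of that point (xml.etree is just the stdlib parser, not a full unmarshalling library, but it shows that attribute vs child element is a non-issue):

    ```python
    import xml.etree.ElementTree as ET

    def person_name(xml_text: str):
        """Return the person's name, whether it comes as an attribute or a child element."""
        person = ET.fromstring(xml_text)
        # attribute form: <person name="jack" />
        name = person.get("name")
        if name is None:
            # child-element form: <person><name>jack</name></person>
            child = person.find("name")
            name = child.text if child is not None else None
        return name

    print(person_name('<person name="jack" />'))              # jack
    print(person_name('<person><name>jack</name></person>'))  # jack
    ```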




  • Best of luck to you!

    I’m trying to understand Git, but it’s a giant conceptual leap.

    Git is not that different from svn (I mean, the biggest hurdle is going from a shared folder to any version control system)… I’d say the main difference is that branches live in a different namespace than files (i.e. you don’t have trunk/src/whatever but just src/whatever in the main branch). On top of that, commit and push are two different things (and the same goes for fetch and checkout), and merges are way easier than in svn (where you had to merge stuff manually).

    If you create a repo locally and clone it twice in two different directories, you can easily simulate what would happen when you and a coworker collaborate via a centralized repo (say, GitHub) - do a few experiments and you’ll see it’s not as complicated as it seems. I’d recommend using the CLI instead of some GUI client: it’s way easier to figure things out when you don’t also have to work out what is a git concept and what is the GUI trying to help.
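    If it helps, here’s a rough sketch of that experiment as a Python script (it just shells out to git, so you could type the same commands by hand; the paths, branch name and identities are made up for the example):

    ```python
    import subprocess
    from pathlib import Path

    def git(*args, cwd=None):
        """Run a git command, failing loudly if it errors."""
        subprocess.run(["git", *args], cwd=cwd, check=True)

    base = Path("git-playground")
    base.mkdir(exist_ok=True)

    # A bare repo plays the role of the centralized repo (what GitHub hosts for you).
    git("init", "--bare", str(base / "central.git"))

    # "You" and a "coworker" each get your own clone.
    you, coworker = base / "you", base / "coworker"
    git("clone", str(base / "central.git"), str(you))
    git("clone", str(base / "central.git"), str(coworker))

    for clone in (you, coworker):
        # throwaway identities, just so commits work without touching your global config
        git("config", "user.name", "Someone", cwd=clone)
        git("config", "user.email", "someone@example.com", cwd=clone)

    # You commit locally (nothing has left your clone yet), then push to the central repo.
    git("checkout", "-b", "main", cwd=you)
    (you / "notes.txt").write_text("hello from me\n")
    git("add", "notes.txt", cwd=you)
    git("commit", "-m", "add notes", cwd=you)
    git("push", "origin", "main", cwd=you)

    # The coworker sees nothing until they pull.
    git("pull", "origin", "main", cwd=coworker)
    print((coworker / "notes.txt").read_text())
    ```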



  • why is your network like this?

    Well, at the moment my network is actually flat :)

    This is an experiment I’m doing because I wanted to have all the management stuff on a different subnet (e.g. the adguard dns is on the “regular” subnet everyone uses, but its web interface is on the special subnet only select devices can talk to).

    Of course (like with most stuff in my homelab), it’s not like I really have a super-compelling security reason to do that, it’s mostly that I wondered “what if?” :D

    Oh, the ping option you are referring to is -I (upper case), and it takes either an interface name or an IP. I did try giving a .10/24 IP to the PC and the results were consistent with scenario 1 (pings where source and destination are on the same subnet work, pings across subnets don’t), so I didn’t mention that in the OP.
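    A rough sketch of that kind of test, in case anyone wants to reproduce it (the addresses are made up, and it assumes the PC has an IP on both subnets like mine did):

    ```python
    import subprocess

    # Hypothetical addresses: 192.168.1.x is the "regular" subnet, 192.168.10.x the management one.
    tests = [
        ("192.168.1.5", "192.168.1.1"),    # same subnet: works
        ("192.168.10.5", "192.168.10.1"),  # same subnet: works
        ("192.168.1.5", "192.168.10.1"),   # across subnets: the case that fails for me
    ]

    for source, target in tests:
        # -I takes an interface name or a source IP, -c 3 sends three probes
        result = subprocess.run(["ping", "-I", source, "-c", "3", target])
        print(source, "->", target, "ok" if result.returncode == 0 else "failed")
    ```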


  • I don’t think I quite explained the situation well enough: my server only has 1 ethernet port (same as my PC), otherwise I wouldn’t have bothered with vlans (well, I would still have bothered, since my house still only has one “backbone” cable running through it, but I would have configured them on the switches only).

    Anyway… a few of the things you say/imply go against my understanding of networking, so one of us had better go back and RTFM as you suggest :) (just kidding - most probably I just don’t understand what you mean)







  • If going the route of a backup solution, is it feasible to install OpenWRT on all of my devices, with the expectation that I can do some sort of automated backups of all settings and configurations, and restore in case of a router dying?

    My two cents: use a “full” computer as your router (with either something like OPNsense or any “regular” linux distro if you don’t need the GUI) and OpenWRT on your access points.

    Unless you use the GUI and backup/restore the configuration (as you would with proprietary firmware), OpenWRT is frankly a pain to configure and deploy - see the sketch at the end of this comment for one way to automate those backups. At the moment I’m building custom images for all my devices, but (next time™) I’m gonna ditch all that, get an x86 router and just manually manage OpenWRT on my wifi APs (I only have two and they both have the same relatively straightforward config).

    It’s a pain that I know can be solved with buying dedicated access points (…right?)

    Routers and access points are just computers with network interfaces (there may be layer-2-only APs, but honestly I’ve never heard of any)… most probably your issue is that the firmware of your “routers as access points” doesn’t want to be configured as a dumb AP.
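    Re the automated backups from the quote above: OpenWRT can dump its whole configuration with sysupgrade -b, so one low-tech option is a script that pulls that archive from every device over SSH. A rough sketch (the hostnames are made up, and it assumes paramiko plus key-based SSH access as root):

    ```python
    import datetime
    import pathlib

    import paramiko  # pip install paramiko

    DEVICES = ["ap-livingroom.lan", "ap-upstairs.lan"]  # hypothetical hostnames
    BACKUP_DIR = pathlib.Path("openwrt-backups")
    BACKUP_DIR.mkdir(exist_ok=True)

    stamp = datetime.date.today().isoformat()

    for host in DEVICES:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username="root")

        # sysupgrade -b writes a tarball of the tracked config files (/etc/config etc.)
        remote_path = "/tmp/backup.tar.gz"
        _, stdout, _ = client.exec_command(f"sysupgrade -b {remote_path}")
        stdout.channel.recv_exit_status()  # wait for the command to finish

        # copy it locally, one file per device per day
        sftp = client.open_sftp()
        sftp.get(remote_path, str(BACKUP_DIR / f"{host}-{stamp}.tar.gz"))
        sftp.close()
        client.close()
    ```

    Restoring is the reverse (copy the tarball back and run sysupgrade -r), though that only covers the config, not the firmware image itself.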



  • With the very limited number of drives one may use at home, just get the cheapest ones (*), use RAID and assume some drive may fail.

    (*) whose performance meets your needs, and from reputable enough sources

    You can look at the Backblaze stats if you like stats, but if you only have ten drives a 3% failure rate is effectively the same as 1% or 0.5% (they all just mean “use RAID and assume some drive may fail”).

    Also, IDK how good a reliability predictor the manufacturer would be (as in every sector, reliability varies from model to model), plus you would basically go by price anyway, even if you needed enough drives for those stats to be meaningful (wouldn’t Backblaze buy 100% from one manufacturer otherwise?)
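    To put rough numbers on that (assuming independent failures, which real drives only approximate):

    ```python
    # Chance that at least one of n drives fails within a year,
    # given a per-drive annualized failure rate (AFR).
    def any_failure(afr: float, n: int = 10) -> float:
        return 1 - (1 - afr) ** n

    for afr in (0.005, 0.01, 0.03):
        print(f"AFR {afr:.1%}: ~{any_failure(afr):.0%} chance of losing a drive this year")
    ```

    With ten drives that works out to roughly 5%, 10% and 26% - different on paper, but they all land in “use RAID and assume some drive may fail” territory.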