Hi,

I’ve written some microservices and am looking to deploy them. Since I am not comfortable with cloud pricing (too expensive right now), I am looking into ways of operating a very small server setup.

Let’s say I have 5 services: one is the database, 3 should run as jobs every minute, and one service should be scaled based on load.

I am aware that these are basically the tasks that k8s or Nomad would be very good at. The issue with them: while I am going to update the services etc., I do not need a large cluster. I am quite sure that I can start with one pod/node and maybe add a second one if needed.

For this setup, k8s (or other flavors) is just overkill (learning and maintaining it). Nomad from HashiCorp looks totally cool for this, but it is recommended to have 3 servers with hefty specs each (doing quorum, leader/follower, replication, etc.), which is overkill when I plan to have 1 worker node in total :D

Nomad has a `-dev` option for running server and agent on the same node, but in production? I don’t know. The Nomad server also uses its IP and other things for identity; when those change, the server instance is basically dead and loses its data. That’s why a quorum of 3 servers is the recommended minimal prod setup.

Docker Compose is not ideal, because I would like to update single containers without tearing everything down.

Also, cron for my periodic tasks is not part of Docker or Docker Swarm, except via plugins, workarounds, or configuring a container that runs `cron` and then meddling with `flock`, etc.

I am aware that it does not actually sound like I need an orchestrator, but monitoring all the jobs and restarting containers manually does not sound optimal to me, and maybe there is something out there that can help.

Since the tech community knows more than me, I would love to get some other opinions or points of view.

Thanks!

  • from-nibly@alien.top · 8 months ago

    If you are just starting and messing around, you can go a long way with a single-node k3s cluster (I prefer NixOS since it makes managing and replicating things REALLY repeatable, but it’s its own rabbit hole).

    BUT if you need several 9s, you’re going to need even more than just 1 server with k3s on it. You’re going to want redundancy, monitoring, and processes:

    1. 3 nodes while only using capacity of 2
    2. Shared volume infra like ceph or a nas
    3. Load balancing firewall like opnsense
    4. Multiplexed internet
    5. UPS for power issues
    6. Onsite backups + cloud backups
    7. kube-prometheus-stack (or its contents)
    8. KEDA (for auto scaling)

    (Not a day 0 recommendation)
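    A note on item 8: before reaching for KEDA, the built-in HorizontalPodAutoscaler already covers simple CPU-based scaling (it needs the metrics-server, which k3s ships by default). A minimal sketch; `api` is a placeholder deployment name:

    ```shell
    # Scale the (hypothetical) "api" deployment between 1 and 3 replicas,
    # targeting 80% average CPU utilization.
    kubectl autoscale deployment api --cpu-percent=80 --min=1 --max=3
    ```

    KEDA becomes interesting once you want to scale on things other than CPU/memory, like queue length.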

    The reason Kubernetes is complex (and hard to learn) is because it kinda forces you to consider all kinds of reliability and scaling issues that you may not need for a while.

    If you only have one machine, it does feel like a bit much to NEED an autoscaler.

    You can create a vanilla cron job that runs a `docker run` command so you don’t have to “install” anything on your node.
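    A sketch of what that crontab entry could look like; the image name, lock path, and log path are all placeholders, and `flock -n` covers the overlap concern without putting cron inside the container:

    ```shell
    # Host crontab entry (edit with `crontab -e`): run the job image once a minute.
    # flock -n skips this run if the previous one is still holding the lock,
    # so overlapping runs never pile up.
    * * * * * flock -n /tmp/report-job.lock docker run --rm my-registry/report-job:latest >> /var/log/report-job.log 2>&1
    ```

    Note that cron runs with a minimal PATH, so you may need the full path to `docker` (often `/usr/bin/docker`).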

    You can use multiple docker compose files to manage things independently, so you can upgrade one service without affecting the others.
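    For example (file and service names are placeholders), each compose file becomes its own independently managed stack:

    ```shell
    # One compose file per concern; each can be started and updated on its own.
    docker compose -f db.compose.yml up -d     # database stack
    docker compose -f app.compose.yml up -d    # application services

    # Rebuild and restart just the "api" service, leaving everything else running:
    docker compose -f app.compose.yml up -d --no-deps --build api
    ```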

    I know you say you want autoscaling, but what are you autoscaling against? Is something else scaling up at different intervals? A thing to question is whether your extra instances ever need to scale down. Autoscaling is a cost-saving measure, and if you have static infrastructure with no other load, then why ever scale down? Do your cron jobs take so many resources that you have to scale down your microservices? If so, you’ve got way more to consider than just plain autoscaling, and maybe you need to scale your infrastructure, in which case you’re back to questioning whether or not you need to scale down.

    I’m questioning your requirements only because, if you are trying to just “get something done”, k8s and Nomad are going to be a distraction since you aren’t already familiar with them. If learning k8s or Nomad is also part of your goal, then awesome, I would definitely suggest k3s.

    • HosonZes@alien.top (OP) · 8 months ago

      Sure, and “if you have static infrastructure with no other load then why ever scale down? Do your cron jobs take too many resources and you have to scale down your micro services?”

      Need to think about this.

      I am sure that several Dockerfiles or Compose projects will do some of the jobs.

      Main reason I am hesitant: I am pretty sure services will fail and need to be restarted. Docker can do this. I also know I would have to meddle with cron jobs, and I can build the cron inside the Docker container that runs the service. But this is working around the lack of a “job” feature in Docker, and it also feels wrong to mix the service and the infrastructure together (self-imposed pain :D)

      One of the continuous services is one whose performance I do not know in advance. It processes many concurrent jobs, which is a good thing, but I suspect that I will still have to replicate this service on the same machine to utilize resources better, add a small load balancer for it, etc. One service instance is concurrent but still limited to a certain degree. This part will need some monitoring but also tailoring, and for this I would prefer a proper solution, albeit one with a reduced learning curve.

      Monitoring is a thing, too: I would love to have a dashboard to see my tiny services in action, and vanilla Docker does not provide one. There will be error streams I am going to log to an observability platform, but I still feel that I need more than Docker provides out of the box.

      Also: I love new tech, but I do not expect that my tiny one-server VPS SaaS will instantly blow up such that I will have to autoscale it onto multiple federated cloud providers or a fleet of VPS machines. While I do not want to waste time learning a thing I cannot properly handle alone, I would also like to learn a technology that helps me get started right now and provides a path for later growth. That’s why I hesitate to spend time on single Docker containers and low-level fiddling (the cron part) inside them, because I did not find evidence that this approach provides that potential (even with Docker Swarm).

      • from-nibly@alien.top · 8 months ago

        To clarify, I was talking about using the host’s native cron, not the container’s, and simply executing the container at the given time.

        But if you want to learn something, then go with k3s. It’s incredibly simple to get going.
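        To give a feel for how small the footprint is: the whole single-node install is one command (from the k3s quick-start), and Kubernetes has CronJobs built in, which covers the “run every minute” requirement directly. The job name and image below are placeholders:

        ```shell
        # Install single-node k3s (server and agent in one process):
        curl -sfL https://get.k3s.io | sh -

        # A built-in CronJob replaces the cron-in-a-container workaround.
        # concurrencyPolicy: Forbid skips a run if the previous one is still going.
        cat <<'EOF' | sudo k3s kubectl apply -f -
        apiVersion: batch/v1
        kind: CronJob
        metadata:
          name: report-job                      # placeholder name
        spec:
          schedule: "* * * * *"                 # every minute
          concurrencyPolicy: Forbid
          jobTemplate:
            spec:
              template:
                spec:
                  containers:
                  - name: report-job
                    image: my-registry/report-job:latest   # placeholder image
                  restartPolicy: OnFailure
        EOF
        ```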