Hey all, I’d love some more eyes on this problem I’ve been having.
Context:
- I’m behind a CGNAT.
- I have a domain
- I have a VPN with a dedicated IP
- My DNS records are pointed at that dedicated IP
- I have a TP-Link A8 router and a SURFboard DOCSIS 3.1 modem
- The router shows the Bonded light
- I’m running a server with Proxmox
- It works great locally
Goal(s):
- Use NextCloud/OwnCloud
- Ability to access NC/OC from outside the local network
- Use the domain name instead of the dedicated IP when accessing the page
Actions:
- Install a Debian 12 VM (or LXC depending upon attempt)
- Update package repositories
- Add user to sudoers file
- Install UFW
- Install VPN application
- Enable UFW
- Deny ALL inbound but 80 and 443 (rough UFW commands are sketched after this list)
- Install Docker Engine
- Enable VPN
- Install Cosmos Server
- Go through the initial setup
- Configure the domain (which points at the dedicated IP)
- Continue the rest of the initial setup
- Here my attempts just hang.
- I have tried this using NGINX Reverse Proxy
- I have tried this using Apache2 as a reverse proxy
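For reference, the UFW part was roughly this (from memory, so a sketch rather than my exact shell history; the SSH rule is just so I don’t lock myself out of the VM):
sudo apt update && sudo apt install -y ufw
sudo ufw default deny incoming     # drop everything inbound by default
sudo ufw default allow outgoing
sudo ufw allow ssh                 # keep management access
sudo ufw allow 80/tcp              # HTTP (Let’s Encrypt HTTP challenge / redirects)
sudo ufw allow 443/tcp             # HTTPS
sudo ufw enable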
Technical Information:
- Port scanning tools show the ports as open
- The SSL certificate application (Let’s Encrypt) hangs
I have also followed Nextcloud’s how-to at https://docs.nextcloud.com/server/latest/admin_manual/installation/source_installation.html, using the manual installation, and I can install it, but when I get to the Let’s Encrypt stage, I can never get it to complete. I’ve tried the AIO as well as the Docker image.
The issue is always with SSL/connecting from the outside. I can access it locally, but that doesn’t help me leave commercial clouds behind!
I’ve included my network diagram of what I *think* is going on
Thanks!
You can use Let’s Encrypt DNS authentication (the DNS-01 challenge) to get an SSL certificate without opening any ports. The idea is to add a record containing a string of text to your DNS to verify that you own the domain, which gets the certificate issued. Google that and there should be a guide for the OS you use.
Was going to suggest the same. A guy at work was trying to tell me we’d have to open ports eventually for an application behind a VPN. While he was telling me I was wrong, I added the record, and pulled certs. They should really lead with that IMHO
sudo certbot certonly --manual --preferred-challenges dns -d yourdomain.com
And it’s a TXT record that you need to add.
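Certbot prints the exact name and value to use; the record ends up looking something like this (yourdomain.com and the value are placeholders), and you can check it has propagated before you press Enter to continue:
_acme-challenge.yourdomain.com.  300  IN  TXT  "<value certbot prints>"
dig +short TXT _acme-challenge.yourdomain.com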
IIRC, getting the Let’s Encrypt certificate for NGINX Reverse Proxy the usual way requires direct inbound access to the web site on port 80 (the HTTP challenge) - you are behind CGNAT and stuffed…
Possibly have a look at Cloudflare Tunnel (cloudflared in Docker) - this gives you http/https access with certificates. I used these instructions and it took less than an hour to get up and running: https://www.crosstalksolutions.com/cloudflare-tunnel-easy-setup/ Note that my TTL on the domain was set low to speed up the transfer of name servers.
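If you go the Docker route from that guide, the container side boils down to roughly this (the token is the one the Cloudflare Zero Trust dashboard generates when you create the tunnel; the container name is just my choice):
# run the connector; it dials out to Cloudflare, so no inbound ports are needed
docker run -d --name cloudflared --restart unless-stopped \
  cloudflare/cloudflared:latest tunnel --no-autoupdate run --token <token-from-dashboard>
You then map a public hostname to the internal http://IP:port of the Nextcloud box in the dashboard, and Cloudflare serves the public certificate for you.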
This also lets me access the sites directly using the full DNS entry even though my router does not handle hairpin NAT - no need for a local DNS server anymore.
Note the instructions above are slightly out of date with respect to the screen layouts, but in principle they work fine.
There is a small security concern - Cloudflare can intercept all traffic (even to/from https sites) internally - that does not worry me, but your use case (or principles) may differ :-)
I am not a Nextcloud wizard, but I have been successful using acme.sh in different contexts, specifically using “DNS mode” to prove I have control of a domain name without inbound IP access.
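In manual DNS mode it looks roughly like this (yourdomain.com is a placeholder; the long flag is acme.sh’s way of making you acknowledge that manual mode can’t auto-renew):
acme.sh --issue --dns -d yourdomain.com --yes-I-know-dns-manual-mode-enough-go-ahead-please
# it prints a TXT record to add; once that record is live in your DNS, finish the issue with:
acme.sh --renew -d yourdomain.com --yes-I-know-dns-manual-mode-enough-go-ahead-please
If your DNS provider has an API that acme.sh supports, the --dns dns_xx plugins add the record for you so renewals are hands-off.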
Does any inbound IP traffic work? I’d start by making sure that the port forwarding is working correctly with plaintext traffic like HTTP/port 80 and then look at encryption.
You also might need to use alternate ports if your ISP doesn’t want you running servers, which is probably the case if you’re behind CGNAT.
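A quick sanity check, as a sketch (yourdomain.com is a placeholder; run the listener on the server and test from outside your LAN, e.g. a phone on cellular data):
sudo python3 -m http.server 80     # throwaway plaintext listener on the server
curl -v http://yourdomain.com/     # from outside the LAN
If the ISP blocks 80, repeat on a high port like 8080 and add :8080 to the curl URL.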