— GPG Proofs —
This is an OpenPGP proof that connects my OpenPGP key to this Lemmy account. For details check out https://keyoxide.org/guides/openpgp-proofs
[ Verifying my OpenPGP key: openpgp4fpr:27265882624f80fe7deb8b2bca75b6ec61a21f8f ]
Okay, then I’m thinking your router/NAT may be causing the problem. Typically, your ISP won’t block DNS for subdomains; they may outright block inbound connections on certain ports, but if you can get through via the IP, you should be good to go.
An easy way to check is to visit a site like this and check for port 443: https://www.yougetsignal.com/tools/open-ports/. You don’t need to be on the server that’s hosting your portfolio - anything on the same network as your portfolio (something behind your external router) will do.
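If you’d rather check from a terminal, something like this works too - run it from a phone hotspot or another outside network, and substitute your actual public IP:

curl -vk --connect-timeout 5 https://<your-public-ip>/

A timeout there means port 443 isn’t reachable from outside; any response (even a certificate warning) means the port forward is working.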
Just to make sure - when you visit https://fqdn/, it does not connect (probably with the ERR_CONNECTION_TIMED_OUT that you mentioned below). What happens if you, on the hotspot, try browsing to https://206.x.x.x? When you are on the same network as the portfolio, can you reach https://[internal ip]?
What I’m leaning towards is a router/firewall that may be causing some issues. To help with troubleshooting, does your website server have any local firewalls (for Ubuntu that would typically be ufw, but it could be iptables or firewalld)?
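If you’re not sure which one is in play, these are the usual read-only status checks:

# Ubuntu’s default frontend
sudo ufw status verbose
# raw netfilter rules
sudo iptables -L -n -v
# firewalld, if installed
sudo firewall-cmd --list-all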
Try this command from a terminal on the system from which you’re attempting to connect:
nslookup <yourfqdn>
It should come back with something like this:
~ ❯ nslookup stronk.bond
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: stronk.bond
Address: 172.67.174.80
If it says something like “can’t find”, that means your DNS isn’t configured appropriately. Does your IP address start with 192.168, 10., or 172.16 through 172.31? Those are private IP address ranges (addresses which aren’t accessible from the internet).
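A quick way to check both from a terminal (api.ipify.org is just one of many “what’s my IP” services):

# addresses on your local machine
ip -4 addr show
# the address the rest of the internet sees
curl https://api.ipify.org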
Oh! And where is everything - is your workstation/laptop on the same network as your portfolio? Is the portfolio on a different network? That could affect things as well.
What does your nginx config look like for SSL? It should specify a certificate and key file - that certificate’s subject needs to match your fully qualified domain name (FQDN). Certificates can also have subject alternative names (SANs) for other names and even IP addresses.
For instance, you could have a single certificate for foo.bar with a SAN for just foo and an IP SAN for 192.168.1.30.
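As a sketch, the relevant nginx bits would look something like this (the paths here are made up - adjust to wherever your cert and key actually live):

server {
    listen 443 ssl;
    server_name foo.bar;
    # the subject/SAN of this certificate must cover foo.bar
    ssl_certificate     /etc/nginx/ssl/foo.bar.crt;
    ssl_certificate_key /etc/nginx/ssl/foo.bar.key;
    root /var/www/portfolio;
}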
Certificates also need to be signed by a certificate authority (CA), and in order for your browser to visit https://foo.bar/ without a warning, your browser must trust that CA.
If you used a self-signed cert, this is most likely the problem you’re running into.
It’s important to know that your communication is still encrypted because of SSL, but since your browser doesn’t trust the CA (or the subject doesn’t match the FQDN), the browser will say it’s not secure.
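An easy way to see exactly what the server is presenting (works with OpenSSL 1.1.1 or newer):

openssl s_client -connect foo.bar:443 -servername foo.bar </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -ext subjectAltName

If the issuer is your own name rather than a known CA, it’s self-signed; if the subject/SAN doesn’t list your FQDN, that’s your mismatch.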
We were visiting for about a week and I think it took three separate days, about 20 minutes each day before she felt comfortable doing the VPN stuff herself.
It was definitely painful, but if you’re patient, it’s doable.
Good luck with whichever option you choose!
Speaking as someone who has recently taken on far-remote (about a 22-hour drive away) support for a MIL, the best thing you could do is set up a VPN.
For me, I’m still on Plex with a very old lifetime account with my MIL using a dedicated user account - that access is over the Internet. The VPN is to provide access to Overseerr so that she can do things like request specific movies/TV shows without having to email/call.
It’s not perfect - one day I woke up to 26 seasons of “Into the Country”, but it works fairly well.
I sat down with her one day while visiting about a year or so ago and walked her through connecting to the VPN, then getting to the hosted site, then disconnecting from the VPN - basically running drills and making her take notes until she felt she could do it by herself.
I use netbox too - and if you’re careful about it, you can actually use terraform to create the netbox details. I use one manifest file to handle deployment to Proxmox, set up DNS in PowerDNS, and create the relevant netbox entries.
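As a rough sketch of what one resource of each type looks like - these are community providers (Telmate/proxmox, pan-net/powerdns, e-breuninger/netbox), so treat the exact attribute names as assumptions and check the registry docs:

resource "proxmox_vm_qemu" "web" {
  name        = "web01"
  target_node = "pve1"
  clone       = "ubuntu-template"
}

resource "powerdns_record" "web" {
  zone    = "lab.example.com."
  name    = "web01.lab.example.com."
  type    = "A"
  ttl     = 300
  records = ["192.168.1.50"]
}

resource "netbox_ip_address" "web" {
  ip_address = "192.168.1.50/24"
  status     = "active"
}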
I bought a car that comes with a “free” 300k/30 year warranty, but only if I do oil changes every 4k miles or 3 months. Maybe this guy has something similar?
For me, I may try and keep it up for a bit, but driving to one particular dealer every 3 months just to maintain a ridiculous warranty that will probably never actually pay out isn’t worth it.
Also of note - if you’re using docker (and Linux), make sure the user ID/group ID match across everything to eliminate any permissions issues.
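The quick check is to compare the IDs on the host against what the container runs as (the user, image, and path here are placeholders):

# find the uid/gid that own the files on the host
id mediauser          # uid=1000(mediauser) gid=1000(mediauser) ...
# then run the container as that same uid:gid
docker run -d --user 1000:1000 -v /srv/media:/media some/image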
Not really, but I can give you my reasons for doing so. Know that you’ll need some shared storage (NFS, CIFS, etc) to take full advantage of the cluster.
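For example, NFS storage can be added cluster-wide from any node with pvesm (server and export path here are placeholders):

pvesm add nfs vmdata --server 192.168.1.5 --export /tank/vmdata --content images,rootdir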
I hope that helps give some reasons for doing a cluster, and apologies for not replying immediately. I’m happy to share more about my homelab/answer other questions about my setup.
Those are beasts! My homelab has three of them in a Proxmox cluster. I love that for not a ton of extra money you can throw in a PCIe expansion slot and the power consumption for all three is less than my second hand Dell Tower server.
Sorry, I wasn’t clear - I use PowerDNS so that I can more easily deploy services that can be resolved by my internal networks (deployed via Kubernetes or Terraform). In my case, the secondary PowerDNS server does regular zone transfers from the primary in order to ensure it has a copy of all A, PTR, CNAME, etc records.
But PowerDNS (and all DNS servers really), can either be authoritative resolvers or recursors. In my case, the PDNS servers are authoritative for my homelab zone/domain and they perform recursive lookups (with caching) for non-authoritative domains like google.com, infosec.pub, etc. By pointing my PDNS servers to PiHole for recursive lookups, I ensure that I have ad blocking while still allowing for my automation to handle the homelab records.
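For anyone wiring up something similar, the recursor side boils down to two settings (the IPs are placeholders for the authoritative PDNS server and the PiHole):

# recursor.conf
# homelab zone goes to the local authoritative server
forward-zones=lab.example.com=192.168.1.10
# everything else goes through PiHole for ad blocking
forward-zones-recurse=.=192.168.1.20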
This is overkill.
I have a dedicated Raspberry Pi for PiHole, then two VMs running PowerDNS in master/slave mode. The PDNS servers use the PiHole as their primary recursive lookup, followed by some other internet privacy DNS server that I can’t recall right now.
If I need to do maintenance on the PiHole, PowerDNS can fall back to the internet DNS server. If I need to do updates on the PowerDNS cluster, I can do them one at a time to reduce the outage window.
EDIT: I should have phrased the first sentence as “My setup is overkill” rather than “This is overkill” - the OP is asking a very valid question and the passive phrasing of my post’s first sentence could be taken multiple ways.
I put my Plex media server to work doing Ollama - it has a GPU for transcoding that’s not awful for simple LLMs.
Hosting on the public web isn’t too crazy - start with port forwarding on standard ports (443 for SSL/web) and add in a dynamic DNS address.
More than likely your residential ISP doesn’t change your IP that often, but dynamic DNS solves that problem before it hits. I use Cloudflare, but mostly because I’m lazy and haven’t moved off of them after their most recent sketchy behavior.
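If you want to skip dedicated DDNS clients entirely, the update is a single call to Cloudflare’s standard v4 API that you can cron (zone/record IDs, token, and hostname are placeholders):

IP=$(curl -s https://api.ipify.org)
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$IP\",\"ttl\":300}"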
Sure! I mostly followed this random youtuber’s video for getting Wyoming protocols offloaded (Whisper/Piper), but he didn’t get Ollama to use his GPU: https://youtu.be/XvbVePuP7NY.
For getting the Nvidia/Docker passthrough, I used this guide: https://www.bittenbypython.com/en/posts/install_ollama_openwebui_ubuntu_nvidia/.
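For reference, once the NVIDIA Container Toolkit is set up, the Ollama container itself is the one-liner from Ollama’s own Docker docs:

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama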
It’s working quite well at this point!
I spun up a new Plex server with a decent GPU - and decided to try offloading Home Assistant’s Preview Voice Assistant TTS/STT to it. That’s all working as of yesterday, including an Ollama LLM for processing.
Last on my list is figuring out how to get Home Assistant to help me find my phone.
This is the way. Layer 3 separation for services you wish to access outside of the home network and the rest of your stuff, with a VPN endpoint exposed for remote access.
It may be overkill, but I have several VLANs for specific traffic.
There are two new additions: an ext-vpn VLAN and an egress-vpn VLAN. I spun up a VM that’s dual-homed, running its own WireGuard/OpenVPN client on the egress side and serving DHCP on the ext-vpn side. The latter has its own wireless SSID so that anyone who connects to it is automatically on a VPN into a non-US country.
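The client side of that dual-homed VM is just a normal WireGuard config (keys and endpoint are placeholders for whatever your VPN provider gives you):

[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32

[Peer]
PublicKey = <provider-public-key>
Endpoint = vpn.example.net:51820
# route everything from this VM out the tunnel
AllowedIPs = 0.0.0.0/0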
I would get multiple drives and do RAID. Here’s a helpful calculator to figure out drive quantity, size, and configuration. The reason to do RAID is redundancy: hard drives will fail (even NAS-branded drives), and you do not want your photos, media, etc. to be lost when that happens. I personally do not go with anything below RAID5 (and for super sensitive things, I’ll even go RAID6, despite the hit on overall capacity). If the OptiPlex has capacity for multiple drives, I strongly recommend you go this route.
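To make the capacity trade-off concrete: with four 4 TB drives, RAID5 gives (4 - 1) x 4 = 12 TB usable and survives one drive failure, while RAID6 gives (4 - 2) x 4 = 8 TB usable and survives two.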