Sysadmin and FOSS enthusiast. Self-hosting on Proxmox with a focus on privacy and digital sovereignty. Documenting my experiences with Linux, home labs, and the ongoing fight to keep Big Tech out of our hardware.

@unknownuniverse@unkn.uk

  • 3 Posts
  • 19 Comments
Joined 2 months ago
Cake day: March 31st, 2026

  • The home server is an old, low-powered mini PC running Debian. It acts as the bridge between the WireGuard tunnel and my local LAN.

    I’ve just finished migrating one of my AdGuard Home instances onto it today. Its role is now twofold:

    Routing: It has ip_forward enabled and a bit of NAT (iptables/nftables) so that traffic arriving from the VPN can actually “hop” onto the local network to reach my other VMs and containers.

    DNS: It provides ad-blocking for the tunnel. VPN clients point to this node’s internal WireGuard IP for DNS queries.

    Technically, it’s just another WireGuard peer, but with AllowedIPs configured to advertise my 192.168.x.x subnet back to the hub (VPS2). This is what allows VPS1 and my mobile devices to resolve and reach home services without a single open port on my router.
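
    As a sketch, the two ends might look like this (keys, interface names, and addresses below are placeholders, not my real values):

    ```ini
    # Home mini PC: /etc/wireguard/wg0.conf (illustrative)
    [Interface]
    Address = 10.8.0.3/24                      # this node's tunnel IP
    PrivateKey = <home-node-private-key>
    # enable forwarding and NAT so VPN traffic can hop onto the LAN (eth0 assumed)
    PostUp   = sysctl -w net.ipv4.ip_forward=1; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

    [Peer]
    # VPS2, the hub
    PublicKey = <vps2-public-key>
    Endpoint = <vps2-ip>:51820
    AllowedIPs = 10.8.0.0/24
    PersistentKeepalive = 25                   # keeps the tunnel alive behind NAT

    # Meanwhile on VPS2, the hub's [Peer] entry for this node is what
    # "advertises" the home subnet to the rest of the mesh:
    [Peer]
    PublicKey = <home-node-public-key>
    AllowedIPs = 10.8.0.3/32, 192.168.1.0/24   # tunnel IP + home LAN (example subnet)
    ```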


  • You’re right, and for a lot of people, one VPS is the sensible choice. I actually addressed this in the post:

    "VPS1 is my web-facing server. It handles the public side of things. VPS2 is the VPN hub. At first glance, that probably looks unnecessary. Strictly speaking, it is unnecessary. I could have crammed WireGuard onto VPS1 and called it done. But splitting the roles makes the whole thing cleaner.

    One machine serves public traffic. The other handles VPN duties. That means fewer networking compromises, fewer chances of Docker or firewall rules becoming annoying, and a clearer separation between the public-facing stack and the private tunnel. It also means I can change one side without poking the other with a stick and hoping nothing catches fire."



  • Exactly that: VPS2 handles the WireGuard port and has no domain pointing to it, so it’s basically hiding in plain sight. VPS1 holds the domain and handles the web traffic.

    I keep SSH open on both, but locked down (key-based auth + restricted to my IPs).

    Your idea of using the provider firewall (Ionos in my case) as a “mechanical” lock is a good one: block it at the edge and only open it when needed. I’ve thought about doing that, but I’m generally happy relying on a hardened SSH config and the provider’s KVM console if everything goes sideways.
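
    For reference, “locked down” here means roughly this kind of sshd_config (illustrative; the user name and IP range are placeholders):

    ```
    # /etc/ssh/sshd_config
    PasswordAuthentication no
    PermitRootLogin no
    PubkeyAuthentication yes
    # restrict logins to a known source range; CIDR matching in AllowUsers
    # needs a reasonably recent OpenSSH
    AllowUsers me@203.0.113.0/24
    ```

    Reload sshd after editing, and keep a second session open while you test so you don’t lock yourself out.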






  • Since they publish their client-side source code (https://mega.io/developers), anyone can verify that the encryption actually happens locally on your device before a single byte is uploaded.

    Unlike Google or Microsoft, where you just have to hope they aren’t scanning your files for ads or AI training (which they are!), Mega’s transparency means that if there were a backdoor in the client code, the FOSS community would likely have flagged it years ago; it gives independent researchers a chance to check the behaviour. Since an offsite backup is crucial, Mega is for me one of the better providers. Not saying they’re perfect, but good enough for now.


  • The two I use are Nextcloud and Mega. Nextcloud is my primary location, and I have a script that runs daily to replicate the Nextcloud data to Mega. I chose Mega because it has end-to-end encryption, so Mega cannot see your data. They also cannot recover your account if you forget your password. They have had issues/controversy in the past, but these days they are, in my eyes, a solid choice. I also make use of their S3-compatible bucket so that my Proxmox Backup Server can save offsite, so technically Nextcloud is included in that as well!
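
    The daily replication is essentially a one-way sync. I just said “a script” above, but with rclone (which has a Mega backend) it can be as simple as the following sketch; the paths, remote name, and schedule are illustrative, not my actual setup:

    ```shell
    #!/bin/sh
    # nightly one-way sync of the Nextcloud data directory to a Mega remote
    # ("mega-backup" here is a remote previously set up with `rclone config`)
    rclone sync /srv/nextcloud/data mega-backup:nextcloud \
        --log-file /var/log/nextcloud-mega-sync.log

    # run daily via cron, e.g.:
    # 0 2 * * * /usr/local/bin/nextcloud-to-mega.sh
    ```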


  • Which phone and messaging app are you using? I also don’t see how you view photos or files, and which camera app do you use?

    Obviously GrapheneOS is the best way to go for privacy, but if you do stick to OEM Android then make sure you’re using apps like the Fossify suite. I use their apps, with all contacts and calendars synced via DAVx5 to a self-hosted Nextcloud.

    What about KeePass, where is that data backed up?



  • Yes, Android is open source. But the thing is, Google’s clampdown on sideloading isn’t just about the OS code itself. It’s really about controlling the whole app ecosystem and making it harder for people to install apps outside of Google’s own channels.

    Sure, folks can fork Android and make their own versions — that’s been happening for years with projects like LineageOS. But the tricky part is keeping all the apps working smoothly without Google’s proprietary stuff like Play Services. Without that, a lot of apps just don’t behave right, and the user experience takes a hit.

    So basically, just having Android’s code open isn’t enough to keep it truly open and easy to use. The real control is in the ecosystem around it, and that’s what Google’s tightening grip is all about.


  • Thanks for the feedback. You’re right, it’s really just scanning for known extension IDs, not poking around your entire computer. Saying “computer scan” might sound a bit dramatic, but the privacy risk is still pretty serious given what info they can guess from those extensions.

    About the home lab and network side — I get that LinkedIn isn’t scanning your whole network or anything. What I meant is more about how you can block or filter those sneaky requests at the network level, like with DNS blocking or firewall rules, so they never even get sent out. It’s not a classic home lab threat, but if you’re running your own DNS or network filters, it’s a handy extra layer to keep things tighter.

    Sure, switching browsers or faking your user agent works too, but not everyone wants to give up Chromium or LinkedIn completely. That’s why I mentioned a few different ways to protect yourself.

    Appreciate the note on wording — I just wanted to show why this isn’t just some minor browser oddity and why it’s worth thinking about from a privacy and network defence angle.
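
    For the network-level filtering mentioned above, a custom rule in something like AdGuard Home or Pi-hole is enough (adblock syntax; the hostname below is a placeholder — swap in the endpoints you actually observe in your browser’s network tab):

    ```
    ! block the offending hostname at the DNS level,
    ! so the request never resolves and never leaves your network
    ||telemetry-endpoint.example.com^
    ```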