
  • Beware rdiff-backup. It certainly does turn rsync (not a backup program) into a backup program.

    However, I used rdiff-backup in the past and it can be a bit problematic. If I remember correctly, every “snapshot” you keep in rdiff-backup uses as many inodes as the thing you are backing up. (Because every “file” in the snapshot is either a file or a hard link to an identical version of that file in another snapshot.) So this can be a problem if you store many snapshots of many files.

    But it does make rsync a backup solution; a snapshot or a redundant copy is very useful, but on its own it’s not a backup.
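
    For reference, a minimal rdiff-backup run looks something like this (host and paths are placeholders, and newer releases also accept a subcommand-style syntax):

      rdiff-backup /home/me user@backuphost::/backups/home
      rdiff-backup --restore-as-of 7D user@backuphost::/backups/home/some/file /tmp/restored-file
      rdiff-backup --remove-older-than 1M user@backuphost::/backups/home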

    (OTOH, rsync is still wonderful for large transfers.)


  • I run mbsync/isync to keep a maildir copy of my email (hosted by someone else).

    You can run it periodically with cron or systemd timers; it connects to an IMAP server and downloads all emails to a local directory (in Maildir format) for backup. You can also use this to migrate to another IMAP server.
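
    To give an idea of the setup (account name, host, and paths below are made up; check your isync version’s man page for the exact option names):

      # ~/.mbsyncrc (sketch)
      IMAPAccount personal
      Host imap.example.com
      User me@example.com
      PassCmd "pass show mail/personal"
      SSLType IMAPS

      IMAPStore personal-remote
      Account personal

      MaildirStore personal-local
      Path ~/Mail/personal/
      Inbox ~/Mail/personal/INBOX
      SubFolders Verbatim

      Channel personal
      Far :personal-remote:
      Near :personal-local:
      Patterns *
      Create Near
      SyncState *

    Then a cron entry like 0 * * * * mbsync -a keeps the local copy fresh.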

    If the webmail sucks, I wouldn’t run my own; I would consider using Thunderbird instead. It’s a desktop/Android application that syncs mail to your desktop/phone, so most of the time it’s working with local storage, which makes it much faster than most webmails.






  • To be fair, if you want to sync your work across two machines, Git is not ideal, because you must always remember to push. If you don’t push before switching to the other machine, you’re out of luck.

    Syncthing has no such problem, because it syncs in real time.

    However, it’s true that you cannot combine Syncthing and Git. There are solutions like https://github.com/tkellogg/dura, but I have not tested it.

    Options are somewhat lacking in this space. For some people, it might be nicer to run an online IDE.

    To add something, I second the “just use Git over ssh without installing any additional server” approach (see the sketch below). A variation is to use something like Gitolite on top of raw Git if you need multiple users and permissions; it’s still lighter than running Forgejo.
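
    For anyone who hasn’t done it, the “no extra server” setup is roughly this (host and paths are placeholders):

      ssh myhost 'git init --bare ~/repos/project.git'
      git remote add origin myhost:repos/project.git
      git push -u origin main
      # and on the other machine:
      git clone myhost:repos/project.git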




  • Yep, I do that on Debian hosts; EL (RHEL/Rocky/etc.) has a similar feature.

    However, you need to keep an eye out for updates that require a reboot. I use my own Nagios agent that (among other things) warns me when hosts require a reboot (both apt and dnf make this easy to check).
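
    If you want to roll that check yourself, it boils down to something like:

      # Debian/Ubuntu: apt-based updates typically drop a flag file when a reboot is needed
      [ -f /var/run/reboot-required ] && echo "reboot required"
      # EL: needs-restarting (from dnf-utils/yum-utils) exits non-zero if a reboot is needed
      dnf needs-restarting -r || echo "reboot required"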

    I wouldn’t worry about last online/reboot times; I just do some basic monitoring to get an alert if a host is down. Spontaneous reboots would be a sign of an underlying issue.



  • I think Cloudflare Tunnels will require a different setup on k8s than on regular Linux hosts, but it’s such a popular service among self-hosters that I have little doubt that you’ll find a workable process.

    (And likely you could cheat, and set up a small Linux VM to “bridge” k8s and Cloudflare Tunnels.)
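
    For remotely-managed tunnels the connector is a single process, so that bridge really can be minimal. Assuming a tunnel token created in the Cloudflare dashboard, it’s roughly:

      cloudflared tunnel --no-autoupdate run --token <TUNNEL_TOKEN>

    On k8s you’d run the same thing as a Deployment using the cloudflare/cloudflared image.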

    Kubernetes is different, but it’s learnable. In my opinion, K8S only comes into its own in a few scenarios:

    • Really elastic workloads. If you have stuff that scales horizontally (uncommon), you really can tell Amazon to give you more Kubernetes nodes when load grows and destroy them when load goes down. But this is not really applicable to self-hosting, IMHO.

    • Really clustered software. Setting up, say, a PostgreSQL cluster is a ton of work. But people create K8S operators that you feed a declarative configuration (I want so many replicas, I want backups at this rate, etc.) and that work out everything for you… in a way that works in all K8S implementations (see the sketch after this list)! This is also very cool, but I suspect that there’s not a lot of this in self-hosting.

    • Building SaaS platforms, etc. This is something that might be more reasonable to do in a self-hosting situation.
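
    To make the operator point concrete: with something like CloudNativePG (one such operator, used here purely as an example), the whole Postgres cluster is declared in a short manifest, roughly:

      apiVersion: postgresql.cnpg.io/v1
      kind: Cluster
      metadata:
        name: pg-main
      spec:
        instances: 3
        storage:
          size: 10Gi

    and the operator handles provisioning, replication, and failover from there.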

    Like the person you’re replying to, I also run Talos (as a VM in Proxmox). It’s pretty cool. But in the end, I only run 4 apps I’ve written myself there, so I’m using K8S as a kind of SaaS… plus another application, https://github.com/avaraline/incarnator, which is basically distributed as container images and which I was too lazy to deploy in a more conventional way.

    I also do this for learning. Although I’m not a fan of how Docker Compose is becoming dominant in the self-hosting space, I have to admit it makes more sense than K8S for self-hosting. But K8S is cool and might get you a cool job, so by all means play with it; maybe you’ll have fun!



  • Came in here to mention Incus if no one had.

    I love it. I have three “home production” servers running Proxmox, but mostly because Proxmox is one of very few LTS/commercially-supported ways to run Linux with root (and everything else) on ZFS. And while its web UI is still a bit clunky in places, it comes in handy sometimes.

    However, Incus automation is just… superior. incus launch --vm images:debian/13 foo, wait a few seconds, then incus exec foo -- bash, and I’m root on a console of a ready-to-go Debian VM. Without --vm, it’s a lightweight LXC container. And Ansible supports running commands through incus exec, so you can provision stuff WITHOUT BOTHERING TO SET UP ANYTHING.

    AND, it works remotely without fuss, so I can set up an Incus remote on a beefy server and spawn VMs nearly transparently, plus incus file pull|push to transfer files.
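
    The remote workflow is roughly this (remote name and paths are made up):

      incus remote add beefy https://beefy.example.com:8443
      incus launch images:debian/13 beefy:foo --vm
      incus file push ./notes.txt beefy:foo/root/notes.txt
      incus file pull beefy:foo/etc/os-release .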

    I’m kinda pondering scripting removal of the Proxmox bits from a Proxmox install, so that I just keep their ZFS support and run Incus on top.