• 9 Posts
  • 121 Comments
Joined 3 years ago
Cake day: July 29th, 2023


  • I’m a +1 on this. A secondhand Synology setup with some RAID will delay this decision for a few years and give you time to build your expertise on the other aspects without worrying much about data security. It’s a pity that you’re nearly at the 8TB limit - otherwise I would have suggested a two-bay NAS with 2x8TB. If you’re going to use secondhand drives (I do, because I’m confident in my backup systems), maybe 4x6TB is better - bigger drives are harder to come by secondhand. Plenty of people won’t be comfortable with secondhand spinning rust anyway; if that’s you, then a two-bay with 2x12TB might be a good choice.
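    For context on those capacities, a back-of-the-envelope sketch (assumes plain RAID1/RAID5 - Synology’s SHR works out the same for equal-size drives - and ignores filesystem overhead and TB vs TiB):

```shell
#!/bin/sh
# Rough usable-capacity math for the drive options above.
raid1_usable() { echo "$1"; }                   # mirror: one drive's worth
raid5_usable() { echo $(( ($1 - 1) * $2 )); }   # n drives: (n - 1) x size

echo "2x8TB  RAID1: $(raid1_usable 8) TB usable"
echo "4x6TB  RAID5: $(raid5_usable 4 6) TB usable"
echo "2x12TB RAID1: $(raid1_usable 12) TB usable"
```

    So 4x6TB buys you roughly 18TB usable against 12TB for the 2x12TB mirror, at the cost of two extra spindles.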

    The main downside (according to me) of a Synology is no ZFS, but that didn’t bother me until I was two years in and the owner of three of them.


  • Proxmox on the metal, then every service as a Docker container inside an LXC or VM. Proxmox does nice snapshot backups (to my NAS), making it a breeze to move services from machine to machine, or to blow away the Proxmox install and reimport them. All the Docker Compose files are in git, and the things I apply to every LXC/VM (my monitoring endpoint, apt cache setup, etc.) are applied with Ansible playbooks, also in git. All the LXCs are cloned from a golden image that has my keys, Tailscale setup and so on.
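    The repo side of that is nothing fancy; here’s a sketch of the layout (every name - homelab, jellyfin, apt-cache.lan - is my own illustration, not a convention):

```shell
#!/bin/sh
# Illustrative layout for the git repo holding compose files and playbooks.
mkdir -p homelab/compose/jellyfin homelab/ansible

cat > homelab/compose/jellyfin/docker-compose.yml <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./config:/config
EOF

cat > homelab/ansible/baseline.yml <<'EOF'
# Applied to every LXC/VM: monitoring endpoint, apt cache, and so on.
- hosts: all
  tasks:
    - name: Point apt at the local cache (apt-cacher-ng convention)
      ansible.builtin.copy:
        content: 'Acquire::http::Proxy "http://apt-cache.lan:3142";'
        dest: /etc/apt/apt.conf.d/01proxy
EOF
```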




  • Great. There are two volumes there: firefly_iii_upload and firefly_iii_db.

    You’ll definitely want to run docker compose down first (to ensure the database isn’t being written to mid-copy), then:

    docker run --rm \
      -v firefly_iii_db:/from \
      -v $(pwd):/to \
      alpine sh -c "cd /from && tar cf /to/firefly_iii_db.tar ."
    

    and

    docker run --rm \
      -v firefly_iii_upload:/from \
      -v $(pwd):/to \
      alpine sh -c "cd /from && tar cf /to/firefly_iii_upload.tar ."
    

    Copy those two .tar files to the new VM, then create the new empty volumes with:

    docker volume create firefly_iii_db
    docker volume create firefly_iii_upload
    

    And untar your data into the volumes:

    docker run --rm \
      -v firefly_iii_db:/to \
      -v $(pwd):/from \
      alpine sh -c "cd /to && tar xf /from/firefly_iii_db.tar"
    
    docker run --rm \
      -v firefly_iii_upload:/to \
      -v $(pwd):/from \
      alpine sh -c "cd /to && tar xf /from/firefly_iii_upload.tar"
    

    Then make sure you’ve manually brought over the compose file and those two .env files, and you should be able to docker compose up and be in business again. Good choice with Proxmox in my opinion.
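    Before copying the .tar files over, it’s worth a quick sanity check that they actually contain your data. The listing commands are the useful part here; the first three lines just fabricate a stand-in archive so this sketch runs anywhere (substitute your real firefly_iii_db.tar):

```shell
#!/bin/sh
# Stand-in for a real volume export so the sketch is self-contained.
mkdir -p demo_volume
echo 'not real data' > demo_volume/database.sqlite
tar cf firefly_iii_db_demo.tar -C demo_volume .

# List the archive contents - you should recognise your files,
# and the entry count should be well above zero.
tar tf firefly_iii_db_demo.tar
tar tf firefly_iii_db_demo.tar | grep -c .
```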


  • I’m not clear from your question, but I’m guessing you’re talking about data stored in Docker volumes? (If they’re bind mounts, you’re all good - you can just copy the directories.) The compose files I found online for Firefly III use volumes, but Hammond looked like bind mounts. If you’re not sure, post your compose files here with the secrets redacted.
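    A quick way to tell the two apart in a compose file: bind mounts start with ./ or /, named volumes are bare names. A sketch against an illustrative fragment (the service and container paths are made up):

```shell
#!/bin/sh
# Illustrative compose fragment - one named volume, one bind mount.
cat > compose-sample.yml <<'EOF'
services:
  app:
    volumes:
      - firefly_iii_upload:/var/www/html/storage/upload
      - ./data/db:/var/lib/mysql
volumes:
  firefly_iii_upload:
EOF

# Lines starting with ./ or / are bind mounts; the rest are named volumes.
grep -E '^ +- (\./|/)' compose-sample.yml
```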

    To move data out of a Docker volume, a common way is to mount the volume into a temporary container to copy it out. Something like:

    docker run --rm \
      -v myvolume:/from \
      -v $(pwd):/to \
      alpine sh -c "cd /from && tar cf /to/myvolume.tar ."
    

    Then on the machine you’re moving to, create the new empty Docker volume and do the temporary copy back in:

    docker volume create myvolume
    docker run --rm \
      -v myvolume:/to \
      -v $(pwd):/from \
      alpine sh -c "cd /to && tar xf /from/myvolume.tar"
    

    Or, even better, just untar it into a data directory under your compose file and bind mount it so you don’t have this problem in future. Perhaps there’s some reason why Docker volumes are good, but I’m not sure what it is.
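    If you do convert, the whole move is an untar into a directory beside the compose file plus a one-line edit. A sketch (the first two lines fake the exported archive so it runs end to end; the names are illustrative):

```shell
#!/bin/sh
# Fake the export from the step above so this sketch is self-contained.
mkdir -p src && echo 'data' > src/app.db
tar cf myvolume.tar -C src .

# 1. Untar into a data directory next to the compose file.
mkdir -p data/myvolume
tar xf myvolume.tar -C data/myvolume

# 2. Then, in the compose file, swap the named volume for the path:
#      - myvolume:/path/in/container          # before: named volume
#      - ./data/myvolume:/path/in/container   # after: bind mount
ls data/myvolume
```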


  • I’m local first - stuff I’m testing, playing with, or “production” stuff like Jellyfin, Forgejo, AudioBookshelf, Kavita, etc. Local is faster, more secure, and storage is cheap. But some of my other stuff needs 24/7 access from the internet - websites and web apps - so that goes on the VPS.


  • I started doing this maybe 15 years ago, but if I look through my spam folder now, most of it is addressed to the email address I used before I began using unique addresses (the rest is to random addresses in my domains that I’ve never used).

    My hypotheses from that are:

    • there is probably less ‘selling of email lists’ going on than we think
    • I’m less interested in dubious internet sites than I used to be
    • or (most likely) these days, your internet thing has to be offering me some real value if I’m going to consciously give you any of my data.