• 3 Posts
  • 73 Comments
Joined 2 years ago
Cake day: June 12th, 2023



  • My instance is close to two years old now, and on average has had about 2 MAU, with no (local) communities.

    Currently we have about 700 active federated communities (that had any federated activity within the last month), out of 900.[1]

    The on-disk size of both the lemmy and pict-rs databases[2]:

    postgres@postgres:~$ pwd
    /var/lib/postgresql
    postgres@postgres:~$ du -sh data/
    31G	data/
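
    For reference, you can compare that with what Postgres itself reports (a quick sketch; the database name lemmy is an assumption, yours may differ):

    sudo -u postgres psql -c "SELECT pg_size_pretty(pg_database_size('lemmy'));"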
    

    I use pict-rs with an S3 provider, and the bucket size is currently at 22.82 GB (read: external network storage; this is probably mostly just thumbnails[3]).

    So in total, almost 54 GB is spent just on lemmy.

    So assuming you have 100 GB remaining after system stuff, dedicate that box only to lemmy (and the pict-rs media files), and use it mostly for yourself[4], you should be alright for about 3-4 years (I am gaining about 27 GB total per year, so roughly 100 ÷ 27 ≈ 3.7 years, assuming you federate with a similar number of similarly active communities).

    If you offload media storage to a hosted S3 bucket[5] then you should be good for a lot longer as you will only need space for the postgres databases.


    1. The rest are either dead (instance gone) or no one is subscribed to them anymore (so my instance is not getting any new content from them: no posts, comments, or votes). ↩︎

    2. Postgres itself reports about 2 GB less; I don’t really know why, but I am guessing it has something to do with the filesystem being btrfs. ↩︎

    3. Edit: I currently do not use the “privacy” mode of pict-rs where it proxies all content (so that a bad guy can’t post an image link to his server and unmask users’ IPs); this would increase the S3 size and, slightly, the postgres size. ↩︎

    4. You should use the Lemmy Subscriber Bot to automatically federate a little bit of random communities, so that the public All feed is not an exact copy (minus NSFW comms) of whatever you, as the only user, subscribe to. ↩︎

    5. Though keep in mind that S3 buckets eventually cost some money too; for example, Cloudflare R2 charges $0.015 per GB-month above the first 10 GB. ↩︎










  • Yeah, I didn’t add that bit before; I’ve edited it in. The Archer is here just as a dumb AP/routing box for the furthest room, connected to the Omnia by ethernet (so yes, the Archer acts as a client device @ .1.20 and forwards everything to the Omnia).

    EDIT: Sadly I don’t have OpenWRT on the TP-Link, but the plan was to replace it with a more capable Mikrotik so that I could set up the more advanced bits (Mobility Domain, “roaming”).









  • Your use case and situation seem very close to mine, except that I specifically do not host communities.

    First of all, you can run as many services behind a single nginx as you want (or can handle). Usually you do this by putting each service on its own (sub)domain and pointing them all at the same IP; nginx then proxies the requests to the corresponding service running locally on a given port (see nginx reverse proxy, sketched below).
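
    A minimal sketch of one such server block (the domain lemmy.example.org and port 8536 are placeholders; treat this as an illustration of the routing idea, not a complete Lemmy config):

    # Hypothetical reverse-proxy block: requests for lemmy.example.org
    # are forwarded to the backend listening on a local port.
    server {
        listen 80;
        server_name lemmy.example.org;

        location / {
            proxy_pass http://127.0.0.1:8536;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }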

    I would definitely recommend the docker images unless you have specific needs; afaik the ansible recipe installs and manages a docker compose project too (unless they have also added an official bare-bones ansible setup). I might be wrong here, as I run docker and manage it myself; updating is usually a file edit and two commands away (see the sketch below).
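
    Roughly what an update looks like in a docker compose setup (a sketch, assuming the image tags live in a docker-compose.yml in the current directory):

    # Hypothetical update flow; file name and editor are placeholders.
    nano docker-compose.yml   # bump the lemmy / lemmy-ui image tags
    docker compose pull       # fetch the new images
    docker compose up -d      # recreate the containers on the new images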

    About the VPS being enough: from my monitoring, every foreign subscribed community increases the load, with bigger/more active communities increasing it more.
    The main limiting resource for my setup is disk space. Some time ago I calculated that my database size is increasing by about 1 GB per month with about 500 subscribed communities, and that’s only the postgresql database size, without any media. The stats from my S3 provider (you can host images locally too) hint that I am gaining 1-5 GB of media per month.

    I don’t have any metrics on how much the number of active users drains the server, as my instance is intentionally small, but I can imagine that having 10/100/1000 active users at the same time would drastically increase the load on at least postgres, as well as increase the bandwidth.

    And for comparison, my setup: I am renting a dedicated server from Hetzner (AX41-NVMe) that also runs a bunch of other services (minecraft server, factorio server, file sharing service, …), and over the last 30 days my monitoring reports the “average” load average (same for all 1/5/15m) at around 1 core (out of a 12-core processor, 6×2 SMT).
    Memory sits at about 50% monthly average out of 64 GB.
    Though most of the services are really under-utilized (minecraft) or don’t require much (factorio).

    Rule of thumb: if your users subscribe to a lot of outside communities, expect at least increased disk space consumption, and at worst also increased bandwidth and load.
    If any of your hosted communities gets popular on the wider fediverse, definitely expect increased bandwidth and load: more servers hitting your server with more data (upvotes, comments, edits…) means nginx, lemmy, and postgres also need to process more.
    At baseline there will be a lot of spiky but small chatter from other instances, and the biggest resource drain will be postgres.

    I personally wouldn’t go into this with anything less than 4 vCPUs, 32 GB of RAM, and non-shared/non-virtual storage (disk latency kills postgres performance; see the quick check below).
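
    If you want to sanity-check a box’s storage before committing, one rough approach is to measure small synced writes, which is close to what postgres does when flushing its WAL. A sketch using fio (file path, sizes, and runtime are arbitrary; delete the test file afterwards):

    # Rough latency check: 8k random writes with an fsync after every write.
    # High completion/fsync latencies here usually mean postgres will suffer.
    fio --name=pgsync --filename=/var/lib/postgresql/fio-test \
        --rw=randwrite --bs=8k --size=256m \
        --ioengine=psync --fsync=1 --runtime=30 --time_based
    rm /var/lib/postgresql/fio-test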