

Andreas (lead engineer) has told the story of how he got that money - they just happen to know each other, and $100k is peanuts for the Shopify founders.
But you’re right to be suspicious of anything of the sort!


Ran WireGuard on a Pi 1 and it was fine for two users - albeit WireGuard was the ONLY thing running aside from a GitLab Runner.
A 4B should be more than enough for many use cases, except things that cause torrents of packets - but even then YMMV. It really depends on the workload.
One bit of advice: if you can, boot the 4B from a storage device other than the micro SD slot. Again YMMV.


It depends on many factors, including the ones below.
So you’re right that you make an initial guess and go from there.
Many tools/sites/projects will have minimum system requirements, and you can get an idea of your minimums from those stats. Some frameworks might even have sizing guidelines available. The one I use most often, for example, has a configurable memory footprint, so that’s a datapoint I personally use.
If they’re all the same type of site (e.g. Ghost blogs) using the same setup, then it’s often less intense, since you can pool resources like DBs and caching layers and go below the per-site minimum system requirements (which for many sites include a DB as part of the requirements).
Some sites might be higher traffic but use fewer resources, others might be the inverse.
Then there’s also availability. Are these sites for you? Is this for business? What kind of uptime guarantee do you need? How do you want to monitor that uptime and react to needs as they arise?
The best way to handle this in a modern context also depends on how much, and what style of, ops you want to engage in.
Auto-scaling on an orchestration platform (something like K8s)? Cloud-provider auto-scaling of VMs? Something else? Do you want deployments managed as-code via version control, or will this be more “ClickOps”? No judgement here, just a thing that will determine which options are best for you. I do strongly recommend some kind of codified, automated ops workflow - especially if it’s 25 sites, but even with just a handful. The initial investment will pay for itself very quickly when you need to make changes and are relieved to have a blueprint of where you are.
If you want to set it and forget it there are many options but all require some significant initial configuration.
If you’re ok with maintenance, then start with a small instance and some monitoring and go from there.
During setup and staging/testing the worst that can happen is your server runs out of resources and you increase its available resources through whatever method your provider offers. This is where as-code workflows really shine - you can rebuild the whole thing with a few edits and push to version control. The inverse is also true - you can start a bit big and scale down.
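To make the as-code idea concrete, here’s a minimal sketch using Pulumi’s Python SDK - just one option among many (OpenTofu/Terraform express the same thing in HCL). The provider, image, and size names are assumptions, not recommendations:

```python
import pulumi
import pulumi_hcloud as hcloud  # assumed provider (Hetzner); any provider works

# One small VM to start with. "Scaling up" later is a one-line,
# reviewable diff: change server_type and push to version control.
web = hcloud.Server(
    "web",
    image="debian-12",    # assumed image name
    server_type="cx22",   # assumed size; bump it when monitoring says so
)

pulumi.export("ipv4", web.ipv4_address)
```

The win is that “increase its available resources” becomes a reviewed one-line diff instead of a sequence of console clicks you have to remember.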
Again, finding what works for you is worth some investment (and by works I don’t just mean what runs, but what keeps you sane when things go wrong or need changing).
Even load testing, which you mentioned, is hard to get right and can be challenging to instrument and implement in a way that matches real-world traffic. It’s worth doing for sites that are struggling under load, but it’s not something I’d necessarily suggest starting with. I could be wrong here but I’ve worked for some software firms with huge user bases and you’d be surprised how little load testing is done out there.
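That said, if you want a rough baseline before investing in proper tooling, even the standard library gets you something. A sketch (URL and counts are placeholders; point it at staging, and expect real traffic to be burstier and more varied than this):

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://staging.example.com/"  # placeholder target
REQUESTS = 200
CONCURRENCY = 20

def timed_get(_: int) -> float:
    """Fetch URL once and return the wall-clock latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_get, range(REQUESTS)))

print(f"median: {statistics.median(latencies):.3f}s")
print(f"p95:    {latencies[int(len(latencies) * 0.95)]:.3f}s")
```

Dedicated tools (k6, Locust, hey, etc.) go much further, with ramp-up profiles and richer stats.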
Either way it sounds like a fun challenge with lots of opportunities for learning new tricks if you’re up for it.
One thing I recommend avoiding is solutions that induce vendor lock-in - for example use OpenTofu in lieu of something like CloudFormation. If you decide to use something like that in a SaaS platform - try not to rely on the pieces of the puzzle that make it hard (sticky) to switch. Pay for tools that bring you value and save time for sure, but balance that with your ability to change course reasonably quickly if you need to.


Canada too.


Email is notoriously hard to self-host. It requires constant care, planning, and interfacing with the big guys when your email can’t get delivered despite jumping through all the hoops (DKIM, DMARC, SPF, and more).
I used to run email services for my small business and former start-up. It was a never-ending pain. IP warming, monitoring, deliverability checks… blah blah blah.
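For anyone curious, the most basic deliverability check is just DNS lookups. A sketch using the dnspython package (the domain is a placeholder):

```python
import dns.resolver  # from the dnspython package

def txt_records(name: str) -> list[str]:
    """Return all TXT records for a name, or [] if none exist."""
    try:
        return [str(r) for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"  # placeholder
spf = [r for r in txt_records(domain) if "v=spf1" in r]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if "v=DMARC1" in r]

print("SPF:  ", spf or "MISSING")
print("DMARC:", dmarc or "MISSING")
# DKIM lives at <selector>._domainkey.<domain>; the selector is
# provider-specific, so check it separately once you know yours.
```

And passing these checks still doesn’t guarantee delivery - it just keeps you out of the obvious spam folders.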
Both Google and Microsoft would regularly blacklist massive IP address blocks because of one bad IP address. Days to weeks for resolution in some cases.
I’m a little salty though, ‘cause I just switched away from Rackspace to Proton. There are so few good and reliable options that aren’t the big guys, and the big guys want it that way.


If it’s a backup server, why not build a system around a CPU with an integrated GPU? Some of the APUs from AMD aren’t half bad.
Particularly if it’s just your backup… and you can live without games/video/acceleration while you repair your primary?


Good enough? I mean, it’s allowed. But it’s only “good enough” if a licensee decides their goal is to make using the code they changed or added as hard as possible.
Usually the code was obtained through a platform like GitHub or GitLab, and could easily be re-contributed with comments and documentation in an easy-to-process manner (like a merge or pull request). I’d argue that not completing the loop the same way the code was obtained is hostile. The code equivalent of not taking the time to put your shopping cart in the designated spot.
Imagine the owner of the original source code making it available only via zip file, with no code comments, READMEs, or developer documentation. If the tables were turned like that, very few would actually use the product or software.
It’s a spirit vs. letter of the law thing. Unfortunately we don’t exist in a social construct that rewards good faith actors over bad ones at the moment.


As someone who worked at a business that transitioned to AGPL from a more permissive license, this is exactly right. Our software was almost always used in a SaaS setting, and so GPL provided little to no protection.
To take it further, even under the AGPL, businesses can simply zip up their code and send it to the AGPL’ed software owner, so companies are free to be as hostile as possible (and some are) while staying within the legal framework of the license.


Pros:
Cons:


Pijul is a very exciting project. I’ve wanted to try it for months but haven’t found the time.
I’m on iOS and do the same thing.
The WireGuard app has a setting to “connect on demand”. It’s in the individual connections/configurations.

You can then set either included or excluded SSIDs. There’s also an option to always connect when you’re on mobile/cellular data.
I imagine the Android app is similar.
Neat, I’ll have to look it up. Thanks for sharing!


Nextcloud isn’t exposed; only a WireGuard connection allows remote access to Nextcloud on my network.
The whole family has WireGuard on their laptops and phones.
They love it, because using WireGuard also means they get a by-default ad-free/tracker-free browsing experience.
Yes, this means I can’t share files securely with outsiders. It’s not a huge problem.


SMB: https://en.m.wikipedia.org/wiki/Server_Message_Block
In short, it’s a way to share network access to storage across macOS/Linux/Windows.
macOS switched from AFP to SMB (as the default file sharing / network storage protocol) a few years ago, as it was clear that was where everything was headed - though iOS and macOS also have native support for NFS.
On Linux, you can use Samba to create SMB shares that will be available to your iOS device.
It’s a lot of configuration though - so maybe not the best choice.
As for Nextcloud - indeed you can use it in your local network without making it available on your WAN connection. That’s how we use it here.
When we need it remotely - we VPN into our home network. But no exposed ports. :)


Neat solution!


I use Nextcloud. But that also means setting up and managing Nextcloud. By the same token, you could use Google Drive.
For notes and photos you can export them within the app. Notes specifically requires that you print, then hit share on the print dialog, to save the note to the file system as a PDF.
Notes also has another option: if you have a non-Apple mail account on your phone - you can enable notes for that email account and simply move (or copy) your notes from one account to the other. The notes will then become available within that email account mailbox structure on any device or machine where that email account is enabled.
For voice recordings you can save any voice recording directly to the iOS filesystem.
The iOS files app also allows you to connect to any other server/desktop via SMB.
There are lots of options here. None are awesome, but they work.


Update: I went and had a look, and there’s a Terraform provider for OPNsense under active development - it covers firewall rules, some Unbound configuration options, and WireGuard, which is definitely more than enough to get started.
I also found a guide on how to replicate pfBlocker’s functionality on OPNsense that isn’t terribly complicated.
So, much of my original comment below is less than accurate.
For some, like me, OPNsense is not a viable alternative. pfBlockerNG in particular is the killer feature for me that has no equivalent on OPNsense. If it did, I’d switch in a heartbeat.
If I had to go without pfBlockerNG, then I’d likely turn to something with more “configuration as code” options, like VyOS.
Still, it’s nice to know that a fork of a fork of m0n0wall can keep the lights on, and do right by users.


If you back up your config now, you’d be able to apply it to CE 2.7.x.
While this would limit you to an x86 type device, you wouldn’t be out of options.
I am an owner of an SG-3100 as well (we don’t use it anymore), but that device was what soured me on Netgate after using pfSense on a DIY router at our office for years…
I continued to use pfSense because of the sunk costs involved (time, experience, knowledge). This is likely the turning point.
While you’re technically right, I don’t see a material difference between paying with cash and paying with data (Verge sign-up is free, but it’s still sign-up).
The current automation guidelines and defaults renew certs 30 days before expiry. So even today, certs aren’t in service for more than 60 days; it’s just that they’re valid for 90.
Additionally, you can fairly easily monitor certs to get an alert if you drop below the 30-day threshold and automatic renewal hasn’t taken place.
I use self-hosted Grafana for this with their synthetic monitoring free tier, but it would be relatively trivial to roll your own Prometheus exporter to do the same.
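For a sense of scale, here’s roughly what that roll-your-own exporter could look like with the prometheus_client package - hostnames, port, and metric name are all placeholders:

```python
import socket
import ssl
import time

from prometheus_client import Gauge, start_http_server

HOSTS = ["example.com", "example.org"]  # placeholder hostnames

days_left = Gauge(
    "tls_cert_days_until_expiry",
    "Days until the TLS certificate expires",
    ["host"],
)

def days_until_expiry(host: str, port: int = 443) -> float:
    """Connect, grab the peer cert, and return days until notAfter."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return (ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) / 86400

if __name__ == "__main__":
    start_http_server(9101)  # Prometheus scrapes this port
    while True:
        for host in HOSTS:
            try:
                days_left.labels(host=host).set(days_until_expiry(host))
            except OSError:
                days_left.labels(host=host).set(-1)  # flag unreachable hosts
        time.sleep(3600)
```

An alert rule on tls_cert_days_until_expiry < 30 then catches missed renewals with a month to spare.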