Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 175 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • I believe you, but I also very much believe that there are security vendors out there demonizing LE and free stuff in general. The “more expensive equals better and more serious” thinking is unfortunately still quite present, especially in big corps. Big corps also seem to like the concept of having to prove yourself with a high price of entry; they just can’t believe a tiny company could possibly have a better product.

    That doesn’t make it any less ridiculous, but I believe it. I’ve definitely heard my share of “we must use $sketchyVendor because $dubiousReason”. I’ve had to install ClamAV on read-only, diskless VMs at work because otherwise customers refuse to sign because “we have no security systems”. Everything has to be TLS encrypted, even if it goes to localhost. Box checkers vs common sense.



  • Neither does Google Trust Services nor DigiCert. They’re all HTTP-validated on Cloudflare, and we have Fortune 100 companies served with LetsEncrypt certs.

    I haven’t seen an EV cert in years, browsers stopped caring ages ago. It’s all been domain validated.

    LetsEncrypt publicly logs which IP requested a certificate, which is a lot more than what regular CAs do.

    I guess one more to the pile of why everyone hates Zscaler.


  • That’s more of a general DevOps/server admin steep learning curve than anything specific to Vaultwarden, to be fair.

    It looks a bit complicated at first as Docker isn’t a trivial abstraction, but it’s well worth it once it’s all set up and going. Each container is always the same, and always independent. Vaultwarden per se isn’t too bad to run without a container, but the same Docker setup can be used for, say, Jitsi, which is an absolute mess of components to install and make work, some Java stuff and all. But with Docker? Just docker compose up -d, wait a minute or two and it’s good to go, you just need to point your reverse proxy to it.
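
    As a minimal sketch, a Vaultwarden compose file can be as small as this (the host port and volume path are just placeholders to adapt; the image is the regular vaultwarden/server one):

        services:
          vaultwarden:
            image: vaultwarden/server:latest
            restart: unless-stopped
            volumes:
              - ./vw-data:/data            # all of Vaultwarden's persistent state lives here
            ports:
              - "127.0.0.1:8080:80"        # only reachable locally; the reverse proxy talks to this

    Then docker compose up -d and point the reverse proxy at 127.0.0.1:8080.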

    Why do you need a reverse proxy? Because it’s a centralized place where everything comes in: instead of having 10 different apps with their own certificates and ports, you have one proxy, one port, and a handful of certificates all managed together, so you don’t have to figure out how to make all those apps play together nicely. Caddy is fine, you don’t need NGINX if you use Caddy. There’s also Traefik, which lands in between Caddy and NGINX in ease of use, and there’s HAProxy. They all do the same fundamental thing: traffic comes in as HTTPS, the proxy reads the Host header from the request and forwards it to the right container as plain HTTP. It doesn’t have to work that way specifically, but that’s the most common setup in self-hosting.
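
    For illustration, a minimal Caddyfile doing exactly that host-based routing (hostnames and upstream ports are made up; Caddy takes care of the certificates on its own):

        vault.example.com {
            reverse_proxy 127.0.0.1:8080    # the Vaultwarden container from above
        }

        jitsi.example.com {
            reverse_proxy 127.0.0.1:8000    # some other app, same pattern
        }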

    As for your backups, if you used a Docker compose file, the volume data should be in the same directory. But Vaultwarden is probably using some sort of database, so you might want to look into periodic data exports instead: databases don’t like to be backed up live, because the file is constantly being written to, so you can’t really get a consistent snapshot of it in one go.
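
    A rough sketch of such an export, assuming Vaultwarden’s default SQLite backend, the ./vw-data volume from above and sqlite3 installed on the host (the paths are assumptions, adapt to your setup), run from cron or a systemd timer:

        #!/bin/sh
        # Use SQLite's own backup command to get a consistent copy of the live database,
        # then archive it together with the attachments and config in the volume.
        sqlite3 ./vw-data/db.sqlite3 ".backup './vw-data/db-backup.sqlite3'"
        tar -czf "vaultwarden-backup-$(date +%F).tar.gz" ./vw-data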

    But yeah, try to think of it as an infrastructure investment that makes deploying more apps in the future a breeze. Want to add a NextCloud? Add another docker compose file and start it, Caddy picks it up automagically and boom, it’s live and good to go!

    Moving services to a new server is pretty easy too. Copy over your configs, composes and volumes if applicable, start them all, and they should come back up in exactly the same state they were in on the other box. No services to install and configure, no repos to add, no distro to maintain: it’s all built into the container by someone else so you don’t have to worry about any of it. Each update of the app brings with it the whole matching updated OS with the right packages in the right versions.
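
    As a sketch, assuming everything lives under ~/docker on both machines (paths and hostname are placeholders), the whole move is roughly:

        # On the old box: stop the stack so the volumes aren't being written to,
        # then copy compose files, configs and volume data over.
        docker compose down
        rsync -a ~/docker/ newserver:~/docker/

        # On the new box: pull the images and bring everything back up.
        cd ~/docker && docker compose up -d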

    As a DevOps engineer I love the whole thing, because I can have a Kubernetes cluster running on a whole rack, tell it “here are the apps I want you to run” and it just figures itself out, automatically balances the load, and if a server goes down the containers respawn on another one and keep going as if nothing happened. We don’t have to manually log into any of those servers to install services to run an app. More upfront work for minimal work afterwards.
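
    In Kubernetes terms, “here are the apps I want you to run” is literally just a manifest like this (names and image are examples); the cluster decides where it runs and respawns it elsewhere if a node dies:

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: vaultwarden
        spec:
          replicas: 1                     # how many copies the cluster should keep alive
          selector:
            matchLabels:
              app: vaultwarden
          template:
            metadata:
              labels:
                app: vaultwarden
            spec:
              containers:
                - name: vaultwarden
                  image: vaultwarden/server:latest
                  ports:
                    - containerPort: 80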




  • IMO the biggest attack vector there would be a Minecraft exploit like log4j, so the most important part to me would be making sure the game server is properly sandboxed just in case. Start from the point of view that the attacker has breached Minecraft and has shell access as that user. What can they do from there? Ideally, nothing useful other than maybe running a crypto miner. Don’t reuse passwords, obviously.

    With systemd, I’d use the various Protect* directives like ProtectHome, ProtectSystem=full, or failing that, a container (Docker, Podman, LXC, manually, there’s options). Just a bare Alpine container with Java would be pretty ideal, as you can’t exploit sudo or some other SUID binaries if they don’t exist in the first place.
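
    As a sketch, the hardening part of a minecraft.service unit could look something like this (the directives are real systemd options, the paths are placeholders; tighten or loosen to what the server actually needs):

        [Service]
        User=minecraft
        NoNewPrivileges=yes              # can't gain privileges via sudo/SUID binaries
        ProtectHome=yes                  # /home, /root and /run/user appear empty
        ProtectSystem=full               # /usr, /boot and /etc are read-only
        PrivateTmp=yes                   # gets its own /tmp, isolated from other services
        ProtectKernelTunables=yes
        ProtectControlGroups=yes
        ReadWritePaths=/srv/minecraft    # the only writable path (world data, logs)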

    That said, the WireGuard solution is ideal because it limits potential attackers to the people you handed a key, so at least you’d know who breached you.
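
    A minimal sketch of the server side of that (keys, addresses and port are placeholders): only peers whose public key you added can even reach the Minecraft port over the tunnel.

        [Interface]
        Address = 10.66.0.1/24
        ListenPort = 51820
        PrivateKey = <server-private-key>

        # One [Peer] block per person you handed a key to
        [Peer]
        PublicKey = <friend-public-key>
        AllowedIPs = 10.66.0.2/32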

    I’ve forgotten Minecraft servers left running online and really nothing happened whatsoever.




  • Titus is fairly trustworthy (he’s made a few videos on the dangers of custom Windows ISOs like AtlasOS), but large chunks of the thing are written with AI-assisted development and it’s also the dude’s Rust learning experience, so the code is not great. Parts of it are meant to run under ArchISO to install Arch (another sin: an automatic Arch installer), so it makes sense to want to just one-liner download and run the prebuilt binary.

    I wouldn’t use it personally, but his audience is into it. It targets quick and easy, not proper and secure. It’s mostly meant to easily install and clone his setup, and it’s too early in development to really be that useful for everyone.

    On the winutil side he also does the | iex PowerShell sin, but the toolbox do be really useful to debloat a Windows install.


  • The developer benefits from reaching more people, some of whom are likely to purchase the proprietary license. Or sometimes you dual-license just so that licenses are compatible. Each license has pros and cons for both the developers and the users.

    Qt for example: the LGPL means you need to dynamically link to it, and if you ship your own Qt libraries you must provide the source code for them. But if you’re a company that writes proprietary software and can’t dynamically link, you can purchase the proprietary license, which allows you to do a lot more, but you’re compensating the devs for it. And for the Qt devs that’s good because either you pay them, or you use it for free but must share your changes with everyone.

    For ElasticSearch, that makes it so Amazon can’t just patch it up and sell the modified version without sharing what they changed. They wanted to add back a FOSS license to stop the bleed to OpenSearch, which many in the FOSS community switched to purely for the license, because even separate pieces of software should be compatible license-wise if you want a sustainable FOSS project. And the AGPL requires sharing the source merely for letting users talk to it over the network, so Elastic gets either the free dev work or the juicy license payments. The other free licenses achieve similar goals with technical differences that might matter for the user. But as a developer using ElasticSearch, maybe you do want to ship your software under the SSPL, so you can pick the SSPL version.

    With MIT/GPL dual-licensing for example, you can build proprietary software, or GPL software where you can vendor it in as GPL-only, and thus guarantee your users their GPL rights.



  • The identifier is unavoidable for push notifications to work. It needs to know which phone to send it to, after all; even if it didn’t use Google’s services, it would still need a way to know which device has new messages when it checks in. If it’s not a phone number it’s gonna be some other kind of ID. Messages need a recipient.

    Also, Signal’s goal is protecting conversations for the normies, not being bulletproof enough to run the next Silk Road at the cost of usability. Signal wants to upgrade people’s SMS messaging and make encryption the norm, and you have to make some sacrifices for that. Phone numbers were a deliberate decision so that people can just install Signal and start using E2E texting immediately.

    If you want something really private you should be using Tor- or I2P-based solutions, because those are the only systems that can reasonably hide both source and destination completely. Signal has your phone number and IP address after all; they could track your every movement.

    Most people don’t need to hide who they talk to; they want privacy for their conversations and their content. Solutions with perfect anonymity between users are hard to understand and use for the average person, who is Signal’s target audience.



  • If your stuff is all Docker then yeah, immutable makes sense, as it makes the entire box declarative and reproducible: you can get the exact same operating Docker environment back on the server, and then bring the exact same Docker workloads back up with the Docker compose configurations.

    If you ever need to run stuff you’d run on Debian, you can just shove it in a Debian container.
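
    For example (the tag is just the current stable, any Debian tag works):

        # Throwaway Debian shell for one-off tools
        docker run -it --rm debian:bookworm bash

        # Or keep one around and hop into it when needed
        docker run -d --name debian-tools debian:bookworm sleep infinity
        docker exec -it debian-tools bash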

    That said, if most of the stuff is containers, the risk of just the core Debian breaking is fairly low. Pick whatever is easiest for you to deal with based on your needs. Immutable distros have a bit of a learning curve.