• 1 Post
  • 47 Comments
Joined 2 years ago
Cake day: July 6th, 2024

  • Security through obscurity never works, so changing your SSH port does barely anything

    … for security that is.

    What it does is keep a lot of automated bots from spamming your server. No, they have no chance of getting access when key authentication is used (and they won’t even try… most go for incredibly low-hanging fruit like admin/admin user/password combinations), but they can become a strain on your resources.

    What actually helps (and is usually configurable with any firewall) is rate limiting access. Just blocking someone’s access for 10 seconds after a failed attempt will make absolutely no difference for you but a big one for those spammers. Now add some incremental increase after multiple fails and you are perfectly set.
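    The rate limiting described above can be sketched with nftables (purely illustrative; the set name, SSH port and rate are assumptions, adjust to taste):

    ```
    # Track new connection attempts to port 22 per source address
    # and drop sources opening more than 4 new connections per minute.
    table inet filter {
        set ssh_meter {
            type ipv4_addr
            flags dynamic
            timeout 10m
        }
        chain input {
            type filter hook input priority 0; policy accept;
            tcp dport 22 ct state new add @ssh_meter { ip saddr limit rate over 4/minute } drop
        }
    }
    ```

    For the actual incremental increase after repeated failures, something like fail2ban on top is the usual route.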

    PS: 53 is the standard port for DNS when your server operates as such.

    PPS: Don’t use it. People should really let that stuff die and exclusively run encrypted DNS (via TLS, HTTPS or QUIC…)


  • Mainly my normal phone app. But for a long time now it hasn’t been synced to some Google cloud (which would be the default) but to a Radicale instance.

    I used Nextcloud before, but honestly it’s a mess to maintain. So much so that I wouldn’t suggest it unless you plan to make extensive use of its many available add-on functions.

    Just for file sharing and caldav/carddav I will pick some simple solutions (like Radicale and Syncthing) over Nextcloud any day.


  • And to give you a reference to some of the details glossed over…

    The Anubis instance listening on a socket doesn’t work as described there, because the systemd service runs as root by default while your web server needs access to the socket. So you first need to align the user the Anubis service runs as with your web server’s user, and set the permissions of the /run/anubis directory accordingly.

    (see Discussion here for example)
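    For illustration, one way to align user and permissions is a systemd drop-in (the service name anubis.service and the www-data user are assumptions; use whatever user your web server actually runs as):

    ```
    # /etc/systemd/system/anubis.service.d/override.conf (hypothetical path)
    [Service]
    User=www-data
    Group=www-data
    # have systemd create /run/anubis owned by that user
    RuntimeDirectory=anubis
    RuntimeDirectoryMode=0750
    ```

    followed by a systemctl daemon-reload and a service restart.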

    Also, having the one single setup example in the docs use Unix sockets when that isn’t even the default is strange in the first place…

    Half the environment variables are just vaguely described without actual context. It probably makes perfect sense when you already know it all and are writing the description. But as documentation for third-party use that’s not sufficient.

    Oh, and the example setup for Caddy is nonsensical. It shows you how to route traffic to Anubis and then stops… and references the Apache and Nginx setups to get an idea of how to continue (read: to understand that you then need a second Caddy instance to receive the traffic…).
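    Just as a guess at what the missing second half might look like, an assumption-laden Caddyfile sketch (all ports invented; it assumes Anubis listens on :8923 and its target is set to :3001, with the real app on :3000):

    ```
    example.com {
        # first hop: everything goes through Anubis
        reverse_proxy localhost:8923
    }

    # second hop: Anubis forwards cleared traffic here,
    # and this block finally proxies to the actual application
    http://localhost:3001 {
        reverse_proxy localhost:3000
    }
    ```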

    PS: All that criticism reads harsher than it is meant. Good documentation needs user input and multiple viewpoints to realize where the gaps are. That’s simply not going to happen with mostly one person.


  • More than once. But (not actually surprising for a work in progress by mostly one single person) it’s not what I would call well-structured or even coherent. 😅

    More than once I googled for a detail I didn’t understand and ended up on the issue tracker, realizing I’m not alone and some behavior is indeed illogical or erratic.

    And then some of it references forwarding and header information, how it’s handled, where it’s flattened… and as my question should have told you, I don’t even have much of a clue how that is handled normally.



  • Logs of what exactly? I don’t even know where to look. nginx isn’t logging an error, and a request that ends up on an unavailable port and just times out isn’t logged anywhere either. How would I set up extensive logging of anything but errors and accesses?

    As far as I’m concerned this is not some error, but something about the details of how proxy_pass works that I don’t understand.

    In fact it isn’t even an actual problem per se. I can easily move the reverse proxy up one block so only the actual pages are protected. But the point is that I want to understand why a request that should be routed internally (and is, without Anubis in the mix) ends up there. I suspect the way the default headers are transmitted is screwing things up.
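    For reference, one thing nginx offers beyond error and access logs is its debug log, which does record proxy_pass internals, but only if the binary was built with --with-debug (check nginx -V). A sketch, with placeholder paths/IPs:

    ```
    # in nginx.conf
    error_log /var/log/nginx/debug.log debug;

    events {
        # optionally restrict the very verbose output to a single client IP
        debug_connection 192.0.2.10;
    }
    ```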




  • Not the only one, but probably a minority. Dual-wielding identical weapons is mostly a meme popularized by fantasy literature and games, and the movies and PC games based on those.

    In reality, people are quite bad at coordinating two similar weapons and don’t get much benefit out of it. So the classical dual-wield is a bigger main weapon plus a smaller supporting offhand, beginning with shields being used offensively (and getting smaller and more maneuverable as the main weapon became lighter and faster - see the buckler) and ending with classic combinations like rapier & parrying dagger or the Daishō (a katana & wakizashi pair).




  • I would suggest starting out with nginx and just setting up a basic homepage for yourself. Even if it’s just a title and background… doesn’t matter.

    This way you have to solve problems like how to reach your page from the outside (your own domain? DDNS? etc.) and how to set up Certbot for HTTPS (which a lot of services will require later anyway). That already involves setting up parallel configurations in nginx (one for HTTP that then redirects to HTTPS, and one for HTTPS). And you will do both again later, because the same dual setup lets you serve different websites on the same IP depending on which hostname a visitor entered, and lets you act as a reverse proxy, forwarding some of them (or sub-locations of your page) to other services that provide a web interface.
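    The dual setup described above could look roughly like this (example.com is a placeholder; the certificate paths are where Certbot usually puts them):

    ```
    server {
        listen 80;
        server_name example.com;
        # first block: plain HTTP only redirects to HTTPS
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        root /var/www/homepage;
        index index.html;
    }
    ```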



  • These people have no clue how to get around these DNS filters.

    But that’s not thanks to any particularly effective blocking, just the average user’s lack of knowledge…

    I have used several of those cheap routers over the years. And they simply can’t block you from using encrypted DNS (unless they want to maintain giant blocklists and play whack-a-mole with DNS servers…).

    So all they usually do is something very low-tech, like ignoring the DNS server you set in the router configuration and rerouting it (or not providing such a configuration in the first place). But they can effectively only do so with unencrypted DNS.

    With encrypted DNS they could at best try to block the default port used by DNS-over-TLS, but that still leaves DoH. And they can’t block that, because it’s just regular encrypted HTTPS traffic (with the DNS query inside).

    IIRC even Windows allows easy configuration of DoH nowadays (and did for much longer if you were willing to edit the registry), letting you simply choose between unencrypted, DoH only, or encryption preferred if available.




  • Take a look at the config file (/etc/radicale/config). It’s extensively commented. Although you barely need to change any defaults for regular use.

    Just create an htpasswd file (with the htpasswd tool from the Apache utilities, or just any of the million available online generators) and edit two lines under [auth] to read type = htpasswd and htpasswd_filename = <the location and file you created>.

    And you can start (and enable) Radicale via the systemd service usually included in the installed package. (Or, for early testing, just start the server manually… running radicale starts it with the defaults from the config file. You can also configure everything via command-line parameters, but that’s an insanely long list (radicale --help if you are interested in seeing them)…)

    The webinterface to login will be available (by default settings) under http://localhost:5232/.

    All you have to do then is change the config so Radicale listens on the server’s IP instead, so it’s available in your network. (Plus the usual stuff of making it available from the outside if you need that, like for any other service.)
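    Put together, the relevant bits of /etc/radicale/config could look like this (the LAN IP and the htpasswd path are placeholders):

    ```
    [server]
    # listen on the server's LAN address instead of localhost
    hosts = 192.168.1.10:5232

    [auth]
    type = htpasswd
    htpasswd_filename = /etc/radicale/users
    ```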

    And any calendar/contact software will offer a wizard that guides you through the syncing process, usually just asking for the address of your server plus user and password.

    EDIT: I looked up the defaults and you can skip all the authentication stuff in the beginning. By default just anyone can access the webpage at port 5232. So you can just test it and only bother with authentication later (definitely when you plan to make it available from the outside, for example to sync phones).


  • With radicale, do I need to install some other somewhere in order to use it?

    No, you just need to install Radicale. That’s it. CalDAV and CardDAV are widely used formats available as an option in basically any calendar app.

    Can a self hosted calendar still send and receive invites to other calendars?

    Oh, I see your problem. You don’t host your calendar. You host a service that synchronises all the regular calendars you already use across different devices.

    Or are you at the moment using Google’s calendar in browser only?