

You sound like you are outsourcing comment-writing to an LLM.


If you need to keep Windows around, you can also do what I do for my VMs. Download Win11 Enterprise IoT LTSC from massgrave.dev. Doesn’t have Bing search in the search bar, just local search, no Copilot, no bullshit.
I’d prefer not having to use it at all, but some software just doesn’t play nice with Wine yet.


Yup, typo. It is lit, though.


I deployed LiveKit for my homeserver yesterday. I’m not gonna lie, it does involve a bit of work, but once it’s running it’s very seamless.


I had a Brother laser printer with a pre-heating roller that went bad. Sourcing a replacement for that was pretty annoying. But I get your point.


Yeah, I usually approach this stuff from the standpoint of someone who is already actively self-hosting. For people stuck in Google/MS, it is certainly better.


Vaultwarden is free. Bitwarden is free. Bitwarden Premium is 10€/year.
For what it offers, Proton is pretty expensive. They are also making interoperability with other services difficult or impossible.
There are much worse options, but they aren’t that great either.


… sure. Nothing here is wrong, but there are ways to try and mitigate that. And then it’s kind of an arms race, and a matter of vigilance.


Good as a general recommendation.
I also feel like the risk levels are very different. If it’s something that performs a function but doesn’t save/serve any custom data (e.g. bentopdf), that’s a lot easier to decide to do than something complicated like Jellyfin.
I do have public addresses for Matrix, Overleaf, AppFlowy and Immich because they would be much less useful otherwise. Haven’t had any problems yet, but wouldn’t necessarily recommend it to others.
I’d never host any stuff with “Linux ISOs” on a public address; that seems like it’d be looking for trouble.
What annoys me with Tuta is that they make PGP encryption very difficult (they don’t implement it at all, you have to use external solutions, which is made more difficult because you can’t use external clients).
They argue it is less secure than their solution, where they send non-Tuta users a link and you give them a password. I argue that PGP is something people would use, while their solution isn’t.
Proton does implement it, but I also have my gripes with Proton. Both of them feel like they want to build a walled garden / avoid being inter-operable.


However, this is likely apocryphal, since it was popularized in the 1940s, almost 50 years after the town was founded. The most likely origin is from nearby Chicken Creek, as noted by Josiah Edward Spurr in 1896, “The creek is so named from the size of the gold, which is about that of chicken feed (corn).”


Hey, that was made at my former uni. And now I’m wondering whether other unis adopted it. It always seemed like a neat solution.


Google search app, and Google News, I’m pretty sure.


Anti-Clickbait. Much more likely to actually make me click, though.


I still find them preferable. Less “sponsored” stuff, more tags for search, etc.


I’m in Germany, and it works pretty well. They’ve got several datacenters around here, and I’ve never had an issue with speed or latency.
I don’t like that they’ve got that evil megacorp vibe, but what big Internet firm doesn’t?
Well, I need to run two separate tunnels to avoid hairpinning issues, so there’s some weirdness, I guess. That’s more down to my services, though.


Interesting. As I said, I’ve never tried YunoHost. I usually work with Podman, and just assign local ports to pods, then route traffic to those ports internally, which seems to work fine.
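Something like this, in case it’s useful (pod name, image, and ports are made-up examples):

```sh
# Create a pod and publish its service only on a localhost port
podman pod create --name web-pod -p 127.0.0.1:8080:80

# Containers joined to the pod share its network namespace,
# so whatever listens on port 80 inside is reachable at 127.0.0.1:8080
podman run -d --pod web-pod --name web docker.io/library/nginx:alpine

# The reverse proxy or tunnel then just points at 127.0.0.1:8080
```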
Anyway, I feel like we won’t be solving OP’s issue here. Still, it’s interesting to see some of the problems people with different setups have to deal with.


Yeah, I feel like we’re missing some info here.
I have to admit that I have no experience with yuno. Always seemed interesting, but not like something that fits into my workflow.
If they’re self-hosting at home (which I’m also doing for some services), I’d presume they’re probably running their stuff on a single machine, so I’m not sure where their router would come into it. The traffic the Cloudflare tunnel process receives should look the same to the router no matter which port it is ultimately sent to, and when it is sent to an address internal to the machine, it shouldn’t pass through the router again.


I presume they mean pointing their Cloudflare tunnel to direct lemmy.example.com to http://localhost:[port], and I don’t think there are any special rules about that port on Cloudflare’s side.
I use tunnels and ports in about that range for all my sites, and don’t have any problems.
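If they’re doing it through cloudflared’s config file rather than the dashboard, the mapping would look roughly like this (tunnel ID, paths, and the port are placeholders):

```sh
# ~/.cloudflared/config.yml -- route the public hostname to a local port
cat > ~/.cloudflared/config.yml <<'EOF'
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: lemmy.example.com
    service: http://localhost:8536
  # catch-all for requests that match no hostname
  - service: http_status:404
EOF

# Run the tunnel with that config
cloudflared tunnel run <tunnel-id>
```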


Didn’t Vivaldi? I don’t really use them cause I mostly avoid non-FOSS software, but I seem to remember them announcing they’d be keeping support.