Oh, I think I tried at one point, and when the guide started talking about inventory, playbooks and hosts in the first step it broke me a little xd
Got any decent guides on how to do it? I guess a Docker Compose file can do most of the work there; I’m just not sure about volume backups and other dependencies in the OS.
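Something like this is roughly what I have in mind, for what it’s worth; the service and volume names are just placeholders and I haven’t tested it, but the idea is that anything worth keeping lives in a named volume:

```yaml
services:
  syncthing:
    image: syncthing/syncthing:latest    # example service, swap in whatever you actually run
    restart: unless-stopped
    ports:
      - "8384:8384"                      # web UI
    volumes:
      - syncthing-data:/var/syncthing    # all persistent state lives in a named volume

volumes:
  syncthing-data: {}                     # this is the only thing that needs backing up
```

Then the OS-level dependency is basically just Docker itself, plus wherever those volumes get backed up to.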
Hmm, I bought a used laptop that I wanted to tinker with Linux and Docker services on, but I kinda wanted to keep the NAS as a separate device to avoid the “all eggs in one basket” situation (also I can’t really connect that many hard drives to it unless I buy some separately powered USB disk hubs or something, if those exist and are any good?)
However, I do see the merit in your suggestion, considering some of the recommendations here are tempting me to get a $500 NAS, and that’s even without the drives… that’s practically more than what my desktop is worth atm.
Could be a regional thing, but Synology HDDs are around 30% more expensive than ‘normal’ WD/Seagate/Toshiba drives at first glance. Maybe it makes up for it in quality and longevity, but afaik HDDs are pretty durable if they’re maintained well, and I imagine having them in RAID1 should be a good enough safety net?
Considering the price of the DiskStation itself, it’s all quickly adding up to the price of a standalone PC, so I’m trying to keep it simple since it’s for a relatively low-performance environment.
gummibando@mastodon.social
Sorry, with ‘docker drives’ I meant ‘Docker volumes or bind mounts’. I don’t have a lot of experience with them yet, so I’m not sure if I’ll run into problems by mapping them directly to a NAS, or if I should keep local copies of the data and then rsync / Syncthing them over to the NAS. I heard you can theoretically even run Docker on the NAS itself, but I’m not sure if that’s a good idea in terms of its longevity or performance.
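To be clear, by “mapping them directly to a NAS” I mean something like an NFS-backed named volume; if I’m reading the Compose docs right it would look roughly like this (the address and share path are made up):

```yaml
services:
  app:
    image: nginx:alpine                  # placeholder service
    volumes:
      - app-data:/usr/share/nginx/html

volumes:
  app-data:
    driver: local
    driver_opts:                         # named volume that actually lives on the NAS over NFS
      type: nfs
      o: "addr=192.168.1.50,rw,nfsvers=4"
      device: ":/volume1/app-data"
```

The alternative being a plain local named volume that I’d rsync / Syncthing over to the NAS on a schedule.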
Is the list of “approved HDDs” just a marketing/support thing or does it actually affect performance?
Thanks for the answers! The DS2xx series looks like something I could start with. The DS223 is a bit cheaper and has 3 USB ports, so that could be useful. I’d guess I don’t need to focus on performance, since it’s mostly just for personal data storage and not some intensive professional work.
If it’s by the developer of NPxSB, why not just update that one, or am I misunderstanding something in the title?
Sure, but nothing theoretically stops them from documenting every single data source fed into training and then crediting it later.
For some reason they didn’t want to do that of course.
Logseq
having everything laid out in a few yaml files that I can tear down and rebuild on a whim
Oh absolutely, but for me docker compose already does that. Kubernetes might be a good learning exercise but I don’t think I need load balancing for 1 user, me, on the home network 😅
What’s the benefit of kubernetes over docker for a home server setup?
I always thought you’re supposed to buy similar drives so the performance is better for some reason (I guess the same logic as when picking RAM?), but this thread is changing my mind; I guess it doesn’t matter after all 👀
These bridges are usually self-hosted, so I’m assuming this isn’t due to infrastructure costs but rather to bridge code maintenance issues? Do they require that much work to stay functional? Are other bridges at risk of abandonment too?
I can still see the value in owning it in this shitty climate, however: maybe I want to keep the patent just so I can distribute it freely, instead of someone else staking their claim on it and then charging people for the same thing?
Someone else is imprinting their definition
I mean yeah, that’s how words work? AA has its meaning because a bunch of people imprinted their meaning on it.
“Open source” has a meaning because a bunch of people imprinted their meaning on it too; it has no inherent connection to the literal words “open” or “source”. The issue is that other people are now imprinting their own meaning onto it and muddying it, instead of following the existing meaning or coming up with their own terminology.
I think the only thing we’re missing is an official OSI definition for open-source-for-reading-but-not-modifying, so we don’t use the same name as for open-source-for-reading-and-modifying code? The issue seems to be that we don’t have OSI-defined names for both, just for one, so people started misusing it unknowingly while businesses misused it maliciously.
Am I understanding correctly that this is truly FOSS and fully offline, with no remote server or model we have to connect to? What was the model trained on? I’m really curious, but I also don’t want to support proprietary, unethical data sourcing.
Muddying the waters is the oldest trick in the book; big corporations have even started doing it with “indie” games. Dave the Diver is stylized and marketed as an indie game despite being developed by a division of Nexon, a multi-billion-dollar company.
I definitely have an issue with it as well; it’s really hard to say whether something is actually FOSS nowadays, and whether it can be taken away or acquired by someone else down the line. That could partly be my fault, since I never bothered to learn about licenses beyond what MIT / Apache 2.0 are, and even those I understand only superficially.
There should absolutely be more pushback for things like these though.
I’m not sure yet what Ansible does that a simple Docker Compose setup doesn’t, but I will look into it more!
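From a quick look, the difference seems to be that Ansible can prepare the host itself before Compose ever runs. A rough sketch of what I mean (hostnames, paths and package names are guesses and will vary by distro):

```yaml
# site.yml
- hosts: homeserver                            # placeholder inventory group
  become: true
  tasks:
    - name: Install Docker and the Compose plugin
      ansible.builtin.apt:
        name: [docker.io, docker-compose-v2]   # package names differ between distros
        state: present
        update_cache: true

    - name: Copy the Compose project to the server
      ansible.builtin.copy:
        src: compose/                          # local folder holding docker-compose.yml
        dest: /opt/stack/

    - name: Bring the stack up
      ansible.builtin.command: docker compose up -d
      args:
        chdir: /opt/stack
```

So Compose describes the containers, while Ansible would describe the machine they run on.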
My real backup test run will come soon, I think: for now I’m moving from Windows to Docker, but eventually I want to get an older laptop, put Linux on it, and just move everything to Docker there instead and pretend it’s a server. The less “critical” stuff I have on my main PC, the less I’m going to cry when I inevitably have to reinstall the OS or replace the drives.
Ahh, so the best Docker practice is to always use external data volumes and back those up separately; seems kinda obvious in retrospect. What about mounting them directly to the NAS (or even running Docker on the NAS)? For local networks the performance is probably good enough, and that way I wouldn’t have to schedule regular syncs and transfers between “local” device storage and the NAS. Dunno if it would have a negative effect on drive longevity compared to just running a daily backup.
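For now I’ll probably keep it simple and dump the volumes to the NAS on a schedule. A minimal sketch of what I’m thinking, assuming this sits in the same compose file as the stack that owns the volume, the volume is called syncthing-data, and the NAS share is mounted at /mnt/nas on the host:

```yaml
services:
  backup:
    image: alpine:3
    profiles: ["backup"]                 # not started by a plain `docker compose up`
    volumes:
      - syncthing-data:/data:ro          # the named volume to back up
      - /mnt/nas/backups:/backup         # destination folder on the NAS share
    # the doubled $$ stops Compose from trying to interpolate the date command
    command: sh -c 'tar czf /backup/syncthing-data-$$(date +%F).tar.gz -C /data .'

volumes:
  syncthing-data: {}                     # already declared if this is in the main file
```

Then a daily `docker compose run --rm backup` from cron would probably do it.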
Does Fluent Reader count? It doesn’t have an amazing interface, but it’s free and simple to use.