• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: September 7th, 2023

  • I wouldn’t recommend Docker for a production environment either, but there are plenty of container-based solutions that use OCI-compatible images just fine, and they are very widely used in production. Having said that, plenty of people run Docker images in a homelab setting and they work fine. I don’t like running rootful containers under a system daemon, but calling it a giant mess doesn’t seem fair in my experience.



  • Going by your initial comment, the whole premise of this discussion was technological progress and growth. That means refining existing models and training new ones, which is going to cost a lot of energy. The way this industry is going, even privacy-conscious usage of open source models will contribute to the insane energy usage by creating demand and popularizing the technology.



  • With Blu-ray rips, I don’t really see a way to avoid that, unfortunately, unless someone else has already added the hashes for your release. Most people use it to scan their encoded releases, which will in most cases have already been added to AniDB by the release group. I’m a bit surprised, though, that none of your rips are recognized. Have you checked the AniDB pages for your series to see if anyone has uploaded hashes for Blu-ray rips?



  • Shoko compares a file’s ED2K hash against the AniDB database. The filename doesn’t matter for automatic detection. Have a look at the log to see if there are any issues. It’s entirely possible that AniDB just doesn’t have the hashes for the raw Blu-ray rip. In that case you can either manually link them in Shoko, connecting the AniDB episode ID to the file hash, or create new file entries on AniDB with your specific hashes.
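
    For context, the ED2K hash that Shoko and AniDB rely on is just MD4 over 9,728,000-byte chunks: a file that fits in one chunk uses that chunk’s MD4 directly, while larger files use MD4 over the concatenated per-chunk digests. Here is a rough Python sketch of that idea (not Shoko’s actual code; whether hashlib exposes MD4 depends on your OpenSSL build, and some clients handle files that are an exact multiple of the chunk size slightly differently):

    ```python
    import hashlib
    import sys

    CHUNK_SIZE = 9_728_000  # ED2K chunk size in bytes


    def ed2k_hash(path: str) -> str:
        """Return the ED2K hash of a file as a hex string."""
        chunk_digests = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                # MD4 availability depends on the OpenSSL build backing hashlib.
                chunk_digests.append(hashlib.new("md4", chunk).digest())

        if len(chunk_digests) == 1:
            # Single-chunk file: the ED2K hash is just that chunk's MD4.
            return chunk_digests[0].hex()
        # Multi-chunk file: MD4 over the concatenated per-chunk digests.
        # (Some older clients append an extra empty-chunk digest when the file
        # size is an exact multiple of CHUNK_SIZE; that variant is omitted here.)
        return hashlib.new("md4", b"".join(chunk_digests)).hexdigest()


    if __name__ == "__main__":
        print(ed2k_hash(sys.argv[1]))
    ```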






  • That’s what a firewall and a DNS service are for, respectively, imho. As long as you get an IPv6 prefix from your ISP, you can expose as many devices or services to the public as you want, just by allowing incoming traffic to a listening port (see the sketch below). That was sort of the whole point of having a large enough address space when moving away from v4. Maybe it’s just me, but reading stuff about “private AI” on a website where the relation to the product is not immediately obvious makes me question their legitimacy.

    The more I look at their site, the more it reads like a sales pitch for IPv6, which sounds kind of expensive at $6-10 a month.
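
    To illustrate the “listening port” point: with a globally routed prefix, reachability is nothing more than a service bound to an IPv6 address plus a firewall rule permitting inbound traffic to that port. A minimal sketch (port 8080 and the one-line responder are made up for illustration):

    ```python
    import socket

    # Bind to every IPv6 address on the host. With a global prefix from the
    # ISP and a firewall rule allowing inbound TCP to this port, the device
    # is reachable directly; no NAT port forwarding involved.
    with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("::", 8080))
        srv.listen()
        conn, addr = srv.accept()  # addr[0] is the client's IPv6 address
        with conn:
            conn.sendall(b"hello over IPv6\n")
    ```

    The firewall rule is the only gatekeeper here; DNS just gives the address a memorable name.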









  • Doubt.

    Cool attitude. In my experience, most docker/docker-compose setups will work transparently with podman/podman-compose. If you want to tighten security, lock down resource access, run rootless (both the daemon and inside the container), or integrate with SELinux, then you might need to put in extra work, just like you would if you used docker.

    Why re-invent the wheel?

    They aren’t. Podman is mostly just a Docker-compatible CLI wrapper around an existing OCI runtime (runc by default). It also lets you manage pods and export k8s YAML, which is arguably the more important industry standard at this point. Podman was also completely usable in rootless mode way before Docker support for that was on the table, which was the main reason I switched years ago. The Podman development effort also yielded Buildah, which is a godsend if you want to build container images in a containerized environment without granting docker socket access (which is a security nightmare) or using some docker-in-docker scenario (which is just a nightmare in general).