If I may ask: how practical is monitoring / administering rootless quadlets? I’m running rootless podman containers via systemd for home use, but splitting the single rootless user into multiple has proven to be quite the pain.
Going by your initial comment, the whole premise of this discussion was technological progress and growth. That means refining existing models and training new ones, which is going to cost a lot of energy. The way this industry is going, even privacy-conscious usage of open source models will contribute to the insane energy usage by creating demand and popularizing the technology.
Do we really need to grow our energy consumption as a society by such a disproportionate amount?
With Blu-ray rips, I don’t really see any way to avoid that unfortunately, unless someone else has already added the hashes for your release. Most people use it to scan their encoded releases, which will (in most cases) have already been added to AniDB by the release group. I’m a bit surprised, though, that none of your rips are recognized. Have you checked the AniDB pages for your series to see if anyone uploaded hashes for Blu-ray rips?
Grouping seasons into a series folder doesn’t work well in some cases, because that’s not the way they are released in Japan. A new season is (most of the time) effectively an entirely new show entry. Show seasons are mostly a North American thing. No matter which software you use, there are always going to be some minor issues if you group seasons into one entry.
Shoko compares a file’s ED2K hash against the AniDB database. The filename doesn’t matter for automatic detection. Have a look at the log to see if there are any issues. It’s entirely possible that AniDB just doesn’t have the hashes for the raw Blu-ray rip. In that case you can either manually link them in Shoko, connecting the AniDB episode ID to the file hash, or create new file entries on AniDB with your specific hashes.
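If you want to check a hash yourself before linking, any tool that computes ED2K sums will do; a minimal sketch with rhash, assuming it’s installed and using a placeholder filename:

```
# Compute the ED2K hash of a rip, then compare it against the
# file entries listed on the AniDB page for that episode.
rhash --ed2k "Some Show - 01.mkv"
```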
Shoko also has rate limits. The problem is that AniDB does rate limiting in an extremely stupid way for a UDP API and doesn’t even have the decency to define clear time limits.
It’s always been a “whole ass computer”, not some kind of simple storage device.
Pretty sure that the registry path for official images is “library” (at least it used to be). So it should be “docker.io/library/debian”, though I can’t double check at the moment.
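Easy to verify once you’re at a terminal; the tag below is just an example:

```
# Official images live under the "library" namespace on Docker Hub,
# so the fully qualified reference looks like this:
docker pull docker.io/library/debian:stable
```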
You mean hiding their public IP? I guess that’s a feature.
That’s what a firewall and a DNS service are for, respectively, imho. As long as you get an IPv6 prefix from your ISP, you can expose as many devices or services to the public as you want, just by allowing incoming traffic to a listening port. That was sort of the whole point of having a large enough address space when moving away from v4. Maybe it’s just me, but reading stuff about “private AI” on a website where the relation to the product is not immediately obvious makes me question their legitimacy.
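To illustrate the “just allow incoming traffic” part, a minimal nftables sketch, assuming the usual inet filter table with an input chain and using a placeholder port:

```
# Permit inbound IPv6 connections to one service port;
# everything else is still handled by the existing input policy.
nft add rule inet filter input meta nfproto ipv6 tcp dport 8443 accept
```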
The more I look at their site, the more it reads like a sales pitch for IPv6, which sounds kind of expensive at $6-10 a month.
What problem does this solve? Do ISPs not provide IPv6 prefixes anymore?
I would fucking hope not. TERM is explicitly passed along as the only exception, which is the only sensible default for temporary privilege elevation in a shell.
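For reference, this is roughly the sudoers equivalent of that default; a sketch, not a drop-in (exact entries vary by distro):

```
# /etc/sudoers (edit via visudo): wipe the caller's environment,
# but keep TERM so interactive programs render correctly.
Defaults env_reset
Defaults env_keep += "TERM"
```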
It’s a Phoronix article, there’s never more than two paragraphs and a quote in there anyway.
That script is a wrapper around a single call to qrencode. I’ve been making QR codes from WireGuard config files in the terminal at least since PiVPN existed. There are plenty of guides on how to do this as well.
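The whole trick, assuming qrencode is installed and using a placeholder config path:

```
# Render a WireGuard client config as a QR code right in the
# terminal, ready to scan with the mobile app.
qrencode -t ansiutf8 < /etc/wireguard/wg0.conf
```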
I get what you’re saying, but this feels like a weird question to ask in a community for selfhosting enthusiasts.
https://en.m.wikipedia.org/wiki/Showreel
I’ve heard of it before and I don’t work in advertising or video production. Why is everyone focusing on this term like these guys invented it?
It’s because you forgot how communication between humans works. The primary reaction to your post and comments is confusion. In other words: if you obfuscate the content then you can’t complain when people don’t discuss it.
Doubt.
Cool attitude. In my experience, most docker/docker-compose setups will work transparently with podman/podman-compose. If you want to tighten security, lock down resource access, run rootless (daemon and inside the container), or integrate with SELinux, then you might need to put in extra work, just like you would if you used docker.
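In the straightforward case, the switch is as boring as it sounds; the compose file is whatever you already have:

```
# Bring up an existing compose project rootless, as a regular user;
# no daemon and no root required.
podman-compose -f docker-compose.yml up -d
```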
Why re-invent the wheel?
They aren’t. Podman is mostly just a docker-compatible CLI wrapper around an existing OCI runtime (runc by default). It also lets you manage pods and export k8s yaml, which is arguably the more important industry standard at this point. Podman was also completely usable in rootless mode way before Docker support for that was on the table, which was the main reason I switched years ago. Podman development effort also yielded buildah, which is a godsend if you want to build container images in a containerized environment, without granting docker socket access (which is a security nightmare) or using some docker in docker scenario (which is just a nightmare in general).
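As a sketch of the k8s angle (pod name is a placeholder; newer releases spell these podman kube generate / podman kube play):

```
# Export a running pod as Kubernetes YAML...
podman generate kube mypod > mypod.yaml
# ...then replay it later, or on another machine, from that YAML.
podman play kube mypod.yaml
```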
I wouldn’t recommend Docker for a production environment either, but there are plenty of container-based solutions that use OCI compatible images just fine and they are very widely used in production. Having said that, plenty of people run docker images in a homelab setting and they work fine. I don’t like running rootful containers under a system daemon, but calling it a giant mess doesn’t seem fair in my experience.