I just exported my data from BitWarden and imported into ProtonPass. Was pretty easy. Hate the color palette of the app and browser extension though, lol.
This is more complicated than some corporate infrastructures I’ve worked on, lol.
There are plenty of open source projects that distribute executables (i.e. all that use compiled languages). The projects just provide checksums, make their builds reproducible, or provide some other method of verification.
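Checksum verification is easy to do yourself too. A minimal sketch in Python (the file name and contents here are made up; in reality the expected digest comes from the project's release page):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large release archives don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a downloaded release artifact (made-up name and contents)
with open("release.tar.gz", "wb") as f:
    f.write(b"pretend this is a release tarball")

# In practice this digest would be copied from the project's published checksums
expected = hashlib.sha256(b"pretend this is a release tarball").hexdigest()
print(sha256_of("release.tar.gz") == expected)  # True
```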
In practice, you’re going to wind up in dependency hell long before PyPI stops hosting the package. E.g. you need to use package A and package B, but package A depends on v1 of package C, and package B depends on v2 of package C.
And you don’t need to use PyPI or pip at all. You could just download the code directly from the repo and import it into your project (possibly needing to build it if it has binary components). However, if it was on PyPI before, then the source repo likely has all the files pip needs to install it (i.e. a setup.py and any related files).
Yeah, the image bytes look random because they’re already compressed (unless they’re bitmaps, which is unlikely).
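A quick way to see this, using zlib as a stand-in for image compression (PNG/JPEG internals differ, but the effect on byte statistics is the same idea):

```python
import math
import zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ~0 for constant data, approaching 8 for random-looking data."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

raw = b"A" * 10_000            # highly repetitive, bitmap-like data
compressed = zlib.compress(raw)

print(shannon_entropy(raw), shannon_entropy(compressed))   # raw is ~0; compressed is much higher
print(shannon_entropy(compressed) > shannon_entropy(raw))  # True
```

Compression squeezes out the statistical redundancy, so what's left looks like noise.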
OSMC’s Vero V looks interesting. A Pi 4 with OSMC or LibreELEC could work too. I’m probably going to do something like this pretty soon. I just set up an *arr stack last week, and I'm just using my smart TV with the Jellyfin app installed ATM.
My PC running the Jellyfin server can’t transcode some videos though; I'm probably going to put an Arc A310 in it.
I’ve been trying out Logseq the past couple days, which I guess is an alternative to Obsidian (never tried that). Can’t say I really understand the point or appropriate workflow of notes in a “graph” structure rather than a tree structure though (or the purpose of a journal). I like Joplin, but I’ve been having trouble with syncing on Android.
If it’s a modern US Samsung model originally provided by a carrier, you can’t. A long time ago, people used to find/use security exploits for Samsung phones, but I think they just don’t care much anymore since you can buy international versions or other bootloader unlockable phones.
Yeah, I was looking into this recently, and even games like Roblox are labelled Teen (even though I think it’s obvious they target younger children).
I like the Turris Omnia and (highly configurable) Turris Mox. They come with OpenWrt installed.
A long time ago, I used Syncthing to do this. Sometimes there would be file conflicts, which were a pain to resolve, so I switched to BitWarden (using their server for syncing) and have been using it ever since.
IDK, looks like 48GB cloud pricing would be ~$0.35/hr => ~$255/month. Used 3090s go for $700. Two 3090s would give you 48GB of VRAM and cost $1400 (I’m assuming you can do “model-parallel” with Llama; never tried running an LLM, but it should be possible and work well). So, the break-even point would be <6 months. Hmm, but if serverless works well, that could be pretty cheap. Would probably take a few minutes to transfer and load a ~48GB model on every cold start though?
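For anyone who wants to check the math, here's the back-of-the-envelope calc (all figures are the rough estimates from above, not real quotes):

```python
# Rough break-even estimate: buying used GPUs vs. renting ~48GB of cloud GPU.
cloud_rate = 0.35                      # $/hr for ~48GB in the cloud (estimate)
cloud_monthly = cloud_rate * 24 * 30   # ~$252/month running 24/7
gpu_price = 700                        # $ per used 3090 (24GB VRAM each)
hardware_cost = 2 * gpu_price          # $1400 for 48GB total
break_even_months = hardware_cost / cloud_monthly

print(round(cloud_monthly))            # 252
print(round(break_even_months, 1))     # 5.6
```

So a bit under 6 months, assuming the cards run around the clock; at lower utilization the cloud option looks better.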
Lol, good catch.
Wary of the bill. Seems like every bill involving stuff like this is either designed to erode privacy or for regulatory capture.
Edit: spelling
Here are some that I’ve liked (haven’t played them in years though):
It’s also trained on data people reasonably expected would be private (private GitHub repos, Adobe Creative Cloud, etc.). Even if it were just public data, it could still be dangerous. E.g. it could be possible to give an LLM a prompt like, “give me a list of climate activists, their addresses, and their employers” if it was trained on this data or was good at “browsing” on its own. That’s currently not possible due to the guardrails on most models, and I’m guessing they try to avoid training on personal data that’s public, but a government agency could make an LLM without these guardrails. That data could be public, but it would take a person quite a bit of work to track down, compared to the ease and efficiency of just asking an LLM.
I use LLMs just about every day. It’s better than web-search for certain things, and is useful for some coding tasks. I think they’re over-hyped by some people, but they are useful.
GPL’d clients. Everything is encrypted/decrypted on the client before being sent to / after being received from the server. I may later switch to a self-hosted solution, but I don’t want to set one up right now (I was using BitWarden’s cloud before).