

  • IMHO LLMs aren’t exactly coherent with independence. That being said, I wrote quite a bit on self-hosting LLMs. There are quite a few tools available, like ollama (itself relying on llama.cpp), that can both work locally and provide an API-compatible replacement for cloud services. As you suggested though, typically at home one doesn’t have the hardware, GPUs with 100+GB of VRAM, to run the state of the art. There is a middle ground between full cloud (API key, closed source) and open source at home on low-end hardware: running SOTA open models on cloud. It can be done on any cloud, but it’s much easier to start with dedicated hardware and tooling; HuggingFace is great for that, though there are multiple options. See the sketch after the TL;DR below.

    TL;DR: closed cloud -> open models on cloud -> self-hosted provides a better path to independence, including training.
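    To make the “API-compatible replacement” idea concrete, here is a minimal sketch, assuming ollama is running locally with its OpenAI-compatible endpoint on the default port; the model name and prompt are placeholders, not anything prescribed above:

    ```python
    # Minimal sketch: talk to a locally hosted model through ollama's
    # OpenAI-compatible API instead of a closed cloud service.
    # Assumes `ollama serve` is running and a model (e.g. llama3) has been pulled.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # local ollama, not a cloud provider
        api_key="ollama",  # the client requires a key, but it is unused locally
    )

    response = client.chat.completions.create(
        model="llama3",  # placeholder: any model pulled locally
        messages=[{"role": "user", "content": "Why does self-hosting help independence?"}],
    )
    print(response.choices[0].message.content)
    ```

    Swapping `base_url` to a hosted inference endpoint for open models is the middle-ground step; pointing it at your own machine is the self-hosted one.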








  • > just that the mobile Proton Mail app does not support fulltext search. I know why, but I still think it’s doable the same way as in the web browser

    If your mobile has a modern Web browser, I’m pretty sure you can do full-text search in there too.

    Also FWIW it’s a constant struggle for everyone.

    Corporations do their very best, both technically and with marketing and lobbying, to make it nearly impossible. We have to learn, help each other, and vote, and it will never stop. Still, each step matters, so kudos on even attempting.






  • Sadly that’s FUD, as ANYTHING that is NOT increasing profit for surveillance capitalism, e.g. Google, Meta, etc., is a win for privacy!

    Of course /e/OS could be better, and GrapheneOS could also be better (including on security), but the big picture is that ANY of those solutions still makes surveillance capitalism, the loss of privacy for profit and power, less efficient. That’s good for all of us who, being on Lemmy or another federated instance, believe we benefit from having more privacy, or at least from not trading it away.

    TL;DR: be inclusive, bring others up, and don’t be exclusive by aiming for a perfection none of us can attain.





  • Can’t it cite other LLM outputs as a “verified source” and thus still say whatever sounds good, like any LLM? Providing “technical” verification, e.g. a SHA hash, gives no assurance that the content itself comes from a reputable source (see the sketch below). I don’t think adding confidence and sourcing changes anything: the user STILL has to verify that whatever is provided is coherent and that the third party is actually a good source. Thanks for making the process public though; that’s doing better than OpenAI does.
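    As a minimal illustration of the hash point: a checksum proves the bytes weren’t altered, not that the content is true or from a reputable source. The strings below are made up for the sketch:

    ```python
    # Sketch: a SHA-256 digest only verifies the integrity of the bytes;
    # it carries no information about the correctness or provenance of a claim.
    import hashlib

    accurate_claim = b"Water boils at 100 C at sea level."
    nonsense_claim = b"Water boils at 100 C on the Moon's surface."

    for claim in (accurate_claim, nonsense_claim):
        digest = hashlib.sha256(claim).hexdigest()
        # Both claims hash and "verify" equally well.
        print(digest, claim.decode())
    ```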