Right, I think the key difference is that we have a feedback loop and we’re able to adjust our internal model dynamically based on it. I expect that embodiment and robotics will be the path towards general intelligence. Once you stick the model in a body and it has to deal with the environment, and learn through experience, then it will start creating a representation of the world based on that.


It seemed pretty clear to me. If you have any clue on the subject, then you presumably know about the interconnect bottleneck in traditional large models. The data moving between layers often consumes more energy and time than the actual compute operations, and the surface area for data communication explodes as models grow to billions of parameters. The mHC paper introduces a new way to link neural pathways by constraining hyper-connections to a low-dimensional manifold.
In a standard transformer architecture, every neuron in layer N potentially connects to every neuron in layer N+1. This is mathematically exhaustive, which makes it computationally inefficient. Manifold-constrained connections operate on the premise that most of this high-dimensional space is noise. DeepSeek basically found a way to significantly reduce networking bandwidth for a model by using manifolds to route communication.
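If it helps, here's the intuition in toy form. This is my own sketch, not DeepSeek's actual mHC formulation: the idea is just that routing the cross-layer signal through a small k-dimensional space is far cheaper than a full d×d connection, both in parameters and in data that has to move.

```python
# Toy sketch of the low-dimensional routing intuition (illustrative only,
# not the actual mHC architecture). Numbers are made up.
import torch
import torch.nn as nn

d, k = 4096, 64  # hidden width vs. "manifold" dimension

# Unconstrained cross-layer link: O(d^2) parameters, full-width traffic.
dense_link = nn.Linear(d, d, bias=False)

# Constrained link: project down to k dims, then back up.
# Only O(d*k) parameters, and only a k-dim vector crosses the boundary.
down = nn.Linear(d, k, bias=False)
up = nn.Linear(k, d, bias=False)

x = torch.randn(1, d)
y_dense = dense_link(x)     # full-rank routing
y_manifold = up(down(x))    # low-dimensional routing
```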
Not really sure what you think the made-up nonsense is. 🤷


I’m personally against copyright as a concept and absolutely don’t care about this aspect, especially when it comes to open models. The way I look at it is that the model is unlocking this content and making this knowledge available to humanity.


Ah yes, they must be stealing IP from the future when they publish novel papers on things nobody’s done before!


I’m actually building LoRAs for a project right now, and found that qwen3-8b-base is the most flexible model for that. The instruct version is already biased toward following prompts and agreeing, but the base model is where it’s at.
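For context, the setup is roughly this. A minimal sketch with HuggingFace transformers + peft; I think the hub id is Qwen/Qwen3-8B-Base but double-check it, and the ranks/target modules here are just placeholder values, not recommendations:

```python
# Minimal LoRA setup sketch (placeholder hyperparameters, adjust for your project).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen3-8B-Base"  # base model, not the instruct variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

lora_config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # which attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only adapter weights train
```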


Yup, and this is precisely why it was such a monumental mistake to move away from GPL-style copyleft to permissive licenses. All that achieved was to allow corporations to freeload.


I very much agree there, but think of how much worse it would be if we were stuck dealing with proprietary corporate tech instead.


How is that wishful thinking? Open models are advancing just as fast as proprietary ones, and they’re now getting much wider usage as well. There are also economic drivers that favor open models even within commercial enterprise. For example, here’s the Airbnb CEO saying they prefer Qwen over OpenAI because it’s more customizable and cheaper.
I expect we’ll see the exact same thing happen that we saw with Linux-based infrastructure muscling out proprietary stuff like Windows servers and Unix. Open models will become foundational building blocks that people build things on top of.


Europe is pretty much entirely dependent on US platforms having failed to develop their own the way China and Russia did. There’s no European Yandex or Baidu equivalent, no European Alibaba, and so on.


maybe it’s a vacuum lifter :)


Honestly, I suspect it makes very little difference in practice which one you’re using if you’re going to communicate with people outside Proton. If I use Gmail, and you send me an email from your Proton account, guess what happens.


It’s amazing how people just can’t learn the lesson that the problem isn’t that a particular oligarch owns a public forum, but that public forums are privately owned in the first place.


I’m betting within a decade if it actually works


yup, email is just fundamentally not the right tool for this


Right, which really suggests that email is not the right medium if you want genuine privacy.


Right, understanding your threat model is important. Then you can make a conscious choice about the trade-offs of using a particular service, and you understand what your risks are.


Metadata tracking should be very concerning to anyone who cares about privacy because it inherently builds a social graph. The server operators, or anyone who gets that data, can see a map of who is talking to whom. The content is secure, but the connections are not.
Being able to map out a network of relations is incredibly valuable. An intelligence agency can take the map of connections and overlay it with all the other data they vacuum up from other sources, such as location data, purchase histories, and social media activity. If you become a “person of interest” for any reason, they instantly have your entire social circle mapped out.
Worse, the act of seeking out encrypted communication is itself a red flag. It’s a perfect filter: “Show me everyone paranoid enough to use crypto.” You’re basically raising your hand. So, in a twisted way, tools for private conversations that share their metadata with third parties are perfect machines for mapping associations and identifying targets such as political dissidents.
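To see how little it takes, here’s a toy sketch with made-up data: a handful of (sender, recipient) pairs, no message content at all, and the social graph falls right out.

```python
# Toy illustration: a social graph from nothing but metadata.
# No message content is needed, only who messaged whom (made-up data).
from collections import defaultdict

# The kind of metadata a server operator sees: sender, recipient, timestamp.
message_log = [
    ("alice", "bob",   "2024-01-01T09:00"),
    ("bob",   "alice", "2024-01-01T09:05"),
    ("alice", "carol", "2024-01-02T13:10"),
    ("carol", "dave",  "2024-01-03T20:45"),
]

graph = defaultdict(set)
for sender, recipient, _ts in message_log:
    graph[sender].add(recipient)
    graph[recipient].add(sender)

# If "alice" becomes a person of interest, her circle falls out instantly.
print(sorted(graph["alice"]))  # ['bob', 'carol']
```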


Open sourcing these things would definitely be the right way to go, and you’re absolutely right that it’s a general solver that would be useful in any scenario where you have a system that requires dynamic allocation.
or maybe it’s the capitalist relations and not technology that’s the actual problem here