• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: July 2nd, 2023



  • I agree that the author didn’t do a great job explaining, but they are right about a few things.

    Primarily, LLMs are not truth machines. That is just flatly and plainly not what they are. No researcher, not even OpenAI, makes such a claim.

    The problem is the public perception that they are. Or that they almost are. Because a lot of the time, they’re right. They might even be right more frequently than some people’s dumber friends. And even when they’re wrong, they sound right; the wrong answer still sounds smarter than most people’s smartest friends.

    So, I think the point is that there is a perception gap between what LLMs are and what people THINK they are.

    As long as the perception is more optimistic than the reality, a bubble of some kind will exist. But just because there is a “reckoning” somewhere in the future doesn’t imply it will crash to nothing. It just means investment will align more closely with realistic expectations as it becomes clearer what those realistic expectations even are.

    LLMs are going to revolutionize and also destroy many industries. They will absolutely, fundamentally change the way we interact with technology. No doubt… but for applications that strictly demand correctness, they are not appropriate tools. And investors don’t really understand that yet.


  • It will be interesting to see how a distributed system solves this problem.

    The issue really comes down to the infrastructure costs. The fediverse is by design significantly less efficient with hardware than a centralized system. It isn’t that it’s difficult to scale, it’s just that it’s expensive to scale. And since the hardware is maintained by the generosity of donations…

    This is offset by the higher interest in volunteer labour, though.

    I think the “solution” is just to accept that instances will burst in and out of existence (and favour) based on time and generosity.



  • I have no idea how poorly the authors of the study communicated their work because I haven’t read the study.

    Jumping to the conclusion that it’s junk because some news blogger wrote an awkward and confusing article about it isn’t fair at all. The press CONSISTENTLY writes absolute trash on the basis of scientific papers. That’s like, science reporting 101.

    And, based on what you’re saying, this still sounds completely different. RNA sequencing may be a mechanism behind the “why”, but you would knock my fucking socks off if you could use RNA to predict the physical geometry of a fingerprint. If you could say: we have a fingerprint, and we have some RNA, do they belong to the same person? That would be unbelievably massive.


    Right, so this methodology is a completely different approach. I don’t think it’s fair to call snake oil on this specifically with the justification that other models (using an entirely different approach) were snake oil.

    Again, I’m not saying whether it’s real or not. I’m just saying that it’s appropriate to try new approaches to examine things we already THINK we know, and to be prepared to carefully and fairly evaluate new data that calls into question things we thought we knew. That’s just science.


  • I mean, the research is the research and the data is the data.

    If there are specific critiques of the methodology of the research that call the validity of the observed data into question, that’s fair. “It’s ‘well known’ that…” isn’t a scientific argument. It’s actually the exact opposite; it’s literally religion.

    Also, the conclusions being drawn from the data by the researchers or 3rd parties might be a problem.

    To be fair, ML today is unrecognizable compared to what it was in 2008. And I’d be willing to bet the model your cousin was exposed to wasn’t a machine learning model at all, but some handcrafted marker analysis with dubious justification and a great sales team.

    The great thing about ML science is that it’s super accessible. This was an undergrad project. The next step, to establish the validity, really just requires a larger data set. If it’s bogus, that’ll come out. If it’s valid, that’ll come out too. The cost of reproducibility is so low that even hobbyists can verify the results (a rough sketch of what that kind of check could look like is below).
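
    To be concrete about how cheap that check is, here’s a minimal sketch. Everything in it is an assumption on my part, not what the study actually did: the data is synthetic stand-in data, and the feature layout, pair construction, and logistic-regression probe are just the simplest thing a hobbyist could run. Swap in real features from a public fingerprint dataset and the same loop tells you whether any per-person signal shows up at all.

    ```python
    import numpy as np
    from itertools import combinations
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 50 people x 10 fingers, one 32-dim feature vector per print.
    # Replace with real features extracted from an actual fingerprint dataset.
    n_people, n_fingers, dim = 50, 10, 32
    person_ids = np.repeat(np.arange(n_people), n_fingers)
    finger_ids = np.tile(np.arange(n_fingers), n_people)
    person_signal = rng.normal(size=(n_people, dim))  # a shared per-person component
    features = person_signal[person_ids] + rng.normal(scale=2.0, size=(n_people * n_fingers, dim))

    # Build pairs of prints from DIFFERENT fingers:
    # label 1 = same person, label 0 = different people.
    X, y = [], []
    for i, j in combinations(range(len(features)), 2):
        if person_ids[i] == person_ids[j] and finger_ids[i] == finger_ids[j]:
            continue  # in real data, skip repeat impressions of the same finger
        X.append(np.abs(features[i] - features[j]))
        y.append(int(person_ids[i] == person_ids[j]))
    X, y = np.array(X), np.array(y)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("ROC-AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    # Well above 0.5 => a per-person signal exists in these features;
    # roughly 0.5 => it doesn't. (A stricter test would hold out whole people.)
    ```

    The point isn’t this exact setup; it’s that a laptop and a public dataset are enough to see whether the claimed correlation replicates.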


  • That is not at all what this article is about. The headline is terrible.

    The research is suggesting that there may exist “per-person” fingerprint markers, whereas right now we only use “per-finger” markers. It’s suggesting that they could look at two different fingers (left index and right pinky, for example) and say “these two fingerprints are from the same person”.

    When they say “not unique”, they mean “there appear to be markers common to all fingerprints of the same person”.


  • The title of the article is so misleading it’s pretty much wrong.

    If you read the article, what the researchers did was train an AI model that appears to be able to associate different fingerprints of the SAME person.

    Example: Assume your fingerprints are not on record. You do a crime and accidentally leave a print of your left index finger at the scene.

    THEN you do another crime and leave your RIGHT MIDDLE fingerprint at the scene.

    The premise is that the AI model appears to be able to correlate DIFFERENT prints from the SAME person.

    So, in the context of the research, they aren’t saying that fingerprints aren’t unique; they’re saying there is reason to believe that there exist fingerprint markers that might be present on a per-person basis, rather than strictly a per-finger basis.

    Terrible headline, terribly written article, and IMO not nearly enough evidence that the correlation actually exists, and even less that it’s reliable enough to be used as evidence.

    That being said, based on the comments section I think most people didn’t really grok what this research was, which is understandable given the terrible headline. (A rough sketch of the kind of same-person matching decision being described follows below.)
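
    Here’s what that decision boils down to, as a hedged sketch only: the embed() function, the threshold, and the random “images” are all placeholders I made up, and the actual study trained a deep network rather than anything this simple. It just shows the shape of the question: given two prints from different fingers, how strongly do they look like they came from the same hand’s owner?

    ```python
    import numpy as np

    def embed(print_image: np.ndarray) -> np.ndarray:
        """Placeholder for a learned fingerprint embedding (the study used a deep network)."""
        v = print_image.astype(float).ravel()[:64]
        return v / (np.linalg.norm(v) + 1e-9)

    def same_person_score(print_a: np.ndarray, print_b: np.ndarray) -> float:
        """Cosine similarity between the two prints' embeddings."""
        return float(np.dot(embed(print_a), embed(print_b)))

    THRESHOLD = 0.8  # in practice calibrated on labelled pairs, not picked by hand

    # Stand-ins for the two crime-scene prints from the example above.
    scene1_left_index = np.random.rand(96, 96)
    scene2_right_middle = np.random.rand(96, 96)

    score = same_person_score(scene1_left_index, scene2_right_middle)
    print(f"similarity={score:.2f} -> same person? {score >= THRESHOLD}")
    # Note: at best this links two scenes to one unknown person; it doesn't say
    # who that person is, and nothing this crude would hold up as evidence.
    ```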




  • I think people underestimate the challenges involved when building software systems tightly coupled to the underlying hardware (like if you are a team tasked with building a next gen server).

    Successful companies in the space don’t underestimate it though, the engineers who do the work don’t underestimate it, and Linus doesn’t underestimate it either.

    The domain knowledge in your org required to mitigate the business risk isn’t trivial. The value proposition always needs to be pretty juicy to overcome the inertia caused by institutional familiarity. Like, can we save a few million on silicon? Sure. Do we think we understand the challenges well enough to keep our hardware release schedules without taking shortcuts that will result in reputational impact? Do we think we have the right people in place to oversee the switch?

    Over and over again, it comes back to “is it worth it”, and that’s a much more complex question to answer than just picking the cheaper chips.

    I imagine at this point there is probably a metric fuckton of enterprise software that strictly dictates it must be run on x86, even if it doesn’t have to be. If you stray from the vendor hardware requirements, bullshit or not, you’ll lose your support. There is likely friction in some consumer segments on the uptake as well.



  • Big cloud providers will take the opportunity to move to ARM as it is cheaper for them.

    The cloud isn’t a literal ephemeral cloud. It’s still a physical thing with physical devices physically linked. Physical RAM in physical slots with physical buses and physical chips (not just CPUs, many other ICs are in those machines too). The complexity of the arrangement and linkages of all that physical hardware is incredible.

    Nobody is out there writing enterprise server firmware in Java. How can you have a Java VM when the underlying components of the physical device don’t have the necessary code to offer the services required by the VM to run?

    To be incredibly blunt, and I don’t say this to be rude, your questions and assertions are incredibly ignorant. So much so that they’re essentially nonsense. It’s like asking “why do we still even have water when we have Monster Energy drink?” It demonstrates such a fundamental misunderstanding of the premise that it’s honestly difficult to even know where to begin explaining how faulty the line of thinking is.

    Linus isn’t talking about JS developers at all. Even a little bit. I promise you, you would not enjoy hearing his unfiltered thoughts on JS developers.

    He’s talking about the professional engineers who design, build, and write firmware for enterprise-grade servers. There is no overlap between JS coders and those engineers.