It seems like a lot of people want that too, so hopefully they’ll add it soon.
I bought everyone in my family a drink to get them to use Signal. Worked great and we’re still on it.
For the most part, don’t bother trying to sell people on privacy. Show them Giphy integration, stickers, Stories, and the like, and show them that it’s fun. Signal has done great work there in making it “noob-friendly”.
This was recognized at least as far back as 1988:
https://en.wikipedia.org/wiki/Four_Horsemen_of_the_Infocalypse
Browsing with JS disabled by default and expecting most sites to have basic functionality like “display this text”
Whether this guy should be forced to turn over his passwords or not:
https://www.theregister.com/2017/03/20/appeals_court_contempt_passwords/
The appeals court found that forcing the defendant to reveal passwords was not testimonial in this instance, because the government already had a sense of what it would find.
The whole “it’s just autocomplete” line is a comforting mantra. A sufficiently advanced autocomplete is indistinguishable from intelligence. LLMs provably have a world model, just like humans do. They build that model by experiencing the universe through the medium of human-generated text, which is far more limited than human sensory input but has already allowed for some very surprising behavior.
We’re not seeing diminishing returns yet, and in fact we’re going to see some interesting things happen as we start hooking up sensors and cameras as direct input, instead of having these models build their world model indirectly through text alone. Let’s see what happens in five years or so before declaring diminishing returns.
Gary Marcus should be disregarded because he’s emotionally invested in The Bitter Lesson being wrong. He really wants LLMs to not be as good as they already are. He’ll find some interesting research along the lines of “here’s a limitation we found” and turn it into “LLMS BTFO, IT’S SO OVER”.
The research is interesting for helping improve LLMs, but that’s the extent of it. I would not be worried about the limitations the paper found, for a number of reasons (the paper tested o1-mini and llama3-8B, which are much smaller models with much more limited capabilities; GPT-4o got the problem correct when I tested it, without any special prompting techniques or anything). Until we hit a wall and really can’t find a way around it for several years, this sort of research falls into “huh, interesting” territory for anybody who isn’t a researcher.
Gary Marcus is an AI crank and should be disregarded
https://en.wikipedia.org/wiki/Tom_Scott_(YouTuber)