There’s an extraordinary amount of hype around “AI” right now, perhaps even greater than in past cycles, where we’ve seen an AI bubble about once per decade. This time, the focus is on generative systems, particularly LLMs and other tools designed to generate plausible outputs that either make people feel the response is correct, or are good enough for domains where correctness doesn’t matter.
But we can tell the traditional tech industry (the handful of giant tech companies, along with startups backed by the handful of most powerful venture capital firms) is in the midst of building another “Web3”-style froth bubble because they’ve again abandoned one of the core values of actual technology-based advancement: reason.



Today’s AI is way worse than when ChatGPT was first released… it is way too censored.
But either way, I never considered LLMs to be A.I., even if they have the potential to be great.
deleted by creator
Which one is your favorite? I might buy some hardware to be able to run them soon (I only have a laptop right now that isn’t the greatest, but I’m willing to upgrade).
deleted by creator
Thanks a lot, man. I will look into it, but I only have an on-board GPU… not a big deal if I need to upgrade (I spend more on hookers and blow weekly).
deleted by creator
It depresses me that we have to invent new silly acronyms for things we already had acronyms for in the first place, just because we are too stupid to use our vocabulary appropriately.
AI is what “AGI” means. Just fucking AI. It has been for more than half a century; it is sensible, and it is logical.
However, in spite of its name, the current technology is not really capable of generating information, so it isn’t capable of actual “intelligence”. It is pseudo-generation, which it achieves by sequencing and combining input (AKA training) data. So it does not generate new information, but rather new variations of existing information. For this reason, I would prefer the name “Artificial Adaptability” (“AA”, or “A2”) to be used in lieu of “AI”, or “Artificial Intelligence” (on the grounds that the latter means something else entirely).
Edit: to the people it may concern: stop answering this with “Artifishual GeNeRaL intelligence”. I know what AGI means. It takes all of 3 seconds to do an internet search, and it isn’t even necessary: everyone has known for months. I did not bother to spell it out, because I did not imagine that anyone would be simple enough to take literally the first word starting with “g” from my comment and roll with that in a self-important diatribe on what they imagined I was wrong about. So if you feel the need to project what you imagine I meant, and then correct that, please don’t. I’m sad enough already that humanity is failing, I do not need more evidence.
Edit 2: “your opinion only matters if you have published papers”. No. It is also a really stupid argument from authority. Besides, anyone with enough time on their hands can get papers published. It is not a guarantee of quality, but merely proof that you LARPed in academia. The hard part isn’t writing, it is thinking. And as I wrote before, I already know this, I need no more proof, thank you.
deleted by creator
You’ve shown your IQ right there. No time to waste with you. Goodbye.
If you haven’t published a few papers then your preference in acronyms is irrelevant.
AI comprises everything from pattern recognition like OCR and speech recognition to the complex transformers we know now. All of these are specialized, in that they can only accomplish a single task, such as recognizing graffiti or generating graffiti. AGI, artificial general intelligence, would be flexible enough to do all of those things, and is currently considered the holy grail of AI.
It doesn’t matter what you consider, they are absolutely a form of AI. In both definition and practice.
Tell me how… they are dumb as fuck and follow a stupid algo… the data makes them somewhat smart, that’s it. They don’t learn anything by themselves… I could do that with a few queries and a database.
You could do that with a few queries and a database lol. How do you think LLMs work? It seems you don’t know very much about them.
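To make the contrast concrete, here is a hypothetical toy sketch (deliberately not how an LLM works) of the “database and a few queries” approach taken literally: a lookup table built from seen text can only regurgitate continuations it has literally stored, whereas an LLM compresses patterns into learned weights and can respond to contexts it never saw verbatim. All names and the tiny corpus below are made up for illustration.

```python
from collections import defaultdict

# Toy "language model" as a pure lookup table built from a tiny corpus.
# This is the "database and queries" approach: store what you saw,
# retrieve it on demand -- nothing is generalized.
corpus = "the cat sat on the mat".split()
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

print(table["the"])   # continuations literally seen after "the"
print(table["dog"])   # empty: "dog" was never seen, so no answer at all
```

The lookup table returns `['cat', 'mat']` for a word it has stored and nothing for a word it hasn’t, which is exactly the limitation the comment above is pointing at: retrieval reproduces, it doesn’t generalize.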
deleted by creator
deleted by creator
This isn’t an argument in the way you think it is. Something being “dumb” doesn’t exclude it from possessing intelligence. By most metrics, toddlers are “dumb”, but no one would seriously suggest that a toddler lacks intelligence in the literal sense. And having low intelligence is not the same as lacking it.
Can you even define intelligence? I would honestly hazard a guess that by “intelligence” you really mean sapience. The discussion of what counts as intelligence, sapience, or sentience is far more involved than you’d expect.
Our brains literally run on an algorithm.
And where’s the intelligence in people without the data we learn?
I don’t know what you even mean by this. Everything learns with external input.
The hell you could! This statement demonstrates you have absolutely no clue what you’re talking about. LLMs learn and process information in a manner loosely modeled on how biological neurons function. We’re just using digital computation instead of analogue (the way all biology works).
LLMs have regularly demonstrated genuine creativity and even some emergent properties. They are able to learn certain “concepts” (in quotes because that’s not quite the right word) that we as humans intrinsically know: things like “a knight in armour” most likely referring to a man, because historically it was almost entirely men who became knights, outside of a few recorded instances.
It can also learn rough distances between cities and locations from the text alone, like New York City and Houston being closer to each other than either is to Paris.
No, you 100% absolutely in no way ever could do the same thing with a database and a few queries.
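To illustrate the “learning, not storing” point above, here is a minimal, hypothetical sketch of a single artificial neuron adjusting its weights from examples via gradient descent. This is a vastly simplified stand-in for what happens at scale in an LLM, and every value in it is made up for illustration; the point is only that the answers end up encoded in learned weights, not stored as rows to be queried.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# One artificial neuron learning logical OR from examples.
w = [random.uniform(-1, 1) for _ in range(2)]  # two input weights
b = 0.0                                        # bias
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

for _ in range(5000):
    for (x1, x2), y in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # gradient of squared error through the sigmoid
        grad = (out - y) * out * (1 - out)
        w[0] -= 0.5 * grad * x1
        w[1] -= 0.5 * grad * x2
        b -= 0.5 * grad

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # expected to round to [0, 1, 1, 1] after training
```

Nothing in the trained model is a stored (input, output) row; the behavior lives entirely in `w` and `b`, which is the (very scaled-down) sense in which weights “learn” rather than look things up.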
*too
Sorry for all my spelling mistakes… I fixed a few. I think everyone should spee(a)k binary anyways.
I’m sorry I’m a spelling spaz.