No, it’s opt-in. If you do nothing you won’t have it.
They’re not “pushing their Recall shit whether we like it or not”, they’re explicitly making it opt-in. They gave a fuck about their users’ complaints and made a bunch of modifications to it.
You may still not like it, but give them some credit.
That’s not what they’re arguing, not even close.
And unfortunately, this article is also just a response to media clickbait, not the discussion piece it tries to look like.
And becomes new clickbait in the process.
I should note that there are also cryptocurrencies that don’t use proof of work. Ethereum, the second-largest, switched away from proof of work two and a half years ago.
not some fucking investors and shareholders that probably kept pressuring CS for the last several years to reduce costs and increase revenue,
This is presumably part of what would be at issue in court. The shareholders are claiming they were lied to. We’ll see how that holds up.
CrowdStrike (CRWD.O) has been sued by shareholders who said the cybersecurity company defrauded them by concealing how its inadequate software testing could cause the July 19 global outage that crashed more than 8 million computers.
In a proposed class action filed on Tuesday night in the Austin, Texas federal court, shareholders said they learned that CrowdStrike’s assurances about its technology were materially false and misleading when a flawed software update disrupted airlines, banks, hospitals and emergency lines around the world.
Basically, the company advertised itself as being one way to the shareholders, they bought in on that basis, and then it turned out they were misrepresenting themselves. Presumably they’re suing the company and not the executives personally because that’s where the money is.
Note that simply owning the shares doesn’t mean that it’s already “their money.” If I buy a share in a company I can’t walk up to it and demand that they give me a portion of the cash from the register. It’s more complicated than that and lawsuits like this are part of that complexity.
That would depend entirely on why OpenAI might go under. The linked article is very sparse on details, but it says:
These expenses alone stack miles ahead of its rivals’ expenditure predictions for 2024.
Which suggests this is likely an OpenAI problem and not an AI in general problem. If OpenAI goes under the rest of the market may actually surge as they devour OpenAI’s abandoned market share.
AI engineers are not a unitary group with opinions all aligned. Some of them really like money too. Or just want to build something that changes the world.
I don’t know of a specific “when” where a bunch of engineers left OpenAI all at once. I’ve just seen a lot of articles over the past year with some variation of “<company> is a startup founded by former OpenAI engineers.” There might have been a surge when Altman was briefly ousted, but that was brief enough that I wouldn’t expect a visible spike on the graph.
We are talking specifically about OpenAI, though.
Well, my point is that it’s already largely irrelevant what they do. Many of their talented engineers have moved on to other companies, some new startups and some already-established ones. The interesting new models and products are not being produced by OpenAI so much any more.
I wouldn’t be surprised if “safety alignment” is one of the reasons, too. There are a lot of folks in tech who really just want to build neat things and it feels oppressive to be in a company that’s likely to lock away the things they build if they turn out to be too neat.
OpenAI is no longer the cutting edge of AI these days, IMO. It’ll be fine if they close down. They blazed the trail, set the AI revolution in motion, but now lots of other companies have picked it up and are doing better at it than them.
They don’t use consumer graphics cards, they use specialized datacenter accelerators like the H100.
Not necessarily. Curation can also be done by AIs, at least in part.
As a concrete example, NVIDIA’s Nemotron-4 is a system specifically intended for generating “synthetic” training data for other LLMs. It consists of two separate LLMs; Nemotron-4 Instruct, which generates text, and Nemotron-4 Reward, which evaluates the outputs of Instruct to determine whether they’re good to train on.
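The general pattern looks something like this (a minimal sketch of a generate-and-filter loop; `generate_candidate` and `score_quality` are hypothetical stand-ins for the Instruct and Reward models, not NVIDIA’s actual API):

```python
# Sketch of a synthetic-data pipeline: one model generates text,
# a reward model scores it, and only high-scoring samples are kept.
def build_synthetic_dataset(prompts, generate_candidate, score_quality,
                            threshold=0.8):
    dataset = []
    for prompt in prompts:
        candidate = generate_candidate(prompt)    # e.g. an instruct model
        score = score_quality(prompt, candidate)  # e.g. a reward model
        if score >= threshold:                    # curation: drop low-quality text
            dataset.append({"prompt": prompt, "response": candidate})
    return dataset
```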
Humans can still be in that loop, but they don’t necessarily have to be. And the AI can help them in that role so that it’s not necessarily a huge task.
It means that even if AI is having more environmental impact right now, there’s no reason to say “you can’t improve it that much.” Maybe you can improve it. As I said previously, a lot of research is being done on exactly that: methods to train and run AIs much more cheaply than has been possible so far. I see developments along those lines being discussed all the time in AI forums such as /r/localllama.
Much like with blockchains, though, it’s really popular to hate AI and “they waste enormous amounts of electricity” is an easy way to justify that. So news of such developments doesn’t spread easily.
Funny you should mention blockchains. Ethereum, the second-largest blockchain after Bitcoin, switched from proof-of-work to a proof-of-stake validation system two and a half years ago. That cut its energy use by 99.95%. The “blockchains are inherently a huge waste of energy” narrative is just firmly lodged in the popular view of them now, though, despite it being long proven false.
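For anyone unfamiliar with why the two differ so much: proof of work is wasteful by design. Miners brute-force hashes until one meets a difficulty target, and every failed attempt burns electricity. A toy sketch (illustrative only, not any real chain’s algorithm):

```python
import hashlib

def mine(block_data: str, difficulty: int = 5) -> int:
    """Toy proof-of-work: find a nonce whose SHA-256 hash starts with
    `difficulty` hex zeros. Every failed attempt is discarded work, and
    that deliberate waste is the whole security mechanism."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1
```

Proof of stake replaces that brute-force search with validators selected by staked funds, which is why Ethereum’s switch cut its energy use so dramatically.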
A lot of work has been going into making AIs more energy efficient, both in training and in inference stages. Electricity costs money, so obviously everyone’s interested in more efficient AIs. That makes them more profitable.
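Quantization is one common example. A back-of-envelope sketch (assuming a hypothetical 70-billion-parameter model) of why lower-precision weights make inference cheaper:

```python
# Rough memory footprint of a hypothetical 70B-parameter model's weights
# at different precisions; less memory to move means cheaper, less
# power-hungry inference.
PARAMS = 70e9
for name, bytes_per_weight in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name}: {PARAMS * bytes_per_weight / 1e9:.0f} GB")
# fp32: 280 GB ... int4: 35 GB -- an 8x reduction.
```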
The term “model collapse” gets brought up frequently to describe this, but it’s commonly very misunderstood. There actually isn’t a fundamental problem with training an AI on data that includes other AI outputs, as long as the training data is well curated to maintain its quality. That needs to be done with non-AI-generated training data already anyway, so it’s not really extra effort.

The research paper that popularized the term “model collapse” used an unrealistically simplistic approach: it just recycled all of an AI’s output into the training set for subsequent generations of AI, without any quality control or additional training data mixed in.
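To spell out the difference (a minimal sketch; `train`, `generate`, and `passes_quality` are hypothetical stand-ins, not any paper’s actual code):

```python
import random

def naive_recycling(train, generate, seed_data, rounds=5):
    """The collapse paper's setup: each generation trains only on the
    previous generation's raw outputs, so errors compound."""
    data = list(seed_data)
    for _ in range(rounds):
        model = train(data)
        data = [generate(model) for _ in range(len(data))]  # no filtering, no fresh data
    return model

def curated_training(train, generate, passes_quality, fresh_data, rounds=5):
    """A more realistic setup: filter synthetic samples for quality and
    mix curated non-synthetic data back in every round."""
    data = list(fresh_data)
    for _ in range(rounds):
        model = train(data)
        synthetic = [s for s in (generate(model) for _ in range(len(data)))
                     if passes_quality(s)]                   # curation step
        data = synthetic + random.sample(fresh_data, len(fresh_data) // 2)
    return model
```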
Kids these days don’t learn cursive writing, it’s destroying their literacy!
Not in every way. They’re cheaper and faster.