I will be in a perfect position to snatch a discount H100 in 12 months
Check out my digital garden: The Missing Premise.
I actually like the new Notepad
At the level of the Pulitzer prize finalists, I think the use of AI is completely warranted and should be encouraged. To get anywhere near that level in the first place, you need to be able to craft good writing on your own. That they use AI to help that process doesn’t bother me one bit.
You know what’s weird? Conservatives generally think people are lazy and would rather do nothing at all than be productive. But their efforts to make policies based on that assumption, which are invariably harmful and evil, really encourage me to do everything to oppose them locally.
That’s why AI exacerbates inequality between more and less experienced workers. More experienced workers will know what garbage to look out for and its manifestations in poorly cleaned data sets. Newer workers will just have to trust the AI did it.
I absolutely agree with you. That is the internet platform business model after all.
Still though, OpenAI and Google, I think, have a legitimate argument that LLMs without limitation may be socially harmful.
That doesn’t mean a $20 subscription is the one and only means of addressing that problem though.
In other words, I think we can take OpenAI and Google at face value without also saying their business model is the best way to solve the problem.
Companies like OpenAI and Google believe that this technology is so powerful, they can release it to the public only in the form of an online chatbot after spending months applying digital guardrails that prevent it from spewing disinformation, hate speech and other toxic material.
Google Bard is free to use for now, so the danger isn’t locking the tech up behind a subscription (though Google will 100% do that eventually).
I miss the physical keyboard of my first phone. It was so cool! It flipped open and turned horizontal so I could thumb type.
It was really hard moving to a virtual keyboard. Swype helps, but it also makes a ton of mistakes.
The jury: sounds like magic to me! Sounds good!
Stereotypically bad science reporting
Yeah, probably, since it’s not being used commercially.
I think, like Obsidian, it stores them as markdown files.
Then Logseq. It’s an outliner (each line can be its own…thing…), but it’s open source and a direct competitor of Obsidian. In fact, I was torn between the two when I first started with online note-taking.
This is obviously a bad idea. CMV
I am annoyed af. My pc does have TPM but it’s a bitch to set up and I’m not fighting that fight again. And fuck windows 11 on my current computer
And in this debacle, I don’t WISH to be anti-social; I’m anti-social, but not voluntarily. I’m in my prime years and I need friends and relationships at this age, but my stance on privacy is getting in the way of that.
But this isn’t a conundrum you chose. That’s why people here are so into privacy. Instagram is social, sure, but is that the kind of socializing you want? Really? We know it’s bad for the mental health of teenage girls. What’s to desire about that? What’s to desire about the algorithm that actively tries to make you hooked on the app?
These are the kinds of questions behind the privacy communities, among others.
Also, don’t lie to women. Extreme things usually only look extreme until a person understands them. Explain yourself and give them an opportunity to come around and/or be willing to make compromises. Having an Instagram account you use every now and then to verify your humanity in a virtual world seems reasonable to me.
Elon Musk is going to get this first and become God Emperor of mankind. Mark my words!
This is a good question.
Open Empathic’s answer is that because AI is becoming more embedded in our lives, an “understanding” of emotions on AI’s part will help people in a variety of ways, both within and outside of industries like healthcare, education, and, of course, general commercial endeavors. As far as they’re concerned, AI is a tool that will help encourage “ethical” human decision-making.
On the other hand…we have a ton of different ethical theories, and industries ignore them wholesale to make profits. To me, this looks like your standard-grade techno-bro hubris. They intend to use “disruptive” technology to “revolutionize” whatever. The exploitative profit-making social hierarchy isn’t being challenged. The Hollywood writers’ strikes have only just begun, for example. Once Open Empathic starts making breakthroughs in artificial emotional intelligence, the strikes will return and be even more prolonged, if not broken altogether.
I’d answer your question this way: people who care about other people should be deeply concerned.
Even without a focus on empathy, ChatGPT’s responses in a healthcare setting were rated as more empathic. At best, empathic AI is used to teach people how to be more empathetic to other humans, eventually needing it less and less over time. Far more likely is that human communication becomes mediated through empathic AI (and some company makes a lot of money off the platform of mediation) and the quality of face-to-face human interaction deteriorates.
Growth is the MO of every business.
Anecdotally, this was my experience as a student when I tried to use AI to summarize and outline textbook content. The results were almost always incomplete, such that I’d have to have already read the chapter to catch what the model missed.