What you described in your second paragraph is basically how image generation AI works: starting from random noise and gradually moving towards the version a classifier identifies as best matching the prompt.
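A toy sketch of that loop, with a stand-in “classifier” score in place of a real learned model (all names and numbers here are hypothetical, purely to show the noise-to-target refinement idea):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "prompt match" target; a real system uses a learned scorer
# (e.g. a CLIP-style classifier), not a fixed vector.
target = rng.normal(size=64)

def classifier_score(x):
    # Higher when x "looks more like" the target.
    return -np.sum((x - target) ** 2)

# Start from pure random noise...
x = rng.normal(size=64)

# ...and gradually nudge it toward whatever the classifier scores highest.
for _ in range(500):
    grad = -2 * (x - target)   # gradient of the score w.r.t. x
    x += 0.01 * grad           # small step "towards the prompt"

# After refinement, x scores far better than fresh noise does.
print(classifier_score(x) > classifier_score(rng.normal(size=64)))
```

Real diffusion models are far more involved (learned denoisers, noise schedules), but the skeleton is the same: iterate small corrections from noise toward whatever the scoring model prefers.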
“polishing each approach gets you slight improvements”
Without the base model changing at all, research into better use of existing models has pushed success rates from around 35% to 85%, depending on the measurement.
I wouldn’t define that as slight.
I might see about figuring out if it can hook into my vs code instance so it’s a bit smarter at some point.
There’s an official plug-in to do this that takes like 15 minutes to set up.
Well, the Beatles and the Beach Boys are better than emo YouTube influencers as a genre.
But the quality of the production of the fever dream is dramatically worse.
The mistake you are making is in thinking that the future of media will rely on the same infrastructure as what it’s been historically.
Media is evolving from a product, where copyright matters because it protects your work from duplication, into a service, where any individual work is far less valuable because of the degree to which it serves a niche market.
Look at how many of the audio moneymakers on streaming platforms are defined by their genre rather than by a specific work. Lofi Girl and ASMR channels made a ton of money, but no single work made them popular the way a hit song makes a typical recording artist popular.
The future of something like Spotify will not be a handful of AI artists creating hit singles you and everyone else want to listen to, but AI artists taking the music you uniquely love and extending it in ways optimized around your individual preferences: a personalized composer/performer available 24/7 at low cost.
In that world, copyright for AI produced works really doesn’t matter for profitability, because AI creation has been completely commoditized.
Are you saying the idea of a unicorn wasn’t new and original because it was drawing on the pre-existing features of a horse and narwhal?
“but who is going to sort through the billions of songs like this to find the one masterpiece?”
One of the overlooked aspects of generative AI is that, almost by definition, generative models can also act as classifiers.
So let’s say you were Spotify and you fed into an AI all the songs as well as the individual user engagement metadata for all those songs.
You’d end up with a model that would be pretty good at predicting the success of a given song on Spotify.
So now you pair the purely generative model with that classifier: spit out song after song, but only move on to promoting one if the classifier thinks there’s a high likelihood of it being a hit.
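A minimal sketch of that generate-then-filter loop; the generator and hit predictor below are stand-ins for illustration, not any real Spotify model or API:

```python
import random

random.seed(42)

def generate_song():
    # Stand-in generator: a real system would sample from a generative
    # music model; here a "song" is just an id plus a hidden quality score.
    return {"id": random.randrange(10**6), "features": random.random()}

def predicted_hit_probability(song):
    # Stand-in classifier trained on engagement metadata; here it simply
    # reads the hidden quality feature back out.
    return song["features"]

THRESHOLD = 0.95  # only promote songs the classifier is very bullish on

promoted = []
while len(promoted) < 3:
    song = generate_song()
    if predicted_hit_probability(song) >= THRESHOLD:
        promoted.append(song)

# Everything that survived the filter clears the bar by construction.
assert all(predicted_hit_probability(s) >= THRESHOLD for s in promoted)
print(len(promoted))
```

The economics hinge on generation being cheap enough that you can afford to discard the vast majority of candidates and only promote the rare high scorers.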
Within five years systems like what I described above will be in place for a number of major creative platforms, and will be a major profit center for the services sitting on audience metadata for engagement with creative works.
What version are you using?
GPT-4 is quite impressive, and so are the dedicated code LLMs like Codex and Copilot. The latter must have had a significant update in the past few months, as it’s become wildly better almost overnight. If you try one out, you should really do so in an existing codebase it can use as context to match style and conventions. A blank context is where you get the least impressive output from tools like these.
“It trends towards the mean, not the best.”
That’s where some of the significant advances over the past 12 months of research have been, specifically around using the fine-tuning phase to bias towards excellence. The biggest advance there has been that capabilities in larger models seem to be transmissible to smaller models by feeding in output from the larger, more complex models.
Also, the process supervision work from May to enhance chain-of-thought (CoT) reasoning is pretty nuts.
So while you are correct that the pretrained models come out regressed towards the mean, there are very promising recent advances in taking that foundation and moving it towards excellence.
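As a toy illustration of that distillation idea, here’s a small “student” trained purely on a stand-in “teacher” model’s soft outputs. Both models are hypothetical linear-softmax maps, not real LLMs; the point is only that the student learns from the teacher’s outputs rather than from ground-truth labels:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical "large model": a fixed random linear map standing in for a
# big pretrained network whose outputs we can sample freely.
W_teacher = 0.5 * rng.normal(size=(8, 3))
X = rng.normal(size=(256, 8))             # inputs fed to the teacher
soft_labels = softmax(X @ W_teacher)      # teacher's output distributions

# Smaller student, trained only on the teacher's soft outputs
# (gradient descent on cross-entropy against the soft labels).
W_student = np.zeros((8, 3))
for _ in range(2000):
    p = softmax(X @ W_student)
    grad = X.T @ (p - soft_labels) / len(X)
    W_student -= 0.5 * grad

# The student now closely tracks the teacher on held-out inputs.
X_test = rng.normal(size=(64, 8))
gap = np.abs(softmax(X_test @ W_student) - softmax(X_test @ W_teacher)).mean()
print(gap)  # small, well under 0.05
```

Real distillation works on neural networks and far richer outputs, but the mechanism is the same: the smaller model fits the larger model’s output distribution instead of raw data.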
I mean, even if this song was coming from a human it’d be derivative, boring, and worthless.
If anything, the fact something comparable to mediocre human YouTube musical artists is being AI generated is the thing that is wild and impressive. The song itself in isolation is beyond meh.
That’s pretty amazing.
The song sucks, but here was the cutting edge of AI music just seven years ago.
That it’s gone from a nightmarish fever-dream mashup to wannabe pop influencer levels of quality in less than a decade is pretty crazy. As long as there isn’t a plateau in the next seven years, we’ll probably be in a world where AI-generated musical artists have a popular enough following to have successful holographic concert performances by 2030.
Over and over I see people making the mistake of evaluating the future of AI based on its present state while ignoring the rate of change between past and present.
Yeah, most of your experiences of AI in various use cases are mediocre right now. But what we have today in most areas of AI was literally thought to be impossible or very far out just a few years ago. The fact you have any direct experiences of AI in the early 2020s is fucking insane and beyond anyone’s expectations a decade earlier. And the rate of continued improvement is staggering. Probably the fastest-moving field I’ve ever witnessed.
This is BS. It’s a third-rate marketing group trying to game SEO for lead gen.
Go ahead and contact them, claiming to be a prospective client with a few hundred (insert niche retail or service here) stores and that you’re interested in their product.
At best they’ll end up revealing they have an SDK or some crap to do the active listening in your own app if you have one.
If this were real, more than this company would be doing it, and you’d see actual case studies around it.
Also, it’s 1000% not legal in half the US states given two-party-consent wiretapping laws unless the users are agreeing to it in some way, which again brings us back to the fact that at best this is some shoddy SDK (and likely not even that).
Edit: Looking at it closer, given that it isn’t linked from anywhere else and is a one-off mention of the service, I’m actually wondering if this was an April Fools’ page they just never took down. It’s pretty funny if so, especially given the ridiculousness of the buzzword-heavy language in the bullet points. The idea that they are actively listening to voice data, having AI analyze the purchase history of the users, and then cross-attributing ROI using your “tracking pixel” is hilarious.
Even just one of those steps is a pie-in-the-sky claim, even for most billion-dollar agencies.
This is one of the dumbest things I’ve ever seen.
Anyone who thinks this is going to work doesn’t understand the concept of signal-to-noise ratio.
Let’s say you are an artist who draws cats, and you are super worried big tech is going to use your images to teach AI what a cat looks like. So you use this tool to pixel-mangle your drawings so they bias towards looking like a lizard.
Over there is another artist who also draws cats and is worried about AI. So they use this tool to make cats bias towards looking like horses.
All that bias data, taken across thousands of pictures of cats, ends up indistinguishable from noise. There’s no hidden bias signal left.
The only way this would work is if the majority of all images of object A in the training data had a hidden bias towards the same object B (which were the very artificial conditions used in the paper).
This compounds across the multiple axes you’d want to bias. If you draw fantasy cats, are you only biasing away from cats towards dogs? Or are you also going to bias away from fantasy towards pointillism? You can always bias towards pointillist dogs, but now your poisoning is less effective when combined with a cubist cat artist biasing towards anime dogs.
As you dilute the bias data by trying to cover the multiple aspects AI can learn from your images, you further bury the signal in noise, such that even if there were collective agreement on how to bias each individual axis, it’d be effectively worthless in a large and diverse training set.
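The dilution argument is easy to sandbox numerically: model each artist’s hidden bias as an independent random direction in feature space and see what survives averaging. The dimensions and counts below are purely illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

dim = 512           # stand-in feature dimension
n_artists = 10_000  # each picks their own "poison" direction independently

# Each artist perturbs their images toward a different random target,
# so the per-artist bias vectors are uncorrelated unit vectors.
biases = rng.normal(size=(n_artists, dim))
biases /= np.linalg.norm(biases, axis=1, keepdims=True)

# What the training set actually "sees" is the average perturbation:
# uncorrelated directions cancel, so the norm collapses to ~1/sqrt(n).
aggregate = biases.mean(axis=0)
print(np.linalg.norm(aggregate))  # ~0.01, vs. 1.0 for any single artist

# If every artist had coordinated on the SAME direction instead,
# the aggregate would keep norm 1.0 and could actually steer the model.
coordinated = np.tile(biases[0], (n_artists, 1)).mean(axis=0)
print(np.linalg.norm(coordinated))
```

That 1/sqrt(n) collapse is the whole point: without near-universal coordination on a single target, independent poisoning attempts average out into noise.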
This is dumb.
What if that already happened eons ago and you’re currently stuck in a digital twin of Earth circa 2023 reconstructed as best some future AI was able to from the memes the original jsdz left behind?
It’s generally easy to crap on what’s ‘bad’ about big players while underestimating or undervaluing what they are doing right for product-market fit.
A company like Meta puts hundreds of people in foreign nations through PTSD-causing hell in order to moderate and keep clean their own networks.
While I hope that’s not the solution that a community driven effort ends up with, it shows the breadth of the problems that can crop up with the product as it grows.
I think the community will overcome these issues and grow beyond it, but jerks trying to ruin things for everyone will always exist, and will always need to be protected against.
To say nothing of the far worse sorts behind the production and more typical distribution of such material, whom Lemmy will also likely need to deal with more and more as the platform grows.
It’s going to take time, and I wouldn’t be surprised if the only way a federated social network can eventually exist is within onion routing or something similar. At a certain point, the gap in resources to protect against content litigation between a Meta and someone hosting a Lemmy server is impossible to close, and the privacy of hosts may need to be front and center.
Literally just after talking about how people are spouting confident misinformation on another thread I see this one.
Photos posted in replies to a Twitter thread automatically have their location data stripped.
God, I can’t wait for LLMs to eventually automate calling out well-intentioned total BS in every single comment on social media. It’s increasing at a worrying pace.