• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • kromem@lemmy.world to Privacy@lemmy.ml · 5 months ago

    Literally right after talking in another thread about how people spout confident misinformation, I see this one.

    Twitter: Twitter retains minimal EXIF data, primarily focusing on technical details, such as the camera model. GPS data is generally stripped.

    Yes, this is a privacy thing, we strip the EXIF data. As long as you’re not also adding location to your Tweet (which is optional) then there’s no location data associated with the Tweet or the media.

    People replying to a Twitter thread with photos automatically have the location data stripped.
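
    You can verify this yourself with a minimal sketch using the Pillow library (the filename is just a placeholder for any image saved from a Tweet; none of this is Twitter's own tooling):

    ```python
    from PIL import Image

    def gps_exif(path):
        """Return the GPS IFD from an image's EXIF data, or None if absent."""
        exif = Image.open(path).getexif()
        gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard EXIF GPSInfo tag
        return dict(gps_ifd) or None

    # A photo saved from a Tweet should come back None, since the GPS
    # block is stripped server-side before the media is ever served.
    print(gps_exif("photo_saved_from_tweet.jpg"))  # hypothetical filename
    ```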

    God, I can’t wait for LLMs to eventually automate calling out well-intentioned total BS in every single comment on social media. It’s increasing at a worrying pace.






  • The mistake you are making is assuming that the future of media will rely on the same infrastructure it has historically.

    Media is evolving from being a product, where copyright matters for protecting your work from duplication, to being a service, where any individual work is far less valuable because of the degree to which it serves a niche market.

    Look at how many of the audio money makers on streaming platforms are defined by their genre rather than by a specific work. Lofi Girl and ASMR creators made a ton of money, but no single work made them popular the way a hit song does for a typical recording artist.

    The future of something like Spotify will not be a handful of AI artists creating hit singles that you and everyone else want to listen to. It will be AI artists taking the music you uniquely love and extending it in ways optimized around your individual preferences: a personalized composer/performer available 24/7 at low cost.

    In that world, copyright for AI-produced works really doesn’t matter for profitability, because AI creation has been completely commoditized.



  • “but who is going to sort through the billions of songs like this to find the one masterpiece?”

    One of the overlooked aspects of generative AI is that, effectively by definition, generative models can also act as classifiers.

    So let’s say you were Spotify, and you fed a model all the songs along with the individual user engagement metadata for each of those songs.

    You’d end up with a model that would be pretty good at predicting the success of a given song on Spotify.

    So now you can pair a purely generative model with that classifier: spit out song after song, but only move on to promoting a track when the classifier thinks there’s a high likelihood of it being a hit.
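
    A rough sketch of that generate-and-filter loop (every name here is a hypothetical stand-in, not any real Spotify system; a real pipeline would use a trained music generator and a classifier trained on engagement metadata):

    ```python
    import random

    def generate_song():
        """Placeholder for a trained generative music model."""
        return {"audio": random.random()}

    def predicted_hit_probability(song):
        """Placeholder for a classifier trained on listener engagement data."""
        return random.random()

    HIT_THRESHOLD = 0.95  # assumed cutoff, tuned against real engagement numbers

    def next_promotable_song(max_attempts=10_000):
        # Keep generating until the classifier flags a likely hit,
        # discarding everything that scores below the threshold.
        for _ in range(max_attempts):
            song = generate_song()
            if predicted_hit_probability(song) >= HIT_THRESHOLD:
                return song
        return None  # nothing cleared the bar within the attempt budget

    print(next_promotable_song())
    ```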

    Within five years, systems like the one described above will be in place at a number of major creative platforms, and they will be a major profit center for the services sitting on audience engagement metadata for creative works.





  • That’s pretty amazing.

    The song sucks, but this was the cutting edge of AI music just seven years ago.

    That it’s gone from a nightmarish fever-dream mashup to wannabe-pop-influencer levels of quality in less than a decade is pretty crazy. As long as there isn’t a plateau in the next seven years, we’ll probably be in a world by 2030 where AI-generated musical artists have a large enough following to hold successful holographic concert performances.

    Over and over, I see people making the mistake of evaluating the future of AI based on its present state while ignoring the rate of change between the past and the present.

    Yeah, most of your experiences of AI in various use cases are mediocre right now. But what we have today in most areas of AI was literally thought to be impossible, or very far out, just a few years ago. The fact that you have any direct experience of AI in the early 2020s is fucking insane and beyond anyone’s expectations a decade earlier. And the rate of continued improvement is staggering. It’s probably the fastest-moving field I’ve ever witnessed.


  • This is BS. It’s a third-rate marketing group trying to game SEO for lead gen.

    Go ahead and contact them, posing as a prospective client with a few hundred (insert niche retail or service here) stores who is interested in their product.

    At best they’ll end up revealing they have an SDK or some crap to do the active listening inside your own app, if you have one.

    If this were real, more than this company would be doing it, and you’d see actual case studies around it.

    Also, it’s 1000% not legal in half of US states, given two-party-consent wiretapping laws, unless the users are agreeing to it in some way. Which again brings us back to the point that, at best, this is some shoddy SDK (and it’s unlikely to be even that).

    Edit: Looking at it closer, given that the page isn’t linked from anywhere else and is a one-off mention of the service, I’m actually wondering if this was an April Fools’ page they just never took down. If so, it’s pretty funny, especially given the ridiculousness of a lot of the buzzword-heavy language in the bullet points. The idea that they are actively listening to voice data and then having AI analyze users’ purchase history to cross-attribute ROI using your “tracking pixel” is hilarious.

    Even just one of those steps is a pie-in-the-sky claim, even for most billion-dollar agencies.


  • This is one of the dumbest things I’ve ever seen.

    Anyone who thinks this is going to work doesn’t understand the concept of signal-to-noise.

    Let’s say you are an artist who draws cats, and you are super worried big tech is going to use your images to teach AI what a cat looks like. So you use this tool to mangle the pixels in a way that biases them toward looking like a lizard.

    Over there is another artist who also draws cats and is worried about AI. So they use this tool to bias their cats toward looking like horses.

    All that bias data, taken across thousands of pictures of cats, ends up indistinguishable from noise. There’s no coherent hidden bias signal left.
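
    A toy illustration of why it washes out (my own sketch, not code from the paper): model each artist’s poisoning as a random unit “bias direction” in feature space, then average across the training set.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 512  # assumed feature-space dimensionality

    for n_artists in (1, 100, 10_000):
        # Each artist pushes toward a different, uncoordinated target class.
        biases = rng.normal(size=(n_artists, dim))
        biases /= np.linalg.norm(biases, axis=1, keepdims=True)  # unit vectors
        # The surviving "signal" is the average push across all poisoned
        # images; for uncoordinated directions it shrinks like 1/sqrt(n).
        print(n_artists, float(np.linalg.norm(biases.mean(axis=0))))
    ```

    Only if nearly everyone coordinated on the same target direction would the averaged bias survive.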

    The only way this would work is if the majority of all images of object A in the training data had a hidden bias toward the same object B (which were the very artificial conditions used in the paper).

    This compounds across multiple axes of whatever you’d want to bias. If you draw fantasy cats, are you only biasing away from cats toward dogs? Or are you also going to try to bias away from fantasy toward pointillism? You can always bias toward pointillism dogs, but now your poisoning is less effective combined with a cubist cat artist biasing toward anime dogs.

    As you dilute the bias data by trying to cover the multiple aspects AI could learn from your images, you drive the signal further into the noise, such that even if there were collective agreement on how to bias each individual axis, it would be effectively worthless in a large and diverse training set.

    This is dumb.



  • It’s generally easy to crap on what’s ‘bad’ about the big players while underestimating or undervaluing what they are doing right for product-market fit.

    A company like Meta puts hundreds of people in foreign nations through PTSD-inducing hell in order to moderate its networks and keep them clean.

    While I hope that’s not the solution a community-driven effort ends up with, it shows the breadth of the problems that can crop up as the product grows.

    I think the community will overcome these issues and grow beyond it, but jerks trying to ruin things for everyone will always exist, and will always need to be protected against.

    To say nothing of the far worse sorts behind the production and more typical distribution of such material, whom Lemmy will likely also need to deal with more and more as the platform grows.

    It’s going to take time, and I wouldn’t be surprised if the only way a federated social network can eventually exist is within onion routing or something similar. At a certain point, the gap in resources to protect against content litigation between a Meta and someone hosting a Lemmy server is impossible to equalize, and the privacy of hosts may need to be front and center.