• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: August 6th, 2023

  • Sure. Sure. They’ve been close or getting closer for 10 years now.

    I’ll believe it when it actually releases and not a moment sooner. Otherwise I would be the opposite of shocked if July 2025 rolls around and it’s still not out but still “close”, or if December 2025 rolls around and the statement is “there are only a few more issues, very soon!” It’s become a joke at this point, and it will likely remain the butt of jokes, rightfully so, for years, perhaps decades, in the open-source and graphic-design communities.


  • If you block ALL traffic from it? Sure. It’s also possible to block just their tracking domains while leaving streaming apps working, but that’s more involved and requires the right hardware.
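
    To illustrate the idea, the filtering a Pi-hole-style DNS box does boils down to something like the toy sketch below (Python; the blocked domain names are placeholders, real telemetry domains vary by TV vendor):

    ```python
    # Toy illustration of Pi-hole-style DNS filtering, not a real resolver.
    # The blocked entries are placeholders; actual telemetry domains vary by vendor.
    BLOCKLIST = {"telemetry.tv-vendor.example", "ads.tv-vendor.example"}

    def should_resolve(query: str) -> bool:
        """Refuse the TV's tracking lookups, answer everything else so streaming keeps working."""
        return not any(query == d or query.endswith("." + d) for d in BLOCKLIST)

    for name in ("telemetry.tv-vendor.example", "api.netflix.com"):
        print(name, "->", "resolve" if should_resolve(name) else "block")
    ```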

    It’s best not to use smart TVs as, well, smart TVs. The apps they have are almost always slower or otherwise inferior to the versions you get on streaming devices, updated less often, etc. I recommend pairing a TV with a quality streaming device like an Nvidia Shield (or Shield Pro) or an Apple TV*. Alternatively, if you want something a little cheaper in the Android TV space, there is Walmart’s own-brand Onn 4K Pro.

    *The warning with Apple is that while they’re pretty good on privacy (well, “meh”, there are no excellent choices that support streaming apps in 1080p quality) and don’t have ads, their App Store is a bit more locked down. They have all the major streaming services, but if you do high-seas-type stuff it will be more involved and difficult. If you have a local media collection (sourced from your own discs or the high seas) and run Plex or Jellyfin, they have apps for both of those that work great, as well as Infuse, which usually requires a subscription unless you don’t need 4K or any proprietary audio codecs like Dolby for your media. I can personally say I enjoy my Apple TV 4K and think it’s a great device, but I run my own media server and pay for some common streaming services.



  • Cons:

    You absolutely cannot get SMS 2FA codes from 90% of services. Many services that require a phone number even without 2FA, whether to “verify you’re a human”, because they want your data, or to verify your region, use shortcode services that also will not work with ANY VoIP provider.

    You will not receive their codes. These companies range from banking institutions to gaming companies to online shopping marketplaces and stores to Google accounts (it used to be that you could get an automated phone call to verify an account; not anymore, you must be able to receive SMS from shortcodes, which are disabled for VoIP numbers, to register or recover an account), in short just about anyone you could end up doing business with.

    A shockingly large number of companies demand phone numbers and send verification texts before allowing you to do business with them: to create an account, to recover an account, to delete an account, to place an order, etc.

    They really shouldn’t; it’s a bad security practice. But companies love it because with a phone number they can lower support costs by letting people self-service: the customer gets an automated text and can unlock their locked account. They also love harvesting that data, preventing anonymization via VoIP numbers, and the reduction in fraud and more reliable KYC that comes with requiring a real number.

    And they all take it as a given that EVERYONE, or at least 99% of people in the developed world, has a cell plan with a non-VoIP number that works with these systems, and that the 1% who don’t are an acceptable loss they don’t care about.



  • Take a look here for some alternatives:

    https://dessalines.github.io/essays/why_not_signal.html#good-alternatives

    • Matrix
    • XMPP
    • Briar
    • SimpleX

    Also, even if there were no alternatives, that wouldn’t mean your default position should be that we just have to trust whatever exists now because it’s good enough, or that we can’t criticize it ruthlessly, distrust it, and call it out, and in doing so perhaps build the desire for something better, a fix as it were.

    The evidence and history clearly point towards Signal being very suspicious and likely in bed with the feds. This is not conspiracy thinking. Conspiracy thinking is believing that the country/empire that gave away old German Enigma machines whose code it had cracked to developing countries in the late 40s/early 50s without telling them it was cracked, that went on to establish a crypto company just to subvert its encryption, and that did everything Snowden revealed, has in fact suddenly changed for the first time in half a century, for no particular reason and not to its own benefit. That’s fanciful thinking. That’s a leap of logic away from the proven trends, the pattern of behavior, and indeed the incentives to keep using their dominant position to maintain dominance and power. They didn’t back down on the Clipper chip because they just gave up and decided to let people have privacy and rights. They gave up on it because they found better ways of achieving the same results with plausible deniability.

    Also, why is everything “tankies” with you people? Privacy advocates point out the obvious and suddenly it’s a communist conspiracy. LOL



    Lot of cope and denial in these threads. Yes, the same-day figure is probably a rosy estimate based on people using 6-digit codes or something else easy to crack; that doesn’t mean it’s false or that they can’t hypothetically target longer alphanumeric passwords. For all we know they might not even be brute-forcing and could instead be using some sort of exploit that over time reveals the encryption keys themselves.

    I’m still very curious about the mechanisms of action. I assume they somehow manage to bypass the basic lock-out against entering too many passcodes too quickly, which is what enables this. If throttling could be properly enforced (to say nothing of something like “10 attempts and it refuses all future attempts and erases the key”), this type of attack wouldn’t be practical in any reasonable timeframe against anyone using anything above a 6-digit numerical passcode (see the rough math below). I wonder if they exploit the wireless radios, cellular, wifi, bluetooth, and force some code onto the phone via these usually-on chips by exploiting problems in their architecture; perhaps something that locks up, prevents functioning, or resets certain checks by flooding parts of the hardware/software from those points of access. Or if it really is purely physical/logical access through the Lightning/USB-C port.
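
    Rough back-of-envelope math on why the throttling matters (Python; the guess rate is purely an assumption for illustration, nobody outside these vendors publishes real numbers):

    ```python
    # Back-of-envelope estimate, assuming a hypothetical guess rate.
    # Real rates depend on the device's KDF/secure-enclave throttling and the exploit used.
    GUESSES_PER_SECOND = 40  # purely illustrative assumption

    def worst_case_days(alphabet_size: int, length: int, rate: float = GUESSES_PER_SECOND) -> float:
        """Days to exhaust the entire keyspace at the assumed rate."""
        return (alphabet_size ** length) / rate / 86_400

    print(f"6-digit PIN:          {worst_case_days(10, 6):,.2f} days")   # well under a day
    print(f"10-char alphanumeric: {worst_case_days(36, 10):,.0f} days")  # millions of years
    ```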




    There is just no excuse for not even salting or doing SOMETHING to keep the secrets out of plaintext. The reason you don’t store things in plaintext is that it can lead to even incidental collection. Say you have some software, perhaps spyware, or perhaps it’s made by a major corporation so it doesn’t get called that, and it crawls around and happens to upload all or part of the file containing this info. Now the secret has been uploaded and compromised, potentially not even by a malicious actor successfully gaining access to the machine, but by poor practices.

    No, it can’t stop sophisticated malware specifically targeting Signal to steal credentials and gain access, but it does mean casual malware whose author hasn’t taken the time to write a module for that is out of luck, and it increases the burden on attackers. No, it won’t stop the NSA, but it’s still something: it stops someone’s 17-year-old niece, who knows a little about computers but is no malware author, from getting at your Signal messages and account by watching a YouTube video and following along with simple tools.
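
    Raising that bar is cheap, too. A minimal sketch of the kind of at-rest protection being asked for, assuming Python and the cryptography package (this is not what Signal Desktop actually does, just an illustration):

    ```python
    # Minimal sketch (not Signal's actual code): encrypt a local secret with a key
    # derived from a user passphrase instead of leaving it in a plaintext file.
    import base64, hashlib, os
    from cryptography.fernet import Fernet  # pip install cryptography

    def derive_key(passphrase: str, salt: bytes) -> bytes:
        # scrypt makes offline guessing of the passphrase expensive
        raw = hashlib.scrypt(passphrase.encode(), salt=salt, n=2**14, r=8, p=1,
                             maxmem=64 * 1024 * 1024, dklen=32)
        return base64.urlsafe_b64encode(raw)  # Fernet wants a urlsafe-base64 32-byte key

    def protect(secret: bytes, passphrase: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        token = Fernet(derive_key(passphrase, salt)).encrypt(secret)
        return salt, token  # store both on disk; neither reveals the secret by itself

    def recover(salt: bytes, token: bytes, passphrase: str) -> bytes:
        return Fernet(derive_key(passphrase, salt)).decrypt(token)
    ```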

    The claims that Signal is an op, or that whoever runs it is under a national security letter ordering them to compromise it, look more and more plausible in light of weird, bad basic practices like this and their general hostility. I’ll still use it, and it’s far from the worst-looking thing out there, but there’s something unshakably weird about the lead dev and their behavior and practices that can’t be written off as merely being a bit quirky.


  • I wish they would just push all the big mainstream porn sites to remove the most abusive misogynistic content rather than slapping these checks on everything.

    Also, this will never be okay until there is a zero-knowledge version in which neither the government, nor the sites, nor any other party can establish a given person’s habits, which is probably not something they’ll ever build because tracking is probably part of the point.

    I’m not a fan of the easy access kids have to porn or of the proliferation of the industry in general, but I am worried that harmless things like erotic roleplaying websites will get swept up as part of this, and, well, I use those. Their point is not porn, though some people host and share porn as part of it (which is probably why it’d eventually get swept up with the rest); it’s about writing, smutty, erotic writing. And I’d rather not have to tie my real-life identity to my desire to roleplay an elf who ends up making “friends” with the wolf-men tribe (I’m not claiming that’s something I do there, but it’s an example of something that would be kind of embarrassing for others to know, and it’s far from the weirdest stuff that goes on in places like that).

    The government having credits for how often I can, say, log in and continue a long-term erotic writing campaign with someone is just weird, but that’s the end point of this kind of thing. Credits don’t seem helpful anyway; the true porn addicts are just going to download stuff and then share it in private forums, Discords, P2P, etc. If the point is to stop kids from accessing this, the credits idea seems odd.



  • It should be considered illegal if it was used to harm/sexually abuse a child, which in this case it was.

    As for whether it should be classed as CSAM or something separate, I tend to think probably something separate: a revenge-porn-type law that still allows distinguishing between this and, say, a girl whose uncle groomed and sexually abused her while filming it. While this is awful, it can be (and often seems to be) the product of foolish youth, rather than the offender and everyone involved being very sick, dangerous, and actually violent offending adult pedophiles victimizing children.

    Consider the following:

    1. An underage girl takes a picture of her own genitals. It is unfortunately classified under the unhelpful and harmful term “child porn”, and she can be charged and registered as a sex offender, but it’s not CSAM and -shouldn’t- be considered illegal material or a crime (though it is, because the West has a vile fixation on puritanism which hurts survivors of childhood sexual trauma as well as adults).

    2. An underage girl takes a picture of her genitals and sends it to her boyfriend. Again, it /shouldn’t/ be CSAM (though she may unfortunately be charged similarly); she consented, and we can assume there wasn’t any unreasonable level of coercion. What it is, unfortunately, is bound up in certain very American notions of puritanism.

    3. Continuing from 2, the boyfriend shares it with other boys. Now it’s potentially CSAM, or at the least revenge porn of a child, as she didn’t consent and it could be used to harm her, but punishment has to be moderated by the fact that the offender is likely a child himself and not fully able to comprehend his actions.

    4. An underage boy cuts out a photo of an underage girl he likes, only her face and head, glues it atop a picture of a naked porn actress, maybe a petite one, and uses it for his own purposes in private. Not something I think should be classed as CSAM.

    5. An underage boy uses AI to do the same as above but more believably. Again, I think it’s kind of creepy, but if he keeps it to himself and doesn’t show anyone or spread it around, it’s just youthful weirdness, though he really probably shouldn’t have easy access to those tools.

    6. An underage boy uses AI to do the same as in 4-5, but this time he spreads it around, defaming the girl. She and her friends find out, people say mean things about her, and she has to go to school with a bunch of people who are looking at, and pleasuring themselves to, fake but realistic images of her against her consent, which is violating and makes one feel unsafe. Worse, she is probably bullied for it: mean comments, being called the s-word, etc.

    Kids are weird and do dumb things, though unfortunately boys, especially in our culture, have a propensity to do things that hurt girls far more than the inverse, to the point that it’s not even really worth talking about girls being creepy or sexually abusive towards peer-aged boys in adolescence and young adulthood. To address this, though, you need to address patriarchy and misogyny on a cultural level, teach boys empathy and respect for girls and women, and frankly do away with all the abusive pornography that’s so prevalent and popular, which encourages and perpetuates abusive actions and mentalities towards women and girls. That will never happen in the US, however, because the country is structurally opposed to being able to do such a thing. It also couldn’t hurt to peel back the stigma and shame around sexuality and nudity in the US, which stem from its reactionary Christian culture, but again I don’t think that will ever happen in the US as it exists, not this century anyway.

    Obviously I’m not getting into adults doing this here, as that doesn’t need to be discussed; it’s wrong, plain and simple.

    Bottom line, I think companies need to be strongly compelled to quickly remove revenge-porn-type material (regardless of the age of the victim, though children can’t deal with this kind of thing as well as adults, so the risk of suicide or other self-harm is much higher and it should be treated as a higher priority), which this definitely is. It’s abusive and unacceptable, and sites should fear the credit card companies coming down on them hard and destroying them if they don’t aggressively remove it, ban it, and report those sharing it. Once reported it should be driven off the clear web; there should be an image-hash data-set like the one used for CSAM (but separate) for such material, and major services should use it to stop the spread, roughly along the lines sketched below.
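
    For a sense of what that hash matching could look like, here is a rough sketch using perceptual hashing (Python with the imagehash package; purely illustrative, the file names are placeholders, and real systems such as PhotoDNA use more robust proprietary hashes):

    ```python
    # Rough sketch of hash-set matching for reported images (not any real service's code).
    # Perceptual hashes survive re-encoding and resizing better than exact file hashes.
    from PIL import Image  # pip install pillow imagehash
    import imagehash

    # Hashes of images already reported and verified by moderators (placeholder paths).
    reported_hashes = [imagehash.phash(Image.open(p)) for p in ["reported1.jpg", "reported2.png"]]

    def looks_like_reported(path: str, max_distance: int = 5) -> bool:
        """True if an upload is within a small Hamming distance of a reported image."""
        candidate = imagehash.phash(Image.open(path))
        return any(candidate - known <= max_distance for known in reported_hashes)

    if looks_like_reported("new_upload.jpg"):  # placeholder upload
        print("Block the upload and queue it for human review")
    ```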


  • So first it’s client-side scanning for CSAM. Not without some nobility. But the problem is that once you wedge that door open, it’s technically possible to do it for other things, and so you become compelled to.

    It’ll move from just CSAM to stopping and tracking “propaganda” as they define it, which will be narrow-ish at first (anything pro-Russia, RT links, etc.) but will gradually expand over time to anything outside the mainstream branded as extremist (and guess what, privacy advocates will definitely fall within that label). And once that’s in place, the private stakeholders, the copyright holders, will come knocking. They’ll say, rightly enough, “hey, you have the capability right now, we demand you implement client-side scanning to detect copyright violations”, and then that will be ordered by a court and further enshrined in law, and oh look, now you can no longer send political thought the ruling regime disagrees with, can no longer surf the high seas, and so on and so forth. Congratulations, and please enjoy living in the “garden” of Europe.


  • Majestic@lemmy.ml to Privacy@lemmy.ml · “Meta payment message” · 1 year ago

    The venture capital dollars started running out. Returns started being demanded. Companies that made slightly improved and/or more accessible versions of more open products used venture capital dollars to extinguish those products, then started rolling out the enshittification: demands for money, intrusive ads, spying, dark patterns, sabotage, paid tiers.

    Back in those days the internet was a curiosity. A hobby. A fun thing to share, something a company might hope to break even on or earn minor profits with. These days big profits are demanded, along with centralization and an addiction to high-resolution, large video and image content that is expensive to host and serve. The network effect drained smaller sites and resources, concentrating people in larger venues that had the investment dollars to support them, at the cost of their privacy. Combine that with search engine optimization and it became harder to even find the smaller places. Add in digitally uneducated kids who thought Facebook and the like were most of the internet and never bothered to venture beneath the top six Google results, plus older people, and this is what you have.

    Take something like Omegle. I don’t want to defend what it was for most of its existence, as the bad outweighed the good IMO (like 4chan), but something like that, if made today, would require linking your Facebook or Google account and would serve you video ads every 5 minutes on top of banner ads. Back then it was just something some random guy could make for fun without thinking “hmm, I need real identities so I can monetize these people to ad networks, pay for this, and turn a big profit selling the data they put in”.