Yes, it’s safe, because no, they don’t relay it. The brilliant thing about it is that it’s all done locally, on your machine.
Cryptocurrencies are not reliably fungible, nor stable, nor widely accepted. They have their uses, but they are not suitable replacements for PayPal and not what OP asked for.
There is no privacy-focused PayPal alternative in the US, in part because US money transfer laws and policies (e.g. Know Your Customer) directly oppose privacy.
However, there are a couple of new projects that might eventually lead to something less bad for privacy than PayPal is:
The rest of the sentence you truncated points out forwarding services. Yes, others exist beyond the four I mentioned, of course.
Edit to clarify: Your “it doesn’t” argument is that you can use forwarding from other domains that you own. Indeed you can, but that’s not a counterargument, because those are forwarding services. They do exactly what the example services in my original comment do. You still have to maintain them, as well as the extra domains.
I don’t know if Element Web/Desktop was affected by the vulnerabilities in the title, but another one (also announced today) is fixed in Element Web/Desktop v1.11.81.
https://github.com/element-hq/element-web/security/advisories/GHSA-3jm3-x98c-r34x
The correct fix is to get the site maintainers to stop rejecting email addresses based on the characters they contain. They shouldn’t be doing that. Sadly, some developers believe it’s an appropriate way to deter bots, and it can be difficult to educate them.
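For illustration, here’s the kind of over-strict check I mean, next to a saner one (a minimal sketch; the patterns are hypothetical examples, not from any particular site):

```python
import re

# Over-strict filtering as described above: this pattern rejects valid
# local parts containing "+", "!", "/", and other RFC-permitted characters.
too_strict = re.compile(r"^[A-Za-z0-9._-]+@[A-Za-z0-9.-]+$")

# A more permissive shape check: one "@", no whitespace, a dot in the domain.
# Deliverability should be confirmed by sending a confirmation message,
# not by second-guessing which characters the local part may contain.
permissive = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

for addr in ("alice+news@example.com", "bob@example.com"):
    print(addr, bool(too_strict.match(addr)), bool(permissive.match(addr)))
# alice+news@example.com: rejected by the strict pattern, accepted by the permissive one
```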
If they won’t fix it, the workarounds are to either not use those sites, or to give them a different address. Unfortunately, the latter means having to maintain multiple email accounts, or forwarding services like Addy.io, SimpleLogin, Firefox Relay, or DuckDuckGo Email.
I no longer consider any email app to be okay for privacy if I can’t build it from source code. There are just too many opportunities and incentives for someone to exploit it. That could be the developer, or the maintainer of some obscure code library, or a company that buys one of them out, or an attacker who found a vulnerability. We no longer live in a world where it’s reasonable to think we’ll get privacy from communications software that we can’t inspect.
Thankfully, we also no longer live in a world without options. There are more than a few email apps with nothing to hide. :)
People in privacy circles do talk about phone numbers, but it’s usually about them being collected in the first place. Most of us realize that corporate promises to delete them later are easily reneged on and impossible to verify, and therefore next to worthless. We need laws forbidding data collection. We don’t have them yet.
By the way, that title is useless to people who are browsing Lemmy to see which posts might interest them.
> I don’t know why VPN providers promote themselves as if they are going to make your connection more private; everything is already encrypted (except DNS).
It’s true that most popular web sites have moved to HTTPS, but even if all of them had, not all network traffic is web traffic. Also, even if someone uses the network only for web browsing, DNS is not the only privacy-relevant data that gets exchanged outside the HTTPS connection.
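To make that concrete: with ordinary TLS (no Encrypted Client Hello), the hostname you’re connecting to is sent in cleartext during the handshake (SNI), so an on-path observer sees which site you visit even though the page contents are encrypted. A minimal sketch using Python’s standard library:

```python
import socket
import ssl

# Even though everything after the handshake is encrypted, the hostname
# passed as server_hostname below travels in cleartext inside the TLS
# ClientHello (the SNI field), visible to your ISP or any on-path snooper.
ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print("negotiated:", tls.version())
```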
> You are just shifting the trust from your ISP to the people that run the VPN.
Some people have reason to distrust their ISP more than their VPN provider, so this is a valid use case.
VPN isn’t really comparable to HTTPS. The former protects all traffic, and with a relatively small attack surface, but only up to the VPN edge. The latter protects all the way to the network peer (the web server), but only web traffic, and with a massive attack surface: scores of certificate authorities in countries all over the world, any of which could be compromised to nullify the protection. They address different problems.
In other words, it’s the same effect as when you make separate identities to share with different contacts on any messaging service. SimpleX has adopted that as the normal way to operate.
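Conceptually it looks something like this (a hypothetical sketch of the idea, not SimpleX’s actual API):

```python
import secrets

# Instead of one global user ID shared with everyone, hand each contact a
# distinct, unlinkable queue address. No two contacts ever see a common
# identifier they could use to correlate you.
def new_queue_address() -> str:
    return secrets.token_urlsafe(24)

my_queues = {contact: new_queue_address() for contact in ("bob", "carol")}
print(my_queues)  # a different opaque address per contact
```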
Worth mentioning just in case you’re not aware: versioning is present not just on the protocol spec, but on individual rooms. That ought to ease any semantics changes that might be needed.
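For reference, a room’s version is pinned in its creation event (field names per the Matrix spec; the sender here is a made-up example):

```python
# A room's semantics are fixed by the room_version in its m.room.create
# event, independent of the version of the overall protocol spec.
create_event = {
    "type": "m.room.create",
    "sender": "@alice:example.org",  # hypothetical user
    "content": {
        "room_version": "11",
    },
}
```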
I think this could use some elaboration on what you mean by “half dead”.
I don’t remember the statement in the bug report verbatim, but it indicated that they intend to fix it, which is about what I had previously seen on other issues that they did subsequently fix. I expect it’s mainly a matter of prioritizing a long to-do list.
I can’t think of a reason why it wouldn’t be possible. The protocol is continually evolving, after all, and they already moved message content to an encrypted channel that didn’t originally exist. Moving other events into it seems like a perfectly sensible next step in that direction.
There are a few that do a good job of protecting our messages with end-to-end encryption, but no single one fits all use cases beyond that, so we have to prioritize our needs.
Signal is pretty okayish at meta-data protection (at the application level), but has a single point of failure/monitoring, requires linking a phone number to your account, can’t be self-hosted in any useful way, and is (practically speaking) bound to services run by privacy invaders like Google.
Matrix is decentralized, self-hostable, anonymous, and has good multi-device support, but hasn’t yet moved certain meta-data into the encrypted channel.
SimpleX makes it relatively easy to avoid revealing a single user ID to multiple contacts (queue IDs are user IDs despite the misleading marketing) and plans to implement multi-hop routing to protect meta-data better than Signal can (is this implemented yet?), but lacks multi-device support, lacks group calls, drops messages if they’re not retrieved within 3 weeks, and has an unclear future because it depends on venture capital to operate and to continue development.
I use Matrix because it has the features that I and my contacts expect, and can route around system failures, attacks, and government interference. This means it will still operate even if political and financial landscapes change, so I can count on at least some of my social network remaining intact for a long time to come, rather than having to ask everyone to adopt a new messenger again at some point. For my use case, these things are more important than hiding which accounts are talking to each other, so it’s a tradeoff that makes sense for me. (Also, Matrix has acknowledged the meta-data problem and indicated that they want to fix it eventually.)
Some people have different use cases, though. Notably, whistleblowers and journalists whose safety depends on hiding who they’re talking to should prioritize meta-data protection over things like multi-device support and long-term network resilience, and should avoid linking identifying info like a phone number to their account.
> So you are basically saying that root CAs are unreliable or compromised?
Not exactly. They are pointing out that HTTPS assumes all is well if it sees a certificate from any “trusted” certificate authority. Browsers typically trust dozens of CAs (nearly 80 for Firefox) from jurisdictions all over the world. Anyone with sufficient access to any of them can forge a certificate. That access might come from a hack, a rogue employee, government pressure, a bug, improperly handled backups, or various other means. It can happen, has happened, and will happen again.
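You can see the scale of the problem on your own machine (this shows your OS’s default trust store, which is typically in the same ballpark as the one Firefox bundles):

```python
import ssl

# Count the root CAs your platform trusts by default. Any one of them
# (or any intermediate they sign) can issue a certificate that TLS
# clients will accept for any domain.
ctx = ssl.create_default_context()
cas = ctx.get_ca_certs()
print(f"{len(cas)} trusted root CAs")
for ca in cas[:5]:  # peek at a few subjects
    print(ca["subject"])
```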
HTTPS is mostly good enough for general use, since exploits are not so common as to make it useless, but if a government sees it as an obstacle, all bets are off. It is not comparable to a trustworthy VPN hosted outside of the government’s reach.
Also, HTTPS doesn’t cover all traffic like a properly configured VPN does. Even where it is used and not compromised, it’s not difficult for a well-positioned snooper (like an internet provider that has to answer to a government) to follow your traffic on the net and deduce what you’re doing.
If you care about keeping your domain, don’t give anyone an excuse to take it from you: use your real info, and choose a registrar that only exposes a proxy contact in your WHOIS entry.
If you don’t care about losing your domain, then you can use fake contact info.
All desktop environments are fancy compared to a simple window manager.
The unfortunate paradox of opt-out services is that using them requires giving out your details and hoping they aren’t (deliberately or accidentally) leaked.
> CoreLogic defended its practices as legal, saying it’s too difficult to verify consent or anonymise personal data.
And this is what needs changing. It should not be legal for them to have it, nor for anyone to give it to them, in the first place.
The security provided by a browser is constantly changing, as the vulnerabilities, attacks, and countermeasures are constantly changing. It’s a cat-and-mouse game that never ends.
The privacy provided by a browser would be difficult to measure, since it depends a lot on browsing habits, extensions, code changes between versions, etc.
There’s no good way to calculate a metric for either type of protection, and even if there were, the metrics would be obsolete very quickly. For these reasons, I wouldn’t have tried what you attempted here.
However, there is a very simple way to compare the major browsers on privacy and reach a pretty accurate conclusion: Compare the developers’ incentives.