
  • well, for e2ee you obviously have to let one “end” encrypt the data for the other “end” (good luck with newsletters then). for usual services, kindly asking them to support either s/mime or gpg for outgoing emails would at least make the wish known, but good luck there too.

    i think the already mentioned solution of encrypting incoming messages on your side, just before the mda delivers them to your inbox, should be the closest possible to what op wants. one would need to check whether a message is already encrypted and skip encryption for those.
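
    as a sketch of that idea: a procmail rule could pipe everything that is not already encrypted through a gpg wrapper before delivery. this assumes a wrapper like gpgit is installed and “you@example.net” stands for your own key (both are placeholders):

    ```
    # ~/.procmailrc -- encrypt on arrival unless the mail already is encrypted
    :0 fw
    * !^Content-Type:.*(multipart/encrypted|application/pgp)
    | gpgit you@example.net
    ```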

    if you only want the admin of that email (imap) server to not be able to read all emails, placing a separate encrypting server (smtp + encrypt + forward) between the outside world and your imap server could be a solution.

    one should have a look into the logfiles too, as some mailers might log message subjects and of course sender/recipient addresses along with the ip addresses of incoming/outgoing servers, which op might not want to be readable either (i don’t know protonmail that well).

    also, gpg IMHO allows for sign-then-encrypt, hiding the signature within the encrypted data, which could be wanted. and one might want to look at exactly which parts of a message’s contents and headers are encrypted or plaintext on the server before feeling safe from the threat one wants to be protected from.



  • you’re welcome.

    what i’d suggest: a general rule i like to always follow is to use a test system for everything new. but that does not need to be a full separate system every time.

    lets say you have your mailbox and want to try fetching new mails from it using fetchmail. first, you can use the uidl mechanism to fetch every mail only once and otherwise leave them all on the server. but i like it a bit more secure: create a second email address/account at your mail provider’s service only for testing. that way you can test the mechanisms however you like without even touching your real inbox (maybe even fill it up with large emails and watch how the system reacts; i once had an email account with a cheap provider that deadlocked inboxes when full…). then, when everything is as you want it, switch the account and password (or create another config file for fetchmail) and you’re done.

    every change (not only fetchmail things) can be tested this way before going live. filtering could be done with procmail, for example, but if the mda called by procmail somehow exits with success although the email really wasn’t delivered, the email might be lost forever, depending on the settings of course. so fiddling with new stuff always carries the risk of not fiddling correctly ;-)
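
    a minimal ~/.fetchmailrc for such a test account could look like this (names and credentials are placeholders; fetchmail refuses to run if the file is not chmod 600):

    ```
    poll pop.example.org proto pop3
      user "testbox" pass "s3cret"
      keep                            # leave all mails on the server
      uidl                            # fetch every mail only once
      mda "/usr/bin/procmail -d %T"   # hand mails to procmail for filtering
    ```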

    have fun!


  • it’s possible to tell your mta (like postfix) to use another mta for all mails, or only for some domains etc., so using a third party as the internet-facing service, then fetching the mails with fetchmail and storing them in a dovecot server, is easy. on the sending side you could use your standard email client (e.g. thunderbird on pc or k9-mail on smartphone) to send via your postfix instance, which also sits on the server hosting your dovecot service. the mta there takes the mail and delivers it by rules, which could simply mean relaying all outgoing mail through the mta of your freemailer using the username/password of your account. i am doing this, but the “external” mail system consists of my servers as well; i just don’t want emails to stay too long on VMs in a datacenter where i have no access to the physical disks in case something goes wrong.
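
    relaying through the freemailer is a few lines in postfix; a minimal sketch with placeholder hostname and credentials:

    ```
    # /etc/postfix/main.cf
    relayhost = [smtp.freemailer.example]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt

    # /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
    # [smtp.freemailer.example]:587    yourlogin:yourpassword
    ```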

    a raspberry pi is sufficient for such a setup (i am using a pi4 currently, but for email only i’d say a 3 or older would do too); adding a disk via usb makes storage huge and cheap, i use two usb ssds in a raid1 for storage… that server could be accessible only through vpn if you wish, depending on your skills and needs (i mainly use ssl client certificates, which are supported by k9-mail and thunderbird, so it fits seamlessly to be connected through a haproxy that authenticates these before proxying the plain connection to the pi). clients like thunderbird can store all emails offline (configurable per imap folder), making searches easy and quick, while my k9 client can search locally or on the server as needed.
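
    the haproxy part boils down to demanding a client certificate and then forwarding the plain connection; a sketch with made-up paths and addresses:

    ```
    # haproxy.cfg
    frontend imaps_in
        mode tcp
        bind :993 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/client-ca.pem verify required
        default_backend pi_imap

    backend pi_imap
        mode tcp
        server pi 192.168.1.10:143
    ```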

    maybe adjust the maximum mail size of your own mta to exactly match (or be slightly less than) that of the freemailer you use, to prevent surprises with big emails that later turn out unsent.
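
    in postfix that is a single setting; the value here assumes a 25 MB limit at the freemailer:

    ```
    # /etc/postfix/main.cf
    message_size_limit = 25000000
    ```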

    it’s possible to have a nextcloud instance on that same pi acting as a web mailer, just in case (i really don’t need it, but i’ve set it up anyway). nextcloud is also great for syncing/backing up files, pictures, contacts, notes, todo lists and the calendar of your phone (i use davx5, opentasks and foldersync for that). there are other webmailers available, but installing/using nextcloud is not a bad idea either ;-)

    i’d also suggest setting up some automatic offsite backup with snapshots of that pi, to cover the emails as well as the setup and its configs ;-)


  • maybe there was a mixup of individual datapoints and individual persons.

    let’s see if that could fit.

    as far as i read things in this thread, the whole thing is based on exactly these datapoints: Full Name, Date of Birth and SSN (three datapoints), plus username and password for 3 sites (six datapoints), which makes 3+6 = 9 datapoints per person.

    2.9 billion (us) should be 2.900.000.000 (correct me if i’m wrong, but where i live one “billion” is actually “1.000.000.000.000” thus a “bit” more)

    divided by 9, those 2.9 billion would be ~322 million.

    on wikipedia they say the us had 331 million people in 2020…

    that would fit like an ass on a bucket! lol just to mention that.

    have a nice day!



  • smb@lemmy.ml to Privacy@lemmy.ml · Does MATRIX recipients know my IP? · 5 months ago

    a public room is public. anyone could and should be able to enter it at any moment, start recording, and upload everything to $terrorist and/or $three-letter-agency or such. the idea that someone else could also get the same already-public data later is not threatening, as that data is already considered public, as in “everyone in the world could have it a second after the data came into existence”. and since removal from the public is not considered possible, uploading that already intentionally published data again does not pose a greater threat than its first publication; it just uses a bit of bandwidth, not more. if you are very sensitive about the visibility of who you talk with, maybe don’t enter “public” rooms in the first place.

    if you join a private room, you already want to share with the other participants that you are f***ing talking to them, including when, who exactly you encrypted the data for, and to which servers it has to be forwarded. i expect the servers of all participants to forward messages to the recipients; for this the server needs to know this type of information. of course, awareness of which data is used to make e.g. routing decisions is a good thing, but a “nightmare” would be teams, zoom, icq, whatsapp and similar. i am sure messengers exist that could be less traceable for participants, but full anonymity about who you are communicating with, so that even the servers know nothing about what happens in a room, is imho not even a goal of matrix for the future.

    not a “nightmare”, but what a nightmare it must be to find out that a system that looked so promising did not fulfill “every” dream expectation one had, with options that are even the opposite of one’s dream expectations, like “public rooms” that are meant to be public! how horrible!!! (lol)

    by the way, as it seems noteworthy here: if you exchange emails with someone’s @gmail address, then google has all of your mail history’s metadata, just as your own provider’s server does. just to mention it: do not send emails to @gmail.com if you dislike google knowing about it. and if you share a document with edit history, then the edit history is likely also shared ;-) as “rooms” in matrix are meant to have a state that changes from the beginning, possibly with every message, and one can answer to a message, which would reveal the existence of that message later when answered on, including at least a hint of what it was about, such information is imho meant to be rather complete than hidden. maybe 1:1 chat solves this issue for you, as every chat with a new person starts empty.

    i might be wrong, but matrix is already one of the most robust systems when it comes to “compromised servers”, so very far away from a nightmare. that is, unless you are either a true criminal bastard or a true world-saving hero; then every leaked byte might be the deadly one, that is true.

    so in case you are a true world-saving hero: maybe use a self-built raspberry pi mesh proxy chain, mounted on rooftops, delivered by drones at night, to proxy the signal of an in-memory-only-tasks raspi to a free wifi, where the raspi that has its orders runs on battery (like the rooftop proxy chain) but is hidden in public transport so it reaches the proxy mesh by the transport’s timetable. just to give a paranoid one some ideas and some work to do ;-) if you’ve built everything, upload the code to github and the designs to thingiverse, so that “anyone” could have placed the proxy mesh to a free wifi on the rooftops and you are more secure from being suspected ;-) lol. btw, a mesh system to accomplish this already exists; i think they named it the b.a.t.m.a.n. protocol (no joke), so the main struggle should be handling solar power vs. wifi signal strength, distances, humidity, and a windproof mount design that can be deployed by manually controlled quadcopters. good luck!


  • hm, you have a point that it might not have been removed completely, but my problem with that point is that it reached me too late to just believe it was really never removed. for various reasons i would not blindly believe in “evidence” that is under the control of the party in question, who could manipulate it later for such claims and who has also been experienced as not trustworthy in what they say…

    that said, there are ways to check whether something was there at a given time or not. the one source i know that could help here only seems to store records from 29 jun 2023, 18:44:33 onwards, which is too late for this.

    https://web.archive.org/web/20240000000000*/https://abc.xyz/investor/google-code-of-conduct/

    you are right, it does not make a difference in whether they can be trusted, but it makes a difference in why not, and in what to expect if you trust them despite the red flags or, as a gov, just let things go on. a person who was speeding by accident should maybe be treated differently from a person who intentionally(!) does so while risking others’ lives. and what would be more proof of intention than a written statement or a removed canary? thus such a statement does make a difference in terms of whether they just cannot handle their stuff, don’t care at all, or maybe even have evil intentions.

    examples:

    some kids making a fire in the forest because they don’t know the risks

    vs.

    some young adults making a fire in the woods because they just don’t care, despite knowing the risks

    vs.

    a company making fire in the woods because it’s cheaper to do stuff there, they lack the resources to do it safely, and someone else will pay the firefighters anyway.

    vs.

    a company stating that it wants to do so because it likes it, although it could afford to do it safely, because no one could or would sue them anyway.

    while i don’t want to say google is like no. 4 here, to me these examples all make huge differences, no matter whether the woods actually caught fire or not.


  • my current idea is to finish some projects that have priority and afterwards look into lineage os on a raspberry pi, combined with a gsm modem and maybe a gps module, all powered by a slim powerbank. it might make for a huge, bulky phone, but i almost want to start building it now. on the other hand, if i wait until my other projects are finished, the whole thing might be available ready-made for self-assembly…


  • smb@lemmy.ml to Privacy@lemmy.ml · What are the risks of sharing DNA? · edited · 7 months ago

    anyone who has an idea of what to do with it could seek a way to get that data out of every company or gov that holds it, for their own reasons, no matter whether the data was collected lawfully or not, or whether access to it is lawful or not.

    1. searching for the source of evidence at crime scenes: if one of your relatives happened to be (related to a crime, or just bad luck) at a place where evidence was later collected, you might cause trouble for them, because your data is very similar to theirs and that is obvious to laboratories. depending on the state of technology at that later time, it could affect relatives more than two or three steps away from you. if you live in a country where law enforcement gives a shit about truth and just seeks one argument to punish anyone they can point a finger at, that could become a huge problem for the whole family, just because there was data that could be abused.
    2. illegal organ traders could, once they have access to your data, think you or your relatives could be a source of nice income if a client of theirs happens to pay enough. you will probably never know, as illegal organ traders are unlikely to ring the doorbell and ask nicely for a contract. how much do you think a rich person in need would pay for “spare parts” if those who deliver them want him to just never ask where they came from? does it matter whether such organ traders can know a “compatible match” from data alone? maybe not, because they might know tomorrow, or someone might put up an AI to do the matching (does it matter if that matching by AI is correct? i guess such traders don’t really care, and neither do their customers, but wouldn’t it possibly be too late by then?)

    for me the latter alone is enough to not willingly give my DNA data to anyone, for any reason. the gov might already have it (covid probes were collected and frozen, at least), but actively pushing your data out into the world would be insane IMHO.

    laboratories often use Microsoft Windows, Microsoft Active Directory and Microsoft Exchange, thus i personally see no reason NOT to believe that any data they have received at some point would, sooner or later, end up circulating uncontrolled in the hands of countless criminals waiting for any chance to make quick or huge money out of it.


  • looking at the official timeline, it is not completely a microsoft product, but…

    1. microsoft hated all of linux/open source for ages, even publicly called it a cancer etc.
    2. microsoft suddenly stopped its hate speech after the long-term “ineffectiveness” (as in: not destroying) of its actions against the open source world became obvious over time
    3. systemd appeared on stage
    4. everything within systemd is microsoft style. journald is literally microsoft logging. how services are “managed”, started etc. is exactly the flawed microsoft service management. how systemd was pushed to distributions is similar to how microsoft pushes things onto its victi… eh… “customers”. systemd breaks its promises like microsoft does (e.g. it has never been a drop-in replacement; likewise microsoft claimed its OS was secure while making actual use of the separation of users from admins, e.g. by filesystem permissions, “really” only in 2007 with the need of an extra click, whereas unix had already used permissions for such protection in 1973). systemd causes chaos and removes deterministic behaviour from linux distributions (before systemd, windows was the only operating system that would show different errors at different times during installation on the very same perfectly working hardware; now similar chaos can be observed on systemd distros too). AFAIK there still exists no definition of journald’s “binary” protocol; every normal open source project would have published that definition in the first place, but the systemd developers’ statement was more like “we take care of it, just use our libraries”, which is microsoft style for “use our products”. the superfluous systemd features do more harm than they help: journald’s “protection” from log flooding uses something like 50% cpu for a huge amount of wanted and normal logs, while a sane logging system happily uses only 3% cpu for the very same amount of logs per second without throwing away single log lines like journald does; thus journald exhaustively and pointlessly abuses system resources for features that do more harm than the problems they were said to solve. making the init process a network-reachable service looks to me as bad as when microsoft once put its web rendering engine (iis) into kernel space to be a bit faster, still being slower than apache while adding insecurity that later became an abused attack vector. systemd adds pointless dependencies all along the way, like microsoft does with its official products, to put some force on its customers for whatever official reason they like best. and systemd was pushed onto distributions with a lot of force and damage, even onto distributions that had, in their very roots, the freedom of NOT forcing a specific init system on their users (and the push to get systemd into those distros went even further, circumventing the unstable→testing→stable rules, like microsoft does with its patches). this list is very far from complete and still no end is in sight.
    5. “the” systemd developer is finally officially hired by microsoft

    i said systemd was a microsoft product long before its developer was hired by microsoft in 2022. and even if he hadn’t been hired by them, systemd would still be a microsoft-style product in every important way, with everything that is wrong in how microsoft does things: beginning with design flaws, added insecurities and unneeded attack vectors, added performance issues, false promises, and usage bugs (i’ve never seen a freshly logged-in user be directly logged off again on a linux system, except when systemd wants to stop/start something in the background because of its ‘fk y’ attitude, where one would “just try to log in again and not think about it” like with any other of microsoft’s shitware), ending in insecure and unstable systems where one has to “hope” that “the providers” will take care of it, while even more superfluous features, attack vectors etc. keep being added, as they always have been until now.

    systemd is, in every way i care about, a microsoft product. and systemd’s attack vectors through “needless dependencies” have just been added to the list of things “proven” (not only predicted) to be as bad as any M$ product in this regard.

    i would not go as far as to say that this specific attack was done by microsoft itself (how could i?), but i consider it a possibility, given the fact that they once publicly called linux/open source a “cancer”, that their “sudden” change to “supporting the open source world” looks to me like the poison Gríma used on Théoden, and some other observations and interpretations. however, i strongly believe that microsoft secretly “likes” every single bit of damage that any of systemd’s pointlessly added dependencies or other flaws could do to linux/open source. and why shouldn’t they like any damage done to one of their obvious opponents (as in money-gain and “dictatorship” power)? it’s a us company, what would one expect?

    and if you want to argue that systemd is not “officially” a product of the microsoft company… well, people also say “i googled it” when they mean “i used one of the search engines actually better than google.com”, same as with other things like “tempo” or “zewa” where i live. since the systemd developer works for microsoft, and it seems he works on systemd as part of this work contract, and given all the microsoft-style flaws in it from the beginning, i consider systemd a product of microsoft. i think systemd overall also “has components” of apple products, but those are IMHO not of a technical nature and thus far from being part of the discussion here; also, apple does not produce “even more systemd”, and apple has, in my experience, quite different flaws that i have not (yet?) encountered in systemd, thus it’s clearly not an apple product.


  • before pointing to vulnerabilities of open source software in general, please always look into the details: who, and if so “without any need”, and thus maybe “why”, introduced the actual attack vector in the first place. the strength of open source in action should not be painted as a deficit, especially not in such a context.

    to me it looks like an evilish company has put in a lot of effort over many years to inject its very own steady attack-vector increase through an otherwise needless introduction of uncounted dependencies into many distros.

    such a ‘needless’ dependency is liblzma for ssh:

    https://lwn.net/ml/oss-security/20240329155126.kjjfduxw2yrlxgzm@awork3.anarazel.de/

    openssh does not directly use liblzma. However debian and several other distributions patch openssh to support systemd notification, and libsystemd does depend on lzma.

    … and that was where and how the attack then surprisingly* “happened”.

    i consider the attack vector here to have been the superfluous systemd with its excessive dependency cancer, thus the result of using a microsoft-alike product. using M$-alike code, what would one expect to get?
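
    you can check your own box for that dependency chain; something like this (the libsystemd path is distro-specific):

    ```
    # does sshd link against libsystemd, and does that pull in liblzma?
    ldd /usr/sbin/sshd | grep -Ei 'systemd|lzma'
    ldd /usr/lib/x86_64-linux-gnu/libsystemd.so.0 | grep -i lzma
    ```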

    *) no surprise here; let me predict that we will see more of these attack vectors in action in the future. as an example, have a look at the init process: systemd changed it into a ‘network’-reachable service. and look at all the “cute” capabilities it was designed to “need” ;-)

    however, distributions free of microsoft(-ish) systemd are available for everyone who does not want the “microsoft experience” in otherwise security-driven** distros.

    **) like doing privilege separation instead of the exact opposite by “design”


  • i am happy with a raspberry pi setup connected to a VLAN switch; the internet is behind a modem (in bridged mode) connected by ethernet to one switchport, while the raspi routes everything through one tagged physical GB switchport. the setup works fine with two raspis and failover, without tcp disconnections during an actual failover, only a few seconds of delay when that happens; voip calls basically recover after seconds and streaming is not affected, while in a game a second off might already be too much. however, as such hardware failures happen rarely, i am running only one of them anyway.

    for the firewall i am using shorewall, while for some special routing i also use the unbound dns resolver (one can easily configure static results for any record) and haproxy with sni inspection for specific https routing in the rather specialized setup i have.
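
    a static record in unbound is just two lines; a sketch with a made-up name and IP:

    ```
    # /etc/unbound/unbound.conf
    server:
      local-zone: "nas.home." static
      local-data: "nas.home. IN A 192.168.10.5"
    ```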

    my wifi is done by openwrt, but i only use it to have separate wifis bridged to their own vlans.

    thus this setup allows for multi-zone networks at home, like a wifi for visitors with daily changing passwords and another for chromecast or home automation, each with its own rules, hardware redundancy and special tweaking. everything that runs on gnu/linux is possible, including pihole, wireguard, ddns solutions, traffic statistics, traffic shaping/QOS, traffic dumps, or even SSL interception if you really want to import your own CA into your phone and see what data your phone’s apps (those that don’t use certificate pinning) transfer when calling home, and much more.

    however, regarding ddns it sometimes feels safer and more reliable to have a reserved IP that does not change. some providers offer rather cheap tunnels for this purpose. i once had a free (ipv6) tunnel at hurricane electric (besides another one for IPv4), but now i use VMs in data centers.

    i do not see any ready-made product being that flexible. to me the best ready-made router system seems to be openwrt: you are not bound to a hardware vendor, you get security updates longer than with any commercial product, you can copy your config 1:1 to a new device even if the hardware changes, and you have the possibility to add packages with special features.

    “openwrt” is IMHO the most flexible ready-made solution for long-term use. “pfsense” is also very worth looking at; it has some similarities to openwrt while being different.


  • ok, i have to admit that i was thinking of google-“services”-free phones like the new ones from huawei. but sure, android is made by google (though not “owned” by them). however, i can try to “rescue” my argument by saying something like “just use a nokia 3310! they’re still working, and the battery should still last a week if not more” ;-)

    however, projects like lineage os might be a good choice to have threeth (as in: more than “both”): more security, less dependency on google, and also more influence on the actual software included in the build; it might even be possible to just compile it yourself and have the freedom of changing every line of code as you wish.


  • smb@lemmy.ml to Privacy@lemmy.ml · Google Allows Creditors to Brick Your Phone · edited · 8 months ago

    anyone remember the time when google removed(!) their internal “don’t be evil” rule? guess this is part of the outcome of the “be evil” that came along with the removal of its opposite. abuse of this mechanism is IMHO veery predictable ;-)

    there are plenty of google-free cellphones; one could easily stick to better products from better companies. help yourself, google’s not gonna do that for you within the next 5 billion* years, as they IMHO already stated they “want” to be evil now. always remember that ;-)

    *) that’s roughly when our sun expands too much for earth, so i currently dislike making any predictions beyond that point ;-) i do not predict google will last that long, only that they’ll keep being evil until their end.


  • my 2 cents just in case…:

    a raid6 is not a replacement for a backup ;-) i use rdiff-backup, which is easy to use: it stores one full mirror, all increments go into the past, and it is only possible to delete the oldest increments (afaik no “merging”); i never needed anything else. there should be one off-site backup and another offline one, synced manually once in a while. make complete dumps (including triggers etc.) of your databases before running the backup ;-)
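
    a sketch of that routine with placeholder host and paths (and mysql as the example database):

    ```
    # complete dumps first, including routines/triggers/events:
    mysqldump --all-databases --routines --triggers --events > /srv/dumps/all.sql
    # one full mirror plus reverse increments on another host:
    rdiff-backup /srv backuphost::/backups/homeserver
    # delete only the oldest increments, e.g. everything older than one year:
    rdiff-backup --remove-older-than 1Y backuphost::/backups/homeserver
    ```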

    i like to have a recreatable server setup: set it up manually, then put everything i did into ansible, try to recreate a “spare” server using ansible and the backup, and test everything; that way you can be sure you have also “documented” your setup to a good degree.

    for hardware i do not have many assumptions about performance (until it hits me), but an always-running in-house server should better save power (i learned this the costly way). it is possible to turn cpus off and run on only one cpu at a reduced frequency in times without performance needs; that can help a bit, and at least it feels good to do so, while turning cpus back on and setting a higher frequency is quick and can easily be scripted.
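
    for example (core numbers and frequencies depend on your machine; cpupower is its own package):

    ```
    # take a core offline and back online:
    echo 0 | sudo tee /sys/devices/system/cpu/cpu3/online
    echo 1 | sudo tee /sys/devices/system/cpu/cpu3/online
    # cap the maximum frequency in quiet times, lift it again later:
    sudo cpupower frequency-set -u 1.2GHz
    sudo cpupower frequency-set -u 3.4GHz
    ```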

    hard drives: make sure you buy 24/7-rated ones; they are usually far more hassle-free than the consumer grades and likely cost “only” double the price. i would always place the system on an ssd, but always as raid1 (not raid6), while the “other” member could be a magnetic disk set to write-mostly.
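
    with mdadm the write-mostly flag can be set at creation time; device names here are examples, double-check them before running this:

    ```
    # raid1 where reads prefer the ssd (sda2) and the hdd (sdb2) is write-mostly:
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/sda2 --write-mostly /dev/sdb2
    ```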

    as i do not buy “server” hardware for my home server, i always buy components twice when i change something, so that i have the spare parts ready at hand when i need them. running a server for 5+ years often ends with not being able to buy the same parts again, and then you have to first search for what you want, order, test, maybe send it back because it doesn’t fit… unstable memory? mainboard released smoke signs? with spare parts at hand, it’s a matter of minutes! the only thing i’m missing with my consumer-grade home server hardware is ecc ram :-/

    for cooling i like to use a 12cm fan powered with only 5v (instead of the 12v it wants), so that it runs smoothly slow and nearly as silent as passive-only cooling, but heat does not build up in the summer. do not forget to clean out the dust once in a while… i never had a 5v-powered 12v 12cm fan with bearing problems, and i think one of them ran for over a decade. i think the 12-volt fans last longer on 5v, but no warranty from me ;-)

    even with a headless setup i like to have a quick way to get to a console in case the network isn’t working. i once used a serial cable and my notebook, then a small monitor/keyboard; now i use pikvm and can look at my server’s physical console from my mobile phone (needing an ssl client certificate and TOTP to do so). but this involves the network, i know XD

    you likely want smart monitoring, and to run memtest once in a while.
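
    for example (smartmontools and memtester are their own packages; /dev/sda is a placeholder):

    ```
    sudo smartctl -H /dev/sda        # one-shot health verdict
    sudo smartctl -t short /dev/sda  # start a short self-test
    sudo memtester 512M 1            # test a chunk of ram without rebooting
    ```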

    for servers i also like to have some monitoring that can push a message to my phone for foreseeable conditions i’d like to handle manually.

    debsums, logcheck, logwatch and fail2ban are also worth looking at, depending on what you want.

    also, after updating packages, have a look at lsof | egrep “DEL|deleted” to see which programs need a simple restart to really use the updated libraries, so that reboots are only needed for newer kernels.
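
    slightly extended, to get a unique list of the affected programs:

    ```
    # full command names of processes still using deleted (i.e. replaced) files:
    sudo lsof +c0 | grep -E 'DEL|deleted' | awk '{print $1}' | sort -u
    ```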

    ok, this is more than 2 cents, maybe 5. never mind.

    hope these ideas help a bit