• 1 Post
  • 23 Comments
Joined 1 year ago
Cake day: July 1st, 2023


  • To be frank with you, humans are the weakest security point in any system. Even if you did somehow (impossibly) 100% secure your device… you’re literally sending everything to X other family members who don’t care about security anyway and take zero preventative measures. That’s sort of the point of a chat app. All they would need to do is target your family instead of you to get the exact same info - this is how Facebook has everyone’s telephone number and profile photo, even if they don’t have an account. And if it’s a WhatsApp data breach… well. Your family is just one in a sea of millions of potentially better/easier targets.

    If there’s anything interesting about your family chats that is actually secret info, it probably shouldn’t be put into text anywhere except maybe a password manager. Just tell them not to send passwords, illegal stuff, or security-question answers via WhatsApp. It’s all you can realistically do in situations like this.

    We literally cannot keep all information private from everyone all the time, you have to pick and choose your battles. And even then, you’ll still lose some, even if you’re perfect.






  • It relies on slow legal mechanisms that vary widely by jurisdiction. It also highlights the huge problem with forcing users to find workarounds for legal manipulation. Instead of employing an “economies of scale” approach and having authorities crack down on obvious bullshit, you have to go through this process or pay someone to do it for you and pay companies for their credit reports on you and pay to file the lawsuit etc. etc.

    Additionally, any of these companies can close down and then open back up with a new name at any time and force you to start the process all over again. It’s called a “phoenix company” where I am.

    I also consider it pretty likely that trying to remove your information just verifies your information and therefore makes it more valuable for brokers. There’s no reason to assume they handle information ethically and are doing anything more than providing the opt-out for plausible deniability.



  • Add a little oil, and a few minutes in a frying pan or microwave will do it. Maillard reaction (browning) starts at around 140°C and shiitake aren’t exactly thick, so they won’t take much longer than it takes to get some extra colour on them. Average frypan and oven temp is usually around 180°C, so it’s not something you really need to think or worry about.

    The researchers also think you need a particular hypersensitivity for this to happen. If this were a significant risk, there would be huge numbers of cases in East Asia. This case became a science tabloid spam piece because it’s so unusual.


  • I got it from the quote in the article from the author of the NEJM paper. You’re correct, but this seems to also happen in maybe 2% of people, and there’s a good chance 145°C is only the threshold for being absolutely certain the sugars have fully broken down. Hotpots might still get rid of most of it at 100°C. I’m not a polysaccharide decomposition expert though, even though I know they’re very heat-sensitive.

    If you’re really worried (which you probably don’t need to be given its rarity), mushrooms can’t really be overcooked (unless you literally burn them), so nuking them in the microwave with a thin coat of oil or frying them off will help get them to temp if you want to be really certain.

    Second source from non-paywalled:

    It affects about 2% of people that consume the mushrooms raw or only lightly cooked… in people of all ages, … more often male than female.
    …shiitake dermatitis is not seen with the ingestion of thoroughly cooked at a temperature > 145 C.
    - Shiitake flagellate dermatitis


  • Poor bastard must have been itchy as fuck. Sadly the article on a shitty ad-infested site is also padded out for word count. So here are the important parts. Hand-summarised, unlike the AI-assisted article:

    • A 72-year-old man presented with a 2-day history of an itchy, linear rash across his back. Two days before symptom onset, he had prepared and eaten a meal containing shiitake mushrooms. - Paywalled report from New England Journal of Medicine
    • Caused by the carbohydrate lentinan, which triggers the release of interleukin-1 (and other chemicals), causing inflammation.
    • The rash develops usually 2-3 days after eating undercooked shiitake.
    • Lentinan is broken down when thoroughly cooked at temperatures over 145°C / 293°F
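As a sanity check on that threshold, the Celsius-to-Fahrenheit conversion is just arithmetic, and it does confirm the two figures quoted above match:

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# The lentinan-breakdown threshold quoted above.
print(c_to_f(145))  # prints 293.0
```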

    Because fuck shitty pop-science padded journalism and their marketing strategies and hostile UX, and fuck the NEJM too for paywalling medical research.




  • This is even worse when we factor in that many accessibility issues can be addressed through simple measures, which often have to be carried out during basic maintenance anyway, like rewiring or fitting renewal.

    I completely agree. I would love to have the option to use non-networked solutions. But for multiple reasons, tinkering with the electricity supply and residence is outside my control.

    I can still control my networks and lightbulbs though. So here I am, somewhere I never anticipated, looking at networkable lightbulbs and foss repos. Like I said, I’m just happy to have an option.


  • I’m glad your relatives were able to make permanent modifications to their living spaces that sufficiently accommodated their accessibility needs! Many of us do not share those circumstances, and the number of people with a huge variety of different medical problems is steadily increasing. I, for one, am very happy to have some implementation options to choose from.


  • If you’re really old, odds are you have experienced physical pains that have made “forgetting to turn off the light/appliance/device” a difficult experience rather than just inconvenient. I never liked the idea of IoT devices until chronic pain fucked up the whole mobility thing for me, now I realise it’s a total necessity. Especially for societies with rapidly growing older demographics, increased rates of chronic illness, and inadequate social and medical systems.


  • We used post-it notes on a wall at a previous workplace to aid a truly useless manager. It didn’t make him a better manager, but it did have upsides. It felt great to crunch completed tasks up into little balls and throw them in the recycling when we did standups. The extra visibility in the room was really helpful too, other colleagues would ask us about our work or when we might be free for their whims, and we could just point at the wall and say “after all that shit is done?”. Usually they would see the mountain in the to-do columns and say “oh.” and then walk off dejectedly. It stopped a lot of bullshit requests with the mere presence of colourful papers fluttering in the aircon, including incompetent managerial scope creep.

    The fridge would work well for this with some little magnets and/or a whiteboard marker, like people do with reward charts for kids.


  • It’s not possible to remove bias from training datasets at all. You can maybe try to measure it and attempt to influence it with your own chosen set of biases, but that’s as good as it can get for the foreseeable future. And even that requires a world of (possibly immediately unprofitable) work to implement.

    Even if your dataset is “the entirety of the internet and written history”, there will always be biases towards the people privileged enough to be able to go online or publish books and talk vast quantities of shit over the past 30 years.

    Having said that, this is also true for every other form of human information transfer in history. “History is written by the victors” is an age-old problem when it comes to truth and reality.

    In some ways I’m glad that LLMs are highlighting this problem.


  • Even as someone who declines all cookies where possible on every site, I have to ask. How do you think they are going to be able to improve their language-based services without using large language models or other algorithmic evaluation of user data?

    I get that the combo of AI and privacy has huge consequences, and that Grammarly’s opt-out limits are genuinely shit. But it seems like everyone is so scared of the concept of AI that we’re harming research on tools that can help us, while the tools which hurt us are developed with no consequence, because they don’t bother with any transparency or announcement.

    Not that I’m any fan of Grammarly, I don’t use it. I think that might be self-evident though.