• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • I’m not sure it’s the same software, but it’s a fairly good guess, I think: same capabilities, same lab, same area of research.

    GeoGuessr is a subset of the skills used for general image geolocation in open-source intelligence.
    In the specific case of using only the data present in the image and relying on geographic knowledge, the AI certainly does better.
    Humans still do better at placing images that require spatial reasoning or cross-referencing multiple data sources, and can reach decent skill at that with minimal training.
    AI tools will likely pick up those extra skills too, but that doesn’t change the fact that the photo is the data leak, not the tool. The tool just makes the leak vastly more accessible, and part of the task easier for a curious human.


  • Some blurs are reversible, and some aren’t. Some of them do a statistical rearrangement of the data in the blurred area that’s effectively reversible.

    Think shredding a document. It’s a pain and it might take a minute, but it’s feasible to get the original document back, give or take some overlapping edges and tape.

    Other blurs combine, distort, and alter the image contents such that there’s nothing there to recombine to get the original.

    A motion blur or the typical “fuzzy” (Gaussian) blur falls into the first category: it can be directly reversed, and statistical techniques and AI tools can reconstruct the original, because the original data is still there, or enough of it that you can make good guesses from what remains plus the surrounding context.
    Pixelating the area does a better job, because it actually deletes information instead of just smearing it around, but tools can still pick out lines and shapes well enough to make informed guesses.
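
    As a toy illustration of why the “smearing” blurs leak so much (my own sketch, assuming the blur kernel is known and no noise was added afterwards, which is the attacker’s best case): a Gaussian blur is just a convolution, which is a multiplication in the frequency domain, so it can largely be divided back out.

      import numpy as np

      # Stand-in original image built from low-frequency waves.
      y, x = np.mgrid[0:64, 0:64] / 64.0
      original = np.sin(2 * np.pi * 3 * x) + np.cos(2 * np.pi * 5 * y)

      # Gaussian blur kernel (sigma = 2), centred so the FFT sees it at the origin.
      t = np.arange(64) - 32
      g = np.exp(-t**2 / (2 * 2.0**2))
      kernel = np.outer(g, g)
      kernel /= kernel.sum()
      K = np.fft.fft2(np.fft.ifftshift(kernel))

      # Blurring = multiplying the image spectrum by the kernel spectrum.
      blurred = np.real(np.fft.ifft2(np.fft.fft2(original) * K))

      # Regularised inverse filter: divide the kernel back out, damped where the
      # blur crushed the signal to ~0 (whatever lived there really is gone).
      eps = 1e-6
      recovered = np.real(np.fft.ifft2(
          np.fft.fft2(blurred) * np.conj(K) / (np.abs(K) ** 2 + eps)))

      print(np.max(np.abs(recovered - original)))  # tiny: almost everything came back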

    Some blurs, however, fill the area being blurred with random noise, which is then tweaked to blend into the context of whatever was being blurred.

    Something like that is impossible to reverse because the information simply is not there.
    It’s like using generative AI to “recover” data cropped from an image. At that point it’s no longer recovery, but creation of possible data that would fit there.

    The tools aren’t magical; they’re still ultimately bound by the rules of information storage.


  • It’s not a simple task, so I won’t list many specifics, just a few, followed by some more general principles.

    First, the specifics (rough example configs just after this list):

    • disable remote root login via SSH.
    • disable password login, and only permit SSH keys.
    • run fail2ban to automatically lock out repeated failed logins.
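
    A rough sketch of the first two, in /etc/ssh/sshd_config (standard OpenSSH directives; reload sshd after editing):

      PermitRootLogin no
      PasswordAuthentication no
      PubkeyAuthentication yes

    and a minimal fail2ban jail for SSH in /etc/fail2ban/jail.local (the numbers are just sane-ish starting points, tune to taste):

      [sshd]
      enabled  = true
      maxretry = 5
      findtime = 10m
      bantime  = 1h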

    Generally:

    • only expose things you must expose. It’s better to do things right and securely than easily. Exposing a web service only requires you to expose port 443 (HTTPS); basically everything else is optional.
    • enable every security system that you don’t have a reason to disable. SELinux giving you problems? Don’t turn it off; learn how to write rules to let your application do the specific things it needs (rough commands in the sketch after this list). Only make firewall exceptions where needed, rather than disabling the firewall.
    • give system users the minimum access they require to function.
    • set folder permissions as restrictively as possible. FACLs will help, because they let you be much more nuanced.
    • automate updates. If you have to remember to do it, it won’t happen, and failure to automate updates means your software is out of date.
    • consider setting up a dedicated authentication service like Authelia or Keycloak. Applications tend to, frankly, suck at security. It’s not what they’re building, so it’s not as good as a dedicated security service. There are other follow-on benefits, too.
    • if it supports two-factor authentication, enable it.
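
    A few of those bullets sketched as commands, here in Fedora/RHEL flavour (the application name, user, and paths are placeholders; Debian/Ubuntu has equivalents for each):

      # SELinux: turn recent denials for your app into a policy module instead of disabling SELinux
      ausearch -m avc -ts recent | audit2allow -M myapp_local
      semodule -i myapp_local.pp

      # Firewall: open only what you actually serve
      firewall-cmd --permanent --add-service=https
      firewall-cmd --reload

      # FACLs: give one user read access to one tree, nothing more
      setfacl -R -m u:appuser:rX /srv/myapp/config

      # Automatic updates (set apply_updates = yes in /etc/dnf/automatic.conf to actually install them)
      dnf install dnf-automatic
      systemctl enable --now dnf-automatic.timer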

    You mentioned using Cloudflare, which is good. You might also consider configuring your firewall to disallow outbound connections from the server to your local network. That way, if your server gets owned, it can’t poke at other things on your network.
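
    One way to approximate that on the server itself, assuming iptables and a 192.168.1.0/24 LAN (adjust to your addressing, or do the equivalent on your router/firewall instead):

      # block the server from opening new connections to the rest of the LAN,
      # while still allowing replies to connections that came in from the LAN
      iptables -A OUTPUT -d 192.168.1.0/24 -m conntrack --ctstate NEW -j REJECT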


  • So, you’re going to run into some difficulties, because a lot of what you’re dealing with is, I think, specific to CasaOS, which makes it harder to know what’s actually happening.

    The way you’ve phrased the question makes it seem like you’re following a more conventional path.

    It sounds like maybe you’ve configured your public traffic to route to the Nginx Proxy Manager admin interface instead of to nginx itself.
    Instead of having your router forward ports 80/443 to port 81, try forwarding them to ports 80/443 on the host, which is where nginx itself should be listening.
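
    If it’s the usual Docker setup, the stock Nginx Proxy Manager compose file (sketched from memory, so check it against what CasaOS actually generated) makes the split clear:

      services:
        npm:
          image: 'jc21/nginx-proxy-manager:latest'
          restart: unless-stopped
          ports:
            - '80:80'    # HTTP, handled by nginx; forward this from the router
            - '443:443'  # HTTPS, handled by nginx; forward this from the router
            - '81:81'    # admin web UI; keep this LAN-only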

    Systems that promise to manage everything for you are great for getting started fast, but they have the unfortunate side effect that you don’t actually learn what they’re doing, or even what you have running to manage it all. That can make asking for help a lot harder.


  • You’ll be fine enough as long as you enable MFA on your NAS, and ideally configure it so that anything “fun”, like administrative controls or remote access, is only available on the local network.

    Synology has sensible defaults for security, for the most part. Make sure you have automated updates enabled, even for minor updates, and ensure it’s configured to block multiple failed login attempts.

    You’re probably not going to get hackerman poking at your stuff, but you will get bots trying to SSH in and log in to the WordPress admin console, even if you’re not using WordPress.

    A good rule of thumb for securing computers is to minimize access, privilege, and connectivity.
    Lock everything down as far as you can, turn off everything that makes it possible to access the system, and enable every tool for keeping people out or dissuading attackers.
    Now you can make port 443 on your NAS publicly available, and only that port, because you don’t need anything else, and have your router forward only port 443 to the NAS.
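
    A quick sanity check is to scan your public address from a machine outside your network (the hostname here is a placeholder) and confirm that 443 is the only common port answering:

      nmap -Pn -p 1-1024 your.public.hostname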

    It feels silly to say, but sometimes people think “my firewall is getting in the way, I’ll turn it off”, or “this one user needs read access to one file, so I’ll give read/write/execute privileges to every user on the system for this folder and every subfolder”.

    So as long as you’re basically sensible and use the tools available, you should be fine.
    You’ll still poop a little the first time you see that 800 bots tried to break in. Just remember that they’re already doing that now; there’s just nothing listening to write down that they tried.

    However, the person who suggested putting Cloudflare in front of GitHub Pages and using something like Hugo gave a great example of “opening as few holes as possible” and “using the tools available”.
    It’s what I do for my static sites, like my recipes and stuff.
    You can set up a GitHub Action that compiles the site and deploys it whenever you push a commit, which is nice.
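
    For reference, the workflow for that looks roughly like this (sketched from memory using the popular peaceiris actions; double-check versions and inputs against their docs):

      name: deploy-site
      on:
        push:
          branches: [main]
      jobs:
        build-deploy:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
              with:
                submodules: true  # in case the Hugo theme is a git submodule
            - uses: peaceiris/actions-hugo@v2
              with:
                hugo-version: 'latest'
            - run: hugo --minify  # builds the site into ./public
            - uses: peaceiris/actions-gh-pages@v3
              with:
                github_token: ${{ secrets.GITHUB_TOKEN }}
                publish_dir: ./public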