• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 21st, 2023

  • They have a secondary motherboard that hosts the Slot CPUs: four single-core P3 Xeons. I also have the equivalent Dell model, but it has a bum mainboard.

    With those 90’s systems, to get Windows NT to use more than one CPU, you had to get the appropriate Windows version that actually supported multiple processors.

    Now you can simply upgrade from a 1-core to a 32-core CPU and both Windows and Linux will pick up the difference and run with it.

    In the NT 3.5 and 4 days, you actually had to either do a full reinstall or swap out several parts of the kernel to get it to work.

    Downgrading took the same effort, since a multiprocessor Windows kernel ran really badly on a single-CPU system.

    As for the Sun Fires, the two models I mentioned tend to be readily available on eBay in the 100-200 range and are very different inside from an x86 system. You can go for the 400 series or higher to get even more difference, but getting a complete one of those can be a challenge.

    And yes, the software used on some of these older systems was a challenge in itself, but it isn’t really special; it’s pretty much like having different vendors’ RGB controller software on your system, a nuisance that you should try to get past.

    For instance, the IBM 5000 series RAID cards were simply LSI cards with IBM-branded firmware.

    The first thing most people do is put the actual LSI firmware on them so they run decently.


  • Oh, I get it. But a baseline HP ProLiant from that era is just an x86 system barely different from a desktop today, only worse, slower and more power-hungry in every respect.

    For history and “how things changed”, go for something like a Sun Fire system from the mid 2000’s (the 280R or V240 are relatively easy and cheap to get and are actually different) or a ProLiant from the mid to late 90’s (I have a functioning Compaq ProLiant 7000, which is HUGE and a puzzle box inside).

    x86 computers haven’t changed much at all in the past 20 years; you need to go to the rarer models (like blade systems) to see an actual deviation from the basic PC-like form factor we’ve been using all that time, with unique approaches to storage and performance.

    For self-hosting, just use something more recent that falls within your price class (usually 5-6 years old becomes highly affordable). Even a Pi is going to trounce a system that old, and it actually has a different form factor.




  • All I have left to say about Google, and YouTube in particular, is that YouTube’s ads have become so problematic, both in quantity and quality (seriously, people get banned for using innocuous words in videos targeted at adult audiences, yet completely fucked up ads are squarely targeted at children), that at this point it’s time for YouTube to die.

    A new platform needs to come along.

    Which will be hard, since Google has such a stranglehold at the datacenter and backbone level that they have an absolute advantage when it comes to bandwidth and storage costs, which are the main costs for a video platform like YouTube.



  • For now.

    Tech companies repeatedly float shit people don’t want, to see if the reaction is mild enough to actually go through with it.

    Then they either wait until it is, or mull over ways to sell this as a good idea to consumers.

    It was only 5 years ago that TotalBiscuit / John Bain was still railing against the initial spread of microtransactions and the DLC fragmentation of games.

    And now they are utterly and completely ubiquitous.




  • Email providers of every size don’t just blanket-block unknown servers; that’s just asking for problems and loads of additional work.

    They block known problems and detect likely problems.

    Tools like ASSP (the spam filter I’ve used for a long-ass time and used to install anywhere corporate filters weren’t in the budget) use advanced heuristics in combination with every form of blacklist/whitelist/greylist filtering you can think of (at both the DNS and SMTP level), looking at the contents of the mail in combination with how “normal” the DNS registration and responses of the mailserver are. Add to that the default check that an @microsoft.com email actually comes from a known Microsoft server. There are scores of public whitelists and blacklists, generated by the spam filters themselves: receiving mail correctly from a source puts it on the whitelists, detecting spam from it puts it on the blacklists. These lists have been around for decades by now and are constantly updated (mostly automatically). (There’s a rough sketch of this kind of list-plus-heuristics scoring at the end of this comment.)

    You don’t do email security and spam filtering by being an ass to everyone you don’t explicitly know. You do it by looking for suspicious signs and using user feedback. Just blocking by default is a far bigger headache than letting your tools do their work and then going in manually when they miss something.

    Google goes one step further and outright receives ALL mail, including spam, and just puts what is detected as spam in a spam folder.

    The first company I got to that had no spam filtering deployed at all went from 3 million emails received per day to just over 50K. Most people in that company ran a (pirated) Outlook plugin that did desktop-level spam filtering, still had to manually filter more than 90% of the mail they received, and then deleted their spam folder every week or so.

    After I installed ASSP there, as I said, it went down to receiving only 50K emails per day, of which about 30K were still spam. After 2 weeks it was down to 20K (a combination of me using the reporting tools on the mail that landed in my own mailbox and the spam filter’s heuristics engine getting smarter by learning from the spam it received), and then I had a meeting with the whole company to teach them how to report spam (and whitelist known senders and false positives).

    A month or two into the deployment, people were used to the reporting button and were down to receiving maybe 1 or 2 spam emails per day (often still flagged as questionable, though not definite spam), and those only got through because the senders were completely new to the system.

    This is because spam outfits are detected relatively quickly, so they constantly have to change IPs, domains and methods, and because of that they perpetually sit on greylists, which the filters scrutinize more heavily.

    A domain like mine, which has been running and sending/receiving email for decades, mostly to completely official destinations like banks, corporate clients, governments and other established organizations, without ever even hinting at sending spam, will rarely have any issue delivering its mail, as it is already known to the whitelist/blacklist generators as a good sender.
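
    To make that a bit more concrete, here’s a minimal sketch (in Python, not ASSP’s actual code) of the list-plus-heuristics scoring idea. The blacklist hostnames, weights and thresholds are just illustrative assumptions.

    ```python
    # Minimal sketch of layered spam scoring: DNSBL lookups on the sending IP
    # plus a couple of crude heuristics. List names, weights and thresholds are
    # illustrative only; real filters like ASSP do far more than this.
    import socket

    DNSBLS = ["zen.spamhaus.org", "bl.spamcop.net"]  # examples of public blacklists

    def reversed_ip(ip: str) -> str:
        """DNSBLs are queried with the IPv4 octets reversed: 127.0.0.2 -> 2.0.0.127."""
        return ".".join(reversed(ip.split(".")))

    def dnsbl_hits(ip: str) -> int:
        """Count how many of the blacklists list this IP."""
        hits = 0
        for bl in DNSBLS:
            try:
                socket.gethostbyname(f"{reversed_ip(ip)}.{bl}")
                hits += 1          # any A record answer means "listed"
            except socket.gaierror:
                pass               # NXDOMAIN means "not listed" on this blacklist
        return hits

    def verdict(sender_ip: str, helo_resolves: bool, content_flags: int) -> str:
        """Combine blacklist hits with crude heuristics into accept/greylist/reject."""
        score = dnsbl_hits(sender_ip) * 3      # blacklist listings weigh heavily
        score += 0 if helo_resolves else 2     # "abnormal" DNS for the sending server
        score += content_flags                 # e.g. suspicious words or links found
        if score >= 5:
            return "reject"
        if score >= 2:
            return "greylist"                  # unsure: defer and scrutinize further
        return "accept"

    if __name__ == "__main__":
        # 127.0.0.2 is the conventional DNSBL test address, so this should score high.
        print(verdict("127.0.0.2", helo_resolves=False, content_flags=1))
    ```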


  • Never had any big issues, as there have always been providers here that stood by having an open network for their subscribers, even in the dial-up age.

    And because they existed, the major providers don’t tend to do that either (at least not anymore).

    The most ludicrous thing is that the one time I DID have issues with port blocks (ports 21/53/80/443, aka FTP/DNS/HTTP/HTTPS) was the first time I switched from a domestic line to a business one with one of the largest providers here. They blocked those by default unless you called them to unblock everything.

    But in the past decade, on fiber, I’ve never had an issue. The providers that were first to deliver fiber were new ones, breaking away from the two major ISPs that respectively owned ALL the coax and ALL the copper in the country, which allowed them to set their own rules.

    And their competitive edge wasn’t price, but giving you a ludicrously fast and stable connection, with the only limitation being what the fiber could carry. Now that the major ISPs are also finally providing fiber, their pricing compared to my own ISP’s is kinda ludicrous.

    My current ISP’s advertised philosophy is “security is your responsibility, a stable fast connection is ours”. And so far, they’ve held true to that.

    Besides that, for almost as long, I’ve rented and now own a box at a datacenter, which among its secondary tasks runs a backup NS and backup MX, since I had the box anyway (see the MX sketch at the end of this comment). To date, the only times that backup has had to do anything were when I was moving and during announced network maintenance or other works (the longest I can remember was 1 hour, and they only happen about twice per year).

    I get that if I lived in the US, this would not be quite as practical to achieve.

    I worked for a US ISP in the early 00’s that was looking to provide WiFi in rural Texas. I set up the hardware and backend for them. It quickly became apparent from what they were demanding of the backend that their focus wasn’t particularly to bring access to rural areas, but to milk the shit out of providing WiFi to rural areas.

    Don’t get me wrong tho, I still have several Gmail addresses that are as old as the service itself. I’d rather use a Gmail address to sign up to sites and have Google deal with the subsequent deluge of spam than have that shit tax my own system :P
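
    On the backup MX, by the way: there’s nothing exotic to it, sending servers simply try the MX records in order of preference and only fall back to the higher-numbered one when the primary doesn’t answer. A tiny sketch of what that looks like when queried (Python with dnspython; the domain is a placeholder, not mine):

    ```python
    # Minimal sketch: a backup MX is just a second MX record with a higher
    # preference number. "example.net" is a placeholder domain.
    import dns.resolver  # third-party package: dnspython

    def mail_hosts(domain: str) -> list[tuple[int, str]]:
        """Return (preference, hostname) pairs, primary (lowest preference) first."""
        answers = dns.resolver.resolve(domain, "MX")
        return sorted((r.preference, str(r.exchange)) for r in answers)

    if __name__ == "__main__":
        hosts = mail_hosts("example.net")
        for i, (pref, host) in enumerate(hosts):
            role = "primary" if i == 0 else "backup"
            print(f"{pref:>3}  {host}  ({role})")
    ```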



  • I’ve been in IT all my life, starting in the mid-80’s. I’ve got an extensive home lab and host pretty much everything you tend to use as SaaS these days at home too: mail, cloud storage, a web-based office suite, etc.

    But for the “what if your ISP goes down”, well, then I switch to my neighbor’s ISP XD.

    There are dozens of ISPs of various sizes where I live, and my neighbors are spread across 8 of them. I have access to their networks (most of them gave me access).

    So if my ISP goes down, I switch to another one.

    That said, I haven’t had an outage longer than 30 minutes in 5 years, and shorter outages (quick resets to a few minutes) only happen about once a year.

    There are some announced outages, usually once per quarter, for network upgrades and system maintenance. But generally, my ISP has 99.99% uptime (quick math on what that budget allows below).
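
    For reference, a 99.99% budget works out to roughly 53 minutes of downtime per year, so a handful of short resets plus the odd maintenance window fits comfortably. Quick back-of-the-envelope (just my own arithmetic):

    ```python
    # Downtime allowed per year at 99.99% availability.
    minutes_per_year = 365.25 * 24 * 60
    allowed_downtime = minutes_per_year * (1 - 0.9999)
    print(f"{allowed_downtime:.0f} minutes/year")   # ~53 minutes
    ```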