Are they from China/Chinese clients? A number of these are modified to never seed, so they always show as having 0%.
I left them years ago, but their VPN software has (had?) a critical bug: the kill switch treats “connecting” the same as “connected”.
Meaning that if the connection drops for any reason and isn’t immediately reestablished, you not only lose all protection but also get a false sense of security.
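For the curious, the failure mode is roughly this (a minimal Python sketch; the state names and functions are my own illustration, not their actual code):

```python
from enum import Enum, auto

class TunnelState(Enum):
    DISCONNECTED = auto()
    CONNECTING = auto()
    CONNECTED = auto()

# Buggy logic (the behavior described above): traffic is allowed while
# the client is merely *trying* to connect, so a dropped tunnel stuck
# in reconnect leaks everything while still looking protected.
def buggy_killswitch_allows_traffic(state: TunnelState) -> bool:
    return state in (TunnelState.CONNECTING, TunnelState.CONNECTED)

# Correct logic: only open up once the tunnel is actually established.
def correct_killswitch_allows_traffic(state: TunnelState) -> bool:
    return state is TunnelState.CONNECTED

assert buggy_killswitch_allows_traffic(TunnelState.CONNECTING)        # leaks
assert not correct_killswitch_allows_traffic(TunnelState.CONNECTING)  # blocks
```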
Kind of. They will be multiples of 4. Let’s say you got a gigantic 8i8e card, unlikely as that is. That would (probably) have 2 internal and 2 external SAS connectors. Your standard breakout cables will split each one into 4 SATA cables (up to 16 SATA ports if you used all 4 SAS connectors with breakout cables), each running at full (SAS) speed.
But what if you were running an enterprise file server with a hundred drives, as many of these cards once did? You can’t cram dozens of these cards into a server; there aren’t enough PCIe slots/lanes. Well, there are SAS expander cards, which basically act as splitters. They share those 4 lanes, potentially creating a bottleneck. But this is where SAS and SATA speeds differ: these are SAS lanes, which are (probably) double what SATA can do. So with expanders, you could attach 8 SATA drives to every 4 SAS lanes and still run at full speed. And if you need capacity more than speed, expanders let you split those 4 lanes across 24 drives. These are typically built into the drive backplane/DAS.
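To put rough numbers on that (back-of-the-envelope Python; the speeds are my assumptions: SAS-2 lanes at 6 Gb/s, spinning disks sustaining roughly 1.6 Gb/s, i.e. ~200 MB/s):

```python
# Back-of-the-envelope expander math. The speeds are assumptions:
# SAS-2 lanes at 6 Gb/s, spinning disks sustaining ~1.6 Gb/s (~200 MB/s).
SAS2_LANE_GBPS = 6.0
LANES_PER_CONNECTOR = 4
HDD_SUSTAINED_GBPS = 1.6

uplink_gbps = SAS2_LANE_GBPS * LANES_PER_CONNECTOR  # 24 Gb/s per connector

# Direct breakout: 4 drives per connector, one lane each, no contention.
print(f"breakout: 4 drives at {SAS2_LANE_GBPS:.0f} Gb/s each (no sharing)")

# Expander: N drives share the 4-lane uplink. Fine for spinning disks
# until the per-drive share drops below what a disk can actually push.
for drives in (8, 24):
    share = uplink_gbps / drives
    verdict = "fine" if share >= HDD_SUSTAINED_GBPS else "bottleneck (sequential)"
    print(f"expander: {drives} drives -> {share:.2f} Gb/s each: {verdict}")
```

Even the 24-drive case only bottlenecks when many drives stream sequentially at once; for typical mixed workloads it’s a non-issue.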
As for the fan, just about anything will do. The chip/heatsink gets hot, but is limited to the ~75 watts provided by the PCIe bus. I just have an old 80 or 90mm fan pointing at it.
The one I had would frequently drop the drives, wreaking havoc on my (software) RAID5. I later found out that it was splitting 2 ports into 4 in a way that completely broke spec.
I don’t want to speak to your specific use case, as it’s outside of my wheelhouse. My main point was that SATA cards are a problem.
As for LSI SAS cards, there are a lot of details that probably don’t (but could) matter to you: PCIe generation, connectors, lanes, etc. There are threads on various other homelab forums, TrueNAS, Unraid, etc. Some models (like the 9212-4i4e, meaning it has 4 internal and 4 external lanes) have native SATA ports that are convenient, but most will have a SAS connector or two. You’d need a matching (forward) breakout cable to connect to SATA. Note that there are several common connectors, with internal and external versions of each.
You can use the external connectors (e.g. SFF-8088) as long as you have a matching (e.g. SFF-8088 SAS-SATA) breakout cable, and are willing to route the cable accordingly. Internal connectors are simpler, but might be in lower supply.
If you just need a simple controller card to handle a few drives without major speed concerns, and it will not be the boot drive, here are the things you need to watch for:
Also, make sure you can point a fan at it. They’re designed for rackmount server chassis, so desktop-style cases don’t usually have the airflow needed.
To anyone reading, do NOT get a PCIe SATA card. Everything on the market is absolute crap that will make your life miserable.
Instead, get a used PCIe SAS card, preferably based on LSI. These should run about $50, and you may (depending on the model) need a $20 cable to connect it to SATA devices.
I did this back in the days of Smoothwall, ~20 years ago. I used an old, dedicated PC, with 2 PCI NICs.
It was complicated and took a long time to set up properly. It was loud, used a lot of power, and didn’t give me much beyond the standard $50 routers of the day (and would be easily eclipsed by the standard $80 routers of today). But it ran reliably for a number of years without any interaction.
I also didn’t learn anything useful that I could apply to anything else, so it ended up being a waste of time. 2/10, spend your time on something more useful.


Intel’s future hasn’t been looking great, for a bunch of reasons unrelated to Trump.
I’m not saying you should avoid it (“Be greedy when others are fearful”), but you should really make sure you understand what you’re getting into.
The big caveat is that the BIOS must allow it, and most released versions do not.


What is your use case? I ask because ESXi is free again, but it’s probably not a useful skill to learn these days. At least not as much as the competition.
Similarly, 2.5" mechanical drives only make sense for certain use cases. Otherwise I’d get SSDS or a 3.5" DAS.
If you skipped the area code, it probably failed the general validation check. To really test this, you would’ve needed to try a different (but completely valid) number.
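To illustrate the difference between the two checks (a Python sketch; the NANP rule that area codes can’t start with 0 or 1 is real, the rest is illustrative):

```python
import re

# Layer 1: general format check -- ten digits in a familiar shape.
FORMAT_RE = re.compile(r"^\(?(\d{3})\)?[-. ]?(\d{3})[-. ]?(\d{4})$")

def valid_format(number: str) -> bool:
    return FORMAT_RE.match(number) is not None

# Layer 2: is the area code even plausible? Under NANP rules the first
# digit of an area code must be 2-9, so skipping the area code fails
# layer 1, while "123" passes layer 1 but fails here.
def plausible_area_code(number: str) -> bool:
    m = FORMAT_RE.match(number)
    return m is not None and m.group(1)[0] in "23456789"

print(valid_format("555-0100"))              # False -- no area code at all
print(plausible_area_code("(123) 555-0100")) # False -- impossible area code
print(plausible_area_code("(614) 555-0100")) # True  -- valid-looking number
```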


They all have to work (at least to an extent) using only x1. It’s part of the PCIe spec.
Missing pins are actually extremely common. If your board has a slot that’s x16 (electrically x8), which is very common for a second video card, take a closer look. Half the pins in the slot aren’t connected. It has the full slot to make you feel better about it, and it provides some mounting stability, but it’s electrically the same as an x8 that’s open.
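You can verify this on Linux without opening the case (a sketch reading the standard PCIe sysfs attributes; whether a given device exposes them varies by kernel and hardware):

```python
from pathlib import Path

# Print each PCIe device's negotiated vs. maximum link width. A card in
# an x16-physical/x8-electrical slot shows up at x8 (or less) here,
# regardless of the slot's physical size.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur = (dev / "current_link_width").read_text().strip()
        mx = (dev / "max_link_width").read_text().strip()
    except OSError:
        continue  # not every device exposes link attributes
    print(f"{dev.name}: running x{cur} of x{mx} possible")
```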


USB the protocol, or just uses a USB cable? If it’s not using the protocol, the cables are a cheap way of getting cables of a certain spec.
Related: The Pirate Bay used to (might still?) have a section where they mock all of the threatening letters that cite laws from jurisdictions that don’t apply to them. Usually the US DMCA, but also similar laws from other countries.
They never posted any letters that cited Swedish (IIRC) law, because those were valid threats.


Also, be sure to run extensive burn-in tests before deploying for production use. I had an entire batch from GoHardDrive fail on me during that testing, so my data was never in danger.
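For reference, my burn-in is roughly along these lines (a sketch assuming Linux with badblocks and smartctl installed; badblocks -w is destructive, so only run it on drives with no data):

```python
import subprocess
import sys

# DESTRUCTIVE burn-in sketch: badblocks -w overwrites the whole drive
# with test patterns and verifies the read-back, then a SMART extended
# self-test exercises the drive's own health checks. Empty drives only!
def burn_in(device: str) -> None:
    # Full write-mode surface test (-w), with progress (-s) and verbose
    # output (-v). This takes many hours on large drives.
    subprocess.run(["badblocks", "-wsv", device], check=True)
    # Kick off the drive's built-in extended self-test; results appear
    # later in the output of `smartctl -a <device>`.
    subprocess.run(["smartctl", "-t", "long", device], check=True)

if __name__ == "__main__":
    burn_in(sys.argv[1])  # e.g. python burn_in.py /dev/sdX
```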


Thank you for the extra context. It’s a relief to know you don’t just have a bunch of USB “backup” drives connected.
To break this down to its simplest elements, you basically have a bunch of small DASes connected to a USB host controller. The rest could be achieved using another interface, such as SATA or SAS. USB makes certain compromises (like drives dropping out and re-enumerating under load) that you really don’t want happening to a member of a RAID, which is why you’re getting warnings from people about data loss. SATA/SAS don’t have this issue.
You should not have to replace the cable ever, especially if it does not move. Combined with the counterfeit card, it sounds like you had a bad parts supplier. But yes, parts can sometimes fail, and replacements on SAS are inconvenient. You also (probably) have to find a way to cool the card, which might be an ugly solution.
I eventually went with a proper server DAS (EMC KTN-STL3, IIRC), connected via external SAS cable. It works like a charm, although it is extremely loud and sucks down 250 W at idle. I don’t blame anyone for refusing this as a solution.
I wrote, rewrote, and eventually deleted large sections of this response as I thought through it. It really seems like your main reason for going USB is that specific enclosure. There should really be an equivalent with SAS/SATA connectors, but I can’t find one. DAS enclosures pretty much suck, and cooling is a big part of it.
So, when it all comes down to it, you would need a DAS with good, quiet airflow, and SATA connectors. Presumably this enclosure would also need to be self-powered. It would need either 4 bays to match what you have, or 16 to cover everything you would need. This is a simple idea, and all of the pieces already exist in other products.
But I’ve never seen it all combined. It seems the data hoarder community jumps from internal bays (I’ve seen up to 15 in a reasonable consumer config) straight to rackmount server gear.
Your setup isn’t terrible, but it isn’t what it could/should be. All things being equal, you really should switch the drives over to SATA/SAS. But that depends on finding a good DAS first. If you ever find one, I’d be thrilled to switch to it as well.


You currently have 16 disks connected via USB, in a ZFS array?
I highly recommend reimagining your path forward. Define your needs (sounds like a high-capacity storage server to me), define your constraints (e.g. cost), then develop a solution to best meet them.
Even if you are trying to build one on the cheap with a high Wife Acceptance Factor, there are better ways to do so than attaching 16+ USB disks to a thin client.


Not just VPN, but geolocation in general. I am in Ohio, but I am often geolocated as being in Chicago due to my ISP. Similar for mobile.


The comments are all assuming that it’s the same people who are viewing this content and promoting hatred. It may not be: even the reddest areas still have 30%+ blue voters (and vice versa). Although the implications of that would be equally noteworthy and confusing.
Something tells me the demographics don’t overlap very much. I’m betting that most people going to a club already have and use a Facebook account. Unless there’s a massive cultural difference among young adults in Australia.