Also @unrealSpeedy and @JWvdV, I understand that you run N100 setups (I guess with HDDs connected via USB hubs). How far can they be pushed? What would be the maximum number of HDDs, and how many HDDs per USB port? Would you have any estimate of how many nodes that are still gathering data one N100 can serve, and how many HDDs with nodes that are already filled? Hope you don't mind me continuing this subject.
I’d expect an N100 to comfortably run a dozen nodes on Linux, even with 16GB RAM. But I’m always interested in hearing about others’ setups.
I would say people with small and large setups have different priorities though: even if an N100 config with a dozen USB HDDs wins a TB-per-watt contest… someone like Th3Van probably doesn’t want to be maintaining ten of those setups to scale to his level. Space, cooling, cabling, reliability and overall ease of management make JBOD enclosures win when you have 100+ HDDs and the compute to run nodes for each. Even if you end up paying for a few more watts every month…
I rethought the setup, because I was struggling to find a good power adapter for it (they are all out of stock), and there is the problem of powering the drives. The mobo offers one power port for drives, but I don’t know how many drives it can power.
I ordered the mATX version with 350W ATX PSU, so I would have enough power for all drives I can put on it.
For $230 I got the AsRock N100M mobo, the Asus Prime AP201 case that I posted, and the Deepcool PF350 PSU. I’m also waiting for confirmation on 32GB of Samsung RAM; the model is the one listed on the AsRock QVL page - $90. I also have a good offer for those two Exos 22TB drives. I have some spare M.2 drives, and I think I’ll use one for the OS and databases.
So… we’ll see in February what the final setup is and how much it draws. I’ll wait on the other drives until these two fill up, maybe, or until the next good offer. I don’t plan to use a VPS, but I can link through WireGuard to some other routers and get more IPs.
I’m pretty happy with the price, because I don’t buy used hardware and the next option for me would have been a Synology DS923+, at double the price. I think I have one of the most expensive setups around here dedicated to Storj, excluding datacenter-level setups like Th3Van’s.
I have to admit @snorkel that sometimes I get the feeling that some of those “datacentre level setups” are in reality not that expensive; however, I would still happily wait for precise numbers in terms of power consumption. I believe this might be an interesting comparison. I have to admit that this N100 and your choice of components is really cool. Thank you for sharing all the information above.
Based on the CPU usage on my NAS box, and assuming Passmark’s CPU Mark numbers meaningfully reflect storage node CPU workloads, I’d fully expect the N100 to be capable of running ~50 nodes in terms of pure computational capacity. A bigger problem becomes the amount of RAM supported by the N100; even 32 GB might become a bottleneck for that many nodes. And it looks like it only supports 9 PCIe lanes.
Oh I didn’t even notice that… and the Asrock MB suggested up top only has one x4 PCIe slot that’s x2 electrical? So 50 drives through 2 lanes = 100% IOWAIT
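For a rough sense of how short an x2 slot falls, here’s a back-of-envelope sketch; the per-lane and per-drive throughput figures are assumptions, not measurements:

```python
# Back-of-envelope: aggregate HDD throughput vs. a PCIe 3.0 x2 link.
# All figures below are rough assumptions, not benchmarks.
PCIE3_LANE_MBPS = 985     # assumed usable throughput per PCIe 3.0 lane, MB/s
LANES = 2                 # x2 electrical slot
HDD_SEQ_MBPS = 180        # assumed sequential throughput per HDD, MB/s
DRIVES = 50

link_mbps = PCIE3_LANE_MBPS * LANES     # ~1970 MB/s
demand_mbps = HDD_SEQ_MBPS * DRIVES     # 9000 MB/s

print(f"link capacity : {link_mbps} MB/s")
print(f"drive demand  : {demand_mbps} MB/s")
print(f"oversubscribed: {demand_mbps / link_mbps:.1f}x")
```

In practice node traffic is mostly small random I/O, so per-drive throughput is far lower, but filewalker and garbage collection runs can still pile up behind that link.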
The questions still remain open.
Can the N100, or any other of the N-series CPUs:
- power a server shelf via a PCIe card? If so, how many nodes still gathering data, how many already filled, and which vanilla filesystem is most suitable for such a task;
- power storagenodes located on HDDs connected via a USB hub? If so, how many nodes still gathering data, how many already filled, and which vanilla filesystem is most suitable for such a task;
- run as an x86 storagenode frontend for SPARC-powered ZFS metadata jazz? If so, what is the recommended system setup;
And:
- what is the total power draw of an N100-based system [separate numbers for the central unit (mb, cpu, ram etc.) and storage (hdds)];
- what is the power draw per TB stored if the system is pushed to its acceptable limits (rough sketch of the arithmetic I mean below).
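To make the last question concrete, this is the kind of arithmetic I mean; every number below is a placeholder to be replaced with real wall-meter readings:

```python
# Hypothetical power-per-TB arithmetic; all wattages are assumed placeholders.
board_w    = 15      # assumed: N100 board + RAM + NVMe at the wall
hdd_w      = 7       # assumed: average per spinning 3.5" HDD
hdd_count  = 12
tb_per_hdd = 22

total_w   = board_w + hdd_w * hdd_count          # 99 W
stored_tb = hdd_count * tb_per_hdd               # 264 TB

print(f"total draw   : {total_w} W")
print(f"W per TB     : {total_w / stored_tb:.2f}")
print(f"kWh per month: {total_w * 24 * 30 / 1000:.1f}")
```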
Can’t see why not. Assuming 45 drives with a total bandwidth of 8GB/s, a MoBo with two x4 PCIe slots (one for an NVMe drive, because you’re not going to have enough RAM for caches) and a PCIe gen4 card would do. One node per drive, and you could potentially run 45 nodes accepting uploads, each in a separate /24 network.
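A quick sanity check of that 8GB/s figure against a gen4 x4 card, using assumed per-lane and per-drive rates:

```python
# Does a PCIe 4.0 x4 HBA keep up with 45 HDDs streaming sequentially?
# Per-lane and per-drive rates are assumptions.
PCIE4_LANE_MBPS = 1970      # assumed usable MB/s per PCIe 4.0 lane
LANES = 4
HDD_SEQ_MBPS = 180          # assumed best-case sequential MB/s per drive
DRIVES = 45

hba_mbps = PCIE4_LANE_MBPS * LANES      # ~7880 MB/s
demand_mbps = HDD_SEQ_MBPS * DRIVES     # 8100 MB/s

print(f"HBA link  : {hba_mbps} MB/s")
print(f"45 drives : {demand_mbps} MB/s")
# Roughly a wash for sequential bursts; real node traffic is mostly
# small random reads, so the aggregate sits well below either figure.
```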
There’s one true file system for storage nodes: ext4.
USB is much more CPU-hungry. I’d be sceptical.
Sorry, not an expert in ZFS.
The CPU, RAM, and the controller card will be insignificant compared to 45 HDDs.
With 45 HDDs you can round everything outside of HDDs to zero and nobody will notice the error.
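Putting rough (assumed) numbers on “round everything else to zero”:

```python
# Share of total draw that is NOT the HDDs, with assumed wattages.
board_w = 20            # assumed: CPU, RAM, HBA, fans
hdd_w   = 7             # assumed: per spinning 3.5" HDD
drives  = 45

hdd_total = hdd_w * drives           # 315 W
system_w  = board_w + hdd_total      # 335 W
print(f"non-HDD share: {board_w / system_w:.1%}")   # ~6%
```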
Well, I was hoping the N100 could support a 12- or 24-bay external enclosure including already-filled HDDs, but 45, and in addition gathering data in different /24 subnets? Are you sure you are not being overoptimistic wrt FW and GC and the N100’s 4 cores?
With 45 HDDs you can round everything outside of HDDs to zero and nobody will notice the error.
Yeah, I do agree with you to some (significant) extent. However, it would still be interesting to have a value. My point was to compare an N100 or N-series-CPU-based “server” setup to second-hand enterprise XEON servers (most of them are equipped with 1000W to 3000W power supplies). In such a case the difference in overall long-term cost might be significant, as well as the carbon footprint.
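For example, a rough yearly electricity comparison; the idle wattages and the price per kWh are assumptions, so plug in your own:

```python
# Hypothetical yearly electricity cost: N100 build vs. second-hand Xeon
# server, excluding HDDs (the same in both cases). Assumed figures only.
PRICE_PER_KWH = 0.30     # assumed EUR per kWh

def yearly_cost(watts: float) -> float:
    return watts * 24 * 365 / 1000 * PRICE_PER_KWH

n100_w = 20              # assumed: board + RAM + NVMe
xeon_w = 150             # assumed: older dual-socket box at idle

print(f"N100 : {yearly_cost(n100_w):6.0f} EUR/year")
print(f"Xeon : {yearly_cost(xeon_w):6.0f} EUR/year")
print(f"delta: {yearly_cost(xeon_w) - yearly_cost(n100_w):6.0f} EUR/year")
```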
USB is much more CPU-hungry. I’d be sceptical.
Yeah; however, it would still be interesting to get more info on USB-hub-based configurations, as they seem to be the most popular. I will ping @unrealSpeedy and @JWvdV again, as they seem to be running such setups. Maybe they have some additional info to add to this discussion.
If we add no-go’s, then it would mean that we prefer specialized custom setups: not using what you have, but instead building something explicitly for Storj, like the whole discussion in this thread.
The recommended 1 CPU core per node is to prevent a high CPU load due to a slow disk or high traffic, and to prevent running too many nodes on weak hardware (because they could start to affect each other due to high CPU load). We know that usually the CPU is not used much, but there are always exceptions. If you decide to ignore this, that decision is up to you, but do not expect that it will work well.
We already did - this is one of the reasons to have this forum and discussions like this. If you are aware of the unpredictable income - this decision is up to you, but do not expect that it can cover any of your costs; that risk is on you.
We cannot promote nor recommend buying hardware or making any investments explicitly for Storj; it’s also against the green idea - use what you have now and what would be online with or without Storj.
You will get a real answer only if you actually prepare a setup like that. All I can say is that it is plausible.
Though, now I see I haven’t taken network bandwidth into account - a 1Gbps connection will be a bottleneck, and there aren’t enough lanes left to add a network card.
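For reference, the per-node share of a saturated 1Gbps link split across 45 nodes is just a division:

```python
# Per-node share of a fully saturated 1 Gbps uplink across 45 nodes.
LINK_MBIT = 1000
NODES = 45

per_node_mbit = LINK_MBIT / NODES
print(f"{per_node_mbit:.1f} Mbit/s ({per_node_mbit / 8:.1f} MB/s) per node")
# ~22 Mbit/s each if every node were busy at once, which real ingress
# rarely sustains anyway.
```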
There are setups that do not work. Why not let people know? You can mention “use what you have” in the same sentence!
: )
Yeah, absolutely. I’m afraid I do not have anything to add apart from what I wrote before.
A 1Gb Internet connection is fine. If Th3Van doesn’t fill half of it with 100+ nodes, your N100 will be OK.
You can fill any bandwidth if you use VPSes to access many subnets.
On the N100M mobo you have 1 PCIe slot with 2 lanes and 1 PCIe slot with 1 lane, plus the 2 SATA connectors, so at least 5 HDDs. I don’t even want to go beyond that.
If you want to use lots of drives (24 or 45+), you want SAS expanders, especially if you’re on a PCIe-starved system. Not to mention, it’s annoying to have to locate drives in such a setup if one needs to be replaced or reseated.
It is nearly impossible to make such a setup profitable from the start. Except if you build it for another use case anyway. Or from pure scrap metal.
Don’t know whether it’s only related to the USB interface, but I have 12 drives, 11 of which are assigned to STORJ nodes (one is the system drive). Two drives are also partially used for backup purposes (using Syncthing). A VPN server is running on it as well.
Essentially it’s always running at almost 100%, of which 80-90% is IO-wait. Because my online scores were suffering, I decided not to expand the system anymore. Some of the drives are SMR, which might be contributing to this problem. As the system fills up, the problem seems to be waning.
RAM usage isn’t a problem at all. The system has 16G. I installed zram, of which only about 650M is swapped in general.
It’s a setup I chose because I already had all the hardware. I wouldn’t choose it if I had a free choice to buy anything I wanted.
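In case anyone wants to compare their own USB setup, here’s a small sketch that samples iowait; it assumes Linux and the third-party psutil package:

```python
# Sample CPU iowait on Linux for a few seconds using psutil
# (third-party package: pip install psutil; the iowait field is Linux-only).
import psutil

for _ in range(5):
    t = psutil.cpu_times_percent(interval=1)
    print(f"iowait: {t.iowait:5.1f}%   idle: {t.idle:5.1f}%")
```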
This is already done:
Everything else will work - poorly, but it will work. I can only add recommendations like this to a prerequisites page. It may also warn about using exFAT, because it will directly produce a usage discrepancy due to its huge cluster size, and that can be prevented beforehand; it’s also not advisable to use this FS for any load in my opinion, because it is too fragile.
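To illustrate the cluster-size point with made-up but plausible numbers (the cluster size, average piece size and piece count are all assumptions):

```python
# Wasted space from cluster-size rounding; all figures are assumed.
cluster_b   = 128 * 1024        # assumed 128 KiB exFAT cluster
avg_piece_b = 64 * 1024         # assumed average piece size
pieces      = 10_000_000        # assumed number of pieces on the drive

used_b  = pieces * avg_piece_b
# each file is rounded up to a whole number of clusters
alloc_b = pieces * ((avg_piece_b + cluster_b - 1) // cluster_b) * cluster_b

print(f"data      : {used_b  / 2**40:.2f} TiB")
print(f"allocated : {alloc_b / 2**40:.2f} TiB")
print(f"overhead  : {(alloc_b - used_b) / used_b:.0%}")
```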
The usage of NTFS under Linux could be added there too, but it would limit the “use what you have now” case; besides, while it will work, it will be slow and may have a discrepancy issue too. So this may be added only as info for consideration, not a requirement.
Single BTRFS/zfs pools without proper tuning/caching will be slow and may have issues with used space, but they are much better than exFAT or NTFS under Linux. They will work and not break anything, except for the potential usage discrepancy issue and losing races more often. So this may also be added only as info for consideration, not a requirement.
Very nice. Maybe copy this to the web page, and it’s done.