New Storagenode build - Intel N100?

Ok, sorry for asking. Nevertheless, the last time I checked, it seemed most of the hubs had only a few USB 3.0 ports, the other ports were 2.0, and the power supply was usually rated significantly below 80W. I was also not able to find anything with 16 ports apart from the StarTech 5G16AINDS-USB-A-HUB (which is over $300). That is why I asked for a specific recommendation. You know, sometimes there is an exceptional product on the market, hence the question.

I think this is quite unwise on a single USB host port.

Besides, I changed my adapter for a higher wattage.

Again, that is why I asked for a specific recommendation, in case there is an exceptional product on the market. Cheers.

They are under revision right now, especially regarding the hardware requirements. We are very interested in whether the 1 core per node rule persists.

1 Like

If you want to support an external enclosure and a few more internal drives, look at something like a 9300-4i4e. The internal port can directly talk to 4 SATA/SAS drives (or more, with an expander), and the external port can get you into hundreds of drives if you daisy-chain enclosures.

1 Like

Or better, some i8.
I got that H310 with the P19 firmware (because P20 is the newest and I don’t trust it).

It’s about half the price and can take 8 HDDs. Or search for an i16, like

this one maybe, I’m not sure.
The important thing is that the controller has to support HBA mode (meaning it can pass the HDDs through as plain, non-RAID disks, i.e. IT mode).

1 Like

Yeah, once again, that’s why I drew attention to the PCIe slot of @snorkel 's choice, the ASRock N100DC-ITX. 1. The question is which controller is the best (in terms of price/performance ratio) and how many drives can be handled by the N100 (or maybe there is another, similar processor that might be even more suitable). 2. I would also like to stress that I would rather not abandon the USB hub route, as it seems more suitable for smaller setups. All in all, I am really wondering what @arrogantrabbit 's opinion is, and whether he is maybe willing to express himself on those topics … explicitly. Are you there @arrogantrabbit ? :slight_smile:

If you want a cheap motherboard, I would encourage you to do some research on N100 ITX NAS motherboards from AliExpress.

For example: https://www.aliexpress.com/item/1005006393122396.html

Compared to the ASRock motherboard, you get way more Ethernet ports (useful if you wanted to use it as a router), four additional SATA ports, and an additional M.2 slot. The trade-off is a smaller physical PCIe slot. It also takes DDR5 SODIMMs, so you get slightly better memory bandwidth.

However, you can buy M.2 SATA adapters based on the ASM1166 (~$20 each from AliExpress) to get up to 16 drives without having to use SATA port multipliers.

1 Like

I think it’s expensive overkill :slight_smile:. Buy used old enterprise stuff.

SAS controllers support both SAS and SATA drives.

Consider this, for example: LSI SAS 9207-8i PCI-E 3.0 Adapter LSI00301 IT Mode Card Host Bus Adapter | eBay (I think this is the exact one I use myself) along with a pair of 3.3 FT Mini SAS to 4x SATA Cable SFF-8087 to SATA Cord Hard Drive Splitter Cable | eBay to connect your SATA drives to the controller, unless your case has a proper backplane.
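Once the card is in, a quick way to sanity-check it from Linux is to look at which kernel driver bound to it and whether the drives behind it actually show up. Below is a minimal sketch, assuming a Linux host with sysfs mounted; host numbers, driver names and disk letters are whatever your particular system assigns:

```python
#!/usr/bin/env python3
"""Rough sanity check: which driver bound each SCSI host, and which disks
sit behind it. Assumes a Linux box with sysfs; host numbers, driver names
and disk letters are just whatever your system happens to assign."""
import glob
import os

# Drivers typically used by LSI/Broadcom SAS2/SAS3 cards running IT-mode firmware.
IT_MODE_DRIVERS = {"mpt2sas", "mpt3sas"}

for host_path in sorted(glob.glob("/sys/class/scsi_host/host*")):
    host = os.path.basename(host_path)              # e.g. "host2"
    with open(os.path.join(host_path, "proc_name")) as f:
        driver = f.read().strip()                   # e.g. "mpt2sas", "ahci", "megaraid_sas"

    # A block device belongs to this host if its sysfs path runs through it.
    disks = [os.path.basename(d) for d in glob.glob("/sys/block/sd*")
             if f"/{host}/" in os.path.realpath(d)]

    mode = "IT/HBA passthrough" if driver in IT_MODE_DRIVERS else "other"
    print(f"{host}: driver={driver} ({mode}), disks={disks or 'none'}")
```

If the card binds to megaraid_sas rather than mpt2sas/mpt3sas, it is generally still running RAID firmware instead of IT mode.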

PCIe is backwards compatible, so it will just work; with a small caveat that some old controllers expect the correct number of PCIe lanes to be wired: i.e. if you have an x4 PCIe port, all four lanes must be present, or all eight lanes in an x8 port. Some consumer motherboards like to wire fewer lanes to wider ports, and while most devices can deal with this just fine, I found that some controllers really don’t like that. This will manifest as the controller not initializing or otherwise not being visible. So check your motherboard specs.
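If you want to verify what the slot actually negotiated rather than trusting the spec sheet, Linux exposes the link width and speed in sysfs. A minimal sketch, assuming a Linux host; the PCI address below is only a placeholder, substitute the one lspci reports for your controller:

```python
#!/usr/bin/env python3
"""Compare a PCIe device's negotiated link width/speed to its maximum.
Assumes Linux sysfs; DEVICE is a placeholder address, not a real mapping."""
import os

DEVICE = "0000:01:00.0"  # hypothetical address; find yours with lspci
BASE = f"/sys/bus/pci/devices/{DEVICE}"

def read(attr: str) -> str:
    with open(os.path.join(BASE, attr)) as f:
        return f.read().strip()

cur_w, max_w = read("current_link_width"), read("max_link_width")
cur_s, max_s = read("current_link_speed"), read("max_link_speed")

print(f"link width: x{cur_w} of x{max_w}, speed: {cur_s} (max {max_s})")
if cur_w != max_w:
    print("fewer lanes negotiated than the card supports;"
          " check how many lanes the slot really has wired")
```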

Instead of ASRock, consider ASRock Rack products. Some of them have a massive number of ports onboard already, and the whole product line is better suited for servers. Or, again, look into old used server boards. Often you can get them with a CPU and RAM for under $100. Look into various hardware recyclers/surplus stores.

2 Likes

Right, when you wrote it, I was immediately reminded of that! :slight_smile: To be frank, this part about the controllers, expanders, HBA modes and daisy-chaining external enclosures is a little bit of black magic for me. :slight_smile: I have never owned a full-blown server before. :slight_smile:

Thank you very much for this hint. I just wanted to underline again my way of thinking and the reason I got interested in this thread, which is basically finding a way of connecting a second-hand server enclosure with 12 or 24 already-filled HDDs to a low-power CPU system. I would also like to add that my preference would also be second-hand enterprise-grade SAS drives, as they seem to offer the best bang per TB. I will read the rest of this discussion with interest. I also wanted to add that, as of now, I fully agree with my friend’s [ I hope he does not mind :slight_smile: ] opinion, I mean @arrogantrabbit 's opinion, that it might be hard to beat “used old enterprise stuff” even with the N100, although at the same time I have to admit that the N100 and similar CPUs look quite promising.

The N100 has a very hard to beat performance/W ratio. That’s its main advantage over other solutions. Personally, I don’t want to run more than 4 drives on it, and I don’t use VPS services. 4 drives on 4 cores is also within the ToS, and 4 drives are officially supported by that MoBo. If you look in the manual, they say you should use a 90W 19V power adapter for 4 drives.
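That 90W recommendation lines up with a simple back-of-the-envelope budget. The figures below are rough assumptions (typical 3.5-inch HDD idle and spin-up draw, plus a guessed number for the board itself), not measurements of this particular board:

```python
"""Back-of-the-envelope power budget for an N100 board with 3.5" HDDs.
Every number here is an assumption, not a measurement."""

BOARD_AND_CPU_W = 15.0   # assumed: N100 board + RAM + NIC under light load
DRIVE_IDLE_W = 7.0       # assumed: typical 3.5" HDD, platters spinning, no I/O
DRIVE_SPINUP_W = 22.0    # assumed: peak draw while a drive spins up
DRIVES = 4

steady = BOARD_AND_CPU_W + DRIVES * DRIVE_IDLE_W
peak = BOARD_AND_CPU_W + DRIVES * DRIVE_SPINUP_W   # all drives spinning up at once

print(f"steady state : ~{steady:.0f} W")   # ~43 W with these assumptions
print(f"spin-up peak : ~{peak:.0f} W")     # ~103 W; staggered spin-up brings this down
```

With those assumptions the steady state sits well under the adapter rating, and the worst case is only reached if all four drives spin up at the same moment, which is roughly why ~90W is the ballpark the manual asks for.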
I’m surprised to find that there are not many options for N100 MoBos. I looked at Asus, ASRock, MSI, Gigabyte, Intel and Biostar. I stay away from Chinese AliExpress stuff. I don’t have experience with Ali, but… it’s cheap Chinese electronics. You get what you pay for.
So, from the major manufacturers, Asus has one mini-ITX, but with only 1 SATA connector, and ASRock has this mini-ITX with the 19V adapter plug and an mATX with a normal PSU, both with 2 SATA ports. The others don’t offer retail MoBos with the N100, only mini PCs, which are too mini for my 4-HDD needs.
There are 2 other CPUs in the new N lineup, but I researched only this one.

1 Like

Yeah, I have not done any in-depth reading on the N100, but I think I fully agree with you about the “very hard to beat performance/W ratio” of the whole N series. The N100, as I indicated, can probably handle much more than 4 drives, especially if the drives are already filled. As of now I see four options:
1. N100 with drives directly connected to the MB;
2. N100 with drives connected via a USB hub;
3. N100 with drives connected via a PCIe card;
4. A mix of the above.
The task is probably to find the best solution for each of them. At first glance it seems simple, but usually the devil is in the details.

I also agree with you about Ali; however, at the same time, my stance is that sometimes the products offered there are not bad. The key is to do in-depth checking on them. As I indicated, I have to admit that I do not mind second-hand equipment. And personally, for two or four drives, I would probably go for a Dell E4200 with a docking station (2 x USB 3.0, 4 x USB 2.0, and 2 x eSATA, 128GB SSD, 5GB RAM, and a UPS in the form of its battery; the whole setup probably around $75 including S&H).

I have to admit that I “joined” the Storj “movement” because I would like to have a reason to own a server setup. Currently my dream is a second-hand Oracle ZFS Appliance, hopefully including, apart from an Intel server (running podman storage nodes), also a SPARC-based ZFS server (I have seen a deal on a T5-4 and really got hooked on it: 4 x SPARC T5 3.6 GHz (512 threads), 2 TB RAM and 4 x 800 GB SSD F80 Warpdrives). Thus I am reading like crazy every post about ZFS setups written by @arrogantrabbit to gain at least a slight edge … which of course is not easy … usually … he really does know what he is talking about.

I am very sorry, I was not aware that there is a requirement of 1 CPU core per node in the Storj ToS. I also have to admit that I share some of the concerns expressed above, and I do hope that we get some info about those planned changes to the ToS soon. To be honest, I really hope that maybe @elek might be instrumental in those ToS matters. I recall that he mentioned some large-scale operations in his public biography. Let’s keep our fingers crossed that he might have some strong arguments if a discussion on those important matters finally takes place at the Storj Inc. premises.

Take a look at this beauty! Just found it when poking around for a cheap case:

https://www.asus.com/motherboards-components/gaming-cases/prime/asus-prime-ap201-microatx-case/

All the panels are mesh all around, so there is very good airflow for a passively cooled CPU and a bunch of drives. You can even put 2-3 drives at the bottom and they will stay cool enough.

1 Like

I would go for the black, tempered-glass edition. Guys, would it make sense to combine an N100 as a frontend and SPARC as a backend? I guess the T5-4 could be a bit of overkill, but to be honest I believe it went for a very favorable price, so maybe a single T5, or a T4 or T3? Can we challenge those SOC 2 folks with such or similar setups?

It’s not so much about the hardware, it’s the SOC 2 certificate that matters. So if you have the certificate, you can (but why? AFAIK it pays less than the “consumer network”).

1 Like

My point is that if the network develops and we are still discussing Raspberry Pi-type setups as the reference ones, sooner or later more and more problems will materialize, leading to unnecessary tensions. Please do not get me wrong, I really do like the setup proposed by @snorkel, and even though I do not mind second-hand equipment, I do have some concerns related to its reliability.

Nevertheless, simply observing some statistics related to the network and the payment reports, and reading some of the posts, I am coming to the conclusion that something really weird is taking place in relation to the direction of the community’s development. To be more precise, I get the feeling that the part of the community that is really driving its development is significantly underrepresented in the public discussion, which makes all this not transparent enough and, again … simply weird. :slight_smile:

Looking at the prerequisites:

No RAM necessary :rofl:, SMR drives? No problem. Filesystems? No recommendations or no-gos (exFAT seems OK). Forum? Maybe? Core speed? No mention; “Windows 8 minimum” is a hint on this. NAS? Not even mentioned.

TL;DR: it should be reworked to mention some limits.

No wonder first-time users run into problems.

Yeah, the guidance and the stability of the Storj Inc. environment could be more explicit, no question about it. Probably also the technology and the user experience, particularly the ease of use, possibly performance and … carbon footprint :-). Let’s hope that maybe @elek and @Alexey will come out with some conclusions.

Leaving aside the SPARC T5-4 with its redundant 3000W power supplies (this was the setup to challenge @arrogantrabbit - a little bit, of course) - I suggest we focus on the numbers.

@snorkel, would you mind sharing some pricing info about your setup, and what the power consumption would be if we split it between the central unit (MB, CPU, RAM, etc.) and the storage (HDDs)?

I believe it might be interesting and useful to compare those numbers to the ones provided by @Th3Van here. What do you think, if I may ask?