Constant bandwidth traffic

You can always split out SATA ports… if you've got 4 SATA ports and each can be split into 4 more, sure, you won't have the full bandwidth… but the bandwidth isn't really the issue. After all, this is all sent over the internet, and people can rarely go beyond 1 Gbit.

I'll have to check; I have all the hardware already, I just didn't want to mess with it and cause it to go offline. After all, it's been running 24/7 since May… But I don't think this machine will handle splitting the SATA ports, since they're probably SATA 3 Gbit/s ports and not 6 Gbit/s.

You could run ZFS using raidz1, and then the data is split over more drives. Also keep in mind SATA 3 Gbit/s means about 3 Gbit of transfer per channel… so if you split it into 4 drives you should still be able to get a sustained transfer of about 750 Mbit from each drive, which is roughly 90 MB/s.

Should be plenty for spindle drives.
The real issue is that the HDDs are slow for what we are trying to do with them, so adding more helps spread the load.
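The rough math behind that split, sketched in Python; the link speed, drive count, and drive sizes here are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: bandwidth per drive behind a SATA port multiplier,
# plus raidz1 usable capacity. All figures are assumed round numbers.

def per_drive_mbit(link_gbit: float, drives: int) -> float:
    """Split one SATA link's line rate evenly across the drives on a multiplier."""
    return link_gbit * 1000 / drives  # Mbit/s per drive

def raidz1_usable_tb(drives: int, size_tb: float) -> float:
    """raidz1 spends one drive's worth of space on parity, so usable = (n-1) drives."""
    return (drives - 1) * size_tb

mbit = per_drive_mbit(3.0, 4)  # 3 Gbit/s link split 4 ways -> 750 Mbit/s each
print(f"{mbit:.0f} Mbit/s, about {mbit / 8:.0f} MB/s per drive")
print(f"{raidz1_usable_tb(4, 4.0):.0f} TB usable from 4x 4 TB drives in raidz1")
```

Line rate isn't the same as real throughput (8b/10b encoding and protocol overhead eat into it), so treat these as upper bounds.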

Anyways, I was right: it was in fact garbage collection, though it's back to normal, 340 GB free just now.

This machine for sure doesn't handle garbage collection very well.

Stress like that is what kills old, worn drives, or at least I've killed a few older drives that seemed fine… not sure exactly why it happens though…

Kinda like when people use non-uniform drives in a RAID array and then the slowest drive dies in no time at all… like if one loses a drive to random chance, has one drive that's slower than the others while the array is resilvering… and then yet another drive goes…

Garbage collection in config.yaml is #'ed out and set to 1 hr… that seems quite often if it slows down the machine that much… maybe there is more to it than that.
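For reference, the line being referred to is presumably the commented-out collector interval in the storagenode's config.yaml; the key name here is from memory and may differ between versions, so treat this as a sketch:

```yaml
# how frequently expired pieces are collected (commented out = 1h default)
# collector.interval: 1h0m0s
```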

Even if we said it was the original SATA bandwidth, it would be 150 MB/s, and even though you split it into 4 drives, AFAIK you still get 150 MB/s max to the drives. Sure, you would not get the full speed of the drives when reading from all of them at once, but you can still get 150 MB/s from any one of them… which is more than the internet connection can handle, most likely.

So I don't see a reason not to just split out the SATA ports for a Storj server, aside from if you wanted to do some heavy lifting locally. But with SATA expander splitters, or whatever they are called, one can cheaply and easily connect a bunch of drives without requiring a ton of ports to begin with… I forget if the 150 MB/s is per port or per pair of SATA ports…
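A quick sanity check of that argument, assuming a 150 MB/s shared link and a 1 Gbit internet connection (both assumed round numbers):

```python
# Compare a shared SATA link against the internet uplink, to show which one
# is actually the bottleneck for a storage node. All figures are assumptions.

SHARED_LINK_MB_S = 150            # assumed bandwidth shared behind the splitter
INTERNET_GBIT = 1.0               # assumed internet connection

internet_mb_s = INTERNET_GBIT * 1000 / 8      # 1 Gbit/s is 125 MB/s
bottleneck = min(SHARED_LINK_MB_S, internet_mb_s)

print(f"internet: {internet_mb_s:.0f} MB/s, shared SATA: {SHARED_LINK_MB_S} MB/s")
print(f"bottleneck: {bottleneck:.0f} MB/s, i.e. the internet link, not the split")
```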

I've been searching and I can't find anything that actually splits SATA connections though, only power. This Dell only has 2 SATA slots total; the other option would be to install a SATA PCIe expansion card.

Shouldn’t be, but there’s been a lot of test traffic the last few days so you’re probably seeing that.

I did take that into account; I've never seen it that consistent before.

You can check it yourself: in the SNOBoard, look at how much each satellite is using. stefan-benten and saltlake are for testing; the rest is customer data.

You can't compare it like that: a 2008 CPU and a 2015 CPU will have DRASTICALLY different performance at the same frequency and thread count. You can only do that within the same processor series, for example the Intel 5400 series.
But yeah, the old CPUs are more than enough for this, assuming they are somewhat good. I'm also running on a decade-old system and it's doing great. Using a 2006 system for my main PC, though, time for an upgrade lol.

Look for SATA port multipliers. Go SAS if you can, it’s more versatile.

On the original topic, I might have found the cause of the problem: I updated my router just prior to seeing this happening. I have changed the router's LAN NIC from paravirtual to the Intel E1000e kind. It seems to be good for now; the new version doesn't seem to play nicely with the paravirtual adapter, even though that one has better performance. I will know in a couple of days if this has resolved it.

I think I might have to just remove one node from the machine and add an SSD till I can find a budget upgrade for this. Most of the controllers I've found are in China, and I'm not about to get anything shipped from China because it will take far too long to arrive. I do have a hardware RAID controller, which is SAS, but it wouldn't fit in this compact Dell case; I need things super compact since there is hardly any room left. But thanks for the advice, I didn't think of it at first. I did find a SATA port multiplier, but it also comes from China…

You can do what you need immediately and also order from China, and when it gets there do another upgrade.
If you already have a SAS controller, you can get a PCIe riser cable and install it somewhere else, possibly outside of the case; that's how I have it set up. The cable is <$10, albeit from China too… Unfortunately, if that is too kludgy for you, then you have no choice.

Got myself 2x LSI 9217 HBAs I need to put in, which gives me 12i and 4e on 8087 and 8088.
Mostly using SATA drives for now… but had I known how easy it was to split SATA ports into multiple ports with SATA port multipliers, I might not have bought my old server.

The main issue I've had with using SAS is that it often seems to be a bit behind the curve in disk support. Some old SAS gear (maybe because it's often more state of the art when sold) is that much more outdated by the time I buy it… so I first ran into the 2 TB limit… yay…

Then I got a super nice 9260-16i RAID card with CacheCade, but I ran into issues hooking up some used 4Kn SAS drives I bought… and because I went to ZFS, I don't use the RAID functionality of the card. I hadn't really worked much with server gear before, so I'm learning the hard way lol.

@Storgeez well, I dunno, just saying it seems to indicate that… and it would sort of make sense for the satellites to have a certain distribution of data, for various reasons…
But it might also be just pure chance…

I will agree and disagree with this… of course new CPUs have new features, which will make them better or worse for certain things, but the fundamental processing power in benchmarks often ends up very consistent.

I recently spent a long time looking at CPU stuff, and it's one of the main takeaways I've gotten out of it… threads and Hz can be used to compare CPUs fairly accurately, in the widest possible sense, over the last decade… at least.

Of course one should always check the individual benchmarks when comparing, but I'm saying it's very nice to be able to gauge CPUs by heart… or I like to pick up such tricks when I'm spending the time on it anyway.
Always nice to be able to tell a hot air balloon from a jet fighter, even though each has its own advantages.
xD
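That "threads and Hz" gauge can be written down as a toy formula. The CPUs and figures below are made up for illustration, and, as noted above, the gauge ignores IPC differences entirely, so real comparisons should still check actual benchmarks:

```python
# Toy version of the "threads x GHz" rule of thumb for ballpark-comparing CPUs.
# Hypothetical example numbers; this ignores IPC, cache, and memory bandwidth.

def rough_score(threads: int, ghz: float) -> float:
    """Very crude capacity gauge: total thread-GHz across the machine."""
    return threads * ghz

old_server = rough_score(24, 2.6)  # e.g. dual 6-core/12-thread CPUs at 2.6 GHz
desktop = rough_score(8, 3.9)      # e.g. a 4-core/8-thread desktop chip

print(f"old server: {old_server:.1f} thread-GHz, desktop: {desktop:.1f} thread-GHz")
```

By this gauge the old dual-socket box looks roughly twice as "big" as the desktop, which matches the thread's point that it only roughly holds within the same era of hardware.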

Eheh, I thought about doing something like that, but now I'm getting rid of my riser card and going back to low profile, which gives me like 4 or 5 PCIe slots instead of two, if I can find a grate for the back… else I'm sure I can do some ghetto solution.
Also why I got a 4i and 4e version of the 9217. Then I get… I forget the numbers, but it's something like 1.5 GB/s+ over that external port alone. I even studied my block diagram to really hook into the stuff right, not that it was that useful on this old server… quite crude.

happy you figured out the issue… storg

@deathlessdd yeah, for performance reasons I will only run one node, but doing that does require some planning for the storage solution.
Would hate having it die years in, or having to do a graceful exit just to start a new, larger node.

Probably gonna have to use my RPi4 to take on the second node for now, till I figure out another way to get performance back. Working on installing the OS on an SSD with a boot USB, which would be better than how it's running now.

My setup is kinda rough around the edges… first try at Linux in over 5 years…
And running ZFS is also a first… I mostly just crashed Linux in the past… so xD
Tried installing the OS directly on the zpool, but that kinda blew up when I tried to "simulate" a disk failure.

So I moved the OS to the SSD where the L2ARC and ZIL are located… I'm getting the sense that was a bad idea, because the SSD doesn't run all that smoothly… in fact my spinning rust drives are having less backlog, at least according to netdata… I get 100 ms spikes often… not great for an SSD…

so will most likely have to do something about that at one point…

You can get SAS expanders, better than SATA stuff.

The SAS 2108 has no limit. I use a Dell H310 flashed to IT mode, no large-disk limit. I also screwed up buying a new server looking for a fancy RAID card, only to realize you cannot simply connect a disk through it; you need to obscure it behind RAID. Idiotic design. So I bought a much worse card for much more money to convert it.

I've always been learning the hard way, unfortunately. I bought this one a few years ago: https://www.ebay.com/itm/Dell-Perc-H310-SATA-SAS-HBA-Controller-RAID-6Gbps-PCIe-x8-LSI-9240-8i-M1015/173030573558?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2057872.m2749.l2649

TBH, I never really tried it; I always looked up Passmark scores for the CPUs I was interested in comparing. It just seemed straightforward. But if it works for you…

I'm sorry, I have to correct myself here: I used that to power the riser. It's a single-lane riser; I didn't use it for PCIe data. My mistake.

If you have a 4-port external connector, that's 3 GB/s, maybe 2.5 GB/s with overhead, plenty for more than 10 hard drives.
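The arithmetic behind that, assuming 4 lanes at 6 Gbit/s each (SAS2 line rate) and a guessed ~20% protocol overhead:

```python
# Aggregate bandwidth of a 4-lane external SAS connector, shared across drives.
# Lane speed is the SAS2 line rate; the overhead factor is an assumption.

LANES = 4
GBIT_PER_LANE = 6.0        # SAS2 line rate per lane
OVERHEAD = 0.8             # assume ~20% lost to encoding/protocol overhead

raw_gb_s = LANES * GBIT_PER_LANE / 8       # 24 Gbit/s -> 3.0 GB/s raw
usable_gb_s = raw_gb_s * OVERHEAD          # roughly 2.4 GB/s usable
per_drive = usable_gb_s * 1000 / 10        # split across 10 HDDs, MB/s each

print(f"raw {raw_gb_s:.1f} GB/s, usable ~{usable_gb_s:.1f} GB/s, "
      f"about {per_drive:.0f} MB/s per drive with 10 drives")
```

Even split 10 ways, each drive still gets more than a typical spinning disk can sustain, which is why one external port can feed a whole shelf.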

I think my VM drive might be on its way out; with this stupid RAID card it's impossible to check SMART. I've ordered a replacement drive, and the router fix seems to be holding for now.

Probably something wrong with the setup, or compatibility issues, but I don't think you would need the L2ARC and SLOG unless you really need to squeeze the performance out of what you already have. Having enough RAM for cache should be sufficient. Especially if it's your first time doing it, it gets too complicated too fast.
If you feel like experimenting, I'm all for learning new stuff. Go knowledge!

Don't you have to format the drives to work with that RAID controller though? I want something where I don't have to do a lot of transferring data around; I just want simple, and I don't want my Dell to be all janky.

Didn't seem like much performance was gained either, though remoting into my Windows Server VM got a lot more snappy. I'm thinking it's the OS, the L2ARC, and the SLOG all running on the same regular crappy SATA SSD.

Yeah, I went with the 2308 chip because it is one of the later ones and the most recent LSI HBA I could find for cheap. Figured it might have stuff future me will want… but yeah, even the 2008 chip is past the 2 TB limit, I think. Then comes the 4Kn issue… and there is also something with SSD IOPS, I think…

Maybe that's what I'm running into with the 9260 I'm still using… but it was designed to run SSDs, so I doubt that's the issue. And even if I fix the issue, I'm switching to the HBAs in a couple of weeks, hopefully… so it's pointless to add more downtime and trouble for myself.

And yes, it is SAS expanders. Pretty sure you can hook drives directly to them if you are using SATA drives… not sure about SAS… but I don't see why not…
But yeah, learning this server stuff is so very different from consumer-grade computing… I should have gotten an old server decades ago lol.

Yeah, my PCIe will do maybe 5.8 GB/s because of my CPU… the max is like 6.4 GB/s, but those CPUs are like twice the power usage.
I plan on using one 4e SAS port to hook up to a DAS. I mean, if it's mainly for Storj, then the internet is the bandwidth limit, so meh… and 2.5 GB/s is like 25 HDDs when full.

A bit scared of the power usage though… when I bought my 5600 series CPUs I looked at their performance and wondered why they could equal most modern stuff in the consumer market at the time… the answer is POWER SAVING… this beast has one gear. I really tried to make it conserve power when not being used much… but it's like 10% saved… so meh… kinda just gave up on it and went for performance… lol, at least my CPUs are the power-saving type…

When I looked at it, I picked the 5600 series because the E series was basically slower even though it was newer, at least looking at what one could get. Even this old server can reach like 17k Passmark on the CPUs… pretty impressive… of course those CPUs eat like 3 times the wattage or so…
But at least I know I can cheaply upgrade if I need to lol…

You need a format to hold any data…

I don't know about IOPS; I see references to several hundred thousand IOPS, but I've been using mine with the 4Kn version without issues. I don't know if that even has to do with the controller.

SAS expanders work only with a SAS controller; devices can be either SATA or SAS, same as with the controller.

Same here, they're so much more than PCs!

I can't say for sure, but I think at 32 nm they had power-saving functionality, not sure how much exactly. I'm on the 5600 series now and I have some settings in the firmware, but the board uses so much power that it's practically meaningless to save 10 W on the CPU; the whole server draws between 100 W and 200 W when idle. So I didn't explore too much in that direction…

Looks like I solved the 10 Mbps issue; it was a router hardware incompatibility issue.

Actually, I didn't format my drives on the RAID controller… the LSI controller will run them straight through when you create a virtual drive in RAID0 on one drive only… of course it still sort of writes them as RAID0…
But everything aside from maybe SMART and other semi-useful stuff is choked off or hard to find.

4Kn is a sector size; it improves IOPS, but yeah, it should work… which of the 4Kn sector sizes does it work with? Because there are something like 3 or 5 of them. I just figured I wanted the newest chip if possible, for the things I cannot think of… I mean, why would I buy an older version if the newer version costs almost the same? But yeah, maybe I bought the newer chip for no reason; then again, I'm sure it's on a newer process node and thus more energy efficient xD.

I mean, you can split a SAS port into 4 SATA ports by cable, no special gear needed, and to my knowledge that should also work on SAS expanders. I think deathlessdd said he couldn't hook his SATA directly to the controller, or maybe it was SAS he wanted; I just said that to note that since one can do it with SATA, one should be able to do it with SAS… but SAS cabling… that's a hell in itself.
Though I will say I tend to agree with the YT channel Art of Server when he says to get the 8087 and 8088 connectors, because they are much harder to wreck.
Not sure if there is any advantage to the other, newer cables and… plugs, adapters, connectors… let's go with connectors…

Yeah, I went and took a look at the old spec sheet for the 5600 series; it actually claims they can run at a 70%-or-more reduced energy state… though the caveat seems to be that one has to give up the turbo clock option on this family of CPUs to get from C4 to C6. Didn't get it done; decided to save it for a rainy day, for when I shut down the machine anyway…

Also, a big problem is that my damn fans take like 40 watts and are stuck on full speed… but I might take apart the server today to maybe put in my HBA and look at my RAM placement; I dropped down from 1066 MHz to 800 because I placed a couple of modules wrong. Maybe I'll try to reapply new thermal paste; I haven't done that while I've had the server, and since it was bought used, odds are it hasn't been done since it left production. Also, with my tinkering in the BIOS in the past, I might have disabled power management (enabled Intel C-state), which only supports C4 but then allows my CPUs to turbo boost…

Server stuff sure ain't easy, that's for sure… also, I noticed that I might be running a BIOS setup that is more suited for single-CPU systems… and then there is the question of just how many hours one really wants to pour into an old server that I most likely won't keep for too much longer…

Auto-negotiation can sometimes go crazy; been a while since I've seen it though…
Good that you figured it out.

Sorry, I forgot to mention that I knew you have to format the drives. What I meant to ask is whether this RAID card requires you to format drives to use them with that particular card. If I wanted to just move some hard drives over to a RAID controller for better support and speed, I wouldn't want to format the drives and have to transfer the data back over.

Yeah, the whole IO thing with moving hardware around can be a pain… lol. Sometimes one just needs to format for the hardware… but usually that's only if it's very different… like multiple generations apart, or after a new paradigm.

I have moved HDDs from my LSI 9260-16i to an external USB and back again without issue… but I do need to add them individually as a virtual drive in RAID0…