Though I'll grant that a higher bus speed does bring some decrease in latency, which I assume is part of why PCIe 4.0 is faster.
However, PCIe 5.0 has already been announced, along with new architectures that put the memory controllers on the memory itself, which could give us TB/s (PC1000000, so to speak) RAM bandwidth options if the RAM were fast enough… so upgrading just for the sake of getting PCIe 4.0… sure, if one really needed it.
I don't think the inherent latency of the PCIe bus is high enough that I'd actually feel a difference… it's a bus, basically just a row or rows of connectors / traces,
so it really comes down to the signalling rate over the bus, which, granted, might be noticeable going from something like 100 Mbit to 1 Gbit… but I'm not 100% sure… it sure was on the network, though: I hooked a local gamer up on a 100 Mbit connection because I figured he wouldn't notice, but apparently it creates lag because of the lower rate.
However, if I look at my 10-year-old SATA controller, it can do sub-1 ms latencies, and so far it's been the SSD chips that have set its limits. So while I don't doubt your uplift in performance, I don't think it's due to the PCIe bus itself; on a modern system, RAM latency will be higher than that.
My server is a dual Xeon E5630 box with 5 PCIe 2.0 x8 slots, running two HBAs for a total of 12 internal and 4 external ports plus the SX300, and then I've got 6 onboard SATA ports and about as many mini-SAS ports, which I haven't even looked at because I didn't think they'd support 2 TB+ drives… but surprisingly enough the onboard SATA controller does… maybe it gets help from the HBAs though… I dunno… seems weird…
I've got 48 GB of RAM populating 12 slots, I think out of 18, to keep my RAM frequency as high as possible. It came with 24 GB, and I think I paid something like $2 per module when I upgraded… might have been $4 though… I can't remember, aside from it being ridiculously cheap.
Much like the server as a whole has been… the main cost has been power consumption, and even that was partly a measurement issue with my wattage meter caused by a power strip's noise / surge filter… the server pulls about 300 watts, roughly 1/3 for cooling, 1/3 for disks, and the remaining 1/3 for CPUs, RAM, motherboard and other attachments.
The only SSD you can get today that should have lower latency than mine is an Intel Optane 900P, and maybe 1 or 2 other cards, but at least one of those needs something like 150-200 watts just to run. The SX300 actually outperforms the Optane 900P in some areas, such as random write IOPS and bandwidth, which is something the Optane series isn't great at.
If anything my RAM is the real bottleneck… but that's also part of why I wanted an SSD this fast… my SSD, which is my L2ARC, is now basically 1/3 to 1/5 the bandwidth and about 3x to 5x the latency of my RAM, meaning my system effectively has 1.4 TB of persistent memory… which is something I wanted to try out.
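In case anyone wants to replicate the L2ARC part, this is roughly what it looks like; a minimal sketch with a made-up pool name and device path, not my exact commands:

```python
# Minimal sketch of attaching a fast PCIe flash device as L2ARC (ZFS read cache).
# The pool name "tank" and the device path are assumptions, not my actual layout.
import subprocess

POOL = "tank"            # hypothetical pool name
CACHE_DEV = "/dev/fioa"  # hypothetical device node for the PCIe flash card

# "zpool add <pool> cache <device>" adds the device as an L2ARC vdev; reads that
# miss the RAM-based ARC can then be served from flash instead of spinning disks.
subprocess.run(["zpool", "add", POOL, "cache", CACHE_DEV], check=True)

# Check how the cache device is filling up / being hit.
subprocess.run(["zpool", "iostat", "-v", POOL], check=True)
```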
I'm not running an IBM system, it's just an amalgamation of affordable parts… aside from the HDD trays… those I paid too much for, but I didn't know better at the time and had already bought the server. Today I would have gone with a Supermicro instead of a Chenbro chassis, mainly because the bays are so much cheaper, maybe because they're more popular or whatever…
Chenbro hasn't impressed, but the Tyan mobo has been epic; it's an S7012.
It's been amazing and all the options it has are very nice… even if I wouldn't have minded being able to push the whole lockstep-computing thing a bit further for HA purposes… not that I'm running it that way anyway… it can do hot-swap CPUs even if it isn't really built for it… in theory it should work, at least… it seems to be a very HA-minded setup… AFAIK it basically has a backup for everything, I'd just need a dual PSU and such to really run it in full HA mode.
Every damn trace and connection on the mobo (the RAM, NICs, CPUs, HDD controllers, QPI and PCIe connections) can be broken, because in every case there are multiple controllers and multiple traces, and with the right BIOS configuration one should be able to literally pull any chip or cut any trace without the server even missing a beat.
It's the first server I've bought, so I kinda wanted an old junker I could experiment on, also because it's exposed to the wrong conditions, like humidity, and last winter it hit minus 20 Celsius because the room is unheated…
Not really a place I'd want to put a new $10k server when I get one… I'm thinking I want an IBM POWER10 machine as my step up once my server room is a bit more finished and has more environmental controls; then this old box will just be a NAS with one or two 36-bay DAS units connected…
RAM bandwidth though… that's really where this setup is limited, even if I've got two CPUs' worth of memory bandwidth. That's also something you have to take into account: even if my RAM seems slow, I'm on a dual-socket board.
Any single-socket board would have to be twice as fast per channel to even compete. Take memory bandwidth: sure, my RAM is only PC3-10600/10800 or whatever, but I've got 3 channels of that per CPU, 6 channels total, giving me an aggregate memory bandwidth of more like 60 GB/s. Of course the latency doesn't get better from extra channels, but then modern memory has worse latency anyway…
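Back-of-envelope version of that, assuming PC3-10600/10800-class DIMMs and the usual 3-channels-per-socket layout on these Xeons:

```python
# Rough aggregate memory bandwidth estimate for this box (assumed numbers, not measured).
per_channel_gbs = 10.8   # GB/s, theoretical peak for one DDR3-1333-class channel
channels_per_cpu = 3     # Westmere-EP Xeons have 3 memory channels per socket
cpus = 2                 # dual-socket board

aggregate_gbs = per_channel_gbs * channels_per_cpu * cpus
print(f"theoretical peak: ~{aggregate_gbs:.0f} GB/s across both sockets")
# -> ~65 GB/s combined; real-world is lower, and each CPU only sees its own
#    ~32 GB/s locally, with remote accesses going over QPI.
```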
This machine is a monster. It might not be a processing monster, but it does processing fairly cheaply, and it still has 8 cores and 16 threads across the two chips while I only paid $5 per CPU…
And I'm confident I could hook up 100 HDDs to this without any issues now… might need another HBA though.
Hell, I could hook up 256 HDDs if I wanted. Of course at that level the bandwidth of the machine becomes a critical limitation, but as long as one can live with some 40-80 Gbit of total throughput it's not really a big problem…
Of course I couldn't hook up only SSDs; that would eat my bandwidth too quickly… I mean, the one I hooked up now is 20 to 30 Gbit of bandwidth on its own xD, while it takes 15-20 regular HDDs to reach those speeds, and one never wants the SSD or the HDDs running full tilt for extended periods anyway…
So at 20-30% utilization on each drive this is a perfect little controller for lots and lots of drives (rough math below)… I've been trying to run some VMs on it, but it really could use more RAM… 48 GB is too low… and the only upgrade worth considering is going for the server's 288 GB max, which isn't cheap… so that's most likely never going to happen… but who knows, maybe my new SSD fixes my RAM issue.
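The rough math behind that drive-count claim, using an assumed ~180 MB/s sequential rate per 7200 rpm disk (ballpark, not a benchmark of my drives):

```python
# Back-of-envelope check: how many HDDs fit in a 40-80 Gbit/s host budget
# if each drive only runs at 20-30% of its sequential speed most of the time?
hdd_seq_mbs = 180                      # MB/s, assumed sequential rate per drive
hdd_seq_gbit = hdd_seq_mbs * 8 / 1000  # ~1.44 Gbit/s per drive at full tilt

for host_budget_gbit in (40, 80):
    for duty in (0.2, 0.3):
        drives = host_budget_gbit / (hdd_seq_gbit * duty)
        print(f"{host_budget_gbit} Gbit/s budget, {duty:.0%} duty: ~{drives:.0f} drives")
# e.g. 40 Gbit/s at 30% duty -> ~93 drives; 80 Gbit/s at 20% duty -> ~278 drives,
# which is roughly where the "100 to 256 drives" range comes from.
```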
Sorry for the rant.
TL;DR
Nah it’ll be fine…
It's an old junker because my server room is very subpar and I didn't want to ruin a new, very expensive server, so this is basically my datacenter's storage solution. The server really hasn't been that expensive either: 80-90% of the cost is disks, if I don't count time spent and power overhead.
Next time I'll get a 4U server so fans will be less of a drag.
Before I got a dehumidifier installed I had fan wires corrode until they literally fell off and the fans stopped running… I'm hoping to get that under control and verify that it's good enough before installing new, expensive gear, so it will be interesting to see how this winter goes.
@deathlessdd
What are you running that makes you think you can actually keep up…
My setup is old, but it's an old giant lol, even if the CPUs are the lowest-wattage ones possible, because it's supposed to basically be a NAS or similar storage-controller type box.
Like, which M.2 SSDs have you got in RAID 0… I'm sure mine can keep up with both of those combined in most cases; mine also runs an internal version of RAID 5 for redundancy and of course has ECC.
The max speeds are 2.6 GB/s read and 1.1 GB/s write, with 195k random read IOPS and 285k random write IOPS, at a latency of 92 µs read and 15 µs write.
I'm running sync=always on my ZFS pool, so I wanted writes to be as fast as possible, because this card is also used as a SLOG device.
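For anyone curious, the SLOG plus sync=always setup is roughly this; again a sketch with made-up pool and partition names:

```python
# Minimal sketch: dedicate a partition of a fast flash device as a SLOG (separate ZIL)
# and force synchronous semantics on the pool. Pool/device names are assumptions.
import subprocess

POOL = "tank"           # hypothetical pool name
LOG_DEV = "/dev/fioa2"  # hypothetical partition on the PCIe flash card

# Add the partition as a dedicated log vdev; sync writes land here first,
# so the card's ~15 µs write latency is what the application waits on.
subprocess.run(["zpool", "add", POOL, "log", LOG_DEV], check=True)

# Treat every write as synchronous, so everything goes through the SLOG.
subprocess.run(["zfs", "set", "sync=always", POOL], check=True)
```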
I think the PCIe 2.0 x8 bus is about 40 Gbit raw (roughly 32 Gbit usable after encoding), and it's hooked into the two CPUs over links with about 200 Gbit of bandwidth, so each CPU should be able to reach PCIe at that full x8 rate… which means the 5 PCIe slots together are roughly the max PCIe bandwidth into 1 CPU.
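Quick sanity check on those bus numbers (the PCIe 2.0 line rate and 8b/10b encoding overhead are standard; the ~200 Gbit CPU-link figure is just the one quoted above):

```python
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (8 data bits per 10 transferred bits).
lanes = 8
line_rate_gt = 5.0                # GT/s per lane
raw_gbit = lanes * line_rate_gt   # 40 Gbit/s raw per direction
usable_gbit = raw_gbit * 8 / 10   # 32 Gbit/s of payload per direction
usable_gbytes = usable_gbit / 8   # ~4 GB/s per direction

print(f"PCIe 2.0 x8: {raw_gbit:.0f} Gbit/s raw, "
      f"~{usable_gbit:.0f} Gbit/s usable (~{usable_gbytes:.0f} GB/s per direction)")

# Five x8 slots at 40 Gbit/s raw each is ~200 Gbit/s, which lines up with the
# ~200 Gbit/s CPU-link figure above, i.e. the slots together roughly saturate
# the path into one CPU.
```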