Slow SLOG device, now looking for a fast one

It has the usual dual-port SAS connector, the male part of SFF-8482.
The connector's primary objective is to provide a multipathing solution rather than bandwidth expansion. However, in normal operation with a proper OS + SAS controller, this type of connection does provide multipathing as well as bandwidth expansion. So no special hardware is needed.

I believe my tests were not as complete as they could be, but I've determined that the IOPS of this device is pretty close to what the spec sheet claims it should give.

Can't really say anything about NVDIMMs or Optane for that matter. I'm using old HP Gen8 hardware in conjunction with StorageWorks D2600/D6000 shelves (price on eBay is ~$550 for 70x 3.5" SAS-only HDD slots) for my STORJ setup.

Side note: almost any SAS2 controller (HBA) is pretty capable of running SAS and SATA devices without any issues. I have 48 HDDs attached to one HBA, some of them SAS, some SATA, no issues whatsoever.

well that's a 1000$ optane drive right… the cheap 50-100$ optane drive is the one at the bottom…
while the older flagship models of workload / io accelerator cards are not that far removed from optane's numbers… in most cases they are only 20-50% lower, and i just checked and power utilization is actually the same… so that's not going to be its saving grace.

cheapest model… only 280gb and costs about 500$
while i can get a 1.6tb that's not nearly as fast… but very close, and ofc a few years older, for less than 200$

and then the optane 16gb drives… well i've looked at those a few times… and what i keep coming back to is that they have to be for some other usage… since they excel at reads and low latency…

slog needs write iops and write speed… i'm looking for something around 2000mb/s writes and at least 200k random write iops, preferably at Q1T1
anything less and i might as well stick with the two sata ssds i'm running in a span-like setup right now…

they give me 13000 random write iops and about 1200mb/s, but i can still see i've got a random write iops limitation… so i want to make sure i push that in the right direction, which is why i want to get into the 200k+ range
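for anyone wanting to measure this kind of number themselves, a Q1T1 sync-write test can be sketched with fio; the device path here is a placeholder, and the sync/direct flags are meant to roughly mimic what a SLOG sees:

```shell
# Q1T1 4k random sync writes, roughly the SLOG workload pattern.
# /dev/sdX is a placeholder -- this WILL overwrite the device,
# so point it at a scratch disk or swap in --filename=testfile --size=4g.
fio --name=slog-q1t1 \
    --filename=/dev/sdX \
    --rw=randwrite --bs=4k \
    --iodepth=1 --numjobs=1 \
    --direct=1 --sync=1 \
    --time_based --runtime=60s \
    --group_reporting
```

the reported IOPS at iodepth=1 is the number to compare against the 200k target; consumer drives that advertise huge IOPS at deep queues often collapse here.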

and it has to be a pcie card… tho i like the idea of having some m.2 slots, the dedicated cards are most likely superior because of better space utilization… and it has to be a low-profile card…

so let's try to compare a bit, straight from the horse's mouth… i don't really like to trust manufacturers' numbers, but trying to show the performance of these things without extensive benchmarks is a challenge on its own, and i haven't had too much luck finding detailed and easily comparable data.

intel optane 905P price… a couple of grand most likely…

1.5TB, 550k random read and write iops at let's say 2500mb/s, and 4-20 watts

1.6TB, 200-300k random read and write iops at… well the data isn't in there, but it's like 2000mb/s for both read and write… about the same wattage, and latency is like 9-80 microseconds higher… so 0.009ms to 0.08ms. a card like this is maybe 150-200$ while the other one is 1500-2000$

and the small optane cards i've seen are for something else… i dunno what… maybe some sort of data processing, certainly not a write cache… a read cache maybe…

You are reading the data wrong. Think about the performance that I need for a storage node behind a bottlenecked internet connection. My plan is not to set up a datacenter!


the small optane cards are great… for some things and some use cases, and they surely have incredibly low latency, which is great for a lot of things…

but i want to run my slog and preferably my l2arc, which should support 100-200tb in the future… so with over-provisioning and such, that puts the sweet spot at around 1.6TB… only 6GB or so will go to the slog… even if the slog will take the main part of the write iops and speed, since the l2arc will fill rather slowly; it will on the other hand make good use of the read speed and iops.

if i decide to put in dual 10gbit nics then i will also need about 2400mb/s and a 12gb slog + whatever internal traffic there could be… so maybe 24gb / 32gb would be nice to have…
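that sizing follows the usual rule of thumb: the SLOG only needs to absorb what can arrive within one ZFS transaction-group window. a quick sketch of the arithmetic, assuming the default 5-second zfs_txg_timeout and the dual-10GbE ingest rate from above:

```shell
# SLOG sizing: sync-write ingest rate x txg window
ingest_mb_s=2400      # assumed dual 10 GbE, roughly 1200 MB/s each
txg_seconds=5         # default zfs_txg_timeout
slog_mb=$((ingest_mb_s * txg_seconds))
echo "${slog_mb} MB"  # 12000 MB, i.e. the ~12 GB figure above
```

the 24/32 GB numbers are then just that with headroom for two txg windows plus internal traffic.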

i know this may seem insane… but i just don't want to bother with an upgrade that i'd have to throw out in a couple of years… which is what i would get if i downsized this too much… and if i plan on keeping my server expanding… then this kind of stuff is what makes it scale into the ridiculous ranges.

at those scales… my mobo will start to become the limitation :smiley:

There are also some pretty neat micron cards on ebay… these are at about 150$
100k random write iops and 700k random read iops, a pretty perfect card for an l2arc
also 1.6TB or 1.5TB, with bandwidth in the GB/s range

if one did use one of these micron cards… with 700k random read iops
but the highest i've ever seen out of my arc is 500k… would that mean the ram would limit the io accelerator card, or would it simply more than double the iops the system can perform…

i suppose i should be able to tell this from looking at my block diagram, and maybe a block diagram of my processors; pretty sure the RAM paths go directly into the processors, as their memory controller is on there… but i don't know if the data from pcie goes through the ram before entering the caches of the cpus.

i guess that's really my question… does data from pcie have to go through the ram before hitting the cpu caches?
if anyone knows that offhand… seems like a pretty specialized question… relating to cpu electron magic xD

an hour later
found this in one of the product sheets… of a top-tier contender:
"direct access with host processor as memory"

sorted out something like 7 of 10 top contenders now… apparently it's not always standard that the cards have ECC and RAID-like features across the nand chips, so that they can lose multiple chips, and even have bit errors in the remaining data, and still function at 80-90% of full speed, internally remove all the errors, restructure the device from over-provisioned capacity, and rebuild their internal array, all while running…

it is ofc a must to have such features inside the device, which might have led brightsilence to think me mad early on… those are ofc features of such single-card io accelerator data integrity / endurance / performance models

however, capacity models don't come with these features… this is also partly why these devices are so often expensive… because a device with 512gb of flash ends up offering 320gb of capacity to the end user… which is why the numbers are always kinda weird when comparing them to regular flash storage.

anyways, i assumed it was self-evident that a device like this would have integrity-related features in the device itself, and thought @BrightSilence mad for wanting to buy two cards and thus waste a very valuable and limited number of pcie slots.

so in fact we were both agreeing on the same thing… tho neither of us was aware of it lol
oh the irony

I see what you mean, but you should still keep in mind that if you rely on a single device, it can protect against NAND failure or RAM failure or failure of whatever storage medium it uses. But there are other things on those cards that can fail, like the controller, or the capacitors on devices that rely on RAM and writing back to NAND on power loss. The storage medium can be as redundant as it wants, but if the controller fails, none of that is accessible anymore, and you can't just move those chips to a new card to get the data. Single devices always have such limitations in redundancy. This is probably why many don't bother building these features in: people want to use multiple devices to protect against those failures anyway.

true, but again i can pull the device and it won't matter; the system isn't really relying on it except in that small chance of a power failure, and to boost performance. thus it requires multiple things failing at the same time, or the system or the administrator being unaware of it… i'm still running on one PSU

there are so many other systems less redundant than this; sure, there is a small risk, but i trust the engineering of the card, and that my setup can manage whatever happens with the card in most cases of single failures.
with multiple failures, yes, i would have a problem… some of the older cards did also have multiple controllers, tho from the reviews and the manufacturers it sounded more like that was for performance reasons, and it posed problems… like it would show up as two storage devices to the system, and they would pull so much power that server cooling and power supplies would often be overloaded…

200-300 watt accelerator cards lol, so yeah it evolved into using just one…

i'm not fully aware of the architecture of the device i've chosen; the product sheet doesn't mention anything about its dram, but from its latency i have to assume it's in there…

the capacitors on these devices aren't your regular electrolytic ones; they will most likely be ceramic or similar dry / solid-state capacitors, which by nature rarely can hold much charge, but a 20-watt device doesn't need much power to have like 10ms worth of juice to purge its dram.
solid-state capacitors can break from overload and heat, i suppose… and surely a ton of other reasons, but in any practical sense they will never wear out… unlike electrolytic capacitors, which are a bit more like modern batteries in a sense…

i suppose if the accelerator was newer it might use an ultracap or such, which might greatly reduce the number of capacitors required…

the image of this card shows very well just how many caps they either had to put on, or put on for redundancy purposes.

oh yeah, and long-term i kinda want to try to do some clustering… but that's just so far away at this point… but then the whole server becomes redundant and all data can be restored in case of issues… also a factor i try to keep in mind while designing my setup.

so having 3x mirrored io accelerators in the 3 servers required for a practical working cluster adds a lot of extra cost to the setup, and the gain would be going from triple to quadruple failover protection when viewed from the slog.

not sure what the weakest link would be in my basic current idea of my future cluster, but i haven't researched or planned the design much…

what really worries me most about my setup is that if i move stuff around too much and accidentally touch some of the ram modules, it sometimes drops a ram module…

but it seems stable enough, so long as i don't touch the wrong module or whatever… haven't bothered to fix it… and it doesn't seem to have been a problem yet… but that is only a matter of time, i fear…
and with the current configuration, losing a full ram block… would be suboptimal… ofc then the SLOG will most likely have the data xD at least 99.9% of it.

i really should run a spare-ram config, but that would cost me 1/3 of my capacity, or maybe 1/4 to 1/5th if i rearranged my ram, but then i would lose 20% ram speed because of memory channel bandwidth limitations… so sync=always and a nice purpose-built slog device also gives me a bit more security in this aspect.
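for reference, wiring this up on ZFS is a couple of one-liners; the pool/dataset and device names here are placeholders, not the ones from this setup:

```shell
# force every write through the ZIL (and thus the SLOG, if one exists)
zfs set sync=always tank/storj

# attach a dedicated SLOG device to the pool
zpool add tank log /dev/disk/by-id/example-slog-device

# verify the property took effect
zfs get sync tank/storj
```

with sync=always, data acknowledged to clients has hit the SLOG before the txg flushes to the main vdevs, which is what covers the flaky-RAM scenario described above.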

ended up ordering the lenovo enterprise value io3 pcie card that i linked in post 16

mainly due to its excellent latency, performance under heavy load, throughput bandwidth, excellent redundancy features, and decent capacity… at 250$ + shipping.

The performance of the card, or of the base model on which it's built, is that of the SX300,
which is a capacity model; lenovo then added redundancy features, i guess, which to my knowledge don't exist on the base SX300 model… but do on the performance model called the PX600

most likely it's just a PX600 with a Lenovo sticker, which is also basically an SX300 card… now that i think about it, that makes much better sense… nope… hmmm

i guess lenovo really did remake it… most likely using the px600 model as a general guideline… i cannot see why somebody would remake it from scratch… but ofc i guess somebody might have thought they could make it cheaper… this was after all built and sold like 2 years after the initial release of the SX300

the PX600 comes in different capacities… from that i can extrapolate that they didn't use that build precisely…

however, if anyone finds these things interesting, here is the link to some benchmarking of the SX300, the base of the card i ordered as my slog / l2arc device, linked below.
it's an amazing card… from what i can tell…

finally got my rebranded SX300 card…
it seems really, really snappy, but then again it was also one of the lowest-latency cards i could find… netdata seems to agree… i think it must be dividing by zero or something like that…
because its numbers don't make any sense… maybe a reinstall of netdata will fix it…

haven't done a ton of benchmarks yet, but i cannot help being impressed that a freaking ssd literally makes my cpu faster at everything, even if with some tasks it's literally next to nothing.
but even stuff like cinebench r20 improved some 5%

if anyone has a fix for this, don't hold back…

and now for the humiliating part… this is a 4kn device…
while my main pool runs 512n… so that's swell; seems i'm back at the whole pool throwing errors for no reason again… but maybe it's because it runs 4k in some places and 512b elsewhere… so not that big a mystery… besides, i'm planning a restructure of my pool.

so i will try to leave the 512n sector size behind… since zfs allows me to run 4k on 512n devices, even if with an io penalty…

past me should really have caught this… but i've gotten a bit used to ssds mostly supporting 512n and upwards… apparently not when it's enterprise gear… or from the last decade :smiley:
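the sector-size choice in ZFS is the ashift property, fixed per vdev at creation time; a sketch with placeholder pool/device names of forcing 4K alignment on a new pool, and of checking what an existing one uses:

```shell
# create a pool aligned to 4K sectors (ashift=12 means 2^12 = 4096 bytes)
zpool create -o ashift=12 tank /dev/disk/by-id/example-disk

# check the ashift of an existing pool's vdevs
zdb -C tank | grep ashift
```

an ashift=9 (512-byte) pool on a 4Kn device is the mismatch described above; the reverse, ashift=12 on 512n disks, only costs some space and write amplification.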

sure seems like a great card tho… oh, and i also noted that it doesn't come with PLP, which was a bit of a surprise; i would have assumed that would be standard on stuff like this, but i suppose they assume stuff like that is taken care of at other levels, which leaves them more focused on the actual real workloads of the card…

i was mainly looking at the latency, the capacity, and that it was a commonly used model, but were i to rebuy it, i would make sure the sector size of my main pool was supported and that it had plp, since that would solve a few of my current issues… but i'm pretty happy with it…
even if i will eventually need to figure out some sort of PLP solution for my server setup; but the speed is also part of that… i mean, how much data can you lose in a few microseconds or whatever.

sure, that might be its RAM latency, but still, let's say a full millisecond is lost; in my use cases i doubt it will ever go over a ms in write latency, and that's the loss on a Copy-on-Write filesystem… in 99% of all cases it shouldn't matter… and in the last 1%, well, everything should still work… just a minute amount of data that never made it to disk because of a 1ms delay… or 5ms… so 5/1000ths of a second at my max internet of 500mbit is 2.5mbit, meaning roughly 312KB of lost data in a crash… uhhhh, sounds scary… compared to what the avg or top-tier storage nodes will lose during the first year… well… it's good enough, i'm sure.
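the back-of-the-envelope math there is just ingress rate times the flush delay; a sketch with the numbers from above (the 5 ms delay is the assumed worst case):

```shell
# worst-case data lost in a crash: link rate x write-latency window
link_kbit_s=500000          # 500 Mbit/s uplink
delay_ms=5                  # assumed worst-case flush delay
lost_kb=$((link_kbit_s * delay_ms / 1000 / 8))
echo "${lost_kb} KB"        # ~312 KB at risk, per the estimate above
```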

and i still can't believe it actually made my cpu faster… that alone is going to keep a smile on my face for days… that is just too cool… i didn't even know that was possible until recently… ofc to a certain degree… but at everything… really… so awesome

I'm a bit curious what your end goal was in buying this? Seems to me you're dumping tons of money into an old server when you could probably have gotten an updated server for close to the same price.

afaik i cannot get a much better ssd at this price point, this model even beats an intel optane in some aspects…

the server i got is old, no doubt about that, but most of what i've thrown at it are very affordable upgrades; if it was cheaper to buy something new with the same performance as a storage server, with some good options for vms, then i would have gone that route instead.

and this upgrade can be moved straight into a new server and be just as good…

my plan is to eventually build a datacenter, so i'm just trying to get some affordable performance. i was unable to sustain good latency when running my two sata ssds in parallel; this upgrade saved me two sata ports, gave me improved cpu performance, better l2arc performance, and more than doubled my l2arc size, which is recommended by people like iXsystems for working with large datasets. the next step would be hooking up a disk shelf over external SAS, and then maybe getting a bladecenter for processing purposes, if i decide that's something i want to focus more on… but at the moment i'm mainly focusing on storage… so the old server is basically just a big modular raid controller with extras…


Is this server the x5650 or something like that? I remember you talking about it before, but I don't remember the exact cpu it was running. But if you're worried about latency, it's probably time to upgrade to something a bit more modern with pcie 3.0 and possibly ddr4. It's really the architecture of new processors that speeds up everything.
That new ssd is only gonna feel fast for a little while, and then it's going to feel sluggish again.

I recently upgraded to pcie gen 4 and started running m.2s in raid 0. I am amazed at how fast it is compared to the old ssds I was running…

tho i will give you that there would be a certain decrease in latency from having a higher bus speed, which i assume is why pcie4 is faster.

however, pcie5 is already announced, along with the new architecture where the memory controllers sit on the memory itself, which could give us tb/s or PC1000000 RAM bandwidth options if the ram was fast enough… so upgrading just for the sake of getting pcie4… sure, if one really needed it.

i don't think the inherent latency of the pcie bus is high enough that i would actually feel a difference… it's a bus; it's basically just a row or rows of connectors / traces
so it really comes down to the frequency sent over the bus, which, granted, going from say 100mbit to 1gbit might be noticeable… but i'm not 100% sure… it sure was on the network tho; i hooked up a local gamer on a 100mbit connection because i figured he wouldn't notice, but apparently it creates lag because of the lower frequency.

however, if i look at my 10-year-old sata controller, it can do sub-1ms speeds, and it's thus far been the ssd chips that have set its limits; so tho i don't doubt your uplift in performance,
i don't think it's due to the pcie bus itself; on a modern system, ram latency will be higher.

my server is a dual xeon E5630 with 5 pcie 2.0 x8 slots, running two hbas for a total of 12i and 4e, plus the SX300; then i've got 6 onboard sata ports and something similar in minisas ports, but i haven't even looked at those because i didn't think they'd support 2tb+ drives… but surprisingly enough the onboard sata controller does… maybe it gets help from the hbas tho… i dunno… seems weird…

i've got 48gb of ram populating 12 slots out of, i think, 18, to keep my ram frequency as high as possible. it came with 24gb, and i think i paid like 2$ per module when i upgraded… might have been 4 tho… can't remember, aside from that it was ridiculously cheap.
much like the server has been… the main cost has been power consumption, and even that was partly a measurement issue with my wattage meter caused by a power strip noise / surge filter… the server pulls about 300 watts: about 1/3 cooling, 1/3 disks, and the remaining 1/3 for cpus, ram, mobo and such attachments.

the only ssd you can get today that should have lower latency than mine is an intel optane 900P,
and maybe 1 or 2 other cards, but at least one of those requires like 150-200 watts just to run. the SX300 outperforms the intel optane 900P in some areas, such as random write iops and bandwidth, something the optane series isn't great at.

if anything, my ram is the real bottleneck… but that's also part of why i wanted this fast an ssd… now my ssd, which is my l2arc, has basically 1/3-1/5 the bandwidth and about 3x-5x the latency of my RAM, meaning my system basically has 1.4tb of persistent memory… which i wanted to try out.

i'm not running an ibm system; it's just an amalgamation of affordable parts… aside from the hdd trays… those i paid too much for, but i didn't know better at the time and had already bought the server. today i would have gotten a supermicro instead of a chenbro chassis, mainly because the bays are so much cheaper, maybe because they are more popular or whatever…
chenbro hasn't impressed, but the tyan mobo has been epic; it's an S7012

it's been amazing, and all the options it has are very nice… even if i wouldn't have minded being able to push the whole lockstep computing thing a bit further for ha purposes… not that i'm running it anyway… it can do hotswap cpus, even if it isn't really built for it… but in theory it should work… seems to be a very HA-minded setup… afaik it basically has a backup for everything; i just need a dual psu and such to really run in full HA mode
but every damn trace and connection on the mobo (the ram, nics, cpus, hdd controllers, QPI, pcie connections) can be broken, because in all cases there are multiple controllers and multiple traces, and with the right bios configuration one should be able to literally pull off any chip or cut any trace without the server even missing a beat.

it's the first server i've bought, so i kinda wanted an old junker i could experiment on, also because it's exposed to the wrong conditions, like humidity, and last winter it got down to minus 20 Celsius because the room is unheated…

not really a place i would want to put a 10k$ new server when i get one… i'm thinking i want an IBM POWER10 system as my step up when my server room is a bit more complete and has more environmental controls; then this old box will just be like a NAS with one or two 36-bay das units connected…

ram bandwidth tho… that's really where this setup is limited, even if i've got two cpus' worth of memory bandwidth. that's also something you have to take into account: even if my ram seems slow, i'm on a dual-chip board.

any single-chip board would have to be twice as fast to even compete on, say, memory bandwidth… sure my ram is like pc10000 or whatever, 10800, but i've got 6 channels of that across the two cpus, giving me a total memory bandwidth of more like 60GB/s. their latency doesn't get better, but ofc modern memory has worse latency anyway…

this machine is a monster. might not be a processing monster, but it should do fairly cheap processing, and it still has 8 cores and 16 threads, while i only paid 5$ per cpu… :smiley:
and i'm confident i could hook up 100 hdds to this without any issues now… might need another HBA tho.
hell, i could hook up 256 hdds if i wanted; ofc at that level the bandwidth of the machine becomes a critical limitation, but so long as one can live with some 40-80gbit of total throughput bandwidth, it's not really a big problem…

ofc i cannot hook up only ssds; that would eat my bandwidth too quickly… i mean, the one i hooked up now is like 20-30gbit of bandwidth on its own xD, while it takes 15-20 regular hdds to get to those speeds, and one ofc never wants the ssds or the hdds running full tilt for extended periods anyway…
so at 20-30% usage per drive, this is a perfect little controller for lots and lots of drives… been trying to run some vms on it… but i really could use more ram… 48gb is too low… the only upgrade worth considering is going to the server's 288gb max… but that's not cheap… so most likely never going to happen… but who knows, maybe my new ssd fixes my ram issue :smiley:

sorry for the rant

Nah, it'll be fine…
it's an old junker because my server room is very subpar and i didn't want to ruin a new, very expensive server; so this is like my datacenter's storage solution, and the server really hasn't been that expensive: 80-90% of the costs are disks, if i don't account for time spent and power overhead.
next time i will get a 4U server so the fans will be less of a drag :smiley:
before i got a dehumidifier installed, i had fan wires corrode until they literally fell off and the fans stopped running… hoping to get control of that and verify it's good enough before installing new expensive gear, so this winter will be interesting to see how that goes.

what are you running that makes you think you can actually keep up… :smiley:
my setup is old, but it's an old giant lol, even if the cpus have the lowest wattage draw possible, because it's supposed to basically be a NAS or similar storage controller type box

like, which m.2 ssds have you got in raid 0… i'm sure mine can keep up with both of those combined in most cases; mine also runs an internal version of raid5 for redundancy, and ofc has ECC

the max speeds are 2.6GB/s read and 1.1GB/s write, with 195k random read iops and 285k random write iops,
and a latency of 92 µs read and 15 µs write.
i'm running sync=always on my zfs pool, so i wanted writes to be the fastest, because this is also used as a slog device.

i think the pcie 2.0 x8 bus is 40gbit, and it's ofc hooked into two cpus with 200gbit of bandwidth, so each cpu may be able to access pcie at 40gbit… thus the 5 pcie slots together are the max bandwidth from pcie to 1 cpu

Well, I'm running some video rendering and video editing on them. These aren't enterprise grade, because I couldn't find anything enterprise that supported pcie 4, and they're much cheaper and more affordable than the SSD you bought. Dual Corsair MP600 m.2s, each 1tb; transfer speeds with both of them range from 8.5GB/s, peaking around 10.5GB/s. On the spec sheet, random read is 680k and random write is 600k IOPS.
My CPU is a threadripper 9970X, with 128 gigs of DDR4 ram

I love older hardware, but it really starts to show its age when you upgrade. My very first server was an x5650 cpu with 64 gigs of ram; then I got into the newer intel E7-4830 CPU, which was much faster. Then I switched to AMD; it runs circles around both of them together.

well, it's just a storage box, so i don't really need the horsepower…

there is no way your ssds actually bench those numbers… i mean, some of the highest random 4k reads / writes i've seen are like in the 40-100MB/s range… and that's high… no doubt your drives are top-tier consumer drives, since they are ranked in 9th place on the ssd list.

but most of those pale when compared to enterprise, for various reasons… this card isn't that old either, only from 2016…
can't really find any good benchmarks to compare them, but i'll try to run some on mine and see how it compares to the numbers of the mp600

i doubt m.2 can keep up… it's a real estate issue… this is a hhhl card and both sides are plastered in chips; so maybe with full-length m.2s, using both sides, one might get 50-60% of the way there in real estate alone… that's also a big factor; also this card is about the same price as yours…

i did consider m.2, but didn't find any that gave me the performance i was hoping for. i'm still in the phase of testing out this board; afaik the samsung 970 pro is the nr 1 m.2 consumer ssd, and that was my other choice, but it wasn't fast enough under high workloads…

one thing is having super performance in a short test; another thing is having good performance when the board runs at 90-100% of max load for 24 hours straight.
i have no doubt you can beat my card in sequential transfer speeds, but that wasn't really a consideration of mine. and sure, your iops seem higher, but will it do that when stressed… i doubt it… the random reads / writes with 4k blocks on the mp600 seem to be around 40mb/s
ofc i'm not sure if it's a 512 device, which would decrease that number by 8… but 40mb/s looks perfectly reasonable… i don't think the samsung 970 is above 70mb/s


I got mine because the new 3090 will be able to pull data right from m.2 drives. At least in theory it will speed up everything I'm doing.

But I'm curious what actual data rate your ssd gets compared to mine in the real world. Benchmarks are fun but don't really show real-world performance. I did some benchmarks; 4k is around what you said, 56MB/s. I was hoping in raid 0 I'd be getting better than this, but that's not really what I'm after. It's more about transferring big files from one drive to another, which is like cutting and pasting to the same drive. Scrubbing through videos is very fast; there is no loading. Mind you, my drives run at full load when rendering videos.


to increase your random read/writes you will need to run the drives in parallel, because when you stripe across both drives, one write or read operation is one operation on both drives… however, if you run them in parallel, then one write will only end up on one drive, and the next write will end up on the second drive, thus doubling your iops / random read / write bandwidth.

it's called a span on raid controllers and in windows; not sure if linux uses the same name for it, and zfs basically doesn't seem to name it anything, but you can still set it up… just add the drives individually to the vdev, pool, slog or cache (l2arc), and zfs will load-balance between them
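in ZFS terms that just means listing the devices without a mirror keyword; a sketch with placeholder pool and device names:

```shell
# two separate log vdevs: zfs distributes sync writes across them
zpool add tank log /dev/nvme0n1 /dev/nvme1n1

# cache devices are always independent, so the same applies to l2arc
zpool add tank cache /dev/nvme2n1 /dev/nvme3n1

# by contrast, "zpool add tank log mirror devA devB" would mirror
# the two instead of load-balancing between them
```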

another option that has been a standard go-to in enterprise is the mirrored setup… ofc this halves the capacity; writes are essentially the same, and it will use a lot more bandwidth because data has to be sent to both drives instead of just one of them.
however, the read speed and iops are doubled.

thus for stuff like video editing it might be good, and it's ofc as safe as a mirror setup can be…
so long as the hardware is maintained and monitored, ofc…

tho another downside is that one gets double the wear on the ssd chips…

if it was me, i would set them to run in parallel / span.
i did some testing of that on zfs and it worked quite well, tho it did seem to increase latency a bit… very little tho… in the near sub-ms range… i suspect that because it has to coordinate two drives, asking which one is ready causes a slight wait… but aside from that, iops and bandwidth are doubled.

which setup is best for your use case i'm not sure of; usually there will be a trend to use a particular setup for good reason, so if other video editors run a raid0 setup, then i'm sure that's the most efficient solution for your particular workload…

Q1T1 random 4k is basically hell for anything… the only thing that does it well might be RAM,
and it's not often needed… my system runs multiple vms and a 14tb storagenode, and i usually see 5000 reads/writes a second, with peaks of 150k in the ARC when my system parses my log script lol… so really, running Q1T1 random R/W is a fun test… it's like the rally race of hdds

it doesn't really make much sense for everyday use… but it was one of the measures i wanted great performance on, because i knew it was one of the weakest…

you know how old hdds stall when moving millions of small files? that's usually Q1T1,
and those speeds… well, it can take even a semi-okay older hdd down from 100MB/s to sub-1MB/s,
so it's not like our ssds are bad… i did find 1 benchmark of the SX300,
but i don't think the numbers are right… so i want to try and verify them myself… it claimed 33MB/s 4k random RW

which seems wrong to me, but i'll have to verify that…

some of the benchmarks i selected it on are here

and i eventually picked it due to this part:
"The ioMemory SX300 scored an average latency of 1.527ms when overprovisioned for best performance during the NoSQL benchmark, comparable to its sibling the PX600. Both Atomic drives scored among the best accelerators in this large dataset."

basically it means i can manage larger datasets more easily; it literally makes my processor faster… like it also states in the documentation. could have sworn i linked that above… but it doesn't seem like it…

  • Performance:
    • High-speed, low latency, consistent, and scalable I/O performance
    • Access latency can be as low as 15 µs
    • Up to 2.6 GBps/1.2 GBps of sustained sequential read/write throughput
    • Up to 215,000/300,000 random read/write IOPS using 4 KB data blocks
    • Integrates with host processor as a memory tier for direct parallel access to flash
I'm guessing the last point is why it makes my processor faster… it sounds a lot like what you were talking about with the 3090, just that this does it for the CPU.

I dunno if it's actually better than the ones you've got… I'm sure in some aspects it will be, and in others it's most certainly not.

I had a lot of parameters to account for when selecting it:
  • it had to be HHHL (half-height, half-length)
  • it couldn't pull an absurd amount of watts
  • I wanted it to be somewhat redundant, and it had to replace my two SATA SSDs
  • I wanted a popular product, so I could be sure drivers would work and there'd be future support
  • I wanted it over PCIe, to free up some HDD ports
  • and I wanted to experiment with whether this is actually viable for future expansion of my hardware farm

I could kick myself for getting a 4K-sector SSD though, when almost everything else runs 512-byte sectors, which would have increased my random IOPS for small files :smiley:

A stripe (RAID 0) makes multiple drives run in harmony; this increases capacity and read/write bandwidth, but Q1T1 IOPS stays about the same as one individual drive, since each small request still lands on a single disk.

A mirror will give you double the read speed and read IOPS, while taking double the internal bandwidth of the controller or PCIe; write speed and write IOPS remain the same as one drive.

A span will usually load-balance across drives, letting each drive's IOPS and bandwidth be used separately… though it might add a negligible bit of latency, and of course it shows up as two volumes to the system.
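The rules of thumb above can be sketched as a quick back-of-the-envelope calculator. This is just my own sketch; the baseline figures in the example are placeholders that roughly match one of my SATA SSDs, and real results depend heavily on controller, queue depth, and workload.

```python
# Rough model of how two-drive layouts scale a single drive's numbers
# (Q1T1-ish assumptions: striping doesn't help small random I/O).

def scale(layout, read_mbps, write_mbps, read_iops, write_iops, n=2):
    """Return (read MB/s, write MB/s, read IOPS, write IOPS) for n drives."""
    if layout == "raid0":
        # Stripe: bandwidth scales, but each small request hits one disk.
        return (read_mbps * n, write_mbps * n, read_iops, write_iops)
    if layout == "mirror":
        # Reads spread over copies; every write must hit all drives.
        return (read_mbps * n, write_mbps, read_iops * n, write_iops)
    if layout == "span":
        # Independent, load-balanced drives: everything scales (per volume).
        return (read_mbps * n, write_mbps * n, read_iops * n, write_iops * n)
    raise ValueError(f"unknown layout: {layout}")

# Placeholder baseline: one SATA SSD at 600 MB/s and 6500 random-write IOPS.
print(scale("span", 600, 600, 6500, 6500))  # → (1200, 1200, 13000, 13000)
```

Which, for the span case, lines up with the ~1200 MB/s and ~13000 random-write IOPS I see from my two SATA SSDs.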


How much did you actually spend on it? I see them on eBay for $400 to $900+.

$270 with shipping from Taiwan, I think, so it wasn't too bad for 1.6 TB; the price ended up being the same per TB as the most affordable new M.2 SSD I could find. I did think about getting a 4-slot M.2 card, which would have been cool, but those are at least $40 and then the M.2 drives on top… I also looked at other PCIe cards and found a lot of interesting options, but this model seemed the most useful for me, and if nothing else it's very general purpose… so I should benefit from it no matter what I decide to do with it.


I will assume that doesn't mean transfer speeds of 6 GB per second???
Because if it's actually 1 GB/s or less, then it's without a doubt not a sequential workload, and your RAID 0 is only a disadvantage. Another thing to keep in mind: if you actually fill the drive beyond maybe 40–60% (I forget the exact figure, and it can vary wildly across SSD technologies), the drive or drives will slow down substantially.

This is due to how QLC (quad-level cell) NAND, which stores 4 bits per cell, can also run in SLC (single-level cell) mode, which gives it greatly increased performance.
So as long as the drive has room to breathe, it will cache incoming data in SLC mode and later, when it has time, convert it to QLC.

Of course, say one shouldn't go beyond 80% full, to leave the drive space to redistribute data; that's our max. I know most SSDs these days reserve some space by default, because filling the drive completely can ruin it, but let's assume that's not done, just to be safe.

So if 80% is our max, and data in SLC mode takes up 4x the space it would in QLC mode, then a 1 TB SSD could at most cache something like 200 GB in SLC mode… but then it would have no room left to convert any of it, so that's unrealistic. If we split it 50/50, that gives about 100 GB of SLC cache before the drive has to start folding data back into QLC, and those internal processes can greatly affect performance.
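The arithmetic above, as a tiny helper (all the inputs are assumptions: the 80% fill ceiling, the 4 bits per QLC cell, and the 50/50 split):

```python
# Back-of-envelope estimate of usable SLC write cache on a QLC drive.
# Every parameter here is an assumption, not a spec.

def slc_cache_gb(capacity_gb, max_fill=0.80, bits_per_cell=4, split=0.5):
    usable = capacity_gb * max_fill      # stay under ~80% full
    slc_equiv = usable / bits_per_cell   # SLC data takes 4x the space of QLC
    return slc_equiv * split             # leave room to fold SLC back into QLC

print(slc_cache_gb(1000))  # → 100.0 GB on a 1 TB drive
```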

which may explain why it seems very inconsistent…

Terrible consistency

The range of scores (95th - 5th percentile) for the Corsair Force MP600 NVMe PCIe M.2 1TB is 272%. This is a particularly wide range which indicates that the Corsair Force MP600 NVMe PCIe M.2 1TB performs inconsistently under varying real world conditions.

I got that from here… it seems to be based on about 14k benchmarks.

Sadly the SX300 is listed there, but the numbers are way off, and the release date also seems totally wrong… not sure if they may have released a SATA version of it or something,
because the sequential figures are just plain wrong.
I'm not sure how to benchmark it in a useful way from the Linux terminal, but I might pull the drive tomorrow and run a benchmark on a Windows machine using the UserBenchmark software, so I can get some comparable numbers out of it.
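For benching from a Linux terminal, fio is the usual tool; a minimal Q1T1 4K random-write job might look like the sketch below. The filename is a placeholder, and be warned: pointing fio at a raw device destroys its data, so use a scratch file unless you mean it.

```ini
; q1t1-randwrite.fio — run with: fio q1t1-randwrite.fio
[global]
ioengine=libaio
direct=1                     ; bypass the page cache
bs=4k                        ; 4 KiB blocks
iodepth=1                    ; Q1
numjobs=1                    ; T1
time_based=1
runtime=60

[randwrite-test]
rw=randwrite
filename=/path/to/testfile   ; placeholder — a scratch file, NOT a raw device
size=4G
```

The reported IOPS and bandwidth from this job should be directly comparable to the Q1T1 numbers quoted in drive reviews.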

Its big brother seems to get about the same numbers, but again there's only one actual recorded benchmark, so it's not highly useful or accurate data, at the very least.

and then there is the previous generation

I forget what the whole ioScale, ioMemory, ioDrive naming means; it might just be the generations, but I'm very unsure about this…

Compared against yours it seems like a sad purchase, but I'm not convinced this tells the full story; I can tell I can't use that benchmark site to get any better results, since all the similar models seem to bench the same.

So yeah, the writing on the wall doesn't look too good for the SX300 compared to that… but I'm not sure it tells the full story; there sure seems to be a lot more for me to dig into… it's never easy.
Of course it doesn't help that this drive seems to be so fast that my netdata graphs for it are just all haywire.