Slow SLOG device, now looking for a fast one

I ran that benchmark and those numbers aren’t anywhere near what CrystalDiskMark gives me… it goes from 9 GB/s down to a “nice high” 1900 MB/s.
Not sure I trust that site very much.
But yeah, if you compare new tech to old tech, new tech always wins.

I wouldn’t trust userbenchmark’s comparisons; they don’t have comparable setups, and with SSDs a lot of performance depends on OS settings and other hardware. For example, I’m getting results pretty much as stated by the vendor from my HP EX920, yet they are quite a bit lower in some benchmarks on ssd.userbenchmark.com.

On the other hand, when talking about consumer, not enterprise, SSDs, I quite trust the categorization done by this guy on reddit; he does a lot of research.

i think you are benchmarking the dram cache when getting the 5 GB per sec…
i refuse to believe you can copy data onto the drive at that speed for any period of time beyond its 8 GB of dram… i think it was 8 GB for the 1 TB version

another odd thing: usually the speed of an ssd is defined by its number of storage chips, so the more capacity a drive has, the better the bandwidth gets, because it’s the chips / cells that share the bandwidth between them. and yet corsair claims the 500 GB model is just as fast as the 2 TB model, which has to be because they are just quoting the dram cache speeds.

i looked at other reviews, and they seem to be around 2 GB per sec


granted, they do list it as 2.5 GB/s, but i will call bs on that, because it’s 20 GB being copied to a drive with 8 GB of dram, so the dram will skew the result. at least that test seems to agree with the userbench test.
these numbers also seem to correlate with that.

but they are a couple of very, very nice ssd’s. top tier, without a doubt in the top 10 of the best consumer ssd’s on the market.

how does my old enterprise ssd compare… i’ll try to get some good data in the near future. i haven’t had much luck finding really great information on it aside from storagereview.com and the few results on userbenchmark, which might make me think i’ve made a bad choice in getting it…

but it was still cheap, and i’m sure it will turn out to have some advantages in the io department.
comparing old tech to new tech is difficult, ofc these are two very similar techs which helps a lot… but one cannot really compare a horse-drawn cart to a jet… they don’t really perform the same task, even tho in some aspects they could…
i knew buying old would come at a cost, and that if it was too old i would run into lots of issues.
all ssd storage today still really wants to be SLC, which is the fastest and most durable. i did consider getting an SLC-based ssd, but those cost… a factor of 5 or 10 more… or have greatly reduced capacity.
i’ll report back when i’ve done some proper testing…

i should point out tho that the mp600’s lowest sequential writes end up at 700 MB/s, i think it was, when it runs out of cache… which was after a 70-80 GB transfer. i would hope the SX300 doesn’t have a barrier like that, but who knows… and ofc this is quite difficult to reach in most real-world conditions. on top of that, if most multi-bit-per-cell ssd’s are filled beyond about 60% of capacity, one will most likely see yet another drop in performance… not sure exactly how that would look or when one would reach it in a load situation.
and these limits will most likely also apply in some way to the iops: at high iops under load the latency will basically just go towards infinity, like on every other storage technology when pushing its limits.
i think my ssd’s worst-case latency is 15 ms, maybe 30 ms, which is pretty good at 100% load

@Toyoo
well, there are nearly 15000 people that have run benchmarks, and it’s ordered by ranges, so one can see the upper and lower limits of the performed benchmarks and where the % of users land…

ofc this is an average-user site and needs to be easy to read, and the software easy to use, so there will be some large generalizations

but it’s fine. in fact i think their numbers look a bit inflated, because those are not sustainable speeds…

If you’re after sustained write speeds, then almost no consumer drive can offer that. Go for proper MLC/SLC drives. These are rare among consumer drives; the only ones I know of are Samsung’s Pro series. The MP600 specifically is “only” TLC. Or, from the enterprise market, Z-SSD or Intel Optane. See e.g. this Anandtech benchmark:

That’s probably true, but it’s not really relevant for my use case. I’m also running 2 of these in RAID 0, which increases the capacity and transfer speeds. It’s also around 4000 MB/s per drive for reads and around 2200 for writes. I need to re-bench it since I’m using the drives, but I’m still pretty sure it’s faster than the drive you got, even though it’s not an enterprise drive. Because of the limitation across the PCIe lanes you can’t really compare PCIe 2.0 to 4.0. I can’t really test it considering I don’t have 2 more of these to transfer to.
Also, you buying a used enterprise drive has probably shortened its lifespan even more, and my drive is brand new with a good warranty.

i wasn’t going for top speed, i was trying to find something that would give me great performance at high load. there will be limits to how much data i can push over the network and internet, which is the main use case for me, so 1.2 GB per sec means about 100% utilization of a 10 Gbit nic (10 Gbit/s ≈ 1.25 GB/s). if i thought i could get a new consumer ssd that met the requirements i set, then i would have gone that route… also, there is so much information to go through that at some point one simply has to pull the trigger, and i ended up with this… maybe it cannot do what i want, but it’s only a 4-year-old card, and it seems like it must have been exchanged on warranty and then sold,
because the total usage on it is like 200-300 GB combined read and write over its lifespan.
your drive also has a pretty impressive wear rating, or whatever it’s called… apparently the samsung 970 only gets 1.8 PB written before it’s done… yours is like 3 or 3.8 PB

read speeds maybe… write speeds, sure, for a time; it’s top tier. been trying to bench mine using dd, but i’ve remembered just how bad i am in the linux terminal lol… been using zfs so much i never got around to dealing with volumes in linux, and now i cannot get the partition to mount… it was also a bit weird that i had to reboot just to create a partition. apparently the whole ioMemory deal is that the ssd connects with the cpu and extends its memory capacity in some way… not really sure what that exactly means, but it seems to be why processing goes faster with some enterprise ssd’s… not sure if that is in consumer drives these days… but i don’t think so…
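for reference, the sequence i’m trying to get working is roughly this (just a sketch with placeholder names, /dev/fioa1 and /mnt/fioa are assumptions, and none of it has actually worked for me yet):

# filesystem on the new partition, then mount it
mkfs.ext4 /dev/fioa1
mkdir -p /mnt/fioa
mount /dev/fioa1 /mnt/fioa

# quick dd write test, bypassing the page cache and flushing before dd exits
dd if=/dev/zero of=/mnt/fioa/ddtest bs=1M count=8192 oflag=direct conv=fsync

# and a read of the same file, again without the page cache
dd if=/mnt/fioa/ddtest of=/dev/null bs=1M iflag=direct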
looks very interesting… sounds a lot like the thing you are talking about with the 3090.
i assume you also want to do some heavy processing on it, to have all this gear for it…
i wanted to set up something like golem on the server, and this should make it much better at dealing with large data sets… sadly i don’t have a gpu lol, nor the wattage to really support one… but general computation is also a thing :smiley:

@Toyoo the samsung 970 pro / evo was one of the drives i was trying to compare the SX300 with when i chose it… the data isn’t easy to read, but afaik the SX300 should outperform at least the samsung 970 pro in the aspects i wanted

You’re right, I mean you got something that will work fine for you. I needed the fast burst power for video rendering and video editing, so I needed the most speed I could get. But that’s once the 3090 comes out; I’m using a 2080 Ti, and I don’t believe it can do what the 3090 will be able to do for pulling data straight from the m.2.

Your ssd would probably handle 24/7 better for sure, but I’m looking at speed, not running 24/7. I got this much power so I don’t have to wait forever for everything. But I guess time will tell how well that drive does though.

well, i got it working, so very happy about that part lol
and i really hope it wasn’t a bad choice to buy it, because i’m not going to buy anything to replace it anytime soon… unless i resell it and buy something else…

10 GB/s is a lot… even for pcie 4.0. but at 10 GB/s you will also run into the cache limitation in what, 16 seconds or so… it was like 8 GB of data transfer before the dram was filled, that would be at 5 GB/s, then the drive does 2 GB/s for another 70 or so GB, so that’s 35 sec. so assuming max load, you get 37-ish sec of great performance before you slip into the 750 MB/s range

ofc that’s writes… i suppose reads are most important for your use case… how much ram have you got… because to my understanding, at least in the past, RAM has been the preferred route for video editing. ofc today modern ssd’s are sort of like ram… i mean 10 GB/s reads from your raid nvme’s is about what one of my ram modules can do…

ram is just so expensive…

Then this sounds like any pair of decent consumer drives in RAID0. No need to go Optane. Even my cheapish EX920 can do a sustained 500 MB/s.

that was why i didn’t mind that my card couldn’t go any faster… because it was unrealistic that i would use that for anything… optane has the best latency there is, which means your cpu will work faster because it waits less for data, which means you get better performance per watt used…

the card i bought has raid5 onboard… if i had bought an m.2 adapter i would have paid for that on top of the m.2’s, and with 2 x m.2 in raid0 i still wouldn’t have any redundancy on the data stored there… which is another important point since it’s also a slog, i.e. a device that holds the zfs intent log: synchronous writes that are not yet committed to the hdd’s, so they are permanently saved in case of a power failure.
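for anyone curious what that looks like on the zfs side, a minimal sketch (assuming a pool named tank; the device paths are placeholders, not my actual layout):

# option a: with a card that has its own internal redundancy, a single log vdev will do
zpool add tank log /dev/fioa1

# option b: with plain consumer m.2’s one would want the intent log mirrored instead
zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1

# zpool iostat -v then shows whether sync writes actually land on the log device
zpool iostat -v tank 5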

also, you cannot always be sure of compatibility across different versions of pcie… it can cause some issues, so i went with a card that matched the max of what my mobo would do, so i knew it would all run smoothly… i know this is not often an issue… but remember, if i bought a new m.2 today i would have to use an adapter card, which would go into a pcie 2.0 x8 slot, maybe 2.1, not totally sure what the version is… i think there are a few pcie 2.0 revisions, if memory serves… and then there are also at least two pcie 3.0 revisions and now pcie 4.0.
are you sure that would even work… and would it be stable enough to run 24/7 without fail… how would it react to failures… i mean this board cost 10000$ when it was released, you can be damn sure it will fight to stay alive, and it’s not like it’s a first-gen version either… the initial release was like 2010-2012, but back then it was sold by the pallet load or something sick like that… then in 2014 it was opened up for regular sales, down to smaller orders of say 100 cards,
and then this one was made in 2016 and apparently is part of a series that has been one of the most successful enterprise ssd’s made by sandisk, which to my understanding has been one of the biggest players in the ssd market.

buy consumer stuff… well i’ve seen how nice my old server is… i might just have gotten the bug… not that bug!! the enterprise gear bug xD

i’m sure i could have gotten equally good consumer drives… but this is kinda fun and educational

Nothing that a RAID10 can’t solve :wink:

how about write holes… how does raid10 do with those…

this drive also ofc has built-in ECC to fix that issue… ofc ECC will cost extra nand.
i think this drive actually has a total of 3 TB, but after over-provisioning, ECC and the onboard raid5 it’s down to 1.6 TB

the mindblowing thing is that there were 6.4 TB models one could get back in 2010-2012… O.o
so that would be like a 10 TB ssd… ofc one would have to fill an entire data center with them… and basically be shitting gold bars… but still… one could get an ssd in those ranges… wtf

A proper RAID controller with battery backup solves write holes.

so now my m.2 adapter pcie card needs a battery and a raid controller, and ofc ram in that case…
sure isn’t getting cheaper.
and that only solves write holes. what about the other bane of all raid that isn’t raid6, bit rot…
ofc one could do a 4-drive setup with 2 drives for storage and two redundant ones on a 4-slot m.2 card.

ofc the problem with that is that you would spend 4 m.2 drives and get the iops performance of 1 drive… which is why they went with ECC, i guess, because it was cheaper overhead to solve the bit rot issue

this is also kinda interesting…

it has the card as significantly faster than the rebranded lenovo datasheet from here

i doubt the cards are actually different… yet one is rated for 1.1 GB/s while the other has it at 1.7 GB/s,
and likewise for the iops: the lenovo datasheet is roughly a 1/3 reduction in both bandwidth and iops.

and when i was updating the firmware on it to get it to work, it turned out to be the sandisk firmware that even ibm, err, lenovo was using :smiley: there is also an hp model, if not many more from other brands,
all seemingly selling the exact same card.

i guess this is why. a quote from a storagereview article on the series:

Fusion-io has also done away with any external power connectivity on the SX300 cards, which was seen on the first- and second-generation models. The reason for this is older models could draw more power in higher performance modes, and some servers couldn’t function safely above minimum PCIe power spec. However, the current crop of servers on the market support much higher power demands, so Fusion-io included the ability to enable higher power modes through the slot itself.

which i had seen in the status details of the card, and i was wondering what that was… apparently i can put it in boost mode lol, which gives me a 50% uplift in performance but also costs like twice the power draw, and i might need more airflow before i’d want to do that… besides, i don’t really need it right now.

not sure if it’s because my server fans are worn or i need to clean something or turn something up to get more airflow… right now it’s at 37.3 Celsius, which is kinda on the high end given the ambient is most likely around 8 degrees C. the max temp of the card is 55, and during the day i think it hits 42-45, which is kinda too high imo

oh, and it does have PLP:
Powerloss protection: protected
i couldn’t imagine i would have gotten it without that… odd that they don’t even mention it in the datasheet tho

Adapter: ioMono (driver 4.3.7)
1600GB Enterprise Value io3 Flash Adapter, Product Number:00D8431, SN:11S00D8431Y050EB58T055
ioMemory Adapter Controller, PN:00AE988
Product UUID:8f616656-45e4-5109-a790-6f766ca59382
PCIe Bus voltage: avg 12.16V
PCIe Bus current: avg 0.65A
PCIe Bus power: avg 7.98W
PCIe Power limit threshold: 24.75W
PCIe slot available power: 25.00W
PCIe negotiated link: 8 lanes at 5.0 Gt/sec each, 4000.00 MBytes/sec total
Connected ioMemory modules:
fct0: 07:00.0, Product Number:00D8431, SN:11S00D8431Y050EB58T006

fct0 Attached
ioMemory Adapter Controller, Product Number:00D8431, SN:1504G0638
ioMemory Adapter Controller, PN:00AE988
Microcode Versions: App:0.0.15.0
Powerloss protection: protected
PCI:07:00.0, Slot Number:53
Vendor:1aed, Device:3002, Sub vendor:1014, Sub device:4d3
Firmware v8.9.8, rev 20161119 Public
1600.00 GBytes device size
Format: v501, 390625000 sectors of 4096 bytes
PCIe slot available power: 25.00W
PCIe negotiated link: 8 lanes at 5.0 Gt/sec each, 4000.00 MBytes/sec total
Internal temperature: 37.40 degC, max 44.30 degC
Internal voltage: avg 1.01V, max 1.01V
Aux voltage: avg 1.80V, max 1.81V
Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
Active media: 100.00%
Rated PBW: 5.50 PB, 100.00% remaining
Lifetime data volumes:
Physical bytes written: 222,360,877,352
Physical bytes read : 86,984,279,424
RAM usage:
Current: 211,553,344 bytes
Peak : 211,611,584 bytes
Contained Virtual Partitions:
fioa: ID:0, UUID:7cbfc24a-5727-4708-b258-91c66caf6856

fioa State: Online, Type: block device, Device: /dev/fioa
ID:0, UUID:7cbfc24a-5727-4708-b258-90c660af6856
1600.00 GBytes device size
Format: 390625000 sectors of 4096 bytes
Sectors In Use: 14989681
Max Physical Sectors Allowed: 390625000
Min Physical Sectors Reserved: 390625000

wouldn’t it be funny if it’s actually the bus speed that sets the max speed of the card :smiley:

2200 and 1700 MB/s in read and write is suspiciously close to the max bus speed; combined that’s about 3.9 GB/s against the ~4 GB/s a pcie 2.0 x8 link can do.
i do really like this kind of command-center overview of the details of the card, like the wattage draw, and the max temp, which i suppose might be since last reboot… but not sure about that.

seems like the pcie power limit is what decides the max performance of the card. if the server has enough wattage and cooling, and one doesn’t care too much about the wear on the card, one can push it… at 25 watts i’m not sure i can do that… granted, it seems to run at about 8 watts, which suits me just fine.
so tripling the power usage should be a pretty good kick in the butt, but… well, something to keep in mind for later if i need it… right now i would rather save the power and be spared the added heat.

@deathlessdd
i thought i could take the card out and benchmark it on windows, but i forgot that it doesn’t behave 100% correctly in windows, according to the old review on storagereview… so there goes that plan. might try it anyways. been trying to correlate the benchmarks across the reviews there are, and it’s very difficult…

@Toyoo
from what i can tell, the SX300 does in fact beat the micron 9100 max in latency.
the sx300 peaks at 20 ms in linux, while the micron 9100 max maxes out at 36 ms

the sx300 seems to fit somewhere between that and the top tier on the anandtech chart

ofc, being a bit older a model, it has some things it doesn’t win on when compared to the micron 9100 max, and from what i can tell, sure, the memblaze ones get slightly higher scores, but that’s most likely down to capacity: the more cells, the more bandwidth and iops

so really, from a hardware and speed point of view, my 1.6 TB MLC card still compares 1 to 1 with the non-optane cards… and ofc optane… well, that just costs too much and/or has no capacity, with a sometimes sad write speed sprinkled on top for the cheaper models.
or else one gets maybe 8 GB of optane and the rest is QLC. i already have a 1 TB QLC ssd over-provisioned to 750 GB, which is pretty nice since it makes use of the whole SLC caching thing when it’s not too full… i wonder if that was why i got latency, now that i think about it…

if i let the l2arc fill that drive beyond 25-33%, then i might have seen a major decrease in performance, because it would stop running with a lot of SLC caching once it started to store a good deal as QLC…
i set it to 600 GB, which would have been way over the top for max performance… hmmm, didn’t think of that before now… wonder if that’s why my small 250 GB MLC ssd is bitching too… i did provision it to have 20% free… but at the time i wasn’t really aware of just how much of a performance decrease i would see depending on how much capacity i used… i suppose MLC is less prone to that, since it only stores 2 bits per cell and thus packs the data much less densely.

but still, that would mean if i provisioned the MLC drive at 50% it would always run as SLC… and the QLC one would have to be provisioned to 25%, and then it would always, or let’s say never need to not, run as SLC… hmmm, maybe that’s what storagereview.com means when they talk about performance provisioning…
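if i want to test that, the nice thing is that an l2arc device can be swapped on a live pool without touching the pool data. a rough sketch (the pool name tank and the device path are placeholders; use whatever name zpool status shows for the cache device):

# drop the over-sized cache device
zpool remove tank /dev/sdx1

# re-create a partition covering only ~25% of the drive with your partitioning tool, then add it back
zpool add tank cache /dev/sdx1

# and keep an eye on how it fills and hits over time
arc_summary | grep -i l2arc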


not sure why i didn’t think to try this sooner; i don’t really think of this as a normal hdd :smiley:
it’s an easy way to test the disk while it’s running… ofc these are reads, but they should be uncached, unbuffered reads.

this is the command for unbuffered disk reads

hdparm -t --direct /dev/fioa

/dev/fioa:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing O_DIRECT disk reads: 4534 MB in 3.00 seconds = 1511.07 MB/sec

and below is the command for unbuffered reads from the cache

hdparm -T --direct /dev/fioa

/dev/fioa:
Timing O_DIRECT cached reads: 3328 MB in 2.00 seconds = 1665.05 MB/sec

and below are the buffered disk reads and the command

hdparm -t /dev/fioa

/dev/fioa:
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 3834 MB in  3.00 seconds = 1277.58 MB/sec

and below are the cached read speeds and the command

hdparm -T /dev/fioa

/dev/fioa:
 Timing cached reads:   12464 MB in  1.99 seconds = 6248.58 MB/sec

keep in mind the disk is actively utilized, so that is most likely why the standard buffered number is lower than the direct one:
the main system traffic will hit the buffer, while the tests run directly against the disk bypass the buffer and thus get close to real-world reads… ofc the disk is supposedly running in an endurance mode and thus has less performance because of that, and some system activity will most likely be hitting the raw MLC during the bench.

so i’m happy with that number. 1650 MB/s is not bad if it’s actually a sustainable speed. still not sure how i am going to test writes; a rough idea i might try is sketched below.

i took the highest numbers i got from running the commands a few times, but the variation wasn’t great, less than 10%
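one way i might go about the write side is fio (the flexible i/o tester benchmark, no relation to the fusion-io fio-* tools), since it can keep a sequential write going long enough to get past any caching. a sketch, assuming a scratch file on a filesystem mounted from the card at /mnt/fioa:

# 60 seconds of sustained 1 MiB sequential writes with direct i/o
fio --name=seqwrite --filename=/mnt/fioa/fiotest \
    --rw=write --bs=1M --size=32G --runtime=60 --time_based \
    --direct=1 --ioengine=libaio --iodepth=16 --group_reporting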

smart might be able to do a write test… if it even has smart :smiley:
apparently i need to give it some sort of nvme controller id to make smart work… ofc that’s only useful if i can actually do a write test with smart…

maybe i did something wrong when installing this… i do run into some odd errors and issues from time to time… i think i have to go back and run through the correct installation of the device on linux.
i kinda just winged it, but it doesn’t seem to have done the job, and there is a whole toolset along with the drivers that is supposedly recommended, and i haven’t even checked what’s in it… so i guess that’s where i should go spend my time now lol :smiley:
see you in a week lol. and ofc they are customized drivers, so they might not be working correctly… but i doubt that’s it… thus far it’s been me not knowing what i was doing that has been the issue… so i have to assume that’s still the case lol


ended up formatting the virtual partition on the ssd. seems like i might be able to do pretty much what i want with it; the manual explains how to allocate the over-provisioned reserve space, and the chip on the card is fully programmable, so in theory, no matter what software updates are made, the chip can be updated if one can write or find the code.

but i digress…
i think i mentioned that it was a 4kn device, which was weird, because usually ssd’s don’t care too much about that… which was also the case for this one. apparently it will talk with each thread or core on the cpu and thus can do parallel data access, which is why it’s supposed to get its super low latency.
since my stuff is mostly 512B, i formatted it to that. tho the manual says i should preallocate 28 GB of ram to manage the drive, but i think that’s mostly only relevant if i use it as a dedicated swap drive… which is basically one of the things it was made for, i guess…

supposedly it can allocate that by itself… so i’ll just see how it goes… it’s not a critical system if it crashes; only the slog part is important, and that’s only in case the rest of the system crashes… so it’s redundant, and thus i can play.

now i don’t get any errors after formatting to 512B sectors, but the manual says the optimal sector size is 32k, so i will try and see if i can move towards that, but i doubt that will be an easy trip… and i’ve got no idea how to get there… the rough sequence i think it would involve is sketched below.
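if i’ve read the vsl manual right, changing the sector size means re-formatting the ioMemory device with the bundled tools, something along these lines (only a sketch: the exact flags should be checked against fio-format --help for this driver version, and a format wipes everything on the card):

# detach the device from the os first
fio-detach /dev/fct0

# low-level format with a different sector size, e.g. 4k here (destroys all data);
# whether 32k is accepted i’d have to check against the tool itself
fio-format -b 4K /dev/fct0

# re-attach so /dev/fioa shows up again, then confirm the new format
fio-attach /dev/fct0
fio-status -a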

thus far it’s taken 650 MB of ram… which is nothing, so that’s fine… i formatted it to 1 TB, which is essentially 60% provisioning, and then it has 10% on top of that at the very least; not sure how far all that goes… but 10% is actually useful.
40% seems to be the best spot for performance provisioning, according to some white papers i found, but 60% was about the sweet spot for me, so i went with that: 1 TB, plus 6 GB for the slog partition (laid out roughly as in the sketch below). not sure if i was supposed to format it to full capacity and keep the partitions smaller, or format it to the reduced size… but since it’s pretty easy to add and remove, i can always change that later… just happy to get some proper testing time in now… and that everything seems to be running 100% smoothly.
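for reference, the layout i’m aiming at would look roughly like this (a sketch with hypothetical partition numbers; fioa1/fioa2 and the pool name tank are placeholders, and mklabel wipes whatever is on the device):

# carve the formatted space into a small slog partition and a big l2arc partition
parted -s /dev/fioa mklabel gpt
parted -s /dev/fioa mkpart slog 1MiB 6GiB
parted -s /dev/fioa mkpart l2arc 6GiB 100%

# hand both over to zfs
zpool add tank log /dev/fioa1
zpool add tank cache /dev/fioa2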

the manuals are a light little 200 pages or so lol maybe 150

my L2ARC has now been correctly running on the device for 12 hours since the last reboot, still tinkering…
tho my numbers seem amazing.
often i see a 10% - 30% utilization on the l2arc reads, something which was nearly unheard of on my older ssd’s except maybe after a week or two of uptime, and zfs seems to actually want to use the device and free up memory when it’s connected, which i didn’t see much of before… but still testing that out a bit… just got it working really well last night
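for anyone wanting to check the same kind of numbers, the arc statistics can be read straight from the kernel counters on zfs on linux (field names can differ a bit between versions, so treat the exact names as an assumption):

# l2arc size, hits and misses
grep -E 'l2_(size|hits|misses)' /proc/spl/kstat/zfs/arcstats

# or a rolling view, one sample every 5 seconds (arcstat.py on older installs)
arcstat 5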

i mounted a small bit of excess capacity, didn’t exactly hit the fdisk right… so i put that into an ext4-backed writethrough vhd in a windows vm to do a bit of limited benchmarking.

it shouldn’t be affected by stuff like arc or zfs, because it’s not part of the pools,
and there shouldn’t be any caching because of the writethrough… the numbers do seem a bit odd… but not totally unreasonable… i got 2700 MB/s and 1400 MB/s on the first run, but i was running scsi and CoW, so i remade it with virtio and raw (roughly the drive settings sketched after the screenshot below) and then i got this… the smaller numbers are about the same… i think they are limited either by something in the vm, or because i only allocated a tiny part of the ssd and thus the bandwidth / cell access might be limited and the speed restricted… but it’s something i’ve been seeing on most of my vm’s and most of my drives… doesn’t seem to matter which one it is…

so maybe it’s the former, or some kind of windows limitation…
lots of testing and tinkering to do still

[screenshot of the benchmark results from the vm]
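for what it’s worth, the virtio + raw + writethrough combination boils down to qemu drive options roughly like this (a hand-written sketch; i actually set it through the hypervisor’s ui, and the device path is a placeholder):

# raw passthrough of a partition on a virtio bus, host page cache in writethrough mode
qemu-system-x86_64 -enable-kvm -m 4G \
    -drive file=/dev/fioa3,format=raw,if=virtio,cache=writethrough
# (display, network and the windows boot disk left out of the sketch)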

but my l2arc results seem epic… i cannot wait to see how it will be running after 3-4 weeks of uptime, and how well it will do when i start spinning up most of my vm’s, which usually kills most of my memory… so hopefully this will make that less of an issue.

Netdata, when looking at the drive itself, is completely bonkers; not sure what’s going on there, and no luck in finding answers…
it’s either 10000% utilization, 2000 minutes of latency and 100 TB a sec… or zero across the board…
only seems to be a netdata thing tho.
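as a sanity check against netdata, plain iostat from sysstat reads the same kernel counters directly (assuming the card shows up as fioa in /proc/diskstats, which may depend on the ioMemory driver version):

# extended stats in MB, for the fioa device only, refreshed every 2 seconds
iostat -xm fioa 2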
