Unusually high disk activity

Storage2.Database-Dir: F:\Storj

and you need to stop the node, copy all DBs to the new location, change the config, and start the node.
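
A minimal sketch of that procedure in PowerShell, assuming the default Windows service name "storagenode" and placeholder paths (your storage path and config.yaml location may differ):

    # Stop the node before touching the databases
    Stop-Service storagenode

    # Copy the SQLite databases (*.db) to the new location, e.g. F:\Storj
    # "E:\storagenode\storage" is a placeholder for your current storage path
    New-Item -ItemType Directory -Force -Path "F:\Storj" | Out-Null
    Copy-Item "E:\storagenode\storage\*.db" -Destination "F:\Storj"

    # Then add this line to config.yaml (default location:
    # "C:\Program Files\Storj\Storage Node\config.yaml"):
    #   storage2.database-dir: F:\Storj

    # Start the node again
    Start-Service storagenode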

@Vadim Storage2.Database-Dir: F:\Storj
Never mind, found it! Thanks! I'll try it on a newer, smaller node.

Yes, you need to add this to the config and set your own directory, like my F:\Storj.

How old is your PC? Maybe you have SATA2, or even the original SATA? The network load is big enough that an old interface is slow and can show 100%. I had this on one PC; I installed an NVMe drive in a PCIe slot and moved all the DBs there.

The DBs aren't the issue. I've been running this setup for 7 months, but recently it filled up past the first HDD of the two-disk span.
It's the MFT.

Then the only possibility is to stop the node, defragment the volume, and then start it again.
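
If you want to try that, a sketch using the built-in Windows tooling (again assuming the service name "storagenode" and drive F:):

    # Stop the node so files aren't being written during the defrag
    Stop-Service storagenode

    # Run a traditional defragmentation pass on the volume
    Optimize-Volume -DriveLetter F -Defrag -Verbose

    Start-Service storagenode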

You have a lot of read/write commands on the storagenode data. What HDD do you use? Maybe it is SMR; those can slow down badly after they fill past a certain point.
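
To find the exact drive model so you can look up whether it is SMR, something like this works on Windows 8 / Server 2012 and later:

    # List physical disks with model name, media type, and size
    Get-PhysicalDisk | Select-Object FriendlyName, MediaType, Size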

I had similar issues with a JBOD setup. The issue in my case was that the volume had higher latency, and that was slowing down many operations. Running one storage node per hard drive helped in my case.


Old 3 TB EFRX drives aren't SMR. And the problem is gone now; maybe it was garbage collection? It lasted for about 5 hours, I'm not sure. All is fine currently. I also increased the pagefile, as someone suggested somewhere, and manually updated the node to 1.11.1. Thanks for the replies, guys!

Garbage collection can take a really long time on bad setups. I have one node which is basically everything bad you can combine (USB 2.0, an old Drobo unit known for being slow, two ancient 320 GB SATA2 HDDs as part of the array, and an NTFS file system on a Linux host).
On that setup, garbage collection can take over 24 hours. I did move the database files to an SSD-accelerated array inside the host machine itself, though. That's probably the only reason it works at all.

This node was always an experiment to see whether this crapshoot of a storage system would run at all. But since it does function, I kept it running. It does share the total load with 2 other nodes on my IP, though. If it were the only node on my network and received all incoming traffic, I would probably have a lot more problems.

Hah! That's not my case, though. The only slow part of my setup is that the disks are 5400 RPM in a spanned config.

Spanned, so basically RAID0.
I hope you are aware that with one disk failure the whole volume is lost.


Yes, as I stated in the original post. But thanks for caring 🙂

“New” 30EFRX drives aren’t SMR either. Only EFAX Reds are SMR (from what I can tell at least).

Yeah, looks that way.

You might want to take a look here:
Big Windows Node file system latency and 100% activity (NTFS vs ReFS)

Thanks! That is a very useful read. I am planning to move to a single large disk eventually though.