Hallo - is it still required to keep 10% (or any other number) of space available on a node?
What about on a 10TB or 20TB node? It seems like a lot of data we are missing out on.
How about the 5GB hard stop - is that enough?
This is primarily a file system question… How far can you go without making it too slow because of fragmentation or other side effects?
For NTFS and ZFS I have seen nodes still working while the OS reports 99+ percent full.
300GB free seems to work well. But I still see older nodes where their sense of space-used is all out-of-whack. Like the OS says there's only 200GB free… but the node UI says there's 4TB free or something. Just have to be patient and wait for node upgrades and used-space filewalker to complete.
I think the answer is a definitely maybe.
Just yesterday I had a disk where I had specified way too much in the storj setup and it successfully stopped at 5GB free.
I've had other nodes that kind of keep a "soft buffer" where they never fill all the way to the allotted space.
You may want to keep some free space for file system performance. Fragmentation gets worse as the drive fills up, especially with ZFS, but it's kinda true with any file system.
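If you want to keep an eye on that, here's a minimal PowerShell sketch (just an illustration, nothing storj-specific, works on any Windows box) to see how full each volume is:

# List volumes with their size, free space, and percent free
Get-Volume | Where-Object { $_.DriveLetter } | Select-Object DriveLetter, FileSystemType,
    @{Name='SizeGB';  Expression={[math]::Round($_.Size/1GB, 1)}},
    @{Name='FreeGB';  Expression={[math]::Round($_.SizeRemaining/1GB, 1)}},
    @{Name='PctFree'; Expression={[math]::Round(100 * $_.SizeRemaining / $_.Size, 1)}}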
But with the massive recent test data I've had the satellite used space be way different from the file system's actual usage, so if I wanted to keep a lot of space free… it wasn't really working.
And I did have one disk which, one time, just filled up completely hard. I don't remember how it happened, if it was normal storj operation or some sort of crash. Zero bytes free. And of course this made the node fail and was a huge mess. And the only data on the drive was storj anyway, so I didn't have slack to trim. I ended up having to delete some of the "trash" folder, which is generally a no-no but was all I had available.
A different time I realized my "temp" folder had a boatload of files left over from some prior problem. So many files that the "ls" command couldn't list them. Deleting them took a long time and freed up 400GB of space. I think more recent code shouldn't leave as much in the temp folder.
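For anyone who hits the same thing, a rough PowerShell sketch (the path is only an example - point it at your own node's temp folder, and stop the node first) to count the leftovers and delete only the old ones:

# Count leftover files in the node's temp folder (example path - adjust to your setup)
$temp = 'D:\storagenode\temp'
(Get-ChildItem -Path $temp -File | Measure-Object).Count
# With the node stopped, remove only files older than two days
Get-ChildItem -Path $temp -File | Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-2) } | Remove-Item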
So perhaps 100GB for NTFS will suffice?
I give 16.3 TB from my 20TB drive. NTFS. You never know.
That seems excessive to be honest?
No, not excessive at all. Your question can't really be answered, as it's decided on a use-case basis - you haven't provided your cluster allocation, or whether you're using compression, parity/RAID, or an accelerated/cached file system. Frankly I'd say 12.5% free, as you're using NTFS. Going beyond that with the wrong file table and cluster size could permanently cripple such a large node. Past that point you could literally start to mangle your file table with fragmentation - completely and totally ruining your drive's performance permanently (think zombie filewalker nightmare). It would literally take months to attempt a defrag (or even to offload it to a properly prepared file system) to fix that if the node remains full at maturity, and it would have useless, non-competitive engagement with Storj, losing races. Given the payment model, you'd best consider you're in this for the long run and be prudent. The volume of flush-and-replace means the data density will also wildly fluctuate over time, and depending on cluster size, you could eventually get insanely fragmented on NTFS - especially if you're trying some exotic tiering approach which you don't fully understand.
Larger cluster sizes prevent fragmentation but increase waste - and stuff like that. Consider the tortoise and the hare when balancing a file system for long-term use: keep the runway short so you can eventually accommodate the 747 when it arrives, and not have to buy a new airport! Heh. NTFS is not an auto-wear-leveling, fragmentation-preventing file system like exFAT; it's finicky in the end.
100 gigs would not even cover the file table, let alone compensate for data density and waste, as we have no way of knowing the average data set yet to be used by real customer data. Re-visit that aggressive stance when there's real data.
2 cents
18.1 (after formatting) - 1.8 = 16.3
Easy.
I can still think it over when it's full.
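In other words, the allocation is just the formatted capacity minus whatever buffer you're comfortable with; a tiny sketch of the same arithmetic (the 10% buffer is only an assumption, not a Storj requirement):

# Pick an allocation as formatted capacity minus a buffer fraction
$formattedTB = 18.1      # what a "20TB" drive reports after NTFS formatting
$bufferFrac  = 0.10      # keep roughly 10% free - tune to your own comfort level
$allocTB     = [math]::Round($formattedTB * (1 - $bufferFrac), 1)
"Allocate about $allocTB TB"   # -> about 16.3 TB on this drive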
exactly my thought (20chrs)
1 Windows service, and 2 docker for Windows
Please take a look at the overused space (I didn't change the allocation).
The node can have a problem with outdated or corrupted databases and would report the space as free until it reaches the hard-coded limit of 5GB in the allocation or on the disk (whichever happens first).
If you are ok with that risk, you may allocate more and leave less free space.
Please, do not use exFAT. The smallest cluster size is 128KiB. It's also too fragile and can be easily destroyed by an abrupt reset or a power cut. Also, the node checks the used space of the data, but not the space actually occupied on the disk (until it reaches the 5GB free limit). With a bigger cluster size, the size of the data on the disk could be two times or more the size of the data itself.
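To make that concrete, a back-of-the-envelope PowerShell sketch (the 10 KB piece size is just an assumed example, not real customer data) of how much a small piece actually occupies at different cluster sizes:

# How much on-disk space a small piece takes at various cluster sizes (illustrative numbers only)
$pieceKB = 10
foreach ($clusterKB in 4, 32, 64, 128) {
    $onDiskKB = [math]::Ceiling($pieceKB / $clusterKB) * $clusterKB
    "{0,3} KB cluster: a {1} KB piece occupies {2} KB ({3} KB wasted)" -f $clusterKB, $pieceKB, $onDiskKB, ($onDiskKB - $pieceKB)
}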
See: Topics tagged exfat
Thanks for the replies all.
If you keep something else on that drive, you should account for that data. But I imagine you only use the drive for the storagenode, so you could allocate the whole drive to it. It will stop at the hardcoded 5GB free, approximately. I don't recommend it, but it works.
Thanks. Only Storj - and I left a bit more space on it.
I need to re-align your understanding here Alexey, as I'm well aware of your concerns with exFAT. I know it's a standard disclaimer for you, but your understanding is incorrect. One can certainly format an exFAT partition with a cluster size of 4k if they want… 32k works nicely. However, you are absolutely correct with regard to the fragility and susceptibility to corruption. And any task undertaken to correct a dirty bit on an exFAT format can take much longer than on, say, NTFS.
Also of note, it's at least 5-10x faster for random access and file operations than NTFS, because it doesn't have alternate streams or any security or metadata baked in. For very small nodes it's actually quite plausible.
2 and 1/2 cents… just for you
Yes, you likely can change the cluster size; however, by default:
chkdsk e:
The type of the file system is exFAT.
Insufficient storage available to create either the shadow copy storage file or other shadow copy data.
Volume Serial Number is 6874-0010
Windows is verifying files and folders...
Volume label is New Volume.
File and folder verification is complete.
Windows has scanned the file system and found no problems.
No further action is required.
1073705984 KB total disk space.
256 KB in 1 files.
512 KB in 2 indexes.
0 KB in bad sectors.
768 KB in use by the system.
1073704448 KB available on disk.
262144 bytes in each allocation unit.
4194164 total allocation units on disk.
4194158 allocation units available on disk.
PS C:\Users\user> 262144/1KB
256
However, yes, you may use smaller units when creating the volume. But some vendors sell drives, especially external ones, preformatted. And if you weren't aware of that, you could make a mistake and use it with the default settings.
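If you do format it yourself, a minimal sketch of choosing the cluster size explicitly (the drive letter is just an example, and formatting erases the volume, so only do this on an empty disk):

# Example only - formatting erases the volume; drive letter E: is a placeholder
Format-Volume -DriveLetter E -FileSystem exFAT -AllocationUnitSize 32KB -NewFileSystemLabel "storj"
# or the usual NTFS with its default 4 KB clusters:
Format-Volume -DriveLetter E -FileSystem NTFS -NewFileSystemLabel "storj"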
And you are correct, the missing metainformation and journal make this FS faster, but unfortunately very undesirable for the node due to the lack of a journal - with an abrupt interruption the data could be damaged or lost with a higher probability than on NTFS or ext4.
Yeah people are generally stupid.
2 cents
Not necessarily. They just don't care. We attract miners too; they are likely related to tech stuff somehow… but they never dig deep into the details.