Disk usage discrepancy?

Do you mean this part:

Yep. There are also more advanced queries in the thread:

(which makes it unnecessary to manually calculate the difference between rows. Note that the interval between rows can vary, so it has to be calculated to get the actual average usage)

My utilization finally calmed down after I told it to use less space. I’m sitting at 88% now instead of being pinned at 100% with storjnode crashing.

I don’t remember what the maximum recommended utilization was. Should I stay below 85%, 90%, 95%? I’d like to configure my node to the max.

What’s the error after the crash?
The official recommendation is to keep 10% of the total capacity free; on a 15 TB drive, for example, that means allocating no more than about 13.5 TB. But many Community members go much riskier: something like 100 GB free out of a 12 TB capacity (which is actually dangerous, especially in case of possible bugs).

Obviously too advanced for me.
I have no idea what this is and what I am supposed to do with the values to get the information I want:

Hi,
I have 13 TB dedicated to Storj, 9.18 TB of which is being used. The node has an entire 15 TB drive dedicated to Storj data. Currently Storj is taking up 14.8 TB on the drive, yet the dashboard says it is only using 9.18 TB. Please advise, as Storj is consuming much more space than it reports. I was following another person having the same issue, but the thread switched to a different language and the translation was not good.


The other topic I was talking about:

OS?
File system?
Hardware?


WS 2019
NTFS
intel NUC i9 32GB RAM.

Output of:

fsutil fsinfo ntfsInfo {drive-letter}:

fsutil fsinfo ntfsInfo E:
NTFS Volume Serial Number : 0x7624e1fe24e1c16b
NTFS Version : 3.1
LFS Version : 2.0
Total Sectors : 31,457,243,135 ( 14.6 TB)
Total Clusters : 3,932,155,391 ( 14.6 TB)
Free Clusters : 68,108,221 (259.8 GB)
Total Reserved Clusters : 1,136 ( 4.4 MB)
Reserved For Storage Reserve : 0 ( 0.0 KB)
Bytes Per Sector : 512
Bytes Per Physical Sector : 4096
Bytes Per Cluster : 4096 (4 KB)
Bytes Per FileRecord Segment : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 53.95 GB
Mft Start Lcn : 0x00000000000c0000
Mft2 Start Lcn : 0x0000000000000002
Mft Zone Start : 0x00000000bf2f2860
Mft Zone End : 0x00000000bf2f2940
MFT Zone Size : 896.00 KB
Max Device Trim Extent Count : 4096
Max Device Trim Byte Count : 0xffffffff
Max Volume Trim Extent Count : 62
Max Volume Trim Byte Count : 0x40000000

In most of the posts I have read about similar issues, the problem is usually NTFS and the filewalker.


They are explained on the wiki page that I linked (the first post in the thread).

I assume you are interested in the exact storage usage that is maintained by the satellite.

But there is no such value. The satellite shares the byte*hour usage for a certain period. The average usage for that period can be calculated by dividing it by the duration of the period (the difference between the previous and the current end time).

Again: I would recommend checking the post, including the diagram…
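Something like this against the node’s storage_usage.db, if a script is easier to follow (the database path and the assumption that storage_usage exposes at_rest_total in byte-hours plus interval_end_time are taken from the rows quoted below; adjust to your actual schema):

import sqlite3

# Sketch: average stored bytes per reporting interval, per satellite.
# Assumes <storage-dir>/storage_usage.db has a storage_usage table with
# satellite_id, at_rest_total (byte-hours) and interval_end_time columns,
# matching the rows quoted in this thread; adjust if your schema differs.
DB_PATH = "storage_usage.db"

query = """
SELECT hex(satellite_id)            AS satellite,
       interval_end_time,
       at_rest_total,
       julianday(interval_end_time) AS end_jd
FROM storage_usage
ORDER BY satellite_id, interval_end_time
"""

con = sqlite3.connect(DB_PATH)
prev = {}  # satellite -> julian day of the previous interval end
for satellite, end_time, byte_hours, end_jd in con.execute(query):
    if satellite in prev:
        hours = (end_jd - prev[satellite]) * 24  # interval length in hours
        avg_bytes = byte_hours / hours           # average stored bytes over that interval
        print(f"{satellite[:8]} {end_time}: {avg_bytes / 1e9:.3f} GB average")
    prev[satellite] = end_jd
con.close()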

That’s not in question for me. I am trying to put that together with the “advanced” query

and what you have said before:

For me it looks like this when I take the first 2 rows:

2024-01-31 00:00:00+00:00|7B2DE9D72C2E935F1918C058CAAF8ED00F0581639008707317FF1BD000000000|1144871492.78699|2024-01-31 12:08:11.655857+00:00|2460341.00569046|1
2024-01-30 00:00:00+00:00|7B2DE9D72C2E935F1918C058CAAF8ED00F0581639008707317FF1BD000000000|1905812703.83945|2024-01-30 22:07:58.305931+00:00|2460340.42220262|2

Byte-hours: 1144871492.78699
Interval: (2460341.00569046 - 2460340.42220262) * 24 = 14.00370816 hours
Average: 1144871492.78699 / 14.00370816
= 81754880.9 bytes ≈ 81.75 MB
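Re-checking that arithmetic in plain Python with the same two rows gives the same figure, which lands in the megabyte range:

# Recompute the average from the two rows quoted above.
byte_hours = 1144871492.78699
end_jd_current = 2460341.00569046   # julianday() of 2024-01-31 12:08:11
end_jd_previous = 2460340.42220262  # julianday() of 2024-01-30 22:07:58

hours = (end_jd_current - end_jd_previous) * 24  # ~14.0037 hours
avg_bytes = byte_hours / hours                   # ~81,754,881 bytes

print(f"{hours:.8f} h -> {avg_bytes:,.0f} bytes = {avg_bytes / 1e6:.2f} MB")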

Another discrepancy on a node: the node says a bit less than 3 TB is used, with negligible trash.
Then I wanted to move it to a 4 TB partition, but it cannot be moved because the 4 TB partition fills up before the node data is fully copied. Crazy.

Could it be a different cluster size on the new partition?
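If the cluster size does differ, the extra space can be estimated by rounding every file up to a whole cluster on each partition; a small sketch (the cluster sizes here are only examples):

import math

def allocated(logical_size: int, cluster: int) -> int:
    """Bytes actually taken on disk when a file is rounded up to whole clusters."""
    return math.ceil(logical_size / cluster) * cluster if logical_size else 0

# Example: a 1.5 KB piece file on 4 KB vs 64 KB clusters.
for cluster in (4096, 65536):
    print(cluster, allocated(1536, cluster))
# With millions of small pieces, that per-file slack adds up quickly.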

I have not been able to investigate closely.
But the cluster sizes and block sizes are all the same.
I’ll have to check whether the reported node sizes are correct at all. Maybe the node is in fact bigger than what it says.
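One way to check would be to sum the actual file sizes under the node’s storage folder and compare the blobs total with what the dashboard reports (the path and folder names below are just the usual storagenode layout; adjust to your setup):

import os
from collections import defaultdict

STORAGE_DIR = r"E:\storagenode\storage"  # the folder that contains blobs/, trash/, etc.

totals = defaultdict(int)
for root, _dirs, files in os.walk(STORAGE_DIR):
    # Attribute every file to the top-level folder it lives under (blobs, trash, garbage, temp, ...).
    rel = os.path.relpath(root, STORAGE_DIR)
    top = rel.split(os.sep)[0] if rel != "." else "."
    for name in files:
        try:
            totals[top] += os.path.getsize(os.path.join(root, name))
        except OSError:
            pass  # a file may disappear while walking

for top, size in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{top:10s} {size / 1e12:8.3f} TB")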

I deleted all the databases. I thought this would restore the node’s real used capacity on the dashboard.

The dashboard capacity is not recovered: the node says it is almost empty, so it keeps filling up and is becoming overloaded.

The node does not stop receiving data, and if you do nothing the hard drive will fill up completely in a few weeks.

Is there a way for the node to record its actual used capacity? Can incoming pieces be stopped? Should I consider this node lost?

Let the Filewalker finish. It will update the database.


Uncheck this (red arrow) and wait for it.

This causes extra IOPS on the node drive. It probably also slows down the filewalker.