Disk usage discrepancy?

This usually means a timeout, i.e. the disk is unable to respond in time. Since it's NTFS, you need to perform a defragmentation and enable scheduled defragmentation if it was disabled for this disk (it's enabled by default).
If that doesn't help, you may disable the lazy mode.
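As a minimal sketch of both steps, assuming a Windows node with the data on drive D: and a config.yaml-based setup (the drive letter and paths are assumptions, adjust them to your node; run from an elevated prompt):

defrag D: /A /U /V
defrag D: /O /U /V

The first command only analyzes the fragmentation, the second performs the optimization; scheduled optimization can be re-enabled in the "Optimize Drives" (dfrgui) tool. To disable the lazy mode, set this option in config.yaml and restart the node:

pieces.enable-lazy-filewalker: false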

The non-lazy filewalker will use more IOPS, but it should finish without interruption and should update the databases at the end.
However, all 5 filewalkers need to finish their work successfully for every trusted satellite.
All data from untrusted satellites should be deleted though: How To Forget Untrusted Satellites

Didn't it use to be 4 walkers, like a year ago? When did the 5th enter the scene? I hope they don't add one per year. :unamused:

My dashboard only shows 4 satellites: AP1, US1, EU1, and Saltlake. Does this mean I am good with regard to untrusted satellites, or is it possible they wouldn't show up in the dashboard but could still be impacting my node?

You can check the number of folders in blobs: 4 = OK; more = use the force-forget command as in

Method 2
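As a minimal sketch of the folder count check, assuming a Linux node with the storage directory at /mnt/storagenode/config/storage (adjust the path to your setup):

find /mnt/storagenode/config/storage/blobs -mindepth 1 -maxdepth 1 -type d | wc -l

If this prints more than 4, the extra folders belong to untrusted satellites.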


One node has deleted part of the garbage and another part of the garbage is marked for deletion. It now shows a capacity that is almost real.

Other nodes have found garbage and their capacity is closer to the real one.

One node had garbage and has been updated to version 1.101.3. The garbage has disappeared and the occupied space has increased, as if the garbage had become occupied space. Is this some problem with the dashboard?


This is not expected. We passed your information to the team.

Many of my nodes have this kind of difference. Is that a normal thing?

Hello, I have an old node which had a lot of IOPS issues. It has now been migrated to a new datastore and all the filewalkers have finished successfully. However, my disk discrepancy looks like this on this particular node:


[image: dashboard screenshot of the disk usage discrepancy]

When will this be fixed, so that old files start being deleted?

It seems to be a problem on larger storage units.

Many pieces have been deleted in the past month. Leave the node online, without restarting it. In the coming weeks the trash will fill up and be emptied little by little. Read the forum.


Looks like nobody posted this so far: Use same unit (1024 vs 1000 multiplier) for disk usage on SN console · Issue #6899 · storj/storj · GitHub


Wow. Looks like a full node but in reality half of the occupied space is garbage that you don’t get paid for.


Seems something is starting to happen now :slight_smile:

Finally, today my numbers add up flawlessly.
(On my new node; the older one probably needs more time, I just checked.)

Update: 2 days ago, the old node freed 2 TB (out of 10 TB) of disk space, and the numbers add up perfectly now too.


My node has 8.57 TB of used space, but the “Average Disk Space” metric is displaying only 4.63 TB, which is almost half of the real used space.

My question: is this behaviour normal? It doesn't look normal; the payout is based on the 4.63 TB average.

Have you emptied the disks from decommissioned satellites?

and did you see this?

and this?


Hello,
I noticed a discrepancy in my storage usage between my TrueNAS Core dataset and what is shown in the dashboard.
Can it be that later versions of Storj didn't clean up older excess storage/trash, for example?

I run TrueNAS 13-U6.1 (latest). The dataset uses 9.04 TiB with a 1.09 compression ratio, so effectively ca. 10 TiB of total storage, I guess.
I have given Storj 8.5 TB in the Storj config, and it currently uses 7.5 TB with 330 GB free and 0.67 TB in trash.

Given that 1 TiB is more than 1 TB, I am surprised.

I would assume that 8.5 TB for Storj is about 7.7 TiB on TrueNAS, which with the 1.09 compression ratio gives roughly 7 TiB of space usage.
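As a quick sanity check of that arithmetic (a sketch only, assuming bc is available and the 1.09 compression ratio applies uniformly):

echo "8.5 * 10^12 / 2^40 / 1.09" | bc -l

This prints roughly 7.09, i.e. about 7.1 TiB of expected dataset usage for the 8.5 TB allocation.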
I don’t use snapshots on this dataset.

Can it be that there is more trash that is not shown as trash and was not deleted by older versions of Storj, and the latest version doesn't pick it up?
If so, how can I purge/clean up the dataset somehow without wiping everything?

Thank you
Etienne

Hello @etienneb,
Welcome back!
This is exactly the same issue discussed here.
You may have a discrepancy if your filewalkers are unable to finish their work without errors.


Thanks, I have looked at the logs and am learning every day :slight_smile:

Plenty of these in my logs from last week (I unpacked a bunch of them):
lazyfilewalker.used-space-filewalker subprocess finished successfully
lazyfilewalker.trash-cleanup-filewalker.subprocess trash-filewalker completed
lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed
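For anyone who wants to do the same check, a minimal sketch, assuming a docker-based node with a container named storagenode (the container name and setup are assumptions; on a jail or systemd setup, grep the node's log file instead):

docker logs storagenode 2>&1 | grep -E "used-space-filewalker|gc-filewalker|trash-cleanup-filewalker" | grep -iE "finished|completed|failed|error"

Any failed/error lines here point to a filewalker that could not finish and will leave the dashboard numbers off.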

I am now removing some satellites, which I learned today doesn't happen automatically:
storagenode forget-satellite --all-untrusted --config-dir /mnt/storagenode/config --identity-dir /mnt/storagenode/identity
But the diagnostics only showed around 240 GB in them.

The temp folder has some older files (from June/July 2023), but that folder is only 400 MB.
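If I wanted to clear those out, a sketch under the assumption that the node is stopped first and the storage path is /mnt/storagenode/config/storage (both are assumptions; the temp folder only holds in-flight uploads, so files that old should be safe to remove while the node is down):

find /mnt/storagenode/config/storage/temp -type f -mtime +30 -delete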

It would be great if you could shut down the node and run a full cleanup command before continuing.
I have the feeling the storagenode disk is really slow at I/O.

This is my oldest node, around 5 years old, and the average disk space used has been off by 2 TB for months. I had become used to it, or perhaps was resigned to it. Now, however, it's nearly 4 TB off versus the disk space used. No errors, and I cleaned out the old satellite data way back.

Before I trash this node (most of the data has gone to trash lately anyway), any ideas about how to get rid of, or get back, that 4 TB? The way circumstances for operating these nodes have been going south lately, I've hardly got the patience to sort out these issues any more.

Do you have your used space filewalker enabled or disabled?
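For reference, a minimal sketch of how to check that, assuming a config.yaml-based setup at /mnt/storagenode/config (the path is an assumption) and the standard option names:

grep -E "storage2.piece-scan-on-startup|pieces.enable-lazy-filewalker" /mnt/storagenode/config/config.yaml

storage2.piece-scan-on-startup: true keeps the used-space filewalker running at startup (the default); if it has been set to false, the dashboard's used-space numbers will drift over time.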
