There is a nice toolbox in the Edge browser (Bing button > toolbox) where you can calculate all sorts of conversions, including digital storage conversions such as TB to TiB.
The OS always displays data in TiB (1024-based, 2^x), while the storagenode's dashboard and config parameters display/use data in TB (1000-based, 10^x).
The Properties window that you screenshotted displays both values in base 1024; just divide that big number by 1024 several times and you will get the small number.
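If you prefer doing the math directly, here is a quick sketch of the conversion, taking a 9 TB node as an arbitrary example:

```
# 9 TB (decimal, 10^12 bytes) expressed in TiB (binary, 2^40 bytes)
echo "scale=2; 9 * 10^12 / 1024^4" | bc
# prints 8.18, i.e. a "9 TB" node shows up as roughly 8.2 TiB in the OS
```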
The satellites report what your node has confirmed with signed orders.
I think I will disable the lazy FW too, to speed up the run. I didn't have any problems with timeouts or fatals in the past. ext4 + SATA + EXOS make a pretty solid combination.
So despite the discrepancy, it seems the sats see the node as full, as they should, and ingress has stopped. This is a big relief…
It would be interesting to see if it catches up in a week or so.
I assume all the other causes are ruled out (temp folder full of old files, blobs from decommissioned satellites, big cluster size > 8K).
Ext4, 512e format… which is what Syno does by default. Temp emptied, trash older than 7 days emptied; I exited all test sats before their decommissioning and deleted their blobs.
That's not the cluster size… it should be fine since it's not RAID.
If you give me some command to run, I can post the output. There is too much terminology for me… cluster, block, sector.
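For an ext4 volume, something along these lines should print the filesystem block size (the /dev/sdX1 device and the mount path below are placeholders for the node's data partition):

```
# Block size as recorded in the ext4 superblock (typically 4096 bytes)
sudo tune2fs -l /dev/sdX1 | grep "Block size"
# Or query it through the mount point of any mounted filesystem
stat -f -c "block size: %S" /mnt/storagenode
```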
Hi friend.
I don't think it will catch up right now. The node below has been at capacity for three weeks. Interestingly, the Average Disk Space Used This Month has gone down from just shy of 3 TB, when the node was not full, to 2.83 TB now.
This is a vanilla Docker node running on Linux, alone on a storage array.
Yes, we all see these big discrepancies, and we still haven't gotten a satisfactory answer. I bet that even if we follow elek's guide in the other thread, we still won't recover the unaccounted space and we won't level with the satellites' reports.
I haven't had the time for it yet, but I will do this: I keep the full history of my nodes in a spreadsheet; I will put together statistics showing the differences for each month on each node, and I am pretty sure others will confirm the same on their nodes too.
It's one thing if we have a few GB of unaccounted space for months, but when you have something like 1 TB each month per node, and you have several nodes… you can do the math on how much money you lose. What I am trying to make them understand is that, as an SNO, I expect to be paid for my space occupied with Storj data: when I see that I have 10 TB of data occupied on a disk where nothing else but the storagenode lives, from 1 Jan to 31 Jan, I expect to be paid $15 for that month, for those 10 TB of occupied space. I don't care what the sats do or see, or what anybody else thinks or does; my disk held 10 TB of Storj data, so I expect my full $15 from them. The OS sees 10 TB, the storagenode sees 10 TB, running commands gives you 10 TB… where's my money?
I really want to hear what the high-end setups, like servers with huge RAM, powerful CPUs, SAS drives etc., see on their dashboards, just to rule out low-performance hardware. Maybe it truly is a problem only on low-end setups.
The problem is that we don’t have statistics for Trash like we have for Used Space. We should have a history for Trash too, and all these discussions would be much simpler.
Did you try disabling the lazy filewalker to allow it to finish its work faster?
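For anyone wanting to try it, the relevant option should be pieces.enable-lazy-filewalker; a sketch for a Docker node, with the container name storagenode assumed:

```
# Disable the lazy filewalker so the scans run at normal I/O priority.
# Option 1: set it in config.yaml and restart the container:
#   pieces.enable-lazy-filewalker: false
docker restart -t 300 storagenode
# Option 2: remove the container and re-create it with the flag appended
# after the image name in your usual "docker run" command:
#   --pieces.enable-lazy-filewalker=false
```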
If you're referring to me - I did not try that yet. I'll give it a shot next time I am at that location.
Here is yet another thread on disk utilization mismatches, but I can't really figure it out.
The total node size is 9 TB; it is currently regarded as full at 8.92 TB plus around 80 GB of trash. So far so good. The node is around 18 months old at this time.
The average disk space used graph, however, sits at around 7.8 TB per day, and "df -h" reports around 8.3 TB, nowhere near the 9 TB it is claiming to have.
The node has been running the lazy filewalker for a few months; before that, the startup filewalker was disabled. I have confirmed in the logs that the used-space-filewalker finishes successfully for all 4 satellites, and I have restarted the node several times. I have 4 directories in the blobs folder and I have run the forget-untrusted-satellites command mentioned in another thread.
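(For reference, this is roughly how I spot the runs in the logs; the container name storagenode is an assumption:)

```
# Look for start/completion lines of the used-space scan, one per satellite
docker logs storagenode 2>&1 | grep "used-space-filewalker" | grep -Ei "started|completed|failed"
```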
What else can I do to chase down the missing 1TB?
8.3 TiB plus roughly 10% (TiB vs. TB) covers most of the difference between your df -h output and the node's total size, and is probably your issue there.
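A quick back-of-the-envelope check, assuming df -h really printed 8.3T:

```
# df -h uses binary units, so "8.3T" means 8.3 TiB; in decimal TB that is:
echo "scale=2; 8.3 * 1024^4 / 10^12" | bc
# about 9.1 TB, close to the 9 TB total / 8.92 TB used that the node reports
```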
"Average disk used" is something several other users have issues with. Did you try the steps in the guide below? ↓
For Linux I will have to google it too. So, who cares enough?
That makes sense regarding the total amount stored, I suppose, i.e. the data is actually stored and the used-space filewalker works.
The difference versus the satellite-reported data is still around 1 TB, though. I'd expect it to be cleaned up by the process that checks pieces against the generated bloom filter, and the difference should decrease over time, but that doesn't seem to happen.
To you and to @snorkel and to anyone who has this issue:
I expect this might help since there were almost no problems like this before implementing the lazy filewalker and making it the default.
It is notable that this problem usually happens on:
- VM nodes
- BTRFS/ZFS
- RAID
- network filesystems
- other slow disk subsystems (defragmentation disabled or never run on NTFS, NTFS used under Linux, etc.)
and rarely happens on non-virtualized nodes that use a single disk per node with a native filesystem on it.
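A rough way to check whether the disk subsystem itself is the bottleneck is to watch its latency and utilization while a filewalker runs (iostat from the sysstat package is assumed to be available):

```
# r_await/w_await = average read/write latency in ms, %util = how busy the disk is.
# Sustained high values during a filewalker run point to a slow disk subsystem.
iostat -x 5
```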
All of which are supported configurations?
(If iSCSI is the only network filesystem in question.)
A disk connected via iSCSI is exposed to the OS as a block device, not a network filesystem, so you can format it as ext4 for Linux or NTFS for Windows, and it works. It may have higher latency than local devices, though that shouldn't be too great, and there is a higher risk of file corruption due to connection interruptions or dropped packets, but that risk should be low.
However, if the virtual disk backing the iSCSI target lives on a slow filesystem on the server, it will likely inherit that behavior when used from the client.
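For illustration, once the initiator has logged in, the iSCSI LUN shows up as an ordinary block device and can be formatted like any local disk (the /dev/sdX name below is a placeholder):

```
# The iSCSI LUN appears alongside local disks
lsblk
# Format it with a native filesystem, e.g. ext4 on Linux
sudo mkfs.ext4 /dev/sdX
```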