The node does not recheck the occupied space and therefore the disk is full

Found in my logs: the lazy filewalker runs took around 6 hours in total to finish.

2023-11-25T22:08:12Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "process": "storagenode", "createdBefore": "2023-11-21T17:59:59Z", "bloomFilterSize": 1854803}
2023-11-25T22:18:42Z    INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2023-11-29T18:11:10Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode", "createdBefore": "2023-11-22T17:59:59Z", "bloomFilterSize": 2097155}
2023-11-29T23:48:59Z    INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2023-11-30T16:10:43Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode", "createdBefore": "2023-09-19T17:00:07Z", "bloomFilterSize": 364476}
2023-11-30T16:18:31Z    INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2023-12-01T09:36:57Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "bloomFilterSize": 566263, "process": "storagenode", "createdBefore": "2023-11-27T17:59:58Z"}
2023-12-01T09:55:05Z    INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully    {"process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-12-02T21:42:46Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "process": "storagenode", "createdBefore": "2023-11-28T17:59:59Z", "bloomFilterSize": 1849134}
2023-12-02T21:53:43Z    INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully    {"process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
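For reference, a minimal Python sketch (not part of the storagenode software; the log path is a placeholder) that pairs each "gc-filewalker started" line with the matching "subprocess finished successfully" line for the same satellite and sums the run times:

import json
from datetime import datetime, timedelta

LOG_PATH = "storagenode.log"  # hypothetical path to the node log

def ts_and_fields(line):
    """Parse the leading timestamp and the trailing JSON payload of a storagenode log line."""
    stamp = line.split()[0].replace("Z", "+00:00")
    payload = json.loads(line[line.index("{"):])
    return datetime.fromisoformat(stamp), payload

started = {}          # satelliteID -> start time of the current gc run
total = timedelta()

with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        if "gc-filewalker started" in line:
            when, fields = ts_and_fields(line)
            started[fields["satelliteID"]] = when
        elif "lazyfilewalker.gc-filewalker" in line and "subprocess finished successfully" in line:
            when, fields = ts_and_fields(line)
            begin = started.pop(fields["satelliteID"], None)
            if begin is not None:
                total += when - begin
                print(fields["satelliteID"], when - begin)

print("total gc-filewalker time:", total)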


The "Average Disk Space Used This Month" graph does not change.
What can I do about it?

Nothing will change until the filewalker finishes its work.

It finished and re-read the size.
The right side now shows correctly, but the left side does not change.

It should also move part (up to 90%) of the deleted data to the trash. The left side is what your node reported as actually used; everything above that is deleted data, which should be moved to the trash by the garbage collector (gc-filewalker) and expiration removal (retain).
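As a rough illustration of that left/right gap (the numbers below are hypothetical, not taken from this node):

# The "left side" of the dashboard is what the node itself reported as used; anything the
# filesystem shows above that is garbage that gc-filewalker/retain should eventually move
# to the trash. Numbers are made up for illustration.
reported_used = 0.81e12          # bytes the node reported as actually used (left side)
on_disk       = 2.20e12          # bytes actually occupied on the disk
garbage       = on_disk - reported_used
first_pass    = 0.9 * garbage    # roughly 90% is expected to reach the trash on the first GC pass
print(f"garbage = {garbage / 1e12:.2f} TB, expected in trash after the first pass = {first_pass / 1e12:.2f} TB")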

Please tell me how long I need to wait for everything unnecessary to be deleted.
I need to understand whether it is worth moving the node to another, larger disk, or just waiting: if only about 300 GB is actually needed, then more than 1 TB will be deleted and this disk will still be fine for this node.

2023-12-04T10:34:50+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2023-12-04T10:34:50+02:00	INFO	piecestore	download started	{"Piece ID": "YSPGIIYHEYESWNKMKCSVI532ODBFTTI62PWUP4IM2SYCV24T7DKA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 580096, "Remote Address": "159.69.210.146:33550"}
2023-12-04T10:34:50+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode"}
2023-12-04T10:34:50+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker started	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode"}

2023-12-04T10:34:51+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker completed	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode", "piecesTotal": 17547262464, "piecesContentSize": 17542759424}
2023-12-04T10:34:51+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2023-12-04T10:34:51+02:00	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-12-04T10:34:51+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-12-04T10:34:52+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "process": "storagenode"}
2023-12-04T10:34:52+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker started	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "process": "storagenode"}

2023-12-04T10:41:06+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker completed	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "process": "storagenode", "piecesTotal": 24462616320, "piecesContentSize": 24442697984}
2023-12-04T10:41:06+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-12-04T10:41:06+02:00	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2023-12-04T10:41:06+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2023-12-04T10:41:06+02:00	INFO	piecestore	upload started	{"Piece ID": "AJ37XF56VKXNK3NINOXSYJVVI4NPEL2QO4SBSRHJW4FOFO7XX3XQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1685981696, "Remote Address": "5.161.117.79:39350"}
2023-12-04T10:41:06+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode"}
2023-12-04T10:41:06+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker started	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode"}

This one could take more than a few minutes; the second biggest is EU1 (12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs), at least for my nodes.

It should move about 90% of the garbage to the trash in the first pass, once it finishes.

Thank you, I'll transfer it to a larger disk and run it with debug logging to track the completion of the process.
But within a day the left side only became a little smaller, and the right side remained unchanged.
I'll write back in a couple of days.

To quickly check the result, I ran it with debug logging on another node:

	Line    29: 2023-12-05T18:45:27+02:00	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
	Line    42: 2023-12-05T18:45:27+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
	Line   255: 2023-12-05T18:45:27+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode"}
	Line   256: 2023-12-05T18:45:27+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker started	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode"}
	Line  1414: 2023-12-05T18:46:52+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker completed	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode", "piecesTotal": 16607320832, "piecesContentSize": 16603104512}
	Line  1415: 2023-12-05T18:46:52+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
	
	Line  1416: 2023-12-05T18:46:52+02:00	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
	Line  1417: 2023-12-05T18:46:52+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
	Line  1418: 2023-12-05T18:46:53+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "process": "storagenode"}
	Line  1419: 2023-12-05T18:46:53+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker started	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "process": "storagenode"}
	Line  2223: 2023-12-05T18:49:08+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker completed	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "process": "storagenode", "piecesTotal": 17261369600, "piecesContentSize": 17246769408}
	Line  2224: 2023-12-05T18:49:08+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
	
	Line  2225: 2023-12-05T18:49:08+02:00	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
	Line  2226: 2023-12-05T18:49:08+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
	Line  2227: 2023-12-05T18:49:08+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode"}
	Line  2228: 2023-12-05T18:49:08+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker started	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode"}
	Line  6322: 2023-12-05T19:00:38+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker completed	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode", "piecesTotal": 275982499328, "piecesContentSize": 274826279424}
	Line  6323: 2023-12-05T19:00:38+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
	
	Line  6324: 2023-12-05T19:00:38+02:00	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
	Line  6325: 2023-12-05T19:00:38+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
	Line  6326: 2023-12-05T19:00:38+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "process": "storagenode"}
	Line  6327: 2023-12-05T19:00:38+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker started	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "process": "storagenode"}
	Line  6603: 2023-12-05T19:01:22+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker completed	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "process": "storagenode", "piecesTotal": 176859144192, "piecesContentSize": 176770240000}
	Line  6604: 2023-12-05T19:01:22+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}


I specifically deleted the trash manually. It’s completely empty.
2023-12-05T18:45:27+02:00 DEBUG version Running on allowed version. {"Version": "1.93.1"}
I took the latest test version, whose changelog states:

  • 1105deb storagenode/blobstore/filestore: fix flaky TestTrashAndRestore

So the question is: why is so little shown on the left? This is the situation on many nodes, and I want to find out the reason and eliminate the overuse of unpaid space in order to free it up and then fill it with useful data.

Risky; I hope this does not get you disqualified.
Only files older than 7 days are supposed to be deleted…


Thank you, I know that.
But the files in the trash are needed in case of problems on the satellites, and sometimes they get deleted before that.
When the node has grown larger than it should and there is no free space left to run it, all that remains is to delete the trash…

As I said, the gc-filewalker can remove up to 90% of the excess data in a single pass, but not 100%. After a couple of passes it should even out.
It is better not to clear the trash manually: sometimes GET_REPAIR pulls pieces out of the trash when it cannot find them in the main storage, for example because the Bloom filter matched more than it should. In that case the audit score can suffer if a piece turns out to be missing.
And retain will now complain all over the log that it could not delete something.
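To see whether retain ran at all and whether it complained, one rough check is to count log lines by keyword and severity. A minimal sketch; the keywords are plain substrings chosen for this thread, not exact storagenode message texts:

from collections import Counter

LOG_PATH = "storagenode.log"  # hypothetical path
KEYWORDS = ("retain", "gc-filewalker", "orders")

hits = Counter()
with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        for kw in KEYWORDS:
            if kw in line:
                level = "ERROR" if "ERROR" in line else ("WARN" if "WARN" in line else "INFO/DEBUG")
                hits[(kw, level)] += 1  # count matches per keyword and severity

for (kw, level), n in sorted(hits.items()):
    print(f"{kw:15s} {level:10s} {n}")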

I already wrote that and took a screenshot with the identifier: this is a different node.
I checked on several nodes; on all of them the check of the 4 satellites finishes, but nothing goes into the trash.
In every case there is a significant difference between the left side (information from the satellite) and the right side (information from the node).

The pictures show that nothing was sent to the trash! Nothing at all after a full check and after finishing the check of all 4 satellites - I provided the log.
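To double-check that, the per-satellite trash folders can be summed directly on disk. A minimal sketch; the path is a placeholder and the storage/trash layout is an assumption to adjust for your own node:

import os

TRASH_DIR = r"D:\storage\trash"  # hypothetical; point this at your node's trash folder

def dir_size(path):
    """Sum the sizes of all files below the given directory."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

for entry in sorted(os.listdir(TRASH_DIR)):
    sub = os.path.join(TRASH_DIR, entry)
    if os.path.isdir(sub):
        print(f"{entry}: {dir_size(sub) / 1e9:.2f} GB")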


This is very weird. Has the retain process finished too?
Do you have any errors related to orders in your logs?


retain does not appear in the debug logs at all.
Now I have started the original node with which I began this topic.
It is on the latest official version.

Here’s the log:


2023-12-06T08:58:25+02:00	DEBUG	Version info	{"Version": "1.91.2"

	Line   37: 2023-12-06T08:58:28+02:00	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
	Line   38: 2023-12-06T08:58:28+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
	Line   45: 2023-12-06T08:58:28+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode"}
	Line   46: 2023-12-06T08:58:28+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker started	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode"}
	Line  129: 2023-12-06T08:58:36+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker completed	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode", "piecesTotal": 17556313600, "piecesContentSize": 17551808000}
	Line  130: 2023-12-06T08:58:36+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
	
	Line  131: 2023-12-06T08:58:36+02:00	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
	Line  132: 2023-12-06T08:58:36+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
	Line  133: 2023-12-06T08:58:36+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "process": "storagenode"}
	Line  134: 2023-12-06T08:58:36+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker started	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "process": "storagenode"}
	Line  152: 2023-12-06T08:58:50+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker completed	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "process": "storagenode", "piecesTotal": 24552196352, "piecesContentSize": 24532092672}
	Line  153: 2023-12-06T08:58:50+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
	
	Line  154: 2023-12-06T08:58:50+02:00	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
	Line  155: 2023-12-06T08:58:50+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
	Line  156: 2023-12-06T08:58:50+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode"}
	Line  157: 2023-12-06T08:58:50+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker started	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode"}
	Line 4808: 2023-12-06T09:15:22+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker completed	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode", "piecesTotal": 774876084480, "piecesContentSize": 772373537024}
	Line 4809: 2023-12-06T09:15:22+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
	
	Line 4810: 2023-12-06T09:15:22+02:00	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
	Line 4811: 2023-12-06T09:15:22+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
	Line 4812: 2023-12-06T09:15:22+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "process": "storagenode"}
	Line 4813: 2023-12-06T09:15:22+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker started	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "process": "storagenode"}
	Line 5523: 2023-12-06T09:17:46+02:00	INFO	lazyfilewalker.used-space-filewalker.subprocess	used-space-filewalker completed	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "process": "storagenode", "piecesTotal": 587660732672, "piecesContentSize": 587348464896}
	Line 5524: 2023-12-06T09:17:46+02:00	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}

Here is what is displayed:

Here is my config:

# in-memory buffer for uploads
filestore.write-buffer-size: 4.0 MiB

# file preallocated for uploading
pieces.write-prealloc-size: 4.0 MiB

# directory to store databases. if empty, uses data path
storage2.database-dir: C:\storj\db

# operator.wallet-features: ["zksync"]


storage2.piece-scan-on-startup: true
pieces.enable-lazy-filewalker: true




# how frequently bandwidth usage rollups are calculated
# bandwidth.interval: 1h0m0s

# how frequently expired pieces are collected
# collector.interval: 1h0m0s

# use color in user interface
# color: false

# server address of the api gateway and frontend app
console.address: *************************

# path to static resources
# console.static-dir: ""

# the public address of the node, useful for nodes behind NAT
contact.external-address: ***************************

# how frequently the node contact chore should run
# contact.interval: 1h0m0s

# Maximum Database Connection Lifetime, -1ns means the stdlib default
# db.conn_max_lifetime: 30m0s

# Maximum Amount of Idle Database connections, -1 means the stdlib default
# db.max_idle_conns: 1

# Maximum Amount of Open Database connections, -1 means the stdlib default
# db.max_open_conns: 5

# address to listen on for debug endpoints
# debug.addr: 127.0.0.1:0

# expose control panel
# debug.control: false

# If set, a path to write a process trace SVG to
# debug.trace-out: ""

# open config in default editor
# edit-conf: false

# in-memory buffer for uploads
# filestore.write-buffer-size: 128.0 KiB

# how often to run the chore to check for satellites for the node to exit.
# graceful-exit.chore-interval: 1m0s

# the minimum acceptable bytes that an exiting node can transfer per second to the new node
# graceful-exit.min-bytes-per-second: 5.00 KB

# the minimum duration for downloading a piece from storage nodes before timing out
# graceful-exit.min-download-timeout: 2m0s

# number of concurrent transfers per graceful exit worker
# graceful-exit.num-concurrent-transfers: 5

# number of workers to handle satellite exits
# graceful-exit.num-workers: 4

# Enable additional details about the satellite connections via the HTTP healthcheck.
healthcheck.details: false

# Provide health endpoint (including suspension/audit failures) on main public port, but HTTP protocol.
healthcheck.enabled: true

# path to the certificate chain for this identity
identity.cert-path: C:\Users\Администратор\AppData\Roaming\Storj\Identity\storagenode\identity.cert

# path to the private key for this identity
identity.key-path: C:\Users\Администратор\AppData\Roaming\Storj\Identity\storagenode\identity.key

# if true, log function filename and line number
# log.caller: false

# if true, set logging to development mode
# log.development: false

# configures log encoding. can either be 'console', 'json', 'pretty', or 'gcloudlogging'.
# log.encoding: ""

# the minimum log level to log
log.level: debug

# can be stdout, stderr, or a filename
log.output: winfile:///C:\Program Files\Storj\Storage Node\\storagenode.log

# if true, log stack traces
# log.stack: false

# address(es) to send telemetry to (comma-separated)
# metrics.addr: collectora.storj.io:9000

# application name for telemetry identification. Ignored for certain applications.
# metrics.app: storagenode.exe

# application suffix. Ignored for certain applications.
# metrics.app-suffix: -release

# address(es) to send telemetry to (comma-separated)
# metrics.event-addr: eventkitd.datasci.storj.io:9002

# instance id prefix
# metrics.instance-prefix: ""

# how frequently to send up telemetry. Ignored for certain applications.
# metrics.interval: 1m0s

# maximum duration to wait before requesting data
# nodestats.max-sleep: 5m0s

# how often to sync reputation
# nodestats.reputation-sync: 4h0m0s

# how often to sync storage
# nodestats.storage-sync: 12h0m0s

# operator email address
operator.email: ***********************

# operator wallet address
operator.wallet: *********************************

# operator wallet features
operator.wallet-features: ""

# move pieces to trash upon deletion. Warning: if set to false, you risk disqualification for failed audits if a satellite database is restored from backup.
# pieces.delete-to-trash: true

# file preallocated for uploading
# pieces.write-prealloc-size: 4.0 MiB

# whether or not preflight check for database is enabled.
# preflight.database-check: true

# whether or not preflight check for local system clock is enabled on the satellite side. When disabling this feature, your storagenode may not setup correctly.
# preflight.local-time-check: true

# how many concurrent retain requests can be processed at the same time.
# retain.concurrency: 5

# allows for small differences in the satellite and storagenode clocks
# retain.max-time-skew: 72h0m0s

# allows configuration to enable, disable, or test retain requests from the satellite. Options: (disabled/enabled/debug)
# retain.status: enabled

# public address to listen on
server.address: :28967

# if true, client leaves may contain the most recent certificate revocation for the current certificate
# server.extensions.revocation: true

# if true, client leaves must contain a valid "signed certificate extension" (NB: verified against certs in the peer ca whitelist; i.e. if true, a whitelist must be provided)
# server.extensions.whitelist-signed-leaf: false

# path to the CA cert whitelist (peer identities must be signed by one these to be verified). this will override the default peer whitelist
# server.peer-ca-whitelist-path: ""

# identity version(s) the server will be allowed to talk to
# server.peer-id-versions: latest

# private address to listen on
server.private-address: 127.0.0.1:7778

# url for revocation database (e.g. bolt://some.db OR redis://127.0.0.1:6378?db=2&password=abc123)
# server.revocation-dburl: bolt://C:\Program Files\Storj\Storage Node/revocations.db

# enable support for tcp fast open experiment
server.tcp-fast-open: true

# the size of the tcp fast open queue
# server.tcp-fast-open-queue: 256

# if true, uses peer ca whitelist checking
# server.use-peer-ca-whitelist: true

# total allocated bandwidth in bytes (deprecated)
storage.allocated-bandwidth: 0 B

# total allocated disk space in bytes
storage.allocated-disk-space: 2.5 TB

# how frequently Kademlia bucket should be refreshed with node stats
# storage.k-bucket-refresh-interval: 1h0m0s

# path to store data in
storage.path: D:\

# a comma-separated list of approved satellite node urls (unused)
# storage.whitelisted-satellites: ""

# how often the space used cache is synced to persistent storage
# storage2.cache-sync-interval: 1h0m0s

# directory to store databases. if empty, uses data path
# storage2.database-dir: ""

# size of the piece delete queue
# storage2.delete-queue-size: 10000

# how many piece delete workers
# storage2.delete-workers: 1

# how many workers to use to check if satellite pieces exists
# storage2.exists-check-workers: 5

# how soon before expiration date should things be considered expired
# storage2.expiration-grace-period: 48h0m0s

# how many concurrent requests are allowed, before uploads are rejected. 0 represents unlimited.
# storage2.max-concurrent-requests: 0

# amount of memory allowed for used serials store - once surpassed, serials will be dropped at random
# storage2.max-used-serials-size: 1.00 MB

# a client upload speed should not be lower than MinUploadSpeed in bytes-per-second (E.g: 1Mb), otherwise, it will be flagged as slow-connection and potentially be closed
# storage2.min-upload-speed: 0 B

# if the portion defined by the total number of alive connection per MaxConcurrentRequest reaches this threshold, a slow upload client will no longer be monitored and flagged
# storage2.min-upload-speed-congestion-threshold: 0.8

# if MinUploadSpeed is configured, after a period of time after the client initiated the upload, the server will flag unusually slow upload client
# storage2.min-upload-speed-grace-duration: 10s

# how frequently Kademlia bucket should be refreshed with node stats
# storage2.monitor.interval: 1h0m0s

# how much bandwidth a node at minimum has to advertise (deprecated)
# storage2.monitor.minimum-bandwidth: 0 B

# how much disk space a node at minimum has to advertise
storage2.monitor.minimum-disk-space: 0.3 GB

# how frequently to verify the location and readability of the storage directory
# storage2.monitor.verify-dir-readable-interval: 1m0s

# how frequently to verify writability of storage directory
# storage2.monitor.verify-dir-writable-interval: 5m0s

# how long after OrderLimit creation date are OrderLimits no longer accepted
# storage2.order-limit-grace-period: 1h0m0s

# length of time to archive orders before deletion
# storage2.orders.archive-ttl: 168h0m0s

# duration between archive cleanups
# storage2.orders.cleanup-interval: 5m0s

# maximum duration to wait before trying to send orders
# storage2.orders.max-sleep: 30s

# path to store order limit files in
# storage2.orders.path: C:\Program Files\Storj\Storage Node/orders

# timeout for dialing satellite during sending orders
# storage2.orders.sender-dial-timeout: 1m0s

# duration between sending
# storage2.orders.sender-interval: 1h0m0s

# timeout for sending
# storage2.orders.sender-timeout: 1h0m0s

# if set to true, all pieces disk usage is recalculated on startup
# storage2.piece-scan-on-startup: true

# allows for small differences in the satellite and storagenode clocks
# storage2.retain-time-buffer: 48h0m0s

# how long to spend waiting for a stream operation before canceling
# storage2.stream-operation-timeout: 30m0s

# file path where trust lists should be cached
# storage2.trust.cache-path: C:\Program Files\Storj\Storage Node/trust-cache.json

# list of trust exclusions
# storage2.trust.exclusions: ""

# how often the trust pool should be refreshed
# storage2.trust.refresh-interval: 6h0m0s

# list of trust sources
# storage2.trust.sources: https://www.storj.io/dcs-satellites

# address for jaeger agent
# tracing.agent-addr: agent.tracing.datasci.storj.io:5775

# application name for tracing identification
# tracing.app: storagenode.exe

# application suffix
# tracing.app-suffix: -release

# buffer size for collector batch packet size
# tracing.buffer-size: 0

# whether tracing collector is enabled
# tracing.enabled: true

# how frequently to flush traces to tracing agent
# tracing.interval: 0s

# buffer size for collector queue size
# tracing.queue-size: 0

# how frequent to sample traces
# tracing.sample: 0

# Interval to check the version
# version.check-interval: 15m0s

# Request timeout for version checks
# version.request-timeout: 1m0s

# server address to check its version against
# version.server-address: https://version.storj.io

What else should I try? On which version is it better to look for a solution?

I will ping @clement; I'm out of ideas. After the filewalker finishes successfully and the node sends its usage reports, the satellites should send a wider Bloom filter to your node so that the garbage can be removed.
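For context on "wider": the bloomFilterSize values in the gc-filewalker log lines earlier hint at how selective the filter is. A minimal sketch of the standard Bloom filter false-positive estimate, assuming the reported size is in bytes and using a hypothetical piece count and hash-function count:

from math import exp

size_bytes = 1_854_803      # bloomFilterSize from one gc-filewalker line above
m = size_bytes * 8          # assume the reported size is in bytes, so m bits in the filter
n = 1_500_000               # hypothetical number of pieces stored for that satellite
k = 9                       # hypothetical number of hash functions

# Standard estimate: p = (1 - exp(-k*n/m))**k; pieces that false-positive are treated as
# "live" by that filter and therefore never moved to the trash in that GC pass.
p = (1 - exp(-k * n / m)) ** k
print(f"estimated false-positive rate: {p:.2%}")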

Ok, I’ll wait for an answer.
Thank you



I just want to make sure: do you have only 4 folders inside the blobs folder, right?


Yes, of course; I deleted everything old with a script.



What should I do if there are 6 subfolders in the blobs folder? There is also a discrepancy there.