Then I would suggest leaving it as is until the new version is rolled out; it should contain the fix.
Just to be clear: the garbage that isn't cleared also includes the garbage from the garbage-collection disaster caused by the bloom filter size, right?
If so, this garbage gets moved to trash but won't be deleted for several more weeks, until the fix is available and working?
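In case it helps to quantify that, here is one way to compare how much data sits in blobs versus trash (a minimal sketch; the path is a placeholder for your node's storage.path):
```
# Compare space used by live pieces vs. pieces waiting in trash
# (replace /path/to/storage with your node's storage.path).
du -sh /path/to/storage/blobs /path/to/storage/trash
```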
Alexey
May 5, 2024, 10:10am
43
It depends. On my nodes it's cleared, but it seems that on your setup it is not.
Again, it depends. On my nodes it's moved to the trash, and the previous trash is cleaned successfully, including the database updates.
Did you verify?
I just saw a new ticket that needs to be verified. If it's true, we might have an additional issue.
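For anyone who wants to verify this on their own node, a minimal sketch (assuming the usual per-satellite, per-day layout under trash and the 7-day trash retention):
```
# List per-day trash folders for every satellite; after a successful cleanup,
# date folders older than the retention window should be gone.
find /path/to/storage/trash -mindepth 2 -maxdepth 2 -type d -name '20*' | sort
```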
opened 07:23AM - 04 May 24 UTC
Bug
Retain logs state that pieces are moved to trash, but it looks like this does not happen.
Randomly picked piece ID that should have been moved to trash:
```
root@server030:/disk102/storj/logs/storagenode# grep "QJWHQBYGK3KOJPSY6RLCTFYETANCDTNMUGFUXTGTXYHLFDRAECEQ" server069-v1.102.3-30069.log
2024-04-27T19:03:57+02:00 DEBUG retain About to move piece to trash {"Process": "storagenode", "cachePath": "/root/storagenode-data/configs/server069/retain", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "QJWHQBYGK3KOJPSY6RLCTFYETANCDTNMUGFUXTGTXYHLFDRAECEQ", "Status": "debug"}
2024-05-04T08:13:43+02:00 DEBUG retain About to move piece to trash {"Process": "storagenode", "cachePath": "/root/storagenode-data/configs/server069/retain", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "QJWHQBYGK3KOJPSY6RLCTFYETANCDTNMUGFUXTGTXYHLFDRAECEQ", "Status": "debug"}
```
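For reference, this is roughly how such piece IDs can be picked out of the retain log for spot-checking (my sketch, using the log file from this report; `shuf` assumes GNU coreutils):
```
# Pull all piece IDs that retain claimed to move, deduplicate, pick one at random.
grep "About to move piece to trash" server069-v1.102.3-30069.log \
  | sed 's/.*"Piece ID": "\([A-Z0-9]*\)".*/\1/' | sort -u | shuf -n 1
```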
Storage path:
```
root@server030:~# grep "storage.path" /root/storagenode-data/configs/server069/config.yaml
storage.path: "/disk267/storj/data/server069-28967"
```
The piece remains in the blobs folder:
```
root@server030:~# stat /disk267/storj/data/server069-28967/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/qj/whqbygk3kojpsy6rlctfyetancdtnmugfuxtgtxyhlfdraeceq.sj1
  File: /disk267/storj/data/server069-28967/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/qj/whqbygk3kojpsy6rlctfyetancdtnmugfuxtgtxyhlfdraeceq.sj1
  Size: 37376      Blocks: 80         IO Block: 4096   regular file
Device: 4581h/17793d   Inode: 109437710   Links: 1
Access: (0600/-rw-------)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2023-08-04 07:29:37.321150106 +0200
Modify: 2023-08-04 07:29:37.325150089 +0200
Change: 2023-08-04 07:29:37.349149990 +0200
Birth: 2023-08-04 07:29:37.321150106 +0200
```
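The blob path above follows directly from the piece ID: lowercase the base32 ID, take the first two characters as the subdirectory, and append `.sj1` (a sketch of the mapping as I understand the on-disk layout):
```
# Derive the on-disk blob path from a piece ID (bash; <satellite-dir> stands
# for the satellite directory seen in the stat output above).
P=$(echo "QJWHQBYGK3KOJPSY6RLCTFYETANCDTNMUGFUXTGTXYHLFDRAECEQ" | tr 'A-Z' 'a-z')
echo "blobs/<satellite-dir>/${P:0:2}/${P:2}.sj1"
# -> blobs/<satellite-dir>/qj/whqbygk3kojpsy6rlctfyetancdtnmugfuxtgtxyhlfdraeceq.sj1
```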
The trash folder remains empty; not even the per-day trash folders are created:
```
root@server030:~# ls -aR /disk267/storj/data/server069-28967/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/
/disk267/storj/data/server069-28967/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/:
. ..
```
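For contrast, on a node where retain actually moves pieces, that satellite's trash directory should contain per-day folders, roughly like this (illustrative output, not from the reported node):
```
# Expected shape of a populated trash directory (dates are examples).
ls /path/to/storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/
2024-04-27  2024-05-04
```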
Plenty of space on /disk267:
```
root@server030:~# df -h /disk267
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdck1       19T   15T  3,8T  80% /disk267
```
Node ID:
```
root@server030:/disk102/storj/logs/storagenode# head -n 30 server069-v1.102.3-30069.log | grep "Node "
2024-04-23T22:22:51+02:00 INFO Node 1wc1ePRmePEoas69zs1jJkRxqfpzdJj9QZHgV1af9tLbdiBtQ1 started {"Process": "storagenode"}
```
Storage node version:
```
root@server030:/disk102/storj/logs/storagenode# head -n 30 server069-v1.102.3-30069.log | grep "Version"
2024-04-23T22:22:50+02:00 DEBUG Version info {"Process": "storagenode", "Version": "1.102.3", "Commit Hash": "9cacfd6fb7b837ef9bbfaa89f4fb38516b2a3f46", "Build Timestamp": "2024-04-23 22:03:27 +0200 CEST", "Release Build": true}
2024-04-23T22:22:50+02:00 DEBUG version Allowed minimum version from control server. {"Process": "storagenode", "Minimum Version": "1.99.3"}
2024-04-23T22:22:50+02:00 DEBUG version Running on allowed version. {"Process": "storagenode", "Version": "1.102.3"}
2024-04-23T22:22:51+02:00 INFO db.migration Database Version {"Process": "storagenode", "version": 56}
```
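As a cross-check, the running version can also be read from the SNO dashboard API, assuming it is enabled on the console.address from the config below (an assumption; the address and port are taken from this report):
```
# Query the dashboard API on the configured console address.
curl -s http://10.0.0.30:32069/api/sno | grep -o '"version":"[^"]*"'
```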
config.yaml:
```
root@server030:~# cat /root/storagenode-data/configs/server069/config.yaml
identity.cert-path: "/disk002/storj-v3/storj-ident/server069/storagenode/identity.cert"
identity.key-path: "/disk002/storj-v3/storj-ident/server069/storagenode/identity.key"
contact.external-address: "server030.id30069.storj.dk:30069"
operator.email: "*@*.*"
operator.wallet: "0x165A1Da6F4d3c0555a9E1f94288A7e17BD14f7d1"
operator.wallet-features: ["zksync-era", "zksync"]
log.level: debug
server.address: ":30069"
server.private-address: "10.0.0.30:31069"
server.revocation-dburl: "bolt:///disk103/storj/databases/server069/revocations.db"
storage.allocated-disk-space: 19894392772096 B
storage.path: "/disk267/storj/data/server069-28967"
storage2.database-dir: "/disk103/storj/databases/server069"
storage2.stream-operation-timeout: 30m0s
filestore.write-buffer-size: 3072.0 KiB
console.address: "10.0.0.30:32069"
storage2.orders.path: "/disk102/storj/orders/server069"
debug.addr: "10.0.0.30:33069"
bandwidth.interval: 24h0m0s
collector.interval: 1h0m0s
metrics.interval: 5m0s
metrics.instance-prefix: Th3Van-Server069-
nodestats.max-sleep: 1m0s
nodestats.reputation-sync: 1h0m0s
nodestats.storage-sync: 1h0m0s
retain.concurrency: 30
retain.status: debug
storage.k-bucket-refresh-interval: 24h0m0s
storage2.cache-sync-interval: 5m0s
storage2.delete-queue-size: 50000
storage2.delete-workers: 30
storage2.exists-check-workers: 30
storage2.expiration-grace-period: 48h0m0s
storage2.max-used-serials-size: 50.00 MB
storage2.monitor.interval: 0h30m0s
storage2.monitor.verify-dir-readable-interval: 60s
storage2.monitor.verify-dir-writable-interval: 60s
storage2.order-limit-grace-period: 1h0m0s
storage2.orders.archive-ttl: 24h0m0s
storage2.orders.cleanup-interval: 1h0m0s
storage2.orders.max-sleep: 30s
storage2.orders.sender-interval: 1h0m0s
storage2.orders.sender-dial-timeout: 1m0s
storage2.orders.sender-timeout: 5m0s
db.max_open_conns: 60
db.max_idle_conns: 10
storage2.trust.refresh-interval: 24h0m0s
storage2.monitor.minimum-disk-space: 10 MB
pieces.enable-lazy-filewalker: false
storage2.piece-scan-on-startup: false
```
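One line here may be the simple thing being missed: retain.status is set to debug, and the retain log entries above also show "Status": "debug". My understanding of the retain service (an assumption worth checking against the storagenode source) is that debug is a dry-run mode that only logs "About to move piece to trash" without moving anything, while enabled performs the move:
```
# Confirm the retain status in effect (paths from this report).
grep "retain.status" /root/storagenode-data/configs/server069/config.yaml
# retain.status: debug
#
# If debug is indeed a dry run (my assumption), setting it to "enabled" and
# restarting the node should make retain move garbage pieces to trash for real:
# retain.status: enabled
```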
Am I missing something simple here? 🤔
Th3Van.dk
Alexey
May 5, 2024, 11:42am
45
Yes, I checked my nodes. One of the nodes has the wrong path though, for the abforhuxbzyd35blusvrifvdwmfx4hmocsva4vmpp3rgqaaaaaaa satellite:
```
ls X:\storagenode2\storage\trash\abforhuxbzyd35blusvrifvdwmfx4hmocsva4vmpp3rgqaaaaaaa\

    Directory: X:\storagenode2\storage\trash\abforhuxbzyd35blusvrifvdwmfx4hmocsva4vmpp3rgqaaaaaaa

Mode           LastWriteTime    Length Name
----           -------------    ------ ----
d-----   12/22/2019   8:29 PM          2024-04-20
```
so I didn’t worry about it.