Large amounts of trash on europe-west-1

I have not specified a debug port, but I am good at following directions. If it is easier to wait for someone else who already has access to the debug port, no worries. I am happy to help if I can, though I do not know how to determine the debug port.

1 Like

sorry, not able to help on this one. My nodes have only been up for about 98 hours. I was installing some new light switches the other day and stopped my nodes for about 30 minutes while the circuit to my office was turned off, as I was unsure whether my connected UPS would last the entire duration with the various other devices connected to it.

i'm not mad… i'm disappointed lol

2 Likes

I have run:
-p 127.0.0.1:7777:7777

uptime 172h 31m (2020-09-21T15:27:03.043Z)

What command should I enter?

I have 331 hours uptime, but… apparently I forgot to set a debug port again after the last time I let the node recreate the config.yaml. Having to do this through CLI within docker, I'm struggling a little to find what you're looking for. I only found this.

/app # wget -O - http://127.0.0.1:39736/mon/stats | grep piecedeleter-queue-full
Connecting to 127.0.0.1:39736 (127.0.0.1:39736)
writing to stdout
-                    100% |*********************************************************************************************************************|  586k  0:00:00 ETA
written to stdout
/app # wget -O - http://127.0.0.1:39736/mon/stats | grep piecedeleter
Connecting to 127.0.0.1:39736 (127.0.0.1:39736)
writing to stdout
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces count=170960.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces sum=4910217986971.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces min=3862.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces avg=28721443.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces max=5324050202.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces rmin=29596.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces ravg=2584719.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces r10=38486.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces r50=70223.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces r90=9623895.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces rmax=22486988.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces recent=53903.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces count=170960.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces sum=2697468.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces min=0.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces avg=15.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces max=291.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces rmin=0.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces ravg=1.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces r10=0.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces r50=0.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces r90=2.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces rmax=31.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces recent=0.000000
-                    100% |*********************************************************************************************************************|  586k  0:00:00 ETA
written to stdout

No mention of piecedeleter-queue-full.

Not sure if this helps. Let me know if I can try something else. (PS: little surprised curl wasn't available inside the container, but this worked. That thing is lightweight :wink: )
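
A note to self for next time, since I keep forgetting: a sketch of pinning the debug port so I don't have to hunt for the random one (untested on my node; the 7777 port number is arbitrary). In config.yaml:

debug.addr: ":7777"

and in the docker run command, before the image name:

-p 127.0.0.1:7777:7777

Then the wget above can target a known port every time.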

Nice one. Just to be sure, is that from your node with the 400 GB of trash?

1 Like

Yes, it is most definitely so (needed 20 chars :wink: )

so i assume this command parameter cannot be placed randomly in the run command sequence, since it seems that it's not the parameter but its location which makes it work…

@littleskunk or @BrightSilence so what do i add to set this up… so it's easy to find in future?

1 Like

tried adding it at the end of the run command but that didn't seem to take, so added it before the --mount
which seems to have done the job, now i can access the info by going to http://storagenode-ip:5999/mon/funcs

oh yeah and ofc used the storagenode-ip instead of 127.0.0.1

adding it sooner than the other -p parameters seems like a bad idea, so it made sense to put it just before the --mount 's start… i guess that once the --name storagenode storjlabs/storagenode:latest part is given, the parameter input ends…

seems a bit confusing that everything is a -p parameter, but that's a docker thing i suppose.

and also added debug.addr: ":5999" to config.yaml
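
so roughly, a sketch of how the full run command ends up looking (wallet, email, address and paths here are just placeholders for my real values):

docker run -d --restart unless-stopped \
    -p 28967:28967 \
    -p 127.0.0.1:14002:14002 \
    -p 5999:5999 \
    -e WALLET="0x..." \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.ddns.address:28967" \
    -e STORAGE="2TB" \
    --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
    --mount type=bind,source=/mnt/storj/storage,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest

everything after storjlabs/storagenode:latest gets handed to the storagenode binary itself rather than to docker, which i guess is why tacking the -p on at the very end didn't take.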

[Screenshot from 2020-09-29 09-44-08: graphs of the piecedeleter-queue-full, piecedeleter and trash metrics]

1 Like

So it turns out Stefan deleted ~100 TB of old data from old buckets. Somewhere between the satellite and the storage nodes we lost a lot of deletes, and garbage collection later fixed it. The issue I see here is not so much that the storage nodes are not getting paid; 400 GB unpaid for 2 weeks is just 30 cents. The bigger problem is what happens if we ever ship a bug in garbage collection. Within 7 days we want to be able to recover the data from the trash folder back to the storage node. This only works if the storage nodes trust garbage collection. As soon as storage nodes start to delete the trash folder from time to time, we also lose the option to recover. I hope we can find the issue and fix it. In the meantime please do not delete anything in the trash folder.
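
(The back-of-the-envelope math, assuming the $1.50 per TB-month storage rate: 0.4 TB × $1.50/TB-month × 0.5 months ≈ $0.30.)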

1 Like

Docker node update is coming … can I do it?

I'm curious how you determined the random port that monkit was attached to inside the container.

I didn't have to; I have this in my docker run command:
-p 127.0.0.1:7777:7777
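
With that mapping (and, I believe, a matching debug.addr: ":7777" entry in config.yaml, since without it the process still picks a random internal port), the stats can be queried straight from the host:

curl -s http://127.0.0.1:7777/mon/stats | grep piecedeleter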

That was actually meant for Bright. I will definitely be adding that port map and config file entry for future use.

1 Like

Oh I'm not worried about the payout. In my specific case even the 30 cents doesn't apply as I have plenty of free space. So whether it temporarily stores garbage or is sitting there empty really has no impact. I'm just looking to help find if there is something bad going on.

Open a shell inside the container

docker exec -it storagenode /bin/sh

Then list ports being listened to

netstat -tulpn | grep LISTEN

You'll find 28967, 7778 and the one you're looking for.
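
Or as a one-liner from the host, without opening a shell (assuming your container is named storagenode):

docker exec storagenode netstat -tulpn | grep LISTEN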

1 Like

Brilliant, thanks! I'll add my data to the pile:

/app # wget -O - http://127.0.0.1:41969/mon/stats | grep piecedeleter-queue-full
Connecting to 127.0.0.1:41969 (127.0.0.1:41969)
-                    100%
/app # wget -O - http://127.0.0.1:41969/mon/stats | grep piecedeleter
Connecting to 127.0.0.1:41969 (127.0.0.1:41969)
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces count=42607.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces sum=325940.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces min=0.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces avg=7.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces max=98.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces rmin=0.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces ravg=0.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces r10=0.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces r50=0.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces r90=0.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces rmax=1.000000
piecedeleter-queue-size,scope=storj.io/storj/storagenode/pieces recent=0.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces count=42607.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces sum=14030282805357.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces min=35291.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces avg=329295252.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces max=96749857825.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces rmin=133873.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces ravg=20437380.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces r10=139005.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces r50=149185.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces r90=715065.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces rmax=498660832.000000
piecedeleter-queue-time,scope=storj.io/storj/storagenode/pieces recent=177330.000000
1 Like

DQ nodes that delete trash… and/or rename it to something that sounds less like trash…
filesmarkedfordeletion… terrible name but you get the idea… trash sounds like it's not important…
or one can just call them marked files…

because they are not trash, they are marked for deletion… anyways i think the problem is mostly in the naming scheme…

calling them marked i kinda like actually… because then people that don't know won't understand what they are… and thus they will have to ask, and then they will learn what they are and that they shouldn't delete them…

ofc i'm sure with enough time a better or more suitable name could be found… but that was what i could come up with off the top of my head.

1 Like

@BrightSilence we noticed a second bug, this time with our zombie segment reaper. The data Stefan deleted was very old. The zombie segment reaper checks the creation time of each segment and simply didn't clean up some leftovers from Stefan's bucket. We fixed and re-executed the zombie segment reaper. The next garbage collection run should follow this weekend. I would expect that we will see another round of garbage.

The good news is we now should have finished the cleanup :slight_smile:

10 Likes