If I understand littleskunk correctly, it’s normal for cancelled uploads to stay on the hard drive for a while; garbage collection sorts them out later.
I suspect this is where much of my node’s trash is coming from: cancelled uploads that were eventually deleted by the garbage collector. I recently increased my storage by about 33GB (on my full node), and a few days later, after it was full again, the amount of trash on my node went from a few hundred MB to about 9GB. I’m guessing that means about 9GB of the 33GB received was cancelled, which would mean my node kept 72% of it (a 72% success rate?). But that’s just guessing/assuming/hoping on my part. The successrate.sh script says my success rate is 10%. Ouch.
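For anyone who wants to sanity-check those numbers: what successrate.sh reports for uploads is essentially just a count of log lines. Here is a rough Python sketch of the same idea, assuming the node log contains the “uploaded”, “upload canceled” and “upload failed” phrases the script greps for (the log path is only an example, adjust it for your setup):

```python
# Rough equivalent of what successrate.sh computes for uploads.
# Assumption: the storagenode log contains the phrases "uploaded",
# "upload canceled" and "upload failed" on piece upload events.
# LOG_PATH is a placeholder; adjust it for your node.
LOG_PATH = "/var/log/storagenode.log"

uploaded = canceled = failed = 0
with open(LOG_PATH, errors="replace") as log:
    for line in log:
        if "upload canceled" in line:
            canceled += 1
        elif "upload failed" in line:
            failed += 1
        elif "uploaded" in line:
            uploaded += 1

total = uploaded + canceled + failed
if total:
    print(f"uploaded: {uploaded}, canceled: {canceled}, failed: {failed}")
    print(f"reported success rate: {uploaded / total:.1%}")
```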
@Mark
so high numbers of deleted files on a regular basis might mean lots of lost races / cancelled uploads… ofc does 1/8th of a piece even count as a deletion… i mean ARRRR we not pirates…
that’s interesting…
@anon27637763
i suppose some people might not be affected for various reasons… like if you are running a different OS, or depending on what type of cpu / system you are running… i bet one of those old SPARC cpus would be great at stuff like this.
personally i’m not too fond of my cpu only running at 2.13ghz, because if your clock is slow there is a longer response time on most things…
my successrates are lovely tho… 53% upload… lol
partly because i am trying to migrate my node to a 512n pool rather than a mix of 4k, 512n, sas and sata drives… not sure it will help much… but maybe it will make everything run better…
sure is taking a long time to copy my node this time… only 8.5tb, and so far 5.3tb done in maybe 3 or 4 days… but there was 1 full day that was mostly a scrub, which stalled the copy
done like 2.1 mil files in the last two days… i just think it was faster last time… ofc that was a copy within the same pool… this is to another pool on other drives… so it should really be faster…
Right, I should jot down a few of those piece IDs and check in a few days whether they stick around. I don’t expect a large number of them will be deleted though. My trash folder isn’t that big; I currently have 1.49GB of trash on a 12TB node. So that suggests only a fraction gets trashed. But I’ll check again later.
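In case anyone wants to do the same check, this is roughly what I have in mind, as a quick sketch. It assumes pieces end up as individual files whose names contain (part of) the encoded piece ID somewhere under the blobs and trash folders; the storage path and the ID fragment below are just placeholders:

```python
# Quick check: is a noted piece still under blobs, or has it moved to trash?
# Assumptions: pieces are stored as individual files whose names contain
# (part of) the encoded piece ID, under the blobs/ and trash/ folders of the
# node's storage directory. Paths and the ID fragment are placeholders.
import os

STORAGE_DIR = "/mnt/storj/storage"        # hypothetical node storage path
PIECE_FRAGMENT = "REPLACE_WITH_PIECE_ID"  # (part of) a piece ID you jotted down

def find(root, fragment):
    """Return paths of files under root whose name contains the fragment."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        hits.extend(os.path.join(dirpath, f) for f in filenames if fragment in f)
    return hits

print("in blobs:", find(os.path.join(STORAGE_DIR, "blobs"), PIECE_FRAGMENT))
print("in trash:", find(os.path.join(STORAGE_DIR, "trash"), PIECE_FRAGMENT))
```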
might be interesting to be able to track that over, say, a month or forever, and be able to compare numbers with, say, nodes which get something like 13%, which is the lowest i’ve seen somebody have.
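something like this could do the tracking… a minimal sketch that just walks the trash folder once and appends a timestamped size to a csv, run daily from cron or whatever… the trash path here is only an example, adjust it for your node…

```python
# Minimal sketch: append a timestamped size of the trash folder to a CSV,
# so the numbers can be compared over a month (or forever).
# Assumption: TRASH_DIR points at the node's trash folder; paths are examples.
import csv
import os
import time

TRASH_DIR = "/mnt/storj/storage/trash"   # hypothetical path
OUT_CSV = "trash_history.csv"

def dir_size_bytes(root):
    """Total size of all files under root, in bytes."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file may be removed by garbage collection mid-walk
    return total

with open(OUT_CSV, "a", newline="") as f:
    csv.writer(f).writerow([time.strftime("%Y-%m-%d %H:%M"),
                            dir_size_bytes(TRASH_DIR)])
```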
So I concluded that my low percentage was due to the fact that I was crypto-mining on the CPU of the same computer (although using only 3 cores out of 4, i.e. 3 threads out of 8).
After stopping the CPU mining, I was able to raise my upload success rate to over 40% within 3 days, and the repair success rate to over 40% as well.
To give a better idea of the node:
- CPU: Intel Core i7-4700HQ, 4×2.4 GHz (4 cores / 8 threads), boost up to 3.4 GHz
- RAM: 2×8 GB DDR3 1866 MHz
- System drive: Kingston SSD 240 GB SATA3
- Node drive: Seagate Constellation ES.2 3 TB HDD
- OS: Ubuntu 20.04 LTS
- Network download/upload: fiber-optic 1 Gbps / 600 Mbps
Also, since yesterday the traffic has started coming in even heavier than it did in May.
I haven’t checked specific pieces, but I kept an eye on the trash folder. While it grew a little, it’s never been bigger than 2.5GB. So clearly the pieces from the vast majority of canceled uploads are actually sticking around. So while this script still says I have less than a 30% success rate, in reality it seems to be more than 90%.
Trash isn’t used for normal delete operations. It’s used for data that is on your node but shouldn’t be there. This can happen because your node was offline during delete operations, or because a delete operation previously timed out. It can also happen because cancelled transfers left pieces behind that really shouldn’t be on your node. I can’t tell you where your trash came from. But I can say that on a node that didn’t use to have a low success rate, like mine, these cancelled transfers don’t appear to generate large amounts of trash.
it’s normal, but a bit on the low side… however it’s basically irrelevant.
the cancelled uploads are logged wrong and don’t represent the true upload success rate… it’s a problem everybody has, and the Storj team is aware of it, so hopefully it will be addressed sometime soon…