I think it will, actually. Bandwidth contracts are created for an increasingly larger part of a piece throughout the transfer, and I'm pretty sure this scenario means the bandwidth contract for the entire piece was signed by the uplink and will be sent on to the satellite to be settled for payout. Even transfers that only have signed bandwidth contracts for part of a piece are paid, I believe. Though I could be wrong; anyone feel free to correct me if I am.
@anon27637763: you only get DEBUG lines if you set the log level to debug in the config.yaml. It’s on INFO by default.
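For reference, the relevant setting looks something like this (a sketch; check your own config.yaml for the exact key name and spelling):

```yaml
# config.yaml: the log level the node emits.
# DEBUG lines only show up when this is set to debug (the default is info).
log.level: debug
```

Remember to restart the node after changing it.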
Anyone else noticing bad performance on the upload stats?
Mine have dropped to around 30%; I used to be at 95% or more. Is it my node performing badly on the newer releases, or is it because of the vetting on the new satellite, or something else?
Success rate is good for statistics, but we all need to understand that not all pieces are equal in size. You can have fewer successful pieces that are nevertheless bigger and heavier than a larger number of smaller pieces. The logs don't show how big these pieces are, so apparently there's no way to measure it. Delete operations running at the same time also play a role, since they take up HDD speed as well.
Here’s an update with the latest script as well as the latest node version. Important context for my numbers: I run two nodes at once, because a RAM/swap issue got me disqualified on two of the 5 satellites, which is why I opened a new node. So some of the traffic is split between the two nodes, though not on the new satellite or on the satellites that are paused on the other one.
Stats are since the update:
Hardware : Synology DS1019+ (Intel Celeron J3455, 1.5 GHz, 8 GB RAM) with 20.9 TB total in SHR RAID
Bandwidth : Home ADSL with 40 Mbit/s down and 16 Mbit/s up
Location : Amsterdam
Node Version : v0.34.6
Uptime : 108h 30m
max-concurrent-requests : DEFAULT
successrate.sh :
Hardware : Supermicro server, 2x Intel Xeon X5687, 100 GB RAM; 6x 4 TB hard drives in raidz2 with two SSDs for L2ARC and ZIL. The node runs inside a VM with 32 GB RAM and is not the only VM on the host.
Bandwidth : Home GPON with 1 Gbit/s down and 600 Mbit/s up; backup connection is DOCSIS with 100 Mbit/s down and 12 Mbit/s up
Location : Lithuania
Version : 0.34.6
Uptime : 120h32m42s
If you use a non-default container name, you need to pass it as a parameter to the script. If your logs are written to a file, you need to pass the path to that file as a parameter instead; file logging is the default on Windows GUI installs.
A new update is also out for Docker, v0.35.3… I’ll wait 24 hours and post an update here to see if anything changes, and then do a longer-term update after a week.
Hardware : Raspberry Pi 4 (4 GB RAM), 1x 10 TB WD Elements Desktop HDD connected via USB 3.0
Bandwidth : 200 Mbit/s down and 20 Mbit/s up
Location : USA
Version : 0.35.3
Uptime : 14h21m
successrate.sh :
Just a comment for newbies like me, because I don’t see this in the thread and it’s maybe not obvious to everyone at first: UPLOAD means ingress traffic, and DOWNLOAD means egress traffic.
I noticed a significant increase in success rate for egress after my 1 TB node went full: it was around 35%, now it is above 75%. It’s maybe related to the USB 3.0 connection, as it was probably too much to handle both uploads and downloads at the same time, but it’s still good to know. So don’t panic if your stats are low at first.
I’ve noticed a nice uptick in upload success rate since the latest version. See below:
Hardware : Raspberry Pi 4 (4 GB RAM), 1x 10 TB WD Elements Desktop HDD connected via USB 3.0
Bandwidth : 200 Mbit/s down and 20 Mbit/s up
Location : USA
Version : 1.1.1
Uptime : 9h24m
successrate.sh :
Latest update is running, and what I see is that my download success rate has been stable at 99% over the last couple of updates. But my upload success rate is constantly going down, now at 53.72%… it was at 60%+ before the v1 update and even higher before that.
I think this might be due to more SNOs in my area uploading faster now?
Gotcha, I didn’t look at that at all; very good point, feels like that’s the solution. Thanks for sharing.
That also means: I receive everything onto the node just fine, but the traffic that goes back to users (this is your assumption, I guess) also goes back to the US area, which is why my uploads are just as slow.