Upload Accepted Canceled Rate very high

It’s location for sure. I’m in Australia, even further from the rest of the world, and your success rate is higher than mine.

I have the same issue on one of my nodes.
Location: Europe
Disk: internal
Age: 3rd mth
Internet: 1000/1000 Fiber business internet with 100% bandwidth guarantee, 3ms to google servers
Now 95% of my uploads are canceled!
Before the update my node was fine.
I will move the data to another drive, but the cause must be something else.

My other node at home (so not far from the first one), on a crappy 300/30 connection with a typical ISP 75% bandwidth guarantee (which is probably lower now because of the virus), has great results despite bad hardware with slow internal disks, but it has been online for 11 months.

Lots of cancels here too, but not just since the 1.0.1 update; it was always the case :slight_smile:
Location: EU/Hungary
USB3 disk since the end of January without problems, net is 300/100 MBit. Lots of cancelled uploads, but a vast amount of inbound… My data comes mostly from the Saltlake satellite, so I guess location is the problem.
The data is roughly 2/3 Saltlake and 1/3 from the other satellites.

Yes, you are probably right. There are many ultra-low-latency nodes in the area, and the network is now better optimised than before at choosing those. I pinged the Saltlake satellite and got 156 ms over 16 hops. Without question there are more clients in North America. On the plus side, it is always good to see any traffic, even if it is downstream data flow :slight_smile:

True :slight_smile: I’m mining crypto with CPU & GPU 24/7 on a single desktop machine, so Storj is just an addition to that. Even if it makes 1 USD a month, that’s still more than zero…

…Which doesn’t matter anyway.
Only the speed between your node and the customer matters. The satellite doesn’t transfer customers’ data; all data transfers happen directly between customers and nodes.


Can confirm the high cancel rate too.

I’ve been running a low-ping node since September which had a 98% acceptance rate…
Since the aggressive connection closing was implemented (I think it was in v0.34?), my upload acceptance
rate has dropped to 22% at the moment.

Surprisingly, the download success rate is still at 99.9%.

It seems that location plays a big role here.

For those who want to upgrade their nodes specifically because of this: my node runs on an
HPE DAS server with 10k SAS disks, so keep in mind that an upgrade may not have the effect you’re looking for :wink:

Located in Switzerland on 1000/1000 Fiber

Similar setup. Thanks, guys, for sharing those numbers; I can give up on upgrading the drives beyond SAS 10k, as that was the only thing left. It is just latency and the node-selection algorithm between us and the clients.

Yes, going like crazy. Getting spikes of 8-14 MB/s.
Still far from filling up the whole space, but I will be more than happy to add more when the time comes :)

Good Luck ! :slight_smile:

Yeah, at first I had the idea to cache the upload traffic in memory to boost my
performance a little bit more.

I checked my logs again and noticed that the window between accepting the
upload and the “context canceled” was sometimes no more than 0.003 s… so I think it is definitely
just latency between SNO and client.

Dude, 8-14 MB/s is damn huge! I actually receive no more than 50-80 Mbps at peak.
Where are you located?
My node currently has a 22% successful upload rate but still gets 130 GB/day this week,
so I don’t worry about that :wink:

So, you will move to that location? :slight_smile:
And then to next? :smiley:

I like this idea, though. But the quarantine holds me back…


Of course not :wink:

It would just be nice to know how the data/traffic is distributed around the network
and which locations actually hold the most clients; that’s why I’m asking…

I run all my servers on static hardware at my own location, because renting a server somewhere
near client hot spots is not worth it and definitely doesn’t fit the idea of a
distributed and private cloud…

:wink:

Thanks for sharing. My cancel rate is 38%. As the log file never rotates, I think I have to split it up to get a better idea of when it happens… The log file is 1.4 GB after just 2 months and takes 9 h to parse in PowerShell :smiley:
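For what it’s worth, on Linux a big un-rotated log can be split into per-month chunks before parsing; a minimal sketch with awk, assuming each line starts with an ISO-8601 timestamp (e.g. `2020-03-01T12:34:56.789Z`) as its first field:

```shell
# Split a large storagenode log into one file per month, assuming the
# first whitespace-separated field of every line is an ISO-8601
# timestamp; substr grabs the "YYYY-MM" prefix for the file name.
awk '{ print > ("node-" substr($1, 1, 7) ".log") }' node.log
```

Each monthly chunk can then be parsed on its own, which should also show when the cancel rate changed.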

Ingress: 2TB
Egress: 0.04TB

Northern Europe with 1000/1000 and SAS drives here as well, plus a RAID card with cache in write-through mode
(500 MB/s read / 470 MB/s write).

99% of my traffic is from your-server.de.

Oof… I’m testing the Linux version of this script on an SSD-accelerated array, with files holding up to a month of logs. I honestly wasn’t aware it could take so long, as it always finishes in under a minute for me. Technically it goes through the file 18 times to get all the numbers. I wonder if there is a way to count all of these in one pass.
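One way to get everything in a single pass is to tally each pattern as the file streams through awk. A sketch below, assuming log phrases like “uploaded”, “upload canceled” and “upload failed” (a hypothetical subset; the real script counts more categories, but each extra one is just another pattern/counter pair in the same pass):

```shell
# One pass over the log instead of re-reading it once per statistic:
# each line bumps at most one counter, and the END block prints the
# acceptance rate (guarding against an empty log).
awk '
  /upload canceled/ { canceled++; next }
  /upload failed/   { failed++;   next }
  /uploaded/        { ok++ }
  END {
    total = ok + canceled + failed
    if (total) printf "accepted %d/%d (%.1f%%)\n", ok, total, 100 * ok / total
  }' node.log
```

Since awk reads line by line, memory use stays flat regardless of the log size, which may also help with the RAM blow-up mentioned below for whole-file approaches.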

I’m thinking of feeding them into Logstash. The problem might be how PowerShell stores arrays, or how the script works; I haven’t looked at it in closer detail yet, but I write PowerShell scripts daily and parse quite a lot of data.

All my logs are stored on an SSD that can do 3 GB/s; the problem seems to be memory. When you say “a month”, that does not mean much to me. Whatever memory I give my machine, it maxes out; I would need at least 25 GB of RAM just to parse the logs, if what you’re saying is true.