My node has become very slow

Hello, in the last 4 days my node has become very slow, only 0.2 GB in two days. I have a high-speed connection. Can anyone help me? Thanks in advance.

The space and bandwidth are used by real customers; there is no way to “fix” the low usage.
Just make sure that your node is online.

I can suggest inviting your friends to use the Tardigrade network to store their data.


Hello Alex, thank you for your answer.
My node used to be very fast, but I installed a firewall because of some brute-force attacks, which is why it was stopped for some time. When I added the rules it became very slow. It is online; can I give you the node ID so you can check if it's OK?
I allowed 500 TB of bandwidth and 12 TB for storage.

if you are on a secure LAN then you can just disable the firewall for a day or two and see if that is the problem.

unless you have some weird firewall settings, you only need to forward your public IP's port 28967 to the same port at the host NIC IP of your storagenode.
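a quick way to verify the forward actually works is to test the port from outside your LAN (just a sketch; substitute your real public IP, and nc has to be installed wherever you run it from):

nc -zv your.public.ip.here 28967

an online open-port checker does the same thing from a browser if you don't have a second connection to test from.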

else you can try taking a look at your logs.
for live logs on screen:
docker logs storagenode --tail 20 --follow

to export your logs to a file:
docker logs storagenode >& /zPool/storj_etc/storj_logs/2020-04-05_storagenode.log

ofc you will need to set your own path for the log to be saved at… obviously :smiley:
mine is /zPool/storj_etc/storj_logs/

and ofc this is linux, but if you are using docker on windows it should be pretty much the same, just using C:\storj_etc\storj_logs\2020-04-05_storagenode.log or such… not sure if you need to make the folder first or if the command does that too.

the logs should say stuff like:
upload started
uploaded
download started
downloaded
deleted
and then a certain % will be cancelled if the response is too slow and other nodes fill the need for the requested action.
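if you want a rough count of how those are going for you, grepping an exported log works fine (using my example path from above; substitute your own):

grep -c "upload started" /zPool/storj_etc/storj_logs/2020-04-05_storagenode.log
grep -c "uploaded" /zPool/storj_etc/storj_logs/2020-04-05_storagenode.log
grep -c "upload canceled" /zPool/storj_etc/storj_logs/2020-04-05_storagenode.log

canceled divided by started gives a rough idea of how many races your node is losing.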

anyways a bit long winded, hope it helps a bit…

Hello SGC, thank you for your answer.
I fixed all the issues in the LAN. I'm talking about iptables; I added all the needed rules, but it is still very slow, only 0.1 GB in one day, where before it was 100 GB per day. Here is the log, thank you.

2020-04-05T14:24:54.796Z INFO piecestore upload started {“Piece ID”: “ZFJWQNJZWAYROGGBNRDT7ND4L6TA4KDWUUBVU5BLHD3ALCMOLIAA”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “PUT”, “Available Bandwidth”: 499768392508928, “Available Space”: 11629493589888}
2020-04-05T14:24:54.833Z INFO piecestore upload canceled {“Piece ID”: “ZFJWQNJZWAYROGGBNRDT7ND4L6TA4KDWUUBVU5BLHD3ALCMOLIAA”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “PUT”, “error”: “context canceled”, “errorVerbose”: “context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:461\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:214\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:986\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}
2020-04-05T14:25:02.962Z INFO piecestore download started {“Piece ID”: “A3MNCEQKIUJ74OM2YTQWBGP4ADAHVHD7TGCGTVCEO53W5FZI5CKA”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “GET”}
2020-04-05T14:25:03.182Z INFO piecestore downloaded {“Piece ID”: “A3MNCEQKIUJ74OM2YTQWBGP4ADAHVHD7TGCGTVCEO53W5FZI5CKA”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “GET”}
2020-04-05T14:25:13.861Z INFO piecestore upload started {“Piece ID”: “NWYUCN76NBCDJASI66CLV2NLV42AIOSYZFRY7ZXP6THCEW7W4CZQ”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT”, “Available Bandwidth”: 499768392438784, “Available Space”: 11629493588864}
2020-04-05T14:25:13.935Z INFO piecestore uploaded {“Piece ID”: “NWYUCN76NBCDJASI66CLV2NLV42AIOSYZFRY7ZXP6THCEW7W4CZQ”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT”}
2020-04-05T14:25:27.340Z INFO piecestore upload started {“Piece ID”: “KKEZ3IT47HSWRBCWPNLBY4BK6FW3INMASR436VG55UVYWAJN6BZA”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “Action”: “PUT_REPAIR”, “Available Bandwidth”: 499768392438272, “Available Space”: 11629493587840}
2020-04-05T14:25:27.683Z INFO piecestore upload canceled {“Piece ID”: “KKEZ3IT47HSWRBCWPNLBY4BK6FW3INMASR436VG55UVYWAJN6BZA”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “Action”: “PUT_REPAIR”, “error”: “context canceled”, “errorVerbose”: “context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:461\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:214\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:986\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}
2020-04-05T14:26:07.770Z INFO piecestore upload started {“Piece ID”: “XEFO6SO2JFSX4LWVD7HYFJPZGOQOM6WIKMK7SNZQK2JKYHZBOGWQ”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “Action”: “PUT”, “Available Bandwidth”: 499768392431872, “Available Space”: 11629493580928}
2020-04-05T14:26:07.857Z INFO piecestore upload canceled {“Piece ID”: “XEFO6SO2JFSX4LWVD7HYFJPZGOQOM6WIKMK7SNZQK2JKYHZBOGWQ”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “Action”: “PUT”, “error”: “context canceled”, “errorVerbose”: “context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:461\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:214\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:986\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}
2020-04-05T14:26:47.040Z INFO piecestore upload started {“Piece ID”: “NJSFUUEJ5AZCXMECYPFTUJK3RU2NNM4IUXMCRKVGGMEN5H25UEYQ”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT_REPAIR”, “Available Bandwidth”: 499768392425216, “Available Space”: 11629493573760}
2020-04-05T14:26:47.523Z INFO piecestore upload canceled {“Piece ID”: “NJSFUUEJ5AZCXMECYPFTUJK3RU2NNM4IUXMCRKVGGMEN5H25UEYQ”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT_REPAIR”, “error”: “context canceled”, “errorVerbose”: “context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:461\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:214\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:986\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}
2020-04-05T14:27:30.124Z INFO piecestore download started {“Piece ID”: “A3MNCEQKIUJ74OM2YTQWBGP4ADAHVHD7TGCGTVCEO53W5FZI5CKA”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “GET”}
2020-04-05T14:27:30.368Z INFO piecestore downloaded {“Piece ID”: “A3MNCEQKIUJ74OM2YTQWBGP4ADAHVHD7TGCGTVCEO53W5FZI5CKA”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “GET”}
2020-04-05T14:28:32.663Z INFO piecestore upload started {“Piece ID”: “GML2FYFAXLSGBLTAOQDH6BDG5FV2RMUOWVTNRCR4WHAHJHU4GSFQ”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “PUT”, “Available Bandwidth”: 499768392221184, “Available Space”: 11629493438848}
2020-04-05T14:28:32.697Z INFO piecestore upload canceled {“Piece ID”: “GML2FYFAXLSGBLTAOQDH6BDG5FV2RMUOWVTNRCR4WHAHJHU4GSFQ”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “PUT”, “error”: “context canceled”, “errorVerbose”: “context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:461\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:214\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:986\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}
2020-04-05T14:28:36.118Z INFO piecestore download started {“Piece ID”: “A3MNCEQKIUJ74OM2YTQWBGP4ADAHVHD7TGCGTVCEO53W5FZI5CKA”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “GET”}
2020-04-05T14:28:36.350Z INFO piecestore downloaded {“Piece ID”: “A3MNCEQKIUJ74OM2YTQWBGP4ADAHVHD7TGCGTVCEO53W5FZI5CKA”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “GET”}
2020-04-05T14:28:57.388Z INFO piecestore upload started {“Piece ID”: “EDF323XUDEKCUVZTXPI2VYXKAVY76LQXD7CYWNRFMLOYSZRVK7RA”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “PUT”, “Available Bandwidth”: 499768392150528, “Available Space”: 11629493437312}
2020-04-05T14:28:59.491Z INFO piecestore upload canceled {“Piece ID”: “EDF323XUDEKCUVZTXPI2VYXKAVY76LQXD7CYWNRFMLOYSZRVK7RA”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “PUT”, “error”: “context canceled”, “errorVerbose”: “context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:461\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:214\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:986\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}

You can take a look:

Thank you Alex, I saw the section, but as you say there's no problem. This is the output of my audit script:
========== AUDIT ==============
Critically failed: 0
Critical Fail Rate: 0,000%
Recoverable failed: 0
Recoverable Fail Rate: 0,000%
Successful: 270
Success Rate: 100,000%
========== DOWNLOAD ===========
./audit: line 62: printf: 3.18493: invalid number
./audit: line 69: printf: 3.91465: invalid number
./audit: line 76: printf: 92.9004: invalid number
Failed: 1486
Fail Rate: 0,000%
Canceled: 1209
Cancel Rate: 0,000%
Successful: 35265
Success Rate: 0,000%
========== UPLOAD =============
./audit: line 106: printf: 0.0414603: invalid number
./audit: line 113: printf: 72.7054: invalid number
./audit: line 120: printf: 27.2531: invalid number
Rejected: 0
Acceptance Rate: 100,000%
---------- accepted -----------
Failed: 106
Fail Rate: 0,000%
Canceled: 185883
Cancel Rate: 0,000%
Successful: 69677
Success Rate: 0,000%
========== REPAIR DOWNLOAD ====
Failed: 0
Fail Rate: 0.000%
Canceled: 0
Cancel Rate: 0.000%
Successful: 0
Success Rate: 0.000%
========== REPAIR UPLOAD ======
./audit: line 186: printf: 61.0503: invalid number
./audit: line 193: printf: 38.9497: invalid number
Failed: 0
Fail Rate: 0,000%
Canceled: 558
Cancel Rate: 0,000%
Successful: 356
Success Rate: 0,000%
========== DELETE =============
Failed: 0
Fail Rate: 0,000%
Successful: 9860
Success Rate: 100,000%

Well, the node works, but your ingress is doing horribly.
most uploads get cancelled, and this can be due to a multitude of issues.

basically it means that when something is being uploaded, your node is slow to get it to disk; so slow that all the other nodes receiving the same upload finish before you, and thus your node ends up with a cancelled upload.

this can be due to low internet bandwidth, but let's assume that's not the issue.
otherwise it can be because of all kinds of load on your server, or because you are using solutions like network-attached storage.
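one quick way to check whether the disk rather than the network is what's losing you the race is to watch I/O while the node is busy (assuming a Linux host with the sysstat package installed; that's an assumption on my part):

iostat -x 5

keep an eye on the await and %util columns for the drive holding the node data; if await climbs or %util sits near 100% whenever ingress comes in, the storage side is the bottleneck.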

let us get the details of how your system and LAN are set up…

stuff like server load can have a big effect. even tho my server is pretty good, i can see my node's cancel rates getting affected; say i dedicate 8 threads of 16 to folding@home… which led to a significant increase in cancelled uploads being logged.

even if i just open a video file from my server over the local network, i can see it has an effect on my avg uploads long term…

so it really becomes an issue of performance, since you are in a race with everybody else.
storj does say it should balance out, so that slower / worse-performing nodes also get data, but i'm not sure how that actually works in reality.

kinda new to being an SNO myself

Try this workaround: GitHub - ReneSmeekes/storj_success_rate
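The “nombre non valable” printf errors in your first output look like a locale issue (the script prints dot decimals while a French locale expects commas), so as a guess, forcing the C locale may also clean up those lines:

LC_ALL=C ./audit

That is only a hunch based on the error text; the updated script linked above is the proper fix.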

Thanks for the answer.
My LAN is:
STORAGENODE (Linux, 2 CPU (3.17), 8 GB RAM, 12 TB) ===> ROUTER (port forwarding) ==> internet
My connection speed:
Testing download speed…
Download: 94.26 Mbit/s
Testing upload speed…
Upload: 8.92 Mbit/s

========== AUDIT ==============
Critically failed: 0
Critical Fail Rate: 0.000%
Recoverable failed: 0
Recoverable Fail Rate: 0.000%
Successful: 293
Success Rate: 100.000%
========== DOWNLOAD ===========
Failed: 1486
Fail Rate: 3.893%
Canceled: 1209
Cancel Rate: 3.167%
Successful: 35476
Success Rate: 92.940%
========== UPLOAD =============
Rejected: 0
Acceptance Rate: 100.000%
---------- accepted -----------
Failed: 107
Fail Rate: 0.042%
Canceled: 186596
Cancel Rate: 72.706%
Successful: 69940
Success Rate: 27.252%
========== REPAIR DOWNLOAD ====
Failed: 0
Fail Rate: 0.000%
Canceled: 0
Cancel Rate: 0.000%
Successful: 0
Success Rate: 0.000%
========== REPAIR UPLOAD ======
Failed: 0
Fail Rate: 0.000%
Canceled: 588
Cancel Rate: 61.442%
Successful: 369
Success Rate: 38.558%
========== DELETE =============
Failed: 0
Fail Rate: 0.000%
Successful: 9963
Success Rate: 100.000%

i'll assume you are having low server load,
and that your storage node HDDs are directly connected to the host OS that runs docker for the node.
you can use stuff like iSCSI, but really you want to try to minimize latency.

so if we assume it's not that, and it isn't some sort of antivirus (or multiple antiviruses, perhaps one newly installed along with the firewall), or some such thing…

i would say your internet is 100 mbit down and 10 mbit up… that's not totally unreasonable… however it does seem a bit unbalanced. remember you want to be able to upload and download data to and from the clients of the storage node in a decent time. however in this case… “upload” in your logs means client uploads, which are downloads for you… which is the larger of the two, so most likely not the relevant limit…

however 10 mbit up to support 100 mbit down is getting close to the max… if you are downloading at 100 mbit you will be uploading 5-10 mbit in response (ACKs and other return traffic)… which would mean that if you are also uploading to a client at 5 mbit, your download ability might be restricted to around 50 mbit, because you are lacking the upload bandwidth to support it…

but i digress… not the real issue, but if you can and want to run a storagenode long term, getting something a bit more balanced might be wise…

What can you do to solve your current issue…
well, you are behind a router, so the firewall is kinda a secondary thing, and since you could set up the port forward in the router, it's your network and you can most likely run without a firewall for a day or two.

so i would turn off your new firewall; there are good odds that the firewall, or something related to it, is to blame, since the node only acted up after you installed it.

restart the node after you turn off the firewall… and after 12-24 or 48 hours you can tell from the storagenode web dashboard if there has been a change… ofc with other network monitoring tools or similar systems you might be able to see major changes more quickly…

but i wouldn't rely on anything less than days of data to make a viable evaluation of the effect.
if turning off the firewall does indeed help and the node springs back to life, you will most likely have found the cause of the problem… you will know for sure when you turn the firewall back on and see if the problem returns xD

anyways, that's my recommendation for the next step…
turn the firewall off, restart the storagenode, monitor for 24-48 hours…
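since you mentioned iptables, a minimal version of that test could look like this (a sketch only, assuming iptables is the only firewall in play and a standard docker setup; adjust to how you actually configured things rather than copying blindly):

sudo iptables -I INPUT 1 -p tcp --dport 28967 -j ACCEPT
docker restart storagenode

that just puts an explicit accept for the node port ahead of the stricter brute-force rules instead of disabling the whole firewall; if ingress recovers over the next day or two, one of those rules was the culprit.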

This is just a simple issue: your internet is at its limits. Your upload is only 9 Mbit/s while your download is 95 Mbit/s, and if your internet is not fiber but DSL or cable, it is known that saturating the upload will slow down your download speeds drastically, which all in all is going to bog down your internet connection. If you're using your internet for anything other than your node, you're going to hurt it even more, e.g. streaming or normal internet browsing. The question is how many connections are used on your LAN, how many people are using the internet, etc.

do you read your node logs or firewall logs, or do you think I can read them and fix it for you?