Some potential issues with a fresh node

Greetings!

I’ve just brought up a new node, but I have a couple of concerns about what appeared in the log.
Firstly: my hardware includes an SSD cache pool which sits in front of the storage pool.
In the Docker node (v3 beta) log I got a

warning: storage less than requested

message on startup, and the log reports the available space as equal to my cache pool.
Of course, this cache pool is scheduled to offload its data regularly, so in reality the available space should never become an issue.
The node GUI seems to be correct and reports 8TB as I set it.
Is this something to be concerned about?

Secondly: I got this weird error at around the 8-hour mark:

2021-01-15T17:14:08.187Z ERROR piecestore upload failed {"Piece ID": "XXX", "Satellite ID": "XXX", "Action": "PUT", "error": "unexpected EOF", "errorVerbose": "unexpected EOF\n\tstorj.io/common/rpc/rpcstatus.Error:82\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:325\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:996\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:29\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

What was that and why did it occur?
Thank you,
cp

You specified 8TB, but the real free space is less than that. We recommend leaving 10% as a buffer; for disks larger than 2TB it can be reduced to about 5-6%. For example, on an 8TB disk a 10% buffer means allocating roughly 7.2TB, and a 5-6% buffer roughly 7.5TB.
Please show the result of the command

df -HT

The mentioned error is likely related to the long-tail cancellation: the customer cancels the upload because competing nodes finished their pieces earlier than your node did.
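
If you want to gauge how often this happens, a quick check along these lines should work (assuming your container is named storagenode; adjust the name to whatever you used in docker run, and note the exact log wording can differ between versions):

# count how many uploads were cut off or cancelled since the container started
docker logs storagenode 2>&1 | grep -cE "upload (failed|canceled)"

Occasional hits are normal; only a consistently high ratio compared to successful uploads would be worth investigating.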

Thanks for the answer. As I said in my OP, the actual space is not an issue.
The Storj node log reports the cache pool space as the available space and does not see the storage pool. This is an Unraid server running the

storjlabs/storagenode:beta

Docker image.
The mounts point to the storage share, and the share is set to “cache preferred”.
In theory it will probably run just fine, because the cache frequently offloads its contents to the storage pool. That’s unless Storj has some built-in limitation that activates once the node’s accumulated data exceeds the size of the cache pool. Hopefully not.

The overnight log didn’t reveal any issues, so hopefully it will run fine now.

It should be storjlabs/storagenode:latest.
And I hope you use the CLI to run your node and not the community application, because that application has a bug: it uses the dangerous -v option instead of --mount type=bind to bind your storage to the container.
In the case of Unraid this is especially dangerous and can destroy your node, because Unraid mounts the disks into user space after Docker starts, so the application could start with an empty storage volume; in the best case the node will fail to start, and in the worst case it will be disqualified for losing customers’ data.
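
For reference, this is roughly what the difference looks like in the docker run command (the host path /mnt/user/storj is only a placeholder for your Unraid share, /app/config is the path the storagenode container expects, and the rest of the options are trimmed for brevity):

# risky: if the share is not mounted yet, Docker silently creates an empty host directory
docker run ... -v /mnt/user/storj:/app/config ... storjlabs/storagenode:latest

# safer: --mount type=bind refuses to start the container if the source path does not exist
docker run ... --mount type=bind,source=/mnt/user/storj,destination=/app/config ... storjlabs/storagenode:latest

That fail-fast behaviour is exactly what protects the node when Unraid has not mounted the array yet.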

Yes, I am using --mount type=bind.
I changed to the latest tag now. Thanks!
