Failed download

The following error is in the log, but the file is there and readable. What might be going on?

2023-07-15T12:54:05.594Z        INFO    piecestore      download started        {"process": "storagenode", "Piece ID": "OFEMV73T66NJW4GJOPJQMYAQ4MIBUVQED5GYW4RG5I4TWVKHSSFA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 256, "Remote Address": "172.17.0.1:61648"}
2023-07-15T12:54:05.595Z        ERROR   piecestore      download failed {"process": "storagenode", "Piece ID": "OFEMV73T66NJW4GJOPJQMYAQ4MIBUVQED5GYW4RG5I4TWVKHSSFA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 0, "Remote Address": "172.17.0.1:61648", "error": "pieces error: filestore error: unable to open \"config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/of/emv73t66njw4gjopjqmyaq4mibuvqed5gyw4rg5i4twvkhssfa.sj1\": open config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/of/emv73t66njw4gjopjqmyaq4mibuvqed5gyw4rg5i4twvkhssfa.sj1: not a directory", "errorVerbose": "pieces error: filestore error: unable to open \"config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/of/emv73t66njw4gjopjqmyaq4mibuvqed5gyw4rg5i4twvkhssfa.sj1\": open config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/of/emv73t66njw4gjopjqmyaq4mibuvqed5gyw4rg5i4twvkhssfa.sj1: not a directory\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).Open:285\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Open:82\n\tstorj.io/storj/storagenode/pieces.(*Store).Reader:309\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:653\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:251\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:124\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:114\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}

Sorry, I hit create before I added some more information.
This is a node running under Docker on macOS, with ZFS as the underlying filesystem. This is the result of ls -l on the file:

    -rw------- 1 cap staff 2319872 Jun 28 17:35 storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/of/emv73t66njw4gjopjqmyaq4mibuvqed5gyw4rg5i4twvkhssfa.sj1

Usually this means that the filesystem is corrupted. Please check and fix it.

Please also show how the other pieces in that directory look.
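For example, something like this (the path is taken from your log; adjust it to wherever the node's data actually lives):

    ls -la config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/of/ | head

would show whether the neighbouring pieces in that prefix directory look normal.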

I have decided that this was the result of some interaction between an old macOS Docker, ZFS, and Storj, possibly influenced by the 800,000 files in the trash directory (or not).
The proximate cause was my attempt to put the database files in a separate directory for better performance. It worked for a while and then failed in different ways. When I put them back, the problems ceased.
So nevermind.

Putting the databases on a different path should not affect blobs in any way. Perhaps you tried to use symlinks?
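A quick way to check (run from the node's data location; the path is just an example):

    ls -ld config/storage config/storage/blobs

A symlink would show an l in the mode bits and an "-> target" at the end of the line.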

I created a new filesystem; it's trivial to do with ZFS and allows different caching and record sizes (rough command at the end of this post). I got bandwidth.db malformed errors and at one point a crash of the node with a SIGBUS error: [signal SIGBUS: bus error code=0x2 addr=0x7f3ccbe2a000 pc=0xe33e69] and about 2600 lines of stack trace.
That's when I gave up. No errors since then.
I still have the log file if you want some bedtime reading. :slightly_smiling_face:
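For reference, the new dataset was created with something like this (pool/dataset name and property values are illustrative, not the exact ones I used):

    zfs create -o recordsize=16K -o primarycache=metadata tank/storj-db

which is what made it easy to give the databases different caching and record-size settings from the blobs.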

I hope that you did not remove customers' data when you created the new filesystem; otherwise your node will be disqualified for losing it.

No. I edited the config file (storage2.database-dir) and moved the database to the new filesystem.
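Roughly, the whole change was (container name and target path are illustrative; the *.db files sit next to the blobs folder by default):

    docker stop storagenode
    mv config/storage/*.db /tank/storj-db/
    # then point storage2.database-dir at the new location in config.yaml and start the node again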

The 3+ terabytes of client data was untouched.
As I said, it seemed to work but was unreliable for reasons unknown, so I abandoned the effort and went back to the default.

This is a very interesting case, because the databases are unrelated to blobs, so you must have done something with the data location too; otherwise the problem with blobs (customers' data) would not have disappeared.
Keep an eye on your logs; a problem with blobs never fixes itself, unless this exact corrupted piece happens to be removed by the garbage collector.

I agree and am watching the log. I will let you know if anything shows up.