Piecestore: could not get hash and order limit

I'm getting this error intermittently during GET_REPAIR. I also have two databases: piecestore.db, which is 6 months old with no recent access, and piecesstore.db, which is current but has no data (0 KB). I'm seeing roughly a 50/50 download success/failure rate.

My audits are 100% across all satellites.

I am running the Windows version.

INFO piecestore download started {"Piece ID": "QDQ5PX2NWO3LQUDPJPDASTNAZXNIXCCBIY4W3C2XIA3OUR3GYW4Q", "Satellite ID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET_REPAIR"}
2020-01-03T12:56:38.222-0600 ERROR piecestore could not get hash and order limit {"error": "v0pieceinfodb error: sql: no rows in result set", "errorVerbose": "v0pieceinfodb error: sql: no rows in result set\n\tstorj.io/storj/storagenode/storagenodedb.(*v0PieceInfoDB).Get:132\n\tstorj.io/storj/storagenode/pieces.(*Store).GetV0PieceInfo:633\n\tstorj.io/storj/storagenode/pieces.(*Store).GetHashAndLimit:424\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:586\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:488\n\tstorj.io/storj/pkg/pb.DRPCPiecestoreDescription.Method.func2:1072\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2020-01-03T12:56:38.222-0600 INFO piecestore download failed {"Piece ID": "QDQ5PX2NWO3LQUDPJPDASTNAZXNIXCCBIY4W3C2XIA3OUR3GYW4Q", "Satellite ID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET_REPAIR", "error": "v0pieceinfodb error: sql: no rows in result set"}

Neither of those databases is used anymore; it seems you have had database problems in the past. This message appears when your database has no record of the requested piece.
That database now has a new name: pieceinfo.db

Are you sure you specified the right path to the storage when you migrated from the Docker version?

pieceinfo.db was malformed and was rebuilt; the file size dropped from 300 MB to about 150-175 MB. All the files are still on the drive. I did migrate over from Docker to Windows, and the database may have had errors before the migration. The satellites are finding a bunch of files, and garbage cleanup also seems to be working. My space used seems larger than what Storj thinks it is, so I figured I'd wait through several garbage cleanup cycles. So far, audit reports from the Python earnings script come back 100% from all satellites. Yes, the path is correct.

pieceinfo.db was the only database that showed errors, and it was cleaned up per the instructions.
If file pointers are lost in the database cleanup, but the original files still exist intact on the hard drive, will Storj know they are there? Will they get cleaned up in garbage collection?


I'm having the same issue, because of a DB that crashed and had to be created from scratch.
Therefore the pointers for the blob files are missing, but the error only shows up when a GET_REPAIR is issued.
The SN is able to AUDIT the piece file perfectly, as long as the particular file exists in the blob folder.

Read more about the issue here:


That’s my problem exactly!! Great info.

Now, if only there were a way, or a Python program, that could scan the blob folders and rebuild the database locally. Or could that be added to the garbage collection process, or to the storj service at startup?
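For anyone curious, here is a minimal Python sketch of just the inventory half of that idea: it walks the blob folders and lists the pieces it finds, but it does not rebuild pieceinfo.db (that would also require knowing the exact table schema, hashes, and order limits). The on-disk layout it assumes (blobs/<satellite>/<2-char prefix>/<rest-of-piece-id>.sj1) and the .sj extension check are my assumptions, not something confirmed in this thread.

# Hypothetical sketch only -- not actual node code. Assumes the blob layout
# blobs/<satellite>/<2-char prefix>/<rest-of-piece-id>.sj1, where the prefix
# directory plus the filename form the piece ID.
import os
import sys

def list_pieces(blobs_dir):
    """Yield (satellite_folder, piece_id, full_path) for every blob file found."""
    for satellite in sorted(os.listdir(blobs_dir)):
        sat_path = os.path.join(blobs_dir, satellite)
        if not os.path.isdir(sat_path):
            continue
        for prefix in sorted(os.listdir(sat_path)):
            prefix_path = os.path.join(sat_path, prefix)
            if not os.path.isdir(prefix_path):
                continue
            for name in os.listdir(prefix_path):
                base, ext = os.path.splitext(name)
                if not ext.startswith(".sj"):  # assumed blob extension; adjust if yours differ
                    continue
                yield satellite, prefix + base, os.path.join(prefix_path, name)

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: list_pieces.py <path to blobs folder>")
    total = 0
    for satellite, piece_id, path in list_pieces(sys.argv[1]):
        print(satellite, piece_id, os.path.getsize(path))
        total += 1
    print("total blob files:", total, file=sys.stderr)

Run it against your blobs folder (ideally with the node stopped so the listing is stable); the output is just a per-satellite inventory you could compare against the database.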

I'm seeing the same error in my logs since 12/25.
PRAGMA integrity_check on the db checked out fine.

2020-01-03T22:31:22.046-0600 ERROR piecestore could not get hash and order limit {"error": "v0pieceinfodb error: sql: no rows in result set", "errorVerbose": "v0pieceinfodb error: sql: no rows in result set\n\tstorj.io/storj/storagenode/storagenodedb.(*v0PieceInfoDB).Get:132\n\tstorj.io/storj/storagenode/pieces.(*Store).GetV0PieceInfo:633\n\tstorj.io/storj/storagenode/pieces.(*Store).GetHashAndLimit:424\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:586\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:488\n\tstorj.io/storj/pkg/pb.DRPCPiecestoreDescription.Method.func2:1072\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
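For reference, the PRAGMA integrity_check mentioned above can also be run from a few lines of Python (a minimal sketch, assuming the built-in sqlite3 module and that the storagenode service is stopped first so the database file isn't locked):

# Minimal sketch of the same check from Python's built-in sqlite3 module.
import sqlite3

con = sqlite3.connect(r"pieceinfo.db")  # adjust the path to your storage directory
result = con.execute("PRAGMA integrity_check;").fetchone()[0]
print(result)  # prints "ok" when the database passes the check
con.close()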

Sorry, not that I know of, but I could be wrong :slight_smile:

I've actually tried to study the DBs and how they were built/structured (tables/rows), in an attempt to put something together that could "re-insert" the missing pieces into the DB, but I have not been able to look into it yet, mostly due to lack of time.
As mentioned in my last post in the GitHub link above, I'm trying to figure out whether the garbage collector would be able to handle that "reinsert-into-db" task, since the files exist on the node and the satellite knows about them. If GC does not process a file (either removing it or reinserting it), one could end up with a lot of limbo files.
But the issue is already marked as a bug by Storj, so I'm pretty sure they will come up with a proper solution. We'll just have to be patient.

With more than 25 million blob files, it would be nice to either remove those that are no longer used, or update the DBs with the files that are actually usable, even if a DB crashes. :slight_smile:


Garbage collection is a very specific process that only removes pieces that should no longer be on your node, based on a bloom filter sent by the satellite. These pieces should be on your node, so they won't be removed by GC; they are only missing data in the DBs. From what I can tell these pieces can still be downloaded as well, so you wouldn't want them to be removed. But if audit and download are able to work around the missing data in the DB, repair should as well.
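As a rough illustration of that decision logic (a conceptual sketch only, not the actual storagenode code): the bloom filter describes what the satellite still wants, and only pieces the filter does not contain get removed, regardless of what the local DB knows about them.

# Conceptual sketch of a bloom-filter GC pass, not the real implementation.
def gc_pass(pieces_on_disk, bloom_contains):
    kept, trash = [], []
    for piece_id in pieces_on_disk:
        if bloom_contains(piece_id):
            kept.append(piece_id)   # satellite still references it: keep, even if missing from the DB
        else:
            trash.append(piece_id)  # not in the filter: no longer wanted, safe to remove
    return kept, trash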

I have recently been getting some errors like the following:

2020-02-03T22:45:07.227Z ERROR piecestore could not get hash and order limit {"error": "v0pieceinfodb error: sql: no rows in result set", "errorVerbose": "v0pieceinfodb error: sql: no rows in result set\n\tstorj.io/storj/storagenode/storagenodedb.(*v0PieceInfoDB).Get:132\n\tstorj.io/storj/storagenode/pieces.(*Store).GetV0PieceInfo:649\n\tstorj.io/storj/storagenode/pieces.(*Store).GetHashAndLimit:430\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:599\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:497\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:1074\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

However, I do not see any out-of-the-ordinary impact on the success rate or the satellite stats in the web UI. Should I be concerned?

Ah, thanks! That's a new one for me :))

DB errors scare me, because Storj nodes are so reliant on them to function… Hopefully they can implement some repair strategies for the local databases one day.


In the beginning, during alpha and the start of beta, I saw more database errors, but what seems most important is file integrity. My files were all intact, and when requested they could be found. I get these errors all the time, but they seem to be on older files. As time progresses these errors have been diminishing, and they only seem to happen on data from Stefan's testing satellite. The old files that are not in the database, I assume, are getting cleaned out by garbage collection. I had suggested a database integrity check and cleanup when the storagenode service starts up; maybe that's a future development. At the time it was low on the priority list.