Cyclic redundancy check

Does this cyclic redundancy check error mean that the whole log file is dead?
What can we do with it? As far as I know, write-hashtbl throws an error and stops, so just rebuilding it is not an option.

2026-02-27T08:21:25+02:00 ERROR hashstore compaction failed {"satellite": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "hashstore: writing into compacted log (rec={key:5ad7563c9d9416a007cecd87c7d0396e3527572ebafa5db28ef6ea86e62811cc offset:995827584 log:1583 length:595968 created:20444 (2025-12-22) expires:0 (1970-01-01) trash:false}) (from=R:\hashstore\12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs\s0\2f\log-000000000000062f-00000000) (size=1074651264): read R:\hashstore\12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs\s0\2f\log-000000000000062f-00000000: Data error (cyclic redundancy check).", "errorVerbose": "hashstore: writing into compacted log (rec={key:5ad7563c9d9416a007cecd87c7d0396e3527572ebafa5db28ef6ea86e62811cc offset:995827584 log:1583 length:595968 created:20444 (2025-12-22) expires:0 (1970-01-01) trash:false}) (from=R:\hashstore\12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs\s0\2f\log-000000000000062f-00000000) (size=1074651264): read R:\hashstore\12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs\s0\2f\log-000000000000062f-00000000: Data error (cyclic redundancy check).\n\tstorj.io/storj/storagenode/hashstore.(*Store).rewriteRecord:1395\n\tstorj.io/storj/storagenode/hashstore.(*Store).compactOnce.func8:1142\n\tstorj.io/storj/storagenode/hashstore.(*Store).compactOnce:1152\n\tstorj.io/storj/storagenode/hashstore.(*Store).Compact:855\n\tstorj.io/storj/storagenode/hashstore.(*DB).performPassiveCompaction:568"}

This looks like your file system is reporting an error. There’s no error named this way in the storage node code.

I checked the disk twice, no errors found. But it happens during compaction.

I’d suspect hardware failure then (cabling, HDD, maybe memory). If fsck (or its equivalent you’re using) does not find this problem, then this simply means the problem is intermittent.

It shows the same file and the same error, several times over several days, so the problem is exactly with this file.

Still, it’s a file system level report, not storage node. This means the storage node requested the file system to preserve a piece of data (here, a log file), and the file system now reports it cannot retrieve that data when the node is asking for it.

It can still be a memory or cabling problem: if system RAM had a bad bit at write time, or the cable flipped one bit, then the bad bit was written to the drive, and it now keeps being read from the same exact position.

Thank you for the explanation. I will consider changing the PC then; I have a spare server CPU and server memory, and only need a motherboard for them.

Surely looks like a "storagenode" error.

What makes you think so?

Clearly it's in the storage node log. Generally, a disk error or memory error would be located in the relevant OS logs.

Accessing the same file and receiving the same outcome multiple times indicates an issue with the file, which may or may not be caused by a disk/memory/OS issue, but it still needs to be handled by the application in control of that file, in this case the storagenode.

The error comes from the io.CopyN() Go function. I guess there's also a corresponding error message in the system log.

This statement is not internally consistent. An application can neither fix nor be expected to work around an issue with the hardware/filesystem/OS/etc. What it can do is ignore such failures gracefully. Better yet, crash immediately, so that the operator can debug the issue.

If you cannot trust a written file, it is a critical time to abort: attempting to carry on has the potential of corrupting even more data.

That assumes it is actually a hardware/filesystem/OS issue. Cryptic error messages that only mean something to developers won't help a lot of operators. Hence the original message.

If the application attempts to retry the operation and still fails, we know it's not the application, and it can then point the finger elsewhere with a nice error message: "Possible faulty hardware detected on /path/to/faulty/file. Please check."

It’s exactly what it does: Google Search