Hashstore: logSlots too small

Hello.

After a forced reboot when my Linux box crashed, I ran into a hashstore miscalculation problem. I tried this: Hashstore error preventing node restart - #10 by Alexey

But I ran into another problem:

root@d4:/mnt/storage/storj_13/storj_disk/storage/hashstore/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE/s1/meta# /root/go/bin/write-hashtbl /mnt/storage/storj_13/storj_disk/storage/hashstore/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE/s1
Counting /mnt/storage/storj_13/storj_disk/storage/hashstore/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE/s1/0f/log-000000000000000f-00004fa4...
Counting /mnt/storage/storj_13/storj_disk/storage/hashstore/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE/s1/10/log-0000000000000010-00004fa6...
Counting /mnt/storage/storj_13/storj_disk/storage/hashstore/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE/s1/11/log-0000000000000011-00000000...
Counting /mnt/storage/storj_13/storj_disk/storage/hashstore/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE/s1/12/log-0000000000000012-00000000...
Counting /mnt/storage/storj_13/storj_disk/storage/hashstore/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE/s1/13/log-0000000000000013-00004fa8...
Counting /mnt/storage/storj_13/storj_disk/storage/hashstore/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE/s1/14/log-0000000000000014-00000000...
Counting /mnt/storage/storj_13/storj_disk/storage/hashstore/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE/s1/17/log-0000000000000017-00004fab...
Counting /mnt/storage/storj_13/storj_disk/storage/hashstore/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE/s1/18/log-0000000000000018-00004fad...
Record count=1409
Using logSlots=12
hashstore: logSlots too small: logSlots=12
	storj.io/storj/storagenode/hashstore.CreateHashTbl:73
	storj.io/storj/storagenode/hashstore.CreateTable:129
	main.(*cmdRoot).Execute:101
	github.com/zeebo/clingy.(*Environment).dispatchDesc:129
	github.com/zeebo/clingy.Environment.Run:41
	main.main:29
	runtime.main:283

What is this error, and how can I resolve it? Thank you!

I’d unmount that filesystem and scrub it / run “fsck -y” on it first. That may make repairs that allow the node to start. It can’t hurt.

Forgot to mention: I’m using ZFS with a mirror, and the ZFS filesystem is fine.

Not sure if it helps, but I would try to write a new hash table for 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE/s1 using the write-hashtbl tool. Check the folder 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE/s1 for zero-byte files first and delete them.
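One way to find those zero-byte files is with `find`. A sketch against a throwaway demo directory (the `/tmp/hashstore-demo` paths and file names are made up for illustration; substitute your own `…/s1` directory), listing first as a dry run before deleting anything:

```shell
# Demo setup with hypothetical names: one empty log file next to a valid one.
mkdir -p /tmp/hashstore-demo/s1/0f
: > /tmp/hashstore-demo/s1/0f/log-0000000000000001-00000000    # zero bytes
printf 'data' > /tmp/hashstore-demo/s1/0f/log-0000000000000002-00000000

# Dry run: list zero-byte files under the s1 directory first...
find /tmp/hashstore-demo/s1 -type f -size 0 -print

# ...then delete them once the list looks right.
find /tmp/hashstore-demo/s1 -type f -size 0 -delete
```

`-size 0` matches only truly empty files, so valid log files are untouched.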

There is a manual option on write-hashtbl to set it:

Errors:
    argument error: dir: required argument missing

Usage:
    write-hashtbl [flags] <dir>

Arguments:
    dir    Directory containing log files to process

Flags:
    -f, --fast            Skip some checks for faster processing
    -s, --slots uint64    logSlots to use instead of counting
    -k, --kind string     Kind of table to write

Global flags:
    -h, --help         prints help for the command
        --summary      prints a summary of what commands are available
        --advanced     when used with -h, prints advanced flags help

Increase it until it works.
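For reference, logSlots is the base-2 logarithm of the table size, so each increment doubles the slot count. A sketch of the arithmetic plus a hypothetical invocation (the `--slots` flag comes from the usage text above; starting at 14 is an assumption, keep raising it until CreateHashTbl accepts it):

```shell
# Each logSlots step doubles the number of hash table slots (2^logSlots):
echo $((2 ** 12))   # 4096 slots — the value the tool derived from 1409 records
echo $((2 ** 14))   # 16384 slots — a larger value to try manually

# Hypothetical invocation; replace <dir> with your .../s1 directory:
# /root/go/bin/write-hashtbl --slots 14 <dir>
```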