Upcoming storage node improvements including benchmark tool

That's the kind of answer I was looking for, since Storj didn't have a proper one. I'm running other nodes that are more profitable than earning pennies via Storj. If Storj doesn't have the answer, that's OK; I don't have the time to waste in a community forum either. Where there is profit, there is interest. There are many people who have much more time to spend on a social forum.

It's a learning experience and just fun for most of the more active people on the forum.
You seem to have been unlucky in that your nodes mostly held test data. They will fill up with more permanent data; it just takes a while. Also, the recent testing is because big customers want to onboard. I'd say it's worth sticking around to see how that ends up. But just to be clear, customer usage has been growing exponentially for a while, so your statements in your previous message are incorrect. This mass deletion of data is likely a one-time thing. But it's up to you if you want to stay.

Bye bye, then. Thank you for your little rant… :man_shrugging:t2:

I'm very impressed with my nodes' performance after the recent improvements in the storagenode software.
Syno with 1GB RAM running 2 nodes, each with 4TB of data plus almost 1.5TB of trash; I deleted the databases and started the used-space filewalker to recreate them. The only improvements I could make myself were moving the databases to a USB flash drive, along with using ext4 with noatime for the drives and the stick.

In 4 days, both filewalkers finished, at the same time as the retain processes and trash cleanup.
After 5 days, my databases are back, and the 3TB of trash has shrunk to 500GB.
The activity of the drives, after all these services finished, is around 4%.
So the improvements are awesome.

Hi everyone! I recently merged a few changes that I expect will improve the performance of the piecestore benchmark (and, eventually, the storagenode). One of my benchmarks with an HDD showed an improvement of over 30%, but I'm curious what folks from the community will see with their setups.

To run the new benchmark, you'll need to:

git clone https://github.com/storj/storj
cd storj
go install ./cmd/tools/piecestore-benchmark/
cd /mnt/hdd
mkdir benchmark
cd benchmark
piecestore-benchmark -pieces-to-upload 100000
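If that last step fails with "command not found", note that `go install` places the binary in Go's bin directory, which may not be on your PATH. A minimal sketch, assuming a default Go setup (the fallback path below is Go's documented default when GOBIN and GOPATH are unset):

```shell
# go install writes binaries to $GOBIN if set, otherwise to $GOPATH/bin
# (which defaults to $HOME/go/bin); add that directory to PATH
BIN_DIR="${GOBIN:-${GOPATH:-$HOME/go}/bin}"
export PATH="$PATH:$BIN_DIR"
```

Alternatively, invoke the binary by its full path, e.g. `$HOME/go/bin/piecestore-benchmark`.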

Then, to calculate the baseline benchmark (what the performance was like before the patches):

git clone https://github.com/storj/storj
cd storj
git checkout ae5dc146a3d33a2c5f2a62ade7e9293c8801b751
go install ./cmd/tools/piecestore-benchmark/
cd /mnt/hdd
mkdir benchmark
cd benchmark
piecestore-benchmark -pieces-to-upload 100000

(you can lower the -pieces-to-upload parameter if you have a slower node)
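Since both builds install under the same binary name, it helps to rename the baseline before building the patched version; the results posted below use a `.old` suffix for the baseline. A sketch, assuming the default install location (the `piecestore-benchmark.old` name is just a convention, not part of the tool):

```shell
# rename the baseline build so the patched build does not overwrite it
# (assumes go install put it in the default $HOME/go/bin)
BIN="${GOPATH:-$HOME/go}/bin/piecestore-benchmark"
if [ -f "$BIN" ]; then
    mv "$BIN" "$BIN.old"
fi
```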

Thanks in advance to everyone who tests; this might influence further development. Thank you all!

Is there something for the Windows GUI?

Does this install something permanent?
Or can we just delete the benchmark folder afterwards?

Is the DB part of the equation?
If yes, is there a way to give paths so that the DB is on an SSD and the pieces on the HDD?

New benchmark:

/media/disk017/bench/new1$ piecestore-benchmark -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 28.398286972s (208.44 MiB/s, 3521.34 pieces/s)
collected 100000 pieces in 29.673969507s (199.48 MiB/s)
/media/disk017/bench/new2$ piecestore-benchmark -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 28.630554626s (206.75 MiB/s, 3492.77 pieces/s)
collected 100000 pieces in 28.436651088s (208.16 MiB/s)
/media/disk017/bench/new3$ piecestore-benchmark -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 27.318227059s (216.68 MiB/s, 3660.56 pieces/s)
collected 100000 pieces in 28.595973333s (207.00 MiB/s)

Baseline benchmark:

/media/disk017/bench/old1$ piecestore-benchmark.old -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 29.512591866s (200.57 MiB/s, 3388.38 pieces/s)
collected 100000 pieces in 29.606904669s (199.93 MiB/s)
/media/disk017/bench/old2$ piecestore-benchmark.old -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 29.35963216s (201.61 MiB/s, 3406.04 pieces/s)
collected 100000 pieces in 27.417776602s (215.89 MiB/s)
/media/disk017/bench/old3$ piecestore-benchmark.old -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 29.374990892s (201.51 MiB/s, 3404.26 pieces/s)
collected 100000 pieces in 26.963021964s (219.53 MiB/s)

Disk:

=== START OF INFORMATION SECTION ===
Device Model:     ST20000NM007D-3DJ103
Firmware Version: SN01
User Capacity:    20,000,588,955,648 bytes [20.0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)

FS: ZFS

/media/disk017/bench# zfs get all -s local zp017

NAME   PROPERTY              VALUE                  SOURCE
zp017  recordsize            128K                   local
zp017  mountpoint            /media/disk017         local
zp017  compression           lz4                    local
zp017  atime                 off                    local
zp017  xattr                 sa                     local
zp017  primarycache          metadata               local
zp017  secondarycache        metadata               local
zp017  sync                  disabled               local
zp017  dnodesize             auto                   local

More tests and panics:

/media/disk017/bench/new1$ piecestore-benchmark -pieces-to-upload 200000
uploaded 200000 62.07 KB pieces in 1m20.844532728s (146.44 MiB/s, 2473.88 pieces/s)
collected 200000 pieces in 52.144498804s (227.03 MiB/s)
/media/disk017/bench/new2$ piecestore-benchmark -pieces-to-upload 500000
uploaded 500000 62.07 KB pieces in 4m49.135427606s (102.36 MiB/s, 1729.29 pieces/s)
collected 500000 pieces in 1m51.713516688s (264.93 MiB/s)


/media/disk017/bench/old1$ piecestore-benchmark.old -pieces-to-upload 200000
uploaded 200000 62.07 KB pieces in 1m17.46640484s (152.82 MiB/s, 2581.76 pieces/s)
collected 200000 pieces in 50.425297701s (234.77 MiB/s)
/media/disk017/bench/old2$ piecestore-benchmark.old -pieces-to-upload 500000
panic: main.go:191: pieceexpirationdb: database is locked [recovered]
        panic: main.go:191: pieceexpirationdb: database is locked [recovered]
        panic: main.go:191: pieceexpirationdb: database is locked [recovered]
        panic: main.go:191: pieceexpirationdb: database is locked

goroutine 77 [running]:
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0x0)
        /root/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.23/ctx.go:155 +0x2ee
panic({0x1833900?, 0xc238598e28?})
        /usr/lib/go-1.22/src/runtime/panic.go:770 +0x132
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0x0)
        /root/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.23/ctx.go:155 +0x2ee
panic({0x1833900?, 0xc238598e28?})
        /usr/lib/go-1.22/src/runtime/panic.go:770 +0x132
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0x0)
        /root/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.23/ctx.go:155 +0x2ee
panic({0x1833900?, 0xc238598e28?})
        /usr/lib/go-1.22/src/runtime/panic.go:770 +0x132
github.com/dsnet/try.e({0x1bdffa0?, 0xc238598d50?})
        /root/go/pkg/mod/github.com/dsnet/try@v0.0.3/try.go:206 +0x65
github.com/dsnet/try.E(...)
        /root/go/pkg/mod/github.com/dsnet/try@v0.0.3/try.go:212
main.uploadPiece.func1({0x1c044d8, 0xc12117d7c0})
        /root/PROJECT/storj-bench/storj/cmd/tools/piecestore-benchmark/main.go:191 +0x14a
github.com/spacemonkeygo/monkit/v3/collect.CollectSpans.func1(0xc000597360?, 0xc06d0a7ea0, 0xc1e65f6210, 0xc06d0a7f38)
        /root/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.23/collect/ctx.go:67 +0x9f
github.com/spacemonkeygo/monkit/v3/collect.CollectSpans({0x1c044d8, 0xc12117d7c0}, 0xc06d0a7f38)
        /root/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.23/collect/ctx.go:68 +0x24e
main.uploadPiece({0x1c044d8, 0xc12117d720}, 0xc000486000, 0xc057d957d0)
        /root/PROJECT/storj-bench/storj/cmd/tools/piecestore-benchmark/main.go:188 +0x1e5
main.main.func1.1()
        /root/PROJECT/storj-bench/storj/cmd/tools/piecestore-benchmark/main.go:235 +0x6c
created by main.main.func1 in goroutine 1
        /root/PROJECT/storj-bench/storj/cmd/tools/piecestore-benchmark/main.go:233 +0x96

There's no graphical interface for this benchmark tool, unfortunately.

No, you can delete both the binary and the benchmark folder after the test.

For these particular patches, it's best to test with blob, orders, and the database on the same disk.

I get an error trying to run go install …

go: finding gotest.tools/v3 v3.5.1
go: gotest.tools/v3@v3.5.1: unknown revision gotest.tools/v3.5.1
go: error loading module requirements

Is my version of go too old?

ANOTHER DRIVE and FS
New benchmark:

/media/disk018/bench/new1$ piecestore-benchmark -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 22.954588536s (257.87 MiB/s, 4356.43 pieces/s)
collected 100000 pieces in 7.35056042s (805.28 MiB/s)
/media/disk018/bench/new2$ piecestore-benchmark -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 22.981609182s (257.57 MiB/s, 4351.31 pieces/s)
collected 100000 pieces in 7.452747976s (794.24 MiB/s)
/media/disk018/bench/new3$ piecestore-benchmark -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 23.125528437s (255.96 MiB/s, 4324.23 pieces/s)
collected 100000 pieces in 7.359297953s (804.32 MiB/s)

/media/disk018/bench/new1$ piecestore-benchmark -pieces-to-upload 200000
uploaded 200000 62.07 KB pieces in 50.369984448s (235.03 MiB/s, 3970.62 pieces/s)
collected 200000 pieces in 1m3.155559848s (187.45 MiB/s)
/media/disk018/bench/new2$ piecestore-benchmark -pieces-to-upload 500000
uploaded 500000 62.07 KB pieces in 2m13.992165989s (220.88 MiB/s, 3731.56 pieces/s)
collected 500000 pieces in 1m44.229967126s (283.95 MiB/s)

Baseline benchmark:

/media/disk018/bench/old1$ piecestore-benchmark.old -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 28.369228058s (208.65 MiB/s, 3524.95 pieces/s)
collected 100000 pieces in 10.508789289s (563.27 MiB/s)
/media/disk018/bench/old2$ piecestore-benchmark.old -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 29.169365081s (202.93 MiB/s, 3428.25 pieces/s)
collected 100000 pieces in 8.584800602s (689.51 MiB/s)
/media/disk018/bench/old3$ piecestore-benchmark.old -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 28.601768357s (206.95 MiB/s, 3496.29 pieces/s)
collected 100000 pieces in 8.995053427s (658.06 MiB/s)

/media/disk018/bench/old1$ piecestore-benchmark.old -pieces-to-upload 200000
uploaded 200000 62.07 KB pieces in 1m2.618386893s (189.06 MiB/s, 3193.95 pieces/s)
collected 200000 pieces in 1m16.211374239s (155.34 MiB/s)
/media/disk018/bench/old2$ piecestore-benchmark.old -pieces-to-upload 500000
uploaded 500000 62.07 KB pieces in 3m57.504496122s (124.61 MiB/s, 2105.22 pieces/s)
collected 500000 pieces in 2m11.269413295s (225.46 MiB/s)

Disk:

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Ultrastar DC HC550
Device Model:     WUH721818ALE6L4
Firmware Version: PCGAW660
User Capacity:    18,000,207,937,536 bytes [18.0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)

FS: EXT4

ext4 seems to be the winner

What's your Go version? What are your GOPROXY settings?

Thank you for all your testing @ksp! This is very helpful.

go version go1.11.6 linux/amd64

No idea. Let's pretend that my only experience with Go was copying and pasting the commands from the other post.
When I first ran the go install command, it "found" a huge pile of stuff and failed on gotest.tools. Running the command a second, third, etc. time, the pile of stuff got smaller, but it still failed on that one.

Could you upgrade it to the latest version and try building the benchmark again?

Using version 1.22.4, the go install ./cmd/tools/piecestore-benchmark/ command completes with no errors, but there is no piecestore-benchmark executable I can find.
-bash: piecestore-benchmark: command not found

Also not working here.

Please tell us which Go version we should install, and how.
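For what it's worth, go1.11.6 predates current Go module defaults, which matches the gotest.tools resolution errors above, while 1.22.4 is reported earlier in the thread to build the tool fine. A sketch of installing a current Go on Linux/amd64, following the standard go.dev install instructions (version pinned to the one reported working; adjust the tarball for your platform):

```shell
# compute the download URL for the Go release reported working above
GO_VERSION=1.22.4
TARBALL="go${GO_VERSION}.linux-amd64.tar.gz"
echo "https://go.dev/dl/${TARBALL}"
# to install system-wide (per go.dev/doc/install):
#   curl -LO "https://go.dev/dl/${TARBALL}"
#   sudo rm -rf /usr/local/go
#   sudo tar -C /usr/local -xzf "${TARBALL}"
#   export PATH=/usr/local/go/bin:$PATH
#   go version
```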