Upcoming storage node improvements including benchmark tool

I think the issue is that the used key in the bandwidth dictionary is not being updated on the /api/sno/ route. It’s probably reading from the database and not the cache.

Similarly, for /api/sno/satellites/ the bandwidthDaily list is not updated for the latest day.
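For reference, the stale values can be checked by querying the dashboard API directly; a quick sketch, assuming the default dashboard address 127.0.0.1:14002 and the field names described above:

# latest "used" value in the bandwidth dictionary on /api/sno/
curl -s http://127.0.0.1:14002/api/sno/ | jq '.bandwidth.used'
# last entry of the bandwidthDaily list on /api/sno/satellites/
curl -s http://127.0.0.1:14002/api/sno/satellites/ | jq '.bandwidthDaily[-1]'

If the routes read from the cache, the first value would move with current traffic and the last bandwidthDaily entry would cover today.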

For this version:

https://link.storjshare.io/s/jwdo3ucon6mler5ch2hxqebnx6aa/test/piecestore-benchmark.exe.zip
The link will expire after 2024-05-17T10:44:00Z

For this version:

The link will expire after 2024-05-17T11:53:28Z

On Windows it failed with a panic:

piecestore-benchmark.exe -pieces-to-upload 100000
panic: main.go:173: order: grace period passed for order limit [recovered]
        panic: main.go:173: order: grace period passed for order limit [recovered]
        panic: main.go:173: order: grace period passed for order limit [recovered]
        panic: main.go:173: order: grace period passed for order limit

goroutine 351 [running]:
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0x0)
        C:/Users/User/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/ctx.go:155 +0x2ee
        C:/Program Files/Go/src/runtime/panic.go:914 +0x21f
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0x0)
        C:/Users/User/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/ctx.go:155 +0x2ee
panic({0x7ff6bcb86620?, 0xc0547685a0?})
        C:/Program Files/Go/src/runtime/panic.go:914 +0x21f
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0x0)
        C:/Users/User/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/ctx.go:155 +0x2ee
panic({0x7ff6bcb86620?, 0xc0547685a0?})
        C:/Program Files/Go/src/runtime/panic.go:914 +0x21f
github.com/dsnet/try.e({0x7ff6bcf1f240?, 0xc054768438?})
        C:/Users/User/go/pkg/mod/github.com/dsnet/try@v0.0.3/try.go:206 +0x65
main.uploadPiece.func1({0x7ff6bcf42b18, 0xc0366a0fa0})
github.com/spacemonkeygo/monkit/v3/collect.CollectSpans.func1(0xc000738910?, 0xc03ccf3ea0, 0xc04d45b4d0, 0xc03ccf3f00)
        C:/Users/User/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/collect/ctx.go:67 +0x9c
github.com/spacemonkeygo/monkit/v3/collect.CollectSpans({0x7ff6bcf42b18, 0xc0366a0fa0}, 0xc03ccf3f00)
        C:/Users/User/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/collect/ctx.go:68 +0x23f
main.uploadPiece({0x7ff6bcf42b18, 0xc0366a0f00}, 0xc000470000, 0xc012dbef60)
        C:/Users/User/storj/cmd/tools/piecestore-benchmark/main.go:170 +0x173
main.main.func1()
        C:/Users/User/storj/cmd/tools/piecestore-benchmark/main.go:209 +0x6c
created by main.main in goroutine 1
        C:/Users/User/storj/cmd/tools/piecestore-benchmark/main.go:207 +0x3cc

Thank you for managing to turn a Go application, the beauty of which lies in its static build, into an unportable one. I prefer not to install compilers on my servers, both for security reasons and to keep the system clean. However, due to the CGO requirement, I’m now unable to build your benchmark on my local machine and copy it to the server. I’m learning a lot about Storj Labs’ development style!

CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -tags netgo -ldflags '-w' ./cmd/tools/piecestore-benchmark/
# storj.io/storj/shared/dbutil/sqliteutil
shared/dbutil/sqliteutil/db.go:86:28: undefined: sqlite3.Error
shared/dbutil/sqliteutil/db.go:87:25: undefined: sqlite3.ErrConstraint
shared/dbutil/sqliteutil/migrator.go:104:24: destDB.Backup undefined (type *sqlite3.SQLiteConn has no field or method Backup)
3 Likes

I’m not a Go developer so I’m not quite sure what’s going on with CGO, but I spun up an Ubuntu docker container, built the binary in there, then copied the binary to a Slackware server (Unraid) and ran it without any problems.
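In case it helps someone else, a minimal sketch of that container-based build (the Go image tag and output path are illustrative, not an official procedure):

docker run --rm -v "$PWD":/out golang:1.22 sh -c '
  git clone --depth 1 https://github.com/storj/storj /storj &&
  cd /storj &&
  go build -o /out/piecestore-benchmark ./cmd/tools/piecestore-benchmark/
'

The golang image ships a C toolchain, so the CGO/sqlite parts compile inside the container and no compiler is needed on the server itself; you just copy the resulting binary over.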

@d4rk4 - sqlite is a very valuable library! There aren’t many other in-process database libraries of comparable quality, so while using sqlite complicates the build, we think it is worth it.

2 Likes

I’ve just posted Announcement: major storage node release (potential config changes needed!), which is an exciting update based on everything we’ve learned in this thread.

4 Likes

I can cross-compile the storage node without any problems. Not sure why you wouldn’t be able to.

You people are exploiting new storage node operators by sending garbage data for the last three months. I have a 13.1 TB storage node with ID 1QGFX2rNUSoK4yi5AdG5yf1Jc3KS97vpjoKEorgNoWBMvuzRY2.
For the last three months my stored data has stayed at 1.5-1.6 TB, even after consuming over 1.5 TB of bandwidth month after month. For the first month and a half I had around 1.9 TB stored; now my node holds 1.6 TB out of 13.1 TB. My second storage node, which is on a different IP, has the same issue. I am thinking of stopping the node at the end of this month. Storj has given no proper explanation, only generic replies like “check your logs” and “update your node”. Since I run the node on Windows, it is updated automatically. I am not running the node to earn 3 to 4 dollars a month. As far as I know, Storj is struggling to get new customers the way it used to. Even the community-maintained earnings spreadsheet is far from reality.

What’s your question or concern?

It’s not that complicated. If you have extra resources available, run the node. If you don’t, don’t.

Storj does not owe anyone an explanation of how they are using the resources node operators provide. You get paid according to the published and agreed-upon schedule.

Ingress of massive amounts of data with a very short TTL is possible and normal. You agreed to that, so I’m not sure what the concern is.

7 Likes

The main concern is that the dream Lambo has to wait. :grin:

8 Likes

Only 198 more years to go… :smile:

4 Likes

That’s the kind of answer I was looking for, since Storj doesn’t have a proper one. There are other nodes I run that are more profitable than earning pennies via Storj. If Storj doesn’t have the answer, that’s OK. I don’t have the time to waste on a community forum either. Where there is profit, there is interest; there are many people who have plenty of time to spend on social forums.

1 Like

It’s a learning experience and just fun for most of the more active people on the forum.
You seem to have been unlucky in that your nodes mostly held test data. They will fill up with more permanent data; it just takes a while. Also, the recent testing is because big customers want to onboard, so I’d say it’s worth sticking around to see how that ends up. But just to be clear, customer usage is growing exponentially and has been for a while, so your statements in your previous message are incorrect. This mass deletion of data is likely a one-time thing. But it’s up to you if you want to stay.

5 Likes

Bye bye, then. Thank you for your little rant… :man_shrugging:t2:

5 Likes

I’m very impressed with my nodes’ performance after the recent improvements in the storagenode software.
Syno with 1GB RAM running 2 nodes, each with 4TB of data plus almost 1.5TB of trash; I deleted the databases and started the used-space filewalker to recreate them. The only improvements I could make myself were moving the databases to a USB flash drive, along with using ext4 with noatime for the drives and the stick.

In 4 days, both filewalkers finished, at the same time as the retain processes and trash cleanup.
After 5 days, my databases are back, and the 3TB of trash has shrunk to 500GB.
The activity of the drives, after all these services finished, is around 4%.
So the improvements are awesome.
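For anyone who wants to try the same tweaks, a rough sketch of what they look like; the device names and paths below are examples rather than my exact setup, and storage2.database-dir is the config option that points the databases at another location:

# /etc/fstab: mount the data drive and the USB stick as ext4 with noatime
/dev/sdb1  /volume1/storagenode  ext4  defaults,noatime  0 2
/dev/sdc1  /mnt/db-stick         ext4  defaults,noatime  0 2

# in the node's config.yaml, relocate the databases to the stick
storage2.database-dir: /mnt/db-stick/storagenode1-db

After changing the database directory, stop the node, move the existing *.db files to the new location, then start it again.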

4 Likes

Hi everyone! I recently merged a few changes that I expect will improve the performance of the piecestore benchmark (and, eventually, the storagenode). One of my benchmarks with an HDD showed an improvement of over 30%, but I’m curious what folks from the community will see on their own setups.

To run the new benchmark, you’ll need to:

git clone https://github.com/storj/storj
cd storj
go install ./cmd/tools/piecestore-benchmark/
cd /mnt/hdd
mkdir benchmark
cd benchmark
piecestore-benchmark -pieces-to-upload 100000

Then, to calculate the baseline benchmark (i.e. the performance before the patches):

git clone https://github.com/storj/storj
cd storj
git checkout ae5dc146a3d33a2c5f2a62ade7e9293c8801b751
go install ./cmd/tools/piecestore-benchmark/
cd /mnt/hdd
mkdir benchmark
cd benchmark
piecestore-benchmark -pieces-to-upload 100000

(you can lower the -pieces-to-upload parameter if you have a slower node)
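Note that both go install steps above put the binary at the same GOPATH/bin path, so the second build overwrites the first. If you want to keep both versions side by side, something along these lines works (a sketch; the output paths are just examples):

# build the patched version from the current branch
go build -o ~/bin/piecestore-benchmark ./cmd/tools/piecestore-benchmark/

# build the baseline under a different name, then return to the previous branch
git checkout ae5dc146a3d33a2c5f2a62ade7e9293c8801b751
go build -o ~/bin/piecestore-benchmark.old ./cmd/tools/piecestore-benchmark/
git checkout -

# run each from its own empty directory so they don't share state
mkdir -p /mnt/hdd/bench-new /mnt/hdd/bench-old
(cd /mnt/hdd/bench-new && ~/bin/piecestore-benchmark -pieces-to-upload 100000)
(cd /mnt/hdd/bench-old && ~/bin/piecestore-benchmark.old -pieces-to-upload 100000)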

Thanks in advance to everyone who tests, as this might influence further development. Thank you all!

3 Likes

Is there something for the Windows GUI?

New benchmark:

/media/disk017/bench/new1$ piecestore-benchmark -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 28.398286972s (208.44 MiB/s, 3521.34 pieces/s)
collected 100000 pieces in 29.673969507s (199.48 MiB/s)
/media/disk017/bench/new2$ piecestore-benchmark -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 28.630554626s (206.75 MiB/s, 3492.77 pieces/s)
collected 100000 pieces in 28.436651088s (208.16 MiB/s)
/media/disk017/bench/new3$ piecestore-benchmark -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 27.318227059s (216.68 MiB/s, 3660.56 pieces/s)
collected 100000 pieces in 28.595973333s (207.00 MiB/s)

Baseline benchmark:

/media/disk017/bench/old1$ piecestore-benchmark.old -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 29.512591866s (200.57 MiB/s, 3388.38 pieces/s)
collected 100000 pieces in 29.606904669s (199.93 MiB/s)
/media/disk017/bench/old2$ piecestore-benchmark.old -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 29.35963216s (201.61 MiB/s, 3406.04 pieces/s)
collected 100000 pieces in 27.417776602s (215.89 MiB/s)
/media/disk017/bench/old3$ piecestore-benchmark.old -pieces-to-upload 100000
uploaded 100000 62.07 KB pieces in 29.374990892s (201.51 MiB/s, 3404.26 pieces/s)
collected 100000 pieces in 26.963021964s (219.53 MiB/s)

Disk:

=== START OF INFORMATION SECTION ===
Device Model:     ST20000NM007D-3DJ103
Firmware Version: SN01
User Capacity:    20,000,588,955,648 bytes [20.0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)

FS: ZFS

/media/disk017/bench# zfs get all -s local zp017

NAME   PROPERTY              VALUE                  SOURCE
zp017  recordsize            128K                   local
zp017  mountpoint            /media/disk017         local
zp017  compression           lz4                    local
zp017  atime                 off                    local
zp017  xattr                 sa                     local
zp017  primarycache          metadata               local
zp017  secondarycache        metadata               local
zp017  sync                  disabled               local
zp017  dnodesize             auto                   local
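If you want to run the benchmark on a dataset with the same settings, the equivalent commands would be roughly (pool/dataset name zp017 as above; adjust to your own pool):

zfs set recordsize=128K zp017
zfs set compression=lz4 zp017
zfs set atime=off zp017
zfs set xattr=sa zp017
zfs set primarycache=metadata zp017
zfs set secondarycache=metadata zp017
zfs set sync=disabled zp017
zfs set dnodesize=auto zp017
zfs set mountpoint=/media/disk017 zp017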

2 Likes

More tests and panics:

/media/disk017/bench/new1$ piecestore-benchmark -pieces-to-upload 200000
uploaded 200000 62.07 KB pieces in 1m20.844532728s (146.44 MiB/s, 2473.88 pieces/s)
collected 200000 pieces in 52.144498804s (227.03 MiB/s)
/media/disk017/bench/new2$ piecestore-benchmark -pieces-to-upload 500000
uploaded 500000 62.07 KB pieces in 4m49.135427606s (102.36 MiB/s, 1729.29 pieces/s)
collected 500000 pieces in 1m51.713516688s (264.93 MiB/s)


/media/disk017/bench/old1$ piecestore-benchmark.old -pieces-to-upload 200000
uploaded 200000 62.07 KB pieces in 1m17.46640484s (152.82 MiB/s, 2581.76 pieces/s)
collected 200000 pieces in 50.425297701s (234.77 MiB/s)
/media/disk017/bench/old2$ piecestore-benchmark.old -pieces-to-upload 500000
panic: main.go:191: pieceexpirationdb: database is locked [recovered]
        panic: main.go:191: pieceexpirationdb: database is locked [recovered]
        panic: main.go:191: pieceexpirationdb: database is locked [recovered]
        panic: main.go:191: pieceexpirationdb: database is locked

goroutine 77 [running]:
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0x0)
        /root/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.23/ctx.go:155 +0x2ee
panic({0x1833900?, 0xc238598e28?})
        /usr/lib/go-1.22/src/runtime/panic.go:770 +0x132
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0x0)
        /root/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.23/ctx.go:155 +0x2ee
panic({0x1833900?, 0xc238598e28?})
        /usr/lib/go-1.22/src/runtime/panic.go:770 +0x132
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0x0)
        /root/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.23/ctx.go:155 +0x2ee
panic({0x1833900?, 0xc238598e28?})
        /usr/lib/go-1.22/src/runtime/panic.go:770 +0x132
github.com/dsnet/try.e({0x1bdffa0?, 0xc238598d50?})
        /root/go/pkg/mod/github.com/dsnet/try@v0.0.3/try.go:206 +0x65
github.com/dsnet/try.E(...)
        /root/go/pkg/mod/github.com/dsnet/try@v0.0.3/try.go:212
main.uploadPiece.func1({0x1c044d8, 0xc12117d7c0})
        /root/PROJECT/storj-bench/storj/cmd/tools/piecestore-benchmark/main.go:191 +0x14a
github.com/spacemonkeygo/monkit/v3/collect.CollectSpans.func1(0xc000597360?, 0xc06d0a7ea0, 0xc1e65f6210, 0xc06d0a7f38)
        /root/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.23/collect/ctx.go:67 +0x9f
github.com/spacemonkeygo/monkit/v3/collect.CollectSpans({0x1c044d8, 0xc12117d7c0}, 0xc06d0a7f38)
        /root/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.23/collect/ctx.go:68 +0x24e
main.uploadPiece({0x1c044d8, 0xc12117d720}, 0xc000486000, 0xc057d957d0)
        /root/PROJECT/storj-bench/storj/cmd/tools/piecestore-benchmark/main.go:188 +0x1e5
main.main.func1.1()
        /root/PROJECT/storj-bench/storj/cmd/tools/piecestore-benchmark/main.go:235 +0x6c
created by main.main.func1 in goroutine 1
        /root/PROJECT/storj-bench/storj/cmd/tools/piecestore-benchmark/main.go:233 +0x96

1 Like