Upcoming storage node improvements including benchmark tool

For reference I am using this exporter: https://github.com/anclrii/Storj-Exporter (a Prometheus exporter for monitoring Storj storage nodes).

My graphs look like this: [Grafana screenshot]

It’s only picking up the periodic flushes. Are you using the same exporter?

No. I don’t like the idea of running some third-party tool that basically gets full access to my storage node. Instead I use the built-in metrics endpoint, which works without any additional log-scraping tool.


Ah gotcha. I will have to check that out.

In the meantime I’ll see if I can patch this.


Any hints on how to graph JSON data in Grafana? The exporter mentioned here reads the storagenode API and exposes the data for Prometheus to scrape, but what would be the best way to do the same directly with the storagenode API?

Not the storagenode API. Just the metrics endpoint with Prometheus.

Thank you, I didn’t know about that.
For anyone else wondering: it’s at /metrics on the debug.addr port.
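
If you want to eyeball the output before wiring up Prometheus, something like this works (a hedged sketch: debug.addr binds a random localhost port unless you pin it in config.yaml, so the port below is an assumption):

# assumes debug.addr was set to 127.0.0.1:5999 in config.yaml
curl -s http://127.0.0.1:5999/metrics | head -n 20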


try this suggestion:

I think the issue is that the used key in the bandwidth dictionary is not being updated on the /api/sno/ route. It’s probably reading from the database and not the cache.

Similarly, for /api/sno/satellites/, the bandwidthDaily list is not updated for the latest day.
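
A quick way to see the staleness for yourself (a hedged sketch: the port is the default dashboard address and the jq paths follow the key names mentioned above, so treat both as assumptions):

# check the bandwidth figure the dashboard API reports
curl -s http://127.0.0.1:14002/api/sno/ | jq '.bandwidth.used'
# and the latest entry of the per-day list on the satellites route
curl -s http://127.0.0.1:14002/api/sno/satellites/ | jq '.bandwidthDaily[-1]'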

for this version:

https://link.storjshare.io/s/jwdo3ucon6mler5ch2hxqebnx6aa/test/piecestore-benchmark.exe.zip
The link will expire after 2024-05-17T10:44:00Z

for this version:

The link will expire after 2024-05-17T11:53:28Z

On Windows it failed with a panic:

piecestore-benchmark.exe -pieces-to-upload 100000
panic: main.go:173: order: grace period passed for order limit [recovered]
        panic: main.go:173: order: grace period passed for order limit [recovered]
        panic: main.go:173: order: grace period passed for order limit [recovered]
        panic: main.go:173: order: grace period passed for order limit

goroutine 351 [running]:
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0x0)
        C:/Users/User/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/ctx.go:155 +0x2ee
panic({0x7ff6bcb86620?, 0xc0547685a0?})
        C:/Program Files/Go/src/runtime/panic.go:914 +0x21f
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0x0)
        C:/Users/User/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/ctx.go:155 +0x2ee
panic({0x7ff6bcb86620?, 0xc0547685a0?})
        C:/Program Files/Go/src/runtime/panic.go:914 +0x21f
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0x0)
        C:/Users/User/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/ctx.go:155 +0x2ee
panic({0x7ff6bcb86620?, 0xc0547685a0?})
        C:/Program Files/Go/src/runtime/panic.go:914 +0x21f
github.com/dsnet/try.e({0x7ff6bcf1f240?, 0xc054768438?})
        C:/Users/User/go/pkg/mod/github.com/dsnet/try@v0.0.3/try.go:206 +0x65
main.uploadPiece.func1({0x7ff6bcf42b18, 0xc0366a0fa0})
github.com/spacemonkeygo/monkit/v3/collect.CollectSpans.func1(0xc000738910?, 0xc03ccf3ea0, 0xc04d45b4d0, 0xc03ccf3f00)
        C:/Users/User/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/collect/ctx.go:67 +0x9c
github.com/spacemonkeygo/monkit/v3/collect.CollectSpans({0x7ff6bcf42b18, 0xc0366a0fa0}, 0xc03ccf3f00)
        C:/Users/User/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/collect/ctx.go:68 +0x23f
main.uploadPiece({0x7ff6bcf42b18, 0xc0366a0f00}, 0xc000470000, 0xc012dbef60)
        C:/Users/User/storj/cmd/tools/piecestore-benchmark/main.go:170 +0x173
main.main.func1()
        C:/Users/User/storj/cmd/tools/piecestore-benchmark/main.go:209 +0x6c
created by main.main in goroutine 1
        C:/Users/User/storj/cmd/tools/piecestore-benchmark/main.go:207 +0x3cc

Thank you for managing to turn a Go application, the beauty of which lies in its static build, into an unportable one. I prefer not to install compilers on my servers for security reasons and to keep the system clean. However, due to the CGO requirements, I’m now unable to build your benchmark on my local machine and copy it to the server. Learning about Storj Labs’ development style!

CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -tags netgo -ldflags '-w' ./cmd/tools/piecestore-benchmark/
# storj.io/storj/shared/dbutil/sqliteutil
shared/dbutil/sqliteutil/db.go:86:28: undefined: sqlite3.Error
shared/dbutil/sqliteutil/db.go:87:25: undefined: sqlite3.ErrConstraint
shared/dbutil/sqliteutil/migrator.go:104:24: destDB.Backup undefined (type *sqlite3.SQLiteConn has no field or method Backup)

I’m not a Go developer so I’m not quite sure what’s going on with CGO, but I spun up an Ubuntu docker container, built the binary in there, then copied the binary to a Slackware server (Unraid) and ran it without any problems.
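
For anyone wanting to reproduce that, the container route looks roughly like this (a hedged sketch: the image tag and mount paths are my assumptions, not an official recipe):

# build inside a Go container that already ships a C toolchain for CGO
docker run --rm -v "$PWD":/src -w /src golang:1.22 \
    go build -o piecestore-benchmark ./cmd/tools/piecestore-benchmark/
# the result links against glibc, so it runs on any host with a compatible
# glibc version, which is presumably why the Slackware server was fine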

@d4rk4 - sqlite is a very valuable library! There aren’t very many other in-process databases of as high quality as sqlite, so while using sqlite complicates the build, we think it is worth it.


I’ve just posted Announcement: major storage node release (potential config changes needed!), which is an exciting update based on everything we’ve learned in this thread.


I can cross-compile the storage node without any problems. Not sure why you wouldn’t be able to.
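
For what it’s worth, with CGO in the dependency tree a cross-compile also needs a C cross-compiler; one way to get that without installing a full toolchain is zig’s bundled clang (a hedged sketch, assuming zig is installed; not an official Storj recipe):

CGO_ENABLED=1 GOOS=linux GOARCH=amd64 \
    CC="zig cc -target x86_64-linux-gnu" \
    go build ./cmd/tools/piecestore-benchmark/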

You people are exploiting new storage node holders by sending garbage for the last three months. I have a 13.1 TB storage node with ID 1QGFX2rNUSoK4yi5AdG5yf1Jc3KS97vpjoKEorgNoWBMvuzRY2.

For the last three months my used storage has stayed at 1.5-1.6 TB, even after consuming over 1.5 TB of bandwidth month after month. For the first month and a half I had around 1.9 TB stored; now my node holds 1.6 TB out of 13.1 TB. My second storage node, which is on a different IP, has the same issue. I am thinking of stopping the node at the end of this month. No proper explanation was given by Storj; they only sent generic replies like “check your logs” and “update your node”. Since I am running the node on Windows, it updates automatically. I am not running the node to earn 3 to 4 dollars a month. As far as I know, Storj is struggling to get new customers the way it used to. Even the community-maintained earnings spreadsheet is far from reality.

What’s your question or concern?

It’s not that complicated. If you have extra resources available – run the node. If you don’t — don’t.

Storj does not owe anyone an explanation of how they use the resources node operators provide. You get paid according to the published and agreed-upon schedule.

Ingress of massive amounts of data with a very short TTL is possible and normal. You agreed to that. So I’m not sure what the concern is.


The main concern is that the dream Lambo has to wait. :grin:


Only 198 more years to go… :smile:
