Mass testing data from saltlake?

My nodes have been overloaded with mass testing data from saltlake since yesterday. Anyone else?

Cannot confirm.

Wow. But why?

Still within a reasonable range; there is no pressure on the nodes.

2 Likes

You can put that satellite on your untrusted satellites list, then you won’t have to worry about ingress :slight_smile:

3 Likes

Mine is similar in shape, but extremely small. This node gets 10x to 20x as much from us1. Odd.

1 Like

Yeah, but it pays as well as the other satellites; I never understood why people untrust it.
It was the best-paying satellite of them all for some of my nodes, up until 2023.
Unfortunately it went inactive. Good to hear it’s becoming active again!

1 Like

The data is not persistent; it’s test data and will be deleted. A waste of resources.

My node is under heavy load from saltlake. Can anyone from STORJ confirm if this is truly test data?

1 Like

All data on Saltlake is test data

3 Likes

Isn’t that rude? If the company that pays you wants to test something, why not let them?
Especially since they pay for it.

4 Likes

It is early in the morning, and I need to sync up with the team first. For now I can confirm this is test data. There is no official statement yet because it was kind of a last-minute decision on Friday evening. I would expect an official statement today.

Regardless of the official statement, I would like to use this to collect some IOPS metrics. For my own node I am using this lovely Grafana board: Storage I/O Statistics | Grafana Labs

If you are using Grafana already, it would be great if you could add that dashboard to your Grafana instance.
If you always wanted to set up Grafana, now you have one more reason. However, Grafana isn’t easy, and I don’t expect every node operator to set it up now. If you don’t have the time or experience, that’s fine as well. Just a few nodes measuring IOPS would be great.
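
If Grafana is too much right now, even a rough number helps. As an illustration only (not an official tool), here is a minimal Python sketch that samples IOPS with psutil; the device name is just an example and needs to be pointed at the disk that holds your node data:

```python
import time
import psutil  # assumption: psutil is installed (pip install psutil)

DEVICE = "sda"    # example device name; change this to your node's data disk
INTERVAL = 10     # seconds between samples

prev = psutil.disk_io_counters(perdisk=True)[DEVICE]
while True:
    time.sleep(INTERVAL)
    cur = psutil.disk_io_counters(perdisk=True)[DEVICE]
    # IOPS = completed read/write operations since last sample, divided by the interval
    read_iops = (cur.read_count - prev.read_count) / INTERVAL
    write_iops = (cur.write_count - prev.write_count) / INTERVAL
    print(f"{time.strftime('%H:%M:%S')}  read: {read_iops:.1f} IOPS  write: {write_iops:.1f} IOPS")
    prev = cur
```

Run it during the test and note the rough peak values.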

3 Likes

Yeah, it’s paid, but not much, and test data usually gets deleted soon. I just want to know the reason for stressing my disks.

I’m not sure about others, but one of my nodes is experiencing large ingress, maybe 100+ Mbit/s, and the disk can’t quite keep up, which causes the node to crash. The recent surge in node crash reports may be related.

Another node is at a steady 15 Mbit/s (it is usually at 3-5 Mbit/s) and it handles the traffic fine.

1 Like

Well, maybe they’re checking whether you can keep up with Nexus Mods!? ;>
Wouldn’t it be ironic if Amazon’s TV show brings Storj to power, so in the end it rises and kills Amazon’s AWS? ;>

2 Likes

Hello,
Do you have a good tutorial on how I can set it up with a Docker node (or even 7 :sweat_smile:)?
The last time I tried it, I couldn’t get it to work.

Thank you in advance

If anyone on the Storj team is reading this post: I’d just like to say I appreciate any and all test data you send my way. Hit me with a firehose of ingress! Allow me to bathe in all the 1’s and 0’s! :wink:

(And have I really seen a couple SNOs complaining their nodes can’t keep up with traffic? Like money is falling from the sky and the problem is they can’t catch it all? Kids these days…)

11 Likes

I suspect this is among other things to find out how many nodes are running on a potato and will faceplant once real customer traffic picks up.

I see about 17 Mbps of inbound traffic on a node in WA and about 27 Mbps in CA (two nodes on the same IP – but that should not matter, and the difference should be geographical).

On the WA node the network traffic increased from 4.4 Mbps to 17 Mbps; the load on the disks increased from 4 IOPS on average to 6 on the data drives (1x 4-disk raidz1), and from 40 IOPS to 55 on the metadata drives (2-disk mirror).

That node could tolerate at least triple the current load, leaving enough room to do all the other things this server exists to do.

The same stats on the CA dual-node setup (2x 4-disk raidz1 data, 2-disk mirror metadata):

Inbound traffic: 5 Mbps → 24 Mbps
Data drives: 2 IOPS → 4.5 IOPS (ceiling 240 IOPS)
Metadata drives: 75 IOPS → 200 IOPS (ceiling 40,000 IOPS)

In other words, the load from this test is still negligible.
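
To put “negligible” in numbers, a quick back-of-the-envelope check of the CA peak figures against the quoted ceilings (just restating the numbers above):

```python
# Utilization of the CA pools at the peak figures quoted above.
data_iops, data_ceiling = 4.5, 240
meta_iops, meta_ceiling = 200, 40_000

print(f"data drives:     {data_iops / data_ceiling:.1%} of ceiling")   # ~1.9%
print(f"metadata drives: {meta_iops / meta_ceiling:.1%} of ceiling")   # 0.5%
```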

4 Likes

I don’t get why a system or the storagenode would crash because of high IOPS or high ingress on the drives… If I move some files from one drive to another on the same PC, which can max out the drives, I don’t see the PC crashing… I can only imagine the storagenode software being buggy if it crashes under heavy load.