Updates on Test Data


http://www.th3van.dk/mrtg-storj-graph-th3van.dk.html

Th3Van.dk

1 Like

Yes, “normally” it would be around 1800-1900 W, but since the test started it has gone up to about 2030 W.

Th3Van.dk

2 Likes

Like a Bitmain S9… good old times. :sweat_smile:

Good Evening Storlings,

One question: did you try splitting uploads across more than one node behind one IP, i.e. no longer treating 5 nodes behind one IP as a single node?
Splitting uploads across all 5 nodes would mean 20% usage on each node and no overload.
Perhaps this could help in getting constant uploads even on slower hardware.

Greetings Michael

1 Like

This is what I see for my nodes. So nothing to change. :wink:

I’m not sure I got all that TTL stuff right and how it works, but it looks like I will have some robocopy runs to do on 16TB disks.
Does a file with a TTL need storagenode.exe running in order to expire and be deleted, or is the TTL hardcoded in the files?

I wonder how a robocopy of a 16TB disk will go for those who cannot simply clone it. Say a file with a TTL gets copied to the new disk, but the node obviously isn’t running there yet, and after 6 (or ~30) days the migration is complete, but in the meantime the TTL has expired. Will the file be deleted from the new location by itself, or does the node need to run on those files to trigger some GC?

Edit:

Oh, thanks, so after migration the node will clean out files with an overdue TTL by itself, got it!

TTL information is stored in piece_expiration.db; storagenode.exe checks it frequently to determine what to delete.
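As an illustration, something like the following could inspect such a database with plain SQLite. This is only a sketch: the table and column names (`piece_expirations`, `piece_expiration`) are assumptions based on the database’s purpose, so verify them against your own node’s schema before relying on it.

```python
# Hedged sketch: counting already-expired pieces in a piece_expiration.db.
# Table/column names are assumed, not confirmed; check your node's schema.
import sqlite3
from datetime import datetime, timezone

def count_expired(db_path, now=None):
    """Count pieces whose TTL timestamp is already in the past."""
    if now is None:
        now = datetime.now(timezone.utc)
    con = sqlite3.connect(db_path)
    try:
        (n,) = con.execute(
            "SELECT COUNT(*) FROM piece_expirations WHERE piece_expiration < ?",
            (now.isoformat(),),
        ).fetchone()
        return n
    finally:
        con.close()
```

The key point for migration is that this is pure metadata: nothing in the copied blobs deletes itself, so expired pieces only disappear once a running storagenode process consults this database.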

1 Like

Today I had to restart my other node PC because of network lags. Same as with the first node, it wouldn’t stop, so I restarted the PC as a whole.

It has been about a week now, and I don’t see any additional nodes on 1.105.4. Was the cursor paused?

Possible. At some point we automated the cursor. From our side it is just 3 commits or so, and the cursor does the steps in between them. The idea is that it doesn’t go from 0 to 100 all on its own, at least not for now. We will do the first rollout and check that everything looks good before moving on to the next commit. Today is a holiday, so maybe tomorrow it will continue, if the team remembers that it is time for the next commit. That could also get lost among too many other tasks.

3 Likes

Sounds like we can expect that the test data will never get wiped, but rather replaced, on every node?
If so, it could definitely be interesting for an SNO like me with spare disk shelves lying around to order some new disks and spin up new nodes.

I mean, come on, the fact that you talk about spinning up surge nodes sounds more like a “call to action” to me :wink:

I am pretty sure I’m not the only SNO who could easily bring up hundreds of TB if we know it is worth it, so just let us know about the signed contract :>

In the meantime I will clean up the good old NetApp DS4246, just in case…

Yes, but if you spin up surge nodes you own the risk. If they do it, then the risk is on them.
Remember, as far as we know this is all speculation at the moment. There are no firm deals struck yet (as far as we have been told) and no guarantee that we will get test data replaced with customer data.

Like @BrightSilence, I am expanding cautiously. I just brought a new 20TB SAS drive online and won’t be spending more money until that one hits at least 15 TB used.

2 Likes

…and it doesn’t sound like they need more nodes with just raw space: they need them with 1Gbps-or-faster Internet connections. As @littleskunk said:

“Bringing online a 1 PB of storage on a 100MBit/s connection isn’t going to help.”

It sounds like SNOs with fast connections could take the majority of the capacity-reservation data… because they’re the ones that can keep up with it constantly being deleted+re-uploaded.

1 Like

Buying new hardware is a bit difficult at the current rates, especially if the new data is write-only (a lot of traffic for comparatively little storage and almost zero egress). My server has 5 drive slots left (it would be 6 if I took the time to replace the backplane, as the current one has a bad slot). I could also replace smaller drives with larger ones, so in theory I could expand the node by a lot, but I do not know if it would be worth it, even if (and that’s a big if) those drives could be filled.

OTOH, there is some free space in the pool anyway, so I do not have to buy anything for now, just expand the virtual disk as it fills up.

3 Likes

That’s true, and it is a good move from Storj to take the risk instead of putting it on the shoulders of the SNOs. But it would help a lot to communicate clearly and quickly: if the customer(s) decide to sign, we should know, in order to be prepared.

I just ordered one more 18TB disk and will proceed like you, and I think like many other SNOs here.

I don’t know exactly what your experience is, but most of my nodes are on 10G/10G or at least 1G/1G, and I have never seen more than 180-200 Mbps per node so far…

Were these speed requirements of up to 1G already tested? Any experiences?

I think I saw 300 Mbps for a short time a while ago. But as I understand it, this is less about the amount of available space and more about speed: the concern is that if the fast nodes fill up (and they will fill up first, since they get more traffic than slow nodes), the remaining nodes, even if they have the space, won’t be fast enough.
So getting a lot of space online, but on a slow connection, won’t help.

That would make sense if a whole file were being downloaded from a single slow SNO.
But you’ll get 28 pieces out of however many it tries first (I can’t remember the exact numbers), so even if some SNOs are relatively slow, the parallelism should compensate for that.
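The “fastest k of n” argument above can be sketched numerically. This is purely illustrative: the numbers (28 pieces needed, 39 attempted, the latency range) are assumptions for the example, not Storj’s actual erasure-coding parameters.

```python
# Illustrative sketch: a transfer that needs the fastest k of n attempted
# nodes completes at the k-th fastest latency, not the slowest node's.
import random

def transfer_time(node_latencies, k):
    """Time until the k fastest of the attempted nodes have responded."""
    return sorted(node_latencies)[k - 1]

random.seed(0)
latencies = [random.uniform(0.05, 2.0) for _ in range(39)]  # assumed: 39 nodes tried
print(f"slowest node: {max(latencies):.2f}s, "
      f"completion with k=28: {transfer_time(latencies, 28):.2f}s")
```

This shows why a few slow nodes are masked by parallelism; the follow-up reply explains why that masking apparently wasn’t enough in practice.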

Apparently it doesn’t, at least not enough. According to the previous posts, Storj found that the network was too slow, so they made some changes (reduced the data expansion factor, changed node selection to prefer faster nodes, etc.). So now faster nodes get more traffic than slower ones, and if they fill up, the remaining slower nodes may not be enough. Apparently the new customer is planning to upload data at 100 Gbps or some other high speed.

Well, in fact I think this could be a possible reason.
The next question is about possible drops in data distribution.

If there are SNOs with sufficient bandwidth spinning up new nodes one by one in the same /24 subnet, because disks fill up fast, then the incoming traffic will be divided among all available nodes… Will the traffic be divided by the number of online nodes in the subnet, or by the number of online nodes in the subnet that have space available?

If it is divided by online nodes without considering available space, a bottleneck could grow: the number of full nodes rises while the incoming traffic is divided among more and more nodes, including the full ones. That would lead to declining incoming traffic on the nodes that still have space available.
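The two policies being contrasted can be sketched as simple arithmetic. This is a hypothetical model of the question, not a description of how the satellite’s node selection actually works; the node counts are made up for the example.

```python
# Hedged sketch of the two policies discussed above: does a subnet's traffic
# get split across all online nodes, or only across nodes with free space?
def share_per_node(online, with_space, count_full_nodes):
    """Fraction of the subnet's traffic one non-full node receives."""
    pool = online if count_full_nodes else with_space
    return 1.0 / pool if pool else 0.0

# Assumed example: 5 online nodes in the /24, 2 of them already full.
print(share_per_node(5, 3, count_full_nodes=True))   # 0.2 - full nodes dilute the share
print(share_per_node(5, 3, count_full_nodes=False))  # ~0.33 - only nodes with space count
```

Under the first policy, every node that fills up permanently shrinks the share of the remaining nodes, which is exactly the bottleneck described above.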

From my interpretation, it was too slow on the upload (to the servers). I don’t think they ever tested downloads.

1 Like