Throughput seems a bit lower today, and we didn’t have the “morning dip”, so the pattern has changed a little.
A bit off-topic.
I see these graphs quite often. How do you create them, or can anyone else tell me?
I see the same thing: for a few days they still seemed to be in go-hard-or-go-home mode and were pushing for max throughput. But yesterday and today (so far) they seemed more chill: pushing 75% of max, just enough to keep up with the capacity reservation.
Maybe they backed off a bit because 1.105.4 is also slowly being rolled out, and they know there will be lots of node restarts and used-space filewalkers running?
You do see those green RRDtool graphs all over the place. However, it’s not something you tend to use directly: it’s usually just the graphing component of some other monitoring tool. I’d check your router’s web UI first to see if it offers any graphs… or if you’re using something like pfSense, there are some built in (under Status → Monitoring). Or tell us whether you’re on Windows or Linux, and then someone can tell you which tool they use.
I use two monitoring systems: Cacti (graphs with a white background) and Zabbix (graphs with a black background).
Traffic graphs are easy to create; you just need to install an SNMP service on the VM (or wherever your node runs), and Cacti already has a graph template for interface traffic.
The other graphs I sometimes post are custom. I have uploaded some of the scripts I use to my GitHub, but there is no documentation.
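If you just want a quick number without setting up Cacti or Zabbix, you can poll the same SNMP counter those tools graph yourself. A minimal sketch, assuming pysnmp is installed, the agent uses the default “public” community, and interface index 1 is the one carrying node traffic (all three are assumptions — adjust for your setup):

```python
# Rough inbound-throughput check via SNMP; a sketch, not a monitoring tool.
# Ignores counter wrap and assumes the agent answers SNMPv2c on port 161.
import time
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def in_octets(host, if_index=1):
    # ifHCInOctets: 64-bit received-bytes counter from IF-MIB
    err, status, _, var_binds = next(getCmd(
        SnmpEngine(), CommunityData("public"),
        UdpTransportTarget((host, 161)), ContextData(),
        ObjectType(ObjectIdentity("IF-MIB", "ifHCInOctets", if_index))))
    if err or status:
        raise RuntimeError(err or status.prettyPrint())
    return int(var_binds[0][1])

HOST = "192.168.1.1"  # hypothetical address; use your router or node VM
a = in_octets(HOST)
time.sleep(10)
b = in_octets(HOST)
print(f"inbound ~{(b - a) * 8 / 10 / 1e6:.1f} Mbps")
```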
Thanks. I was thinking that maybe my node is too slow. I even disabled the filewalker, but the traffic did not go up.
Yep.
I started a node on the 15th, and it was using about 170–180 Mbps with morning dips from the 15th through the 18th. Yesterday’s maximum was around 150 Mbps. Today has been around 110–120 Mbps without the morning dip, just a constant flow of traffic since about 3 PM CET on the 18th.
Yes, normally it would be around 1800–1900 W, but since the test started it has gone up to about 2030 W.
Th3Van.dk
Like a Bitmain S9… good old times.
Good Evening Storlings,
One question: did you try splitting uploads across more than one node behind one IP, i.e. no longer treating 5 nodes behind one IP as one node?
Splitting the upload across all 5 nodes = 20% usage on each node and no overload.
Perhaps this could help in getting constant uploads even on slower hardware.
Greetings Michael
This is what I see for my nodes. So nothing to change.
Not sure if I got all that TTL stuff right and how it works, but it looks like I will have some robocopying to do on 16 TB disks.
Does a file with a TTL need storagenode.exe in order to delete itself when it expires?
Or is it hardcoded in the files themselves?
Because I wonder how a robocopy of a 16 TB disk will go for those who cannot just clone it. Say a file with a TTL gets copied to the new disk, but the node obviously isn’t running there yet. Say the migration completes after 6 (or ~30) days, but in the meantime the TTL hour has struck: will the file be deleted from the new location on its own, or does the node need to run against those files to trigger some GC?
Edit:
Oh, thanks, so after the migration the node will clean out files with an overdue TTL by itself. Got it!
TTL information is stored in piece_expiration.db; storagenode.exe checks it frequently to see what to delete.
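If you’re curious how much is already overdue (e.g. after a long migration), you can peek at that database while the node is stopped. A minimal sketch, assuming the table is named piece_expirations with a piece_expiration timestamp column — these names are assumed from the storagenode schema and may differ between versions, so verify them first:

```python
# Count pieces whose TTL has already passed; a sketch, not an official tool.
import sqlite3
from datetime import datetime, timezone

# Path is relative to your storage directory; adjust as needed.
db = sqlite3.connect("piece_expiration.db")
now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
# Table/column names assumed; check with:
#   SELECT name FROM sqlite_master WHERE type='table';
(count,) = db.execute(
    "SELECT COUNT(*) FROM piece_expirations WHERE piece_expiration < ?",
    (now,),
).fetchone()
print(f"pieces past TTL, awaiting deletion: {count}")
db.close()
```

Which is also why the robocopy scenario above works out: the database moves with the data, so once the node runs against the migrated disk it deletes anything whose TTL expired in the meantime.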
Today I had to restart my other node PC because of network lag. Same as with the other node, it wouldn’t stop, so I restarted the whole PC.
It has been about a week now, and I don’t see any additional nodes on 1.105.4. Was the cursor paused?
Possible. At some point we automated the cursor. From our side it is just 3 commits or so, and the cursor does the steps in between those 3 commits. The idea is that it doesn’t go from 0 to 100 all on its own, at least not for now. We do the first rollout and check that everything looks good before moving on to the next commit. Today is a holiday, so maybe it will continue tomorrow, if the team remembers that it is time for the next commit. That could also get lost among too many other tasks.
Sounds like we can expect that the test data will never get wiped, just replaced, on every node?
If so, it could definitely be interesting for SNOs like me, with spare disk shelves lying around, to order some new disks and spin up new nodes.
I mean, come on, the fact that you talk about spinning up surge nodes sounds more like a “call to action” to me.
I am pretty sure I’m not the only SNO who could easily bring up hundreds of TB if we knew it was worth it, so just let us know about the signed contract :>
In the meantime I will clean up the good old NetApp DS4246, just in case…
Yes, but if you spin up surge nodes you own the risk. If they do it, then the risk is on them.
Remember, as far as we know this is all speculation at the moment. There are no firm deals struck yet (as far as we have been told) and no guarantee that we will get test data replaced with customer data.
Like @BrightSilence, I am expanding cautiously. I just brought a new 20 TB SAS drive online and won’t be spending more money until that one hits at least 15 TB of use.
…and it doesn’t sound like they need more nodes with just raw space: they need them with 1Gbps-or-faster Internet connections. As @littleskunk said:
“Bringing online a 1 PB of storage on a 100MBit/s connection isn’t going to help.”
It sounds like SNOs with fast connections could take the majority of the capacity-reservation data, because they’re the ones who can keep up with it constantly being deleted and re-uploaded.
Buying new hardware is a bit difficult at the current rates, especially if the new data is write-only (so, a lot of traffic for comparatively little storage and almost zero egress). My server has 5 drive slots left (it would be 6 if I took the time to replace the backplane, as the current one has a bad slot). I could also replace smaller drives with larger ones, so in theory I could expand the node by a lot, but I don’t know if it would be worth it, even if (and that’s a big if) those drives could be filled.
OTOH, there is some free space in the pool anyway, so I do not have to buy anything for now, just expand the virtual disk as it fills up.