Updates on Test Data

Just to make things clear, your dashboard is showing data after expansion, correct?

Is that 20PB target after expansion?

Results after the last Saltlake GC for one of my nodes: 1,928,063 pieces deleted, which took 68h while receiving ingress. The amount deleted was around 500GB.
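For a sense of scale, a quick back-of-the-envelope calculation from those numbers (treating the 500GB as the total logical size of the deleted pieces, which is only approximate):

```python
# Rough GC throughput from the numbers above (500GB is an approximation).
pieces_deleted = 1_928_063
duration_hours = 68
bytes_deleted = 500e9  # ~500 GB

avg_piece_kb = bytes_deleted / pieces_deleted / 1e3
pieces_per_sec = pieces_deleted / (duration_hours * 3600)

print(f"Average piece size: ~{avg_piece_kb:.0f} KB")         # ~259 KB
print(f"Deletion rate:      ~{pieces_per_sec:.1f} pieces/s")  # ~7.9 pieces/s
```

So that run averaged roughly 8 piece deletions per second while still taking ingress.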

I hope my drive can keep up with all the pieces to be deleted from these tests.

2 Likes

The answer should be obvious. Take a look at the grafana dashboard once more. Is it showing customer data before or after expansion? Why is it called customer data in the first place and not total used space?

I noticed extreme CPU usage in the last few days. It has never been a problem on my system… did something change? Is it a shared problem, or just my configuration?

1 Like

It's not obvious. Before expansion it's customer data. After expansion… it's still customer data (that just takes up more space). In a graph like this I think a reasonable person would say if there's 29.1PB used by customers now…

[Screenshot: 2024-06-17_usage (dashboard usage graph)]

…that it's "not even half full". Like you could easily fit another 29.1PB of customer data in the free 37.2PB. But are you saying that because that free space is "raw", it really couldn't take an expanded (29.1PB customer * 2.2 expansion = 64PB) worth of data? We're effectively closer to two-thirds full now? That would be a misleading report: not something you could make decisions from.

I'm getting confused as to what I'm seeing now :slight_smile:

SNOs deal in expanded/actually-used space. And it sounds like satellites can now handle multiple different expansion factors at once. So any report with un-expanded/variable-expansion-factor numbers isn't showing useful data, because you don't know how much space expanded customer data will take?
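Here's the arithmetic behind my "closer to two-thirds full" guess; a minimal sketch assuming a flat 2.2x expansion factor (the real per-segment factor may differ):

```python
# Back-of-the-envelope fill estimate, assuming a flat 2.2x expansion factor.
# The 29.1 PB / 37.2 PB figures are the dashboard numbers quoted above.
customer_data_pb = 29.1   # unexpanded, what customers pay for
free_raw_pb = 37.2        # raw free capacity reported by nodes
expansion = 2.2

expanded_pb = customer_data_pb * expansion   # ~64 PB actually on disk
total_raw_pb = expanded_pb + free_raw_pb     # ~101 PB raw capacity
fill_ratio = expanded_pb / total_raw_pb      # ~0.63, i.e. roughly two-thirds

print(f"On-disk (expanded): {expanded_pb:.1f} PB")
print(f"Estimated fill:     {fill_ratio:.0%}")
```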

Within that graph, the following two values (over ALL satellites) are compared:

max(storj_stats_storage_free_capacity_estimate_bytes) - green

statistical estimate of free storage node capacity, with suspicious values removed

sum(storj_stats_storage_remote_bytes) - blue

number of bytes stored on storage nodes (does not take into account the expansion factor of erasure encoding)

Because the expansion factor is not taken into account, I called that value stored customer data. Of course it's a bit like comparing apples to oranges, but that's the closest I could get without modifying (and thus possibly falsifying) the reported data.

Yes, you are right, so it is important to understand the values you are looking at. In my opinion it is still useful data, because I, as an SNO, am interested in seeing if/how the network is utilised, which is reflected by those numbers.

3 Likes

Does that answer my question about the 20PB target somehow?

I'm not interested in the grafana dashboard. I'm interested in what you referred to here:

Thank you for explaining it! I can remember to sorta halve the free-space number to account for expansion: and it does make it clear why Storj has been asking the community for more nodes lately (as we're closer to 'full' than I thought).

And even if it's mixing slightly different stats: it is an accurate report on data-customers-pay-$4/TB/m-for vs. space-SNOs-could-be-paid-$1.5/TB/m-for.

1 Like

I know you aren't asking me: but what I think I heard is that Storj is willing to pay SNOs their $1.50/TB/m rate to hold 20PB-on-disk-on-node of capacity-reservation data. But really, with a 2.2x expansion, that only represents the space required to hold about 9PB of paid customer data?
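Spelling that division out (again assuming the 2.2x figure applies uniformly, which it may not):

```python
# Hypothetical: how much unexpanded customer data a 20 PB on-disk capacity
# reservation corresponds to, assuming a flat 2.2x expansion factor.
reserved_on_disk_pb = 20.0
expansion = 2.2

customer_equiv_pb = reserved_on_disk_pb / expansion
print(f"~{customer_equiv_pb:.1f} PB of customer-data equivalent")  # ~9.1 PB
```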

(If I got it wrong: somebody will tell me shortly :wink: )

What I think I heard is someone crying "the wolves are coming!" in the distance, if you know what I mean :wink:.

1 Like

Most of my nodes are now full and will not be able to deliver the same throughput as they have been for the last few weeks.

I did order an additional 0.2PB worth of node hardware, though it will take a month before they can all be brought online.

5 Likes

The forum search can tell you if the grafana dashboard is showing the numbers including or excluding the expansion factor. For example here: Publicly Exposed Network Data (official statistics from Storj DCS satellites) - #30 by Arkina

The rest is simple math. I have given you some numbers. You can try to make them match the grafana dashboard, and you will quickly find out that our internal dashboard has to show the same numbers; otherwise it would have to show a different scale.

Too many connections spinning up CPU usage in my case.

After almost 1 month of testing, is there any update on how long the test phase will last?
Hopefully it ends soon and stops being the opium of the people.

Please scroll all the way up and read the top post. You'll find an answer right there.

2 Likes

So the "new normal" is this massive test data with increased power consumption? :smiley:

@Th3Van, did you notice any increase in total power consumption in your setup? tyia

http://www.th3van.dk/SM01-power-consumption.txt

It says total watt consumption is ~2050W, and at the beginning of the test campaign it was around 1850W, if I recall correctly.

1 Like

Yes, the tests are emulating the expected customer use case. Best case, the test traffic will be replaced by customer traffic.

8 Likes

I have a 12-bay NAS and a 10-bay dumb USB attached storage box. That 10-bay storage box uses pretty much the exact same power as before. Around 78 watts. The NAS went from about 108 watts up to about 137 watts. Undoubtedly mostly the additional CPU use. This is with 14 nodes + 1 on testnet that is mostly idle.

4 Likes

Around a +25% increase. Were these 137W measured today, or when the test data was around 120Mbps per node?

Average of yesterday. The way I understand it, the current tests are the expected behavior, so I thought that would be fair. Though +25% isn't really correct: on the CPU side it's much more, as that previous 108 watts included 12 HDDs. Assuming they use about the same as the HDDs in the external bay, those HDDs accounted for around 94 watts, leaving the CPU and the rest of the system at only about 14 watts. That jumped to 45 watts. The CPU has a TDP of 35 watts (I know, that's not equal to usage). Since those 45 watts include the rest of the system as well as 2 NVMe SSDs, it seems the system is working at close to max wattage most of the time.

Anyway, the system without disks going from 14 to 45 watts is more like a 200+% increase. If you want to include disks, you should also include the external box, as it runs off the same system: 186 -> 215 watts, so about a 15% increase. Nothing too shocking. But this is a very energy-efficient system; heavier systems will almost certainly see even less of an increase, as their baseline usage is already higher.
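For anyone checking my math, the rough breakdown (the CPU/disk split is my own estimate, not a per-component measurement):

```python
# Rough power numbers quoted above; the HDD share (~94 W) is an estimate.
nas_before, nas_after = 108, 137   # watts, 12-bay NAS total
box = 78                           # watts, 10-bay USB box (unchanged)
base_before, base_after = 14, 45   # watts, NAS minus its ~94 W of HDDs

def pct(old, new):
    return (new - old) / old * 100

print(f"NAS only:         +{pct(nas_before, nas_after):.0f}%")                # ~27%
print(f"System w/o disks: +{pct(base_before, base_after):.0f}%")              # ~220%
print(f"NAS + USB box:    +{pct(nas_before + box, nas_after + box):.0f}%")    # ~16%
```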

1 Like