Updates on Test Data

As far as I understand, it’s handled automatically: only one node (according to its success rate on the satellite) is selected for uploads. If you want to increase the priority of some nodes, just shut down everything else, no?

Just shut it down. Is that complicated?

It’s very simple. Either reduce the allocated space below the used space or shut it down.


Is there a plan for how these TBs of test data will be replaced by the magical big customer?

I’m starting to see a lot of trash building up, and I wonder whether the plan is to drop all of this test data in one go or gradually as the new customer data flows in.

The uploaded test data has a TTL, which means it gets deleted without ever touching the trash.
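You can spot-check this on a node if you’re curious (a sketch only; the piece_expiration.db file and piece_expirations table names are assumptions about the storagenode databases, so adjust the path and names to your setup):

sqlite3 /storj/storage/piece_expiration.db \
  "select count(*), max(piece_expiration) from piece_expirations;"

Pieces listed there are removed directly once their expiration passes, instead of being moved to the trash folder first.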


Well, I’m seeing an awful lot of trash as well. Wasn’t expecting that.

Look at one of my nodes:


In the current configuration it’s not possible.
The PC has no WiFi adapter.
After moving I will try it with the smaller node once it’s located in the basement, one floor away from the router.
Maybe I’ll consider an entirely new setup with the router in the basement, after the move.

LAN 4 can be assigned to the guest network.

It’s not near the router.
Good tip for later, maybe.

Well, I’m seeing an awful lot of trash as well. Wasn’t expecting that.

I have been calling this out for a few days now in another thread - I thought it was just me.


If your server is dedicated to Storj only, you can set a low priority for your Storj server on the Fritzbox to prevent bufferbloat on the upstream. On the operating system side you can also use software to limit the transfer rate: on Linux you can use “wondershaper”, on Windows “NetSpeed Limiter”. Maybe it’s a second solution you can consider.
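For the Linux option, a minimal sketch with wondershaper (assuming the current fork that takes -a/-d/-u flags; the interface name eth0 and the rates are placeholders to adapt):

# cap eth0 at roughly 100 Mbit/s down and 50 Mbit/s up (values are in Kbps)
sudo wondershaper -a eth0 -d 102400 -u 51200
# clear the limits again
sudo wondershaper -c -a eth0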

Perhaps the reason is

But only in the case where the same object was replaced under the same key, not when new data is uploaded under a different name. Still, I didn’t expect it to be that much.
Could you please check which satellite that is?

Nice tip, but I will use what I have atm.

None of them are

I also have a bunch of trash (similar in absolute amount, but less if you take into account the total size)

Interestingly, it does not seem to be from any particular satellite. IIRC the trash amount for a satellite is calculated by subtracting the content_size column from the total column in the database.
From the piece_space_used database:

004AE89E970E703DF42BA4AB1416A3B30B7E1D8E14AA0E558F7EE26800000000|3072
A28B4F04E10BAE85D67F4C6CB82BF8D4C0F0F47A8EA72627524DEB6EC0000000|25200214528
AF2C42003EFC826AB4361F73F9D890942146FE0EBE806786F8E7190800000000|3214561792
84A74C2CD43C5BA76535E1F42F5DF7C287ED68D33522782F4AFABFDB40000000|703826944
7B2DE9D72C2E935F1918C058CAAF8ED00F0581639008707317FF1BD000000000|51618955776
F474535A19DB00DB4F8071A1BE6C2551F4DED6A6E38F0818C68C68D000000000|184395776
04489F5245DED48D2A8AC8FB5F5CD1C6A638F7C6E75EFD800EF2D72000000000|27313152

select satellite_id,total-content_size from piece_space_used where satellite_id="trashtotal";     
trashtotal|2802250434048

hmm…

select satellite_id,total,content_size,total-content_size from piece_space_used where satellite_id="trashtotal";
trashtotal|2802534273536|283839488|2802250434048

From the API:

  "diskSpace": {
    "used": 27298068154954,
    "available": 36000000000000,
    "trash": 2802534273536,
    "overused": 0
  },

I do not know how to interpret this. So the node reports the total column from the trashtotal row as the trash amount, but adding up all the individual per-satellite trash amounts does not get anywhere near that number.
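To compare the two directly, a sketch against the same piece_space_used table (the piece_spaced_used.db file name is an assumption; adjust the path to your node’s storage directory). It just sums the same per-satellite total-content_size differences as above and excludes the trashtotal row:

sqlite3 /storj/storage/piece_spaced_used.db \
  "select sum(total-content_size) from piece_space_used where satellite_id != 'trashtotal';"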

df shows this:

Filesystem          1B-blocks           Used      Available Use% Mounted on
/dev/sda1      43632859963392 30821043499008 12646872956928  71% /storj

Logs are also saved on that virtual disk, so the used space should be more than what the node shows.

du agrees with the total and the dashboard though:

558M    6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa
65M     abforhuxbzyd35blusvrifvdwmfx4hmocsva4vmpp3rgqaaaaaaa
4.1M    arej6usf33ki2kukzd5v6xgry2tdr56g45pp3aao6llsaaaaaaaa
2.6T    pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
15G     qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
4.0K    ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
61G     v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
2.6T    total

Then it’s another batch of deletions of the expired free-trial data, I guess.

@littleskunk If you reserve space for future use, why does the occupied space stay the same all the time according to the statistics? 29 TB and not moving up.

It is showing stored customer data across all satellites. I don’t expect test data to count as customer data. You could look into the query behind that graph to check it, or just scroll down to get to the SLC stats.

Happy Cake Day @littleskunk ! It says you joined 5 years ago today!


Based on the data from https://stats.storjshare.io/data.json, all satellites including Saltlake carry 29034784160219136 bytes = 29.03 PB, which is what the graph @Vadim posted seems to show.
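If you want to reproduce that sum yourself, a rough sketch with curl and jq (the bytes_stored field name is purely a placeholder; check the actual key names in data.json first):

curl -s https://stats.storjshare.io/data.json \
  | jq '[.. | .bytes_stored? // empty] | add'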


Looks like maybe you overwrite some data? Is that why it is not rising? Because some nodes’ trash has gotten very big.