Current Storj traffic observations

I’m getting a steady flow of ingress traffic averaging about 1.39 megabytes per second on my Raspberry Pi 4 with a 60/60 Mbps internet connection in the USA. The hard drive is an external 2 TB USB drive with about 1 TB of usable space remaining.

I’m guessing this is mostly test data to fill up some of the free space created by the big Stefan Benten delete we had recently, but who knows. At this rate, my available hard drive space would fill in about 9 days, if I did the math correctly. I’ll cap my storage limit soon so I’ll have some room to play with later. Does anybody know the average storage space per node? I’m wondering how long it would take for the network to fill at this rate of ingress. If this is mostly test data, I’m assuming they’ll stop sending it before the network is dangerously full.
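
For anyone checking the math, here is roughly how I got that estimate (a quick sketch using the ~1 TB free and ~1.39 MB/s figures above, in decimal units):

```python
# Rough fill-time estimate from the figures quoted above.
free_bytes = 1e12          # ~1 TB of usable space remaining
ingress_rate = 1.39e6      # ~1.39 MB/s sustained ingress

seconds_to_full = free_bytes / ingress_rate
print(f"days until full: {seconds_to_full / 86400:.1f}")  # ~8.3 days
```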

What traffic are you guys seeing on your nodes? How long before you are full?


I recently expanded the array, so now I have a lot of free space; at this rate it would take months to fill everything.

What is that inbound gap between week 25 and 26? Was your node full or was there a gap in test traffic?
Your 10 megabits per second = 1.25 megabytes per second, so it looks like we’re seeing similar inbound throughput.

I might have to give that rrdtool graphing system a try.
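
Before setting up rrdtool properly, I may just sample the kernel’s interface counters to see what the node is pulling. A rough sketch, assuming Linux and an interface named eth0 (adjust for your setup); note it counts all inbound traffic on the interface, not just Storj, and the same samples could later be fed into an RRD:

```python
# Sample the kernel's receive-byte counter twice and report ingress throughput.
import time

IFACE = "eth0"  # hypothetical interface name; adjust for your system
COUNTER = f"/sys/class/net/{IFACE}/statistics/rx_bytes"

def read_rx_bytes() -> int:
    with open(COUNTER) as f:
        return int(f.read())

interval = 10  # seconds between samples
before = read_rx_bytes()
time.sleep(interval)
after = read_rx_bytes()

bytes_per_sec = (after - before) / interval
print(f"ingress: {bytes_per_sec / 1e6:.2f} MB/s "
      f"({bytes_per_sec * 8 / 1e6:.2f} Mbps)")
```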

Yes, the node thought it was full (space accounting appears to be broken in version 1.5.2) because of a bug that resulted in “trash” space being counted twice, and at the time I could not expand the virtual drive. It turned out I just needed to restart the node.

I upgraded to 15 TB but only have about 3 TB on it so far. So at the current rate of about 2 TB/month, it will take me at least 6 months to fill my node :confused:

I wouldn’t exactly call that slow. I currently have 28 TB of available space, with about 13 TB used. And yes, I’m seeing about 2 TB of ingress per month, but also about 5% of stored data being deleted on average. That starts to add up at some point and slows down the net filling of a larger node.
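
To put rough numbers on that: with roughly 2 TB/month of ingress and about 5% of stored data deleted each month (a simplifying assumption based on the figures above), the net monthly gain shrinks as the node grows and levels off around ingress ÷ delete rate = 40 TB. A toy sketch:

```python
# Toy model: constant ingress, deletes proportional to stored data.
ingress = 2.0       # TB added per month (figure quoted above)
delete_rate = 0.05  # ~5% of stored data deleted per month (rough assumption)

stored = 13.0       # TB currently stored
for month in range(1, 13):
    stored = stored * (1 - delete_rate) + ingress
    print(f"month {month:2d}: {stored:5.1f} TB stored")

# Growth levels off where deletes equal ingress: ingress / delete_rate = 40 TB.
```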

yeah, 1.5 MB/s is a nice pace, though the ingress can be very erratic. My node will be 5 months old in a few days… and I’m at about 10.75 TB stored, so yeah, a 2 TB monthly ingress average seems accurate… still a good number of months out before I need to consider upgrading…

yeah, 2 TB/month is not bad. I’m just a bit impatient because I gave up a 7 TB node when I switched to a raidz with 3×8 TB HDDs :smiley: And now the 16 TB array looks pretty empty with only 3 TB on it…

This happened to me too. I was about to spin up a 3rd node, but when I re-pulled the containers for :latest and started the nodes up, it released over 1 TB of free space, haha! That was a nice surprise.


July 2020 has started out with about 10 times as much egress as ingress for my node.

Looking good so far. Maybe Storj is doing another stress test, like in January… that’d be nice :slight_smile:

haha yeah we all would like that :smiley:

Things have slowed down on my end; I’m now getting about 260 KB/s ingress. My 1 TB of free space will now last about 1.5 months at this rate. Extra time to ponder the purchase of a new drive.

Actually, I might need two new drives. I scanned my bulk storage drive (a Seagate BarraCuda) on my PC and found a lot of bad sectors. Purchased November 1, 2017; warranty expired January 20, 2020. I’ll probably try a different brand, maybe one with a better reputation and warranty.
(screenshot: SMART scan results for the drive)

The health status shows OK because none of the “Current” or “Worst” values are below their “Threshold”.
Basically, the manufacturer is saying that your drive can be expected to have some bad/reallocated sectors.
It makes sense: why have the reallocation system in place if you’re going to replace the drive under warranty after the first bad sector?
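
If you want to apply that rule yourself instead of trusting the summary line, you can compare each attribute’s normalized value against its threshold. A rough sketch, assuming smartmontools is installed and the drive sits at /dev/sda (a placeholder device node):

```python
# Flag SMART attributes whose normalized VALUE has reached their THRESH,
# which is the same rule the overall "health status OK" verdict is based on.
import subprocess

DEVICE = "/dev/sda"  # placeholder device node; adjust for your system

# smartctl uses a non-zero exit status as a bitmask, so don't treat it as fatal.
out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout

in_table = False
for line in out.splitlines():
    if line.startswith("ID#"):          # header of the attribute table
        in_table = True
        continue
    if not in_table:
        continue
    if not line.strip():                # blank line ends the table
        break
    fields = line.split()
    name = fields[1]
    value, thresh = int(fields[3]), int(fields[5])
    if thresh > 0 and value <= thresh:
        print(f"FAILED: {name} (value {value} <= threshold {thresh})")
    else:
        print(f"ok:     {name} (value {value}, threshold {thresh})")
```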

Your drive has a huge load cycle count, though. You should keep your new drive from unloading its heads; it may last longer.

Yeah, I don’t know why it would be that high. When I divide the power-on time by the load cycle count, it comes out to one load cycle every 5 minutes. I don’t think I ever told Windows to put my drives to sleep after such a short time.

HDDs unload their heads independently of Windows settings if there has been no activity for a certain amount of time (maybe it is 5 minutes in your case; some other drives might do it more quickly). So the only way around that would be to run a script that reads from the drive every few seconds, preventing the heads from being unloaded.

Hard drives do it automatically as “power saving”. You can turn this off for some drives. For others you have to run a script that “pokes” the drive every so often (faster than the timeout) to prevent it.
Not only does this make the drive wear out faster, it makes the drive slower too, since it needs a couple of seconds to get ready. Until I made the drives in my file server behave properly, I was annoyed by the time it took to open a folder etc. after some period of inactivity.
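
Something like this minimal sketch is what I mean by a “poke” script (the mount point and interval below are placeholders; the key is to touch the disk more often than its park timeout). I write and fsync rather than just read, because a plain read can be served from the page cache and never reach the disk:

```python
# Keep a drive's heads loaded by generating a little I/O more often than the
# drive's idle/park timeout. Mount point and interval are placeholders.
import os
import time

MOUNT_POINT = "/mnt/storagenode"   # hypothetical mount point of the drive
INTERVAL = 30                      # seconds; keep this below the park timeout
poke_file = os.path.join(MOUNT_POINT, ".keepalive")

while True:
    # Write and flush a timestamp so the I/O actually reaches the disk
    # instead of being absorbed by the page cache.
    with open(poke_file, "w") as f:
        f.write(str(time.time()))
        f.flush()
        os.fsync(f.fileno())
    time.sleep(INTERVAL)
```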

WD Red drives have, AFAIK, a 5-minute limit. For some other drives it may be as short as 10 seconds, I think.

For example, here is one of my drives:

  9 Power_On_Hours          0x0032   038   038   000    Old_age   Always       -       45377
193 Load_Cycle_Count        0x0032   199   199   000    Old_age   Always       -       3129

Some drives accumulated more (before I figured this out):

  9 Power_On_Hours          0x0032   033   033   000    Old_age   Always       -       49167
193 Load_Cycle_Count        0x0032   197   197   000    Old_age   Always       -       9383

@kevink
or change it in the HDD’s own power-management settings… with smartctl or something like that…
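
something like this, for example (a rough sketch; on Linux the usual tool for this is hdparm rather than smartctl, /dev/sda is just a placeholder, and not every drive honors APM commands):

```python
# Query and then raise the drive's APM level via hdparm so it stops parking
# its heads after short idle periods. Assumes Linux, hdparm installed, and
# root privileges; /dev/sda is a placeholder device node.
import subprocess

DEVICE = "/dev/sda"

# Show the current APM setting.
subprocess.run(["hdparm", "-B", DEVICE], check=True)

# 254 is the highest APM level short of disabling APM entirely
# (255 disables it on drives that support that); either usually stops
# the aggressive head unloading.
subprocess.run(["hdparm", "-B", "254", DEVICE], check=True)
```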

@Mark

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   089   089   016    Pre-fail  Always       -       131144
  2 Throughput_Performance  0x0005   137   137   054    Pre-fail  Offline      -       78
  3 Spin_Up_Time            0x0007   137   137   024    Pre-fail  Always       -       417 (Average 420)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       319
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   121   121   020    Pre-fail  Offline      -       34
  9 Power_On_Hours          0x0012   092   092   000    Old_age   Always       -       57183
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       287
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       555
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       555
194 Temperature_Celsius     0x0002   253   253   000    Old_age   Always       -       22 (Min/Max 3/37)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

and yes, it’s running fine
only 6.5 years of power-on time… the error rate is a bit high… but it’s been acting fine lately… it may be from issues with running SAS and SATA on the same backplane… hint: don’t do that… :smiley:

it does seem to be the last of the old 3 TB drives left, though… but it’s not throwing any errors currently… so I’m happy with it… bad sectors are not really a big issue if you have redundancy… besides, you can buy a new drive and have the old one on its way back in no time at all…

don’t worry it’s fine lol

192 Power-Off_Retract_Count 0x0032   001   001   000    Old_age   Always       -       158642
193 Load_Cycle_Count        0x0012   001   001   000    Old_age   Always       -       158642

I guess I should keep an eye on that drive… maybe see if I can turn that off… I can’t remember it being that high…