I’m getting a steady flow of ingress traffic averaging about 1.39 MB/s on my Raspberry Pi 4 with a 60/60 Mbps internet connection in the USA. The hard drive is an external 2 TB USB drive with about 1 TB of usable space remaining.
I’m guessing this is mostly test data meant to fill up some of the free space left by the big Stefan Benten delete we had recently, but who knows. At this rate, my available hard drive space would fill up in about 9 days, if I did the math correctly. I’ll cap my storage limit soon so I’ll have some room to play with later. Does anybody know the average storage space per node? I’m wondering how long before the network is full at this rate of ingress. If this is mostly test data, I’m assuming they’ll stop sending it before the network gets dangerously full.
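For anyone who wants to sanity-check that estimate, here’s a quick back-of-the-envelope sketch; the free space and ingress numbers are just mine, so plug in your own:

```python
# Rough fill-time estimate: free space divided by average ingress rate.
free_space_tb = 1.0      # usable space remaining on the drive
ingress_mb_s = 1.39      # average ingress in megabytes per second

free_space_mb = free_space_tb * 1_000_000        # 1 TB = 1,000,000 MB (decimal units)
seconds_to_full = free_space_mb / ingress_mb_s
days_to_full = seconds_to_full / 86_400          # 86,400 seconds per day

print(f"Time until full: {days_to_full:.1f} days")  # ~8.3 days at these numbers
```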
What traffic are you guys seeing on your nodes? How long before you are full?
What is that inbound gap between week 25 and 26? Was your node full or was there a gap in test traffic?
Your 10 megabits per second is 1.25 megabytes per second (divide by 8), so it looks like we’re seeing similar inbound throughput.
I might have to give that rrdtool graphing system a try.
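In case anyone wants something lighter than a full rrdtool setup to start with, here’s a minimal sketch that samples the interface’s receive counter and prints the ingress rate; the printed value is what you’d feed into rrdtool or any other grapher. The interface name eth0 is an assumption on my part, so check yours with `ip link`:

```python
import time

IFACE = "eth0"  # assumed interface name; check `ip link` on your Pi
RX_PATH = f"/sys/class/net/{IFACE}/statistics/rx_bytes"

def read_rx_bytes() -> int:
    # Kernel-maintained counter of bytes received on this interface.
    with open(RX_PATH) as f:
        return int(f.read())

INTERVAL = 60  # seconds between samples

last = read_rx_bytes()
while True:
    time.sleep(INTERVAL)
    now = read_rx_bytes()
    mb_per_s = (now - last) / INTERVAL / 1_000_000
    print(f"ingress: {mb_per_s:.2f} MB/s")  # push this value into your grapher
    last = now
```

Note that rx_bytes counts all inbound traffic on the interface, not just the storage node’s, but on a mostly idle Pi it should be close enough.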
Yes, the node thought it was full (space accounting appears to be broken in version 1.5.2) because of a bug that resulted in “trash” space being counted twice, and at the time I could not expand the virtual drive. It turned out I just needed to restart the node.
I wouldn’t exactly call that slow. I currently have 28 TB of available space, with about 13 TB used. And yes, I’m seeing about 2 TB of ingress per month, but also about 5% being deleted on average. That starts to add up at some point and slows down the net filling of a larger node.
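To illustrate why the deletes matter, here’s a rough toy projection. It assumes the ~5% means 5% of the stored data gets deleted each month, which is my reading, not necessarily what was meant:

```python
# Toy projection: constant ingress vs. deletes proportional to stored data.
ingress_tb_month = 2.0    # average ingress per month
delete_fraction = 0.05    # fraction of stored data deleted each month (assumed)

stored = 13.0             # starting point in TB
for month in range(1, 13):
    stored += ingress_tb_month - delete_fraction * stored
    print(f"month {month:2d}: {stored:5.1f} TB stored")

# Net growth stops once deletes equal ingress:
# delete_fraction * stored == ingress_tb_month  ->  stored == 2 / 0.05 == 40 TB.
```

So under that assumption a node wouldn’t grow forever; it would level off somewhere around 40 TB.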
yeah, 1.5 MB/s is a nice pace, though the ingress can be very erratic… my node is 5 months old in a few days… and i’m at about 10.75 TB stored, so yeah, 2 TB monthly ingress avg seems accurate… still a good number of months out before i need to consider upgrading…
yeah, 2 TB/month is not bad. I’m just a bit impatient because I gave up a 7 TB node when I switched to a raidz with 3×8 TB HDDs, and now the 16 TB array looks pretty empty with only 3 TB on it…
This happened to me too. I was about to spin up a 3rd node, but when I re-pulled the containers for :latest and started the nodes back up, it released over 1 TB of free space, haha! That was a nice surprise.
Things have slowed down on my end; I’m now getting about 260 KB/s of ingress. My 1 TB of free space will now last about 1.5 months at this rate. Extra time to ponder the purchase of a new drive.
Actually, I might need two new drives. I scanned my bulk storage drive (Seagate BarraCuda) on my PC and found a lot of bad sectors. Purchased November 1, 2017; warranty expired January 20, 2020. I’ll probably try a different brand, maybe one with a better reputation and warranty.
The health status is OK because none of the “Current” or “Worst” values are below their “Threshold”.
Basically, the manufacturer is saying that your drive can be expected to have some bad/reallocated sectors.
It makes sense: why have the reallocation system in place if you’re going to replace the drive under warranty after the first bad sector?
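If you want to apply that rule yourself rather than trusting the overall health flag, here’s a rough sketch that checks each attribute in smartctl’s output. The device path and the exact column layout of `smartctl -A` are assumptions on my part, so adjust as needed:

```python
import subprocess

DEVICE = "/dev/sda"  # assumed device path; adjust for your drive

# Grab the SMART attribute table (needs smartmontools and usually root).
out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    fields = line.split()
    # Attribute rows start with a numeric ID; columns 4-6 are VALUE, WORST, THRESH.
    if len(fields) >= 6 and fields[0].isdigit():
        name = fields[1]
        value, worst, thresh = int(fields[3]), int(fields[4]), int(fields[5])
        # An attribute only counts as failed when the normalized value reaches
        # the threshold (a threshold of 0 means the attribute never "fails").
        if thresh > 0 and (value <= thresh or worst <= thresh):
            print(f"FAILING: {name} value={value} worst={worst} thresh={thresh}")
```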
Your drive has a huge load cycle count though. You should configure your new drive not to unload the heads; it may last longer.
Yeah, I don’t know why it would be that high. When I divide the power-on time by the load cycle count, it works out to 1 load cycle every 5 minutes. I don’t think I ever told Windows to put my drives to sleep after such a short time.
HDDs unload their heads independently of Windows settings if there has been no activity for a certain amount of time (maybe it is 5 minutes in your case; some other drives might do it more quickly). So the only way around that would be to run a script that reads from the drive every few seconds, preventing the heads from being unloaded.
Hard drives do it automatically as “power saving”. You can turn this off for some drives. For others you have to run a script that “pokes” the drive every so often (faster than the timeout) to prevent that.
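A minimal sketch of such a keep-alive script, in case anyone wants one; the mount point below is made up, and the interval just needs to be comfortably shorter than the drive’s idle timeout. It writes and fsyncs a tiny file rather than reading, because a read might be served from the page cache without the disk ever seeing it:

```python
import os
import time

MOUNT_POINT = "/mnt/storagenode"   # assumed mount point of the drive
KEEPALIVE = os.path.join(MOUNT_POINT, ".keepalive")
INTERVAL = 30                      # seconds; keep it well under the idle timeout

while True:
    # Write a few bytes and force them to disk so the drive actually sees I/O.
    with open(KEEPALIVE, "w") as f:
        f.write(str(time.time()))
        f.flush()
        os.fsync(f.fileno())
    time.sleep(INTERVAL)
```

For some WD drives you can also, I believe, change the idle timer in the drive’s firmware (idle3-tools) instead of running a script.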
Not only does it make the drive wear out faster, it also makes the drive slower, since it needs a couple of seconds to get ready. Until I got the drives in my file server to behave properly, I was annoyed by how long it took to open a folder etc. after some period of inactivity.
WD Red has, AFAIK, a 5-minute limit. For some other drives it may be as short as 10 seconds, I think.
And yes, it’s running fine.
Only 6.5 years of power-on time… the error rate is a bit high… but it’s been acting fine lately… it may be from issues with running SAS and SATA on the same backplane… hint: don’t do that…
It does seem to be the last of the old 3 TB drives left though… but it’s not throwing any errors currently… so I’m happy with it… bad sectors are not really a big issue if you’ve got redundancy… besides, you can buy a new drive and it can be going back in no time at all…