Sustained 200 Mbps ingress

I still do not receive any data: no ingress, no egress, no repair, just a few audits. 24 hours have now passed. Is that possible?

Of course; moreover, it's normal.
Make sure the node is online, not suspended, and not disqualified.

No, my nodes are online 24/7. They run as VMs on an ESXi server, with the datastore on HP StorageWorks.

Thanks for the reply.

I'm getting minimal uploads and some downloads… but the downloads aren't a good thing… kinda shot my OS in the head today and killed it dead… apparently a dd command streaming gigabytes of data onto a random disk doesn't confine itself to the unallocated space lol

so now the satellites are insulted over my downtime and want back whatever data they consider important… so not good… but hey, I could have killed the node, and nearly did. It also took a buckshot of 4 GB of zeros, but the redundancy absorbed it… lol

Traffic has been down for me pretty much exactly since midnight between 14.05 and 15.05… even though I have 2 nodes running, I never saw the uptake, maybe because I'm located in Amsterdam.

This looks like what I am seeing since around 6th of May.

Yeah, mine had a brief drop around the 6th and then dropped completely again a few days ago…
very similar, though the total low-activity time is quite different…

dead as a storagenode

Same here. Does this kind of thing happen habitually?

What's the more common behavior, the current one? Since I'm not vetted, I assume it's normal for me to have almost no traffic, but seeing senior SNOs with graphs like that isn't very reassuring.

Wait till they upload 1 TB, then delete it :smiley:
You get paid nothing for ingress and deletes, unfortunately.

There seems to be an issue with node selection, and tests were stopped in the meantime. I don't think many people would have seen the jump in traffic; most probably saw a drop instead, and since the tests have stopped, everyone saw a drop. The point is, low or no traffic isn't a problem; there are scenarios in which that will happen. You only need to worry if your node is offline or failing audits. If it isn't, just leave it be and wait for more data to come in later.

I'm on my 4th scrub… and might do another one if I manage to remove one of the top-level vdevs from my pool. So I'm making good use of the "downtime": trying to migrate from an 8K block size to 512 bytes, because my hardware is mostly 512-byte based, and making sure my data integrity is flawless.
Scrubs take the peak of what my system can perform; my read latency especially suffers, getting up to 100 ms O.O, so I'd rather run them while the network is mostly asleep.

Any news so far regarding the node selection and continuation of the tests?
I sure do miss the steady influx of data.

There was a weekend…

Oh yes, there is very little going on at the moment.

System seems to be running pretty smoothly…

Got a bit more IO delay than I care for, but I'm apparently running on a bad zfs ashift; if the vdevs aren't all on the same block size, it causes additional latency / IO delay…

$ zpool iostat -l
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
rpool       1.98G   137G      0     19  4.78K   250K  817us  976us  549us  230us   29us  297us    1ms  776us  372us      -
testpool    4.78M  4.50G      0      5      5  30.3K  491us  653us  491us  170us    2us    1us      -  485us      -      -
zPool       13.6T  24.6T     11    129   225K  1021K   18ms    6ms   11ms    1ms    2ms    7us   16ms    5ms   10ms  132us
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----

but the system seems to run pretty nicely under low usage… the storagenode is running though…
I like how the total write latency is 6 ms xD
Weirdly enough, my read latency is slightly high at the moment… maybe because the system was only rebooted last night, so the L2ARC hasn't had time to warm up, and most likely won't at this speed… only 3 weeks left until it's filled lol

I think it's pretty good for me right now, so I don't miss anything. I was able to move my containers to my other Raspi 4, modify my network and Pis a bit, move my backups to my server without slowing anything else down, and synchronize the blockchain for RaspiBlitz at full speed. As far as I'm concerned, it can stay this way until tomorrow, when I can calmly upgrade my first node from 4 to 8 TB (SMR to PMR) and push the backups up further. 320 GB with a 1 MB upload is pretty annoying ^^
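For a sense of why that 320 GB backup is annoying: assuming "1 MB upload" means roughly 1 MB/s of sustained throughput and decimal units (both my assumptions, not stated in the post), the transfer time works out to a few days:

```python
# Back-of-the-envelope transfer time: 320 GB over a ~1 MB/s uplink.
# Assumes decimal units (1 GB = 10**9 bytes, 1 MB = 10**6 bytes).
size_bytes = 320 * 10**9       # backup size: 320 GB
rate_bytes_per_s = 1 * 10**6   # assumed sustained upload rate: 1 MB/s

seconds = size_bytes / rate_bytes_per_s
days = seconds / 86_400        # 86,400 seconds per day
print(f"{days:.1f} days")      # about 3.7 days
```

So even uninterrupted, the push takes close to four days, which makes a quiet network a good window for it.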

Yeah, I was looking at buying a few new HDDs, but used prices at the moment seem almost unreasonable. I suppose it's a corona thing: production was shut down for a good while, and that may have upset the supply-and-demand balance…
I've got two 6 TB drives that are 4Kn and that set the ashift for my entire pool at creation, so it's running ashift=12, which gives an IO amplification of x16 or x8 on my drives that are 512-byte based.

But I guess it's a sign that I should really be buying new drives anyway, and then sell them when the warranty is about to expire; that seems to be the trick of the trade in datacenters.

Right now the risk vs. reward on used HDDs seems to favor new drives anyway…
Not that my node has been shoveling in cash during its first 2 months either… but hopefully over the next few months it will start to actually offset at least my base electricity costs…

Oh yes, that's the reason for everything right now: Covid. I'll just be happy when the topic is finally history… So I got a Western Digital Elements again relatively cheap for 120 euros, because I couldn't say no ^^ I would never buy a used one, simply because I don't know how long it has been running. I'm very satisfied with Western Digital; my NAS is now 12 years old and still runs and runs on the same hard drive, and according to the interface the disk is still in good condition. Sure, the storage space is a bit small by today's standards, but it's not the only HDD I have, and as long as it runs… :slight_smile:

I've been buying a bit of a mix of old enterprise drives and a few new ones.
I got them cheap, so I judged it worth the risk… it seems they'll have about 4 years and 7 months left on them, which basically means the datacenter can sell them to a second-hand dealer, that dealer can offer 3-month returns on bad drives… and they'll still fall within the manufacturer's warranty.
That way the datacenter can ensure they always have a supply of the same kind of drives, in the same numbers they initially ordered, making array sizing easy to deal with.
And they can sell them at not much below new prices, or so it seems… you're rarely able to get 50% savings on an old HDD.
So, if one can sell the old drives, one could replace and upgrade most of a fleet fairly cheaply while remaining within warranty. Might actually be a pretty smart way to do it, so I think I'll try that approach.
I wouldn't have noticed without buying their used drives though… xD

Ah okay, I misunderstood; so you buy used hard drives from companies? Okay, the idea isn't bad, since you definitely know where they come from. In my previous post I meant that I would never buy a used hard drive from a private person. However… who knows what data could be recovered if it wasn't properly formatted and overwritten ^^ If there's still a warranty and you're even entitled to return them, then I understand. Where do you find such sources? xD