No, we are only talking about 200W of extra power. The drives would be powered anyway through a JBOD.
200 more watts per 24 bays with your future setup. Scale up!
We can split this into another thread: “What should I do to manage 1000 nodes when BIG customers come?”
I want to be positive. I want to think that one day Storj will need a lot more stored data and everyone will have to think about how to scale up.
So… is the testing over? Paused?
We’ve heard they’ve hit their performance targets: so maybe now they’re configuring the capacity-reservation stuff that will only run fast for a couple hours each day?
Well, that post you linked suggested the high throughput would continue, but easing off a bit throughout the day.
It seems to have stopped completely, though…
(Gives one of my “lettuce nodes” some time to complete its filewalker, mind you!)
Did someone at Storj read your post and just turn things back on?!?!
Yesterday we saw a huge drop in healthy pieces from production sats. This could be the reason for the test stop…
Or it’s just the weekend and more tests are set for Monday.
Does it take a lot of time for the repair work to bring numbers back to normal?
Seems like it was just a hiccup
There is some weirdness with repairs. I have two nodes that each failed 1-2 repairs with “file not found”, with an associated audit score drop of 0.03%.
Both nodes are on ZFS volumes, scrubbed monthly; both systems are on UPSes and shut down gracefully on power outages (the node is sent SIGTERM).
I’ve tried searching the logs for the piece that wasn’t found, and did not find any evidence of it ever being uploaded — but since I don’t store logs forever, that could have been some very old piece.
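For anyone wanting to do the same search, here’s a minimal sketch. The log path and piece ID are placeholders, and the sample lines only approximate the real storagenode log format — substitute your actual log location and the piece ID from the repair error:

```shell
# Placeholders -- point these at your real log and the failed piece ID.
LOG=demo.log
PIECE=PIECEID123

# Create a tiny sample log so this demo runs standalone
# (skip this step when searching a real node log).
cat > "$LOG" <<'EOF'
2024-05-10T01:02:03Z INFO piecestore upload started {"Piece ID": "PIECEID123"}
2024-05-10T01:02:04Z INFO piecestore uploaded {"Piece ID": "PIECEID123"}
2024-05-11T09:00:00Z INFO piecestore upload started {"Piece ID": "OTHER456"}
EOF

# Every line that mentions the piece, whatever the action was:
grep "$PIECE" "$LOG"
```

If nothing comes back across all rotated logs, the upload most likely predates your retained history.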
I don’t know what to think of that.
@littleskunk do repair workers also work on a snapshot, and is it possible the 1-hour wait for expiration deletion is too short for the repair workers to be up to date on expired pieces?
Tests are still ongoing for the weekend.
I deal with this in my day job and DESPISE it. I visit this forum because it’s a refreshing change from the daily grind. While sometimes chaotic, I greatly appreciate the candid and open responses.
Love this behavior. I believe at one point it was mentioned that a node performance metric would be added to the dashboard. Just a thought from this comment: a “dial down” hit rate might add to that insight on the dashboard. Is something like this in the works? As a SNO, I would love something that simple to compare against when optimizing things.
This problem also hits those with 10 USD/month: as long as their nodes are vetted, they’re hit with regular traffic, not a substitute.
You invested. Maybe not for Storj, but you did spend extra money on top of your ISP’s router.
Same here. I’ve started hitting bottlenecks that I can’t fix easily anymore: first the ISP router, then, somewhat by surprise, my HBA card. Though I was planning to get a new desktop anyway, so at some point I’ll probably turn my old one into quite a bit better a NAS than my N36L; I had the foresight to get a pretty flexible mobo and case for my current desktop. I basically just need to buy a single component for maybe 30 USD. We’ll see, because a lot depends on how Storj rules/traffic/environment develop.
It is nice when they complete before another run starts.
My ZyXel Viva router works perfectly, serving 100 Mbit up/down during the test. I do not consider it enterprise grade.
Strange. HBAs usually have many lanes at 6 Gb… it should not be a problem. How did you find out?
I suggest setting the retain concurrency to 1.
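For reference, that would go in the node’s `config.yaml` — a sketch assuming the standard storagenode option name; check the commented defaults in your own version’s config file before relying on it:

```yaml
# config.yaml -- run at most one retain (garbage collection) job at a time
retain.concurrency: 1
```

A restart of the node is needed for config changes to take effect.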
One of the affected nodes’ audit score is back to 100%. Keeping an eye on the other.
You can always just ignore him.
You’re not coming across very nice, either.