Updates on Test Data

No, we are only talking about 200W of extra power. The drives would be powered anyway through a JBOD.

200 more watts for every 24 bays with your future setup. Scale up!

We can split this into another thread: “What should I do to manage 1000 nodes when BIG customers come?”

I want to be positive. I want to think that one day Storj will need a lot more stored data, and everyone will have to think about how to scale up.

2 Likes

So… is the testing over? Paused?

We’ve heard they’ve hit their performance targets: so maybe now they’re configuring the capacity-reservation stuff that will only run fast for a couple hours each day?

Well, that post you linked suggested the high throughput would continue, but easing off a bit throughout the day.
It seems to have stopped completely, though…
(Gives one of my “lettuce nodes” some time to complete its filewalker, mind you!) :wink:

3 Likes

Did someone at Storj read your post and just turn things back on?!?! :heart_eyes:

1 Like

Yesterday we saw a huge drop in healthy pieces from production sats. This could be the reason for the test stop… :thinking:

Or it’s just the weekend and more tests are set for Monday.

Does it take a lot of time for the repair work to bring numbers back to normal?

Seems like it was just a hiccup :wink:

There is some weirdness with repairs. I have two nodes that each failed 1–2 repairs with “file not found”, with an associated audit score drop of 0.03%.

Both nodes are on zfs volumes, scrubbed monthly; both systems are on UPSes and shut down gracefully on power outages (the node is sent SIGTERM).

I’ve tried searching the logs for the piece that wasn’t found, and did not find any evidence of it ever being uploaded. But since I don’t keep logs forever, it could have been some very old piece.
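For anyone wanting to do the same check: a simple grep over the node log works. The piece ID and log path below are placeholders, and the sample log line is fabricated for demonstration; in practice you would point grep at your real storagenode log (and any rotated copies).

```shell
# Placeholder log with one fabricated upload entry (stand-in for your real node log)
LOG=/tmp/storagenode-sample.log
echo '2024-05-10T12:00:00Z INFO piecestore uploaded {"Piece ID": "EXAMPLEPIECEID"}' > "$LOG"

# Look for any trace of the piece ID across the log file(s)
if grep -h "EXAMPLEPIECEID" "$LOG"; then
    echo "piece seen in logs"
else
    echo "no trace of piece"
fi
```

With rotated logs you can widen the search to `grep -h "PIECEID" /path/to/node.log*`; no match only proves the upload isn’t in the retained window, not that it never happened.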

I don’t know what to think of that.

@littleskunk do repair workers also work on a snapshot, and is it possible that the 1-hour wait for expiration deletion is too short for the repair workers to be up to date on expired pieces?

Tests are still ongoing for the weekend.

6 Likes

I deal with this in my day job and DESPISE it. I visit this forum because it’s a refreshing change from the daily grind. While sometimes chaotic, I greatly appreciate the candid and open responses.

4 Likes

Love this behavior. I believe at one point it was mentioned that a node performance metric would be added to the dashboard. Following on from this comment: a “dial down” hit rate might add to that insight on the dashboard. Is something like this in the works? As a SNO, I would love something that simple to compare against when optimizing things.

4 Likes

This problem also hits those with 10 USD/month: as long as their nodes are vetted, they’re hit with regular traffic, not a substitute.

You invested. Maybe not for Storj, but you did spend extra money on top of your ISP’s router.

Same here. I’ve started hitting bottlenecks that I can’t easily fix anymore, first with the ISP router, then, somewhat to my surprise, with my HBA card. Though I was planning to get a new desktop anyway, so at some point I’ll probably turn my old one into a NAS that’s quite a bit better than my N36L; I had the foresight to get a pretty flexible mobo and case for my current desktop. I basically just need to buy a single component for maybe 30 USD. We will see, because a lot depends on how the Storj rules/traffic/environment develop.

It is nice when they complete before another run starts.

My ZyXel Viva router worked perfectly, serving 100 Mbit up/down during the test. I do not consider it enterprise grade.

Strange. Those cards usually have many lanes at 6 Gb/s each… it should not be a problem. How did you find out?

I suggest setting the retain concurrency to 1.
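If I remember right, that’s the `retain.concurrency` option in the storagenode config (verify the exact name with `storagenode setup --help`); a sketch of both ways to set it:

```yaml
# config.yaml — run at most one garbage-collection retain job at a time
retain.concurrency: 1
```

Or, when running under Docker, as a flag appended to the `docker run` command: `--retain.concurrency=1`.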

1 Like

One of the affected nodes’ audit score is back to 100%. :thinking: keeping an eye on the other.

1 Like

You can always just ignore him.

You’re not coming across very nice, either.

3 Likes