Where can I view the reputation score of my storage node? (alpha)

We only have a dashboard, but it shows no reputation score.
Where can I see my Storagenode alpha’s reputation?


The first big question is: does Storj want the SNOs to know their reputation score?

What could be done quickly is sending the score to the SNOs by email once a week. Every SNO submits the admin email on node start, so both score and email are known at the satellite. Sending an email is very simple and, at the current node count, not a problem.

With a small code tweak, the information could be put into the getAudit request from the satellite to the SNO and shown in the log output.

A third option is building a dedicated app that uses the identity to authenticate against the satellite and fetch this information. The effort is much higher than for the other two solutions.


I agree with you. The next questions would be “Why is this number so low?” and “How is it calculated?”

The problem is not the reputation score or how it is calculated; both could easily be shared with the SNOs. The point is that there is no official decision from Storj on how to share or handle it.

The whole reputation thing isn’t finished. Afaik the score is simply calculated by taking all Audit_fails/Audit_gets and expressing it as a percentage, and all node selection by the satellite is based on it, and there is a serious problem. The sharing terms (https://storj.io/storj-share-terms/) only require an uptime of 99.3% per month, which is about 5 hours offline per month. If I split my offline time into small chunks, I could theoretically miss every audit and get a reputation of 0 without breaking the sharing terms.
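To put concrete numbers on this, here is a minimal Python sketch using the 99.3% figure from the sharing terms and the naive fails/gets formula described above (the actual formula the satellite uses may differ; this is just an illustration):

```python
# Rough sketch: monthly downtime allowed by a 99.3% uptime requirement,
# and the naive audit-based score described in this thread.

HOURS_PER_MONTH = 30 * 24  # 720 hours in a 30-day month

def allowed_downtime_hours(uptime_requirement=0.993):
    """Downtime permitted per month under the sharing terms."""
    return HOURS_PER_MONTH * (1 - uptime_requirement)

def naive_audit_score(audit_fails, audit_total):
    """Score as described in this thread: successful audits / total, in percent."""
    return 100.0 * (audit_total - audit_fails) / audit_total

print(allowed_downtime_hours())   # ≈ 5.04 hours
print(naive_audit_score(10, 10))  # 0.0 — every audit missed, score wiped out
```

So a node that happens to pack its allowed 5 hours of downtime exactly into the audit windows could, in theory, score 0 while staying within the terms.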

The reputation calculation does not account for the allowed 5 hours offline per month. Compare this to the SONM project, where I can set my node to a maintenance mode: all running jobs finish correctly, then the node is in maintenance and can be updated.


The answer to the first question is yes. On our roadmap is a dashboard for the storage nodes that will show the reputation.

How do I view my node’s reputation? My node was offline for an hour and I did not get extra data overnight. My logs show a lot of repair actions and I’m worried.

There is no way at the moment; the SNOboard is in development.
You can see all implemented metrics on the local dashboard.

I think this isn’t a bad idea.
Sometimes my electricity distributor only informs me that there will be an electric outage in a specific time window, for maintenance on the electric network. If this happens in the current state, I lose reputation through no fault of my own.
It would be nice to have the ability to signal to the satellites, or the entire network, that my node will surely be unavailable in this time window.
The goal is to not lose reputation while eventually doing bigger upgrades on my node.
I know this could be abused, for example by signaling the maximum possible maintenance hours and then going offline for good.
I agree with @BlackDuck: the Storj team should clearly define rules for doing this.
Just an idea from life experience …

@Adavan I am sure this feature is on the roadmap, and it needs to be there, but there are so many vital things that have to be done first.

Afaik the 5 h rule is inactive right now. I am not sure how this is measured at all, or what happens on a failure on Storj’s side, for example if Storj is not able to ping a large number of nodes while everything else works fine. They can’t just disqualify thousands of nodes automatically.

I expect that, in the long term, disqualification will be a process controlled by an admin.

I did some tests with SONM, a similar project just with other resources. There I can switch to a maintenance mode: all tasks finish, and then the node shuts down for a requested time.

Yes, I spotted this feature in SONM too. And yes, I use it too, but my hardware is not interesting for their use cases; I’m just waiting for better times.
I know this is only an alpha, but I hope my feedback will be valuable for the development team to make the application better :slight_smile:.
No problem, I am patient and will wait :wink:.


All feedback is valuable. Even a duplicate request helps us determine the real need.

Better to add it to https://ideas.storj.io or vote for an existing one.

Found it, thank you.


For what it’s worth, it’s much easier to move compute away from one node to another than to move storage.

During the suggested maintenance mode, the satellite would basically have to either take the risk that the node will return, or repair any pieces on that node that fall below the repair threshold, which costs money. If such a maintenance mode were implemented, I suggest taking the repair costs out of the escrow of the node in containment mode.

The current problem is that right now 75% of everything goes into escrow, even payment for transfer, and if a node gets disqualified, it is all gone.

Second, if a node has been around long enough, there is no escrow.

Even after 15 months, half of the built-up escrow is kept. There may have to be something to replenish spent escrow during maintenance, but that can be worked out. I was just saying that maintenance downtime can’t be free, because it imposes repair costs.
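The numbers mentioned in this thread (75% held early on, nothing new held after month 9, half of the built-up escrow kept after month 15) can be sketched roughly like this; the exact step-down schedule is my assumption for illustration, not an official one:

```python
# Hypothetical escrow (held amount) schedule, assembled from the
# percentages mentioned in this thread. Not an official schedule.

def held_percent(month):
    """Assumed step-down: 75% (months 1-3), 50% (4-6), 25% (7-9), then 0%."""
    if month <= 3:
        return 75
    if month <= 6:
        return 50
    if month <= 9:
        return 25
    return 0

def escrow_after(months, monthly_earnings=100.0):
    """Escrow accumulated after `months`, with half returned at month 15."""
    total = sum(monthly_earnings * held_percent(m) / 100
                for m in range(1, months + 1))
    if months >= 15:
        total /= 2  # half of the built-up escrow is paid back, half kept
    return total

print(escrow_after(9))   # 450.0 held after nine months at $100/month
print(escrow_after(15))  # 225.0 still kept after the month-15 payout
```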

BTW, for many people the escrow will drop to 50% with this payout.

OK, I could force there to be no escrow: just start a node and keep it slim and slow for 9 months, then no escrow will be taken. (Or I could mod the code so that for the first 9 months it reports the disk as already full.)

But back to the maintenance. With a consumer hardware/software setup, the needed reliability is not reachable over a long period. Either there is a way to handle special short events (less than a day), or I have to upgrade my setup to a redundant level as an SNO. An upgrade would mean less ROI for the SNO; maybe it would be so much less that they would have no incentive to join the network.

There was a post describing how RS works and how effectively it scales up. I would say putting in two more pieces would have a better effect than having everyone build RAID, clusters, or other complex things. But this needs a way to manage takeouts.
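The scaling argument can be sketched with the binomial tail of a k-of-n Reed-Solomon scheme: a segment is lost only if more than n − k pieces fail. The k = 20, n = 40 numbers and the 10% per-piece failure probability below are made-up illustration values, not Storj’s actual settings:

```python
# Sketch: probability of losing a k-of-n erasure-coded segment when each
# piece fails independently with probability p. Illustration values only.
from math import comb

def segment_loss_probability(k, n, p):
    """P(fewer than k pieces survive) = P(more than n - k pieces fail)."""
    return sum(comb(n, f) * p**f * (1 - p)**(n - f)
               for f in range(n - k + 1, n + 1))

# Two extra redundant pieces (n: 40 -> 42) shrink the loss probability
# far more cheaply than per-node RAID would:
print(segment_loss_probability(20, 40, 0.1))
print(segment_loss_probability(20, 42, 0.1))
```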

You could cheat the escrow in that way whether you use it to pay for downtime or not.

I agree that it is a balancing act. But going by V2 supply and demand, the supply side wasn’t really much of an issue, so Storj Labs can afford to set requirements high.

However, if it indeed is not manageable as you suggest, that will show soon enough. And they will have to do something about that. I think the 6 hours is a bit of an issue… People sleep and go to work. If the node fails right when I fall asleep, I can’t guarantee it’s up again within 6 hours. I have remote management options set up, so I can restart things remotely. So work is less of a problem. But I can imagine not everyone has the luxury to take time out of their day to fix things.

This is still an Alpha, so I trust storj will take this time to find out what is reasonable as well as what is needed to keep the data safe.

V2 says nothing; the supply side was never tested. I know some users who run 1000+ V2 setups. If there were real demand, they would crash. V2 data can’t be the basis for this information. A real stress test never happened in V2; the bridge was the first thing that crashed.

You saw the topics yourself: there are users asking for 24 TB setups. The failure of such large nodes would have a large impact on the network while repairing.

Yes, I have heard it more than once, and I can’t hear it here anymore. Yes, Storj is in alpha, but it is not a PoC; the alpha is the foundation of the design, and things are hard to change later. Building a skyscraper on a bad foundation and then trying to change the foundation while standing on top is quite a party. And we passed Beacon; next is Pioneer (beta), so referring to it as just an alpha is not fully correct, because some of the work for the beta should already be there; it cannot come out of nowhere.

The way I see it, this is not a foundational issue, but a fine tuning issue. They can tweak the terms as well as the RS settings to find a better balance that would allow for a little more down time. Much of that wouldn’t even require a code change. I agree that V2 is not a representative test for supply and demand, so perhaps it’s partially a gut feeling. But I think there will be more than enough nodes to fill demand for a long time to come.