One of our goals has been to optimize the network architecture so that a single node operating in a bandwidth-constrained environment receives the maximum possible bandwidth allocation.
This means that running a single node per location (where one location = a discrete network with its own IP address and bandwidth/bandwidth cap) will yield the best results for storage node operators: the highest reputation, the most storage contracts from satellites, and the most earned STORJ tokens.
Reputation factors are the elements that affect how much data is stored on, and how much bandwidth is used by, a storage node. The factors included in the reputation and node-selection statistical model are listed here (just replace Kademlia with dRPC).
@keleffew Very interesting article. If the V3 reputation is based on a statistical model with dozens of factors, I think it makes it even more important to display the following info on Nodes' dashboards:
The global reputation the node has, and its sub-criteria.
Automated guidelines (as suggested in one of my posts above) explaining for each criterion (or a selection of them) how to improve it, and/or what caused a criterion to have a poor score.
What the network needs, e.g. if more storage is needed
I'm sure that it would be motivating for people to know how to improve and get better at their SNO job.
The dashboard seems like a great place to display these pieces of information, tailored for each Node.
I have a 1st Node that I created in June 2019. At first, I received many STORJ tokens as data was coming in, so bandwidth usage was high. Now the disk is full, bandwidth usage is close to 0, and the payout is ridiculously low and does not cover my costs…
I created a second node recently, and as it's brand new, it's filling fast and my bandwidth usage is really high! (It's not full yet.)
My question: Do you plan to wipe old data to ensure that old nodes (which were supposed to be more profitable for having been around a long time) remain profitable in the future? Right now, it seems that my old node is stuck storing junk data that will never be requested!
Otherwise, some things I see:
Like many others, I find the uptime requirement really hard to fulfill. I'm never down for now, but I'm worried that one day I'll simply be asleep and hit the 5-hour maximum downtime. No, with $10/month earned I can't pay an SRE.
We want more clarity on the payout process: a clear dashboard with amounts in dollars (usage, exchange rate used at payout time) and the amount held in escrow, so I can calculate whether it's actually profitable to keep my node up.
This is just feedback; I believe in Storj and I think you've done really great work.
@pierre-gilles: I experienced the same: once full my node did not get any more traffic. I recently allocated more space to it, and it instantly started getting data and bandwidth (both ingress & egress) again.
I'll let StorjLabs staff answer, but I'm sure our Nodes' behavior will be quite different when the whole thing is in production.
And it would make sense to remove all the test data that got scattered amongst all our nodes… I'm sure that's planned.
This is due to the current testing scenario, I got that answer in my thread about full nodes not getting traffic. It will definitely change in production and might even change before if they are doing other kinds of tests.
But all my nodes are now quite full too; I had to increase the storage on my 3rd node to even get a bit of traffic…
One feature that recently launched, and should help, is Garbage Collection.
As per the original description in section 4.19 of the whitepaper,
A garbage collection algorithm is a method for freeing no-longer used resources. A precise garbage collector collects all garbage exactly and leaves no additional garbage. A conservative garbage collector, on the other hand, may leave some small proportion of garbage around given some other trade-offs, often with the aim of improving performance. As long as a conservative garbage collector is used in our system, the payment for storage owed to a storage node will be high enough to amortize the cost of storing the garbage.
[…] In the simplest form, it can be a hash of stored keys, which allows efficient detection of out-of-sync state. After detecting out-of-sync state, collection can use another structure, such as a Bloom filter [82], to find out what data has not been deleted. By returning a data structure tailored to each node on a periodic schedule, a Satellite can give a storage node the ability to clean up garbage data to a configurable tolerance. Satellites will reject overly frequent requests for these data structures.
Additionally, the way that segments are placed is intended to spread bandwidth usage across nodes over time (i.e., you are likely to saturate available bandwidth as reputation grows over time in production).
For me: I built storage and network solutions before StorjLabs appeared. I love hardware and building networking and infrastructure solutions, optimizing them, and getting results (it makes no sense to build anything without results). So Storj inspires me to do the things I love, and some compensation inspires me to do more good things.
Extra cash is nice, but this is also something interesting to do, especially since the requirements are rather strict.
And currently this costs less than mining and is probably more profitable. And the hardware I use could be repurposed if for some reason I decided to stop participating in the network, unlike an ASIC.
While I do like the hardware side, I don't have the kind of hardware some others are using. But I really believe in putting wasted free space to use. I enjoy systems that are clever like that. The concept that a bunch of untrusted pieces can be put together in a network that is rock solid. I guess you could say I love the software and concept side more than the hardware. I also won't say no to getting paid for this service. But the truth is that even in the best months it's fairly insignificant. I understand Storj payouts won't change my life. It's just a small extra payment.
I have a boatload of storage that's always on. I fully support the program and want to use it in my business: we back up to S3 for many of our storage appliances, so I can migrate that data easily to Storj once it's stable. I've been in crypto for years and really like this project. It doesn't require power-hungry GPUs or custom hardware to run.