Node Operator Fireside Discussion

Hey there SNOs,

On September 23rd at 10am Eastern Daylight Time, we’ll be conducting a Zoom chat with all of you and some of the folks from Storj Labs to help answer any questions you might have as it pertains to being a Storj Node Operator.

Sign up is located here - Webinar Registration - Zoom

Please send us your SNO-related questions in advance and we’ll answer them during the meeting. You’ll also have the opportunity to ask questions via Zoom chat during the meeting, which we’ll answer as time allows.



Awesome idea. It’ll be during work hours for me, but I’ll move some things around. Promise not to tell my boss? :stuck_out_tongue:


Looking forward to joining you @Knowledge and sharing some updates, but mostly answering questions.


We just need to tell your boss that this is a game changing technology. He might join the meeting himself :smiley:


I registered but have a conflict at that time slot. Will this be recorded and sent to all participants to watch later?


Yes. We’ll post the link to it when it is available.


If there are any questions or topics you want covered during the fireside chat, please add them here!


  • When will multi-node dashboards be possible?
  • What about a native Synology application?
  • What trends are you seeing with inbound data? Is Storj becoming more popular?
  • How many nodes are currently in production?

  • Would like to hear something about ethereum scaling solutions you are keeping an eye on.
  • Can you already share anything about aligning payouts with the new pricing?
  • Could you share some info on node churn rates you’re seeing?
  • What are the expectations about network growth over the coming months? A year? Multiple years?
  • A while back there was some talk about possibly tuning RS numbers, any updates on this?
  • What can you tell us about current test data and traffic patterns? Any changes in that planned?

Edit as mentioned further down:


What’s the status of the web browser-compatible JS client library idea?


What other potential methods to pay SNOs are you considering at the moment?

And a suggestion: If not everything can be answered during the Zoom, it would be great to receive the answers in this thread here.


Can we get an update on how the Storj CockroachDB deployment went? Specifically, how is the latency of transactions between geo-distributed satellite locations, plus anything else learned from the deployment? :slight_smile:

Any news on SNO-hosted satellite functions?

It looks like the S3-compatible Gateway-MT is really popular! However, since these exit points are geo-located in specific places (I think 2-3 datacenters?), are there any plans to optimize node selection further? Currently, nodes that respond fastest to those datacenters are preferred due to quicker network links.

Can we get a breakdown of how much traffic, as a percentage of total traffic, is now passing through Gateway-MT compared to uplink or the developer API?

Are there plans to make Gateway-MT SNO-hosted to improve distribution, or to increase the number of geo-locations for Gateway-MT?

When looking at the Storj GitHub, what does Storj think about the volume of pull requests coming from the community? Is this something Storj is happy with? What does the dev team think about it?

Are there any plans for a more easily accessible public issue tracker that SNOs and customers can use to log issues and feature requests?

Are there any future plans to change the number of encoded pieces needed to recover data?


Do problems like the “Trouble with UDP” issue affect customer experience in any measurable way?

Does Storj plan to accept L2 payments?

Jocelyn was heavily involved in the forum community but since her departure that involvement doesn’t seem to have been replicated. Any plans to change that?

How much does Storj currently owe to SNOs who have not signed up for L2 payments and haven’t met the threshold for L1?

How can Storj consider small SNOs profitable in any way, considering you may need to wait up to a year for an L1 payment and L2 exchanges don’t exist yet?


Is there a plan to make Storj nodes resilient to SMR disk technology?
More generally, I guess the question is:
:arrow_right: Could the node software ensure that it automatically refuses ingress requests whenever the underlying storage system (HDD, SSD, …) cannot keep up? Is that feature considered?

Currently, if the storage system cannot keep up, the node eats up all the RAM on the system by caching data waiting to be written, and eventually gets killed by the OS. The only workaround at the moment is to limit the number of concurrent requests the node is authorized to process, which is a shame because it drastically limits the node’s ability to handle bursts of requests (even SMR drives can keep up with high loads for some time before starting to stall; capping concurrent requests prevents the node from using this headroom).

Cheers :slight_smile:


I was hoping we would see something like this directly in the Linux kernel: ext4-lazy.

I love your post, great questions! But this is a stretch for sure. Chia and Storj have only one thing in common… they use HDD space. They use it for wildly different purposes though. There may be some mild competition on the supply side, but Storj is in a good position there as used space on Storj is much more valuable. Anyone doing both would be wise to remove Chia plots to make room for Storj when needed. That’s what I do at least.

I would be curious to know Paul’s motivations for that career move, though.

@john: I thought of some more questions:

Both of these suggestions have gotten a positive response from Storj Labs and a mention that “something like it will probably be implemented,” but no further follow-up.

This is great. Thank you for the questions. We’ll get through all that we can and if we run out of time, we’ll get a blog post out with the rest or respond here.

If this fireside chat format works well, we’ll try to work this into our regular cadence.


That was great @john @jtolio @Knowledge ! Thanks for the updates.

I hope this will become a more frequent thing.

I think there were still plenty of questions left to fill a few more of these. :slight_smile: