Q3 2023 Storj Town Hall Q&A

As promised, here are answers to the questions posted in advance of the Town Hall. We answered all the questions posted even though some of them were covered in the Town Hall.

Question Set 1 - IsThisOn:

Could you describe a little bit more what “others” in your balance sheet are?

It states service providers, which is odd because service providers already have their own column.
It also states general operations. It can’t be salaries, because salaries already have a column. Could you give a little more detail on what that means?
It also states liquidation on non-US exchanges. The US part is oddly specific, and I wonder what you do with these liquidations. Where does the money from these liquidations go?

I am sorry if this comes off as nosy, but given that “Other” is by far the biggest spending point, I think it is appropriate to ask these questions.

RESPONSE: Thank you for your interest in the details of the Storj Token Balances and Flows Report. As we note about the category of “other” in the token report:

Line 14, “Other,” is reserved to report activity that doesn’t fall into any of the other categories, including, for example, non-routine payments to service providers and carbon offset program payments. As noted above, in Q3 ‘23, 14M STORJ tokens were used in payments that included general operations and liquidity purposes. To provide additional liquidity for general operations during uncertain economic times and in periods of growth, we also are liquidating a portion of our reserves on non-US exchanges through a partnership, and these flows are disclosed in this line item.

We do our best to respond to questions about Storj and the token reports as transparently as possible, but there are limits to the level of detail we share publicly about the operation of Storj as a privately held business. That said, we can share that the money from the STORJ liquidations is used to fund the ongoing operation of Storj as a business, as many of our bills are paid in fiat currency and we are a non-venture backed, growing company. In addition, we occasionally shore up our cash position to protect against uncertain economic times, industry-wide impacts, and any potential future headwinds.

With regard to your question about service provider payments, note that line 11 reflects regularly scheduled service provider payments made in STORJ, whereas service provider payments in line 14 include any non-routine service providers paid in fiat currency from token liquidation.

Question Set 2 - Toyoo:

Is there any progress on updating node T&C?

RESPONSE: There has been a lot of progress on the Node T&C, and we appreciate your patience while we get it finalized. We had some additional work to do based on some changes related to the commercial node program rollout. We will update the SNO T&C this quarter.

Question Set 3 - penfold:

  1. What is the progress on the Commercial SNO Network?
    RESPONSE: The Commercial SNO Network is live in the US. We have established a process for sign-ups, for verifying and queuing capacity as we need it, and have rolled out an initial set of providers. The first enterprise customer is onboarding data onto the service and will be announced later this quarter. We have a growing pipeline of prospects and will onboard capacity to support customers in the US and in other locations worldwide as needed.

We’ve discussed this in a couple of places, but it’s worth reiterating that we implemented the SOC2 tier using node tags, similar to the way we handle geofencing. The commercial nodes and public nodes share the same satellites, and the usage information is visible on the dashboards and stats API, but any particular data set is stored only on one set of nodes or the other.

While the nodes are part of the same ecosystem, commercial nodes and public nodes are not competing for the same traffic. Data in a bucket would either be sent to public nodes (the default) or it would be sent to commercial nodes (by customer request).

  2. Please provide details on average node life now.
    RESPONSE: Average node life is:
    US1: 908 days (2 years and 5 months)
    AP1: 860 days (2 years and 4 months)
    EU1: 864 days (2 years and 4 months)

  3. Do you see further tweaks required to SNO payments within the next 18 months?
    RESPONSE: We do not see further changes to SNO payments any time in the near future. It’s possible that something may change in the broader market, but as far as we can foresee, no additional changes are planned.

  4. SNO income has dropped by approximately half since the payment changes. I thought the idea was to reduce income to a stable basis. How is a 50% drop stable?
    RESPONSE: We do our best to maintain a stable level of usage; however, customer usage, especially egress bandwidth, is variable, and the timing of when customers start onboarding new data can be unpredictable. We’ve committed to keeping synthetic data at a level that keeps average payouts at ~$130k per month. We will consider ways to increase synthetic load as needed to help us hit this target.

Any prediction on a per-node basis relies on a range of factors and is impacted by the growth of the network, almost all of which are outside our control as Satellite operators.

  5. The ratio of unused space to used space is now very near 2:1. Is this a concern for Storj, and if so, what is planned here?
    RESPONSE: It’s not a concern at this point. We continue to see an increase in demand and currently do not foresee any issues with the current growth and trajectory of the network. Note that Commercial Node capacity is included in the total (currently only on US1) and in general, that capacity is activated with known, planned capacity.

  6. Removing data from the decommissioned satellites was posted to the forum, but what about SNOs who don’t visit the forum? What are the implications for them?
    RESPONSE: All nodes are treated equally with regard to the decommissioned satellites. Most of the data associated with the decommissioned satellites was deleted off the network as part of the decommissioning process with the help of garbage collection. We even ran a few extra cycles to make sure the amount of garbage left behind is low; naturally, a small amount still remains. For safety reasons, we decided not to implement code that could potentially wipe out data for any of the existing satellites, so a manual command was the final solution.

We are also aware of a bug in the garbage collection implementation: some nodes never finished garbage collection and are still sitting on a higher amount of garbage. Apologies for that. Please follow the same manual commands.

Question Set 4 - CutieePie:

  1. Can we have an update on zkSync Era paymaster support for the STORJ token? It feels like the only blocker to mass adoption among Operators who don’t have spare ETH to fund their wallets.
    RESPONSE: zkSync Era is a new, emerging technology, and we are rolling it out in partnership with Matter Labs. The rollout is a careful and methodical process, and we agree this is a very promising technology for the future of Ethereum. It is live right now: storage nodes can opt in to zkSync Era payouts. Once configured, you will see operator.wallet-features: ["zksync-era", "zkSync"] in your config.

You will see a new zkSync Era link on your dashboard. It opens the new blockchain explorer. It works well, and the first payment went out last month for September. We hope to see more nodes adopt zkSync.
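For reference, the opt-in described above might look like this in a node’s config.yaml. Only the operator.wallet-features line is quoted from the response; the other keys and values are illustrative placeholders, not requirements:

```yaml
# config.yaml (storage node) — illustrative sketch, not a complete config.
operator.email: "operator@example.com"              # placeholder
operator.wallet: "0xYourPayoutAddress"              # placeholder
operator.wallet-features: ["zksync-era", "zkSync"]  # opts in to zkSync Era payouts
```

You will typically need to restart the node for config changes to take effect.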

  2. Can you share any plans for additional satellites to support the Commercial SNO Network?
    RESPONSE: Currently our plans to scale the Commercial Node Network are focused on adding node capacity and edge capacity. The Commercial Nodes and public nodes share the same satellites. We don’t have plans or needs to scale or add satellites specifically for the Commercial Node Network. Satellites are already multi-region and horizontally scalable.

  3. Any plans to rewrite the node trash piece code to be more efficient and less intensive on node storage? Compared to a year ago, the trash churn at the piece level is horrible; deleting from the trash by satellite and date would be far more efficient than walking each piece.
    RESPONSE: We refactored deletes from a satellite process to function more like GC. The result improved the user experience for deletes, but did increase the load on nodes. We’re continuing to look at ways to make GC and deletes more efficient, but there is nothing scheduled right now.

GC alone may not be the painful part of the process. One additional challenge is that on every startup, a node checks the creation date of every piece in the trash folder. If you restart the node 5 minutes later, it starts over and checks everything again. That part is inefficient.

We are thinking about a number of approaches to address these challenges holistically. For example, could we make it so that the trash folder is checked only once every 7 days? All we would need is an index file: when GC runs and moves pieces into the trash folder, copy the timestamp of the last piece that was moved, and do this for all satellites. On startup, check the timestamp in the index file first and skip walking the pieces until the timestamp is reached. Make this per satellite, and also plan for having more than one timestamp in the index file.
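The index-file idea above can be sketched roughly as follows. This is a minimal illustration of the approach being considered, not the storagenode implementation; the file layout, function names, and interval handling are all assumptions:

```python
import json
import os
import time

TRASH_SCAN_INTERVAL = 7 * 24 * 3600  # rescan trash at most once every 7 days


def record_trash_move(index_path, satellite_id, moved_at):
    """After GC moves pieces to trash, remember the move timestamp per satellite.

    The index is a small JSON file mapping satellite ID -> last-move timestamp
    (a hypothetical layout for illustration).
    """
    index = {}
    if os.path.exists(index_path):
        with open(index_path) as f:
            index = json.load(f)
    index[satellite_id] = moved_at
    tmp = index_path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(index, f)
    os.replace(tmp, index_path)  # atomic rename so a crash can't corrupt the index


def should_scan_trash(index_path, satellite_id, now=None):
    """Consult the index first; only walk the per-piece trash if the entry is stale."""
    now = time.time() if now is None else now
    if not os.path.exists(index_path):
        return True  # no index yet: fall back to a full scan
    with open(index_path) as f:
        index = json.load(f)
    last = index.get(satellite_id)
    return last is None or now - last >= TRASH_SCAN_INTERVAL
```

The point of the sketch is that a restart 5 minutes later hits the cheap index check instead of re-walking every piece, and the per-satellite keys leave room for storing more than one timestamp later.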

  4. Can you share any statistics on how the audit workers are scaling now? Approximate stats on how often a 1TB node has all its pieces audited would be useful.
    RESPONSE: The audit code is open source, but the implementation of that code is up to Satellite operators. We make regular tweaks to ensure we’re able to achieve our durability target with well over a billion pieces on the network.

If you want to check how many audits you get in a month, you can use the storage node API. It has an endpoint (useful help link here) that shows uptime. That uptime endpoint lists all online checks (these are audits) in 12-hour windows for the last 30 days. Sum them up and you know how many audits you had.
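As a sketch of that arithmetic, summing the per-window counts might look like this. The endpoint path, response shape, and field names below are assumptions for illustration only; check your node version’s actual API for the real ones:

```python
import json
import urllib.request

NODE_API = "http://localhost:14002/api/sno"  # common local dashboard address


def monthly_audit_count(windows):
    """Sum per-window online-check counts over the reporting period.

    `windows` is assumed to be a list of 12-hour windows, each a dict with
    an "onlineCount" field -- hypothetical names, not the documented schema.
    """
    return sum(window.get("onlineCount", 0) for window in windows)


def fetch_uptime_windows(satellite_id):
    """Fetch the uptime windows for one satellite from the node API."""
    # Hypothetical endpoint path; adjust to what your node actually exposes.
    url = f"{NODE_API}/satellite/{satellite_id}"
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read())
    return data.get("windows", [])
```

For example, windows reporting 2 and 3 online checks would sum to 5 audits for that period.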

  5. Can you share any plans to provide better guidance for customers on the downsides of enabling geo-location for Storj, given the redundancy offered by a globally distributed network? Information and training will be important to dispel traditional enterprise misconceptions, and it would be interesting to know how Storj is tackling this.
    RESPONSE: Customers are free to choose placement rules that align with their storage needs. Storj won’t create or support placement rules that compromise durability or availability of data. Currently we offer a limited number of regions for geofencing, but those options have a sufficient population of nodes.

We have a number of features in mind to make this a much more powerful and flexible feature for customers, but for now we’re managing it very closely.

  6. Can you share any plans to incentivize node operators to host nodes in geographically underrepresented locations? Having all the nodes in Germany isn’t really a good thing for the network.
    RESPONSE: We’re actively looking for nodes in South America right now, and incentives are generally aligned such that nodes closer to where data is uploaded have a slight advantage.

  7. Can you share any plans for tagging nodes more specifically, on performance, location, etc., to better enhance the experience for the customer when the satellite provides nodes to upload to?
    RESPONSE: We do have a lot of plans and ideas to create compelling features for partners and customers leveraging node tagging and placement rules. The initial use cases of geofencing and supporting the Commercial Node program and tier of SOC2-certified nodes are only the beginning. This is a highly differentiated capability in the market, and we will continue to make it easier to use.

Question Set 5 - Ottetal:

  • I have a few nodes that had unrecoverably corrupted databases. After a long time, they still don’t really know how much they store, and they have large unused sections of their drives. Any news on a “recalibrate databases” functionality?
    RESPONSE: Unfortunately, in the case where you have a failing node that hasn’t been DQ’d, your best course of action may be to call Graceful Exit. We don’t have a feature to recalibrate databases on the near-term plans, but there may be a workaround that wipes and recreates the database. We could make this a bit simpler with some kind of repair command that does this; currently the operator needs to run these types of commands by hand, which isn’t something every operator can do. If you have ideas on how this should work, please add as much detail as you can to a GitHub issue. We’ll follow up via DM if something surfaces internally and add it to the documentation.

I would love to hear some good war stories on where Storj makes a difference for your (our? :slight_smile: ) customers.
RESPONSE: We shared a few in the Town Hall recording and we have a press release coming out in the next month about a large customer that closed in Q3.