Q3 2023 Storj Town Hall Q&A

Consider that some questions could be answered by the support team and engineers here on the forum rather than the Town Hall depending on what you want to know.

Oh, you mean like how the ToS question about the ToS that has(n’t) been updated in months?

As promised, here are answers to the questions posted in advance of the Town Hall. We answered all the questions posted even though some of them were covered in the Town Hall.

Question Set 1 - IsThisOn:

Could you describe a little bit more what “others” in your balance sheet are?

It states service providers, which is odd because service providers already have their own column.
It also states general operations. It can’t be salaries, because those already have a column. Could you give a little bit more detail on what that means?
It also states liquidation on non-US exchanges. The US part is oddly specific, and I wonder what you do with these liquidations. Where does the money from these liquidations go?

I am sorry if this comes off as nosy, but given that “Other” is by far the biggest spending point, I think it is appropriate to ask these questions.

RESPONSE: Thank you for your interest in the details of the Storj Token Balances and Flows Report. As we note about the category of “other” in the token report:

Line 14, “Other,” is reserved to report activity that doesn’t fall into any of the other categories, including, for example, non-routine payments to service providers and carbon offset program payments. As noted above, in Q3 ‘23, 14M STORJ tokens were used in payments that included general operations and liquidity purposes. To provide additional liquidity for general operations during uncertain economic times and in periods of growth, we also are liquidating a portion of our reserves on non-US exchanges through a partnership, and these flows are disclosed in this line item.

We do our best to respond to questions about Storj and the token reports as transparently as possible, but there are limits to the level of detail we share publicly about the operation of Storj as a privately held business. That said, we can share that the money from the STORJ liquidations is used to fund the ongoing operation of Storj as a business, as many of our bills are paid in fiat currency and we are a non-venture backed, growing company. In addition, we occasionally shore up our cash position to protect against uncertain economic times, industry-wide impacts, and any potential future headwinds.

With regard to your question about service provider payments, note that line 11 reflects regularly scheduled service provider payments made in STORJ, whereas the service provider payments in line 14 include non-routine service providers paid in fiat currency from token liquidations.

Question Set 2 - Toyoo:

Is there any progress on updating node T&C?

RESPONSE: There has been a lot of progress on the Node T&C, and we appreciate your patience while we get it finalized. We had some additional work to do based on some changes related to the commercial node program rollout. We will update the SNO T&C this quarter.

Question Set 3 - penfold:

  1. What is the progress on the Commercial SNO Network?
    RESPONSE: The Commercial SNO Network is live in the US. We have established a process for sign-ups, for verifying and queuing capacity as we need it, and have rolled out an initial set of providers. The first enterprise customer is onboarding data onto the service and will be announced later this quarter. We have a growing pipeline of prospects and will onboard capacity to support customers in the US and in other locations worldwide as needed.

We’ve discussed this in a couple of places, but it’s worth reiterating that we implemented the SOC2 tier using node tags, similar to the way we handle geofencing. The commercial nodes and public nodes share the same satellites, and the usage information is visible on the dashboards and stats API, but any particular data set is stored only on one set of nodes or the other…

While the nodes are part of the same ecosystem, commercial nodes and public nodes are not competing for the same traffic. Data in a bucket would either be sent to public nodes (the default) or it would be sent to commercial nodes (by customer request).

  2. Please provide details on average node life now.
    RESPONSE: Average node life is:
    US1: 908 days (2 years and 5 months)
    AP1: 860 days (2 years and 4 months)
    EU1: 864 days (2 years and 4 months)

  3. Do you see further tweaks required to SNO payments within the next 18 months?
    RESPONSE: We do not see further changes to SNO payments any time in the near future. It’s possible that something may change in the broader market, but as far as we can foresee, no additional changes are planned.

  4. SNO income has reduced by approximately half since the payment changes. I thought the idea was to reduce income on a stable basis. How is a 50% drop stable?
    RESPONSE: We do our best to maintain a stable level of usage; however, customer usage, especially egress bandwidth, is variable, and the timing of when customers start onboarding new data can be unpredictable. We’ve committed to keeping the level of synthetic data at a level that keeps average payouts at ~$130k per month. We will consider ways to increase synthetic load as needed to help us hit this target.

Any prediction on a per-node basis relies on a range of factors and is impacted by the growth of the network, almost all of which are outside our control as Satellite operators.

  5. The ratio of unused space to used space is now very near 2:1. Is this a concern for Storj, and if so, what is planned here?
    RESPONSE: It’s not a concern at this point. We continue to see an increase in demand and currently do not foresee any issues with the current growth and trajectory of the network. Note that Commercial Node capacity is included in the total (currently only on US1) and in general, that capacity is activated with known, planned capacity.

  6. Removing data from the decommissioned sats was posted to the forum - but what about SNOs who don’t visit the forum? What are the implications for them?
    RESPONSE: All nodes are treated equally in terms of the decommissioned satellites. Most of the data associated with the decommissioned satellites was deleted off the network as part of that decommissioning process with the help of garbage collection. We even ran a few cycles to make sure the amount of garbage left behind is low. Naturally, there is still a small amount of garbage left behind. For safety reasons, we decided we don’t want to implement code that could potentially wipe out data belonging to any of the existing satellites, so a manual command was the final solution.

We are also aware that there is a bug in the garbage collection implementation. Some nodes never finished garbage collection and are still sitting on a higher amount of garbage. Apologies for that. Please follow the same manual commands.

Question Set 4 - CutieePie:

  1. Can we have an update on zkSync-Era paymaster support for the STORJ token? It feels like the only blocker to mass adoption by Operators who don’t have spare ETH to fund their wallets.
    RESPONSE: zkSync Era is a new, emerging technology and we are rolling it out in partnership with Matter Labs. The rollout is a careful and methodical process, and we agree this is a very promising technology for the future of Ethereum. It is live right now: storage nodes can opt in to zkSync-Era payouts. When configured, you will see operator.wallet-features: ["zksync-era", "zkSync"] in the config.

You will see a new zkSync-Era link on your dashboard. It opens the new blockchain explorer. It works well, and the first payment, for September, was made last month. We hope to see more nodes adopt zkSync.

  2. Can you share any plans for additional satellites to support the Commercial SNO Network?
    RESPONSE: Currently our plans to scale the Commercial Node Network are focused on adding node capacity and edge capacity. The Commercial Nodes and public nodes share the same satellites. We don’t have plans or needs to scale or add satellites specifically for the Commercial Node Network. Satellites are already multi-region and horizontally scalable.

  3. Any plans to rewrite the node trash piece code to be more efficient and less intensive on node storage? Compared to 1 year ago, the trash churn at piece level is horrible; deleting from the trash by satellite and date would be far more efficient than walking each piece.
    RESPONSE: We refactored deletes from a satellite process to function more like GC. The result improved the user experience for deletes, but did increase the load on nodes. We’re continuing to look at ways to make GC and deletes more efficient, but there is nothing scheduled right now.

GC alone may not be the painful part of the process. One additional challenge is that on every startup a node will check all pieces in the trash folder. It will check the creation date of all of them. If you restart the node 5 minutes later it will start over and check again. That part is inefficient.

We are thinking about a number of approaches to address these challenges holistically. For example, could we make it so that the trash folder is checked only once every 7 days? All we need is an index file: when GC runs and moves pieces into the trash folder, copy the timestamp of the last piece that was moved. Do this for all satellites. Check the timestamp in the index file first and skip looking at the pieces until the timestamp is reached. Make this per satellite, and also plan for having more than one timestamp in the index file.

  4. Can you share any statistics on how the Audit workers are scaling now? Approximate stats on how often a 1TB node has all its pieces audited would be useful.
    RESPONSE: The audit code is open source, but the implementation of that code is up to Satellite operators. We make regular tweaks to ensure we’re able to achieve our durability target with well over a billion pieces on the network.

If you want to check how many audits you get in a month, you can use the storage node API. It has an endpoint (useful help link here) that shows uptime. That uptime endpoint lists all online checks (these are audits) in 12h time windows for the last 30 days. Sum them up and you know how many audits you had.
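Summing the windows is a one-liner. A hedged sketch, assuming the endpoint returns a list of 12-hour windows each carrying a count field; the field name "totalCount" is a guess for illustration, so check your node's actual API response schema:

```python
def monthly_audit_count(windows: list[dict]) -> int:
    """Sum the online-check (audit) counts over all 12h windows in the response."""
    # "totalCount" is a hypothetical field name; verify against the real schema.
    return sum(w.get("totalCount", 0) for w in windows)


# Fetching the windows would look roughly like (endpoint path deliberately omitted):
#   import json, urllib.request
#   windows = json.load(urllib.request.urlopen("http://localhost:14002/..."))
```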

  5. Can you share any plans to provide better guidance for Customers on the downsides of enabling Geo-Location for Storj, given the redundancy offered by a globally distributed network - information and training is going to be important to remove the traditional enterprise misconceptions and it would be interesting to know how Storj is tackling this.
    RESPONSE: Customers are free to choose placement rules that align with their storage needs. Storj won’t create or support placement rules that compromise durability or availability of data. Currently we offer a limited number of regions for geofencing, but those options have a sufficient population of nodes.

We have a number of features in mind to make this a much more powerful and flexible feature for customers, but for now we’re managing it very closely.

  6. Can you share any plans to incentivize node operators to host nodes in geographically underrepresented locations? Having all the nodes in Germany isn’t really a good thing for the network.
    RESPONSE: We’re actively looking for nodes in South America right now, and the incentives are generally aligned in that nodes closer to the data being uploaded have a slight advantage.

  7. Can you share any plans for tagging nodes more specifically, on performance, location, etc., to better enhance the experience for the customer when the satellite provides nodes to upload to?
    RESPONSE: We do have a lot of plans and ideas to create compelling features for partners and customers leveraging node tagging and placement rules. The initial use cases of geofencing and supporting the Commercial Node program and its tier of SOC2-certified nodes are only the beginning. This is a highly differentiated capability in the market, and we will continue to make it easier to use.

Question Set 5 - Ottetal:

  • I have a few nodes that had unrecoverably corrupted databases; after a long time they still do not really know how much they store, and they have large unused sections of their drives. Any news on a “recalibrate databases” functionality?
    RESPONSE: Unfortunately, in the case where you have a failing node that hasn’t been DQ’d, your best course of action may be to call Graceful Exit. We don’t have a feature to recalibrate databases in the near-term plans, but there may be a workaround that wipes and recreates the database. We could make this a bit simpler by having some kind of repair command that does this. Currently the operator needs to run these types of commands by hand, which isn’t something every operator can do. If you have ideas on how this should work, please add as much detail as you can to a GitHub issue. We’ll follow up via DM if something surfaces internally and add it to the documentation.

I would love to hear some good war stories on where Storj makes a difference for your (our? :slight_smile: ) customers.
RESPONSE: We shared a few in the Town Hall recording, and we have a press release coming out in the next month about a large customer that closed in Q3.


Would you consider sending an email to all operators, so that operators not following the forum are informed about this issue?

If I may: I’d consider marking trash not by moving files to a separate subdirectory, but in a database. This makes a lot of sense given that deletions are now mostly handled through GC. Considering ext4, right now we perform the following operations for each single removed file:

  • GC file walker: lots of small directory scans, for each file two separate synchronized (!) random (!) writes to directory indices (the source and the target directory) — 8kB per file (!) best case,
  • trash file walker: lots of small directory scans, then for each old file, a file removal.

If we had a table with a schema of (satellite_id, piece_id, timestamp of GC), specifically with no indexes by (satellite_id, timestamp):

  • GC file walker: lots of small directory scans to get a list of all files to trash, then a single batch insert into the database, likely a simple sequential write to WAL (!), which should be amortized 100-400 bytes per file (!), with a later background update (also partially sequential) to tables.
  • trash file walker: no “lots of small directory scans” (!); instead a sequential (!) table scan, then a batch row removal, then for each old file, a file removal.

A curious coincidence here is that SQLite will set consecutive pk to inserted rows, and then store its rows in a btree keyed by the pk. Inserting a set of rows, as well as removing a set of rows with consecutive pk is pretty fast, as they will hit the same btree leaves!

As the table will only be written by file walkers, there should be no significant concurrency problems. I suspect we can’t avoid an index by (satellite_id, piece_id) for fast recovery from trash. I’d still expect a significant improvement of both file walkers. The only drawback is that GC will also go over already trashed files, but by replacing all random operations that we can replace with sequential scans we should still be way faster than the current approach.

The biggest problem here is probably that some code to migrate from a separate directory needs to be written in a robust way.
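For illustration only, here is what the proposed table and the two batch operations could look like in SQLite. This is a sketch of the idea in the post above, including the (satellite_id, piece_id) index for fast recovery from trash; the table and function names are invented, and none of this is storagenode code.

```python
import sqlite3


def open_trash_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create the proposed trash table plus the recovery index."""
    con = sqlite3.connect(path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS trash (
               satellite_id TEXT NOT NULL,
               piece_id     TEXT NOT NULL,
               gc_time      INTEGER NOT NULL
           )"""
    )
    # Needed so restoring a specific piece from trash doesn't require a full scan.
    con.execute(
        "CREATE INDEX IF NOT EXISTS trash_piece ON trash (satellite_id, piece_id)"
    )
    return con


def gc_batch_insert(con, satellite_id: str, piece_ids: list[str], gc_time: int) -> None:
    """GC file walker: one batch insert instead of per-file directory rewrites."""
    con.executemany(
        "INSERT INTO trash (satellite_id, piece_id, gc_time) VALUES (?, ?, ?)",
        [(satellite_id, p, gc_time) for p in piece_ids],
    )
    con.commit()


def expired_pieces(con, cutoff: int) -> list[tuple[str, str]]:
    """Trash file walker: a sequential table scan plus a batch row removal."""
    rows = con.execute(
        "SELECT satellite_id, piece_id FROM trash WHERE gc_time < ?", (cutoff,)
    ).fetchall()
    con.execute("DELETE FROM trash WHERE gc_time < ?", (cutoff,))
    con.commit()
    return rows  # the caller would then unlink each returned file
```

Because inserts and the age-based deletes both touch contiguous rowids, they hit the same btree leaves, which is exactly the locality argument made above.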


We did that before. As a result, some nodes were DQ’d for losing a database. So we switched to the file level, which can survive abrupt interruptions, unlike databases.


Can we just trigger it less often? For example, only run GC if used space is more than 90% of available space, or something like that? Then the node collects more trash but has less IO impact until it’s almost full. And if someone needs free space, I would make it possible to trigger it manually.
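The check being suggested is trivial to express. A sketch, with the 90% default and the function name invented for the example:

```python
def should_run_gc(used_bytes: int, capacity_bytes: int, threshold: float = 0.9) -> bool:
    """Return True once used space reaches the threshold fraction of capacity."""
    if capacity_bytes <= 0:
        return True  # defensive: treat unknown/zero capacity as "run GC"
    return used_bytes / capacity_bytes >= threshold
```

A scheduler could combine this with the existing interval: skip the GC chore when the check returns False, and expose a manual trigger for operators who need space back immediately.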

Right now you can configure only the interval:

> & 'C:\Program Files\Storj\Storage Node\storagenode.exe' setup --help | sls interval
      --bandwidth.interval duration                              how frequently bandwidth usage rollups are calculated (default 1h0m0s)
      --collector.interval duration                              how frequently expired pieces are collected (default 1h0m0s)
      --contact.interval duration                                how frequently the node contact chore should run (default 1h0m0s)
      --graceful-exit.chore-interval duration                    how often to run the chore to check for satellites for the node to exit. (default 1m0s)
      --storage.k-bucket-refresh-interval duration               how frequently Kademlia bucket should be refreshed with node stats (default 1h0m0s)
      --storage2.cache-sync-interval duration                    how often the space used cache is synced to persistent storage (default 1h0m0s)
      --storage2.monitor.interval duration                       how frequently Kademlia bucket should be refreshed with node stats (default 1h0m0s)
      --storage2.monitor.verify-dir-readable-interval duration   how frequently to verify the location and readability of the storage directory (default 1m0s)
      --storage2.monitor.verify-dir-writable-interval duration   how frequently to verify writability of storage directory (default 5m0s)
      --storage2.orders.cleanup-interval duration                duration between archive cleanups (default 5m0s)
      --storage2.orders.sender-interval duration                 duration between sending (default 1h0m0s)
      --storage2.trust.refresh-interval duration                 how often the trust pool should be refreshed (default 6h0m0s)
      --version.check-interval duration                          Interval to check the version (default 15m0s)
      --metrics.interval duration        how frequently to send up telemetry. Ignored for certain applications. (default 1m0s)
      --tracing.interval duration        how frequently to flush traces to tracing agent (default 0s)

Just to be sure I get this right.
“Other” is used 100% only for the day-to-day business and expenses of Storj?
None of that money is used for bonuses or salaries or houses in the Bahamas? :wink:

And the liquidation from STORJ to whatever currency or asset gets 100% reinvested into Storj and not personal gains, right?

Also, is there a reason why you do that on non-US exchanges? Without explanation, it seems weird that you would explicitly state the non-US part.

Unfortunately, no Ferraris for us or the directors… Boring expenses on supporting S3 gateways, for example, and other legacy service providers…

P.S. An automated Ferrari… no, doesn’t matter. Just an automated car which can deliver me from point A to point B without trouble and without a driving license, please?
Also across the sky :thinking:, ah, never mind… I have so many unread books in my queue…


For a database dedicated to storing trash data, the worst that can happen if it is lost is that the next GC will have to collect the trash again.

I hope so…
The last time it was somewhat disruptive…

P.S. I do not like the idea of storing something useful in the sqlite database. It’s robust… until the next abrupt power interruption (without a UPS, you know - use what you have now…)… so - no. A bold NO.


As mentioned in the other thread, can you make these two separate, please?
I routinely check the available capacity on the nodes and the network to be able to plan what to buy in the short term. Not being able to see the part of the network I participate in makes this decision-making harder.
Thank you.


Just vote for this feature; more votes mean a bigger chance it will be done.
Split SOC2 and Public network stats - Ideas & Suggestions / Storage Node feature requests - voting - Storj Community Forum (official)

You can also see it noted there that Storj is already working on it.


That is a good joke, I’ll give you that, but I am not sure this is the right topic to joke about. It seems like a way to dodge the question. Because for

you should use line 11, service provider payments.

And S3 is not boring if it makes up 75% of your expenses :melting_face:


You know… Not everything revolves around tokens.
Sometimes we need to deal with the old legacy money. This is the exact explanation of what is in that position which is so interesting to you.

It is not directly related to the token flows, in my opinion. However, this is much more open information than you would normally expect from a privately held commercial company, in my opinion, sorry to disappoint you. As I said - no Ferraris here. (I checked.)

No. Since no STORJ tokens are directly involved.

Yes, I have to agree; in my opinion, we should charge a premium to users who do not want to use the native implementation, especially when they can.
But more customers are better than fewer… you know…


I see, I think that makes it clearer.
But again, some follow-up questions, just to be sure there are no misunderstandings:

All lines except line 14 are paid directly in STORJ tokens?
Basically, you pay everything directly in STORJ tokens, with “Other” being the only exception?

I always assumed these lines were paid in fiat and you included the conversion.

All lines except line 14 are directly paid in STORJ tokens?

Please note that everything mentioned in our Token Balances and Flows reports, including Line 14, is expressed exclusively in terms of STORJ tokens.

There may have been something lost in translation in one of Alexey’s comments earlier in the thread. To clarify, Line 14 states the amount of STORJ tokens used for payments related to general operations and for liquidity purposes.

So in order to pay some providers, we have to convert some STORJ tokens to fiat first, using certain exchanges.


Cheers. This is a “yes” to my question, I guess?

It means that there are no fiat values listed on any line in the Token Balances and Flows reports, including line 14.