It's Town Hall Time Again
If you have questions you'd like answered during the Town Hall or after, please post them here.

Thanks,
John
Could you describe in a little more detail what "others" in your balance sheet are?
It lists service providers, which is odd because service providers already have their own column.
It also lists general operations. It can't be salaries, because those already have their own column. Could you give a little more detail on what that means?
It also lists liquidation on non-US exchanges. The US part is oddly specific, and I wonder what you do with these liquidations. Where does the money from these liquidations go?
I am sorry if this comes off as nosy, but given that "Other" is by far the biggest spending item, I think it is appropriate to ask these questions.
Is there any progress on updating node T&C?
What is the progress on the Commercial SNO Network?
Please provide details on average node life now.
Do you see further tweaks required to SNO payments within the next 18 months?
SNO income has been reduced by approximately half since the payment changes. I thought the idea was to reduce income on a stable basis. How is a 50% drop stable?
The ratio of unused space to used space is now very near 2:1. Is this a concern for Storj, and if so, what is planned here?
Removing data from the decommissioned sats was posted to the forum - but what about SNOs who don't visit the forum - what are the implications for them?
Can we have an update on zkSync-Era paymaster support for the STORJ token? It feels like the only blocker to mass adoption among Operators who don't have spare ETH to fund their wallets.
Can you share any plans for additional satellites to support the Commercial SNO Network?
Any plans to re-write the Node Trash piece code to be more efficient and less intensive on Node Storage? Compared to 1 year ago, the Trash churn at piece level is horrible; deleting from the Trash by satellite and date would be far more efficient than walking each piece.
Can you share any statistics on how the Audit workers are scaling now - approximate stats on how often a 1TB node has all its pieces audited would be useful.
Can you share any plans to provide better guidance for Customers on the downsides of enabling Geo-Location for Storj, given the redundancy offered by a globally distributed network? Information and training are going to be important to remove the traditional enterprise misconceptions, and it would be interesting to know how Storj is tackling this.
Can you share any plans to incentivize node operators to host nodes in geographically underrepresented locations? Having all the nodes in Germany isn't really a good thing for the network.
Can you share any plans for tagging nodes more specifically - on performance, location, etc. - to better enhance the experience for the customer when the satellite provides nodes to upload to?
CP
Hiya
Consider that some questions could be answered by the support team and engineers here on the forum rather than at the Town Hall, depending on what you want to know.
Oh you mean like how the ToS question has(n't) been updated in months?
As promised, here are answers to the questions posted in advance of the Town Hall. We answered all the questions posted even though some of them were covered in the Town Hall.
Could you describe in a little more detail what "others" in your balance sheet are?
It lists service providers, which is odd because service providers already have their own column.
It also lists general operations. It can't be salaries, because those already have their own column. Could you give a little more detail on what that means?
It also lists liquidation on non-US exchanges. The US part is oddly specific, and I wonder what you do with these liquidations. Where does the money from these liquidations go?
I am sorry if this comes off as nosy, but given that "Other" is by far the biggest spending item, I think it is appropriate to ask these questions.
RESPONSE: Thank you for your interest in the details of the Storj Token Balances and Flows Report. As we note about the category of "other" in the token report:
Line 14, "Other," is reserved to report activity that doesn't fall into any of the other categories, including, for example, non-routine payments to service providers and carbon offset program payments. As noted above, in Q3 '23, 14M STORJ tokens were used in payments that included general operations and liquidity purposes. To provide additional liquidity for general operations during uncertain economic times and in periods of growth, we also are liquidating a portion of our reserves on non-US exchanges through a partnership, and these flows are disclosed in this line item.
We do our best to respond to questions about Storj and the token reports as transparently as possible, but there are limits to the level of detail we share publicly about the operation of Storj as a privately held business. That said, we can share that the money from the STORJ liquidations is used to fund the ongoing operation of Storj as a business, as many of our bills are paid in fiat currency and we are a non-venture backed, growing company. In addition, we occasionally shore up our cash position to protect against uncertain economic times, industry-wide impacts, and any potential future headwinds.
With regard to your question about service provider payments, note that line 11 reflects regularly scheduled service provider payments made in STORJ, whereas the service provider payments in line 14 include any non-routine service providers paid in fiat currency from token liquidation.
Is there any progress on updating node T&C?
RESPONSE: There has been a lot of progress on the Node T&C, and we appreciate your patience while we get it finalized. We had some additional work to do based on some changes related to the commercial node program rollout. We will update the SNO T&C this quarter.
What is the progress on the Commercial SNO Network?
RESPONSE: We've discussed this in a couple of places, but it's worth reiterating that we implemented the SOC2 tier using node tags, similar to the way we handle geofencing. The commercial nodes and public nodes share the same satellites, and the usage information is visible on the dashboards and stats API, but any particular data set is stored only on one set of nodes or the other.
While the nodes are part of the same ecosystem, commercial nodes and public nodes are not competing for the same traffic. Data in a bucket is either sent to public nodes (the default) or to commercial nodes (by customer request).
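To make the node-tag idea concrete, here is a toy Go sketch of tag-based selection (the type, the tag names, and the selection logic are illustrative assumptions, not the actual satellite code):

package main

import "fmt"

// Node is a simplified stand-in for a satellite's view of a storage node.
type Node struct {
	ID   string
	Tags map[string]string // e.g. "tier": "soc2", as set by node tagging
}

// selectByTier keeps only nodes whose tier tag matches the bucket's
// placement: "" selects untagged public nodes, "soc2" the commercial set.
func selectByTier(nodes []Node, tier string) []Node {
	var out []Node
	for _, n := range nodes {
		if n.Tags["tier"] == tier {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	nodes := []Node{
		{ID: "public-1", Tags: map[string]string{}},
		{ID: "commercial-1", Tags: map[string]string{"tier": "soc2"}},
	}
	// The default placement sees only public nodes...
	fmt.Println(selectByTier(nodes, "")) // [{public-1 map[]}]
	// ...while a commercial-tier bucket sees only tagged nodes, so the
	// two sets never compete for the same uploads.
	fmt.Println(selectByTier(nodes, "soc2"))
}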
Please provide details on average node life now.
RESPONSE: Average node life is:
US1: 908 days (2 years and 5 months)
AP1: 860 days (2 years and 4 months)
EU1: 864 days (2 years and 4 months)
Do you see further tweaks required to SNO payments within the next 18 months?
RESPONSE: We do not see further changes to SNO payments any time in the near future. It's possible that something may change in the broader market, but as far as we can foresee, no additional changes are planned.
SNO income has been reduced by approximately half since the payment changes. I thought the idea was to reduce income on a stable basis. How is a 50% drop stable?
RESPONSE: We do our best to maintain a stable level of usage; however, customer usage, especially egress bandwidth, is variable, and the timing of when customers start onboarding new data can be unpredictable. We've committed to keeping synthetic data at a level that keeps average payouts at ~$130k per month. We will consider ways to increase synthetic load as needed to help us hit this target.
Any prediction on a per-node basis relies on a range of factors and is impacted by the growth of the network, almost all of which are outside our control as Satellite operators.
The ratio of unused space to used space is now very near 2:1. Is this a concern for Storj, and if so, what is planned here?
RESPONSE: It's not a concern at this point. We continue to see an increase in demand and currently do not foresee any issues with the current growth and trajectory of the network. Note that Commercial Node capacity is included in the total (currently only on US1) and in general, that capacity is activated with known, planned capacity.
Removing data from the decommissioned sats was posted to the forum - but what about SNOs who don't visit the forum - what are the implications for them?
RESPONSE: All nodes are treated equally in terms of the decommissioned satellites. Most of the data associated with the decommissioned satellites was deleted off the network as part of that decommissioning process with the help of garbage collection. We even ran a few cycles to make sure the amount of garbage left behind is low. Naturally, there is still a small amount of garbage left behind. For safety reasons we decided we don't want to implement code that could potentially wipe out any of the existing satellites, so a manual command was the final solution.
We are also aware that there is a bug in the garbage collection implementation. Some nodes never finished garbage collection and are still sitting on a higher amount of garbage. Apologies for that. Please follow the same manual commands.
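For reference, the manual cleanup mentioned above is the forget-satellite subcommand announced on the forum; on a docker node the invocation looks roughly like this (paths are the usual container defaults and may differ for your setup):

docker exec -it storagenode ./storagenode forget-satellite --all-untrusted --config-dir config --identity-dir identity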
Can we have an update on zkSync-Era paymaster support for the STORJ token? It feels like the only blocker to mass adoption among Operators who don't have spare ETH to fund their wallets.
RESPONSE: Nodes opt in with
operator.wallet-features: ["zksync-era", "zkSync"]
when configured. You will see a new zkSync-Era link on your dashboard. It opens the new blockchain explorer. It works well, and the first payment was last month, for September. We hope to see more nodes adopt zkSync.
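(For operators who pass options on the run command instead of editing config.yaml, the flag form should be roughly the following; the exact syntax is an assumption, so check the payout documentation for your setup:)

--operator.wallet-features=zksync-era,zkSync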
Can you share any plans for additional satellites to support the Commercial SNO Network?
RESPONSE: Currently our plans to scale the Commercial Node Network are focused on adding node capacity and edge capacity. The Commercial Nodes and public nodes share the same satellites. We don't have plans or needs to scale or add satellites specifically for the Commercial Node Network. Satellites are already multi-region and horizontally scalable.
Any plans to re-write the Node Trash piece code to be more efficient and less intensive on Node Storage? Compared to 1 year ago, the Trash churn at piece level is horrible; deleting from the Trash by satellite and date would be far more efficient than walking each piece.
RESPONSE: We refactored deletes from a satellite process to function more like GC. The result improved the user experience for deletes, but did increase the load on nodes. We're continuing to look at ways to make GC and deletes more efficient, but there is nothing scheduled right now.
GC alone may not be the painful part of the process. One additional challenge is that on every startup a node will check all pieces in the trash folder. It will check the creation date of all of them. If you restart the node 5 minutes later it will start over and check again. That part is inefficient.
We are thinking about a number of approaches to address the challenges holistically. For example, could we make it so that the trash folder is checked only once every 7 days? All we need to do is have an index file. When GC runs it moves pieces into the trash folder. Copy the timestamp of the last piece that was moved. Do this for all satellites. Check the timestamp in the index file first and just skip looking at the pieces until the timestamp is reached. Make this per satellite and also plan for having more than one timestamp in the index file.
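As a minimal Go sketch of that index-file idea (file name, layout, and the single-timestamp simplification are assumptions; a real version would keep several timestamps per satellite, as noted above):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"time"
)

const retention = 7 * 24 * time.Hour

// trashIndex keeps, per satellite, the move-time of the oldest GC batch
// still sitting in trash. (A real version would keep one timestamp per
// batch per satellite, as noted above; one is enough for a sketch.)
type trashIndex map[string]time.Time

// shouldWalk reports whether any trashed pieces for this satellite can
// have expired yet; if not, the per-piece startup scan is skipped.
func (idx trashIndex) shouldWalk(satelliteID string, now time.Time) bool {
	oldest, ok := idx[satelliteID]
	if !ok {
		return true // no record yet: walk once to build one
	}
	return now.Sub(oldest) >= retention
}

func (idx trashIndex) save(path string) error {
	b, err := json.Marshal(idx)
	if err != nil {
		return err
	}
	return os.WriteFile(path, b, 0o644)
}

func main() {
	// The last GC moved pieces to trash two days ago: nothing can be
	// expired, so every restart before day 7 skips the walk entirely.
	idx := trashIndex{"us1": time.Now().Add(-2 * 24 * time.Hour)}
	fmt.Println(idx.shouldWalk("us1", time.Now())) // false
	if err := idx.save("trash-index.json"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}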
Can you share any statistics on how the Audit workers are scaling now - approximate stats on how often a 1TB node has all its pieces audited would be useful.
RESPONSE: If you want to check how many audits you get in a month, you can use the storage node API. It has an endpoint (useful help link here) that shows uptime. That uptime endpoint lists all online checks (these are audits) in 12h time windows for the last 30 days. Sum them up and you know how many audits you had.
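As a hedged Go sketch of that calculation against the node dashboard API on its default port 14002 (the endpoint path, JSON field names, and satellite ID below are assumptions to check against the linked help page):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// satelliteStats mirrors just the audit-history part of the node
// dashboard API response; the field names are assumptions to adjust
// against whatever the linked help page documents.
type satelliteStats struct {
	AuditHistory struct {
		Windows []struct {
			TotalCount int `json:"totalCount"`
		} `json:"windows"`
	} `json:"auditHistory"`
}

func main() {
	// Hypothetical satellite ID; list yours via the dashboard API root.
	const sat = "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"
	resp, err := http.Get("http://localhost:14002/api/sno/satellite/" + sat)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var stats satelliteStats
	if err := json.NewDecoder(resp.Body).Decode(&stats); err != nil {
		panic(err)
	}

	// One entry per 12h window for the last 30 days; each online check
	// counted there is an audit, so the sum is the 30-day audit total.
	total := 0
	for _, w := range stats.AuditHistory.Windows {
		total += w.TotalCount
	}
	fmt.Printf("audits in the last 30 days: %d\n", total)
}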
Can you share any plans to provide better guidance for Customers on the downsides of enabling Geo-Location for Storj, given the redundancy offered by a globally distributed network? Information and training are going to be important to remove the traditional enterprise misconceptions, and it would be interesting to know how Storj is tackling this.
RESPONSE: We have a number of features in mind to make this a much more powerful and flexible feature for customers, but for now we're managing it very closely.
Can you share any plans to incentivize node operators to host nodes in geographically underrepresented locations? Having all the nodes in Germany isn't really a good thing for the network.
RESPONSE: We're actively looking for nodes in South America right now, and the incentives are generally aligned in that nodes closer to the data being uploaded have a slight advantage.
Can you share any plans for tagging nodes more specifically - on performance, location, etc. - to better enhance the experience for the customer when the satellite provides nodes to upload to?
RESPONSE: We do have a lot of plans and ideas to create compelling features for partners and customers leveraging node tagging and placement rules. The initial use cases of geofencing and supporting the Commercial Node program and tier of SOC2-certified nodes are only the beginning. This is a highly differentiated capability in the market, and we will continue to make it easier to use.
I would love to hear some good war stories on where Storj makes a difference for your (our?) customers.
RESPONSE: We shared a few in the Town Hall recording and we have a press release coming out in the next month about a large customer that closed in Q3.
Would you consider sending an email to all operators, so that operators not following the forum are informed about this issue?
If I may: I'd consider marking trash not by moving files to a separate subdirectory, but in a database. This makes a lot of sense given that deletions are now mostly handled through GC. Considering ext4, right now we perform the following operations for each single removed file:
If we had a table with a schema of (satellite_id, piece_id, timestamp of GC), specifically with no indexes by (satellite_id, timestamp):
A curious coincidence here is that SQLite will assign consecutive primary keys to inserted rows, and then store its rows in a btree keyed by the pk. Inserting a set of rows, as well as removing a set of rows with consecutive pks, is pretty fast, as they will hit the same btree leaves!
As the table will only be written by file walkers, there should be no significant concurrency problems. I suspect we can't avoid an index by (satellite_id, piece_id) for fast recovery from trash. I'd still expect a significant improvement of both file walkers. The only drawback is that GC will also go over already trashed files, but by replacing all the random operations that we can with sequential scans, we should still be way faster than the current approach.
The biggest problem here is probably that some code to migrate from a separate directory needs to be written in a robust way.
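Spelling the proposal out as a rough Go sketch, assuming the mattn/go-sqlite3 driver (this is the commenter's idea made concrete, not planned storagenode code):

package main

import (
	"database/sql"
	"time"

	_ "github.com/mattn/go-sqlite3" // assumed driver for this sketch
)

const schema = `
CREATE TABLE IF NOT EXISTS trash (
    id           INTEGER PRIMARY KEY, -- consecutive rowids: cheap btree inserts
    satellite_id BLOB NOT NULL,
    piece_id     BLOB NOT NULL,
    trashed_at   INTEGER NOT NULL     -- unix time of the GC move
);
CREATE INDEX IF NOT EXISTS trash_sat_piece ON trash (satellite_id, piece_id);
`

// markTrashed records one GC batch; consecutive primary keys keep the
// inserts on the same btree leaves, as described above.
func markTrashed(db *sql.DB, satellite []byte, pieces [][]byte, now time.Time) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op after a successful Commit
	stmt, err := tx.Prepare(`INSERT INTO trash (satellite_id, piece_id, trashed_at) VALUES (?, ?, ?)`)
	if err != nil {
		return err
	}
	defer stmt.Close()
	for _, p := range pieces {
		if _, err := stmt.Exec(satellite, p, now.Unix()); err != nil {
			return err
		}
	}
	return tx.Commit()
}

// purgeExpired drops every row older than the 7-day retention in one
// range delete; the matching piece files would be unlinked alongside.
func purgeExpired(db *sql.DB, now time.Time) error {
	cutoff := now.Add(-7 * 24 * time.Hour).Unix()
	_, err := db.Exec(`DELETE FROM trash WHERE trashed_at < ?`, cutoff)
	return err
}

func main() {
	db, err := sql.Open("sqlite3", "trash.db")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	if _, err := db.Exec(schema); err != nil {
		panic(err)
	}
	if err := purgeExpired(db, time.Now()); err != nil {
		panic(err)
	}
}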
We did that before. As a result, some nodes were DQed for losing a database. So we switched to the file level, which can survive abrupt interruptions, unlike databases.
Can we just trigger it less often? For example, only run GC if used space is more than 90% of available space, or something like that? Then the node collects more trash but has less IO impact until it's almost full. And if someone needs free space, I would make it possible to trigger it manually.
Right now you can configure only the interval:
> & 'C:\Program Files\Storj\Storage Node\storagenode.exe' setup --help | sls interval
--bandwidth.interval duration how frequently bandwidth usage rollups are calculated (default 1h0m0s)
--collector.interval duration how frequently expired pieces are collected (default 1h0m0s)
--contact.interval duration how frequently the node contact chore should run (default 1h0m0s)
--graceful-exit.chore-interval duration how often to run the chore to check for satellites for the node to exit. (default 1m0s)
--storage.k-bucket-refresh-interval duration how frequently Kademlia bucket should be refreshed with node stats (default 1h0m0s)
--storage2.cache-sync-interval duration how often the space used cache is synced to persistent storage (default 1h0m0s)
--storage2.monitor.interval duration how frequently Kademlia bucket should be refreshed with node stats (default 1h0m0s)
--storage2.monitor.verify-dir-readable-interval duration how frequently to verify the location and readability of the storage directory (default 1m0s)
--storage2.monitor.verify-dir-writable-interval duration how frequently to verify writability of storage directory (default 5m0s)
--storage2.orders.cleanup-interval duration duration between archive cleanups (default 5m0s)
--storage2.orders.sender-interval duration duration between sending (default 1h0m0s)
--storage2.trust.refresh-interval duration how often the trust pool should be refreshed (default 6h0m0s)
--version.check-interval duration Interval to check the version (default 15m0s)
--metrics.interval duration how frequently to send up telemetry. Ignored for certain applications. (default 1m0s)
--tracing.interval duration how frequently to flush traces to tracing agent (default 0s)
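So, as a hedged example, running the expired-pieces collector daily instead of hourly would be the following entry in config.yaml (assuming the flag maps to a config key in the usual dotted form, like operator.wallet-features above):

collector.interval: 24h0m0s

Note this only affects the expired-pieces collector; as far as I know, the GC/trash work discussed above is driven by bloom filters from the satellites rather than by one of these intervals.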
Just to be sure I get this right.
"Other" is used 100% only for the day-to-day business and expenses from STORJ?
None of that money is used for bonuses or salaries or houses in the Bahamas?
And that liquidation from STORJ to whatever currency or asset gets 100% reinvested into STORJ and not personal gains, right?
Also, is there a reason why you do that on non-US exchanges? Without explanation, it seems weird that you would explicitly state the non-US part.
Unfortunately, no Ferraris for us or the directors… Boring expenses on support of S3 gateways, for example, and other legacy service providers…
P.S. An automated Ferrari… no, it doesn't matter. Just an automated car which can deliver me from point A to point B without troubles and without a driver's license, please?
Also across the sky, ah, never mind… I have so many unread books in my queue…
For a database dedicated to storing trash data, the worst that can happen from losing it is that the next GC will have to collect the trash again.
I hope so…
The last time it was somewhat disruptive…
P.S. I do not like the idea of storing something useful in an SQLite database. It's robust… until the next abrupt power interruption (without a UPS, you know - use what you have now…)… so - no. A bold NO.
As mentioned in the other thread, can you make these two separate, please?
I routinely check the available capacity on my nodes and the network to be able to plan ahead what to buy in the short term. Not being able to see the part of the network I participate in makes this decision making harder.
Thank you.
Just vote for this feature; more votes means a bigger chance it will be done.
Split SOC2 and Public network stats - Ideas & Suggestions / Storage Node feature requests - voting - Storj Community Forum (official)
You can also see it noted there that Storj is already working on it.
That is a good joke, I give you that, but I am not sure this is the right topic to joke about. It seems like a way to dodge the question, because for those you should use line 11, service provider payments.
And S3 is not boring if it makes up 75% of your expenses.