How long does Storj pay SNOs for stored data?

I wonder what happens if a customer deletes something. Will payment stop immediately, or is the data still paid for until the next delete info (bloom filter) goes out to the SNO?

My guess: since Storj will not receive any money from the customer the second they delete a file, the SNO will not get paid from that point on either.


They don’t care how long you store deleted data, even if it’s still on your node and not yet in trash. They pay by what the sats are saying, and the sats are updated instantly by what the customer does.


Is there any official source for this claim or is it just a guess?

No need to guess. Experience from this forum, and logic.



Try to think from the perspective of the client. Would you, as a client, pay for a file you deleted?


Your node is paid by GB-hour too, according to the data recorded on the satellite. As soon as the customer deletes their data, billing stops incrementing for the deleted amount, so payment to the SNO stops accruing for the deleted data too.
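To make the accounting concrete, here is a toy sketch of GB-hour accrual stopping at the satellite-recorded deletion time. The function name, hours-based units, and example numbers are all invented for illustration; this is not Storj’s real billing code.

```python
# Toy arithmetic only -- not Storj's real accounting code or rates.
def gb_hours_billed(size_gb, upload_hour, delete_hour, period_end_hour):
    """GB-hours accrued for one object: accrual stops the moment the
    satellite records the deletion, no matter when the node's copy is
    actually cleaned up by a later bloom filter."""
    end = period_end_hour if delete_hour is None else min(delete_hour, period_end_hour)
    return size_gb * max(0, end - upload_hour)

# 10 GB stored from hour 0, deleted by the customer at hour 100 of a
# 720-hour month: billing (and SNO payment) covers only 1000 GB-hours,
# even if the node holds the garbage until the next bloom filter.
print(gb_hours_billed(10, 0, 100, 720))   # 1000
print(gb_hours_billed(10, 0, None, 720))  # 7200 (never deleted)
```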


Thanks for confirming. The bloom filters should come more frequently then, maybe once a day.

Wasn’t the point of asynchronous deletions to make it seem faster to the customer?

IMO the fairest way to do it is to let customers choose: a synchronous delete, where they take the performance hit but aren’t billed any further, or an async delete, where they keep being billed but don’t have to wait for the deletion.

As a customer… when I delete something I expect to no longer pay for it. If my provider has internal procedures or policies that mean it may take them a couple of weeks of housekeeping to clean things up: not my problem or concern. If the providers to my provider end up storing some of that useless data for a while: not my problem.

I understand as a SNO some bandwidth and/or disk space will be “wasted” (in that I don’t get paid for it). Fine. If I find a service that rewards me better I’ll just move my space over there… everybody knows what they signed up for.


I also treat Storj themselves as a black box, just as a customer would. If they are no longer paying me to hold a piece, I’d expect them to notify the node of that fact in a timely manner so that it can be deleted.

I don’t mind much yet because my big nodes aren’t full, but they’re getting pretty close, so trash retention is becoming an actual issue.

The BFs are generated by the satellites from database backups, as I understand it. So there is already a time difference between the backup and the current data, because you can’t keep making backups every minute. Then, generating a BF for each node is a very resource-intensive process, so they just can’t generate BFs daily for every node. But I’m not an engineer there and I don’t know all the details, only what the Storjlings post here. Maybe they can generate them faster…
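To illustrate how a bloom filter drives this cleanup — the filter encodes the pieces the satellite still knows about, and the node discards whatever doesn’t match — here is a minimal sketch. The class, sizes, and piece names are invented; this is not Storj’s actual filter format.

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter for illustration (not Storj's wire format)."""
    def __init__(self, size_bits=1024, hashes=3):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item):
        # Derive several bit positions per item from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def may_contain(self, item):
        # No false negatives: anything added always matches.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Satellite side: build the filter from pieces it still knows about -- taken
# from a days-old DB backup, so recently deleted pieces may still be included.
kept_on_satellite = {"piece-A", "piece-B"}
bf = BloomFilter()
for piece in kept_on_satellite:
    bf.add(piece)

# Node side: treat anything the filter does NOT match as garbage. False
# positives are possible, so some garbage can survive until a later filter.
node_pieces = {"piece-A", "piece-B", "piece-deleted"}
garbage = {p for p in node_pieces if not bf.may_contain(p)}
```

Because a bloom filter never produces false negatives, a node can safely keep everything that matches — at the cost of occasionally keeping deleted pieces a bit longer.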


There are different testimonials here, probably all true:


We generate BFs once or twice per week. @snorkel is correct - they are generated from a DB backup, so as not to catch today’s customer activity, and it is a resource-heavy process because of the number of segments and the number of nodes. So there is a delay between the actual data deletion and the generated BF.

Synchronous deletions are too slow and can even time out (this can actually still happen if there are millions of segments/files), while async mode can delete many more segments/files than sync mode can.

In sync mode, the client sends a deletion request to the satellite (:clock10: ), which must contact each node (:clock1030: ) to pass on the customer’s deletion request (:clock11: ), get a confirmation that the segments are deleted (:clock1130: ), and then pass that confirmation back to the customer (:clock12: ).

In async mode, the satellite does not contact each node individually; instead it marks all segments as deleted and sends a confirmation to the customer right away (:clock10: ). Later it generates a BF (which would be generated anyway, even in sync mode, because some nodes could be offline at the time) and sends it to all nodes - a less intensive process and a quicker response to the customer.
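The difference between the two flows can be sketched with a couple of toy classes. All the names here (`Node`, `Satellite`, `sync_delete`, `async_delete`) are invented for illustration; the real satellite/node protocol is of course far more involved.

```python
# Invented toy classes -- not the real satellite/node protocol.
class Node:
    def __init__(self, pieces):
        self.pieces = set(pieces)

    def delete(self, segments):
        # In reality, one network round trip per node.
        self.pieces -= set(segments)
        return True

class Satellite:
    def __init__(self):
        self.deleted = set()

    def mark_deleted(self, segments):
        # Cheap metadata-only update on the satellite.
        self.deleted |= set(segments)

def sync_delete(satellite, nodes, segments):
    """Customer blocks until every node has confirmed -- slow, can time out."""
    oks = [node.delete(segments) for node in nodes]  # N round trips
    satellite.mark_deleted(segments)
    return all(oks)  # customer gets the answer only here

def async_delete(satellite, nodes, segments):
    """Customer gets an immediate confirmation; nodes learn about the
    deletion later, via the periodic bloom filter (which is needed anyway,
    since some nodes may be offline at deletion time)."""
    satellite.mark_deleted(segments)
    return True  # no per-node round trips on this path
```

The key point: the async path does strictly less work before answering the customer, and the node-side cleanup it defers is work the system must support anyway for offline nodes.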


And your node is notified twice a week.

Couldn’t it keep working the way it does currently, but also send the deletion request to the node? Or at least send it to nodes that are nearly full, so that space can be freed up. The customer wouldn’t wait any longer than in the current system, and even if the node doesn’t process the deletion request for whatever reason, the BF would still catch it.


Just one idea. I don’t know if something like this would even be possible:

We have that DB on the nodes for the pieces that self-expire (TTL).
Can an existing file/piece be transformed into a self-expiring piece?

If yes, then the idea would be: a customer deletes a file on their end. This starts a process that turns the file into a TTL file with an expiration date of the current date, and the nodes are informed to update the piece expiration DB for their pieces of that file.
Each node then takes care of deletion according to its own expiration database.
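For what it’s worth, the node-side half of this idea is roughly how TTL cleanup already works: the node periodically sweeps its own expiration table. A toy sketch — the table name, schema, and piece IDs are invented, not the real piece expiration database:

```python
import sqlite3
import time

# Invented schema for illustration -- not the node's real expiration DB.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE piece_expirations (piece_id TEXT, expires_at INTEGER)")
now = int(time.time())
db.execute("INSERT INTO piece_expirations VALUES ('piece-old', ?)", (now - 60,))
db.execute("INSERT INTO piece_expirations VALUES ('piece-live', ?)", (now + 3600,))

def sweep_expired(db, now):
    """Return and remove all piece ids whose expiration time has passed;
    the node would delete the corresponding piece files locally."""
    expired = [row[0] for row in db.execute(
        "SELECT piece_id FROM piece_expirations WHERE expires_at <= ?", (now,))]
    db.execute("DELETE FROM piece_expirations WHERE expires_at <= ?", (now,))
    return expired

print(sweep_expired(db, now))  # ['piece-old']
```

The appeal of the idea is that this sweep needs no satellite round trip at deletion time — which is exactly why the hash problem raised below matters.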

Unfortunately no: that would change its hash, the customer’s libuplink would reject it as corrupted, and it would also affect your audit score sooner or later.
Also, how can we remove the customers’ data without their consent?

The customer is the one who specifies a TTL during upload: