Filewalker not running when disabling lazyfilewalker

I can only repeat my answer. The most important thing for the node is to store the incoming pieces. Writes to the DB file are batched, and if a batch of writes can’t be persisted, you end up with an artificially small db file.
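
Roughly, the failure mode looks like this (a minimal Python sketch with an assumed table name and schema; the actual storagenode code is Go and differs in detail):

```python
import sqlite3

# Sketch (assumed schema, not the actual storagenode code): TTL records
# are buffered and flushed as one transaction, so if the commit fails
# (e.g. "database is locked"), the whole batch is lost and the db file
# stays artificially small.
def flush_batch(con: sqlite3.Connection, batch: list[tuple]) -> bool:
    try:
        with con:  # one transaction for the whole batch
            con.executemany(
                "INSERT INTO piece_expirations "
                "(satellite_id, piece_id, piece_expiration) VALUES (?, ?, ?)",
                batch,
            )
        return True
    except sqlite3.OperationalError:  # e.g. "database is locked"
        return False  # the entire batch of records is dropped
```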


I guess that you had “database is locked” errors before this change, right? Otherwise, why would you have bothered to do it at all? So, in fact, the node wasn’t able to write everything it wanted to the database, only the writes that didn’t fail.
To me it looks like a good change though, because now most (or all) TTL data will be deleted without involving the Garbage Collector (and the trash).
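
For context, TTL cleanup works roughly like this (a sketch with assumed table/column names and a simplified flow, not storagenode’s actual code): expired pieces are looked up directly in piece_expiration.db and removed in place, so they never pass through garbage collection or the trash folder.

```python
import sqlite3

# Rough illustration (assumed schema, not the real storagenode code):
# expired pieces are found via their recorded TTL and deleted directly.
con = sqlite3.connect("piece_expiration.db")
expired = con.execute(
    "SELECT satellite_id, piece_id FROM piece_expirations "
    "WHERE piece_expiration < datetime('now')"
).fetchall()
for satellite_id, piece_id in expired:
    ...  # delete the blob file for this piece directly, bypassing the trash
with con:  # then drop the processed rows in one transaction
    con.execute(
        "DELETE FROM piece_expirations WHERE piece_expiration < datetime('now')"
    )
```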

No, I didn’t see any “database is locked” errors before that db transfer to the SSD. I did it just to free up at least some of the scarce HDD IOPS, in an attempt to speed up the filewalker processing (GC + used-space-FW + trash-FW). So I moved both the db and the main storagenode.log (running at “info” level, so it is written to very often) from the HDD to the SSD.

This only slightly helped the filewalkers (maybe a 15%-20% speedup), but I got an unexpected side effect: the size of the piece_expiration.db database increased multifold.

My best current hypothesis is that with the database located in the default folder (same as the blobs), the node for some reason skipped saving the TTL data coming from SLC and saved only the data from the three main production satellites, where TTL is used maybe tens of times less often than on SLC. On the production satellites most pieces are uploaded without a TTL set, or at least that was the situation a few months ago when I checked, while for SLC ALL the pieces come with a TTL set.

On another small node of mine (~10 M files), which also did a GE (Graceful Exit) from SLC last year, the size of this database is now less than 50 MB (it was about 70 MB, but the last update, to 1.108.3, additionally cut it almost in half during a db migration that dropped unused table columns).

And on two large nodes (~60 and ~90 M files) that still continue to work with SLC, the size of this database was in the range of 200-300 MB before transferring it to the SSD (so it was in direct proportion to the data volumes stored for the production satellites, excluding SLC), and it is 5800 MB and 8300 MB at the moment, after the transfer. And >90% of its content is now TTL data for SLC.

If I have time, I can test this hypothesis later, because I have a backup of the compact databases taken before transferring them to the SSD, and I can check what exactly was stored in them before the transfer.
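
Something like this should be enough for the check (a minimal sketch; the backup path is hypothetical, and the table and column names, piece_expirations and satellite_id, are my assumption and may differ between storagenode versions):

```python
import sqlite3

# Count TTL records per satellite in the backed-up database.
con = sqlite3.connect("piece_expiration.db.backup")  # hypothetical path
for satellite, count in con.execute(
    "SELECT hex(satellite_id), COUNT(*) FROM piece_expirations "
    "GROUP BY satellite_id ORDER BY COUNT(*) DESC"
):
    print(satellite, count)
```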

P.S.
By the way, I have long wanted to ask why the community now often shortens the name of the Saltlake satellite to “SLC”. It’s supposed to be SLS, isn’t it?
Where does the last letter “C” in the “SLC” abbreviation come from instead of “S” (for Satellite)?

Salt Lake City


Yeah, I know there’s a pretty big city in the USA with that name (and also a county with the same name). But Storj users are not referring to a locality or a geographical point, but to a particular server (or a small cluster of servers) in the Storj network that was used to run various large-scale tests. And servers in the Storj network are called “satellites”.

Even if this server is physically located in Salt Lake City (I don’t know; probably, but I haven’t checked), it doesn’t really matter. This may easily explain the first two letters (where the name SL = “Salt Lake” came from when the server was installed), but not the last letter “C” I asked about.

I don’t know why they named it saltlake, but I guess it was because of Salt Lake City, thus I am using SLC.

The full name of this city is Salt Lake City. There’s no locality named just Salt Lake (well, there is one, but it’s in Hawaii).

Off-topic about SLC

In the Russian community it is mentioned in literal translation, so it was very confusing for me to figure out which lake they were discussing in contexts like:

I came out of the lake.

or even

And as you remember, I have a lake dumped everywhere.

So my family and I killed the lake as soon as it became cheaper than others.

:person_shrugging:

If the community likes to call it the SLC satellite, we do not have any objections. At least it seems everyone is aware that it’s the Saltlake satellite.

Yes, many have mentioned that the database grows more after they moved the DB to the SSD. However, I think that it’s a coincidence of several events:

  1. You moved it only after the uploads started and you saw errors like “database is locked”, not before, right? So some data was not added while your node had this issue.
  2. Since all records are now added, the database is growing (a quick way to verify this is sketched below).
  3. The test load is likely even higher now than when you had the DB on the HDD.
  4. It is exactly this database that contains the TTL information for the pieces, and most of the data from the SLC satellite is TTL-limited.
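
A quick sanity check for point 2 (a hypothetical sketch; the path and the table name are my assumptions): compare the row count with the file size to confirm that the growth really is new TTL records being persisted.

```python
import os
import sqlite3

# Hypothetical path and assumed table name (piece_expirations).
path = "piece_expiration.db"
con = sqlite3.connect(path)
(rows,) = con.execute("SELECT COUNT(*) FROM piece_expirations").fetchone()
print(f"{rows} rows, {os.path.getsize(path) / 1e6:.1f} MB on disk")
```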