Changelog v1.11.1

That’s caused by the writes still happening. Read-only workloads are no different between SMR and CMR.

2 Likes

I had to transfer all SN files from a Seagate drive (SMR) to a WD because it was causing me problems at 4TB. It took more than 24h just to calculate 3.5TB of files, then close to 2 days to copy all of that to the other disk.

How are SMR drives identified?

The node was offline while copying. Do you mean data being written from the CMR cache to the SMR sections?

Most of the time: just google the drive’s model number for a data sheet.
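
If you’re on Linux and don’t know the model number, smartctl from the smartmontools package will print it (just an example; /dev/sda is a placeholder for your drive):

  # print the drive’s identity info, including vendor and model number
  sudo smartctl -i /dev/sda

The reported model number can then be looked up in the manufacturer’s data sheet.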

1 Like

There’s a pretty comprehensive list here too:

If it was stopped right before you started the copy, then yes. This can take many hours, since a lot of data has to be overwritten and rewritten in the SMR sections of the drive. Just because you stopped writing to it doesn’t mean it stops doing its own housekeeping.

1 Like

Hope manufacturers aren’t submarining SMR into enterprise-class HDDs.

I just received this update and I have 2 observations to share.

First off, some good news. The rounding issue seen by people who set 1.93TB as the node size seems to be fixed with this release. This used to round down to 1.9TB, but now it seems to be exact, which gives users a bit more accurate control over how much space to assign. Additionally, the CLI dashboard now shows an extra decimal place for all numbers, so we can see a bit more precisely how much data is used. That’s really appreciated!

The second observation is that on nodes with a separate db location configured, the new orders folder (which stores orders as files instead of in a db) seems to end up in the data location rather than the database location. While it’s technically no longer a database, it’s still a source of additional IO that SNOs who moved the db’s would most likely rather have happen on the IO-optimized db location. I know I sure would prefer that. @littleskunk was there a specific reason for this I’m overlooking?

4 Likes

In my case it is stored in the node installation folder.

1 Like

You’re right, it’s technically in the config location, not the storage location. On docker installs that’s on the same HDD though, since the storage location is a subfolder of the config location.

Since it is a completely separate folder, you can just make another mountpoint in docker: -v /somewhere:/app/config/orders [well, use the --mount option, not -v]
Or am I missing something?
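
For illustration, something along these lines should do it (just a sketch; /mnt/ssd/orders is a placeholder host path and the usual run flags are omitted):

  docker run -d --name storagenode \
    --mount type=bind,source=/mnt/ssd/orders,destination=/app/config/orders \
    ... \
    storjlabs/storagenode:latest

One thing to keep in mind: with --mount type=bind, docker refuses to start the container if the source directory doesn’t already exist on the host, so create it first.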

Oh sure, I can find a workaround. But I’m not just thinking about my own node here. I was just giving some feedback on what I would consider a better implementation.

But let me add some extra arguments to that while I’m at it. On Windows, reinstallation and removal of the install directory didn’t use to impact the storage or databases. But removing that directory would now also throw out orders, including unpaid ones. In my opinion, neither data nor metadata should live in that location. Obviously this isn’t a vital issue; you’d only be missing out on a fraction of payout should that occur. But it’s about keeping stuff in the right places.

Why is it that if I allocate 13.5 TB in the docker run cmd, the config shows the following?

  # total allocated disk space in bytes
  storage.allocated-disk-space: 1.0 TB

Because the docker run command takes precedence over the config file.
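
The allocation you pass to docker run, normally via the STORAGE environment variable, is what the node actually uses. Roughly (the other flags, mounts and paths are omitted here):

  docker run -d --name storagenode \
    -e STORAGE="13.5TB" \
    ... \
    storjlabs/storagenode:latest

The storage.allocated-disk-space value in config.yaml only comes into play when nothing is passed on the command line.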

I know about the docker run cmd precedence. I guess I should have asked why those four items are uncommented instead of prefixed with #, when I was not the user who uncommented them.

This is good news. Since it’s not in the change log, I’m wondering if this issue was intentionally fixed or if it’s just a convenient side effect of something else they modified, perhaps that extra decimal place they added.

I’m observing an improvement here too, but it’s still rounding to the nearest 10GB in my case. It’s way more precise than before, but still rounding :wink:

I think it’s a great enhancement though :slight_smile:

There are lots of smaller improvements with every release that aren’t always part of the change log. I would probably have included this one, since several people have complained about the rounding issue on the forum, but overall it’s a very minor thing.

Yes it still seems to be rounding. But honestly if you care about being more precise than 10GB intervals, you’re probably either planning to significantly overfill your HDD or have way too little space to share to be profitable.

And I agree with you. I was just nitpicking because you said “it seems to be exact” :grin:

I think it’s now perfect as it is :+1:

By the way, the only reasons I configured my node with 894GB are:

  • my disk is a 984GB disk (initially 1TB, but some corrupted sectors were isolated long ago, and no new corrupted sectors have appeared so far…).
  • because of the way I calculate the 10% overhead to leave on disks, as suggested by StorjLabs: I did 984/1.1, which gives ~894GB, and that’s what I allocated to the node (see the numbers below).
  • I did not know it was getting rounded up at the time.

That’s it :slight_smile:
I could probably safely go back to 900GB.
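
For what it’s worth, the two ways of reading “leave 10% free” give slightly different allocations for a 984GB disk:

  984 / 1.1 ≈ 894.5GB   (the 10% overhead is measured against the allocation)
  984 × 0.9 ≈ 885.6GB   (exactly 10% of the raw capacity is left free)

Either way, 894GB is in the right ballpark, and even 900GB would still leave roughly 8.5% of the disk free.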

1 Like

I have noticed that since the current release, the time for bandwidth rollups to finish has increased from five minutes to 20 minutes, and I also noticed an error. It self-corrected, but I’m curious about the extended time.