Use normal MariaDB/MySQL/…? Nooooo
Use slow SQLite with locks and DQ for that? Yeeees
I am happy to approve your PR. Go for it. Implement it.
Why would I do that? I don’t get paid for this, unlike the developers from StorjLabs. StorjLabs takes a 58.6% (!!!) commission on storage. (A Tardigrade user pays $10 => the SNO receives $4.14.)
The problem is that the DB locking can’t be fixed by just switching the database. If you are not able to implement at least a demo to prove your point, I see no point in talking about this topic. You are complaining. I get that. My response is to motivate you to prove it with a demo implementation. If you don’t want to, then maybe someone else would like to do it. The point is that a demo implementation would be much more productive than just complaining.
What is the reason for the db locking (as it applies to the node)? Not enough IO performance? Something else?
If a node can get suspended/DQed for that, how can it be avoided?
To my knowledge we have 2 db locks: one on the serial number table and one on the order db. An order db lock means you are unable to store and later submit a few orders. Sure, you would lose a bit of money, but you don’t get suspended or DQed for that one. So let’s focus on the serial numbers. We had an internal discussion about whether we should just allow downloads even if the serial number table is locked. At that point the storage node would open the door to getting cheated by a malicious uplink that would like to download a lot but doesn’t want to pay for it: it could just submit the same order multiple times. The serial number check is protecting the storage node.
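To illustrate the idea (a minimal sketch only, not the actual storagenode code; the `used_serials` table layout and names here are made up for illustration), the trick is that the duplicate check and the insert happen as one atomic step:

```go
package main

import (
	"database/sql"
	"errors"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // assumed SQLite driver
)

var errSerialAlreadyUsed = errors.New("serial number already used")

// useSerial records a serial and fails if it was seen before. The UNIQUE
// constraint makes "check and insert" a single atomic step, so a replayed
// order can never slip through between a SELECT and a later INSERT.
func useSerial(db *sql.DB, satelliteID, serial string) error {
	res, err := db.Exec(
		`INSERT OR IGNORE INTO used_serials (satellite_id, serial) VALUES (?, ?)`,
		satelliteID, serial)
	if err != nil {
		return err
	}
	n, err := res.RowsAffected()
	if err != nil {
		return err
	}
	if n == 0 {
		return errSerialAlreadyUsed // row existed: this order was replayed
	}
	return nil
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE used_serials (
		satellite_id TEXT NOT NULL,
		serial       TEXT NOT NULL,
		UNIQUE (satellite_id, serial))`); err != nil {
		log.Fatal(err)
	}

	fmt.Println(useSerial(db, "sat1", "abc")) // <nil>: first use is fine
	fmt.Println(useSerial(db, "sat1", "abc")) // replay is rejected
}
```

Skipping that check while the table is locked would mean serving the same paid download many times while only being able to bill it once.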
The SQLite DB alone shouldn’t get locked. SQLite is performant and could handle this very easily. On my storage node I have seen the error while the used space calculation or garbage collection was running. Both are stressing my HDD, and in that case SQLite is unable to keep up with the pure number of transactions we currently have. Switching to any other DB that is still stored on the same HDD doesn’t really help.
The root cause of the stressed HDD is the piece size. A HDD can write hundreds of MB/s sequentially, but it has a limited number of IOPS, and the small piece size consumes a lot of IOPS. As a rough example: a drive that sustains 150 MB/s sequentially but only ~150 random IOPS can write one 150 MB file per second, yet only about 150 tiny pieces (plus their metadata updates) in that same second.
The serial number tracking is a problem for the satellite as well, and we are using Postgres there. We are currently researching sparse Merkle trees. That sounds like a better long-term solution.
My short-term solution to that problem is ZFS with an SSD write cache, but that is a different topic for another thread.
That’s a little disingenuous. It’s fine for a small number of queued writes, but when the database writes have to compete with data writes, because it was not possible to set a different path for the databases than for the storage, that small number becomes a very small number. Even the SQLite developers state that SQLite is not a good tool when there’s a need for concurrent writes. To quote Appropriate Uses For SQLite:
We are questioning exactly that. Why do we need concurrent writes in the first place? Let’s fix that and stay with SQLite.
Sure! All I’m saying is that stating that SQLite could handle that very easily is the wrong line of thought. You’re not going to modify SQLite. The storage node should handle it very easily so that SQLite doesn’t have to.
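One way to do that (just a sketch of the pattern, not how storagenode actually does it; the queue size and names are made up) is to funnel every write through a single goroutine, so SQLite never sees two writers at once:

```go
package main

import (
	"fmt"
	"sync"
)

// writeRequest is one queued database write plus a channel for its result.
type writeRequest struct {
	query string
	done  chan error
}

// singleWriter funnels all writes through one goroutine, so SQLite only
// ever sees a single writer and never has to report "database is locked".
type singleWriter struct {
	requests chan writeRequest
	wg       sync.WaitGroup
}

func newSingleWriter() *singleWriter {
	w := &singleWriter{requests: make(chan writeRequest, 128)}
	w.wg.Add(1)
	go func() {
		defer w.wg.Done()
		for req := range w.requests {
			// In a real node this would be db.Exec(req.query, ...).
			fmt.Println("executing:", req.query)
			req.done <- nil
		}
	}()
	return w
}

// Exec queues a write and blocks until the writer goroutine has run it.
func (w *singleWriter) Exec(query string) error {
	done := make(chan error, 1)
	w.requests <- writeRequest{query: query, done: done}
	return <-done
}

func (w *singleWriter) Close() {
	close(w.requests)
	w.wg.Wait()
}

func main() {
	w := newSingleWriter()
	_ = w.Exec(`INSERT INTO orders (id) VALUES (1)`)
	w.Close()
}
```

Combined with WAL journal mode and a busy_timeout, SQLite’s single-writer limitation stops being visible to the rest of the process.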
It is my favorite meme now.
Guys, please change your attitude toward your work. I can’t imagine saying such a thing at my job. The client, of course, will lose a little money…
About production version…
You probably do not know this, but ZFS does not have a write cache in the form in which you are using the concept here. Nothing is read from the SLOG in normal operation, only during recovery.
Maybe I was not clear enough. In my logs (without an SSD write cache), I have seen maybe 1–2 lost orders per week. Let’s make it an extreme example and say 1–2 orders per day. Even if each order had the maximum size and was a download, we are talking about 2 × 2 MB × 30 days = 120 MB, which works out to $0.0024 at the $20/TB egress payout. I can’t imagine why that small amount should be a critical bug that needs to be escalated. I also don’t see why we would have to stop production over it. So yes, go ahead and use $0.0024 for your favorite meme now.
Can we please try to be a bit more productive here? I would like to help the community by describing what has changed as well as possible. Obviously I sometimes miss a few important details, and I am happy that the community is asking the right questions. What is the point of complaining in this thread? If you are unhappy with the changelog, should I just stop posting it? Is that your goal? Why are you reading this topic at all if you think this is all garbage?
I don’t see the point of using my SSD as a read cache, but sure, I am open to comparing different ZFS setups. Again, we should move that into a new topic. If someone is interested, please create a topic for it and ping me there.
The clients aren’t losing money, and SNOs aren’t customers; we are essentially contractors who assume the financial risk of running nodes and the potential fallout from bugs and other issues.
It definitely does. The ZFS SLOG is a great way to speed up synchronous writes and high volumes of small operations, such as those performed on a database. For large files, not so much, but small block modifications like SQL writes are immensely accelerated by a SLOG.
6 posts were split to a new topic: ZFS configurations and options
I will reply with your quote
My reaction was a perfectly logical response to the loss of an unknown amount of money.
You explained it, thanks. But that’s all that you wrote. What follows is your own fiction, and it offends me.
Can you explain how such records are displayed in the logs? I want to count the losses in my own logs.
And the next question, about the removed held amount: is this because the storage amount was calculated wrong? Do you expect this info to return in the next version?
It should be something like `Error - failed to add orders - orders.db database locked`. I guess searching for `orders.db` should find it even if my message was not accurate.
Sorry, I can’t follow you. I can see the held amount on my dashboard. The issue you are seeing might be the missing scrollbar? If you click into the payout table, you can scroll up and down with the keyboard or mouse. It should all be there.
I can’t check; the Docker image is not updated yet. But in the changelog I see this text:
storagenode/heldamount payments removed
That was a change in the storage node database. We don’t need to track the heldamount in a table. It is already part of the paystubs. On the storage node dashboard you should still have all the data.
I am using that, and I sometimes get the db lock error, though as far as I know only on the orders db. I guess moving the databases to NVMe would help.
Some time ago I noticed that the garbage collection was doing the equivalent of `cp old new && rm old` instead of `mv old new`. I have not checked it since, but what was the reason, and is it still doing that?
As far as I know, SQLite locks the entire database, but something like MySQL (InnoDB) locks only the relevant rows (and still allows selects on them). That would probably help somewhat, since the serial number table would mostly see inserts and selects (not updates), and no fields are auto-increment.
Hopefully that works better than the current approach.
Also, something like Redis should work here: if I understand correctly, losing part of the serials database would not result in lost customer data; it would just make it possible for a malicious uplink to repeat its downloads from my node (probably once, since after that the serial number would be back in the db).
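For the curious, here is roughly what that could look like (a sketch only, assuming the go-redis client; the key scheme, TTL, and Redis address are placeholders I made up, not anything from storagenode):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/go-redis/redis/v8" // assumed client library
)

var errSerialAlreadyUsed = errors.New("serial number already used")

// useSerial marks a serial as used. SETNX is atomic, so a replayed order
// loses the race. The TTL lets entries expire once the order would be too
// old to submit anyway.
func useSerial(ctx context.Context, rdb *redis.Client, satelliteID, serial string, ttl time.Duration) error {
	ok, err := rdb.SetNX(ctx, "serial:"+satelliteID+":"+serial, 1, ttl).Result()
	if err != nil {
		return err
	}
	if !ok {
		return errSerialAlreadyUsed
	}
	return nil
}

func main() {
	ctx := context.Background()
	// Assumes a Redis server listening on localhost:6379.
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	fmt.Println(useSerial(ctx, rdb, "sat1", "abc", time.Hour)) // <nil>: first use
	fmt.Println(useSerial(ctx, rdb, "sat1", "abc", time.Hour)) // replay rejected
}
```

Since the check never touches the HDD holding the pieces, it would not compete with data writes, and as noted above, losing the Redis contents only risks one replayed download per serial.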