"piecestore protocol: out of space(...)" - action required?

Helen, it's not a problem on my end; I have another node that is working just fine as we speak, while this one doesn't work. I think it's my duty to provide feedback, so we can find possible problems early and prevent them. I decided to write this because no one else here has mentioned the "out of space" case. I think what this node does is not optimal; if more nodes behave like that, it's a waste of resources. It should be serving egress as usual. This node is somehow broken because of me, since I allowed it to go on and offline too much. My other node behaves normally, and so do other people's nodes.

I see one problem there: if all the available space is allocated, the node could fill it up to almost zero remaining and then not have enough space left to write orders to the database.
Please make allocations with 10% held in reserve; this could save the node in edge cases.
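The suggestion above can be sketched as a small helper (purely hypothetical, not the actual storagenode logic): given the volume size, allocate only 90% of it so the databases always have room to write.

```python
# Hypothetical sketch of the "10% reserve" suggestion above.
# suggested_allocation() is an illustrative helper, not a real Storj API.

def suggested_allocation(total_bytes: int, reserve_fraction: float = 0.10) -> int:
    """Suggest an allocation that leaves a safety reserve on the volume."""
    return int(total_bytes * (1 - reserve_fraction))

# A 10 TB (decimal) volume would be allocated 9 TB:
print(suggested_allocation(10 * 10**12))   # 9000000000000

# A 600 GB partition would be allocated 540 GB:
print(suggested_allocation(600 * 10**9))   # 540000000000
```

This is also where the counter-argument below comes from: on a 10 TB drive a flat 10% reserve means a full 1 TB sitting unused.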

1 Like

I really think a 10% reserve is a bit too big for a large drive like 10 TB (a 1 TB reserve?), isn't it?

Even a 100 GB database is already very big, and will most probably degrade performance for most setups.

I have to agree regarding the big reserve on huge disks. However, we are still in beta, and the storagenode could overuse the allocation if we introduce a bug. Better to be safe, at least during the beta.
You can take this risk if you want and allocate more space.

1 Like

Another option would be to store the databases on a different volume than the data chunks. This is what I do, and it prevents me from worrying about database corruption resulting from a partial write that can’t be completed due to the disk being full.

I would not recommend such a setup for everyone. It has more points of failure than the standard setup.

Maybe. The databases are on the system volume in my case, though. If that disk dies, the whole box is going down anyway.

With a separate HDD for the data (including the databases), the node will survive a system disk failure.

1 Like

Indeed… but if the system volume fails, the downtime is likely going to be >5 hours, and so the node would get disqualified in production anyway.

In my case the system volume is on a RAID1 so I’m not worried about that. :slight_smile:

I don’t have any inside information, but I think that 5-hour number may change in the future. And even if it doesn’t, having the data and databases intact at least gives you a chance at recovering your node. I’d always go with that solution.

@Alexey I just started getting this same message, “piecestore protocol: out of space”, but I have plenty of space left. My docker container was started with “-e STORAGE=“510G””, and if you look at the partition I created for storj:

$ df -h /opt/storj/
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/bigstorage-storj  600G  477G  123G  80% /opt/storj 

There is plenty of space left. The dashboard says “-2.1 MB” available.

Could it be that Linux holds some space free so that the disk can’t get overfilled?

That doesn’t make sense. Linux doesn’t do things you don’t tell it to do; there is no such protection in Linux. You can literally fill up every single bit, and it won’t prevent you from doing so. I created a 600 GB partition and told Storj to only use 510 GB. When you ask Linux how much free space is on that partition, it responds “123G are free”. But if you ask Storj how much free space there is, Storj replies “-2.1 MB”. Storj should be seeing at least 30 GB more space.

This is caused by the difference between decimal and binary units. Your OS uses binary units (powers of 1024); Storj can do both: GB for decimal (which you used now), GiB for binary. The dashboard shows decimal units as well, though.

1 Like

It’s the remaining space in the allocation, not the free space on the disk.
And as @BrightSilence mentioned, we use decimal units everywhere, but Linux uses binary units.

1 Like

Since filling up my node, my bandwidth usage has been extremely low. Am I losing out on payments because of this? Does less network traffic mean less payout? Or is the bandwidth payment a much smaller portion compared to simply storing the data?

1 Like

Unfortunately bandwidth is the bulk of the payout. You can use the earnings calculator here to see how much you’ve earned in the past for each. Earnings calculator (Update 2019-12-20: v8.1.0 - Now with Uptime and Audit scores, Vetting progress and DQ indication!)

Keep in mind that this effect is mostly related to how current tests are done. When more actual customers start using the network you may see very different behavior. Just wait and see how it turns out.

1 Like

So, it seems like I can game the system here by just deleting data off disk, thereby making free space for more incoming traffic?

Edit: By the way, seems extremely unfair for most of the payout to be only for bandwidth. Seems like I’m providing more use as a consistent data storage place rather than a medium to move that data around.

1 Like

If you delete data your node will fail audits and be disqualified in no time. So no you can’t cheat the system that way.

It’s not unfair, it’s industry standard. Bandwidth is more expensive so you’re getting paid more for it. As I mentioned, this behavior is likely temporary.

