Disk space issue and upload failures

Hi there. I’ve been on holiday for the last couple of weeks, and it was a real bummer to come back and find so many issues happening at the same time. It’s worth mentioning that I’ve been running the node for several months now and haven’t changed any parameters in the last 3 months or so.

I tried restarting the node and noticed the “Disk space is less than requested” error. The node was started with -e STORAGE=“7TB”. You can see from the df -h output that the hard drive is big enough. Also, the dashboard says only 280MB are used, and when I left it was more than 3TB.

[screenshot: “Disk space is less than requested” error, with df -h showing 3.7TB used]

There are also plenty of failed uploads/downloads. I believe something went wrong with the hard drive and Storj started using it as if it were a clean drive. But the old files are still there (you can see the 3.7TB used in the df output). Does anybody have any clue what the issue might be? I’m also worried about my node’s reputation…

Thanks in advance.

Welcome to the forum @spanishpy!

Can you show your docker run command? Remove your email, ETH address and DDNS address first.

Hi @nerdatwork! Thanks for the warm welcome. The command I run is:

docker run --restart unless-stopped -p 28967:28967 \
    -e WALLET=“WALLET” \
    -e EMAIL=“EMAIL” \
    -e ADDRESS=“ADDRESS” \
    -e BANDWIDTH="6TB" \
    -e STORAGE="7TB" \
    --mount type=bind,source=“~/storj/identity/storagenode",destination=/app/identity \
    --mount type=bind,source="/mnt/data/Storj",destination=/app/config \
    --name storagenode storjlabs/storagenode:arm

docker run -d --restart unless-stopped -p 28967:28967 \
    -p 127.0.0.1:14002:14002 \
    -e WALLET=“0xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX” \
    -e EMAIL="user@example.com" \
    -e ADDRESS=“domain.ddns.net:28967” \
    -e BANDWIDTH=“20TB” \
    -e STORAGE=“2TB” \
    --mount type=bind,source=“”,destination=/app/identity \
    --mount type=bind,source=“”,destination=/app/config \
    --name storagenode storjlabs/storagenode:beta

You are using curly quotes. Copy the command from the documentation page and edit it with your credentials.

Also check your dashboard by visiting 127.0.0.1:14002
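
The dashboard only answers on that address if the port is published to the host. A minimal sketch of the extra flag, with the rest of the command kept exactly as you posted it:

docker run ... \
    -p 127.0.0.1:14002:14002 \
    ...

Then open http://127.0.0.1:14002 in a browser on the node itself.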

It was a copy-paste issue. I’m actually not using curly quotes. The dashboard is on the bottom right pane, right?

Also, I’m using an SBC, so I’m using the ARM image.

Perhaps the piece_spaced_used.db database was lost or corrupted.
I would recommend checking all databases on your drive: https://support.storj.io/hc/en-us/articles/360029309111-How-to-fix-a-database-disk-image-is-malformed-
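
A sketch of how you could loop that check over every database, assuming the databases live under /mnt/data/Storj/storage as elsewhere in this thread and that a working sqlite3 binary is available:

for db in /mnt/data/Storj/storage/*.db; do
    echo "=== $db ==="
    sqlite3 "$db" "PRAGMA integrity_check;"
done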

This will show you the web dashboard, which displays notifications if your node is disqualified on any satellite(s).

@Alexey When checking for errors, I run the command:
sudo docker run --rm -it --mount type=bind,source=/mnt/data/Storj/storage/piece_spaced_used.db,destination=/piece_spaced_used.db sstc/sqlite3 sqlite3 /piece_spaced_used.db "PRAGMA integrity_check;"
and I get: standard_init_linux.go:211: exec user process caused "exec format error". I thought it was a file permission issue, but it isn’t. Any ideas?

@nerdatwork I can’t access the dashboard at 127.0.0.1:14002 in the browser, even when I expose the port in the docker run command with -p 127.0.0.1:14002:14002.

Unfortunately, for ARM you should either install sqlite3 directly or check the databases with Docker on another PC.
The image sstc/sqlite3 does not have an ARM manifest.
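
You can confirm the mismatch yourself with standard Docker tooling; since the image is already pulled on your board, something like:

docker image inspect --format '{{.Architecture}}' sstc/sqlite3

On an ARM board this prints amd64 for that image, which is exactly what produces the “exec format error”.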

The third option: you can create a Dockerfile in the current directory with this content:

FROM alpine
RUN apk update && apk add sqlite

CMD ["sqlite3"]

Save it and build the image yourself:

docker build . -t sqlite3

Then you can use the image sqlite3 the same way as in the documentation, but the image will be sqlite3 instead of sstc/sqlite3.
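
For example, the integrity check from earlier in the thread would become (paths copied from your command above; only the image name changes):

sudo docker run --rm -it --mount type=bind,source=/mnt/data/Storj/storage/piece_spaced_used.db,destination=/piece_spaced_used.db sqlite3 sqlite3 /piece_spaced_used.db "PRAGMA integrity_check;"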

Ok so I built the Docker image and ran the commands. Everything seems OK?? :thinking:

Looks like it.
Please run this on a copy of piece_spaced_used.db:

sqlite3 piece_spaced_used.db
select sum(total)/1000/1000/1000 from piece_space_used where satellite_id is not null;
.exit
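
Making the copy might look like this, assuming the database path from earlier in the thread (stop the node first so the file isn’t being written mid-copy):

docker stop -t 300 storagenode
cp /mnt/data/Storj/storage/piece_spaced_used.db ~/piece_spaced_used.db
docker start storagenode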
This is what I get:

SQLite version 3.16.2 2017-01-06 16:32:41
Enter ".help" for usage hints.
sqlite> select sum(total)/1000/1000/1000 from piece_space_used where satellite_id is not null;
0

I see. This database is empty. That explains why your node is not aware of the previously used space.

What do you mean by empty? The data is still on the HD, as the df command shows in the first picture, where 3.7TB are used. I woke up 2 days ago and noticed the disk and computer had shut down (probably because of a power outage). Now the node can’t find the data. Could that power cut have corrupted the disk?

The database piece_spaced_used.db is empty, but it should contain the used-space value.
It could have been corrupted during the power outage.
The disk itself shouldn’t have been corrupted by that, though. The audit rate will show whether your node can still serve the data it is supposed to hold:
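
A sketch of how to pull those numbers, based on how the node API looked around that time (the /api/dashboard and /api/satellite/<id> endpoints and the .data.audit field are assumptions from that era’s storagenode, and jq needs to be installed):

for sat in $(curl -s 127.0.0.1:14002/api/dashboard | jq -r '.data.satellites[]'); do
    echo "\"$sat\""
    curl -s "127.0.0.1:14002/api/satellite/$sat" | jq '.data.audit'
done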

This is what I got:

"118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW"
{
  "totalCount": 76898,
  "successCount": 76577,
  "alpha": 11.97473878476754,
  "beta": 8.02526121523242,
  "score": 0.5987369392383782
}
"1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"
{
  "totalCount": 2341,
  "successCount": 2331,
  "alpha": 11.97473878476754,
  "beta": 8.02526121523242,
  "score": 0.5987369392383782
}
"121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"
{
  "totalCount": 39272,
  "successCount": 39175,
  "alpha": 11.97473878476754,
  "beta": 8.02526121523242,
  "score": 0.5987369392383782
}
"12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"
{
  "totalCount": 50246,
  "successCount": 49790,
  "alpha": 11.974738784767581,
  "beta": 8.02526121523242,
  "score": 0.598736939238379
}
"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"
{
  "totalCount": 80325,
  "successCount": 80099,
  "alpha": 11.97473878476754,
  "beta": 8.02526121523242,
  "score": 0.5987369392383782
}

Your score is below 0.6, which has disqualified you on those satellites. You would have to start over again with a new authorization token.

This is why I had earlier suggested visiting the web dashboard.
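
For what it’s worth, the score in that output looks like it is simply alpha / (alpha + beta), which you can verify with the numbers above:

echo 'scale=10; 11.97473878476754 / (11.97473878476754 + 8.02526121523242)' | bc

That works out to ≈0.5987369392, matching the reported score, so the node sits just under the 0.6 disqualification threshold on every satellite.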

That’s very upsetting… I’ve had my node up and running for months and now I’m disqualified… I’ll probably lose all the Storj held back for the first 9 months…

Computers/HDs losing power unexpectedly is probably very common. The database shouldn’t get corrupted because of this.

The db issue wouldn’t get you disqualified. At the moment, the only way to get disqualified is if your node is online but unable to respond correctly to audits. This means the data itself must have become inaccessible to the node or been deleted. I can’t yet figure out how that could have happened in your situation.
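
If logs from around the outage are still available, one way to look for the failed audits (GET_AUDIT as the log marker is an assumption based on typical storagenode logs of that era):

docker logs storagenode 2>&1 | grep GET_AUDIT | grep -i fail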

I assume the DB stores references to where the files are saved on the HD? If that’s correct, then if the DB loses the references, I could still have the data on the HD but it would make no difference. It would be like erasing the HD, right? It’s really a shame because my node was doing very well :pensive:
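
One way to sanity-check that assumption: the pieces themselves are stored as ordinary files under the storage directory, so you can look for them directly. A sketch, assuming the usual storagenode layout with a blobs subfolder under the mount from your docker run command:

ls /mnt/data/Storj/storage/blobs
du -sh /mnt/data/Storj/storage/blobs

If those files are still present but audits fail, the node’s access to the files is the problem rather than the database, which only caches the used-space total.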