Storage Node new setup - Wrong total space

Hello,

I created a new storage node that mounts its storage via NFS. The Storj dashboard sees only 2 TB, but the total space is 530 TB.
How can I fix this problem?
Do you have any suggestions?

If it's still the case, NFS is not a good way to go.
Direct-attached storage or iSCSI (if I remember right) is the way to go.

Actually, I'm attached via NFS at 40 Gbit/s, so I don't think that could be the problem.
If it really is a problem, would FC be fine?

It's not a bandwidth problem; it's more like a database communication protocol issue, I believe… or something along those lines, at least.

So be aware it will most likely die… Of course it's open source, so if you can fix the issue, I'm sure many of us on the forum and at Storj Labs would be happy to see a solution for running storage nodes on NFS.

Of course, not all networks are created equal. If you are running InfiniBand you might not be using Ethernet, which I suppose is part of the problem. I'm not sure if one can run TCP/IP without Ethernet though… but maybe NFS supports a different network protocol…

You should dig into the issues people have posted about running storage nodes on NFS before you start trusting it though… IMHO.

I fixed the problem; it was just my mistake in launching the Storj Docker container. Now it's all working :)
Let's see if NFS causes any problems.
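
For anyone else hitting the same thing: the allocated space is whatever you pass in the STORAGE variable when launching the container, and the dashboard reports that value as the total, not the size of the underlying mount. A sketch of the launch command, with placeholder values (see the official setup instructions for the full set of options):

# Hedged example — wallet, email, address, paths, and the STORAGE value are placeholders
docker run -d --restart unless-stopped --name storagenode \
  -p 28967:28967 -p 14002:14002 \
  -e WALLET="0xYourWalletAddress" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.external.address:28967" \
  -e STORAGE="530TB" \
  --mount type=bind,source=/path/to/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/nfs/storagenode,destination=/app/config \
  storjlabs/storagenode:latest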

It didn't work a year ago when I started, or it kinda sort of did…
But people were very adamant about it not working, and when I tried I got a database error that wasn't Storj-related, but came from SQL or whatever database it was running on… They have since changed the DB, which kinda makes me wonder if NFS works now… I'm not sure anyone, or many people, have tested it.

If it does work for the next few months, make a post on the forum about it so they can change the documentation to say that NFS works now.

Sure. As of now I've already got 115 MB used; let's see if it keeps working.
I see no errors so far, let's hope xD

What others refer to is that storage nodes use SQLite for some bookkeeping, and in some setups SQLite may corrupt data on networked file systems. As long as the mount point is in “hard” mode, the connection is stable, and latency is low, it should be fine. “soft” mount points, though, will corrupt data, and they'll do so pretty quickly.
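
On most Linux clients “hard” is already the default, but it doesn't hurt to be explicit. A minimal /etc/fstab sketch, assuming a hypothetical server and export path:

# nas.example.com and both paths below are placeholders.
# "hard" makes the client retry indefinitely instead of returning I/O errors
# to SQLite mid-transaction; never use "soft" for node data.
nas.example.com:/export/storj  /mnt/storj  nfs  hard,timeo=600,retrans=2,noatime  0  0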

When I started using Storj for the first time, I used NFS. I changed it when I was warned of the risks.
It seems to work, but it's just really not recommended.
Do it if you want, but keep in mind that it is not recommended: your node may fail one day or another, your data may be corrupted, and then you may be disqualified and lose all your held amount.
That’s up to you.

Let's move to FC then :)

What about the SQLite database, is it just a file?
If so, which file is it?

I saw that the folder that actually stores the files is located in:
storage/blobs

Could someone with a lot of space used run a “du -h” inside the main Storj folder and confirm whether the blobs folder is the only one taking up significant space (i.e. that the other folders don't grow much over time)?

If so, maybe a workaround would be to create a symbolic link pointing the blobs folder at an NFS share, as sketched below.
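
Something along these lines would show where the space goes and set up such a link; all paths here are placeholders, and this workaround is untested, so stop the node first:

# Per-directory usage, one level deep, inside the node's storage folder
du -h --max-depth=1 /mnt/local/storagenode/storage

# Hypothetical workaround: keep everything local except blobs
docker stop -t 300 storagenode
mv /mnt/local/storagenode/storage/blobs /mnt/nfs/storj-blobs
ln -s /mnt/nfs/storj-blobs /mnt/local/storagenode/storage/blobs
docker start storagenode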

You're better off moving the dbs and orders directories to something local, specifying that in config.yaml, and then keeping your blobs on NFS. That's a slightly better solution than having the whole thing on NFS. I'm on my phone right now, but I can return later with additional details if this is of interest. A rough sketch follows.
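
The databases can be relocated with the storage2.database-dir option in config.yaml; I believe newer versions also expose a path for the orders files, but verify the exact key in your own config.yaml. A sketch with placeholder paths:

# config.yaml — both paths below are placeholders
# Keep the latency-sensitive SQLite databases on a local disk:
storage2.database-dir: "/mnt/local/storj-dbs"
# If your version exposes a path for the orders files (verify the exact
# key in your own config.yaml — I believe it is storage2.orders.path):
# storage2.orders.path: "/mnt/local/storj-orders"

Stop the node and copy the existing .db files into the new directory before restarting; otherwise the node will create fresh, empty databases there.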

Hello @f14stelt ,
Welcome to the forum!

Network-attached drives are not supported and not recommended. They could work, but will die sooner or later. The problem is related to the file locking implementation: on Linux it's done in the wrong way even for NFS (I'm not talking about the SMB implementation, which is even worse), so you will have a lot of issues, starting with corrupted SQLite databases, unfinished uploads, and overuse of your RAM, and ending with lost files.
You can take a look at these topics: Topics tagged nfs, Topics tagged smb
The only working protocol is iSCSI, but even then your node would lose races for pieces against nodes with locally attached drives due to the extra latency. And higher RAM usage will be your companion anyway.

Surprisingly, on Windows SMB works better, but only if the SMB server is Windows too; however, all the other problems with latency, potential loss of files, and high RAM usage will still be there. The NFS port on Windows works as badly as on Linux, though (maybe because the NFS server is usually Linux; we have no reports from users who tried a Windows NFS server).
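
For completeness, attaching an iSCSI LUN on Linux with open-iscsi looks roughly like this; the portal IP and IQN below are placeholders:

# Discover targets exposed by the portal
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the discovered target
sudo iscsiadm -m node -T iqn.2000-01.com.example:storj -p 192.168.1.50 --login

# The LUN then appears as a regular block device (e.g. /dev/sdb),
# so it gets a local file system and behaves like a local disk:
sudo mkfs.ext4 /dev/sdb
sudo mount /dev/sdb /mnt/storj

That is also why locking works over iSCSI: the file system (and its locks) lives on the client, and only block I/O crosses the network.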

Hello @f14stelt and @Alexey,

This may not be the most suitable place, but having run a few nodes on iSCSI storage for the past 6 months, here is a little feedback.

TLDR: In my experience, the performance loss from using an iSCSI mount is negligible.

Some background:

  • node-3, 21 months old, running on an iSCSI share for the past 6 months. The iSCSI share is hosted on a 2 TB SAS hard drive, and the link between the Storj node and the iSCSI target is a cheap 1 Gbit/s Ethernet NIC;
  • node-4, 15 months old, running on a local 1 TB SATA hard drive since the beginning;
  • node-1, same configuration as node-3, except that it runs behind a proxy and therefore its upload statistics carry an additional bias.

Output of latest successrate script for node-3 (running on iSCSI):

========== AUDIT ============== 
Critically failed:     0 
Critical Fail Rate:    0.000%
Recoverable failed:    0 
Recoverable Fail Rate: 0.000%
Successful:            919 
Success Rate:          100.000%
========== DOWNLOAD =========== 
Failed:                14 
Fail Rate:             0.089%
Canceled:              15 
Cancel Rate:           0.096%
Successful:            15643 
Success Rate:          99.815%
========== UPLOAD ============= 
Rejected:              0 
Acceptance Rate:       100.000%
---------- accepted ----------- 
Failed:                5 
Fail Rate:             0.048%
Canceled:              12 
Cancel Rate:           0.114%
Successful:            10509 
Success Rate:          99.839%
========== REPAIR DOWNLOAD ==== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            10787 
Success Rate:          100.000%
========== REPAIR UPLOAD ====== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            2491 
Success Rate:          100.000%
========== DELETE ============= 
Failed:                0 
Fail Rate:             0.000%
Successful:            3515 
Success Rate:          100.000%

Output of latest successrate script for node-4 (running on local drive):

========== AUDIT ============== 
Critically failed:     0 
Critical Fail Rate:    0.000%
Recoverable failed:    0 
Recoverable Fail Rate: 0.000%
Successful:            525 
Success Rate:          100.000%
========== DOWNLOAD =========== 
Failed:                4 
Fail Rate:             0.018%
Canceled:              3 
Cancel Rate:           0.013%
Successful:            22714 
Success Rate:          99.969%
========== UPLOAD ============= 
Rejected:              0 
Acceptance Rate:       100.000%
---------- accepted ----------- 
Failed:                2 
Fail Rate:             0.049%
Canceled:              3 
Cancel Rate:           0.073%
Successful:            4086 
Success Rate:          99.878%
========== REPAIR DOWNLOAD ==== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            11424 
Success Rate:          100.000%
========== REPAIR UPLOAD ====== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            968 
Success Rate:          100.000%
========== DELETE ============= 
Failed:                0 
Fail Rate:             0.000%
Successful:            1417 
Success Rate:          100.000%

Output of latest successrate script for node-1 (remember that these statistics should be treated with caution):

========== AUDIT ============== 
Critically failed:     0 
Critical Fail Rate:    0.000%
Recoverable failed:    0 
Recoverable Fail Rate: 0.000%
Successful:            814 
Success Rate:          100.000%
========== DOWNLOAD =========== 
Failed:                37 
Fail Rate:             0.426%
Canceled:              14 
Cancel Rate:           0.161%
Successful:            8642 
Success Rate:          99.413%
========== UPLOAD ============= 
Rejected:              0 
Acceptance Rate:       100.000%
---------- accepted ----------- 
Failed:                1 
Fail Rate:             0.019%
Canceled:              5 
Cancel Rate:           0.095%
Successful:            5268 
Success Rate:          99.886%
========== REPAIR DOWNLOAD ==== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            9835 
Success Rate:          100.000%
========== REPAIR UPLOAD ====== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            1429 
Success Rate:          100.000%
========== DELETE ============= 
Failed:                0 
Fail Rate:             0.000%
Successful:            2907 
Success Rate:          100.000%

Thank you for sharing your experience!
What about RAM usage? Is it a myth too?

I haven't monitored it closely, but I've never had a memory problem yet. The virtual machine, which hosts 4 nodes (3 of which are on iSCSI shares), has 8 GB of RAM and is currently using 600 MB of it.
I'll add the VM to my monitoring tool to get longer-term data, but I don't see any problems yet.

Yeah, generally from what I understand iSCSI works… I haven't tested it, though,
and don't plan to.

My nodes also seem to use about 200 MB on average when running in their own containers.

However, if I run several nodes in the same container, they use an average of about 60 MB each.

So the memory utilization sounds about like what I would expect, depending on the setup.
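
If anyone wants to check their own numbers, docker stats gives a quick per-container snapshot (the container name is whatever you used at launch):

# One-shot snapshot of CPU and memory usage for the container
docker stats --no-stream storagenode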

Really? I wonder if this will ever fill up.
