Yes, because your disk is slow to do both for some reason.
Is any node disk connected via USB 3.0?
Is one of them not full, or nearly empty? (Filled with 4-6 TB of 1 KB files? Any difference in content?)
All disks are internal, two nodes have a few free GB but will be full soon.
Oops, already full (200 GB was free two days ago).
Thanks for the information. I think the 2 GB used-RAM limit is something related to USB 3.0 or the mainboard.
More likely the motherboard, but it could also be related to the filesystem in use.
Make sure it's NTFS and not FAT, FAT32, or exFAT.
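To check quickly in PowerShell (the drive letter is just an example):

Get-Volume -DriveLetter D | Select-Object DriveLetter, FileSystem, SizeRemaining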
Even without compression? Isn't some ZIP system included in Windows? I wanted to get rid of the overhead of constantly seeking file information by putting the files into a database or some other container format (without compression), so it becomes more CMR-disk friendly.
E.g., all files trashed on one day could go into one or more combined files per node (maybe ~40 MB each), so the node deletes just one file instead of a million little ones, maybe also speeding up the filewalker?
I set my node to "full" status to test whether the error is still there, or at least until times are better and the speed problems are hopefully gone.
Will defragmenting the drive every day improve read and write access to the drive?
Right now I have the default defragmentation schedule; it runs every week.
@Vicente
I think it's a bad idea to enable defragmentation for the storagenode partition.
Disable defragmentation and indexing for the drive that stores Storj data.
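If you prefer doing it from an elevated PowerShell instead of the drive's Properties dialog, something like this should work; note that disabling the defrag task affects all drives, and the drive letter is just an example:

Disable-ScheduledTask -TaskPath "\Microsoft\Windows\Defrag\" -TaskName "ScheduledDefrag"
Get-CimInstance Win32_Volume -Filter "DriveLetter = 'D:'" | Set-CimInstance -Property @{ IndexingEnabled = $false }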
Thank you very much, I already disabled defragmentation and indexing.
I started the node 12 hours ago and the disk is at 100% usage. How long is it normal for it to search through all the pieces after a start?
The node has 8.8TB of data occupied.
It can take several days.
It will not speed up the process. Packing files into archives takes additional CPU and disk cycles, because you would store incoming pieces on disk first and then pack them. Since your node stores only one piece out of the 80 for a given segment, the probability that pieces packed together are downloaded at the same time is close to zero, so when customers download their data you would be unpacking these archives to disk in parallel and then uploading to the customers.
Moreover, multithreaded access to such a file likely will not work well on Windows.
Very complicated, slow, and fragile.
We tried storing pieces in a database in a previous version of our network, and we had a lot of problems with corrupted databases and lost data as a result, because nodes are not expected to run on enterprise-grade hardware with redundancy, backups, etc.
So, the filesystem works better.
However, if you would like to experiment with your node operating on one file, you can create a virtual disk, attach it, stop the node, move the data to this virtual disk, update your config file with the new disk/path, and start the node.
Please note: virtual disks are not re-attached to the system after a reboot, so you need to either use a Task Scheduler job to attach the virtual disk after reboot or do it manually, then start the node.
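A rough sketch in PowerShell (paths, size, and task name are examples, and New-VHD assumes the Hyper-V module is installed):

New-VHD -Path "D:\storj\pieces.vhdx" -SizeBytes 8TB -Dynamic
Mount-DiskImage -ImagePath "D:\storj\pieces.vhdx"

And a startup task to re-attach it after a reboot:

$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument '-Command Mount-DiskImage -ImagePath "D:\storj\pieces.vhdx"'
$trigger = New-ScheduledTaskTrigger -AtStartup
Register-ScheduledTask -TaskName "AttachStorjVhdx" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest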
I do not believe it will perform better; more likely the reverse.
Back to the case, then:
Since I set my node to full, I have had no timeouts (32 h).
So what is the problem with the ingress stream?
Do people with the error use
filestore.write-buffer-size 4 MiB
or
filestore.write-buffer-size 128 KiB
?
I set it to 4 MiB just as I set the node to full, so one or both are involved in the timeout error.
Maybe I should activate ingress again to test it?
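For reference, the setting lives in the node's config.yaml (the default Windows install path below is an assumption), and the service has to be restarted to pick it up:

filestore.write-buffer-size: 4.0 MiB   # in "C:\Program Files\Storj\Storage Node\config.yaml"

Restart-Service storagenode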
I'm thinking about how to move the DB files from the Storj drive to the internal SSD; maybe that gives the drive less work.
It definitely will help; on every one of my 80+ nodes the DBs are on SSD.
Any easy instructions for that? The main node program is already on the SSD.
My thought was to take the node offline, defrag the platter on a strong PC, and go back online.
But it won't work like that either (it would take days offline, plus losing data).
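The rough idea in PowerShell (paths are examples; storage2.database-dir is the option that tells the node where to keep its databases):

Stop-Service storagenode
New-Item -ItemType Directory -Path "S:\storj-dbs" -Force
Move-Item "D:\storagenode\storage\*.db" "S:\storj-dbs\"
# then add to config.yaml:  storage2.database-dir: S:\storj-dbs
Start-Service storagenode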
For the last two weeks I have gotten this email A LOT. When I check in on my node (the PC) with TeamViewer it is still reachable, but when I want to open the dashboard it says it's not reachable. I don't get why the node goes offline on its own; do others also experience this a lot lately?
To fix it I just reboot the system and everything is fine again. Can I fix this somehow?
This never happened for almost 10 months, but now it happens very often, and I didn't change anything on the node.
My guess is that the current "node version x.xx" is unstable? Maybe, I don't know, it's a guess. How do I fix this?
Thanks in advance for your help.
Hello @Pascoolism,
Welcome to the forum!
Please search logs for the FATAL error: How do I check my logs? - Storj Node Operator Docs
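For example, to pull only the FATAL lines from the default log location:

Select-String -Path "$env:ProgramFiles\Storj\Storage Node\storagenode.log" -Pattern "FATAL"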
Hello Alexey,
thanks
When I type the command into PowerShell like in the link you recommended (I'm on Windows), it does not stop; it keeps scrolling. I see "uploads, downloads, deletions" etc., more than 100 lines in 5 seconds, and I can't pick out any error message.
I have the exact same problem. I didn't have any issues for 17 months, but now I get this email twice per day and have to restart my node with "Start-Service storagenode" in PowerShell as Administrator to keep it running.
I'm going to run this command now and let it run until the node goes offline, to see what error occurs:

Get-Content "$env:ProgramFiles/Storj/Storage Node/storagenode.log" -Tail 20 -Wait
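To cut the noise, the same tail can be piped through a filter so only errors show (the pattern is just a suggestion):

Get-Content "$env:ProgramFiles/Storj/Storage Node/storagenode.log" -Tail 20 -Wait | Select-String "ERROR|FATAL"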