No, it’s only one. It just takes several messages to report what is going on, and the final one is FATAL.
Yes, it seems it really has a hardware issue. But luckily it is a write timeout.
Please try to increase it to 2m30s.
OK, if I have the next free time window (maybe next week) I will move the node to a much more powerful PC. The downside: it is connected wirelessly via WiFi-AX.
Does anybody know a good guide for moving to new Windows hardware?
I think it’s quite easy since the disk is USB 3.0, but how do I install the Storj service and transfer safely?
More powerful hardware is not required. We need to figure out what the best timeout for your setup is.
So, please just adjust the timeout, save the config and restart the node.
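For example, the line in config.yaml would look something like this (a sketch; `storage2.monitor.verify-dir-writable-timeout` is the write-check timeout parameter in recent storagenode versions — verify the exact name against your build’s `setup --help` output):

```yaml
# write-availability check timeout (default 1m0s); raise it for slow disks
storage2.monitor.verify-dir-writable-timeout: 2m30s
```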
But if you want you can use this guide:
But, as you said, since it’s an external drive, it’s easy to move - just make sure that your identity and orders are moved to the disk with the data (for the orders folder you also need to adjust the path with the storage2.orders.path: parameter).
You can check the default path to orders (PowerShell):
& 'C:\Program Files\Storj\Storage Node\storagenode.exe' setup --help | sls order
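If you relocate the orders folder next to the data, the override in config.yaml would look something like this (the drive letter and folder name are assumptions for illustration only):

```yaml
# example only - point this at the orders folder on the data disk
storage2.orders.path: D:\storagenode\orders
```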
However, since it’s an external drive, you will likely have problems on new hardware too.
So it’s better to adjust the timeout for the current setup.
So I tested with a 2 min timeout: no difference, the node crashed almost instantly.
Does it start another test after one minute, clogging itself, or does it wait for the first one?
Maybe the tests should not run while the filewalker, trash cleaning or garbage collector is running?
Also my headache intensifies, as I found that the dashboard used space (6 TB used / +0.2 TB trash) now differs from Explorer (~6.7 TB),
but TB/month seems correct.
I also tried to get the file information on trash and blobs while watching Task Manager:
no visible bottlenecks, but the drive only reads at 1 MB/s.
BUT: gathering file and size information takes AGES; I cancelled it after 30 min with only ~10% of the info and already over 100k files, counting slowly.
So I think it cannot finish the filewalker processes and clogs the drive in terms of IOPS.
Trash maybe doesn’t get deleted, so it clogs even more over time.
When I have more time, I will try the drive on my faster PC and compare the speeds, to see whether it is the drive or the mainboard/driver/BIOS.
Need to sleep now; the defragmentation I left running did not finish overnight. Then I will try to move the node to my faster PC (and probably mess it up there).
It checks at an interval, for reads:
for writes:
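The two checks are controlled by separate parameters in config.yaml (names as in the storagenode config; the defaults shown are from recent versions and may differ on your build):

```yaml
# readability check: runs every interval, fails the node after timeout
storage2.monitor.verify-dir-readable-interval: 1m0s
storage2.monitor.verify-dir-readable-timeout: 1m0s
# writability check
storage2.monitor.verify-dir-writable-interval: 5m0s
storage2.monitor.verify-dir-writable-timeout: 1m0s
```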
The checkers should work independently, because audits and repairs never stop, so it’s better to crash than to risk disqualification.
With what error? Does it say that it crashed after a 2m0s timeout, or still 1m0s?
And also - is the error for writes or for reads?
This is an indication that something is wrong with this setup - maybe a slow controller, or the disk itself.
Or perhaps you messed up the config file.
You can start it from an elevated PowerShell as a process, to see where it crashes:
& 'C:\Program Files\Storj\Storage Node\storagenode.exe' run --config-dir "C:\Program Files\Storj\Storage Node" --log.output=stderr
Please post the output.
PS C:\WINDOWS\system32> & ‘C:\Program Files\Storj\Storage Node\storagenode.exe’ run --config-dir “C:\Program Files\Storj\Storage Node" --log.output=stderr
2023/04/09 07:33:06 failed to check for file existence: CreateFile C:\Program Files\Storj\Storage Node” --log.output=stderr\config.yaml: Die Syntax für den Dateinamen, Verzeichnisnamen oder die Datenträgerbezeichnung ist falsch. (English: The filename, directory name, or volume label syntax is incorrect.)
PS C:\WINDOWS\system32>
Writes; same as with 1 min, but with 2 min. The last one was with 1 min again, after I re-edited my added lines with Notepad++ and set the log level to error.
Seen this once or twice; usually a PC restart will help.
@Alexey
Others seem to have problems after that error too.
Can this be done in the run command?
Oops, my bad - I had set the startup type in Windows to Disabled.
Now it’s Automatic again and working…
There has to be another limitation:
Like max 2 GB RAM for 32-bit programs.
USB has no drive cache activated, compared to internal drives, so more RAM is used.
Or the max open files, which is 512 (or 8k if set manually). But these limits are easily reached with 100k+ files in blobs/trash.
It’s running again - I had set it to Disabled for testing the drive on my other PC, so my bad.
A little bug in Windows let me press the “Start” button even though it was already set to Disabled, causing the node not to boot up. Thanks, Microsoft, for trolling.
Yeah, maybe too low IOPS, I guess. Reading the folder size with many files took ages on my fast PC too, but during defragmentation the drive speed was very normal: ca. 20 MB/s read plus 20 MB/s write simultaneously, for 8 h without failing.
The drive’s bandwidth on the node is normal too.
It’s because it has to seek to maybe millions of ~1 KB files all over the platter…
Then it’s perhaps related to the disk itself. My WD Red 5400 rpm disks don’t have this issue; however, they are full.
Seems your config.yaml file isn’t compatible with the YAML format.
You need to check it and fix the issue: https://www.yamllint.com/
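A common mistake when hand-editing config.yaml is a missing space after the colon or a stray tab. A valid line looks like this (the timeout value is just an example):

```yaml
# valid: exactly one space after the colon, no leading tab
storage2.monitor.verify-dir-writable-timeout: 2m0s
```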

same as with 1 min, but with 2 min; the last one with 1 min again
This means that you did not save the changes. With Notepad++ you need to explicitly save them (menu File → Save) and then restart the node.
Yes, you need to add `--` before the parameter and append it after the command shown in the image.
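The full command would look something like this (a sketch: the parameter is the write-timeout override discussed above, and the paths assume the default install location):

```
& 'C:\Program Files\Storj\Storage Node\storagenode.exe' run --config-dir "C:\Program Files\Storj\Storage Node" --storage2.monitor.verify-dir-writable-timeout=2m30s --log.output=stderr
```

Note that parameters passed on the command line override the values in config.yaml for that run only.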

yeah maybe too low IOPS I guess, reading folder size with many files did take ages on my fast PC
I suspect an SMR drive here…