"Error response from daemon: invalid mount config for type" in Docker

The bad mount errors appeared earlier after restarting the PC - that is the problem that is cured by Alexey's method:

This time that doesn't work. I've reinstalled Windows on the disk (this didn't help), then did a clean install of Windows with a full format of the Windows directory (this let me get further, but one node runs ONLINE with uptime counting, while the 2nd is OFFLINE with uptime counting).

The common error message from the logs:
2020-12-31T06:43:50.283Z ERROR orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB failed to archive orders {“error”: “order: ordersfile: rename config/orders/unsent/unsent-orders-12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB-1609228800000000000.v1 config/orders/archive/archived-orders-12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB-1609228800000000000-1609397030279166900-ACCEPTED.v1: permission denied”, “errorVerbose”: “order: ordersfile: rename config/orders/unsent/unsent-orders-12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB-1609228800000000000.v1 config/orders/archive/archived-orders-12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB-1609228800000000000-1609397030279166900-ACCEPTED.v1: permission denied\n\tstorj.io/storj/storagenode/orders/ordersfile.MoveUnsent:143\n\tstorj.io/storj/storagenode/orders.(*FileStore).Archive:278\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrdersFromFileStore.func1:421\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

You should remove the OWNER-CREATOR group from the folder with data (or from the whole disk), add the SYSTEM user, change the owner recursively to SYSTEM, then grant it full rights recursively.
Then restart (or re-create) the storagenode.
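A minimal sketch of that permission fix with icacls from an elevated PowerShell (D:\storagenode is only a placeholder for your actual data location; the group the posts call OWNER-CREATOR is named CREATOR OWNER in Windows ACLs):

icacls "D:\storagenode" /setowner "NT AUTHORITY\SYSTEM" /T /C
icacls "D:\storagenode" /grant "NT AUTHORITY\SYSTEM:(OI)(CI)F" /T /C
icacls "D:\storagenode" /remove "CREATOR OWNER" /T /C

/T applies the change recursively and /C continues past files it cannot process.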

The OFFLINE status is more likely related to network problems.

Drives are shared automatically at the WSL2 level. All your drives will be automatically mounted into the /mnt folder inside the WSL2 distro, e.g. the N: disk will appear as /mnt/n.

If you want to. It's useful if you use more than one distro. Just make sure that your default distro is version 2:

wsl --list -v
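If the default distro still shows VERSION 1, it can be converted and the default WSL version set to 2 (the distro name Ubuntu is just an example):

wsl --set-version Ubuntu 2
wsl --set-default-version 2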

What kind of network problems?
The other node works correctly as a different process on the same PC, and different machines work too.

When the node is started I receive the following errors:

2021-01-03T04:52:50.305Z ERROR pieces:trash emptying trash failed {“error”: “pieces error: filestore error: context canceled”, “errorVerbose”: "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:150\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:309\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:359\n\tstorj.io/storj/storagenode/pieces.

2021-01-03T04:52:50.303Z ERROR contact:service ping satellite failed {“Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “attempts”: 1, “error”: “ping satellite error: rpc: dial tcp 34.68.0.237:7777: operation was canceled”, “errorVerbose”: “ping satellite error: rpc: dial tcp 34.68.0.237:7777: operation was canceled\n\tstorj.io/common/rpc.TCPConnector.DialContextUnencrypted:108\n\tstorj.io/common/rpc.TCPConnector.DialContext:72\n\tstorj.io/common/rpc.Dialer.dialEncryptedConn:175\n\tstorj.io/common/rpc.Dialer.DialNodeURL.func1:96\n\tstorj.io/common/rpc/rpcpool.(*Pool).Get:87\n\tstorj.io/common/rpc.Dialer.dialPool:141\n\tstorj.io/common/rpc.Dialer.DialNodeURL:95\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:124\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

2021-01-03T04:52:40.391Z ERROR servers unexpected shutdown of a runner {"name": "debug", "error": "debug: http: Server closed", "errorVerbose": "debug: http: Server closed\n\tstorj.io/private/debug.(*Server).Run.func2:108\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

Error: piecestore monitor: error verifying writability of storage directory: remove config/storage/write-test743037799: permission denied

2021-01-03T04:52:39.891Z ERROR nodestats:cache Get pricing-model/join date failed {“error”: “context canceled”}

2021-01-03T04:52:38.892Z WARN trust Unable to save list cache {“error”: “rename config/trust-cache.json392622120 config/trust-cache.json: permission denied; close config/trust-cache.json392622120: file already closed; remove config/trust-cache.json392622120: permission denied”, “errorVerbose”: “group:\n— group:\n— rename config/trust-cache.json392622120 config/trust-cache.json: permission denied\n\tstorj.io/common/fpath.AtomicWriteFile:39\n\tstorj.io/storj/storagenode/trust.SaveCacheData:115\n\tstorj.io/storj/storagenode/trust.(*Cache).Save:72\n\tstorj.io/storj/storagenode/trust.(*List).saveCache:130\n\tstorj.io/storj/storagenode/trust.(*List).fetchEntries:107\n\tstorj.io/storj/storagenode/trust.(*List).FetchURLs:49\n\tstorj.io/storj/storagenode/trust.(*Pool).fetchURLs:240\n\tstorj.io/storj/storagenode/trust.(*Pool).Refresh:177\n\tstorj.io/storj/storagenode.(*Peer).Run:781\n\tmain.cmdRun:218\n\tstorj.io/private/process.cleanup.func1.4:362\n\tstorj.io/private/process.cleanup.func1:380\n\tgithub.com/spf13/cobra.(*Command).execute:842\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:950\n\tgithub.com/spf13/cobra.(*Command).Execute:887\n\tstorj.io/private/process.ExecWithCustomConfig:88\n\tstorj.io/private/process.ExecCustomDebug:70\n\tmain.main:385\n\truntime.main:204\n--- close config/trust-cache.json392622120: file already closed\n— remove config/trust-cache.json392622120: permission denied”}

This means that your network doesn't allow contacting the satellite - something is blocking the traffic. One of the possible reasons: your port is closed and your node can't receive a response.
Please check the node against this checklist:
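A quick way to check whether the forwarded port is reachable is a TCP test from PowerShell, ideally run from a different network (the address and port below are placeholders for your external address and forwarded port):

Test-NetConnection -ComputerName your.external.address -Port 28968

TcpTestSucceeded should come back True if the port is open and the node is listening.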

This suggests that Docker does not have access to the storage. Please fix the permissions issue by removing OWNER-CREATOR recursively from the storage, adding the SYSTEM user, making it the owner recursively and granting it full rights recursively. Then stop and remove the container and run it again.
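A rough sketch of the restart step (the container name, paths and the trimmed-down option list are placeholders - keep the -p/-e options from your original run command; with the WSL2 engine the mount source can be either the Windows path when run from PowerShell or the /mnt/n path when run from inside the distro):

docker stop -t 300 storagenode2
docker rm storagenode2
docker run -d --restart unless-stopped --stop-timeout 300 `
  --mount type=bind,source="D:\storagenode\identity",destination=/app/identity `
  --mount type=bind,source="D:\storagenode\data",destination=/app/config `
  --name storagenode2 storjlabs/storagenode:latest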

Added SYSTEM;
Removed OWNER-CREATOR;
Owner: Administrators (with the "replace owner on subcontainers" checkbox);
The situation has not changed.
Maybe I need to erase all the users from the permissions and add them back?

After running the container again I get the following message:
[screenshot: 2021-01-05_17-22-52]

You should make SYSTEM the owner recursively too, not Administrators. And SYSTEM must have write, read, delete, etc., or Full control on all files in the storage folder.

Yes, this is expected and “normal” for WSL2. I have tested it; it's much faster than the SMB used by Docker Desktop with the Hyper-V engine, though.

Not needed; only changing the owner to SYSTEM and giving it full rights should be enough.
Please show the access rights for the trust-cache.json file:

{
  "entries": {
    "https://tardigrade.io/trusted-satellites": [
      {
        "SatelliteURL": {
          "id": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S",
          "host": "us-central-1.tardigrade.io",
          "port": 7777
        },
        "authoritative": true
      },
      {
        "SatelliteURL": {
          "id": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs",
          "host": "europe-west-1.tardigrade.io",
          "port": 7777
        },
        "authoritative": true
      },
      {
        "SatelliteURL": {
          "id": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6",
          "host": "asia-east-1.tardigrade.io",
          "port": 7777
        },
        "authoritative": true
      },
      {
        "SatelliteURL": {
          "id": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE",
          "host": "saltlake.tardigrade.io",
          "port": 7777
        },
        "authoritative": true
      },
      {
        "SatelliteURL": {
          "id": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB",
          "host": "europe-north-1.tardigrade.io",
          "port": 7777
        },
        "authoritative": true
      }
    ]
  }
}

There are also a lot of newly added trust-cache.json files.

Should I do something with them?

I think you can remove them.
I need to see the access rights of that file, not its content.
Please show the result of this command (PowerShell), replacing D:\trust-cache.json with your path to the trust-cache.json file:

(Get-Acl -Path D:\trust-cache.json).Access
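The owner, which matters here as well, can be shown the same way:

(Get-Acl -Path D:\trust-cache.json).Owner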

[screenshot: 2021-01-08_01-12-43]

Then it should be writeable. Please try to remove it and run the container.
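For example (the path and container name are placeholders for your own setup):

Remove-Item -Path D:\trust-cache.json
docker start storagenode2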

After the container starts, it goes into a restart cycle with these errors:

ERROR piecestore:cache error getting current used space: {“error”: “context canceled; context canceled; context canceled; context canceled; context canceled”, “errorVerbose”: “group:\n— context canceled\n— context canceled\n— context canceled\n— context canceled\n— context canceled”}

ERROR pieces:trash emptying trash failed {“error”: “pieces error: filestore error: context canceled”, “errorVerbose”: “pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:150\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:309\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:359\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

ERROR servers unexpected shutdown of a runner {“name”: “debug”, “error”: “debug: http: Server closed”, “errorVerbose”: “debug: http: Server closed\n\tstorj.io/private/debug.(*Server).Run.func2:108\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

Error: piecestore monitor: error verifying writability of storage directory: remove config/storage/write-test668474483: permission denied

Permissions of the disk and folder:

[screenshot: 2021-01-08_09-38-53]
[screenshot: 2021-01-08_09-39-19]
[screenshot: 2021-01-08_09-40-00]
[screenshot: 2021-01-08_09-40-24]

The owner is Administrators, but on the working node the settings are the same:

[screenshot: 2021-01-08_09-43-44]
[screenshot: 2021-01-08_09-44-20]

Added the “Authenticated Users” group to the users, as on the 1st (working) node;
The node stopped constantly restarting, but got these errors:

2021-01-08T07:10:46.680Z ERROR contact:service ping satellite failed {“Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “attempts”: 7, “error”: “ping satellite error: failed to dial storage node (ID: 2ND_NODE) at address 95.141.198.226:28968: rpc: EOF”, “errorVerbose”: “ping satellite error: failed to dial storage node (ID: 2ND_NODE) at address MY_EXT_IP:28968: rpc: EOF\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

You still have not changed the owner of the disk with data.

In Russian (translated):

Please change the owner of the disk and of all folders and files to SYSTEM. The Administrators group can be present there, but not as the owner.

OK, I'll do that, but the other node works correctly with the owner set to Administrators.
And it works.

This suggests that your external address is wrong and the satellite cannot answer the ping request to your node.
You should check your port forwarding rule, the firewall rule for the forwarded port, and your identity.
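For the identity part, one common sanity check is to count the certificates in the identity files; for a correctly signed identity, ca.cert should contain 2 BEGIN lines and identity.cert 3 (the path below is the default identity location and may differ on your setup):

(Select-String -Pattern "BEGIN" -Path "$env:APPDATA\Storj\Identity\storagenode\ca.cert").Count
(Select-String -Pattern "BEGIN" -Path "$env:APPDATA\Storj\Identity\storagenode\identity.cert").Count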

Is your storage configured for the first node exactly like this one?
Because right now we have a permissions issue. This often happens on Windows when you do something manually with the data folders - copy them from somewhere and such.

Sometimes Windows is very weird with the default settings: you have CREATOR-OWNER by default, which means that when you start any service and it creates files, the owner of these files will be SYSTEM. However, if you modify or create a folder yourself, your user will be the owner of that folder. Through inheritance this is propagated to all subfolders and files inside the parent folder.
Since you used a whole disk, not a folder, the permissions are propagated from the disk's permissions configuration.
With the default Windows configuration for the disk, and if you didn't change anything on the disk, the setup will create all the needed folder structure and the owner will be correct.
But if you created the folders manually, the owner will be your user (or Administrators in your case, because you seem to be logged in as an Administrator) and now the SYSTEM user cannot access the data because of the different owner.
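A quick way to see the actual owners on the data disk is to list them recursively (D:\ is just a placeholder for your data disk; on a large node this can take a while):

Get-ChildItem -Path D:\ -Recurse -Directory | Get-Acl | Select-Object Path, Owner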
By the way, did you set up the node? Storage Node - Storj Docs

Such a configuration often has permissions issues, especially if you re-installed Windows and the GUIDs of the accounts have changed - you end up with wrong permissions on the disk. Windows nowadays also updates itself to the next release via an automated re-install, so the same problem can appear after an update to the next version. But you would notice it only after a restart (the postponed installation is applied when you reboot).

There were no changes of disk letters or other settings;
it simply refused to start.

This build with 2 nodes has been working for about 4-5 months.

After adding “Authenticated Users” (same as on the working node),
I got past this error:

piecestore monitor: error verifying writability of storage directory: remove config/storage/write-test668474483: permission denied
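For reference, one possible way to grant that group modify rights recursively with icacls (D:\ is just a placeholder for the data disk; the SYSTEM ownership and Full control described above remain the main fix):

icacls "D:\" /grant "Authenticated Users:(OI)(CI)M" /T /C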