Yes, a Win10 VM on the same Synology is fine. I have tried reinstalling Docker and then recreating all nodes, but with no luck. The question is whether I can downgrade it… going to try.
Correct me if I am wrong, but AFAIK QUIC is only nice to have, not a must-have.
So it's not a problem if it's not working.
Yes, so far so good… still online, but it's strange that my Win10 machines work without issue while my Synology Docker nodes show as misconfigured. Hopefully Storj will not make this mandatory for a node to be marked as online and reliable.
I found that rebooting my Synology NAS fixed this issue.
This has been done many times, along with recreating the nodes, reinstalling Docker, restarting the nodes, recreating the router port forwarding rules, rebooting the router, etc.
Sure. It would be great if there was a better solution.
I would like to add that I tried the PING tool on https://storjnet.info/ and got results for one of my nodes which has the QUIC misconfiguration issue:
And the second is DIAL:
But it still shows as misconfigured.
I guess someone with Docker and/or Synology needs to try a simple experiment of delaying the storage node start in the container for a few seconds, to rule out the possibility of a race condition between the Docker network coming up and the service starting. (Restarting the container won't help; the storage node service as it is started inside the container needs to be delayed for testing, and subsequently made dependent on the network interface being ready.)
Well, is there any configuration hint on how to achieve this delay after recreating/restarting the node under Docker?
The container uses supervisord to launch the storage node:
You may want to try adding the delay via the technique described here: python - How to add a delay to supervised process in supervisor - linux - Stack Overflow (add another program that launches the storage node with a delay, and disable the storage node's autostart). This can be done by running a shell in the running container and editing the embedded script in the overlay.
Ideally, there should be a way to specify dependencies, but there does not seem to be any.
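Sketching the technique from that Stack Overflow answer: a second supervisord program sleeps and then starts the real one via supervisorctl. The section names and the storagenode command line below are assumptions about the container's layout, not the actual file contents:

```ini
; Hypothetical fragment of the container's supervisord config.
; The [program:storagenode] section is assumed to already exist.
[program:storagenode]
command=/app/storagenode run
autostart=false              ; stop supervisord from starting it immediately

[program:storagenode-delayed]
; sleep a few seconds so the Docker network can come up, then hand off
command=/bin/sh -c "sleep 10 && exec supervisorctl start storagenode"
autostart=true
autorestart=false
```

A fixed sleep is only good for the experiment; if it proves the race condition, the sleep should be replaced with a real readiness check.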
Actually, instead of a fixed delay, the script could keep trying to establish a connection to, say, 184.108.40.206 until it succeeds, thus implementing a dependency on the network being up.
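A minimal sketch of that idea, assuming a POSIX shell is available in the container; the host/port and the storagenode path in the commented example are placeholders, not the real values:

```shell
#!/bin/sh
# wait_for N CMD...: retry CMD up to N times, one second apart;
# returns 0 as soon as CMD succeeds, 1 if it never does.
wait_for() {
  tries=$1
  shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Hypothetical usage inside the container's start script: block until an
# outbound TCP connection works, then exec the node (placeholder command):
# wait_for 30 nc -z 184.108.40.206 443 && exec /app/storagenode run
```

The loop doubles as a dependency: if the network never comes up, the node simply never starts, which is easy to spot in the container logs.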
Hmm, I'm not experienced with Linux stuff, but it would be nice if someone could rewrite this in a DSM/Docker-like "format".
But anyway, it's funny that other people don't have this issue on Synology and DSM.
I'm starting to believe that the update on your Synology did not go well…
Yes, it looks like I have to try resetting the network as a first step, and reinstalling DSM on the Synology as a second.
After messing around with a Synology NAS and Docker on DSM, I was finally able to figure out why some people may not be able to get QUIC working, and a fix for that.
If you have a Synology NAS and have followed all the documented steps to get QUIC working, yet it is still misconfigured, this will likely fix your problem.
Bond/aggregate all of your Synology Ethernet ports, reduce your Ethernet LAN to one link on the NAS, or adjust the Synology vSwitch. Then review your Docker isolation chains in iptables over SSH on the Synology NAS.
QUIC will work.
Welcome to the forum!
Could you please describe in detail what you did to fix the issue? This would help other Community members understand how to fix it.
You need to make sure there are no DENY or DROP entries in your Docker isolation chain in iptables on the Synology NAS (via SSH). If you have run development setups in Docker on your NAS, there may be some stale entries.
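To surface those entries, one could filter iptables-save output for DROP/REJECT rules in the isolation chains. This is a sketch: chain names vary by Docker version (older releases use DOCKER-ISOLATION, newer ones DOCKER-ISOLATION-STAGE-1/2), and some DROP rules between bridge networks are normal Docker behavior; the goal is to spot stale ones from old setups:

```shell
# Print DROP/REJECT rules sitting in Docker's isolation chains.
# On the NAS, as root over SSH:  iptables-save | check_isolation
check_isolation() {
  grep '^-A DOCKER-ISOLATION' | grep -E -e '-j (DROP|REJECT)' || true
}
```

Any line it prints can then be compared against a freshly installed Docker to decide whether it is stale.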
A DS1621+, for instance, comes with a 4-port NIC. If you are utilizing all of these ports for throughput, the software switch / Synology vSwitch causes routing errors. If you bond all the NIC ports together, it should fix it.
I am not exactly sure why this is happening, and since I use my NAS for "critical" services in my home that run on the Synology virtualization server, I am unable to inspect the vSwitch further. I wish I could offer more insight than a janky fix, but I figured it was worth posting for anyone else trying to fix the issue. I found this thread searching for a fix.
I don't use more than one Ethernet port on my DS216+, running 2 nodes on 2 HDDs, but I had QUIC misconfigured when I tried using the same internal port 28967 for both. So my advice is to use different ports, internal and external, in the router and in docker run, for each node, for TCP, UDP, and the dashboard. Using this config I never had a problem; I tried all the combinations and got QUIC misconfigured with the others. Docker and DSM are up to date, and the Docker network runs in bridge mode (the default).
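As a sketch of that port layout (container names, address, and image tag are illustrative, and these are not complete run commands; only the port mappings matter here):

```shell
# Node 1: TCP+UDP on 28967, dashboard on 14002
docker run -d --name storagenode1 \
  -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
  -e ADDRESS="mydomain.example:28967" \
  ... storjlabs/storagenode:latest

# Node 2: TCP+UDP on 28968, dashboard on 14003 (distinct everywhere)
docker run -d --name storagenode2 \
  -p 28968:28968/tcp -p 28968:28968/udp -p 14003:14003 \
  -e ADDRESS="mydomain.example:28968" \
  ... storjlabs/storagenode:latest
```

With distinct internal ports like this, the second node also has to be told to listen on its own port inside the container, not just have it mapped.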
I must add to this the tcp_fastopen line in the kernel setup. I also set the kernel parameters, including TCP Fast Open, as described above, to run at boot. When I have time, I will update that post with screenshots, step by step.
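For reference, the kernel setting being discussed is usually applied like this (value 3 enables Fast Open for both client and server; using Task Scheduler to persist it is my assumption about a reasonable DSM setup, not the poster's exact method):

```shell
# Run as root over SSH on the NAS:
sysctl -w net.ipv4.tcp_fastopen=3

# DSM may not persist this across reboots; one option is a Task Scheduler
# "Triggered task" (boot-up, run as root) executing the same sysctl command.
```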
Then you also need to change the internal listening ports, either in your config.yaml or with command-line options, especially for the dashboard (console.address) and the listening address (server.address).
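A minimal sketch of such per-node overrides in config.yaml, assuming the standard storagenode keys; the ports are illustrative:

```yaml
# Second node's config.yaml overrides (example ports):
console.address: :14003   # web dashboard listen address
server.address: :28968    # node's TCP+UDP listen address
```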
I got Fast Open working last night. It made a decent improvement in my wins/uploads etc. My average bandwidth went up considerably.
Yes, I know it improves a lot, but we Synology users are cut out, because Synology plays by its own rules, and what works for the majority doesn't seem to work for Syno. My canceled uploads and downloads have been through the roof ever since tcp_fastopen was implemented and adopted by the majority.