This checklist:
Especially
ID
Status      ONLINE
Uptime      34h58m20s

            Available     Used        Egress        Ingress
Bandwidth   N/A           2.38 GB     181.34 MB     2.20 GB (since May 1)
Disk        80.00 TB      2.19 GB

Internal    127.0.0.1:7778
External    xx.xx.x.xx:28967
However, in the portal it still shows "QUIC Misconfigured" in red.
Is that correct, or should it show a different message?
Did you remember
-p 28967:28967/udp
and to open the UDP port on the router?
Yes, I just checked. A UDP checker tells me it is closed, though; I will review it tomorrow, because everything seems to be configured correctly.
If you configured it while the node was running, you need to restart the node to apply the changes - it checks UDP only on start.
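For example, a minimal restart (a sketch, assuming the container is named storagenode as in the default setup):

docker restart storagenode

The QUIC check runs again during startup, so the dashboard should update shortly after the restart.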
Congratz! I saw that you dedicated 81 TB. To be honest, I think you'll never use up the space on a single node. If I remember correctly, the maximum size of a node is 24 TB. My 3 TB node took 16 months to fill.
UDP has been configured from the beginning; I have not made any changes since.
I performed a check with the UDP Port Checker Online - Open Port tool against port 28967 and the internal QNAP IP, and it reports the port as closed.
However, in Docker it appears open and configured. I don't have any firewall configured on the QNAP.
Yes, that's right, 81 TB. I didn't see any limitation in the documentation in this regard; now I see it.
A question: can I have several nodes under the same identity on the same QNAP in different 24 TB containers? The idea would be to add storage on the same QNAP.
Please make sure that you allowed UDP port 28967 in the inbound rules of your firewall and that the docker run command has the -p 28967:28967/tcp and -p 28967:28967/udp parameters.
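A quick way to confirm what the container actually publishes (a sketch, assuming the container is named storagenode):

docker port storagenode

It should list both mappings, e.g. 28967/tcp -> 0.0.0.0:28967 and 28967/udp -> 0.0.0.0:28967; if the udp line is missing, the container was started without the UDP flag.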
There are no limitations; you can allocate any size. It is just that, with current usage, the node(s) cannot fill more than about 24 TB of used space in one location, because uploads and deletions are roughly equal at that point.
So it doesn't matter how many nodes you have in one location (/24 subnet of public IPs): they are all treated as one node for uploads, and as separate nodes for audits, customers' egress, repair egress traffic, and online checks.
I am sure they are open in Docker; this is the command that was used when it was created, and it was not modified afterwards, as it also shows:
docker run -d --restart unless-stopped --stop-timeout 300 \
-p 28967:28967/tcp \
-p 28967:28967/udp \
-p 14002:14002 \
I have tried the command line "telnet 192.168.100.50 28967" and PuTTY; in both cases it does not establish a connection (192.168.100.50 is the LAN IP).
It's not that there is a strict limit (@ligloo is mistaken here). However, nodes just store customer data, and so far there has simply not been enough customer data to fill more than a few tens of TB per node. What @ligloo referred to is the estimated amount of data we can reasonably expect now from customers, under a lot of assumptions about how customers actually use Storj and under the assumption that the node runs for many years. Also, as Storj gains more customers who want to store more data, this number may grow.
You can have multiple nodes collecting data at the same time, but if theyâre behind the same IP address, they will all be considered as one for the purposes of ingress.
Ok, Regards
I still need to fix the error in the previous post
The documentation lists a recommendation, not a hard limit. Others have already mentioned this; I just wanted to point you to this: Realistic earnings estimator
That estimator will give you the best estimate I can offer as to how quickly you can expect your node to fill up on a single /24 IP range. Technically the soft limit where deletes roughly match ingress is around 40 TB at the moment, but it's no longer so relevant, since getting close to that will take decades as the growth slows over time. The estimator shows an estimate for the first 10 years.
There is no text/HTTP interface on the node's port; you need to use dRPC requests, not telnet.
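If you just want to confirm the TCP port is reachable from the LAN, a sketch using netcat (the node will not speak a readable protocol, so a successful connection is all you can expect):

nc -vz 192.168.100.50 28967

For the UDP side, a plain connection test like this is not conclusive; the QUIC state shown on the dashboard after a node restart is the authoritative check.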
Please check or show your port forwarding rule(s) for this node on the router - they should include UDP port 28967.
This is the open port on the router; it is a FortiGate firewall.
Do you have a router and the FortiGate? Or do you have only a modem and the FortiGate?
The difference is where NAT happens and how you use the FortiGate.
I found this instruction: Technical Tip: Configure port forwarding using For... - Fortinet Community, and your example is different.
If you have a router before the FortiGate, do you have port forwarding rules on your router too?
I have a modem + FortiGate. The public IP is held by the FortiGate itself. I have more than 24 port rules in the FortiGate and they all work correctly. It was configured by FortiGate-certified engineers.
I have tried to run an nmap scan from the Internet against port 28967 and the result is that it is open...
nmap cannot reliably detect whether a UDP port is open or filtered; you can read the article in their documentation explaining why: UDP Scan (-sU) | Nmap Network Scanning
So this is not a reliable way to check UDP. The only reliable check is when the satellite tries to contact your node via UDP: if it succeeds, you will see QUIC in the OK state.
The only inconvenience is that this check is performed only on node start.
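For illustration, a sketch of such a scan against a host you control (the address is a placeholder):

nmap -sU -p 28967 your.public.ip.here

Without an application-level reply, nmap typically reports the port as "open|filtered": it cannot distinguish a firewall silently dropping the probe from a service that simply does not answer it, which is exactly why the result is not conclusive.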
If UDP doesn't work, then it's a network issue - something is blocking the non-standard UDP port. Maybe the modem, the router, the firewall, or your ISP.
I checked the port again with the Open Port Check Tool - Test Port Forwarding on Your Router, and now it tells me that it is open. Does that mean I have to restart the node's container on the QNAP, or should I configure it from scratch?
The yougetsignal checker tests the TCP port by default.
If the docker run command did not change, it should not matter for the UDP check - you can either restart the container or re-create it.
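If you prefer to re-create it rather than restart it, a sketch (assuming the container is named storagenode and you re-use your unchanged docker run command from above):

docker stop -t 300 storagenode
docker rm storagenode

Then run the same docker run command again; the identity and stored data are preserved as long as the same volumes/paths are mounted.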