Tutorial: tunneling through CGNAT with portmap.io & ssh

You may want to do this if you can’t get your node online, an open port checker tells you your port is closed, AND the IP address it shows is not the same as what your router’s WAN page says it is. In that case your ISP is using Carrier-Grade NAT (CGNAT).

Before you attempt to set up a tunnel, you may want to try to get your ISP to turn off CGNAT. When you contact them, especially on the phone, they may not understand what you need. Tell them you want to access a security camera from the outside. This worked for us twice, but then they refused, saying we must change to a business account with a fixed IP at three times the cost.

I believe that if you use a tunnel you do not need to set up any port forwarding or dynamic DNS, as SSH establishes the connection from the inside.

The following instructions are for Linux, but adapting them to Windows or macOS should not be too difficult.

  1. Create an account with portmap.io.
    Follow the instructions for creating an OpenVPN configuration file or an SSH key, using the ‘Generate’ button on the ‘Create new configuration’ form. I chose SSH because the client is already installed on my server. Save the key as instructed, in /root/.ssh/ .
    Change the file’s permissions:
    sudo chmod 600 /root/.ssh/yourfilenamehere

Create a mapping rule by selecting the configuration you just created and specifying the remote and local ports. Leave the host header and the ‘IP allowed to access’ fields blank.
Copy the ssh command line shown in red, using the little blue icon at the end, and paste it into your favourite text editor.

  2. In the text editor add the following two options:
    -o ExitOnForwardFailure=yes -o ServerAliveInterval=15

Your editor should now look similar to this:

ssh -i ~/.ssh/Beddhist.first.pem Beddhist.first@portmap.io -o ExitOnForwardFailure=yes -o ServerAliveInterval=15 -N -R XXXX:localhost:YY

Replace the ~ with /root . Verify that this is the correct path to the key file you saved.

Beddhist.first corresponds to your user ID on portmap.io, XXXX is the port that they assigned you, and YY is the port number you have configured on your node, usually 28967 or close to that.
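One sanity check I find useful (my own addition, not part of portmap.io’s instructions): ssh’s -G flag prints the resolved client configuration and exits without connecting, so you can verify that the options and the -R forwarding parse correctly before you run the real thing. The hostname and ports below are placeholders, not your real values.

```shell
# -G makes ssh print its resolved configuration without connecting,
# so a typo in any option fails loudly right here.
# 2222 and 28967 stand in for the XXXX/YY ports discussed above.
ssh -G -o ExitOnForwardFailure=yes -o ServerAliveInterval=15 \
    -R 2222:localhost:28967 example.com | grep -Ei 'serveralive|forward'
```

If any option is misspelled, ssh exits with an error instead of printing the configuration.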

  3. Testing your tunnel

Open two terminal windows on your server. Position one window so that the bottom few lines remain visible at all times.

Run this command:
tail -f /var/log/syslog

[Noob info: you will monitor your system log with this. It’s normal to see a few lines appearing here from time to time.]

In the second window become root with:
sudo su
Type your password at the prompt.

Start your node.

Open the open port checker page and enter the host name and port number that portmap.io gave you. The port will show as closed.

Copy the entire line starting with ssh -i … from your editor window, paste it into the terminal and press Enter. If you get any error message here, you need to fix that first. You will be asked whether to accept the host key. Answer yes (naturally…). You will not get your prompt back at this point, as you are running the ssh command interactively.

If you were successful you should not see any ssh error messages in your log window, and the open port checker should now show the port as open. If it doesn’t, you need to find out why and fix it before proceeding further.
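As an alternative to the web page, you can probe the port with a small bash helper; this is just a convenience of mine, not part of the portmap.io steps. HOST and PORT stand for the hostname and port they gave you, and the probe only proves anything if you run it from a machine outside your network.

```shell
# bash can open TCP connections through its /dev/tcp pseudo-device;
# the helper prints "open" if the connection succeeds, else "closed".
probe() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo open || echo closed
}
# Usage from an OUTSIDE machine: probe HOST PORT
probe 127.0.0.1 1   # a port that is almost certainly closed, for illustration
```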

  4. Automating it

Back in the root terminal press Ctrl-C to kill ssh.

Create the systemd unit file:
nano /etc/systemd/system/sshtunnel1.service
(Locally created units belong in /etc/systemd/system/; /lib/systemd/system/ is reserved for units installed by packages.)
(Note the ‘1’ at the end of the file name. I foresee running a 2nd node on my server soon.)

Paste this into the file:

[Unit]
Description=SSH tunnel to portmap.io for storjnode1
After=network.target

[Service]
ExecStart=

# Wait a few seconds between restarts, so rapid failures don't trip StartLimitInterval and leave the unit permanently failed
RestartSec=5
Restart=always

[Install]
WantedBy=multi-user.target

Copy the ssh command line from your editor and paste it after ExecStart= . There should be no space after the ‘=’. Use the absolute path to the binary (usually /usr/bin/ssh) rather than just ssh; older versions of systemd do not search $PATH.

Save and quit. [Noobs: Ctrl-O, Y, Ctrl-X]
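For illustration, a completed file might look like the sketch below, assuming the example command from earlier (your user name, key path and ports will differ; XXXX/YY are still the placeholders from above). Note the absolute /usr/bin/ssh path, since systemd does not go through a shell:

```ini
[Unit]
Description=SSH tunnel to portmap.io for storjnode1
After=network.target

[Service]
ExecStart=/usr/bin/ssh -i /root/.ssh/Beddhist.first.pem Beddhist.first@portmap.io -o ExitOnForwardFailure=yes -o ServerAliveInterval=15 -N -R XXXX:localhost:YY

# Wait a few seconds between restarts to avoid StartLimitInterval
RestartSec=5
Restart=always

[Install]
WantedBy=multi-user.target
```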

Reload systemd so it sees the new unit, then start it:
systemctl daemon-reload
systemctl start sshtunnel1

Your log window should show:

systemd[1]: Started SSH tunnel to portmap.io for storjnode1.

Check to make sure your node is online.

Last, enable it to start at boot time:

systemctl enable sshtunnel1

Isn’t this basically just a VPN? Doesn’t this mean it will add loads of latency and therefore reduce your chances of winning the race?

You could be right on both counts. If you have a better/faster solution then I’m happy to try that out. But it must be free, or storj is not economical for me.

The real challenge will be to run a 2nd node. I will have to make another account at portmap.io and I would be very surprised if that’s not against their T&Cs. 2 connections going to the same IP could get me banned.

As more and more people get online, the IPv4 space will get ever more crammed, meaning more people will find themselves on CGNAT. I think Storj should find a better solution for that.

A node through an SSH tunnel is not slower than a node directly on your home connection. Interestingly, my tunneled nodes often do even better than the node on the home connection :smiley:
So I wouldn’t be concerned about latency. (unless your host is really slow or has a bad connection or something. A normal VPS should be fine)

I don’t know how portmap.io compares to a VPS and I don’t know how to monitor the performance of the tunnel. Right now it’s a moot point, because the (weak) server is maxed out while I’m plotting another drive for Signum/Burst mining.

Keep in mind, your node now operates from the proxy’s WAN side, so you’re sharing bandwidth with others on that same /24. You could hop around until you’re on a /24 that doesn’t have any nodes, but that doesn’t mean someone won’t join at some point down the road.

In my case it’s all about consolidating services to reduce cost. I don’t have a need to bypass CGNAT currently, but i have a VPS for VPN, ISP aggregation, and load balancing. You could get a cheap $5 a month VPS to roll your own stuff.

Linode is currently my favourite. The $5 VPS is 1 Gb up, 40 Gb down (a real-world test was more like 8 Gb).

If I can believe the dashboard, my node has made $18 in almost a year. Now estimated at $8/month with 1.2TB filled.

Since Burstcoin rebranded to Signum and combined proof of stake with proof of capacity, my 20 TB miner makes about $0.60/day. No held amount, no DQ, no messing with ports and NAT. If it’s online I get paid.

The ssh tunnel has made my node more unreliable. When I was setting it up and running the ssh command manually it often took 2 or 3 attempts to start it. It kept barfing “broken pipe” and “connection closed by remote”. systemd now restarts it after 5 secs, when necessary. Sometimes either the node or ssh hang. Only a manual restart fixes that.

My ISP seems to reboot a piece of equipment 2 to 3 times a week, just after 3am. I guess the IP changes at this point, because the node went offline for 5 mins. That interval has now tripled. I have no idea why. That’s 3h downtime every month.

Add to that a culture, where the power authority or ISP cut service without warning to do some work and I’m wondering whether I am flogging the proverbial dead horse.

If I ever dare to go away for more than a day or two, eventually I will get DQ’d.

You can have 288h of downtime per month at the moment. So it’s not that problematic.


I just recovered from a 105h downtime due to a water leak damaging my ISP’s equipment in the basement and it happening on a Friday while building and property manager passing responsibility from one to another. When the service came back, my online scores were between 80 and 85%.

You can look into some SD-WAN solutions like Flexiwan, OpenMPTCProuter, or ZeroTier. You can aggregate multiple IPs for hot failover. Since you’re using a proxy anyway that would eliminate the risk of your IP changing and the downtime from a single ISP.

Thanks for all your replies. At least I won’t get dq’d any time soon, unless the policy is changed.

@KernelPanick I have no idea what these abbreviations and names are, but a 2nd ISP is out of the question, for cost reasons.

Update: last night my node was offline for over 1.5h. My guess is that sometimes the ssh process doesn’t detect that its connection has died and so doesn’t exit. The OS has no way of restarting it, if it doesn’t exit. I think a solution to this problem should be possible, but is beyond my expertise at this time. I will ask for help in a Linux forum.
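For what it’s worth, the ssh client only detects a dead peer through its keepalive probes: with ServerAliveInterval and ServerAliveCountMax both set, it gives up after roughly interval × count seconds of silence and exits, which lets systemd restart it. A sketch of the option pair (the values are illustrative, and -G merely prints the resolved configuration without connecting):

```shell
# Send a keepalive every 15 s and give up after 3 unanswered probes,
# so a dead tunnel is abandoned after ~45 s instead of hanging forever.
ssh -G -o ServerAliveInterval=15 -o ServerAliveCountMax=3 example.com \
    | grep -i '^serveralive'
```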

I haven’t tried it but maybe ‘autossh’ is the way.

Apparently, there is a multitude of autossh scripts out there. I’ve looked at two of them, one a docker container, but I don’t think they do anything that systemd is already doing for me.

However, I have just discovered that I copied a couple of ssh options, but actually failed to paste them into the command file. I have now corrected this and I’m hopeful that it will now quit within one minute of the link going down. Systemd should restart it 5 secs later. We shall see.

Now, back to working on the documentation in the first post. Apologies that it has been taking so long.

I am talking about this one: autossh(1): monitor/restart ssh sessions - Linux man page

[Unit]
Description="Port forwarding for storj"
After=network.target

[Service]
User=kevin
ExecStart=/usr/bin/ssh -R ip:14658:localhost:29901 root@server -o ServerAliveInterval=60 -o ExitOnForwardFailure=yes -o ConnectTimeout=10 -N
RestartSec=60
Restart=always

[Install]
WantedBy=multi-user.target

This never failed me. Can’t remember having changed anything on the host side.


That looks pretty close to what I have posted now. Thanks for sharing.

The autossh man page says you need 2 ports. portmap.io will give you only one for free.

Ok, I’m done. If anyone sees any errors or has suggestions for making this clearer/better let’s have them.

A little difference could make all the problems go away :smiley:

Indeed. I’m sure the problem was the missing option ServerAliveInterval. The default is 0 = disabled.
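Assuming no override in your ssh_config, you can confirm that default yourself: ssh -G prints the resolved client configuration without connecting (example.com is just a placeholder).

```shell
# With the option unset, the keepalive interval resolves to 0,
# i.e. disabled, which is why a dead connection can hang indefinitely.
ssh -G example.com | grep -i '^serveraliveinterval'
```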


@Beddhist normally you should have public IPv6 addresses. With those, there are also tools that do IPv4 → IPv6 translation for you.
Hopefully sometime soon we can work on getting native IPv6 support on the DCS satellites (which they do not have, so you still need a public IPv4 somehow). Once that is done, we can hopefully all avoid these “jump host” scenarios.

Hi Stefan.

I was wondering about IPv6. I just logged into my fibre-optic router (ZTE F612, owned by the ISP) and found it supports IPv6 and has options for tunnelling 6 through 4 and vice versa, but it’s not enabled. I suspect the ISP does what most people (including myself) have done with v6 so far: ignore it for as long as possible. It’s disabled on all of my equipment and my guess is that applies to most ISPs as well.

There seems to be quite a learning curve, judging by all the options and acronyms in the router’s UI. If I were to enable IPv6 on my server and routers, would they then all be visible from the outside? Or do I still have the option to use NAT in my own router?