I tested FileZilla and I must say I am not impressed

Because it doesn’t use http/https to transfer data.
For example, if you select FTP as the protocol, you will be forced to open an outgoing port for FTP, and also an incoming port range if you use active FTP.
The same goes for any other non-http/https protocol. Even http/https could be blocked for outgoing connections, with clients required to use a corporate proxy instead. The network configuration is different in every enterprise environment.
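To illustrate the corporate proxy case, here is a minimal Go sketch (not FileZilla’s or Storj’s actual code) of an HTTP client that honors a proxy set via the standard HTTPS_PROXY environment variable; the proxy address is a made-up example.

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Hypothetical corporate proxy; in practice the IT department would set HTTPS_PROXY.
	os.Setenv("HTTPS_PROXY", "http://proxy.corp.example:3128")

	// http.ProxyFromEnvironment makes the client tunnel requests through the proxy
	// instead of opening direct outgoing connections.
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyFromEnvironment},
	}

	resp, err := client.Get("https://example.com/")
	if err != nil {
		fmt.Println("request failed (proxy unreachable or blocked):", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status via proxy:", resp.Status)
}
```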

Today outgoing traffic is usually not blocked outright; it is sniffed by specialized scanners to detect malicious activity and block it, or to shape the traffic, plus many other measures, including lists of blocked sites, etc.

Or a bring-your-own-device scheme, where you install a corporate tool that lets the organization control your device. In that case access to the enterprise services must go via VPN, while all other traffic goes through your usual internet channel.
There are a lot of options.

However, if you prefer an old-school strict firewall, you are forced to maintain a lot of allow rules for specific protocols, such as Tardigrade.
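As a rough sketch of what that means in practice, something like the following could be used to check whether such an allow rule actually passes outbound traffic to a node; the hostnames are placeholders and 28967 is just the commonly used default port.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Hypothetical targets: a node on the common default port and one on a custom port
	// that a strict firewall would have to allow explicitly.
	targets := []string{
		"node.example.com:28967",
		"node.example.com:18300",
	}

	for _, addr := range targets {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("%s: blocked or unreachable (%v)\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: outbound connection allowed\n", addr)
	}
}
```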

I think this approach could work.


This does not address all the underlying factors, but I agree that it is very likely that additional ports cluster around the default one. No guarantee, however.
But I’ll try that next time I use FileZilla.

It’s a peer-to-peer protocol. The client doesn’t decide which port is needed; the storagenode only listens on the port it was set up with. On default setups this works perfectly fine. On strict firewall setups, you’re going to have to do some work within your own network to get it to work. Seems fair to me; it kind of comes with the territory for strict firewalls. I mean, try to make other peer-to-peer software work, like BitTorrent for example: you’ll run into the same issues.
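To make that concrete, here is a minimal sketch (not the actual storagenode code) of a service that simply binds to whatever port its operator configured; peers then dial whatever host:port the node advertises.

```go
package main

import (
	"flag"
	"fmt"
	"log"
	"net"
)

func main() {
	// The operator picks the port (28967 is only a common default, not a requirement).
	port := flag.Int("port", 28967, "TCP port this node listens on")
	flag.Parse()

	ln, err := net.Listen("tcp", fmt.Sprintf(":%d", *port))
	if err != nil {
		log.Fatalf("cannot listen on port %d: %v", *port, err)
	}
	log.Printf("listening on %s; peers connect to whatever host:port is advertised", ln.Addr())

	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Printf("accept error: %v", err)
			continue
		}
		// A real node would speak its own protocol here; this sketch just closes the connection.
		conn.Close()
	}
}
```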

And yet most things will often be able to find an open port on their own and then utilize that, or whatever… I’m not saying that something that doesn’t exist should be implemented, simply that if it could establish a connection through a firewall, then it should…

I don’t really blame Storj for not implementing stuff like that yet… but I’m trying to say that it would be highly advantageous if it just worked… adoption flows so much more easily when a piece of software just makes life easy.

Maybe I’ll go do some experiments, and maybe look at some code for once :smiley:

I’m not sure you’re thinking about this the right way. If a strict firewall is in place, no amount of auto configuration will make it work. That is the point of a strict firewall. If random software could just poke holes in it at will, then there isn’t much point to it being there in the first place.


Most networks allow outgoing SSH connections… thus, the firewall is fairly useless for preventing outgoing traffic. A reverse SSH tunnel to a $5/month IaaS instance completely bypasses the outgoing traffic blocking.
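For illustration only, here is a rough Go equivalent of that `ssh -R` trick, using the golang.org/x/crypto/ssh package; the VPS address, user, and password are placeholders, and a real setup would use key-based auth and proper host key checking.

```go
package main

import (
	"io"
	"log"
	"net"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder credentials; use ssh.PublicKeys and a real HostKeyCallback in practice.
	config := &ssh.ClientConfig{
		User:            "tunnel",
		Auth:            []ssh.AuthMethod{ssh.Password("example-only")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}

	// Outbound SSH connection from inside the firewalled network to the VPS.
	client, err := ssh.Dial("tcp", "vps.example.com:22", config)
	if err != nil {
		log.Fatalf("ssh dial failed: %v", err)
	}
	defer client.Close()

	// Ask the VPS to listen on a public port; connections arriving there are
	// carried back over the already-established outbound SSH connection.
	remote, err := client.Listen("tcp", "0.0.0.0:28967")
	if err != nil {
		log.Fatalf("remote listen failed: %v", err)
	}
	defer remote.Close()

	for {
		rc, err := remote.Accept()
		if err != nil {
			log.Printf("accept failed: %v", err)
			continue
		}
		go func(rc net.Conn) {
			defer rc.Close()
			// Forward to a service on the local machine behind the firewall.
			lc, err := net.Dial("tcp", "127.0.0.1:28967")
			if err != nil {
				log.Printf("local dial failed: %v", err)
				return
			}
			defer lc.Close()
			go io.Copy(lc, rc)
			io.Copy(rc, lc)
		}(rc)
	}
}
```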

If a given corporate firewall prevents outgoing SSH connections, then it’s really just for web browsing… but then there are web-based SSH clients as well, such as FireSSH.

If any outgoing connections are allowed at all, all outgoing traffic can be routed through them somehow. Thus, the entire point of limiting outgoing traffic to particular ports/services is rather moot; it’s kind of like security through obscurity. However, front-office corporate types like to pretend that something is really “locked down”…


@Alexey Any chance you could pull some numbers from the satellites on what percentage of nodes use the standard port +/- 5-10 ports?

Personally, I’ve set up 10+ nodes on non-standard ports in the 183xx range.

I would expect most firewalls today to pass outbound connections to standard application ports like HTTP/HTTPS/SSH… AND to any non-privileged port (1024-65535).

In my experience working with and at many enterprises, it’s a 50/50 split between the ancient default block-all-outbound doctrine and what I explained above. Most would still add a DNS layer of security to intercept all DNS queries into a controlled environment, where any block could be implemented.

I fully second the previous posts on obscurity and on suits thinking that default-block is secure. It is not.

If you required an application to connect outbound to arbitrary ports in a strict enterprise scenario, you would normally set up some sort of proxy in a separate zone and allow traffic to it on a per-client basis.


Not many posts here talk about security, so I want to leave my 5 cents.
I’m not an expert sysadmin, but I am one, managing a few non-critical servers and tens of individual workstations (and 2 Storj nodes).
Even though “security through obscurity” is considered moot and in no way “enough”, in my short experience it does wonders! Simple obscurity, such as using random ports for everything that’s configurable, will stop most automated attacks, which are the vast majority of attacks if you are not a famous company, a celebrity, or targeted for some other reason.

I’ve seen this scenario happening multiple times:

  • Some vulnerability is discovered in an online service;
  • One day later there’s a proof-of-concept exploit somewhere on Pastebin or a hackers’ forum;
  • Two days later there are online mass-scanners actively searching for vulnerable targets running on the default port.

Many times the manufacturer/developer released an update before the exploit was used. Sometimes they didn’t.
There are some ways for users to protect themselves against these:

  1. Disable external access to such services;
  2. Promptly install every security update, praying that the developers/manufacturer patch the vulnerability before it is exploited;
  3. Run an intelligent (heuristics-based?) firewall and hope the attackers are blocked;
  4. Change the default port (the only option available to most).

Storj is accepting external connections, and it is in no way impervious to vulnerabilities.
Some vulnerability in Storj/Tardigrade will be found someday, depending on attackers’ interest, and when it happens, default ports will be the first to be exploited by mass-scanning botnets.
A few days later, those botnets may learn how to query satellites for node:port information, but there will probably be an extra layer of protection there, such as requiring a valid Tardigrade account to query the satellites, or similar, which would discourage it.

Name any software providing internet-facing services with more than 1M users and older than 5 years; it has happened to them.

So, no:

  • Forcing SNOs to use a default port is not desirable.
  • Forcing the use of a range of tens of ports to allow multiple nodes isn’t desirable either, since it’s still fast to search them all (see the rough numbers after this list).
  • Define a range of at least 1000 ports and I would accept it.
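For a rough sense of scale (my own back-of-the-envelope numbers, assuming a scanner doing about one million probes per second, roughly the ballpark of internet-wide scanners), sweeping all of IPv4 takes on the order of an hour per port:

```go
package main

import "fmt"

func main() {
	const (
		ipv4Addresses = 1 << 32   // ~4.3 billion addresses
		probesPerSec  = 1_000_000 // assumed probe rate of a fast internet-wide scanner
	)

	for _, ports := range []int{1, 10, 1000} {
		probes := float64(ipv4Addresses) * float64(ports)
		hours := probes / probesPerSec / 3600
		fmt.Printf("%4d port(s): ~%.0f hours (~%.1f days) to sweep all of IPv4\n",
			ports, hours, hours/24)
	}
}
```

So a window of ten ports is still swept in well under a day, while a window of a thousand ports pushes the sweep into weeks.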

(The same goes for usernames on routers, etc. It is almost as important to change the default user’s username as it is to change its password.)


I fully agree.

However, the Tardigrade client port situation is for outgoing connections, not incoming connections.

Incoming connections need to be carefully monitored. Outgoing connections originate locally.

I also run several Internet-facing servers. Port scanners pick up lots of the services I run. However, that’s what those services require. Email servers require standard ports. Web servers require standard ports. SSH can be moved securely, but only through port knocking…

Reducing the number of open incoming ports dramatically decreases the attack surface. Using proxies when providing Internet-facing services further reduces the risk of compromise… However, these are server-side issues, not client-side issues.

In almost all cases, enabling SSH public-key auth… disabling SSH passwords… and applying updates in a timely manner results in decent security against 99.99% of the attacks I’ve seen in my logs over the last 15 years.

