I tested FileZilla and I must say I am not impressed

While I understand the reasoning, I don’t like it very much this way. I also believe that this required setup could prevent businesses from adopting Tardigrade storage.
It seems that for such client-server setups one of two things would be needed instead. The first is a gateway app that runs on the server and has all Tardigrade traffic routed through it. On the server this app could be whitelisted, since the traffic would then be application-bound. However, this app would need to be universal enough to handle any Tardigrade application: no matter what is installed on the client (FileZilla, Duplicati or whatever), it should be handled by the same gateway.

The other option would be to limit Tardigrade to a specific port, the default port. That port could then easily be opened on the server. For this to work, a Tardigrade uploader would need to be able to tell the satellite to select only nodes that have the default port open. I see this as the satellite’s responsibility, because it can make a better preselection to improve distribution across nodes. The huge problem I see here is data becoming inaccessible. This could happen when a SNO changes a node’s port, or when the repair service moves data that was accessible via the default port to a node that runs on a different port. In theory this could be prevented if the original upload port were retained as a node-selection criterion throughout the network.
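A rough sketch of that satellite-side preselection idea (function and field names are invented for illustration; this is not real Storj satellite code):

```python
# Hypothetical sketch: a satellite filtering upload candidates down to
# nodes listening on the default port, when the uplink asks for it.
DEFAULT_PORT = 28967  # the documented default storage-node port

def select_nodes(candidates, count, default_port_only=False):
    """Pick `count` candidate nodes, optionally restricted to the default port."""
    pool = [n for n in candidates
            if not default_port_only or n["port"] == DEFAULT_PORT]
    # A real satellite would also weight by reputation, free space, subnet, etc.
    return pool[:count]

nodes = [
    {"addr": "a.example", "port": 28967},
    {"addr": "b.example", "port": 28968},
    {"addr": "c.example", "port": 28967},
]
print(select_nodes(nodes, 2, default_port_only=True))
```

The trade-off discussed above shows up directly: the stricter the port filter, the smaller the candidate pool the satellite can distribute pieces across.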

The last idea I have is whether Tardigrade and nodes could run on two ports: one free port, and the default port as a fallback in case a client is required to upload and download via the standard port. If something like this is possible, then it seems up- and downloading would work under any circumstances, no matter whether SNOs change ports, repair moves files, or the client is restricted in which ports it can use.


I can see that being a problem for businesses…
Running on two ports seems difficult, as you can’t bind multiple nodes to one port.
What about running Storj on one standard port exclusively? Multiple nodes on a single machine would need some sort of “multiplexer” though…


I understand 2 things:

  1. Nodes on non-standard ports are useful because multiple connections can be made to the downloading or uploading client. This results in better speeds.
  2. If bound to one port only, this advantage might no longer exist.

Not sure what exactly you mean. For incoming connections to the node, the port does not matter; you can open 200 connections to the same standard port.
For outgoing connections from the client you would need one port per connection. But a client always connects to a single node with a single connection, because it only downloads one piece per connection (which is fine because the pieces are small; there is probably no performance gain in parallel downloads from a single node).
Non-standard ports don’t change any of this.
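The point above can be illustrated with a toy sketch (simulated, not real uplink code): the parallelism comes from fanning out across many nodes, one small piece each, regardless of which port each node listens on.

```python
# Toy illustration: the client fetches many pieces at once, one per node.
from concurrent.futures import ThreadPoolExecutor

def fetch_piece(node):
    # Stand-in for a network request; returns the piece this node holds.
    return node["piece"]

def download(nodes, needed):
    """Fetch all pieces in parallel and keep the first `needed` (a real
    client would cancel the slowest transfers once the erasure-coding
    threshold is reached)."""
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        results = list(pool.map(fetch_piece, nodes))
    return results[:needed]

nodes = [{"addr": f"node{i}.example", "port": 28967 + i, "piece": i}
         for i in range(8)]
print(download(nodes, needed=5))  # the ports differ; the throughput does not
```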


Maybe I got that wrong, but it was my impression that parallel up- and downloads to/from different nodes on different ports give a speed gain. I believe I have read something like that somewhere.


You are up-/downloading to/from different nodes in parallel all the time. It doesn’t matter whether they use a standard or non-standard port; the performance will be the same.

Ok. So the only advantage of a non-standard port is that a SNO is able to run multiple nodes on one IP? Or run a node even in case another application is already blocking that port.

Not only: DDoSing one well-known port is much simpler than when nodes use different ports.
With an attack on a known port you can shut down only part of the network. If every node used the same port, the whole network.
It would be an expensive attack anyway, since you would need to overload all nodes at once.


With a few uploads and downloads you can gather information about most ports used by nodes, and if you DDoS just the standard port you already take down the majority of the network, resulting in data being inaccessible. I don’t think it makes a big difference at the moment.


don’t the satellites coordinate everything? i would assume those would be the target for a DDoS rather than the entire network… beating thousands of computers and internet connections, and tens if not hundreds of ISPs,
seems like a pretty terrible plan…


But the satellites are in data centers and probably better secured.
The nodes are home stuff and probably an easy target for DDOS.

sure if one hates one particular node…

and in relation to the ports and firewall… shouldn’t one port be fine… i mean the exit port or port range doesn’t have to be the same as the destination port / port range… ofc if 98% of everything outgoing is blocked and cannot be opened, then the firewall needs to be “smart”, like using UPnP or whatever it’s called… tho ofc that has some pretty big security issues, so not sure if that would be a solution in an enterprise environment… but allowing all outgoing traffic to pass through the firewall might be just as much of an issue…

most consumer grade firewalls today are pretty good at figuring stuff out, even without port routing, ofc port routing helps a lot… but most established software will either define a port it needs or simply search for a hole in the firewall / firewalls and then use that…

at least to my knowledge; it’s been a long time since i was really knee-deep in this stuff, and the market is always changing.

UPnP is used to open incoming ports.
Nobody filters incoming traffic based on the upstream’s port - it will be random by default.
We are talking about outgoing traffic to the external port, so the other way around.
The uplink will establish connections to effectively random ports, whichever ones the SNOs decided to configure.
The firewall on the server is configured to block outgoing traffic to unknown ports.
This is the topic starter’s problem.
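A minimal model of the situation described above: the server’s egress firewall allows outbound connections only to an allow-list of destination ports (the specific list here is an assumed strict corporate policy), so uplink connections to arbitrary node ports are dropped.

```python
# Model of egress filtering: only an allow-list of destination ports passes.
ALLOWED_DST_PORTS = {80, 443}  # assumed strict policy, for illustration

def egress_allowed(dst_port):
    """Return True if the firewall would let an outgoing connection through."""
    return dst_port in ALLOWED_DST_PORTS

# An uplink talking to nodes on operator-chosen ports mostly gets blocked:
node_ports = [28967, 28968, 31000, 443]
print([p for p in node_ports if egress_allowed(p)])
```

This is why whitelisting by application (a gateway) or by a fixed well-known port are the two workarounds discussed earlier in the thread.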

but this is like a 20-30 year old issue, i cannot believe there isn’t a smart solution for that…
i mean if we imagine this is supposed to be used for like enterprise / prosumer type backup, one should think it would just work in a professional environment, not have all kinds of special requirements on firewall setups…

i mean to me this sounds like the kind of thing that might make experimenting people just move on because it doesn’t work… nobody wants to deal with troubleshooting stuff… unless they have to


I am by no means a networking expert, but wouldn’t it be just as sensible/secure to open up a small range of outgoing ports around the default 28967? I don’t have the numbers, but based on what I have seen on the forum, most SNOs who run more than one node just increment the port number by one from the default for each new node. Kind of makes sense from a human-logic perspective.

Let’s assume that if a SNO is running multiple nodes, most would be running x2, some would be running x3, fewer x4, even fewer x5. You could probably capture most of the network by opening the port range 28967-28977. So even though you have to open multiple ports, it’s only 11 out of some 65,000; it’s still a small attack surface.

ISPs typically block port 25… SMTP… to prevent spammers from sending out email.

If there’s a single port for the whole network, it could be as simple as blacklisting that port on most ISPs. Suddenly Storj is no longer operational.


for a little while i suppose… i would still say the satellites are the most obvious target for attacks atm; getting most ISPs to agree would require some wide-scale government stuff… or simply that they don’t like the bandwidth being used… but i don’t see that happening here… everything here is fiber and storj barely even pulls any real bandwidth… yet

also a bit off topic, even tho related to ports…

the argument i was trying to make is that there should be solutions that simply make stuff work… so people in, say, an enterprise environment don’t have to open ports, at least for basic operation… let it run using tunnels or something, and then repeatedly inform people that if they want optimal performance they should open ports…

ofc tardigrade is still a very new implementation in filezilla so maybe they will find some way around it…

but stuff just needs to work… else many people will use something else…


I second that.


The default firewall configuration is to allow any outgoing connection, so it just works out of the box.
But if you configure a strict firewall, be ready to open the needed ports to connect to services other than just http/https.


yeah i’m aware; i actually specialized in networks a few decades ago,

but for the internet to work there will need to be some ports open, something which most other software can make use of, by scanning for them and then utilizing already-opened ports…

i don’t see why one needs to get ports opened in a strict firewall to use filezilla or tardigrade; seems to me like they should just make use of existing open ports, to minimize how often people will have to get ports opened.

you can’t just call an admin and get them to poke whatever hole in the company firewall whenever one wants to; sure, some might… but the whole idea of having ports closed is to limit access, and if it’s just a short phone call away then security is essentially already breached.

the ports and programs need to be verified for their usage, not just opened when somebody asks…

so if something just works, then people will try it and use it… if it doesn’t then they will use something else…
