Stefan Testnet Satellite

The important information first: this is another unpaid testnet satellite. Please don’t connect your production node to it. Better to set up a dedicated testnet storage node.

Join Stefan Testnet Satellite
If you are already running a testnet storage node, please update your config like this:

# list of trust sources
storage2.trust.sources: 1GGZktUwmMKTwTWNcmGnFJ3n7rjE58QnNcRp98Y23MmbDnVoiU@satellite.qa.storj.io:7777,12ZQbQ8WWFEfKNE9dP78B1frhJ8PmyYmr8occLEf1mQ1ovgVWy@testnet.satellite.stefan-benten.de:7777

If you are not running a testnet storage node yet, please don’t forget the other necessary config changes from here: Please join our public test network

Motivation Testnet
The Stefan Testnet Satellite is a feasibility study for launching the Stefan Mainnet Satellite later this year. The Mainnet Satellite will be paid. We expect that this feasibility study will take several months. We expect to hit more than one issue along the way. Code contributions are more than welcome.

Final Goals Mainnet

  1. Running a community satellite for more decentralization.
  2. Trying to outperform all other satellites. Let’s have a friendly competition to improve overall performance (speed and cost).
  3. Implementing IPv6 support.

Features to Test

  1. Reed-Solomon settings that should work better for storage node operators. We are going to start with 16/20/40/50, which should result in bigger pieces on disk.
  2. A higher inline segment size for faster downloads. The other satellites are optimized for low cost, but the penalty is slower downloads. Let’s test out the opposite optimization. We are going to start with 1 MB inline segments.
  3. IPv6 support.
  4. Please let us know if you have any other suggestions.
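For a rough sense of what the 16/20/40/50 numbers mean for piece sizes, here is a small back-of-the-envelope sketch. It assumes a 64 MB maximum segment size and 29/35/80/110 as the comparison settings on the other satellites; both figures are my assumptions, not numbers from this announcement.

```python
# Back-of-the-envelope Reed-Solomon piece sizes (a sketch, not satellite code).
# Assumptions: 64 MB max segment size, and 29/35/80/110 as the comparison
# settings elsewhere; neither number comes from this post.

MB = 1 << 20

def piece_size(segment_bytes, required):
    # Each erasure-coded piece is segment_size / k, where k is the
    # minimum ("required") number of pieces needed to reconstruct.
    return segment_bytes / required

segment = 64 * MB
print(f"29/35/80/110 piece: {piece_size(segment, 29) / MB:.2f} MB")
print(f"16/20/40/50  piece: {piece_size(segment, 16) / MB:.2f} MB")
```

Fewer required pieces per segment means each node stores a larger piece, which is the “bigger pieces on disk” effect mentioned above.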
12 Likes

Done!

Looking forward to seeing the impact of the adjusted RS settings. Should be interesting!

Question about IPv6. I believe the satellite has taken over DNS resolution from the uplink and sends out IP addresses instead now to save the uplink from having to do this for every node. Wouldn’t that just mean that in a dual stack setup the satellite would choose the IPv4 address anyway?

Yes, currently the satellite only returns one address, and right now IPv4 is preferred.
However, it does correctly return IPv6 addresses for IPv6-only nodes.
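If I understand the behavior described here correctly, the selection amounts to something like the following sketch (the function name and structure are mine, not the satellite’s actual code):

```python
# Sketch of "one address per node, IPv4 preferred" -- my reading of the
# behavior described above, not actual satellite code.
import ipaddress

def pick_address(addresses):
    # Prefer an IPv4 address when the node is dual stack; an IPv6
    # address is only returned for IPv6-only nodes.
    v4 = [a for a in addresses if ipaddress.ip_address(a).version == 4]
    return v4[0] if v4 else addresses[0]

print(pick_address(["2001:db8::1", "203.0.113.7"]))  # dual stack -> 203.0.113.7
print(pick_address(["2001:db8::1"]))                 # IPv6-only -> 2001:db8::1
```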

The main reason for running this test early, without the full redo of the internal bookkeeping and node selection to support both IPs concurrently, is to see whether IPv6-only nodes make a difference.
My information so far shows that we are not able to take advantage of many FTTH connections because of IPv4 CG-NAT. I would expect that if nodes could be set up on those connections against this test satellite, overall performance should increase (granted the client is dual-stack capable).

2 Likes

It would definitely be good to have a way around CG-NAT. And it’s good to see your name back in the satellite list!

Has thought been given to data availability for IPv4 only clients when data gets repaired to IPv6 only nodes? Perhaps a new bucket restriction would be required to prevent that.

1 Like

Yeah, I have been thinking about that for some time as well. Potentially it could also be project-wide, to make sure that if you invite other people to your project it behaves the same across buckets :man_shrugging:.
Lots of options :smiley:

1 Like

I don’t know if that is sufficient, as any node could switch to IPv6 at any time (or be forced to switch by its ISP).
I was thinking of some kind of relay nodes that have both IPv6 and IPv4 enabled and could pull data from IPv6-only nodes and relay it to IPv4-only customers in case the data is not available normally.

1 Like

Repair could account for that. If a bucket or project is set to IPv4 only, pieces that are no longer available on IPv4 could be marked as unhealthy and count towards the repair threshold.
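That idea could be sketched like this. The names and the threshold are illustrative, borrowing the 16/20/40/50 settings from the opening post; this is not Storj’s actual repair checker:

```python
# Hypothetical health check for an IPv4-only bucket (illustrative only):
# count only pieces on IPv4-reachable nodes as healthy, and queue the
# segment for repair once the healthy count drops to the repair threshold.

REPAIR_THRESHOLD = 20  # the "repair" value in the 16/20/40/50 scheme

def needs_repair(piece_families, allowed=("ipv4",)):
    healthy = sum(1 for fam in piece_families if fam in allowed)
    return healthy <= REPAIR_THRESHOLD

# 18 pieces reachable over IPv4, 22 only over IPv6:
print(needs_repair(["ipv4"] * 18 + ["ipv6"] * 22))  # True -> repair
```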

If a customer has to preselect the IP version, I think it is quite difficult for them to access their data if they switch between IPv6 and IPv4. How does a customer who switched from IPv4 to IPv6 access their IPv4-only bucket?

Gateway MT could support both and still provide access. But if nodes are allowed to be IPv4-only or IPv6-only, there’s not really a way around it without costly repair if you want to access the data with uplink.

Having other nodes in the path to translate between IP versions would only slow down uploads and downloads, and somebody would have to pay for the bandwidth that uses as well.

But I find it highly unlikely that clients will be IPv6-only any time soon. CG-NAT is also not a problem for clients, as they don’t have to accept incoming connections.

As a customer I simply would not want to deal with such a question, whether my bucket shall be IPv4-only, IPv6-only, or both. As a customer I want to access my data anytime, anywhere, via any internet connection that I choose.

2 Likes

That would require all nodes to be dual stack, which I’m guessing less than 10% are right now. And by the way, right now the entire network is IPv4-only and it doesn’t seem to be a problem.

Dual stack means you have a full IPv4 address and a full IPv6 address, right?
OK, I have that on every node, so it would not be a problem.

But why all nodes? Only the relay nodes would need to be dual stack. All other nodes could decide whether they want to be IPv4-only, IPv6-only, or both. Of course, the more dual-stack nodes you have, the better.
Actually, you would only need that in case there are not enough pieces available in one net or the other. If a customer is IPv4-only and there are enough pieces on IPv4 nodes, then there is no problem and you don’t need to pull and relay data from IPv6 nodes.

Oh, Stefan Benten has arrived, but unfortunately not as a normal node :smiley:

Done.

1 Like

We do not need all nodes to be dual stack in the described ways either. We just need to make sure there are enough to construct a download.
BTW, IPv6-only networks are very rare, and the only barrier here is for nodes to be reachable publicly. It’s more an extension of the pool than a split :slight_smile:

Did anyone else have an issue with the change to the config?

I had started a testnet node a couple of weeks ago, but ultimately shut it down to repurpose the HDD for another project. Then today I went to spin up a new node on the same machine with a different HDD and a newly created identity that had never been used before. I went through the config file changes outlined on the main “join the testnet” page:

When I started the node, it didn’t seem to understand the setting about which satellites it should talk to. It started trying to ping only mainnet satellites, with a bunch of errors, then at one point got a couple of blobs from one of those satellites, and never tried to ping either testnet satellite. I stopped the container and tried the list of trust sources with and without a space between the comma and the Stefan address, since the original instructions have it with a space and the instructions here on this page have it without one (not really sure if that matters or not…).
Regardless, neither seemed to work each time I started the node again. I then decided to pull off the ,12ZQ… part and leave only the QA satellite. When I started it then, it at least tried to ping the QA satellite, but it was still trying to ping the mainnet ones too. Not quite sure what I messed up, since I spun up the last testnet node so easily when the instructions only had the QA satellite.
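On the whitespace question: I don’t know whether the storagenode config parser trims whitespace around the commas, but if it does a whitespace-tolerant split like the sketch below, both spellings would be equivalent (IDs shortened here; the full ones are in the config snippet at the top of the thread):

```python
# Whitespace-tolerant parsing of a comma-separated trust source list.
# This is a sketch of what a tolerant parser would do; I have not
# checked what the storagenode binary actually does with the space.

def parse_trust_sources(value):
    return [src.strip() for src in value.split(",") if src.strip()]

with_space = "1GGZ@satellite.qa.storj.io:7777, 12ZQ@testnet.satellite.stefan-benten.de:7777"
no_space   = "1GGZ@satellite.qa.storj.io:7777,12ZQ@testnet.satellite.stefan-benten.de:7777"
print(parse_trust_sources(with_space) == parse_trust_sources(no_space))  # True
```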

I just generated a new identity, and I’m going to clear out the folder contents on the machine and try again. Thought I’d ask to see if anyone had any thoughts as I continue to troubleshoot.

One last note: these issues are on an RPi 4 that I used to run a mainnet node on (since migrated to another machine), and I also ran the first testnet node on this RPi 4 a few weeks ago… perhaps I missed deleting some artifact file somewhere…

EDIT: I just went back and checked the node I was describing here, and it actually seemed to level out and started getting some blobs from the 1GG satellite… so perhaps I don’t need to wipe and restart. Will try adding the Stefan satellite back into the config and see if it changes the behavior…

EDIT 2: In the end I decided to do a fresh install of everything and reformat all of the disks on this RPi. I just got the new node spun up, and although I haven’t seen any data yet in the log, it doesn’t look like it tried to ping any of the mainnet satellites, so I think it’s good.