Stefan Testnet Satellite

Yeah, I have been thinking about that as well for some time now. Potentially it could also be project-wide, to make sure that if you invite other people to your project it behaves the same across buckets :man_shrugging: .
Lots of options :smiley:


I don’t know if that is sufficient, as any node could switch to IPv6 at any time (or be forced to switch by their ISP).
I was thinking of some kind of relay nodes, with both IPv6 and IPv4 enabled, that could pull data from IPv6-only nodes and relay it to IPv4-only customers in case the data is not otherwise available.


Repair could account for that. If a bucket or project is set to IPv4 only, pieces that are no longer available on IPv4 could be marked as unhealthy and count towards the repair threshold.
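To make the idea above concrete, here is a minimal sketch (not Storj's actual repair code) of how a repair checker could count only IPv4-reachable pieces as healthy for an IPv4-only bucket. The `piece` type and the threshold value are illustrative assumptions:

```go
package main

import "fmt"

// piece is a hypothetical stand-in for a stored erasure-coded piece;
// hasIPv4 marks whether its node is reachable over IPv4.
type piece struct {
	hasIPv4 bool
}

// healthyForIPv4 counts pieces that an IPv4-only client could fetch.
// Pieces held only on IPv6 nodes are treated as unhealthy.
func healthyForIPv4(pieces []piece) int {
	n := 0
	for _, p := range pieces {
		if p.hasIPv4 {
			n++
		}
	}
	return n
}

func main() {
	pieces := []piece{{true}, {true}, {false}, {true}}
	healthy := healthyForIPv4(pieces)
	repairThreshold := 3 // hypothetical threshold, not a real Storj value
	// Repair would trigger once the IPv4-healthy count drops to the threshold.
	fmt.Println(healthy, healthy <= repairThreshold)
}
```

The point of the sketch is that no new machinery is needed: the existing health count just has to be computed per address family.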

If a customer has to preselect the IP version, I think it becomes quite difficult for them to access their data if they switch between IPv6 and IPv4. How does a customer who switched from IPv4 to IPv6 access their IPv4-only bucket?

Gateway MT could support both and still provide access. But if nodes are allowed to be IPv4 only or IPv6 only, there’s not really a way around it without costly repair if you want to access the data with uplink.

Having other nodes in the path to translate between IP versions would only slow down uploads and downloads, and somebody would have to pay for the bandwidth that uses as well.

But I find it highly unlikely that clients will be IPv6-only any time soon. CGNAT is also not a problem for clients, as they don’t have to accept incoming connections.

As a customer I simply would not want to deal with such a question of whether my bucket should be IPv4-only, IPv6-only, or both. As a customer I want to access my data anytime, anywhere, via any internet connection I choose.


That would require all nodes to be dual stack. Which I’m guessing less than 10% are right now. And btw, right now the entire thing is IPv4 only and it doesn’t seem to be a problem.

Dual stack means you have a full IPv4 address and a full IPv6 address, right?
Ok, I have that on every node. So it would not be a problem.
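As a side note on what "dual stack" means in code terms, here is a small sketch that classifies an address literal as IPv4 or IPv6 using Go's standard `net` package. A real dual-stack check would resolve both A and AAAA records and dial each; this only shows the address-family distinction (the example addresses come from the documentation ranges, not from any real node):

```go
package main

import (
	"fmt"
	"net"
)

// isIPv4 reports whether an address literal is an IPv4 address
// (as opposed to IPv6 or an invalid string).
func isIPv4(addr string) bool {
	ip := net.ParseIP(addr)
	// To4 returns nil for IPv6 addresses that have no IPv4 form.
	return ip != nil && ip.To4() != nil
}

func main() {
	// A dual-stack node is reachable on both address families, e.g.:
	fmt.Println(isIPv4("203.0.113.10")) // IPv4 (TEST-NET-3 documentation range)
	fmt.Println(isIPv4("2001:db8::1"))  // IPv6 (documentation prefix)
}
```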

But why all nodes? Only the relay nodes would need to be dual-stack. All other nodes could decide whether they want to be IPv4-only, IPv6-only, or both. Of course, the more dual-stack nodes you have, the better.
Actually, you would only need that in case there are not enough pieces available in one network or the other. If a customer is IPv4-only and there are enough pieces on IPv4 nodes, then there is no problem and you don’t need to pull and relay data from IPv6 nodes.

Oh, Stefan Benten arrived, but unfortunately not to my normal node :smiley:



We do not need all nodes to be dual-stack in the ways described either. We just need to make sure there are enough to construct a download.
BTW, IPv6-only networks are very rare, and the only barrier here is for nodes to be reachable publicly. It’s more of an extension of the pool than a split :slight_smile:

Did anyone else have an issue with the change to the config?

I had started a testnet node a couple of weeks ago, but ultimately shut it down to repurpose the HDD for another project. Then today I went to spin up a new node on the same machine with a different HDD, using a newly created identity that had never been used before. I went through the config file changes outlined on the main “join the testnet” page:

And then when I started the node, it didn’t seem to understand the argument about which satellites it should talk to. It started trying to ping only mainnet satellites, with a bunch of errors, then at one point got a couple of blobs from one of those satellites, and didn’t try to ping either testnet satellite. I stopped the container and edited the config file a few times, trying the list of trust sources with and without a space between the comma and the stefan address, since the original instructions have it with a space and the instructions here on this page have it without one (not really sure if that matters or not…).
Regardless, neither seemed to work each time I started the node again. Then I decided to just pull off the ,12ZQ… part and leave only the QA satellite, at which point the node at least tried to ping the QA satellite… but it was still trying to ping the mainnet ones too. Not quite sure what I messed up, since I spun up the last testnet node so easily when the instructions only had the QA satellite.
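For reference, a sketch of what the trust-list setting might look like in `config.yaml`. I believe the relevant storagenode option is `storage2.trust.sources`; the satellite hostnames and ports below are placeholders, not real addresses (only the stefan satellite ID quoted elsewhere in this thread is real). Since the value is a comma-separated list, the safest form is to use no whitespace around the commas:

```yaml
# config.yaml -- hypothetical testnet trust configuration.
# <qa-satellite-id>, <qa-host>, and <stefan-host> are placeholders.
storage2.trust.sources: "<qa-satellite-id>@<qa-host>:7777,12ZQbQ8WWFEfKNE9dP78B1frhJ8PmyYmr8occLEf1mQ1ovgVWy@<stefan-host>:7777"
```

Whether a space after the comma is tolerated depends on how the list is parsed, so omitting it avoids the ambiguity entirely.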

I just generated a new identity, and I’m going to clear out the folder contents on the machine and try again. Thought I’d ask to see if anyone had any thoughts as I continue to troubleshoot.

One last note: these issues are on an RPi 4 that I previously used to run a mainnet node (since migrated to another machine), and I also ran my first testnet node on this RPi 4 a few weeks ago… perhaps I missed deleting some artifact file somewhere…

EDIT: I just went back and checked the node I was describing here, and it actually seemed to level out and started getting some blobs from the 1GG satellite… so perhaps I don’t need to wipe and restart. I will try adding the stefan satellite back into the config and see if it changes the behavior…

EDIT2: In the end I just decided to do a fresh install of everything and reformat all of the disks on this RPi. I just got the new node spun up, and although I haven’t seen any data in the log yet, it looks like it didn’t try to ping any of the mainnet satellites, so I think it’s good.

Is the satellite working or am I misconfigured? (0 bytes)

That looks like you are trying to connect a mainnet storage node to a testnet satellite. Please follow the setup instructions and don’t miss the other important config changes.

But for the QA satellite I have data… Do they differ in configuration?

I am not doing many uploads/downloads currently.
Its main purpose is to test the next steps with regards to full dual stack support :slight_smile:


I am still not connecting to this satellite… Is it supposed to be like this?

2022-11-06T21:29:25.648Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12ZQbQ8WWFEfKNE9dP78B1frhJ8PmyYmr8occLEf1mQ1ovgVWy", "attempts": 7, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp connect: connection refused", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp connect: connection refused\n"}

That’s expected, I am trying various settings with regard to the network stack. As this is a testnet satellite, there is no uptime guarantee :slight_smile:


Thank you for clarifying that this is not an error in my configuration :slight_smile:


Completely agree - what happens when you move house, or have v4 at home and v6 at work, or any combination? It needs to just work.