Sustained 200 Mbps ingress

same here


And here


I think there are a lot of problems with the satellite networks in this period. It has now been more than 12 hours since I received a byte, or little more, on my 3 nodes. Every 10-12 hours the transfer stops for several hours, mostly on all the nodes at the same time, both verified and unverified ones. In addition, there have been whole days of "blackout". We are waiting for better times.


Can't all be good months; slow times are good for error correcting and maintenance, though of course some people will have gotten through that already… Today it's the BIOS watchdog on my schedule, so my server will auto-reboot on OS freezes. It's starting to be a challenge to find something left to improve for HA-type stuff.

I might enable splitting my QPI lanes into multiple lanes instead of one big lane, which gives me redundancy in the lanes from the northbridge to my CPU, which I suppose is kind of nice… However, I do have two CPUs, each with its own QPI path, so it would sort of be redundancy on top of redundancy, and of course it has a certain performance impact…

I might also go from the "last power state" BIOS setting to "default on", so that if the machine turns itself off for whatever reason, the BIOS should catch it and turn it back on. I had enabled QPI L1 and L0 because I thought it might be cache… turned out to be power management that would turn off the server at random xD
And if it turns itself off, then of course it wouldn't turn itself back on when set to "last power state" in the BIOS.

I've been really fond of that setting though… very practical: if the power goes out, then when it comes back the machine says "well, I was on before the power went out, so I'd better turn on"… or the other way around, so it stays off and doesn't turn on at random for whatever reason.

The speed was nice after the 2-3 day pause some time last week; I got like 6-7 MB/s for maybe 12 hours. That's a record for my node… I was kind of starting to wonder if it would even go faster than 4-5 MB/s for the storagenode… :smiley:

Some day the tests will finish and we will miss these days with several hours of sustained data flow at 20-25 Mbps :sweat_smile:
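For scale, a rough back-of-the-envelope sketch of what that kind of sustained flow adds up to per day (pure arithmetic, ignoring overhead and pauses):

```python
# Rough arithmetic only: daily volume of a sustained 20-25 Mbps ingress flow.
for mbps in (20, 25):
    mb_per_s = mbps / 8                      # megabits/s -> megabytes/s
    gb_per_day = mb_per_s * 86_400 / 1_000   # 86,400 seconds per day, MB -> GB
    print(f"{mbps} Mbps = {mb_per_s:.1f} MB/s = roughly {gb_per_day:.0f} GB/day")
```

So a node seeing 20-25 Mbps around the clock takes in very roughly 215-270 GB per day.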


I'm starting a graceful exit (GE) on a few nodes now, because hoping for test data sucks.

There are way too many nodes for way too few customers; without test data my nodes would be nearly empty. Testing is important, but at some point Storj has to earn money and not just spend it all on test data.
I think Storj should temporarily stop sending out new node invitations.

If things improve, I could immediately provide 6-7 new nodes with at least 3 TB each.


Production launch was 2 months ago. What do you expect? Even the customers who wanted to jump on board on day 1 are probably still working on integrating Tardigrade into their infrastructure. And yet, my HDDs are filling up almost faster than I can buy them because Storj is pushing so much test data.

Meanwhile, in this "bad" month my node is still going to make more than $40. (Not counting the small nodes I started about a week ago.)

It's up to you, but I seriously advise against doing a graceful exit, for 2 reasons. First, the longer your node exists, the more profitable it is. Second, when times get better and you want to join back in, you'll be handing over much higher held-back amounts again. Just be patient through the slightly slower months.
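To make the second point concrete, here's a minimal sketch assuming the held-amount schedule published in the docs at the time (75% of earnings held in months 1-3, 50% in months 4-6, 25% in months 7-9, 0% from month 10 on); a brand-new node starts that ladder over from the top:

```python
# Sketch only, assuming the published held-amount schedule:
# months 1-3: 75% held, months 4-6: 50%, months 7-9: 25%, month 10+: 0%.
def held_percentage(node_age_months: int) -> int:
    if node_age_months <= 3:
        return 75
    if node_age_months <= 6:
        return 50
    if node_age_months <= 9:
        return 25
    return 0

for age in (1, 4, 8, 12):
    print(f"month {age}: {held_percentage(age)}% of that month's earnings held back")
```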


Yeah, I've been thinking of starting up a few nodes myself, just so they could start going through all the vetting and so on, since I do plan to scale up in the future and 24 TB is the limit for one node. It would also be nice to have some tiny test nodes for a while, maybe to see if I can't get nodes off bare metal and into some containers / VMs…

One can always scale up a node… one cannot simply create a new one without a long period of delay.

Looks like I'm going to end at $20 for this month… still not enough to cover my base electricity costs, but getting a good bit closer, and my node is still tiny at 7.5-8 TB.
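For anyone wondering how a node that size lands around $20 in a slow month, a rough sketch assuming the SNO payout rates published at the time (roughly $1.50 per TB-month stored and $20 per TB of egress); the egress figure below is just a made-up placeholder for a slow month:

```python
# Back-of-the-envelope payout sketch; rates assumed from the then-published
# SNO payout model (~$1.50 per TB-month stored, ~$20 per TB of egress).
stored_tb = 7.5     # roughly the node size mentioned above
egress_tb = 0.45    # hypothetical egress volume for a slow month
payout = stored_tb * 1.50 + egress_tb * 20
print(f"~${payout:.2f} before any held-back amounts")   # about $20
```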

The nodes where I'm doing the GE are 8 months old and not nearly full (3 nodes). The others that perform well (1-2 months old and already full) I of course don't do a GE on.

But I think there are currently enough nodes, and Storj should not send out any further invitations at the moment.
You can't pay all nodes with test data only; where does the money for the test data come from? That is going to run out some day if Storj continues like this.

Usually the rule is: don't expect to make money in the first year of a business… so 10 months left until Storj should even expect to have the option of seeing a profit :smiley:
If we're lucky we might see it sooner… of course Storj has a sizable runway if they played their cards right.
As you can hear, Bright is making money… xD

In fact, I'm staying online anyway. I trust the Storj infrastructure. In my opinion, shared storage like IPFS is the future. I use cloud services for backing up my data and I am thinking of switching over as a customer. Today I use an unlimited plan on Opendrive and I am evaluating the costs; currently I have 14 TB of backup data on Opendrive. The Tardigrade service needs to be advertised. :wink:


That erasure coding thing is just brilliant; no doubt it is the future of data storage. Math as redundancy… that is just awesome. Though maybe one could reverse the concept a bit and make it so that the math doesn't just provide redundancy but actually provides compression over extended periods of time, so that instead of just staying the same size, the stored data would get smaller as the compression gets better and better with more applied computation.
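For anyone new to the idea, here's a toy illustration of "math as redundancy": the simplest possible erasure code, two data pieces plus one XOR parity piece. Storj actually uses Reed-Solomon with many more pieces; this is only meant to show that a lost piece can be recomputed from the others.

```python
# Toy 2+1 erasure code: any one of the three pieces can be lost and rebuilt.
# Illustrative only; not how Storj's Reed-Solomon coding is parameterized.
def make_pieces(a: bytes, b: bytes):
    parity = bytes(x ^ y for x, y in zip(a, b))     # XOR of the two data pieces
    return a, b, parity

def recover_a(b: bytes, parity: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(b, parity))  # XOR undoes XOR

a, b, parity = make_pieces(b"hello wo", b"rld data")
assert recover_a(b, parity) == b"hello wo"          # piece "a" rebuilt after loss
```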

That way one could still have the erasure coding, but then on top of it, if we imagine some lone future storage system left unmaintained as its components decay, the data it stores would also grow smaller thanks to its continued computed compression over time… xD

Of course that does sort of make the data unusable and basically impossible to decipher for anyone who finds it after the system fails… then again, that might be a feature rather than a bug, depending on what one is trying to do.

Storing data long term is pretty difficult… like, say, storing data for 1000 years…


It's not; there is no limit. I'm not sure why people keep spreading this. It's simply not true.

Storj needs to be ready with enough capacity on the network in case big customers need to be onboarded. It’s only been in production for 2 months. Big customers will surely come and it would be really bad if they didn’t have enough capacity. So they need to keep accepting invites and expand. That would both improve reliability and consistency across the network. The test data serves a purpose for them to push this expansion ahead of customers coming on board. Think of it as the decentralized version of building the datacenter before you can rent it out to customers.


Well, I can only go by what the documentation tells me…
It says right there: 24 TB max per node. Of course I assume that will go up at some point, as it usually does.

https://documentation.storj.io/before-you-begin/prerequisites

In regard to big customers: they will often spend 6 months or longer just vetting a platform before they even start to consider transitioning to it, and that's counting from the point their sysadmin, or whoever, actually convinces them to make the change…

And by vetting I mean prototype test deployments of production-like workloads that aren't really critical infrastructure, or simply room for the sysadmins to experiment.

Then there is the whole business of having to convince whoever makes the decisions, deliberately waiting out the first versions because they are usually the worst, and gauging what happens with those who do choose to deploy on the new platform and what they report back…

Big business is rarely fast at making decisions… even if it seems like they just throw money around randomly at times, it is most often a slow process, even if the eventual transition might seem fast…

It's like a navy building ships… it doesn't happen overnight. In fact, there is most likely more effort spent on design, logistics, raw materials, technology, redundancy and production facilities than on building any single ship.

The casual observer, however, would consider the ship itself the main job and the hardest part of it…
But once they start turning out ships it goes fast, because by then the development and logistics phases are mostly done.

If one can call warship production fast xD rather than glacial.

And in regard to the 24 TB limit: I don't really want to make more nodes, but then again, if I'll have to because of that restriction, I might as well get the nodes fired up and started on vetting and such.


A minimum of 8 TB and a maximum of 24 TB of available space per node
Minimum of 500 GB with no maximum of available space per node

Well, you found the right page… but you didn't read all of it.
Surprisingly, by that same reading you should also think the minimum requirement is 8 TB… but nobody seems to think that.

There is no maximum limit. 24 TB is listed only as the recommended maximum, and that recommendation predates any real-life experience with the platform. Once you approach that kind of size, I think you can determine for yourself whether it's worth expanding the node or not.

Regarding the recommendation not to run more than 24 TB in one place…

  1. Decentralization. If 24 TB were lost at once, that would be a noticeable amount of lost data. It is much better to have those 24 TB as small nodes spread across the globe.
  2. It makes no sense to bring so much space online at once in one physical place. My 10 TB (three nodes) were filled up after a year. How long would it take to fill up 24 TB? Keep in mind that usage is non-linear (rough estimate below).
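Here's the rough linear estimate mentioned in point 2 (usage is not linear, so treat it as a ballpark only):

```python
# Assumption: the fill rate stays at the pace observed above (~10 TB in ~12 months).
tb_filled, months = 10, 12
rate_tb_per_month = tb_filled / months        # roughly 0.83 TB per month
print(f"~{24 / rate_tb_per_month:.0f} months to fill 24 TB at that rate")   # ~29 months
```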

Of course you can bring up more if you wish; it's just not recommended yet.


I think this is the most important argument. By the time you get up to 24 TB, you probably have some idea of how useful it is to expand further. But spinning up more than that right away really makes no sense.

Decentralization is a decent enough argument, but it shouldn't really be the concern of SNOs. You can't assume SNOs will be altruistic like that. If there is more money to be made by sharing more space, it will happen.

I have to agree. I pointed out one of the rationales behind the recommendation.


Well, I'm at 1/3 full after 10 weeks. I only started out with 6 TB dedicated to the node, but it was growing at a decent pace, so I decided to give it some room to grow… good to know it's only a recommendation…

I did catch that 8 TB was a recommendation though… I just figured that when it said max, it was the max…
I will most likely split it up for logistical reasons anyway… but again, it's most likely all going to live in one big ZFS pool… so from a redundancy standpoint it doesn't really make any difference whether I run two nodes or 10 nodes; if the pool dies, they all die…

These last few weeks have been slow going… but it gives me time to experiment and test a bit.

Just managed to copy 1 million files internally on the storage pool in less than 1 minute… I got my HBAs moved so they only sit on my northbridge, my SLOG and L2ARC SSDs have been split so they now each have their own 3 Gbit SATA controller, and I switched the SSDs from really old SATA cables to 6 Gbit ones.

Woop woop, I'm gonna get a ticket… I still run into some weird 50 MB/s limit on network access to it, but it seems like it must be the NFS server I use for local file sharing from the pool to my VMs.
Also managed to do nearly sustained reads of 1.14 GB/s during my scrub last night.

The storagenode itself is bare metal.

Well, the rationale behind having 1 or 2 big pools rather than many smaller ones is that any one storagenode will then have the full bandwidth and IO of the pool if needed. It also allows me to focus more on having some decent redundancy without it taking away too much of the capacity.
And it will allow me to use the pool locally for other network-related stuff and experiments, like trying to run my workstation directly off the pool instead of having local hard drives.
Long term, I plan to build it into a cluster, so that no matter what breaks, the node will survive, and maybe even get a second internet connection for redundancy, if this ends up being profitable.

"Preferred"
A minimum of 8 TB and a maximum of 24 TB of available space per node