Sustained 200 Mbps ingress

same here

And here

I think there are a lot of problems with the satellite networks in this period. It is now more than 12 hours since I have received a byte, or little more, on my 3 nodes. Every 10-12 hours the transfer stops for several hours, mostly on all the nodes at the same time, both verified and non-verified ones. In addition, there have been whole days of "blackout". We are waiting for better times.

Can't all be good months; slow periods are good times for error correcting and maintenance, though of course some people will have gotten through that already… Today it's the BIOS watchdog on my schedule, so my server will auto-reboot on OS freezes. It's kind of starting to be a challenge to find something left to improve for HA-type stuff.
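
Side note on the watchdog idea: the OS-level counterpart on Linux is /dev/watchdog, where some process has to keep writing to the device, and if the writes stop (OS froze, process died) the hardware resets the box. This is just a minimal sketch of that petting loop, assuming a watchdog driver is loaded and it runs as root; in practice you'd let the watchdog daemon or systemd's RuntimeWatchdogSec handle it:

```python
# Minimal /dev/watchdog petting loop (illustrative sketch, not production).
# If this loop ever stops writing before the timeout, the hardware reboots
# the machine, which is exactly the "auto reboot on OS freeze" behaviour.
import time

with open("/dev/watchdog", "wb", buffering=0) as wd:
    while True:
        wd.write(b"\0")   # "pet" the watchdog
        time.sleep(10)    # must stay under the watchdog timeout (often 30-60 s)
```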

I might enable splitting my QPI lanes into multiple lanes instead of one big lane, which gives me redundancy in the lanes from the northbridge to my CPU, which I suppose is kind of nice… However, I do have two CPUs, each with its own QPI path, so it would sort of be redundancy on top of redundancy, and it of course has a certain performance effect…

I might also go from the "last power state" BIOS setting to "default on", so that if the machine turns itself off for whatever reason, the BIOS should catch it and turn it back on. I had enabled QPI L1 and L0 because I thought it might be cache… it turned out to be power management that would turn off the server at random xD
And if it turns itself off, it of course won't turn itself back on while the BIOS is on "last power state".

I've been really fond of that setting though… very practical: if the power goes out, then when it comes back the machine says "well, I was on before the power went out, so I'd better turn on". Or the other way around: it was off, so it doesn't turn on at random for whatever reason.

The speed was nice after the 2-3 day pause some time last week; I got like 6-7 MB/s for maybe 12 hours… that's a record for my node… I was kind of starting to wonder if the storagenode would ever go faster than 4-5. :smiley:

Some day the tests will finish and we will miss these days with several hours of sustained data flow at 20-25 Mbps. :sweat_smile:

I'm starting graceful exit (GE) on a few nodes now, because hoping for test data sucks.

We are way too many nodes for way too few customers; without test data my nodes would be nearly empty. Testing is important, but at some point Storj has to earn money and not just spend it all on test data.
I think Storj should temporarily stop sending out new node invitations.

If things get better, I could immediately provide 6-7 new nodes with at least 3 TB each.

Production launch was 2 months ago. What do you expect? Even the customers who wanted to jump on board on day 1 are probably still working on integrating Tardigrade into their infrastructure. And yet my HDDs are almost filling up faster than I can buy them, because Storj is pushing so much test data.

Meanwhile, in this "bad" month my node is still going to make more than $40 (not counting the small nodes I started about a week ago).

It's up to you, but I seriously advise against doing a graceful exit, for two reasons. First, the longer your node exists, the more profitable it is. Second, when times get better and you want to join back in, you'll be handing over much higher held-back amounts again. Just be patient through the slightly slower months.
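
To put rough numbers on that second reason, here's the held-amount schedule as I understand it from the docs (treat the percentages as assumptions and check the official terms; the point is that a fresh node starts back at 75% withholding):

```python
# Sketch of the Storj held-amount schedule as I understand it (percentages
# are assumptions, check the official terms): the share of each month's
# earnings that gets held back shrinks with node age.
def held_fraction(node_age_months: int) -> float:
    if node_age_months <= 3:
        return 0.75      # months 1-3: 75% held back
    if node_age_months <= 6:
        return 0.50      # months 4-6: 50%
    if node_age_months <= 9:
        return 0.25      # months 7-9: 25%
    return 0.0           # month 10+: nothing held (and around month 15,
                         # half of the accumulated held amount is returned)

for age in (1, 4, 8, 12):
    print(f"month {age}: {held_fraction(age):.0%} of earnings held back")
```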

Yeah, I've been thinking of starting up a few nodes myself, just so they can start going through vetting and all that, since I do plan to scale up in the future and 24 TB is the limit for one node. It would also be nice to have some tiny test nodes for a while; maybe see if I can't get nodes off bare metal and into some containers/VMs, which might be nice…

One can always scale up a node… one cannot simply create a new one without long periods of delay.

Looks like I'm going to end at $20 for this month… still not enough to cover my base electricity costs, but getting a good bit closer, and my node is still so tiny at 7.5-8 TB.
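
For reference, the kind of napkin math behind that (the wattage and kWh price are made-up placeholders, plug in your own numbers):

```python
# Back-of-the-envelope monthly electricity cost (all numbers are assumptions):
server_watts = 150        # assumed average draw of the whole box
price_per_kwh = 0.30      # assumed price in $ per kWh
hours_per_month = 24 * 30

kwh = server_watts / 1000 * hours_per_month   # 108 kWh/month
cost = kwh * price_per_kwh                    # $32.40/month
print(f"{kwh:.0f} kWh/month -> ${cost:.2f}")  # why $20 doesn't cover it
```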

The nodes where I'm doing the GE are 8 months old and not nearly full (3 nodes). The others perform well (1-2 months old and already full), so of course I won't do a GE on them.

But I think there are currently enough nodes and Storj should not send any further invitations at the moment.
You can't pay all the nodes with test data only; where does the money for the test data come from? That's going to run out some day if Storj continues like this.

Usually the rule is: don't expect to make money in the first year of a business… so 10 months left until Storj should even expect to have the option of seeing profit :smiley:
If we're lucky we might see it sooner… of course Storj has a sizable runway if they played their cards right.
As you can hear, bright is making money… xD

In fact, I am staying online anyway. I trust the Storj infrastructure. In my opinion, shared storage like IPFS is the future. I have cloud services for backing up my data and I am thinking of switching to Tardigrade as a customer. Today I use unlimited services from Opendrive and I am evaluating the costs. Currently I have 14 TB of data backed up on Opendrive. The Tardigrade service must be advertised. :wink:

That erasure coding thing is just brilliant; no doubt that is the future of data storage. Math as redundancy… that is just awesome… though maybe one could reverse the concept a bit and make it so that the math doesn't just provide redundancy but actually provides compression over extended periods of time, so that instead of just keeping the same size, the stored data would get smaller as compression becomes better and better with increased applied computation.

That way one could still have the erasure coding, but then on top of it, if we imagine a sole future storage solution that is unmaintained, then as its components decay, the data it stores would also grow smaller with its continued computed compression over time… xD

Of course that does sort of make the data unusable and basically impossible to decipher for anyone who finds it after the system fails… though that might be a feature rather than a bug, depending on what one is trying to do.
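
For anyone who hasn't played with it: the simplest possible instance of "math as redundancy" is a single XOR parity shard, which lets you rebuild any one lost shard from the rest. Storj actually uses Reed-Solomon with far more parity pieces, so this toy sketch is just to show the principle:

```python
# Toy single-parity erasure code: store k data shards plus one XOR parity
# shard; any ONE lost shard can be rebuilt from the survivors.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data_shards = [b"hello_w", b"orld_er", b"asure!!"]   # k = 3 equal-size shards
parity = reduce(xor_bytes, data_shards)              # 1 parity shard

# Lose any one data shard; XOR of the survivors and parity rebuilds it.
lost_index = 1
survivors = [s for i, s in enumerate(data_shards) if i != lost_index]
rebuilt = reduce(xor_bytes, survivors + [parity])
assert rebuilt == data_shards[lost_index]
print(rebuilt)  # b'orld_er'
```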

Storing data long term is pretty difficult… like, say, storing data for 1000 years…

It's not; there is no limit. I'm not sure why people keep spreading this. It's simply not true.

Storj needs to be ready with enough capacity on the network in case big customers need to be onboarded. It's only been in production for 2 months. Big customers will surely come, and it would be really bad if they didn't have enough capacity. So they need to keep accepting invites and expand. That also improves reliability and consistency across the network. The test data serves a purpose: it lets them push this expansion ahead of customers coming on board. Think of it as the decentralized version of building the datacenter before you can rent it out to customers.

Well, I can only go by what the documentation tells me…
It says right there: 24 TB max per node… Of course I assume that at some point that will go up, as it usually does.

https://documentation.storj.io/before-you-begin/prerequisites

In regard to big customers: they will often take 6 months or longer just to vet a platform before they ever start to consider transitioning to it… and that's counting from the time they actually get convinced by their sysadmin, or whoever, to make the change…

And by vetting I mean prototype test deployments of production-like stuff that isn't really critical infrastructure, or simply room for sysadmins to experiment.

Then there is the whole matter of having to convince whoever makes the decisions, deliberate waits to avoid the first versions because they are usually the worst, and gauging what happens with those who do choose to deploy on the new platform and what they report back…

Big business is rarely fast at making decisions… even if it seems they just throw money around randomly at times, it is most often a slow process, even if the eventual transition might seem fast…

Like, if we imagine a navy building ships… it doesn't happen overnight… in fact there is most likely more effort spent on design, logistics, raw materials, technology, redundancy, and production facilities than on building a single ship.

The casual observer, however, would consider the ship the main job and the worst part of it…
But when they start spitting out ships, it goes fast, because by then they are often done with the development and logistics phases.

If one can call warship production fast xD and not glacial.

And in regard to the 24 TB limit: I don't really want to make more nodes, but then again, if I will have to because of that restriction, then I might as well get the nodes fired up and start vetting and such.

A minimum of 8 TB and a maximum of 24 TB of available space per node
Minimum of 500 GB with no maximum of available space per node

Well, you found the right page… but you didn't read all of it.
Surprisingly, going by that same reading you should also think the minimum requirement is 8 TB… but nobody seems to think that.

There is no maximum limit. 24 TB is listed as the recommended maximum only, and that recommendation predates any real-life experience with the platform. Once you approach that kind of size, I think you can determine for yourself whether it's good to expand the node or not.

Regarding the recommendation not to run more than 24 TB in one place…

  1. Decentralization. If 24 TB were lost at once, that is a noticeable amount of lost data. It's much better to have those 24 TB as small nodes across the globe.
  2. It makes no sense to bring so much space online at once in one physical place. My 10 TB (three nodes) took a year to fill up. How much time would it take to fill 24 TB? Take non-linear usage into consideration; a rough estimate is sketched below.
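
As a rough illustration of point 2 (the ingress number is a made-up assumption; real usage varies a lot month to month and is anything but linear):

```python
# Naive fill-time estimate (numbers are illustrative assumptions only):
capacity_tb = 24.0
avg_net_ingress_tb_per_month = 0.8   # assumed average after deletions

months = capacity_tb / avg_net_ingress_tb_per_month
print(f"~{months:.0f} months to fill {capacity_tb:.0f} TB")  # ~30 months
```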

Of course you can bring more online if you wish; it's just not recommended yet.

I think this is the most important argument. By the time you get up to 24TB, you probably have some idea about how useful it is to expand more. But spinning up more than that right away really makes no sense.

The decentralization is a decent enough argument, but it shouldn't really be the concern of SNOs. You can't assume SNOs will be altruistic like that. If there is more money to be made by sharing more space, it will happen.

I have to agree. I pointed out one of the rationales behind the recommendation.

Well, I'm at 1/3 full after 10 weeks. I only started out with 6 TB dedicated to the node, but it was growing at a decent pace, so I decided to give it some room to grow… good to know it's only a recommendation…

I did catch that 8 TB was a recommendation though… I just figured that when it said max, it was the max…
I will most likely split it up for logistical reasons anyway… but again, it's most likely all going to live in one big ZFS pool… so from a redundancy standpoint, whether I run two nodes or 10 nodes doesn't really make any difference; if the pool dies, they all die…

These last few weeks have been slow going… but it gives me time to experiment and test a bit.

I just managed to copy 1 million files internally in less than 1 minute on the storage pool… I got my HBAs moved so they are only on my northbridge, my SLOG and L2ARC SSDs have been split so they now each have their own 3 Gbit SATA controller, and I switched from really old SATA cables for the SSDs to 6 Gbit ones.

Woop woop, I'm gonna get a ticket… I still run into some weird 50 MB/s limit on network access to it… but it seems like it must be the NFS server I use for local file sharing from the pool to my VMs.
I also managed to do nearly sustained 1.14 GB/s reads during my scrub last night.
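
In case it helps anyone chasing a similar ceiling: the dumb test I'd use is the same sequential read run once directly on the pool and once through the NFS mount, then compare (the path is a placeholder; point it at a file of a few GB, and mind the ARC caching the second run):

```python
# Crude sequential-read throughput check (path is a hypothetical placeholder):
import time

PATH = "/tank/testfile"      # substitute a real multi-GB file
CHUNK = 1 << 20              # read in 1 MiB chunks

total = 0
start = time.monotonic()
with open(PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.monotonic() - start
print(f"{total / elapsed / 1e6:.1f} MB/s over {total / 1e9:.2f} GB read")
```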

The storagenode itself is bare metal.

Well, the rationale behind having 1 or 2 big pools rather than many smaller ones is that any one storagenode will then have the full bandwidth and IO of the pool if needed. It also allows me to focus more on having some decent redundancy without it taking away too much of the capacity.
And it will allow me to use the pool locally for other network-related stuff and experiments, like trying to run my workstation directly off the pool instead of having local hard drives.
Long term, I plan to build it into a cluster, so that no matter what breaks, the node will survive; maybe I'll even get a second internet connection for redundancy, if this ends up being profitable.

"Preferred"
A minimum of 8 TB and a maximum of 24 TB of available space per node