Updates on Test Data

Yes, it looks like the new customer is planning to upload a lot of data, but rarely download it.

2 Likes

That won't change the bandwidth: as nodes fill they'll stop being available for ingress. So the upload won't be "divided to all available nodes". Maybe there's one-node-with-space... or one-node-with-space-plus-twenty-full-nodes. It will be the same bandwidth to that one-node-with-space :slight_smile:
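To picture that argument with a toy example (the node list, bandwidth numbers, and free-space check below are made up, and real node selection is more involved than this):

```python
# Toy model: ingress throughput is limited by the nodes that still have
# free space, not by the total number of nodes holding data.

def effective_ingress_mbps(nodes):
    """Sum upstream bandwidth of nodes that can still accept new pieces."""
    return sum(n["bandwidth_mbps"] for n in nodes if n["free_tb"] > 0)

nodes = (
    [{"bandwidth_mbps": 1000, "free_tb": 0}] * 20   # twenty full nodes
    + [{"bandwidth_mbps": 100, "free_tb": 4}]       # one node with space left
)

print(effective_ingress_mbps(nodes))  # 100 -- the twenty full nodes add nothing
```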

This all hinges on those SNOs 'with sufficient bandwidth'... to never stop expanding when they fill. We're not all @Th3Van - most of us have limits :wink:

3 Likes

Also... we don't know where all the data is going to be coming from.
We don't know if this is one big centralised company or a lot of smaller users uploading data from all over the world.
So the current pool of "fast SNOs" may actually not be the fastest ones for the end-client if and when they do start uploading data.

1 Like

@littleskunk how is SLK graduating our nodes if it doesn't do audits? Or do you have some other procedure? I spun up some new nodes, but it looks strange.

When I win Euromillions (should happen aaaaaany day now), I'll buy a server with 3TB of RAM too... :sweat_smile:

1 Like

None of my new nodes have ANY audits from SaltLake. I think they disabled the vetting requirement altogether for the duration of the test.

I'll buy a server with 3TB of RAM for testing too...

His test env is better than most companies' production.

1 Like

Same here. We could just send it to /dev/null and still get paid for 30 days. :sunglasses:

I guess that's for new nodes only, as my node gets audits from saltlake

1 Like

or 10k distributed stores all over the globe.

Write-only data would be very simple to "store" if not for the pesky audits.

In 2017 I had a brief chance to work on an absolute monster with 12 TB of RAM and >500 CPU cores. I couldn't reasonably take advantage of all that power despite doing CPU-heavy stuff. The htop view was amazing though.

On the other hand, now I'm working at a place where spinning up a Spark cluster with hundreds of nodes in the cloud is a daily routine :person_shrugging:

1 Like

A new GC record on this test data: 6M pieces moved to trash over 104h, totalling around 700GB, again while receiving ingress. It also reported a piece count of 28M from the saltlake satellite.
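Just for scale, a quick back-of-the-envelope on those numbers (treating the rounded 700GB / 104h figures as exact):

```python
pieces_trashed = 6_000_000        # pieces moved to trash
trash_bytes    = 700e9            # ~700 GB reported
duration_s     = 104 * 3600       # 104 hours

avg_piece_kb   = trash_bytes / pieces_trashed / 1e3   # ~117 KB per trashed piece
pieces_per_sec = pieces_trashed / duration_s          # ~16 pieces/s sustained

print(f"avg piece ~{avg_piece_kb:.0f} KB, ~{pieces_per_sec:.0f} pieces/s moved to trash")
```

So that run was shuffling roughly 16 pieces per second into the trash while the node was still taking ingress.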

1 Like


OMG! This node was started in January!

2 Likes

It just takes a very long time. I'm on my 3rd month and I'm only at 22/100 for saltlake.

That's amazing: congrats! I wish I'd see more posts from SNOs that fill an HDD and sound actually happy about it like you! :+1:

Some here are like:

"Yeah, I guess it's full. Yeah, I guess I'm getting paid. But there's TTL data in there. And I wish someone would send me a signed letter from the Prime Minister of Canada authenticating it's all real customer data... and prepaid for 10 years using gold bars. But I guess it's still OK. Maybe. Kinda."

Be happy people! :hugs:

6 Likes

I'm happy as long as I can fund my photo backups on Storj. Currently I don't need more than $2/month for that.

2 Likes

Thanks!
I just started hosting a node. Very basic setup. But I'm already looking into monitoring it better. The graphs look simple, but tell me everything I would need to know.
I will take a look at it!

2 Likes

Probably there are other people like me who were running colocated hardware in a datacenter that charges for bandwidth usage; with the old usage pattern it was fine. But now there is so much bandwidth usage that I bought a new server with a faster CPU (Apollo 4200 with 28 disks in 2U) and am moving to a different datacenter with 10Gbps unmetered bandwidth, for the same price as the old place with 1Gbps & 40TB of bandwidth.
So until I have finished the move I am not adding any more space. The testing is a good way to show what is required of the hardware, so you can better plan what to buy and where to put it. E.g. I used to have all disks in a huge ZFS array with an L2ARC cache, SLOG and metadata special device. When the array got hit with high load (all nodes running the file walker) the whole machine just fell over. So I moved to running each node on an individual drive and it works perfectly now.

2 Likes

Off-Topic. But...

...Storj database problems? What database problems?

(sadly not a price available in my country :sob: )

In moments like this I realise that I am taking all this ingress thanks to a "bug" in the software.
The BF was too small for big nodes over 10TB, so my 15TB nodes started to fill up quickly in December and January, to the point that it triggered the search for more drives.
And the timing was perfect: someone had just posted on the "local eBay-like" site some drives at half price because they had some defects on the case (hits, scratches, etc.), nothing that affects the performance of the drive, and they were brand new in sealed bags.
I bought 8 x 22TB drives. He even gave me a discount on the already half-priced drives. :smiley:
Fast forward: the filled drives started to lose data once garbage was being properly removed, and I thought rushing into the upgrades had been a mistake. But they were half price so... well.
And now it has proved to be the best move ever, because I am taking it all in with my 17 nodes running hot. :star_struck:
And all of this thanks to a miscalculated BF.
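To illustrate why an undersized bloom filter leaves garbage behind on big nodes, here is a rough sketch using the standard bloom filter false-positive formula; the filter size, hash count, and piece counts below are made-up numbers, not the satellite's real parameters:

```python
import math

def false_positive_rate(n_pieces, m_bits, k_hashes):
    """Standard Bloom filter false-positive estimate: (1 - e^(-kn/m))^k."""
    return (1 - math.exp(-k_hashes * n_pieces / m_bits)) ** k_hashes

m_bits   = 4_100_000 * 8   # hypothetical filter capped at ~4.1 MB
k_hashes = 9               # hypothetical number of hash functions

for n_pieces in (3_000_000, 10_000_000, 30_000_000):
    p = false_positive_rate(n_pieces, m_bits, k_hashes)
    print(f"{n_pieces:>11,} pieces -> {p:.1%} of garbage mistakenly kept")
```

GC trashes only pieces that miss the filter, so every false positive is a garbage piece that survives. With a fixed-size filter the false-positive rate stays tiny for a small node but climbs toward 100% as the piece count grows, which is why the big nodes kept their garbage and filled up.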

6 Likes