Yes, it looks like the new customer is planning to upload a lot of data, but rarely download it.
That won't change the bandwidth: as nodes fill they'll stop being available for ingress. So the upload won't be "divided to all available nodes". Maybe there's one-node-with-space… or one-node-with-space-plus-twenty-full-nodes. It will be the same bandwidth to that one-node-with-space.
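To illustrate the point, here's a toy sketch (nothing like Storj's real node-selection logic, just the arithmetic): once most nodes are full, aggregate ingress is capped by the bandwidth of whatever nodes still have space.

```python
# Toy model: one node with space plus twenty full nodes. Numbers are made up.
nodes = [{"free_tb": 4.0, "bandwidth_mbps": 100}]          # one-node-with-space
nodes += [{"free_tb": 0.0, "bandwidth_mbps": 1000}] * 20   # twenty full nodes

# Only nodes with free space are eligible for ingress.
eligible = [n for n in nodes if n["free_tb"] > 0]
cap = sum(n["bandwidth_mbps"] for n in eligible)
print(f"{len(eligible)}/{len(nodes)} nodes can take ingress, cap {cap} Mbps")
# -> 1/21 nodes can take ingress, cap 100 Mbps
```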
This all hinges on those SNOs "with sufficient bandwidth"… never stopping expansion when they fill. We're not all @Th3Van - most of us have limits.
Also… we don't know where all the data is going to be coming from.
We don't know if this is one big centralised company or a lot of smaller users uploading data from all over the world.
So the current pool of "fast SNOs" may actually not be the fastest ones for the end-client if and when they do start uploading data.
@littleskunk how is SLK graduating our nodes if it doesn't run audits? Or do you have some other procedure? I spun up some new nodes, but it looks strange.
When I win Euromillions (should happen aaaaaany day now), I'll buy a server with 3TB of RAM too…
None of my new nodes have ANY audits from SaltLake. I think they disabled the vetting requirement altogether for the duration of the test.
I'll buy a server with 3TB of RAM for testing too…
His test env is better than most companies' production.
Same here. We could just send it to /dev/null and still get paid for 30 days.
I guess that's for new nodes only, as my node gets audits from Saltlake.
or 10k distributed stores all over the globe.
Write-only data would be very simple to "store" if not for the pesky audits.
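A toy sketch of why the audits are so pesky for a hypothetical /dev/null node (this is not Storj's actual erasure-coded audit protocol, just the core idea): the auditor challenges the node for a random slice of data whose content it already knows, so a node that threw the bytes away has nothing to answer with.

```python
import os, random

# The "satellite" knows the piece content; the node is challenged for a
# random 256-byte range of it. A write-only node fails every challenge.
piece = os.urandom(64 * 1024)  # data the uploader stored

def audit(stored):
    """Return True only if the node can serve the challenged byte range."""
    off = random.randrange(len(piece) - 256)
    answer = stored[off:off + 256] if stored else b""
    return answer == piece[off:off + 256]

print(audit(piece))  # honest node     -> True
print(audit(None))   # /dev/null node  -> False
```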
In 2017 I had a brief chance to work on an absolute monster with 12 TB of RAM and >500 CPU cores. I couldn't reasonably take advantage of all that power despite doing CPU-heavy stuff. The htop view was amazing though.
On the other hand, I'm now working at a place where spinning up a Spark cluster with hundreds of nodes in the cloud is a daily routine.
A new GC record on this test data: 6M pieces moved to trash over 104h, totalling around 700GB, again while receiving ingress. It also reported a piece count of 28M for the Saltlake satellite.
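For context, the back-of-the-envelope numbers on that run:

```python
# Derived purely from the figures in the post above.
pieces = 6_000_000
hours = 104
bytes_total = 700e9        # ~700 GB
total_pieces = 28_000_000  # reported Saltlake piece count

print(f"rate: {pieces / (hours * 3600):.0f} pieces/s")           # ~16 pieces/s
print(f"avg piece size: {bytes_total / pieces / 1024:.0f} KiB")  # ~114 KiB
print(f"share of pieces trashed: {pieces / total_pieces:.0%}")   # ~21%
```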
It just takes a very long time; I'm in my 3rd month and I'm only at 22/100 for Saltlake.
That's amazing: congrats! I wish I'd see more posts from SNOs that fill an HDD and actually sound happy about it like you!
Some here are like:
"Yeah, I guess it's full. Yeah, I guess I'm getting paid. But there's TTL data in there. And I wish someone would send me a signed letter from the Prime Minister of Canada authenticating it's all real customer data… and prepaid for 10 years using gold bars. But I guess it's still OK. Maybe. Kinda."
Be happy people!
I'm happy as long as I can fund my photo backups on Storj. Currently don't need more than $2/month for that.
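Rough arithmetic on that budget, assuming Storj's published rate of around $4/TB-month for storage (an assumption - check current pricing, and this ignores egress):

```python
budget = 2.00         # $/month, from the post above
storage_price = 4.00  # $/TB-month, assumed published rate
print(f"${budget:.2f}/month covers ~{budget / storage_price * 1000:.0f} GB stored")
# -> ~500 GB, ignoring egress fees
```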
Thanks!
I've just started hosting a node. Very basic setup, but I'm already looking into monitoring it better. The graphs look simple but tell me everything I need to know.
I will take a look at it!
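If it helps, on a default install the storagenode exposes a local JSON dashboard API that's easy to poll. A minimal sketch - the endpoint path and field names may vary by version, so this mostly just dumps what comes back:

```python
import json, urllib.request

# Default local dashboard API on a stock storagenode install (port 14002).
URL = "http://localhost:14002/api/sno/"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Key names are version-dependent; inspect the payload before scripting on it.
print(data.get("nodeID"))
print(json.dumps(data.get("diskSpace"), indent=2))
```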
There are probably other people like me who were running colocated hardware in a datacenter that charges for bandwidth usage; with the old usage pattern it was fine. But we now have so much bandwidth usage that I bought a new server with a faster CPU (Apollo 4200 with 28 disks in 2U) and am moving to a different datacenter with 10Gbps unmetered bandwidth, for the same price as the old place with 1Gbps & 40TB of bandwidth.
So until I have finished the move I am not adding any more space. The testing is a good way to show what is required of the hardware, so you can plan better what to buy and where to put it. E.g. I used to have all disks in a huge ZFS array with an L2ARC cache, a SLOG and a metadata special device. Whenever the array came under high load (all nodes running the filewalker), the whole machine just fell over. So I moved to running each node on an individual drive and it works perfectly now.
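For anyone weighing metered vs unmetered, the rough numbers behind that move:

```python
# A 40 TB/month cap bites long before a 1 Gbps port does.
seconds = 30 * 24 * 3600
line_tb = 1e9 * seconds / 8 / 1e12  # 1 Gbps saturated for a month
print(f"1 Gbps for a month moves ~{line_tb:.0f} TB")  # ~324 TB

cap_tb = 40
avg_mbps = cap_tb * 1e12 * 8 / seconds / 1e6
print(f"a {cap_tb} TB cap = avg {avg_mbps:.0f} Mbps")  # ~123 Mbps
```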
Off-topic, but…
…Storj database problems? What database problems?
(sadly that price isn't available in my country)
In moments like this I realise that I'm only taking all this ingress thanks to a "bug" in the software.
The bloom filter (BF) was too small for big nodes over 10TB, so my 15TB nodes started to fill up quickly in December and January, to the point that it triggered a search for more drives.
And the timing was perfect: someone had just posted some drives at half price on the "local eBay-like" site because of cosmetic defects on the cases (dents, scratches, etc.), nothing affecting the performance of the drives, and they were brand new in sealed bags.
I bought 8 x 22TB drives. He even gave me a discount on the already half-priced drives.
Fast-forward: the filled drives started to lose data because of proper garbage removal, and I thought it had been a mistake to rush into the upgrades. But they were half price, so… well.
And now it's proved to be the best move ever, because I'm taking it all in with my 17 nodes running hot.
And all of this thanks to a miscalculated BF.
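For anyone curious why an undersized bloom filter has that effect, a small illustrative sketch (the parameters are made up, not Storj's actual ones): the filter lists the pieces to keep, so every false positive is a garbage piece that survives GC, and the false-positive rate explodes once the piece count outgrows a size-capped filter.

```python
import math

# Classic bloom-filter false-positive estimate: (1 - e^(-kn/m))^k.
def fp_rate(n, m_bits, k=2):
    return (1 - math.exp(-k * n / m_bits)) ** k

m = 2_000_000 * 8  # assume a ~2 MB cap on the filter, for illustration
for n in (5_000_000, 15_000_000, 30_000_000):
    print(f"{n:>10,} pieces -> ~{fp_rate(n, m):.0%} of garbage kept per pass")
# -> ~22%, ~72%, ~95%: big nodes barely shed any garbage each GC pass
```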