Maybe wait a bit. The RS number we tested last was wasting bandwidth. We still have an RS number with lower bandwidth consumption that we want to try out in comparison. The plan is that once we hit our target, we will keep this load running for at least a day or so. That's the day you can start to optimize your setup.
Looking at 2 nodes on the same machine, the older one with a graceful exit (GE) on Saltlake:
piece_expiration.db without Saltlake - 254MB
piece_expiration.db with Saltlake - 642MB
This is now after vacuuming them 3 days ago. And… 3 days ago, that big one was 450MB. If this continues, we will hit a 1GB db in no time.
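For anyone who wants to re-run the vacuum themselves, here is a minimal sketch using Python's built-in sqlite3 module. The path is just an example; adjust it to your node's storage directory, and only do this while the storagenode process is stopped.

```python
import sqlite3

# Example path -- adjust to your node's storage directory.
DB_PATH = "/mnt/storagenode/storage/piece_expiration.db"

# VACUUM rebuilds the database file and reclaims space left by deleted rows.
# It cannot run inside a transaction, so open the connection in autocommit
# mode. Run this only while the storagenode process is stopped.
conn = sqlite3.connect(DB_PATH, isolation_level=None)
conn.execute("VACUUM")
conn.close()
```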
If you work with 10gbit you need to spend money on a router. If you run a lot of NAT sessions, like a multinode Storj setup, you will hit the limits of a home router easily.
How long does it take for the satellites to notice a node is full? I read somewhere that the minimum free space for a node to be considered full was now 100GB; is this still correct? This morning, one of my nearly full nodes screamed past that limit during the test until there was only 15GB of free space left. I then manually reduced the space allocated to Storj by 100GB to keep the node in check.
The buffer I have on this node between the space allocated to Storj and the actual maximum disk space is around 20GB. Will we need to increase this buffer for when it takes too long for the satellites to notice a node is full and they keep sending data?
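For reference, a quick sketch (my own illustration, not official guidance) of how you could compute an allocation that keeps a fixed buffer free on the disk. The mount point and buffer size are assumptions; adjust them for your node.

```python
import shutil

# Example mount point and buffer size -- adjust for your node.
MOUNT = "/mnt/storagenode"
BUFFER_GB = 100  # free space to keep outside the Storj allocation

# disk_usage returns (total, used, free) in bytes.
total, used, free = shutil.disk_usage(MOUNT)
suggested = total / 10**9 - BUFFER_GB
print(f"disk total: {total / 10**9:.0f} GB, "
      f"suggested allocation: {suggested:.0f} GB")
```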
Wasn't there a recommendation to have the buffer at 10% of total HDD capacity?
No, the new minimum is 5GB.
10% would be a bit steep, especially on the larger nodes. This is a small node, though I admit that a buffer of 20GB is on the small side.
All those rosy spoken words are not always true. Your node storage will reflect the actual demand; as far as my nodes are concerned, I don't think so. Please read news articles before jumping to any conclusions. If I sell any product, I will not say "my product is bad, please don't buy it."
Exactly. Yesterday around 18:00 UTC+2 there was a short test, maybe 3 minutes,
that brought a constant 50% and up to 75% of my 1Gbps network, BUT all the CPUs went crazy to 100% (I have 2 cores per node in a VM). Today it's almost the same 50% traffic, but the CPUs are calm as normal!
There is clearly a language barrier here. I honestly have read that sentence about three times and I'm struggling to understand what you're trying to say.
I'm really sorry, but is there any way you could rephrase that?
Let's decode:
Rosy words probably spoken by other SNOs who mention getting paid hundreds or thousands of dollars for being a SNO.
The above-mentioned "rosy-words-speaking SNOs/YouTubers" may have said that if you store 500GB you will see egress of 500GB, and if you store 2TB you will see egress of 2TB.
The contrast of ingress vs egress obviously didn't reflect what was portrayed by said rosy SNOs.
This is a general PSA where we say, "Look, the company is not doing as WE expect, so beware."
As per the OP: if I (OP) were Storj, I wouldn't say my product is bad, even when the forum is full of bugs and people complaining. I (OP) will always say my product (Storj) is good.
Did that make sense now? I hope I got the gist of the OP's frustration.
What kind of bandwidth does @Th3Van need with this "new usage pattern" (1-8 week TTL) to maintain the level of the entire 100-node farm?
Doesn't sound reassuring at all.
There is an increase in workload, but no increase in payments is expected :(
yes!
just imagine how many HDDs we would be able to buy at $2.5/TB!
Storj team! Think about it! ;D
This is just test data. And you'll be paid for it.
When will the paid part start? So far I see all uploads from the SLC satellite resulting in "manager closed: unexpected EOF" errors. So it seems to be just bandwidth tests and not paid.
My understanding is that it'll work just like in production.
If you win the race and get the piece, then you'll get paid for however much it is that you're storing.
Is your success rate very low, then?
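If you want to check, here is a rough sketch of counting upload outcomes from the node log. It assumes the common log phrasing ("uploaded" / "upload failed"), and the log path is just an example; adjust both for your setup.

```python
# Rough sketch: count upload outcomes in a storagenode log.
# Assumes the common log phrasing ("uploaded" / "upload failed");
# the log path is an example -- adjust for your setup.
LOG_PATH = "/mnt/storagenode/node.log"

ok = failed = 0
with open(LOG_PATH, errors="replace") as log:
    for line in log:
        if "upload failed" in line:
            failed += 1
        elif "uploaded" in line:
            ok += 1

total = ok + failed
if total:
    print(f"uploads: {ok} ok, {failed} failed "
          f"({100 * ok / total:.1f}% success)")
else:
    print("no upload lines found")
```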
30 posts were split to a new topic: Currently exist issues
That seems plausible, thank you.
This also has nothing to do with the test data. Please find a more appropriate topic or make a new one. Let's try to keep this one a little cleaner.
It's a nice idea, some sort of client-side replication of piece metadata. But I don't want to get into it here and get things further off track.