Yesterday, I noticed my download count spiked higher than I’ve ever seen. The logs show it all coming from a single piece. I suppose one user could be downloading the same file over and over, but that seems pretty unusual. Has anyone else seen this before?
wow, I wish I had customers like that too. That gives some nice egress. Hopefully the file was big
Sadly I don’t have anything set up to track statistics on the number of operations.
so 90k downloads of 2MB files is 180GB worth of downloads, and 3GB a minute is about 50MB/s
that seems very high unless you’ve got a 500-1000Mbit connection, and even then it’s still kinda high
but I guess the pieces could be smaller than 2MB… 90k times though…
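For anyone wanting to redo that back-of-envelope math, here it is as a quick sketch. The one-hour window is my assumption to make the 3GB/minute figure come out; the actual duration of the spike isn’t stated in the thread.

```python
# Back-of-envelope check of the numbers above. The 60-minute window
# is an assumption (chosen so the per-minute rate matches the post);
# the piece size of 2MB is also just the maximum, per later replies.
downloads = 90_000   # observed download operations
piece_mb = 2         # assumed size per piece, in MB
window_min = 60      # assumed duration of the spike, in minutes

total_gb = downloads * piece_mb / 1000       # total data moved
gb_per_min = total_gb / window_min           # per-minute rate
mb_per_s = gb_per_min * 1000 / 60            # sustained throughput

print(total_gb)    # 180.0 GB total
print(gb_per_min)  # 3.0 GB per minute
print(mb_per_s)    # 50.0 MB/s sustained
```

With a smaller real piece size, the same 90k operations would of course need far less bandwidth.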
well it could be some sysadmin getting ready to present his arguments for why his business should move to Tardigrade… we are bound to see much more weird and unique traffic like this now that we are getting closer to the 6-months-from-launch point… this will most likely be the time when corporations start to look at the data from the limited-scale testing they have been doing with Tardigrade…
seeing what works and what doesn’t, and what it all costs… now some are ready to scale up, while others are ready to move on and test something else if the system didn’t meet their requirements…
exciting times for Storj… soon we might see some proper customer data
Files can be much smaller than that. Pieces only end up being 2.3MB if the original segment was the max of 64MB. Any file smaller than that, or a segment at the end of a file smaller than that, would result in much smaller pieces, down to something like a 4KB minimum. Below that point the file is small enough to be stored inline on the satellite, where it takes less space than its metadata would have.
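A minimal sketch of where the ~2.3MB figure comes from, assuming a Reed-Solomon parameter of k = 29 pieces needed to reconstruct a segment (that exact number is my assumption, not something stated in this thread):

```python
# Why pieces top out around 2.3MB: each piece carries roughly 1/k of
# the erasure-coded segment. k = 29 is an assumed default here.
K = 29                           # assumed RS "pieces needed" count
MAX_SEGMENT = 64 * 1024 * 1024   # 64 MiB maximum segment size

def piece_size(segment_bytes: int, k: int = K) -> float:
    """Approximate size of one piece for a segment of the given size."""
    return segment_bytes / k

print(piece_size(MAX_SEGMENT) / 1024 / 1024)  # ~2.2 MiB per piece
print(piece_size(1024 * 1024) / 1024)         # 1 MiB segment -> ~35 KiB pieces
```

So a small tail segment really does shrink the per-node pieces proportionally, which is why the 2MB-per-download assumption above is only an upper bound.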
speaking of files, if you feel like testing just how bad your system is at dealing with, say, 1 million files…
this is a great way to generate them; I find copying 1 million files helps me get an idea of how well a system does at processing lots and lots of files…
like say in the case of ZFS… I can literally see my scrub time go up by an hour or two every time I add 1 million files to the pool… copying them can also be a bit of a challenge… with my present setup I can manage it in less than 3 minutes,
though I’m running a 512K recordsize now; my record, I think, was with a 32K recordsize, which took less than 1 minute to copy the 1 million files on the same pool, with sync=standard on all of them… I have tried sync=always… but the throughput kills that for me.
anyways, I figured I’d mention it in case anyone was interested in just how much of an effect working with big numbers of files has on a system; they are so easy to generate… it will take a bit though
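In case it helps, here is a minimal sketch for generating that kind of file load. The directory name, fan-out, and file size are all placeholders of my choosing; as noted below, starting with 100k files before attempting a full million is a good idea.

```python
# Generate `count` small files under `root`, fanned out into
# subdirectories so no single directory holds too many entries.
# All parameters here are illustrative placeholders.
import os
import time

def make_files(root: str, count: int, size: int = 4096) -> float:
    """Create the files and return the elapsed wall-clock seconds."""
    payload = os.urandom(size)  # one shared payload; we only care about metadata load
    start = time.monotonic()
    for i in range(count):
        subdir = os.path.join(root, f"d{i // 10_000:03d}")  # 10k files per dir
        os.makedirs(subdir, exist_ok=True)
        with open(os.path.join(subdir, f"f{i:07d}.bin"), "wb") as fh:
            fh.write(payload)
    return time.monotonic() - start

# Example: elapsed = make_files("/tmp/filetest", 100_000)
```

Copying the resulting tree (or scrubbing the pool it lives on) then gives you the kind of many-small-files stress test described above.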
oh yeah, and his math is kinda off… forgets a 0 here and there… I usually make 1 million files so I get a more accurate result… of course you can manage with 100k files… it will most likely put a good bit of strain on your system anyway
I doubt everybody would be affected… I also checked mine and couldn’t see anything… but it may just drown in the average when I view daily or weekly… I did try scanning through at max resolution, and there was no real big spike either; there was a slight increase in activity, but it’s always jumping up and down… so that doesn’t really mean much.
if it was just one file / object, at most something like 29 or 90 SNOs would be hit…
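That range presumably reflects the erasure coding: a download needs only the minimum number of pieces, while the segment is stored on more nodes than that. A hedged sketch, assuming 29-of-80 defaults (an assumption on my part; the post itself says 29 or 90):

```python
# How many distinct nodes one object's downloads can touch.
# K_NEEDED and N_STORED are assumed erasure-coding defaults,
# not confirmed anywhere in this thread.
K_NEEDED = 29   # assumed minimum pieces to reconstruct a segment
N_STORED = 80   # assumed pieces kept per segment after upload

def nodes_hit(segments: int) -> tuple:
    """(min, max) nodes a repeated download of an object with
    `segments` segments can involve: at least the pieces needed,
    at most every node holding a stored piece."""
    return segments * K_NEEDED, segments * N_STORED

print(nodes_hit(1))  # a single-segment object: between 29 and 80 nodes
```

Either way, only a small, fixed set of operators would see the spike, which fits the observation that most people checking their stats found nothing.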