The right test is the one performed on your hardware and in your location. Everything else is synthetic and not practical.
Of course, it's not a proper test, it was just a quick try on the go.
Uploaded to Tardigrade (Europe-West-1).
On my hardware it utilizes 50-100 Mbit (that's the internet channel I have) and averages 1.68 MiB/s for upload (834 MiB of an Ubuntu server image in 495.2 seconds) over WiFi 802.11n.
834/495.2029954 = 1.68415782567377
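The throughput arithmetic above, written out (figures are the ones quoted in the post; MiB here means 2^20 bytes):

```python
# Rough upload-throughput check using the figures quoted above.
uploaded_mib = 834          # Ubuntu server image size, MiB
elapsed_s = 495.2029954     # measured upload time, seconds

throughput = uploaded_mib / elapsed_s
print(f"{throughput:.2f} MiB/s")  # ≈ 1.68 MiB/s
```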
I just wanna say Filecoin is a scam…That is all.
miners are on strike…what a mess
WTF lol that's hilarious… you better watch out, Storj… we may revolt at any time (j/k)
lol this kinda says just how clever they think they are…
InterPlanetary File System (IPFS)
IPFS is actually separate from Filecoin (there was an early Storj implementation on IPFS at one point, though I'm not sure it was ever finished).
The mining method they use is interesting, but likely will lead to additional overhead.
I don't believe they perform network-level erasure coding, which means that for data retrieval the nodes' speed is going to matter significantly more as well (Storj can benefit from a satellite coordinating many nodes to service the request, while not requiring the client to store multiple replicas to achieve a certain performance threshold).
For some reason I thought Filecoin required locking up unused space with random data to prove it existed? I'm currently unable to verify this; is anyone able to confirm or deny this assumption?
If it is true, it will lead to terrible performance with SMR drives.
It's not required according to their own documentation; however, if one wants to mine Filecoin (FIL) without having client data, one can lock up the space.
But since their miners are on strike, I think we can assume this process might not survive for long…
you can read about it here.
I don't really know a ton about it, like I already said; I just read about how it worked until I understood that my server didn't stand a chance of participating in the Filecoin network, then I bailed on reading more.
It sounds very confusing, maybe even purposely so… maybe so that those who don't understand how it all works end up paying the price, which the elite profit from…
I mean, it seems more confusing than it might have to be; of course Storj also seems simple… and then years later I still don't understand it lol
Practically speaking it is needed to gain 'power' through committed sectors. There are simply not enough deals to support 600 PiB of committed sectors. SMR is not really an issue (you could use tape if you wanted) because the sealing process takes 6 or more hours for each 32 GiB sector.
As far as the ‘strike’ is concerned, the top miners (in China) flexed their muscle on Friday to demand a change in vesting policy which they got. (25% of their block rewards are immediately available now for use.)
@SGC Ok, that sector pledging was what I was thinking of; I'm not quite tracking when one should pledge a sector, but @stuberman has a point about it not being an issue in this case. I was thinking you essentially had to do the equivalent of
cat /dev/random > /my_data_drive
before joining, and any new data had to overwrite a portion of this 'reserved' block (but an overwrite shouldn't actually be much of an issue, since you can discard the underlying random data?). Skimming the white paper, it is interesting how they split up into storage and retrieval miners (presumably retrieval costs CPU cycles that a storage node might not be able to optimally provide?).
Their stated recommended hardware requirements sound fairly intense. I'm still not quite following how a client's data enters the network per se; the terminology makes it sound like they are targeting larger and colder files? It will be interesting to see the sort of I/O that Filecoin is able to obtain.
I think comparing FileCoin to Storj is a mistake, in that they really serve very different interests. For instance, a FC miner and client could be exclusive to each other and not allow any other users/devices to participate. The focus is on the protocol which can be used in ways we do not think of in traditional storage. A retrieval service would include retrieval miners and gateways as well as trusted storage miners that meet their requirements such as location, latency, SLAs, etc. You also have to understand that not only are the uses different but so are the motivations. Large storage miners are able to get block rewards in exchange for committing sectors to the network. The more sealed sectors in service the more rewards you are likely to earn. Retrieval miners are about low latency and high I/O and earn based on moving data (cached from storage miner sources).
Retrieval mining… are you trying to tell me that when somebody wants to download a file, they will get a specific type of miner to find, download and distribute it…
That doesn't sound IPFS-like; that sounds like it would add a lot of latency, and unrequired movement of data to more different spots online.
From what I can understand, segmenting storage and data retrieval seems highly unlikely to help; if we think of an interplanetary network, latency could be hours… so when a request is sent for data, one would only want the request to move directly without stops, maybe with some sort of location data taken into account when communicating within segments of the network…
And though that seems a bit less relevant on the current internet… still, having data storage and retrieval in different locations is… hopefully just a misunderstanding, because it doesn't make sense…
And why do you think it's a good idea to give people FIL coins for providing storage that isn't used? That just means the people with tons of storage right now and in the near future will get lots of the coins, of which there are 1 billion; thus if they are valued at $200 at launch, then they are saying their company is worth $200B, and now it's at $30, so $30B, and even that is ridiculous.
So in, let's say, 5 years when most of the FIL coins are mined, they will be worthless because there are billions of them…
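The implied-valuation arithmetic above, written out (the supply and prices are the figures quoted in the post, not verified):

```python
# Implied fully-diluted valuation from the figures quoted above
# (1 billion FIL total supply; prices are the post's numbers, unverified).
total_supply = 1_000_000_000  # FIL

for price_usd in (200, 30):
    implied_b = price_usd * total_supply / 1e9  # in billions of USD
    print(f"${price_usd}/FIL -> implied valuation ${implied_b:.0f}B")
```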
They just think they can make a lot of money by making a cryptocurrency which is too complex for people to understand…
Maybe if I understood the point of Filecoin, that would help me understand, but I think the point is a Ponzi scheme.
I am not advocating FileCoin or even defending them, I am sharing what I am learning by being in the middle of it (in addition to running three Storj nodes).
Filecoin storage miners are the backend. Retrieval miners are seen as a middle tier for fast retrieval, and you will see a front end of client-facing systems using things like browsers or APIs to access and pay for data. Each of these components is both independent and autonomous, without relying on any centralization. IPFS is simply a mechanism to allow nodes to find data that is closer when that is a desired factor. A single rig can run all of those functions (as currently implemented in the lotus build), or these functions can be run by different entities in different locations; there is even provision for offline deals where it is simply more cost effective to transport large HDDs than to stream the data over the Internet. Think of very different and new markets than traditional approaches.
I can go into the logic, at least as I understand it, around the costs and incentives, but I am not certain you really want to have that discussion. I would certainly take their model seriously rather than dismiss it, and there are several published papers on the economic model. I am not saying they will succeed; they may not. One could make the same argument around Bitcoin, as it seems to have nothing behind it but a complex idea, just as you could use the arguments around many failed cryptocurrencies.
Well, I don't understand it, and I don't have to, because I don't have enough hardware to even think about it…
The best I can muster at present might be 48 GB RAM, a GF 950 and dual Xeon 5630s, which I could overclock significantly; but even trying to participate in Filecoin with this seems pointless from what I can figure out, because I would need to max my server RAM to 288 GB, and then I would need something like a GF 2070, which I in theory couldn't use because I would be using it for enterprise purposes, which isn't allowed by Nvidia to my understanding… at least in datacenters and such…
The game just seemed very rigged when I read about it… I don't understand a tenth of its structure; to me it seems like it favors the large datacenter-style users, and the smaller users will basically just feed the datacenters' profits, to my understanding.
i wouldn’t mind trying it out, if i could… but i can’t
Ok, I made a Tardigrade account to burn some Storj tokens.
On the same hardware (i7 9700K @ 4.7 GHz), the same internet (Ethernet), and the same 700 MB file on an NVMe SSD, to the EU West satellite with Filezilla:
It took 90 seconds to upload, with some CPU spikes up to 37% (about 7.7 MB/s).
It took 37 seconds to download it (about 18.9 MB/s).
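The speeds above fall straight out of the file size and the measured times (the post rounds slightly differently; MB here means 10^6 bytes):

```python
# Throughput for the Filezilla test above (700 MB file, times from the post).
file_mb = 700
upload_s = 90
download_s = 37

print(f"upload:   {file_mb / upload_s:.1f} MB/s")    # ≈ 7.8 MB/s
print(f"download: {file_mb / download_s:.1f} MB/s")  # ≈ 18.9 MB/s
```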
Ok, I may be storing parts of my own file as an SNO, but that is fast enough for me.
That's certainly not bad; it does kinda make me wonder what the bottleneck is then…
Since Storj/Tardigrade is end-to-end encrypted and the data is sent from the customer to the storage nodes, the whole erasure coding thing has to happen on the customer side… and thus, to my understanding, when you are getting 7.7 MB/s you are actually sending something like 16-24 MB/s (using 8 MB/s as a round number; I forget what the erasure coding multiplier is)… and then of course you would likely need something like 10% overhead on top…
which would put it at 17.6-26.4 MB/s
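A rough sketch of that expansion math, assuming Reed-Solomon parameters of k = 29 pieces required out of n = 80 uploaded, the figures commonly cited for Storj's defaults (the real numbers, and any extra protocol overhead, may differ):

```python
# Back-of-envelope for client-side erasure-coding overhead.
# Assumption: Reed-Solomon with k=29 pieces needed out of n=80 uploaded,
# the numbers commonly cited for Storj's defaults (may differ in practice).
k, n = 29, 80
expansion = n / k  # each byte of file data becomes ~2.76 bytes on the wire

goodput_mb_s = 7.7  # observed upload speed from the test above
wire_mb_s = goodput_mb_s * expansion
print(f"expansion factor:  {expansion:.2f}")
print(f"actual bytes sent: ~{wire_mb_s:.1f} MB/s")
```

Under those assumptions the 7.7 MB/s of file data would mean roughly 21 MB/s actually leaving the client, in the middle of the range guessed above.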
And then of course comes the whole question of where you are sending some of the data… I know I rarely get more than 200-250 Mbit across the Atlantic, so that may be the limitation, or your internet caps out…
It does seem awfully close to what I get across the Atlantic, and then the upload is slower because of erasure coding…
To be fair to Filecoin, you should try them again later when their miners aren't striking.
While I was uploading, I noticed pauses and then resumes in the progress bar.
I would suppose that kind of thing is to be expected since it's distributed… there would have to be some sort of negotiation going on… but I dunno, I have very little clue about how it actually works.
It would be a bit bad if the network sends data across the Atlantic when it doesn't need to…
Speed-wise it's better to spread all the files close to me, but continuity-wise it's better to spread them across the globe in case of a local disaster.