Storj Competitor

Filecoin is about to be released in September. Any worries for Storj SNO users?

Cool animation also included :stuck_out_tongue:

As an SNO, I’ve gotten paid in a reliable way for all the data that I store. The data pieces are encrypted and the files themselves are divided across many nodes… So, I am protected from liabilities regarding possible illegal files stored on the network.

My understanding is that Filecoin does not automatically encrypt files…

IPFS is currently pre-1.0 release. I run 0.6 myself, which is the latest stable version as of this post. I’ll probably update to 0.7rc1 next week. There are several ongoing significant architectural challenges and changes going on at the moment. And while several platforms do indeed currently run on IPFS, I’m not sure it’s ready for everyday use by everyday humans.

We’ll see what happens with Filecoin. But my guess is it will be a while before all the issues are worked out to the point where the system works as flawlessly as Storj does right now.

4 Likes

Is Storj IPFS? If not, what’s the difference between the Storj architecture and IPFS?

No. Storj is not IPFS.

IPFS is something like a global network of local data stores. Each IPFS node keeps all its own data, but makes that data available to every other node. No data is distributed across the network, unless IPFS nodes “pin” data that they find throughout the IPFS network.

So…

  • in IPFS a node operator is the data owner/controller.
  • in Storj, a customer is the data owner/controller.

Filecoin is a payment system for allowing others to use your IPFS node.

3 Likes

Can Storj also implement this as a side project for direct data transfer, just like transfer.sh, without involving the satellite?

2 Likes

You’re getting paid to store illegal data. You’ll need a very good lawyer to convince a not-very-computer-savvy judge that the plan for a 9/11-style attack stored on your computer was harmless.

1 Like

No.

I’m getting paid to store customer data. The legality of that data is unknowable by me… If a customer decides to store illegal data, that’s entirely the customer’s problem. I cannot be held accountable for that data, since I store only small pieces of encrypted information… Even if I could decrypt the data pieces, I would only have a partial file… But since neither I nor any other SNO can even do that much, no SNO could possibly be liable for any illegal data uploaded by a customer, whether uploaded purposefully as a legal attack on the network or for actual illicit purposes.

It’s useful to remember that “illegal data” might be something such as the US Declaration of Independence… depending on the jurisdiction.
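A toy Python sketch of that idea — not Storj’s actual pipeline (the XOR “cipher” and the plain 4-way split are stand-ins for real client-side encryption and Reed-Solomon erasure coding):

```python
# Toy illustration: encrypt client-side, then split the ciphertext into
# pieces held by different nodes. A single piece reveals nothing readable.
import secrets

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # One-time-pad XOR stands in for real client-side encryption.
    return bytes(a ^ b for a, b in zip(data, key))

plaintext = b"customer file contents, never visible to any node"
key = secrets.token_bytes(len(plaintext))      # key stays with the customer
ciphertext = xor_encrypt(plaintext, key)

# Split the ciphertext into 4 pieces; each node stores only one.
n = 4
size = -(-len(ciphertext) // n)                # ceiling division
pieces = [ciphertext[i * size:(i + 1) * size] for i in range(n)]

one_piece = pieces[0]                          # all a single SNO ever holds
assert len(one_piece) < len(plaintext)         # only a partial file
assert xor_encrypt(ciphertext, key) == plaintext  # only the customer recovers it
```

Even with the key, a node holding `one_piece` could decrypt only a fragment; without the key, it holds uniformly random bytes.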

5 Likes

The cost to be a FileCoin miner is very high. The returns are uncertain at this point. Their market is uncertain at this point. Too early to tell.

2 Likes

At best you will be considered an accessory to a crime. You clearly did not take the necessary precautions to avoid the situation.
Otherwise Telegram, Skype, Twitter, WhatsApp and many others could be considered completely innocent, since their data are encrypted end to end.

Amazon AWS can be used exactly the same way.

I can encrypt an “illegal file”, subscribe to EC2, and upload it.

I’m fairly sure that Amazon is not sitting in court trying to figure out how to avoid being prosecuted as an accessory to someone uploading GPG/PGP-encrypted illegal content to its services.

In order to prosecute SNOs as “accessories to crimes”, the entire Internet needs to be made illegal, including DNS operators.

I’m not sure what you mean by this.

Actual E2E encryption is very rare in social media. Most services decrypt the traffic flow within their own servers; the service is called E2E but actually isn’t. In any case, the above is true regardless… The Internet can be and is used for both legal and illegal purposes. If a service has no method of decrypting the data, that service cannot be held liable for the content… and if a service does not edit any content, that service can’t even be liable for libel within the content.


Of course GPG is the program and PGP is the protocol… but typing and correct terminology sometimes don’t mix properly.

2 Likes

we’ve already seen over 240 miners from 5 continents preparing to participate - reaching sealing speeds of 1 TiB / second (15 PiB sealed within 3 days)!

was reading a bit… so their avg internet connection speed is 4 GBytes/s, and of course that’s only upload, meaning they would need either the same again, or at least 10% of that, in download…

so maybe an avg 6 Gbit internet connection per storage node on filecoin…

i mean Whiskey Tango Foxtrot, which timeline is this… :smiley:

and their avg upload was 241 MB/s per miner… sustained for 3 days…

either i’m not fully awake or these numbers seem quite too good to be true…
not that i want to hate on filecoin… i’m good with any storage project that can make use of hardware…

i suppose their numbers could be local network speeds, and they may have a good local following or something… those numbers cannot be global… i mean the avg person doesn’t have that kind of internet… hell, it’s 4 times faster than what i’ve got… and that’s not even accounting for overhead…

or bandwidth instability and variation… so one would need 6 to 8 times faster connections just to reach that and sustain it…

i must be overlooking something.
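For what it’s worth, the back-of-envelope arithmetic from the announcement (15 PiB sealed in 3 days by ~240 miners) works out like this in Python — my own calculation, not an official figure:

```python
# Back-of-envelope check of the announced figures (my own arithmetic).
PiB = 2**50

sealed_bytes = 15 * PiB            # "15 PiB sealed within 3 days"
seconds = 3 * 24 * 3600
miners = 240                       # "over 240 miners"

network_rate = sealed_bytes / seconds          # sustained, network-wide
per_miner = network_rate / miners              # sustained, per miner

print(f"network:   {network_rate / 2**30:.1f} GiB/s")  # 60.7 GiB/s
print(f"per miner: {per_miner / 2**20:.0f} MiB/s")     # 259 MiB/s
```

So the sustained network-wide average is roughly 61 GiB/s, or about 259 MiB/s per miner; the headline “1 TiB/s” is presumably a peak sealing rate, not a sustained one.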

There is a single difference that, in my opinion, puts it not even in the same ballpark as Storj: they use file replication as their redundancy mechanism. To get even close to the same kind of reliability as Storj, they would need about 5x the storage space that Storj uses; to get the same kind of performance through parallel transfers, they need 10x the storage space. To compensate, each miner has much higher requirements for specs, uptime and bandwidth; otherwise you risk file availability. Which is why there is such a low number of miners to begin with.
The way I see it, filecoin isn’t for home users sharing their extra space, but for beefy data centers sharing spare space. It’ll thus be far less decentralized.

They could make a massive change in reliability, availability and speed by simply changing to an erasure coding system like Storj has adopted. But until they do, I think filecoin is only good for fairly niche use cases.
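As a rough sketch of why erasure coding wins on overhead — illustrative Python, using Storj’s commonly cited k=29 / n=80 Reed-Solomon defaults as an assumption:

```python
# Rough expansion-factor comparison (illustrative, not official specs).

def replication_overhead(copies: int) -> float:
    # r full copies cost r-times the storage and survive r - 1 losses.
    return float(copies)

def erasure_overhead(k: int, n: int) -> float:
    # k-of-n erasure coding: any k of n pieces rebuild the file,
    # so expansion is n / k while surviving n - k lost pieces.
    return n / k

print(erasure_overhead(29, 80))    # ~2.76x expansion, survives 51 lost pieces
print(replication_overhead(5))     # 5.0x expansion, survives only 4 lost copies
```

Less than 3x expansion with far more tolerable failures versus 5x for replication is the gap being described above.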

I don’t blame them for not getting it completely right; Storj had to press the reset button twice before they did too. V2 was kind of a mess and had similar problems to filecoin. But not many projects are bold enough to simply start over, even though I hope others will. It can be really good to start anew with all the knowledge gained.

7 Likes

You really cannot infer bandwidth from this article. I am participating in the space race, and you can ‘pledge’ empty sectors, which lets you create a 32 GiB sector of garbage data with no download at all.

Most miners are in China and have large operations. The ‘race’ aspect is not moving data but sealing sectors, which involves 128 GB of RAM per sector and fast (NVMe Gen4 x4) disks.

1 Like

Prices for NVMe devices will soon start rising, like what happened with video cards.
I have a 1080 Ti from that time. I hope Storj starts to use remote computing so I can utilise this CUDA-core power.

okay, that does make the numbers more sensible… still seems a bit weird…

sounds kind of like Sia coin or something… i believe their approach also had something to do with data put on disk and then manipulated in some way to prove the capacity…

seemed like a lot of work for not much gain… storj’s disk usage approach was why i chose it… i see no point in filling drives with random data just to prove i’ve got space dedicated to the project… or whatever the point was… but i’m sure it’s also continually evolving…

and that was years ago, when i was considering Sia…
think i got hung up on the install and then ended up dropping it because of that…
and the ridiculous disk utilization

This is how IPFS pinning works. Since Filecoin uses IPFS as the underlying storage system, the lowest level of redundancy is the file. Files are stored in sharded form within each individual IPFS node. As far as I know, one cannot “pin” an individual shard (CID) of a file.

XKCD comic imported into IPFS:

https://explore.ipld.io/#/explore/QmdmQXB2mzChmMeKY47C43LxUdg1NDJ5MWcKMKxDu7RgQm/10%20-%20Pi%20Equals

Yes, I’m aware that it’s a limitation resulting from directly using IPFS. However, it would be possible to implement an intermediate layer that creates erasure-coded pieces and distributes those as files on IPFS.

I’ll have to see if I can get through their whitepaper before thinking about the excess computational and bandwidth overhead introduced by that pre-IPFS step.

I don’t see a direct link to the whitepaper in this thread… So:

I haven’t read that whitepaper yet. Thanks for the link; I might get to it later. But even for Storj, the erasure coding is done client-side prior to upload. The challenge is mostly in managing the metadata around where each piece is uploaded, which for Storj is done mainly on satellites, something filecoin is likely trying to avoid. You could have the customer manage their own metadata, but should they lose it, they’ve basically lost their data. Or you could upload the metadata separately to filecoin with basic replication, in case a recovery event is needed. That should still be far more efficient than doing replication for everything, and you can choose to replicate the metadata many times to make it resilient.

Some Filecoin Whitepaper Notes:

  • Section 2.1.2

For example, consider a simple scheme, where the Put protocol is designed such that each storage provider stores all of the data. In this scheme m = n and f = m − 1. Is it always f = m − 1? No, some schemes can be designed using erasure coding, where each storage provider stores a special portion of the data, such that x out of m storage providers are required to retrieve the data; in this case f = m − x.
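Plugging illustrative numbers into the whitepaper’s relation f = m − x (my own sketch, not from the paper):

```python
# Fault tolerance f = m - x: m providers each hold a portion of the data;
# any x of them suffice to retrieve it, so up to f = m - x may fail.

def max_faults(m: int, x: int) -> int:
    return m - x                   # f = m - x

# Full replication: every provider stores everything, so x = 1.
assert max_faults(m=10, x=1) == 9  # f = m - 1, the simple scheme above

# Erasure coding: e.g. any 4 of 10 coded portions rebuild the data.
assert max_faults(m=10, x=4) == 6
```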

  • 3.2 Proof-of-Replication

Definition 3.1. (Proof-of-Replication) A PoRep scheme enables an efficient prover P to convince a verifier V that P is storing a replica R, a physical independent copy of some data D, unique to P. A PoRep protocol is characterized by a tuple of polynomial-time algorithms:

(Setup, Prove, Verify)

  • 4.2 Data Structures

Pieces. A piece is some part of data that a client is storing in the DSN. For example, data can be deliberately divided into many pieces and each piece can be stored by a different set of Storage Miners.

Sectors. A sector is some disk space that a Storage Miner provides to the network. Miners store pieces from clients in their sectors and earn tokens for their services. In order to store pieces, Storage Miners must pledge their sectors to the network.

AllocationTable. The AllocTable is a data structure that keeps track of pieces and their assigned sectors. The AllocTable is updated at every block in the ledger and its Merkle root is stored in the latest block. In practice, the table is used to keep the state of the DSN, allowing for quick look-ups during proof verification. For more details, see Figure 5.

Orders. An order is a statement of intent to request or offer a service. Clients submit bid orders to the markets to request a service (resp. Storage Market for storing data and Retrieval Market for retrieving data) and Miners submit ask orders to offer a service. The order data structures are shown in Figure 10. The Market Protocols are detailed in Section 5.

Orderbook. Orderbooks are sets of orders. See the Storage Market orderbook in Section 5.2.2 and Retrieval Market orderbook in Section 5.3.2 for details.

Pledge. A pledge is a commitment to offer storage (specifically a sector) to the network. Storage Miners must submit their pledge to the ledger in order to start accepting orders in the Storage Market. A pledge consists of the size of the pledged sector and the collateral deposited by the Storage Miner (see Figure 5 for more details).

  • 4.3.2 Mining Cycle (for Storage Miners)


3. Seal: Storage Miners prepare the pieces for future proofs.

Storage Miners’ storage is divided in sectors, each sector contains pieces assigned to the miner. The Network keeps track of each Storage Miners’ sector via the allocation table. When a Storage Miner sector is filled, the sector is sealed. Sealing is a slow, sequential operation that transforms the data in a sector into a replica, a unique physical copy of the data that is associated to the public key of the Storage Miner. Sealing is a necessary operation during the Proof-of-Replication as described in Section 3.4.

  • 4.4 Guarantees and Requirements


• Achieving Confidentiality:

Clients that desire for their data to be stored privately, must encrypt their data before submitting them to the network.



Addendum Note:

I should add that the last quoted sentence will prevent me from running a Filecoin Storage Miner node. I will not subject myself to the whims of “clients” storing unencrypted data on my hardware. So, unless and until Filecoin deploys mandatory file encryption on the client side, I will not contribute to their network.

2 Likes