Storj Competitor

I’ll speak from the SNO side.

Storj’s hardware requirements are easy to exceed, but Filecoin’s basic requirements are very high: the baseline node spec calls for an 8-core Ryzen CPU, and some people have told me about Threadripper builds with 128 GB of RAM, tons of TB of storage (and Nvidia GPUs). They will face a huge shortage of operator servers with requirements like these.

What about the funds? How will they pay their operators? How will they attract users or investors to pay the operators? The cost of service is not the same on a Raspberry Pi as on a Threadripper. How will they cover that extra cost of service?

Sorry if my point misses something basic about Filecoin, but I’m not technically savvy.

Further Notes on Filecoin Whitepaper

  • 6.2 Filecoin Consensus

We propose a useful work consensus protocol, where the probability that the network elects a miner to create a new block (we refer to this as the voting power of the miner) is proportional to their storage currently in use in relation to the rest of the network.


It seems this is the road to centralization. Large nodes control block creation… In such a scenario, some government agency, like the NSA for example, could simply take over block generation.

NSA = 29 Petabytes a day
Filecoin = 13 Petabytes total

Storage space required on a single node to run a 51% Sybil attack against the blockchain:

13*0.51 Petabytes = 6.63 Petabytes

8 of these will do it.
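
Back-of-the-envelope, using the figures above (section 6.2’s rule just makes a miner’s win probability its share of total storage) — a rough sketch, not anything from the whitepaper itself:

# probability of creating the next block = miner's storage / total network storage
awk 'BEGIN {
  network = 13.0;      # total Filecoin storage in PB (figure above)
  attacker = 6.63;     # storage held by a single large operator
  printf "attacker expects ~%.0f%% of new blocks\n", 100 * attacker / network;
}'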

  • Filecoin block generation is based on the “storage power” of a given node.
  • It seems that the current client does not encrypt uploaded data.
  • NSA pulled more than 4 times the required storage space for a 51% attack on block generation every single day a couple of years ago.

Nope… not running a node, and not using it either.

But I do like IPFS and run my own gateway… which allows me to host my own data easily as well as allow others to access IPFS data that is not ever stored on my node.
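
For anyone curious, running your own gateway is not much work; on a stock go-ipfs install it is roughly this (ports and paths are the defaults):

ipfs init      # one-time: create the local repo
ipfs daemon    # start the node; the HTTP gateway listens on http://127.0.0.1:8080 by default
# any CID can then be fetched through it, e.g. http://127.0.0.1:8080/ipfs/<CID>,
# whether or not the data is pinned locally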

Of course, I might be wrong


Without a doubt it sounds interesting; I may give Filecoin a whirl. If their node requirements are high, that may mean each node has a bigger potential for profit… unless of course the system is drowned in work for no reason at all, aside from maybe bad programming…

Which was why I really liked Storj when initially getting into this during V2: it was so light and only used just what it needed… it didn’t pre-allocate and write out the space on the drives, and it didn’t use much CPU.

Sure, this does also mean one should expect to be one node amongst many, because it can run on anything and doesn’t really disrupt much… I know there have been some issues with disk IOPS / database demands and whatnot… but there will always be stuff like that in a “new” project.

Storj has a good approach. I cannot speak much to the concepts behind it… but a light, functional, and stable little program for years sure isn’t a bad start. Also, the momentous willingness to change it takes to basically kill off V2 instead of just keeping on patching is what often makes greatness…

But while running a light program, I also have a sizable server sitting basically idle most of the time…
so it would make sense to do a bit more with it. I think on average, if everything is running correctly, Storj’s resource demands are 2% of what the system should be able to do…


No one knows the value of Filecoin yet; it will be market-based pricing, and they have not launched yet.

We still have the code to run an IPFS gateway that stores data on Tardigrade. We found that, because locations aren’t cached (the Kademlia discovery process), IPFS was just kind of slow and something we ultimately didn’t want to promote.

If you are interested, check this out though:


A few thoughts on how I understand the Storj-IPFS connector architecture…

From the blog post:

What this means is that decentralized apps using IPFS without pinning to a decentralized storage backend aren’t all that decentralized.

Any time a file is uploaded to an IPFS node, there’s no guarantee the file will persist longer than a few minutes (unless self-hosted on reliable hardware, or backed by a centralized cloud provider).

Pinning is the closest thing to “uploading” on IPFS. There’s no “upload” process; there’s only a get (“download”) in IPFS from remote nodes.

ipfs add file1

This chunks the file into smaller units and places the contents in a local datastore. A second IPFS node can download that file via ipfs get <CID>. That node can then pin the CID and become a provider of that resource.

IPFS is configurable for different levels of performance. The default server configuration is probably going to be quite slow. The IPFS network itself is also rather slow. This is where multiple pinning services come in handy.
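
So the end-to-end flow between two nodes is roughly this (the <CID> placeholder stands for whatever hash ipfs add prints):

# node A: chunk the file and write the blocks into the local datastore; prints the CID
ipfs add file1
# node B: fetch the content by CID from whichever peers currently provide it
ipfs get <CID>
# node B: pin it so it survives garbage collection and node B advertises itself as a provider
ipfs pin add <CID>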

The two options for reliably serving data over IPFS are:

  1. Run your own gateway.
  2. Subscribe to a pinning service – which is “downloading” from your local node and pinning it.
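
For option 2, newer go-ipfs releases expose remote pinning directly on the CLI; roughly like this, if I remember the syntax right (the service name, endpoint and key are placeholders from whichever provider you sign up with):

# register the pinning service once
ipfs pin remote service add mypinner https://pins.example.com/api/v1 <API_KEY>
# ask it to pin a CID; the service fetches the data over IPFS and keeps it pinned
ipfs pin remote add --service=mypinner <CID>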

Looking at the diagram on github:

https://github.com/storj-thirdparty/connector-ipfs/wiki#flow-diagram

it’s unclear how Storj would help decentralize the IPFS backend, since all the data within Storj would need to be added to the local IPFS node retrieving it before the data would be available over IPFS to other IPFS nodes. In the pictured architecture, IPFS would still require pinning services to function… with multiple pinning services “re-decentralizing” the data pulled out of the decentralized Storj nodes.

I’m not sure how to fix the architecture to make it work as intended, except to use it backwards… with IPFS as the backend data provider to Storj… which is then used as the backend to the application.

Caveat: I cannot claim to be an IPFS expert – I’m much more of a tinkerer.

Yes he did by using encryption.

You clearly have no clue what “end to end” encryption is

Telegram, Skype and WhatsApp use actual end-to-end encryption. They can’t decrypt any of the data exchanged. But they also manage the keys used to encrypt messages, so there isn’t necessarily anything stopping them from inserting a snooping key into a conversation. But during normal operation, they don’t have the keys to decrypt data that goes over their network. And as far as I’m aware, they are also not held liable for any criminal content shared over those end-to-end encrypted connections.

Legally when you provide a platform to host third party content, you are only liable if you are aware of illegal uses and don’t do enough to stop it. With data being encrypted, there isn’t really a way for SNOs to be aware, though since this is all quite new, I don’t know whether storage nodes would be considered a platform in this context. It perhaps helps that Storj will remove data if they have been made aware of illegal uses. But who knows whether that is enough to protect SNOs from liability.


I haven’t looked at the architecture for some of these “E2E” services in a while… However, until this year, most audio/video chat services required decryption of the data stream at the server for routing purposes.

Additional:

Beware of the penguin as well:

I believe the three I mentioned have E2E on everything. They mostly rely on direct connections as well, though this may be different in group calls. The example you show with the ECB penguin is pretty basic cryptography stuff. The implementations used by these parties use modern ciphers and don’t suffer from such issues. However, the Telegram approach seems to be its own proprietary thing, and complex rather than solid and known. I’m sure it works, but I prefer a good implementation of industry standards when it comes to encryption. Regardless, the type of encryption wasn’t really the topic of discussion here.

Yup:

https://nvd.nist.gov/vuln/detail/CVE-2020-11500

Zoom Client for Meetings through 4.6.9 uses the ECB mode of AES for video and audio encryption. Within a meeting, all participants use a single 128-bit key.
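
That ECB choice is exactly the penguin problem above: identical plaintext blocks come out as identical ciphertext blocks, so structure leaks straight through. You can see it with nothing more than openssl (throwaway key, two identical 16-byte blocks of input):

printf 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' > blocks.bin
# note the first two 16-byte ciphertext blocks in the hex dump are identical
openssl enc -aes-128-ecb -K 00112233445566778899aabbccddeeff -in blocks.bin | xxd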


Jitsi… however… is actually E2E…


Don’t get me started about Zoom. They had a LOT of issues. The best I can say about them is that they are working on it. They seem to be taking it seriously now, which is good, but they weren’t doing so great to begin with.

They acquired Keybase as part of their move to better security/privacy. I’m not sure I’m happy about that move though. I would have preferred it if Keybase had remained independent.


I guess you haven’t heard about Kim Dotcom and his case.

As far as I remember, Kim Dotcom’s service had one purpose… to upload and distribute copyrighted material.

This is entirely different than a decentralized storage backend with built-in encryption… that doesn’t even have a general purpose user interface… and has no built-in method for retrieval of the uploaded data by anyone except the original entity uploading it.


This is completely different. The feds are going after Dotcom for criminal copyright infringement, money laundering, racketeering and wire fraud. The only one of these items that could be tangentially compared to the storj service is copyright infringement. But the feds allege that Dotcom and his associates a) had knowledge that the platform was being used for infringement (a SNO/Storj cannot know this) and b) encouraged infringement by paying users for having files that were high volume downloads, thus encouraging the use of the service for infringing files.


Actually, MEGA encouraged the use of files with high download volume. It’s coincidental that those are infringing.

You forgot the first part of my statement

It could very well be coincidental.


It was a service to upload and distribute “files”. What files are uploaded is, and was, out of their control, as they provide just a “service”.

No different from any other cloud storage “service” provider out there, regardless of it being decentralized or not. “Files”, or as you put it “copyrighted material”, could also be uploaded to Storj, and we SNOs wouldn’t know about it either… I guess the difference being that the satellite would be responsible for managing the “copyrighted material” if it was discovered, as SNOs don’t know what “files” are uploaded.

The satellite does not know the content of the pieces either. All encryption and piece spreading happens on the uplink side, i.e. the customer’s. No file leaves the customer’s side after the uplink, only encrypted pieces, distributed across the globe.
Even the metadata (which nodes, the sequence of pieces and their sizes, etc.) is encrypted. So no one except the customer can decrypt and reassemble the pieces back into the file.

I am just wondering, as that quote made me unsure: can anyone with the correct link and encryption password decrypt and recompile the file?
Is there some kind of ‘sharing’ functionality implemented? Like a customer being able to create one-time passwords to allow access to specific files?