Next Town Hall - Ask your Questions Here January 2020

Hi Team,

My internet is quite fast (symmetric 20 Gbps), but my storage capacity is limited in comparison. However, there are also people setting up their storage nodes on home systems with much slower internet but more storage per Mbit of uplink available.

Will there be (or is there perhaps already) some kind of technology in place that profiles end customers' access to files and refragments the storage network, so that nodes with faster internet access receive files that are requested more frequently, while nodes with relatively more storage per Mbit of uplink receive more files whose average access frequency is lower?

Are we going to get a GUI client that uses the uplink, rather than an existing S3 GUI with the gateway?

Will FileZilla be adding support for v3, etc.? If so, when?

But it already works like this: the fastest nodes win the race.

Hey Vadim,

if you were replying to my question: no, it's not working like this.

Simply explained, as far as I know: currently, the nodes that respond quickest receive the uploaded data. However, this can lead to a situation where a node with a fast internet connection gathers many files that are never downloaded.

So refragmenting (i.e., moving high-demand files to faster nodes) would resolve this situation and also increase the overall download speed for customers.
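For context on what "winning the race" means, here is a minimal sketch of upload piece placement as I understand it. The 29-of-80 erasure coding and long-tail cancellation follow figures Storj has discussed publicly; the exact 110 upload-start count, the node latencies, and the selection logic are my own illustrative assumptions:

```python
import random

# Illustrative piece counts: uploads are started to extra nodes and
# the slowest transfers are cancelled once enough pieces land. The
# 110/80 split here is an assumption for illustration.
UPLOADS_STARTED = 110
PIECES_KEPT = 80

def place_pieces(candidate_latencies_ms):
    """Simulate the upload race: start transfers to UPLOADS_STARTED
    nodes and keep the PIECES_KEPT that finish first (long-tail
    cancellation). Returns the winning latencies."""
    racers = random.sample(candidate_latencies_ms, UPLOADS_STARTED)
    return sorted(racers)[:PIECES_KEPT]

# 500 hypothetical nodes with made-up latencies from 20 ms to 2 s
nodes = [random.uniform(20, 2000) for _ in range(500)]
winners = place_pieces(nodes)
print(f"slowest piece that still won: {max(winners):.0f} ms")
```

The point of the sketch is that placement is decided entirely at upload time by who finishes first; nothing in the race considers how often the file will later be downloaded, which is exactly the gap the refragmenting idea is about.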


This topic is "Questions for the Town Hall"; please leave it to the Storj team to reply to these questions :slight_smile:


How do you plan to handle situations where a file stored on Storj is so popular that the 80 nodes storing it do not have enough bandwidth?

You claim Tardigrade is "20% Faster" than your competition. My real-world results differ (see my post "Amazon S3 vs. Tardigrade"): for example, Amazon is 100x faster at deleting one 10 GB file than Tardigrade.
Transfers are also much slower; e.g., Amazon is 14x faster when uploading one 10 GB file on a server with decent specs.
Are we going live with this performance?


I think this is one of the most important questions now:

And a related topic:

Storj Labs has to focus on refining the details across the whole project. We can't launch the network to production until the product works flawlessly, because otherwise we will lose clients right at the start.


My thoughts recently were about the same. If I have, say, 7 TB on my node but 80% of the data just sits there without egress, what do I get for it? Is there already something in place to compensate for maintaining large amounts of data over a long period without a customer accessing it?

You earn $1.50 per TB stored per month.
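For what it's worth, applying that rate to the 7 TB example above is straightforward. This ignores egress payouts entirely, since the question is about data nobody downloads:

```python
# Monthly payout for stored-but-idle data at the $1.50/TB/month
# storage rate quoted above. Egress payouts are ignored, since
# the question is about data that never gets downloaded.
STORAGE_RATE = 1.50  # USD per TB per month

stored_tb = 7  # the 7 TB node from the question above
print(f"${stored_tb * STORAGE_RATE:.2f} per month")  # -> $10.50 per month
```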

The few enthusiasts here keep finding bugs, e.g. in the uplink and gateway, preventing their use in several well-known products like Duplicati or Nextcloud.
We are also experiencing a lot of performance issues.

How do you expect to launch a product with that many (IMHO serious) bugs at the end of January?
Personally, I wouldn't even think about a product launch until all common use cases are tested and confirmed working, and confirmed to work as well as S3.

Otherwise I am genuinely afraid that the product launch will be a disaster for Storj when normal users experience failing uploads, slow performance, the Storj gateway being unusable in Duplicati/Nextcloud, and so on. I doubt those users would come back to Storj anytime soon.
And all that because the product launch was rushed at the end, after a long period of testing. It would be a shame if that hurt the success of Storj, which did so well last year by carefully testing its product.


The production release date has not yet been determined.

Thanks. So I have a potential of $10-ish a month in passive income. Better than nothing, I suppose.

Hi Team,

Currently the upload performance is poor. This is mainly because customers have to upload all of the erasure-coded pieces for a file, leading to roughly 240% "overhead".

Instead, it might be possible for customers to upload only 100% of their data (or with 1-3 extra erasure-coded pieces for a little initial redundancy) and let the standard repair mechanism rebuild the rest of the pieces. This would lead to quicker uploads and less traffic on the customer side.
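To put rough numbers on that overhead: with a 29-of-80 erasure coding scheme (the figures discussed around Storj v3) and uploads started to extra nodes, the client-side expansion can be bounded as follows. This is a back-of-the-envelope sketch, not official figures, and the 110 upload-start count is my assumption:

```python
# Back-of-the-envelope upload expansion for a 29-of-80 erasure
# coding scheme. Illustrative only, not official figures.
K = 29                 # pieces needed to reconstruct a segment
N = 80                 # pieces kept on the network
UPLOADS_STARTED = 110  # assumed transfers begun; slowest 30 cancelled

piece_fraction = 1 / K                           # each piece is 1/29 of the segment
stored = N * piece_fraction                      # data at rest on the network
worst_case_sent = UPLOADS_STARTED * piece_fraction  # if no transfer cancels early

print(f"stored expansion:         {stored:.0%}")           # ~276%
print(f"worst-case client upload: {worst_case_sent:.0%}")  # ~379%
```

The ~340% upload figure mentioned later in the thread falls between those two bounds, depending on how quickly the long-tail transfers get cancelled.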

Is this possible, or am I missing something? Is such a technology planned?


Not really possible without then having to charge customers. The repair mechanism requires SNOs to upload the repair data, for which they are paid $10/TB. The only way to really do this would be for the customer to either upload all the data themselves or pay extra per TB uploaded so that the network handles all of the subsequent shard distribution.
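As a rough sketch of where that cost would land: suppose the client uploads only the 29 data pieces (100% of the file) and the network repairs the remaining 51. Assuming that repair workers need to fetch the equivalent of one file copy to reconstruct a segment, that the $10/TB rate above applies to the pieces nodes serve for repair, and that node ingress remains unpaid, the numbers look like this. All of those assumptions are mine, not official:

```python
# Hypothetical cost of letting the repair system build the redundancy
# instead of the client uploading all 80 pieces. Back-of-the-envelope
# only; rates and assumptions as stated above, not official pricing.
K, N = 29, 80
REPAIR_EGRESS_RATE = 10.0  # USD/TB assumed paid to nodes serving repair traffic

file_tb = 1.0
repair_fetch_tb = file_tb              # workers fetch ~K pieces = one file copy
new_pieces_tb = file_tb * (N - K) / K  # ~1.76 TB of pieces to create

print(f"unpaid repair ingress to nodes: {new_pieces_tb:.2f} TB")
print(f"repair egress paid to nodes:    ${repair_fetch_tb * REPAIR_EGRESS_RATE:.2f} per TB ingested")
```

Under those assumptions, roughly $10 of repair payouts would have to be recovered for every TB a customer ingests this way, which is the pricing-competitiveness problem discussed below.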


You're both right: we did announce that we're nearing production, but what Heunland may have meant is that we haven't put an exact calendar date out yet. That will also be coming soon. I know that, as my mom would say, "soon isn't a time on the clock", but we really are doing everything possible to get this across the line, and I promise there will be a press release, tweetstorm, announcement, and all the rest as soon as possible.


Thanks, @jocelyn!
I also agree with @heunland. I was just pointing out that while it's true we don't have an exact date, we do have a deadline instead :slight_smile:


Hello Cmdrd.

You say it's impossible, but you already presented the solution? :smiley:

Of course the team would then have to figure out how to charge for this, or maybe just "eat those extra costs". Or it could simply be a general rule that node operators aren't paid for this egress. Or whatever other idea someone comes up with.

Another possibility would be to let the end customer choose whether they want to be charged for this or upload all the pieces themselves. Giving the end customer more options would also be a nice thing.

However: beating S3 on upload speed while requiring 340% of the upload bandwidth is simply impossible.

I never said it was impossible; I just said it wasn't possible without charging the customer for ingress data to cover the repair costs. Based on the current pricing model, while that would bring the upload bandwidth in line with S3, Storj would then lose pricing competitiveness by having to charge for ingress, which no other provider does. A balance could be struck here, but not with the current pricing model.
