Can Storj lower costs and increase usage?

You are absolutely right.

Storj was originally pitched as using currently unused disk space provided by SNOs. The cost of this disk space for the SNO should be so low as to be negligible. This means Storj should have 2 unique selling points:

  1. Fast upload/download speed due to its distributed nature (i.e. parallel downloads from multiple local SNOs).

  2. A very low cost of operation as Storj does not have to buy drives, maintain storage arrays, etc.

Storj is only capitalising on (1). I think everyone on this forum expected that a storage system provided by home users spare disk space would be massively cheaper than any existing provider. I cannot believe that $6/Tb is required to operate the Storj satellite infrastructure. That is more than Backblaze charge and they have to pay for their entire infrastructure!

I would very much like the Storj project to succeed, but it feels at the moment that they are not using their main USP.

2 Likes

Oh, something new, I don't remember seeing an object count fee, so they did implement that in the end. I was thinking they'd do it for transactions though. I wonder if we'll get a small slice of that eventually.

@russman $6/TB*
But Backblaze is not the same as Amazon S3 storage.

No, they rather implemented that to prevent people from storing millions of tiny files that end up being stored on the satellite rather than on the storage nodes.

3 Likes

" In 2017, we reported that there was there was 2.7 Zettabytes (ZB) of data in our digital universe. PwC believe that this reached 4.4 ZB in 2019, but more staggeringly predict that this will grow to 44ZB of data this year. In fact, IDC predicts the world’s data will grow to 175 ZB by 2025! This sounds a ridiculously high amount,"

Storj is just a few PB. Using LTT as an example company, they have over 2 PB of data.

Storj is something like 0.00000001% of the internet

Back in 2008, S3 was said to have 73EB of data stored. Storj needs to up its game.

Maybe even find some of the customers that were affected by the OVH fire, or maybe even OVH themselves.

I know it's not the goal of Storj, but long-term backups could be something to look at. A quality 16 TB drive costs £350 and the running cost is less than £25 a year if you already have a system running, so if it made £16 a month the ROI is about 2 years, not counting network use.
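
As a rough sanity check on that arithmetic (a sketch using the £350 / £25 / £16 figures above; real earnings obviously vary with usage and egress):

```python
# Rough break-even estimate for a single drive, using the figures above.
drive_cost = 350.0             # GBP, one-off cost of a 16 TB drive
running_cost_per_year = 25.0   # GBP, assuming the host system is already running
income_per_month = 16.0        # GBP, hypothetical income, ignoring network use

net_per_year = income_per_month * 12 - running_cost_per_year  # ~167 GBP/year
break_even_years = drive_cost / net_per_year                   # ~2.1 years
print(f"Break-even after roughly {break_even_years:.1f} years")
```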

1 Like

Yeah, that's exactly what I was thinking - without limiting ingress in some fashion, there's little stopping somebody from storing single-byte files just to waste resources. At $10 per (short) trillion files, it's quite a bargain (D)DoS. I don't know if there's a minimum file size; I think there's a minimum size for billing, is there? But even at 1kB files it's still a billion files for cheap. Delete, start again, almost free waste of resources. I just thought they'd do request count fees, like Amazon.
How is this billed anyway? Does deleting and uploading an infinite number of files for a month, with 1M files being stored at any one point in time, constitute 1M files/month, i.e. $2.2? It should be per request/transaction; this looks like a vulnerability.
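
To make that question concrete, here is a small sketch comparing the two billing models being discussed, using the $2.2 per million objects per month figure quoted above (the per-request fee and the billing details are assumptions for illustration, not Storj's actual pricing):

```python
# Sketch of the billing question above: per-object-month vs per-request fees.
# All figures are assumptions for illustration, not Storj's actual pricing.
fee_per_million_object_months = 2.20     # USD, figure quoted in this thread
hypothetical_fee_per_million_puts = 5.00 # USD, made-up "per request" style fee

objects_stored_at_any_time = 1_000_000
uploads_per_month = 1_000_000_000        # constantly deleting and re-uploading

per_object_month_bill = objects_stored_at_any_time / 1e6 * fee_per_million_object_months
per_request_bill = uploads_per_month / 1e6 * hypothetical_fee_per_million_puts

print(f"object-month billing: ${per_object_month_bill:,.2f}")  # ~$2.20
print(f"per-request billing:  ${per_request_bill:,.2f}")       # ~$5,000.00
```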

Let’s make a thread to start counting down zeroes? :grin:
We’re already counting nines!

I'm wondering if the real problem with not charging for objects would be that you could theoretically store data in 0-byte files, just by putting the data in file names (encoded in base64, for instance).
That would probably be incredibly inefficient performance-wise, but it would be free…
Hence the fee per object.

That’s my 2cts anyway :slight_smile:
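
For what it's worth, the trick being described would look something like this (purely an illustration of the loophole a per-object fee closes; the key layout here is made up):

```python
import base64

# Smuggling payload bytes into object keys instead of object contents.
# Each "file" would be zero bytes; the data lives entirely in the key names.
payload = b"some data we want to store for free"
chunk_size = 64  # keep each encoded key reasonably short

keys = [
    base64.urlsafe_b64encode(payload[i:i + chunk_size]).decode()
    for i in range(0, len(payload), chunk_size)
]
# Uploading one empty object per key would let you recreate the payload from a
# bucket listing, which is exactly the kind of metadata abuse an object fee deters.
print(keys)
```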

3 Likes

Everything you said here is exactly what I’ve been wondering myself… Seems like there ought to be an explanation for this. How could it NOT be massively cheaper? What am I missing?

When you consider there are 3 main distributed storage systems based around crypto, Storj is the more expensive of the 3. I am not saying that it needs to be a race to the lowest price, as that is just silly, as proven by Sia, but I do think it may be better to find ways to cut costs that still let hosts earn the same or more. Obviously the main issue is that data needs to be sent via the satellites, which makes the satellites deal with the full network bandwidth both in and out: for each 1TB that is stored, a satellite has to both download and upload the full 1TB, using 2TB of bandwidth, and then the same again when it's retrieved by the user. This is not very efficient.

That is not correct. Data gets sent between node and customer directly. No satellite involved.

3 Likes

It's something like that, but not nearly that sneaky. It's just that any piece needs metadata stored on the satellite, and the object fee covers that. Files that are small enough are even stored inline, if the metadata would be larger than the actual piece data.
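
A minimal sketch of that inline-vs-remote idea (the threshold value and names are assumptions for illustration only, not Storj's actual parameters):

```python
# Illustration of the inline-segment idea described above: if an object is so
# small that erasure-coded pieces plus pointers would cost more than the data
# itself, the data can simply live in the satellite's metadata record.
INLINE_THRESHOLD_BYTES = 4 * 1024  # assumed value, for illustration only

def store_segment(data: bytes) -> dict:
    if len(data) <= INLINE_THRESHOLD_BYTES:
        # the metadata record carries the bytes directly
        return {"type": "inline", "data": data}
    # otherwise the satellite only keeps pointers to pieces on storage nodes
    return {"type": "remote", "pieces": ["<node/piece pointers here>"]}

print(store_segment(b"tiny file")["type"])       # inline
print(store_segment(b"x" * 1_000_000)["type"])   # remote
```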

I've seen a few posts say that was the case, as very small files can end up stored on the satellite. I think Storj could do with making a better diagram of how things actually work right now, as so much of what I have read is outdated or wrong.

Like this?

The technical part is described in detail in https://storj.io/whitepaper/

FYI - Storj never sent customers' data via the satellite (now) or the bridge (in v2), so this information is not outdated, it's simply wrong.

1 Like

May be drifting off topic at this point, but is there some sort of marketing strategy?
I would think some small portion of fees should go into marketing. Possibly that amount of fee would be tied to how much unutilized storage space exists in the network?
As nodes fill to bursting the marketing drops off. Lose a little to fees but make it back up in volume for utilizing more of your node? I’m a new operator so I don’t have context for how much space is unused on older nodes.
Apologies if my thinking shows an ignorance for how things are set up.

I suspect Storj Labs will wait until the technical side has matured significantly and Tardigrade is feature-comparable to the alternatives before going all out on the marketing.
Seems like a sensible approach to me :slight_smile:

3 Likes

Yes and no, in my humble opinion.
I'm no marketing expert, but someone once said that if you're proud of the product you're putting on the market, you probably waited too long. I guess what that means is that one should find the right spot between being too early (with a buggy, unstable, or useless product) and being too late with the best product you'll ever see, which took 15 years to develop… and the market is now fully occupied by competitors.

And I think that’s what StorjLabs tried to achieve by going to production with a product that works and is robust, but still needs many features and integrations with other products/companies.

I think now would be a good time to speed up commercial actions.
But of course, that’s easier said than done :slight_smile:

My thoughts:

1.) I can't see this scaling for SNOs. Essentially the system demands that SNOs have full failover capacity (100% uptime and no faults on the connection). Hard drive costs, power costs, etc. are on the SNO. My node has 10 cents income. 2/3 will be withheld.
2.) Diagnostics are demanded and use sysadmin time.
3.) Power lies with storj. Payment, withholding of funds.
4.) Initial set up is easy, but then one needs to crawl in the forum on what to do. Maybe fun as a project but not a business proposition.
5.) This feels a lot like Adsense in the beginning. Work work work for a couple of pennies. Money can be withdrawn not by Google but by Storj on an algo change.
6.) Just logically, building up a data center that is competitive with Google or AWS can only be done by shifting costs to SNOs. Infrastructure costs each SNO has to carry, plus the risk. There are no bulk order discounts, etc etc etc.
7.) SNOs need to be seen for what they are, small providers, and it can't be demanded that we run fully up systems without clear feedback or any way to check how the system looks from Storj's view. The audit/suspension/online scores are too little information.
8.) Log files are too cryptic
9.) One satellite went from 100% to 0 overnight, others are at 100 and others at 90-something. All satellites are pingable. I am putting the whole system in the DMZ.
10.) As a beginner I have no information on what's going on. I checked the drive: 0 bad blocks. Error logs show a ping problem with the satellite (that was on the 14th) and two EOF problems.
11.) Google and Amazon have staff that deal, I assume, with 100s and 1000s of nodes. Here we have an operator for far fewer, with more risk, etc.

If I want to scale this to something serious, income would need to be 1000 dollars a month.

From where I am sitting that’s a 10000 times increase.

How would this happen?

Even if the system is trustless, the winners have to be SNOs and STORJ, otherwise this is just another Google-type centralisation scheme, maybe unintended but the ultimate result. Inflation by those that want, for whatever reason, to offer storage at a loss (as with Google or Wikipedia, where someone always wants to offer free content, or content at a loss).

Happy to be convinced this is not the result.

Storj looked like the best-administered project. Filecoin seems dead, judging from their forums.

Hello @Koesters,
Welcome to the forum!

You can see the requirements here: Step 1. Understand Prerequisites - Storj Docs. Your node should be online at least 99.5% of the time; 100% is desired, but not required.
The held amount is calculated according to the ToS, and you can see details here: Node Operator Terms & Conditions or in the documentation: How does held back amount work? - Storj Docs
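
For illustration, the schedule in those docs works roughly like this sketch (treat the exact percentages, month ranges, and the return of the held amount as things to verify in the linked pages; this is not the authoritative rule set):

```python
# Rough sketch of the held-amount schedule from the linked docs
# (illustrative only; verify the exact numbers in the documentation).
def held_fraction(node_age_months: int) -> float:
    if node_age_months <= 3:
        return 0.75   # early months: most earnings held back
    if node_age_months <= 6:
        return 0.50
    if node_age_months <= 9:
        return 0.25
    return 0.0        # later months: nothing new is held
    # (per the docs, part of the accumulated held amount is returned
    #  after the node has been on the network long enough)

for month in (1, 5, 8, 12):
    print(month, held_fraction(month))
```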

I don't get it. Could you elaborate? There are no algorithms like in mining, so I just don't get it.

These metrics are the only ones that matter, and you can see them on the dashboard. We don't have any others at the moment. If you want to see the details, like the coefficients used from storj/docs/blueprints/node-selection.md at 6a553ec9c5df94681c88ada43a6b9ae18464b8ee · storj/storj · GitHub, you can use the storagenode's API: Storage node dashboard API (v1.3.3)
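
For example, something along these lines should work against the node's local dashboard API (the default port and the endpoint paths follow the linked forum post; adjust them if your setup differs):

```python
import json
import urllib.request

# Query the local storagenode dashboard API (dashboard usually listens on 14002).
# Endpoint paths follow the "Storage node dashboard API" forum post linked above.
BASE = "http://localhost:14002/api/sno"

with urllib.request.urlopen(f"{BASE}/") as resp:
    node = json.load(resp)
print("node ID:", node.get("nodeID"))
print("disk space:", node.get("diskSpace"))

with urllib.request.urlopen(f"{BASE}/satellites") as resp:
    satellites = json.load(resp)
print("satellite summary keys:", list(satellites.keys()))
```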

Please do not use the DMZ, it's dangerous: you open every single port of your device to the whole internet. Use port forwarding instead.
If your node doesn't have audits for a month, every downtime will affect your online score much more, because your node simply doesn't have enough audits.
To make sure that your node is online, we recommend using uptimerobot.com
Your online score should recover over the next 30 days online. Every downtime will reset this counter and require another 30 days online. You can read more here: How is the online score calculated? - Storj Docs
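
As a very rough illustration only (the real calculation uses per-satellite audit windows, as the linked doc explains), the 30-day recovery behaves like a rolling window over recent audits:

```python
from collections import deque

# Simplified sketch: treat the online score as the fraction of audits the node
# answered within a trailing ~30-day window, so an offline stretch only ages
# out of the score after roughly another 30 days online.
WINDOW_AUDITS = 30 * 24  # pretend one audit per hour for 30 days (illustrative)

history = deque(maxlen=WINDOW_AUDITS)

def record_audit(was_online: bool) -> float:
    history.append(1 if was_online else 0)
    return sum(history) / len(history)

score = 0.0
for _ in range(WINDOW_AUDITS):   # 30 days fully online
    score = record_audit(True)
for _ in range(12):              # then a 12-hour outage
    score = record_audit(False)
print(f"score after a short outage: {score:.3f}")
for _ in range(WINDOW_AUDITS):   # another 30 days online pushes it back up
    score = record_audit(True)
print(f"score after 30 more days online: {score:.3f}")
```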

1 Like

I have been doing the "Internet" since Web 1. I had servers with millions of pageviews. First there was no payment. Then Google added Adsense and there was a way to be paid. This was fine for a while. I made 10000 a month, then Google started constant algorithm updates, then more and more and more content was added etc etc etc, and what you could earn was pennies. You had Matt Cutts (Matt Cutts: Gadgets, Google, and SEO), then at Google, lecturing on what is good content (for Google etc).

When you discussed these issues, there were those who were happy that they could buy a coffee once in a while with Adsense income.

And then came Wikipedia with the world offering content for free etc etc.

Now most websites are littered with ads or are behind paywalls like it used to be with real newspapers.

In a global marketspace there are very few niches, and chances are that somewhere on the other side of the globe someone gives stuff away for social currency. Hence soon there will be 3 data scientists working at Google and Amazon; the rest either do something real or deliver parcels for the gig economy.

So I generally like the decentralised stuff, but in the beginning, price dumping as per the OP can't be where one can compete.

I think you need to market this as some form of pseudo-Elite stuff, like Apple: mediocre hardware and a BSD clone for exorbitant prices. Slap something shiny on it and make it idiot-proof.

Like with OS:
MS: pricey but not exorbitant
Apple: for those that have money to throw in the wind for social currency.

Unix etc for free.

At the moment Storj is in the MS space, competing with the market leaders.

Free is not an option, so make it an Elite product.

1 Like

To make sure that your node is online, we recommend using uptimerobot.com

Make it more idiot-proof on the SNO side too. I have two webservers, 8 miners, one science company, etc. I popped up one Storj node as a test and will give Chia coin a test now.

Likely others will do a quick test with a 2 TB drive; I have a 14 TB for Chia and a 12 TB for data to test on Ocean.

The current guide lets you install a node quite easily; what comes when you run one, one can only guess.

I think the cryptocurrencies can work decentralised, as they just exist in a person's mind space and are nothing real besides a shared story, like fiat. Interaction with real hardware and real things will hit normal economic limitations.

But I am open.

I also bought an Apple once; while I like the BSD underneath, it is essentially a colossal waste of money for something you can easily use Linux for. I have now also found out you can't even use it as a CPU miner, as the power that comes in discharges the battery with more than one core even when plugged in, and that only with the display off. CUDA doesn't work anymore, as Apple made some weird decision with NVIDIA, and now they move to ARM. So, cheap processors that the Instagrammers of the world will pay dearly for.

Well, I still hope that someone fights the mega-corporations, hence I mine Monero, as they actively fight centralisation. So what I am trying to express is that I support all these web3 things, but out of experience I am sceptical.

Or sell it as a revolutionary product for unhinged capitalism critics like me. The USA ditched their antitrust laws, hence we are now in this weird world of mega-monopolies.

In a decentralised project SNO’s should be part of the marketing and sales perspective.

One should look at real world projects that work like COOPs.
https://www.google.com/search?q=COOP

A coop is often more expensive. They have to compete in the real world. While they have not been able to totally stick with the original intention, they still exist though.