Read the title. It says everything.
Is 16TB the maximum amount of storage a node can have? If that's the case, do I need to create new nodes if I want to host more storage?
There is no maximum, though you should realize it's not likely you will fill up larger nodes on a single connection any time soon. If you have multiple locations with multiple IPs, it's best to run multiple nodes spread out. Otherwise, just start with what you have.
I've got 800Mbit up and 2500Mbit down. Probably sufficient.
The speed doesn't matter. Your node only gets assigned its share of what is uploaded to the network. This is not like mining; you're hosting actual customer data. There are no data flows large enough to fill up your 16TB or larger node at the moment (or maybe even ever).
If I can't build my small business on this thing, then what's the point of these nodes? Is it worth worrying about 99.5% uptime for a $10 reward? )))
My income so far is about $260, with about as much in escrow. You're not going to get crazy rich off Storj. The point is making a little bit of money on disks that are otherwise gathering dust. Take it or leave it.
I agree. Storj is just for unused disk space, not for making a huge profit. If you want to make a big profit, go do something else. Storj is not the right choice if that is your goal.
… or run a satellite: decide the pricing and the business angle/feature/edge you will advertise to potential uplink customers, in order to obtain the cash needed to pay SNOs the way you see appropriate for your satellite's network and said angle/feature/edge.
There are many ways.
Until then, just enjoy the possibility of getting some cash for your idle space.
This thread highlights something I have thought about before: there seems to be a mismatch between the articulated vision of just using spare storage and the reality of having dedicated storage nodes. The uptime requirement pretty much rules out basic users simply sharing their spare hard drive space and forces node operators to stand up dedicated storage nodes…
Not necessarily. Use cases:
- Home server (always online)
- Mining farm (always online)
- Game server (always online)
- NAS (always online)
- Unused servers and space in the private cloud (always online)
The only truly dedicated nodes would be tiny PCs with low power consumption, such as a Raspberry Pi 3/4.
And even then you can set one up as a renewable-energy-powered Storj farm.
Storj is developing a fault-tolerant distributed data storage system, much like DARPA developed a fault-tolerant network and came up with TCP/IP to be able to send launch codes to the ICBMs; we all know how TCP/IP has developed since. The Storj project is very interesting, and the engineers working on it are working toward this goal. Now the question is what the current market is looking for. Most internet traffic is streaming (Netflix, YouTube, etc.) or all kinds of replay channels, where high-speed local distribution is very important but data availability matters far less, since a master node is always available to seed the relay nodes. If Storj marketing understands the current market, they could be very successful; if they just want to compete with AWS cold storage, the future is not very bright.
Where can I download the application?
You can find the answer here: Where to download the application
We are all here to make a profit. I can't understand why Storj tries to recruit more operators if it's not even possible to fill up existing storage. People will start to leave after some time if they see it isn't worth it. It will be like with V2: a lot left, because there was no profit.
Because of redundancy and durability.
Please look at it from the customer's point of view. Do you want to lose your family album from the last 20 years because some storage node operator abruptly exited the network? I think not.
So the Storj network is designed to be durable and fast. To achieve that, we need a lot of storage nodes across the globe. With many storage nodes, all transfers can be performed in parallel, giving you blazing-fast transfers that make maximum use of your channel's bandwidth to the internet. With Reed-Solomon encoding you get durability, but at the cost of an expansion factor on your data. In the case of 29/80, we need 80 nodes to reach the desired durability, with any 29 of the 80 pieces being enough to recover the file. To start an upload, we need at least 130 nodes (the other 50 are dropped when the first 80 finish their work).
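To make the 29/80/130 numbers above concrete, here is a small sketch of the arithmetic they imply. The constant names are my own for illustration; the values come straight from the post.

```python
K = 29             # pieces needed to reconstruct a file (Reed-Solomon data pieces)
N = 80             # pieces kept on the network long-term
UPLOAD_TARGET = 130  # uploads started in parallel; the slowest are cancelled

def expansion_factor(k: int, n: int) -> float:
    """Stored bytes divided by original bytes for a k-of-n erasure code."""
    return n / k

# Each file is stored with ~2.76x overhead...
print(f"Expansion factor: {expansion_factor(K, N):.2f}x")
# ...but up to 51 of the 80 nodes can vanish before the file is unrecoverable,
print(f"Pieces that can be lost before data loss: {N - K}")
# and the 50 slowest uploads are simply dropped once 80 have finished.
print(f"Slow uploads cancelled per segment: {UPLOAD_TARGET - N}")
```

This is also why the network wants many operators: starting 130 parallel uploads and keeping the fastest 80 only works if there are plenty of independent nodes to choose from.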
I suggest reading the blog to understand the math.