NASA needs 250PB of storage by 2025 and cannot afford AWS

Could be a great Tardigrade opportunity: https://www.theregister.co.uk/2020/03/19/nasa_cloud_data_migration_mess/

6 Likes

Great find! It would fit the not-so-distant future of "humans" as a multi-planetary species. Just as the tardigrade "bug" can survive in space, so can Storj. Landing a big deal like this with NASA would lift things off in no time!
Storage node operators are ready :)

BTW, can you imagine having storage nodes on Mars? :) Picture that in your mind's eye.

Are they? The network is far from having 250PB of available space ^^'
We don't even have the 32PB they need right now.

That'd be an awesome client though :slight_smile:

The network is relatively small because it hasn't had much demand yet. Clients with real data only came recently.
The announcement states:
"…Our previous network reached more than 150 petabytes of data stored across more than 100,000 Nodes…" - which are quite some numbers.

100K nodes and 150PB means everyone stored 1.5TB for Storj on average.

Nowadays a 3-4TB HDD is cheap, and that is probably the minimum that every one of those SNOs has in their machine.

Let's say everyone dedicates one full 3TB HDD to Storj. With that many nodes it quickly scales out to roughly 300PB. Of course this calculation is overly simple and should be weighted, but it shows the potential. Some operators will use 8TB drives, and even the new 18-20TB HDDs when those reach the general public later this year.
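A quick back-of-the-envelope sketch of that scaling argument in Python (the node count is from the announcement quoted above; the per-node disk size is just the assumption in this post, not an official figure):

```python
# Rough network capacity estimate; figures are assumptions from this
# thread, not official Storj statistics.
nodes = 100_000        # node count from the quoted announcement (previous network)
tb_per_node = 3        # assume one dedicated 3TB HDD per node

total_pb = nodes * tb_per_node / 1_000
print(f"Projected capacity: {total_pb:.0f} PB")  # -> 300 PB
```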

In its current state, scaling 5,807 nodes out to 250PB sounds like a far, far away milestone and I wouldn't make such a bold statement, but at the same time I didn't realise how many nodes were active.

If there is demand, the network will grow in no time.

If this were the case, NASA would be the only one using the entire network. It may or may not be worth it, because they could just store that data for who knows how long, and then everyone's node would be full of data that NASA slammed everyone with and might not pull back out for years.

1 Like

Of course the network needs many clients that ideally move data back and forth, but having, among others, a big, well-known enterprise client that is happy with the service is very important. Clients like that attract many other clients, big and small, and create a snowball effect.

Also, while I have a 16TB array, only 8TB is given to the node. I will increase it if the node starts running out of space (expanding a virtual disk is easier than shrinking it). The server also has some empty drive slots.

Also, they don't have to put all of the data in at once. They can start small and see how the network responds. It's not an all-or-nothing situation. Also, I'm sure many SNOs will add more storage once their existing node starts filling up.

1 Like

But the article clearly states that NASA is worried about the egress costs with AWS.
If I were CEO of Storj I would start talking to NASA right now, offering some free accounts for immediate testing and a huge discount.
Even if they put only a fraction of that data into Tardigrade, a customer like that would be gold.

4 Likes

It would require roughly 2.7 times that in fact, as data is expanded ~2.7x in V3 for redundancy.
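To put a number on that (the ~2.7x factor is the figure quoted in this post; the actual factor depends on the erasure-coding settings):

```python
# Raw capacity needed for NASA's 250PB at the ~2.7x expansion factor
# mentioned above (actual factor depends on erasure-coding parameters).
logical_pb = 250
expansion_factor = 2.7

raw_pb = logical_pb * expansion_factor
print(f"Raw capacity needed: {raw_pb:.0f} PB")  # -> 675 PB
```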

I don't know how fast the network would grow if there were demand: if Storj Labs asked their SNOs to expand storage space, I personally wouldn't, as I have no more spare disks.
Unless a big bonus comes with it, to pay for a new disk…

I did the same on my first node bro. Cheers!

Exactly, they could put in their hot data only or their cold data only. Hot data is more profitable for SNOs, but $1.50 per TB per month of cold storage on a disk that is just lying around is always something, even considering the extra electricity that drive uses.
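A rough sketch of that cold-storage trade-off (the $1.50/TB/month rate is from this post; drive size, power draw, and electricity price are assumed figures, so adjust for your own setup):

```python
# Monthly cold-storage income vs. electricity for one otherwise idle drive.
# The $1.50/TB/month rate is quoted above; the rest are assumptions.
stored_tb = 4.0        # assume a mostly full 4TB drive
rate_per_tb = 1.5      # USD per TB stored per month
drive_watts = 6.0      # assumed average HDD power draw
kwh_price = 0.15       # assumed electricity price in USD/kWh

income = stored_tb * rate_per_tb
power_cost = drive_watts / 1000 * 24 * 30 * kwh_price
print(f"Income: ${income:.2f}/month vs. electricity: ${power_cost:.2f}/month")
# -> Income: $6.00/month vs. electricity: $0.65/month
```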

Still, it isn't a crazy amount of disk space put to work per SNO with a single node. It ends up in the standard 4TB disk range; accounting for formatting, let's say it would be closer to a 5TB disk.

SNOs running multiple nodes will probably add even more storage with ease, but there must be demand first.

Yeah, my server has 16 empty drive bays, but they will stay empty until the node uses the space that is already there. After that I could figure something out as well.

I don't think it is crazy at all. V3 has only just started, and NASA is talking about the year 2025.
I believe a lot of SNOs would increase storage when utilisation increases. I, for example, could quintuple the storage space I have dedicated to Storj instantly if necessary. And if that is not enough, I could still add more drives, although this would require a few hardware adjustments.

It is all a matter of whether it will pay off. But with a big customer and constant egress flows, it might.

Yes, besides the profit perspective there is also an ideological one. For example, many bitcoin miners (not talking about hyperscalers) were mining bitcoin to help the network and be part of something much bigger, not even taking profits. Unfortunately bitcoin hasn't managed to become a widely used payment system on the scale of VISA or MasterCard. What bitcoin is trying to achieve for payments, Storj together with IPFS may achieve for storage.

1 Like

Same here: a 5.5TB node, and I've got 12TB waiting for the right time to upgrade (limiting downtime), though I may add 24-30TB because of how ZFS works. I kind of hate just putting a drive or array online with no redundancy, since we will be punished if we fail a graceful exit.

With my setup, I would hate to have only cold storage… that's never going to be profitable, at least in my case…

2 Likes

@jocelyn
Maybe it's a great time to propose an official collaboration between NASA and Storj Labs. Of course we still need to grow our network capacity, but I think this would be a huge opportunity for Storj and all SNOs.

3 Likes

@jocelyn Yes, at least put Storj on their radars for consideration… satellites, in NASA's case :joy:

3 Likes

Don't forget to make it public as well, for example on Twitter. That will attract additional potential customers.

4 Likes