Source: Q3 2019 Town Hall; some answers may have changed since the time of the original answer. Please feel free to chime in.
From Ben Golub, Executive Chairman & Interim CEO
When we did our v2 network we grew to 150 petabytes pretty quickly. When we decided to go to v3 we wanted to take a different approach; and rather than focusing on raw petabytes, we wanted to make sure that we were building an enterprise-grade network where supply and demand stayed in balance.
The way that we deal with data on our network is that when a file is uploaded, it’s encrypted; it’s split into 64-megabyte segments; and those segments are split up using Reed-Solomon. So every segment is divided into roughly 80 pieces (of which any 30 can be used to reconstitute it), and each of those 80 pieces goes to a different drive on the network.
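The segmentation described above can be sketched with some simple arithmetic. This is an illustration only, not Storj code; the function name `piece_layout` and the assumption that the 64 MB / 80-piece figures are exact are mine:

```python
import math

SEGMENT_MB = 64             # files are split into 64-megabyte segments
PIECES_PER_SEGMENT = 80     # Reed-Solomon pieces generated per segment
PIECES_TO_RECONSTRUCT = 30  # any 30 pieces can rebuild a segment

def piece_layout(file_size_mb: float) -> tuple[int, int]:
    """Return (segments, total pieces) for a file of the given size."""
    segments = math.ceil(file_size_mb / SEGMENT_MB)
    return segments, segments * PIECES_PER_SEGMENT

# A 200 MB file becomes 4 segments and 320 pieces,
# each piece landing on a different drive on the network.
segments, pieces = piece_layout(200)
```

So even a modest file fans out across hundreds of drives, which is what makes the any-30-of-80 reconstruction property useful.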
That means that our expansion factor is about 2.7. So for every gigabyte that a customer stores with us, we end up storing about 2.7 gigabytes on the network.
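The 2.7 figure follows directly from the 80/30 coding parameters. A back-of-the-envelope check (the helper name `network_gb` is my own, not Storj's accounting):

```python
PIECES = 80     # pieces produced per segment
THRESHOLD = 30  # minimum pieces needed to reconstitute a segment

# Each piece is 1/30 of the segment's size and 80 pieces are stored,
# so the network holds 80/30, roughly 2.67x, the customer's data.
expansion_factor = PIECES / THRESHOLD

def network_gb(customer_gb: float) -> float:
    """Gigabytes actually stored on the network for a given customer payload."""
    return customer_gb * expansion_factor
```

That 80/30 = 2.67 ratio is what gets rounded up to "about 2.7" in the answer above.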
Right now [September 2019] we’re at about a petabyte of customer data on the network, so we’re close to three petabytes in terms of total usage on the network. That number is going to scale up fairly rapidly. We’ve been keeping it moderate until we got to enterprise grade. As you saw earlier, we’re now at enterprise grade, so expect those numbers to grow really quickly as new users come on board and as we push more test data out to the network.