I have a 1TB node that is 70% full. What is the better approach to extend it?

I have a 1TB node that is 70% full after 2 months of activity. Assuming it will soon be at 100% of capacity, how should I proceed? I can think of several options and would like to run them past the more experienced members of the forum:

1.- Increase the capacity of the node to 4TB.
2.- Mount a second 4TB node and keep the first 1TB node.
3.- Wait for the node to fill up, then wait for it to generate enough profit to fund either of the two previous options.

Your opinions and suggestions will be very helpful.

If you don't care about the higher power/storage ratio, this would be my choice.


I would favour either option 2 or 3.
Whether you go for 3 depends on whether you have the capital available to get the new storage.

From a purely technical standpoint, option 2 is probably the easiest and the least likely to cause problems.


If you have free space on your disk, I would extend the capacity of the node.

If your disk is full, you have 3 options:

  1. Leave it as it is
  2. Migrate to a bigger disk
  3. Add another node

Of the latter two options, I would prefer migrating your node, since you do not have to go through vetting again.


I would use both disks with separate nodes, assuming you're talking about different HDDs. You have two options: the first is to start the new node on the larger HDD. That's simple. But you could get a bit more data faster by migrating the existing node to the bigger disk and starting a new one on the smaller one.

Either way I wouldn’t wait longer to start the new node. Ideally you want it to be vetted before the old one is full.


if you've got room on the disk the node is on, then you can simply change the STORAGE variable in the docker run command…

by far the easiest way to expand an existing node…
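For the standard docker setup, that means removing and recreating the container with a new STORAGE value; the node's data stays on the bind-mounted disk. A sketch only, with placeholder wallet, address, paths, and sizes (check the current Storj docs for the exact flags your version expects):

```shell
# Resize the allocation of an existing node in place.
# Every value below is a placeholder; STORAGE is the only change that matters.
docker stop -t 300 storagenode
docker rm storagenode
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967 \
  -e WALLET="0x0000000000000000000000000000000000000000" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="node.example.com:28967" \
  -e STORAGE="3.5TB" \
  --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/storj/data,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest
```

Because the data and identity live on the bind mounts, removing the container loses nothing; the node simply comes back up advertising the new allocation.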


I forgot to mention that options 1 and 2 both imply buying a new HDD, since the current node runs on a single 1TB drive.

Thank you very much for your opinions xD

Then don’t buy a 4TB disk, buy 12-16TB right away
4TB is not enough … I went through 500GB -> 1TB -> 2TB -> 4TB and full node …

This is exactly the approach I took and as far as I can tell it worked really well!

My first node (10TB HDD) was ~70-80% full in April, and we were seeing a huge amount of data coming in. I started trending my daily ingress to predict how many days I had left until it was full, and calculated it was ~15-30 days out. I had just received a new 12TB HDD and had originally planned to migrate the data to it to increase the node's capacity, but ultimately decided it would be easier to start a second node on the new HDD. I set it up and got it running in the last week of April.

It took about a whole month to get vetted on all 6 satellites; Stefan Benten was by far the slowest, since it isn't putting out much data these days. The new node was vetted on the other satellites in about 10-20 days.

During May we saw a decline in data ingress compared to April, and there were some heavy delete days in there as well. So my first node is still not full, but with the traffic pickup that started ~06 Jun, it is on track to be full sometime in the next 7 days. With my new node now fully vetted and already storing 1.0 to 1.5TB, I believe I'm in good shape to keep the data coming in for the foreseeable future.
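The trend estimate described above is just remaining space divided by average daily ingress; as a rough sketch (all figures below are made up for illustration, not from any real node):

```shell
# Days-until-full estimate from capacity, current usage, and average ingress.
capacity_gb=10000          # e.g. a 10TB node
used_gb=7500               # ~75% full
avg_ingress_gb_per_day=125 # averaged over the last couple of weeks

days_left=$(( (capacity_gb - used_gb) / avg_ingress_gb_per_day ))
echo "roughly ${days_left} days until full"
# prints: roughly 20 days until full
```

Ingress varies a lot month to month (as the rest of this post shows), so re-running this with a fresh average every week or so keeps the estimate honest.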

Two items to note from my experience, compared to the OP's situation:

  1. They have a lot less data to migrate than I did when I considered that option, so it would take quite a bit less time than it would have taken me.
  2. They probably don't have enough space remaining on the 1TB HDD in node 1 to keep the total ingress traffic from being interrupted while a second node goes through the vetting process.

Therefore, I think my recommendation and best (although not easiest) option would be for the OP to migrate Node 1 to the higher capacity HDD now, so that they can continue to receive the high ingress traffic we’re seeing on the network. Then use the original 1TB HDD to spin up a second node (if/when they feel it’s necessary) to start the vetting process.

Having said that, the easiest option by far would be to just leave Node 1 alone, and let it fill up and then start a second node with the new, larger HDD and accept the probable short-term decline in ingress traffic when Node 1 is full and Node 2 is being vetted.


yeah, a 1TB node is a breeze to migrate… shouldn't take more than 1-2 days depending on how well the system performs…

without a doubt I would move it to a larger drive. vetting takes a good while, and the node isn't truly running at full speed until after 9 months, when you get 100% payout… that's a long time to wait for a new node to spin up to speed…

so yeah, I agree with dragon: spin up the new node on the old 1TB drive…
and remember, STAY AWAY FROM SMR drives… they are hell.
personally I would also go for 7200rpm… they are not that much more expensive and much better suited to the task of running a storagenode…


Yes, I am looking at buying a WD Purple CMR HDD to use for the node.


no matter what HDD you buy, make sure to do at least a bit of research on it, to make sure its annual failure rate isn't horrible… usually not a huge problem, but better safe than sorry…

Backblaze is a pretty good source for that kind of info, but it's not always easy to find the exact models, so you might end up checking reviews of the drives instead…

is kinda nice for checking general performance.

and then of course look at the warranty. A long warranty and a high annual workload rating (TB/year) usually mean it's a high-end drive, and the prices often aren't all that different, at least for SATA. The last drive I was looking at was about 25-30% more than the cheapest in its capacity: 7200rpm instead of 5400rpm, a 4-year warranty, and something like a 540-720TB/year workload rating against 210TB/year for the cheaper one.
I forget the exact numbers, but my point is that it's often well worth the consideration… and if memory serves, the WD Purple is also among the top tier.

another consideration is SAS vs SATA… I'm currently running SATA drives on a SAS backplane without issues… but apparently that isn't always the case, and mixing SAS and SATA on the same backplane/controller can give you a whole world of trouble… :smiley: I've been learning that lesson the hard way…

so if you plan to use a SAS backplane, keep in mind it's designed for SAS, not SATA… it will run SATA, but not always well, and you will lack a lot of options for diagnosing problems if you are running SATA…

however, SATA is usually quite a bit cheaper… so
I'm on SATA, and I'm pulling two SAS drives out into an array of their own… because my array is running weird… maybe because I mixed SAS and SATA… even though it's the same brand and model of drives…

and another thing is 4Kn vs 512e. 4Kn only has about 3% overhead (capacity lost to how the data is laid out on the platter), while 512e and 512n both use at least 10% in my experience; I haven't been able to get 512e drives down to only 3% overhead… which sucks, because I could really use the extra 7% capacity.
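If you want to check what a drive reports, comparing physical and logical sector sizes tells you whether it is 4Kn, 512e, or 512n (the output naturally depends on your hardware; the error redirect is just so the command is safe to paste anywhere):

```shell
# PHY-SEC / LOG-SEC: 4096/4096 = 4Kn, 4096/512 = 512e, 512/512 = 512n
lsblk -d -o NAME,PHY-SEC,LOG-SEC 2>/dev/null || true
```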

Then don’t buy a 4TB disk, buy 12-16TB right away
4TB is not enough … I went through 500GB -> 1TB -> 2TB -> 4TB and full node …

I am running a Synology NAS with 4 bays. Following the discussion in the other thread about RAID vs. non-RAID, I now have 4 independent disks in it: 4x8TB. What shall I say: they are now full and I am having trouble increasing the disk size. It is technically not possible to pull out an 8TB disk, mount it via USB, and migrate the data to a bigger disk that fills the empty slot.
Why did I not buy a 16TB disk in the first place? Because that costs a lot of money at once, and this was a project that had just started; I had to figure out whether it was worth all the work and trouble.
So the system kept growing 8TB-wise. Right now 8TB is the best €/TB value.

So now? Appetite comes with eating. My NAS is full and I am sad not to have started with 1x16TB and then grown 16TB-wise (4x8 would be the same as 2x16…)

Nowadays the price of the Seagate Exos X16 (16TB) has dropped dramatically; you can get it for 27€/TB, which makes it the new cheapest way to buy storage.

Go make your own calculations, but a 16TB node generates roughly 4x more profit than a 4TB node, and at a constant €/TB it also costs roughly 4x as much. So it does not matter what size you buy: the payoff comes out to about the same time period. But once the system is paid off, the more you host the better.
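The "same payoff period regardless of size" point follows directly from cost and earnings both scaling with capacity; a toy calculation (the drive price is the figure from this post, the per-TB earnings number is purely hypothetical since real Storj income depends on ingress/egress):

```shell
# Toy payoff comparison: cost scales with TB, earnings scale with TB,
# so the payoff period is roughly size-independent.
price_per_tb=27        # EUR/TB, the Exos X16 figure mentioned above
earn_per_tb_month=2    # EUR per stored TB per month, made-up placeholder

for tb in 4 16; do
  cost=$(( tb * price_per_tb ))
  monthly=$(( tb * earn_per_tb_month ))
  echo "${tb}TB: ${cost} EUR, pays off in ~$(( cost / monthly )) months"
done
# both sizes come out to ~13 months with these numbers
```

In practice a bigger node takes longer to fill, so it earns below its full rate for longer; the symmetry above only holds once both nodes are full.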

So yes, I advise everyone to buy 16TB drives in the first place.
Growing afterwards is easier too: just stick in new 16TB drives, no need to migrate or swap.

and yet you advise people to buy a 512e drive, which will waste an additional 7% of space on supporting the old 512-byte sector size. if people have the hardware for it, they should use 4Kn.

but maybe that's why it seems so cheap, and then personally I don't buy Seagate…

On further investigation, the Exos X16 does seem like a drive optimized for running something like CIFS, but then again it is a hyperscale datacenter HDD… and the consensus seems to be that it's not SMR.
it doesn't perform great in iSCSI RAID setups… which kinda makes me think there might be a bit more to the story… maybe Seagate solved the SMR latency issue and is trying to bank on that…

anyways, personally I like buying tech that I can verify is good through what others review it for…
Backblaze seemingly has 60 Exos X16 drives in their current lineup and they seem to be performing quite well… so that's a plus… but still only about 5500 drive days recorded… a good start, but problems can also come later… performance degradation / wear

from what I can gather it doesn't look too bad actually :smiley:

was a bust though… no entries, which seems kinda odd; maybe it's because it's a datacenter drive.

but that also means we have zero information about how the drive would perform on its own…
which is odd…

well, I have to admit, it does look like a really good deal…
and the drive, like any new technology, has some advantages and some disadvantages…
of course the price might be due to Seagate wanting to keep their leading position in the market, or because they know something we don't…

it seems very weird that one of the best and highest-capacity enterprise drives, even though it's SATA, is so cheap. the 5-year warranty makes it feel a bit safer… but that still doesn't prevent the entire market from changing and the current technology becoming outdated in a couple of years…

the new SSDs have up to 60TB of space… soon it might not be a real contest between HDD and SSD anymore… but tape storage is still around… so it's not easy to say… of course 5.25" HDDs and the like are totally gone… and SSD chips kinda have the same advantages that 3.5" once had over 5.25"

but yeah, it looks like a great deal… of course if one got the 4Kn SAS version it might be double the price… or something ridiculous like that.

thanks for your comments. I myself was very surprised to learn that 16TB is cheaper per TB than the "standard" 4 and 8TB sizes.

I agree with you that in approx. 3 years (I have read) SSD prices will drop to HDD levels. Their advantage is huge: faster, more reliable, the URE problem in RAID gone (the rate is 10^17!), smaller. First the developers have to figure out which interface they are heading for. SATA is a dead end, NVMe also a dead end. They will probably go with the PCIe fast lane.

I have also read that volatile and non-volatile flash will merge, so there will be no more difference between RAM and storage chips.

The Exos X16 16TB SATA version starts at 450€ in Germany. The SAS version is at 540€ but not compatible with many NAS systems (yet).

5 years of warranty is a real safety point for me, since it is a lot of money. On the other hand, a 16TB Storj node killed and lost in a hardware crash would suck. But SATA slots in a NAS are not cheap either.

well, I doubt tiered storage is going to die… right now we've got the CPU caches L1, L2, L3, which get progressively slower if memory serves… then we go to RAM, which is something like 10 times slower… most likely more, but not really important for this point.
then we move to storage, which has its own RAM/cache, which writes to SSD, which (when SSD capacity runs out) spills to HDD… and if you are working in the high end, you go to tape after that…

these technologies will change, shrink, and morph… but I doubt they will end up being all uniform… the 3.5" HDD will most likely soon be a datacenter-only freak drive that is cheap only because consumers once demanded storage like that, and it was the solution that won out.

and now the manufacturers want the last bit of profit out of their machines because the ecosystem is trashed… consumers today want SSDs, so that's where the development will be… and honestly the idea that it's just a tiny chip is quite practical… you basically print it, solder the chips together or directly onto the motherboard… it fits mobile devices, and in larger block setups it will also fit datacenters… there can be little doubt that's the way it's going to go… unless something more magnificent comes along in the near future.

but it would make a kind of ironic sense that the future of storage is on a chip… DOH
who would have thunk

and if you can make one chip and just copy it… well, that would be a mass-production dream…
which might be what we will see… also, they need something to do with all those old CPU manufacturing plants as their technology gets outdated… most likely perfectly good for making SSDs

Hello dear SNOs,

I have been running two nodes for a few months and both are now full with the massive flow of data coming in in June. I will upgrade a node next week with a 4x capacity factor, and I am wondering if there are any tips and tricks for migrating the HDD with minimal downtime, to avoid loss of reputation. For instance, I am considering running rsync cycles between mounted volumes while the node is up. Has anyone tried this method?

rsync it and then rsync it some more… when a pass completes in 10-30 minutes, or the shortest time you can get it down to, shut down the node, run a final rsync, and change the docker run command to the new location.
then power it up again… I think I did my last one in 10-20 minutes, maybe 30,
after spending a couple of days running rsyncs.
if anyone has a better way, I'd also be interested in hearing it…

but I know this works…

I think I ran rsync like 5-6 times.

and keep in mind you also want it to delete files that get deleted on the live storagenode over time (rsync's --delete); I usually add that parameter only on the last run.




they should really add to the docs that it's beneficial to run rsync a few times, just so it's really up to date… I think my second run took 10 hours… that would have been 10 hours of downtime, where I managed with 30 minutes or so.