I want to buy a 16 TB HDD, is it a good idea?

Hello everyone,

I am operating a storage node on a Raspberry Pi 4B (8 GB RAM) with an external Seagate USB HDD of about 2 TB.

I have gracefully exited from all satellites except one. I'll be ready to rejoin the network as soon as I have enough money to buy a new HDD.

I heard somewhere that a storage node operator should ideally use one HDD per node and only one node per IP. Is that still correct nowadays?

I found that in Moscow (Russia) the maximum HDD size available is 16 TB (which is still a good deal). I would have to use this HDD for at least 6 months before any exit (if I decide to do one). I figured I had better use the maximum capacity I can get, rather than just 2 TB, which yields only a small profit.

The dashboard of my node is here: http://konard.ddns.net:14002/

While I save money for the new HDD, I expanded the allocated capacity a few days ago, and I see it fills up very slowly… Is that OK? How long would it take to fill up 16 TB?

The general consensus around here is not to buy expensive new hardware.


Hello Konard!

Why did you do a graceful exit when you want to continue with Storj? Because your HDD was full? A full HDD is good in terms of profit for the space you provide. So I would try to rejoin the network, but I don't know how that works. If you are only working with one satellite you get less ingress data, so your node fills up very slowly (compared to other nodes), because you are connected to just one satellite.

It is recommended to run one node per HDD, so that when the HDD dies you only lose the data stored on that HDD. As I remember, it is not allowed to run more than one node per HDD. It is allowed to run more than one node per IP, but on separate HDDs (I think that has changed; previously it was not allowed to run more than one node per IP).

When you are running 2 nodes behind one IP, the network treats you like a single node when it comes to ingress. So if your first HDD is full, I would just start a new node with a new HDD, if that is what you want. When you start a new node, it first has to get vetted, and during this time it will not get full ingress. Vetting should take about 2 months with 2 nodes behind one IP.
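As a rough illustration of how that ingress sharing works (the daily figure here is a made-up assumption, not a measured number):

```bash
# Hypothetical back-of-the-envelope: ingress is allotted per IP (/24 subnet),
# so two vetted nodes behind one IP split what a single node would receive.
INGRESS_PER_IP_GB=30   # assumed total daily ingress for the subnet (made up)
NODES=2
echo "ingress per node: $((INGRESS_PER_IP_GB / NODES)) GB/day"   # -> 15
```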

Whether it is a good idea to spend money on a new HDD is a decision you must make yourself. Who knows how Storj will be doing in one or two years? Nobody can tell. If you decide on a new HDD, you shouldn't take one with SMR; take one with CMR instead. https://en.wikipedia.org/wiki/Shingled_magnetic_recording

You could also start a new node with a smaller HDD that you might have lying around, without buying a new one, and once it is full you can migrate that node to a bigger HDD. There is a guide in the documentation on how to do that. This would be a smart option: get the new node vetted first, and if you then think it could fill a 16 TB HDD and you are willing to invest in the project, you could do that.

I can't tell how long it would take to fill up a 16 TB node; who can tell what happens in the future? But I would guess something between one and two years. That's for one node behind one IP. It should also hold for one full node plus one filling node behind one IP (that's how I understand Storj works).
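Just to show where a one-to-two-year guess could come from (the daily ingress rate is purely an assumption; real ingress varies a lot and deletes reduce the net fill rate):

```bash
# Back-of-the-envelope fill-time estimate under an assumed net ingress rate.
CAPACITY_GB=16000          # 16 TB
NET_INGRESS_GB_PER_DAY=25  # made-up average; adjust to what your node sees
echo "days to fill: $((CAPACITY_GB / NET_INGRESS_GB_PER_DAY))"  # -> 640 (~1.75 years)
```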


You can have more than one node per IP. So don't buy too expensive stuff. You can always add a new node on a new HDD; use the old one first until it is full.

Hello @Mykro,
Welcome to the forum!

Per IP. Each node must have its own unique NodeID.


Hello @Alexey,

Thanks, I corrected that.


So I can have both the 2 TB and the 16 TB node on a single IP? What is the limit on the number of nodes on a single IP?
I did the graceful exit because I thought I would have to move to a different location, and then the coronavirus happened. I had already started it, and in the end I did it mostly for testing and to get all the held amount back.
I'm not sure I will be able to migrate the node; it requires significant downtime even for 500 GB, and there is a risk of getting disqualified during migration. It also seems I have some problems after migrating from the 500 GB to the 2 TB partition: my uptime score dropped to 98.4%, and I lost all “usage” traffic (both ingress and egress).

This is after migration: [screenshot]

And this is before migration: [screenshot]

So before migration I had usage traffic; now there is no usage traffic and lots of repair traffic.
Maybe some files were corrupted during migration…

You could use an online migration: https://documentation.storj.io/resources/faq/migrate-my-node
Make sure that all databases were transferred and that you have the correct path to the data in your setup (the mount to /app/config in the case of docker, or the storage.path: option in the config.yaml file in the case of the binary or Windows GUI).
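For reference, here is a minimal sketch of the rsync-based online migration that guide describes (the paths and container name are hypothetical examples; follow the linked documentation for the exact steps):

```bash
# 1. Copy the data while the node keeps running; repeat until few changes remain.
rsync -aP /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/
rsync -aP /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/

# 2. Stop the node, then do a final pass with --delete so both copies match.
docker stop -t 300 storagenode
rsync -aP --delete /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/

# 3. Recreate the container pointing at the new location (bind mount to
#    /app/config for docker; storage.path: in config.yaml for binary/GUI).
docker rm storagenode
docker run -d --name storagenode \
  --mount type=bind,source=/mnt/new-disk/storagenode,destination=/app/config \
  storjlabs/storagenode:latest   # other required flags omitted here
```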

Of course, if you have already finished the migration, this will not help. Any further attempt could break the node completely, but you could copy just the missing pieces from the source (the blobs folder).

I did it with rsync, almost exactly as in the instructions. I have already finished it (there is no source anymore). But something is still wrong with the traffic. I'm not sure whether it is related to the migration or just a coincidence: my node still continues to fill up space, but with repair traffic instead of usage traffic.

This is normal. Repair traffic means that the satellite uploads repaired pieces of other segments to your node.
It's not related to your missing pieces; someone else will receive those later as repair traffic too.

Is there a way to know for sure whether I have missing pieces? Does the ingress repair traffic mean I have some? Or is it just new pieces being redistributed across the network after someone's abrupt exit?

Yes, your audit score will drop. The ingress repair traffic is for pieces missing elsewhere in the network, not yours.
The missing pieces on your node will never return to your node. When the number of healthy pieces drops below the threshold, the repair job will download the 29 remaining pieces from other nodes, recalculate the missing ones (51), and upload them to other nodes. Those nodes will see ingress repair traffic.
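To make those numbers concrete (using the 29-of-80 figures from the post above; the actual thresholds on the network may differ):

```bash
# Each segment is erasure-coded into 80 pieces, any 29 of which are enough to
# reconstruct it. When the healthy count falls to the repair threshold, the
# repair job rebuilds the segment and re-uploads the missing pieces elsewhere.
TOTAL_PIECES=80
HEALTHY=29   # assume the segment is down to the minimum needed
echo "pieces re-created for other nodes: $((TOTAL_PIECES - HEALTHY))"  # -> 51
```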


@Alexey so if I have a 100% audit score, does that mean I have no missing pieces?


Most probably, yes. It means all audits the satellites sent to your node were successful: no missing pieces were detected.