Possible to make a "Balloon" function on node size?

Hiya Storj team.

I’ve got a handful of wonderful nodes at around 500GB in size. They are that size because they fit wonderfully into free capacity on the systems on which they live.

The nodes got filled during the test and sat at their allotted 500GB. Now that garbage collection is flagging trash on them, they are still 500GB in size, but ~280GB of that is marked as trash.

I’d love to enable an advanced feature that would allow me to not count trash towards the total node size. I’m okay with the nodes ballooning to over 500GB of total size, if it means that they can ingress “real” data, while the garbage waits to be taken out.

Cheers.


That’s not quite what I’m after. I don’t want the nodes to be permanently 800GB. I want them to hover around 500GB, but I’m alright with them ballooning larger for some time while large deletions are in progress.

You’re not understanding what I want to do, then. I want the node to hover around 500GB, but to have the ability to burst into higher territory for short amounts of time. The node currently has the aforementioned ~280GB of trash, and I’m fine with it being larger than normal for a while. The node will delete its trash soon and will then be back below the 500GB threshold.

With the new normal (30-day TTL), trash usage should settle at a constantly high level. So your idea comes too late.


I know it’s not possible. That is why this post is in the “Ideas and suggestions” category, and not in the Node Operator section :slight_smile:

Yeah, but the new normal - just like all the other normals - is just temporary. The tests will end in a few months, and then we’ll have another scenario.

Yeah, it would be a nice feature. I see a different implementation though: an additional limit under which non-garbage pieces (those with no TTL, or a TTL of more than a week) have to fit. Given that trash is removed at most a week later, this essentially means I could decide how much disk space I want the node to use, with the option to free it up for other purposes on a week’s notice.

However, this would be so much nicer if I could set the allocation at runtime, without restarting the node, to make it extra flexible.


Yes! That’s exactly what I want.

Perhaps an advanced setting in the config.yaml file, along the lines of Storagenode.Garbage.MaxBalloon, normally commented out and set to 0. If uncommented, the value would be added on top of the AllocatedDiskSpace field.

Example:

  • AllocatedDiskSpace = 500GB
  • Storagenode.Garbage.MaxBalloon = 300GB

… would result in a storagenode that temporarily maxes out at 800GB, where the last 300GB could only be trash. Then, when garbage collection finishes running - a week later, as Toyoo points out - the node would shrink back below the balloon size.


Then you could do it with scripting. Set the allocation to 800GB as suggested by @IsThisOn and monitor the actually used space. When you see the numbers you want, reduce the allocation and restart the node.


I like this idea too. Maybe just have a config option that disables counting the trash size as used space entirely? My node runs on a ZFS array that I’m using for other reasons, and the node is just there to fill some extra space. I don’t mind if it uses a few hundred extra gigabytes for a week at a time.

You could use something like this in a cronjob. In your case used=500000000000 and it should do the job.

# Query the node API for current disk usage.
diskspace=$(curl -s 127.0.0.1:15001/api/sno/ | jq .diskSpace)
used=$(echo "$diskspace" | jq .used)
trash=$(echo "$diskspace" | jq .trash)
# Rewrite the allocation in config.yaml to used + trash bytes, then restart.
sed -i "s/storage\.allocated-disk-space: .*/storage.allocated-disk-space: $((used + trash)) B/g" /mnt/sn1/storagenode1/storagenode/config.yaml
systemctl restart storagenode1

I would suggest changing the name. Ballooning in IT suggests something quite different.
