Docker vs Ubuntu VM on TrueNAS

Hi all,
I’m new to the Storj world and I have a question.
I have an old Dell T320 with an E5-2470 Xeon CPU, 32 GB of RAM, and 10 Gbps fibre internet.
I installed TrueNAS SCALE (Linux based) on it to have the possibility to use RAIDZ and to add more disks later.
I will start with 4x4 TB hard drives, so my question is: which is better and easier for expanding storage, mounting, and adding more disks (as I’m not a Linux expert): installing the Storj Docker container on TrueNAS, or creating an Ubuntu VM and installing Storj on it?

Best regards.

Hi Dali44,

There are tradeoffs: if you add another layer (a VM) you may introduce some additional latency. It will be slight, but nodes are in a race against other nodes to deliver data. Less latency, more data. So that points to having fewer layers as an advantage.

But… you might want to run more than one environment, so you can run multiple nodes in different ways on the same hardware. There are different ways to go about it, of course. A lot of this is personal preference on your end.


Thank you.
I’m planning to run a single node, as I read that running multiple nodes behind the same IPv4 address is not a good idea.
Maybe I will try to run it on an Ubuntu VM and add other projects later, but as far as I know, expanding a VM’s storage when other drives are added is not easy.


Hello @Dali44 ,
Welcome to the forum!

I would recommend starting with one node on one disk. You can start the second and subsequent nodes (each with its own unique signed identity and its own disk) when the previous one is almost full, or at least vetted.
You read it right: running multiple nodes will not give you more data. It will be distributed across your nodes, and in total they are treated as one big node. So you will have a RAID at the network level. If one node dies, you lose only that piece of the common data.
With a usual RAID5/6 you have a higher probability of losing the whole array because of bitrot during the rebuild after one disk dies. The latter is mitigated by ZFS, of course, but at the expense of quicker disk wear and slower storage on average.
If you configure it wrong, you can also waste a lot of space (in addition to the space lost to redundancy in RAIDZ); see Node using a lot of extra diskspace (ZFS)
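As a quick illustration (the dataset name and value below are placeholders, not a recommendation from this thread), the dataset record size, one of the usual suspects in that ZFS discussion, can be inspected and adjusted like this:

```shell
# Show the current recordsize of the dataset holding the node's data
# (replace tank/storagenode with your own pool/dataset).
zfs get recordsize tank/storagenode

# Optionally set a different recordsize; this only affects newly
# written blocks, existing data keeps its old record size.
zfs set recordsize=128K tank/storagenode
```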

See also RAID vs No RAID choice

If you go for RAIDZ anyway, then it is better to run your node as a Docker container: it’s more lightweight and doesn’t have a VM’s virtualization overhead (the Docker container uses your host’s kernel directly, with isolation), and you can expand the allocated space by re-creating the container with all your parameters and the changed allocation: How do I change values like wallet address or storage capacity? | Storj Docs
That way you do not need to physically allocate space from the pool to a virtual disk for a VM.
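Re-creating the container with a bigger allocation looks roughly like this (a sketch based on the Storj docs; the wallet, email, address, mount paths, and the `STORAGE` value are all placeholders for your own settings):

```shell
# Stop and remove the old container; the identity and data stay on disk.
docker stop -t 300 storagenode
docker rm storagenode

# Re-create it with the same parameters, only STORAGE changed.
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp -p 28967:28967/udp \
    -p 127.0.0.1:14002:14002 \
    -e WALLET="0x..." \
    -e EMAIL="user@example.com" \
    -e ADDRESS="external.address.example:28967" \
    -e STORAGE="7TB" \
    --mount type=bind,source=/mnt/pool/identity/storagenode,destination=/app/identity \
    --mount type=bind,source=/mnt/pool/storagenode,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest
```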


Hi @Alexey
Thank you for all the detailed information.

Multiple nodes vs big one node :
With one big node you can start exploiting the full potential of your node right after the vetting period, maybe after one month. But with multiple nodes you have to start them one by one, waiting for each one’s vetting period, so your nodes will be 100% functional only after a longer time compared to one big node.

With a big node you have a single dashboard, less maintenance, checking, ports, etc.


The vetting process is not a problem. Filling a node with data takes a lot of time independently of its size. When I said to start with one node, I meant exactly that.
Please use this Realistic earnings estimator to get an idea of how long it may take to fill up all the allocated space.
All that time, the three other disks will not wear if they are not attached :slight_smile:
Of course, if you plan to use this space not only for the storagenode, then it makes sense to have only one node.

The maintenance would be to write the docker-compose.yaml once, use it for the first node, add the next one when the first is almost full, and run docker-compose up -d.
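For example, a minimal docker-compose.yaml for one node might look like this (a sketch, not from the thread; all values are placeholders, and a second node would simply be another service with its own identity path, data path, and host ports):

```yaml
version: "3.3"
services:
  storagenode:
    image: storjlabs/storagenode:latest
    container_name: storagenode
    restart: unless-stopped
    stop_grace_period: 300s
    ports:
      - "28967:28967/tcp"
      - "28967:28967/udp"
      - "127.0.0.1:14002:14002"
    environment:
      - WALLET=0x...
      - EMAIL=user@example.com
      - ADDRESS=external.address.example:28967
      - STORAGE=3.5TB
    volumes:
      - /mnt/pool/identity/storagenode:/app/identity
      - /mnt/pool/storagenode:/app/config
```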

Regarding the node dashboard, please take a look at [Tech Preview] Multinode Dashboard Binaries and Multinode Dashboard Docker image

Ports - yes, they should be configured (once) and added to the monitoring (once).


Hi @Alexey

Well said.
If I run a PowerEdge T320 just for hosting one node it will be overkill, as I don’t have other tasks for it for now. So I think I will start with one node hosted on a 4 TB USB 3.0 external drive connected to my ISP router, since it runs 24/7, has a NAS feature, and I can run many VMs on it without third-party hardware.

When the node is vetted I will attach another hard drive and launch a second node.

CPU: Qualcomm Snapdragon 835
RAM: 8 GB DDR4
Uplink: 10 Gb down / 700 Mb up
Drive slots: 4x 2.5"
Ports: USB-C / USB 3.0
Name: Freebox Delta

Thank you all for your help.


I love the T320, though I admit I’d like the T420 or T620 a little more. :slight_smile: I have 3 T320s in my homelab now and they are all running as hypervisors, so that is my personal preference for running Storj nodes on these systems. They are officially supported by ESXi as well and can run at least 6.7.

I run both ESXi and Proxmox VE on them. I had hoped to upgrade all three to the E5-2470 v2, but I only managed to complete one before the current political situation here stopped all tech deliveries.
Dell has an ISO that will do all the firmware updates for this system as well.


Yeah, it’s a good server, much quieter than my old T610. I have already upgraded it to the E5-2470, but not the v2, as it was a little expensive. I used it for Chia plotting, but I finally abandoned the project and now it sleeps.
Maybe I will install ESXi or Proxmox to host some Storj nodes and other stuff.
Just one issue: I have a few 8 GB RAM sticks, and I never succeeded in installing more than 4x8 GB.

I have a mix of 32 and 64 GB in mine, but to upgrade RAM it is always recommended to be on the latest firmware. I did find one of my HP NC723SFP cards (10 Gbit networking) would overheat in the T320, so that was one minus. I generally use an SSD as a boot device in them and SAS drives for the VM bulk storage. I think the H710 RAID controller is recommended over the H310.
I think, from memory, I did get one of them up to the max of 96 GB. That’s one reason I like the T420 more: twice the max RAM capacity. The board is exactly the same; on the Dell site someone even swapped a T320 board with a T420 one and had it work.

One of mine that got the CPU upgrade, with 64 GB RAM.


I already own more than 10 x 8 GB RAM modules, same model and same frequency, but when I add the 5th and 6th ones I get POST errors and the system doesn’t want to boot… I have to investigate this.

I use an SSD as the boot device and for VMs. I swapped out the H710 for an H310 flashed to IT mode, as I don’t want to use hardware RAID and want direct access to the disks, maybe to use ZFS too. I will replace the 8x600 GB SAS disks with regular, bigger-capacity SATA hard drives. I also added a USB 3.0 PCIe card and bought an iDRAC 7 license.

On mine, I installed Proxmox with an Ubuntu Server 18 VM last night. I want to test ESXi too, to choose between them.


Hi, you can run it directly on TrueNAS SCALE. I made a request a while ago on TrueCharts and they made a ready-to-go K8s container.
The request I made: Add Storj Node · Issue #1086 · truecharts/apps · GitHub
The outcome: Introduction - TrueCharts

I have been running my main node for over a year now; this TrueNAS SCALE node has been up for a few months, and yes, you can expand the storage just by editing the container from the GUI: make it 6 TB or 12 TB, for example.


This is the spec of the RAM I installed to get my T320s to 64 GB and over.

I never had a bad module from these guys. I can’t say the same for some other eBay sellers. lol

Interesting to know. I’m running my first TrueNAS SCALE box as a VM on one of the T320s, so it might be a viable place to run a Storj node as well.

Exactly my setup: a custom-built server with Proxmox, and a VM running TrueNAS SCALE with the storagenode container.

If you run TrueNAS SCALE as a VM just to run Storj in Docker, it will be overkill, because TrueNAS needs a lot of RAM and CPU. You could instead just install Ubuntu Server and give it 1 or 2 GB of RAM; that will be enough to run many nodes.


I have many of them, the same as yours, the 8 GB model.
The problem is that when I put them in the A3 and A6 slots my server refuses to boot.


That implies some slot damage or a firmware issue.