Synology App native with iSCSI

My Synology (DS418play) doesn’t have docker by default, but it could be manually added…
However, Docker is a bit quirky/confusing to me, and it's one layer of virtualization too deep for getting identity files in and out of the container.

When the Syno App does get into further dev stages, a native one is preferred, ideally one that tunnels out over WireGuard/ZeroTier/point-to-point VPN/Cloudflare Argo to master servers/satellites/relays/etc. instead of relying on port forwarding.

If at all possible, it would be nice if the app supported localhost iSCSI (or NFS), as that is how I've made 2 TB available on the Storj/Tardigrade network.

Plus, using an iSCSI drive really helps speed up migration when a node fails.
(restore identity files from backup, connect to iSCSI share, run Storj installer, paste in pertinent info, back in business)


iSCSI will slow down your node, and it will lose almost all races for pieces to competitors with locally connected drives. So it is better to avoid network-connected storage if possible.

NFS and SMB are not supported at all.

As far as I know, the native application for Synology is on its way to release.
But as I heard, it uses Docker as a backend, so you would still need to install Container Station. It's not as difficult as you might think.
Install Container Station and run the docker container following this guide: https://documentation.storj.io/setup/cli


But the DS418play doesn’t support docker and so far I don’t know how to get around this. Is there any other information about this that I have missed?

If they do not support Docker, then there is nothing you can do at the moment. You can only continue to use a combination of PC + NAS with iSCSI, or move the disk to your PC, which can significantly improve the speed.

I'll have to experiment with testing docker, but my PC only supports a single 2.5" drive, so it would be more expensive to move backwards to a less reliable platform.

As a network/data engineer, I’d rather have cloud data delivered at 75-85% of max speed with a higher level of reliability compared to a single disk that could fail at any given moment.

Plus, restoring to full operation after a node crash/disk failure is about 10x faster with a NAS than with a single drive.

In some tests, iSCSI random read on 32MB shards can be done in 250 microseconds, which is the same as the server’s SSD.

Here's an idea: an SSD caching bonus for hot data in the region, which the Storj node could relay/cache/offer as short-order storage. Additionally, potential Storj rewards for having N+1 data reliability to reduce repair overhead. (I have RAID5 with Btrfs.)

Anyway, this Synology has maxed out at 350 MB/s sequential read and can sustain 60 MB/s read and write, so the chances seem good for delivering data to Tardigrade clients within 10 ms to peers in Atlanta or Chicago, as that is where my ISP has public Tier 1 links established.

The speed is not a problem. The problem is latency with small pieces on read and write, random read and write to be precise.
No network drive can beat a locally connected drive in that situation.
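
To illustrate (a rough sketch in Python with assumed latencies, not measurements): at a low queue depth, the number of small random reads a disk can serve per second is bounded by its per-operation latency, and every extra network round trip lowers that bound no matter how high the sequential throughput is.

```python
# Rough illustration with assumed latencies: small random I/O throughput is
# bounded by per-operation latency, not by sequential bandwidth.
def small_io_per_second(latency_ms: float, queue_depth: int = 1) -> float:
    return queue_depth * 1000.0 / latency_ms

PIECE_KIB = 4  # assume a small 4 KiB read per request

for label, latency_ms in [
    ("local HDD (seek + rotation)", 8.0),
    ("same HDD behind iSCSI (+ ~0.5 ms network round trip)", 8.5),
    ("local SSD", 0.2),
]:
    iops = small_io_per_second(latency_ms)
    print(f"{label}: ~{iops:,.0f} ops/s, "
          f"~{iops * PIECE_KIB / 1024:.1f} MiB/s of small random reads")
```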

I hope you don't take it the wrong way; this stuff really interests me and I like to learn as much as I can.
I must be misunderstanding the math, calculating things wrong, or missing part of the equation somewhere.

If my single HDD has an average latency of 4 ms when direct-attached, my iSCSI SAN has an average latency of 0.3 ms, and the SAN's network has an average latency of 0.25 ms at 600 IOPS, what am I missing? What do I not have measurements for, given that in both scenarios I am using the same motherboard, RAM, and CPU to host the node software?
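
For concreteness, here is roughly how I am adding those figures up (a back-of-the-envelope sketch in Python; the numbers are my measured averages from above and the labels are just mine):

```python
# Back-of-the-envelope per-I/O latency comparison using my measured averages.
direct_hdd_ms = 4.0       # direct-attached HDD, average latency

san_disk_ms = 0.3         # average latency reported by the iSCSI SAN
san_network_ms = 0.25     # average round-trip latency of the SAN network
iscsi_total_ms = san_disk_ms + san_network_ms

print(f"direct-attached HDD: {direct_hdd_ms:.2f} ms per I/O")
print(f"iSCSI SAN + network: {iscsi_total_ms:.2f} ms per I/O")
```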

If you're running iSCSI, that will be connected through your network. So if you're running a storage node with iSCSI, your node gets the data from the internet and then sends it over your network to the iSCSI target, creating a lot more latency, versus running the node on hardware directly, which is much faster than going over the network.

How is my iSCSI SAN, running at 0.55 ms, slower than my direct-attached drive that runs at 4 ms? I don't understand how 0.55 is slower than 4.00.

Assuming in both cases you test writes with the O_DIRECT flag, your SAN probably has a write cache (battery-backed RAM or SSD) that makes it faster than a direct-attached hard drive.
Of course, the same setup with the cache would be even faster if directly connected.

However, a SAN is not that much slower. Sure, compared to a directly attached SSD it is a bit slower, but compared to a hard drive it's not that different.
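
If you want to verify it on your own hardware, here is a minimal sketch (Linux only, Python, hypothetical /path/to/testfile on the volume under test, assumes the device's logical block size divides 4 KiB) of measuring per-write latency with O_DIRECT so the page cache doesn't hide the device:

```python
import mmap
import os
import time

# Minimal O_DIRECT write-latency probe (Linux only).
# /path/to/testfile is a hypothetical path on the volume under test.
BLOCK = 4096        # must be a multiple of the device's logical block size
N_WRITES = 1000

# O_DIRECT requires block-aligned buffers; an anonymous mmap is page-aligned.
buf = mmap.mmap(-1, BLOCK)
buf.write(os.urandom(BLOCK))

# Add os.O_SYNC as well if you want each write forced to stable storage
# (or to the controller's battery-backed cache) before it returns.
fd = os.open("/path/to/testfile", os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)

latencies = []
for i in range(N_WRITES):
    os.lseek(fd, i * BLOCK, os.SEEK_SET)
    start = time.perf_counter()
    os.write(fd, buf)
    latencies.append(time.perf_counter() - start)
os.close(fd)

print(f"avg write latency: {sum(latencies) / len(latencies) * 1000:.3f} ms")
```

Run it once against the direct-attached drive and once against the iSCSI volume; if the SAN's write cache is doing the work, its number will stay low even though the spinning disks behind it are much slower.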