Storj DCS issues

First, I’m surprised I couldn’t find a Storj DCS sub-forum where your customers could exchange.

Otherwise, I’m a free tester. I noticed that after downloading 4.38 GB from the web interface, the dashboard bandwidth showed 6.35 GB of usage and the bucket bandwidth showed 6.05 GB; both were at 0 before the download. A 45% over-billing might upset some customers.

It took 19.5 minutes to download 4.38 GB, i.e. 13.5 GB/h or 323 GB/day. It would take me 3 days to restore my 1 TB SSD and 3 weeks to restore my HDD, which is not very appealing for data backup.
For information, I’m in France on 1000/500 Mb/s fiber.
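The throughput figures above can be reproduced from the raw numbers; a quick sketch with awk:

```shell
# Throughput estimate: 4.38 GB downloaded in 19.5 minutes.
awk 'BEGIN {
  gb = 4.38; minutes = 19.5
  gbh = gb / (minutes / 60)            # GB per hour
  printf "%.1f GB/h, %.0f GB/day\n", gbh, gbh * 24
}'
# → 13.5 GB/h, 323 GB/day
```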


Interesting, I’m testing it as well. The speed I got, though I didn’t measure it, was great.

Speed is great if you’re on ADSL, but I’m used to downloading games in the 700 Mb/s range; going back to 40 Mb/s brings back some old memories I’d rather forget.

If you check the Detailed report, you will notice that you are not billed for all allocated bandwidth.
There is a known issue (we are working on it, by the way) where client software (including the object browser in the satellite UI) allocates more bandwidth than it settles. Bandwidth becomes settled when all nodes submit their orders, so there can be a delay of up to 48 hours.
However, even if orders expire and are not submitted, the allocated bandwidth is not reset until the end of the month. That is the issue we are working on.

Have you tried to download it with uplink?
Is it the same result?
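For comparison, a minimal sketch of a native download with the uplink CLI; the bucket and object names below are placeholders, and it assumes you have already run uplink setup or imported an access grant:

```shell
# Download a single object natively (bypassing the web interface).
# "my-bucket" and the object path are hypothetical; substitute your own.
uplink cp sj://my-bucket/backups/archive.zip ./archive.zip
```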


I’m glad I’m a free user; if I had to explain your answer to our accounting department, I would be in deep s…t.
What I understand from your answer is that you’re experiencing a few bugs but you’re working on them. Great, but if my electric company gave that kind of answer, they would be in court facing a class-action suit.
I haven’t tried uplink, but I tried FileZilla following the manual’s instructions. After wasting an hour on it to figure out that FileZilla is not supported and the manual is out of date, I gave up.
I’m a free beta tester, and I really wonder how you can sell this product to a paying customer.
I participate in the beta program to push this system into its last entrenchments, but sadly it doesn’t meet my minimum requirements.
A beta testers’ forum would have been welcome, but it’s nowhere to be found.


You are billed only for actual usage, not allocated.
On the dashboard you see allocated bandwidth usage; in the detailed report you will see the actual usage. Can you please confirm?
The invoice for the accounting department contains actual usage, not allocated, so there is no confusion there.

FileZilla has bugs too; the only working version is 3.51.0. As far as I know, they haven’t fixed an issue with subfolders yet.

You can use this forum for feedback as a beta tester. We can move your thread to the appropriate category.


Hi team… I’m not sure how your performance on Storj DCS is, but I’m sure I’m getting really bad speed on a 70 GB upload. My upload seems to be taking ages: it will take 2 weeks to finish uploading 70 GB of backup files using CloudBerry Lab. Is it me, or is the network itself getting a little slow?


Which gateway have you used? What is your upstream speed?
Have you tried the self-hosted gateway for CloudBerry Lab?

Have you tried a native integration? Duplicati or rclone for example?
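As a quick way to test a native integration, rclone has a storj backend; a rough sketch, where the remote name, access grant value, bucket, and paths are all placeholders:

```shell
# Create an rclone remote using the native storj backend.
# The access grant is generated in the satellite UI; the value below is a placeholder.
rclone config create mystorj storj access_grant 1Abc...placeholder

# Then copy a local directory through it:
rclone copy /local/backups mystorj:my-bucket/backups
```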

It would be helpful to get a little more context to help narrow down the potential issue(s).

Per @Alexey’s request, can you share:

  • The gateway you’re using

  • Your upstream bandwidth

  • Format/type of data (typical object size, how many objects)

  • Where the data is being uploaded from


MSP360 (cloudberry) has a few scenarios that do not work well with object storage.

First, what works well: file-based backups of larger files. Even with a mix of small files and files approaching or exceeding our block size of 64 MB, this still works well.

Small files (KB range) and object storage tend to be a bad match. Block-based backups in MSP360 work poorly, because their block size for files under 1 GB is 128 KB.

Any detail you can provide regarding your configuration will be most helpful for getting you a good solution.

Can you clarify your values? A small ‘b’ means bit, so when you say a 64 Mb (8 MB) file works well but a 1 Gb (128 MB) file works poorly, I guess you actually mean 128 kb/s, kilobits per second, which is ISDN speed if you’re old enough to remember that acronym; and here I was complaining about my poor 30 Mb/s speed.

Ideally files are 64 MB or larger, but 1 MB files would still be performant. When doing block-level backups with MSP360, files from 1 MB to 512 GB are split into 128 KB blocks, and files from 512 GB to 1024 GB are split into 256 KB blocks; this can result in poor performance. Conversely, file backups in MSP360 with a wide range of file sizes tend to work quite well.
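To see why those small blocks hurt, a quick back-of-the-envelope count of how many separate object-storage requests a block-level backup generates (using the 128 KB block size stated above):

```shell
# Request-count estimate: a 1 GB file split into 128 KB blocks.
awk 'BEGIN {
  file_mb  = 1024          # 1 GB file
  block_kb = 128
  blocks = file_mb * 1024 / block_kb
  printf "%d blocks, i.e. roughly one storage request each\n", blocks
}'
# → 8192 blocks, i.e. roughly one storage request each
```

Each request carries fixed per-object overhead, so thousands of tiny uploads are far slower than a handful of large ones.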

Please elaborate on your msp360 backup that was going poorly, we would love to help.

Ref for msp360 Block-Level Backup: A Comprehensive Guide for MSPs


I’m not using MSP360 but the web interface, and today I’m getting a 15 Mb/s download speed on a 2 GB file. I’m used to downloading games from Steam, Epic and others at 800 Mb/s. Are there any tricks or apps I could be missing to get a few hundred Mb/s from Storj?

I don’t know who your targeted customers are. You offered a free subscription for beta testers to discover the flaws and bugs in the system, a great idea, and when I clicked on Community I was expecting a DCS forum with all kinds of sub-forums for customers and testers.
Otherwise, from my tests the data are safe, no corruption whatsoever, but the speed is appalling; I really wonder who your target customer is.

For uploads, try the following with rclone, and monitor your network with a tool rather than relying on the speed rclone reports.

(Default rclone behaviour)
rclone copy /local/300gbfile.extension bucketname:path

(Initial tuning for multipart)
rclone copy --s3-upload-cutoff 0 --s3-upload-concurrency 8 --s3-chunk-size 64M /local/300gbfile.extension bucketname:path

You can vary --s3-upload-concurrency (try, say, 4 and 16) to see what gives the best upstream performance. Please note that rclone’s --progress output is inaccurate, so either time the process or use a bandwidth monitor like nload/iftop/iperf3.

For download you can do multiple files in parallel to boost performance with --transfers=int (try --transfers=8 to start). This will allow you to saturate your internet connection.
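Putting that together, a download along these lines (bucket and paths are placeholders) should parallelize across files:

```shell
# Pull a whole prefix with 8 files in flight; tune --transfers to your link.
rclone copy --transfers=8 bucketname:path /local/restore
```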


Thanks a lot for the information. If it works as well as you described, I’ll just need to figure out a way to send data from my main PC running Windows 10 to the Linux PC running Storj. I’m open to suggestions :slight_smile:

Happy to help. Restic + rclone; give this a go: Backup With Restic - Storj DCS
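Assuming an rclone remote named storj is already configured, the restic-over-rclone flow from that guide looks roughly like this; the remote name, bucket, and backup path are all placeholders:

```shell
# Initialize a restic repository stored in a Storj bucket via rclone,
# then back up a directory. Remote name "storj" and bucket are assumptions.
restic -r rclone:storj:my-restic-bucket init
restic -r rclone:storj:my-restic-bucket backup /home/user/data
```

Restic runs on Windows too, so the Windows 10 PC can back up straight to the bucket without going through the Linux machine.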

By the way: is it possible to mount a Storj DCS restic repository into a local directory?

If yes, how? Otherwise, what a pity.

We have seen s3fs used for this in the past.

Alternatively you could also try rclone mount.