Slow upload throughput - Using Veeam backup - Brazil

I am getting very slow upload speeds, no more than 20 MB/s. I am using the bucket to store backups made from Veeam, and I have around 4 TB that needs to be sent every day. How can I improve this performance?

Could you give us some more details of your setup?

  • Available bandwidth
  • Router (Consumer grade? Enterprise grade?)
  • CPU usage

(I am sure other, cleverer people like @Alexey will be able to ask more relevant questions) :slight_smile:

(And welcome to this community, I hope we can solve your problems!) :wink:

We are an ISP, so the server is connected via a 10 Gb interface adapter, and our backbone to the internet has 200 Gb. We are ASN 28132.

The router is a Huawei NE8000 (two routers), enterprise grade. Our CPU usage is less than 20%, as you can see (it's a dedicated Dell R410 with 96 GB running only Veeam).

1 Like

OK, definitely one for the “big guys” to help you with :slight_smile:

2 Likes

Thanks for the efforts!!

Welcome to the forum @rcarvalhaes !

Did you follow this guide?

3 Likes

To increase speed for a Veeam backup configured with object storage, you need to increase the chunk size and the parallelism.
By default Veeam uses a 1 MiB block size; this is a very small object size, so speed is suboptimal. It also produces a lot of small objects, which increases your costs due to the large number of segments (see Pricing - Storj Docs) and slows down almost any operation, including deletions.

However, Veeam Backup doesn't have a separate option to change either the S3 chunk size or the parallelism; it uses the same storage option for both the backup and the object storage.
So you can increase the chunk size only by increasing the storage block size (Data Compression and Deduplication - User Guide for VMware vSphere), but this will increase the size of differential backups.
For Storj DCS object storage the best option is a 64 MiB chunk size, since that matches the maximum segment size on the network.

To increase parallelism, you can change the general network option to Use multiple upload streams per job (Managing Upload Streams - User Guide for VMware vSphere).
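
To see why the block size matters so much, here is rough shell arithmetic for the 4 TB daily upload mentioned above (binary units for simplicity; the numbers are illustrative):

echo $(( 4 * 1024 * 1024 / 1 ))    # 1 MiB blocks:  4194304 segments uploaded per day
echo $(( 4 * 1024 * 1024 / 64 ))   # 64 MiB blocks:   65536 segments uploaded per day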

4 Likes

That guide just covers setting up the service; there is no troubleshooting information about performance, so it doesn't help.

Thanks for the detailed answer, @Alexey. I have some questions on your points:

  1. You recommended a 64 MiB chunk size, but Veeam seems to have a maximum of 4 MB (image below). That's very far from 64; is 64 really your number?
    image

  2. How can I test the network bandwidth with the Storj gateway?

Interesting, why are you (still) running an E5530 from 2009 in active production?

The largest block size Veeam will allow is 8 MB, but it requires editing the Windows Registry to make it show up. (This is mentioned in Veeam's KB4215 under "Object Storage Compatibility".)

‘HKLM\SOFTWARE\Veeam\Veeam Backup and Replication’

add:

UIShowLegacyBlockSize (DWORD, 1)

After adding this registry key and restarting the Veeam server once, you will see 8 MB as an option.
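
For reference, the same value can be added from an elevated Command Prompt; this one-liner is just a sketch of the key path and value described above:

reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v UIShowLegacyBlockSize /t REG_DWORD /d 1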

1 Like

Given your performant setup, let's move right to tuning and testing. What block size are you using? If 1 MB or smaller, please stop the backup and restart it with 4 MB, and if you have time try 8 MB, which should give you the best throughput.

20 MB/s is 160 Mb/s, but we should be able to see better speeds. Feel free to also run multiple jobs to Storj at the same time, as the additional parallelism should be advantageous.
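
As a quick sanity check on that unit conversion (8 bits per byte):

echo $(( 20 * 8 ))   # 20 MB/s = 160 Mb/s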

-Dominick

Hello,
I would assume that there is a performance improvement when using Gateway-ST (GitHub - storj/gateway-st: Single-tenant, S3-compatible server to interact with the Storj network) or Gateway-MT (GitHub - storj/edge: Storj edge services (including multi-tenant, S3-compatible server to interact with the Storj network))?

Gateway-ST is probably sufficient for your needs and easier to set up.

If you want to run a speed test, I would recommend Storj Uplink. It's a CLI tool with which you can upload and download files.

Be aware that the default settings are not ideal. Please see the recommended settings for an 8-core CPU:

If you have a more powerful server, you can go even faster.
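
For example, a rough upload test could look like this; the bucket name veeam-test is a placeholder, and the --parallelism flag assumes a reasonably recent uplink build:

dd if=/dev/urandom of=testfile bs=1M count=1024   # create a 1 GiB incompressible test file
uplink mb sj://veeam-test                         # create a test bucket
uplink cp --parallelism 8 testfile sj://veeam-test/testfile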

Still a very good machine, dedicated only to Veeam; it works like a charm!

1 Like

I tested the 8 MB chunk size and it worked, but it seems too far from the ideal 64 MiB recommended. Even though the throughput is better now, I am still stuck on the point of cost. With 1 MiB I already have around 10 TB of data with 10 million segments, so the price for segments alone is already USD 8 per TB, against USD 4 per TB for the storage itself: USD 12 per TB in total.

Wasabi would be USD 7.00, and with smaller chunk sizes I would consume less space on incremental backups thanks to Veeam's deduplication algorithm. Bigger chunk sizes mean a lot of additional data transferred for the daily incremental backups and less efficiency in the deduplication. I have a demand for more than 100 TB, so I am really wondering whether it will make sense.
Anyway, I will run more throughput tests tonight with more parallel connections and the new block size.

Another point concerns immutability, as offered on Wasabi. I didn't find a way to create a bucket that already has "X" days of immutability, which would be very useful against a stolen access key (someone could just access it and delete everything). There is no security protection for that.

Is there something that I am missing?

3 Likes

We have updated our documentation to advise the use of large and ideally extra large blocks.

@rcarvalhaes thanks for testing and reporting on throughput.

4 Likes

With 1 MiB I already have around 10 TB of data with 10 million segments… so the price for segments alone is already USD 8 per TB

With your change to an 8 MiB block size, your segment costs will be cut by a factor of 8. So for 1 TB of Veeam backup, your Storj cost will be ~$5/TB ($4/TB stored, ~$1/TB segments).
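
The per-TB arithmetic behind those numbers, as a quick sanity check (segment counts per TiB; the fee estimates follow the figures already quoted in this thread):

echo $(( 1024 * 1024 / 1 ))   # 1 MiB blocks: 1048576 segments per TiB (~USD 8-9/month in segment fees)
echo $(( 1024 * 1024 / 8 ))   # 8 MiB blocks:  131072 segments per TiB (~USD 1/month in segment fees)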

Wasabi would be USD 7.00

A hidden cost with Wasabi for backup use cases is their 90-day minimum storage duration policy. Depending on how often you purge daily incremental backups, Wasabi requires you to pay for that used storage for a full 90 days. So if you only keep 30 days' worth of daily backups available, with Wasabi you are paying for that used capacity for an additional 60 days.

Also, $7/TB is for single-region storage. If you want to ensure your backups are always available and never offline due to a regional outage, your Wasabi cost would be USD 14.00 (using Wasabi Cloud Sync to copy your bucket from one region to a second region), plus any additional fees from their 90-day minimum usage policy.
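
As a rough illustration of that minimum-duration effect, assuming the ~USD 7/TB-month list price and a 30-day retention cycle:

echo $(( 90 / 30 ))   # each purged TB is billed for 3x the storage-months actually used
echo $(( 3 * 7 ))     # ~USD 21 accrued per TB over its 90-day billing window, vs USD 7 of actual use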

With Storj, there is no 90-day minimum policy and your uploads to Storj DCS are globally distributed, eliminating the risk of downtime associated with regional outages.

Another point concerns immutability

As @jammerdan highlighted, object versioning/immutability (Object Lock & Retention) is on the Storj roadmap.

6 Likes

Hi @ray.bull , thanks for the detailed answer.

@Dominick, what about the point I raised concerning protection against a stolen access key? What alternatives are there to prevent someone from using the key to access and delete the bucket or the files inside?

1 Like

First off, thanks for this. I reached out to a few principal engineers to make sure I got this perfect.

Basics… We follow best practices, have strict access control, and treat security as a first-principles theme. That being said, here is the answer.

Details… The access key is the encryption key for decrypting your access grant.

If you want total control, never share the access key with us. Again, we don't log or record it, but that's your answer.

We are only exposed to your access keys in two scenarios:

  1. When you create them initially on storj.io
  2. When you send them to us during an API call to the Hosted Gateway

As an alternative to exposing these credentials to us, you can use our network as originally envisioned: run the endpoint yourself and generate the Access Key and Secret Key client side.

  1. Use the Uplink CLI client side to create your access via uplink register after importing an Access Grant
  2. Then run Gateway ST and never expose the Access Key or the Secret Key to us (see the sketch below)
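
A minimal sketch of that flow, assuming the gateway-st binary is installed as gateway on the machine running Veeam; the access grant value and names below are placeholders:

uplink access import main 1Abc...DEF   # import your Access Grant client side
gateway setup                          # interactive; generates a local Access Key and Secret Key
gateway run                            # serves S3 on 127.0.0.1:7777 by default; point Veeam at it

This way the keys never leave your own infrastructure.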

-Dominick

5 Likes