I am getting very slow upload speeds, no more than 20 MB/s. As I am using the bucket to store backups made from Veeam and I have around 4 TB that should be sent every day, how can I improve this performance?
Could you give us some more details of your setup?
- Available bandwidth
- Router (Consumer grade? Enterprise grade?)
- CPU usage
(I am sure other, cleverer people like @Alexey will be able to ask more relevant questions)
(And welcome to this community, I hope we can solve your problems!)
We are an ISP, so the server is connected to a 10Gb interface adapter and our backbone to the internet has 200Gb. We are ASN 28132.
Router is a Huawei NE8000 (two routers), enterprise grade. Our CPU usage is less than 20%, as you can see (it's a dedicated Dell R410 with 96GB running only Veeam).
OK, definitely one for the "big guys" to help you with
Thanks for the efforts!!
Welcome to the forum, @rcarvalhaes!
Did you follow this guide?
To increase speed for Veeam Backup with configured object storage you need to increase the chunk size and parallelism.
By default it uses a 1MiB block size; this is a very small object size and thus speed is suboptimal. It also produces a lot of small objects, which increases your costs due to the large number of segments (see Pricing - Storj Docs) and slows down almost any operation, including deletions.
However, Veeam Backup doesn't have a separate option to change either the S3 chunk size or the parallelism; it uses the same storage options for both the backup and the object storage.
So you may increase the chunk size only by increasing the storage block size (Data Compression and Deduplication - User Guide for VMware vSphere), but it will increase the size of diff backups.
For Storj DCS object storage the best option is a 64MiB chunk size.
To increase parallelism you may change the General Network option Use multiple upload streams per job, see Managing Upload Streams - User Guide for VMware vSphere.
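To put the segment point in rough numbers for the 4TB mentioned above (a simplification that treats TB as TiB and assumes each Veeam block becomes one object, and thus one segment):

# approximate segment count = data size in MiB / block size in MiB
echo $(( 4 * 1024 * 1024 / 1 ))    # 1MiB blocks  -> ~4,194,304 segments
echo $(( 4 * 1024 * 1024 / 64 ))   # 64MiB blocks -> ~65,536 segments

The larger block size cuts the segment count (and therefore the segment-based part of the bill) by a factor of 64.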
This guide is just for setting up the service; there is no troubleshooting information about performance, so it doesn't help.
Thanks for the detailed answer, @Alexey. I have some questions on your points:
- You recommended a 64 MiB chunk size, but Veeam seems to have a maximum of 4MB (image below). That's quite far from 64; is 64 really your number?
- How can I test the network bandwidth with the Storj gateway?
Interesting, why are you (still) running an e5530 from 2009 in active production?
The largest block size Veeam will allow is 8MB, but it requires editing the Windows Registry to get it to show up. (This is mentioned in Veeam's KB4215 under "Object Storage Compatibility".)
"HKLM\SOFTWARE\Veeam\Veeam Backup and Replication"
add:
UIShowLegacyBlockSize (DWORD, 1)
After restarting the Veeam server once after adding this registry key, you will see 8MB as an option.
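If you would rather add it from an elevated command prompt than through regedit, something along these lines should create the same value (same path and value name as above):

reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v UIShowLegacyBlockSize /t REG_DWORD /d 1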
Given your performant setup, let's move right to tuning and testing. What block size are you using? If 1MB or smaller, please stop the backup and restart it with 4MB, or, if you have time, 8MB, which should give you the best throughput.
20MB/s is 160Mb/s, but we should be able to see better speeds. Feel free to also run multiple jobs at the same time to Storj, as the additional parallelism should be advantageous.
-Dominick
Hello,
I would assume that there is a performance improvement when using Gateway-ST (GitHub - storj/gateway-st: Single-tenant, S3-compatible server to interact with the Storj network) or Gateway-MT (GitHub - storj/edge: Storj edge services (including multi-tenant, S3-compatible server to interact with the Storj network))?
Gateway-ST is probably sufficient for your needs and easier to set up.
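A minimal Gateway-ST sketch, assuming the single-tenant gateway binary from the repository above (command names may differ between releases):

gateway setup   # interactive wizard; it asks for your access grant and generates local S3 credentials
gateway run     # starts the local S3-compatible endpoint you can point Veeam at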
If you want to run a speed test I would recommend Storj/Uplink. It's a CLI tool with which you can upload/download files.
Be aware that the default settings are not ideal. Please see, for an 8-core CPU:
If you have a more powerful server, you can go even faster.
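For a quick throughput test outside of Veeam, a sketch along these lines may help (flag names can differ between uplink versions; the bucket and file names are placeholders):

uplink mb sj://speedtest
uplink cp --parallelism 16 ./bigfile.bin sj://speedtest/
uplink cp --parallelism 16 sj://speedtest/bigfile.bin ./download/

Increase the parallelism until your bandwidth or CPU becomes the bottleneck.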
Still a very good machine, dedicated only to Veeam; it works like a charm!
I tested the 8MB chunk size and it worked, but it seems too far away from the ideal 64MiB recommended. Even if the throughput is better now, I am still stuck on the point of cost. With 1MiB I already have around 10TB of data with 10MM segments, so just the price for segments is already 8USD, against 4USD for the 10TB. Total 12USD.
Wasabi would be USD 7.00, and with lower chunk sizes I will consume less space on incremental backups and with Veeam's deduplication algorithm. Bigger chunk sizes mean a lot of additional data transferred for daily incremental backups and less efficiency in deduplication. I have a demand for more than 100TB, so I am really wondering whether it will make sense.
Anyway, I will run more throughput tests tonight with more parallel connections and this new block size.
Another point concerns the immutability on Wasabi. I didn't find a way to create a bucket already configured with "X" days of immutability, which would be very useful against a stolen access key (someone could just access and delete everything). There is no security protection for that.
Is there something that I am missing?
We have updated our documentation to advise the use of large and ideally extra large blocks.
@rcarvalhaes thanks for testing and reporting on throughput.
With 1MiB I already have around 10TB of data with 10MM segments… so just the price for segments is already 8USD
With your change to an 8MiB block size, your segment costs will be cut by a factor of 8. So for 1TB of Veeam backup, your Storj cost will be ~$5/TB ($4/TB stored, $1/TB segments).
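A back-of-the-envelope version of that estimate, using only the numbers above (again treating TB as TiB, with one segment per block):

echo $(( 1024 * 1024 / 8 ))   # ~131,072 segments per TB at 8MiB blocks, 1/8 of the ~1,048,576 at 1MiB
# storage ~$4/TB + segments ~$1/TB  =>  ~$5/TB total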
Wasabi would be USD 7.00
A hidden cost with Wasabi for backup use cases is their 90-day minimum storage duration policy. Depending on how often you are purging daily incremental backups, on Wasabi you are required to pay for that used storage for a full 90 days. So if you only keep 30 days' worth of daily backups available, with Wasabi you are paying for that used capacity for an additional 60 days.
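A rough illustration with the $7/TB figure and the 30-day retention example above (a simplification; actual Wasabi billing granularity may differ):

echo $(( 7 * 90 / 30 ))   # ~21 USD billed per TB of churned data over the 90-day minimum, vs ~7 USD for the 30 days actually kept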
Also, $7/TB is for single-region storage. If you want to ensure your backups are always available and never offline due to a regional outage, your Wasabi costs would be USD 14.00 (using Wasabi Cloud Sync to copy your bucket from one region to a second region), plus any additional fees from their 90-day minimum usage policy.
With Storj, there is no 90-day minimum policy and your uploads to Storj DCS are globally distributed, eliminating the risk of downtime associated with regional outages.
Another point concerns the immutability
As @jammerdan highlighted, object versioning/immutability (Object Lock & Retention) is on the Storj roadmap.
Hi @ray.bull, thanks for the detailed answer.
@Dominick, what about the point I wrote concerning protection against a stolen access key? What alternatives are there to prevent someone from using the key to delete the bucket or the files inside it?
First off, thanks for this. I reached out to a few principal engineers to make sure I got this perfect.
Basics… We follow best practices, have strict access control, and treat security as a first-principles theme. That being said, here is the answer.
Details… The access key is the encryption key for decrypting your access grant.
If you want total control, never share the access key with us. Again, we don't log or record it, but that's your answer.
We are only exposed to your access keys during two scenarios:
- When you create them initially on storj.io
- When you send them to us during an API call to the Hosted Gateway
As an alternative to exposing these credentials to us, you can use our network as originally envisioned: run the endpoint yourself and generate the Access Key and Secret Key client-side.
- Use the Uplink CLI client-side to create your access via
uplink register
after importing an Access Grant
- Then run Gateway ST and never expose the Access Key or the Secret Key to us
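A sketch of that flow (exact subcommand names and arguments vary between Uplink and Gateway-ST releases; "myaccess" and the grant are placeholders):

uplink import myaccess <access-grant>    # import the Access Grant you created, entirely client side
uplink register myaccess                 # derive an S3-style Access Key / Secret Key from it
gateway run                              # serve your own S3-compatible endpoint with those credentials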
-Dominick