Rclone should work great for this. I’m happy to do a call if you want to run a lab @Suykerbuyk
It sounds like your process might benefit from rclone sync rather than rclone copy: sync makes the destination match the source, deleting destination files that no longer exist locally, while copy only adds and updates files.
Example command
rclone sync --progress --checkers 100 --fast-list --disable-http2 --transfers 64 --dry-run /local/path mount:bucket
--progress
real-time transfer statistics
--checkers
default is 8, scale up to improve checking throughput
--fast-list
can improve listing speed by listing recursively; drop this if you run into memory issues
--disable-http2
a must, as it improves performance substantially
--transfers 64
number of files transferred in parallel; memory usage will be roughly this number multiplied by the buffer per file, up to 64 MB each (64 × 64 MB = 4096 MB). Normally 64 is enough to be "really fast" when moving a bunch of smaller files, but if you have the resources you can try 96 and 128. If the files being bulk uploaded (synced) are large, don't use such high transfers, since each transfer also uploads chunks in parallel (the default --s3-upload-concurrency is 4)
--dry-run
for testing, does not make any changes
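For large files, the tuning is roughly inverted: fewer parallel transfers, more chunk concurrency per file. A sketch under the same assumptions as the example above (the mount:bucket remote is carried over; the exact numbers are illustrative and should be adjusted for your hardware):

```shell
# Large-file variant: fewer simultaneous files, more parallel
# chunk uploads per file via --s3-upload-concurrency.
rclone sync --progress --checkers 100 --disable-http2 \
  --transfers 8 --s3-upload-concurrency 16 \
  --dry-run /local/path mount:bucket
```

Remove --dry-run once the output looks right, as with the original example.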
Native vs Hosted S3
As for native vs hosted S3: rclone supports both options. My advice is to start with hosted S3 and experiment with native once you are successful. Load on your machine will be a lot higher with native, since the client handles erasure coding and uploads pieces to many nodes directly.
Hosted S3
As of version 1.61.1, select the S3 backend in rclone config:
5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi
\ (s3)
Then choose Storj as the provider:
21 / Storj (S3 Compatible Gateway)
\ (Storj)
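If you prefer to skip the interactive walkthrough, the same hosted remote can be created non-interactively. A sketch, assuming a remote name of hosted-storj and placeholder credentials (generate real S3 credentials in your Storj account first; the endpoint shown is Storj's global hosted gateway):

```shell
# Create an S3-type remote pointed at Storj's hosted gateway.
# YOUR_ACCESS_KEY / YOUR_SECRET_KEY are placeholders.
rclone config create hosted-storj s3 \
  provider=Storj \
  access_key_id=YOUR_ACCESS_KEY \
  secret_access_key=YOUR_SECRET_KEY \
  endpoint=https://gateway.storjshare.io

# The sync example above then becomes:
rclone sync --progress --dry-run /local/path hosted-storj:bucket
```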
Native
As of version 1.61.1, select the native Storj backend in rclone config:
41 / Storj Decentralized Cloud Storage
\ (storj)
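A similar non-interactive sketch for the native backend; the remote name native-storj and the access-grant placeholder are assumptions (you generate the grant in the Storj satellite UI or with the uplink CLI):

```shell
# Native integration talks to storage nodes directly,
# authenticated by an access grant rather than S3 keys.
rclone config create native-storj storj \
  access_grant=YOUR_ACCESS_GRANT

# Start with modest parallelism; native uploads load the
# client much more heavily than the hosted gateway.
rclone sync --progress --transfers 4 --dry-run /local/path native-storj:bucket
```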