When you use a native integration, the upload opens 130 parallel connections for each segment of the file (64 MiB or less); once the first 80 are uploaded, the rest are canceled.
If you want more stable uploads, you need to increase the parallelism (how many segments libuplink can upload in parallel). In FileZilla this setting is located in the menu Edit → Settings → Transfers → Limit for concurrent uploads.
You may use rclone instead:
rclone copy --multi-thread-streams=8 --multi-thread-cutoff=64M -P "c:\Users\shiny\Downloads\Twisted Metal (USA).chd" storj:emulation/PS1
or with uplink:
uplink cp --parallelism=8 "c:\Users\shiny\Downloads\Twisted Metal (USA).chd" sj://emulation/PS1/
The number of threads (or concurrent uploads/downloads) affects how many connections will be opened and how much memory and CPU will be used. If your router cannot handle that many connections, the speed may drop or the number of retries may increase.
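As a rough illustration of why the router matters, here is a minimal sketch estimating the peak number of simultaneously open connections during a native upload, using the figures mentioned above (130 connections per in-flight segment; the function name and the simple multiplication are my own illustration, not part of any Storj tool):

```python
# Rough upper-bound estimate of simultaneously open connections
# during a native-integration upload, per the figures above.
CONNECTIONS_PER_SEGMENT = 130  # opened per in-flight segment on upload

def peak_connections(parallelism: int) -> int:
    """Approximate peak connection count for a given parallelism."""
    return parallelism * CONNECTIONS_PER_SEGMENT

print(peak_connections(8))   # parallelism 8 -> 1040 connections
print(peak_connections(1))   # parallelism 1 -> 130 connections
```

Even at the default parallelism, a consumer router's NAT table can fill up quickly, which is why lowering the concurrency often makes transfers more reliable rather than slower.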
Errors during download are usually related to an unstable connection. With a native integration, the download opens 35 connections for each segment and downloads pieces in parallel; once the first 29 are downloaded, the rest are canceled.
If you upload and download at the same time, you may overload your router if it cannot handle that many parallel connections. In that case, reduce the number of parallel transfers in the menu Edit → Settings → Transfers → Maximum simultaneous connections.
If the upstream or downstream bandwidth of your internet connection is small (less than 25 Mbit/s upstream or less than 100 Mbit/s downstream), it may be better to use an S3 integration instead; it is supported by FileZilla PRO, Cyberduck, rclone, and many others.
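If you decide to try the S3 integration with rclone, a minimal remote configuration sketch might look like this (the remote name `storj-s3` and the placeholder keys are assumptions for illustration; real credentials come from `uplink access register` or your satellite account):

```ini
# Hypothetical rclone remote "storj-s3" using Storj's hosted S3 gateway.
# Replace the placeholder keys with your own S3 credentials.
[storj-s3]
type = s3
provider = Storj
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = gateway.storjshare.io
```

You could then run, for example, rclone copy -P "c:\Users\shiny\Downloads\Twisted Metal (USA).chd" storj-s3:emulation/PS1. With the hosted gateway, erasure coding happens server-side, so far fewer connections are opened from your side, which suits slower or less stable links.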
See also