Rclone v1.58.1 ecclient errors?

Hi Storj,

So I’m having a funny issue using Rclone with Storj: I’m getting hundreds of errors like the ones below, and while things do work, uploads are sometimes painfully slow. It can be fine for a few minutes or hours, then it logs 2-4 of these errors, stalls for a bit, then starts uploading again.

I’m assuming it’s because it can’t upload successfully to the 80 nodes? Is there any way I can increase this, or is there a fallback so that if it fails on 80 nodes, it gets another 40? Or is there a timeout? It seems to wait ages for the nodes to fail.
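For anyone wondering what the "success threshold" in the error actually means, here is a purely illustrative sketch (this is not the uplink implementation; the threshold of 80 is taken from the error messages in this thread):

```python
# Illustrative sketch of how an erasure-coded upload decides success:
# a segment is split into pieces sent to many nodes in parallel, and
# the upload only counts as successful once a "success threshold" of
# pieces (80 in the errors below) has been stored. Waiting on the
# slowest piece transfers is what looks like a stall.

def segment_upload_succeeds(successful_puts: int, success_threshold: int = 80) -> bool:
    """Return True if enough pieces were stored for the segment to succeed."""
    return successful_puts >= success_threshold

# The two errors in the log correspond to:
print(segment_upload_succeeds(77))  # False: "successful puts (77) less than success threshold (80)"
print(segment_upload_succeeds(72))  # False
print(segment_upload_succeeds(80))  # True
```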

root@rclonedev03:/mnt/# rclone --version
rclone v1.58.1
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.13.0-1023-azure (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.9
- go/linking: static
- go/tags: none

I’m also using caching, so I’m not sure if that is causing an issue; my mount entry (fstab) is below:

000:000/ /mnt/stroj03dmp/ rclone rw,noauto,nofail,x-systemd.automount,args2env,vfs_cache_mode=full,cache_dir=/mnt/cache03/,vfs_cache_max_size=1024G,vfs_cache_poll_interval=5m,vfs_cache_max_age=48h,config=/etc/rclone/rclone03/rclone.conf,log_level=INFO,log_file=/mnt/logs/rclone03.logs/rclone.log,allow_other 0 0
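For readability, that fstab entry is equivalent to roughly this foreground invocation (a sketch, assuming the same paths; `000:000/` is the redacted remote:path from the fstab line):

```shell
# Equivalent direct rclone mount command for the fstab entry above.
# "000:000/" is the redacted remote:path, kept as-is.
rclone mount 000:000/ /mnt/stroj03dmp/ \
  --vfs-cache-mode full \
  --cache-dir /mnt/cache03/ \
  --vfs-cache-max-size 1024G \
  --vfs-cache-poll-interval 5m \
  --vfs-cache-max-age 48h \
  --config /etc/rclone/rclone03/rclone.conf \
  --log-level INFO \
  --log-file /mnt/logs/rclone03.logs/rclone.log \
  --allow-other
```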

When it’s slow it just seems to stall, but then it will eventually pick up. The cache means this doesn’t impact usage, but it would be nice to see the queue clear :slight_smile:

2022/05/25 10:48:00 INFO  : vfs cache: cleaned: objects 194 (was 194) in use 194, to upload 190, uploading 4, total size 39.961Gi (was 39.961Gi)
2022/05/25 10:53:00 INFO  : vfs cache: cleaned: objects 194 (was 194) in use 194, to upload 190, uploading 4, total size 39.961Gi (was 39.961Gi)
2022/05/25 10:58:00 INFO  : vfs cache: cleaned: objects 194 (was 194) in use 194, to upload 190, uploading 4, total size 39.961Gi (was 39.961Gi)
2022/05/25 11:03:00 INFO  : vfs cache: cleaned: objects 194 (was 194) in use 194, to upload 190, uploading 4, total size 39.961Gi (was 39.961Gi)
2022/05/25 11:08:00 INFO  : vfs cache: cleaned: objects 194 (was 194) in use 194, to upload 190, uploading 4, total size 39.961Gi (was 39.961Gi)
2022/05/25 11:09:52 ERROR : FS xxxxx: cp input ./Some/File/Path/Here/3632333#at#342#a4d301.part [HashesOption([])]: uplink: stream: ecclient: successful puts (77) less than success threshold (80)
2022/05/25 11:09:59 ERROR : FS xxxxx: cp input ./Some/File/Path/Here/3632333#at#342#dfd302.part [HashesOption([])]: uplink: stream: ecclient: successful puts (72) less than success threshold (80)

So as above, you can see it stalls the upload with no data going out, then logs a few of those errors, then picks itself up and continues.

This isn’t urgent, I’ll still keep using Rclone as it eventually gets there - just would be nice to not have the hang.

thank you

CP

This has been an issue for as long as I can remember. When Chia was a thing I tried using rclone to statically mount a hard drive and mine with it, and I was never successful uploading larger files. Using uplink is really the only way to upload larger files.
It always started well but never finished.

This is an indication of low upstream bandwidth (less than 20 Mbit/s). If that’s the case, you need to configure rclone to use GatewayMT, or at least configure fewer parallel threads for uploads.
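To make that advice concrete, here is a sketch of a GatewayMT setup with reduced upload parallelism. The remote name `storj-s3`, bucket name, and credentials are placeholders you would replace with your own:

```shell
# Sketch, assuming a Gateway MT (S3-compatible) remote in rclone.conf:
#
# [storj-s3]
# type = s3
# provider = Other
# access_key_id = YOUR_GATEWAY_MT_ACCESS_KEY
# secret_access_key = YOUR_GATEWAY_MT_SECRET
# endpoint = gateway.storjshare.io
#
# Then throttle concurrency so a slow uplink is not split across
# too many parallel streams at once:
rclone copy /local/path storj-s3:your-bucket \
  --transfers 2 \
  --s3-upload-concurrency 2 \
  --s3-chunk-size 64M
```

With GatewayMT the erasure coding happens server-side, so your connection only carries each byte once instead of the ~2.7x expansion of native uploads.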


Thanks Alexey, bandwidth won’t be the issue, but I will try adjusting parallel uploads down. I think I also need to change the chunk encoding size to 64 MB, as I think I’ve got it on 128 MB.

CP