Adding to this: if I copy the files again from the other cloud, one by one, using rclone, they “cease to be” locked objects. The images below are in chronological order:
Yes, this is a pending multipart upload (Understanding Multipart Upload - Storj Docs). rclone uses the multipart upload interface to upload files, as it’s often more reliable than a single-call upload. It’s possible that rclone failed to clean up an interrupted upload, which would explain what you are seeing. You can try removing it manually with the uplink rm command and the --pending flag.
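For example, assuming the stuck object shows up in a pending listing under a key like sj://bucket_name/path/to/object (both are placeholders here), the manual cleanup would look like:

uplink rm --pending sj://bucket_name/path/to/object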
What is strange is that the “object locked” message appears immediately after deleting the file. Is it some kind of cache? (No, it’s not my browser; I tested it in 3 different ones.)
After several tests, both with the new test bucket and with the files that originally led me to discover this bug (?), I give up. I’m just going to trust what uplink and rclone tell me; the web interface definitely has something fishy about it.
I copied exactly 294 files and confirmed this count with rclone, with uplink, with a direct mount via Mountain Duck, and with the web interface itself:
uplink ls --recursive -o json --access all sj://bucket_name | wc -l
294
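For reference, the rclone-side cross-check can be done the same way (assuming the remote is named storj: in the rclone config; rclone ls prints one line per object, so wc -l gives the count):

rclone ls storj:bucket_name | wc -l
294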
I made sure, using uplink, that there were no “ghost files”:
uplink ls --pending --recursive -o json --access all sj://bucket_name | wc -l
0
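As an additional cross-check from the rclone side (a sketch, assuming the bucket is accessed through rclone’s S3-compatible backend; the remote name storj-s3: is a placeholder), rclone cleanup removes unfinished multipart uploads on S3-type remotes:

rclone cleanup storj-s3:bucket_name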
Still, the web interface shows 231 locked files:
Note: the copy of these files was made yesterday.
Even though this was done yesterday with rclone’s default number of simultaneous transfers (4), and may have triggered the multipart upload issue mentioned above, why does the ls --pending command show zero results? Is it the command that is returning wrong results, or the web interface? IMHO it is the web interface. Therefore, I will continue to use command-line access only.
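If the multipart path is the suspect, one possible workaround (a sketch, assuming access through rclone’s S3-compatible backend; source:path and the remote name are placeholders) is to reduce concurrency and raise the multipart cutoff so that smaller files are uploaded in a single call instead:

rclone copy --transfers 1 --s3-upload-cutoff 1G source:path storj-s3:bucket_name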