Do you have any indexing service running? I assume you are using some flavor of Linux; an indexer built into GNOME (like Tracker) or KDE (like Baloo) might download data in order to scan it. Even just opening a directory in a file manager that automatically creates thumbnails can download data.
I would recommend mounting it with the VFS cache. That would not prevent downloads by indexers, thumbnail builders, or GUI file browsers, but at least it would not re-download data every time you open a folder.
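For reference, a minimal sketch of such a cached, read-only mount (the remote name `sj:` and mountpoint `/mnt/storj` match the command used later in the thread; the cache-size and dir-cache flags are optional additions, with placeholder values):

```shell
# Read-only mount with the full VFS cache, so repeated reads of the
# same data are served locally instead of being re-downloaded.
# --vfs-cache-max-size caps the local cache on disk;
# --dir-cache-time keeps directory listings cached so browsing
# does not hit the remote every time.
rclone mount sj: /mnt/storj \
  --read-only \
  --vfs-cache-mode full \
  --vfs-cache-max-size 10G \
  --dir-cache-time 1h
```

This is a command sketch, not a runnable test; it requires a configured rclone remote and a FUSE-capable system.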
By the way, is it possible that you shared something from your bucket or the bucket itself?
Did you use the object browser in the satellite UI? Especially previews?
There are no thumbnails in the directories I browsed on the Storj mount. I compress and split pictures into 5 GB archives to reduce segments and make the data easier to manage. I turned off remote thumbnails just in case. I do have a directory with pictures from (current year) that is not in archives, but I did not browse it on either occasion.
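As a side note, the compress-and-split workflow above can be sketched like this (a hedged illustration with tiny placeholder sizes and a dummy file; in real use the chunk size would be `-b 5G`):

```shell
# Sketch of the archive-and-split workflow (small sizes for demonstration).
set -e
workdir=$(mktemp -d)
mkdir "$workdir/pictures"
head -c 3145728 /dev/urandom > "$workdir/pictures/img.raw"   # 3 MB dummy file

# Compress the directory and split the stream into fixed-size parts.
tar -C "$workdir" -czf - pictures | split -b 1M - "$workdir/pictures.tar.gz.part-"

# Concatenating the parts in order restores the archive byte-for-byte;
# here we just list its contents to verify it is readable.
listing=$(cat "$workdir"/pictures.tar.gz.part-* | tar -tzf -)
echo "$listing"
rm -rf "$workdir"
```

Splitting the compressed stream means each part is an opaque chunk; only the concatenation of all parts is a valid archive, which matches the observation that individual 5 GB files do not open on their own.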
I am using the Thunar 4.18.4 file browser. I also removed Tracker and killed its process, as I do not need indexing. I can easily reinstall it if other packages require it.
I am not sharing anything - these buckets are private. No files are shared and no static pages are hosted.
I refreshed the dashboard this morning and the downloads have reduced to 6 GB.
I have reproduced the process now using:
rclone mount --read-only --vfs-cache-mode full sj: /mnt/storj
I browsed to my directories again (the same ones as before) and listed the contents.
The mount lasted 3 minutes.
The dashboard now reports an additional 6 GB downloaded, totalling 12 GB.
I will report back later to see if the download count reduces.
It may be important to mention that I am accessing Storj with S3 credentials.
A bucket was cached in the rclone cache directory, which reports 321 GB. That is impossible, as my internal NVMe does not have that capacity. All the archives report 5 GB as expected, but when I compare hashes they are all different, and the archives do not open, so I assume this is some caching magic I do not understand. Maybe placeholders.
There are metadata files totalling 16 kB.
This likely has nothing to do with the issue, but I highlight it as a point of interest.
This is expected. When libuplink (used under the hood) requests some data, the whole size is allocated (as metadata); later it is updated with the actual usage reported by the nodes.
Sorry to say, but you really did download 6 GB of your data; otherwise the nodes would not have proof (signed orders), which has now settled into your usage.
It doesn’t matter… any usage is accounted for, regardless of the protocol used.
It’s likely the cache of expanded data. But it would be interesting to see how you calculated it.
This is probably to be expected. rclone uses sparse files and only downloads the parts of files that are actually in use. Because files are sparse, they logically have the full size as stored on the remotes, but physically only the parts that were in use are stored in the cache directory. Then, depending on the tool you use, you might see the logical or physical size of the cache directory.
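That logical-vs-physical difference can be shown with a minimal, self-contained sketch (not rclone-specific; just a sparse file created with `truncate`, assuming GNU coreutils on Linux):

```shell
# Create a sparse file: 1 GiB logical size, (almost) no blocks on disk.
f=$(mktemp)
truncate -s 1G "$f"

# Logical size in bytes (what ls -l reports).
logical=$(stat -c %s "$f")
# Physical size in bytes: allocated 512-byte blocks (%b) times the
# block unit (%B) - roughly what du reports.
physical=$(( $(stat -c %b "$f") * $(stat -c %B "$f") ))

echo "logical=$logical physical=$physical"
rm -f "$f"
```

The same idea explains why the cache directory can "contain" 321 GB on a smaller NVMe: tools that sum logical sizes count the holes, while tools that count allocated blocks do not.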
True. I was recently using Rclone Browser, but as it is deprecated I want to try rclone mount instead. I do use a mount for OneDrive, though. SJ is my backup and archive solution, so I do not need it mounted; I am especially nervous about the lack of versioning in SJ, which I hope will be resolved soon.
Update for anyone following this thread: the 6 GB download reduced to 2.6 GB.