Rclone Mount results in 14 GB download on dashboard

I mounted my Storj buckets to visually check some things.

I downloaded nothing, uploaded nothing, and did not edit (move/rename) any files or directories.

rclone mount --read-only sj: /mnt/storj

The command worked well and I could explore the files.

I refreshed my dashboard and to my surprise I had consumed 14 GB. I wrote before about strange download behaviour.

If I hover over the last peak, it states Dec 09 / 11.5 GB.

The mount was up for less than 5 minutes, and over my connection I could not download 14 GB that quickly.

I will, of course, wait 48 hours to see if this resolves (as per the advice here, although in my previous post the excessive download did not resolve).

Any thoughts?



I refreshed the dashboard and the download total now matches the progress bar at 14.3 GB:



Do you have any indexing service running? I assume you are using some sort of Linux; something built into GNOME (like Tracker) or KDE (like Baloo) might try downloading data to scan. Even just opening a directory with a file manager that automatically creates thumbnails may download data.


I would recommend mounting it with the VFS cache. It won’t prevent downloads by indexers, thumbnail builders, or GUI explorers, but at least it won’t re-download data every time you open a folder.

By the way, is it possible that you shared something from your bucket, or the bucket itself?
Did you use the object browser in the satellite UI? Especially previews?


Thanks, all excellent suggestions.

I am using Linux Mint.

There are no thumbnails in the directories I browsed on the Storj mount. I compress and split pictures into 5 GB archives to reduce segments and make the data easier to manage. I turned off remote thumbnails just in case. I do have a directory with pictures in (current year) that are not in archives, but I did not browse it on either occasion.

I am using the Thunar 4.18.4 file browser. I also removed Tracker and killed the process, as I do not need indexing. I can easily reinstall it if other packages require it.

I am not sharing anything — these buckets are private. No files are shared and no static pages are hosted.

I refreshed the dashboard this morning and the downloads have reduced to 6 GB.


I have reproduced the process now using:

rclone mount --read-only --vfs-cache-mode full sj: /mnt/storj

I browsed to my directories again (the same ones as before) and listed the contents.



The mount lasted 3 minutes.

The dashboard now reports an additional 6 GB downloaded, totaling 12 GB.


I will report back later to see if the download count reduces.

It may be important to note that I am accessing Storj with S3 credentials.



Strange things are happening.

A bucket was cached in the rclone cache directory, which reports 321 GB. That is impossible, as my internal NVMe does not have that capacity. All the archives report 5 GB as expected, but when I compare hashes they are all different, and the archives do not open, so I assume this is some caching magic I do not understand. Maybe placeholders.

There are metadata files totalling 16 kB.

This likely has nothing to do with the issue, but I highlight it as a point of interest.


This is expected. When libuplink (under the hood) requests some data (metadata), it allocates the whole size. Later it’s updated with the actual usage from the nodes.
Sorry to say it, but you really did download 6 GB of your data; otherwise the nodes would not have proof (signed orders), which has now settled into your usage.

It doesn’t matter; any usage will be accounted, regardless of the protocol.

It’s likely the cache of expanded data. But it’s interesting to see how you calculated it.


The only explanation is that rclone started downloading. I will have to follow up on their forum.

I can take the loss as learning. Luckily, Storj only charges for download, with no transaction fees:

6 GB = $0.04
12 GB = $0.08

(I know egress is billed in GiB, but for simplicity I used GB.)
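For anyone checking the arithmetic, here is a quick sketch. The $0.007/GB ($7/TB) egress rate is an assumption, not stated in this thread, so verify it against current Storj pricing:

```python
# Hypothetical egress cost check; the rate below is an assumption,
# not taken from this thread -- check Storj's current pricing page.
RATE_USD_PER_GB = 0.007

def egress_cost(gb: float) -> float:
    """Cost in USD for `gb` decimal gigabytes of egress."""
    return gb * RATE_USD_PER_GB

def gb_to_gib(gb: float) -> float:
    """Convert decimal gigabytes (10^9 bytes) to binary gibibytes (2^30 bytes)."""
    return gb * 1e9 / 2**30

for gb in (6, 12):
    print(f"{gb} GB = {gb_to_gib(gb):.2f} GiB -> ${egress_cost(gb):.2f}")
```

This reproduces the figures above: 6 GB rounds to $0.04 and 12 GB to $0.08, and the GB-vs-GiB difference is under 7%, so using GB for a rough estimate is fine.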

I can survive as above.

I will avoid mount for now. I usually just upload as you can see from my dashboard.


It’s also calculated in GB-hours to make it more complicated (so you pay less), but that’s only for storage.
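A small sketch of how GB-hour accounting works in principle. The 720-hour month is an assumed billing convention here, not something stated in this thread:

```python
# Illustrative GB-hour storage accounting.
HOURS_PER_MONTH = 720  # assumed billing convention (30-day month)

def gb_hours(gb_stored: float, hours: float) -> float:
    """Raw usage: gigabytes stored multiplied by hours stored."""
    return gb_stored * hours

def monthly_equivalent_gb(total_gb_hours: float) -> float:
    """Average GB stored over the month, which the per-GB-month rate applies to."""
    return total_gb_hours / HOURS_PER_MONTH

# e.g. 500 GB stored for half a month (360 h) averages out to 250 GB-month
print(monthly_equivalent_gb(gb_hours(500, 360)))
```

The point is that data stored for only part of the month is billed proportionally, which is why this scheme works out cheaper than flat per-GB billing.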

But, but… it’s convenient!

This is probably to be expected. rclone uses sparse files and only downloads the parts of files that are actually in use. Because files are sparse, they logically have the full size as stored on the remotes, but physically only the parts that were in use are stored in the cache directory. Then, depending on the tool you use, you might see the logical or physical size of the cache directory.
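The logical-vs-physical distinction is easy to see with a plain sparse file. A minimal sketch, assuming a filesystem with sparse-file support (as most Linux filesystems have):

```python
import os
import tempfile

# Create a 100 MiB file that is entirely a hole: truncate extends the
# logical size without allocating any data blocks.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.truncate(path, 100 * 2**20)

st = os.stat(path)
logical = st.st_size            # what `ls -l` (and rclone's cache) reports
physical = st.st_blocks * 512   # what `du` reports: blocks actually allocated

print(f"logical={logical} bytes, physical={physical} bytes")
os.remove(path)
```

This mirrors the 321 GB observation: a tool summing logical sizes sees the full size of every cached object, while the disk only holds the byte ranges that were actually read. It also explains the hash mismatches — hashing a sparse cache file reads the holes as zeros, not the real archive contents.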


Excellent thought.

True. I was recently using Rclone Browser, but as it is deprecated I want to try rclone mount instead. I do use it for OneDrive, though. SJ is my backup and archive solution, so I do not need it mounted; I am especially nervous about the lack of versioning in SJ, which I hope will be resolved soon.

Update for anyone following this thread: the 6 GB download reduced to 2.6 GB.


The nodes finally submitted their orders.

P.S. This is interesting. I’m using the mount myself under Windows and didn’t have much egress when I just browsed “folders”; however, I didn’t open them as a gallery with thumbnails…
