Unable to see files after mounting with rclone

Hello everyone,
I’ve been trying to mount my Storj buckets on Linux (Debian 11.3) with rclone 1.58.1. Sometimes the mount works, sometimes it doesn’t. Even when it is mounted, I can only see the 2 buckets I have, but not their contents. Both buckets have folders and files in them. I’m able to make a connection to Storj using the rclone config wizard and an access grant key.

I’m not sure if this is an rclone issue or storj issue or if there’s something I’m missing.

This is the mount command I’ve been using:

rclone mount arthaus_storj3 /etc/levideo --allow-other --log-file=/etc/rclonelog/storjmountlog.txt --log-level DEBUG

This is what the log of my last attempt says:

2022/06/15 20:53:58 DEBUG : /: >Lookup: node=arthaus-library/, err=<nil>
2022/06/15 20:53:58 DEBUG : arthaus-library/: Attr: 
2022/06/15 20:53:58 DEBUG : arthaus-library/: >Attr: attr=valid=1s ino=0 size=0 mode=drwxr-xr-x, err=<nil>
2022/06/15 20:54:00 DEBUG : /: Attr: 
2022/06/15 20:54:00 DEBUG : /: >Attr: attr=valid=1s ino=0 size=0 mode=drwxr-xr-x, err=<nil>
2022/06/15 20:54:00 DEBUG : /: ReadDirAll: 
2022/06/15 20:54:00 DEBUG : /: >ReadDirAll: item=4, err=<nil>
2022/06/15 20:54:00 DEBUG : /: Attr: 
2022/06/15 20:54:00 DEBUG : /: >Attr: attr=valid=1s ino=0 size=0 mode=drwxr-xr-x, err=<nil>
2022/06/15 20:54:00 DEBUG : /: Lookup: name="arthaus-library"
2022/06/15 20:54:00 DEBUG : /: >Lookup: node=arthaus-library/, err=<nil>
2022/06/15 20:54:00 DEBUG : arthaus-library/: Attr: 
2022/06/15 20:54:00 DEBUG : arthaus-library/: >Attr: attr=valid=1s ino=0 size=0 mode=drwxr-xr-x, err=<nil>
2022/06/15 20:54:39 DEBUG : arthaus-library/: Attr: 
2022/06/15 20:54:39 DEBUG : arthaus-library/: >Attr: attr=valid=1s ino=0 size=0 mode=drwxr-xr-x, err=<nil>
2022/06/15 20:54:39 DEBUG : arthaus-library/: ReadDirAll: 
2022/06/15 20:54:39 DEBUG : FS sj://: ls ./arthaus-library
2022/06/15 20:54:39 DEBUG : FS sj://: OBJ ls ./arthaus-library ("arthaus-library", "")
2022/06/15 20:54:39 DEBUG : FS sj://: opts &{Prefix: Cursor: Recursive:false System:true Custom:true}
2022/06/15 20:54:40 DEBUG : arthaus-library/: >ReadDirAll: item=2, err=<nil>
2022/06/15 20:54:50 DEBUG : /: Attr: 
2022/06/15 20:54:50 DEBUG : /: >Attr: attr=valid=1s ino=0 size=0 mode=drwxr-xr-x, err=<nil>
2022/06/15 20:54:50 DEBUG : /: Lookup: name="arthaus-library"
2022/06/15 20:54:50 DEBUG : /: >Lookup: node=arthaus-library/, err=<nil>
2022/06/15 20:54:50 DEBUG : arthaus-library/: Attr: 
2022/06/15 20:54:50 DEBUG : arthaus-library/: >Attr: attr=valid=1s ino=0 size=0 mode=drwxr-xr-x, err=<nil>
2022/06/15 20:54:53 DEBUG : /: Attr: 
2022/06/15 20:54:53 DEBUG : /: >Attr: attr=valid=1s ino=0 size=0 mode=drwxr-xr-x, err=<nil>

I would really appreciate any insight or help towards solving this.

I tried connecting to my Storj account using Cyberduck on Windows and faced the exact same issue. I can see the 2 buckets listed, but nothing in them. Both buckets have files and folders in them, which I can see through the web browser.

Any idea why storj could be refusing to show the files in the bucket?

Not seeing the content of a bucket usually indicates an incorrect encryption passphrase/access grant/key is being used. It is possible to use multiple encryption keys on the same bucket, with different content shown depending on the access grant used.


Thanks Stob.

Should I set up each bucket separately, or is it okay to set up one connection to the account and access the buckets through it?

When setting up rclone, it only asks me for the access grant. Where do I use the passphrase that is needed to open the bucket in the browser?

I haven’t done it in a while but you should be able to use the web to create the access grant, which can then be imported into rclone. Use the same encryption phrase and then the same files are visible.
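For reference, after importing a web-generated access grant, the Storj remote in rclone.conf typically ends up looking something like the sketch below. The remote name and grant value here are placeholders, not the poster’s actual values; the exact keys in your own config can be checked with `rclone config show`.

```
# ~/.config/rclone/rclone.conf
# "mystorj" is an example remote name; the access grant string
# already embeds the satellite address, API key, and encryption passphrase.
[mystorj]
type = storj
access_grant = <paste-your-access-grant-here>
```

Because the encryption passphrase is baked into the grant, two grants made with different passphrases will show different (or empty) contents for the same bucket.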


Thanks Stob for the detailed instructions and screenshots. This is exactly how I’ve been doing it. Should I try deleting my buckets, re-uploading the content, and trying again with the new encryption passphrase?

Hmm, strange. Yes, try again with just one or two files. You can also use the same access grant with uplink to list the bucket contents as a second verification.
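A minimal verification along those lines might look like the following; the bucket name is a placeholder, and exact subcommands vary between uplink versions, so check `uplink --help` for your build.

```
# Import the access grant under a name (newer uplink versions;
# some older ones used "uplink import" instead).
uplink access import main <paste-your-access-grant-here>

# List the buckets, then list the contents of one bucket.
uplink ls --access main
uplink ls --access main sj://my-bucket

# If the bucket listing is empty here too, the grant's embedded
# passphrase does not match the one the files were uploaded with.
```

If uplink shows the files but rclone does not, the problem is on the rclone side; if both show an empty bucket, the grant itself is the issue.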

Stob, I got it working. I tried making an access grant for a single bucket instead of both and used the same decryption key I use in the browser and it worked. Thank you for guiding me through this.

So is there a way to make 1 access grant for both buckets? If yes, which of the 2 decryption keys would I use? I had a 3rd randomly generated key I was using; I might have saved it when setting up my account and buckets.

While making the access grant, the permissions page does have an option to make 1 grant for all buckets, but I’m slightly lost on how to get it working.


Just don’t select a bucket when creating the access grant. It is an optional restriction. If you don’t restrict the access grant it will be for all buckets including buckets you might create in the future.
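Assuming such an unrestricted grant is configured as a single rclone remote (the remote name and mount point below are placeholders), mounting the remote root exposes every bucket at once:

```
# Mount the remote root rather than a single bucket; each bucket
# then appears as a top-level directory under the mount point.
rclone mount mystorj: /mnt/storj --allow-other

# e.g. /mnt/storj/<bucket-one> and /mnt/storj/<bucket-two>
```

This only works as expected if all the buckets were uploaded with the same encryption passphrase as the one embedded in the grant.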

You might also want to delete all existing buckets in the satellite UI even if they look empty. You might have some leftover files in these buckets. Deleting and recreating is the best way to make sure they get removed.


Hi littleskunk. Since I have 2 buckets, which of the 2 decryption keys do I use while making a multibucket grant?

Currently I have both buckets mounted with rclone using 2 grants, thanks to troubleshooting help from Stob. If I could get 1 grant to work on both buckets, that would be ideal.

Just delete both buckets and start from scratch. Create a new access grant with one of the encryption keys and then use it for both buckets. That should work just fine.

Thanks littleskunk. So if I understand correctly, you’re suggesting I make 2 new buckets that both use the same decryption key. That way 1 grant will work for both buckets, right?

I can give this a try.

Yes, exactly. The only reason you might want to use a different encryption key is to limit the damage if it ever gets leaked. For example, I am using 2 different encryption keys: one for my duplicati backups and one for everything else. In this situation I want to restrict access, and since I will never touch my duplicati backup with any other tools, there is also no downside for me.


True. In that case using 2 mounts with 2 grants seems like a better choice, don’t you think?

Wouldn’t they both get compromised at the same time?

The ingest bucket needs to be used by 2 servers; one will upload, the other will download. So maybe it’s better to keep the grants separate.


It seems there might be some changes regarding that in the future:

Maybe it’s a good idea to show some kind of information to a user who is presented with an empty bucket that the bucket is not really empty. However, this might not be desirable in all circumstances: if I share a bucket, maybe I don’t want other users to know whether there are still files that they simply are not allowed to see.


You can see that on the satellite dashboard. On the project view page you can see your buckets and confirm whether they are empty or not (in the Usage per bucket section); it shows the used space, used egress, and the number of stored objects and segments for each bucket.


Not sure if my problem is the same but…

I created a bucket with a passphrase, then I associated that bucket with an app that uploads files without a passphrase. When I go to the panel to see the files, I only see a folder that I created, but not what I uploaded into another folder.

If I log in through Cyberduck, I see all the files that were uploaded WITHOUT a passphrase, but in the Storj panel I cannot see them because it always asks me to enter a passphrase.

Wouldn’t it be better to allow the user to see the content with whichever passphrase they want?

Why isn’t there just 1 passphrase per bucket?

thanks

Well, it is up to you! You can use 1 passphrase for your bucket, or you can use many different passphrases for one bucket.
But only the files encrypted with the matching passphrase are visible.
This way you can have many users share a single bucket, but they can only see and access the files they have the passphrase for.

I think this is much better than only having the option of one passphrase per bucket.