Share a specific folder in a bucket with a FileZilla access grant

I have some files uploaded with FileZilla in a folder.
I want to share only this folder with my collaborator using an access grant for FileZilla. Can I do this?

Thank you.

This is for the free FileZilla

This is for FileZilla Pro

Yes, I know,
but that only allows sharing the whole bucket instead of a folder.

Hey @openaspace!

I think there is currently no way within the Satellite UI to generate an access restricted to a folder. However, this should not be an issue if you generate an access via the uplink share command: share | Storj Docs e.g.

uplink share sj://bucket/path/to/folder/

This way, you will be able to generate an access restricted to whatever level you want - you can restrict it to a single bucket, folder, or even file.
Let me know if you need any more help!
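To make this more concrete, here is a sketch of how a folder-restricted share might look with the uplink CLI. The bucket and folder names are placeholders, and the exact flags may vary by uplink version, so check `uplink share --help` on your install:

```shell
# Generate a read-only access restricted to a single folder,
# expiring in 7 days (bucket/folder names are placeholders).
uplink share --readonly --not-after +168h sj://my-bucket/client-folder/

# Adding --url registers the access with the public link-sharing
# service and prints a browsable URL instead of just a grant.
uplink share --url --readonly --not-after +168h sj://my-bucket/client-folder/
```

The printed access grant (or URL) can then be handed to the collaborator; they will only be able to read objects under that prefix.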


Thank you.
Yesterday I tried the app Mountain Duck (a port of Cyberduck) configured with S3 access, and used its integrated share URL function. I can see the direct link to the file, but in the Storj auth control panel I can't see any new access credential.

Also, on the "My Accesses" page, I see some share links, created from the control panel, for files that are inside deeply nested folder trees, and I can't delete them because
the folder path is too long to read in the web page.

How can I list all the shared links using the command line?

That is expected. You are actually creating a new access derived from the original one you created in the Satellite UI (I am assuming this is what you are referring to as “auth storj control panel”).
There is no interaction with the satellite necessary in order to restrict an existing access, so you wouldn’t see one listed if you used another tool to add restrictions on an existing access.

Referring to the external software (in my case Mountain Duck): is there really no way to list the active shared links so I can revoke them?

I am not sure if there is a way to list all the shared links specifically - though I can do some investigation and try to figure this out.
But using the uplink tool, you should be able to use the ls command to get the full path of any file(s). I can elaborate with more detail as needed.
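As an illustration of the `ls` suggestion above (bucket name is a placeholder; an access must already be configured locally):

```shell
# List every object in the bucket with its full path,
# which you can then match against the long names in the UI.
uplink ls --recursive sj://my-bucket/
```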

Not from my knowledge, unless you revoke the original access that it is derived from. Of course, deleting the original access will also remove access from anything you shared that originated from that access.
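For reference, the uplink CLI's `access` subcommands may help here. This is a sketch only: it lists accesses saved in your local uplink config (not accesses derived by third-party tools such as Mountain Duck), and revocation support may depend on your uplink version:

```shell
# List the accesses saved in the local uplink configuration.
uplink access list

# Revoke an access on the satellite; this also invalidates
# every access derived from it (placeholder argument).
uplink access revoke <access-name-or-grant>
```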

I think uplink should absolutely have a function to show and verify the active shared links, so you can have full control of your shares!

Yes, but I can't delete the access credential from the control panel, because I can't read the full share name.

I solved it by using the Firefox inspect tool to read the full share name from the page source, then pasting it into the delete confirmation dialog!! :smile:


Haha, glad you were able to figure it out! I can bring this feedback to our design team to see if there is a way we can improve the experience of the delete confirmation for large access names. If you have any screenshots, please share them. If not, no worries :slight_smile:

Thank you, I will look into this further. It’s an area I’m not an expert in, so I should brush up on my knowledge around getting information about what has already been shared via uplink.

The problem is that there is no time to manage all these problems in detail, because Storj is used for work; it is a tool for the job. There is no headspace to remember how to do a simple operation like creating a shared link while you are working on a job.

I notice a few different issues that have come up in this conversation. I would like to better understand what your pain points are.

  • unable to create an access for a specific file/folder from the Satellite UI access page
  • unable to see a list of derived accesses from the Satellite UI
  • difficulty deleting an access with a very long name from the Satellite UI (had to open inspector)

Is this an accurate reflection of your current difficulties, or did I miss something?

The most important one is being able to see the derived accesses for file/folder shares.

Thank you for the feedback. This is something that I think would be great to support in the future.
But for now, the best way that you can handle this as a customer is to keep track of which accesses you derive from the original, so that you know what access needs to be deleted in the UI in order to invalidate a derived access.
The best way to “automate” this some way might be to set an expiration time for any derived accesses. That way, they will become invalid after a certain period of time, and the original access they were derived from will still be valid. If you derive an unrestricted access from another, the derived access will always be valid until the original access is invalid.
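A sketch of deriving a time-limited, restricted access from an existing one, using the uplink CLI's `access restrict` command (names and durations are placeholders; verify the flags with `uplink access restrict --help` on your version):

```shell
# Derive a read-only access limited to one prefix that expires
# in 30 days; the original access stays valid afterwards.
uplink access restrict --readonly --not-after +720h \
    --prefix sj://client-bucket/deliverables/
```

Because the expiration is baked into the derived grant, it stops working on its own without you having to track it down and delete anything in the UI.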

Thank you.
A few other questions:

  • Is there a limit on the number of buckets per project? I need to create a bucket for each client.
  • Is there a download speed limit for a single bucket? I'm planning to transfer big files to my customers, around 2.5 TB per week for each of 10 to 20 clients at the same time.
  • Is it possible to set a traffic quota alert?
  • And can I set a DNS hostname for a non-public storage bucket?