Python subprocess to Uplink CLI vs. bash: S3 permissions not working

Hi everyone,

I am trying to create S3 grants from a Flask app. The Python bindings don't support S3 registration, so I shell out to the Uplink CLI with a subprocess. The S3 keys generated from the subprocess don't work, but the same command works fine when run in bash/terminal.

from subprocess import Popen, PIPE, STDOUT

# Run the uplink CLI, merging stderr into stdout so errors are captured too
p = Popen(
    f'uplink share --readonly=false --disallow-deletes --not-after +1h '
    f'--register sj://{MY_BUCKET}/{uuid}/ '
    f'--auth-service=https://auth.us1.storjshare.io',
    shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True,
)
textToParse = p.stdout.read()  # blocks until the CLI exits
print(p.args)
print(textToParse)

When I test the generated S3 keys with the AWS CLI, they don't work. However, if I copy the command from p.args and run it again in a terminal, the keys it produces work.
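For what it's worth, invoking the CLI with an argument list (no shell=True) and parsing the registered credentials out of the captured output can make failures easier to spot, since a non-zero exit raises immediately. This is only a sketch: the label names matched below (Access Key ID, Secret Key, Endpoint) are assumptions about the CLI's human-readable output, and run_share is a hypothetical helper, not part of my app.

```python
import re
import subprocess

def parse_share_output(text: str) -> dict:
    """Pull S3 credentials out of `uplink share --register` output.

    The labels matched here are assumptions about the CLI's output format;
    adjust the pattern to whatever your uplink version actually prints.
    """
    creds = {}
    for line in text.splitlines():
        m = re.match(r"\s*(Access Key ID|Secret Key|Endpoint)\s*:\s*(\S+)", line)
        if m:
            creds[m.group(1)] = m.group(2)
    return creds

def run_share(bucket: str, prefix: str) -> dict:
    """Run the share command without a shell, so there are no quoting surprises."""
    cmd = [
        "uplink", "share",
        "--readonly=false", "--disallow-deletes",
        "--not-after", "+1h",
        "--register", f"sj://{bucket}/{prefix}/",
        "--auth-service=https://auth.us1.storjshare.io",
    ]
    # check=True raises CalledProcessError if the CLI exits non-zero
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return parse_share_output(out.stdout)
```
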

My Flask code also holds a Python-bindings session through Uplink to Storj that I use for manipulating uploaded files, but I don't think it has any impact on the CLI.

Separating the keys used by the Uplink CLI from those used by the Uplink Python bindings did the trick. Something about the concurrent sessions was breaking the backend permission engine; I'm not sure why.
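In case it helps anyone, one way to keep the CLI on its own credentials is to point it at a dedicated config directory so its access grant never overlaps with the one the bindings hold. A minimal sketch, assuming your uplink build accepts a --config-dir flag (check `uplink --help` on your install; build_share_cmd is a hypothetical helper):

```python
def build_share_cmd(bucket: str, prefix: str, config_dir: str) -> list:
    """Build an uplink share command pinned to its own config directory.

    Assumption: the installed uplink CLI supports --config-dir for selecting
    which stored credentials to use, separate from the Python bindings' grant.
    """
    return [
        "uplink", "share",
        "--config-dir", config_dir,
        "--readonly=false", "--disallow-deletes",
        "--not-after", "+1h",
        "--register", f"sj://{bucket}/{prefix}/",
        "--auth-service=https://auth.us1.storjshare.io",
    ]
```

You would then pass the result to subprocess.run as an argument list, e.g. build_share_cmd(MY_BUCKET, uuid, "/srv/uplink-cli").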