No CORS header set in dashboard API response headers

I’m trying to make a simple React web app to show node stats by querying the dashboard API. However, my browser gives a CORS error when I make the request:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://192.168.42.8:14002/api/sno. (Reason: CORS header 'Access-Control-Allow-Origin' missing).

I’m not an expert in this, but from what I’ve found, the dashboard API’s response headers should include “Access-Control-Allow-Origin” set to “*” for this to work.

The API itself works fine, I tested it with a Python script.
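As a sketch of that kind of check (Python stdlib only; the node address is the one from my error message, and the helper name is made up): the request itself succeeds, but the response lacks the header the browser needs.

```python
# The dashboard API responds fine to a plain HTTP client; only the
# browser enforces CORS. This checks whether the response carries the
# header the browser looks for.
import urllib.request

def cors_header(headers):
    """Return the Access-Control-Allow-Origin value, or None if absent."""
    return {k.lower(): v for k, v in headers.items()}.get("access-control-allow-origin")

def check_dashboard(url="http://192.168.42.8:14002/api/sno"):
    """Fetch the node stats endpoint and report its CORS header (if any)."""
    with urllib.request.urlopen(url) as resp:
        return cors_header(resp.headers)

# check_dashboard() returning None is exactly why the browser blocks the fetch.
```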

Any ideas? Am I doing something wrong here or should I make a suggestion to the devs?


You are correct! This isn’t implemented AFAIK. Ideally we’d allow users to configure CORS and implement the necessary endpoints for the OPTIONS requests. This would be a good thing for a feature request.


So what is the current status of this feature? Is there any ability to configure the value of “access-control-allow-origin” property in Response Headers?

Currently the response header looks like this:

access-control-allow-origin: *

Am I right in saying that anyone who has a link to my files in the bucket (for example, my huge .geojson files in a web application) can easily fetch them, and I will pay for the storage and egress?

Context: I use a folder in the bucket for web hosting of a client-side web application; a few internal public folders contain .geojson files, which are fetched by my application.

I have shared that folder with this command:

uplink share --dns demo.domain.com sj://webapp/demo --not-after=none

Note that CORS does not prevent egress. It controls whether other websites’ scripts may read your data in the browser, but for so-called simple requests the browser performs the request first and only afterwards checks the Access-Control-Allow-Origin header, so the download has already happened. And, of course, it does not stop non-browser HTTP clients from fetching your data at all.

So CORS will not protect your egress costs from abuse.
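That ordering can be sketched as a tiny helper (hypothetical names; real browsers implement this inside their fetch stack):

```python
def cors_readable(request_origin, allow_origin_header):
    """Decide whether a browser lets a page read a cross-origin response.

    For a simple request the browser sends the request first and checks
    Access-Control-Allow-Origin afterwards, so the server has already
    served the bytes (and paid the egress) either way.
    """
    if allow_origin_header == "*":
        return True
    return allow_origin_header == request_origin

# A permissive header ("*") makes the response readable from any origin:
cors_readable("https://evil.example", "*")                    # True
cors_readable("https://evil.example", "https://app.example")  # False
```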


My question was about slightly different things :wink:

  1. Can I set the value of “access-control-allow-origin” for some resources in the bucket?
  2. Can other developers build their own applications and just use links to my .geojson files in MY bucket?

We support a permissive CORS policy by default, so there is nothing to configure. If you want a stricter CORS policy, you may use a reverse proxy, or run a self-hosted S3-compatible gateway and configure the CORS policy in a reverse proxy in front of the gateway.
You may do the same for the self-hosted linksharing service too.
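As a rough sketch of the reverse-proxy idea (assuming nginx in front of a self-hosted gateway or linksharing service; the hostnames and port are placeholders, not real endpoints):

```nginx
# Hypothetical nginx reverse proxy enforcing a stricter CORS policy.
# Only https://demo.domain.com may read responses cross-origin.
server {
    listen 443 ssl;
    server_name files.domain.com;

    location / {
        proxy_pass http://127.0.0.1:7777;  # self-hosted gateway / linksharing
        # Hide the upstream's permissive header and set a stricter one:
        proxy_hide_header Access-Control-Allow-Origin;
        add_header Access-Control-Allow-Origin "https://demo.domain.com" always;
    }
}
```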

Yes, this is how linksharing works.

As I answered above, it’s not needed: we have a permissive policy, so there is nothing to configure; it just works.
If you want to restrict something, you need to host a server yourself, either as a reverse proxy or as a normal web server.
You may also generate an access key restricted to only the listed objects.
For example, you have three data files in your bucket webapp:

  • demo/public.geojson
  • demo/public2.geojson
  • demo/private.geojson

then you may generate an access key that allows public access only to the two public ones:

uplink share --dns demo.domain.com sj://webapp/demo/public.geojson sj://webapp/demo/public2.geojson --not-after=none

And anyone would be able to download only these two public objects, but not the private one.
To allow access to the private one, you may generate a separate access grant:

uplink share sj://webapp/demo/private.geojson --not-after=none

and either give this access grant to the partner or, better, register it and hand over S3 credentials instead:

uplink share sj://webapp/demo/private.geojson --not-after=none --register

But if you want them to be able to use just a URL, you may generate one:

uplink share sj://webapp/demo/private.geojson --not-after=none --url

It’s also possible to set up a separate custom domain, but in that case the file becomes available to everyone who finds your second domain.

You may also rename the private object to use a different prefix, not demo but private for example; then you may share the prefix instead of listing every single object in the uplink share command, i.e.:

uplink share sj://webapp/private/ --not-after=none --url
uplink share --dns demo.domain.com sj://webapp/demo/ --not-after=none

Please note: if you want them to have a raw URL without the Storj preview, you need to replace /s/ with /raw/ in the generated URL.
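That swap is just a string substitution on the generated URL. A sketch (the domain and path below are placeholders, not a real share link):

```python
def to_raw(share_url):
    """Turn a linksharing preview URL (/s/) into a direct-download one (/raw/)."""
    return share_url.replace("/s/", "/raw/", 1)

print(to_raw("https://link.example.com/s/ACCESSKEY/webapp/demo/public.geojson"))
# https://link.example.com/raw/ACCESSKEY/webapp/demo/public.geojson
```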


Thanks a lot for the detailed explanation and examples. Yeah, Storj requires a somewhat different point of view compared to my previous experience with cloud storage.

This approach totally works for my needs.

I have created 2 buckets:

  1. webhosting – used for hosting of a client web application (Vue.js)
  2. datastorage – used for storing files with spatial information

To the “datastorage” bucket I uploaded not only .geojson files but also huge vector-tile datasets (.pbf) totaling 6500 individual files. And that works like a charm. So my experiment with a “serverless” web-GIS application is a success.

That is a very valuable note.
