Just have a server and don’t bother with running a satellite, appeasing SNOs, etc. You will also get better performance with local storage than with a local server that needs to pull data from remote nodes.
I do not think I would be able to sell this idea to my boss. If a client wants “cloud” storage, they will use Google, Microsoft or even Storj DCS instead of our own satellite, which would have the same reliability as storing the data locally, but lower performance and more complexity.
The problem is that this is not “decentralization”, which is one of the selling points of using this type of setup instead of just renting a VPS and storing the data there.
Erasure coding is not encryption. Uplink currently works like this: data → encryption → splitting and erasure coding. The encryption step is entirely optional, and even if the official uplink does not let me skip it, I could always use a null key or modify the source to skip it. The reason for doing so may be simple: if I am storing data that I make publicly available anyway, why waste the CPU cycles on encryption?
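To make the point concrete, here is a toy sketch of the distinction. This is not Storj’s actual Reed-Solomon scheme (the real pipeline uses proper Reed-Solomon with configurable k-of-n parameters); it is just the simplest possible erasure code, a 2+1 XOR parity, and the function names are mine. It shows that erasure coding gives you redundancy, not secrecy: anyone who collects enough pieces can read the plaintext.

```python
# Toy 2+1 XOR parity code (NOT Storj's real Reed-Solomon scheme):
# two data pieces plus one parity piece; any two of the three
# are enough to reconstruct the original. Nothing here is secret.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data):
    """Split data into two halves and add one XOR parity piece."""
    half = (len(data) + 1) // 2
    p1 = data[:half]
    p2 = data[half:].ljust(half, b"\x00")  # pad second half to equal length
    return [p1, p2, xor_bytes(p1, p2)]

def recover(pieces):
    """Reconstruct the original from any two of the three pieces."""
    p1, p2, parity = pieces
    if p1 is None:
        p1 = xor_bytes(p2, parity)  # parity XOR p2 gives back p1
    if p2 is None:
        p2 = xor_bytes(p1, parity)
    return (p1 + p2).rstrip(b"\x00")

pieces = encode(b"public dataset, no secrets here")
pieces[0] = None            # lose one piece
print(recover(pieces))      # the original comes back, fully readable
```

If the data were encrypted first (as uplink does by default), the recovered bytes would be ciphertext; skipping encryption just means the reassembled pieces are the plaintext itself.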
There is no real way to automatically detect if a piece of data is encrypted or not.
As I see it, this is exactly the type of discussion you would have to have if you ran your own satellite, unless you simply pay what Storj pays.
Following that discussion, I really cannot see anybody doing that and running a satellite.
That’s funny because I started a public satellite about two months ago out of curiosity.
I set it up for the sole purpose of testing it, learning from it, and eventually letting others benefit from it for the same purpose. Today, I don’t intend to invest the time necessary to make it highly available and reliable enough to store important data, nor do I intend to manage the financial aspects inherent in handling that data. So I don’t think it is relevant to add it to your list @jtolio.
I have tried to take an open approach from the moment I set up the server: its configuration is public and uses a public Ansible collection. The Prometheus instance and the associated Grafana are also public, although very little information is currently reported there.
This is the next thing I’d like to work on for this satellite: simply reporting its usage metrics. I’ve seen that the binary offers metrics options such as --metrics.addr, but I haven’t taken the time to dig into them yet and I’m not sure they would effectively report usage metrics.
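In case it helps anyone trying the same thing: if the satellite does end up exposing a Prometheus-compatible /metrics endpoint (which is an assumption on my part, I haven’t confirmed that --metrics.addr works that way rather than pushing telemetry somewhere), wiring it into an existing Prometheus would just be a standard scrape job. The hostname and port below are placeholders, not documented defaults:

```yaml
# Hypothetical sketch: assumes the satellite serves Prometheus-style
# metrics over HTTP at the address given to it. Target is a placeholder.
scrape_configs:
  - job_name: "storj-satellite"
    scrape_interval: 30s
    static_configs:
      - targets: ["satellite.example.com:9090"]
```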
Oh, and the satellite web interface has been unusable for ~2 weeks for some reason. The satellite configuration hasn’t changed since January and I don’t see anything interesting in the logs… This is definitely the kind of problem I wouldn’t want to deal with in production, as it is quite a nightmare to debug.
It was caused by a missing file (static/static/wasm/access.wasm), which means I will have to double-check the way I “install” these static files, because something might be wrong.
Yes, I remember having a lot of errors in the WebUI due to missing components. From my understanding, these services aren’t required for “basic” use of the satellite: storage nodes offering storage and clients using uplink. So this isn’t a priority at all for me, but I may be wrong.
Depends on your definition of required. You can get the satellite working without the UI. The challenge will be generating an access grant. It is possible to get one without the UI, but that might be an even bigger challenge than getting the UI running.