- A/B testing Pumpkin Spice and Praline lattes: experimenting to see if traditional winter latte flavors—such as Praline—turn up our company’s holiday spirit pre-Black Friday
- Adding support for a Tardigrade-hosted S3 gateway, so users who want to use the S3 gateway would not have to host anything.
With this and the link sharing service it sounds like you would have a lot of egress traffic on gateways to deal with. I guess I’m just wondering how this fits into your cost model. Are these just considered loss leader features?
The latte comment seems to be an attempt to inject levity. The reference is pretty US-centric: Black Friday is the day after Thanksgiving, a holiday that a lot of people in the US celebrate. Thanksgiving is always on a Thursday, and the next day stores run sales to prompt revenue from Christmas shoppers.
For the loss leader question: it’s a good one, but I should check with people who know more about the finance and profit projections before I offer an opinion.
Here’s what I got back:
Linksharing is available to anyone with a Tardigrade account. Linksharing APIs are currently generated from the CLI, although we have plans to expand the feature set.
Thanks, I was aware it was available through the CLI. It’s nice to hear it will expand further. I guess I was just curious about the inner workings, since any implementation of this functionality I could think of would require some hosted gateway that connects to Tardigrade, which would mean additional cost for you guys.
I figured the customer payments for egress might not be enough to cover both the node egress payouts and the additional costs of that gateway. Though it would be completely fair to respond that that’s none of my concern.
If that were the case, they could probably make it so that nodes also speak HTTP. That is, the gateway just generates a 302 redirect to some random node; that node then pulls the data from the other nodes, assembles it, and transfers it to the customer.
This would be complex to implement, and such nodes would probably have to be selected and checked to have reasonable upload speed, but it would save some money.
I could see direct HTTP access becoming popular if Storj is used as a CDN, since customers could then save their own bandwidth.
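The redirect idea above can be sketched roughly like this. To be clear, this is a minimal sketch, not anything Storj actually ships: the node addresses, speed numbers, and selection logic are all invented for illustration.

```typescript
// Sketch of the 302-redirect idea: the gateway never proxies object bytes
// itself; it picks a node with enough upload capacity and redirects the
// client there. Everything below is hypothetical.

type StorageNode = { address: string; estimatedUploadMbps: number };

// Hypothetical pool of nodes that have opted into serving HTTP traffic.
const httpNodes: StorageNode[] = [
  { address: "node-a.example.com", estimatedUploadMbps: 120 },
  { address: "node-b.example.com", estimatedUploadMbps: 45 },
  { address: "node-c.example.com", estimatedUploadMbps: 300 },
];

// As suggested above, only nodes checked to have reasonable upload speed
// should be eligible; pick one of those at random.
function pickNode(nodes: StorageNode[], minMbps: number): StorageNode | undefined {
  const eligible = nodes.filter((n) => n.estimatedUploadMbps >= minMbps);
  return eligible[Math.floor(Math.random() * eligible.length)];
}

// The gateway's whole job in this model: answer with a 302 whose Location
// points at the chosen node, which would then pull the pieces from other
// nodes, reassemble them, and stream the object to the customer.
function redirectResponse(objectPath: string): { status: number; location?: string } {
  const node = pickNode(httpNodes, 100);
  if (!node) return { status: 503 }; // no fast-enough node available
  return { status: 302, location: `https://${node.address}/${objectPath}` };
}
```

The point of the sketch is that the gateway's egress becomes a few hundred bytes of redirect per request instead of the whole object.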
I like that solution, but it doesn’t get rid of the double egress costs; it just moves the duplicate egress onto nodes. Unfortunately it also introduces a single-node bottleneck and eliminates the advantage of parallel downloads.
Ideally you would use an uplink implementation in the browser, but there are quite a few challenges to tackle to make that happen.
You won’t get parallel downloads with HTTP anyway. Unless there is a very fast browser-based uplink implementation, we are stuck with HTTP.
Node egress is probably cheaper than the satellite/gateway egress and the nodes could be paid a bit less for “gateway egress” (which would be offset by getting more traffic), so the egress costs would be lower than double (but more than single).
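The “lower than double but more than single” arithmetic above can be made concrete with a tiny back-of-envelope calculation. All dollar rates here are made-up placeholders for illustration, not actual Storj payout figures:

```typescript
// Back-of-envelope check of the "more than single, less than double" claim.
// Every rate below is a hypothetical placeholder.
const nodeEgressPerTB = 20;       // $/TB paid to nodes for normal egress (assumed)
const gatewayBandwidthPerTB = 20; // $/TB the gateway's own bandwidth costs (assumed)
const gatewayEgressPerTB = 15;    // reduced $/TB for node "gateway egress" (assumed)

// Direct uplink download: the customer pulls pieces straight from nodes.
const costSingle = nodeEgressPerTB;

// Hosted gateway today: nodes are paid for egress to the gateway, and the
// gateway pays again to send the assembled object to the customer.
const costViaGateway = nodeEgressPerTB + gatewayBandwidthPerTB;

// Redirect model: the serving node pulls pieces (normal node egress) and is
// paid the reduced rate for serving the assembled object over HTTP.
const costViaRedirect = nodeEgressPerTB + gatewayEgressPerTB;
```

Under these assumed rates the redirect model lands between single and double egress cost, which is the claim being made.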
It’s time for a simple JS browser-based uplink implementation; that would solve all these problems. But I don’t know enough about JS or uplink to have an idea of how complicated it would be to create.
Perhaps I’m missing something, but why not?
I kind of understand how this would make some sense, but the node would be doing a lot more than just egress: it would also recombine the file. This RS processing would put more of a burden on the CPU and set higher hardware requirements. It doesn’t really seem fair to then pay them less for it.
I agree, but there won’t be anything simple about that. Making the browser speak RPC would already be a massive challenge.
This will never work.
First, it would allow malicious SNOs, or anyone with access to the underlying hardware (think of a node hosted on a VPS), to alter the data served to the end user. Second, it would put SNOs under legal threat if their nodes are used to serve illegal content.
Parallel downloads of multiple different files, sure. But I do not know of any browser that supports parallel download of a single file over HTTP. Various download managers used to do that in an attempt to speed up downloads, but I think browsers do not split a file and download its segments in parallel the way the Storj uplink does.
I see what you mean. No, it won’t happen natively; besides, you still need the logic to recombine the file with RS. That’s not so much a limitation of HTTP as just how Storj works. But if you make nodes speak HTTP, I don’t see why it wouldn’t be possible to download pieces in parallel from all nodes and recombine them in JS.
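The browser-side idea above could look roughly like this. The piece URLs are hypothetical, and the `rsDecode` function below is only a stand-in marking where a real Reed-Solomon library would do the actual decoding; concatenation is not how RS works.

```typescript
// Sketch: fetch pieces from many nodes in parallel over HTTP, keep whichever
// k pieces arrive, and hand them to an erasure decoder.

async function fetchPiece(url: string): Promise<Uint8Array> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`piece fetch failed: ${url}`);
  return new Uint8Array(await res.arrayBuffer());
}

// Keep the pieces that actually arrived; with k-of-n erasure coding any k
// of them are enough to reconstruct the (still encrypted) segment.
function collectPieces(results: PromiseSettledResult<Uint8Array>[], k: number): Uint8Array[] {
  const pieces = results
    .filter((r): r is PromiseFulfilledResult<Uint8Array> => r.status === "fulfilled")
    .map((r) => r.value);
  if (pieces.length < k) throw new Error("not enough pieces to reconstruct segment");
  return pieces.slice(0, k);
}

// Stand-in for real Reed-Solomon decoding; concatenating pieces only marks
// where a decoder library would plug in.
function rsDecode(pieces: Uint8Array[]): Uint8Array {
  const out = new Uint8Array(pieces.reduce((n, p) => n + p.length, 0));
  let off = 0;
  for (const p of pieces) { out.set(p, off); off += p.length; }
  return out;
}

async function downloadSegment(pieceUrls: string[], k: number): Promise<Uint8Array> {
  // Fire all requests at once; allSettled lets slow or failed nodes drop
  // out as long as at least k pieces make it back.
  const results = await Promise.allSettled(pieceUrls.map(fetchPiece));
  return rsDecode(collectPieces(results, k));
}
```

This keeps the parallelism and the failure tolerance of the uplink model while only requiring nodes to serve plain HTTP.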
The double bandwidth problem is something we have been trying to figure out a solution for. There are solutions out there for much, much cheaper bandwidth that we could try to take advantage of to make a Tardigrade-hosted S3 gateway work.
Hey @hoarder, I don’t understand exactly what you mean by this; do you mind elaborating? Especially around the legal threat part, because nodes only store a small, encrypted piece of a file, so they have no way of knowing what data they are storing.
This was in the context of a single node downloading all pieces from the other nodes, assembling the pieces and serving the file to the client.
So I think he means that if a node is used to download files, the node could end up serving illegal content.
But unless the file is downloaded unencrypted (with a public share), the node still doesn’t know the content of a file.
Yes, this was about serving files from nodes directly. I realize Storj does not work like that and probably never will.
Even if it did, the recombined file would still be an encrypted blob that only the client could decrypt. That part should never be the responsibility of a storage node.