at least speedtest.net claims so.
see my “also for the record” which i added in that reply a bit later.
and the moral of the story is …?
So ISPs start to stratify support by protocol. This is interesting to learn; it would partially support my hypothesis about peering contracts.
Though, if my hypothesis is correct, Storj on IPv6 would not actually get faster either.
Does your experience with downloading from Storj differ between libuplink and s3 gateway?
they have to, because of ipv4 depletion and it getting very expensive.
and they are also compelled to cgnat several endusers behind one ipv4, which of course makes hosting anything on ipv4 impossible.
and with ipv6 they are of course very generous!
about the same, 10% difference at most, sometimes in favor of one and sometimes in favor of the other.
and if going back to my whole point, the idea is that with support for ipv6, storj clients can convince more local pigeons to join the project.
apart from me having a whole server at my personal mercy, there is a myriad of nas owners who won't mind making a penny on their spare storage space.
only if it is purely spare and would not involve extra investment, like a dedicated ipv4 address (either link-owned or on a rented vps)!
and what puts storj especially apart from other cloud storage is that neat possibility to time-shift your storage amount like i said earlier in this thread.
that, guys, is the real selling point of storj, not the bullshit you put on the main page and changing it now and then in hope for some big fish!
Out of curiosity, what do you think of this? Node operator-only offer as a means of attracting more nodes
many of the numbers i have absolutely no idea about, so i can't give any verdict on that.
well, i just shared some opinions there, as observed on my side, nothing very verdicty.
Downloading from a mounted bucket wouldn't be fast without special options (e.g. rclone allows you to increase parallelism with --vfs-read-chunk-streams, but you also need to enable the vfs cache with --vfs-cache-mode full).
For example, for Windows:
rclone mount storj:my-bucket z:\ --vfs-cache-mode full --vfs-read-chunk-streams 40
Then copying even from the mounted bucket should be faster. Please note: since it uses a local cache, it will first download to the cache and then hand that data to the client (cp, Explorer, etc.), so there could be a delay before it shows progress.
Also, there might be a difference depending on whether you configured the rclone remote with the Storj native protocol or with Storj S3.
For downloads it’s much better to use either the uplink CLI or at least rclone copy directly from the bucket, not from the mounted drive, e.g.
./uplink cp sj://my-bucket/my-file.zip .
With the default options it should be pretty fast and saturate your downstream connection. If not, you can always increase parallelism with the --parallelism option.
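For example (the parallelism value here is just an illustration; tune it to your connection):

```shell
# increase transfer parallelism; 16 is an illustrative value, not a recommendation
./uplink cp --parallelism 16 sj://my-bucket/my-file.zip .
```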
For rclone:
rclone copy -P --multi-thread-streams 40 storj:my-bucket/my-file.zip .
The storj remote can be configured with the Storj native protocol or with Storj S3. I would recommend trying both to see which one works better for your connection.
Please note: there is no difference between using the Storj native protocol and a self-hosted S3 gateway, because in both cases the native Storj protocol is used under the hood, so you need to use the Storj-hosted S3 gateway for the Storj S3 protocol to see a difference.
i know that, you told me several months ago, and i installed gateway-st back then, and experimented both ways.
and because you requested my feedback at the same time, i just needed some time for usage observations, so sorry for the delay, but it was necessary before jumping to conclusions!
and what concerns gateway-st.
the latest release has no windows binary!
i wrote in the repo’s github issue tracker back then, and got no reaction in response.
so i cross-compiled it myself on a linux vm and tried it.
and i have a STRONG feeling that what i compiled is the wrong source version, because it behaved VERY unstably.
please do a windows binary for the gateway-st latest release, from the same sources you used for the linux/docker version, so my tests would have at least some meaningful certainty.
Yes, I heard from the engineers that use of the Windows executable for gateway-st was too rare, while supporting it required complex preparation and some adjustments during the build process, so they decided to abandon it. However, you can also use the docker version, or simply use rclone serve s3, just configure it to use the Storj native protocol.
Or, if you use rclone mount with the Storj native protocol, you will have it without any middleware.
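A possible sketch of the rclone serve s3 variant, assuming a remote named storj already configured with the native protocol; the key pair here is made up and only illustrates the --auth-key format:

```shell
# expose the storj remote over S3 on localhost; credentials are illustrative
rclone serve s3 storj: --addr 127.0.0.1:8080 --auth-key myaccesskey,mysecretkey
```

Any S3 client could then be pointed at http://127.0.0.1:8080 with those credentials.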
chicken and egg problem.
like you ever advertised it widely, and not only in almost-private, special ways, as it was with me?
nothing really complex, just some settings for a cross-compile, which i completed in a couple tens of minutes despite being a complete zero at the start.
then i just selected the version by tag and compiled, quite blindly, but maybe that does not exactly correspond to the specific code revision used in their official release.
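for the record, the cross-compile itself was nothing more than go’s standard GOOS/GOARCH switch; a rough sketch of what i did, assuming the main package builds from the repo root of the gateway-st checkout:

```shell
# in a gateway-st checkout at the chosen release tag
GOOS=windows GOARCH=amd64 go build -o gateway_windows_amd64.exe .
```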
oh stupid me!
there you have the exact snapshots of the sources you compiled from, as archives.
just downloaded them and compiled my target; i will update all my instances with the fresh build and redo the tests i consider relevant.
cheers and go celebrate that women’s day!
Very simple: https://api.github.com/repos/storj/storj/releases
Please check the download counts of the gateway-st binaries for every platform.
Nope. The Windows binaries would have to be prepared to support outdated Windows versions, which is incompatible with the latest Go versions. Yes, we finally reduced the supported Windows OS versions, and now the minimum is W10 or WS 2016: Step 1. Understand Prerequisites - Storj Docs; however, this doesn’t change much… for now there are no Windows binaries for gateway-st.
It’s Open Source though, so you can improve it and create a pull request to change that - we will be glad to review and merge it.
well that only confirms that very few know about it.
so, you missed my point.
let it be at least what i managed to do myself; it works at least on w11 and s2025, tested and confirmed.
on the principle: what we have is what we have.
oh we are talking just about compiling a target this time.
that’s for the node, while we are talking about gateway-st!
Yes, the same code base, the same language, the same modules and dependencies (I would say gateway-st has even more of them), the same problems.
I will ask the team whether it is possible to bring it back, though.
great.
and in another forum thread there was a concern that you would drown in support requests if you go after many small clients.
publish as much documentation on these matters on the web as you can; chances are that ai assistants will pick it up and offload as many support inquiries from you as possible.
some years ago i had a product for mass market.
all that sufficed in my case was just drupal’s built-in site search function, and i could handle the entire project practically alone!
There is documentation,
You can use a docker version, which would work on almost any platform.
However, nobody reads it; they ask AI instead. Unfortunately, AI hallucinates very often.
For example, we regularly receive support tickets from new SNOs who request an authorization token for their new node, even though this requirement was removed more than half a year ago.
yes, there is still a problem with these ai: they relearn only periodically and not very often.
and i think that for them to pick things up quicker, these doc pages should be in the sitemap, and the sitemap location published in robots.txt
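i mean something like this line in robots.txt (the url is just an example):

```
# robots.txt - point crawlers at the sitemap
Sitemap: https://example.com/sitemap.xml
```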
Everything’s already been done. It doesn’t help. Perhaps there’s another workaround, but in the current state, we have to re-check everything generated by the AI.
apart from this, very little on our side.
it is just that they first scrape a lot, then retrain (only once in a while).
ah!
and also there are pinger services.
but i have no idea if ai’s pay any attention to that, maybe only search engines do.
for my current personal site i use IndexNow, but cannot say if it has any considerable impact.
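for reference, an indexnow submission is just a single GET request; a sketch with a made-up url and key (the key has to match a key file hosted on the site):

```shell
# submit one changed url to the shared IndexNow endpoint; url and key are made up
curl "https://api.indexnow.org/indexnow?url=https://example.com/page&key=aaaa1111bbbb2222aaaa1111bbbb2222"
```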
out of curiosity, i just put this link into my masspinger app, it is masspinging it into thousands of places right now.
upd: canceled that midprocess and pinged the entire storj.dev as a domain (because you said everything else is in place, and it seems so, as i checked).
now until they index, until ai’s relearn… eh.