S3-compatible interface fails miserably

Hello Vitalie,
thanks for contacting us.

Thank you for the log file.

We reviewed it carefully, and the issue has nothing to do with
TntDrive itself. Your S3-compatible storage server is failing to
handle completely normal S3 operations. The log shows two clear
server-side problems:

  1. Your storage repeatedly stops responding

There are long sequences of network timeouts such as:

System.Net.WebException: The operation has timed out

These exceptions come directly from the .NET networking stack when the
server simply does not return any response within the required time.

This indicates that your storage is hanging, overloaded, or unable to
process concurrent requests.

  2. Your storage crashes and returns HTTP 500

At one point your endpoint returns a full ASP.NET runtime error page
instead of a valid S3 response:

500 Internal Server Error / “Runtime Error”

This means the server-side application behind your S3 interface is
failing internally.

When the server manages to respond correctly, everything works:

“Successfully collected files…”
“Successfully received information…”

This confirms that the client side is functioning normally, and the
failures only occur when the storage itself stops responding or
crashes.

Your S3-compatible storage is intermittently unresponsive and
sometimes returns internal server errors. These issues originate
entirely on the storage side and cannot be fixed from the client.

Please forward this information to the support team of your storage
platform so they can investigate the server-side timeouts and
application crashes.

–
Best Regards,
Ivan Moiseev,
TntDrive Team
Netsdk Software
support2@tntdrive.com

Please check your account details on the satellite and make sure that all invoices are paid (please check the Billing History page).
Please provide logs from this app during operations, and also details of what you are doing with the mounted drive.
Or you may submit a support request using your satellite account email address: Submit a request – Storj DCS

Could you please also try to configure rclone with the same S3 credentials and mount all buckets as a y:\ drive:

rclone mount storjS3: y:\ --vfs-cache-mode=full --no-console

Here storjS3 is an rclone remote name.
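If the remote does not exist yet, it can be created non-interactively. A minimal sketch, assuming the Storj-hosted gateway endpoint and placeholder credentials (replace the keys with your own gateway credentials):

```shell
# Create an S3 remote named "storjS3" pointing at the hosted gateway.
# The access key and secret below are placeholders, not real values.
rclone config create storjS3 s3 \
    provider=Storj \
    endpoint=https://gateway.storjshare.io \
    access_key_id=YOUR_ACCESS_KEY \
    secret_access_key=YOUR_SECRET_KEY
```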
You may also use GUI tools like the rclone GUI or https://rcloneview.com/ and similar, there are many.
Then please try any operation you were doing and provide your feedback.

You may also try WinSCP (WinSCP :: Official Site :: Download), S3 Browser (Managing Files using S3 Browser with Storj - Storj Docs), Cyberduck (Setting Up and Using Cyberduck - Storj Docs) and many other S3-compatible tools, like Mountain Duck (Setting Up and Using Mountain Duck - Storj Docs).

And of course, the best option would be Object Mount for Media | Storj

There is no .NET anywhere in the Storj stack, so the errors in 1 and 2 are not coming from Storj. Do you access the internet through some sort of proxy? Could you try bypassing it, or perhaps test with a different ISP?


Isn't that just nodes behind reverse proxies?

You use an S3 interface, not the direct p2p native Storj interface. So it seems either TntDrive is returning these .NET errors, or you are using some kind of proxy yourself.

If you used the native interface, then yes, your client would contact nodes directly, and if they are behind a reverse proxy, that proxy may return weird responses.

When you use S3, you contact not the storage nodes but an S3 gateway - either Storj-hosted or self-hosted. These gateways then contact the nodes. But since we do not use the .NET stack anywhere in our apps, these errors can only be thrown by something developed on .NET, like TntDrive, or something between this app and the gateway.

This is why I suggested trying other S3-compatible tools with similar functionality, or even better - using rclone with the native integration and a mounted bucket, or their GUI.

Please share your experience during the test. We haven't had such reports before, so we are very curious.

Gateway-ST follows the builds of Gateway-MT, so there are only Linux binaries, but you may run it in a docker container, see Setting Up a Self-Hosted S3 Compatible Gateway - Storj Docs
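As a rough sketch of the docker route, assuming the storjlabs/gateway image from the linked docs (the one-time setup step that creates the access grant and config file is omitted here, and the host path is a placeholder):

```shell
# Run a self-hosted S3 gateway in Docker, listening on port 7777.
# /opt/gateway on the host must already contain the gateway config
# produced by the setup step described in the Storj docs.
docker run -d --rm -p 7777:7777 \
    --mount type=bind,source=/opt/gateway,destination=/root/.local/share/storj/gateway \
    --name gateway storjlabs/gateway run
```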

Could you please provide a sample? It would be nice if it had logs. But I suspect some bugs in that app.

Then you can take a look at other tools which have been proven for years; I provided several links earlier, like Object Mount, MountainDuck, WinSCP, S3 Browser, Cyberduck, FileZilla, S3Drive, etc.

Great.
Now just tell me which of these quickly makes a temp copy of files in a temp folder I can specify when I upload, and processes the upload in the background.

See, I make backup copies of my virtual machines onto Storj.
It ain't just some small cat pictures, although I love my cat very much.
And with TntDrive's temp copy and so on, my VMs are down just for the time it takes to make said copy onto the temp SSD, not the time it takes for the upload itself.

cheers!

Object Mount, MountainDuck and rclone mount:

rclone mount storj:my-bucket z:\ --vfs-cache-mode full --no-console

or just

rclone copy -P x:\VM storj:my-bucket

It will not need any cache and will transfer everything in the background in several streams, so it will work faster.
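If the defaults are not fast enough, the parallelism can be raised explicitly; the flag values below are illustrative, not tuned recommendations:

```shell
# Same copy with 8 parallel file transfers and 16 checkers;
# -P shows live progress. Adjust the numbers to your bandwidth.
rclone copy -P --transfers 8 --checkers 16 x:\VM storj:my-bucket
```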

Then you likely need to use normal backup tools which also support snapshots, like Duplicacy or restic and others. Almost all have their own web GUI and standalone GUIs, or you may use something like Duplicati. If you back up VMware VMs, then it's better to use Veeam.

This app doesn’t work with nodes directly, since it’s using S3 - it connects to the gateway, so for the app there should not be any difference in which backend is used (distributed or centralized), but it can try to send too many requests or try to use an unsupported feature. Without logs it’s hard to say.

You may also use Veeam, or write a script to take a snapshot and then export it to the mounted bucket.
I would suggest mounting the bucket not as a drive, but as a folder:

rclone mount storj:my-bucket C:\bucket --vfs-cache-mode full --no-console

Then this script can create a checkpoint and export it to the mounted bucket.

Checkpoint-VM -Name my-vm -SnapshotName "my-snapshot"
Export-VMSnapshot -VMName my-vm -Name "my-snapshot" -Path C:\bucket\backup

If you want to mount a bucket as a drive, then you need Administrator rights, because the script that works with Hyper-V will require them.
Of course, using normal backup software is better.

Please delete the topic to not mislead others.
After all the tries, I just created the necessary scripts with vibe coding, and I think that'd be enough for my current needs.
Cheers and happy holidays!

No need to delete anything. You may do it, of course, but it could be helpful for other readers.