Tardigrade with S3FS-FUSE?

As Tardigrade is supposed to be S3 compatible, do you think it will work with S3FS-FUSE?

You can try it now: run a local test network + S3 Gateway and you have an S3 API available right on your PC!
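As a rough sketch of what pointing s3fs at a locally running S3 Gateway could look like (the bucket name, credentials, and port 7777 are placeholder assumptions, not values from this thread):

```shell
# Store the gateway's access and secret key for s3fs (placeholder values)
echo "ACCESS_KEY:SECRET_KEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the bucket through the local gateway.
# use_path_request_style is needed because the local gateway is addressed
# by path, not by virtual-host-style bucket subdomains.
mkdir -p ~/tardigrade-mount
s3fs my-bucket ~/tardigrade-mount \
    -o passwd_file=${HOME}/.passwd-s3fs \
    -o url=http://localhost:7777 \
    -o use_path_request_style
```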


I believe the uplink binary has an option to mount a bucket as storage as well. Haven't tried it yet though, but that would skip a few layers in between.


So you even have two options to compare :slight_smile:

Is the test network open or invite only? I'm on the waiting list for Tardigrade but haven't been given access yet. Looking forward to testing it out. I had good success storing on the v2 network.

Speaking of storing on v3. Are files persistent now? i.e. no need to re-upload every 2-3 months. I'm looking for a good place to back up my media library with future plans to access it via EC2.

It's a local test network, so no invites are needed.

If you want to use the public beta network, then you should subscribe on tardigrade.io to receive a developer invite.


Yes, files are persistent now. They will be deleted only if you specify an expiration date or remove them explicitly.

You don't need an EC2 instance to access your files; the uplink binary and your keys are enough.
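For example, direct access with the uplink CLI could look like this (bucket and file names are placeholders, assuming an already-configured uplink):

```shell
# List your buckets, then the objects in one of them
uplink ls
uplink ls sj://my-bucket

# Download a file directly, with no gateway or EC2 instance in between
uplink cp sj://my-bucket/media/demo.mkv ./demo.mkv
```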

Thanks for the info. I'm looking to replace my beefy RAIDZ2 FreeNAS server, which also runs Plex, with a two-bay Synology NAS. The plan: sync a backup of my NAS media to Tardigrade, then mount it read-only on a scheduled EC2 instance running my Plex server. So Tardigrade will act as my off-site backup and also host my Plex media, transcoded if needed by the scheduled EC2 instance.

I'm a cloud architect by day, so I'm trying to move my stuff to the cloud as well.

Are you sure?
I downloaded the latest release today and it doesn't seem to have an option to mount it:

I believe it was removed. I never tried it myself, but I think it was read only anyway.

Ok!
Anyway, I will try the other solution you mentioned.

rclone has the option to mount a bucket
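A minimal sketch of that, assuming an rclone remote named `tardigrade` has already been configured (the remote and mountpoint names are placeholders):

```shell
# Mount a bucket read-only via FUSE; runs in the foreground until unmounted
mkdir -p ~/rclone-mount
rclone mount tardigrade:my-bucket ~/rclone-mount --read-only

# In another terminal, unmount when done
fusermount -u ~/rclone-mount
```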

I'm new here… just want to know if you guys have this issue.

I did manage to mount a bucket to a folder using s3fs.
After testing, all was good. After the test, I unmounted the path (mounted with the nonempty option) and the files remained in that folder. I did a manual remove after that and mounted it back again. Do I get the files back after mounting, or will they not show up anymore?

Are you sure that s3fs was properly mounted in the first place? It fails silently unless you add a flag to enable debug info. Verify that it was mounted by running mount | grep s3fs to ensure itā€™s in the list of mounts.
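For instance, s3fs can be run in the foreground with debug output to see why a mount fails silently (the bucket, mountpoint, and gateway URL below are placeholder assumptions):

```shell
# -f keeps s3fs in the foreground, dbglevel=info prints debug messages,
# and curldbg additionally logs the HTTP traffic to the S3 endpoint
s3fs my-bucket ~/tardigrade-mount \
    -o passwd_file=${HOME}/.passwd-s3fs \
    -o url=http://localhost:7777 \
    -o use_path_request_style \
    -f -o dbglevel=info -o curldbg

# Then confirm the mount from another shell
mount | grep s3fs
```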

Generally yes, provided that it was properly mounted in the first place. Once you unmount, the mountpoint should be empty, and when you remount the s3fs, the files should show in the mountpoint directory again.

rclone support for Tardigrade is not merged yet, though itā€™s getting close.

Thanks for the reply.
Just out of curiosity, are you guys able to get around the 64 GB limitation of s3fs? I think my backup keeps failing when it hits 64 GB.
I believe this could be the problem with s3fs as mentioned in the wiki.
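One workaround sometimes suggested (an assumption here, since the exact limit depends on your s3fs version and backend) is raising the multipart chunk size: S3-style APIs cap a multipart upload at 10,000 parts, so the maximum object size is roughly multipart_size × 10,000.

```shell
# multipart_size is in MB; 64 MB chunks * 10,000 parts gives a ceiling
# of roughly 640 GB per object instead of the default limit
s3fs my-bucket ~/tardigrade-mount \
    -o passwd_file=${HOME}/.passwd-s3fs \
    -o url=http://localhost:7777 \
    -o use_path_request_style \
    -o multipart_size=64
```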