What about MSP360?

Hey,
what do you think about integration with this service (or a similar one)?
https://www.msp360.com

This could bring a lot of traffic into the Storj ecosystem.

This should already work, as Storj is S3-compatible.
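For illustration, here’s a rough sketch of what that means in practice, with any S3-compatible client pointed at the hosted gateway (the bucket and file names are made up, and it assumes S3 credentials for the gateway are already configured; the endpoint is the same one used later in this thread):

# Any S3-compatible client can target the Storj hosted gateway.
# Bucket and file names here are placeholders.
aws s3 cp backup.tar s3://my-bucket/ --endpoint-url https://gateway.us1.storjshare.io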

1 Like

Maybe Storj should get in touch with them to get Storj featured as a trusted provider.

Thanks, we can look into it. My experience with reaching out to companies to get Storj integrated with their products is that it typically comes down to them wanting money, or some kind of news announcement with demonstrable customer impact. It can get complicated once you start down that road, because of the agreements you need to reach and how you handle service when there are issues or changes.

It tends to be better to have customers ask the company for things, because companies usually want to please their customers. Every company is different, though, and sometimes things aren’t that complicated. No harm in asking. But in my experience, 9 out of 10 want to get paid to provide any kind of integration or support.

2 Likes

Check out Comet Backup. They integrate with Filebase which uses Storj.

We would love to have more backup software work well with all object storage. Today MSP360 works, but its block sizes are 10MB and below (for object backup), which results in a lot of segments on the network. In the future we, like other object storage providers, may need to charge for excessive segments, since we have to record that metadata.

Our network block size is 64MB, so a 64MB file is one segment. However, if you use Veeam to back up the same file with a 1MB block size, we now have 64x more metadata to store.
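Here’s a rough sketch of that arithmetic (the 64GB backup size is just an example):

# Each uploaded object up to 64MB is one segment, so segment count
# scales inversely with the backup tool's block size.
FILE_MB=65536   # example: a 64GB backup
for BLOCK_MB in 64 32 10 1; do
  echo "${BLOCK_MB}MB blocks -> $(( FILE_MB / BLOCK_MB )) segments"
done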

What’s the solution? Backup software providers need to support larger block sizes, in this case 64MB. Our current favorite is Ahsay, which supports 32MB block sizes today and will soon support 64MB.

5 Likes

My favorite is Duplicati. It supports any block size and is open source.

Any plans to work with Proxmox and their Backup Server product?

1 Like

I used Duplicati briefly and hated it.

MSP360 is basically a rebranding of CloudBerry Lab. We used it, and the performance is not that great. As mentioned before, we tried to upload a 70GB backup file, and the upload speed is pretty horrible, whether through the gateway or direct to Storj: it took 14 days to upload the 70GB backup file.

Please take a look at more robust solutions like Duplicati or restic.
If you want a UI and do not like Duplicati, you can use FileZilla to copy files.

See also Hotrodding Decentralized Storage

1 Like

I’ve been doing some testing of HashBackup with Storj, both uploading natively and via the S3 MT Gateway. Here are upload times for a 700MB backup of /usr on a small 512M VM at Vultr. Note: backup time is not included here, but since backup overlaps with upload, it would probably not add to the total.

Here is the original upload using the native interface with 1 thread (2 threads caused it to bomb: not enough RAM):

[root@hbtest ~]# /usr/bin/time -v hb dest -c hb sync
HashBackup #2493 Copyright 2009-2021 HashBackup, LLC
Using destinations in dest.conf
Warning: destination is disabled: amzs3
Writing hb.db.1
Copied hb.db.1 to storj (5.2 MB 3s 1.6 MB/s)
Waiting for destinations: storj
Copied arc.0.0 to storj (62 MB 7s 8.5 MB/s)
Copied arc.0.1 to storj (62 MB 7s 8.4 MB/s)
Copied arc.0.2 to storj (63 MB 7s 8.6 MB/s)
Copied arc.0.3 to storj (62 MB 8s 7.4 MB/s)
Copied arc.0.4 to storj (62 MB 7s 8.1 MB/s)
Copied arc.0.5 to storj (62 MB 7s 8.5 MB/s)
Copied arc.0.6 to storj (63 MB 8s 7.8 MB/s)
Copied arc.0.7 to storj (62 MB 7s 7.8 MB/s)
Copied arc.0.8 to storj (62 MB 7s 8.0 MB/s)
Copied arc.0.9 to storj (62 MB 8s 7.4 MB/s)
Copied arc.0.10 to storj (62 MB 7s 8.1 MB/s)
Copied arc.0.11 to storj (41 MB 5s 7.8 MB/s)
Copied dest.db to storj (94 KB 1s 59 KB/s)

Command being timed: "hb dest -c hb sync"
  User time (seconds): 40.79
  System time (seconds): 34.69
  Percent of CPU this job got: 78%
  Elapsed (wall clock) time (h:mm:ss or m:ss): 1:36.05
  Maximum resident set size (kbytes): 189108

Here is the same upload with 8 threads using the S3 MT Gateway:

[root@hbtest ~]# /usr/bin/time -v hb dest -c hb sync
HashBackup #2561 Copyright 2009-2021 HashBackup, LLC
Using destinations in dest.conf
Warning: destination is disabled: sj
Warning: destination is disabled: s3
Warning: destination is disabled: b2
Writing hb.db.2
Waiting for destinations: sjs3
Copied hb.db.2 to sjs3 (5.2 MB 5s 1.0 MB/s)
Waiting for destinations: sjs3
Copied arc.0.5 to sjs3 (62 MB 7s 7.8 MB/s)
Copied arc.0.1 to sjs3 (62 MB 8s 7.6 MB/s)
Copied arc.0.6 to sjs3 (63 MB 8s 7.3 MB/s)
Copied arc.0.3 to sjs3 (62 MB 9s 6.4 MB/s)
Copied arc.0.7 to sjs3 (62 MB 9s 6.4 MB/s)
Copied arc.0.4 to sjs3 (62 MB 10s 6.1 MB/s)
Copied arc.0.2 to sjs3 (62 MB 10s 5.7 MB/s)
Copied arc.0.0 to sjs3 (62 MB 11s 5.5 MB/s)
Copied arc.0.8 to sjs3 (62 MB 5s 10 MB/s)
Copied arc.0.10 to sjs3 (62 MB 6s 9.6 MB/s)
Copied arc.0.11 to sjs3 (49 MB 5s 9.1 MB/s)
Copied arc.0.9 to sjs3 (62 MB 6s 8.9 MB/s)
Copied dest.db to sjs3 (98 KB 3s 30 KB/s)

Command being timed: "hb dest -c hb sync"
  User time (seconds): 4.60
  System time (seconds): 3.87
  Percent of CPU this job got: 31%
  Elapsed (wall clock) time (h:mm:ss or m:ss): 0:26.88
  Maximum resident set size (kbytes): 47940

Using the gateway is about 3x faster and uses only 48M of RAM - about 1/4th of using the native interface with one thread.

At 27 seconds for 700MB, uploads through the gateway with HashBackup are about 26 MB/s, so your 70GB backup should take about 45 minutes.
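The back-of-envelope math, using the numbers from the run above (integer shell arithmetic, so it rounds down slightly):

# 700MB in 27s through the gateway, extrapolated to a 70GB backup.
SIZE_MB=700; SECS=27
RATE=$(( SIZE_MB / SECS ))   # ~25-26 MB/s
echo "70GB at ~${RATE} MB/s -> ~$(( 70000 / RATE / 60 )) minutes"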

If you want to try it, you’ll need the preview version. Get that with hb upgrade -p

Update: I ran a test on the small VM (512M, 1 CPU) backing up a 10GB file of random data to Storj via S3 MT Gateway:

[root@hbtest ~]# /usr/bin/time -v hb backup -c hb big10 -B2M
HashBackup #2563 Copyright 2009-2021 HashBackup, LLC
Backup directory: /root/hb
Backup start: 2021-10-03 21:34:58
Using destinations in dest.conf
Warning: destination is disabled: s3
This is backup version: 0
Dedup not enabled; use -Dmemsize to enable
/
/root
/root/big10
Copied arc.0.0 to sjs3 (60 MB 6s 8.7 MB/s)
Copied arc.0.1 to sjs3 (60 MB 7s 8.1 MB/s)
...  (lots of these)
Copied arc.0.157 to sjs3 (60 MB 6s 8.8 MB/s)
/root/hb
/root/hb/inex.conf
Copied arc.0.158 to sjs3 (60 MB 6s 9.4 MB/s)
Copied arc.0.159 to sjs3 (62 MB 7s 8.5 MB/s)
Copied arc.0.160 to sjs3 (60 MB 7s 8.2 MB/s)
Copied arc.0.163 to sjs3 (30 MB 3s 7.6 MB/s)
Copied arc.0.162 to sjs3 (62 MB 5s 11 MB/s)
Waiting for destinations: sjs3
Copied arc.0.161 to sjs3 (60 MB 8s 7.3 MB/s)
Writing hb.db.0
Copied hb.db.0 to sjs3 (149 KB 4s 31 KB/s)
Copied dest.db to sjs3 (61 KB 2s 28 KB/s)

Time: 287.1s, 4m 47s
CPU:  270.0s, 4m 29s, 94%
Wait: 13.0s
Mem:  103 MB
Checked: 5 paths, 10058658184 bytes, 10 GB
Saved: 5 paths, 10058658184 bytes, 10 GB
Excluded: 0
Dupbytes: 0
Compression:  0%, 1.0:1
Efficiency: 0.00 MB reduced/cpusec
Space: +10 GB, 10 GB total
No errors
	Command being timed: "hb backup -c hb big10 -B2M"
	User time (seconds): 190.44
	System time (seconds): 81.06
	Percent of CPU this job got: 90%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 5:00.52
	Maximum resident set size (kbytes): 100832
	Voluntary context switches: 80801
	Involuntary context switches: 24099
	Swaps: 0
	File system inputs: 41423512
	File system outputs: 19674304
	Page size (bytes): 4096
	Exit status: 0

It took 5 minutes to save and upload 10 GB, and I’m guessing it would be slightly faster on a multi-CPU system.

I used the -B2M option here because with big files, HB uses a larger block size and can go over the arc-size-limit setting (set here to 62M).
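As a rough sanity check on the arc count (the log above goes up to arc.0.163, i.e. 164 files):

# A 10GB backup packed into ~62MB arc files should yield roughly
# 160-odd arcs, close to the 164 actually produced.
BYTES=10058658184   # from the "Checked:" line above
ARC=62000000        # arc-size-limit of 62M
echo "expect roughly $(( BYTES / ARC )) arc files"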

Here’s what a dest.conf file for the gateway looks like:

destname sjs3
type s3
host gateway.us1.storjshare.io
secure
partsize 64m
accesskey xxx
secretkey xxx
bucket hbtest
dir s3dir
workers 8

Because this VM only had 11G of free disk space, I didn’t have room to back up a 10G file and keep a local copy. In that situation, use hb config -c hb cache-size-limit 1g to set a 1G limit on locally saved data. Then it acts like a cache of recently accessed backup data.
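For reference, that command on its own:

# Cap locally kept backup data at 1GB; per the note above, the local
# copy then behaves like a cache of recently accessed backup data.
hb config -c hb cache-size-limit 1g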

1 Like