Let's talk about the elephant in the room: The Storj economic model (node operator payout model)

You will not be disqualified for transferring data to another location. The majority of the data can be transferred online; only the last sync will require downtime, and that will be pretty short compared with the main transfer.
See How do I migrate my node to a new device? - Storj Node Operator Docs
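
The usual pattern is a two-pass copy; a rough sketch below (the paths and container name are placeholders, the linked docs have the authoritative steps):

```bash
# Pass 1: copy the bulk of the data while the node keeps running.
rsync -aP /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/

# Pass 2: stop the node, then sync only what changed since pass 1.
docker stop -t 300 storagenode
rsync -aP --delete /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/
# Then start the node again, pointing at the new location.
```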


This assumes all the storage and egress comes from real customers (paid services). In the past there was a lot of test/dummy data filling storage nodes and generating traffic. As of today, do we know whether all the data traffic is paid for by customers, or whether there is still test data paid for out of the Storj Token Sale reserves?


I recall actually being surprised when I checked these things. IIRC, a typical PC power supply is less efficient (80-95% at peak consumption, but dropping to ~60% when severely underutilized) than a typical external drive's (~95% at peak consumption, and these power supplies are sized so that they're always close to the peak for the devices they power). For example, a setup drawing 10 W would pull ~16.7 W at the wall at 60% efficiency, but only ~10.5 W at 95%.

Off-topic… how can I choose a custom profile picture, like BS?

I believe he has his account connected with GitHub.

I misstated that. 20 months was the time to earn back your investment, not to fill the node. Filling takes roughly 34 months, as I had already mentioned later in the post. This is based on my earnings calculator, which was recently updated to reflect the latest network behavior. I corrected my original post.

The rest of the numbers I mentioned were correct. You make a lot of money on that HDD before those 5 years are up. Since I actually bought one recently, trust me, I did the math.

I stated my calculation. If it's full after 34 months, you use way more than half of the HDD over those 5 years on average. Also… why should I care, if the ROI is 500% over those 5 years?
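
For the curious, the shape of that calculation looks roughly like this; every number below is a made-up placeholder (not my calculator's actual inputs), assuming a linear fill to capacity:

```bash
# Back-of-the-envelope payback/ROI sketch. All numbers are hypothetical
# placeholders, not real payout rates or drive prices.
awk 'BEGIN {
  cost = 300          # hypothetical 20 TB drive price, USD
  rate = 1.5          # hypothetical payout per TB stored per month, USD
  cap = 20            # drive capacity, TB
  fill_months = 34    # assumed months to fill, linear ramp
  total = 0
  for (m = 1; m <= 60; m++) {          # 5-year horizon
    stored = (m < fill_months) ? cap * m / fill_months : cap
    total += stored * rate
    if (!payback && total >= cost) payback = m
  }
  printf "payback: month %d, 5-year earnings: $%.0f (ROI %.0f%%)\n",
         payback, total, 100 * total / cost
}'
```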

Sia hosts do not provide the same level of reliability. Part of this is software quality; the Storj storage node code is just so much better than Sia's host code. There seems to be some drama around that problem. Part of it is that the Sia network has different reliability standards, and its upload fees reduce the incentive to provide reliable long-term storage.

Either that, or work on a way to use the uplink protocol purely client-side, in in-browser JavaScript, which would reduce the need for gateways. Some problems would need to be solved to do the latter. I do not see this feature on the official roadmap, though; I suspect there are enough customers satisfied with libuplink or the gateways to fuel the development process already.

@BrightSilence has described his idea several times, one of them here. Held amounts were a recurring discussion topic for some time, so if you're interested in some ideas, it might be worth spending some time browsing old posts.

If you are migrating a whole disk to another, it's better to copy the whole disk image, as opposed to copying file by file. This way you perform a single sequential read over the whole drive capacity, rather than tens of millions of small reads all around the drive. Copying a modern 18 TB drive this way effectively takes a bit less than two days (18 TB at ~120 MB/s average sequential throughput is about 42 hours), so well within the allowed downtime.
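
A sketch of what that looks like in practice; the device names are placeholders, so triple-check them before running, because dd will happily overwrite the wrong disk:

```bash
# Stop the node first, then clone block-for-block.
# /dev/sdX (source) and /dev/sdY (target) are placeholders.
docker stop -t 300 storagenode
sudo dd if=/dev/sdX of=/dev/sdY bs=64M status=progress
```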

Indeed. I mostly limited my participation in discussions like that to correcting obvious omissions in other people's posts, because of this lack of feedback. It would be nice, for example, to learn which economic model constraints are known to Storj but missed by us SNOs in this discussion. On the other side, I understand that some of that knowledge may not be easy to share while keeping a competitive advantage. I also absolutely understand that it is difficult to formulate any public statement, however positively it is meant, without triggering some backlash. I'm therefore fine waiting for the first drafts of the whitepaper, as mentioned in the Twitter space.

On the contrary, it makes sense as a way to reduce power usage. A single large drive requires less power than many small drives. And we are in an economic model thread, which makes this difference crucial for the topics under discussion.

Besides, a single large drive is now cheaper than many small drives (both in per-unit cost and in the drive-bay capacity it takes up in the device it's mounted in). When consolidating, a SNO may sell the older drives, recovering some of the costs.

Yep. But I'm trying to dispute the rule. I think it's a bad one. Good intention, not so good results…

OK. So, you think a 20 TB "file" is as easily manageable as five 4 TB "files"?

This discussion was moved to a different topic; please respond there.

Yeah, about that.

I know you tried to be funny, but…

@john, please don't do that again

I didn't use your misstated period to fill a 20 TB node; I used my personal experience: 60 months for 20 TB. Let's call it "statistics performed on a population of 1" :upside_down_face:
You might be doing statistics on older network performance; I caught the train when it was moving slowly. Also, I've noticed that the ingress recently is much, much higher than I've ever seen.

34 months to fill up is indeed much better than an average 1/2 utilization over 5 years. But you're still wearing down a 20 TB disk with little return (compared to a full 20 TB node) for ~2 years. That's 40% of its life (I did start from the notion that a disk dies on the day of its 5th anniversary).
The disk will probably not die right at 5 years, but it could also die before it's 5 years old. The warranty period is a probability calculation by the manufacturer (the cost of replacing disks during 5 years versus the cost of selling fewer disks if the warranty were only 3 years). And keep in mind that the data is not under warranty. When I say 20 TB is all you can have, using your calculations, I mean that you can't hold more than ~20 TB on a single /24 IP subnet. When you lose the disk, that's game over for you.


I found that funny!


And I finally rest my case…

You are now literally quoting text from the moved topic and still responding here… Please stop! You're just giving @Alexey more work to do to move your posts. That discussion has moved here: 5 nodes on the same HDD vs 5 nodes on a separate disks


The vetting process… wouldn't it be better if it were different for newbies than for veterans?
For example, you could be marked as a veteran if you have 3 active nodes that are all older than 6 months. When you start new nodes, their vetting should then be faster, because you already know what you are doing, you know how to manage them, and you have shown an interest in a long-term commitment. So the new nodes you start should be trustworthy.

The vetting process is mostly about the hardware: there are a lot of old HDDs around here, and it simply gives time to confirm that an HDD works stably without losing a big amount of data.


Please read more about Gateway MT here - specifically this section about Regions and Points of Presence where it states:

We currently have hosted Gateways in several regional locations and expect to expand as needed. The Gateway endpoint https://gateway.storjshare.io is configured to automatically route the traffic from the instance closest to your location.

I.e. Gateway MT is a globally distributed, multi-region cloud-hosted S3-compatible gateway.
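
In practice that means any standard S3 client works against it; for example, with the AWS CLI (the bucket name and credentials below are placeholders, you would first generate S3-compatible credentials for your Storj project):

```bash
# Placeholder credentials generated for your Storj project beforehand.
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>

# List a bucket through the globally distributed Gateway MT endpoint.
aws s3 ls s3://my-bucket --endpoint-url https://gateway.storjshare.io
```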


Thank you for the pointer.


Hmm… I didn't consider cloning. Good suggestion. I don't know if it works for NAS disks, though. I only have Synology NAS nodes, and for cloning I would have to take the disk out and perform the cloning on a Windows PC. All disks are ext4, so I should be able to read them in Windows with an extension installed. I don't know if DSM will have a problem when it wakes up on another disk…
