You will not be disqualified for transferring data to another location. You can transfer the majority of the data while the node stays online; only the last sync requires downtime, and it will be quite short compared with the main transfer.
See How do I migrate my node to a new device? - Storj Node Operator Docs
This assumes all the storage and egress comes from real customers (paid services). In the past a lot of test/dummy data was filling storage nodes and generating traffic. As of today, do we know whether all the data traffic is paid by customers, or is there still test data paid for out of the Storj Token Sale reserves?
I recall actually being surprised when I checked these things. IIRC, a typical PC power supply is less efficient (80-95% at peak consumption, but dropping to ~60% when severely underutilized) than a typical external drive's (~95% at peak consumption, and these power supplies are sized so that they're always close to the peak for the devices they power).
I misstated that. 20 months was the time to earn back your investment, not to fill the node. Filling takes roughly 34 months, as I had already mentioned later in the post. This is based on my earnings calculator, which was recently updated to reflect the latest network behavior. I corrected my original post.
The rest of the numbers I mentioned were correct. You make a lot of money on that HDD before those 5 years are up. Since I actually bought one recently, trust me, I did the math.
I stated my calculation. If it's full after 34 months, you use way more than half of the HDD over 5 years on average. Also… why should I care, if the ROI is 500% over those 5 years?
Sia hosts do not provide the same level of reliability. Part of this is software quality; Storj storage node code is just so much better than Sia host code. There seems to be some drama around that problem. Part is that the Sia network has different reliability standards, and its upload fees reduce the incentive to provide reliable long-term storage.
Either that, or work on a way to use the uplink protocol purely client-side, in in-browser JavaScript, which would reduce the need for gateways. There are some problems that need to be solved to do the latter. I do not see this feature on the official roadmap though; I suspect there are enough customers satisfied with libuplink or gateways to fuel the development process already.
@BrightSilence has described his idea several times, one of them here. Held amounts were a recurring discussion topic for some time, so if you're interested in some ideas, it might be worth spending some time browsing old posts.
If you are migrating a whole disk to another one, it's better to copy the whole disk image rather than copy file by file. This way you perform a single sequential read over the whole drive capacity, instead of tens of millions of small reads scattered all around the drive. Copying a modern 18 TB drive this way would take a bit less than two days, so well within the allowed downtime.
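For what it's worth, on Linux that image copy is a one-liner (the device names here are hypothetical; double-check them with lsblk first, since dd will happily overwrite the wrong disk):

```shell
# Clone the old disk (/dev/sdX) onto the new one (/dev/sdY) in a single
# sequential pass; a large block size keeps both drives streaming.
sudo dd if=/dev/sdX of=/dev/sdY bs=64M conv=fsync status=progress
```

At an average of ~120 MB/s, 18 TB works out to roughly 42 hours of copying.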
Indeed. Because of this lack of feedback I mostly limited my participation in discussions like that to correcting obvious omissions in other people's posts. It would be nice, for example, to learn which economic model constraints are known to Storj but missed by us SNOs in this discussion. On the other hand, I understand that some of that knowledge may not be easy to share while keeping a competitive advantage. I also absolutely understand that it is difficult to formulate any public statement, however positively it is meant, without triggering some backlash. I'm therefore fine waiting for the first drafts of the whitepaper, as mentioned in the Twitter space.
On the contrary, it makes sense as a way to reduce power usage. A single large drive will require less power than many small drives. And we are in an economy model thread, making this difference crucial for the topics under discussion.
Besides, a single large drive is now cheaper than many small drives, both in per-unit cost and in the drive-bay capacity it takes up in the device it is mounted in. When consolidating, a SNO may sell the older drives, recovering some of the costs.
I didn't use your misstated period to fill a 20 TB node. I used my personal experience: 60 months for 20 TB. Let's call it "statistics performed on a population of 1".
You might be doing statistics on older network performance. I caught the train when it was moving slowly. Also, I've noticed that recently the ingress is much, much higher than I've ever seen.
34 months to fill up is indeed much better than an average 1/2 use over 5 years. But still, you're wearing down a 20 TB disk with little return (compared to a full 20 TB node) for ~2 years. That's 40% of its life (granted, I did come up with the notion that a disk would die on the day of its 5th anniversary).
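Sanity-checking the utilization claim with those numbers (34 months to fill, 60-month life, and assuming a roughly linear fill rate, which is a simplification):

```shell
# Average fraction of the disk in use over its life, assuming a linear
# fill over the first 34 months and a full disk from month 34 to 60.
awk 'BEGIN {
  fill = 34; life = 60
  avg = (fill / 2 + (life - fill)) / life
  printf "average utilization: %.0f%%\n", avg * 100
}'
```

That comes out around 72%, so indeed well above an average 1/2 use.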
The disk will probably not die right after 5 years, but it could also die before it's 5 years old. The warranty period is a probability result calculated by the manufacturer (the cost of replacements during 5 years versus the cost of not selling as many disks if the warranty were just 3 years). And keep in mind that the data is not under warranty. When I say 20 TB is all you can have, using your calculations, I mean that you can't hold more than ~20 TB on a single IP /24. When you lose the disk, that's game over for you.
You are now literally quoting text from the moved topic and still responding here… Please stop! You're just giving @Alexey more work to do to move your posts. That discussion has moved here: 5 nodes on the same HDD vs 5 nodes on a separate disks
Vetting process… wouldn't it be better if it were different for newbies than for veterans?
For example, you could be marked as a veteran if you have 3 active nodes, all older than 6 months. When you start new nodes, their vetting should be faster, because you already know what you are doing, you know how to manage them, and you have shown an interest in long-term commitment. So the new nodes you start should be considered trustworthy.
The vetting process is mostly about hardware: there are a lot of old HDDs here, and vetting just gives time to confirm that an HDD works stably without losing a big amount of data.
We currently have hosted Gateways in several regional locations and expect to expand as needed. The Gateway endpoint https://gateway.storjshare.io is configured to automatically route traffic to the instance closest to your location.
I.e. Gateway MT is a globally distributed, multi-region cloud-hosted S3-compatible gateway.
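As a sketch of what that looks like from the client side (assuming you have already generated S3 credentials in the Storj console; the profile name `storj` is my own choice, and the placeholder values are not real keys):

```ini
# ~/.aws/credentials -- hypothetical profile for Gateway MT
[storj]
aws_access_key_id = <access key from the Storj console>
aws_secret_access_key = <secret key from the Storj console>
```

After that, any S3-compatible client only needs the endpoint override, e.g. `aws s3 ls --endpoint-url https://gateway.storjshare.io --profile storj`.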
Hmm… I hadn't considered cloning. Good suggestion. I don't know if it works for NAS disks, though. I only have Synology NAS nodes, and to clone I would have to take the disk out and perform the cloning on a Windows PC. All disks are ext4, so I should be able to read them in Windows with an extension installed. I don't know if DSM will have a problem when it wakes up on another disk…