How many nodes do you have?

It is. SQLite is not compatible with any network protocol except iSCSI, so sooner or later you will have problems with the SQLite databases.
The only exception could be if the CIFS share is served from a Windows server and the client is Windows too.
You can take a look: https://forum.storj.io/tag/smb
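If the node's databases already sit on such a share and you start seeing database errors in the log, you can check them with the sqlite3 CLI. A minimal sketch, assuming the databases are in the node's storage directory (path and filename are illustrative) and the node is stopped first:

# check one of the node's databases for corruption (path is illustrative)
sqlite3 /mnt/storagenode/storage/bandwidth.db "PRAGMA integrity_check;"
# a healthy database prints a single line: ok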

don't see how that relates to satellites controlling the allocation of data on nodes…
what i was trying to say is that if nodes have the same age, no downtime, infinite bandwidth and infinite storage, then they would most likely be 99% the same in data stored and bandwidth used for egress.

at least for test data, which is what we see the most of… but i suspect normal data will be distributed much in the same way… that way it's easier to keep the network balanced and stable… actually makes perfect sense when you think about it…

i tried using network drives, but didn't have much luck with it… also it might add latency… ofc it depends on a lot of factors… stuff is rarely impossible, just very difficult xD

i did design my setup to hardwire up to like 1000 hdds into the bus of my server so no need for iscsi, cifs and what not… ofc there will be a bus speed limitation at some point… but i'm betting the internet goes first lol

Thank you for sharing.
Your success rate with the pi3 is about the same as with my pi4. For me, I think the upload speed is the bottleneck. I think I will then put the pi3 at my parents' house. If afterwards it's $2-3 a month, I'm happy. The hardware is already there and will not be used anyway.

Hi sir, I just wonder how you got 24 TB on just 1 node?

You didnā€™t ask me, but RAIDZ1. I just happened to know. :slight_smile:

I use SHR2 on Synology myself.

Hi sir, I'm also a newbie, what do you mean by a node getting vetted? I also have a 3.2 TB node.

You didn't ask me again, but since I'm at it, haha. Nodes need to pass 100 audits to get vetted on the corresponding satellite. Until they are, they only receive 5% of normal traffic. You can use the earnings calculator to see the vetting progress on each satellite.
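For reference, the earnings calculator mentioned here is the community earnings.py script; a rough sketch of running it, assuming the node's storage directory is at the illustrative path below (the script reads the node's SQLite databases, so some versions recommend stopping the node or working on a copy first):

python earnings.py /mnt/storagenode/storage
# prints per-satellite payout estimates, and for unvetted satellites the audit count toward the 100 needed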


i'm using zfs… but in short i combined a number of drives in a raid solution to get some redundancy and other performance benefits for a large array / pool which i will be using for a wide range of things, like vm's, storagenode/s, home media storage, network drives and a pxe server

and to be more exact:
it's currently a pool of two raidz1 vdevs, and when my scrub of the pool is complete i will be adding a 3rd raidz1 vdev to bump up the raw iops the pool can handle.
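for anyone curious, adding the third vdev is basically a one-liner… a rough sketch with placeholder disk ids (the real ones would be the new drives' by-id names):

# add a third 3-disk raidz1 vdev to the existing pool (disk ids are placeholders)
zpool add tank raidz1 ata-DISK_A ata-DISK_B ata-DISK_C
zpool status tank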


                                                 capacity     operations     bandwidth
pool                                           alloc   free   read  write   read  write
---------------------------------------------  -----  -----  -----  -----  -----  -----
rpool                                          55.0G  84.0G      0     47      0   637K
  ata-OCZ-AGILITY3_OCZ-B8LCS0WQ7Z7Q89B6-part3  55.0G  84.0G      0     47      0   637K
---------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                           13.6T  11.0T  2.03K    153   803M  3.63M
  raidz1                                       8.13T  8.23T    769     41   406M  1.73M
    ata-HGST_HUS726060ALA640_AR11021EH2JDXB        -      -    249     13   136M   590K
    ata-HGST_HUS726060ALA640_AR11021EH21JAB        -      -    219     14   135M   590K
    ata-HGST_HUS726060ALA640_AR31021EH1P62C        -      -    300     13   135M   590K
  raidz1                                       5.43T  2.74T  1.28K     52   396M   577K
    ata-TOSHIBA_DT01ACA300_531RH5DGS               -      -    586     18   132M   193K
    ata-TOSHIBA_DT01ACA300_99PGNAYCS               -      -    578     17   132M   192K
    ata-TOSHIBA_DT01ACA300_Z252JW8AS               -      -    147     16   132M   193K
logs                                               -      -      -      -      -      -
  ata-OCZ-AGILITY3_OCZ-B8LCS0WQ7Z7Q89B6-part5  63.7M  4.44G      0     58      0  1.34M
---------------------------------------------  -----  -----  -----  -----  -----  -----
temp512                                        9.03T  1.88T      0      0      0      0
  ata-HGST_HUS726060ALA640_AR31051EJS7UEJ      4.55T   921G      0      0      0      0
  ata-HGST_HUS726060ALA640_AR31051EJSAY0J      4.47T  1002G      0      0      0      0
---------------------------------------------  -----  -----  -----  -----  -----  -----

and looks like it will be fast fast fast lol 3300 reads and a total of 800MB per second, and writes should be not far removed from that… and ofc + 50% better in a bit… ofc it will take a while to rebalance the data, but i didn't have enough drives to actually have them all in the pool when creating it and migrating my storagenode…
but that's just a balance issue, should solve itself even… if i don't figure out how to make it rebalance.
and because i don't have enough 6tb drives it will be a mix of 3tb and 6tb lol
me ghetto rig
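fwiw, zfs won't rebalance existing data across vdevs on its own (new writes just favor the emptier vdev), so the usual manual trick is to rewrite the data, for example with a send/receive into a new dataset. a rough sketch, with illustrative dataset names and assuming the node is stopped while the data moves:

# snapshot and rewrite the dataset so its blocks get spread over all vdevs
zfs snapshot tank/storagenode@rebalance
zfs send tank/storagenode@rebalance | zfs receive tank/storagenode-new
# after verifying the copy, rename/swap the datasets and point the node at the new path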


1 node - a dedicated NAS with 4 drive bays, currently on RAID 1 (SHR1 with 2 HDDs).
10 TB total storage.

However, knowing what I know now, I would have just gone with a cheap RPi and a 10 TB drive, since they would earn the same and the RPi would be cheaper as an initial investment.

SNOs with higher-end gear don't get paid any differently than those with lower-end gear.


Hi! Out of curiosity, would you be willing to share the names of some of these projects? I have a very quick internet connection and plenty of storage that I would like to put to good use. If not, no worries. Cheers. Tiago


14 TB on 3 nodes, not full, on 2 public IPs with the same provider ($5/m option for labbing)
2 x 2 TB old nodes, full, running on a Core i3 mini-PC hackintoshed with APFS as external USB devices
A Mac mini i7 as cold standby, configured to take over from the mini-PC (no failure so far), used for other mining
10 TB on a Synology 412+ in RAID 5, not full yet but very fast
All recycled hardware with no crashes. I just had to slow down the ingress throughput to the mini-PC, not accepting more than 3 MB/s due to its low specs
Planning to start an RPi4 node in July to learn Docker on ARM
Looking forward to the Syno backup app to recycle my STORJ, and a consolidated HTTPS dashboard

If you are ready to spend time on projects not in prod, check out Golem or the Akash Network, which is currently in challenge phase 2, building a decentralized cloud infrastructure with compute and storage.

Hello Bob,

How do you limit the ingress?

Thank you.
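One common way to do this (not necessarily what Bob did) is to cap concurrent transfers in the node's config.yaml, which indirectly throttles ingress on weak hardware; the value below is illustrative:

# in config.yaml (0 means unlimited; lower values reject uploads beyond that many concurrent transfers)
storage2.max-concurrent-requests: 10
# then restart the node, e.g.: docker restart storagenode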


Siacoin is the only other publicly participate-able project. I price SC per TB stored at about what Storj pays, but bandwidth a lot cheaper, and I don't get many contracts as a result; prices there are rock bottom, with lots of people providing storage and bandwidth for much less. My other projects are just some non-decentralized side gigs.

1 Like

Not mine, but I randomly saw this single location with 677 nodes in Kassel on storjnet.info :sunglasses:


That's more than in all of Canada; it must be in a datacenter.


IP geolocations aren't always correct in Germany. E.g. I have Vodafone cable and my IP geolocation shows Hannover (which is in the north) even though I live in the very south of Germany… So this is more likely just an ISP location and not a datacenter. I doubt any German datacenter has 677 nodes in it. None of these nodes would make any profit, even if that datacenter had over 100 subnets available.


That's a reply from someone who has a node in a datacenter lol jk.
Also, then explain why it's so concentrated in one location. If that were the case, I doubt there are 677 random people in that one spot running these nodes either.
Even if it were many subnets, it would still be concentrated in one area, because if a datacenter has many subnets they would still all show up at one location; that's how it's set up.

Anywhere you see a super concentration of nodes, I can google where a datacenter is located and sure enough there is a datacenter.

Take Canada for example: you look at Montreal, there's a datacenter; you look at Toronto, there's a datacenter; you look at Ottawa, no datacenter, and barely any nodes.