We would like to get your feedback on the geofencing test plan. In the future we are planning to use GitHub markdown for the test plan and a forum thread for the feedback, similar to the blueprint workflow. At the moment we haven’t switched to GitHub markdown yet, but we don’t want to miss out on the feedback opportunity. I hope the following test plan copy is in a readable format. Please let us know if you have any questions or, even better, identify missing tests.
I added a few links to existing tests around our current node selection. Please note that the existing tests run without a geofence. I was hoping we could extend some of the existing tests with additional geofence filtering instead of multiplying the number of tests.
Storagenode Checkin
First Checkin: Two storage nodes check in for the first time. One uses some kind of DynDNS, the other a static IP. Geolocation works for both.
Geolocation Changed: A storage node checks in with a DynDNS entry for the first time and its geolocation gets set. The storage node then changes the IP address behind the DynDNS entry. The audit job tries to connect to the storage node, discovers the new IP address and, I assume, updates the IP in the cache? Now we have the perfect setup: the storage node checks in again and everything matches except the geolocation. We shouldn’t be lazy here: always check the current geolocation, even if we believe it hasn’t changed. The checkin should update the geolocation. Does it need to get flushed to the database? I would expect so.
Suppress DB Writes: The storage node checks in a second time, this time with no changes, including no geolocation update. Nothing should get flushed to the database.
Lookup Failed: A storage node checks in with a DynDNS entry and the lookup fails. The geolocation should not be set. Well, it can’t be, because we don’t have an IP address anyway.
Pingback Failed: A storage node checks in but the pingback fails. We know the IP address, but we should not set the geolocation.
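The four checkin cases above boil down to one decision: when does a checkin update the geolocation, and when does it trigger a DB write at all? A minimal sketch of that decision, with made-up types and field names (the real satellite overlay types look different):

```go
package main

import "fmt"

// Hypothetical, simplified view of the data involved in a checkin.
// Field names are assumptions for illustration.
type CheckinInfo struct {
	Address     string // DNS name or IP as reported by the node
	ResolvedIP  string // empty if the DNS lookup failed
	PingbackOK  bool   // whether the satellite could dial the node back
	CountryCode string // geolocation resolved from ResolvedIP
}

type NodeRecord struct {
	Address     string
	LastIP      string
	CountryCode string
}

// applyCheckin returns the updated record and whether anything changed
// (i.e. whether a DB write is needed). Geolocation is only set when we
// have a resolved IP and a successful pingback; it is re-checked on
// every checkin even if the address looks unchanged.
func applyCheckin(rec NodeRecord, in CheckinInfo) (NodeRecord, bool) {
	if in.ResolvedIP == "" || !in.PingbackOK {
		// Lookup or pingback failed: do not touch the geolocation.
		return rec, false
	}
	updated := rec
	updated.Address = in.Address
	updated.LastIP = in.ResolvedIP
	updated.CountryCode = in.CountryCode
	changed := updated != rec
	return updated, changed
}

func main() {
	rec := NodeRecord{Address: "node.dyndns.example", LastIP: "1.2.3.4", CountryCode: "DE"}

	// Second checkin, nothing changed: no DB write expected.
	_, changed := applyCheckin(rec, CheckinInfo{
		Address: "node.dyndns.example", ResolvedIP: "1.2.3.4", PingbackOK: true, CountryCode: "DE",
	})
	fmt.Println("no-op checkin writes:", changed)

	// IP moved to a new country: geolocation must be updated and flushed.
	rec2, changed2 := applyCheckin(rec, CheckinInfo{
		Address: "node.dyndns.example", ResolvedIP: "5.6.7.8", PingbackOK: true, CountryCode: "US",
	})
	fmt.Println("geo change writes:", changed2, "country:", rec2.CountryCode)
}
```

Each test case then pins down one branch: Suppress DB Writes expects `changed == false`, Geolocation Changed expects a write, and the two failure cases expect the geolocation to stay untouched.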
Bucket Config
Bucket New: Create a new bucket. Geolocation can be set.
Bucket not Empty: Create a new bucket and upload a file. Geolocation can’t be set because of the first upload.
Bucket Empty: Create a new bucket, upload a remote file and delete the file. Geolocation can be set because bucket is empty.
Pending Multipart Upload: Create a new bucket, upload a remote file with more than one segment, cancel the upload. Geolocation can’t be set because of the pending multipart upload.
https://review.dev.storj.io/c/storj/storj/+/6316/6/satellite/buckets/service_test.go
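The four bucket cases share a single precondition: a geofence can only be set while the bucket is completely empty, counting both committed objects and pending (e.g. cancelled-but-uncleaned multipart) uploads. A sketch of that rule, with the counts standing in for whatever metainfo queries the real service performs:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrBucketNotEmpty is a placeholder name; the real error type is an assumption.
var ErrBucketNotEmpty = errors.New("geofence can only be set on an empty bucket")

// canSetGeofence allows the placement change only while the bucket holds
// no committed objects and no pending multipart uploads.
func canSetGeofence(committedObjects, pendingUploads int) error {
	if committedObjects > 0 || pendingUploads > 0 {
		return ErrBucketNotEmpty
	}
	return nil
}

func main() {
	fmt.Println(canSetGeofence(0, 0)) // Bucket New / Bucket Empty: <nil>
	fmt.Println(canSetGeofence(1, 0)) // Bucket not Empty: error
	fmt.Println(canSetGeofence(0, 1)) // Pending Multipart Upload: error
}
```

Note that the Bucket Empty case (upload then delete) and the Pending Multipart Upload case differ only in which counter is non-zero afterwards, which is exactly why both tests are worth having.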
Customer Upload
Remote Segment: Create 2 buckets one with geolocation and one without geolocation. Upload a remote file with multiple segments to each bucket. Verify the location of each piece.
Server Side Move: Create 3 buckets, two of them with geolocation and one without geolocation. Upload a remote file to each bucket. Server Side Move should only work between buckets with the same geolocation. It should be impossible to move a file out of or into the one bucket without geolocation. https://review.dev.storj.io/c/storj/storj/+/6354/6/satellite/metainfo/metainfo_test.go
Server Side Copy: Same as for Server Side Move.
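The move/copy rule in the two cases above reduces to a placement-equality check between source and destination bucket. A tiny sketch of the three-bucket scenario (bucket names and the empty-string "no geofence" encoding are assumptions):

```go
package main

import "fmt"

// canMoveOrCopy allows server-side move/copy only when both buckets
// carry the same placement; "" stands for "no geofence".
func canMoveOrCopy(srcPlacement, dstPlacement string) bool {
	return srcPlacement == dstPlacement
}

func main() {
	placements := map[string]string{"eu-a": "EU", "eu-b": "EU", "global": ""}
	fmt.Println(canMoveOrCopy(placements["eu-a"], placements["eu-b"]))   // same geofence: allowed
	fmt.Println(canMoveOrCopy(placements["eu-a"], placements["global"])) // out of geofence: denied
	fmt.Println(canMoveOrCopy(placements["global"], placements["eu-b"])) // into geofence: denied
}
```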
Nodeselection Fill With Vetted Nodes: Requires a test network that has enough vetted and unvetted nodes but not enough unvetted nodes inside the geolocation. Verify that the nodeselection will fill the missing slots with vetted nodes. storj/state_test.go at 696b8f0d8eab883d0f81a9d7875854516b52c7f5
Nodeselection Not Enough Vetted Nodes: Test network with enough vetted and unvetted nodes, but we can’t select enough nodes within the geolocation. The upload should error out. Do not compromise on the geolocation; better to return an error to the customer. storj/state_test.go at 696b8f0d8eab883d0f81a9d7875854516b52c7f5
Nodeselection Not Enough Free Space: storj/selection_test.go at fb604be460ac04acc030768c9dfc6d819eff1f4f
Nodeselection >4 Hours Offline Nodes: storj/selection_test.go at fb604be460ac04acc030768c9dfc6d819eff1f4f
Nodeselection Suspended Nodes (failed audits): storj/service_test.go at fb604be460ac04acc030768c9dfc6d819eff1f4f
Nodeselection Suspended Nodes (downtime): storj/service_test.go at fb604be460ac04acc030768c9dfc6d819eff1f4f
Nodeselection Disqualified Nodes: storj/disqualification_test.go at fb604be460ac04acc030768c9dfc6d819eff1f4f
Nodeselection Same Subnet: storj/state_test.go at 696b8f0d8eab883d0f81a9d7875854516b52c7f5
Nodeselection Equal Chance: storj/selector_test.go at 696b8f0d8eab883d0f81a9d7875854516b52c7f5
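To make the geofenced nodeselection cases concrete, here is a simplified sketch of the selection logic they exercise: filter by placement, keep at most one node per subnet, reserve some slots for unvetted nodes, fall back to vetted nodes for slots the geofence can't fill with unvetted ones, and error out rather than compromise the placement. All types and names are assumptions; the real overlay cache and selection state are far richer.

```go
package main

import (
	"errors"
	"fmt"
)

// Node is a hypothetical candidate; the real overlay tracks much more state.
type Node struct {
	ID      string
	Subnet  string // /24 subnet key
	Country string
	Vetted  bool
}

// selectNodes picks `need` nodes inside the given placement. Up to
// newNodeSlots go to unvetted nodes; vetted nodes fill the remainder.
// If the geofence cannot be satisfied, it returns an error instead of
// selecting nodes outside the placement.
func selectNodes(candidates []Node, country string, need, newNodeSlots int) ([]Node, error) {
	seen := map[string]bool{}
	var unvetted, vetted []Node
	for _, n := range candidates {
		if n.Country != country || seen[n.Subnet] {
			continue // outside geofence, or subnet already used
		}
		seen[n.Subnet] = true
		if n.Vetted {
			vetted = append(vetted, n)
		} else {
			unvetted = append(unvetted, n)
		}
	}
	if newNodeSlots > len(unvetted) {
		newNodeSlots = len(unvetted) // fill missing slots with vetted nodes
	}
	selected := append([]Node{}, unvetted[:newNodeSlots]...)
	selected = append(selected, vetted...)
	if len(selected) < need {
		return nil, errors.New("not enough nodes inside the geofence")
	}
	return selected[:need], nil
}

func main() {
	candidates := []Node{
		{ID: "a", Subnet: "10.0.1", Country: "EU", Vetted: true},
		{ID: "b", Subnet: "10.0.2", Country: "EU", Vetted: true},
		{ID: "c", Subnet: "10.0.3", Country: "EU", Vetted: false},
		{ID: "d", Subnet: "10.0.3", Country: "EU", Vetted: false}, // same subnet as c
		{ID: "e", Subnet: "10.0.4", Country: "US", Vetted: false}, // outside geofence
	}
	// Two slots reserved for unvetted nodes, but only one eligible
	// unvetted node inside the geofence: a vetted node fills the gap.
	nodes, err := selectNodes(candidates, "EU", 3, 2)
	fmt.Println(len(nodes), err)
	// Not enough nodes inside the geofence: error, don't compromise.
	_, err = selectNodes(candidates, "US", 2, 1)
	fmt.Println(err)
}
```

The same filtering would sit in front of the existing free-space, offline, suspended, and disqualified checks, which is why extending those tests with a geofence parameter looks cheaper than writing new ones.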
Repair
Nodeselection: Upload a file with multiple remote segments (multiple segments to eliminate random luck) to a bucket with geolocation. Kill a few pieces and trigger repair. After repair verify that all pieces are still stored in the geolocation.
Filter Out Same Subnet: Upload a remote file to a bucket with geolocation. Kill a few pieces and trigger repair. Available nodes for repair upload are partially in the same subnet. Verify that repair uploads the new pieces only to the correct geolocation but it doesn’t store multiple pieces of the same segment on the same subnet.
Not Enough Nodes: Upload a remote file to a bucket with geolocation. Kill a few pieces and trigger repair. There are enough nodes globally but not enough nodes in the geolocation. The repair job should finish with fewer pieces than the success threshold but still keep the pieces it was able to upload.
Fill With Vetted Nodes: Test not needed. Duplicate to customer uploads.
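The three repair cases above differ from customer uploads in two ways: candidates must also avoid subnets that already hold a healthy piece, and a short result is kept rather than failed. A sketch of that target selection (hypothetical names, not the repairer's real code):

```go
package main

import "fmt"

// Node is a hypothetical repair candidate.
type Node struct {
	ID      string
	Subnet  string
	Country string
}

// pickRepairTargets selects nodes for repair uploads: candidates must be
// inside the segment's placement and must not share a subnet with a node
// that already holds a healthy piece (or with another selected target).
// Repair keeps whatever it managed to place even below the success
// threshold, so a short result is returned as-is rather than an error.
func pickRepairTargets(candidates []Node, holdingSubnets map[string]bool, country string, need int) []Node {
	used := map[string]bool{}
	for s := range holdingSubnets {
		used[s] = true
	}
	var targets []Node
	for _, n := range candidates {
		if len(targets) == need {
			break
		}
		if n.Country != country || used[n.Subnet] {
			continue
		}
		used[n.Subnet] = true
		targets = append(targets, n)
	}
	return targets
}

func main() {
	candidates := []Node{
		{ID: "a", Subnet: "10.0.1", Country: "EU"},
		{ID: "b", Subnet: "10.0.2", Country: "EU"},
		{ID: "c", Subnet: "10.0.2", Country: "EU"}, // same subnet as b
		{ID: "d", Subnet: "10.0.3", Country: "US"}, // outside geofence
	}
	holding := map[string]bool{"10.0.1": true} // a healthy piece already lives here
	targets := pickRepairTargets(candidates, holding, "EU", 3)
	// Only one node qualifies: below the threshold, but repair keeps it.
	fmt.Println(len(targets))
}
```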
Graceful Exit
Nodeselection: Upload a file with multiple remote segments (multiple segments to eliminate random luck) to a bucket with geolocation. Call graceful exit on one of the storage nodes. Verify the location of all pieces after graceful exit.
Filter Out Same Subnet: Upload a remote file to a bucket with geolocation. Call graceful exit on one of the storage nodes. Some of the available storage nodes have the same IP subnet. They should not get selected for graceful exit upload.
Not Enough Nodes: Upload a remote file to a bucket with geolocation. Call graceful exit on one of the storage nodes. Nodes outside the geolocation are available, but not a single storage node inside the geolocation. Graceful exit should finish successfully without having to transfer the piece. Do not penalize the storage node for our mistake.
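The key decision in the Not Enough Nodes case is per piece: if no replacement node exists inside the geofence, skip the transfer without counting it against the exiting node. A minimal sketch of that decision (names are assumptions, not the graceful exit service's API):

```go
package main

import "fmt"

// Node is a hypothetical replacement candidate.
type Node struct {
	ID      string
	Country string
}

// chooseExitTarget picks a replacement node inside the piece's geofence.
// If none exists, skip is true: the piece is not transferred, the exit
// still completes successfully, and the node is not penalized.
func chooseExitTarget(candidates []Node, country string) (target *Node, skip bool) {
	for i := range candidates {
		if candidates[i].Country == country {
			return &candidates[i], false
		}
	}
	return nil, true // no node inside the geofence: skip, don't fail the exit
}

func main() {
	candidates := []Node{{ID: "a", Country: "US"}, {ID: "b", Country: "US"}}
	target, skip := chooseExitTarget(candidates, "EU")
	fmt.Println(target == nil, skip)
}
```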