Error deleting objects using the AWS S3 SDK

I use the AWS S3 SDK to interact with Storj.

I upload files, make sure I can use them, and then delete them (all operations are done with the same passphrase-generated S3 credentials).

After the delete, when I list the contents of the bucket I get the expected result (0 files, since I deleted them all). In the UI I don’t see the files either, but I get the message: "Due to the number of objects you have uploaded to this bucket, 1842 files are not displayed."

What does this mean? Did I succeed in deleting the files or not?

This is how I delete the files:

var request = new DeleteObjectsRequest()
{
    BucketName = bucketName,
    Objects = keyVersions
};
DeleteObjectsResponse response = await s3Client.DeleteObjectsAsync(request);
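One check worth doing here: DeleteObjects can succeed as a whole while individual keys fail, so the response's error list should be inspected before assuming everything was removed. A minimal sketch in Python using the boto3-style response shape (the .NET DeleteObjectsResponse carries the equivalent data in its DeletedObjects and DeleteErrors collections; the function name and the sample keys are illustrative):

```python
# Sketch: partition a DeleteObjects response into deleted keys and per-key
# failures. Assumes the boto3-style response shape.

def split_delete_results(response):
    deleted = [d["Key"] for d in response.get("Deleted", [])]
    errors = {e["Key"]: e.get("Code", "") for e in response.get("Errors", [])}
    return deleted, errors

# Demo with a canned response (no network involved):
resp = {
    "Deleted": [{"Key": "file-0001.bin"}, {"Key": "file-0002.bin"}],
    "Errors": [{"Key": "file-0003.bin", "Code": "AccessDenied",
                "Message": "Access Denied"}],
}
deleted, errors = split_delete_results(resp)
print(deleted)  # ['file-0001.bin', 'file-0002.bin']
print(errors)   # {'file-0003.bin': 'AccessDenied'}
```

If `errors` is non-empty after a bulk delete, those keys are still in the bucket and need another pass.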

This is how I list the contents of the bucket (after the deletion it returns 0; before the deletion it returned the number of files there):

var request = new ListObjectsV2Request()
{
    BucketName = bucketName
};

ListObjectsV2Response response = await s3Client.ListObjectsV2Async(request);
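Note that a single ListObjectsV2 call returns at most MaxKeys (1000 by default) entries, so 1842 objects would take at least two calls; a complete listing has to follow the continuation token while IsTruncated is true. A hedged Python sketch of that loop (boto3-style response shape; the .NET SDK exposes the same IsTruncated and NextContinuationToken fields on ListObjectsV2Response):

```python
# Sketch of paginated listing: keep calling ListObjectsV2 with the
# continuation token until IsTruncated is false. `client` is anything
# with a boto3-style list_objects_v2 method.

def list_all_keys(client, bucket):
    keys, token = [], None
    while True:
        kwargs = {"Bucket": bucket}
        if token:
            kwargs["ContinuationToken"] = token
        page = client.list_objects_v2(**kwargs)
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
        if not page.get("IsTruncated"):
            return keys
        token = page["NextContinuationToken"]

# Demo against a canned two-page client (no network involved):
class FakeClient:
    def __init__(self, pages):
        self._pages, self._i = pages, 0
    def list_objects_v2(self, **kwargs):
        page = self._pages[self._i]
        self._i += 1
        return page

pages = [
    {"Contents": [{"Key": "a.bin"}, {"Key": "b.bin"}],
     "IsTruncated": True, "NextContinuationToken": "token-1"},
    {"Contents": [{"Key": "c.bin"}], "IsTruncated": False},
]
print(list_all_keys(FakeClient(pages), "my-bucket"))  # ['a.bin', 'b.bin', 'c.bin']
```

In this case KeyCount came back as 0 with IsTruncated false, so pagination is not hiding anything; it just rules out one explanation for the mismatch with the UI.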

I inspected the network requests the UI makes, and these are the responses:

<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>*****</Name><Prefix></Prefix><KeyCount>0</KeyCount><MaxKeys>1000</MaxKeys><Delimiter></Delimiter><IsTruncated>false</IsTruncated></ListBucketResult>

<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>****</Name><Prefix></Prefix><Marker></Marker><MaxKeys>1000</MaxKeys><Delimiter>/</Delimiter><IsTruncated>false</IsTruncated></ListBucketResult>

Is there a chance you’ve interacted with the bucket with a different access key? I saw your comment that all operations are done with the same passphrase-generated S3 credentials, but at any point in the past was that different?

The S3 interface doesn’t allow for checking for objects with other encryption passphrases, but the Uplink library and CLI do. Can you try checking what uplink ls --recursive --encrypted sj://bucket looks like? It should show all objects.

If it does show objects but the S3 interface says no objects exist, there’s your answer. You can empty the bucket with uplink rm --recursive --encrypted sj://bucket. If it shows no objects in either case, then you’re right, and we have some kind of bug in the web interface.

Could you find out which is the case?

Thanks!


I tried using the C# uplink.net SDK; it tells me there are 0 files as well.

This is the uplink logic I used:

Access _access = new Access("address", "key", "param");
IBucketService _bucketService = new BucketService(_access);
IObjectService _objectService = new ObjectService(_access);

BucketList bl = _bucketService.ListBucketsAsync(new ListBucketsOptions()).Result; // returns 1 bucket
foreach (Bucket buc in bl.Items)
{
    ObjectList objL = _objectService.ListObjectsAsync(buc, new ListObjectsOptions()).Result; // returns 0 objects
    bool testIt = true;
}

There is no chance there were files from before, since I deleted all the buckets, and for this test I started by creating a new bucket.

In addition in my dashboard I see:

Objects
Updated 5/12/2023

1842

Total of 0

and in the Storage graph window, I see that the storage size is 0.0B.

But in the bucket line, I see:

STORAGE	         BANDWIDTH	       OBJECTS	           SEGMENTS	            DATE ADDED
0.13GB           0.00GB            1842                1842                 5/11/2023

I appreciate the help. This is important to me because I want to become a paying customer, and I first need to make sure I know how to delete files so I won’t have extra ‘ghost’ files hanging around.

If you do not have any buckets, the stats info takes more time to update (up to 48h).
If the bucket exists, the stats should be updated within 24h.

However, please configure an Uplink CLI and do this check anyway (please use the correct name for the bucket):

uplink ls --encrypted --recursive sj://my-bucket | wc -l

The command:

.\uplink.exe ls --encrypted --recursive sj://<myBucket>

returns no results (0 lines)

.\uplink.exe ls

returns 1 line (my bucket’s creation date and name)

You need to replace sj://<myBucket> with the actual bucket name.
But I guess you did so.

In this case I believe you need to wait until the stats on the dashboard update to the actual numbers (up to 24h for an existing bucket).

I did replace myBucket with my actual bucket name 🙂

My first message about this issue was posted 19h ago, but the issue happened before that, so I’m pretty sure 24h have already passed.

Is there anything you can do to check this issue and verify for me that the files actually got deleted? (Maybe flush some cache?)

You need to check that the project is the same; the access grant and S3 credentials are valid only for their own project.
But since you checked the contents of the bucket with the --encrypted --recursive flags and there are no objects, there are no objects in the checked bucket in that project.

I still see the message: "Due to the number of objects you have uploaded to this bucket, 1842 files are not displayed."

This is weird. Will try to reproduce.

Any update? This is a blocker for me to move forward.

Why is it a blocker? Your usage is zero, so no charges. I suspect there is a visual bug though.

I did:

$ for ((i=1; i<=1200; i++)); do head /dev/urandom | uplink cp - sj://test/file-$(printf "%04d" $i).bin; done
$ uplink ls --recursive --encrypted sj://test/ | wc -l
1200

On my dashboard it shows: [screenshot]

And in the bucket view: [screenshot]

Now will remove all objects from there:

uplink rm --recursive --encrypted --parallelism 100 sj://test/

and now I need to wait until it is updated on the dashboard.


The blocker is that I need to know 100% that the files actually got deleted and that this is a UI bug.
If this is only a UI bug, then I’m good to go; if this is a more serious bug (e.g. I could potentially pay for something that was already deleted), then this is still a no-go from my side.

If you cannot see them with the CLI even with the --encrypted flag, they are definitely deleted.
The visual bug will be fixed in any case.

I experimented a little and found that I was able to partially reproduce the issue, but the answer was much simpler: I had pending objects.

Could you please check:

uplink ls --pending sj://my-bucket

or

uplink ls --pending --encrypted sj://my-bucket

./uplink ls --pending => no results
./uplink ls --pending --encrypted => no results

The UI still shows the same message.

Hi @meir,

I’m sorry to hear that you’re having issues. In order for me to help you, I’ll need some additional information that may be sensitive. To keep your info private, can you please check your direct messages and respond to me there? Thanks.


OK, we will continue in private.