I uploaded files, made sure I could use them, and then deleted them (all operations were done with the same passphrase-generated S3 credentials).
After the delete, when I list the contents of the bucket via the API I get the expected result (0 files, since I deleted them all), but in the UI I don't see the files and instead get the message: "Due to the number of objects you have uploaded to this bucket, 1842 files are not displayed."
What does this mean? Did I succeed in deleting the files or not?
This is how I delete the files:
var request = new DeleteObjectsRequest()
{
    BucketName = bucketName,
    Objects = keyVersions
};
DeleteObjectsResponse response = await s3Client.DeleteObjectsAsync(request);
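One way to gain more confidence on the client side is to inspect the response for per-key failures. This is only a hedged sketch against the AWS SDK for .NET (the `Console.WriteLine` reporting is my addition, not from the original post); note also that a single DeleteObjects request accepts at most 1000 keys, so a larger `keyVersions` list would need to be sent in batches:

```csharp
var request = new DeleteObjectsRequest()
{
    BucketName = bucketName,
    Objects = keyVersions  // at most 1000 keys per DeleteObjects request
};

try
{
    DeleteObjectsResponse response = await s3Client.DeleteObjectsAsync(request);
    // DeletedObjects lists every key the service confirmed as removed.
    Console.WriteLine($"Deleted {response.DeletedObjects.Count} objects");
}
catch (DeleteObjectsException e)
{
    // Thrown when some keys could not be deleted; inspect the per-key errors.
    foreach (DeleteError error in e.Response.DeleteErrors)
        Console.WriteLine($"Failed to delete {error.Key}: {error.Code}");
}
```

If `DeletedObjects.Count` matches the number of keys you sent and no exception is thrown, the service has acknowledged every deletion.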
This is how I list the contents of the bucket (after the deletion it returns 0; before the deletion it returned the number of files there):
var request = new ListObjectsV2Request()
{
    BucketName = bucketName
};
ListObjectsV2Response response = await s3Client.ListObjectsV2Async(request);
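One caveat worth ruling out: ListObjectsV2 returns at most 1000 keys per call, so a single request is only conclusive if the response is not truncated. A hedged sketch (same AWS SDK for .NET assumptions as above) that follows the continuation token to get a complete count:

```csharp
var request = new ListObjectsV2Request() { BucketName = bucketName };
int total = 0;
ListObjectsV2Response response;
do
{
    response = await s3Client.ListObjectsV2Async(request);
    total += response.KeyCount;
    // Continue from where the previous page stopped.
    request.ContinuationToken = response.NextContinuationToken;
} while (response.IsTruncated == true);
Console.WriteLine($"Total objects: {total}");
```

If this loop also reports 0, the S3 gateway genuinely sees no objects under your credentials.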
I inspected the network request the UI makes, and this is the response:
Is there a chance you've interacted with the bucket with a different access key? I saw your comment that all operations are done with the same passphrase-generated S3 credentials, but was that ever different at any point in the past?
The S3 interface doesn't allow checking for objects encrypted with other passphrases, but the Uplink library and CLI do. Can you try checking what uplink ls --recursive --encrypted sj://bucket shows? It should list all objects.
If it does show objects while the S3 interface says none exist, there's your answer. You can empty the bucket with uplink rm --recursive --encrypted sj://bucket. If it shows no objects either, then you're right: we have some kind of bug in the web interface.
I appreciate the help. This is important to me because I want to become a paying customer, and first I need to make sure I know how to delete files so I won't have extra 'ghost' files hanging around.
If you do not have any buckets, the stats take more time to update (up to 48 hours).
If the bucket exists, the stats should be updated within 24 hours.
However, please configure an Uplink CLI and do this check anyway (please use the correct name for the bucket):
uplink ls --encrypted --recursive sj://my-bucket | wc -l
You need to check that the project is the same. The access grant and S3 credentials are valid only for their own project.
But since you checked the contents of the bucket with the --encrypted --recursive flags and there are no objects, there are no objects in that bucket in that project.
The blocker is that I need to know with 100% certainty that the files actually got deleted and that this is only a UI bug.
If it is only a UI bug, then I'm good to go. If it is a more serious bug (e.g., I could potentially pay for something that was already deleted), then this is still a no-go from my side.
I experimented a little and found that I was able to partially reproduce the issue, but the answer was much simpler: I had pending objects.
I’m sorry to hear that you’re having issues. In order for me to help you, I’ll need some additional information that may be sensitive. To keep your info private, can you please check your direct messages and respond to me there? Thanks.