Trust Cache - Satellite Authoritative False

I've been noticing a warning in the console log about a satellite:
console:service unable to get Satellite URL

So I checked my trusted cache and there is a satellite marked as untrusted. I don't know whether this satellite is supposed to be marked as untrusted or not.

I have very good uptime, so there should be no reason for my node to be ignoring any satellites.

"SatelliteURL": {
  "id": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo",
  "host": "",
  "port": 7777,
  "authoritative": false
}

EDIT: Side note, I've been noticing a ton of disk activity in the temp folder. Is there a way I can specify another drive as the location for this folder? I'm thinking of getting an SSD just for that, to act as a cache.

Thank you for the heads-up!
Notified the team

Hey SpyShadow -

Just to confirm, have you configured your trusted cache at all? Or are you running all default settings? Could you paste the warning/error you’re getting in the logs in full?

One additional clarification: “authoritative”: false is absolutely fine here. That’s not a problem. That’s just saying that returned a satellite on another domain name. Once we move the trusted satellite list to (in the next few weeks), authoritative will be true again, but that shouldn’t cause any problems for now. Whether or not the listing is authoritative, it is still trusted, by virtue of being in your trusted list.

On your side note, no, we explicitly use the atomic rename functionality provided by having the temp folder on the same filesystem as the rest of your storage, so I’m afraid putting temp on another drive will break things.
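The same-filesystem requirement comes from the write-to-temp-then-rename pattern. Here is a minimal sketch of that pattern in Python (the function and paths are hypothetical illustrations, not the actual storagenode code): the final rename is only atomic when source and destination sit on one filesystem, which is why moving temp to a separate drive would break things.

```python
import os
import tempfile

def store_piece(storage_dir: str, piece_id: str, data: bytes) -> str:
    """Illustrative sketch: stream an upload into <storage>/temp as a
    .partial file, then atomically rename it into <storage>/blobs.
    os.replace() is atomic only on the same filesystem; across devices
    it fails (EXDEV), so temp must share a filesystem with blobs."""
    temp_dir = os.path.join(storage_dir, "temp")
    blobs_dir = os.path.join(storage_dir, "blobs")
    os.makedirs(temp_dir, exist_ok=True)
    os.makedirs(blobs_dir, exist_ok=True)

    # Write into a .partial file first, like the blob-xxxx.partial
    # files visible in the node's temp directory.
    fd, tmp_path = tempfile.mkstemp(suffix=".partial", dir=temp_dir)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        final_path = os.path.join(blobs_dir, piece_id)
        os.replace(tmp_path, final_path)  # atomic on one filesystem
        return final_path
    except BaseException:
        os.unlink(tmp_path)  # remove the orphaned .partial on failure
        raise
```

If the process crashes between the write and the rename, the `.partial` file is left behind in temp, which is exactly the orphaned-file situation discussed later in this thread.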

Also, the update went through without a problem, but we'll see in a few months if it stays that way.

I have not configured a trusted cache, running default settings. Error at bottom of my post.

Disk activity is always at 100%. What's the temp folder for, exactly? Just checking whether this is normal.

2021-04-11T06:51:48.438-0700 WARN console:service unable to get Satellite URL {"Satellite ID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "error": "storage node dashboard service error: trust: satellite \"118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW\" is untrusted", "errorVerbose": "storage node dashboard service error: trust: satellite \"118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW\" is untrusted\n\*Pool).getInfo:228\n\*Pool).GetNodeURL:167\n\*Service).GetDashboardData:169\n\*StorageNode).StorageNode:45\n\tnet/http.HandlerFunc.ServeHTTP:2042\n\*Router).ServeHTTP:210\n\tnet/http.serverHandler.ServeHTTP:2843\n\tnet/http.(*conn).serve:1925"}

Hi, the temp folder under the storage node is used as a holding area for uploads. When your node accepts an upload, it creates a file under 'temp' to hold the data while it's streamed to the node (each file will be no larger than 5 MB). Once your node has successfully received the upload and signalled that it has the file, the file is relocated into the 'blobs' directory for storage.

An active node will have constant churn in the temp directory, but no file there should be more than 30 minutes old. If you start to collect more than 2-3 temp files when listing the directory, that is a sign your hard drive can't cope with the rate. There are some settings you can change in the config.yml file to help with file allocation for the temp files: you can pre-allocate bigger chunks, and you can limit your node's maximum concurrent uploads.

#Edit - options in the config.yml

To limit the number of concurrent uploads:
storage2.max-concurrent-requests: 5 <or whatever you think is sensible>

To pre-allocate the temp files:
pieces.write-prealloc-size: 4.0 MiB
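The "more than 2-3 stale temp files" check above is easy to script. A small sketch (hypothetical helper, assuming the 30-minute threshold mentioned above): list the `.partial` files in temp older than the cutoff, so you can spot when the drive is falling behind.

```python
import os
import time

def stale_partials(temp_dir: str, max_age_minutes: int = 30):
    """Hypothetical helper: return the names of *.partial files in the
    node's temp directory older than max_age_minutes. On a healthy node
    this list should be (nearly) empty."""
    cutoff = time.time() - max_age_minutes * 60
    stale = []
    for name in os.listdir(temp_dir):
        path = os.path.join(temp_dir, name)
        if name.endswith(".partial") and os.path.getmtime(path) < cutoff:
            stale.append(name)
    return sorted(stale)
```

Run it periodically (e.g. from a scheduled task) and alert if the returned list keeps growing.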

As a side note, seeing your utilisation on F: at 100% with 8.2 MB/s doesn't look healthy. Is your disk virtualised, so the I/O stats aren't real? I see you are using MS SSD; is that in a disk pool with more than 2 disks, by any chance? The latency looks low, though. Are you running any dedupe or a virus scanner on that volume? If you're happy with that performance it's cool, but I know from my node that you will be missing lots of uploads, as yours will be dropped as being slow. Potentially, if your disk is maxed out, you could start failing when audits come along, or, more commonly, your online score will drop even though your node is online.

EDIT: So I shut it down to run a speed test; it pulls 124 MB/s read and 116 MB/s write. So the 8 MB/s is nothing for it; I think the utilization is being reported incorrectly, or it's really sensitive.
Also, the files inside the temp folder are still there, even after a shutdown.

It's (3x) 1 TB HDDs pooled together as a single volume using the Storage Pool option under Server Manager in Windows Server 2016. It's not affecting performance at this time, so I'll probably ignore it; I will keep an eye on it when I expand the storage as needed.

Could some partial files have been left over from shutting down the node and restarting it?
I should probably reboot the server; it's been months since a reboot.

Ok, well that is better, 100+ MB/s on R/W :slight_smile: plenty for a few nodes.

If that is your only node, and it is 100% shut down (i.e. not lurking as a process), then the files in temp named "blob-xxxxxxxx.partial" can be removed, as they are failed uploads orphaned by a node crash. To be safe, move them into a directory in case something weird happens; then at least you can restore them.
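The "move, don't delete" advice can be sketched as a short script (hypothetical helper names and paths, assuming the node is fully stopped): relocate orphaned `blob-*.partial` files into a quarantine directory so they can be restored if anything weird happens.

```python
import os
import shutil

def quarantine_partials(temp_dir: str, quarantine_dir: str):
    """Sketch: move orphaned blob-*.partial files out of the node's
    temp directory into a quarantine directory instead of deleting
    them outright. Only run this while the node is fully stopped."""
    os.makedirs(quarantine_dir, exist_ok=True)
    moved = []
    for name in sorted(os.listdir(temp_dir)):
        if name.startswith("blob-") and name.endswith(".partial"):
            shutil.move(os.path.join(temp_dir, name),
                        os.path.join(quarantine_dir, name))
            moved.append(name)
    return moved
```

Once the node has run cleanly for a while with no ill effects, the quarantine directory can be deleted.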

Erm, well, if you do a clean shutdown, the node finishes the files it's processing and should remove the temp files. If it's being forcibly terminated, or timing out, then yes, you will be left with orphaned .partial files - there's no process currently in the node to purge those files on start-up. They aren't doing anything bad, they just look untidy :slight_smile: don't break an old node over some temp files.
