Steadily increasing CPU usage

It's 100% a VM. You can run Windows and Linux inside a Docker container, which equals a VM. Especially if it's a nested container.

You should really read the part between brackets…

Linux containers on Linux are NOT VMs. (I mean, at this point just google “is docker a vm?”, please.)

And even if you run Linux containers on Windows, the containers are running inside the VM; they aren't VMs themselves.

Good explanations: comparison - How is Docker different from a virtual machine? - Stack Overflow


I just remember being able to run Windows inside Docker, and I'm pretty sure it was on Linux, and also boot up to a desktop through VNC.

Right, this time I'm just going to quote what I said for you.

But what @LrrrAc said still applies: the containers still aren't VMs. They just run in a VM which runs the guest OS used by the containers. So all Linux containers will share one Linux VM when running on a Windows host.

This exception only applies if the container OS is different from the host OS.
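If you want to convince yourself, here's a minimal sketch (plain Go, nothing Storj-specific) that prints the kernel the process is running on. Run it on a Linux host and then inside a Linux container on that same host, and you'll see the same kernel string, because the container shares the host's kernel instead of booting its own like a VM would. On a Windows host you'd see the kernel of Docker Desktop's helper VM instead.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // /proc/version is provided by the running kernel (Linux only).
        // A Linux container shares the host's kernel, so this prints the
        // host's kernel string; a real VM would print its own guest kernel.
        v, err := os.ReadFile("/proc/version")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Print(string(v))
    }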

I was right, I did run it in a Docker container on Linux, but it was with KVM enabled, so you're also right.

You ran a VM inside a Docker container? That's meta. You gotta see how many levels deep you can go.

Yeah, I wanted to see if it was possible. It's a bit easier to do inside Proxmox though.

I've wanted to move to Proxmox, but it seems like quite the undertaking. Moving everything, I mean. Although containers make it easier.

Yeah, I also get Proxmox containers confused with Docker containers, so there is that.

List of Linux containers - Wikipedia. So many types of containers.

I don't understand. Limiting it seems to have fixed it. I set the limit to .4 and it maxed that out during startup, but after that it went back to its ~5% average usage and has sat there for an hour; it used to rise to 15% by then. It's not even close to the limit, but it's not rising. It seems like having a limit at all stopped it from rising.

I'm going to let it go for another hour before I try changing, then removing, the limit to see what happens.

I've limited it to 40% of a core and it hasn't risen over 15% (after it calmed down from startup) in 3 hours, which is how it used to work. It's not even hitting the 40% cap.
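For anyone reading along, this is the Docker-level cap I mean. A sketch rather than my exact command; the container name and image tag here are placeholders:

    # start the container capped at 40% of one core
    docker run -d --name storagenode --cpus=0.4 storjlabs/storagenode:latest

    # or change the cap on a container that's already running
    docker update --cpus=0.4 storagenode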


Did you end up looking at the debug info @littleskunk posted? Guide to debug my storage node, uplink, s3 gateway, satellite

Particularly, functions 2 and 6 listed there might give you some info on what's going on.
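If I remember the guide right, those map onto the node's local debug HTTP endpoint, something like the below. Treat the port and paths as assumptions from memory; the linked post is the authoritative version.

    # function 2: snapshot of currently running spans
    curl http://localhost:5999/mon/ps

    # function 6: heap dump (binary pprof, so save it to a file)
    curl -o heap.pprof http://localhost:5999/debug/pprof/heap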

That is, if you find the current solution as unsatisfying as I do, haha.

Yeah, I don't like this solution. I tried it but it gave me gibberish output. Not sure what it was outputting. https://transfer.sh/vL9fuA/profile is the output piped to a file, if you know how to read it. What extension is it supposed to be?

2:

[6079095051076450573] storj.io/private/process.root() (elapsed: 4h16m14.317366191s)
[60142560489862150] storj.io/storj/storagenode.(*Peer).Run() (elapsed: 4h16m13.193655157s)
[8761063885431801635] storj.io/storj/private/server.(*Server).Run() (elapsed: 4h16m12.650466105s, orphaned)
[1595792565560114085] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 29m6.399989115s)
[5421409337542920680] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 29m6.399976542s)
[2555345583987964921] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 672.30064ms)
[6380962355970771516] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 672.290421ms)
[5280704728805890367] storj.io/storj/storagenode/pieces.(*Writer).Commit() (elapsed: 668.601692ms)
[9106321500788696962] storj.io/storj/storage/filestore.(*blobWriter).Commit() (elapsed: 668.494932ms)
[3708566235916727750] storj.io/storj/storage/filestore.(*Dir).Commit() (elapsed: 668.488831ms)
[3102354273867452587] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 1m43.730755164s)
[6927971045850259182] storj.io/storj/storagenode/piecestore.(*Endpoint).Download() (elapsed: 1m43.73074184s)
[4883951687236815098] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 1m44.183199056s)
[8709568459219621693] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 1m44.183185752s)
[7099765301338612995] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 4m41.12375241s)
[1702010036466643783] storj.io/storj/storagenode/piecestore.(*Endpoint).Download() (elapsed: 4m41.123735939s)
[8035876244581868368] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 1h13m2.375715955s)
[2638120979709899156] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 1h13m2.375699073s)
[8777516747648847420] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 3m33.980106143s)
[3379761482776878207] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 3m33.980087529s)
[637949475741906977] storj.io/storj/storagenode/bandwidth.(*Service).Run() (elapsed: 4h16m12.650612469s, orphaned)
[1581711207190469912] storj.io/storj/storagenode/collector.(*Service).Run() (elapsed: 4h16m12.649857064s, orphaned)
[7188925392542639018] storj.io/storj/storagenode/console/consoleserver.(*Server).Run() (elapsed: 4h16m12.650518121s, orphaned)
[1319289261946388338] storj.io/storj/storagenode/contact.(*Chore).Run() (elapsed: 4h16m12.650596258s, orphaned)
[5616786899653476401] storj.io/storj/storagenode/gracefulexit.(*Chore).Run() (elapsed: 4h16m12.650482144s, orphaned)
[7660806258266920486] storj.io/storj/storagenode/monitor.(*Service).Run() (elapsed: 4h16m12.649730197s, orphaned)
[5879208844897557975] storj.io/storj/storagenode/orders.(*Service).Run() (elapsed: 4h16m12.649477754s, orphaned)
[2053592072914751379] storj.io/storj/storagenode/pieces.(*CacheService).Run() (elapsed: 4h16m12.649485759s, orphaned)
[7398384313022838912] storj.io/storj/storagenode/retain.(*Service).Run() (elapsed: 4h16m12.650583624s, orphaned)
[8289183019707520168] storj.io/storj/storagenode/version.(*Chore).Run() (elapsed: 4h16m12.6506132s, orphaned)

6 also outputs binary, according to an error.

I guess 6 is just a memory heap dump. I'm not sure how useful that is. 2 shows a snapshot of what processes are running on your node and how long they have been running. It could show things that are stuck. I was looking at what you posted, but I guess you removed the message, as it disappeared while I was looking through it.

Didn’t see anything that stood out yet, but you probably won’t see anything standing out unless you remove the memory fix and wait for the problem to occur again. Up to you whether you’re up for that.
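Also, about the gibberish output: assuming that file is a standard Go pprof profile (binary, gzip-compressed protobuf; .pprof is the usual extension), it isn't meant to be read raw. The Go toolchain can render it:

    go tool pprof -top profile          # text table of the top memory holders
    go tool pprof -http=:8080 profile   # interactive graph view in the browser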

Oh weird. I accidentally hit delete and restored it. I was editing it after I restored it too. IDK why it's gone.

[6079095051076450573] storj.io/private/process.root() (elapsed: 4h46m46.898552979s)
[60142560489862150] storj.io/storj/storagenode.(*Peer).Run() (elapsed: 4h46m45.774838688s)
[8761063885431801635] storj.io/storj/private/server.(*Server).Run() (elapsed: 4h46m45.231652011s, orphaned)
[1527800927675037933] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 7m33.61737966s)
[5353417699657844529] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 7m33.617372468s)
[1595792565560114085] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 59m38.981223712s)
[5421409337542920680] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 59m38.98121174s)
[3614069802602495783] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 11m57.762138032s)
[7439686574585302379] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 11m57.762130678s)
[4193219017831587191] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 1.621531282s)
[8018835789814393787] storj.io/storj/storagenode/piecestore.(*Endpoint).Download() (elapsed: 1.621515082s)
[7599917948853993999] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 1.51219198s)
[2202162683982024786] storj.io/storj/storagenode/piecestore.(*Endpoint).Download() (elapsed: 1.512183965s)
[8178037960212730377] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 1m6.746925778s)
[2780282695340761164] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 1m6.74691613s)
[8777516747648847420] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 34m6.561331393s)
[3379761482776878207] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 34m6.561314722s)
[637949475741906977] storj.io/storj/storagenode/bandwidth.(*Service).Run() (elapsed: 4h46m45.231841285s, orphaned)
[1581711207190469912] storj.io/storj/storagenode/collector.(*Service).Run() (elapsed: 4h46m45.231087533s, orphaned)
[7188925392542639018] storj.io/storj/storagenode/console/consoleserver.(*Server).Run() (elapsed: 4h46m45.231750214s, orphaned)
[1319289261946388338] storj.io/storj/storagenode/contact.(*Chore).Run() (elapsed: 4h46m45.231829873s, orphaned)
[5616786899653476401] storj.io/storj/storagenode/gracefulexit.(*Chore).Run() (elapsed: 4h46m45.231717613s, orphaned)
[7660806258266920486] storj.io/storj/storagenode/monitor.(*Service).Run() (elapsed: 4h46m45.230967219s, orphaned)
[5879208844897557975] storj.io/storj/storagenode/orders.(*Service).Run() (elapsed: 4h46m45.230716228s, orphaned)
[2053592072914751379] storj.io/storj/storagenode/pieces.(*CacheService).Run() (elapsed: 4h46m45.230725826s, orphaned)
[7398384313022838912] storj.io/storj/storagenode/retain.(*Service).Run() (elapsed: 4h46m45.231825184s, orphaned)
[8289183019707520168] storj.io/storj/storagenode/version.(*Chore).Run() (elapsed: 4h46m45.231856273s, orphaned)