Noob node runner needs some help pls

Not exactly. You should increase only the timeouts that are actually causing the node to crash, not all of them.
Changed improperly, they could do harm. For example, your disk could become corrupted, but the readability check would not detect it and would not shut down the node to protect it from disqualification.
How is this related to disqualification? In this example, when the disk is dying, the node cannot provide a piece for audit within the 5-minute timeout; after it fails 2 more retries, such an audit is considered failed. Several failed audits like that and the node will be disqualified.
If the readability timeout were shorter, this internal monitoring would stop the node before it even started to fail audits.

If you had a crash because the readability timeout was exceeded during the check, and you have already done the recommended actions (checked and fixed the disk, performed a defragmentation) but the readability errors still occur, you may slowly increase the readable timeout and the readable interval, because they are both 1m0s by default.
There should be no spaces before the option, otherwise the node may not start due to an invalid YAML format.

it should be:

# how frequently to verify the location and readability of the storage directory
storage2.monitor.verify-dir-readable-interval: 1m30s
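
Since the readable timeout has the same 1m0s default, you would raise it alongside the interval. For example (the value is only an illustration, and the comment line is descriptive rather than the exact wording from your config.yaml):

# how long to wait while verifying readability of the storage directory
storage2.monitor.verify-dir-readable-timeout: 1m30s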

If your node struggles to write to the disk, and you have performed the recommended actions (checked and fixed the disk, performed a defragmentation) but the writeability errors still occur, you may slowly increase the writeable timeout, but not the writeable checks interval, because their defaults differ (the writeable timeout is 1m0s by default, while the writeable checks interval is 5m0s by default).
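
For example (the value is only an illustration, raise it gradually; the comment line is descriptive rather than the exact wording from your config.yaml):

# how long to wait while verifying writability of the storage directory
storage2.monitor.verify-dir-writable-timeout: 1m30s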

So you should not change the writeable checks interval unless you increase the writeable timeout above 5m0s (which is already a red alert for your disk subsystem, and you need to check the disk surface and S.M.A.R.T.).
This means you should not add or uncomment this parameter:

And again, if you did add this parameter, it should not have spaces before it. To comment it out, you may add a # character in front of it:

# how frequently to verify writability of storage directory
# storage2.monitor.verify-dir-writable-interval: 5m0s

Save the config and restart the node.

ok got it. I just verified from the log file that the fatal errors were caused by verify-dir-readable-timeout and verify-dir-writable-timeout. So I extended them to 1m30s, returned the other 2 to their default values, and commented them out as they were before the change. I'll monitor for a few days to see what happens and will keep you posted. Thanks again for your time, help, and clarification.

ok so I was looking at the log file again and I'm seeing a lot of these upload and download errors from the satellite us1.storj.io:7777
error:

2023-08-23T14:09:33-04:00 ERROR piecestore upload failed {Piece ID: 3P6L4FYOQYAPJXQTXJ2S2NR5YKCRTZTPWV44I7W3INRFPGATTM7A, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Action: PUT, error: context canceled, errorVerbose: context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:75\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload.func6:500\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:506\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:243\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:124\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:114\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35, Size: 10240, Remote Address: 5.161.149.40:11876}
2023-08-23T14:09:37-04:00 ERROR piecestore download failed {Piece ID: SAX7CIVEKWJ25YUIFPB5THTE6UB5AR6VXHJJTBEIG6TAJ2JZXUMQ, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Action: GET, Offset: 0, Size: 540672, Remote Address: 5.161.207.152:64112, error: write tcp 10.0.1.33:28967->5.161.207.152:64112: wsasend: An existing connection was forcibly closed by the remote host., errorVerbose: write tcp 10.0.1.33:28967->5.161.207.152:64112: wsasend: An existing connection was forcibly closed by the remote host.\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:349\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).sendData.func1:816\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22}

is this error caused by something on my side?

I will try to explain one more time.
The readability check interval has a default value of 1m0s. The readability timeout is 1m0s by default too.
So if you change the readability timeout, you also need to change its check interval; otherwise the check will run more often than the timeout allows, the checks will overlap, and that will likely crash the node more often.

The writeability check interval is 5m0s by default, the writeability timeout is 1m0s by default (they are different).
So if you change the writeability timeout, you would change its check interval only if the new timeout is greater than 5m0s.
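
Just as an illustration (hopefully you never need values this high): if you had to raise the writeability timeout to, say, 6m0s, only then would you raise the interval as well so the checks do not overlap:

storage2.monitor.verify-dir-writable-timeout: 6m0s
storage2.monitor.verify-dir-writable-interval: 6m0s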

I hope that now you understand it better.
So, if you have both timeout errors, you need to increase:
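
(the values below are only an illustration - increase them slowly, keep the readable interval at or above the readable timeout, and note the comment lines are descriptive rather than the exact wording from your config.yaml)

# how frequently to verify the location and readability of the storage directory
storage2.monitor.verify-dir-readable-interval: 1m30s
# how long to wait while verifying readability of the storage directory
storage2.monitor.verify-dir-readable-timeout: 1m30s
# how long to wait while verifying writability of the storage directory
storage2.monitor.verify-dir-writable-timeout: 1m30s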

These errors mean that the remote host closed the connection because of long tail cancellation - your node lost the race for the piece. If you see a lot of such errors, though, it could also be your router.

ok got it, made changes.
So what sort of problem should I be looking into with my router?

Does a reboot of the router reduce the number of errors? If not, then these are only long tail cancellation errors.