Too many open files

Rclone mount returns this error (though I’m not sure it’s rclone itself):

Rclone (native)

user888@vmi1222222:~$ rclone mount waterbear:mybucket ~/storj --transfers=10 --drive-chunk-size=65536 --vfs-cache-mode writes --allow-non-empty
2022/11/14 02:22:41 ERROR : FS sj://mybucket: cp input ./sample.dat [HashesOption([])]: uplink: stream: open /tmp/tee1185832014: too many open files
2022/11/14 02:22:43 ERROR : FS sj://mybucket: cp input ./sample.dat [HashesOption([])]: uplink: stream: ecclient: successful puts (0) less than or equal to repair threshold (35), ecclient: failed to dial (node:12Cdt629arF7g5B4kkoCcHuB5RcM43VGifU9jYXWyAYEk4vYjth): piecestore: rpc: tcp connector failed: rpc: dial tcp 37.84.73.80:28967: socket: too many open files; ecclient: failed to dial (node:12VhfjU7Z4GqL1krugdkM4AewNideNcHu8UaTqAsQACjYgy26jS): piecestore: rpc: tcp connector failed: rpc: dial tcp 192.99.160.16:28967: socket: too many open files; ecclient: failed to dial (node:1kjC2EVPSPsfyWRL6aX2i5vn5t5CA7DHaNcq3FyE6xT5ygVwUP): piecestore: rpc: tcp connector failed: rpc: dial tcp 194.230.191.218:28971: socket: too many open files; ecclient: failed to dial (node:1VbmdaUEr5VM8rVxe8dZqQ9uUxMXtH9TH9DSEPZSX6f5PpL8yD): piecestore: rpc: tcp connector failed: rpc: dial tcp 115.74.103.174:28966: socket: too many open files; ecclient: failed to dial (node:17AJS3FKFTpCwi1tKZGce3VJm6MHHmcNv36upB9zL6Jt53fDwX): piecestore: rpc: tcp connector failed: rpc: dial tcp 217.160.142.235:28906: socket: too many open files; ecclient: failed to dial (node:12mpckjHbtfrS5zqcZ8dmk892YUWEtPr4mbFAJXcJRHPD3FzS1H): piecestore: rpc: tcp connector failed: rpc: dial tcp 138.36.105.76:21988: socket: too many open files; ecclient: failed to dial (node:12qgWaFC2YsTaKK99KwrSzbU7NNGyz2LoqTX6WmdeQgdXSkejLE): piecestore: rpc: tcp connector failed: rpc: dial tcp 45.83.241.30:29005: socket: too many open files; ecclient: failed to dial (node:1vbJACcJTJt3yESnM7NQVtScEmU5ELmQvNb4ivcSnuAy76VnN7): piecestore: rpc: tcp connector failed: rpc: dial tcp 73.94.50.120:28967: socket: too many open files; ecclient: failed to dial (node:12g1RoikhJmxpVZzjjPyo4ufRRMdwCjBaJKndJ8AH6gkk8d9ML9): piecestore: rpc: tcp connector failed: rpc: dial tcp 87.106.192.140:30083: socket: too many open files; ecclient: failed to dial (node:127wtGkEyyXmPdqMuVp7xbEAkiABV4ef5hXNhmpFZpZ8kfZ797G): piecestore: rpc: tcp connector failed: rpc: 
dial tcp 194.72.35.201:28967: socket: too many open files; ecclient: failed to dial (node:1AiRg4aRvw

ulimit -a

real-time non-blocking time  (microseconds, -R) unlimited
core file size              (blocks, -c) 0
data seg size               (kbytes, -d) unlimited
scheduling priority                 (-e) 0
file size                   (blocks, -f) unlimited
pending signals                     (-i) 31698
max locked memory           (kbytes, -l) 1018500
max memory size             (kbytes, -m) unlimited
open files                          (-n) 1024
pipe size                (512 bytes, -p) 8
POSIX message queues         (bytes, -q) 819200
real-time priority                  (-r) 0
stack size                  (kbytes, -s) 8192
cpu time                   (seconds, -t) unlimited
max user processes                  (-u) 31698
virtual memory              (kbytes, -v) unlimited
file locks                          (-x) unlimited

VPS: 4 cores / 8 GB RAM
200 Mbit/s
Germany

import os
from multiprocessing.pool import ThreadPool

import requests

urls = await database.load_urls(page_n=page_n)
result_to_list = [i[0] for i in urls]
results = ThreadPool(10).imap_unordered(download_url, result_to_list)

def download_url(url):
    print("downloading:", url)
    file_name = url.rsplit("/", 1)[-1]
    # os.path.join does not expand "~"; do it explicitly.
    download_path = os.path.join(os.path.expanduser("~/storj"), file_name)
    # Use the response as a context manager so its socket / file
    # descriptor is released even when an error occurs.
    with requests.get(url, stream=True) as req:
        if req.status_code == requests.codes.ok:
            with open(download_path, 'wb') as f:
                for data in req.iter_content(chunk_size=64 * 1024):
                    f.write(data)

The code downloads files from several HTTPS addresses and writes the data to the Storj network through the mount.
It hasn’t crashed, but there are many errors.
How can I debug this?
Thank you
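As a first debugging step, you can check which descriptor limit the Python process actually runs with, and how many descriptors it currently holds. This is a minimal, Linux-only sketch (not from the original post):

```python
import os
import resource  # Unix-only module for querying process resource limits

# Soft/hard limits on open file descriptors for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"RLIMIT_NOFILE: soft={soft}, hard={hard}")

# On Linux, /proc/self/fd lists every descriptor this process has open;
# a count that keeps climbing toward the soft limit points at a leak.
open_fds = len(os.listdir("/proc/self/fd"))
print(f"currently open descriptors: {open_fds}")
```

If the count grows steadily while the downloader runs, responses or files are not being closed.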

You’ll need to increase the open-file limit for rclone from the 1024 (-n) shown above to something higher. You should not have any problems with setting it to 65536 (64k).
You can either mount the above bucket as a systemd service (and specify the file handle limit in it) or increase the global defaults.
For a reference of a simple systemd file, you can refer to this file or more specifically this line:
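For orientation, a minimal unit of that shape could look like the following sketch. The unit name, paths, and flags here are illustrative assumptions, not taken from the referenced file; the key directive is LimitNOFILE:

```ini
# Hypothetical /etc/systemd/system/rclone-storj.service
[Unit]
Description=rclone mount of waterbear:mybucket
After=network-online.target

[Service]
Type=simple
# Raise the per-process open-file limit (the default soft limit is often 1024).
LimitNOFILE=65536
ExecStart=/usr/bin/rclone mount waterbear:mybucket %h/storj --vfs-cache-mode writes
ExecStop=/bin/fusermount -u %h/storj
Restart=on-failure

[Install]
WantedBy=default.target
```

After creating the unit, `systemctl daemon-reload` and `systemctl start rclone-storj` would apply the limit to the mount process only, without touching the global defaults.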


Yes, Stefan, thank you!