The out-of-the-box configuration of Nextcloud and PHP is sufficient for basic use, where only a few apps are installed and a handful of sub-100MB files are uploaded at a time.

However, this starts to fail once you set up external storages and external app integrations (GitHub, Reddit, Twitter, etc.) and try to upload multi-gigabyte files.

Outside of setting up caching (Redis/APCu), there are a few things that can be done to improve PHP's handling of long-running IO operations.


1) Increase PHP's execution time and Nginx max body size

This is probably the single most important change you can make. With the default configuration, I noticed that most of my large uploads to Nextcloud were failing with PHP timeout errors. The relevant PHP and Nginx timeouts default to 60 seconds or less, which is nowhere near enough to receive and process a large file upload.

Depending on how conservative you want to be, you can increase PHP's max_execution_time and max_input_time to either 1200 seconds or 3600 seconds in php.ini.
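For reference, the corresponding php.ini lines (using the more generous 3600-second value) would look like this:

max_execution_time = 3600
max_input_time = 3600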

You'll also need to configure the timeout and client_max_body_size parameters in Nginx.

Add the following to your server block in Nginx:

    # Increase proxy and FastCGI timeouts for long-running requests
    proxy_read_timeout 600;
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    send_timeout 600;
    fastcgi_read_timeout 600;

    # Remove the upload size limit (0 = unlimited) and use smaller FastCGI buffers
    client_max_body_size 0;
    fastcgi_buffers 64 4K;
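After editing the server block, check the syntax and reload Nginx (commands assume a systemd-based setup):

    # validate the new configuration, then reload without dropping connections
    sudo nginx -t
    sudo systemctl reload nginx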


2) Increase PHP's memory limit

This one is quite a common piece of advice. I find that the recommended value of 512MB is enough even when copying very large files to an external S3 storage backend.
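The value is set via memory_limit in php.ini:

memory_limit = 512M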


3) Increase the number of PHP-FPM workers

How you configure this depends heavily on the resources you have available.

If you have the RAM to spare but not enough CPU resources, you should set a static pool size. This reduces the overhead of having to dynamically spawn new PHP-FPM workers during a traffic spike.

On a box with 2 CPU cores and 4GB of RAM, I find that the following pool configuration works quite well:

pm = static
pm.max_children = 36

Increasing max_children further might help if you have more CPU cores.

If you're on a host with better CPU performance but low on RAM (<2GB), setting a dynamically sized pool might help:

pm = dynamic
pm.max_children = 36
pm.start_servers = 12
pm.min_spare_servers = 1
pm.max_spare_servers = 12

This starts 12 PHP workers and allows the pool to grow to up to 36 workers depending on traffic. It saves memory but introduces CPU overhead from spawning new worker processes under load.
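Whichever mode you choose, max_children should be sized against how much memory your workers actually use. A rough way to measure that (assuming the FPM processes show up under the name php-fpm; on some distros the name is versioned, e.g. php-fpm8.0):

# average resident memory per PHP-FPM process, in MB
ps -o rss= -C php-fpm | awk '{sum+=$1; n++} END {if (n) printf "%d processes, avg %.0f MB each\n", n, sum/n/1024}'

Divide the RAM you're willing to give PHP by that average to get a sensible max_children.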


4) Enable JIT and tune OpCache

JIT compilation was introduced with PHP 8, so this step only applies if you're running PHP 8.0 or newer.

Add the following to the bottom of your php.ini to enable JIT and further OpCache optimisations:

opcache.enable=1
opcache.enable_cli=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
opcache.huge_code_pages=1
opcache.enable_file_override=1 
opcache.jit_buffer_size=128M
opcache.jit=1255
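Restart PHP-FPM for the changes to take effect (the service name varies by distro and PHP version). Since opcache.enable_cli is on, a quick way to check whether JIT is active is from the CLI:

php -r 'var_dump(opcache_get_status()["jit"]["enabled"]);'

If everything is picked up, this should print bool(true).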


5) Configure Redis and APCu

Nextcloud can make use of both APCu and Redis at the same time. The Nextcloud docs recommend the following in config.php if you have both available:

'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',

Configuring both a local and distributed cache will allow Nextcloud to look up cache entries in the faster APCu backend before going out to Redis over the network.

The 'memcache.locking' directive will make Nextcloud use Redis as the file locking backend instead of the database, which greatly speeds up uploads of folders with many files.
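For the Redis backends to work, config.php also needs the connection details; a minimal example assuming Redis is listening locally on the default port:

'redis' => [
    'host' => 'localhost',
    'port' => 6379,
],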
