Nginx + php-fpm – Each php-fpm process 70–100% CPU when running

amurrell asked:

I have a situation in which the following is taking place:

  • We are on a Linode VPS with 8 cores, 8 GB of RAM, and a 2.6 GHz CPU, running nginx + php-fpm, and our CPU usage graphs are extremely high (we don’t want to be such a bad VPS neighbor).

  • We have fewer than 100 users on the site at a time, which makes the high CPU usage all the more embarrassing.

  • We are using a little-known, possibly CPU-intensive, questionably written PHP framework instead of a well-known, well-documented, well-crafted platform like WordPress or Drupal, for which there is plenty of documentation (and plugins) covering PHP caching on an nginx + php-fpm stack.

  • Thus, we have about 6 open php-fpm processes that, when running, individually consume large amounts of CPU (30%+, and often near 99%), and I haven’t the slightest idea how to stop them from using so much. I can’t tell which PHP scripts are causing these spikes because they are happening all the time; usually only 1 or 2 processes are running, but when all 6 run we max out all 8 CPUs. (See the slowlog sketch after this list for one way to identify the offending scripts.)

  • My pool.d/www.conf file has the following settings:

    pm = dynamic
    pm.max_children = 10
    pm.start_servers = 4
    pm.min_spare_servers = 2
    pm.max_spare_servers = 6
    
  • We chose this ^ setup because, as I interpret it, our memory situation is actually excellent (htop shows 472 MB of 7000+ MB used, no swapping, etc.), so we could handle many more processes and work through the request queue faster. Unfortunately, since each process hammers the CPU when it runs, adding processes drives our CPU through the roof, so we can’t run enough of them.

  • The question: what on earth can we do to reduce php-fpm’s per-process CPU usage so that we can raise the settings in that pool conf file? And yes, /var/log/php5-fpm.log is telling us to increase pm.max_children and to adjust/increase our min/max/start servers, but doing so makes our load average crazy, as stated above. How can we do this without necessarily using a cache, and what are our options?

  • My idea? I’ve read about using cpulimit to ensure that no process takes more than an allotted share of CPU, but would that slow things down to the point of being unusable, or would it let us safely run more processes? I have also thought about running two pools: one for our customer-facing website (what customers experience) and another for the backend, whose time-consuming reports currently drag down the customer-facing site when they are run. (A sketch of a two-pool setup follows this list.)

  • I have spent a few days researching and googling this topic, and it is difficult because everyone’s situation is so specific to their system. Being on such an obscure, possibly poorly written framework makes it hard to find a solution, and we can’t just scrap the framework yet, so I have to find a solution of some sort.
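
One way to identify the scripts behind these spikes is php-fpm’s slow log, which writes a PHP backtrace for any request that runs longer than a threshold. A minimal sketch for the pool file; the log path and the 5s threshold are illustrative assumptions, not values from the question:

    ; pool.d/www.conf – log a backtrace for any request running longer than 5s
    slowlog = /var/log/php5-fpm.slow.log   ; hypothetical path
    request_slowlog_timeout = 5s           ; only takes effect when slowlog is set

Watching that log during a spike shows which scripts, and which functions within them, the busy children are stuck in.
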
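On the two-pool idea: since memory is plentiful but CPU is the binding constraint, it can make sense to cap the total child count near the core count and give the backend/report traffic its own, smaller pool so it cannot starve the customer-facing site. A rough sketch; the pool names, socket paths, users, and child counts are assumptions for illustration, and nginx would need a separate location block pointing report URLs at the second socket:

    ; pool.d/www.conf – customer-facing pool
    [www]
    user = www-data
    group = www-data
    listen = /var/run/php5-fpm-www.sock
    pm = dynamic
    pm.max_children = 6        ; keep total children across pools near the 8 cores
    pm.start_servers = 4
    pm.min_spare_servers = 2
    pm.max_spare_servers = 6

    ; pool.d/reports.conf – backend pool, deliberately small
    [reports]
    user = www-data
    group = www-data
    listen = /var/run/php5-fpm-reports.sock
    pm = dynamic
    pm.max_children = 2        ; at most 2 cores ever go to long-running reports
    pm.start_servers = 1
    pm.min_spare_servers = 1
    pm.max_spare_servers = 2
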


UPDATE: I have implemented memcache to store PHP sessions, because the framework relies heavily on user sessions and our employees often use several tabs at a time, each checking back against the session to confirm abilities / user data / etc. I am hoping to see some performance gain from this; feel free to comment on it. I’ll see how it goes tomorrow when we get through our higher-volume peak times.
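
For reference, a minimal sketch of what that session handoff looks like in php.ini, assuming the memcache PECL extension (the separate memcached extension uses a slightly different save_path format) and a memcached instance on the default local port:

    ; php.ini – store PHP sessions in memcached instead of on-disk files
    session.save_handler = memcache
    session.save_path = "tcp://127.0.0.1:11211"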

My answer:


You are running opcode caching, right?

It used to be APC that was the go-to here, but it has been a buggy piece of software for quite a while and has been superseded by Zend Opcache, which has shipped as part of PHP since 5.5 and has a backport in PECL for 5.3 and 5.4.
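
For illustration, a minimal sketch of enabling it in php.ini; the values are common starting points, not tuned numbers:

    ; php.ini – enable Zend Opcache (bundled with PHP 5.5+)
    ; on PHP 5.3/5.4: pecl install zendopcache, then load it via zend_extension
    opcache.enable=1
    opcache.memory_consumption=128      ; MB of shared memory for compiled bytecode
    opcache.interned_strings_buffer=8
    opcache.max_accelerated_files=4000  ; raise if the framework ships more PHP files
    opcache.revalidate_freq=60          ; seconds between checks for changed files

With an opcode cache, PHP stops re-parsing and re-compiling every script on every request, which is often the single biggest CPU win on a php-fpm box.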


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.