PHP Worker Performance Benchmarking and Test Results

At Pagely, we pride ourselves on providing the best possible solutions for our customers. Sometimes, that requires dedicating significant research and development to discovering what truly works best. Whenever possible, we want to take a data-backed approach.

That’s why we’ve run several benchmarks to determine how PHP workers impact a site’s performance. By running these benchmarks, we can ensure that our customers can get the maximum amount of value out of their hosting environment.

So without further ado, let’s take a look at our tests and what we found!

Our Testing Environment

First, let’s talk about the environment that we tested on. To ensure that our tests can be replicated and improved upon, we’ve performed them on a standard Amazon EC2 instance that anyone can spin up. In addition, we’ve also made the entirety of our benchmarking code available on GitHub.

Our PHP Test Scripts

As anyone who’s done any benchmarking knows, a primary concern is eliminating any potential noise within the tests. An excellent way of testing PHP worker behavior is to have the worker perform tasks that are heavy on CPU, without impacting things like disk I/O or network latency.

To accurately benchmark PHP worker behavior, our test code is a simple PHP script that encrypts and decrypts a string several thousand times to perform CPU-heavy activity. Here’s the gist of how it works:

// Set up the cipher inputs (the values here are illustrative).
$string    = 'a string to encrypt';
$method    = 'AES-256-CBC';
$key       = openssl_random_pseudo_bytes( 32 );
$iv        = openssl_random_pseudo_bytes( openssl_cipher_iv_length( $method ) );
$times_run = 0;

// Encrypt and decrypt the string 50,000 times.
while ( $times_run < 50000 ) {
    $encrypted = openssl_encrypt( $string, $method, $key, 0, $iv );
    $decrypted = openssl_decrypt( $encrypted, $method, $key, 0, $iv );
    $times_run++;
}


Our Benchmarking Environment

On the client side, we wanted to eliminate noise from outbound requests while running tests that accurately reflect different CPU core counts. For this, we wrote a shell script that runs the k6 benchmarking tool against various Docker container configurations residing on the server.

By using Docker, we're able to run a single script that utilizes different numbers of CPU cores and runs benchmarks against each core count. Just like our PHP code, it's available on GitHub for you to look through. Here's how we're doing it:

#!/bin/bash
mkdir -p reports

# "num-cores cpuset"
for row in "1 0" "2 0-1"
do
    set -- $row
    cores=$1
    cpuset=$2
    for worker in {1,2,8,50,100,200}
    do
        pworker=$(printf "%03d" $worker)
        pcores=$(printf "%02d" $cores)
        file=reports/${pcores}core-${pworker}worker.txt
        json=reports/${pcores}core-${pworker}worker.json
        if [[ ! -f $file ]]
        then
            ./run-php.sh $cpuset $worker $file
            ./run-bench.sh 3 $worker $json >> $file
        fi
    done
done
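One subtlety in the script above: `set -- $row` relies on word splitting to break each "num-cores cpuset" pair into positional parameters, so `$1` becomes the core count and `$2` the cpuset. In isolation:

```shell
# Word splitting turns the string "2 0-1" into two positional parameters.
row="2 0-1"
set -- $row
echo "cores=$1 cpuset=$2"
```

This prints `cores=2 cpuset=0-1`, which is then fed to the helper scripts.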


Our Results

When processing our results, we needed to eliminate factors like network latency. Thanks to the k6 benchmarking tool, we were able to do that with ease by looking at http_req_waiting times and using the following statistics:

  • Average response time
  • Minimum response time
  • Median response time
  • Maximum response time
  • p90 (maximum response time for the fastest 90% of users)
  • p95 (maximum response time for the fastest 95% of users)
  • Requests per second
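As a side note on how percentiles like p90 are derived: sort all recorded response times and take the value at the 90% rank. Here's a simplified, nearest-rank sketch (the sample timings are made up, and k6 itself interpolates between ranks):

```shell
# Simplified p90: sort the timings ascending and pick the value at the
# 90th-percentile rank. Sample values are illustrative only.
times="205 198 310 620 190 250 480 199 305 1030"
echo $times | tr ' ' '\n' | sort -n | awk '
  { v[NR] = $1 }
  END {
    i = int(0.90 * NR); if (i < 1) i = 1
    print "p90:", v[i]
  }'
```

With these ten sample values, the nearest-rank p90 is 620, the second-largest timing; 90% of requests finished at or below it.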

Requests Per Second

Within our tests, we’re sending a number of virtual users that match the number of cores being utilized by our testing environment. For example, when running a test against an environment with 4 CPU cores, our benchmark uses 4 simultaneous virtual users. These users perform a request, wait for a response, then immediately send another request. They’ll continue doing this for our testing duration of 60 seconds.

Here’s a graph of the results:

As you can see, what matters most is the balance between PHP workers and CPU cores. Without enough PHP workers to handle the influx of traffic, requests can't be served efficiently. In contrast, past a certain point, adding more PHP workers has a minimal impact on the number of requests per second the server can handle, and beyond that point, adding even more PHP workers actively hurts the site's performance.

Looking even further at the data we’ve collected, we can quickly determine that the optimal number of PHP workers varies based on the number of CPU cores being utilized.

Of course, with different workloads (different levels of code optimization) or a finer sweep of PHP worker counts, we might have found a slightly more optimal worker pool. Still, overall, it's a pretty good starting point.

Looking deeper into how the number of PHP workers impacts the number of requests per second that our test environment could handle, we see some interesting data. For example, let’s look at our 32 core test:

PHP Workers    Requests Per Second
1              4.99999187
2              10.86665914
8              41.84993377
50             160.8998032
100            162.8164824
200            160.1164952
400            159.6665389

As you can see here, the number of requests per second that can be handled steadily increases as we increase the PHP workers, until it reaches 50 workers. At 100 PHP workers, we see a slight increase; then, at 200 and 400 workers, we see performance declining.

This is a common trend across all of our tests: after a certain point, adding more PHP workers decreases overall performance. While the impact here is relatively small, it can be a major problem for sites that don't have a dedicated resource pool.

Many shared/cloud WordPress hosts put thousands of customers on the same server. Since the worker pool has to be enormous to serve them all (and such hosts are always shouting about how many PHP workers they have), every site on the server is negatively impacted by the worker pool as a whole.

Measuring Response Times

While the overall capacity of requests per second is a reasonable determination of how many users a site can support simultaneously, it's not the only metric at play here. In our tests, we also measured response times based on how many PHP workers are active in environments with different core counts.

Here’s what our chart for a 32-core environment looks like:

As you can see here, there is a measurable increase in response times as the number of PHP workers increases. While our requests-per-second statistics change only slightly between 50 PHP workers and 400 PHP workers, response time tells an entirely different story.

Let’s take a look at the raw data to look a bit closer at what’s happening:

Worker Count   Average (ms)   Minimum (ms)   Maximum (ms)   Requests Per Second
1              199.39         198.50         208.07         4.99
2              183.67         177.27         188.35         10.86
8              190.71         175.81         197.59         41.84
50             309.70         189.69         1034.55        160.89
100            610.20         191.03         2903.50        162.81
200            1230.40        194.00         3937.65        160.11
400            2422.41        197.82         10478.34       159.66

When running a static count of 50 PHP workers across 32 cores, we see that we can make 160 requests per second, with an average response time of 309ms. When we increase that worker pool to 100 PHP workers, we can handle an additional 2 requests per second, but our average response time increases to 610ms. That's a 97% increase in the time it takes PHP to handle a request, in exchange for serving only about 1.2% more users!
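To double-check that trade-off, here's the arithmetic on the table's own numbers, using awk as a calculator (the inputs are taken straight from the 50- and 100-worker rows):

```shell
# Going from 50 to 100 workers: avg response time 309.70ms -> 610.20ms,
# throughput 160.89 -> 162.81 requests per second.
awk 'BEGIN {
  rt  = (610.20 - 309.70) / 309.70 * 100   # % increase in response time
  rps = (162.81 - 160.89) / 160.89 * 100   # % increase in throughput
  printf "response time: +%.0f%%, throughput: +%.1f%%\n", rt, rps
}'
```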

As we increase the worker count even further, we see that while the number of requests per second stays roughly the same between 50 and 200 PHP workers, our average response time increases by almost 300%.

From this data, we also see that while a smaller pool of PHP workers handles less traffic, response times drop. At 8 PHP workers, we're only able to process roughly 41 requests per second, but the site becomes much faster, responding in around 190ms.
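That relationship is roughly what Little's Law predicts: throughput ≈ in-flight requests ÷ average response time. Treating the 8 workers as 8 concurrently in-flight requests (an approximation on our part, since queue depth wasn't measured directly), awk as a calculator again:

```shell
# 8 in-flight requests, each taking ~190.71ms on average:
awk 'BEGIN { printf "%.1f req/s\n", 8 / 0.19071 }'
```

The predicted 41.9 requests per second lines up closely with the 41.84 we measured, which is why adding workers beyond the CPU's capacity mostly stretches response times rather than adding throughput.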

Putting It All Together

After running our various benchmarks, we have plenty of interesting data that anyone can use to better optimize their WordPress sites. Our benchmarks show that the number of available PHP workers on your server does indeed matter quite a bit, just not in the way you might have imagined.

Now that you've seen the data, we'd also like to stress the importance of running the number of PHP workers that is appropriate for your WordPress site, rather than guessing or taking a cookie-cutter approach. More PHP workers do not necessarily mean better performance, and a worker count that isn't tuned to your site's actual needs can be catastrophic for it.

If you’re attempting to tune your site’s performance to handle more traffic or get faster page load times, you’ll want to do so in a way that’s specific to your website and the server on which it resides.

Generally speaking, if you want to increase the number of users that your site can handle simultaneously, you’ll want to increase your PHP worker count. If you want your site to be faster, decrease the worker count. If you want a mix of the two, you can always use dynamic limits with a minimum and maximum worker count. Having a dynamic worker count allows you to handle spikes in traffic while offering faster page load times when you’re receiving an average amount of traffic.
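In PHP-FPM, for example, dynamic limits like these live in the pool configuration. Here's a sketch of such a pool (the numbers are illustrative starting points, not recommendations, and should be tuned to your own site and hardware):

```ini
; PHP-FPM pool config sketch: a dynamic worker pool with a floor and a ceiling.
pm = dynamic
pm.max_children = 50      ; hard ceiling on simultaneous workers
pm.start_servers = 8      ; workers spawned when the pool starts
pm.min_spare_servers = 4  ; keep at least this many idle workers ready
pm.max_spare_servers = 16 ; reap idle workers beyond this count
```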

Want to share your thoughts? Have an idea for performing even better tests? Want to see something else benchmarked? Let us know in the comments below.

Comments

1. It depends on the cores. If you have 8 cores and start 100 workers, then of course performance decreases: with only 8 cores there is no real parallelism anymore, just a lot of time-intensive context switches on the CPU. You should be very careful about starting more workers than you have physical CPUs. Hyperthreading isn't real parallelism either; those are virtual cores, and much of it is marketing.