Benchmarking Serverless Laravel vs Lumen
In this post, I’m comparing the performance of Laravel and Lumen for building a serverless function running on AWS Lambda.
Laravel is a well-known and elegant PHP framework written by Taylor Otwell. To date, Laravel is the backend framework with the most stars on GitHub.
Lumen is a slimmed-down version of Laravel, with fewer package requirements, so it’s smaller, faster and leaner than the full framework. Lumen is built for microservices, rather than user-facing applications.
Bref, written by Matthieu Napoli, is a tool that makes it very easy to deploy serverless PHP applications to AWS and run them on AWS Lambda. Bref comes as an open source Composer package and includes Laravel and Symfony support.
Test Scenario
I created a Laravel and a Lumen project. In both, I added a very simple route returning “Hello, world!”. In this way, I’m just comparing the framework overhead, and nothing else.
I’m using the Bref php-74-fpm layer, running the Lambda functions with 1024 MB of memory in the eu-west-1 region.
Creating the Laravel Test
Let’s create a Laravel project and add Bref following the instructions here:
laravel new laravel-test
composer require bref/bref bref/laravel-bridge
php artisan vendor:publish --tag=serverless-config
cp .env.example .env
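The vendor:publish command above copies a serverless.yml file into the project root. For reference, the configuration used in this benchmark looks roughly like the sketch below: the region and memorySize lines reflect the setup described earlier (1024 MB is also the Serverless Framework default), while the rest mirrors the structure of the generated file, so treat it as an outline rather than a verbatim copy:
service: laravel

provider:
    name: aws
    region: eu-west-1        # region used for this benchmark
    runtime: provided
    memorySize: 1024         # in MB; stated explicitly here, but also the default

functions:
    web:
        handler: public/index.php
        timeout: 28          # API Gateway times out after 29 seconds
        layers:
            - ${bref:layer.php-74-fpm}    # the Bref PHP-FPM layer
        events:
            - http: 'ANY /'
            - http: 'ANY /{proxy+}'

plugins:
    - ./vendor/bref/bref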
Now let’s prepare the benchmark: set APP_DEBUG=false in the .env file, and add the following route to the file routes/api.php:
Route::get('/test', function () {
    return "Hello, world!";
});
and run these commands to deploy:
php artisan config:clear
composer install --optimize-autoloader --no-dev
sls deploy
Creating the Lumen Test
Although Lumen is not covered by the Bref documentation, deploying a Lumen application to Lambda is straightforward. Here are the steps:
If you haven’t yet done so, install Bref as described here.
Open a command prompt and run:
composer create-project --prefer-dist laravel/lumen lumen-test
cd lumen-test
cp .env.example .env
Download the serverless.yml file into the project root directory, e.g.:
wget https://raw.githubusercontent.com/brefphp/laravel-bridge/master/config/serverless.yml
Edit the serverless.yml file and change the line service: laravel to service: lumen, or any other service name that you like.
Edit the .env file and change LOG_CHANNEL=stack to LOG_CHANNEL=stderr.
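After these two edits, the affected lines read:
# serverless.yml
service: lumen

# .env
LOG_CHANNEL=stderr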
Let’s prepare the benchmark: set APP_DEBUG=false in the .env file and add the following route to the file routes/web.php:
$router->get('/test', function () {
    return "Hello, world!";
});
Now you’re ready to deploy! Simply run:
composer install --optimize-autoloader --no-dev
sls deploy
Running the Benchmark
Once the Bref deployment has completed for both Laravel and Lumen, you should get output like the following:
...
endpoints:
ANY - https://abcdefghi.execute-api.eu-west-1.amazonaws.com/dev
ANY - https://abcdefghi.execute-api.eu-west-1.amazonaws.com/dev/{proxy+}
...
Just copy the first of the two URLs from each deployment, i.e. the Laravel URL and the Lumen URL. You can now invoke the two functions by opening the URLs with your favorite tool (browser, curl, Postman, …).
Remember that:
- you must append /api/test to the Laravel URL,
- you must append /test to the Lumen URL.
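For example, with curl (using the placeholder endpoints above; substitute the URLs from your own deployments):
curl https://abcdefghi.execute-api.eu-west-1.amazonaws.com/dev/api/test    # Laravel
curl https://abcdefghi.execute-api.eu-west-1.amazonaws.com/dev/test        # Lumen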
I invoked each service 10 times and collected the duration of each invocation from the CloudWatch logs.
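If you want to reproduce this, a minimal sketch is a small shell loop plus the Serverless Framework logs command. It assumes the function is named web, as in the serverless.yml above, and uses the Laravel path (switch to /test for Lumen):
# call the endpoint 10 times
for i in $(seq 1 10); do
    curl -s https://abcdefghi.execute-api.eu-west-1.amazonaws.com/dev/api/test > /dev/null
done

# fetch the REPORT lines (Duration, Billed Duration, Max Memory Used) from CloudWatch
sls logs -f web | grep REPORT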
Results
Here are the results for Lumen:
Duration: 140.17 ms Billed Duration: 500 ms Memory Size: 1024 MB Max Memory Used: 87 MB Init Duration: 313.36 ms
Duration: 2.58 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 87 MB
Duration: 2.64 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 87 MB
Duration: 2.56 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 87 MB
Duration: 2.56 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 87 MB
Duration: 2.52 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 87 MB
Duration: 2.44 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 87 MB
Duration: 2.47 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 87 MB
Duration: 2.68 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 87 MB
Duration: 2.53 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 87 MB
Here are the results for Laravel:
Duration: 455.10 ms Billed Duration: 800 ms Memory Size: 1024 MB Max Memory Used: 100 MB Init Duration: 341.48 ms
Duration: 8.12 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 100 MB
Duration: 7.96 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 100 MB
Duration: 7.82 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 100 MB
Duration: 8.28 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 100 MB
Duration: 7.64 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 100 MB
Duration: 8.06 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 100 MB
Duration: 7.68 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 100 MB
Duration: 7.81 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 100 MB
Duration: 8.04 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 100 MB
The first line of each group corresponds to a Lambda cold start, while the other lines correspond to invocations of an already-initialized Lambda (warm starts). More information on cold and warm starts can be found, for example, here.
Here is a summary of my benchmarks (the warm durations are averaged over the nine warm invocations above):
Framework   Cold start duration   Init duration   Avg. warm duration   Max memory used
Lumen       140.17 ms             313.36 ms       2.55 ms              87 MB
Laravel     455.10 ms             341.48 ms       7.93 ms              100 MB
Conclusions
First of all, let me say that I’m not particularly surprised by the results: Laravel provides a much broader range of services than Lumen, so some execution overhead is to be expected.
Another consideration is that in a real-world application the difference between the two frameworks would be less significant, because the overhead of connecting to a database or a cache service could be much higher. For example, if reading from or writing to a database adds about 200 ms, the resulting response times (roughly 207.93 ms for Laravel vs 202.55 ms for Lumen, i.e. 200 ms plus the average warm durations above) would be practically indistinguishable.
However, if you are developing low-latency micro-services, where milliseconds matter, and you don’t need the services provided by Laravel, then Lumen might be worth considering.
If you happen to know how to get different results by applying further optimizations, please contact me.
I hope you find this article helpful, thank you for reading!