Running a Serverless Lumen REST API on AWS Lambda

Pietro Iglio
Jan 13, 2019

In this post I’ll go through the process of setting up a Lumen-powered API running as an AWS Lambda function. I’ll use the recently introduced Lambda support for custom runtimes, so that the Lambda is pure PHP, with no need for a NodeJS proxy.

Lumen is a “micro-framework”, meaning it’s a smaller, faster, leaner version of Laravel, which is the full web application framework. Lumen has the same foundation as Laravel, but it is built for microservices.

AWS Lambda is a service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. Although Lambda does not support PHP natively, it has recently been extended with the Lambda Runtime API and Layers capabilities, which make it possible to build a native PHP runtime for Lambda functions.

In this post, I’ll show you how to build a PHP runtime to start powering your PHP applications on AWS Lambda, and how to create a simple serverless Lumen REST API that queries an AWS RDS PostgreSQL database instance. AWS API Gateway intercepts REST requests from the outside world and routes them to AWS Lambda functions, as shown in the following diagram:

Throughout the rest of this article I am assuming that you are familiar with basic AWS and Lumen concepts.

Part I: PHP Runtime

First of all, you need a PHP runtime that is suitable for running a Lumen API. Once you have created your runtime, you are going to reuse it for all APIs.

You can build a runtime completely from scratch, as explained here. However, I decided to start from the Stackery PHP Lambda Layer.

UPDATED 13 Feb 2019: at the time I wrote this story, Stackery did not support mbstring and PDO, which are requirements for Lumen 5.7. So, I forked the original Stackery repository to make my own version. Stackery finally accepted my pull request, and they now offer support for PHP 7.3 and 7.1 with all Lumen requirements. Thus, you can skip this section and update your template.yaml file to use one of the two ARNs that identify the PHP layers, e.g.:

Layers:
- !Sub arn:aws:lambda:${AWS::Region}:887080169480:layer:php73:2
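For context, here is a minimal sketch of how the layer ARN fits into an AWS::Serverless::Function resource in template.yaml. The resource name, handler, and CodeUri below are illustrative assumptions; refer to the actual template.yaml in the sample repository:

```yaml
Resources:
  LumenApi:                       # hypothetical resource name
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/php/lumen
      Handler: public/index.php   # entry point served by the PHP layer
      Runtime: provided           # custom runtime supplied by the layer
      MemorySize: 512
      Timeout: 30
      Layers:
        - !Sub arn:aws:lambda:${AWS::Region}:887080169480:layer:php73:2
      Events:
        Api:
          Type: Api
          Properties:
            Path: /{proxy+}       # forward every path to Lumen's router
            Method: ANY
```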

Otherwise, you can keep reading to learn how to build your own custom layer.

You have two options here:

  1. use a pre-built PHP runtime that I have created;
  2. build the PHP runtime using Docker.

1. Use a pre-built runtime

Just download the zip file from this link to your local disk.

2. Build the PHP runtime

You can skip this step if you decided to download the pre-built runtime.

You need to install Docker first. Git is required as well.

If you are running on a Windows workstation, it is recommended to turn off CR/LF conversion, otherwise your bash scripts won’t work:

git config --global core.autocrlf false

Make sure the Docker daemon is running. Clone the repository and run “make”:

git clone https://github.com/code-runner-2017/php-lambda-layer.git
cd php-lambda-layer
make

If no error occurs, a file named php71.zip is created. This is the same file that you can download from the previously supplied link.

Updated: the original Stackery layer supports PHP 7.1 only. I have added support for PHP 7.2 and 7.3 as well. Just type make php72.zip or make php73.zip if you want to use a newer PHP version. PHP 7.1 is still the default.

For Windows users using Git Bash: if you need the Make utility, download it here. Unzip it and copy the make.exe file to the following directory:

C:\Program Files\Git\mingw64\bin

Creating a new Layer

You have either downloaded or built the new PHP runtime. Now you need to upload the php71.zip package to your AWS account.

Log into AWS, click on “Services”, and choose “Lambda”. Click on “Layers” in the left menu, then click the “Create layer” button on the right. Enter the layer name (e.g. MyPHPLayer) and a description (e.g. “Stackery PHP runtime with mbstring support”), and upload the zip package.

Write down the Version ARN, which identifies your runtime. For example:

arn:aws:lambda:eu-west-1:5320652344506:layer:php7_1-stackery_mbstring:1
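If you prefer the command line, a layer can also be published with the AWS CLI. This is a sketch: it assumes the CLI is configured with your credentials and that php71.zip is in the current directory; it prints the Version ARN.

```shell
aws lambda publish-layer-version \
  --layer-name MyPHPLayer \
  --description "Stackery PHP runtime with mbstring support" \
  --zip-file fileb://php71.zip \
  --query LayerVersionArn --output text
```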

That completes the first part: you have your PHP runtime and you are ready to deploy a Lumen REST API.

Part II: Creating the Sample Lumen REST API

I have prepared an example Git repository that you can use as a starter template:

git clone https://github.com/code-runner-2017/lambda-lumen-test.git
cd lambda-lumen-test/src/php/lumen
composer update

You have to edit the template.yaml file and paste the Version ARN into the Layers: section (the line starting with !Sub). You can also change other parameters such as MemorySize, Timeout, etc.

You can edit the php.ini file to enable/disable the PHP modules that you need. For instance, if you want to try the provided example that runs a query against an RDS database, your php.ini should have the following extensions enabled:

extension=curl.so
extension=json.so
extension=mbstring.so
extension=pdo.so
extension=pdo_pgsql.so

The src/php/lumen folder contains a regular Lumen 5.7 project. In addition:

  • the composer.json file already contains the AWS SDK, as it is very likely that your Lambda is going to interact with the rest of the AWS ecosystem (DynamoDB, S3, …). If you don’t need it, you can remove that entry.
  • I have configured Monolog not to write to the file system, because the Lambda file system is read-only (apart from /tmp) and the function would fail. You must create your own .env file by copying the .env.example file. It is important that it contains this setting: LOG_CHANNEL=errorlog
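Putting it together, a minimal .env for Lambda might look like this (the values are placeholders; start from .env.example and adjust):

```
APP_ENV=production
APP_DEBUG=false
APP_KEY=your-random-32-character-string
LOG_CHANNEL=errorlog
```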

I have created two test REST endpoints:

  • /hello that returns “hello world!”;
  • /test that returns a JSON object. You can optionally pass a name parameter on the URL (e.g. ?name=Pietro); it is returned in the JSON response, just to show that URL params can be read.
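For reference, the two routes can be sketched roughly like this in routes/web.php (an approximation; the repository’s actual code may differ):

```php
$router->get('/hello', function () {
    return 'hello world!';
});

$router->get('/test', function (Illuminate\Http\Request $request) {
    // Return the optional ?name=... query parameter in the JSON response.
    return response()->json(['name' => $request->query('name', 'world')]);
});
```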

You can test locally before you create the Lambda function:

# Run this from the lambda-lumen-test/src/php/lumen folder
php -S localhost:8000 -t public

Try to open the http://localhost:8000/hello URL to check that it works.
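If you prefer the command line, a quick smoke test might look like this (a sketch; it assumes you are in the lambda-lumen-test/src/php/lumen folder and port 8000 is free):

```shell
# Start the built-in PHP server in the background, hit both endpoints, then stop it.
php -S localhost:8000 -t public &
SERVER_PID=$!
sleep 2
curl -s http://localhost:8000/hello
curl -s "http://localhost:8000/test?name=Pietro"
kill $SERVER_PID
```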

You can edit the routes/web.php file to create your own endpoints.

Creating the AWS Lambda

Now you are ready to upload your API to AWS to create the Lambda function.

You need the following command line tools:

  • the latest version of AWS CLI (Windows users can download it here)
  • AWS SAM (available here)

I am not going through the details of installing and configuring AWS CLI and SAM. I assume that you are familiar with them and that you have already configured AWS CLI with your account credentials.

Create an S3 bucket to hold the deployment package, choosing any name you like (referred to as yourbucketname below).
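For example, with the AWS CLI (bucket names are globally unique, so pick your own; the region is an example):

```shell
aws s3 mb s3://yourbucketname --region eu-west-1
```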

Now you are ready to package and deploy the Lambda function:

$ sam package --template-file template.yaml --output-template-file serverless-output.yaml --s3-bucket yourbucketname
$ sam deploy --template-file serverless-output.yaml --stack-name my-first-serverless-lumen-api --capabilities CAPABILITY_IAM

Replace yourbucketname with the name of your S3 bucket.

Testing the AWS Lambda

Log into your AWS console again and select “Services” > “Lambda” > “Functions”. Click on “my-first-serverless-lumen-api”, then click on “API Gateway” in the Designer view.

AWS Lambda Designer

Once you click on “API Gateway”, the API Endpoint section appears in the lower part of the above figure. It should be something like this:

https://ua12345ai.execute-api.eu-west-1.amazonaws.com/Prod/{proxy+}

Open that endpoint in your browser (or use curl, if you prefer the command line). Replace {proxy+} with “hello”; yourhostname below stands for the hostname of your API Endpoint:

https://yourhostname/Prod/hello

If that works, you should get a “hello world!” response from Lumen.

Next Steps

Now you are ready to use the whole AWS ecosystem. For example, you can use services such as:

  • DynamoDB (NoSQL)
  • ElastiCache (Redis/Memcache)
  • ElasticSearch
  • Aurora

and so on. Covering the benefits of AWS services is not within the scope of this post. Most of these services are serverless, like the Lambda service itself: they scale automatically and you are charged on a pay-per-use model. Therefore, you can set up an API that can serve a huge number of requests without setting up a cluster of servers.

Querying an RDS PostgreSQL instance

Finally, we are going to query an RDS PostgreSQL instance following the Lumen guidelines for database access. This example can easily be adapted to an RDS MySQL or Aurora instance.

First, create a public RDS PostgreSQL instance and write down the database host, name, user, and password. Connect to the DBMS using your favourite SQL client to create a “users” table and insert a record into it:

CREATE TABLE users (
  user_id SERIAL PRIMARY KEY,
  login_name VARCHAR(64) NOT NULL,
  email VARCHAR(64) NOT NULL
);

INSERT INTO users (login_name, email) VALUES ('pietro.iglio', 'igliop@gmail.com');

The PHP code to query the database is already included in the sample routes/web.php file:

$router->get('/testdb', function (Request $request) use ($router) {
    $users = app('db')->select("SELECT * FROM users");
    return response()->json($users);
});

Lumen takes the database name, host, and credentials from the environment. You might add them to your .env file, but I recommend setting them as Lambda environment variables. So, log into your AWS console again and select “Services” > “Lambda” > “Functions”. Click on “my-first-serverless-lumen-api” to open the Lambda page, then click on the Lambda function in the Designer. If you scroll down the page, you will find a section where you can insert your environment variables. Set DB_CONNECTION=pgsql, and set DB_HOST, DB_DATABASE, DB_USERNAME, and DB_PASSWORD according to your RDS configuration.
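The same variables can also be set from the command line. Note that the function’s physical name is generated by CloudFormation, so the name below is a placeholder (look it up in the Lambda console or with aws lambda list-functions); the database values are placeholders too:

```shell
aws lambda update-function-configuration \
  --function-name my-first-serverless-lumen-api-LumenApi-XXXX \
  --environment "Variables={DB_CONNECTION=pgsql,DB_HOST=mydb.example.eu-west-1.rds.amazonaws.com,DB_PORT=5432,DB_DATABASE=mydb,DB_USERNAME=postgres,DB_PASSWORD=secret}"
```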

Now you are ready to invoke the REST endpoint:

https://yourhostname/Prod/testdb

The output should be something like:

[
{
"user_id": 1,
"login_name": "pietro.iglio",
"email": "igliop@gmail.com"
}
]

Done!

Final Considerations

In this example, we have created a public instance of the RDS database. Be aware that this is not considered best practice from a security point of view. However, if you create the RDS database in a VPC, as recommended, you need to run the Lambda function in the VPC as well, and that has several drawbacks: for example, it increases the cold-start boot time to roughly 10 seconds. In addition, if you query your database from the Lambda function, keep in mind that Lambda scales quickly, so your database might receive hundreds of connections. A serverless database would be a much better fit for this scenario.

In future posts, I’ll explain how to use AWS serverless services from the Lumen API, so stay tuned. I hope you found this article helpful!
