Laravel Vapor: Deploying Serverless PHP Applications
Deploy Laravel on AWS Lambda with Vapor. Complete guide covering setup, cost analysis vs EC2, cold start mitigation, and when to choose serverless.
After 14 years of deploying PHP applications across every hosting configuration imaginable, I have watched serverless evolve from an interesting experiment to a production-ready deployment strategy. Laravel Vapor, the official serverless deployment platform for Laravel, has matured significantly since its 2019 launch. In 2026, it represents a compelling option for teams seeking automatic scaling, reduced operational overhead, and pay-per-use economics.
This guide provides a practical walkthrough of deploying Laravel applications on AWS Lambda using Vapor. I will cover the complete setup process, analyze real costs compared to EC2 hosting, share cold start mitigation strategies that actually work, and help you determine whether serverless is the right choice for your specific use case.
What is Laravel Vapor and How Does It Work?
Laravel Vapor is an auto-scaling, serverless deployment platform built specifically for Laravel applications. It deploys your application to AWS Lambda, Amazon's serverless compute service, while abstracting away the complexity of configuring Lambda functions, API Gateway, SQS queues, and other AWS services.
The Architecture Behind Vapor
When you deploy a Laravel application through Vapor, it creates a sophisticated AWS infrastructure:
HTTP Lambda Function: Handles all incoming web requests. API Gateway routes HTTP traffic to this function, which boots your Laravel application, processes the request, and returns a response.
CLI Lambda Function: Executes Artisan commands, processes queued jobs from SQS, and runs scheduled tasks. This separation allows your web-facing function to remain optimized for HTTP responses while background work scales independently.
Supporting Services: Vapor automatically configures or connects to:
- Amazon S3 for file storage and static assets
- Amazon SQS for queue processing
- Amazon RDS or Aurora for databases
- ElastiCache for Redis caching and sessions
- CloudFront for asset delivery
                    ┌─────────────────┐
                    │   CloudFront    │
                    │ (Static Assets) │
                    └────────┬────────┘
                             │
┌──────────────┐    ┌────────▼────────┐    ┌─────────────────┐
│   Clients    │───►│   API Gateway   │───►│   HTTP Lambda   │
└──────────────┘    └─────────────────┘    │ (Web Requests)  │
                                           └────────┬────────┘
                                                    │
                    ┌───────────────────────────────┼──────────────┐
                    │                               │              │
              ┌─────▼─────┐                  ┌──────▼──────┐ ┌─────▼─────┐
              │    SQS    │                  │   Aurora    │ │   Redis   │
              │  (Queue)  │                  │    (RDS)    │ │  (Cache)  │
              └─────┬─────┘                  └─────────────┘ └───────────┘
                    │
              ┌─────▼─────┐
              │CLI Lambda │
              │  (Jobs)   │
              └───────────┘
Why Vapor Instead of Raw Lambda
You could deploy Laravel to Lambda manually using Bref or the Serverless Framework. However, Vapor provides significant advantages. For applications with complex AWS infrastructure needs, combining Vapor with comprehensive AWS cost optimization strategies ensures you maximize both operational efficiency and budget control.
Laravel-Native Integration: Vapor understands Laravel's conventions. It automatically handles asset compilation, environment variables, database migrations, and queue configuration without manual serverless manifest files.
Managed Infrastructure: Vapor provisions and manages VPCs, security groups, IAM roles, and service connections. You focus on your application rather than AWS infrastructure.
Zero-Downtime Deployments: Vapor deploys to a new Lambda version and atomically switches traffic only after successful deployment. Failed deployments never affect production.
Dashboard and CLI: Monitor deployments, view logs, manage databases, and scale resources through an intuitive web interface or CLI commands.
Step-by-Step Setup Guide
Let me walk through deploying a Laravel application to Vapor from scratch. This process assumes you have an existing Laravel application ready for deployment.
Prerequisites
Before starting, ensure you have:
- A Laravel application (10.x or 11.x recommended)
- An AWS account with billing enabled
- A Laravel Vapor subscription ($39/month)
- Composer installed globally
Step 1: Install the Vapor CLI
Install the Vapor CLI globally using Composer:
composer global require laravel/vapor-cli
# Verify installation
vapor --version
Authenticate with your Vapor account:
vapor login
Step 2: Link Your AWS Account
In the Vapor dashboard, navigate to Team Settings and link your AWS account. Vapor requires IAM credentials with administrative access to provision resources. You can use either:
- An IAM user with programmatic access
- AWS SSO credentials
Vapor creates all infrastructure in your AWS account, giving you full visibility and control over resources.
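If you prefer to create the IAM user from the command line, a minimal sketch looks like the following (the user name vapor-deployer is just a placeholder; you can scope the policy down later if your security requirements demand it):
# Create a dedicated IAM user for Vapor (placeholder user name)
aws iam create-user --user-name vapor-deployer
# Attach administrative access so Vapor can provision resources
aws iam attach-user-policy \
    --user-name vapor-deployer \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Generate the access key pair to paste into the Vapor dashboard
aws iam create-access-key --user-name vapor-deployer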
Step 3: Initialize Your Project
From your Laravel project root, initialize Vapor:
vapor init
This command creates a vapor.yml configuration file and registers your project with Vapor. The initial configuration looks like this:
id: 12345
name: my-laravel-app
environments:
    production:
        memory: 1024
        cli-memory: 512
        runtime: php-8.3
        build:
            - 'composer install --no-dev'
Step 4: Configure Your Environment
Expand your vapor.yml with production-ready settings:
id: 12345
name: my-laravel-app
environments:
    production:
        domain: myapp.com
        memory: 1024
        cli-memory: 512
        runtime: php-8.3
        database: my-database
        storage: my-storage
        cache: my-cache
        gateway-version: 2
        warm: 10
        concurrency: 50
        build:
            - 'composer install --no-dev --optimize-autoloader'
            - 'php artisan event:cache'
            - 'php artisan route:cache'
            - 'php artisan view:cache'
            - 'npm ci'
            - 'npm run build'
        deploy:
            - 'php artisan migrate --force'
            - 'php artisan queue:restart'
    staging:
        domain: staging.myapp.com
        memory: 512
        cli-memory: 256
        runtime: php-8.3
        database: my-database-staging
        storage: my-storage
        cache: my-cache
        warm: 2
        build:
            - 'composer install --optimize-autoloader'
            - 'npm ci'
            - 'npm run build'
        deploy:
            - 'php artisan migrate --force'
Step 5: Create Supporting Infrastructure
Provision the database and cache through the Vapor CLI:
# Create a serverless Aurora database
vapor database my-database --serverless
# Create a Redis cache cluster
vapor cache my-cache
# Create an S3 storage bucket
vapor storage my-storage
For staging environments, you might use smaller resources:
vapor database my-database-staging --serverless --min-capacity=0.5
Step 6: Configure Environment Variables
Set environment variables through the Vapor dashboard or CLI:
# Set individual variables
vapor env:set production APP_KEY=base64:your-key-here
vapor env:set production DB_CONNECTION=mysql
vapor env:set production CACHE_DRIVER=redis
vapor env:set production SESSION_DRIVER=redis
vapor env:set production QUEUE_CONNECTION=sqs
# Or download the full environment file, edit it locally, and push it back
vapor env:pull production
vapor env:push production
Step 7: Deploy
Deploy your application:
vapor deploy production
Vapor executes the following sequence:
- Runs build hooks locally (composer install, npm build, etc.)
- Creates a deployment archive
- Uploads the archive to S3
- Creates new Lambda function versions
- Runs deploy hooks (migrations)
- Pre-warms containers based on the warm setting
- Switches traffic to the new deployment
A successful deployment takes 2-5 minutes depending on your application size and build complexity.
Step 8: Configure Your Domain
Point your domain to Vapor:
vapor domain myapp.com
Vapor creates an ACM certificate and configures CloudFront distribution. Update your DNS records as instructed to complete the setup.
Cost Comparison: Vapor vs Traditional EC2 Deployment
Understanding serverless economics is crucial for making an informed decision. Let me break down real-world costs across different traffic scenarios.
Vapor Pricing Components
Vapor costs include two categories:
Vapor Platform Fee: $39/month (or $399/year for 14% savings)
AWS Infrastructure Costs (paid directly to AWS):
| Service | Pricing Model |
|---|---|
| Lambda (HTTP) | $0.20 per 1M requests + compute time |
| Lambda (CLI/Queue) | $0.20 per 1M requests + compute time |
| API Gateway | $1.00 per 1M requests (v2) |
| Aurora Serverless v2 | $0.12 per ACU-hour |
| ElastiCache Serverless | $0.125 per GB-hour + ECPUs |
| S3 | $0.023 per GB + request fees |
| CloudFront | $0.085 per GB transfer |
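As a rough sanity check on the Lambda line items in the scenarios below: AWS bills Lambda compute at $0.0000166667 per GB-second (x86). A function configured with 1,024 MB that averages around 400 ms of billed duration works out to roughly 2,000,000 × 0.4 s × 1 GB × $0.0000166667 ≈ $13.30 of compute plus $0.40 in request fees for 2 million requests, which is in line with the Lambda figure in Scenario 2.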
Scenario 1: Low Traffic Application
Profile: Portfolio site or internal tool with 100,000 monthly requests
Vapor Platform: $39.00
Lambda Compute: $1.50
API Gateway: $0.10
Aurora Serverless (0.5 ACU): $43.80
ElastiCache Serverless: $15.00
S3 + CloudFront: $5.00
────────────────────────────────────
Total Monthly Cost: ~$105
Equivalent EC2 Setup:
- t4g.small instance: $12/month
- RDS t4g.micro: $13/month
- ElastiCache t4g.micro: $12/month
- Load Balancer: $16/month
- Total: ~$53/month
Verdict: For low-traffic applications with predictable usage, EC2 is more cost-effective. The Vapor platform fee alone exceeds basic EC2 hosting costs.
Scenario 2: Medium Traffic SaaS Application
Profile: B2B SaaS with 2 million monthly requests, moderate queue processing
Vapor Platform: $39.00
Lambda Compute (2M req): $15.00
API Gateway: $2.00
Aurora Serverless (2 ACU avg): $175.00
ElastiCache Serverless: $45.00
SQS (500K messages): $0.20
S3 + CloudFront: $25.00
────────────────────────────────────
Total Monthly Cost: ~$300
Equivalent EC2 Setup:
- 2x t4g.medium instances: $48/month
- RDS db.t4g.medium Multi-AZ: $94/month
- ElastiCache cache.t4g.small: $24/month
- Application Load Balancer: $22/month
- DevOps time (4 hours/month): $400/month
- Total: ~$588/month
Verdict: At medium traffic, Vapor becomes competitive when you factor in operational overhead. The managed infrastructure eliminates DevOps maintenance time.
Scenario 3: High Traffic E-Commerce Platform
Profile: E-commerce site with 20 million monthly requests, heavy queue usage, traffic spikes during sales
Vapor Platform: $39.00
Lambda Compute (20M req): $180.00
API Gateway: $20.00
Aurora Serverless (8 ACU avg): $700.00
ElastiCache Serverless: $150.00
SQS (5M messages): $2.00
S3 + CloudFront: $200.00
────────────────────────────────────
Total Monthly Cost: ~$1,290
Equivalent EC2 Setup (with auto-scaling):
- Auto Scaling Group (avg 4x c6g.large): $280/month
- RDS db.r6g.large Multi-AZ: $380/month
- ElastiCache cache.r6g.large: $145/month
- Application Load Balancer: $45/month
- DevOps time (8 hours/month): $800/month
- Over-provisioning for spikes: $200/month
- Total: ~$1,850/month
Verdict: High-traffic applications with variable load benefit significantly from Vapor's automatic scaling. You avoid over-provisioning and reduce operational burden.
The Hidden Cost: Operational Overhead
The cost comparisons above include estimated DevOps time, which is often overlooked:
| Task | EC2 (hours/month) | Vapor (hours/month) |
|---|---|---|
| Security patches | 2-4 | 0 |
| Scaling management | 1-2 | 0 |
| Deployment management | 1-2 | 0.5 |
| Monitoring setup | 1-2 | 0.5 |
| Incident response | 2-4 | 0.5 |
| Total | 7-14 | 1.5 |
At a loaded cost of $100/hour for DevOps time, this represents $700-1,400/month in hidden costs for self-managed infrastructure.
Cold Start Mitigation Strategies
Cold starts occur when AWS Lambda creates a new execution environment to handle a request. During a cold start, Lambda downloads your code, initializes the runtime, and runs your application's bootstrap logic before processing the request.
For Laravel applications on Vapor, cold starts typically add 800-1200ms to request latency. Let me show you how to minimize their impact.
Understanding Cold Start Frequency
Cold starts happen when:
- A new deployment occurs
- Traffic increases beyond current warm containers
- Containers are recycled after ~15 minutes of inactivity
- AWS needs to rebalance load across availability zones
For most applications, cold starts affect 0.3-1% of requests. On high-traffic sites, this percentage drops significantly because containers stay warm.
Strategy 1: Pre-Warming with Vapor
Vapor provides built-in pre-warming through the warm configuration option:
environments:
    production:
        warm: 10
This setting tells Vapor to maintain 10 warm containers at all times. Vapor achieves this by:
- Pinging your application every 5 minutes to prevent container recycling
- Pre-warming containers before new deployments go live
- Maintaining the specified number of concurrent warm instances
Choosing the Right Warm Value:
| Traffic Pattern | Recommended Warm Value |
|---|---|
| Low traffic (< 1 req/sec) | 2-3 |
| Medium traffic (1-10 req/sec) | 5-10 |
| High traffic (10-50 req/sec) | 10-20 |
| Very high traffic (50+ req/sec) | 20-50 |
Set warm to approximately your average concurrent connections plus a buffer for traffic spikes.
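As a quick worked example using Little's law: an application averaging 10 requests per second with a 200 ms average response time holds roughly 10 × 0.2 = 2 concurrent requests at any moment, so a warm value of 5-10 covers that baseline plus typical spikes.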
Strategy 2: Optimize Application Bootstrap
Reduce cold start duration by optimizing what Laravel loads during bootstrap:
<?php

// config/app.php - Remove unused service providers
'providers' => ServiceProvider::defaultProviders()->merge([
    // Only include providers you actually use
    App\Providers\AppServiceProvider::class,
    App\Providers\AuthServiceProvider::class,
    App\Providers\RouteServiceProvider::class,
    // Comment out unused providers
    // App\Providers\BroadcastServiceProvider::class,
])->toArray(),
Use deferred service providers for heavy dependencies:
<?php

namespace App\Providers;

use Illuminate\Contracts\Support\DeferrableProvider;
use Illuminate\Support\ServiceProvider;

class HeavyServiceProvider extends ServiceProvider implements DeferrableProvider
{
    public function register(): void
    {
        $this->app->singleton('heavy.service', function ($app) {
            return new HeavyService();
        });
    }

    public function provides(): array
    {
        return ['heavy.service'];
    }
}
Strategy 3: Leverage Caching Aggressively
Cache everything possible to reduce cold start initialization:
# vapor.yml
build:
    - 'composer install --no-dev --optimize-autoloader'
    - 'php artisan config:cache' # Vapor does this automatically
    - 'php artisan route:cache'
    - 'php artisan view:cache'
    - 'php artisan event:cache'
Strategy 4: Reduce Package Size
Lambda cold start duration correlates directly with deployment package size. Minimize your deployment:
{
    "require-dev": {
        "laravel/telescope": "^5.0",
        "phpunit/phpunit": "^10.0"
    }
}
Ensure development dependencies are not deployed:
build:
    - 'composer install --no-dev --optimize-autoloader'
Use .vaporignore to exclude unnecessary files:
# .vaporignore
tests/
.git/
.github/
node_modules/
.env.example
README.md
phpunit.xml
*.md
Strategy 5: Optimize Memory Configuration
AWS allocates CPU proportionally to memory. More memory means faster cold starts:
environments:
    production:
        memory: 1024 # Recommended minimum for Laravel
| Memory | CPU | Cold Start Impact |
|---|---|---|
| 512MB | 0.33 vCPU | Slow cold starts |
| 1024MB | 0.67 vCPU | Balanced (recommended) |
| 1536MB | 1 vCPU | Fast cold starts |
| 2048MB+ | 1+ vCPU | Minimal benefit for PHP |
Since PHP is single-threaded and Lambda handles one request per instance, allocating more than 1536MB rarely improves performance.
Strategy 6: Provisioned Concurrency (When Necessary)
For latency-critical endpoints that cannot tolerate any cold starts, AWS offers Provisioned Concurrency. This keeps a specified number of Lambda instances fully initialized at all times.
Warning: Provisioned Concurrency is expensive. You pay for every provisioned instance every hour, regardless of usage.
Cost Example:
- 10 provisioned instances
- 1024MB memory
- 24/7 availability
- Cost: ~$150/month (just for provisioned concurrency)
Use Provisioned Concurrency selectively for:
- Authentication endpoints
- Critical checkout flows
- API endpoints with strict SLA requirements
Most applications do not need Provisioned Concurrency. Vapor's built-in warming handles typical use cases adequately. For alternative approaches to reducing latency for global users, consider Lambda@Edge for edge-based personalization that processes requests closer to end users.
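If you do adopt it, Provisioned Concurrency is configured on the Lambda function itself rather than in vapor.yml. A minimal sketch with the AWS CLI is shown below; the function name and alias are placeholders for whatever Vapor created in your AWS account:
# Keep 10 fully initialized instances for a specific function alias (names are placeholders)
aws lambda put-provisioned-concurrency-config \
    --function-name vapor-myapp-production \
    --qualifier live \
    --provisioned-concurrent-executions 10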
When Serverless Makes Sense (And When It Does Not)
After deploying numerous Laravel applications across both traditional and serverless infrastructure, I have developed clear criteria for when each approach excels.
Choose Laravel Vapor When:
Traffic is Unpredictable or Spiky
Applications with viral potential, seasonal traffic, or launch events benefit enormously from serverless. Vapor scales from zero to thousands of concurrent requests without pre-provisioning.
Example: A marketing campaign site expecting 10x normal traffic
during a 48-hour promotion. Vapor handles the spike automatically,
then scales down to minimal cost afterward.
You Want to Minimize Operational Overhead
Vapor eliminates server maintenance, security patching, capacity planning, and scaling management. Your team focuses on building features rather than managing infrastructure.
You Need Geographic Distribution
Deploying to multiple AWS regions with Vapor is straightforward. Combined with CloudFront, you can serve users globally with low latency.
Your Application is Stateless
Serverless applications should not rely on local filesystem or in-memory state between requests. Laravel applications using Redis for sessions, S3 for file storage, and SQS for queues are ideal candidates.
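A minimal .env sketch of that stateless setup, using standard Laravel configuration keys (Vapor injects the AWS credentials, bucket, and queue details for you):
SESSION_DRIVER=redis
CACHE_DRIVER=redis
FILESYSTEM_DISK=s3
QUEUE_CONNECTION=sqs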
Cost Predictability is Less Important Than Scale Flexibility
Serverless costs scale linearly with usage. If you prefer paying proportionally to actual usage rather than fixed infrastructure costs, serverless aligns with that model.
Avoid Laravel Vapor When:
Traffic is Consistent and Predictable
Applications with stable, predictable traffic are often cheaper on reserved EC2 instances:
Example: An internal business application serving 50 employees
during business hours. EC2 with reserved instances costs less
than equivalent Vapor deployment.
Long-Running Processes are Core to Your Application
Lambda has execution time limits (15 minutes maximum). Applications requiring:
- WebSocket connections
- Long-running background processes
- Real-time streaming
...are poor fits for serverless. Use traditional hosting or hybrid approaches.
You Have Heavy Compute Workloads
CPU-intensive tasks like video processing, machine learning inference, or complex calculations may be more cost-effective on EC2 instances sized for compute.
Cold Starts Are Unacceptable
Despite mitigation strategies, some cold starts will occur. If your application has strict latency SLAs (sub-100ms p99), serverless may not be appropriate.
You Need Full Server Control
Some applications require custom PHP extensions, specific OS configurations, or low-level system access. Vapor abstracts these details, which can be limiting.
Hybrid Approaches
Many production applications use both:
# Web application on Vapor
environments:
    production:
        memory: 1024
        # Handles web requests

# Background processing on EC2
# Long-running queue workers, WebSockets, etc.
Consider Vapor for web-facing components while running specialized workloads on traditional infrastructure.
Real-World Performance Considerations
Let me share performance insights from production Vapor deployments.
Request Latency Benchmarks
Typical Laravel Vapor response times (P50):
| Request Type | Cold Start | Warm |
|---|---|---|
| Simple API endpoint | 900ms | 50-80ms |
| Database query | 950ms | 100-150ms |
| Complex page render | 1100ms | 150-250ms |
These numbers assume 1024MB memory configuration and optimized Laravel application.
Database Connection Management
Lambda's ephemeral nature complicates database connections. Each Lambda instance creates its own connection, and during traffic spikes, you can exhaust database connection limits.
When configuring database connections for serverless environments, apply the same best practices used in Laravel API development to ensure optimal connection management.
Recommended Configuration:
<?php

// config/database.php
'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST'),
    'database' => env('DB_DATABASE'),
    'username' => env('DB_USERNAME'),
    'password' => env('DB_PASSWORD'),
    'charset' => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
    'options' => [
        PDO::ATTR_PERSISTENT => false,
        PDO::ATTR_TIMEOUT => 5,
    ],
],
For high-concurrency applications, use RDS Proxy to pool connections:
# vapor.yml
environments:
    production:
        database: my-database
        database-proxy: true
Note: RDS Proxy adds approximately $45/month minimum cost. Only use it if you experience connection limit issues.
Queue Processing Performance
Vapor processes queued jobs through Lambda, enabling massive parallelism:
environments:
    production:
        queue-concurrency: 100 # Process up to 100 jobs concurrently
This configuration allows processing 100 jobs simultaneously, compared to traditional queue workers that process sequentially. For job-heavy applications, this parallelism dramatically improves throughput.
Configure job-specific concurrency limits:
environments:
    production:
        queues:
            - name: default
              concurrency: 50
            - name: emails
              concurrency: 100
            - name: heavy-jobs
              concurrency: 10
              timeout: 300
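Dispatching work onto a specific queue then uses Laravel's standard job API; SendWelcomeEmail here is a hypothetical job class:
<?php

use App\Jobs\SendWelcomeEmail; // hypothetical job class

// Route the job onto the dedicated "emails" queue defined above
SendWelcomeEmail::dispatch($user)->onQueue('emails');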
Static Asset Performance
Vapor serves static assets through CloudFront, providing global edge caching:
environments:
    production:
        build:
            - 'npm run build'
        asset-url: 'https://d1234567890.cloudfront.net'
For optimal performance, ensure your build process generates versioned assets:
// vite.config.js
import { defineConfig } from 'vite';

export default defineConfig({
    build: {
        manifest: true,
        rollupOptions: {
            output: {
                entryFileNames: `assets/[name].[hash].js`,
                chunkFileNames: `assets/[name].[hash].js`,
                assetFileNames: `assets/[name].[hash].[ext]`
            }
        }
    }
});
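Assuming the default Laravel Vite integration, the @vite Blade directive then resolves those hashed filenames through the generated manifest:
<!-- resources/views/layouts/app.blade.php -->
<head>
    @vite(['resources/css/app.css', 'resources/js/app.js'])
</head>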
Deployment Best Practices
CI/CD with GitHub Actions
Automate deployments with GitHub Actions:
# .github/workflows/deploy.yml
name: Deploy to Vapor

on:
    push:
        branches: [main]

jobs:
    deploy:
        runs-on: ubuntu-latest
        steps:
            - uses: actions/checkout@v4

            - name: Setup PHP
              uses: shivammathur/setup-php@v2
              with:
                  php-version: '8.3'

            - name: Setup Node
              uses: actions/setup-node@v4
              with:
                  node-version: '20'

            - name: Install Composer Dependencies
              run: composer install --no-dev --optimize-autoloader

            - name: Install NPM Dependencies
              run: npm ci

            - name: Build Assets
              run: npm run build

            - name: Deploy to Vapor
              env:
                  VAPOR_API_TOKEN: ${{ secrets.VAPOR_API_TOKEN }}
              run: |
                  composer global require laravel/vapor-cli
                  vapor deploy production --commit="${GITHUB_SHA}"
Environment-Specific Configuration
Use separate configurations for staging and production:
environments:
    staging:
        domain: staging.myapp.com
        memory: 512
        warm: 2
        database: staging-db
        scheduler: true # Run scheduled tasks
    production:
        domain: myapp.com
        memory: 1024
        warm: 10
        database: production-db
        scheduler: true
Rollback Strategy
Vapor maintains deployment history for easy rollbacks:
# View recent deployments
vapor deploy:list production
# Rollback to previous deployment
vapor rollback production
# Rollback to specific deployment
vapor rollback production --deployment=123
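Keep in mind that a rollback restores the previous code and assets only; it does not reverse database migrations or environment variable changes, so design schema changes to be backward compatible with the prior release.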
Key Takeaways
Laravel Vapor offers a compelling serverless deployment option for Laravel applications in 2026. Here is what matters most:
- Cost efficiency depends on traffic patterns: Vapor excels for spiky, unpredictable traffic but may cost more than EC2 for steady, predictable workloads. Factor in operational overhead when comparing.
- Cold starts are manageable: With proper configuration (warm containers, optimized bootstrap, appropriate memory), cold starts affect a small percentage of requests and rarely impact user experience.
- Vapor abstracts complexity: The platform handles VPC configuration, Lambda packaging, API Gateway setup, and dozens of AWS details. This abstraction saves significant DevOps time.
- Not every application is a fit: Long-running processes, strict latency requirements, and heavy compute workloads may be better served by traditional hosting or hybrid approaches.
- Start with staging: Deploy to a staging environment first, validate performance and costs, then promote to production with confidence.
If your Laravel application has variable traffic, you want to minimize operational burden, and you can design for statelessness, Vapor is worth serious consideration. The initial learning curve pays dividends in reduced maintenance and automatic scaling.
Considering Laravel Vapor for your application? I have helped teams evaluate and migrate to serverless infrastructure with Laravel Vapor. My AWS Infrastructure service includes architecture assessment, migration planning, and optimization to ensure your serverless deployment meets performance and cost targets. Schedule a free consultation to discuss your serverless strategy.
Related Reading:
- AWS Cost Optimization for PHP Apps: A Complete Guide
- AWS Lambda@Edge for PHP Apps: Low-Latency Personalization
- Laravel Octane Performance Optimization
- Laravel API Development Best Practices
External Resources:
- Laravel Vapor Documentation - Official Vapor Documentation
- Laravel Cloud vs Vapor Comparison - Laravel's Official Comparison
- Bref PHP Performance - PHP Lambda Benchmarks
- AWS Lambda Cold Start Optimization - Cold Start Best Practices
