
Many teams use both Terraform and Serverless Framework. Here’s how to make them work together seamlessly using SSM Parameter Store, with real examples for sharing VPCs, databases, and configuration between tools.

Most teams don’t choose between Terraform and Serverless Framework—they use both. Terraform excels at foundational infrastructure (VPCs, RDS, IAM), while Serverless Framework handles Lambda functions and their event sources with less boilerplate.
The challenge? These tools don’t talk to each other natively. Serverless Framework compiles to CloudFormation and can reference CloudFormation outputs, but it has no built-in support for Terraform state.
This guide shows you how to bridge the gap using AWS SSM Parameter Store—the cleanest, most scalable approach to sharing configuration between tools.
Before diving into integration patterns, let’s clarify when each tool shines:
| Resource Type | Why Terraform |
|---|---|
| VPCs, Subnets, Security Groups | Rarely changes, referenced by many services |
| RDS, ElastiCache, OpenSearch | Stateful, long-lived resources |
| IAM Roles and Policies | Shared across multiple applications |
| S3 Buckets (shared) | Referenced by multiple Lambda functions |
| Route53, ACM Certificates | Infrastructure-level DNS and TLS |
| Resource Type | Why Serverless |
|---|---|
| Lambda Functions | Native packaging, layers, deployment |
| API Gateway | Tightly coupled to Lambda definitions |
| DynamoDB (app-specific) | Deployed alongside the app that uses it |
| SQS/SNS (app-specific) | Event sources for Lambda triggers |
| Step Functions | Orchestrates Lambda functions |
- Shared, long-lived infrastructure → Terraform
- Application-specific, frequently deployed → Serverless Framework
SSM Parameter Store is the glue between Terraform and Serverless Framework. Compare the approaches:
| Approach | Assessment |
|---|---|
| Hardcoded values | Drift, duplication, manual updates |
| Environment variables | Still need a source of truth |
| CloudFormation exports | Serverless can use these, but Terraform can’t write to them |
| Terraform remote state | Serverless Framework can’t read it natively |
| SSM Parameter Store | Both tools have native support |
SSM is natively writable from Terraform (`aws_ssm_parameter` resources) and natively readable from Serverless Framework (`${ssm:...}` variables), supports hierarchical naming, and encrypts secrets with KMS via `SecureString` parameters. Here's the pattern end to end:
```hcl
# terraform/main.tf

# Create a VPC
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"

  name = "production-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true
}

# Create the RDS instance
resource "aws_db_instance" "main" {
  identifier     = "production-db"
  engine         = "postgres"
  engine_version = "15.4"
  instance_class = "db.r5.large"

  allocated_storage = 100
  db_name           = "appdb"
  username          = "admin"
  password          = var.db_password

  vpc_security_group_ids = [aws_security_group.rds.id]
  db_subnet_group_name   = aws_db_subnet_group.main.name

  skip_final_snapshot = false
}

# Write outputs to SSM Parameter Store
resource "aws_ssm_parameter" "vpc_id" {
  name  = "/production/vpc/id"
  type  = "String"
  value = module.vpc.vpc_id
}

resource "aws_ssm_parameter" "private_subnet_ids" {
  name  = "/production/vpc/private_subnet_ids"
  type  = "StringList"
  value = join(",", module.vpc.private_subnets)
}

resource "aws_ssm_parameter" "security_group_lambda" {
  name  = "/production/vpc/security_group_lambda"
  type  = "String"
  value = aws_security_group.lambda.id
}

resource "aws_ssm_parameter" "db_endpoint" {
  name  = "/production/database/endpoint"
  type  = "String"
  value = aws_db_instance.main.endpoint
}

resource "aws_ssm_parameter" "db_name" {
  name  = "/production/database/name"
  type  = "String"
  value = aws_db_instance.main.db_name
}

resource "aws_ssm_parameter" "db_password" {
  name  = "/production/database/password"
  type  = "SecureString"
  value = var.db_password
}
```
```yaml
# serverless/serverless.yml
service: my-api

provider:
  name: aws
  runtime: nodejs20.x
  region: us-east-1

  # VPC configuration from Terraform
  vpc:
    securityGroupIds:
      - ${ssm:/production/vpc/security_group_lambda}
    # StringList parameters resolve to arrays automatically in Serverless v3
    subnetIds: ${ssm:/production/vpc/private_subnet_ids}

  # Environment variables from Terraform outputs
  environment:
    DB_ENDPOINT: ${ssm:/production/database/endpoint}
    DB_NAME: ${ssm:/production/database/name}
    DB_PASSWORD: ${ssm:/production/database/password}

functions:
  api:
    handler: src/handler.main
    events:
      - http:
          path: /
          method: ANY
      - http:
          path: /{proxy+}
          method: ANY
```
Key syntax:

- `${ssm:/path/to/param}` reads a `String` or `SecureString` parameter (SecureStrings are decrypted automatically in Serverless Framework v3)
- `StringList` parameters resolve to arrays automatically in v3; the `${ssm:/path/to/param~split}` suffix is the legacy v1/v2 way to split a comma-separated list

Real projects have multiple environments. Structure your SSM parameters accordingly:
```hcl
# terraform/environments/prod/main.tf
locals {
  environment = "prod"
}

resource "aws_ssm_parameter" "vpc_id" {
  name  = "/${local.environment}/vpc/id"
  type  = "String"
  value = module.vpc.vpc_id

  tags = {
    Environment = local.environment
    ManagedBy   = "terraform"
  }
}

resource "aws_ssm_parameter" "db_endpoint" {
  name  = "/${local.environment}/database/endpoint"
  type  = "String"
  value = aws_db_instance.main.endpoint

  tags = {
    Environment = local.environment
    ManagedBy   = "terraform"
  }
}
```
```yaml
# serverless.yml
service: my-api

custom:
  stage: ${opt:stage, 'dev'}

provider:
  name: aws
  runtime: nodejs20.x
  stage: ${self:custom.stage}

  vpc:
    securityGroupIds:
      - ${ssm:/${self:custom.stage}/vpc/security_group_lambda}
    subnetIds: ${ssm:/${self:custom.stage}/vpc/private_subnet_ids}

  environment:
    STAGE: ${self:custom.stage}
    DB_ENDPOINT: ${ssm:/${self:custom.stage}/database/endpoint}
    DB_NAME: ${ssm:/${self:custom.stage}/database/name}
```
Deploy to different environments:
```bash
# Deploy to dev (reads /dev/vpc/id, etc.)
serverless deploy --stage dev

# Deploy to prod (reads /prod/vpc/id, etc.)
serverless deploy --stage prod
```
For sensitive data like database passwords and API keys, use `SecureString` parameters:
resource "aws_ssm_parameter" "db_password" {
name = "/${local.environment}/database/password"
description = "Database master password"
type = "SecureString"
value = var.db_password
key_id = aws_kms_key.secrets.arn # Custom KMS key (optional)
tags = {
Environment = local.environment
Sensitive = "true"
}
}
resource "aws_ssm_parameter" "api_key" {
name = "/${local.environment}/external/stripe_api_key"
description = "Stripe API key"
type = "SecureString"
value = var.stripe_api_key
tags = {
Environment = local.environment
Sensitive = "true"
}
}
```yaml
provider:
  environment:
    # Decrypted at deploy time, stored in the Lambda environment
    DB_PASSWORD: ${ssm:/${self:custom.stage}/database/password}
    STRIPE_API_KEY: ${ssm:/${self:custom.stage}/external/stripe_api_key}
```
For maximum security, fetch secrets at runtime instead of deploy time:
```javascript
// src/config.js
const { SSMClient, GetParameterCommand } = require('@aws-sdk/client-ssm');

const ssm = new SSMClient({ region: process.env.AWS_REGION });

// Cache parameters to avoid repeated API calls
const parameterCache = {};

async function getParameter(name, decrypt = true) {
  if (parameterCache[name]) {
    return parameterCache[name];
  }

  const command = new GetParameterCommand({
    Name: name,
    WithDecryption: decrypt,
  });

  const response = await ssm.send(command);
  parameterCache[name] = response.Parameter.Value;
  return parameterCache[name];
}

module.exports = {
  getDbPassword: () => getParameter(`/${process.env.STAGE}/database/password`),
  getStripeKey: () => getParameter(`/${process.env.STAGE}/external/stripe_api_key`),
};
```
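A handler can then pull secrets through this module. Here's a minimal sketch; the handler file name and response shape are illustrative:

```javascript
// src/handler.js - hypothetical consumer of the config module above
const { getDbPassword } = require('./config');

module.exports.main = async () => {
  // First call in a container hits SSM; later calls hit the in-memory cache
  const password = await getDbPassword();

  // ...connect to the database using process.env.DB_ENDPOINT and `password`...

  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};
```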
Benefits of runtime fetching:

- Secrets never sit in Lambda environment variables, where anyone with `lambda:GetFunctionConfiguration` access can read them
- Rotated values are picked up without redeploying
- One parameter update propagates to every function that reads it
Tradeoffs:

- Extra latency on cold start (mitigated by the cache)
- The function needs `ssm:GetParameter` permission and network access to SSM (a VPC endpoint or NAT gateway)
- A naive cache holds stale values for the container's lifetime; add a TTL if you rotate secrets, as in the sketch below
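A minimal TTL variant of the cache, assuming a 5-minute expiry fits your rotation window:

```javascript
// src/config.js (TTL variant) - expired entries are re-fetched, so rotated
// secrets propagate to warm containers within TTL_MS
const { SSMClient, GetParameterCommand } = require('@aws-sdk/client-ssm');

const ssm = new SSMClient({ region: process.env.AWS_REGION });
const cache = new Map(); // name -> { value, expiresAt }
const TTL_MS = 5 * 60 * 1000;

async function getParameter(name, decrypt = true) {
  const hit = cache.get(name);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value;
  }

  const { Parameter } = await ssm.send(
    new GetParameterCommand({ Name: name, WithDecryption: decrypt })
  );
  cache.set(name, { value: Parameter.Value, expiresAt: Date.now() + TTL_MS });
  return Parameter.Value;
}

module.exports = { getParameter };
```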
Terraform can create IAM roles that Serverless Framework functions assume:
```hcl
# terraform/iam.tf
resource "aws_iam_role" "lambda_execution" {
  name = "${local.environment}-lambda-execution-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_basic" {
  role       = aws_iam_role.lambda_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_iam_role_policy_attachment" "lambda_vpc" {
  role       = aws_iam_role.lambda_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}

resource "aws_iam_role_policy" "lambda_custom" {
  name = "custom-permissions"
  role = aws_iam_role.lambda_execution.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "ssm:GetParameter",
          "ssm:GetParameters",
        ]
        Resource = "arn:aws:ssm:*:*:parameter/${local.environment}/*"
      },
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:PutObject",
        ]
        Resource = "${aws_s3_bucket.data.arn}/*"
      }
    ]
  })
}

# Export the role ARN to SSM
resource "aws_ssm_parameter" "lambda_role_arn" {
  name  = "/${local.environment}/iam/lambda_execution_role_arn"
  type  = "String"
  value = aws_iam_role.lambda_execution.arn
}
```
```yaml
# serverless.yml
provider:
  name: aws
  runtime: nodejs20.x

  # Use the Terraform-managed role instead of the auto-generated one
  iam:
    role: ${ssm:/${self:custom.stage}/iam/lambda_execution_role_arn}
```
Why manage roles in Terraform?

- One role can be shared across services instead of every `serverless.yml` generating its own
- Security teams review IAM changes in one place, with one tool
- Policies can reference other Terraform-managed resources directly (like the S3 bucket above)
Here’s a real-world project structure using both tools:
```
project/
├── terraform/
│   ├── environments/
│   │   ├── dev/
│   │   │   ├── main.tf
│   │   │   ├── variables.tf
│   │   │   └── terraform.tfvars
│   │   ├── staging/
│   │   │   └── ...
│   │   └── prod/
│   │       └── ...
│   ├── modules/
│   │   ├── vpc/
│   │   ├── rds/
│   │   ├── iam/
│   │   └── ssm-outputs/
│   └── shared/
│       └── backend.tf
│
├── serverless/
│   ├── services/
│   │   ├── api/
│   │   │   ├── serverless.yml
│   │   │   ├── src/
│   │   │   └── package.json
│   │   ├── workers/
│   │   │   ├── serverless.yml
│   │   │   └── src/
│   │   └── notifications/
│   │       ├── serverless.yml
│   │       └── src/
│   └── shared/
│       └── serverless.common.yml
│
├── scripts/
│   ├── deploy-all.sh
│   ├── deploy-infra.sh
│   └── deploy-apps.sh
│
└── README.md
```
A single script can deploy both layers in order:

```bash
#!/bin/bash
# scripts/deploy-all.sh
set -euo pipefail # stop on the first failure

STAGE=${1:-dev}

echo "=== Deploying infrastructure with Terraform ==="
cd terraform/environments/$STAGE
terraform init
terraform apply -auto-approve

echo "=== Deploying Serverless services ==="
cd ../../../serverless/services
for service in api workers notifications; do
  echo "Deploying $service..."
  cd $service
  npm ci
  serverless deploy --stage $STAGE
  cd ..
done

echo "=== Deployment complete ==="
```
When Terraform updates infrastructure, you may need to redeploy Serverless apps to pick up the changes.
```bash
# After terraform apply
serverless deploy --stage prod
```
If your Lambda functions fetch parameters at runtime (not deploy time), they’ll automatically get updated values:
```javascript
// Parameters fetched fresh on each invocation (or on cache expiry)
const dbEndpoint = await getParameter('/prod/database/endpoint');
```
For critical config changes, update the Lambda environment without a full redeploy:
```bash
# Note: --environment replaces the ENTIRE variable map, so include every
# variable the function needs, not just the one you're changing.
aws lambda update-function-configuration \
  --function-name my-api-prod-handler \
  --environment "Variables={DB_ENDPOINT=$(aws ssm get-parameter --name /prod/database/endpoint --query Parameter.Value --output text)}"
```
If you want to eliminate the tool split entirely, consider serverless.tf—Terraform modules designed specifically for serverless workloads:
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-api"
description = "My API Lambda"
handler = "index.handler"
runtime = "nodejs20.x"
source_path = "../src"
vpc_subnet_ids = module.vpc.private_subnets
vpc_security_group_ids = [aws_security_group.lambda.id]
environment_variables = {
DB_ENDPOINT = aws_db_instance.main.endpoint
}
allowed_triggers = {
APIGateway = {
service = "apigateway"
source_arn = "${aws_apigatewayv2_api.main.execution_arn}/*/*"
}
}
}
Pros:

- One tool, one state file, no cross-tool parameter passing
- No CloudFormation in the picture, so no stack limits or drift between tools
Cons:

- More verbose than `serverless.yml` for function-heavy applications
- You lose the Serverless Framework plugin ecosystem and local workflow (`serverless invoke local`, `serverless offline`)
Use a consistent hierarchy:

```
/{environment}/{service}/{parameter}
```
Examples:

```
/prod/vpc/id
/prod/database/endpoint
/dev/api/rate_limit
```
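The hierarchy also enables bulk loading: one paginated `GetParametersByPath` call fetches everything under a prefix. A sketch, with the `/prod/api/` path as an illustrative example:

```javascript
// Load every parameter under a prefix into a plain config object
const { SSMClient, GetParametersByPathCommand } = require('@aws-sdk/client-ssm');

const ssm = new SSMClient({ region: process.env.AWS_REGION });

async function loadConfig(path) {
  const config = {};
  let NextToken;
  do {
    const page = await ssm.send(
      new GetParametersByPathCommand({
        Path: path, // e.g. '/prod/api/' (trailing slash so keys strip cleanly)
        Recursive: true,
        WithDecryption: true,
        NextToken,
      })
    );
    for (const p of page.Parameters) {
      // '/prod/api/rate_limit' -> 'rate_limit'
      config[p.Name.slice(path.length)] = p.Value;
    }
    NextToken = page.NextToken;
  } while (NextToken);
  return config;
}
```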
Always handle missing parameters gracefully:
```yaml
# serverless.yml - with fallback
provider:
  environment:
    LOG_LEVEL: ${ssm:/${self:custom.stage}/config/log_level, 'info'}
```
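The runtime equivalent, if you fetch parameters from code, is to catch `ParameterNotFound` and fall back to a default. A sketch:

```javascript
// Return a fallback when the parameter does not exist instead of crashing
const { SSMClient, GetParameterCommand } = require('@aws-sdk/client-ssm');

const ssm = new SSMClient({ region: process.env.AWS_REGION });

async function getParameterOrDefault(name, fallback) {
  try {
    const { Parameter } = await ssm.send(
      new GetParameterCommand({ Name: name, WithDecryption: true })
    );
    return Parameter.Value;
  } catch (err) {
    if (err.name === 'ParameterNotFound') return fallback;
    throw err; // real failures (permissions, throttling) still surface
  }
}

// Example: const logLevel = await getParameterOrDefault(`/${process.env.STAGE}/config/log_level`, 'info');
```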
Terraform and Serverless Framework work well together when you establish clear boundaries:

- Terraform owns shared, long-lived infrastructure and writes its outputs to SSM Parameter Store
- Serverless Framework owns application-level resources and reads those outputs at deploy time (or at runtime for secrets)
- A consistent parameter hierarchy (`/{environment}/{service}/{parameter}`) keeps the contract between them discoverable
This pattern scales to dozens of services and multiple environments. The key is consistency: establish your parameter naming convention early and stick to it.