I want to deploy my Lambda functions to AWS Lambda using the Serverless Framework with this command:

```sh
serverless deploy --stage dev --region eu-central-1
```

Here's my `serverless.yml` file:
```yaml
service: sensor-processor-v3

plugins:
  - serverless-webpack
  # - serverless-websockets-plugin

custom:
  secrets: ${file(secrets.yml):${self:provider.stage}}
  accessLogOnStage:
    dev: true
    prod: true
  nodeEnv:
    dev: development
    prod: production
  mqArn:
    dev:
    prod:

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}
  region: eu-central-1
  logs:
    accessLogging: ${self:custom.accessLogOnStage.${self:provider.stage}}
    executionLogging: ${self:custom.accessLogOnStage.${self:provider.stage}}
  logRetentionInDays: 14
  memorySize: 128
  timeout: 30
  endpointType: REGIONAL
  environment:
    STAGE: ${self:provider.stage}
    NODE_ENV: ${self:custom.nodeEnv.${self:provider.stage}}
    REDIS_HOST_RW: !GetAtt RedisCluster.PrimaryEndPoint.Address
    REDIS_HOST_RO: !GetAtt RedisCluster.ReaderEndPoint.Address
    REDIS_PORT: !GetAtt RedisCluster.PrimaryEndPoint.Port
    SNIPEIT_INSTANCE_URL: ${self:custom.secrets.SNIPEIT_INSTANCE_URL}
    SNIPEIT_API_TOKEN: ${self:custom.secrets.SNIPEIT_API_TOKEN}
  apiGateway:
    apiKeySelectionExpression:
    apiKeySourceType: AUTHORIZER
    apiKeys:
      - ${self:service}-${self:provider.stage}
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - ec2:CreateNetworkInterface
            - ec2:DescribeNetworkInterfaces
            - ec2:DeleteNetworkInterface
          Resource: "*"
        - Effect: Allow
          Action:
            - "dynamodb:PutItem"
            - "dynamodb:Query"
          Resource: { Fn::GetAtt: [ theThingsNetwork, Arn ] }
        - Effect: Allow
          Action:
            - "dynamodb:PutItem"
            - "dynamodb:Query"
          Resource: { Fn::GetAtt: [ loriotTable, Arn ] }
        - Effect: Allow
          Action:
            - firehose:DeleteDeliveryStream
            - firehose:PutRecord
            - firehose:PutRecordBatch
            - firehose:UpdateDestination
          Resource: '*'
        - Effect: Allow
          Action: lambda:InvokeFunction
          Resource: '*'
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:ListBucket
            - s3:PutObject
          Resource:
            - arn:aws:s3:::sensor-processor-v3-prod
            - arn:aws:s3:::sensor-processor-v3-prod/*
            - arn:aws:s3:::sensor-processor-v3-dev
            - arn:aws:s3:::sensor-processor-v3-dev/*
            - arn:aws:s3:::datawarehouse-redshift-dev
            - arn:aws:s3:::datawarehouse-redshift-dev/*
            - arn:aws:s3:::datawarehouse-redshift
            - arn:aws:s3:::datawarehouse-redshift/*

package:
  patterns:
    - '!README.md'
    - '!tools/rename-script.js'
    - '!secrets*'

functions:
  authorizer:
    handler: src/authorizer.handler
    memorySize: 128
    environment:
      STAGE: ${self:provider.stage}
      API_KEY_ALLOW: ${self:custom.secrets.API_KEY_ALLOW}
      USAGE_API_KEY: ${self:custom.secrets.USAGE_API_KEY}
  ibasxSend:
    handler: src/ibasxSend.ibasxSend
    memorySize: 256
    environment:
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
      NODE_TLS_REJECT_UNAUTHORIZED: 0
  processIbasxPayload:
    handler: src/processIbasxPayload.processor
    memorySize: 384
    timeout: 20
    environment:
      STAGE: ${self:provider.stage}
      LORIOT_DB: { Ref: loriotTable }
      OCCUPANCY_STREAM_NAME: { Ref: firehose }
      ATMOSPHERIC_STREAM_NAME: { Ref: AtmosphericFirehose }
      PEOPLE_STREAM_NAME: { Ref: PeopleFirehose }
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
      IBASX_DATA_SYNC_DB_NAME: ${self:custom.secrets.IBASX_DATA_SYNC_DB_NAME}
      IBASX_DATA_SYNC_DB_USER: ${self:custom.secrets.IBASX_DATA_SYNC_DB_USER}
      IBASX_DATA_SYNC_DB_PASSWORD: ${self:custom.secrets.IBASX_DATA_SYNC_DB_PASSWORD}
      IBASX_DATA_SYNC_DB_HOST: ${self:custom.secrets.IBASX_DATA_SYNC_DB_HOST}
      IBASX_DATA_SYNC_DB_PORT: ${self:custom.secrets.IBASX_DATA_SYNC_DB_PORT}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
      CLIMATE_DB_NAME: "${self:custom.secrets.CLIMATE_DB_NAME}"
      CLIMATE_DB_HOST: "${self:custom.secrets.CLIMATE_DB_HOST}"
      CLIMATE_DB_USER: "${self:custom.secrets.CLIMATE_DB_USER}"
      CLIMATE_DB_PASSWORD: "${self:custom.secrets.CLIMATE_DB_PASSWORD}"
      CLIMATE_DB_PORT: "${self:custom.secrets.CLIMATE_DB_PORT}"
      FEATURES: snipeId
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
  loriotConnector:
    handler: src/loriotConnector.connector
    memorySize: 384
    timeout: 20
    environment:
      STAGE: ${self:provider.stage}
      LORIOT_DB: { Ref: loriotTable }
      OCCUPANCY_STREAM_NAME: { Ref: firehose }
      ATMOSPHERIC_STREAM_NAME: { Ref: AtmosphericFirehose }
      PEOPLE_STREAM_NAME: { Ref: PeopleFirehose }
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
      CLIMATE_DB_NAME: "${self:custom.secrets.CLIMATE_DB_NAME}"
      CLIMATE_DB_HOST: "${self:custom.secrets.CLIMATE_DB_HOST}"
      CLIMATE_DB_USER: "${self:custom.secrets.CLIMATE_DB_USER}"
      CLIMATE_DB_PASSWORD: "${self:custom.secrets.CLIMATE_DB_PASSWORD}"
      CLIMATE_DB_PORT: "${self:custom.secrets.CLIMATE_DB_PORT}"
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
    events:
      - http:
          path: loriot/uplink
          method: post
          # private: true
          authorizer:
            type: TOKEN
            name: authorizer
            identitySource: method.request.header.Authorization
  ibasxDiagnostics:
    handler: src/ibasxDiagnostics.diagnostics
    memorySize: 256
    timeout: 60
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
  importDataFromS3:
    handler: src/importDataFromS3.importFn
    memorySize: 512
    timeout: 300
    environment:
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
  qrcodeSync:
    handler: src/qrcodeSync.sync
    memorySize: 256
    timeout: 30
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
    environment:
      REDSHIFT_CLUSTER_TYPE: ${self:custom.secrets.REDSHIFT_CLUSTER_TYPE}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
      ASSETS_DB_NAME: "${self:custom.secrets.ASSETS_DB_NAME}"
      ASSETS_DB_HOST: "${self:custom.secrets.ASSETS_DB_HOST}"
      ASSETS_DB_USER: "${self:custom.secrets.ASSETS_DB_USER}"
      ASSETS_DB_PASSWORD: "${self:custom.secrets.ASSETS_DB_PASSWORD}"
      ASSETS_DB_PORT: "${self:custom.secrets.ASSETS_DB_PORT}"
    events:
      - schedule: rate(5 minutes)
  # deduplicator:
  #   handler: src/deduplicator.deduplicate
  #   memorySize: 512
  #   environment:
  #     REDSHIFT_CLUSTER_TYPE: ${self:custom.secrets.REDSHIFT_CLUSTER_TYPE}
  #     REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
  #     REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
  #     REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
  #     REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
  #     REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
  #   # events:
  #   #   - schedule: rate(5 minutes)
  websocketMessage:
    handler: src/websocketConnector.onMessage
    memorySize: 256
    events:
      - websocket:
          route: '$default'
    environment:
      LORIOT_DB: { Ref: loriotTable }
      OCCUPANCY_STREAM_NAME: { Ref: firehose }
      ATMOSPHERIC_STREAM_NAME: { Ref: AtmosphericFirehose }
      PEOPLE_STREAM_NAME: { Ref: PeopleFirehose }
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
  wsAuthorizer:
    handler: src/authorizer.handler
    memorySize: 128
    environment:
      STAGE: ${self:provider.stage}
      API_KEY_ALLOW: ${self:custom.secrets.WS_API_KEY_ALLOW}
      USAGE_API_KEY: ${self:custom.secrets.USAGE_API_KEY}
  websocketConnect:
    handler: src/websocketConnect.connect
    environment:
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
    events:
      - websocket:
          route: $connect
          # routeKey: '\$default'
          authorizer:
            name: wsAuthorizer
            identitySource:
              - route.request.header.Authorization
      - websocket:
          route: $disconnect
  wifiConnector:
    handler: src/wifi.connector
    memorySize: 384
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
    events:
      - http:
          path: wifi/uplink
          method: post
          authorizer:
            type: TOKEN
            name: authorizer
            identitySource: method.request.header.Authorization
    environment:
      STAGE: ${self:provider.stage}
      LORIOT_DB: { Ref: loriotTable }
      OCCUPANCY_STREAM_NAME: { Ref: firehose }
      ATMOSPHERIC_STREAM_NAME: { Ref: AtmosphericFirehose }
      PEOPLE_STREAM_NAME: { Ref: PeopleFirehose }
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
      CLIMATE_DB_NAME: "${self:custom.secrets.CLIMATE_DB_NAME}"
      CLIMATE_DB_HOST: "${self:custom.secrets.CLIMATE_DB_HOST}"
      CLIMATE_DB_USER: "${self:custom.secrets.CLIMATE_DB_USER}"
      CLIMATE_DB_PASSWORD: "${self:custom.secrets.CLIMATE_DB_PASSWORD}"
      CLIMATE_DB_PORT: "${self:custom.secrets.CLIMATE_DB_PORT}"
  missingPowerBIData:
    handler: src/missingPowerBIData.update
    memorySize: 256
    timeout: 600
    environment:
      STAGE: ${self:provider.stage}
      CLIMATE_DB_NAME: "${self:custom.secrets.CLIMATE_DB_NAME}"
      CLIMATE_DB_HOST: "${self:custom.secrets.CLIMATE_DB_HOST}"
      CLIMATE_DB_USER: "${self:custom.secrets.CLIMATE_DB_USER}"
      CLIMATE_DB_PASSWORD: "${self:custom.secrets.CLIMATE_DB_PASSWORD}"
      CLIMATE_DB_PORT: "${self:custom.secrets.CLIMATE_DB_PORT}"
  kinesisEtl:
    timeout: 60
    handler: src/kinesisTransformer.kinesisTransformer
    environment:
      TZ: "Greenwich"
      ROUND_PERIOD: 360000 # 6 minutes
      ADMIN_DB_NAME: "${self:custom.secrets.ADMIN_DB_NAME}"
      ADMIN_DB_HOST: "${self:custom.secrets.ADMIN_DB_HOST}"
      ADMIN_DB_USER: "${self:custom.secrets.ADMIN_DB_USER}"
      ADMIN_DB_PASSWORD: "${self:custom.secrets.ADMIN_DB_PASSWORD}"
      ADMIN_DB_PORT: "${self:custom.secrets.ADMIN_DB_PORT}"
      ASSETS_DB_NAME: "${self:custom.secrets.ASSETS_DB_NAME}"
      ASSETS_DB_HOST: "${self:custom.secrets.ASSETS_DB_HOST}"
      ASSETS_DB_USER: "${self:custom.secrets.ASSETS_DB_USER}"
      ASSETS_DB_PASSWORD: "${self:custom.secrets.ASSETS_DB_PASSWORD}"
      ASSETS_DB_PORT: "${self:custom.secrets.ASSETS_DB_PORT}"
  atmosphericEtl:
    timeout: 60
    handler: src/atmosphericTransformer.atmosphericTransformer
    environment:
      TZ: "Greenwich"
      ADMIN_DB_NAME: "${self:custom.secrets.ADMIN_DB_NAME}"
      ADMIN_DB_HOST: "${self:custom.secrets.ADMIN_DB_HOST}"
      ADMIN_DB_USER: "${self:custom.secrets.ADMIN_DB_USER}"
      ADMIN_DB_PASSWORD: "${self:custom.secrets.ADMIN_DB_PASSWORD}"
      ADMIN_DB_PORT: "${self:custom.secrets.ADMIN_DB_PORT}"
  peopleEtl:
    timeout: 60
    handler: src/peopleTransformer.peopleTransformer
    environment:
      ROUND_PERIOD: 360000 # 6 minutes
      TZ: "Greenwich"
      ADMIN_DB_NAME: "${self:custom.secrets.ADMIN_DB_NAME}"
      ADMIN_DB_HOST: "${self:custom.secrets.ADMIN_DB_HOST}"
      ADMIN_DB_USER: "${self:custom.secrets.ADMIN_DB_USER}"
      ADMIN_DB_PASSWORD: "${self:custom.secrets.ADMIN_DB_PASSWORD}"
      ADMIN_DB_PORT: "${self:custom.secrets.ADMIN_DB_PORT}"
  updateSensorSlot:
    timeout: 60
    handler: src/updateSensorSlot.updateSensorSlot
    environment:
      ADMIN_DB_NAME: "${self:custom.secrets.ADMIN_DB_NAME}"
      ADMIN_DB_HOST: "${self:custom.secrets.ADMIN_DB_HOST}"
      ADMIN_DB_USER: "${self:custom.secrets.ADMIN_DB_USER}"
      ADMIN_DB_PASSWORD: "${self:custom.secrets.ADMIN_DB_PASSWORD}"
      ADMIN_DB_PORT: "${self:custom.secrets.ADMIN_DB_PORT}"
      REDSHIFT_CLUSTER_TYPE: ${self:custom.secrets.REDSHIFT_CLUSTER_TYPE}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}

resources: ${file(resources.yml)}
```
Here's the `resources.yml` file:
```yaml
---
Resources:
  firehoseRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: ${self:service}-${self:provider.stage}-FirehoseToS3Role
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - firehose.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: FirehoseToS3Policy
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - s3:AbortMultipartUpload
                  - s3:GetBucketLocation
                  - s3:GetObject
                  - s3:ListBucket
                  - s3:ListBucketMultipartUploads
                  - s3:PutObject
                Resource: '*'
        - PolicyName: FirehoseLogsPolicy
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogStream
                  - glue:GetTableVersions
                  - logs:CreateLogGroup
                  - logs:PutLogEvents
                Resource: '*'
        - PolicyName: FirehoseLambdaPolicy
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - lambda:InvokeFunction
                  - lambda:GetFunctionConfiguration
                  - kinesis:GetShardIterator
                  - kinesis:GetRecords
                  - kinesis:DescribeStream
                Resource: '*'
  serverlessKinesisFirehoseBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: "${self:service}-${self:provider.stage}"
      LifecycleConfiguration:
        Rules:
          - Status: Enabled
            ExpirationInDays: 90
  theThingsNetwork:
    Type: "AWS::DynamoDB::Table"
    Properties:
      TableName: "${self:custom.secrets.TTN_DB}"
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: true
      AttributeDefinitions:
        - AttributeName: "device"
          AttributeType: "S"
        - AttributeName: "timestamp"
          AttributeType: "S"
      KeySchema:
        - AttributeName: "device"
          KeyType: "HASH"
        - AttributeName: "timestamp"
          KeyType: "RANGE"
      BillingMode: PAY_PER_REQUEST
  loriotTable:
    Type: "AWS::DynamoDB::Table"
    Properties:
      TableName: "${self:custom.secrets.LORIOT_DB}"
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: true
      AttributeDefinitions:
        - AttributeName: "device"
          AttributeType: "S"
        - AttributeName: "timestamp"
          AttributeType: "S"
      KeySchema:
        - AttributeName: "device"
          KeyType: "HASH"
        - AttributeName: "timestamp"
          KeyType: "RANGE"
      BillingMode: PAY_PER_REQUEST
  processed:
    Type: "AWS::Redshift::Cluster"
    Properties:
      AutomatedSnapshotRetentionPeriod: "${self:custom.secrets.REDSHIFT_SNAPSHOT_RETENTION_PERIOD}"
      AllowVersionUpgrade: true
      ClusterIdentifier: "${self:custom.secrets.REDSHIFT_IDENTIFIER}"
      ClusterType: "${self:custom.secrets.REDSHIFT_CLUSTER_TYPE}"
      DBName: "${self:custom.secrets.REDSHIFT_DB_NAME}"
      MasterUsername: "${self:custom.secrets.REDSHIFT_DB_USER}"
      MasterUserPassword: "${self:custom.secrets.REDSHIFT_DB_PASSWORD}"
      Port: "${self:custom.secrets.REDSHIFT_DB_PORT}"
      NodeType: "${self:custom.secrets.REDSHIFT_NODE_TYPE}"
      PubliclyAccessible: true
      VpcSecurityGroupIds: "${self:custom.secrets.REDSHIFT_SECURITY_GROUP_IDS}"
      ElasticIp: "${self:custom.secrets.REDSHIFT_EIP}"
      ClusterSubnetGroupName: "${self:custom.secrets.REDSHIFT_SUBNET_GROUP}"
      # ClusterParameterGroupName: "${self:custom.secrets.REDSHIFT_PARAMETER_GROUP}"
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: ${self:service}-${self:provider.stage}-kinesis
      RetentionInDays: 30
  OccupancyS3LogStream:
    Type: AWS::Logs::LogStream
    Properties:
      LogGroupName: { Ref: LogGroup }
      LogStreamName: OccupancyS3LogStream
  OccupancyRedshiftLogStream:
    Type: AWS::Logs::LogStream
    Properties:
      LogGroupName: { Ref: LogGroup }
      LogStreamName: OccupancyRedshiftLogStream
  AtmosphericS3LogStream:
    Type: AWS::Logs::LogStream
    Properties:
      LogGroupName: { Ref: LogGroup }
      LogStreamName: AtmosphericS3LogStream
  AtmosphericRedshiftLogStream:
    Type: AWS::Logs::LogStream
    Properties:
      LogGroupName: { Ref: LogGroup }
      LogStreamName: AtmosphericRedshiftLogStream
  firehose:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      DeliveryStreamName: ${self:service}-${self:provider.stage}
      DeliveryStreamType: DirectPut
      RedshiftDestinationConfiguration:
        ClusterJDBCURL: jdbc:redshift://${self:custom.secrets.REDSHIFT_IDENTIFIER}.copw8j1hahrq.eu-central-1.redshift.amazonaws.com:${self:custom.secrets.REDSHIFT_DB_PORT}/${self:custom.secrets.REDSHIFT_DB_NAME}
        CopyCommand:
          CopyOptions: "json 'auto' dateformat 'auto' timeformat 'auto'"
          DataTableName: "processed_data"
        Password: "${self:custom.secrets.REDSHIFT_DB_PASSWORD}"
        Username: "${self:custom.secrets.REDSHIFT_DB_USER}"
        RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
        CloudWatchLoggingOptions:
          Enabled: true
          LogGroupName: { Ref: LogGroup }
          LogStreamName: { Ref: OccupancyRedshiftLogStream }
        S3Configuration:
          BucketARN: { Fn::GetAtt: [ serverlessKinesisFirehoseBucket, Arn ] }
          BufferingHints:
            IntervalInSeconds: 60
            SizeInMBs: 1
          CompressionFormat: UNCOMPRESSED
          CloudWatchLoggingOptions:
            Enabled: true
            LogGroupName: { Ref: LogGroup }
            LogStreamName: { Ref: OccupancyS3LogStream }
          RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
        ProcessingConfiguration:
          Enabled: true
          Processors:
            - Parameters:
                - ParameterName: LambdaArn
                  ParameterValue: { Fn::GetAtt: [ KinesisEtlLambdaFunction, Arn ] }
                - ParameterName: BufferIntervalInSeconds
                  ParameterValue: "60"
                - ParameterName: BufferSizeInMBs
                  ParameterValue: "1"
                - ParameterName: NumberOfRetries
                  ParameterValue: "2"
              Type: Lambda
  AtmosphericFirehose:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      DeliveryStreamName: ${self:service}-${self:provider.stage}-atmospheric
      DeliveryStreamType: DirectPut
      RedshiftDestinationConfiguration:
        ClusterJDBCURL: jdbc:redshift://${self:custom.secrets.REDSHIFT_IDENTIFIER}.copw8j1hahrq.eu-central-1.redshift.amazonaws.com:${self:custom.secrets.REDSHIFT_DB_PORT}/${self:custom.secrets.REDSHIFT_DB_NAME}
        CopyCommand:
          CopyOptions: "json 'auto' dateformat 'auto' timeformat 'auto'"
          DataTableName: "atmospheric_data"
        Password: "${self:custom.secrets.REDSHIFT_DB_PASSWORD}"
        Username: "${self:custom.secrets.REDSHIFT_DB_USER}"
        RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
        CloudWatchLoggingOptions:
          Enabled: true
          LogGroupName: { Ref: LogGroup }
          LogStreamName: { Ref: AtmosphericRedshiftLogStream }
        S3Configuration:
          BucketARN: { Fn::GetAtt: [ serverlessKinesisFirehoseBucket, Arn ] }
          Prefix: atmospheric/
          BufferingHints:
            IntervalInSeconds: 60
            SizeInMBs: 1
          CompressionFormat: UNCOMPRESSED
          CloudWatchLoggingOptions:
            Enabled: true
            LogGroupName: { Ref: LogGroup }
            LogStreamName: { Ref: AtmosphericS3LogStream }
          RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
        ProcessingConfiguration:
          Enabled: true
          Processors:
            - Parameters:
                - ParameterName: LambdaArn
                  ParameterValue: { Fn::GetAtt: [ AtmosphericEtlLambdaFunction, Arn ] }
                - ParameterName: BufferIntervalInSeconds
                  ParameterValue: "60"
                - ParameterName: BufferSizeInMBs
                  ParameterValue: "1"
                - ParameterName: NumberOfRetries
                  ParameterValue: "2"
              Type: Lambda
  PeopleFirehose:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      DeliveryStreamName: ${self:service}-${self:provider.stage}-people
      DeliveryStreamType: DirectPut
      RedshiftDestinationConfiguration:
        ClusterJDBCURL: jdbc:redshift://${self:custom.secrets.REDSHIFT_IDENTIFIER}.copw8j1hahrq.eu-central-1.redshift.amazonaws.com:${self:custom.secrets.REDSHIFT_DB_PORT}/${self:custom.secrets.REDSHIFT_DB_NAME}
        CopyCommand:
          CopyOptions: "json 'auto' dateformat 'auto' timeformat 'auto'"
          DataTableName: "people_data"
        Password: "${self:custom.secrets.REDSHIFT_DB_PASSWORD}"
        Username: "${self:custom.secrets.REDSHIFT_DB_USER}"
        RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
        CloudWatchLoggingOptions:
          Enabled: true
          LogGroupName: { Ref: LogGroup }
          LogStreamName: { Ref: AtmosphericRedshiftLogStream }
        S3Configuration:
          BucketARN: { Fn::GetAtt: [ serverlessKinesisFirehoseBucket, Arn ] }
          Prefix: people/
          BufferingHints:
            IntervalInSeconds: 60
            SizeInMBs: 1
          CompressionFormat: UNCOMPRESSED
          CloudWatchLoggingOptions:
            Enabled: true
            LogGroupName: { Ref: LogGroup }
            LogStreamName: { Ref: AtmosphericS3LogStream }
          RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
        ProcessingConfiguration:
          Enabled: true
          Processors:
            - Parameters:
                - ParameterName: LambdaArn
                  ParameterValue: { Fn::GetAtt: [ PeopleEtlLambdaFunction, Arn ] }
                - ParameterName: BufferIntervalInSeconds
                  ParameterValue: "60"
                - ParameterName: BufferSizeInMBs
                  ParameterValue: "1"
                - ParameterName: NumberOfRetries
                  ParameterValue: "2"
              Type: Lambda
  RedisCluster:
    Type: 'AWS::ElastiCache::ReplicationGroup'
    Properties:
      AutoMinorVersionUpgrade: true
      ReplicationGroupId: "${self:custom.secrets.REDIS_CACHE_CLUSTER_NAME}"
      ReplicationGroupDescription: "${self:custom.secrets.REDIS_CACHE_CLUSTER_NAME}"
      CacheNodeType: cache.t4g.micro
      Engine: redis
      ReplicasPerNodeGroup: 3
      NumNodeGroups: 1
      EngineVersion: '7.0'
      MultiAZEnabled: true
      AutomaticFailoverEnabled: true
      PreferredMaintenanceWindow: 'sat:01:45-sat:04:45'
      SnapshotRetentionLimit: 4
      SnapshotWindow: '00:30-01:30'
      CacheSubnetGroupName: mm-vpc-cache
      SecurityGroupIds:
        - sg-07663c145bf3feb84
        - sg-0d7ec27d8c3e59a5f
```
It fails with the error message:

```
Error: CREATE_FAILED: serverlessKinesisFirehoseBucket (AWS::S3::Bucket) sensor-processor-v3-dev already exists
```
I investigated the issue and found that S3 bucket names must be unique across all AWS accounts and regions.
- Question 1: I haven't changed anything about the S3 bucket. Should I still provision it? I don't want to rename my existing S3 bucket; I want to keep using it.
I adjusted the S3 bucket name to get past that error while testing. However, the deployment still fails, this time with:

```
Error: CREATE_FAILED: RedisCluster (AWS::ElastiCache::ReplicationGroup) Cache subnet group 'mm-vpc-cache' does not exist. (Service: AmazonElastiCache; Status Code: 400; Error Code: CacheSubnetGroupNotFoundFault; Request ID: 2cbfadb2-8086-4ce8-ae61-1d75dcaaa1aa; Proxy: null)
```
- Question 2: Is my `resources.yml` file out of date and no longer synchronised with my AWS resources?
Solution:
Question 1: You don't need to rename anything, but you do need to stop CloudFormation from trying to create a bucket that already exists. Update the `serverlessKinesisFirehoseBucket` resource in your `resources.yml` file: instead of declaring a new bucket, reference your existing S3 bucket directly, using its ARN for the `BucketARN` property in the `S3Configuration` block of each Firehose delivery stream's `RedshiftDestinationConfiguration`.
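As a minimal sketch of that change (assuming the existing bucket is the one named `${self:service}-${self:provider.stage}`, i.e. `sensor-processor-v3-dev`, and that the Firehose role's S3 permissions already cover it), you would delete the `serverlessKinesisFirehoseBucket` resource and hard-code the ARN:

```yaml
# resources.yml (sketch): the serverlessKinesisFirehoseBucket resource has been
# removed from Resources, so CloudFormation no longer tries to create the bucket.
# The delivery stream points at the pre-existing bucket by ARN instead of using
# Fn::GetAtt on a resource this stack owns.
firehose:
  Type: AWS::KinesisFirehose::DeliveryStream
  Properties:
    DeliveryStreamName: ${self:service}-${self:provider.stage}
    DeliveryStreamType: DirectPut
    RedshiftDestinationConfiguration:
      # ...other properties unchanged...
      S3Configuration:
        BucketARN: arn:aws:s3:::${self:service}-${self:provider.stage}
        # ...buffering, logging and RoleARN unchanged...
```

Apply the same substitution in `AtmosphericFirehose` and `PeopleFirehose`; the `# ...unchanged...` placeholders stand in for the properties you already have, so only `BucketARN` actually changes.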
Regarding the `RedisCluster` error: the cache subnet group `mm-vpc-cache` is referenced by name but never created by this stack. Make sure a subnet group with that exact name exists in your AWS account, in the region you deploy to (eu-central-1). If it doesn't, create it with the appropriate subnets before deploying, or add it to the stack as shown below.
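If you want the stack to own the subnet group, here is a sketch of the extra resource, assuming the three subnets already used by your VPC-attached functions are the ones the cluster should live in:

```yaml
# resources.yml (sketch): create the cache subnet group alongside the cluster.
RedisSubnetGroup:
  Type: AWS::ElastiCache::SubnetGroup
  Properties:
    CacheSubnetGroupName: mm-vpc-cache
    Description: Cache subnet group for the sensor-processor Redis cluster
    SubnetIds:
      - subnet-093295e049fd0b192
      - subnet-0b4dd59bec892f1b5
      - subnet-0ba4e03f8d83d5cd4
```

Then point the cluster at it with `CacheSubnetGroupName: { Ref: RedisSubnetGroup }`, which also makes CloudFormation create the group before the replication group.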
Question 2: Quite possibly. The definitions in your `resources.yml` must match the actual resources in your AWS environment; if you've changed resources directly in the AWS console or by other means, `resources.yml` may no longer reflect them. To troubleshoot, compare the definitions in `resources.yml` with the resources shown in the AWS console and look for discrepancies (the CLI sketch below can help). If you've made changes outside the Serverless Framework, update `resources.yml` to reflect the correct configuration.
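A sketch of how you might check this from the CLI, assuming the stack follows the Serverless Framework's default `<service>-<stage>` naming (i.e. `sensor-processor-v3-dev`):

```sh
# List what the deployed stack actually contains
aws cloudformation describe-stack-resources \
  --stack-name sensor-processor-v3-dev --region eu-central-1

# Start drift detection, then inspect the per-resource drift results
aws cloudformation detect-stack-drift \
  --stack-name sensor-processor-v3-dev --region eu-central-1
aws cloudformation describe-stack-resource-drifts \
  --stack-name sensor-processor-v3-dev --region eu-central-1
```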
Additionally, the `Cache subnet group 'mm-vpc-cache' does not exist` error points specifically at the referenced cache subnet group: confirm that `mm-vpc-cache` exists in your AWS environment (and in eu-central-1) and that its name is spelled correctly in your `resources.yml` file.
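A quick way to verify that from the CLI:

```sh
# Fails with CacheSubnetGroupNotFoundFault if the group is missing
aws elasticache describe-cache-subnet-groups \
  --cache-subnet-group-name mm-vpc-cache --region eu-central-1
```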