AWS EC2: log user data output to CloudWatch Logs

I’m doing pre-processing tasks using EC2.

I execute shell commands via the user data script. The last line of my user data is sudo shutdown now -h, so the instance terminates automatically once the pre-processing task completes.

This is what my code looks like:


import boto3


userdata = '''#!/bin/bash
pip3 install boto3 pandas scikit-learn
aws s3 cp s3://.../main.py .
python3 main.py
sudo shutdown now -h
'''


def launch_ec2():
    ec2 = boto3.resource('ec2',
                         aws_access_key_id="", 
                         aws_secret_access_key="",
                         region_name='us-east-1')
    instances = ec2.create_instances(
        ImageId='ami-0c02fb55956c7d316',
        MinCount=1,
        MaxCount=1,
        KeyName='',
        InstanceInitiatedShutdownBehavior='terminate',
        IamInstanceProfile={'Name': 'S3fullaccess'},
        InstanceType='m6i.4xlarge', 
        UserData=userdata,
        InstanceMarketOptions={
            'MarketType': 'spot',
            'SpotOptions': {
                'SpotInstanceType': 'one-time',
            }
        }
    )
    print(instances)


launch_ec2()

The problem is that sometimes, when there is an error in my Python script, the script dies and the instance gets terminated.

Is there a way I can collect error/info logs and send them to CloudWatch before the instance gets terminated? That way, I would know what went wrong.

Solution:

You can achieve the desired behavior by leveraging shell functionality: create a log file covering the entire execution of the user data, and use trap to make sure the log file is copied to S3 before the instance terminates if an error occurs.

Here’s how it could look:

#!/bin/bash -xe
exec &>> /tmp/userdata_execution.log

upload_log() {
  aws s3 cp /tmp/userdata_execution.log s3://... # use a bucket of your choosing here
}

trap 'upload_log' ERR

pip3 install boto3 pandas scikit-learn
aws s3 cp s3://.../main.py .
python3 main.py
sudo shutdown now -h

A log file (/tmp/userdata_execution.log) containing both stdout and stderr will be generated for the user data; if an error occurs during execution, the log file will be uploaded to the S3 bucket.
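Since the question asks for CloudWatch specifically, the same trap pattern can push the log to CloudWatch Logs with the `aws logs` CLI instead of S3. A sketch, assuming the instance profile also grants CloudWatch Logs permissions; the log group and stream names here are hypothetical placeholders:

```shell
#!/bin/bash -xe
LOG_FILE=/tmp/userdata_execution.log
LOG_GROUP=ec2-preprocessing          # hypothetical name -- pick your own
LOG_STREAM="run-$(date +%s)"

exec &>> "$LOG_FILE"

push_log_to_cloudwatch() {
  # Create the group/stream if they don't exist yet (ignore failures if they do).
  aws logs create-log-group  --log-group-name "$LOG_GROUP" || true
  aws logs create-log-stream --log-group-name "$LOG_GROUP" \
      --log-stream-name "$LOG_STREAM" || true
  # Build the event list as JSON so newlines in the log survive quoting,
  # then push the whole log file as a single event.
  python3 - "$LOG_FILE" <<'EOF' > /tmp/log_events.json
import json, sys, time
message = open(sys.argv[1]).read()
print(json.dumps([{"timestamp": int(time.time() * 1000), "message": message}]))
EOF
  aws logs put-log-events --log-group-name "$LOG_GROUP" \
      --log-stream-name "$LOG_STREAM" \
      --log-events file:///tmp/log_events.json
}

trap 'push_log_to_cloudwatch' ERR

pip3 install boto3 pandas scikit-learn
aws s3 cp s3://.../main.py .
python3 main.py
sudo shutdown now -h
```

For anything longer-running than a one-shot job, the CloudWatch agent is the more robust option, since it streams the log continuously rather than only on failure.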
