We can push to S3 and invalidate the cache with the API:

```shell
aws s3 sync . s3://mybucketname
aws cloudfront create-invalidation --distribution-id id --paths "/*"
```
but why run two commands when one will do? Let's create a Lambda function that triggers on object-creation events in the S3 bucket hosting our website.
Create the Lambda Function
Go to Lambda in the console and create your function. In this example I used Python 3.7 and the inline code entry type. In the designer, select S3 and the S3 bucket you will use as the trigger. Next, select the event type; in my case I'm using "All object create events" in order to flush the cache every time I update my site. The Lambda function will look like the code below, and you will enter your CloudFront distribution ID as an environment variable in the section below the inline function code.
```python
import os
import boto3
import time

def lambda_handler(event, context):
    client = boto3.client('cloudfront')
    dist_id = os.environ['dist_id']
    invalidation = client.create_invalidation(
        DistributionId=dist_id,
        InvalidationBatch={
            'Paths': {
                'Quantity': 1,
                'Items': ['/*']
            },
            'CallerReference': str(time.time())
        })
```
In the Lambda function, we grab the CloudFront distribution ID from the environment variable and use "/*" in the items list to invalidate every object in the distribution. 'CallerReference' is required to uniquely identify the invalidation request, so we use time.time() to generate a unique string.
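To make the payload shape concrete, here is a minimal sketch of the InvalidationBatch dictionary built outside of Lambda; invalidation_batch is a hypothetical helper name, not part of the AWS SDK:

```python
import time

def invalidation_batch(paths):
    # 'Quantity' must equal the number of entries in 'Items', and
    # 'CallerReference' must be unique per request, hence the timestamp.
    return {
        'Paths': {'Quantity': len(paths), 'Items': paths},
        'CallerReference': str(time.time()),
    }

print(invalidation_batch(['/*'])['Paths'])  # {'Quantity': 1, 'Items': ['/*']}
```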
https://docs.python.org/3/library/time.html
Take a look at the CloudFront CLI and Boto3 documentation for more details.
If you want to invalidate a specific object or objects, rather than the entire distribution, take a look at the S3 Event Message Structure documentation, the AWS Lambda with Amazon S3 document, and the Lambda with S3 example doc. An S3 PUT request generates a JSON message in the following format, which you can filter in your Lambda function.
```json
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "us-west-2",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AIDAJDPLRKLG7UEXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "C3D13FE58DE4C810",
        "x-amz-id-2": "FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "mybucket",
          "ownerIdentity": {
            "principalId": "A3NL1KOZZKExample"
          },
          "arn": "arn:aws:s3:::mybucket"
        },
        "object": {
          "key": "HappyFace.jpg",
          "size": 1024,
          "eTag": "d41d8cd98f00b204e9800998ecf8427e",
          "versionId": "096fKKXTRTtl3on89fVO.nfljtsv6qko",
          "sequencer": "0055AED6DCD90281E5"
        }
      }
    }
  ]
}
```
Your Lambda function to invalidate only the specific object or objects from the PUT event will look something like this.
```python
import os
import boto3
import time

def lambda_handler(event, context):
    # Collect one invalidation path per object in the event
    paths = []
    for record in event['Records']:
        paths.append("/" + record["s3"]["object"]["key"])
    print(paths)
    client = boto3.client('cloudfront')
    dist_id = os.environ['dist_id']
    invalidation = client.create_invalidation(
        DistributionId=dist_id,
        InvalidationBatch={
            'Paths': {
                'Quantity': len(paths),
                'Items': paths
            },
            'CallerReference': str(time.time())
        })
```
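One caveat: S3 URL-encodes object keys in the event message (a space arrives as "+", for example), so keys should be decoded before being handed to CloudFront. A minimal sketch using the standard library; paths_from_event is a hypothetical helper name:

```python
from urllib.parse import unquote_plus

def paths_from_event(event):
    # Decode each URL-encoded key and prefix it with "/" to form an
    # invalidation path.
    return ["/" + unquote_plus(r["s3"]["object"]["key"]) for r in event["Records"]]

event = {"Records": [{"s3": {"object": {"key": "photos/Happy+Face.jpg"}}}]}
print(paths_from_event(event))  # ['/photos/Happy Face.jpg']
```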
Lastly, you'll want to create an execution role for the Lambda function with the following policy attached to it.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudfront:CreateInvalidation"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
```
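The policy above covers logging and invalidation permissions; the role also needs a trust policy allowing the Lambda service to assume it (the console generates this for you if you let it create the role). A standard trust policy looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```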
That's all there is to it! Now you can

```shell
aws s3 sync . s3://yourbucketname
```

and your CloudFront distribution will invalidate automatically.