Use AWS MediaConvert to reduce your videos' size
Our customers have access to a dashboard where they can create their own content to share with their learners through our mobile app. While building the content editor, we had to handle media (especially videos) and figure out how to optimize them. Let’s talk about how we tackled this issue with a simple, fast and scalable infrastructure.
AWS Services
First of all, a quick word about our current backend. We have a NodeJS API running on EC2 instance(s). Our customers use CKEditor 5 to create their content, and we built a custom upload adapter that works with signed URLs (with AWS S3), along the lines of the sketch below.
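For context, here is a minimal sketch of how the API side of such an upload could look; it assumes the AWS SDK v2 and a BUCKET environment variable, and the function name is purely illustrative.

```javascript
// Minimal sketch: the API returns a presigned PUT URL and the editor's upload
// adapter sends the file straight to S3 with it. Names here are illustrative.
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

function getUploadUrl(key, contentType) {
  return s3.getSignedUrlPromise('putObject', {
    Bucket: process.env.BUCKET, // the bucket we create below
    Key: key,
    ContentType: contentType,
    Expires: 300, // the URL stays valid for 5 minutes
  });
}

module.exports = { getUploadUrl };
```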
Now let’s see what services we’re going to use:
- S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance
- Elemental MediaConvert is a file-based video transcoding service with broadcast-grade features. The service combines advanced video and audio capabilities with a simple web services interface and pay-as-you-go pricing.
- Lambda lets you run code without provisioning or managing servers.
- SNS (Simple Notification Service) is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication.
AWS S3
If you don’t have an S3 bucket yet, we’ll create one now; it’s mandatory for the next steps.
1. Go to the AWS S3 dashboard and select “Create bucket” in the top right corner. Add a name and a region, and enable “Block all public access”. (For testing purposes you can allow all public access if you want.)
AWS Elemental MediaConvert
1. Go to the AWS Elemental MediaConvert dashboard and click on “Job Templates” > “Create template.” Fill in whatever you want for the general information, but pick a simple and descriptive name such as MediumConvertMedia.
2. Add a job input with the “Add” button next to “Inputs.”
There are a lot of potentially overwhelming settings here, but leave them as they are for the moment; you can always update them later if you need to.
3. Add an output group with the “Add” button next to “Output groups.”
Here we’ll choose “File Group” to produce an MP4 output. You have a variety of choices, such as Apple HLS, which is well supported on iOS and Android but not so much on Chrome or Firefox.
4. The next screen is where we’ll set up our outputs. Add a name and set the “destination” to our S3 bucket. Don’t worry, we’ll override these settings later to store our converted videos in the right place.
The name modifier is important: you have to set it to something, otherwise MediaConvert would write the output under the same name as the input. I’ve chosen “-converted” as a simple modifier.
5. The next step is to click on the “H.264, AAC, -converted” line below “File Group”. This screen is where you can set up the output: frame rate, extension, sharpness, bitrate, etc.
To be honest, I don’t have a clue what 95 percent of these settings do; I had to call our video editor to help me figure out which ones to tweak to get a sharp, lightweight video.
Amazon also offers presets with different definitions, extensions, sizes, etc. You can choose one of them and see the results.
6. Another neat feature is generating a poster for your video. We use it a lot so our end users see a nice poster instead of a white screen. Add a new “Output Group” like this one:
7. Click on the group below, which is “H.264, AAC, -poster” in our case. Don’t select any preset and leave it at “No container.”
Remove the default audio encoding settings and, for the video, click on “Video Codec” and choose “Frame Capture to JPEG.” Set the resolution to 1280 x 720. Don’t worry, we’ll override these settings later.
Finally you can save your new template.
AWS Lambda
Now we need to link our S3 bucket with a Lambda function in order to trigger our newly created job template.
1. Go to the AWS Lambda dashboard, click on “Create function” and enter the following information.
2. Once your function is created, we need to update the IAM role so it can fetch our AWS MediaConvert job template. Go to “Permissions” and click on your role, here it’s “MediumMediaConvert-role-076wf1n”.
3. On the IAM console click on “Attach policies” and then “Create policy”.
4. Create a new policy with the following statements (these are not production ready; you’ll want to make them less permissive).
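As a sketch, a policy along these lines lets the Lambda read the job template, create jobs and pass the MediaConvert role; MEDIACONVERT_ROLE_NAME is a placeholder for whatever your MediaConvert role is called.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadJobTemplate",
      "Effect": "Allow",
      "Action": "mediaconvert:GetJobTemplate",
      "Resource": "arn:aws:mediaconvert:REGION:ACCOUNT_ID:jobTemplates/MEDIACONVERT_TEMPLATE_NAME"
    },
    {
      "Sid": "CreateJobs",
      "Effect": "Allow",
      "Action": ["mediaconvert:CreateJob", "mediaconvert:DescribeEndpoints"],
      "Resource": "*"
    },
    {
      "Sid": "PassRoleToMediaConvert",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::ACCOUNT_ID:role/MEDIACONVERT_ROLE_NAME"
    }
  ]
}
```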
Replace REGION, ACCOUNT_ID and MEDIACONVERT_TEMPLATE_NAME with your data.
5. Attach the new policy to the lambda’s role and you’re good to go for the IAM part.
6. Now that our role is set up, we need to add an S3 trigger to start our Lambda and use our MediaConvert template. Go to your Lambda and, under “Configuration”, choose “Add trigger”.
7. Add a new S3 trigger and select the appropriate event type for your project (for testing purposes you can select all event types). Prefix and suffix can be left blank; we’ll avoid recursive calls in our Lambda code instead. Don’t forget to set the S3 bucket to your bucket, and we’re done with the trigger.
Now if we look at our architecture diagram, we can see that we’ve built the whole right-hand part: an S3 bucket, a MediaConvert template and a Lambda. Now let’s see how we can publish messages about the job’s status to our API.
CloudWatch and SNS
I know I didn’t mention CloudWatch at the beginning because, as you’ll see, it’s a really simple part. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. In our case we’ll use it to publish to an SNS topic whenever there is a MediaConvert event.
Before tackling the CloudWatch side, let’s create our SNS Topic.
1. Go to the SNS dashboard and click on “Create Topic”. Add a new topic with the following settings:
The little trick here is to allow your future CloudWatch rule to publish to our SNS topic. In the Access Policy section we need to adjust the default policy with a statement like this:
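Roughly, the extra statement lets the events.amazonaws.com service principal publish to the topic; REGION, ACCOUNT_ID and TOPIC_NAME are placeholders for your own values.

```json
{
  "Sid": "AllowCloudWatchEventsToPublish",
  "Effect": "Allow",
  "Principal": { "Service": "events.amazonaws.com" },
  "Action": "sns:Publish",
  "Resource": "arn:aws:sns:REGION:ACCOUNT_ID:TOPIC_NAME"
}
```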
2. Once your topic is created you need to create a new subscription. Go to your topic and click on “New subscription”.
I use an ngrok endpoint so that I can inspect the request payload. You’ll need to see the payload in order to confirm the subscription.
3. After the subscription is created you’ll get a request to your endpoint with a JSON Object containing a SubscribeURL key.
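The confirmation request looks roughly like this (most values trimmed; the real payload also carries signature fields):

```json
{
  "Type": "SubscriptionConfirmation",
  "MessageId": "...",
  "Token": "...",
  "TopicArn": "arn:aws:sns:REGION:ACCOUNT_ID:TOPIC_NAME",
  "Message": "You have chosen to subscribe to the topic ...",
  "SubscribeURL": "https://sns.REGION.amazonaws.com/?Action=ConfirmSubscription&TopicArn=...&Token=...",
  "Timestamp": "..."
}
```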
Copy the SubscribeURL, then on your subscriptions dashboard select the pending confirmation and click on “Confirm subscription”.
Great! Our SNS topic is ready to be used! Now let’s see how we can connect our topic to our MediaConvert job.
CloudWatch
1. Go to the CloudWatch dashboard, click on Events > Rules and select “Create Rule”.
2. Now we can set our rule to be triggered on each MediaConvert change. Set the following configuration (the matching event pattern is sketched after the list):
- Event Source: Event Pattern
- Service Name: MediaConvert
- Event Type: Media Convert Job State Change
- Specific States: Here you can choose all states but to avoid too much data I’ve chosen “COMPLETE”, “PROGRESSING” and “ERROR”.
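If you’d rather check or edit the raw event pattern, with those three states it should look roughly like this:

```json
{
  "source": ["aws.mediaconvert"],
  "detail-type": ["MediaConvert Job State Change"],
  "detail": {
    "status": ["COMPLETE", "PROGRESSING", "ERROR"]
  }
}
```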
3. Add a new Target, choose SNS Topic and select your topic created previously.
4. You can save your rule and everything is now ready to be tested!
Let’s code
Our infrastructure is ready, so now let’s write our Lambda function. We’ll do a basic function just to test our MediaConvert job.
Before showing you the actual code (which is simple), let’s create four environment variables in your Lambda:
- bucket (the bucket created previously)
- endpoint (the MediaConvert endpoint which you can get in the MediaConvert console > Account),
- jobTemplate (the MediaConvert job template name)
- mediaConvertRole (which is the ARN of the role created previously).
Now that everything is set up, here’s the code:
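Here is a minimal sketch of such a handler; it assumes the AWS SDK v2 bundled with the Node.js Lambda runtime, the four environment variables listed above, and the “-converted” / “-poster” name modifiers from the template (the recursion guard is only illustrative).

```javascript
// Minimal sketch of the Lambda handler. Assumes the AWS SDK v2 that ships with
// the Node.js runtime and the environment variables described above.
const AWS = require('aws-sdk');

// MediaConvert needs the account-specific endpoint (MediaConvert console > Account).
const mediaConvert = new AWS.MediaConvert({ endpoint: process.env.endpoint });

exports.handler = async (event) => {
  const record = event.Records[0];
  const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

  // Converted files and posters land in the same bucket, so skip them
  // to avoid triggering the Lambda recursively.
  if (key.includes('-converted') || key.includes('-poster')) {
    return { statusCode: 200, body: 'Output file, skipping.' };
  }

  const job = await mediaConvert
    .createJob({
      JobTemplate: process.env.jobTemplate,
      Role: process.env.mediaConvertRole,
      Settings: {
        // The template provides the output settings; we only point it at the new file.
        // Output destinations can also be overridden here if needed.
        Inputs: [{ FileInput: `s3://${process.env.bucket}/${key}` }],
      },
    })
    .promise();

  console.log('MediaConvert job created:', job.Job.Id);
  return { statusCode: 200, body: JSON.stringify({ jobId: job.Job.Id }) };
};
```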
Now all we have to do is test our architecture by uploading a video to our S3 bucket! You should see a nice decrease in your video’s size; otherwise, you can tweak your MediaConvert template.
Monitor your API to see the status of your job and the associated data.
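On the API side, a minimal sketch of the endpoint that receives the SNS messages could look like this, assuming an Express app; the route name and what you do with the status are placeholders.

```javascript
// Minimal sketch of the API endpoint that receives SNS messages. The route
// name and the persistence step are placeholders for your own application.
const express = require('express');
const https = require('https');

const app = express();
// SNS posts its JSON payload with a text content type, so read the raw body.
app.use(express.text({ type: '*/*' }));

app.post('/webhooks/mediaconvert', (req, res) => {
  const body = JSON.parse(req.body);

  if (body.Type === 'SubscriptionConfirmation') {
    // Visit the SubscribeURL to confirm the subscription (or confirm it from the console).
    https.get(body.SubscribeURL);
  } else if (body.Type === 'Notification') {
    const message = JSON.parse(body.Message);
    // message.detail.status is PROGRESSING, COMPLETE or ERROR.
    console.log(`Job ${message.detail.jobId} is ${message.detail.status}`);
    // Update the matching media record in your database here.
  }

  res.sendStatus(200);
});

app.listen(3000);
```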
What if I have an error?
If your job fails, there are multiple places you can look to debug:
- CloudWatch: useful to see whether your Lambda failed
- MediaConvert jobs: go to your MediaConvert dashboard > Jobs; you’ll find information about why the job failed.