Build your own platform for HLS live stream monitoring

Here is another full example of a solution built on open web services in Eyevinn Open Source Cloud. This example covers building your own platform for HLS live stream monitoring. The solution consists of an HLS Stream Monitor that monitors one or many HLS live streams for errors. It provides an OpenMetrics endpoint that Prometheus can scrape, and the metrics can be visualized in Grafana. To manage which streams to monitor we have a database and a service to add or remove streams from the stream monitor.

Requires 3 available services in your plan. If you have no available services in your plan you can purchase each service individually or upgrade your plan.

HLS Stream Monitor

The open web service responsible for monitoring the live streams is the HLS Stream Monitor. It provides an API to manage running monitors, and a monitor can check one or many HLS streams. It also provides an OpenMetrics endpoint that can be scraped by metrics collectors such as Prometheus.

To enable access to the monitor instance from outside Eyevinn Open Source Cloud we launch a Basic Auth adapter running in a Web Runner. This provides Basic Auth authentication for access to the instance and its metrics endpoint.

Stream Monitor Manager

To manage which streams to monitor, and to control the HLS Stream Monitor, we have an application running in a Web Runner that reads the list of streams to monitor from a CouchDB NoSQL database.
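
As a rough illustration, here is a minimal sketch of what such a manager could look like: it reads the stream URLs from CouchDB and pushes them to the monitor. The monitor URL, CouchDB URL, database name, document field, Basic Auth credentials and the POST /monitor endpoint shape are all assumptions in this sketch; consult the HLS Stream Monitor API documentation for the exact contract.

import nano from 'nano';

// Assumed URLs and credentials for this sketch.
const MONITOR_URL = 'https://demo.eyevinn-hls-monitor.auto.prod.osaas.io';
const couch = nano('https://admin:password@demo-db.couchdb.auto.prod.osaas.io');
const streamsDb = couch.db.use('streams');

async function syncMonitor() {
  // Read the list of stream URLs to monitor from CouchDB.
  const result = await streamsDb.list({ include_docs: true });
  const streams = result.rows.map((row) => (row.doc as any).url);

  // Tell the monitor which streams to watch, going through the Basic Auth adapter.
  const response = await fetch(`${MONITOR_URL}/monitor`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Basic ' + Buffer.from('admin:password').toString('base64')
    },
    body: JSON.stringify({ streams })
  });
  console.log(`Monitor responded with status ${response.status}`);
}

syncMonitor().catch(console.error);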

Start building here

Build your own platform for virtual channels

Here is a full example project to get you started with building your own platform for virtual channels based on open web services in Eyevinn Open Source Cloud. This solution consists of a virtual channel playout and a simple web application fetching configuration from an application configuration service.

Requires 5 available services in your plan. If you have no available services in your plan you can purchase each service individually or upgrade your plan.

Virtual Channel Playout

The virtual channel playout is built with the open web services:

  • FAST Channel Engine for generating the live streaming manifest and serving it to the player.
  • Web Runner to provide the webhook that the engine calls to decide what to play next in the channel (see the sketch after this list).
  • CouchDB for storing the database of assets and URLs to the VOD streaming packages.
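
A minimal sketch of the webhook part could look like the following: a Web Runner application that picks the next asset from CouchDB. The CouchDB URL, database name, document fields and the exact webhook contract (here assumed to be a GET endpoint returning id, title and hlsUrl) are assumptions for this sketch; consult the FAST Channel Engine documentation for the exact interface.

import express from 'express';
import nano from 'nano';

// Assumed CouchDB instance URL and database name.
const couch = nano('https://admin:password@demo-db.couchdb.auto.prod.osaas.io');
const assets = couch.db.use('assets');

const app = express();

// The engine calls this endpoint when it needs to decide what to play next.
app.get('/nextVod', async (req, res) => {
  const channelId = req.query.channelId; // which channel is asking (assumed parameter)
  const result = await assets.list({ include_docs: true });
  const docs = result.rows.map((row) => row.doc as any);
  // Pick a random asset; a real implementation could follow a schedule instead.
  const next = docs[Math.floor(Math.random() * docs.length)];
  res.json({
    id: next._id,
    title: next.title, // assumed document field
    hlsUrl: next.url // assumed document field: URL to the VOD streaming package
  });
});

app.listen(parseInt(process.env.PORT || '8080'));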

Web Video Application

The web video application is a Next.js based web application that reads the channel configuration from an application configuration service and provides the player to view the channel.

Start building here

Build your own Video Streaming Platform

Here is a full example project to get you started with building your own video streaming platform based on open web services in Eyevinn Open Source Cloud. This solution consists of a VOD preparation pipeline, orchestrator, database and a simple web application.

Requires 7 available services in your plan. If you have no available services in your plan you can purchase each service individually or upgrade your plan.

VOD Preparation Pipeline

The VOD preparation pipeline is built with the open web services:

  • SVT Encore for transcoding the source video file into a bundle of video files with different resolutions and qualities, often referred to as ABR transcoding.
  • Encore Packager to create a streaming package that is adapted for video delivery over HTTP.
  • MinIO providing the storage buckets that are needed.

Orchestrator

The orchestrator consumes events from the input bucket and creates a VOD preparation job when a new file is added. It is a NodeJS server application that we will develop and deploy in a Web Runner instance. The orchestrator will register all processed files in a database.
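
A minimal sketch of such an orchestrator, using the MinIO client's bucket notifications together with the Eyevinn OSC TypeScript SDK, could look like the following. The endpoint, bucket name, pipeline name and credentials are placeholder assumptions, and the database registration step is left as a comment.

import * as Minio from 'minio';
import { Context } from '@osaas/client-core';
import { createVod, createVodPipeline } from '@osaas/client-transcode';

// Endpoint, bucket name and credentials below are placeholder assumptions.
const minio = new Minio.Client({
  endPoint: 'demo-input.minio-minio.auto.prod.osaas.io',
  useSSL: true,
  accessKey: process.env.MINIO_ACCESS_KEY!,
  secretKey: process.env.MINIO_SECRET_KEY!
});
const ctx = new Context();

// Listen for new files in the input bucket and start a VOD preparation job.
const listener = minio.listenBucketNotification('input', '', '.mp4', ['s3:ObjectCreated:*']);
listener.on('notification', async (record: any) => {
  const key = record.s3.object.key;
  console.log(`New file uploaded: ${key}`);
  const pipeline = await createVodPipeline('vodpipeline', ctx);
  const job = await createVod(
    pipeline,
    `https://demo-input.minio-minio.auto.prod.osaas.io/input/${key}`,
    ctx
  );
  if (job) {
    // Here the full solution would also register the processed file in the database.
    console.log(`VOD will be available at: ${job.vodUrl}`);
  }
});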

Web Video Application

The web video application is a Next.js based web application that will fetch the available files from the database and enable playback using a web video player.

Start building here

MinIO Storage as VOD Origin

As a continuation of a previous blog post, where we described how to get started with MinIO storage in Eyevinn Open Source Cloud, this post walks you through how to use it as an origin for Video On-Demand distribution.

Why Open Source Cloud as VOD Origin?

Using an open web service based on open source, you are not locked in to a specific vendor, and you have the option to run the very same code in your own infrastructure or cloud.

We will not cover how to create video on demand files in this blog post as it is covered in detail in the Eyevinn Open Source Cloud documentation.

Create an account for free at app.osaas.io and create your tenant. If you already have access to Eyevinn Open Source Cloud you can skip this step.

Step 1: Create a MinIO bucket

Start by creating a MinIO bucket in Eyevinn Open Source Cloud by following the instructions in the documentation. By following this guide you should now have a bucket called “tutorial”.

Step 2: Enable public access to bucket

For a video player to be able to download the Video On-Demand files we need to enable public read-only access for the bucket. If you followed the guide you will have an alias to your MinIO server instance called “guide”, and using the MinIO command line tool you enable public access with the following command.

% mc anonymous set download guide/tutorial

Step 3: Upload VOD packages to bucket

Now let us upload VOD packages to this bucket. There are several options available here:

  • Setup a VOD creation pipeline in Eyevinn Open Source Cloud to create a VOD package from a video file.
  • Upload existing VOD packages on your computer to this bucket.
  • Migrate VOD packages from another origin using the HLS Copy to S3 service in Eyevinn Open Source Cloud.

In this walk-through we will use the “HLS Copy to S3” service to copy an HLS package we have available online to the bucket you created.

Navigate to the HLS Copy to S3 service and click on the button “Create Job”. Enter the following in the job creation dialog.

  • Name: guide
  • CmdLineArgs: https://maitv-vod.lab.eyevinn.technology/VINN.mp4/master.m3u8 s3://tutorial/
  • DestAccessKey: root
  • DestSecretKey: abC12345678
  • DestEndpoint: (MinIO server endpoint)

Press “Create” and wait for the job to complete.

Let us now verify that all files ended up in our bucket. We can use the MinIO command line tool or the AWS S3 client.

% mc ls guide/tutorial/VINN.mp4/
[2025-01-15 13:51:33 CET]   351B STANDARD master.m3u8
[2025-01-15 13:51:58 CET]     0B 1000/
[2025-01-15 13:51:58 CET]     0B 2000/
[2025-01-15 13:51:58 CET]     0B 600/

Step 4: Verify VOD package

We can now verify that the VOD package can be played. Open a web browser, go to our online web player at https://web.player.eyevinn.technology/ and enter the URL to the index file; in our example it is https://demo-guide.minio-minio.auto.prod.osaas.io/tutorial/VINN.mp4/master.m3u8
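
If you want to verify playback programmatically instead, a minimal sketch using the hls.js library (the playlist URL is the one from this example; the page is assumed to contain a video element) could look like:

import Hls from 'hls.js';

const video = document.querySelector('video') as HTMLVideoElement;
const src =
  'https://demo-guide.minio-minio.auto.prod.osaas.io/tutorial/VINN.mp4/master.m3u8';

if (Hls.isSupported()) {
  // Use hls.js for browsers without native HLS support.
  const hls = new Hls();
  hls.loadSource(src);
  hls.attachMedia(video);
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // Safari plays HLS natively.
  video.src = src;
}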

Step 5: Configure CDN

To handle the distribution of these VOD files you need to set up a CDN that your users go through to access the files. Pointing your users directly to the origin is not recommended, as the origin is not designed to handle requests at large scale. For performance and security you will use a CDN provider for the delivery.
When you set up your distribution property at your CDN provider you will use the following:

  • Origin: Your MinIO instance hostname, e.g. demo-guide.minio-minio.auto.prod.osaas.io
  • Protocol: HTTPS
  • Port: 443
  • Origin Host Header: e.g. demo-guide.minio-minio.auto.prod.osaas.io

The important thing here is that the Host header in the HTTPS request to the origin is the hostname of the MinIO storage instance and not the hostname in the viewer request. Consult your CDN provider's documentation on how to configure this.
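
As a quick sanity check of this behavior you can make a request directly to the origin with the Host header set explicitly; a minimal Node.js sketch, assuming the example hostname from above:

import https from 'node:https';

// Request the master playlist with the Host header the CDN is expected to send.
https.get(
  {
    host: 'demo-guide.minio-minio.auto.prod.osaas.io',
    path: '/tutorial/VINN.mp4/master.m3u8',
    headers: { Host: 'demo-guide.minio-minio.auto.prod.osaas.io' }
  },
  (res) => {
    console.log(`Origin responded with status ${res.statusCode}`);
    res.resume(); // discard the response body
  }
);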

Conclusion

With the open web service providing origin functionality in Eyevinn Open Source Cloud you always have the option to run the same solution on your own premises, as it is based on open source. You can create one MinIO instance, including 50 GB of storage, for free to try this out.

Can Claude create a VOD streaming package for you?

The question in the title is of course a bit rhetorical. Of course Claude can. In this post I am going to describe how that works and how you can try this out.

Claude is an AI assistant built by Anthropic that is trained to have natural, text-based conversations; the first model was released in March 2023. In November 2024 Anthropic released a specification for the Model Context Protocol (MCP), an open protocol that enables seamless integration between LLM applications and external data sources and tools. MCP provides a standardized way to connect LLMs with the context they need.

MCP is a protocol that enables secure connections between host applications, such as Claude Desktop, and local services. Programs like Claude Desktop, IDEs or AI tools access MCP servers: lightweight programs that expose specific capabilities through the standardized Model Context Protocol.

We have developed and open sourced an MCP server for Eyevinn Open Source Cloud. An MCP server provides tools and resources; we currently provide tools for video on-demand streaming, but more will be added by us or, hopefully, by the open source community.

In the demonstration video below I show how I have Claude set up a video on-demand preparation pipeline and create a video on-demand file for streaming from a video file available online.


Install

If you want to try this out yourself you can follow these steps. A prerequisite is that you have an account on Eyevinn Open Source Cloud and at least 6 services available on your plan.

1. Download and install Claude Desktop.
2. In the Eyevinn OSC web console, go to API settings (in Settings > API settings).
3. Copy the Personal Access Token.
4. Add the following to your claude_desktop_config.json:

{
  "mcpServers": {
    "eyevinn-osc": {
      "command": "npx",
      "args": ["-y", "@osaas/mcp-server"],
      "env": {
        "OSC_ACCESS_TOKEN": "YOUR_PERSONAL_ACCESS_TOKEN"
      }
    }
  }
}

5. Restart Claude Desktop.

If everything is correctly installed you should see a hammer icon at the bottom of the chat input.

Now you can ask Claude to create a VOD from a file that you have available online as shown in the video above.

Client SDK

This MCP server uses the TypeScript client SDK for Eyevinn Open Source Cloud. With this SDK you can create and remove instances and automate what you can do in the web console. Here is an example of how to create a VOD package using the client SDK, which is basically what one of the tools currently does.

import { Context, Log } from '@osaas/client-core';
import { createVod, createVodPipeline } from '@osaas/client-transcode';

async function main() {
  // Example values: replace with your own pipeline name and source video URL.
  const name = 'example';
  const source = 'https://example.com/video.mp4';
  const ctx = new Context();

  try {
    Log().info('Creating VOD pipeline');
    const pipeline = await createVodPipeline(name, ctx);
    Log().info('VOD pipeline created, starting job to create VOD');
    const job = await createVod(pipeline, source, ctx);
    if (job) {
      Log().info('Created VOD will be available at: ' + job.vodUrl);
    }
  } catch (err) {
    Log().error(err);
  }
}

main();

This gives you an example of what you can do, and the possibilities are “endless”. It feels as if only creativity stands in the way of what you can do.

Share your ideas either in the comments below or with a contribution to the Eyevinn OSC MCP server that is open source. Be creative!

Simplified access to cloud storages with Eyevinn OSC

There are several options for storing files in the cloud today, and in this blog post we will show how an open source project made available as a service in Eyevinn Open Source Cloud can simplify access to the storage. As an example, we will use Akamai S3 compatible Object Storage as the cloud storage.

Create storage bucket

Ref: https://techdocs.akamai.com/cloud-computing/docs/create-and-manage-buckets

1. Log in to Cloud Manager and select Object Storage from the left menu. If you currently have buckets on your account, they are listed on this page, along with their URL, region, size, and the number of objects (files) they contain.

2. One of the first steps to using Object Storage is to create a bucket. Here’s how to create a bucket using Cloud Manager, though you can also use the Linode CLI, s3cmd, and s4cmd.

3. Navigate to the Object Storage page in Cloud Manager (see View buckets).

4. Click the Create Bucket button to open the Create Bucket panel. If you have not created an access key or a bucket on this account, you are prompted to enable Object Storage.

5. Within the Create Bucket form, add a Label for the new bucket. This label must be unique and should not be used by any other bucket (from any customer) in the selected data center.

6. Choose a Region for the bucket to reside. See the Availability section on the Object Storage Overview page for a list of available regions.

7. Click Submit to create the bucket.

In this example we have created a bucket called “osc-blog” in the data center in Stockholm.

To be able to access the bucket we have created, we need to create an access key. Navigate to the Access Keys tab and press Create Access Key. Give the access key a name; in this case we will limit the access to only the bucket we created.

Copy and store the generated “access key id” and “secret key” as you will use these later.

Setup Cloud Storage Manager

In the Eyevinn Open Source Cloud web console, navigate to the service called Filestash and press “Create filestash”.

Give the service a name, for example “blog” in this case. Click on the instance card once it is in running state. A new page will open in a new tab or browser window. Then enter an administrator password for this Filestash storage manager instance.

In the navigation sidebar on the left click on the item “Backend”. Select S3 as storage backend.

You may remove the others as we will only be using S3 in this example.

For simplicity we will be using the ADMIN authentication middleware. This means that you will log in with the admin password you just created. In practice you might at least want to use HTPASSWD for more granular access control.

Enter the access key id and secret key.

The endpoint in this case is https://se-sto-1.linodeobjects.com as the bucket is located in region se-sto-1.
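
Before (or after) configuring Filestash you can sanity-check the credentials and endpoint with any S3 client; here is a minimal sketch using the AWS SDK for JavaScript, assuming the bucket and region from this example:

import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3';

// The same endpoint, region and credentials we entered in Filestash.
const s3 = new S3Client({
  region: 'se-sto-1',
  endpoint: 'https://se-sto-1.linodeobjects.com',
  credentials: {
    accessKeyId: process.env.ACCESS_KEY_ID!,
    secretAccessKey: process.env.SECRET_ACCESS_KEY!
  }
});

const result = await s3.send(new ListObjectsV2Command({ Bucket: 'osc-blog' }));
console.log(result.Contents?.map((obj) => obj.Key) ?? 'bucket is empty');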

Upload a file

Now go back to the start page by clicking on the instance card and log in with the admin password that you created.

Now you can upload a file by using drag-and-drop.

Conclusion

With this open source project now made available as a service in Eyevinn Open Source Cloud you can give your users a simple and consistent user interface, independent of which cloud storage provider you are using. By using Eyevinn Open Source Cloud you also contribute to a sustainable business model for open source, as a share of the revenue goes to the open source creator.

VOD File Creation with Open Source Cloud

In a previous blog post we provided a walk-through of how to set up video file transcoding using Open Source Cloud, based on SVT Encore and supporting backend services. In this blog post we extend the setup by adding the creation of video-on-demand streaming files to the pipeline.

In this solution we will add another open source project made available as a service. The Encore Packager is a backend service that creates the VOD file package: it consumes jobs from a Redis queue, creates the package and uploads it to an S3 bucket. For the creation of the VOD file package, the open source packager Shaka Packager is used. The red box in the diagram below shows what we will add to our solution.

Step 1: Create another Valkey queue

Valkey provides a Redis compatible key / value store and we will create another queue for the packaging jobs. Navigate to the Valkey service in Open Source Cloud and press “Create valkey”. Give the instance a name and press Create.

Note down the IP and port shown on the Valkey instance card; this will be the Redis URL that we refer to later in this blog. In this example it would be redis://172.232.131.169:10511.

Step 2: Launch another Encore Callback Listener

We will now create a separate service that monitors a transcoding job in SVT Encore, so we know when the file is ready to be packaged. Navigate to the Encore Callback Listener in the web user interface. Click on the button “Create callback” and enter the name of the instance, the Redis URL (above), the URL to the SVT Encore instance that we created last time, and the name of the queue. We will call this queue “package”.

Important: the URL to the SVT Encore instance must be entered without a trailing slash.
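
If you want to verify connectivity to the Valkey instance and peek at the queue, a minimal sketch using the ioredis client could look like the following; that the queue name maps directly to a Redis list key is an assumption here.

import Redis from 'ioredis';

// Connect using the Redis URL from the Valkey instance card.
const redis = new Redis('redis://172.232.131.169:10511');

// Inspect the packaging queue (assumed to be a list named after the queue).
const length = await redis.llen('package');
console.log(`Jobs waiting in the package queue: ${length}`);
redis.disconnect();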

Step 3: Create Encore Packager service

We can now move on with creating the Encore Packager service. Enter the name of the instance, the Redis URL, the name of the queue in Redis (Valkey), the output S3 URL, the OSC token and the AWS credentials for the output S3 bucket. In this example we will have the following values:

Then press Create and wait for the instance to be ready.

Step 4: Submit a job

Now we are ready to try transcoding and creating a VOD package from a video file that we have available on an S3 compatible storage. We will create a signed URL to the video file we want to transcode. For example:

https://lab-testcontent-input.s3.eu-north-1.amazonaws.com/NO_TIME_TO_DIE_short_Trailer_2021.mp4?SIGNURLSTUFF

Navigate back to the SVT Encore service and press the menu item to open the API docs again. Click on the POST /encoreJobs bar, press the button “Try it out” and enter the following JSON. Here we have changed the progressCallbackUri to point to our Encore Callback Listener for VOD packaging.

{
  "externalId": "blog",
  "profile": "program",
  "outputFolder": "/usercontent/",
  "baseName": "blog",
  "progressCallbackUri": "https://demo-vod.eyevinn-encore-callback-listener.auto.prod.osaas.io/encoreCallback",
  "inputs": [
    {
      "uri": "https://lab-testcontent-input.s3.eu-north-1.amazonaws.com/NO_TIME_TO_DIE_short_Trailer_2021.mp4?SIGNURL",
      "seekTo": 0,
      "copyTs": true,
      "type": "AudioVideo"
    }
  ]
}

Then press the Execute button. A job is now submitted, and if you want to see the progress you can go to the Encore Callback Listener service and open the instance logs to check that it is receiving the callbacks.

When the transcoding process is completed, it will place a job on the packaging queue that will be picked up by the Encore Packager service. When the packaging job is completed you will, in this example, find a VOD package ready for streaming: https://lab.cdn.eyevinn.technology/osc/NO_TIME_TO_DIE_short_Trailer_2021/bb347d8e-c095-43dc-ba5f-914c7e74f13d/index.m3u8



Conclusion

You now have a fully fledged video transcoding and packaging pipeline for preparing video files for streaming, using SVT Encore with some supporting services. All of this is based on open source, and you don’t have to set up your own infrastructure to get started. If you later choose to do so you are free to, as everything demonstrated here is available as open source.

Trim video file on an S3 compatible bucket using open source

In this blog post I will describe how to trim a video file on an S3 compatible bucket using ffmpeg without having to download it first, process it and then upload the result.

For trimming the video we will use the open source tool ffmpeg and a script that handles uploading the result to an S3 bucket. This open source script is available as a service in Open Source Cloud.

Step 1: Login to Eyevinn Open Source Cloud

Go to www.osaas.io and log in. Sign up for an account if you don’t already have one. It is free to get started and you don’t even have to enter a credit card to try this out.

Step 2: Setup access to S3 bucket

Go to the service in Open Source Cloud called “FFmpeg to S3” using the search bar on the browse page. Click on the card “FFmpeg to S3” to go to the service.

Then click on the tab named “Service secrets”.

Get the S3 access key credentials from the administrator of your S3 buckets. You need at minimum the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Create a service secret for each of these credentials.

Step 3: Generate signed URL to the video to trim

Now we need to generate a signed URL for the video that you want to trim.

Copy the presigned URL to the clipboard.

Step 4: Create a ffmpeg trim job

As an example we will extract the first 30 seconds of the video file. The ffmpeg options for that are:

ffmpeg -ss 0 -t 30 -c:v copy -c:a copy

Go back to the FFmpeg to S3 service page in Open Source Cloud and click on button “Create job”.

Enter the following in the settings dialog:

Name: “tutorial”
CmdLineArgs (replace [SIGNED-URL] with the presigned URL from your clipboard and lab-testcontent-input with the name of your bucket):

-i [SIGNED-URL] -d s3://lab-testcontent-input/tutorial-30sec.mp4 "-ss 0 -t 30 -c:v copy -c:a copy"

AwsAccessKeyId and AwsSecretAccessKey: references to the service secrets created earlier
Region: location of the S3 bucket

Now press Create and wait for the job to complete. When the job is completed you should have a file called tutorial-30sec.mp4, 30 seconds long, in the bucket you provided.

Create a job from command line

You might want to automate or script the creation of these ffmpeg jobs and to facilitate that there is an open source SDK and command line tool for Eyevinn OSC. The command line tool is a Node.js script.

Follow the instructions on how to install Node.js if you don’t already have it installed.

Then install the CLI:


% npm install -g @osaas/cli

Obtain the personal access token by going to Settings in OSC and the API tab. There you find the personal access token; copy it to your clipboard and set it as an environment variable in your shell.


% export OSC_ACCESS_TOKEN=token

Now you can create the same job with the following command (replace [SIGNED-URL] and s3 bucket):


% osc create eyevinn-ffmpeg-s3 tutorialcli -o awsAccessKeyId="{{secrets.eyevinnawskeyid}}" -o awsSecretAccessKey="{{secrets.eyevinnawssecret}}" -o cmdLineArgs='-i [SIGNED-URL] -d s3://lab-testcontent-input/tutorial-30sec.mp4 "-ss 0 -t 30 -c:v copy -c:a copy"'
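
The same job can also be created from your own code with the TypeScript SDK. Here is a minimal sketch, assuming the same secrets and bucket as in the CLI example above (the instance name tutorialsdk is just an example):

import { Context, createInstance } from '@osaas/client-core';

const ctx = new Context(); // reads OSC_ACCESS_TOKEN from the environment
const token = await ctx.getServiceAccessToken('eyevinn-ffmpeg-s3');

// Create the same trim job as the CLI example above.
const job = await createInstance(ctx, 'eyevinn-ffmpeg-s3', token, {
  name: 'tutorialsdk',
  awsAccessKeyId: '{{secrets.eyevinnawskeyid}}',
  awsSecretAccessKey: '{{secrets.eyevinnawssecret}}',
  cmdLineArgs:
    '-i [SIGNED-URL] -d s3://lab-testcontent-input/tutorial-30sec.mp4 "-ss 0 -t 30 -c:v copy -c:a copy"'
});
console.log(`Created job: ${job.name}`);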

Conclusion

This was an example of how you can run ffmpeg to process a video file on an S3 bucket and output the result back to an S3 bucket, without having to develop your own script for it: a script already existed that is open source and made available as a service in Eyevinn Open Source Cloud.

How to create a FAST channel in Open Source Cloud

Creating Free Ad-Supported Streaming TV (FAST) channels is becoming increasingly popular among content creators and broadcasters aiming to reach a wider audience without the need for a subscription model.

With the rise of open-source technologies and cloud platforms, launching your own FAST channel is more accessible than ever. The Open Source Cloud, with its array of tools and services, offers a comprehensive environment to deploy a FAST Channel Engine. This article guides you through the process of setting up a FAST channel using the FAST Channel Engine within the Open Source Cloud using already transcoded videos.

The base for the virtual channel is transcoded and packaged HLS VOD assets stored on an origin. The advantage of virtual channels is that you only prepare and encode the content once.

Prerequisites

As a prerequisite for creating a linear channel using the FAST Channel Engine, you need to have your VOD assets transcoded into HLS format. These assets should be properly segmented and stored on an origin server or accessible file storage system.

Ensuring that your media files are in HLS format and readily accessible allows the channel engine to seamlessly retrieve and stream the content according to your schedule.

Prepare a playlist, in other words a URL pointing to a text file containing a list of .m3u8 URLs, each representing a streamable video. One way to do this is to use a GitHub gist.

– Go to https://gist.github.com
– Enter a name for the playlist in Filename (e.g. playlist.txt)
– Enter a list of URLs to HLS manifests (one per line), for example:

https://demo.osc.technology/fast_1/manifest.m3u8
https://demo.osc.technology/fast_2/manifest.m3u8
https://demo.osc.technology/fast_3/manifest.m3u8

– Press Create public gist (green button)
– Press “Raw” on your created playlist file
– Copy the URL to the created playlist file e.g. https://gist.github.com/xxx/playlist.txt

Create a channel

Open your web browser and go to https://app.osaas.io/ and login using your credentials. Once logged in, locate the “Subscriptions” item in the menu on the left-hand side of your screen and click on it. This will take you to the page where you can manage and explore available services.

On the Subscriptions page, look for the card labeled “FAST Channel Engine.” This represents the service you’ll use to create your FAST channel. Next to the service title, there’s a drop-down menu symbolized by three dots. Click on this menu to reveal more options and select “Create channel.”

Enter a meaningful name for your channel. This name will help you identify it among other channels you may create. In this example the type “Playlist” is used. This option indicates that your channel will play content sequentially from a playlist you provide.

Enter the URL to your playlist in the “URL” field, e.g. the playlist created earlier (https://gist.github.com/xxx/playlist.txt). Make sure your playlist is correctly formatted and accessible online.

After entering all necessary information, press the “create” button. The platform will now process your request and start setting up your channel based on the playlist provided. This process may take a few moments. You can monitor the progress directly on the platform.

Once your channel is successfully created, find the channel’s drop-down menu (again, symbolized by three dots). Click on it and select “Copy URL” to copy the channel URL to your clipboard.

Open a new tab in your browser or launch a web player that supports .m3u8 streaming, such as Safari or https://web.player.eyevinn.technology. Paste the copied URL into the player’s input field to start streaming your channel. This step is crucial for ensuring everything is working correctly and allows you to preview your channel’s content as your audience would.

Conclusion

Creating a FAST channel using the FAST Channel Engine in the Open Source Cloud is a powerful way to reach audiences with your content. By leveraging open-source technologies and cloud infrastructure, content creators can deploy scalable, high-performance streaming channels supported by ads.

This approach enables content distribution, allowing creators to broadcast their content globally without the need for heavy infrastructure investments.

Video File Transcoding with Open Source Cloud

SVT Encore is a powerful open-source video transcoder specifically designed for the cloud. It forms the backbone of the transcoding process in the media supply chain, taking raw video inputs and converting them into multiple formats and bitrates suitable for adaptive streaming. The transcoding process involves breaking down video files into different resolutions and bitrates, allowing viewers to receive the best possible quality based on their device and network conditions.

To reduce the barrier to getting started with SVT Encore, we have added the project to Open Source Cloud together with some supporting backend services. This blog gives you a walk-through of how to set up video file transcoding using Open Source Cloud.

Prerequisites

  • If you have not already done so, sign up for an OSC account.
  • 5 remaining services on your subscription plan, or the services included in this solution purchased individually.
  • An S3 compatible object storage solution.

This solution is based on the following open source projects made available as a service:

  • SVT Encore
  • Valkey
  • Encore Callback Listener
  • Encore Transfer
  • Retransfer

After completing this tutorial you will be able to transcode a video file on an S3 compatible storage, with the output placed on another S3 compatible storage when the processing is completed.

Step 1: Create Encore Queue

Go to the web user interface and navigate to the service called SVT Encore. Click on the button “Create queue” and give the queue a name.

You can leave the Profiles URL empty for now and then press Create.

Now you have an instance of SVT Encore running with one single queue and ready to receive transcoding jobs for processing. You can try this out by clicking on the menu item Open API docs to access the online REST API documentation and submit a job.

However, to automatically get transcoded files out from SVT Encore and transferred to the output storage we need a few more helper services. That is what we will set up now. Start by taking note of the URL to the SVT Encore instance.

Remove the trailing slash and keep it for later use. In this case it is https://demo-blog.encore.prod.osaas.io.

Step 2: Create Valkey queue

Valkey provides a Redis compatible key / value store, and this is what we will use to manage the queue for transferring files out from Encore to our output bucket.

Navigate to the Valkey service in Open Source Cloud and press “Create valkey”. Give the instance a name and press Create.

Note down the IP and port shown on the Valkey instance card; this will be the Redis URL that we refer to later in this blog. In this example it would be redis://172.232.131.169:10507.

Step 3: Launch Encore Callback Listener

Now we need something that monitors a transcoding job in SVT Encore, so we know when the file is ready to be transferred. For that, navigate to the Encore Callback Listener in the web user interface. Click on the button “Create callback” and enter the name of the instance, the Redis URL (above), the URL to the SVT Encore instance and the name of the transfer queue. We call it “transfer” in this example.

Important: the URL to the SVT Encore instance must be entered without a trailing slash.

Press Create and you are done with this step for now.

Step 4: Setup secrets

Now we have the Callback Listener service running that will monitor transcoding jobs and place completed jobs in the transfer queue. Next we need a service that picks up a job from the transfer queue and actually transfers the file out from SVT Encore to our destination bucket.

First we need to configure the transfer job service with the API secrets needed for access to the S3 bucket. Navigate to the Retransfer service in Open Source Cloud and click on the tab Secrets.

Create the secrets containing the Access Key Id and Secret Access Key for the destination storage access. Note down the name of these secrets as you will be using it later.


awsaccesskeyid
awssecretaccesskey

Now navigate to the Encore Transfer service in the web user interface and click on the tab Secrets. Add a secret with your personal access token (OSC token) that you find under Settings and the tab API.

Step 5: Create Encore Transfer service

With the secrets in place we can now move on with creating the Encore Transfer service. Enter the name of the instance, the Redis URL, the name of the queue in Redis (Valkey), the output URL, the OSC token and the names of the access key secrets in the Retransfer service. In this example we will have the following values:

Then press Create and wait for the instance to be ready.

Step 6: Submit a job

Now we are ready to try transcoding a video file that we have available on an S3 compatible storage. We will create a signed URL to the video file we want to transcode. For example:


https://lab-testcontent-input.s3.eu-north-1.amazonaws.com/NO_TIME_TO_DIE_short_Trailer_2021.mp4?SIGNURLSTUFF

Navigate back to the SVT Encore service and press the menu item to open the API docs again.

Click on the POST /encoreJobs bar, press the button “Try it out” and enter the following JSON:

{
  "externalId": "blog",
  "profile": "program",
  "outputFolder": "/usercontent/blog",
  "baseName": "blog",
  "progressCallbackUri": "https://demo-blog.eyevinn-encore-callback-listener.auto.prod.osaas.io/encoreCallback",
  "inputs": [
    {
      "uri": "https://lab-testcontent-input.s3.eu-north-1.amazonaws.com/NO_TIME_TO_DIE_short_Trailer_2021.mp4?SIGNURL",
      "seekTo": 0,
      "copyTs": true,
      "type": "AudioVideo"
    }
  ]
}


Then press the Execute button. A job is now submitted, and if you want to see the progress you can go to the Encore Callback Listener service and open the instance logs to check that it is receiving the callbacks.
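
If you prefer to submit the job from code rather than the Swagger UI, a minimal sketch posting the same JSON with fetch could look like this; depending on how your instance is configured you may also need an authorization header.

// Submit the same encore job as above programmatically.
const response = await fetch('https://demo-blog.encore.prod.osaas.io/encoreJobs', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    externalId: 'blog',
    profile: 'program',
    outputFolder: '/usercontent/blog',
    baseName: 'blog',
    progressCallbackUri:
      'https://demo-blog.eyevinn-encore-callback-listener.auto.prod.osaas.io/encoreCallback',
    inputs: [
      {
        uri: 'https://lab-testcontent-input.s3.eu-north-1.amazonaws.com/NO_TIME_TO_DIE_short_Trailer_2021.mp4?SIGNURL',
        seekTo: 0,
        copyTs: true,
        type: 'AudioVideo'
      }
    ]
  })
});
console.log(`Encore responded with status ${response.status}`);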

When the transcoding process is completed, it will place a job on the transfer queue that will be picked up by the Encore Transfer service. When all the transfer jobs are completed you will, in this example, find a set of files in your output bucket: different variants with different resolutions and bitrates.

Conclusion

You now have a fully fledged video transcoding pipeline for preparing video files for streaming, using SVT Encore with some supporting services. All of this is based on open source, and you don’t have to set up your own infrastructure to get started. If you later choose to do so you are free to, as everything demonstrated here is available as open source.