Transfer files from one S3 bucket to another

With the open web service S3 Sync in Eyevinn Open Source Cloud you can synchronize the contents of one S3 bucket with another S3 bucket without having to download and upload the files first. This service can be used, for example, to:

  • Migrate from cloud storage in AWS to the MinIO storage service in Eyevinn Open Source Cloud
  • Migrate from the MinIO storage service in Eyevinn Open Source Cloud to a MinIO storage service on your own infrastructure
  • Back up and provide redundancy for critical files

In this blog post we will give an example of how you can synchronize files from an S3 bucket in AWS to a bucket on a MinIO server instance in Eyevinn Open Source Cloud.

Create an account for free at app.osaas.io and create your tenant. If you already have access to Eyevinn Open Source Cloud you can skip this step.

Step 1: Create the destination bucket

Log in and navigate to the MinIO service in the catalog of open web services. Follow the MinIO getting started guide from our previous blog post. Once completed you should have a MinIO server with a bucket called “tutorial” and a web user interface to access it.

Step 2: Setup access to source bucket

Set up and obtain the access credentials for the source bucket from AWS. Ensure that the credentials can only access the source bucket.
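
A quick way to verify that the credentials work and only grant access to the source bucket is to list the bucket with the AWS CLI. A minimal sketch, with placeholders for your own values:

AWS_ACCESS_KEY_ID=<your-access-key-id> \
AWS_SECRET_ACCESS_KEY=<your-secret-access-key> \
aws s3 ls s3://<source-bucket>/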

Navigate to the S3 Sync service in the Eyevinn Open Source Cloud web console and select the tab titled “Service Secrets”. Create secrets called “sourceaccesskey” and “sourcesecretkey” where “sourceaccesskey” stores the AWS_ACCESS_KEY_ID and “sourcesecretkey” stores the AWS_SECRET_ACCESS_KEY.

Step 3: Setup access to destination bucket

Add two new secrets called “destaccesskey” and “destsecretkey”, where “destaccesskey” stores the MinIO RootUser and “destsecretkey” stores the RootPassword that you configured in step 1. You should now have four secrets to use in the next step.

Step 4: Create an S3 sync job

In this example we want to synchronize the contents of an S3 bucket in AWS called “lab-testcontent-output”, available in the “eu-north-1” AWS region, with the bucket called “tutorial” on a MinIO server instance in Eyevinn Open Source Cloud. Choosing a folder on the source bucket will migrate all files and subfolders under that folder. Create a new S3 sync job with the following settings:

  • Name: A unique name for the S3 sync job.
  • CmdLineArgs: Here we enter the S3 URLs for the source and the destination.
  • SourceAccessKey: We reference the “sourceaccesskey” secret we created.
  • SourceSecretKey: We reference the “sourcesecretkey” secret we created.
  • SourceRegion: We could enter “eu-north-1” here, but in this example it was not needed.
  • DestAccessKey: We reference the “destaccesskey” secret we created.
  • DestSecretKey: We reference the “destsecretkey” secret we created.
  • DestEndpoint: The URL to the MinIO storage instance in Eyevinn Open Source Cloud.

Press the button “Create” to create and start the job.

Now we have a running job that migrates the files from the AWS S3 bucket to the bucket in Eyevinn Open Source Cloud.
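
If you want to verify the result, the destination can be inspected with any S3-compatible tool since MinIO speaks the S3 protocol. A sketch with the AWS CLI, assuming the RootUser/RootPassword credentials and the MinIO endpoint used later in this post:

AWS_ACCESS_KEY_ID=<MINIO_ROOT_USER> \
AWS_SECRET_ACCESS_KEY=<MINIO_ROOT_PASSWORD> \
aws s3 ls s3://tutorial/ --recursive \
  --endpoint-url https://eyevinnlab-jonas.minio-minio.auto.prod.osaas.io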

Command Line Tool

You can also use the OSC command line tool to create this job.

osc create eyevinn-s3-sync guidecli \
  -o cmdLineArgs="s3://lab-testcontent-output/osc/VINN-11/ s3://tutorial/" \
  -o SourceAccessKey="{{secrets.sourceaccesskey}}" \
  -o SourceSecretKey="{{secrets.sourcesecretkey}}" \
  -o DestAccessKey="{{secrets.destaccesskey}}" \
  -o DestSecretKey="{{secrets.destsecretkey}}" \
  -o DestEndpoint="https://eyevinnlab-jonas.minio-minio.auto.prod.osaas.io"

Conclusion

With the open web service providing storage functionality in Eyevinn Open Source Cloud you always have the option to run the same solution on your own premises, as it is based on open source. With this tooling you can easily migrate data to Eyevinn Open Source Cloud as well as from Eyevinn OSC to a self-hosted solution.

Getting started with Open Source Cloud and MinIO

As a follow-up to our last post on how you can simplify your file storage without being locked in with a specific vendor, this post walks you through, step by step, how to get started with file storage in Eyevinn Open Source Cloud.

Why Open Source Cloud with MinIO?

Using an open web service based on open source, you are not locked in with a specific vendor and you have the option to run the very same code on your own infrastructure or cloud. To reduce the barrier to getting started, we have included one MinIO server instance with 50 GB of storage in the free tier. The storage interface is compatible with AWS S3 tools, offering a wide range of options for accessing the storage.

Create an account for free at https://app.osaas.io and create your tenant. If you already have access to Eyevinn Open Source Cloud you can skip this step.

Step 1: Create a MinIO server instance

Log in and navigate to the MinIO service in the catalog of open web services. Follow the MinIO getting started guide in the Open Source Cloud documentation. Once completed you should have a MinIO server with a bucket called “tutorial”.
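
Since the storage interface is S3 compatible you can verify the setup from the command line as well. A minimal sketch with the AWS CLI, where the endpoint URL and the RootUser/RootPassword credentials are placeholders for your own values:

AWS_ACCESS_KEY_ID=<MINIO_ROOT_USER> \
AWS_SECRET_ACCESS_KEY=<MINIO_ROOT_PASSWORD> \
aws s3 ls s3://tutorial/ --endpoint-url <minio-instance-url>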

Step 2: Provide a web user interface to the storage

To manage the storage we have a couple of options: the AWS S3 command line tool for uploading and downloading files, the AWS S3 SDK, or a desktop application. If you want to provide your users with a web based user interface to the buckets on this MinIO server instance, you can use another open web service available in Eyevinn Open Source Cloud.

Navigate to the Filestash service in the catalog of services in Eyevinn Open Source Cloud. Create a new Filestash instance by pressing the “Create filestash” button.

Follow the steps below to configure and connect this manager with the MinIO instance you created earlier.

  • 1. Click on the instance card once it is in state running. A new page will open in a new tab or browser window.
  • 2. Enter an administrator password for this Filestash storage manager instance.
  • 3. In the navigation sidebar on the left, click on the item “Backend” and select S3 as storage backend. You may remove the others, as we will only be using S3 in this example.
  • 4. Choose the authentication middleware ADMIN. This means that you will log in with the admin password you just created. In practice you might want to use at least HTPASSWD for more granular access control.
  • 5. Select S3 backend
  • 6. Enter the access key id and secret key. This is the RootUser and RootPassword that you set for your MinIO instance. The endpoint is the URL to the MinIO server instance that you created.

Step 3: Upload a file

Now go back to the start page by clicking on the instance card and log in with the admin password that you created. Use drag-and-drop to upload a file.
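
If you prefer the command line over the web interface, the same upload can be done with the AWS CLI, using the endpoint and credentials from the previous step:

AWS_ACCESS_KEY_ID=<MINIO_ROOT_USER> \
AWS_SECRET_ACCESS_KEY=<MINIO_ROOT_PASSWORD> \
aws s3 cp myfile.jpg s3://tutorial/ --endpoint-url <minio-instance-url>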

Now you have a storage based on open web services in Eyevinn Open Source Cloud and a web based user interface to access it. Such a storage can be used for storing poster and image objects for a streaming service, backing up project files, sharing large media files with team members, or creating scalable media libraries, to mention a few use cases.

Simplify Your File Storage with Open Source Cloud and MinIO

Ever struggled with managing digital files for your project? Whether you’re a developer, content creator, or just someone who needs reliable file storage, I’ve got a game-changing solution that’s both powerful and surprisingly easy to use.

What Exactly is Object Storage?

Think of object storage like a super-smart, infinitely expandable digital filing cabinet. Instead of saving files on your local computer or a single hard drive, you can store them in the cloud, access them from anywhere, and scale your storage as your needs grow. The best part? You don’t need to be a tech wizard to use it.

Why Open Source Cloud with MinIO?

Traditional cloud storage can be expensive and complicated. Open Source Cloud offers a refreshing alternative:

– Free tier with 50 GB of storage
– Fully compatible with popular AWS S3 tools
– No complex setup or massive technical knowledge required
– Supports media, images, documents, and more

Your Quick Start Guide

Getting Set Up

– Create an Account: Sign up for Open Source Cloud (it’s free!)
– Install the Basics: You’ll need NodeJS and a few simple command-line tools
– Follow the simple instructions: https://docs.osaas.io/osaas.wiki/Service%3A-MinIO.html
– Create Your Storage: Set up a MinIO storage service in just a few clicks

Creating Your First Bucket

A “bucket” is just a fancy term for a folder in the cloud. You can create as many as you need for different projects – one for photos, another for documents, another for backups.

Uploading and Accessing Files:

– Upload files using simple commands
– Generate shareable links (see the example below)
– Access your files from any device
– Integrate with existing tools and applications
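
As an illustration, here is what uploading a file and generating a shareable link can look like with the AWS CLI; the bucket name, endpoint URL and credentials are placeholders for your own values:

export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-key>
# upload a file to your bucket
aws s3 cp photo.jpg s3://my-bucket/ --endpoint-url <storage-endpoint-url>
# generate a shareable link that is valid for one hour
aws s3 presign s3://my-bucket/photo.jpg --expires-in 3600 --endpoint-url <storage-endpoint-url>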

Real-World Use Cases:

– Storing poster and image objects for a streaming service
– Backing up project files
– Sharing large media files with team members
– Creating scalable media libraries

The Open Source Difference

What makes this special isn’t just the technology – it’s the philosophy. Open Source Cloud shares revenue with the open-source authors, which means you’re supporting the community of developers who create these amazing tools.

Getting Started is Easier Than You Think

Don’t let technical jargon intimidate you. With Open Source Cloud’s MinIO, you can have a professional-grade storage solution up and running in minutes, without spending a fortune or getting a computer science degree.

Ready to simplify your file storage?

– Visit Open Source Cloud
– Sign up for a free account
– Start storing your files in minutes

Additional Resources:

– Open Source Cloud: https://app.osaas.io
– API Documentation: https://api.osaas.io

Tip: Start small, explore, and scale as you grow!

AI Assisted Code Review

Based on open source made available as an open web service in Eyevinn Open Source Cloud, we can have AI assist with reviewing pull requests submitted to a GitHub repository.

The AI Code Reviewer is an open source project that provides an API and user interface to review code in a public GitHub repository. It analyzes code for quality, best practices and potential improvements, providing actionable feedback to improve a code base. This is achieved by carefully prompting a GPT-4 model to perform the task and return a structured response with scores and suggested improvements.

This project has been made available as an open web service in Eyevinn Open Source Cloud which means that you can instantly start to integrate this into your solution.

AI Code Review GitHub Action

For example, we might want to incorporate this AI Code Reviewer in our pull request workflow and use it to provide an automated first review of all opened pull requests. It could add a comment with an overall score and suggested improvements.

To add this to our GitHub workflow we need a GitHub action that performs this task using the open web service in Eyevinn Open Source Cloud. We will create a GitHub action that uses the client libraries for Eyevinn OSC to create an AI Code Reviewer instance.

core.info('Setting up Code Reviewer');
const ctx = new Context();
let reviewer = await getEyevinnAiCodeReviewerInstance(ctx, 'ghaction');
if (!reviewer) {
  reviewer = await createEyevinnAiCodeReviewerInstance(ctx, {
    name: 'ghaction',
    OpenAiApiKey: '{{secrets.openaikey}}'
  });
  await delay(1000);
}
core.info(`Reviewer available, requesting review of ${gitHubUrl.toString()}`);

These lines of code create an instance called “ghaction” if it does not already exist. When it is available we can use the API that this service provides to perform the actual code review. The following lines of code take care of this.

const reviewRequestUrl = new URL('/api/v1/review', reviewer.url);
const sat = await ctx.getServiceAccessToken('eyevinn-ai-code-reviewer');
const response = await fetch(reviewRequestUrl, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${sat}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    githubUrl: gitHubUrl.toString()
  })
});
if (response.ok) {
  const review = await response.json();
  return review;
} else {
  throw new Error('Failed to get review');
}

We package this together into a GitHub action and make it available on the GitHub Actions marketplace.

Add review to pull request workflow

Now it is time to add this to our pull request workflow in GitHub. We add a step that reviews the branch of the pull request using the GitHub action we created. The input variable “repo_url” contains the URL to this branch, and in addition we need to provide the access token for Eyevinn OSC as an environment variable.

  - name: Review branch
    id: review
    uses: EyevinnOSC/code-review-action@v1
    with:
      repo_url: ${{ github.server_url }}/${{ github.repository }}/tree/${{ github.head_ref }}
    env:
      OSC_ACCESS_TOKEN: ${{ secrets.OSC_ACCESS_TOKEN }}

The next step is to take the outputs “score” and “suggestions” from this step and add them as a comment to the pull request.

  - name: comment
    uses: actions/github-script@v7
    with:
      github-token: ${{secrets.GITHUB_TOKEN}}
      script: |
        github.rest.issues.createComment({
          issue_number: context.issue.number,
          owner: context.repo.owner,
          repo: context.repo.repo,
          body: 'Code review score: ${{ steps.review.outputs.score }}\n${{ join(fromJSON(steps.review.outputs.suggestions), ', ') }}'
        })

When a pull request is opened, the workflow runs and the result of the code review is posted as a comment on the pull request.

Conclusion

This is an example of how you can integrate open web services to enhance your software development processes. We are continuing our journey to advance and democratize web services through open source and a sustainable business model for creators.

Empowering Developers to Integrate Open Web Services into their Applications

This blog post serves as an example of how our platform empowers developers and businesses to seamlessly integrate open source as web services into their applications and services. Build applications and solutions on these open web services to avoid being locked in to a single web services vendor.

In this example we will build a NodeJS application that uses an open web service to store application configurations. To follow this guide you need to sign up with Eyevinn Open Source Cloud to get a personal access token for accessing the available web services. Navigate to the web console and register with your email. Signup is free, and on the free plan you can run one open web service at a time. Upgrading to the startup or business plan gives you access to more open web services at the same time. Create a tenant and you are good to go.

You can now obtain the access token by navigating to Settings / API in the web console. Copy it and store it in the environment variable OSC_ACCESS_TOKEN in your shell.

% export OSC_ACCESS_TOKEN=access-token-copied-above

Set up your Node/TypeScript project and install the TypeScript client SDKs.

% npm install --save @osaas/client-core @osaas/client-services

Create a main function that reads a config value and, if it does not exist, saves a default.

async function main() {
  const ctx = new Context();
  const service = await setup(ctx);
  console.log('Configuration UI available at:', service.url);
  let value = await readConfigVariable(service, 'foo');
  if (!value) {
    await saveConfigVariable(service, 'foo', 'default');
    value = await readConfigVariable(service, 'foo');
  }
  console.log(`Config value: ${value}`);
}

Let us go through in more detail what the code above does.

This line will read the OSC_ACCESS_TOKEN environment variable from your shell and create a context for accessing the Eyevinn Open Source Cloud platform.

const ctx = new Context();

Then we set up the open web services that we need.

const service = await setup(ctx);

In return we get the open web service handling the configuration variables and a URL to the configuration service's web user interface. In this web interface you can manage the configuration values.

  let value = await readConfigVariable(service, 'foo');
  if (!value) {
    await saveConfigVariable(service, 'foo', 'default');
    value = await readConfigVariable(service, 'foo');
  }
  console.log(`Config value: ${value}`);

We try to read a variable called foo, and if it does not exist we create it with a default value. We then print the value to the console.

Now let us take a closer look at the function setup().

In this function we create the open web service for managing configuration variables, an open web service created from the open source project App Config Svc, available on GitHub. This service requires a Redis database for storing and accessing the values.

For the database we therefore create an instance of the Valkey.io open web service, which provides a Redis-compatible API. We obtain the IP and port of this instance and construct a Redis URL that we pass to the application config service we want to create.

async function setup(ctx: Context) {
  const configServiceAccessToken = await ctx.getServiceAccessToken(
    'eyevinn-app-config-svc'
  );
  let configService: EyevinnAppConfigSvc = await getInstance(
    ctx,
    'eyevinn-app-config-svc',
    'example',
    configServiceAccessToken
  );
  if (!configService) {
    const valkeyInstance = await createValkeyIoValkeyInstance(ctx, {
      name: 'configstore'
    });
    const redisUrl = await getRedisUrlFromValkeyInstance(
      ctx,
      valkeyInstance.name
    );
    configService = await createEyevinnAppConfigSvcInstance(ctx, {
      name: 'example',
      RedisUrl: redisUrl
    });
  }
  return configService;
}

The functions to save and read configuration variables are written as follows. They basically just use the HTTP API that the configuration service provides for writing and reading variables.

async function saveConfigVariable(service: EyevinnAppConfigSvc, key: string, value: string) {
  const url = new URL('/api/v1/config', service.url);
  const response = await fetch(url.toString(), {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ key, value })
  });
  if (!response.ok) {
    throw new Error(`Failed to save config: ${response.statusText}`);
  }
}

async function readConfigVariable(service: EyevinnAppConfigSvc, key: string) {
  const url = new URL(`/api/v1/config/${key}`, service.url);
  const response = await fetch(url.toString(), {
    method: 'GET',
    headers: {
      'Content-Type': 'application/json'
    }
  });
  if (!response.ok) {
    return undefined;
  }
  const data = await response.json();
  return data.value;
}
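
The same API can of course be exercised with any HTTP client. A sketch with curl, assuming a hypothetical instance URL:

% curl -X POST https://<config-svc-url>/api/v1/config \
    -H 'Content-Type: application/json' \
    -d '{"key": "foo", "value": "default"}'
% curl https://<config-svc-url>/api/v1/config/foo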

This example shows you how you can integrate open web services easily in your applications and services. More examples are available in this GitHub repository.

Can Claude create a VOD streaming package for you?

The question in the title is of course a bit rhetorical. Of course Claude can. In this post I am going to describe how that works and how you can try this out.

Claude is an AI assistant built by Anthropic that is trained to have natural, text-based conversations; the first model was released in March 2023. In November 2024 Anthropic released a specification for the Model Context Protocol (MCP), an open protocol that enables seamless integration between LLM applications and external data sources and tools. MCP provides a standardized way to connect LLMs with the context they need.

MCP enables secure connections between host applications, such as Claude Desktop, and local services. Programs like Claude Desktop, IDEs or AI tools access MCP servers: lightweight programs that expose specific capabilities through the standardized Model Context Protocol.

We have developed and open sourced an MCP server for Eyevinn Open Source Cloud. An MCP server provides tools and resources; we currently provide tools for video on-demand streaming, but more will be added by us or, hopefully, the open source community.

In the demonstration video below I show how I can have Claude set up a video on-demand preparation pipeline and create a video on-demand file for streaming from a video file available online.


Install

If you want to try this out yourself you can follow these steps. A prerequisite is that you have an account on Eyevinn Open Source Cloud and at least 6 services available on your plan.

1. Download and install Claude Desktop.
2. In the Eyevinn OSC web console, go to API settings (Settings > API settings).
3. Copy the Personal Access Token.
4. Add the following to your claude_desktop_config.json:

{
  "mcpServers": {
    "eyevinn-osc": {
      "command": "npx",
      "args": ["-y", "@osaas/mcp-server"],
      "env": {
        "OSC_ACCESS_TOKEN": "YOUR_PERSONAL_ACCESS_TOKEN"
      }
    }
  }
}

5. Restart Claude.

If everything is correctly installed you should see a hammer icon at the bottom of the chat input.

Now you can ask Claude to create a VOD from a file that you have available online as shown in the video above.

Client SDK

This MCP server uses the TypeScript client SDK for Eyevinn Open Source Cloud. With this SDK you can create and remove instances and automate what you can do in the web console. Here is an example of how to create a VOD package using the client SDK, which is basically what one of the tools currently does.

import { Context, Log } from '@osaas/client-core';
import { createVod, createVodPipeline } from '@osaas/client-transcode';

async function main() {
  const ctx = new Context();
  // Placeholder values: use your own pipeline name and source file URL
  const name = 'guide';
  const source = 'https://example.com/video.mp4';

  try {
    Log().info('Creating VOD pipeline');
    const pipeline = await createVodPipeline(name, ctx);
    Log().info('VOD pipeline created, starting job to create VOD');
    const job = await createVod(pipeline, source, ctx);
    if (job) {
      Log().info('Created VOD will be available at: ' + job.vodUrl);
    }
  } catch (err) {
    Log().error(err);
  }
}

main();

This gives you an example of what you can do, and the possibilities are “endless”. It feels as if it is only creativity that limits what you can do.

Share your ideas either in the comments below or with a contribution to the Eyevinn OSC MCP server that is open source. Be creative!

Video File Transcoding with Open Source Cloud using Terraform

In a previous post, Video File Transcoding with Open Source Cloud, we discussed how to set up a fully-fledged video transcoding pipeline using SVT Encore. In this follow-up, we will walk through the process of recreating that setup using Terraform, an open-source infrastructure-as-code (IaC) tool from HashiCorp, or its open-source alternative, OpenTofu.

Terraform enables you to automate the creation, management, and maintenance of infrastructure by defining it in code. By using HCL (HashiCorp Configuration Language), you can declaratively describe resources like servers, databases, and networks, and Terraform will handle their provisioning across a variety of platforms.

OSC Terraform Provider

To simplify interaction with Open Source Cloud (OSC), we’ve created a Terraform provider. This allows you to easily spin up or tear down OSC resources directly through Terraform.

In this post, we’ll define a Terraform configuration file to describe the video file transcoding pipeline.

Prerequisites

  • Terraform or OpenTofu installed
  • An Eyevinn Open Source Cloud account and a Personal Access Token (PAT)

The Terraform Configuration

Create a new Terraform file named main.tf. We’ll begin by defining the required provider for OSC.

terraform {
  required_providers {
    osc = {
      source = "EyevinnOSC/osc"
      version = "0.1.3"
    }
  }
}

provider "osc" {
  pat         = "<PERSONAL_ACCESS_TOKEN>"
  environment = "prod"
}

Using Variables for Credentials

To avoid accidentally exposing sensitive credentials, we'll use a variable for the Personal Access Token (PAT). This approach makes it easier to manage credentials securely. We also define the OSC environment as a variable to easily switch between dev and prod.

variable "osc_pat" {
  type = string
}
variable "osc_env" {
  value = string
  environment = "dev"

You can then reference this variable in the provider block like so:

provider "osc" {
  pat        = var.osc_pat
  environment = var.osc_env
}

Setting the Token

To set the osc_pat variable, you can either pass it via the command line using the -var flag or set it as an environment variable:

export TF_VAR_osc_pat=<PERSONAL_ACCESS_TOKEN>

Define the SVT Encore Resource

The first step in the transcoding pipeline is setting up an SVT Encore instance. This resource requires a name and, optionally, a profiles_url where transcoding profiles are stored.

Define the osc_encore_instance resource like this:

resource "osc_encore_instance" "example" {
  name          = "ggexample"
  profiles_url  = "https://raw.githubusercontent.com/Eyevinn/encore-test-profiles/refs/heads/main/profiles.yml"
}

Define the Valkey Instance

Next, we define a Valkey instance, which is a required component in the pipeline.

resource "osc_valkey_instance" "example" {
  name = "ggexample"
}

Define the Callback Listener

The Encore Callback Listener connects to both the Valkey and Encore instances. It requires the redis_url and encore_url, which are derived from the earlier resources.

The redis_url should be in the Redis-compatible format, and the encore_url should be formatted without a trailing slash:

resource "osc_encore_callback_instance" "example" {
  name         = "ggexample"
  redis_url    = format("redis://%s:%s", osc_valkey_instance.example.external_ip, osc_valkey_instance.example.external_port)
  encore_url   = trimsuffix(osc_encore_instance.example.url, "/")
  redis_queue  = "transfer"
}

Define the Retransfer Service

We also need to set up AWS secrets for the Retransfer service. To manage this securely, we'll use variables for the AWS access key and secret, and create OSC secrets to hold their values; the Encore Transfer service then references the secrets by name.

variable "aws_keyid" {
  type = string
}

variable "aws_secret" {
  type = string
}

variable "aws_output" {
  type    = string
  default = "s3://path/to/bucket"
}

resource "osc_secret" "keyid" {
  service_ids  = ["eyevinn-docker-retransfer"]
  secret_name  = "awsaccesskeyid"
  secret_value = var.aws_keyid
}

resource "osc_secret" "secret" {
  service_ids  = ["eyevinn-docker-retransfer"]
  secret_name  = "awssecretaccesskey"
  secret_value = var.aws_secret
}

Define the Encore Transfer Service

Finally, we define the Encore Transfer service, which will manage the transfer of transcoded media files to the specified AWS S3 bucket.

resource "osc_encore_transfer_instance" "example" {
  name        = "ggexample"
  redis_url   = osc_encore_callback_instance.example.redis_url
  redis_queue = osc_encore_callback_instance.example.redis_queue
  output      = var.aws_output
  aws_keyid   = osc_secret.keyid.secret_name
  aws_secret  = osc_secret.secret.secret_name
  osc_token   = var.osc_pat
}

Define Outputs

To easily access dynamic endpoints, we can define outputs in Terraform. These outputs can be used in scripts or other automation tasks.

output "encore_url" {
  value = trimsuffix(osc_encore_instance.example.url, "/")
}
}

output "encore_name" {
  value = osc_encore_instance.example.name
}

output "callback_url" {
  value = trimsuffix(osc_encore_callback_instance.example.url, "/")
}

Full Configuration File (main.tf)

Here’s the complete main.tf file:

terraform {
  required_providers {
    osc = {
      source = "EyevinnOSC/osc"
      version = "0.1.3"
    }
  }
}

variable "osc_pat" {
  type      = string
  sensitive = true
}

variable "osc_environment" {
  type    = string
  default = "prod"
}

variable "aws_keyid" {
  type      = string
  sensitive = true
}

variable "aws_secret" {
  type      = string
  sensitive = true
}

variable "aws_output" {
  type = string
}

provider "osc" {
  pat         = var.osc_pat
  environment = var.osc_environment
}

resource "osc_encore_instance" "example" {
  name         = "ggexample"
  profiles_url = "https://raw.githubusercontent.com/Eyevinn/encore-test-profiles/refs/heads/main/profiles.yml"
}

resource "osc_valkey_instance" "example" {
  name = "ggexample"
}

resource "osc_encore_callback_instance" "example" {
  name        = "ggexample"
  redis_url   = format("redis://%s:%s", osc_valkey_instance.example.external_ip, osc_valkey_instance.example.external_port)
  encore_url  = trimsuffix(osc_encore_instance.example.url, "/")
  redis_queue = "transfer"
}

resource "osc_secret" "keyid" {
  service_ids  = ["eyevinn-docker-retransfer"]
  secret_name  = "awsaccesskeyid"
  secret_value = var.aws_keyid
}

resource "osc_secret" "secret" {
  service_ids  = ["eyevinn-docker-retransfer"]
  secret_name  = "awssecretaccesskey"
  secret_value = var.aws_secret
}

resource "osc_encore_transfer_instance" "example" {
  name        = "ggexample"
  redis_url   = osc_encore_callback_instance.example.redis_url
  redis_queue = osc_encore_callback_instance.example.redis_queue
  output      = var.aws_output
  aws_keyid   = osc_secret.keyid.secret_name
  aws_secret  = osc_secret.secret.secret_name
  osc_token   = var.osc_pat
}

output "encore_url" {
  value = trimsuffix(osc_encore_instance.example.url, "/")
}

output "encore_name" {
  value = osc_encore_instance.example.name
}

output "callback_url" {
  value = trimsuffix(osc_encore_callback_instance.example.url, "/")
}

Running the Pipeline

Before running Terraform, ensure your environment variables are set:

export TF_VAR_osc_pat=<PERSONAL_ACCESS_TOKEN>
export TF_VAR_aws_keyid=<AWS_KEYID>
export TF_VAR_aws_secret=<AWS_SECRET>

Once the environment variables are configured, you can run the pipeline with:

terraform init
terraform apply

Follow the prompts to create the infrastructure. Once complete, the instances should be successfully provisioned.

Viewing Outputs

To view the outputs from Terraform, run:

terraform output

Example output:

callback_url = "https://eyevinnlab-ggexample.eyevinn-encore-callback-listener.auto.prod.osaas.io"
encore_url = "https://eyevinnlab-ggexample.encore.prod.osaas.io"
encore_name = "ggexample"

To view a specific output, such as the encore_url:

terraform output encore_url
"https://eyevinnlab-ggexample.encore.prod.osaas.io"

Encode Job

To initiate a transcoding job, you can either use the Swagger UI, as described in the previous post, or run the script provided below. The script accepts the URL of the media you want to encode as an input argument.

encoreJob.sh

#!/bin/bash

# Ensure MEDIA_URL argument is provided
if [ -z "$1" ]; then
  echo "Usage: $0 <MEDIA_URL>"
  exit 1
fi

# Assign the first argument to MEDIA_URL
MEDIA_URL="$1"

# Retrieve values from Terraform output
ENCORE_URL=$(terraform output -raw encore_url)
EXTERNAL_ID=$(terraform output -raw encore_name)
EXTERNAL_BASENAME=$(terraform output -raw encore_name)
CALLBACK_URL=$(terraform output -raw callback_url)
OSC_PAT=$TF_VAR_osc_pat
OSC_ENV=${TF_VAR_osc_environment:-prod}

# Validate required variables
if [ -z "$ENCORE_URL" ]; then
  echo "Error: Terraform output 'encore_url' is missing."
  exit 1
fi

if [ -z "$EXTERNAL_ID" ]; then
  echo "Error: Terraform output 'encore_name' is missing."
  exit 1
fi

if [ -z "$EXTERNAL_BASENAME" ]; then
  echo "Error: Terraform output 'encore_name' is missing."
  exit 1
fi

if [ -z "$CALLBACK_URL" ]; then
  echo "Error: Terraform output 'callback_url' is missing."
  exit 1
fi

if [ -z "$OSC_PAT" ]; then
  echo "Error: Environment variable 'OSC_PAT' (TF_VAR_osc_pat) is not set."
  exit 1
fi

if [ -z "$OSC_ENV" ]; then
  echo "Error: Environment variable 'OSC_ENV' (TF_VAR_osc_env) is not set."
  exit 1
fi

TOKEN_URL="https://token.svc.$OSC_ENV.osaas.io/servicetoken"
ENCORE_TOKEN=$(curl -X 'POST' \
    $TOKEN_URL \
    -H 'Content-Type: application/json' \
    -H "x-pat-jwt: Bearer $OSC_PAT"  \
    -d '{"serviceId": "encore"}' | jq -r '.token')

curl -X 'POST' \
  "$ENCORE_URL/encoreJobs" \
  -H "x-jwt: Bearer $ENCORE_TOKEN" \
  -H 'accept: application/hal+json' \
  -H 'Content-Type: application/json' \
  -d '{
  "externalId": "'"$EXTERNAL_ID"'",
  "profile": "program",
  "outputFolder": "/usercontent/",
  "baseName": "'"$EXTERNAL_BASENAME"'",
  "progressCallbackUri": "'"$CALLBACK_URL/encoreCallback"'",
  "inputs": [
    {
      "uri": "'"$MEDIA_URL"'",
      "seekTo": 0,
      "copyTs": true,
      "type": "AudioVideo"
    }
  ]
}'

Example Usage

To run the script with an example media URL, use the following command:

./encoreJob.sh http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/WeAreGoingOnBullrun.mp4

This script will trigger a transcoding job for the specified media file and send progress updates to the provided callback URL. Make sure that all Terraform outputs are correctly set before running the script.

When a job has been submitted and you want to see its progress, you can go to the Encore Callback Listener service and open the instance logs to check that it is receiving the callbacks.

When the transcoding process is completed it will place a job on the transfer queue, which is picked up by the Encore Transfer service. When all transfer jobs are completed you will, in this example, find a set of files in your output bucket: variants with different resolutions and bitrates.

Destroy

When the transcoding is finished and no more jobs are required, we can tear down the running instances with a single command:

terraform destroy

Conclusion

You now have a fully-fledged video transcoding pipeline for preparing video files for streaming using SVT Encore, along with supporting services. The entire setup is based on open-source software, and you don’t need to set up your own infrastructure to get started. Additionally, the pipeline leverages Terraform for managing and deploying the infrastructure, making it easy to get up and running. Should you later choose to host everything yourself, you’re free to do so, as all the code and resources demonstrated here are available as open source.

Setup a Bluesky Personal Data Server in Open Source Cloud

Bluesky is a decentralized microblogging social media service based on open standards (the AT Protocol) and open source infrastructure, so that social communication can be as open and interoperable as the web itself. The AT Protocol (Authenticated Transfer Protocol, aka atproto) is a federated protocol for large-scale distributed social applications.

The three core services in the network are Personal Data Servers (PDS), Relays, and App Views. A personal data server is your home in the cloud: the server that hosts your data, distributes it, manages your identity, and orchestrates requests to other services to give you your views. A goal of the AT Protocol is to ensure that a user on one PDS can migrate their account to a new PDS without the server's involvement.

In this blog post we will describe how you can set up your own Personal Data Server, based on open source made available as a service, all for free.

Step 1: Create an account in Eyevinn Open Source Cloud

Navigate to www.osaas.io and click on Login/Signup. Enter your email to create an account and enter the login code you receive in your inbox. If this is the first time you logged in you need to create a tenant first.

Step 2: Create your own PDS

Navigate to the Bluesky Personal Data Server service by entering its name in the search bar at the top.

Click on the tab Service secrets and click on New Secret to create a secret for your administration password.

Click on the button “Create pds” and enter the name of your PDS and a reference to the secret you created.

Leave the input DnsName empty for now. This will be used when you add a CDN in front of the server and use a custom domain name. Press Create and wait for the indicator on the instance card to turn green.

Step 3: Create an invitation code

Now you have your own PDS up and running. To create an account on the server you first need to create an invitation code. This is done by sending an HTTP request to the PDS API. In this example we will use an HTTP API client available online.

Use Basic auth as the authentication method, with admin as the user and the administration password you created above as the password. As the URL, enter the URL shown on the instance card and append /xrpc/com.atproto.server.createInviteCode

In the body you enter the following JSON:

{ "useCount": 1 }

The code returned in the response is the invitation code, in this case demo-blog-bluesky-social-pds-auto-prod-osaas-io-5ito3-t5umt. This is the code you are using when creating an account on this server.

Step 4: Create an account

Download the Bluesky social app from your app store. When registering a new account, select a custom hosting provider and enter the URL of the PDS you created. Use the invitation code and enter an email and password. You will now have an account with a handle ending in .demo-blog.bluesky-social-pds.auto.prod.osaas.io and you are ready to go!

Advanced: Custom domain and CDN

To use a custom domain name for your service you need to be able to administer a DNS domain and CDN. We will not go through this in detail in this blog post. What you need to setup is the following:

  • 1. Decide and register a root domain name, e.g. my.org
  • 2. Decide what domain name you will use for the PDS, e.g. pds.my.org
  • 3. Create an SSL certificate for *.pds.my.org and pds.my.org
  • 4. Create a PDS in OSC as before, with the addition that you set DnsName to pds.my.org
  • 5. Setup a CDN property / distribution where origin is the URL to the PDS created above, e.g. demo-blog.bluesky-social-pds.auto.prod.osaas.io and use the SSL cert created in 3. It is important that the CDN uses the origin host in the request to the origin. Consult your CDN provider for how to configure this.
  • 6. Create DNS records *.pds.my.org and pds.my.org to point to the CDN distribution created in 5.

Conclusion

Creating your own Bluesky Personal Data Server based on open source takes only a few clicks and is a quick way to get your own self-hosted account and join the conversation in this open social media network.

Simplified access to cloud storage with Eyevinn OSC

There are several options for storing files in the cloud today, and in this blog post we will show how an open source project made available as a service in Eyevinn Open Source Cloud can simplify access to the storage. As an example we will use Akamai's S3-compatible Object Storage as the cloud storage.

Create storage bucket

Ref: https://techdocs.akamai.com/cloud-computing/docs/create-and-manage-buckets

1. Log in to Cloud Manager and select Object Storage from the left menu. If you currently have buckets on your account, they are listed on this page, along with their URL, region, size, and the number of objects (files) they contain.

2. One of the first steps to using Object Storage is to create a bucket. Here’s how to create a bucket using Cloud Manager, though you can also use the Linode CLI, s3cmd, and s4cmd.

3. Navigate to the Object Storage page in Cloud Manager (see View buckets).

4. Click the Create Bucket button to open the Create Bucket panel. If you have not created an access key or a bucket on this account, you are prompted to enable Object Storage.

5. Within the Create Bucket form, add a Label for the new bucket. This label must be unique and should not be used by any other bucket (from any customer) in the selected data center.

6. Choose a Region for the bucket to reside. See the Availability section on the Object Storage Overview page for a list of available regions.

7. Click Submit to create the bucket.

In this example we have created a bucket called “osc-blog” in the data center in Stockholm.

To access the bucket we just created we need an access key. Navigate to the Access Keys tab and press Create Access Key. Give the access key a name; in this case we will limit its access to only the bucket we created.

Copy and store the generated “access key id” and “secret key” as you will use these later.

Setup Cloud Storage Manager

In Eyevinn Open Source Cloud web console navigate to the service called Filestash and press “Create filestash”.

Give the service a name, for example “blog” in this case. Click on the instance card once it is in state running; a new page will open in a new tab or browser window. Then enter an administrator password for this Filestash storage manager instance.

In the navigation sidebar on the left click on the item “Backend”. Select S3 as storage backend.

You may remove the others, as we will only be using S3 in this example.

For simplicity we will use the ADMIN authentication middleware. This means that you will log in with the admin password you just created. In practice you might want to use at least HTPASSWD for more granular access control.

Enter the access key id and secret key.

The endpoint in this case is https://se-sto-1.linodeobjects.com as the bucket is located in region se-sto-1.
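
Before wiring the credentials into Filestash you can verify the access key and endpoint with the AWS CLI. A minimal sketch using the values from this example:

AWS_ACCESS_KEY_ID=<access-key-id> \
AWS_SECRET_ACCESS_KEY=<secret-key> \
aws s3 ls s3://osc-blog --endpoint-url https://se-sto-1.linodeobjects.com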

Upload a file

Now go back to the start page by clicking on the instance card and log in with the admin password that you created.

Now you can upload a file by using drag-and-drop.

Conclusion

With this open source project now available as a service in Eyevinn Open Source Cloud, you can give your users a simple and consistent user interface regardless of which cloud storage provider you are using. By using Eyevinn Open Source Cloud you also contribute to a sustainable business model for open source, as a share of the revenue goes to the open source creator.

Open source databases available as a service

In this blog post we will go through the open source databases that are available as a service in Eyevinn Open Source Cloud.

Databases are fundamental in many solutions, and some of the open source projects that are available as a service in Eyevinn Open Source Cloud depend on a database for storing data and state. There are a great number of databases to choose from, and in recent years a lot of open source alternatives have emerged. With open source you are not locked in with a single vendor, but it requires you to host and manage the database yourself. To reduce this barrier we have made a few of these open source databases available as a service, and more will be added. This enables you to run open source services on our platform that depend on a database, without having to host and manage the database server yourself. It is of course possible to use a database from another cloud provider if available; that choice is entirely up to you. And as with any other service on this platform, we give a share of the revenue back to the original creators.

Let us go through what is available in Eyevinn Open Source Cloud today.

Valkey

Valkey is a Redis-compatible high-performance key-value store that can serve many purposes where simplicity and performance are of the highest importance. It can be used as a processing queue in a VOD transcoding and packaging solution, or as the store for an application config service.

To create a Valkey instance simply navigate to the Valkey service in the Eyevinn Open Source Cloud web console and press Create. When the instance is created you obtain the IP and port to use on the instance card, e.g. redis://[IP]:[PORT].
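
Since Valkey is Redis compatible you can verify connectivity with the standard redis-cli. A minimal check, with the IP and port from the instance card as placeholders:

redis-cli -u redis://<IP>:<PORT> ping
PONG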

MariaDB

MariaDB is a relational database made by the original developers of MySQL and guaranteed to stay open source. It can be used as the database for a WordPress blog, for example the blog you are currently reading. This blog is powered by WordPress and MariaDB in Eyevinn Open Source Cloud (dogfooding). Another example is the database for SuiteCRM, available here.

To create a MariaDB database instance, navigate to the MariaDB service in the web console and press Create. Enter the root password and the database users you want to set up. These credentials are then used to connect to the database from the application. Obtain the IP and port from the instance card when constructing the connection URL.
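
You can then connect with the standard mysql (or mariadb) command line client. A minimal sketch, with the values from the instance card and your configured credentials as placeholders:

mysql -h <IP> -P <PORT> -u <user> -p <database>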

PostgreSQL

Another open source object-relational database is PostgreSQL. The origins of PostgreSQL date back to 1986 as part of the POSTGRES project at the University of California, Berkeley, and it has had more than 35 years of active development on the core platform. Navigate to the PostgreSQL service in the Open Source Cloud web console and press the Create button to launch a new database instance.

Enter the credentials and name of the database and press create.
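
As with MariaDB you connect with the standard client, in this case psql. A minimal sketch with placeholder values from the instance card:

psql -h <IP> -p <PORT> -U <user> <database>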

CouchDB

Apache CouchDB is an open source NoSQL document database that collects and stores data in JSON-based document formats and works well with modern web and mobile applications. You can access your documents with your web browser via HTTP. The CouchDB API is the primary method of interfacing with a CouchDB instance: HTTP requests are used to retrieve information from the database, store new data, and perform views and formatting of the information stored within the documents. It is simple and easy to use with any HTTP client.

Navigate to the CouchDB service in Open Source Cloud and press Create to start a new instance.

When the instance is up and running you can click on the instance card to go to the CouchDB web user interface.

Create a new database by clicking the Create database button in the top right corner. Then you can create your first document that you want to store.
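
Since everything in CouchDB is exposed over HTTP, the same operations can be performed with curl. A sketch, with the instance URL and admin credentials as placeholders:

# create a database called mydb
curl -X PUT https://admin:<PASSWORD>@<couchdb-url>/mydb
# store a first document in it
curl -X POST https://admin:<PASSWORD>@<couchdb-url>/mydb \
  -H 'Content-Type: application/json' \
  -d '{"title": "My first document"}'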

There are client libraries available; the official Apache CouchDB library for Node.js is called nano.

Conclusion

These are the open source databases that are available as a service in Eyevinn Open Source Cloud today. If you have a suggestion for another open source database that could be made available as a service, go to www.osaas.io and submit it there, or write a comment on this post.