Video-on-demand platforms rely on metadata such as titles, thumbnails, tags, captions, and categories for search, recommendations, and structured playback. Managing this metadata manually does not scale when handling hundreds or thousands of uploads. Automating metadata workflows in Contentful ensures consistent schema, real-time updates from storage or transcoding pipelines, and immediate availability to applications via APIs.

Designing Metadata Models in Contentful

The first step in automation is defining a schema in Contentful. A Video content type should include fields for essential metadata, while relationships connect videos to playlists, categories, or series. This allows modeling larger catalogs such as OTT libraries.

Example: Video Content Type Schema in JavaScript

code
const videoContentType = {
  name: "Video",
  fields: [
    { id: "title", type: "Text" },
    { id: "description", type: "Text" },
    { id: "duration", type: "Integer" },
    { id: "tags", type: "Array", items: { type: "Symbol" } },
    { id: "thumbnail", type: "Link", linkType: "Asset" },
    { id: "captions", type: "Link", linkType: "Asset" },
    { id: "playbackUrl", type: "Text" },
    { id: "relatedPlaylist", type: "Link", linkType: "Entry" },
  ],
};

export default videoContentType;

Explanations:

  • playbackUrl: URL of the adaptive streaming manifest (.m3u8 for HLS or .mpd for DASH).
  • relatedPlaylist: Reference to a playlist entry.

Contentful also supports localization, allowing each field to hold multiple language variants, e.g. "fr-FR": "Titre".
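When localization is enabled on a field, entry payloads key each value by locale code. A minimal sketch of that shape (the locale codes are per-space settings, and the values here are purely illustrative):

```javascript
// Sketch: the locale-keyed shape a localized entry payload takes.
// Locale codes depend on the space configuration; values are examples.
const localizedFields = {
  title: {
    "en-US": "Title",
    "fr-FR": "Titre",
  },
  description: {
    "en-US": "A short synopsis.",
    "fr-FR": "Un court synopsis.",
  },
};
```

Non-localized fields keep a single default-locale key, so the same payload structure works either way.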


Connecting Video Sources to Contentful

Metadata should be pushed automatically from video storage or transcoding pipelines into Contentful. For example, when a file is uploaded to AWS S3, a Lambda function can extract metadata using FFmpeg and create an entry in Contentful.

Example: Creating a Video Entry from an S3 Upload

code
import { createClient } from "contentful-management";
import { getVideoMetadata } from "./metadata-utils"; // ffmpeg wrapper

const client = createClient({ accessToken: process.env.CONTENTFUL_MANAGEMENT });

export async function handleUploadEvent(s3Event) {
  const { key, bucket } = s3Event;
  const metadata = await getVideoMetadata(bucket, key);

  const space = await client.getSpace(process.env.CONTENTFUL_SPACE_ID);
  const env = await space.getEnvironment("master");

  await env.createEntry("video", {
    fields: {
      title: { "en-US": key },
      duration: { "en-US": metadata.duration },
      description: { "en-US": metadata.description || "" },
      tags: { "en-US": metadata.tags || [] },
    },
  });
}

Explanations:

  • getVideoMetadata(bucket, key): Extracts duration, resolution, etc.
  • createClient(...): Initializes Contentful Management API client.
  • env.createEntry("video", ...): Creates a new video entry in Contentful.
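Two details are worth noting. First, entries created through the Management API start as drafts and stay invisible to the Delivery API until published (entry.publish()). Second, the locale-keyed mapping is easy to factor into a pure helper, which keeps the Lambda thin and lets the mapping be unit-tested without network access. A sketch, where the helper name and defaulting rules are assumptions rather than Contentful API:

```javascript
// Hypothetical helper: map extracted metadata onto the locale-keyed
// field shape expected by env.createEntry("video", { fields }).
function buildVideoFields(key, metadata, locale = "en-US") {
  return {
    title: { [locale]: key },
    // The Integer field type rejects fractional seconds, so round.
    duration: { [locale]: Math.round(metadata.duration || 0) },
    description: { [locale]: metadata.description || "" },
    tags: { [locale]: metadata.tags || [] },
  };
}
```

The Lambda then reduces to `await env.createEntry("video", { fields: buildVideoFields(key, metadata) })` followed by a publish step if the entry should go live immediately.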

Automating Metadata Workflows

Automation keeps metadata synchronized with the video lifecycle. When transcoding completes, services like AWS MediaConvert can trigger a webhook to update playback URLs in Contentful.

Example: Updating a Video Entry with a Playback URL

code
const entry = await env.getEntry(videoEntryId);
entry.fields.playbackUrl = {
  "en-US": "https://cdn.example.com/video/123/playlist.m3u8",
};
await entry.update();

Explanations:

  • env.getEntry(videoEntryId): Fetches a specific entry.
  • entry.fields.playbackUrl: Adds or updates the playback URL.
  • entry.update(): Commits changes to Contentful.
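The webhook side stays small if event parsing is separated from the Contentful call. The sketch below assumes MediaConvert's EventBridge job-state-change event shape, with the entry ID and CDN URL passed through the job's userMetadata; both key names (videoEntryId, playbackUrl) are assumptions that would be set when the transcoding job is created:

```javascript
// Sketch: extract what the Contentful update needs from an assumed
// MediaConvert job-state-change event. Returns null unless the job
// completed successfully.
function parseCompletionEvent(event) {
  const { status, userMetadata = {} } = event.detail || {};
  if (status !== "COMPLETE") return null;
  return {
    entryId: userMetadata.videoEntryId,
    playbackUrl: userMetadata.playbackUrl,
  };
}
```

A Lambda subscribed to these events would call `parseCompletionEvent`, and if it returns a result, run the getEntry/update sequence shown above with those values.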

Exposing Metadata to Applications

Applications query Contentful via REST or GraphQL APIs. The data can then be consumed by frontends to display video details alongside the player.

Example: Next.js API Route Fetching Metadata

code
export default async function handler(req, res) {
  // Pass the ID as a GraphQL variable rather than interpolating it into
  // the query string, which avoids injection and escaping issues.
  const query = `
    query VideoById($id: String!) {
      videoCollection(where: { sys: { id: $id } }, limit: 1) {
        items {
          title
          description
          duration
          tags
          playbackUrl
          thumbnail { url }
        }
      }
    }
  `;

  const response = await fetch(
    `https://graphql.contentful.com/content/v1/spaces/${process.env.CONTENTFUL_SPACE_ID}`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.CONTENTFUL_DELIVERY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ query, variables: { id: req.query.id } }),
    },
  );

  const data = await response.json();
  res.json(data.data.videoCollection.items[0]);
}

Explanations:

  • req.query.id: Video ID from request.
  • videoCollection(where: { sys: { id: ... } }): GraphQL query filtering by ID.
  • fetch(...): Calls Contentful GraphQL API.
  • res.json(...): Returns metadata in JSON.
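From the browser, the route can be consumed with a small fetch wrapper. The /api/video path here is an assumption; it depends on where the handler file actually lives in the Next.js project:

```javascript
// Sketch: fetch metadata for one video from the API route above.
// The route path is assumed; adjust to the actual file location.
async function loadVideo(id) {
  const res = await fetch(`/api/video?id=${encodeURIComponent(id)}`);
  if (!res.ok) throw new Error(`metadata request failed: ${res.status}`);
  return res.json();
}
```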

On the frontend, this API can be consumed to render a video player with associated metadata.

Example: Rendering Metadata with a Video Player

code
export default function VideoPlayer({ video }) {
  return (
    <div>
      <h2>{video.title}</h2>
      <p>{video.description}</p>
      <video
        controls
        width="640"
        height="360"
        src={video.playbackUrl}
        poster={video.thumbnail.url}
      />
    </div>
  );
}

Explanations:

  • video.title, video.description: Renders metadata in UI.
  • src={video.playbackUrl}: Loads adaptive stream.
  • poster={video.thumbnail.url}: Displays thumbnail before playback.
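One caveat with the plain `src` approach: most browsers other than Safari cannot play HLS (.m3u8) natively, so production players usually fall back to a library such as hls.js. A sketch of that decision, with the library passed in as a parameter so the branching logic stays testable (hls.js does expose isSupported, loadSource, and attachMedia; the string return values are illustrative):

```javascript
// Sketch: use native HLS where available (Safari), otherwise attach the
// stream via an hls.js-style library injected as HlsLib.
function attachStream(videoEl, playbackUrl, HlsLib) {
  if (videoEl.canPlayType("application/vnd.apple.mpegurl")) {
    videoEl.src = playbackUrl; // Safari plays HLS natively
    return "native";
  }
  if (HlsLib && HlsLib.isSupported()) {
    const hls = new HlsLib();
    hls.loadSource(playbackUrl);
    hls.attachMedia(videoEl);
    return "hls.js";
  }
  return "unsupported";
}
```

In the component above, this would run in an effect hook against a ref to the `<video>` element instead of setting `src` directly.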

End-to-End Workflow Example

A complete automated workflow looks like this:

Step 1: A video is uploaded to storage (e.g., S3).

Step 2: A serverless function extracts metadata and creates a Contentful entry.

Step 3: A transcoding service generates adaptive streams and updates playback URLs via webhook.

Step 4: Applications query Contentful APIs to render metadata and stream playback.

This pipeline keeps video metadata and playback links synchronized without manual intervention, scales with catalog size, and ensures consistent delivery across applications.