Serverless architecture is a model in which code runs as small, event-triggered functions instead of on long-running servers. These functions start only when needed and shut down after completing their task. This lets applications scale automatically and reduces infrastructure management.

Developers write only the logic, and the cloud provider handles execution, uptime, and scaling. Node.js is well suited to this model because of its lightweight runtime and asynchronous I/O. In serverless environments, it performs well for APIs, file processing, and background jobs.

What Serverless Architecture Means for Node.js

In a serverless setup, your app is broken into small functions. Each function does a specific thing and runs when needed. For example, you might have a function that handles user login, another that processes uploaded images, and another that sends emails.

Server management tasks such as uptime monitoring, memory allocation, and scaling are handled by the cloud provider. You write your function using Node.js, and it runs whenever it’s triggered.

This works well with Node.js because its event loop handles many tasks at once (like web requests) without waiting for one task to finish before starting another. That’s exactly what’s needed in a system that runs short, on-demand tasks.
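This concurrency is easy to see in plain Node.js. A minimal sketch (the task names and delays are made up for illustration): two simulated I/O tasks started together with Promise.all finish in roughly the time of the slower one, not the sum of both.

```javascript
// Simulate a short I/O-bound task (hypothetical example, not a real API call).
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function handleRequests() {
  // Both "requests" start immediately; total time is ~50 ms, not ~100 ms.
  const [profile, settings] = await Promise.all([
    delay(50, 'user-profile'),
    delay(50, 'user-settings'),
  ]);
  return { profile, settings };
}
```

This is why a single short-lived function instance can still serve work efficiently: while one operation waits on I/O, others proceed.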

Basic Serverless Function in Node.js

A serverless function in Node.js is a small piece of code triggered by events like HTTP requests. It runs in isolation, handles a single task, and returns a response. This keeps the logic simple and easy to manage.

```javascript
exports.handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello from serverless Node.js!" }),
  };
};
```

Explanation:

  • exports.handler is the function the cloud will run when the event happens.
  • event contains details about the trigger, like an HTTP request.
  • statusCode is the response code for the request (200 means OK).
  • body is the response payload, serialized to a string with JSON.stringify.
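Because the handler is just an exported async function, it can also be invoked locally with a mock event before deploying. A quick sanity check, using the same handler shown above (the mock event fields are illustrative):

```javascript
// Same handler as above, defined locally for the check.
const handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello from serverless Node.js!" }),
  };
};

// Invoke with a minimal mock of an HTTP trigger event.
handler({ httpMethod: 'GET', path: '/hello' }).then((response) => {
  console.log(response.statusCode); // 200
  console.log(JSON.parse(response.body).message);
});
```

Testing this way catches basic mistakes (bad JSON, missing fields) without a deploy cycle.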

Deploying Node.js Functions to the Cloud

After writing a function, it must be deployed to a cloud platform to make it usable. Configuration files define how the function should run and what should trigger it. This setup allows the cloud to handle the execution automatically.

Example configuration file (serverless.yml) for deploying this function to AWS:

```yaml
service: hello-node

provider:
  name: aws
  runtime: nodejs18.x

functions:
  hello:
    handler: handler.handler
    events:
      - http:
          path: hello
          method: get
```

Explanation:

  • service: The name of your serverless service/project.
  • provider.runtime: Specifies the Node.js version the function runs on.
  • functions.hello.handler: Points to the exported handler in your code file (handler.js).
  • events.http: Defines an HTTP GET event that triggers the function at the /hello path.
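With this file in place, the Serverless Framework CLI handles packaging and deployment. A typical session looks like this (assuming AWS credentials are already configured in your environment):

```shell
# Install the Serverless Framework CLI once
npm install -g serverless

# From the project directory containing serverless.yml and handler.js
serverless deploy

# Invoke the deployed function directly for a quick check
serverless invoke -f hello
```

After `serverless deploy` finishes, the CLI prints the HTTP endpoint for the /hello path.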

Processing Video Files Using Serverless Functions

When a user uploads a video to cloud storage (such as Amazon S3), a function can be triggered to convert the video format, extract metadata, or generate thumbnails. Each of these steps can be performed by a separate function to keep the logic modular and manageable.

Node.js supports libraries like fluent-ffmpeg for working with video files in serverless environments (note that fluent-ffmpeg is a wrapper and needs an ffmpeg binary available at runtime, typically supplied via a Lambda layer). These functions can run independently and handle tasks in parallel, such as encoding a video into multiple resolutions or uploading a processed result to another bucket or database.

Example:

```javascript
const AWS = require('aws-sdk');
const ffmpeg = require('fluent-ffmpeg'); // needs an ffmpeg binary in the environment (e.g. a Lambda layer)
const fs = require('fs');
const path = require('path');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const bucket = event.Records[0].s3.bucket.name;
  // S3 event keys are URL-encoded; decode before using them as object keys
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

  // Use only the file name locally, in case the key contains slashes
  const fileName = path.basename(key);
  const inputFile = `/tmp/${fileName}`;
  const outputFile = `/tmp/processed-${fileName}`;

  const params = { Bucket: bucket, Key: key };
  const file = fs.createWriteStream(inputFile);

  // Download the file from S3
  await new Promise((resolve, reject) => {
    s3.getObject(params).createReadStream()
      .pipe(file)
      .on('finish', resolve)
      .on('error', reject);
  });

  // Process the video using ffmpeg
  await new Promise((resolve, reject) => {
    ffmpeg(inputFile)
      .output(outputFile)
      .size('640x360')
      .on('end', resolve)
      .on('error', reject)
      .run();
  });

  // Upload the processed video back to S3
  const processedData = fs.readFileSync(outputFile);
  await s3.putObject({
    Bucket: bucket,
    Key: `processed/${key}`,
    Body: processedData,
    ContentType: 'video/mp4'
  }).promise();

  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Video processed successfully' })
  };
};
```

Explanation:

  • ffmpeg: Used to resize or convert the video format.
  • /tmp: Temporary file storage path available in serverless environments like AWS Lambda.
  • s3.getObject(): Downloads the uploaded video file from the S3 bucket.
  • s3.putObject(): Uploads the processed video back to S3 in a new path (e.g., processed/).

This function triggers automatically when a new video is uploaded and processes it without needing a permanent server.
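With the Serverless Framework, that upload trigger is declared in serverless.yml. A minimal sketch (the bucket name and the uploads/ prefix are hypothetical choices, not required values):

```yaml
functions:
  processVideo:
    handler: handler.handler
    events:
      - s3:
          bucket: my-video-uploads        # hypothetical bucket name
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/            # fire only for new uploads
```

The prefix rule matters here: without it, writing the result to processed/ in the same bucket would trigger the function again and loop indefinitely.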

How Serverless Node.js Handles Data

Serverless functions do not keep state between runs: nothing from one invocation carries over to the next. So if your app needs to store or read data (like user information), you’ll need to connect it to a database. Here’s an example using AWS DynamoDB:

```javascript
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  const params = {
    TableName: 'Users',
    Key: { userId: event.userId }
  };

  try {
    const data = await dynamo.get(params).promise();
    return {
      statusCode: 200,
      body: JSON.stringify(data.Item)
    };
  } catch (error) {
    return {
      statusCode: 500,
      body: JSON.stringify({ error: 'Failed to fetch user' })
    };
  }
};
```

Explanation:

  • AWS.DynamoDB.DocumentClient(): AWS SDK client for interacting with DynamoDB in JavaScript-friendly format.
  • params.TableName: The DynamoDB table to query.
  • params.Key: The primary key used to retrieve the specific user item.
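Writing data follows the same pattern with dynamo.put(). As a minimal sketch (the 'Users' table name and attribute names are illustrative), the parameter object can be built in a small helper separate from the SDK call, which also makes it easy to unit-test:

```javascript
// Build the parameter object that would be passed to dynamo.put(params).promise().
// Table and attribute names here are hypothetical, not fixed by any API.
function buildPutParams(user) {
  return {
    TableName: 'Users',
    Item: {
      userId: user.userId,                 // partition key
      name: user.name,
      createdAt: new Date().toISOString(), // useful for auditing
    },
  };
}

// Inside the handler this would be used as:
//   await dynamo.put(buildPutParams(user)).promise();
```

Separating parameter construction from the network call keeps the handler thin and the data shape testable without AWS credentials.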

Logging and Debugging

Since serverless functions don’t run on your computer, you won’t see what’s going on unless you add logs. In Node.js, you can use console.log() to print messages, and those will appear in your cloud provider’s log system.

Example:

```javascript
exports.handler = async (event) => {
  console.log('Received event:', JSON.stringify(event));
  // Function logic here
  return {
    statusCode: 200,
    body: "Done",
  };
};
```

Explanation:

  • console.log: Prints data that you can later view in cloud logs (like AWS CloudWatch).
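Plain strings work, but logging one JSON object per entry makes logs much easier to search in tools like CloudWatch Logs Insights. A small helper along these lines (the field names are just a convention, not a required format):

```javascript
// Emit one JSON object per log line so individual fields can be queried later.
function logEvent(level, message, extra = {}) {
  const entry = JSON.stringify({
    level,
    message,
    timestamp: new Date().toISOString(),
    ...extra, // e.g. requestId, userId
  });
  console.log(entry);
  return entry; // returned to make the helper easy to test
}

logEvent('info', 'function started', { requestId: 'abc-123' });
```

Since every line is valid JSON, a log query can filter on fields like level or requestId instead of matching raw text.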