Lesson 5: Creating dynamic Open Graph images
Generate dynamic Open Graph images that pull your data directly from Sanity, saving you hours of design work and ensuring your social previews are always up to date with your content.
Open Graph images (or social cards) are the preview images that appear when your content is shared on social media platforms. Including these images with your social shares has been shown to increase click-through rates.
Depending on which platform you're sharing to, you may want to create a range of different aspect ratios. For this tutorial, you'll create the most common size: 1200x630 pixels.
As always, you'll set this up in such a way that if you do upload a bespoke image to the seo.image field, it will override the automatically generated one.
By the end of this lesson, you'll be able to:
- Generate dynamic Open Graph images using Next.js Edge Runtime
- Extract and use dominant colors from featured images
- Create professional, branded social previews
Let's create a new API route using Next.js Edge Runtime. This route will:
- Accept a parameter to dynamically fetch data
- Return an image response using Next.js ImageResponse
Before you proceed any further, make sure you have read the limitations of Open Graph image generation on Vercel.
Create a new route in your Next.js application

src/app/api/og/route.tsx

```tsx
import { ImageResponse } from "next/og";

export const runtime = "edge";

const dimensions = {
  width: 1200,
  height: 630,
};

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const title = searchParams.get("title");

  return new ImageResponse(
    (
      <div tw="flex w-full h-full bg-blue-500 text-white p-10">
        <h1 tw="text-6xl font-bold">{title || "Missing title parameter"}</h1>
      </div>
    ),
    dimensions
  );
}
```
Visit http://localhost:3000/api/og?title=hello and you should see a rendered image of a blue rectangle with the word "hello" in the top left. This creates a route that generates an image rendering whatever was passed in the title parameter.
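If you build these links yourself, remember that titles containing spaces or special characters need to be URL-encoded. A small sketch using the standard URLSearchParams API (the title value here is just an example):

```typescript
// Build a test URL for the OG route, encoding the title parameter safely
const params = new URLSearchParams({ title: "Hello, world!" });
const testUrl = `/api/og?${params}`; // "/api/og?title=Hello%2C+world%21"
```

URLSearchParams handles the form-encoding for you, so commas, spaces, and punctuation in the title won't break the query string.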
We're using Tailwind CSS utility classes in a tw prop for styling. If you use className you will get an error. This is all part of how the ImageResponse function works.
This works, but isn't much to look at. And at present, any user could enter any value for the title parameter and have it render a custom image. Not safe! If we're going to render content, it's better to do so from a single source of truth: your Content Lake.
We'll need to fetch specific data for our OG images:
- Page title
- Featured image URL
- Color palette information
There's lots of neat metadata you can pull from Sanity images, such as the dominant colors within an image. We'll use these as part of the design.
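For reference, the palette object on an image asset's metadata contains named swatches (vibrant, darkVibrant, muted, and so on), each with background and foreground colors. Here's a minimal sketch of reading a swatch color with a fallback; the type names, helper, and sample values are illustrative, not part of the course code:

```typescript
// Approximate shape of a swatch in Sanity's image palette metadata
type PaletteSwatch = {
  background: string;
  foreground: string;
  title: string;
};

type Palette = Partial<
  Record<"vibrant" | "darkVibrant" | "muted" | "dominant", PaletteSwatch>
>;

// Pick a swatch's background color, falling back to a default brand color
function swatchBackground(
  palette: Palette | undefined,
  name: keyof Palette,
  fallback = "#3B82F6"
): string {
  return palette?.[name]?.background ?? fallback;
}

// Illustrative sample — real values come from the asset's metadata.palette
const sample: Palette = {
  vibrant: { background: "#D97706", foreground: "#FFFFFF", title: "#FFF" },
};

swatchBackground(sample, "vibrant"); // "#D97706"
swatchBackground(sample, "darkVibrant"); // falls back to "#3B82F6"
```

This is the same optional-chaining-with-fallback pattern the route below uses, which keeps the design working even for documents without a featured image.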
Update queries with a GROQ query for the data needed to generate an image

src/sanity/lib/queries.ts

```ts
// ...all other queries

export const OG_IMAGE_QUERY = defineQuery(`
  *[_id == $id][0]{
    title,
    "image": mainImage{
      ...,
      asset->{
        url,
        metadata {
          palette
        }
      }
    }
  }
`);
```

Note that the query projects the full mainImage object, including its crop and hotspot, and dereferences the asset inside it, so the route can read both image.asset.metadata.palette and pass image straight to urlFor.
Update the route that generates the OG image to fetch data based on the id search parameter

src/app/api/og/route.tsx
```tsx
import { client } from "@/sanity/lib/client";
import { urlFor } from "@/sanity/lib/image";
import { OG_IMAGE_QUERY } from "@/sanity/lib/queries";
import { notFound } from "next/navigation";
import { ImageResponse } from "next/og";

export const runtime = "edge";

async function loadGoogleFont(font: string, text: string) {
  const url = `https://fonts.googleapis.com/css2?family=${font}&text=${encodeURIComponent(text)}`;
  const css = await (await fetch(url)).text();
  const resource = css.match(
    /src: url\((.+)\) format\('(opentype|truetype)'\)/
  );

  if (resource) {
    const response = await fetch(resource[1]);
    if (response.status === 200) {
      return await response.arrayBuffer();
    }
  }

  throw new Error("failed to load font data");
}

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const id = searchParams.get("id");

  if (!id) {
    notFound();
  }

  const data = await client.fetch(OG_IMAGE_QUERY, { id });

  if (!data) {
    notFound();
  }

  const vibrantBackground =
    data?.image?.asset?.metadata?.palette?.vibrant?.background ?? "#3B82F6";
  const darkVibrantBackground =
    data?.image?.asset?.metadata?.palette?.darkVibrant?.background ?? "#3B82F6";

  const text = data.title || "";

  return new ImageResponse(
    (
      <div
        tw="flex w-full h-full relative"
        style={{
          background: `linear-gradient(135deg, ${vibrantBackground} 0%, ${darkVibrantBackground} 100%)`,
        }}
      >
        {/* Content container */}
        <div tw="flex flex-row w-full h-full relative">
          {/* Text content */}
          <div tw="flex-1 flex items-center px-10">
            <h1 tw="text-7xl tracking-tight leading-tight text-white">
              {text}
            </h1>
          </div>

          {/* Image container */}
          {data.image && (
            <div tw="flex w-[500px] h-[630px] overflow-hidden">
              {/* eslint-disable-next-line @next/next/no-img-element */}
              <img
                src={urlFor(data.image).width(500).height(630).url()}
                alt=""
                tw="w-full h-full object-cover"
              />
            </div>
          )}
        </div>
      </div>
    ),
    {
      width: 1200,
      height: 630,
      fonts: [
        {
          name: "Inter",
          data: await loadGoogleFont("Inter", text),
          weight: 400,
          style: "normal",
        },
      ],
    }
  );
}
```
This query fetches the page title, image, and its color palette information—all based on the value of an ID passed to the route. It uses this data to create a dynamic background color based on the image.
The route now also uses the font Inter, fetched from Google Fonts.
You can test this route by visiting /api/og?id=your-document-id in your browser, replacing your-document-id with an actual Sanity document ID.
The image template includes:
- A dynamic background color based on the featured image
- The page title
- The featured image, respecting its crop and hotspot settings
What we have now is a basic, but working, prototype for the future. You could extend this design or even explore creating different layouts depending on the value of the document's _type.
Now that you have your Open Graph image generation set up, it will need to be added to each route's metadata so that it renders when that URL is shared.
Update the generateMetadata function in your page and post routes to use the dynamically generated Open Graph image, if an image is not specified in the document

src/app/(frontend)/[slug]/page.tsx
```tsx
// ...all your imports

export async function generateMetadata({
  params,
}: RouteProps): Promise<Metadata> {
  const { data: page } = await getPage(params);

  if (!page) {
    return {};
  }

  const metadata: Metadata = {
    title: page.seo.title,
    description: page.seo.description,
  };

  metadata.openGraph = {
    images: {
      url: page.seo.image
        ? urlFor(page.seo.image).width(1200).height(630).url()
        : `/api/og?id=${page._id}`,
      width: 1200,
      height: 630,
    },
  };

  if (page.seo.noIndex) {
    metadata.robots = "noindex";
  }

  return metadata;
}
```
Be sure to copy this logic over to your individual post route as well.
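Since the page and post routes share this fallback logic, you could extract it into a small helper to avoid drift between the two. A sketch, assuming a helper name of our own invention (ogImageFor) that takes the document ID and an optional pre-built bespoke image URL:

```typescript
type OgImage = { url: string; width: number; height: number };

// Build the openGraph.images entry: prefer a bespoke uploaded image URL,
// otherwise fall back to the dynamic /api/og route for this document.
function ogImageFor(id: string, bespokeUrl?: string | null): OgImage {
  return {
    url: bespokeUrl ?? `/api/og?id=${id}`,
    width: 1200,
    height: 630,
  };
}

ogImageFor("post-123"); // { url: "/api/og?id=post-123", width: 1200, height: 630 }
ogImageFor("post-123", "https://example.com/a.jpg"); // uses the bespoke URL
```

In generateMetadata you would then call something like ogImageFor(page._id, page.seo.image ? urlFor(page.seo.image).width(1200).height(630).url() : null), keeping the dimensions in one place.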
This setup generates metadata dynamically for each page, uses the page's Sanity ID to generate the correct Open Graph image, and maintains consistent dimensions across platforms.
There are a few ways to test your implementation. If you have a service like ngrok set up locally, you can tunnel your local development environment to an external URL, and then run that URL through an Open Graph previewing service.
opengraph.ing is a simple service for validating your social previews across multiple applications and services.
Once you're ready to deploy, you can check the implementation in your preview environment. After deploying, you can use the Vercel toolbar to preview your site and see the Open Graph image.
Other alternatives include platform-specific debugging tools, such as the Facebook Sharing Debugger and the LinkedIn Post Inspector. These tools let you test your Open Graph image on the platform you are sharing to and see the preview. They have an added benefit: some of them force the cache to be invalidated, which means you can see the latest version of your Open Graph image on that social platform.
The next lesson will cover remixing content for social platforms using the Sanity AI Assistant.