
Advanced Composer Patterns

The compose() method handles the most common case — run all prompts in parallel and assemble the output. This page covers the manual flow for when you need more control:

  • Different settings per prompt — e.g. higher maxTokens for one prompt, lower for another
  • Streaming — stream individual prompts with streamText()
  • Structured output — use Output.object() for specific prompts
  • Partial results — assemble output even when some prompts are skipped
  • Custom post-processing — transform AI results before assembly

Each prompt segment in the composer is exposed as a top-level property, camelCased from the CMS prompt name (e.g. “Intro Prompt” becomes introPrompt). Every named prompt has an AI SDK compatible shape:

const { introPrompt, reviewPrompt } = await promptly.getComposer(
  'my-composer',
  { input: { topic: 'TypeScript' } },
);

introPrompt.model; // LanguageModel (auto-resolved from CMS config)
introPrompt.system; // 'You are a helpful assistant...' | undefined
introPrompt.prompt; // 'Write an intro for TypeScript.' (variables already interpolated)
introPrompt.temperature; // 0.7
introPrompt.promptId; // 'prompt-a'
introPrompt.promptName; // 'Intro Prompt'
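The camelCasing applied to prompt names follows the usual rule: lowercase the first word, capitalize the rest, strip the spaces. A rough illustrative sketch of that rule (not the SDK's actual implementation):

```typescript
// Illustrative only: how a CMS prompt name like "Intro Prompt"
// maps to the introPrompt property on the composer
function toCamelCase(name: string): string {
  return name
    .trim()
    .split(/\s+/)
    .map((word, i) =>
      i === 0
        ? word.toLowerCase()
        : word[0].toUpperCase() + word.slice(1).toLowerCase(),
    )
    .join('');
}

toCamelCase('Intro Prompt'); // 'introPrompt'
```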

The { model, system, prompt, temperature } shape matches exactly what generateText() and streamText() accept, so you can spread directly:

const result = await generateText(introPrompt);

All unique prompts are also available as an ordered array (document order, de-duplicated):

const composer = await promptly.getComposer('my-composer', {
  input: { topic: 'TypeScript' },
});

for (const prompt of composer.prompts) {
  console.log(prompt.promptName, prompt.promptId);
}

Fetch a composer, generate text for each prompt manually, then assemble with formatComposer():

import { createPromptlyClient } from '@promptlycms/prompts';
import { generateText } from 'ai';

const promptly = createPromptlyClient();

const { introPrompt, summaryPrompt, formatComposer } =
  await promptly.getComposer('report-builder', {
    input: { companyName: 'Acme Corp', quarter: 'Q1 2025' },
  });

// Run prompts in parallel for best performance
const [introResult, summaryResult] = await Promise.all([
  generateText(introPrompt),
  generateText(summaryPrompt),
]);

const report = formatComposer({
  introPrompt: introResult,
  summaryPrompt: summaryResult,
});

generateText() returns an object with a text property. formatComposer() accepts either the full result object (extracts .text automatically) or a raw string.

Spread the prompt and override individual parameters:

const result = await generateText({
  ...introPrompt,
  maxTokens: 500,
  topP: 0.9,
});

This works because the spread gives generateText() the model, system, prompt, and temperature fields, and your overrides take precedence for anything you add.

Provider-specific options pass through the same way:

const result = await generateText({
  ...introPrompt,
  providerOptions: {
    anthropic: { cacheControl: { type: 'ephemeral' } },
  },
});

For streaming responses, use streamText() instead of generateText(). Since streamText() accepts the same parameters, the composer prompts spread in the same way:

import { streamText } from 'ai';

const { introPrompt, formatComposer } = await promptly.getComposer(
  'my-composer',
  { input: { topic: 'AI Safety' } },
);

const stream = streamText(introPrompt);

// Stream chunks to the console as they arrive
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

// Await the full text for formatComposer
const output = formatComposer({
  introPrompt: await stream.text,
});
To stream multiple prompts at once, start every stream first, then await the final texts in parallel:

const { introPrompt, reviewPrompt, formatComposer } =
  await promptly.getComposer('my-composer', {
    input: { topic: 'TypeScript' },
  });

// Start all streams (streamText returns synchronously)
const introStream = streamText(introPrompt);
const reviewStream = streamText(reviewPrompt);

// Await the final text from each stream in parallel
const [introText, reviewText] = await Promise.all([
  introStream.text,
  reviewStream.text,
]);

const output = formatComposer({
  introPrompt: introText,
  reviewPrompt: reviewText,
});

In a server context, use toTextStreamResponse() to stream directly to the client:

// Stream a single prompt as an HTTP response
const { introPrompt } = await promptly.getComposer('my-composer', {
  input: { topic: 'AI' },
});

const stream = streamText(introPrompt);
return stream.toTextStreamResponse();

If a prompt in your composer needs structured output, use generateText() with the Output helper (AI SDK 6 pattern):

import { generateText, Output } from 'ai';
import { z } from 'zod';

const { analysisPrompt, formatComposer } = await promptly.getComposer(
  'my-composer',
  { input: { data: reportData } },
);

const result = await generateText({
  ...analysisPrompt,
  output: Output.object({
    schema: z.object({
      summary: z.string(),
      score: z.number().min(0).max(100),
    }),
  }),
});

// Convert structured output to string for formatComposer
const { summary, score } = result.output;

const output = formatComposer({
  analysisPrompt: `Summary: ${summary}\nScore: ${score}/100`,
});

formatComposer() takes an object keyed by prompt name and returns a single string with static HTML and AI text interleaved in the document order defined in the CMS.

// Full generateText() result objects (extracts .text)
formatComposer({
  introPrompt: await generateText(introPrompt),
  reviewPrompt: await generateText(reviewPrompt),
});

// Raw strings
formatComposer({
  introPrompt: 'Manually written intro.',
  reviewPrompt: 'Manually written review.',
});

// Mix of both
formatComposer({
  introPrompt: await generateText(introPrompt),
  reviewPrompt: 'Static fallback text.',
});

When the same prompt appears multiple times in a composer’s segments, you only provide the result once. formatComposer() reuses it at every position:

// "Intro Prompt" appears at positions 1 and 4 in the document
const output = formatComposer({
  introPrompt: await generateText(introPrompt),
});
// The same generated text appears at both positions

If you omit a prompt key, formatComposer() skips those positions gracefully:

// Only provide intro, skip review
const output = formatComposer({
  introPrompt: await generateText(introPrompt),
});
// Static segments still render; reviewPrompt positions are omitted
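The assembly behavior described above (interleave in document order, reuse duplicate prompts, skip omitted keys) can be sketched roughly as follows; the Segment type here is a hypothetical stand-in for the SDK's internal representation, not its real API:

```typescript
// Hypothetical internal shape: a composer is an ordered list of segments
type Segment =
  | { kind: 'static'; html: string }
  | { kind: 'prompt'; key: string };

type PromptResult = string | { text: string };

function assemble(
  segments: Segment[],
  results: Record<string, PromptResult>,
): string {
  const parts: string[] = [];
  for (const seg of segments) {
    if (seg.kind === 'static') {
      parts.push(seg.html); // static HTML always renders
      continue;
    }
    const result = results[seg.key];
    if (result === undefined) continue; // omitted key: skip this position
    // Accept a raw string or a generateText()-style result object
    parts.push(typeof result === 'string' ? result : result.text);
  }
  return parts.join('\n');
}
```

A duplicated prompt key is looked up once per position, so its text repeats wherever the segment appears, while positions with no matching key simply drop out.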

A Next.js API route that fetches a composer, generates text for each prompt with different settings, and returns the assembled document:

app/api/generate-report/route.ts
import { createPromptlyClient } from '@promptlycms/prompts';
import { generateText } from 'ai';

const promptly = createPromptlyClient();

export const POST = async (req: Request) => {
  const { companyName, quarter, metrics } = await req.json();

  const { introPrompt, analysisPrompt, conclusionPrompt, formatComposer } =
    await promptly.getComposer('quarterly-report', {
      input: { companyName, quarter, metrics: JSON.stringify(metrics) },
    });

  // Run all prompts in parallel with different settings
  const [intro, analysis, conclusion] = await Promise.all([
    generateText({ ...introPrompt, maxTokens: 300 }),
    generateText({ ...analysisPrompt, maxTokens: 1000 }),
    generateText({ ...conclusionPrompt, maxTokens: 200 }),
  ]);

  const report = formatComposer({
    introPrompt: intro,
    analysisPrompt: analysis,
    conclusionPrompt: conclusion,
  });

  return new Response(report, {
    headers: { 'Content-Type': 'text/html' },
  });
};
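If one prompt's generation fails, you can still return a partial document: settle all jobs, keep the fulfilled ones, and pass only those to formatComposer(), which skips missing keys. The settleByKey helper below is not part of the SDK, just an illustrative utility:

```typescript
// Illustrative helper (not part of the SDK): resolve a map of promises,
// keeping only the entries that fulfilled
async function settleByKey<T>(
  jobs: Record<string, Promise<T>>,
): Promise<Record<string, T>> {
  const keys = Object.keys(jobs);
  const settled = await Promise.allSettled(Object.values(jobs));
  const fulfilled: Record<string, T> = {};
  settled.forEach((res, i) => {
    if (res.status === 'fulfilled') fulfilled[keys[i]] = res.value;
  });
  return fulfilled;
}
```

In the route above, swapping Promise.all for settleByKey and spreading the result into formatComposer() would render the static segments plus whichever prompts succeeded, instead of failing the whole request on one bad generation.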