# Advanced Composer Patterns
The `compose()` method handles the most common case — run all prompts in parallel and assemble the output. This page covers the manual flow for when you need more control:

- Different settings per prompt — e.g. a higher `maxTokens` for one prompt, lower for another
- Streaming — stream individual prompts with `streamText()`
- Structured output — use `Output.object()` for specific prompts
- Partial results — assemble output even when some prompts are skipped
- Custom post-processing — transform AI results before assembly
## Named prompt access

Each prompt segment in the composer is exposed as a top-level property, camelCased from the CMS prompt name (e.g. “Intro Prompt” becomes `introPrompt`). Every named prompt has an AI SDK compatible shape:
```ts
const { introPrompt, reviewPrompt } = await promptly.getComposer(
  'my-composer',
  { input: { topic: 'TypeScript' } },
);
```
```ts
introPrompt.model; // LanguageModel (auto-resolved from CMS config)
introPrompt.system; // 'You are a helpful assistant...' | undefined
introPrompt.prompt; // 'Write an intro for TypeScript.' (variables already interpolated)
introPrompt.temperature; // 0.7
introPrompt.promptId; // 'prompt-a'
introPrompt.promptName; // 'Intro Prompt'
```

The `{ model, system, prompt, temperature }` shape matches exactly what `generateText()` and `streamText()` accept, so you can spread directly:

```ts
const result = await generateText(introPrompt);
```
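The name-to-property conversion described above can be sketched as follows. This is only an illustration of the mapping, not the library's actual implementation:

```typescript
// Hypothetical sketch: camelCase a CMS prompt name the way the composer
// exposes it as a property ("Intro Prompt" -> introPrompt).
function toCamelCase(name: string): string {
  return name
    .split(/\s+/)
    .map((word, i) =>
      i === 0
        ? word.toLowerCase()
        : word[0].toUpperCase() + word.slice(1).toLowerCase(),
    )
    .join('');
}

toCamelCase('Intro Prompt'); // 'introPrompt'
```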
## The prompts array

All unique prompts are also available as an ordered array (document order, de-duplicated):
```ts
const composer = await promptly.getComposer('my-composer', {
  input: { topic: 'TypeScript' },
});

for (const prompt of composer.prompts) {
  console.log(prompt.promptName, prompt.promptId);
}
```
## Using with generateText()

Fetch a composer, generate text for each prompt manually, then assemble with `formatComposer()`:
```ts
import { createPromptlyClient } from '@promptlycms/prompts';
import { generateText } from 'ai';

const promptly = createPromptlyClient();

const { introPrompt, summaryPrompt, formatComposer } =
  await promptly.getComposer('report-builder', {
    input: { companyName: 'Acme Corp', quarter: 'Q1 2025' },
  });

// Run prompts in parallel for best performance
const [introResult, summaryResult] = await Promise.all([
  generateText(introPrompt),
  generateText(summaryPrompt),
]);

const report = formatComposer({
  introPrompt: introResult,
  summaryPrompt: summaryResult,
});
```

`generateText()` returns an object with a `text` property. `formatComposer()` accepts either the full result object (it extracts `.text` automatically) or a raw string.
## Overriding AI SDK parameters

Spread the prompt and override individual parameters:
```ts
const result = await generateText({
  ...introPrompt,
  maxTokens: 500,
  topP: 0.9,
});
```

This works because the spread gives `generateText()` the `model`, `system`, `prompt`, and `temperature` fields, and any keys you add after the spread take precedence.
### Provider-specific options

```ts
const result = await generateText({
  ...introPrompt,
  providerOptions: {
    anthropic: { cacheControl: { type: 'ephemeral' } },
  },
});
```
## Using with streamText()

For streaming responses, use `streamText()` instead of `generateText()`. Since `streamText()` accepts the same parameters, the composer prompts spread in the same way:
```ts
import { streamText } from 'ai';

const { introPrompt, formatComposer } = await promptly.getComposer(
  'my-composer',
  { input: { topic: 'AI Safety' } },
);

const stream = streamText(introPrompt);

// Stream chunks to the console as they arrive
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

// Await the full text for formatComposer
const output = formatComposer({
  introPrompt: await stream.text,
});
```

### Streaming all prompts
Section titled “Streaming all prompts”const { introPrompt, reviewPrompt, formatComposer } = await promptly.getComposer('my-composer', { input: { topic: 'TypeScript' }, });
// Start all streams (streamText returns synchronously)const introStream = streamText(introPrompt);const reviewStream = streamText(reviewPrompt);
// Await the final text from each stream in parallelconst [introText, reviewText] = await Promise.all([ introStream.text, reviewStream.text,]);
const output = formatComposer({ introPrompt: introText, reviewPrompt: reviewText,});Returning a streaming HTTP response
### Returning a streaming HTTP response

In a server context, use `toTextStreamResponse()` to stream directly to the client:
```ts
// Stream a single prompt as an HTTP response
const { introPrompt } = await promptly.getComposer('my-composer', {
  input: { topic: 'AI' },
});

const stream = streamText(introPrompt);
return stream.toTextStreamResponse();
```
## Using with structured output

If a prompt in your composer needs structured output, use `generateText()` with the `Output` helper (AI SDK 6 pattern):
```ts
import { generateText, Output } from 'ai';
import { z } from 'zod';

const { analysisPrompt, formatComposer } = await promptly.getComposer(
  'my-composer',
  { input: { data: reportData } },
);

const result = await generateText({
  ...analysisPrompt,
  output: Output.object({
    schema: z.object({
      summary: z.string(),
      score: z.number().min(0).max(100),
    }),
  }),
});

// Convert structured output to string for formatComposer
const { summary, score } = result.output;
const output = formatComposer({
  analysisPrompt: `Summary: ${summary}\nScore: ${score}/100`,
});
```
## formatComposer() in depth

`formatComposer()` takes an object keyed by prompt name and returns a single string with static HTML and AI text interleaved in the document order defined in the CMS.
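Conceptually, the assembly walks the composer's segments in order and substitutes each prompt's result. The sketch below is a hypothetical model of that behavior (the `Segment` type and `assemble` function are illustrative, not the library's API):

```typescript
// Hypothetical model of formatComposer's assembly: each segment is either
// static HTML or a reference to a named prompt's result.
type Segment =
  | { kind: 'static'; html: string }
  | { kind: 'prompt'; name: string };

function assemble(
  segments: Segment[],
  results: Record<string, string | { text: string }>,
): string {
  const parts: string[] = [];
  for (const segment of segments) {
    if (segment.kind === 'static') {
      parts.push(segment.html);
      continue;
    }
    const result = results[segment.name];
    if (result === undefined) continue; // missing results skip their positions
    parts.push(typeof result === 'string' ? result : result.text);
  }
  return parts.join('\n');
}

const output = assemble(
  [
    { kind: 'static', html: '<h1>Report</h1>' },
    { kind: 'prompt', name: 'introPrompt' },
    { kind: 'prompt', name: 'reviewPrompt' }, // no result below, so skipped
  ],
  { introPrompt: { text: 'Generated intro.' } },
);
// output === '<h1>Report</h1>\nGenerated intro.'
```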
### Accepted input formats
```ts
// Full generateText() result objects (extracts .text)
formatComposer({
  introPrompt: await generateText(introPrompt),
  reviewPrompt: await generateText(reviewPrompt),
});

// Raw strings
formatComposer({
  introPrompt: 'Manually written intro.',
  reviewPrompt: 'Manually written review.',
});

// Mix of both
formatComposer({
  introPrompt: await generateText(introPrompt),
  reviewPrompt: 'Static fallback text.',
});
```
### Duplicate prompt handling

When the same prompt appears multiple times in a composer’s segments, you only provide the result once. `formatComposer()` reuses it at every position:
```ts
// "Intro Prompt" appears at positions 1 and 4 in the document
const output = formatComposer({
  introPrompt: await generateText(introPrompt),
});
// The same generated text appears at both positions
```
### Missing results

If you omit a prompt key, `formatComposer()` skips those positions gracefully:
```ts
// Only provide intro, skip review
const output = formatComposer({
  introPrompt: await generateText(introPrompt),
});
// Static segments still render; reviewPrompt positions are omitted
```
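Because missing keys are skipped, you can tolerate individual prompt failures and assemble whatever succeeded. The sketch below shows the pattern with stand-in promises; in real code the tasks would be `generateText()` calls and the resulting object would be passed to `formatComposer()` (the `generateOrSkip` helper is illustrative, not part of the library):

```typescript
// Run named async tasks, keep only the ones that succeed, and drop the
// rest so their keys are simply absent from the assembled results.
async function generateOrSkip<T>(
  tasks: Record<string, Promise<T>>,
): Promise<Record<string, T>> {
  const names = Object.keys(tasks);
  const settled = await Promise.allSettled(Object.values(tasks));
  const results: Record<string, T> = {};
  settled.forEach((outcome, i) => {
    if (outcome.status === 'fulfilled') results[names[i]] = outcome.value;
    // Rejected tasks are omitted; formatComposer() skips their positions.
  });
  return results;
}

const results = await generateOrSkip({
  introPrompt: Promise.resolve('Generated intro.'),
  reviewPrompt: Promise.reject(new Error('rate limited')),
});
// results === { introPrompt: 'Generated intro.' }
```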
## Complete end-to-end example

A Next.js API route that fetches a composer, generates text for each prompt with different settings, and returns the assembled document:
```ts
import { createPromptlyClient } from '@promptlycms/prompts';
import { generateText } from 'ai';

const promptly = createPromptlyClient();

export const POST = async (req: Request) => {
  const { companyName, quarter, metrics } = await req.json();

  const { introPrompt, analysisPrompt, conclusionPrompt, formatComposer } =
    await promptly.getComposer('quarterly-report', {
      input: { companyName, quarter, metrics: JSON.stringify(metrics) },
    });

  // Run all prompts in parallel with different settings
  const [intro, analysis, conclusion] = await Promise.all([
    generateText({ ...introPrompt, maxTokens: 300 }),
    generateText({ ...analysisPrompt, maxTokens: 1000 }),
    generateText({ ...conclusionPrompt, maxTokens: 200 }),
  ]);

  const report = formatComposer({
    introPrompt: intro,
    analysisPrompt: analysis,
    conclusionPrompt: conclusion,
  });

  return new Response(report, {
    headers: { 'Content-Type': 'text/html' },
  });
};
```
## Next steps

- Start with the simpler `compose()` approach if you don’t need per-prompt control
- Explore AI SDK integration patterns for prompts
- Learn about model resolution and custom resolvers
- Handle errors from the API