# AI SDK Integration
The `getPrompt()` method returns everything you need to integrate with the Vercel AI SDK. Destructure the result and pass its properties directly to `generateText()`, `streamText()`, and other AI SDK functions, so no prompts are hardcoded in your application code.
## Basic usage
```ts
import { createPromptlyClient } from '@promptlycms/prompts';
import { generateText } from 'ai';

const { getPrompt } = createPromptlyClient();

const { userMessage, systemMessage, temperature, model } = await getPrompt(
  'my-prompt',
);

const { text } = await generateText({
  model,
  system: systemMessage,
  prompt: userMessage({ name: 'Alice', task: 'code review' }),
  temperature,
});
```
## What getPrompt() returns

```ts
const result = await promptly.getPrompt('my-prompt');

// result contains:
// {
//   systemMessage: 'You are a helpful assistant...',
//   userMessage: PromptMessage, // callable template function
//   temperature: 0.7,
//   model: LanguageModel,       // auto-resolved from CMS config
//   promptId: 'my-prompt',
//   promptName: 'My Prompt',
//   version: '1.0.0',
//   config: PromptConfig,
// }
```

- `systemMessage`: the system message from the CMS
- `userMessage`: a callable function that interpolates template variables; call `String(userMessage)` for the raw template
- `temperature`: the temperature setting from the CMS
- `model`: automatically resolved from the model configured in the CMS (see Model Resolution)
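Since `userMessage` is callable, you can either render it with variables or inspect the raw template. A minimal sketch, reusing the `name` and `task` variables from the basic example above:

```ts
const { userMessage } = await promptly.getPrompt('my-prompt');

// Interpolate template variables into the stored template:
const rendered = userMessage({ name: 'Alice', task: 'code review' });

// Coerce to a string to get the raw, uninterpolated template:
const rawTemplate = String(userMessage);
```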
## With generateText()
```ts
import { generateText } from 'ai';

const { userMessage, systemMessage, temperature, model } =
  await promptly.getPrompt('summarize');

const { text } = await generateText({
  model,
  system: systemMessage,
  prompt: userMessage({ content: articleText }),
  temperature,
});
```
## With streamText()

```ts
import { streamText } from 'ai';

const { userMessage, systemMessage, temperature, model } =
  await promptly.getPrompt('chat-assistant');

const stream = streamText({
  model,
  system: systemMessage,
  prompt: userMessage({ context: conversationHistory }),
  temperature,
});

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```
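The loop above writes chunks to stdout; in an HTTP handler you would usually hand the stream back as a response instead. A minimal sketch using the AI SDK's `toTextStreamResponse()` helper (the route shape is illustrative, and `promptly` and `conversationHistory` are assumed to be in scope as in the example above):

```ts
// Illustrative route handler returning a streamed Response:
export async function POST(): Promise<Response> {
  const { userMessage, systemMessage, temperature, model } =
    await promptly.getPrompt('chat-assistant');

  const stream = streamText({
    model,
    system: systemMessage,
    prompt: userMessage({ context: conversationHistory }),
    temperature,
  });

  // Convert the stream into a standard web Response
  return stream.toTextStreamResponse();
}
```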
## With messages array

For full control over the messages array (e.g. for provider-specific options like cache control):
```ts
import { generateText } from 'ai';

const { userMessage, systemMessage, temperature, model } =
  await promptly.getPrompt('review-prompt');

const { text } = await generateText({
  model,
  system: systemMessage,
  temperature,
  messages: [
    {
      role: 'user',
      content: userMessage({ pickupLocation: 'London', items: 'sofa' }),
      providerOptions: {
        anthropic: { cacheControl: { type: 'ephemeral' } },
      },
    },
  ],
});
```
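Owning the array also makes multi-turn conversations straightforward. Continuing the example above, a sketch assuming you keep earlier turns yourself (the history messages are illustrative):

```ts
const { text } = await generateText({
  model,
  system: systemMessage,
  temperature,
  messages: [
    // Earlier turns from your own storage (illustrative content):
    { role: 'user', content: 'Do you handle fragile items?' },
    { role: 'assistant', content: 'Yes, fragile items are fine.' },
    // The CMS-managed template renders the latest user turn:
    {
      role: 'user',
      content: userMessage({ pickupLocation: 'London', items: 'sofa' }),
    },
  ],
});
```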
## Version pinning

Fetch a specific prompt version:
```ts
const { userMessage, model, temperature, systemMessage } =
  await promptly.getPrompt('my-prompt', {
    version: '2.0.0',
  });
```
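The returned metadata reflects the resolved version, so you can log or assert it; a small sketch based on the return shape shown above:

```ts
const pinned = await promptly.getPrompt('my-prompt', { version: '2.0.0' });

// promptId, promptName, and version come back alongside the messages:
console.log(pinned.version); // '2.0.0'
console.log(pinned.promptName);
```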
## Next steps

- Learn how model resolution works
- See structured output for Zod schemas from the CMS
- View the full client API reference
- Manage your prompts and models from the Promptly CMS dashboard