AI Model Access
Overview
subscribe.dev provides seamless access to a variety of AI models. This guide walks you through integrating and using AI models in your application, from text and image generation to video.
Usage
AI interaction is done through the client object, which is exposed by the useSubscribeDev hook.
The user must be authenticated to interact with AI models.
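For example, a component might pull the client out of the hook and gate requests on the user's sign-in state. A minimal sketch (the package name @subscribe.dev/react and the exact fields returned by the hook, such as isSignedIn and signIn, are assumptions here):
import { useSubscribeDev } from '@subscribe.dev/react';

function JokeButton() {
  // Assumed hook shape: the client plus auth helpers such as isSignedIn / signIn.
  const { client, isSignedIn, signIn } = useSubscribeDev();

  const tellJoke = async () => {
    if (!isSignedIn) {
      signIn(); // prompt the user to authenticate before calling a model
      return;
    }
    const response = await client.run("anthropic/claude-sonnet-3.5", {
      input: { messages: [{ role: "user", content: "Tell me a joke" }] }
    });
    console.log(response.output[0]);
  };

  return <button onClick={tellJoke}>Tell me a joke</button>;
}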
Text Generation
Use models like Claude, GPT-4o, and many others for text generation tasks:
// Text input with text output:
const response = await client.run("anthropic/claude-sonnet-3.5", {
input: {
messages: [
{role: "system", content: "You tell jokes"},
{role: "user", content: "Tell me a joke about dogs"}
]
}
});
const joke: string = response.output[0];
console.log(`Joke: ${joke}`);
// Text + image input with text output:
const response = await client.run("openai/gpt-4o", {
input: {
messages: [
{role: "system", content: "You identify stuff."},
{role: "user", content: "What is this image?"},
{type: "image_url", image_url: {url: "https://example.com/image.jpg"}}
]
}
});
const imageDescription: string = response.output[0];
console.log(`Image description: ${imageDescription}`);
Image Generation
Generate images from text prompts using models like Flux Schnell:
// Text input with image output:
const response = await client.run("black-forest-labs/flux-schnell", {
input: {
prompt: "a cute dog",
width: 512,
height: 512,
}
});
const imageUrl: string = response.output[0];
console.log(`Generated image URL: ${imageUrl}`);
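The returned URL can be rendered directly, for example in an img element. A minimal React sketch (the package name and component are illustrative):
import { useState } from 'react';
import { useSubscribeDev } from '@subscribe.dev/react'; // package name assumed

function DogPicture() {
  const { client } = useSubscribeDev();
  const [imageUrl, setImageUrl] = useState<string | null>(null);

  const generate = async () => {
    const response = await client.run("black-forest-labs/flux-schnell", {
      input: { prompt: "a cute dog", width: 512, height: 512 }
    });
    setImageUrl(response.output[0]); // the model returns a hosted image URL
  };

  return (
    <div>
      <button onClick={generate}>Generate</button>
      {imageUrl && <img src={imageUrl} alt="a cute dog" width={512} height={512} />}
    </div>
  );
}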
Or manipulate images with Google's Nano Banana:
// Text + image input with image output:
const response = await client.run("google/nano-banana", {
input: {
prompt: "Put a hat on the dog in the image",
image_input: [
"https://example.com/dog.jpg",
],
},
});
const imageUrl: string = response.output[0];
console.log(`Generated image URL: ${imageUrl}`);
Video Generation
Generate videos from text prompts using models like Wan 2.2:
// Text + image input with video output:
const response = await client.run("wan-video/wan-2.2-5b-fast", {
input: {
prompt: "A cat has a mini piano in front of him and is playing it spectacularly with his paws",
image: "https://example.com/cat.jpg",
num_frames: 121,
go_fast: true,
resolution: "720p",
aspect_ratio: "16:9",
optimize_prompt: true,
frames_per_second: 24,
},
});
const videoUrl: string = response.output[0];
console.log(`Generated video URL: ${videoUrl}`);
Advanced
JSON Schema Parsing
subscribe.dev can automatically validate and structure model outputs with Zod.
Just pass a schema to response_format and the SDK ensures the AI response matches it.
import { z } from "zod";

const factSchema = z.object({
  fact: z.string(),
  source: z.string().url().optional(),
});

const response = await client.run("openai/gpt-4o", {
  input: { messages: [{ role: "user", content: "Tell me a fun fact about space and the source" }] },
  response_format: factSchema,
});

console.log("Fun Fact: ", response.output[0].fact);
console.log("Source: ", response.output[0].source);
Streaming Responses
subscribe.dev supports real-time streaming responses for compatible models, allowing you to process output as it's generated:
Basic Streaming
// Enable streaming by setting stream: true
for await (const chunk of client.run("openai/gpt-4", {
input: {
messages: [
{role: "system", content: "You are helpful"},
{role: "user", content: "Tell me a story about streaming"}
]
},
stream: true
}, { streamBy: 'chunk' })) {
console.log("Chunk:", chunk);
// chunk contains the cumulative text up to this point
}
Stream Granularity Options
Control how streaming content is yielded using the streamBy parameter:
// Stream by individual chunks (default)
for await (const text of client.run("openai/gpt-4", {
  input: { messages: [{ role: "user", content: "Write a haiku" }] },
  stream: true
}, { streamBy: 'chunk' })) {
  console.log("Chunk:", text);
}

// Stream word-by-word for typewriter effects
for await (const text of client.run("openai/gpt-4", {
  input: { messages: [{ role: "user", content: "Write a haiku" }] },
  stream: true
}, { streamBy: 'word' })) {
  console.log("Words so far:", text);
}

// Stream character-by-character
for await (const text of client.run("openai/gpt-4", {
  input: { messages: [{ role: "user", content: "Write a haiku" }] },
  stream: true
}, { streamBy: 'letter' })) {
  console.log("Characters so far:", text);
}
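Word-level streaming pairs naturally with a typewriter-style UI: each yielded value is the cumulative text so far, so you can write it straight into component state. A minimal React sketch (the package name and component are illustrative):
import { useState } from 'react';
import { useSubscribeDev } from '@subscribe.dev/react'; // package name assumed

function HaikuTypewriter() {
  const { client } = useSubscribeDev();
  const [text, setText] = useState("");

  const stream = async () => {
    setText("");
    for await (const wordsSoFar of client.run("openai/gpt-4", {
      input: { messages: [{ role: "user", content: "Write a haiku" }] },
      stream: true
    }, { streamBy: 'word' })) {
      setText(wordsSoFar); // each yield is the full text so far, so replace the state
    }
  };

  return (
    <div>
      <button onClick={stream}>Write a haiku</button>
      <p>{text}</p>
    </div>
  );
}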
Multimodal Streaming
Streaming works with multimodal inputs including images:
const messages = [
  { role: "system", content: "You are helpful" },
  {
    role: "user",
    content: [
      {
        type: "text",
        text: "Analyze this image.",
      },
      {
        type: "image_url",
        image_url: {
          url: "https://example.com/image.jpg",
        },
      }
    ],
  },
];
for await (const chunk of client.run("openai/gpt-4o", {
input: {
messages: messages,
max_tokens: 1024,
temperature: 0.7
},
stream: true
}, { streamBy: 'word' })) {
console.log("Analysis chunk:", chunk);
}
Event-Based Streaming
For more complex applications, you can use event-based streaming:
const stream = client.run("openai/gpt-4", {
input: { messages: [{role: "user", content: "Tell me about streaming"}] },
stream: true
}, { streamBy: 'word' });
// Handle streaming events
stream.on('data', (chunk) => {
console.log("Received:", chunk);
});
stream.on('end', () => {
console.log("Stream completed");
});
stream.on('error', (error) => {
console.error("Stream error:", error);
});
// Wait for completion
await stream.toPromise();
Promise-Based Streaming
You can also await the entire streaming response to get the final accumulated result:
// This will stream internally but return the complete response
const finalResult = await client.run("openai/gpt-4", {
  input: { messages: [{ role: "user", content: "Write a short story" }] },
  stream: true
}, { streamBy: 'word' });
console.log("Complete story:", finalResult);
Error Handling
All subscribe.dev functions can throw errors, which you can catch using standard JavaScript error handling:
try {
  const response = await client.run("openai/gpt-4o", {
    input: { messages: [{ role: "user", content: "Hello, world!" }] }
  });
  const result: string = response.output[0];
  console.log(`Response: ${result}`);
} catch (error) {
  // Errors include type, message, and relevant details
  if (error.type === "insufficient_credits") {
    console.error("Not enough credits:", error.message);
    // Handle insufficient credits (e.g., prompt user to subscribe)
  } else if (error.type === "rate_limit_exceeded") {
    console.error("Rate limited:", error.retryAfter);
    // Handle rate limiting (e.g., show retry timer)
  } else {
    console.error("AI request failed:", error.message);
  }
}
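When a request is rate limited, the error's retryAfter field tells you when it is safe to try again. A hedged sketch of a small retry wrapper (the helper name is illustrative, and retryAfter is assumed to be a delay in seconds):
async function runWithRetry(model: string, params: Record<string, unknown>, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await client.run(model, params);
    } catch (error: any) {
      if (error.type === "rate_limit_exceeded" && attempt < maxAttempts) {
        // Assumed: retryAfter is a delay in seconds; wait it out before retrying.
        const delayMs = (error.retryAfter ?? 1) * 1000;
        await new Promise((resolve) => setTimeout(resolve, delayMs));
        continue;
      }
      throw error; // anything else (or the last attempt) is surfaced to the caller
    }
  }
}

const response = await runWithRetry("openai/gpt-4o", {
  input: { messages: [{ role: "user", content: "Hello, world!" }] }
});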