Documentation Index
Fetch the complete documentation index at: https://docs.heylua.ai/llms.txt
Use this file to discover all available pages before exploring further.
Overview
The AI API provides isolated text generation from within tools, aligned with Vercel AI SDK generateText semantics. Requests are proxied to a dedicated generation endpoint — they do not go through the agent chat pipeline.
import { AI } from 'lua-cli';

// Quick text generation
const text = await AI.generate('Summarize the latest AI news.');

// Full options with rich result
const result = await AI.generate({
  model: 'google/gemini-2.0-flash',
  system: 'You are concise.',
  prompt: 'What is the weather in London?',
});
Powerful Tool Enhancement: The AI API lets you embed AI generation directly in your tools with custom personas, multi-modal inputs, and rich response metadata.
Import
import { AI } from 'lua-cli';
// or
import { AI } from 'lua-cli/skill';
Capabilities
Text Generation: Generate content with custom prompts and personas
Image Analysis: Analyze images with AI vision capabilities
Document Processing: Process and analyze documents with AI
Multi-Modal: Combine text, images, and files in one request
Multi-Provider: Google (Vertex AI), OpenAI, and Anthropic models
Web Grounding: Google Search grounding with source URLs
Methods
Simplified: AI.generate(prompt, content?)
Quick text generation. Returns plain text as a string.
prompt (string): When called with one argument, this is the user prompt. When called with two arguments, it becomes the system instruction.
content (string | Array, optional): User message content. Accepts a string or an array of multimodal parts (TextPart, ImagePart, FilePart) from the AI SDK.
Returns: Promise<string>
// Single argument: prompt is the user message
const text = await AI.generate('Summarize the latest AI news.');

// Two arguments: system instruction + user content
const text2 = await AI.generate(
  'You are a helpful assistant.',
  [{ type: 'text', text: 'What products do you recommend?' }]
);

// Multi-modal content
const analysis = await AI.generate(
  'You are an image analysis expert.',
  [
    { type: 'text', text: 'What do you see?' },
    { type: 'image', url: 'https://example.com/photo.jpg' }
  ]
);
Full options: AI.generate(options)
Full control over generation parameters. Returns a rich result object.
model (string, optional): Model to use, e.g. 'google/gemini-2.0-flash', 'openai/gpt-4o', 'anthropic/claude-sonnet-4-20250514'. Defaults to the agent’s configured model.
prompt (string, optional): User prompt (simple text).
messages (ModelMessage[], optional): Conversation messages (AI SDK ModelMessage[]).
temperature (number, optional): Sampling temperature (0–2).
maxTokens (number, optional): Maximum number of tokens to generate.
Returns: Promise<AiGenerateOutput>
const result = await AI.generate({
  model: 'google/gemini-2.0-flash',
  system: 'You are concise.',
  prompt: 'What is the weather in London?',
  temperature: 0.7,
});

result.text         // Generated text
result.finishReason // 'stop', 'length', etc.
result.usage        // { promptTokens, completionTokens, totalTokens }
result.sources      // Google Search grounding URLs
Response Shape (AiGenerateOutput)
The full-options response mirrors AI SDK GenerateTextResult:
| Field | Type | Description |
| --- | --- | --- |
| text | string | Generated text |
| finishReason | FinishReason | 'stop', 'length', 'content-filter', 'tool-calls', 'error', 'other', 'unknown' |
| usage | LanguageModelUsage | { promptTokens, completionTokens, totalTokens } |
| reasoning? | ReasoningOutput[] | Model reasoning steps (e.g. Gemini thinking) |
| reasoningText? | string | Concatenated reasoning text |
| sources? | AiGenerateSource[] | URL sources from Google Search grounding |
| toolCalls? | AiGenerateToolCall[] | Tool calls made during generation |
| toolResults? | AiGenerateToolResult[] | Tool results from generation |
| warnings? | CallWarning[] | Provider warnings |
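The response shape can be sketched as a TypeScript interface. This is an illustrative reconstruction from the field list above, not the exact types exported by lua-cli; fields typed `unknown[]` stand in for the AI SDK types named in the table:

```typescript
// Sketch of AiGenerateOutput, reconstructed from the documented fields.
type FinishReason =
  | 'stop' | 'length' | 'content-filter' | 'tool-calls'
  | 'error' | 'other' | 'unknown';

interface LanguageModelUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

interface AiGenerateOutput {
  text: string;                                 // generated text
  finishReason: FinishReason;                   // why generation stopped
  usage: LanguageModelUsage;                    // token accounting
  reasoning?: unknown[];                        // ReasoningOutput[] in lua-cli
  reasoningText?: string;                       // concatenated reasoning text
  sources?: { title?: string; url: string }[];  // Google Search grounding URLs
  toolCalls?: unknown[];                        // AiGenerateToolCall[]
  toolResults?: unknown[];                      // AiGenerateToolResult[]
  warnings?: unknown[];                         // CallWarning[]
}
```
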
Supported Providers
| Provider | Prefix | Example |
| --- | --- | --- |
| Google (Vertex AI) | google/ | google/gemini-2.0-flash |
| OpenAI | openai/ | openai/gpt-4o |
| Anthropic | anthropic/ | anthropic/claude-sonnet-4-20250514 |
If the requested provider’s API key is not configured, the request falls back to the default Vertex AI model.
Google models automatically get Google Search grounding — real-time web search results appear in the sources field of the full-options response.
Content Types
Text
[{ type: 'text', text: 'Your message here' }]
Image
[
  { type: 'text', text: 'What do you see in this image?' },
  { type: 'image', url: 'https://example.com/photo.jpg' }
]
File
[
  { type: 'text', text: 'Summarize this document' },
  { type: 'file', url: 'https://example.com/doc.pdf', mimeType: 'application/pdf' }
]
Complete Examples
Product Description Generator
import { LuaTool, AI, Products } from 'lua-cli/skill';
import { z } from 'zod';

export default class GenerateDescriptionTool implements LuaTool {
  name = 'generate_product_description';
  description = 'Generate compelling product descriptions using AI';

  inputSchema = z.object({
    productId: z.string(),
    style: z.enum(['casual', 'professional', 'luxury']).optional()
  });

  async execute(input: z.infer<typeof this.inputSchema>) {
    const product = await Products.getById(input.productId);
    if (!product) {
      return { success: false, error: 'Product not found' };
    }

    const style = input.style || 'professional';
    const description = await AI.generate(
      `You are a ${style} copywriter. Create a 2-3 sentence product description.`,
      [{ type: 'text', text: `Product: ${product.name}, Price: $${product.price}` }]
    );

    await product.update({ description });
    return { success: true, productId: input.productId, description };
  }
}
Weather Search with Google Grounding
import { LuaTool, AI } from 'lua-cli/skill';
import { z } from 'zod';

export default class WeatherSearchTool implements LuaTool {
  name = 'search_weather';
  description = 'Search current weather using AI with Google Search grounding';

  inputSchema = z.object({
    location: z.string().describe('City or location')
  });

  async execute(input: z.infer<typeof this.inputSchema>) {
    const result = await AI.generate({
      model: 'google/gemini-2.0-flash',
      system: 'You report current weather conditions concisely.',
      prompt: `What is the current weather in ${input.location}?`,
    });

    return {
      weather: result.text,
      sources: result.sources?.map(s => ({ title: s.title, url: s.url })) ?? [],
      usage: result.usage,
    };
  }
}
Image Analysis

import { LuaTool, AI } from 'lua-cli/skill';
import { z } from 'zod';

export default class AnalyzeImageTool implements LuaTool {
  name = 'analyze_image';
  description = 'Analyze images using AI vision';

  inputSchema = z.object({
    imageUrl: z.string().url(),
    question: z.string().optional()
  });

  async execute(input: z.infer<typeof this.inputSchema>) {
    const analysis = await AI.generate(
      'You are an image analysis expert. Describe what you see in detail.',
      [
        { type: 'text', text: input.question || 'Describe this image in detail' },
        { type: 'image', url: input.imageUrl }
      ]
    );
    return { success: true, imageUrl: input.imageUrl, analysis };
  }
}
Content Summarizer
import { LuaTool, AI } from 'lua-cli/skill';
import { z } from 'zod';

export default class SummarizeTool implements LuaTool {
  name = 'summarize_content';
  description = 'Summarize long text content';

  inputSchema = z.object({
    content: z.string(),
    maxLength: z.enum(['brief', 'medium', 'detailed']).optional()
  });

  async execute(input: z.infer<typeof this.inputSchema>) {
    const instructions = {
      brief: 'in 1-2 sentences',
      medium: 'in 1 paragraph',
      detailed: 'in 2-3 paragraphs with key points'
    };
    const length = input.maxLength || 'medium';

    const summary = await AI.generate(
      `You are a professional content summarizer. Create a clear summary ${instructions[length]}.`,
      [{ type: 'text', text: `Summarize:\n\n${input.content}` }]
    );

    return { success: true, summary, originalLength: input.content.length };
  }
}
Text Translator

import { LuaTool, AI } from 'lua-cli/skill';
import { z } from 'zod';

export default class TranslateTool implements LuaTool {
  name = 'translate_text';
  description = 'Translate text to different languages';

  inputSchema = z.object({
    text: z.string(),
    targetLanguage: z.string(),
    sourceLanguage: z.string().optional()
  });

  async execute(input: z.infer<typeof this.inputSchema>) {
    const translation = await AI.generate(
      `You are a professional translator. Translate accurately to ${input.targetLanguage}. Return ONLY the translated text.`,
      [{ type: 'text', text: input.text }]
    );

    return {
      success: true,
      original: input.text,
      translation,
      from: input.sourceLanguage || 'auto-detect',
      to: input.targetLanguage
    };
  }
}
Sentiment Analysis (Structured JSON)
import { LuaTool, AI } from 'lua-cli/skill';
import { z } from 'zod';

export default class SentimentAnalysisTool implements LuaTool {
  name = 'analyze_sentiment';
  description = 'Analyze the sentiment of text';

  inputSchema = z.object({ text: z.string() });

  async execute(input: z.infer<typeof this.inputSchema>) {
    const analysis = await AI.generate(
      `You are a sentiment analysis expert. Respond with JSON only:
{ "sentiment": "positive|negative|neutral", "score": 0.0-1.0, "summary": "brief explanation" }`,
      [{ type: 'text', text: `Analyze:\n\n"${input.text}"` }]
    );

    try {
      return { success: true, ...JSON.parse(analysis), originalText: input.text };
    } catch {
      return { success: false, error: 'Failed to parse', rawResponse: analysis };
    }
  }
}
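Models sometimes wrap JSON answers in markdown code fences, which makes a bare JSON.parse fail. A defensive parsing helper can strip those first; this is an illustrative addition, not part of lua-cli:

```typescript
// Strip optional markdown code fences (``` or ```json) before parsing.
// Illustrative helper for handling fenced JSON in model output.
function parseModelJson(raw: string): unknown {
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, '')  // leading fence, with optional "json" tag
    .replace(/\s*```$/, '');           // trailing fence
  return JSON.parse(cleaned);
}
```

Using it in the `catch` pattern above keeps the tool robust whether the model returns bare JSON or a fenced block.
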
Best Practices
Provide clear, specific context
Good context leads to better results.

// Good — specific and clear
const context = `You are a product reviewer.
Rate products on Quality (1-10), Value (1-10), Features (1-10).
Return ONLY a JSON object with these ratings.`;

// Bad — vague
const context = `Review this product`;
Choose the right overload
Use the simplified overload for quick text, and full options when you need metadata.

// Simplified — just need the text
const summary = await AI.generate('Summarize this article.', content);

// Full options — need usage stats, sources, finish reason
const result = await AI.generate({
  model: 'google/gemini-2.0-flash',
  prompt: 'What is happening in tech today?',
});
console.log(result.sources); // Google Search grounding URLs
Handle variable responses
AI responses vary between calls, so always validate.

try {
  const response = await AI.generate(context, messages);
  if (!response || response.trim().length === 0) {
    return { success: false, error: 'Empty response' };
  }
  return { success: true, response };
} catch (error) {
  return { success: false, error: error.message };
}
Don't use AI for simple logic
Use regular code for deterministic tasks.

// Bad — waste of AI for simple math
await AI.generate('Calculate 2 + 2');

// Good — use regular code
const result = 2 + 2;
Common Use Cases
| Use Case | Examples |
| --- | --- |
| Content generation | Product descriptions, email drafts, social media posts |
| Analysis | Sentiment analysis, image recognition, document review |
| Translation | Multi-language support |
| Summarization | Long documents, articles, conversations |
| Recommendations | Product suggestions, content curation |
| Moderation | Content filtering, safety checks |
| Q&A | Document queries, knowledge retrieval |
| Web search | Real-time information via Google Search grounding |
Response Time: AI generation typically takes 1-5 seconds; longer or more complex requests take more time.
Context Length: Maximum context length varies by model. Keep contexts focused and relevant.
Image Processing: Image analysis is slower than text. Optimize image sizes when possible.
Caching: Cache common AI responses and store frequently requested generations.
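The caching advice above can be sketched as a small in-memory wrapper around any async text generator. The prompt-keyed Map and TTL here are illustrative assumptions, not lua-cli features:

```typescript
// Wrap an async generator (e.g. (p) => AI.generate(p)) with a TTL cache.
// Illustrative sketch: keys on the exact prompt string, evicts by time only.
function cachedGenerate(
  generate: (prompt: string) => Promise<string>,
  ttlMs = 5 * 60 * 1000, // 5 minutes, an arbitrary default
) {
  const cache = new Map<string, { value: string; expires: number }>();

  return async (prompt: string): Promise<string> => {
    const hit = cache.get(prompt);
    if (hit && hit.expires > Date.now()) {
      return hit.value; // serve the cached response, skipping the AI call
    }
    const value = await generate(prompt);
    cache.set(prompt, { value, expires: Date.now() + ttlMs });
    return value;
  };
}
```

A production version would also bound the cache size and consider normalizing prompts before keying.
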
Limitations
Maximum context length varies by model
Image size limits apply (optimize before sending)
File types supported: PDF, images, text files
Response may vary between calls (non-deterministic)
Some content may be filtered for safety
Processing time increases with input complexity
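The file-type limitation above can be checked before sending a request. The MIME prefixes here are inferred from the list above (PDF, images, text files) and are an assumption, not a documented lua-cli constant:

```typescript
// MIME prefixes inferred from the documented supported file types.
const SUPPORTED_MIME_PREFIXES = ['application/pdf', 'image/', 'text/'];

// Returns true if a file's MIME type looks acceptable to send as a FilePart.
function isSupportedFile(mimeType: string): boolean {
  return SUPPORTED_MIME_PREFIXES.some((prefix) => mimeType.startsWith(prefix));
}
```

Rejecting unsupported files up front gives the user a clear error instead of a failed generation.
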
Error Handling
async execute(input: any) {
  try {
    const response = await AI.generate(context, messages);
    if (!response || response.trim().length === 0) {
      return { success: false, error: 'AI returned empty response' };
    }
    return { success: true, response };
  } catch (error) {
    return {
      success: false,
      error: error instanceof Error ? error.message : 'AI generation failed'
    };
  }
}
See Also
User API: Get user context for personalization
Data API: Store AI-generated content
Products API: Enhance products with AI descriptions
Jobs API: Schedule AI generation tasks