createConfigurableAgent
Creates a highly customizable agent with extensive configuration options and lifecycle hooks.
Overview
createConfigurableAgent is the primary factory function for creating agents in Grid. It provides a comprehensive configuration system with support for custom prompts, tools, behavior settings, and lifecycle hooks that allow fine-grained control over agent execution.
Import
import { createConfigurableAgent } from "@mrck-labs/grid-core";
Function Signature
function createConfigurableAgent(options: CreateConfigurableAgentOptions): Agent
Parameters
options
- Type: CreateConfigurableAgentOptions
- Required Properties:
  - llmService: LLMService instance (created with baseLLMService)
  - toolExecutor: ToolExecutor instance (created with createToolExecutor)
  - config: Agent configuration object
- Optional Properties:
  - voiceService: VoiceService instance (enables voice capabilities)
  - customHandlers: Lifecycle hooks for customizing behavior
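Taken together, the options object has this shape (a sketch; each variable is a placeholder for an instance you create yourself, as in the examples below):
const agent = createConfigurableAgent({
  llmService,      // required: LLMService
  toolExecutor,    // required: ToolExecutor
  config,          // required: AgentConfig (documented below)
  voiceService,    // optional: enables the voice methods
  customHandlers,  // optional: lifecycle hooks
});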
config
- Type: AgentConfig
- Required Properties:
  - id: Unique identifier for the agent
  - type: Agent type (e.g., "general", "research", "support")
  - version: Configuration version
  - prompts: Prompt configuration
    - system: System prompt defining agent behavior
  - metadata: Agent metadata
    - id: Agent ID (usually same as config.id)
    - type: Agent type
    - name: Human-readable name
    - description: Agent description
    - capabilities: Array of capability strings
    - version: Metadata version
- Optional Properties:
  - tools: Tool configuration
    - builtin: Array of built-in tools
    - custom: Array of custom tools
    - mcp: Array of MCP tools
  - behavior: Behavior configuration
    - maxRetries: Maximum retry attempts (default: 3)
    - responseFormat: "text" or "json_object"
    - temperature: LLM temperature (0-2)
    - maxTokens: Maximum response tokens
  - orchestration: Orchestration settings
    - mode: "autonomous" | "guided" | "hybrid"
    - maxIterations: Max iterations for autonomous mode
  - voice: Voice configuration (requires voiceService)
    - enabled: Enable voice features
    - voiceId: Default voice ID
    - autoSpeak: Automatically speak responses
    - interruptible: Allow speech interruption
    - autoListen: Automatically listen after speaking
    - mixedModality: Mixed voice/text configuration
  - customConfig: Additional custom configuration
customHandlers
- Type: CustomHandlers
- Optional Properties:
  - beforeAct: Called before agent processes input
  - afterResponse: Called after agent generates response
  - onError: Called when an error occurs
  - validateResponse: Validate agent responses
  - transformInput: Transform input before processing
  - transformOutput: Transform output before returning
Return Type: Agent
The created agent has these properties and methods:
- id: Agent identifier
- type: Agent type
- availableTools: Array of tool names
- getMetadata(): Get agent metadata
- act(input: AgentActInput): Execute agent with input
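A quick way to inspect the returned agent (a sketch, assuming an agent created as in the examples below):
console.log(agent.id, agent.type);   // identifier and type from config
console.log(agent.availableTools);   // names of the registered tools
console.log(agent.getMetadata());    // the metadata block from config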
Voice Methods (when voiceService is provided)
- hasVoice(): Check if voice is available
- canSpeak(): Check if agent can synthesize speech
- canListen(): Check if agent can transcribe speech
- speak(text: string, options?: VoiceOptions): Synthesize speech
- listen(options?: ListenOptions): Not implemented; throws an error because audio input must be captured at the application level
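Before calling these methods, guard on capability (a sketch, assuming an agent created with a voiceService as in the Voice-Enabled Agent example below):
if (agent.hasVoice() && agent.canSpeak()) {
  await agent.speak("Ready when you are.");
}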
Lifecycle Hooks
beforeAct
beforeAct?: (input: AgentActInput, config: AgentConfig) => Promise<AgentActInput>
Modify input before agent processing.
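For example, a hook that drops blank messages before the agent sees them (a sketch; assumes input.messages is an array of { role, content } objects as in the examples below):
const customHandlers = {
  beforeAct: async (input, config) => ({
    ...input,
    // Keep only messages with non-empty content
    messages: input.messages.filter((m) => m.content?.trim()),
  }),
};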
afterResponse
afterResponse?: (response: AgentResponse, input: AgentActInput) => Promise<AgentResponse>
Modify response after generation.
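For example, stamping each response with a timestamp (a sketch):
const customHandlers = {
  afterResponse: async (response, input) => ({
    ...response,
    metadata: { ...response.metadata, respondedAt: Date.now() },
  }),
};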
onError
onError?: (error: Error, attempt: number) => Promise<void | {
  retry: boolean;
  modifiedInput?: AgentActInput;
}>
Handle errors with optional retry logic.
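For example, retrying transient failures with a linear backoff (a sketch; the error-message matching is an assumption about your LLM provider's errors):
const customHandlers = {
  onError: async (error, attempt) => {
    if (attempt < 3 && /rate limit|timeout/i.test(error.message)) {
      await new Promise((resolve) => setTimeout(resolve, 500 * attempt));
      return { retry: true };
    }
    return { retry: false };
  },
};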
validateResponse
validateResponse?: (response: AgentResponse) => Promise<{
  isValid: boolean;
  errors?: string[];
}>
Validate responses before returning.
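For example, rejecting empty responses (a sketch):
const customHandlers = {
  validateResponse: async (response) => {
    const ok = Boolean(response.content?.trim());
    return { isValid: ok, errors: ok ? undefined : ["Empty response"] };
  },
};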
transformInput
transformInput?: (input: AgentActInput) => Promise<AgentActInput>
Transform input at the earliest stage.
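For example, normalizing whitespace in every message (a sketch):
const customHandlers = {
  transformInput: async (input) => ({
    ...input,
    messages: input.messages.map((m) => ({ ...m, content: m.content?.trim() })),
  }),
};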
transformOutput
transformOutput?: (output: AgentResponse) => Promise<AgentResponse>
Transform output at the final stage.
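For example, guaranteeing content is always a string on the way out (a sketch):
const customHandlers = {
  transformOutput: async (output) => ({
    ...output,
    content: output.content ?? "",
  }),
};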
Examples
Basic Agent
import {
  createConfigurableAgent,
  baseLLMService,
  createToolExecutor
} from "@mrck-labs/grid-core";

const llmService = baseLLMService();
const toolExecutor = createToolExecutor();

const agent = createConfigurableAgent({
  llmService,
  toolExecutor,
  config: {
    id: "assistant-v1",
    type: "general",
    version: "1.0.0",
    prompts: {
      system: "You are a helpful assistant that provides clear, concise answers."
    },
    metadata: {
      id: "assistant-v1",
      type: "general",
      name: "General Assistant",
      description: "A helpful general-purpose assistant",
      capabilities: ["general", "conversation"],
      version: "1.0.0"
    },
    behavior: {
      maxRetries: 3,
      responseFormat: "text",
      temperature: 0.7
    }
  }
});

const response = await agent.act("What is machine learning?");
console.log(response.content);
Agent with Tools
import { createNamedTool } from "@mrck-labs/grid-core";
import { z } from "zod";

// Create tools
const calculatorTool = createNamedTool({
  name: "calculator",
  description: "Perform calculations",
  parameters: z.object({
    expression: z.string()
  }),
  // eval is fine for a demo but unsafe for untrusted input; use a math parser in production
  execute: async ({ expression }) => eval(expression)
});

const weatherTool = createNamedTool({
  name: "weather",
  description: "Get weather information",
  parameters: z.object({
    location: z.string()
  }),
  execute: async ({ location }) => `Sunny, 72°F in ${location}`
});

// Register tools
toolExecutor.registerTools([calculatorTool, weatherTool]);

// Create agent with tools
const agent = createConfigurableAgent({
  llmService,
  toolExecutor,
  config: {
    id: "tool-agent",
    type: "general",
    version: "1.0.0",
    prompts: {
      system: `You are an assistant with access to tools.
Use the calculator for math questions.
Use the weather tool for weather inquiries.
Always explain your process.`
    },
    metadata: {
      id: "tool-agent",
      type: "general",
      name: "Tool-Enabled Assistant",
      description: "Assistant with calculator and weather tools",
      capabilities: ["tools", "math", "weather"],
      version: "1.0.0"
    },
    tools: {
      builtin: [],
      custom: [calculatorTool, weatherTool],
      mcp: []
    }
  }
});

const response = await agent.act("What's 25 * 4 and what's the weather in Paris?");
// The agent will use both tools and provide a comprehensive answer
Agent with Custom Handlers
// Keep a reference to the latest input so onError can reshape it on retry
let lastInput;

const agent = createConfigurableAgent({
  llmService,
  toolExecutor,
  config: {
    // ... standard config
  },
  customHandlers: {
    // Log all interactions (logger stands in for your own logging utility)
    beforeAct: async (input, config) => {
      lastInput = input; // captured for onError's retry path below
      console.log(`[${new Date().toISOString()}] Input:`, input.messages);
      await logger.log("agent:input", { input, agentId: config.id });
      return input;
    },

    // Add metadata to responses
    afterResponse: async (response, input) => {
      const enhanced = {
        ...response,
        metadata: {
          ...response.metadata,
          processedAt: Date.now(),
          inputLength: JSON.stringify(input).length,
          responseLength: response.content?.length || 0
        }
      };
      await logger.log("agent:response", { response: enhanced });
      return enhanced;
    },

    // Retry on specific errors
    onError: async (error, attempt) => {
      console.error(`Attempt ${attempt} failed:`, error.message);

      if (error.message.includes("rate limit") && attempt < 3) {
        await new Promise(resolve => setTimeout(resolve, 1000 * attempt));
        return { retry: true };
      }

      if (error.message.includes("context length") && attempt === 1) {
        // Retry with a shorter context (truncateInput is your own helper)
        return {
          retry: true,
          modifiedInput: truncateInput(lastInput)
        };
      }

      return { retry: false };
    },

    // Validate responses
    validateResponse: async (response) => {
      const errors: string[] = [];

      if (!response.content && !response.toolCalls) {
        errors.push("Response has no content or tool calls");
      }
      if (response.content && response.content.length < 10) {
        errors.push("Response too short");
      }
      // containsPII is your own detector; guard against null content
      if (response.content && containsPII(response.content)) {
        errors.push("Response contains PII");
      }

      return {
        isValid: errors.length === 0,
        errors
      };
    }
  }
});
Input/Output Transformation
const agent = createConfigurableAgent({
  llmService,
  toolExecutor,
  config: {
    // ... config
  },
  customHandlers: {
    // Add context to all inputs
    transformInput: async (input) => {
      const timestamp = new Date().toISOString();
      const enhanced = {
        ...input,
        messages: [
          {
            role: "system",
            content: `Current time: ${timestamp}`
          },
          ...input.messages
        ]
      };
      return enhanced;
    },

    // Format all outputs
    transformOutput: async (output) => {
      // Add structured format (formatMarkdown is your own formatter)
      const formatted = {
        ...output,
        content: output.content ? formatMarkdown(output.content) : null,
        metadata: {
          ...output.metadata,
          formatted: true,
          formattedAt: Date.now()
        }
      };

      // Add citations if needed (skip when there is no content to append to)
      if (formatted.content && output.metadata?.sources) {
        formatted.content += "\n\n## Sources\n" +
          output.metadata.sources.map((s, i) => `${i + 1}. ${s}`).join("\n");
      }

      return formatted;
    }
  }
});
Specialized Agent Types
// Research Agent
const researchAgent = createConfigurableAgent({
  llmService,
  toolExecutor,
  config: {
    id: "research-agent",
    type: "research",
    version: "1.0.0",
    prompts: {
      system: `You are a research specialist. Your approach:
1. Break down research questions into components
2. Search multiple sources for each component
3. Analyze and synthesize findings
4. Provide citations for all claims
5. Highlight conflicting information
6. Suggest areas for further research`
    },
    metadata: {
      id: "research-agent",
      type: "research",
      name: "Research Specialist",
      description: "Comprehensive research and analysis",
      capabilities: ["research", "analysis", "synthesis"],
      version: "1.0.0"
    },
    tools: {
      // Tools you have defined and registered yourself
      custom: [searchTool, academicSearchTool, factCheckTool]
    },
    behavior: {
      maxRetries: 5,    // More retries for thorough research
      temperature: 0.3, // Lower temperature for accuracy
      maxTokens: 4000   // Longer responses for detailed reports
    }
  }
});

// Code Assistant
const codeAgent = createConfigurableAgent({
  llmService,
  toolExecutor,
  config: {
    id: "code-agent",
    type: "coding",
    version: "1.0.0",
    prompts: {
      system: `You are an expert programmer. Guidelines:
- Write clean, well-documented code
- Follow best practices and design patterns
- Include error handling
- Add type annotations
- Explain complex logic
- Suggest optimizations`
    },
    metadata: {
      id: "code-agent",
      type: "coding",
      name: "Code Assistant",
      description: "Programming help and code generation",
      capabilities: ["coding", "debugging", "refactoring"],
      version: "1.0.0"
    },
    behavior: {
      responseFormat: "text",
      temperature: 0.2 // Low temperature for consistent code
    }
  }
});
Autonomous Agent
const autonomousAgent = createConfigurableAgent({
  llmService,
  toolExecutor,
  config: {
    id: "autonomous-agent",
    type: "autonomous",
    version: "1.0.0",
    prompts: {
      system: `You are an autonomous agent. For each task:
1. Analyze what needs to be done
2. Break it into steps
3. Execute each step using available tools
4. Verify results before proceeding
5. Iterate until the task is complete`
    },
    metadata: {
      id: "autonomous-agent",
      type: "autonomous",
      name: "Autonomous Task Agent",
      description: "Self-directed task completion",
      capabilities: ["autonomous", "planning", "execution"],
      version: "1.0.0"
    },
    orchestration: {
      mode: "autonomous",
      maxIterations: 10
    },
    tools: {
      // Tools you have defined and registered yourself
      custom: [fileSystemTool, databaseTool, apiTool, validationTool]
    }
  },
  customHandlers: {
    // Track autonomous execution
    afterResponse: async (response, input) => {
      if (response.metadata?.iterationCount) {
        console.log(`Iteration ${response.metadata.iterationCount}: ${response.content?.substring(0, 100)}...`);
      }
      return response;
    }
  }
});
Voice-Enabled Agent
import { elevenlabsVoiceService } from "@mrck-labs/grid-core";

// Create voice service
const voiceService = elevenlabsVoiceService({
  apiKey: process.env.ELEVENLABS_API_KEY,
  defaultVoiceId: "21m00Tcm4TlvDq8ikWAM", // Rachel voice
});

// Create voice-enabled agent
const voiceAgent = createConfigurableAgent({
  llmService,
  toolExecutor,
  voiceService, // This enables voice capabilities!
  config: {
    id: "voice-assistant",
    type: "general",
    version: "1.0.0",
    prompts: {
      system: `You are a friendly voice assistant. Guidelines:
- Keep responses brief and conversational
- Use natural speech patterns
- Avoid technical jargon unless necessary
- Add appropriate pauses with commas
- Be warm and personable`
    },
    metadata: {
      id: "voice-assistant",
      type: "general",
      name: "Voice Assistant",
      description: "Natural language voice assistant",
      capabilities: ["general", "conversation", "voice"],
      version: "1.0.0"
    },
    voice: {
      enabled: true,
      voiceId: "21m00Tcm4TlvDq8ikWAM",
      autoSpeak: true, // Automatically speak responses
      interruptible: true,
      stability: 0.75,
      similarityBoost: 0.75,
      style: 0.5,
      useSpeakerBoost: true
    },
    behavior: {
      maxRetries: 2,
      responseFormat: "text",
      temperature: 0.8 // Slightly more creative for conversation
    }
  }
});

// Use voice features
async function voiceInteraction() {
  // Check voice availability
  if (!voiceAgent.hasVoice()) {
    console.log("Voice not available");
    return;
  }

  // Manual speech
  await voiceAgent.speak("Hello! I'm your voice assistant.");

  // Note: listen() is not implemented at the agent level; audio recording
  // must be handled by your application. Example with a terminal voice
  // service (TerminalVoiceService stands in for your own audio recorder):
  if (voiceAgent.canListen()) {
    const terminalVoice = new TerminalVoiceService();
    const recording = await terminalVoice.startRecording();
    const audioInput = await recording.stop();

    // Use the voice service directly for transcription
    const transcript = await voiceService.transcribe(audioInput);
    console.log("User said:", transcript.text);

    // Process with automatic speech
    const response = await voiceAgent.act(transcript.text);
    // The response is spoken automatically because autoSpeak is true
  }

  // Regular text input with voice output
  const response = await voiceAgent.act("Tell me a fun fact");
  // The response is spoken automatically
  console.log("Assistant said:", response.content);
}

// Handle voice errors gracefully
try {
  await voiceInteraction();
} catch (error) {
  if (error.code === "VOICE_NOT_AVAILABLE") {
    // Fall back to text-only mode
    const response = await voiceAgent.act("Hello");
    console.log(response.content);
  }
}
Best Practices
- Unique IDs - Always use unique agent IDs for tracking
- Clear System Prompts - Define agent behavior precisely
- Appropriate Tools - Only include tools the agent needs
- Error Handling - Implement onError for production agents
- Response Validation - Validate critical agent outputs
- Metadata - Include rich metadata for observability
- Version Control - Version your agent configurations (see the sketch after this list)
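One lightweight way to apply the last two practices is a small versioned registry keyed by id and version. This is a pattern sketch, not a Grid API:
// Hypothetical helper: keep every AgentConfig retrievable by "id@version"
const registry = new Map<string, AgentConfig>();

function registerAgentConfig(config: AgentConfig) {
  registry.set(`${config.id}@${config.version}`, config);
}

function getAgentConfig(id: string, version: string): AgentConfig {
  const config = registry.get(`${id}@${version}`);
  if (!config) throw new Error(`Unknown agent config: ${id}@${version}`);
  return config;
}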
Configuration Schema
interface AgentConfig {
  id: string;
  type: string;
  version: string;
  prompts: {
    system: string;
  };
  metadata: AgentMetadata;
  tools?: {
    builtin?: Tool[];
    custom?: Tool[];
    mcp?: Tool[];
  };
  behavior?: {
    maxRetries?: number;
    responseFormat?: "text" | "json_object";
    temperature?: number;
    maxTokens?: number;
  };
  orchestration?: {
    mode?: "autonomous" | "guided" | "hybrid";
    maxIterations?: number;
  };
  // Mirrors the voice options documented above; exact types may differ
  voice?: {
    enabled?: boolean;
    voiceId?: string;
    autoSpeak?: boolean;
    interruptible?: boolean;
    autoListen?: boolean;
    mixedModality?: Record<string, any>;
  };
  customConfig?: Record<string, any>;
}

interface AgentMetadata {
  id: string;
  type: string;
  name: string;
  description: string;
  capabilities: string[];
  version: string;
  [key: string]: any;
}
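For reference, the lifecycle hook signatures above assemble into this interface (collected from the Lifecycle Hooks section; the exact exported name may differ):
interface CustomHandlers {
  beforeAct?: (input: AgentActInput, config: AgentConfig) => Promise<AgentActInput>;
  afterResponse?: (response: AgentResponse, input: AgentActInput) => Promise<AgentResponse>;
  onError?: (error: Error, attempt: number) => Promise<void | {
    retry: boolean;
    modifiedInput?: AgentActInput;
  }>;
  validateResponse?: (response: AgentResponse) => Promise<{
    isValid: boolean;
    errors?: string[];
  }>;
  transformInput?: (input: AgentActInput) => Promise<AgentActInput>;
  transformOutput?: (output: AgentResponse) => Promise<AgentResponse>;
}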
Related APIs
- baseLLMService - Create LLM service for agents
- createToolExecutor - Create tool executor
- createNamedTool - Create tools for agents
- agentFlowService - Run autonomous flows