Prompt. Control. Validate.
Powered by AI.

Your journey to structured LLM guidance begins here.

Integrated into enterprise-grade LLM pipelines at leading AI organizations.

Integrations

Seamlessly integrate with your favorite tools

Seamless Prompt & Control Interleaving

Real-Time Stateful Context Sync

Customizable Guidance Patterns

Effortless Deployment

An AI-Powered Language for Guiding LLMs.

Streamline Guidance DSL deployment for scalable, high-throughput LLM orchestration.

Seamless Interleaved Generation & Control

Mix prompt text, LLM generations, conditional logic, loops, and branching in a single Python-guided workflow.
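The idea can be sketched in plain Python with a stubbed model call (`fake_llm` and `run_workflow` are illustrative names, not part of the guidance API):

```python
# Plain-Python sketch of interleaving prompt text, generation, and
# branching in one workflow. fake_llm stands in for a real model call.

def fake_llm(prompt: str) -> str:
    """Stubbed model: answers yes/no based on the prompt."""
    return "yes" if "blue" in prompt else "no"

def run_workflow(question: str) -> str:
    prompt = f"Q: {question}\nA (yes/no): "
    answer = fake_llm(prompt)            # generation step
    if answer == "yes":                  # branch on the model's own output
        prompt += answer + "\nExplain briefly: "
        explanation = fake_llm(prompt)   # a second, conditional generation
        return f"{answer} ({explanation})"
    return answer

print(run_workflow("Is the sky blue?"))
```

In guidance itself, each generation step would be a `gen()` call whose captured value drives the Python control flow, all inside one program.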

Automated Validation & Constraints

Validate and constrain model outputs on the fly using regex, grammar rules, CFGs, and conditional checks.
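As a rough illustration of on-the-fly regex validation, here is a plain-Python sketch with a stubbed model (`fake_llm` and `constrained_gen` are hypothetical helpers, not guidance built-ins):

```python
import re

# Sketch of validating model output against a regex. Note that real
# constrained decoding masks invalid tokens at every step instead of
# retrying whole generations, but the validation idea is the same.

DATE_RE = re.compile(r"\d{4}-\d{2}-\d{2}")

def fake_llm(prompt: str, attempt: int) -> str:
    """Stub: first attempt is malformed, second matches the pattern."""
    return ["March 3rd", "2024-03-03"][attempt]

def constrained_gen(prompt: str, pattern: re.Pattern, max_retries: int = 2) -> str:
    for attempt in range(max_retries):
        out = fake_llm(prompt, attempt)
        if pattern.fullmatch(out):       # validate on the fly
            return out
    raise ValueError("model never produced a valid output")

print(constrained_gen("Give today's date as YYYY-MM-DD: ", DATE_RE))
```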

Structured & Constrained Outputs

Enforce exact output schemas with selects, regex patterns, context-free grammars, and prebuilt components like JSON and substring.
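A minimal sketch of schema enforcement in plain Python (`fake_llm` and `json_gen` are illustrative stand-ins; guidance's own `json` component enforces the schema at decode time rather than checking afterwards):

```python
import json

# Sketch of enforcing an exact JSON output schema on a stubbed model.

def fake_llm(prompt: str) -> str:
    return '{"name": "Ada", "age": 36}'   # stub: pretend the model emitted this

def json_gen(prompt: str, required_keys: list[str]) -> dict:
    obj = json.loads(fake_llm(prompt))    # output must parse as JSON at all
    missing = [k for k in required_keys if k not in obj]
    if missing:
        raise ValueError(f"output missing keys: {missing}")
    return obj

person = json_gen("Return a person as JSON with name and age: ", ["name", "age"])
print(person["name"])
```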

Seamless Tool & Function Integration

Define and invoke external APIs, tools, or custom functions within the prompt pipeline, with automatic pause and resume around tool calls.
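The pause-and-resume pattern can be sketched in plain Python with a stubbed model; the `calc(...)` convention and helper names here are hypothetical, not a guidance API:

```python
import re

# Sketch of pausing generation around a tool call and resuming with the result.

TOOL_CALL = re.compile(r"calc\((.*?)\)")

def fake_llm(prompt: str) -> str:
    if "result:" in prompt:               # resumed after the tool ran
        return "So the answer is 4."
    return "Let me compute: calc(2 + 2)"  # model requests a tool

def run_with_tools(prompt: str) -> str:
    text = fake_llm(prompt)
    match = TOOL_CALL.search(text)
    if match:
        # Pause: evaluate the tool call outside the model...
        result = eval(match.group(1), {"__builtins__": {}})  # toy calculator
        # ...then resume, feeding the result back into the context.
        return fake_llm(prompt + text + f"\nresult: {result}\n")
    return text

print(run_with_tools("What is 2 + 2?"))
```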

Stateful Control & Multi-Step Reasoning

Maintain and update internal state across prompt turns, enabling complex loops, conditionals, caching, and multi-step workflows in a single LLM call.
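A plain-Python sketch of the idea, with a stubbed model (`fake_llm` and `multi_step` are illustrative names): the loop and the state live in code, while each iteration extends the same growing context.

```python
# Sketch of carrying state across reasoning steps inside one workflow.

def fake_llm(prompt: str) -> str:
    """Stub: 'reasons' by counting the steps seen so far."""
    return f"thought {prompt.count('step')}"

def multi_step(question: str, n_steps: int = 3) -> dict:
    state = {"question": question, "steps": []}
    prompt = question + "\n"
    for i in range(n_steps):
        prompt += f"step {i}: "
        thought = fake_llm(prompt)        # each step extends the same context
        state["steps"].append(thought)    # state persists across steps
        prompt += thought + "\n"
    return state

print(multi_step("How many r's in strawberry?")["steps"])
```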

Token Boundary Healing & Text-Level Prompting

Automatically heal split tokens to eliminate tokenization biases, allowing you to think and prompt purely in text.
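A toy sketch of the mechanism (`VOCAB` and `heal` are illustrative, not the real tokenizer or guidance internals): when the prompt ends in a fragment that could be the prefix of a longer token, healing backs the fragment off and constrains the next token to ones that extend it.

```python
# Toy sketch of token boundary healing on a tiny made-up vocabulary.

VOCAB = ["Hello", "Hel", "lo", " world", " wor", "ld"]

def heal(prompt_tokens: list[str]) -> tuple[list[str], list[str]]:
    """If the prompt's last token could be the prefix of a longer token,
    drop it and constrain generation to tokens that extend it."""
    last = prompt_tokens[-1]
    extensions = [t for t in VOCAB if t.startswith(last) and t != last]
    if extensions:
        return prompt_tokens[:-1], extensions
    return prompt_tokens, VOCAB

# A prompt ending in the fragment "Hel" biases a naive model against
# "Hello"; healing backs up so the whole token can be emitted.
tokens, allowed = heal(["Hel"])
print(tokens, allowed)
```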

Expert Collaboration

Seamless Integration

Scalable Solutions

Your Queries, Simplified

Questions? Answers!

Find quick answers to the most common questions about our platform

What is guidance?

A DSL that orchestrates prompts, control flow, and constraints via {{...}} syntax to produce reliably structured outputs.

Is guidance built for different LLM architectures?

Yes—it’s model-agnostic: templates compile to calls against any LLM API, and key-value caches are reused to speed up repeated runs.

Do I need coding skills to use guidance?

Only basic scripting (e.g., Python); the DSL hides all low-level API details.

Can I customize guidance to fit my use case?

Absolutely—you can define custom templates, variables, regex patterns, conditionals, and function hooks.

Does guidance support mobile LLM deployments?

Yes—it runs wherever your LLM does (on-device or via remote API), enabling the same structured prompting on mobile.
