Prompt-Knowledge Validation Guide

Briefcase validation prevents runtime failures by checking prompt references against your knowledge base before LLM execution.

Quick Start

from briefcase_ai.validation import PromptValidationEngine
from briefcase_ai.integrations.lakefs import VersionedClient

lakefs = VersionedClient(
    repository="knowledge-base",
    branch="main",
    lakefs_endpoint="https://lakefs.example.com/api/v1",
    lakefs_access_key="your-key",
    lakefs_secret_key="your-secret",
)

validator = PromptValidationEngine(
    versioned_client=lakefs,
    repository="knowledge-base",
    mode="strict",
)

report = validator.validate("Follow policies/medicare.pdf for evaluation")

if report.status != "passed":
    for error in report.errors:
        print(error.message, error.remediation)

Validation Layers

  1. Syntax/reference extraction
  2. Resolution against storage/version control
  3. Optional semantic validation (higher latency)
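As a rough illustration of the first layer, reference extraction can be done with a single regex pass over the prompt. The pattern and the `extract_references` helper below are hypothetical sketches, not part of the Briefcase API:

```python
import re

# Hypothetical pattern: capture path-like references such as "policies/medicare.pdf".
REFERENCE_PATTERN = re.compile(r"\b[\w-]+(?:/[\w.-]+)+\.(?:pdf|md|txt|docx)\b")

def extract_references(prompt: str) -> list[str]:
    """Return path-like knowledge-base references found in a prompt (layer 1 sketch)."""
    return REFERENCE_PATTERN.findall(prompt)

print(extract_references("Follow policies/medicare.pdf for evaluation"))
# ['policies/medicare.pdf']
```

Each extracted reference would then be resolved against storage (layer 2) before any optional semantic check runs.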

Modes

Mode        Behavior                      Recommended for
----------  ----------------------------  ---------------------
strict      Errors fail validation        Production/compliance
tolerant    Only critical issues fail     Development
warn_only   Never fails, emits warnings   Monitoring/testing
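The mode semantics above can be sketched as a small decision function. `should_fail` and the severity labels are illustrative assumptions, not the engine's actual internals:

```python
def should_fail(mode: str, severities: list[str]) -> bool:
    """Decide whether a validation run fails under a given mode (sketch).

    Assumed severity labels: "critical" and "error"; the real engine may differ.
    """
    if mode == "warn_only":
        return False  # never fails; issues surface as warnings
    if mode == "tolerant":
        return any(s == "critical" for s in severities)  # only critical issues fail
    # strict: any issue fails validation
    return len(severities) > 0

print(should_fail("strict", ["error"]))       # True
print(should_fail("tolerant", ["error"]))     # False
print(should_fail("warn_only", ["critical"])) # False
```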

Common Error Families

  • 404: referenced content does not exist
  • 409: reference resolves to a different version than expected
  • 410: referenced content has been removed or deprecated
  • 503: validation backend unavailable (for example, a storage connectivity failure)
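One way client code can act on these families is a simple dispatch on the code. The grouping below is an illustrative sketch; a real handler would also surface the report's remediation hints:

```python
# Hypothetical handling policy keyed by the error families above.
RETRYABLE = {503}                  # backend unavailability is usually transient
CONTENT_ERRORS = {404, 409, 410}   # the prompt or knowledge base needs fixing

def classify(code: int) -> str:
    """Map a validation error code to a coarse handling strategy (sketch)."""
    if code in RETRYABLE:
        return "retry"
    if code in CONTENT_ERRORS:
        return "fix-reference"
    return "unknown"

print(classify(503))  # retry
print(classify(404))  # fix-reference
```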

OpenTelemetry Attributes

Validation emits attributes such as:

  • validation.status
  • validation.mode
  • validation.reference.count
  • validation.error.count
  • validation.resolution.time_ms
  • validation.lakefs.commit
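For custom instrumentation, these attributes can be attached to whatever span wraps the validate call. The helper below flattens a report into an attribute dictionary; the report fields it reads are an assumed shape, not Briefcase's documented schema:

```python
from types import SimpleNamespace

def report_attributes(report) -> dict:
    """Flatten a validation report into OpenTelemetry-style attributes (sketch)."""
    return {
        "validation.status": report.status,
        "validation.mode": report.mode,
        "validation.reference.count": len(report.references),
        "validation.error.count": len(report.errors),
        "validation.resolution.time_ms": report.resolution_time_ms,
        "validation.lakefs.commit": report.commit,
    }

# Demo with a stand-in report object (field names are assumptions).
demo = SimpleNamespace(
    status="passed",
    mode="strict",
    references=["policies/medicare.pdf"],
    errors=[],
    resolution_time_ms=12,
    commit="abc123",
)
print(report_attributes(demo)["validation.error.count"])  # 0
```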

API Surface

class PromptValidationEngine:
    def __init__(
        self,
        versioned_client,
        repository: str,
        branch: str = "main",
        mode: str = "strict",
        enable_semantic: bool = False,
        llm_client=None,
    ): ...

    def validate(self, prompt: str): ...

See also the LLM Interaction Playbook for schema-first and replay-driven validation workflows.