# Prompt-Knowledge Validation Guide
Briefcase validation prevents runtime failures by checking prompt references against your knowledge base before LLM execution.
## Quick Start
```python
from briefcase_ai.validation import PromptValidationEngine
from briefcase_ai.integrations.lakefs import VersionedClient

lakefs = VersionedClient(
    repository="knowledge-base",
    branch="main",
    lakefs_endpoint="https://lakefs.example.com/api/v1",
    lakefs_access_key="your-key",
    lakefs_secret_key="your-secret",
)

validator = PromptValidationEngine(
    versioned_client=lakefs,
    repository="knowledge-base",
    mode="strict",
)

report = validator.validate("Follow policies/medicare.pdf for evaluation")
if report.status != "passed":
    for error in report.errors:
        print(error.message, error.remediation)
```
## Validation Layers
- Syntax checking and reference extraction from the prompt text
- Resolution of each reference against storage/version control
- Optional semantic validation (adds latency)
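To illustrate the first layer, a minimal reference extractor might look like the sketch below. The regex, file extensions, and `extract_references` helper are illustrative assumptions, not Briefcase internals:

```python
import re

# Hypothetical pattern for file-like knowledge-base paths in a prompt.
# Extensions and shape are assumptions for illustration only.
REFERENCE_PATTERN = re.compile(r"\b[\w./-]+\.(?:pdf|md|txt|json|yaml)\b")

def extract_references(prompt: str) -> list[str]:
    """Return candidate knowledge-base paths mentioned in the prompt."""
    return REFERENCE_PATTERN.findall(prompt)
```

The later layers would then resolve each extracted path against the configured repository and branch.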
## Modes

| Mode | Behavior | Recommended for |
|---|---|---|
| `strict` | Errors fail validation | Production/compliance |
| `tolerant` | Only critical issues fail | Development |
| `warn_only` | Never fails, emits warnings | Monitoring/testing |
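The table above can be summarized as a small decision function. This is an assumed sketch of the pass/fail logic implied by the table, not the engine's actual source:

```python
def report_status(error_count: int, critical_count: int, mode: str) -> str:
    """Map issue counts to a report status under each mode (illustrative)."""
    if mode == "strict":
        # Any error or critical issue fails validation.
        return "failed" if error_count or critical_count else "passed"
    if mode == "tolerant":
        # Only critical issues fail; ordinary errors become warnings.
        return "failed" if critical_count else "passed"
    if mode == "warn_only":
        # Never fails; everything is surfaced as warnings.
        return "passed"
    raise ValueError(f"unknown mode: {mode}")
```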
## Common Error Families

- `404`: referenced content does not exist
- `409`: reference points to a version mismatch
- `410`: referenced content removed/deprecated
- `503`: validation backend unavailable (for example, storage connectivity)
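A caller might map these families to remediation hints roughly as follows; the hint text and the `remediation_for` helper are illustrative assumptions, not part of the Briefcase API:

```python
# Hypothetical remediation hints keyed by the error families above.
REMEDIATION = {
    404: "Check the path and branch; the referenced content was not found.",
    409: "Pin the prompt to a specific commit or update the reference version.",
    410: "Replace the deprecated reference with its successor document.",
    503: "Retry with backoff; the validation backend is unreachable.",
}

def remediation_for(code: int) -> str:
    """Return a human-readable hint for a validation error code."""
    return REMEDIATION.get(code, "Unrecognized validation error code.")
```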
## OpenTelemetry Attributes

Validation emits attributes such as:

- `validation.status`
- `validation.mode`
- `validation.reference.count`
- `validation.error.count`
- `validation.resolution.time_ms`
- `validation.lakefs.commit`
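One way to assemble these attributes before attaching them to a span with `span.set_attribute` in your OpenTelemetry setup; only the attribute names come from the list above, while the `validation_attributes` helper and its parameters are assumptions:

```python
def validation_attributes(
    status: str,
    mode: str,
    reference_count: int,
    error_count: int,
    resolution_ms: float,
    commit: str,
) -> dict:
    """Shape a validation result into the documented attribute names."""
    return {
        "validation.status": status,
        "validation.mode": mode,
        "validation.reference.count": reference_count,
        "validation.error.count": error_count,
        "validation.resolution.time_ms": resolution_ms,
        "validation.lakefs.commit": commit,
    }
```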
## API Surface

```python
class PromptValidationEngine:
    def __init__(
        self,
        versioned_client,
        repository: str,
        branch: str = "main",
        mode: str = "strict",
        enable_semantic: bool = False,
        llm_client=None,
    ): ...

    def validate(self, prompt: str): ...
```
See also the LLM Interaction Playbook for schema-first and replay-driven validation workflows.