# Exporters
Exporters ship decision records from framework handlers to external systems. Configure one via `briefcase.config.setup(exporter=...)`.
All exporters inherit from `BaseExporter` and implement:

```python
async def export(self, decision: Any) -> bool
async def flush(self) -> None
```
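
A minimal custom exporter might look like the following sketch. The `BaseExporter` stand-in below simply mirrors the two documented methods so the snippet runs without the library installed; in real code, import it from `briefcase.exporters` instead.

```python
import asyncio
from abc import ABC, abstractmethod
from typing import Any


class BaseExporter(ABC):  # stand-in mirroring briefcase.exporters.BaseExporter
    @abstractmethod
    async def export(self, decision: Any) -> bool: ...

    @abstractmethod
    async def flush(self) -> None: ...


class ConsoleExporter(BaseExporter):
    """Buffers decision records and prints them on flush."""

    def __init__(self) -> None:
        self._buffer: list[Any] = []

    async def export(self, decision: Any) -> bool:
        self._buffer.append(decision)
        return True  # report success; a real exporter would return False on failure

    async def flush(self) -> None:
        for record in self._buffer:
            print(record)
        self._buffer.clear()


async def main() -> None:
    exporter = ConsoleExporter()
    await exporter.export({"decision_id": "abc-123"})
    await exporter.flush()


asyncio.run(main())
```

Pass an instance to `setup(exporter=...)` exactly as with the built-in exporters.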
## OTelExporter

Ships decision records as OpenTelemetry spans to any OTLP-compatible collector.

### Constructor
```python
from briefcase.exporters import OTelExporter

exporter = OTelExporter(
    endpoint="http://localhost:4317",  # OTLP gRPC or HTTP endpoint
    service_name="briefcase-ai",       # OTel service.name resource attribute
    protocol="grpc",                   # "grpc" or "http"
    span_exporter=None,                # supply a custom SpanExporter (optional)
)
```
| Parameter | Default | Description |
|---|---|---|
| `endpoint` | `"http://localhost:4317"` | Collector endpoint |
| `service_name` | `"briefcase-ai"` | `service.name` resource attribute |
| `protocol` | `"grpc"` | `"grpc"` uses OTLP/gRPC; `"http"` uses OTLP/HTTP |
| `span_exporter` | `None` | Inject a custom `SpanExporter` (for testing or custom backends) |
### Example
```python
from briefcase.config import setup
from briefcase.exporters import OTelExporter

setup(
    exporter=OTelExporter(
        endpoint="http://otel-collector:4317",
        service_name="claims-processing",
        protocol="grpc",
    )
)
```
Decision records appear as spans with `briefcase.*` attributes in your collector (Jaeger, Tempo, Honeycomb, Datadog, etc.).
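
As an illustration of that attribute layout, the hypothetical helper below flattens a nested decision record into dotted `briefcase.*` keys. The exact attribute names `OTelExporter` emits are an assumption here, not the library's documented schema.

```python
from typing import Any


def to_span_attributes(decision: dict[str, Any], prefix: str = "briefcase") -> dict[str, Any]:
    """Flatten a nested decision record into dotted OTel attribute keys."""
    attrs: dict[str, Any] = {}
    for key, value in decision.items():
        name = f"{prefix}.{key}"
        if isinstance(value, dict):
            # recurse, extending the dotted prefix
            attrs.update(to_span_attributes(value, prefix=name))
        else:
            attrs[name] = value
    return attrs


print(to_span_attributes({"decision_id": "abc", "model": {"name": "gpt-4"}}))
# {'briefcase.decision_id': 'abc', 'briefcase.model.name': 'gpt-4'}
```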
## SplunkHECExporter

Ships decision records to Splunk via the HTTP Event Collector (HEC) API with batching and retry.

### Constructor
```python
from briefcase.exporters import SplunkHECExporter

exporter = SplunkHECExporter(
    url="https://splunk.example.com:8088",  # HEC base URL (path appended automatically)
    token="your-hec-token",
    index="main",               # Splunk index
    sourcetype="briefcase_ai",  # Splunk sourcetype
    batch_size=100,             # flush after N events
    verify_ssl=True,
    max_retries=3,
    timeout=10,                 # seconds
)
```
| Parameter | Default | Description |
|---|---|---|
| `url` | required | Splunk HEC base URL. `/services/collector/event` is appended. |
| `token` | required | HEC token |
| `index` | `"main"` | Target Splunk index |
| `sourcetype` | `"briefcase_ai"` | Event sourcetype |
| `batch_size` | `100` | Flush when buffer reaches this size |
| `verify_ssl` | `True` | Verify TLS certificates |
| `max_retries` | `3` | Retry count on HTTP error |
| `timeout` | `10` | Request timeout in seconds |
### Example
```python
from briefcase.config import setup
from briefcase.exporters import SplunkHECExporter

setup(
    exporter=SplunkHECExporter(
        url="https://splunk.example.com:8088",
        token="8f4b2d1a-...",  # raw HEC token; the "Splunk " Authorization prefix is added for you
        index="ai_decisions",
        sourcetype="briefcase_langchain",
    )
)
```
Call `await exporter.flush()` to force-send buffered events.
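
The wire format the exporter targets can be sketched as follows. The envelope fields (`time`, `index`, `sourcetype`, `event`) follow Splunk's documented HEC event format; the batching detail shown is an assumption about the exporter's internals, and the decision payloads are illustrative.

```python
import json
import time
from typing import Any


def hec_envelope(decision: dict[str, Any], index: str = "main",
                 sourcetype: str = "briefcase_ai") -> str:
    """Wrap one decision record in a Splunk HEC event envelope."""
    return json.dumps({
        "time": time.time(),      # event timestamp, epoch seconds
        "index": index,
        "sourcetype": sourcetype,
        "event": decision,        # the decision record itself
    })


# Batches are sent as concatenated envelopes in a single POST body
# to <url>/services/collector/event:
batch = "\n".join(
    hec_envelope(d) for d in [{"decision_id": "a"}, {"decision_id": "b"}]
)
```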
## SentinelConnector

Ships decision records to Azure Log Analytics (Microsoft Sentinel) using the HTTP Data Collector API with HMAC-SHA256 SharedKey authentication.

### Constructor
```python
from briefcase.exporters import SentinelConnector

exporter = SentinelConnector(
    workspace_id="your-workspace-id",
    shared_key="your-shared-key",
    log_type="BriefcaseDecisions",  # custom log table name
    batch_size=100,
    verify_ssl=True,
    max_retries=3,
    timeout=10,
)
```
| Parameter | Default | Description |
|---|---|---|
| `workspace_id` | required | Log Analytics workspace ID |
| `shared_key` | required | Primary or secondary workspace key |
| `log_type` | `"BriefcaseDecisions"` | Custom log table name in Sentinel |
| `batch_size` | `100` | Flush when buffer reaches this size |
| `verify_ssl` | `True` | Verify TLS certificates |
| `max_retries` | `3` | Retry count on HTTP error |
| `timeout` | `10` | Request timeout in seconds |
### Example
```python
import os

from briefcase.config import setup
from briefcase.exporters import SentinelConnector

setup(
    exporter=SentinelConnector(
        workspace_id=os.environ["SENTINEL_WORKSPACE_ID"],
        shared_key=os.environ["SENTINEL_SHARED_KEY"],
        log_type="BriefcaseAI",
    )
)
```
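
The SharedKey signature can be sketched with stdlib primitives. The string-to-sign layout follows Azure's documented Data Collector API scheme; the connector computes an equivalent `Authorization` header internally, so this is for illustration only.

```python
import base64
import hashlib
import hmac


def build_signature(workspace_id: str, shared_key: str,
                    content_length: int, rfc1123_date: str) -> str:
    """Return the Authorization header value for a Data Collector API POST."""
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{rfc1123_date}\n/api/logs"
    )
    decoded_key = base64.b64decode(shared_key)  # workspace key is base64-encoded
    digest = hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"
```

The signed request is then POSTed to `https://<workspace_id>.ods.opinsights.azure.com/api/logs?api-version=2016-04-01` with the `Log-Type` header set to `log_type`.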
## FHIRExporter

Converts decision records to FHIR R4 `AuditEvent` resources and submits them to a FHIR server as `Bundle` transactions.

### Constructor
```python
from briefcase.exporters import FHIRExporter

exporter = FHIRExporter(
    url="https://fhir.example.com/fhir",  # FHIR server base URL
    token=None,                 # Bearer token (optional)
    batch_size=100,
    agent_name="briefcase-ai",  # AuditEvent agent name
    verify_ssl=True,
    max_retries=3,
    timeout=10,
)
```
### Output Format

Each decision record becomes a FHIR `AuditEvent` resource:
```json
{
  "resourceType": "AuditEvent",
  "type": {
    "system": "http://dicom.nema.org/resources/ontology/DCM",
    "code": "110110",
    "display": "Patient Record"
  },
  "recorded": "2026-02-26T10:00:00Z",
  "outcome": "0",
  "agent": [{"name": "briefcase-ai", "requestor": true}],
  "source": {"observer": {"display": "briefcase-ai"}},
  "entity": [{"what": {"display": "decision_id:..."}}]
}
```
Multiple records are submitted as a `Bundle` of type `transaction`.
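
The transaction wrapper can be sketched as below. The `entry.request` shape follows FHIR R4's rules for transaction Bundles; the exporter's exact entry contents are abbreviated here.

```python
from typing import Any


def to_transaction_bundle(audit_events: list[dict[str, Any]]) -> dict[str, Any]:
    """Wrap AuditEvent resources in a FHIR R4 transaction Bundle."""
    return {
        "resourceType": "Bundle",
        "type": "transaction",
        "entry": [
            {
                "resource": event,
                # each entry carries its own request; POST creates the resource
                "request": {"method": "POST", "url": "AuditEvent"},
            }
            for event in audit_events
        ],
    }
```

The whole Bundle is then POSTed to the server's base URL, so all records in a batch succeed or fail together.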
## GxPExporter

Produces FDA 21 CFR Part 11-compliant electronic signature records with a cryptographic hash chain for tamper evidence.

### Constructor
```python
from briefcase.exporters import GxPExporter

exporter = GxPExporter(
    signer_id="user@example.com",        # Electronic signature identity
    system_validation_id="SYS-VAL-001",  # System validation reference
    format="json",                       # Output format
    meaning="production",                # Signature meaning
    reason_for_change="",                # Default reason for change
)
```
| Parameter | Default | Description |
|---|---|---|
| `signer_id` | required | Electronic signer identity (21 CFR Part 11 §11.50) |
| `system_validation_id` | required | Reference to the validated system ID |
| `format` | `"json"` | Output format |
| `meaning` | `"production"` | Signature meaning |
| `reason_for_change` | `""` | Default reason for change field |
### Hash Chain

Each exported record is linked to the previous via SHA-256:
```json
{
  "record_id": "uuid-...",
  "signer_id": "user@example.com",
  "system_validation_id": "SYS-VAL-001",
  "signed_at": "2026-02-26T10:00:00Z",
  "hash": "a1b2c3...",
  "prior_hash": "9f8e7d...",
  "payload": { ... }
}
```
The `prior_hash` creates an append-only chain. Any modification to a prior record invalidates all subsequent records.
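
A minimal sketch of such a chain, assuming each hash covers the canonicalized payload plus the previous record's hash (the exporter's exact hashing inputs may differ):

```python
import hashlib
import json
from typing import Any


def chain_hash(payload: dict[str, Any], prior_hash: str) -> str:
    """Hash a record's payload together with the previous record's hash."""
    material = json.dumps(payload, sort_keys=True) + prior_hash
    return hashlib.sha256(material.encode("utf-8")).hexdigest()


def verify_chain(records: list[dict[str, Any]]) -> bool:
    """Recompute every link; any tampered record breaks all later links."""
    prior = ""  # the first record has an empty prior_hash
    for record in records:
        if record["prior_hash"] != prior:
            return False
        if record["hash"] != chain_hash(record["payload"], prior):
            return False
        prior = record["hash"]
    return True
```

Because each `hash` folds in `prior_hash`, rewriting one payload forces recomputing every later hash, which an auditor holding the original chain will detect.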
## Flushing Buffers

Exporters with a `batch_size` buffer records until the batch is full. To force a flush (e.g., on application shutdown):
```python
import asyncio

# inside an async context:
await exporter.flush()

# or from synchronous code:
asyncio.run(exporter.flush())
```
## See Also
- Quick Start — configure an exporter
- Infrastructure — Storage — immutable WORM storage backends
- Infrastructure — Events — webhook and Kafka publishing