The story

From selling tractors to coordinating American agriculture — John Deere.

Start with one of the most familiar companies in industrial history. Deere built tractors. It ended up running the layer that American farms plan, execute, and report through. The shift from product company to coordination business is what made Deere durable — and the same shift is available to any platform with sensors at its base.

Modern · AI-era · agriculture
John Deere
Wedge · Tractors. The legacy hardware sale. Familiar, dumb, unloved by analysts.
Engine · GPS + telematics added to the equipment lifted fuel efficiency and uptime — and laid the groundwork to capture operational data on every pass over a field: soil conditions, planting practices, local timing decisions. AI-driven prescription tools and autonomous machinery now determine yield; farms hit targets because of Deere, not just with Deere.
Coordination · The John Deere Operations Center. Planting schedules, soil profiles, supplier QA, government paperwork, distribution — farms run on it, not just use its tools. Deere stopped being a product company and became a coordination business.
Outcome · Tacit knowledge that used to live in the farmer's head — soils, weather, timing — now lives in Deere's models. The business model shifted from selling equipment to coordinating operations. That's the durable position.
"Farms no longer merely use Deere's tools; they build their operations around Deere's infrastructure."
The strategic move. Deere didn't get there by selling more tractors. It got there by becoming the layer that coordinates how the work happens — planning, execution, supplier QA, regulatory filings. The next two slides distill the arc this kind of platform follows, and the trade it asks the customer to make in return. Source: Choudary, Sangeet Paul. Reshuffle: Who Wins When AI Restacks the Knowledge Economy, ch. 8.
The strategic arc

Sensor → Engine → Coordination.

Deere's path, generalized. Two transitions any analytics platform with sensors at its base eventually walks. The dashboard becomes the engine that determines outcomes. The engine becomes the coordination layer that operations run through. The rest of this deck shows what the engine looks like today and what coordination pieces are emerging.

Stage 1
The wedge
Sensor
Sensors generate the raw rows. The smallest sale that gets the platform inside the customer's process.
We are here
Stage 2
The differentiator
Engine
Models on top of the sensor data determine performance. Yield, quality, consistency — the customer hits targets because of the platform, not just with it.
Stage 3
The prize
Coordination
The platform coordinates how the work happens — planning, execution, supplier QA, regulatory filings flow through one system. The customer's operations are run through the platform, not just supported by it.
Where we sit today. The engine is built (single-tenant). The parameter knowledge graph and outcome-label table are coordination-layer pieces already in place. The cross-customer memory layer that lets the engine compound across deployments is design-sketched, not yet built — activating that is the next strategic build.
Two movements

The engine governs performance. The coordination layer regulates behavior.

These are the two distinct paths Deere walked, and they reinforce each other. The engine determines how well the system runs. The coordination layer determines how the system is run — who talks to whom, what gets reported where, when each step happens. Together they shift the locus of advantage toward the platform.

When a platform decides outcomes for a customer, the customer's operations reorganize around it. That's a real trade. We make it favorable by keeping the customer in control of three things they actually care about: their data, their rules, and the math behind every decision. The platform that earns the dependence is the one that respects it.
AI-first · human-in-proximity · the frame

AI handles continuous reasoning. Humans intervene where it matters.

The system runs autonomously for the bulk of the analytical work — ingestion, feature derivation, drift detection, watchlist ranking, decision synthesis, audit. Proximity engineering dictates where humans are inserted: at moments closest to a consequence — confirming a root cause, approving a regime change, accepting a threshold revision. Everything else is delegated to the AI layer.

AI · autonomous
Continuous reasoning
  • Ingest FERM_RUNS, FERM_ENV every 30 s
  • Sensor agents simulate / read instruments; produce traces
  • Compute Lens 1–7 + 9 — population stats, adjusted ORs, watchlist, drift, lifecycle graph
  • Synthesize Lens 8 decision (regime + action + differential)
  • Persist hot tier (FERM_RUN_STATE) on every recompute
  • Run automated confirm-by checks on each hypothesis
  • Score every in-flight run; surface attention router rankings
  • Append audit log; refresh stratified-baseline mview nightly
Human · proximity engineering
Judgment at the edge
  • Confirm root cause after a contamination event resolves
  • Override regime when domain context contradicts the synthesizer
  • Approve a threshold revision (with documented evidence)
  • Authorize role tier elevation (operator → supervisor)
  • Triage which in-flight run to focus on next
  • Annotate post-mortem narrative on a high-impact event
The partition is intentional and observable. Every AI output carries an audit trail; every human action is logged with viewer attribution; the role tier system enforces who can do what. The closer the action is to a consequence with monetary or safety impact, the higher the human-in-loop requirement.
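The gating logic is simple enough to sketch. A minimal illustration of the proximity rule — the action names, role tiers, and policy table here are hypothetical stand-ins, not the deployed API:

```python
# Hypothetical sketch of proximity gating: the closer an action sits to a
# monetary/safety consequence, the higher the role tier it requires, and
# the more explicitly it is recorded as a human decision.
ROLE_RANK = {"viewer": 0, "operator": 1, "supervisor": 2}

# consequence proximity per action: 0 = observational, 2 = direct impact
ACTION_POLICY = {
    "view_dashboard":     {"proximity": 0, "min_role": "viewer"},
    "triage_run":         {"proximity": 1, "min_role": "operator"},
    "confirm_root_cause": {"proximity": 2, "min_role": "operator"},
    "approve_threshold":  {"proximity": 2, "min_role": "supervisor"},
    "abort_run":          {"proximity": 2, "min_role": "supervisor"},
}

def authorize(action: str, role: str) -> dict:
    """Return whether the role may take the action, and whether the
    action must be logged as an explicit human decision."""
    policy = ACTION_POLICY[action]
    allowed = ROLE_RANK[role] >= ROLE_RANK[policy["min_role"]]
    return {
        "allowed": allowed,
        # anything with nonzero consequence proximity is a human-in-loop
        # event: it gets viewer attribution in the audit log
        "requires_human_record": policy["proximity"] > 0,
    }
```

The point of the table-driven shape: escalating an action's consequence class is a one-line policy change, not a code change.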
Analytical surface · the engine — built today

Nine lenses, three scopes — each answers a different operator question.

Every analytical output the dashboard produces falls into one of three scopes: population (what does the dataset say in aggregate), single run (what's happening in this specific batch right now), and action (what should I do about it). Lenses 1–5 build trust in the dataset. Lenses 6 and 9 show what's happening in flight. Lens 7 raises the alarm. Lens 8 tells the operator what to do.

Population scope
Build trust in the dataset
L1
Prediction
"What's the chance — overall, and for this run's configuration?"
Wilson CI · stratified contextual baseline
L2
Explanation
"Which factors raise the rate, and what's left after controlling for confounders?"
χ² + BH-FDR · adjusted OR via ridge logistic
L3
Findings
"Which hardcoded process patterns show up in this dataset?"
same stats as L2 · neutral magnitude tags
L4
Biomass signal
"Among continuous predictors, which carry per-SD weight?"
ridge logistic on standardised features
L5
Watch list
"Across ~44 candidate predictors, which are stably selected?"
elastic-net + bootstrap stability · CV-tuned λ
Single-run scope
Show what's happening
L9
Process graph
"Where am I in the lifecycle, and which phase is alarmed?"
reuses L6 phases + L7 alerts · no new stats
L6
Run dynamics
"What did the trajectory of OUR / CER / VCD / iPH / viability look like?"
sensor-agent traces · phase + onset markers
L7
Early warnings
"Did any rule fire on the live trace, and how many hours before harvest?"
hand-coded detectors · severity-tagged · h-before-harvest
Action scope
Tell the operator what to do
L8
Decision card
"What should I do right now — continue / verify / hold / abort, and why?"
synthesizer over L1–L7 · regime · drivers · differential
+
Differential diagnosis
Ranked candidate causes, each with a confirm-by check.
3 rule-based candidates · up to 2 LLM-proposed · 4/7 automated checks
+
Attention router
"Of the runs in flight, which one needs me next?"
priority score over recent FERM_RUN_STATE rows
L1–L5 · answer "do I trust this system?" — they're the audit-grade evidence that the model has any business making downstream decisions.
L6 + L9 + L7 · the in-flight panel — trajectory chart, lifecycle strip, and rule-based alerts on the same run.
L8 + extras · close the loop — synthesize, propose actions, rank candidates, route attention. The only lens that recommends.
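Lens 1's headline number is a Wilson score interval on the contamination rate — the interval stays sensible at small n, where the naive normal interval collapses. A minimal sketch of that calculation (the helper name and z value are ours; the lens card only specifies "Wilson CI"):

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96):
    """95% Wilson score interval for k contaminated batches in n runs."""
    if n == 0:
        return (0.0, 1.0)
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    # clamp to [0, 1] so tiny samples never report an impossible rate
    return (max(0.0, center - half), min(1.0, center + half))
```

As n grows with the same observed rate, the interval tightens around p — which is exactly the "build trust in the dataset" behavior the population lenses are after.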
The same nine lenses feed five presentation layers: analyst dashboard at / (process scientists — density + audit) · Trip view at /trip (operators — route, ETA, re-route options) · Run journey at /journey (engineers — one run as a star with phases and the slice of the parameter graph that governs its readings) · Parameter repo at /repo (domain experts — full cross-industry knowledge graph) · Universe at /universe (VP / investor / board — living particle field where density tracks real record counts). Same backend, five UX shapes, five different jobs.
Soothing & insightful loading — every art-style page (/journey, /repo, /universe, /trip) opens with a quiet composition rather than a spinner: concentric ripples around a slow-pulsing core, six drifting particles, status sub-line that updates with the work happening underneath (checking session → resolving run → loading state · phases · parameter graph → weaving the constellation). The motion respects prefers-reduced-motion; the loader fades out on the same RAF cycle the constellation's own appear animations begin, so the particles you were watching seamlessly become the data. Calm, status-aware, audit-grade — the product breathes even before it has anything to say.
Presentation surfaces · engine surfaces

Five UX shapes over one backend — one per audience.

Same nine lenses, same endpoints, same audit trail. The deployed site exposes five distinct routes — each tuned to a different reader and a different question. Density-and-audit for analysts; route-and-ETA for operators; constellations for engineers and domain experts; particle field for the boardroom.

Analyst
Dashboard
Process scientist · data analyst
"Do I trust this system, and what does the data say about contamination drivers?"
/
Operator
Trip view
Bioreactor operator · shift lead
"Is my batch on track, and what do I do next?"
/trip
Engineer
Run journey
Process engineer · QA reviewer
"This run, in its full regulatory cosmos."
/journey
Domain expert
Parameter repo
Regulatory affairs · subject expert
"What governs each parameter, who measures it, how does it fail — across industries?"
/repo
Executive
Universe
VP · investor · board
"What's the shape and scale of the entire dataset?"
/universe
The next three slides walk through /trip, /journey, and /universe. The analyst dashboard is the original product surface (already covered by the Nine Lenses slide); the parameter repo follows the same constellation idiom as Run journey at a different scale.
/trip · operator

Trip view — Maps-style decision UI for the bioreactor floor.

Modeled on Waze. The operator gets a route, an ETA, traffic ahead, and re-route options — not a wall of metrics. Same data the analyst dashboard has, framed as navigation.

[Trip view mock — trip bar: TRIP · 113 h remaining · contam 45% · drift 0.97 · INTERVENE; the route: Lag → Exp → Stationary (4 alerts) → Harvest Outcome; what's ahead: OUR sharp drop t=55h · viability collapse t=55h · iPH stress t=56h · RQ regime shift t=74h; re-route options: ABORT · verify · hold · continue; turn-by-turn: yeast lot C31 drift · osmotic stress (hard water) · sterilization breach (via LLM · confirm-by)]

What it shows

  • Trip bar — ETA to harvest · contam probability with CI · earliest alert · drift · regime pill · recommended action.
  • The route — large SVG lifecycle. Phases as nodes; severity-gradient edges flow source-color → destination color; you-are-here pulses; alerted phases turn red with a count badge; ambient page tint cued off the regime.
  • Run switcher — top 5 in-flight runs ranked by priority, color-coded by regime. Click to hop without typing IDs.
  • What's ahead — Lens 7 alerts colored by severity, sorted by hours-before-harvest.
  • Re-route options — recommended action and alternatives, each with consequence.
  • Turn-by-turn — Lens 8 differential ranked candidate causes with confirm-by checks. LLM-proposed entries flagged.

For a contaminated LIVE_… run with active alerts, the trip-bar reads "ABORT · 113 h remaining · now at t=55 h" — the operator sees the action and the runway in one glance.

/journey · engineer

Run journey — one batch as a star, surrounded by what governs it.

For the process engineer or QA reviewer who wants the full picture of a single run. The run is a pulsing white star at the center; phases orbit blue; live readings bind to canonical parameters; regulations · vendors · failure modes fan out as outer arcs — only the slice of the parameter graph that governs this run's measurements, not the full universe.

[Run-journey mock — center: LIVE_…831; inner orbit: Lag · Exp · Stat · Hvst; mid orbit: OUR · CER · iPH · Viab; outer arcs — regulations: 21 CFR 211 · EU GMP A1 · ICH Q8; vendors: Hamilton · Mettler; industries: Pharma · Enzymes; failure modes: probe drift · fouling · cal loss]

What it shows

  • Center — the focused run as a pulsing white star.
  • Inner orbit — the four phases (Lag · Exp · Stat · Harvest), chained by lifecycle edges. Phases with attached Lens 7 alerts pulse red and carry a count badge.
  • Mid orbit — live parameter readings (OUR · CER · RQ · iPH · Viability) bound to canonical KG parameters via is_a edges.
  • Outer arcs — only the slice of the parameter knowledge graph that governs this run's measurements: regulations (top, amber), vendors (left, purple), industries (right, green), failure modes (bottom, red).
  • Alerts — comet flares anchored on the affected phase node, with continuously pulsing connecting edges.
  • Ambient body tint cues off the regime — green / amber / red wash, very subtle.

Click any node → slide-in detail panel with confidence bar, metadata, grouped-by-edge-kind connections (each clickable to navigate the graph). Drag to pan, scroll to zoom.

/universe · executive

Universe — the entire dataset as a living particle field.

For the boardroom view: every entity in the system as a glowing point. Color encodes type; particle density tracks real record counts in the dataset; connections form by canvas proximity with traveling pulses on each edge. The visual swarm is the dataset's shape.

What it shows

  • Every dot is a real entity — KG parameter, regulation, vendor, failure mode, industry, run, outcome label. Hover any dot for type · label · context · the 3 nearest particles in canvas distance.
  • Color encodes type — teal parameters · amber regulations · purple vendors · red failure modes · green industries · white runs · blue snapshots · violet labels.
  • Density tracks real counts — more white particles = more runs in the dataset; more amber = more regulatory anchors. The visual swarm is data-truthful.
  • Connections form by proximity with a traveling pulse on each edge. Hover a particle: dim non-neighbors, brighten the actual semantic edges to its 1-hop neighbors.
  • Click any particle → drill in. Parameters/regulations open /repo; runs open /journey?run_id=….

Stats panel at bottom-left shows the live counts driving the field. Calm motion (orbit speeds, drift radii, edge pulses all ~3× slower than first build) so the field reads as ocean swell, not insect twitch.

Architectural backbone · infrastructure spine — single-tenant today

Brain ↔ Memory — where data lives and how it flows.

The AI-first surface area sits inside the brain — per-run intelligence (regime, drift, alerts, decision) cached in a small hot tier the operator UI reads in O(1). The data lake is the historian — consulted only on cache miss, append-only for audit, and the source of population-level priors. The two feed each other on a defined cadence; the human only intervenes at proximity events (next slide).

BRAIN · HOT TIER — per-run intelligence read by the operator UI · 5-min lazy TTL
  FERM_RUN_STATE — read-of-record · 1 row per run · MERGE upsert
    • regime · recommended_action · rationale
    • drift_score · drift_trend · drift_components_json
    • alerts_json · earliest_alert_severity
    • drivers_json · differentials_json
    • narrative · confidence · n_active_alerts
    • population_baseline · contextual_baseline
    • vessel_load · last_updated_at · age_seconds
  In-process caches — _derived_cached (watchlist · strata · attention) · ords_fetch_all_cached (ferm_full/, 60 s)
MEMORY · LAKE — historian + source of priors · append-only · daily refresh on derived views
  FERM_RUNS — in-tank sensor readings
  FERM_ENV — env layer · shift, lot, weather, crew
  FERM_PARAMETER_CATALOG — registry of every parameter
  FERM_BASELINE_STRATA — mview · contextual rates per cell
  FERM_SNAPSHOTS — 5-min pre-aggregated time-series
  FERM_OUTCOME_LABEL — operator-confirmed root causes
  FERM_AUDIT — viewer attribution log
  ferm_full_v — view · FERM_RUNS ⋈ FERM_ENV
Flows — lake → brain: joined raw rows · contextual baseline · labeled priors; brain → lake: computed state · audit trail
Hot — sub-second reads
Warm — single-digit-second reads
Operational reference
Audit / outcome
Cold — historical / time-series
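The hot-tier read path can be sketched in a few lines — a lazy-TTL cache-or-compute, under stated assumptions: `get_run_state` and the in-memory dict are illustrative stand-ins for the deployed handler and the FERM_RUN_STATE table.

```python
import time

TTL_SECONDS = 300  # the slide's 5-minute lazy TTL

_hot_tier: dict[str, dict] = {}  # stand-in for FERM_RUN_STATE rows

def get_run_state(run_id: str, recompute) -> dict:
    """Return per-run intelligence: O(1) on cache hit; on miss,
    consult the historian (recompute) and upsert one row per run."""
    row = _hot_tier.get(run_id)
    now = time.time()
    if row is not None and now - row["last_updated_at"] < TTL_SECONDS:
        return row                     # fresh: no lake read at all
    state = recompute(run_id)          # miss: lenses + lake consulted
    state["last_updated_at"] = now
    _hot_tier[run_id] = state          # MERGE-style upsert, 1 row per run
    return state
```

The inversion this buys: the operator UI never pays the analytical latency; staleness is bounded by the TTL and the recompute cadence.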
Data inventory · the wedge

Every table and view, with role and cadence.

Color = tier. Each card answers three questions: what is this, who reads and writes it, and how fresh is it. The full schema lives in data/01_*.sql through data/10_*.sql and is exposed in the dashboard's methodology panel.

FERM_RUN_STATE
Per-run cached intelligence — regime, drift, alerts, drivers, differentials, narrative.
tier · hot
read · /api/run_state, /api/attention_router
write · MERGE on every state recompute
cadence · 5-min lazy TTL on read
FERM_RUNS
In-tank sensor summary per batch — temp, pH, DO, OUR yield, biomass.
tier · warm
read · ferm_full_v
write · live injector every 30 s
retention · indefinite
FERM_ENV
Around-the-tank context — shift, crew, yeast lot, ambient, water utility.
tier · warm
read · ferm_full_v
write · live injector + manual env updates
retention · indefinite
ferm_full_v · view
Joined view of FERM_RUNS ⋈ FERM_ENV. The single SELECT every analytical endpoint pulls from.
tier · warm
read · /api/snapshot, /api/run_trace, /api/decision …
cache · 60 s in-process via ords_fetch_all_cached
FERM_PARAMETER_CATALOG
Registry of every parameter — units, expected range, regulatory class, role flags, simulator hookup.
tier · operational
read · /api/catalog, watchlist predictor specs
write · governed · update via SQL + approval
FERM_BASELINE_STRATA · mview
Pre-aggregated contextual rates per cell — shift × yeast_band × humidity × hardness.
tier · operational
read · /api/snapshot prediction.contextual
refresh · DBMS_SCHEDULER nightly 02:30
FERM_SNAPSHOTS
Pre-aggregated dashboard time-series captured every 5 min for the timeline chart.
tier · cold
read · /api/timeline
write · scheduler job · retention 90 d
FERM_OUTCOME_LABEL
Operator-confirmed root causes per run — feeds future Bayesian calibration of differential ★ ratings.
tier · audit
write · /api/label_outcome (role-gated)
read · /api/labels · feeds priors loop
FERM_AUDIT
Viewer-attributed log of every endpoint call — who read what, when, with what question.
tier · audit
write · log_event from every endpoint
read · /api/audit · retention indefinite
End-to-end flow · engine pipeline

From observation to decision to label — the full pipeline.

Each box is a stage; the latency tag tells you the dominant cost. The hot tier ends the chain — operators read from FERM_RUN_STATE, never from raw rows.

1 · INGEST — live injector every 30 s → FERM_RUNS · FERM_ENV
2 · DERIVE — sensor agents simulate / read probes → traces · phase features · drift score · alerts
3 · ANALYZE — Lenses 1–7 + stratified baseline · Wilson · ridge · elastic-net · cached 60–300 s
4 · SYNTHESIZE — Lens 8 · regime · action · drivers · differentials · _build_decision()
5 · PERSIST — hot-tier write · FERM_RUN_STATE MERGE · ferm_state_upsert/ + FERM_AUDIT
6 · READ — operator UI O(1) hot read · /api/run_state · /api/attention_router
A · ACT — operator decides · continue · verify · hold · abort · role-gated writes
B · LABEL — confirmed cause from post-mortem · /api/label_outcome
C · PERSIST — FERM_OUTCOME_LABEL · MERGE upsert · audit · ferm_outcome_upsert/
D · CALIBRATE (future tier) — posteriors update from labeled history · heuristic ★ ratings → Bayesian P(cause | evidence) · activates when ~50 confirmed cases per cause accumulate
The operator decides and acts; labeled outcomes feed the priors — smarter next time.
The feedback loop · infrastructure — closes the loop

How the brain learns from memory.

Population-level priors flow from the lake into the brain on every recompute. Per-run conclusions flow from the brain back into the lake as audit + labeled outcomes. Each pass tightens the next: the more confirmed cases FERM_OUTCOME_LABEL accumulates, the closer the differential's heuristic ★ ratings get to calibrated Bayesian probabilities.

1 · OBSERVE — live data · FERM_RUNS · FERM_ENV —(aggregate · derive features)→
2 · ANALYZE — 8 lenses + contextual + drift —(synthesize · persist)→
3 · DECIDE — regime · FERM_RUN_STATE —(the run plays out)→
4 · OUTCOME — batch resolves · contam Y/N —(post-mortem)→
5 · LABEL — confirmed cause · FERM_OUTCOME_LABEL —(label tightens posterior)→ back to 2
Closed loop — priors flow inward · evidence flows outward.
For data governance: every step has a defined owner, latency target, and audit link. The brain is the operator's source of truth; the lake is the historian; the loop is what keeps the brain calibrated. As the labeled set grows, the differential's heuristic ★ ratings approach Bayesian posteriors — which is what regulated industries need to defend the system in audit.
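The calibration step can be sketched as a Laplace-smoothed count model over confirmed labels — our illustration of the heuristic-to-posterior idea, not the planned posterior_update() internals; the hypothesis names are borrowed from the trip-view differential for flavor.

```python
from collections import Counter

# Illustrative hypothesis library (stand-in for the 7-hypothesis library).
LIBRARY = ["yeast_lot_drift", "osmotic_stress", "sterilization_breach"]

def cause_posterior(labels: list[str], smoothing: float = 1.0) -> dict:
    """P(cause) from operator-confirmed labels, with add-one smoothing
    over the library so unseen causes keep nonzero mass. As the labeled
    set grows, these frequencies dominate the smoothing prior — the
    heuristic star ratings fade out of the ranking."""
    counts = Counter(labels)
    total = len(labels) + smoothing * len(LIBRARY)
    return {c: (counts[c] + smoothing) / total for c in LIBRARY}
```

With few labels the distribution stays near-uniform (the heuristic regime); with many, it tracks the confirmed case mix — which is the auditable behavior the slide describes.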
Agentic framework · engine internals

What's autonomous, layer by layer.

Five tiers of agents, each with its own role · methods · tools · skills · memory. Sensing agents read instruments. Analytical agents fit models. Reasoning agents synthesize. Triage agents allocate attention. The narration agent translates results to plain language.

TIER 1 Sensing · 18 agents in 5 packs read instruments → produce per-batch summaries + traces
OffGasPack
Mass-balance from vent sensors
OURAgent · CERAgent · RQAgent
tools · vent flow, %O₂, %CO₂
skills · q_O₂·X physics
mem · cal drift
CellularPack
Live-cell state probes
VCD · Viability · IntracellularPH
tools · capacitance, BCECF
skills · stress indicators
mem · fouling state
EquipmentPack
Vessel + utility integrity
KLa · Antifoam · VesselAge
PHCalAge · SIP · CIP
tools · CIP/SIP logs
skills · kLa derivation
mem · cumulative wear
MaterialsPack
Raw-material lot attributes
LactosePurity · TraceIron · Biotin
tools · COA, lab assays
skills · lot risk
mem · lot history
OperationsPack
Facility + personnel signal
HVACFilter · DoorEvents
OperatorTenure
tools · door sensors, HR
skills · spatial / temporal
mem · shift patterns
TIER 2 Analytical · 6 model fitters & feature derivers tier-1 outputs → statistical structure
PhaseFeatureExtractor
Collapses traces into per-batch shape features
methods · extract_phase_features()
tools · produce_trace outputs
skills · slopes · curvatures · late-window
mem · stateless
StratifiedBaseliner
Per-context rate via groupby + Wilson CI
methods · _compute_strata_baselines()
tools · ferm_full_v rows · n-guardrail
skills · contextual fallback
mem · 5-min TTL cache
DriftScorer
Continuous deviation from healthy reference
methods · compute_drift_score()
tools · actual + healthy ref traces
skills · RMS z-score · trend
mem · 24h rolling windows
AdjustedORFitter
Ridge logistic with Wald CIs
methods · _build_adjusted_ors()
tools · numpy · IRLS
skills · per-predictor adj OR + p
mem · predictor specs
StabilitySelector
Elastic-net + bootstrap stability
methods · _build_watchlist()
tools · FISTA · 40 resamples
skills · CV-tuned λ · selection freq
mem · 60s TTL cache
AlertDetector
Rule-based detectors on traces
methods · run_early_warnings()
tools · 4 rule definitions
skills · trigger time + hours-before-harvest
mem · stateless
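The DriftScorer card's "RMS z-score" reads naturally as follows — a hedged sketch, not the deployed compute_drift_score(), which adds the trend and 24h rolling-window logic the card mentions.

```python
import math

def drift_score(actual: list[float], healthy_mean: list[float],
                healthy_sd: list[float]) -> float:
    """Root-mean-square deviation of the live trace from the healthy
    reference, in units of reference standard deviations. 0.0 means
    the trace sits on the reference; ~1.0 means one SD off on average."""
    zs = [
        (a - m) / s
        for a, m, s in zip(actual, healthy_mean, healthy_sd)
        if s > 0  # skip degenerate reference points
    ]
    if not zs:
        return 0.0
    return math.sqrt(sum(z * z for z in zs) / len(zs))
```

A single continuous number like this is what lets the trip bar show "drift 0.97" and the attention router weight it directly.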
TIER 3 Reasoning · 3 synthesizers tier-2 outputs → ranked candidate explanations + a single recommended action
DifferentialDiagnoser
Ranks candidate causes when regime ≠ nominal
methods · differential_diagnose()
tools · 7-hypothesis library
skills · top-3 ★-ranked + tier markers
mem · hypothesis weights (heuristic → posterior)
ConfirmByChecker
Runs automated probes on demand
methods · confirm_check endpoint
tools · 4 automated checks (registry)
skills · supports / refutes / inconclusive verdicts
mem · stateless
DecisionSynthesizer
Combines all lens outputs into one regime + action
methods · _build_decision()
tools · snapshot · watchlist · alerts · drivers
skills · rule-based regime + rationale + confidence
mem · stateless
TIER 4 Triage & memory · 3 cross-run agents manage attention across runs · persist state · close the loop
AttentionRouter
Ranks in-flight runs by urgency
methods · /api/attention_router
tools · FERM_RUN_STATE reads
skills · regime + drift×50 + alerts×10 + load×20
mem · 10-min hot cache window
RunStateCache
Persistent per-run hot tier — the brain
methods · get/set via MERGE upsert
tools · FERM_RUN_STATE table
skills · O(1) lookup with TTL freshness
mem · everything — this IS the memory
OutcomeLearner
Future: turns labels into Bayesian priors for the differential
methods · posterior_update() — planned
tools · FERM_OUTCOME_LABEL · hypothesis priors
skills · ★ → P(cause | evidence) at ~50 cases per cause
mem · lifetime label history
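The AttentionRouter's score line (regime + drift×50 + alerts×10 + load×20) sketches directly. The regime base values below are our assumption — the card gives the weights but not the regime term — and the field names mirror the FERM_RUN_STATE columns.

```python
# Assumed regime base values; the deployed _attention_priority may differ.
REGIME_BASE = {"nominal": 0, "verify": 40, "hold": 70, "abort": 100}

def attention_priority(state: dict) -> float:
    """Urgency score for one in-flight run, from its hot-tier row."""
    return (
        REGIME_BASE.get(state["regime"], 0)
        + 50 * state["drift_score"]       # continuous deviation
        + 10 * state["n_active_alerts"]   # rule-based early warnings
        + 20 * state["vessel_load"]       # how much product is at stake
    )

def rank_runs(states: list[dict]) -> list[str]:
    """Which in-flight run needs the operator next? Highest score first."""
    return [s["run_id"] for s in
            sorted(states, key=attention_priority, reverse=True)]
```

Because the inputs are all hot-tier fields, this ranking costs one table scan of cached rows — no lens recomputation on the triage path.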
TIER 5 Narration · 1 translator structured outputs → operator-readable English
GemmaNarrator
Translates already-computed numbers into plain language. Never invents new numbers, never re-derives results — strictly a JSON-to-text translator. Falls back to a structured auto-summary if the LLM is slow or unavailable.
methods · /api/narrate
tools · Gemma-7B (cloud) or local Ollama
skills · structured-data → text
mem · stateless · per-call only
TIER 3+ · 5+ LLM-augmented · 3 agents (Claude / Gemma · local Ollama for air-gapped) extend the deterministic core where free-text reasoning helps · fail-soft
LLMDifferentialDiagnoser
Proposes up to 2 additional candidate causes the rule-based library missed, using free-text reasoning over evidence + label history. Tagged speculative.
methods · propose()
tools · call_llm · Claude Haiku 4.5 · JSON
skills · reads run state + ≤20 recent labels
mem · stateless · audit-logged
LLMConfirmByChecker
Fallback when a hypothesis has no hardcoded probe (3 of 7 today). Renders supports / refutes / inconclusive verdict from the confirm-by text + run-state evidence.
methods · check()
tools · call_llm · Claude · evidence bundle
skills · conservative verdict · prefers "inconclusive"
mem · stateless · audit-logged
OperatorQA
Free-text Q&A over run state + audit log + recent labels. Cites specific run_ids; refuses when data is insufficient.
methods · answer() · /api/operator_qa
tools · call_llm · Claude · ctx bundle
skills · citation-grounded answers
mem · stateless · audit-logged
Agentic topology · operational wiring · engine deployment

Which agent runs where, and how they hand off.

33 of 34 agents are operational today: the deterministic core (sensing, analytical, reasoning, triage), the GemmaNarrator, and the 3 LLM-augmented agents (Tier 3+ / 5+) layered on top. The one exception, OutcomeLearner, is built into the architecture as a future tier — its data pipeline is live (FERM_OUTCOME_LABEL + /api/label_outcome), and the Bayesian posterior step activates once ~50 confirmed cases per cause accumulate. Everything else is in production code.

BROWSER · 5 PRESENTATION LAYERS — / analyst · /trip operator · /journey run-cosmos · /repo parameter graph · /universe living particle field · all 5 routes consume the same /api/* endpoints · HTTPS · cookie auth
VERCEL · FastAPI · 30 active agents — stateless serverless functions · auto-scale · in-process TTL caches (60 s / 300 s) · cold start ~900 ms with numpy
TIER 1 · SENSING — 18 agents · 5 packs · OffGasPack (OUR · CER · RQ) · CellularPack (VCD · Viability · pHᵢ) · EquipmentPack (KLa · Antifoam · etc.) · MaterialsPack (Lactose · Iron · Biotin) · OperationsPack (HVAC · Doors · Op tenure) · simulate_run_trace()
TIER 2 · ANALYTICAL — 6 fitters / feature derivers · PhaseFeatureExtractor (extract_phase_features) · StratifiedBaseliner (_compute_strata_baselines) · DriftScorer (compute_drift_score) · AdjustedORFitter (_build_adjusted_ors · IRLS) · StabilitySelector (_build_watchlist · FISTA · 40×) · AlertDetector (run_early_warnings · 4 rules)
TIER 3 · REASONING — 3 synthesizers · DifferentialDiagnoser (differential_diagnose · 7 hypotheses · ★-ranked) · ConfirmByChecker (/api/confirm_check · CONFIRM_CHECKS · 4 active) · DecisionSynthesizer (_build_decision · regime · action · rationale)
TIER 4 · TRIAGE — 2 active · 1 planned · AttentionRouter (_attention_priority · priority = regime + drift + alerts + load) · RunStateCache (_compute_run_state · _persist · 5-min lazy TTL · MERGE upsert) · OutcomeLearner (posterior_update — planned · data pipeline live · model future)
TIER 5 · NARRATION + LLM ext. — 1 narrator + 3 augmenters · GemmaNarrator (/api/narrate · text translator) · LLMDifferentialDiagnoser (propose() · speculative candidates) · LLMConfirmByChecker (check() · LLM fallback verdict) · OperatorQA (/api/operator_qa · free-text Q&A · cited)
ORDS — MERGE upsert + auto-REST read · cache-or-compute · O(1) read
ORACLE ADB · system of record — FERM_RUNS · FERM_ENV · ferm_full_v · FERM_RUN_STATE (hot tier · MERGE upsert) · FERM_PARAMETER_CATALOG · FERM_BASELINE_STRATA (mview, nightly) · FERM_OUTCOME_LABEL · FERM_AUDIT · DBMS_SCHEDULER: live injector (30 s) · mview refresh (02:30 daily) · optional state pre-warm
LLM SERVICE · external — Claude Haiku 4.5 · Gemma fallback · narrate · propose · verify · QA · air-gap mode: Ollama on-prem

33 of 34 ACTIVE — 18 sensor classes · 6 analytical functions · 3 reasoning synthesizers · 2 triage agents (router + cache) · 4 LLM-augmented agents (1 narrator + 1 differential proposer + 1 confirm-by checker + 1 operator QA) · plus the planned posterior calibrator that activates on accumulated labels.

Where next

Industrial verticals the framework handles with minimal changes.

The architecture is sensor-agnostic and outcome-flexible. The nine lenses, the hot-tier inversion, the parameter knowledge graph, the agentic stack, the five presentation surfaces — none of it is fermentation-specific. Five adjacent verticals adapt with bounded engineering work; the table below shows what stays, what swaps, and the lift to ship a domain pack.

Industrial enzymes
Running today
Status · Live in this preview. Built on a public Kaggle industrial-enzymes dataset (~57K rows) + a synthesized env layer. Every lens, every endpoint, every art surface runs against this data right now.
Stays · Everything — this is the dataset the framework was authored against.
Swaps · Per-customer SOPs and parameter-catalog enzyme-spec rows when onboarding a real customer. Outcome label is yield-grade vs spec; regs typically ISO 9001 + customer-specific.
Lift to ship · ~2 weeks per real customer for SOP + spec mapping. Detergent, food, textile, paper-pulp, and animal-feed enzymes all share the same shape. Demo-ready as proof.
Brewing — beer · wine · spirits · kombucha
Native fit · ready to onboard
Status · Architecturally compatible; not yet onboarded. Yeast biology, sensor stack, env-layer shape, all 9 lenses, hot tier, decision card, KG, the five surfaces — every framework piece applies as-is.
Stays · Engine + infrastructure unchanged. Trip view especially fits brewing's shift-paced operations; Run journey fits supplier QA workflows.
Swaps · Outcome vocabulary → off-flavor / stuck fermentation / attenuation deviation / spec failure. Regulations → HACCP + ISO 22000 + TTB (US alcohol). KG seed: gravity, attenuation, ester profile, dissolved CO₂.
Lift to ship · Smallest lift to a real-customer dataset. Zero framework changes — needs brewing data + outcome labels + a brewing-tuned KG seed. ~6 weeks for a first pilot.
Industrial dairy & fermented foods
Small lift
Stays · Process model, hot-tier inversion, lenses 1–8, env layer.
Swaps · Sensors simpler (pH, temp, lactate, salt). Outcome → spoilage / spec failure / texture deviation. Regs → FSMA + ISO 22000 + HACCP CCPs.
Time · ~6–8 weeks. Yogurt, cheese, fermented plant proteins, alt-protein.
Cell & gene therapy bioreactors
Medium lift
Stays · Bioreactor sensor stack, hot tier, lenses, decision card, KG framework.
Swaps · Outcome → viable cell density spec, transduction efficiency, vector titer. Regs → 21 CFR Part 211 + EU GMP Annex 2. Vendor catalog adds Sartorius BioPAT, Cytiva.
Time · ~3 months. Highest-margin vertical but heaviest validation overhead.
Wastewater treatment & anaerobic digesters
Larger lift
Stays · All 9 lenses, hot tier, decision card, agentic stack, the whole presentation surface (Trip view fits especially well).
Swaps · Sensors are different (DO, COD, BOD, NH₄⁺, total N, P, ORP). Process is an anaerobic–aerobic cascade. Outcome → effluent spec breach, biogas yield, odor. Regs → EPA Clean Water Act permits, regional environmental.
Time · ~4–5 months. Sensor agents need a new family; everything downstream stays.
What's true today vs what's next. The framework is running end-to-end on the industrial-enzymes Kaggle dataset — that's not aspirational, it's what the deployed app shows. Brewing is the next ship: zero framework changes needed, just a real brewing dataset and outcome labels. The other three (dairy, cell+gene therapy, wastewater) need a domain pack on top of the existing engine. The architecture's portability is the product moat — but the moat doesn't compound until the cross-customer memory layer (Brain ↔ Memory slide) is built.