Warehouse-Native Product
Off-the-shelf risk scores rarely fit the data you actually have. Predictive Models trains and scores directly in your warehouse so the model learns your payer mix, your coding patterns, and your cost structure instead of someone else’s benchmark cohort.
You define the scoring universe, target policy, and prediction horizon up front; predictions then publish as standard warehouse tables, with train/test evaluation metrics to inspect before teams act on the results.
The operating model stays dbt-native, so product teams can version model behavior, rerun training intentionally, and keep scoring close to the curated data pipeline they already trust.
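To make that concrete, here is a minimal sketch of what dbt-native parameterization can look like. The var names (prediction_horizon_months, target_event), the eligible_members ref, and the columns are illustrative assumptions, not the package's actual interface.

```sql
-- Hypothetical dbt model: horizon and target pinned as versioned
-- vars, so changing target policy is a reviewed config change
-- rather than an ad hoc edit. All names are illustrative.
select
    m.member_id,
    m.anchor_date,
    {{ var('prediction_horizon_months', 12) }} as horizon_months,
    '{{ var("target_event", "inpatient_admission") }}' as target_event
from {{ ref('eligible_members') }} as m
```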
Product brief
Predictive Models starts by defining exactly who is eligible to be scored, what future event matters, and how model outputs should be evaluated before they enter operations.
Why This Model
Benchmark models inherit every mismatch between the data they were built on and the population you are trying to manage.
The model learns from the population it will score instead of a proxy cohort with different age mix, benefit design, or provider network characteristics.
Sparse diagnosis families, missing claim types, and uneven fill rates become part of the learned signal rather than a silent source of model drift.
Because training and scoring stay in the same warehouse pipeline, predicted totals stay closer to actual experience and require less translation for stakeholders.
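One way to check that claim in practice is a simple reconciliation query. Every table and column name below (predictions, member_spend, predicted_spend, paid_amount) is a placeholder for whatever your own pipeline produces.

```sql
-- Monthly reconciliation of predicted vs. actual total spend;
-- table and column names are placeholders for your pipeline.
select
    date_trunc('month', p.as_of_date) as score_month,
    sum(p.predicted_spend)            as predicted_total,
    sum(a.paid_amount)                as actual_total,
    sum(p.predicted_spend) / nullif(sum(a.paid_amount), 0) - 1
                                      as relative_error
from predictions as p
join member_spend as a
    using (member_id, as_of_date)
group by 1
order by 1;
```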
Targets
By default the package can model total spend, inpatient utilization, emergency department visits, SNF utilization, and probability targets such as top-k membership or at-least-one-event outcomes. The point is not the default list, though. The point is that target policy is explicit and versioned.
Teams can tune horizon, target definition, encounter dimensions, and feature groups without switching tools or introducing sidecar infrastructure just to get a new prediction family into production.
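To illustrate what an explicit, versioned target definition can look like, here is a minimal SQL sketch of an at-least-one-event label, assuming a Tuva-style encounter table, an eligible_members cohort, and a 12-month horizon. Every name in it is an illustrative assumption.

```sql
-- Hypothetical at-least-one-event target over a 12-month horizon:
-- 1 if the member has any acute inpatient encounter in the window.
-- Table names, encounter_type values, and dateadd() are illustrative.
select
    m.member_id,
    m.anchor_date,
    case when count(e.encounter_id) > 0 then 1 else 0 end as label
from eligible_members as m
left join encounter as e
    on  e.member_id = m.member_id
    and e.encounter_type = 'acute inpatient'
    and e.encounter_start_date >  m.anchor_date
    and e.encounter_start_date <= dateadd(month, 12, m.anchor_date)
group by m.member_id, m.anchor_date;
```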
Warehouse outputs
Predictions and evaluation metrics land as standard warehouse tables your team can query directly.
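Consumption is therefore plain SQL. The schema and columns below (analytics.predictions, predicted_value, model_version) are placeholders for wherever your deployment lands its outputs.

```sql
-- Read the latest scoring run directly; schema, table, and column
-- names are placeholders for your own deployment.
select
    member_id,
    predicted_value,
    model_version,
    as_of_date
from analytics.predictions
where as_of_date = (select max(as_of_date) from analytics.predictions)
order by predicted_value desc;
```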
Operating Model
Training and inference run where your current modeling pipeline already operates, which keeps scheduling, lineage, and deployment familiar.
Registry-aware persistence and reuse prevent accidental retrains while still letting teams manage intentional model versioning.
The companion viewer exposes predictions, metrics, and feature diagnostics through PHI-safe exports so product owners and analysts can interrogate model behavior.
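As a sketch of what a PHI-safe export pattern can look like, the query below summarizes predictions into risk deciles so the export carries distribution shape rather than member-level identifiers. The table and column names are assumptions for illustration.

```sql
-- Decile summary that conveys the score distribution without
-- member-level identifiers; names are illustrative.
with ranked as (
    select
        predicted_value,
        ntile(10) over (order by predicted_value) as risk_decile
    from analytics.predictions
)
select
    risk_decile,
    count(*)             as n_rows,
    avg(predicted_value) as mean_predicted_value
from ranked
group by risk_decile
order by risk_decile;
```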
Implementation
A typical launch starts by validating the upstream Tuva-style contract, defining the target policy that maps to real operational decisions, and establishing how evaluation metrics will be reviewed before downstream adoption.
From there, the work is the same as any Illuminate engagement: train, evaluate, reconcile expected volume and value patterns, then operationalize the outputs inside the warehouse and reporting workflows your team already uses.
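As a sketch of the contract-validation step, fill-rate checks like the one below catch sparse or missing fields before they become silent model drift. The medical_claim table and its fields follow common Tuva-style naming but should be treated as illustrative here.

```sql
-- Simple fill-rate checks on a Tuva-style claims input; low fill
-- rates on key fields flag contract gaps before training.
select
    count(*) as n_rows,
    avg(case when diagnosis_code_1 is not null then 1.0 else 0.0 end)
        as dx1_fill_rate,
    avg(case when paid_amount is not null then 1.0 else 0.0 end)
        as paid_fill_rate,
    avg(case when claim_type is not null then 1.0 else 0.0 end)
        as claim_type_fill_rate
from medical_claim;
```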
Next step
We can talk through your current cloud platform, upstream source contract, and the operational workflow that needs to be supported before we recommend an implementation path.