PropensityScore.calculate_impact
- PropensityScore.calculate_impact(y_true, y_pred, response_type='expectation')
Calculate the causal impact as the difference between observed and predicted values.
By default, the impact is calculated using the posterior expectation (mu) rather than the posterior predictive (y_hat). This means the causal impact represents the difference from the expected value of the model, excluding observation noise. This approach provides a cleaner measure of the causal effect by focusing on the systematic difference rather than including sampling variability from the observation noise term.
- Parameters:
y_true (xr.DataArray) – The observed outcome values with dimensions [“obs_ind”, “treated_units”].
y_pred (az.InferenceData) – The posterior predictive samples containing the predicted values.
response_type ({"expectation", "prediction"}, default="expectation") –
The response type to use for impact calculation:
"expectation": Uses the model expectation (μ). Excludes observation noise, focusing on the systematic causal effect.
"prediction": Uses the full posterior predictive (ŷ). Includes observation noise, showing the full predictive uncertainty.
- Returns:
The causal impact with dimensions ending in “obs_ind”.
- Return type:
xr.DataArray
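The core computation can be sketched as follows. This is a simplified illustration using plain NumPy arrays rather than the xr.DataArray / az.InferenceData objects the real method accepts, and the function name here is hypothetical, not part of the library:

```python
import numpy as np

def calculate_impact_sketch(y_true, posterior_draws):
    """Simplified sketch: impact = observed - predicted, per posterior draw.

    y_true: shape (n_obs,), the observed outcomes.
    posterior_draws: shape (n_draws, n_obs), posterior samples of mu
        (response_type="expectation") or y_hat (response_type="prediction").
    Returns shape (n_draws, n_obs): one impact series per draw, so posterior
    uncertainty propagates into the impact estimate.
    """
    return y_true[np.newaxis, :] - posterior_draws

# toy example: 3 posterior draws, 4 observations
y_true = np.array([1.0, 2.0, 3.0, 4.0])
draws = np.array([[1.0, 1.5, 2.5, 3.5],
                  [0.5, 2.0, 3.0, 4.5],
                  [1.5, 2.5, 2.0, 4.0]])
impact = calculate_impact_sketch(y_true, draws)
print(impact.shape)          # (3, 4)
print(impact.mean(axis=0))   # posterior mean impact per observation
```

Subtracting each posterior draw (rather than a single point prediction) is what lets downstream plots show credible intervals on the impact itself.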
Notes
When using response_type="expectation" (the posterior expectation), the uncertainty in the impact reflects:
- Parameter uncertainty in the fitted model
- Uncertainty in the counterfactual prediction
But excludes:
- Observation-level noise (sigma)
When using response_type="prediction", the uncertainty also includes observation-level noise, resulting in wider intervals.
Note
REFACTOR TARGET: Currently, experiment classes store impacts using the default response_type="expectation" at fit time. When users call plot(response_type="prediction"), the prediction-based impact is calculated on the fly in the _bayesian_plot() methods. This approach works but duplicates the decision logic. A future refactor could unify this by either storing both variants or always calculating on demand.
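As a toy illustration of the Notes above, the following simulation (hypothetical numbers, plain NumPy rather than the ArviZ objects the method uses) shows how adding observation noise to the expectation draws widens the resulting intervals:

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 5000

# Posterior draws of the expectation mu: parameter uncertainty only (toy values).
mu_draws = rng.normal(loc=5.0, scale=0.2, size=n_draws)

# Posterior predictive draws: observation noise (sigma) added on top of mu.
sigma = 1.0
y_hat_draws = mu_draws + rng.normal(loc=0.0, scale=sigma, size=n_draws)

def interval_width(samples, hdi=0.94):
    """Width of a central credible interval of the given mass."""
    lo, hi = np.percentile(samples, [100 * (1 - hdi) / 2, 100 * (1 + hdi) / 2])
    return hi - lo

print(interval_width(mu_draws))     # narrower: expectation only
print(interval_width(y_hat_draws))  # wider: includes observation noise
```

The same widening carries through to the impact, since the impact is observed minus predicted for each draw.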