Module 5, Week 1: Time Series & Panel Data

Article 10 of 13 · 20 min read

📊 Running Example: Regional Promo Rollout

Throughout, picture a promotion launched in one region while the other regions remain untreated; the goal is the promo's causal effect on regional sales over time.

How do we estimate causal effects when treatments vary over time? We'll explore methods for longitudinal data and time-varying interventions.

1. Introduction

Longitudinal data, in which the same units are observed repeatedly over time, presents unique opportunities (each unit can partly serve as its own control) and challenges (confounders that change over time) for causal inference. We'll cover methods designed for panel data and time-varying treatments.

2. Synthetic Control Methods

Synthetic control creates a weighted combination of control units to match the treated unit's pre-treatment trajectory.

Key Idea:

Find weights w_j for the control units such that the weighted average Σ_j w_j Y_{jt} closely tracks the treated unit's outcome Y_{treated,t} over the pre-treatment period. The post-treatment gap between the treated unit and this synthetic unit is the estimated treatment effect.
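Written out, the weights solve a constrained least-squares problem over the T₀ pre-treatment periods (the standard simplex-constrained formulation, which the code below implements):

$$\min_{w}\ \sum_{t \le T_0} \Big( Y_{\text{treated},t} - \sum_{j} w_j Y_{jt} \Big)^2 \quad \text{s.t.} \quad w_j \ge 0,\ \ \sum_j w_j = 1$$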

import numpy as np
from scipy.optimize import minimize

def synthetic_control(Y_treated_pre, Y_controls_pre):
    """Find simplex weights so the weighted controls match the treated unit pre-treatment.

    Y_treated_pre : (T0,) outcomes for the treated unit before treatment
    Y_controls_pre: (T0, J) outcomes for the J control units before treatment
    """
    n_controls = Y_controls_pre.shape[1]

    def objective(w):
        # Squared pre-treatment fit error of the synthetic unit
        synthetic = Y_controls_pre @ w
        return np.sum((Y_treated_pre - synthetic) ** 2)

    # Weights form a convex combination: non-negative and summing to one
    constraints = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1},)
    bounds = [(0, 1)] * n_controls

    result = minimize(objective, x0=np.ones(n_controls) / n_controls,
                      bounds=bounds, constraints=constraints, method='SLSQP')
    return result.x
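As a quick usage sketch on simulated data for the promo rollout (all numbers are made up for illustration: weekly sales in one treated region and eight control regions, with a promo lift of 12 units added post-rollout):

import numpy as np

rng = np.random.default_rng(0)
T0, T1, n_controls = 30, 10, 8   # pre-rollout weeks, post-rollout weeks, control regions

# Simulated weekly sales: control regions share a common upward trend,
# and the treated region roughly tracks their average
Y_controls = rng.normal(100, 5, (T0 + T1, n_controls)) + np.arange(T0 + T1)[:, None]
Y_treated = Y_controls.mean(axis=1) + rng.normal(0, 1, T0 + T1)
Y_treated[T0:] += 12             # the promo lifts sales by 12 units per week

w = synthetic_control(Y_treated[:T0], Y_controls[:T0])
gap = Y_treated[T0:] - Y_controls[T0:] @ w
print(f"estimated promo effect per week: {gap.mean():.1f}")   # close to 12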

3. Marginal Structural Models

Marginal Structural Models (MSMs) use inverse probability weighting to handle time-varying confounding. The core difficulty: a time-varying confounder (e.g., recent sales) is itself affected by earlier treatment, so conditioning on it as a regressor blocks part of the effect, while omitting it leaves confounding. Weighting sidesteps this by reweighting the sample instead of conditioning.

$$E[Y_t(\bar{w})] = \beta_0 + \beta_1 w_t + \beta_2 w_{t-1} + \cdots$$

The coefficients are estimated by fitting this model with stabilized inverse probability of treatment weights (IPTW): each observation is weighted by the ratio of its treatment probability given treatment history alone to its probability given history plus the time-varying confounders, accumulated over time.
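A minimal sketch of this estimation with scikit-learn, assuming a long-format panel sorted by region and week; the column names (region, treated, prior_treated, sales_lag, sales) are hypothetical stand-ins from the promo example:

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

def stabilized_iptw(df):
    """Stabilized weights: P(W_t | treatment history) / P(W_t | history + confounders).

    Assumes a long-format panel sorted by (region, week),
    with 'treated' coded as 0/1 integers.
    """
    X_num = df[['prior_treated']]                # history only (numerator)
    X_den = df[['prior_treated', 'sales_lag']]   # history + confounder (denominator)
    num = LogisticRegression().fit(X_num, df['treated'])
    den = LogisticRegression().fit(X_den, df['treated'])

    # Probability of the treatment actually received, under each model
    rows = np.arange(len(df))
    received = df['treated'].to_numpy()
    ratio = (num.predict_proba(X_num)[rows, received]
             / den.predict_proba(X_den)[rows, received])

    # Weights multiply over time within each region
    return pd.Series(ratio, index=df.index).groupby(df['region']).cumprod()

def fit_msm(df):
    """Weighted regression of the outcome on current and lagged treatment (the MSM)."""
    return LinearRegression().fit(df[['treated', 'prior_treated']], df['sales'],
                                  sample_weight=stabilized_iptw(df))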

4. Time-Varying Treatments

When treatments change over time, standard one-shot adjustment methods fail. Key challenges include (a toy simulation follows the list):

  • Time-varying confounding affected by prior treatment
  • Treatment-confounder feedback loops
  • Dynamic treatment regimes
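To make the feedback loop concrete, here is a tiny simulation in the spirit of the promo example (variable names and coefficients are purely illustrative): last week's promo lifts this week's sales, and this week's promo decision targets low-sales weeks, so sales is both a consequence of past treatment and a confounder of future treatment.

import numpy as np

rng = np.random.default_rng(1)
T = 52
promo = np.zeros(T)   # weekly promo on/off
sales = np.zeros(T)   # outcome, and a time-varying confounder

for t in range(1, T):
    # Confounder affected by prior treatment: last week's promo lifts sales
    sales[t] = 50 + 5 * promo[t - 1] + rng.normal(0, 2)
    # Treatment responds to the confounder: promos target low-sales weeks
    promo[t] = rng.random() < 1 / (1 + np.exp(0.5 * (sales[t] - 50)))

# Regressing sales on promo while conditioning on lagged sales blocks part of
# the promo's effect; not conditioning leaves confounding. This is the bind
# that MSMs (and g-methods generally) are designed to escape.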

5. Key Takeaways

  • Synthetic control constructs counterfactuals from weighted control units
  • MSMs handle time-varying confounding with IPTW
  • Panel data methods leverage temporal structure for identification

6. Next Week Preview

Module 5, Week 2: Causal Reinforcement Learning

We'll explore off-policy evaluation, contextual bandits, and counterfactual reasoning in sequential decision-making.