Logistics

Date: Sunday, October 27, 2024 - Wednesday, October 30, 2024

Location: The voco Chicago Downtown, an IHG Hotel, 350 W. Wolf Point Plaza, Building 1, Chicago, IL 60654

FOCS website here

Description

This workshop covers theoretical topics related to calibration. A predictor is calibrated if its predictions are empirically correct. In weather forecasting, for example, a predictor is calibrated if, among the times it predicts a p probability of rain, the empirical frequency of rain is also p. Calibration allows algorithmic outputs to be reliably interpreted as probabilities, which supports the trustworthiness and interpretability of predictions.
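To make the definition concrete, here is a minimal sketch in Python of the standard binned calibration check (the function and variable names are hypothetical, not from any of the referenced papers): group forecasts into bins and compare each bin's average forecast to the empirical frequency of the outcome.

```python
import numpy as np

def binned_calibration(preds, outcomes, n_bins=10):
    """Group forecasts into bins and compare each bin's average
    forecast to the empirical frequency of the outcome."""
    preds = np.asarray(preds, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    report = []
    for b in range(n_bins):
        mask = bins == b
        if mask.sum() == 0:
            continue
        report.append((preds[mask].mean(), outcomes[mask].mean(), int(mask.sum())))
    return report  # list of (average forecast, empirical frequency, count)

# A forecaster is (approximately) calibrated when the two columns agree.
rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)       # forecasts
y = rng.uniform(size=10_000) < p   # outcomes realized with probability p
for avg_p, freq, n in binned_calibration(p, y):
    print(f"forecast~{avg_p:.2f}  empirical={freq:.2f}  (n={n})")
```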

There is a growing recent literature on the algorithmic foundations of calibration. This workshop will introduce the following topics and their connections to calibration:

  • Complexity theory. Calibration and multi-calibration have implications in complexity theory, including outcome indistinguishability [1] and the Regularity Lemma [2]. 

  • Uncertainty quantification. Calibrated predictions can be used to generate prediction sets that cover their target labels at a target confidence level, not just marginally but also conditionally on various overlapping properties of the covariates. More generally, everything that “multi-calibration” can do for means can be done for exactly those distributional statistics that can be expressed as M-estimators, a class that includes, e.g., quantiles, yielding applications in conformal prediction [3, 4] (see the split-conformal sketch after this list). 

  • Decision making, regret, and omni-prediction. Calibration allows decision making to be separated into two components: a prediction task and a decision-making task. Calibrated predictions from the prediction task are trustworthy for every downstream decision maker [5, 8]. In particular, best responding to calibrated predictions guarantees no swap regret for every decision maker [6, 7] (see the best-response sketch after this list). 

  • Fairness. Multi-calibration originated from the algorithmic fairness literature, and continues to have important applications in guaranteeing that models capture appropriate amounts of signal about populations not just marginally, but broken down by intersecting demographic attributes like race and sex [9, 10].
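To illustrate the uncertainty-quantification bullet, the sketch below implements classic split conformal prediction, which guarantees only marginal coverage; the multivalid methods of [3, 4] strengthen this to coverage conditional on overlapping groups. The function name and toy data are hypothetical, and this is a minimal sketch rather than the method of any referenced paper.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Classic split conformal prediction (marginal coverage only):
    use absolute residuals on a held-out calibration set to pick an
    interval width that covers a fresh label with prob. >= 1 - alpha."""
    scores = np.abs(np.asarray(cal_labels) - np.asarray(cal_preds))
    n = len(scores)
    # Finite-sample-corrected quantile of the nonconformity scores.
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    return test_pred - q, test_pred + q

# Toy usage: a model that predicts y = x, with noisy labels.
rng = np.random.default_rng(1)
x_cal = rng.uniform(size=1000)
y_cal = x_cal + rng.normal(scale=0.1, size=1000)

# Empirical check: intervals built this way cover ~90% of fresh labels.
x_test = rng.uniform(size=5000)
y_test = x_test + rng.normal(scale=0.1, size=5000)
los, his = split_conformal_interval(x_cal, y_cal, x_test)
print("coverage:", np.mean((los <= y_test) & (y_test <= his)))
```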
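The separation described in the decision-making bullet can also be made concrete. Below is a minimal, hypothetical sketch (not the constructions of [6, 7]) of the downstream step: a decision maker who simply best-responds to whatever forecast they are handed. When the forecasts are calibrated, this simple rule guarantees no swap regret.

```python
import numpy as np

def best_response(forecast, utility):
    """Return the action maximizing expected utility under the forecast.

    forecast: shape (n_outcomes,), a probability distribution over outcomes.
    utility:  shape (n_actions, n_outcomes); utility[a, y] is the payoff
              of action a when outcome y occurs.
    """
    return int(np.argmax(utility @ forecast))

# Toy usage: the umbrella decision against a rain forecast.
utility = np.array([
    [ 1.0, -0.2],   # carry umbrella: payoff under (rain, no rain)
    [-1.0,  1.0],   # leave it home:  payoff under (rain, no rain)
])
forecast = np.array([0.7, 0.3])          # 70% chance of rain
print(best_response(forecast, utility))  # 0: carry the umbrella
```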


The workshop will be coordinated with the Fall 2024 IDEAL Special Program on Interpretability, Privacy, and Fairness. IDEAL is an NSF-funded foundations of data science institute with members from five institutions in the greater Chicago area. The special program also includes a quarter-long Ph.D.-level seminar on calibration. Participants in the special program and course will be encouraged to attend the workshop.


Speakers:

TBA

Organizers:

Jason Hartline (hartline@northwestern.edu)

Jamie Morgenstern (jamiemmt@cs.washington.edu)

Aaron Roth (aaroth@cis.upenn.edu)

Yifan Wu (yifan.wu@u.northwestern.edu)

    References: 

    [1] Dwork, C., Kim, M. P., Reingold, O., Rothblum, G. N., & Yona, G. (2021, June). Outcome indistinguishability. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing (pp. 1095-1108).

    [2] Casacuberta, S., Dwork, C., & Vadhan, S. (2024, June). Complexity-Theoretic Implications of Multicalibration. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing (pp. 1071-1082). 

    [3] Jung, C., Noarov, G., Ramalingam, R., & Roth, A. (2022). Batch multivalid conformal prediction. arXiv preprint arXiv:2209.15145.

    [4] Noarov, G., & Roth, A. (2023, July). The statistical scope of multicalibration. In International Conference on Machine Learning (pp. 26283-26310). PMLR.

    [5] Kleinberg, B., Leme, R. P., Schneider, J., & Teng, Y. (2023, July). U-calibration: Forecasting for an unknown agent. In The Thirty Sixth Annual Conference on Learning Theory (pp. 5143-5145). PMLR.

    [6] Noarov, G., Ramalingam, R., Roth, A., & Xie, S. (2024). High-Dimensional Prediction for Sequential Decision Making.

    [7] Roth, A., & Shi, M. (2024). Forecasting for Swap Regret for All Downstream Agents. In Proceedings of the 25th ACM Conference on Economics and Computation (EC 2024).

    [8] Gopalan, P., Okoroafor, P., Raghavendra, P., Sherry, A., & Singhal, M. (2024, June). Omnipredictors for regression and the approximate rank of convex functions. In The Thirty Seventh Annual Conference on Learning Theory (pp. 2027-2070). PMLR.

    [9] Globus-Harris, I., Kearns, M., & Roth, A. (2022). An Algorithmic Framework for Bias Bounties. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2022).

    [10] Roth, A., Tolbert, A., & Weinstein, S. (2023). Reconciling Individual Probability Forecasts. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2023).
