March 28: Kate Donahue (MIT)
Opportunities and Challenges in Human-AI Collaboration
In this talk, I will describe prior work and current research in human-AI collaboration, specifically focusing on the role
theoretical modeling can play in better understanding when socially desirable outcomes are feasible. In particular, I will
discuss the influence that different factors can have, such as human cognitive biases, the way that the human and AI interact,
and different levels of accuracy. First, I will discuss the goal of strict benefits from human-algorithm collaboration (complementarity; Bansal et al. ’21), presenting several impossibility results and conditions under which complementarity may be
in tension with other goals, such as fairness. These results help us understand when we can (and cannot) achieve certain
accuracy goals, and give insight into how we should design AI tools. Next, I will present a stylized model of strategic decision-making with an algorithmic tool: a firm using an algorithm to select candidates. We show that when the
firm has access to side information (e.g. employment status of candidates), counter-intuitive results can occur, such as
increased accuracy of the AI tool leading to worse social outcomes. Finally, I will conclude by discussing several directions
in human-LLM interaction and the ways in which generative AI poses unique challenges (and benefits).
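As a minimal illustration of the complementarity criterion referenced above (Bansal et al. ’21), the Python sketch below checks whether a simple collaboration rule makes the team strictly more accurate than both the human alone and the AI alone; all accuracy numbers and the routing rule are hypothetical.

```python
# Complementarity (Bansal et al. '21): the human-AI team should be strictly
# more accurate than the human alone AND the AI alone. All numbers below are
# hypothetical; they only illustrate that complementarity is feasible when
# the two agents err on different kinds of instances.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical instance feature: the human is strong on x=0, the AI on x=1.
x = rng.integers(0, 2, size=n)
human_correct = rng.random(n) < np.where(x == 0, 0.95, 0.60)
ai_correct = rng.random(n) < np.where(x == 1, 0.95, 0.60)

# Collaboration rule: route each instance to whichever agent is stronger there.
team_correct = np.where(x == 1, ai_correct, human_correct)

h, a, t = human_correct.mean(), ai_correct.mean(), team_correct.mean()
print(f"human alone: {h:.3f}, AI alone: {a:.3f}, team: {t:.3f}")
print("complementarity:", t > max(h, a))
```

With less diverse errors, or a routing rule that cannot condition on where each agent is strong, the same check can fail, which is the flavor of the impossibility results mentioned above.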
March 7: Akshaya Jha (CMU)
Estimating electricity market responses to changes in demand
Data centers consumed 4% of total U.S. electricity production in 2023 and are expected to consume approximately 7-12% of total U.S.
electricity production by 2028 (LBNL, 2024). The sheer scale of data center electricity demand combined with the propensity to
co-locate with power plants poses unique challenges to grid operations. In this talk, I cover two approaches to modeling changes
in power plant dispatch in response to rising electricity demand: operations research dispatch models and machine learning models.
To illustrate these approaches, I cover applications to rooftop solar penetration in Australia and Germany’s nuclear phase-out:
1. Firms expect to recover the fixed costs required to start production by earning positive operating profits in subsequent periods.
We develop a dynamic competitive benchmark that accounts for start-up costs, showing that static markups overstate the rents
attributable to market power in an electricity market where generators frequently stop and start production in response to rooftop
solar output. We demonstrate that the large-scale expansion of solar capacity can lead to increases in the collective profitability
of fossil fuel plants because competition softens at sunset—plants displaced by solar during the day must incur start-up costs to
compete in the evening.
2. Many countries have phased out nuclear power in response to concerns about nuclear waste and the risk of nuclear accidents. This
paper examines the shutdown of more than half of the nuclear production capacity in Germany after the Fukushima accident in 2011.
We use hourly data on power plant operations and a machine learning approach to estimate the impacts of the phase-out policy. We
find that reductions in nuclear electricity production were offset primarily by increases in coal-fired production and net electricity
imports. Our estimates of the social cost of the phase-out range from €3 to €8 billion per year. The majority of this cost comes from
the increased mortality risk associated with exposure to the local air pollution emitted when burning fossil fuels. Policymakers
would have to significantly overestimate the risk or cost of a nuclear accident to conclude that the benefits of the phase-out exceed
its social costs. We discuss the likely role of behavioral biases in this setting, and highlight the importance of ensuring that
policymakers and the public are informed about the health effects of local air pollution.
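To make the sunset mechanism in application 1 concrete, here is a stylized Python sketch with entirely hypothetical plants, costs, and demand levels; the paper's dynamic competitive benchmark is far richer. The key assumption encoded below is that a plant displaced by midday solar competitively offers its marginal cost plus its start-up cost amortized over evening output, so a static markup over marginal cost would misread that premium as market power.

```python
# Stylized evening market after a sunny day. All parameters are hypothetical.
def evening_price(solar_mw, demand_mw=900.0, evening_hours=4.0):
    # Fleet in merit order: (marginal cost EUR/MWh, capacity MW, start-up cost EUR)
    fleet = [(20.0, 300.0, 5_000.0),
             (35.0, 300.0, 40_000.0),
             (50.0, 300.0, 60_000.0),
             (80.0, 300.0, 20_000.0)]

    # Solar displaces the most expensive plants first at midday; a displaced
    # plant's competitive evening offer includes its amortized start-up cost.
    # (For simplicity, displacement is allocated whole-plant by whole-plant.)
    offers, to_displace = [], solar_mw
    for mc, cap, su in reversed(fleet):
        if to_displace > 0:
            offers.append((mc + su / (cap * evening_hours), cap))
            to_displace -= cap
        else:
            offers.append((mc, cap))  # stayed online, no restart needed

    # Competitive dispatch: stack offers in merit order until demand is met.
    offers.sort()
    served = 0.0
    for offer, cap in offers:
        served += cap
        if served >= demand_mw:
            return offer
    return 200.0  # hypothetical scarcity price if capacity falls short

for solar in (0.0, 300.0, 600.0):
    print(f"midday solar {solar:5.0f} MW -> evening price {evening_price(solar):6.2f} EUR/MWh")
```

In this toy fleet the competitive evening price jumps once enough thermal capacity has been displaced, even though every plant offers at cost.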
March 3: Laura Veldkamp (Columbia Business School)
Valuing Firms’ Data Assets
The modern economy increasingly revolves around data, yet its value remains largely unmeasured. We explore why traditional economic
metrics fail to capture the value of data, how firms implicitly barter data and bundle it with other transactions, and how
mismeasurement impacts GDP and productivity estimates. We introduce key tools for measuring data value (using value functions, complementary inputs, and revenue effects) and discuss the role of forecast errors in assessing firms’ data-driven gains. The
findings highlight the evolving landscape of data ownership, the growing economic significance of data across industries, and the
potential for data democratization. By refining our measurement frameworks, we can better understand data’s impact on firm
productivity, market power, and economic policy. Click here to view the full paper.
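A minimal sketch of the value-function approach mentioned above, under hypothetical numbers: the value of a dataset is the difference in the firm's expected payoff with and without it, which in this toy model reduces to the drop in squared forecast error the data buys.

```python
# Value of data = V(decide with data) - V(decide without data).
# Toy model: the firm acts on a forecast of an unknown state under quadratic
# loss, so better data means smaller forecast errors. All numbers hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
state = rng.normal(0.0, 1.0, size=n)   # unknown demand state
scale = 1_000_000.0                    # hypothetical payoff per unit squared error

def expected_payoff(forecast_noise_sd):
    # The firm's action equals its noisy forecast; payoff falls with error.
    forecast = state + rng.normal(0.0, forecast_noise_sd, size=n)
    return -scale * np.mean((forecast - state) ** 2)

v_without = expected_payoff(0.50)  # prior information only
v_with = expected_payoff(0.20)     # the dataset sharpens the forecast

print(f"value of the data: about {v_with - v_without:,.0f} per period")
```

This is also why forecast errors serve as a measurement tool: the payoff gap is identified by how much the data reduces them.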
November 22: Neil Thompson (MIT)
Beyond AI Exposure: Which Tasks are Cost-Effective to Automate with Computer Vision?
The faster AI automation spreads through the economy, the more profound its potential impacts, both positive (improved productivity)
and negative (worker displacement). The previous literature on “AI Exposure” cannot predict this pace of automation since it attempts
to measure an overall potential for AI to affect an area, not the technical feasibility and economic attractiveness of building such
systems. In this article, we present a new type of AI task automation model that is end-to-end, estimating: the level of technical
performance needed to do a task, the characteristics of an AI system capable of that performance, and the economic choice of whether
to build and deploy such a system. The result is a first estimate of which tasks are technically feasible and economically attractive
to automate – and which are not. We focus on computer vision, where cost modeling is more developed. We find that at today’s costs, U.S. businesses would choose not to automate most vision tasks that have “AI Exposure,” and that only 23% of the worker wages paid for vision tasks would be attractive to automate. This slower roll-out of AI can be accelerated if costs fall rapidly or if AI is
deployed via AI-as-a-service platforms that have greater scale than individual firms, both of which we quantify. Overall, our findings
suggest that AI job displacement will be substantial, but also gradual – and therefore there is room for policy and retraining to
mitigate unemployment impacts. Click here to view the full paper.
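A minimal sketch of the economic step of such an end-to-end model, with hypothetical tasks, wages, and system costs (the paper's cost model is far more detailed): automate a vision task only when the annualized cost of building and running an AI system is below the wages currently paid for the task, then aggregate the share of wages attractive to automate.

```python
# Automate a task iff annualized AI system cost < annual wages for the task.
# All task names and numbers below are hypothetical.
tasks = [
    # (task, annual wages paid for the task, annualized AI system cost)
    ("visual quality inspection", 4_000_000, 1_500_000),
    ("retail shelf auditing",       800_000, 1_200_000),
    ("document image sorting",      300_000,   900_000),
    ("security camera monitoring", 2_000_000, 2_500_000),
]

attractive = [(name, w, c) for name, w, c in tasks if c < w]
total_wages = sum(w for _, w, _ in tasks)
attractive_wages = sum(w for _, w, _ in attractive)

for name, w, c in attractive:
    print(f"automate: {name} (wages {w:,} > system cost {c:,})")
print(f"share of task wages attractive to automate: {attractive_wages / total_wages:.0%}")
```

AI-as-a-service platforms enter this calculation by spreading the fixed system cost over many firms, lowering the per-firm cost and flipping more tasks into the attractive set.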
November 15: Brian Jabarian (Chicago)
Critical Thinking and Storytelling Contexts
In this paper, we show that storytelling contexts, i.e., sources, designs, writing styles, and delivery of information, impact the
effectiveness of surveys and elections in eliciting preferences formed by critical thinking (reasoned preferences). Through an
artefactual field experiment with a US sample (N=725), incentivized by an LLM, we find that intermediate storytelling contexts prompt
critical thinking more effectively than basic or sophisticated ones. Participants with high need-for-cognition are particularly
responsive to these contexts. In a conceptual framework, we explore how critical thinkers impact the efficiency of elections and polls
in aggregating reasoned preferences. Storytelling contexts that effectively prompt critical thinking improve election efficiency.
Click here to view the full paper.
September 20: Rad Niazadeh (Chicago Booth)
Markovian Search with Socially-aware Constraints
In this talk, we study a general class of constrained sequential search problems for selecting multiple candidates from a pool whose members belong to different societal groups. The focus is on ex-ante constraints aimed at promoting socially desirable and diverse outcomes, e.g., demographic parity and quotas. I start with a canonical search model, known as Pandora’s box, under a single affine constraint
on selection and inspection probabilities of candidates. We show that the optimal policy for such a constrained problem retains the
index-based structure of the optimal policy for the unconstrained one but potentially randomizes between two policies that are
dual-based adjustments of the unconstrained problem; thus, they are easy to compute and economically interpretable. Building on these
insights, we consider a richer class of search processes, such as search with rejection and multistage search, that can be modeled
by joint Markov scheduling (JMS). Imposing general affine and convex ex-ante constraints, we give a primal-dual algorithm to find a
near-feasible and near-optimal policy. This algorithm, too, randomizes over a polynomial number of index-based policies, whose indices
are dual-based adjustments to the Gittins indices of the unconstrained JMS. Our algorithmic developments involve many intricacies
which I will explore as much as time permits. Finally, using a numerical case study, we investigate the implications of imposing
various constraints, the price of imposing them in terms of utilitarian loss, and whether they induce their intended societally
desirable outcomes. Click here to view the full paper.
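For the unconstrained Pandora's box benchmark mentioned above, the classic index is the reservation value sigma of each box, solving c = E[(V - sigma)^+]; boxes are inspected in decreasing order of sigma. The sketch below computes this index by bisection for hypothetical boxes; the dual-based adjustments that the talk's constrained policies randomize over are not reproduced here.

```python
# Reservation value of a Pandora's box: the sigma solving c = E[(V - sigma)^+].
# Box value distributions and inspection costs below are hypothetical.
import numpy as np

def reservation_value(values, probs, cost, lo=-1e6, hi=1e6, tol=1e-9):
    """Solve cost = E[(V - sigma)^+] for sigma by bisection."""
    values = np.asarray(values, dtype=float)
    probs = np.asarray(probs, dtype=float)

    def excess(sigma):  # E[(V - sigma)^+], decreasing in sigma
        return float(np.sum(probs * np.maximum(values - sigma, 0.0)))

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess(mid) > cost:
            lo = mid  # sigma too low: expected excess still exceeds the cost
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two hypothetical boxes: (possible values, probabilities, inspection cost).
boxes = [([0.0, 100.0], [0.5, 0.5], 10.0),   # risky box: sigma = 80
         ([40.0, 60.0], [0.5, 0.5],  5.0)]   # safer box: sigma = 50
for i, (v, p, c) in enumerate(boxes):
    print(f"box {i}: reservation value = {reservation_value(v, p, c):.2f}")
```

The constrained policies described in the talk retain this index structure but shift the indices by dual variables attached to the ex-ante constraints, then randomize between the resulting policies.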