## Working Papers

*Information Hierarchies* (with Ben Brooks and Emir Kamenica)

September 2019

If Anne knows more than Bob about the state of the world, she may or may not know what Bob thinks, but it is always possible that she does. In other words, if the distribution of Anne's first-order belief is a mean-preserving spread of the distribution of Bob's first-order belief, we can construct signals for Anne and Bob that induce these distributions of beliefs and provide Anne with full information about Bob's belief. We establish that with more agents, the analogous result does not hold. It might be that Anne knows more than Bob and Charles, who in turn both know more than David, yet what they know about the state precludes the possibility that Anne knows what Bob and Charles think and that everyone knows what David thinks. More generally, we define an information hierarchy as a partially ordered set and ask whether higher elements having more information about the state always makes the hierarchy compatible with higher elements knowing the beliefs of lower elements. We show that the answer is affirmative if and only if the graph of the hierarchy is a forest. We discuss applications of this result to rationalizing a decision maker's reaction to unknown sources of information and to information design in hierarchical vs. non-hierarchical organizations.
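The two-agent fact stated in the abstract rests on a martingale property of beliefs: a mean-preserving spread preserves the mean of the belief distribution. A minimal numerical sketch (the numbers are hypothetical, not from the paper) with a binary state, an uninformed Bob, and a better-informed Anne:

```python
# Toy illustration (not from the paper): binary state, Bob sees nothing,
# Anne sees a noisy signal. Beliefs are probabilities assigned to state = 1.

# Bob's first-order belief: degenerate at the prior 0.5.
bob_beliefs = {0.5: 1.0}            # belief -> probability of holding it

# Anne's first-order belief: 0.2 or 0.8, each with probability 1/2.
# This distribution is a mean-preserving spread of Bob's.
anne_beliefs = {0.2: 0.5, 0.8: 0.5}

def mean(dist):
    """Expected belief under a distribution over beliefs."""
    return sum(b * p for b, p in dist.items())

# The mean-preserving-spread (martingale) condition: equal means.
assert mean(anne_beliefs) == mean(bob_beliefs)

# Since Bob's belief is constant, Anne knows it under every signal
# realization: the two-element hierarchy {Anne above Bob} is compatible
# with Anne knowing Bob's belief, as the abstract's first claim asserts.
print("shared mean:", mean(bob_beliefs))
```

The paper's contribution concerns when this compatibility can fail with three or more agents; the sketch only illustrates the two-agent benchmark.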

*Improving Information from Manipulable Data* (with Navin Kartik)

arXiv

First version posted August 2019; Updated September 2019

Data-based decision-making must account for data manipulation, or gaming, by agents who are aware of how decisions are being made. We study a framework in which this manipulation makes data less informative when decisions depend more strongly on data. We formalize why and how a decision-maker should commit to under-utilizing data in order to attenuate this information loss.

*Which Findings Should Be Published?* (with Max Kasy)

First version posted June 2018; Updated April 2019

Given a scarcity of journal space, what is the optimal rule for whether an empirical finding should be published? Suppose publications inform the public about a policy-relevant state. Then journals should publish extreme results, meaning ones that move beliefs sufficiently. This optimal rule may take the form of a one- or a two-sided test comparing a point estimate to the prior mean, with critical values determined by a cost-benefit analysis. Consideration of future studies may additionally justify the publication of precise null results. If one insists that standard inference remain valid, however, publication must not select on the study's findings.

*Selecting Applicants*

First version posted May 2015; Updated February 2019

Revision requested at *Econometrica*

A firm selects applicants to hire based on hard information, such as a test result, and soft information, such as a manager's evaluation of an interview. The contract that the firm offers to the manager can be thought of as a restriction on acceptance rates as a function of test results. I characterize optimal acceptance rate functions both when the firm knows the manager's mix of information and biases and when the firm is uncertain. These contracts may admit a simple implementation in which the manager can accept any set of applicants with a sufficiently high average test score.

## Forthcoming and Published Papers

*Quantifying Information and Uncertainty* (with Emir Kamenica)

First version posted April 2018; Updated April 2019

Accepted at *American Economic Review*

We examine ways to measure the amount of information generated by a piece of news and the
amount of uncertainty implicit in a given belief. Say a measure of information is valid if
it corresponds to the value of news in some decision problem. Say a measure of
uncertainty is valid if it corresponds to expected utility loss from not knowing the
state in some decision problem. We axiomatically characterize all valid measures of
information and uncertainty. We show that if a measure of information and uncertainty
arise from the same decision problem, then the two are coupled in that the expected
reduction in uncertainty always equals the expected amount of information generated. We
provide explicit formulas for the measure of information that is coupled with any given
measure of uncertainty and vice versa. Finally, we show that valid measures of information
are the only payment schemes that never provide incentives to delay information
revelation.
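The coupling result can be checked numerically in a standard special case: with Shannon entropy as the uncertainty measure, the coupled information measure is the Kullback-Leibler divergence of the posterior from the prior (this is the textbook mutual-information identity; the signal structure below is hypothetical, not from the paper):

```python
import math

def H(p):
    """Shannon entropy (bits) of a binary belief p = Pr(state = 1)."""
    return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

def KL(p, q):
    """KL divergence (bits) from binary belief q to binary belief p."""
    return sum(a * math.log2(a / b) for a, b in ((p, q), (1 - p, 1 - q)) if a > 0)

prior = 0.5
# A symmetric signal that matches the state with probability 0.8:
# each realization occurs with probability 1/2, and the posterior
# is 0.8 or 0.2 accordingly.
posteriors = {0.8: 0.5, 0.2: 0.5}  # posterior belief -> probability

expected_uncertainty_drop = H(prior) - sum(p * H(b) for b, p in posteriors.items())
expected_information = sum(p * KL(b, prior) for b, p in posteriors.items())

# The coupling: expected reduction in uncertainty equals
# expected amount of information generated.
assert abs(expected_uncertainty_drop - expected_information) < 1e-12
print(round(expected_information, 4))
```

The paper characterizes all such coupled pairs; entropy and KL divergence are just the most familiar instance.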

*Muddled Information* (with Navin Kartik)

*Journal of Political Economy*, August 2019 [127(4):1739-1776]

We study a model of signaling in which agents are heterogeneous on two dimensions.
An agent's natural action is the action taken in the absence of signaling concerns.
Her gaming ability parameterizes the cost of increasing the action. Equilibrium behavior
muddles information across dimensions. As incentives to take higher actions
increase—due to higher stakes or more manipulable signaling
technology—more information is revealed about gaming ability,
and less about natural actions. We explore a new externality: showing agents' actions
to additional observers can worsen information for existing observers.
Applications to credit scoring, school testing, and web search are discussed.

*A Note on Interval Delegation* (with Manuel Amador and Kyle Bagwell)

*Economic Theory Bulletin*, October 2018 [6(2):239-249]

In this note we extend the Amador and Bagwell (2013) conditions for confirming the
optimality of a proposed interval delegation set to the possibility of degenerate
intervals, in which the agent takes the same action at every state.
We consider the cases of money burning as well as no money burning. These results allow
us to provide new sufficient conditions on utility functions and state distributions to
guarantee that some interval -- degenerate or non-degenerate -- will be optimal.

*What Kind of Central Bank Competence?* (with Navin Kartik)

*Theoretical Economics*, May 2018 [13:697-728]

An earlier version with a different focus was circulated under the title
*What Kind of Transparency?*

How much information should a Central Bank (CB) have about (i) optimal policy objectives and (ii) operational shocks to the effect of monetary policy? We consider a version of the Barro-Gordon credibility problem in which monetary policy signals an inflation-biased CB's private information on both these dimensions. We find that greater CB competence--more private information--about policy objectives is desirable while greater competence about operational shocks need not be. When the CB has less private information about operational shocks, the public infers that monetary policy depends more on the CB's information about objectives. Inflation expectations become more responsive to monetary policy, which mitigates the CB's temptation to produce surprise inflation.

*Discounted Quotas*

*Journal of Economic Theory*, November 2016 [166:396-444]

This paper extends the concept of a quota contract to account for discounting and for the possibility of infinitely many periods: a *discounted quota* fixes the expected discounted number of plays of each action. I first present a repeated principal-agent contracting environment in which menus of discounted quota contracts are optimal. I then recursively characterize the dynamics of discounted quotas for an infinitely repeated i.i.d. problem. Dynamics are described more explicitly in the limit as interactions become frequent, and for the case where only two actions are available.

*Delegating Multiple Decisions*

*AEJ: Micro*, November 2016 [8(4):16-53]

This paper shows how to extend the heuristic of capping an agent
against her bias to delegation problems over multiple decisions. These caps may be
exactly optimal when the agent has constant biases, in which case a cap corresponds to a
ceiling on the weighted average of actions. In more general settings caps give
approximately first-best payoffs when there are many independent decisions. The geometric
logic of a cap translates into economic intuition on how to let agents trade off
increases on one action for decreases on other actions. I consider specific applications
to political delegation, capital investments, monopoly price regulation, and tariff
policy.

*Suspense and Surprise* (with Jeff Ely and Emir Kamenica)

Online Appendix

*Journal of Political Economy*, February 2015 [123(1):215-260]

Press: *New York Times*,
*Chicago Tribune*,
Freakonomics (1),
Freakonomics (2)

We model demand for non-instrumental information, drawing on the
idea that people derive entertainment utility from suspense and surprise. A period has
more suspense if the variance of the next period's beliefs is greater. A period has more
surprise if the current belief is further from the last period's belief. Under these
definitions, we analyze the optimal way to reveal information over time so as to maximize
expected suspense or surprise experienced by a Bayesian audience. We apply our results to
the design of mystery novels, political primaries, casinos, game shows, auctions, and
sports.
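The two definitions in the abstract are simple enough to compute directly. A minimal sketch for a binary state (the belief path is hypothetical, not from the paper; the paper's general definitions cover beliefs over many states):

```python
# Toy illustration of the definitions above. Beliefs are Pr(state = 1).

def surprise(prev_belief, curr_belief):
    """Surprise in a period: distance of the current belief
    from last period's belief."""
    return abs(curr_belief - prev_belief)

def suspense(next_beliefs):
    """Suspense in a period: variance of next period's (random) belief.
    next_beliefs maps each possible next belief to its probability."""
    m = sum(b * p for b, p in next_beliefs.items())
    return sum(p * (b - m) ** 2 for b, p in next_beliefs.items())

# From a current belief of 0.5, next period's belief will be 0.9 or 0.1
# with equal probability (a mean-preserving split).
print(suspense({0.9: 0.5, 0.1: 0.5}))  # variance of next period's belief
print(surprise(0.5, 0.9))              # surprise if the high belief realizes
```

Under these definitions, a designer maximizing expected suspense or expected surprise is choosing how to spread out belief movements over time, which is the optimization the paper solves.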

*Taxation of Couples under Assortative Mating*

*AEJ: Policy*, August 2014 [6(3):155-177]

I present a simple and tractable model of the optimal taxation of
married couples, building on the multidimensional screening framework of Armstrong and
Rochet (1999). In particular, I study how the tax code varies with the degree of
assortative mating. One result is that the "negative jointness" of marginal tax rates
found in Kleven, Kreiner, and Saez (2007, 2009) for couples with uncorrelated earnings
should be attenuated in the presence of assortative mating. When mating is sufficiently
assortative, the optimal tax schedule is separable: an individual's taxes do not depend on
his or her spouse's income.

*Aligned Delegation*

Online Appendix

*American Economic Review*, January 2014 [104(1):66-83]

A principal delegates multiple decisions to an
agent, who has private information relevant to each decision. The principal is uncertain
about the agent's preferences. I solve for max-min optimal mechanisms -- those which
maximize the principal's payoff against the worst case agent preference types. These
mechanisms are characterized by a property I call "aligned delegation": all agent types
play identically, as if they shared the principal's preferences. Max-min optimal
mechanisms may take the simple forms of ranking mechanisms, budgets, or sequential
quotas.

*Experts and Their Records* (with Michael Schwarz)

*Economic Inquiry*, January 2014 [52(1):56-71]

Consider an environment where long-lived experts
repeatedly interact with short-lived customers. In periods when an expert is hired, she
chooses between providing a profitable major treatment or a less profitable minor
treatment. The expert has private information about which treatment best serves the
customer, but has no direct incentive to act in the customer's interest. Customers can
observe the past record of each expert's actions, but never learn which actions would have
been appropriate. We find that there exists an equilibrium in which experts always play
truthfully and choose the customer's preferred treatment. The expert is rewarded for
choosing the less profitable action with future business: customers return to an expert
with high probability if the previous treatment was minor, and low probability if it was
major.

If experts have private information regarding their own payoffs as well as
what treatments are appropriate, then there is no equilibrium with truthful play in every
period. But we construct equilibria where experts are truthful arbitrarily often as their
discount factor converges to one.