@alexolegimas
What will economic outcomes look like as transactions become delegated to AI agents? Will human differences be smoothed away, leading to more homogeneous outcomes, or will they be recreated and potentially even amplified? Will AI agents mitigate inequality, or will it persist and take on new forms? Will AI agents eliminate information asymmetry in principal-agent relationships, or introduce new frictions? A new paper with K. Lee and @sanjog_misra provides some early answers:

1) AI-agentic interactions, if anything, generate more dispersion and heterogeneity in economic outcomes than human-human benchmarks.

2) The dispersion of agentic interactions can be traced directly back to the non-instrumental traits and biases of the human principals doing the prompting. The hypothesis that AI-agentic interactions produce greater homogeneity does not seem to hold.

3) There are substantial differences in "machine fluency": the ability to write prompts that align the agent with the principal's objective. Some principals are better at maximizing agentic outcomes than others. Principal characteristics predict agent performance, suggesting a new source of inequality.

4) Some traits relate to outcomes much as they do in human-human interactions, but others reverse, e.g., the gender difference in negotiated outcomes.

5) The principal-agent relationship changes: the prompt now acts as the contract. But the agent's black-box objective function implies a new type of contract incompleteness, which we broadly term "specification hazard." As economic activity shifts to autonomous agents, the primary source of market distortion may shift from information asymmetries between parties to principals' mental models of the AI agents they delegate to.