Incentives for Digital Twins: Task-Based Productivity Enhancements with Generative AI
Abstract
Generative AI is a technology that depends in part on human participation to train and improve its automation potential. We focus on the development of an "AI twin" that could complement its creator's efforts, enabling them to produce higher-quality output in their individual style. However, AI twins could also, over time, replace individual humans. We analyze this trade-off using a principal-agent model in which agents can invest in training an AI twin that lowers their cost of effort, raises their probability of success, or both. We propose a new framework to situate the model, in which tasks vary in the ease with which AI output can be improved by the human (their "editability") and in the extent to which a non-expert can assess the quality of output (their "verifiability"). Our synthesis of recent empirical studies indicates that productivity gains from the use of generative AI are higher overall when task editability is higher, while non-experts enjoy greater relative productivity gains on tasks with higher verifiability. We show that, during investment, a strategic agent will trade off improvements in quality against ease of effort to preserve their wage bargaining power. Tasks with high verifiability and low editability are most aligned with a worker's incentives to train their twin, but for tasks where the stakes are low, this alignment is constrained by the risk of displacement. Our results suggest that sustained improvements in company-sponsored generative AI will require nuanced design of human incentives, and that public policy encouraging a balance between worker returns and generative AI improvements could yield more sustained long-run productivity gains.