Large language models reduce agency costs

With Jason Potts, Darcy W E Allen, and Nataliya Ilyushina. Available on SSRN.

Large Language Models (LLMs), or generative AI, have emerged as a new general-purpose technology in applied machine learning. These models are increasingly employed within firms to support a range of economic tasks. This paper investigates the economic value generated by the adoption and use of LLMs, which often occurs on an experimental basis, through two main channels. The first channel, already explored in the literature (e.g. Eloundou et al. 2023, Noy and Wang 2023), involves LLMs providing productive support akin to other capital investments or tools. The second, less examined channel concerns the reduction or elimination of agency costs in economic organisation due to the enhanced ability of economic actors to insource more tasks. This is particularly relevant for tasks that previously required contracting within or outside a firm. As LLMs enable workers to perform tasks in which they are less specialised, the costs associated with managing relationships and contracts decrease. This paper focuses on this second path of value creation through the adoption of this new general-purpose technology. We then examine the wider implications of the lower agency costs pathway for innovation, entrepreneurship, and competition.
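The insourcing mechanism can be made concrete with a simple make-or-buy comparison. The Python sketch below is purely illustrative and not the paper's model: the wage, hours, specialist fee, agency costs, and LLM productivity multiplier are all hypothetical parameters. It shows how, once an LLM raises a non-specialist's productivity enough, performing a task in-house becomes cheaper than contracting it out, and the associated agency costs are avoided entirely.

```python
# Minimal make-or-buy sketch (illustrative only, not the paper's model).
# A worker either performs a task in-house or contracts it to a specialist.
# Assumptions: hypothetical wage w, baseline in-house hours h, an LLM
# productivity multiplier m >= 1, a specialist fee p, and agency costs a
# (search, negotiation, monitoring) incurred only when contracting out.

def insource_cost(w=60.0, h=10.0, m=1.0):
    """Cost of doing the task in-house: hours shrink as the multiplier grows."""
    return w * h / m

def outsource_cost(p=400.0, a=150.0):
    """Cost of contracting out: specialist's fee plus agency costs."""
    return p + a

for m in (1.0, 1.5, 2.0):  # rising LLM-assisted productivity
    diy, buy = insource_cost(m=m), outsource_cost()
    choice = "insource" if diy < buy else "contract out"
    print(f"multiplier {m}: in-house {diy:.0f} vs contract {buy:.0f} -> {choice}")
```

At the baseline multiplier the specialist wins; at higher multipliers the decision flips to insourcing, which is the channel through which contracting and monitoring costs fall away.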

The problem of ubiquitous computing for regulatory costs

Working paper on SSRN

The benefits of regulation should exceed the costs of regulating. This paper investigates the impact of widespread general-purpose computing on the cost of enforcing regulations on generative artificial intelligence (AI) and decentralised finance (DeFi). We present a simple model illustrating regulators' preferences for minimising enforcement costs and discuss the implications of those preferences for the number and size of regulated firms. Regulators prefer to regulate a small number of large firms rather than a large number of small firms. General-purpose computing radically expands the number of potentially regulated entities. For DeFi, the decentralised nature of blockchain technology, the global scale of transactions, and decentralised hosting increase the number of potentially regulated entities by an order of magnitude. Likewise, locally deployed open-source generative AI models make regulating AI safety extremely difficult. This creates a regulatory dilemma that forces regulators to reassess the social harm of the targeted economic activity. The paper draws a historical comparison with attempts to reduce copyright infringement through file sharing in the early 2000s to present strategic options for regulators in addressing the challenges of AI safety and DeFi compliance.
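The preference for a few large firms follows from fixed per-entity enforcement costs. The sketch below is a hypothetical illustration, not the paper's model: the cost parameters f and v are assumed. It shows that when aggregate activity is held constant, total enforcement cost rises with the number of regulated entities, which is why ubiquitous general-purpose computing strains enforcement budgets.

```python
# Minimal sketch (illustrative, not the paper's model): total enforcement cost
# when a regulator must monitor n entities jointly producing a fixed activity Q.
# Assumptions: each entity carries a fixed per-entity cost f (licensing,
# audits, inspections) plus a variable cost v per unit of activity.

def enforcement_cost(n, Q=1_000_000.0, f=50_000.0, v=0.01):
    """Total cost of regulating n entities sharing activity Q equally."""
    per_entity_activity = Q / n
    return n * (f + v * per_entity_activity)  # = n*f + v*Q: rises linearly in n

for n in (10, 1_000, 100_000):  # concentrated industry vs. ubiquitous computing
    print(f"{n:>7} regulated entities -> total cost {enforcement_cost(n):,.0f}")
```

Because total cost reduces to n·f + v·Q, multiplying the entity count while holding activity fixed multiplies the fixed-cost component, capturing the dilemma the abstract describes.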

The Case for Generative AI in Scholarly Practice

Available at SSRN

This paper defends the use of generative artificial intelligence (AI) in scholarship and argues for its legitimacy as a valuable tool for contemporary research practice. It uses an emergent property rights model of writing to shed light on the evolution of scholarly norms and practices. The paper argues that generative AI extends the capital-intensive nature of modern academic writing. It discusses three potential uses for AI models in research practice: AI as a mentor, AI as an analytic tool, and AI as a writing tool. The paper considers how the use of generative AI interacts with two critical norms in scholarship: norms around authorship attribution and credit for contributions, and the norm against plagiarism. It concludes that the effective use of generative AI is a legitimate research practice for scholars seeking to experiment with new technologies that might enhance their productivity.