Essays in the Capabilities of Large Language Models and Historical Reactions to Technological Change

Janna Lu

Advisor: Tyler Cowen, PhD, Department of Economics

Committee Members: Peter Boettke, Alex Tabarrok

Online Location, https://gmu.zoom.us/j/98200174414?pwd=6ufcP0XJijMwIYs5xf3jGgHJUIOi5z.1
July 14, 2025, 01:00 PM to 02:00 PM

Abstract:

This dissertation explores the capabilities of large language models alongside the historical context of technological resistance and fear, investigating both LLMs' ability to forecast future events and whether they possess tacit knowledge. It argues that LLMs surpass average human forecasting abilities but still fall short of human superforecasters, that they acquire some facets of tacit knowledge without physical embodiment, and that current anxieties about technological displacement and existential risk caused by AI reflect a recurring pattern across major technological revolutions.
 
The first chapter empirically evaluates frontier LLMs on forecasting accuracy using data from Metaculus, a crowd forecasting site. While state-of-the-art models outperform average human crowd predictions, they lag behind expert human forecasters. The study also identifies how narrative framing influences model accuracy, highlighting potential performance degradation when models are jailbroken with fictional scenarios.
 
The second chapter argues that LLMs have acquired certain aspects of tacit knowledge, such as contextual and cultural nuance and implicit linguistic rules, without embodied experience. Employing Polanyi's and Hayek's frameworks, this chapter illustrates how LLMs acquire non-articulable knowledge through pre-training, challenging the notion that these models are merely sophisticated stochastic parrots.
 
The third chapter traces the history of thought behind societal fears of technological disruption, from the Luddite uprisings of the Industrial Revolution, through mid-20th-century anxieties about cybernetics, to contemporary AI-doom narratives. The chapter demonstrates how such fears persist and resurface despite historical evidence of eventual adaptation, and proposes that this historical framing is useful for understanding modern AI-doom debates.