Accelerating mathematical research with language models: A case study of an interaction with GPT-5-Pro on a convex analysis problem
Abstract
Recent progress in large language models has made them increasingly capable research assistants in mathematics. Yet, as their reasoning abilities improve, evaluating their mathematical competence becomes increasingly challenging. The problems used for assessment must be neither too easy nor too difficult, their performance can no longer be summarized by a single numerical score, and meaningful evaluation requires expert oversight. In this work, we study an interaction between the author and a large language model in proving a lemma from convex optimization. Specifically, we establish a Taylor expansion for the gradient of the biconjugation operator, that is, the operator obtained by applying the Fenchel transform twice, around a strictly convex function, with assistance from GPT-5-Pro, OpenAI's latest model. Beyond the mathematical result itself, whose novelty we do not claim with certainty, our main contribution lies in documenting the collaborative reasoning process. GPT-5-Pro accelerated our progress by suggesting relevant research directions and by proving some intermediate results. However, its reasoning still required careful supervision, particularly to correct subtle mistakes. While limited to a single mathematical problem and a single language model, this experiment illustrates both the promise and the current limitations of large language models as mathematical collaborators.
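For reference, the Fenchel transform and the biconjugation operator mentioned above are standard objects of convex analysis; in the usual notation (the symbols below are the standard ones, not necessarily those used in the body of the paper), they read:

```latex
% Fenchel transform (convex conjugate) of a function f on R^n:
f^{*}(y) \;=\; \sup_{x \in \mathbb{R}^{n}} \,\bigl\{ \langle x, y \rangle - f(x) \bigr\}.

% Biconjugate: the Fenchel transform applied twice,
f^{**} \;=\; (f^{*})^{*},
% which coincides with f when f is proper, convex, and lower semicontinuous
% (Fenchel--Moreau theorem).
```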