The Role of Computing Resources in Publishing Foundation Model Research

Yuexing Hao1,2†, Yue Huang3†, Haoran Zhang1, Chenyang Zhao4, Zhenwen Liang3,

Paul Pu Liang1, Yue Zhao6, Lichao Sun5, Saleh Kalantari2, Xiangliang Zhang3, Marzyeh Ghassemi1*

1EECS, MIT

2Cornell University

3CSE, University of Notre Dame

4Computer Science Department, UCLA

5CS, Lehigh University

6School of Advanced Computing, USC

*Corresponding author. Email: mghassem@mit.edu

†These authors contributed equally to this work.

Exploring the relationship between computing resources and scientific advancement in foundation models (FM).

Research Overview

Abstract

Cutting-edge research in Artificial Intelligence (AI) requires considerable investment in Graphics Processing Units (GPUs), data, and human resources. In this paper, we evaluate the relationship between these resources and the scientific advancement of foundation models (FM). We reviewed 6,517 FM papers published between 2022 and 2024 and surveyed 229 first authors to understand the impact of computing resources on scientific output. We find that increased computing is correlated with individual paper acceptance rates and national funding allocations, but not with research environment (academic or industrial), domain, or study methodology.

Study Scope

Analysis of 34,828 accepted papers between 2022 and 2024, identifying 5,889 foundation model papers.

Resource Analysis

Examining GPU access, TFLOP measurements, and their correlation with research outcomes.
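For illustration only, the sketch below shows one way a compute-versus-outcome correlation of this kind could be computed from a per-paper table. The file name fm_papers.csv, its columns (gpu_count, gpu_tflops, train_hours, accepted), and the choice of a point-biserial test are assumptions made for this example; they are not the paper's actual data schema or analysis pipeline.

```python
# Minimal sketch (not the authors' pipeline): correlate a per-paper compute
# proxy with acceptance outcomes, assuming a hypothetical CSV with columns
# gpu_count, gpu_tflops (per-GPU peak TFLOPs), train_hours, accepted (0/1).
import numpy as np
import pandas as pd
from scipy import stats

papers = pd.read_csv("fm_papers.csv")  # hypothetical file

# Rough total-compute proxy: GPUs x per-GPU peak TFLOPs x training hours.
papers["total_tflop_hours"] = (
    papers["gpu_count"] * papers["gpu_tflops"] * papers["train_hours"]
)

# Reported compute spans orders of magnitude, so work on a log scale.
log_compute = np.log10(papers["total_tflop_hours"].clip(lower=1))

# Point-biserial correlation between the binary accepted/rejected label
# and log-compute (equivalent to Pearson's r with a binary variable).
r, p = stats.pointbiserialr(papers["accepted"].astype(int), log_compute)
print(f"point-biserial r = {r:.3f}, p = {p:.3g}")
```

Run on a table of reported hardware, this prints a single correlation coefficient and p-value; a logistic regression would be a natural extension if covariates such as domain or venue needed to be controlled for.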

Survey Results

Insights from 229 authors across 312 papers on resource usage and impact.

Key Findings

Computing Impact

Greater GPU access yields superior pre-trained models and correlates with higher acceptance rates in top AI/ML publications.

Resource Distribution

Analysis of hardware resources and their relationship to publication success at selective AI/ML conferences.

Future Implications

Projections for GPU availability and recommendations for individuals and institutions to ensure progress in AI research.

Research Team

Yuexing Hao

EECS, MIT & Cornell University

Yue Huang

CSE, University of Notre Dame

Haoran Zhang

EECS, MIT

Chenyang Zhao

Computer Science Department, UCLA

Zhenwen Liang

CSE, University of Notre Dame

Paul Pu Liang

EECS, MIT

Yue Zhao

School of Advanced Computing, USC

Lichao Sun

CS, Lehigh University

Saleh Kalantari

Cornell University

Xiangliang Zhang

CSE, University of Notre Dame

Marzyeh Ghassemi

EECS, MIT