A new academic study challenges a core assumption in the development of large language models (LLMs), warning that more pre-training data may not always lead to better models. Researchers from some of the ...