LLM360 enables community-owned AGI through open-source large model research and development.

Our models





K2

A 65B parameter language model trained on 1.4T tokens. It outperforms Llama 2 70B while using approximately 35% less compute to train.


CrystalCoder

A 7B parameter language model trained on the SlimPajama and StarCoder datasets. It surpasses the Llama 2 frontier and strikes a careful balance between language and coding ability. Its instruction-following variant, CrystalChat, stands out as a top-scoring 7B chat model, trained on a carefully selected mix of publicly available language and code datasets.


Amber

A 7B parameter English language model based on the LLaMA architecture, with two fine-tuned instruction-following variants: AmberChat and AmberSafe.

LLM360 Suites

LLM360 Research Suite

The Research Suite is a comprehensive set of large language model (LLM) artifacts from each of our models, for academic and industry researchers to explore LLM training dynamics.

LLM360 Pretraining Suite

The Pretraining Suite is a series of step-by-step guides to reproduce each of our models, for tech enthusiasts, AI practitioners, and academic or industry researchers, to transfer knowledge on LLM pretraining techniques.

LLM360 Developer Suite

The Developer Suite is a series of fine-tuning and inference tutorials for tech enthusiasts, AI practitioners, and academic or industry researchers, who are interested in general model usage or downstream task evaluation and research.


LLM360 K2-65B: Scaling Up Fully Transparent Open-Source LLMs

In this paper, we present LLM360 K2-65B, the most powerful fully transparent open-source large language model (LLM) released to date. K2 is a 65 billion parameter LLM, which follows best practices for reproducibility from the LLM360 project. Despite numerous efforts to develop and release open-source LLMs, full transparency around the training process still remains limited...

LLM360: Towards Fully Transparent Open-Source LLMs

The recent surge in open-source Large Language Models (LLMs), such as LLaMA, Falcon, and Mistral, provides diverse options for AI practitioners and researchers. However, most LLMs have only released partial artifacts, such as the final model weights or inference code, and technical reports increasingly limit their scope to high-level design choices and surface statistics...

Introducing K2-65B: Charting the Blueprint Towards Open-Source Artificial General Intelligence

LLM360 is excited to announce several new releases to further our mission of enabling community-owned AGI by creating standards and tools that advance the bleeding edge of LLM capability and empower knowledge transfer, research, and development.

Introducing LLM360: Fully Transparent Open-Source LLMs

In recent months, the open-source large language model (LLM) community has seen tremendous model contributions. However, model weight releases and overview technical reports do not contain enough information to cover the complexity of LLM training, which hinders openness and transparency, the mechanisms that have underpinned trustworthy and innovative research and science for decades.

Connect with Us

The LLM360 team is here to solve the most challenging AI problems. Reach out if you'd like to discuss.