NVIDIA BioNeMo Recipes Scale Biology Transformer Models with PyTorch
Published on November 5, 2025

NVIDIA has announced that its BioNeMo Recipes are now available to simplify and accelerate the training of large-scale AI models for biology. The recipes provide step-by-step guides built on PyTorch and Hugging Face, lowering the barrier to entry for training at scale. By integrating accelerated libraries such as NVIDIA Transformer Engine (TE), researchers can unlock speed and memory efficiency through techniques like Fully Sharded Data Parallel (FSDP) and Context Parallelism.
The BioNeMo Recipes demonstrate how to accelerate transformer-style AI models for biology, using the Hugging Face ESM-2 protein language model with a native PyTorch training loop as the working example. Key features include:
- Transformer Engine (TE) Integration: Fuses and accelerates transformer computations on NVIDIA GPUs, including FP8 execution (see the TE sketch after this list).
- FSDP2 Integration: Shards model parameters, gradients, and optimizer state across GPUs with minimal code changes (FSDP2 sketch below).
- Sequence Packing: Achieves greater throughput by concatenating variable-length sequences and removing padding tokens (packing sketch below).
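The snippet below is a minimal sketch of what TE integration can look like in practice: a standard PyTorch linear layer is swapped for its TE equivalent and run under FP8 autocast. The layer dimensions and the DelayedScaling settings are illustrative assumptions, not values from the recipes, and FP8 execution requires an FP8-capable GPU such as Hopper.

```python
# Minimal sketch: swap torch.nn.Linear for a Transformer Engine layer and
# run the forward pass under FP8 autocast. Dimensions and recipe settings
# are illustrative assumptions, not taken from the BioNeMo Recipes.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# TE drop-in replacement for torch.nn.Linear, fused and FP8-capable.
layer = te.Linear(1024, 1024, bias=True, params_dtype=torch.bfloat16).cuda()

# FP8 scaling recipe: delayed scaling with a short amax history (illustrative).
fp8_recipe = recipe.DelayedScaling(margin=0, amax_history_len=16,
                                   amax_compute_algo="max")

x = torch.randn(8, 1024, device="cuda", dtype=torch.bfloat16)

# Requires an FP8-capable GPU (e.g., Hopper); without one, drop the autocast.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)
```

A useful property of this pattern is that it is incremental: individual layers of an existing PyTorch model can be swapped for TE equivalents without rewriting the training loop.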
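FSDP2 sharding can be sketched as follows, assuming a torchrun-launched process group and a toy stack of linear layers standing in for a real model. The `fully_shard` import path shown is the public one in recent PyTorch releases; this is an illustration of the technique, not the recipes' actual training script.

```python
# Minimal sketch of FSDP2-style sharding with fully_shard. Assumes the
# distributed environment variables were set by torchrun. The toy model
# and layer count are illustrative assumptions.
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.fsdp import fully_shard

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(4)]).cuda()

# Shard each block's parameters across ranks, then the root module, so
# each layer is all-gathered only while it is being computed.
for block in model:
    fully_shard(block)
fully_shard(model)

out = model(torch.randn(8, 1024, device="cuda"))
out.sum().backward()

dist.destroy_process_group()
```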
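Sequence packing can be illustrated with a small sketch: variable-length token sequences are concatenated into a single padding-free stream, and cumulative boundaries (`cu_seqlens`, a common convention for variable-length attention kernels) are recorded so attention stays confined to each original sequence. The token values here are arbitrary, and the recipes' actual implementation may differ in detail.

```python
# Minimal sketch of sequence packing: concatenate variable-length sequences
# into one token stream with no pad tokens, and record cumulative sequence
# boundaries in the cu_seqlens layout used by varlen attention kernels.
import torch

seqs = [torch.tensor([5, 9, 2]),        # protein 1, length 3
        torch.tensor([7, 1, 4, 8, 3]),  # protein 2, length 5
        torch.tensor([6, 2])]           # protein 3, length 2

packed = torch.cat(seqs)                # shape (10,), no padding anywhere
lengths = torch.tensor([len(s) for s in seqs])
cu_seqlens = torch.zeros(len(seqs) + 1, dtype=torch.int32)
cu_seqlens[1:] = torch.cumsum(lengths, dim=0)

print(packed)      # tensor([5, 9, 2, 7, 1, 4, 8, 3, 6, 2])
print(cu_seqlens)  # tensor([0, 3, 8, 10], dtype=torch.int32)
```

Because no compute is spent on pad tokens, packing raises effective throughput most on datasets with highly variable sequence lengths, which is typical of protein corpora.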