NVIDIA released Nemotron Nano 9B V2 on August 18, 2025 as the compressed reasoning variant of the Nemotron Nano 2 family. It is a 9B-parameter model with a 128K-token (131,072) context window.
Nemotron Nano 9B V2 matches or exceeds Qwen3-8B on complex reasoning tasks at up to 6x the throughput. Its hybrid Mamba-Transformer architecture drives this efficiency: Mamba layers handle sequence processing with sub-quadratic memory scaling, while interleaved Transformer attention layers maintain precision on retrieval-heavy tasks within the context window.
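The memory argument above can be made concrete with some back-of-the-envelope arithmetic. The sketch below is illustrative only: layer counts, head counts, and state sizes are hypothetical, not the actual Nemotron Nano 9B V2 configuration. It shows why a per-token KV-cache grows with context length while a Mamba layer's recurrent state stays fixed.

```python
# Illustrative arithmetic: attention KV-cache memory grows linearly with
# sequence length, while a Mamba layer's recurrent state is constant-size.
# All dimensions below are hypothetical, not the real model config.

def attention_kv_cache_bytes(seq_len, n_layers, n_kv_heads, head_dim,
                             bytes_per_elem=2):
    """KV-cache: one K and one V vector cached per token, per layer."""
    return seq_len * n_layers * n_kv_heads * head_dim * 2 * bytes_per_elem

def mamba_state_bytes(n_layers, d_model, state_dim, bytes_per_elem=2):
    """Mamba recurrent state: fixed size, independent of sequence length."""
    return n_layers * d_model * state_dim * bytes_per_elem

for seq_len in (8_192, 131_072):
    kv = attention_kv_cache_bytes(seq_len, n_layers=4, n_kv_heads=8, head_dim=128)
    ssm = mamba_state_bytes(n_layers=52, d_model=4480, state_dim=128)
    print(f"{seq_len:>7} tokens: KV-cache {kv / 2**20:.0f} MiB, "
          f"Mamba state {ssm / 2**20:.0f} MiB")
```

Scaling context 16x (8K to 128K) multiplies the attention cache by 16x, while the Mamba state is unchanged; with only a few attention layers, total inference memory grows far more slowly than in a pure Transformer.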
The model also supports thinking budget control. You can prompt it to reason briefly for simple tasks (faster, cheaper) or thoroughly for hard problems (slower, more accurate), adjusting the latency-accuracy tradeoff at inference time without switching models. Availability on Amazon Bedrock: https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html.