Llama 3.1 Nemotron Ultra 253B v1

NVIDIA

Llama-3.1-Nemotron-Ultra-253B-v1 is a large language model (LLM) optimized for advanced reasoning, human-interactive chat, retrieval-augmented generation (RAG), and tool-calling tasks. Derived from Meta's Llama-3.1-405B-Instruct, it has been significantly customized using Neural Architecture Search (NAS), resulting in enhanced efficiency, reduced memory usage, and improved inference latency. The model supports a context length of up to 128K tokens and can operate efficiently on an 8x NVIDIA H100 node. Note: you must include `detailed thinking on` in the system prompt to enable reasoning.
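As a sketch, the reasoning toggle described above could be supplied as the system message of an OpenAI-compatible chat payload. Only the `detailed thinking on` string comes from the model card; the model identifier and the `detailed thinking off` variant below are illustrative assumptions:

```python
# Minimal sketch of a chat payload that enables reasoning via the system
# prompt. The "detailed thinking on" string is from the model card; the
# model identifier and the "off" variant are assumptions for illustration.
import json

def build_request(user_prompt: str, reasoning: bool = True) -> dict:
    """Build an OpenAI-compatible chat payload for the model."""
    system_content = "detailed thinking on" if reasoning else "detailed thinking off"
    return {
        "model": "nvidia/llama-3.1-nemotron-ultra-253b-v1",  # assumed identifier
        "messages": [
            {"role": "system", "content": system_content},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_request("Prove that the square root of 2 is irrational.")
print(json.dumps(payload, indent=2))
```

The same payload shape works whether the request is sent with an SDK or a plain HTTP POST; only the system message changes between reasoning and non-reasoning modes.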

Capabilities: Thinking

Technical Specifications

Context Window: 131,072 tokens
Max Output: 131,072 tokens

Pricing

Token Costs (per 1M tokens)

Cache Miss Input: $0.60
Non-Reasoning Output: $1.80
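Per the listed rates, a request's cost works out to input tokens times $0.60 per million plus output tokens times $1.80 per million. A small sketch of that arithmetic (assuming the simple cache-miss, non-reasoning case shown above):

```python
# Estimate request cost from the listed per-1M-token rates:
# cache-miss input at $0.60, non-reasoning output at $1.80.
INPUT_RATE = 0.60 / 1_000_000   # dollars per input token
OUTPUT_RATE = 1.80 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 10,000-token prompt with a 2,000-token completion
print(f"${request_cost(10_000, 2_000):.4f}")  # → $0.0096
```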

Legacy

Made legacy on:
Reason: 253B model; expensive; superseded by Nemotron 3 Super
Recommended Replacement: Qwen3.6 Plus