Llama 3.1 Nemotron Ultra 253B v1

nvidia

Llama-3.1-Nemotron-Ultra-253B-v1 is a large language model (LLM) optimized for advanced reasoning, human-interactive chat, retrieval-augmented generation (RAG), and tool-calling tasks. Derived from Meta’s Llama-3.1-405B-Instruct, it has been significantly customized using Neural Architecture Search (NAS), resulting in enhanced efficiency, reduced memory usage, and improved inference latency. The model supports a context length of up to 128K tokens and can operate efficiently on an 8x NVIDIA H100 node. Note: you must include `detailed thinking on` in the system prompt to enable reasoning.
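The reasoning toggle above can be sketched as a chat payload. This is a minimal illustration, assuming an OpenAI-style messages array; the model card confirms only the `detailed thinking on` system prompt, and the `off` variant plus the helper name are assumptions for the sketch.

```python
# Sketch: toggling Nemotron's reasoning mode via the system prompt.
# Only "detailed thinking on" is confirmed by the card above; the
# "off" string and this helper are illustrative assumptions.

def build_messages(user_prompt: str, reasoning: bool) -> list[dict]:
    """Build an OpenAI-style messages list; the system prompt controls reasoning."""
    system = "detailed thinking on" if reasoning else "detailed thinking off"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Solve x^2 - 5x + 6 = 0.", reasoning=True)
print(messages[0])  # {'role': 'system', 'content': 'detailed thinking on'}
```

The payload would then be sent to any endpoint serving the model; only the system message changes between reasoning and non-reasoning use.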


Capabilities

Extended Thinking

Example Use Cases

Advanced reasoning with a large NVIDIA model

RAG or tool-calling at scale

Frontier open-source thinking

Technical Specifications

Context Window

131,072 tokens

Max Output

131,072 tokens

Cache Miss Cost

$0.60 per 1M tokens

Non-Reasoning Cost

$1.80 per 1M tokens

Web Search Cost

$15 per 1K calls

Code Execution Cost

$0.19 per 1K calls
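The per-token rates above lend themselves to a back-of-the-envelope cost estimate. This is a rough sketch using the listed prices; which rate applies to which part of a request (cache-miss vs. non-reasoning tokens) is an assumption for illustration.

```python
# Back-of-the-envelope cost estimate from the listed rates.
# The rate constants come from the table above; how they combine
# per request is assumed for illustration, not specified here.

CACHE_MISS_PER_M = 0.60     # $ per 1M tokens (cache miss)
NON_REASONING_PER_M = 1.80  # $ per 1M tokens (non-reasoning)

def token_cost(tokens: int, rate_per_million: float) -> float:
    """Cost in dollars for a token count at a per-1M-token rate."""
    return tokens / 1_000_000 * rate_per_million

# e.g. a full 131,072-token context billed at the non-reasoning rate:
print(round(token_cost(131_072, NON_REASONING_PER_M), 4))  # 0.2359
```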

⚠️ Legacy

Made legacy on

Reason

Untested

Recommended Replacement

Qwen3.5 Plus