Llama 3 70B Instruct

Meta

Meta's instruct-tuned Llama 3 70B is optimized for high-quality dialogue use cases. In human evaluations it has demonstrated strong performance compared to leading closed-source models. Use of this model is subject to Meta's Acceptable Use Policy.

Technical Specifications

Context Window

8,192 tokens

Max Output

8,000 tokens

Pricing

Token Costs (per 1M tokens)

Cache Miss Input

$0.51

Non-Reasoning Output

$0.74
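Using the per-1M-token prices above, the cost of a single request is simple arithmetic. A minimal sketch (the token counts in the example call are hypothetical, not from this page):

```python
# Prices from the table above, in dollars per 1M tokens.
INPUT_PRICE_PER_M = 0.51   # cache miss input
OUTPUT_PRICE_PER_M = 0.74  # non-reasoning output

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a full 8,192-token context plus 1,000 output tokens.
print(round(request_cost(8_192, 1_000), 6))  # → 0.004918
```

At these rates, even a request that fills the entire 8,192-token context window costs well under a cent.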

Legacy

Made legacy on

Reason

Superseded by Llama 3.1 70B with 128K context and tool support

Recommended Replacement

MiniMax M2.7