TA/mistralai/Mixtral-8x7B-Instruct-v0.1

Common Name: Mixtral 8x7B Instruct v0.1

TogetherAI
Released on Feb 17

Mistral AI's instruction-tuned Mixtral 8x7B mixture-of-experts (MoE) model, hosted on TogetherAI.
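
For reference, a minimal sketch of calling this model through TogetherAI's OpenAI-compatible endpoint (Python; assumes the openai package is installed and a TOGETHER_API_KEY environment variable is set):

    import os
    from openai import OpenAI

    # TogetherAI exposes an OpenAI-compatible API, so the standard
    # OpenAI client can be pointed at its base URL.
    client = OpenAI(
        base_url="https://api.together.xyz/v1",
        api_key=os.environ["TOGETHER_API_KEY"],
    )

    response = client.chat.completions.create(
        model="mistralai/Mixtral-8x7B-Instruct-v0.1",
        messages=[
            {"role": "user", "content": "Explain mixture-of-experts in two sentences."}
        ],
        max_tokens=256,
    )
    print(response.choices[0].message.content)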

Specifications

Context: 32K
Input: text
Output: text

Performance (7-day Average)

Collecting…

Pricing

Input: $0.66 / M tokens
Output: $0.66 / M tokens
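
At the flat rate above, a request's cost is (input tokens + output tokens) * $0.66 per million tokens. A small sketch of that arithmetic (the token counts are hypothetical, for illustration only):

    PRICE_PER_M_TOKENS = 0.66  # USD per million tokens, same rate for input and output

    def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
        """Cost in USD for one request at the flat listed rate."""
        return (prompt_tokens + completion_tokens) * PRICE_PER_M_TOKENS / 1_000_000

    # Example: 1,200 prompt tokens + 300 completion tokens -> $0.00099
    print(f"${request_cost(1_200, 300):.5f}")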

Charts: Availability Trend (24h); Performance Metrics (24h).

Similar Models

DeepSeek V3.1 ($0.66 input / $1.87 output per M tokens): hybrid model combining V3 and R1 capabilities with 128K context, hosted on TogetherAI.

Llama 3.1 70B ($0.97 input / $0.97 output per M tokens): Meta's Llama 3.1 70B optimized for fast inference on TogetherAI.

Qwen2.5 7B ($1.32 input / $1.32 output per M tokens): Alibaba's Qwen2.5 7B model optimized for fast inference on TogetherAI.

Mistral 7B Instruct v0.1 ($0.66 input / $0.66 output per M tokens): Mistral AI's 7B instruction-tuned model v0.1, hosted on TogetherAI.