---
library_name: vllm
language:
- en
- fr
- es
- de
- it
- pt
- nl
- zh
- ja
- ko
- ar
license: apache-2.0
inference: false
base_model:
- mistralai/Ministral-3-3B-Base-2512
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- mistral-common
---

Ministral 3 3B Reasoning 2512

The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny language model with vision capabilities.

This is the reasoning post-trained version, tuned for multi-step reasoning tasks, making it ideal for math, coding, and STEM-related use cases.

The Ministral 3 family is designed for edge deployment, capable of running on a wide range of hardware. Ministral 3 3B can even be deployed locally, fitting in 16GB of VRAM in BF16, and less than 8GB of RAM/VRAM when quantized.
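As a rough, back-of-the-envelope check on those figures (not an official sizing guide): weight memory is approximately parameter count × bytes per parameter, with activations and the KV cache on top. A sketch in Python, using the 3.4B + 0.4B component sizes listed under Key Features:

```python
def param_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight-only memory footprint in GB (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

# ~3.8B total parameters (3.4B language model + 0.4B vision encoder)
weights_bf16 = param_memory_gb(3.8e9, 2.0)  # BF16: 2 bytes per parameter
weights_q4 = param_memory_gb(3.8e9, 0.5)    # ~4-bit quantization: ~0.5 bytes per parameter

print(f"BF16 weights: ~{weights_bf16:.1f} GB")
print(f"4-bit weights: ~{weights_q4:.1f} GB")
```

The headline figures (16GB in BF16, under 8GB quantized) leave headroom beyond the weights for activations and the KV cache, which grows with context length.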

Learn more in our blog post and paper.

Key Features

Ministral 3 3B consists of two main architectural components:
  • 3.4B Language Model
  • 0.4B Vision Encoder

The Ministral 3 3B Reasoning model offers the following capabilities:

  • Vision: Enables the model to analyze images and provide insights based on visual content, in addition to text.
  • Multilingual: Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, Arabic.
  • System Prompt: Maintains strong adherence and support for system prompts.
  • Agentic: Offers best-in-class agentic capabilities with native function calling and JSON output.
  • Reasoning: Excels at complex, multi-step reasoning and dynamic problem-solving.
  • Edge-Optimized: Delivers best-in-class performance at a small scale, deployable anywhere.
  • Apache 2.0 License: Open-source license allowing usage and modification for both commercial and non-commercial purposes.
  • Large Context Window: Supports a 256k context window.

Use Cases

Ideal for lightweight, real-time applications on edge or low-resource devices, such as:
  • Image captioning
  • Text classification
  • Real-time efficient translation
  • Data extraction
  • Short content generation
  • Fine-tuning and specialization
  • And more...
These models bring advanced AI capabilities to embedded systems and distributed edge environments.

Recommended Settings

We recommend deploying with the following best practices:

  • System Prompt: Use our provided system prompt, and append it to your custom system prompt to define a clear environment and use case, including guidance on how to effectively leverage tools in agentic systems.
  • Multi-turn Traces: We highly recommend keeping the reasoning traces in context.
  • Sampling Parameters: Use a temperature of 0.7 for most environments. Different temperatures may suit different use cases; developers are encouraged to experiment with alternative settings.
  • Tools: Keep the set of tools well defined and limit their number to the minimum required for the use case; avoid overloading the model with an excessive number of tools.
  • Vision: When deploying with vision capabilities, we recommend maintaining an aspect ratio close to 1:1 (width-to-height) for images. Avoid overly thin or wide images; crop them as needed to ensure optimal performance.
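The image-aspect-ratio advice above can be sketched as a small center-crop helper (a hypothetical utility, not part of mistral-common; the 2.0 ratio cap is an illustrative choice):

```python
def center_crop_box(width: int, height: int, max_ratio: float = 2.0):
    """Return a (left, top, right, bottom) box that center-crops the longer
    side so the width:height ratio stays within max_ratio."""
    if width > height * max_ratio:        # too wide: crop the width
        new_w = int(height * max_ratio)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    if height > width * max_ratio:        # too tall: crop the height
        new_h = int(width * max_ratio)
        top = (height - new_h) // 2
        return (0, top, width, top + new_h)
    return (0, 0, width, height)          # already close enough to 1:1
```

The returned box can be passed directly to PIL's `Image.crop` before sending the image to the model.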

Ministral 3 Family

| Model Name | Type | Precision | Link |
|---|---|---|---|
| Ministral 3 3B Base 2512 | Base pre-trained | BF16 | Hugging Face |
| Ministral 3 3B Instruct 2512 | Instruct post-trained | FP8 | Hugging Face |
| Ministral 3 3B Reasoning 2512 | Reasoning capable | BF16 | Hugging Face |
| Ministral 3 8B Base 2512 | Base pre-trained | BF16 | Hugging Face |
| Ministral 3 8B Instruct 2512 | Instruct post-trained | FP8 | Hugging Face |
| Ministral 3 8B Reasoning 2512 | Reasoning capable | BF16 | Hugging Face |
| Ministral 3 14B Base 2512 | Base pre-trained | BF16 | Hugging Face |
| Ministral 3 14B Instruct 2512 | Instruct post-trained | FP8 | Hugging Face |
| Ministral 3 14B Reasoning 2512 | Reasoning capable | BF16 | Hugging Face |

Other formats available here.

Benchmark Results

We compare Ministral 3 to similarly sized models.

Reasoning

| Model | AIME25 | AIME24 | GPQA Diamond | LiveCodeBench |
|---|---|---|---|---|
| Ministral 3 14B | <u>0.850</u> | <u>0.898</u> | <u>0.712</u> | <u>0.646</u> |
| Qwen3-14B (Thinking) | 0.737 | 0.837 | 0.663 | 0.593 |
| Ministral 3 8B | 0.787 | <u>0.860</u> | 0.668 | <u>0.616</u> |
| Qwen3-VL-8B-Thinking | <u>0.798</u> | <u>0.860</u> | <u>0.671</u> | 0.580 |
| Ministral 3 3B | <u>0.721</u> | <u>0.775</u> | 0.534 | <u>0.548</u> |
| Qwen3-VL-4B-Thinking | 0.697 | 0.729 | <u>0.601</u> | 0.513 |

Instruct

| Model | Arena Hard | WildBench | MATH Maj@1 | MM MTBench |
|---|---|---|---|---|
| Ministral 3 14B | <u>0.551</u> | <u>68.5</u> | <u>0.904</u> | <u>8.49</u> |
| Qwen3 14B (Non-Thinking) | 0.427 | 65.1 | 0.870 | not multimodal |
| Gemma3-12B-Instruct | 0.436 | 63.2 | 0.854 | 6.70 |
| Ministral 3 8B | 0.509 | <u>66.8</u> | 0.876 | <u>8.08</u> |
| Qwen3-VL-8B-Instruct | <u>0.528</u> | 66.3 | <u>0.946</u> | 8.00 |
| Ministral 3 3B | 0.305 | <u>56.8</u> | 0.830 | 7.83 |
| Qwen3-VL-4B-Instruct | <u>0.438</u> | <u>56.8</u> | <u>0.900</u> | <u>8.01</u> |
| Qwen3-VL-2B-Instruct | 0.163 | 42.2 | 0.786 | 6.36 |
| Gemma3-4B-Instruct | 0.318 | 49.1 | 0.759 | 5.23 |

Base

| Model | Multilingual MMLU | MATH CoT 2-Shot | AGIEval 5-shot | MMLU Redux 5-shot | MMLU 5-shot | TriviaQA 5-shot |
|---|---|---|---|---|---|---|
| Ministral 3 14B | 0.742 | <u>0.676</u> | 0.648 | 0.820 | 0.794 | 0.749 |
| Qwen3 14B Base | <u>0.754</u> | 0.620 | <u>0.661</u> | <u>0.837</u> | <u>0.804</u> | 0.703 |
| Gemma 3 12B Base | 0.690 | 0.487 | 0.587 | 0.766 | 0.745 | <u>0.788</u> |
| Ministral 3 8B | <u>0.706</u> | <u>0.626</u> | 0.591 | 0.793 | <u>0.761</u> | <u>0.681</u> |
| Qwen 3 8B Base | 0.700 | 0.576 | <u>0.596</u> | <u>0.794</u> | 0.760 | 0.639 |
| Ministral 3 3B | 0.652 | <u>0.601</u> | 0.511 | 0.735 | 0.707 | 0.592 |
| Qwen 3 4B Base | <u>0.677</u> | 0.405 | <u>0.570</u> | <u>0.759</u> | <u>0.713</u> | 0.530 |
| Gemma 3 4B Base | 0.516 | 0.294 | 0.430 | 0.626 | 0.589 | <u>0.640</u> |

Usage

The model can be used with the following frameworks:



vLLM

We recommend using this model with vLLM.

Installation

Make sure to install vllm >= 0.12.0:

pip install vllm --upgrade

Doing so should automatically install mistral_common >= 1.8.6.

To check:

python -c "import mistral_common; print(mistral_common.__version__)"

You can also make use of a ready-to-go Docker image available on Docker Hub.

Serve

Due to their size, Ministral-3-3B-Reasoning-2512 and Ministral-3-8B-Reasoning-2512 can each run on a single H200 GPU.

A simple launch command is:


vllm serve mistralai/Ministral-3-3B-Reasoning-2512 \
  --tokenizer_mode mistral --config_format mistral --load_format mistral \
  --enable-auto-tool-choice --tool-call-parser mistral \
  --reasoning-parser mistral

Key parameter notes:

  • enable-auto-tool-choice: Required when enabling tool usage.
  • tool-call-parser mistral: Required when enabling tool usage.
  • reasoning-parser mistral: Required when enabling reasoning.

Additional flags:

  • You can set --max-model-len to save memory. By default it is 262144, which is quite large and not necessary for most scenarios.
  • You can set --max-num-batched-tokens to balance throughput and latency: higher values increase throughput at the cost of latency.

Recommended Sampling Settings:

  • We recommend starting with a Temperature of 0.7 for most use cases. Feel free to experiment with different settings to best suit your specific needs.

Usage of the model

Here we assume that the model mistralai/Ministral-3-3B-Reasoning-2512 is served and reachable at localhost on port 8000, the default for vLLM.
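As a quick smoke test against that server, the request body follows the standard OpenAI chat-completions shape. A stdlib-only sketch (the model name and sampling values are taken from this card; the actual send is commented out since it requires the server to be running):

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "mistralai/Ministral-3-3B-Reasoning-2512") -> dict:
    """Build an OpenAI-compatible /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # recommended starting point from this card
        "top_p": 0.95,
    }

body = build_request("What is 15 * 7?")

# To actually send it (needs the vLLM server from above):
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(body).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```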

<details>
<summary>Vision Reasoning</summary>

Let's see if the Ministral 3 model knows when to pick a fight!

from typing import Any

from openai import OpenAI
from huggingface_hub import hf_hub_download

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 262144
client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id


def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()

    index_begin_think = system_prompt.find("[THINK]")
    index_end_think = system_prompt.find("[/THINK]")

    return {
        "role": "system",
        "content": [
            {"type": "text", "text": system_prompt[:index_begin_think]},
            {
                "type": "thinking",
                "thinking": system_prompt[
                    index_begin_think + len("[THINK]") : index_end_think
                ],
                "closed": True,
            },
            {
                "type": "text",
                "text": system_prompt[index_end_think + len("[/THINK]") :],
            },
        ],
    }


SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"

messages = [
    SYSTEM_PROMPT,
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
            },
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]


stream = client.chat.completions.create(
    model=model,
    messages=messages,
    stream=True,
    temperature=TEMP,
    top_p=TOP_P,
    max_tokens=MAX_TOK,
)

print("client: Start streaming chat completions...:\n")
printed_reasoning_content = False
answer = []

for chunk in stream:
    reasoning_content = None
    content = None
    # Check the content is reasoning_content or content
    if hasattr(chunk.choices[0].delta, "reasoning_content"):
        reasoning_content = chunk.choices[0].delta.reasoning_content
    if hasattr(chunk.choices[0].delta, "content"):
        content = chunk.choices[0].delta.content

    if reasoning_content is not None:
        if not printed_reasoning_content:
            printed_reasoning_content = True
            print("Start reasoning:\n", end="", flush=True)
        print(reasoning_content, end="", flush=True)
    elif content is not None:
        # Collect and print the final answer content
        answer.append(content)
        print(content, end="", flush=True)

if answer:
    print("\n\n=============\nAnswer\n=============\n")
    print("".join(answer))
else:
    print("\n\n=============\nNo Answer\n=============\n")
    print(
        "No answer was generated by the model, probably because the maximum number of tokens was reached."
    )

</details>
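Since the serve command above enables tool calling (--enable-auto-tool-choice, --tool-call-parser mistral), tools are declared with the standard OpenAI function schema. A minimal sketch of declaring a tool and extracting calls from a response message (the get_weather tool and the response dict are made-up examples; the shapes follow the OpenAI chat-completions format):

```python
import json

# An OpenAI-style tool schema, as accepted by the chat.completions API.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def extract_tool_calls(message: dict) -> list[tuple[str, dict]]:
    """Return (name, arguments) pairs from an assistant message dict."""
    calls = []
    for call in message.get("tool_calls") or []:
        fn = call["function"]
        calls.append((fn["name"], json.loads(fn["arguments"])))
    return calls

# A response message shaped like what the OpenAI-compatible server returns:
msg = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_0",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
    }],
}
print(extract_tool_calls(msg))
```

Pass TOOLS via the `tools` parameter of `client.chat.completions.create`; each extracted call is then executed by your code and its result returned in a `tool` role message.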

Transformers

You can also use Ministral 3 3B Reasoning 2512 with Transformers!
Make sure to install Transformers from its first v5 release candidate or from "main":

pip install transformers==5.0.0rc0

To make the best use of our model with Transformers, make sure mistral-common >= 1.8.6 is installed so you can use our tokenizer.

pip install mistral-common --upgrade

Then load our tokenizer along with the model and generate:

<details>
<summary>Python snippet</summary>

import torch
from transformers import Mistral3ForConditionalGeneration, MistralCommonBackend

model_id = "mistralai/Ministral-3-3B-Reasoning-2512"

tokenizer = MistralCommonBackend.from_pretrained(model_id)
model = Mistral3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
            },
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]

tokenized = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True)

tokenized["input_ids"] = tokenized["input_ids"].to(device="cuda")
tokenized["pixel_values"] = tokenized["pixel_values"].to(dtype=torch.bfloat16, device="cuda")
image_sizes = [tokenized["pixel_values"].shape[-2:]]

output = model.generate(
    **tokenized,
    image_sizes=image_sizes,
    max_new_tokens=8092,
)[0]

decoded_output = tokenizer.decode(output[len(tokenized["input_ids"][0]):])
print(decoded_output)

</details>
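The decoded output of the reasoning model wraps its chain of thought in [THINK]...[/THINK] markers (the same markers handled by the system-prompt parsing in the vLLM example). A small helper to separate the trace from the final answer might look like this (a hypothetical utility, assuming at most one complete trace):

```python
def split_reasoning(text: str, open_tag: str = "[THINK]", close_tag: str = "[/THINK]"):
    """Split generated text into (reasoning, answer) on the think markers."""
    start = text.find(open_tag)
    end = text.find(close_tag)
    if start == -1 or end == -1:
        return "", text.strip()  # no trace found, or trace was truncated
    reasoning = text[start + len(open_tag):end].strip()
    answer = (text[:start] + text[end + len(close_tag):]).strip()
    return reasoning, answer

reasoning, answer = split_reasoning("[THINK]2 + 2 = 4[/THINK]The answer is 4.")
```

Note that when reusing the output in multi-turn conversations, the recommendation above is to keep the reasoning trace in context rather than strip it.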

License

This model is licensed under the Apache 2.0 License.

You must not use this model in a manner that infringes, misappropriates, or otherwise violates any third party’s rights, including intellectual property rights.

Specifications

Parameters 4.7B
Context Length 262K
Architecture mistral3
License Apache 2.0

Tags

chat reasoning