---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-0.6B-Base
---

# Qwen3-0.6B
## Qwen3 Highlights

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support.
## Model Overview

**Qwen3-0.6B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 0.6B in total, 0.44B non-embedding
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
> [!TIP]
> If you encounter significant endless repetitions, please refer to the Best Practices section for optimal sampling parameters, and set the `presence_penalty` to 1.5.
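As a minimal sketch of that tip, here is how the setting could look with offline inference in vLLM, whose `SamplingParams` exposes `presence_penalty`; the prompt and the `max_tokens` cap are illustrative assumptions:

```python
from vllm import LLM, SamplingParams

# Thinking-mode sampling defaults (see Best Practices below),
# plus presence_penalty=1.5 to damp endless repetitions.
sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
    presence_penalty=1.5,
    max_tokens=4096,  # illustrative cap; raise for long reasoning traces
)

llm = LLM(model="Qwen/Qwen3-0.6B")
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```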
## Quickstart

The code of Qwen3 has been in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the error `KeyError: 'qwen3'`.

The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-0.6B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parse thinking content: 151668 is the token id of </think>
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint, as sketched below. For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3.
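A sketch of the corresponding launch commands; the reasoning-parser flags follow the SGLang and vLLM options documented around Qwen3's release, so verify them against your installed versions:

```shell
# SGLang (>= 0.4.6.post1)
python -m sglang.launch_server --model-path Qwen/Qwen3-0.6B --reasoning-parser qwen3

# vLLM (>= 0.8.5)
vllm serve Qwen/Qwen3-0.6B --enable-reasoning --reasoning-parser deepseek_r1
```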
## Switching Between Thinking and Non-Thinking Mode

> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM. Please refer to our documentation for SGLang and vLLM users.
### `enable_thinking=True`

By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.

> [!NOTE]
> For thinking mode, use Temperature=0.6, TopP=0.95, TopK=20, and MinP=0 (the default setting in `generation_config.json`). DO NOT use greedy decoding, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the Best Practices section.
### `enable_thinking=False`

We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using Temperature=0.7, TopP=0.8, TopK=20, and MinP=0. For more detailed guidance, please refer to the Best Practices section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input

We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.

Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

class QwenChatbot:
    def __init__(self, model_name="Qwen/Qwen3-0.6B"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.history = []

    def generate_response(self, user_input):
        messages = self.history + [{"role": "user", "content": user_input}]
        text = self.tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )
        inputs = self.tokenizer(text, return_tensors="pt")
        response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
        response = self.tokenizer.decode(response_ids, skip_special_tokens=True)

        # Update history
        self.history.append({"role": "user", "content": user_input})
        self.history.append({"role": "assistant", "content": response})

        return response

# Example Usage
if __name__ == "__main__":
    chatbot = QwenChatbot()

    # First input (without /think or /no_think tags, thinking mode is enabled by default)
    user_input_1 = "How many r's in strawberries?"
    print(f"User: {user_input_1}")
    response_1 = chatbot.generate_response(user_input_1)
    print(f"Bot: {response_1}")
    print("----------------------")

    # Second input with /no_think
    user_input_2 = "Then, how many r's in blueberries? /no_think"
    print(f"User: {user_input_2}")
    response_2 = chatbot.generate_response(user_input_2)
    print(f"Bot: {response_2}")
    print("----------------------")

    # Third input with /think
    user_input_3 = "Really? /think"
    print(f"User: {user_input_3}")
    response_3 = chatbot.generate_response(user_input_3)
    print(f"Bot: {response_3}")
```

> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. When `enable_thinking=False`, the soft switches are not valid: regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use

Qwen3 excels in tool calling capabilities. We recommend using Qwen-Agent to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
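Below is a sketch of that setup using Qwen-Agent's `Assistant` class; the endpoint URL, API key, and the MCP server entries (`mcp-server-time`, `mcp-server-fetch`) are illustrative assumptions for a locally served OpenAI-compatible endpoint:

```python
from qwen_agent.agents import Assistant

# Define the LLM: assume an OpenAI-compatible endpoint serving Qwen3-0.6B locally
llm_cfg = {
    'model': 'Qwen3-0.6B',
    'model_server': 'http://localhost:8000/v1',  # api_base (illustrative)
    'api_key': 'EMPTY',
}

# Define tools: an MCP configuration plus a built-in tool of Qwen-Agent
tools = [
    {'mcpServers': {  # MCP configuration file contents
        'time': {
            'command': 'uvx',
            'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
        },
        'fetch': {
            'command': 'uvx',
            'args': ['mcp-server-fetch']
        }
    }},
    'code_interpreter',  # built-in tool
]

# Define the agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation: iterate to the final response
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```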
## Best Practices

To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - For thinking mode (`enable_thinking=True`), use Temperature=0.6, TopP=0.95, TopK=20, and MinP=0. DO NOT use greedy decoding, as it can lead to performance degradation and endless repetitions.
   - For non-thinking mode (`enable_thinking=False`), we suggest using Temperature=0.7, TopP=0.8, TopK=20, and MinP=0.
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, a higher value may occasionally result in language mixing and a slight decrease in model performance.

2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens.

3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."

4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content.
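As a small illustration of item 1, a minimal sketch mapping the thinking-mode settings onto `transformers` `generate()` arguments (the prompt is illustrative; `min_p` requires a recent `transformers` release):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Solve: what is 17 * 24?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Thinking-mode sampling settings from item 1 above
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,  # adequate output length per item 2
    do_sample=True,        # sampling is required: greedy decoding degrades thinking mode
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
print(tokenizer.decode(generated_ids[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True))
```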
## Citation

If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```