---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-14B
---
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:

- Seamless switching between thinking mode (for complex logical reasoning, math, and coding) and non-thinking mode (for efficient, general-purpose dialogue) within a single model, ensuring optimal performance across scenarios.
- Significantly enhanced reasoning capabilities, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- Superior human preference alignment, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following.
- Expertise in agent capabilities, enabling precise integration with external tools in both thinking and non-thinking modes.
- Support for 100+ languages and dialects, with strong multilingual instruction-following and translation abilities.
Qwen3-14B has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 14.8B in total and 13.2B non-embedding
- Number of Layers: 40
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: 32,768 tokens natively and 131,072 tokens with YaRN
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the error `KeyError: 'qwen3'`.

The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-14B"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes; default is True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# Parse the thinking content: find the last occurrence of token 151668 (</think>)
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```
Also check out our AWQ documentation for more usage guidance.

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
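The launch commands look roughly like this (a sketch based on the SGLang and vLLM documentation around Qwen3's release; flag names and reasoning-parser identifiers may differ across versions, so verify them against your installed release):

```shell
# SGLang (assumed flags; check your version's docs)
python -m sglang.launch_server --model-path Qwen/Qwen3-14B --reasoning-parser qwen3

# vLLM (assumed flags; check your version's docs)
vllm serve Qwen/Qwen3-14B --enable-reasoning --reasoning-parser deepseek_r1
```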
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for SGLang and vLLM users.
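As a minimal client-side sketch (assuming a locally served endpoint that honors `chat_template_kwargs`; the exact request field can vary by server version, so treat it as an assumption to verify):

```python
from openai import OpenAI

# Placeholder endpoint for a locally served Qwen3 model
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-14B",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    # Assumption: the server exposes the enable_thinking switch via
    # chat_template_kwargs; confirm the field name for your server.
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)
```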
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). DO NOT use greedy decoding, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the Best Practices section.
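For illustration, here is a hedged sketch of passing these settings explicitly to `generate`, reusing `model` and `model_inputs` from the quickstart above (`min_p` requires a reasonably recent Transformers release):

```python
# Thinking-mode sampling settings passed explicitly to generate()
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # sampling, never greedy decoding, in thinking mode
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```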
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # Setting enable_thinking=False disables thinking mode
)
```

In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.

> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the Best Practices section.
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

class QwenChatbot:
    def __init__(self, model_name="Qwen/Qwen3-14B"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.history = []

    def generate_response(self, user_input):
        messages = self.history + [{"role": "user", "content": user_input}]
        text = self.tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )
        inputs = self.tokenizer(text, return_tensors="pt")
        response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
        response = self.tokenizer.decode(response_ids, skip_special_tokens=True)

        # Update the conversation history
        self.history.append({"role": "user", "content": user_input})
        self.history.append({"role": "assistant", "content": response})

        return response

# Example usage
if __name__ == "__main__":
    chatbot = QwenChatbot()

    # First input (no /think or /no_think tag; thinking mode is enabled by default)
    user_input_1 = "How many r's are in strawberries?"
    print(f"User: {user_input_1}")
    print(f"Bot: {chatbot.generate_response(user_input_1)}")
    print("----------------------")

    # Second input with /no_think: thinking is suppressed for this turn
    user_input_2 = "Then, how many r's are in blueberries? /no_think"
    print(f"User: {user_input_2}")
    print(f"Bot: {chatbot.generate_response(user_input_2)}")
    print("----------------------")

    # Third input with /think: thinking is re-enabled for this turn
    user_input_3 = "Really? /think"
    print(f"User: {user_input_3}")
    print(f"Bot: {chatbot.generate_response(user_input_3)}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
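On the client side, this means any parser should tolerate an empty think block. A minimal sketch, assuming only that think content, when present, arrives as a single leading `<think>...</think>` pair:

```python
import re

def split_think(raw: str) -> tuple[str, str]:
    """Split a completion into (thinking, answer); the think block may be empty."""
    match = re.match(r"\s*<think>(.*?)</think>\s*", raw, flags=re.DOTALL)
    if match:
        return match.group(1).strip(), raw[match.end():].strip()
    return "", raw.strip()

# Example: empty think block when thinking is suppressed via /no_think
print(split_think("<think>\n\n</think>\n\nThe answer is 42."))  # -> ('', 'The answer is 42.')
```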
Qwen3 excels in tool calling capabilities. We recommend using Qwen-Agent to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use the MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools by yourself, as in the sketch below.
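The following is a hedged sketch of that setup (the endpoint URL, API key, and MCP servers are illustrative placeholders; see the Qwen-Agent documentation for the authoritative interface):

```python
from qwen_agent.agents import Assistant

# LLM behind an OpenAI-compatible endpoint, e.g. the vLLM/SGLang
# deployments shown above; the URL and key below are placeholders.
llm_cfg = {
    'model': 'Qwen3-14B',
    'model_server': 'http://localhost:8000/v1',
    'api_key': 'EMPTY',
}

# Tools: an MCP configuration (here, illustrative time and fetch servers)
# plus one of Qwen-Agent's built-in tools.
tools = [
    {'mcpServers': {
        'time': {'command': 'uvx', 'args': ['mcp-server-time']},
        'fetch': {'command': 'uvx', 'args': ['mcp-server-fetch']},
    }},
    'code_interpreter',
]

bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation: each iteration yields the accumulated responses.
messages = [{'role': 'user', 'content': 'Introduce the latest developments of Qwen.'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```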
If you find our work helpful, feel free to give us a cite.
```bibtex
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388}
}
```