March 11, 2024

Swallow (Mistral)

These are large language models (Swallow-MS 7B and Swallow-MX 8x7B) with enhanced Japanese capabilities based on Mistral 7B and Mixtral 8x7B. The model parameters (weights) are released under the permissive Apache 2.0 license, allowing their use in both research and commercial applications.


Legacy (higher-performance models have since been developed and released)

Models

8x7B Instruct v0.1: with post-training (chat model)
7B Instruct v0.1: with post-training (chat model)
7B v0.1: without post-training


Overview

The large language models Swallow-MS 7B and Swallow-MX 8x7B were developed by research teams from the Okazaki and Yokota Laboratories at the School of Computing, Tokyo Institute of Technology, together with the National Institute of Advanced Industrial Science and Technology (AIST). To enhance the Japanese capabilities of the large language models Mistral 7B and Mixtral 8x7B, which exhibit strong performance in English language understanding and dialogue, the research team conducted continued pre-training using large-scale Japanese language data. In performance evaluations conducted by the team, Swallow-MS 7B achieved the highest performance among open 7B large language models on benchmarks related to Japanese knowledge, reasoning, and language generation (as of March 2024, in comparisons among base models). In addition, Swallow-MX 8x7B adopts a Mixture of Experts (MoE) architecture and is the first open model with enhanced Japanese capabilities based on this architecture. The released models can be downloaded from Hugging Face.

The licenses for Swallow-MS 7B and Swallow-MX 8x7B inherit the Apache 2.0 licenses of Mistral 7B and Mixtral 8x7B. As long as users comply with this license, the models may be used for both research and commercial purposes.
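
For reference, the released checkpoints can be loaded with the Hugging Face transformers library. The following is a minimal sketch; the repository ID (tokyotech-llm/Swallow-MS-7b-v0.1) and the generation settings are assumptions based on the naming used in this post, so please check the official model cards for the exact identifiers.

```python
# Minimal sketch for loading and prompting Swallow-MS 7B with Hugging Face transformers.
# The repository ID below is an assumption; check the official model card for the exact name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tokyotech-llm/Swallow-MS-7b-v0.1"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision loading to fit a 7B model on a single GPU
    device_map="auto",
)

prompt = "東京工業大学の主なキャンパスは、"  # "The main campuses of Tokyo Institute of Technology are ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```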

Performance

Swallow-MS 7B

Let us compare the Japanese performance of models obtained through continued pre-training of Mistral 7B and Llama 2, as well as other 7B models. Swallow-MS 7B steadily improves the Japanese capabilities of Mistral 7B through continued pre-training. On the Japanese benchmarks used for evaluation, Swallow-MS 7B shows the highest average performance among the open 7B models available as of March 2024. In particular, it achieves substantial improvements on knowledge-oriented tasks such as JCommonsenseQA (JComQA) and NIILC, a trend similar to the performance gains observed for Swallow over Llama 2. In addition, Swallow-MS 7B expands the vocabulary of Mistral 7B with Japanese tokens (vocabulary expansion, marked "VE" in the tables below), which allows more few-shot examples to fit in a prompt, reduces garbled output, and speeds up generation.
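
The effect of the vocabulary expansion on prompt length can be checked directly by comparing tokenizers. The sketch below is illustrative only; the repository IDs are assumptions, and the exact token counts depend on the released tokenizers.

```python
# Illustrative sketch: compare how many tokens a Japanese sentence costs under the
# original Mistral tokenizer vs. a vocabulary-expanded Swallow-MS tokenizer.
# Repository IDs are assumptions; check the official model cards.
from transformers import AutoTokenizer

text = "本日は晴天なり。大規模言語モデルの日本語能力を強化しました。"

tok_mistral = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tok_swallow = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-MS-7b-v0.1")  # assumed ID

print("Mistral 7B tokens   :", len(tok_mistral.encode(text, add_special_tokens=False)))
print("Swallow-MS 7B tokens:", len(tok_swallow.encode(text, add_special_tokens=False)))
# Fewer tokens per Japanese character means more few-shot examples fit in the same
# context window and fewer characters fall back to byte pieces (a source of garbled output).
```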

| Model | Ja Avg | JComQA | JEMHopQA | NIILC | JSQuAD | XL-Sum | Ja-En | En-Ja | MGSM |
|---|---|---|---|---|---|---|---|---|---|
| CyberAgentLM2-7B (base) | 0.3098 | 0.2198 | 0.5047 | 0.5066 | 0.7799 | 0.0233 | 0.1499 | 0.2345 | 0.0600 |
| Llama 2 7B (base) | 0.3201 | 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.1738 | 0.1783 | 0.0760 |
| Japanese Stable LM Beta 7B (base) | 0.3366 | 0.3610 | 0.4478 | 0.4432 | 0.8318 | 0.2195 | 0.1226 | 0.1946 | 0.0720 |
| Japanese Stable LM Beta 7B (base, VE) | 0.2937 | 0.2172 | 0.4482 | 0.4309 | 0.8202 | 0.0757 | 0.1453 | 0.1601 | 0.0520 |
| ELYZA-japanese-Llama-2-7b (base) | 0.3467 | 0.5791 | 0.4703 | 0.4019 | 0.8226 | 0.1312 | 0.1289 | 0.1795 | 0.0600 |
| ELYZA-japanese-Llama-2-7b-fast (base, VE) | 0.3312 | 0.5308 | 0.4330 | 0.3898 | 0.8131 | 0.1289 | 0.1143 | 0.1678 | 0.0720 |
| Youri 7B (base) | 0.3767 | 0.4620 | 0.4776 | 0.4999 | 0.8506 | 0.1957 | 0.1971 | 0.2671 | 0.0640 |
| Swallow 7B (base, VE) | 0.3940 | 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1511 | 0.2510 | 0.1240 |
| Swallow 7B-plus (base, VE) | 0.4090 | 0.5478 | 0.5493 | 0.6030 | 0.8544 | 0.1806 | 0.1441 | 0.2568 | 0.1360 |
| Qwen-7B | 0.3742 | 0.7712 | 0.4234 | 0.2376 | 0.8594 | 0.1371 | 0.1801 | 0.1689 | 0.2160 |
| Nekomata 7B | 0.4185 | 0.7417 | 0.4928 | 0.5022 | 0.8707 | 0.1676 | 0.1815 | 0.2673 | 0.1240 |
| Mistral-7B-v0.1 (7B, base) | 0.3717 | 0.7301 | 0.4245 | 0.2722 | 0.8563 | 0.2006 | 0.1733 | 0.1405 | 0.1760 |
| Japanese Stable LM Base Gamma 7B (base) | 0.4301 | 0.7364 | 0.4643 | 0.5568 | 0.8910 | 0.2293 | 0.1561 | 0.2390 | 0.1680 |
| Swallow-MS 7B (base, VE) | 0.4524 | 0.8570 | 0.4915 | 0.5519 | 0.8802 | 0.1988 | 0.1667 | 0.2494 | 0.2240 |

Next, let us examine the performance in English.
As is commonly observed in continued pre-training for cross-lingual transfer, Swallow-MS 7B scores lower on English tasks than the original Mistral 7B.
Mirroring the gains on Japanese knowledge-oriented tasks, the decline is most pronounced on English knowledge-oriented tasks such as TriviaQA.
We attempted to mitigate the degradation in English performance while improving Japanese performance by mixing Japanese and English data at a 9:1 ratio and adjusting the learning rate. However, possibly due to the small model size, it was not possible to completely prevent the performance drop.
Nevertheless, the average English performance of Swallow-MS 7B slightly surpasses that of Llama 2 7B and remains higher than that of Swallow 7B, and we therefore expect it to be used as a model that is strong in both Japanese and English.
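
As a rough illustration of how such a mixture ratio can be realized, the sketch below samples training documents from Japanese and English pools with 9:1 weights. It is a simplified stand-in for the idea, not the actual Swallow training pipeline.

```python
# Simplified sketch of sampling training documents at a 9:1 Japanese-to-English ratio.
# This illustrates the mixing idea only; it is not the actual Swallow data pipeline.
import random

def mixed_stream(ja_docs, en_docs, ja_weight=0.9, seed=42):
    """Yield documents, drawing Japanese with probability ja_weight and English otherwise."""
    rng = random.Random(seed)
    ja_iter, en_iter = iter(ja_docs), iter(en_docs)
    while True:
        source = ja_iter if rng.random() < ja_weight else en_iter
        try:
            yield next(source)
        except StopIteration:
            return  # stop when either pool runs out (real pipelines reshuffle or cycle instead)

# Toy usage: roughly 9 Japanese documents for every English one.
ja = [f"ja_doc_{i}" for i in range(1000)]
en = [f"en_doc_{i}" for i in range(1000)]
sample = [doc for _, doc in zip(range(20), mixed_stream(ja, en))]
print(sample)
```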

| Model | En Avg | OpenBookQA | XWINO | TriviaQA | SQuAD 2.0 | HellaSwag | GSM8k |
|---|---|---|---|---|---|---|---|
| CyberAgentLM2-7B (base) | 0.4026 | 0.2860 | 0.8581 | 0.3496 | 0.3510 | 0.5003 | 0.0705 |
| Llama-2-7b (base) | 0.4895 | 0.3580 | 0.9049 | 0.6265 | 0.3207 | 0.5860 | 0.1410 |
| Japanese Stable LM Beta 7B (base) | 0.4736 | 0.3620 | 0.8994 | 0.5903 | 0.2992 | 0.5707 | 0.1198 |
| Japanese Stable LM Beta 7B (base, VE) | 0.4545 | 0.3520 | 0.8942 | 0.5549 | 0.3079 | 0.5644 | 0.0538 |
| ELYZA-japanese-Llama-2-7b (base) | 0.4703 | 0.3400 | 0.8989 | 0.5875 | 0.2721 | 0.5595 | 0.1638 |
| ELYZA-japanese-Llama-2-7b-fast (base, VE) | 0.4608 | 0.3280 | 0.8989 | 0.5817 | 0.2605 | 0.5530 | 0.1425 |
| Youri 7B (base) | 0.4566 | 0.3400 | 0.8938 | 0.5257 | 0.3297 | 0.5540 | 0.0963 |
| Swallow 7B (base, VE) | 0.4399 | 0.3180 | 0.8817 | 0.4836 | 0.3125 | 0.5308 | 0.1130 |
| Swallow 7B-plus (base, VE) | 0.4370 | 0.3280 | 0.8929 | 0.4558 | 0.3134 | 0.5259 | 0.1061 |
| Qwen-7B | 0.5412 | 0.3640 | 0.8933 | 0.5695 | 0.3799 | 0.5787 | 0.4617 |
| Nekomata 7B | 0.4380 | 0.3340 | 0.8766 | 0.4371 | 0.2933 | 0.5340 | 0.1531 |
| Mistral-7B-v0.1 (7B, base) | 0.5577 | 0.3660 | 0.9157 | 0.7050 | 0.3799 | 0.6264 | 0.3533 |
| Japanese Stable LM Base Gamma 7B (base) | 0.4860 | 0.3240 | 0.8976 | 0.5745 | 0.3546 | 0.5739 | 0.1911 |
| Swallow-MS 7B (base, VE) | 0.5042 | 0.3440 | 0.9037 | 0.5976 | 0.3364 | 0.5810 | 0.2623 |

We present here radar charts visualizing selected portions of these evaluation results.

[Figure: Performance of 7B models (Japanese)]
[Figure: Performance of 7B models (English)]

Swallow-MX 8x7B

Next, we examine Swallow-MX 8x7B.
Because this model is a Mixture of Experts (MoE) model that combines eight 7B models, we compare it with 70B-class models that have a similar total number of parameters (Mixtral shares attention and layer normalization parameters across experts, resulting in a total of 47B parameters).
Evaluation on Japanese benchmarks shows that continued pre-training steadily improves the Japanese capabilities of Mixtral 8x7B in Swallow-MX 8x7B.
Significant performance gains are again observed on knowledge-oriented tasks such as JCommonsenseQA (JComQA) and NIILC.
Although it does not surpass Swallow 70B, which has a larger total number of parameters, it demonstrates performance comparable to 70B-class models, highlighting the strong potential of MoE architectures.
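
For intuition about the parameter counts mentioned above, here is a back-of-the-envelope calculation based on the publicly documented Mixtral 8x7B configuration (hidden size 4096, 32 layers, FFN size 14336, 8 experts with 2 active per token, grouped-query attention with 8 KV heads). Treat the exact figures as approximations rather than official numbers.

```python
# Back-of-the-envelope parameter count for a Mixtral-8x7B-style MoE model.
# Configuration values follow the publicly documented Mixtral 8x7B setup; the result
# is an approximation (it ignores layer norms, router weights, and biases).
hidden, layers, ffn, vocab = 4096, 32, 14336, 32000
n_experts, active_experts = 8, 2
n_heads, kv_heads, head_dim = 32, 8, 128

attn_per_layer = hidden * (n_heads * head_dim)        # W_q
attn_per_layer += 2 * hidden * (kv_heads * head_dim)  # W_k, W_v (grouped-query attention)
attn_per_layer += (n_heads * head_dim) * hidden       # W_o

expert_ffn = 3 * hidden * ffn                         # gate, up, down projections (SwiGLU)
embeddings = 2 * vocab * hidden                       # input embedding + output head

total = layers * (attn_per_layer + n_experts * expert_ffn) + embeddings
active = layers * (attn_per_layer + active_experts * expert_ffn) + embeddings

print(f"total parameters ~ {total / 1e9:.1f}B")   # ~47B: attention is shared, only the FFN experts multiply
print(f"active per token ~ {active / 1e9:.1f}B")  # only 2 of the 8 experts run for each token
```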

| Model | Ja Avg | JComQA | JEMHopQA | NIILC | JSQuAD | XL-Sum | Ja-En | En-Ja | MGSM |
|---|---|---|---|---|---|---|---|---|---|
| KARAKURI LM 70B (base) | 0.4669 | 0.8579 | 0.5125 | 0.5713 | 0.9100 | 0.1464 | 0.2113 | 0.2540 | 0.2720 |
| Llama-2-70b (base) | 0.4830 | 0.8686 | 0.4656 | 0.5256 | 0.9080 | 0.2361 | 0.2398 | 0.2643 | 0.3560 |
| Japanese Stable LM Beta 70B (base) | 0.5138 | 0.9115 | 0.4925 | 0.6042 | 0.9192 | 0.2573 | 0.2335 | 0.2765 | 0.4160 |
| Swallow 70B (base, VE) | 0.5528 | 0.9348 | 0.6290 | 0.6960 | 0.9176 | 0.2266 | 0.2298 | 0.3043 | 0.4840 |
| Qwen-14B | 0.4431 | 0.8829 | 0.4243 | 0.3220 | 0.8980 | 0.1851 | 0.2224 | 0.2223 | 0.3880 |
| Qwen-72B | 0.5244 | 0.9294 | 0.5566 | 0.4518 | 0.9159 | 0.2179 | 0.2356 | 0.2561 | 0.6320 |
| Mixtral 8x7B v0.1 (instruct) | 0.4486 | 0.8400 | 0.5033 | 0.3107 | 0.8808 | 0.2002 | 0.2063 | 0.1956 | 0.4520 |
| Swallow-MX 8x7B | 0.5208 | 0.9258 | 0.5843 | 0.5687 | 0.9148 | 0.2589 | 0.2074 | 0.2705 | 0.4360 |

Finally, we examine the English performance of Swallow-MX 8x7B.
Unlike in the case of Swallow-MS 7B, Swallow-MX 8x7B shows little degradation compared to the original model, Mixtral 8x7B Instruct.
A similar mitigation of English performance degradation has also been observed in Swallow 70B, suggesting that increasing the number of model parameters may help reduce performance loss, regardless of whether the architecture is MoE or not.
However, the continued pre-training of Swallow-MX 8x7B used a Japanese-to-English data ratio of 72:28 (for historical reasons related to debugging of the training framework). We therefore plan to investigate further whether this difference in the Japanese–English mixing ratio also contributed to the outcome.

| Model | En Avg | OpenBookQA | XWINO | TriviaQA | SQuAD 2.0 | HellaSwag | GSM8k |
|---|---|---|---|---|---|---|---|
| Llama-2-70b (base) | 0.6268 | 0.4280 | 0.9290 | 0.8239 | 0.3770 | 0.6742 | 0.5284 |
| Japanese Stable LM Beta 70B (base) | 0.6288 | 0.4200 | 0.9299 | 0.8203 | 0.3867 | 0.6729 | 0.5428 |
| Swallow 70B (base, VE) | 0.6042 | 0.4220 | 0.9204 | 0.7756 | 0.3745 | 0.6458 | 0.4867 |
| Qwen-14B | 0.5945 | 0.3720 | 0.9067 | 0.6543 | 0.4167 | 0.6473 | 0.5701 |
| Qwen-72B | 0.6369 | 0.4040 | 0.9200 | 0.7501 | 0.3401 | 0.6647 | 0.7422 |
| Mixtral 8x7B v0.1 (instruct) | 0.6335 | 0.4160 | 0.9226 | 0.7740 | 0.3714 | 0.6823 | 0.6346 |
| Swallow-MX 8x7B | 0.6129 | 0.3740 | 0.9170 | 0.7847 | 0.3801 | 0.6520 | 0.5694 |

[Figure: Performance of 47B- and 70B-class models (Japanese)]

Note that benchmark scores cannot be compared across different datasets. For example, in a model's Japanese evaluation results, even if the score for mathematics is higher than that for machine translation, this does not imply that the model is better at mathematics than at translation (it would be like comparing the results of entirely different exams with different difficulty levels and grading criteria). For the same reason, even if the average score on English tasks is higher than the average score on Japanese tasks for a given model, one cannot conclude that the model is stronger in English. Because evaluation scales and difficulty levels differ across benchmark datasets, it is inappropriate to discuss task strengths and weaknesses based solely on the shapes of these radar charts.

Evaluation Benchmarks

For Japanese evaluation benchmarks, we used llm-jp-eval (v1.0.0) and the JP Language Model Evaluation Harness (commit #9b42d41). The tasks correspond to the columns of the Japanese tables above: multiple-choice question answering (JCommonsenseQA), open-ended question answering (JEMHopQA, NIILC), machine reading comprehension (JSQuAD), automatic summarization (XL-Sum), Japanese-to-English and English-to-Japanese machine translation (Ja-En, En-Ja), and mathematical reasoning (MGSM).

Note that natural language inference (NLI), which is commonly used as an evaluation benchmark for large language models, was excluded in this study. Language models tend to exhibit biased label predictions on NLI tasks, and when this bias happens to coincide with the correct labels, scores become artificially high. As a result, the evaluation scores, especially for 7B models, were unstable.
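
To see why biased label predictions make such scores unreliable, consider a toy calculation: a model that always outputs the same NLI label scores as high as that label's share of the test set, without any real understanding. The label distribution below is made up purely for illustration.

```python
# Toy illustration of why biased label predictions inflate NLI scores.
# A degenerate model that always predicts one label already scores as high as that
# label's frequency in the test set, so small shifts in bias swing the score a lot.
from collections import Counter

# Hypothetical gold-label distribution for an NLI test set (values invented for illustration).
gold = ["entailment"] * 500 + ["contradiction"] * 300 + ["neutral"] * 200

def constant_predictor_accuracy(gold_labels, predicted_label):
    """Accuracy of a model that ignores the input and always predicts predicted_label."""
    return sum(g == predicted_label for g in gold_labels) / len(gold_labels)

for label in Counter(gold):
    print(f"always '{label}': accuracy = {constant_predictor_accuracy(gold, label):.2f}")
# If a model's bias happens to align with the majority label it looks strong;
# if the bias flips, the score collapses, which makes the benchmark unstable.
```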

For English evaluation benchmarks, we used the Language Model Evaluation Harness (v0.3.0). The tasks correspond to the columns of the English tables above: multiple-choice question answering (OpenBookQA), commonsense reasoning (XWINO, HellaSwag), open-ended question answering (TriviaQA), machine reading comprehension (SQuAD 2.0), and mathematical reasoning (GSM8k).
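
For readers who want to reproduce this kind of evaluation, the following is a hedged sketch of driving the EleutherAI harness from its Python API. The entry point, argument names, and task identifiers may differ across harness versions, and the repository ID is assumed, so treat this as an illustration rather than the team's exact setup.

```python
# Hedged sketch of invoking the EleutherAI Language Model Evaluation Harness from Python.
# Argument names and task identifiers can vary between harness versions; this is an
# assumption-based illustration, not the Swallow team's exact evaluation command.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",                                          # Hugging Face causal-LM backend
    model_args="pretrained=tokyotech-llm/Swallow-MS-7b-v0.1",   # assumed repository ID
    tasks=["hellaswag", "openbookqa", "triviaqa", "gsm8k"],     # subset of the English benchmarks
    num_fewshot=4,                                              # few-shot count is illustrative only
    batch_size=8,
)
print(results["results"])
```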

Method

Swallow-MS 7B and Swallow-MX 8x7B were constructed by applying continued pre-training to Mistral 7B and Mixtral 8x7B Instruct, respectively.
To develop Japanese large language models strong in arithmetic reasoning and code generation, source code corpora were mixed with text corpora during training.
Specifically, Swallow-MS 7B was trained on AlgebraicStack [Azerbayev+, 2024], a corpus of mathematics-related source code, while Swallow-MX 8x7B was trained on both AlgebraicStack and The Vault [Nguyen+, 2023], a corpus pairing natural language with source code.
The effects of incorporating source code corpora will be further investigated through comparative experiments in future work.

The text corpora followed the same configuration as Swallow, with a Japanese-to-English mixture ratio of 9:1 (72:28 for Swallow-MX 8x7B, as noted above). For Japanese they consisted of the Swallow corpus and Japanese Wikipedia, and for English, RefinedWeb and the arXiv subset of The Stack.
In this release, Japanese vocabulary expansion was applied only to Swallow-MS 7B and not to Swallow-MX 8x7B.
With the vocabulary expansion, the number of Hiragana characters included in the vocabulary increased from 58 to 83, Katakana from 76 to 87, and Kanji from 1,456 to 3,208.
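
Character-coverage figures of this kind can be approximated by scanning a tokenizer's vocabulary for single-character Japanese tokens. The sketch below illustrates the idea; the repository ID is an assumption, and the exact counts depend on the released tokenizer and its normalization.

```python
# Illustrative sketch: count single-character hiragana / katakana / kanji entries
# in a tokenizer's vocabulary. The repository ID is an assumption; exact counts
# depend on the released tokenizer and how SentencePiece pieces are normalized.
from transformers import AutoTokenizer

def count_japanese_chars(vocab):
    ranges = {
        "hiragana": (0x3041, 0x309F),
        "katakana": (0x30A0, 0x30FF),
        "kanji":    (0x4E00, 0x9FFF),  # CJK Unified Ideographs (basic block only)
    }
    counts = {name: 0 for name in ranges}
    for token in vocab:
        piece = token.lstrip("▁")  # drop the SentencePiece word-boundary marker
        if len(piece) == 1:
            cp = ord(piece)
            for name, (lo, hi) in ranges.items():
                if lo <= cp <= hi:
                    counts[name] += 1
    return counts

tok = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-MS-7b-v0.1")  # assumed ID
print(count_japanese_chars(tok.get_vocab().keys()))
```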

The continued pre-training of Mistral and Mixtral was conducted using software developed in-house.

Acknowledgements

The research and development of Swallow-MS and Swallow-MX were supported by several initiatives, including the “Large-Scale Language Model Development Support Program” of the AI Bridging Cloud Infrastructure (ABCI), which is built and operated by AIST; the project “Development of AI Application Technologies to Support Decision-Making in Design Risk Assessment Based on Expert Perspectives” under the NEDO program “Development of Core Integrated Technologies for Next-Generation Artificial Intelligence and Robots” (JPNP18002); and other supporting programs. Part of these results was also achieved through the “Large-Scale Foundation Model Development Support Program” of ABCI. This program was jointly proposed in September 2023 by the LLM-jp study group—organized by the National Institute of Informatics (NII), AIST, and Tokyo Institute of Technology, and involving research teams from institutions such as NII, Tohoku University, the University of Tokyo, and Waseda University—and was subsequently selected. It provided an opportunity to exclusively use a portion of ABCI’s high-performance computational resources (referred to as A-nodes) for up to 60 days. In addition, evaluation experiments of the trained large language models utilized datasets and insights developed within the LLM-jp study group.