The model behind the BgGPT chat app is now published

March 3, 2024

(This text was automatically generated by the model from the English version of the blog post. [*])

At INSAIT we are excited to release BgGPT-7B-Instruct-v0.2, the model behind the BgGPT chat app: https://chat.bggpt.ai. This model, part of the BgGPT series, is an improved version of the one we released a few weeks ago. BgGPT-7B-Instruct-v0.2 is still a 7B model, which makes it very fast at text generation and able to run on most modern personal computers. It also comes with the Apache 2.0 licence, which is permissive and commercial-friendly. The model is based on Mistral-7B, but was trained on significant amounts of data and, combined with other advances (to be published at research conferences), can outperform much larger models on Bulgarian-language tasks. The training of BgGPT-7B-Instruct-v0.2 was funded entirely by private funds and donations. Please see our earlier blog post about BgGPT-7B-Instruct-v0.1.

BgGPT Success Story

Over the last two weeks, BgGPT-7B-Instruct-v0.1 has already been adopted by various companies, which have commented that, with a few hours of work and low compute costs for fine-tuning, it can reach the performance of GPT-4 on a specific Bulgarian-language task.

Evaluation & Benchmarks

As with many other language models, we evaluate on a set of standard benchmarks translated into Bulgarian, as well as on English benchmarks: Winogrande [1], HellaSwag [2], ARC [3], MMLU [4], MathQA [5], GSM8K [6], TriviaQA [7], and bgGLUE [8].

These benchmarks test the model's logical reasoning, mathematical skills, knowledge, language understanding, and other abilities.

Evaluation Results

The following graphs show the performance of BgGPT-7B-Instruct-v0.2. It outperforms same-sized models on the Bulgarian benchmarks, including improving upon the previous version of BgGPT-7B (BgGPT-7B-Instruct-v0.1). It also outperformed the larger Mixtral-8x7B-Instruct-v0.1 on the Bulgarian benchmarks. It retained its English skills and on some benchmarks is comparable to or better than models such as Google's Gemma-7B, Mistral-7B, Llama-7B, and others.

Outlook

Although the model is quite competitive with free open models, especially considering its size, it is still not at the level of paid commercial offerings. Nevertheless, even at its current level, it can be useful for many applications.

[*] The translation was done in two steps. First we asked: “Преведи на български език следния текст:” (“Translate the following text into Bulgarian:”) and pasted the English version of the text without the title. Then, in the same chat, we asked “Направи го да звучи по-точно” (“Make it sound more accurate”).

Препратки

  1. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99–106, 2021.
  2. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? https://arxiv.org/abs/1905.07830
  3. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. https://arxiv.org/abs/1803.05457
  4. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. https://arxiv.org/abs/2009.03300
  5. Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms https://arxiv.org/abs/1905.13319
  6. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. https://arxiv.org/abs/2110.14168
  7. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. https://arxiv.org/abs/1705.03551
  8. Momchil Hardalov, Pepa Atanasova, Todor Mihaylov, Galia Angelova, Kiril Simov, Petya Osenova, Veselin Stoyanov, Ivan Koychev, Preslav Nakov, and Dragomir Radev. bgGLUE: A Bulgarian general language understanding evaluation benchmark. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8733–8759 https://bgglue.github.io/

The model behind the BgGPT chat is now published

March 3, 2024

At INSAIT we are delighted to release BgGPT-7B-Instruct-v0.2, the model behind the BgGPT chat app: https://chat.bggpt.ai. This model, part of the BgGPT series of models, is an improved version of the one we released a couple of weeks ago. BgGPT-7B-Instruct-v0.2 is still a 7B model, which makes it very fast at text generation and able to run on most recent personal computers. It also comes with a permissive and commercial-friendly Apache 2.0 licence. The model is based on Mistral-7B, but was trained on significant amounts of data and, combined with other advances (to be published at research conferences), can outperform much larger models on Bulgarian tasks. The training costs of BgGPT-7B-Instruct-v0.2 were covered entirely by private funds and donations. Please see our earlier blog post for BgGPT-7B-Instruct-v0.1.
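For readers who want to try the model locally, below is a minimal loading-and-generation sketch using the HuggingFace transformers library. The repository id and the chat-template usage are assumptions for illustration and are not confirmed by this post.

    # Minimal sketch: load and query BgGPT-7B-Instruct-v0.2 via HuggingFace transformers.
    # The repository id below is an assumption; adjust it to the actual HF repo.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "INSAIT-Institute/BgGPT-7B-Instruct-v0.2"  # assumed repo id

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # a 7B model in half precision fits on a single consumer GPU
        device_map="auto",
    )

    # The model is instruction-tuned; we assume the tokenizer ships a chat template.
    messages = [{"role": "user", "content": "Коя е столицата на България?"}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))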

BgGPT Success Story

In only two weeks, BgGPT-7B-Instruct-v0.1 has already been adopted by various companies, who remarked that with only a few hours of work and low computational and financial cost for fine-tuning, it can reach the performance of GPT-4 on a particular task in Bulgarian.
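Such low-cost adaptation is typically done with parameter-efficient fine-tuning. The sketch below outlines one common recipe using LoRA adapters via the peft and transformers libraries; the repository id, dataset file, and hyperparameters are illustrative assumptions, not the setup used by the companies mentioned above.

    # Illustrative LoRA fine-tuning sketch (not the adopters' actual recipe).
    # Assumes a small dataset train.jsonl with a "text" field and an Ampere-or-newer GPU (bf16).
    import torch
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    model_id = "INSAIT-Institute/BgGPT-7B-Instruct-v0.2"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    # Attach small LoRA adapters instead of updating all 7B parameters.
    lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)

    dataset = load_dataset("json", data_files="train.jsonl")["train"]
    dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
                          remove_columns=dataset.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="bggpt-lora", per_device_train_batch_size=2,
                               gradient_accumulation_steps=8, num_train_epochs=1,
                               learning_rate=2e-4, logging_steps=10, bf16=True),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("bggpt-lora")  # saves only the small adapter weights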

Evaluation & Benchmarks

As with many other language models, we evaluate on a set of standard benchmarks translated to Bulgarian as well as on English benchmarks: Winogrande [1], HellaSwag [2], ARC [3], MMLU [4], MathQA [5], GSM8K [6], TriviaQA [7], and bgGLUE [8].

These benchmarks test the logical reasoning, math, knowledge, language understanding and other skills of the model.

Evaluation Results

The following graphs show the performance of BgGPT-7B-Instruct-v0.2. It outperforms same-sized models on the Bulgarian benchmarks, including improving upon the previous version of BgGPT-7B (BgGPT-7B-Instruct-v0.1). It also outperforms the much larger Mixtral-8x7B-Instruct-v0.1 on the Bulgarian benchmarks. It did not lose its English skills either, and on some benchmarks it is comparable to or better than models such as Google's Gemma-7B, Mistral-7B, Llama-7B, and others.

Outlook

Note that while the model is quite competitive with free open-source models, especially for its size, it is still not at the level of paid commercial offerings. Yet, even at the current level, it can be useful for many applications.

References

  1. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99–106, 2021.
  2. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? https://arxiv.org/abs/1905.07830
  3. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. https://arxiv.org/abs/1803.05457
  4. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. https://arxiv.org/abs/2009.03300
  5. Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms https://arxiv.org/abs/1905.13319
  6. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. https://arxiv.org/abs/2110.14168
  7. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. https://arxiv.org/abs/1705.03551
  8. Momchil Hardalov, Pepa Atanasova, Todor Mihaylov, Galia Angelova, Kiril Simov, Petya Osenova, Veselin Stoyanov, Ivan Koychev, Preslav Nakov, and Dragomir Radev. bgGLUE: A Bulgarian general language understanding evaluation benchmark. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8733–8759 https://bgglue.github.io/

Launching the first free and open Bulgarian LLM

February 18, 2024

At INSAIT we are thrilled to launch BgGPT-7B-Instruct-v0.1, the first free and open Bulgarian Large Language Model in the BgGPT series (more models coming soon). BgGPT-7B-Instruct-v0.1 is now available for download at HuggingFace with the permissive and commercial-friendly Apache 2.0 licence. The model, which builds on Mistral-7B, already outperforms similarly sized models such as LLaMA2-7B and Mistral-7B on all Bulgarian-language tasks. On many of these tasks, it also outperforms much larger models such as Mixtral-8x7B-Instruct-v0.1 (about 6.5 times larger), which has been shown to have capabilities similar to GPT-3.5.

Evaluation & Benchmarks

To systematically evaluate the Bulgarian performance of LLMs, including our model and any existing or future models, we translated a set of benchmarks into Bulgarian, including: Winogrande [1], HellaSwag [2], ARC [3], MMLU [4], MathQA [5], GSM8K [6], TriviaQA [7], and the already existing Bulgarian benchmark bgGLUE [8].

These benchmarks (except the last one, which already exists in Bulgarian) were built via both machine translation and our amazing team of translators. For evaluation, we forked a version of EleutherAI's evaluation harness. All benchmark data is made publicly available in our HuggingFace repository to help others evaluate their own models.
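As a concrete illustration, the snippet below drives such an evaluation through the Python API of recent upstream versions of EleutherAI's lm-evaluation-harness; our fork and its Bulgarian task names may differ, so the task list and repository id here are assumptions.

    # Illustrative evaluation via EleutherAI's lm-evaluation-harness (recent upstream API;
    # the fork's Bulgarian task names are not listed in this post, so standard tasks are used).
    from lm_eval import evaluator

    results = evaluator.simple_evaluate(
        model="hf",
        model_args="pretrained=INSAIT-Institute/BgGPT-7B-Instruct-v0.1,dtype=float16",  # assumed repo id
        tasks=["winogrande", "hellaswag", "arc_challenge", "gsm8k", "triviaqa"],
        batch_size=8,
    )
    for task, metrics in results["results"].items():
        print(task, metrics)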

Note on evaluation: great care should be taken not to contaminate training or fine-tuning datasets by including the above benchmarks (an issue akin to overfitting, and a threat recently explored in detail in [9]), as this can lead to misreported results.
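One simple way to screen a fine-tuning corpus for such contamination is an n-gram overlap check against the benchmark items. The sketch below is a naive illustration of that idea only; it is not the detection (or evasion) methodology analysed in [9].

    # Naive n-gram overlap screen for benchmark contamination (illustrative only;
    # [9] shows that determined actors can evade far stronger detectors).
    def ngrams(text, n=8):
        tokens = text.lower().split()
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def flag_contaminated(train_docs, benchmark_items, n=8):
        """Return indices of training documents sharing any n-gram with a benchmark item."""
        bench_ngrams = set()
        for item in benchmark_items:
            bench_ngrams |= ngrams(item, n)
        return [i for i, doc in enumerate(train_docs) if ngrams(doc, n) & bench_ngrams]

    # Toy usage: the training document repeats a benchmark item verbatim, so it is flagged.
    train = ["Котката седи на постелката и гледа през прозореца към улицата отвън тихо."]
    bench = ["Котката седи на постелката и гледа през прозореца към улицата отвън тихо."]
    print(flag_contaminated(train, bench))  # -> [0]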

Evaluation Results

The following graphs show the performance of BgGPT-7B-Instruct-v0.1. It clearly outperforms same-sized models on the Bulgarian benchmarks as well as on most other benchmarks. It also outperforms the much larger Mixtral-8x7B-Instruct-v0.1 on the Bulgarian benchmarks. That said, the model does not excel at deep reasoning and knowledge skills. This is somewhat expected, as smaller models can store less knowledge, which is reflected in the knowledge-testing benchmarks. We expect this to improve in the BgGPT models that will follow. Interestingly, even though the model is biased towards Bulgarian, it does retain some English skills, making it a versatile tool for cross-lingual tasks, including translation from English to Bulgarian. Here we include a gist of the benchmark results.

Outlook

While larger models will in general offer superior performance, we see that specialised, smaller 7B models can produce results similar to those of much larger non-specialised models, while enjoying much lower inference costs. Further, for many business applications, smaller models may suffice. Over the next weeks, we will release improved models, so stay tuned!

Institutional use of BgGPT

If you are an institution or a business organisation interested in using BgGPT internally and have questions on how to do so, please contact us at: bggpt@insait.ai

References

  1. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99–106, 2021.
  2. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? https://arxiv.org/abs/1905.07830
  3. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. https://arxiv.org/abs/1803.05457
  4. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. https://arxiv.org/abs/2009.03300
  5. Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms https://arxiv.org/abs/1905.13319
  6. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. https://arxiv.org/abs/2110.14168
  7. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. https://arxiv.org/abs/1705.03551
  8. Momchil Hardalov, Pepa Atanasova, Todor Mihaylov, Galia Angelova, Kiril Simov, Petya Osenova, Veselin Stoyanov, Ivan Koychev, Preslav Nakov, and Dragomir Radev. bgGLUE: A Bulgarian general language understanding evaluation benchmark. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8733–8759 https://bgglue.github.io/
  9. Jasper Dekoninck et al. Evading Data Contamination Detection for Language Models is (too) Easy. https://arxiv.org/abs/2402.02823