Fudan University’s ChatGPT-like Model MOSS Officially Open-sourced

On April 21, Fudan University’s Natural Language Processing Laboratory announced that the new version of its MOSS model had officially launched, making it the first open-source conversational language model in China with plugin enhancements.

The MOSS model has now been open-sourced, and its code, data, and model parameters are available for researchers to download on platforms such as GitHub and Hugging Face.

According to the introduction, MOSS is an open-source conversational language model that supports both Chinese and English and can call multiple plugins.

The moss-moon series of models has 16 billion parameters and can run at FP16 precision on a single A100/A800 GPU or two RTX 3090 cards, or at INT4/INT8 precision on a single RTX 3090.
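For readers who want to try the released weights, the sketch below shows one way to load a moss-moon checkpoint at FP16 precision with the Hugging Face transformers library. The repository id "fnlp/moss-moon-003-sft" is an assumption based on the naming used on the project’s Hugging Face page; check the official GitHub and Hugging Face listings for the exact identifiers of the published checkpoints.

```python
# Minimal sketch: load a moss-moon checkpoint at FP16 on a single A100/A800.
# The model id below is assumed; verify it against the official release pages.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "fnlp/moss-moon-003-sft"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # FP16: fits on one A100/A800 (two 3090s would need sharding)
    trust_remote_code=True,      # the MOSS release ships custom modeling code with the weights
).cuda()
model.eval()
```

Running at INT4/INT8 precision on a single RTX 3090 would additionally require a quantized checkpoint or a quantization backend, which is beyond this sketch.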


The MOSS base language model is pre-trained on approximately 700 billion tokens of Chinese and English text and code. After fine-tuning on dialogue instructions, plugin-augmented learning, and human preference training, it can engage in multi-turn conversations and use a variety of plugins.
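Continuing from the loading sketch above, the snippet below illustrates what a single dialogue turn with the fine-tuned chat model might look like. The "<|Human|>"/"<|MOSS|>" role tags and the omission of the system meta-instruction are assumptions based on commonly circulated MOSS usage examples, not a definitive prompt format; consult the official documentation before relying on them.

```python
# Minimal sketch of one dialogue turn, reusing `tokenizer` and `model` from the
# loading example above. The official examples prepend a system "meta instruction",
# which is omitted here for brevity; the role tags below are assumed.
prompt = "<|Human|>: Hello, can you introduce yourself?<eoh>\n<|MOSS|>:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.7,
        top_p=0.8,
        max_new_tokens=256,
    )
# Decode only the newly generated tokens (the model's reply).
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```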

MOSS is a project from Professor Qiu Xipeng’s team at Fudan University’s Natural Language Processing Laboratory, and it shares its name with the AI in the movie “The Wandering Earth”. The model has been released on a public platform (https://moss.fastnlp.top/), where the team invites the public to participate in beta testing.