The newest flagship non-reasoning model series.
inclusionAI
Recent Activity
Papers
Zooming without Zooming: Region-to-Image Distillation for Fine-Grained Multimodal Perception
UI-Venus-1.5 Technical Report
Ming is the multi-modal series of any-to-any models developed by the Ant Ling team.
- inclusionAI/Ming-flash-omni-2.0
  Any-to-Any • Updated • 9.27k • 252
- inclusionAI/Ming-omni-tts-16.8B-A3B
  Text-to-Speech • 18B • Updated • 112 • 27
- inclusionAI/Ming-omni-tts-0.5B
  Text-to-Speech • 2B • Updated • 3.48k • 31
- inclusionAI/Ming-omni-tts-tokenizer-12Hz
  Audio-to-Audio • 0.8B • Updated • 66 • 5
A collection of TwinFlow-accelerated diffusion models
GroveMoE is an open-source family of large language models developed by the AGI Center, Ant Research Institute.
- inclusionAI/Ling-lite-1.5-2507
  Text Generation • Updated • 8 • 76
- inclusionAI/Ling-lite-1.5-2506
  Text Generation • 17B • Updated • 8 • 52
- inclusionAI/Ling-lite-1.5
  Text Generation • 17B • Updated • 10.5k • 57
- inclusionAI/Ling-lite-base-1.5
  Text Generation • 17B • Updated • 3 • 33
AReaL-boba-2
- LLaDA2.0: Scaling Up Diffusion Language Models to 100B
  Paper • 2512.15745 • Published • 87
- inclusionAI/LLaDA2.0-flash
  Text Generation • Updated • 214 • 67
- inclusionAI/LLaDA2.0-mini
  Text Generation • Updated • 33.6k • 58
- inclusionAI/LLaDA2.0-flash-preview
  Text Generation • 103B • Updated • 11 • 68
Ring is a reasoning MoE LLM open-sourced by InclusionAI, derived from Ling.
The Agent Runtime for Self-Improvement
- UI-Venus-1.5 Technical Report
  Paper • 2602.09082 • Published • 154
- inclusionAI/UI-Venus-1.5-30B-A3B
  Image-Text-to-Text • Updated • 1.46k • 21
- inclusionAI/UI-Venus-1.5-8B
  Image-Text-to-Text • 9B • Updated • 1.99k • 22
- inclusionAI/UI-Venus-1.5-2B
  Image-Text-to-Text • 2B • Updated • 1.59k • 31
- Ming-Omni: A Unified Multimodal Model for Perception and Generation
  Paper • 2506.09344 • Published • 31
- inclusionAI/Ming-Lite-Omni
  Any-to-Any • Updated • 413 • 198
- inclusionAI/Ming-Lite-Omni-1.5
  Any-to-Any • Updated • 266 • 85
- inclusionAI/Ming-UniAudio-16B-A3B
  Any-to-Any • 18B • Updated • 16 • 74