---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---

# JanusCoder-14B

[💻Github Repo](https://github.com/InternLM/JanusCoder) • [🤗Model Collections](https://huggingface.co/collections/internlm/januscoder) • [📜Technical Report](https://www.arxiv.org/abs/2510.23538)

## Introduction

We introduce JanusCoder and JanusCoderV, a suite of open-source foundation models designed to establish a unified visual-programmatic interface for code intelligence.
The suite is built upon open-source language models (such as Qwen3-8B and Qwen3-14B) and multimodal models (such as Qwen2.5-VL and InternVL3.5-8B). The JanusCoder series is trained on JANUSCODE-800K, the largest multimodal code corpus to date, generated by an innovative synthesis toolkit and covering everything from standard charts to complex interactive web UIs and code-driven animations.
This enables the models to handle diverse visual-programmatic tasks uniformly, such as generating code from textual instructions, visual inputs, or a combination of both, rather than relying on specialized models for isolated tasks. JanusCoder excels at flexible content generation (such as data visualizations and interactive front-ends) as well as precise, program-driven editing of visual effects and the construction of complex animations.

## Model Downloads

| Model Name | Description | Download |
| --- | --- | --- |
| JanusCoder-8B | 8B text model based on Qwen3-8B. | 🤗 [Model](https://huggingface.co/internlm/JanusCoder-8B) |
| 👉 **JanusCoder-14B** | 14B text model based on Qwen3-14B. | 🤗 [Model](https://huggingface.co/internlm/JanusCoder-14B) |
| JanusCoderV-7B | 7B multimodal model based on Qwen2.5-VL-7B. | 🤗 [Model](https://huggingface.co/internlm/JanusCoderV-7B) |
| JanusCoderV-8B | 8B multimodal model based on InternVL3.5-8B. | 🤗 [Model](https://huggingface.co/internlm/JanusCoderV-8B) |
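
If you want to fetch the weights ahead of time (e.g. for offline environments), the standard Hugging Face Hub API can be used. Below is a minimal sketch using the generic `huggingface_hub` download recipe; it is not a JanusCoder-specific requirement:

```python
# Minimal sketch: pre-download the checkpoint with huggingface_hub.
# The returned local path can be passed to from_pretrained() in place of the repo id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="internlm/JanusCoder-14B")
print(local_dir)  # local cache directory containing config, tokenizer, and weights
```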

## Performance

We evaluate JanusCoder-14B on a range of benchmarks spanning code intelligence tasks across multiple programming languages:

| Benchmark | JanusCoder-14B | Qwen3-14B | Qwen2.5-Coder-32B-Instruct | LLaMA3-8B-Instruct | GPT-4o |
| --- | --- | --- | --- | --- | --- |
| PandasPlotBench (Task) | 86 | 78 | 82 | 69 | 85 |
| ArtifactsBench | 41.1 | 36.5 | 35.5 | 36.5 | 37.9 |
| DTVBench (Manim) | 8.41 | 6.63 | 9.61 | 4.92 | 10.60 |
| DTVBench (Wolfram) | 5.97 | 5.08 | 4.98 | 3.15 | 5.97 |

## Quick Start

**Transformers**

The following demo code illustrates how to generate text with JanusCoder-14B.

> Please use `transformers >= 4.55.0` to ensure the model works correctly.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "internlm/JanusCoder-14B"

# Load the tokenizer and model; device_map="auto" places the weights on the
# available GPU(s), and torch_dtype="auto" keeps the checkpoint's native precision.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto",
).eval()

messages = [
    {"role": "user", "content": "Create a line plot that illustrates function y=x."}
]

# Render the chat template and move the input tensors to the model's device.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    generate_ids = model.generate(**inputs, max_new_tokens=200)

# Decode only the newly generated tokens, skipping the echoed prompt.
decoded_output = tokenizer.batch_decode(
    generate_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

print(decoded_output[0])
```
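
The model typically returns the plotting code inside a fenced markdown block. The sketch below (our illustration, not part of the official repo, and assuming the response follows that format) extracts the first fenced Python block from `decoded_output[0]` above and writes it to a file so the figure can be rendered:

```python
# Post-processing sketch (illustrative): pull the first fenced Python block
# out of the model's markdown response and save it as a runnable script.
import re

def extract_python_block(response: str) -> str | None:
    """Return the body of the first fenced Python code block, or None."""
    match = re.search(r"```python\n(.*?)```", response, re.DOTALL)
    return match.group(1) if match else None

# `decoded_output[0]` comes from the Transformers demo above.
code = extract_python_block(decoded_output[0])
if code is not None:
    with open("plot.py", "w") as f:
        f.write(code)  # run `python plot.py` to render the requested figure
```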

## Citation

🫶 If you are interested in our work or find our repository, checkpoints, benchmarks, or data helpful, please consider citing our papers:

```bibtex
@article{sun2025januscoder,
  title={JanusCoder: Towards a Foundational Visual-Programmatic Interface for Code Intelligence},
  author={Sun, Qiushi and Gong, Jingyang and Liu, Yang and Chen, Qiaosheng and Li, Lei and Chen, Kai and Guo, Qipeng and Kao, Ben and Yuan, Fei},
  journal={arXiv preprint arXiv:2510.23538},
  year={2025}
}

@article{sun2024survey,
  title={A survey of neural code intelligence: Paradigms, advances and beyond},
  author={Sun, Qiushi and Chen, Zhirui and Xu, Fangzhi and Cheng, Kanzhi and Ma, Chang and Yin, Zhangyue and Wang, Jianing and Han, Chengcheng and Zhu, Renyu and Yuan, Shuai and others},
  journal={arXiv preprint arXiv:2403.14734},
  year={2024}
}

@article{chen2025interactscience,
  title={InteractScience: Programmatic and Visually-Grounded Evaluation of Interactive Scientific Demonstration Code Generation},
  author={Chen, Qiaosheng and Liu, Yang and Li, Lei and Chen, Kai and Guo, Qipeng and Cheng, Gong and Yuan, Fei},
  journal={arXiv preprint arXiv:2510.09724},
  year={2025}
}

@article{sun2025codeevo,
  title={CodeEvo: Interaction-Driven Synthesis of Code-centric Data through Hybrid and Iterative Feedback},
  author={Sun, Qiushi and Gong, Jinyang and Li, Lei and Guo, Qipeng and Yuan, Fei},
  journal={arXiv preprint arXiv:2507.22080},
  year={2025}
}
```