---
library_name: transformers
pipeline_tag: text-generation
license: other
tags:
- code
- industrial-ai
- code-generation
---
# InCoder-32B: Industrial Code Foundation Model
[InCoder-32B](https://huggingface.co/papers/2603.16790) (Industrial-Coder-32B) is the first 32B-parameter code foundation model purpose-built for industrial code intelligence. While general code LLMs excel at standard programming tasks, InCoder-32B is designed to address the hardware semantics, specialized language constructs, and strict resource constraints that characterize industrial scenarios.
## Model Description
InCoder-32B unifies code intelligence across five industrial engineering domains:
- **Chip Design** (Verilog / RTL)
- **GPU Kernel Optimization** (CUDA / Triton)
- **Embedded Systems** (ARM Cortex-M, STM32)
- **Compiler Optimization** (x86-64 assembly, LLVM)
- **3D Modeling** (CAD/CAM via CadQuery / OpenCascade)
The model supports a native long-context window of up to **128K tokens**.
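As a quick sanity check (illustrative only; the exact configuration field depends on the underlying architecture of the release), the configured context length can be read from the model config:

```python
# Illustrative check of the configured context window; the field name
# (max_position_embeddings) is an assumption and may differ for this release.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "Multilingual-Multimodal-NLP/IndustrialCoder", trust_remote_code=True
)
print(getattr(config, "max_position_embeddings", "not set"))  # ~131072 would correspond to a 128K window
```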
### Links
- **Paper**: [InCoder-32B: Code Foundation Model for Industrial Scenarios](https://huggingface.co/papers/2603.16790)
- **GitHub**: [CSJianYang/Industrial-Coder](https://github.com/CSJianYang/Industrial-Coder)
- **Project Page**: [IndustrialCoder](https://huggingface.co/Multilingual-Multimodal-NLP/IndustrialCoder)
## Performance Highlights
InCoder-32B leads open-weight baselines across industrial domains and surpasses proprietary models such as Claude-Sonnet-4.6 on benchmarks including CAD-Coder IoU and KernelBench.
| Domain | Benchmark | InCoder-32B | Claude-Sonnet-4.6 |
|---|---|:---:|:---:|
| **Chip Design** | RealBench Func@1 (Mod) | **62.7** | 37.2 |
| **GPU Optim.** | KernelBench L1/L2/L3 | **22.2/36.0/14.0** | 11.1/28.0/2.0 |
| **3D Modeling** | CAD-Coder Compile (%) | **82.0** | 77.0 |
| **Code Optim.** | SuperCoder Acc | **91.0** | 88.0 |
## Quickstart
### Installation
```bash
pip install -U "transformers>=4.57.1" accelerate safetensors
```
### Usage with Transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Multilingual-Multimodal-NLP/IndustrialCoder"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
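# Format the request with the model's chat template before tokenizing.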
messages = [{"role": "user", "content": "Optimize this CUDA kernel for better memory coalescing."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
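# do_sample=True is required for temperature/top_p/top_k to take effect; otherwise decoding is greedy.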
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=2048, do_sample=True, temperature=0.6, top_p=0.85, top_k=20)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
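### Domain Example (Verilog)
The same chat interface covers the other domains. The sketch below is illustrative (the Verilog prompt and the `TextStreamer`-based streaming are assumptions, not part of the official quickstart) and shows a chip-design request with output streamed to stdout:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name = "Multilingual-Multimodal-NLP/IndustrialCoder"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

# Hypothetical chip-design prompt; any of the supported domains works the same way.
prompt = (
    "Write a synthesizable Verilog module for an 8-bit synchronous up-counter "
    "with an active-low reset and an enable input."
)
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Stream tokens to stdout as they are generated instead of waiting for the full completion.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
with torch.no_grad():
    model.generate(
        **inputs, max_new_tokens=1024, do_sample=True,
        temperature=0.6, top_p=0.85, top_k=20, streamer=streamer,
    )
```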
## Training Pipeline
The model is trained via a three-stage **Code-Flow** pipeline:
1. **Pre-training & Annealing**: General pre-training followed by curated industrial code annealing.
2. **Mid-training**: Progressive context extension from 8K to 128K tokens using synthetic industrial reasoning data.
3. **Post-training**: Execution-grounded SFT with 2.5M samples and feedback-driven repair trajectories.
## Citation
```bibtex
@article{yang2025incoder32b,
title={InCoder-32B: Code Foundation Model for Industrial Scenarios},
author={Yang, Jian and Zhang, Wei and Wu, Jiajun and Cheng, Junhang and Guo, Shawn and Wang, Haowen and Gu, Weicheng and Du, Yaxin and Li, Joseph and Xu, Fanglin and others},
journal={arXiv preprint arXiv:2603.16790},
year={2025}
}
```
## Disclaimer
The model may generate incorrect or unsafe code. Always review and test outputs in a sandboxed environment before production use. Industrial code (RTL, embedded firmware, GPU kernels) requires expert human review before deployment.