LM-SimBench
Dataset Description
LM-SimBench is a large-scale training-performance profiling dataset for large language models. The dataset is collected from training runs based on the MindSpeed-LLM framework and the Ascend NPU development stack, covering multiple model families, context lengths, and distributed parallel configurations.
Each model is sampled under feasible combinations of data parallelism (DP), tensor parallelism (TP), pipeline parallelism (PP), context parallelism (CP), expert parallelism (EP, for MoE models), and recomputation settings. The released files contain structured CSV records for iteration time, rank-level peak memory, operator latency, communication behavior, and matrix-operator hardware characteristics.
The original raw dataset is available at jiujiudahaozi/LM-SimBench_row. This repository publishes the organized CSV dataset for downstream performance modeling and analysis.
This dataset is ready for research use. Commercial use should follow the license and terms below.
Dataset Owner(s)
jiujiudahaozi
Dataset Creation Date
Created on: May 1, 2026
License/Terms of Use
This dataset is governed by the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).
Users may share and adapt the dataset, including for commercial purposes, provided that appropriate attribution is given and derived datasets are distributed under the same license. Please cite this dataset and keep a reference to the original raw-data repository when redistributing derivative versions.
Intended Usage
This dataset is intended for:
- Training-time prediction for LLM distributed training.
- Parallel-configuration performance modeling and selection.
- Communication bottleneck analysis across DP, TP, PP, CP, EP, and recomputation settings.
- Rank-level memory modeling and peak memory comparison.
- Operator latency modeling for communication, attention, normalization, elementwise, and matrix operators.
- Hardware-aware analysis of matrix operator shapes, latency, and utilization.
Dataset Composition
Model Sources
The dataset contains profiling records for multiple open LLM model families, including Qwen, Qwen2, Qwen2.5, Qwen3, InternLM, Gemma2, GLM4, Mistral, and Phi-3.5 variants.
Parallel Configuration Sources
Each model directory contains one or more feasible training parallel configurations. A configuration name follows this pattern:
dp<dp_size>-tp<tp_size>-pp<pp_size>-cp<cp_size>-<RC|noRC>
For example, dp4-tp2-pp1-cp1-noRC indicates:
- dp4: data parallel size is 4.
- tp2: tensor parallel size is 2.
- pp1: pipeline parallel size is 1.
- cp1: context parallel size is 1.
- noRC: recomputation is disabled.
RC means recomputation is enabled.
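The naming pattern above can be parsed mechanically. Below is a minimal sketch; the `parse_config` function and the regular expression are illustrative and assume the exact `dp<...>-tp<...>-pp<...>-cp<...>-<RC|noRC>` pattern shown (the card does not specify how an EP segment would appear for MoE configurations, so none is handled here):

```python
import re

# Illustrative regex matching the documented configuration-name pattern.
CONFIG_RE = re.compile(r"dp(\d+)-tp(\d+)-pp(\d+)-cp(\d+)-(RC|noRC)")

def parse_config(name: str) -> dict:
    """Parse a configuration name such as 'dp4-tp2-pp1-cp1-noRC'."""
    m = CONFIG_RE.fullmatch(name)
    if m is None:
        raise ValueError(f"unrecognized configuration name: {name!r}")
    dp, tp, pp, cp, rc = m.groups()
    return {
        "dp": int(dp),
        "tp": int(tp),
        "pp": int(pp),
        "cp": int(cp),
        "recompute": rc == "RC",
    }

print(parse_config("dp4-tp2-pp1-cp1-noRC"))
```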
Dataset Fields
The released dataset is organized as CSV files. The main file types are:
- step_time_summary.csv: Configuration-level training iteration time.
- feasible_configs_sorted.csv: Feasible parallel configurations sorted by iteration_time_ms in ascending order.
- peak_memory.csv: Rank-level peak allocated memory.
- operator_latency_by_category.csv: Operator latency grouped by operator category and type.
- comm_time.csv: Rank-level communication timing and bandwidth statistics.
- matrix_operator_latency.csv: Matrix operator latency, shape, dtype, and hardware-counter statistics.
- data_manifest.csv: Repository-level index of model directories, configuration counts, file coverage, and disk size.
step_time_summary.csv
- optimization_config: Parallel configuration name.
- iteration_time_ms: Configuration-level training iteration time in milliseconds.
feasible_configs_sorted.csv
- optimization_config: Feasible parallel configuration name.
- iteration_time_ms: Configuration-level training iteration time in milliseconds, sorted ascending within each model directory.
peak_memory.csv
- optimization_config: Parallel configuration name.
- rank_id: Distributed rank ID.
- peak_memory_mb: Peak allocated memory in MB.
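Since peak memory is reported per rank, a common first step is to reduce it to the worst rank per configuration. A minimal sketch, using a small in-memory frame with the documented columns in place of a real peak_memory.csv (all values are made up):

```python
import pandas as pd

# Illustrative stand-in for peak_memory.csv.
peak = pd.DataFrame({
    "optimization_config": ["dp4-tp2-pp1-cp1-noRC"] * 2 + ["dp2-tp4-pp1-cp1-RC"] * 2,
    "rank_id": [0, 1, 0, 1],
    "peak_memory_mb": [31000.0, 30500.0, 24000.0, 24800.0],
})

# Worst-rank peak memory per configuration.
worst = peak.groupby("optimization_config")["peak_memory_mb"].max()
print(worst)
```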
operator_latency_by_category.csv
- optimization_config: Parallel configuration name.
- rank_id: Distributed rank ID.
- op_category: Operator category, such as matmul, attention, communication, normalization, activation, or elementwise.
- op_type: Original profiler operator type.
- core_type: Accelerator core type.
- count: Number of operator invocations.
- total_time_us: Total operator time in microseconds.
- min_time_us: Minimum operator time in microseconds.
- avg_time_us: Average operator time in microseconds.
- max_time_us: Maximum operator time in microseconds.
- ratio_percent: Operator time ratio in percent.
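A typical use of this file is a per-category time breakdown for one rank. A minimal sketch with an illustrative in-memory frame standing in for operator_latency_by_category.csv (values are made up):

```python
import pandas as pd

# Illustrative stand-in for operator_latency_by_category.csv.
ops = pd.DataFrame({
    "op_category": ["matmul", "matmul", "communication", "elementwise"],
    "count": [1200, 800, 300, 5000],
    "total_time_us": [900000.0, 600000.0, 300000.0, 200000.0],
})

# Total time per category and its share of overall operator time.
by_cat = ops.groupby("op_category")["total_time_us"].sum()
share = 100.0 * by_cat / by_cat.sum()
print(share.sort_values(ascending=False))
```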
comm_time.csv
- comm_scope: Communication scope, such as collective or p2p.
- comm_type: Communication type, such as broadcast, all-reduce, all-gather, reduce-scatter, send, or receive.
- comm_group: Communication group identifier.
- op_name: Communication operator name.
- start_timestamp_us: Start timestamp in microseconds.
- elapsed_time_ms: Elapsed communication time in milliseconds.
- transit_time_ms: Data transit time in milliseconds.
- wait_time_ms: Wait time in milliseconds.
- synchronization_time_ms: Synchronization time in milliseconds.
- idle_time_ms: Idle time in milliseconds.
- link_type: Communication link type, such as HCCS, RDMA, PCIE, SDMA, or SIO.
- link_transit_size_mb: Transferred data size in MB.
- link_transit_time_ms: Link transit time in milliseconds.
- link_bandwidth_gbps: Link bandwidth in GB/s.
- packet_count: Number of packets recorded for the communication event.
- matrix_pair_count: Number of rank pairs in the communication matrix.
- matrix_transport_types: Transport types observed in the communication matrix.
- matrix_transit_size_mb: Matrix-level transferred data size in MB.
- matrix_transit_time_ms: Matrix-level transit time in milliseconds.
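Effective link bandwidth can be recomputed from the transit columns as a sanity check against link_bandwidth_gbps. Assuming decimal units (MB and GB), MB/ms is numerically equal to GB/s, so the division below needs no further conversion. A sketch with illustrative values:

```python
import pandas as pd

# Illustrative stand-in for two rows of comm_time.csv.
comm = pd.DataFrame({
    "comm_type": ["all-reduce", "all-gather"],
    "link_transit_size_mb": [512.0, 256.0],
    "link_transit_time_ms": [4.0, 1.0],
})

# MB / ms == GB / s under decimal units.
comm["effective_gbps"] = comm["link_transit_size_mb"] / comm["link_transit_time_ms"]
print(comm[["comm_type", "effective_gbps"]])
```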
matrix_operator_latency.csv
- task_id: Profiler task ID.
- stream_id: Profiler stream ID.
- name: Operator instance name.
- type: Matrix operator type.
- accelerator_core: Accelerator core used by the operator.
- duration_us: Operator duration in microseconds.
- wait_time_us: Operator wait time in microseconds.
- input_shapes: Input tensor shapes.
- input_data_types: Input tensor data types.
- output_shapes: Output tensor shapes.
- output_data_types: Output tensor data types.
- a_shape, b_shape, c_shape: Parsed matrix input and output shapes.
- batch, m, n, k: Parsed matrix dimensions.
- shape_signature: Compact matrix shape signature.
- aicore_time_us: AI Core execution time in microseconds.
- aic_total_cycles: Total AI Core cycles.
- aic_mac_time_us: Matrix multiply-accumulate time in microseconds.
- aic_mac_ratio: MAC time ratio.
- aic_scalar_time_us: Scalar time in microseconds.
- aic_mte1_time_us: MTE1 time in microseconds.
- aic_mte2_time_us: MTE2 time in microseconds.
- aic_fixpipe_time_us: Fixpipe time in microseconds.
- aic_icache_miss_rate: Instruction cache miss rate.
- cube_utilization_percent: Cube utilization percentage.
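The parsed batch/m/n/k dimensions together with duration_us allow a rough throughput estimate per operator. A minimal sketch assuming the standard 2·batch·m·n·k FLOP count for a (batched) matrix multiply; the function name and numbers are illustrative:

```python
def matmul_tflops(batch: int, m: int, n: int, k: int, duration_us: float) -> float:
    """Estimate achieved TFLOPS for a batched matmul of the given shape."""
    flops = 2.0 * batch * m * n * k   # standard matmul FLOP count
    seconds = duration_us * 1e-6
    return flops / seconds / 1e12

# Example: a 4096x4096x4096 matmul taking 1 ms.
print(matmul_tflops(batch=1, m=4096, n=4096, k=4096, duration_us=1000.0))
```

Comparing this estimate with cube_utilization_percent gives a quick consistency check between shape-derived and counter-derived utilization.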
Dataset Format
Modality: Tabular performance profiling data
Format: CSV
Structure:
<model_name>/
step_time_summary.csv
feasible_configs_sorted.csv
peak_memory.csv
operator_latency_by_category.csv
<parallel_config>/
rank_<id>/
comm_time.csv
matrix_operator_latency.csv
data_manifest.csv
The describe/ directory is not part of the intended public dataset upload.
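The layout above can be enumerated with a single glob pattern. A minimal sketch that builds a throwaway directory tree matching the structure (the model and configuration names are illustrative) and then collects all rank-level communication files:

```python
import tempfile
from pathlib import Path

# Build a tiny stand-in for a downloaded copy of the dataset.
root = Path(tempfile.mkdtemp())
for rank in (0, 1):
    d = root / "qwen3_8b_4k" / "dp4-tp2-pp1-cp1-noRC" / f"rank_{rank}"
    d.mkdir(parents=True)
    (d / "comm_time.csv").write_text("comm_scope,comm_type\n")

# <model>/<parallel_config>/rank_<id>/comm_time.csv
comm_files = sorted(root.glob("*/*/rank_*/comm_time.csv"))
for p in comm_files:
    model, config, rank_dir = p.relative_to(root).parts[:3]
    print(model, config, rank_dir)
```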
Dataset Quantification
| Model directory | Parallel configurations | Size |
|---|---|---|
| gemma2_27b_8k | 3 | 695.83 MB |
| gemma2_9b_8k | 5 | 498.04 MB |
| glm4_9b_8k | 6 | 423.87 MB |
| internlm25_1point8b_32k | 16 | 672.68 MB |
| internlm25_20b_32k_16p | 9 | 1.64 GB |
| internlm25_7b_32k | 5 | 341.59 MB |
| internlm3_8b_8k | 7 | 319.08 MB |
| mistral_7b_32k | 11 | 349.47 MB |
| phi35_mini_4k | 38 | 2.67 GB |
| phi35_moe_4k | 10 | 899.59 MB |
| qwen15_1.8b_8k | 40 | 2.16 GB |
| qwen15_4b_8k | 40 | 3.57 GB |
| qwen25_0point5_32k | 6 | 118.38 MB |
| qwen25_14b_8k | 1 | 128.93 MB |
| qwen25_32b_32k | 4 | 1.19 GB |
| qwen25_3b_32k | 6 | 228.94 MB |
| qwen25_72b_8k | 3 | 179.26 MB |
| qwen25_7b_8k | 6 | 135.90 MB |
| qwen2_1.5b_4k | 26 | 491.72 MB |
| qwen3_0.6b_4k | 38 | 1.16 GB |
| qwen3_1.7b_4k | 38 | 1.17 GB |
| qwen3_30b_a3b_4k | 2 | 3.38 GB |
| qwen3_8b_4k | 28 | 2.07 GB |
Total model directories: 23
Total feasible parallel configurations: 348
Total disk size: approximately 25 GB
Loading Examples
Read the manifest:
```python
import pandas as pd

manifest = pd.read_csv("data_manifest.csv")
print(manifest.head())
```
Read iteration-time data:
```python
import pandas as pd

step_time = pd.read_csv("qwen3_8b_4k/step_time_summary.csv")
print(step_time.head())
```
Read rank-level communication data:
```python
import pandas as pd

comm = pd.read_csv("qwen3_8b_4k/dp4-tp2-pp1-cp1-noRC/rank_0/comm_time.csv")
print(comm[["comm_scope", "comm_type", "elapsed_time_ms", "link_type"]].head())
```
Read matrix-operator latency data:
```python
import pandas as pd

ops = pd.read_csv("qwen3_8b_4k/dp4-tp2-pp1-cp1-noRC/rank_0/matrix_operator_latency.csv")
print(ops[["type", "shape_signature", "duration_us", "cube_utilization_percent"]].head())
```
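A typical downstream task combines these files, for example picking the fastest configuration whose worst-rank peak memory fits a device budget. A minimal sketch joining step_time_summary-style and peak_memory-style frames on optimization_config; all values and the 50 GB budget are illustrative:

```python
import pandas as pd

# Illustrative stand-ins for step_time_summary.csv and peak_memory.csv.
step_time = pd.DataFrame({
    "optimization_config": ["dp4-tp2-pp1-cp1-noRC", "dp2-tp4-pp1-cp1-RC"],
    "iteration_time_ms": [850.0, 1100.0],
})
peak = pd.DataFrame({
    "optimization_config": ["dp4-tp2-pp1-cp1-noRC"] * 2 + ["dp2-tp4-pp1-cp1-RC"] * 2,
    "rank_id": [0, 1, 0, 1],
    "peak_memory_mb": [62000.0, 61000.0, 45000.0, 44000.0],
})

# Reduce memory to the worst rank, join, and filter by budget.
worst_mem = peak.groupby("optimization_config", as_index=False)["peak_memory_mb"].max()
merged = step_time.merge(worst_mem, on="optimization_config")
budget_mb = 50000.0
fits = merged[merged["peak_memory_mb"] <= budget_mb]
best = fits.sort_values("iteration_time_ms").iloc[0]
print(best["optimization_config"], best["iteration_time_ms"])
```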
Dataset Characterization
Data Collection Method
Automated profiler collection from distributed LLM training runs.
Labeling Method
Automated extraction from training logs and profiler records.
Limitations
- The dataset reflects the specific software stack, hardware environment, model implementations, and training settings used during collection.
- Profiling records may contain measurement overhead, profiler-specific aggregation behavior, and runtime noise.
- Different models have different numbers of feasible parallel configurations.
- The dataset is intended for performance-analysis research and should not be treated as a universal benchmark for all hardware or LLM training systems.
Reference(s)
Citation
If you use LM-SimBench in your research, please cite this dataset repository:
@dataset{lm_simbench,
title = {LM-SimBench: A MindSpeed-LLM Training Performance Profiling Dataset},
author = {jiujiudahaozi},
year = {2026},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/jiujiudahaozi/LM-SimBench}
}
Ethical Considerations
LM-SimBench contains training-performance profiling records rather than natural-language conversations, personal information, or user-generated text. Users should still ensure that any downstream use complies with applicable dataset licenses, platform terms, and institutional policies.