
QQA4CO Combinatorial Optimization Benchmark Suite

A unified, pre-converted, ready-to-benchmark collection of Combinatorial Optimization (CO) instances for discrete samplers, annealers, and learning-based solvers. Every family referenced in the PQQA paper (Ichikawa & Iwashita, 2024) is reproduced here, together with a few widely used community benchmarks (G-set, DIMACS COLOR, Edwards-Anderson). The dataset is designed to be solver-agnostic: a companion Python loader ships with QQA4CO, but every file is a plain pickle(networkx.Graph) or numpy.savez archive, so it can be used from any framework (PyTorch, JAX, C++, Julia, ...).

Repository id history. The dataset was renamed from Yuma-Ichikawa/discs-co-bench to Yuma-Ichikawa/qqa4co-bench on 2026-04-20 to reflect the scope expansion beyond DISCS. Hugging Face preserves a redirect from the old id, but please update bookmarks, snapshots, and snapshot_download calls.

What's inside

The eight config_names (MaxCut, G-set, MIS, MaxClique, NormCut, Coloring, MIS-RRG, EA3D) collectively cover:

  1. DISCS (NeurIPS 2023) — MaxCut / MIS / MaxClique / NormCut, repackaged from the original mixed pickle layouts into a uniform *.gpickle + manifest.jsonl format. We only repackage; always cite Goshvadi et al., 2023.
  2. MaxCut G-set — the Helmberg & Rendl (2000) superset (G1..G67 + G70, G72, G77, G81, 71 graphs total) via Yinyu Ye's Stanford mirror. Best-known cuts track Benlic & Hao (2013) and Matsuda (2018). This is the de facto MaxCut benchmark for annealing and PI-GNN solvers.
  3. PQQA reproduction set (arXiv:2409.02135v2) — full reproduction of every benchmark from §5.1–§5.5:
    • MIS on SATLIB (500 graphs) and Erdős–Rényi random graphs (ER-[700-800], ER-[9000-11000]) via the DISCS snapshot.
    • MIS on d-regular random graphs (RRGs) with d ∈ {20, 100} × n ∈ {10^4, 10^5, 10^6} — 5 seeds per cell, 30 instances total, ~9.7 GB.
    • Max Clique on the RB synthetic graphs and the SNAP Twitter real-world graph.
    • Max Cut on the DISCS ER/BA random-graph sizes and the Optsicom real-world set.
    • Balanced graph partition on VGG / MNIST-conv / ResNet / AlexNet / Inception-v3 computation graphs (re-uses the normcut/nets/ DISCS graphs with a different objective).
    • Graph Coloring on all 12 COLOR instances cited in Table 6 — Mycielski, square Queen, and the three DIMACS real-world graphs anna, jean, queen8_12.
  4. 3D Edwards–Anderson spin glass — Gaussian and bimodal (±J) couplings on cubic lattices L ∈ {4, 6, 8}, 50 disorder realisations per cell, periodic boundary. A convenience extra for Ising / sampler benchmarks.
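The mis-rrg cells above can also be regenerated locally. A minimal sketch with networkx follows; note this is not the shipped generator (scripts/generate_rrg_instances.py), whose seeding and post-processing may differ:

```python
import networkx as nx

# Minimal sketch: sample a d-regular random graph like the mis-rrg cells.
# NOT the shipped generator (scripts/generate_rrg_instances.py); its
# seeding and options may differ.
def make_rrg(d: int, n: int, seed: int) -> nx.Graph:
    # networkx draws a uniform simple d-regular graph (requires n * d even)
    return nx.random_regular_graph(d, n, seed=seed)

g = make_rrg(d=20, n=1_000, seed=0)   # small stand-in for n = 10^4..10^6
assert all(deg == 20 for _, deg in g.degree())
```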

Coverage matrix vs. arXiv:2409.02135v2

| Paper row | Instances | HF subset | num_instances | Best-known reference |
| --- | --- | --- | --- | --- |
| §5.1 Table 1 — MIS SATLIB | 500 CNFs, ≤1,347 nodes, ≤5,978 edges | mis/satlib/uf | 500 | KaMIS (Lamm et al., 2016; Hespe et al., 2019) |
| §5.1 Table 1 — MIS ER-[700-800] | 128 ER graphs | mis/er/800 | 128 | per-instance KaMIS (carried in manifest) |
| §5.1 Table 1 — MIS ER-[9000-11000] | 16 ER graphs (see caveat ★) | mis/er/10k | 16 | KaMIS aggregate 381.31 (paper Table 5 footnote) |
| §5.1 Table 2 — MIS RRG d=20 | n ∈ {10⁴, 10⁵, 10⁶}, 5 seeds each | mis-rrg/d20_n{10000,100000,1000000} | 15 | Barbier–Krzakała–Zdeborová 2013 RS density ρ = 0.2498 |
| §5.1 Table 2 — MIS RRG d=100 | n ∈ {10⁴, 10⁵, 10⁶}, 5 seeds each | mis-rrg/d100_n{10000,100000,1000000} | 15 | Barbier–Krzakała–Zdeborová 2013 RS density ρ = 0.0669 |
| §5.2 Table 3 — Max Clique RB | Xu et al. (2007) RBtest | maxclique/rb/all | 500 | per-instance DISCS reference |
| §5.2 Table 3 — Max Clique Twitter | SNAP Twitter ego-network | maxclique/twitter/all | 196 | per-instance DISCS reference |
| §5.3 Fig. 3 — Max Cut ER | 7 size buckets, 16–1,100 nodes | maxcut/er/er-0.15-n-* | 700 | Gurobi 1 h (DISCS) |
| §5.3 Fig. 3 — Max Cut BA | 7 size buckets, 16–1,100 nodes | maxcut/ba/ba-4-n-* | 700 | Gurobi 1 h (DISCS) |
| §5.3 Table 4 — Max Cut Optsicom | 10 real-world ±1/0 graphs | maxcut/optsicom/b | 10 | per-instance DISCS reference |
| §5.4 Table 5 — Balanced partition | VGG, MNIST-conv, ResNet, AlexNet, Inception-v3 | normcut/nets/{VGG,MNIST,RESNET,ALEXNET,INCEPTION} (loader swaps the objective) | 5 | no reference optimum; comparative only |
| §5.5 Table 6 — Coloring (Myciel) | myciel5, myciel6 (plus {3,4,7} as extras) | coloring/myciel | 5 | chromatic number (Mycielski 1955) |
| §5.5 Table 6 — Coloring (Queen) | queen{5..13}_{5..13} (7 of 9 used in Table 6) | coloring/queen | 9 | tabulated chromatic number |
| §5.5 Table 6 — Coloring (DIMACS) | anna, jean, queen8_12 | coloring/dimacs | 3 | chromatic number from Trick, 2002 |

★ ER-[9000-11000] caveat. The 16 ER_9000_11000_*.gpickle graphs are the exact instances released with DISCS and used by PQQA. The upstream DISCS conversion does not include per-instance KaMIS labels, so the manifest records best_known: null and qqa bench-run reports ApR = NaN on this subset. The paper quotes the aggregate KaMIS average (381.31, Table 5 footnote) — reproducing the Table 1 ApR numbers requires either running KaMIS yourself on each of the 16 graphs or dividing raw IS sizes by that aggregate. We are tracking upstream to restore per-instance labels.
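Until per-instance labels are restored, the aggregate fallback described above looks like this (the IS sizes below are made-up placeholders, not real results):

```python
# Fallback ApR for mis/er/10k: divide the mean raw IS size by the aggregate
# KaMIS average quoted in the paper (Table 5 footnote). The sizes here are
# placeholders for illustration only.
KAMIS_AGGREGATE_ER_10K = 381.31

def aggregate_apr(is_sizes: list[int]) -> float:
    return (sum(is_sizes) / len(is_sizes)) / KAMIS_AGGREGATE_ER_10K

print(round(aggregate_apr([370, 375, 368, 372]), 4))   # -> 0.9736
```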

Convenience extras shipped here that are not in the paper: gset/standard (G-set MaxCut benchmark), mis/er_density/* (ER MIS density sweeps), normcut/nets/{BABELFISH,NMT,TTS} (additional DISCS compute graphs beyond §5.4), and ea3d/* (3D Ising spin glass).


Quick start

Option A — Via QQA4CO (recommended, three one-liners)

git clone https://github.com/Yuma-Ichikawa/QQA4CO.git
cd QQA4CO
pip install -e ".[discs,dev]"

make bench-all-setup                                   # pull every family from this dataset (~14 GB)
qqa bench-run --suite all --output bench_results/mine.json
qqa bench-plot bench_results/mine.json --output report.png

qqa bench-plot renders a publication-quality 2×2 figure with per-subset bars, a radar chart, feasibility ratios, and per-instance violin+strip plots. Pass multiple JSON files to produce an A/B/C comparison:

qqa bench-plot bench_results/baseline.json bench_results/mine.json \
    --labels "baseline" "my method" \
    --title "ablation vs. baseline" \
    --output ab.png

Scoping to a single family or subset:

qqa bench-list                                         # show every available suite
qqa bench-run --suite gset                      --instances 5
qqa bench-run --suite coloring-dimacs           --instances 3    # anna, jean, queen8_12
qqa bench-run --suite mis-rrg-d20_n10000        --instances 5    # PQQA Table 2, n=10^4
qqa bench-run --suite mis-rrg-d100_n1000000     --instances 1    # PQQA Table 2, n=10^6
qqa bench-run --suite ea3d-gaussian-L4          --instances 3
qqa bench-run --suite balanced-partition-nets-INCEPTION --instances 1

The same three verbs are available from Python:

from qqa import bench

bench.list_suites()                                       # {suite_id: (family, graph_type, subset)}
bench.run("gset", instances=5, output="mine.json")        # writes to ./bench_results/
bench.plot(["bench_results/mine.json"], output="report.png")

Option B — Plain Python (no QQA4CO required)

from huggingface_hub import snapshot_download
import json, pickle, pathlib
import networkx as nx
import numpy as np

local = snapshot_download(
    repo_id="Yuma-Ichikawa/qqa4co-bench",
    repo_type="dataset",
    allow_patterns=["gset/**"],       # ~30 MB; omit to download everything
)
root = pathlib.Path(local)

for line in (root / "gset/standard/manifest.jsonl").open():
    rec = json.loads(line)
    with (root / "gset/standard" / rec["file"]).open("rb") as fh:
        g: nx.Graph = pickle.load(fh)
    print(rec["id"], "n=", g.number_of_nodes(), "best_known=", rec["best_known"])

# 3D Edwards-Anderson: coupling lists shipped as .npz
ea = np.load(root / "ea3d/gaussian/L4/0001.npz")
print("L=", int(ea["L"]), "num_couplings=", len(ea["J"]))

Option C — datasets library

Each of the eight families is a separate config_name, so you can stream just the subset you care about:

from datasets import load_dataset

ds = load_dataset(
    "Yuma-Ichikawa/qqa4co-bench",
    name="mis-rrg",                  # or maxcut, gset, mis, maxclique, ...
    split="train",
    streaming=True,
)
for rec in ds.take(1):
    print(rec.keys())               # -> dict_keys(['path', 'bytes', ...])
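Assuming each streamed record carries the raw file payload under bytes and its repo path under path (as the printed keys suggest), a .gpickle record decodes straight back into a networkx graph:

```python
import pickle
import networkx as nx

# Decode a streamed record back into a graph. The "path"/"bytes" field
# names are assumed from the keys printed above.
def record_to_graph(rec: dict) -> nx.Graph:
    if not rec["path"].endswith(".gpickle"):
        raise ValueError(f"not a pickled graph: {rec['path']}")
    return pickle.loads(rec["bytes"])
```

As with any pickle payload, only decode data from sources you trust.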

Layout

.
├── maxcut/                              (~3.3 GB, ~9,000 instances; DISCS)
│   ├── ba/ba-4-n-{16-20,32-40,64-75,128-150,256-300,512-600,1024-1100}/
│   ├── er/er-0.15-n-{16-20,32-40,64-75,128-150,256-300,512-600,1024-1100}/
│   └── optsicom/b/                  (10 real-world Optsicom graphs)
├── mis/                                 (~365 MB; DISCS)
│   ├── satlib/uf/                   (500 SATLIB CNF → IS graph encodings)
│   ├── er/{800,10k}/                (ER-[700-800] × 128, ER-[9000-11000] × 16)
│   └── er_density/{0.05,0.10,0.20,0.25}/  (additional ER density sweeps)
├── maxclique/                           (~131 MB; DISCS)
│   ├── rb/all/                      (Xu et al. 2007 RBtest, 500 instances)
│   └── twitter/all/                 (SNAP Twitter ego-network, 196 instances)
├── normcut/                             (~7.4 MB; DISCS; also consumed by Balanced Partition)
│   └── nets/{VGG,MNIST,RESNET,ALEXNET,INCEPTION,BABELFISH,NMT,TTS}/
├── gset/                                (~30 MB; 71 G-set graphs)
│   └── standard/ (G1..G67, G70, G72, G77, G81)
├── coloring/                            (~350 KB; procedural + DIMACS)
│   ├── myciel/     (Mycielski graphs k=3..7; chromatic number = k)
│   ├── queen/      (queen-attack graphs on k × k boards, k = 5..13)
│   └── dimacs/     (DIMACS COLOR real-world: anna, jean, queen8_12 — arXiv Table 6)
├── mis-rrg/                             (~9.7 GB; procedural)
│   ├── d20_n10000/    (d=20, n=10^4, 5 seeds; PQQA §5.1 Table 2)
│   ├── d20_n100000/   (d=20, n=10^5)
│   ├── d20_n1000000/  (d=20, n=10^6; ~1.6 GB total)
│   ├── d100_n10000/   (d=100, n=10^4; dense regime)
│   ├── d100_n100000/  (d=100, n=10^5)
│   └── d100_n1000000/ (d=100, n=10^6; ~8 GB total)
└── ea3d/                                (~280 KB; procedural)
    ├── gaussian/{L4,L6,L8}/    (N(0,1) couplings; cubic lattice, PBC)
    └── bimodal/{L4,L6,L8}/     (±1 couplings; ±J spin glass)

Per-instance file formats

  • *.gpickle (DISCS, G-set, Coloring, MIS-RRG) — pickle.dump(networkx.Graph). Edge weights are carried on the weight edge attribute (±1 for the G-set ±1 families, real-valued for the Gaussian-weighted families).
  • *.npz (EA3D) — sparse coupling list with arrays i, j, J, L (lattice edge list + cube side). The QQA4CO loader qqa.datasets.ea3d reassembles the J matrix and instantiates qqa.problems.EdwardsAnderson.
  • manifest.jsonl — one JSON object per line, with at least {id, file, best_known, source}. Family-specific extras: num_colors, best_known_source (coloring); d, n, seed (mis-rrg); num_spins, L, distribution (ea3d); num_nodes, num_edges, best_known_source, source_url (gset); problem, graph_type, subset, source (DISCS).
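For the .npz coupling lists, the Edwards-Anderson energy can be evaluated straight from the sparse (i, j, J) arrays without rebuilding a dense matrix. A self-contained sketch (not the qqa.datasets.ea3d loader itself; toy arrays stand in for a real ea3d/*.npz file):

```python
import numpy as np

# H(s) = -sum over bonds (i, j) of J_ij * s_i * s_j, computed directly
# from the sparse coupling arrays documented above.
def ea_energy(i: np.ndarray, j: np.ndarray, J: np.ndarray,
              spins: np.ndarray) -> float:
    return -float(np.sum(J * spins[i] * spins[j]))

i = np.array([0, 1])
j = np.array([1, 2])
J = np.array([1.0, -0.5])          # one ferro, one antiferro bond
spins = np.array([1, 1, -1])
print(ea_energy(i, j, J, spins))   # -> -1.5
```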

Best-known references

| Family | best_known semantics |
| --- | --- |
| maxcut / mis / maxclique | upstream DISCS reference; higher is better |
| gset | Benlic & Hao (2013), Matsuda (2018); higher is better |
| normcut | upstream DISCS reference; lower is better |
| coloring | 0 (the minimum number of edge conflicts a proper K-colouring must reach); num_colors carries the known chromatic number |
| mis-rrg | Barbier–Krzakała–Zdeborová (2013) replica-symmetric asymptotic MIS density × n (ρ_{d=20} = 0.2498, ρ_{d=100} = 0.0669) |
| ea3d | brute-force ground-state energy for N ≤ 20; NaN for larger lattices |
| balanced-partition (on normcut/nets/*) | no published reference (NaN); use for comparative runs only |

Approximation Ratio (ApR) conventions — consistent with the PQQA paper and with qqa bench-plot:

  • Maximization (MaxCut, MIS, MaxClique): ApR = value / best_known, so ApR ≤ 1, with equality at optimality.
  • Minimization (NormCut, Coloring objective-as-conflicts): ApR = best_known / value, so ApR ≤ 1, with equality at optimality.
  • Subsets whose best_known is null or NaN report ApR = NaN and are excluded from aggregate statistics.
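The three bullets translate to a small helper. The maximize flag and NaN handling below are illustrative assumptions of this sketch, not a documented qqa function (coloring's best_known of 0 would need a special case and is not handled here):

```python
import math

# ApR per the conventions above; signature is an assumption of this sketch.
def apr(value: float, best_known, maximize: bool = True) -> float:
    if best_known is None or (isinstance(best_known, float)
                              and math.isnan(best_known)):
        return float("nan")          # excluded from aggregate statistics
    return value / best_known if maximize else best_known / value

assert apr(11000, 11624) < 1.0               # maximization, sub-optimal
assert apr(0.12, 0.10, maximize=False) < 1.0 # minimization, sub-optimal
assert math.isnan(apr(42, None))             # missing label -> NaN
```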

Caveats and reproducibility notes

  • G-set has upstream holes. G68, G69, G71, G73–G76, and G78–G80 return HTTP 404 from the Stanford mirror and are therefore not hosted here. scripts/fetch_gset_data.py logs the skipped indices.
  • normcut/nets/ graphs are highly disconnected. Several computation graphs (BABELFISH, TTS, ALEXNET, VGG, …) consist of a giant component plus dozens of 2–4 node fragments. A naive solver reaches Ncut = 0 trivially by isolating the small components. Restrict to the largest connected component when you want a non-trivial bisection.
  • TRANSFORMER.pkl was empty in the upstream DISCS release — omitted.
  • normcut-gap_rand is not present in the upstream DISCS tarball — omitted.
  • MIS on RRG at n = 10^6 is fully hosted (5 seeds × 2 degrees, ~9.7 GB total). Each d=100, n=10^6 .gpickle is ~1.6 GB on disk; the full adjacency matrix does not fit in a single dense tensor (n² = 10¹² entries), so solvers must use sparse representations. Re-generate locally with python scripts/generate_rrg_instances.py --include-huge if needed.
  • Barbier d=100 density correction (Apr 2026). Earlier snapshots of scripts/generate_rrg_instances.py carried _BARBIER_DENSITY[100] = 0.1360 (a typo; that is the d=20 asymptotic density), which inflated best_known by roughly 2× on the d100_n10000 manifest. The current version uses the correct ρ_{d=100} = 0.0669; all d100_n* manifests have been rebuilt.
  • DIMACS coloring/dimacs/ was added 2026-04-22. The three real-world instances anna, jean, queen8_12 (Table 6 of the PQQA paper) were previously missing. They are now fetched once from Trick's canonical mirror by scripts/generate_coloring_instances.py, cached locally under data/coloring/_dimacs_cache/, and shipped on the Hub.
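For the normcut/nets caveat above, restricting to the largest connected component is a one-liner with networkx (toy graph shown; apply the same call to the loaded .gpickle graph):

```python
import networkx as nx

# Keep only the giant component of a disconnected computation graph so a
# bisection cannot trivially cut off the 2-4 node fragments.
def largest_component(g: nx.Graph) -> nx.Graph:
    giant = max(nx.connected_components(g), key=len)
    return g.subgraph(giant).copy()

g = nx.Graph([(0, 1), (1, 2), (2, 0), (7, 8)])  # triangle + 2-node fragment
assert largest_component(g).number_of_nodes() == 3
```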

Sources


Citation

If you use this dataset, please cite the PQQA paper (the design target for the scope and the paper that tabulates the expected ApR numbers):

@article{ichikawa2024pqqa,
  title   = {Optimization by Parallel Quasi-Quantum Annealing with
             Gradient-Based Sampling},
  author  = {Ichikawa, Yuma and Iwashita, Hiroshi},
  journal = {arXiv preprint arXiv:2409.02135},
  year    = {2024},
  url     = {https://arxiv.org/abs/2409.02135}
}

In addition, please cite the upstream source(s) of whichever subset(s) you use.

DISCS subsets (maxcut, mis, maxclique, normcut) — data is Goshvadi et al.'s; we only repackage:

@inproceedings{goshvadi2023discs,
  title     = {{DISCS}: A Benchmark for Discrete Sampling},
  author    = {Goshvadi, Katayoon and Sun, Haoran and Liu, Xingchao
               and Nova, Azade and Zhang, Ruqi and Grathwohl, Will
               and Schuurmans, Dale and Dai, Hanjun},
  booktitle = {Advances in Neural Information Processing Systems
               (NeurIPS Datasets and Benchmarks Track)},
  year      = {2023},
  url       = {https://openreview.net/forum?id=oi1MUMk5NF}
}

@inproceedings{sun2023revisiting,
  title     = {Revisiting Sampling for Combinatorial Optimization},
  author    = {Sun, Haoran and Goshvadi, Katayoon and Nova, Azade and
               Schuurmans, Dale and Dai, Hanjun},
  booktitle = {International Conference on Machine Learning (ICML)},
  year      = {2023}
}

MIS on SATLIB — cite the original SATLIB benchmark:

@incollection{hoos2000satlib,
  title     = {{SATLIB}: An Online Resource for Research on {SAT}},
  author    = {Hoos, Holger H. and St{\"u}tzle, Thomas},
  booktitle = {SAT 2000: Highlights of Satisfiability Research in the Year 2000},
  editor    = {Gent, I. P. and van Maaren, H. and Walsh, T.},
  publisher = {IOS Press},
  pages     = {283--292},
  year      = {2000}
}

MIS reference solutions (KaMIS) — used for SATLIB and ER:

@inproceedings{lamm2016finding,
  title     = {Finding Near-Optimal Independent Sets at Scale},
  author    = {Lamm, Sebastian and Sanders, Peter and Schulz,
               Christian and Strash, Darren and Werneck, Renato F.},
  booktitle = {Proceedings of the 18th Meeting on Algorithm Engineering
               and Experiments (ALENEX)},
  year      = {2016},
  doi       = {10.1137/1.9781611974317.12}
}

@article{hespe2019scalable,
  title   = {Scalable Kernelization for Maximum Independent Sets},
  author  = {Hespe, Demian and Schulz, Christian and Strash, Darren},
  journal = {ACM Journal of Experimental Algorithmics (JEA)},
  year    = {2019},
  doi     = {10.1145/3355502}
}

Max Clique — RB random model and SNAP Twitter ego-graph:

@article{xu2007random,
  title   = {Random Constraint Satisfaction: Easy Generation of Hard
             (Satisfiable) Instances},
  author  = {Xu, Ke and Boussemart, Fr{\'e}d{\'e}ric and Hemery, Fred
             and Lecoutre, Christophe},
  journal = {Artificial Intelligence},
  volume  = {171},
  number  = {8-9},
  pages   = {514--534},
  year    = {2007},
  doi     = {10.1016/j.artint.2007.04.001}
}

@misc{leskovec2014snap,
  title  = {{SNAP} Datasets: {Stanford} Large Network Dataset Collection},
  author = {Leskovec, Jure and Krevl, Andrej},
  year   = {2014},
  url    = {http://snap.stanford.edu/data}
}

Max Cut G-set — seminal spectral-bundle paper and best-known reference:

@article{helmberg2000gset,
  title   = {A Spectral Bundle Method for Semidefinite Programming},
  author  = {Helmberg, Christoph and Rendl, Franz},
  journal = {SIAM Journal on Optimization},
  volume  = {10},
  number  = {3},
  pages   = {673--696},
  year    = {2000},
  doi     = {10.1137/S1052623497328987}
}

@article{benlic2013bls,
  title   = {Breakout Local Search for the Max-Cut Problem},
  author  = {Benlic, Una and Hao, Jin-Kao},
  journal = {Engineering Applications of Artificial Intelligence},
  volume  = {26},
  number  = {3},
  pages   = {1162--1173},
  year    = {2013},
  doi     = {10.1016/j.engappai.2012.09.001}
}

@article{matsuda2018gset,
  title   = {What Works Best When? A Systematic Evaluation of Heuristics
             for Max-Cut and {QUBO}},
  author  = {Dunning, Iain and Gupta, Swati and Silberholz, John},
  journal = {INFORMS Journal on Computing},
  volume  = {30},
  number  = {3},
  pages   = {608--624},
  year    = {2018},
  doi     = {10.1287/ijoc.2017.0798}
}

MIS on regular random graphs — asymptotic density (used as best_known) and the original hardness argument:

@article{barbier2013hard,
  title   = {The Hard-Core Model on Random Graphs Revisited},
  author  = {Barbier, Jean and Krzakala, Florent and Zdeborov{\'a},
             Lenka and Zhang, Pan},
  journal = {Journal of Physics: Conference Series},
  volume  = {473},
  pages   = {012021},
  year    = {2013},
  doi     = {10.1088/1742-6596/473/1/012021}
}

@article{angelini2023modern,
  title   = {Modern Graph Neural Networks Do Worse than Classical
             Greedy Algorithms in Solving Combinatorial Optimization
             Problems like Maximum Independent Set},
  author  = {Angelini, Maria Chiara and Ricci-Tersenghi, Federico},
  journal = {Nature Machine Intelligence},
  volume  = {5},
  pages   = {29--31},
  year    = {2023},
  doi     = {10.1038/s42256-022-00589-y}
}

Graph Coloring (COLOR / DIMACS) — Trick's canonical compilation and the two procedural families that cover most of Table 6:

@misc{trick2002color,
  title  = {Graph Coloring Instances},
  author = {Trick, Michael A.},
  year   = {2002},
  note   = {Carnegie Mellon, \url{https://mat.tepper.cmu.edu/COLOR/instances.html}}
}

@article{mycielski1955coloring,
  title   = {Sur le coloriage des graphes},
  author  = {Mycielski, Jan},
  journal = {Colloquium Mathematicae},
  volume  = {3},
  number  = {2},
  pages   = {161--162},
  year    = {1955}
}

@book{knuth1993sgb,
  title     = {The Stanford {GraphBase}: A Platform for Combinatorial Computing},
  author    = {Knuth, Donald E.},
  publisher = {ACM Press / Addison-Wesley},
  year      = {1993}
}

Balanced Graph Partition — GAP baseline and hMETIS framework:

@inproceedings{nazi2019gap,
  title     = {{GAP}: Generalizable Approximate Graph Partitioning Framework},
  author    = {Nazi, Azade and Hang, Will and Goldie, Anna and
               Ravi, Sujith and Mirhoseini, Azalia},
  booktitle = {ICLR Workshop on Representation Learning on Graphs and Manifolds},
  year      = {2019}
}

@article{karypis1999multilevel,
  title   = {Multilevel Hypergraph Partitioning: Applications in {VLSI} Domain},
  author  = {Karypis, George and Aggarwal, Rajat and Kumar, Vipin
             and Shekhar, Shashi},
  journal = {IEEE Transactions on Very Large Scale Integration (VLSI) Systems},
  volume  = {7},
  number  = {1},
  pages   = {69--79},
  year    = {1999},
  doi     = {10.1109/92.748202}
}

Optionally, also cite the QQA4CO repackaging infrastructure:

@misc{ichikawa2026qqa4co,
  title  = {{QQA4CO}: A Reproducible GPU Benchmark Suite for
            Combinatorial Optimization},
  author = {Ichikawa, Yuma},
  year   = {2026},
  url    = {https://github.com/Yuma-Ichikawa/QQA4CO}
}

License

This repackaging is released under Apache-2.0. The underlying DISCS, SATLIB, DIMACS, SNAP, and G-set instances inherit the licenses of their original sources (see the Sources table above); please consult those upstream links if you redistribute. The procedurally generated coloring/{myciel,queen}, mis-rrg, and ea3d subsets are original to this dataset and are released under Apache-2.0.


How to benchmark a new solver

A detailed guide (Python + CLI + Make, with ratio conventions and per-family feasibility definitions) lives in the QQA4CO docs: docs/how-to/benchmark.md.

Changelog

  • 2026-04-22 — added coloring/dimacs/ (anna, jean, queen8_12); arXiv-2409.02135v2 Table 6 coverage is now 12/12.
  • 2026-04-20 — added the d20_n{10⁵, 10⁶} and d100_n{10⁵, 10⁶} RRG cells (four 5-seed subsets, ~9 GB); corrected the Barbier d=100 density bug on d100_n10000/manifest.jsonl (best_known ×½).
  • 2026-04-20 — repository renamed from discs-co-bench to qqa4co-bench.