# TTS Human Preferences (Medium)
Human preference dataset for text-to-speech (TTS) audio quality evaluation. Each row contains two TTS audio renderings of the same text prompt, along with 15 human preference annotations indicating which audio sounds more natural.
This is the medium (2,000-row) subset. See also: small (1,000 rows). Larger versions will follow.
## Dataset Summary
| Metric | Value |
|---|---|
| Total rows | 2,000 |
| Annotations per row | 15 |
| Total annotations | 30,000 |
| Unique prompts | 2,000 |
| Audio format | MP3 |
## Comparison Strategies
The dataset uses three strategies, balanced at roughly 667 rows each, to create meaningful variation between audio pairs:
### 1. Model Gap (1_model_gap)
Same voice, same settings, different TTS model. Compares across model pairs from: eleven_multilingual_v2, eleven_monolingual_v1, eleven_turbo_v2_5, eleven_flash_v2_5.
### 2. Settings Gap (2_settings_gap)
Same voice, same model, different voice settings. Four sub-strategies:
- stability_contrast — low (0.25-0.40) vs high (0.75-0.90) stability
- speed_contrast — slow (0.75-0.88) vs fast (1.12-1.25)
- style_contrast — low (0.0-0.10) vs high (0.35-0.55) style
- combined — multiple settings differ simultaneously
### 3. Voice Gap (3_voice_gap)
Same model, same settings, different voice. Three sub-types: male vs male, female vs female, male vs female. Voice pool: 10 male + 8 female voices.
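Rows belonging to a single strategy can be selected by filtering on the strategy column. A minimal sketch over plain row dicts (the by_strategy helper and the sample rows here are illustrative, not part of the dataset):

```python
def by_strategy(rows, strategy):
    """Select rows belonging to one top-level comparison strategy."""
    return [r for r in rows if r["strategy"] == strategy]


# Illustrative rows; real rows carry the full column set documented below.
sample = [
    {"strategy": "1_model_gap", "strategy_detail": "v2_vs_v1"},
    {"strategy": "2_settings_gap", "strategy_detail": "stability_contrast"},
    {"strategy": "3_voice_gap", "strategy_detail": "male_vs_female"},
]
settings_rows = by_strategy(sample, "2_settings_gap")
```

The same filter works directly on a loaded `datasets.Dataset` via its `filter` method.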
## Prompts
Sourced from the LJ Speech dataset:
- Filtered to 8-22 word English sentences ending in `.`, `!`, or `?`
- Content-filtered to remove violence, crime, sexual content, and other sensitive topics
- 2,000 unique, deduplicated prompts
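The length and terminal-punctuation filters above can be sketched as a single predicate. Here keep_prompt and the tiny BLOCKLIST are hypothetical stand-ins; the actual sensitive-topic list used for content filtering is not published:

```python
# Hypothetical subset; the real sensitive-topic list is not published.
BLOCKLIST = {"murder", "weapon"}


def keep_prompt(text: str) -> bool:
    """Apply the length, terminal-punctuation, and content checks described above."""
    words = text.split()
    if not 8 <= len(words) <= 22:
        return False
    if not text.rstrip().endswith((".", "!", "?")):
        return False
    normalized = {w.strip(".,!?;:'\"").lower() for w in words}
    return normalized.isdisjoint(BLOCKLIST)
```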
## Dataset Structure
### Columns
| Column | Type | Description |
|---|---|---|
| prompt | string | Text prompt used to generate both audio clips |
| audio_a | audio | MP3 audio file for audio A |
| audio_b | audio | MP3 audio file for audio B |
| strategy | string | Comparison strategy: 1_model_gap, 2_settings_gap, 3_voice_gap |
| strategy_detail | string | Sub-strategy (e.g., v2_vs_v1, stability_contrast, male_vs_female) |
| audio_a_voice_name | string | Voice name for audio A |
| audio_a_voice_id | string | Voice ID for audio A |
| audio_a_model_id | string | TTS model for audio A |
| audio_a_stability | float | Stability setting for audio A |
| audio_a_similarity_boost | float | Similarity boost setting for audio A |
| audio_a_style | string | Style setting for audio A (N/A for unsupported models) |
| audio_a_speed | float | Speed setting for audio A |
| audio_a_speaker_boost | string | Speaker boost for audio A (N/A for unsupported models) |
| audio_b_voice_name | string | Voice name for audio B |
| audio_b_voice_id | string | Voice ID for audio B |
| audio_b_model_id | string | TTS model for audio B |
| audio_b_stability | float | Stability setting for audio B |
| audio_b_similarity_boost | float | Similarity boost setting for audio B |
| audio_b_style | string | Style setting for audio B (N/A for unsupported models) |
| audio_b_speed | float | Speed setting for audio B |
| audio_b_speaker_boost | string | Speaker boost for audio B (N/A for unsupported models) |
| weighted_results_audio_a | float | Fraction of annotators who preferred audio A |
| weighted_results_audio_b | float | Fraction of annotators who preferred audio B |
| num_annotations | int | Number of annotations (always 15) |
| detailed_results | list | Per-annotator votes with display_position, time_taken_ms, winner |
### Detailed Results Structure
Each entry in detailed_results:
```json
{
  "display_position": "unknown",
  "time_taken_ms": 14376,
  "winner": "audio_a"
}
```
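The weighted_results_* columns can be recomputed from detailed_results by counting winners. A small sketch; aggregate_votes and the sample votes below are illustrative, not part of the dataset:

```python
from collections import Counter


def aggregate_votes(detailed_results):
    """Recompute the preference fractions from per-annotator vote entries."""
    counts = Counter(entry["winner"] for entry in detailed_results)
    n = len(detailed_results)
    return counts["audio_a"] / n, counts["audio_b"] / n


# Illustrative votes: 9 of 15 annotators preferred audio A.
votes = [{"winner": "audio_a"}] * 9 + [{"winner": "audio_b"}] * 6
frac_a, frac_b = aggregate_votes(votes)
```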
## Usage
```python
from datasets import load_dataset

ds = load_dataset("datapointai/tts-human-preferences-medium", split="train")

print(ds[0]["prompt"])
print(f"Audio A preferred: {ds[0]['weighted_results_audio_a']:.1%}")
print(f"Audio B preferred: {ds[0]['weighted_results_audio_b']:.1%}")
```
## Train a Reward Model
```python
from datasets import load_dataset

ds = load_dataset("datapointai/tts-human-preferences-medium", split="train")
df = ds.to_pandas()  # requires pandas to be installed

for _, row in df.iterrows():
    prompt = row["prompt"]
    score_a = row["weighted_results_audio_a"]
    score_b = row["weighted_results_audio_b"]
    strategy = row["strategy"]
    # Use as preference pairs for DPO, reward modeling, etc.
```
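One way to turn the weighted results into chosen/rejected pairs is to threshold the vote margin. The to_preference_pair helper and its 0.2 margin are illustrative choices, not part of the dataset:

```python
def to_preference_pair(row, margin=0.2):
    """Return (chosen, rejected) column keys, or None when votes are too close.

    `margin` is an illustrative confidence threshold: pairs where the vote
    split is nearly even carry little preference signal.
    """
    diff = row["weighted_results_audio_a"] - row["weighted_results_audio_b"]
    if abs(diff) < margin:
        return None
    return ("audio_a", "audio_b") if diff > 0 else ("audio_b", "audio_a")


example = {"weighted_results_audio_a": 0.8, "weighted_results_audio_b": 0.2}
pair = to_preference_pair(example)
```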
## Data Collection
Annotations were collected through Datapoint AI's consumer app SDK using forced-choice pairwise comparison ("Which audio sounds more natural?"). Each comparison was annotated by 15 unique annotators.
## License
CC-BY-4.0
## Citation
```bibtex
@dataset{datapointai_tts_preferences_2026,
  title={TTS Human Preferences},
  author={Datapoint AI},
  year={2026},
  url={https://huggingface.co/datasets/datapointai/tts-human-preferences-medium},
  note={30,000 pairwise human preference labels for TTS audio quality}
}
```