hf-transformers-bot committed
Commit 6c3ab25 · verified · 1 Parent(s): a564140

Upload 2026-04-20/runs/7331-24676670554/ci_results_run_models_gpu/model_results.json with huggingface_hub
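For context, reports like this one are pushed by the CI bot with the huggingface_hub client, as the commit message says. A minimal sketch of such an upload, assuming a local model_results.json and write access to the target repo (the repo_id below is illustrative, not the bot's actual destination):

    from huggingface_hub import HfApi

    api = HfApi()  # picks up the token from HF_TOKEN or the local credential store
    api.upload_file(
        path_or_fileobj="model_results.json",  # report produced by the CI run
        path_in_repo="2026-04-20/runs/7331-24676670554/ci_results_run_models_gpu/model_results.json",
        repo_id="hf-internal-testing/transformers-daily-ci",  # hypothetical target repo
        repo_type="dataset",
        commit_message="Upload model_results.json with huggingface_hub",
    )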

2026-04-20/runs/7331-24676670554/ci_results_run_models_gpu/model_results.json ADDED
@@ -0,0 +1,1854 @@
{
  "models_auto": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 0 },
      "Tokenizers": { "unclassified": 0, "single": 1, "multi": 1 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 258,
    "skipped": 14,
    "time_spent": [87.07, 87.45],
    "error": false,
    "failures": {
      "single": [
        { "line": "tests/models/auto/test_tokenization_auto.py::AutoTokenizerTest::test_custom_tokenizer_from_hub", "trace": "(line 687) AssertionError: False is not true" }
      ],
      "multi": [
        { "line": "tests/models/auto/test_tokenization_auto.py::AutoTokenizerTest::test_custom_tokenizer_from_hub", "trace": "(line 687) AssertionError: False is not true" }
      ]
    },
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567383",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567190"
    },
    "captured_info": {}
  },
  "models_bert": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 2, "multi": 2 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 415,
    "skipped": 193,
    "time_spent": [146.13, 143.91],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_inference_equivalence", "trace": "(line 3388) AssertionError: Tensor-likes are not close!" },
        { "line": "tests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_inference_equivalence_right_padding", "trace": "(line 3390) AssertionError: Tensor-likes are not close!" }
      ],
      "single": [
        { "line": "tests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_inference_equivalence", "trace": "(line 3388) AssertionError: Tensor-likes are not close!" },
        { "line": "tests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_inference_equivalence_right_padding", "trace": "(line 3390) AssertionError: Tensor-likes are not close!" }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567151",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567297"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567151#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567297#step:16:1"
    }
  },
  "models_clip": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 0 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 1030,
    "skipped": 572,
    "time_spent": [158.73, 155.45],
    "error": false,
    "failures": {},
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567270",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567388"
    },
    "captured_info": {}
  },
  "models_csm": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 0 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 292,
    "skipped": 212,
    "time_spent": [169.32, 171.82],
    "error": false,
    "failures": {},
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567379",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567173"
    },
    "captured_info": {}
  },
  "models_detr": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 0 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 251,
    "skipped": 211,
    "time_spent": [90.47, 92.59],
    "error": false,
    "failures": {},
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567446",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567197"
    },
    "captured_info": {}
  },
  "models_gemma3": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 5, "multi": 5 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 722,
    "skipped": 440,
    "time_spent": [469.69, 468.5],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_torch_export", "trace": "(line 481) AssertionError: Current active mode <torch.fx.experimental.proxy_tensor.ProxyTorchDispatchMode object at 0x7fed7436dde0> not registered" },
        { "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_dynamic_sliding_window_is_default", "trace": "(line 865) AssertionError: 'DynamicSlidingWindowLayer' unexpectedly found in 'DynamicCache(layers=[DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer])'" },
        { "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_crops", "trace": "(line 580) AssertionError: Lists differ: [\"user\\nYou are a helpful assistant.\\n\\nHe[267 chars]ve.\"] != ['user\\nYou are a helpful assistant.\\n\\nHe[268 chars]the']" },
        { "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_flash_attn", "trace": "(line 753) AssertionError: Lists differ: ['use[75 chars]del\\nCertainly! \\n\\nThe image shows a brown an[92 chars]and'] != ['use[75 chars]del\\nThe image shows a brown and white cow sta[106 chars]day']" },
        { "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_multiimage", "trace": "(line 696) AssertionError: Lists differ: [\"use[115 chars]image:\\n\\n**Overall Scene:**\\n\\nIt looks like [26 chars]ith\"] != [\"use[115 chars]image!\\n\\nHere's a description of the scene:\\n[17 chars]rch\"]" }
      ],
      "single": [
        { "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_torch_export", "trace": "(line 481) AssertionError: Current active mode <torch.fx.experimental.proxy_tensor.ProxyTorchDispatchMode object at 0x7f0a77b407c0> not registered" },
        { "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_dynamic_sliding_window_is_default", "trace": "(line 865) AssertionError: 'DynamicSlidingWindowLayer' unexpectedly found in 'DynamicCache(layers=[DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer])'" },
        { "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_crops", "trace": "(line 580) AssertionError: Lists differ: [\"user\\nYou are a helpful assistant.\\n\\nHe[267 chars]ve.\"] != ['user\\nYou are a helpful assistant.\\n\\nHe[268 chars]the']" },
        { "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_flash_attn", "trace": "(line 753) AssertionError: Lists differ: ['use[75 chars]del\\nCertainly! \\n\\nThe image shows a brown an[92 chars]and'] != ['use[75 chars]del\\nThe image shows a brown and white cow sta[106 chars]day']" },
        { "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_multiimage", "trace": "(line 696) AssertionError: Lists differ: [\"use[115 chars]image:\\n\\n**Overall Scene:**\\n\\nIt looks like [26 chars]ith\"] != [\"use[115 chars]image!\\n\\nHere's a description of the scene:\\n[17 chars]rch\"]" }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567512",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567592"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567512#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567592#step:16:1"
    }
  },
  "models_gemma3n": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 7, "multi": 9 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 610,
    "skipped": 720,
    "time_spent": [618.19, 621.3],
    "error": false,
    "failures": {
      "single": [
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_equivalence", "trace": "(line 632) AssertionError: Tensor-likes are not close!" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_inference_equivalence", "trace": "(line 3386) AssertionError: Tensor-likes are not close!" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_inference_equivalence_right_padding", "trace": "(line 3386) AssertionError: Tensor-likes are not close!" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_generation_beyond_sliding_window", "trace": "(line 1196) AssertionError: Lists differ: [\" and the food is delicious. I'm so glad I came her[83 chars]re'\"] != [\" and the people are so friendly. I'm so glad I cam[83 chars]re'\"]" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_generation_beyond_sliding_window_with_generation_config", "trace": "(line 1228) AssertionError: Lists differ: [\" and I'm very happy to be here. This is a nice pla[87 chars]re'\"] != [\" and I'm glad I came here. This is a nice place. T[88 chars]re'\"]" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_model_4b_bf16", "trace": "(line 998) AssertionError: Lists differ: ['use[149 chars]to a turquoise ocean. The cow is facing the vi[31 chars]ned'] != ['use[149 chars]to a clear blue ocean. The cow is facing the v[25 chars]tly']" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_model_4b_image", "trace": "(line 1110) AssertionError: Lists differ: ['use[149 chars]to a turquoise ocean. The cow is facing the vi[31 chars]ned'] != ['use[149 chars]to a clear blue ocean. The cow is facing the v[25 chars]tly']" }
      ],
      "multi": [
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_equivalence", "trace": "(line 632) AssertionError: Tensor-likes are not close!" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_inference_equivalence", "trace": "(line 3388) AssertionError: Tensor-likes are not close!" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_inference_equivalence_right_padding", "trace": "(line 3386) AssertionError: Tensor-likes are not close!" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nVision2TextModelTest::test_model_parallelism", "trace": "(line 1962) AttributeError: 'Gemma3nModel' object has no attribute 'hf_device_map'" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nVision2TextModelTest::test_multi_gpu_data_parallel_forward", "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1." },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_generation_beyond_sliding_window", "trace": "(line 1196) AssertionError: Lists differ: [\" and the food is delicious. I'm so glad I came her[83 chars]re'\"] != [\" and the people are so friendly. I'm so glad I cam[83 chars]re'\"]" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_generation_beyond_sliding_window_with_generation_config", "trace": "(line 1228) AssertionError: Lists differ: [\" and I'm very happy to be here. This is a nice pla[87 chars]re'\"] != [\" and I'm glad I came here. This is a nice place. T[88 chars]re'\"]" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_model_4b_bf16", "trace": "(line 998) AssertionError: Lists differ: ['use[149 chars]to a turquoise ocean. The cow is facing the vi[31 chars]ned'] != ['use[149 chars]to a clear blue ocean. The cow is facing the v[25 chars]tly']" },
        { "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_model_4b_image", "trace": "(line 1110) AssertionError: Lists differ: ['use[149 chars]to a turquoise ocean. The cow is facing the vi[31 chars]ned'] != ['use[149 chars]to a clear blue ocean. The cow is facing the v[25 chars]tly']" }
      ]
    },
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567718",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567280"
    },
    "captured_info": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567718#step:16:1",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567280#step:16:1"
    }
  },
  "models_got_ocr2": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 1, "multi": 1 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 327,
    "skipped": 319,
    "time_spent": [183.74, 184.99],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/got_ocr2/test_modeling_got_ocr2.py::GotOcr2IntegrationTest::test_small_model_integration_test_got_ocr_format", "trace": "(line 213) AssertionError: 'R\\\\&D' != '\\\\title{\\nR'" }
      ],
      "single": [
        { "line": "tests/models/got_ocr2/test_modeling_got_ocr2.py::GotOcr2IntegrationTest::test_small_model_integration_test_got_ocr_format", "trace": "(line 213) AssertionError: 'R\\\\&D' != '\\\\title{\\nR'" }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567220",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567646"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567220#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567646#step:16:1"
    }
  },
  "models_gpt2": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 0 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 441,
    "skipped": 213,
    "time_spent": [147.28, 149.79],
    "error": false,
    "failures": {},
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567518",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567268"
    },
    "captured_info": {}
  },
  "models_internvl": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 1 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 446,
    "skipped": 213,
    "time_spent": [237.11, 241.48],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/internvl/test_modeling_internvl.py::InternVLModelTest::test_multi_gpu_data_parallel_forward", "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1." }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567271",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567539"
    },
    "captured_info": {}
  },
  "models_llama": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 0 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 457,
    "skipped": 179,
    "time_spent": [283.88, 269.17],
    "error": false,
    "failures": {},
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567362",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567772"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567362#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567772#step:16:1"
    }
  },
  "models_llava": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 8, "multi": 8 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 423,
    "skipped": 231,
    "time_spent": [260.87, 265.65],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_batched_generation", "trace": "(line 569) AssertionError: Lists differ: [\"\\n\\nUSER: What's the difference of two imag[339 chars]ama'] != [\"\\n \\nUSER: What's the difference of two ima[351 chars]ama']" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral", "trace": "(line 940) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 40.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 10.69 MiB is free. Process 70866 has 22.29 GiB memory in use. Of the allocated memory 21.77 GiB is allocated by PyTorch, and 14.88 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_4bit", "trace": "(line 4877) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.77 GiB. GPU 0 has a total capacity of 22.30 GiB of which 704.00 KiB is free. Process 70866 has 22.29 GiB memory in use. Of the allocated memory 21.78 GiB is allocated by PyTorch, and 14.87 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_batched", "trace": "(line 727) AssertionError: Lists differ: ['Wha[97 chars]mage?A narrow dirt path is surrounded by grass[74 chars]ue.'] != ['Wha[97 chars]mage?The image depicts a narrow, winding dirt [175 chars]ere']" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_batched", "trace": "(line 407) AssertionError: Lists differ: ['USER: \\nWhat are the things I should be cautiou[269 chars] on'] != ['USER: \\nWhat are the things I should be cautio[271 chars] on']" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_batched_regression", "trace": "(line 513) AssertionError: Lists differ: ['USER: \\nWhat are the things I should be cautiou[280 chars]ed.'] != ['USER: \\nWhat are the things I should be cautio[283 chars]ed.']" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_single", "trace": "(line 356) AssertionError: 'USER: \\nWhat are the things I should be cautiou[748 chars]ies.' != 'USER: \\nWhat are the things I should be cautio[749 chars]ies.'" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_tokenizer_integration", "trace": "(line 586) AssertionError: Lists differ: ['<|im_start|>', '▁system', '\\n', '▁Answer', '▁the', '▁ques[176 chars]'\\n'] != ['<|im_start|>', 'system', '\\n', 'Answer', '▁the', '▁questi[175 chars]'\\n']" }
      ],
      "single": [
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_batched_generation", "trace": "(line 569) AssertionError: Lists differ: [\"\\n\\nUSER: What's the difference of two imag[339 chars]ama'] != [\"\\n \\nUSER: What's the difference of two ima[351 chars]ama']" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral", "trace": "(line 940) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 140.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 86.69 MiB is free. Process 77562 has 22.21 GiB memory in use. Of the allocated memory 21.81 GiB is allocated by PyTorch, and 13.00 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_4bit", "trace": "(line 4877) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.77 GiB. GPU 0 has a total capacity of 22.30 GiB of which 36.69 MiB is free. Process 77562 has 22.26 GiB memory in use. Of the allocated memory 21.86 GiB is allocated by PyTorch, and 12.99 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_batched", "trace": "(line 727) AssertionError: Lists differ: ['Wha[97 chars]mage?A narrow dirt path is surrounded by grass[74 chars]ue.'] != ['Wha[97 chars]mage?The image depicts a narrow, winding dirt [175 chars]ere']" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_batched", "trace": "(line 407) AssertionError: Lists differ: ['USER: \\nWhat are the things I should be cautiou[269 chars] on'] != ['USER: \\nWhat are the things I should be cautio[271 chars] on']" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_batched_regression", "trace": "(line 513) AssertionError: Lists differ: ['USER: \\nWhat are the things I should be cautiou[280 chars]ed.'] != ['USER: \\nWhat are the things I should be cautio[283 chars]ed.']" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_single", "trace": "(line 356) AssertionError: 'USER: \\nWhat are the things I should be cautiou[748 chars]ies.' != 'USER: \\nWhat are the things I should be cautio[749 chars]ies.'" },
        { "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_tokenizer_integration", "trace": "(line 586) AssertionError: Lists differ: ['<|im_start|>', '▁system', '\\n', '▁Answer', '▁the', '▁ques[176 chars]'\\n'] != ['<|im_start|>', 'system', '\\n', 'Answer', '▁the', '▁questi[175 chars]'\\n']" }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567339",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567553"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567339#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567553#step:16:1"
    }
  },
  "models_mistral3": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 2, "multi": 2 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 357,
    "skipped": 245,
    "time_spent": [653.41, 642.34],
    "error": false,
    "failures": {
      "single": [
        { "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_batched_generate", "trace": "(line 365) AssertionError: ' to write a short story based on this ima[70 chars]e pl' != 'Calm waters reflect\\nWooden path to dista[26 chars]oods'" },
        { "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_batched_generate_multi_image", "trace": "(line 441) AssertionError: ' to write a short story based on this im[81 chars]ched' != \"Calm waters reflect\\nWooden path to dist[29 chars]hold\"" }
      ],
      "multi": [
        { "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_batched_generate", "trace": "(line 365) AssertionError: ' to write a short story based on this ima[70 chars]e pl' != 'Calm waters reflect\\nWooden path to dista[26 chars]oods'" },
        { "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_batched_generate_multi_image", "trace": "(line 441) AssertionError: ' to write a short story based on this im[81 chars]ched' != \"Calm waters reflect\\nWooden path to dist[29 chars]hold\"" }
      ]
    },
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567825",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567678"
    },
    "captured_info": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567825#step:16:1",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567678#step:16:1"
    }
  },
  "models_modernbert": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 1, "multi": 1 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 238,
    "skipped": 162,
    "time_spent": [103.6, 103.81],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/modernbert/test_modeling_modernbert.py::ModernBertModelIntegrationTest::test_inference_masked_lm_flash_attention_2", "trace": "(line 437) AssertionError: Tensor-likes are not close!" }
      ],
      "single": [
        { "line": "tests/models/modernbert/test_modeling_modernbert.py::ModernBertModelIntegrationTest::test_inference_masked_lm_flash_attention_2", "trace": "(line 437) AssertionError: Tensor-likes are not close!" }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567701",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567733"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567701#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567733#step:16:1"
    }
  },
  "models_pi0": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 1, "multi": 1 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 208,
    "skipped": 202,
    "time_spent": [139.16, 132.3],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/pi0/test_modeling_pi0.py::PI0ModelIntegrationTest::test_train_pi0_base_libero", "trace": "(line 769) torch.OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0." }
      ],
      "single": [
        { "line": "tests/models/pi0/test_modeling_pi0.py::PI0ModelIntegrationTest::test_train_pi0_base_libero", "trace": "(line 193) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 18.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 8.69 MiB is free. Process 45659 has 22.29 GiB memory in use. Of the allocated memory 21.50 GiB is allocated by PyTorch, and 478.93 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)" }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567595",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567769"
    },
    "captured_info": {}
  },
  "models_qwen2": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 1, "multi": 1 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 451,
    "skipped": 177,
    "time_spent": [227.64, 225.81],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_export_static_cache", "trace": "(line 287) AssertionError: Lists differ: ['My [35 chars], organic, gluten free, vegan, and free from preservatives. I'] != ['My [35 chars], organic, gluten free, vegan, and vegetarian. I love to use']" }
      ],
      "single": [
        { "line": "tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_export_static_cache", "trace": "(line 287) AssertionError: Lists differ: ['My [35 chars], organic, gluten free, vegan, and free from preservatives. I'] != ['My [35 chars], organic, gluten free, vegan, and vegetarian. I love to use']" }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567768",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567866"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567768#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567866#step:16:1"
    }
  },
  "models_qwen2_5_omni": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 2, "multi": 3 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 360,
    "skipped": 235,
    "time_spent": [189.37, 224.31],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniThinkerForConditionalGenerationModelTest::test_multi_gpu_data_parallel_forward", "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1." },
        { "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test", "trace": "(line 692) AssertionError: \"syst[108 chars]d is glass shattering, and the dog is a Labrador Retriever.\" != \"syst[108 chars]d is a glass shattering. The dog in the pictur[22 chars]ver.\"" },
        { "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test_batch", "trace": "(line 734) AssertionError: Lists differ: [\"sys[109 chars]d is glass shattering, and the dog is a Labrad[185 chars]er.\"] != [\"sys[109 chars]d is a glass shattering. The dog in the pictur[211 chars]er.\"]" }
      ],
      "single": [
        { "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test", "trace": "(line 692) AssertionError: \"syst[108 chars]d is glass shattering, and the dog is a Labrador Retriever.\" != \"syst[108 chars]d is a glass shattering. The dog in the pictur[22 chars]ver.\"" },
        { "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test_batch", "trace": "(line 734) AssertionError: Lists differ: [\"sys[109 chars]d is glass shattering, and the dog is a Labrad[185 chars]er.\"] != [\"sys[109 chars]d is a glass shattering. The dog in the pictur[211 chars]er.\"]" }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567686",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567848"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567686#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567848#step:16:1"
    }
  },
  "models_qwen2_5_vl": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 1, "multi": 1 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 397,
    "skipped": 121,
    "time_spent": [226.3, 227.26],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py::Qwen2_5_VLIntegrationTest::test_small_model_integration_test_batch_wo_image_flashatt2", "trace": "(line 746) AssertionError: Lists differ: ['sys[216 chars]in', 'system\\nYou are a helpful assistant.\\nus[166 chars]and'] != ['sys[216 chars]in', \"system\\nYou are a helpful assistant.\\nus[162 chars]ing\"]" }
      ],
      "single": [
        { "line": "tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py::Qwen2_5_VLIntegrationTest::test_small_model_integration_test_batch_wo_image_flashatt2", "trace": "(line 746) AssertionError: Lists differ: ['sys[216 chars]in', 'system\\nYou are a helpful assistant.\\nus[166 chars]and'] != ['sys[216 chars]in', \"system\\nYou are a helpful assistant.\\nus[162 chars]ing\"]" }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567671",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567968"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567671#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567968#step:16:1"
    }
  },
  "models_qwen2_audio": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 1 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 320,
    "skipped": 275,
    "time_spent": [138.72, 135.63],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/qwen2_audio/test_modeling_qwen2_audio.py::Qwen2AudioForConditionalGenerationModelTest::test_multi_gpu_data_parallel_forward", "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1." }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567689",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567852"
    },
    "captured_info": {}
  },
  "models_smolvlm": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 2 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 667,
    "skipped": 309,
    "time_spent": [123.95, 124.84],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/smolvlm/test_modeling_smolvlm.py::SmolVLMModelTest::test_multi_gpu_data_parallel_forward", "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1." },
        { "line": "tests/models/smolvlm/test_modeling_smolvlm.py::SmolVLMForConditionalGenerationModelTest::test_multi_gpu_data_parallel_forward", "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1." }
      ]
    },
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567846",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162568082"
    },
    "captured_info": {}
  },
  "models_t5": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 0 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 521,
    "skipped": 507,
    "time_spent": [156.79, 168.33],
    "error": false,
    "failures": {},
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567822",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567649"
    },
    "captured_info": {}
  },
  "models_table_transformer": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 1, "multi": 1 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 156,
    "skipped": 238,
    "time_spent": [50.42, 48.65],
    "error": false,
    "failures": {
      "multi": [
        { "line": "tests/models/table_transformer/test_modeling_table_transformer.py::TableTransformerModelIntegrationTests::test_table_detection", "trace": "(line 554) AssertionError: Tensor-likes are not close!" }
      ],
      "single": [
        { "line": "tests/models/table_transformer/test_modeling_table_transformer.py::TableTransformerModelIntegrationTests::test_table_detection", "trace": "(line 554) AssertionError: Tensor-likes are not close!" }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567681",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162568126"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567681#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162568126#step:16:1"
    }
  },
  "models_vit": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 0 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 259,
    "skipped": 175,
    "time_spent": [51.69, 52.33],
    "error": false,
    "failures": {},
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162568027",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567793"
    },
    "captured_info": {}
  },
  "models_wav2vec2": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 0 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 0,
    "skipped": 0,
    "time_spent": [5.67, 5.63],
    "error": false,
    "failures": {},
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567808",
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567912"
    },
    "captured_info": {}
  },
  "models_whisper": {
    "failed": {
      "PyTorch": { "unclassified": 0, "single": 0, "multi": 0 },
      "Tokenizers": { "unclassified": 0, "single": 0, "multi": 0 },
      "Pipelines": { "unclassified": 0, "single": 0, "multi": 0 },
      "Trainer": { "unclassified": 0, "single": 0, "multi": 0 },
      "ONNX": { "unclassified": 0, "single": 0, "multi": 0 },
      "Auto": { "unclassified": 0, "single": 0, "multi": 0 },
      "Quantization": { "unclassified": 0, "single": 0, "multi": 0 },
      "Unclassified": { "unclassified": 0, "single": 0, "multi": 0 }
    },
    "errors": 0,
    "success": 0,
    "skipped": 0,
    "time_spent": [5.59, 5.59],
    "error": false,
    "failures": {},
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567957",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24676670554/job/72162567791"
    },
    "captured_info": {}
  }
}
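The report above is machine-generated; a short sketch of how one might aggregate it, assuming only the schema visible here (per-model objects with "failed", "success", "skipped", and "failures" keys):

    import json

    with open("model_results.json") as f:
        results = json.load(f)

    for model, report in sorted(results.items()):
        # Sum the per-framework failure counts over single- and multi-GPU settings.
        n_failed = sum(
            counts["unclassified"] + counts["single"] + counts["multi"]
            for counts in report["failed"].values()
        )
        if n_failed:
            print(f"{model}: {n_failed} failed, {report['success']} passed, {report['skipped']} skipped")
            for setting, entries in report["failures"].items():
                for entry in entries:
                    print(f"  [{setting}] {entry['line']}")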