Datasets:
Add QLoRA + RAG few-shot 53.3% to Chichewa progression
README.md CHANGED
@@ -33,7 +33,7 @@ This benchmark was constructed to investigate the adaptation of Large Language M
 
 Key findings from the accompanying research:
 - **English** zero-shot execution accuracy: 20% → 50% (random few-shot) → 70% (RAG few-shot) → **76.7% (QLoRA)**
-- **Chichewa** zero-shot execution accuracy: 0% across all models → 41.7% (RAG few-shot) →
+- **Chichewa** zero-shot execution accuracy: 0% across all models → 41.7% (RAG few-shot) → 41.7% (QLoRA) → **53.3% (QLoRA + RAG few-shot)**
 
 ---