Update README.md
README.md CHANGED

@@ -28,7 +28,7 @@ Developed by Deep Learning Efficiency Research (DLER) team at NVIDIA Research.
 - [Hymba-1.5B-Base](https://huggingface.co/nvidia/Hymba-1.5B): Outperform all sub-2B public models, e.g., matching Llama 3.2 3B’s commonsense reasoning accuracy, being 3.49× faster, and reducing cache size by 11.7×

 <div align="center">
-<img src="https://huggingface.co/nvidia/Hymba-1.5B/resolve/main/images/performance1.png" alt="Compare with SoTA Small LMs" width="600">
+<img src="https://huggingface.co/nvidia/Hymba-1.5B-Instruct/resolve/main/images/performance1.png" alt="Compare with SoTA Small LMs" width="600">
 </div>

@@ -36,7 +36,7 @@ Developed by Deep Learning Efficiency Research (DLER) team at NVIDIA Research.

 <div align="center">
-<img src="https://huggingface.co/nvidia/Hymba-1.5B/resolve/main/images/instruct_performance.png" alt="Compare with SoTA Small LMs" width="600">
+<img src="https://huggingface.co/nvidia/Hymba-1.5B-Instruct/resolve/main/images/instruct_performance.png" alt="Compare with SoTA Small LMs" width="600">
 </div>