Instructions for using OEvortex/TTS-OLD with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use OEvortex/TTS-OLD with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-to-speech", model="OEvortex/TTS-OLD")

# Load model directly
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("OEvortex/TTS-OLD", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
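A text-to-speech pipeline typically returns a dict with an `"audio"` array and a `"sampling_rate"`. As a minimal sketch (assuming that standard output format; the placeholder result below stands in for a real `pipe("Hello world")` call), the waveform can be written to a WAV file with only the standard library and NumPy:

```python
import wave

import numpy as np


def save_wav(result: dict, path: str) -> None:
    """Save a TTS pipeline result ({"audio": float array, "sampling_rate": int}) as 16-bit mono WAV."""
    audio = np.asarray(result["audio"]).squeeze()
    # Convert float samples in [-1, 1] to 16-bit PCM.
    pcm = (np.clip(audio, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)      # mono
        f.setsampwidth(2)      # 16-bit samples
        f.setframerate(result["sampling_rate"])
        f.writeframes(pcm.tobytes())


# Placeholder result (hypothetical): one second of a sine tone at 16 kHz.
# With the real model you would use: result = pipe("Hello world")
result = {"audio": np.sin(np.linspace(0, 440 * 2 * np.pi, 16000)), "sampling_rate": 16000}
save_wav(result, "out.wav")
```

The `save_wav` helper and the placeholder `result` are illustrative, not part of the model's API; swap in the actual pipeline output when running against the model.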