No doubt LLMs are not the be-all and end-all. That said, especially after seeing what the next-gen 'thinking' models like o1 from ClosedAI OpenAI can do, even LLMs are going to get absurdly good. And they are getting faster and cheaper at a rate that outpaces my most optimistic guess from 2 years ago; hell, even from 6 months ago.
Even if all progress on the software side stopped tomorrow, purpose-built silicon alone would make them cheaper and faster still. And that purpose-built hardware is coming very soon.
Open models are about 4-6 months behind in quality, but probably much closer (if not ahead) for small ~7B models that can be run locally on low- to mid-range consumer hardware.
This is amazing! If you are looking for US English and use a phone with arm64, I can recommend sherpa-onnx-1.10.27-arm64-v8a-en-tts-vits-piper-en_US-kristin-medium and sherpa-onnx-1.10.27-arm64-v8a-en-tts-vits-piper-en_US-norman-medium.
Edit: I can't seem to get it to show up as a system TTS engine, though it works well within the TTS app itself. Hopefully I'll find a fix; I've been searching for a good TTS engine for Android eBook apps.
Edit 2: Fixed. I had mistakenly downloaded the standalone version from https://k2-fsa.github.io/sherpa/onnx/tts/apk.html instead of the engine version from https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html.