Installation
Run the following command to add the speaker component to your project:
```bash
npx shadcn@latest add @flowui/speaker
```

Create Speaker Component
Create the component `speaker.tsx`.
```tsx
// This component renders a speaker icon that, when clicked, uses the Web Speech API
// to read the provided text aloud in the specified language.
import { cn } from "@/lib/utils";
import { Volume2 } from "lucide-react";
import { useCallback, useEffect, useState } from "react";

type SpeakerProps = {
  text: string;
  lang?: string;
  className?: string;
};

const Speaker = ({ text, lang = "en-IN", className }: SpeakerProps) => {
  // Detect Web Speech API support once, guarding against environments without `window`.
  const [isSupported] = useState<boolean>(() => {
    return typeof window !== "undefined" && "speechSynthesis" in window;
  });
  const [voice, setVoice] = useState<SpeechSynthesisVoice | null>(null);

  useEffect(() => {
    if (typeof window !== "undefined" && "speechSynthesis" in window) {
      const setPreferredVoice = () => {
        const voices = window.speechSynthesis.getVoices();
        if (voices.length === 0) return;
        // Try to find a specific, high-quality voice,
        // falling back to any voice with the same language prefix.
        const preferredVoice =
          voices.find((v) => v.lang === lang && v.name.includes("Google")) ||
          voices.find((v) => v.lang.startsWith(lang.split("-")[0]));
        if (preferredVoice) {
          setVoice(preferredVoice);
        }
      };
      // Voices often load asynchronously, so listen for the "voiceschanged" event.
      window.speechSynthesis.addEventListener("voiceschanged", setPreferredVoice);
      setPreferredVoice();
      return () => {
        window.speechSynthesis.removeEventListener("voiceschanged", setPreferredVoice);
      };
    }
  }, [lang]);

  const speakText = useCallback(() => {
    if (!isSupported) {
      console.error("Web Speech API is not supported.");
      return;
    }
    // 1. Cancel any currently speaking utterance
    window.speechSynthesis.cancel();
    // 2. Create the utterance
    const utterance = new SpeechSynthesisUtterance(text);
    // 3. Apply the preferred voice if available
    if (voice) {
      utterance.voice = voice;
    }
    // 4. Set other optional properties
    utterance.pitch = 1;
    utterance.rate = 1;
    utterance.lang = lang;
    // 5. Speak the text
    window.speechSynthesis.speak(utterance);
  }, [isSupported, voice, text, lang]);

  if (!isSupported) {
    return null;
  }

  return (
    <Volume2
      onClick={speakText}
      className={cn("size-6", className)}
    />
  );
};

export default Speaker;
```

Usage
Default Usage
```tsx
import Speaker from "@/components/ui/speaker";

export function DefaultExample() {
  return (
    <Speaker text="This is a text-to-speech example in English." lang="en-US" />
  );
}
```

Multiple Languages
The Speaker component supports various languages by passing the lang prop.
```tsx
// Spanish
<Speaker text="Este es un ejemplo de texto a voz." lang="es-ES" />

// French
<Speaker text="Ceci est un exemple de synthèse vocale." lang="fr-FR" />

// German
<Speaker text="Dies ist ein Beispiel für Text-zu-Sprache." lang="de-DE" />

// Japanese
<Speaker text="これはテキスト読み上げの例です。" lang="ja-JP" />
```

Props
The Speaker component accepts the following props:
| Prop | Type | Default | Description |
|---|---|---|---|
| text | string | - | The text to be read aloud. |
| lang | string | "en-IN" | The BCP 47 language tag for the speech synthesis voice. |
| className | string | undefined | Additional class names to apply to the component. |
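As a quick illustration of the props together, the sketch below passes all three. The Tailwind utility classes on `className` are assumptions for this example, not part of the component:

```tsx
import Speaker from "@/components/ui/speaker";

export function StyledExample() {
  return (
    // The utility classes here are illustrative; any class names can be passed via className.
    <Speaker
      text="Hola, ¿cómo estás?"
      lang="es-ES"
      className="size-8 text-blue-500 cursor-pointer"
    />
  );
}
```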
Supported Languages
The lang prop accepts BCP 47 language tags. Common examples include:
- en-US (English - United States)
- en-GB (English - United Kingdom)
- es-ES (Spanish - Spain)
- fr-FR (French - France)
- de-DE (German - Germany)
- ja-JP (Japanese - Japan)
- zh-CN (Chinese - China)
Note: The voices that are actually available depend on the browser and operating system. The component attempts to find a Google voice for the specified language, falling back to any voice matching the language prefix.
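To see which voices a given browser and OS expose, you can log `speechSynthesis.getVoices()`. This is a small sketch meant for the browser console, not part of the component:

```ts
// List the voices the current browser/OS exposes.
// getVoices() may be empty until the "voiceschanged" event has fired.
window.speechSynthesis.addEventListener("voiceschanged", () => {
  for (const v of window.speechSynthesis.getVoices()) {
    console.log(`${v.name} (${v.lang})${v.default ? " [default]" : ""}`);
  }
});

// Some browsers populate the list synchronously, so check right away too:
console.log(window.speechSynthesis.getVoices().map((v) => v.lang));
```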
Web Speech API Reference
This component utilizes the Web Speech API, specifically the SpeechSynthesis interface. It provides text-to-speech functionality directly in the browser.
For more information on browser compatibility and advanced usage, refer to the MDN Web Docs.
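For a rough sense of what the component does under the hood, here is a minimal, framework-free sketch of the same flow (support check, voice selection, and speaking an utterance). The function name and defaults are illustrative, not part of the component's API:

```ts
// Minimal sketch of the underlying Web Speech API calls, independent of React.
function speak(text: string, lang = "en-US"): void {
  if (typeof window === "undefined" || !("speechSynthesis" in window)) {
    console.error("Web Speech API is not supported.");
    return;
  }

  // Stop anything that is currently being spoken.
  window.speechSynthesis.cancel();

  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = lang;
  utterance.rate = 1;
  utterance.pitch = 1;

  // Pick a voice matching the requested language prefix, if one is available.
  const match = window.speechSynthesis
    .getVoices()
    .find((v) => v.lang.startsWith(lang.split("-")[0]));
  if (match) utterance.voice = match;

  window.speechSynthesis.speak(utterance);
}

speak("Hello from the Web Speech API.", "en-US");
```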