Web Speech API

Experimental: This is an experimental technology.
Check the Browser compatibility table carefully before using it in production.

The Web Speech API enables you to incorporate voice data into web apps. It has two parts: SpeechSynthesis (text-to-speech) and SpeechRecognition (asynchronous speech recognition).

Web Speech Concepts and Usage

The Web Speech API lets web apps handle voice data. There are two components to this API:

  • Speech recognition is accessed via the SpeechRecognition interface, which provides the ability to recognize speech from an audio input (normally via the device's default speech recognition service) and respond appropriately. Generally you'll use the interface's constructor to create a new SpeechRecognition object, which has a number of event handlers available for detecting when speech is input through the device's microphone. The SpeechGrammar interface represents a container for a particular set of grammar rules that your app should recognize. Grammar is defined using the JSpeech Grammar Format (JSGF).
  • Speech synthesis is accessed via the SpeechSynthesis interface, a text-to-speech component that allows programs to read out their text content (normally via the device's default speech synthesizer). Different voice types are represented by SpeechSynthesisVoice objects, and the different pieces of text you want spoken are represented by SpeechSynthesisUtterance objects. You can have them spoken by passing them to the SpeechSynthesis.speak() method.

For more details on using these features, see Using the Web Speech API.
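
The recognition component can be sketched as follows. This is a minimal, illustrative example: the color list, the grammar name, and the buildJSGF helper are assumptions made for the demo, and the prefixed webkitSpeechRecognition constructor is what Chromium-based browsers actually expose.

```javascript
// Build a JSGF grammar string from a rule name and a word list.
// (Pure helper; the name "buildJSGF" is illustrative, not part of the API.)
function buildJSGF(name, words) {
  return `#JSGF V1.0; grammar ${name}; public <${name}> = ${words.join(" | ")};`;
}

const colors = ["red", "green", "blue"];

// Browser-only portion, guarded so the sketch is safe to load anywhere.
if (typeof window !== "undefined" &&
    ("SpeechRecognition" in window || "webkitSpeechRecognition" in window)) {
  const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
  const SpeechGrammarList = window.SpeechGrammarList || window.webkitSpeechGrammarList;

  const recognition = new SpeechRecognition();
  const grammarList = new SpeechGrammarList();
  grammarList.addFromString(buildJSGF("color", colors), 1); // weight 1
  recognition.grammars = grammarList;
  recognition.lang = "en-US";
  recognition.interimResults = false;
  recognition.maxAlternatives = 1;

  recognition.onresult = (event) => {
    // First alternative of the first result.
    console.log("Heard:", event.results[0][0].transcript);
  };

  recognition.start(); // Prompts the user for microphone access.
}
```

Note that recognition must be served over a web server (not file://) in browsers that support it.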

Web Speech API Interfaces

Speech recognition

SpeechRecognition

The controller interface for the recognition service; this also handles the SpeechRecognitionEvent sent from the recognition service.

SpeechRecognitionAlternative

Represents a single word that has been recognized by the speech recognition service.

SpeechRecognitionError

Represents error messages from the recognition service.

SpeechRecognitionEvent

The event object for the result and nomatch events; it contains all the data associated with an interim or final speech recognition result.

SpeechGrammar

The words or patterns of words that we want the recognition service to recognize.

SpeechGrammarList

Represents a list of SpeechGrammar objects.

SpeechRecognitionResult

Represents a single recognition match, which may contain multiple SpeechRecognitionAlternative objects.

SpeechRecognitionResultList

Represents a list of SpeechRecognitionResult objects, or a single one if results are being captured in continuous mode.
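
The recognition interfaces above nest like a list of lists: a SpeechRecognitionResultList holds SpeechRecognitionResult objects, each of which holds ranked SpeechRecognitionAlternative objects. A sketch of walking that structure (collectFinalTranscripts is an illustrative helper name, and the commented wiring assumes a SpeechRecognition object named recognition):

```javascript
// Collect the top-ranked transcript of every final result in a
// SpeechRecognitionResultList-shaped structure (array-like of array-likes,
// where each result has an isFinal flag and indexed alternatives).
function collectFinalTranscripts(results) {
  const transcripts = [];
  for (let i = 0; i < results.length; i++) {
    const result = results[i];
    if (result.isFinal) {
      // result[0] is the highest-confidence SpeechRecognitionAlternative.
      transcripts.push(result[0].transcript);
    }
  }
  return transcripts;
}

// Typical wiring inside a result handler (browser only):
// recognition.onresult = (event) => {
//   console.log(collectFinalTranscripts(event.results).join(" "));
// };
```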

Speech synthesis

SpeechSynthesis

The controller interface for the speech service; this can be used to retrieve information about the synthesis voices available on the device, start and pause speech, and other commands besides.

SpeechSynthesisErrorEvent

Contains information about any errors that occur while processing SpeechSynthesisUtterance objects in the speech service.

SpeechSynthesisEvent

Contains information about the current state of SpeechSynthesisUtterance objects that have been processed in the speech service.

SpeechSynthesisUtterance

Represents a speech request. It contains the content the speech service should read and information about how to read it (e.g., language, pitch, and volume).

SpeechSynthesisVoice

Represents a voice that the system supports. Each SpeechSynthesisVoice is tied to a particular speech service and carries information about its language, name, and URI.

Window.speechSynthesis

Specified as part of a [NoInterfaceObject] interface called SpeechSynthesisGetter, and implemented by the Window object, the speechSynthesis property provides access to the SpeechSynthesis controller, and is therefore the entry point to speech synthesis functionality.
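
Putting the synthesis interfaces together, a minimal sketch looks like this. The pickVoice helper is illustrative (it assumes voice objects expose a lang property in BCP 47 form, as SpeechSynthesisVoice does); note that getVoices() may return an empty list until the voiceschanged event fires.

```javascript
// Pick the first voice whose language matches a given prefix, e.g. "en"
// matches "en-US" and "en-GB". (Pure helper; "pickVoice" is an
// illustrative name, not part of the Web Speech API.)
function pickVoice(voices, langPrefix) {
  return voices.find((v) => v.lang && v.lang.startsWith(langPrefix)) || null;
}

// Browser-only portion, guarded so the sketch is safe to load anywhere.
if (typeof window !== "undefined" && "speechSynthesis" in window) {
  const utterance = new SpeechSynthesisUtterance("Hello, world!");
  const voice = pickVoice(window.speechSynthesis.getVoices(), "en");
  if (voice) utterance.voice = voice;
  utterance.pitch = 1; // range 0 to 2
  utterance.rate = 1;  // range 0.1 to 10
  window.speechSynthesis.speak(utterance);
}
```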

Examples

The Web Speech API repo on GitHub contains demos to illustrate speech recognition and synthesis.

Specifications

Specification: Web Speech API

Browser compatibility

SpeechSynthesis

| Feature | Chrome | Edge | Firefox | Internet Explorer | Opera | Safari | WebView Android | Chrome Android | Firefox for Android | Opera Android | Safari on iOS | Samsung Internet |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Web_Speech_API | 33 | ≤18 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| cancel | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| getVoices | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| onvoiceschanged | 33 | 14 | 49 | No | No | No | No | 33 | 62 | No | No | 3.0 |
| pause | 33 | 14 | 49 | No | 21 | 7 | No | 33 ¹ | 62 ¹ | No | 7 | 3.0 ¹ |
| paused | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| pending | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| resume | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| speak | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| speaking | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| voiceschanged_event | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |

¹ In Android, pause() ends the current utterance; it behaves the same as cancel().
SpeechRecognition

| Feature | Chrome | Edge | Firefox | Internet Explorer | Opera | Safari | WebView Android | Chrome Android | Firefox for Android | Opera Android | Safari on iOS | Samsung Internet |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Web_Speech_API | 33 ¹ | ≤79 ¹ | No | No | No | 14.1 | 4.4.3 ¹ | 33 ¹ | No | No | 14.5 | 2.0 ¹ |
| SpeechRecognition | 33 | ≤79 | No | No | No | 14.1 | 37 | Yes | No | No | 14.5 | Yes |
| abort | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| audioend_event | 33 | 79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| audiostart_event | 33 | 79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| continuous | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| end_event | 33 | 79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| error_event | 33 | 79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| grammars | 33 | ≤79 | No | No | No | No | Yes | Yes | No | No | No | Yes |
| interimResults | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| lang | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| maxAlternatives | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| nomatch_event | 33 | 79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| onaudioend | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| onaudiostart | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| onend | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| onerror | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| onnomatch | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| onresult | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| onsoundend | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| onsoundstart | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| onspeechend | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| onspeechstart | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| onstart | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| result_event | 33 | 79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| serviceURI | 33 | ≤79 | No | No | No | No | Yes | Yes | No | No | No | Yes |
| soundend_event | 33 | 79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| soundstart_event | 33 | 79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| speechend_event | 33 | 79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| speechstart_event | 33 | 79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| start | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| start_event | 33 | 79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |
| stop | 33 | ≤79 | No | No | No | 14.1 | Yes | Yes | No | No | 14.5 | Yes |

¹ You'll need to serve your code through a web server for recognition to work.



© 2005–2021 MDN contributors.
Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later.
https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API