v3 makes use of React hooks to simplify the consumption of `react-speech-recognition`:
- Replacing the higher order component with a React hook
- Introducing commands, functions that get executed when the user says a particular phrase (see the sketch after this list)
- A clear separation between all parts of `react-speech-recognition` that are global (e.g. whether the microphone is listening or not) and local (e.g. transcripts). This makes it possible to have multiple components consuming the global microphone input while maintaining their own transcripts and commands
- Some default prop values have changed, so check those out below
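For instance, a command pairs a phrase with a callback, with `*` capturing arbitrary speech in its place. Here is a minimal sketch based on the v3 commands API (the `OrderTaker` component and its phrasing are illustrative):

```jsx
import React, { useState } from 'react'
import { useSpeechRecognition } from 'react-speech-recognition'

const OrderTaker = () => {
  const [message, setMessage] = useState('')

  // When the user says a matching phrase, the callback fires with
  // whatever speech the `*` wildcard captured
  const commands = [
    {
      command: 'I would like to order *',
      callback: (food) => setMessage(`Your order is for: ${food}`)
    }
  ]
  const { transcript } = useSpeechRecognition({ commands })

  return (
    <div>
      <p>{message}</p>
      <p>{transcript}</p>
    </div>
  )
}
```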
To migrate, replace the v2 higher order component with the new hook. A typical v2 component looked like this:

```jsx
import React from "react";
import PropTypes from "prop-types";
import SpeechRecognition from "react-speech-recognition";

const propTypes = {
  // Props injected by SpeechRecognition
  transcript: PropTypes.string,
  resetTranscript: PropTypes.func,
  browserSupportsSpeechRecognition: PropTypes.bool
};

const Dictaphone = ({
  transcript,
  resetTranscript,
  browserSupportsSpeechRecognition
}) => {
  if (!browserSupportsSpeechRecognition) {
    return null;
  }

  return (
    <div>
      <button onClick={resetTranscript}>Reset</button>
      <span>{transcript}</span>
    </div>
  );
};

Dictaphone.propTypes = propTypes;

export default SpeechRecognition(Dictaphone);
```
The equivalent v3 component is below. Automatically enabling the microphone without any user input is no longer encouraged, as most browsers now prevent this. This is due to concerns about privacy - users don't necessarily want their browser listening to them without being asked. The "auto-start" has been replaced with a button to trigger the microphone being turned on:
```jsx
import React from 'react'
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'

const Dictaphone = () => {
  const { transcript, resetTranscript, browserSupportsSpeechRecognition } = useSpeechRecognition()
  const startListening = () => SpeechRecognition.startListening({ continuous: true })

  if (!browserSupportsSpeechRecognition) {
    return null
  }

  return (
    <div>
      <button onClick={startListening}>Start</button>
      <button onClick={resetTranscript}>Reset</button>
      <p>{transcript}</p>
    </div>
  )
}

export default Dictaphone
```
`autoStart` was a global option in v2 that caused the microphone to start listening from the beginning by default. In v3, the microphone is initially turned off by default.

As explained above, automatically enabling the microphone without any user input is no longer encouraged, as most browsers now prevent it out of concern for privacy. The preferred approach is to have a button that starts the microphone when clicked.
However, if you still want an auto-start feature for testing purposes in Chrome, which still allows it, you can turn the microphone on when your component first renders: use `useEffect` if you're using hooks, or `componentDidMount` if you're still using class components. It is recommended that you do this close to the root of your application, as it affects global state:
```jsx
useEffect(() => {
  SpeechRecognition.startListening({ continuous: true })
}, [])
```
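The class component equivalent would be along these lines (a sketch, assuming `SpeechRecognition` is imported as in the examples above):

```jsx
componentDidMount() {
  SpeechRecognition.startListening({ continuous: true })
}
```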
`continuous` was another global option in v2 that, by default, made the microphone permanently listen to the user, even after they finished speaking. This default behaviour did not match the most common usage pattern, which is to use `react-speech-recognition` for "press to talk" buttons that stop listening once a command has been spoken.

`continuous` is now an option that can be passed to `SpeechRecognition.startListening`. It is `false` by default, but can be overridden like so:
```jsx
SpeechRecognition.startListening({ continuous: true })
```
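For illustration, a "press to talk" button under the default non-continuous mode could be as simple as this (a minimal sketch; the component name is illustrative):

```jsx
import React from 'react'
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'

const PushToTalk = () => {
  const { transcript } = useSpeechRecognition()

  // With the default continuous: false, the microphone stops
  // listening automatically once the user finishes speaking
  return (
    <div>
      <button onClick={() => SpeechRecognition.startListening()}>
        Push to talk
      </button>
      <p>{transcript}</p>
    </div>
  )
}
```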
`clearTranscriptOnListen` is a new prop in v3 that is passed into `useSpeechRecognition` by the consumer. Its default value makes a subtle change to the previous behaviour: when `continuous` was set to `false` in v2, the transcript would not be reset when the microphone started listening again. `clearTranscriptOnListen` changes that, clearing the component's transcript at the beginning of every new discontinuous speech. To replicate the old behaviour, it can be turned off when passing props into `useSpeechRecognition`:
```jsx
const { transcript } = useSpeechRecognition({ clearTranscriptOnListen: false })
```
`SpeechRecognition` used to inject props into components in v2. These props are still available, but in different forms:

- `transcript`: This is now state returned by `useSpeechRecognition`. This transcript is local to the component using the hook.
- `resetTranscript`: This is now state returned by `useSpeechRecognition`. This only resets the component's transcript, not any global state.
- `startListening`: This is now available as `SpeechRecognition.startListening`, an asynchronous function documented here.
- `stopListening`: This is now available as `SpeechRecognition.stopListening`, documented here.
- `abortListening`: This is now available as `SpeechRecognition.abortListening`, documented here.
- `browserSupportsSpeechRecognition`: This is now available as the function `SpeechRecognition.browserSupportsSpeechRecognition`, documented here.
- `listening`: This is now state returned by `useSpeechRecognition`. This is the global listening state.
- `interimTranscript`: This is now state returned by `useSpeechRecognition`. This transcript is local to the component using the hook.
- `finalTranscript`: This is now state returned by `useSpeechRecognition`. This transcript is local to the component using the hook.
- `recognition`: This is now returned by the function `SpeechRecognition.getRecognition`, documented here.
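To tie this together, a component can read any of this hook state directly (a minimal sketch; the `Status` component is illustrative):

```jsx
import React from 'react'
import { useSpeechRecognition } from 'react-speech-recognition'

const Status = () => {
  const {
    transcript,
    interimTranscript,
    finalTranscript,
    listening,
    resetTranscript
  } = useSpeechRecognition()

  return (
    <div>
      {/* `listening` reflects the global microphone state */}
      <p>Microphone: {listening ? 'on' : 'off'}</p>
      {/* These transcripts are local to this component */}
      <p>Interim: {interimTranscript}</p>
      <p>Final: {finalTranscript}</p>
      <p>Full: {transcript}</p>
      <button onClick={resetTranscript}>Reset</button>
    </div>
  )
}
```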