Merge pull request #163 from sitek94/master
Add syntax highlighting in readme and docs
JamesBrill authored Apr 14, 2024
2 parents 303cc7d + bfbfd6b commit c54ea33
Showing 4 changed files with 45 additions and 41 deletions.
36 changes: 20 additions & 16 deletions README.md
@@ -32,17 +32,21 @@ This version requires React 16.8 so that React hooks can be used. If you're used

To install:

-`npm install --save react-speech-recognition`
+```shell
+npm install --save react-speech-recognition
+```

To import in your React code:

-`import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'`
+```js
+import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'
+```

## Basic example

The most basic example of a component using this hook would be:

-```javascript
+```jsx
import React from 'react';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

@@ -103,7 +107,7 @@ You can find the full guide for setting up a polyfill [here](docs/POLYFILLS.md).
* Install `@speechly/speech-recognition-polyfill` in your web app
* You will need a Speechly app ID. To get one of these, sign up for free with Speechly and follow [the guide here](https://docs.speechly.com/quick-start/stt-only/)
* Here's a component for a push-to-talk button. The basic example above would also work fine.
-```javascript
+```jsx
import React from 'react';
import { createSpeechlySpeechRecognition } from '@speechly/speech-recognition-polyfill';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';
@@ -144,7 +148,7 @@ export default Dictaphone;

If you choose not to use a polyfill, this library still fails gracefully on browsers that don't support speech recognition. It is recommended that you render some fallback content if it is not supported by the user's browser:

-```javascript
+```js
if (!browserSupportsSpeechRecognition) {
// Render some fallback content
}
@@ -167,7 +171,7 @@ For all other browsers, you can render fallback content using the `SpeechRecogni

Even if the browser supports the Web Speech API, the user still has to give permission for their microphone to be used before transcription can begin. They are asked for permission when `react-speech-recognition` first tries to start listening. At this point, you can detect when the user denies access via the `isMicrophoneAvailable` state. When this becomes `false`, it's advised that you disable voice-driven features and indicate that microphone access is needed for them to work.

-```javascript
+```js
if (!isMicrophoneAvailable) {
// Render some fallback content
}
@@ -181,7 +185,7 @@ Before consuming the transcript, you should be familiar with `SpeechRecognition`
To start listening to speech, call the `startListening` function.
-```javascript
+```js
SpeechRecognition.startListening()
```
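Since `startListening` returns a promise, any follow-up that should only happen once the microphone is on belongs after an `await`. A minimal sketch of the pattern, using a hypothetical `fakeStartListening` stub in place of the real call:

```javascript
// `fakeStartListening` is a stand-in for SpeechRecognition.startListening,
// which likewise returns a promise.
const log = [];

const fakeStartListening = async () => {
  log.push('microphone on');
};

const main = async () => {
  await fakeStartListening();
  // Runs only after the "microphone" has been turned on
  log.push('safe to prompt the user to speak');
  return log;
};

main().then((result) => console.log(result.join(' -> ')));
```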
@@ -191,29 +195,29 @@ This is an asynchronous function, so it will need to be awaited if you want to d
To turn the microphone off, but still finish processing any speech in progress, call `stopListening`.
-```javascript
+```js
SpeechRecognition.stopListening()
```
To turn the microphone off, and cancel the processing of any speech in progress, call `abortListening`.
-```javascript
+```js
SpeechRecognition.abortListening()
```
## Consuming the microphone transcript
To make the microphone transcript available in your component, simply add:
-```javascript
+```js
const { transcript } = useSpeechRecognition()
```
## Resetting the microphone transcript
To set the transcript to an empty string, you can call the `resetTranscript` function provided by `useSpeechRecognition`. Note that this is local to your component and does not affect any other components using Speech Recognition.
-```javascript
+```js
const { resetTranscript } = useSpeechRecognition()
```
@@ -248,7 +252,7 @@ To make commands easier to write, the following symbols are supported:
### Example with commands
-```javascript
+```jsx
import React, { useState } from 'react'
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'
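As an aside on how the splat (`*`) symbol in command patterns captures free-form speech, a simplified matcher can be sketched in plain JavaScript. This is an illustration of the idea only, not the library's actual implementation (`makeMatcher` is a hypothetical helper):

```javascript
// Build a matcher for a command pattern where each '*' captures
// one or more words of arbitrary speech.
const makeMatcher = (command) => {
  // Escape regex metacharacters in the literal parts, then turn
  // each '*' into a greedy capture group
  const pattern = command
    .split('*')
    .map((part) => part.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'))
    .join('(.+)');
  const regex = new RegExp(`^${pattern}$`);
  return (speech) => {
    const match = speech.match(regex);
    // Return the captured fragments, or null when the speech doesn't match
    return match ? match.slice(1) : null;
  };
};

const match = makeMatcher('I would like to order *');
console.log(match('I would like to order a pizza')); // captures 'a pizza'
console.log(match('hello world')); // no match, so null
```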

@@ -318,13 +322,13 @@ By default, the microphone will stop listening when the user stops speaking. Thi
If you want to listen continuously, set the `continuous` property to `true` when calling `startListening`. The microphone will continue to listen, even after the user has stopped speaking.
-```javascript
+```js
SpeechRecognition.startListening({ continuous: true })
```
Be warned that not all browsers have good support for continuous listening. Chrome on Android in particular constantly restarts the microphone, leading to a frustrating and noisy (from the beeping) experience. To avoid enabling continuous listening on these browsers, you can make use of the `browserSupportsContinuousListening` state from `useSpeechRecognition` to detect support for this feature.
-```javascript
+```js
if (browserSupportsContinuousListening) {
SpeechRecognition.startListening({ continuous: true })
} else {
@@ -338,7 +342,7 @@ Alternatively, you can try one of the [polyfills](docs/POLYFILLS.md) to enable c
To listen for a specific language, you can pass a language tag (e.g. `'zh-CN'` for Chinese) when calling `startListening`. See [here](docs/API.md#language-string) for a list of supported languages.
-```javascript
+```js
SpeechRecognition.startListening({ language: 'zh-CN' })
```
@@ -359,7 +363,7 @@ If you are building an offline web app, you can detect when the browser is offli
## Developing

You can run an example React app that uses `react-speech-recognition` with:
-```
+```shell
npm i
npm run dev
```
30 changes: 15 additions & 15 deletions docs/API.md
@@ -9,15 +9,15 @@

React hook for consuming speech recorded by the microphone. Import with:

-```
+```js
import { useSpeechRecognition } from 'react-speech-recognition'
```

### Input props

These are passed as an object argument to `useSpeechRecognition`:

-```
+```js
useSpeechRecognition({ transcribing, clearTranscriptOnListen, commands })
```

@@ -37,7 +37,7 @@ See [Commands](../README.md#Commands).

These are returned from `useSpeechRecognition`:

-```
+```js
const {
transcript,
interimTranscript,
@@ -84,7 +84,7 @@ Transcription of speech that the Web Speech API has finished processing.

The Web Speech API is not supported on all browsers, so it is recommended that you render some fallback content if it is not supported by the user's browser:

-```
+```js
if (!browserSupportsSpeechRecognition) {
// Render some fallback content
}
@@ -94,7 +94,7 @@ if (!browserSupportsSpeechRecognition) {

Continuous listening is not supported on all browsers, so it is recommended that you apply some fallback behaviour if your web app uses continuous listening and is running on a browser that doesn't support it:

-```
+```js
if (browserSupportsContinuousListening) {
SpeechRecognition.startListening({ continuous: true })
} else {
@@ -106,7 +106,7 @@ if (browserSupportsContinuousListening) {

The user has to give permission for their microphone to be used before transcription can begin. They are asked for permission when `react-speech-recognition` first tries to start listening. This state will become `false` if they deny access. In this case, it's advised that you disable voice-driven features and indicate that microphone access is needed for them to work.

-```
+```js
if (!isMicrophoneAvailable) {
// Render some fallback content
}
@@ -116,7 +116,7 @@ if (!isMicrophoneAvailable) {

Object providing functions to manage the global state of the microphone. Import with:

-```
+```js
import SpeechRecognition from 'react-speech-recognition'
```

@@ -126,15 +126,15 @@ import SpeechRecognition from 'react-speech-recognition'

Start listening to speech.

-```
+```js
SpeechRecognition.startListening()
```

This is an asynchronous function, so it will need to be awaited if you want to do something after the microphone has been turned on.

It can be called with an options argument. For example:

-```
+```js
SpeechRecognition.startListening({
continuous: true,
language: 'zh-CN'
@@ -149,15 +149,15 @@ By default, the microphone will stop listening when the user stops speaking (`co

If you want to listen continuously, set the `continuous` property to `true` when calling `startListening`. The microphone will continue to listen, even after the user has stopped speaking.

-```
+```js
SpeechRecognition.startListening({ continuous: true })
```

##### language [string]

To listen for a specific language, you can pass a language tag (e.g. `'zh-CN'` for Chinese) when calling `startListening`.

-```
+```js
SpeechRecognition.startListening({ language: 'zh-CN' })
```

@@ -245,7 +245,7 @@ Some known supported languages (based on [this Stack Overflow post](http://stack

Turn the microphone off, but still finish processing any speech in progress.

-```
+```js
SpeechRecognition.stopListening()
```

@@ -255,7 +255,7 @@ This is an asynchronous function, so it will need to be awaited if you want to d

Turn the microphone off, and cancel the processing of any speech in progress.

-```
+```js
SpeechRecognition.abortListening()
```

@@ -269,14 +269,14 @@ This returns the underlying [object](https://developer.mozilla.org/en-US/docs/We

Replace the native Speech Recognition engine (if there is one) with a custom implementation of the [W3C SpeechRecognition specification](https://wicg.github.io/speech-api/#speechreco-section). If there is a Speech Recognition implementation already listening to the microphone, this will be turned off. See [Polyfills](./POLYFILLS.md) for more information on how to use this.

-```
+```js
SpeechRecognition.applyPolyfill(SpeechRecognitionPolyfill)
```

#### removePolyfill

If a polyfill was applied using `applyPolyfill`, reset the Speech Recognition engine to the native implementation. This can be useful when the user switches to a language that is supported by the native engine but not the polyfill engine.

-```
+```js
SpeechRecognition.removePolyfill()
```
10 changes: 5 additions & 5 deletions docs/POLYFILLS.md
@@ -8,15 +8,15 @@ Under the hood, Web Speech API in Chrome uses Google's speech recognition server

The `SpeechRecognition` class exported by `react-speech-recognition` has the method `applyPolyfill`. This can take an implementation of the [W3C SpeechRecognition specification](https://wicg.github.io/speech-api/#speechreco-section). From then on, that implementation will be used by `react-speech-recognition` to transcribe speech picked up by the microphone.

-```
+```js
SpeechRecognition.applyPolyfill(SpeechRecognitionPolyfill)
```

Note that a polyfill of this kind, which does not pollute the global scope, is known as a "ponyfill" - the distinction is explained [here](https://ponyfoo.com/articles/polyfills-or-ponyfills). `react-speech-recognition` will also pick up traditional polyfills - just make sure you import them before `react-speech-recognition`.

Polyfills can be removed using `removePolyfill`. This can be useful when the user switches to a language that is supported by the native Speech Recognition engine but not the polyfill engine.

-```
+```js
SpeechRecognition.removePolyfill()
```
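The polyfill/ponyfill distinction can be illustrated with a small sketch (all names here are hypothetical stand-ins, not the real APIs):

```javascript
const globalScope = {}; // stand-in for `window`

class FakeRecognition {} // stand-in Speech Recognition implementation

// Polyfill style: patches the global scope, so consumers pick it up implicitly
const installPolyfill = (scope) => {
  scope.SpeechRecognition = FakeRecognition;
};

// Ponyfill style: hands the implementation back to the caller, leaving
// the global scope untouched
const createPonyfill = () => ({ SpeechRecognition: FakeRecognition });

const { SpeechRecognition } = createPonyfill();
console.log(SpeechRecognition === FakeRecognition); // true
console.log('SpeechRecognition' in globalScope); // false
```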

@@ -45,7 +45,7 @@ Rather than roll your own, you should use a ready-made polyfill for a cloud prov

Here is a basic example combining `speech-recognition-polyfill` and `react-speech-recognition` to get you started. This code worked with version 1.0.0 of the polyfill in May 2021 - if it has become outdated due to changes in the polyfill or in Speechly, please raise a GitHub issue or PR to get this updated.

-```
+```jsx
import React from 'react';
import { createSpeechlySpeechRecognition } from '@speechly/speech-recognition-polyfill';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';
@@ -105,7 +105,7 @@ This is Microsoft's offering for speech recognition (among many other features).
Here is a basic example combining `web-speech-cognitive-services` and `react-speech-recognition` to get you started (do not use this in production; for a production-friendly version, read on below). This code worked with version 7.1.0 of the polyfill in February 2021 - if it has become outdated due to changes in the polyfill or in Azure Cognitive Services, please raise a GitHub issue or PR to get this updated.
-```
+```jsx
import React from 'react';
import createSpeechServicesPonyfill from 'web-speech-cognitive-services';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';
@@ -154,7 +154,7 @@ export default Dictaphone;
Your subscription key is a secret that you should not be leaking to your users in production. In other words, it should never be downloaded to your users' browsers. A more secure approach that's recommended by Microsoft is to exchange your subscription key for an authorization token, which has a limited lifetime. You should get this token on your backend and pass this to your frontend React app. Microsoft give guidance on how to do this [here](https://docs.microsoft.com/en-us/azure/cognitive-services/authentication?tabs=powershell).
Once your React app has the authorization token, it should be passed into the polyfill creator instead of the subscription key like this:
-```
+```js
const { SpeechRecognition: AzureSpeechRecognition } = createSpeechServicesPonyfill({
credentials: {
region: REGION,
10 changes: 5 additions & 5 deletions docs/V3-MIGRATION.md
@@ -10,7 +10,7 @@ v3 makes use of React hooks to simplify the consumption of `react-speech-recogni

### In v2

-```
+```jsx
import React, { Component } from "react";
import PropTypes from "prop-types";
import SpeechRecognition from "react-speech-recognition";
@@ -48,7 +48,7 @@ export default SpeechRecognition(Dictaphone);

Automatically enabling the microphone without any user input is no longer encouraged as most browsers now prevent this. This is due to concerns about privacy - users don't necessarily want their browser listening to them without being asked. The "auto-start" has been replaced with a button to trigger the microphone being turned on.

-```
+```jsx
import React, { useEffect } from 'react'
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'

@@ -79,7 +79,7 @@ Automatically enabling the microphone without any user input is no longer encour

However, if you still want an auto-start feature for the purposes of testing in Chrome, which still allows it, you can do the following: the microphone can be turned on when your component first renders by either `useEffect` if you're using hooks or `componentDidMount` if you're still using class components. It is recommended that you do this close to the root of your application as this affects global state.

-```
+```js
useEffect(() => {
SpeechRecognition.startListening({ continuous: true })
}, []);
@@ -91,15 +91,15 @@

`continuous` is now an option that can be passed to `SpeechRecognition.startListening`. It is `false` by default, but can be overridden like so:

-```
+```js
SpeechRecognition.startListening({ continuous: true })
```

## clearTranscriptOnListen

This is a new prop in v3 that is passed into `useSpeechRecognition` from the consumer. Its default value makes a subtle change to the previous behaviour. When `continuous` was set to `false` in v2, the transcript would not be reset when the microphone started listening again. `clearTranscriptOnListen` changes that, clearing the component's transcript at the beginning of every new discontinuous speech. To replicate the old behaviour, this can be turned off when passing props into `useSpeechRecognition`:

-```
+```js
const { transcript } = useSpeechRecognition({ clearTranscriptOnListen: false })
```
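The default-change described above can be illustrated with a toy transcript store (a hypothetical sketch for intuition, not the library's internals):

```javascript
// When clearTranscriptOnListen is true (the v3 default), starting a new
// listen clears the previous transcript; when false (the v2 behaviour),
// new speech keeps appending.
const createTranscriptStore = ({ clearTranscriptOnListen = true } = {}) => {
  let transcript = '';
  return {
    startListening() {
      if (clearTranscriptOnListen) transcript = '';
    },
    hear(words) {
      transcript = transcript ? `${transcript} ${words}` : words;
    },
    get transcript() {
      return transcript;
    },
  };
};

const v3Style = createTranscriptStore(); // default: clears on listen
v3Style.hear('first utterance');
v3Style.startListening();
console.log(v3Style.transcript); // cleared: ''

const v2Style = createTranscriptStore({ clearTranscriptOnListen: false });
v2Style.hear('first utterance');
v2Style.startListening();
console.log(v2Style.transcript); // kept: 'first utterance'
```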

