In the US, captions have historically been intended for users who cannot hear the audio. They should therefore contain a written form of every sound that occurs in a video or audio item, including sound effects and music in addition to dialogue. Subtitles were originally intended for users who could not understand the spoken language, and therefore contain only text representing speech. Subtitles that represent all audio may be referred to as SDH (subtitles for the deaf/hard of hearing).
For a user who is unable to hear the audio for a video item, I want to be able to select a closed caption that includes the written representation of all audio (as opposed to subtitles only).
Variation(s)
While this is appropriate for users with hearing impairments, it is also relevant in situations where video is played muted or where users are in a noisy environment. In these cases, either the user or the IIIF viewer itself may want to enable captions rather than subtitles (where both exist) so that maximum information is available.
Proposed Solutions
Introduce annotation motivations of `captioning` and `subtitling` to distinguish between subtitles that represent spoken words and captions that include all audio.
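As a sketch of how the proposed motivation might appear in practice, the annotation below follows the shape of a IIIF Presentation 3 text-track annotation (today typically published with the `supplementing` motivation), but uses the proposed `captioning` value instead. The `example.org` URIs, the label text, and the choice of WebVTT format are illustrative assumptions, not part of the proposal:

```json
{
  "id": "https://example.org/annotation/caption-en",
  "type": "Annotation",
  "motivation": "captioning",
  "body": {
    "id": "https://example.org/files/captions-en.vtt",
    "type": "Text",
    "format": "text/vtt",
    "language": "en",
    "label": { "en": ["English captions (all audio, including sound effects and music)"] }
  },
  "target": "https://example.org/canvas/1"
}
```

A parallel annotation carrying `"motivation": "subtitling"` for a speech-only track would then let a client distinguish the two and prefer the captioning track when the user cannot hear the audio.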
Additional Background
(more about your perspective, existing work, etc. goes here.)
What if you have data that are either captions or subtitles, but you don't know the underlying motivation for which they were created? Is there a way to distinguish between the two?
elynema changed the title from "I want to allow the client/user to identify captions for audio and video items that include audio description" to "I want to allow the client/user to identify captions for audio and video items that include representation of all sounds, not just dialogue" on Jan 17, 2024.