<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<meta charset="utf-8" />
<title>XR Accessibility User Requirements</title>
<script src="https://www.w3.org/Tools/respec/respec-w3c" class="remove"></script>
<script src="respec-config.js" class="remove"></script>
<script src="../biblio.js" class="remove"></script>
</head>
<body>
<section id="abstract">
<h2>Abstract</h2>
<p>This document lists user needs and requirements for people with disabilities when using virtual reality or immersive environments, augmented or mixed reality and other related technologies (XR). It first introduces a definition of <abbr title="Virtual and Augmented Reality">XR</abbr> as used throughout the document, then briefly outlines some uses of XR. It outlines the complexity of understanding XR, introduces some technical accessibility challenges such as the need for multi-modal support, synchronization of input and output devices and customization. It then outlines accessibility related user needs for XR and suggests subsequent requirements. This is followed by related work that may be helpful in understanding the complex technical architecture and processes behind how XR environments are built and what may form the basis of a robust accessibility architecture.</p>
<p>This document is most explicitly not a collection of baseline requirements. It is also important to note that some of the requirements may be implemented at a system or platform level, and some may be authoring requirements.</p>
</section>
<section id="sotd">
<p>To comment on this draft <a href="https://github.com/w3c/apa/issues/new">file an issue in the <abbr title="World Wide Web Consortium">W3C</abbr> APA GitHub repository</a>. If this is not feasible, send email to <a href="mailto:public-apa@w3.org">public-apa@w3.org</a> (<a href="https://lists.w3.org/Archives/Public/public-apa/">archives</a>). In-progress updates to the document may be viewed in the <a href="https://w3c.github.io/apa/xaur/">publicly visible editors' draft</a>.</p>
</section>
<section>
<h2>Introduction</h2>
<p>XR is an acronym used to refer to the spectrum of hardware, applications, and techniques used for virtual reality or immersive environments, augmented or mixed reality and other related technologies. This document is developed as part of a discovery into accessibility related user needs and requirements for XR. This document does not represent a formal working group position, nor does it currently represent a set of technical requirements that a developer or designer needs to strictly follow. It aims to outline the diversity of some current accessibility related user needs in XR and what potential requirements to meet those needs may be.</p>
<section>
<h2>What does the term 'XR' mean?</h2>
<p>As with the <a href="https://immersive-web.github.io/webxr/">WebXR API spec</a> and as indicated in the related <a href="https://github.com/immersive-web/webxr/blob/master/explainer.md#what-is-webxr">WebXR explainer</a>, this document uses the acronym XR to refer to the spectrum of hardware, applications, and techniques used for virtual reality or immersive environments, augmented or mixed reality and other related technologies. Examples include, but are not limited to:</p>
<ul>
<li>Immersive or augmented environments used for education, gaming, multimedia, 360° content and other applications.</li>
<li>Head mounted displays, whether they are opaque, transparent, or utilise video passthrough.</li>
<li>Mobile devices with positional tracking.</li>
<li>Fixed displays with head tracking capabilities.</li>
</ul>
<p>The important commonality between them is that they all offer some degree of spatial tracking with which to simulate a view of virtual content, as well as navigation and interaction with the objects within these environments.</p>
<p>Terms like "XR Device", "XR Application", etc. are generally understood to apply to any of the above. Portions of this document that only apply to a subset of these devices will be indicated as appropriate.</p>
</section>
<section>
<h3>Definitions of virtual reality and immersive environments</h3>
<p>Virtual reality and immersive environment definitions vary but converge on the notion of immersive computer-mediated experiences. They involve interaction with objects, people and environments using a range of controls. These experiences are often multi-sensory and may be used for educational, therapeutic or entertainment purposes.</p>
</section>
<section>
<h3>Definitions of augmented and mixed reality</h3>
<p>Augmented and mixed reality definitions vary but converge on the notion of computer-mediated interactions involving overlays on the real world. These may be informational, or interactive depending on the application.</p>
</section>
</section>
<section>
<h3>What is XR used for?</h3>
<p>XR has a range of purposes, from work and education to gaming, multimedia and communication. It is evolving at a fast rate; while not yet mainstream, this will change as computing power increases, hardware becomes cheaper and the quality of the user experience improves. XR will be more commonly used for the performance of work tasks, therapeutic uses, education and entertainment.</p>
</section>
<section>
<h2>Understanding XR and Accessibility Challenges</h2>
<p>Understanding XR itself presents various technical challenges. These include issues with a range of hardware, software and authoring tools. To make accessible XR experiences there is a need to understand interaction design principles, accessibility semantics and assistive technologies. However, these all represent 'basic' complexities that are in themselves substantial. To add to this, many designers and authors may neither know people with disabilities nor have access to them for usability testing. Nor may they have a practical way of understanding accessibility related user needs from which they can build a solid set of requirements. In short, they just may not understand what user needs they are trying to meet.</p>
<p>Some of the issues in XR, for example in gaming, for people with disabilities include:</p>
<ul>
<li><strong>Over emphasis on motion controls</strong>. There are many motion controllers that emphasise using your body to control the experience. Some games with XR components may lock out traditional control methods when a VR headset is being used, and the user should always be able to use a range of input mechanisms. <br />
<div class="note"><abbr title="Three Degrees of Freedom">3DOF</abbr> and <abbr title="Six Degrees of Freedom">6DOF</abbr> may have their own specific mobility issues, for example <abbr title="Three Degrees of Freedom">3DOF</abbr> may have implications for people who have motor impairments that affect the use of one or both arms. <abbr title="Six Degrees of Freedom">6DOF</abbr> may have implications for people who are quadriplegic and for people that use a wheelchair or mobility aid for navigation where there is a need to move directionally in physical space or a higher emphasis on the lower extremity for movement.</div>
</li>
<li><strong>VR headsets need the user to be in a physical position to play</strong>. The user should not have to be in a particular physical position, such as standing or sitting, to play a game or perform some action; or there should be the ability to remap these 'physical positions' to other controls (such as using <a href="https://www.walkinvrdriver.com">WalkinVRDriver</a>).</li>
<li><strong>Games and hardware being locked to certain manufacturers</strong>. Consoles should allow full button remapping on standard game controllers to different types of assistive technologies, such as switches. These remapping preferences should be portable and transferable across a range of hardware devices and software.</li>
<li><strong>Gamification of VR forces game dynamics on the user</strong>. Some users may wish to just explore an immersive environment without the 'game'.</li>
<li><strong>Audio design lacks spatial accuracy</strong>. Sound design needs particular attention and can be critical for a good user experience for people with disabilities. The auditory experience of a game or other immersive environment may 'be' the experience [[able-gamers]].</li>
</ul>
<p>There are a range of disabilities that will need to be considered in making XR accessible. It is beyond the scope of this document to describe them all in detail. General categories or types of disabilities are:</p>
<ul>
<li>Auditory disabilities</li>
<li>Cognitive disabilities</li>
<li>Neurological disabilities</li>
<li>Physical disabilities</li>
<li>Speech disabilities</li>
<li>Visual disabilities</li>
</ul>
<p>A person may have one of these disabilities or a combination of several. User needs are presented here that may relate to several of these disabilities with a range of requirements that should be met by the author or the platform. For XR designers and authors understanding these needs is crucial when making XR environments accessible.</p>
<p>Some things designers and authors need to be aware of:</p>
<ul>
<li>Understanding specific diverse user needs and how they relate to XR.</li>
<li>Successfully identifying modality needs that are not obvious - but still need to be supported.</li>
<li>Suitable authoring tools that support accessibility requirements in XR.</li>
<li>Using languages, platforms and engines that support accessibility semantics.</li>
<li>Providing accessible alternatives for content and interaction.</li>
<li>The provision of specific commands within the VR environment (e.g., to go directly to a specified location or to follow another user) which assist with navigation to support different modalities.</li>
<li>The use of virtual assistive technologies (e.g., a white cane via a haptic device) to provide non-visual feedback. The research identified that if the same audio cues associated with a real-world infrared white cane were used in an immersive environment, users were able to effectively centre themselves in the middle of pathways and walk successfully through virtual doorways based on the same audio feedback as used in the equivalent real-world device [[maidenbaum-amendi]].</li>
</ul>
<section>
<h3>Immersive Environment challenges</h3>
<p>Some of the accessibility challenges within immersive environments (and gaming) include the use of extremely complex input devices; control schemes that require a high degree of precision, timing and simultaneous action; the ability to distinguish subtle differences in busy visual and audio information; and having to juggle multiple complex goals and objectives [[web-adapt]].</p>
<p>There are also currently very useful accessibility guidelines available that are specific to gaming [[game-a11y]].</p>
</section>
<section>
<h2>XR and supporting multimodality</h2>
<p>Modality relates to modes of sense perception such as sight, hearing, touch and so on. Accessibility can be thought of as supporting multi-modal requirements and the transformation of content or aspects of a user interface from one mode to another that will support various user needs.</p>
<p>Considering various modality requirements in the foundation of XR means these platforms will be better able to support accessibility related user needs. There will be many modality aspects for the developer and/or content author to consider.</p>
<p>XR authors and content designers will also need access to tools that support the multi-modal requirements listed below. </p>
<p>The following inputs and outputs can be considered modalities that should be supported in XR environments.</p>
</section>
<section>
<h3>Various input modalities </h3>
<p>The following are examples of some of the diverse input methods used by people with disabilities. In many real-world applications these input methods may be combined.</p>
<ul>
<li><strong>Speech</strong> - this is where a user's voice is the main input. Using a range of speech commands, a user should be able to navigate in an XR environment and interact with the objects in that environment using their voice alone.</li>
<li><strong>Keyboard</strong> - this is where the keyboard alone is the user's main input. A user should be able to navigate in an XR environment and interact with the objects in that environment using the keyboard alone (see the illustrative sketch after this list).</li>
<li><strong>Switch</strong> - this is where a single button switch alone is the user's main input. A user should be able to navigate in an XR environment and interact with the objects in that environment using a switch alone. This switch may be used in conjunction with an assistive technology scanning application within the XR environment that allows them to select directions for navigation, and macros for communication and interaction.</li>
<li><strong>Gesture</strong> - this is where gesture-based controllers are the main input and can be used to navigate in an XR environment, interact with the objects in that environment and make selections using gestures alone.</li>
<li><strong>Eye Tracking</strong> - this is where an eye tracking application is the main input. Using a range of commands, a user should be able to navigate in an XR environment and interact with the objects in that environment using eye tracking alone.</li>
</ul>
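<p>As a minimal, non-normative illustration of how more than one of the above input modalities can drive the same actions, the sketch below wires both keyboard events and WebXR controller 'select' events to the same application behaviour. The <code>moveUser()</code> and <code>activateFocusedObject()</code> helpers, and the <code>xrSession</code> variable, are hypothetical and stand in for application-specific code.</p>
<pre class="example">
// Non-normative sketch: keyboard as an equal input alongside XR controllers.
// moveUser(), activateFocusedObject() and xrSession are hypothetical.
window.addEventListener('keydown', (event) => {
  switch (event.key) {
    case 'ArrowUp':    moveUser('forward');  break;
    case 'ArrowDown':  moveUser('backward'); break;
    case 'ArrowLeft':  moveUser('left');     break;
    case 'ArrowRight': moveUser('right');    break;
    case 'Enter':      activateFocusedObject(); break;
  }
});

// The same activation remains available from an XR controller.
xrSession.addEventListener('select', () => activateFocusedObject());
</pre>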
</section>
<section>
<h3>Various output modalities </h3>
<p>The following is a list of outputs that can be available to a user to help them understand, interact with and 'sense' feedback from an XR application. Some of these are in common use on the Web, while others are exploratory (such as olfactory and gustatory).</p>
<ul>
<li>Tactile - this is using the sense of touch, commonly referred to as haptics (see the illustrative sketch after this list).</li>
<li>Visual - this is using the sense of sight, such as 2D and 3D graphics.</li>
<li>Auditory - this is using the sense of sound, such as rich spatial audio, surround sound.</li>
<li>Olfactory - this is the sense of smell.</li>
<li>Gustatory - this is the sense of taste.</li>
</ul>
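<p>As a non-normative illustration of the tactile modality, the following sketch pulses a haptic actuator on an XR controller. It assumes the <code>XRInputSource</code> exposes a Gamepad object with a haptic actuator, which is not guaranteed on all hardware.</p>
<pre class="example">
// Non-normative sketch: trigger haptic (tactile) feedback on an XR controller.
// Assumes the input source exposes a Gamepad with a haptic actuator.
function pulseController(inputSource) {
  const actuator = inputSource.gamepad?.hapticActuators?.[0];
  if (actuator) {
    actuator.pulse(0.8, 100); // intensity from 0 to 1, duration in milliseconds
  }
}
</pre>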
</section>
<section>
<h2>XR controller challenges</h2>
<p>As mentioned, there are a range of input devices that may be used. Supporting these controllers requires an understanding of what they are and how they work.
There are a variety of alternative gaming controls that may be very useful in XR environments and applications. For example, the <a href="https://www.xbox.com/en-US/xbox-one/accessories/controllers/xbox-adaptive-controller">Xbox Adaptive Controller</a>.</p>
<p>While XR is the experience, the controller plays a critical part in overcoming some complexity as well as mediating issues that may relate to other challenges around usability and helping the user understand sensory substitution devices. </p>
<p>Controllers such as the Xbox Adaptive Controller and other switch type inputs allow the user to remap keyboard inputs to control or interact with virtual environments. These powerful customizations allow the user to "do that thing that is difficult" for them with ease. In conjunction with this controller, for example, users with limited mobility can also simulate actions in the XR environment that they would not be able to perform physically. <a href="https://www.walkinvrdriver.com">WalkinVRDriver</a> is a good example of this where motion range, position and orientation can be set to the user's ability.</p>
</section>
<section>
<h3>Customization of control inputs</h3>
<p>Give the user the ability to modify their input preference or use a variety of input devices. The remapping of keys used to control movement or interaction in virtual environments is not currently required by WCAG. It is nevertheless noted in the literature as desirable.</p>
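<p>A minimal, non-normative sketch of one way an application might expose remappable controls follows. The default bindings, the storage key and the <code>moveUser()</code> and <code>activateFocusedObject()</code> helpers are hypothetical.</p>
<pre class="example">
// Non-normative sketch: a user-editable control map instead of hard-coded keys.
// The helpers moveUser() and activateFocusedObject() are hypothetical.
const defaultBindings = { forward: 'w', backward: 's', activate: 'Enter' };

// Merge any saved user remapping over the defaults.
const saved = JSON.parse(localStorage.getItem('controlBindings') || '{}');
const bindings = Object.assign({}, defaultBindings, saved);

window.addEventListener('keydown', (event) => {
  if (event.key === bindings.forward) moveUser('forward');
  else if (event.key === bindings.backward) moveUser('backward');
  else if (event.key === bindings.activate) activateFocusedObject();
});
</pre>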
</section>
<section>
<h3> Using multiple diverse inputs simultaneously </h3>
<p>A user with a disability may have several input devices or different assistive technologies. A user may switch the 'mode' of interaction, or the tools used, and this should not degrade the user experience to the point where they lose focus on a task and cannot return to it, or make unwanted input.</p>
<p>Complexity needs to be managed and co-ordinated between different kinds of assistive technology in immersive environments. There is a platform level requirement to support multiple assistive technologies in a cohesive manner. This would allow combinations to be used in a co-ordinated way, e.g. where the user's day-to-day AT can be used with other AT that may already be embedded in the environment.</p>
<p class="note">The REQ 5b: Voice activation also indicates potential issues with pairing multiple devices via Bluetooth.</p>
</section>
<section>
<h3>Consistent tracking with multiple inputs </h3>
<p>There may be tracking issues when switching input devices. A tracking issue is where the user's focus may be lost or modified in unpredictable or unwanted ways; this can cause loss of focus and potentially push the user to make unwanted inputs or choices.</p>
<p>Outputs sent to multiple devices will need to be synchronised.</p>
</section>
<section>
<h2>Usability and affordances in XR</h2>
<p>An XR application should have a high level of usability for someone with a disability using assistive technology. Therefore, communicating affordances successfully is critical and needs to be done in a way that supports multiple modalities. Some related questions are:</p>
<ul>
<li>How can affordances be successfully translated from one modality to another?</li>
<li>Can affordances be mediated or transformed as needed by the user's own modality preferences?</li>
<li>Should affordances change depending on context of use? What interactions are allowed or not allowed? </li>
<li>How can we ensure that what happens in one modality is reflected in another, so that various modalities are not out of sync, e.g. synchronization of captions between real time text transcriptions and other alternatives such as symbols or AAC?</li>
</ul>
<p class="note">Regarding the discoverability of accessibility features in XR. It is important for designers of accessible XR to understand how to categorize various accessibility features and understand where to place them, in a menu for example. An accessibility related accommodation may have multiple contexts of use that may not be obvious. For example, the suggested use of "mono" in User Need 19 is not just an accessibility feature under a hearing-impaired category, as it is also useful for users with spatial orientation impairments or cognitive and learning disabilities. Care should be taken to ensure these features are categorized in menus correctly and discoverable in multiple contexts.</p>
</section>
</section>
<section>
<h2>XR User Needs and Requirements</h2>
<p>This document outlines various accessibility related user needs for XR. These user needs should drive accessibility requirements for XR and its related architecture. These come from people with disabilities who use assistive technologies and wish to see the features described available within XR enabled applications.</p>
<p>User needs and requirements are often dependent on context of use. The following outlines some accessibility user needs and requirements that may be applicable in immersive environments, augmented reality and 360° applications.</p>
<p>These are neither exhaustive nor definitive, but are presented to help orientate the reader towards understanding some broad user needs and how to meet them.</p>
<section>
<h3>Immersive semantics and customization</h3>
<ul>
<li><strong>User Need 1:</strong> A user of assistive technology wants to navigate, identify locations, objects and interact within an immersive environment.</li>
<li><strong>REQ 1a:</strong> Navigation mechanisms must be intuitive with robust affordances. Navigation, location and object descriptions must be accurate and identified in a way that is understood by assistive technology. </li>
<li><strong>REQ 1b:</strong> Controls need to support alternative mapping, rearranging of position, resizing and sensitivity adjustment.</li>
<li><strong>REQ 1c:</strong> Objects that are important within any given context of time and place can be identified in a suitable modality.</li>
<li><strong>REQ 1d:</strong> Allow the user to filter or sort objects and content.</li>
<li><strong>REQ 1e:</strong> Allow the user to query objects and content for more details.</li>
</ul>
<p class="note">In an spatialized augmented reality environment a blind user may find a combination of text to speech and sonic symbols helpful. By using a combination of text to speech and sonic symbolism a blind user can do a self-guided tour of a given area using their smartphone. [[spatialized-navigation]]</p>
</section>
<section>
<h3>Motion agnostic interactions</h3>
<ul>
<li><strong>User Need 2:</strong> A person with a physical disability may want to interact with items in an immersive environment in a way that doesn't require particular bodily movement to perform any given action.</li>
<li><strong>REQ 2a:</strong> Allow the user to perform an action in the environment, in a device independent way, without having to do so physically.</li>
<li><strong>REQ 2b:</strong> Ensure that all areas of the user interface can be accessed using the same input method.</li>
<li><strong>REQ 2c:</strong> Allow multiple input methods to be used at the same time.</li>
</ul>
<p class="note">There are accessibility issues specific to augmented reality. For example, the user may be expected to scan the environment, or scan physical objects, to determine the placement of virtual objects. The user may need to mark a location or an area in space so that the AR application can generate appropriate virtual objects. The user should be able to perform these actions in a motion agnostic way.</p>
</section>
<section>
<h4>Immersive personalization</h4>
<ul>
<li><strong>User Need 3:</strong> Users with cognitive and learning disabilities may need to personalize the immersive experience in various ways.</li>
<li><strong>REQ 3a:</strong> Support symbol sets so they can be used to communicate and be layered over objects and items to convey affordances or other needed information in a way that can be understood according to user preference.</li>
<li><strong>REQ 3b:</strong> Allow the user to turn off or 'mute' non-critical environmental content such as animations, visual or audio content, or non-critical messaging.</li>
</ul>
<aside class="note">
<p>Personalization involves tailoring aspects of the user experience to meet the needs and preferences of the individual user. W3C are working on various modules for web content that aim to support personalization and are exploring areas such as:</p>
<ul>
<li>Expanding the accessibility information that may be supplied by the author. [[personalization-semantics]]</li>
<li>Facilitating preference driven individual personalization. [[personalization-content]] </li>
<li>Enabling the author to specify key semantics needed to support users with cognitive impairments. [[personalization-requirements]]</li>
</ul>
</aside>
</section>
<section>
<h4>Interaction and target customization</h4>
<ul>
<li><strong>User Need 4:</strong> A user with limited mobility, or users with tunnel or peripheral vision may need a larger 'Target size' for a button or other controls.</li>
<li><strong>REQ 4a:</strong> Ensure fine motion control is not needed to activate an input.</li>
<li><strong>REQ 4b:</strong> Ensure hit targets are large enough with suitable spacing around them.</li>
<li><strong>REQ 4c:</strong> Ensure multiple actions or gestures are not required at the same time to perform any action.</li>
<li><strong>REQ 4d:</strong> Support 'Sticky Keys' requirements such as serialization for various inputs when the user needs to press multiple buttons.</li>
</ul>
<p class="note">Users with cognitive and learning disabilities need to understand what items in a visual display are actionable targets and how to interact with them. There is a need for accessibility API's that map custom user interface actions to control types. These actions can then be understood by a broad range of assistive technologies. This would help indicate to users what targets are actionable, and how they can interact with them. By supporting this kind of adaptation and personalization the user can select preferred, familiar options from a set of alternatives. The W3C have produced a useful list of these patterns that could help readers understand the user needs of people with cognitive and learning disabilities, as well as in the development of suitable APIs. [[coga-usable]], especially <a href="https://www.w3.org/TR/coga-usable/#design_for_everyone">section 4, the Design Guide</a>.</p>
</section>
<section>
<h4>Voice commands</h4>
<ul>
<li><strong>User Need 5:</strong> A user with limited mobility may want to be able to use Voice Commands within the immersive environment, to navigate, interact and communicate with others.</li>
<li><strong>REQ 5a:</strong> Ensure navigation and interaction can be controlled by voice activation (see the illustrative sketch after this list).</li>
<li><strong>REQ 5b:</strong> Voice activation should preferably use native screen readers or voice assistants rather than external devices to eliminate the additional step needed to pair devices.</li>
</ul>
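<p>The following non-normative sketch shows one possible approach to REQ 5a using the Web Speech API. The command phrases and the <code>moveUserTo()</code> and <code>selectObject()</code> helpers are hypothetical.</p>
<pre class="example">
// Non-normative sketch: simple voice commands via the Web Speech API.
// moveUserTo() and selectObject() are hypothetical application helpers.
const Recognizer = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new Recognizer();
recognition.continuous = true;

recognition.onresult = (event) => {
  const result = event.results[event.results.length - 1];
  const phrase = result[0].transcript.trim().toLowerCase();
  if (phrase.startsWith('go to ')) {
    moveUserTo(phrase.slice('go to '.length));
  } else if (phrase.startsWith('select ')) {
    selectObject(phrase.slice('select '.length));
  }
};
recognition.start();
</pre>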
</section>
<section>
<h4>Color changes</h4>
<ul>
<li><strong>User Need 6:</strong> Color blind users may need to be able to customise the colors used in the immersive environment. This will help with understanding affordances of various controls or where color is used to signify danger or permission.</li>
<li><strong>REQ 6a:</strong> Provide customised high contrast skins for the environment to suit luminosity and color contrast requirements.</li>
</ul>
</section>
<section>
<h4>Magnification context and resetting</h4>
<ul>
<li><strong>User Need 7:</strong> Screen magnification users may need to be able to check the context of their view in immersive environments.</li>
<li><strong>REQ 7a:</strong> Allow the screen magnification user to check the context of their view and track/reset focus as needed.</li>
<li><strong>REQ 7b:</strong> Where it makes sense (such as in menus) interface elements can be enlarged and the menu reflowed to enhance the usability of the interface up to a certain magnification requirement.</li>
</ul>
<p class="note">There are customisation approaches such as the automatic generation of user interfaces as demonstrated in the SUPPLE project, which adapt to the different challenges the user may face, such as vision, motor control and other user preferences and abilities. A generated UI can make multiple adaptations for different user needs at the same time. This is achieved by generating a UI, or several - after testing a person's ability using an algorithm to learn their preferences. [[supple-project]]</p>
</section>
<section>
<h4>Critical messaging and alerts</h4>
<ul>
<li><strong>User Need 8:</strong> Screen magnification users may need to be made aware of critical messaging and alerts in immersive environments often without losing focus. They may also need to route these messages to a 'second screen' (see <strong>REQ 14</strong> Second Screen).</li>
<li><strong>REQ 8a:</strong> Ensure that critical messaging, or alerts, have priority roles that can be understood and flagged to AT without moving focus (one possible approach is sketched after this list).</li>
</ul>
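<p>One possible, non-normative approach to REQ 8a is sketched below: critical messages are routed to an ARIA live region so assistive technology can announce them without moving focus. It assumes the application exposes a DOM overlay element with the hypothetical id <code>xr-overlay</code>.</p>
<pre class="example">
// Non-normative sketch: announce critical messages to AT without moving focus.
// Assumes a DOM overlay element with the hypothetical id "xr-overlay".
const alerts = document.createElement('div');
alerts.setAttribute('role', 'alert'); // an implicit assertive live region
document.getElementById('xr-overlay').appendChild(alerts);

function announceCritical(message) {
  alerts.textContent = message; // updating the text triggers the announcement
}
</pre>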
</section>
<section>
<h4>Gestural interfaces and interactions</h4>
<ul>
<li><strong>User Need 9:</strong> A blind user may wish to interact with a gestural interface, such as a virtual menu system.</li>
<li><strong>REQ 9a:</strong> Support touch screen accessibility gestures (e.g. swipes, flicks and single, double or triple taps with 1, 2 or 3 fingers). See <strong>REQ 14</strong> Second Screen.</li>
<li><strong>REQ 9b:</strong> Using a virtual menu system - enable a self-voicing option and have each category, or item description, spoken as they receive focus via a gesture or other input. As the blind user gestures to trigger both movement and interaction they may get more detail about items that are closer to them. The user must be allowed to query and interrogate these items and make selections.</li>
<li><strong>REQ 9c:</strong> Allow for the re-mapping of gestures to associate different actions with different input types or gestures. This may be a virtual switch that can map to new macros on the fly. This will allow the user to change defaults and employ gestures to carry out new actions offered by the immersive environment as required.</li>
</ul>
</section>
<section>
<h4>Signing videos and text description transformation</h4>
<ul>
<li><strong>User Need 10:</strong> A deaf or hard of hearing person, for whom a written language may not be their first language, may prefer signing of video for text, objects or item descriptions.</li>
<li><strong>REQ 10a:</strong> Allow text, objects or item descriptions to be presented to the user via a signing avatar (pre-recorded only).</li>
<li><strong>REQ 10b:</strong> Any signing video should be a minimum of one-third the size of the original video stream. This requirement comes from research on 240 ASL videos intended to be watched by deaf signers, which found that nearly all set the signer size to at least one-third the size of the full video. [[raja-asl]]</li>
</ul>
<aside class="note">
<p>
Currently, it is not possible to provide an accurate live interpretation via a signing avatar. In general, animated or digital signing avatars should be avoided as users find them less expressive than recorded video of humans, who can convey the natural quality and skill provided by appropriately trained and qualified interpreters and translators. Therefore, uses of signing avatars should rely only on pre-recordings of 'real people' who are trained and qualified interpreters and translators. See the concerns expressed by the WFD and WASLI 'Statement on Use of Signing Avatars'. [[wfd-wasli]]</p>
<p>However, we note this is an emerging field and exploration is encouraged to ensure the future development of quality signing avatars. For example, this could be via building a signing avatar that both provides a face with fully functioning muscular variables and can successfully parse the nuances of vocal expression and meaning.</p>
</aside>
</section>
<section>
<h4>Safe harbour controls</h4>
<ul>
<li><strong>User Need 11:</strong> People with cognitive impairments may be easily overwhelmed in immersive environments.</li>
<li><strong>REQ 11a:</strong> Allow the user to set a 'safe place' - quick key, shortcut or macro.</li>
</ul>
</section>
<section>
<h4>Immersive time limits</h4>
<ul>
<li><strong>User Need 12:</strong> Users may be adversely affected by spending too much time in an immersive environment or experience, and may lose track of time. </li>
<li><strong>REQ 12a:</strong> Provide platform integration with tools that support digital wellbeing, and allow the user to access alarms for time limits during an immersive session.</li>
</ul>
</section>
<section>
<h4>Orientation and navigation</h4>
<ul>
<li><strong>User Need 13:</strong> A screen magnification user or user with a cognitive and learning disability or spatial orientation impairment needs to maintain focus and understand where they are in immersive environments.</li>
<li><strong>REQ 13a:</strong> Ensure the user can reset and calibrate their orientation/view in a device independent way.</li>
<li><strong>REQ 13b:</strong> Ensure the field of view in immersive environments is appropriate and can be personalized, so users are not disorientated.</li>
<li><strong>REQ 13c:</strong> Provide clear visual or audio landmarks.</li>
</ul>
</section>
<section>
<h4>Second screen devices</h4>
<ul>
<li><strong>User Need 14:</strong> Users of assistive technology, such as blind or deaf-blind users communicating via an RTC application in XR, may have sophisticated 'routing' requirements for various inputs and outputs, and the need to manage them.</li>
<li><strong>REQ 14a:</strong> Allow the user to route text output, alerts, environment sounds or audio to a braille or other second screen device.</li>
<li><strong>REQ 14b:</strong> Ensure that the user can manage the flow of critical messaging, or content to display on a second screen.</li>
<li><strong>REQ 14c:</strong> Support touch screen accessibility gestures (e.g. swipes, flicks and single, double or triple taps with 1, 2 or 3 fingers) on a second screen device to allow the user to navigate menus and interact.</li>
</ul>
<p class="note">'Second screen' is a term used in this document to denote any another external output device, such as a monitor or sound card, or assistive technology such as braille output. The use of the term is not restricted to just these devices and can refer to any output device a user may choose.</p>
</section>
<section>
<h4>Interaction speed</h4>
<ul>
<li><strong>User Need 15:</strong> Users with physical disabilities or cognitive and learning disabilities may find some interactions too fast to keep up with or maintain.</li>
<li><strong>REQ 15a:</strong> Allow users to change the speed at which they travel or perform interactions in an immersive environment.</li>
<li><strong>REQ 15b:</strong> Allow timings for interactions or critical inputs to be modified or extended.</li>
<li><strong>REQ 15c:</strong> Provide help for the user with a cognitive or learning disability.</li>
<li><strong>REQ 15d:</strong> Provide clear start and stop mechanisms.</li>
</ul>
<p class="note"> The term 'help' for REQ 15c may vary from explanatory information such as textual/symbolic annotations in an application, to human assistance in real time.</p>
</section>
<section>
<h4>Avoiding sickness triggers</h4>
<ul>
<li><strong>User Need 16:</strong> Users with vestibular disorders, epilepsy, or photosensitivity may find that some interactions trigger motion sickness and other effects. These may be triggered by teleportation or other movements in XR.</li>
<li><strong>REQ 16a:</strong> Avoid interactions that trigger epilepsy or motion sickness and provide alternatives.</li>
<li><strong>REQ 16b:</strong> Ensure flickering images are kept to a minimum, do not flash more than three times per second (which can trigger seizures), or can be turned off or reduced.</li>
</ul>
</section>
<section>
<h4>Spatial audio tracks and alternatives</h4>
<ul>
<li><strong>User Need 17:</strong> Hard of hearing users may need accommodations to perceive audio.</li>
<li><strong>REQ 17a:</strong> Provide spatialized audio content to emulate three-dimensional sound forms in immersive environments (see the illustrative sketch after this list).</li>
<li><strong>REQ 17b:</strong> Provide text descriptions of important audio content.</li>
</ul>
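<p>As a non-normative illustration of REQ 17a, the sketch below positions a sound source in three-dimensional space with the Web Audio API. The element selector and the coordinates are placeholder values.</p>
<pre class="example">
// Non-normative sketch: spatialized (3D) audio with the Web Audio API.
const ctx = new AudioContext();
const source = ctx.createMediaElementSource(document.querySelector('audio'));

// Place the sound two metres to the right and one metre in front of the listener.
const panner = new PannerNode(ctx, {
  panningModel: 'HRTF',
  positionX: 2,
  positionY: 0,
  positionZ: -1
});

source.connect(panner).connect(ctx.destination);
</pre>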
</section>
<section>
<h4>Spatial orientation: Mono audio option</h4>
<ul>
<li><strong>User Need 18:</strong> Users with spatial orientation impairments, cognitive impairments or hearing loss in just one ear may miss information in a stereo or binaural soundscape.</li>
<li><strong>REQ 18a:</strong> Allow mono audio sound to be sent to both headphones so that the user can perceive the whole soundscape through either ear (see the illustrative sketch after this list). [[mono-ios]]</li>
</ul>
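<p>A minimal, non-normative sketch of REQ 18a using the Web Audio API is shown below: forcing an explicit single-channel node downmixes the stereo signal, and the mono result is then copied to both output channels. The element selector is a placeholder.</p>
<pre class="example">
// Non-normative sketch: offer a 'mono audio' option by downmixing to one channel.
const ctx = new AudioContext();
const source = ctx.createMediaElementSource(document.querySelector('audio'));

const mono = ctx.createGain();
mono.channelCount = 1;               // force a single channel...
mono.channelCountMode = 'explicit';  // ...so stereo input is downmixed to mono
mono.channelInterpretation = 'speakers';

// The mono signal is copied to both ears at the stereo destination.
source.connect(mono).connect(ctx.destination);
</pre>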
<p class="note">People with traumatic brain injuries can have a range of impairments. These may be spatial orientation impairments, auditory processing difficulties, visual processing difficulties or a combination. They may miss information in stereo or binaural soundscapes. This can affect orientation while navigating. Even if provided with accurate directions, they may not recognize surroundings, or experience anxiety when navigating. </p>
</section>
<section>
<h4>Captioning, Subtitling and Text: Support and customization</h4>
<ul>
<li><strong>User Need 19:</strong> Users may need to customise captions, subtitles and other text in XR environments.</li>
<li><strong>REQ 19a:</strong> Provide support for captioning and subtitling of multimedia content (see the illustrative sketch after this list).</li>
<li><strong>REQ 19b:</strong> Allow customisable context sensitive reflow of captions, subtitles and text content in XR environments. The suitable subtitling area may be smaller than what is required currently for television. [[inclusive-seattle]]</li>
</ul>
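<p>A minimal, non-normative sketch of REQ 19a is shown below: a caption track is attached to the media element carrying the immersive content. The cue text and timings are placeholders.</p>
<pre class="example">
// Non-normative sketch: attach a captions track to the media element
// driving the immersive or 360° content. Cues are placeholders.
const video = document.querySelector('video');
const track = video.addTextTrack('captions', 'English captions', 'en');
track.mode = 'showing';
track.addCue(new VTTCue(0, 4, 'Narrator: Welcome to the virtual gallery.'));
track.addCue(new VTTCue(4, 8, '[Footsteps approaching from the left]'));
</pre>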
<p class="note">The <a href="https://www.w3.org/community/immersive-captions/">W3C Immersive Captions Community Group</a> is actively contributing to this emerging accessibility standards work representing a diverse range of user needs.</p>
</section>
</section>
<section>
<h2>Related Documents</h2>
<p>Other documents that relate to this and represent current work in the RQTF/APA are:</p>
<ul>
<li> <a href="https://www.w3.org/WAI/APA/wiki/XRA-Semantics-Module">XR Semantics Module</a> - this document outlines proposed accessibility requirements that may be used in a modular way in immersive, augmented or mixed reality (XR). A modular approach may help us to define clear accessibility requirements that support XR accessibility user needs, as they relate to the immersive environment, objects, movement, and interaction accessibility. Such a modular approach may help the development of clear semantics, designed to describe specific parts of the immersive eco-system. In immersive environments it is imperative that the user can understand what objects are, understand their purpose, as well as another qualities and properties including interaction affordance, size, form, shape, and other inherent properties or attributes.</li>
<li><a href="https://www.w3.org/WAI/APA/wiki/WebXR_Standards_and_Accessibility_Architecture_Issues"> WebXR Standards and Accessibility Architecture Issues</a> - this document is informative and aims to outline some of the challenges in understanding the complex technical architecture and processes behind how XR environments are currently rendered. To make these environments accessible and provide a quality user experience it is important to also understand the nuances and complexity of accessible user interface design and development for the 2D web. Any attempt to make XR accessible needs to be based on meeting the practical user needs of people with disabilities.</li>
</ul>
</section>
<section class="appendix" id="change-log">
<h2 id="c-change-log">Change Log<a class="self-link" aria-label="§" href="#change-log"></a></h2>
<p>The following is a list of new requirements and other changes in this document:</p>
<ul>
<li><strong>Immersive semantics and customization:</strong>
REQ 1c: Objects that are important within any given context of time and place can be identified in a suitable modality.</li>
<li><strong>Immersive semantics and customization:</strong>
REQ 1d: Allow filtering and the ability to query items and content for more details. </li>
<li><strong>REQ 1e:</strong> Allow the user to query objects and content for more details.</li>
<li> <strong>Mono audio option:</strong>
REQ 18a: Allow mono audio sound to be sent to both headphones so that the user can perceive the whole soundscape through either ear. [[mono-ios]]</li>
<li> <strong>Interaction and target customization:</strong>
REQ 4d: Support 'Sticky Keys' requirements such as serialization for various inputs when the user needs to press multiple buttons.</li>
<li> <strong>Gestural interfaces and interactions</strong>
REQ 9a: Support touch screen accessibility gestures (e.g. swipes, flicks and single, double or triple taps with 1, 2 or 3 fingers).</li>
<li> <strong>Magnification context and resetting </strong>
REQ 7b: Where it makes sense (such as in menus) interface elements can be enlarged and the menu reflowed to enhance the usability of the interface up to a certain magnification requirement.</li>
<li> <strong>Gestural interfaces and interactions</strong>
REQ 9a: Support touch screen accessibility gestures (e.g. swipes, flicks and single, double or triple taps with 1, 2 or 3 fingers) on a second screen device to navigate menus and interact. </li>
<li> <strong>Gestural interfaces and interactions</strong>
REQ 9c: Allow for the re-mapping of gestures to associate different actions with different input types or gestures. This may be a virtual switch that can map to new macros on the fly. This will allow the user to change defaults and employ gestures to carry out new actions offered by the immersive environment as required.</li>
<li><strong>Signing videos and text description transformation</strong> REQ 10b: Any signing video should be a minimum of one-third the size of the original video stream. This requirement comes from research on 240 ASL videos intended to be watched by deaf signers, which found that nearly all set the signer size to at least one-third the size of the full video.</li>
<li> <strong>Orientation and navigation</strong> REQ 13c: Provide clear visual or audio landmarks.</li>
<li> <strong>Second screen devices</strong>
REQ 14b: Ensure that the user can manage the flow of critical messaging, or content to display on a second screen.</li>
<li> <strong>Second screen</strong> REQ 14c: Support touch screen accessibility gestures (e.g. swipes, flicks and single, double or triple taps with 1, 2 or 3 fingers) on a second screen device to allow the user to navigate menus and interact.</li>
<li><strong>Spatial audio tracks and alternatives</strong> REQ 17b: Provide text descriptions of important audio content.</li>
</ul>
<p>Requirements have been updated based on combined review feedback, discussion and Research Questions Task Force consensus. Other user needs have been edited to better reference related requirements such as with Second screen devices.</p>
<p>Various clarification or reference notes have been added relating to: </p>
<ul>
<li>The importance of discoverability relating to accessibility features in XR.</li>
<li>Specific augmented reality issues when marking locations in a motion agnostic way.</li>
<li>W3C Personalization semantics and how they can support people with cognitive impairments.</li>
<li>The need for accessibility APIs that map custom user interface actions to control types.</li>
<li>The limitations of accurate live interpretation via a digital signing avatar.</li>
<li>The impact of traumatic brain injury on a user's spatial orientation, auditory and visual processing abilities.</li>
</ul>
</section>
<section class="appendix">
<h2>Acknowledgements</h2>
<section>
<h3>Participants of the APA working group active in the development of this document</h3>
<ul>
<li>Shadi Abou-Zahra, W3C</li>
<li>Matthew Tylee Atkinson, The Paciello Group</li>
<li>Judy Brewer, W3C</li>
<li>Michael Cooper, W3C</li>
<li>Scott Hollier, Invited Expert</li>
<li>Joshue O'Connor, W3C</li>
<li>Janina Sajka, Invited Expert</li>
<li>Jason White, Educational Testing Service</li>
</ul>
</section>
<section>
<h3>Previously active participants, commenters, and other contributors</h3>
<ul>
<li>Frances Baum</li>
<li>Nicoló Carpignoli, Chialab </li>
<li>Wendy Dannels, RIT/NTID Research Center on Culture and Language</li>
<li>David Fazio, Helix Opportunity</li>
<li>Markku Hakkinen, Educational Testing Service</li>
<li>Charles Hall, Invited Expert</li>
<li>Ian Hamilton</li>
<li>John Kirkwood</li>
<li>Raja Kushalnagar, Department of Science, Technology and Mathematics Gallaudet University</li>
<li>Charles LaPierre, Benetech</li>
<li>Thomas Logan, Equal Entry</li>
<li>Melina Möhlne, IRT</li>
<li>Estel·la Oncins Noguer, UAB TransMedia Catalonia</li>
<li>Christopher Patnoe, Google</li>
<li>Devon Persing, Shopify</li>
<li>Sonali Rai, Royal National Institute of the Blind</li>
<li>Ajay Sharma, HCL Technologies</li>
<li>Suzanne Taylor, Things Entertainment</li>
<li>Léonie Watson, TetraLogical</li>
</ul>
</section>
<section>
<h3>Enabling Funders</h3>
<p>This work is supported by the <a href="https://www.w3.org/WAI/about/projects/wai-guide/">EC-funded WAI-Guide Project</a>.</p>
</section>
</section>
</body>
</html>