
Communications of the ACM

Research Highlights

Technical Perspective: Pandemics, Remote Work, and Accessibility


A remote worker sits at a computer on a desk.

Credit: Microsoft

Many of us pivoted dramatically at the beginning of COVID-19, dropping in-person meetings and taking up a variety of remote collaboration tools with little sense of how to keep productivity high. The good news was that productivity across many sectors did not fall. The even better news was that the flexibility afforded by remote work (and the absence of lengthy commutes) improved work-life balance for many. The bad news was that, when we were thrust into a fully remote environment, accessibility considerations were not sufficiently supported by our tools and practices. New barriers to full participation in the workplace arose as a result.

Seeing this as an opportunity, a team of accessibility researchers at Microsoft spent the summer of 2020 observing their own experiences. The following paper reviews what they learned. Their observations are insightful and sometimes quite surprising. Their recommendations are actionable and would improve both our collaboration tools and the practices surrounding their use.

Before touching on what these researchers found, let's unpack the title of their paper: Mixed Abilities and Varied Experiences: A Group Autoethnography of a Virtual Summer Internship. "Mixed abilities" denotes that the team included members with preexisting disabilities (including vision, hearing, and cognitive difficulties), members with initially unknown disabilities (motion sickness induced by unsteady video), and members with no known disabilities. "Autoethnography" refers to the approach the team took to observe, document, and report on their own behavior. While some might think a much larger and more controlled experiment would be preferable to the observations of and by a team of 11, the richness of their findings suggests theirs was the right approach. "Internship" points to the fact that the group included summer interns with somewhat staggered starting and ending dates, working alongside more senior research staff.

Remote collaboration within the team was supported by commercially available text, audio, video, and screen-sharing tools. These tools, while essential for the remote work to happen at all, presented multiple accessibility challenges. Delays in automatic speech-to-text transcription made following a remote presentation difficult. This was compounded by the fact that not all meeting participants turned on transcription, so there was no shared awareness of the delays that might have prompted the speaker to slow down a bit and help everyone have the same experience. In another instance, the absence of a within-tool alerting mechanism caused a hearing-impaired speaker to keep presenting while on mute, unable to see the hidden chat window and unable to hear people calling out the problem. The situation was finally resolved when someone held up a paper sign that read: "You're on mute."

New practices ranged from simple techniques, such as each team member announcing their name each time they spoke (for those who could not see the video), to complex ones, such as setting up parallel channels to bring an American Sign Language interpreter into a shared environment. Even simple practices broke down, however, both over the course of a single meeting (forgetting to say one's name before speaking, for example) and over the course of the summer (where variability in attendance, and the associated mix of abilities, impeded the development of stable habits).


Back channels, nearly always text based, emerged as a powerful means of providing on-the-fly support. Sometimes this took the form of someone spontaneously writing a textual description of something on the screen that a visually impaired participant would otherwise miss. Other times, it took the form of allyship, in which a team member would remind someone of the need to follow an accessibility practice, or gently ask a disabled colleague whether they would like them to speak up in support of a needed accommodation. But back channels also increased cognitive load, making it difficult to attend to the meeting content.

Perhaps the most interesting set of problems arose from failures to adequately support sign language interpreters in shared meeting spaces. First, the meeting software keyed off spoken audio to highlight the current speaker. This meant the interpreter, not the signer, was highlighted on the screen (a violation of the in-person norm of attending to the signer rather than the interpreter). Second, as there was no easy way to associate the interpreter with the signer, it was not always clear what role an interpreter played. Third, the absence of a robust linkage between an interpreter and a signer meant they might be assigned to different breakout rooms, forcing last-minute workarounds to bring them back together.

Remote and hybrid work is likely here to stay. This paper provides guidance on how to make it work better for everyone.


Author

John Richards is an ACM Fellow and a Distinguished Research Scientist at IBM's Thomas J. Watson Research Center, Yorktown Heights, NY, USA, where he explores the future of remote and hybrid work.


Footnotes

To view the accompanying paper, visit doi.acm.org/10.1145/3604622


Copyright held by author.
Request permission to (re)publish from the owner/author
