Succeeding on the Marginal Path

The third article in a series by my guest blogger, Melissa Giles, about text, editing, and media accessibility.

For many people, career-building moments such as job offers and invitations to attend industry events are not only seized and accepted, but also celebrated. For others, including Sydney-based writer and editor Gaele Sobott, no matter how flattering or significant such opportunities might be, there may be no choice but to turn them down because of disabling transport systems, built environments and organisational cultures.

Black and white headshot of Gaele, before a bookshelf. She has chin-length blond hair and glasses and is wearing a dark turtleneck.

Sobott, who has a form of muscular dystrophy, usually works from home in a space designed for her specific needs. She received advice on aspects of the set-up from an occupational therapist whom she employed through the National Disability Insurance Scheme. As a member of the Australian Society of Authors, Sobott was also eligible for a benevolent fund grant to purchase an ergonomic chair that has subsequently relieved the pressure sores, neck spasms and migraines she previously endured.

Appropriate office furniture, computer hardware and use of the internet enable Sobott’s home-based employment. Her capacities are further enhanced with chat apps, meeting software, email and the track changes function in Microsoft Word. Being at home allows her to avoid ‘the stress of trying to negotiate a city and workplaces that are built for non-disabled bodies’. She says she can rest, sleep, eat, stretch and adjust the lighting as required, which helps her to better manage her energy levels.

During Sobott’s long career, which has included writing children’s fiction, editing publications, working on theatre productions and directing disability-led arts organisation Outlandish Arts, she has developed substantial expertise as a wordsmith. Because of her skills, she is sometimes offered projects that entail being onsite at other organisations. She now declines if her requirements – such as manageable hours, and toilets and buildings that are accessible with her mobility device – are not met.

‘The stress, physical pain and exhaustion I experience from [inaccessible workplaces] mean the work is not worth taking on,’ Sobott says. ‘I have learnt to say no, no matter how prestigious the work may be, how much I admire the organisation or people in charge or how much money may be offered.’

Her need to turn down otherwise beneficial opportunities also extends to industry events. For instance, a peak arts agency invited her to one of its meetings – in a room that she could only have accessed by walking up a long flight of stairs. The inaccessible location made Sobott feel like her presence was not important to the agency. She encourages event organisers to be mindful that venue choices can directly exclude people and devalue their participation.

Alternative linguistic paradigms

In contrast with the recommended term person with disability, Sobott calls herself a disabled person, as a political statement. She is comfortable saying that she has impairments, but refuses to say that she has disabilities because, from her perspective, she is disabled by society.

‘Although there is no doubt that I experience pain, fatigue and other difficulties due to my various impairments, the more distressing aspects of my existence and the factors that I find most damaging to my mental and physical health are social and economic,’ Sobott says.

She is committed to ‘developing a consciousness of how language is used to oppress disabled people and enforce disablism’ because, she says, language is never neutral.

‘We make choices about the words we use, and we have a responsibility to understand the connotation of the words we choose. I try to interrogate metaphors and other forms of speech in the same way I investigate and understand any theory or concept before I use it in my work.’

The political power of metaphors is important for Sobott. She would like writers and editors to replace ubiquitous and damaging pejorative disability metaphors with ‘innovative, politically accountable uses of metaphor that make people think more deeply and alternatively’.

New voices and perspectives

Much of Sobott’s recent work has involved disabled writers she has met through Outlandish Arts and /dis'rʌpt/ (Disrupt), a publication she co-edits with blind editor Amanda Tink, which was designed to showcase disabled writers internationally. Sobott says that such writers see many topics differently because of their embodied experience of life in a world that commonly excludes them.

When editing their writing, Sobott has noticed that, at times, she must ‘disengage from the dominant editorial and discursive paradigms’. One example is recognising that editors do not always have to enforce standard grammar and spelling when non-standard word choices are important for communicating particular emotions or affording particular rhythms.

Sobott finds working with writers who have disabilities rewarding for many reasons. In her experience, they often communicate new ways of thinking and are fearless about experimenting and improvising across a range of art forms.

‘They end up with exciting, surprising outcomes that often challenge the reader or audience to question preconceived notions,’ Sobott says.

For more people to access the richness of disabled writers’ work, of course, more work by disabled writers must be published. Sobott wants the Australian writing and publishing industry to be more inclusive and support this aim. She highlights Writers Victoria’s award-winning Write-ability Fellowship program as one outstanding example that helps emerging disabled writers to develop professionally.

To increase the number of disabled editors, Sobott advocates for the creation of dedicated internships with publishers and the introduction of employment quotas. She urges established editors not only to mentor emerging disabled editors, but also to use their position within their workplaces, unions and other representative bodies to agitate for affirmative action.

About the author

Melissa Giles is a copyeditor from Brisbane. She would like to advance the understanding of communication accessibility and related professional practices. This includes encouraging diversity within the editing profession and highlighting ways that editors and organisations can incorporate people who are often overlooked in the communication process.

This article was first published in the Editors Queensland July 2019 newsletter OffPress. Editors Queensland is a branch of the Institute of Professional Editors Ltd (IPEd) in Australia.

Be Natural Needs to Be Accessible

Review of Be Natural: The Untold Story of Alice Guy-Blaché (2018)

Old black and white photo of a sign on wall inside an industrial space that says BE NATURAL.
The sign at Alice Guy-Blaché’s Solax Studio in the US. She wanted her actors to remember this guideline in order to avoid the overacting so common in films during the silent era.

Six years ago, I made a pledge to a Kickstarter campaign for the documentary Be Natural. There have been some bumps in the road for supporters, which is to be expected for a passion project, I suppose: some of the perks never arrived, some were disappointing, and some have yet to arrive (promised for after the home-video release). It seems creators Pamela Green and Jarik van Sluijs had more success with the campaign and the film's development than they had anticipated. I applaud them. They've brought a major project to fruition.

However, when I finally got to see it at one of only three Toronto screenings, I was underwhelmed. This was mostly due to its inaccessibility but also its overall direction. Then I learned more about the premise of Alice being “discovered” by the filmmakers. I’d like to share some thoughts on this film because I’ve yet to see a review online that wasn’t bursting at the seams with enthusiasm, and I think some balance in its reception is merited.

Robert Redford and narrator Jodie Foster are listed as two of the executive producers, which gives it some cachet, I expect. However, direction by Green is problematic. As a backer, I got to see teasers and contacted the production company about issues with the subtitling as it stood then; I did not get a reply. The final product is less than accessible to several groups of people.

The music is far too loud and drowns out other elements, making it hard to follow the documentary's extremely fast pace. There are some interesting ideas for indicating sources—for example, audio tape via a little icon in the corner of the screen—but often these are more cute than necessary, adding clutter to a busy screen; I found it hard to track such indicators alongside maps, dynamic travel routes, superimposed pictures and image captions, and subtitles. The viewer is forced to pick and choose what information to take in, which is a shame because the material is quite interesting.

I wish I could have acted as a consultant for them on the function and form of effective captions and subtitles. In fact, there are no captions at all for the film because of the overcrowded shots, so right off the bat the film is not available to Deaf, deaf, or hard of hearing viewers, or any other group who uses captions. The subtitles are also full of bad line breaks and inconsistencies. Worst of all, text deemed worthy of highlighting is often presented in a cursive font, in paragraph form, across the width of the screen, making it difficult to read, particularly against the busy, patterned backgrounds. In addition, the reading speed required for the longer excerpts is too high given the font and form. Access to the French audio through the subtitling is therefore very poor.

Black and white still showing 2-line subtitle with far too many characters/words per line

The other unfortunate directorial choice is the overall pacing. Understandably, the creators wanted to share as much of Alice's history as possible within the time available, and the story is fascinating. But too much of the search effort is shown, so the highlighted clips from Alice's films are on-screen for only about a second (maybe two) each. In that time, the viewer has to take in the visual and the captioned title and date before they are on to another example; a series of these makes it impossible to really learn much about the films Guy-Blaché made, which I would have thought was a major goal of the documentary. So, the information itself is often inaccessible.

This is the type of film that would have benefited from subtitle editing (and captioning), and it could have been done if a director of accessibility and translation had been consulted, as recommended by Pablo Romero-Fresco. As it stands, there are multiple barriers to the documentary.

But that’s only what I gleaned from watching it. I’ve since learned that, in fact, Alice’s history has been under our noses all along. (Note to self: do your homework before backing on Kickstarter.) It turns out that there have been a few documentaries made about her already, including a 1996 Canadian doc and an eponymous 1997 German one. There’s also a book about her, Alice Guy-Blaché: Lost Visionary of the Cinema by Alison McMahan (2003). And there is no lack of knowledge and discussion about her in academia or the blogosphere/internet. So, the hype of discovering Alice is misplaced.

Alice's life is an interesting topic to cover in a documentary film, to be sure, but Be Natural seems a bit meta, focusing more on Green's journey to find Alice than on sharing Alice's story in a well-paced exposition. I came away feeling I had missed a lot of the audio and visual in it and retained bare bones. Another audience member asked me afterwards if I remembered the location of the Solax studio, and we decided we'd have to google it when we got home. (It was Fort Lee, New Jersey.)

I feel like the film festival and critical hype is due to others not having done their homework, either. It's a nice film, but it's not earth-shattering in form or in what it uncovers. Since it's so hard to find screenings, why not find some clips of her films on YouTube or watch this introduction to Alice Guy-Blaché on Vimeo? The captions there aren't perfect either, but the video allows you the time to appreciate the creative and technical contributions Alice made in a man's industry a hundred years ago.

PS: I did have to laugh at one clip that was shown long enough to take in. In the 1913 A House Divided, about a separated couple living together, there’s some acrimony. Plus ça change…

Communication Design and the End of Inscrutable Objects

The second article in a series by my guest blogger, Melissa Giles, about text, editing, and media accessibility.

Imagine if you were blind and were frequently emailed invoices as PDF files that your screen reader could not access, or if you were repeatedly mailed unusable hard-copy magazines because the sender said they could not provide an accessible digital version. These things happened to Jonathan Craig, a writer and editor from Brisbane. What surprises him the most is that the senders were disability service providers.

Torso shot of Jonathan Craig in his wheelchair at a table, coffee cup in hand.

These kinds of experiences are commonplace for people with vision impairment, but they can largely be prevented or solved if document creators have greater awareness and motivation.

Many other accessibility problems for people with vision impairment have been solved with the internet, screen readers and devices such as computers, smartphones and tablets. But these solutions are not universally available and do not replace the need for good communication design.

Craig points out that these technology solutions are not available to all Blind Citizens Australia (BCA) members, the main readers of Blind Citizens News, the magazine he edits. The magazine is available in a wide range of formats that take into account the equally wide range of readers’ skills, internet access and device hardware and software.

One assumption about skills that had to be questioned to cater for the publication's readers is the idea that all blind people can read braille, Craig says. Learning braille takes time after acquiring or developing vision impairment and, for various reasons, including other disabilities, some people never learn it.

After Craig produces each issue of Blind Citizens News as a Word document, it is sent to other specialised contractors to reformat in braille, audio and large print. ‘There is great infrastructure available already to allow for alternative formatting,’ he says, ‘so we don’t need to reinvent the wheel.’

BCA members can elect to receive one of these formats in the mail or, like non-members, can read the magazine online. Each article is published as text on its own web page and has a linked audio file. The web page text and the downloadable Word documents of the whole magazine can be read by screen readers, transformed into braille or enlarged as text.

Of course, BCA goes to this effort because of its readers’ particular requirements. But these readers also want – and need – to access other publications that are not aimed specifically at people with vision impairments. Unfortunately, due to inadequate consideration of communication design, many publishers exclude such readers.

Craig emphasises that sighted audiences can also be served when publications use formats designed for people with vision impairment, such as spoken versions of text. Audio books were originally created for people who couldn’t read print, Craig says, but others enjoy listening to them too.

Craig argues that, by learning from the multi-format approach of magazines such as Blind Citizens News, other publications can serve people with a range of disabilities and reach unexpected non-disabled audiences – for instance, those who want to access content on the go, while commuting or exercising.

Screen readers

Many accessibility factors must be considered beyond a publication's file type or format. One is how the content will be read. When designing a publication that is inclusive of audiences with vision impairment, the way screen readers will interpret the content becomes an important consideration, because screen readers are how many of these readers access content online, both as downloadable documents and as web pages.

‘There are a surprising number of people who still believe that we can’t access computers,’ Craig says. ‘As a result of this awareness problem, a lot of people never think about how they create their documents, apps or even memes, because they don’t know what a screen reader is or how it works.’

The easiest way to experience a screen reader is to activate the technology built into many touchscreen devices, such as smartphones and tablets. Another way is through the basic demonstration version provided in Vision Australia's free Document Accessibility Toolbar for Microsoft Word (available for PC only). This toolbar includes a range of other functions designed to make it easier to create accessible content.

For a fully functional computer-based program, you could install a free screen reader called NVDA (non-visual desktop access) and use it to experience the web and digital documents and preview your own content. Be warned that the basic NVDA download comes with a harsh, robotic-sounding voice, but the program can be customised with purchased voices that are easier to listen to.

If you have more detailed knowledge about web design and programming, a webinar by Smashing TV called ‘How a screen reader user accesses the web’ might help you to gain a better understanding of website navigation from a blind person’s point of view.

Much online content is more accessible now via screen readers, Craig says, but this positive trend means that ‘the ongoing habits which render documents unreadable by screen readers are more frustrating than ever’.

PDF files are one of the culprits. As illustrated by Craig’s invoice problems, PDFs are often inaccessible if screen readers cannot interpret them as text. Some PDFs are interpreted as images and are therefore unreadable, as are actual images, including infographics and other visual objects.

Alternative text

One important step in creating accessible content is ensuring that every image in documents, on web pages and on other platforms has ‘alternative text’ (or ‘alt text’). The World Wide Web Consortium’s Web Content Accessibility Guidelines recommend that alternative text be used to reproduce the meaning of all non-text content because this allows users, including users of screen readers, to access the same information in other formats.

For sighted users, correctly formatted alternative text becomes visible in a box that appears when they hold the mouse over an image. For users of screen readers, the text is spoken or, if they have connected a refreshable braille display to their screen reader, also rendered in braille.

Craig has noticed, on Twitter especially, that more people are using alternative text. ‘Though I believe accessibility is a right, I am still absurdly grateful every time someone describes a photo they’ve posted,’ he says. One side effect of this increased use of alternative text is Craig’s developing sense of appreciation for ‘exactly why people’s cats and dogs are cute’.

Alternative text should include the equivalent essential details needed to make sense of an image, given the reading context. So instead of inserting alternative text saying ‘My dog at the park being cute’ on your next social media post, describe what the dog is doing that makes it look so cute.
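
For web content, the difference often comes down to a single attribute. The sketch below is a minimal, hypothetical example (the file names and alt text are invented for illustration) of how a meaningful image and a purely decorative one might be marked up so that screen readers announce the first and skip the second.

```typescript
// Hypothetical example: meaningful vs. decorative images for screen readers.
// A meaningful image gets descriptive alt text written for its context.
const photo = document.createElement("img");
photo.src = "dog-at-park.jpg";
photo.alt = "Golden retriever leaping to catch a frisbee at the park";

// A purely decorative image gets empty alt text, so screen readers skip it
// instead of announcing the file name.
const flourish = document.createElement("img");
flourish.src = "divider-flourish.png";
flourish.alt = "";

document.body.append(photo, flourish);
```

The same principle applies beyond web pages: Word documents and most social media platforms provide an equivalent alt text or description field for each image.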

Once researched, some basic accessibility principles, such as always including alternative text for meaningful (not decorative) images, are relatively straightforward to understand and remember. But there is much more to know about creating accessible content, including PDF documents, and communicating with people who use different forms of technology and have different disabilities.

To make this process easier, various organisations offer training as well as services such as checking and amending existing content and providing accessible document templates.

About Jonathan Craig

Jonathan Craig has been the editor of Blind Citizens News for the last year. He extends the idea of accessibility to include access to his publication for writers who may never have had anything published before.

‘Whenever I can, I work very closely with them, to show them what I’ve learned about the mechanics of storytelling,’ Craig says. ‘It would be easier just to rewrite (their stories) where necessary, but I love seeing their confidence grow as they create drafts which look more and more like what they wanted to put on the page, but couldn’t produce alone.’

Recently, Craig replaced his magazine editor ‘hat’ with his broadcaster headphones and worked at the BCA national convention, assisting with live streaming of the event and co-presenting a daily podcast – both efforts by BCA to include as many non-attendees as possible in the proceedings.

Being part of a minority community and having to work hard for social change can be an ‘agonisingly slow’ process, Craig says. But he is proud of how his fellow BCA members unite in their struggle to be included in everyday activities.

BCA is currently campaigning to have audio description on Australian television and raising awareness about specific touchscreen EFTPOS terminals that prevent blind and vision-impaired users from independently typing their PIN. Find out more at www.bca.org.au/campaigns.

Contact Jonathan Craig via bca@bca.org.au with ‘Att: Jonathan Craig’ in the subject line, or via the BCA office on 1800 033 660.

About the author

Melissa Giles is a copyeditor from Brisbane. She would like to advance the understanding of communication accessibility and related professional practices. This includes encouraging diversity within the editing profession and highlighting ways that editors and organisations can incorporate people who are often overlooked in the communication process.

This article was first published in the Editors Queensland April 2019 newsletter OffPress. Editors Queensland is a branch of the Institute of Professional Editors Ltd (IPEd) in Australia.

Turning Sounds into Text

This is the first article in a series by my guest blogger, Melissa Giles, about text, editing, and media accessibility.

Clip art image of a rectangular black speech bubble with three horizontal lines indicating speech and "CC" within a black outlined tv screen, both recognized symbols for subtitles and captions.

Captions are essential for people with some level of hearing loss. Verbatim transcriptions of speech and descriptions of sound effects and music – not only for television and films but also for social media content and live events – are vital for an inclusive society. However, captions are not always provided, and when they are, they are often not copyedited or proofread.

Canadian caption editor Vanessa Wells wants to solve these problems. Wells has a rare combination of experience as a caption writer, caption editor and caption user. She has hearing loss and hyperacusis, making captions vital in loud or crowded spaces.

Wells recalls a telling experience with one of her favourite movies, Interstellar. It took three attempts for her to understand what the star Matthew McConaughey was saying. The first attempt was in a theatre unaided, then she tried again in a theatre using one of the available personal amplifiers, but it could not overcome the audio feedback in the room. The third attempt with captions on a purchased DVD was finally a success.

Another experience was at a conference. ‘I couldn’t hear well because people were chit-chatting the entire time,’ Wells says. ‘Even whispering nearby was very disruptive.’ She tried requesting that the speakers use the microphone and that the other attendees stop talking, but she still couldn’t hear clearly. Next time, she says she’ll ask for Communication Access Real-time Translation (CART): live captions displayed on a large screen in the room.

As with many accessibility measures designed for a particular group, CART and other captioning can benefit various people, including those who require simultaneous aural and visual information to aid comprehension or processing.

In Australia, the Australian Communications and Media Authority (ACMA) regulates minimum quality and quantity standards of captioning on content accessed through television stations and similar services. But there is no regulation of captioning, for example, in videos produced by individuals or other kinds of organisations, which often appear online.

The World Wide Web Consortium’s Web Content Accessibility Guidelines recommend that audio content in all online pre-recorded and live synchronised media be captioned, except when the content is clearly identified as already being an alternative for text-based material. This recommendation can only go so far, though, because it is part of a set of voluntary standards.

What makes a good caption?

To the uninitiated, captioning might appear to be a simple process, especially for pre-prepared captions, which are not produced under the immediate time pressure of live captions. However, as with all written content, many elements affect the accessibility and meaningfulness of captions.

For example, captions must be accurate, clear, comprehensive and contain the equivalent meaning to the audio content they replace. They must also be displayed in a consistent style, well placed on the screen, in appropriate colours and well synchronised to the audio. Users must be able to easily switch captions on in the case of ‘closed’ captions, which are not permanently displayed like ‘open’ captions are.

The ability to create high-quality captions is affected, of course, by the captioners, their training and their working conditions. Wells used to work as an in-house captioner in Canada for pre-recorded television content and highlighted some of the reasons for sub-par captions.

In her experience, the inadequate training left her cohort struggling to learn ‘the new software, dedicated keyboards, the rules for each broadcaster’, and no-one in her training group ended up staying in the industry. In the workplace, Wells encountered other challenges, such as last-minute timeframes, atrocious pay (based on speed, not accuracy) and a general lack of concern for quality.

Wells was a book editor before becoming a captioner. She recalls her captioning boss saying to her specifically: ‘Don’t get so hung up on the editing: it’s not like you’re editing a book.’ But, she thought: ‘Well, it should be like you’re editing a book – it’s that important.’

DIY captioning

You can caption your online videos using free tools such as Amara or the captioning functions that YouTube provides, among others. However, Wells urges caution: relying on automatic voice-recognition options or on untrained people to create captions often results in inaccessible, unusable ‘craptions’.

Wells supports the #NoMoreCraptions campaign to end near-enough-is-good-enough captioning. She argues against the idea that ‘something is better than nothing’ for caption users because ‘if you have gibberish, then that is not better than nothing’.

Captions are essential for communication, but Wells also sees them as a way to facilitate audience immersion, which is not possible if viewers are distracted by typos or confused by other errors that copyeditors and proofreaders are trained to identify and fix.

Caption editing

Even caption text produced by professional captioners requires expert copyediting and proofreading, but this niche role is largely unfilled. Although the captioning field is growing, relevant training for editors who want to become caption editors is hard to come by.

Wells is currently in discussions with universities and colleges about offering her caption-editing course online and making it available internationally. She argues that captioning education is necessary in all post-secondary courses that include studies in accessibility, media, audiovisual content and communication.

Many captioning companies produce craptions, Wells says, because they are operating without the required knowledge and training, ‘akin to when people who like to find typos in the newspaper hang out their shingle as professional copy editors and proofreaders’.

Wells accepts caption files (such as .srt and .stl) of any quality – even if they were produced automatically or contain craptions – and copyedits the content to be accessible and usable. Her main clients are usually larger television and film producers, post-production houses and subtitlers who translate into English but do not have native-level proficiency.
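
For readers who have never opened one, the sketch below is a rough, hypothetical illustration of a single cue in a SubRip (.srt) file – a cue number, a start and end timecode, then the caption text – with the wording and timings invented for the example.

```typescript
// Hypothetical SubRip (.srt) cue: an index, a "start --> end" timecode in
// hours:minutes:seconds,milliseconds, then one or two lines of caption text.
const exampleCue = [
  "12",
  "00:01:04,200 --> 00:01:06,900",
  "[door slams]",
  "I told you not to come back here.",
].join("\n");

console.log(exampleCue);
```

Because the format is plain text, a caption editor can correct wording, line breaks and timing in any text editor or dedicated subtitle tool without touching the video itself.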

‘So-called captioning companies don’t hire me because I would be an added cost and, as in book editing, there’s a huge race to the bottom for bargain-basement rates,’ Wells says. ‘That suggests to me that they don’t really care that much about accessibility.’

About the author

Melissa Giles is a copyeditor from Brisbane. She would like to advance the understanding of communication accessibility and related professional practices. This includes encouraging diversity within the editing profession and highlighting ways that editors and organisations can incorporate people who are often overlooked in the communication process.

This article was first published in the Editors Queensland March 2019 newsletter OffPress. Editors Queensland is a branch of the Institute of Professional Editors Ltd (IPEd) in Australia.
It discusses why caption editing is key to caption accessibility for users.

Perhaps “Word Nerding through Netflix”

Read the background and objections here, then delve into my POV.
Please note that I’ve used quotation marks rather than italics for words as words (in captions/subtitles), with the aim of making this a more accessible document.

Spoiler alert: you aren’t going to learn a language with the “Language Learning with Netflix” browser extension. You may confirm what you know, learn the odd word, or see something spelled that you’d only heard before, but you aren’t going to learn a language.

Now, I’m a language nerd, so I’m not knocking different modes of language acquisition or people’s desire to expand their worldview or personal skills. But to get viewers’ hopes up by presenting this tool thus is like saying you’ll learn to be a chef by working as a cashier at McDonald’s. You’ll learn stuff, it may be fun—it may even be “cool” as the linked article says—but you won’t be able to converse in the original language of the show. Especially based on most of the subtitles.

There are indeed some very cool functionalities to this tool. You can choose to see the automatic voice-recognition software’s subtitle translation, the human translation, or both. Most useful is the ability to set an automatic pause on each text box. Unfortunately, the two versions of subtitles are so poorly handled that there’s no way in Hades you could learn much language from them.

I experimented with a film and a language I was familiar with: “Incendies” by director Denis Villeneuve (2010) in French—Québécois to be exact, which is no mean feat for a French-from-France translator to tackle. (I know because I went to translation school and, being in Canada, we dealt with Québécois French as much as European French.) (And sidebar: it’s a difficult film to watch, but excellent. I recommend it.) I chose to watch both the “machine” translations (as Netflix calls the autocraptions) and the “human” translations simultaneously. In the following examples, when all three languages are shown, the original French is on top, the machine translation in the middle, and the human translation on the bottom. The Arabic is not subtitled. And by the way, I could have selected a film in any of the languages I know and found similar issues; this is just an illustration.
Let’s look at some examples.

Long shot of a man and woman approaching their car in a city, captioned J'ai la crisse de paix/I have the crease of peace/I feel so fucking peaceful
“J’ai la crisse de paix” is not about “the crease of peace” or even, dictionary-wise, “the crisis of peace”: it’s swearing with “Christ” and colloquially would be used as in the human subtitled “I feel so fucking peaceful.” So, that part is good! If it were France, it would likely have been some form of “putain,” but it looks like the translator asked someone who was familiar with swearing as you’d find it in Quebec or perhaps Maritime Canada (because as we’ll see below, the rest of the translation is problematic). But how did a machine supposedly translate “la crisse” to “the crease” if they’re using a corpus dictionary? Autocraptions 0, #NoMoreCraptions 1.

Young man at side of woman in hospital stretcher captioned Souffle haletant/Breathless breath
“Haleter” means “to pant,” “gasp,” or “puff” in French, but for the moment, let’s look at the autocraption “(Breathless breath)” which a human has not chosen to correct. That’s somewhat of a grammatical nullification in English, never mind a contradiction in meaning. In this scene, the young man is upset, stressed. A good subtitle would have replaced the machine one with something like “(sighs with stress)” or “(anxiously sighs).” This subtitle is used many times in the film, unfortunately.

Doctor examining woman in hospital, adult children looking on, captioned Elle est absente en general/She is absent in general/She's usually confused
Here the doctor is taking a history of the woman and asking her children questions about her health and behaviour of late. The machine subtitle is typically autogenerated: it just translated the line literally. The subtitler is just wrong. “Absente” and “désorientée” or “confuse” wouldn’t be synonymous here. In fact, here’s an argument for giving captioners and subtitlers reasonable work timelines instead of ridiculous demands of urgency. Had the subtitler watched the film first, they would have known that the woman has PTSD, which is unknown to her children, so her son just finds her emotionally unavailable and is very hurt and angry about that. Therefore, the subtitle must be “She’s always absent”: the English audience would understand that it doesn’t mean just physically but more so emotionally; the always would be more colloquial than “usually,” and it would be understood as “[not literally] always” but “[pretty much] always.” So if someone went to a French class and used what they’d learned here and said to the teacher, “Je suis absente” to indicate they needed further help, they’d be laughed at. Not what you want when learning a language.

Young woman looking from desk skeptically at offscreen woman, captioned J'etais meme pas nee/I was not even born/You're kidding right? I wasn't born
Here the “You’re kidding right?” is a hangover from the previous shot/subtitle and shouldn’t even be included again. But the young woman is being asked about something from thirty-five years ago and predictably responds with the French line as shown. The machine version is literally correct but not idiomatically. The human translation is incomplete and misses the mark, thus leaving the viewer in the dark. “I wasn’t born” is not the same as a snarky “I wasn’t even born yet” or “I wasn’t even alive then.” So let’s imagine someone (for some reason) wanted to learn how to say “I wasn’t born” in French: they would use a totally incorrect/inappropriate construction, confusing their listener. Part of learning a language is about clarity, so that there is no miscommunication.

Woman looking distraught in the front of a bus, captioned Cris de fillette/Cree of little girl
The translation by machine apparently went for an aural equivalent here; a human should have changed this to “(cries of little girl).” Unfortunately, this one is doubly problematic in Canada: “Cree” is the name of the Algonquian language of the indigenous Cree people. Confusion could reign supreme here, especially in a film so much about culture and place. Furthermore, knowing it was a Canadian film, viewers might see the subtitle briefly, wonder at it, and then lose track after it has passed by but still be pondering the meaning: audience immersion down the toilet. It certainly would detract from the cultural aspect of learning French.

Longshot of open-doored car in the countryside, with a man pointing the way to a woman on the road, captioned Stridulations d'insectes/Stridulations of insects
This is a good example of the need to understand diction and register in audiovisual translation. “Stridulations” can mean “chirps,” “chirring,” or “shrill sounds.” It refers to the sound crickets and other insects make by rubbing their legs, wings, etc. together. In English, “stridulations” would only be used within a scientific context, perhaps even only an academic one. Here, it’s just about the countryside setting, and we would say “(insects chirring)”—if at all. There’s an argument that the caption is not even necessary as it doesn’t advance the plot: we can see it’s empty and remote. In any event, a language student who then said on a beautiful summer night in Provence, “Oh listen to the stridulations of insects!” would be looked at like they had three heads…or too big a head. Subtitling and captioning is not about dictionary and thesaurus use. The audiovisual translator has to understand meaning, context, and changes in the target language. For the record, I don’t believe the audio has insects: I think it’s birds and the wind.

Arab older man captioned C'est la Femme qui chante./It's the woman singing./She's the Woman who Sings. Number 72.
The machine definitely blew this one with its literal translation. This is a key thematic and character-relevant phrase and is even a chapter title in the film. The human was closer but the “Number 72” is repeated in the next subtitle. Also, there is no understanding of capitalization conventions: as an epithet and important theme, “the Woman Who Sings” needs a capital on “who” in headline style; here it’s a mixture of headline and sentence. Probably the subtitler is working under the misapprehension that “little words” don’t get capitalized, a rule from the dinosaur age. All caps on the phrase would forewarn a language learner that this is not everyday usage.

Arab older man, captioned Inspiration/Inspiration
Here, “Inspiration” (and elsewhere “(Grande respiration)” as “(Great breath)”) is a total craption. Inspiration comes from the Holy Spirit or a muse or a lightbulb above your head, but its Latin root about breathing cannot be applied here. Furthermore, it’s hardly a notable or significant inhalation (unlike an example below) and could have been omitted. I don’t understand how a professional translator or QC person could have stetted this machine error.

Vista with woman at wall and car in midground, chapter title La Femme Qui Chante, captioned Un homme parle en arabe sur un haut-parleur/A man speakes in Arabic on a speaker/THE WOMAN WHO SINGS

Vista with woman at wall and car in midground, chapter title La Femme Qui Chante, captioned Gazouillis d'oiseaux/Bird chirping/THE WOMAN WHO SINGS
The errors in these subtitles are obvious: if the viewer is using the human version, the chapter title denies them access to the other subtitles. “La Femme Qui Chante” should have been made a forced narrative, and the correct translations of the audio should have been “(man speaking over PA system)” and “(birds chirping),” despite the latter being insignificant. Most importantly, during the top shot, the young woman is sobbing (plot pertinent!) and that absolutely should be captioned, with the PA part placed on the next shot, where that audio continues. No one’s learning any language here.

Nurse in Middle Eastern dress speaking over an ill Middle Eastern woman's bed, captioned Mme Mika?/Ms. Maika?/Mrs Manka?
As far as I know, Arab culture doesn’t espouse women’s lib, so the machine “Ms.” is a cultural #SubtitleFail. Then, it seems the translator is used to British conventions because in North America we use a period after “Mrs.” and the surname is misspelled. In any case, these inconsistencies would be confusing to a language learner without the knowledge of these cultural points.

Nurse in Middle Eastern dress speaking over an ill Middle Eastern woman's bed, captioned Elle a recueilli les enfants./She collected the children./She safeguarded the babies
Here the nurse is interpreting from the patient’s Arabic. “Safeguarded” is the wrong diction for this scene: it’s too formal and, in terms of babies, is a bit archaic. For the newborns, who are essentially refugees, “took in” is an appropriate choice. A student using this would sound like they were talking about a report by a board of governors rather than caring for little ones.

Young woman facing young man, her face expressing horror, captioned Inspiration/Inspiration
No spoilers, but here is another misrepresentation of “(Inspiration).” This is a gut-wrenching gasp of horror at the first of two climaxes in the film…

Closeup of young man and woman, captioned Je vous aime/I like You/I love you.

Closeup of young man and woman, captioned Votre mere, Nawal/Your mother, Nawal./Your mother.

Closeup of young man and woman, captioned Reniflements/Sniffles/Nawal
The problems with these three subsequent subtitles are obvious. Again, they take the viewer out of the narrative, disrupting their immersion in the poignant dénouement of the story, and teaching nothing about language.

These are just a few examples to illustrate how the notion of teaching a language is far more complex than throwing up some setting options and calling it language learning.
Yes, it’s great if you know some, say, Polish and want to check what a character said, or if you need to pause the subtitles for better comprehension. But to suggest that language lessons are being made available by a streaming service that is known for its problematic subtitles and its craptions is misleading. It’s just another way for Netflix to extend its monopoly on international video-on-demand offerings, but it’s putting the cart way before the horse. Netflix needs to get serious about hiring native target-language speakers as subtitle and caption editors and fix the timed text before it starts misinforming the public about foreign languages. For now, I’d recommend using some language-learning software or apps, or—much better—taking accredited classes in the language you want to learn. You can’t learn how to drive an eighteen-wheeler on the highway by trying out a Segway.

Interview with Shell Little: Captions and Neurodiversity

Headshot of young white woman in makeup and lilac-colored shoulder-length hair with bangs wearing a mauve top, necklace and dark jacket. She is looking into the camera.

At the #a11yTO 2018 conference, I heard Shell Little speak on accessibility, and she shared quite openly about what it’s like to be neurodivergent. I was really affected by her talk because, although I advocate for captions for people including those with cognitive differences, I hadn’t really heard someone speak so candidly about their experiences. Many emails and DMs later, Shell and I have assembled an interview that explores what are sometimes called creative or alternative captions and how a neurodivergent (ND) person is helped or hindered by them.

RW: Right, so I’m on the fence about these because the creative people making them often don’t get accessibility. But then some accessibility folks are using creative means to make captions more usable! Some people are experimenting with colours (old hat in the UK and Europe) but also icons, avatars, and other non-traditional captioning ideas. I remember you saying movement across a screen could present a barrier to retention, so what about something like this?

Please see the trailer for John Wick 2 at the 1:24min mark; these captions are for style more than accessibility, but I wondered how they might be received by an ND person.

https://www.imdb.com/title/tt4425200/videoplayer/vi1127331353?ref_=tt_ov_vi

And here’s a combo platter of style and caption provision. In Man on Fire, this scene has the page erasing the caption. Annoying? Frustrating? Not an issue?

Medium shot of three people walking and talking in shadow against a domestic courtyard background. A full caption line says, Approximately a month.

Shot of a white hand turning the page of a document in a binder; as the page is turning, the caption from above is being erased by the turning page, so that only the following is shown in this still: Approximatel

SL: Wow, I have a lot of mixed feelings about these creative captions! On one hand, they seem really cool because they are integrating the text into the story, making the text feel like it’s not just slapped on top of the screen, but right in there with it. I also think it’s cool to give someone who is HoH/D/deaf an interesting experience with their CCs. Making the style reflect the tone could add another layer to entertainment.

Now, I could see the motion being a bit much for someone who has sensory-related disabilities. And, if not done in a tasteful way, it could become a big distraction. I think the idea of less is more could apply for something like creative captions.

On the other hand, I play a lot of video games, and that kind of creative style text is really, really common. Not exactly like the page-turning example you provided, but more like the ASL-translation type of “flowing” text in that John Wick example. In games that have no actual spoken words, they use style to push the tone of the text. An example of a game would be Ori and the Blind Forest. Here is a link for an example of how the text flows across the screen and is part of the scene itself: https://www.youtube.com/watch?v=ufD833Vgfnk

The number one thing when it comes to cognitive accessibility is context. If the text is the main point of the scene, then it can have a lot more liberty to be moving in general. Much like how moving content works for Pause, Stop, or Hide buttons. If the movement is deemed essential, such as a spinner for loading, that kind of movement is okay and is helpful. But if the text is being used in a very distracting way, I could see it being a barrier, for sure. For video games, text is often done on a frozen screen or a cut scene where the text is essential, and there isn’t always a ton going on (depending on the game). That can’t be said for movies and TV shows. I think to have a full understanding of how I feel, I’d need to watch a full movie with the creative captions. Part of me feels like I would get used to it and enjoy the integration as I rely heavily on captions when watching movies. But as I mentioned in my talk, there are so many types of cognitive disabilities. I’d be curious to hear what someone with Autism would think as well!

RW: The game you shared brought up some more questions for me.

First, it drives me crazy that all the lines start with capital letters, even if each caption is not a new sentence, and there’s no punctuation. But being dyslexic, do you actually find it less cumbersome to process because it looks cleaner that way? I’m mostly a prescriptivist and am wondering if I need to shift my understanding to include how cognitive disabilities might actually make this style better received.

SL: I’ve seen a lot of different ways to do captioning throughout the years, especially when we’re talking about the video-game space. The style of captions Ori uses delivers content that is short in length but with no proper grammar in terms of sentence structure. To me, that looks sharp, clean, and is easy to consume. I honestly had to go back to the clip and look to see if there really was lack of punctuation because it mattered so little to me at the time I was playing the game! Thanks to texting and IM’ing, I’m very comfortable consuming small portions of content that lack proper punctuation, capitalization, and sentence structure, so much so I didn’t even notice it in these captions.

RW: Ah, my query may be a function of my job—and generation! I use short forms in texts, but punctuation has to go in 😛

SL: Why I think this style of captions works is—again—all about the context. The statements are short and to the point. They’re delivered in parallel to a lot of moving content in some segments, and they’re short enough that I can get through the information while fighting the moving content going on around it. In a race for my attention, motion will always beat out written text. Now, if there was a nonsense paragraph of captions with no punctuation and run-on sentences, that would be infuriating. This style of captioning works because the information is delivered in bite-size packages. I would take these small five- to ten-word captions over having to read extensive text any day.

RW: So, it’s context and quantity, it sounds like.

You also said you’re dyscalculic. In captions, would you prefer to see numerals or numbers as words, considering the speed of captions? Would it make a difference if they were smaller numbers (captioned 10 or ten) or larger (captioned 2000 or two thousand)?

SL: First, I would say my issue with numbers in captions would be more due to my dyslexia than my dyscalculia. Unless they were asking me to hold meaning to the numbers or do math! Example: doing conversions of currency or going from Celsius to Fahrenheit—never going to happen, ha-ha!

But, ah, yes…numbers in captions! This is a perfect example why captions are so important for ND people like me. It’s about receiving information in more than one format. Hearing something said and being able to read it simultaneously is a way to solidify the meaning of that spoken content. So, if a character yells “Fifty thousand dollars? He wants that much?!” and the captions read [$50,000? He wants that much?!], I have enough context and am not required to figure out how many zeros are on that number. No need to hold my fingers up to make sure my brain isn’t moving the comma and that it really is fifty thousand and not five.

Where I run into issues is subtitles. With subtitles I’m only able to get the information in one format. I watch a lot of K-drama [Korean drama tv shows], and the money there causes me constant frustration. This is due to the number of zeros used in their money. For example, $5 is roughly 5,000 won. So, you can imagine how many zeros end up being used when we are talking about large figures. My rule would be: if the information is being delivered in more than one format, using numerals is fine. The issue with that is we can’t assume everyone who is ND is also hearing. I’m sure there are plenty of ND people who are also HoH/D/deaf. So, to be truly inclusive, I would lean towards the option of writing out larger quantities.

RW: So many variables! I recently contributed to a LinkedIn post that invited discussion about the “accessible seating” in a cinema:

Shot taken from cinema seating of a section before it that is ostensibly reserved for "accessible seating": there is an area of four seats' width barred off at the sides from the other seats in the middle of the row. This section is in the second row of the cinema, with a transverse aisle behind it.

Picture by Thea Kurdi, LinkedIn February 6, 2019. Used with permission.

I posted: If you were additionally using a CaptiView, there’s nowhere to attach it, it’s visible to the entire audience, and it’s too close to the screen to allow for the quick eye refocusing that’s needed. Also, it assumes only two people in wheelchairs [or other assistive technology] want to go to a movie. What if a bunch wanted to go as a group? But mostly, relegating people with disabilities to the crappiest section of the theatre is a statement of its own.

To be fair, perhaps this and all cinemas can’t be made perfectly accessible. No product or service can. But attempts to at least recognize, address, and improve barriers to accessibility are important.

So maybe there’s no way of making all creative captioning accessible to all users. What works for an ND user might not work for a low-vision user, for example. But it’s kind of nice to dream of a future time when you could open your streaming app and have not only (excellent) captions as an option—as in real captions, not subtitles being appended in lieu of captioning—but also multiple options.

The makers of a film from 2016 called Notes on Blindness created an accessibility campaign to accompany their film: they made alternative audio-description soundtracks with different levels of access for users to choose from! That’s the kind of inclusive thinking and action we need!

Maybe one day, captions and subtitles will get their own Oscar category, and the right to and the usability of accessibility measures will be as much a no-brainer as buses that kneel or curb cuts. Fingers crossed, and much work ahead of us.

Thanks so much, Shell, for helping me understand a bit better the complexity around ND and how it applies to captioning. I’m adding some content on it to my course. And if you see other captioning that causes you to pause, please share!

Interview with Adam Pottle

Voice: On Writing with Deafness

part of the University of Regina’s Writers on Writing series

RW: Your book Voice: On Writing with Deafness is rich with insights—not only about writing but about how your deafness influenced your writing path. As a book editor and a caption editor, I found myself nodding in agreement and recognition frequently.

AP: Thank you, Vanessa. It was a different and, at times, difficult book to write, so I’m happy to read your positive reaction.

RW: I’ve been taking American Sign Language classes for 18 months now, and between school and Deaf social events, I’m learning a lot about Deaf culture—albeit as a hearing person. But even though I know about De’VIA and try to support Deaf talent and so on, I think your book was the first time I’d come across someone speaking about Deaf culture with a sense of nationhood and citizenship. To you, is this a regional thing (e.g., in our case, a relationship with Deaf Canadians) or is it an international state (different sign languages aside)?

AP: Deaf communities are polymorphic. They include so many different dialects and iterations, yet Deaf people around the world unite through the common goal of accessibility and linguistic beauty. While individual dialects and languages may differ, the need to use Sign doesn’t. I’m not sure if nation is the right word, because Deafness extends beyond borders. Nations are divided and divisible. Deafness is not regional or geographical. It’s not bound in one type of body or one specific area of the world. It’s sensual. It’s physical. It’s linguistic. It’s cultural. Deaf communities possess a unique and beautiful character, and Deaf people use their language and their imaginations to eliminate divisions and create connections.

RW: You mention having varying degrees of control, focus, and effort but also being imaginative and creative. You say you’re straddling the hearing and Deaf worlds. And you talk about constructing boundaries, a bivouac, around yourself when writing, in an attempt to ignore outside distractions. Do you think all writers struggle with personal dichotomies and opposing forces, or are they more pronounced for you as a Deaf writer?

AP: All writers of conscience struggle with different forces. Whatever struggles I have are no more pronounced than any other writer’s. The only thing that separates me from most other writers is the way my deafness has calibrated my imagination and the way I think.

RW: The power and interiority of voice feature prominently in your discussion. But so does the role of silence, both as a muse perhaps and as a sociological trigger for discomfort. I laughed when the section on silence listed all the noises that bombard us and ended with “Not to mention fucking people.” I have hyperacusis, and my audiologist admonished me not to isolate myself from noise, but I admit I’m paradoxically uncomfortable with total silence when I’m alone. I agree with you that it can be sacred, but what is it with us and silence? Are we just too afraid of hearing a still small voice? Why can you embrace it for imagination and growth through writing, while people like me, non-writers, eschew it?

AP: Most of us are afraid of silence because we don’t want to hear that little voice. I’m more comfortable with it by virtue of my occupation and my physiological makeup. My interior voice and I have a strong relationship. I work at it. Most writers work hard at it, I think. I can embrace it because I know what the end product looks like—if I listen to my inner voice, I’m better able to connect with people and ask questions that trigger my imagination and allow me to write stories. That voice is always yammering, questioning, barking. The inner voice is always curious.

The majority of people—especially hearing people—hate silence. They can’t be alone with their thoughts because they’re worried where their thoughts may lead. God forbid they experience a little self-discovery, so they turn to their phones or their computers or their video games. They need to be distracted. And the really frightening thing is that distractions are probably the biggest industry going right now. We love being distracted. People in power love it when we’re distracted because distracted people are easier to govern and manipulate. I’m hyperaware of distractions. We’re in an election year here in Canada, so we’ll have to watch for distractions leading up to October so we don’t end up with Trump 2.0 at 24 Sussex Drive.

RW: You write unflinchingly about suicide, self-harm, euthanasia of the disabled, anxiety, and other things that loads of writers wouldn’t put out there. So congrats on that: it’s really important that we normalize these conversations and own them honestly, as opposed to worrying about squeaky-clean branding.

But I want to bring up another area of contention (especially in online forums): ableism. You talk about the artistic trope of the self-hating disabled person, disability–inspiration porn posts, and the absence of front-and-centre disability in Canadian (every country’s?) literature, which your PhD thesis addressed. We probably all unintentionally step on some toes with our ableist attitudes sometimes, whether as a product of our socialization or of our hurried and unthinking society. Aside from seeing the Deaf or disabled or other minorities better and more frequently represented in fiction, the arts, and cultural content, what else do we need to do to educate ableism away? Would writing about it more be effective because we spend longer consuming words than we do a streamed show, for example? I feel like writing would be a powerful vehicle for changing these world views.

AP: We need to recognize that most of us have internalized ableism, and we need to listen to Deaf and disabled people rather than dismissing them. Even in issues as simple as installing ramps, able people think they know better. Able people need to listen to their inner voices and ask, “Do I really know what is best for these people, even though I have no idea what it’s like to live as a Deaf or disabled person?” Pardon me for being crude, but able people need to unfuck themselves, shut the fuck up, and fucking listen.

[Vanessa applauding]

AP: I like your question about what is most effective. Written forms such as books and articles and essays are crucial. With streaming services like Netflix and Crave, films and television shows have become bonbons. We gobble them up, then forget about them. More films and television shows are being produced now than ever before because the demand for content—that is, distractions!—has never been higher. But they’re also more evanescent than ever before. They are much more liable to fade. Back when television had three channels, everyone was watching the same thing, and those shows lived on—and still live on—in people’s memories forever.

At the same time, these mediums reach millions of people, and if you produce it well and show people something they’ve never seen before, they’ll remember it, they’ll absorb its message. Hannah Gadsby’s comedy show Nanette is a great example because she deconstructs the foundations of comedy while she’s making people laugh. She delivers hard truths in that show, things that we need to hear. I remember my jaw dropping open when I first saw her show, then going back immediately and rewatching the important parts and screaming, “Yes, Hannah! You’re fuckin’ right!” She made me question many of my own experiences. Most shows on Netflix don’t do that, but hers did. She’s brilliant.

So the question becomes: How can you market books the way Netflix markets Stranger Things? That is, as crucial information disguised as a distraction?

It’s a difficult question, a difficult issue. Writing is only as powerful as the people who read and absorb it. It’s a tough time to be a writer because there are now so many of us, and so many distractions, and we’re all clamouring to be heard, and we all deserve to be heard, but nobody gets heard equally, and some aren’t heard at all. There’s only so much time for reading and thinking, and none of us have anywhere near enough time. I can only create to the best of my ability and trust my perspective and my instinct as things that might help me stand out.

RW: Obviously, I was super interested in your take on captions and subtitles! You talk about the barriers they create as well as the doors they open for entertainment, personal, and professional situations. Can I say something a bit heretical here? As much as I advocate for access to and excellence in captioning, I sort of feel like captioning—even more than the broader term accessibility—has become the new shingle that everyone is hanging out to indicate how salable their product or service is. It’s like the feel-good sticker we can easily apply because YouTube autocaptions <insert eyeroll>. If I see one more person recommend a list of companies that produce less-than-stellar captions (and I know because I paid to test it out)… My takeaway from your writing is that, after improved access, your appreciation of captions is more aesthetic and sensual—as you say, synesthetic. But then, you certainly told it like it was with the dissertation defences and the book-festival experiences. How are you feeling about the State of the Caption and, for want of a better word, the politics of captioning right now?

AP: Captions are useful, but like all accessibility tech, they need improvement. People who don’t use captions are often the ones who take the most pride in them: “Look! Look what we have here!” But it’s not a catch-all. It’s a stepping stone. I see a lot of horror movies, and the captioning machine I use at the movie theatre always gives me a headache for the first fifteen minutes. Those little green letters, that long adjustable arm. We need open captions, but because hearing people bitch about them, we don’t get them. The one really helpful thing about these captioning machines is that they’re shaped like maces. The end is really heavy, so if any hearing punk gives me guff, I can beat him to death with it.

RW: Bahaha! And I know what you mean. I wrote about my experience using CaptiView and other assistive tech at the cinema.

AP: I feel like there’s an untapped artistic potential in captions. I was at the Saskatchewan Festival of Words last year, and I had a remote captionist typing from an undisclosed location. Who knows—it could’ve been a serial killer. Anyway, I was in a playful mood, and I said to the captionist, “We should take the captions and mix them up into a poem or something,” and the captionist typed, “Good idea.” Imagine that—watching TV or something and taking captions that are unique—like sound descriptions, or captions where the typist made an error, and combining them all into a long poem. It’d be the next Waste Land.

RW: I love that idea!

AP: But things need to improve. We’re a long way from equitable access. It goes back to ableism: as long as able people think they know better, and as long as they believe their needs are more important, and as long as they’re unwilling to relax their tight little egos, we won’t have the full access we need.

RW: My grandfather homesteaded out West, and my mum grew up in Saskatoon; prairie people are stalwart, perhaps necessarily so. I thought Voice was honest, rattling, and uplifting: very apt for your geographical home. There are insights in it that I’d like to include in my caption editing course because I think hearing about form and function is more effective when it’s given a face—or rather, a voice. Thank you so much for sharing about your writing and how it intertwines with your experiences. It makes a great read for students of writing or accessibility studies, folks in the Deaf community, and the general public.

AP: Thank you, Vanessa. I tried to be as honest as possible when writing the book. And yes, we have to be stalwart when it’s forty below without the windchill.

I’m not sure I agree that most prairie people are honest, though. Prairie people are, by and large, conservative, which means they hide. They hide their insecurities or cast them onto other people. They don’t like talking or rocking the boat—unless of course they’re publicly fantasizing about killing Justin Trudeau or murdering Indigenous people. Many writers on the prairies, such as Tenille Campbell, Louise Bernice Halfe, David Carpenter, Joanne Weber, Anne Lazurko, Brenda Schmidt, and Iryn Tushabe, are all doing crucial work to show prairie people (and really all people) how to be more open, how to be less afraid. I hope my book helps with that. I hope it helps by showing people—especially people who seldom get the chance to express themselves—that their voices are crucial and that what may be perceived as a vulnerability—whether it’s Deafness, mental illness, or disability—may actually be a source of strength.

Adam’s book is on sale as of March 2 through the University of Regina Press and other outlets.

Author photo by Deborah Popovici. Reproduced with permission.

Santa Can Only Be Captioned with One Ho!?

Closeup of Santa Claus

 

I recently had a conversation with another subtitling professional about a particular Netflix “rule.” Many companies that aren’t even Netflix’s Preferred Vendors use the streaming service’s online guides for subtitling and captioning (or SDH: subtitles for the deaf and hard of hearing), whether their client has stipulated it or not. There’s an erroneous belief that the timed-text guide is some kind of definitive subtitling (and captioning) resource, when in fact it is inconsistent and incredibly insubstantial. In his article From old tricks to Netflix: How local are interlingual subtitling norms for streamed television?, in the inaugural edition of the Journal of Audiovisual Translation, veteran AVT scholar Jan Pedersen presents his findings about the various languages’ Netflix guides, which he objectively describes as “work[s] in progress” (pg. 16).

My interlocutor maintained that Netflix doesn’t allow for repetition of subtitles, and I maintained that was a guideline that often had to be disregarded. She said her clients insisted on that; I said mine (and I) insisted on common sense and contextual consideration.

First we had a minor discussion about the differences between stuttering and stammering in speech, in editing, and thus in subtitling. But the main focus was whether a person greeting a large group with many Hellos should be captioned with more than one Hello. The Netflix guide says:

II.16. Repetitions

Do not subtitle words or phrases repeated more than once by the same speaker.

If the repeated word or phrase is said twice in a row, time subtitle to the audio but translate only once.

Now, suppose an onscreen child is being a pest, hounding an adult, but other dialogue also needs to be subtitled; then it’s understandable to indicate that (perhaps with [child continues interrupting]) rather than repeat their line. Time and other factors may well preclude repeating. But, generally, repetition is plot-pertinent, moving the action or characterization or mood along; writers don’t include it to be annoying. In order to present complete content (and because time was not a factor in our example), viewers using subtitles or captions have the right to know that the character is personally greeting as many people in the crowd as he can: it indicates his level of engagement with them. To subtitle him as saying Hello once not only suggests he says it once and then ignores the rest of the group—while he is facing away from the camera but the audio repeats his greeting—it also conveys an inaccurate narrative, particularly to the nonhearing viewer. This is audist and inaccessible: bad subtitling practice and ethics.

In frustration, I said that being pedantic like that meant that we should caption Santa Claus as saying Ho! rather than Ho! Ho! Ho! She conceded that “I suppose that particular case would be treated differently.”

But overall she wasn’t having it. Her bottom line was satisfying her customers’ needs by sticking to the rules, getting paid, and winning repeat business.

My bottom line is serving the subtitle/caption user by educating the client—if it’s even needed. I’ve never had a client question repetitions that were contextually correct, race back to section II.16, and refuse to pay me. They appreciate the feedback notes in my returned file. That’s how I communicate with subtitling and publishing clients: I share the rationale I’ve employed based on my expertise in editing and suggest they use my edit. If they want to stet for whatever reason, that’s their choice, but they rarely disagree.

Eventually, I wound up our discussion by suggesting we might agree to disagree, and we both very politely agreed to do so.

But this is the kind of issue in subtitling and captioning practice today that irks me. Rule-sticklerism trumps complete access to clear communication. The inconsistencies Pedersen reported from his research confirm my ongoing opinion that Netflix (and the people following its guides and similar ones) does not understand the nuances of English and how language needs to be treated in subtitling. His discussion points out how abbreviated and lacking in detail the various languages’ timed-text guides are. He correctly suggests:

...The great similarity among the TTSGs shows that Netflix is basically rebooting subtitling norms, by prescribing the use of the same norms across the board (albeit with local language examples) and then gradually adapting them to local norms via updates… When there was suddenly a huge and urgent need for subtitling into many languages, it was probably too time-consuming to research local norms… This must have prompted the use of the one-size-fits-all solution of norms, influenced by DVD norms, which could then be modified as you go along, so to speak. Here, new norms are imposing a one-size-fits-all system, which is then adjusted post hoc. (pg. 17)

He says scholarly research into other VODs would be interesting, and if my survey of the lack of quality captioning by streamers is any indication, I bet the results would be even more lacklustre.

So, let’s look at some examples that show repetition of dialogue is necessary, and others where it was not well handled.

Episode 3 of Deadwind seems to ignore its own rule:

Woman in restaurant standing and leaning over table, speaking to someone offscreen, captioned "I'm on your side. I'm on your side."

So does season 1 episode 5 of The Kominsky Method, when Arkin’s character walks through his office to welcomes from several employees:

Alan Arkin in character, walking through an office filled with employees, captioned "Thank you. Thank you."

And in episode 9:

Alan Arkin and Michael Douglas seen through a car window while the latter is driving, captioned, "Good. Good."

The Dinner does the same:

Laura Linney facing an unseen man, captioned, "Do not quote at me right now. Do not quote at me right now."

It doesn’t seem like Netflix has a total hate-on for repetitions. Or else their QA people are asleep at the wheel.

Let’s look at this rule when it was pedantically applied by a well-meaning vendor but then two episodes later was wisely disregarded. (This is an argument for using the same subtitler for a series or at the very least creating and using a show bible.)

When, in episode 6, Arnau marries the bridezilla, she is subtitled—incorrectly. A good translator should know or at least confirm meaning: to pledge one’s troth is to promise marriage or get engaged, not to wed.

Video still of medieval characters, a priest flanked by a man and woman facing each other in a church, captioned "With this ring, I plight thee my troth."

But when it’s his turn to return the ersatz wedding vow, he is uncaptioned. Note that in the dub, he repeats “With this ring, I thee wed” but neither version is provided. Of course it’s “twice in a row,” but that’s the presentation of the ceremony in the show, and withholding his vow is absolutely ridiculous.

Video still of medieval characters, a priest flanked by a man and woman facing each other in a church.

Why is it withheld? Because “rule” II.16 says not to. [Insert eyeroll.]

In episode 8, when he gets to marry the Nice Girl, he is again subtitled incorrectly for “I thee wed” (and their order in making the vows is reversed):

Closeup of male hand placing gold band on female hand, with medieval cuffs visible, captioned "With this ring, I plight thee my troth."

And her identical vow is subtitled too, even though it’s the same line.

Longshot of medieval wedding scene in a chapel, captioned "I plight thee my troth."

So, what’s the big deal in the first instance? We get it, right? It should be obvious from the video… That’s not the point. First, it gives the subtitler (or vendor/contractor/client/whoever) the power to decide that a major event in the story should not be conveyed in full to the viewer—and I believe that ethically that’s not in their purview. It is pedantry of the worst sort, and I don’t regularly see Netflix applying a whole lot of language knowledge or common sense, so no surprise. But most importantly, it yanks the viewer out of the narrative. They’re no longer fully immersed in the story, but wondering, Wait—what? Why isn’t he talking? Is there something wrong with the stream? No, his mouth is moving… I guess he said the same vow, coz we can see it’s the wedding, but…

I would argue that subtitling/captioning ethics demand that we convey to a viewer using timed text what a non–caption user is getting: the whole cultural content, uncensored and correct, whenever possible. The politics of deciding who gets to “hear” what, based on blind rule-following, is ableist.

These are the kinds of issues that need to be considered in media-production programs that cover accessibility: usability is the icing on the accessibility cake. We need to teach these nuances to students and current professionals. Otherwise, subtitles and captions are just lip service to our present and forthcoming legislation, never mind our supposedly more inclusive world.


Santa photo source: https://www.decaturdaily.com/life/letters-to-santa-claus/article_4eb4842a-b5fc-522f-8fc6-e783b448d55c.html

Book Review: The Routledge Handbook of Audiovisual Translation

Abstract watercolour spheres decorating the cover of The Routledge Handbook of Audiovisual Translation

Edited by Luis Pérez-González as part of the Routledge Handbooks in Translation and Interpreting Studies series, this new book is textbook material but is still accessible to the nonacademic with an interest in audiovisual translation.

I spent my first two years of university studying translation and linguistics and, in hindsight, now regret not having stayed in that stream. While my work focuses on the end steps of the AVT process (whether subtitles or captions/SDH), I’m still interested in language and how it is not as discrete from the technical production process as most people think. Scholarly work in this area is being taken more seriously as the field has now been accepted as a bona fide academic discipline.

Because they were brought up by so many of the 32 leading scholars who contributed essay-chapters, I’d like to discuss the main themes I noted: changes in technology, obviously, but also inclusion, exclusion, and changes in quality standards (the last being my favourite aspect, of course).

The book provides some history in terms of subtitles, captions, and translation in cinema and discusses some of the software options currently available. Where Alina Secară’s chapter (p.139, 141) mentions eyeglass development as a means of caption delivery, it’s interesting that even that area is changing quickly, as we saw in October 2018 with the National Theatre in London’s introduction of Smart Caption Glasses by Epson. There is also a return (for me) to some concepts I read about in books I reviewed and whose authors I interviewed, such as Nornes’s thoughts on abusive subtitling (p.460) and Dwyer’s on prosumers (p.442) and the politics of fansubbing.

There seems to be a tension between the inclusion and exclusion that can be found in AVT. As I understand it, inclusion involves performativity (p.446) and widespread participation by various factions (p.419, 438, 442). Sometimes the work is done by collectives on Viki or Amara, for example, and sometimes by fewer contributors, such as individual YouTubers—whether it’s their own content or someone else’s. The idea of prosumerism is covered not only by Dwyer but also by Díaz-Cintas (p.31), Pérez-González (p.31) and Jones (p.187). Dwyer introduced me to the element of play as part of that performativity (p.446), and it took me this second crack at the literature to understand the degree to which AVT involves not only various politics (e.g. participation) but also the economics of the social contracts that underpin many unofficial or unsanctioned undertakings. Localization straddles the areas of inclusion and exclusion, both as an “act of homage” (p.446) and as a kind of bowdlerization, such as the de-anglicization of text in Harry Potter for an American audience (Guillot on Nornes’s corrupt domestication, p.38).

But all is not warm and fuzzy. There is exclusion that is perhaps inevitable with AVT. In her discussion of music-video fansubbing, Johnson (p.421) cites Pérez-González and the “widespread assumptions of the dominance of English in globalizing process.” Dwyer (p.441) talks about the “global language politics and hierarchies” among netizens or global citizens. In her chapter on AVT and activism, Baker notes that not only fansubbers but also most subtitlers and captioners are not credited, or at least work unappreciated, in anonymity or invisibility (p.456–57). In my own advocacy efforts, which call for subtitle and caption editing to be recognized by film awards as much as other technical contributions like sound editing, I give shout-outs to excellent translations for film (such as in Les Innocentes, 2016; I can’t find my original post praising the subtitler anymore, so if anyone knows their name, please contact me!). I don’t understand why title designers are front and centre but the professionals who made the audience’s comprehension of the dialogue accessible aren’t considered worthy of a credit line. Secară (p.138) also quotes Rondin’s discussion of smart glasses as a solution “without interfering with the overall show.” Maybe this is just my politics, but it always sounds like providing caption users with the technology to take part in this cultural content is a pain in the ass and must not disturb the public; witness the general distaste for open captioning, unfortunately supported by a deaf person in a recent piece. From what I hear in Deaf social circles and forums, the expectation isn’t perfection, just something that’s effective (not craptions, for example). Captioning excellence shouldn’t require advocacy for improvement; it’s not as if we accept mediocrity in the latest smartphones. Anyway, that’s a jump I made in my thinking.

Of course, what I was most thrilled by were the chapters that address AVT training and teaching and what the future of quality assurance will involve as legislation comes into force. For instance, here, the Accessible Canada Act (ACA) is forthcoming and the AODA is in place, but my Twitter feed is full of justified complaints from people with all types of disabilities because standards on paper and actual, informed enforcement are not the same thing. Merchán’s chapter (29) about training and McLoughlin’s (30) about teaching and learning made me hopeful. I was heartened to read about Ken Loach and his rejection of the traditional AVT-as-postproduction model because budgets don’t plan or allow for quality subtitling/captioning, and about Liz Crow (p.506) seeing accessibility as integral to the production process rather than a lowly add-on. Pablo Romero-Fresco has a book coming out shortly, Accessible Filmmaking Guide (London, BFI), which I couldn’t be more excited about (and he’s graciously agreed to an interview with me once I’ve read it). Research into filmmaker/subtitler collaboration at the University of Roehampton, and programs like the MA in filmmaking at Kingston University (London) that treat accessibility and AVT as par for the course, also give me hope. I’m currently trying to impress upon colleges near me the importance of caption editing being taught as a foundational course and program requisite, because all the ACAs and equivalents in the world aren’t going to eradicate the problem of craptions (as inaccessibility) if filmmakers aren’t taught the soft skills now. I can’t figure out why more postsecondary institutions aren’t scrambling to implement this, particularly when they advertise accessibility production as one of their training outcomes. Mohawk College’s Accessible Media Production is the only program whose curriculum shows the genesis of serious application of this.

I loved the 1982 quotation from Marleau that Secară concludes her chapter with (p.142)—and here surtitling could easily be replaced by subtitling: “…surtitling and captioning services are not to be regarded as ‘un mal necessaire’ [sic] (‘a necessary evil’).” I’ve attempted to walk the walk on this and have launched an award for excellence in captioning in the hope that we will raise more Loaches and Crows who see captioning excellence as one of the foundational stones in the building of a film, and not as a requirement remembered just as the student is about to hit Send. The d/Deaf, hard of hearing, and many other types of caption users are not dismissible, and as I’ve written before, I’m not going to shut up about it. Fortunately, inquiries about the award from filmmakers are heartening: there is will, but many barriers remain.

Pérez-González’s edited collection of essays by some of the top scholars in audiovisual translation today is—for me—summarized best in Romero-Fresco’s position that AVT services are an afterthought at best. He notes that the United Nations’ ITU Focus Group on Media Accessibility and filmmakers such as Tarantino and Iñárritu are trying to influence, respectively, the profession and the process by being involved in subtitling (p.510). I don’t see change being swift, but I hope that ten years from now we will see improvements in quality via subtitle and caption editing. Meanwhile, The Routledge Handbook of Audiovisual Translation gives the student, academic, professional, and interested lay reader an excellent idea of the lay of the land in AVT. It will be interesting to see what has—and hasn’t—changed in education, standards, and enforcement by the time a second edition is published.

 

Captioning Ethics: Introduce No Harm

Skyler Jay being dressed for Queer Eye on the left, closeup of him wearing a backwards green baseball cap on the right

 

Images via https://www.them.us

After the #a11yTO 2018 Conference, another participant reminded me about a story. I’m only now getting around to addressing it. I’ve been thinking a lot about ethics and the role of the captioner lately—something that will be covered in my caption editing course syllabus.

The #a11y participant said there was a story about miscaptioning in the episode about Skyler Jay on Queer Eye. Perhaps you read about Karamo Brown talking to Netflix about the need for better-quality captioning and it seemingly getting some traction (even though Nyle DiMarco, Marlee Matlin and a ton of other people have been complaining forever). Anyway, according to Jay, it sounds like the captioning was either autogenerated or done by a nonprofessional, because there were egregious captioning errors and spelling mistakes—the usual CC issues that arise because caption editing is not embraced (or understood) by Netflix.

We probably can’t know for sure whether the significant error was done out of ignorance or not, but let’s consider the erroneous use of transgendered instead of transgender in a caption about Jay simply as an idea. I’m not interested in the facts of the matter here, because I’m just looking at the ethos behind it, not discussing the actual incident. But let’s assume for the purpose of our discussion that it was either done knowingly but the CCer didn’t care, or that it was done out of a lack of awareness of LGBTQ issues and the feelings around trans vocabulary in general.

There are a bunch of reasons behind the outcry over the unacceptable use of transgendered. Let’s take a look at some of them from a captioning perspective.

First of all, in such a case, it could happen that the TV personality used the wrong word themselves—either because they weren’t familiar with their subject (see the article about Jay feeling he was educating some of the hosts) OR because the script was wrong. Even if it was technically unscripted, shows have outlines about what’ll be covered, and it could be that the writer or a producer wrote it in incorrectly, as transgendered. I can’t tell you how many shows I’ve worked on where they send the script and it’s full of grammatical and vocabulary errors. Some screenwriters and documentary writers need their work edited, just like authors of books. I often have to fix vocabulary (if a word is misspelled, it doesn’t affect the pronunciation, so captioning can be verbatim but corrected); but if the production didn’t use a script editor (who actually copy edits, too), wrong words often make it into shows and movies. Take my teasing about Game of Thrones and the incorrect use of the accusative, with the hashtag #WhomDoesntMakeItMedieval:

https://twitter.com/reelwordsedit/status/1026966989850304512

https://twitter.com/reelwordsedit/status/1036690682809741312

Anyway, mistakes can wriggle in that are neither the captioner’s error nor the actor’s or narrator’s. The codes of conduct (both the official ones and more nebulous ones) for captioners and subtitlers require that captions be presented as spoken. Fillers like um and uh don’t need to be included unless they are germane to the content and context, but generally captioning is supposed to be verbatim. That means we cannot correct the speaker’s grammar. Some reasons are:

1.    Don’t Be a Jerk. The captioner’s role is to do the job as expected, and correcting people is not in its purview.

2.    You’d better be sure you’re correct. Unless you’re an experienced, trained and professional copy editor, you’d do well to think twice about inserting what you “know.” And no, having a degree in English does not make you qualified.

3.   Check the show/series/film bible. House style should be addressed there, and we need to adhere to style guides. Sometimes that can make us die inside a little bit (I’ve seen some god-awful and downright incorrect guides). If for whatever reason the person who hired you says they want xyz word—even if it’s wrong and you can’t convince them otherwise—you must do what you were paid to do: deliver the product they want.

Now, if the issue, as in our transgender/ed example, is so egregious or offensive to you that you can’t live with doing what your client wants, then you might seriously consider abandoning the project. We all need to pay the bills, but we also need to work ethically, and sometimes standing up for our own or society’s values costs us.

4.    Desired corrections like this are better queried than made. We shouldn’t adopt the Better to ask forgiveness than permission rule in captioning. It is not our right to mess around; true issues should be brought up with the client before the file is returned. You’re not there to throw the show or the people involved under the bus. As many professions insist: introduce no harm in your work.

5.    Finally, introducing and correcting errors both carry with them a degree of politics and subjectivity. It’s not the place of the captioner to get involved with the content by judging it (inadvertently or not). Like an oral language interpreter or a telephone, your job is to convey the material. Save your commentary for your social media accounts.

If you are a thoughtful person and want to avoid being ignorant (which just means not knowing, not that you’re an ignoramus), do some research! Ask, read, search, consult, query: a lot of my time in editing is spent on fact-checking or research. Yes, you’re under a deadline and probably not paid to do more, but you can either do a good job and learn something along the way at a minor cost to you, or you can dig your heels in, work only to the defined scope, and say That’s not my job. I’ve made peanuts loads of times because I won’t compromise: I always do the extra work. And if you don’t care about your own edification or about standing up for what’s right, do it for the others in our profession. Most of us work extremely hard and consider captioning a vocation. We should all work in line with our standards.

Do we have an updated and localized code of conduct as captioners? This came up in a CCers’ forum recently, and it didn’t seem anyone knew of one (for any country), although all thought it important.

But you don’t need a hard-copy values statement to work from. Most professions uphold the pursuit of knowledge, integrity, honesty, and social consciousness as pillars of the job. If you’re in captioning with no sense of this calling, you might want to rethink your career choice. We’re here to help not only the Deaf, deaf, and hard of hearing but lots of different people who need or desire to use captions.

So back to the transgender/transgendered error. If it occurred in connection with any of the above, we’ve learned something. But ultimately, since the show didn’t require live captioning, there was no excuse for the error. The show should have used a caption editor who would review the caption file for mechanical errors and offensive content before it was sent out. But Netflix and a bunch of other VODs I reviewed don’t yet see the need for caption editing, and whether it was Queer Eye’s postproduction contractors who captioned and made the error or Netflix that didn’t review and correct it at the spot QC stage (did you know a lot of their review is random and covers only a tiny portion of their shows?) or at least during full review, it happened. And not only was it an egregious error, which in an editor’s hands would have been caught and addressed, it hurt some people. It was offensive. And that’s introducing harm.

When we speak or write, we make mistakes. We just can’t see our own mechanical errors or our other blind spots, habits, or prejudices. I always have to flag issues of sensitivity for captioners and authors: they’re not jerks, they just have their own way of seeing the world, and it’s an editor’s job to identify and broach possible landmines and save them embarrassment upon release. None of us are stupid; it’s just difficult to see without the help of another’s review.

And since captioners are in the business of facilitating accessibility, inclusion, and human rights, we’d do well to work conscientiously and consciously. We’re not hired to just bang out a caption file. We are contracted to be an agent of communication for someone who needs captions. Put a face to that Caption User Out There, and bear them in mind as you provide your service.

[Reading Sounds]: Interview with Sean Zdenek

Cover of Sean Zdenek's book Reading Sounds: Closed-Captioned Media and Popular Culture. It is an inverted image with a blue and cloudy sky on the bottom half and a paved road on the top.


VW: Thank you for agreeing to this interview, Sean! I’m excited to have the opportunity to ask you more about your book, Reading Sounds: Closed-Captioned Media and Popular Culture. I think we agree on a lot of issues about captioning, but your book made me think about some of them differently and, in some cases, more deeply.

For instance, I liked your succinct five guidelines for thinking through the question of significance (pg.123):

1. Captions should support the emotional arc of a text.

2. A sound is significant if it contributes to the purpose of the scene.

3. Caption space is precious. It should never be wasted on superfluous sounds that may confuse viewers or diminish their sense of identification with the protagonist(s).

4. Sounds in the background do not necessarily need to be captioned, even if they are loud.

5. Every caption should honor and respect the narrative. While a narrative does not have one correct reading, it does have a sequence and arc that must be nourished. [All emphases by VW.]

And I think the echo effect you created for the captions from The Three Musketeers (see the still image below of captions in a poisoning scene) honoured those points. Actually, I’m going to write an article about creative captioning later, something I meant to do before now, and I polled some D/deaf and hearing followers on social media about it (separate polls). Now I’m kind of glad I didn’t get around to that post before reading your book, which made me see creative captions in a more positive light. But more on that in another article!

Video clip of a musketeer collapsed on the floor with a caption that says, Well, just so you don't leave empty-handed, with the text repeated 3 times and overlapping, reflecting his experience of being poisoned

Copyright Sean Zdenek. Do not reproduce.

You may have seen my interview with Tessa Dwyer about her book, Speaking in Subtitles: Revaluing Screen Translation, and I was interested to see a political discussion in your book, too: linguistic imperialism, or the idea that “only English matters” (pg.271). Jon Christian’s outing of Netflix a few years ago kicked off more frequent public discussion about captions on VODs and in broadcasting; more recently, Karamo Brown called out Netflix on Twitter about wanting intralingual verbatim captioning, which got some coverage. What’s your POV on what’s happening in the online discussion around captioning these days?

SZ:  As you’ll recall, it wasn’t too long ago—circa 2010—when Hulu and Netflix were scrambling to offer any captioning at all on their streaming content. The National Association of the Deaf filed a lawsuit, which was settled in 2012 when Netflix “agreed to caption all of its shows by the year 2014” (Mullin 2012). Around the same time, Hulu was only captioning about 5 percent of its online programming. I wrote a blog post in 2009 to call attention to the small percentage of captioned programs on Hulu and to show my support for what would become the “21st Century Communications and Video Accessibility Act” (CVAA), signed into law by President Obama in 2010. The CVAA “requires video programming that is closed captioned on TV to be closed captioned when distributed on the Internet (does not cover programs shown only on the Internet).”

Autocaptioning, which Google debuted in 2009, is an important part of the history of online captioning, too. It has received some well-deserved criticism over the years but also, more recently, some praise as it continues to improve and evolve. (See Rikki Poynter’s 2018 blog post, Are automatic captions on YouTube getting better?) No doubt the ubiquity of autocaptioning on YouTube, despite (or because of) its limitations, has been crucial in shaping the public’s understanding of good and bad captioning.  

Today, digital captioning is having its viral moment, finally. The best example: Nyle DiMarco wrote a series of tweets in February 2018 following a bad experience with movie theater captioning. His story was picked up by a number of news outlets and written up as an op-ed for Teen Vogue. Other popular writers and bloggers, including Ace Ratcliff and Rikki Poynter, have called attention to access barriers and problems with captioning.

VW: Yes, I was interviewed on CBC Radio One’s Metro Morning show about open captioning [transcript here], and the conversation began with the host talking about Nyle’s tweets: his reach was international!

SZ: What we’re seeing with these stories, I think, is the power of social media to elevate to viral status the needs of people who require quality captioning. We’re seeing captioning break into the mainstream in ways it hasn’t before. Popular personalities (celebrities, models, YouTube bloggers) are driving compelling stories that seem tailor-made for viral media.

The online landscape has changed radically in the last decade too. Video rules the web. By 2021, according to Cisco’s projections, most internet traffic—from 80 to 90 percent—will be video, “up from 73 percent in 2016” (Cisco 2017). Netflix alone is responsible for more than one-third of all internet traffic in North America (Luckerson 2015). A decade ago, online captioning was a technical problem to be solved. Today, viewers demand quality captioning and lean on the power of social media to call out instances of poor captioning.  

VW: I’ve shared with you that my in-house captioning experience was eye-opening on several levels. In Canada, even with the AODA in place in Ontario, the on-paper standards are basically moot, and as you say, it does seem like CCs are provided to “placate government requirements” (pg.xv) and that they’re seen as “mandatory…as a condition” (pg.80) of broadcasting. Even in accessible projects, captions do indeed seem to be added on at the end “after the real work has been completed” (pg.291). There’s only one full post-secondary study program in accessible media production in Canada, at Mohawk College, that addresses captioning, although I see other schools starting to pick up the idea. You mention hoping CCs will be addressed in the scholarly realm more frequently and seriously. What’s the state of captioning studies as a discipline or even a program in the US? Because, as you say, there is a lot of power and responsibility in the hands of captioners (pg.53), but that’s pretty scary when production isn’t regulated and the craft isn’t even fully taught!

SZ: Academic interest in captioning continues to grow, especially in the humanities. I think the biggest hurdle, from the humanities side, is that captioning has usually been viewed as purely technical or objective, a useful skill or trade but not a complex array of theories or deeper questions of meaning and user experiences. When I refer to caption studies, I intend to link the study of captioning to other humanistic pursuits in writing studies, sound studies, graphic design, art, accessibility, universal design, rhetoric, and more. In fact, I would argue that captioning unites these disparate areas and offers the perfect laboratory for studying questions of digital access across multiple fields of inquiry. A small number of scholars in my own fields of rhetoric and professional writing have taken up the subject of captioning, often in the name of disability studies, which has grown into a vibrant, interdisciplinary research program.

The term caption studies is performative: it doesn’t really exist (yet), but I was hoping to bring it into being in the act of naming it. In my opinion, we need a label that reflects the complexity of the subject itself, one that also aligns with the humanistic inquiry that is at the heart of other studies (e.g. sound studies, gender studies, science studies, etc.). Names matter, of course, which is why I prefer captioner to captionist: the latter sounds too much like typist or transcriptionist (with connotations of direct copying), while the former sounds like (or invokes) writer (with all the agency and creativity that being a writer entails).    

VW: Exactly where I’m at with caption and subtitle editing! I’m trying to raise awareness that just as books don’t get published as written and editors are integral to the publishing process, so too must caption editing be part of production. Someone summarized my work the other day as “fixing typos,” and I was quick to point out that editing is not just proofreading. It’s a craft, science, and art rolled into one that I’m trying to shed light on, because until now it’s been ignored, or at least underserved by so-called quality control. I often make about 150 edits to 60 runtime minutes captioned by a professional captioner or subtitler, not because they aren’t good at their job, but because, like book authors, they benefit from an editor: making the text clearer and more correct for the user’s full immersion in the content is a separate skill set. Most of the captioners and subtitlers I work with get this and thank me for what I bring to the edited timed text. Sounds like we both have an opportunity to show the academic and lay worlds that captioning is a humanistic study, as you say, and that the holistic, performative aspect goes way beyond avoiding the popular #CraptionFails we see posted online.

SZ: I’m teaching an undergraduate course called Web Access for All this semester. It covers several topics, one of which is captioning. As far as I know, it’s the first and only course of its kind at my university. It complements other courses and programs in interactive media, professional writing, and disability studies. But by no means is caption studies a formal program of study in higher education. One way to get there, I think, is to fold the study of captioning into courses on digital access or multimedia design, and then fold those courses into disability studies minors.

Academics and practitioners also need to work together. I’m fascinated by the important work that captioners do but have never worked as a professional captioner. I’ve interviewed captioners but have never observed captioners at work. A full-bodied program of study would support collaborations among multidisciplinary teams of researchers and practitioners from academia and industry. Workplace studies of captioners are vital if we want to call attention to the forms of labor and creativity that captioners provide.

VW: They’re also vital to demonstrate to captioning houses, departments, and companies that how they’re supposedly training people doesn’t work. It’s not just about being a fast typist, and you can’t become a good captioner through baptism by fire. It’s got to be taught—as in pedagogy—with a view that goes beyond facility with subtitling software functionalities. Like writing and editing courses.

You correctly discuss how producers don’t work with captioners (pg.77) and that there’s a disconnect between producers and captioners (pg.290). My experience is that we’re definitely an add-on and that the only feedback is about frames and other technical elements. Whether it’s subtitles or captions, I really think production houses just don’t understand the nuances of captioning (see above) and are just concerned with getting out quick and cheap CCs to meet requirements. It’s kind of depressing sometimes! Do you have more you can add about this that didn’t make it into your book?

SZ: For me, the problem was summarized nicely in an email I received in 2012 from a professional captioner as I was beginning to work on my book. Her email (which I posted anonymously on my blog with her permission) was full of provocative claims:

  • The main factor that drives quality captioning is what clients are willing to pay for it.
  • Most clients see captioning as that mandatory last step that has to get done as a condition of their materials going on air.
  • The vast majority of clients do not care what the captioning looks like, as long as it gets done in time for the stations to receive their captioned masters.
  • Clients will often choose to go to cheaper captioning houses who promise to get their feature film captioned in a day.
  • When a captioning company charges low prices on high volumes of work, it’s because they hire lots of people at low wages. [All emphases by VW.]

It’s hard to imagine a collection of more depressing claims for captioning advocates. I’d be curious to get your take on them. It’s also difficult for me to imagine a harder problem to solve in caption studies.

VW: This is my experience completely, and I see it echoed in closed social media groups for captioners every day. I also hired one of those cute-kitty-typing-ad captioning services to double-check the reality. I sent in a one-minute video, and the caption file came back inaccurate!

That’s largely why I’m turning to the nudge paradigm with an initiative that will be announced shortly. I just don’t think that 30 years of demands for accessibility have worked: there’s been some progress in quantity but not in quality. I think the change needs to start with filmmakers’ views and attitudes, since they’re where everything begins. But more on that later…

SZ: Carefully and creatively subtitled films do give me some hope. The English subtitles in Night Watch, for example, were produced under the director’s supervision. As I wrote about the film in a blog post:

In an unusual move, director Timur Bekmambetov “insisted on subtitling [Night Watch] and took charge of the design process himself,” as opposed to having the Russian speech dubbed into English or leaving the subtitling process to an outside company (Rawsthorn 2007). He adopted an innovative approach: “We thought of the subtitles as another character in the film, another way to tell the story” (Rosenberg 2007).

Several subtitles in this movie are painstakingly integrated into the aesthetic of the film. They reinforce meaning and mood by blending form and content. Meaning is expressed not only through the words but how they are visually designed (color, movement, dimensionality, transformation). When objects temporarily cover or block the subtitles, we are reminded that the subtitles are part of the scene itself (instead of an add-on or afterthought).

VW: This makes me teary-eyed…

SZ: Night Watch inspired me to explore non-traditional forms of captioning. My experiments with color, icons, typography, and effects were intended to be disruptive and controversial. But I think we need to push against conventions that are limiting and constraining. I published seventeen of my experiments as an online journal article entitled “Designing Captions: Disruptive Experiments with Typography, Color, Icons, and Effects.”

VW: Dear readers, the fact that you’re reading this interview means that you will find “Designing Captions” fascinating. And Sean, I’ll have to check out Night Watch! Whenever I see a show or film with excellent captioning, I always get on my social media soapbox and sing their praises. It’s so rare that filmmakers a) get it or b) care.

As that captioner said to you, in terms of value, “quality is what clients are willing to pay for it” (pg.80), which, depending on the genre or product type, is next to nothing. That was my experience in-house—most of my training cohort quit because it wasn’t a livable or predictable wage; it was more suited to students wanting part-time work who could drop everything and show up last minute (we stayed in touch and discussed our takes on it). Even now, I get inquiries about rates from filmmakers who’ve been told that they need captions in order to submit their projects for consideration in film festivals, and they balk at professional (not exorbitant) rates. I’m always banging the drum about filmmakers needing to plan for this minuscule percentage of their overall budget so that it’s not an unexpected submission issue… Aside from educating the content producers and production houses, how else do you think we can create a shift in thinking about the need for excellent captions (not just “good enough” ones) and about the potential increase in distribution and profits from making content accessible to another 10+ per cent of the population (D/deaf/HoH/other folks who need accessibility)? My upcoming initiative aside, that is.

 

 It’s no exaggeration to say that the entertainment industry is rooted in ableism.

 

SZ: You’re asking the hardest question of all. Advocates and organizations have worked tirelessly on behalf of individuals who need quality captioning. So many of us care deeply about captioning. It does feel, at times, as though the message isn’t breaking through.

It’s no exaggeration to say that the entertainment industry is rooted in ableism. Movies are made for people who can hear and see—it’s as simple as that. Stories about inaccessible, or accessible but not usable, movie theater captions remind us that movie producers are not really thinking about the needs of people who are deaf or hard of hearing. They are satisfying legal requirements and, in most cases, doing the bare minimum at a fraction of the movie’s budget. Captions come last because, to be frank, the needs of people with disabilities have historically come last (or not at all).

VW: Which is why I’ve been handwriting letters to directors and producers for quite a while now to ask them to be more engaged in the production and use of captions so that more people can watch their films; just because captions are made for bigger films doesn’t mean they’re well done, and usually the caption files are shelved because the cinemas claim no demand for them. So far, no one has replied, which is really depressing.

SZ: How do we “create a shift in thinking about the need for excellent captions”? I think we need to continue to write about and advocate for quality captioning from our diverse positions of expertise, as you have obviously been doing. Teachers can do their part in training new generations of accessibility-minded producers and consumers. As an educator, I teach my students about captioning and place accessibility at the center of digital design. I also advocate for digital inclusion on my campus. Promoting accessibility is one way for academics to speak to people outside of their narrow scholarly fields who will go on to work in many industries (including the entertainment industry).

VW: I agree. I write about it, and I’m seeing an uptick in advocates in more countries, like Tweeters @deafieblogger, @lifeanddeafpost, @Limping_Chicken, as well as advocacy groups I’m a member of or in touch with, like CCAC and DC Deaf Moviegoers. I did write a speculative piece about how one day we’ll look back at this time of ridiculous exclusivity; I hope you and I are eventually proven right that advocacy will change the landscape.

Even in my work with my clients (who clearly do care about quality, as they pay me to edit their files, not just review them for typos), the attempt to keep things consistent within a series, with a bible I’ll create for the captioners and the implementation of extended style guidelines (I model mine on CMoS, too; pg.161, 162), can still get ignored if I don’t advocate for change with explanations as to how it aids the users.

Even the oft-recommended resources are “lite” in scope and imperfect. (The Captioning Key, for instance, has at least one self-contradictory error that I can recall off the top of my head.) Do you agree with my position that, after 30 years, it’s time for an overhaul of outdated bits and pieces (language has changed since the 80s!) and for the creation of a robust and standardized “CMoS for captioning”?

SZ: Yes, I agree completely. The publicly available captioning style guides need work. I reviewed four style manuals for my book. It was challenging to try to reconcile and justify the differences among them. But more importantly, it wasn’t clear to me why some of the guidelines existed at all. Guidelines are usually offered up as facts with little justification in terms of usability and users’ preferences. For example, guidelines for styling speaker identifiers are conflicting. Do we put parentheses around names and place them on their own line? Or do we set names in all caps and use colons (which is standard in DVD captioning)? Why should we choose one or the other—the style guides are silent on this question. WGBH’s Media Access Group suggests styling speaker identifiers in all caps, but then presents an opposite example:

The Media Access Group’s convention is to show IDs in uppercase, rendered in Roman and set off with a colon. Parentheses or brackets may also be considered. For example, a bottom-center caption with an ID might look like this:
Narrator:
THE RIVERS RAN DRY
WITH DEVASTATING EFFECTS.

Guidelines like this one not only need to be corrected (so the example supports the guideline) but reconsidered entirely. A “robust and standardized” style manual would need to be deeply informed by user studies (focus groups, surveys, eye tracking) and theories of reading, typography, design, and perception.

A related issue is that captioning itself is often assumed to be simple—a matter of transcribing (narrowly defined) or copying down what people are saying. The online DIY tools are built for speed to allow users to quickly transcribe speech. But these tools can reinforce the idea that style manuals are not needed because captioning is straightforward.   

VW: This brings to mind the guy who called me “simplistic” in response to my article, “Good Enough” Captions Aren’t. He was all about the tools, speed, and ease of application, and I think he felt threatened by the position I take.

I couldn’t agree more with your comments about who makes a good captioner. Just as book editors (my other hat) must be well-read, well-educated, and professionally trained in editing best practices, I think captioners do need to be mature individuals with a wide knowledge base and extensive cultural literacy (pgs. 22, 73, 221, 235). I was recently asked why I had sent back an edit to an experienced subtitler with a particular sentence put into quotation marks; in the narration, it was an unattributed quote, but because the person wasn’t of the age or background to at least twig that something had a different register and perhaps should be investigated, it had gone over their head. I’m not blaming them, but it just highlights the need for a certain type or age of person from the workforce—or at least, it validates my insistence that captions and subtitles need an editor. But sadly, typing speed and facility with software are what create the poor results from freelance-marketplace lowballers who are willing to transcribe for pennies. Aside from style standardization and formalization of training, how will we be able to create an understanding that captioning is a skilled profession requiring education (and perhaps accreditation) and to get away from untrained people banging out craptions?

SZ: You’ve raised another excellent question. I don’t have any easy answers. I think we can continue to chip away at people’s expectations and assumptions about captioning (and about access more broadly). Above, I mentioned educating the public, both formally (in our classrooms) and informally (through blog posts, social media, interactions with clients). I am hopeful that our college courses—even when they are not focused on training captioners or even captioning per se—can create lifelong advocates for digital inclusion. More students than ever are being introduced to digital accessibility and universal design. My hope is that they will take their knowledge into their future workplaces and teach others about the value and importance of video access for all.

 

 I hoped to be able to turn some readers into captioning and access advocates. Several have told me that they will “never look at captions the same way again.”

 

We can also continue to research captions and user experiences to disrupt the status quo. With Reading Sounds, I set out to show that captioning is much more complex, rhetorical, subjective, creative, and interesting than we have typically assumed. I had in mind a diverse audience (not just scholars in my own fields) because I hoped that the book’s message might resonate with students, film fans, and others who may not be connected directly to captioning. In other words, I hoped to be able to turn some readers into captioning and access advocates. Several people have told me after reading my book or attending one of my presentations that they will “never look at captions the same way again.” If we can find ways to get this message into the minds of more people, including movie producers, perhaps we can chip away at the assumption that the subject couldn’t possibly be rich enough to support a book-length treatment, that captioning is not a profession but a simple skill, that captioning only benefits a few people, and so on.

VW: That’s why I wanted to spotlight your book with an interview. It is not only accessible but also fascinating, thought-provoking reading for anyone, not just academics. I think I’ve told you that if I ever get to teach a course in caption editing, it’s going to be required reading.

The feedback from the caption users you surveyed did not surprise me. They struggled with having to rethink content in bad captions (pg. 67) and expressed a need and appreciation for excellent captions (pg. 71), which reflects my articles and guest writers’ experiences. You’re open about your son being deaf and your subsequent interest in captions; I now rely on captions because of conditions that affect my hearing. Do you think, with seemingly international pressure to legislate accessibility (despite my letters to Hollywood!), that all the different types of caption users, but especially the D/deaf/HoH, will ever see true access, by which I mean high-quality captioning? It’s been three decades already with increased application but stagnant quality. What’s it going to take until craptions are basically a thing of the past?

SZ: The number of people who need or want quality captioning only seems to be increasing as the population ages. In an era of streaming global media, more people are reading movies as well. Netflix has introduced more viewers to the pleasures and challenges of watching foreign movies with subtitles and/or with dubbed speech. (Whereas dubbing is well-known to European audiences, it is not common in the US.) Media globalization is helping to normalize words on the screen for US audiences. 

Universal design has also produced powerful arguments in favor of quality captioning for all. We know the claims and contexts so well by now that they’ve become stereotypes: watching TV in a noisy bar, studying a video lecture in a quiet library (without headphones), learning to read a first language (child) or a second language (adult), and on and on. Even nonhumans rely on captions: Google uses caption data to index the content of videos on YouTube, “but only if you upload your own professional captions. If you use the auto-generated captions that YouTube provides, they won’t be indexed because the quality tends to be very poor” (Dillman 2017). Another reason why autocaptioning is insufficient!

These developments do not eliminate craptions, but they do make captions and subtitles more visible, needed, and expected. As more users encounter and demand quality captions in more contexts, the calls for quality captioning will hopefully become more frequent and persuasive.

VW: There are so many topics you covered that I don’t think are considered even by current advocates: captioned irony; treatment of silences; nonspeech information; continue captions. And I learned a new term: captioned modulation (pg. 200). Thank you for such a broad introduction to captioning theory and practice. I hope that by the time your next book comes out, rhetoric will have moved out of accessibility-focused circles and into the mainstream as a career option that fills a need and is given more than lip service. I’d love nothing more than to not have material for [intensifies], [indistinct conversations], and [music] craption memes!

 

Head shot of Dr. Sean Zdenek in a blue shirt, dark glasses, outside with snow behind him; he is smiling broadly

Sean Zdenek is associate professor of technical and professional writing at the University of Delaware. His research interests include web accessibility, disability studies, sound studies, and rhetorical theory and criticism. Prior to joining the Department of English in 2017, Dr. Zdenek was a faculty member at Texas Tech University for fourteen years, where he taught undergraduate and graduate courses on a range of subjects. Dr. Zdenek's book, Reading Sounds: Closed-Captioned Media and Popular Culture (University of Chicago Press), received the 2017 best book award in technical or scientific communication from the Conference on College Composition and Communication (4Cs).

 

 

Vanessa will be speaking October 15–16 at #a11yTOConf on caption editing for accessibility. The title of her presentation is [dog barking in distance].

Interview: Tessa Dwyer, author of Speaking in Subtitles: Revaluing Screen Translation

Cover of Speaking in Subtitles: Revaluing Screen Translation by Dr. Tessa Dwyer, showing a film still of a young Asian couple in a dramatic setup, with the subtitle, "There's something I haven't told you yet."

RW: Hi, Tessa! Thank you for agreeing to this interview. I found your book  Speaking in Subtitles: Revaluing Screen Translation very timely, and it provoked many questions and some new thinking for me.

I started my university studies in translation, but I was surprised to learn about “value politics” in translation, which certainly wasn’t something I heard about 35 years ago. Could you provide a sort of elevator-pitch definition for readers?

TD: Perhaps because I come from a different disciplinary background – Film and Screen Studies – the “value politics” of translation immediately stood out to me when I started to engage with intercultural viewing practices and, especially, subtitling and dubbing. In fact, it was an encounter with “value politics” that really sparked my sustained interest in the topic. I was writing about Hong Kong action films in my MA thesis, using some French critical theory. My supervisor suggested I read the French theory in the “original,” yet had no qualms about my viewing of subtitled Hong Kong action films. Obviously, there are hierarchies in place about when and why translation does, or doesn’t, matter. What I found ironic was that a very learned translation of a French theorist by someone with expert knowledge of the field was not deemed worthy of serious analysis, while the less than stringent (to say the least!) subtitling of the Hong Kong film industry flew completely under the radar.

In Film and Screen Studies – especially Anglophone film theory – translation is so undervalued and un-theorised that it is almost entirely invisible. Despite the canonical centrality of European filmmaking, for instance, in the development of film theory and culture, the role of translation and the inter-cultural basis of much theorisation is almost entirely ignored. Translation speaks to reception contexts, over those of production/creation, and for this reason, it is often regarded as utterly inconsequential or, worse, as an affront to the creative process and to authorial vision. In this way, translation threatens the core stakes upon which so much of film and screen culture remains invested. That, I guess, is why I find it so fascinating and why I love how translation can demonstrate in myriad ways how the very distinction between production and reception breaks down. Everyday practices of subtitling and dubbing can really challenge so many assumptions and biases in the way we understand and discuss film and screen.

So much for an elevator pitch!... more like a meandering rumination.

RW: That’s great: all helpful!

You discuss critiques of subtitles which include elitism. Do you think wider access to film and video through prevalent video-on-demand streaming services is reducing this problem, which perhaps was more of an art-house issue for foreign films in the past?

TD: This is certainly something to consider. The disruptive influence of streaming platforms is immense, and as I argue in the book, the global media flows enabled by online networking are affected, at every turn, by language difference and translation. These recent industry shifts really bring issues of translation to the forefront of our changing media landscape. So yes, I think that streaming services are set to impact significantly on attitudes to subtitling and dubbing, yet it is too early to tell how this will play out. In 2014, there were predictions that Netflix would cause the demise of dubbing within Europe by providing timely access to content in its original language. However, by 2018, Netflix was streaming dubbed versions of shows by default, claiming that even when audiences insist they prefer subtitling, dubbing keeps more people watching.

RW: You cover issues around translation studies in your book, and the current focus on content accessibility has certainly made this area more important than ever. Do you see audio-visual translation studies increasing in popularity, either as a result of demands for accessibility or because of the globalization of video content (VODs, gaming, etc.)?

TD: Yes, as I mentioned above, I think that the advent of streaming services is increasing attention exponentially on screen translation and localisation (including fan translation and crowdsourcing), and hence burgeoning areas of research are emerging within Translation Studies. Content accessibility is definitely on the agenda in terms of industry regulation and policy, while global streaming services are having to prioritise translation and localisation. In 2017, for instance, Netflix launched the custom-built HERMES subtitling and translation test and indexing system, which it claims will allow it to “resource quality at scale” through standardised testing and unique identifiers, enabling it to use “metrics in concert with other innovations to ‘recommend’ the best subtitler for specific work based on their past performance.”

RW: Cultural misappropriation in the arts is a hot topic at the moment. Can you share some advice for young or emerging filmmakers, who might be trying to be more creative in order to get a foothold and visibility in a noisy film climate, about how and why to avoid détourning?

TD: Well, I think cultural misappropriation is an ongoing (perhaps necessary) risk attached to many forms of intercultural communication and creation. Détournement was a radical, activist strategy that sought to upset boundaries and challenge modes of thought and politics. It didn’t shy away from cultural misappropriation, but rather, confronted it head-on. It set out to offend and to shock. My take on all this is that intercultural modes of production and reception are vital, essential elements of mediatisation – no matter how risky. We need to recognise this and consider the complexities of translation involved in everyday practices and modes of engagement. I would rather that misappropriation continue to surface as an issue, than that creatives simply avoid engaging beyond their own safe cultural borders and boundaries.

RW: You talk about abusive and corruptive translation and quote Derrida about translation: “... it necessarily violates even as it devotedly follows or respects the original.” As a copy editor of books, I find my profession needs to walk a fine line, being “at once violent and faithful”: helping the text while also maintaining the author’s voice. “Nornes locates translation abuse within populist practices like anime fansubbing.” I feel the same way about self-publishers who think Grammarly can replace professional editing or who just want to ignore all writing conventions in the name of creativity. But your book makes a reasonable, unemotional examination of fansubbing. You changed my black-and-white thinking about it; well, you brought my righteous indignation down a notch or two! Just as editors should not encourage grammar policing, what can you say to people who really bristle at fansubbing?

Let’s start with a provocation: maybe translation is, at heart, a fan activity?

TD: Let’s start with a provocation: maybe translation is, at heart, a fan activity? What motivates someone to labour so intensively and minutely with another’s text or creative work, if not some form of respect, devotion or fandom? Of course, the professionalization of the industry means that naturally many translators now routinely labour on works they do not love in any sense, but if we try to think about the origins of the practice, in scholarly and religious contexts say, the fan sense of investment holds.

Speaking from outside the field of professional translation – without the need to defend my own territory – I think it’s easier for me to appreciate the creative and sometimes subversive nature of fansubbing. Also, I’m interested in what fansubbing shows us about global media industries broadly. Fansubbing alerts us to very interesting things that are happening within global media flows, articulating gaps and loopholes, challenging politics, re-purposing technologies and, in some ways, helping to shape the future of global media industries.

Fansubbing is thought to have begun in the US when TV networks stopped broadcasting anime titles like Astroboy and Gigantor. Fans simply went in search of content themselves (sourcing video tapes directly from Japan or Hawaii), which then needed to be translated. As they set about translating for themselves, they discovered the extent of cultural adaptation/appropriation and reworking involved in the US TV broadcasts, and came to see their own translations as more faithful and authentic, and ultimately as safeguarding the texts. This history is important as it shows how professionalism is by no means a guarantee of quality, due to corporate agendas, industry conventions, cultural attitudes and other factors.

Also – I should mention that many professional audiovisual translators are themselves very interested in fansubbing, and feel that there are many lessons to be learnt. Minako O’Hagen, for instance, notes the benefits of collaborative, peer-to-peer working environments with in-built feedback and mentoring mechanisms. O’Hagen and others also point to the value of expert genre knowledge as something that the industry is learning from the example of fansubbing. Netflix’s Hermes tool is a case in point: the aim is to match the right translator with the right content.

... we should value, not fear, fansubbing...

One of the major reasons why we should value, not fear, fansubbing is that many language communities around the world are underserved by online offerings and by professional translation. Collaborative fansubbing provides a means to do something about the inequalities that persist in online modes of screen media access. While Netflix has expanded into 190 (out of 195) countries across the world, it only supports around 20 languages. The Netflix Vietnam service, for instance, offers a very limited range of Vietnamese-subtitled content, and so, once again, viewers resort to fansubs, using websites like subscene.com.

RW: Some people might be surprised to learn about subversive and spontaneous translation of films by audience members; online, I recently learned about lektoring. These brought to mind my days watching shadow-cast performances at The Rocky Horror Picture Show! You also talk about the “participatory” nature of today’s popular and public realms in the area of media consumption. Recently, a competitor criticized an article I had posted, about how a “good enough” attitude to captions is unacceptable, particularly in terms of accessibility, as being too simplistic. I know your book focuses on debates around translation in subtitles, but what’s your opinion on accepting a “good enough” level of captioning? (And you don’t have to agree with me.)

TD: I think it’s always important to advocate for high standards in captioning and other forms of media translation – especially in relation to policy guidelines and regulations. Yes, good enough is not an attitude that industry bodies should take on board, nor translation professionals. And yet, I would never want to dismiss the efforts of amateur, volunteer and community translators, who largely labour at the task of translation in response to industry gaps. I agree whole-heartedly that machine translation can never substitute for human translation and perhaps streaming platforms like YouTube that offer automatic captioning tools are creating such a misconception. The fact that captioning is often unedited is indeed a sign of discrimination and shows a lack of commitment by governments and media industries. It’s an important issue, and one that I think fansubbing and DIY captioning can actually aid. The battle isn’t against amateurs lending a hand where they can – it’s about governments and corporations avoiding their responsibilities and obligations. This is largely what fansubbers are also battling against: lack of access. So why not join forces and get fansubbers to champion the cause and help advocate for change? (n.b. Viki did this when it joined with deaf actor Marlee Matlin in the Billion Words March campaign.)

RW: If I ever teach a course on caption and subtitle editing, Speaking in Subtitles is going to be one of the books on my required reading list, and it really should be a staple in cinema studies intro courses. Although it’s academic, it’s packed with interesting information for general readers that will open their eyes to subtitling and captioning issues that go way beyond craptions and typos: literacy, ethics, politics, media piracy and guerilla efforts, cultural capital, interactivity, quality control, “thick translation” and User Generated Content, massively open translation, CT3 (community, collaborative and crowdsourced translation), and Viki. Even the term animé is demystified. And thank you for setting us straight on the word for the @#$%&! strings that represent prohibited expletives: grawlixes or “obscenicons” (Dwyer, pg. 120; Díaz Cintas, pg. 13). Finally, you’ve provided me with the terminology I needed for a future article I’ll be posting about more creative applications of captions: “integrated subtitles.”

Is there anything else you’d like to share? Perhaps something that didn’t make it into this book?

TD: I’ve published recently on barrage cinema in China (where viewers text comment onto the movie screen) – which relates tangentially to subtitling as a text-on-image mode! I’m also developing a fansubbing project around an in-production Spanish-language web-series called Distancia (watch the trailer here).

RW: I love the discussions around language and vocabulary in the barrage cinema article (“assault,” “bullet subtitles,” “hecklevision”!), and I'll keep an eye out for Distancia. Thank you again, Tessa!

TD: Thanks so much for this positive feedback. It’s truly gratifying to hear that you have found something of value in my book (despite its occasional forays into academic abstraction), and that it even has use for someone working in the industry. I really appreciate your thoughtful comments and enquiries and look forward to catching your next post. So, the pleasure is all mine – thank you!

 

Headshot of Dr. Tessa Dwyer

Dr. Tessa Dwyer is a Lecturer in Film and Screen Studies at Monash University. Prior to arriving at Monash, she taught Screen and Cultural Studies at the University of Melbourne, and worked as a researcher at the Swinburne Institute for Social Research, Swinburne University. Tessa is a member of the inter-disciplinary research group Eye Tracking the Moving Image (ETMI) and president of the journal Senses of Cinema (www.sensesofcinema.org).

Tessa’s research focuses on screen translation, language difference and transnational reception and distribution practices. She holds an Honours and MA degree in Fine Arts (Film) from the University of Melbourne, and a PhD in Screen and Media Culture, also from the University of Melbourne.

ASL Resources for Hearing Students

[Pre-video intro: When you’re starting out as a hearing person learning ASL, there is a wealth of info available, but it can be hard to sift through it all and find the most helpful resources. Sure, loads of people get into Switched at Birth and new signers love This Close, but picking up a few signs from shows is not the same as having resources that are pedagogically sound, meaning they’ll actually help you learn and retain material. The rest is up to you. You’ll have to practise and review regularly, or it’s not going to stick.

I’m a <cough> mature learner, which means I have to do way more review than my 20- and 30-something classmates. Unlike some of them, my hearing is not imminently disappearing, although it was checked and determined to have some loss on top of my Ménière’s disease, tinnitus and hyperacusis. I use captions and find I’m needing them more and more. But even those in my classes who are hard of hearing or have CIs or hearing aids need to do the homework, review and look for additional resources.]

Video on YouTube on my Reel Words | No More Craptions! channel. Images included in the video are not reproduced here.

[Video transcript: I’ve spent the last year exploring some ASL learning resources and would like to share my impressions. I’m not endorsing any, just giving my personal reactions as a hearing learner. I also am not saying they are all correct or perfect; be aware that like any language, there are regionalisms to sign language, even within one (like ASL), and as a newbie I am no expert. I’ll talk about apps, websites, books and meetings.

One caveat, though: BSL is VERY different from ASL, so watch the source of resources. I can’t understand BSL, but I do watch films on BSL Zone as part of my cultural studies.

Start with finding a class offered by a local association or an approved, certified course. As someone pointed out to me, you never want to learn from a hearing person, just as it’s best not to learn any other language from someone who doesn’t have it as their mother tongue. I’m in my third level at the same place of instruction, where the curriculum is Signing Naturally by Dawn Sign Press.

Signing Naturally is a course widely used in classroom settings, and it uses multiple learning styles. The workbook includes CDs which cover learning material, vocab review and extensive supplementary material, such as elements of storytelling, which is particularly important to Deaf culture.

I like the fact that even in parts published 10 years ago, it’s LGBTQ friendly, it includes racial diversity in its actor casting, and the material is usable and real: no "See Spot run" equivalents. The stuff I learn is what I actually need when I interact in Deaf social situations (more on that later). I haven’t seen any disability diversity, ironically*, in the casting, but I’m only partway through Book 2, so perhaps that comes up later. The second book is better in that vocab review is appended to the end of each lesson rather than each unit, which is easier to study with. It uses photos of the actors with superimposed directional arrows and other non-manual markers, and these are clearly visible in the videos. Vocabulary is presented in a non-glossed way, which is in keeping with ASL pedagogy, so that you aren’t looking up word-for-word equivalents. This may seem like a drawback to newbies, but trust me, it makes perfect sense once you as a hearing student get going and learn more about the language and culture. So, with no traditional vocab lists, I have a lot of paper clips and stickies for stuff I need to review more often. The only problem with the curriculum I have encountered is that the textbook needs copy editing and a professionally produced index, and I don’t say that as an editor/indexer (which I am) but because I find it hard to access some information as a student.

Another wonderful part of the course is its inclusion of profiles of prominent Deaf people and reproductions of art by Deaf artists. Both are key to broadening the hearing person’s learning about Deaf culture. The course also discusses communication and etiquette, which is invaluable.

Finally, the course has a website with corresponding video libraries, so if I’m not home, I can practice some vocab, for instance, while I have some time to kill. It only has a one-month free trial before you have to pay around $15US for access to the videos, but once you’ve paid, you have access for good: so while I am now in the video library for Book 2, I still have access to those for Book 1, which helps me review. Also, I was having trouble loading videos on my iPhone’s default Safari, but they advised me to try accessing via Chrome on my phone, which worked, and I appreciated the quick and useful customer service.

I can’t comment on other curriculums. I did ask for access to a review copy of one of the “Green Books” by Charlotte Baker-Shenk and Dennis Cokely out of Gallaudet University but didn’t hear back. I got the impression they are used for more “serious” students of ASL, such as those working toward eventual training in ASL interpretation at the post-secondary level. Signing Naturally seems more directed at non-professional goals, although Dawn Sign Press does also offer the Effective Interpreting Series by Carol J. Patrie for professional ends.

Another curriculum option for the determined and self-disciplined student is Lifeprint or “ASLU” by Dr. Bill Vicars, available online for free (incredibly), although course completion can only be recognized by certified schools using it. This was started, it seems, as a labour of love by Dr. Bill and is well known as a great resource, both as a course and for its dictionary functions. His classes are posted online to follow along with the online lesson plans, and he has seemingly endless video resources which have been updated over the years. He does have a Donate button, there is a separate site for fingerspelling practice, and he and his wife, Belinda G. Vicars, are indefatigable admins for a lively, helpful and engaging Facebook page.

I use Lifeprint for the weeks between my course’s terms. I learn extra signs and information, and I reinforce what I have learned. You can learn more about ASLU here and check out his homepage for the shocking number of resources available. His presentation of hand shape and non-manual markers suits my learning style to a T. Dr. Bill discusses similarities in signs, provides hyperlinked cross-references, and his sharing of the nuances of Deaf culture is invaluable to hearing people. Considering that he and Belinda are both faculty at California State U at Sacramento, their dedication to contributing to the free online fabric of ASL resources is remarkable.

If you’re not looking for official learning yet and just want to dip your toe in the water, there are a gajillion resources online to let you check out ASL and see whether it’s something you might be prepared to commit to learning.

Who doesn’t love Marlee Matlin? She has an app called Marlee Signs—a tiny bit outdated IMHO. (Am I a horrible person for saying that?) There’s competition in app world, and I’m afraid this one didn’t keep me using it. Like many, it comes with a basic starter pack and adds others for about $2–3. It also has a Baby Signs package. Who knows? Maybe it’s right up your alley. Definitely worth a look.

A definitely sexier and hipper app is, appropriately, The ASL App by Ink and Salt LLC. Lots of people are drawn to it as a starting point, for good reason. Aside from including people well known in both the Deaf and hearing cultures, such as Nyle DiMarco, the app comes with about six free packs, or $9.99US will get you everything, including updates. I snapped it up. The reason it’s so handy is that it includes truly useful content. Although my curriculum teaches beer and wine, it doesn’t include how to sign wasted, selfie or stalker, for instance, or a bunch of social media terms that come up in real conversation. It’s great for current colloquialisms to get you by, but a dictionary resource or course it is not.

They also have ASL with Carebears, Speak2Sign (which looks like a B2B training resource) and the ever-fun Nyle’s Stickers for iMessaging, which gives good bang for the $1.39. The thumbnail for the latter is a cartoon, but the app is augmented video: it’s even better than it looks. I’d love to see it available for Facebook, Messenger, Twitter, and Instagram, too. They created Cardzilla, too, discussed later. You can learn more on their various social media platforms also.

I suck at fingerspelling (doing and reading), so I needed an app! I like Spell by Wit Dot Media. Word lengths and speeds increase with success, and what is very helpful to me is that the hand shapes change direction: you will not always be signing with someone dead-centre to you, so you have to learn to recognize fingerspelling from different angles. The app is clear and simple.

I’m not great at some numbers, either, such as ordinals, ages, or year dates; here’s a video with decent info on the numbers from 1 to 100: https://www.youtube.com/watch?v=6r8HDsBMk1E.

A bunch of other people and I were looking for an app that would let you insert handshapes for alphabet letters into various social media, so Signily looked promising. Save your $1.39. It doesn’t work, even when you allow its access in your phone’s keyboard options as per their troubleshooting. We’re all ticked! If you know of another one that works, please share!

I hate, hate, hate using the phone and, by extension, Skype, Zoom and Facetime. But a Deaf friend recommended the app Glide, which I confess I haven’t forced myself to use to contact her yet. It looks like they’re also developing CMRA, “the camera for Apple Watch,” fyi/if you’re into that sort of thing!

Speaking of chatting with Deaf folks, I attend social coffees where D/deaf, HoH and ASL-fluent hearing people go to chat/keep up their language skills, and they are very gracious about welcoming people like me with my kindergarten ASL. There, as in class, it’s “voice off,” meaning you don’t speak vocally, both as a courtesy and to encourage language development. But there are times when you just don’t have the sign vocab and can’t fingerspell an entire paragraph or question, so people often use their phones to type out messages. The ASL App folks created Cardzilla, which is insanely useful in its simplicity. Instead of opening Notes and getting tiny print, it starts out with really large letters (which even I don’t need my glasses for) and conversation is easily passed back and forth. With swipes you can see your history and faves, share via AirDrop, and shake to clear, and it will resize automatically if you want a long text extract to fit on one screen. Love it. Apply your saved Signily $1.39 to buying Cardzilla!

Another godsend in the reference line is Signing Savvy. There are soooo many resources available, but this seems to be a very reliable one. I’ve even seen my teachers check a few things on it. They have a good website, which is perhaps best known for the Word of the Day that is available on social media and/or daily email notifications. With each WOTD is a corresponding sentence using it in context, again with real-life sentences, not stupid examples. The BEST part of SS, like The ASL App, is the turtle function! You can slow the video down and watch it as many times as you need to, until you can parse and reproduce the sign! It can be set to as slow as ¼ speed or as fast as 2x, and you can print out the frames. There is a personal dictionary you can create, the videos can be enlarged, there are variations presented, they have hand shape/NMM/facial grammar/movement descriptions (my greatest requirement) and a memory aid (ditto). Generally the same guy does the WOTD and the same woman does the sentences, so you aren’t distracted by changing presenters and individualisms. It’s consistent, and the dictionary search function is excellent.

The catch? The online free version is okay, but all these features are available and augmented by paying for a subscription. I saved 64% by signing up for a 3-year subscription for $129.95 US ($167CAN), which sounds like a lot but works out to about 15 cents/day! Getting the membership and extra functions (I’ve only mentioned a few) was a no-brainer. I note they also have a new Chat service where you get 30 mins for $20US with a Deaf expert (credentials provided on the site), which is on par with tutoring fees where I live.

Finally, I want to cover an old-school resource: a bound and printed paper book, The Canadian Dictionary of ASL by the University of Alberta Press. Now, before you roll your eyes, there are many advantages to using a nonscreen resource like a printed dictionary, including the ability to be fortuitously distracted by nearby information, rather than having to search intentionally for it electronically. This 840-page book has over 8700 signs relevant to ASL as it is used in Canada. This is hugely important because, just as no language course can cover all regionalisms, my textbook often presents American signs that we then have to unlearn and replace with the Canadian versions.

The dictionary, by Carole Sue Bailey and Kathy Dolby, is well laid out. For instance, the front endpapers give quick access to the alphabet and numbers, while the back ones review basic handshapes (not the same as letters; these are some of the building blocks of signing). The extensive front matter includes almost a hundred pages on numbers, time concepts, geographical place names and pronouns before the entries proper begin. Line drawings are clean and clear, and even fingerspelled entries are presented. As in any good dictionary, different meanings get their own entries, and I like that even these have discrete presentations, so a verb and an adjective are clearly divided, for instance. Same-signs, alternates, and discrete cultural applications are often given. So signs for pee (sorry, that’s just what I flipped to!) are divided into general, women’s/girls’, men’s/boys’ and animals’ entries! This is a rich language!

But since it was published in 2002, I wondered how up to date it was, so I decided to look for some newer words. Internet is in there. So are email and spreadsheet. Even the Oxford Canadian Dictionary (English language), 2nd ed., dates back to 2006; with the internet, language changes too quickly to afford updates to Canadian [word lists] in our publishing climate.

What’s important is that I can look up toque, Saskatoon, parliament, toonie, and slow—both the Atlantic and more westerly signs for it!

I hope the opinions about these resources help. If you know of other excellent ones, feel free to share on Twitter, Facebook and my website. I’m generally searchable as Reel Words Subtitle and Caption Editing.

Thanks!

2021 Edit

I've been apprised of a few other resources for people who have experienced hearing loss. These are American and I have no affiliation or experience with them, but they may be useful for some readers, especially seniors:

https://www.helpadvisor.com/conditions/hearing-loss-resource-guide

https://www.medicareadvantage.com/resources/hearing-loss-and-medicare-resource-guide

 

* NB: I do not subscribe to the medical model of deafness, so my comment about a video not having disability diversity is not about "being handicapped" but rather about the fact that the Deaf community encourages Deaf talent, and I thought this awareness might extend to casting actors who face additional accessibility barriers to being hired in acting.


Dear VODs: Stop Blindfolding Deaf People!

Close-up of CC #NoMoreCraptions button. CC logo in black and white on button, pinned to black leather bag. VWells image of Rikki Poynter button.

I often can’t hear sound effects in shows and movies, so I use captions so as not to miss anything. Because I mostly use Netflix to stream (I don’t have a TV; I sometimes rent/borrow DVDs), I was curious to know how other VODs (video on demand [streaming] services) fared in the application of and care taken with subtitles and captions. It’s not just the D/deaf/hard of hearing who use captions for same-language film and video, and access to global programming is making good subtitling more of a must than ever.

This is my survey of captions and subtitles on some common VODs. As in my survey of cinema access for the D/deaf in Toronto, when I contacted these companies about problems I was transparent, with Reel Words info in the email signature. I also approached this as an everyday user. Where necessary, I paid out of pocket to get the services. I’ve assigned a star rating system for overall application and treatment of captions/subtitles.

D/deaf people who sign have the right not to have their hands and arms restrained because it prevents them from communicating. What I discovered is that these providers might as well be blindfolding the D/deaf/hard of hearing. They can’t see the content that isn’t provided for them.

Google Play 

I had heard that The Silent Child (2017) was on Google Play for $2.99, and I obviously wanted to see it for the storyline and use of ASL. It turned out the film could only be placed on a wish list for when it was made available on Google Play in the future. YouTube Movies says it is not available there.

I searched for some free movies and tripped upon the sociologically fascinating (although perhaps not intentionally…) The Creators, made “in conjunction with YouTube” and boy, did that ever show!

It was a sort of advertorial documentary about young YouTube phenoms in the UK making their living through that platform, all with different…talents. Here are two I’d never heard of, Niki and Sammy, branded online as NikiNSammy. Not quite sure what their talent was aside from having sprung from the same egg, but let’s focus on the captions as they were used throughout the doc. Here’s an example of YouTube’s idea of accessibility:

Young adult twins side by side, incomplete captions are And that's really; Which is amazing
Screenshot from The Creators on Google Play

 

NikiNSammy’s captions were split and both left-aligned (which is not helpful for twins…), and these two titles, containing 36 characters between them, were only up for one second. Now, the most current Netflix guideline is 20 characters per second. While it was a UK-filmed short doc, the spelling used was both British and American. The worst offence was that it seemed they really did use YouTube automatic captioning, because there were constant errors, such as captioning react for interact and real for raw. The caption bands jumped all over the screen, as if placed for the cool factor, with absolutely no understanding of what captions (CCs) are intended for. A professional titler was not employed, and no QC person could have reviewed it. Clearly the producers didn’t give a hoot about accessibility. Thoroughly appalled at Google/YouTube.
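To make the reading-speed point concrete, here is a minimal Python sketch (my own illustration, not any platform’s tool) that flags caption cues whose characters-per-second rate exceeds a threshold such as the 20 cps Netflix guideline mentioned above. The cue timings and text are invented for the example.

    # Minimal reading-speed check for caption cues (illustrative only).
    # A cue is (start_seconds, end_seconds, text); the threshold is characters per second.

    def cps(text: str, start: float, end: float) -> float:
        """Characters per second for one cue, treating line breaks as spaces."""
        visible = text.replace("\n", " ").strip()
        duration = max(end - start, 0.001)  # guard against zero-length cues
        return len(visible) / duration

    def flag_fast_cues(cues, threshold=20.0):
        """Return cues whose reading speed exceeds the threshold (e.g. 20 cps)."""
        return [(start, end, text, round(cps(text, start, end), 1))
                for start, end, text in cues
                if cps(text, start, end) > threshold]

    # Hypothetical cues, including a two-line title shown for only one second.
    cues = [
        (10.0, 11.0, "And that's really\nWhich is amazing"),  # flagged
        (12.0, 14.0, "Thanks for watching."),                  # fine
    ]

    for start, end, text, rate in flag_fast_cues(cues):
        print(f"{start:.1f}-{end:.1f}s  {rate} cps  {text!r}")

Run on these made-up cues, the two-line title works out to roughly 34 characters in one second, far past a 20 cps ceiling, which is exactly the kind of cue a QC pass should catch.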

Amazon Prime Video ★★

The Silent Child wasn’t listed on amazon.ca’s Prime Video.

So I decided to take the 30-day free trial to watch another example and promptly cancelled to avoid the $79CAD/yr fee.

I decided to go with Robin Hood (Ridley Scott, 2010), a choice influenced rather arbitrarily by my recent participation at an axe-, knife- and archery-target place. I wasn’t sure if the captions used were made for this stream or were from a previous production and distribution. As is common, the captioner was not credited.

The font was crisp white on black at the bottom of the screen, or on a shaded dark band when a top line had to be placed higher up over the picture. But there were great inconsistencies: [ALL SCREAMING] vs only [CHEERING] when all were cheering; poor and inconsistently styled offscreen speaker IDs; poor choices in deciding on the most relevant sound effects, such as dogs barking but not men yelling in attacks or falling off walls; the King of France (should be king if not using a personal name); Jimmy boy as a name (should be Jimmy-boy or Jimmy Boy). In short, the usual problems that I see constantly on Netflix. Visually accessible but not textually accessible, which is a huge part of the game! As I often comment, unedited titles cause reader stumbles, which cause the user to lose concentration and thus comprehension.

Group shot of warriors and brigands under a tent at night. Caption: When you had us herd 2,500 Muslim men, women, and children together, (incomplete)
Screenshot from Robin Hood (Ridley Scott version) on Amazon Prime Video (amazon.ca)

 

Here, two and a half thousand should have been used, but space was tight; however, space could have been made by forcing an earlier and split title. Not great, but Russell Crowe did not say “twenty-five hundred.”

Good points: the ability to rewind OR fast forward by 10 seconds, and three subtitle format options (the fourth was black on black…).

Netflix ★★

Netflix’s captions and subtitles were the raison d’être for this survey. I was curious to know if other VODs were as ineffective and disregarding of Deaf/HoH needs as Netflix. (You can read a lot about my feelings on Netflix on my website on the blog tab and see various #CraptionFails on the Gallery of Fails tab.) No point in repeating it all here, but suffice it to say that despite (or perhaps because of) having the monopoly on VOD, this machine has grown too big to have serious quality control of captions and subtitles. The failure of the Hermes Test, the lack of qualified QC (“Master-Level Quality Control”) editors, and the low pay leave it just a step above YouTube craptions. Really and inconsistently poor. I’ve only had about four or five examples in my portfolio of well-done captions with this provider. See my website and blog posts for many illustrated examples of the problems and how they should have been avoided.

Sundance Now 1/2★*

This check started with a series of emails to customer service because I couldn’t find the CC/ST button on the interface for some movies I wanted to review. Also, the overall platform is clumsy and annoying to navigate; the only good point is the ability to rewind 10 seconds, but, unlike Amazon Prime, there’s no option to fast forward by increments.

The customer service rep kept insisting the captions were there if only I would use a supporting device and look in the right place, but they weren’t. His final email said in part: “I checked Off the Rails and Julia, and unfortunately we don’t have captions available on those two titles right now—- [sic] while the majority of our content is now captioned, we’re still working on updating our catalog.” Now this may be true. I didn’t check all of their catalogue. But my random selections of movies did not have captions.

Incidentally, I used the search box to find films with the keyword deaf and was provided with 13 suggestions for death-related content.

Then there were the usual problems with missing words, wrong words (Yeah for Yes), missing letters, speaker IDs and punctuation. Overall it was very sluggish. Read: inaccessible. Note these captions from Broken Flowers (2005), all of which lagged, and one with a terrible line break.

Exterior shot, two men speaking seriously. Captioned A few years ago now, mate; Yeah, well, you'd hope so; She's dead you (incomplete)
Screenshot from Broken Flowers on Sundance Now

 

The worst part was that, no, I didn’t read all 17 pages of the Terms and Conditions. I had stopped at the part that says [sic]: “7 day free trial, you can cancel anytime”, which, due to the lack of copy editing, I read as meaning you had seven days free and you could cancel anytime. NOPE. $59.00 down the tube. You can “cancel” your subscription, but you’re free to watch for the rest of the year. In other words, you’ve only cancelled next year’s renewal. The only reason I’m not freaking out about the $60 is that I thought This Close, even at six episodes (and six “discussions” post-watch), was worth the money, especially if season two comes out within my subscription year.

So overall, the captions were terrible. But here’s the thing: for This Close, they were flawless. Not only that, they dealt with a bilingual show creatively and effectively. This show saved them from getting no stars.

Sub-survey: This Close as an example of captioning as it should be.

Here the captions are placed according to the speaker:

Michael and Kate in bookstore, talking, with ASL interpreter to the right, bookstore manager to the left. Captions: Can we get out of here? Okay, I need you to deal with the emergency.
Screenshot from season 1 of This Close on Sundance Now

Bookstore manager in background, Kate and Michael blurred in foreground. Captions: You can't tell me what to do. They're talking business, right?
Screenshot from season 1 of This Close on Sundance Now

 

White letters and, in fact, a different font are used for the ASL translation, and white on dark grey bands is for the oral dialogue. This is particularly helpful if you’re like me and following both sets of captions, listening to the hearing speaker and trying to follow the ASL or watching the fictional ASL interpreter sign and interpret! It was actually doable with this thoughtful crafting.
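As an aside for fellow captioners, here is a minimal sketch of how this kind of speaker-based placement and labelling could be expressed in a delivery format, using WebVTT cue positioning and voice tags, generated by a small Python script. This is purely my own illustration of the general technique: the timings are invented, players vary in how they honour cue settings, and it is certainly not the actual file Sundance Now uses.

    # Illustrative only: two cues placed left and right to match on-screen speakers,
    # using WebVTT cue settings (position/align) and voice tags (<v Name>).
    cues = """WEBVTT

    00:00:01.000 --> 00:00:03.000 position:25% align:start
    <v Kate>Can we get out of here?

    00:00:03.200 --> 00:00:06.000 position:75% align:end
    <v Michael>Okay, I need you to deal with the emergency.
    """

    with open("this_close_sketch.vtt", "w", encoding="utf-8") as f:
        f.write(cues)

    print(cues)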

And when they do have to include an offscreen line, it’s correctly IDed and formatted to the side.

Kate in front of a bookstore's shelves, a finger pointing at her. Left caption: You can't tell me what to do. Right captions: [Morgan] Yeah.
Screenshot from season 1 of This Close on Sundance Now

 

And sidebar, I loved this scene where Michael’s hands are restrained as he is removed from a plane, and Kate says they can’t do that to a deaf person:

Kate yelling on an airplane: He's just trying to communicate, you fucking audist!
Screenshot from season 1 of This Close on Sundance Now

 

So, really clear, almost always perfect renditions of the audio (except for the odd Yes for Yeah), although I’m not sure what’s up with the three-line captions, especially when the lines are so short. Perhaps it’s to ensure the titles are large enough to be visible for low-vision users, since Sundance Now doesn’t allow viewers different caption-display options?

This tells me one thing. VODs basically don’t care about Deaf/HoH access unless the executive producers (and guest stars like Nyle DiMarco in this case) of the show are Deaf and get the need! So, yay them, but that’s a fairly easy win, and boo Sundance overall.

Back to the show’s “Now The Discussion” segments. These featured cool young people sitting around a lovely studio drinking beer and chatting about the previous episode. Thankfully they had a Deaf/HoH participant in one I watched, and they also interviewed Shoshannah Stern and Josh Feldman after the last one. BUT but but: they applied post-production dubbing of an interpreter over the signers’ speech, rather than using the real-time interpretation they would have had in order to facilitate the convo with the hearing people. It struck me as a bit disingenuous: a bit like sim com, creating a false sense of synchronization, seemingly out of an aversion to unwanted pauses in group dialogue while the signers’ words finished being interpreted. Ew. As if waiting for a Deaf/HoH person to be orally interpreted were a problem or awkward. Ironically ableist/audist.

Also, these chats allowed expletives to be kept in the audio track but then captioned them with two hyphens, which is not only bad practice but also reinforces the idea that the Deaf/HoH audience is not deserving of full and correct content. When characters swore/used vulgarisms in the show itself, the f-bombs etc. were shown fully spelled out. I thought this was another unfortunate dichotomy that spoke volumes.

As for the show itself, who doesn’t appreciate it when #DeafTalent is used, such as RJ Mitte, who has cerebral palsy? And the occasional use of loud static and other noises to obscure dialogue reinforces the challenges faced by some deaf people, in case we hearies get complacent in our viewing.

*So This Close gets ★★★★1/2  but Sundance Now itself gets only a half star.

Hulu (star rating not applicable, but see Reel Words home page image)

I’ve wanted to watch The Handmaid’s Tale since the get-go, as it is one of my favourite books. This time I did check the 17-page Terms which do allow for cancellation at any time, for reelz… I wanted to sign up for the $7.99 Limited Commercial plan ($11.99 for no commercials: isn’t that the flipping point of VOD—no commercials?)

I had trouble signing up due to not having a US zip code but a support chat told me “Our services are actually not available outside of the US unless you're a US military member living on base. Would you happen to fall in this category?”

Okay, over to:

CRAVE TV ★★★

Sub-survey: THE HANDMAID’S TALE

I went with the $7.99 monthly after 30-day free trial because I really, really wanted to see the show and didn’t mind paying for month two if I didn’t get through it all. (Who was I kidding? I binged it!)

Sidebar: as I’m sure you know, this was filmed in Toronto and around/near the GTA, so if you’re interested, here are links to the location details.

https://torontoist.com/2017/06/where-the-handmaids-tale-was-filmed-in-toronto-part-one/

https://torontoist.com/2017/07/where-the-handmaids-tale-was-filmed-in-toronto-part-2/

If you go to my website home page, you’ll see a screenshot from Hulu from a trailer for the show. I discovered that they hadn’t done anything to address the issues of captioning since it was bought by Crave, and here are some of the more problematic captions as they appeared in Episode 1.

The Handmaid’s Tale has some parts of captioning done right, but then there are the usual inconsistencies that creep in. Sometimes [indistinct radio chatter] is heard and captioned as a very important part of setting and mood for the show, but at other times it is not captioned when clearly heard and significant. This does not provide full and complete access. And that’s supposed to be a standard in this country, but it is not enforced.

The show starts with a bang, with sirens blaring before the first visible frame, but [SIRENS BLARING] only shows for one second (nonstandard practice) and does not continue with the action, even though this is important to set up the mood and story.

This show is often based on the unshared thoughts of the heroine, Offred, and what she does dare to utter aloud. But these are not differentiated. (That’s Captioning 101, by the way.) In this scene, she was mocking another character internally but vocally answered Yes to her.

Offred leaving the sumptuous grounds of the house; captions: You wanna come along? Yes.
The Handmaid's Tale exterior

Below, a character who is losing it says I want my mom and is comforted by Offred with a gentle Okay. We can barely see this dark scene, so a caption-dependent person might well be confused by the lack of speaker ID.

Dim shot of woman's head. Captions: I want my mom. Okay.
Screenshot from season 1 of The Handmaid's Tale on Crave TV

 

Netflix’s style guide specifies that vulgarisms are to be spelled out (see my discussion of this around the 45-second mark of my Craption Eyerolls series on YouTube), and the same problem arises here. I’m not sure if the captioner wasn’t savvy or was applying their own personal values in not using the spelling cum. Either way, QC should have caught this. Crave certainly lets the f-bombs drop in captions, so I don’t think it was an issue of conservatism.

Offred lying on a pillow in moonlight in a dark room; caption: I can feel the Commander's come [sic] running out of me.
Screenshot from season 1 of The Handmaid's Tale on Crave TV

 

The Handmaid’s Tale takes place in a new-order society which has retro values. Church bells toll frequently, driven by plot and mood. But I kind of think using thrice is a bit over the top. Since it’s not the first time that church bells toll in the episode, space could have been saved by shortening the subject thus: Bells tolling three times. We would get that they are church bells from episode history, context and connotation.

Looking down from window to SUV in a driveway by a garden. Caption: (CHURCH BELLS TOLL THRICE)
Screenshot from season 1 of The Handmaid's Tale on Crave TV

 

Sometimes, captions are moved up to avoid covering a speaker’s mouth, to make it clear that the person on screen is the speaker. But here Offred is not speaking; Aunt Lydia’s words should be italicized, both because they come from offscreen and because they are heard over an amplification system. It also just looks weird.

Offred looking down, distressed. Caption: As you know, the penalty for rape is death.
Screenshot from season 1 of The Handmaid's Tale on Crave TV

 

Just as we need to see the speaker’s mouth moving, we need to watch Offred’s eyes carefully, as they reflect so much of what she may not say aloud. The cinematography in the series includes a lot of close-ups, particularly of Offred, to make us feel sympatico with her. Here, she is appalled by the speech, but we’re distracted by the titles actually touching her eyes proper rather than pondering her reaction. I talk about the need to facilitate audience immersion, rather than distract, frequently on my website. Breaking or preventing that immersion is one of the main ways to fail the caption or subtitle user, and it’s a key focus in my posts. This scene was made less evocative by careless captioning. And again, this show has been bought and the captions could have been improved by the new provider, but they either didn’t check or didn’t care.

Close-up. Offred looking up, pensively. Caption: But we cannot wish that ugliness away. The caption covers her eyebrows and the upper parts of her eyes.
Screenshot from season 1 of The Handmaid's Tale on Crave TV

 

I really dislike the way VODs treat music, songs and lyrics. The presentations are not helpful, the rules don’t make sense, and they’re inconsistent. And they need to change.

Credit for director Reed Morano. Caption (YOU DON'T OWN ME BY LESLEY GORING PLAYING)
Screenshot from season 1 of The Handmaid's Tale on Crave TV

 

Here, for clarity, I would have styled the caption like this:

[♫ You Don’t Own Me ♫ by Lesley Gore]

It’s obvious that the song is playing—by the fact that the caption is there—and it’s obviously the title of a song, as indicated by the customary musical notes. Had the house style not been to use ALL CAPS for sound effects, styling it as I have would have made the text even clearer as a title.

And now, the pièce de résistance, the kind of error that put this show on my home page:

Credit for Samira Wiley. Caption: [alt-code gibberish] Don't say I can't go with other boys [alt-code gibberish].
Screenshot from season 1 of The Handmaid's Tale on Crave TV

 

Every musical caption has these—what I can only guess are Unicode cut-and-paste or encoding issues; one plausible mechanism is sketched below. What this says to viewers is We don’t care enough about accessibility to create acceptable captioning for our users.
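For what it’s worth, here is a minimal sketch of one plausible cause. It assumes (and this is only my guess, consistent with the cut-and-paste theory above, not anything I know about the vendor’s workflow) that the caption text was saved as UTF-8 and later read back as Windows-1252; the strings are invented for illustration.

# A minimal sketch of one plausible cause (my guess only): caption text saved as UTF-8
# but read back as Windows-1252. The strings here are invented for illustration.
caption = "♪ Don't say I can't go with other boys ♪"

stored_bytes = caption.encode("utf-8")    # how the caption is written out
mangled = stored_bytes.decode("cp1252")   # how a misconfigured tool reads it back

print(mangled)
# â™ª Don't say I can't go with other boys â™ª

A quality-control pass that simply opens the finished file the way the viewer will see it would catch this kind of mangling immediately.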

And this is the crux of the matter. Communication is a right, and bad communication is a breach of that right.

Kate pointing angrily offscreen; caption: You can't hold a deaf person's hands like that.
Screenshot from season 1 of This Close on Sundance Now

 

For those of us in the cultural-content and entertainment industry, people who use captions and subtitles (basically everyone at some point or another) are our everything. They are the reason we have jobs. The reason we will always have work. And if there is one theme that runs through my website, it is that the audience deserves better, and we should be ashamed of delivering anything less than excellence. Sure, human errors happen. We do our best. But when we deliver a subpar product because it doesn’t matter to us personally or because we are ignorant of accessibility issues, we fail our fellow viewers.

I’m not the only one who thinks like this. I noted this Gamasutra post for its candour; it reminds me to try my best to make my work accessible:

“I never have forgot the feeling of depriving someone of an experience just because I didn't think to add a button”

Ian Holstead, Ubisoft

By providing craptions, VODs are preventing viewers, primarily (though not only) D/deaf and hard of hearing viewers, from accessing content. It is the equivalent of blindfolding them.

I think we can do better than an average of 1.7★ out of ★★★★★ for caption and subtitle quality on VODs. Industry standards call for 95–98% accuracy; across the five services I reviewed, I found a success rate of just 34% (1.7 out of 5).

Please share this widely. And please leave a comment about your experiences in other countries.

Coming soon: Apps for ASL Learners; Creative Applications of Captions and Subtitles: Yay or Nay?

How Uncaptioned Movies Are Like Old-Fashioned Vegetable Peelers

Old-fashioned vegetable or fruit peeler, with bare metal handle, against a mottled grey background*

Would it really kill us hearing folks to go to the movies with open captions?

No one complains about sidewalk curb ramps or the bumpy yellow warning strips at TTC** subway stations: they’re just…there. We don’t tell folks in wheelchairs or scooters that they may use curb ramps only in the evenings, or tell folks who are blind that they get safety features only during rush hour. Then why the flip are we insisting that captioned movies (the few that are provided with captions) be shown only on certain days, at certain times and on certain cinema screens?

I propose that we caption all movies and make them available at all times.

Now, before you whip out your I-hate-subtitled-foreign-films argument or your I’m-a-details-person rhetoric about how captions will block your view of the mise-en-scène, just grab a handful of popcorn and hear me out.

Humans don’t tend to love change. But generally, we do adapt. That’s why images like this are amusing:

Old-fashioned produce peeler with bare metal handle; post says: Did anyone ever use a peeler like this one?

We think, Wow, I can’t believe I used to put up with that! It’s ugly, it’s uncomfortable, it’s inefficient, and not everyone can use that thing effectively.

Eventually, we can barely remember what life was like before a new, improved version and, usually, we even realize that the old way wasn’t that great after all.

That is what would happen with open captions on movies. (Open captions are those that don’t leave the screen—you can’t decide to “close” them and watch video/TV/movies without them. They’re ever-present.) We would become so used to them that we wouldn’t even remember what it was like not to have them. Non-users would tune them out; users would enjoy content more easily.

Not only would about 10% more of the population be able to go to the movies, but individuals could also broaden their social horizons by attending a film with D/deaf or hard of hearing friends. As I mentioned in a recent article, I couldn’t attend a movie with more than two deaf friends because of the undersupply of assistive equipment (never mind captions). My more cynical side doesn’t understand why movie producers and cinema mega-corps aren’t embracing this—aren’t they supposed to want higher box-office takings?

When surtitles*** were introduced to the opera world (by a Canadian company, by the way), people went bonkers. Opera would be ruined, the companies wailed. Opera had to be kept pure, cried the audiences. Guess what happened. More people started going. And most of them were young people. I LOVE surtitles and find they have enriched my opera experience. And if I lose interest or a repetitive text is being sung, I just look away. They’re placed in opera houses in such a way that they don’t distract the disinterested eye but are quickly adjusted to when used. I don’t have to use or pay attention to them if I don’t want to.

But wait! you may interject, you can skip reading boring repeats in opera, but you can’t skip dialogue in a movie or you’ll be lost! Aha! rejoin I. Welcome to the world of the D/deaf and hard of hearing: the dialogue is integral to film. You’re aiding my argument.

And if you’re going to tell me your eye never leaves the multiplex screen for 92 minutes and you have taken in every object in a film, you must have a photographic memory. Most of us aren’t taking in the whole scene—in fact, only about 12% of it (sorry, directors!)—and research suggests that subtitles (and presumably captions) improve the visual experience of film or TV content. Or we look down when we drop popcorn, check our phone for the time, note the green or red exit sign, look at the couple two rows down who won’t stop talking, and so on. We are already distracted. If anything, the research suggests, captions will hold our attention on the visuals, not adversely affect them.

Also, well-edited captions and subtitles should be of such a standard that we stop noticing we’re reading them. So even if our eye does drift to them, we remain fully immersed in the storyline.

We have scent-free institutions for those with allergies. We have Braille on bathroom doors and other public signs. We allow service animals into restaurants. We keep peanuts out of schools. We’re starting to provide alternative-experience concerts for people on the autism spectrum. Do we have a fit about these accommodations? No. They have become part of the public fabric. Those who benefit from them, use them. Those who don’t, ignore them. So why the radio silence about open captions for movies? It’s like it’s not even up for discussion.

If you are such a purist cinephile who must see a “clean” version of a director’s oeuvre, buy the DVD with the director’s cut. (God knows it’ll be out soon enough.)

Or, even better, why don’t you invent some disposable eye gear, like 3D glasses, that will block out caption boxes at the bottom of screens? Or maybe a big tool that looks like ET to stick in your cupholder and project the virgin film to your sightline alone? Oh…you…wait—why should you be put out so much when you’re just trying to see a movie?

That’s an interesting question, now, isn’t it?

 

 

 

* flickr.com, Grannies Kitchen, "Vintage Vegetable Peeler"

**TTC is the Toronto Transit Commission, which encompasses subways, buses and LRT and connects with regional transit options.

***Surtitles is a trademarked version of supertitles, but as few people seem to know the latter term, I am referring to the former for clarity.

“I wish I had heard all of my dad’s eulogy”: Hearing Aids as a New Lease on Life

Patricia MacDonald is one of a few editorial colleagues with a story to share about hearing. Hers is honest and hopeful. I'm taking her words to heart as I go to get my hearing retested later this month.

She also touches on how she uses closed captioning, reminding us that caption users are not only the totally deaf or “hearies” with other reasons to use captions; many, like her, are somewhere in between.

 

Headshot of Patricia Morris MacDonald.

I can’t remember when I started noticing my hearing loss. I was probably in my late 20s. I do know the exact moment I couldn’t deny any longer that it was a problem: when I couldn’t hear everything my brother was saying as he was delivering my father’s eulogy. What a thing to miss. 

But still I didn’t get my hearing checked. I knew I needed hearing aids, but I didn’t want to wear them. Hearing aids are for old people, I thought. Everyone will notice them. So I struggled on for another few years, constantly frustrated when I caught only bits of conversations, wondering what I had missed when others around me were laughing at something I hadn’t heard. My husband bought me a cheap little device that amplified sound, and I used that a lot, especially when I was watching TV. It worked great but could only do so much. I was still missing out on a lot in real life.  

I did eventually get my hearing tested, and the results were as expected: significant hearing loss in both ears. The culprit? Otosclerosis. Basically there was a hardening of the bones in my middle ear, and they were unable to vibrate properly in order to conduct sound. The good news? I was a perfect candidate for hearing aids. The bad news? I was a perfect candidate for hearing aids. I still didn’t want them, and it was at least another year before I finally went for a fitting.  

The catalyst was an editing conference I attended in Ottawa in 2012. The sessions were fine because I had my trusty sound booster with me; socializing, however, was a different story. One-on-one interaction was okay for the most part, but put me around a table in a noisy restaurant and I was lost. I still ended up having a wonderful time, but it was a wake-up call. I needed to do something.  

So I took the leap and got two hearing aids. And suddenly I could hear all that I was missing—and it was a lot, trust me. I was very grateful for this new lease on life, although I was extremely self-conscious for the first little while, the first couple of years, even. To this day I’m still a little self-conscious. But I can hear better, and that’s really all that matters. 

Hearing aids aren’t the perfect solution, though. I often have trouble hearing on the phone and when I’m in a crowded room. I still miss some things.

Closed captioning has become a good friend, especially when I’m watching a show with fast dialogue or accents.

So there’s still frustration. But I can function almost normally again. And I must say that when I “take my ears out” at night, I welcome the quiet and enjoy a good sleep. It’s not all bad. :^) 

It’s taken a while, but I’ve come to terms with my hearing loss—I have a disability that fortunately I was able to correct. I just wish I had done it years earlier. I wish I had heard all of my dad’s eulogy. But I was thinking about how I would look instead of how I could hear. If you have hearing loss and are hesitant about trying hearing aids, for whatever reason, I urge you to give them a shot. It will change your life for the better.  

 

Patricia MacDonald is a freelance copyeditor in Cape Breton, Nova Scotia, specializing in sports books and memoirs, guides for athletes and coaches, and textbooks for physical education and kinesiology students.

She can be reached at powerplayediting@gmail.com.

 

Photo courtesy of P. MacDonald.

The State of the Caption: Deaf Accessibility in Toronto’s Cinemas

In December 2017, I contacted the main corporate and some independent film venues in Toronto to canvass their provision of hearing assistance for the Deaf, deaf and hard of hearing. Cinemas can only provide captions when production companies include the caption files. But who has what capabilities?

Before you scroll away because you “don’t know any deaf people,” consider this: you may think you don’t, but a lot of people don’t advertise their deafness because a) it doesn’t define them and b) it’s frustrating to keep explaining it to hearing people. Also, hearing folks do use captions: English-language learners; people who benefit from the cognitive support of visual reinforcement; viewers of shows with heavily accented or audio-obscured speakers; and anyone in a noisy place or somewhere the volume is off or problematic. To read about my personal experience using assistive tech in a cinema, read this article. But here’s my experience accessing information about captioning and hearing-assistive devices in eight Toronto cinemas and chains.

In my online contact attempts, my website-linked business name was in my email signature (providing transparency), and I asked only the following of each recipient:

Hello,

I've looked at your accessibility page, and I'm writing to get up-to-date information about the availability of listening assistance at your cinema[s], be it open captions or assistive technologies. I'm in Toronto. Can the Deaf, deafened or hard of hearing attend movies with full access? Do I just show up and any showing will just have assistance available?

Thank you.

 

Here are the fascinating results of my inquiries.

Exterior photo of the Cineplex Scotiabank multiplex in Toronto

Image: https://www.flickr.com/photos/stevenharris/3371960989

Cineplex

Not surprisingly for a large corporation, my inquiry was assigned a ticket number and I received an automated reply. It said:

- Please type your reply above this line -##

Screenshot of automated reply from Cineplex about caption options and how to access that information on their website

 

 

I replied,

If an individual reads my email, they'll see I've reviewed the website and am requesting up to date information—i.e., has anything about availability changed?, etc.

I'd appreciate a non-automated reply.

Many thanks.

 

Hello Vanessa,

Thank you for contacting Cineplex.

All of our theatre locations have the capability to present shows with Closed Captioning and/or Described Services. Ultimately we rely on the film distributors to provide our theatres with the appropriate files for each film so our guests can enjoy these services; because of this there can sometimes be a film without these features offered. You can always check the availability of these services by searching film showtimes online and if you see (CC/DS) underneath the film format your show will have Closed Captioning and/or Described Services. (see example below)

If you need more information on what each device does see the links below.

https://www.cineplex.com/Theatres/ClosedCaption

https://www.cineplex.com/theatres/described-services

When you go to your local theatre simply request either device from the box office and the staff will be happy to set it up for your show.

If you have anymore questions please let me know.

Have a great day!

Cineplex Guest Services

 

I thought the “there can sometimes be a film without these features offered” line was sugar-coating things a bit. I also felt the DS info indicated that they were copying and pasting rather than writing back with my specific question in mind. I wrote back,

Thank you again.

But would it be possible to implement a search function so that we can look for films with CCs rather than clicking through every possible movie and theatre to see if they have captions/assistance? I think I will also have to approach cinemaclocktoronto.com about considering adding a search feature, since most of us look for movies online in one place, not at the discrete sites of cinema corporations.

The answer:

Hello again

I will happily share your feedback with the IT team in the hopes they can add that functionality. Please note you can search individual theatres and their showtimes at the bottom of each of those links I sent you.

https://www.cineplex.com/Theatres/ClosedCaption

https://www.cineplex.com/theatres/described-services

Have a great day!

Cineplex Guest Services

 

Hm. I think a chat with the Powers That Be about search functions and increased access would be fruitful...if only I could reach them.

 

Exterior photo of the Carlton Cinema in Toronto

Imagine Cinemas

Image: grainger via https://commons.wikimedia.org/wiki/File:Carlton_Cinemas_Toronto.jpg

The much-loved local Carlton and its related cinemas have demonstrated a commitment to accessibility for audiences and employees, which I greatly respect. Plus, their staff provide outstanding customer service, and their reply reflected some of that.

 

Hi Vanessa,

We have a few locations with assistive listening devices, however we often experience technical difficulties with them which is why they aren’t advertised. We also have a few locations that play open caption films on certain days/ show times.

I will forward your email on to our Carlton and Market Square locations as they are our DT [downtown] Toronto locations and would have a better idea as to what they actually have.

 

And a further follow-up, since I’d mentioned needing to bring this up with cinemaclocktoronto.com:

 

Our Carlton location have headphones that amplify sounds but no open caption.

Our Market Square location has assistive listening devices but have expressed that they don’t work very well. They do however have 1 open caption movie a week (see screenshot on how it would appear). They are the ones that say OC. The upcoming week they are playing The Greatest Showman and the following week is Star Wars.

Cinema clock is a third party website so unfortunately we cannot control how they display content.

Hope that helps!

 

Customer service first prize to Imagine! They already show one captioned movie a week—not what I’d call fully accessible, but certainly open and willing to improve. They admitted that customer feedback on their devices has been lackluster and communicated that openly to me. They were also the first company to reply—the same day I emailed, and a week before the other replies began to trickle in. Small chain, bigger heart?

 

 

Interior photo of the Hal Jackman auditorium at the AGO in Toronto

Image: https://www.ago.net/jackman-hall-overview

Jackman Hall, AGO

I received this reply:

Hello Vanessa,

Thank you for your email.

At the moment Jackman Hall Theatre is not equipped with open captions options or assistive technologies. We are able to transmit, however do not have the devices in house. It would be up to the client to provide the film with captions built into the film as well as provide any devices or hardware. We are working towards upgrading our venue in order to be more inclusive. At this time we not able to assist with deaf, deafened or hard of hearing patrons with full access unless the film is subtitle or a client provides the assistive devices/hardware.

Please feel free to contact me if you have further questions or concerns.

 

I found this interesting considering that it is in the AGO, which is 33% government funded, and that the AGO Transformation, which included the renovation of Jackman Hall, was completed in 2008. The $276 million project couldn’t throw in some hardware then or since??

 

Exterior photo of the TIFF Bell Lightbox cinema in Toronto

Image: https://www.ticketmaster.ca/TIFF-Bell-Lightbox-tickets/artist/2270943

TIFF Bell Lightbox

TIFF Bell Lightbox’s reply was interesting in several ways.

Hello Vanessa,

Thank you for your email and interest in attending films at the TIFF Bell Lightbox.

Hearing Assist (which raises the volume for visitors with slight to moderate hearing impairment) is available for all of our screenings as it is provided by our theatre.

The availability of Closed Captioning and Descriptive Audio is dependant [sic] on the copy provided to us by the distributor. If we have films that come with Closed Captioning or Descriptive Audio we will display that information on our Website.

Please refer to the example below. (SUB = Subtitle, CC = Closed Captioning, DS = Descriptive Sound, TBLB 3 = Cinema #3)

When you arrive at the cinema please inform Box Office staff that you require additional equipment and the staff will be happy to assist you with the set-up and procedure.

Hope this information helps.

 

They described Hearing Assist as an amplifier, but a phone call confirmed the brand in use is Listen, so this does not appear to be the up-to-date information I requested.

Note, too, the use of the term “hearing impairment,” which is interesting terminology from a charity purporting to be “committed to a strategy that works to remove barriers to interacting with our programming” (TIFF, 2016: http://humber.ca/makingaccessiblemedia/modules/03/09.html). The echo I experienced was still a barrier. And being non-hearing is not considered an “impairment” or disability by the so-called afflicted or disabled. (http://cad.ca/issues-positions/statistics-on-deaf-canadians/)

I’ll also add here that during December, as a TIFF member, I received nine donation-dunning emails between the 13th and the 31st, with clickbait-worthy subject lines like “You’re on my mind” pleading for access to members’ thoughts and wishes. I am seriously considering cancelling my membership; it will depend on the response I get when I email them. If they’re really serious about “want[ing] to know what you think,” they may bite.

I wrote back asking about a search function for CCed films.

 

Hi Vanessa,

We are glad to hear the information was helpful.

At this time we do not offer a feature on our website like the one you have described but I have passed your email on to the department that handles the website for consideration when planning future updates and features. In the mean time [sic] clicking through the various films is the only way to see the information regarding Closed Captioning and Descriptive Audio.

Regards,

Customer Relations

 

Both in email and in person, I was readily informed that Call Me by Your Name has CCs. This is incorrect, and I suspect a lot of venues and services, perhaps unwittingly, plug such films as accessible to the non-hearing. That film’s dialogue includes three languages, which are subtitled for marketability; it was not intentionally released with non-hearing audiences in mind—that would be captioning.

 

Interior photo of the auditorium of the Hot Docs Cinema in Toronto

Image: https://hotdocscinema.ca/c/history

Hot Docs

The old Bloor Cinema seems to be typically at the mercy of producers’ inclusion (or not) of caption files. I suspect most documentarians are making their films on low budgets; however, some are backed by humanitarian organizations, and you’d think their mores would support full accessibility. (Perhaps some PhD student can do some research into how many docs are captioned…) And, frankly, I’m surprised that when the theatre was renovated, no funds were allocated for assistive equipment like CaptiView. Surely some doc files are captioned?

Hi Vanessa,

Thanks for your question!

Firstly, for each screening we can provide head phones that allow the viewer to increase their own listening volume independently.

For more specific hearing aid devices, such as closed captioning, the most up-to-date information would be available at our Box Office, per screening; each documentary comes with/without its own set of closed captioning and hard of hearing accessibilities.

The best option would be to call our box office the week your preferred documentary is showing and ask for the accessibility options on that specific film.

I hope this helps.

 

 

Exterior photo of the Revue Cinema in Toronto

https://en.wikipedia.org/wiki/Revue_Cinema

The Revue

Being a very small enterprise, they understandably wrote back:

Hello  Vanessa,

We currently do not have assisted listening devices available at our theatre, But we are working on it.
So sorry for the inconvenience... We will make it a priority to attain these devices during the new year and you will be contacted promptly once we have acquired a few.

Thank you for your interest,

-Revue Cinema

 

Note the apology and plan of action. Good sign! I’ve made a note to check back with them.

 

Interior photo of the Royal Cinema's auditorium in Toronto

Image: http://newsite.theroyal.to/about-the-royal/#

Royal 

Through several emails, I learned that the Royal generally does not offer deaf-assistive technology but does, interestingly, host two event series that feature accessibility measures: Drunk Feminist Films and Screen Queens. The former has hired ASL interpretation in the past because a small Deaf community is involved. Captions are also provided because these events feature live comedy commentary over the movie through microphones; captions let the audience follow the comedians while also watching the film. But with general programming, like all cinemas, the Royal is stuck with only being able to provide captioning when the movie producers provide it.

 

Cinema Clock  https://www.cinemaclock.com/ont/toronto

I also emailed this clearinghouse of movie listings to see whether the search function could be tweaked to include a way to find only those movies with CCs.

The email envelope icon on their home page does not work, so I went to their Contact Us link on the More tab and filled out an online form there. After three weeks, I have not heard back from them.

 

 

I’m going to assume that we have it relatively good in Toronto and that access to assistance for deaf moviegoers is generally sketchy or non-existent in most smaller Canadian cities and towns. If you know otherwise, please share information in the Comments.

If you're interested in accessing more caption options (i.e. for any time slot, not just the cinema's dead day), have a polite chat with or send an informative email to the management. Eventually, feedback will work its way up the corporate ladder and maybe—one day—access to movies for the Deaf, deaf and hard of hearing will no longer be considered an extra cost or frill. It'll just be going to the movies.

 

Accessibility in Movies and Video in 2018

...and Why I’m Not Going to Shut Up about It

 

I recently had my first experiences using hearing assistance technology (and I use the word technology with something of an eyeroll) at two movie theatres in Toronto. Here's why filmmakers have got to start putting accessibility functions and services into their budgets. The cinemas can't project captions that aren't there.

 

Amazon's photo of a Listen personal amplifier: a small black device resembling a handheld transistor radio, showing volume and battery levels in a screen near the top.


Audio Assistance

At the first cinema, I was lent a Listen personal amplifier device with disposable earbuds in exchange for my driver’s license as collateral.

I was rather excited because I was seeing Interstellar, and I knew from previous viewings at home and in the cinema that Matthew McConaughey’s voice is very difficult to hear in that movie. I thought this would help me hear more of his lines.

The Listen-brand amplifier (smaller than a handheld transistor radio and thicker than a cellphone) comes with a belt clip. That’s great if you’re wearing something with a waistband. A little red light is also visible, which I suspect may be annoying to seat neighbours, and I’m not sure there’s necessarily enough earbud-wire length to place the device upside down in your cupholder or wear it upside down on your belt so as not to distract them peripherally.

If you know Interstellar, you’ll know that Hans Zimmer’s awesome soundtrack blasts through much of it—and I mean blasts, to the point of the seats and walls shaking even in a non-IMAX screening: an organ-lover’s delight! So, every time the action and mood were ramping up, I had to whip the earbuds out (I ended up using my own, as the provided ones were cheap) or have my eardrums practically split. Fine: hazard of the film, and amplification was not needed at those points. What was so disappointing, however, was that all the hearing receiver did was create an annoying echo due to a delay in transmission, sort of like the echoes in cell calls or the overseas long-distance landline calls of yesteryear. Now I was hearing Matthew utter his tortured feelings in duplicated mumbling. I gave up on the “assistance” halfway through. This is a device retailing for about US$250 or CA$400, so it’s no cheapie, and still the results were less than stellar…

How can an echo assist hearing? Do more current or more expensive models avoid this problem? Leave a comment below if you have other experiences. I retrieved my ID without comment, as I didn’t feel the box-office staff would be very invested in my feedback. They’re 20somethings with normal hearing, after all.

 

A CaptiView device in the foreground of a dim cinema auditorium; the green digital print gives connection instructions for that cinema.

 

Visual Assistance

My second experience was using CaptiView in a cinema; I did call ahead to make sure one would be available. Not only was it available, it was at the ticket-taker’s booth, ready to go, and she knew how to set it up: good start! It conked out during the previews (“battery very low!!”), and the manager said it was because it had just been used in a previous showing, which indicates to me that perhaps several more are needed on hand to provide access while spent ones recharge. Nevertheless, he had another immediately available. Interestingly, they did not ask for ID or any security for the device: not that I’d want to walk out with one, but I was still pleasantly surprised not to be treated like a potential thief.

Complicating the experience were the flawed subtitles that appeared whenever a foreign language was translated in the movie I saw, so I had two layers of imperfect access to negotiate in my attempt to be fully immersed in the story. In general, the CaptiView worked okay. But:

  1. As I’ve pointed out before, you have to place it in your cupholder, so that leaves no cupholder for your pop, which is a problem if you also have popcorn to eat.
  2. Twice the device popped out of the cupholder and fell to the ground: you need to really shove it down prior to the show!
  3. The green type was a good size and clear, except when it didn’t show! Some captions were missing—entire or partial sentences never came up, and the subsequent lines appeared on the second or third line of the device—so I don’t know whether the problem was in the CaptiView or in the digital file, but it happened about a dozen times.
  4. I felt like I was restricted to a corner of the back row: the green light is distracting to other audience members. What if I had arrived late to a filled auditorium? Would I have to ask a bunch of people to move, or would they have to put up with it?
  5. Only the main feature was guaranteed to be captioned. In fact, only one trailer was captioned on the device. All the other trailers, for movies that could also have attracted the Deaf/HoH audience, were not. (I’m assuming the Red Sparrow film itself is accessible and not just the trailer.) And none of the pre-film promotional chatter, games, quizzes, interviews and so on was accessible. What—only hearing people want to know celebrity news and movie hype?
  6. What if a bunch of my D/deaf friends and I wanted to go to a movie? We couldn’t go spontaneously (what cinema is going to be able to guarantee six recharged devices always available?), or perhaps at all, even with notice, if a smaller company doesn’t have that many CaptiViews. So, in essence, we still face inadequate access or none at all. We’re still not able to participate fully in cultural content in the same way hearing folks are. Would it be okay if wheelchair ramps were only available 10% of the time?

In general, the experience wasn’t a disaster, but I certainly wasn’t enthralled with this option. Between my eyes constantly changing focus from short- to long-range, stumbling and losing story immersion when captions were missing, and missing a lot of the movie’s visual impact with the device as a distraction, I definitely did not engage with the film the way I normally would. In short, the CaptiView is sometimes available, but not always conducive to full cultural engagement, and that is a half-baked experience, not full access.

 

In December 2017, Charlie Swinbourne (UK journalist and Limping Chicken blogger about all things Deaf) started a petition to have UK cinemas dedicate one screen per multiplex to captioned movies. It was prompted by a fiasco of inaccessibility at the opening of The Last Jedi, where Deaf/deaf folks were treated shamefully. As of early January, he had 23,000 signatures and had spoken to cinema executives about the relevant issues.

This coincided with the investigation I was carrying out on this side of the pond, which I have tweeted about, and I have been engaging in similar conversations with execs of the cinema corps I have access to in Toronto: Cineplex, Lantern/Imagine, TIFF, Jackman Hall (formerly Cinématheque at the AGO), the Revue, the Royal, and Hot Docs (formerly the Bloor Cinema). I emailed each via general contact addresses to start and asked what availability they had for Deaf/deaf/hard of hearing moviegoers. The responses—some seemingly canned, some more invested—are here.

I’ve had conversations with some of these same execs to see if I can’t do some educating about hearing loss, advocate for better accessibility, and ask for meaningful follow-through. Some have indicated a willingness to implement more if more products were provided with caption files. The general public tends to blame the cinemas, but they can’t project captions that aren’t on the film file. It’s the movie producers who need to step up.

I also canvassed some small film producers who are making films on (their own) shoestring budgets. Again, there is willingness to caption but not the financial resources.

This investigation has convinced me that if there is to be greater access, it must begin with the major film producers—the ones with the financial ability and the cultural clout to make it the norm to include caption-accessible prints amongst their distributions. I bet if a close relative of one of the bigwigs were deaf, they’d have captions all over their product, including trailers (of which I saw only one captioned on CaptiView: for the upcoming Red Sparrow, by 20th Century Fox and Chernin Entertainment).

Finally, I don’t know why the government has paid only lip service to the large Deaf, deafened and hard of hearing communities, considering how many people that is in Canada (most Western countries estimate that 10% of the population has some form of hearing loss or problem). Accessibility to content for the Deaf, deaf or hard of hearing has supposedly been at the forefront of accessibility changes for 30 years (really spurred on by the advent of VCRs in the 80s), but not much has changed. Currently, there’s a survey about live captioning on Canadian TV, but this is duplication: the CRTC did a survey about captioning 10 years ago, and the requirement to provide captions (with various exceptions) is still not enforced. You can read about its status here; while you can complain about bad captions, the independent ombudsman promised in 2015 is still slated for the future. They’ve done standards policies, surveys, focus groups and pilot projects (2008, 2012, 2015). Where’s the improvement? They don’t seem to understand the nuances of captioning, reading, and how textual editing affects user experience. You can have the fastest captioners in the world produce CCs, but “quality control” needs to be two-stepped: technical and editorial. The latter is not taught or enforced. I know because I worked as a captioner. (I did offer to help set up a vocational school so that effective language training would be offered—in Canada, anyway. But crickets.)

VODs like Netflix* have continued to do the poorest job, doing just enough, it seems, to stay out of regulatory trouble, but my educational portfolio of hundreds of caption fails shows that the non-hearing are thoroughly underserved by captioning across public services. Based on the attempts I’ve made to educate and offer improvements, the interest and will just aren’t there. And I’m not going to shut up about it until people who need excellent captioning in all aspects of life start seeing improved access, and therefore improved participation in Canadian life.

I’ve also written about 2017 being touted as the year of the deaf in the movies. Here’s what I thought about The Shape of Water, Wonderstruck and all the hype about D/deaf folks in film.

 

 

*I really don’t have a hate-on for Netflix alone; it’s just the system I have and use, not having a TV. But I do have ill regard for them: their attempts to serve the non-hearing are terrible and come nowhere near meeting their advertised standards. This is because their system of hiring “Preferred Vendors” promotes unqualified bottom-feeders in many cases. (Not all—I have subtitling colleagues who are professional translators and titlers and are NPVs, but they are the exception to the rule.) If you’d like to send me screenshots of caption or subtitle fails from other sources, please do (info@reelwords.ca): I’ll add them to my portfolio of fails and fixes.

 

For information on booking teaching and speaking engagements, see that tab on my website.

 

Top image: from amazon.com

Bottom image: by author

Deafness and the Movies in 2017: Sanctimonious Narratives?

 

Sally Hawkins signing "egg" to offscreen monster in the dim laboratory in The Shape of Water

 

A couple of movies with ASL and people go crazy, saying the tide has turned for Deaf/deaf actors and filmmakers. Um—not necessarily. Here’s a new-year look back at the hype and some re-examined perspective.

I loved Wonderstruck on a bunch of levels. Millicent Simmonds certainly appears to have an acting career ahead of her, but there was a lot of blowback about Julianne Moore having been cast to play a Deaf woman. The movement for equality in Hollywood often makes the point that mainstream actors should not be cast to play folks with various differences, for reasons of realism, employment for overlooked talent, and plain ol’ decency. Not being Deaf or deaf, I don’t feel qualified to comment on Moore’s casting or portrayal: I was too busy trying to watch the ASL by her and Tom Noonan, being a new student of signing myself.

I feel like the movie would have made more of a statement had it been distributed with open (always-on) captions. It took risks with a lot of other artistic choices but didn’t inherently provide access to all viewers; the D/deaf/HoH were included only through half-assed access in some theatres with some assistance (in other words, the status quo). A petition about this went virtually nowhere.

I binged on the Call the Midwife series, which did use a deaf actress in Season 4, Episode 8.  I can’t comment much as she learned BSL for her role and I am only familiar with ASL. But I can’t believe no one has batted an eye at IMDb’s plot outline being about “a deaf-and-dumb woman”! Have we just backslid several decades? Holy moly—send their PR department a cheat sheet on current acceptable vocabulary choices!

I went to The Shape of Water knowing I wasn’t really into director Guillermo del Toro, but my friend was curious and I wanted to see the ASL by Sally Hawkins. Again, not being D/deaf, I can’t really judge her skill. However, I was uncomfortable with the implied sim com. Simultaneous communication—signing and speaking English at the same time—is generally considered oppressive and disrespectful of the Deaf community and of signing as a bona fide language. In the movie, there are moments when Hawkins is signing and Richard Jenkins’s character is translating into English for himself, but his spoken words were syllabically aligned with her sign movements, so the number of signs matched the number of words. This was factually incorrect interpretation, and it bugged me that a production decision was (seemingly) made to “make it equal” so the hearing audience could supposedly access the ASL. The signing was made inclusive by, ironically, captioning it for the hearing to access, but the entire movie was not open-captioned for the deaf to access (again). That kind of inequality and ingrained exclusion makes me really annoyed. I don’t expect every screening of every film to be OCed, but a film about a signer could surely have made a statement of inclusion like this, couldn’t it? Particularly since the main theme was otherness...

Some time ago, I wrote about The Tribe or Plemya (dir. Miroslav Slaboshpitsky, 2014), an uncaptioned and unsubtitled Ukrainian sign-language film with no spoken dialogue, which was made all the more affecting by not making it accessible to the hearing with captions: it’s one of the most powerful films I’ve seen. If we want to get all warm and fuzzy about deafness entering mainstream pop culture, we need to back the production and embrace the release of art like this: made with and for (and preferably by) those being portrayed.

Four boys lead a smaller one by the ear down a dimly lit institutional hallway; a still from the movie The Tribe.

 

ABC Family’s Switched at Birth (exposed more broadly by Netflix) apparently drove tons of people across North America to sign up for ASL classes. Once the complexity and demands of this beautiful and challenging language were encountered in the classroom, though, I wonder how many stuck it out? Marlee Matlin made sign language sexy to cinema, and her advocacy over the decades has improved attitudes towards the Deaf. ASL advocate and model Nyle DiMarco certainly turns heads “despite” being Deaf, and The ASL App has upped the cool factor. But let’s not get carried away and create a sanctimonious narrative that the movies are all over deafness and sign language. If there were sustained interest and true access, I wouldn’t be writing op-eds about the need for excellence in captioning and the right to cultural access for the Deaf, deaf and hard of hearing. In fact, there wouldn’t be any commentary about #DeafTalent: it would just be there, amidst the rest of the hearing world’s projects.

 

 

Top image: http://www.moviemuser.co.uk/2017/07/19/shape-water-trailer-sally-hawkins-meets-underwater-creature-guillermo-del-toros-latest/

Second image: www.vice.com

Captions Need Show “Bibles”

Colour photo closeup of gilded Bible pages, with gold cover, snap closure and tasselled bookmark hanging in the foreground.

 

Captions and subtitles need "bibles" just like theatre pieces or movie productions. Like their literal iterations, these collections of information are guides for all the relevant players on how to present content so that it's clear, correct, and, most of all, consistent.

When I was a captioner, some shows had ’em and some didn’t. The worst was when we had to consult fan wikis for character name spellings, backstory, etc. VODs, shows and movies need bibles templated and used if they’re going to commit to full accessibility for all users.*

Depending on where the captioner or subtitler is, there are differences in how they would normally write as a layman and how they would do their work. A Canadian captioning a show from and about the States would defer to American dictionary spellings and definitions and standard writing style guides, plus the client's house style guide. But an American subtitling an import series from Scandinavia would be wise to not only adhere to the client's wishes and that country's standard guides but also recommend other applications based on show content and branding, audience composition and an eye to future distribution potential.

Show bibles vary from art form to art form. A caption bible may grow to include set and costume notes and samples, helpful visual ephemera, guidance on authorized style guides, character details, and notes on directorial changes and edits (kept up to date), and all of it should be backed up—at least twice. Hard copies might also be wise should the internetalypse happen mid-production.

Here's an example of what Netflix's much (self-)touted subtitling policies did not address or succeed at (or this wouldn't have happened).

Peaky Blinders, Season 4, Episode 5 (accessed December 2017). In one scene, the Cockney Jewish character Alfie Solomons says Good boy, but the caption reads Goodbye. Perhaps a non-native captioner (or one without a British background or dialect familiarity) should not be titling dialogue if they can’t understand the accent, let alone recognize that Goodbye wouldn’t even make sense in context if that were the audio. This causes errors and (although apparently not here) extra costs in QC corrections.

Screenshot of Alfie Solomons and Luca Changretta characters in Peaky Blinders show. The erroneous caption for Alfie says, Goodbye, trot on. Down there is Bonnie Street.
Image: cropped screenshot accessed Netflix, Peaky Blinders, December 31, 2017.

If a show bible is not extant or available, a good editor will do some research and preferably some subsequent consultation. The latter should be done by the most qualified expert in their professional network: moms with English degrees don't count. Having established some form of NDA, the editor should present their problem and its context, their research, and a suggested edit to the consultant. Confirmation or correction should lead to a fix, and either way the edit should be flagged with a justified query or note to the managing editor. Time is tight on titling projects, but there's no excuse for guessing. I have a time limit on how long I'll do my own research before turning to an expert; if I can't get the ME a recommended edit, I'll pass on my recommendations for next steps.

This example also points out the pitfalls of having blinders on about vendors. Perhaps your regular multilingual translator in Europe is multitalented, but this show would have required a titler who had ties to or experience with people in London and Birmingham, for instance.

Another problem arose in the same episode, when

Alfie Solomons was captioned as speaking Italian when in fact he was speaking Yiddish...

Alfie Solomons in Peaky Blinders show is captioned as "[speaking Italian]"

 

...but the captioner didn’t have enough linguistic background to tell the difference between guttural and Romance-language phonemes. (Note that although they are different things, captions and subtitles are sometimes needed in the same product. Read more here.) The titler should have consulted someone (or perhaps shouldn’t have been contracted in the first place). I have a whole separate presentation I can give about foreign-language subtitling inconsistencies within Netflix captions; see the Engagements tab to book similar lessons and discussions.

So a bible, shared with the captioner, would have told them that Alfie Solomons is a Jew from the East End, living in Birmingham, whose speech is peppered with the area’s common interruptor “yeah,” and that he has no known connection to the Italian language. These are two instances where a bible would have saved Netflix embarrassment from YGWYPF (you get what you pay for) vendors. If they aren’t embarrassed, then simply in terms of access to content for the deaf, they should be.
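To make the idea concrete, here is a purely hypothetical sketch (in Python, with field names and format invented for illustration, not drawn from any vendor’s actual template) of the kind of minimal bible entry that would have answered both questions above:

# Hypothetical caption-bible entry; the details echo the points above, nothing more.
show_bible = {
    "show": "Peaky Blinders",
    "characters": {
        "Alfie Solomons": {
            "spelling": "Solomons",                         # confirmed spelling, not from a fan wiki
            "background": "Jewish, from the East End",      # relevant to dialect and accent
            "languages": ["English (Cockney)", "Yiddish"],  # no known connection to Italian
            "verbal_tics": ["yeah"],                        # common interruptor; keep it in captions
        },
    },
    "style_notes": {
        "exclamations": "Oi (Oy is an alternative)",
        "foreign_dialogue": "identify the language correctly, e.g. [speaking Yiddish]",
    },
}

def languages_for(character):
    """A quick check a captioner could run before titling a scene."""
    return show_bible["characters"].get(character, {}).get("languages", [])

print(languages_for("Alfie Solomons"))   # ['English (Cockney)', 'Yiddish']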

Bibles can be simple, and they don't have to be pretty. But they do need to be complete, proactive, shared and USED.

 

*Read here about who should be using captions and/or subtitles (and sometimes both); it's not just a "deaf problem."

Changing Text: Part II — Expletives

Ada Shelby from Peaky Blinders is shouting to the projectionist behind her in a dimmed, empty cinema, Oi! I'm a Shleby too, you know. Put my fucking film back on!"

 

Expletives may have different treatments based on house styles, but they must still be retained in some form or another (even if it’s %^@##!).

Swear words in films or shows often bring up the issue of censorship—by whoever has the final word on content and house style. But the captioner/subtitler has a duty to at least present an argument (even if they don’t win people over) as to why potentially objectionable words must remain or at least be titled in a similar form.

No matter what country you’re working in, captioning/subtitling standards all get at the point that it is the titler’s job to provide full access to the video product, with 95–100% accuracy for prerecorded content. As in book editing, the titler must not edit the work to the point of changing its content. So, if I’m a very conservative person, I may not decide to “fix” f-bombs or other offensive dialogue; even if I’m liberal personally, I must not “err on the side of caution” and tone down swear words in case a vulnerable audience is watching. I may be allowed, or indeed instructed, to use house style to represent those f-bombs with nonsense characters, universally understood to mean expletives, but I may not choose to do so as a matter of my own practice.

I complain often about CCs on Netflix (see this article for a good chuckle and my opinion here), but I do appreciate that their style guideline says “Dialogue must never be censored.” They do retain expletives as used by onscreen characters. This is as it should be.

Just as we do not cover classical sculpture with fig leaves or add clothes to nudes in paintings, we should not censor swearing in films. Screenwriters and directors include it intentionally to produce an effect, and it is effectively intellectual theft for the titler to remove it. There are many aspects of a video product that could offend audiences, but it is the audience’s job to choose their entertainment judiciously and not ours to introduce our personal bias into the work. Titlers do not have the right to judge; they have the responsibility to provide access. Period.

 

[Note that the incorrect caption in the image above should read Oi! A native English speaker, especially one with a British background (who would be the ideal choice as titler), would know this. Oy is an alternative spelling. Spelling and punctuation fails…]

If you ever see an example of captions or subtitles that do not represent the content (with the exception of occasional fudges required by timing and the space allowed for reading speed), please email a screenshot or a description to info@reelwords.ca. I keep a file of such infringements of accessibility rights.

 

 

Changing Text: Part I — Opera Surtitles

Long-shot colour photo of an opera production with a seafaring-themed set and English surtitles projected above the stage

Image not credited on original source: https://www.sdopera.org/experience/supertitles

I’ve commented elsewhere about the responsibilities of the captioner or subtitler, which include the best practice of not changing the film’s text.* Our personal feelings about content, as far as producing or editing the content is concerned, are irrelevant. (If something is truly offensive, you can turn down the project, just as we do in book editing.) I recently participated in a survey of subtitlers about emotional reactions to content we are working on; it is a legitimate consideration. However, assuming we are content to work on the file, the captioner or subtitler (or book editor) may not change the content. We are not the creators of the work.

I saw the Met Live in HD cinema presentation of Thomas Adès’s fabulous opera The Exterminating Angel. Although it is in English, surtitles** are provided, which is common for most major opera companies. With the exception of one title that might have caused confusion with an appositive because of the accompanying live shot, they were excellent. Until the climax of this dystopian nightmare story. There, in terror, and again in the last lines of the opera, the characters sing a prayer: Libera me de morte aeterna et lux aeterna luceat, which translates to “Deliver me from eternal death and let eternal light shine.” The use of the Latin is intentional and very moving because these words are excerpts from the Catholic Office of the Dead. (If you know the movie or the opera, you’ll understand why they are used.) To my amazement, the Latin was not only not projected in the surtitles, it was replaced with the English as the Latin was being sung. This is unacceptable captioning (or surtitling).

While it is possible that the surtitle writer felt they were being “helpful” by providing the English, they shouldn’t have.

First, they changed Adès’s and librettist Tom Cairns’s work fundamentally. They did not write that part in English for a reason. So, right off the bat, they made an editorial decision about an artist’s work. (If Adès or Cairns directed them to do so, I would happily stand corrected, but I doubt this very much. If the Metropolitan Opera directed it, I would disagree with that decision.) Captioners do not have the right to change art text: their responsibility is to make the piece as it stands accessible. A caption editor would know to retain the original text.

Another reason this is not best practice is that it makes an editorial assumption about the audience: that they are not culturally savvy enough to know what these words mean, even if they aren’t Catholic. It would be deemed fairly common knowledge in the humanities audience to at least have a sense what that Latin excerpt was about, even if they couldn’t translate it word for word. So the surtitler decided who they were dealing with. (Again, if the Met directed them to do it—well, my words would then be directed at them.) The composer knows who he will reach with the Latin, and he knows how best to do it in that scene: with the atmospheric layer of using Latin. He does not dumb down his librettist’s text for the audience.

Opera is attracting more young people these days, so some might argue that Millennials just don’t have that common knowledge, but that too is insulting and presumptive. The surtitler may not assume: that’s not their job.

The other thing that is wrong about this involves the Deaf/deaf/hard of hearing community. Did you know that some deaf people do go to and love the opera? My deafened friend loves opera: she said as long as the voices are big enough and surtitles are provided, she can attend and enjoy live opera and HD broadcasts. So the surtitler assumed it wouldn’t matter if the English were used (even if they did know deaf folks can go to the opera), and that is the type of trope the Deaf/deaf/hard of hearing community too often faces: they don’t matter. This is akin to the attitude of I’ll tell you later or Why can’t you just enjoy the beat?, which I have tweeted about. If they are in the audience, they have the right to access the artistic work as it was created by the artist. It is not the surtitler’s right to even assume they won’t be in attendance, never mind that best practices wouldn’t apply to them. They cannot change an aspect of art because they figure an attendee won’t know anyway.

A final note about surtitles: there are various technological choices available, such as the old PowerPoint way still used by some, and current surtitling software. These products can force certain style decisions for the surtitler. Also, some theatre and opera companies take divergent theoretical views of how far translations or same-language titles are to go. I belong to the more prescriptive school, obviously, and disapprove of summarization. However, there are times in opera when very repetitious text, such as in arias, may be omitted and understood as such, or when multi-part sections must be flexibly handled. Straightforward English libretti do not fall into these specialized areas of captioning skills.

Newsflash: The Deaf Are Not Intellectually Disabled

Especially for a government-published "educational" graphic about accessibility!

The Responsibilities of the Captioner

Scene from a modern-set opera with surtitles projected above the stage

 

I’ve commented elsewhere about the responsibilities of the captioner or subtitler, which include the best practice of not changing the film’s text.* Our personal feelings about content, as far as producing or editing the content is concerned, are irrelevant. (If something is truly offensive, you can turn down the project, just as we do in book editing.) I recently participated in a survey of subtitlers about emotional reactions to content we are working on, so it is a thing. However, assuming we are content to work on the file, the captioner or subtitler (or book editor) may not change the content. We are not the creators of the work.

I have two examples to discuss: translating and expletives.

I saw the HD Live Met presentation in the cinema of the fabulous opera Exterminating Angel by Thomas Adès. Although it is sung in English, surtitles** are provided, which is common for most major opera companies. With the exception of one title which might have caused confusion with an appositive due to the accompanying live shot, they were excellent. Until the climax of this dystopian nightmare story. There, and again in the last lines of the opera, the terrified characters sing a prayer: Libera me de morte aeterna et lux aeterna luceat, which translates to "Deliver me from eternal death and let eternal light shine." The use of the Latin is intentional and very moving, because these words are excerpts from the Catholic Office of the Dead text. (If you know the movie or the opera, you'll understand why these are used.) To my amazement, the Latin was not only not projected in the surtitles, it was replaced with the English as the Latin was being sung. This is unacceptable captioning.

While it is possible that the surtitle writer felt they were being “helpful” by providing the English, they shouldn’t have. First, they changed Adès’s and librettist Tom Cairns’s work fundamentally. They did not write that part in English for a reason. So, right off the bat, they made an editorial decision about an artist’s work. (If Adès or Cairns directed them to do so, I would happily stand corrected, but I doubt this very much. If the Met directed it, I would disagree with that decision.)

Captioners do not have the right to change art text: their responsibility is to make the piece as it stands accessible.

A caption editor (or book editor) knows to retain the original text.

Another reason this is not best practice is that it makes an editorial assumption about the audience: that they are not culturally savvy enough to know what these words mean, even if they aren't Catholic. It would be deemed fairly common knowledge in the arts and literature audience to at least have a sense of what the Latin was about, even if they couldn't translate it word for word. So the surtitler decided who they were dealing with. (Again, if the Metropolitan Opera directed them to do it—well, my words would then be directed at them.) The composer knows who he will reach with the Latin, and he knows how best to do it in that scene: with the atmospheric layer of using Latin. He does not dumb his libretto down for the audience.

Opera is attracting more young people these days, so some might argue that Millennials just don’t have that common knowledge, but that too is insulting and presumptive. The surtitler may not assume: that’s not their job.

The other thing that is wrong about this involves the Deaf/deaf/hard of hearing community. Did you know that some deaf people do go to and love the opera? My deafened friend loves opera: she said as long as the voices are big enough and surtitles are provided, she can attend and enjoy live opera and HD broadcasts. So the surtitler assumed it wouldn't matter if the English were used (even if they did know deaf folks can go to the opera), and that is the type of trope the D/d/HoH community too often faces: they don't matter. This is akin to the attitude of I'll tell you later or Why can't you just enjoy the beat? which I have tweeted about. If they are in the audience, they have the right to access the artistic work as it was created by the artist. It is not the surtitler's right to even assume they won't be in attendance, never mind that best practices wouldn't apply to them. They cannot change an aspect of art because they figure an attendee won't know anyway.

One final note about surtitles: there are various technological choices available, such as the old PowerPoint way, still used by some, and current surtitling software. These products can force certain style decisions for the surtitler. Also, some theatre and opera companies take divergent theoretical views of how far translations or same-language titles are to go. I belong to the more prescriptive school, obviously, and disapprove of general summarization. However, there are times in opera when very repetitious text, such as in arias, may be omitted and understood as such, or when multi-part sections must be flexibly handled. Straightforward English libretti do not fall into these specialized areas of captioning skills.

Expletives in films or shows often bring up the issue of censorship—by whoever has the final word on content and house style. But the captioner/subtitler has a duty to at least present an argument (even if they don’t win people over) as to why potentially objectionable words must remain or at least be titled in a similar form.

It is the titler’s job to provide full access to the video product, with 95–100% accuracy for preprogrammed content

No matter what country you’re working in, standards of captioning/subtitling all come down to the same point: it is the titler’s job to provide full access to the video product, with 95–100% accuracy for preprogrammed content. As in book editing, the titler must not edit the work to the point of changing content. So, if I’m a very conservative person, I may not decide to “fix” f-bombs or other offensive dialogue; even if I’m liberal personally, I must not “err on the side of caution” and tone down swear words in case a vulnerable audience is watching. I may be allowed, or indeed instructed, to follow house style and represent those f-bombs with nonsense characters, universally understood to mean expletives, but I may not make that choice on my own as a matter of personal practice. I complain often about CCs on Netflix (see this article for a good chuckle), but I do appreciate that their style guideline says “Dialogue must never be censored.” They do retain expletives as used by onscreen characters. This is as it should be.
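
To make the two acceptable treatments concrete, here is a minimal sketch in Python, borrowing Ada’s line from the screenshot below. It is purely illustrative: the function name, the mask flag and the grawlix string are my own inventions, not Netflix’s house style or any captioning tool’s actual workflow. The only legitimate options it models are retaining the expletive verbatim or masking it per house style; deletion and softening never appear.

import re

# Illustrative only: the two acceptable house-style treatments of an expletive
# in a caption line are to retain it verbatim or to mask it with grawlix
# characters. Removing or softening the word is never an option.

GRAWLIX = "%#@*!"

def apply_house_style(caption_line, expletives, mask=False):
    """Return the caption line with expletives retained or masked, never removed."""
    if not mask:
        return caption_line  # default: dialogue is never censored
    masked = caption_line
    for word in expletives:
        # Swap each letter for a grawlix character, keeping the word's length,
        # so the expletive is still visibly there for the reader.
        masked = re.sub(
            re.escape(word),
            lambda m: "".join(GRAWLIX[i % len(GRAWLIX)] for i in range(len(m.group()))),
            masked,
            flags=re.IGNORECASE,
        )
    return masked

print(apply_house_style("Put my fucking film back on!", ["fucking"]))
print(apply_house_style("Put my fucking film back on!", ["fucking"], mask=True))

Either output preserves the character’s intent; a silent deletion would not.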

Just as we do not cover classical sculpture with fig leaves or add clothes to nudes in paintings, we should not censor swearing in films. Screenplay writers and directors include it intentionally to produce an effect, and it is effectively intellectual theft for the titler to remove it. There are many aspects of a video product that could offend audiences, but it is their job to choose their entertainment judiciously and not ours to introduce our personal bias into the work. Titlers do not have the right to judge; they have the responsibility to provide access. Period.

As Ada in Peaky Blinders (Season 1, Episode 2) says:

Ada Shelby from Peaky Blinders is shouting to the projectionist behind her in a dimmed, empty cinema: "Oi! I'm a Shleby too, you know. Put my fucking film back on!"

NB: this incorrect caption should read "Oi!" A native English speaker, especially one with a British background (who would be the ideal choice as titler), would know this. "Oy" is an alternative.

 

 

If you ever see an example of captions or subtitles that do not represent the content (with the exception of occasional fudges required by timing and space allowance for reading speed), please email a screenshot to info@reelwords.ca or tell me about it at that address. I keep a file of such infringements of accessibility rights.

*Expletives may have different treatment, based on house style, but they must still be retained in some form or another (even if it’s %^@##!).

**The word surtitles is a trademark of the Canadian Opera Company, where the practice and technology were developed. [Yay, Canada!] The general term is supertitles, but as most readers will be familiar with surtitles, I’ve used that in this article.

 

Re: top photo: Image not credited on original source https://www.sdopera.org/experience/supertitles

Bottom photo is a screenshot from the Peaky Blinders series as presented on Netflix.

“Captions Connect People”: Personalizing the Call for No More Craptions!

A cropped close-up colour photo of closed captions on a screen, with the text cut off so that the sentence can't be understood.

Guest Post by Chelsea MacLeod

Do you find craptions funny? Think closed captioning is only for “a few deaf people”? Read what one woman has to say about captions and why they need to be clear and accurate. The changes in her hearing may not be obvious to outsiders, but she depends on captions to engage in the wider world. And although her attitude is positive, she and the other perhaps 10% of North Americans who are hard of hearing or deaf still struggle with alienation, isolation, and missing out on art, culture and communication. Here is Chelsea MacLeod’s story. In the last section, I’ve highlighted her feelings. As an advocate for excellence in closed captioning and subtitling, I’m baffled as to why the Canadian government has failed to enforce the high standards that are legislated and why it has only paid lip service to the deaf/Deaf/hard of hearing. Calls for better quality have been made for 30 years! Accessibility through excellent captions is not an expensive frill: it’s a right that can be addressed and planned for in the post-production budget.

 

If captions are unavailable, I usually pass on participating.

I began to notice a significant change in my physicality and level of hearing in 1996.

I was at an Edmonton Oiler playoff hockey game. We were sitting in the nosebleeds, two rows from the very top. In Edmonton, hockey is extremely popular—it is like the lifeblood of the city—and the home team was winning. Describing the crowd as wild is a gross understatement. The whole place was electric, in a way that was beyond merely sound. It was an intensely energetic experience, in that I realized I was feeling sound in addition to hearing it. (This is difficult to explain, but I believe this was the beginning of an almost heightened sensory awareness.)

Each time they scored, the noise in the arena was literally deafening. I remember a pop or some kind of release occurring in my right ear over the course of the game, but I didn’t think too much of it. By the time it was over, I was so dizzy and nauseous that I could barely walk out of the arena on my own.

I had no idea what was happening to me. I spent the rest of the night and the next day unable to get off the couch. My whole world was spinning. I couldn’t really ground myself to determine what was up or down. This was my first experience with vertigo and the symptoms of Ménière’s disease, although I wasn’t aware of it as such at that time.

After a few days, the physical sensations passed, and I went on to normal life. I had periods of experiencing the vertigo off and on again, but I chalked it up to a heavy school and work schedule and simply tried to get more rest to mitigate the symptoms. In short, I was a graduate student with a part-time teaching schedule and various odd jobs, and I was living on coffee and convenience foods. In retrospect, I can safely say my nervous system was completely shot.

Fast forward, almost ten years later. In 2003, I gave birth to my daughter. It was a joyous occasion despite a difficult labour and an emergency C-section. Again, I suppose my entire physical system was stretched to the limit. I developed a bad infection in the hospital and required large doses of antibiotics. My recovery period was slow and steady, but I had Chloe, and we just spent our days together getting to know each other with feedings and regular nap times.

Pretty early on, I realized that if I was sleeping on my left side, I wasn’t able to hear her when she was crying. I began a series of tests at a rehabilitation hospital in Edmonton, where they tested not only my hearing levels but also my balance, coordination, etc.

The diagnosis was Ménière’s disease. I had 20% hearing ability in my right ear. My left ear also showed some damage, but it was relatively minor.

At the time, they presented a few options I could pursue that required more antibiotics and invasive surgery. I declined and opted to see what I could do to treat the symptoms naturally. I began an acupuncture and herbal-medicine protocol. I did not notice any changes to my hearing levels, but I did begin to see the effects of addressing certain aspects of my nervous system by way of meditation, diet, and increased relaxation. This seemed to keep the Ménière’s at bay, but if it got really bad, there was nothing I could do but surrender to it, lie down, and be with my body until the symptoms subsided.

In terms of hearing loss, this is when I began a process of learning to adapt and manage the changes to my hearing levels. I began to lip read pretty naturally. I would turn my head to position my left ear to a speaker or to any sound. I began to listen to music with headphones to direct the sound more effectively. I also began to use closed captions when watching anything. If there were no captions, chances were I wouldn’t be able to watch it. This included going to movies.

During this time, I contemplated hearing aids but felt like I was managing all right without them, and so I decided against it.

Funnily enough, at Christmas 2013, my dad made me promise that I would get some hearing aids. It had always really bothered him that I wasn’t able to hear fully. I put it off for a while and then felt badly that I was not honouring my promise to him, and I booked an appointment at the Canadian Hearing Society. I went in for another round of testing, and it was revealed that I had progressively lost more hearing in my good ear.

I signed up for a pair of hearing aids immediately and, needless to say, they opened up the world to me. I really have my father to thank, as he was the catalyst. Incidentally, the sensory input of sound was so profound to my system that it took almost a full month until I was able to wear the hearing aids all day long. It was like I had to go in stages in order to condition my ears to all the sounds they were able to hear again.

One day in 2015, my partner and I were downtown at rush hour, standing on the sidewalk. I started to feel my balance go; I got shaky and began to hear electrical currents all around me. It was like I was hearing sound emitting from street lights and the air itself. By the time we were in a taxi, I could barely hear anything. My hearing aids could not keep up. It coincided with another Ménière’s episode, and I thought my hearing might return if I rested. However, after 24 hours, there was no change. I booked in to my audiologist immediately for more testing and to amplify my hearing aids. Without my hearing aids, I was pretty much stone-cold deaf. But even with them, I could barely manage daily life.

Also during this time, I saw more doctors, naturopaths, and herbalists. I even underwent hyperbaric oxygen treatment because I had read that, for acute hearing loss, high amounts of oxygen could restore hearing if the treatment was done right away, as soon as possible after the traumatic event.

The hearing tests revealed that my left ear had less than 20% hearing ability. My right ear had also decreased slightly in ability, but it was now my dominant ear. I remember thinking of the irony of all those years I had referred to my right ear as my bad ear. Now it’s my good ear. From then on, I made a commitment to myself to refrain from talking about my body negatively in any way. I just decided both my ears were good and that they were doing the best they could. 

About a year after that, I began to notice I had become quite isolated, as meeting up with people socially, in restaurants or anywhere even slightly noisy, was pretty much out of the question. I think I didn’t notice it so much because I am blessed with a really supportive family and I have a busy and full life. All the same, I felt a need to connect with others and to be comfortable with who I was becoming without feeling shy or like I couldn’t keep up with conversation.

That’s the main reason I wanted to take up ASL. I felt like cultivating a community that was not hearing dependent—where I could communicate freely with others and learn to express myself in a visual way.

As I have progressively lost the sense of hearing, visual communication has heightened and taken precedence. And this brings me to the crucial role of closed captioning in my ability to experience the world around me. I use captioning on every type of media I can. If captions are unavailable, I usually pass on participating. Captions are my way of accessing vital information, art, culture and entertainment.

 

Captions that can’t keep up or that are garbled or inaccurate don’t serve anyone.

 

 

They not only distort the story or message that is being conveyed but also alienate anyone who requires them to receive information. This includes not just words but also sound experiences like laughter or a doorbell chime.

Sensitivity to captioning is the same as having sensitivity to written or oral language. How it is intended to be received by an audience is as important as translation, in order to accurately communicate anything from feelings and ideas to concepts and current events. Captions connect people. They engage a sense of belonging to the wider world and, when done well, captions have the potential to inspire a variety of interests and curiosities, not only those of the deaf and hard of hearing community.

 

While I’ve had hearing issues (Ménière’s disease, hyperacusis, and tinnitus) for about 30 years and do a fair bit of lip reading in noisy environments, I wanted to share the emotional and practical impacts of craptions and captions on someone with more hearing loss than me. I met Chelsea at the Canadian Hearing Society where we study ASL, and I'm grateful to her for being willing to share her story as a guest post for Reel Words.

“Good Enough” Captions Aren’t

I recently watched an amateur video about DIY captions. The fellow who made it was earnest, trying to make it easy for the average person to create captions, and I'm sure he meant well. But then he said that although they wouldn't be perfect, they'd be "good enough."

Granted, he was referring to fansubbing movies (which is a topic for another time), but I get the sense that this is a common attitude of the hearing world towards captioning for accessibility purposes. Would blue and purple traffic lights be good enough? How about food with just a bit of salmonella? I know I wouldn't want to buy a tire with a slow leak.

Captions are used by the Deaf, deaf and hard of hearing (Deaf/HoH), second-language learners, university students as study aids, people in sound-sensitive environments, and many other folks.

Many countries, provinces and states have legislated that media must provide video material that is accessible and that captioning be of excellent quality. It's not optional. But very rarely do I see closed captions that meet the required standards.*

Some producers of video rely on automated captioning services or, if they have "the budget for it," a closed-captioning provider. But the latter often do not have trained professionals copy editing the files and/or don't understand the specialized editing required to meet the accessibility standards users need. Anybody can transcribe audio. But caption text has to be rendered readable by humans in 2-second chunks. And by readable, I mean comprehensible so that the entire video context is taken in with ease and appreciation for the content. But that's not what’s getting churned out. (See my opinion about video-on-demand services here.)
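
As a rough illustration of what "readable in 2-second chunks" means in practice, here is a minimal sketch in Python. The 17-characters-per-second ceiling is my assumption (a commonly cited figure for adult programming), the cue timings and text are invented, and real house styles set their own limits and do far more than this one check.

MAX_CHARS_PER_SECOND = 17  # assumed reading-speed ceiling; house styles vary

def flag_unreadable_cues(cues):
    """cues: list of (start_seconds, end_seconds, text). Returns cues a human editor should rework."""
    flagged = []
    for start, end, text in cues:
        duration = end - start
        # A cue that is on screen too briefly for its length can't be read comfortably.
        if duration <= 0 or len(text) / duration > MAX_CHARS_PER_SECOND:
            flagged.append((start, end, text))
    return flagged

sample = [
    (1.0, 3.0, "Deliver me from eternal death"),          # about 15 cps: fine
    (3.0, 4.0, "and let eternal light shine upon them"),  # about 37 cps: too dense to read
]
for cue in flag_unreadable_cues(sample):
    print("REWORK:", cue)

Passing this kind of mechanical check is the bare minimum; the copy editing I'm describing goes well beyond it.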

I'm tired of "good enough." I'm frustrated by reading about craptions being doled out to the Deaf/HoH. I'm fed up with empty promises about the delivery of accessibility.

When are the Deaf/HoH going to get the quality of captioning they're legally (and morally) entitled to? Why is "good enough" the status quo?

I've written many articles and posts about why captions and subtitles require not just proofreading but copy editing, just as the printed word does. (You can read them here to learn more about the nuts and bolts.) But I'm increasingly interested in making some noise about cranking up the demand for #NoMoreCraptions! As someone who appreciates closed captions (and may later need them more), I am no longer willing to let this slide.

“Captioning should not look like throwing magnetic letters on a fridge.”**

And yet, that's what the CC setting on our screens usually generates because (seemingly) providers don't think the Deaf/HoH are worth the expense of creating high-quality, copy-edited captions. Like other areas being bandaided because of a lack of enforcement or true dedication to creating accessibility (e.g. the wonderful but shamefully needed food banks, Stopgap Foundation, etc.), unedited captions are generally of such poor quality that they're useless, and viewers often give up on watching TV, movies and the like altogether.** And saying there isn't money for quality captioning comes from an outlook of discrimination.

It's also uninformed. Budgeting for this aspect of production and distribution does not have to be expensive. If absolutely necessary, fine—use automated captioning in some form of AVR (automatic voice recognition). But then turn the rough copy over to a professional to be perfected. It's like writers who say they can't afford any professional editing or proofreading but then complain that no one bought their book: if its content isn't edited properly, readers aren't going to want to slog through it.

Until governments enforce the standards they've promised on paper so that digital files are accompanied by high-quality captioning, they're short-changing the Deaf/HoH out of their right to a huge part of full engagement in modern cultural content.

I'm not. . .er. . .crapping on the DIYer per se. I'm saying his comment exemplifies the attitude society has towards people who need captioning: if you're not a hearing person, you can just make do with good enough. (And that's audism.)

#NoMoreCraptions!

 

 

*Canada's 2016 CRTC policy can be found here.

**Unattributed comments from CRTC 2008 Stakeholder Consultations on Accessibility Issues for Persons with Disabilities.

Cinema Gets Heritage Status

 

I'm sharing this good news as posted. I worked at these theatres (as did my dad as projectionist) with Dawn and Dan's father Peter.

 

City grants Mt. Pleasant theatre heritage status

Davisville landmarks opened in the ’20s and continue to show films today

 

 

Councillor Josh Matlow stands outside Mount Pleasant Theatre

The Regent Theatre and Mount Pleasant Theatre have both been a prominent part of Davisville village since the 1920s. Now, thanks to a motion put forward by councillor Josh Matlow of Ward 22, St. Paul’s, both buildings will stay that way. The two theatres were granted heritage status by the Toronto and East York Community Council in May.

“These movie houses are iconic institutions in our Midtown neighbourhood,” said Matlow. “When you come to the Davisville village, they stand out. They tell you where you are and give you a sense of identity and a story in the community. This is clearly linked to the architectural and cultural story of our community.”

The designation comes at an important time as several historic buildings in Midtown have been torn down in recent years, including the century-old Bank of Montreal building at Yonge Street and Roselawn Avenue and the Stollerys building at Yonge and Bloor Street.

“These movie houses are iconic institutions in our Midtown neighbourhood.”

Mount Pleasant Theatre, at 675 Mt Pleasant Rd., opened in 1926 and is one of Toronto’s oldest surviving movie theatres.

Regent Theatre, at 551 Mount Pleasant Rd., opened in 1927 as the Belsize Theatre. The marquee on the building facade and the architectural styling of the building represent the work of architect Murray Brown, who was well-known for designing movie theatres across Canada.

Both theatres are currently owned by Dawn and Dan Sorokolit. While a heritage designation is widely considered an honour that ensures a building will remain a part of Toronto’s history, it’s possible the theatres’ owners might not be happy about the designation. Moving forward, any plans to demolish or build overtop of either property will be subject to further approval from Heritage Preservation Services.

Post City reached out to the owners about the designation; however, neither was available for comment.

“The theatres really are important to the landscape and the streetscape along Mount Pleasant Road.… They were both built at a time when the city was really expanding northward,” said Kaitlin Wainwright, director of programming at Heritage Toronto. “They really are touchstones in a way that hearkens back to that period of change.”

Although community theatres across Toronto have largely been replaced by big multiplexes, like the Scotiabank Theatre, Mount Pleasant and Regent theatres both continue to show films today.

The Netflix Subtitling Test Is Inadequate

Neflix logo with black-outlined letters against red background, "Netflix"

Netflix introduced a subtitling test called Hermes to vet potential vendors. Here's how it's inadequate and why not much is going to change.

  1. The announcement on the Netflix blog is rife with errors that need copy editing. That should be our first red flag.
  2. The comments following that post indicate that the test system itself is full of bugs; potential vendors ("Fulfillment Partners") can't access or proceed with parts of the text. (Ironically, one says the videos won't load.) Second red flag.
  3. The queries to Netflix from would-be test takers have not been replied to. Vendors might do well to take that as an indication of how they'd be treated if they were signed on to translate titles... While Netflix invites contact via email, posting the answers publicly would be a more expeditious way of sharing info other test takers will need.
  4. It refers to the importance of ensuring quality, but it contains writing not worthy of a communications professional. Aside from the errors mentioned above, there are scare quotes, which indicates to me that quality is not in fact ensured.
  5. It's copyrighted 2016 but was posted at the end of March 2017. This tells me that better meeting subscribers' viewing needs is not a priority, and that change in this area will be very slow.

I don't have a particular hate-on for Netflix: I'm sure most (S)VOD entities are streaming international shows with substandard subtitles. But it's the subscription I have and, as a professional, I can confidently report that the quality and consistency of title delivery is all over the map. (See a mini-gallery here of the hundreds of error examples I have on file.) They need subtitle editing, and they're going at it back-asswards. They could continue to use current vendors and have the files edited as tweaks. No need to re-invent the wheel. And that new wheel is going to raise your subscription price.

They're big on tech innovation (e.g. here), but if basic spelling and grammar errors prevent comprehension, it's sort of useless. Reminds me of the joke cartoon of the caveman who invented square wheels for his cart.

Here's an example. It's not huge, but it's telling. Netflix house style apparently allows for the use of "alright" in its subtitle translations. That spelling is recognized as a nonstandard alternate but is not the recommended or preferred spelling in Canadian, American or British dictionaries. If non-English shows and films are to be streamed in what is commonly called "world English," which usually defers to UK preferences, why are they condoning a second-choice, nonstandard—I'd go so far as to say colloquial or popular—spelling for a common idiom? Standard spellings and conventions are taught and used for good reason, and there is no contextual reason to use variations in most cases. It's sloppy, and it shows a disregard for viewers who use titles for many different reasons.

What bothers me about the providers and the regulation makers is that improvements to subtitles and captions are moving at a snail's pace. In Canada, a report issued in 2008 revealed useful—and at times poignant—data and commentary on the state of accessible telecommunications and, while much has been done on paper,

people with disabilities are still not treated with the respect (via access) that other Canadians enjoy.

From what I see in industry sources and reporting, it's not much better elsewhere.

And if you'd like to watch video programming made in other countries, you'd better resign yourself to subtitling that does not facilitate your immersion into the story. See my case for subtitle editing here.

Craptions is a lighthearted word, but the bureaucracies and corporate attitudes preventing us from having (long overdue) accessibility and seamless enjoyment of mainstream culture are no laughing matter.

If you have experiences with poor subtitles and captions, please share them in the comment section.

Get Distributed

Get Noticed for the Right Reasons

You've invested a lot of time, money and heart into making your film, show or video. You have big plans for distribution—whether it's worldwide or just to corporate headquarters. You can't afford to become one of those online memes because of errors.

Reel Words is the only subtitle editing company providing quality control for flawless English text because it's the only one with extensive editorial and titling experience behind it. Translators and transcribers are terrific at their craft but, like authors of books, they're not trained to review text for correctness, consistency and clarity.

Don't settle for error-ridden automated titles. Impress distributors or stakeholders with professional-looking, accurate subtitles or captions, and enjoy rave reviews from audiences by fulfilling the growing demand for No More Craptions!

What’s the Difference between Subtitles and Captions, Anyway?

Colour photo of oranges in vertical rows on the left and red apples on the right, as they might be lined up on display in a grocery store.

 

Fuzzy on the difference between subtitles and captions? We tend to use the terms fairly interchangeably, lumping them into some vague notions about "boring films" and closed captions on TV "for the deaf." But they are distinct animals, and here I'll share some straightforward info about the two, why the distinction matters, and why they're necessary.

Let's start chronologically, with subtitles. Before talkie films, silent films relied on cards with text shown for several frames, to insert dialogue or other information relevant to the story. Later (see the links provided under History to jump ahead), subtitles were introduced so that audiences of foreign films could read a translation of the actors' words. Although they help hearing people understand the language, subtitles are also used in teaching scenarios, as visual reinforcement of the audio aids language acquisition.

Subtitles tend to be at the bottom centre of the screen (although that convention is changing in some productions), they can be turned off (they can be "closed"), and they never label the language being spoken onscreen, although they may note when another language is used in the action. They often are not used if the audio is considered common knowledge or if the word sounds the same in the translated language (e.g. a lot of languages use some form of the word "cool" or "okay"). Subtitled films can subsequently be captioned. You'll see why below.

Captions are intended for the Deaf, deaf, or hard of hearing, people with a variety of hearing issues (such as the ones I outlined here), and for situations that aid the hearing audience: noisy spaces or places where the sound has been turned off. They can be closed (optional) or open (embedded in the video). Sometimes their position moves to indicate who is speaking, or the name can be shown explicitly, such as [VANESSA:] or >> TV anchor. They are usually in the same language as the audio and provide all utterances, tone of voice, atmospheric sounds or other effects. They can be added to subtitled work if this latter information needs to be conveyed for the CC audience.
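
To make the contrast concrete, here's a small invented example of the same moment of film, first as a subtitle and then as a caption (the dialogue, speaker name and sound cue are mine, purely for illustration):

Subtitle (translated dialogue only):
I'll get the door.

Caption (same moment, with speaker identification and sound information):
[VANESSA:] I'll get the door.
[doorbell chimes]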

The commonality is that both are often poorly written, or lacklustre at best. Many countries now have laws and regulations in effect to require that film productions and TV shows are distributed and broadcast with content that communicates with almost perfect verbatim accuracy and correct syntax, presentation, etc. That's where we come in.

Communication is a right, not a privilege.

In my work experience, subtitlers are professional translators and titlers who, despite their advanced training and skills, are hired at very low rates and with unreasonable turnarounds. It's no surprise, then, that they are too rushed to create a perfect file or that less trained people are awarded files. Unfortunately, subtitles are not considered important, seemingly something to slap on the end product to say it was done, without considering how their level of quality affects the viewer's immersion in the film. Which is counterproductive to critical and popular success, isn't it?

Closed captions are created with the same attitude (again, in my experience; others might have had better luck): captioners are typically not paid a living wage, and the speed at which they have to process material before broadcast encourages errors in spelling, grammar and written style. [However, house styles and extenuating circumstances in the material can force those shifts and they are then not errors.] Often they are hired largely based on keyboarding speed, and writing and editing training is erroneously considered irrelevant. When I captioned, the employees processing several shows or movies per day had no time or knowledge to be able to apply the editing I do, and the quality assurance supervisors were in the same boat.

So, while their form and function can be quite different, subtitles and captions both require editing. Google "caption errors" and the images that show up readily prove my point. No one is offering the editing Reel Words is, despite the very real need. And it's a shame because it is disrespectful of viewers who require the accessibility to fully participate in current culture and it ruins the enjoyment of audiences who love foreign films. Basically, the current state of affairs in subtitling and captioning is unacceptably abysmal.

The goal of all film storytellers is to keep their audiences completely immersed in the content; once attention is sidelined by errors, the flow is lost while the brain struggles to figure out what was (not) communicated and to keep up with the subsequent titles.

Our view is that we all deserve better—whether we are hearing or non-hearing. We expect outstanding CGI results and online variety, but captions and subtitles are ignored. Part of the ethos of Reel Words is to advocate for actual improvement in standards, not just on the books. No More Craptions! may be lighthearted in tone, but the rallying cry is serious in vision.

Closed captions used to be considered a frill, and now they are required. Together, let's demand improvements in quality. If you are a producer, you can start by having your subtitle or caption file edited.

 

 

 

Photo source: frankieleon, let's compare apples and oranges, May 3, 2009 on Flickr.com

Who Benefits from Caption or Subtitle Editing?

Black and white photo from the 1950s with a young woman seated on the carpet between two television sets; image appears to be an advertisement

You might think subtitles and captions are compartmentalized in one or two business niches like foreign films and TV shows watched by people with hearing loss. But there are many places captions and subtitles are needed, and if you produce any of the following, you need to have them edited properly for consistency, correctness, and clarity if you want your target audience to benefit from them.

Before you scroll away because you "don’t know any deaf people," consider this: you may think you don't, but a lot of people don't advertise their deafness because a) it doesn't define them, and b) it's frustrating to keep explaining it over and over to hearing people.

Here are some examples of products and users where there's a need for a final edit for audience immersion and comprehension:

  • hearing and deaf friends who want to see a movie together
  • English language learners
  • people needing cognitive support with visual reinforcement cues
  • shows with heavily accented or audio-obscured speakers
  • folks in noisy or quiet places or where the volume is off or problematic
  • company profile videos
  • corporate promos and demos
  • automatically craptioned YouTube videos
  • educational and training videos
  • supertitles for live performances, such as opera or bilingual theatre
  • projection of lyrics for sing-along events, movies or congregational worship
  • TV pitches and pilots
  • conference recordings
  • DIY videos
  • online tutorials
  • captioned programming requiring localization (i.e. using the correct conventions for another country's standard English)
  • presentations and pre-written talks
  • institutional video archives
  • reported speech on TV shows (e.g. quoting a speaker on a news report)
  • museum or art exhibits
  • retrofitting outdated visual materials (especially in light of new legislation in many areas which directs content to be fully accessible)

The beauty of subtitle editing is that you aren't adding a large expense to your budget: the larger outlay is already done (translation and/or transcription), so you're only paying for an edit of your current product, which will be recouped by higher sales from satisfied customers and, by extension, word of mouth. It's an affordable add-on that increases product value, adheres to accessibility rights, and gives you an edge over competitors. You stand to win when others in the marketplace are generating social media memes for their uncaught errors in the current grammar-vigilante atmosphere. It's not true that the public doesn't care about spelling and grammar: they judge reliability and credibility by professionally presented products and copy and, if they're comparison shopping, they're bound to choose the company that communicates flawlessly.

 

Four Generations of Projectionists

The window from the small dark room into the large dark room had a sill depth of about eight inches, or so I remember it. I would perch on the high metal stool—but carefully because there was no back—and peer as far into the view port as I could with my knees pressed against the wall. I was six, and I was watching Oliver! from the projection booth at the Mt. Pleasant Theatre in Toronto. I was riveted but terrified of Oliver Reed’s Bill Sikes: it was the first murder I had ever encountered. But I knew I was safe because my dad was the projectionist, and he was beside me.

 

Our family trade was film projection, although Wells Bros. Amusements had started out as a general entertainment venture in 1908, Carol C. Wells (pictured right) and brother Sam I. Wells being about 20 years old. Soon, they had a travelling Wild West Show that went from the CNE on to the fair circuit in southern Ontario. That same year, they had the idea to expand upon the wildly popular motion-picture industry by having a travelling motion-picture show and, with the purchase of a projector and portable booth, the family trade was born. My grandfather was responsible for creating safer booths that became the provincial occupational standard, so that the heat and chemical dangers of projection were lessened. Their letterhead explains the work their partnership undertook. Fortunately, a written account of their business remains.

Later, my dad, his brothers Howard and Gordon, his brother-in-law Richard, my cousin Charles and his son Andrew took up the apprenticeship of movie projection. My cousin and his son, like so many others, lost their work to technological developments, namely the digitization of film projection.

But I spent my childhood in movie theatres: initially in the booths, later working at the candy bars of the Crest and Mt. Pleasant Theatres on Mt. Pleasant Rd. in Toronto. I loved this job because I got to see free movies, and I was welcome to free popcorn and pop: Orange Crush and buttered corn for me, in those days. Sometimes my boss, theatre-owner Peter Sorok, would give me stills or posters once a movie had finished its run: my Chariots of Fire poster was a prized possession for many years. If I haunted the theatres my relatives were working in, I’d always get a pass for myself and a friend. It was a pretty sweet perk.

I had a polyester smock that zipped up and had pockets—I think it was red, with white trim—that I wore behind the candy bar. My friend Cameron, an usher, often had to help me out at intermission between double bills: the restrictive area behind the popcorn and pop machines often lent itself to us blushing, trying to jockey around each other and serve dozens (hundreds?) of customers ASAP.

In between candy-bar rushes, we would sweep the carpet or refill supplies or check the neatness of the bathrooms. Usually, if I were working without an usher, I would sit on one of the lobby chairs and do my Latin homework under the cast iron, Italianate wall sconces. The best part about the theatre then, either as a viewer or candy bar girl, was that you could smoke there. The last six rows of the mezzanine and the balcony were smoking areas! You’d just throw your butt on the floor, and the usher would sweep it up later. Amazing!

I saw iconic movies there: Apocalypse Now; The Rose; Chariots of Fire. I probably saw some flops. But I could watch them over and over, to my heart’s content.

When I got older, I was allowed to start working the box office at the Crest; in 1980, this involved some cash in a drawer and a hand-torn ticket.

I remember some patrons. There was one man who underestimated the arc of the large, glass entrance door and its force when swung open, and he was knocked to the ground by it, his nose broken and bleeding like crazy. He said he had been so excited to get there just on time that he didn’t really watch what he was doing.

There was a lady who would come for weekend double bills and only order a small plain popcorn at intermission—in those days, probably about a 12 ounce container: she told us she had lost over a hundred pounds, and that was her one cheat that she allowed herself in celebration and as part of her diet maintenance.

It was there that I learned that newspaper and vinegar were excellent materials for cleaning windows without streaks.

And it was to be the last job I had that would not charge me income tax. I think my little brown pay envelope contained earnings at a rate of about $2.35/hr by the time I left. I would walk the mile and a half home in my cool leather clogs and boho clothes, feeling independent and excited by film. If my dad was the projectionist that night, I wouldn’t stay til the end of the movie for a ride home, but I would’ve had a lift to the theatre as we started at the same time.

Nowadays, I go to the movies in three different ways. My number one choice is always the local, independent cinema, because they don’t tend to play the kinds of movies that require pre-show games, noise and flashing lights like the big chains offer. Sometimes I fork out for the Varsity VIP: the seats are roomy and the price usually discourages audience types who text or talk through the movie. Finally, I go to the TIFF Bell Lightbox for more "serious" film experiences. Those soundproof auditoriums do wonders. But always, I go at least a half hour early to be the first in to get the end seat of the back row. If I can’t do that, I won’t go. At home, I don't have TV, but I watch Netflix on my computer.

Recently at TIFF, I looked up into the lit projection booth where there were aluminum ducts and other unfamiliar machinery parts visible. I also spotted a computer screen. If my dad were still alive, he wouldn’t know how to run a movie today.

But sitting in the booth with him was like magic. He’d spool film onto reels, prepare the jump, fire up the second projector, and start rewinding the played reel for the next show. He taught me how to assess a good jump at the cue dot.

I know most nights he would drink coffee and read novels during down times, but I think he did this second job (on top of telecine at CBC) because he had drunk the movie Kool-Aid, too. He’d grown up in an era of motion picture madness and in a family that ate, drank and breathed "show business."

I didn’t go into film studies or production or theatre management. I would have made a great continuity girl, I’ve been told. My mother-in-law has won Geminis for her movie-costume designs. Somehow my connection has remained fairly common: a movie lover who experiences the medium on large screens, on DVD or online. But I do feel an affinity for the whole area of movies and cinemas, both as art and as social gauge.

Mostly, though, I treasure my memories of peering through the view port into the auditorium. It was a sacred space to me, and it planted a seed that grew into my love of the movies.

~ FIN ~

 

Epilogue: For an update on the Mt. Pleasant and Regent (Crest) Theatres, click here to read some good news!

The Case for Subtitle Editing

Colour photo of a cinema from the back row looking at a blank screen.

 

The explosion of access to international shows and films from independent filmmakers and from (S)VOD* suppliers like Netflix provides viewers with diverse and exciting choices. Many series and movies are outstanding. Except in one area.

If you hope to reach viewers around the globe, your production’s subtitles or captions must communicate flawlessly, and currently many are failing miserably.

You only have about 2 seconds per title to enable the viewer to absorb the content, so it needs to be picture-perfect.

What does picture-perfect mean in subtitling? It requires quality-control editing to catch more potential problem areas than you’d think. Recently, I did a survey of pitfalls in the final episode of a foreign TV series I’d been watching on Netflix. During that one episode, I documented 84 discrete errors—meaning 84 usage errors, not repeated occurrences like “hte” or even the possible multiple errors within one word or phrase.

At that rate, the reader stumbles due to incorrect subtitles about every 30 seconds and loses concentration on the dialogue.
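
For a rough sense of the math (assuming a typical drama episode of about 42 minutes once the credits are skipped—I didn't re-time this one): 42 minutes is 2,520 seconds, and 2,520 ÷ 84 = 30, so on average that's one stumble every half minute.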

By the time the brain has sorted out the discrepancy or compensated for misunderstanding, another title has flown by. Subtitles must facilitate viewer immersion.

The problems I found in the show I surveyed involved not just typos but also errors in spelling, grammar, punctuation, timing, capitalization, speaker identification and, most often, idiomatic usage.** Never mind missing titles or titling a character’s use of the English “Okay.” with “alright”. [Can you identify the 5 errors there?] A subtitle editor would catch and fix those.

Why does this happen? It’s probably not the subtitler’s/captioner’s fault. They work under extremely tight deadlines. Good translation takes time. The technology is intricate. And they are usually not briefed to copy edit—nor should they be: translation and copy editing for film are totally different skill sets.

Many shows are titled by people contracted to do the freelance work by companies that, frankly, want output quantity rather than quality. But if you’re working with a professional subtitler and translator, such as those affiliated with SUBTLE, the international Subtitlers’ Association (full disclosure: I’m a member), you are likely dealing with a highly trained and invested individual contractor or small company. Just like writers who need copy editors and proofreaders, as the filmmaker you may wish to hire a collaborative team: the translator/subtitler and the subtitle editor to check for idiomatic correctness. Did you know that “English” in print and film is edited by country? Editing English texts from Britain, the U.S., Canada and Australia requires education and experience in working with those countries’ conventions. Like all types of editing, to edit titles for film you need more than experience helping your friends with their resumes or teaching English for 20 years. You need formal training and ongoing professional development because “the rules” are always changing.
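
As a tiny illustration of the country-convention side of this, here is a minimal sketch in Python. The word list is an invented sample and the function is mine; a real localization pass relies on a full style sheet and editorial judgement, not a lookup table, and covers punctuation, idiom and vocabulary as well as spelling.

US_TO_UK = {
    "color": "colour",
    "center": "centre",
    "theater": "theatre",
    "gray": "grey",
}

def localize_line(line, mapping=US_TO_UK):
    """Swap a handful of US spellings for UK/Canadian ones in a subtitle line."""
    words = []
    for word in line.split():
        key = word.lower()
        if key in mapping:
            replacement = mapping[key]
            # Preserve simple capitalization; punctuation handling is omitted for brevity.
            words.append(replacement.capitalize() if word[0].isupper() else replacement)
        else:
            words.append(word)
    return " ".join(words)

print(localize_line("The theater lobby was painted a dull gray color"))
# prints: The theatre lobby was painted a dull grey colour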

“Native speakers only” is not an adequate qualification requirement for captioning.

Subtitle editing is affordable because the subtitler has done the bulk of the work; the editing just cleans up the titles with a fresh pair of eyes and ensures that your long and expensive project is professional and truly accessible.

The goal of subtitles and captions is to communicate while making viewers forget they are reading titles. Good titles are as important as movie soundtracks: they should enhance the experience while being unnoticeable in the moment.

 

 

*(Subscription) Video on Demand

"Facilitate viewer immersion" (and all grammatical variations of it) is a copyrighted phrase. © Vanessa Wells, 2017.

 

 

 

Photo by Daniel Olnes, February 14, 2008, Flickr.com

 

What IS a Subtitle or Caption Editor?

A cropped close-up colour photo of closed captions on a screen, with the text cut off so that the sentence can't be understood.

You might wonder what a subtitle editor is, since many companies already offer subtitle translations. Those like my colleagues in SUBTLE (Subtitlers' Association) produce professional results—yay! But frankly, subtitling companies are hanging out their shingles despite lacking one important component: editing skills. (Not technical video editors: that's a different area.) I realized this when I worked in captioning and saw how the products needed editing. It's like expecting authors to turn out perfect books without manuscript editing: not good.

Subtitles cannot be flawless or even excellent without editing, and they require a trained, professional editor who is also knowledgeable about captioning and subtitling, translation, foreign languages, linguistics and the conventions of different kinds of English. Otherwise, the results are unsatisfactory: even if you aren't reading them critically, imperfect subtitles are distracting.

Subtitles must facilitate viewer immersion.

A subtitle editor checks, adjusts and polishes the text so that it is clear, consistent and correct.

Did you try to solve the challenge I included in a recent post? After seeing 84 discrete subtitle errors in one episode of a show on Netflix, I posted one example and suggested that there were 5 errors in it and asked if you could find them. The subtitle read:

Alright

for a non-English-speaking character saying

Okay.

In fact, I'd even argue that there are 6 errors in that one word. (Email me if you think you can figure out the problems in that example.) But that word distracted me, and I didn't even have my editor's cap on—I was just chilling with a show on the weekend. Not the end of the world, granted; but my reading brain stumbled, and that caused me to pause, which caused me to miss the next title, which made me lose the thread of the dialogue, and I had to rewind. (This is especially problematic if you're watching a show that is info-heavy, such as a mystery or crime thriller.)

While providers like Netflix are rolling out new services to try and produce better subtitle translations, they're still missing this essential step in the process. No reputable book publisher would release a book without editing or proofreading done. But more on that in a future article.

So if you watch shows and films to relax and to rest your weary brain and you don't want to have to think while you're doing it (isn't that the point of recreational viewing?), you should be demanding this level of production from providers. Part of your monthly subscription fee or movie charges goes to subtitling, so you might as well get good product for your money. Would you want to buy a new book that hadn't been edited? No, but we constantly do because it's considered too costly by a lot of publishers now. If you expect your can of paint to be sold with a handle attached or fruit not to be sold when it's moldy, why are you settling for second rate in your entertainment? Rise up, good people, and demand excellence! It doesn't look like online viewing is going away anytime soon, but if we continue to accept second-best quality, we'll soon be given third.

Clear communication is not a frill, it's a basic requirement.

To see the areas of both work and play which need excellence in captioning and subtitling, see my post, Who Needs Subtitle Editing?

 

 

 

Photo by Daniel Olnes, February 14, 2008, Flickr.com

Subtitle Edit Draw: Hindi version

Here is the Hindi version of the subtitle edit post from this week:

कनाडा में फिल्मकारों के लिए मुफ़्त फ़िल्म सबटाइटल सम्पादन का मौका

क्या आप कनाडा में रह रहे एक फिल्मकार हैं? क्या आपके पास अंग्रेज़ी के अतिरिक्त किसी दूसरी भाषा में बनी फ़िल्म है? आप अपनी फ़िल्म की रिलीज़ के पहले उसके कैप्शंस या सबटाइटलों के अनुवाद का मुफ़्त सम्पादन जीत सकते हैं!

19 अप्रैल 2017 को मनाए जाने वाले नेशनल कैनेडियन फ़िल्म डे 150 (NCFD 150) #CanFilmDay  के उपलक्ष्य में को वेल्स रीड एडिटिंग द्वारा 25 अप्रैल की शाम को एक लकी ड्रा का आयोजन किया जा रहा है (रैंडम पिकर द्वारा, अधिकतम 1000 प्रविष्टियाँ). विजेता को एक फ़िल्म के अंग्रेज़ी सबटाइटलों की मुफ़्त प्रूफरीडिंग, सम्पादन और भाषागत शुद्धता की जाँच की सुविधा दी जाएगी.

ड्रा में भाग लेने के लिए आपको कनाडा का नागरिक होना ज़रूरी नहीं है, लेकिन यह आवश्यक है कि आपकी आयु 18 साल या उस से अधिक हो और आप कनाडा में अपने वर्तमान पते और काम/सेल्फ़ एम्प्लॉयमेंट/फ़िल्म स्टडीज़/अमेचर फ़िल्म निर्माण का सबूत दें. फ़िल्म की लम्बाई दो घंटे से से अधिक नहीं होनी चाहिए, हालाँकि 120 फ़िल्म मिनट से अधिक का काम हमारी सामान्य दरों पर पूरा किया जा सकता है; इस स्थिति में भुगतान पहले से तय और अग्रिम होगा. समय की गिनती पहले फ्रेम से शुरू होगी, चाहे वह क्रेडिट टाइटल/विजुअल हों. काम के पूरा होने की तारीख़ सम्पादक और विजेता द्वारा तय की जाएगी. सबटाइटल अंग्रेज़ी भाषा में ही होने चाहिए, और आपको यह तय करना होगा कि कैनेडियन, अमेरिकन, ब्रिटिश और ऑस्ट्रेलियन में से किस पद्धति की अंग्रेज़ी का प्रयोग किया जाएगा (आप जिस बाज़ार में अपनी फ़िल्म ले जाना चाहते हैं, उसके अनुसार). इस ड्रा के पुरस्कार के रूप में दी गई सेवा में सम्पादन का काम टेक्स्ट डॉक्यूमेंट या स्क्रीनशॉट के पीडीएफ़ में मार्क-अप के साथ या संपादक और विजेता द्वारा तय किये गए अन्य किसी तरीके से होगा. सम्पादित सबटाइटल को वीडियो फ़ाइल या टाइटलिंग सोफ्ट्वेयर में एम्बेड करना इस पुरस्कार का हिस्सा नहीं है. फ़िल्म के क्रेडिट्स में '"Subtitle Editing by Wells Read Editing" शामिल किया जाएगा.

भाग लेने के लिए @vwellseditor को संबोधित करते हुए  #CanFilmDay #SubtitleEditDraw हैशटैग के साथ ट्वीट करें.

I would like to thank editorial colleagues Shruti Nagar for translating this post and the related tweet and Vivek Kumar for his additional help.

Subtitle Edit Draw

Are you a filmmaker in Canada? Do you have a film made in a language other than English? You could win a subtitle edit of your transcribed captions or translated subtitles before your film’s release!

In celebration of National Canadian Film Day 150 (NCFD 150) #CanFilmDay on April 19, 2017, Wells Read Editing will hold a draw (via Random Picker, maximum 1000 entries) on April 26 for entries received by (re-)tweet with the hashtags #CanFilmDay #SubtitleEditDraw by 11:59pm EST on April 25. One winner will have one film’s English subtitles proofread, edited and checked for idiomatic correctness for FREE; two alternates will be generated by the software in case the winner cannot accept the prize.

Entrants do not have to be Canadian citizens but must be 18 years of age or older and able to provide current proof of residence, work/self-employment/film studies/amateur film making in Canada. Film length is not to exceed two hours, although work past 120 film minutes may be completed at regular fees; payment to be arranged and paid in advance; minutes begin with opening frame even if they are credit titles/visuals. Date of work fulfillment to be determined between editor and winner. Language of subtitles must be English, and Canadian, American, British or Australian conventions can be specified (depending on your intended market). For this draw’s prize, editing will not be embedded in the titling software or video file and will be completed by text document, screenshot PDFs with mark up or another mutually agreed-upon manner. Film credits will include reference to “Subtitle Editing by Wells Read Editing.”

To enter, tweet #CanFilmDay #SubtitleEditDraw to @vwellseditor.

~ FIN ~

Subtitle First Aid, Part I

It happened again.

I was watching a foreign film with subtitles. They were very well done: the English was correct, the titles themselves were very readable, and the subtitling did not distract from the content—which is one of the key requirements of successful titling.

But, as I am wont to do, I stayed and read the credits. [Insert car-brakes-screeching sound effect.]

“Filmed on Loaction”

I wasn’t obsessively looking for errors. I wasn’t putting on my Holier Than Thou grammar hat. But this jumped out at me, all the way to the back row of the theatre.

Granted: errors in subtitling or end credits are not the end of the world. They don’t make it a horrible cinematic experience. And mistakes slip by. But doesn’t the visual text of the project you’ve slaved over for months or years warrant a professional once-over? Doesn’t it deserve to have all its elements treated with regard for correctness and excellence? Shouldn’t the film have a great shot at international marketability and good critical reception?

If you skip the proofreading of your film’s text, you may be sending a message to your audience that they’re not worth considering: it’s only the end credits, right?

If you skip the proofreading of the subtitles, you may be sending a message to foreign distributors that their audiences aren’t as important as your original-language audience was to you: it’s just a secondary market, so no big deal.

This is not about being too uptight, too nit-picky, too pedantic. You wouldn’t distribute your film with sloppy sound editing or jump cuts. You probably have someone (or several someones, if you’re lucky) either exclusively handling or at least keeping an eye on prop and costume continuity. You want to create a beautiful, whole and masterly film. So you can’t afford to leave the most in-their-face part of the film half-addressed for your audience. If you do, you’re—perhaps only subconsciously—conveying an attitude that says that film can be dumbed down for the masses and that the bums-in-seats don’t care about writing and language or their experience with your art.

If your production budget is over $5000, you need to have an editor review the text or at least a proofreader look at it with fresh eyes. (Your mum/husband/BFF won’t do because there are things to consider that they aren’t trained to look for.) For as little as the price of a couple of first-release DVDs, you can have your post-production text in a workplace-training video reviewed (word count depending, of course). For the price you’d plunk down for a new cellphone, you can have your short documentary proofread.

All the social media shares of signs with bad spelling, grammar and punctuation are an indicator of the appetite people have for mocking errors. If you don’t want your work turned into a derisory meme that gets more coverage than the original piece, you need to consider this often-overlooked aspect of post-production.

Just as THX reminds us that “the audience is listening,” it would be wise to remember that it is reading, too.

This is the first of three pieces about why film subtitles need copy editing and proofreading by a professional editor and subtitler. The others will address inadequate translations and poor word choices in subtitles. Vanessa Wells is a member of Editors’ Association of Canada and SUBTLE: The Subtitlers’ Association.

Subtitle First Aid, Part II

Young man wearing a helmet holds a vehicle's steering wheel, visibly crying with red and tear-filled eyes. Caption reads: "[SADLY GO-KARTS]"

Very generally, subtitles are used in film and TV for translating foreign or indistinct speech and closed captions are for providing the hearing-impaired viewer with the audio information they are missing. As I said in the first article of this series, subtitling must not distract from the film experience, so titles or captions both require judicious choice of wording.

There are many variables involved in subtitling that aren't evident when we watch a subtitled foreign film or closed-captioned TV show. As in many fields, projects rarely adhere to their projected timelines, and titlers (like book proofreaders) sit at the end of the process; read: rush job with no rush-job fees. Subtitling and captioning have many spatial and temporal requirements; some are based on government standards, others on average reading rates, industry-wide conventions, and so on. Pop-ons and roll-ups use different production models. And cost is affected by companies using international roster or tender systems to find the most cost-effective labour market they can. So it's not always fair to complain about subtitle quality but, reasons or excuses aside, poor titles do get noticed and it does matter.
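
To make the reading-rate point concrete, here is a minimal sketch of the kind of check a subtitle editor might run over an SRT file. It is illustrative only: the 17-characters-per-second ceiling and the file name subtitles.srt are my assumptions for the example, not a standard, and real specifications vary by broadcaster and region.

import re

MAX_CPS = 17  # assumed reading-rate ceiling in characters per second; standards vary

def to_seconds(timestamp):
    # Convert an SRT timestamp such as 00:01:02,500 into seconds.
    hours, minutes, rest = timestamp.split(":")
    seconds, millis = rest.split(",")
    return int(hours) * 3600 + int(minutes) * 60 + int(seconds) + int(millis) / 1000

def flag_fast_cues(path):
    # Split the file into SRT blocks (index line, timing line, text lines)
    # and report any cue whose text outruns the assumed reading rate.
    with open(path, encoding="utf-8") as f:
        blocks = f.read().strip().split("\n\n")
    for block in blocks:
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        match = re.match(r"(\S+) --> (\S+)", lines[1])
        if not match:
            continue
        start, end = (to_seconds(t) for t in match.groups())
        text = " ".join(lines[2:])
        duration = max(end - start, 0.001)
        cps = len(text) / duration
        if cps > MAX_CPS:
            print(f"{lines[1]}: {cps:.1f} cps over the {MAX_CPS} cps budget: {text}")

flag_fast_cues("subtitles.srt")  # "subtitles.srt" is a placeholder file name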

The reason [Sadly go-karts] is lamentable is that there is a finesse to captioning and subtitling in knowing what needs to be written and when. Paul Aaron (above) is neither saying that he is sadly go-karting, nor is that a sound that must be replicated for the viewer: it’s a visual, and it’s self-evident.

Let’s look at some other problematic subtitles and captions.

I’m sure you can discern the utter uselessness of this one:

Long shot of TV coverage of a tennis match, captioned [tik...tak...tik...tak...]

Or this one:

A noble ancient Roman woman stands inside a large arena, with a group of lower-class men listening to her, and stadium step seating full of spectators. Caption reads: "Is Rome won'th one good mars life?"

But what about this one?

A middle-aged man is on the floor against filthy kitchen cupboards, screaming and recoiling from something in front of him off screen. Caption reads: "[SCREAMING LIKE A SISSY]"

It is sort of funny, and it does the trick. But “sissy” is a subjective description, and it’s likely a localized idiom that may not communicate to people of all ages or all cultures. An editor should have flagged this caption as problematic because it could put up a potential barrier between the medium and some viewers.

And here’s one for the “intensity” sub-genre of bad captions:

Scene from Breaking Bad; woman recoiling from obscured individual, holding a baby protectively. Closed caption says "Stealing intensifies."

You can’t steal intensely; you can only steal with intense emotions. Even then, this is not a word or sound to be communicated aurally.

Just as you can’t loudly imply cannibalism:

Man in a dark scene, captioned [loudly implied cannibalism]

You could perhaps convey that there is a loud gnawing sound, but whether the eating is cannibalistic is either already known to the viewer or soon will be; cannibalism is not inherently aural, nor is implication loud.

Here’s another inaccurate one that a caption editor would have re-written:

Piper sitting on the toilet, captioned [urinating forcefully]

I saw this episode of Orange Is the New Black, and Piper is not urinating forcefully, as if she were straining with a kidney stone; she had been desperate to go for hours and was finally allowed to, but only with a male guard present. A more accurate title would have said [Urgent stream of urine]. That's a sound, and it fills in the missing information more accurately. Her face conveys her disgust.

This isn’t the worst caption in the world:

Man in green hospital scrubs, captioned [makes "I don't know" sound]

But as a matter of best practice, it might have been better to write something like "Expresses indecision" (if that were the case; I don't know the scene), because the "I don't know" sound is a culturally differentiated mannerism.

My final example is not from a subtitle or caption but could easily be. A fellow editor told of a South African correspondent who was talking about a "toot" which, to her, meant a drink. My colleague commented that "toot" means something very different to us in North America (and she didn’t mean a cute car-horn sound). This demonstrates the need to have an editor review the text for idioms appropriate for the intended market. Sometimes idioms must be retained to convey cultural richness and idiosyncrasies in the story, but it is important to have someone who is aware of potential stumbling blocks (and riotous audience laughter) and who is capable of supplying synonyms that will still work with the film. The Harry Potter books were Americanized for this continent’s market (and some would argue unnecessarily), but there are times when professional copy editing of the subtitles can prevent gaffes, offence or derision and—ultimately—loss of post-distribution revenue.

Subtitlers and captioners have to work at unbelievable speeds and too often for insultingly low pay. It's not always their fault if the titles we see are poor or just plain wrong. But a subtitling editor can check the work with far less hassle than your production team would face going back down the pipeline to fix the errors. Then, when your film is received with popular and critical acclaim, you can pop that bottle of bubbly and have a toot to celebrate!

This is the second of three pieces about why film subtitles need copy editing and proofreading by a professional editor and subtitler. The first addressed proofreading as a basic component of post-production and the final one will deal with inadequate translations. Vanessa Wells is a member of Editors' Association of Canada and SUBTLE: The Subtitlers' Association.

The remaining photos used in this post were retrieved on July 7, 2016, from here.

Subtitle First Aid, Part III

Four boys lead a smaller one by the ear down a dimly lit institutional hallway; a still from the movie The Tribe. (Image: http://www.vice.com)

Parts I and II discussed the need for filmmakers to incorporate proofreading and copy editing respectively into their post-production plans. I also wrote about some of the technical difficulties titlers and captioners face, including time and space, which are connected to fonts and the languages themselves. For instance, French text is typically 20% longer than English, so if you are subtitling an English-language film into French, you have to take all of these things into consideration to keep the titles in sync with the English actors' speech.
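
As a back-of-the-envelope illustration of why that expansion matters, the sketch below estimates whether a translated cue still fits its original timing. The 20% figure comes from the paragraph above; the reading-rate ceiling, the 37-character line length and the two-line limit are assumptions I have chosen for the example rather than fixed industry rules.

EXPANSION = 1.2        # the "typically 20% longer" figure from the paragraph above
MAX_CPS = 17           # assumed reading-rate ceiling (characters per second)
MAX_LINE_LENGTH = 37   # assumed characters per subtitle line
MAX_LINES = 2          # assumed maximum lines per cue

def french_cue_fits(english_chars, display_seconds):
    # Estimate whether the French version of a cue still fits the timing
    # that was set for the English dialogue.
    french_chars = english_chars * EXPANSION
    cps = french_chars / display_seconds
    lines_needed = -(-french_chars // MAX_LINE_LENGTH)  # ceiling division
    return cps <= MAX_CPS and lines_needed <= MAX_LINES

# A 60-character English cue shown for 4 seconds reads at 15 cps, which is fine,
# but its French counterpart needs about 72 characters, or 18 cps: over budget.
print(french_cue_fits(60, 4))  # prints False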

In this final piece, I'll discuss issues of translation for subtitles, and you'll notice some complicating crossover problems. I picked a random foreign film to examine its subtitles' translation. (I'm not going to name the film because my aim is not to shame anyone for making mistakes, for reasons outlined in Part II.) I'll simply outline typical problems I found in it.

First, some good points: the translator used slang such as "gonna" appropriately, based on character. They correctly ignored a lot of background chatter that was intended to establish setting elements and that was not integral to the plot or action. For the most part, idioms were correctly used. I bristled a little at the choice of US over UK/World English spelling but, looking at the secondary releases, I see that the film had greater American than European distribution, so fair enough. (Although I still believe that World English is preferable because it prevents reader stumbling for more viewers worldwide.)

As in any copy editing job, there are stylistic choices decided on by the higher-ups which must be respected. Just as in editing an author's book, you can't hijack their style and have it your way, unless you can demonstrate your concerns about potential problems the reader may encounter and provide workable solutions. This film used some editing choices in the subtitles that I found a bit clunky for continuity, such as capitalizing a new phrase following an ellipsis from another frame, where I would have used less distracting commas and lower-case letters, as the grammar warranted. I found my eye jumping to the upper case and wondering if I'd lost the train of conversation from the last title. However, this is a grey area.

But my encounters with inconsistencies, treatments of numbers, expressions and, most egregiously, omitted titles were problematic.

Aside from the above regarding caps following ellipses, there were too many inconsistencies in punctuation treatment. Numerous clauses and sentences were incorrectly elided, with either too many commas or incorrectly placed ones, so that some sequences of subtitles should have been self-contained sentences and some should have been restricted to fewer clauses. Good writing in the script was misrepresented as long strings of spoken clauses. This sloppiness loses the reader, whose focus is returned to concentrating on the subtitles rather than absorbing their content subconsciously.

The treatment of numbers may seem like a picky topic, but it's not. Generally, editing conventions are to spell out numbers up to nine or ten (depending on the style) and to use numerals above that. Even if this had not been the stylistic choice, the jumping around was very distracting. I saw "2," "1st," "6-7 years" and, worst, "five minutes" alongside "15 minutes" and "30 minutes." In their contexts, the first three examples should have been written as "two," "first," and "six to seven years." Yes, the last three follow the above convention, but a good editor knows when to break the rules to maintain reading flow. The scene involved counting off time being wasted by a character, so for better flow, I would have recommended using "5 minutes" to match the latter two time references.
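
A crude consistency check can surface this kind of jumping around before a human editor makes the judgment calls (such as matching "5 minutes" to the other time references). The sketch below is illustrative only; the spelled-out word list and the single-digit threshold are my assumptions, and it flags candidates rather than deciding style.

import re

SPELLED_OUT = {"zero", "one", "two", "three", "four", "five",
               "six", "seven", "eight", "nine"}

def flag_mixed_number_style(subtitle_lines):
    # Collect cues that use a lone numeral below ten and cues that spell
    # small numbers out; if both styles appear, ask for a human review.
    numeral_lines = [l for l in subtitle_lines if re.search(r"\b[0-9]\b", l)]
    spelled_lines = [l for l in subtitle_lines
                     if any(re.search(rf"\b{w}\b", l, re.IGNORECASE) for w in SPELLED_OUT)]
    if numeral_lines and spelled_lines:
        print("Mixed treatment of small numbers; review for consistency:")
        for line in numeral_lines + spelled_lines:
            print("  " + line)

# Hypothetical cues echoing the examples discussed above.
flag_mixed_number_style(["Give me five minutes.", "You have 5 minutes.", "15 minutes left."])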

Another translation and copy editing issue was around "n" and "N" plus a numeral: viewers were expected to know that "n55" and (inconsistently) "N55" meant number 55 or #55 as used in street addresses. This kind of error shows lack of consideration for the audience: it assumes a worldliness in all filmgoers, that they will know cultural references for all countries.

Here is another example of culturally differentiated mannerisms not being served by the subtitles. A character said she was going for—and made a going-to-sleep gesture, putting her head to the side on her joined hands under her ear. This gesture is not culturally exclusive and is probably understood by most of the world as meaning "going to sleep." But in this case the subtitle was not left out; instead, "A nap!" was inserted (which is both incorrectly capped and punctuated). This is poor titling because she did not say "a nap" aloud; she conveyed it only gesturally.

One expression missed the mark. "It's a bit tradesman's entrance" should have been "It's a bit of a tradesman's entrance" or, because the point was to emphasize the slang and the speaker's distaste, the same phrase with "tradesman's entrance" set in italics (italics work better in titles than single quotes inside doubles). Not a horrendous problem, but I was stopped momentarily by it.

The choice to omit subtitles for some words was very unwise. One example was when a foreign word on a sign, key to a sub-plot, was left untranslated. It should be assumed that filmgoers are not all bilingual or multilingual and, even if they are, that the film's original language might not be one of theirs (and English itself might be a learned tongue). This type of error excludes some viewers and affects their experience with the film.

The other omission was frequent: completely non-existent subtitles for foreign words that were proper names for objects—and applied inconsistently! The post-production team and translator should have discussed and decided on the treatment of these names, applied the usage consistently and, again, not made assumptions about the viewers and what is general knowledge, especially when it applied to another language and a very particular niche of work. Equally annoying was when they allowed a spoken English word mid-phrase to have no subtitle, because it was assumed the English viewer knew what it meant. But when an actor says that word with an accent and the subtitle drops it, reading and film-watching stumbles follow. Here is a fictional example of what I'm referring to:

Yes, it was on the

Was it? I didn't see that.

The words "BBC News" were omitted because they were spoken in English. But that is egregiously poor subtitling practice. The constant omissions distracted from the film experience, which is antithetical to the purpose of subtitling and captioning.

It is rare to have perfect subtitles in a full-length feature, but the above examples illustrate some of the problems a subtitle editor can find by reviewing the text before distribution. The key is to allocate budget and time for this step in post-production. Film cannot engage foreign viewers if their absorption is interrupted, and being engrossed in a film is the audience's primary desire. Subtitling excellence is part of the value which filmmakers owe them.

This is the last of three pieces about why film subtitles need copy editing and proofreading by a professional editor and subtitler. The first addressed proofreading as a basic component of post-production and the second looked at editing poorly worded subtitles and captions. Vanessa Wells is a member of Editors' Association of Canada and SUBTLE: The Subtitlers' Association.

The photo above is from The Tribe, a movie which was made all the better for not using subtitles. Read my review of it in the second entry of this blog.