Deaf & Hard of Hearing

The Timed Text Working Group at the W3C invites implementation of the Candidate Recommendation of TTML Profiles for Internet Media Subtitles and Captions 1.0 (IMSC1).

The document specifies two profiles:

  • Text-only
  • Image-only

These profiles are intended for use across subtitle and caption delivery applications worldwide, to simplify interoperability, support consistent rendering, and ease conversion to other subtitling and captioning formats.
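For readers unfamiliar with TTML, here is a rough sketch of what a text-profile IMSC1 document can look like, emitted by a short Python script. The cue text, timings, and file name are illustrative only, and a production document would typically carry additional metadata and styling.

```python
# A minimal, illustrative IMSC1 text-profile document, written out by a short
# Python script. The cue text, timings, and output file name are made up.
IMSC1_TEXT_PROFILE = "http://www.w3.org/ns/ttml/profile/imsc1/text"

document = f"""<?xml version="1.0" encoding="UTF-8"?>
<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:ttp="http://www.w3.org/ns/ttml#parameter"
    ttp:profile="{IMSC1_TEXT_PROFILE}"
    xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:04.000">Hello, world.</p>
    </div>
  </body>
</tt>
"""

with open("sample.ttml", "w", encoding="utf-8") as out:
    out.write(document)
```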

The World Wide Web Consortium (W3C), the global standards organization that develops foundational technologies for the Web, received a Technology & Engineering Emmy® Award on January 8, 2016. The award was given by the National Academy of Television Arts & Sciences (NATAS) for the W3C’s work on making video content accessible through text captioning and subtitles.

The Emmy® Award recognized W3C’s Timed Text Markup Language (TTML) standard in the category of “Standardization and Pioneering Development of Non-Live Broadband Captioning.”

Read more on the W3C’s Emmy Award.  

We need your help!  Interactive Accessibility is a Boston-based consulting firm that conducts usability studies, in person or remotely from your location, to learn what can be improved on websites and applications to make them easier to use.  We are currently looking for people who have a visual, hearing, mobility, or cognitive disability to participate in these studies.  Most studies take 45 minutes to 1 hour, and participants are compensated with a $50–100 Amazon gift card for their time.  Sign up to participate at SurveyMonkey.


A Texas A&M University biomedical engineering researcher, Roozbeh Jafari, is developing a wearable technology that will facilitate communication between people who are deaf and people who don’t know sign language.

Jafari explains, “The smart device combines motion sensors and the measurement of electrical activity generated by muscles to interpret hand gestures.”  The device is still a prototype but can already recognize 40 American Sign Language (ASL) signs.
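As a purely hypothetical illustration of the sensor-fusion idea (not Jafari’s published method), gesture recognition from combined motion and muscle-activity signals can be sketched as a nearest-neighbor match against per-sign feature templates; every feature value below is invented.

```python
import numpy as np

# Purely hypothetical sketch, not Jafari's published method: fuse motion-sensor
# and muscle-activity (EMG) features and pick the closest per-sign template.
SIGN_TEMPLATES = {
    "HELLO":     np.array([0.8, 0.1, 0.3, 0.6]),  # invented feature values
    "THANK-YOU": np.array([0.2, 0.9, 0.5, 0.1]),
}

def classify_sign(imu_features, emg_features):
    """Return the ASL sign whose stored template is nearest to the fused features."""
    fused = np.concatenate([imu_features, emg_features])
    return min(SIGN_TEMPLATES, key=lambda sign: np.linalg.norm(SIGN_TEMPLATES[sign] - fused))

print(classify_sign(np.array([0.75, 0.15]), np.array([0.35, 0.55])))  # -> HELLO
```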

Read more on Jafari’s prototype.

Today the U.S. Department of Justice filed “Statements of Interest” in the cases brought against Harvard and MIT by the National Association of the Deaf. The lawsuits allege that the universities failed to caption online video content and that this failure violated the ADA and Section 504 of the Rehabilitation Act. The DOJ documents support NAD’s position.

The universities argued that the court should dismiss the cases because the ADA and Section 504 don’t cover websites. The schools also argued that the court should wait for the Title III web regulations. The DOJ couldn’t have written stronger briefs (one in each case): the ADA already covers websites, and there is no need to wait for regulations.

This is not the first time the DOJ has reminded us all that the ADA already covers websites, just the most recent.  For other DOJ activity, see the digital access legal update written by Lainey Feingold in March. Another update is coming soon.

DOJ Statement of Interest in MIT case

DOJ Statement of Interest in Harvard case

FUJITSU’s software LiveTalk, a participatory communications tool for people with hearing disabilities, is now on sale to companies and schools in Japan.

LiveTalk helps people with hearing disabilities participate and share information in meetings with multiple people. The software recognizes speech and converts it into text, which is then displayed on multiple PC screens. A rough sketch of this broadcast pattern follows the feature list below.

LiveTalk boasts the following features:

  • Converts speech into text with speech-recognition technology and displays it in real time over wireless LAN
    • Enables text-based communication: speech picked up by handheld or headset mics is converted into text and displayed on PC screens in real time.
    • If multiple people speak at once, the text conversion is processed in parallel and the results are displayed simultaneously.
    • Recognition mistakes can be corrected on the PC.
    • Text is transmitted in real time to PCs and tablets connected to a wireless LAN router.
  • Provides a variety of modes of expression, such as transmitting stamps or fixed expressions in real time
    • In addition to keyboard input, anyone can comment quickly by sending easily understood emoticon stamps and preregistered, frequently used fixed phrases.
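As a rough sketch of the broadcast pattern described above (not Fujitsu’s actual implementation), each recognized utterance could be pushed to every listening PC or tablet on the local network; the port number and message format below are arbitrary assumptions.

```python
import json
import socket
import time

# Rough sketch, not Fujitsu's implementation: push each recognized utterance to
# every caption display listening on the local network. Port and message format
# are arbitrary assumptions.
BROADCAST_ADDR = ("255.255.255.255", 50007)

def broadcast_caption(speaker: str, text: str) -> None:
    """Send one recognized utterance to all PCs and tablets on the LAN."""
    packet = json.dumps({"speaker": speaker, "text": text, "ts": time.time()})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet.encode("utf-8"), BROADCAST_ADDR)

# Each result from the speech recognizer would be pushed as soon as it arrives.
broadcast_caption("Speaker 1", "Let's review the agenda for today's meeting.")
```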

A vest that allows people who are deaf to feel sounds and understand speech is being developed at Rice University and Baylor College of Medicine. David Eagleman, a neuroscientist and best-selling author, is leading students in refining the vest, which has several embedded actuators that vibrate in patterns representing words. The vest pairs with a smartphone that picks out speech from the ambient sounds in the environment.
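As a hypothetical sketch of the idea (not the VEST firmware), a word can be encoded as a sequence of vibration frames, where each frame names the actuators that buzz together; the actuator indices, patterns, and frame length below are invented for illustration.

```python
# Hypothetical sketch, not the VEST firmware: encode a word as a sequence of
# vibration "frames", each frame naming the actuators that buzz together.
# Actuator indices, patterns, and frame length are invented for illustration.
WORD_PATTERNS = {
    "hello": [{0, 3}, {1, 4}, {2, 5}],
    "yes":   [{0, 1, 2}],
}

def vibration_frames(word, frame_ms=150):
    """Yield (actuator_list, duration_ms) frames a driver would send to the vest."""
    for actuators in WORD_PATTERNS.get(word.lower(), []):
        yield sorted(actuators), frame_ms

for frame in vibration_frames("hello"):
    print(frame)  # ([0, 3], 150) means: buzz actuators 0 and 3 for 150 ms
```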

Eagleman’s Versatile Extra-Sensory Transducer (VEST) was the subject of his March TED Talk.

On Monday, the China Disabled Persons’ Federation (CDPF) and the China Banking Association issued a guideline requiring China’s electronic banking services to provide easier access for people with disabilities.

The guideline focuses on three types of disabilities in order to promote accessibility of e-banking services such as phone and online banking:

  • Vision Disabilities – E-banking services will provide a specially designed shortcut menu, ID recognition, and simplified verification codes.
  • Hearing Disabilities – Offer multiple visual facilities and an instant short-messaging service.
  • Mobility Impairments – Establish a long-distance self-service system allowing accounts that traditionally require a physical presence to be opened from home.

Information on accessibility assessments

Motion Savvy’s American Sign Language (ASL) interpreting software UNI works with a Leap Motion sensor integrated with a Dell Venue 8 Pro Tablet, allowing people with hearing impairments more freedom in their communications.

Motion Savvy recently earned $25,000 from Leap Motion’s LEAP AXLR8R program, Leap Motion’s investment competition. This allowed Motion Savvy to relocate from the Rochester Institute of Technology’s National Technical Institute for the Deaf to the Bay Area.

Read more about UNI.

Researchers at the Georgia Institute of Technology created a Google Glass app that delivers real-time closed captioning through speech-to-text technology. Individuals with hearing impairments can wear the glasses while someone speaks into a smartphone. The speech is then converted to text by the Android transcription API and displayed in the glasses.

Google Glass has its own microphone, but using a separate phone better picks up speech from the person not wearing the glasses and reduces background noise.
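As an illustrative sketch (not the Georgia Tech app’s code), a small rolling caption buffer shows how newly transcribed words might be kept within the limited space of a heads-up display; the 40-character limit is an assumption, not a Glass specification.

```python
from collections import deque

# Illustrative sketch, not the Georgia Tech app: keep a rolling caption window so
# the newest words fit on the small heads-up display while older text scrolls off.
# The 40-character limit is an assumption, not a Glass specification.
class CaptionWindow:
    def __init__(self, max_chars=40):
        self.max_chars = max_chars
        self.words = deque()

    def add(self, phrase: str) -> str:
        """Append newly transcribed words and return the text to display."""
        self.words.extend(phrase.split())
        while len(" ".join(self.words)) > self.max_chars:
            self.words.popleft()  # drop the oldest word first
        return " ".join(self.words)

window = CaptionWindow()
print(window.add("the lecture starts"))
print(window.add("in five minutes please take a seat"))
```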

Captioning is available for download at MyGlass. For more information and support go to the project website.

Read more about real-time closed captioning on Google Glass

