Video & Audio

The company that brought Video Relay Service (VRS) communication to people who are deaf, Sorenson Communications, has now introduced the first American Sign Language (ASL) Phone Tree called the Sorenson Bridge.

The Sorenson Bridge will strengthen the way people with hearing disabilities communicate when using a VRS. The Sorenson Bridge replaces the time-consuming process of navigating audio phone trees using sign language interpreters with video menus shown in ASL. The ASL video menus make it much faster and easier for people whose native language is ASL to select the option they want.

Read more on the Sorenson Bridge

The W3C’s WAI Education and Outreach Working Group (EOWG) has published the first version of Web Accessibility Perspectives, a set of ten videos that explore the impact of accessibility on people with disabilities and how accessibility benefits everyone. The videos show the benefits of accessibility in everyday situations and encourage viewers to explore web accessibility further; the WCAG 2.0 guidelines inform accessible web development. You can read more about the initiative at the Web Accessibility Initiative (WAI).

The World Wide Web Consortium (W3C), the global standards organization that develops foundational technologies for the Web, received a Technology & Engineering Emmy® Award on January 8, 2016. The award was given by the National Academy of Television Arts & Sciences (NATAS) for the W3C’s work on making video content accessible through text captioning and subtitles.

The Emmy® Award recognized the W3C’s Timed Text Markup Language (TTML) standard in the category of “Standardization and Pioneering Development of Non-Live Broadband Captioning.”
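To give a flavor of what TTML looks like, here is a minimal sketch that builds a one-cue TTML caption document with Python’s standard library. The cue text and timings are invented placeholders, not from any real broadcast, and real TTML files typically carry additional styling and layout metadata.

```python
import xml.etree.ElementTree as ET

# TTML documents live in the W3C TTML namespace.
TTML_NS = "http://www.w3.org/ns/ttml"
ET.register_namespace("", TTML_NS)

# Root <tt> element; xml:lang identifies the caption language.
tt = ET.Element(f"{{{TTML_NS}}}tt", {"xml:lang": "en"})
body = ET.SubElement(tt, f"{{{TTML_NS}}}body")
div = ET.SubElement(body, f"{{{TTML_NS}}}div")

# Each <p> is one caption cue with begin/end times (placeholder values).
cue = ET.SubElement(
    div, f"{{{TTML_NS}}}p",
    {"begin": "00:00:01.000", "end": "00:00:03.500"},
)
cue.text = "Welcome to the program."

print(ET.tostring(tt, encoding="unicode"))
```

The output is a well-formed TTML fragment in which each `<p>` element pairs caption text with the interval during which it should be displayed.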

Read more on the W3C’s Emmy Award.  


As accessibility consultants who live in the world of Section 508, WCAG 2.0, heading structures, alt text and text alternatives, we can often find ourselves removed from the human side of what we do. Day to day we’re buried in code, writing reports, working with developers and project managers, which can distance us from the original reason we got into the business. I love it when something comes around that reminds us of the very human purpose behind our daily work.

Fuze, which offers a rich visual experience, has enabled students with hearing disabilities who are in different locations to learn together. For example, nursing students with hearing disabilities who are interning at a hospital can remotely attend classes.

Among the latest universities to adopt Fuze and help faculty and students come together through cloud-based video collaboration are Georgetown Law School, the University of Massachusetts at Amherst, Saint Louis University, the University of Alaska Fairbanks, and Gallaudet University.

The United States Access Board is updating its Section 508 Standards and its Telecommunications Act Accessibility Guidelines together. The Section 508 Standards are issued under the Rehabilitation Act and apply to electronic and information technology procured by the federal government, including computer hardware and software, websites, multimedia such as video, phone systems, and copiers. The Telecommunications Act guidelines, issued under Section 255 of the Telecommunications Act, address access to telecom products and services and apply to manufacturers of telecom equipment. Updating both together ensures consistent coverage of telecommunications technologies and products.

New, more comprehensive rules for TV closed captioning were unanimously approved. The rules ensure that viewers who are deaf and hard of hearing have full access to programming, resolve concerns raised by deaf and hard of hearing communities about captioning quality, and provide much-needed guidance to video programming distributors and programmers.

Included in the new requirements are:

  • Accurate: Captions must match the spoken words in the dialogue and convey background noises and other sounds to the fullest extent possible.
  • Synchronous: Captions must coincide with their corresponding spoken words and sounds to the greatest extent possible and must be displayed on the screen at a speed that can be read by viewers.
  • Complete: Captions must run from the beginning to the end of the program to the fullest extent possible.
  • Properly placed: Captions should not block other important visual content on the screen, overlap one another, or run off the edge of the video screen.
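
As a rough illustration only (not the FCC’s actual compliance test), two of the requirements above, that captions not overlap one another and that they run through to the end of the program, could be sketched as a simple validator over timed caption cues. The `Cue` type, the `max_gap` threshold, and the sample timings are all invented for this sketch:

```python
from dataclasses import dataclass

@dataclass
class Cue:
    begin: float  # start time in seconds
    end: float    # end time in seconds

def check_cues(cues, program_end, max_gap=5.0):
    """Toy checks for two of the rules above: cues must not overlap
    one another, and captions should cover the program without long
    silent gaps (a crude stand-in for 'complete'). Thresholds are
    invented, not taken from the regulations."""
    cues = sorted(cues, key=lambda c: c.begin)
    problems = []
    prev_end = 0.0
    for c in cues:
        if c.begin < prev_end:
            problems.append(f"overlap at {c.begin:.1f}s")
        if c.begin - prev_end > max_gap:
            problems.append(f"gap of {c.begin - prev_end:.1f}s before {c.begin:.1f}s")
        prev_end = max(prev_end, c.end)
    if program_end - prev_end > max_gap:
        problems.append("captions stop before the end of the program")
    return problems

# The third cue starts before the second one ends, so it is flagged.
cues = [Cue(0.0, 2.0), Cue(2.5, 4.0), Cue(3.5, 6.0)]
print(check_cues(cues, program_end=6.0))  # → ['overlap at 3.5s']
```

A real captioning workflow would of course also need human review for accuracy and on-screen placement, which cannot be reduced to timing arithmetic.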

The new Vimeo HTML5 Player adds accessibility features that improve the video experience for people with disabilities. As with any product, some things could still be improved to make the player more accessible and easier to use.

IBM and UMass Boston will work together, in conjunction with state and federal government agencies and local and global organizations, to advocate for key policies and legislation on technology accessibility. The collaboration will explore how assistive technologies and the accessible design of mobile devices, apps, and websites can be integrated, and how the overall user experience for people with disabilities can be improved.

As part of this initiative, IBM will give students, professors, researchers, and UMass Boston’s new School for Global Inclusion and Social Development access to technology and industry expertise.

Examples of applications that the collaboration will work on are:

  • IBM My Campus Mobile App: This is a navigation application for the UMass Boston campus. It uses GPS and mapping technology to identify accessible architectural features, such as ramps, and uses text-to-speech capabilities to help guide people around campus.
  • IBM Media Captioner and Editor: This application automates video captioning.

Read more on the research to advance technology solutions for people with disabilities.

The OrCam uses a 5.1-megapixel camera module and a low-power digital image processor to “read” signs, packaging, and publications for people with vision disabilities. STMicroelectronics, a global semiconductor leader, created the small device, which clips on to eyeglasses.

The wearer can point the OrCam at an object, and the camera and processor will work together to analyze and interpret the scene and describe it to the user. It can read text in a variety of lighting conditions and on a variety of surfaces, including newspapers and signs. The device is pre-loaded with a library of objects, and the wearer can teach the OrCam new objects as they use it.

