How To Make Live Chat Accessible

Live chat is the buzzword of modern communication. It may be a retail business that wants to reach its customers or a government department that wants to solve its citizens’ problems: any business or organization that already reaches customers through phone and email can go a step further. Live chat is an instant and more convenient way to answer customer queries.

Modern live chat services are not just instant messaging systems; they can be largely automated by combining predetermined content, machine learning, and artificial intelligence, minimizing the need for human intervention.

The wide range of platforms these live chats can be deployed on is another opportunity to reach as many customers as possible. Apart from websites and mobile apps, live chat can also be deployed on applications such as Google Assistant, Facebook Messenger, and so on. While businesses consider reaching a wider customer base, it is equally important to meet each customer’s requirements in using the live chat.

In this article we want to highlight the needs of a commonly ignored customer base: customers with disabilities. In other words, this article highlights the best practices for making a live chat accessible to keyboard-only users, screen reader users, users with low vision, and users with various other disabilities.

Best practices to make a live chat accessible

Though not an exhaustive list, here are a few best practices to make a live chat accessible.

Choosing the platform

The high-level objective is to allow content authors to customize the UI. Not all platforms allow customizing the UI of the chat widget. In case the platform-supplied UI is not accessible, content authors should be able to tweak the code, whether it is adding a proper heading structure or labeling the buttons. So, ensure that the platform you choose for the live chat allows content developers to make the relevant accessibility changes in the user interface.

Keyboard access

All the user interface elements must be keyboard accessible. Users who depend only on a keyboard must be able to move between all actionable elements. These include the UI elements on the widget as well as the buttons or links that are part of the conversation. For example, an automated chat bot may ask the user:

“Bot says: The next flight to LA from SFO is in an hour, do you want to know the details?” followed by Yes and No buttons.

In the above message, Yes and No are buttons, and the user should be able to move focus to those buttons with the keyboard and activate them.

The other UI elements on the live chat widget may include the Send button, email transcript, upload/share, emojis, and so on. Of course, the message text box also needs to be operable with the keyboard alone.
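As a minimal sketch of the example above (the markup and texts are illustrative, not from any particular platform), native HTML buttons provide this keyboard behavior for free:

    <!-- Quick-reply choices rendered as native buttons: focusable with Tab
         and operable with Enter/Space by default -->
    <p>The next flight to LA from SFO is in an hour, do you want to know the details?</p>
    <button type="button">Yes</button>
    <button type="button">No</button>

    <!-- The message box and send control should also be native, focusable elements -->
    <textarea aria-label="Type your message"></textarea>
    <button type="button">Send</button>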

Proper labels

Users of assistive technologies such as screen readers need proper labels for all user interface elements. If the platform does not provide accessible labels, or labels at all, content authors should provide them. The labels of buttons convey the action associated with them. Having visible labels is useful for everyone; however, if the purpose of an element is conveyed to a sighted user by other visual means, aria-label can serve as a substitute.

Proper labels or alternate text are also required for other features such as emoticons. Images shared by the bot also need proper alternate text. Providing a way for users to add captions to the documents, images, or videos they upload is an added advantage.
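A minimal sketch of such labeling, assuming icon-only controls (the icons and file name are illustrative):

    <!-- Icon-only controls: the accessible name comes from aria-label
         because there is no visible text label -->
    <button type="button" aria-label="Send message">➤</button>
    <button type="button" aria-label="Upload a file">📎</button>
    <button type="button" aria-label="Insert emoji">🙂</button>

    <!-- An emoticon or image shared in the conversation needs alternate text -->
    <img src="smiley.png" alt="Smiling face emoticon">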

Navigation

The user mostly interacts with the chat history pane and the message text box. A screen reader user should be able to switch between these two panes easily and quickly, whether with a simple Tab and Shift+Tab or by using access keys. Some chat bot developers also provide access keys for a screen reader user to read the recent messages. Providing such access keys does no harm and can add value: the user has to remember a few more commands, but can read the recent messages as many times as they need without navigating away from the message text box.

Screen reader users must be able to navigate the history pane as they need. Text navigation commands such as Left Arrow, Right Arrow, Ctrl+Left Arrow, Ctrl+Right Arrow, Up Arrow, Down Arrow, and so on should work as usual in the history pane.
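A minimal sketch of the two panes, assuming a widget that exposes the history as a log region (the access keys shown are illustrative, and accesskey support varies across browsers and assistive technologies):

    <!-- The history pane: role="log" identifies it, and tabindex="0" makes
         it reachable with Tab so text navigation commands work inside it -->
    <div role="log" aria-label="Chat history" tabindex="0" accesskey="h">
      <p>Bot says: Welcome! How can I help you today?</p>
    </div>

    <!-- The message text box, one Tab stop away from the history pane -->
    <textarea aria-label="Type your message" accesskey="m"></textarea>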

Reading the alerts and recent messages

Many dynamic changes happen from the time the user initiates the live chat until the conversation ends. If a human agent is responding from the other side, the user may have to wait in a queue for the agent. The status of the queue keeps changing, the agent may start or stop typing, and the user may receive a message from the agent or the automated bot. All of these are dynamic messages displayed on the screen. No matter where the screen reader user’s current focus is, these dynamic changes and updates need to be announced to the user. Some of these changes may be purely visual yet still convey important information.

The aria-live property can be added to the dynamic updates so that the user understands the changes in the live chat. The specific values aria-live="assertive" and aria-live="polite" address many of these problems. The aria-label property can name the visual changes that happen on the screen, and aria-live conveys the message to the user at the right time.
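A minimal sketch, assuming a typing indicator and a history pane (the texts are illustrative):

    <!-- Status updates announced without moving focus. aria-live="polite"
         waits until the screen reader is idle; reserve "assertive" for
         urgent updates such as errors -->
    <div role="status" aria-live="polite">Agent is typing…</div>

    <!-- role="log" is implicitly aria-live="polite", so newly appended
         messages are announced as they arrive -->
    <div role="log" aria-label="Chat history">
      <p>Agent says: You are number 2 in the queue.</p>
    </div>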

Other considerations

  • Usually the agent name or the word “Bot” is prepended to messages from the agent side, and the user name is prepended to messages from the user side, separated with a : (colon). It can be made more conversational with the words “Agent name says” and “User name says” (replacing agent name and user name with the actual names), as in the sketch after this list.
  • Allow the user to switch off the timestamp. Often the timestamp is not very important in the conversation, but it adds a lot of verbiage when the screen reader reads each message. If the user is able to switch off the timestamp, users who are not comfortable with that verbiage can benefit.
  • Visually differentiate normal text from buttons or links in the chat history. In modern chat bots, agents or users can share links, or the bot can carry a conversation with simple options such as Yes and No. These actionable elements should be easily distinguishable from the surrounding text, in addition to being keyboard accessible.
  • Adhere to all other applicable W3C WCAG guidelines and chat bot best practices.
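For the first point above, here is one possible sketch (the .visually-hidden class is an assumed CSS utility that moves text off-screen without hiding it from screen readers; the name is illustrative):

    <!-- Sighted users see "Asha: ", screen readers hear "Asha says: " -->
    <p>
      <span aria-hidden="true">Asha: </span>
      <span class="visually-hidden">Asha says: </span>
      Your ticket has been booked.
    </p>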

JAWS 2019 Download and What’s New

JAWS (Job Access With Speech) is a popular commercial screen reading software for the Windows operating system. Being an old and reliable screen reader, JAWS is the preferred choice for many end users and is also used for accessibility testing by many organizations. JAWS is a Freedom Scientific product.

JAWS 2019

JAWS 2019 was released in October 2018. Here are a few new features in this release.

  • JAWS 2019 now collects anonymous usage information. Users can accept or deny the collection of this information, which Freedom Scientific uses to enhance the user experience. Users can switch it on or off anytime from Settings Center > Default settings > Miscellaneous > Submit Anonymous Usage Data.
  • Multiline edit fields can now be announced by JAWS 2019. Enable this from the Miscellaneous group under Settings Center.
  • Audio ducking is now available for the Windows 10 Spring 2018 Creators Update. It is not enabled by default; enable it from the Speech settings page under Settings Center.
  • JAWS 2019 now has improved support for Office 365 applications.
  • Here is the complete list of updates.

JAWS 2019 Download

You can download the latest version from the JAWS download page. The free version runs for 40 minutes, and the computer needs to be restarted to use it again. If you purchased JAWS after September 2018, you are eligible to receive JAWS 2019. Please contact Freedom Scientific to find out how to upgrade your license.

What is FSCast

Freedom Scientific publishes a monthly podcast called FSCast. Download and listen to FSCast 158 for more information on JAWS 2019.

Remember, JAWS licenses differ for accessibility testing and commercial use. Look at the licenses page to find the right one for you.

 

Related Links

International Day of Persons with Disabilities, December 3rd 2018 (IDPD-2018)

“Empowering persons with disabilities and ensuring inclusiveness and equality” is the theme for IDPD-2018.

 

The annual observance of the International Day of Persons with Disabilities was proclaimed by United Nations General Assembly resolution 47/3 in 1992. It aims to promote the rights and well-being of persons with disabilities in all spheres of society and development, and to increase awareness of the situation of persons with disabilities in every aspect of political, social, economic and cultural life. Further, India signed the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD) and subsequently ratified it on 1st October 2007. The Convention came into effect on 3rd May 2008. Being a signatory to the Convention, India has an international obligation to comply with its provisions.

Read more about IDPD-2017, IDPD-2014 and IDPD-2013 on Maxability.

What can we do?

Organizations such as the United Nations and various countries around the world are putting a lot of effort into safeguarding the rights and dignity of persons with disabilities. These efforts alone are not sufficient to achieve an inclusive society. Each one of us has a shared responsibility to treat every person equally, no matter what their abilities or disabilities are. We have tried to contribute in the little way we could.

On this International Day of Persons with Disabilities (IDPD-2018), let us

  • Understand the problems of persons with disabilities and how we can help overcome them.
  • Pledge to add captions to the pictures we share on Facebook, Twitter and WhatsApp.
  • Quickly check for accessibility problems on websites and inform the developers.
  • Spend an hour doing your daily tasks on your computer without using a mouse.

On behalf of Maxability, we are committed to increasing awareness of digital equality and disability. Check your knowledge of disability with this quick quiz on sensitization to disability and accessibility.

We will be releasing a sensitization training module on this International Day of Persons with Disabilities (IDPD-2018) and will provide a one-year free subscription. Reach out to us from the contact page if you wish to get this free subscription. Select “Training needs” from the enquiry type.

Let us all together make this world a better place to live. Share your experiences in the comments section below; they help inspire many others.

Resources

For International Day Of Persons With Disabilities IDPD-2018,

Rakesh Paladugula

Maxability – Towards an Inclusive Web.

aria-rowindex Property

The aria-rowindex property is used on a table, grid, or treegrid where not all of the rows are currently present in the DOM. It notifies assistive technologies of the index, or position, of a row with respect to the total number of rows available. If all the rows of the table or grid are present, user agents can calculate the index of each row themselves, so the aria-rowindex property is not required in those cases.

The value of the aria-rowindex property must be greater than or equal to one, greater than the aria-rowindex value of any previous row, and less than or equal to the total number of rows in the table. For a cell or gridcell that spans more than one row, set aria-rowindex to the index of the first row of the span. If the set of rows present in the DOM is contiguous, and if there are no cells that span more than one row in that set, then authors may place aria-rowindex on each row, setting the value to that row’s index within the full table.

aria-rowindex property used in roles

Values of aria-rowindex property

The value must be greater than or equal to one, greater than the aria-rowindex value of any previous row, and less than or equal to the total number of rows in the full table.
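A minimal sketch, with illustrative data, of a grid that keeps only two of its 500 rows in the DOM:

    <!-- aria-rowcount on the grid gives the full size; aria-rowindex gives
         each row's position within that full set -->
    <div role="grid" aria-label="Orders" aria-rowcount="500">
      <div role="row" aria-rowindex="21">
        <span role="gridcell">Order 21</span>
      </div>
      <div role="row" aria-rowindex="22">
        <span role="gridcell">Order 22</span>
      </div>
    </div>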

 

Related Links

1.3.5 Identify Input Purpose

The purpose of each input field collecting information about the user can be programmatically determined when:

  • The input field serves a purpose identified in the Input Purposes for User Interface Components section; and
  • The content is implemented using technologies with support for identifying the expected meaning for form input data. (Level AA)

Description

 

While labels or instructions provide clear information about the data expected from the user, programmatically indicating the type of data expected in a form field can be useful for many users. Success criterion 1.3.5 Identify Input Purpose also tries to give some guidance on personalizing data input.

For example, type="tel" simply says that the user is expected to enter a phone number, but this success criterion gives an opportunity to define whether the telephone number to be entered is your own or some other person’s.

While the type attribute defines the kind of data to be provided, the HTML5 autocomplete attribute, which drives autofill, allows user agents to programmatically identify the purpose of the data to be provided.

These properties also allow assistive technologies to provide additional cues when the user has to enter particular data. For example, a birthday cake icon adjacent to a date field can represent a birthday field. This is most reliable when the HTML5 autocomplete attribute is set to bday.

 

To meet this success criterion, ensure that an autocomplete attribute is specified wherever appropriate, with a value that represents the purpose conveyed by the label. For example, an input text field with the label First Name can have an autocomplete value of given-name. This value is independent of the language of the page or label and can be understood by user agents and assistive technologies.
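A minimal sketch of such a form (the field names are illustrative; the autocomplete tokens given-name, tel and bday come from the HTML specification):

    <!-- autocomplete tokens identify what kind of data each field expects -->
    <label for="fname">First Name</label>
    <input id="fname" type="text" autocomplete="given-name">

    <label for="phone">Your phone number</label>
    <input id="phone" type="tel" autocomplete="tel">

    <label for="dob">Date of birth</label>
    <input id="dob" type="date" autocomplete="bday">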

 

Who benefits from 1.3.5 Identify Input Purpose

Many user groups benefit from this success criterion.

  • People with dexterity impairments benefit from selecting auto-filled values in input fields, as typing is difficult for them.
  • People with language or memory difficulties benefit from auto-filled values, as they no longer need to remember values such as a complete address, zip code, and so on.
  • People with cerebral palsy, stroke, head injury, motor neuron disease, or a learning disability benefit if assistive technologies or browser add-ons can provide icons along with the labels for input fields.

Related Links

HTML5 Autocomplete

NVDA 2018.3 is released, Download and What’s new

NVDA, the free Windows screen reader by NV Access, has released its next version for 2018, NVDA 2018.3. The release includes a few new features, bug fixes, and developer enhancements.

Note before downloading: NVDA 2018.3 breaks compatibility with NVDARemote 2.1 due to a necessary upgrade of wxPython (NVDA’s graphical user interface library). However, a new version of NVDARemote compatible with NVDA 2018.3 is expected to be released in the coming days.

What’s new in NVDA 2018.3

  • NVDA will report grammar errors when appropriately exposed by web pages in Mozilla Firefox and Google Chrome.
  • Content marked as being either inserted or deleted in web pages is now reported in Google Chrome.
  • Custom roles via the aria-roledescription attribute are now supported in all web browsers.
  • Added support for various modern input features introduced in recent Windows 10 releases. These include the emoji panel (Fall Creators Update), dictation (Fall Creators Update), hardware keyboard input suggestions (April 2018 Update), and cloud clipboard paste (October 2018 Update).
  • Content marked as a block quote using ARIA (role blockquote) is now supported in Mozilla Firefox 63.
  • The user is asked once when NVDA starts if they are happy sending usage statistics to NV Access when checking for NVDA updates.
  • Accessible labels for controls in Google Chrome are now more readily reported in browse mode when the label does not appear as content itself.

The complete list of new features, bug fixes and changes can be found here.

 

Download or update to NVDA 2018.3

Visit the NVDA download page for NVDA 2018.3. If you already have an older version, update to the latest version.

Updating to NVDA 2018.3

  1. Press NVDA+N to open the NVDA menu.
  2. Navigate by pressing the down arrow until you hear Help.
  3. Press the right arrow to move into the Help menu.
  4. Press the down arrow until you hear Check for updates.
  5. Press Enter on Check for updates and follow the on-screen instructions.

NVDA is a free, open source screen reading solution for the Windows operating system. Support the cause by donating to the NVDA project.

 

CSUN 2019 General Call for Papers, Dates and Venue

CSUN 2019, the 34th CSUN Assistive Technology Conference, is scheduled for March 11-15, 2019 at the Anaheim Marriott hotel. Yes, there is a change in venue: reroute your plans from San Diego to Anaheim this time.

The general call for papers opened on September 13, 2018 and will close on October 2, 2018 at 3 PM PDT. The CSUN program committee has published the instructions and procedure for submission. People submitting proposals for papers are expected to read them and adhere to them for their proposals to be considered for review.

 

Call for paper submission

CSUN 2019 papers for presentation should be submitted with all the details asked for in the submission form, in the prescribed format. An extended abstract of not less than 500 words must be ready in an accessible format before beginning the submission process.

 

Once you are ready with the details, submit your form on the CSUN 2019 call for papers page.

 

Important Dates

Conference Dates:

March 11, 2019 to March 15, 2019.

March 13th to March 15th will be the main conference, with a kickoff event on the 12th. The preconference workshops will be held on March 11th and 12th.

General call for papers:

Opens on September 13, 2018 and ends on October 2, 2018 at 3 PM PDT.

Notification of acceptance

October 23, 2018

Venue

Anaheim Marriott

700 West Convention Way, Anaheim, California 92802, USA. Phone: +1 714-750-8000

Additional details of the dates and venue are available on the CSUN 2019 conference page.

 

Now think about your proposals, and all the best from Maxability to those submitting their papers.

 

 

aria-rowcount (property)

Dynamic tables are quite common on the modern web. In these dynamic tables, the number of rows and columns depends on the data fetched for the user’s input or on other parameters that define the table. The aria-rowcount property is ideal in scenarios where not all rows of the table are currently available in the DOM. If all the rows of the table are present in the DOM, this property is not required: user agents can calculate the total number of rows in the table and share it with assistive technologies.

Along with the aria-rowcount property, the aria-rowindex property also needs to be provided so that user agents can effectively inform the user of the position of the current row within a dynamic table.

Developers must set the value of aria-rowcount to an integer equal to the number of rows in the full table. If the total number of rows is unknown, authors must set the value of aria-rowcount to -1 to indicate that the value should not be calculated by the user agent.

aria-rowcount property used in roles

Values of aria-rowcount property

An integer equal to the total number of rows in the table. The value must be -1 (minus one) if the total number of rows is unknown.
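A minimal sketch of both cases, with illustrative data:

    <!-- Only a slice of a 3000-row table is in the DOM; aria-rowcount
         tells assistive technologies the real total -->
    <table aria-rowcount="3000">
      <tr aria-rowindex="47"><td>Row 47 of 3000</td></tr>
    </table>

    <!-- Total unknown (for example, an infinite scroll): set it to -1 -->
    <table aria-rowcount="-1">
      <tr><td>More rows load as the user scrolls</td></tr>
    </table>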

Also have a look at aria-colcount and aria-colindex properties.

 

Maxability Webinar – October 2018 – Web Accessibility Testing Tools

The next webinar on web accessibility is scheduled for October 9th, 2018. Finding an accessibility testing tool that is the right fit for your project or the role you play can be difficult. In some cases you want to check the accessibility of a webpage quickly to get a high-level understanding; at other times you need to make a complete analysis. You might not be a technical person but still want to check whether a given website is accessible.

There are many tools available in the market that provide an accessibility audit report in the way you want. This webinar aims at sharing those free and commercial tools. You can make your choice!

Presenter

Rakesh Paladugula

Key take-aways

  • A brief introduction to various accessibility testing tools.
  • Which ones are free and which are commercial.
  • Web-based tools and browser plugins.
  • Quick analysis versus detailed reports.
  • Whether technical knowledge is required.

We will not tell you which tool is good and which is not. This webinar should help you analyze which is the right tool for you.

Webinar Timing

On October 9th, 2018, between:

  • 9:00 PM and 10:00 PM IST
  • 11:30 AM and 12:30 PM ET
  • 8:30 AM and 9:30 AM PT

Accessibility Scanner App on Android

Accessibility Scanner is an accessibility testing tool available for Android devices. No technical knowledge is required to use it; it is an easy-to-use, self-explanatory mobile accessibility testing tool.

Download from Google Play store

The Accessibility Scanner can be downloaded from the Play Store like any other app. Remember that this app is only available for Android version 6 and above.

 

Setting up and using the Accessibility Scanner

The Accessibility Scanner needs to be turned on before use. You can turn on the Accessibility Scanner in the accessibility settings of the operating system; alternatively, it prompts you to switch it on the first time you open the app on the device.

Switching the Accessibility Scanner On

Navigate to

Apps > Settings > Accessibility > Scanner

and toggle the scanner button to the on position. This is a one-time activity, and you can switch the scanner off anytime by following the steps above.

Now the Accessibility Scanner is ready to audit the accessibility of your screen.

Scanning for accessibility

Follow the steps below to do an accessibility test.

  1. Open the app you want to test.
  2. Tap the Accessibility Scanner icon.
  3. The accessibility results will be stored in the scanner app.
  4. The notes on possible improvements can be found there, and you can use the list feature for detailed information.
  5. Past reports are also available in the scanner app and can be shared with anyone.

Note: If you are using TalkBack, use explore by touch to find the scanner icon and double-tap it to perform the audit.

 

What tests can the Accessibility Scanner do?

The Accessibility Scanner is not a replacement for a manual audit. However, it helps everyone, including non-technical people, to quickly check for some of the problems their app may cause for users with various disabilities. The Accessibility Scanner can test for the accessibility problems listed below.

  • Content labels
  • Touch target size
  • Clickable items
  • Text and image contrast

The color contrast ratio that the scanner checks is the minimum contrast requirement from the W3C WCAG. The touch target size check follows the Android Material Design accessibility requirements. These can be changed in the Accessibility Scanner app settings if required.