EmojiSpeak is a platform that exposes how emoji are currently interpreted by screenreaders through a text-to-speech message board simulator, and offers a community-driven emoji dictionary to mitigate emoji’s visual inequity on the web.
Emoji have been part of daily conversation since 1999, when Shigetaka Kurita released the original set of 176 emoji for mobile phones through the Japanese telecom NTT DOCOMO; the rise of “visual content as interface” on touch screens and mobile devices has since cemented their adoption.
In recent years, emoji usage has expanded into new domains: emoji have begun appearing in court documents, and in 2015 the “Face with Tears of Joy” emoji was named the Oxford Dictionaries Word of the Year.
As online interactions become increasingly visual and less accessible to people with vision impairments, I propose to redesign emoji through a platform that enables a community to redefine emoji’s context, usage, and meaning via a crowdsourced dictionary and a text-to-speech message board simulator.
My project inquires into the visual inequity of emoji. Who is excluded from this social phenomenon? How do emojis appear in screenreaders?

Microsoft’s Inclusive Design Toolkit encourages designers to find touchpoints of exclusion in their design process. This principle is heavily inspired by the Social Model of Disability, which holds that “Removing barriers creates equality and offers disabled people more independence, choice and control.” The Social Model stands in contrast to the medical model of disability, which frames people as disabled by their impairments or differences: the medical model looks at what is ‘wrong’ with the person rather than at what the person needs, creating low expectations and leading people to lose independence, choice, and control in their lives.
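To make the screenreader question concrete, here is a minimal sketch of what a text-to-speech engine encounters in an emoji-laden message. It substitutes each emoji with its formal Unicode character name via Python’s standard `unicodedata` module; real screenreaders typically announce the friendlier CLDR short names instead (e.g. “face with tears of joy” rather than the all-caps formal name), so this is only an approximation, and the `spoken_form` helper is my own illustrative name, not part of any screenreader API.

```python
import unicodedata

def spoken_form(text: str) -> str:
    """Approximate how a message might sound when read aloud:
    each non-ASCII character is replaced by its Unicode name."""
    parts = []
    for ch in text:
        if ord(ch) > 0x7F:
            # Fall back to a placeholder if the character has no name
            # in this Python version's Unicode database.
            parts.append(" " + unicodedata.name(ch, "unknown character") + " ")
        else:
            parts.append(ch)
    return "".join(parts).strip()

# A message ending in three laughing emoji is announced three times over,
# illustrating how visual shorthand becomes verbose, repetitive audio.
print(spoken_form("Great job 😂😂😂"))
```

Running this shows “FACE WITH TEARS OF JOY” spoken out in full for every repetition, which is exactly the kind of listening experience the simulator aims to surface.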