As part of my thesis project, this installation consists of an automatic dice roller: reggaeton sounds are played through a speaker, shaking a container attached to it and making the dice dance. A computer vision system then captures the result, stores the sequence of rolls, and generates a “digital painting” from the resulting numbers. Viewers can also see how many times the dice have landed on each of the six faces and compare those tallies to the visual result on the projection.
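The source does not specify how the tallies are computed, but the counting step can be sketched minimally, assuming the computer vision system emits each detected face as an integer from 1 to 6 (the function name `tally_rolls` and the example sequence are illustrative, not taken from the project):

```python
from collections import Counter

def tally_rolls(rolls):
    """Count how many times each face (1-6) has appeared in the roll sequence,
    including faces that have not come up yet."""
    counts = Counter(rolls)
    return {face: counts.get(face, 0) for face in range(1, 7)}

# Hypothetical sequence of detected rolls
sequence = [3, 6, 1, 3, 5, 3]
print(tally_rolls(sequence))  # {1: 1, 2: 0, 3: 3, 4: 0, 5: 1, 6: 1}
```

Keeping zero counts for unseen faces makes the display straightforward: all six bars or counters can be rendered on every update.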
Through an educational learning hub and an interactive playground, both hosted on the same web app, Paralang aims to cultivate literacy, inspire curiosity, and raise informed concern about emerging neural language models.
Recently released, state-of-the-art language models can produce text that is nearly indistinguishable from text written by humans.
These recent advances, already controversial within machine learning circles, have caused ripples in the general media landscape as well, where coverage has been largely hyperbolic and occasionally uninformed or outright incorrect.
I believe this natural language generation technology is more than a mere novelty and will gradually assume an ever more pervasive role in our everyday lives. I therefore wanted to intervene, however modestly, and provide an accessible, beginner-friendly platform that helps demystify the technology and elaborates on some of its inner workings, as well as its repercussions for us both as individuals and as a society. I’d like to help answer questions like: what makes these recent advances so compelling and new? Or: how might existing societal problems be reproduced and reinforced by these advanced language models?
Ultimately, my aim is to help cultivate a more level-headed literacy and to inspire both informed curiosity and concern about these emerging models and their ramifications, with an emphasis on recent, state-of-the-art work (particularly Google’s BERT and OpenAI’s GPT-2).
The platform consists of two components, both hosted on a single web app. One is educational, revolving around a learning hub, a glossary, and resources curated for all skill levels: newcomer, intermediate, and advanced. The other is interactive, comprising a “playground” that encourages hands-on experimentation with some of the language models featured in the educational component.
Altogether, the platform is built to accommodate non-linear engagement: users can begin with the learning hub and progress to the playground, jump straight to the playground, or simply skip around between the glossary and the resources.
Accessibility is at the heart of experience design: a product that is not accessible is not usable. For a sighted designer, accessibility work is also an opportunity to improve the design process, since understanding users’ needs requires listening to them and working alongside them to design something together.
This particular accessibility use case came from making the Physical Computing coursework more accessible, since the schematics on the class’s site are images. While sighted learners rely on those schematic images, blind and low-vision learners rely on written circuit descriptions to understand how electronics work, and no accessible graphical representation has yet been able to compete with such descriptions. This pain point became the focal point of the subsequent design research.
Working with five blind and low-vision participants through a participatory, human-centered design process approved by NYU’s Institutional Review Board (IRB), I developed a set of design standards and best practices for producing readable tactile schematics. These standards were then applied to the 50+ schematics on the Physical Computing site, and both the standards document and the resulting book of tactile schematics were made available for public download.