My thesis project, Let's Read A Story, is a speculative exploration of how computers and technology can turn story time into a conversation between parents, children, and a computer. Human gestures (speaking, drawing, typing) allow the reader to participate in a new form of conversation on a smart device (a tablet or a smart speaker) that yields a surprising new story.
Technology and smart devices are ever present in children's everyday lives and development. Let's Read A Story investigates the possibilities of enhancing story time for children with a machine-learning-based program that engages a child's creativity, imagination, and inventiveness. The program is intended to be an activity shared by parent and child.
The project addresses the following questions:
1. Can technology augment how parents read and tell stories to children?
2. Can a child interact with a piece of technology to create a meaningful connection through literature, sound and visual art?
Using recently available machine learning techniques, various children's literature corpora have been analyzed in order to build a conversational platform that lets the reader navigate through different narrative bits and pieces and weave a new, original, immersive story of their own. The core of the experience is generated text produced in response to a human gesture (e.g., speaking or drawing). The text is the first layer, carrying the plot progression forward; on top of it, layers of generative sound and illustration, driven by various predictive models, enhance the story.
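To make the text-generation layer concrete, here is a minimal sketch of one simple way such a system could continue a story from a reader's prompt: a word-level Markov chain trained on a tiny stand-in corpus. The actual project's models and corpora are not specified here; the corpus, function names, and approach below are illustrative assumptions, not the project's implementation.

```python
import random

# Tiny stand-in corpus; the real project analyzes children's
# literature corpora with machine-learning models (assumption: a
# Markov chain is used here only as the simplest illustration).
CORPUS = (
    "the little fox ran into the forest . "
    "the forest was dark and the fox was brave . "
    "the brave fox found a song in the dark forest ."
)

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def continue_story(chain, seed, length=8, rng=None):
    """Generate a short continuation starting from a seed word,
    standing in for the 'response to a human gesture' step."""
    rng = rng or random.Random(0)  # fixed seed for repeatability
    word = seed
    out = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break  # dead end: no observed continuation
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

chain = build_chain(CORPUS)
print(continue_story(chain, "the"))
```

In the full experience, this generated sentence would then seed the sound and illustration layers, each produced by its own predictive model.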
Lastly, the reader can change and bend the story as it progresses, drawing illustrations of their own and changing lines of text as their heart desires.