My thesis project, Let’s Read A Story, investigates how machine learning technology can augment creativity in storytelling for children. Human gestures such as speaking, drawing, and typing allow the reader to participate in a new form of conversation on a smart device (tablet or smart speaker) that yields a surprising new story.
Technology and smart devices are ever-present in children’s everyday lives and development. Let’s Read A Story investigates the possibilities of enhancing storytime for children with a machine learning-based program that engages a child’s creativity, imagination, and inventiveness. The program is intended to be an activity shared by parent and child.
The project addresses the following questions: 1. Can technology augment how parents read and tell stories to children? 2. Can a child interact with a piece of technology to create a meaningful connection through literature, sound, and visual art? Using recent machine learning techniques, I analyzed various children’s literature corpora and built a conversational platform that allows readers to navigate through different narrative bits and pieces and weave a new, original, immersive story of their own. The core of the experience is generated text produced in response to a human gesture (e.g., speaking or drawing). The text is the first layer, carrying the plot progression forward; on top of it, layers of generative sound and illustration, driven by various predictive models, are formed to enhance the text.
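To make the idea of weaving narrative fragments from a corpus concrete, here is a minimal sketch using a simple word-level chain built from story text. This is an illustrative stand-in only: the project's actual predictive models are not specified here, and the corpus, function names, and chain approach are all assumptions introduced for demonstration.

```python
import random
from collections import defaultdict

# Hypothetical sketch: stitch "narrative bits" from a story corpus into new
# text by following which words tend to come after which. This is NOT the
# project's actual model, just a minimal illustration of the idea.

def build_chain(corpus, order=1):
    """Map each word sequence of length `order` to the words that follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, seed, length=10, rng=None):
    """Walk the chain from a seed, producing a short passage."""
    rng = rng or random.Random(0)  # fixed seed for repeatability
    out = list(seed)
    for _ in range(length):
        key = tuple(out[-len(seed):])
        successors = chain.get(key)
        if not successors:
            break  # dead end: no narrative bit follows this word
        out.append(rng.choice(successors))
    return " ".join(out)

# Tiny toy corpus standing in for a children's literature collection.
corpus = ("once upon a time a fox met a crow . "
          "once upon a time a crow found some cheese .")
chain = build_chain(corpus)
print(generate(chain, ("once",), length=8))
```

In the actual project, a reader's gesture (a spoken word or a drawing, mapped to a word) would play the role of the seed, and the generated passage would be layered with sound and illustration.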
Lastly, the reader can change and bend the story as it progresses, drawing illustrations of their own and changing lines of text as their heart desires.