All posts by Michael Oneppo

CoDesign on Emotional Spaces

Our co-design “partner” was Nadine, who did an excellent job of breaking down the users and scenarios for our project. It was a great exercise for better understanding our target audience and optimizing the message.


In general, the primary users of our project would be large groups of people, most likely at public exhibitions with many things to see and experience. Nadine was smart to point out that there seems to be a need for a set of experiences for the crowd, as our project relies heavily on the visible reactions of people.

The crowd would work best in a relatively small space, given the number of surveillance cameras needed to cover a larger area.

Some example scenarios:

  • The ITP show
  • Undergraduate students playing on a basketball team
  • Graduate students at a design and tech program in a classroom (very funny, Nadine)
  • Middle school students in a public school classroom

As for what attracts the audience, there is a wide range of themes, rather than needs, since our project is as much a commentary on surveillance as it is a tool for quantification of the self. These include privacy, rules & regulation, and authenticity.

For the personas, Nadine believes the priorities we should focus on are therefore:

  1. The purpose of the project – is it entertainment, informationally persuasive, etc.?
  2. The size of the audience
  3. Examination of privacy/rules and regulations
  4. Who is observing and receiving data

User Stories

Nadine outlined two possible stories for our project.

The first story is the ITP Spring Show. It is a crowded event (300+ people) with many different age groups. The group is mostly tech enthusiasts, however, and the space is constantly fluctuating.

The second story is an interesting use of the technology we had not thought of: watching the crowd at an athletic game. The space is just as crowded but the people are more stationary, allowing better measurement. People come for a good time and are intending to be social, so we would find more expressions and reactions.



“Hardware” is a strong word for me right now. I’ve focused on getting a webcam to (a) detect faces (easy!) and (b) detect a variety of states of the face – pulse, mood, etc. (not easy!). This was all done using OpenCV and Node.js, which I hope will make my life easier if I need to network a bunch of PCs together to coordinate the data I collect.

  • Heartbeat seems to be detectable using an FFT of the forehead’s pixel intensity over time. Right now I’m using Numeric.JS to pull this data; no results yet.
  • Mood is a bit harder. Training sets for image-based recognition are hard to come by, and I don’t feel like I can put one together myself. Right now I’m looking into relative positions of features, e.g. positions of the corners of the mouth and eyes.
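To make the heartbeat idea concrete, here is a minimal sketch (in Python for brevity, not the Numeric.JS code mentioned above). It assumes some earlier step has already produced one mean forehead brightness value per video frame; a plain DFT then finds the strongest frequency in the plausible heart-rate band:

```python
import math

def dominant_pulse_bpm(samples, fps=30.0, lo_bpm=45, hi_bpm=240):
    """Estimate pulse from a series of mean forehead intensities.

    `samples` holds one brightness value per frame. A discrete Fourier
    transform finds the strongest frequency between lo_bpm and hi_bpm.
    """
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]  # remove the DC component
    best_bpm, best_power = None, -1.0
    for k in range(1, n // 2):
        bpm = (k * fps / n) * 60.0
        if not (lo_bpm <= bpm <= hi_bpm):
            continue
        re = sum(c * math.cos(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_power, best_bpm = power, bpm
    return best_bpm

# Synthetic check: 10 seconds at 30 fps with a 1.2 Hz (72 bpm) oscillation.
signal = [100 + 2 * math.sin(2 * math.pi * 1.2 * t / 30.0) for t in range(300)]
```

In practice the real signal is far noisier than this synthetic one, which is exactly why the baseline problem below matters.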


The key issue I’m facing is that all of these problems are easily solved when given a baseline, e.g. 10 seconds of a face to detect pulse, a neutral face state for mood, etc.
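The mood-from-geometry idea with a neutral baseline can be sketched like this (a hypothetical illustration, assuming some landmark detector already supplies eye-corner and mouth-corner coordinates):

```python
def expression_ratio(landmarks):
    """Mouth width normalized by inter-eye distance (scale-invariant)."""
    (lx, ly), (rx, ry) = landmarks["left_eye"], landmarks["right_eye"]
    (mlx, mly), (mrx, mry) = landmarks["mouth_left"], landmarks["mouth_right"]
    eye_dist = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    mouth_width = ((mrx - mlx) ** 2 + (mry - mly) ** 2) ** 0.5
    return mouth_width / eye_dist

def smile_score(current, neutral_baseline):
    """Positive when the mouth is wider than in the neutral baseline."""
    return expression_ratio(current) - expression_ratio(neutral_baseline)

# Hypothetical landmark coordinates, as (x, y) pixel positions.
neutral = {"left_eye": (0, 0), "right_eye": (10, 0),
           "mouth_left": (2, 8), "mouth_right": (8, 8)}
smiling = {"left_eye": (0, 0), "right_eye": (10, 0),
           "mouth_left": (1, 8), "mouth_right": (9, 8)}
```

Normalizing by eye distance keeps the score stable as people move toward or away from the camera, but the score is only meaningful relative to a captured neutral frame, which is the baseline problem in miniature.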

How can I make something that pulls just enough interesting data from faces, without taking forever to do it? I think once I figure this piece out, the possibilities just explode.



Prior Art: Imaging Systems as Biosensors

My primary interests in the Quantified Self concern two major topics:

  • Imaging Systems as Biosensors. A great example of this is Eulerian Video Magnification, which can detect a subject’s pulse using imaging alone.
  • Population-based Quantification. This is the general concept of gathering data across multiple people rather than just an individual, giving the opportunity to make broader observations that can apply to individuals as well as the group itself.

The first place I saw these two things combining was in simple crowd analysis systems, such as counting the number of people passing by a camera:

Subburaman, Venkatesh Bala, Adrien Descamps, and Cyril Carincotte. “Counting people in the crowd using a generic head detector.” Advanced Video and Signal-Based Surveillance (AVSS), 2012 IEEE Ninth International Conference on. IEEE, 2012.

I feel that imaging systems have tremendous potential to gather much more accurate mood and disposition information than other sensor systems. For example, in their “Sense of Space” project, Al-Husain et al. link a physiological response to location:

Al-Husain, Luluah, Eiman Kanjo, and Alan Chamberlain. “Sense of space: mapping physiological emotion response in urban space.” Proceedings of the 2013 ACM conference on Pervasive and ubiquitous computing adjunct publication. ACM, 2013.

For another example, the Mappiness app lets people manually log their emotional states to generate a map of the general feeling of places.

I posit that if you could look at the face of each person tracked in these systems, you could measure an extremely accurate mood for each person, passively.

Perhaps my research could be as “simple” as having a series of cameras across a number of locations, panning across populations and recording the emotions captured on people’s faces. This data could be similarly mapped as in “Sense of Space” or “Mappiness” but be much more accurate about the recorded emotional states.
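The mapping step itself is simple aggregation. A minimal sketch, assuming some face-based classifier (hypothetical here) has already produced a mood score per sighting:

```python
from collections import defaultdict

def mood_map(observations):
    """Average the mood scores recorded at each location.

    `observations` is a list of (location, mood) pairs, where mood is a
    score in [-1, 1] from some face-based mood classifier.
    """
    totals = defaultdict(lambda: [0.0, 0])
    for loc, mood in observations:
        totals[loc][0] += mood
        totals[loc][1] += 1
    return {loc: s / n for loc, (s, n) in totals.items()}
```

For example, `mood_map([("park", 0.8), ("park", 0.4), ("subway", -0.2)])` averages the two park sightings into one score per location; the interesting research questions are in the classifier feeding this, not the aggregation.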

However, I would also like to generate conclusions beyond a simple mapping of place to emotion. The potential for applying a system like this to affective computing is huge; however, I could not find any significant prior art that applied gathered emotional states to improve user experiences. I was at least expecting a paper showing that simplifying a user experience when the user is stressed improves effectiveness or productivity. The only thing I could find was this article:

Nasoz, Fatma, Christine L. Lisetti, and Athanasios V. Vasilakos. “Affectively intelligent and adaptive car interfaces.” Information Sciences 180.20 (2010): 3817-3836.

In this study, Nasoz et al. reflected detected emotional states back onto the driver of a virtual car to encourage better behavior. However, the paper focused mostly on the detection of emotional states (no easy task given the set of inputs they used) rather than on how the actions the car could take affected the user’s mood or task effectiveness.

I would like to bridge this gap, and find a set of ways to reflect the data gathered by my proposed system onto a crowd to improve a situation. Some ideas:

  1. Simply providing an indication of the crowd’s emotional state to the crowd itself by asking the obvious question: “Why is everyone sad/happy/angry?” Done right, this could empower the crowd to fix the problem.
    1. Variation: Allow the crowd to vote on the problem/cause, hopefully illuminating and empowering the crowd collectively.
  2. Same as above, but single out a person in the crowd who happens to be an emotional outlier, e.g. “Why are you sad? Everyone around you seems to be happy.”
  3. Show the emotional gradient based on location, either in map form or just as a direction, to help people physically move to happier locations. This could be a fun interface for helping people find a place that has a more positive vibe. Over the span of a city, this could be a more “intuitive” metric for addressing the Fear of Missing Out (FOMO): by giving you a direction where you’re more likely to have a good time, it lets you move to maximize your own happiness.
  4. Same as above, but as a more “intuitive” version of Foursquare. Rather than deciding whether a place is interesting based on your friends, one could make a decision based on whether or not the general mood of people at a place is positive.
    1. An interesting side effect of this could be a negative bias toward places frequented by people with generally “cool” dispositions, e.g. places frequented by hipsters would be graded negatively.

Hello My Name is Michael

Hello My Name is Michael.

  • Name: Michael Oneppo
  • First year at ITP
  • Background: 8 years at Microsoft working on Windows Vista, 7, and 8; also CTO of
  • Why this class:
    • I feel that data on people or groups is the way computing systems gain intuition. The concepts behind the Quantified Self and location awareness can enhance our own understandings of things, but I want to understand how the same data that we can use to understand ourselves can help computing systems understand us.
  • Goals:
    • Understand the nuances around what can and cannot be gleaned from quantification.
    • Apply the principles of the quantified self to groups and populations, both by making predictions of an individual’s actions based on aggregate behaviors and by making predictions about whole or partial populations.
    • Learn about imaging systems as bio-sensors.
  • My Quantified Self:
    • If you look at the chart you see no data after a point. That which is measured improves, until you improve so quickly that you break your foot.