Black == our project; blue == example study.
Question & purpose: how to show your work in public areas (locations), and how to achieve the ideal situation you want (how to show it better, in a suitable way, quantified).
Analyze personal emotion and group emotion (self).
Think about this project from the User Experience Design aspect.
“From User-Centered to Participatory Design Approaches” gives me some basic information about user experience design and psychology.
In the user-centered design process, we are focused on the thing being designed (e.g., the object, communication, space, interface, service, etc.), looking for ways to ensure that it meets the needs of the user.
The application in the user-testing area:
The social scientist/researcher serves as the interface between the user and the designer. The researcher collects primary data or uses secondary sources to learn about the needs of the user. The researcher interprets this information, often in the form of design criteria. The designer interprets these criteria, typically through concept sketches or scenarios. The focus continues then on the design development of the thing. The researcher and user may or may not come back into the process for usability testing.
Design for Experiencing
Today we are beginning to hear about “Experience Design,” whose aim is to design users’ experiences of things, events and places. This influence on design can be attributed to a significant literature being written in the social sciences that has begun to acknowledge the role of emotions in human experience (see Jensen, 1999 for example).
Furthermore, as we know, users’ behavior is affected by the awareness of being observed. And if we ask them to fill out questionnaires or sit for interviews, they tend to tell us only what they want us to hear. Passive observation is therefore a good way to get results closest to people’s natural feelings.
The conclusion: why is our project meaningful?
It is about the recognition that all people have something to offer and that they, when given the means to express themselves, can be both articulate and creative.
How can this system be used in data analysis?
1. get to know personal interests and tastes
2. get to know public interests and tastes
3. help the decision-making process by providing the collected data results to a person
4. do research about exhibition or showcase spaces, to better plan and organize an exhibition (for example: which position or wall will visitors notice or pay attention to first?)
5. get first-hand data about users’ feelings; it is also a good user-testing method (in some shops, for instance)
6. offer potentially interesting options based on facial expressions and personal interests
7. Humans interact with each other mainly through speech, but also through body gestures, to emphasize a certain part of the speech and display emotions. Emotions are displayed by visual, vocal, and other physiological means. There is a growing amount of evidence showing that emotional skills are part of what is called “intelligence” [16,36]. One of the important ways humans display emotions is through facial expressions.
Related example from design perspective:
“The influence of prototype fidelity and aesthetics of design in usability tests: Effects on user behaviour, subjective evaluation and emotion” gives me some basic information about user testing.
Think about this project from the Technology aspect.
How to track human emotions from facial expressions
Facial expression recognition from video sequences: temporal and static modeling
In this work we report on several advances we have made in building a system for classification of facial expressions from continuous video input.
Dynamics of facial expression extracted automatically from video
Three-Dimensional Head Tracking and Facial Expression Recovery Using an Anthropometric Muscle-Based Active Appearance Model
Measuring emotion: The self-assessment manikin and the semantic differential
Concept of Ubiquitous Stereo Vision and Applications for Human Sensing
1. Design of a Social Mobile Robot Using Emotion-Based Decision Mechanisms
I think this is a good example for us to learn from: it shares a foundational idea and intention similar to ours, but realizes it in another way.
The paper describes a robot that interacts with humans in a crowded conference environment. The robot detects faces, determines the shirt color of onlooking conference attendants, and reacts with a combination of speech, musical, and movement responses. It continuously updates an internal emotional state, modeled realistically after human psychology research. Using empirically determined mapping functions, the robot’s state in the emotion space is translated to a particular set of sound and movement responses. The robot’s goal is to show the potential for emotional modeling to improve human-robot interaction.
Using an onboard camera, it detects faces and determines the presence of onlooking people. It does not detect expressions directly; instead, it uses other input (like the color of a user’s shirt) to guide its interaction with users (which makes the project less meaningful, in this case). I think a camera-based expression detection system would be very difficult for us.
Face recognition tech: OpenCV as well.
To do face detection, OpenCV’s object detection function was used. This function is based on the Viola-Jones face detector, which was later improved upon by Rainer Lienhart. It uses a large number of simple Haar-like features, trained using a boosting algorithm to return a 1 in the presence of a face and a 0 otherwise. The OpenCV object detector takes a cascade of Haar classifiers specific to the object being detected, such as a frontal face or a profile face, and returns the bounding box if a face is found. An included cascade for frontal faces was used for this system.
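As a rough illustration of what this looks like in practice, here is a minimal sketch using OpenCV’s Python bindings. The cascade file path assumes the layout of the pip-installed opencv-python package, and the camera index is an assumption; the paper’s own system is not this code.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade (path assumes the
# opencv-python pip package layout).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam; index is an assumption

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is a bounding box (x, y, w, h), as described above.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```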
To differentiate actual faces from pictures and other “face-like” stationary objects, we added a motion check based on a difference filter. Whenever the Haar detector reports a face, the robot stops and waits for a set time interval to eliminate any oscillations in the camera boom. Once the camera is perfectly still, the difference operator is executed over a few frames in the bounding box of the face, and the area under it, where the body of the person is supposedly located. If sufficient motion is found (defined by an empirical threshold), the robot transitions from the Wander to the Person state. The motion check coupled with the Haar cascade proved reliable and accurate in all situations where sufficient lighting was present.
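A minimal sketch of that motion check, assuming the camera is already still and we have the detected face’s bounding box; the frame count and threshold here are arbitrary placeholders, not the paper’s values.

```python
import cv2
import numpy as np

def is_live_face(cap, box, frames=5, threshold=8.0):
    """Run a difference filter over a few frames inside the face's
    bounding box; return True if the mean pixel change exceeds a
    threshold. Threshold is a placeholder, not the paper's value."""
    x, y, w, h = box
    prev = None
    score = 0.0
    for _ in range(frames):
        ok, frame = cap.read()
        if not ok:
            break
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        if prev is not None:
            score += float(np.mean(cv2.absdiff(roi, prev)))
        prev = roi
    return score / max(frames - 1, 1) > threshold
```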
The paper also mentions combining this with GPS and a head-tracker.
Another example, a little too science-and-tech-heavy for me:
LAFTER: a real-time face and lips tracker with facial expression recognition
We will be using surveillance cameras to capture public mood over a wide area. By using face tracking we can get a general sense of the mood of a crowd, and the relative positions of the people in the crowd.
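To make the aggregation idea concrete, here is a sketch of the step that turns per-face results into a crowd-level summary. The classify_emotion function is a hypothetical placeholder for whatever expression model we end up using, not an existing library call.

```python
from collections import Counter

def crowd_mood(frame, faces, classify_emotion):
    """Aggregate per-face emotion labels into a crowd-level summary.
    `faces` is a list of (x, y, w, h) boxes from a face detector;
    `classify_emotion` is a hypothetical stand-in for an expression
    classifier that returns a label for one face crop."""
    labels = []
    positions = []
    for (x, y, w, h) in faces:
        labels.append(classify_emotion(frame[y:y + h, x:x + w]))
        # Face centers give the relative positions of people in the crowd.
        positions.append((x + w // 2, y + h // 2))
    return Counter(labels), positions
```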
Mappiness: Generates a map of people’s emotions by asking them how they are feeling as they move about. We feel that asking people how they are feeling creates a bias. Is it possible to get a “truer” emotional state?
Sense of Space: Uses biosensors to capture people’s moods as they move about. Using biosensors and a mobile device is “truer” but doesn’t scale – the sensors can only be connected to so many people. Are there ways to capture groups of people’s emotions more easily?
We believe this system can be used to target large groups of any kind, preferably indoors. For example, the ITP Spring Show.
By collecting general moods of individuals at each exhibit (grey circles), we can assess the general mood of people during the spring show, gather clues about what exhibits are the most enjoyable, etc.
We are also looking to collaborate with CASE, an architecture consulting firm in NYC, to leverage existing surveillance equipment to capture the moods of spaces. This could yield insight into the layout of spaces for architecture and design.
One challenge is that the current technology requires faces to be head-on or nearly head-on. As such, we need to position many cameras around a space to capture as many faces as possible.
Hardware: GPS Logger Shield and Pulse Sensor
I experimented with the GPS shield and a pulse sensor. Before I knew to use a jumper wire to connect the shield’s TX to the Arduino’s RX pin, I had a hard time getting a GPS log. The pulse sensor seemed to work fine, although the pulse rate fluctuated a lot when placed on different body parts.
Another problem I encountered was writing to an SD card. After a discussion with Arlene, we found that my hardware setup and code were alright. We assume something was wrong with either my shield or the SD card, because everything worked fine with Arlene’s shield and SD card. Therefore, I was only able to print geolocation data and pulse rates in the serial monitor, as shown below.
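Since the data does reach the serial monitor, one workaround is to log on the laptop side instead of the SD card. A minimal sketch with pyserial, assuming the Arduino sketch prints comma-separated "lat,lon,pulse" lines; the port name and baud rate are assumptions that will differ per machine.

```python
import csv
import serial  # pyserial

# Port name and baud rate are assumptions; match them to your sketch.
port = serial.Serial("/dev/tty.usbmodem1411", 9600, timeout=2)

with open("gps_pulse_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["lat", "lon", "pulse"])
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if line:
            # Assumes the Arduino prints "lat,lon,pulse" per reading.
            writer.writerow(line.split(","))
            f.flush()
```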
The topic I would like to research and explore throughout this semester is cognitive computing interfaces. I want to explore EEG sensors that are currently available on the market. I will pair EEG data with GPS and other sensors such as light, sound, and pulse. There are three major challenges with this project. The first is the physical computing aspect: I need to get all the sensors working together with a microcontroller and log the data. The second is to visualize all these data together in a way that is comprehensible and shows their relevance to the user’s daily activities. The third is to package the sensors and other electronic components into a visually pleasing, comfortable wearable device.
The only two reliable readings out of all the EEG readings are Attention and Meditation. Working within the limitations of the currently available technology, I will rely only on the Attention reading for logging and visualization.
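For reference, a minimal sketch of pulling the Attention value out of the ThinkGear serial stream that NeuroSky-based headsets like the MindFlex expose. The byte codes follow NeuroSky’s publicly documented packet format; the serial port name is an assumption.

```python
import serial  # pyserial

SYNC, ATTENTION = 0xAA, 0x04

def read_attention(port):
    """Block until one valid ThinkGear packet carrying Attention arrives."""
    while True:
        # Every packet opens with two sync bytes.
        if port.read(1) != bytes([SYNC]) or port.read(1) != bytes([SYNC]):
            continue
        length = port.read(1)[0]
        if length > 169:          # spec caps payload length
            continue
        payload = port.read(length)
        checksum = port.read(1)[0]
        if (~sum(payload)) & 0xFF != checksum:
            continue              # corrupted packet; skip it
        i = 0
        while i < len(payload):
            code = payload[i]
            if code >= 0x80:      # extended codes carry a length byte
                i += 2 + payload[i + 1]
            else:                 # single-byte values like Attention (0..100)
                if code == ATTENTION:
                    return payload[i + 1]
                i += 2

port = serial.Serial("/dev/tty.usbserial", 9600, timeout=5)  # port is an assumption
print(read_attention(port))
```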
It will be interesting to use this device as a tool to boost online education performance. There are many free online courses available, but many people fail to follow through and complete the courses they have signed up for or started. This device could support a student working through an online course by alerting her when her attention level drops, allowing her to quickly refocus on the lesson and shortening the time needed to mentally process the video lesson.
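A sketch of that alert logic, assuming a stream of Attention readings on NeuroSky’s 0..100 scale; the window size and threshold are placeholders that would need empirical tuning.

```python
from collections import deque

def attention_alerts(readings, window=10, threshold=40):
    """Yield an alert whenever the rolling average of Attention
    readings (0-100) drops below a threshold. Window and threshold
    are placeholder values to be tuned empirically."""
    recent = deque(maxlen=window)
    for value in readings:
        recent.append(value)
        if len(recent) == window and sum(recent) / window < threshold:
            yield "attention dropped: consider pausing the lesson"
```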
As mentioned above, one of the main challenges of this project is visualizing all these biometric, GPS, and environmental data together in a way that is useful, readable, and aesthetically pleasing. The wearable market is becoming hugely popular and is expanding rapidly; however, the paired mobile applications and their data visualizations beg major improvement. Take Fitbit, for instance: the numbers and line graphs regarding my daily steps give me a very superficial understanding of my daily activities. If more sensory data were gathered and presented in tandem with this step count, it might give me a better understanding of my day. The challenge with having more data, however, is how to show the relationships between these data, how they affect or complement each other, and, most of all, how the user can relate them to daily life.
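As one direction, a sketch of plotting two of the streams on a shared time axis with matplotlib, so their relationship is visible at a glance; the file and column names are assumptions about how the log would be written.

```python
import pandas as pd
import matplotlib.pyplot as plt

# File and column names are assumptions about the log format.
log = pd.read_csv("wearable_log.csv", parse_dates=["timestamp"])

fig, ax1 = plt.subplots()
ax1.plot(log["timestamp"], log["attention"], color="tab:blue")
ax1.set_ylabel("Attention (0-100)")

# A second y-axis lets pulse share the same time axis as attention.
ax2 = ax1.twinx()
ax2.plot(log["timestamp"], log["pulse"], color="tab:red")
ax2.set_ylabel("Pulse (BPM)")

fig.autofmt_xdate()
plt.title("Attention vs. pulse over a day")
plt.show()
```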
I’m excited about this project because it ties in with my industrial design background and my current interests in wearable devices and data visualization. I also hope to create an Android phone app that pairs with the EEG device to log and visualize the gathered data. As for the form of the device, I am picturing something similar to the Melon. The form would be a simple band that somehow communicates its EEG capability, more than just a band that wraps around the head. Instead of plastic and rubber, the device could possibly be constructed with fabric or more flexible materials. This would provide form-fitting comfort as well as a unique look that sets it apart from other EEG devices on the market.
My intended users would be adults in the early-twenties to late-thirties age range. They would be urban dwellers who most often travel by public transportation and walking. The GPS component would be used to log the locations that produce the highest levels of concentration. This location information, along with other data such as sound and light levels, will help the user recognise their ideal study location, time, and environment. This understanding of the ideal studying environment, paired with the refocusing alert feature, could potentially boost the user’s overall online educational experience and performance.
“Activity Recognition for the Mind: Toward a Cognitive ‘Quantified Self’.” Web. 25 Mar. 2014. <http://www.m.cs.osakafu-u.ac.jp/publication_data/1362/mco2013100105.pdf>.
“NeuroPlace: Making Sense of a Place.” Web. 25 Mar. 2014. <http://dl.acm.org/citation.cfm?id=2459267>.
“Recognizing the Degree of Human Attention Using EEG Signals from Mobile Sensors.” Sensors. Web. 25 Mar. 2014. <http://www.mdpi.com/1424-8220/13/8/10273>.
I have successfully powered the GPS shield and Arduino with a solar panel, without a lithium battery. I am now exploring different variations of solar panels; I would ideally like to use flexible panels that work aesthetically with the wearable.
The MindFlex headband has been successfully hacked. I am now keen to remove the hardware from the original headband and place it into my own minimalist headband design. I intend to make this using the DiWire to construct a simple helmet frame from accurately bent wire.
The solar panel is successfully powering my GPS. Next is to link the GPS and EEG sensors together using solar panels.
I am very keen to build a helmet that combines EEG data with GPS data, allowing me to redesign the urban environment based on my emotive behaviour. Emotive 3D mapping is something I do not want to present only digitally: I will pass the information through Python into Rhino and 3D print the objects. I will begin small, even looking to redesign the furniture I am sitting on, before extending my project into the urban environment, with the hope of extending my data from objects to architectural structures.
I took a stroll through Central Park to gather GPS data, which I then passed through Python into Rhino to produce a staircase. It is pretty basic in terms of design, which is why I am keen to add the EEG data to give the design a more flamboyant and crisp edge. Fortunately, both data sets have a timestamp, allowing me to sync the two fluidly.
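Since both logs carry timestamps, the sync step can be a nearest-timestamp merge before the geometry goes into Rhino. A sketch with pandas; the file names, column names, and 2-second tolerance are assumptions about the two log formats.

```python
import pandas as pd

# File and column names are assumptions about the two log formats.
gps = pd.read_csv("gps_track.csv", parse_dates=["timestamp"]).sort_values("timestamp")
eeg = pd.read_csv("eeg_log.csv", parse_dates=["timestamp"]).sort_values("timestamp")

# Attach the nearest EEG reading (within 2 s) to each GPS fix.
merged = pd.merge_asof(gps, eeg, on="timestamp",
                       direction="nearest",
                       tolerance=pd.Timedelta("2s"))

# Each row now pairs lat/lon with an attention value, e.g. to drive
# point heights or edge offsets in the Rhino model.
merged.to_csv("synced_track.csv", index=False)
```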
I have successfully hacked the MindFlex EEG headbands and have gathered strong data. I especially enjoy visualising it using the Braingrapher in Processing. Below is a sample of data gathered from the GPS shield and EEG sensor.
Here is a link to my hardware tinkering experiments with the GPS logger.