Zander Midterm – H@

H@ – locative emotive design.

Concept
I am building a solar-powered EEG and GPS wearable. H@ will allow me to emotively map my movement through NYC, which I will 3D print as a unique map of the city. H@ also allows me to redesign the urban environment based on the emotional data I gather at each longitude and latitude point. I will 3D print my emotional interpretation of the city and its features, including lampposts, pavements, bridges, and buildings.

Motivation
∆ Quantify my emotional relationship with the environment
∆ Physically print the emotional feedback
∆ Challenge traditional concepts of designing and mapping
∆ Attempt to redesign urban spaces based on exact emotional feedback

Prior Mapping and EEG Design

Christian Nold – Bio Mapping – 2004
The Bio Mapping tool allows the wearer to record their Galvanic Skin Response (GSR), a simple indicator of emotional arousal, in conjunction with their geographical location. This can be used to plot a map that highlights points of high and low arousal. By sharing this data we can construct maps that visualise where we as a community feel stressed and excited.


Prototype


The current prototype is not as elegant as I would like the final version to be, but it has been useful for working out dimensions and placement. I have scanned my head using the Xbox Kinect and can therefore 3D print an exactly fitting headband. The current solar-panelled prototype does work and has been tested in Washington Square Park.


Challenges
The current challenge I face is interlinking the EEG timestamps with the GPS timestamps, allowing me to plot exact designs at each position. I am also keen to really challenge the role of the ‘designer’ with my procedure, and I am looking forward to the challenge of designing with data.
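As a first pass at the timestamp problem, below is a minimal sketch of one way to interlink the two logs offline once both have been parsed into time-stamped samples. The type and function names are hypothetical, and both logs are assumed to be sorted by time:

```cpp
#include <cstdint>
#include <cstdlib>
#include <utility>
#include <vector>

// Hypothetical sample types: one EEG reading and one GPS fix, each
// stamped in milliseconds since the start of the recording session.
struct EegSample { uint32_t ms; int attention; };
struct GpsFix    { uint32_t ms; double lat, lon; };

// Pair every EEG sample with the GPS fix nearest in time. Because
// both logs are sorted, one forward pass over the GPS log suffices.
std::vector<std::pair<EegSample, GpsFix>>
alignLogs(const std::vector<EegSample>& eeg, const std::vector<GpsFix>& gps) {
    std::vector<std::pair<EegSample, GpsFix>> merged;
    if (gps.empty()) return merged;
    size_t g = 0;
    for (const EegSample& e : eeg) {
        // Advance while the next fix is closer in time to this sample.
        while (g + 1 < gps.size() &&
               std::llabs((long long)gps[g + 1].ms - (long long)e.ms) <
               std::llabs((long long)gps[g].ms - (long long)e.ms)) {
            ++g;
        }
        merged.push_back({e, gps[g]});
    }
    return merged;
}
```

Each merged pair then gives an attention value at a latitude/longitude point, ready to drive a 3D-printable design.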
Documentation


Rivet: Hardware + Software


Hardware:

The basic setup we opted for uses the EEG sensor board contained in the MindFlex game device, or alternatively the MindWave headset, which comes with Bluetooth connectivity built in. We decided to go with the iPhone as the data-collection device, and we plan to make use of its onboard sensors: mainly the GPS, as well as the light and sound sensors.

The reasons for opting to go with the iPhone are:


  1. We were not able to get the GPS shield to work reliably with the EEG sensor, but we were able to get the location data from the iPhone.
  2. The MindWave headset comes with an accessible iOS SDK.
  3. Our vision for the project is to have the user receive feedback, both as visuals and possibly vibration, to help them better understand their attention patterns.


Software:

We are developing the required functionality piece by piece in a number of demo applications. We will then put the pieces together in the final prototype.


We have put together a basic mockup that we are following and intend to improve as we go along.



In terms of progress, we were able both to get and to map the user's location through two demo applications that we made.



Research

Black == our project; Blue == example study

Question & purpose: how to show your work in public areas (location), and how to achieve the ideal situation you want (how to show it better, in a suitable way, quantified).

Analyze personal emotion and group emotion (self).


Think about this project from the user experience design aspect

From User-Centered to Participatory Design Approaches gives me some basic information about user experience design and psychology:

http://www.maketools.com/articles-papers/FromUsercenteredtoParticipatory_Sanders_%2002.pdf

Useful sentences:

In the user-centered design process, we are focused on the thing being designed (e.g., the object, communication, space, interface, service, etc.), looking for ways to ensure that it meets the needs of the user.

Its application in the user-testing area:

The social scientist/researcher serves as the interface between the user and the designer. The researcher collects primary data or uses secondary sources to learn about the needs of the user. The researcher interprets this information, often in the form of design criteria. The designer interprets these criteria, typically through concept sketches or scenarios. The focus continues then on the design development of the thing. The researcher and user may or may not come back into the process for usability testing.


Design for Experiencing

Today we are beginning to hear about “Experience Design,” whose aim is to design users’ experiences of things, events and places. This influence on design can be attributed to a significant literature being written in the social sciences that has begun to acknowledge the role of emotions in human experience (see Jensen, 1999 for example).

Furthermore, as we know, a user's behavior is affected by whether or not he or she realizes they are being observed. And if we ask people to fill out questionnaires or give interviews, they only tell us what they want us to hear. Passive sensing is therefore a good way to get results that are closest to people's natural personal feelings.


Conclusion: why is our project meaningful?

It is about the recognition that all people have something to offer and that they, when given the means to express themselves, can be both articulate and creative.

How can this system be used in data analysis?

1. Get to know personal interests and tastes.

2. Get to know public interests and tastes.

3. Help the decision-making process by providing the collected data results to a person.

4. Do research about an exhibition or showcase space, to better plan and organize an exhibition (for example: which position or wall will visitors notice or pay attention to first?).

5. Get first-hand data about users' feelings; it is also a good user-testing method (in some shops).

6. Offer potentially interesting options based on facial expressions and personal interests.

7. Humans interact with each other mainly through speech, but also through body gestures, to emphasize a certain part of the speech and display of emotions. Emotions are displayed by visual, vocal, and other physiological means. There is a growing amount of evidence showing that emotional skills are part of what is called “intelligence” [16,36]. One of the important ways humans display emotions is through facial expressions.


Related example from a design perspective:

The influence of prototype fidelity and aesthetics of design in usability tests: Effects on user behaviour, subjective evaluation and emotion gives me some basic information about user testing:

http://www.sciencedirect.com/science/article/pii/S0003687008001129

Think about this project from the technology aspect

How to track human emotion from facial expressions

Most closely related:

Facial expression recognition from video sequences: temporal and static modeling

In this work we report on several advances we have made in building a system for classification of facial expressions from continuous video input.

http://ac.els-cdn.com/S107731420300081X/1-s2.0-S107731420300081X-main.pdf?_tid=9acf5e8e-b522-11e3-aab0-00000aacb361&acdnat=1395864778_d5e622fa00c01bd8c6a62aa22df7d562

Dynamics of facial expression extracted automatically from video

http://ac.els-cdn.com/S0262885605001654/1-s2.0-S0262885605001654-main.pdf?_tid=a4ba168c-b522-11e3-9ef4-00000aab0f26&acdnat=1395864794_c1fce58c8f5ba828748698e6a9710d3a

Three-Dimensional Head Tracking and Facial Expression Recovery Using an Anthropometric Muscle-Based Active Appearance Model

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4522536


Measuring emotion: The self-assessment manikin and the semantic differential

http://ac.els-cdn.com/0005791694900639/1-s2.0-0005791694900639-main.pdf?_tid=ea81a798-b522-11e3-8a70-00000aacb360&acdnat=1395864912_5a69046afb2509d13a4bfdb0fa9e4f4c


Concept of Ubiquitous Stereo Vision and Applications for Human Sensing

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1222176


Great examples:

1. Design of a Social Mobile Robot Using Emotion-Based Decision Mechanisms

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4058870&tag=1

I think this is a good example for us to learn from; it has a similar foundational idea and intent to ours, but realized in a different way.

The paper describes a robot that interacts with humans in a crowded conference environment. The robot detects faces, determines the shirt color of onlooking conference attendants, and reacts with a combination of speech, musical, and movement responses. It continuously updates an internal emotional state, modeled realistically after human psychology research. Using empirically-determined mapping functions, the robot’s state in the emotion space is translated to a particular set of sound and movement responses. The robot’s goal is to show the potential of emotional modeling to improve human-robot interaction.


Using an onboard camera, it detects faces and determines the presence of onlooking people. It does not detect expressions directly; instead, it uses other input (like the color of the user's shirt) to help it interact with users (which makes the project less meaningful, in this case). I think a camera-based expression detection system would be very difficult for us.

Face recognition tech: OpenCV, as well.

Details here:

To do face detection, OpenCV’s [5] object detection function was used. This function is based on the Viola-Jones face detector [10], which was later improved upon by Rainer Lienhart [7]. It uses a large number of simple Haar-like features, trained using a boost algorithm to return a 1 in the presence of a face, and a 0 otherwise. The OpenCV object detector takes a cascade of Haar classifiers specific to the object being detected, such as a frontal face or a profile face, and returns the bounding box if a face is found. An included cascade for frontal faces was used for this system.

To differentiate actual faces from pictures and other “face-like” stationary objects, we added a motion check based on a difference filter. Whenever the Haar detector reports a face, the robot stops and waits for a set time interval to eliminate any oscillations in the camera boom. Once the camera is perfectly still, the difference operator is executed over a few frames in the bounding box of the face, and the area under it, where the body of the person is supposedly located. If sufficient motion is found (defined by an empirical threshold), the robot transitions from the Wander to the Person state. The motion check coupled with the Haar cascade proved reliable and accurate in all situations where sufficient lighting was present.
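As a rough sketch of what this two-stage check looks like with OpenCV's C++ API: the stock frontal-face cascade proposes boxes, and a frame-difference filter over each box confirms real motion. The cascade file ships with OpenCV; the motion threshold here is an arbitrary placeholder, not the paper's value.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Stock OpenCV cascade; adjust the path to your installation.
    cv::CascadeClassifier faces("haarcascade_frontalface_default.xml");
    cv::VideoCapture cam(0);
    cv::Mat frame, gray, prevGray;

    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        std::vector<cv::Rect> boxes;
        faces.detectMultiScale(gray, boxes, 1.1, 3, 0, cv::Size(60, 60));

        if (!prevGray.empty()) {
            for (const cv::Rect& box : boxes) {
                // Frame difference over the face region: a picture on
                // a wall produces (almost) no change between frames.
                cv::Mat diff;
                cv::absdiff(gray(box), prevGray(box), diff);
                double motion = cv::mean(diff)[0];
                if (motion > 4.0)  // placeholder empirical threshold
                    cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
            }
        }
        gray.copyTo(prevGray);

        cv::imshow("faces", frame);
        if (cv::waitKey(30) == 27) break;  // Esc to quit
    }
    return 0;
}
```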


Great example:

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=729538

It also mentions combining the system with GPS and a head-tracker.


Another example, a little too science-and-tech-heavy for me:

LAFTER: a real-time face and lips tracker with facial expression recognition

http://ac.els-cdn.com/S0031320399001132/1-s2.0-S0031320399001132-main.pdf?_tid=1f37f24e-b523-11e3-920e-00000aacb35f&acdnat=1395865000_d8cf4d16341b1580fefa5f5244391d6d


Midterm Presentation – Michael & Yu

Presentation

Concept

We will be using surveillance cameras to capture public mood over a wide area. By using face tracking, we can get a general sense of the mood of a crowd and the relative positions of the people in it.
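A minimal sketch of the idea, using the stock OpenCV Haar cascades (haarcascade_frontalface_default.xml and haarcascade_smile.xml) as a stand-in for a proper expression classifier; the scale factors and neighbor counts are guesses to tune:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Rough "mood score" for one frame: the fraction of detected faces
// that contain a smile. The faces vector also gives the relative
// positions of people in the crowd.
double moodScore(const cv::Mat& frame,
                 cv::CascadeClassifier& faceCascade,    // frontal-face cascade
                 cv::CascadeClassifier& smileCascade) { // smile cascade
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 4, 0, cv::Size(40, 40));
    if (faces.empty()) return 0.0;

    int smiling = 0;
    for (const cv::Rect& f : faces) {
        // Look for a smile only in the lower half of each face.
        cv::Rect mouth(f.x, f.y + f.height / 2, f.width, f.height / 2);
        std::vector<cv::Rect> smiles;
        smileCascade.detectMultiScale(gray(mouth), smiles, 1.4, 12);
        if (!smiles.empty()) ++smiling;
    }
    return (double)smiling / faces.size();  // 0 = glum crowd, 1 = all smiles
}
```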

Motivation

  1. Examine using surveillance as a form of quantification, rather than for security.
  2. Play with concepts of privacy and group-think.
  3. Try to capture “real” emotions over space.

Prior Art

Mappiness: Generates a map of people’s emotions by asking them how they are feeling as they move about. We feel that asking people how they are feeling creates a bias. Is it possible to get a “truer” emotional state?

Sense of Space: Uses biosensors to capture people’s moods as they move about. Using biosensors and a mobile device is “truer” but doesn’t scale – the sensors can only be connected to so many people. Are there ways to capture groups of people’s emotions more easily?

Our Prototype


Target Users

We believe this system can be used to target large groups of any kind, preferably indoors. For example, the ITP Spring Show.

By collecting the general moods of individuals at each exhibit, we can assess the general mood of people during the Spring Show, gather clues about which exhibits are the most enjoyable, and so on.
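The bookkeeping for this could be as simple as a running average per exhibit; a sketch with hypothetical names, assuming each camera is assigned to an exhibit and feeds in one crowd score per frame:

```cpp
#include <map>
#include <string>

// Hypothetical per-exhibit running average: each camera is assigned
// to an exhibit, and every frame's crowd score is folded in.
struct ExhibitMood { double sum = 0; long frames = 0; };
std::map<std::string, ExhibitMood> moods;

void logScore(const std::string& exhibit, double score) {
    moods[exhibit].sum += score;
    moods[exhibit].frames += 1;
}

double averageMood(const std::string& exhibit) {
    const ExhibitMood& m = moods.at(exhibit);  // throws if unknown exhibit
    return m.frames ? m.sum / m.frames : 0.0;
}
```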


We are also looking to collaborate with CASE, an architecture consulting firm in NYC, to leverage existing surveillance equipment to capture the moods of spaces. This could lead to insights into the layout of spaces for architecture and design.

Challenges

One challenge is that the current technology requires faces to be head-on or nearly head-on. As such, we need to position many cameras around a space to capture as many faces as possible.


Week 2: Hardware experiments

Hardware: GPS Logger Shield and Pulse Sensor


I experimented with the GPS shield and a pulse sensor. Before I knew to use a jumper wire to connect the shield's TX to the Arduino's RX pin, I had a hard time getting a GPS log. The pulse sensor seemed to work fine, although the pulse rate fluctuated a lot when placed on different body parts.

Another problem I encountered was writing to an SD card. After a discussion with Arlene, we found that my hardware setup and the code were alright. We assume there was something wrong with either my shield or the SD card, because with Arlene's shield and SD card it worked fine. Therefore, I was only able to print geolocation data and pulse rates to the serial monitor.
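For reference, a minimal sketch of that serial-monitor fallback, assuming the wiring above (GPS TX jumpered to the hardware RX pin, Pulse Sensor signal wire on A0) and the TinyGPS++ library for NMEA parsing; the baud rate is the usual 9600 for these modules:

```cpp
#include <TinyGPS++.h>

TinyGPSPlus gps;  // NMEA parser

void setup() {
  // The GPS TX is jumpered to the Arduino's hardware RX pin, so the
  // same Serial port receives NMEA sentences and prints to the monitor.
  Serial.begin(9600);  // must match the GPS module's baud rate
}

void loop() {
  // Feed every incoming NMEA byte to the parser.
  while (Serial.available()) gps.encode(Serial.read());

  if (gps.location.isUpdated()) {
    int pulse = analogRead(A0);  // raw Pulse Sensor value, 0-1023
    Serial.print(gps.location.lat(), 6);
    Serial.print(", ");
    Serial.print(gps.location.lng(), 6);
    Serial.print("  pulse: ");
    Serial.println(pulse);
  }
}
```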


Blog Post #2: Prior Art, Future Work

The topic I would like to research and explore throughout this semester is cognitive computing interfaces. I want to explore the EEG sensors that are currently available on the market. I will pair EEG data with GPS and other sensors such as light, sound, and pulse. There are three major challenges with this project. The first is the physical computing aspect: I need to get all the sensors to work together with a microcontroller and be able to log the data. The second is to visualize all of this data together in a way that is comprehensible and shows its relevance to the user's daily activities. The third is to package the sensors and other electronic components into a visually pleasing and comfortable wearable device.

The only two reliable readings out of all the EEG readings are Attention and Meditation. Working within the limitations of the currently available technology, I will rely only on the Attention reading for logging and visualization.

It will be interesting to use this device as a tool to boost online education performance. There are many free online courses available, but many people fail to follow through and complete the courses they have signed up for. This device could support a student working through an online course by alerting her when her attention level drops, allowing her to quickly refocus on the lesson and shortening the time needed to mentally process the video lesson.
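The alert logic itself could be quite small; a sketch independent of any particular headset SDK, where the smoothing factor, threshold, and window length are all starting values to tune:

```cpp
// Sketch of the refocusing alert: smooth the roughly once-per-second
// attention value (0-100 from the headset) and fire once it stays
// under a threshold for a sustained stretch.
class AttentionMonitor {
public:
    // Call once per attention reading; returns true when the user
    // should be nudged to refocus.
    bool update(int attention) {
        smoothed = 0.8 * smoothed + 0.2 * attention;   // exponential average
        lowSeconds = (smoothed < 40.0) ? lowSeconds + 1 : 0;
        return lowSeconds >= 10;  // ~10 s of sustained low attention
    }
private:
    double smoothed = 50.0;
    int lowSeconds = 0;
};
```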

As mentioned above, one of the main challenges of this project is visualizing all of this biometric, GPS, and environmental data together in a way that is useful, readable, and aesthetically pleasing. The wearables market is hugely popular and expanding rapidly; however, the accompanying mobile applications and data visualizations beg for major improvement. Take Fitbit, for instance: the numbers and line graphs of my daily steps give me a very superficial understanding of my daily activities. If more sensory data were gathered and presented in tandem with the step count, it might give me a better understanding of my day. The challenge with having more data, though, is how to show the relationships between the data streams, how they affect or complement each other, and most of all how the user can relate them to their daily life.

I’m excited about this project because it ties in with my industrial design background and my current interests in wearable devices and data visualization. I also hope to create an Android phone app that pairs with the EEG device to log and visualize the gathered data. As for the form of the device, I am picturing something similar to the Melon: a simple band that somehow communicates its EEG capability, rather than just a band that wraps around the head. Instead of plastic and rubber, the device could possibly be constructed with fabric or more flexible materials. This would provide form-fitting comfort as well as a unique look that sets it apart from other EEG devices on the market.

My intended users are adults from their early twenties to late thirties. They would be urban dwellers who most often travel by public transportation and on foot. The GPS component would log the locations that produce the highest levels of concentration. This location information, along with other data such as sound and light readings, will help users recognise their ideal study location, time, and environment. This understanding of the ideal studying environment, paired with the refocusing alert feature, could boost the user's overall online educational experience and performance.


Works Cited

“Activity Recognition for the Mind: Toward a Cognitive ‘Quantified Self.’” Web. 25 Mar. 2014. <http://www.m.cs.osakafu-u.ac.jp/publication_data/1362/mco2013100105.pdf>.

“NeuroPlace: Making Sense of a Place.” Web. 25 Mar. 2014. <http://dl.acm.org/citation.cfm?id=2459267>.

“Recognizing the Degree of Human Attention Using EEG Signals from Mobile Sensors.” Sensors. Web. 25 Mar. 2014. <http://www.mdpi.com/1424-8220/13/8/10273>.


#2


I have successfully powered the GPS shield and Arduino, without a lithium battery, using a solar panel. I am now exploring different variations of solar panels; I would ideally like to use flexible panels that work aesthetically with the wearable.


The MindFlex headbands have been successfully hacked. I am now keen to remove the hardware from the original headband and place it in my own minimalist headband, which I intend to make using the DiWire to construct a simple helmet of accurately bent wire.
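The hack exposes the NeuroSky board's 9600-baud serial stream; below is a minimal read-out sketch, assuming the common wiring of the board's "T" pin to the Arduino's RX pin and the open-source Arduino Brain library for parsing:

```cpp
#include <Brain.h>

Brain brain(Serial);  // parse the NeuroSky packet stream arriving on RX

void setup() {
  Serial.begin(9600);  // matches the MindFlex board's output rate
}

void loop() {
  if (brain.update()) {  // true once a full packet has arrived
    Serial.print("signal: ");
    Serial.print(brain.readSignalQuality());  // 0 = good electrode contact
    Serial.print("  attention: ");
    Serial.println(brain.readAttention());    // 0-100 eSense value
  }
}
```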


The solar panel is now successfully powering my GPS. Next is to link both the GPS and EEG sensors together, running on solar panels.