Class 6: Indoor Location (Gershon Dublon), Prep for Final

After discussing last week’s user persona exercises and your responses to them, we’ll spend a bit of time thinking about indoor location. Our guest speaker is Gershon Dublon, a PhD candidate at MIT who helped teach a class on Location-Based Technologies that I took a few years ago. Gershon does a lot with environmental and somatic sensor networks, location-based tracking, and the arts, as you can see from his site.

We will also talk about final presentations in two weeks, and then we’ll end class early so you have extra work time.  I’d like to see evidence that you’ve been through the whole prototyping cycle:

[Image: diagram of the prototyping cycle]

Specifically, please present:

– A DEMO (don’t stress if it doesn’t work perfectly), including QS data collection and spatial visualization
– ANALYSIS of a small user study (preferably 2 or more people outside of class)
– REFLECTION on the prototyping cycle and on how you’d prepare for a next iteration, if you were to continue your project.

Though it’s not required, I’m really interested in how (or if!) considering location has changed your perspective on quantified self. This will help me in planning the class for next year.

Rivet: User Persona Exercise Response

The user stories presented to us by Michael, Sabrina, and Yu confirmed our interpretation of the Rivet user.  The user profiles identified the student, the child with ADHD, and the elderly person with cognitive changes.  We will user-test the Rivet with students at ITP as well as with students who self-identify as having been diagnosed with ADHD.  We believe our user could be an online or classroom learner who wants to track their attention while listening to or watching a lecture.  Other users might want to increase their ability to focus for longer durations, and the Rivet can provide the information they need to track that focus.  The Rivet can also be helpful to users who want to track their attention in different environments while accomplishing various tasks.

Although the elderly person with cognitive changes is of interest as a user group, we concluded that it is beyond the scope of this project at the current time.  It would be too difficult to understand the etiology of the cognitive change, or how to measure improvement. With the commercially available sensors, we also do not feel we would be able to detect these changes.

Our long-term goal would be to test with children; however, we would like to test extensively with adults first. We think the Rivet has a broad range of users and applications, both academic and social.

Alcove: Data Visualization & Mapping

[Image: Processing visualization]

This week we explored our individual study patterns and manually entered the location element. We realised that we needed a better solution for obtaining GPS coordinates, since the shield not working indoors was a major usability problem.

Hence, our next step in GPS tracking is to either obtain location coordinates from the browser (laptop) or gather mobile GPS data into a server database.
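As a rough illustration of the second option, here is a minimal sketch of a server that a phone could post GPS fixes to. Flask, the /location route, and the JSON field names are assumptions for illustration, not our current setup.

```python
# Minimal sketch of a server that collects GPS fixes posted from a phone.
# Flask, the /location route, and the JSON field names are assumptions.
from datetime import datetime, timezone

from flask import Flask, request, jsonify

app = Flask(__name__)
points = []  # in-memory store; a real version would write to a database


@app.route("/location", methods=["POST"])
def add_location():
    fix = request.get_json()
    points.append({
        "lat": float(fix["lat"]),
        "lon": float(fix["lon"]),
        "received_at": datetime.now(timezone.utc).isoformat(),
    })
    return jsonify(count=len(points))


@app.route("/location", methods=["GET"])
def list_locations():
    # Processing (or any other client) could poll this to drive the visualisation.
    return jsonify(points)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```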

In terms of visualization, the current mockups were created by manually mapping the data onto the map image.

Blue represents a relaxed mind and green represents concentration. The level of detail shown depends on the zoom level of the map; a rough sketch of this colour and zoom logic follows the map views below.

1. Zoomed out: Concentration and meditation coloured over broad areas of the map.

[Image: zoomed-out map view]

2. Medium zoom: Points on the map showing which mindset dominated in each place.
3. Zoomed in: A detailed breakdown of location, time, topics worked on, and relative state of mind.

[Image: zoomed-in map view]
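As a rough sketch of the colour and zoom logic described above: the function below blends blue (relaxed) and green (concentrating) from attention/meditation values on the MindWave’s 0–100 scale, and snaps points to coarser grid cells at lower zoom levels. The sample coordinates and the grid-cell sizing are made up for illustration.

```python
# Sketch of the colour logic: blue for a relaxed mind, green for concentration.
# The 0-100 value ranges follow the MindWave eSense scale; the sample data
# and grid sizing are invented for illustration.

def state_color(meditation, attention):
    """Blend between blue (relaxed) and green (concentrating)."""
    total = meditation + attention
    t = attention / total if total else 0.5          # 0 = all relaxed, 1 = all focused
    return (0, int(255 * t), int(255 * (1 - t)))     # (r, g, b)

def bin_for_zoom(lat, lon, zoom):
    """Zoomed out -> coarse cells; zoomed in -> nearly exact points."""
    cell = 1.0 / (2 ** zoom)                         # cell size shrinks as zoom grows
    return (round(lat / cell) * cell, round(lon / cell) * cell)

samples = [
    {"lat": 40.6934, "lon": -73.9867, "meditation": 70, "attention": 30},
    {"lat": 40.6935, "lon": -73.9869, "meditation": 20, "attention": 80},
]
for s in samples:
    print(bin_for_zoom(s["lat"], s["lon"], zoom=10),
          state_color(s["meditation"], s["attention"]))
```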

Next Steps:
1. Mobile GPS: Gather mobile GPS data to the server. Connect the server data to the Processing visualisation.
2. Browser Data: Work with Fusion Maps to integrate maps and brain wave data.
3. Visualization: Generate visualisations using Processing/Fusion Maps.
4. Figure out when a mark is made on the map, or whether it should be a continuous stream; will time spent at one location determine whether a mark is made?
5. Get MindWave data wirelessly to the computer (a rough sketch of one approach follows this list).
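For item 5, one possible route is reading the ThinkGear serial stream that the MindWave’s USB dongle exposes, using pyserial. This is only a sketch: the port name and baud rate are guesses for our setup, and the byte layout follows NeuroSky’s published ThinkGear packet format (sync bytes, payload length, payload, checksum).

```python
# Sketch of reading attention/meditation values from the MindWave dongle's
# ThinkGear serial stream with pyserial. Port name and baud rate are guesses.
import serial

PORT = "/dev/tty.MindWave"   # assumption: replace with the dongle's actual port
BAUD = 115200                # assumption

def read_packets(ser):
    while True:
        # Wait for the two 0xAA sync bytes that start every packet.
        if ser.read(1) != b"\xaa" or ser.read(1) != b"\xaa":
            continue
        plength = ser.read(1)[0]
        if plength > 169:
            continue
        payload = ser.read(plength)
        checksum = ser.read(1)[0]
        if ((~sum(payload)) & 0xFF) != checksum:
            continue                      # corrupted packet, skip it
        i = 0
        values = {}
        while i < len(payload):
            code = payload[i]
            if code >= 0x80:              # extended codes carry their own length byte
                vlen = payload[i + 1]
                i += 2 + vlen
                continue
            value = payload[i + 1]
            if code == 0x04:
                values["attention"] = value
            elif code == 0x05:
                values["meditation"] = value
            i += 2
        if values:
            yield values

with serial.Serial(PORT, BAUD) as ser:
    for v in read_packets(ser):
        print(v)
```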

Diana/Billy/Salem: Spatial Visualization

[Image: spatial visualization of GPS locations]

 

This is a preliminary sketch of GPS locations.  It was created using my OpenPaths data and mapped with Mapbox.  We are still working through different ideas on how to show the variation in EEG readouts and how it relates to other environmental data.  We’ll also need to decide on a color scheme.
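As a rough sketch of how that pipeline could be automated, the snippet below converts an OpenPaths CSV export into a GeoJSON file that Mapbox can load. The column names and the placeholder "eeg" property are assumptions about the export format and about what we might eventually attach, not a description of the current sketch.

```python
# Sketch: convert an OpenPaths CSV export into GeoJSON for Mapbox.
# Column names ("lat", "lon", "date") are assumptions about the export format,
# and "eeg" stands in for whatever EEG/environmental value gets attached later.
import csv
import json

features = []
with open("openpaths_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        features.append({
            "type": "Feature",
            "geometry": {
                "type": "Point",
                "coordinates": [float(row["lon"]), float(row["lat"])],
            },
            "properties": {"date": row.get("date"), "eeg": row.get("eeg")},
        })

with open("points.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)
```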

Inpyo / Woonyung: Spatial Visualization

 

[Image: spatial visualization mockup]

We mocked up the spatial visualization by assuming what it will look like. Mostly we are going to map the color value based on the data we get from the facial tracker (in this case: webcam → OSC → MAX/MSP).  Each dot will be placed where the user takes an individual photo, and each will be a clickable dot that opens a small pop-up window showing the photograph.
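For reference, here is a minimal sketch of what receiving that same OSC stream in Python (instead of MAX/MSP) could look like, for example to log expression values next to photo locations. It uses the python-osc library; the /face address and port 8000 are assumptions, not our actual patch.

```python
# Sketch of logging the facial-tracker OSC stream (webcam -> OSC) in Python.
# The /face address and port 8000 are assumptions; MAX/MSP is the real receiver.
from pythonosc import dispatcher, osc_server

def on_face(address, *args):
    # args would carry whatever values the facial tracker sends
    print(address, args)

disp = dispatcher.Dispatcher()
disp.map("/face", on_face)

server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 8000), disp)
server.serve_forever()
```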

We are thinking of starting with a self-journaling system and growing it into a public recommendation site.

Data #5

[Image: GPS case design]

This week I have mainly focused on the design of my wearable. I have successfully gathered EEG data and passed it through Python into Rhino. The patches I have created are relatively exciting, but I am keen to link them directly to my GPS data.  I have added Python code to raise points that are closer to each other. Both examples vary in depth, and it is this fluctuation that most excites me; I want to test my design skills against the limitations of data.  The only question I have at this point is which section of EEG data to sculpt.
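As a rough sketch of the "raise points that are closer to each other" step (the real version runs on GPS/EEG points inside Rhino’s Python scripting, so the numbers here are placeholders): each point gets a height proportional to how many neighbours sit nearby.

```python
# Sketch: raise points that are closer together by giving each point a height
# based on how many neighbours fall nearby. Kernel width and sample points are
# placeholders; the real version runs on GPS/EEG points inside Rhino.
import math

def raised_heights(points, radius=10.0, scale=1.0):
    heights = []
    for i, (x1, y1) in enumerate(points):
        h = 0.0
        for j, (x2, y2) in enumerate(points):
            if i == j:
                continue
            d = math.hypot(x1 - x2, y1 - y2)
            h += math.exp(-(d / radius) ** 2)   # nearby points contribute more
        heights.append(h * scale)
    return heights

pts = [(0, 0), (1, 1), (2, 0), (50, 50)]
print(raised_heights(pts))   # the clustered points get larger heights
```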

[Image: plot 2]

 

[Image: EEG visualisation]

GPSculpt

I have visualised my GPS routes and placed them onto Google Maps. However, I am more fascinated by the physical visualisation, which is why I sculpted the walk into these objects. The choice of objects is somewhat random and could be filtered.

Class 5: Users, Communities, Co-design

After discussing our spatial visualizations in the first part of today’s class, we’ll spend the rest of class talking about users, co-creators, and everyone else outside our projects and departments. This will culminate in a user persona exercise.

Our guest critic from last week, Becky Hurwitz, will give a short talk about the history and theory of co-design, and how her class at the MIT Center for Civic Media has contributed to the field. You can follow along through the not-yet-live Civic Design Toolkit site. After Q&A with Becky, we’ll try a modified version of this user persona exercise. I’ve set up a shared Google Doc for any links and images that we may find during the exercise.

Many of us have experience working with users through methods like clinical trials or product testing, but co-design is, arguably, rarely applied outside of fields of social justice, social change, and humanitarian work. With that in mind, I think it will be an interesting exercise to apply concepts of co-design to our quantified self location projects. I myself have not applied co-design to any of my quantified self projects, so I’m excited to give it a try based on our findings from this class.

Other Resources: I’ve also found the IDEO Human Centered Design Toolkit to be quite handy. Since my experience with co-design was centered on Indonesia, I used the Solo Kota Kita (Firm Foundation) Social Design Field Guide quite a bit, and I think it can be translated to many other projects and regions. In translating user studies to the academic domain, Creswell’s Research Design is a great, short overview of how to develop academic, user-centered studies.

Su’s spatial visualization

 

I did a visualization experiment with my Basis data and Foursquare data. This screenshot shows three days of my heart rate at different locations.
[Image: heart rate mapped by location]

 

I do not think I got a valuable insight out of my visualization. For my project, I am not sure spatial is the way to go, as one can have so many emotions in one place.  My original assumption was that you would be able to see different emotions at different locations. That presumption did not hold: it turned out that it is not easy to “average” out emotions and generalize the personality of a place if you are doing various tasks there.  See the upper part of my map, which is my home; it shows a wide spectrum of hues. What this exercise verified was that you can have a diverse range of heart rate data in the same location (especially home and work).
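A small sketch of the kind of grouping behind this observation: rather than averaging heart rate per place, it keeps the spread per venue, which is what makes home look so mixed. The venue names, field names, and sample values below are made up for illustration, not my actual Basis/Foursquare data.

```python
# Sketch: group heart-rate samples by venue and look at the spread
# rather than a single average. Field names and sample data are invented.
from collections import defaultdict
from statistics import mean, pstdev

samples = [
    {"venue": "Home", "heart_rate": 62},
    {"venue": "Home", "heart_rate": 95},
    {"venue": "Home", "heart_rate": 71},
    {"venue": "ITP",  "heart_rate": 78},
    {"venue": "ITP",  "heart_rate": 80},
]

by_venue = defaultdict(list)
for s in samples:
    by_venue[s["venue"]].append(s["heart_rate"])

for venue, rates in by_venue.items():
    print(venue,
          "mean:", round(mean(rates), 1),
          "spread:", round(pstdev(rates), 1),
          "range:", (min(rates), max(rates)))
```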

What I am curious to see is other people’s data at the same location or time. I wonder if my product, Tiya, can mediate various data streams. For example, one could be motivated by seeing that someone else nearby is more active and positive. Seeing a group of people’s data would be really interesting for this experiment.

 

Zander Midterm – H@

H@ – locative emotive design.

Concept
I am building a solar-powered EEG and GPS wearable. H@ will allow me to emotively map my movement through NYC, which I will 3D print into a unique map of the city. H@ also allows me to redesign the urban environment based on emotional data gathered at each lat/lon point. I will 3D print my emotional interpretation of the city and its features, including lampposts, pavements, bridges, and buildings.

Motivation
∆ Quantify emotional relationship with environment
∆ Physically print the emotional feedback
∆ Challenge traditional concepts of designing and mapping.
∆ Attempt to redesign urban spaces based on exact emotional feedback.

Prior Mapping and EEG Design

Christian Nold – Bio Mapping – 2004
The Bio Mapping tool allows the wearer to record their Galvanic Skin Response (GSR), a simple indicator of emotional arousal, in conjunction with their geographical location. This can be used to plot a map that highlights points of high and low arousal. By sharing this data, we can construct maps that visualise where we as a community feel stressed and excited.

[Image: Christian Nold’s Bio Mapping]

Prototype

[Image: current prototype]

The current prototype is not as elegant as I would want the final version to be, but it has been useful for working out dimensions. I have scanned my head using the Xbox Kinect and can therefore 3D print an exactly fitting headband. The current solar-panelled prototype does work and has been tested in Washington Square Park.

[Images: plot 1, plot 2]

Challenges
The current challenge I am faced with is interlinking the EEG timestamps with the GPS timestamps, allowing me to plot exact designs at each position. I am also keen to really challenge the role of ‘designer’ with my procedure, and am looking forward to the challenge of designing with data.
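A minimal sketch of one way to interlink the two streams: pair each EEG sample with the GPS fix nearest in time and drop samples with no close fix. It assumes both streams are already lists of (unix_time, value) tuples sorted by time; the sample numbers are invented.

```python
# Sketch: pair each EEG sample with the GPS fix nearest in time.
# Assumes both streams are lists of (unix_time, value) tuples sorted by time.
from bisect import bisect_left

def nearest_fix(gps, t):
    times = [g[0] for g in gps]
    i = bisect_left(times, t)
    candidates = gps[max(0, i - 1):i + 1]
    return min(candidates, key=lambda g: abs(g[0] - t))

def merge(eeg, gps, max_gap=5.0):
    merged = []
    for t, value in eeg:
        g_time, (lat, lon) = nearest_fix(gps, t)
        if abs(g_time - t) <= max_gap:       # drop samples with no close fix
            merged.append((t, value, lat, lon))
    return merged

eeg = [(100.0, 55), (101.0, 60), (130.0, 40)]
gps = [(99.5, (40.7308, -73.9973)), (101.2, (40.7309, -73.9971))]
print(merge(eeg, gps))
```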
Documentation

 

 

Rivet: Hardware + Software

 


Hardware:

The basic setup we opted for uses the EEG sensor board contained in the MindFlex game device or, alternatively, the MindWave headset, which comes with Bluetooth connectivity built in. We decided to go with the iPhone as the data collection device, and we plan to make use of its onboard sensors, mainly GPS, as well as its light and sound sensors.

The reasons for opting to go with the iPhone are:

 

  1. We were not able to get the GPS shield to work reliably with the EEG sensor, but we were able to get the data from the iPhone.
  2. The MindWave headset comes with an accessible iOS SDK.
  3. Our vision of the project is to have the user receive feedback, both in the form of visuals and possible vibration, to aid them in understanding their attention patterns more.

 

 

Software:

We are in the process of getting the various required functionalities to work in a number of demo applications.  We will then put all the pieces together in the final prototype.

 

We have put together a basic mockup that we are following, and intend to improve as we go along:

[Image: app mockup]

 

 

In terms of progress, we were able to both get and map the user location through two demo applications that we made, shown below.

[Images: iOS Simulator screenshot and photo of the demo applications mapping the user location]