Category Archives: Setting

Week1 – Jump to Universality: A Meditation

 

Based on:

Chapter 6. The Jump to Universality
From: The Beginning of Infinity: Explanations That Transform the World – David Deutsch (2011)

Where do I begin…

Okay, let’s talk about abstraction. The first half of the Jumper chapter talks about the evolution of language and numerical abstraction. This is great because we can practically track the origins of “Abstraction” in and of itself. A conscious being looks at its origins, its cradle, its source, to find and understand novel things about the world to help him in life.

The problem is that there’s an infinite amount of information out there, and none of it is objectively useful to us. Well, our senses are limited anyway, which narrows down the amount of data we absorb from the environment. So the question becomes: what should he pay attention to? If the world around him can be infinitely broken up and abstracted into distinct compartments, what comes first?

This is the direction I thought the chapter was going. I’m going to continue in that direction and see how it merges back into the Jumper.
So I think the clue to the answer lies in the origins of the question. Why would he want to extract anything from his surroundings? What does the infinite have that he lacks? Well, therein lies the answer: limitation. His main reason is to overcome his limitations; that’s what gives them meaning. It can be as simple as a long stick he can use to reach the hard-to-get fruit, or to attack a predator from a safer distance. So in essence, it’s the limitations imposed on him that create function, that create meaning out of the infinite.

So this is actually what I thought the Jumper was about: the evolution of meaning through language. Even with our limited senses, we are bombarded by infinite amounts of data and (although sad) we can’t possibly parse through it all. But one can project meaning onto the world and see potential in things. So a cliff is not a rock formation; it’s a “Falling Off” place first, and only then a physical reality with properties. A curbside might be a “Step On” place for a tall guy like me, but a “Sitting On” place for a shorter person. Each holds that potential for one of us (and not the other) basically because we are limited in that way.

I would like to look at language the same way. This eventually writes off the idea of universality: yes, in theory we are able to create universality in different fields, but that doesn’t mean it’s in any way practical to do so. The history of math laid out in the Jumper is a good example of that: there are infinite ways to create a number system, but only a handful will actually be practical. We can’t possibly store and keep track of all the abstractions a universal language can bestow upon us. A common example is the Eskimo allegedly having 50 different words for snow. That doesn’t mean 50 ways of pointing out the same thing, but 50 different variations of snow that don’t even occur to those of us who live outside the Eskimo environment. Knowledge of the different variations of snow and ice is absolutely crucial for their survival, and having those compartments in their consciousness mirrors the necessity for those abstractions in their language. But how many words do you think they have for soil? Or vegetation? How many different words do they have for different shades of green?

The Eskimo language is a good example because its polysynthetic nature somewhat resembles the unparsed postfix/suffix structures you see in “Language Theory and Automata” examples (Chomsky/Greibach context-free grammars), which is maybe the closest we’ve gotten to the idea of language universality. Words aren’t simple containers, but chains that can be added to and added to until they build the meaning you want. You can think of Oriental languages as the polar opposite, with a limited number of words. Each has its own benefits and hindrances.
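
Just to make that chaining idea concrete, here is a toy sketch of the recursive rule in TypeScript, with invented English stand-ins for the stems and suffixes (not actual Inuit morphemes):

```typescript
// Toy sketch of words as chains rather than containers, modeled on the
// context-free rules
//   Word     -> Stem Suffixes
//   Suffixes -> Suffix Suffixes | (empty)
// The stems and suffixes below are invented stand-ins, not real morphemes.
const stems = ['snow'];
const suffixes = ['-falling', '-drifting', '-packed-hard', '-good-for-building'];

function generateWord(): string {
  // Start from a stem, then expand the recursive Suffixes rule a random number of times.
  let word = stems[Math.floor(Math.random() * stems.length)];
  while (Math.random() < 0.6) {
    word += suffixes[Math.floor(Math.random() * suffixes.length)];
  }
  return word;
}

// e.g. "snow-drifting-packed-hard": each added link refines the meaning.
```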

Why don’t we go a step further with the DNA language? Sure, it seems to fulfill that desire for universality: four letters, and you can make the world out of them. But the result might be the most impractical language ever created; we still haven’t figured it out! When we made binary computers, we created both the language and the entity designed to understand it bit by bit. One can read and understand binary, but we created abstractions on top of it to make it infinitely more practical. As for DNA, we’re not cells that can read it bit by bit, but we CAN create higher-level abstractions to parse what the resulting bundle of cells means to us.

I hope that makes sense.

P.S. I would also like to add that a computer can never miss a bit, or the whole program fails. And DNA/RNA can’t be misread, or it turns cancerous. I guess that’s also a trade-off: simple and universal means less margin for error. That’s also a good reason for abstraction. Think of EVERY language built on top of Assembly.

Blog link: http://blog.mhrahmani.com/osc/week1-jumper/

Time Slices

I wanted to try something that’s been on the back of my mind for some time. With AR and immersive media in general, I keep thinking about what makes this new medium special and what could be leveraged to show something that hasn’t been possible before. Immersive media has two things really going for it: depth (immersion/space) and time. So I started out making a story/environment that mixes time and space around a user.

By bringing video assets in on an inverted sphere (as a panorama or skybox) and cutting that background panorama up into slices, you can create an immersive space that plays with space and time. As a simple example, imagine recording a 24-hour 360 video from the center of the Washington Square Park fountain. Now take one-minute slices of the video and evenly distribute them as a skybox around the inner circle of Washington Square Park, and you have slices of the day as a spatial installation.
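
Here is a very rough sketch of that slicing idea, written in TypeScript with three.js rather than the Unity setup I’m actually using; the slice count, radius, and video URLs are all placeholders:

```typescript
// Rough three.js sketch of the time-slice panorama: an inward-facing sphere
// split into vertical wedges, each wedge textured with video from a different
// point in time. SLICES, RADIUS, and the video URLs are placeholders.
import * as THREE from 'three';

const SLICES = 24;   // e.g. one wedge per hour of a 24-hour recording
const RADIUS = 10;

function makeTimeSliceSphere(videoUrls: string[]): THREE.Group {
  const group = new THREE.Group();
  const wedgeAngle = (Math.PI * 2) / SLICES;

  for (let i = 0; i < SLICES; i++) {
    // Each wedge is a partial sphere covering 1/SLICES of the full panorama.
    const geometry = new THREE.SphereGeometry(
      RADIUS, 8, 32, i * wedgeAngle, wedgeAngle
    );

    // One <video> element per wedge, each holding a different time slice.
    const video = document.createElement('video');
    video.src = videoUrls[i % videoUrls.length];
    video.loop = true;
    video.muted = true;
    video.play();

    const material = new THREE.MeshBasicMaterial({
      map: new THREE.VideoTexture(video),
      side: THREE.BackSide, // render the inside of the sphere around the viewer
    });

    group.add(new THREE.Mesh(geometry, material));
  }
  return group;
}

// Usage: scene.add(makeTimeSliceSphere(['slice-00.mp4', 'slice-01.mp4' /* ... */]));
```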

That’s the idea that started this series of experiments. From there I realized that if you bring in an “occlusion material” from Unity’s assets, you can cut away the video in interesting ways. You can make a truly experimental space by bringing in videos from other places and perspectives (as a Hockney-esque collage). And finally, I looked into the Vimeo Unity SDK, which allows (Pro) members to stream live footage. So I have a lot of tools at my disposal, and I’m seeing this as just the beginning of exploring what’s possible with all of them.



(re)Creating 3d spaces

What are the advantages of working in 3D? What can we get out of it and what does it mean for human perception? Actually, a lot. The world that surrounds us is three-dimensional. That’s how we understand the world, how we traverse it. So, naturally having a 3D environment is the best way for us to create new stories and for other people to relate to them. But it’s not that easy…

[Read the rest in my blog…]

Week2_Settings

So after reading Cutting the VR, I kept thinking about the focal point of a photo and where the audience’s attention is drawn, especially in a panorama. In particular, I was playing with a Google Pixel with my friend earlier and we found a lot of interesting tricks when taking panoramas. Normally, one person can only be in one spot when a photo is taken. But a panorama allows a person to appear several times in one photo from just one shot! It’s exciting to think about how we can trick the audience’s attention and play around with space and time.

So, for this week’s assignment, I chose a photo that I took during the break with my friends. Before taking the panorama, we planned out the composition and even a storyline, which we didn’t realize we were doing at the time. As the camera rotated, my friends moved around according to the camera’s perspective. Looking at the result, it’s a bit tricky to tell that there are only a fixed number of people in the photo; it looks like there are twice as many. A panorama also distorts time and space in a way that tricks our minds at first glance, so we can hardly tell the starting and ending points. Well, we can sort of tell if the panorama isn’t stitched smoothly.

This is another panorama that I took earlier, playing with time and space using the Google Pixel. Taking a panorama with the Google Pixel is different from the Apple iPhone: the Pixel’s panorama is assembled from several images taken at different angles and later puzzled together into one.

Interestingly, my observation refers back to the reading: taking a panorama is similar to the idea of the Hero’s Journey (image below). By sliding the different layers, the dots on each layer don’t actually move, but the composition is different every time. This makes me think that by sliding different layers, there could be many versions of a story in one space and one time! It opens up possibilities for playing with time and space!

Click here to watch the gif


Week 2 – Panorama

For my 3D setting, I wanted to create a scene in my favorite place on earth: the Outer Banks of North Carolina.

It turns out that these picturesque vistas are quite popular with the 360 camera crowd, and Google Street View has tons of high-quality captures already uploaded to it. I really like the idea of playing with existing public assets, so I set out to use the Google Street View API to create my scene.

Using Dan’s code, I signed up for an API key, found the image I wanted to use and then proceeded to fail completely at getting it to work in time for class. It was particularly frustrating because I was sure I understood what I was supposed to do to get it to work:

  1. Copy and paste the new API key into the global variable “apiKey”.
  2. Make sure the database and collection global variables are the same as in your mLab account.
  3. Copy and paste the panoID from the Street View location you want to use into “pano” in the “panoramaOptions” in setUpStreetView() (see the sketch after this list).
  4. Adjust heading and pitch if desired.
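
I won’t reproduce Dan’s code here, but the panoID part boils down to something like this sketch of setUpStreetView(), written against the standard Maps JavaScript API; the panoID, heading, pitch, and container id below are placeholders, and the API key itself goes into the script tag that loads the Maps library:

```typescript
// Sketch of the relevant bit of setUpStreetView(), not Dan's actual code.
// Assumes the Maps JavaScript API has been loaded via
// <script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY"></script>
// and that the page has a <div id="street-view">.
function setUpStreetView(): void {
  const panoramaOptions: google.maps.StreetViewPanoramaOptions = {
    pano: 'YOUR_PANO_ID',            // step 3: the panoID of the capture you want
    pov: { heading: 90, pitch: 0 },  // step 4: adjust heading and pitch if desired
    zoom: 1,
  };

  new google.maps.StreetViewPanorama(
    document.getElementById('street-view') as HTMLElement,
    panoramaOptions
  );
}
```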

I found this view that I loved from the sound side of Jockey’s Ridge, and found a StackOverflow thread explaining how to find the panoID. I then proceeded to try every combination of the hexadecimal string in the URL and still couldn’t figure out how to make it work.

Instead I created a 3D scene using a panorama I took in Corolla in January, the last time I went down there:

corolla, jan 2018

Although I tried to fudge it into being a 360 scene using a sphere geometry, it never quite worked. Instead, I made peace with it and used a cylinder geometry, futzing with the dimensions until it looked as un-warped as possible. The code is on GitHub here.
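
For anyone curious, the cylinder trick is roughly this, assuming a three.js setup like mine; the image path and dimensions are placeholders, and the real code is in the GitHub repo linked above:

```typescript
// Three.js sketch of the cylinder approach: a flat panorama mapped onto the
// inside of an open-ended cylinder, viewed from a camera at the center.
import * as THREE from 'three';

const texture = new THREE.TextureLoader().load('corolla-pano.jpg'); // placeholder path

const geometry = new THREE.CylinderGeometry(
  10,    // radiusTop
  10,    // radiusBottom
  6,     // height -- the dimension to futz with until the image looks least warped
  64,    // radialSegments, enough to keep the curve smooth
  1,     // heightSegments
  true   // openEnded, so there are no caps above and below the viewer
);

const material = new THREE.MeshBasicMaterial({
  map: texture,
  side: THREE.BackSide, // view the panorama from inside the cylinder
});

const panoCylinder = new THREE.Mesh(geometry, material);
// Then: scene.add(panoCylinder), with the camera placed at the origin.
```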