This is pretty long, but I'm transferring it over from my internal record-keeping.

Experiments with electromagnetic recordings— 

PHASE ONE: Basic sound

Tackling both a local-data assignment and an initial question for my thesis: “what does data sound like?”

This was the first experiment in gaining a sonic understanding of wireless network technology, ‘data,’ and the transmission of data through existing communication lines (radio waves like WiFi and cellular 4G/5G, fiber optics, more??). Part of my interest in pursuing sound as form is to approach these topics more closely (physically? materially??): “tech determinism,” the “inevitability” concepts that come with “surveillance capitalism,” dichotomies between “public” and “private,” “audience” vs. “public,” and the flattening of an observation or digitization of reality to a 0 or a 1 (and its further compression / warping through transcoding). WHERE DOES THE CONVERSION HAPPEN? WHERE DOES THE NEGLIGENCE COME IN? WHERE DOES COMPLIANCE START?


1st: 6 notifications, just Go Mic | 2nd: 10, Go Mic w/ EM | 3rd: 5, direct EM | 4th: 1, direct EM

I started this exercise with the question, “what does data sound like?” or more specifically, “what do data transmissions sound like?” Though this falls under data sonification, I was less interested in the more frequent use of sonification, mapping data to musical parameters, and more interested in audification of data. As a more direct correlate of data transmission or data signals, I chose to focus on my phone’s notifications. Because I keep my phone silent most of the time, I first customized certain sounds for different types of notifications. After that, I turned on almost every app’s notifications, turned silent mode off, and recorded my phone for about 45 minutes with a Samson Go Mic. Coming back, I was a bit disappointed because I thought I’d get more notifications, but I suppose the quantity reflects Boyd and Crawford’s observation that active participation among users needs to be analyzed more critically (I am not very active on most of the apps I own). The sounds in this first round were quite interesting, however, and I attribute that mostly to the mic picking up the phone’s vibrations against the glass countertop.
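For reference, here’s a minimal sketch in Python of what audification can mean in code (as opposed to the mic recording I actually did above): treat the raw bytes of any file directly as audio samples. The file names are hypothetical.

```python
# Minimal audification sketch: interpret a file's raw bytes as 8-bit samples,
# recenter them, and write the result out as a 16-bit mono .wav.
import wave
import numpy as np

def audify(in_path: str, out_path: str, sample_rate: int = 22050) -> None:
    raw = np.fromfile(in_path, dtype=np.uint8)       # the file's bytes, 0..255
    samples = (raw.astype(np.int16) - 128) * 256     # recenter and scale to 16-bit range
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                            # 16-bit PCM
        w.setframerate(sample_rate)
        w.writeframes(samples.tobytes())

# audify("some_data_file.bin", "audified.wav")       # hypothetical file names
```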

Rather than somehow increasing the frequency of notifications (I realize this experiment collects both sound data and data on the regularity of iPhone notifications), I remembered I had brought my electromagnetic listening device with me to CO.

I didn’t have my recorder with me to plug the electromagnetic microphone/converter into directly, so I put the Samson mic inside a pair of headphones and turned up the volume of the device. Interestingly, the sounds aren’t that dissimilar; the only obvious difference is the hum in the EM recordings, because the device is picking up additional frequencies undetectable to the human ear. Since the device wasn’t directly connected to a recorder, this likely wasn’t the most accurate representation of the sounds it was actually picking up, so I knew I had to try it again once I returned to NY.

On the third try, I hooked the EM device (an Elektrosluch) directly into my audio interface. The differences in the audio recordings are much more noticeable. The vibration notifications in this trial were much fuller, capturing many higher frequencies in the mix and more “noise.” In “b” there are some really nuanced peaks, though I’m not sure where they came from or what was happening inside my phone. In “c,” the frequencies are much more robust and deeper, sounding almost like a saw wave one octave below the most regularly emitted frequency. In this trial, I realized I had forgotten to turn silent mode off, which I’m not sure would make a difference, but I decided to do a fourth round for consistency.

In the fourth trial, I intentionally moved the iPhone closer to the laptop because I was curious to see if there’d be interesting signal interactions between the two devices. Upon coming back to the recording, I was surprised to see so many variations in amplitude and frequency in the visual waveform, so I assumed I had received a lot of notifications, but I had only received one (at 6:05)! I’m still not quite sure why this happened, other than attributing it to unseen interactions between my phone and laptop (or could it be something more?). In part, my desire to explore these invisible emissions or streams is to become more in touch with data as constant, as tangible, and not as discrete somethings that can be dumped into buckets of behavior. Can it ever be private or public with direct consent? Myths of “they’re always listening” or the inevitability of data extraction aren’t as easily understood with the tools we defer to, which are predominantly visual. In hindsight, it probably would have been advantageous to record video of the process too, but the point here is that most of what we’re hearing isn’t translated to our visual interfaces at all. I would like to continue with this exercise and build out a topology of my local neighborhood through electromagnetic recordings. I wonder what types of infrastructure we’d “see” then.

PHASE TWO: Testing my theory of “electromagnetic infrastructure instead of invisible network structures”

Part of this second, ongoing phase is to thoroughly understand the physics of radio waves and, more plainly, how the Internet works. More of that research is documented above, but wireless and ever-faster connection is built into the world with very little physical evidence, whether it’s buried underground, trenched in the ocean, or carried through electromagnetic waves. For this, I attached my Elektrosluch to my TASCAM DR-05 and started walking around the city. An Elektrosluch (quoting a friend) is two antennas with an op amp (operational amplifier), so it can pick up on a near-infinite range of frequencies, but what you’re able to pick out is determined by filters, whether in the analog circuit design or digitally (SDR lets the computer filter or fine-tune it; there’s a small filtering sketch after the takeaways below). So, despite thinking that I could pick up infinite signals, I would probably lose my hearing if the Elektrosluch amplified every signal. In that sense, what I was hoping to hear on my walk didn’t happen, but here are some interesting collections:

ConEd things?? Need to figure out what this is>>

Low hum

The panning is fascinating 

 

Wires outside this building

For some reason the El was picking up a loud hum here, and the only thing I could attribute it to were these wires 

Like ConEd, but with a more noticeable chunk of other frequencies

 

Citi Bike purchase screen @ Tompkins & Willoughby

I thought more emitted light = higher frequencies, but maybe that’s not the case? Either way, moving the El to the screen picked up higher frequencies

Takeaways: mapping a neighborhood EM infrastructure will be more difficult (most likely infeasible) than I thought given the tools I currently have.
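To make the friend’s point about filters concrete, here’s a minimal digital filtering sketch, assuming a mono Elektrosluch recording loaded as a float numpy array at 44.1 kHz: the same recording “contains” very different things depending on which band you keep.

```python
# Minimal band-pass sketch: digitally keeping only one slice of frequencies,
# the way an SDR front end (or the Elektrosluch's own circuit) filters.
# Assumes `recording` is a mono float numpy array at 44.1 kHz.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x: np.ndarray, lo_hz: float, hi_hz: float, fs: float = 44100.0) -> np.ndarray:
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)            # zero-phase filtering keeps the timing intact

# hum_only = bandpass(recording, 50, 70)       # mains-hum territory
# chatter  = bandpass(recording, 1000, 8000)   # the higher "data" chatter
```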

 

PHASE THREE: Building a lo-fi ‘plugin’ or ‘effects pedal’ – an LFO for feelings

Going off this idea of data intervention or manipulation, I wanted to see how this would work with sound waves, since I am more acquainted with frequency modulation (modulating a carrier wave with another oscillator). So, what would it look like to warp different waves (as metaphors for packets of data) across physical space?
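For reference, the FM idea in its barest form, sketched in Python rather than on hardware (the carrier, modulator, and modulation index here are all arbitrary):

```python
# Frequency modulation at its simplest: a carrier sine whose phase (and therefore
# instantaneous frequency) is pushed around by a modulator oscillator.
import numpy as np

fs = 44100                                   # sample rate
t = np.arange(fs * 2) / fs                   # two seconds of time
carrier_hz, mod_hz, mod_index = 220.0, 110.0, 3.0

phase = 2 * np.pi * carrier_hz * t + mod_index * np.sin(2 * np.pi * mod_hz * t)
fm = np.sin(phase)                           # the FM'd signal, in [-1, 1]
```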

To test this version without building my own analog filter, I used an existing sampler/machine a friend had lent me, the Elektron Digitakt. While it isn’t a powerful FM synthesizer, it does have pretty good sampling functionality involving filters and LFOs, so I hooked the El up to the Digitakt, with the El feeding whatever sounds it picked up directly into the machine.
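Roughly what the LFO-and-filter moves below are doing, sketched in Python (assuming the sampled El audio is a mono float array; the LFO rate and cutoff range are arbitrary):

```python
# A slow sine LFO sweeping the cutoff of a one-pole lowpass over the sample `x`,
# which is the basic shape of an LFO-on-filter patch.
import numpy as np

def lfo_lowpass(x: np.ndarray, fs: float = 44100.0, lfo_hz: float = 0.5,
                lo_cut: float = 200.0, hi_cut: float = 4000.0) -> np.ndarray:
    n = np.arange(len(x))
    lfo = 0.5 * (1 + np.sin(2 * np.pi * lfo_hz * n / fs))   # 0..1 sine LFO
    cutoff = lo_cut + (hi_cut - lo_cut) * lfo               # swept cutoff in Hz
    alpha = 1 - np.exp(-2 * np.pi * cutoff / fs)            # per-sample smoothing coefficient
    y = np.zeros_like(x)
    acc = 0.0
    for i in range(len(x)):                                 # one-pole lowpass, sample by sample
        acc += alpha[i] * (x[i] - acc)
        y[i] = acc
    return y
```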

Video 

 

iPhone activity

Original Audio

After picking out a sample (just choosing start & end point), I played with pitch, filter, and LFO

Filter

Basic LFO

LFO + Pitch

And again

again

Again with decay

Last 

Laptop activity

Original audio

Filter

LFO

LFO more intense maybe a delay on it?

LFO w/ delay out of control ** (had to shut off the machine, what could this mean??)

LFO + filter

Live sequence of turning the knob on LFO  / pitch / filter 

Takeaways: 

There’s a lot more that I feel I can dig into with this playful experimentation. It was interesting that at one point, manipulating the delay on the sample caused the Digitakt to keep delaying and amplifying the delay to the point where the sound became obnoxiously loud and no knob could cut it off. How does this relate to instances of ‘virality’ or fetishization, or the lack of digital memorialization? What parallels are there between this and the proliferation of violence against Black bodies on the Internet? Could harm, or the inability to cap it or reframe the conversation around it, be demonstrated through this spiraling sound?
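The runaway delay is easy to reproduce in a sketch: in a feedback delay line, each echo is the previous echo scaled by the feedback amount, so at a gain of 1 or more the echoes never decay, they only grow (delay time and gains here are arbitrary).

```python
# Feedback delay: every output sample gets a scaled copy of the output from
# `delay_samples` ago added to it. feedback < 1 decays; feedback >= 1 blows up.
import numpy as np

def feedback_delay(x: np.ndarray, delay_samples: int, feedback: float) -> np.ndarray:
    y = np.copy(x).astype(float)
    for i in range(delay_samples, len(y)):
        y[i] += feedback * y[i - delay_samples]   # echo of the already-echoed signal
    return y

# feedback = 0.6  -> repeats fade out
# feedback = 1.05 -> each repeat is louder than the last, roughly what the
#                    Digitakt was doing until I shut the machine off
```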

With the iterations of the LFO (especially on the iPhone sample), what does reverse engineering them look like? How would this apply to a data double, or the profiles built from algorithms? What would it take to get it back to its original form? Is there such a thing as original form when life can never be a captured moment in time?

 

PHASE FOUR – indoor modulation environment

This isn’t built off phase three but was done in conjunction with phase three as a continuation of phase two, a different way of branching off.

Since the outdoor, local neighborhood EM mapping felt infeasible, I wanted to try it with an enclosed indoor environment, a simulation of some kind. Here, I mostly wanted to test the types of sonic interactions that occurred between devices placed in close proximity. Whether or not data signals change based on physical proximity is still up in the air (research lol), but might as well test it out. 

[didn’t take a picture oops]

First up – Alexa

  • Because I was asking Alexa to either play music or answer my questions, I was mostly picking up that sound and other EM signals were harder to distinguish.

2nd – Modem 

  • Listening to the modem was amazing, because it had a very distinct rhythm, like it was following Morse code

3rd – Router

  • Loud, maybe could have been from my hand movement with the microphone
  • Then I experimented with placing the router and the modem closer together (even though they were already close)

4th – Nintendo – not really (to me) that interesting, or something I want to experiment with further, since it was mostly similar to devices I’d already recorded

5th – iPhone / laptop / Nintendo

I guess I was interested in hearing what a conglomeration of devices would sound like – were there minute interactions taking place that I didn’t know about? 

Again, I’m not sure whether I can attribute this distinctly to one source or to the aggregation of sources, but one sound I found particularly interesting and beautiful was this low hum that seemed to resonate from the laptop (I had placed the phone on top of it).

Hum and the hum isolated w/ filtering – it seemed like lower frequencies were easier to filter out

6th – iPhone and modem

I think at this point you can sort of pick out which source is contributing which sound (the stuttering is the modem, the higher, attenuated frequencies are from the iPhone), but then again, is there anything new happening with the physical proximity of these devices? Also, it’s funny because people get into modular synthesis just to recreate these sounds using patches / filtering / etc., when these are ‘naturally’ occurring!

7th – iPhone to laptop ‘AirDrop’

Who knows if this is actually the sound of an AirDrop, but to my understanding it seems unique enough, and it was triggered by the instance of me airdropping a file. AirDrop uses Bluetooth to find other devices and then a peer-to-peer Wi-Fi connection to transfer the file – more learning to be done here!

Takeaway: it’s so hard to pinpoint how a sound is created – is playing a phone near a laptop actually changing the sounds in relation to one another, or is it just amplifying more sounds? If a cell phone airdrops to a laptop, is a unique sound created? While I’m not entirely sure where to take this experiment next or how to build off these findings, I do hope to return to them. I think I have to do some research into the mixing of sound waves – wave interference?
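As a starting point on interference: two tones simply add (superposition), and when they’re close in frequency the sum “beats” at the difference between them, which is one concrete thing physical proximity could produce. A tiny sketch with arbitrary frequencies:

```python
# Superposition / beating: two nearby tones add, and the mix pulses in loudness
# at the difference between their frequencies (here |443 - 440| = 3 Hz).
import numpy as np

fs = 44100
t = np.arange(fs * 2) / fs
a = np.sin(2 * np.pi * 440.0 * t)   # tone A at 440 Hz
b = np.sin(2 * np.pi * 443.0 * t)   # tone B at 443 Hz
mix = a + b                         # interference pattern: 3 Hz beats
```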

 

PHASE FIVE – sound walk in Manhattan

Since I didn’t want to give up on my idea of local EM topography, I thought I might try a different, maybe more connected neighborhood.

Washington Square Park – here, I was interested in picking up WiFi signals, since most parks have free WiFi and I imagined the park would be a place without many signals coming from other sources. Turns out, there was a noticeable hum coming from the perimeter of the park, so I guess a way to check this finding would be to look for any physical WiFi infrastructure near the park. When listening to the audio, the noise mostly occurs when I’m walking around the edge of the inner park circle (before the paths go out) and pointing the El outwards.

Video Audio

The Link (a LinkNYC kiosk) had a whole array of sounds, and the actual source was the more surprising part to me. I couldn’t pick up anything from the giant screen itself; most of the loud low noise came from the bottom of the screen, where it looks like there’s a vent(?). The smaller screen for phone calls, maps, etc. was obviously interesting to listen to, as again it was emitting higher frequencies.

Video Audio (THICK and juicy lead synth sound) and more audio

325 Hudson datacenter

Wow, I wish I could be more certain that the noise I was picking up was actually coming from this building, but other buildings didn’t emit the same level of noise? This is probably something I need to do a few times to establish reliability, but lo and behold, this building generated a noticeably thick low band of sound. Maybe it’s the energy being used??

Video Audio

 

PHASE SIX (can I call this six?) – Digital manipulation of audio samples

How close can I get to one waveform? What’s the wavelength I’m looking at?

Video

Another attenuation of a longer sample to achieve one instance – is this classification? How does one get to decide where it starts and where it ends?

Using Ableton’s in-house sampler to achieve frequency modulation! Who knew the sampler could do this? Here, I attenuated a ‘single waveform’ of the ConEd energy circles, and as you can see from the picture, it looks very similar to a square wave. I used the Pitch/Osc tab to modulate this frequency with different oscillation types at various pitches (frequencies).
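A sketch of what isolating a ‘single waveform’ amounts to computationally, assuming a short mono float recording: estimate the fundamental period (here via autocorrelation) and slice out one cycle; that slice is the single-cycle wave that then gets re-pitched and modulated. The variable names are hypothetical.

```python
# Rough single-cycle extraction: find the lag where the clip best matches itself
# (the fundamental period) and take one period from the start of the clip.
# Fine for a short clip; autocorrelation this way is slow on long recordings.
import numpy as np

def one_cycle(x: np.ndarray, fs: float = 44100.0,
              min_hz: float = 40.0, max_hz: float = 2000.0):
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation, lags >= 0
    lo, hi = int(fs / max_hz), int(fs / min_hz)         # allowed period range in samples
    period = lo + int(np.argmax(ac[lo:hi]))             # strongest self-similarity
    return x[:period], fs / period                      # one cycle and its frequency in Hz

# cycle, fundamental_hz = one_cycle(coned_recording)    # hypothetical variable name
```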

Video

 

Doing the same experiment but with a different ‘instance’ of the sound recording.

Video 

The physical deconstruction or shaping of the wave is most noticeable with the low volume frequency modulation, because it really sounds as if the wave is being stretched or refitted to allow for a new sound.

 

Takeaways: I did this experiment to see if this can be done physically, or with signals transmitted in real time. Or is there a way to almost attach this filter to the receiving end of the signal? Who am I keeping out? Who am I preventing from seeing or using the content? Why do I feel the need to experiment with radio??

 

PHASE SEVEN (in actuality, this was an ideation that occurred around phase five) – FM TRANSMITTER!!

I still need to develop this further, but I built an FM transmitter with a Raspberry Pi, which is exciting in and of itself. Is it usable? I’m not sure yet.

Thanks to Sarah Grant’s help and suggestion, I used a Raspberry Pi along with this script to transmit a .wav file to the pocket radio. When I consulted a friend who had some experience with radio, he strongly cautioned me about FCC regulations: I could be fined $75K for illegally transmitting on a radio frequency. Therefore, I built the tin foil box, though I think it’s mostly unnecessary because I didn’t attach an antenna to the RPi 4, and the computer can only emit across a distance of maybe 5 ft (it felt more like 2). Anyways, for AESTHETICS.

First, I transmitted the GitHub repo’s default test track, and then I exported a 16-bit .wav file of a portion of the laptop_iphone cake recording. I loved it because it didn’t sound like I was intentionally transmitting a different sound. It sounded like noise, like the experiment had in fact not worked. When I conducted it in front of another person, their first thought was that they were hearing radio static or noise, as if I were still searching for a radio station. Because of the box, they even thought the sounds were produced by bouncing off the tin foil walls, somehow triggered by the metal. The overtly distorted, innately manipulated nature of the transmitted signal and its effects is very interesting to me. While this needs more conceptual development, I think there’s effectiveness or potential in this form being undecipherable: we haven’t been socially conditioned to tune into sonic noise the way we have with visual noise (bombardment by social platforms, the proliferation of violated privacy, etc.).

 

FIRST LO-FI PROTOTYPE – working website using three.js, might switch to another AR platform

Let’s just call this volumetric sound for now, Audio AR – mostly calculating the amplitude and frequency range of different devices (just the laptop and phone for now) and seeing if this can be translated to a sonic AR. A rough sketch of the calculation follows the measurements below.

Area under curve~

Baseline: dB floor is -100 dB

Conversion: 1 dB = 1 cm (or 1 inch?)

100 Hz = 1 cm (or 1 inch)

 

Modem

Volume: 883 cubic cm

iphone

Volume: 7448.04 cm^3

Reg: 8.12 cm^3

Phone to laptop interaction (airdrop?)

Total volume = 1,229,596 or 12,283.9 cubic cm

Reg laptop: 1,302 cubic cm
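And the rough calculation sketch mentioned above, under the stated conversions (1 dB above the -100 dB floor = 1 cm, 100 Hz = 1 cm): take the magnitude spectrum of a device recording and measure the area under that curve. How the cm³ volumes come out of an area (maybe by extruding or revolving the curve) I’m leaving open, so this only covers the measurement step; the variable names are hypothetical.

```python
# Sketch of the measurement step behind the figures above: convert a recording's
# magnitude spectrum into "cm" per the stated rules and sum the area under it.
import numpy as np

def spectrum_area_cm2(x: np.ndarray, fs: float = 44100.0, db_floor: float = -100.0) -> float:
    spec = np.abs(np.fft.rfft(x)) / len(x)               # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)          # frequency of each bin in Hz
    db = 20 * np.log10(np.maximum(spec, 1e-12))          # magnitudes in dB
    height_cm = np.clip(db - db_floor, 0, None)          # 1 dB above the floor = 1 cm
    width_cm = freqs / 100.0                             # 100 Hz = 1 cm
    return float(np.sum(height_cm[:-1] * np.diff(width_cm)))   # area under the curve, cm^2

# area = spectrum_area_cm2(modem_recording)              # hypothetical variable name
```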