Transforming Media

Currently, I am deeply engaged in media transformations, both in my dissertation and in a project with Michael Johansson (Wandering Landscape Machine). My interest lies not in the isomorphic transfer of one media technology to another – an impossible feat – but rather in the creative and aesthetic potential of the disruptions, signal losses, and noise that occur during these transformations. Inspired by E.T.A. Hoffmann, who elevated this phenomenon to a poetic principle in the early 1800s, this project explores the transformation of images into music.

The tool we are developing will eventually integrate into the Wandering Landscape Machine artistic research template to enable the machine to produce (shadow) images as well as generate sounds from these images. Unlike my Camera to MIDI project, this tool will work with static images and aims to perform more efficiently while employing different probing approaches.

Technical Approach

A static image is “probed” at specific critical points to generate MIDI signals (and thus sounds) from these spots. Two primary methods are employed:

  1. Blob Detection: We locate local contrast points within the image. The x and y coordinates of these points (“blobs”) are then converted into corresponding MIDI signals (see the first sketch after this list). For the blob detection I use the BlobDetection library by v3ga.
  2. Phonograph: Given the specific visual style of the Wandering Landscape Machine (square images with circular content), the color values are read in a spiral pattern (similar to a needle on a vinyl record player/phonograph). These “probed” color values are then converted into MIDI control signals, while the steepness of the spiral curve remains adjustable (see the second sketch after this list).
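
A first, minimal sketch of the blob approach. The file name, the threshold, and the mapping of x to pitch and y to velocity are illustrative assumptions, not necessarily the tool’s actual mapping:

import blobDetection.*;
import themidibus.*;

BlobDetection detector;
PImage img;
MidiBus myBus;

void setup() {
    size(400, 400);
    img = loadImage("shadow.png");        // hypothetical file name
    myBus = new MidiBus(this, -1, 0);     // no MIDI input, first available output
    detector = new BlobDetection(img.width, img.height);
    detector.setThreshold(0.3f);          // contrast threshold for blob detection (0..1)
    img.loadPixels();
    detector.computeBlobs(img.pixels);
    for (int i = 0; i < detector.getBlobNb(); i++) {
        Blob b = detector.getBlob(i);
        if (b == null) continue;
        // b.x and b.y are normalized (0..1); map them onto the 7-bit MIDI range
        int pitch    = int(map(b.x, 0, 1, 0, 127));
        int velocity = int(map(b.y, 0, 1, 1, 127));
        myBus.sendNoteOn(1, pitch, velocity);
    }
}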
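
And a minimal sketch of the phonograph idea, assuming an Archimedean spiral; sending the brightness values to CC 1 and the steepness parameter are again illustrative choices, and all timing is omitted for brevity:

import themidibus.*;

PImage img;
MidiBus myBus;

void setup() {
    size(400, 400);
    img = loadImage("shadow.png");        // hypothetical file name
    img.resize(width, height);
    myBus = new MidiBus(this, -1, 0);     // no MIDI input, first available output
    float steepness = 4.0;                // radial growth per revolution, in pixels
    float cx = width / 2.0;
    float cy = height / 2.0;
    float maxR = width / 2.0;             // stay inside the circular content
    for (float theta = 0; ; theta += 0.05) {
        float r = steepness * theta / TWO_PI;  // Archimedean spiral: r grows linearly with theta
        if (r > maxR) break;
        color c = img.get(int(cx + r * cos(theta)), int(cy + r * sin(theta)));
        int value = int(map(brightness(c), 0, 255, 0, 127));  // scale to the 7-bit MIDI range
        myBus.sendControllerChange(1, 1, value);  // CC 1 chosen just for illustration
    }
}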

Example of a shadow image, produced by the Wandering Landscape Machine. Since all images feature this overall shape, it is crucial to limit the phonograph function to the content of the circle.

Since the tool will feature a small GUI and needs to run as a standalone *.exe, I chose Processing over Python. Processing’s/Java’s ControlP5 (cp5) library allows for the quick and efficient creation of a basic prototype.
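
For illustration, here is roughly the ControlP5 pattern behind such a prototype; the control names and ranges are invented, not the tool’s actual controls:

import controlP5.*;

ControlP5 cp5;

void setup() {
    size(640, 360);
    cp5 = new ControlP5(this);
    cp5.addSlider("noteLength")           // hypothetical control name
       .setPosition(20, 20)
       .setRange(50, 1000);               // delay between NoteOn and NoteOff, in ms
    cp5.addToggle("sendChords")           // hypothetical control name
       .setPosition(20, 60);
}

void draw() {
    background(0);
}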

Version 0.1: Screenshot of the early GUI running on Windows, made with cp5. (Left: placeholder for a shadow image, produced by the Wandering Landscape Machine)
Version 0.2: Updated version of the UI, reflecting several new features of the tool: an option to send chords instead of single notes, an option to enforce harmonic chords (currently only in the C scale), two sliders that limit the highest and lowest pitch sent to the synthesizer, and a small status window (bottom right) that gives feedback to the user.

Transforming Image Data to MIDI Signals

Transforming image data into MIDI signals is merely the first step. “Real” music is only produced when these MIDI signals are converted into actual sounds using a synthesizer. As we plan to use an external hardware synthesizer, the tool will detect all available MIDI devices on the system and prompt the user to select the desired MIDI interface.
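
A minimal sketch of this device-selection step, using The MidiBus’s static helpers; the hard-coded index stands in for the user’s choice:

import themidibus.*;

MidiBus myBus;

void setup() {
    MidiBus.list();  // prints all available MIDI inputs and outputs to the console
    String[] outs = MidiBus.availableOutputs();
    for (int i = 0; i < outs.length; i++) {
        println(i + ": " + outs[i]);
    }
    int chosen = 0;  // placeholder: in the tool, the user picks this via the GUI
    myBus = new MidiBus(this, -1, chosen);  // no MIDI input, selected output device
}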

Observations and Learnings

MIDI Panic Message: Sending a MIDI panic message (CC 120 or CC 123, which immediately silence all notes on all channels) via the Processing library The MidiBus isn’t straightforward. I created a workaround by looping a NoteOff message over all 128 possible notes. Suggestions for a better solution are very welcome.

// hacky "MIDI panic" command (sends a NoteOff to every pitch on channel 1)
for (int j = 0; j < 128; j++) {
    myBus.sendNoteOff(1, j, 127); // send a MIDI NoteOff for pitch j
    println("sending MIDI panicOff to: " + j);
}

👉 Update to this problem: I found a solution for sending MIDI controller changes (and therefore also MIDI panic messages) via The MidiBus. The trick is to set the last int to 0:

// send a CC 120 ("All Sound Off")
myBus.sendControllerChange(1, 120, 0);

// send a CC 123 ("All Notes Off")
myBus.sendControllerChange(1, 123, 0);

Although CC 120 seems to be the ultima ratio that should mute all sounds (regardless of release time and sustain), I observed in around 5% of all test cases that some sounds kept playing on the synthesizer even after sending CC 120. To ensure that all sounds are reliably cut off when STOP is triggered, I use a combination of my “hacky” for-loop, which sends NoteOffs for all possible MIDI pitch values, and a CC 120 sent in parallel, as sketched below.1
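
Wrapped up in a single hypothetical helper, this combination could look like this:

// hypothetical stopAll() helper combining both approaches
void stopAll() {
    myBus.sendControllerChange(1, 120, 0);  // CC 120 "All Sound Off"
    myBus.sendControllerChange(1, 123, 0);  // CC 123 "All Notes Off"
    for (int pitch = 0; pitch <= 127; pitch++) {
        myBus.sendNoteOff(1, pitch, 0);     // belt and braces: release every pitch
    }
}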

Compatibility Issues: The MidiBus has compatibility issues with the Arturia MicroFreak when connected via USB as a MIDI interface. While MidiBus.list() recognizes the device, it seemingly ignores all MIDI commands sent to it. That’s why I am currently using the Steinberg UR22 mkII audio/MIDI interface to supply MIDI signals to the MicroFreak via traditional DIN cables.

Windows vs. macOS: The MidiBus doesn’t recognize my Swissonic USB-MIDI 1×1 interface on Windows, but it works fine on macOS. This seems to be an issue with the device itself, since there are reports online that it is quite picky about the mainboard’s chipset. (There may also be a driver issue – some people suggest that the driver for the Roland UM-1G can also be used with the Swissonic USB-MIDI 1×1. I need to check this out in the future.)

Threading and Synchronization: Implementing threading in Processing is fairly straightforward via thread("nameOfFunction"), but it quickly desynchronizes the timing between functions, which is devastating in a time-critical environment such as music generation. That’s why the script currently runs on a single thread.

NoteOn and NoteOff Commands: To ensure that the synthesizer produces meaningful sound, a delay is necessary between each NoteOn and NoteOff command; otherwise, each note is muted immediately after being triggered. Although most MidiBus code examples use Processing’s delay() function, this approach is unsuitable because it halts the entire sketch, including the user interface. To avoid this, I developed a function using millis() to adjust the delay between NoteOn and NoteOff via a slider.
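
A minimal sketch of this millis() approach; the variable names and the mouse-based trigger are illustrative, not the tool’s actual code:

import themidibus.*;

MidiBus myBus;
int noteLength = 200;   // ms between NoteOn and NoteOff; set via a slider in the tool
int pendingNote = -1;   // currently sounding pitch, -1 if none
int noteOffAt = 0;      // time (in millis) at which the pending note must be released

void setup() {
    size(200, 200);
    myBus = new MidiBus(this, -1, 0);  // no MIDI input, first available output
}

void draw() {
    // release the sounding note once its time has elapsed, without blocking the sketch
    if (pendingNote >= 0 && millis() >= noteOffAt) {
        myBus.sendNoteOff(1, pendingNote, 0);
        pendingNote = -1;
    }
}

void mousePressed() {
    // example trigger: derive a pitch from the mouse position
    if (pendingNote < 0) {
        pendingNote = int(map(mouseX, 0, width, 36, 96));
        myBus.sendNoteOn(1, pendingNote, 127);
        noteOffAt = millis() + noteLength;
    }
}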

Conclusion

The tool is open source and still very much (⚠) a work in progress. The code is available on GitHub.


Footnotes

  1. A very nice list of all available MIDI command messages, including some explanations, can be found here. ↩︎