Patched Fragments of Queen Crystalette



1 – Archegonia and the Antherozoids
2 – A Miracle
3 – Disintegration of the Obsidian Hypogeum

A composition by way of audio and video signal interaction, through real-time feedback manipulation and CV processing.
To Winsor McCay (c. 1866–71 – July 26, 1934)
schulzlibrary.files.wordpress.com/2010/11/p3kiss.jpg

WARNING: This video may potentially trigger seizures for people with photosensitive epilepsy.
Viewer discretion is advised.

https://alexjanuary.bandcamp.com

Similarly to Chladni plates (youtube.com/watch?v=tFAcYruShow), my work with video feedback is based on resonance patterns produced by self-oscillation, as described by Douglas R. Hofstadter in his book “I Am a Strange Loop”
(publicism.info/philosophy/strange/7.html) and as investigated since in many different fields
(neurosciencenews.com/consciousness-80s-video-10386/).
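The self-oscillation at the heart of this process can be caricatured in a few lines of code. The sketch below is purely illustrative and assumes nothing about the actual camera-screen setup: a "frame" is re-captured with a small spatial offset and a gain slightly above unity, so a faint noise seed grows into a stable, saturation-bounded pattern, much as optical feedback does.

```python
import numpy as np

def feedback_step(frame, gain=1.02, shift=1):
    """One pass of a simulated camera-to-screen loop (illustrative only):
    the 'camera' re-captures a slightly shifted, amplified copy of what
    the 'screen' last displayed."""
    # the spatial offset stands in for camera misalignment
    fed_back = np.roll(frame, shift, axis=0)
    # gain > 1 lets small perturbations self-oscillate; clipping plays
    # the role of the display's brightness saturation
    return np.clip(fed_back * gain, 0.0, 1.0)

def run_loop(steps=200, size=64, seed=0):
    rng = np.random.default_rng(seed)
    frame = rng.random((size, size)) * 0.01  # faint noise seed
    for _ in range(steps):
        frame = feedback_step(frame)
    return frame

pattern = run_loop()
print(pattern.shape, float(pattern.max()))
```

With the gain just above 1.0, the initially invisible noise is amplified on every pass until the clipping limit holds it in check, which is the toy analogue of a feedback loop settling into a resonant pattern rather than blowing up.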
Additionally, in my work, image generation is influenced by sound, which is in turn influenced by these images.
My process is as follows: a camera is pointed at a screen to which it is connected through a video mixer.
The video mixer (an Edirol V4) receives cc messages for variation of luminance, chrominance, keying and symmetric shaping from a modular synthesizer’s oscillators, ring modulator and envelopes CV outputs, through a Kenton CV-to-MIDI adaptor.
The variations produced in this way influence the patterns that are generated through the video loop.
The video is recorded from the video mixer to an Avermedia capture box that sports a component video output.
The recorded patterns are played back from the capture box, and the Y cable of its component output is fed back into the mixer’s composite input for a new session of video feedback. There, pattern production results from a mix of pure video feedback and the “reaction” to these previous patterns appearing on the screen, albeit “distorted” by the component-to-composite conversion
(I sometimes also employ a second video mixer, a Videonics MX-1, before the final output, for additional image shaping through mirroring).
Meanwhile, the Pb and Pr cables are sent to the modular synth to modulate, through a CV processor, the oscillators’ FM inputs, the amplitude of the ring modulator, the time response of the envelopes…
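The round trip from image to voltage to MIDI can be sketched numerically. The functions below are hypothetical stand-ins, not the behavior of the actual Kenton adaptor or CV processor: a normalized luminance level is mapped to a control voltage (the 0–5 V range is an assumption), and that voltage is then quantized to a 7-bit MIDI CC value such as the mixer might receive.

```python
def luminance_to_cv(y, v_max=5.0):
    """Map a normalized luminance value (0..1) to a control voltage,
    as a stand-in for feeding a video signal into a CV processor
    (the 0-5 V range is an assumption, not the author's spec)."""
    return max(0.0, min(1.0, y)) * v_max

def cv_to_midi_cc(volts, v_max=5.0):
    """Quantize a CV level to a 7-bit MIDI CC value (0-127), roughly
    what a CV-to-MIDI adaptor does before the video mixer receives
    the control message."""
    return round(max(0.0, min(v_max, volts)) / v_max * 127)

cc = cv_to_midi_cc(luminance_to_cv(0.5))
print(cc)  # 64 for mid-grey
```

The point of the sketch is only the coupling: because the CC value the mixer receives is itself derived from the image the mixer produced, sound and image modulate each other in a closed loop.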
For each “take”, video and sound are recorded simultaneously on the Avermedia boxes (I employ two boxes, one for signal relay and one at the final output, though both record all inputted signals).
Material gathered from these boxes is then edited on Avid First.
Although this is by no means a “live” process, image and sound are thus produced each time in “real time”, without the use of a computer program.

Alex January – 01/01/2021

