by Saviana Stanescu, directed by Jeremy Goren
Ice Factory Festival, NEW OHIO THEATER, NYC, 2023 (video of full performance)
International Studio Theater Festival, TEATRUL ALEXANDRU DAVILA, Piteşti, Romania, 2024
TEATRELLI, Bucharest, Romania, 2024
A live camera image of the performer playing AL (Tim Craig) is fragmented into a cloud of 20,000 particles, projected into a virtual 3D space displayed on AL’s monitor. I manipulate this particle cloud interactively during the performance, causing the fragments to flow together into a recognizable image, dissociate into noise, and warp into unlimited patterns and forms. I use this image-world to create a progression through the play, from a highly abstract approximation of a human face through multiple permutations of increasingly recognizable “humanity.” Performing the image manipulations alongside Tim, I can respond to the performance moment by moment, creating a counterpoint between Tim’s vocal and facial expressions and the ever-shifting, re-constituting image of the character.

The performance is informed by my background as a composer and instrumentalist: I play the image in real time, responding, riffing, and improvising in a duet with the actor. Additionally, my software accesses a database of AI textual analysis of the script created by data scientist Niki Athanasiadou, which allows the images on AL’s screen to be influenced by an AI language model’s “understanding” of the text itself, represented as a matrix of shifting data points that influence the scale, color, position, and rotation of objects in the virtual space of AL’s monitor.
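As a minimal sketch of the underlying idea (not the production software), a live camera frame can be sampled into a fixed set of particles whose layout blends between the recognizable image and noise. This assumes OpenCV and NumPy; the particle count is taken from the description above, while the coherence parameter and function names are illustrative.

# Sketch: sample a live camera frame into a 20,000-particle cloud and blend
# its layout between the source image (coherence=1) and noise (coherence=0).
# Assumes OpenCV and NumPy; parameter and function names are illustrative.
import cv2
import numpy as np

NUM_PARTICLES = 20_000

def frame_to_particles(frame, rng):
    """Sample NUM_PARTICLES pixels from a camera frame as positions and colors."""
    h, w = frame.shape[:2]
    xs = rng.integers(0, w, NUM_PARTICLES)
    ys = rng.integers(0, h, NUM_PARTICLES)
    colors = frame[ys, xs] / 255.0                   # per-particle color (BGR from OpenCV)
    positions = np.stack([xs / w, ys / h], axis=1)   # normalized image-plane positions
    return positions, colors

def scatter(positions, coherence, rng):
    """Blend between the image layout and random noise."""
    noise = rng.random(positions.shape)
    return coherence * positions + (1.0 - coherence) * noise

rng = np.random.default_rng()
cap = cv2.VideoCapture(0)                            # live camera feed
ok, frame = cap.read()
if ok:
    pos, col = frame_to_particles(frame, rng)
    displayed = scatter(pos, coherence=0.5, rng=rng) # halfway between face and noise
cap.release()

In the same spirit, external values such as the AI analysis matrix could be treated as additional modulation inputs mapped onto scale, color, position, and rotation.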
Samples from video projection design for Grenfell Theater’s Spring 2020 production of Macbeth; Corner Brook, Newfoundland.
Video projection design driven by the performers’ brain waves, captured live in performance using wireless EEG
Created by Transforma Theater.
The high tech show was vividly supported by the artistry of … John J.A. Jannone’s video design.
Ana-Maria Bandean, Scena.ro
Nominee: New York Innovative Theater Award for Outstanding Innovative Design
I enjoyed the notion that the performers would be linked to the design; that their brains would be doing all manner of complex work that they themselves were not consciously aware of, but that was nonetheless radiating from them, and -- via the technology -- suffusing the space, re-appearing as echoes and ripples in the imagery.
Rather than designing for each scene, I created a total visual approach to the piece: a piece of software that could manipulate a “particle system” of 10,000 circular particles in a virtual 3D space and record various manipulations as “cues.” The particles could be arranged into shapes, laid out in grids to become the “halftone” images of the role models, connected by lines, altered in color and transparency, and, importantly, displaced and altered by live brain waves. These same 10,000 particles continuously re-arranged themselves into the various images and patterns seen on stage.
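A rough illustration of the cue idea follows; it is my assumption of a simple form such a system could take, not the actual code. One persistent set of 10,000 particles interpolates toward stored target arrangements, such as a halftone grid built from an image. Apart from the particle count, all names and numbers are illustrative.

# Sketch: one persistent particle set that interpolates toward stored "cues,"
# e.g. a halftone grid derived from an image. NumPy only; names illustrative.
import numpy as np

NUM_PARTICLES = 10_000
positions = np.random.random((NUM_PARTICLES, 3))    # current particle positions in 3D

def halftone_cue(gray_image):
    """Lay particles on a 100 x 100 grid; pixel brightness sets the depth coordinate."""
    side = int(np.sqrt(NUM_PARTICLES))
    ys, xs = np.mgrid[0:side, 0:side] / side
    small = gray_image[:side, :side] / 255.0         # assumes the image is at least 100x100
    return np.stack([xs.ravel(), ys.ravel(), small.ravel()], axis=1)

def step_toward(current, target, amount=0.05):
    """Each frame, move a fraction of the way toward the active cue's layout."""
    return current + amount * (target - current)

cues = {"halftone": halftone_cue(np.random.randint(0, 256, (120, 120)))}
positions = step_toward(positions, cues["halftone"])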
The knowledge that live brainwaves would be transmitted throughout the space, and that I could pick up these energies and use them in the design, steered the whole design process. The liveness of this data kept me from creating locked-down looks and cues, keeping the process open so that these brain-energies could influence, or determine, the image-world at any moment. So even when the brain activity was not used in a literal or linear way, its existence as a part of the piece was what brought about the whole design modality.
Every image in the show was rendered by the computer in the moment it was being seen; this “liveness” created an opportunity for any look to be modulated by the performers’ brain waves in real time. The EEG headsets transmitted the brainwaves over the theater’s WiFi; my software picked up this data and could insert it into any cue, altering the position, size, color, and arrangement of the particles based on brain activity. Sometimes these waves were rendered as literal “graphs,” showing the various brain regions and their energy spread across the backdrop like a chart; other times the effects were less literal, with the performers’ brains producing the undulations and shifts of the imagery.
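One way such live modulation could be wired up is sketched below. It assumes the band powers arrive as OSC messages over the network (the production’s actual transport, beyond WiFi, is not specified here) and uses the python-osc library; the address names and the ripple math are illustrative.

# Sketch: receive live EEG band powers as OSC messages and use the latest
# value to displace particle positions. Assumes python-osc; names illustrative.
import numpy as np
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

latest = {"alpha": 0.0, "beta": 0.0, "theta": 0.0}   # most recent power per band

def on_eeg(address, *values):
    band = address.rsplit("/", 1)[-1]                # e.g. "/eeg/alpha" -> "alpha"
    latest[band] = float(values[0])

def modulate(positions, band="alpha", depth=0.1):
    """Displace each particle with a sine ripple scaled by the live band power."""
    ripple = np.sin(positions[:, 0] * 20.0) * latest.get(band, 0.0) * depth
    out = positions.copy()
    out[:, 1] += ripple                              # undulate the vertical coordinates
    return out

dispatcher = Dispatcher()
for band in ("alpha", "beta", "theta"):
    dispatcher.map(f"/eeg/{band}", on_eeg)
server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
# server.serve_forever()                             # blocking; run in its own thread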
Photographs by Kathryn Butler
The Female Role Model Project
3 Legged Dog, NYC; November/December 2018
Edinburgh Fringe Festival, July/August 2019
Created by Tjaša Ferme
Developed with and Directed by Ana Margineanu
Devisers/Performers: Meggan Dodd, Tjaša Ferme, Gina Pemberton, Yiqing Zhao
Sound design and Composition: Justin Mathews
Video Design: John J.A. Jannone
Video Design Assistant: Alexandria Grajales
Lighting Design: Ayumu Poe Saegusa (CMsl)
Movement: Joya Powell
Cognitive Neuroscientist: Natalie Kacinik
Live Neuroscientific Interpretation by Phoebe Chen
On-stage Techies: Stephen Bryson and Maryam Choudhary
Stage Manager: Mariah Plante
Technical Director: Becca Key
Managing Director: Jacob Sebag
Director of Science: Natalie Kacinik
Artistic Director: Tjaša Ferme
Company Manager: Jeremy Goren
Tech Sponsors: Brainbit and Emotiv