Performative Media (Weekly Learning)

WEEK 1 – WEEK 14
LI YUHAN (0379857)

Week 1

This week marked my first introduction to the field of Performative Design. The instructor began by clarifying the core concept and used a series of video examples to let us experience its appeal firsthand. I was particularly struck by the music video for "Love." The use of robotic arms and mirrors created a visual poem full of dynamics and reflection, significantly enhancing the narrative tension and audience immersion. This gave me a preliminary understanding of the magic of performative design as an intermedial strategy. Subsequently, the teacher facilitated a group discussion where we shared a story, film, artwork, game, or performance that had inspired us. Within our group, a shooting game sparked the most conversation due to its unique narrative mechanics. At the end of the class, the teacher briefly outlined the main tasks for the semester.

This session fundamentally reshaped my understanding of the relationship between "performance" and "design." In traditional views, "performance" is often confined to the bodily arts on stage. In contrast, Performative Design expands it into a broader practice of "orchestrating experiences." It is not limited to the human performer but treats technology, space, media, and even machinery as equal "performers." The case study of "Love" exemplifies this—the precision of the robotic arms and the illusion of the mirrors were not mere backdrops; they were the protagonists of the narrative. This shift in perspective is revolutionary. During the group discussion, as we analyzed that shooting game, I began attempting to deconstruct it through this new lens: Could the UI interactions, environmental sound effects, and camera movements also be considered a form of "performance"? How do they collectively build tension and a sense of immersion? This class planted a seed in my mind: design is not just about creating objects, but about choreographing a comprehensive sensory experience that involves both human and non-human elements.

Week 2

This week built upon the concepts of performing media and the input-process-output interaction model. The course focused on two key areas: Creative Coding (using programming for artistic and interactive expression) and Generative Art (creating artwork through rules and algorithms rather than direct drawing). The teacher emphasized the core logic of balancing "rules and randomness"—where complex visual outcomes emerge through iterative processes, forming a collaborative human-computer creation.
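
The class worked in TouchDesigner, but the "rules and randomness" idea can be sketched in a few lines of plain Python (all names here are my own illustration, not anything from the course patch): a fixed rule is applied repeatedly, a seeded random choice supplies the variation, and the same seed reproduces the same "artwork".

```python
import random

def random_walk(steps, seed=None, step_size=1.0):
    """Generative rule: repeatedly apply one simple rule (take a step)
    with a random choice (the direction). Same seed -> same output."""
    rng = random.Random(seed)  # seeded randomness makes the result reproducible
    x, y = 0.0, 0.0
    points = [(x, y)]
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])  # the rule's random input
        x, y = x + dx * step_size, y + dy * step_size
        points.append((x, y))
    return points

path = random_walk(100, seed=42)
# Iterating the same rule with the same seed always yields the same path.
assert path == random_walk(100, seed=42)
```

Changing the seed (or removing it) gives a different but equally "rule-abiding" result, which is exactly the human–computer collaboration the lecture described.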

During the hands-on session, we learned TouchDesigner and, through guided exploration, I completed my first visual animation. Moving from theoretical understanding to practically implementing generative art allowed me to truly grasp the concept of "rules-driven creation."


Week 3

This week's course provided me with a more concrete understanding of the technical foundations of generative art. At its core, generative art creates visual outcomes through predefined rules, algorithms, and systematic processes. The introduction of randomness and the iteration of rules form the key to dynamic change—a principle that became particularly evident when I attempted to create fractal or particle effects.
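
The "iteration of rules" principle behind fractals can be made concrete with a short sketch (my own pure-Python illustration, not a course exercise): the Koch-curve rule replaces every line segment with four smaller ones, so one iteration step multiplies the detail fourfold.

```python
import math

def koch_segments(p1, p2, depth):
    """One rule, applied recursively: each segment becomes four,
    so the segment count grows as 4**depth."""
    if depth == 0:
        return [(p1, p2)]
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
    a = (x1 + dx, y1 + dy)            # one third along the segment
    b = (x1 + 2 * dx, y1 + 2 * dy)    # two thirds along the segment
    peak = (a[0] + dx * 0.5 - dy * math.sqrt(3) / 2,
            a[1] + dx * math.sqrt(3) / 2 + dy * 0.5)  # middle third rotated 60 degrees
    out = []
    for q1, q2 in ((p1, a), (a, peak), (peak, b), (b, p2)):
        out.extend(koch_segments(q1, q2, depth - 1))
    return out

# 1 segment -> 4 -> 16 -> 64 as the rule is iterated.
assert len(koch_segments((0, 0), (1, 0), 3)) == 64
```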

Regarding the two primary operator families, I gained a deeper understanding of the functional and visual distinctions between TOPs and CHOPs. TOPs focus on image-level processing, such as generating noise textures or compositing footage, and are easily recognizable by their purple nodes. CHOPs, shown in green, are primarily used for processing data and signals, like generating oscillating waveforms, and their operational logic aligns more closely with numerical computation.
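
A rough pure-Python analogy for the two data shapes (not TouchDesigner's actual data types; both helper names are my own): a CHOP carries one-dimensional channels of samples, while a TOP carries a two-dimensional grid of pixel values.

```python
import math
import random

def chop_wave(n, freq=1.0):
    """CHOP-style data: a 1-D channel of samples (an oscillating waveform)."""
    return [math.sin(2 * math.pi * freq * i / n) for i in range(n)]

def top_noise(w, h, seed=0):
    """TOP-style data: a 2-D grid of pixel brightness values (a noise texture)."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(w)] for _ in range(h)]

wave = chop_wave(64)   # 64 samples in one channel -> signal processing
tex = top_noise(8, 8)  # 8x8 grid of values       -> image processing
```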

During practical exercises, I concentrated on applying multiple TOP tools. When compositing multi-layer images with Composite, I learned that arranging layer order correctly ensures text, graphics, and backgrounds stack properly. The Transform tool enabled precise positioning adjustments, sometimes requiring frame-by-frame tweaks to achieve the desired layout. Additionally, I experimented with multiply blends between backgrounds and foregrounds, and resolved numerous image-resizing issues through the fit options. These operations highlighted how tweaking detailed parameters often proves crucial for achieving the desired final effect.
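
The multiply blend I experimented with has simple per-pixel math behind it. As a minimal sketch on normalized 0..1 grids (my own illustration, not a TouchDesigner API), each output pixel is just the product of the two layers, so dark areas of either layer darken the result and pure white leaves it unchanged.

```python
def multiply_blend(fg, bg):
    """Multiply blend on normalized (0..1) pixel grids:
    out = fg * bg per pixel; white (1.0) is the neutral element."""
    return [[f * b for f, b in zip(frow, brow)]
            for frow, brow in zip(fg, bg)]

fg = [[1.0, 0.5], [0.0, 0.8]]
bg = [[0.5, 0.5], [1.0, 1.0]]
out = multiply_blend(fg, bg)
assert out == [[0.5, 0.25], [0.0, 0.8]]
```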

Overall, this week's exercises provided clearer insight into integrating systematic rules with flexible visual control, while gradually familiarizing me with the workflow of realizing creative expressions through node-based operations.


Week 4

This class session featured our presentation for Project One, which went smoothly overall. We delivered the presentation according to the practice flow we had rehearsed privately. The instructor provided feedback, noting that our presentation was rich in content—incorporating charts rather than overwhelming text—and suggesting we elaborate further on the project description while reducing unnecessary details, such as author background information.

After class, we revised our project based on his feedback to make it even better. This presentation allowed each member of our team to make progress, big or small, and we're confident we can deliver an even stronger presentation next time.

Week 5

This week's course centered on the transition into the creative phase, where we had to transform previously studied artist case studies and theories into our own interactive installation concepts. The emphasis lay not on technical perfection, but on whether we could fully express an interactive idea through sound, sensors, and visual feedback.

Technically, the course introduced the combined use of CHOPs and TOPs in TouchDesigner. CHOPs handle data streams—such as mouse position, keyboard triggers, or audio input—then transform and map them through nodes like Math and Map. TOPs generate and compose visual elements like circles, textures, and layer overlays. The key learning lies in connecting CHOP data streams to TOP parameters, enabling images to respond dynamically to interaction.
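
The core of that mapping step is a linear remap from an input range to an output range. As a plain-Python sketch of what such a node does (the function name and parameters are my own illustration, not TouchDesigner's API):

```python
def remap(x, in_lo, in_hi, out_lo, out_hi, clamp=True):
    """Map a value from one range to another, e.g. a normalized
    mouse x in 0..1 onto a circle radius in 10..200 pixels."""
    t = (x - in_lo) / (in_hi - in_lo)  # normalize input to 0..1
    if clamp:
        t = min(max(t, 0.0), 1.0)      # keep out-of-range inputs safe
    return out_lo + t * (out_hi - out_lo)

# Mouse at mid-screen -> radius halfway between 10 and 200.
assert remap(0.5, 0.0, 1.0, 10.0, 200.0) == 105.0
```

Every "connect this CHOP channel to that TOP parameter" exercise in class was, underneath, a chain of remaps like this one.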

During class, we practiced two small systems with the instructor: one where mouse movement controls a circle's motion while keyboard input switches its color; the other where audio input drives the circle's size changes and triggers its jump to different positions on screen based on sound rhythm. This provided a more intuitive understanding that interactive visuals are fundamentally built on flexible data connections and real-time computational logic.

Overall, this week's greatest takeaway was establishing a foundational "input-processing-output" creative mindset. I also began experimenting with rapidly implementing ideas through node connections, laying the groundwork for future project prototyping.


Week 6

This week's course entered a more hands-on interactive practice phase. The instructor redefined the boundaries of "interaction," helping us understand that it extends beyond mouse and keyboard—the body itself can serve as the most direct medium for interaction. He showcased a shadow-based interactive installation in a city square. What struck me most was that the design required no rules for users to learn—people simply interacted naturally with the projections using their bodies, achieving remarkably vivid effects. This made me realize that good interaction should be intuitive and closely tied to the theme.

In the technical practice segment, we began learning how to capture motion with cameras and implement visual feedback. The core logic lies in "frame comparison through caching." The instructor guided us step by step: first, cache consecutive frames; then, select frames from different time points for difference comparison, enabling the capture of motion edges. Next, we applied threshold adjustments to filter out subtle environmental noise, retaining only significant motion changes. Finally, effects like blurring and color adjustments were applied to make the visual feedback appear more natural.
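
The "frame comparison through caching" step can be sketched outside TouchDesigner as well. Assuming grayscale frames stored as 0..1 grids (a minimal illustration of the idea, with names of my own choosing), motion is wherever the per-pixel change between a cached frame and the current one exceeds a threshold.

```python
def frame_difference(prev, curr, threshold=0.3):
    """Compare a cached frame with the current one; keep only pixels
    whose change exceeds the threshold, filtering out subtle noise."""
    return [[1.0 if abs(c - p) > threshold else 0.0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[0.10, 0.12], [0.90, 0.50]]   # cached earlier frame
curr = [[0.11, 0.60], [0.20, 0.52]]   # current camera frame
mask = frame_difference(prev, curr, threshold=0.3)
# Only the two large changes survive; the small flickers are filtered out.
assert mask == [[0.0, 1.0], [1.0, 0.0]]
```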

During implementation, I experimented with converting motion amplitude into control data. After reading pixel changes with an Analyze TOP, I remapped the numerical range so that my physical movements directly influenced the intensity of noise patterns on the screen. Additionally, we integrated last week's audio-analysis work: low-frequency sound components were linked to image displacement, while high-frequency components triggered flash effects, and we added smoothing to keep the transitions natural. Parameter adjustments required patience; for instance, raising the threshold from 0.3 to 0.5 significantly improved edge recognition. This trial-and-error process genuinely helped me understand each node's function.
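
The smoothing we added is, in essence, a one-pole filter like TouchDesigner's Lag behavior. A small sketch of the idea (my own illustration, not the course network): each output moves only a fraction `alpha` of the way toward the newest input, so sudden jumps become eased transitions.

```python
def smooth(samples, alpha=0.2):
    """One-pole smoothing: y += alpha * (x - y).
    Smaller alpha -> heavier smoothing, slower response."""
    out, y = [], samples[0]
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

jumpy = [0.0, 1.0, 1.0, 1.0]          # an abrupt step in the control signal
eased = smooth(jumpy, alpha=0.5)      # approaches 1.0 gradually instead
assert eased == [0.0, 0.5, 0.75, 0.875]
```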

Regarding the second assignment, the instructor emphasized that the core objective is "transforming ideas into presentable solutions." The concept presentation requires a concise and clear explanation of the desired interaction, inspiration sources, and intended emotional impact. Supporting documentation must include audience analysis and detailed inspiration descriptions. For the prototype, perfection isn't required, but it must fully demonstrate the core interaction logic and document the experimentation and refinement process.

During the final class discussion, we settled on the general direction for Project 2: combining visual effects with Disney magic.

Week 7

This week's course focused on integrating MediaPipe within TouchDesigner to achieve richer physical interaction effects. The instructor first guided us through reviewing the design principles of body interaction systems—establishing a natural feedback loop between the artwork and the viewer, where physical movements serve as input and visual responses as output; effective interactive design should serve the creative theme rather than merely stacking technical effects.

We then moved into hands-on practice with MediaPipe plugins. The instructor demonstrated how to utilize its hand tracking and pose recognition capabilities, guiding us to translate recognition data into dynamic on-screen feedback. Specifically, we learned to capture hand keypoint coordinates and apply smoothing and range mapping through CHOP nodes (such as Lag, Range, Limit) to achieve more stable control over circular position and size changes.
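
Since MediaPipe reports hand keypoints as normalized 0..1 coordinates, the whole "hand distance controls size" pipeline reduces to a distance calculation plus a scaled, clamped mapping (the Limit-style clamp from class). A minimal sketch, with all names and constants my own assumptions:

```python
import math

def pinch_to_radius(thumb, index, min_r=5.0, max_r=120.0, scale=300.0):
    """Map the distance between two hand keypoints (normalized 0..1
    coordinates, as MediaPipe reports them) to a circle radius,
    clamped to a safe range like a Limit node would."""
    d = math.dist(thumb, index)              # fingertip separation
    return min(max(d * scale, min_r), max_r)

r = pinch_to_radius((0.40, 0.50), (0.40, 0.50))  # fingers touching
assert r == 5.0                                   # clamped to the minimum radius
```

In the actual patch, the same keypoint channels would also pass through Lag-style smoothing before driving the circle, so the radius changes feel stable rather than jittery.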

During practice, I experimented with controlling graphic size through hand distance variations. I connected palm movement data to a Displace TOP, combining it with a Noise TOP to generate wave-like dynamic effects. I discovered that adding color overlay and edge processing could better align visual outcomes with creative intent while enhancing expressiveness.

Finally, the instructor suggested exploring different gestures to control other visual dimensions—such as brightness, hue, or texture density—to unlock more possibilities for future projects. This session reinforced that technology is merely a tool for expression; what truly matters is seamlessly integrating interactive logic with creative themes.
We then discussed Project Two, deciding to apply the newly taught hand motion capture technique to realize our new concept. People could create magical effects with a snap of their fingers. Following this, we began drafting our documentation and PowerPoint presentation.


Week 8

This session made me feel the project has entered a critical phase. After hearing our ideas, the instructor suggested we replace Disney magic with our own magical world and clarify what exactly the work aims to express. What should the audience take away from it?

In the latter half of the session, we focused on planning priorities for next week's prototype presentation. The instructor emphasized that core interactions must be functional, while visual style can be added later. He wants to see a "working, communicative" physical prototype, not just concept art. During group discussions, we encountered specific challenges like projection angles, recognition zone limitations, and signal coordination between devices. However, the instructor reminded us that identifying problems is precisely an opportunity to advance the project—the key is proposing solutions for adjustment.

This week's greatest takeaway was recognizing that the prototype phase doesn't pursue perfection—it aims to create an interactive skeleton that embodies the core concept and offers an experiential foundation. Moving forward, we'll refine the interaction logic and adjust structural details, gradually transforming our idea into a tangible, sensory entity.

Week 9


https://drive.google.com/file/d/1JDo2tL8Y6vD8LaIH122oas7rIxVP043e/view?usp=sharing

Week 10

Week 10 focused on preparations for the final exhibition. The instructor reiterated the core objective of the assignment: not merely completing a functional interactive piece, but ensuring a clear and cohesive unity between concept, narrative, and physical installation within the actual exhibition environment. The emphasis thus shifted to planning the project's concrete execution. The instructor guided each group to refine their proposals further, clarify technical implementation paths, and begin seriously considering the construction of physical structures and on-site adaptation.

During the first half of the session, the instructor laid out the exhibition timeline in detail: space measurement on December 12th, installation and construction from December 12th to 13th, public exhibition from December 13th to 15th, and the final presentation on the 26th. Specifically, our task involves taking turns staffing the exhibition site, with each person responsible for approximately one and a half hours of supervision. The instructor will provide full technical support throughout this period. He emphasized three critical tasks to complete at this stage: refining the conceptual statement, finalizing technical requirements, and confirming the materials list. Once inside the venue, we must execute a highly specific installation plan.

We then conducted a writing exercise titled "Artwork Statement." The instructor required us to clearly describe the installation's core concept, the intended audience experience, and the reflections the work aims to convey within three to six paragraphs. This text will be used for the exhibition wall labels and will also help us self-assess whether the visual presentation, interactive logic, and narrative core of our work are consistent.

We revisited our final projects with the instructor, who suggested simplifying our narrative background for greater clarity. He emphasized the importance of emotional resonance, urging us to clarify our project's central theme: what emotions we aim to evoke and how to convey them to audiences for a profound impact. Following this feedback, we held another round of group discussions to refine our projects, particularly focusing on the narrative background and emotional expression.
