2026.04.22 Presentation by 王中佾 – Rising River: A Personal VR Odyssey into Your Shadow: AI-Driven Therapeutic VR Experience

Rising River: A Personal VR Odyssey into Your Shadow: AI-Driven Therapeutic VR Experience

Abstract

Rising River is an AI-integrated virtual reality (VR) experience that transforms personal reflection into a dynamic and interactive journey. Driven by generative AI, the experience customizes narrative content based on the spoken input of each player, opening new possibilities for individualized emotional exploration. In a symbolic landscape shaped by water and memory, players gradually restore a dried-up river, uncovering hidden aspects of the self. Drawing from Jungian shadow theory and the VIA Classification of Character Strengths and Virtues [Peterson and Seligman 2004; VIA Institute on Character 2023], Rising River blends psychological storytelling with immersive design. Developed by an interdisciplinary team, the project explores how AI and VR together can offer poetic, immersive, and personalized paths toward self-awareness. This paper details the conceptual development, visual design, technical implementation, and therapeutic potential of Rising River as an experimental model for interactive narrative therapy.
Keywords:
VR, interactive narrative, Jungian shadow work, AI-generated storytelling, self-reflection, therapeutic game
Posted in 112下學期 | Comments closed.

2026.04.22 Presentation by 林佩穎 – From Sci-Fi Imagination to Everyday Interaction: A Narrative Framework for the Self-Awakening Journey of a Smart Lamp

From Sci-Fi Imagination to Everyday Interaction: A Narrative Framework for the Self-Awakening Journey of a Smart Lamp

– 2025 ACM Designing Interactive Systems Conference

Bowen Kong / Rung-Huei Liang


Abstract

In recent years, digital technologies have become increasingly autonomous, offering “mind-like” experiences through intelligent objects such as smart home devices, social robots, and voice assistants. Drawing inspiration from the classic “mind awakening” narratives of intelligent things in science fiction, this study employs design fiction to integrate such storylines into everyday contexts. We present EvoLumen, a conceptual lamp designed to explore the emergent self-awareness of a thing. The lamp was deployed in the homes of five participants for one week, generating daily first-person narratives that sequentially covered environmental perception, emotional emulation, dream states, self-reflection, and farewell. Analysis of participant feedback and observations revealed the influence of detection accuracy, emotional triggers, and science fiction elements on perceptions of the lamp’s self-awareness. Additionally, we emphasize the pivotal role of time in shaping the agency of things and propose a “narrative framework” to guide the development of more immersive and experiential digital companions.

Keywords 

smart objects, design fiction, sci-fi, natural language generation

Reference

https://doi.org/10.1145/3715336.3735761

Presentation

file


Posted in 114下學期 | Comments closed.

2026.04.22 Presentation by 凌采彣 – The Dream of Zhuang Zhou: Entangled Agencies in Multispecies Virtual Reality

The Dream of Zhuang Zhou: Entangled Agencies in Multispecies Virtual Reality

Shuai Zou, Bingyuan Wang, Boyu Li, Linlin Cai, Qiuting Xia, Zeyu Wang

SA Art Papers ’25: Proceedings of the SIGGRAPH Asia 2025 Art Papers

References:
paper
video

Presentation PPT

Posted in 114下學期 | Comments closed.

2026.04.08 Presentation by 林佩穎 – Entanglement: an immersive art of an engagement with non-conscious intelligence

Entanglement: an immersive art of an engagement with non-conscious intelligence

– ISEA 2025

Haru Hyunkyung Ji / Graham Wakefield

Artificial Nature

Abstract

This paper describes an artwork combining procedural modeling, generative AI, and dynamic simulation to create a seamless immersive installation inspired by the motif of the forest and its underground fungal network. The artwork is grounded in the imperative to draw attention to non-conscious cognition, in both biological and machine senses, as a reminder of the essential more-than-human world around us. It addresses these themes by integrating biologically inspired dynamic simulations with non-narrative spatial storytelling. The paper’s contributions also include challenging the limitations of image-based generative AI in achieving consistency in long-form continuous video at high resolutions while balancing aesthetic control, creating a valuable tool within an artist’s original workflow.

Keywords
Immersive installation, Mycorrhizal networks, Nonconscious intelligence, Procedural modeling, Generative AI, Dynamic simulation, Non-narrative storytelling

Reference

https://www.isea-symposium-archives.org/presentation/entanglement-an-immersive-art-of-an-engagement-with-non-conscious-intelligence-presented-by-ji-and-wakefield/

https://artificialnature.net/

Presentation

file


Posted in 114下學期 | Comments closed.

2026.04.08 Presentation by 李鍵 – Eye of Flora: Encountering Nature through the Mixed Reality Lens of Plant-Environment Interactions

Eye of Flora: Encountering Nature through the Mixed Reality Lens of Plant-Environment Interactions

SA ’24: SIGGRAPH Asia 2024 Art Papers

Abstract

This paper presents a hybrid, plant-driven mixed reality (MR) system, Eye of Flora, which re-engages humans with nature from a more-than-human perspective. Eye of Flora integrates plants into digital twin systems by merging plant electrophysiology with 3D reconstruction technologies. Using AI to analyze plant biosignals, digital plant agents can participate in digital environments and share their unseen biological responses to various environmental factors through aesthetic particle systems. This approach challenges anthropocentric digital ecosystems by incorporating ecological awareness through a multispecies perspective, promoting a holistic view of human-nature relationships and fostering symbiotic coexistence in the digital age.

Keywords:

Plant Digital Twin, More-than-human, Electrophysiology, Mixed Reality, Nature Engagement, Virtual Environment

ref.: https://dl-acm-org.nthulib-oc.nthu.edu.tw/doi/full/10.1145/3680530.3695442

Presentation PPT: https://docs.google.com/presentation/d/1X4HeqtmsuzIY2CcEqZ5xrgT4xiHiWqhA/edit?usp=sharing&ouid=104134464484600756439&rtpof=true&sd=true

Posted in 112下學期 | Comments closed.

2026.04.08 Presentation by 王中佾 – Virtual reality experiences for breathing and relaxation training: The effects of real vs. placebo biofeedback

Virtual reality experiences for breathing and relaxation training: The effects of real vs. placebo biofeedback

Luca Chittaro, Marta Serafini , Yvonne Vulcano

International Journal of Human-Computer Studies

Abstract

Virtual reality biofeedback systems for relaxation training can be an effective tool for reducing stress and anxiety levels, but most of them offer a limited user experience associated with the execution of a single task and a biofeedback mechanism that reflects a single physiological measurement. Furthermore, user evaluations of such systems do not typically include a placebo condition, making it difficult to determine the actual contribution of biofeedback. This paper proposes a VR system for breathing and relaxation training that: (i) uses biofeedback mechanisms based on multiple physiological measurements, and (ii) provides a richer user experience through a narrative that unfolds in phases in which the user is the main character and controls different elements of the virtual environment through biofeedback. To evaluate the system and to assess the actual contribution of biofeedback, we compared two conditions involving 35 participants: a biofeedback condition that exploited real-time measurements of the user’s breathing, skin conductance, and heart rate; and a placebo control condition, in which changes in the virtual environment followed physiological values recorded from a session with another user. The results showed that the proposed virtual experience helped users relax in both conditions, but real biofeedback produced results superior to placebo biofeedback in terms of both relaxation and sense of presence. These outcomes highlight the important role that biofeedback can play in virtual reality systems for relaxation training, as well as the need for researchers to consider placebo conditions when evaluating systems of this kind.
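The real-vs-placebo comparison can be pictured as two mappings that differ only in the source of the physiological signal: the real condition reads the current user's live measurement, while the placebo condition replays values recorded from another user's session. A minimal sketch, in which the signal choice (breathing rate), the normalization range, and the environment parameter are all illustrative assumptions, not the authors' implementation:

```python
# Sketch: real vs. placebo biofeedback (illustrative; not the paper's code).
# Both conditions drive the same environment parameter (a "calmness" value
# in [0, 1]); only the source of the physiological readings differs.

def normalize(value, lo, hi):
    """Clamp a physiological reading into [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def biofeedback_stream(live_readings, lo=6.0, hi=18.0):
    """Real condition: map the user's own breathing rate (breaths/min) each frame."""
    return [1.0 - normalize(r, lo, hi) for r in live_readings]  # slower breath -> calmer scene

def placebo_stream(recorded_readings, lo=6.0, hi=18.0):
    """Placebo condition: replay readings recorded from a different user's session."""
    return [1.0 - normalize(r, lo, hi) for r in recorded_readings]

live = [14.0, 12.0, 10.0, 8.0]      # this user gradually slows their breathing
recorded = [9.0, 13.0, 11.0, 15.0]  # another user's session, uncorrelated with this one

real = biofeedback_stream(live)
placebo = placebo_stream(recorded)
print(real)     # calmness rises as the user's own breathing slows
print(placebo)  # calmness fluctuates independently of what this user does
```

The sketch makes the paper's evaluation question concrete: in the placebo condition the environment still changes plausibly, but its changes carry no information about the user's actual physiological effort.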

Keywords: Virtual reality, Biofeedback, Relaxation training, Placebo condition, Evaluation

Original paper

PPT

Posted in 112下學期 | Comments closed.

2026.04.08 Presentation by 凌采彣 – Rethinking Individual Fairness in Deepfake Detection

Rethinking Individual Fairness in Deepfake Detection

Aryana Hou, Li Lin, Justin Li, Shu Hu

ACM MM 2025 Generative AI: Social Aspects of Generative AI

References:
paper

Presentation PPT

Posted in 114下學期 | Comments closed.

2026.04.01 Presentation by 林佩穎 – Embodied Ink: A Multisensory Reinterpretation of Chinese Calligraphy Through Digital Twins and Immersive Realities

Embodied Ink: A Multisensory Reinterpretation of Chinese Calligraphy Through Digital Twins and Immersive Realities

– ACM MM 2025

Anna Borou Yu / Jiajian Min


For thousands of years, Chinese culture has regarded calligraphy as both a form of painting and a deeply physical, philosophical practice. While traditionally manifesting as a two-dimensional art form, Chinese calligraphy emerges from three-dimensional, embodied movements infused with breath, emotion, and intention. This demo/video paper introduces Embodied Ink, a multimedia interactive installation that reinterprets Chinese calligraphy through motion capture and generative AI. By translating the audience’s movements into real-time dynamic visuals and soundscapes, the project reveals the hidden kinetics and philosophical depth underlying calligraphy. Embodied Ink reimagines the static medium of calligraphy as a dynamic interplay of forces and particles, inviting viewers to experience calligraphy as a living, ever-changing art form.

Reference

https://dl.acm.org/doi/epdf/10.1145/3746027.3756139

https://www.mystudio.design/

Presentation

file


Posted in 114下學期 | Comments closed.

2026.04.01 Presentation by 王中佾 – Transcendental Chakra: A Multi-Sensory Meditation Spiritual Journey to Enhance Self-Awareness Based on VR

Transcendental Chakra: A Multi-Sensory Meditation Spiritual Journey to Enhance Self-Awareness Based on VR

Abstract

Chakras, originating in Hinduism, are described as luminous wheels or auras representing different elements within the body that reflect a person’s self-awareness. Chakra-based meditation techniques can support individual wellbeing, but the provision of chakra feedback in a visual and tangible manner to facilitate mindfulness is largely unexplored. Transcendental Chakra is a virtual reality (VR) multi-sensory experience that uses guided audio, visual effects, and vibrotactile feedback to aid beginners in chakra meditation by visualizing their astral avatar. The goal of this work is to foster both spiritual and physical self-awareness.

Keywords

meditation, chakra, multi-sensory, virtual reality, haptic

Original paper

PPT

Posted in 112下學期 | Comments closed.

2026.04.01 Presentation by 李鍵 – A Demonstration of YUBI: Your Universal Body Interface Using Finger Force to Full-Body Motion for Avatar Embodiment

A Demonstration of YUBI: Your Universal Body Interface Using Finger Force to Full-Body Motion for Avatar Embodiment

SIGGRAPH ’25: Special Interest Group on Computer Graphics and Interactive Techniques Conference

Abstract

YUBI, a novel interface, translates nuanced finger force inputs into continuous, full-body avatar motion, fostering strong embodiment in virtual reality. It enables embodied interactions such as object manipulation and navigation using only finger force, overcoming physical constraints while delivering realistic haptic experiences. Crucially, YUBI renders reaction forces to the fingers, allowing users to perceive virtual object properties (e.g., weight and stiffness) through the effort of their input. By dynamically adapting the force-motion relationship based on object characteristics, YUBI provides intuitive haptic perception, significantly enhancing haptic realism and avatar embodiment.
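The adaptive force-motion relationship described above can be illustrated with a toy mapping in which the same finger force produces less avatar motion for a heavier virtual object, while stiffer objects push back harder. The gain formulas and parameter names below are hypothetical stand-ins, not YUBI's actual model, which the demo abstract does not specify:

```python
# Sketch of a force-to-motion mapping that adapts to object characteristics
# (hypothetical gains; YUBI's real force-motion model is not given in the abstract).

def avatar_velocity(finger_force, object_mass, base_gain=0.5):
    """Heavier objects damp the motion produced by the same finger force."""
    return base_gain * finger_force / (1.0 + object_mass)

def reaction_force(finger_force, object_stiffness):
    """Stiffer objects resist more, so users can 'feel' stiffness via input effort."""
    return object_stiffness * finger_force

light = avatar_velocity(finger_force=2.0, object_mass=0.0)  # 1.0
heavy = avatar_velocity(finger_force=2.0, object_mass=3.0)  # 0.25
print(light, heavy)  # same finger force, slower motion with the heavy object
print(reaction_force(2.0, object_stiffness=0.8))  # 1.6
```

The point of the sketch is the coupling: because motion and rendered reaction force both depend on object properties, the user infers weight and stiffness from how much finger effort a given movement costs.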

ref.: https://dl.acm.org/doi/epdf/10.1145/3721257.3743094

Presentation PPT: https://docs.google.com/presentation/d/1xTXyvkN_Ry7A0rHUPuvE9fN5sI0-CvW7/edit?usp=sharing&ouid=104134464484600756439&rtpof=true&sd=true

Posted in 114下學期 | Comments closed.

2026.04.01 Presentation by 凌采彣 – RoboSax Melody Slot Machine

RoboSax Melody Slot Machine

Masatoshi Hamanaka, Gou Koutaki

ACM MM 2025 Interactive Art

References:
paper
video

Presentation PPT

Posted in 114下學期 | Comments closed.

2026.03.11 Presentation by 王中佾 – SynCocreate: Fostering Interpersonal Connectedness via Brainwave-Driven Co-creation in VR

– 2024 SIGGRAPH Asia Art Gallery / 2024 CHI EA

ABSTRACT

Collaborative art and co-creation enhance social wellbeing and connectivity. However, combining art creation through mutual brainwave interaction with the prosocial potential of EEG biosignals reveals an untapped opportunity. SynCocreate presents the design and prototype of a VR-based interpersonal electroencephalography (EEG) neurofeedback co-creation platform. This generative VR platform enables paired individuals to interact via brainwaves in a 3D virtual canvas, painted and animated collaboratively through their real-time brainwave data. The platform employs synchronized visual cues, aligned with the real-time brainwaves of paired users, to investigate the potential of collaborative neurofeedback in enhancing co-creativity and emotional connection. It also explores the use of virtual reality (VR) in fostering creativity and togetherness through immersive, collective visualizations of brainwaves.

KEYWORDS: Generative VR, Interpersonal social connectedness, Co-creation, EEG

Paper

PPT


Posted in 112下學期 | Comments closed.

2026.03.11 Presentation by 林佩穎 – The Oracle: Ritual for the Future

The Oracle: Ritual for the Future

– Ars Electronica 2025 

Victorine van Alphen / Brave New Human (NL), IDlab (NL)


The Oracle is a futuristic immersive performance-ritual that explores the deep entanglement between human beings, technology, and (AI-generated) image culture. It invites and encloses eight participants into an intimate 360-degree screen environment. The screens function as agents and mirrors, reflecting and reshaping our perceptions of self, body, and an elusive “humanness.” Participants are guided by a live droid performer and a drone companion through a symbolic, interactive journey that blends ritual, performance, confrontation, and immersive installation into a single living system.

Participants shift between moral choice and passive absorption, navigating a choreography that is as much social as it is audiovisual. They must negotiate moments of peer censorship, physical ritual, and shared decision-making, positioning themselves within the unfolding narrative. While technology takes a central role, the experience remains deeply personal, confronting visitors with moments of vulnerability, intimacy, and catharsis.

At its heart, The Oracle is not a spectacle of technology, but a reflection on it. Drawing from Buddhist and Indigenous Latin American philosophies, it questions the Western ideal of the autonomous human, proposing instead a view of the self as fluid, formed and reformed through systems larger than us. Not “another AI show,” but a ritual of collective experiencing: confronting danger, while seducing with the non-human. Symbolic, embodied, and emotionally resonant.

Reference

https://ars.electronica.art/panic/en/view/the-oracle-ritual-for-the-future-23038ddb450c81348779c8071d6269d2/

https://victorinevanalphen.nl/the-oracle-ritual-for-the-future-for-humans-and-non-humans/

https://see-nl.com/artikel/20251103-the-oracle-ritual-for-the-future-for-humans-and-non-

Presentation

file


Posted in 114下學期 | Comments closed.

2026.03.11 Presentation by 李鍵 – Abstract Language Model

Abstract Language Model / Andreas Lutz (DE)

New Animation Art + ARS ELECTRONICA HONORARY MENTION 2025

Abstract Language Model

Artist: Andreas Lutz (DE)

For Abstract Language Model, an artificial neural network was trained on the entire character set represented in the Unicode Standard (over 65,000 characters in the Basic Multilingual Plane). The resulting complex data models contain the translation of all available human sign systems as equally representable, machine-created states, including latent points where the most accurate representation of the characters is achieved.

However, between these points interpolation becomes possible, which means that between two previously distinct characters infinitely many intermediate characters come into existence; this can be seen as the origin of a purely machine-created semiotic system. The revealing of these “obscured variants” between the known characters leads to the idea of a transitionless, non-binary universal language, which could be expressed by a self-conscious machine to its human counterpart and vice versa.
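The interpolation step described here is the standard technique of blending two latent encodings: every point on the line segment between the latent vectors of two characters decodes to an intermediate glyph. A minimal sketch of just that step, using invented 2-D placeholder vectors rather than the work's actual trained model:

```python
# Sketch: linear interpolation between two latent codes (placeholder vectors,
# not the model actually trained on the Unicode Basic Multilingual Plane).

def lerp(z_a, z_b, t):
    """Blend two latent vectors; t=0 gives z_a, t=1 gives z_b."""
    return [a + t * (b - a) for a, b in zip(z_a, z_b)]

z_a = [0.0, 1.0]  # latent point where character A is most accurately reconstructed
z_b = [1.0, 0.0]  # latent point for character B

# Sampling many t values yields a continuum of "obscured variants" between A and B.
variants = [lerp(z_a, z_b, t / 4) for t in range(5)]
print(variants[0])  # [0.0, 1.0] -> character A
print(variants[2])  # [0.5, 0.5] -> a machine-created in-between glyph
print(variants[4])  # [1.0, 0.0] -> character B
```

In the artwork, each such interpolated latent point would be passed through the trained decoder to render one of the infinitely many in-between characters; the sketch only shows where those points come from.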

The visualizations of these processes are displayed in the 4-channel video installation Abstract Language Model (Sync). Consisting of four synchronized visualizations with seven different states (Extraction, Analysis, Rearrange, Process, Transformation, Learning, and Language), the audio-visual sequence is based on a real-time interpolation through the trained models and depicts the transformation into a trans-human / trans-machine language.

Abstract Language Model (Live) is the audio-visual live performance counterpart of the installation. The 45-minute performance is presented as a one-channel version with real-time generated visuals and stereo sound.

Having employed the conceptual idea of an assumed language model for self-conscious machines and their possible expressions in previous works as well, Abstract Language Model now serves as the semiotic system for current versions of these sculptures and installations.

ref.:

https://archive.aec.at/prix/302810/

https://andreaslutz.com/abstract-language-model-sync/

http://ayoungkim.com/wp/3col/delivery-dancers-sphere-2022

Presentation PPT: https://docs.google.com/presentation/d/1KmdUnATHBwyxdmBJYJDqOXgwX6X1vJN9/edit?usp=sharing&ouid=104134464484600756439&rtpof=true&sd=true

Posted in 114下學期 | Comments closed.

2026.03.11 Presentation by 凌采彣 – Guanaquerx

Guanaquerx

Paula Gaetano Adi

Prix Ars Electronica 2025

References:
artwork website: Guanaquerx
see the artwork statement here: Ars Electronica Archive
author: Paula Gaetano-Adi | RISD

Presentation PPT

Posted in 114下學期 | Comments closed.