2025.03.18 Presentation by 周巧其 – Emerging Technologies Reconstructing Ecological Vision

Emerging Technologies Reconstructing Ecological Vision
Presentation PDF

REFERENCE

Chang, M., Shen, C., Maheshwari, A., Danielescu, A., & Yao, L. (2022, June). Patterns and opportunities for the design of human-plant interaction. In Proceedings of the 2022 ACM Designing Interactive Systems Conference (pp. 925-948).

Hu, Y., Chou, C., & Kakehi, Y. (2023). Synplant: Cymatics Visualization of Plant-Environment Interaction Based on Plants Biosignals. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 6(2), 1-7.

Hu, Y. Y., Chou, C. C., & Li, C. W. (2021, October). Apercevoir: Bio internet of things interactive system. In Proceedings of the 29th ACM International Conference on Multimedia (pp. 1456-1458).

Hu, Y., Fol, C. R., Chou, C., Griess, V. C., & Kakehi, Y. (2024, May). Immersive Flora: Re-Engaging with the Forest through the Visualisation of Plant-Environment Interactions in Virtual Reality. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1-6).


Posted in semester 112-2 | Comments are closed on 〈2025.03.18 Presentation by 周巧其 – Emerging Technologies Reconstructing Ecological Vision〉

2025.03.18 Presentation by 葉卯陽 – The Malleable-Self Experience

The Malleable-Self Experience: Transforming Body Image by Integrating Visual and Whole-body Haptic Stimuli

Audience Award, ACM SIGGRAPH 2024 Emerging Technologies

ABSTRACT

The Malleable-Self Experience comprises the integration of the visual element of virtual reality (VR) with the whole-body haptic sensations of the Synesthesia X1 haptic chair. The goal is to induce a provocative experience that expands one’s understanding of the self by creating a malleable perception of the body image. We explore the effects of visual and whole-body haptic integration on augmenting body image during dynamic transformations of visual representations of the body in VR. We design the plausibility of these perceptual augmentations using a specific sequence of multisensory events: (1) establishing body ownership of a virtual body anchored in the same self-located space as the participant, (2) separating the virtual body to hover above the participant’s physical body, enhanced by accompanying haptic stimuli to increase proprioceptive uncertainty, and (3) transforming the virtual body with integrated visuo-haptic stimuli to sustain perceptual congruency.
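The staged sequence in the abstract is essentially a small protocol: establish ownership, separate the body with haptic support, then transform it with congruent visuo-haptic cues. A minimal sketch of that ordering follows; the function and stage names are assumptions for illustration, not the authors' software.

```python
# Illustrative sketch of the three-stage multisensory sequence described
# above. Stage names and the haptic pairing follow the abstract; the
# callback shape and everything else are assumptions.
STAGES = [
    ("establish_body_ownership", False),  # virtual body anchored in place
    ("separate_virtual_body",    True),   # hover above body, with haptics
    ("transform_virtual_body",   True),   # visuo-haptic transformation
]

def run_sequence(fire_haptics):
    """Step through the stages in order, invoking the haptic callback
    for the two stages the abstract pairs with whole-body haptic stimuli."""
    visited = []
    for stage, needs_haptics in STAGES:
        if needs_haptics:
            fire_haptics(stage)
        visited.append(stage)
    return visited

order = run_sequence(lambda stage: None)
# order lists the three stages in the sequence the abstract prescribes
```

The point of the sketch is only the ordering constraint: haptic stimuli accompany the separation and transformation stages, never the initial ownership stage.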

KEYWORDS

Malleable-Self Experience, XR (Extended Reality), Synesthesia, Body Ownership Illusions (BOI)

AUTHOR

  • Tanner Person, Keio University Graduate School of Media Design
  • Nobuhisa Hanamitsu, Enhance Experience Inc. / Keio University Graduate School of Media Design
  • Danny Hynds, Keio University Graduate School of Media Design
  • Sohei Wakisaka, Keio University Graduate School of Media Design
  • Kota Isobe, Enhance Experience Inc.
  • Leonard Mochizuki, Enhance Experience Inc.
  • Tetsuya Mizuguchi, Enhance Experience Inc. / Keio University Graduate School of Media Design
  • Kouta Minamizawa, Professor, Keio University Graduate School of Media Design

REFERENCE

ACM SIGGRAPH 2024 Emerging Technologies

Keio Media Design (KMD)

Enhance Experience Inc.

Synesthesia lab

https://synesthesialab.com/

Presentation file.

Posted in semester 113-2 | Comments are closed on 〈2025.03.18 Presentation by 葉卯陽 – The Malleable-Self Experience〉

2025.03.18 Presentation by 孫以臻 – Material Texture Design

Material Texture Design: Texture Representation System

Utilizing Pseudo-Attraction Force Sensation

SIGGRAPH 2023 Emerging Technologies

ABSTRACT
We propose Material Texture Design, a material texture representation system. This system presents a pseudo-attraction force sensation in response to the user’s motion, and displays a shear sensation at the fingertips. The user perceives a change in the center of gravity from the shear sensation and feels the artificial material texture. Experimental results showed that the perceived texture could be changed by adjusting the frequency. Through demonstration, users can distinguish different textures such as water, jelly, or a rubber ball, depending on the frequency and latency. We propose this system as a small, lightweight, and simple implementation system for texture representation.
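The pseudo-attraction force the abstract relies on comes from an asymmetric vibration: each cycle pairs a brief strong pull with a longer, weaker recovery, so net displacement is zero while the perceived force is directional. A minimal sketch of such a waveform follows; the waveform shape, sample rate, and parameter values are illustrative assumptions, not the authors' published design.

```python
# Sketch of an asymmetric vibration waveform of the kind used to evoke a
# pseudo-attraction (shear) force. All parameter values are assumptions.
def asymmetric_wave(freq_hz, n_samples, sample_rate=8000, pull_fraction=0.2):
    """Generate a short strong pull (+1.0) followed by a longer weak
    recovery, repeating at freq_hz. Each cycle sums to zero displacement,
    so only the *perceived* force is directional."""
    weak = -pull_fraction / (1.0 - pull_fraction)   # recovery amplitude
    cycle = sample_rate / freq_hz                    # samples per cycle
    return [1.0 if (i % cycle) < pull_fraction * cycle else weak
            for i in range(n_samples)]

wave = asymmetric_wave(freq_hz=40, n_samples=800)
# Raising freq_hz, or delaying playback, is the kind of parameter change
# the abstract reports as altering the perceived texture.
```

Because the positive and negative segments cancel over each cycle, only the asymmetry in peak force is left for the skin to integrate into a sensed pull.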

KEYWORDS
texture design, haptic display, elastic interface

REFERENCE
https://www.youtube.com/watch?v=KqxmShoDhjI
https://dl.acm.org/doi/epdf/10.1145/3588037

PDF for presentation

Posted in semester 113-2, conference | Comments are closed on 〈2025.03.18 Presentation by 孫以臻 – Material Texture Design〉

2025.03.18 Presentation by 吳柏瑤 – Love in Action: Gamifying Public Video Cameras for Fostering Social Relationships in Real World (EAI ArtsIT 2024)

Love in Action: Gamifying Public Video Cameras for Fostering Social Relationships in Real World

Presentation PDF

Refer to caption

Abstract

In this paper, we create “Love in Action” (LIA), a body-language-based social game that uses video cameras installed in public spaces to enhance social relationships in the real world. In the game, participants assume dual roles: requesters, who issue social requests, and performers, who respond to social requests by performing specified body language. To mediate communication between participants, we build an AI-enhanced video analysis system incorporating multiple visual analysis modules, such as person detection, attribute recognition, and action recognition, to assess the performer’s body language quality. A two-week field study involving 27 participants shows significant improvements in their social friendships, as indicated by self-reported questionnaires. Moreover, user experiences are investigated to highlight the potential of public video cameras as a novel communication medium for socializing in public spaces.
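The modular structure the abstract describes (several visual-analysis modules each scoring a frame, combined into one body-language quality score) can be sketched as a simple pipeline. The module names come from the abstract; the stub confidences and the mean combiner are assumptions for illustration, not LIA's actual implementation.

```python
# Hypothetical sketch of a modular video-analysis pipeline: each module
# returns a confidence in [0, 1]; the combiner here is a plain mean.
def assess_body_language(frame, modules):
    """Run every analysis module on the frame and average their
    confidence scores into a single body-language quality score."""
    scores = [module(frame) for module in modules]
    return sum(scores) / len(scores)

# Toy stand-ins for the three modules named in the abstract:
detect_person    = lambda frame: 1.0   # a performer is visible
recognize_attrs  = lambda frame: 0.75  # appearance attributes matched
recognize_action = lambda frame: 0.5   # requested body language half-matched

quality = assess_body_language(
    frame=None,
    modules=[detect_person, recognize_attrs, recognize_action])
# quality == 0.75 under these toy confidences
```

Keeping each analyzer behind the same one-frame-in, one-score-out interface is what lets a system like this swap in or stack additional recognizers without touching the game logic.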

Keywords: Location-based games, Social interactions, Public video cameras

 

 

Posted in semester 113-2 | Comments are closed on 〈2025.03.18 Presentation by 吳柏瑤 – Love in Action: Gamifying Public Video Cameras for Fostering Social Relationships in Real World (EAI ArtsIT 2024)〉

2025.03.03 Presentation by 葉卯陽 – I am Feeling Lucky

Art Title: I am Feeling Lucky

The Prix Ars Electronica | Award of Distinction 2024

Author: Timothy Thomasson

→ Original Link:

→ Artwork Website

→ Class Presentation PPT

Abstract: 

I’m Feeling Lucky is a real-time computer-generated animation that questions relationships to image, geography, virtual space, historical media technology, and mass data collection systems. The work features a 3D virtual landscape that is both historically and geographically ambiguous, generated in real-time using game engine technology. This virtual landscape is then populated with thousands of figures sourced from the vast pool of 360-degree image data collected by Google Street View. These figures are processed through a deep neural network, so they become three-dimensional models in the virtual space, each frozen in their captured pose. The work interrogates mass image collection systems, as many of these individuals may not have been aware that their photo was taken by Google, let alone anticipate being placed in this new, strange setting. Many thousands of figures sourced from all over the world are randomly selected to inhabit the endless landscape together.

 The work takes into consideration the panorama paintings of the 19th century as objects of historical, cultural, and perceptual significance, and situates them within contemporary media contexts. Panoramas are rotunda structures in which large 360 degree paintings depict sublime natural landscapes, battle scenes, religious events, or large cityscapes, characterized by their lack of framed boundaries and the inability to be viewed in their entirety with a single gaze. These panorama structures are theorized as part of the lineage of immersive media technologies and can be analyzed as proto-cinematic/virtual reality forms.

With I’m Feeling Lucky, the virtual environment is generated and populated procedurally, so the panoramic image becomes infinite as the virtual camera slowly pans across the landscape endlessly, portraying the stillness of painting at odds with the expectation of fast, high-speed movement and technical progression of digital imagery.

Jury statement:

In I’m Feeling Lucky by Canadian artist Timothy Thomasson, a historically and geographically ambiguous 3D virtual landscape is generated in real-time with game engine technology and populated with figures from Google Street View. Processed by a deep neural network, thousands of anonymous figures taken from all over the world are randomly selected to inhabit the landscape. The work is based on 19th century panoramas: all-encompassing circular paintings that featured spectacular natural landscapes or battle scenes that completely surrounded the viewer. The panoramas’ immersive scale aimed to condition and mediate perception, thus linking the spectacle and scale of the time with the contemporary scales of imaging and data collection undertaken by Google. Images in the work are continually produced in run time as a virtual camera rotates around the space endlessly and at times almost imperceptibly, thus creating a disjunction between the stillness of landscape painting and the expectation of high frame rate digital images. The jury was impressed with how I’m Feeling Lucky subtly links histories of geography and historical media technology with current issues around mass data collection.

Posted in semester 113-2 | Comments are closed on 〈2025.03.03 Presentation by 葉卯陽 – I am Feeling Lucky〉

2025.03.04 Presentation by 周巧其 – Social Intervention and the Multitude

Social Intervention and the Multitude

Presentation PDF
Prix Ars Electronica | 2020-2022 | Golden Nica

Be Water by Hong Kongers
https://archive.aec.at/prix/254025/

Bi0film.net: Resist like bacteria
https://junghsu.com/Bi0film-net

Forensic Architecture’s Cloud Studies
https://forensic-architecture.org/

Posted in semester 113-2 | Comments are closed on 〈2025.03.04 Presentation by 周巧其 – Social Intervention and the Multitude〉

2025.03.04 Presentation by 孫以臻 – Nosukaay

Nosukaay / Diane Cescutti (FR)

Interactive Art + ARS ELECTRONICA Golden Nica 2024

Nosukaay
Artist: Diane Cescutti (FR)

The loom could be envisioned as a programmable machine that encodes knowledge into fabric, serving as a means of preserving and transmitting culture; while the computer processes data, the loom preserves stories and traditions. ‘Nosukaay’ means computer in Wolof, a language spoken across much of West Africa; the installation Nosukaay merges textile hapticity with digital space to produce a hybrid that expands the notion of interactivity. It is based on a modified Manjacque loom in which the loom’s frames are replaced by two screens presenting a video game, where users interact with the “wisdom of the system” through a deity. Its tactile interface is made of Manjak loincloth, woven by the artist Edimar Rosa in Dakar. If the player makes a choice that does not respect the machine deity, and hence the importance of the knowledge transmitted, the user is ejected from the game and sent back to the beginning. Nosukaay, as a textile-computer hybrid, allows us to rethink the concept of the “computer” through a rich tapestry of shared understanding that interweaves craft with computational practices.

ref.:
https://archive.aec.at/prix/290626/
https://www.africandigitalart.com/nosukaay-weaving-the-future-with-tradition-and-technology/
https://dianecescutti.com/works/nosukaay/

PDF for presentation

Posted in semester 113-2 | Comments are closed on 〈2025.03.04 Presentation by 孫以臻 – Nosukaay〉

2025.03.04 Presentation by 吳柏瑤 – Cold Call: Time Theft as Avoided Emissions

Presentation PDF

Cold Call: Time Theft as Avoided Emissions
Sam Lavigne and Tega Brain (INT)

Prix Ars Electronica | The 2024 Winners | Interactive Art

Abstract

Cold Call: Time Theft as Avoided Emissions is an unconventional carbon offsetting scheme that draws on strategies of worker sabotage and applies them to high-emission companies in the fossil fuel industry. Time theft is a strategy of deliberately slowing productivity, in which workers waste time and are therefore paid for periods of idleness: fake sick days, sleeping on the job, extended lunch breaks, or engaging in non-work-related activities such as social media or unrelated phone calls. In extractive industries, where productivity remains firmly tethered to carbon emissions, sabotage is an effective strategy for emissions reductions.

Cold Call is an installation that takes the form of a call center. Audiences are connected by telephone to executives in the fossil fuel industry and instructed to keep them on the phone as long as possible. The cumulative time stolen from these executives is then quantified as carbon credits, using an innovative new offsetting methodology. The project is powered by custom call center software that allows participants to make calls, learn about who they are calling, access call scripts and conversation ideas, and listen to recordings of calls that have already been made. A leader board tracks the total number and length of calls. To date, the longest call has stretched for over 39 minutes.
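The bookkeeping such an installation implies (logging calls, ranking a leaderboard by time stolen, and converting cumulative minutes into "carbon credits") can be sketched in a few lines. The conversion rate below is a made-up placeholder: the project's actual offsetting methodology belongs to the artists and is not reproduced here.

```python
from collections import defaultdict

# Hypothetical placeholder rate, NOT the project's figure.
KG_CO2_PER_STOLEN_MINUTE = 0.5

class CallLog:
    """Track time stolen per executive and derive the two displays the
    abstract mentions: a leaderboard and a cumulative credit total."""

    def __init__(self):
        self._seconds = defaultdict(float)  # executive -> total seconds

    def record(self, executive, seconds):
        self._seconds[executive] += seconds

    def leaderboard(self):
        """Executives ranked by total time kept on the phone, longest first."""
        return sorted(self._seconds.items(), key=lambda kv: -kv[1])

    def avoided_emissions_kg(self):
        total_minutes = sum(self._seconds.values()) / 60.0
        return total_minutes * KG_CO2_PER_STOLEN_MINUTE

log = CallLog()
log.record("executive_a", 312.0)
log.record("executive_b", 2340.0)   # the abstract's longest call: 39 minutes
top_caller = log.leaderboard()[0][0]
```

The single tunable constant makes the conceit of the piece visible: whatever the methodology, the "credit" is just stolen minutes multiplied by a claimed emissions rate.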

 

Posted in semester 113-2 | Comments are closed on 〈2025.03.04 Presentation by 吳柏瑤 – Cold Call: Time Theft as Avoided Emissions〉

2023.5.23 Presentation by 洪寶惜 – GANksy aims to produce images that bear resemblance to works by the UK’s most famous street artist

The Art Newspaper:
An AI bot has figured out how to draw like Banksy. And it’s uncanny!

Article source: An AI bot has figured out how to draw like Banksy. And it’s uncanny (theartnewspaper.com)

Presentation PPT: [PPT]

Abstract

To create these images, Round has used a type of computerised machine learning framework known as a GAN (generative adversarial network). This specific GAN was trained for five days using a portfolio of hundreds of images of (potentially) Banksy’s work, until it was able to produce an image that bears a superficial likeness to the originals.
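For readers unfamiliar with the mechanics, the adversarial training loop behind a GAN can be shrunk to one dimension so its two alternating updates are visible. The sketch below is purely pedagogical: a real image GAN (GANksy included) uses deep convolutional networks and days of GPU training, and nothing here reflects Round's actual implementation.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Discriminator D(x) = sigmoid(w*x + c); generator G(z) = a*z + b.
w, c = 0.1, 0.0   # discriminator parameters
a, b = 1.0, 0.0   # generator parameters
lr = 0.01
real_sample = lambda: random.gauss(4.0, 0.5)  # "training portfolio" near 4

for _ in range(5000):
    # Discriminator ascent: raise D on a real sample, lower it on a fake.
    z = random.uniform(-1.0, 1.0)
    x_real, x_fake = real_sample(), a * z + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1.0 - d_real) - d_fake)
    # Generator ascent (non-saturating loss): make D rate the fake higher.
    z = random.uniform(-1.0, 1.0)
    x_fake = a * z + b
    grad_out = (1.0 - sigmoid(w * x_fake + c)) * w  # d/dx of log D(x)
    a += lr * grad_out * z
    b += lr * grad_out

fake_mean = sum(a * random.uniform(-1.0, 1.0) + b for _ in range(1000)) / 1000
# After training, the generator's outputs should have drifted toward the
# real data's neighbourhood (mean 4): the same push-pull, scaled up to
# image space, is what drove GANksy toward Banksy-like images.
```

The two updates are adversarial in exactly the sense the article describes: the discriminator learns to tell the "portfolio" from the fakes, and the generator learns to move its output where the discriminator scores it as real.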

Posted in semester 111-2 | Comments are closed on 〈2023.5.23 Presentation by 洪寶惜 – GANksy aims to produce images that bear resemblance to works by the UK’s most famous street artist〉

2023.5.23 Presentation by 巫思萱 – Co-Writing with Opinionated Language Models Affects Users’ Views

Paper title: Co-Writing with Opinionated Language Models Affects Users’ Views

Authors: Maurice Jakesch, et al.

Source: CHI ’23, https://dl.acm.org/doi/10.1145/3544548.3581196

Presentation PPT: [PDF]

ABSTRACT

If large language models like GPT-3 preferably produce a particular point of view, they may influence people’s opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write – and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants’ writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.

Posted in semester 111-2 | Comments are closed on 〈2023.5.23 Presentation by 巫思萱 – Co-Writing with Opinionated Language Models Affects Users’ Views〉

2023.05.23 Presentation by 李艷琳 – CLOUD STUDIES

CLOUD STUDIES

PRIX ARS ELECTRONICA 2021 – Artificial Intelligence & Life Art – Golden Nica

 

Authors:

Forensic Architecture (FA)

→ Original Link:

→ Artwork Video

→ Class Presentation PPT

 

Abstract: 

Civil society rarely has privileged access to classified information, making the information that is available from ‘open sources’ crucial in identifying and analyzing human rights violations by states and militaries. The wealth of newly-available data—images and videos pulled from the open source internet—around which critical new methodologies are being built, demands new forms of image literacy, an ‘investigative aesthetics,’ to read traces of violence in fragmentary data drawn from scenes of conflict and human rights violations. The results of these new methodologies have been significant, and Forensic Architecture (FA) has been among the pioneers in this field, as open source investigation (OSI) has impacted international justice mechanisms, mainstream media, and the work of international human rights NGOs and monitors. The result has been a new era for human rights: what has been called ‘Human Rights 3.0.’

In Forensic Architecture’s work, physical and digital models are more than representations of real-world locations—they function as analytic or operative devices. Models help us to identify the relative location of images, camera positions, actions, and incidents, revealing what parts of the environment are ‘within the frame’ and what remains outside it, thereby giving our investigators a fuller picture of how much is known, or not, about the incident they are studying.

There remain, however, modes of violence that are not easily captured even ‘within the frame.’ Recent decades have seen an increase in airborne violence, typified by the extensive use of chlorine gas and other airborne chemicals against civilian populations in the context of the Syrian civil war. Increasingly, tear gas is used to disperse civilians (often gathered in peaceful protest), while aerial herbicides destroy arable land and displace agricultural communities, and large-scale arson eradicates forests to create industrial plantations, generating vast and damaging smoke clouds. Mobilized by state and corporate powers, toxic clouds affect the air we breathe across different scales and durations, from urban squares to continents, momentary incidents to epochal latencies. These clouds are not only meteorological but political events, subject to debate and contestation. Unlike kinetic violence, where a single line can be drawn between a victim and a ‘smoking gun’, in analyzing airborne violence, causality is hard to demonstrate; in the study of clouds, the ‘contact’ and the ‘trace’ drift apart, carried away by winds or ocean currents, diffused into the atmosphere.  Clouds are transformation embodied, their dynamics elusive, governed by non-linear behavior and multi-causal logics.

One response by FA has been to work with the Department of Mechanical Engineering at Imperial College London (ICL), world leaders in fluid dynamics simulation. Together, FA and ICL have pioneered new methodologies for meeting the complex challenges to civil society posed by airborne violence. The efficacy of such an approach in combatting environmental violence has already been demonstrated—FA’s investigation into herbicidal warfare in Gaza was cited by the UN—and has significant future potential, as state powers are increasingly drawn to those forms of violence and repression that are difficult to trace.

Cloud Studies brings together eight recent investigations by Forensic Architecture, each examining different types of toxic clouds and the capacity of states and corporations to occupy airspace and create unliveable atmospheres. Combining digital modelling, machine learning, fluid dynamics, and mathematical simulation in the context of active casework, it serves as a platform for new human rights research practices directed at those increasingly prevalent modes of ‘cloud-based,’ airborne violence. Following a year marked by environmental catastrophe, a global pandemic, political protest, and an ongoing migrant crisis, Cloud Studies offers a new framework for considering the connectedness of global atmospheres, the porousness of state borders and what Achille Mbembe terms ‘the universal right to breathe.’

Posted in semester 111-2 | Comments are closed on 〈2023.05.23 Presentation by 李艷琳 – CLOUD STUDIES〉

2023.05.23 Presentation by 劉士達 – Tangible Globes for Data Visualisation in Augmented Reality

Paper title: Tangible Globes for Data Visualisation in Augmented Reality

Authors: Kadek Ananta Satriadi, et al.

Source: CHI ’22, https://doi.org/10.1145/3491102.3517715

Presentation PPT: [PPT] [PDF]

 

Abstract

Head-mounted augmented reality (AR) displays allow for the seamless integration of virtual visualisation with contextual tangible references, such as physical (tangible) globes. We explore the design of immersive geospatial data visualisation with AR and tangible globes. We investigate the “tangible-virtual interplay” of tangible globes with virtual data visualisation, and propose a conceptual approach for designing immersive geospatial globes. We demonstrate a set of use cases, such as augmenting a tangible globe with virtual overlays, using a physical globe as a tangible input device for interacting with virtual globes and maps, and linking an augmented globe to an abstract data visualisation. We gathered qualitative feedback from experts about our use case visualisations, and compiled a summary of key takeaways as well as ideas for envisioned future improvements. The proposed design space, example visualisations and lessons learned aim to guide the design of tangible globes for data visualisation in AR.

 

Keywords: immersive analytics, tangible user interface, augmented reality, geographic visualization

Posted in semester 111-2 | Comments are closed on 〈2023.05.23 Presentation by 劉士達 – Tangible Globes for Data Visualisation in Augmented Reality〉

2023.05.09 Presentation by 洪寶惜 – “The Blinding Light” (《光。盲》): A Techno Artwork to Reflect Technology-Mediated Bias

Thesis title: The “Blinding Light” (《光。盲》) – A Techno Artwork to Reflect Technology-Mediated Bias

Author: 張瑜真 Chang, Yu-Chen (2022)

Master’s Program in Techno Art, National Cheng Kung University

Abstract

Thesis Link

Presentation PPT

Keywords: technology-media illusion (科技媒體錯覺)

Posted in semester 111-2 | Comments are closed on 〈2023.05.09 Presentation by 洪寶惜 – “The Blinding Light” (《光。盲》): A Techno Artwork to Reflect Technology-Mediated Bias〉

2023.05.09 Presentation by 李艷琳 – Constellation‧Multiple Lines‧Assemblages: Aesthetic Language of the Avant-garde.net

星叢‧複線‧集合:網路前衛藝術美學語言

Constellation‧Multiple lines‧Assemblages: Aesthetic language of the avant-garde.net

 

Author: 林欣怡

Doctoral dissertation, Institute of Applied Arts, National Chiao Tung University

 

Abstract:

“Constellation‧Multiple Lines‧Assemblages: Aesthetic Language of the Avant-garde.net” is written by overlaying the concepts of the constellation as a textual form, mental posture, thing-orientedness, data subjectivity, network materiality, and multiple extended embodiment. Combining the three horizons of aesthetic language, philosophical concepts, and artworks, these concepts produce a conceptual assemblage that adheres to the net itself: on the one hand mirroring the inherently polymorphic, plural character of the network body, and on the other pointing to the openness and transformability of networked aesthetic concepts; through the interplay of these three horizons and their nodes, the dissertation links heterogeneous paths for thinking about avant-garde net art. Its first facet, “the aesthetics of avant-garde net art,” adopts the “constellation” form from Adorno’s aesthetic theory as its mode of discourse, while examining how networked space connects to macro-networks to generate “momentum,” and how this momentum shapes a mental posture. Following from constellation, momentum, mental posture, and the bodily perspective, the dissertation derives the data subjectivity mirrored in collective online creation, and develops the mutability of the “concept as thing,” forming an aesthetic language of network materiality oriented toward objects and thing-oriented exhibition. Finally, it discusses the context of internet art practice in Taiwan, seeking points of difference and connection through which to articulate the constitution of net art creation in Taiwan.

 

Keywords:

Internet avant-garde art, constellation, assemblage, data subjectivity, network materiality

 

→ Original Link:

→ Class Presentation PPT

Posted in semester 111-2 | Comments are closed on 〈2023.05.09 Presentation by 李艷琳 – Constellation‧Multiple Lines‧Assemblages: Aesthetic Language of the Avant-garde.net〉

2023.05.09 Presentation by 劉士達 – Shells and Stages for Actuated TUIs: Reconfiguring and Orchestrating Dynamic Physical Interaction

Thesis title: Shells and Stages for Actuated TUIs: Reconfiguring and Orchestrating Dynamic Physical Interaction

Author: Nakagaki, Ken

Year: September 2021. Submitted to the Program in Media Arts and Sciences, School of Architecture and Planning on August 20, 2021, in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Media Arts and Sciences.

Source: MIT Media Lab – Tangible Media Group, https://dspace.mit.edu/handle/1721.1/142836

Presentation PPT: [PPT] [PDF]

Abstract

Research on Actuated and Shape-Changing Tangible User Interfaces (TUIs) in the field of Human-Computer Interaction (HCI) has widely explored the design of embodied interactions using digital computation. While advanced technical approaches, such as robotics and material science, have led to many concrete instances of Actuated TUIs, a single actuated hardware system is, in reality, inherently limited by its fixed configuration, which constrains the reconfigurability, adaptability, and expressibility of its interactions.

In my thesis, I introduce novel hardware augmentation methods, Shells and Stages, for Actuated TUI hardware to expand and enrich their interactivity and expressibility for dynamic physical interactions. Shells act as passive mechanical attachments for Actuated TUIs that can extend, reconfigure and augment the interactivity and functionality of the hardware. Stages are physical platforms on which Actuated TUIs can propel themselves to create novel physical expression based on the duality of front stage and back stage. These approaches are inspired by theatrical performances, computational and robotic architecture, biological systems, physical tools and science fiction. While Shells and Stages can individually augment the interactivity and expressibility of an Actuated TUI system, the combination of the two enables advanced physical expression based on combined shell-swapping and stage-transitioning. By introducing these novel modalities of Shells and Stages, the thesis expands and contributes to a new paradigm of Inter-Material / Device Interaction in the domain of Actuated TUIs.

The thesis demonstrates the concepts of Shells and Stages on existing Actuated TUI hardware, including pin-based shape displays and self-propelled swarm user interfaces. Design and implementation methods are introduced to fabricate mechanical shells with different properties, and to orchestrate a swarm of robots on the stage with arbitrary configurations. To demonstrate the expanded interactivity and reconfigurability, a variety of interactive applications are presented via prototypes, ranging from digital data interaction and reconfigurable physical environments to storytelling and tangible gaming. Overall, my research introduces a new A-TUI design paradigm that incorporates self-actuating hardware (Actuated TUIs) and passively actuated mechanical modules (Shells) together with surrounding physical platforms (Stages). By doing so, my research envisions a future in which computational technology is coupled seamlessly with our physical environment. This next generation of TUIs, by interweaving multiple HCI research streams, aims to provide endless possibilities for reconfigurable tangible and embodied interactions enabled by fully expressive and functional movements and forms.

Posted in semester 111-2 | Comments are closed on 〈2023.05.09 Presentation by 劉士達 – Shells and Stages for Actuated TUIs: Reconfiguring and Orchestrating Dynamic Physical Interaction〉