2022.03.15 Report by 林巖: Weighted Walking: Propeller-based On-leg Force Simulation of Walking in Fluid Materials in VR

Introduction


Building on previous studies of on-leg activity in VR, this work offers a different way to render leg sensations: leg-worn propellers apply resistance forces that simulate walking through fluid materials.

The paper describes the device and the VR applications that realize this idea, and also reviews end-user considerations such as the weight of the hardware and how sensor data is used by the VR application.
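
The physics behind the title is worth spelling out: walking through a fluid adds a quadratic drag force that the leg-worn propellers must reproduce. A minimal sketch under assumed parameters (the drag coefficient and leg cross-section are illustrative, not values from the paper):

```python
def fluid_drag_force(velocity, density, drag_coeff=1.0, area=0.03):
    """Quadratic drag on a leg segment moving through a fluid:
    F = 0.5 * rho * Cd * A * v * |v|, opposing the motion."""
    return 0.5 * density * drag_coeff * area * velocity * abs(velocity)

# Swinging the leg forward at 0.5 m/s: water (1000 kg/m^3) vs. air (1.2 kg/m^3)
print(fluid_drag_force(0.5, 1000.0))  # 3.75 N for the propellers to render
print(fluid_drag_force(0.5, 1.2))     # ~0.0045 N, effectively nothing
```

The three-orders-of-magnitude gap between the two media is what makes on-leg force feedback necessary for a convincing "walking in water" sensation.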

SIGGRAPH Asia 2021, Emerging Technologies

Publication

Posted in 110-2 (Spring 2022) | Comments are closed.

2022.3.15 Report by 陳昭潔 – Frisson Waves: Sharing Frisson to Create Collective Empathetic Experiences for Music Performances

Frisson Waves: Sharing Frisson to Create Collective Empathetic Experiences for Music Performances

– ACM SIGGRAPH ASIA 2021, Emerging Technologies

 

Publication

Presentation

Authors: 

Yan He, George Chernyshov, Dingding Zheng, Jiawen Han, Ragnar Thomsen, Danny Hynds, Yuehui Yang, Yun Suen Pai, Kai Kunze, Kouta Minamizawa

Abstract:

We propose Frisson Waves, a real-time system to detect, trigger and share frisson during music performances. The system consists of a physiological sensing wristband for detecting frisson and a thermo-haptic neckband for inducing frisson. This project aims to improve the connectedness of audience members and performers during music performances by sharing frisson. We present the results of an initial concert workshop and a feasibility study of our prototype.
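
As a rough illustration of what the sensing wristband must do (the authors' actual detection algorithm is not described in this abstract), the sketch below flags rapid rises in skin conductance, one common physiological correlate of frisson; the sampling rate and threshold are assumptions:

```python
import numpy as np

def detect_frisson(eda, fs, rise_threshold=0.05):
    """Flag samples where skin conductance (µS) rises faster than
    `rise_threshold` µS per second; a crude stand-in for the
    wristband's frisson detector."""
    slope = np.gradient(eda) * fs  # central-difference derivative, µS/s
    return np.where(slope > rise_threshold)[0].tolist()

# A flat baseline followed by a rapid rise, sampled at 4 Hz
eda = np.array([1.0, 1.0, 1.0, 1.2, 1.4, 1.4])
print(detect_frisson(eda, fs=4))  # [2, 3, 4]: the rising edge
```

In the installation, such detections would be what triggers the thermo-haptic neckbands of nearby audience members.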

Posted in 110-2 (Spring 2022) | Comments are closed.

Report by 林巖: Reverse Pass-Through VR

Abstract:

Reverse pass-through VR.

“We introduce reverse pass-through VR, wherein a three-dimensional view of the wearer’s eyes is presented to multiple outside viewers in a perspective-correct manner, with a prototype headset containing a world-facing light field display. ”

“A three-dimensional view of the wearer’s eye is presented to multiple outside viewers.”

By presenting an image of the wearer's eyes on a world-facing display, this prototype reconnects the headset user with the real world, restoring the eye contact that everyday social interaction requires.

Art Website

Class Presentation: Emerging Technologies 2021

 

Posted in 110-2 (Spring 2022) | Comments are closed.

2022.03.15 Report by 陳麗宇 – Gesture Recognition

Recognition of Gestures over Textiles with Acoustic Signatures – ACM SIGGRAPH ASIA 2021, Emerging Technologies

Publication

→ Class Presentation

 

Authors – Pui Chung Wong, Christian Sandor, Alvaro Cassinelli (CityU, HK)

Abstract – A method capable of turning textured surfaces into opportunistic input interfaces is demonstrated, thanks to a machine learning model pre-trained on acoustic signals generated by scratching different fabrics. It does not require intervention on the fabric. It is passive and works well using regular microphones. Preliminary results also show that the system recognizes the manipulation of Velcro straps, zippers, or the tapping or scratching of plastic cloth buttons over the air when the microphone is in personal space.
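
A toy reconstruction of such a pipeline, with FFT band energies standing in for the acoustic signatures and a nearest-centroid model standing in for the paper's pre-trained machine learning model (the "fabrics" here are synthetic sine-plus-noise clips, purely for illustration):

```python
import numpy as np

def band_energies(clip, n_bands=8):
    """Crude acoustic signature: mean FFT magnitude in coarse bands."""
    mag = np.abs(np.fft.rfft(clip))
    return np.array([b.mean() for b in np.array_split(mag, n_bands)])

class NearestCentroid:
    """Tiny stand-in for the paper's pre-trained classifier."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {l: np.mean([x for x, yy in zip(X, y) if yy == l], axis=0)
                           for l in self.labels_}
        return self
    def predict(self, x):
        return min(self.labels_, key=lambda l: np.linalg.norm(x - self.centroids_[l]))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
# Hypothetical fabrics: scratching each produces a different dominant frequency.
denim  = [np.sin(2 * np.pi * 40 * t)  + 0.1 * rng.standard_normal(t.size) for _ in range(5)]
velcro = [np.sin(2 * np.pi * 400 * t) + 0.1 * rng.standard_normal(t.size) for _ in range(5)]
X = [band_energies(c) for c in denim + velcro]
y = ["denim"] * 5 + ["velcro"] * 5
model = NearestCentroid().fit(X, y)
print(model.predict(band_energies(np.sin(2 * np.pi * 400 * t))))  # velcro
```

The real system learns far richer signatures from recorded scratch sounds, but the structure (featurize, then match against trained classes) is the same.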

Posted in 110-2 (Spring 2022) | Comments are closed.

2022.03.01 Report by 林巖: INFINITELY YOURS

INFINITELY YOURS:  ARS ELECTRONICA,

The 2020 Golden Nica

Category: Computer Animation (CA)

Author – Miwa Matreyek

Abstract: “INFINITELY YOURS” (2020) is an animated film combining video and audio. The artist performs with her own shadow inside the film to express feelings about climate issues and to reflect on how human activity affects the environment.

The film's imagery builds a mental picture of nature and humanity: the things we do and use on Earth, from water and forests to oil and plastics, each affect the climate, and their different meanings surface over the course of the piece.

The shadow acts as a character through which the feelings of daily life are played out. Modern cities and modern things have already changed the world profoundly; as the shadow moves through these scenes, it reminds us of the ideas and experiences we hold in common.

The film combines simple visual elements onto a single stage and uses shadow as a concise visual language, so its message carries reliably from the screen to the people standing in front of it.

 

https://archive.aec.at/prix/showmode/63129/

→ Class Presentation

→ Artwork Website

 

Posted in 110-2 (Spring 2022) | Comments are closed.

2022.03.01 Report by 陳昭潔 – Algorithmic Perfumery

Algorithmic Perfumery – Ars Electronica 2019, Interactive Art+

→ Prix Archive Page

→ PPT

Author – Frederik Duerinck

Abstract – In Algorithmic Perfumery, the world of scent is explored by using the visitor’s input to train the creative capabilities of an automated system. Custom scents are created by a machine learning algorithm based on the unique data we feed it. The outcome is a unique scent generated and compounded on-site. By participating in the experience, visitors contribute to the ongoing research to improve the system and reinvent the future of perfumery. Generative perfume design is an emerging practice of the not-too-distant future. Algorithmic Perfumery not only ignites the senses, it also allows participants to walk away with a tangible and usable memory of the work. Individuals may complete a personality test lasting about 15 minutes, composed of standard questions and a few more focused on scent preference. After the participants’ answers are compiled, a code is generated. You proceed to a contraption lined with tubes of concentrates, type in your code, and the machine proceeds to mix the concentrates in amounts based on the data provided. And at the end of the assembly line, a small sample vial of your individually crafted scent awaits you. You may then review your feelings about the scent, and in this way the A.I. learns and refines its scent-crafting abilities. An inspiringly unique approach to a seldom represented creative process, Algorithmic Perfumery is indicative of the cohesive future between human ability and technological potential.
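
The questionnaire-to-vial step can be sketched as a linear mapping from a trait vector to pump amounts; the affinity matrix below is purely hypothetical and stands in for the system's learned model:

```python
import numpy as np

def formula_from_profile(profile, affinity, total_ml=2.0):
    """Map a personality/preference vector to pump amounts per concentrate.
    affinity[i, j] is ingredient i's weight for trait j (a stand-in for the
    learned model); amounts are normalized to fill a `total_ml` vial."""
    raw = np.clip(affinity @ profile, 0.0, None)  # no negative pump amounts
    return total_ml * raw / raw.sum()

# Hypothetical setup: 3 concentrates, 2 traits (openness, extraversion)
affinity = np.array([[2.0, 0.0],   # citrus tracks openness strongly
                     [1.0, 0.0],   # woody tracks openness weakly
                     [0.0, 4.0]])  # floral tracks extraversion
print(formula_from_profile(np.array([1.0, 0.0]), affinity))  # citrus-dominant vial
```

Participant feedback on the resulting scent would then be fed back as new training data, closing the loop the abstract describes.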

Posted in 110-2 (Spring 2022) | Comments are closed.

2022.03.01 Report by 陳思豪: The Deep Listener

Topic: The Deep Listener, PRIX ARS ELECTRONICA, THE 2021 WINNERS

Category: Computer Animation (CA)

Author: Jakob Kudsk Steensen (DK)

Abstract: The Deep Listener (2019) is an audio-visual ecological expedition through Kensington Gardens and Hyde Park, the area surrounding the Serpentine Galleries. Designed as an augmented reality and spatial audio work downloadable as an app for mobile devices, it is both a site-specific public artwork and a digital archive of species that live within the park. It pushes the utility of augmented reality and technological tools to transform our spatial understanding of the natural world. The commission expands upon Kudsk Steensen’s practice of merging the organic, ecological, and technological in the building of complex worlds in order to tell stories about our current environmental reality.

Keywords: Deep Listener, Serpentine Galleries, Animation, Interaction Device, Augmented Reality, Slow Media

→ Class Presentation

Reference

Posted in 110-2 (Spring 2022) | Comments are closed.

Course Introduction: Academic Year 110, Spring Semester

Seminar on Technology Art (科技藝術書報討論)

Instructor: Prof. 許素朱 (with other faculty)
<xn.techart@gmail.com>
Course website: http://www.fbilab.org/nthu/aet/seminar
Time: Tuesdays 18:30–20:30 (graduate course)
Location: Main campus, General Building II, Room 603 (next to the Interdisciplinary Master's Program office)

Course Description (課程目標)
This course leads students to ① follow the latest international trends in technology-art creation and research: students select and present papers and artworks drawn from the field's most important academic conferences, journals, and art exhibitions. The instructor assigns a reading list of papers and technology artworks, and through the presentations and discussion students build skills in artwork creation and technical R&D in the technology-art field. ② At scheduled times, faculty from multiple disciplines are invited to give cross-disciplinary talks and join the discussion. ③ The course also teaches the key skills of paper writing and submission.

Posted in 110-2 (Spring 2022) | Comments are closed.

2022.01.11 王聖銘 Report: MACHINE AUGURIES


Report: Link

Topic: MACHINE AUGURIES

Author: Dr. Alexandra Daisy Ginsberg, Johanna Just, Ness Lafoy, Ana Maria Nicolaescu

Reference: ARS Electronica Festival 2020 (Interactive Art)

Abstract:

Before sunrise, a redstart begins his solo with a warbling call. Other birds respond, together creating the dawn chorus: a back-and-forth that peaks thirty minutes before and after the sun emerges in the spring and early summer, as birds defend their territory and call for mates. Light and sound pollution from our 24-hour urban lifestyle affects birds, who are singing earlier, louder, for longer, or at a higher pitch. But only those species that adapt survive. Machine Auguries questions how the city might sound with changing, homogenizing, or diminishing bird populations.

In the multi-channel sound installation, a natural dawn chorus is taken over by artificial birds, their calls generated using machine learning. Solo recordings of chiffchaffs, great tits, redstarts, robins, thrushes, and entire dawn choruses were used to train two neural networks (a Generative Adversarial Network, or GAN), pitted against each other to sing. Reflecting on how birds develop their song from each other, a call and response of real and artificial birds spatializes the evolution of a new language. Samples taken from each stage (epoch) in the GAN’s training reveal the artificial birds’ growing lifelikeness.

The composition follows the arc of a dawn chorus, compressed into ten minutes. The listener experiences the sound of a fictional urban parkland, entering in the dim silvery light of pre-dawn. We start with a solo from a lone “natural” redstart. In response, from across the room, we hear an artificial redstart sing back, sampled from an early epoch. A “natural” robin joins the chorus, with a call and response set up between natural and artificial birds. The chorus rises as other species enter, reaching a crescendo five minutes in. As the decline starts and the room illuminates to a warm yellow, we realize that the artificial birds, which have gained sophistication in their song, are dominating.
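
The arc described above can be sketched as a simple event schedule in which each natural call is answered by an artificial one sampled from a progressively later GAN training epoch. Species order, timing, and epoch numbers here are illustrative, not the installation's actual score:

```python
def chorus_schedule(species, duration_min=10.0, epochs=(5, 50, 200, 800)):
    """Lay out the call-and-response arc: each species enters in turn,
    a natural call answered by an artificial one, with the artificial
    bird drawn from a later training epoch as the piece progresses."""
    events, slot = [], duration_min / (2 * len(species))
    for i, bird in enumerate(species):
        t = 2 * i * slot
        epoch = epochs[min(i * len(epochs) // len(species), len(epochs) - 1)]
        events.append((round(t, 2), "natural", bird))
        events.append((round(t + slot, 2), f"artificial (epoch {epoch})", bird))
    return events

for event in chorus_schedule(["redstart", "robin", "great tit", "chiffchaff"]):
    print(event)
```

Because later epochs produce more lifelike calls, the artificial voices in this schedule grow harder to distinguish from the natural ones as the ten minutes unfold, mirroring the piece's crescendo and takeover.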

Posted in 110-1 (Fall 2021) | Comments are closed.

2022.01.11 Report by 田子平


SIGGRAPH 2021

Topic: Common Datum

Author: Tobias Klein, Jane Prophet

PPT: https://docs.google.com/presentation/d/1eqoim5uIjCwtd-o3YjbkrfZ2MsCRF_OpiAp23OErySM/edit?usp=sharing

Abstract:

“Common Datum” is an environmentally reactive, hygroscopic sculpture. A series of suspended vessels continuously absorb the humidity in the exhibition — generated through the breath of the audience. Slowly, each 3D-printed condenser accumulates water that drips into a series of glass volumes. Even though all vessels are of individual shapes, locally absorbing moisture at a different rate, a common datum is created throughout all of them. The work articulates a confluence between traditional and digital craft in the context of environmental, participatory art.

Posted in 110-1 (Fall 2021) | Comments are closed.

2022.1.11 Report by 翁政弘: ElectroRing

Topic: ElectroRing: Subtle Pinch and Touch Detection with a Ring [pdf] [ppt] [paper]

Authors: Wolf Kienzle & Eric Whitmire (FRL Research, Redmond, WA, USA)

Abstract: We present ElectroRing, a wearable ring-based input device that reliably detects both onset and release of a subtle finger pinch, and more generally, contact of the fingertip with the user’s skin. ElectroRing addresses a common problem in ubiquitous touch interfaces, where subtle touch gestures with little movement or force are not detected by a wearable camera or IMU. ElectroRing’s active electrical sensing approach provides a step-function-like change in the raw signal, for both touch and release events, which can be easily detected using only basic signal processing techniques. Notably, ElectroRing requires no second point of instrumentation, but only the ring itself, which sets it apart from existing electrical touch detection methods. We built three demo applications to highlight the effectiveness of our approach when combined with a simple IMU-based 2D tracking system.
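
The "step-function-like change in the raw signal" that the abstract mentions can indeed be caught with very basic signal processing. A sketch of a first-difference threshold detector on a synthetic trace (an illustration, not the authors' implementation):

```python
import numpy as np

def detect_touch_events(raw, threshold):
    """Find step-like transitions in a 1-D sensor trace: a jump in the
    first difference above `threshold` is a touch onset, and a drop
    below -`threshold` is a release."""
    diff = np.diff(raw)
    onsets = (np.where(diff > threshold)[0] + 1).tolist()
    releases = (np.where(diff < -threshold)[0] + 1).tolist()
    return onsets, releases

# Synthetic trace: baseline ~0.1, fingertip contact pulls it to ~0.9
trace = np.array([0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.1, 0.1])
print(detect_touch_events(trace, threshold=0.5))  # ([3], [6])
```

Because the electrical signal changes abruptly at contact, even this naive detector separates onset and release cleanly, which is exactly why the approach needs no camera or IMU to sense the pinch itself.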

 

Posted in 110-1 (Fall 2021) | Comments are closed.

2022.1.11 Report by 楊元福: ChoreoMaster


SIGGRAPH 2021

Topic: ChoreoMaster: Choreography-Oriented Music-Driven Dance Synthesis
PPT

Author: Kang Chen et al.

Abstract:

Despite strong demand in the game and film industry, automatically synthesizing high-quality dance motions remains a challenging task. In this paper, we present ChoreoMaster, a production-ready music-driven dance motion synthesis system. Given a piece of music, ChoreoMaster can automatically generate a high-quality dance motion sequence to accompany the input music in terms of style, rhythm and structure. To achieve this goal, we introduce a novel choreography-oriented choreomusical embedding framework, which successfully constructs a unified choreomusical embedding space for both style and rhythm relationships between music and dance phrases. The learned choreomusical embedding is then incorporated into a novel choreography-oriented graph-based motion synthesis framework, which can robustly and efficiently generate high-quality dance motions following various choreographic rules. As a production-ready system, ChoreoMaster is sufficiently controllable and comprehensive for users to produce desired results. Experimental results demonstrate that dance motions generated by ChoreoMaster are accepted by professional artists.
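
A minimal sketch of what graph-based synthesis over a shared embedding space can look like: choose one dance phrase per music phrase by minimizing embedding distance plus a transition cost, Viterbi-style. Everything here (embeddings, costs, the function name) is illustrative, not ChoreoMaster's actual implementation:

```python
import numpy as np

def synthesize(music_emb, dance_emb, trans_cost, w=1.0):
    """Viterbi-style phrase selection: one dance phrase per music phrase,
    trading embedding distance (style/rhythm match) against the cost of
    cutting between dance phrases."""
    T, N = len(music_emb), len(dance_emb)
    match = np.linalg.norm(music_emb[:, None, :] - dance_emb[None, :, :], axis=2)
    cost, back = match[0].copy(), np.zeros((T, N), dtype=int)
    for t in range(1, T):
        total = cost[:, None] + w * trans_cost      # (prev phrase, next phrase)
        back[t] = np.argmin(total, axis=0)
        cost = total[back[t], np.arange(N)] + match[t]
    path = [int(np.argmin(cost))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

music = np.array([[0.0], [1.0], [0.0]])      # three music-phrase embeddings
dance = np.array([[0.0], [1.0]])             # two dance phrases, same space
smooth = np.array([[0.0, 5.0], [5.0, 0.0]])  # cutting between phrases is expensive
print(synthesize(music, dance, smooth))          # stays on one phrase
print(synthesize(music, dance, smooth * 0.02))   # cheap cuts: follows the music
```

Tuning the transition weight is one way such a system stays "sufficiently controllable": a high weight favors smooth continuous motion, a low one favors tight musical matching.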

Paper:

https://dl.acm.org/doi/abs/10.1145/3450626.3459932

https://netease-gameai.github.io/ChoreoMaster/Paper.pdf

Introduction:

https://blog.siggraph.org/2021/09/how-choreomaster-combines-cutting-edge-ai-and-graphics-technologies.html/

https://netease-gameai.github.io/ChoreoMaster/

Posted in 110-1 (Fall 2021) | Comments are closed.

2021.12.28 Report by 黃睿緯

PPT: https://docs.google.com/presentation/d/1H8qDtTGeQOsDFDornnaXGuTo3aOVEhtj/edit?usp=sharing&ouid=103926017209371204990&rtpof=true&sd=true

Respire: Virtual Reality Art with Musical Agent Guided by Respiratory Interaction [Leonardo Music Journal]

Website Link: https://kivanctatar.com/Respire

Video:

Posted in 110-1 (Fall 2021) | Comments are closed.

2021.12.28 Leonardo Journal Report by 古士宏: Stowaway City


Report: Link

Topic: Stowaway City: An Immersive Audio Experience for Multiple Tracked Listeners in a Hybrid Listening Environment

Author: Michael McKnight

Original Article:

https://pureadmin.qub.ac.uk/ws/portalfiles/portal/241455682/McKnight_DEV.pdf

Abstract:

Stowaway City is an immersive audio experience that combines electroacoustic composition and storytelling with extended reality. The piece was designed to accommodate multiple listeners in a shared auditory virtual environment. Each listener, based on their tracked position and rotation in space, wirelessly receives an individual binaurally decoded sonic perspective via open-back headphones. The sounds and unfolding narrative are mapped to physical locations in the performance area, which are only revealed through exploration and physical movement. Spatial audio is simultaneously presented to all listeners via a spherical loudspeaker array that supplements the headphone audio, thus forming a hybrid listening environment. The work is presented as a conceptual and technical design paradigm for creative sonic application of the technology in this medium. The author outlines a set of strategies that were used to realize the composition and technical affordances of the system.
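
Per-listener binaural rendering of the kind described (each tracked listener receives an individually decoded perspective) reduces, at its simplest, to computing a distance gain and a head-relative azimuth for each sound object. A sketch with assumed conventions (2-D positions in metres, yaw in degrees; real systems use full HRTF decoding):

```python
import math

def listener_render_params(listener_pos, listener_yaw_deg, source_pos, ref_dist=1.0):
    """Per-listener parameters for one sound object: inverse-distance
    gain, and the source azimuth relative to head yaw — the angle a
    binaural decoder would use to place the sound."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dy)
    gain = min(1.0, ref_dist / max(dist, 1e-6))   # inverse-distance law
    az = math.degrees(math.atan2(dy, dx)) - listener_yaw_deg
    return gain, (az + 180.0) % 360.0 - 180.0     # wrap to [-180, 180)

# A listener at the origin facing +x hears a source 2 m to the left
print(listener_render_params((0.0, 0.0), 0.0, (0.0, 2.0)))  # gain 0.5, azimuth 90°
```

Recomputing these parameters from each listener's tracked position and rotation is what lets every member of the audience hear their own sonic perspective on the same shared scene.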

Posted in 110-1 (Fall 2021) | Comments are closed.

2021.12.28 王聖銘 Report: First Impression: AI Understands Personality


Report: Link

Topic: First Impression: AI Understands Personality

Author: Xiaohui Wang, Xia Liang, Miao Lu, Jingyan Qin

Reference: ACM Multimedia 2020

Abstract:

When you first encounter a person, a mental image of that person is formed. First impression, an interactive art, is proposed to let AI understand human personality at first glance. The mental image is demonstrated by Beijing opera facial makeups, which shows the character personality with a combination of realism and symbolism. We build Beijing opera facial makeup dataset and semantic dataset of facial features to establish relationships among real faces, personalities and facial makeups. First impression detects faces, recognizes personality from facial appearance and finds the matching Beijing opera facial makeup. Finally, the morphing process from real face to facial makeup is shown to let users enjoy the process of AI understanding personality.
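
The matching step (recognized personality vector to closest facial makeup) can be sketched as nearest-neighbor search under cosine similarity; the trait vectors below are hypothetical, loosely inspired by the symbolic color conventions of Beijing opera makeup:

```python
import numpy as np

def match_makeup(personality, makeup_db):
    """Pick the Beijing-opera facial makeup whose personality profile
    is most similar (cosine similarity) to the recognized one."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(makeup_db, key=lambda name: cos(personality, makeup_db[name]))

db = {  # hypothetical trait vectors: (loyalty, ferocity, cunning)
    "red (loyal)":    np.array([0.9, 0.3, 0.1]),
    "black (fierce)": np.array([0.4, 0.9, 0.2]),
    "white (crafty)": np.array([0.1, 0.2, 0.9]),
}
print(match_makeup(np.array([0.2, 0.1, 0.8]), db))  # white (crafty)
```

In the installation, the selected makeup is then morphed onto the visitor's face, turning this lookup into the visible moment of "AI understanding personality".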

Posted in 110-1 (Fall 2021) | Comments are closed.