2020.03.31 Report by 劉學致


Reporter: 劉學致

PPT: PPT
Video: GTS

Topic: Taste controller: galvanic chin stimulation enhances, inhibits, and creates tastes
Authors:
Kazuma Aoyama, Kenta Sakurai, Akinobu Morishima, Taro Maeda, Hideyuki Ando
Reference: ACM SIGGRAPH 2018

Abstract:
Galvanic tongue stimulation (GTS) is a technology for changing and inducing taste sensations with electrical stimulation. Previous studies have shown that cathodal current stimulation induces two effects. The first is taste suppression, which weakens the taste induced by electrolytic materials during the stimulation. The second is taste enhancement, which makes the taste stronger shortly after the stimulation ends. These effects offer a promising route to taste emulation, ultimately allowing the strength of taste sensation to be controlled freely. Taste emulation has been considered for various applications, such as virtual reality and dieting. However, conventional GTS has several problems. For example, the duration of taste enhancement is too short for use in dieting, and electrodes must be attached inside the mouth. Moreover, conventional GTS can induce taste only in the mouth, not in the throat. Thus, this study and our associated demonstration introduce approaches to address these problems. Our approaches allow taste to be changed at will and make the effects persist for longer periods of time.

Posted in AY 108 Spring semester | Comments are closed.

2020.03.31 Report by 廖志唯

Reporter: 廖志唯
PPT: https://drive.google.com/file/d/17CMDrUC2Y-Wposv_7pKJ6Rnc6nDkAql5/view?usp=sharing

Topic: CHICAP
Authors: Yong-Ho Lee, Mincheol Kim, Hwang-Youn Kim, Dongmyoung Lee, Bum-Jae You
Reference: https://dl.acm.org/doi/10.1145/3214907.3214924

Video: https://www.youtube.com/watch?v=JKKucw4ATNY

Abstract:
In this research, we propose a cost-effective three-finger exoskeleton hand motion-capture device and a physics-engine-based hand interaction module for immersive manipulation of virtual objects. The developed device provides 12 DOF of finger-motion data through a unique bevel-gear structure and six 3D magnetic sensors. It shows an error in the relative distance between two fingertips of less than 2 mm and allows the user to reproduce precise hand motion while the complex joint data are processed in real time. We synchronize hand motion with a physics-engine-based interaction framework, which includes a grasp interpreter and multimodal feedback in virtual reality, to minimize penetration of the hand into an object. The system makes object manipulation feasible for a wide range of tasks in the virtual environment.
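The reported fingertip accuracy depends on reconstructing fingertip positions from the captured joint angles. As a minimal sketch of that idea only (not the paper's actual kinematic model), the snippet below uses planar two-link forward kinematics with hypothetical link lengths and joint angles to compute the relative distance between two fingertips:

```python
import numpy as np

def fingertip(base, joint_angles, link_lengths):
    """Planar forward kinematics: accumulate joint angles along the
    finger's links to locate the fingertip."""
    p = np.asarray(base, dtype=float)
    theta = 0.0
    for a, l in zip(joint_angles, link_lengths):
        theta += a
        p = p + l * np.array([np.cos(theta), np.sin(theta)])
    return p

# hypothetical link lengths (mm) and captured joint angles (rad)
thumb = fingertip([0.0, 0.0], [0.8, 0.4], [50.0, 30.0])
index = fingertip([20.0, 0.0], [2.0, 0.6], [45.0, 25.0])
pinch = float(np.linalg.norm(thumb - index))  # relative fingertip distance
```

In the actual device the joint angles come from the bevel-gear mechanism and the six 3D magnetic sensors, and the kinematic chain is three-dimensional, but the over/under-2 mm comparison is made on exactly this kind of derived fingertip distance.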

Posted in AY 108 Spring semester | Comments are closed.

2020.03.31 Report by 孟昕


Reporter: 孟昕

PPT: PPT
Topic: Headlight: egocentric visual augmentation by wearable wide projector
Author: Shunichi Kasahara
Reference: Emerging Technologies, SIGGRAPH 2018

Paper & video: ACM Digital Library

Abstract:
Visual augmentation of the real environment has the potential not only to display information but also to provide a new perception of the physical world. However, currently available mixed-reality technologies cannot provide a sufficient angle of view. We therefore introduce "HeadLight", a wearable projector system that provides wide egocentric visual augmentation. Our system consists of a small laser projector with a fisheye wide-conversion lens, a headphone, and a pose tracker. HeadLight provides a projection angle of approximately 105 degrees horizontally and 55 degrees vertically from the user's point of view. In this system, a three-dimensional virtual space consistent with the physical environment is rendered with a virtual camera based on the device's tracking information. By applying an inverse correction of the lens distortion and projecting the rendered image from the projector, HeadLight performs consistent visual augmentation in the real world. With HeadLight, we envision that physical phenomena humans could not otherwise perceive will become perceivable through visual augmentation.
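The inverse distortion correction can be illustrated with a simple lens model. Assuming, hypothetically, an ideal equidistant fisheye (radius r = f·θ), a point that the virtual pinhole camera renders at radius r_u = f·tan(θ) must be pre-warped to radius r_d = f·θ so that the lens maps it back onto the intended ray:

```python
import numpy as np

def predistort(xy, f=1.0):
    """Warp ideal pinhole image coordinates so that an equidistant
    fisheye lens (r = f * theta) projects them along the intended rays."""
    r_u = np.linalg.norm(xy, axis=1, keepdims=True)   # pinhole radius f*tan(theta)
    theta = np.arctan2(r_u, f)                        # ray angle off the optical axis
    r_d = f * theta                                   # fisheye radius for that ray
    scale = np.divide(r_d, r_u, out=np.ones_like(r_d), where=r_u > 0)
    return xy * scale

pts = np.array([[1.0, 0.0], [0.0, 0.0]])
warped = predistort(pts)   # off-axis points move inward; the center stays put
```

The real system would use the measured distortion profile of its conversion lens rather than this idealized model, but the pipeline is the same: render with a virtual camera, pre-warp, then project.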

Posted in AY 108 Spring semester | Comments are closed.

2020.03.31 Report by 趙立


Reporter: 趙立

PPT: PPT
Topic: CoGlobe
Authors: Sidney Fels, Ian Stavness, Qian Zhou, Dylan Fafard
Reference:
SIGGRAPH 2018 / ACM SIGGRAPH Blog / Paper
Video:
VIDEO

Abstract:
Fish Tank Virtual Reality (FTVR) creates a compelling 3D illusion for a single person by rendering to their perspective with head tracking. Typically, however, other participants cannot share in the experience, since they see a strangely distorted image when they look at the FTVR display, making it difficult to work and play together. To overcome this problem, we have created CoGlobe: a large spherical FTVR display for multiple users. Using CoGlobe, SIGGRAPH attendees will experience the latest advance in FTVR, which supports multiple people co-located in a shared space working and playing together, through two different multiplayer games and tasks. We have created a competitive two-person 3D Pong game (Figure 1b) in which attendees experience a highly interactive two-person game while looking at the CoGlobe. Onlookers can also watch using a variation of mixed reality with a tracked mobile smartphone; using a smartphone as a second screen registered to the same virtual world enables multiple people to interact together as well. We have also created a cooperative multi-person 3D drone game (Figure 1c) to illustrate cooperation in FTVR. Attendees will also see how effective co-located 3D FTVR is when cooperating on a complex 3D mental-rotation task (Figure 1d) and a path-tracing task (Figure 1a). CoGlobe overcomes the limited situational awareness of headset VR while retaining the benefits of cooperative 3D interaction, and is thus an exciting direction for the next wave of 3D displays for work and fun.
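The core of any FTVR display is re-rendering the scene from the tracked head position. For a flat display this is the classic off-axis (asymmetric-frustum) projection; the sketch below computes the frustum bounds from the eye position and three screen corners. This is only the flat-panel illustration of the idea — CoGlobe's spherical surface requires a generalized per-viewer calibration rather than a single planar frustum:

```python
import numpy as np

def offaxis_frustum(eye, ll, lr, ul, near=0.1):
    """Asymmetric frustum (left, right, bottom, top at the near plane)
    for an eye looking at a planar screen given by three corners:
    lower-left ll, lower-right lr, upper-left ul."""
    vr = lr - ll; vr = vr / np.linalg.norm(vr)   # screen right axis
    vu = ul - ll; vu = vu / np.linalg.norm(vu)   # screen up axis
    vn = np.cross(vr, vu)                        # screen normal, toward viewer
    va, vb, vc = ll - eye, lr - eye, ul - eye    # eye -> corner vectors
    d = -(va @ vn)                               # eye-to-screen distance
    left   = (vr @ va) * near / d
    right  = (vr @ vb) * near / d
    bottom = (vu @ va) * near / d
    top    = (vu @ vc) * near / d
    return left, right, bottom, top

# example: a viewer centered in front of a unit screen gets a symmetric frustum
eye = np.array([0.5, 0.5, 1.0])
l, r, b, t = offaxis_frustum(eye,
                             np.array([0.0, 0.0, 0.0]),
                             np.array([1.0, 0.0, 0.0]),
                             np.array([0.0, 1.0, 0.0]))
```

As the tracked head moves off-center, the frustum skews accordingly, which is what keeps the 3D illusion locked to each viewer's perspective.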

Posted in AY 108 Spring semester | Comments are closed.

2020.03.17 Report by 林楷育

  • Reporter:  Kai-yu Lin 林楷育
  • ppt: ppt
  • Topic: TORSO #1
  • Author: Peter Kutin (AT)
  • ABSTRACT
    TORSO #1 is a sound sculpture that is visually reminiscent of a klopotec, a windmill-like wooden construction that serves as a scarecrow in vineyards by mechanically generating sounds and vibrations. Here, an electro-acoustic system of four 100 V loudspeakers rotates at different speeds, generating feedback patterns and modulating sound signals and the spatial sound itself. The targeted acceleration and deceleration of the rotation of the four-voice system serves as the central compositional means for the 35-minute piece; the sculpture becomes an abstract, audiovisual instrument.
Posted in AY 108 Spring semester | Comments are closed.

2020.03.31 Report by 施佳瑩

Reporter: 施佳瑩
PPT: 20200331 seminar slides (施佳瑩)
Topic:
TeleSight: Enabling asymmetric collaboration in VR between HMD user and Non-HMD users
Authors: Taichi Furukawa, Daisuke Yamamoto, Moe Sugawa, Roshan Peiris, Kouta Minamizawa
Reference:
Emerging Technologies
https://dl.acm.org/doi/10.1145/3305367.3335040
https://dl.acm.org/doi/pdf/10.1145/3305367.3335040

Abstract:
TeleSight enables a cooperative experience with a non-HMD user in the real world by reproducing the VR space and the HMD user's avatar with a robot and a projection system. It aims to present embodied interaction between the virtual and the real.

Posted in AY 108 Spring semester | Comments are closed.

2020.03.17 Report by 鄭涵予

Dilate / Queensland University of Technology (AU), Credit: tom mesic

Reporter: 鄭涵予

PPT

Topic: Dilate
Authors:
Undergraduate Bachelor of Creative Industries:
Ruth Hawkins (AU), Daniel Kit Wei Tan (MY).
Undergraduate Bachelor of Fine Arts:
Reina Takeuchi (AU).
Honours Bachelor of Design:
Jess Greentree (AU), Peter Lloyd (AU), Tom Long (AU), Steven O’Hanlon-Rose (AU), Joash Teo (AU).

Reference:
https://ars.electronica.art/error/en/dilate/#collapseDates

https://spark.adobe.com/page/J9Bl3iPMiMKug/

Abstract:
Dilate is an interactive wearable that responds to human data, dilating, pulsating, and expanding while on a human body. Dilate places symbiosis at the forefront of exploration within wearable technology. Audiences activate the wearable through their interaction with it, creating an emotional response.

Posted in AY 108 Spring semester | Comments are closed.

2020.03.17 Report by 劉學致


Reporter: 劉學致
PPT: http://save.liuxuezhi.net/kids.pdf
Topic: KIDS
Authors:

Michael Frei | Playables, Mario von Rickenbach | Playables
(in alphabetical order)

Reference: website
Video: 01 02 03

Abstract:
*KIDS* is a game of crowds. The project consists of a short film, an interactive animation, and an art installation.
How do we define ourselves when we are all equal? Who is steering the crowd? What if it is heading in the wrong direction? Where does the individual end and the group begin? What is done by choice, and what under duress?
*KIDS* was made using traditional 2D hand-drawn line animation in black and white. The animation was assembled, composited, and choreographed in a game engine with a custom-made animation system in conjunction with physics simulations. The characters in a crowd behave much like matter: they attract and repel, lead and follow, grow and shrink, align and separate. They are defined purely by how they relate to one another, without being given distinguishable features.

Posted in AY 108 Spring semester | Comments are closed.

2020.03.17 Report by 孟昕

Reporter: 孟昕
PPT: PPT
Topic: LIGHT BARRIER 3RD EDITION
Authors: Mimi Son, Elliot Woods
Reference: http://archive.aec.at/prix/showmode/55667/

Abstract: The Light Barrier series by the studio Kimchi and Chips creates volumetric drawings in the air using hundreds of calibrated video projections. These light projections merge in a field of fog to create graphic objects that animate through physical space as they do in time.

The installations present a semi-material mode of existence, materializing objects from light. The third edition continues to exploit the confusion and non-conformities at the boundary between materials and non-materials, reality and illusion, and existence and absence. The viewer is presented with a surreal vision that advances the human instinct of duration and space. The name refers to the light barrier in relativistic physics, which separates things that are material from things that are light, and since 1983 has been used to specify the exact meaning of the metric system of spatial measure.

The 6-minute sequence employs the motif of the circle to travel through themes of birth, death, and rebirth, helping shift the audience into the new mode of existence. The artists often use the circle in their works to evoke the fundamentals of materials and the eternal connection between life and death.

The artists are interested in how impressionist painters were inspired by the introduction of photography to create ‘viewer-less images’. The installation allows images to arise from the canvas, creating painting outside of perspective. It is a direct approach to the artists’ theme of ‘drawing in the air’.

Posted in AY 108 Spring semester | Comments are closed.

2020.03.17 Report by 廖志唯

Reporter: 廖志唯

PPT: https://drive.google.com/file/d/1Ei73pblt6MI_-je5OZjSEgJ-VjEl5zmb/view?usp=sharing

Topic: THE AMERICA PROJECT
Author: Paul Vanouse
Reference: http://www.paulvanouse.com/ap/
https://vimeo.com/372808350
https://www.youtube.com/watch?v=dSQQ67oCHXs

Abstract:
*The America Project* is a biological art installation centered on a process called "DNA gel electrophoresis," also known as "DNA fingerprinting," which I have appropriated to produce recognizable images. Audiences first encounter what resembles a human-scale fountain or decanter, which is actually a spittoon to collect their spit. Viewers are offered a cup of saline and asked to swish it and then deposit it into the spittoon. During the opening, I extract the DNA from hundreds of spit samples, containing cheek cells and the cells' DNA all mixed together. The DNA is neither individuated nor retained; it is processed as a composite to make iconic DNA-fingerprint images of power, which are visible as live video projections of the electrophoresis gels throughout the exhibition.

For decades, we’ve been told that DNA was our source of individuation, difference, and legal identity, whereas I’m showing that our DNA is nearly identical; I’m asserting a manifesto of Radical Sameness. The spittoon is a bio-matter anonymizer for de-colonized citizen science. Everyone’s spit is mixed together, making individuation impossible; all collected samples are promiscuously commingled. What is visualized is our shared identity, our collectivity. The question is: from our shared biopower, are we merely the conduit for monarchical power, or will a power of the 99% emerge? The first images produced with the DNA are (1) a crown, symbolizing pure, top-down, rigid power; (2) an infinity sign, symbolizing endless potential, hope, and possibility; and lastly (3) the US flag. Its meaning, elusive and underdetermined at the time, now seems perplexing and foreboding in the wake of the US election. The project premiered in October/November 2016, a period in which the meaning of the subject/citizen and the meaning of America were being contested and perhaps subverted. The most poetic image was produced in error during the opening, when the DNA gel tore: the disintegrating crown resembled a turbulent cloud, suggesting an unexpected escape from the experimental norm.

The title was chosen to desolidify the concept of America. The term *Project* implies a goal-oriented mission, or experiment, and recollects the utopian plan for America as a melting pot in which the vast landscape would serve as an equalizer. In recent US elections, this utopian residue has catalyzed both visions of social progress (Obama), but also neo-imperialist doctrines of exceptionalism (Bush) and extreme nationalism and xenophobia (Trump). Extreme nationalism is a growing global problem and it is expected that the project will resonate in the European context. The DNA images will include additional icons of national power.

The flag image was created by doing bioinformatics in reverse: first determining the exact sizes of DNA needed to move at varied rates in a gel to form the star field. We amplified segments from a region of human DNA that nearly all humans share, so we could predict amplicon sizes using freely available online human-genome sequences. Producing perfect-length dense smears, or stripes, was also challenging and required completely reimagining DNA processing. Solon Morse and I used the troubleshooting section of a DNA-amplification handbook as a "how to", inverting best practices in which smears are what you try to avoid. Our belief is that the radical appropriation of molecular and bioinformatic tools by the arts can inform the use, deployment, and public perception of technology so vital to our understanding of proof, evidence, and identity.
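Working "in reverse" relies on the standard empirical observation that a fragment's migration distance in a gel decreases roughly linearly with the logarithm of its length. A minimal sketch of that inversion, with hypothetical calibration constants A and B (in practice fit to a DNA ladder run on the same gel), is:

```python
import math

# Hypothetical calibration constants: distance d in mm, size in base pairs.
A, B = 80.0, 20.0

def migration_distance(size_bp):
    """Empirical log-linear model of gel electrophoresis migration."""
    return A - B * math.log10(size_bp)

def fragment_size_for_distance(d_mm):
    """Invert the model: the fragment size that lands at distance d."""
    return 10 ** ((A - d_mm) / B)

# with these constants, a 1000 bp fragment migrates 20 mm,
# so a band wanted at 20 mm calls for a ~1000 bp amplicon
target_size = fragment_size_for_distance(20.0)
```

This is only the sizing logic; the artwork's actual "reverse bioinformatics" also involved choosing shared genomic regions whose amplified lengths match the computed sizes.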

Posted in AY 108 Spring semester | Comments are closed.

2020.03.17 Report by 楊元福


Topic: Voices from AI in Experimental Improvisation
PPT: ppt

Authors: Tomomi Adachi, Andreas Dzialocha, Marcello Lussana

Reference:
https://ars.electronica.art/outofthebox/en/voices-from-ai-in-experimental-improvisation/

Abstract:
Voices from AI in Experimental Improvisation is an attempt to improvise and interact with computer software that "learns" the performer's voice and musical behavior. The program behind it, named "tomomibot", is based on artificial-intelligence (AI) algorithms and enables the voice performer and artist Tomomi Adachi (human) to perform with his AI self, which learns over time from Tomomi's past concerts. The project is not only a musical experiment with a non-human performer but also an undertaking to make computer culture "audible." In giving "tomomibot" full agency in a human-machine interaction, the performance raises questions about the logic and politics of computers in relation to human culture. What we hear is the result of human software design and computational logic, carving out the limited space of these machines while listening to, interacting with, and learning from them.

Tomomi Adachi is a sound artist known for his intense, fragmented, sound-based improvisation style, which makes "tomomibot" more of a sound and noise machine than a "singing" software. This enables the program to learn from every sound source and type: what is the musical dramaturgy of orchestra music, or of war videos from YouTube? Through machine learning one can try to find and learn patterns in these sound documents and improvise musically with them, from the perspective of "tomomibot."

"Tomomibot" is software based on LSTM (Long Short-Term Memory) algorithms, a form of sequential neural network, that decides which sound to play next based on which live sounds it heard before. The software was designed and developed by Andreas Dzialocha. Experimenting with AI sound-synthesis algorithms (WaveNet, WaveRNN, FFTNet), the developer Marcello Lussana generated a large database of sounds that sound like Tomomi. These sounds serve as the sound vocabulary "tomomibot" uses to improvise with the human Tomomi.
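The "decide the next sound from the sounds heard so far" loop can be sketched with a bare LSTM step. The snippet below is a minimal numpy illustration of the mechanics only — the weights are random and untrained, and the five-class sound vocabulary and feature sizes are hypothetical, not tomomibot's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W, U, b stack the input/forget/cell/output
    gate parameters row-wise (4*n rows for hidden size n)."""
    n = h.size
    z = W @ x + U @ h + b
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:n]), sig(z[n:2*n]), sig(z[3*n:])
    g = np.tanh(z[2*n:3*n])          # candidate cell update
    c = f * c + i * g                # new cell state
    h = o * np.tanh(c)               # new hidden state
    return h, c

n_in, n_h, n_sounds = 8, 4, 5        # hypothetical sizes
W = 0.1 * rng.standard_normal((4 * n_h, n_in))
U = 0.1 * rng.standard_normal((4 * n_h, n_h))
b = np.zeros(4 * n_h)
V = 0.1 * rng.standard_normal((n_sounds, n_h))   # readout to sound classes

h, c = np.zeros(n_h), np.zeros(n_h)
for _ in range(3):                   # feed features of three live sounds
    x = rng.standard_normal(n_in)
    h, c = lstm_step(x, h, c, W, U, b)
next_sound = int(np.argmax(V @ h))   # index of the sound to play next
```

In performance, the predicted index would select a clip from the Tomomi-like sound database synthesized by the WaveNet/WaveRNN/FFTNet experiments.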

Posted in AY 108 Spring semester | Comments are closed.

2020.03.17 Report by 趙立

Reporter: 趙立
PPT: PPT
Topic: Resurrecting the Sublime
Authors: Christina Agapakis / Ginkgo Bioworks / Alexandra Daisy Ginsberg / Sissel Tolaas, with support from IFF Inc.
Reference: website
Video: https://www.youtube.com/watch?v=8AJTah6Lfh4#t=4m59s

Abstract:
Using DNA from flower specimens at Harvard University’s Herbaria, the Ginkgo team used synthetic biology to predict gene sequences that encode for fragrance-producing enzymes. Tolaas then reconstructed the flowers’ smells, using identical or comparative smell molecules. We know which molecules the flowers may have produced, but the amounts are also lost. In Ginsberg’s installation designs, the fragments of each flower’s smell mix: there is no “exact” smell. The lost landscape is reduced to its geology and the flower’s smell. Entering the installation, the human connects the two and, contrary to a natural history museum, becomes the specimen on view.

Resurrecting the smell of extinct flowers so that humans may again experience something we destroyed is awesome and terrifying—it evokes the “sublime.” But this is not de-extinction. Instead, smell and reconstructed landscapes reveal the complex interplay of species and places that no longer exist. *Resurrecting the Sublime* asks us to contemplate our actions, and potentially change them for the future.

Posted in AY 108 Spring semester | Comments are closed.

2020.03.17 Report by 施佳瑩


Reporter: 施佳瑩
PPT: 20200317 seminar slides (Earthlink)

Topic: Earthlink
Author: Saša Spačal (supported by the City of Ljubljana and the Ministry of Culture of the Republic of Slovenia)
Reference:
Earthlink — Out of the Box
Earthlink's website

Abstract:
Earthlink aims to serve as an entry point to a post-anthropocentric constellation of connections and environmental relations, enabling a more dynamic and sustainable perception of human activity within the environment.

Posted in AY 108 Spring semester | Comments are closed.

2020.03.10 Report by 趙立

Reporter: 趙立
PPT:

Topic: Resurrecting the Sublime

Authors:
Christina Agapakis / Ginkgo Bioworks / Alexandra Daisy Ginsberg / Sissel Tolaas
with support from IFF Inc.

Reference: website
Abstract:
Using DNA from flower specimens at Harvard University’s Herbaria, the Ginkgo team used synthetic biology to predict gene sequences that encode for fragrance-producing enzymes. Tolaas then reconstructed the flowers’ smells, using identical or comparative smell molecules. We know which molecules the flowers may have produced, but the amounts are also lost. In Ginsberg’s installation designs, the fragments of each flower’s smell mix: there is no “exact” smell. The lost landscape is reduced to its geology and the flower’s smell. Entering the installation, the human connects the two and, contrary to a natural history museum, becomes the specimen on view.

Resurrecting the smell of extinct flowers so that humans may again experience something we destroyed is awesome and terrifying—it evokes the “sublime.” But this is not de-extinction. Instead, smell and reconstructed landscapes reveal the complex interplay of species and places that no longer exist. *Resurrecting the Sublime* asks us to contemplate our actions, and potentially change them for the future.

Posted in AY 108 Spring semester | Comments are closed.

2019.12.17 Report by 孟昕

Reporter: 孟昕
PPT: PPT

Topic:
Weaving Objects: Spatial Design and Functionality of 3D Woven Textiles

Authors:
Claire Harvey, Emily Holtzman, Joy Ko, Brooks Hagan, Rundong Wu, Steve Marschner, and David Kessler

Reference: https://dl.acm.org/citation.cfm?id=3320137&picked=formats

Abstract:
3D weaving is an industrial process for creating volumetric material through the organized multi-axis interlacing of yarns. The overall complexity and rarity of 3D weaving have limited its market to aerospace and military applications, and current textile-design software does not offer the ease of iteration through physical trialing that designers need to access this medium. This paper describes the development of a series of volumetric textile samples, culminating in the creation of a fully formed shoe, and a collaboration with computer scientists to develop a visualization tool that addresses consumer-accessory design opportunities for this medium.
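The kind of structure such a visualization tool must represent can be sketched as a small boolean array. The representation below is a hypothetical simplification (not the paper's data model): weave[layer, pick, end] records whether a warp end passes over a weft pick, here for two stacked plain-weave layers whose interlacings alternate:

```python
import numpy as np

ends, picks = 4, 4
# plain[i, j] == 1 where warp end j passes over weft pick i (plain weave)
plain = np.add.outer(np.arange(picks), np.arange(ends)) % 2
# two stacked layers, offset so their interlacings alternate;
# a real 3D weave would add binder warps tying the layers together
weave = np.stack([plain, 1 - plain])   # shape (layers, picks, ends)
```

A draft in this form can be rendered layer by layer, or swept along a yarn path to preview the volumetric interlacing before committing to a physical trial.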

Posted in AY 108 Fall semester | Comments are closed.