2023.06.06 Presentation by 劉士達 – Wander: An AI-driven Chatbot to Visit the Future Earth

Paper title: Wander: An AI-driven Chatbot to Visit the Future Earth

Authors: Yuqian Sun, Chenhang Cheng, Ying Xu, Yihua Li, Chang Hee Lee, Ali Asadipour

Source: ACM MM’22 https://dl.acm.org/doi/10.1145/3503161.3549971

Presentation PPT: [PPT] [PDF]

Abstract

This artwork presents an intelligent chatbot called Wander. The work uses knowledge-based story generation to power a narrative AI chatbot on everyday communication platforms, producing interactive fiction with the most accessible natural-language input: text messages. On social media platforms such as Discord and WeChat, Wander generates a science-fiction-style travelogue about the future Earth, including text, images, and global coordinates (GPS) based on real-world locations (e.g., Paris). The journeys are visualised in real time on an interactive map that is updated with participants’ data. Drawing on Viktor Shklovsky’s defamiliarization technique, we present how an AI agent can become a storyteller through the ordinary messages of daily life and lead participants to see the world from new perspectives. The website of this work is: https://wander001.com/
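
To make the interaction concrete, here is a minimal Python sketch of what one Wander-style chat turn could look like: a message naming a real-world place is answered with a generated travelogue entry plus GPS coordinates that a map client could plot. The gazetteer, the prompt wording, and the `generate_text` stub are illustrative assumptions, not the authors' actual pipeline.

```python
from dataclasses import dataclass
from typing import Optional

# Tiny stand-in gazetteer mapping place names to (latitude, longitude).
GAZETTEER = {
    "Paris": (48.8566, 2.3522),
    "Taipei": (25.0330, 121.5654),
}

@dataclass
class TravelogueEntry:
    place: str
    lat: float
    lon: float
    text: str

def generate_text(prompt: str) -> str:
    """Stub for a large-language-model call (e.g. a text-generation API)."""
    return f"[generated story for prompt: {prompt!r}]"

def wander_turn(message: str) -> Optional[TravelogueEntry]:
    """Handle one chat message: if it names a known place, generate a
    future-Earth travelogue entry anchored to that place's coordinates."""
    for place, (lat, lon) in GAZETTEER.items():
        if place.lower() in message.lower():
            prompt = (f"A traveller in a distant future sends a message from "
                      f"{place} on the future Earth. Describe what they see.")
            return TravelogueEntry(place, lat, lon, generate_text(prompt))
    return None  # no known location mentioned; the bot would ask for one

entry = wander_turn("Take me to Paris")
if entry:
    print((entry.lat, entry.lon))
    print(entry.text)
```

In the artwork itself, each entry also carries a generated image and is visualised in real time on the project's interactive map.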

Keywords: Intelligent Interactive System, Co-creative AI, Chatbot, Metaverse, Gaming


2023.05.23 Presentation by 洪寶惜 – GANksy aims to produce images that bear resemblance to works by the UK’s most famous street artist

The Art Newspaper:
An AI bot has figured out how to draw like Banksy. And it’s uncanny!

Article source: An AI bot has figured out how to draw like Banksy. And it’s uncanny (theartnewspaper.com)

Presentation PPT: [PPT]

Abstract

To create these images, Round has used a type of computerised machine learning framework known as a GAN (generative adversarial network). This specific GAN was trained for five days using a portfolio of hundreds of images of (potentially) Banksy’s work, until it was able to produce an image that bears a superficial likeness to the originals.
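
For readers unfamiliar with the technique, the sketch below shows the adversarial training loop the article alludes to: a generator learns to imitate the training portfolio while a discriminator learns to tell real images from generated ones, each improving against the other. The PyTorch framework, network sizes, and optimiser settings are generic assumptions; the article does not disclose GANksy's actual configuration.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64 * 3  # assumed sizes

G = nn.Sequential(  # generator: noise -> flattened image
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, img_dim), nn.Tanh(),
)
D = nn.Sequential(  # discriminator: flattened image -> real/fake logit
    nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, latent_dim))

    # Discriminator step: real images labelled 1, generated images labelled 0.
    d_loss = (loss_fn(D(real_images), torch.ones(batch, 1)) +
              loss_fn(D(fake_images.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into labelling fakes as real.
    g_loss = loss_fn(D(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# One step on a random stand-in batch; a real run would iterate for days
# over the scraped portfolio, as the article describes.
train_step(torch.rand(16, img_dim) * 2 - 1)
```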


2023.05.23 Presentation by 巫思萱 – Co-Writing with Opinionated Language Models Affects Users’ Views

Paper title: Co-Writing with Opinionated Language Models Affects Users’ Views

Authors: Maurice Jakesch, et al.

Source: CHI’23 https://dl.acm.org/doi/10.1145/3544548.3581196

Presentation PPT: [PDF]

ABSTRACT

If large language models like GPT-3 preferably produce a particular point of view, they may influence people’s opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write – and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants’ writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.
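
Mechanically, an "opinionated" writing assistant of the kind studied here can be as simple as a language model steered by a fixed directive prepended to the participant's draft. The sketch below illustrates that idea only; the directive wording and the `complete` stub are assumptions, not the authors' exact GPT-3 setup.

```python
# Two opposing assistant configurations, one per treatment condition.
OPINION_DIRECTIVES = {
    "pro":  "Continue the essay, arguing that social media is good for society.",
    "anti": "Continue the essay, arguing that social media is bad for society.",
}

def complete(prompt: str) -> str:
    """Stub for a language-model completion call."""
    return "[model continuation]"

def suggest(draft: str, condition: str) -> str:
    """Return an opinionated continuation for the participant's draft."""
    directive = OPINION_DIRECTIVES[condition]
    return complete(f"{directive}\n\nEssay so far:\n{draft}\n\nContinuation:")

print(suggest("Social media has changed how we talk to each other.", "pro"))
```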


2023.05.23 Presentation by 李艷琳 – CLOUD STUDIES

CLOUD STUDIES

PRIX ARS ELECTRONICA 2021 – Artificial Intelligence & Life Art – Golden Nica

 

Authors:

Forensic Architecture (FA)

→ Original Link:

→ Artwork Video

→ Class Presentation PPT

 

Abstract: 

Civil society rarely has privileged access to classified information, making the information that is available from ‘open sources’ crucial in identifying and analyzing human rights violations by states and militaries. The wealth of newly-available data—images and videos pulled from the open source internet—around which critical new methodologies are being built, demands new forms of image literacy, an ‘investigative aesthetics,’ to read traces of violence in fragmentary data drawn from scenes of conflict and human rights violations. The results of these new methodologies have been significant, and Forensic Architecture (FA) has been among the pioneers in this field, as open source investigation (OSI) has impacted international justice mechanisms, mainstream media, and the work of international human rights NGOs and monitors. The result has been a new era for human rights: what has been called ‘Human Rights 3.0.’

In Forensic Architecture’s work, physical and digital models are more than representations of real-world locations—they function as analytic or operative devices. Models help us to identify the relative location of images, camera positions, actions, and incidents, revealing what parts of the environment are ‘within the frame’ and what remains outside it, thereby giving our investigators a fuller picture of how much is known, or not, about the incident they are studying.

There remain, however, modes of violence that are not easily captured even ‘within the frame.’ Recent decades have seen an increase in airborne violence, typified by the extensive use of chlorine gas and other airborne chemicals against civilian populations in the context of the Syrian civil war. Increasingly, tear gas is used to disperse civilians (often gathered in peaceful protest), while aerial herbicides destroy arable land and displace agricultural communities, and large-scale arson eradicates forests to create industrial plantations, generating vast and damaging smoke clouds. Mobilized by state and corporate powers, toxic clouds affect the air we breathe across different scales and durations, from urban squares to continents, momentary incidents to epochal latencies. These clouds are not only meteorological but political events, subject to debate and contestation. Unlike kinetic violence, where a single line can be drawn between a victim and a ‘smoking gun’, in analyzing airborne violence, causality is hard to demonstrate; in the study of clouds, the ‘contact’ and the ‘trace’ drift apart, carried away by winds or ocean currents, diffused into the atmosphere.  Clouds are transformation embodied, their dynamics elusive, governed by non-linear behavior and multi-causal logics.

One response by FA has been to work with the Department of Mechanical Engineering at Imperial College London (ICL), world leaders in fluid dynamics simulation. Together, FA and ICL have pioneered new methodologies for meeting the complex challenges to civil society posed by airborne violence. The efficacy of such an approach in combatting environmental violence has already been demonstrated—FA’s investigation into herbicidal warfare in Gaza was cited by the UN—and has significant future potential, as state powers are increasingly drawn to those forms of violence and repression that are difficult to trace.

Cloud Studies brings together eight recent investigations by Forensic Architecture, each examining different types of toxic clouds and the capacity of states and corporations to occupy airspace and create unliveable atmospheres. Combining digital modelling, machine learning, fluid dynamics, and mathematical simulation in the context of active casework, it serves as a platform for new human rights research practices directed at those increasingly prevalent modes of ‘cloud-based,’ airborne violence. Following a year marked by environmental catastrophe, a global pandemic, political protest, and an ongoing migrant crisis, Cloud Studies offers a new framework for considering the connectedness of global atmospheres, the porousness of state borders and what Achille Mbembe terms ‘the universal right to breathe.’


2023.05.23 Presentation by 劉士達 – Tangible Globes for Data Visualisation in Augmented Reality

Paper title: Tangible Globes for Data Visualisation in Augmented Reality

Authors: Kadek Ananta Satriadi, et al.

Source: CHI’22 https://doi.org/10.1145/3491102.3517715

Presentation PPT: [PPT] [PDF]

 

Abstract

Head-mounted augmented reality (AR) displays allow for the seamless integration of virtual visualisation with contextual tangible references, such as physical (tangible) globes. We explore the design of immersive geospatial data visualisation with AR and tangible globes. We investigate the “tangible-virtual interplay” of tangible globes with virtual data visualisation, and propose a conceptual approach for designing immersive geospatial globes. We demonstrate a set of use cases, such as augmenting a tangible globe with virtual overlays, using a physical globe as a tangible input device for interacting with virtual globes and maps, and linking an augmented globe to an abstract data visualisation. We gathered qualitative feedback from experts about our use case visualisations, and compiled a summary of key takeaways as well as ideas for envisioned future improvements. The proposed design space, example visualisations and lessons learned aim to guide the design of tangible globes for data visualisation in AR.

 

Keywords: immersive analytics, tangible user interface, augmented reality, geographic visualization


2023.05.09 Presentation by 洪寶惜 – The “Blinding Light” (《光。盲》): A Techno Artwork to Reflect Technology-Mediated Bias

Thesis title: 《光。盲》 (The “Blinding Light”) – A Techno Artwork to Reflect Technology-Mediated Bias

Author: 張瑜真 (Chang, Yu-Chen), 2022

Master's Program in Techno Art, National Cheng Kung University

Abstract

Thesis Link

Presentation PPT

Keywords: technology-mediated illusion


2023.05.09 Presentation by 李艷琳 – Constellation‧Multiple Lines‧Assemblages: Aesthetic Language of the avant-garde.net

星叢‧複線‧集合:網路前衛藝術美學語言

Constellation‧Multiple lines‧Assemblages: Aesthetic language of the avant-garde.net

 

Author: 林欣怡

Doctoral dissertation, Institute of Applied Arts, National Chiao Tung University

 

Abstract:

"Constellation‧Multiple Lines‧Assemblages: Aesthetic Language of the avant-garde.net" is written as a palimpsest of the concepts of constellation-writing, spiritual posture, object-orientation, data subjectivity, net materiality, and multiple extended embodiment. These concepts bring together the three horizons of aesthetic language, philosophical concepts, and artworks to produce a conceptual assemblage that adheres to the net itself: on one hand mirroring the inherently polymorphous, plural character of the net, and on the other pointing to the openness and transformability of net-aesthetic concepts. Through the interplay of these three horizons and their nodes, the dissertation links heterogeneous paths for thinking about avant-garde net art. Its first facet, "the aesthetics of avant-garde net art," adopts the "constellation" form from Adorno's aesthetic theory as its mode of discourse, while also examining how cyberspace connects into vast networks to produce "momentum," and how this momentum fashions a spiritual stance. Building on constellation, momentum, spiritual posture, and the perspective of the body, the dissertation then derives the data subjectivity reflected in collective creation on the net, and unfolds the mutability of "the concept as object," forming a net-materiality aesthetic language of object- and materiality-oriented exhibition and performance. Finally, it discusses the context of net-art practice in Taiwan, seeking points of difference and connection, and through them articulates the constitution of Taiwanese net art.

 

Keywords:

avant-garde net art, constellation, assemblage, data subjectivity, net materiality

 

→ Original Link:

→ Class Presentation PPT


2023.05.09 Presentation by 劉士達 – Shells and Stages for Actuated TUIs: Reconfiguring and Orchestrating Dynamic Physical Interaction

Thesis title: Shells and Stages for Actuated TUIs: Reconfiguring and Orchestrating Dynamic Physical Interaction

Author: Ken Nakagaki

Thesis date: September 2021. Submitted to the Program in Media Arts and Sciences, School of Architecture and Planning on August 20, 2021, in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Media Arts and Sciences.

Source: MIT Media Lab – Tangible Media Group https://dspace.mit.edu/handle/1721.1/142836

Presentation PPT: [PPT] [PDF]

Abstract

Research on Actuated and Shape-Changing Tangible User Interfaces (TUIs) in the field of Human Computer Interaction (HCI) has been explored widely to design embodied interactions using digital computation. While advanced technical approaches, such as robotics and material science, have led to many concrete instances of Actuated TUIs, a single actuated hardware system, in reality, is inherently limited by its fixed configuration, thus limiting the reconfigurability, adaptability, and expressibility of its interactions.

In my thesis, I introduce novel hardware augmentation methods, Shells and Stages, for Actuated TUI hardware to expand and enrich their interactivity and expressibility for dynamic physical interactions. Shells act as passive mechanical attachments for Actuated TUIs that can extend, reconfigure and augment the interactivity and functionality of the hardware. Stages are physical platforms that allow Actuated TUIs to propel themselves on a platform to create novel physical expression based on the duality of front stage and back stage. These approaches are inspired by theatrical performances, computational and robotic architecture, biological systems, physical tools and science fiction. While Shells and Stages can individually augment the interactivity and expressibility of the Actuated TUI system, the combination of the two enables advanced physical expression based on combined shell-swapping and stage-transitioning. By introducing these novel modalities of Shells and Stages, the thesis expands and contributes to a new paradigm of Inter-Material / Device Interaction in the domain of Actuated TUIs.

The thesis demonstrates the concepts of Shells and Stages on existing Actuated TUI hardware, including pin-based shape displays and self-propelled swarm user interfaces. Design and implementation methods are introduced to fabricate mechanical shells with different properties, and to orchestrate a swarm of robots on the stage with arbitrary configurations. To demonstrate the expanded interactivity and reconfigurability, a variety of interactive applications are presented via prototypes, ranging from digital data interaction and reconfigurable physical environments to storytelling and tangible gaming. Overall, my research introduces a new A-TUI design paradigm that incorporates self-actuating hardware (Actuated TUIs) and passively actuated mechanical modules (Shells) together with surrounding physical platforms (Stages). In doing so, my research envisions a future in which computational technology is coupled seamlessly with our physical environment. This next generation of TUIs, by interweaving multiple HCI research streams, aims to provide endless possibilities for reconfigurable tangible and embodied interactions enabled by fully expressive and functional movements and forms.


2023.05.09 Presentation by 巫思萱 – Designing and Deploying Robotic Companions to Improve Human Psychological Wellbeing

Designing and Deploying Robotic Companions to Improve Human Psychological Wellbeing

Author: Sooyeon Jeong

Submitted to the Program in Media Arts and Sciences, School of Architecture and Planning, on June 29, 2022, in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Media Arts and Sciences

 

Thesis Link

Presentation PPT

 

Abstract:
Globally, more than 264 million people of all ages are affected by depression, which has become a leading cause of disability. Several interactive technologies for mental health have been developed to make various therapeutic services more accessible and scalable. However, most are designed to engage users only within therapy and intervention tasks. This thesis presents social robots that deliver interactive positive psychology interventions and build rapport with people over time as helpful companions to improve psychological wellbeing. Two long-term deployment studies explored and evaluated how these robotic agents could improve people’s psychological wellbeing in real-world contexts. In Study 1, a robotic coach provided seven positive psychology interventions for college students in on-campus dormitory settings and showed significant association with improvements in students’ psychological wellbeing, mood, and motivation to change. In Study 2, we deployed our robots in 80 people’s homes across the U.S. during the COVID-19 pandemic and evaluated the efficacy of a social robot that delivers wellbeing interventions as a peer-like companion rather than an expert coach. The companion-like robot was shown to be the most effective in building a positive therapeutic alliance with people and resulted in enhanced psychological wellbeing, improved readiness for change, and reduced negative affect. We further explored how traits, such as personality and age, influence the intervention outcomes and participants’ engagement with the robot. The two long-term in-the-wild studies offer valuable insights into design challenges and opportunities for companion AI agents that personalize mental health interventions and agent behaviors based on users’ traits and behavioral cues for better mental health outcomes.


2023.04.25 Presentation by 洪寶惜 – A Design Framework for Smart Glass Augmented Reality Experiences in Heritage Sites

Author: Mariza Dima, Brunel University London, UK

Source: ACM Journal on Computing and Cultural Heritage, 2022. https://dl.acm.org/doi/10.1145/3490393 [PDF]

Presentation PPT: [PPT]

Abstract

Despite the growing applications of smart glass Augmented Reality (AR) in heritage, there is not a framework that can serve as a base for designing meaningful and educational immersive heritage experiences. This article proposes such a prototype design framework for AR experiences in heritage sites, drawing on literature that connects affective experiences with learning and practically exploring AR as a non-didactic storytelling medium. Smart glass AR is considered here an important technology milestone for creating affective interactions, one that offers visitors/viewers new ways to experience, embody, and have a physical and social interaction with a localized past and learn about it.


2023.04.25 Presentation by 李艷琳 – RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions

RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions

Authors:

Yunlong Wang, Shuyuan Shen, Brian Y Lim

National University of Singapore, Singapore

 

Abstract:

Generative AI models have shown impressive ability to produce images with text prompts, which could benefit creativity in visual art creation and self-expression. However, it is unclear how precisely the generated images express contexts and emotions from the input texts. We explored the emotional expressiveness of AI-generated images and developed RePrompt, an automatic method to refine text prompts toward precise expression of the generated images. Inspired by crowdsourced editing strategies, we curated intuitive text features, such as the number and concreteness of nouns, and trained a proxy model to analyze the feature effects on the AI-generated image. With model explanations of the proxy model, we curated a rubric to adjust text prompts to optimize image generation for precise emotion expression. We conducted simulation and user studies, which showed that RePrompt significantly improves the emotional expressiveness of AI-generated images, especially for negative emotions.
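
The pipeline the abstract describes (score a prompt on curated text features such as noun count and concreteness, then edit the prompt by rubric until those features reach favourable values) can be illustrated with a toy version. The word lists, feature definitions, and thresholds below are invented stand-ins; in the paper, the rubric is derived from explanations of a trained proxy model rather than hand-set rules.

```python
# Toy lexicons standing in for a real concreteness resource.
CONCRETE_NOUNS = {"tree", "river", "stone", "window", "dog"}
ABSTRACT_NOUNS = {"sadness", "freedom", "idea", "loneliness"}

def features(prompt: str) -> dict:
    """Compute crude prompt features: noun count and concreteness ratio."""
    words = prompt.lower().split()
    nouns = [w for w in words if w in CONCRETE_NOUNS | ABSTRACT_NOUNS]
    concrete = sum(w in CONCRETE_NOUNS for w in nouns)
    return {"noun_count": len(nouns),
            "concreteness": concrete / len(nouns) if nouns else 0.0}

def reprompt(prompt: str, min_nouns: int = 2, min_concreteness: float = 0.5) -> str:
    """Rubric-style edit: pad the prompt with concrete nouns until the
    feature targets are met (a stand-in for the paper's learned rubric)."""
    words = prompt.split()
    for noun in CONCRETE_NOUNS:
        f = features(" ".join(words))
        if f["noun_count"] >= min_nouns and f["concreteness"] >= min_concreteness:
            break
        words.append(noun)
    return " ".join(words)

# An abstract, emotion-only prompt gets grounded with concrete imagery.
print(reprompt("overwhelming sadness"))
```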

 

Keywords:

Text-to-image generative model, prompt engineering, AI-generated visual art, emotion expression, explainable AI

 

→ Original Link:

→ Author Website:

→ Class Presentation PPT

 


2023.04.25 Presentation by 劉士達 – LearnIoTVR: An End-to-End Virtual Reality Environment Providing Authentic Learning Experiences for Internet of Things

Paper title: LearnIoTVR: An End-to-End Virtual Reality Environment Providing Authentic Learning Experiences for Internet of Things

Authors: Zhengzhe Zhu, Ziyi Liu, Youyou Zhang, Lijun Zhu, Joey Huang, Ana M Villanueva, Xun Qian, Kylie Peppler, Karthik Ramani

Source: ACM CHI 2023 https://doi.org/10.1145/3544548.3581396 [PDF]

Presentation PPT: [PPT] [PDF]

ABSTRACT

The rapid growth of Internet-of-Things (IoT) applications has generated interest from many industries and a need for graduates with relevant knowledge. An IoT system is comprised of spatially distributed interactions between humans and various interconnected IoT components. These interactions are contextualized within their ambient environment, thus impeding educators from recreating authentic tasks for hands-on IoT learning. We propose LearnIoTVR, an end-to-end virtual reality (VR) learning environment which helps students acquire IoT knowledge through immersive design, programming, and exploration of real-world environments empowered by IoT (e.g., a smart house). Students start the learning process by installing the virtual IoT components we created in different locations inside the VR environment, so that learning is situated in the same context where the IoT is applied. With our custom-designed 3D block-based language, students can program IoT behaviors directly within VR and get immediate feedback on their programming outcome. In the user study, we evaluated learning outcomes among students using LearnIoTVR with a pre- and post-test to understand to what extent engagement in LearnIoTVR leads to gains in programming skills and IoT competencies. Additionally, we examined which aspects of LearnIoTVR support usability and the learning of programming skills compared to a traditional desktop-based learning environment. The results from these studies were promising, and the user feedback we gathered provides inspiration for further expansion of the system.
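
As a rough illustration of the kind of behaviour students author with the block-based language (written here in plain Python, outside VR, with an invented block schema and device API rather than the paper's own), a "when sensor crosses threshold, then actuate devices" block could be interpreted like this:

```python
# One authored block: when the temperature rises above a threshold,
# turn on the fan and turn off the light.
program = {
    "when": {"sensor": "temperature", "above": 28.0},
    "do":   [{"device": "fan", "action": "on"},
             {"device": "light", "action": "off"}],
}

sensors = {"temperature": 30.5}            # stand-in sensor readings
devices = {"fan": "off", "light": "on"}    # stand-in device states

def run_block(block: dict) -> None:
    """Evaluate one 'when ... do ...' block against current sensor values."""
    cond = block["when"]
    if sensors[cond["sensor"]] > cond["above"]:
        for step in block["do"]:
            devices[step["device"]] = step["action"]

run_block(program)
print(devices)  # {'fan': 'on', 'light': 'off'}
```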

KEYWORDS

Virtual Reality, IoT, Block-based Programming, Project-based Learning, Immersive Programming, Embodied Interaction

 


2023.04.25 Presentation by 巫思萱 – Sympathetic Wear

Sympathetic Wear

ACM SIGGRAPH 2022 Art Gallery

- Paper Link

- PPT

Authors:
Junichi Kanebako, Naoya Watabe, Miki Yamamura, Haruki Nakamura, Keisuke Shuto, Hiroko Uchiyama

ABSTRACT:

In situations where people must maintain physical distance from one another and rely on communication through digital screens, we sometimes feel a sense of absence and loneliness. Sympathetic Wear is an artwork that supplements communication through digital displays and considers the person on the other side of the network. When we are sad or in pain, the action of having our backs rubbed can provide comfort. Adopting the back as our theme, Sympathetic Wear brings gentle healing to people’s minds and bodies by creating a soft tactile sensation on the back that is invisible on screen.


Lecture (4/11): An Introduction to the Latest Techniques in 3D Scene Representation and Generative Art


Title: An Introduction to the Latest Techniques in 3D Scene Representation and Generative Art
Time: April 11, 2023, 6:30 PM
Venue: Room R603, General Building, National Tsing Hua University
Speaker: Dr. 楊元福

Abstract:
3D scene representation and generative art are among the most exciting and rapidly developing research areas in artificial intelligence. 3D scene representation is the most important foundational technology for building the metaverse: it uses 3D models and algorithms to digitally represent objects in the real world. Generative art refers to using machine-learning algorithms and neural networks to generate artworks of all kinds, such as paintings, music, and poetry. 3D scene representation techniques help people easily create detailed, realistic 3D objects such as characters, animals, and buildings, while generative-art techniques let people create novel and imaginative 3D artworks unlike anything in the real world.
Beyond artistic creation, the combination of 3D scene representation and generative art can be applied in many practical fields, such as architectural design, game development, and virtual reality. We can use 3D scene representation to create realistic architectural models and use generative-art techniques to add innovative artistic styles and details to them. However, works created with these techniques can sometimes show defects, such as insufficiently precise details or a weak sense of realism. Moreover, these techniques demand substantial computation and resources, so the costs of training and inference are high.
3D scene representation and generative art are among the most promising research directions in artificial intelligence today. They can be applied in many fields, offering people richer, more realistic, and more creative artistic experiences. As the technology continues to develop and mature, we believe it will play an increasingly important role in the art and technology of the future.

About the speaker (楊元福, PhD candidate):
• AAID/MLAD Data Scientist
• CIT (Continuous Improvement Team) judge and facilitator
• 2020 Machine Learning and Statistics Competition, First Runner-up
• 2018 Image Recognition Competition, First Place
• 2015 tsmc Patent Award
• 2015 tsmc Best Improvement Engineer
• 2011/2015/2016 CIT Competition, First Award

• Reviewer for CVPR / ICCV / IEEE TNNLS
• 2021 National Tsing Hua University International Paper Award
• 2020 Advanced Semiconductor Manufacturing Conference (ASMC), Best Paper Award (New York)
• 2019–2022 Advanced Semiconductor Manufacturing Conference (ASMC), accepted papers (New York)
• 2022 Conference on Computer Vision and Pattern Recognition (CVPR), accepted paper (New Orleans)
• 2021 Ars Electronica Festival – Medium.Permeation (Linz, Austria)
• NFT art creation


2023.03.21 Presentation by 洪寶惜 – Health Greeter Kiosk: Tech-enabled Signage to Encourage Face Mask Use and Social Distancing

Health Greeter Kiosk: Tech-enabled Signage to Encourage Face Mask Use and Social Distancing

Authors:

  • Max Hudnell – Computer Vision Engineer
  • Steven King – Associate Professor

UNC, Reese Innovation Lab (USA)

Abstract:

COVID-19 has been the cause of a global health crisis over the last year. High transmission rates of the virus threaten to cause waves of infections which have the potential to overwhelm hospitals, leaving infected individuals without treatment. The World Health Organization (WHO) endorses two primary preventative measures for reducing transmission rates: the usage of face masks and adherence to social distancing [World Health Organization 2021]. In order to increase population adherence to these measures, we designed the Health Greeter Kiosk: a form of digital signage. Traditional physical signage has been used throughout the pandemic to enforce COVID-19 mandates, but it lacks population engagement and can easily go unnoticed. We designed this kiosk with the intent to reinforce these COVID-19 prevention mandates while also considering the necessity of population engagement. Our kiosk encourages engagement by providing visual feedback based on analysis from its computer vision software. This software integrates real-time face mask and social distance detection on a low-budget computer, without the need for a GPU. Our kiosk also collects statistics, relevant to the WHO mandates, which can be used to develop well-informed reopening strategies.
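
A CPU-only vision loop of the kind the abstract describes might look like the following sketch: classic Haar-cascade face detection (which needs no GPU), a stubbed-out mask classifier, and a pixel-to-metre heuristic for pairwise distance checks. The thresholds, the face-width scale factor, and the choice of OpenCV cascade are assumptions; the authors' actual models and calibration are not described in the abstract.

```python
import itertools
import cv2

# Bundled OpenCV frontal-face detector; runs in real time on a CPU.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

FACE_WIDTH_M = 0.16      # assumed real-world face width, used for scale
MIN_DISTANCE_M = 1.8     # assumed social-distancing threshold

def looks_masked(face_img) -> bool:
    """Stub for a lightweight mask/no-mask classifier on a face crop."""
    return False

def analyse_frame(frame) -> dict:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    masked = sum(looks_masked(frame[y:y+h, x:x+w]) for x, y, w, h in faces)

    violations = 0
    for (x1, y1, w1, _), (x2, y2, w2, _) in itertools.combinations(faces, 2):
        # Convert pixel distance to metres using average face width as scale.
        scale = FACE_WIDTH_M / ((w1 + w2) / 2)
        dist = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 * scale
        if dist < MIN_DISTANCE_M:
            violations += 1
    return {"faces": len(faces), "masked": masked,
            "distance_violations": violations}

# Analyse one webcam frame; a kiosk would loop and render feedback on screen.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(analyse_frame(frame))
cap.release()
```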

 

→ Original Link:

→ Class Presentation PPT

 
