
Commit 1e73a09

update papers
1 parent ecabefc commit 1e73a09


58 files changed (+632, -192 lines)
Lines changed: 55 additions & 0 deletions
@@ -0,0 +1,55 @@
---
layout: publication
year: 2026
month: 05
selected: true
coming-soon: true
hidden: false
external: false
# link: https://dl.acm.org/doi/10.1145/3613904.3642394
pdf: https://doi.org/10.1145/3772318.3791794
title: "Auditorily Embodied Conversational Agents: Effects of Spatialization and Situated Audio Cues on Presence and Social Perception"
authors:
- Yi Fei Cheng
- Jarod Bloch
- Alexander Wang
- Andrea Bianchi
- Anusha Withana
- Anhong Guo
- Laurie M. Heller
- David Lindlbauer
# blog:
# doi: 10.1145/3772318.3791794
venue_location: Barcelona, Spain
venue_url: https://chi2026.acm.org/
venue_tags:
- ACM CHI
type:
- Conference
tags:
- Presence
- Embodiment
- Agents
- Spatial Audio
venue: ACM CHI

#video-thumb: fukfCSvmo44
#video-30sec: 7K3eouLCcSw
#video-suppl: fukfCSvmo44
#video-talk-5min: l9ycUrf50TE
#video-talk-15min: x2wLScb5OIo

bibtex: "@inproceedings{Cheng2026AuditoryAgents, \n
author = {Cheng, Yi Fei and Bloch, Jarod and Wang, Alexander and Bianchi, Andrea and Withana, Anusha and Guo, Anhong and Heller, Laurie M. and Lindlbauer, David}, \n
title = {Auditorily Embodied Conversational Agents: Effects of Spatialization and Situated Audio Cues on Presence and Social Perception}, \n
year = {2026}, \n
publisher = {Association for Computing Machinery}, \n
address = {New York, NY, USA}, \n
keywords = {Presence, Embodiment, Agents, Spatial Audio}, \n
location = {Barcelona, Spain}, \n
series = {CHI '26} \n
}"
---

Embodiment can enhance conversational agents, such as increasing their perceived presence. This is typically achieved through visual representations of a virtual body; however, visual modalities are not always available, such as when users interact with agents using headphones or display-less glasses. In this work, we explore auditory embodiment. By introducing auditory cues of bodily presence -- through spatially localized voice and situated Foley audio from environmental interactions -- we investigate how audio alone can convey embodiment and influence perceptions of a conversational agent. We conducted a 2 (spatialization: monaural vs. spatialized) × 2 (Foley: none vs. Foley) within-subjects study, where participants (n=24) engaged in conversations with agents. Our results show that spatialization and Foley increase co-presence, but reduce users’ perceptions of the agent’s attention and other social attributes.
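
For orientation, front matter like the above is typically consumed by Liquid templates in the site's listing pages. The sketch below is a minimal, hypothetical example of such a loop; the actual layout is not part of this commit, and the `site.publications` collection name, markup, and filters are assumptions.

<!-- Hypothetical sketch only: the real publication layout is not in this diff. -->
{% assign pubs = site.publications | sort: "year" | reverse %}
{% for pub in pubs %}
  {% unless pub.hidden %}
    <h3>{{ pub.title }}</h3>
    <div>{{ pub.authors | join: ", " }}.</div>
    <span class="b">{{ pub.venue }} {{ pub.year }}</span>
    {% if pub["coming-soon"] %}
      <i>More info coming soon.</i>
    {% elsif pub.pdf %}
      <a href="{{ pub.pdf }}">PDF</a>
    {% endif %}
  {% endunless %}
{% endfor %}

The `_site/index.html` diff below shows output consistent with such a loop: the `coming-soon: true` flag corresponds to the rendered "More info coming soon." placeholder.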
Two binary image files added (1.57 MB and 443 KB); previews not rendered.

_site/feed.xml

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
- <?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.0">Jekyll</generator><link href="http://localhost:4000/feed.xml" rel="self" type="application/atom+xml" /><link href="http://localhost:4000/" rel="alternate" type="text/html" /><updated>2026-01-26T17:11:25-05:00</updated><id>http://localhost:4000/feed.xml</id><title type="html">CMU Augmented Perception Lab</title><subtitle>Augmented Perception Lab at Carnegie Mellon University in Pittsburgh.</subtitle></feed>
+ <?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.0">Jekyll</generator><link href="http://localhost:4000/feed.xml" rel="self" type="application/atom+xml" /><link href="http://localhost:4000/" rel="alternate" type="text/html" /><updated>2026-01-26T17:20:12-05:00</updated><id>http://localhost:4000/feed.xml</id><title type="html">CMU Augmented Perception Lab</title><subtitle>Augmented Perception Lab at Carnegie Mellon University in Pittsburgh.</subtitle></feed>
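
This change, like the timestamp-only diffs in the remaining `_site/**` pages below, is a regeneration artifact rather than a hand edit: rebuilding the site (likely with `jekyll serve`, judging by the `localhost:4000` URLs) stamps the new build time into the feed's `<updated>` element and into each page's `article:published_time` and JSON-LD `dateModified`/`datePublished` fields.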

_site/index.html

Lines changed: 33 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -195,6 +195,39 @@ <h3 class="mt0">
      </div>
    </div>

+   <div class="mv4-ns flex flex-row-ns flex-column">
+
+
+     <div class="h3 mv3 mr3-ns mb2 mb0-ns flex-shrink-0 preview-image ba b--black-05 db"
+       style="background-image: url('/assets/publications/2026-auditory_agents_thumb.png')">
+
+
+     </div>
+
+     <div class="measure-wide mv3 min-width-front">
+
+       <div class="mb2">
+         <h3 class="mt0">
+
+           Auditorily Embodied Conversational Agents: Effects of Spatialization and Situated Audio Cues on Presence and Social Perception
+
+         </h3>
+       </div>
+
+       <div class="mb2">
+         Yi Fei Cheng, Jarod Bloch, Alexander Wang, Andrea Bianchi, Anusha Withana, Anhong Guo, Laurie M. Heller, David Lindlbauer.</div>
+
+       <span class="b">ACM CHI 2026</span>
+
+
+       <div class="mt3">
+         <i>More info coming soon.</i>
+       </div>
+
+
+     </div>
+   </div>
+
_site/publications.html

Lines changed: 145 additions & 47 deletions
Large diffs are not rendered by default.

_site/publications/2013-suggero.html

Lines changed: 2 additions & 2 deletions
@@ -50,9 +50,9 @@
  <meta property="og:url" content="http://localhost:4000/publications/2013-suggero.html" />
  <meta property="og:site_name" content="CMU Augmented Perception Lab" />
  <meta property="og:type" content="article" />
- <meta property="article:published_time" content="2026-01-26T17:11:25-05:00" />
+ <meta property="article:published_time" content="2026-01-26T17:20:12-05:00" />
  <script type="application/ld+json">
- {"author":{"@type":"Person","name":"David Lindlbauer"},"url":"http://localhost:4000/publications/2013-suggero.html","description":"Modifying a digital sketch may require multiple selections before a particular editing tool can be applied. Especially on large interactive surfaces, such interactions can be fatiguing. Accordingly, we propose a method, called Suggero, to facilitate the selection process of digital ink. Suggero identifies groups of perceptually related drawing objects. These “perceptual groups” are used to suggest possible extensions in response to a person’s initial selection. Two studies were conducted. First, a background study investigated participant’s expectations of such a selection assistance tool. Then, an empirical study compared the effectiveness of Suggero with an existing manual technique. The results revealed that Suggero required fewer pen interactions and less pen movement, suggesting that Suggero minimizes fatigue during digital sketching.","@type":"BlogPosting","headline":"Perceptual grouping: selection assistance for digital sketching","dateModified":"2026-01-26T17:11:25-05:00","datePublished":"2026-01-26T17:11:25-05:00","mainEntityOfPage":{"@type":"WebPage","@id":"http://localhost:4000/publications/2013-suggero.html"},"@context":"https://schema.org"}</script>
+ {"author":{"@type":"Person","name":"David Lindlbauer"},"url":"http://localhost:4000/publications/2013-suggero.html","description":"Modifying a digital sketch may require multiple selections before a particular editing tool can be applied. Especially on large interactive surfaces, such interactions can be fatiguing. Accordingly, we propose a method, called Suggero, to facilitate the selection process of digital ink. Suggero identifies groups of perceptually related drawing objects. These “perceptual groups” are used to suggest possible extensions in response to a person’s initial selection. Two studies were conducted. First, a background study investigated participant’s expectations of such a selection assistance tool. Then, an empirical study compared the effectiveness of Suggero with an existing manual technique. The results revealed that Suggero required fewer pen interactions and less pen movement, suggesting that Suggero minimizes fatigue during digital sketching.","@type":"BlogPosting","headline":"Perceptual grouping: selection assistance for digital sketching","dateModified":"2026-01-26T17:20:12-05:00","datePublished":"2026-01-26T17:20:12-05:00","mainEntityOfPage":{"@type":"WebPage","@id":"http://localhost:4000/publications/2013-suggero.html"},"@context":"https://schema.org"}</script>
  <!-- End Jekyll SEO tag -->

  </head>

_site/publications/2014-chair.html

Lines changed: 2 additions & 2 deletions
@@ -50,9 +50,9 @@
  <meta property="og:url" content="http://localhost:4000/publications/2014-chair.html" />
  <meta property="og:site_name" content="CMU Augmented Perception Lab" />
  <meta property="og:type" content="article" />
- <meta property="article:published_time" content="2026-01-26T17:11:25-05:00" />
+ <meta property="article:published_time" content="2026-01-26T17:20:12-05:00" />
  <script type="application/ld+json">
- {"author":{"@type":"Person","name":"Kathrin Probst"},"url":"http://localhost:4000/publications/2014-chair.html","description":"During everyday office work we are used to controlling our computers with keyboard and mouse, while the majority of our body remains unchallenged and the physical workspace around us stays largely unattended. Addressing this untapped potential, we explore the concept of turning a flexible office chair into a ubiquitous input device. To facilitate daily desktop work, we propose the utilization of semaphoric chair gestures that can be assigned to specific application functionalities. The exploration of two usage scenarios in the context of focused and peripheral interaction demonstrates high potential of chair gestures as additional input modality for opportunistic, hands-free interaction.","@type":"BlogPosting","headline":"A chair as ubiquitous input device: exploring semaphoric chair gestures for focused and peripheral interaction","dateModified":"2026-01-26T17:11:25-05:00","datePublished":"2026-01-26T17:11:25-05:00","mainEntityOfPage":{"@type":"WebPage","@id":"http://localhost:4000/publications/2014-chair.html"},"@context":"https://schema.org"}</script>
+ {"author":{"@type":"Person","name":"Kathrin Probst"},"url":"http://localhost:4000/publications/2014-chair.html","description":"During everyday office work we are used to controlling our computers with keyboard and mouse, while the majority of our body remains unchallenged and the physical workspace around us stays largely unattended. Addressing this untapped potential, we explore the concept of turning a flexible office chair into a ubiquitous input device. To facilitate daily desktop work, we propose the utilization of semaphoric chair gestures that can be assigned to specific application functionalities. The exploration of two usage scenarios in the context of focused and peripheral interaction demonstrates high potential of chair gestures as additional input modality for opportunistic, hands-free interaction.","@type":"BlogPosting","headline":"A chair as ubiquitous input device: exploring semaphoric chair gestures for focused and peripheral interaction","dateModified":"2026-01-26T17:20:12-05:00","datePublished":"2026-01-26T17:20:12-05:00","mainEntityOfPage":{"@type":"WebPage","@id":"http://localhost:4000/publications/2014-chair.html"},"@context":"https://schema.org"}</script>
  <!-- End Jekyll SEO tag -->

  </head>

_site/publications/2014-tracs.html

Lines changed: 2 additions & 2 deletions
@@ -50,9 +50,9 @@
  <meta property="og:url" content="http://localhost:4000/publications/2014-tracs.html" />
  <meta property="og:site_name" content="CMU Augmented Perception Lab" />
  <meta property="og:type" content="article" />
- <meta property="article:published_time" content="2026-01-26T17:11:25-05:00" />
+ <meta property="article:published_time" content="2026-01-26T17:20:12-05:00" />
  <script type="application/ld+json">
- {"author":{"@type":"Person","name":"David Lindlbauer"},"url":"http://localhost:4000/publications/2014-tracs.html","description":"We present Tracs, a dual-sided see-through display system with controllable transparency. Traditional displays are a constant visual and communication barrier, hindering fast and efficient collaboration of spatially close or facing co-workers. Transparent displays could potentially remove these barriers, but introduce new issues of personal privacy, screen content privacy and visual interference. We therefore propose a solution with controllable transparency to overcome these problems. Tracs consists of two see-through displays, with a transparency-control layer, a backlight layer and a polarization adjustment layer in-between. The transparency-control layer is built as a grid of individually addressable transparency-controlled patches, allowing users to control the transparency overall or just locally. Additionally, the locally switchable backlight layer improves the contrast of LCD screen content. Tracs allows users to switch between personal and collaborative work fast and easily and gives them full control of transparent regions on their display.","@type":"BlogPosting","headline":"Tracs: transparency-control for see-through displays","dateModified":"2026-01-26T17:11:25-05:00","datePublished":"2026-01-26T17:11:25-05:00","mainEntityOfPage":{"@type":"WebPage","@id":"http://localhost:4000/publications/2014-tracs.html"},"@context":"https://schema.org"}</script>
+ {"author":{"@type":"Person","name":"David Lindlbauer"},"url":"http://localhost:4000/publications/2014-tracs.html","description":"We present Tracs, a dual-sided see-through display system with controllable transparency. Traditional displays are a constant visual and communication barrier, hindering fast and efficient collaboration of spatially close or facing co-workers. Transparent displays could potentially remove these barriers, but introduce new issues of personal privacy, screen content privacy and visual interference. We therefore propose a solution with controllable transparency to overcome these problems. Tracs consists of two see-through displays, with a transparency-control layer, a backlight layer and a polarization adjustment layer in-between. The transparency-control layer is built as a grid of individually addressable transparency-controlled patches, allowing users to control the transparency overall or just locally. Additionally, the locally switchable backlight layer improves the contrast of LCD screen content. Tracs allows users to switch between personal and collaborative work fast and easily and gives them full control of transparent regions on their display.","@type":"BlogPosting","headline":"Tracs: transparency-control for see-through displays","dateModified":"2026-01-26T17:20:12-05:00","datePublished":"2026-01-26T17:20:12-05:00","mainEntityOfPage":{"@type":"WebPage","@id":"http://localhost:4000/publications/2014-tracs.html"},"@context":"https://schema.org"}</script>
  <!-- End Jekyll SEO tag -->

  </head>

_site/publications/2015-creature-teacher.html

Lines changed: 2 additions & 2 deletions
@@ -50,9 +50,9 @@
  <meta property="og:url" content="http://localhost:4000/publications/2015-creature-teacher.html" />
  <meta property="og:site_name" content="CMU Augmented Perception Lab" />
  <meta property="og:type" content="article" />
- <meta property="article:published_time" content="2026-01-26T17:11:25-05:00" />
+ <meta property="article:published_time" content="2026-01-26T17:20:12-05:00" />
  <script type="application/ld+json">
- {"author":{"@type":"Person","name":"Andreas Fender"},"url":"http://localhost:4000/publications/2015-creature-teacher.html","description":"We present Creature Teacher, a performance-based animation system for creating cyclic movements. Users directly manipulate body parts of a virtual character by using their hands. Creature Teacher’s generic approach makes it possible to animate rigged 3D models with nearly arbitrary topology (e.g., non-humanoid) without requiring specialized user-to-character mappings or predefined movements. We use a bimanual interaction paradigm, allowing users to select parts of the model with one hand and manipulate them with the other hand. Cyclic movements of body parts during manipulation are detected and repeatedly played back - also while animating other body parts. Our approach of taking cyclic movements as an input makes mode switching between recording and playback obsolete and allows for fast and seamless creation of animations. We show that novice users with no animation background were able to create expressive cyclic animations for initially static virtual 3D creatures.","@type":"BlogPosting","headline":"Creature Teacher: A Performance-Based Animation System for Creating Cyclic Movements","dateModified":"2026-01-26T17:11:25-05:00","datePublished":"2026-01-26T17:11:25-05:00","mainEntityOfPage":{"@type":"WebPage","@id":"http://localhost:4000/publications/2015-creature-teacher.html"},"@context":"https://schema.org"}</script>
+ {"author":{"@type":"Person","name":"Andreas Fender"},"url":"http://localhost:4000/publications/2015-creature-teacher.html","description":"We present Creature Teacher, a performance-based animation system for creating cyclic movements. Users directly manipulate body parts of a virtual character by using their hands. Creature Teacher’s generic approach makes it possible to animate rigged 3D models with nearly arbitrary topology (e.g., non-humanoid) without requiring specialized user-to-character mappings or predefined movements. We use a bimanual interaction paradigm, allowing users to select parts of the model with one hand and manipulate them with the other hand. Cyclic movements of body parts during manipulation are detected and repeatedly played back - also while animating other body parts. Our approach of taking cyclic movements as an input makes mode switching between recording and playback obsolete and allows for fast and seamless creation of animations. We show that novice users with no animation background were able to create expressive cyclic animations for initially static virtual 3D creatures.","@type":"BlogPosting","headline":"Creature Teacher: A Performance-Based Animation System for Creating Cyclic Movements","dateModified":"2026-01-26T17:20:12-05:00","datePublished":"2026-01-26T17:20:12-05:00","mainEntityOfPage":{"@type":"WebPage","@id":"http://localhost:4000/publications/2015-creature-teacher.html"},"@context":"https://schema.org"}</script>
  <!-- End Jekyll SEO tag -->

  </head>
