CENTIENTS PUBS


“The Impact of Work from Home (WFH) on Work Productivity and Worker Experience,” Journal of Work

Awada M, Lucas G, Becerik-Gerber B, Roll S.

2021

With the COVID-19 pandemic, organizations embraced Work From Home (WFH). An important component of transitioning to WFH is the effect on workers, particularly related to their productivity and work experience. The objective of this study is to examine how worker-, workspace-, and work-related factors affected productivity and time spent at a workstation on a typical WFH day during the pandemic. An online questionnaire was designed and administered to collect the necessary information. Data from 988 respondents were included in the analyses. Overall perception of productivity level among workers did not change relative to their in-office productivity before the pandemic. Female, older, and high-income workers were likely to report increased productivity. Productivity was positively influenced by better mental and physical health statuses, having a teenager, increased communication with coworkers, and having a dedicated room for work. The number of hours spent at a workstation increased by approximately 1.5 hours during a typical WFH day. Longer hours were reported by individuals who had school-age children, owned an office desk or an adjustable chair, and had adjusted their work hours. The findings highlight key factors for employers and employees to consider for improving the WFH experience.


Workplace environments have a significant impact on worker performance, health, and well-being. With machine learning capabilities, artificial intelligence (AI) can be developed to automate individualized adjustments to work environments (e.g., lighting, temperature) and to facilitate healthier worker behaviors (e.g., posture). Worker perspectives on incorporating AI into office workspaces are largely unexplored. Thus, the purpose of this study was to explore office workers' views on including AI in their office workspace. Six focus group interviews with a total of 45 participants were conducted. Interview questions were designed to generate discussion on benefits, challenges, and pragmatic considerations for incorporating AI into office settings. Sessions were audio-recorded, transcribed, and analyzed using an iterative approach. Two primary constructs emerged. First, participants shared perspectives related to preferences and concerns regarding communication and interactions with the technology. Second, numerous conversations highlighted the dualistic nature of a system that collects large amounts of data; that is, the potential benefits for behavior change to improve health and the pitfalls of trust and privacy. Across both constructs, there was an overarching discussion related to the intersections of AI with the complexity of work performance. Numerous thoughts were shared relative to future AI solutions that could enhance the office workplace. This study's findings indicate that the acceptability of AI in the workplace is complex and dependent upon the benefits outweighing the potential detriments. Office worker needs are complex and diverse, and AI systems should aim to accommodate individual needs.


Unfortunately, active shooter incidents are on the rise in the United States. With recent technological advancements, virtual reality (VR) experiments could serve as an effective method to prepare civilians and law enforcement personnel for such scenarios. However, for VR experiments to be effective for active shooter training and research, they must be able to evoke emotional and physiological responses in the way that live active shooter drills and events do. The objective of this study is thus to test the effectiveness of an active shooter VR experiment in eliciting emotional and physiological responses. Additionally, we consider different locomotion techniques (i.e., walk-in-place and controller) and explore their impact on users’ sense of presence. The results suggest that the VR active shooter experiment in this study can induce emotional arousal and increase the heart rate of participants immersed in the virtual environment. Furthermore, compared to the controller, the walk-in-place technique resulted in higher emotional arousal in terms of negative emotions and a stronger sense of presence. The study presents a foundation for future active shooter experiments, as it supports the ecological validity of using VR for active shooter incident-related work for the purposes of training or research.


Objective: To understand impacts of social, behavioral and physical factors on well-being of office workstation users during COVID-19 work from home (WFH).

Methods: A questionnaire was deployed from April 24 to June 11, 2020, and 988 valid responses were collected. Linear regression, multinomial logistic regression, and chi-square tests were used to understand factors associated with overall physical and mental health statuses and with the number of new physical and mental health issues.

Results: Decreased overall physical and mental well-being after transitioning to WFH was associated with physical exercise, food intake, communication with coworkers, children at home, distractions while working, adjusted work hours, workstation set-up, and satisfaction with workspace indoor environmental factors.

Conclusion: This study highlights factors that impact workers’ physical and mental health well-being while WFH and provides a foundation for considering how to best support a positive WFH experience.
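A minimal sketch of the kind of analysis described in the Methods above, written in Python with statsmodels and SciPy. The data, file-free setup, and column names are hypothetical placeholders for illustration, not the study's actual variables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# Synthetic stand-in for the questionnaire data (column names are assumptions).
rng = np.random.default_rng(0)
n = 988
df = pd.DataFrame({
    "physical_wellbeing": rng.normal(3.5, 0.8, n),       # 1-5 self-rating
    "exercise_hours": rng.uniform(0, 7, n),
    "coworker_communication": rng.integers(1, 6, n),
    "mental_health_change": rng.integers(0, 3, n),        # 0=decreased, 1=unchanged, 2=improved
    "dedicated_room": rng.integers(0, 2, n),
    "distractions": rng.integers(1, 6, n),
    "children_at_home": rng.integers(0, 2, n),
    "new_health_issue": rng.integers(0, 2, n),
})

# Linear regression: overall physical well-being vs. behavioral factors
print(smf.ols("physical_wellbeing ~ exercise_hours + coworker_communication",
              data=df).fit().summary())

# Multinomial logistic regression: change in mental health status vs. workspace factors
print(smf.mnlogit("mental_health_change ~ dedicated_room + distractions",
                  data=df).fit().summary())

# Chi-square test of independence: children at home vs. any new health issue
chi2, p, dof, _ = chi2_contingency(pd.crosstab(df["children_at_home"], df["new_health_issue"]))
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")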


Heating, ventilation, and air conditioning (HVAC) systems account for 43% of building energy consumption, yet only 38% of commercial building occupants are satisfied with the thermal environment. The primary reasons for low occupant satisfaction are that HVAC operations neither integrate occupant comfort requirements nor control the thermal environment at the individual level. Personal comfort systems (PCSs) enable local control of the thermal environment around each occupant. However, fully manual control of a PCS can be inefficient, and a fully automated PCS reduces an occupant’s perceived control over the environment, which can in turn lead to lower satisfaction. A better solution might lie somewhere between fully manual and fully automated environmental control. In this article, we describe the development and implementation of an Internet-of-Things (IoT)-based intelligent agent that learns individual occupant comfort requirements and controls the thermal environment using a PCS (i.e., a local fan and a heater). We tested different levels of automation in which control is shared between an intelligent agent and the end user. Our results show that PCS use improves occupant satisfaction and that including some level of automation can improve occupant satisfaction beyond what is possible with a manually operated PCS. Among the levels of automation investigated, inquisitive automation, where the user approves or declines the control actions of the intelligent agent before execution, led to the highest occupant satisfaction with the thermal environment.
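As a rough sketch of the inquisitive-automation idea (not the paper's implementation), the loop below has an agent propose a fan or heater action from a learned set point and execute it only if the occupant approves; the comfort model, device interface, and deadband adjustment are illustrative assumptions.

from dataclasses import dataclass

class PCS:
    """Assumed interface to a local fan/heater endpoint (illustrative only)."""
    def execute(self, action: str) -> None:
        print(f"[PCS] executing: {action}")

@dataclass
class ComfortModel:
    preferred_temp_c: float = 23.0   # learned set point (assumed value)
    deadband_c: float = 1.0          # tolerance around the set point

    def suggest_action(self, ambient_temp_c: float):
        if ambient_temp_c > self.preferred_temp_c + self.deadband_c:
            return "fan_on"
        if ambient_temp_c < self.preferred_temp_c - self.deadband_c:
            return "heater_on"
        return None

def ask_user(action: str) -> bool:
    # In a deployed system this would be a prompt on the occupant's device.
    return input(f"Agent suggests '{action}'. Approve? [y/n] ").strip().lower() == "y"

def control_step(model: ComfortModel, pcs: PCS, ambient_temp_c: float) -> None:
    action = model.suggest_action(ambient_temp_c)
    if action is None:
        return
    if ask_user(action):
        pcs.execute(action)
    else:
        model.deadband_c += 0.5   # widen tolerance when the user declines

control_step(ComfortModel(), PCS(), ambient_temp_c=26.0)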


Even before the COVID-19 pandemic, people spent on average around 90% of their time indoors. Now more than ever, with work-from-home orders in place, it is crucial that we radically rethink the design and operation of buildings. Indoor Environmental Quality (IEQ) directly affects the comfort and well-being of occupants. When IEQ is compromised, occupants are at increased risk for many diseases that are exacerbated by both social and economic forces. In the U.S. alone, the annual cost attributed to sick building syndrome in commercial workplaces is estimated to be between $10 billion and $70 billion. It is imperative to understand how parameters that drive IEQ can be designed properly and how buildings can be operated to provide ideal IEQ to safeguard health. While IEQ is a fertile area of scholarship, there is a pressing need for a systematic understanding of how IEQ factors impact occupant health. During extreme events, such as a global pandemic, designers, facility managers, and occupants need pragmatic guidance on reducing health risks in buildings. This paper answers ten questions that explore the effects of buildings on the health of occupants. The study establishes a foundation for future work and provides insights for new research directions and discoveries.


Active shooter incidents present an increasing threat to American society. Many of these incidents occur in building environments; therefore, it is important to consider design and security elements in buildings to decrease the risk of active shooter incidents. This study aims to assess current security countermeasures and identify varying considerations associated with implementing these countermeasures. Fifteen participants, with expertise and experience in a diverse collection of operational and organizational backgrounds, including security, engineering, law enforcement, emergency management, and policy making, participated in three focus group interviews. The participants identified a list of countermeasures that have been used for active shooter incidents. Important determinants of the effectiveness of countermeasures include their influence on occupants' behavior during active shooter incidents and occupants' and administrators' awareness of how to use them effectively. The nature of incidents (e.g., internal vs. external threats), building type (e.g., office buildings vs. school buildings), and occupants (e.g., students of different ages) were also recognized to affect the selection of appropriate countermeasures. The nexus between emergency preparedness and normal operations, and the importance of tradeoffs such as those among cost, aesthetics, maintenance needs, and the influence on occupants' daily activities, were also discussed. To ensure the effectiveness of countermeasures and improve safety, the participants highlighted the importance of both training and practice for occupants and administrators (e.g., first responder teams). The interview results suggested that further study of the relationship between security countermeasures and occupants' and administrators’ responses, as well as of efficient training approaches, is needed.


Negotiation is the complex social process by which multiple parties come to mutual agreement over a series of issues. As such, it has proven to be a key challenge problem for designing adequately social AIs that can effectively navigate this space. Artificial agents that are capable of negotiating must be able to realize policies and strategies that govern offer acceptance, offer generation, preference elicitation, and more. But the next generation of agents must also adapt to reflect their users’ experiences.

The best human negotiators tend to have honed their craft through hours of practice and experience. But not all negotiators agree on which strategic tactics to use, and endorsement of deceptive tactics in particular is a controversial topic for many negotiators. We examine the ways in which deceptive tactics are used and endorsed in non-repeated human negotiation and show that prior experience plays a key role in governing which tactics are seen as acceptable or useful in negotiation. Previous work has indicated that people who negotiate through artificial agent representatives may be more inclined to fairness than those who negotiate directly. We present a series of three user studies that challenge this initial assumption and expand on this picture by examining the role of past experience.

This work constructs a new scale for measuring endorsement of manipulative negotiation tactics and introduces its use to artificial intelligence research. It continues by presenting the results of a series of three studies that examine how negotiating experience can change which negotiation tactics and strategies humans endorse. Study #1 looks at human endorsement of deceptive techniques based on prior negotiating experience as well as representative effects. Study #2 further characterizes the negativity of prior experience in relation to endorsement of deceptive techniques. Finally, in Study #3, we show that the lessons learned from the empirical observations in Studies #1 and #2 can in fact be induced: by designing agents that provide a specific type of negative experience, human endorsement of deception can be predictably manipulated.


People spend most of their day in buildings, and a large portion of the energy in buildings is used to control the indoor environment to create acceptable conditions for occupants. However, the majority of building systems are controlled based on a “one-size-fits-all” scheme that cannot account for individual occupant preferences. This leads to discomfort, low satisfaction, and negative impacts on occupants' productivity, health, and well-being. In this paper, we describe our vision of how recent advances in the Internet of Things (IoT) and machine learning can be used to add intelligence to an office desk to personalize the environment around the user. The smart desk can learn individual user preferences for the indoor environment, personalize the environment based on those preferences, and act as an intelligent support system for improving user comfort, health, and productivity. We briefly describe recent advances made in different domains that can be leveraged to enhance occupant experience in buildings and describe the overall framework for the smart desk. We conclude the paper with a discussion of possible avenues for further research.


In recent years, technological advances have substantially extended the capabilities of building automation. Despite these advances, automation has evidently not been widely adopted by occupants in buildings. To enhance automation adoptability, the automation procedure in buildings should involve determining users' preferred automation levels in different conditions and control contexts, as well as learning how those preferences change over time. In line with this motivation, in this paper we introduce a building automation approach that continuously learns occupants' preferences in order to fully or partially control the service systems in buildings, based on a set of dynamic rules generated from insight about users' preferences and activities. The algorithmic components of our proposed automation include (1) dynamic command planning, (2) adaptive local learning, and (3) iterative global learning. In order to evaluate these algorithms, we used a combination of real and synthetic user activity and preference data from an office with five occupants and an apartment with one occupant. Based on our evaluation of adaptive local learning, after a certain number of days (8.5 days on average) the accuracy of predicting participants’ preferences reached an acceptable value (above 85%). About 24% to 75%, 5% to 45%, and 6% to 49% of the participants' total daily energy consumption could be saved using full automation, adaptive automation, and inquisitive automation, respectively. Our results for evaluating the iterative global learning algorithm showed that adaptive automation has the highest sum of rewards from achieved benefit and user satisfaction, and inquisitive automation has the second-highest reward values. Full automation and no automation came in third and last, respectively.
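A toy sketch of an adaptive-local-learning-style loop, under the assumption that an online classifier is updated daily with logged context/preference pairs and its prediction accuracy is tracked against a threshold; the features, the synthetic preference rule, and the way the 85% figure is used here are illustrative, not the paper's algorithm.

import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
clf = SGDClassifier()
classes = np.array([0, 1])   # 0 = prefers "off", 1 = prefers "on" (assumed labels)

def observe_day(n_samples=24):
    """Assumed stand-in for one day of logged context/preference pairs."""
    X = rng.uniform(0.0, 1.0, size=(n_samples, 3))   # e.g., hour, occupancy, daylight
    y = (X[:, 2] < 0.4).astype(int)                  # synthetic preference rule
    return X, y

for day in range(1, 31):
    X, y = observe_day()
    if day > 1:   # evaluate yesterday's model on today's observations
        acc = accuracy_score(y, clf.predict(X))
        print(f"day {day}: accuracy {acc:.2f}")
        if acc >= 0.85:
            print(f"acceptable accuracy reached on day {day}")
            break
    clf.partial_fit(X, y, classes=classes)   # incorporate today's preferences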


The uncertainty in occupants' interactions with building systems and other occupant-related factors influence the accuracy of building energy consumption estimates. Individual interactions are hard to predict; however, interaction-related trends and patterns for groups of building occupants, retrieved from empirical studies, could potentially provide insight into human-building interactions (HBIs) (i.e., occupants' interactions with built environments). Thus, in this study, we measured human response to multimodal sensory discomfort (i.e., multimodal perception of visual and thermal discomfort) in a simulated single-occupancy office. We used occupants' perceptual decisions as enablers to understand HBIs at a micro level. We identified the number, type, kind, hierarchical order, occurrence probabilities, patterns, and response times of decisions as markers of response in a between-subjects experiment with 90 participants. We statistically analyzed two conditions (no discomfort and multimodal discomfort) with regard to participants' responses. Our results show that HBI decisions in the no-discomfort condition and the multimodal-discomfort condition are significantly different with regard to the type (i.e., thermal and visual) and kind (e.g., blind, desk fan) of immediate decisions. We also found that decisions in the no-discomfort condition are very diverse across participants and potentially reflect occupants’ thermal and visual preferences. On the other hand, decisions in the multimodal-discomfort condition reflect emerging responses of participants to address the multimodal discomfort.


Behavioral intervention strategies have yet to prove successful in fostering pro-environmental behaviors in buildings. In this paper, we explored the potential to increase the effectiveness of requests aiming to promote pro-environmental behaviors by engaging users in a social dialog, considering the effects of two personas closely tied to buildings (i.e., the building itself vs. the building manager). We tested our hypotheses and evaluated our findings in virtual and physical environments and found similar effects in both. Our results showed that social dialog involvement persuaded respondents to perform more pro-environmental actions. However, these effects were significant only when the requests were delivered by an agent representing the building. In addition, these strategies were not equally effective across all types of people, and their effects varied for people with different characteristics. Our findings provide useful design choices for persuasive technologies aiming to promote pro-environmental behaviors.


Based on the consideration that people's need to belong can be temporarily satisfied by “social snacking” (Gardner et al., 2005), in the sense that surrogates can bridge lonely times in the absence of social interactions that adequately satisfy belongingness needs, it was tested whether interaction with a virtual agent can serve to ease the need for social contact. In a between-subjects experimental setting, 79 participants interacted with a virtual agent who either displayed socially responsive nonverbal behavior or did not. Results demonstrate that although there was no main effect of socially responsive behavior on participants' subjective experience of rapport or on connectedness with the agent, people with a high need to belong reported less willingness to engage in social activities after the interaction with the virtual agent, but only if the agent displayed socially responsive behavior.


Occupant behavior is one of the most significant contributors to building energy consumption. Employing communication systems to enable buildings to interact with their occupants and influence the way they behave could significantly reduce energy consumption. We investigated the effectiveness of different delivery styles (i.e., avatar, voice, and text), as well as the impact of the communicator’s persona (i.e., building facility manager vs. the building itself) and gender (i.e., male and female), on occupants’ compliance with pro-environmental requests. The results showed that an avatar is more effective than voice, and voice is more effective than text, in promoting compliance with persuasive pro-environmental requests. In addition, the results showed greater compliance with requests made by the persona of a building facility manager than by the persona of the building itself. Finally, participants were more likely to comply with the female communicator than with the male communicator. Accordingly, this new mode of interaction between buildings and their occupants could impact human behavior.


In recent years, technological advances have substantially extended the capabilities of automation systems in buildings. Despite these advances, automation systems have not been widely adopted by building occupants. This paper presents our investigation of occupants' automation preferences for the control of lighting systems and appliances in residential buildings. A survey was carried out to determine how preferences for the level of automation vary by context as well as by individuals’ personalities and demographic characteristics. The contexts investigated in this study include rescheduling an energy-consuming activity, activity-based appliance state control, and lighting control. The collected data from 250 respondents were analyzed using Generalized Linear Mixed Models. Our findings demonstrate that, in all contexts, no automation is the least preferred option. For rescheduling an energy-consuming activity, an automation level with higher user participation is preferred. For activity-based appliance state control and lighting control, levels of automation with lower user participation are preferred. Our findings also indicate that income and education levels, as well as the personality traits of agreeableness, neuroticism, and openness to experience, affect the preference for particular automation levels over others. Findings from this study can be used in designing user-centered automation systems that lead to potentially more satisfying operation and, hence, could enhance automation acceptability.
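For illustration only, the snippet below fits a linear mixed model with statsmodels as a simplified stand-in for the Generalized Linear Mixed Models used in the study, with a random intercept per respondent because each respondent rated multiple contexts; the synthetic data and column names are assumptions, not the survey's actual variables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format stand-in: one row per respondent x context, with a
# numeric preference rating (hypothetical columns and values).
rng = np.random.default_rng(1)
n_resp, contexts = 250, ["reschedule", "appliance", "lighting"]
df = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(n_resp), len(contexts)),
    "context": np.tile(contexts, n_resp),
    "preference_rating": rng.normal(3.0, 1.0, n_resp * len(contexts)),
    "agreeableness": np.repeat(rng.normal(0, 1, n_resp), len(contexts)),
    "neuroticism": np.repeat(rng.normal(0, 1, n_resp), len(contexts)),
    "income": np.repeat(rng.integers(1, 6, n_resp), len(contexts)),
})

# Random intercept per respondent, since each respondent rated several contexts.
model = smf.mixedlm(
    "preference_rating ~ C(context) + agreeableness + neuroticism + income",
    data=df,
    groups=df["respondent_id"],
)
print(model.fit().summary())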


In this paper, a systematic approach is presented to (1) collect end-user lighting-related behavior by using immersive virtual environments (IVEs) as an experimental tool, (2) integrate the collected data with building performance simulation (BPS) tools in order to translate behavioral information into quantitative measures (i.e., preferred lux level), and (3) incorporate user preference data for evaluating design alternatives with the objective of meeting end-user lighting preferences while reducing the building lighting-related energy consumption. To evaluate the applicability of this approach, 89 participants' lighting preferences, performance (reading speed and comprehension), personality traits, and environmental views were collected in IVEs. BPS tools were used to translate participants' lighting preferences into quantitative lux distributions, which were then used to evaluate alternative designs and make user-centered design decisions. The results of the experimental study show that participants preferred to have maximum simulated daylighting compared to electric lighting. Additionally, participants with some or maximum levels of simulated daylighting performed significantly better on the assigned reading and comprehension tasks than those who did not have any simulated daylighting available. Lastly, by collecting participant personality traits, it was observed that extroverts are significantly more likely than other people to prefer maximum lighting (maximum electric lighting and simulated daylighting). To demonstrate how the collected data and results could be used during the design phase of buildings, as one example, a design case study is presented, in which the design of the same office space (as the experiment) is improved to meet participants' lighting preferences and increase the available simulated daylighting.


Reporting mental health symptoms: Breaking down barriers to care with virtual human interviewers. Frontiers in Robotics and AI, 4(51), 1-9

Lucas, G. M., Rizzo, A. S., Gratch, J., Scherer, S., Stratou, G., Boberg, J., & Morency, L. P.

2017

A common barrier to healthcare for psychiatric conditions is the stigma associated with these disorders. Perceived stigma prevents many from reporting their symptoms. Stigma is a particularly pervasive problem among military service members, preventing them from reporting symptoms of combat-related conditions like posttraumatic stress disorder (PTSD). However, research shows increased reporting by service members when anonymous assessments are used. For example, service members report more symptoms of PTSD when they anonymously answer the Post-Deployment Health Assessment (PDHA) symptom checklist compared to the official PDHA, which is identifiable and linked to their military records. To investigate the factors that influence reporting of psychological symptoms by service members, we used a transformative technology: automated virtual humans that interview people about their symptoms. Such virtual human interviewers allow simultaneous use of two techniques for eliciting disclosure that would otherwise be incompatible; they afford anonymity while also building rapport. We examined whether virtual human interviewers could increase disclosure of mental health symptoms among active-duty service members who had just returned from a year-long deployment in Afghanistan. Service members reported more symptoms during a conversation with a virtual human interviewer than on the official PDHA. They also reported more to a virtual human interviewer than on an anonymized PDHA. A second, larger sample of active-duty and former service members found a similar effect that approached statistical significance. Because respondents in both studies shared more with virtual human interviewers than on an anonymized PDHA, even though both conditions control for stigma and ramifications for service members’ military records, virtual human interviewers that build rapport may provide a superior option for encouraging reporting.


Detection and computational analysis of psychological signals using a virtual human interviewing agent. Journal of Pain Management, 9, 311-321

Rizzo, A.A., Scherer, S., DeVault, D., Gratch, J., Artstein, R., Hartholt, A., Lucas, G., Marsella, S.,Morbini, F., Nazarian, A., Stratou, G., Traum, D., Wood, R., Boberg, J. & Morency, L-P.

2016

It has long been recognized that facial expressions, body posture/gestures, and vocal parameters play an important role in human communication and the implicit signalling of emotion. Recent advances in low-cost computer vision and behavioral sensing technologies can now be applied to the process of making meaningful inferences about user state when a person interacts with a computational device. Effective use of this additional information could serve to promote human interaction with virtual human (VH) agents that may enhance diagnostic assessment. This paper focuses on our current research in these areas within the DARPA-funded "Detection and Computational Analysis of Psychological Signals" project, with specific attention to the SimSensei application use case. SimSensei is a virtual human interaction platform that is able to sense and interpret real-time audiovisual behavioral signals from users interacting with the system. It is specifically designed for healthcare support and leverages years of virtual human research and development at USC-ICT. The platform enables an engaging face-to-face interaction where the virtual human automatically reacts to the state and inferred intent of the user through analysis of behavioral signals gleaned from facial expressions, body gestures, and vocal parameters. Akin to how nonverbal behavioral signals have an impact on human-to-human interaction and communication, SimSensei aims to capture and infer from users' nonverbal communication to improve engagement between a VH and a user. The system can also quantify and interpret sensed behavioral signals longitudinally, which can be used to inform diagnostic assessment within a clinical context.


Reduced frequency range in vowel production is a well-documented speech characteristic of individuals with psychological and neurological disorders. Affective disorders such as depression and post-traumatic stress disorder (PTSD) are known to influence motor control and, in particular, speech production. The assessment and documentation of reduced vowel space and reduced expressivity often rely either on subjective assessments or on analysis of speech under constrained laboratory conditions (e.g., sustained vowel production, reading tasks). These constraints render the analysis of such measures expensive and impractical. In this work, we investigate an automatic, unsupervised machine-learning-based approach to assess a speaker's vowel space. Our experiments are based on recordings of 253 individuals. Symptoms of depression and PTSD are assessed using standard self-assessment questionnaires and their cut-off scores. The experiments show a significantly reduced vowel space in subjects who scored positively on the questionnaires. We show the measure's statistical robustness against varying demographics of individuals and articulation rate. The reduced vowel space in subjects with symptoms of depression can be explained by the common condition of psychomotor retardation influencing articulation and motor control. These findings could potentially support the treatment of affective disorders such as depression and PTSD in the future.
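A small, hedged sketch of one way to quantify a vowel space, assuming first and second formant (F1/F2) estimates have already been extracted from the recordings; the paper's actual unsupervised pipeline is different, and the values below are synthetic placeholders.

import numpy as np
from scipy.spatial import ConvexHull

def vowel_space_area(f1_hz, f2_hz):
    """Area of the convex hull spanned by (F1, F2) points, in Hz^2."""
    points = np.column_stack([f1_hz, f2_hz])
    return ConvexHull(points).volume   # for 2-D input, .volume is the hull area

# Hypothetical comparison between two speakers (randomly generated formants)
speaker_a = vowel_space_area(np.random.uniform(300, 900, 200),
                             np.random.uniform(900, 2500, 200))
speaker_b = vowel_space_area(np.random.uniform(400, 700, 200),
                             np.random.uniform(1100, 1900, 200))
print(f"speaker A area: {speaker_a:.0f} Hz^2, speaker B area: {speaker_b:.0f} Hz^2")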


In order for a project to be satisfactory to end-users and completed with high quality, the architecture, engineering, and construction (AEC) industry heavily relies on digital modeling, simulation and visual communication. In the past two decades, the AEC community has examined different approaches, including virtual and augmented reality, to improve communication, visualization, and coordination among different project participants; yet these approaches are slowly being adopted by the industry. Such technological advancements have the potential to improve and revolutionize the current approaches in design (e.g., by involving end-user feedback to ensure higher performing building operations and end-user satisfaction), in construction (e.g., by improving safety through virtual training), and in operations (e.g., by visualizing real-time sensor data to improve diagnostics). The authors' research vision builds upon the value of using immersive virtual environments (IVEs) during the design, construction, and operation phases of AEC projects. IVEs could provide a sense of presence found in physical mock-ups and make evaluation of an increased set of potential design alternatives possible in a timely and cost-efficient manner. Yet, in order to use IVEs during the design, construction, and operation phases of buildings, it is important to ensure that the data collected and analyzed in such environments represent physical environments. To test whether IVEs are adequate representations of physical environments and to measure user performance in such environments, this paper presents results from an experiment that investigates user performance on a set of everyday office-related activities (e.g., reading text and identifying objects in an office environment) and benchmarks the participants' performance in a similar physical environment. Sense of presence is also measured within an IVE through a set of questionnaires. By analyzing the experimental data from 112 participants, the authors concluded that the participants perform similarly in an IVE setting as they do in the benchmarked physical environment for all of the measured tasks. The questionnaire data show that the participants felt a strong sense of presence within an IVE. Based on the experimental data, the authors thus demonstrate that an IVE can be an effective tool in the design phase of AEC projects in order to acquire end-user performance feedback, which might lead to higher performing infrastructure design and end-user satisfaction.


Building emergencies, especially structure fires, are threats to the safety of both building occupants and first responders. It is difficult and dangerous for first responders to perform search and rescue in an unfamiliar environment, sometimes leading to secondary casualties. One way to reduce such hazards is to provide first responders with timely access to accurate location information. To address this challenge, the authors have developed a radio frequency based indoor localization framework, for which novel algorithms were designed for two different situations: one where an existing sensing infrastructure exists in buildings and one where an ad-hoc sensing infrastructure must be deployed. This paper presents a comparative assessment of this framework under different situations and emergency scenarios, and between simulations and field tests. The paper first presents an assessment of the framework in field tests, showing that it achieves room-level accuracies above 82.8% and 84.6% and coordinate-level accuracies above 2.29 m and 2.07 m, under the two situations, respectively. Moreover, the framework demonstrates considerable robustness in the tests, retaining a room-level accuracy of 70% or higher when the majority of sensing infrastructure is damaged. This paper then synthesizes results from both simulations and field tests, and demonstrates how the framework can be adapted to different situations and scenarios while consistently yielding satisfactory localization performance.
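Purely as an illustration of the general radio-frequency localization idea (not the framework or algorithms developed in this work), the snippet below performs room-level RSSI fingerprinting with a k-nearest-neighbors classifier on made-up signal-strength readings.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Offline phase: received-signal-strength fingerprints (dBm) from four fixed
# sensing nodes, labeled with the room where each fingerprint was collected.
fingerprints = np.array([
    [-40, -70, -80, -65],   # room A
    [-42, -68, -78, -66],   # room A
    [-75, -45, -60, -72],   # room B
    [-73, -47, -62, -70],   # room B
    [-80, -60, -44, -55],   # room C
    [-78, -62, -46, -53],   # room C
])
rooms = ["A", "A", "B", "B", "C", "C"]

model = KNeighborsClassifier(n_neighbors=3).fit(fingerprints, rooms)

# Online phase: estimate a first responder's room from a new RSSI reading.
reading = np.array([[-41, -69, -79, -64]])
print("estimated room:", model.predict(reading)[0])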


Research has begun to explore the use of virtual humans (VHs) in clinical interviews (Bickmore, Gruber, & Picard, 2005). When designed as supportive and “safe” interaction partners, VHs may improve such screenings by increasing willingness to disclose information (Gratch, Wang, Gerten, & Fast, 2007). In health and mental health contexts, patients are often reluctant to respond honestly. In the context of health-screening interviews, we report a study in which participants interacted with a VH interviewer and were led to believe that the VH was controlled by either humans or automation. As predicted, compared to those who believed they were interacting with a human operator, participants who believed they were interacting with a computer reported lower fear of self-disclosure, lower impression management, displayed their sadness more intensely, and were rated by observers as more willing to disclose. These results suggest that automated VHs can help overcome a significant barrier to obtaining truthful patient information.


Automatic audiovisual behavior descriptors for psychological disorder analysis. Image and Vision Computing, 32, 648-658

Scherer, S., Stratou, G., Lucas, G. M., Mahmoud, M., Boberg, J., Gratch, J., Rizzo, A., & Morency, L. P.

2014

We investigate the capabilities of automatic audiovisual nonverbal behavior descriptors to identify indicators of psychological disorders such as depression, anxiety, and post-traumatic stress disorder. Due to the strong correlations between these disorders, as measured with standard self-assessment questionnaires in this study, we focus our investigations in particular on a generic distress measure identified using factor analysis. In this work, we seek to confirm and enrich the present state of the art, which is predominantly based on qualitative manual annotations, with automatic quantitative behavior descriptors. We propose a number of nonverbal behavior descriptors that can be automatically estimated from audiovisual signals. Such automatic behavior descriptors could be used to support healthcare providers with quantified and objective observations that could ultimately improve clinical assessment. We evaluate our work on the Distress Assessment Interview Corpus (DAIC), a dataset comprising dyadic interactions between a confederate interviewer and a paid participant. Our evaluation on this dataset shows correlations between our automatic behavior descriptors and the derived general distress measure. Our analysis also includes a deeper study of self-adaptor and fidgeting behaviors based on detailed annotations of where these behaviors occur.
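As a hedged illustration of the analysis pattern described above, the snippet below extracts a single distress factor from three correlated (synthetic) questionnaire scores and correlates a hypothetical automatic behavior descriptor with it; none of these values come from the DAIC data.

import numpy as np
from sklearn.decomposition import FactorAnalysis
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 200
latent = rng.normal(size=n)                               # unobserved distress
phq = 10 + 3 * latent + rng.normal(scale=1.0, size=n)     # depression score (synthetic)
pcl = 35 + 5 * latent + rng.normal(scale=2.0, size=n)     # PTSD score (synthetic)
gad = 8 + 2 * latent + rng.normal(scale=1.0, size=n)      # anxiety score (synthetic)

scores = np.column_stack([phq, pcl, gad])
fa = FactorAnalysis(n_components=1)
distress = fa.fit_transform(scores).ravel()               # derived distress measure

# Example automatic descriptor, e.g., average gaze-down proportion per session
descriptor = 0.3 + 0.05 * latent + rng.normal(scale=0.03, size=n)
r, p = pearsonr(descriptor, distress)
print(f"correlation with distress factor: r={r:.2f}, p={p:.3g}")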