{"id":1863,"date":"2023-02-24T20:28:55","date_gmt":"2023-02-24T20:28:55","guid":{"rendered":"https:\/\/ubicomp.hosting.acm.org\/ubicompiswc2024_wp\/?page_id=1863"},"modified":"2024-07-17T13:49:33","modified_gmt":"2024-07-17T13:49:33","slug":"workshops-and-symposia","status":"publish","type":"page","link":"https:\/\/ubicomp.hosting.acm.org\/ubicompiswc2024_wp\/workshops-and-symposia\/","title":{"rendered":"Workshops and Symposia"},"content":{"rendered":"<p><div class=\"fusion-fullwidth fullwidth-box fusion-builder-row-1 nonhundred-percent-fullwidth non-hundred-percent-height-scrolling\" style=\"--awb-border-radius-top-left:0px;--awb-border-radius-top-right:0px;--awb-border-radius-bottom-right:0px;--awb-border-radius-bottom-left:0px;--awb-flex-wrap:wrap;\" id=\"content-start\" ><div class=\"fusion-builder-row fusion-row\"><div class=\"fusion-layout-column fusion_builder_column fusion-builder-column-0 fusion_builder_column_1_1 1_1 fusion-one-full fusion-column-first fusion-column-last\" style=\"--awb-bg-size:cover;\"><div class=\"fusion-column-wrapper fusion-column-has-shadow fusion-flex-column-wrapper-legacy\"><div class=\"fusion-title title fusion-title-1 fusion-sep-none fusion-title-text fusion-title-size-one\"><h1 class=\"fusion-title-heading title-heading-left\" style=\"margin:0;\"><span style=\"color: var(--awb-color2); font-size: 28px !important;\">UbiComp \/ ISWC 2024<\/span><br \/>\n<span style=\"color: #37408b;\"><span style=\"color: var(--awb-color4);\" data-fusion-font=\"true\">Workshops<\/span><\/span><span style=\"color: #c83273;\"> <span style=\"color: var(--awb-color4);\" data-fusion-font=\"true\">and Symposia<\/span><br \/>\n<\/span><\/h1><\/div><div class=\"fusion-clearfix\"><\/div><\/div><\/div><\/div><\/div><div class=\"fusion-fullwidth fullwidth-box fusion-builder-row-2 nonhundred-percent-fullwidth non-hundred-percent-height-scrolling\" 
style=\"--awb-border-radius-top-left:0px;--awb-border-radius-top-right:0px;--awb-border-radius-bottom-right:0px;--awb-border-radius-bottom-left:0px;--awb-margin-top:25px;--awb-flex-wrap:wrap;\" ><div class=\"fusion-builder-row fusion-row\"><div class=\"fusion-layout-column fusion_builder_column fusion-builder-column-1 fusion_builder_column_4_5 4_5 fusion-four-fifth fusion-column-first\" style=\"--awb-bg-size:cover;width:80%;width:calc(80% - ( ( 4% ) * 0.8 ) );margin-right: 4%;\"><div class=\"fusion-column-wrapper fusion-column-has-shadow fusion-flex-column-wrapper-legacy\"><div class=\"fusion-text fusion-text-1\"><p>UbiComp\/ISWC 2024 features\u00a0<b data-stringify-type=\"bold\">11 Full-Day Workshops, 2 Half-Day Workshops, and 1 Half-Day Tutorial <\/b>that will be running on Saturday, October 5 and Sunday, October 6, before the start of the main conference.<\/p>\n<p>Workshops and Symposia provide an effective forum for attendees with common interests and are a great opportunity for community building. 
They can vary in program length, size, and format depending on their specific objectives.<\/p>\n<p><i>*Workshops and Symposia, as any other track at UbiComp\/ISWC 2024, will be in-person only.<br \/>\n<span style=\"background-color: rgba(255, 255, 255, 0); color: var(--body_typography-color); font-family: var(--body_typography-font-family); font-size: var(--body_typography-font-size); font-weight: var(--body_typography-font-weight); letter-spacing: var(--body_typography-letter-spacing);\">More details for exceptions can be found <\/span><span style=\"font-family: var(--body_typography-font-family);\"><span style=\"background-color: rgba(255, 255, 255, 0); font-weight: var(--body_typography-font-weight); letter-spacing: var(--body_typography-letter-spacing);\"><a href=\"https:\/\/ubicomp.hosting.acm.org\/ubicompiswc2024_wp\/authors\/\">here<\/a><\/span><\/span><span style=\"background-color: rgba(255, 255, 255, 0); color: var(--body_typography-color); font-family: var(--body_typography-font-family); font-size: var(--body_typography-font-size); font-weight: var(--body_typography-font-weight); letter-spacing: var(--body_typography-letter-spacing);\">.<\/span><\/i><\/p>\n<h4 class=\"fusion-responsive-typography-calculated\" style=\"--fontsize: 24; line-height: 1.3;\" data-fontsize=\"24\" data-lineheight=\"31.2px\">Summary of Key Dates<\/h4>\n<ul>\n<li><b>April 29, 2024<\/b>: Distribution of all accepted workshop CFPs<\/li>\n<li><b>May 24, 2024:<\/b> Deadline for the camera-ready version of the workshop description (from the proposal) for inclusion in the ACM DL<\/li>\n<li><b>June 7, 2024:<\/b> Submission deadline for Workshop papers<\/li>\n<li><b style=\"background-color: rgba(255, 255, 255, 0); color: var(--body_typography-color); font-family: var(--body_typography-font-family); font-size: var(--body_typography-font-size); font-style: var(--body_typography-font-style,normal); letter-spacing: var(--body_typography-letter-spacing);\">June 28, 2024<\/b><span 
style=\"background-color: rgba(255, 255, 255, 0); color: var(--body_typography-color); font-family: var(--body_typography-font-family); font-size: var(--body_typography-font-size); font-style: var(--body_typography-font-style,normal); font-weight: var(--body_typography-font-weight); letter-spacing: var(--body_typography-letter-spacing);\">: Notification of Workshop papers by each accepted Workshop<br \/>\n<\/span><\/li>\n<li><b style=\"background-color: rgba(255, 255, 255, 0); color: var(--body_typography-color); font-family: var(--body_typography-font-family); font-size: var(--body_typography-font-size); font-style: var(--body_typography-font-style,normal); letter-spacing: var(--body_typography-letter-spacing);\">July 26, 2024<\/b><span style=\"background-color: rgba(255, 255, 255, 0); color: var(--body_typography-color); font-family: var(--body_typography-font-family); font-size: var(--body_typography-font-size); font-style: var(--body_typography-font-style,normal); letter-spacing: var(--body_typography-letter-spacing);\"><span style=\"font-weight: var(--body_typography-font-weight);\">: Deadline for camera-ready version of papers to include in the ACM DL<\/span><br \/>\n<\/span><\/li>\n<li><b style=\"background-color: rgba(255, 255, 255, 0); color: var(--body_typography-color); font-family: var(--body_typography-font-family); font-size: var(--body_typography-font-size); font-style: var(--body_typography-font-style,normal); letter-spacing: var(--body_typography-letter-spacing);\">October 5-6, 2024<\/b><span style=\"background-color: rgba(255, 255, 255, 0); color: var(--body_typography-color); font-family: var(--body_typography-font-family); font-size: var(--body_typography-font-size); font-style: var(--body_typography-font-style,normal); letter-spacing: var(--body_typography-letter-spacing);\">:<\/span> Workshops in Melbourne, Australia<\/li>\n<\/ul>\n<\/div><div class=\"fusion-text fusion-text-2\"><p style=\"--fontsize: 24; line-height: 
1.36;\"><b>\u00a0<\/b><\/p>\n<p style=\"--fontsize: 24; line-height: 1.36;\"><b>Note: <\/b>the following schedule is tentative. Workshops may be shuffled between the two days depending on registration numbers and room availability. Details will be confirmed at a later date.<\/p>\n<\/div><div class=\"fusion-text fusion-text-3\"><h4 class=\"fusion-responsive-typography-calculated\" style=\"--fontsize: 24; line-height: 1.36;\" data-fontsize=\"24\" data-lineheight=\"32.64px\">Workshops (Oct 5, 2024)<\/h4>\n<\/div><div class=\"accordian fusion-accordian\" style=\"--awb-border-size:1px;--awb-icon-size:16px;--awb-content-font-size:var(--awb-typography4-font-size);--awb-icon-alignment:left;--awb-hover-color:#f9f9fb;--awb-border-color:#e2e2e2;--awb-background-color:#ffffff;--awb-divider-color:var(--awb-color3);--awb-divider-hover-color:var(--awb-color3);--awb-icon-color:#ffffff;--awb-title-color:var(--awb-color8);--awb-content-color:var(--awb-color8);--awb-icon-box-color:#212934;--awb-toggle-hover-accent-color:#1a80b6;--awb-title-font-family:var(--awb-typography1-font-family);--awb-title-font-weight:var(--awb-typography1-font-weight);--awb-title-font-style:var(--awb-typography1-font-style);--awb-title-font-size:16px;--awb-title-line-height:1.36;--awb-content-font-family:var(--awb-typography4-font-family);--awb-content-font-weight:var(--awb-typography4-font-weight);--awb-content-font-style:var(--awb-typography4-font-style);\"><div class=\"panel-group fusion-toggle-icon-boxed\" id=\"accordion-1863-1\"><div class=\"fusion-panel panel-default panel-49611e296f346e561 fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_49611e296f346e561\"><a aria-expanded=\"false\" aria-controls=\"49611e296f346e561\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-1\" data-target=\"#49611e296f346e561\" href=\"#49611e296f346e561\"><span class=\"fusion-toggle-icon-wrapper\" 
aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">WellComp 2024 (7th International Workshop on Computing for Well-Being) (Half-Day)<\/span><\/a><\/h4><\/div><div id=\"49611e296f346e561\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_49611e296f346e561\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p><span style=\"font-weight: 400;\">In the advancing ubiquitous computing age, computing technology has already spread into many <\/span><span style=\"font-weight: 400;\">aspects of our daily lives, such as office work, home and housekeeping, health management, <\/span><span style=\"font-weight: 400;\">transportation, or even cities. <strong>We have experienced that much of the influence from these <\/strong><\/span><strong>technologies is both contributing to a better quality of life (QoL) in our individual and organizational <\/strong><span style=\"font-weight: 400;\"><strong>lives, and causing new types of stress and pain at the same time.<\/strong> The term \u201cwell-being\u201d has recently <\/span><span style=\"font-weight: 400;\">gained attention as a term that covers our general happiness and even more concrete good conditions <\/span><span style=\"font-weight: 400;\">in our lives, such as physical, psychological, and social wellness.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">An increasing number of researchers, engineers, and people are paying attention to how their work <\/span><span style=\"font-weight: 400;\">can contribute to a better quality of life, social good, and well-being. Despite recent activities in <\/span><span style=\"font-weight: 400;\">academia and society, a unified academic research effort on computing and well-being is <\/span><span style=\"font-weight: 400;\">still anticipated within the ubicomp research community. 
<strong>Active research not only in the HCI domain but <\/strong><\/span><strong>in various other ubicomp research areas (systems, mobile\/wearable sensing, mobile computing, persuasive applications and services, behavior change, etc.) is needed towards drawing the big <\/strong><span style=\"font-weight: 400;\"><strong>picture of \u201ccomputing for well-being\u201d from different viewpoints and layers of computing.<\/strong> For example, <\/span><span style=\"font-weight: 400;\">an additional viewpoint of users\u2019 well-being in activity recognition research may invent new types of <\/span><span style=\"font-weight: 400;\">applications that comprehensively cover different types of recognition of users\u2019 physical, mental and <\/span><span style=\"font-weight: 400;\">social activities. Ever since Mark Weiser introduced the term ubiquitous computing, the ubiquity of <\/span><span style=\"font-weight: 400;\">computing in our daily lives and society has certainly been progressing. Now it is time for the <\/span><span style=\"font-weight: 400;\">community to more seriously envision the benefits that such computing technologies can bring.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Users of digital devices are increasingly confronted with a tremendous number of notifications that <\/span><span style=\"font-weight: 400;\">appear on multiple devices and screens in their environment. If a user owns a smartphone, a tablet, a <\/span><span style=\"font-weight: 400;\">smartwatch, and a laptop, and an e-mail client is installed on all of these devices, an incoming e-mail <\/span><span style=\"font-weight: 400;\">produces up to four notifications \u2013 one on each device. In the future, we will receive notifications from <\/span><span style=\"font-weight: 400;\">all our ubiquitous devices. Therefore, we need smart attention management for incoming <\/span><span style=\"font-weight: 400;\">notifications. 
One approach to less interruptive attention management could be the use of ambient <\/span><span style=\"font-weight: 400;\">representations of incoming notifications.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Following our six successful workshops (WellComp 2018, 2019, 2020, 2021, 2022 and 2023), this year <\/span><span style=\"font-weight: 400;\">we will bring together people from industry and academia who are active in the areas of activity <\/span><span style=\"font-weight: 400;\">recognition, mental health, social good, context-awareness and ubiquitous computing. <strong>The main <\/strong><\/span><strong>objective of WellComp 2024 is to share the latest research in various areas of computing related to <\/strong><span style=\"font-weight: 400;\"><strong>users\u2019 physical, mental, and social well-being.<\/strong> This year, special attention will be drawn to <\/span><span style=\"font-weight: 400;\">\u201cchallenges for physical, social and mental well-being monitoring using ubicomp technologies\u201d. <\/span><span style=\"font-weight: 400;\">Relevance to such topics will be considered in the paper review and selection process. 
Furthermore, <\/span><span style=\"font-weight: 400;\">the workshop aims to identify future research challenges, research opportunities, and applications of <\/span><span style=\"font-weight: 400;\">our research outcomes to society.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The topics of interest include, but are not limited to, the following:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\"> Measurement and representation of physical, mental, and social well-being with ubicomp <\/span><span style=\"font-weight: 400;\">technologies.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Design and implementation of platforms for collecting, processing, and interpreting health <\/span><span style=\"font-weight: 400;\">and well-being data.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Design and development of computational models predictive of one or several aspects of wellbeing.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Leveraging large foundation models to improve computing for wellbeing.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Unsupervised, semi-supervised, and supervised representation learning for well-being.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Classification, regression, and clustering problems related to <\/span><span style=\"font-weight: 400;\">well-being aspects.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Approaches addressing challenges in wearable sensor data (e.g., missing and noisy data, <\/span><span style=\"font-weight: 400;\">irregular sampling rates, few labels, out-of-distribution inputs, etc.) used for well-being <\/span><span style=\"font-weight: 400;\">monitoring.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Development of explainable, robust, privacy-aware, and trustworthy <\/span><span style=\"font-weight: 400;\">pipelines for well-being monitoring.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Multi-modal approaches integrating information from 
several data sources (e.g., physiological, <\/span><span style=\"font-weight: 400;\">behavioral, audio, video).<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Fairness in computing systems for wellbeing.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Ethical considerations from data collection and system development to deployment.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Computing systems for promoting well-being-awareness.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Innovative well-being applications and diverse target populations (e.g., children, patients, or <\/span><span style=\"font-weight: 400;\">elderly people).<\/span><\/li>\n<\/ul>\n<h4><span style=\"font-weight: 400;\">Submission Details<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">We will accept two types of submission: long and short papers. A long paper may be up to <\/span><span style=\"font-weight: 400;\">6 pages and a short paper up to 4 pages. Both types of papers should use the SIGCHI <\/span><span style=\"font-weight: 400;\"><a href=\"https:\/\/www.ubicomp.org\/ubicomp-iswc-2024\/authors\/formatting\/\">Master Article Template<\/a> and will be reviewed by at least two workshop organizers. Successful <\/span><span style=\"font-weight: 400;\">submissions will have the potential to raise discussion, provide insights for other attendees, and <\/span><span style=\"font-weight: 400;\">illustrate open challenges and potential solutions. All accepted publications will be published on the <\/span><span style=\"font-weight: 400;\">workshop website and in the ACM Digital Library. At least one author of each accepted paper needs <\/span><span style=\"font-weight: 400;\">to register for the conference and the workshop itself. During the workshop, each paper will be <\/span><span style=\"font-weight: 400;\">presented briefly by one of the authors. 
In addition, there will be room for demonstrations as well as <\/span><span style=\"font-weight: 400;\">discussions. All papers need to be anonymized.<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">Organizing Committee<\/span><\/h4>\n<ul>\n<li><span style=\"font-weight: 400;\">Ting Dang (University of Melbourne)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Shkurta Gashi (ETH AI Center, ETH Zu\u0308rich)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Dimitris Spathis (Nokia Bell Labs \/ University of Cambridge)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Alexander Hoelzemann (University of Siegen)<\/span><\/li>\n<\/ul>\n<h4>Website<\/h4>\n<p><a href=\"https:\/\/wellcomp2024.github.io\/\">https:\/\/wellcomp2024.github.io\/<\/a><\/p>\n<\/div><\/div><\/div><div class=\"fusion-panel panel-default panel-f186a753f8f7ab427 fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_f186a753f8f7ab427\"><a aria-expanded=\"false\" aria-controls=\"f186a753f8f7ab427\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-1\" data-target=\"#f186a753f8f7ab427\" href=\"#f186a753f8f7ab427\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">Multimodal Sports Interaction: Wearables and HCI in Motion<\/span><\/a><\/h4><\/div><div id=\"f186a753f8f7ab427\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_f186a753f8f7ab427\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p><span style=\"font-weight: 400;\">Feedback modalities are an essential aspect of the success and effectiveness of wearable systems that are used during mobile activities. 
In the past decades, researchers have explored a variety of feedback and feed-forward modality systems aimed at mobile interactions. Dynamic activities such as sports generally inhibit interactions with devices, but they also offer opportunities for novel interaction experiences. The choice of modalities is essential to provide feedback that is understandable, timely, and does not interfere with the sports activity.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This well-balanced hands-on workshop aims to bring together practitioners and researchers working on and interested in mobile and wearable systems. Workshop participants will be offered a platform to collectively discuss and explore current approaches, methods, and tools related to feedback modalities for mobile interactions.<\/span><\/p>\n<p class=\"x_MsoNormal\"><strong><span lang=\"EN-US\">Apply by e-mailing Expression of<\/span><\/strong><span class=\"x_apple-converted-space\"><b><span lang=\"EN-US\">\u00a0<\/span><\/b><\/span><strong><span lang=\"EN-US\">Interest (UPDATED)<\/span><\/strong><\/p>\n<p class=\"x_MsoNormal\"><strong><span lang=\"EN-US\">As the deadline for submission via PCS (with inclusion of the position papers in the ACM Digital Library) has passed, we offer the following option to apply for the workshop.<\/span><\/strong><\/p>\n<p class=\"x_MsoNormal\"><strong><span lang=\"EN-US\">Send an e-mail to<\/span><\/strong><span class=\"x_apple-converted-space\"><span lang=\"EN-US\">\u00a0<\/span><\/span><strong><span lang=\"EN-US\"><a title=\"mailto:Vincent.vanrheden@plus.ac.at\" href=\"mailto:Vincent.vanrheden@plus.ac.at\" data-linkindex=\"2\"><b>Vincent.vanrheden@plus.ac.at<\/b><\/a><\/span><\/strong><span class=\"x_apple-converted-space\"><span lang=\"EN-US\">\u00a0<\/span><\/span><strong><span lang=\"EN-US\">with an expression of interest including:<\/span><\/strong><\/p>\n<ul type=\"disc\">\n<li class=\"x_MsoNormal\">Background: Describing the participant\u2019s experience using wearable, 
mobile or interactive systems in sports<span class=\"x_apple-converted-space\"><span lang=\"EN-US\">\u00a0<\/span><\/span><span lang=\"EN-US\">or movement-centered HCI<\/span><span class=\"x_apple-converted-space\"><span lang=\"EN-US\">\u00a0<\/span><\/span><span lang=\"EN-US\">practices<\/span>, as well as their previous research practice in the area.<\/li>\n<li class=\"x_MsoNormal\"><span lang=\"EN-US\">(Optional)<\/span><span class=\"x_apple-converted-space\"><span lang=\"EN-US\">\u00a0<\/span><\/span>Sport systems and experiences: Two good and two bad examples of modality usage in sports, arguing the choice of the examples. For each example, describe how the feedback modality was utilized and the type of feedback that was given; argue why this was a good or bad approach; consider alternative modalities; and provide key insights and challenges.\u00a0If possible, add a representative image. These examples can be industry or research projects, including one\u2019s own.<span class=\"x_apple-converted-space\"><span lang=\"EN-US\">\u00a0<\/span><\/span><span lang=\"EN-US\">Note: participants are still expected to present this in the workshop.<\/span><\/li>\n<li class=\"x_MsoNormal\"><span lang=\"EN-US\">(Optional)<\/span><span class=\"x_apple-converted-space\"><span lang=\"EN-US\">\u00a0<\/span><\/span><span lang=\"EN-US\">Participants<\/span><span class=\"x_apple-converted-space\">\u00a0<\/span>are encouraged to bring material for quick-and-dirty prototyping to explore novel feedback modalities (e.g. actuators, wearables, mobile systems that can be repurposed). 
Consider providing a short (visual) description of these materials and how they can be used.<\/li>\n<\/ul>\n<ul type=\"disc\">\n<li class=\"x_MsoNormal\"><strong><span lang=\"EN-US\">Application deadline: 15.09.2024<\/span><\/strong><\/li>\n<li class=\"x_MsoNormal\"><strong><span lang=\"EN-US\">Notification to authors: 20.09.2024<\/span><\/strong><\/li>\n<li class=\"x_MsoNormal\"><b><span lang=\"EN-US\">Workshop date: 05.10.2024<\/span><\/b><\/li>\n<\/ul>\n<p>Submissions will be reviewed by the organizers and selected according to their relevance to the workshop and their likelihood of sparking discussions and inspiring novel feedback approaches and modalities. Please note that at least one author of each accepted submission must attend the workshop, and UbiComp\/ISWC 2024 is in-person only. 
For more information, visit: <a href=\"https:\/\/exertiongameslab.org\/workshops-events\/ubicomp-iswc-2024-multimodal-sports-interaction-wearables-and-hci-in-motion\" target=\"_blank\" rel=\"noopener noreferrer\" data-auth=\"NotApplicable\" data-linkindex=\"3\">https:\/\/exertiongameslab.org\/workshops-events\/ubicomp-iswc-2024-multimodal-sports-interaction-wearables-and-hci-in-motion<\/a>\u00a0<span lang=\"EN-US\">or feel free to reach out to<\/span><span class=\"x_apple-converted-space\"><span lang=\"EN-US\">\u00a0<\/span><\/span><strong><span lang=\"EN-US\"><a title=\"mailto:Vincent.vanrheden@plus.ac.at\" href=\"mailto:Vincent.vanrheden@plus.ac.at\" data-linkindex=\"4\"><b>Vincent.vanrheden@plus.ac.at.<\/b><\/a><\/span><\/strong><\/p>\n<h4 class=\"fusion-responsive-typography-calculated\" style=\"--fontsize: 24; line-height: 1.36;\" data-fontsize=\"24\" data-lineheight=\"32.64px\">Website<\/h4>\n<p><a href=\"https:\/\/exertiongameslab.org\/workshops-events\/ubicomp-iswc-2024-multimodal-sports-interaction-wearables-and-hci-in-motion\"><span style=\"font-weight: 400;\">https:\/\/exertiongameslab.org\/workshops-events\/ubicomp-iswc-2024-multimodal-sports-interaction-wearables-and-hci-in-motion<\/span><\/a><\/p>\n<\/div><\/div><\/div><div class=\"fusion-panel panel-default panel-2900acc68d9eb71db fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_2900acc68d9eb71db\"><a aria-expanded=\"false\" aria-controls=\"2900acc68d9eb71db\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-1\" data-target=\"#2900acc68d9eb71db\" href=\"#2900acc68d9eb71db\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">Workshop on Human Activity Sensing Corpus 
and Applications (HASCA 2024) <\/span><\/a><\/h4><\/div><div id=\"2900acc68d9eb71db\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_2900acc68d9eb71db\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p><span style=\"font-weight: 400;\">This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. We wish to reflect on future methods, such as lifelong learning approaches that allow open-ended activity recognition.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The objective of this workshop is to share experiences among current researchers around the challenges of real-world activity recognition, the role of datasets and tools, and breakthrough approaches towards open-ended contextual intelligence.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Topics of interest include but are not limited to:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">Data collection \/ Corpus construction<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Effectiveness of Data \/ Data-Centric Research<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Tools and Algorithms for Activity Recognition<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Real-World Applications and Experiences<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Sensing Devices and Systems<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Mobile experience sampling, experience sampling strategies<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Unsupervised pattern discovery<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Dataset acquisition and annotation through crowd-sourcing, web-mining<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Transfer learning, semi-supervised learning, lifelong 
learning<\/span><\/li>\n<\/ul>\n<h4><span style=\"font-weight: 400;\">Submission Guidelines<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">The correct template for submission is a double-column Word Submission Template or a double-column LaTeX Template. <strong>The maximum paper length is 6 pages, including references<\/strong>. Anonymization is not required.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Please see <\/span><a href=\"https:\/\/www.ubicomp.org\/ubicomp-iswc-2024\/authors\/formatting\/\"><span style=\"font-weight: 400;\">https:\/\/www.ubicomp.org\/ubicomp-iswc-2024\/authors\/formatting\/<\/span><\/a><span style=\"font-weight: 400;\"> for more details on\u00a0<\/span><span style=\"font-weight: 400;\">submission format and templates.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Submit your papers via PCS: <\/span><a href=\"https:\/\/new.precisionconference.com\/submissions\"><span style=\"font-weight: 400;\">https:\/\/new.precisionconference.com\/submissions<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">Please select SIGCHI -&gt; UbiComp\/ISWC 2024 -&gt; UbiComp\/ISWC 2024 12th Workshop on HASCA 2024<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">Organisers<\/span><\/h4>\n<ul>\n<li><span style=\"font-weight: 400;\">Kazuya MURAO (Ritsumeikan University, Japan)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Yu ENOKIBORI (Nagoya University, Japan)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Hristijan GJORESKI (Ss. Cyril and Methodius University, N. Macedonia)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Paula LAGO (Concordia University, Canada)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Tsuyoshi OKITA (Kyushu Institute of Technology, Japan)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Pekka SIIRTOLA (University of Oulu, Finland)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Kei HIROI (Kyoto University, Japan)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Philipp M. 
SCHOLL (University of Freiburg, Germany)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Mathias CILIBERTO (University of Sussex, UK)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Kenta URANO (Nagoya University, Japan)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Marius BOCK\u00a0 (University of Siegen, Germany)<\/span><\/li>\n<\/ul>\n<h4><span style=\"font-weight: 400;\">Contact<\/span><\/h4>\n<p><a href=\"mailto:hasca-organizer@ml.hasc.jp\"><span style=\"font-weight: 400;\">hasca-organizer@ml.hasc.jp<\/span><\/a><\/p>\n<h4><span style=\"font-weight: 400;\">Website<\/span><\/h4>\n<p><a href=\"http:\/\/hasca2024.hasc.jp\"><span style=\"font-weight: 400;\">http:\/\/hasca2024.hasc.jp<\/span><\/a><\/p>\n<\/div><\/div><\/div><div class=\"fusion-panel panel-default panel-f17b8d7a65f4a936d fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_f17b8d7a65f4a936d\"><a aria-expanded=\"false\" aria-controls=\"f17b8d7a65f4a936d\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-1\" data-target=\"#f17b8d7a65f4a936d\" href=\"#f17b8d7a65f4a936d\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">Advancing Physiological Methods in Human-Information Interaction (APhyMeHII)<\/span><\/a><\/h4><\/div><div id=\"f17b8d7a65f4a936d\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_f17b8d7a65f4a936d\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p><span style=\"font-weight: 400;\"><strong>Human-Information Interaction (HII)<\/strong> has become increasingly ubiquitous. 
While it is crucial to understand and improve the user experience in HII, several challenges remain from a ubiquitous computing perspective, such as discrepancies in the definitions of cognitive activities involved in HII and the lack of standard practices for experimental task design and physiological methods to measure cognitive activities during the interaction.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In this workshop, we seek to form a common understanding and community standards of quantifying the cognitive aspects of user experience in HII.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We invite researchers and practitioners who use physiological data to measure user experience in HII to submit their contributions as a short research summary or position paper <strong>(4 pages in the SIGCHI one-column format, excluding references)<\/strong> discussing one or more of the workshop themes. Accepted submissions will be invited to give a talk at our workshop and included in the ACM DL (as part of the UbiComp\/ISWC \u201924 Adjunct Proceedings).<\/span><\/p>\n<p><span style=\"font-weight: 400;\">For more details, please visit <a href=\"https:\/\/hii-biosignal.github.io\/ubi24\/\">https:\/\/hii-biosignal.github.io\/ubi24\/<\/a> or get in touch with the workshop organizers via <a href=\"mailto:biosignal.ubicomp24@gmail.com\">biosignal.ubicomp24@gmail.com<\/a>.<\/span><\/p>\n<h4>Submission Details<\/h4>\n<p>To submit your contribution, please go to PCS (<a href=\"https:\/\/new.precisionconference.com\/submissions\">https:\/\/new.precisionconference.com\/submissions<\/a>), select conference &#8220;UbiComp\/ISWC 2024&#8221; and select track &#8220;UbiComp\/ISWC 2024: Workshop on Physiological Methods for HII&#8221;.<\/p>\n<h4><span style=\"font-weight: 400;\">Website<\/span><\/h4>\n<p><a href=\"https:\/\/hii-biosignal.github.io\/ubi24\/\"><span style=\"font-weight: 400;\">https:\/\/hii-biosignal.github.io\/ubi24\/<\/span><\/a><\/p>\n<\/div><\/div><\/div><div 
class=\"fusion-panel panel-default panel-33408a791efca2e2e fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_33408a791efca2e2e\"><a aria-expanded=\"false\" aria-controls=\"33408a791efca2e2e\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-1\" data-target=\"#33408a791efca2e2e\" href=\"#33408a791efca2e2e\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">Heads-Up Computing<\/span><\/a><\/h4><\/div><div id=\"33408a791efca2e2e\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_33408a791efca2e2e\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p><span style=\"font-weight: 400;\">Heads-Up Computing is an emerging interaction paradigm within Human-Computer Interaction (HCI) that focuses on integrating computing systems into the user&#8217;s natural environment and daily activities seamlessly. The goal is to deliver information and computing capabilities in an unobtrusive manner that complements ongoing tasks without interfering with users&#8217; natural engagement with the real-world context.\u2028<\/span><\/p>\n<p><span style=\"font-weight: 400;\">To better understand the concept of Heads-Up Computing, let&#8217;s use a cooking analogy to explore its components:<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Imagine you&#8217;re preparing to cook a meal. The first decision is selecting the right hardware; this could be a wok, a steamer, or a barbecue rack depending on what you&#8217;re planning to cook. Next, consider the ingredients. If you&#8217;re a vegetarian, your choices will naturally exclude meat, focusing instead on vegetables and plant-based products. Finally, the cooking method comes into play. 
Each cuisine, such as French or Chinese Sichuan, has its distinct techniques and methods that define its flavors and outcomes.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">So, what are the hardware, ingredients, and strategies in the context of Heads-Up Computing? \u2028<\/span><\/p>\n<p><strong>1) Hardware: Body-Compatible Hardware Components<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">Traditional devices like mobile phones often distract users, turning them into so-called &#8220;smartphone zombies&#8221; because they require concentrated interaction. In contrast, Heads-Up Computing leverages a distributed design that aligns with human capabilities. While numerous hardware design possibilities exist, achieving a balance between compatibility, convenience, practicality, and existing technological constraints is crucial. We anticipate that, at least in the near future (5-10 years), the hardware platform for Heads-Up Computing will primarily consist of two fundamental components: a head-piece and a hand-piece. 
In the future, we also anticipate a body-piece in the form of a robot that can further enhance the capability of the heads-up hardware platform.<\/span><\/p>\n<p><strong>Head-piece responsibilities:<\/strong><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">Provides real-time visual and aural feedback.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Understands the user&#8217;s visual perspective, auditory environment, facial gestures, and emotions.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Recognizes speech input and user attention.<\/span><\/li>\n<\/ul>\n<p><strong>Hand-piece responsibilities:<\/strong><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">Offers real-time haptic feedback.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Tracks hand position, posture, and movements.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Facilitates additional interaction commands.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">While some systems, such as Apple&#8217;s Vision Pro, integrate the head-piece and hand-piece into a single device, this approach compromises wearability, resulting in a device that is too bulky for everyday use. Consequently, a two-piece solution is more likely to achieve greater portability, and thus to serve as an everyday device. 
For example, systems like Eyeditor, GlassMessaging, and PandaLens utilize smart glasses as the head-piece and a wearable ring mouse as the hand-piece to achieve a balance between functionality and portability.\u00a0 Note that the hand-piece used in these examples is a basic one that achieves only part of the functionality of an ideal hand-piece, which aims to provide comprehensive tracking and feedback capabilities.<\/span><\/p>\n<p><strong>2) Ingredients: Multimodal Voice, Gaze, and Gesture Interaction<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">For effective interaction during daily activities, Heads-Up Computing utilizes complementary communication channels, as most tasks involve sight and manual activities:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\"><strong>Voice Control<\/strong>: Facilitates hands-free device interaction. Projects like EDITalk and Eyeditor have made strides in voice interactions, significantly enhancing user experience when combined with smart glasses.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Gaze Tracking<\/strong>: Directs computing experiences through eye movements. This technology is awaiting advancements like those anticipated with Apple\u2019s Vision Pro to overcome the limitations of mobile usage.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Micro-Gesture Recognition<\/strong>: Employs subtle gestures for interaction without disrupting other activities. Research has identified gestures suitable for Heads-Up Computing, improving the practicality of technologies such as the wearable ring mouse.<\/span><\/li>\n<\/ul>\n<p><strong>3) Strategies: Static and Dynamic Interface &amp; Interaction Design Approaches<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">Designing interface and interaction strategies for Heads-Up Computing presents unique challenges, as it requires minimal interference with the user&#8217;s current activities. 
This necessitates the use of transparent displays that adapt as the user moves, and the avoidance of traditional input methods like keyboards, mice, and touch interactions, which demand significant attention and resources.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u2028To create heads-up friendly interfaces and interactions, two main approaches can be considered:<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>a) Static Interface &amp; Interaction Design: Environmentally Aware and Fragmented Attention Friendly\u2028<\/strong><\/span><\/p>\n<p><span style=\"font-weight: 400;\">This approach aims to design interfaces that are suited for environments requiring fragmented attention, such as multitasking scenarios. Examples of research work in this category include:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">Adapting text spacing and presentation for readability on the go<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Utilizing icons instead of text for unobtrusive notifications\u00a0<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Redesigning the presentation of dynamic information, such as videos, to accommodate mobile multitasking\u00a0<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Glanceable interfaces\u00a0<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">In addition, tools like VidAdapter are instrumental in adapting existing media to these new interfaces, taking into account both the physical and cognitive availability of the user.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>b) Dynamic Interface &amp; Interaction Design: Resource-Aware<\/strong>\u2028<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Instead of one-size-fits-all interface solutions, one can also design interfaces that dynamically respond to the user\u2019s current cognitive and physical state. 
This is what we call the &#8220;resource-aware interaction&#8221; approach, which adjusts the system&#8217;s behavior and generates interfaces and interactions that are context-sensitive, providing a more personalized and efficient user experience. An example of such an interface has been proposed by Lindlbauer&#8217;s group. However, such interfaces require the system to have a stronger understanding of the environment, the users&#8217; cognitive status, and the device constraints in real time, which is much harder to do. Nevertheless, this is a research direction that&#8217;s worth further investigation, and Heads-up Multitasker is one such attempt that tries to understand users&#8217; cognitive model in heads-up computing scenarios.\u00a0 \u2028<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Heads-Up Computing signifies a transformative shift towards a human-centric approach, where technology is designed to augment rather than hinder user engagement with the real world. Although there has been some initial progress in this domain, much more exploration is needed to fully realize its potential. <strong>We view this workshop as a valuable opportunity to outline a research roadmap that will direct our future endeavors in this exciting field.<\/strong> This roadmap will help us identify key areas of focus, address current challenges, and explore innovative solutions that enhance user interaction seamlessly.<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">Topics of Interest<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">We look for participants with a research background in AR, MR, wearable computing, and\/or intelligent assistants. 
Interested academic participants are asked to submit a <strong>2-4 page position paper or research summary<\/strong> on topics including but not limited to:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\"><strong>Interfaces and Interactions<\/strong>: As smart glasses usher us into a new age, they bring forth the question of designing interactions that are intuitive, seamless, and socially acceptable. How can we meld technology with human instincts?<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Mobility\/Multitasking<\/strong>: The mobility that smart glasses bring is undeniable. The design nuances of catering to a user on-the-move\u2014be it walking, driving, or merely existing in public spaces\u2014deserve detailed discussion.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Ergonomics and Comfort<\/strong>: Functionality does not necessarily guarantee comfort. Balancing capability with user comfort will be a pivotal area of exploration.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Inclusive and trustworthy Information Access<\/strong>: Information empowers people\u2019s lives. With a constant influx of information, users stand at risk of being overwhelmed. This theme will dissect the impact of information accessibility and how to manage and interact with information without jeopardizing safety.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Privacy and Ethics<\/strong>: In an age where user data holds high value for various parties, wearable technologies walk a fine line between being informative and invasive. The ethical implications of data collection, storage, and usage will be a prime area of focus.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Abuse and Addiction<\/strong>: Every technological marvel comes with its own set of pitfalls. The potential misuse, both by vendors and individuals, will be scrutinized. 
Delving into these dark patterns will help us forecast and possibly prevent misuse.<\/span><\/li>\n<\/ul>\n<h4><span style=\"font-weight: 400;\">Submission guidelines<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">To submit your workshop paper for Ubicomp24p, please ensure your documents are formatted as PDF files. You can upload your proposals through the following link: <a href=\"https:\/\/new.precisionconference.com\/ubicomp24p\/\">Ubicomp24p Submission Portal<\/a>.<\/span><\/p>\n<p><strong>For Academic Participants, you can submit:<\/strong><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\"><strong>Position Paper<\/strong>: Focus on a specific issue within the realm of Heads-Up Computing.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Research Summary<\/strong>: Provide a comprehensive overview of multiple projects you are involved in.\u2028\u2028<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Once accepted, all position and research summary papers will be compiled into the workshop proceedings and will be accessible on the ArXiv platform.<\/span><\/p>\n<p><strong>For Industry Participants:<\/strong><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">If you do not have previous publications in this area but wish to attend the workshop, please submit a 1-page cover letter. 
In your letter, describe your background and outline what you hope to learn and contribute during the workshop.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">In addition to this standard format, we ask everyone to submit a simple <a href=\"https:\/\/forms.gle\/xC2v9x23vXGHX3K78\">online form<\/a> with the following information:\u00a0<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">A brief introduction to your research area.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Your past and ongoing research topics.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">What you want to get out of the workshop.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Perceived major issues with the next interaction paradigm of wearable intelligent assistants.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Insights or solutions you might have in mind.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">\u2028It is imperative that at least one author of each accepted submission attend the workshop. Furthermore, all participants must register both for the workshop and for a minimum of one day of the main conference. We eagerly await your valuable contributions and insights. 
Together, let\u2019s shape the future of human-computer interaction.<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">Organizers<\/span><\/h4>\n<ul>\n<li><span style=\"font-weight: 400;\">Shengdong Zhao: Professor, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong, China<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Ian Oakley: Professor, KAIST, Daejeon, South Korea<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Yun Huang: Associate Professor, University of Illinois at Urbana-Champaign, Rono-Hills, Urbana-Champaign, Illinois, USA<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Haiming Liu: Associate Professor, University of Southampton, Southampton, UK<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Can Liu: Associate Professor, City University of Hong Kong, 18 Tat Chee Avenue, Kowloon, Hong Kong, China<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">If you have any questions: please contact us on <a href=\"mailto:ubicomp24p@precisionconference.com\">ubicomp24p@precisionconference.com<\/a>.\u00a0\u00a0<\/span><\/p>\n<h4>Website<\/h4>\n<p><a href=\"https:\/\/sites.google.com\/view\/heads-up-computing-ubicomp\/home?authuser=1\">https:\/\/sites.google.com\/view\/heads-up-computing-ubicomp\/home?authuser=1<\/a><\/p>\n<\/div><\/div><\/div><div class=\"fusion-panel panel-default panel-13c8405c999d4f54f fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_13c8405c999d4f54f\"><a aria-expanded=\"false\" aria-controls=\"13c8405c999d4f54f\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-1\" data-target=\"#13c8405c999d4f54f\" href=\"#13c8405c999d4f54f\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span 
class=\"fusion-toggle-heading\">FairComp<\/span><\/a><\/h4><\/div><div id=\"13c8405c999d4f54f\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_13c8405c999d4f54f\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p><span style=\"font-weight: 400;\">We aim for FairComp to be an interdisciplinary forum that goes beyond just presenting papers, where we can bring together academia and industry. Notably, we reach out to researchers and practitioners whose work lies within the ACM SIGCHI domains (e.g., UbiComp, HCI, CSCW), as well as FAccT, ML &amp; AI, Social sciences, Philosophy, Law, Psychology, and others. The workshop organizers are actively engaged in the aforementioned themes and will encourage their network of colleagues and students to participate. In particular, the goal of this workshop is to collaboratively: <em><strong>Assess<\/strong> <\/em>the evolving socio-technical themes and concerns in relation to fairness across ubiquitous technologies, ranging from health, behavioral, and emotion sensing to human-activity recognition, mobility, and navigation. <em><strong>Map<\/strong> <\/em>the space of ethical risks and possibilities regarding technological interventions (e.g., input modalities, learning paradigms, design choices). <em><strong>Envision<\/strong> <\/em>new sensing and data-acquisition paradigms to fairly and accurately gather ubiquitous physical, physiological, and experiential qualities. <em><strong>Explore<\/strong> <\/em>novel methods for generalization, domain adaptation, and bias mitigation and investigate their suitability for diverse ubiquitous case studies. <em><strong>Initiate<\/strong> <\/em>a discourse around the future of \u201cubiquitous fairness\u201d and co-create research agenda(s) to meaningfully address it. 
<em><strong>Consolidate<\/strong> <\/em>an international network of researchers to further develop these research agendas through funding proposals and through steering future funding instruments.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The topics of interest include, but are not limited to, the following:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\"> \u00a0 New definitions, metrics, and criteria of fairness and robustness, tailored for ubiquitous computing.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> \u00a0 Indirect notions of fairness on devices (e.g., unfair resource allocation, energy, connectivity).<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> \u00a0 New methods for bias identification and mitigation.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> \u00a0 Bias, discrimination, and measurement errors in data, labels, and under-represented input modalities.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> \u00a0 New benchmark datasets for fairness and robustness evaluation (e.g., sensor data with protected attributes).<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> \u00a0 Geographical equity across datasets and applications (e.g., WEIRD research, Global South).<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> \u00a0 New user study methodologies beyond conventional protocols (e.g., Fairness-by-Design).<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> \u00a0 Robustness (e.g., out-of-distribution generalization, uncertainty quantification) of ML models in high-stake and real-world applications.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> \u00a0 Investigation of fairness trade-offs (e.g., fairness vs. 
accuracy, privacy, resource efficiency, generalizability).<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> \u00a0 Implications of regulatory frameworks for UbiComp.<\/span><\/li>\n<\/ul>\n<h4><span style=\"font-weight: 400;\">Submission details<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">We invite complete and ongoing research works, use cases, field studies, reviews, and position papers of 4-6 pages (excluding references). Submissions should follow UbiComp&#8217;s publication vendor instructions and be submitted through PCS. Specifically, the correct template for submission is the double-column Word Submission Template or the double-column LaTeX Template, and the correct template for publication (i.e., after conditional acceptance) is the single-column Word Submission Template or the double-column LaTeX template. Each article will be reviewed by two reviewers from a panel of experts consisting of external reviewers and organizers. To ensure accessibility, all authors should adhere to SIGCHI&#8217;s Accessible Submission Guidelines. All accepted publications will be published on the workshop website and the ACM Digital Library as part of the UbiComp 2024 proceedings. At least one author of each accepted paper needs to register for the conference and the workshop itself. During the workshop, each paper will be presented in-person by one of the authors. <strong>All papers need to be anonymized<\/strong>. 
Any questions should be mailed to <a href=\"mailto:faircomp.workshop@gmail.com\">faircomp.workshop@gmail.com<\/a>.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">For submissions, please go to the website: <a href=\"https:\/\/new.precisionconference.com\/submissions\">https:\/\/new.precisionconference.com\/submissions<\/a> (Society: SIGCHI &gt; Conference: Ubicomp\/ISWC 2024 &gt; Track: Ubicomp\/ISWC 2024 Workshop: FairComp).<\/span><span style=\"font-weight: 400;\">\u00a0\u00a0\u00a0<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">Organizing Committee<\/span><\/h4>\n<ul>\n<li><span style=\"font-weight: 400;\">Lakmal Meegahapola (ETH Zurich)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Dimitris Spathis (Nokia Bell Labs | University of Cambridge)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Marios Constantinides (Nokia Bell Labs | University of Cambridge)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Han Zhang (University of Washington)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Sofia Yfantidou (Aristotle University of Thessaloniki)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Niels van Berkel (Aalborg University)<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Anind K. 
Dey (University of Washington)<\/span><\/li>\n<\/ul>\n<h4>Website<\/h4>\n<p><a href=\"https:\/\/faircomp-workshop.github.io\/2024\/\">https:\/\/faircomp-workshop.github.io\/2024\/<\/a><\/p>\n<\/div><\/div><\/div><div class=\"fusion-panel panel-default panel-1a9daed65a91961a7 fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_1a9daed65a91961a7\"><a aria-expanded=\"false\" aria-controls=\"1a9daed65a91961a7\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-1\" data-target=\"#1a9daed65a91961a7\" href=\"#1a9daed65a91961a7\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">EarComp 2024<\/span><\/a><\/h4><\/div><div id=\"1a9daed65a91961a7\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_1a9daed65a91961a7\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p><span style=\"font-weight: 400;\">We will solicit three categories of papers:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\"><strong>Full papers (up to 6 pages including references)<\/strong> should report reasonably mature work with earables and are expected to demonstrate concrete and reproducible results, although the scale may be limited.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Experience papers (up to 4 pages including references)<\/strong> should present extensive experiences with the implementation, deployment, and operation of earable-based systems. 
Desirable papers are expected to contain real data as well as descriptions of the practical lessons learned.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Short papers (up to 2 pages including references)<\/strong> are encouraged to report novel and creative ideas that are yet to produce concrete research results but are at a stage where community feedback would be useful.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Moreover, we will have a special submission category &#8211; <strong>&#8220;Dataset Paper&#8221;<\/strong> &#8211; soliciting a 1-2 page document describing a well-curated and labeled dataset collected with earables (ideally accompanied by the dataset). Full research papers will use the two-column ACM sigconf template, and the accepted papers will be included in the ACM Digital Library.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">All papers will be digitally available through the workshop website and the UbiComp adjunct proceedings. For each category of papers, we will offer &#8220;Best Paper&#8221; and &#8220;Best Dataset&#8221; awards sponsored by Nokia Bell Labs. 
In addition, depending on the quality and depth of the submissions we might consider producing a Book on &#8220;Earable Computing&#8221; contributed by the authors of the papers, and edited by the Workshop Organizers.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Topics of interest are:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Acoustic Sensing with Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Kinetic Sensing with Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Multi-Modal Learning with Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Multi-Task Learning with Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Active Learning with Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Low-Power Sensing Systems for Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Authentication &amp; Trust mechanisms for Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Quality-Aware Data Collection with Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Experience Sampling with Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Crowd Sourcing with Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Novel UI and UX for Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Auditory Augmented Reality Application with Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span 
style=\"font-weight: 400;\">Lightweight Deep Learning on Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Tiny Machine Learning on Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Health and Wellbeing Applications of Earables<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Emerging applications of Earables<\/span><\/li>\n<\/ul>\n<h4>Website<\/h4>\n<p><a href=\"https:\/\/www.esense.io\/earcomp2024\/\">https:\/\/www.esense.io\/earcomp2024\/<\/a><\/p>\n<\/div><\/div><\/div><\/div><\/div><div class=\"fusion-text fusion-text-4\"><h4 class=\"fusion-responsive-typography-calculated\" style=\"--fontsize: 24; line-height: 1.36;\" data-fontsize=\"24\" data-lineheight=\"32.64px\">Tutorial (Oct 6, 2024)<\/h4>\n<\/div><div class=\"accordian fusion-accordian\" style=\"--awb-border-size:1px;--awb-icon-size:16px;--awb-content-font-size:var(--awb-typography4-font-size);--awb-icon-alignment:left;--awb-hover-color:#f9f9fb;--awb-border-color:#e2e2e2;--awb-background-color:#ffffff;--awb-divider-color:var(--awb-color3);--awb-divider-hover-color:var(--awb-color3);--awb-icon-color:#ffffff;--awb-title-color:var(--awb-color8);--awb-content-color:var(--awb-color8);--awb-icon-box-color:#212934;--awb-toggle-hover-accent-color:#1a80b6;--awb-title-font-family:var(--awb-typography1-font-family);--awb-title-font-weight:var(--awb-typography1-font-weight);--awb-title-font-style:var(--awb-typography1-font-style);--awb-title-font-size:16px;--awb-title-line-height:1.36;--awb-content-font-family:var(--awb-typography4-font-family);--awb-content-font-weight:var(--awb-typography4-font-weight);--awb-content-font-style:var(--awb-typography4-font-style);\"><div class=\"panel-group fusion-toggle-icon-boxed\" id=\"accordion-1863-2\"><div class=\"fusion-panel panel-default panel-c06c84c409327e01d fusion-toggle-has-divider\" 
style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_c06c84c409327e01d\"><a aria-expanded=\"false\" aria-controls=\"c06c84c409327e01d\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-2\" data-target=\"#c06c84c409327e01d\" href=\"#c06c84c409327e01d\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">Solving the Activity Recognition Problem (SOAR)<\/span><\/a><\/h4><\/div><div id=\"c06c84c409327e01d\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_c06c84c409327e01d\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p>Feature extraction remains the core challenge in Human Activity Recognition (HAR) &#8211; the automated inference of activities being performed from sensor data. Over the past few years, the community has witnessed a shift from manual feature engineering using statistical metrics and distribution-based representations, to feature learning via neural networks. In particular, self-supervised learning methods have gained significant traction, with various works demonstrating their ability to train powerful feature extractors from large-scale unlabeled data. Recently, the advent of Large Language Models (LLMs) and multi-modal foundation models has unveiled a promising direction by leveraging well-understood data modalities. This tutorial focuses on existing representation learning works, from single-sensor approaches to cross-device and cross-modality pipelines. 
Furthermore, we will provide an overview of recent developments in multi-modal foundation models, which originated from language and vision learning, but have recently started incorporating inertial measurement units (IMU) and time-series data. This tutorial will offer an important forum for researchers in the mobile sensing community to discuss future research directions in representation learning for HAR, and in particular, to identify potential avenues to incorporate the latest advancements in multi-modal foundation models, aiming to finally solve the long-standing activity recognition problem.<\/p>\n<h4><span style=\"font-weight: 400;\">Organizing Committee<\/span><\/h4>\n<ul>\n<li>Harish Haresamudram (Georgia Institute of Technology)<\/li>\n<li>Chi Ian Tang (Nokia Bell Labs)<\/li>\n<li>Sungho Suh (DFKI and RPTU)<\/li>\n<li>Paul Lukowicz (DFKI and RPTU)<\/li>\n<li>Thomas Ploetz (Georgia Institute of Technology)<\/li>\n<\/ul>\n<h4>Website<\/h4>\n<p><a href=\"https:\/\/sites.google.com\/view\/soar-tutorial-ubicomp2024\/home\">https:\/\/sites.google.com\/view\/soar-tutorial-ubicomp2024\/home<\/a><\/p>\n<\/div><\/div><\/div><\/div><\/div><div class=\"fusion-text fusion-text-5\"><h4>Workshops (Oct 6, 2024)<\/h4>\n<\/div><div class=\"accordian fusion-accordian\" 
style=\"--awb-border-size:1px;--awb-icon-size:16px;--awb-content-font-size:var(--awb-typography4-font-size);--awb-icon-alignment:left;--awb-hover-color:#f9f9fb;--awb-border-color:#e2e2e2;--awb-background-color:#ffffff;--awb-divider-color:var(--awb-color3);--awb-divider-hover-color:var(--awb-color3);--awb-icon-color:#ffffff;--awb-title-color:var(--awb-color8);--awb-content-color:var(--awb-color8);--awb-icon-box-color:#212934;--awb-toggle-hover-accent-color:#1a80b6;--awb-title-font-family:var(--awb-typography1-font-family);--awb-title-font-weight:var(--awb-typography1-font-weight);--awb-title-font-style:var(--awb-typography1-font-style);--awb-title-font-size:16px;--awb-title-line-height:1.36;--awb-content-font-family:var(--awb-typography4-font-family);--awb-content-font-weight:var(--awb-typography4-font-weight);--awb-content-font-style:var(--awb-typography4-font-style);\"><div class=\"panel-group fusion-toggle-icon-boxed\" id=\"accordion-1863-3\"><div class=\"fusion-panel panel-default panel-8bd3b7bf30a128a2d fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_8bd3b7bf30a128a2d\"><a aria-expanded=\"false\" aria-controls=\"8bd3b7bf30a128a2d\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-3\" data-target=\"#8bd3b7bf30a128a2d\" href=\"#8bd3b7bf30a128a2d\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">Interactive Design with Autistic Children using LLM and IoT for Personalized Training: The Good, The Bad and The Challenging (Half-Day)<\/span><\/a><\/h4><\/div><div id=\"8bd3b7bf30a128a2d\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_8bd3b7bf30a128a2d\"><div class=\"panel-body toggle-content 
fusion-clearfix\">\n<p>The goal of this workshop is to provide a platform for researchers, software and medical practitioners, and designers to share and debate the pros and cons of applying Large Language Models (LLMs) and the Internet of Things (IoT) to diagnosis and personalized training for autistic children. Through multiple activities during the half-day workshop, including oral presentations, demos, and a panel discussion, we hope to build a network of experts dedicated to benefiting children with special needs, and to further inspire research that leverages emerging technologies for these underserved users, their caregivers, and special education teachers.<\/p>\n<p><span style=\"font-weight: 400;\">This workshop explores the benefits, challenges, and future directions of creative interactive design using LLMs and IoT with\/for autistic children in diagnosis and personalized training. By engaging in presentations, demonstrations, and group discussions, participants will have the chance to exchange their related experiences and insights.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We welcome submissions of position papers, work-in-progress reports, or demonstration papers for a short presentation or demonstration related to interactive design with autistic children using LLMs and IoT for diagnosis and personalized training, or related fields. 
Specifically, the proposed workshop is expecting:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\"><strong>Position papers (2-4 pages)<\/strong> discussing research questions, opportunities, benefits, or challenges.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Work-in-progress reports (2-4 pages)<\/strong> highlighting current research.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Demonstration papers (1 page)<\/strong> illustrating a leading-edge system in use, under development, or at a testing stage.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">The suggested topics include (but are not limited to):<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">Optimized and Personalized Training for Special Education<\/span><\/li>\n<li><span style=\"font-weight: 400;\">AI, IoT, and\/or Smart Sensors for Special Education<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Large Language Models (LLMs), and\/or Large Vision Models (LVM) for Special Education<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Technology-Based Intervention (TBI) for Special Education<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Interactive Design with Children<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Emerging Applications for Special Education and\/or Healthcare<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Authors of accepted works will be invited to present their submissions in a dedicated presentation or demo session.<\/span><\/p>\n<h4 class=\"fusion-responsive-typography-calculated\" style=\"--fontsize: 24; line-height: 1.36;\" data-fontsize=\"24\" data-lineheight=\"32.64px\"><span style=\"font-weight: 400;\">Submission Guidelines<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Submissions should be in the UbiComp\/ISWC 2024 Proceedings Formats and submitted v<\/span><span style=\"font-weight: 400;\">ia UbiComp PCS. The submission portal will open in May 2024. 
<\/span><\/p>\n<h4 class=\"fusion-responsive-typography-calculated\" style=\"--fontsize: 24; line-height: 1.36;\" data-fontsize=\"24\" data-lineheight=\"32.64px\">Website<\/h4>\n<p><a href=\"https:\/\/idwac.github.io\/\">https:\/\/idwac.github.io\/<\/a><\/p>\n<\/div><\/div><\/div><div class=\"fusion-panel panel-default panel-9cf5d2acbf55fd8a2 fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_9cf5d2acbf55fd8a2\"><a aria-expanded=\"false\" aria-controls=\"9cf5d2acbf55fd8a2\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-3\" data-target=\"#9cf5d2acbf55fd8a2\" href=\"#9cf5d2acbf55fd8a2\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">UbiComp Mental Health Sensing and Intervention<\/span><\/a><\/h4><\/div><div id=\"9cf5d2acbf55fd8a2\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_9cf5d2acbf55fd8a2\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p><span style=\"font-weight: 400;\">For the <strong>2024 UbiComp Mental Health Sensing and Intervention workshop<\/strong>, we invite paper submissions at the intersection of mental health, well-being, ubiquitous computing, and human-centered design.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This year, we are adding a special call for workshop papers that inspire new research directions. These papers should include initial findings that are valuable to the community, but are not fully publishable or finished contributions. 
Based upon prior years&#8217; work, these papers could include methods and\/or topics such as:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">Ethical deployments of ubiquitous computing systems in historically underserved communities.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Ethical frameworks for developing and implementing ubiquitous technologies for mental health.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Experience reports from clinical studies in any phase, from early pilot studies to large-scale clinical trials.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Experience reports of clinical implementation from any perspective in the healthcare system.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Identification of opportunities for ubiquitous computing technologies to help solve global issues that impact mental health, like climate change.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Integration of ubiquitous technologies into existing healthcare infrastructures (e.g., payment models, regulatory frameworks) and policy.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Investigation of new methodologies for intervention (e.g., conversational agents, AR\/VR applications).<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Proposals of novel frameworks to implement and sustain ubiquitous computing technologies in mental healthcare.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Reflections on implementing ubiquitous computing-based technologies to improve mental health and well-being in both clinical and general populations.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">We also encourage submissions on other topics, including but not limited to (in alphabetical order):<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">Analyses of fairness and bias in mental health&#8211;ubiquitous computing technologies.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Design and 
implementation of computational platforms (e.g., mobile phones, instrumented homes, skin-patch sensors) to collect health and well-being data.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Design and implementation of feedback or decision-support (e.g., reports, visualizations, proactive behavioral interventions, subtle or subconscious interventions etc.) for both patients and caregivers towards improved mental health.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Design of privacy-preserving strategies for data collection, analysis, and management.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Development of methods for sustaining user adherence and engagement over the course of an intervention.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Development of robust models that can handle data sparsity and mislabeling issues within mobile sensing and mental health data.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Identification of opportunities for UbiComp approaches (e.g., digital phenotyping, predictive modeling, micro-randomized intervention trials, adaptive interventions) to better understand factors related to substance abuse.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Integration of multimodal data (with potentially clinical data) from various sensor streams for predicting or measuring mental health and well-being.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">We are soliciting five types of contributions for the workshop:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\"><strong>Scientific papers<\/strong> describing novel technologies, approaches, and studies related to ubiquitous computing and mental health. 
We encourage these submissions to focus on learnings that are beneficial for the community, and not finished contributions.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Challenge papers<\/strong>, in which authors describe a specific challenge to be pitched and discussed at the workshop. These papers often lead to a lively discussion during the workshop.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Demonstrations<\/strong>, to facilitate authors demonstrating developed technologies and early systems at the workshop.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Experience reports<\/strong> that can introduce novel perspectives on real-world implementation, such as in clinical settings, or historically underserved communities.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong>Critical reflections<\/strong> of one&#8217;s own research or existing research at the intersection of ubiquitous computing and mental healthcare. We expect critical reflection papers to contribute towards better research practices in the community.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">We will accept submissions up to <strong>6 pages, including figures and references<\/strong>. The 6 pages are not a requirement; shorter submissions (e.g., 3 pages) are welcome. Papers should be submitted using the UbiComp ISWC 2024 proceedings format, see the UbiComp website for more details: <a href=\"https:\/\/ubicomp.org\/ubicomp-iswc-2024\/authors\/formatting\/\">https:\/\/ubicomp.org\/ubicomp-iswc-2024\/authors\/formatting\/<\/a><\/span><\/p>\n<p><span style=\"font-weight: 400;\">All submitted papers will be reviewed and judged on originality, technical correctness, relevance, and quality of presentation. We explicitly invite submissions of papers that describe preliminary results or work-in-progress, including early clinical experience. 
The accepted papers will appear in the UbiComp supplemental proceedings and in the ACM Digital Library. Authors of accepted papers will be invited to present their work in-person in Melbourne and receive feedback from workshop attendees.<\/span><\/p>\n<h4>Submission Guidelines<\/h4>\n<p><span style=\"font-weight: 400;\">Submit your papers at <a href=\"https:\/\/new.precisionconference.com\/user\/login\">https:\/\/new.precisionconference.com\/user\/login<\/a> (please select SIGCHI &gt; UBICOMP2024 &gt; Ubicomp 2024 Mental Health).<\/span><\/p>\n<h4>Website<\/h4>\n<p><a href=\"https:\/\/ubicomp-mental-health.github.io\/\">https:\/\/ubicomp-mental-health.github.io\/<\/a><\/p>\n<\/div><\/div><\/div><div class=\"fusion-panel panel-default panel-46ca622f2d2711eea fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_46ca622f2d2711eea\"><a aria-expanded=\"false\" aria-controls=\"46ca622f2d2711eea\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-3\" data-target=\"#46ca622f2d2711eea\" href=\"#46ca622f2d2711eea\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">OpenWearables 2024<\/span><\/a><\/h4><\/div><div id=\"46ca622f2d2711eea\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_46ca622f2d2711eea\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p><span style=\"font-weight: 400;\">The OpenWearables 2024 workshop aims to address the challenges and opportunities in the field of open source wearable technology. 
We invite submissions from researchers, developers and innovators on topics such as open source designs of wearable devices, applications and evaluations of open source wearables, software that supports the design and development of open wearables, and frameworks. Submissions should be concise, limited to a maximum of 4 pages excluding references, and demonstrate the use, build and interface processes of open hardware, software or systems. Papers should use pictures, graphs and functional diagrams as often as possible in the explanation of the work. An essential requirement is that the projects presented adhere to open source principles. Papers will be selected based on adherence to these principles and the clarity of the paper. During the workshop, authors will be required to present their research paper and also provide a demonstration of their open wearable work to showcase the practical applications and potential impact of their research.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The workshop will feature a mix of keynote speeches, paper presentations, demo sessions, and group discussions, providing a platform for participants to showcase their work, share insights, and foster collaboration within the open wearables community. We particularly encourage demonstrations of open source wearable projects during the hands-on demo sessions, in addition to the paper.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">All accepted papers will be considered for inclusion in a special position paper summarising the results of the workshop, which will be published in the proceedings. We will also make all workshop materials available on open-wearables.org and GitHub, creating a lasting resource for the community.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Join us at OpenWearables 2024 to help democratise wearable technology, accelerate innovation and establish standards for open wearables. 
Together, we can create a future where wearable technologies are accessible, interoperable and impactful across applications and industries.<\/span><\/p>\n<h4>Website<\/h4>\n<p><a href=\"http:\/\/open-wearables.org\"><span style=\"font-weight: 400;\">open-wearables.org<\/span><\/a><\/p>\n<\/div><\/div><\/div><div class=\"fusion-panel panel-default panel-9632ec073675b6221 fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_9632ec073675b6221\"><a aria-expanded=\"false\" aria-controls=\"9632ec073675b6221\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-3\" data-target=\"#9632ec073675b6221\" href=\"#9632ec073675b6221\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">The Fourth Workshop on Multiple Input Modalities and Sensations for VR\/AR Interactions (MIMSVAI)<\/span><\/a><\/h4><\/div><div id=\"9632ec073675b6221\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_9632ec073675b6221\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p><span style=\"font-weight: 400;\">Rapid technological advancements are expanding the scope of virtual reality and augmented reality (VR\/AR) applications; however, users must contend with a lack of sensory feedback and limitations on input modalities by which to interact with their environment. Gaining an intuitive understanding of any VR\/AR application requires the complete immersion of users in the virtual environment, which can only be achieved through the adoption of realistic sensory feedback mechanisms. 
This <\/span><span style=\"font-weight: 400;\">workshop brings together researchers in UbiComp and VR\/AR to investigate alternative input modalities and sensory feedback systems with the aim of developing coherent and engaging VR\/AR experiences mirroring real-world interactions.<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">Submission Guidelines\u00a0<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Online Submission System (PCS): https:\/\/new.precisionconference.com\/.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Please select \u201cSIGCHI\u201d as Society, \u201cUbicomp\/ISWC 2024\u201d as Conference\/Journal, and \u201cUbiComp\/ISWC 2024 Workshop MIMSVAI\u201d as the track on the submission page. <strong>All papers need to be anonymized<\/strong>. Please submit papers with a maximum length of <strong>5 pages (4 pages + 1 page of references)<\/strong> in the two-column ACM SIGCHI sigconf template. Please contact us (ubicomp.mimsvai@gmail.com) if you have any problems when preparing your submissions. 
The accepted papers will be published in the UbiComp\/ISWC Adjunct Proceedings, which will be included in the ACM Digital Library as part of the UbiComp conference supplemental proceedings.<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">Contact<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">All questions about submissions should be emailed to <a href=\"mailto:ubicomp.mimsvai@gmail.com\">ubicomp.mimsvai@gmail.com<\/a>.<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">Best Paper Award<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">The Best Paper Award will be conferred upon the most outstanding paper presented at the MIMSVAI 2024 workshop.<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">List of Topics<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Topics may include, but are not limited to:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\"> 2D\/3D and volumetric display and projection <\/span>technology<\/li>\n<li><span style=\"font-weight: 400;\"> Immersive analytics and visualization<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Modeling and simulation<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Multimodal capturing and reconstruction<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Scene description and management issues<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Storytelling<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Tracking and sensing<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Embodied agents and self-avatars<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Haptic and tactile interfaces, wearable haptics, <\/span>passive haptics, pseudo haptics<\/li>\n<li><span style=\"font-weight: 400;\">Mediated and diminished reality<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Multimodal input and output<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Multisensory rendering, registration, and <\/span>synchronization<\/li>\n<li><span style=\"font-weight: 
400;\">Perception and cognition<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Presence, body ownership, and agency<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Teleoperation and telepresence<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> 3D user interaction<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> 3D user interface metaphors<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Collaborative interactions<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Human factors and ergonomics<\/span><\/li>\n<\/ul>\n<h4>Website<\/h4>\n<p><a href=\"https:\/\/mimsvai.github.io\">https:\/\/mimsvai.github.io<\/a><\/p>\n<\/div><\/div><\/div><div class=\"fusion-panel panel-default panel-c2d5da912ed61a325 fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_c2d5da912ed61a325\"><a aria-expanded=\"false\" aria-controls=\"c2d5da912ed61a325\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-3\" data-target=\"#c2d5da912ed61a325\" href=\"#c2d5da912ed61a325\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">Workshop on Interpretable, Inclusive, and Immersive Interactions for Ubiquitous AI-infused Physical Systems<\/span><\/a><\/h4><\/div><div id=\"c2d5da912ed61a325\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_c2d5da912ed61a325\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p><span style=\"font-weight: 400;\">AI is increasingly integrated with physical entities for sensing and actuation, directly impacting our daily lives. This integration spans from routine goods to specialized AI-infused products, from small actuators to large electro-mechanical systems with ubiquitous intelligence. 
The complexity of these systems and their direct impact on our physical reality pose unique challenges in designing interpretable and inclusive interactions. As immersive technologies blur the boundaries between the physical and digital worlds, there are new opportunities to augment the capabilities of AI-infused physical systems. This workshop aims to explore the challenges and opportunities in designing interpretable, inclusive, and immersive interactions with ubiquitous AI-infused physical systems, considering their physical exertion and expanding capabilities. We invite research that addresses the research questions below:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\"><strong> RQ1<\/strong>: How can we design interpretable interactions for ubiquitous <\/span>AI-infused physical systems that bridge the gap between human understanding, anticipation, and actual system behavior to ensure user trust and adoption?<\/li>\n<li><span style=\"font-weight: 400;\"><strong>RQ2<\/strong>: What are the key challenges and opportunities in designing <\/span>interactions for ubiquitous AI-infused physical systems that are adaptive and responsive to diverse user needs and preferences while promoting long-term user well-being?<\/li>\n<li><span style=\"font-weight: 400;\"><strong>RQ3<\/strong>: How can we leverage emerging technologies and extended reality to enable new forms of natural and intuitive interaction with ubiquitous AI-infused physical systems?<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><strong> RQ4<\/strong>: What are the ethical, social, and cultural implications <\/span>of ubiquitous AI-infused physical systems, and how can we develop best practices, design guidelines, and principles for creating interpretable, inclusive, and immersive interactions that align with human values and expectations?<\/li>\n<li><span style=\"font-weight: 400;\"><strong>RQ5<\/strong>: How can we design inclusive and accessible interactions for ubiquitous AI-infused 
physical systems that accommodate diverse user groups and abilities?<\/span><\/li>\n<\/ul>\n<h4><span style=\"font-weight: 400;\">Track 1: Research Contributions<\/span><\/h4>\n<ul>\n<li><span style=\"font-weight: 400;\"> Artifacts and prototypes showcasing interaction with networked or embedded intelligence in \u2018physical\u2019 systems, including immersive solutions to augment their capabilities.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Positions for novel interaction paradigms, design principles, and frameworks for AI-infused \u2018physical\u2019 systems, pushing the envelope between the physical and virtual worlds.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Case studies and empirical evaluations assessing the interaction quality and user experience of AI-infused \u2019physical\u2019 systems in specific application areas, including the impact of immersive technologies on user experience.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Enabling technologies, platforms, and infrastructures supporting the development of AI-infused \u2019physical\u2019 systems and their integration with immersive technologies.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> User studies for user needs and preferences in physical exertion and immersive experience with AI-infused \u2019physical\u2019 systems, given the interpretability and inclusivity challenges.<\/span><\/li>\n<\/ul>\n<h4><span style=\"font-weight: 400;\">Track 2: Algorithmic Contributions<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Artifact contributions may require time and resources that exceed the constraints of a workshop deadline. To provide wider opportunities for junior researchers or those with limited access to facilities, we also encourage algorithmic contributions that align with the workshop\u2019s scope and objectives, leveraging existing open datasets. 
While open to all types of open datasets, for those released by the organizers (listed below), we can provide guidance and feedback throughout the workshop, although all contributions will undergo the same review process. Exemplary datasets include but are not limited to:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\"><a href=\"https:\/\/doi.org\/10.6084\/m9.figshare.c.6879628.v1\"> Engagnition<\/a>: A multi-dimensional dataset for engagement recognition of children with autism spectrum disorder\u00a0<\/span><\/li>\n<li><span style=\"font-weight: 400;\"><a href=\"https:\/\/doi.org\/10.1038\/s41597-024-03144-z\"> MultiSenseBadminton<\/a>: Wearable Sensor\u2013Based Biomechanical Dataset for Evaluation of Badminton Performance\u00a0<\/span><\/li>\n<\/ul>\n<h4><span style=\"font-weight: 400;\">Submission Guidelines<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Submission and review processes are the same for both research and algorithmic contributions.\u00a0<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">Submission Formatting and Procedure:<\/span><\/h4>\n<ul>\n<li><span style=\"font-weight: 400;\"> Extended abstracts can be up to 4 pages long, in the two-column ACM Proceedings format, excluding references.\u00a0<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Submissions should be made via the Precision Conference.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Submissions should follow UbiComp 2024\u2019s <a href=\"https:\/\/www.ubicomp.org\/ubicomp-iswc-2024\/authors\/accessibility-guidelines\/\">guidelines for accessible materials<\/a>.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> All accepted papers will be included in the UbiComp\/ISWC <\/span>Adjunct Proceedings, which will be indexed in the ACM DL.<\/li>\n<\/ul>\n<h4><span style=\"font-weight: 400;\">Review Criteria:<\/span><\/h4>\n<ul>\n<li><span style=\"font-weight: 400;\"> All submissions will be peer-reviewed by at least two reviewers, including organizers, steering 
committee members, and external reviewers.<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> As the workshop aims to stimulate discussions on future research agendas, the review will prioritize relevance, novelty, and ideas while also considering soundness and clarity.<\/span><\/li>\n<\/ul>\n<h4><span style=\"font-weight: 400;\">Organizers<\/span><\/h4>\n<ul>\n<li><span style=\"font-weight: 400;\"> Gwangbin Kim, GIST<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Minwoo Seong, GIST<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Dohyeon Yeo, GIST<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> Yumin Kang, GIST<\/span><\/li>\n<li><span style=\"font-weight: 400;\"> SeungJun Kim, GIST<\/span><\/li>\n<\/ul>\n<h4>Website<\/h4>\n<p><a href=\"https:\/\/sites.google.com\/view\/i4u2024\">https:\/\/sites.google.com\/view\/i4u2024<\/a><\/p>\n<\/div><\/div><\/div><div class=\"fusion-panel panel-default panel-08d17f5dbcf2955a2 fusion-toggle-has-divider\" style=\"--awb-title-color:var(--awb-color8);\"><div class=\"panel-heading\"><h4 class=\"panel-title toggle\" id=\"toggle_08d17f5dbcf2955a2\"><a aria-expanded=\"false\" aria-controls=\"08d17f5dbcf2955a2\" role=\"button\" data-toggle=\"collapse\" data-parent=\"#accordion-1863-3\" data-target=\"#08d17f5dbcf2955a2\" href=\"#08d17f5dbcf2955a2\"><span class=\"fusion-toggle-icon-wrapper\" aria-hidden=\"true\"><i class=\"fa-fusion-box active-icon awb-icon-minus\" aria-hidden=\"true\"><\/i><i class=\"fa-fusion-box inactive-icon awb-icon-plus\" aria-hidden=\"true\"><\/i><\/span><span class=\"fusion-toggle-heading\">XAI for U<\/span><\/a><\/h4><\/div><div id=\"08d17f5dbcf2955a2\" class=\"panel-collapse collapse \" aria-labelledby=\"toggle_08d17f5dbcf2955a2\"><div class=\"panel-body toggle-content fusion-clearfix\">\n<p><span style=\"font-weight: 400;\">We invite submissions of original research, insightful case studies, and work in progress that address XAI applications within Ubiquitous and Wearable Computing, 
including but not limited to:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">XAI in time-series and multimodal data analysis: techniques and challenges in interpreting complex data streams from wearable and ubiquitous computing devices.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">User-centered explanations for AI-driven systems: designing explanations that are meaningful and accessible to end-users.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Deployment and evaluation of XAI tools in real-world scenarios: case studies and empirical research on the effectiveness of XAI applications.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Multimodal XAI for behavior analysis: leveraging diverse data sources for comprehensive behavior analysis.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Interconnected ML components in wearable and ubiquitous computing: strategies for explaining the dynamics and decisions of interconnected AI systems and models.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Ethical considerations and user privacy in XAI: addressing the ethical implications and privacy concerns of deploying XAI in ubiquitous computing.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Multimodal XAI in affective computing: techniques for understanding and interpreting human emotions through AI.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Empirical evaluation methods: methods for assessing the effectiveness and impact of XAI and multimodal AI systems.<\/span><\/li>\n<\/ul>\n<h4>Submission
Guidelines<\/h4>\n<p><span style=\"font-weight: 400;\">Submissions should be anonymized and up to <strong>4 pages (including references)<\/strong>. ACM requires UbiComp\/ISWC 2024 workshop submissions to use the double-column template. Please check the UbiComp website for more details about the template.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Submissions can be made via PCS at http:\/\/new.precisionconference.com\/sigchi. The submission<\/span><span style=\"font-weight: 400;\">\u00a0site opens in May 2024. <\/span><span style=\"font-weight: 400;\">On the submissions tab, please select SIGCHI society, the UbiComp\/ISWC 2024 conference, and the \u201cUbiComp\/ISWC 2024 XAI for U\u201d track.<\/span><\/p>\n<h4>Website<\/h4>\n<p><a href=\"https:\/\/ubicomp-xai.github.io\/\">https:\/\/ubicomp-xai.github.io\/<\/a><\/p>\n<\/div><\/div><\/div><\/div><\/div><div class=\"fusion-clearfix\"><\/div><\/div><\/div><div class=\"fusion-layout-column fusion_builder_column fusion-builder-column-2 fusion_builder_column_1_5 1_5 fusion-one-fifth fusion-column-last\" style=\"--awb-bg-size:cover;width:20%;width:calc(20% - ( ( 4% ) * 0.2 ) );\"><div class=\"fusion-column-wrapper fusion-column-has-shadow fusion-flex-column-wrapper-legacy\"><div class=\"fusion-title title fusion-title-2 fusion-sep-none fusion-title-text fusion-title-size-three\" style=\"--awb-text-color:var(--awb-color4);--awb-font-size:24px;\"><h3 class=\"fusion-title-heading title-heading-left\" style=\"margin:0;font-size:1em;\">IMPORTANT DATES<\/h3><\/div><div class=\"fusion-text fusion-text-6\"><p><span style=\"font-size: 14px;\"><b style=\"color: var(--awb-color4);\">Submission Deadline:<\/b><br \/>\n<\/span><b style=\"font-size: 14px;\" data-fusion-font=\"true\">June 7, 2024<\/b><span style=\"font-size: 14px;\"><br \/>\n<\/span><\/p>\n<p><span style=\"font-size: 14px;\"><b style=\"color: var(--awb-color4);\">Workshops in Melbourne:<\/b><br \/>\n<\/span><strong style=\"font-size: 14px; background-color: 
rgba(255, 255, 255, 0); color: var(--body_typography-color); font-family: var(--body_typography-font-family); font-style: var(--body_typography-font-style,normal); letter-spacing: var(--body_typography-letter-spacing);\">October 5-6, 2024<\/strong><\/p>\n<\/div><div class=\"fusion-sep-clear\"><\/div><div class=\"fusion-separator fusion-full-width-sep\" style=\"margin-left: auto;margin-right: auto;margin-top:20PX;width:100%;\"><\/div><div class=\"fusion-sep-clear\"><\/div><div class=\"fusion-title title fusion-title-3 fusion-sep-none fusion-title-text fusion-title-size-three\" style=\"--awb-text-color:#37408b;--awb-font-size:24px;\"><h3 class=\"fusion-title-heading title-heading-left\" style=\"margin:0;font-size:1em;\">CONTACT<\/h3><\/div><div class=\"fusion-text fusion-text-7\"><p><a href=\"mailto:workshops-2024@ubicomp.org\">workshops-2024@ubicomp.org<\/a><\/p>\n<\/div><div class=\"fusion-clearfix\"><\/div><\/div><\/div><\/div><\/div><footer class=\"fusion-fullwidth fullwidth-box fusion-builder-row-3 nonhundred-percent-fullwidth non-hundred-percent-height-scrolling\" style=\"--awb-border-radius-top-left:0px;--awb-border-radius-top-right:0px;--awb-border-radius-bottom-right:0px;--awb-border-radius-bottom-left:0px;--awb-padding-top:20px;--awb-padding-right:50px;--awb-padding-bottom:50px;--awb-padding-left:50px;--awb-background-color:#37408b;--awb-flex-wrap:wrap;\" ><div class=\"fusion-builder-row fusion-row\"><div class=\"fusion-layout-column fusion_builder_column fusion-builder-column-3 fusion_builder_column_1_4 1_4 fusion-one-fourth fusion-column-first\" style=\"--awb-padding-top:40px;--awb-bg-size:cover;width:25%;width:calc(25% - ( ( 4% ) * 0.25 ) );margin-right: 4%;\"><div class=\"fusion-column-wrapper fusion-column-has-shadow fusion-flex-column-wrapper-legacy\"><div class=\"fusion-text fusion-text-8\"><h4><span style=\"color: #ffffff;\" data-darkreader-inline-color=\"\">UbiComp \/ ISWC<\/span><\/h4>\n<\/div><div 
class=\"fusion-clearfix\"><\/div><\/div><\/div><div class=\"fusion-layout-column fusion_builder_column fusion-builder-column-4 fusion_builder_column_3_4 3_4 fusion-three-fourth fusion-column-last\" style=\"--awb-bg-size:cover;width:75%;width:calc(75% - ( ( 4% ) * 0.75 ) );\"><div class=\"fusion-column-wrapper fusion-column-has-shadow fusion-flex-column-wrapper-legacy\"><div class=\"fusion-text fusion-text-9\"><h4><span style=\"color: #ffffff;\" data-darkreader-inline-color=\"\">Past Conferences<\/span><\/h4>\n<div class=\"fusion-text fusion-text-4\">\n<p><span style=\"color: #ffffff;\" data-darkreader-inline-color=\"\">The ACM international joint conference on Pervasive and Ubiquitous Computing (<b>UbiComp<\/b>) is the result of a merger of the two most renowned conferences in the field: Pervasive and UbiComp. While it retains the name of the latter in recognition of the visionary work of Mark Weiser, its long name reflects the dual history of the new event.<\/span><\/p>\n<p><span style=\"color: #ffffff;\" data-darkreader-inline-color=\"\">The ACM International Symposium on Wearable Computing (<b>ISWC<\/b>) discusses novel results in all aspects of wearable computing, and has been colocated with UbiComp and Pervasive since 2013.<\/span><\/p>\n<p><span style=\"color: #ffffff;\" data-darkreader-inline-color=\"\">A complete list of UbiComp, Pervasive, and ISWC past conferences is provided below.<\/span><\/p>\n<\/div>\n<\/div><div class=\"fusion-button-wrapper\"><a class=\"fusion-button button-flat fusion-button-default-size button-custom fusion-button-default button-1 fusion-button-default-span fusion-button-default-type\" 
style=\"--button_accent_color:#105378;--button_accent_hover_color:#105378;--button_border_hover_color:#105378;--button-border-radius-top-left:0;--button-border-radius-top-right:0;--button-border-radius-bottom-right:0;--button-border-radius-bottom-left:0;--button_gradient_top_color:#e5e6e8;--button_gradient_bottom_color:#e5e6e8;--button_gradient_top_color_hover:#999a9b;--button_gradient_bottom_color_hover:#999a9b;\" target=\"_self\" href=\"https:\/\/ubicomp.hosting.acm.org\/ubicompiswc2024_wp\/past-conferences\/\"><span class=\"fusion-button-text\">View all<\/span><\/a><\/div><div class=\"fusion-clearfix\"><\/div><\/div><\/div><\/div><\/footer>\n<\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":9,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"100-width.php","meta":{"footnotes":""},"_links":{"self":[{"href":"https:\/\/ubicomp.hosting.acm.org\/ubicompiswc2024_wp\/wp-json\/wp\/v2\/pages\/1863"}],"collection":[{"href":"https:\/\/ubicomp.hosting.acm.org\/ubicompiswc2024_wp\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ubicomp.hosting.acm.org\/ubicompiswc2024_wp\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ubicomp.hosting.acm.org\/ubicompiswc2024_wp\/wp-json\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/ubicomp.hosting.acm.org\/ubicompiswc2024_wp\/wp-json\/wp\/v2\/comments?post=1863"}],"version-history":[{"count":96,"href":"https:\/\/ubicomp.hosting.acm.org\/ubicompiswc2024_wp\/wp-json\/wp\/v2\/pages\/1863\/revisions"}],"predecessor-version":[{"id":4044,"href":"https:\/\/ubicomp.hosting.acm.org\/ubicompiswc2024_wp\/wp-json\/wp\/v2\/pages\/1863\/revisions\/4044"}],"wp:attachment":[{"href":"https:\/\/ubicomp.hosting.acm.org\/ubicompiswc2024_wp\/wp-json\/wp\/v2\/media?parent=1863"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}