Special Sessions

Cardiff, UK, June 29th - July 3rd, 2026


Semantic Quality Assessment for Multi-Modal Intelligent Systems

In contemporary digital ecosystems, information is increasingly represented, transmitted, and consumed in diverse multi-modal forms: images, video, audio, high-dimensional features extracted by large AI models, etc. While these modalities differ in structure, they all serve a common purpose: to convey perceptual or semantic information about the underlying content.

For decades, quality assessment (QA) has been predominantly guided by human perceptual fidelity, with metrics such as PSNR, SSIM, and VMAF evaluating signals against models of human perception. However, a fundamental shift is underway in the consumption of multimedia content. In many emerging applications, such as autonomous systems, AIGC pipelines, distributed large model inference, and embodied AI, the data is increasingly consumed by machines. In these AI-driven scenarios, semantic consistency of multimedia data, rather than signal-level or perceptual quality, becomes the primary determinant of downstream task performance.

This paradigm shift reveals three limitations of current QA methodologies: 1) Misalignment between semantic quality and perceptual/signal quality: data may undergo large signal-level deviation while preserving semantics, or vice versa, a distinction traditional QA fails to capture. 2) Lack of task-oriented or task-agnostic semantic quality measures: machine-centric tasks require preserving object identity, scene structure, linguistic content, or high-level embedding properties, which classical perceptual metrics cannot evaluate. 3) Inability to support scalable, layered, or hybrid human–machine consumption: as multimedia systems move toward joint human–machine consumption, heterogeneous downstream tasks need different levels of information, and existing QA frameworks cannot guide such multi-layered compression and transmission strategies.
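The first limitation can be illustrated with a toy numerical sketch. Below, a uniform brightness shift produces a large signal-level deviation (low PSNR) while a simple brightness-invariant feature, standing in for the invariances of a learned embedding, remains essentially unchanged. The `semantic_proxy_similarity` function is a hypothetical stand-in chosen for illustration, not an established metric.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Classical signal-level quality metric (higher is better)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def semantic_proxy_similarity(a, b):
    """Toy 'semantic' similarity: cosine between mean-subtracted signals.

    Mean subtraction makes the feature invariant to global brightness
    shifts, a crude stand-in for the invariances of deep embeddings.
    """
    fa = a.astype(np.float64).ravel()
    fa -= fa.mean()
    fb = b.astype(np.float64).ravel()
    fb -= fb.mean()
    return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb)))

rng = np.random.default_rng(0)
img = rng.integers(0, 200, size=(32, 32)).astype(np.float64)
shifted = img + 50  # large signal-level deviation, content unchanged

print(psnr(img, shifted))                     # low PSNR (~14 dB)
print(semantic_proxy_similarity(img, shifted))  # similarity stays at 1.0
```

Here a traditional metric reports severe degradation even though the "semantic" content is perfectly preserved; conversely, a small localized edit that flips an object's identity could leave PSNR nearly unchanged.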

Recent research in feature quality assessment, feature coding, and coding for machines highlights the urgent need for semantic quality assessment. These works consistently demonstrate that semantic distortion correlates poorly with existing perceptual metrics, and that new evaluation frameworks are required to ensure reliable and interpretable quality for both AI systems and human users. In this context, there is a pressing need for the community to collectively rethink how we quantify, predict, and utilize semantic quality in AI systems across modalities. This special session aims to foster this essential dialogue and advance foundational work in the field.

The objectives of this special session are to: 1) establish a foundation for semantic quality assessment across multiple modalities; 2) promote compression and transmission schemes guided by semantic quality; and 3) build a bridge between semantic quality assessment and AI understanding. Topics of interest include, but are not limited to:

  • Semantic quality assessment methods for multiple modalities (image, video, audio, deep features, etc.)
  • Scalable semantic–perceptual quality evaluation frameworks
  • Semantic-aware multimedia compression, coding, and transmission
  • Task-driven semantic fidelity metrics for machine perception
  • Cross-modal semantic consistency evaluation
  • Modeling of semantic information in foundation models and multimedia systems

Organizers: Changsheng Gao, Jingwen Zhu, Ivan V. Bajić, Patrick LE CALLET

Beyond Quality: Integrating Ethical Dimensions in QoE Research

Despite technical and methodological advances in the development and evaluation of enjoyable technology experiences, one aspect often remains overlooked in QoE research: the broader ethical implications of these digital experiences. For instance, AI can adapt and improve user experiences, but it also raises concerns about bias, fairness, transparency, trust and accountability. Social media and streaming platforms can negatively impact users' wellbeing and mental health. UX research on Gen-AI or XR raises concerns regarding privacy, responsible use of AI, and the long-term environmental impact of these technologies. It is critical to start considering these implications and to balance UX against ethical costs, in order to provide technology experiences that are not just enjoyable but truly human-centered, ethical, and empowering.

This special session aims to foster a critical discussion by adopting a broader perspective to examine a wide array of ethical dilemmas and implications of ever-improved multimedia experiences and what they mean for QoE research, measures and test designs. This requires a shift towards increased ethical reflexivity and more value-sensitive QoE research, for instance by moving from viewing QoE as an individual concept to a paradigm that also acknowledges collective QoE.

To practically integrate these aspects into QoE research, we welcome diverse contributions. First, we welcome novel ways to model user experience, for instance by expanding the working definitions of QoE and UX to integrate a social impact factor that reflects broader effects on people and society. Another example is UX evaluation of technological features supporting ethical values (e.g., the UX of dark patterns in social media or of privacy controls). Also considering collective QoE and ethics, we invite critical studies that use empirical methods and tools for reflection to improve human safety from a system perspective. This includes, for instance, innovative uses of interdisciplinary methods related to critical AI and QoE, examining QoE through a critical lens and comparing diverse perspectives and outcomes across multimedia applications. Also of interest are holistic system analyses, e.g., of how AI is used for QoE, where it is ethical, for whom, and why. Examples include building a broader understanding of the underlying AI functionalities when applied to QoE, investigating power and bias in QoE, asking who is responsible and who is impacted, and identifying potential negative impacts. We particularly encourage papers that provide critical perspectives, push towards a future research agenda, propose brave new ideas, and challenge the state of the art.

This special session also aims to reflect on ethical considerations in our own research practices (e.g., regarding inclusivity, data protection…). In particular, QoE practices often assume "typical" users, overlooking individuals with disabilities or in low-resource environments, raising questions of equity and digital exclusion. We welcome contributions that integrate these concerns in their research designs, methodologies or sampling approaches.

Overall, we aim for this special session to support the development of ethical, human(ity)-centered paradigms, by establishing novel frameworks integrating ethical aspects in QoE research and conducting UX/QoE research on technology features supporting ethical values, or with a specific focus on a given ethical dimension.

Topics of Interest
This session invites both full and short papers. We seek contributions from various disciplines, addressing topics including but not limited to:

  • Novel frameworks, methodological approaches and paradigms
  • Humanity-centered paradigms and collective QoE
  • Integrating ethical and social values in QoE/UX evaluations
  • Papers (including position papers) that provide critical perspectives, establish future research agendas, or challenge the state of the art
  • Ethical implications of multimedia and digital experiences
  • Bias, fairness, transparency, trust, privacy and accountability in AI-driven experiences
  • Sustainability of digital experiences and their specific impacts (mental health/wellbeing, social, environmental)
  • Applying QoE methodologies to create ethical value (e.g. in technologies supporting transparency, autonomy, or well-being)
  • Ethical considerations in UX and QoE research (inclusivity, accessibility, low-resource or marginalized contexts…)

Organizers: Camille Sivelle, Karolina Wylężek, Rakesh Rao Ramachandra Rao, Ibrahim El Shemy, Kaja Ystgaard, Katrien De Moor