Research

Ambient Displays

We use ambient displays, also referred to as “peripheral displays”, to present information at varying levels of required user attention, ranging from change-blind to interrupting.

Main Research Topics

  • Unobtrusive Reminders / Non-Interrupting Alerts
  • Attention Guidance
  • Ambient Light Displays

Contact: Andrii Matviienko

Automotive User Interfaces

The shift from manual to automated driving brings new challenges for the automotive domain. Our research investigates novel interfaces that support not only drivers in manual driving but also drivers and passengers of automated vehicles.
We evaluate novel interfaces, such as a vibro-tactile belt, a shape-changing steering wheel, or ambient light patterns inside the vehicle. Scenarios include navigation, highway overtaking, keeping a safe distance, taking over control of an automated vehicle, and intuitively understanding an automated vehicle’s intention.
For most of our experiments, we use a 150° fixed-base driving simulator and, depending on the research question, measure driving performance, gaze behavior, and/or driving experience.
This research is supported by the projects COMPANION and CSE.
Contact: Wilko Heuten

Augmented and Virtual Reality

We are investigating novel interaction methods and visualization techniques for Augmented and Virtual Reality. To this end, we also fabricate new devices in our own Fablab. We evaluate our results in empirical lab studies.

Main Research Topics

  • Visualization of out-of-view objects in Mixed Reality
  • Attention Guidance in Cyber-Physical Systems
  • Reducing alarms in intensive care units
  • Novel techniques for medical care
  • Industry 4.0

Contact: Sebastian Weiß

Health Informatics

Physical activity is an important key to healthy aging. The aim of the AEQUIPA project is to develop interventions that promote physical activity in old age. To this end, the factors within a community that influence mobility are being investigated and interventions developed. A particular focus lies on measures that are applicable to all socio-economic groups within the community.
Within the scope of AEQUIPA, OFFIS examines technology-based interventions that draw on sensor-detected vital parameters to preserve the mobility of older people. Furthermore, OFFIS works on preventive measures against the functional decline of muscle groups and is creating a system for monitoring and displaying physical activity.
Project websites: OFFIS AEQUIPA project page and aequipa.de
Contact: Kai von Holdt

Understandable Privacy Policies

Emerging technologies are deeply ingrained in our day-to-day lives. To use a smart watch that reminds us to stay active or social media that connects us with friends and family, we accept and allow access to our private data. Often, the privacy policies we agree to consist of long texts written in a way that is difficult and time-consuming to comprehend.

The aim of the PANDIA project is to make the topic of data protection more interactive and understandable and to help users make informed decisions about the use of their data. As part of the project, we create prototypes for a PANDIA app and browser plugin suitable for everyday use. In addition, we investigate creative interaction methods and visualizations offered by Augmented and Virtual Reality as well as gamification techniques.

Main Research Topics

  • UI/UX Design
  • Mixed Reality technologies

Contact: Kai von Holdt and Frederike Jung

Gaze Behaviour in Reading

We are interested in leveraging gaze behavior to infer user needs and support interaction implicitly. To this end, we focus on the online interpretation of natural gaze patterns to predict user needs and intended interactions. Specifically, we aim to support children in learning to read by recognizing and mitigating comprehension problems in real time.
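
As an illustration of how such online interpretation can start, the sketch below implements dispersion-based fixation detection (I-DT), a standard first step for segmenting raw gaze samples into fixations. The sample format and thresholds are illustrative assumptions, not our actual pipeline.

    def detect_fixations(samples, max_dispersion=25.0, min_duration=0.1):
        """samples: chronological list of (timestamp_s, x_px, y_px) gaze points."""
        fixations, window = [], []
        for sample in samples:
            window.append(sample)
            xs = [x for _, x, _ in window]
            ys = [y for _, _, y in window]
            # I-DT: dispersion = horizontal spread + vertical spread of the window
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                # Spread too large: close the preceding window if it lasted long enough
                if len(window) > 1 and window[-2][0] - window[0][0] >= min_duration:
                    fixations.append(summarize(window[:-1]))
                window = [sample]  # start a new window at the offending sample
        if len(window) > 1 and window[-1][0] - window[0][0] >= min_duration:
            fixations.append(summarize(window))
        return fixations

    def summarize(points):
        """Reduce a fixation to (onset_s, duration_s, mean_x, mean_y)."""
        xs = [x for _, x, _ in points]
        ys = [y for _, _, y in points]
        return (points[0][0], points[-1][0] - points[0][0],
                sum(xs) / len(xs), sum(ys) / len(ys))

Mapped onto word bounding boxes, the resulting fixation sequence can then be scanned for regressions (leftward return sweeps) or unusually long dwell times, two common hints at comprehension problems.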

Main Research Topics

  • Inferring reading progress from gaze behavior
  • Detecting reading problems in real time
  • Assistive visualizations for word- and sentence-level decoding

Contact: Tobias Lunte

Rich Interactive Materials for Everyday Objects in the Home (RIME)

With the world gradually moving towards affordable smart home setups, new design and technical challenges are emerging. Each vendor has its own bespoke interaction concepts and techniques, which users must learn and remember.

These varying interaction concepts lead to users becoming frustrated, making mistakes, and having negative user experiences, ultimately resulting in promising solutions being discarded. However, there is an opportunity to utilise artefacts and technologies naturally embedded in daily practices as the basis for new, holistic control interfaces and media.

Therefore, the Rich Interactive Materials for Everyday Objects in the Home (RIME) project seeks to unlock the potential for rich interaction with the materials in our smart environments.

The RIME project pursues this goal by designing, prototyping, and evaluating scalable sensor and actuator technology, together with touch interaction paradigms, for seamless integration into everyday materials and objects, enabling natural and scalable hands-on interaction with our future smart homes. As a result, the physical artefacts in our homes, such as chairs, tables, walls, and other surfaces, can be equipped with an interactive digital “skin” or contain interactive sensor and actuator materials; swiping along a table to unfold it for additional guests may then become a possible scenario.
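
As a hypothetical sketch of what recognizing such a swipe could look like in software, the snippet below classifies a touch on a 1D strip of capacitive electrodes embedded in a surface. The electrode layout, readings, and thresholds are illustrative assumptions, not RIME hardware details.

    def active_electrode(readings, threshold=0.5):
        """Index of the strongest electrode above threshold, or None if untouched."""
        strongest = max(range(len(readings)), key=lambda i: readings[i])
        return strongest if readings[strongest] >= threshold else None

    def classify_touch(frames, min_travel=3):
        """frames: chronological per-electrode readings captured during one touch."""
        path = [p for p in (active_electrode(f) for f in frames) if p is not None]
        if not path:
            return "none"
        travel = path[-1] - path[0]  # net movement along the electrode strip
        if travel >= min_travel:
            return "swipe_right"  # e.g. the gesture that unfolds the table
        if travel <= -min_travel:
            return "swipe_left"
        return "tap"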

Contact: Michael Chamurnowa

Gestural interaction paradigms for smart spaces (GrIPSs)

With the advent of intelligent environments (e.g. smart homes) and wearable computing technologies, 2D gestures are slowly disappearing and being replaced by more natural interaction modalities, such as voice or spatial 3D gestures, which take advantage of the whole body for interacting with pervasive computing environments. While 3D gestural interaction has been explored for many years, there is still no general vocabulary of gestures that generalizes across different spaces and situations, and no metric that allows a comprehensive assessment of the quality and usability of gestures in different contexts.

We aim to understand and support gestures for such interactions, particularly in smart environments. We will look at single gestures and gesture sequences carried out not only with one hand, but also bimanually and with the support of the whole body.
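
One candidate building block for such a metric is a distance between gesture trajectories. The sketch below uses dynamic time warping (DTW), a common way to compare two 3D trajectories of different lengths, e.g. a performed gesture against a vocabulary template; the trajectory format is an illustrative assumption.

    import math

    def dtw_distance(a, b):
        """DTW distance between two gesture trajectories, lists of (x, y, z) samples."""
        n, m = len(a), len(b)
        inf = float("inf")
        cost = [[inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = math.dist(a[i - 1], b[j - 1])     # Euclidean distance in 3D
                cost[i][j] = d + min(cost[i - 1][j],      # a advances, b repeats
                                     cost[i][j - 1],      # b advances, a repeats
                                     cost[i - 1][j - 1])  # both advance
        return cost[n][m]

A small DTW distance to a template indicates that a performed gesture matches a vocabulary entry despite differences in speed; aggregated over study participants, such distances could contribute to a quality score that is comparable across contexts.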