Multimodal Interaction in Smart Environments

Contact: Dirk Roscher, Veit Schwartze, Grzegorz Lehmann, Florian Weingarten


Smart environments are equipped with a heterogeneous collection of networked electronic appliances and sensors. In the future they will become increasingly common in public buildings, offices and industrial settings as well as in private homes, enabling many novel services but also posing complex problems for service development and deployment. In our research on user interfaces for smart environments we have identified ubiquitous user interfaces: user interfaces that hide the growing complexity of the underlying technology, support various, mostly unforeseen, contexts of use, enable a wide variety of users to interact with a system, and span distributed, networked systems. Currently, creating ubiquitous user interfaces, where possible at all, is a very complex and thus costly and time-consuming task. CC NGS researches how model-based technologies and model-driven engineering can support the creation of ubiquitous user interfaces.

Interaction Design for Smart Home Environments

Popular terms connected to the idea of future living, such as ‘ambient intelligence’, ‘ambient assisted living’ or ‘smart home’, usually describe an environment in which everyday objects and devices are networked and communicate with each other to support the inhabitants. Since the technology is envisioned to be completely integrated into homes and everyday life, it remains almost invisible to the user, which causes problems of intelligibility and control that need to be addressed. In our work we focus on simple ways to control this growing network of independent, special-purpose devices embodying complex technology, as well as on managing the large number of resulting information flows.
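
To make this concrete, the following sketch shows one way a uniform control layer over heterogeneous home devices could look. It is a minimal illustration only; the Controllable interface, the HomeNetwork class and the command vocabulary are our own assumptions, not an existing implementation.

    interface Controllable {
      name: string;
      commands(): string[];            // what the device can do
      execute(command: string): void;  // perform one of those commands
    }

    class Lamp implements Controllable {
      constructor(public name: string, private on = false) {}
      commands() { return ["on", "off"]; }
      execute(command: string) { this.on = command === "on"; }
    }

    class HomeNetwork {
      private devices: Controllable[] = [];

      register(device: Controllable) { this.devices.push(device); }

      // One entry point for every appliance: commands are discoverable,
      // so the user faces a single interface instead of many remotes.
      control(name: string, command: string) {
        const device = this.devices.find(d => d.name === name);
        if (!device) throw new Error(`unknown device: ${name}`);
        if (!device.commands().includes(command)) {
          throw new Error(`${name} does not support '${command}'`);
        }
        device.execute(command);
      }
    }

    const home = new HomeNetwork();
    home.register(new Lamp("living-room-lamp"));
    home.control("living-room-lamp", "on");

Exposing each device's capabilities through one discoverable interface gives the user a single point of control instead of one device-specific interface per appliance.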

Distributed and Seamless Interaction

The availability of numerous networked interaction resources within smart environments makes it possible to exploit these resources for innovative and more natural interaction. Combining the interaction capabilities of devices such as TVs with remote controls, picture frames, mobile phones, touch screens, stereos and PCs allows tailoring the interaction to the situation at hand. Changing situations may then require dynamically redistributing the created interfaces and altering the devices and modalities in use in order to maintain the interaction. Currently, applications and their user interfaces can only be accessed from one device at a time, which makes combined and changing use of several devices impossible. This dissertation examines how user interfaces can be distributed across dynamically changing sets of devices so that the interaction remains suitable over time. To this end, the needs of users and application developers are analyzed, and corresponding capabilities for the utilization and development of such distributed user interfaces are implemented. Furthermore, the possible contributions of the runtime system itself to supporting the user are investigated.
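
The following sketch illustrates the core idea of such a distribution at runtime: UI parts are (re)assigned to devices whenever the device set changes. The Device and UIPart abstractions and the simple modality-matching heuristic are illustrative assumptions, not the system described above.

    interface Device {
      id: string;
      modalities: Set<string>;  // e.g. "screen", "touch", "speech-out"
      available: boolean;
    }

    interface UIPart {
      id: string;
      requiredModality: string;  // modality this part needs in order to render
    }

    class DistributionManager {
      constructor(private devices: Device[]) {}

      // Assign each UI part to the first available device offering the
      // required modality; parts left unassigned signal a distribution gap.
      distribute(parts: UIPart[]): Map<string, string | undefined> {
        const assignment = new Map<string, string | undefined>();
        for (const part of parts) {
          const target = this.devices.find(
            d => d.available && d.modalities.has(part.requiredModality)
          );
          assignment.set(part.id, target?.id);
        }
        return assignment;
      }

      // React to a device joining, leaving or changing by recomputing the
      // distribution, so the interaction can continue on the new device set.
      onDeviceChange(device: Device, parts: UIPart[]) {
        const known = this.devices.find(d => d.id === device.id);
        if (known) Object.assign(known, device);
        else this.devices.push(device);
        return this.distribute(parts);
      }
    }

A real system would additionally weigh user preferences, device proximity and modality suitability; the sketch only shows the redistribution loop triggered by device changes.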

Adaptive Layout of Dynamic User Interfaces

The user interface design process for dynamic and heterogeneous environments becomes very complex because the users' disabilities and the available interaction devices are unknown at design time. Therefore, we propose an approach that uses information from the user interface models to derive spatial relationships and size constraints for automatic layout generation. The rapid-prototyping character of the approach allows simulating specific context-of-use scenarios and visualizing and evaluating the resulting layouts at design time. At runtime, the application has to deal with different situations and should provide a consistent layout for all states and distributions of the user interface. Unconsidered or non-formalizable characteristics of the user and their environment may require changing the behavior of the layout algorithm. A tool-supported approach allows direct manipulation of the application's user interface and a visualization of the decision process that leads to the actual user interface.
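
A minimal sketch of the layout-generation step, under the assumption that each model element carries a derived minimum size and an importance value: elements are stacked vertically, surplus space is distributed by importance, and the least important elements are dropped when the display is too small. The names and the stacking strategy are illustrative, not the actual algorithm.

    interface ModelElement {
      id: string;
      importance: number;  // e.g. derived from the task model
      minHeight: number;   // size constraint derived from the element type
    }

    interface Box { id: string; x: number; y: number; width: number; height: number; }

    // Stack elements vertically, distributing surplus space in proportion to
    // each element's importance; drop the least important elements when the
    // minimum heights alone exceed the available screen height.
    function layout(elements: ModelElement[], width: number, height: number): Box[] {
      const byImportance = [...elements].sort((a, b) => b.importance - a.importance);
      const kept: ModelElement[] = [];
      let minTotal = 0;
      for (const e of byImportance) {
        if (minTotal + e.minHeight > height) break;  // no room left
        kept.push(e);
        minTotal += e.minHeight;
      }
      const surplus = height - minTotal;
      const weightSum = kept.reduce((sum, e) => sum + e.importance, 0) || 1;
      let y = 0;
      return kept.map(e => {
        const h = e.minHeight + (surplus * e.importance) / weightSum;
        const box = { id: e.id, x: 0, y, width, height: h };
        y += h;
        return box;
      });
    }

Because the same function can be run against simulated screen sizes at design time and against the actual device set at runtime, the layout behavior stays consistent across both phases.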

Runtime Models of Adaptive User Interfaces

Model-based software development is becoming increasingly popular and has been identified as a suitable means of dealing with the growing complexity of software systems. Model Driven Engineering (MDE) is a promising approach to the development of complex systems and applications, based on the notion of models as the foundation of software engineering. CC NGS researches how executable runtime models can aid the development of ubiquitous user interfaces.

Just as model-based software development aids developers in handling the complexity of modern systems, model-based user interface development (MBUID) has the potential to cope with the increasing complexity of ubiquitous user interfaces. In our work we research how executable user interface models can simplify and accelerate the development, delivery and adaptation of ubiquitous user interfaces.
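
As a rough illustration of the executable-model idea, the sketch below keeps a small task model alive at runtime: renderers observe the model, and adaptation happens by changing model state rather than by manipulating widgets directly. The RuntimeTaskModel API is an assumption for illustration, not the CC NGS runtime.

    type TaskState = "inactive" | "enabled" | "running" | "done";

    interface Task { id: string; state: TaskState; }

    type Observer = (task: Task) => void;

    class RuntimeTaskModel {
      private tasks = new Map<string, Task>();
      private observers: Observer[] = [];

      addTask(id: string) {
        this.tasks.set(id, { id, state: "inactive" });
      }

      // Model execution: state changes in the model are the single source
      // of truth; nothing manipulates the rendered widgets directly.
      setState(id: string, state: TaskState) {
        const task = this.tasks.get(id);
        if (!task) throw new Error(`unknown task: ${id}`);
        task.state = state;
        this.observers.forEach(notify => notify(task));
      }

      // Renderers and adaptation components subscribe to model changes.
      observe(observer: Observer) {
        this.observers.push(observer);
      }
    }

    const model = new RuntimeTaskModel();
    model.addTask("select-device");
    model.observe(task => {
      if (task.state === "enabled") console.log(`render UI for ${task.id}`);
    });
    model.setState("select-device", "enabled");

Keeping the model executable at runtime means the same model that drove development can also drive delivery and adaptation, since every change to the running interface is a change to the model.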