Self-Explaining Agents

Authors: Johannes Fähndrich, Sebastian Ahrndt, Sahin Albayrak
Source: Jurnal Teknologi - Special Edition, 63:3 (pp. 53-64)

This work advocates self-explanation as one foundation of self-* properties, arguing that for system components to become more self-explanatory, the underlying foundation is an awareness of themselves and their environment. In the research area of adaptive software, self-* properties have shifted into focus because ever more design decisions are pushed to the application's runtime, fostering new paradigms for system development such as intelligent and learning agents. This work surveys state-of-the-art methods of self-explanation in software systems and distills a definition of self-explanation. Additionally, we introduce a measure to compare explanations and propose an approach for the first steps towards extending descriptions to become more explanatory. The conclusion shows that an explanation is a special kind of description: one that provides additional information about a subject of interest and is understandable to the audience of the explanation. Further, an explanation depends on the context in which it is used, so the same explanation can convey different information in different contexts. The proposed measure reflects these requirements.
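The abstract names three requirements for the measure: an explanation must add information beyond what the context already provides, the added information must be understandable to the audience, and the resulting score must vary with the context. The paper's actual measure is defined in the full text; the following toy sketch only mirrors those three stated requirements, with explanations modelled as sets of information items. The function name, the set-based model, and the normalisation are all hypothetical choices for illustration.

```python
def explanation_score(explanation: set[str],
                      known_in_context: set[str],
                      audience_vocabulary: set[str]) -> float:
    """Toy score for an explanation, modelled as a set of information items.

    Hypothetical sketch, not the measure from the paper. It mirrors the
    abstract's requirements:
      1. the explanation must add information beyond the context,
      2. the added information must be understandable to the audience,
      3. the score changes when the context changes.
    """
    if not explanation:
        return 0.0
    new_info = explanation - known_in_context    # requirement 1: added information
    understood = new_info & audience_vocabulary  # requirement 2: audience understands it
    return len(understood) / len(explanation)    # normalised to [0, 1]

# Requirement 3: the same explanation conveys different information
# in different contexts, so its score differs per context.
expl = {"agent", "goal", "plan", "utility"}
novice_context = set()                  # nothing is known yet
expert_context = {"agent", "goal"}      # the basics are already known
vocabulary = {"agent", "goal", "plan"}  # terms the audience understands

print(explanation_score(expl, novice_context, vocabulary))  # 0.75
print(explanation_score(expl, expert_context, vocabulary))  # 0.25
```

The context dependence shows up directly: the identical explanation scores 0.75 for the novice context but only 0.25 for the expert context, because most of its items are no longer new information there.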