
MockSAS: Facilitating the Evaluation of Bandit Algorithms in Self-adaptive Systems

Publication at the Faculty of Mathematics and Physics | 2023

Abstract

To be able to optimize themselves at runtime even in situations they were not specifically designed for, self-adaptive systems (SAS) often employ online learning, which takes the form of sequentially applying actions to learn their effect on system utility. Employing multi-armed bandit (MAB) policies is a promising approach for implementing online learning in SAS.
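For illustration, the following is a minimal, generic sketch of such an online-learning loop using an epsilon-greedy MAB policy. It is not the paper's method; the action names and the observe_utility callback standing in for the SAS feedback loop are purely hypothetical.

import random

def epsilon_greedy(actions, observe_utility, steps=1000, epsilon=0.1):
    """Generic epsilon-greedy MAB policy: sequentially apply actions
    and learn their average effect on system utility."""
    counts = {a: 0 for a in actions}
    estimates = {a: 0.0 for a in actions}
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(actions)           # explore
        else:
            action = max(actions, key=estimates.get)  # exploit current best estimate
        reward = observe_utility(action)              # observed system utility
        counts[action] += 1
        # incremental update of the running-average utility estimate
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

if __name__ == "__main__":
    # Hypothetical utilities standing in for real adaptation actions.
    true_means = {"scale_up": 0.8, "scale_down": 0.3, "migrate": 0.5}
    noisy = lambda a: random.gauss(true_means[a], 0.1)
    print(epsilon_greedy(list(true_means), noisy))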

A main problem when employing MAB policies in this setting is that it is difficult to evaluate and compare different policies with respect to their effectiveness in optimizing system utility. This stems from the high number of runs necessary for a trustworthy evaluation of a policy under different contexts.

The problem is amplified when several policies and several contexts are considered. It is, however, pivotal for the wider adoption of MAB policies for online learning in SAS to facilitate such evaluation and comparison.

Towards this end, we provide a Python package, MockSAS, and a grammar that allow for specifying and running mocks of SAS: profiles of SAS that capture the relations between the contexts, the actions, and the rewards. Using MockSAS can drastically reduce the time and resources needed to compare MAB policies in SAS.
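As a rough, hypothetical sketch of the idea only (it does not use the actual MockSAS grammar or API), such a mock can be thought of as a profile mapping each context and action to a reward distribution; policies are then evaluated against this profile instead of the real system:

import random

# Hypothetical profile: for each context, each action maps to a reward
# distribution given as (mean, std). Names and values are illustrative.
PROFILE = {
    "low_load":  {"scale_up": (0.4, 0.05), "scale_down": (0.7, 0.05)},
    "high_load": {"scale_up": (0.9, 0.10), "scale_down": (0.2, 0.10)},
}

def mock_reward(context, action):
    """Sample a reward from the mocked SAS instead of running the real system."""
    mean, std = PROFILE[context][action]
    return random.gauss(mean, std)

def evaluate_policy(policy, contexts, steps=10_000):
    """Run a policy against the mock and report its average reward per context."""
    totals = {c: 0.0 for c in contexts}
    counts = {c: 0 for c in contexts}
    for _ in range(steps):
        context = random.choice(contexts)
        action = policy(context)
        totals[context] += mock_reward(context, action)
        counts[context] += 1
    return {c: totals[c] / max(counts[c], 1) for c in contexts}

if __name__ == "__main__":
    # Example: evaluate a trivial fixed policy against the mocked profiles.
    fixed = lambda ctx: "scale_up"
    print(evaluate_policy(fixed, list(PROFILE)))

Because the mock replaces the real system's feedback loop, many such evaluation runs across policies and contexts can be executed cheaply.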

We evaluate the applicability of MockSAS and the accuracy of its results compared to using the real system in a self-adaptation exemplar.