Artificial intelligence (AI) has been widely recognized as an important game-changer in our digital society. With the help of AI, we are now able to automate a variety of tasks, including the creation of visual, musical, or textual content.
An ethical approach to the design, development, and utilization of AI systems, together with their legal compliance and robustness, is regarded as a prerequisite for building trust in and adoption of the technology. In this paper, we analyze whether the law supports ethics in the specific domain of automated journalism by examining the principles of accountability, responsibility, and transparency (the ART principles) from the perspective of the legal interests protected by copyright and other laws.
Other factors influencing the ethical decision-making process, namely the specificities of a business model and the perception of authorship, are also taken into account. We present the results of a recent pilot qualitative study illustrating that the perception of authorship is closely related to the perception of agency and responsibility.
Our findings show that current Czech law incentivizes neither the implementation of the ART principles nor the perception of agency in relation to AI systems for automated journalism. The perception of disappearing authorship may thus also lead to a perception of disappearing responsibility.
To address these problems, we suggest introducing new legal obligations and adapting existing personal rights to protect the authors involved in the design of AI systems.