Scientific knowledge relies heavily on models shaped by simplifying assumptions, the most common categories of which are abstraction and idealization. This article exposes conceptual challenges inherent in conventional interpretations of these two concepts, particularly as they are applied in scientific modeling practice.
The primary hurdle emerges when applying these categories to real-world instances of scientific modeling, which we illustrate with examples of non-causal explanations. The key issues are (i) the ambiguous distinction between abstraction and idealization and (ii) the application of abstraction as a simplifying assumption.
We hypothesize that non-causal explanations appear unintelligible because the role of simplifying assumptions within them is poorly understood. To test this, we analyze selected examples, ranging from toy examples to real-world cases, and scrutinize their alignment with the standard notions of abstraction and idealization.
Throughout, we investigate the influence of simplifying assumptions on these explanations, assessing their adherence to, or deviation from, the conventional concepts.