Abstract
Establishing a standard formula (SF) for the regulation of European insurance companies is a Herculean task. It has to acknowledge very different business models and national peculiarities. In addition, regulatory authorities—as stakeholders in their own right—have a number of supervisory objectives the SF should incentivize. As the SF intervenes in economic activities, the principle of equal treatment must be maintained. The large circle of users makes procedural simplicity indispensable to ensure that the SF is applied and implemented in a proportionate manner. Above all, the SF should be risk-sensitive. Compared to Solvency I, the SF of Solvency II is considered a significant improvement, as many of the aforementioned desiderata have been realized much better. The following analysis and survey of model-theoretical aspects of the SF shows that these improvements were achieved above all with regard to epistemic uncertainties. The stochastic model underlying the SF is still subject to considerable uncertainties, so that the probability functional of the SF is exposed to significant model risk. As part of the Own Risk and Solvency Assessment (ORSA), insurance companies must prove the adequacy of the SF for their company. The vague prior knowledge represented by the stochastic component of the SF is not sufficient for an SF-intrinsic validation of the aleatoric component.
Introduction
The development of the new supervisory regime for insurance companies—Solvency II—took almost a decade. The further development of the International Insurance Capital Standards is currently under way, see, e.g., [14]. Moreover, EIOPA launched a review of the standard formula (SF), to be completed by 2020, see [6]. The practical, but also regulatory-theoretical, significance of the SF, which implements the Pillar I requirements of Solvency II, can hardly be overestimated, as the amount of solvency capital required (SCR: solvency capital requirement) restricts the business volume of insurance companies and thus ties up the most significant production factor: own funds. On the other hand, the SCR serves supervisory authorities as a means of achieving their supervisory objectives (e.g. reduction of systemic risks, consumer protection; see, e.g., BaFin [6]). The majority of insurance companies in Germany (about 90%) use the SF to determine their SCR. Only a minority of about 30 insurance companies make use of their legal right to develop an internal model as an alternative. However, the market share of insurance companies with internal models in Germany is approximately 50%.
A well-written, brief summary of the technique of the standard procedures (more precisely, concerning the basic SCR in QIS 5) is provided by Chech [13]. Sandström’s compendium, see [42], offers encyclopaedic completeness. Interested readers can find synoptic comparisons of the regulatory components of Solvency II with those of the banking supervisory regulations (Basel III) in Gatzert and Wesker [24] and Laas and Siegel [30]. Liebwein gives an overview of the application context of internal models under Solvency II in [31]. Important comments from a practitioner’s perspective are given by Dacorogna et al. [15].
The SF has many structural parallels to internal models (IM). In particular, it defines a forecast model whose assumptions, characteristics, and properties can be investigated from both an absolute and a relative perspective—in the light of economic, risk management, supervisory, and stochastic criteria. An absolute perspective is a purely SF-immanent evaluation of the SF, i.e. one without additional (model) references that go beyond the context reference. A relative perspective compares properties of the SF with those of alternative models. The following investigations focus on the analysis of the formal, i.e. mathematical, properties of the standard formula from an absolute perspective.
The basic insights on quantitative risk management, starting with Morgan’s publication of RiskMetrics [36]—the inauguration of value-at-risk (VaR) as the most important risk measure in practice—and the seminal work of Artzner et al. [1] on coherent risk measures, of Föllmer and Schied [20] on convex risk measures, and of Heyde et al. [27] on statistical and robust risk statistics, have dominated theoretical and practical discussions on risk management issues, see, e.g., Embrechts et al. [19]. Heyde et al. [27] in particular deal with both formal and epistemic aspects of risk measures.
RiskMetrics, see [36], is based—in the spirit of the Markowitz approach—on an objective model framework, in which the risk of a portfolio X, quantified as \(\delta (X)\), is measured by a statistical functional
$$\begin{aligned} \delta (X)=T(F_X), \end{aligned}$$(1)
where the portfolio X is modelled by a random vector with distribution function \(F_X\), i.e. \(X\sim F_X\). This is explained in more detail in Sect. 3.2. Such an objective model approach has far-reaching consequences for regulatory acceptance, validation, and above all for the application in a company.
Insurance markets constitute an important example of incomplete markets. Most insurance products cannot be fully replicated through trading strategies, which suggests using utility theory, as it allows decision-specific aspects to be incorporated into decisions under uncertainty. Denuit et al. [16, p. 88] and Wang et al. [47] represent a decision problem on the model side by a rank-dependent expected utility of the uncertain cash flow Y
where \(g\big ({\overline{F}}_Y(\circ )\big )\) denotes the distorted survival function of Y and \(u(\circ )\) denotes the utility function of the insurance company. Both \(g(\circ )\) and \(u(\circ )\) express decision-specific aspects, such as risk appetite or the perception of probabilities. These subjective components go beyond the inevitable subjective elements of an objective model (1).
A utility theoretical superstructure seems to be most flexible in order to provide interfaces with the scientific theoretical (e.g. Mainzer [33]), economic (e.g. Straub and Welpe [44]), and regulatory standard setting (e.g. Barnett and O’Hagan [7]) literature as well as the basic literature on risk management (e.g. Jaeger et al. [28], Aven [3]).
Using the standard methods implicitly requires a logical understanding of them; moreover, the law requires this understanding of the Board of Management, see the contribution of Stahl, Fahr et al. [43] in the commentary on the VAG (German insurance law). The company’s own risk and solvency assessment (ORSA, see, e.g., the book by Gorge [25]), which is required by the supervisory authorities, forms the linchpin of the proof of this understanding. In this context, EIOPA published a key document, see [18], summarizing the most important assumptions underlying the SF. The aim of that document is to enable insurance companies using the SF to provide evidence of the adequacy of the SF within the framework of ORSA.
In addition to the work on formal aspects of risk models already cited, more fundamental work on the theory of the concept of risk is also tacitly drawn upon. We have already mentioned the works of Aven [4], Jaeger et al. [28], and Douady et al. [45]. In particular, we follow these authors in their differentiation of epistemic and aleatoric components in the modelling of uncertainty.
Following this introduction, Sect. 2 presents the progress made in determining the SCR with the introduction of Solvency II in comparison with Solvency I, as well as key structural elements of the SF. The latter show in particular that the SF, given the information at \(t=0\), is deterministic. Based on the decomposition of uncertainty into an epistemic and an aleatoric component, the formal properties of the SF are analysed in relation to given axioms of risk and capital functionals. Section 3.2 analyses the stochastic model underlying the SF. The main finding is that the SF defines incoherent, subjective probabilities. Furthermore, the SF is analysed with regard to its (epistemically interpreted) axioms. It is observed that the SF is neither a coherent nor a translation-invariant capital functional. Section 4 summarizes the main results.
Essentials of the SF
Compared to the structure of Pillar I under Solvency I (i.e. the SCR calculation), the SF under Solvency II contains a number of significant improvements, see Gorge [25] for details. An example is the determination of the SCR under Solvency I for non-life insurance policies on the basis of premiums (16%) alone; risks from capital investments were ignored.
Compared to this rulebased approach, Solvency II provides a number of structural improvements:

1.
Valuation: An insurance company is valued using the newly introduced solvency balance sheet. The change in this balance sheet (\(\triangle BoF:=BoF_1-BoF_0\); BoF is the acronym of basic own funds) over one year constitutes the variable to be controlled by the SF under Solvency II.

2.
Risk profile: The computation of the solvency capital requirement uses epistemic knowledge of the risk structure of the insurance industry, e.g. about sources of risk like spreads, longevity, and duration. Furthermore, organizational aspects such as lines of business, geographic aspects, etc. are considered. This is manifested, among other things, in the hierarchical tree of risk categories, see Fig. 1. The SF thus permits the creation of a risk profile.

3.
Probabilistic statements: The epistemically well-founded determination of the SCR undergoes an additional aleatoric interpretation as a risk measure:
$$\begin{aligned}&\text {Pr}\big ( -\triangle BoF \in [ SCR, \infty ) \big )&= 1 - \alpha , \nonumber \\&\text {Pr}\big (BoF_1 \in ( -\infty , BoF_0 - SCR] \big )&= 1 - \alpha , \end{aligned}$$(3)with the regulatory survival probability of \(\alpha = 99.5\%\) and \(-\triangle BoF = BoF_0 - BoF_1.\) This means that \(BoF_1\) is interpreted as a random variable with continuous distribution function and the SCR as a value-at-risk (VaR) with a 1-year time horizon. Vanduffel et al. [17] discuss the advantages of VaR from a regulatory perspective.

4.
Aggregation of building blocks: The components of the basic SCR (as in Fig. 1) are aggregated using a fixed correlation matrix. The category of intangible assets (Intang) was no longer considered in the final version of Solvency II. The vector \({\mathbf {L}} =(L_1, \ldots , L_5)\) denotes the loss variables associated with the five risk categories (market risk, ..., non-life risk). In the following, \({\mathbf {L}}\) and its individual components \(L_i\) are understood as random variables. Since with (3) losses that are greater than the SCR can only occur for a portfolio X with a probability of \(1-\alpha \), (3) can be represented as follows: \(\text {Pr}\big ( {\overline{L}} \in [ SCR, \infty ) \big ) = 1-\alpha \), where \(-\triangle BoF = {\overline{L}}:= \sum _{i=1}^5 L_{i}.\)

5.
Mitigation procedures: The SF takes hedges into account and thus supports forward-looking, risk-conscious actions. Tax effects are also considered.

6.
Proportionality principle: The SF takes into account the legal principles of proportionality and materiality. Compared to Solvency I, the SF has a considerably higher degree of complexity, which makes simplifications necessary for smaller insurance companies and for those with a simple portfolio.

7.
Proof of adequacy: Solvency II has significantly improved the architecture of the supervisory approach. In particular, the company-specific risk and solvency assessment, with which a company must prove the adequacy of the calculation method for determining the SCR, is worth mentioning. This means that a rule-based, mechanical application of the SF (as was common practice under Solvency I) is not sufficient to determine the SCR. In the light of the results of this assessment, the supervisory authority may impose additional capital requirements.

8.
Calculation of the SCR: Given the role of the Executive Board within ORSA (challenging the results of the SF), the question of how the aleatoric interpretation made in (3) is to be understood is of particular practical importance. Details are presented, e.g., in Stahl [43]. That paper focuses on a (critical) formal analysis and the associated horizons for interpretation.
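The aggregation mechanics of items 3 and 4 can be sketched in a few lines. The module SCRs and correlation entries below are illustrative placeholders, not the regulatory values fixed in the Delegated Regulation; the point is only the shape of the calculation.

```python
import numpy as np

# Illustrative stand-alone SCRs for the five risk categories of the
# basic SCR (market, counterparty default, life, health, non-life).
# These numbers are placeholders, not regulatory figures.
b = np.array([120.0, 30.0, 80.0, 25.0, 60.0])

# Placeholder correlation matrix (symmetric, unit diagonal); the actual
# matrix is fixed in the Solvency II Delegated Regulation.
C = np.array([
    [1.00, 0.25, 0.25, 0.25, 0.25],
    [0.25, 1.00, 0.25, 0.25, 0.50],
    [0.25, 0.25, 1.00, 0.25, 0.00],
    [0.25, 0.25, 0.25, 1.00, 0.00],
    [0.25, 0.50, 0.00, 0.00, 1.00],
])

# Square-root formula: aggregated SCR = sqrt(b^t C b)
scr = float(np.sqrt(b @ C @ b))

# Diversification benefit relative to simple summation of the modules
diversification = b.sum() - scr
print(scr, diversification)
```

Since all correlation entries are at most 1, the aggregated SCR never exceeds the simple sum of the module SCRs; the difference is the diversification benefit.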
Critical remarks with respect to the standard formula
The SF determines the SCR of a portfolio X at \(t_0\) by a procedure which we denote by
$$\begin{aligned} X \longmapsto \delta _{t_0}(X). \end{aligned}$$
The calculation uses key-date and market-related information, in particular the market prices of financial instruments available at time \(t_0\), but also parameters derived from price time series, such as yield curves. Furthermore, it uses company-specific information related to the reporting date, in particular the company’s risk exposure, denoted \(\varvec{\lambda }\), and its premiums. The exposure vector \(\varvec{\lambda }\) describes the amount (respectively volume) of financial instruments (e.g. assets or liabilities) the insurance company is exposed to. Since an insurance company is exposed to a very large number of financial instruments, the dimension of \(\varvec{\lambda }\) is correspondingly large.
The SF excludes important risk factors. The best-known example of this is the default of government bonds. This entails the risk of regulatory arbitrage. The paradigm of absence of (model) arbitrage is not fulfilled by the SF. Furthermore, the SF is not necessarily conservative, i.e. prudent.
Product innovations (e.g. project financing as a result of the low interest rate phase) are initially not taken into account by the SF; this increases the arbitrage potential of the SF and the uncertainty with regard to its properties.
For market risks, Mittnik [34] criticizes the calibration of the SF as insufficient from an econometric point of view. Stahl [43] explicitly states that the time series on which the calibration is based only extend up to 2009 and do not include important aspects of stylised facts such as low interest rates, unbalanced public budgets, and Brexit, to name but a few. For the SF itself, Brexit is a black swan, as a number of compromises made in the negotiations with the UK are now obsolete. Braun et al. [9] also critically examine the market risk component of the SF. This makes an absolute assessment of the SF within the framework of ORSA difficult.
Formal properties of the SF
Performing a risk analysis requires at least an aggregated assessment of the risk on the level of the consequences. This is provided by the fundamental concept of risk measures. In the following definition, \({\mathcal {B}}\) denotes the vector space of all bounded functions, defined on a fixed domain. The elements of \({\mathcal {B}}\) can be interpreted as losses or payoff functions (with appropriate modifications) of financial instruments.
Definition 1
A function \(\rho : {\mathcal {B}} \longrightarrow {\mathbb {R}}\) is called a monetary risk measure, if for all \(\psi _{B_1}, \psi _{B_2} \in {\mathcal {B}} \):
This definition also encompasses deterministic approaches and does not necessarily assume an underlying stochastic structure. The latter would summarize the aleatoric prior knowledge.
However, since Knight’s seminal work, see [29], a stochastic framework \((\Omega , {\mathcal {A}}, {\mathbb {P}}) \) for describing risk and uncertainty has been the established standard. So it comes as no surprise that Knight’s fundamental approach is also reflected in the literature on risk management applications. Nevertheless, uncertainty can also be represented without the use of a stochastic model. To this end, Augustin et al. [2] define the set of acceptable/desirable games as follows:
Definition 2
\(\Omega \) refers to the set of elementary events. An element of the space of bounded functions on \(\Omega \), \({\mathcal {B}}_\Omega \), is referred to as a game with uncertain result. A game is acceptable/desirable if a player accepts it. The set of acceptable/desirable games is denoted by \({\mathcal {D}} \subseteq {\mathcal {B}}_\Omega \).
SF and uncertainty—aleatoric aspects
For the upcoming discussion of the stochastic model related to the SF, the following passage from the EIOPA document [18, p. 42] provides important insights, since it explicitly states that the stochastic model of the SF is only based on vague prior knowledge:
“Originally in the design of the SCR for non-life insurance underwriting risk, the lognormal distribution acted prominently as a vehicle to model a skew bell-shaped probability distribution. This implied a function of \(\sigma \) that should amount more or less to the value \(3 \sigma \). Later it was decided just to focus on this simple factor and downsizing the explicit assumption of an exact lognormal probability distribution”.
Thus the explanations in CEIOPS technical documents, e.g. [11, 12], serve primarily to justify a calibration proposal rather than to specify a stochastic model. A number of parameter adjustments were also made as part of the QIS studies. Furthermore, vague stochastic modelling has the disadvantage (or, depending on the purpose, the advantage) that some models can hardly be falsified.
The following investigations show to what extent the model approaches expressed by (1) and (2) help to understand the SF. In particular, it will be shown that the stochastic component of the SF should be interpreted as subjective probabilities.
In the spirit of Sandström’s book [42], the SF may be regarded as an objective, stochastic model (see Sect. 3.2) based on a probability space
$$\begin{aligned} (\Omega , {\mathcal {A}}, {\mathbb {P}}_{\theta }), \qquad \theta \in \Theta , \end{aligned}$$(7)
which differs from internal models mainly in the (presumably conservative, i.e. prudent) calibration of the parameter \(\theta \).
In the following, we only consider the first layer of the hierarchy tree (visualized in Fig. 1) of the basic SCR, so as not to strain the notation. As already mentioned, the SF aggregates the SCRs of the vector \(\mathbf{L }\) at this level using a correlation matrix \({\mathcal {C}}\) by the so-called square-root formula:
$$\begin{aligned} \delta _{t_0}({\overline{L}}) = \sqrt{b^t {\mathcal {C}}\, b}, \qquad b = \big (\delta _{t_0}(L_1), \ldots , \delta _{t_0}(L_5)\big )^t, \end{aligned}$$
which is a first indication of a stochastic model in the sense of (7) (\(b^t\) denotes the transpose of b). This motivates a closer investigation of the probabilistic model underlying the SF. For the distribution of the aggregated losses, \({\overline{L}}=L_1+\cdots +L_5\), as well as for each category \(L_i\), the SF (in \(t_0\)) determines only the \(\alpha \)-quantile at the level \(\alpha =99.5\%\), namely \( \delta _{t_0}(L_i)\). If \({\mathbb {F}}\) denotes the set of all distribution functions on \({\mathbb {R}}\), the set
denotes the prior knowledge of the regulator about the marginal distributions \(L_i\) of \({\mathbf {L}}\). In addition, the SF specifies the \(\alpha \)-quantile of \({\overline{L}}= \sum _{i=1}^5L_i\):
By the pair
we denote the entire aleatoric prior knowledge of the regulator. Obviously, \({\mathbb {V}}_\alpha \) specifies a large non-parametric model. In other words, the prior knowledge is rather vague and insofar minimal, as it is just sufficient to define or interpret the SCR (or \(\delta _{t_0}\)) as a value-at-risk as in (3). Furthermore, (10) allows a smooth aggregation of at least partial internal models into the SF.
Example 1
(The square-root formula) If, in addition to (10), the vector \({\mathbf {L}}\) is multivariate normally distributed (with \({\mathbb {E}}[{\mathbf {L}}]={\mathbf {0}}\) and Pearson correlation matrix \({\mathcal {C}}\)), then (10) specifies the class of suitably matched normal distributions, and the aggregation via the square-root formula is exactly valid. More generally, this applies to elliptical distributions, see Fuchs et al. [23]. However, the class of elliptical distributions is not compatible with the skewed distributions of typical insurance risks such as NatCat and credit default, to name but a few. In this respect, the class of elliptical distributions is not compatible with the empirical and epistemic (contextual) knowledge about insurance risks. In this context, it is important to note that the EIOPA document [18] does not explicitly assume a normal distribution, but nevertheless uses a calculation rule, the square-root formula, that is compatible with the multivariate normal distribution. A discussion of the more general case of skewed distributions can be found, e.g., in [10, 35].
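Example 1 can be verified by a small Monte Carlo experiment, sketched below under illustrative assumptions (two risk categories, correlation 0.25): for multivariate normal losses, the square-root aggregation of the stand-alone quantiles coincides with the quantile of the aggregated loss up to simulation error.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.995

# Illustrative two-dimensional setting with correlation 0.25
C = np.array([[1.0, 0.25],
              [0.25, 1.0]])
L = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=1_000_000)

# Stand-alone 99.5% quantiles b_i = delta_{t0}(L_i)
b = np.quantile(L, alpha, axis=0)

# Square-root aggregation vs. the direct quantile of the aggregated loss
scr_sqrt = float(np.sqrt(b @ C @ b))
scr_true = float(np.quantile(L.sum(axis=1), alpha))

print(scr_sqrt, scr_true)  # agree up to Monte Carlo error
```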
Consequence 1
Let us consider a distribution \(F_{{\overline{L}},\mathbf{L }} \in {\mathbb {V}}_\alpha .\) Except in a few special cases, many of which are unrealistic for the present application,
is to be expected. In this view, \({\mathbb {V}}_\alpha \) is misspecified, as it does not meet the properties of distribution functions. From a mathematical point of view, this structural fact goes beyond a possibly conservatively biased calibration. Let us mention the interesting attempt to calibrate the correlation matrix such that equality holds in Eq. (11), see [35].
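The discrepancy addressed in Consequence 1 can be illustrated numerically. The sketch below uses log-normal margins coupled by a Gaussian copula and, as a deliberate simplification, feeds the latent Gaussian correlation into the square-root formula; all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, rho, n = 0.995, 0.25, 1_000_000

# Two skewed losses: log-normal margins coupled by a Gaussian copula
C = np.array([[1.0, rho],
              [rho, 1.0]])
Z = rng.multivariate_normal([0.0, 0.0], C, size=n)
L = np.exp(Z)  # log-normal(0, 1) margins

# Stand-alone quantiles, aggregated via the square-root formula,
# using the latent Gaussian correlation as the aggregation matrix
b = np.quantile(L, alpha, axis=0)
scr_sqrt = float(np.sqrt(b @ C @ b))

# True quantile of the aggregated loss
scr_true = float(np.quantile(L.sum(axis=1), alpha))

print(scr_sqrt, scr_true)
```

Unlike in the elliptical case, the two figures no longer need to coincide, which illustrates the misspecification discussed above.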
Example 2
(Diversification effects) We now examine possible diversification effects and refer to the structure of the SF as shown in Fig. 1. For this purpose, \(L_{ij}\) denotes the sub-risk categories of the third layer of the basic SCR. The associated risk is represented by
where \(R_{ij}\) is modelled (in this example) as a uniformly distributed random variable, i.e. \(R_{ij} \sim {\mathcal {U}}\left[ 0,1 \right] .\) The aggregation of \(\delta (L_{ij})\) is done according to the correlation matrix of the SF; moreover, the simulated \(R_{ij}\) are assumed to be independent. For each simulation, the diversification
can be determined. The denominator corresponds to the sum of all \(\delta (L_{ij})\), which can be identified with comonotonic dependence. The box plot in Fig. 2 summarizes the results of a simulation of 100,000 portfolios.
For more than 75% of all portfolios, D is 60% or larger. In relation to the risk categories \(R_i\), this comes close to the extent of diversification in internal models. In the simulation, D was never less than 50%. Overall, no additional conservative element comes into play through the correlation matrix. Since the market risk module is not necessarily conservative (the default risk of government bonds is ignored), conservatism can only be achieved by the calibration of the underwriting risks. In [39], Pfeifer explicitly proves that neither the notion of correlation nor that of tail dependence directly impacts diversification. Moreover, he points out that under the risk measure value-at-risk, there is no connection between diversification and correlation of risks, which can be mathematically justified [37]. In this respect, the concepts of correlation and diversification should be interpreted with caution in the context of the SF.
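A stripped-down version of the simulation in Example 2 can be sketched as follows. Since the full SF correlation tree is not reproduced here, an illustrative equicorrelation matrix for ten sub-risks stands in for the regulatory matrix, so the resulting diversification figures differ from those in Fig. 2; only the mechanics of the experiment are shown.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sub, n_port = 10, 100_000

# Illustrative equicorrelation matrix standing in for the SF matrix
C = np.full((n_sub, n_sub), 0.25)
np.fill_diagonal(C, 1.0)

# Each portfolio: independent sub-risk SCRs R_ij ~ U[0, 1]
R = rng.uniform(0.0, 1.0, size=(n_port, n_sub))

# Square-root aggregation per portfolio vs. the comonotonic sum
aggregated = np.sqrt(np.einsum("pi,ij,pj->p", R, C, R))
comonotonic = R.sum(axis=1)
D = 1.0 - aggregated / comonotonic  # diversification per portfolio

print(np.percentile(D, [25, 50, 75]))
```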
Consequence 2
The prior knowledge \({\mathbb {V}}_\alpha \) in (10) is too vague to fully specify a distribution function compatible with the epistemic prior knowledge about the present application of risk measurement for an insurance company, for which
is valid. This allows
to be interpreted only as a subjective probability, see Lindley [32]:
where \({\mathcal {K}}\) is the knowledge of the regulator. \({\mathcal {K}}\) includes:

empirical data (e.g. time series of financial instruments, loss triangles, etc.),

results from five QIS studies,

experience from financial crises,

supervisory objectives.
The interpretation (12) corresponds to the prescriptive-normative character of the SF. The prior knowledge \({\mathbb {V}}_\alpha \) is vague both from the perspective of the regulator and from that of an insurance company.
This provides the supervisory standard \( \delta (X)\) with an additional quality that explicates the concept of uncertainty. With the stochastic model construction, the regulatory standard \( \delta (X)\), which initially can be implemented, turns into an ideal standard, i.e. its compliance can no longer be checked with certainty, see Barnett and O’Hagan [7, p. 21ff]. When ideal standards are used, their validation plays a key role. Barnett and O’Hagan [7, p. 28] even require that ideal standards should only be used together with a given framework of validation. In any case, the model reference \({\mathbb {V}}_\alpha \) implies additional legitimation compared to \( \delta (X)\).
The vague aleatoric knowledge of the SF makes backtesting with statistical methods almost impossible. Indeed, with the SF the insurance company generates a time series of forecast intervals and associated BoF realizations at quarterly intervals, resulting in a sequence of realizations of a Bernoulli-distributed random variable, based on which it can be checked whether the 99.5% quantile is violated or not. However, there is not enough data to draw reliable statistical conclusions on such an extreme probability in a Bernoulli experiment. As far as the use of statistical tests is concerned, the SF is at the limit of non-falsifiability.
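The order of magnitude of this non-falsifiability can be made explicit with a simple binomial calculation; the horizon of twenty annual observations is an optimistic, illustrative assumption.

```python
from math import comb

def prob_k_exceedances(k, n, p):
    """Binomial probability of exactly k VaR exceedances in n years."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n = 20  # an (optimistic) twenty annual observations

# Probability of observing no exceedance at all under two hypotheses:
# the regulatory level p = 0.5% and a model twice as risky, p = 1%.
p_none_null = prob_k_exceedances(0, n, 0.005)
p_none_alt = prob_k_exceedances(0, n, 0.010)

print(round(p_none_null, 3), round(p_none_alt, 3))  # 0.905 0.818
```

Both hypotheses predict "no exceedance" with high probability, so even decades of data can hardly discriminate between the regulatory level and a model that is twice as risky.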
In [22], Frezal investigated the assumptions of the five QIS studies and their impact on SCR requirements for non-life risk categories. In our terminology, this corresponds to an analysis of the available information \({\mathcal {K}}_t\) over time and the validity of:
for \(t=t_1,\ldots ,t_5\). He critically evaluates the high influence of \({\mathcal {K}}_t\), since he implicitly assumes an objective model. He therefore concludes that the SF is not risk-based, since risk orders are not time-invariant, i.e. \(\delta _{t_j}(X) < \delta _{t_j}(Y)\) does not follow from \(\delta _{t_i}(X) < \delta _{t_i}(Y)\) for \( i \ne j \). Thus, the concept of subjective probabilities is necessary in order to justify the regulatory choices. Frezal’s work questions whether the SF is unbiased in the sense of Tversky and Kahneman [46], i.e. whether it meets the criteria of:

availability,

anchoring, adjustment, and

representativeness.
Consequence 3
In the theory of subjective probabilities, a concept of coherence is applied, see [32, p. 92]. This concept should not be confused with that of coherent risk measures. Subjective probabilities are called coherent if they fulfill the rules of probability calculus. In order to satisfy the probabilistic requirements for coherence, the relation (11) must hold with equality. However, knowing the correlations alone will in general not fully specify the copula of \(\mathbf{L }=(L_1,\ldots ,L_5)\); hence, the probability related to \(\delta _{t_0}(X)\) is not a coherent subjective probability.
Consequence 4
Related to the SF, a cascade of different, yet interlinked, models is simultaneously introduced:
1.

At a first, purely computational level, the SF implements an algorithm, which motivates the terminology ‘standard formula’. This formula or procedure transforms an input (a given portfolio X) into an output (the calculated SCR) by the mapping \(X\mapsto \delta _{t_0}(X)\). While in mathematical terms a function has to be a well-defined mapping, within the SF the user has some degrees of freedom that might be interpreted as model risk. For instance, the universe of all risk factors an insurance company is exposed to has to be projected onto a small subset of standard risk factors. In this non-trivial modelling step, different insurance companies might use differing assumptions and thus might end up with individually customized ‘standard formulas’.
2.

At a second normative level, the SF should be interpreted as a (regulatory) premium for the portfolio X. If the undertaking’s basic own funds (\(BoF_0\)) surpass its SCR, i.e.
$$\begin{aligned}BoF_0 > SCR = \delta _{t_0} (X),\end{aligned}$$then X is considered an acceptable game for the regulator, i.e. \(X \in {{\mathcal {D}}}\). The set of portfolios considered acceptable by the regulator constitutes a proper subset of all portfolios and hence induces an equivalence relation \(\equiv _\text {accept}\) (two portfolios are related if both are acceptable, or both are not acceptable) capturing the regulator’s preferences. The space of all portfolios equipped with the equivalence relation \(\equiv _\text {accept}\) has a richer structure than that related to \(\delta _{t_0}(X)\), because the normative element of the SF is not captured by the purely arithmetic expression. This space, expressing the regulator’s preferences, shows moderate uncertainties. The latter stem from the fact that the SF’s shortcomings vary from country to country and from undertaking to undertaking.
3.

At a third, stochastic level, \(BoF_1\) in (3) is considered a random variable, which relates \(\delta _{t_0}(X)\) to a probability space \((\Omega , {\mathcal {A}}, \text {Pr})\). The stochastic model \((\Omega , {\mathcal {A}}, \text {Pr})\) shows the highest degree of uncertainty compared with the arithmetic expression \(\delta _{t_0}(X)\) and the normative level. This follows from the fact that \((\Omega , {\mathcal {A}}, \text {Pr})\) is built on both of the above interpretations and, thus, their uncertainties carry over. Furthermore, the stochastic model \((\Omega , {\mathcal {A}}, \text {Pr})\) is the result of expert opinions and various distributional assumptions, see [22]. Hence, additional stochastic uncertainties come into play.
Axiomatics of the SF
The following (axiomatic) criteria formulate (mostly desirable) properties of risk measures, see Denuit et al. [16], Föllmer and Weber [21], and Rüschendorf [41]. In the following, they are used to assess the axiomatic properties of the SF. In addition to their formal, mathematical relevance, these axioms have substantial significance, as they operationalize the existing epistemic knowledge about the respective purposes or causes when risk measures are applied in risk management. Thus, the importance of fulfilling these axioms goes beyond the purely mathematical purpose and reflects the level of understanding of the necessities of risk management implemented in the SF.
We consider the axioms with regard to their fulfillment by the probability functional and the capital functional.
Analysis of the probability functional

1.
Law invariance: If the random variables L and M are equal in distribution (i.e. \(L{\mathop {=}\limits ^{d}} M\)), then \(\delta (L) = \delta (M)\) must hold for a law-invariant risk measure. This criterion states that the risk measure is a function of the distribution function \(F_L\) and is linked to the representation of a risk measure as a statistical functional (1). As Denuit et al. [16, p. 64] explain, this approach is based on the interpretation of objectivity when
$$\begin{aligned} \delta (L)=T(F_L) \end{aligned}$$(13)can be estimated using the empirical distribution function \({\widehat{F}}_n(l)\):
$$\begin{aligned} {\widehat{\delta }}(L)=T({\widehat{F}}_L)=T\big ({\widehat{F}}_n(l)\big ). \end{aligned}$$Since the latter can be determined from data, the model approach associated with (13) can be interpreted as objective. More generally, this interpretation assumes an objective, stochastic model \((\Omega , {\mathcal {A}}, {\mathbb {F}}), \) which models the aleatoric prior knowledge about L. As already shown, the aleatoric model of the SF with \({\mathbb {V}}_\alpha \) uses only vague or subjective probabilities, see Augustin et al. [2], Baudrit and Dubois [8], and Aven et al. [5]. In addition, \(L{\mathop {=}\limits ^{d}} M\) defines an equivalence relation on the set of random variables. The same applies to \({\mathbb {V}}_\alpha \):
$$\begin{aligned} L \simeq _\alpha M \iff F^{-1}_L (\alpha ) = F^{-1}_M (\alpha ). \end{aligned}$$Obviously, the equivalence classes belonging to the two relations (equal in distribution versus sharing only some \(\alpha \)-quantile) are of different size. This not only indicates the degree of aleatoric prior knowledge, but also shows the granularity that is a basic prerequisite for stochastic modelling.

2.
No inappropriately low assessment of risk: This is formulated in Denuit et al. [16] by the requirement
$$\begin{aligned} \delta (L) \geqslant {\mathbb {E}} [L]. \end{aligned}$$(14)As is well known, the value-at-risk as a risk measure does not necessarily meet this requirement, see, e.g., Denuit et al. [16]. Hence, this criterion cannot formally be fulfilled by the SF, although the high level \(\alpha \) in Solvency II is indicative of its validity in practice. For many cases of practical relevance, moreover, (14) can be proven within the framework of ORSA.

3.
Monotonicity:
$$\begin{aligned} \text {Pr}(L \leqslant {\tilde{L}}) =1 \implies \delta (L) \leqslant \delta ({\tilde{L}}). \end{aligned}$$
Observation 1
The SF is not monotone.
Pfeifer showed that neither the standard deviation nor the standard deviation principle meets the criterion of monotonicity, see [38]. Since the premium and reserve risk in the non-life category use the standard deviation as a risk measure, \(\delta (X)\) within the context of the SF cannot meet the monotonicity condition. Further details, especially regarding assumptions on the lognormal distribution, can be found in Hampel and Pfeifer [26].
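The non-monotonicity can be reproduced with a small discrete example in the spirit of Pfeifer's counterexample; the concrete distributions and the loading \(\beta \) are ours, chosen for illustration:

```python
# Standard deviation principle delta(X) = E[X] + beta * sd(X) applied to
# L (uniform on {0, 10}) and the constant loss L_tilde = 10. Although
# Pr(L <= L_tilde) = 1, the principle assigns L the larger capital.

def mean(dist):
    return sum(x * p for x, p in dist)

def sd(dist):
    m = mean(dist)
    return sum(p * (x - m) ** 2 for x, p in dist) ** 0.5

def std_dev_principle(dist, beta):
    return mean(dist) + beta * sd(dist)

L = [(0.0, 0.5), (10.0, 0.5)]
L_tilde = [(10.0, 1.0)]

beta = 1.5
delta_L = std_dev_principle(L, beta)              # 5 + 1.5 * 5 = 12.5
delta_L_tilde = std_dev_principle(L_tilde, beta)  # 10.0
```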
Analysis of the capital functional

1.
No inappropriately high assessment of risk: For \(L\sim F_L\), Denuit et al. [16] use the criterion
$$\begin{aligned} \delta (L) \leqslant F_L^{-1}(1). \end{aligned}$$(15)For all \(F_L \in {\mathbb {V}}_\alpha \), (15) is formally true at each level where the SCR is calculated, see Fig. 1. However, the validity of this relation cannot be taken for granted, because it can hardly be backtested against reality. This argument also applies to the way the SCRs of different risk categories or levels are aggregated by means of the correlation matrix.

2.
Translativity:
$$\begin{aligned} \delta (\psi _B - c) = \delta (\psi _B) - c ;\quad c \in {\mathbb {R}}. \end{aligned}$$(16)
Epistemically, the role of capital, c in (16), in risk management and the regulation of financial markets can hardly be overestimated. In the representation of these systems by cybernetic feedback loops, capital takes over the function of a regulator, i.e. it allows the system to be kept in balance (homeostasis).
It is therefore not surprising that the requirement (16) is a central one. Pflug and Römisch [40, p. 39] concretize the above explanations with regard to coherent capital functions. These determine the capital amount c necessary for the acceptability of the position \(\psi _B\), so that \(\psi _B+c\) is acceptable, i.e.: \( \psi _B+c \in {\mathcal {D}} \).
Proposition 1
The SCR of the SF does not fulfill the property (16) of translativity.
Proof
The following rule determines the OpRisk SCR:
$$\begin{aligned} \delta _{OP} = \min \big \{3\% \times \Pi ,\; 30\% \times \text {BSCR}\big \}, \end{aligned}$$(17)
where \(\Pi \) denotes the premiums and BSCR the basic SCR. In (17) the components of the right-hand side do not come from different categories: both the premium, \(\Pi \), and the basic SCR are capital amounts. But the premium \(\Pi \) does not depend on the capital c, thus (16) can never be fulfilled if the minimum is attained by the first argument, \(3 \% \times \Pi \). \(\square \)
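The argument can be traced numerically; the 30% cap on the basic SCR used below is the Solvency II calibration and should be read as an assumption of this sketch, since only the premium factor \(3\%\times \Pi \) is quoted above:

```python
# OpRisk charge as min(3% of premiums, 30% of the basic SCR). Cash added
# to the position leaves the premiums unchanged, so when the premium term
# attains the minimum, the charge does not fall by c as (16) would demand.

def oprisk_scr(premiums, basic_scr, cap_factor=0.30):
    return min(0.03 * premiums, cap_factor * basic_scr)

premiums, basic_scr = 200.0, 100.0
c = 5.0  # additional capital held as cash

before = oprisk_scr(premiums, basic_scr)  # min(6, 30) = 6
after = oprisk_scr(premiums, basic_scr)   # unchanged by the extra cash
# Translativity (16) would require after == before - c; instead after == before.
```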
Remark 1
The above argument shows that (additional) capital might not be taken into account in \(\delta _{OP}\). Hence, the SF has a precautionary component and creates a liquidity buffer. In the light of (16), at least
$$\begin{aligned} \delta (\psi _B - c) \geqslant \delta (\psi _B) - c \end{aligned}$$(18)
is required of a risk measure, since there may well be normative reasons to prefer a more conservative approach than that determined by (16). It was implicitly assumed in (18) that cash is a financial instrument that can be added.

3.
Subadditivity: \(\delta (\psi _{B_1} + \psi _{B_2}) \leqslant \delta (\psi _{B_1}) + \delta (\psi _{B_2})\).
Proposition 2
The SF is not subadditive.
Proof
We consider two (non-life) insurance companies \(V_1\) and \(V_2\) with basic SCRs (\(\rho _b\)) in the amount of \(\rho _{b} (V_1) = 10\) and \( \rho _{b} (V_2 ) = 100\). Furthermore, the gross premiums are \(\Pi (V_1) = 897 \) and \( \Pi (V_2)= 3\), thus it follows that
$$\begin{aligned} \delta _{OP}(V_1) = \min \{3\% \times 897,\; 30\% \times 10\} = 3, \qquad \delta _{OP}(V_2) = \min \{3\% \times 3,\; 30\% \times 100\} = 0.09. \end{aligned}$$
For the merged company \(V := V_1+V_2 \) one gets:
$$\begin{aligned} \delta _{OP}(V) = \min \{3\% \times 900,\; 30\% \times 110\} = 27 > 3.09 = \delta _{OP}(V_1) + \delta _{OP}(V_2). \end{aligned}$$
\(\square \)
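The figures of the proof can be verified directly; as in the proof we assume that premiums and basic SCRs simply add under the merger, and the 30% factor on the basic SCR is the Solvency II calibration, taken here as an assumption:

```python
# Merger example: OpRisk charges of the stand-alone companies versus the
# merged company V = V1 + V2, using min(3% premiums, 30% basic SCR).

def oprisk_scr(premiums, basic_scr):
    return min(0.03 * premiums, 0.30 * basic_scr)

scr_v1 = oprisk_scr(897.0, 10.0)   # min(26.91, 3)  -> 3
scr_v2 = oprisk_scr(3.0, 100.0)    # min(0.09, 30)  -> 0.09
scr_merged = oprisk_scr(897.0 + 3.0, 10.0 + 100.0)  # min(27, 33) -> 27
# 27 > 3 + 0.09: the merged charge exceeds the sum of the parts.
```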

4.
Positive homogeneity: \(\delta (c X) = c \delta (X);\, c>0\).
Remark 2
The SF is positively homogeneous. Without this property, the SF would not be applicable simultaneously in different currency areas, as the SCR would otherwise depend on the choice of currency.
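For the square-root aggregation via a correlation matrix, positive homogeneity is easy to check; the matrix and component SCRs below are illustrative values of ours:

```python
# Positive homogeneity of sqrt(s^T C s): scaling every component SCR by
# c > 0 (e.g. converting all figures into another currency) scales the
# aggregate SCR by the same factor c.
import math

corr = [[1.0, 0.25], [0.25, 1.0]]

def aggregate(scrs):
    return math.sqrt(sum(corr[i][j] * scrs[i] * scrs[j]
                         for i in range(len(scrs))
                         for j in range(len(scrs))))

scrs = [30.0, 40.0]
c = 1.1  # an exchange rate
lhs = aggregate([c * s for s in scrs])
rhs = c * aggregate(scrs)  # identical up to rounding
```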

5.
The requirement of continuity, i.e. for a sequence of portfolios \(\{X_n\}_{n\in {\mathbb {N}}}\),
$$\begin{aligned} X_n \longrightarrow X \implies \delta (X_n) \longrightarrow \delta (X), \end{aligned}$$(19)applies for a risk, capital, or probability functional. This is obviously a desirable property, since each model per se represents an approximation of reality and thus implicitly presupposes a concept of continuity. On the other hand, the choice of the norm or topology (which we did not specify above) reflects prior knowledge, but is moreover meaningfully connected with the purpose intended by the use of the model.
Remark 3
If \(\{ \varvec{\lambda }_n\}_{n\in {\mathbb {N}}}\) denotes a sequence of exposure vectors related to the SF, then the following applies:
$$\begin{aligned} \varvec{\lambda }_n \longrightarrow \varvec{\lambda } \implies \delta (X_n) \longrightarrow \delta (X), \end{aligned}$$
where \(X_n\) and X denote the portfolios belonging to the exposure vectors \(\varvec{\lambda }_n\) and \(\varvec{\lambda }\), respectively.
Proof
All evaluation mappings within the SF for individual risks, and the procedure combining them, are continuous functions, and so is their composition. \(\square \)
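A numerical illustration of Remark 3 (correlations and exposures are illustrative values of ours): as the exposure vectors approach a limit, the aggregated SCR approaches the limit SCR.

```python
# Continuity in the exposure vector: perturb the component SCRs by 1/n and
# watch the aggregated SCR converge to its limit as n grows.
import math

corr = [[1.0, 0.5], [0.5, 1.0]]

def aggregate(scrs):
    return math.sqrt(sum(corr[i][j] * scrs[i] * scrs[j]
                         for i in range(2) for j in range(2)))

limit = aggregate([10.0, 20.0])
gaps = [abs(aggregate([10.0 + 1.0 / n, 20.0 - 1.0 / n]) - limit)
        for n in (1, 10, 100, 1000)]
# gaps shrink monotonically towards zero
```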
Conclusion
Let us summarize the key observations made in this survey:

1.
The probabilities associated with \(F_X\) in the SF should be interpreted as subjective ones. They are not coherent probabilities, e.g. the risk aggregation formula does not hold for distributions that are in line with the risks an insurance company is exposed to, see (11).

2.
The SF does not fulfill the desiderata of Tversky and Kahneman because of the number of compromises embodied in the standard formula. The important work of Frezal [22], which shows that the QIS studies deteriorated risk rankings, is a good example of a number of biases.

3.
The prior knowledge \({\mathbb {V}}_\alpha \) is not compatible with the epistemic knowledge about insurance companies. As discussed, specifying correlations alone does not, in general, specify a copula function, an exception being the Gaussian copula. Since some important risks have skewed distributions, \({\mathbb {V}}_\alpha \) is misspecified in the light of the epistemic knowledge.

4.
Because the marginal distributions are not specified and \({\mathbb {V}}_\alpha \) characterizes only the quantiles, the standard formula is close to being unfalsifiable: it would take too long to gather significant evidence against the vague knowledge represented by \({\mathbb {V}}_\alpha \). Moreover, the correlation matrix underpinning the SF cannot be evaluated on the basis of \({\mathbb {V}}_\alpha \). This is because the correlations depend on both the dependence structure and the marginal distributions.
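The dependence of correlations on the marginals can be made explicit in closed form: a bivariate normal pair and its componentwise exponential share the same (Gaussian) copula, yet their Pearson correlations differ. The function below evaluates the well-known lognormal correlation formula; the parameter values are illustrative.

```python
# Same copula, different marginals, different Pearson correlation:
# if (X, Y) is bivariate normal with correlation rho and unit variances,
# then corr(exp(X), exp(Y)) = (exp(rho) - 1) / (e - 1) != rho.
import math

def lognormal_corr(rho, sigma1=1.0, sigma2=1.0):
    num = math.exp(rho * sigma1 * sigma2) - 1.0
    den = math.sqrt((math.exp(sigma1 ** 2) - 1.0)
                    * (math.exp(sigma2 ** 2) - 1.0))
    return num / den

rho = 0.9
corr_after_exp = lognormal_corr(rho)  # roughly 0.85, not 0.9
```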

5.
Interpreting \(\delta _{t_0}(X)\) as a game, the induced preferences show little structure, as the SF lacks many desired axioms of quantitative risk management.
In conclusion, the SF lacks sound economic and mathematical reasoning; even minimal requirements (such as monotonicity, no arbitrage, etc.) are violated. The progress compared to Solvency I is not as big as expected. The absence of mathematical structure and scientific rigour makes the use of the SF as a tool of control somewhat delicate. A further consequence of these observations is that it does not seem adequate to use the SF as a benchmark for internal models, because the model uncertainty of the SF cannot be estimated. The SF, thus, should not serve as an anchor for internal models.
References
 1.
Artzner P, Delbaen F, Eber JM, Heath D (1999) Coherent measures of risk. Math Financ 9:203–228
 2.
Augustin T, Coolen F, de Cooman G, Troffaes M (2014) Introduction to imprecise probabilities. Wiley, New York
 3.
Aven T (2011) Quantitative risk assessment. Cambridge University Press, Cambridge
 4.
Aven T (2014) Risk, surprises and black swans. Routledge, Abingdon
 5.
Aven T, Baraldi P, Flage R, Zino E (2014) Uncertainty in risk assessment. Wiley, New York
 6.
BaFin (2017) BaFin’s annual report
 7.
Barnett V, O’Hagan A (1997) Setting environmental standards. Chapman and Hall, London
 8.
Baudrit C, Dubois D (2006) Practical representations of incomplete probabilistic knowledge. Comput Stat Data Anal 51:86–108
 9.
Braun A, Schmeiser H, Schreiber F (2020) Solvency II’s market risk standard formula: how credible is the proclaimed ruin probability?
 10.
Campbell R, Koedijk K, Kofman P (2002) Increased correlation in bear markets. Financ Anal J 58(1):87–94
 11.
CEIOPS (2010) Catastrophe task force report on the standardized scenarios for the catastrophe risk module in the standard formula. Technical report, CEIOPS
 12.
CEIOPS (2011) Calibration of the premium and reserve risk factors in the standard formula of Solvency II. Technical report, CEIOPS
 13.
Chech CD (2012) Eigenmittelanforderungen an Versicherungen im Standardansatz von Solvency II. Working paper, p 74
 14.
CROForum (2017) Use of internal models in ICS 2.0
 15.
Dacorogna M, Nisipasu E, Poulin M (2011) Preparing for Solvency II. SCOR papers
 16.
Denuit M, Dhaene J, Goovaerts M, Kaas R (2005) Actuarial theory for dependent risks. Wiley, Chichester
 17.
Dhaene J, Laeven R, Vanduffel S, Darkiewicz G, Goovaerts M (2009) Can a coherent risk measure be too subadditive? J Risk Insur 75(2):365–386
 18.
EIOPA 14 322 (2014) The underlying assumptions in the standard formula for the Solvency Capital Requirement calculation. EIOPA
 19.
Embrechts P, Puccetti G, Rüschendorf L, Wang R, Beleraj A (2014) An academic response to Basel 3.5. Risks 2(1):25–48
 20.
Föllmer H, Schied A (2002) Convex measures of risk and trading constraints. Financ Stoch 6:429–447
 21.
Föllmer H, Weber S (2015) The axiomatic approach to risk measures for capital determination. Ann Rev Financ Econ 7:301–337
 22.
Frezal S (2017) Solvency II is not risk-based. Could it be? Technical report, PARI
 23.
Fuchs S, Ludwig A, Schmidt K (2013) Zur Exaktheit der Standardformel. Zeitschrift für die Versicherungswissenschaft, pp 87–95
 24.
Gatzert N, Wesker H (2012) A comparative assessment of Basel II/III and Solvency II. Geneva Pap 37:539–570
 25.
Gorge G (2016) Insurance risk management and reinsurance. https://www.lulu.com
 26.
Hampel M, Pfeifer D (2011) Proposal for correction of the SCR calculation bias in Solvency II. ZVersWiss 100:733–743
 27.
Heyde CC, Kou SG, Peng XH (2006) What is a good risk measure: bridging the gaps between data, coherent risk measures, and insurance risk measures. Columbia University (Preprint)
 28.
Jaeger C, Renn O, Rosa E, Webler T (2001) Uncertainty and rational action. Earthscan Publication, London
 29.
Knight FH (1921) Risk, uncertainty and profit. Beard Books, Frederick
 30.
Laas D, Siegel CF (2015) Basel III versus Solvency II: an analysis of regulatory consistency under the new capital standard. Wiley, New York
 31.
Liebwein P (2006) Risk models for capital adequacy: applications in the context of Solvency II and beyond. Geneva Pap 31:528–550
 32.
Lindley DV (2014) Understanding uncertainty. Wiley, New York
 33.
Mainzer K (2014) The new role of mathematical risk modelling and its importance for society. In: Klüppelberg C, Straub D, Welpe I (eds) Risk. Springer, Cham, pp 95–132
 34.
Mittnik S (2011) Solvency II calibrations: where curiosity meets spuriosity. Technical report, center for quantitative risk analysis, Munich
 35.
Mittnik S (2014) VaR-implied tail-correlation matrices. Econ Lett 122(1):69–73
 36.
Morgan JP (2020) Riskmetrics—technical document
 37.
Paulusch J (2017) The Solvency II standard formula, linear geometry and diversification. J Risk Financ Manag 10(2):1–12
 38.
Pfeifer D (2013) Correlation, tail dependence and diversification. In: Becker C, Fried R, Kuhnt S (eds) Robustness and complex data structures. Springer, Berlin, pp 301–314
 39.
Pfeifer D (2016) Hält das Standardmodell unter Solvency II, was es verspricht? In: Koch R, Weber M, Winter G (Hrsg.) (eds) Der Forschung — der Lehre — der Bildung. 100 Jahre Hamburger Seminar für Versicherungswissenschaft und Versicherungswissenschaftlicher Verein in Hamburg e.V., Verlag Versicherungswirtschaft, Karlsruhe, p 767–788
 40.
Pflug G, Römisch W (2007) Modelling, measuring and managing risk. World Scientific, Singapore
 41.
Rüschendorf L (2013) Mathematical risk analysis. Springer, Heidelberg
 42.
Sandström A (2011) Handbook of solvency for actuaries and risk managers. Chapman and Hall, London
 43.
Stahl G (2018) Kommentar zu den §§96121. In: Fahr U, Kaulbach D, Bähr G, Pohlmann G (eds) Versicherungsaufsichtsgesetz. Beck
 44.
Straub D, Welpe I (2014) Decisionmaking under risk: a normative and behavioral perspective. In: Klüppelberg C, Straub D, Welpe I (eds) Risk. Springer, Cham, pp 63–94
 45.
Taleb N, BarYam Y, Douady R, Norman J, Read R (2014) The precautionary principle: fragility and black swans from policy actions. Technical report
 46.
Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science 185:1124–1131
 47.
Wang S, Young V, Panjer H (1997) Axiomatic characterization of insurance prices. Insur Math Econ 21:173–183
Acknowledgements
The authors would like to thank the participants of the workshop “On Risk Measurement and Regulatory Issues in Business”, Montréal, Sept. 11–14, 2017, for critical discussions and suggestions. Special thanks go to Prof. Pfeifer, Oldenburg, who suggested important additional references that provided essential impetus, and to Prof. Huschens, Dresden, whose critical review helped to remove certain inaccuracies from the manuscript. Last but not least we want to express our thanks to the referees and the editor. Their helpful comments significantly improved the readability of the paper.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Scherer, M., Stahl, G. The standard formula of Solvency II: a critical discussion. Eur. Actuar. J. 11, 3–20 (2021). https://doi.org/10.1007/s13385-020-00252-z