AI Underwriting: Feature Engineering and Risk Scoring in Modern Insurance
Underwriting is the beating heart of any insurance company: the process by which a decision is made on whether to accept a risk, at what price, and under what conditions. For decades, this process relied on human underwriters analyzing paper documents and applying actuarial rules encoded in lookup tables. The result? Decisions taking 3-5 business days, high operational costs, and subjective variability between underwriters.
Artificial intelligence is radically rewriting these rules. According to McKinsey, global investment in AI-driven insurance solutions will surpass $6 billion in 2025, with BCG estimating that 36% of the total AI value in insurance is concentrated in the underwriting function. The operational numbers are equally striking: average underwriting decision time has dropped from 3-5 days to 12.4 minutes for standard policies, while maintaining a 99.3% accuracy rate in risk assessment.
But how does an AI underwriting system actually work? This guide deconstructs the entire technical stack: from data collection and feature engineering, to risk scoring models, to interpretability and bias management — with real, production-ready code examples.
What You Will Learn
- End-to-end AI underwriting system architecture
- Domain-specific feature engineering for insurance
- ML models for risk scoring: XGBoost, frequency/severity two-stage approach
- Interpretability with SHAP for auditable, compliance-ready decisions
- Bias detection and fairness mitigation under EU regulatory constraints
- MLOps for underwriting models in production with MLflow
- Data drift monitoring with Population Stability Index (PSI)
The Underwriting Process: From Legacy to AI-Native
Before designing an AI system, it is essential to understand the traditional workflow we are automating. The underwriting process has four fundamental phases:
- Information collection: The applicant provides data about themselves and the risk (questionnaire, documents, physical inspection of the insured asset)
- Risk analysis: The underwriter evaluates the probability and severity of future claims
- Pricing: Determining the premium based on the assessed risk and the portfolio combined ratio objectives
- Decision: Accept, decline, or accept with conditions (exclusions, deductibles, surcharge)
An AI-native system does not eliminate these phases but transforms them fundamentally: data collection becomes automatic from heterogeneous sources (open data, telematics, credit bureaus), risk analysis runs in milliseconds via ML models, pricing is dynamic and personalized for each applicant, and decisions are automated for standard cases with human supervision for complex or borderline ones.
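The routing logic in the decision phase can be sketched as a simple threshold policy. This is a minimal illustration only: the score bands, names, and the 0-100 risk scale below are assumptions, not a production rule set.

```python
from dataclasses import dataclass

# Illustrative score bands on a 0-100 risk scale; real bands are
# calibrated per product and portfolio -- these values are assumptions.
AUTO_ACCEPT_BELOW = 60.0
MANUAL_REVIEW_BELOW = 80.0


@dataclass
class RoutingDecision:
    action: str          # "auto_accept" | "manual_review" | "auto_decline"
    needs_human: bool


def route(risk_score: float) -> RoutingDecision:
    """Standard cases are automated; borderline ones go to an underwriter."""
    if risk_score < AUTO_ACCEPT_BELOW:
        return RoutingDecision("auto_accept", needs_human=False)
    if risk_score < MANUAL_REVIEW_BELOW:
        return RoutingDecision("manual_review", needs_human=True)
    return RoutingDecision("auto_decline", needs_human=False)
```

In practice the "human supervision" band is where most of the underwriting expertise remains concentrated: the model narrows the funnel, it does not close it.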
Regulatory Framework: EU AI Act and Underwriting
The European AI Act (fully in force from August 2027) classifies credit and insurance scoring systems as high-risk AI (Annex III). This entails specific obligations: transparency of automated decisions, right to human review, detailed technical documentation, and pre-market conformity assessment. AI underwriting system design must incorporate these requirements from the architecture stage, not as a subsequent retrofit.
Feature Engineering for Insurance Underwriting
Feature engineering quality is the single factor that most differentiates an excellent underwriting model from a mediocre one. Unlike domains such as computer vision, where features are automatically extracted by convolutional layers, insurance tabular data requires deep manual engineering grounded in actuarial domain knowledge.
For motor insurance, features fall into five main categories:
- Demographic features: age, marital status, type of residence
- Driving features: years licensed, age at first license, claims and violations history
- Vehicle features: make, model, year, value, performance category
- Geographic features: urban density, area crime index, weather risk
- Economic features: credit score, requested coverage type
```python
import zlib
import pandas as pd
import numpy as np
from typing import Dict, Optional
from dataclasses import dataclass
from datetime import date


@dataclass
class PolicyApplicant:
    """Raw data for a motor policy applicant."""
    applicant_id: str
    birth_date: date
    license_date: date
    zip_code: str
    vehicle_make: str
    vehicle_year: int
    vehicle_value: float
    annual_mileage: int
    claims_3yr: int
    violations_3yr: int
    credit_score: Optional[int] = None
    marital_status: str = "single"
    housing_type: str = "tenant"


class AutoInsuranceFeatureEngineer:
    """
    Feature engineering for motor underwriting.

    Produces 40+ features from raw applicant data,
    including derived features, interactions and
    domain-specific encoding.
    """

    VEHICLE_MAKE_RISK: Dict[str, int] = {
        "Ferrari": 5, "Lamborghini": 5, "Porsche": 4,
        "BMW": 3, "Mercedes": 3, "Audi": 3,
        "Toyota": 1, "Honda": 1, "Volkswagen": 2,
        "Ford": 2, "Fiat": 2, "Renault": 2,
    }

    def __init__(self, reference_date: Optional[date] = None) -> None:
        self.reference_date = reference_date or date.today()

    def engineer_features(self, applicant: PolicyApplicant) -> Dict[str, float]:
        features: Dict[str, float] = {}
        features.update(self._demographic_features(applicant))
        features.update(self._driving_experience_features(applicant))
        features.update(self._vehicle_features(applicant))
        features.update(self._claims_features(applicant))
        features.update(self._geographic_features(applicant))
        if applicant.credit_score is not None:
            features.update(self._credit_features(applicant))
        features.update(self._interaction_features(features))
        return features

    def _demographic_features(self, applicant: PolicyApplicant) -> Dict[str, float]:
        age = (self.reference_date - applicant.birth_date).days / 365.25
        return {
            "age": age,
            "age_squared": age ** 2,
            "age_under_25": float(age < 25),
            "age_over_70": float(age > 70),
            # Non-linear risk: peak below 25 and above 70
            "age_risk_young": max(0.0, (25 - age) / 25) if age < 25 else 0.0,
            "age_risk_senior": max(0.0, (age - 70) / 20) if age > 70 else 0.0,
            "is_married": float(applicant.marital_status == "married"),
            "is_homeowner": float(applicant.housing_type == "owner"),
        }

    def _driving_experience_features(self, applicant: PolicyApplicant) -> Dict[str, float]:
        years_licensed = (self.reference_date - applicant.license_date).days / 365.25
        age = (self.reference_date - applicant.birth_date).days / 365.25
        age_at_license = age - years_licensed
        return {
            "years_licensed": years_licensed,
            "years_licensed_squared": years_licensed ** 2,
            "age_at_first_license": age_at_license,
            "late_license_ratio": max(0.0, (age_at_license - 18) / 10),
            "is_new_driver": float(years_licensed < 2),
            "is_experienced_driver": float(years_licensed > 10),
        }

    def _vehicle_features(self, applicant: PolicyApplicant) -> Dict[str, float]:
        vehicle_age = self.reference_date.year - applicant.vehicle_year
        make_risk = self.VEHICLE_MAKE_RISK.get(applicant.vehicle_make, 2)
        return {
            "vehicle_age": float(vehicle_age),
            "vehicle_value": applicant.vehicle_value,
            "vehicle_value_log": np.log1p(applicant.vehicle_value),
            "vehicle_make_risk_score": float(make_risk),
            "is_high_performance": float(make_risk >= 4),
            "is_new_vehicle": float(vehicle_age <= 2),
            "is_old_vehicle": float(vehicle_age > 10),
            "annual_mileage": float(applicant.annual_mileage),
            "annual_mileage_log": np.log1p(applicant.annual_mileage),
            "high_mileage": float(applicant.annual_mileage > 20000),
        }

    def _claims_features(self, applicant: PolicyApplicant) -> Dict[str, float]:
        claims = applicant.claims_3yr
        violations = applicant.violations_3yr
        return {
            "claims_3yr": float(claims),
            "violations_3yr": float(violations),
            "has_any_claim": float(claims > 0),
            "has_multiple_claims": float(claims > 1),
            "has_violations": float(violations > 0),
            # Weighted combined score: claims 3x heavier than violations
            "incident_score": claims * 3.0 + violations * 1.5,
            "claims_x_violations": float(claims * violations),
        }

    def _geographic_features(self, applicant: PolicyApplicant) -> Dict[str, float]:
        # Production: lookup against geographic DBs (census, OSM, crime stats).
        # The placeholder uses a deterministic CRC32 hash -- Python's built-in
        # hash() is salted per process and would make features non-reproducible.
        zip_hash = zlib.crc32(applicant.zip_code.encode()) % 100
        urban_score = (zip_hash % 5) / 4.0
        crime_index = (zip_hash % 3) / 2.0
        weather_risk = (zip_hash % 4) / 3.0
        return {
            "urban_density_score": urban_score,
            "area_crime_index": crime_index,
            "area_weather_risk": weather_risk,
            "composite_geo_risk": (urban_score + crime_index + weather_risk) / 3,
        }

    def _credit_features(self, applicant: PolicyApplicant) -> Dict[str, float]:
        score = applicant.credit_score or 0
        return {
            "credit_score": float(score),
            "credit_score_normalized": (score - 300) / (850 - 300),
            "poor_credit": float(score < 580),
            "fair_credit": float(580 <= score < 670),
            "good_credit": float(670 <= score < 740),
            "excellent_credit": float(score >= 740),
        }

    def _interaction_features(self, features: Dict[str, float]) -> Dict[str, float]:
        return {
            # Young driver + high-performance vehicle = very high risk
            "young_high_perf": (
                features.get("age_risk_young", 0) *
                features.get("is_high_performance", 0)
            ),
            # Prior claims in dense urban areas compound risk
            "claims_urban": (
                features.get("claims_3yr", 0) *
                features.get("urban_density_score", 0)
            ),
            # High mileage on old vehicle = elevated mechanical failure risk
            "mileage_old_vehicle": (
                features.get("annual_mileage_log", 0) *
                features.get("is_old_vehicle", 0)
            ),
        }
```
Risk Scoring Models: Approaches and Trade-offs
The choice of machine learning model for risk scoring must balance predictive accuracy, interpretability (critical for compliance), inference speed, and maintainability. Here are the main approaches used in the insurance industry:
Model Comparison for Insurance Risk Scoring
| Model | Accuracy | Interpretability | Ideal Use Case |
|---|---|---|---|
| GLM (Poisson/Gamma) | Medium | Very High | Actuarial baseline, regulatory acceptance |
| Random Forest | High | Medium | Feature importance, robustness to outliers |
| XGBoost / LightGBM | Very High | Medium | Production standard, SOTA on tabular data |
| Tabular Neural Network | High | Low | Complex features with categorical embeddings |
The most established approach in the industry is the two-stage model: a frequency model (expected number of claims per policy period) and a severity model (expected cost per claim, given that one occurs). The expected pure premium is Frequency × Severity.
```python
import xgboost as xgb
from sklearn.metrics import mean_absolute_error, mean_squared_error
import numpy as np
import pandas as pd
from typing import Dict


class TwoStageRiskScorer:
    """
    Two-stage pricing model for motor insurance.

    Stage 1: Frequency model (Poisson regression with XGBoost)
             Target = claim count per policy
    Stage 2: Severity model (Tweedie/Gamma with XGBoost)
             Target = claim amount, trained on claims-only subset

    Pure Premium = E[Frequency] * E[Severity | has_claim]
    """

    FREQUENCY_PARAMS: Dict = {
        "objective": "count:poisson",
        "eval_metric": "poisson-nloglik",
        "max_depth": 6,
        "learning_rate": 0.05,
        "n_estimators": 500,
        "min_child_weight": 50,  # actuarial stability: min claims per leaf
        "subsample": 0.8,
        "colsample_bytree": 0.8,
        "reg_alpha": 0.1,
        "reg_lambda": 1.0,
        "tree_method": "hist",
        "early_stopping_rounds": 50,
    }

    SEVERITY_PARAMS: Dict = {
        "objective": "reg:tweedie",
        "tweedie_variance_power": 1.5,  # 1=Poisson, 2=Gamma
        "eval_metric": "tweedie-nloglik@1.5",
        "max_depth": 5,
        "learning_rate": 0.05,
        "n_estimators": 300,
        "min_child_weight": 30,
        "subsample": 0.8,
        "colsample_bytree": 0.7,
        "tree_method": "hist",
        "early_stopping_rounds": 30,
    }

    def __init__(self) -> None:
        self.frequency_model = xgb.XGBRegressor(**self.FREQUENCY_PARAMS)
        self.severity_model = xgb.XGBRegressor(**self.SEVERITY_PARAMS)
        self.feature_names: list = []

    def fit(
        self,
        X: pd.DataFrame,
        y_claims: pd.Series,
        y_amounts: pd.Series,
        exposure: pd.Series,
        eval_fraction: float = 0.2,
    ) -> "TwoStageRiskScorer":
        """
        Train both models.

        CRITICAL: use a temporal split, NOT a random shuffle.
        Insurance data is autocorrelated in time.
        """
        self.feature_names = X.columns.tolist()
        split_idx = int(len(X) * (1 - eval_fraction))
        X_train, X_val = X.iloc[:split_idx], X.iloc[split_idx:]
        freq_train, freq_val = y_claims.iloc[:split_idx], y_claims.iloc[split_idx:]

        self.frequency_model.fit(
            X_train, freq_train,
            sample_weight=exposure.iloc[:split_idx],
            eval_set=[(X_val, freq_val)],
            verbose=50,
        )

        # Severity: train only on policies with at least one claim
        # (the boolean mask preserves temporal ordering)
        has_claim = y_amounts > 0
        X_sev, y_sev = X[has_claim], y_amounts[has_claim]
        sev_split = int(len(X_sev) * (1 - eval_fraction))
        self.severity_model.fit(
            X_sev.iloc[:sev_split], y_sev.iloc[:sev_split],
            eval_set=[(X_sev.iloc[sev_split:], y_sev.iloc[sev_split:])],
            verbose=30,
        )
        return self

    def predict_pure_premium(
        self, X: pd.DataFrame, exposure: float = 1.0
    ) -> np.ndarray:
        """Compute expected pure premium: E[Freq] * E[Severity]."""
        freq = self.frequency_model.predict(X) * exposure
        sev = self.severity_model.predict(X)
        return freq * sev

    def evaluate(self, X: pd.DataFrame, y_claims: pd.Series) -> Dict[str, float]:
        pred = self.frequency_model.predict(X)
        mae = mean_absolute_error(y_claims, pred)
        rmse = float(np.sqrt(mean_squared_error(y_claims, pred)))
        gini = self._gini(y_claims.values, pred)
        lift = self._lift(y_claims.values, pred, 0.1)
        return {
            "mae": round(mae, 6),
            "rmse": round(rmse, 6),
            "gini_coefficient": round(gini, 4),
            "lift_top_decile": round(lift, 4),
        }

    def _gini(self, actual: np.ndarray, predicted: np.ndarray) -> float:
        """Gini coefficient: standard actuarial metric for frequency models."""
        # Sort by predicted risk DESCENDING: a discriminating model
        # concentrates actual claims at the start, giving a positive Gini.
        idx = np.argsort(predicted)[::-1]
        cum = np.cumsum(actual[idx])
        if cum[-1] == 0:
            return 0.0
        cum_norm = cum / cum[-1]
        lorenz_area = float(np.sum(cum_norm)) / len(actual)
        return 2 * (lorenz_area - 0.5)

    def _lift(self, actual: np.ndarray, predicted: np.ndarray, decile: float) -> float:
        k = max(1, int(len(actual) * decile))
        top_idx = np.argsort(predicted)[-k:]
        base = actual.mean()
        return float(actual[top_idx].mean() / base) if base > 0 else 0.0
```
Interpretability with SHAP: Auditable Decisions
In a regulated context such as insurance, a black-box model is insufficient. Regulations require that underwriting decisions be explainable: for the customer (GDPR right to explanation), for underwriters (borderline case review), and for regulators (Solvency II Pillar 3, ORSA). SHAP (SHapley Additive exPlanations) is the industry-standard tool for post-hoc interpretability of ensemble models.
```python
import shap
import pandas as pd
import numpy as np
from typing import Dict, List, Tuple


class UnderwritingExplainer:
    """
    SHAP-based explanations for underwriting decisions.
    Generates output at three levels: customer, underwriter, compliance.
    """

    FEATURE_LABELS: Dict[str, str] = {
        "age": "driver age",
        "years_licensed": "years holding a license",
        "claims_3yr": "claims in the last 3 years",
        "violations_3yr": "traffic violations in the last 3 years",
        "vehicle_make_risk_score": "vehicle risk category",
        "vehicle_age": "vehicle age",
        "vehicle_value": "vehicle market value",
        "annual_mileage": "declared annual mileage",
        "composite_geo_risk": "geographic area risk index",
        "credit_score": "credit score",
        "young_high_perf": "young driver + high-performance vehicle",
    }

    def __init__(self, model, feature_names: List[str]) -> None:
        self.feature_names = feature_names
        self.explainer = shap.TreeExplainer(model)

    def explain(self, X_row: pd.DataFrame, risk_score: float) -> Dict:
        shap_values = self.explainer.shap_values(X_row)
        impacts: List[Tuple[str, float]] = sorted(
            zip(self.feature_names, shap_values[0]),
            key=lambda x: abs(x[1]),
            reverse=True,
        )
        return {
            "risk_score": round(risk_score, 2),
            "decision": self._score_to_decision(risk_score),
            "customer_message": self._customer_message(impacts),
            "top_risk_factors": [
                {
                    "name": name,
                    "label": self.FEATURE_LABELS.get(name, name),
                    # `val` rather than `shap`, to avoid shadowing the module
                    "direction": "increases risk" if val > 0 else "reduces risk",
                    "magnitude": round(abs(val), 4),
                }
                for name, val in impacts[:5]
            ],
            "audit_trail": {
                "base_expected_value": float(self.explainer.expected_value),
                "all_shap_values": {
                    n: round(float(s), 6)
                    for n, s in zip(self.feature_names, shap_values[0])
                },
                "input_features": X_row.to_dict(orient="records")[0],
            },
        }

    def _customer_message(self, impacts: List[Tuple[str, float]]) -> str:
        high = [(n, v) for n, v in impacts if abs(v) > 0.1]
        if not high:
            return "Your profile falls within the standard risk category."
        positives = [self.FEATURE_LABELS.get(n, n) for n, v in high[:3] if v < 0]
        negatives = [self.FEATURE_LABELS.get(n, n) for n, v in high[:3] if v > 0]
        parts = []
        if negatives:
            parts.append(f"Factors that increase your risk profile: {', '.join(negatives)}.")
        if positives:
            parts.append(f"Factors in your favour: {', '.join(positives)}.")
        return " ".join(parts)

    def _score_to_decision(self, score: float) -> str:
        if score < 30:
            return "ACCEPT_PREFERRED"
        elif score < 60:
            return "ACCEPT_STANDARD"
        elif score < 80:
            return "ACCEPT_SUBSTANDARD"
        return "DECLINE_OR_MANUAL_REVIEW"
```
Fairness and Bias Detection Under EU Law
Using proxy variables (postal code, credit score) can introduce indirect discrimination prohibited by law. In Europe, the Gender Directive (2004/113/EC), as interpreted by the ECJ's Test-Achats ruling (2011), prohibits the use of gender in insurance pricing. The AI Act adds further constraints for high-risk systems under Annex III, requiring mandatory conformity assessments before deployment.
```python
import pandas as pd
import numpy as np
from sklearn.metrics import confusion_matrix
from typing import Dict, List


class FairnessAuditor:
    """
    Fairness auditor for underwriting models (EU-compliant).

    Implemented metrics:
    - Disparate Impact (80% rule)
    - Demographic Parity Gap
    - Equal Opportunity (TPR parity)
    """

    DISPARATE_IMPACT_THRESHOLD = 0.8  # EEOC 80% rule
    MAX_DP_GAP = 0.1  # internal tolerance; no single EU-mandated value exists

    def __init__(
        self,
        predictions: np.ndarray,
        true_labels: np.ndarray,
        sensitive_df: pd.DataFrame,
    ) -> None:
        self.predictions = predictions
        self.true_labels = true_labels
        self.sensitive_df = sensitive_df

    def full_audit(self) -> Dict:
        results: Dict = {}
        for attr in self.sensitive_df.columns:
            groups = self.sensitive_df[attr].unique()
            attr_results: Dict = {}
            for group in groups:
                # .values: plain boolean array, safe for indexing numpy arrays
                mask = (self.sensitive_df[attr] == group).values
                g_pred = self.predictions[mask]
                g_true = self.true_labels[mask]
                attr_results[str(group)] = {
                    "count": int(mask.sum()),
                    # scores here are on a 0-1 scale; below 0.6 = accepted
                    "acceptance_rate": float((g_pred < 0.6).mean()),
                    "avg_score": round(float(g_pred.mean()), 4),
                    "tpr": self._tpr(g_true, g_pred),
                }
            di = self._disparate_impact(attr_results)
            dp = self._dp_gap(attr_results)
            attr_results["_metrics"] = {
                "disparate_impact": round(di, 4),
                "demographic_parity_gap": round(dp, 4),
                "passes_di_rule": di >= self.DISPARATE_IMPACT_THRESHOLD,
                "passes_dp_rule": dp <= self.MAX_DP_GAP,
                "overall_fair": (
                    di >= self.DISPARATE_IMPACT_THRESHOLD and
                    dp <= self.MAX_DP_GAP
                ),
            }
            results[attr] = attr_results
        return results

    def _tpr(self, labels: np.ndarray, preds: np.ndarray, thr: float = 0.5) -> float:
        if len(labels) < 10:
            return float("nan")
        binary = (preds >= thr).astype(int)
        try:
            tn, fp, fn, tp = confusion_matrix(labels, binary, labels=[0, 1]).ravel()
            return round(tp / (tp + fn), 4) if (tp + fn) > 0 else 0.0
        except ValueError:
            return float("nan")

    def _disparate_impact(self, groups: Dict) -> float:
        rates = [v["acceptance_rate"] for k, v in groups.items()
                 if not k.startswith("_") and isinstance(v, dict)]
        if not rates or max(rates) == 0:
            return 1.0
        return min(rates) / max(rates)

    def _dp_gap(self, groups: Dict) -> float:
        rates = [v["acceptance_rate"] for k, v in groups.items()
                 if not k.startswith("_") and isinstance(v, dict)]
        return (max(rates) - min(rates)) if rates else 0.0
```
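To make the 80% rule concrete, here is a standalone computation on assumed acceptance rates (the two group rates are illustrative numbers, not real portfolio data):

```python
def disparate_impact(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower acceptance rate to the higher one."""
    lo, hi = min(rate_a, rate_b), max(rate_a, rate_b)
    return lo / hi if hi > 0 else 1.0


# Group A accepted at 72%, group B at 60% (assumed figures)
di = disparate_impact(0.72, 0.60)
print(round(di, 4))  # 0.8333 -> just above the 0.8 threshold, passes
```

Note that the ratio is symmetric in the two groups: only the gap between the lower and higher acceptance rate matters, not which group is which.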
MLOps and Production Monitoring
Underwriting models are prone to frequent concept drift: the applicant profile changes over time (new electric vehicle models, demographic shifts), repair costs face inflation, and extreme weather events alter claim patterns. Continuous monitoring with Population Stability Index (PSI) is essential to detect when a model needs retraining before its predictions degrade.
```python
from scipy import stats
import numpy as np
import pandas as pd
from typing import Dict, List
from datetime import datetime


class DriftMonitor:
    """
    Data drift monitoring for underwriting models.
    Uses PSI (Population Stability Index) as primary metric.

    PSI interpretation:
    - PSI < 0.1: No significant change
    - PSI 0.1-0.25: Moderate change, monitor closely
    - PSI > 0.25: Significant change, retraining recommended
    """

    def __init__(self, reference_df: pd.DataFrame, features: List[str]) -> None:
        self.reference_df = reference_df
        self.features = features

    def check_drift(self, current_df: pd.DataFrame) -> Dict:
        feature_results: Dict = {}
        critical_features: List[str] = []
        for feat in self.features:
            if feat not in current_df.columns:
                continue
            psi = self._psi(self.reference_df[feat], current_df[feat])
            ks_stat, ks_p = stats.ks_2samp(
                self.reference_df[feat].dropna(),
                current_df[feat].dropna(),
            )
            status = "ok" if psi < 0.1 else ("warning" if psi < 0.25 else "critical")
            feature_results[feat] = {
                "psi": round(psi, 4),
                "ks_statistic": round(float(ks_stat), 4),
                "ks_pvalue": round(float(ks_p), 4),
                "status": status,
            }
            if status == "critical":
                critical_features.append(feat)
        avg_psi = float(np.mean([v["psi"] for v in feature_results.values()]))
        return {
            "checked_at": datetime.now().isoformat(),
            "overall_psi": round(avg_psi, 4),
            "retraining_recommended": avg_psi > 0.1,
            "critical_features": critical_features,
            "feature_details": feature_results,
        }

    def _psi(self, ref: pd.Series, cur: pd.Series, bins: int = 10) -> float:
        ref_clean = ref.dropna().values
        cur_clean = cur.dropna().values
        # Bin edges from reference quantiles; np.unique guards against
        # duplicate edges on low-cardinality features
        edges = np.percentile(ref_clean, np.linspace(0, 100, bins + 1))
        edges = np.unique(edges)
        ref_counts, _ = np.histogram(ref_clean, bins=edges)
        cur_counts, _ = np.histogram(cur_clean, bins=edges)
        # The 1e-10 floor avoids log(0) and division by zero on empty bins
        ref_pct = (ref_counts + 1e-10) / len(ref_clean)
        cur_pct = (cur_counts + 1e-10) / len(cur_clean)
        return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))
```
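As a sanity check on the PSI formula, here is a minimal standalone computation over pre-binned proportions (the two distributions below are made-up illustrations):

```python
import numpy as np


def psi_from_proportions(ref_pct: np.ndarray, cur_pct: np.ndarray) -> float:
    """PSI = sum over bins of (cur - ref) * ln(cur / ref)."""
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))


ref = np.array([0.25, 0.25, 0.25, 0.25])      # training-time bin shares
same = psi_from_proportions(ref, ref)          # identical population -> PSI = 0

shifted = np.array([0.10, 0.20, 0.30, 0.40])  # drifted population
drift = psi_from_proportions(ref, shifted)     # ~0.23: "moderate change" zone
```

A uniform reference drifting to the skewed distribution above lands in the 0.1-0.25 band from the docstring, i.e. the model should be watched closely but not necessarily retrained yet.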
Best Practices and Anti-patterns
Best Practices for AI Underwriting
- Two-stage architecture (frequency/severity): the actuarial industry standard; produces more accurate pricing than a single model on claim amounts
- Mandatory temporal split: insurance data is autocorrelated in time; never use random shuffle for train/test split
- Exposure as model offset: always use policy duration (exposure in years) as offset in the Poisson model to normalize claim count
- Keep a GLM baseline: generalized linear models are more easily validated by regulators and provide a benchmark to measure ML added value
- Shadow mode before go-live: run the model in parallel to human underwriting for 30-90 days to compare decisions before full automation
- Monitor PSI weekly: motor drift is frequent due to new vehicle models, repair cost inflation, and regulatory changes
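The "exposure as offset" practice above can be illustrated without a full training run. In a log-link Poisson model, adding log(exposure) to the linear score makes the model predict a claim rate per year, scaled by each policy's actual duration (a minimal numpy sketch; the score values are made up):

```python
import numpy as np

# A log-link Poisson model predicts log(rate per unit of exposure).
# Adding log(exposure) as an offset turns the output into an expected count:
#   E[claims] = exp(linear_score + log(exposure)) = rate * exposure
linear_score = np.array([-2.5, -2.5, -1.0])  # assumed model scores
exposure_yrs = np.array([1.0, 0.5, 1.0])     # policy durations in years

expected_claims = np.exp(linear_score + np.log(exposure_yrs))
rate_per_year = np.exp(linear_score)

# The same risk profile held for half a year contributes
# half the expected claims -- which is exactly what the offset encodes.
```

In XGBoost this offset is typically supplied through the `base_margin` argument when training the Poisson model, rather than as a feature, so the tree ensemble learns the rate and the exposure scaling stays exact.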
Anti-patterns to Avoid
- Feature leakage: never use post-claim variables (claim amount, reserve) as training features for the frequency model
- Optimizing only AUC: insurance's relevant metrics are Gini coefficient, Combined Ratio, and Lift in the top risk decile
- Models with 500+ features: impossible to validate actuarially and to justify to the regulator; prefer rigorous feature selection (max 40-60 features)
- Ignoring portfolio concentration: a model accepting only very low-risk profiles creates adverse selection and an unbalanced portfolio
- Proxy discrimination: postal codes can proxy for ethnicity; always test disparate impact before deploying
Conclusions and Next Steps
AI underwriting does not replace the human underwriter — it amplifies them. Decisions for standard policies (80-90% of volume) can be fully automated with accuracy superior to the human average, freeing specialists for complex cases where domain expertise is irreplaceable.
The keys to a successful system: deep feature engineering grounded in actuarial knowledge, frequency/severity two-stage architecture, SHAP interpretability for compliance, mandatory fairness auditing, and continuous PSI monitoring for drift management.
The next article in this series explores Claims Automation with Computer Vision and NLP: from digital FNOL to automated photo damage assessment, through to accelerated end-to-end settlement.
InsurTech Engineering Series
- 01 - Insurance Domain for Developers: Products, Actors and Data Model
- 02 - Cloud-Native Policy Management: API-First Architecture
- 03 - Telematics Pipeline: Processing UBI Data at Scale
- 04 - AI Underwriting: Feature Engineering and Risk Scoring (this article)
- 05 - Claims Automation: Computer Vision and NLP
- 06 - Fraud Detection: Graph Analytics and Behavioral Signals
- 07 - ACORD Standards and Insurance API Integration
- 08 - Compliance Engineering: Solvency II and IFRS 17