1.3 What we do
2. Model assumptions and validation of reasonableness
3. Description of symbols
4. Modeling and solving
4.1 Future risk assessment and profitability forecasts
4.2 Risk assessment and pricing
4.3 Practical application to Gansu and New Jersey
4.4 Conclusion
5. Enabling prediction of future risks
5.1 Future risk assessment model
5.2 Practical application of the model
5.3 Conclusion
5.4 Model optimization and error analysis
6. Some suggestions for property owners wishing to change whether or not their insurance company will insure in the area
7. Decision-making on the level of protection to be provided to buildings
7.1 Hierarchical analysis to estimate building values
7.2 Model testing
7.3 Verifying the extensiveness of the model
8. Model evaluation
8.1 Strengths
8.2 Disadvantages
References
A letter to the members of the Paris Tower community
Report on use of AI
1. Introduction
1.1 Background of the problem
Extreme weather is like nature's whip, beating human society mercilessly. Floods, windstorms, cyclones, droughts, and wildfires each bring heavy losses whenever they strike, and despite a long series of predictions and protective measures, humanity remains as vulnerable as ants before these natural challenges. The losses and injuries caused by extreme weather events have seriously impeded human development and are a constant reminder that we must protect the ecological environment. In recent years there have been as many as 1,000 extreme weather events worldwide, with incalculable losses: the harm to victims and their families is deeply engraved, and the damage to global economic development is considerable. The insurance industry has also suffered significant losses, with claims for natural catastrophes in 2022 increasing by 115% compared to the 30-year average, and such losses are expected to continue to grow as severe weather intensifies. There is therefore an urgent need to promote the sustainable development of the insurance industry under these conditions. We hope to build models that predict the losses caused by such risks, informing insurers' decisions on whether or not to underwrite and ensuring that insurance companies can operate soundly in today's environment. At the same time, we would like to contribute to the building of resilient communities.
1.2 Analysis of the problem
Considering the background information and constraints identified in the problem statement, we need to address the following questions.
Problem 1: Develop a future risk assessment model and a profit decision model that give insurers a basis for positioning insurance according to natural weather factors, risk-taking strategies, and profit calculations. The goal is to find the conditions under which an insurer can underwrite a policy, and thus to determine whether it should underwrite in an area experiencing an increasing number of extreme weather events.
Problem 2: Building on the insurance model from Problem 1, derive a new assessment model by collecting and analyzing an area's past extreme weather data, in order to determine whether, and how, to build developments at that location.
Problem 3: Develop a conservation model to determine the economic and social value of an area and give a conservation strategy accordingly.
Problem 4: Conduct a risk assessment for a valuable historic landmark and develop and give recommendations for its future.
1.3 What we do
To clearly illustrate our work, we have created the following flowchart:
2. Model assumptions and rationalization
Considering that these practical problems always contain many complex factors, we first make reasonable assumptions to simplify the model.
Assumption: The data we use are accurate and valid.
Rationale: Our data comes from the China Meteorological Data Network (CMDN), the World Meteorological Organization (WMO) and some other official websites as well as research papers.
Assumption: Since the occurrence of extreme weather obeys a Poisson distribution, we assume that losses due to extreme weather also follow a well-defined probability distribution.
Rationale: Losses from extreme weather depend on the duration of the disaster, the population density of the affected area, society's emergency-management capacity, the resilience of infrastructure design, and so on. Several of these influences can be described by Poisson distributions, so the assumption is reasonable.
Assumption: Independence of risk. The risk of loss is independent across insured subjects; damage to one subject in an extreme event does not change the probability of loss for any other subject.
Rationale: Risk independence means no additional losses arise from collateral disasters that could distort our data.
Assumption: Insurance rates remain stable during the modeling period; we do not consider rate adjustments driven by market competition or changes in risk.
Rationale: Future economic markets are unpredictable, so the model is based on current insurance rates.
3. Description of symbols
The following table lists the main mathematical notations used in the paper.
Table 1: Symbols used in this paper
4. Modeling and solving
4.1 Future risk assessment and profitability forecasts
Traditional property and casualty insurance profitability models struggle to accurately capture extreme environmental risks because of their complexity and uncertainty. We therefore construct an insurance profitability risk assessment and optimization model that accounts for the probability of extreme environmental events, the degree of loss, the premium rate, the cost of claims, and related factors, providing scientific support for property insurers deciding whether to underwrite in extreme environments.
First we calculate the total premium income I:

$$I=\sum_{i=1}^{n} V_{i} \times r$$

where n is the number of insured objects, $V_i$ is the value of the i-th insured object ($i = 1, 2, \ldots, n$), and r is the comprehensive premium rate, determined from the assessment of extreme environmental risk, the characteristics of the object, and other factors.
Gross premium income is thus the sum over all insured subjects of each subject's value multiplied by the composite premium rate. This is the insurer's main source of income: each subject's value is assessed and multiplied by the rate, and the results are summed.
Next, we calculate the payout amount $L_i$ for the i-th insured subject and the total expected claims expense $E[L]$. When an extreme environmental event occurs, the payout for the i-th insured subject is

$$L_{i}=(1-d) \times \alpha_{i} \times V_{i} \times c$$

and when no event occurs the payout is 0, so the total expected claims expense is

$$E[L]=p \times \sum_{i=1}^{n}(1-d) \times \alpha_{i} \times V_{i} \times c$$

Here $\alpha_i$ is the loss ratio of the i-th insured subject: the proportion of the subject's total value damaged by the extreme environmental event, taking values between 0 (no loss) and 1 (total loss). When no extreme environmental event occurs (with probability $1-p$), the insurer pays nothing. When an event occurs (with probability p), the insurer first computes the loss value $\alpha_i \times V_i$, applies the deductible factor $(1-d)$, where d is the deductible percentage, and multiplies by the payout ratio c to obtain the final payout. Weighting the possible payouts of all insured subjects by the event probability p and summing gives the total expected claims expense.
Finally we give the formula for the reinsurance cost R, which accounts for the share of risk the insurer transfers to the reinsurer, together with operating costs such as labor, marketing, and administration:

$$R=q \times \sum_{i=1}^{n} V_{i} \times r \times s$$

where r is the comprehensive premium rate defined above, q is the proportion of risk ceded to the reinsurer, and s is the reinsurance rate, i.e., the percentage of premium the insurer pays to the reinsurance company. The total operating cost O is

$$O=n \times k$$

where k is the operating cost per insured subject (labor, marketing, and management costs apportioned to each subject) and n is the number of insured subjects.
Finally, the insurer's profit P is given by

$$P=I-E[L]-R-O$$

which expands to

$$P=\sum_{i=1}^{n} V_{i} \times r-p \times \sum_{i=1}^{n}(1-d) \times \alpha_{i} \times V_{i} \times c-q \times \sum_{i=1}^{n} V_{i} \times r \times s-n \times k$$
The calculations above are developed in detail, and the relationships among the various income and expenditure terms are clearly shown.
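As a sketch, the profit model above can be written directly in Python; the numerical inputs in the usage example are illustrative assumptions, not data from the paper.

```python
import numpy as np

def insurer_profit(V, alpha, r, p, d, c, q, s, k):
    """Profit P = I - E[L] - R - O for the model above.

    V     : values V_i of the insured objects
    alpha : loss ratios alpha_i in [0, 1]
    r     : comprehensive premium rate
    p     : probability of an extreme environmental event
    d     : deductible percentage
    c     : payout ratio
    q     : share of risk ceded to the reinsurer
    s     : reinsurance rate
    k     : operating cost per insured subject
    """
    V = np.asarray(V, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    I = (V * r).sum()                          # total premium income
    EL = p * ((1 - d) * alpha * V * c).sum()   # expected claims expense
    R = q * (V * r).sum() * s                  # reinsurance cost
    O = len(V) * k                             # total operating cost
    return I - EL - R - O
```

With hypothetical inputs, `insurer_profit([100, 200], [0.5, 0.5], r=0.05, p=0.1, d=0.2, c=1.0, q=0.3, s=0.2, k=1.0)` returns a profit of about 0.1, i.e., premiums barely cover expected claims, reinsurance, and operations.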
4.2 Risk assessment and pricing
1. Accuracy of data: A more accurate risk assessment is essential for determining p and r.
Rationale: An underestimation of extreme environmental risks with too small a value of p could result in insufficient premium income to cover claim expenses, as more claim events could occur without a corresponding increase in premiums collected.
A high r may reduce the insurer's business volume, since policyholders may switch to other insurers because of high rates; if r is too low, premium income may fail to cover the various costs and claims, leading to a lack of profitability.
2. Insurance clause design: rationalization of d and c.
Rationale: An increase in d could reduce the claims expenditure, but may make the product less attractive as policyholders would have to bear more of the loss themselves, which may lead to loss of business. Adjusting c would also require a combination of risk and market competition, as it would affect the liability assumed by the insurer and the attractiveness of the product to customers.
3. Reinsurance strategy: optimizing the choice of q and s.
Rationale: A higher q reduces the insurer's own claims risk but increases the cost of reinsurance; the optimal ceding ratio must be determined by the company's risk tolerance and profitability objectives. The level of s likewise affects reinsurance cost, and a balance must be found between risk transfer and cost control.
4. Cost control: reduce the unit operating cost k.
Rationale: By improving operational efficiency and optimizing management processes, we can reduce expenditures on manpower, marketing and management without compromising service quality, thereby increasing profitability. For example, adopting automated claims processes and optimizing sales channels can reduce operating costs.
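The pricing trade-offs above can be made concrete with a break-even premium rate. Setting $P=0$ in the profit formula of Section 4.1 (this derivation is ours, using the paper's notation) gives:

```latex
P = (1 - qs)\, r \sum_{i=1}^{n} V_i
    - p(1-d)c \sum_{i=1}^{n} \alpha_i V_i - nk = 0
\quad\Longrightarrow\quad
r^{*} = \frac{p(1-d)c \sum_{i=1}^{n} \alpha_i V_i + nk}
             {(1-qs)\sum_{i=1}^{n} V_i}
```

Any rate above $r^{*}$ is profitable in expectation; the discussion of d, c, q, s, and k above amounts to moving $r^{*}$ down so that a competitive market rate can still clear it.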
4.3 Practical application in Gansu and New Jersey
In order to verify the accuracy and validity of the model, we selected two regions on different continents that experience extreme weather events, Gansu in China and New Jersey in the United States, to demonstrate our model. Gansu is located in northwestern inland China; its dry climate and scarce precipitation make it highly susceptible to drought, while heavy rainfall and rising rivers during the summer rainy season give it a high flood index. It is also one of China's most earthquake-prone areas. New Jersey lies on the east coast of the U.S.; facing the Atlantic Ocean, it is exposed to hurricanes during the hurricane season (June-November), which bring natural disasters such as strong winds, heavy rain, and storm surges. During the winter months, New Jersey may also be hit by winter storms.
By querying the relevant data for the two regions and applying our insurance profitability and future valuation optimization model to them, we obtained the following charts:
Gansu Profit Forecast Line Chart
New Jersey Earnings Forecast Line Chart
4.4 Conclusion
From the charts we can see intuitively that Gansu's profitability index shows a decreasing trend over the next 5 years and will fall below 0 around 2028, meaning a loss, so an insurer should be cautious and selective before underwriting there. For New Jersey, most of the profitability indices over the next 5 years are below 0, which may mean sustained losses, so we do not recommend that insurers underwrite in that area. Our basic steps are as follows:
5. Enabling prediction of future risks
5.1 Future risk assessment model
As climate change increases the likelihood of severe weather and natural disasters, we developed a future risk assessment model to determine how best to position property insurance while ensuring the insurer's long-term health: a neural network that, after data preprocessing and feature screening, outputs predicted property losses for the areas to be insured.
First, the existing data are standardized and encoded. Continuous variables, such as earthquake magnitude and property value, are standardized using the Z-score

$$z=\frac{x-\mu}{\sigma}$$

where x is the original data value, $\mu$ is the mean of the data, and $\sigma$ is the standard deviation. This standardization puts different features on the same scale, which greatly assists model training. Categorical variables, such as building structure and topography, are one-hot encoded as $[1,0,0]$, $[0,1,0]$, $[0,0,1]$, turning categorical information into a numerical form the neural network can process.
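The preprocessing just described can be sketched as follows; the feature values are illustrative placeholders, not the paper's data.

```python
import numpy as np

def z_score(x):
    """Standardize a 1-D array of continuous values: (x - mu) / sigma."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def one_hot(labels, categories):
    """Encode each label as a 0/1 indicator vector over `categories`."""
    index = {c: i for i, c in enumerate(categories)}
    out = np.zeros((len(labels), len(categories)))
    for row, lab in enumerate(labels):
        out[row, index[lab]] = 1.0
    return out

magnitudes = z_score([4.1, 5.6, 6.3, 7.0])        # continuous feature
terrain = one_hot(["plain", "hill", "plain"],
                  ["plain", "hill", "mountain"])   # categorical feature
```

After standardization the continuous feature has mean 0 and standard deviation 1, and each categorical value becomes one of the indicator vectors listed above.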
We mainly consider 9 features spanning three categories for each region: extreme-environment intensity indicators, property-damage-related amounts, and geographic characteristics. The input layer therefore has 9 neurons, one per feature; it is constructed in Python with Keras, with the input shape specified as (9,).
To capture the complex relationships among the input features more effectively, understand the internal structure of the data more accurately, and thus improve the accuracy of property-loss prediction, we design a hidden layer. We start with a neuron count between one-half and twice the number of input neurons, initially 12, and adjust it according to training performance. To avoid the vanishing-gradient problem, we use the ReLU (Rectified Linear Unit) activation function, defined as
f(x)=max(0,x)f(x)=\max (0, x)
ReLU effectively alleviates vanishing gradients and speeds up training: its gradient is simple to compute in backpropagation, equal to 1 whenever the input is greater than 0, so gradients propagate effectively and the model converges faster. Finally, since we predict property loss as a continuous value, the output layer has a single neuron with a linear activation function that directly outputs the predicted loss.
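The paper builds this 9-12-1 network with Keras; as a minimal sketch, its forward pass can be written in plain NumPy (the weights below are random placeholders standing in for trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(9, 12)) * 0.1   # input -> hidden (12 neurons)
b1 = np.zeros(12)
W2 = rng.normal(size=(12, 1)) * 0.1   # hidden -> output (1 neuron)
b2 = np.zeros(1)

def relu(z):
    return np.maximum(0.0, z)         # f(x) = max(0, x)

def predict_loss(features):
    """features: array of shape (n_samples, 9) of standardized inputs."""
    hidden = relu(features @ W1 + b1) # hidden layer with ReLU activation
    return hidden @ W2 + b2           # linear output: predicted loss value

x = rng.normal(size=(3, 9))           # three hypothetical regions
y_hat = predict_loss(x)               # one predicted loss per region
```

Each input row of 9 standardized features maps through the 12-neuron ReLU layer to a single continuous loss prediction, matching the architecture described above.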
5.2 Practical application of the model
Similarly, we collected data on extreme weather and the damage it caused in Gansu and New Jersey and fed it into our model, producing the following damage prediction charts:
Figure 5-2-1 Loss Value Curve for Gansu Property
Figure 5-2-2 New Jersey Property Loss Value Curves
5.3 Conclusion
Analyzing the two charts above, it is easy to see that Gansu's losses over the next five years remain relatively stable, while New Jersey's losses over the next five years will be rising.
5.4 Model optimization and error analysis
5.4.1 Error analysis of the model
This alone is not enough to reach a realistic level of prediction, so we further upgraded and optimized the model. Moving through model training, model evaluation, and model optimization makes this neural network more complete and accurate, with better generalization ability and prediction accuracy.
For model training, we first divide the dataset into a 70% training set, a 15% validation set, and a 15% test set. We apply the mean squared error (MSE) loss,

$$MSE=\frac{1}{N} \sum_{i=1}^{N}\left(y_{i}-\hat{y}_{i}\right)^{2}$$

where $y_i$ is the true property loss value, $\hat{y}_i$ is the predicted value, and N is the number of samples. The loss is the average squared error between prediction and truth; smaller values mean more accurate predictions, and because large errors are squared, the model is guided to reduce them first when adjusting its parameters. For the optimizer we choose Adam, which combines the advantages of Adagrad and RMSProp and adapts the learning rate automatically. We also set a base learning rate, e.g., 0.001, which determines the step size of each parameter update.
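The 70/15/15 split described above can be sketched as follows; the dataset here is a random placeholder for the real extreme-weather features.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 9))         # 100 samples, 9 features
y = rng.normal(size=100)              # property-loss targets

idx = rng.permutation(len(X))         # shuffle before splitting
n_train = int(0.70 * len(X))
n_val = int(0.15 * len(X))

train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
X_test, y_test = X[test_idx], y[test_idx]
```

Shuffling first ensures each subset is representative; the validation set steers hyperparameter choices while the test set is held out for the final evaluation.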
5.4.2 Model evaluation
Assessment indicators:
1. Mean squared error (MSE):

$$MSE=\frac{1}{N} \sum_{i=1}^{N}\left(y_{i}-\hat{y}_{i}\right)^{2}$$
Rationale: MSE serves as one of the most commonly used metrics to assess the performance of regression models. It calculates the average of the squares of the errors between the predicted and true values. Because of the squaring of the error, larger errors are significantly magnified, which makes the MSE more sensitive to larger errors.
For example, if a sample's prediction differs greatly from its true value, its squared error makes up a larger share of the total, which pushes the model to reduce that large error first and thus improves overall prediction accuracy. In our application of predicting future risk, a smaller MSE between predicted and true loss values means a smaller average prediction error on the training or test data and better model performance. The MSE's disadvantage is that its unit is the square of the predicted value's unit, which can make the error less intuitive to interpret in some cases.
2. Root mean squared error (RMSE):

$$RMSE=\sqrt{MSE}$$

Similar to the MSE but expressed in the same units as the forecast, the RMSE reflects the magnitude of the prediction error more intuitively.
Rationale: RMSE is the square root of MSE. By taking the square root, the magnitude of the error is restored to the same magnitude as the predicted value, which makes the RMSE reflect the magnitude of the prediction error more intuitively than the MSE. At the same time, because its unit is the same as the predicted value, it is easier to understand and explain the model's error in our practical application.
3. Mean absolute error (MAE):

$$MAE=\frac{1}{N} \sum_{i=1}^{N}\left|y_{i}-\hat{y}_{i}\right|$$
It reflects the average degree of prediction error and does not square large errors as MSE does, focusing more on the size of the average error.
Rationale: The MAE is the average absolute error between predicted and true values. Unlike MSE and RMSE, it takes absolute values rather than squares, so it does not amplify larger errors. MAE therefore provides a more robust measure of error, indicating how far the model's predictions deviate from the truth on average, which in some cases matches our intuitive understanding of error more closely.
However, when it comes to the actual evaluation of a model, we usually do not rely on just one metric, but rather a combination of multiple metrics such as MSE, RMSE and MAE.
By calculating the mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE) between the model's predicted loss values for Gansu from 2014 to 2024 and the actual values we could find, we obtained error values of 0.0953, 0.3087, and 0.0953 respectively; taken together, these suggest that the model's predictions are fairly accurate.
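The three metrics can be computed as follows; the prediction and truth arrays are illustrative, not the paper's Gansu series.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average of squared prediction errors."""
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

def rmse(y_true, y_pred):
    """Root mean squared error: sqrt(MSE), in the units of the forecast."""
    return np.sqrt(mse(y_true, y_pred))

def mae(y_true, y_pred):
    """Mean absolute error: average of absolute prediction errors."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
```

By construction RMSE is always the square root of MSE, which is why the paper's reported RMSE of 0.3087 matches its MSE of 0.0953.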
5.4.3 Model optimization
1. For the setting of the learning rate, it is necessary to find a suitable value through several experiments.
2. Hyper-parameter tuning and regularization: To prevent overfitting, regularization terms can be added to the model. For example, use L1 or L2 regularization for the hidden layer, add regularization penalty term in the loss function to make the model weights smoother and in this way to constrain the complexity of the model. It also enables the model to perform better on both training data and unseen data.
6. Some suggestions for property owners who wish to change their insurance company's decision on whether to insure in the area:
1. First, property owners need to improve the security of their property. For example, installing advanced fire protection systems, anti-theft devices, and so on. For example, in fire-prone areas, installing an automatic sprinkler system reduces the risk of fire and makes the insurance company feel that the risk of insuring the property is manageable.
2. Secondly, property owners also need to carry out regular maintenance and repairs on property, such as houses, to ensure that they are in good condition and to reduce potential risks caused by problems such as building deterioration. For example, roof leaks should be repaired in a timely manner to prevent damage caused by rainwater leakage.
7. Decision-making on the level of protection to be provided to buildings
7.1 Hierarchical analysis to estimate building value
In order to determine whether a building needs to be protected, we collect data on the relevant characteristics of the building from four levels: cultural value, historical value, economic value and community importance, which comprehensively cover the important information of the building in different dimensions. The hierarchical structure is constructed through hierarchical analysis and the weights of the factors at each criterion level are determined. Combined with the fuzzy comprehensive evaluation method, a comprehensive evaluation is conducted based on the building's quantitative indexes and experts' opinions, and a decision on the level of protection that should be taken for the building is finally made.
7.1.1 Framework for the construction of the Analytic Hierarchy Process (AHP)
We have constructed a hierarchical structural model that clearly shows the relationship between the factors and the decision-making process by taking "decision-making on the degree of protection of a building" as the goal level, community importance, economic value, cultural value, and historical value as the criterion level, and different protection measures as the program level.
1. Preparation for hierarchical analysis
Layer the decision problem of the degree of building protection and establish a hierarchical model; quantify and comprehensively evaluate the importance of the criterion-level factors by constructing judgment matrices and calculating weight vectors.
2. Construct the judgment matrix
With the hierarchy clarified, compare the criterion-level factors pairwise on the 1-9 scale and assign values; the diagonal of the matrix is 1 and the upper and lower triangular elements are reciprocals, making the logical relationships among the factors explicit.
3. Calculate the weight vector
Calculate the maximum eigenvalue of the judgment matrix and its corresponding eigenvector to obtain the weights of the criterion-level factors and determine each factor's share in the decision. The eigenvalue method is usually used: let the judgment matrix be A, find the maximum eigenvalue $\lambda_{\max}$ of A and its corresponding eigenvector $\vec{X}$, and then normalize $\vec{X}$ to obtain the weight vector $\vec{W}$. The normalization formula is

$$W_{i}=\frac{X_{i}}{\sum_{j=1}^{n} X_{j}}$$

where $W_i$ is the weight of the i-th factor, $X_i$ is the i-th component of the eigenvector $\vec{X}$, and n is the order of the matrix.
7.1.2 Consistency test of the model
The consistency indicator CI and consistency ratio CR are calculated to test the consistency of the judgment matrix. The consistency of the judgment matrix is considered acceptable only if the CR is less than 0.1, otherwise the judgment matrix needs to be readjusted to ensure the accuracy and reliability of the weight calculation.
$$CI=\frac{\lambda_{\max }-n}{n-1}, \qquad CR=\frac{CI}{RI}$$

where $\lambda_{\max}$ is the maximum eigenvalue of the judgment matrix, n is the order of the matrix, and RI is the average random consistency index, whose value depends on n and can be obtained from a lookup table. Finally, the building is evaluated with the fuzzy comprehensive evaluation method to assist decision-making.
- Determine the evaluation set and the factor set: the factor set consists of the four criterion-level factors, and the evaluation set is {high, medium, low level of protection}, which serves as the standard for the fuzzy evaluation.
- Establish the fuzzy relationship matrix: according to the building's indicators and experts' opinions, determine the membership degree of each factor in each evaluation grade, forming a matrix that reflects the fuzzy relationship between the factors and the degree of protection.
- Comprehensive evaluation: multiply the weight vector by the fuzzy relationship matrix to obtain the comprehensive result, determine the degree of protection by the principle of maximum membership, and make a comprehensive decision combining the factor weights and the fuzzy evaluation.
7.2 Model Testing
In the following, we use a specific site, Dunhuang, as an example to make a decision about its level of protection.
1. Calculate the weight vector
We constructed a judgment matrix AA of the criterion layers (community importance, economic value, cultural value, and historical value) for it as follows:
Calculate the largest eigenvalue $\lambda_{\max}$ of this matrix and its corresponding eigenvector W, here via eigenvalue decomposition in mathematical software such as Matlab:

$$\lambda_{\max } \approx 4, \qquad W=\begin{pmatrix}0.25 \\ 0.75 \\ 1.25 \\ 1.75\end{pmatrix}$$
Normalize the eigenvector to obtain the weight vector:

$$\vec{W}=(0.0625,\ 0.1875,\ 0.3125,\ 0.4375)^{T}$$

2. Consistency test
Substituting the order of the judgment matrix, here $n=4$, into the CI formula gives

$$CI=\frac{4-4}{4-1}=0$$

Look up the random consistency index RI: for $n=4$, $RI=0.882$ (from the AHP table). Calculate the consistency ratio:

$$CR=\frac{CI}{RI}=\frac{0}{0.882}=0<0.1$$
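The consistency check can be reproduced numerically. Since the paper does not print the judgment matrix itself, the matrix below is a hypothetical consistent one reconstructed from the reported eigenvector (0.25, 0.75, 1.25, 1.75):

```python
import numpy as np

w = np.array([0.25, 0.75, 1.25, 1.75])
A = np.outer(w, 1.0 / w)        # A[i, j] = w_i / w_j, a consistent matrix

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]       # maximum eigenvalue
X = np.abs(eigvecs[:, k].real)  # principal eigenvector

W = X / X.sum()                 # normalized weight vector
n = A.shape[0]
CI = (lam_max - n) / (n - 1)    # consistency index
RI = 0.882                      # random index for n = 4 (paper's table)
CR = CI / RI                    # consistency ratio; CR < 0.1 is acceptable
```

For a perfectly consistent matrix, $\lambda_{\max}=n$ and CI = 0, which matches the paper's computation above.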
From this it can be judged that the consistency of the matrix is acceptable and the weight vector is valid.
3. Fuzzy comprehensive evaluation to assist decision-making
3.1 Determination of the set of evaluation factors and the set of evaluations
Set of evaluation factors: {community importance, economic value, cultural value, historical value}
Evaluation set: {high, medium, low level of protection}
3.2 Establishment of fuzzy relationship matrix
It is assumed that the following fuzzy relationship matrix RR is obtained from the expert scoring and the analysis of the quantitative building metrics:
The first row of this matrix represents the membership degrees of cultural value in the high, medium, and low protection levels as 0.7, 0.2, and 0.1, and so on for the other rows.
3.3 Comprehensive evaluation
Multiply the weight vector with the fuzzy relationship matrix to get the comprehensive evaluation result vector BB :
By the principle of maximum membership, 0.6555 is the largest component, so the building should receive a high degree of protection, which is consistent with the facts.
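The comprehensive evaluation step can be sketched as follows. Only the first row of R (0.7, 0.2, 0.1) is given in the paper, so the remaining rows and the factor ordering are illustrative assumptions, and the resulting B will not exactly match the paper's 0.6555:

```python
import numpy as np

W = np.array([0.0625, 0.1875, 0.3125, 0.4375])  # normalized AHP weights
R = np.array([
    [0.7, 0.2, 0.1],   # membership in {high, medium, low} protection
    [0.6, 0.3, 0.1],   # hypothetical row
    [0.7, 0.2, 0.1],   # hypothetical row
    [0.6, 0.3, 0.1],   # hypothetical row
])

B = W @ R                                  # comprehensive evaluation vector
levels = ["high", "medium", "low"]
decision = levels[int(np.argmax(B))]       # principle of maximum membership
```

Because the weights sum to 1 and each row of R sums to 1, B is itself a membership distribution over the three protection levels, and the largest component determines the decision.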
7.3 Verifying the extensiveness of the model
Through the above steps, we have completed the decision on Dunhuang's degree of protection, making full use of hierarchical analysis and the fuzzy comprehensive evaluation method.
Using the same approach, we validated the accuracy of our model by making protection level decisions for additional areas, such as the Dunhuang murals, the Sanxingdui, and the Louvre.
Figure 6-3-1 Four-Level Value Scores for Different Buildings
Figure 6-3-2 Decision Making for Different Levels of Building Protection
In summary, comparing the degrees of protection the model assigns to different buildings with the actual situation, we find that the two are basically consistent; the feasibility of our model is therefore high.
8. Model evaluation
8.1 Strengths
Advantage 1: Sensitivity analysis and robustness testing of our model reveal that the output values adapt well to changes in most data parameters and provide reliable predictions under different scenarios;
Advantage 2: Our model takes a large number of aspects into account and stays as close as possible to the actual situation, so the resulting forecasts are highly accurate;
Advantage 3: Our repeated error analysis of the future risk prediction model and the consistency testing of the conservation model both provide assurance of the feasibility and accuracy of our outputs;
Advantage 4: We processed the data scientifically, applying Z-score standardization to continuous variables and one-hot encoding to categorical variables, unifying the data scale and enabling the neural network to process the data effectively;
Advantage 5: The reasonable division of the dataset, the choice of the mean squared error loss and the Adam optimizer, and the tuning of the learning rate, hyperparameters, and regularization all improve the model's performance and generalization ability, making training and optimization effective.
8.2 Disadvantages
Disadvantage 1: Our weighting analysis compares community importance, economic value, cultural value, and historical value pairwise, but in practice these four aspects are not directly comparable;
Disadvantage 2: Because the future is uncertain and unpredictable, our model has difficulty anticipating sudden-onset natural disasters, and it likewise cannot estimate the damage from multiple natural disasters triggered by one another.