Pricing Model
The data used for modelling is sourced from DeFi Llama [2, 3].
The severity data (past hacks and funds lost) has been reliably reported since 2016. By the end of May 2023, 174 hacks had been recorded, and we use all 174 for the calculations. It is important to note that we are dealing with truncated data, specifically left-truncated data, where no information is recorded below the truncation point d. In our case the minimum recorded loss is d = 0.08 million USD.
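To make the truncation concrete: only losses exceeding d are observed, so any severity model must be fitted on the conditional distribution. The formulation below is the standard treatment of left-truncated data (see Klugman et al. [1]), not a Web3Shield-specific derivation.

```latex
% Observable (left-truncated) severity density for x > d:
f_{X \mid X > d}(x) = \frac{f_X(x)}{1 - F_X(d)}

% Corresponding log-likelihood for observed losses x_1, ..., x_n above d:
\ell(\theta) = \sum_{i=1}^{n} \log f_X(x_i; \theta) \;-\; n \log\bigl(1 - F_X(d; \theta)\bigr)
```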
For the frequency data, daily TVL records per bridge are available since October 2020, with the first TVL record for each bridge occurring at a different point in time. By the end of May 2023, 938 days had been recorded, producing a total of 16,634 daily TVL records. An extract of the data set is shown in the following figure.
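Independently of that extract, here is a minimal sketch of how such a panel could be loaded and summarised; the file name and column names (date, bridge, tvl_usd) are assumptions for illustration, not the actual schema.

```python
import pandas as pd

# Hypothetical sketch: shaping a DeFi Llama bridge TVL extract [3].
# File name and column names are assumptions for illustration only.
tvl = pd.read_csv("bridge_tvl_daily.csv", parse_dates=["date"])
# Expected shape: one row per bridge per day -> columns: date, bridge, tvl_usd.

# Each bridge enters the panel at a different date, so records per bridge vary.
records_per_bridge = tvl.groupby("bridge")["date"].agg(["min", "max", "count"])
print(records_per_bridge.sort_values("count", ascending=False).head())

# Daily exposure across all bridges (total TVL at risk on each day).
daily_exposure = tvl.groupby("date")["tvl_usd"].sum()
print(f"{tvl['bridge'].nunique()} bridges, {len(tvl)} daily TVL records")
```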
Web3Shield uses an actuarially based pricing model built on a collective risk model that examines aggregate losses. Aggregate loss distributions play an important role in the pricing of insurance coverage; they are calculated in terms of the underlying severity and frequency [1, 4].
Probability distributions are obtained for the severity of individual losses and for their frequency. Using these two models, we carry out the necessary calculations to obtain the distribution of the aggregate loss S.
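For reference, the collective risk model from the actuarial literature [1] defines the aggregate loss as a random sum; the moment formulas below follow directly and are textbook results, not Web3Shield-specific formulas.

```latex
% Aggregate loss: N losses in the period, with i.i.d. severities X_i independent of N.
S = \sum_{i=1}^{N} X_i

% First two moments of S in terms of frequency and severity:
\mathbb{E}[S] = \mathbb{E}[N]\,\mathbb{E}[X]
\operatorname{Var}(S) = \mathbb{E}[N]\,\operatorname{Var}(X) + \operatorname{Var}(N)\,\bigl(\mathbb{E}[X]\bigr)^2
```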
From the frequency modelling we obtain the probability of a given number of losses occurring during a specific period. This probability is calculated as a total probability based on TVL and past bridge hack events.
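One way to read this, as a sketch only (the bucketing scheme, the quartile edges, and the placeholder data below are our assumptions, not the proprietary calculation): estimate the hack probability conditional on a TVL bucket, then mix over buckets via the law of total probability.

```python
import numpy as np

# Hypothetical inputs: daily total TVL (USD) and a 0/1 flag for hack days.
# Real data would come from DeFi Llama [2, 3]; values here are placeholders.
rng = np.random.default_rng(0)
tvl_usd = rng.lognormal(mean=23, sigma=0.4, size=938)  # 938 observed days
hack_day = rng.random(938) < 0.03                      # placeholder flags

# Law of total probability over TVL buckets:
# P(hack) = sum_b P(hack | TVL in bucket b) * P(TVL in bucket b)
edges = np.quantile(tvl_usd, [0.0, 0.25, 0.5, 0.75, 1.0])
bucket = np.clip(np.searchsorted(edges, tvl_usd, side="right") - 1, 0, 3)

p_hack = 0.0
for b in range(4):
    in_b = bucket == b
    p_hack += hack_day[in_b].mean() * in_b.mean()
print(f"Daily hack probability (mixed over TVL buckets): {p_hack:.4f}")
```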
The main challenge is forecasting the expected future claims experience; to that end, we study the losses from past years. Severity modelling develops a model for the distribution of loss amounts based on the data. We consider an inventory of standard parametric distributions that could be used to approximate the loss amount distribution (see Klugman et al. [1]).
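As an illustrative sketch of such a fit, the code below maximises the left-truncated log-likelihood from the formula above for one candidate family; the lognormal choice and the placeholder losses are our assumptions, not necessarily the families Web3Shield selected.

```python
import numpy as np
from scipy import stats, optimize

D = 0.08  # truncation point: minimum recorded loss, in millions USD

# Placeholder losses above D, in millions USD; real data: DeFi Llama hacks [2].
rng = np.random.default_rng(1)
losses = np.exp(rng.normal(1.5, 1.2, size=174))
losses = losses[losses > D]

def neg_loglik(params, x):
    """Negative log-likelihood of a lognormal left-truncated at D."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    # Conditional density f(x) / (1 - F(D)) for x > D.
    return -(np.sum(dist.logpdf(x)) - len(x) * np.log(dist.sf(D)))

res = optimize.minimize(neg_loglik, x0=[1.0, 1.0], args=(losses,),
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x
print(f"Fitted lognormal: mu={mu_hat:.3f}, sigma={sigma_hat:.3f}")
```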
We then proceed with a formal goodness-of-fit test: a statistical procedure that describes how well a distribution fits a set of observations by quantifying the compatibility between each estimated theoretical distribution and the empirical distribution of the sample data. Finally, we proceed with model selection.
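Continuing the sketch above, a Kolmogorov-Smirnov test is one common way to quantify that compatibility; using it here, rather than, say, the likelihood ratio test cited in [5], is our choice for illustration.

```python
from scipy import stats

# Continuing the fitting sketch: a Kolmogorov-Smirnov goodness-of-fit test
# comparing the observed losses against the fitted, truncated lognormal.
# (With parameters estimated from the same data, the plain KS p-value is only
# approximate; a bootstrap or a likelihood-ratio-based comparison [5] is the
# more rigorous route.)
fitted = stats.lognorm(s=sigma_hat, scale=np.exp(mu_hat))

def truncated_cdf(x):
    # CDF of the fitted severity conditional on exceeding D.
    return (fitted.cdf(x) - fitted.cdf(D)) / fitted.sf(D)

ks_stat, p_value = stats.kstest(losses, truncated_cdf)
print(f"KS statistic={ks_stat:.3f}, approximate p-value={p_value:.3f}")
```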
Once the frequency and severity models have been estimated, Web3Shield formulates a base risk premium from these calculations.
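For orientation only (the actual loading and adjustments are confidential, per the note below), a textbook way to turn the aggregate model into a premium is the expected value principle, where a safety loading θ > 0 is applied to the expected aggregate loss [1]:

```latex
% Expected value premium principle (illustrative, not Web3Shield's exact formula):
\Pi = (1 + \theta)\,\mathbb{E}[S] = (1 + \theta)\,\mathbb{E}[N]\,\mathbb{E}[X]
```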
Specific calculations and intricate details of our pricing model have been kept confidential for proprietary reasons and to prevent reverse engineering.
[1] Klugman, S.A., Panjer, H.H. and Willmot, G.E. (2012) Loss models: From data to decisions. Somerset: John Wiley and Sons.
[2] Total value hacked (USD) (no date) DeFiLlama. Available at: https://defillama.com/hacks (Accessed: 25 May 2023).
[3] Bridge TVL Rankings (no date) DeFiLlama. Available at: https://defillama.com/protocols/Bridge (Accessed: 23 May 2023).
[4] Albright, Robert. (2020). Developing Aggregate Loss Models For Obscure Insurance Exposures. Retrieved from the University of Minnesota Digital Conservancy, https://hdl.handle.net/11299/217111.
[5] Taboga, M. (no date) Likelihood ratio test. Available at: https://www.statlect.com/fundamentals-of-statistics/likelihood-ratio-test (Accessed: 26 June 2023).