Normality and Its Verification as a Basic Prerequisite for the Application of VaR

If we start to deal with investing or trading in the financial markets, sooner or later we encounter the topic of risk. Risk is one of the basic input variables in assessing the suitability and profitability of an investment, and therefore a number of procedures and methods exist for its quantification. In our article, we address the quantification of portfolio risk through the VaR method and the verification of its basic assumption, namely the normal distribution of values. The aim of the article is to compile an overview of procedures and methods for verifying the normal distribution and to compare their specifics. In the first chapter of our paper, we focus on the definition of risk and its types. The second chapter focuses on methods of risk quantification and describes the individual methods. In the third chapter, we describe in detail the possibilities of verifying the normality of the distribution of values. In the last chapter, we briefly interpret the information obtained and identify the advantages, disadvantages and other specifics of the individual methods of verifying the normal distribution. We consider the aim of the article to be fulfilled and we believe that it will be a valuable contribution to this area of research.


Introduction
In today's global world of finance, with the rapid development of modern technologies and investment approaches, the issue of risk identification and measurement remains relevant. Even today, risk is one of the main input factors in assessing the suitability, adequacy and profitability of investments and transactions worldwide, and there are therefore many procedures and methods for its quantification. In our article, we focus on one of the most widely used tools for measuring portfolio risk, namely Value at Risk. The basic premise of the application of this method is the assumption of a normal distribution of the values, and this assumption of normality is itself a much discussed aspect of the method.

Methodology
The work uses the basic methods of formal logic, namely analysis and synthesis of theoretical knowledge, on the basis of which we proceed from general information to identifying the specifics of the individual methods of verifying a normal distribution. We describe these methods in detail together with their mathematical formulas and identify the specifics necessary for their use. We also examine in detail the Value at Risk method, which is currently one of the most widely used methods for calculating portfolio risk. The aim of the article is to compile an overview of procedures and methods for verifying the normal distribution and to compare their specifics.

Risk
After the very fundamental macroeconomic changes identified in the 1970s, two basic trends emerged which determined the increasing level of competition and uncertainty. The first is the deregulation of economic policy with a focus on market-oriented instruments; the second is globalization, which has forced companies to face the true nature of global competition. These trends have caused individuals and financial institutions to face a wide range of risks today. We can define financial risk as the exposure of companies to possible losses on the capital and money markets due to possible shifts in interest rates, changes in the prices of securities held, losses from foreign exchange transactions, insolvency of a bond issuer and many others. The position that companies risk losing can be carefully optimized with many financial market instruments. Therefore, manufacturing companies and non-financial service providers focus on business risk management. However, when looking at financial market institutions such as banks, insurance companies and funds, we see that they focus on managing and predicting financial risks, arranging protection against them and advising on them. The aim and main interest of such institutions is to measure risks and subsequently control the price of these risks. Today, there is no uniform doctrine of financial risk, which implies that the terminology is not precisely specified; nevertheless, in the following subchapter we present commonly used terminology in risk management [5,7].

Risk typology
In general, financial risk is classified into the following relatively general groups.

We define market risk as the risk of a loss in the value of the held portfolio caused by sharp changes in the prices of the individual assets held; its existence is linked to the valuation of assets on the market. The factor that most significantly affects the amount of market risk is the volatility of assets, in other words, the size of price movements over time around their average or around the reference price of the assets owned in the portfolio.

Credit risk arises when a responsible counterparty is unwilling or unable to meet its obligations. The amount of risk is then defined by the amount of exposure of the claim that the relevant counterparty should have paid. More generally, we can also define credit risk as the potential loss of market value suffered by a company providing funds to counterparties due to a credit event. Such a credit event can be understood as a change in the counterparty's ability to meet its obligations, which is usually reflected in a change in the market value of such loans, a possible change in the counterparty's rating, or a change in the probability of client default.

Liquidity is the ability to quickly monetize an asset so that there is no significant difference between the asset's current market price and the price actually obtained for it. In financial institutions, we distinguish several types of liquidity risk. The most serious is product/market liquidity risk, which arises from the inability of a bank or other institution to sell its assets due to the high volumes held in its portfolio.

We define operational risk, or the risk of direct or indirect losses, as risk that arises due to inadequate or incorrect internal processes, systems or human behaviour, or due to external events.
In the past, operational risks were calculated on the basis of expert considerations and there were no comprehensive quantitative tools to measure these risks. A significant step forward in the quantification of operational risks and their prevention was the progress in the field of computer technology, which enabled the effective recording of these events and their evaluation. Legislative risk is closely linked to credit risk and operational risk. This type of risk arises when a counterparty that incurs a loss in a particular transaction seeks to avoid paying for the transaction by finding an error in the contract on a legal basis. Legislative risk also arises when there are indications that a counterparty has no competence to enter into an agreed transaction [11,13,15].

Risk measurement
The increasing volatility of exchange rates, interest rates and commodity prices has created the need for new financial instruments and analytical tools for risk management. When we begin to study risk management in detail, we identify two main streams of this development. The first basic stream of development in this area is the development of information technologies. The second is the development of financial theory and the emergence of new products to mitigate risks or to eliminate volatility from the owned portfolio [12,16].

Value at Risk
The definition, although not in a strictly mathematical spirit, can be drawn directly from the 1996 RiskMetrics Technical Document: "Value-at-risk measures the maximum potential change in the value of a portfolio of financial instruments with a given probability within a predetermined time horizon. VaR answers the question: how much can I lose with x% probability in a given time horizon?" [6].
For the purposes of our article, we will use the formulas for calculating normal linear VaR and for scaling VaR published by Carol Alexander in 2008 in her book "Market Risk Analysis Vol. IV: Value-at-Risk Models" [1].

Normal linear VaR
The normal linear VaR at confidence level 1 − α over a given horizon is

VaR_α = Φ^(−1)(1 − α)·σ − μ,  (1)

where Φ is the standard normal distribution function and μ and σ are the mean and standard deviation of the portfolio return over that horizon. Since x_α = −VaR_α by definition, and Φ^(−1)(α) = −Φ^(−1)(1 − α) by the symmetry of the standard normal distribution, Carol Alexander (2008) substitutes these into the defining condition for the α-quantile of the return distribution and obtains this analytic formula for the VaR of a portfolio with i.i.d. normal returns [1,8,9].
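As a minimal sketch, formula (1) can be evaluated with the Python standard library alone; the parameter values below are illustrative and are not taken from the article:

```python
from statistics import NormalDist

def normal_linear_var(mu: float, sigma: float, alpha: float) -> float:
    """Normal linear VaR as a fraction of portfolio value:
    VaR_alpha = Phi^(-1)(1 - alpha) * sigma - mu."""
    return NormalDist().inv_cdf(1 - alpha) * sigma - mu

# Illustrative daily parameters: mean return 0.05%, volatility 1%
var_99 = normal_linear_var(mu=0.0005, sigma=0.01, alpha=0.01)
# The 1-day 99% VaR comes out at roughly 2.28% of portfolio value
```

Note that VaR is reported as a positive loss figure, which is why the mean return enters with a negative sign.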

Scaling VaR
VaR is often measured over a short risk horizon, such as 1-day VaR, and then scaled to represent VaR over a longer risk horizon. How a VaR estimated over one risk horizon should be scaled to a VaR measured over a different risk horizon is described in detail by Carol Alexander as follows. We start from the formula for normal linear VaR applied to daily returns,

VaR_{1,α} = Φ^(−1)(1 − α)·σ1 − μ1,  (2)

where μ1 and σ1 are the expectation and standard deviation of the normally distributed daily returns. Using a log approximation to the daily discounted return, we can approximate the h-day log return by the ordinary h-day return and deduce that it is (approximately) normally distributed. The h-day VaR is then given by the approximation

VaR_{h,α} ≈ Φ^(−1)(1 − α)·√h·σ1 − h·μ1  [1].
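The square-root-of-time scaling above can be sketched in a few lines of standard-library Python (again with purely illustrative parameter values):

```python
from math import sqrt
from statistics import NormalDist

def scaled_normal_var(mu1: float, sigma1: float, alpha: float, h: int) -> float:
    """h-day normal linear VaR scaled from daily parameters:
    VaR_{h,alpha} ~ Phi^(-1)(1 - alpha) * sqrt(h) * sigma1 - h * mu1.
    The volatility scales with sqrt(h), the mean scales linearly with h."""
    return NormalDist().inv_cdf(1 - alpha) * sqrt(h) * sigma1 - h * mu1

# Scale an illustrative 1-day 99% VaR to a 10-day risk horizon
var_10d = scaled_normal_var(mu1=0.0005, sigma1=0.01, alpha=0.01, h=10)
```

The sketch makes the i.i.d. assumption behind the scaling visible: without independent, identically distributed daily returns, the √h rule does not hold.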

Normal distribution
Each quantitative method has its specific limitations and assumptions that are necessary for the very possibility and correctness of its application. One of the basic assumptions for applying the VaR calculation described above to a specific data set is that the data must come from a normal distribution of values. The reason for this assumption is that the calculation itself is based on the distribution function of the normal distribution; if this assumption were not satisfied, the results obtained by the VaR calculation would have no explanatory power. This is why the existence of a normal distribution of values must be verified before the actual application of the method [10,14,17,18].

Pearson's test
Pearson's test is among the best-known tests. It is based on comparing the observed (empirical) and expected (theoretical) frequencies, and it is generally recommended for random samples of large size. The test can be briefly described as follows. We have a random sample with an unknown distribution function F(x) (the empirical distribution) and a known distribution function F0(x) (the theoretical normal distribution). At a preselected level of significance α, we test the null hypothesis H0: F(x) = F0(x) against the alternative hypothesis H1: F(x) ≠ F0(x). The test statistic is

X^2 = Σ_{i=1}^{k} (O_i − E_i)^2 / E_i,  (3)

where O_i is the observed and E_i = n·p_i the expected frequency in the i-th of k classes. We reject the hypothesis H0 at significance level α if X^2 exceeds the critical value of the chi-square distribution, X^2 > χ^2_{1−α}(k − r − 1), where r is the number of parameters estimated from the sample. Prerequisites for the correct use of the Pearson test are the following assumptions [3]: n·p_i ≥ 4 and k − r − 1 ≥ 6.
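A small sketch of the Pearson test in pure Python, using k = 10 equiprobable classes under a normal distribution fitted to the sample; the data are simulated purely for illustration, and the critical value quoted in the comment is the standard tabulated chi-square quantile:

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]

# Fit the theoretical normal distribution and split its support into
# k equiprobable classes, so that E_i = n * p_i = n / k for every class
mu, sigma = mean(data), stdev(data)
fitted = NormalDist(mu, sigma)
k = 10
edges = [fitted.inv_cdf(i / k) for i in range(1, k)]

observed = [0] * k
for x in data:
    observed[sum(1 for e in edges if e < x)] += 1  # class index of x

expected = len(data) / k  # here n * p_i = 50 >= 4, as required
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# Reject H0 at alpha = 0.05 if chi2 exceeds the chi-square critical value
# with k - r - 1 = 7 degrees of freedom (r = 2 fitted parameters),
# i.e. roughly 14.07
normality_rejected = chi2 > 14.07
```

With equiprobable classes the expected frequencies are equal by construction, which makes the prerequisite n·p_i ≥ 4 easy to check in advance.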

Shapiro Wilk test
The Shapiro-Wilk test is based on determining whether the points of a constructed quantile-quantile (Q-Q) plot deviate significantly from the regression line fitted through these points. We test the null hypothesis H0: F(x) = F0(x) at a preselected significance level α against the alternative hypothesis H1: F(x) ≠ F0(x). The Shapiro-Wilk test is mainly used for samples of smaller size, where the number of observations does not exceed 50. The test statistic is

W = (Σ_{i=1}^{n} a_i·x_(i))^2 / Σ_{i=1}^{n} (x_i − x̄)^2,  (4)

where x_(i) are the order statistics of the sample and a_i are tabulated coefficients. The closer the measured value of W is to 1, the better the agreement between the theoretical and empirical distribution. We reject the hypothesis H0 of a normal distribution of the population from which the random sample was drawn at significance level α if W falls below the tabulated critical value W_α(n) [2].
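Because the coefficients a_i are tabulated, the test is in practice almost always run through a library routine. A minimal sketch, assuming SciPy is available (its `scipy.stats.shapiro` implements this test), on simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0, scale=1.0, size=50)  # small sample, n <= 50

# shapiro returns the W statistic and the p-value of H0: normality
w_stat, p_value = stats.shapiro(sample)
normality_rejected = p_value < 0.05  # reject H0 at alpha = 0.05
```

Library implementations report a p-value rather than the critical value W_α(n), so the decision rule becomes p < α instead of W < W_α(n); the two are equivalent.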

Kolmogorov Smirnov test
The Kolmogorov-Smirnov test is based on a comparison of the distribution function of the assumed (theoretical) distribution of the continuous type with the empirical (sample) distribution function. It should only be used to verify hypotheses that determine the theoretical distribution function unambiguously, i.e. not only in terms of shape but also in terms of parameter values. Unlike Pearson's test, this test can also be applied to random samples of relatively small size. At a preselected significance level α, we test the null hypothesis H0: F(x) = F0(x) against the alternative hypothesis H1: F(x) ≠ F0(x), where F is the empirical distribution function of the sample and F0 is the considered theoretical distribution function. The empirical distribution function is

F_n(x) = (1/n)·(the number of observations x_i ≤ x).

The test statistic D is defined as the maximum distance between the values of F_n and F0:

D = max_x |F_n(x) − F0(x)|.

The hypothesis H0 is rejected at significance level α if the calculated value of D exceeds the critical value, D > D_α(n) [4].
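The statistic D is straightforward to compute directly. A small sketch in pure Python against a fully specified standard normal F0 (the helper name `ks_statistic` and the sample values are ours, for illustration only):

```python
from statistics import NormalDist

def ks_statistic(sample, cdf):
    """D = max_x |F_n(x) - F0(x)| for the empirical distribution
    function F_n of the sample and a theoretical CDF F0."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f0 = cdf(x)
        # F_n jumps from (i - 1)/n to i/n at x; check both sides of the jump
        d = max(d, abs(i / n - f0), abs((i - 1) / n - f0))
    return d

# Fully specified theoretical distribution: standard normal N(0, 1)
d = ks_statistic([-0.3, 0.1, 0.4, 1.2, -1.5], NormalDist(0.0, 1.0).cdf)
# Reject H0 at level alpha if d exceeds the tabulated critical value D_alpha(n)
```

Checking both sides of each jump of F_n matters because the maximum deviation can occur just before an observation as well as at it.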

Results
Through the tests for verifying a normal distribution, we verify the assumption that the data (the sample) come from a population with a normal distribution. Based on the theoretical properties of the individual normality tests, we can identify the following specifics, advantages and disadvantages of the individual tests.

Conclusion
The aim of the article was to compile an overview of procedures and methods for verifying the normal distribution and to compare their specifics. In the first chapter of our paper, we focused on the definition of risk and its types. The second chapter focused on methods of risk quantification and described the individual methods. In the third chapter, we described in detail the possibilities of verifying the normality of the distribution of values. In the last chapter, we briefly interpreted the information obtained and identified the advantages, disadvantages and other specifics of the individual methods of verifying the normal distribution. We consider the aim of the article to be fulfilled and we believe that it will be a valuable contribution to this area of research.

Discussion
Our paper dealt with the issue of testing and verifying the normality of the distribution as a basic prerequisite for the application of the VaR method. When considering the application of the Value at Risk method, we must also ensure that all the conditions necessary for the proper functioning of the method are met. The reason for the need to verify the normality of the distribution in the Value at Risk method is that the VaR calculation itself is based on the distribution function of the normal distribution. In practice, we cannot successfully apply the results obtained by this method if the set of values for which we calculate VaR does not have a normal distribution. Moreover, when it comes to identifying and verifying the normal distribution, it is not always clear how to do so. There are a number of procedures and tests that, in various ways, directly or indirectly try to identify whether or not the data set we are examining comes from a normal distribution. These tests are selected based on the specifics of the examined sample, most often its size. We have given the specifics of the individual methods in the previous chapter, but in the real use of these methods it is necessary to proceed mainly from the structure and purpose of the normality verification; the computational complexity itself is nowadays handled by a number of powerful statistical software packages.