
Expected Shortfall Estimation and Backtesting

This example shows how to perform estimation and backtesting of expected shortfall models.

Value-at-risk (VaR) and expected shortfall (ES) must be estimated together because the ES estimate depends on the VaR estimate. Using historical data, this example estimates VaR and ES over a test window, using both historical and parametric VaR approaches. The parametric VaR is calculated under the assumption of normal and t distributions.

This example runs the ES backtests supported in the esbacktest, esbacktestbysim, and esbacktestbydistribution functionality to assess the performance of the ES models in the test window.

The esbacktest object does not require any distribution information. Like the varbacktest object, the esbacktest object takes only test data as input. The inputs to esbacktest include portfolio data, VaR data with the corresponding VaR level, and also the ES data, since the ES is what is backtested. Like varbacktest, esbacktest runs tests for a single portfolio, but can backtest multiple models and multiple VaR levels at once. The esbacktest object uses precomputed tables of critical values to determine whether the models should be rejected. These table-based tests can be applied as approximate tests for any VaR model. In this example, they are applied to backtest historical and parametric VaR models, but they could also be used for other VaR approaches such as Monte Carlo or extreme-value models.

In contrast, the esbacktestbysim and esbacktestbydistribution objects require the distribution information, namely, the distribution name (normal or t) and the distribution parameters for each day in the test window. esbacktestbysim and esbacktestbydistribution can backtest only one model at a time because they are linked to a particular distribution, although you can still backtest multiple VaR levels at once. The esbacktestbysim object implements simulation-based tests and uses the provided distribution information to run simulations that determine the critical values. The esbacktestbydistribution object implements tests whose critical values are derived from either a large-sample approximation or a simulation (finite sample). The conditional test in esbacktestbydistribution tests for independence over time, to assess whether there is evidence of autocorrelation in the series of tail losses. All other tests are severity tests that assess whether the magnitude of the tail losses is consistent with the model predictions. Both the esbacktestbysim and esbacktestbydistribution objects support normal and t distributions. These tests can be used for any model where the underlying distribution of portfolio outcomes is normal or t, such as exponentially weighted moving average (EWMA), delta-gamma, or generalized autoregressive conditional heteroskedasticity (GARCH) models.

For additional information on the ES backtesting methodology, see esbacktest, esbacktestbysim, and esbacktestbydistribution; also see [], [], [], and [] in the references.

Estimate VaR and ES

The data set used in this example contains historical data for the S&P index spanning approximately 10 years, from the middle of 1993 through the middle of 2003. The estimation window size is defined as 250 days, so that a full year of data is used to estimate both the historical VaR and the volatility. The test window in this example runs from the beginning of 1995 through the end of 2002.

Throughout this example, a VaR confidence level of 97.5% is used, as required by the Fundamental Review of the Trading Book (FRTB) regulation; see [].

load varexampledata.mat
returns = tick2ret(sp);
datereturns = dates(2:end);
samplesize = length(returns);
testwindowstart = find(year(datereturns)==1995,1);
testwindowend = find(year(datereturns)==2002,1,'last');
testwindow = testwindowstart:testwindowend;
estimationwindowsize = 250;
datestest = datereturns(testwindow);
returnstest = returns(testwindow);
varlevel = 0.975;

The historical VaR is a non-parametric approach to estimating the VaR and ES from historical data over an estimation window. The VaR is a percentile, and there are alternative ways to estimate the percentile of a distribution based on a finite sample. One common approach is to use the prctile function. An alternative approach is to sort the data and determine a cut point based on the sample size and the VaR confidence level. Similarly, there are alternative approaches to estimating the ES based on a finite sample.

The hhistoricalvares local function at the bottom of this example uses a finite-sample approach for the estimation of VaR and ES following the methodology described in []. In a finite sample, the number of observations below the VaR may not match the total tail probability corresponding to the VaR level. For example, for 100 observations and a VaR level of 97.5%, there are 2 tail observations, which is 2% of the sample, whereas the desired tail probability is 2.5%. It can be even worse for samples with repeated observed values; for example, if the second and third sorted values were the same, both equal to the VaR, then only the smallest observed value in the sample would be less than the VaR, and that is 1% of the sample, not the desired 2.5%. The method implemented in hhistoricalvares makes a correction so that the tail probability is always consistent with the VaR level; see [] for details.
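The hhistoricalvares code itself is not listed here, but the correction it is described to make can be sketched as follows (in Python for concreteness; the function name and the exact weighting convention are assumptions, not the toolbox implementation). The idea is to give the boundary observation a fractional weight so that the tail mass used for the ES always equals 1 minus the VaR level.

```python
import math
import numpy as np

def historical_var_es(returns, var_level):
    """Finite-sample historical VaR/ES with a tail-probability correction
    (a sketch in the spirit of hhistoricalvares, not the actual helper)."""
    tail_prob = 1.0 - var_level                  # e.g. 0.025 at the 97.5% level
    x = np.sort(np.asarray(returns, dtype=float))  # ascending: worst losses first
    n = len(x)
    k = tail_prob * n                            # possibly fractional tail count
    j = math.ceil(k)                             # boundary observation (1-based)
    var = -x[j - 1]                              # VaR from the boundary observation
    # ES: average of the worst k observations, boundary weighted fractionally
    tail_sum = x[:j - 1].sum() + (k - (j - 1)) * x[j - 1]
    es = -tail_sum / k
    return var, es
```

With 100 observations and a 97.5% VaR level, the two worst returns get full weight and the third-worst gets weight 0.5, so the ES averages exactly 2.5% of the sample's tail mass.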

var_hist = zeros(length(testwindow),1);
es_hist = zeros(length(testwindow),1);
for t = testwindow
   
   i = t - testwindowstart + 1;
   estimationwindow = t-estimationwindowsize:t-1;
   
   [var_hist(i),es_hist(i)] = hhistoricalvares(returns(estimationwindow),varlevel);
   
end

The following plot shows the daily returns, and the VaR and ES estimated with the historical method.

figure;
plot(datestest,returnstest,datestest,-var_hist,datestest,-es_hist)
legend('returns','var','es','location','southeast')
title('historical var and es')
grid on


For the parametric models, the volatility of the returns must be computed. Given the volatility, the VaR and ES can be computed analytically.

A zero mean is assumed in this example, but the mean can be estimated in a similar way.

For the normal distribution, the estimated volatility is used directly to get the VaR and ES. For the t location-scale distribution, the scale parameter is computed from the estimated volatility and the degrees of freedom.

The hnormalvares and htvares local functions take the distribution parameters as inputs (which can be passed as arrays), and return the VaR and ES. These local functions use the analytical expressions for the VaR and ES of the normal and t location-scale distributions, respectively; see [] for details.
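As a concreteness check on the analytical expressions that hnormalvares is described to use (the helper code is not shown here), the normal case can be sketched with only the Python standard library; the function name is ours. The t location-scale analogue is noted in the comments.

```python
from statistics import NormalDist

def normal_var_es(mu, sigma, var_level):
    # VaR and ES for returns ~ Normal(mu, sigma), reported as positive losses:
    #   VaR = -mu + sigma * z,  where z is the normal quantile at var_level
    #   ES  = -mu + sigma * pdf(z) / (1 - var_level)
    # (For a t location-scale distribution with nu degrees of freedom, z becomes
    # the t quantile and the ES factor becomes pdf_t(z)/(1-p) * (nu+z^2)/(nu-1).)
    z = NormalDist().inv_cdf(var_level)
    var = -mu + sigma * z
    es = -mu + sigma * NormalDist().pdf(z) / (1 - var_level)
    return var, es
```

At the 97.5% level with mu = 0 and sigma = 1, this gives the familiar values VaR ≈ 1.96 and ES ≈ 2.34.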

% estimate volatility over the test window
volatility = zeros(length(testwindow),1);
for t = testwindow
   
   i = t - testwindowstart + 1;
   estimationwindow = t-estimationwindowsize:t-1;
   
   volatility(i) = std(returns(estimationwindow));
   
end
% mu=0 in this example
mu = 0;
% sigma (standard deviation parameter) for normal distribution = volatility
sigmanormal = volatility;
% sigma (scale parameter) for t distribution = volatility * sqrt((dof-2)/dof)
sigmat10 = volatility*sqrt((10-2)/10);
sigmat5 = volatility*sqrt((5-2)/5);
% estimate var and es, normal
[var_normal,es_normal] = hnormalvares(mu,sigmanormal,varlevel);
% estimate var and es, t with 10 and 5 degrees of freedom
[var_t10,es_t10] = htvares(10,mu,sigmat10,varlevel);
[var_t5,es_t5] = htvares(5,mu,sigmat5,varlevel);

The following plot shows the daily returns, and the VaR and ES estimated with the normal method.

figure;
plot(datestest,returnstest,datestest,-var_normal,datestest,-es_normal)
legend('returns','var','es','location','southeast')
title('normal var and es')
grid on


For the parametric approach, the same steps can be used to estimate the VaR and ES for alternative models, such as EWMA, delta-gamma approximations, and GARCH models. In all of these parametric approaches, a volatility is estimated every day, either from an EWMA update, from a delta-gamma approximation, or as the conditional volatility of a GARCH model. The volatility can then be used as above to get the VaR and ES estimates for either normal or t location-scale distributions.
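For instance, the daily EWMA variance update, the first of the alternatives just listed, can be sketched as below (Python; the RiskMetrics-style decay lam = 0.94 is a common but assumed choice, and the function name is ours). The square roots of these values would play the role of the rolling volatility estimates in the parametric formulas above.

```python
def ewma_variance(returns, lam=0.94, init=0.0):
    """Recursive EWMA variance: s2_t = lam * s2_{t-1} + (1 - lam) * r_t^2."""
    s2 = init
    out = []
    for r in returns:
        s2 = lam * s2 + (1 - lam) * r * r   # one update per daily return
        out.append(s2)
    return out
```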

ES Backtest Without Distribution Information

The esbacktest object offers two backtests for ES models. Both tests use the unconditional test statistic proposed by Acerbi and Szekely in [], given by

Z_uncond = 1 / (N * P_VaR) * Σ_{t=1}^{N} (X_t * I_t / ES_t) + 1

where

  • N is the number of time periods in the test window.

  • X_t is the portfolio outcome, that is, the portfolio return or portfolio profit and loss, for period t.

  • P_VaR is the probability of a VaR failure, defined as 1 - VaR level.

  • ES_t is the estimated expected shortfall for period t.

  • I_t is the VaR failure indicator for period t, with a value of 1 if X_t < -VaR_t, and 0 otherwise.

The expected value of this test statistic is 0, and it is negative when there is evidence of risk underestimation. To determine how negative it must be to reject the model, critical values are needed, and to determine critical values, distributional assumptions are needed for the portfolio outcomes X_t.
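Under assumed variable names (ours, not the toolbox's), the statistic can be computed directly from the backtest data:

```python
import numpy as np

def z_unconditional(returns, var, es, var_level):
    """Acerbi-Szekely unconditional statistic: the sum over the test window of
    X_t * I_t / ES_t, scaled by 1/(N * P_VaR), plus 1. var and es hold positive
    values; a VaR failure occurs when the return falls below -VaR."""
    r = np.asarray(returns, dtype=float)
    v = np.asarray(var, dtype=float)
    e = np.asarray(es, dtype=float)
    n = len(r)
    p_var = 1.0 - var_level
    i_t = r < -v                                  # failure indicator I_t
    return float((r * i_t / e).sum() / (n * p_var) + 1.0)
```

With no failures the statistic equals 1; losses deep in the tail relative to the ES drive it negative.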

The unconditional test statistic turns out to be stable across a range of distributional assumptions for X_t, from thin-tailed distributions such as the normal, to heavy-tailed t distributions with low degrees of freedom (high single digits). Only the most heavy-tailed t distributions (low single digits) lead to more noticeable differences in the critical values. See [] for details.

The esbacktest object takes advantage of the stability of the critical values of the unconditional test statistic and uses tables of precomputed critical values to run the ES backtests. esbacktest has two sets of critical-value tables. The first set of critical values assumes that the portfolio outcomes X_t follow a standard normal distribution; this is the unconditional normal test. The second set of critical values uses the heaviest possible tails: it assumes that the portfolio outcomes X_t follow a t distribution with 3 degrees of freedom; this is the unconditional t test.

The unconditional test statistic is sensitive both to the severity of the VaR failures relative to the ES estimate and to the number of VaR failures (how many times the VaR is violated). Therefore, a single but very large VaR failure relative to the ES (or only a few large losses) may cause the rejection of a model in a particular time window. A large loss on a day when the ES estimate is also large may not impact the test results as much as a large loss on a day when the ES is smaller. A model can also be rejected in periods with many VaR failures, even if all the VaR violations are relatively small, only slightly larger than the VaR. Both situations are illustrated in this example.

The esbacktest object takes the test data as input, but no distribution information is provided to esbacktest. Optionally, you can specify IDs for the portfolio and for each of the VaR and ES models being backtested. Although the model IDs in this example do have distribution references (for example, "normal" or "t 10"), these are only labels used for reporting purposes. The tests do not use the fact that the first model is a historical VaR method, or that the other models are alternative parametric VaR models. The distribution parameters used to estimate the VaR and ES in the previous section are not passed to esbacktest and are not used in any way in this section. These parameters, however, must be provided for the simulation-based tests supported in the esbacktestbysim object discussed in the Simulation-Based Tests section, and for the tests supported in the esbacktestbydistribution object discussed in the Large-Sample and Simulation Tests section.

ebt = esbacktest(returnstest,[var_hist var_normal var_t10 var_t5],...
   [es_hist es_normal es_t10 es_t5],'portfolioid',"s&p, 1995-2002",...
   'varid',["historical","normal","t 10","t 5"],'varlevel',varlevel);
disp(ebt)
  esbacktest with properties:
    portfoliodata: [2087x1 double]
          vardata: [2087x4 double]
           esdata: [2087x4 double]
      portfolioid: "s&p, 1995-2002"
            varid: ["historical"    "normal"    "t 10"    "t 5"]
         varlevel: [0.9750 0.9750 0.9750 0.9750]

Start the analysis by running the summary function.

s = summary(ebt);
disp(s)
      portfolioid          varid        varlevel    observedlevel    expectedseverity    observedseverity    observations    failures    expected    ratio     missing
    ________________    ____________    ________    _____________    ________________    ________________    ____________    ________    ________    ______    _______
    "s&p, 1995-2002"    "historical"     0.975         0.96694            1.3711              1.4039             2087           69        52.175     1.3225       0   
    "s&p, 1995-2002"    "normal"         0.975         0.97077            1.1928               1.416             2087           61        52.175     1.1691       0   
    "s&p, 1995-2002"    "t 10"           0.975         0.97173            1.2652              1.4063             2087           59        52.175     1.1308       0   
    "s&p, 1995-2002"    "t 5"            0.975         0.97173              1.37              1.4075             2087           59        52.175     1.1308       0   

The observedseverity column shows the average ratio of loss to VaR over the periods when the VaR was violated. The expectedseverity column uses the average ratio of ES to VaR for the VaR violation periods. For the "historical" and "t 5" models, the observed and expected severities are comparable. However, for the "historical" method, the observed number of failures (failures column) is considerably higher than the expected number of failures (expected column), about 32% higher (see the ratio column). Both the "normal" and the "t 10" models have observed severities much higher than the expected severities.
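The two severity columns can be reproduced directly from the raw data. This is a hedged Python sketch mirroring the description above (the function name is ours, not the toolbox's summary code):

```python
import numpy as np

def severity_ratios(returns, var, es):
    """Expected vs observed severity over VaR-violation days:
    expected = mean(ES/VaR) and observed = mean(loss/VaR), both restricted
    to the days when the return fell below -VaR."""
    r = np.asarray(returns, dtype=float)
    v = np.asarray(var, dtype=float)
    e = np.asarray(es, dtype=float)
    fail = r < -v                                  # VaR violation days
    expected = float(np.mean(e[fail] / v[fail]))
    observed = float(np.mean(-r[fail] / v[fail]))
    return expected, observed
```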

figure;
subplot(2,1,1)
bar(categorical(s.varid),[s.expectedseverity,s.observedseverity])
ylim([1 1.5])
legend('expected','observed','location','southeast')
title('average severity ratio')
subplot(2,1,2)
bar(categorical(s.varid),[s.expected,s.failures])
ylim([40 70])
legend('expected','observed','location','southeast')
title('number of var failures')


The runtests function runs all tests and reports only the accept-or-reject result. The unconditional normal test is the stricter of the two. For the 8-year test window here, two models fail both tests ("historical" and "normal"), one model fails the unconditional normal test but passes the unconditional t test ("t 10"), and one model passes both tests ("t 5").

t = runtests(ebt);
disp(t)
      portfolioid          varid        varlevel    unconditionalnormal    unconditionalt
    ________________    ____________    ________    ___________________    ______________
    "s&p, 1995-2002"    "historical"     0.975            reject               reject    
    "s&p, 1995-2002"    "normal"         0.975            reject               reject    
    "s&p, 1995-2002"    "t 10"           0.975            reject               accept    
    "s&p, 1995-2002"    "t 5"            0.975            accept               accept    

Additional details on the tests can be obtained by calling the individual test functions. Here are the details for the unconditional normal test.

t = unconditionalnormal(ebt);
disp(t)
      portfolioid          varid        varlevel    unconditionalnormal     pvalue      teststatistic    criticalvalue    observations    testlevel
    ________________    ____________    ________    ___________________    _________    _____________    _____________    ____________    _________
    "s&p, 1995-2002"    "historical"     0.975            reject           0.0047612      -0.37917         -0.23338           2087          0.95   
    "s&p, 1995-2002"    "normal"         0.975            reject           0.0043287      -0.38798         -0.23338           2087          0.95   
    "s&p, 1995-2002"    "t 10"           0.975            reject            0.037528       -0.2569         -0.23338           2087          0.95   
    "s&p, 1995-2002"    "t 5"            0.975            accept             0.13069      -0.16179         -0.23338           2087          0.95   

Here are the details for the unconditional t test.

t = unconditionalt(ebt);
disp(t)
      portfolioid          varid        varlevel    unconditionalt     pvalue     teststatistic    criticalvalue    observations    testlevel
    ________________    ____________    ________    ______________    ________    _____________    _____________    ____________    _________
    "s&p, 1995-2002"    "historical"     0.975          reject        0.017032      -0.37917         -0.27415           2087          0.95   
    "s&p, 1995-2002"    "normal"         0.975          reject        0.015375      -0.38798         -0.27415           2087          0.95   
    "s&p, 1995-2002"    "t 10"           0.975          accept        0.062835       -0.2569         -0.27415           2087          0.95   
    "s&p, 1995-2002"    "t 5"            0.975          accept         0.16414      -0.16179         -0.27415           2087          0.95   

Using the Tests for More Advanced Analyses

This section shows how to use the esbacktest object to run user-defined traffic-light tests, and also how to run the tests over rolling test windows.

One way to define a traffic-light test is to combine the results from the unconditional normal and the unconditional t tests. Because the unconditional normal test is stricter, one can define a traffic-light test with these levels:

  • Green: The model passes both the unconditional normal and unconditional t tests.

  • Yellow: The model fails the unconditional normal test, but passes the unconditional t test.

  • Red: The model is rejected by both the unconditional normal and unconditional t tests.

t = runtests(ebt);
tlvalue = (t.unconditionalnormal=='reject') + (t.unconditionalt=='reject');
t.trafficlight = categorical(tlvalue,0:2,{'green','yellow','red'},'ordinal',true);
disp(t)
      portfolioid          varid        varlevel    unconditionalnormal    unconditionalt    trafficlight
    ________________    ____________    ________    ___________________    ______________    ____________
    "s&p, 1995-2002"    "historical"     0.975            reject               reject           red      
    "s&p, 1995-2002"    "normal"         0.975            reject               reject           red      
    "s&p, 1995-2002"    "t 10"           0.975            reject               accept           yellow   
    "s&p, 1995-2002"    "t 5"            0.975            accept               accept           green    

An alternative user-defined traffic-light test can use a single test, but at different test confidence levels:

  • Green: The result is 'accept' at a test level of 95%.

  • Yellow: The result is 'reject' at a 95% test level, but 'accept' at 99%.

  • Red: The result is 'reject' at a 99% test level.

A similar test is proposed in [], with a high test level of 99.99%.

t95 = runtests(ebt); % 95% is the default test level value
t99 = runtests(ebt,'testlevel',0.99);
tlvalue = (t95.unconditionalnormal=='reject') + (t99.unconditionalnormal=='reject');
tlevels = t95(:,1:3);
tlevels.unconditionalnormal95 = t95.unconditionalnormal;
tlevels.unconditionalnormal99 = t99.unconditionalnormal;
tlevels.trafficlight = categorical(tlvalue,0:2,{'green','yellow','red'},'ordinal',true);
disp(tlevels)
      portfolioid          varid        varlevel    unconditionalnormal95    unconditionalnormal99    trafficlight
    ________________    ____________    ________    _____________________    _____________________    ____________
    "s&p, 1995-2002"    "historical"     0.975             reject                   reject               red      
    "s&p, 1995-2002"    "normal"         0.975             reject                   reject               red      
    "s&p, 1995-2002"    "t 10"           0.975             reject                   accept               yellow   
    "s&p, 1995-2002"    "t 5"            0.975             accept                   accept               green    

The test results may differ over different test windows. Here, a one-year rolling window is used to run the ES backtests over the eight individual years spanned by the original test window. The first user-defined traffic light described above is added to the test results table. The summary function is also called for each individual year to view the history of the severity and the number of VaR failures.

srolling = table;
trolling = table;
for yr = 1995:2002
   ind = (year(datestest) == yr); % yr avoids shadowing the year function
   portid = ['s&p, ' num2str(yr)];
   portfoliodata = returnstest(ind);
   vardata = [var_hist(ind) var_normal(ind) var_t10(ind) var_t5(ind)];
   esdata = [es_hist(ind) es_normal(ind) es_t10(ind) es_t5(ind)];
   ebt = esbacktest(portfoliodata,vardata,esdata,...
      'portfolioid',portid,'varid',["historical" "normal" "t 10" "t 5"],...
      'varlevel',varlevel);
   if yr == 1995
      srolling = summary(ebt);
      trolling = runtests(ebt);
   else
      srolling = [srolling;summary(ebt)]; %#ok<AGROW>
      trolling = [trolling;runtests(ebt)]; %#ok<AGROW>
   end
end
% optional: add the first user-defined traffic light test described above
tlvalue = (trolling.unconditionalnormal=='reject') + (trolling.unconditionalt=='reject');
trolling.trafficlight = categorical(tlvalue,0:2,{'green','yellow','red'},'ordinal',true);

Display the results, one model at a time. The "t 5" model has the best performance in these tests (two "yellow" years), and the "normal" model the worst (three "red" and one "yellow").

disp(trolling(trolling.varid=="historical",:))
    portfolioid       varid        varlevel    unconditionalnormal    unconditionalt    trafficlight
    ___________    ____________    ________    ___________________    ______________    ____________
    "s&p, 1995"    "historical"     0.975            accept               accept           green    
    "s&p, 1996"    "historical"     0.975            reject               accept           yellow   
    "s&p, 1997"    "historical"     0.975            reject               reject           red      
    "s&p, 1998"    "historical"     0.975            accept               accept           green    
    "s&p, 1999"    "historical"     0.975            accept               accept           green    
    "s&p, 2000"    "historical"     0.975            accept               accept           green    
    "s&p, 2001"    "historical"     0.975            accept               accept           green    
    "s&p, 2002"    "historical"     0.975            reject               reject           red      
disp(trolling(trolling.varid=="normal",:))
    portfolioid     varid      varlevel    unconditionalnormal    unconditionalt    trafficlight
    ___________    ________    ________    ___________________    ______________    ____________
    "s&p, 1995"    "normal"     0.975            accept               accept           green    
    "s&p, 1996"    "normal"     0.975            reject               reject           red      
    "s&p, 1997"    "normal"     0.975            reject               reject           red      
    "s&p, 1998"    "normal"     0.975            reject               accept           yellow   
    "s&p, 1999"    "normal"     0.975            accept               accept           green    
    "s&p, 2000"    "normal"     0.975            accept               accept           green    
    "s&p, 2001"    "normal"     0.975            accept               accept           green    
    "s&p, 2002"    "normal"     0.975            reject               reject           red      
disp(trolling(trolling.varid=="t 10",:))
    portfolioid    varid     varlevel    unconditionalnormal    unconditionalt    trafficlight
    ___________    ______    ________    ___________________    ______________    ____________
    "s&p, 1995"    "t 10"     0.975            accept               accept           green    
    "s&p, 1996"    "t 10"     0.975            reject               reject           red      
    "s&p, 1997"    "t 10"     0.975            reject               accept           yellow   
    "s&p, 1998"    "t 10"     0.975            accept               accept           green    
    "s&p, 1999"    "t 10"     0.975            accept               accept           green    
    "s&p, 2000"    "t 10"     0.975            accept               accept           green    
    "s&p, 2001"    "t 10"     0.975            accept               accept           green    
    "s&p, 2002"    "t 10"     0.975            reject               reject           red      
disp(trolling(trolling.varid=="t 5",:))
    portfolioid    varid    varlevel    unconditionalnormal    unconditionalt    trafficlight
    ___________    _____    ________    ___________________    ______________    ____________
    "s&p, 1995"    "t 5"     0.975            accept               accept           green    
    "s&p, 1996"    "t 5"     0.975            reject               accept           yellow   
    "s&p, 1997"    "t 5"     0.975            accept               accept           green    
    "s&p, 1998"    "t 5"     0.975            accept               accept           green    
    "s&p, 1999"    "t 5"     0.975            accept               accept           green    
    "s&p, 2000"    "t 5"     0.975            accept               accept           green    
    "s&p, 2001"    "t 5"     0.975            accept               accept           green    
    "s&p, 2002"    "t 5"     0.975            reject               accept           yellow   

The year 2002 is an example of a year with relatively small severities, yet many VaR failures. All models perform poorly in 2002, even though the observed severities are low; the number of VaR failures for some models is more than twice the expected number.

disp(summary(ebt))
    portfolioid       varid        varlevel    observedlevel    expectedseverity    observedseverity    observations    failures    expected    ratio     missing
    ___________    ____________    ________    _____________    ________________    ________________    ____________    ________    ________    ______    _______
    "s&p, 2002"    "historical"     0.975         0.94636            1.2022                 1.2             261            14        6.525      2.1456       0   
    "s&p, 2002"    "normal"         0.975         0.94636            1.1928              1.2111             261            14        6.525      2.1456       0   
    "s&p, 2002"    "t 10"           0.975         0.95019            1.2652              1.2066             261            13        6.525      1.9923       0   
    "s&p, 2002"    "t 5"            0.975         0.95019              1.37              1.2077             261            13        6.525      1.9923       0   

The following figure shows the data over the entire 8-year window, and the severity ratio year by year (expected and observed) for the "historical" model. The absolute size of the losses is not as important as their size relative to the ES (or, equivalently, relative to the VaR). Both 1997 and 1998 have large losses of comparable magnitude; however, the expected severity in 1998 is much higher (larger ES estimates). Overall, the "historical" method does well with respect to severity ratios.

sh = srolling(srolling.varid=="historical",:);
figure;
subplot(2,1,1)
failureind = returnstest<-var_hist;
plot(datestest,returnstest,datestest,-var_hist,datestest,-es_hist)
hold on
plot(datestest(failureind),returnstest(failureind),'.')
hold off
legend('returns','var','es','location','best')
title('historical var and es')
grid on
subplot(2,1,2)
bar(1995:2002,[sh.expectedseverity,sh.observedseverity])
ylim([1 1.8])
legend('expected','observed','location','best')
title('yearly average severity ratio, historical var')


However, a similar visualization of the expected against the observed number of VaR failures shows that the "historical" method tends to be violated many more times than expected. For example, even though in 2002 the expected average severity ratio is very close to the observed one, the number of VaR failures was more than twice the expected number. This leads to test failures for both the unconditional normal and unconditional t tests.

figure;
subplot(2,1,1)
plot(datestest,returnstest,datestest,-var_hist,datestest,-es_hist)
hold on
plot(datestest(failureind),returnstest(failureind),'.')
hold off
legend('returns','var','es','location','best')
title('historical var and es')
grid on
subplot(2,1,2)
bar(1995:2002,[sh.expected,sh.failures])
legend('expected','observed','location','best')
title('yearly var failures, historical var')


Simulation-Based Tests

The esbacktestbysim object supports five simulation-based ES backtests. esbacktestbysim requires the distribution information for the portfolio outcomes, namely, the distribution name ("normal" or "t") and the distribution parameters for each day in the test window. esbacktestbysim uses the provided distribution information to run simulations that determine the critical values. The tests supported in esbacktestbysim are conditional, unconditional, quantile, minbiasabsolute, and minbiasrelative. These are implementations of the tests proposed by Acerbi and Szekely in [], and in [], [] for 2017 and 2019.

The esbacktestbysim object supports normal and t distributions. These tests can be used for any model where the underlying distribution of portfolio outcomes is normal or t, such as exponentially weighted moving average (EWMA), delta-gamma, or generalized autoregressive conditional heteroskedasticity (GARCH) models.

ES backtests are necessarily approximate in that they are sensitive to errors in the predicted VaR. However, the minimally biased test has only a small sensitivity to VaR errors, and this sensitivity is prudential, in the sense that VaR errors lead to a more punitive ES test. See Acerbi-Szekely ([], [] for 2017 and 2019) for details. When distribution information is available, the minimally biased test is recommended (see minbiasabsolute and minbiasrelative).

The "normal", "t 10", and "t 5" models can be backtested with the simulation-based tests in esbacktestbysim. For illustration purposes, only "t 5" is backtested here. The distribution name ("t") and parameters (degrees of freedom, location, and scale) are provided when the esbacktestbysim object is created.

rng('default'); % for reproducibility; the esbacktestbysim constructor runs a simulation
ebts = esbacktestbysim(returnstest,var_t5,es_t5,"t",'degreesoffreedom',5,...
   'location',mu,'scale',sigmat5,...
   'portfolioid',"s&p",'varid',"t 5",'varlevel',varlevel);

The recommended workflow is the same: first run the summary function, then run the runtests function, and then run the individual test functions.

The summary function provides exactly the same information as the summary function from esbacktest.

s = summary(ebts);
disp(s)
    portfolioid    varid    varlevel    observedlevel    expectedseverity    observedseverity    observations    failures    expected    ratio     missing
    ___________    _____    ________    _____________    ________________    ________________    ____________    ________    ________    ______    _______
       "s&p"       "t 5"     0.975         0.97173             1.37               1.4075             2087           59        52.175     1.1308       0   
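the derived columns of the summary follow directly from the raw failure counts. as a sketch (not toolbox code), they can be reproduced from the numbers in the table:

```python
# sketch: reproduce the derived summary columns from the raw counts above
observations = 2087   # test-window length
failures = 59         # days when the loss exceeded the var estimate
varlevel = 0.975

expected = (1 - varlevel) * observations     # expected var failures: 52.175
ratio = failures / expected                  # observed/expected: ~1.1308
observedlevel = 1 - failures / observations  # ~0.97173

print(round(expected, 3), round(ratio, 4), round(observedlevel, 5))
```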

the runTests function shows the final accept or reject result.

t = runTests(ebts);
disp(t)
    portfolioid    varid    varlevel    conditional    unconditional    quantile    minbiasabsolute    minbiasrelative
    ___________    _____    ________    ___________    _____________    ________    _______________    _______________
       "s&p"       "t 5"     0.975        accept          accept         accept         accept             accept     

additional details on the test results are obtained by calling the individual test functions. for example, call the minBiasAbsolute test. the first output, t, has the test results and additional details such as the p-value, test statistic, and so on. the second output, s, contains simulated test statistic values assuming the distributional assumptions are correct. for example, esbacktestbysim generated 1000 scenarios of portfolio outcomes in this case, where each scenario is a series of 2087 observations simulated from t random variables with 5 degrees of freedom and the given location and scale parameters. the simulated values returned in the optional s output show typical values of the test statistic if the distributional assumptions are correct. these are the simulated statistics used to determine the significance of the tests, that is, the reported critical values and p-values.

[t,s] = minBiasAbsolute(ebts);
disp(t)
    portfolioid    varid    varlevel    minbiasabsolute    pvalue    teststatistic    criticalvalue    observations    scenarios    testlevel
    ___________    _____    ________    _______________    ______    _____________    _____________    ____________    _________    _________
       "s&p"       "t 5"     0.975          accept         0.299      -0.00080059      -0.0030373          2087          1000         0.95   
whos s
  name      size              bytes  class     attributes
  s         1x1000             8000  double              
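in general, a one-sided simulation-based test converts the vector s of simulated statistics into a p-value and a critical value by ranking. the following python sketch uses made-up stand-in numbers (the simulated statistics and observed value are hypothetical, and the toolbox internals may differ in details such as tie handling):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.normal(0.0, 0.002, size=1000)  # stand-in for the 1000 simulated test statistics
tobs = -0.0008                         # stand-in for the observed test statistic

# left-tail test: reject when the observed statistic is too negative
pvalue = np.mean(s <= tobs)            # fraction of simulated statistics at or below tobs
criticalvalue = np.quantile(s, 0.05)   # 5th percentile for a 95% test level
reject = tobs < criticalvalue

print(pvalue, bool(reject))
```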

select a test to show the test results and visualize the significance of the tests. the histogram shows the distribution of simulated test statistics, and the asterisk shows the value of the test statistic for the actual portfolio returns.

estestchoice = "minBiasAbsolute";
switch estestchoice
   case 'minBiasAbsolute'
      [t,s] = minBiasAbsolute(ebts);
   case 'minBiasRelative'
      [t,s] = minBiasRelative(ebts);
   case 'conditional'
      [t,s] = conditional(ebts);
   case 'unconditional'
      [t,s] = unconditional(ebts);
   case 'quantile'
      [t,s] = quantile(ebts);
end
disp(t)
    portfolioid    varid    varlevel    minbiasabsolute    pvalue    teststatistic    criticalvalue    observations    scenarios    testlevel
    ___________    _____    ________    _______________    ______    _____________    _____________    ____________    _________    _________
       "s&p"       "t 5"     0.975          accept         0.299      -0.00080059      -0.0030373          2087          1000         0.95   
figure;
histogram(s);
hold on;
plot(t.TestStatistic,0,'*');
hold off;
titletext = sprintf('%s: %s, p-value: %4.3f',estestchoice,t.VaRID,t.PValue);
title(titletext)

figure contains an axes object. the axes object with title minbiasabsolute: t 5, p-value: 0.299 contains 2 objects of type histogram, line. one or more of the lines displays its values using only markers

the unconditional test statistic reported by esbacktestbysim is exactly the same as the unconditional test statistic reported by esbacktest. however, the critical values reported by esbacktestbysim are based on a simulation using a t distribution with 5 degrees of freedom and the given location and scale parameters. the esbacktest object gives approximate test results for the "t 5" model, whereas the results here are specific to the distribution information provided. also, for the conditional test, this is a visualization of the standalone conditional test (the conditionalonly result in its output table). the final conditional test result (conditional column) depends also on a preliminary var backtest (vartestresult column).

the "t 5" model is accepted by the five tests.

the esbacktestbysim object provides a simulate function to run a new simulation. for example, if there is a borderline test result where the test statistic is near the critical value, you might use the simulate function to simulate new scenarios. in cases where more precision is required, a larger simulation can be run.

the tests can be run over a rolling window, following the same approach described above for esbacktest. user-defined traffic-light tests can also be defined, for example, using two different test confidence levels, similar to what was done above for esbacktest.

large-sample and simulation tests

the esbacktestbyde object supports two es back tests with critical values determined either with a large-sample approximation or a simulation (finite sample). esbacktestbyde requires the distribution information for the portfolio outcomes, namely, the distribution name ("normal" or "t") and the distribution parameters for each day in the test window. it does not require the var or the es data. esbacktestbyde uses the provided distribution information to map the portfolio outcomes into "ranks", that is, to apply the cumulative distribution function to map returns into values in the unit interval, where the test statistics are defined. esbacktestbyde can determine critical values by using a large-sample approximation or a finite-sample simulation.
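the mapping into ranks is the standard probability integral transform: apply the model cdf to each portfolio outcome. a hedged python sketch (the t location-scale parameters and return values below are made up for illustration):

```python
from scipy.stats import t as tdist

# probability integral transform: the model cdf maps returns to the unit interval
dof, mu, sigma = 5, 0.0005, 0.01           # hypothetical t location-scale parameters
returns = [-0.03, -0.01, 0.0, 0.02]
ranks = [tdist.cdf((r - mu) / sigma, dof) for r in returns]

# under a correct model, the ranks are i.i.d. uniform(0,1); the du-escanciano
# statistics are built from the portion of the ranks that falls in the tail
print([round(u, 3) for u in ranks])
```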

the tests supported in esbacktestbyde are conditionalDE and unconditionalDE. these are implementations of the tests proposed by du and escanciano in [5]. the unconditionalDE test and all the tests previously discussed in this example are severity tests that assess if the magnitude of the tail losses is consistent with the model predictions. the conditionalDE test, however, is a test for independence over time that assesses if there is evidence of autocorrelation in the series of tail losses.

the object supports normal and t distributions. these tests can be used for any model where the underlying distribution of portfolio outcomes is normal or t, such as exponentially weighted moving average (ewma), delta-gamma, or generalized autoregressive conditional heteroskedasticity (garch) models.

the "normal", "t 10", and "t 5" models can be backtested with the tests in esbacktestbyde. for illustration purposes, only "t 5" is backtested. the distribution name ("t") and parameters (degrees of freedom, location, and scale) are provided when the esbacktestbyde object is created.

rng('default'); % for reproducibility; the esbacktestbyde constructor runs a simulation
ebtde = esbacktestbyde(returnstest,"t",'DegreesOfFreedom',5,...
   'Location',mu,'Scale',sigmat5,...
   'PortfolioID',"s&p",'VaRID',"t 5",'VaRLevel',varlevel);

the recommended workflow is the same: first, run the summary function, then run the runTests function, and then run the individual test functions. the summary function provides exactly the same information as the summary function from esbacktest.

s = summary(ebtde);
disp(s)
    portfolioid    varid    varlevel    observedlevel    expectedseverity    observedseverity    observations    failures    expected    ratio     missing
    ___________    _____    ________    _____________    ________________    ________________    ____________    ________    ________    ______    _______
       "s&p"       "t 5"     0.975         0.97173             1.37               1.4075             2087           59        52.175     1.1308       0   

the runTests function shows the final accept or reject result.

t = runTests(ebtde);
disp(t)
    portfolioid    varid    varlevel    conditionalde    unconditionalde
    ___________    _____    ________    _____________    _______________
       "s&p"       "t 5"     0.975         reject            accept     

additional details on the test results are obtained by calling the individual test functions.

t = conditionalDE(ebtde);
disp(t)
    portfolioid    varid    varlevel    conditionalde      pvalue      teststatistic    criticalvalue    autocorrelation    observations    criticalvaluemethod    numlags    scenarios    testlevel
    ___________    _____    ________    _____________    __________    _____________    _____________    _______________    ____________    ___________________    _______    _________    _________
       "s&p"       "t 5"     0.975         reject        0.00034769       12.794           3.8415           0.078297            2087          "large-sample"          1          nan         0.95   
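with the large-sample method, the conditionalde statistic is compared against a chi-square distribution with numlags degrees of freedom, which is where the 3.8415 critical value and the p-value in the table come from. a cross-check sketch in python:

```python
from scipy.stats import chi2

numlags = 1
testlevel = 0.95
teststatistic = 12.794                           # statistic reported in the table above

criticalvalue = chi2.ppf(testlevel, df=numlags)  # 95% quantile of chi2(1), ~3.8415
pvalue = chi2.sf(teststatistic, df=numlags)      # ~3.5e-4, so the test rejects

print(round(criticalvalue, 4), pvalue < 1 - testlevel)
```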

by default, the critical values are determined by a large-sample approximation. critical values based on a finite-sample distribution estimated by using a simulation are available through the 'CriticalValueMethod' optional name-value pair argument.

[t,s] = conditionalDE(ebtde,'CriticalValueMethod','simulation');
disp(t)
    portfolioid    varid    varlevel    conditionalde    pvalue    teststatistic    criticalvalue    autocorrelation    observations    criticalvaluemethod    numlags    scenarios    testlevel
    ___________    _____    ________    _____________    ______    _____________    _____________    _______________    ____________    ___________________    _______    _________    _________
       "s&p"       "t 5"     0.975         reject         0.01        12.794           3.7961           0.078297            2087           "simulation"           1         1000         0.95   

the second output, s, contains simulated test statistic values. the following visualization is useful for comparing how well the simulated finite-sample distribution matches the large-sample approximation. the plot shows that the tail of the distribution of test statistics is slightly heavier for the simulation-based (finite-sample) distribution. this means the simulation-based version of the tests is more tolerant and would not reject in some cases where the large-sample approximation rejects. how closely the large-sample and simulation distributions match depends not only on the number of observations in the test window, but also on the var confidence level (higher var levels lead to heavier tails in the finite-sample distribution).

xls = 0:0.05:30;
pdfls = chi2pdf(xls,t.NumLags);
histogram(s,'Normalization',"pdf")
hold on
plot(xls,pdfls)
hold off
ylim([0 0.1])
legend({'simulation','large-sample'})
titletext = sprintf('conditional test distribution\nvar level: %g%%, sample size = %d',varlevel*100,t.Observations);
title(titletext)

figure contains an axes object. the axes object with title conditional test distribution var level: 97.5%, sample size = 2087 contains 2 objects of type histogram, line. these objects represent simulation, large-sample.

similar steps can be used to see details on the unconditionalDE test, and to compare the large-sample and simulation-based results.

the esbacktestbyde object provides a simulate function to run a new simulation. for example, if there is a borderline test result where the test statistic is near the critical value, you can use the simulate function to simulate new scenarios. also, by default, the simulation stores results for up to 5 lags for the conditionalDE test, so if simulation-based results for a larger number of lags are needed, you must use the simulate function.

if the large-sample approximation tests are the only tests that you need because they are reliable for a particular sample size and var level, you can turn off the simulation when creating an esbacktestbyde object by using the 'Simulate' optional input.

the tests can be run over a rolling window, following the same approach described above for esbacktest. you can also define traffic-light tests; for example, you could use two different test confidence levels, similar to what was done above for esbacktest.

conclusions

to contrast the three es backtesting objects:

  • the esbacktest object is used for a wide range of distributional assumptions: historical var, parametric var, monte-carlo var, or extreme-value models. however, esbacktest offers approximate test results based on two variations of the same test: the unconditional test statistic with two different sets of precomputed critical values (unconditionalNormal and unconditionalT).

  • the esbacktestbysim object is used for parametric methods with normal and t distributions (which includes ewma, garch, and delta-gamma) and requires distribution parameters as inputs. esbacktestbysim offers five different tests (conditional, unconditional, quantile, minBiasAbsolute, and minBiasRelative), and the critical values for these tests are simulated using the distribution information that you provide and are therefore more accurate. although all es back tests are sensitive to var estimation errors, the minimally biased test has only a small sensitivity and is recommended (see acerbi-szekely 2017 and 2019 for details [2], [3]).

  • the esbacktestbyde object is also used for parametric methods with normal and t distributions (which includes ewma, garch, and delta-gamma) and requires distribution parameters as inputs. esbacktestbyde contains a severity test (unconditionalDE) and a time-independence test (conditionalDE), and it has the convenience of a large-sample, fast version of the tests. the conditionalDE test is the only test for independence over time for es models among all the tests supported in these three classes.

as shown in this example, all three es backtesting objects provide functionality to generate reports on severities, var failures, and test results information. the three es backtest objects provide the flexibility to build on them. for example, you can create user-defined traffic-light tests and run the es backtesting analysis over rolling windows.

references

[1] acerbi, c., and b. szekely. "backtesting expected shortfall." msci inc., december 2014.

[2] acerbi, c., and b. szekely. "general properties of backtestable statistics." ssrn electronic journal. january, 2017.

[3] acerbi, c., and b. szekely. "the minimally biased backtest for es." risk. september, 2019.

[4] basel committee on banking supervision. "minimum capital requirements for market risk." january 2016.

[5] du, z., and j. c. escanciano. "backtesting expected shortfall: accounting for tail risk." management science. vol 63, issue 4, april 2017.

[6] mcneil, a., r. frey, and p. embrechts. quantitative risk management: concepts, techniques and tools. princeton university press. 2005.

[7] rockafellar, r. t. and s. uryasev. "conditional value-at-risk for general loss distributions." journal of banking and finance. vol. 26, 2002, pp. 1443-1471.

local functions

function [var,es] = hhistoricalvares(sample,varlevel)
    % compute historical var and es
    % see [7] for technical details
    % convert to losses
    sample = -sample;
    
    n = length(sample);
    k = ceil(n*varlevel);
    
    z = sort(sample);
    
    var = z(k);
    
    if k < n
       es = ((k - n*varlevel)*z(k) + sum(z(k+1:n)))/(n*(1 - varlevel));
    else
       es = z(k);
    end
end
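for cross-checking, here is a python translation of the helper above (same loss-sorting logic; the sample returns below are made up for the check):

```python
import math

def historical_var_es(sample, varlevel):
    # historical var/es: sort losses ascending, take the k-th smallest loss as var,
    # and average the tail beyond the varlevel quantile for es (see [7])
    losses = sorted(-r for r in sample)
    n = len(losses)
    k = math.ceil(n * varlevel)
    var = losses[k - 1]
    if k < n:
        es = ((k - n * varlevel) * losses[k - 1] + sum(losses[k:])) / (n * (1 - varlevel))
    else:
        es = losses[k - 1]
    return var, es

# 10 made-up returns at the 90% level: var is the 9th sorted loss (0.02),
# and es averages the fractional weight on it plus the single larger loss (0.03)
sample = [-0.01, 0.02, -0.03, 0.01, 0.0, -0.02, 0.015, -0.015, 0.005, -0.005]
var, es = historical_var_es(sample, 0.90)
print(var, round(es, 6))
```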
function [var,es] = hnormalvares(mu,sigma,varlevel)
    % compute var and es for normal distribution
    % see [6] for technical details
    
    var = -1*(mu-sigma*norminv(varlevel));
    es = -1*(mu-sigma*normpdf(norminv(varlevel))./(1-varlevel));
end
function [var,es] = htvares(dof,mu,sigma,varlevel)
    % compute var and es for t location-scale distribution
    % see [6] for technical details
    var = -1*(mu-sigma*tinv(varlevel,dof));
    es_standardt = (tpdf(tinv(varlevel,dof),dof).*(dof+tinv(varlevel,dof).^2)./((1-varlevel).*(dof-1)));
    es = -1*(mu-sigma*es_standardt);
end
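similarly, the closed-form normal var/es in hnormalvares can be cross-checked in python; for standard normal returns at the 97.5% level, the known values are var ≈ 1.96 and es ≈ 2.338 (mcneil et al. [6]):

```python
from scipy.stats import norm

def normal_var_es(mu, sigma, varlevel):
    # closed-form var/es for normally distributed returns (same formulas as hnormalvares)
    z = norm.ppf(varlevel)
    var = -(mu - sigma * z)
    es = -(mu - sigma * norm.pdf(z) / (1 - varlevel))
    return var, es

var, es = normal_var_es(0.0, 1.0, 0.975)
print(round(var, 4), round(es, 4))
```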
