global sensitivity analysis with simbiology video
learn about the global sensitivity analysis (gsa) functionality in simbiology®. you’ll discover:
- the differences between local and global sensitivity analysis and when it is appropriate to apply each method
- how sobol indices and multiparametric gsa are calculated
- how to interpret the plots associated with sobol and mpgsa
- how to choose your sample size for these gsa methods
you’ll also get an introduction to the concept of ‘observables’ with respect to the model or data (for example, to calculate auc) and how they can be used as outputs for a gsa.
welcome to the webinar. my name is sietse braakman. i'm an application engineer at mathworks. today, i will be talking about global sensitivity analysis with simbiology.
the topics i want to discuss today: first, i'll take you through some of the concepts in global sensitivity analysis, to make sure everyone understands them and is on the same page. then i will explain how to perform global sensitivity analysis in simbiology using two methods, the sobol method and the multiparametric method. i will spend time on how to interpret the resulting plots, as well as how to size your samples so that you get reliable results.
and we'll have time for a q&a. there is a q&a window as part of the webex that you can type your questions into during the meeting. some of my colleagues, fulden buyukozturk and jeremy huard, are able to answer; others we might keep until the end for the q&a session.
with that, let's get started. so we'll start with some of the concepts. most of you might be familiar with the fact that there is local and global sensitivity analysis. so when i talk about local sensitivity analysis, i talk about an analysis around a single operating point in the parameter space. so you might have multiple parameters, and really, we're only performing this sensitivity analysis for a single set of those parameter values.
global sensitivity analysis, on the other hand, is performed across a domain in your parameter space. so for every parameter, there is now an upper and a lower bound. and between those bounds, we explore how the model output is sensitive to those input parameters.
an important concept in sensitivity analysis is sampling. for local sensitivity analysis, this is mostly done one at a time: we are at that single operating point, we perturb one parameter and see how that affects the model output, then we bring the parameter back to its original value and perturb the next one, et cetera. and because it's one at a time, you're unable to observe interactions between parameters from a single analysis.
global sensitivity analysis, on the other hand, is all at a time. so you take random samples from the parameter space to calculate the sensitivity index. as a result, you're able to observe interactions between the parameters. there are also multiple ways that you can sample that parameter space. in general, these samplings are uniform, and there are so-called low discrepancy sampling methods, such as sobol, latin hypercube, and halton sequences, that you can use to perform the sampling more efficiently.
and then lastly, there are multiple ways that you can calculate sensitivity indices. for local sensitivity, we use a derivative or ratio, where we look at the change in model output over the change in model input. sobol and efast, which is a fourier-based method, use the variance: you try to attribute variance to each parameter. and then there are distribution-based methods, such as the multiparametric global sensitivity analysis, and correlation-based methods, such as partial rank correlation coefficients.
the next thing to talk about is why we should use local or global sensitivity analysis. here i have a very simple example: a one-compartment model with absorption rate ka, distribution volume vd, and saturable enzymatic elimination parameterized by vm and km. now, if i set every parameter to 1 and simulate this model for 10 hours, i get the following local sensitivity values. however, if i set ka to be 0.1, so i divide ka by 10, and i do the same analysis, i get these results.
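as a sketch of what that model looks like in equations (the exact form isn't shown here, so this assumes a dosed gut compartment with amount $A_g$ and a central concentration $C$, with $V_m$ in units of concentration per time):

$$
\frac{dA_g}{dt} = -k_a\,A_g, \qquad \frac{dC}{dt} = \frac{k_a}{V_d}\,A_g \;-\; \frac{V_m\,C}{K_m + C}
$$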
and so you can see that the results from the local sensitivity analysis strongly depend on that operating point in your parameter space. and so that's why it's advisable to use global sensitivity analysis if you don't know that operating point with much confidence. so in that case, global sensitivity analysis is most appropriate when you're exploring sensitivity across that parameter domain.
still, there are reasons why you might want to use local sensitivity analysis, for example for target identification. if you do have a good idea of the calibration, and you have, for example, one calibration that represents a kidney-impaired patient, you might want to use local sensitivity analysis for target identification, as birgit schoeberl and colleagues have done in the paper below. also, simbiology and matlab use local sensitivities to calculate the gradients in optimization algorithms.
on the other hand, global sensitivity analysis is more appropriate when you want to understand which model inputs drive the model response, or a model-based decision metric, when you don't know that operating point. you can also use the results of a global sensitivity analysis to inform your parameter estimation strategy: the parameters the model is very sensitive to are most likely the ones you want to estimate, to make sure that you understand their values well and that your model is properly calibrated.
with that, i want to move on to sobol global sensitivity analysis. this is one of the two methods that's being implemented in simbiology, and i want to explain first how the sobol index is calculated. so the idea is to apportion variance in the model output y to the model inputs x.
and there are multiple indices you can calculate. the first order sensitivity index represents the individual contributions, that is, the variance that can be apportioned to each individual parameter, and it's calculated through the expression below.
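written out from the description that follows, with model output $Y$ and parameter $X_i$:

$$
S_i \;=\; 1 - \frac{E_{X_i}\!\left[\operatorname{Var}(Y \mid X_i)\right]}{\operatorname{Var}(Y)} \;=\; \frac{\operatorname{Var}_{X_i}\!\left(E\left[Y \mid X_i\right]\right)}{\operatorname{Var}(Y)}
$$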
in the denominator, you see the unconditional variance. assume we're simulating a one-compartment model and we're sampling two parameters, the absorption coefficient ka and the clearance. if i sample these parameters and simulate each sample, i get an ensemble of simulations, and at every time point i can calculate the variance of that ensemble; that's what happens in the denominator.
in the numerator, you see the conditional variance, which is the variance not due to xi. say xi is our parameter ka. what we then do is fix ka at one value, still allow the clearance to vary, and compute the variance.
we can do that for all values of ka and take the mean value, and then we can say that that is the variance due to everything except ka. so 1 minus that ratio gives us the variance due to ka. that's the way the first order sobol index is calculated: the ratio of the conditional variance over the unconditional variance, and then 1 minus that.
the other sensitivity index you can calculate is the total effect, and that shows interactions because it is the sum of all of the effects, of every order, that include your parameter of interest. so if your parameter of interest is number one, or ka, it's the first order effect, plus the second order effects with each of the other parameters, plus the third order effects with each combination of parameters, et cetera. there is an easier way to calculate that sum, through the expression below, which i'm not going to go into in much detail; but the idea behind calculating it is similar to the first order sensitivity index.
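for reference, the standard closed form of the total effect index, where $X_{\sim i}$ denotes all parameters except $X_i$:

$$
S_{T_i} \;=\; \frac{E_{X_{\sim i}}\!\left[\operatorname{Var}(Y \mid X_{\sim i})\right]}{\operatorname{Var}(Y)} \;=\; 1 - \frac{\operatorname{Var}_{X_{\sim i}}\!\left(E\left[Y \mid X_{\sim i}\right]\right)}{\operatorname{Var}(Y)}
$$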
ok, so that gives you an idea of how the sobol indices are calculated and how that differs from a local sensitivity analysis, where you take the ratio of the change in model output over the change in model input. before we move to simbiology, i just want to take you through the workflow we're going to follow.
so the first thing we're going to do is we're going to define the domain of interest in the parameter space. so i'm going to say, these are the parameters that i'm interested in incorporating in my global sensitivity analysis, and for each of them, i need to decide what the lower and upper bound is. then that defines my parameter space, and i can sample that. i can simulate each sample. and then once all my simulations are complete, i can calculate the sensitivity measures.
i'm going to demonstrate this using a model by sergey aksenov and his colleagues. this is a model that describes lesinurad and febuxostat, which are two approved drugs to treat gout. and the model looks like this. so we have a two-compartment model for the lesinurad at the top and a two-compartment model for the febuxostat at the bottom. and you can see that the febuxostat, the central concentration, has an effect on the production of serum uric acid, whereas the lesinurad increases the glomerular filtration and thereby basically increases the clearance of uric acid to avoid accumulation of it.
what i'm going to do today is choose four parameters from this model, particularly from the pd part of the model, and explore to what extent they have an effect on the output, which i choose to be serum uric acid. the four parameters are k1, fc50, fmax, and e0. of course, i could take more parameters, but then we'd spend a lot of time simulating, which isn't helpful for the purposes of today.
all right, there are two ways that you can perform this analysis. you can either perform it entirely programmatically, writing your own script, or you can use an app. to get the app, go to the home tab in matlab, click on the add-on explorer, and search for global sensitivity analysis simbiology; you can then download the app and install it directly onto your machine.
if you can't access this for some reason, note that the functions themselves are built in and installed as of simbiology 2020a. the two functions are sbiosobol and sbiompgsa, and the app is basically a user interface to those two functions. we will use the app today, just so it's easier for you to follow what i'm doing. but at the end of the presentation, i will share my code and the model with you, so that you can also try it out using code.
in order to run the app, i need to move this model to matlab. so i can export the model to my matlab workspace, for example calling it m1. then, in matlab, i can pull out the doses; in this case, i want to use doses one and two. i've already done all of that and started up the app, but you call the app like this, startglobalsensitivityanalysisapp, passing in the model and the doses. that starts up the app, and it looks like this.
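as a rough sketch of those steps in code, assuming the model was exported to the workspace as m1 and using the launcher name mentioned above:

```matlab
% sketch: launch the gsa app on the exported model with the first two doses
% (the launcher name follows the talk; check the app's documentation for the exact name)
doses = getdose(m1);                                % all dose objects in the model
startglobalsensitivityanalysisapp(m1, doses(1:2))   % pass the model and doses 1 and 2
```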
so let's walk through this step by step. the first thing, as i said, that we need to do is we need to define which parameters we are interested in. so i can go here. that basically opens up a separate window with all of the objects in my model, so the compartments, parameters, initial conditions for the species, et cetera, that i can use as inputs to my global sensitivity analysis. and i can sort those.
and so what i'm interested in now is selecting the four parameters of interest to me: e0, fc50, fmax, and k1. now i can choose upper and lower bound values for each of those. i've already done that: i took the standard value of each parameter in the model and divided and multiplied it by 1.5 to get the lower and upper bounds, which keeps the spread symmetric around the model value. i wanted to make sure that all of the parameters had a similar width, in terms of order-of-magnitude spread, between the upper and lower bound.
and you can just edit this, write 0.5, and change the parameter values. once you're done, and you've selected all your parameters and entered all the upper and lower bounds, we move on. here, you can select the number of samples. by default, this is 1,000, and i'm going to keep it that way.
and then you need to define the output times, and this is basically matlab code. so you can use linspace, but you can also just write your own time vector here. so i'm going to use the vector that goes from 0 to 180 in steps of 0.5 hours.
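in matlab terms, either of these produces the vector i'll use:

```matlab
outputTimes = 0:0.5:180;                 % 0 to 180 hours in half-hour steps
% equivalently: outputTimes = linspace(0, 180, 361);
```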
then there are a few dropdown menus here where you can choose different sampling methods. as i said, sobol, halton, and latin hypercube are low discrepancy sampling methods; i recommend using one of those three because they're more efficient than standard uniform random sampling.
the output times are reported after the simulation is done, and they might not coincide with the steps the ode solver has taken; that's why you can choose an interpolator. i would recommend just using the default value here.
then there are different ways that you can speed up the global sensitivity analysis. first of all, you can parallelize the simulations; i will be using a parallel pool with four workers here, on my core i7 laptop with four cores. we will also accelerate the model, which compiles the model to c code to speed up the simulations. we can talk more about that later.
so that's basically the input part of the global sensitivity analysis that we've set up. the next thing we can do is define what the output of interest is for us. for this gout model, the output of interest is the serum uric acid level. in here, you can select whichever output you're interested in; if you were just interested in looking at the pk of lesinurad, you could use the central lesinurad concentration.
but in this case, we're interested in the clinically relevant output of the model, which is the serum uric acid. so i select that and click done, and then we can start simulating the model. i'll talk about this more in a bit, but the combination of four parameters and 1,000 samples means that we need to do 1,000 × (4 + 2) = 6,000 simulations.
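for reference, here is a sketch of the same run done programmatically with sbiosobol. the bounds below are placeholders, and i'm assuming the serum uric acid output is exposed under the name 'serum_uric_acid':

```matlab
% sketch: sobol gsa on four pd parameters; needs 1000*(4+2) = 6000 simulations
params = {'k1', 'fc50', 'fmax', 'e0'};
bounds = [0.05 0.15;               % hypothetical [lower upper] rows, one per parameter
          1.0  3.0;
          0.5  1.5;
          4.0  12.0];
sobolResults = sbiosobol(m1, params, 'serum_uric_acid', ...
    'Bounds', bounds, ...
    'NumberSamples', 1000, ...
    'OutputTimes', 0:0.5:180, ...
    'Doses', doses(1:2), ...
    'UseParallel', true, ...       % parallel pool, as in the app
    'Accelerate', true);           % compile the model to c code first
plot(sobolResults)                 % first order and total order indices over time
```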
so i'm going to go ahead and start that now, and in the meantime, i'm going to go back to the slides to discuss another new feature in simbiology that is relevant for this particular case: observables. the idea behind observables is that they supersede and extend the calculate statistics functionality.
you might be familiar with the calculate statistics functionality, where you simulate your model in the task editor in 2019a and prior, and you were able to calculate, for example, cmax; it always had to be a scalar value. observables build on top of that, but they don't have to result in a scalar; they can also be time-based. time-based observables might be relevant if, for example, you have different species that you want to add up to get the total tumor volume, or total drug concentration, or something like that.
and the idea is that you can apply observables to your simulation data. say you have done 100 simulations, and you just want the auc from each of those 100 simulations. so you have a 100-by-1 simdata array, and you can add an observable to that simdata array that just says: give me the auc, which equals trapz(time, central.drug_central). it will then give you 100 aucs. that makes your life a lot easier; you can do it in one line of code.
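a minimal sketch of that one line, assuming a 100-by-1 simdata array sdArray and the hypothetical species path central.drug_central:

```matlab
% sketch: attach an auc observable to existing simulation data; no re-simulation
sdWithAUC = addobservable(sdArray, 'AUC', 'trapz(time, central.drug_central)');
```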
now, some of you might realize that this could also be achieved by using repeated assignments. the difference between repeated assignments and observables is that observables are calculated after the odes are solved. they're not part of the system of equations, so observables cannot be variables that the system depends on. if your model depends on that tumor volume, for example, to change the tumor compartment volume, then it needs to be a repeated assignment. but if nothing in the model depends on it, it's better to use an observable because it's less computationally expensive.
observables can be applied to both your model and a simdata object. if you add an observable to a model and you simulate the model, the observable is automatically calculated as well. if you add the observable to simdata, it's calculated immediately and returns, for example, that auc.
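and the model-side variant, with the same hypothetical names; the observable is evaluated after the odes are solved, every time the model is simulated:

```matlab
% sketch: add the observable to the model itself, then simulate
addobservable(m1, 'AUC', 'trapz(time, central.drug_central)');
sd = sbiosimulate(m1);   % sd now carries the auc alongside the logged states
```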
ok, so i hope the observables make sense. let's go back to simbiology. it looks like it's done now, and we have our results here. when i did this earlier today, it took about 90 seconds to run these 6,000 simulations.
so here we have the results. we have in blue the first order sensitivities, sobol sensitivities, and in red the total order sensitivities. so what you can see is that e0 is clearly the most important parameter in the model, probably followed by k1, fmax, and then fc50.
now, if there were no interactions, the first order indices should add up to about 1 at every moment in time. you can see that that's not quite the case: there is a fraction of unexplained variance that is not 0. and so this gives you an indication that there is some interaction between your parameters.
there are other reasons this could be non-zero, such as numerical drift in your simulation, but the most common explanation is interactions. and you can see that there are interactions by comparing the total order values, for example, here, to the first order values.
and so if we compare these two, you can see that this one is higher than this one, and that's what you expect. you expect the total order to be either the same or higher than the first order because the total order is the sum of the first order, all of the second, third, et cetera, orders. and so by comparing these two, you can see which parameters show most interaction.
ok, so this gives us an idea of the global sensitivities over time. the output that we took was serum uric acid, which is a continuous output of the model that changes over time, and the waves that you see here are the different dosing events. what we can see here is that e0 looks more important at the earlier stages of the simulation than at the later stages.
the other thing we could have done, of course, is use a scalar observable, like the auc or the minimum value of the serum uric acid, as our model output. then we wouldn't have had a time course, but a single number for each of the first and total order indices.
another thing you can plot, and that's generally a good sanity check, is the data itself. if you plot the data, you can see the results of all the simulations, with the 90th percentile region in blue and some of the individual traces dotted. in red, you see the mean simulation value across all of the 1,000 samples that we took.
ok, so with that, we're going to move on to the multiparametric global sensitivity analysis (mpgsa). the idea behind mpgsa is that you use a classifier to analyze the relative importance of parameters. that classifier is a model-based decision metric based on the model outputs, but it has to result in a true or false outcome, so it has to contain an inequality. for example, the classifier can be "my pharmacodynamic effect is larger than 70%" or "the final concentration is larger than the mean of the final concentrations."
and so that way, you can basically classify your simulations between, do they meet that classifier? are they indeed larger than 70%? or do they fail? are they rejected?
you can use a combination of these: you can create a single classifier where the effect has to be larger than 70% and the final concentration has to be larger than its mean. or you can perform the multiparametric global sensitivity analysis with multiple classifiers at once and see which ones are relevant for you.
so here's an example of what it looks like. say you have a model and there are two parameters that you're varying, kel and ic50. you simulate the model for each pair of values, and you check whether the effect is larger than 70%. if so, the sample is accepted; otherwise, it's rejected. that way, we get a set of simulations that are either accepted or rejected.
from that, we can calculate an empirical cumulative distribution function (ecdf), and we do that for both the accepted and the rejected samples. you can see an example here for k1: at low values of k1, it looks like more of the samples are accepted, whereas at higher values, more of the samples are rejected.
what we can then do is calculate the maximal distance between the two functions, vertically, and use a kolmogorov-smirnov test to see whether the two distributions are statistically significantly different. the advantage of doing this over sobol is that you get a concrete answer: is there a significant difference or not? and you can, of course, choose the classifier metric to be relevant for your case.
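written out, the test statistic is the largest vertical gap between the two empirical cdfs:

$$
D = \sup_x \left| \hat{F}_{\text{accepted}}(x) - \hat{F}_{\text{rejected}}(x) \right|
$$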
now, there's one thing i haven't touched on and that's the threshold. that threshold, for example, 70%, that should be about the halfway mark on your simulation. so about 1/2 of the simulations should pass and 1/2 of those simulations should fail, in order to be able to construct those two cumulative distribution functions.
because if many more of them fail than pass, you're going to have few accepted samples, you're going to get a very jagged cdf, and your kolmogorov-smirnov test is not going to be reliable. so what you can do is use that plot of all your simulations, and, for example, use the red mean line to come up with a threshold.
ok, so how do we put this into practice? it's actually very simple. we can go back, and we can reuse the simulations: we already did the monte carlo simulation, so we can just reuse those runs. the only thing i need to define is my classifier.
so i type that classifier in, and i don't have to run the simulations again; i can just compute the multiparametric global sensitivity analysis.
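programmatically, a sketch of the analogous call, reusing the params and bounds from the sobol sketch above; note that the plain model-based call shown here re-simulates, unlike the app's reuse path, and the classifier threshold of 6 is a hypothetical target level:

```matlab
% sketch: mpgsa with a true/false classifier on the model output
classifier   = 'min(serum_uric_acid) < 6';    % hypothetical target level
mpgsaResults = sbiompgsa(m1, params, classifier, ...
    'Bounds', bounds, 'NumberSamples', 1000, 'Doses', doses(1:2));
plot(mpgsaResults)   % ecdfs of accepted vs rejected samples per parameter
bar(mpgsaResults)    % k-s statistics and p-values for each parameter
```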
here you see the results from the multiparametric global sensitivity analysis. you can see that for k1 and fmax there is now a clear separation between the two curves. for e0, they are miles apart, and for fc50, they're very close together.
and you can compare this to the histogram as well; the cumulative distribution function and the histogram are related. you can see, for example, for e0, that most of the accepted samples occur at lower values of e0, and that's why the blue line rises early, whereas the rejected samples only occur at higher values, and that's why the red line only rises above 0 at these higher values of e0. that's how the histogram and the ecdf are related.
what we're looking at with the kolmogorov-smirnov test is how different these cumulative distribution functions are. we can do that by calling bar on the results, which plots the k-s statistic for each parameter, shown here in blue, along with the p-value telling you whether the difference is statistically significant.
so you see that for fmax, e0, and k1, the effect is significant: the two distributions are significantly different. whereas for fc50, it doesn't reach the threshold of p < 0.05.
ok, so that concludes the multiparametric global sensitivity analysis. there is one more topic i want to discuss, and that is the number of iterations. in order to perform a global sensitivity analysis, you have to run a great many simulations, which is computationally expensive; but to get reliable results, you also can't undersample. so what is a good number of samples?
if we assume that n is the number of samples that we draw from our parameter input space and p is the number of input parameters, that is, the dimensionality of the parameter input space, then n should be at least 2 to the power of p. that is the absolute bare minimum.
if i have two parameters, then i take four samples. that only allows me to cover the corners of my input space. if i have three parameters, it's eight, so that's the corners of the cube. so really, what you want is a higher base number. so like, 3 to the power of p or 4 to the power of p.
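a quick back-of-envelope in code, using the heuristics just described:

```matlab
% sketch: heuristic sample counts for p parameters
p = 4;
nBareMinimum = 2^p     % 16: only resolves the corners of the hypercube
nBetterBase  = 4^p     % 256: a higher base, still modest for small p
% the recommendation below for p <= 5 is 1,000 to 5,000 samples
```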
you can see that as p increases above, say, 15, you're looking at a very large number of simulations. in general, if you're performing this with five or fewer parameters, i would recommend taking 1,000 to 5,000 samples, like what i did here: 1,000 samples for those four parameters.
so try that out. there are a few ways that you can tell you're undersampling. the first is if your sobol indices fall below 0 or rise above 1. here, the curve is just above 0, but if it were negative, i would be worried about undersampling.
the sobol index expresses what fraction of the total variance can be attributed to a single parameter, so it should always be between 0 and 1. values outside that range are a surefire symptom that something is wrong, and you should increase the number of samples.
another sign, of course, is if you rerun the analysis and get meaningfully different results, though you might not have the computational resources to run it multiple times. and lastly, for mpgsa, if you have a jagged or staircase-shaped ecdf, which i touched on a little earlier, you might have a poor choice of threshold (one that doesn't split the samples into roughly 1/2 accepted and 1/2 rejected), but it can also be due to insufficient sample size. if you see that jagged staircase, you should not trust the results from your mpgsa.
lastly, because we have to simulate all of the samples before we can calculate the sensitivity measures, all of the simulation results need to be held in memory to calculate the sensitivity indices. so if you minimize the memory footprint of each simulation, you can simulate more samples.
so how can you minimize the memory footprint? first, you can reduce the number of outputs: a single output, like the final concentration or final effect, greatly reduces memory use. second, limit the number of logged states, so make sure you're not logging all of your species, et cetera. and lastly, you can use an observable, because an observable can boil an entire simulation down to a single scalar, which allows you to run a larger global sensitivity analysis.
ok, so in summary, we talked about what we need to do to perform this sensitivity analysis. we need to define the domain, that is, the upper and lower bounds for each parameter. then we sample that domain and simulate the model for each sample. once we've simulated all of the samples, we get an ensemble of simulations, and from there we can calculate the sensitivity measures.
so, some of the things you should think about: which parameters do you want to include, and what is the range for each of those parameters? that range can be physiologically defined, or you can take the lowest and highest values you have found in the literature. then you're going to sample, so you need to choose a sampling method. the sampling has to be uniform so that the assumptions underlying the sobol index calculation are not violated.
the latin hypercube, sobol sequence, and halton sequence are all uniform sampling methods, but they have an advantage over random uniform sampling in that they require fewer samples while still covering the same input space. so i recommend using latin hypercube, sobol, or halton for your sampling. and then you also need to choose the number of samples.
then you're going to simulate the model, so you need to make sure you add the doses; if you don't add a dose, you're not perturbing the model, and you're not going to see any results. any variants need to be applied correctly. you need to define the output time points, the model output of interest, your classifier, et cetera, and think here again about your memory footprint.
and then lastly, sbiosobol and sbiompgsa, the two functions working under the hood of the app that i showed you, will calculate the first order and total order sobol indices, construct the ecdfs, and perform the k-s tests on the simulations that you run.
featured product
simbiology