batch process optimization with matlab video
this presentation considers an alternative construction of the design space based on experimental data and a grey-box model of the reactions. the model is then used to optimize the production process by varying the process variables. you can examine the effect of parameter uncertainty and variability on process performance with a monte carlo simulation.
highlights:
- reduce experiments required during the process characterization phase
- improve the fundamental scientific understanding of a process by building a model
- optimize or scale up the process by manipulating time-varying parameters
- perform uncertainty quantification and analysis, which is mandatory for quality by design (qbd)
hello, everyone. thank you for joining us today. my name is paul huxel, and i'm a senior application engineer with mathworks. since joining the team almost four years ago, i've visited and worked with many medical device and pharmaceutical sites, to help them use matlab to enhance their data analysis. i hope this introductory webinar on batch process optimization demonstrates how easily you can get started with matlab, as well.
but before we get into the how, i want to first show you what we'll be building towards in today's webinar. these images are from the pdf we'll be creating directly from a live script in matlab. live scripts are interactive documents that allow you to combine formatted text, equations, and images-- along with your matlab code and the output of that code-- to easily and thoroughly document your algorithms and results.
i'll describe today's problem in more detail in a moment. but in short, we'll be using data from several constant temperature batch runs to build a model that describes the process occurring during those runs. we'll then use this model to determine the optimal time-varying temperature schedule that will maximize the yield of this process. finally, we'll perform a quick monte carlo analysis to evaluate how temperature uncertainties might impact the yield.
to do this, we'll be using the typical data analysis workflow. if we're going to perform data analysis, we're going to need access to data. in today's example, we'll be loading data from spreadsheets, but this could also come from image files, other software and web applications, or directly from hardware and scientific instruments.
once we have our data in matlab, we'll see how easy it is to begin exploring and discovering various analysis and modeling techniques. in particular, we'll see how a wide range of domain-specific apps will help us quickly get started before writing any code. but to get the most out of our work, we'll want to share it with colleagues or customers. as you just saw, this could be as simple as publishing our code to a pdf.
however, other options include creating standalone or web-based applications so end users can access our work without requiring a matlab installation or license, as well as automatic code generation or cloud deployment for scalability. i'll talk a bit more about these options after the demonstration. and along the way, we'll see how we can use matlab to help automate this process.
but before we jump into matlab, i wanted to give you a quick overview of today's topic. one batch process that i enjoy every morning is brewing coffee. like all batch processes, this involves working with a limited inventory of a product at a time, in this case, just coffee beans and water.
we can think of optimization of a batch process as essentially trying to find the recipe, or conditions, that maximize or minimize a particular parameter, subject to process constraints. so in this case, we're trying to maximize flavor by adjusting process conditions, such as the quantity of each ingredient, how fine we grind the beans, and the brewing temperature and duration. but no matter how particular you are about your coffee, you've probably never worried about performing a full parametric sweep over the various permutations of process conditions.
you can imagine that even for this simple problem, that would be quite an undertaking. and as you know, commercial applications are much more complex. examples of commercial batch products include fermentation-based processes like beer and wine, dyes and pigments for textiles, paints and inks,
common food additives, fragrances for perfumes and essential oils, specialty chemicals, and of course, pharmaceuticals.
these are often products where developing and controlling precise process conditions is crucial to maintaining brand reputation and regulatory compliance. many industrial processes mimic those that occur naturally. for example, the fungus penicillium-- which causes food spoilage-- is also used to produce the antibiotic penicillin.
however, manufacturing processes need to be efficient and repeatable. this is especially true in the pharmaceutical industry, where any significant deviation from an approved batch formula could result in throwing away millions of dollars. so not only are we interested in finding the best formula to optimize a product yield or potency, but also in determining how best to control this process to adhere to this formula.
today's talk focuses on the former. but as i mentioned, we will use a monte carlo analysis to see how uncertainties in the process might affect the yield. bioreactors are used to help control industrial process conditions, to facilitate the desired biochemical reactions. these stainless steel tanks provide a controlled means to add and remove things from the reactor, such as nutrients and waste gases, respectively.
sensors are used to monitor and control conditions, such as pressure, temperature, oxygen, and ph. and aerators and agitators help mix solutions in a controlled way, so as not to damage growing cells. but in order to design and control processes, we need to know the rate of change of these processes, which could occur over seconds, hours, or days.
these changes may be physical, such as a diffusion process-- like the time it takes for water to get inside a coffee ground-- or they may be chemical, such as fermentation, where there's a biological agent that is ingesting something in the mixture. the time rate of change of the concentration of a species is usually a function of the current concentration of that species, as well as possibly the concentration of other species or other process variables, such as temperature, pressure, ph, and so on.
a simple example would be where the rate of change of a single substance, with respect to time, is the negative of some number k times the concentration itself. if k is a constant, this is just an exponential decay, which can be solved analytically. or, as is the case in the example we'll look at today, the rate constant of the reaction is often a function of temperature.
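as a point of reference, here is a minimal matlab sketch of that simple decay model, dC/dt = -k*C. the values of k, the initial concentration, and the time grid are made up for illustration; the snippet just shows that the closed-form solution and a numerical solver agree.

```matlab
% simple decay model dC/dt = -k*C (illustrative values, not from the webinar)
k  = 0.5;                                  % assumed constant rate
C0 = 1.0;                                  % assumed initial concentration
t  = linspace(0, 10, 101);                 % time grid
C_analytic = C0*exp(-k*t);                 % closed-form exponential decay
[~, C_num] = ode45(@(t,C) -k*C, t, C0);    % numerical solution for comparison
plot(t, C_analytic, '-', t, C_num, 'o')
legend('analytic', 'ode45')
```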
these can be extremely complex, non-linear functions. so these models are usually empirically derived from data. the model is important, because if we don't have a model of the process, the only way to find the best recipe would be trial and error. and with so many variables, it could take a prohibitive amount of time to test all the different permutations, even with only a few species.
the reaction kinetics we'll consider in today's introductory demonstration have intentionally been formulated in a simple manner, so that we can focus on the overall problem set up and workflow without getting bogged down in detailed differential equations. with that in mind, we'll only be considering the concentration of two substances-- a nutrient that acts as a food source, and the biomass of an organism that grows by consuming this nutrient.
as you can see here, the nutrient is being modeled such that it decomposes over time due to environmental variables, such as temperature. in reality, it would also be a function of the concentration of biomass available to consume it. but for this simple model, we'll assume that effect is negligible by comparison.
however, we will model the change in concentration of the organism biomass a little more realistically, such that it has both a growth and death component. the organism will begin eating and reproducing based on how much nutrient is available. since the cells have a finite lifetime, they will then die off in proportion to the current biomass concentration, which is a fairly common model for a biological system.
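to make the model concrete, here is a minimal sketch of the right-hand side just described. the function and variable names are mine, not the webinar's, with k(1), k(2), and k(3) standing for the nutrient decay, biomass growth, and biomass death rates.

```matlab
% a sketch of the two-species model described above (names are illustrative)
function dydt = simpleBatchModel(~, y, k)
    % y(1) = nutrient concentration, y(2) = biomass concentration
    % k(1) = nutrient decay rate, k(2) = growth rate, k(3) = death rate
    dydt = [ -k(1)*y(1);                      % nutrient decomposes over time
              k(2)*y(1)*y(2) - k(3)*y(2) ];   % biomass grows on the nutrient, then dies off
end
```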
hence, at some point, the food will start to run out and the death process will begin to dominate, such that eventually, the biomass concentration would also decay to zero. in this figure, i'm showing the nutrient and biomass time histories from a constant temperature run. our objective is to determine the temperature varying process schedule that optimizes a key performance indicator, or kpi, such as product yield.
to do so, we'll select a simple kpi that represents the maximum concentration that is ever reached in the biomass curve. as such, once the controller notices the biomass is no longer increasing, the process can be stopped to obtain the maximum yield. with our problem now defined, let's jump into matlab and get started.
for those new to matlab, allow me to give you a quick tour of the four main areas you'll see when it opens. the current folder is the first place matlab is going to look for scripts and other files. it shows all of the files at the path shown in the address bar, here.
with git or svn integration, you can also quickly see each file's source control status. the command window allows you to immediately execute commands and run scripts. variables created by these commands or scripts will show up in the workspace. the workspace shows the variables currently in matlab memory, and provides information on their dimension and data type. these variables are then available for use in other commands or scripts.
finally, the toolstrip at the top organizes matlab functionality within a series of tabs. we'll talk more about these as we use them. today, we'll be walking through the live script that was used to create the pdf you saw earlier. as you can see, i've already commented the code with rich text formatting, and added a table of contents for easy navigation.
the gray shaded areas contain the matlab code. the rest is just to document our methodology and results. recall from the workflow, the first thing we'll need to do is access data. as you can see in the current folder, we have a data folder containing several spreadsheets with the nutrient and biomass time histories, resulting from an experimental campaign that included six different constant temperature batch runs.
if we were working with a single file, we could begin by using the interactive import data tool found on the home tab. instead, we'll use a datastore to automatically gather information about all the available spreadsheets. we'll later use this datastore to read in all the data files in a single command.
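a minimal sketch of that datastore setup, assuming the spreadsheets live in a folder named data:

```matlab
% gather every batch spreadsheet with one datastore (folder name assumed)
ds = spreadsheetDatastore("data");   % points at all spreadsheets in the data folder
preview(ds)                          % inspect the first few rows of the first file
batchData = readall(ds);             % later: read every file into a single table
```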
but first, let's look at the dynamics of a single run. to make this easier, i've inserted an interactive control to allow the user to quickly select a file of interest from a drop down menu. when we do so, notice that a new variable, with 31 rows and four columns, appears in the workspace.
we can then extract individual columns from this table, using dot notation. selecting these new variables from the workspace, we can quickly visualize the nutrient dynamics using the plots tab. matlab then displays all the relevant plotting options. we'll begin with a simple line plot.
notice that when we did this, matlab automatically displayed the corresponding command in the command window. let's also add the biomass dynamics as a subplot, by opening the figure palette view
and dragging these variables to the new axes. we could then use the property editor to customize our plots.
for example, we'll add labels to each x- and y-axis, add grid lines, and select the marker and line color for the nutrient and the biomass. once we're finished customizing our plot, we can generate the corresponding matlab code. so we do not need to repeat this interactive process every time we get new data.
i've already saved this code as a function, named plot dynamics, which can then be called from our live script. as expected, the nutrient dynamics look like an exponential decay. we can quickly fit a model to this data, using one of matlab's built-in apps.
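a helper like that plot dynamics function might look roughly like this; this is a sketch with assumed argument names, not the actual generated code.

```matlab
% roughly what a generated plotting helper might look like (hypothetical names)
function plotDynamics(t, nutrient, biomass)
    subplot(2,1,1)
    plot(t, nutrient, 'o-')
    xlabel('time'), ylabel('nutrient concentration'), grid on
    subplot(2,1,2)
    plot(t, biomass, 's-')
    xlabel('time'), ylabel('biomass concentration'), grid on
end
```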
the apps tab offers many interactive apps to help you get started with capabilities, such as machine and deep learning, image and signal processing, test and measurement, computational biology, or in our case, curve fitting. once we've selected the x and y data from the appropriate variables in the workspace, we can quickly try fitting various models, such as polynomials, fourier series, a custom equation, or in this case, an exponential.
the app will then automatically compute the corresponding model coefficients with 95% confidence bounds, and display various goodness of fit metrics. as is the case with many of matlab's interactive tools, we'll once again generate the corresponding matlab code to automate this task for future use. i've already saved this code as a function, named create fit, that will be used in our live script to create a model that can be evaluated to produce the fitted nutrient curve at each time step.
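at its core, a generated fitting function like that wraps a call along these lines; the variable names are assumed, and the fit function requires curve fitting toolbox.

```matlab
% exponential fit to the nutrient data (curve fitting toolbox)
[f, gof] = fit(t, nutrient, 'exp1');   % fits nutrient = a*exp(b*t), with confidence bounds
nutrientFit = f(t);                    % evaluate the fitted curve at each time step
```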
alternatively, instead of fitting an exponential to the data, we could have used the fitlm function to perform a linear regression on the log of the nutrient data. if you're ever unsure of how to use a function, the matlab documentation contains thorough descriptions, detailed references, and many examples to help you get started. once we've created a regression model, we can then make predictions at each time step and transform the result back from the log space.
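a minimal sketch of that log-linear alternative, assuming t and nutrient are column vectors:

```matlab
% linear regression on the log of the nutrient data (statistics and machine learning toolbox)
mdl = fitlm(t, log(nutrient));            % the log transform turns exponential decay into a line
nutrientFit = exp(predict(mdl, t));       % predict, then transform back from log space
kDecay = -mdl.Coefficients.Estimate(2);   % the slope recovers the decay-rate coefficient
```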
likewise, once we transform the computed regression coefficients, you'll see we get results similar to what we got with the exponential fit. for the biomass, i've added a live task to allow us to interactively smooth the data. live tasks can be found on the insert tab, to help you get started with common capabilities without needing to leave your script to refer to documentation.
once you're happy with your selections for the task, you can display the matlab code to learn how to perform these tasks programmatically in the future. we can then add all of these results to the previous figure we created, using the batch run number four data. so far, we've created functions to visualize and fit data for a constant temperature run.
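for reference, the code such a smoothing task generates boils down to something like the line below; the method and window size here are assumptions, not the webinar's settings.

```matlab
% moving-average smoothing of the biomass measurements (illustrative settings)
biomassSmooth = smoothdata(biomass, 'movmean', 5);   % window of 5 samples
```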
but our real objective is to use this constant temperature data to create a model that accounts for temperature variation. to start, we'll use the readall command to load all of the data contained in our spreadsheet datastore, and once again, extract the time, nutrient, biomass, and temperature columns. we can then explore the effect of temperature by plotting the maximum biomass for each constant temperature run.
it appears the optimal constant temperature occurs at 22 degrees celsius. we can confirm this by examining the max value of a piecewise cubic interpolation of these data points. to do so, we'll use the interp1 function, and evaluate the result every quarter of a degree between the minimum and maximum temperatures.
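as a sketch of that interpolation step, with placeholder numbers standing in for the six trial results (these values are purely illustrative, not the webinar's data):

```matlab
% peak biomass from each constant-temperature run (placeholder values for illustration)
temps  = [16 18 20 22 24 26];                    % assumed trial temperatures [degc]
maxBio = [0.35 0.45 0.52 0.58 0.55 0.48];        % assumed peak biomass per run
Tq     = min(temps):0.25:max(temps);             % evaluate every quarter degree
bioQ   = interp1(temps, maxBio, Tq, 'pchip');    % piecewise cubic interpolation
[bestBio, idx] = max(bioQ);
bestTemp = Tq(idx)                               % best constant temperature
```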
we now have some confidence that 22 degrees is the optimal constant temperature schedule. but that does not imply it will produce a greater biomass yield than a time-varying temperature schedule. to investigate this, we'll posit and examine the simple process model we discussed earlier, written here in matrix form.
so the k1 kinetic parameter is analogous to the b coefficient we computed earlier. however, this time, we'll consider it to be temperature dependent. recall, the k2 parameter models the growth of the biomass, while the k3 parameter models its death cycle.
to test our model, we'll arbitrarily choose three values for these kinetic parameters-- such as 1, 1, 1-- and then integrate the system, using a fourth- and fifth-order runge-kutta ordinary differential equation solver (ode45). using our plot dynamics function again, we see the resulting dynamics look very similar to what we saw previously. in particular, the nutrient displays an exponential decay, while the biomass grows until the nutrient is significantly depleted, and then begins to die out.
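reusing the model sketch from earlier, the test integration might look like this; the initial concentrations and time span are assumptions.

```matlab
% integrate the test model with arbitrary kinetic parameters
k  = [1 1 1];                                   % arbitrary test values for k1, k2, k3
y0 = [1; 0.01];                                 % assumed initial nutrient and biomass
[t, y] = ode45(@(t,y) simpleBatchModel(t, y, k), [0 10], y0);
plotDynamics(t, y(:,1), y(:,2))                 % nutrient decays; biomass grows, then dies off
```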
now that we have a model, we can use this data to estimate the kinetic parameters. since this is simulated data, we can then compare our kinetic parameter estimates with the true values of 1, 1, 1. with that goal in mind, i've created a function to estimate the kinetic parameters.
since i've set this up as a live function with embedded equations, if a user asks for help on this function, they'll see richly formatted documentation similar to matlab's built-in functions. this function estimates the parameters by reformulating the process model and numerically estimating the derivatives of the nutrient and biomass. in this way, the only unknown quantity is the vector k, which can then be found by fitting the equation with a linear regression model using the fitlm function, similar to what we did earlier.
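the estimation function itself isn't reproduced here, but the core idea can be sketched as follows, assuming t, nutrient, and biomass are column vectors from one simulated run:

```matlab
% estimate k1, k2, k3 by linear regression on numerically estimated derivatives
dNdt = gradient(nutrient, t);                 % numerical derivative of the nutrient
dBdt = gradient(biomass,  t);                 % numerical derivative of the biomass
n    = numel(t);
% stack both model equations so the kinetic parameters are the only unknowns:
%   dN/dt = -k1*N,   dB/dt = k2*N*B - k3*B
y = [dNdt; dBdt];
X = [ -nutrient,    zeros(n,1),         zeros(n,1);
       zeros(n,1),  nutrient.*biomass, -biomass   ];
mdl  = fitlm(X, y, 'Intercept', false);       % no intercept term in the model
kEst = mdl.Coefficients.Estimate              % should be close to [1; 1; 1]
```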
we can then return to our live script and compare the estimated kinetic parameters to the known values. now that we have confidence in our approach, we'll loop through all six constant temperature trial runs to estimate all three kinetic parameters at each temperature. we'll then use these estimates in our schedule dynamics to interpolate the kinetic parameter values, as a function of temperature.
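one way to wire those interpolated parameters into the schedule dynamics, as a rough sketch (temps, kEstAll, tSet, Tset, and y0 are assumed names, not the webinar's):

```matlab
% kinetic parameters as a function of temperature, via piecewise cubic interpolation
kOfT   = @(T) interp1(temps, kEstAll, T, 'pchip');  % kEstAll: 6x3, one row [k1 k2 k3] per trial
Tsched = @(t) interp1(tSet, Tset, t);               % temperature schedule defined by set points
odefun = @(t, y) simpleBatchModel(t, y, kOfT(Tsched(t)));
[tOut, yOut] = ode45(odefun, [0 10], y0);
kpi = max(yOut(:,2))                                % kpi: maximum biomass reached
```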
with that, we now have everything we need to compare various temperature schedules. to better understand our process, let's begin by comparing the optimal constant temperature schedule we saw earlier with an arbitrary time varying temperature schedule, such as the oscillating temperature seen here. to do so, we'll integrate our process model for each of these schedule dynamics.
as expected, the optimal constant temperature schedule results in a maximum biomass of just under 0.6, while the arbitrary time varying temperature schedule results in a maximum biomass closer to 0.7. of course, arbitrarily choosing a temperature schedule is not an efficient way to optimize our process. but it has confirmed that we can increase our kpi by varying the temperature.
upon closer examination, we see the lower initial temperature seems to preserve the nutrient longer without significantly impacting the initial biomass growth. conversely, we later see that when the temperature drops again, the biomass growth quickly plummets. so there appears to be a benefit in keeping the temperature low initially to preserve nutrients when there's little biomass, and then later increasing the temperature to promote biomass growth.
with these insights, we're now ready to set up an appropriately constrained optimization problem, to find the temperature schedule that yields the maximum biomass. we'll begin by creating an optimization problem using the optimproblem command. we can then specify the constraints and objective using expressions of optimization variables.
in this case, we'll create an optimization variable for temperature, with lower and upper bounds based on the minimum and maximum temperatures used in our trial runs. we'll then use this variable to set the constraints on the magnitude and direction the temperature can change between set points in the schedule. in particular, we'll set the minimum change in temperature to be greater than or equal to 0, such that it is always increasing.
and then, we'll set the maximum change in temperature to be consistent with the thermodynamic specifications of the bioreactor. i've created a function named schedulecost to specify the objective. this function integrates the schedule dynamics to compute the resulting kpi, which, recall, is the maximum biomass achieved. since the optimization problem will attempt to minimize the cost, we'll set the cost to be the negative of the kpi, such that the kpi is maximized.
next, we'll initialize the optimization search using the optimal constant temperature schedule from earlier, and then view the problem formulation before running the solver. executing the solve command will yield a solution in about a minute. but in the interest of time, i'm going to load results i obtained previously. extracting the temperature from the optimal solution, we see the temperature changes between set points satisfy our specified magnitude and direction constraints.
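pulling those steps together, a minimal sketch of the problem setup might look like this; the number of set points, the per-step temperature limit, and the schedulecost signature are all assumptions.

```matlab
% problem-based optimization of the temperature schedule (optimization toolbox)
nPts = 10;                                            % assumed number of set points
T    = optimvar('T', nPts, 'LowerBound', 16, 'UpperBound', 26);   % bounds from the trial runs
prob = optimproblem;
prob.Constraints.increasing = diff(T) >= 0;           % temperature never decreases
prob.Constraints.maxStep    = diff(T) <= 2;           % assumed limit on change per set point
prob.Objective = fcn2optimexpr(@scheduleCost, T);     % scheduleCost returns -max(biomass)
T0.T = 22*ones(nPts, 1);                              % start from the best constant schedule
sol  = solve(prob, T0);
Topt = sol.T;                                         % optimal time-varying schedule
```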
finally, we'll integrate our process model dynamics using this optimal time varying temperature schedule. comparing the resulting kpi with the optimal constant temperature schedule, we see that the new time varying temperature schedule results in a 44% increase in biomass yield. however, we know in the real world, we may not be able to precisely control the temperature to perfectly match the optimal temperature schedule.
we can set up a monte carlo analysis to examine how temperature control uncertainties might impact our biomass yield. for example, suppose the temperature at each set point can only be controlled to within two degrees, one sigma. we'll simulate this by perturbing our temperature schedule at each set point, by adding a normal random error with a standard deviation of 2 degrees.
to statistically quantify the impact of this control error, we'll repeat the process for 250 runs, and calculate the mean and standard deviation of the resulting kpi distribution. since we're using a simple process model, this only takes a few seconds. however, if this became a computational burden due to increasing the dynamics complexity or the number of monte carlo runs, we could speed up our analysis using matlab's parallel computing capabilities to perform these runs in parallel on a cluster, or in the cloud.
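a minimal sketch of that monte carlo loop, reusing the assumed Topt and scheduleCost names from the optimization sketch above:

```matlab
% perturb each set point with a 2-degree (one-sigma) control error and collect the kpi
nRuns  = 250;
sigmaT = 2;
kpi    = zeros(nRuns, 1);
for i = 1:nRuns
    Tpert  = Topt + sigmaT*randn(size(Topt));   % normally distributed temperature error
    kpi(i) = -scheduleCost(Tpert);              % negate the cost to recover max biomass
end
meanKPI = mean(kpi)
stdKPI  = std(kpi)
```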
as you might expect, if we're unable to follow the optimal temperature schedule precisely, we will see a reduction in the maximum biomass achieved. fitting a normal distribution to the resulting histogram, we see the mean value is reduced from 0.8 to about 0.7, with a standard deviation of about 0.03. finally, the last step in the workflow is to share our work. in this case, we will quickly document our algorithm development and analysis by saving our live script as a pdf.
i hope this example gave you a feel for just how quickly and easily you can get started using matlab to enhance your data analysis. as you see here, we only covered a very narrow path through this workflow. so i'd like to take a moment to highlight some of the capabilities that we missed.
matlab offers a single integrated platform for your entire workflow. today, we imported excel spreadsheets. but we also offer interactive tools and hardware support packages to make accessing data easy from other applications, databases, and scientific instruments. for analysis and development, additional capabilities include machine and deep learning, signal and image processing, biological sequence analysis, and simulation in both the time and frequency domains, just to name a few.
and as we briefly saw earlier, in each of these domains, matlab offers interactive tools and deep solutions to help jump start your analysis and development. but you may have also noticed there are apps to help you share and deploy your work, as well. there are two main pathways to do this-- application deployment and automatic code generation.
in both cases, you'd once again begin by developing models, algorithms, or applications in matlab. which sharing path you choose then depends on the requirements of your system. application deployment allows you to create graphical user interfaces and software libraries that can be integrated with enterprise and cloud systems.
with this option, the functionality of matlab is still effectively running behind the scenes, but the end user does not require a matlab license. automatic code generation, on the other hand, allows you to run your analytics on embedded hardware, such as a medical device or manufacturing line. in this case, your matlab algorithms are converted to readable, portable code in the language of your specified hardware, such as a real-time target, fpga or asic, plc, or gpu.
with deployment in mind, one question we often get asked is if matlab is validated by the fda or other regulatory agencies. we've created a q&a web page and tool validation kit to help guide you through this process. here, you'll also find a summary of the fda mathworks research and collaboration agreement, as well as contact information if you have additional questions or need consulting help.
today's webinar gave you a quick introduction to what matlab has to offer. we saw how easy it is to get started with matlab. thorough documentation, with applicable examples, means you don't have to start coding from a blank page. you also have access to product and industry experts via technical support and application engineers, as well as an expansive user community via the matlab answers and file exchange sites on our web page.
we saw how apps can be used for interactive algorithm development and automatic matlab code generation, which boost productivity and allow for rapid prototyping. furthermore, once you're ready to deploy your algorithms, you have many options such as creating standalone and web applications, software libraries to integrate with enterprise and cloud systems, or automatic code generation for embedded devices to save time and reduce coding errors.
you can get started today with our free self-paced onramp tutorial. this online interactive tutorial uses hands-on exercises with automated assessment and feedback, to teach you the essentials of matlab in just a couple hours. from there, you can build your skills with several other free self-paced onramps, with popular topics such as machine and deep learning, image and signal processing, and control design with simulink.
if you'd like to dive deeper than the onramps, we also offer a wide variety of self-paced trainings and instructor-led courses. these courses have typically been a mix of in-person and online offerings. but over the past year, we've expanded our instructor-led online trainings, which have been very well received.
these courses can also be customized to meet your specific needs. and when time is critical, we also have consulting services available around the world. our consulting is very transparent, with the goal that your team owns and operates the resulting work.
we customize services based on your needs, to optimize your investment and ensure your success. our consulting has helped many customers jump start their project by applying our proven best practices,
deep product knowledge, and broad technical experience. you can learn more about our products and solutions on our website, at mathworks.com. thank you for your time and attention today.