predictive maintenance of a steam turbine - video
boitumelo mantji, opti-num solutions
peter randall, sasol
the accumulation of salt deposits on turbine blades can significantly impair turbine efficiency, resulting in a process bottleneck that increases operational costs. hear how sasol and opti-num solutions engineers collaborated to determine the optimal wash time to prevent bottlenecking, and how they implemented an end-to-end predictive maintenance workflow using matlab®. learn how they estimated the remaining useful life of a turbine and deployed the application so that decision makers can access up-to-date data to define optimal wash schedules.
good day, everyone. thanks for attending this webinar, and welcome. this will be a webinar on the optimization of steam turbine maintenance and scheduling using a predictive maintenance workflow. i'm one of your hosts or presenters today: peter randall, a rotating equipment engineer here at sasol, and i'll be co-presenting with boitumelo mantji from opti-num solutions.
good morning. my name is boitumelo mantji and it is a pleasure to be here. i'm an electrical and biomedical engineer and i work as a solutions engineer at opti-num solutions. as part of my role i get to work quite closely with a lot of customers within the mining and manufacturing space, helping or supporting them in solving various problems or even introducing smart technologies into their operations. this presentation will display an instance of just that. thanks peter.
thanks boitumelo. so moving on to the agenda. so we'll cover a bit of a background, some content on fouling and how that's important to us, fundamentals of steam turbine operation, and then we'll run you through the predictive maintenance workflow, looking at the preprocessing, the analysis, model development, and then the deployment, and we'll conclude with some comments.
so just to start off with a little bit about sasol. sasol is a global chemicals and energy company. we have operations in 30 countries, we are one of the largest producers of synthetic fuels in the world, and we also have the world's broadest integrated alcohol and surfactants portfolio. i'm one of their rotating equipment engineers, located at our secunda facility, which is quite a large facility covering a geographical area of more or less 20 square kilometers.
so now i'm going to go forward into the equipment that we're dealing with in today's project. it comprised a series of seven compressor-turbine trains, each with a rating of about 13.5 megawatts. each is a condensing steam turbine running at about 7,000 rpm with a speed controller.
and then additionally we also have a pressure controller, which will control our wheel chamber pressure when it reaches 2,550 kpa. so that'll basically cut back the steam flow to limit that pressure. so then the problem statement is that these particular steam turbines suffer from fouling, and that fouling occurs over a period of normally between about eight months and a year.
and we essentially want to bring more planning and predictability to this, because when we start cutting back on that wheel chamber pressure we start to limit the steam coming through the machine, and that causes a bottleneck on the compressor side of things.
so then i'm going to go through a little bit about fouling. as you can see, there's a photograph of fouling and how it appears inside the turbine. it basically occurs when the steam goes from a dry condition to a wet condition, so from superheated to saturated conditions. at that point the salts get knocked out of the steam, and they build up on these turbine blades as a result.
so what fouling then does inside the machine is it increases the resistance, which directly affects the wheel chamber pressure. it also changes or modifies the blade profiles, which decreases the isentropic efficiency. and a decrease in isentropic efficiency results in a slightly increased steam flow. i'll explain the theory behind this a bit more in a moment.
then, with an increase in steam flow, we will potentially also see an increased outlet pressure. and with the increased wheel chamber pressure we have a larger pressure drop across the whole machine, which will potentially result in a larger pressure force, and that needs to be absorbed by the thrust bearings.
so going forward to a mollier diagram, or enthalpy-entropy diagram. here we can see two cases: the isentropic expansion, shown in black, and the real expansion process, shown in red. what this shows is that in a real steam expansion process, going from some superheated condition into a saturated condition, we get an increase in entropy.
and this is also visually represented here as a slightly increased exhaust entropy, but as you can see the energy extracted is the difference between those two. so the more you increase your entropy, the less energy is extracted from the steam, and so you would need more steam to do the same amount of work. this is sometimes a little bit easier to see on a temperature-entropy diagram.
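as a rough textbook aid to what peter is describing, the usual isentropic-efficiency relations make the link between efficiency and steam flow explicit (the symbols below are standard notation, not taken from the talk):

$$
\eta_{is} = \frac{h_1 - h_2}{h_1 - h_{2s}}, \qquad
\dot{m} = \frac{P_{\text{shaft}}}{h_1 - h_2} = \frac{P_{\text{shaft}}}{\eta_{is}\,(h_1 - h_{2s})}
$$

so for a fixed shaft power, any drop in $\eta_{is}$, or a smaller ideal enthalpy drop because the exhaust isobar has risen, pushes the required steam flow $\dot{m}$ up.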
i've plotted three different cases: the isentropic expansion as you saw before, the real expansion to the same isobar, which is a constant-pressure line, and then the real expansion onto an increased isobar as a result of the fouling condition. so you can see that we've moved more to the right, gaining even more entropy, but we've also climbed up onto a different isobar.
and this is just exaggerated in this case for illustration purposes. but effectively that is the transition that you have, and that is why, as a result of the fouling, you have, number one, an increased flow, but also these other changes that we see. so then, going into the project itself, we had a budget of about 100 consulting hours for use with opti-num.
and the aim of this project was to analyze the past performance and the effect of our maintenance interventions, here being the turbine washes, to predict when these interventions will next be required, and to deliver a model able to predict these interventions that we could then deploy within our environment. so here i'm going to hand over to boitumelo to take you through the predictive maintenance workflow.
thanks peter. so at this stage, we have a clear view of what the problem is and what we're trying to achieve, and so now we're going to move on to the solution we implemented. the solution comprises various stages, which are shown in this workflow, and as i go through each stage in more detail, you will uncover the various challenges we faced in building out the solution.
the first stage entailed accessing and preprocessing the data, so essentially getting the data into a format that's suitable for building a predictive model. once the data was clean, we performed some analysis on the data to extract features descriptive of turbine performance, and this, together with peter's domain knowledge, enabled us to define the operating bounds of each turbine.
once we had characterized what healthy turbine behavior looks like we moved on to building our model. and then finally, the deployment stage, which entails integrating the functionality into sasol systems. so effectively making the technology accessible to the operators and decision makers, which peter will cover in more detail later on in the presentation.
now we're going to look more closely at the data access and preprocessing stage. sasol provided us with process data for seven of their turbines. this was six years of data collected from 2012 to 2018 at 20-minute intervals. this project was actually completed early in 2019, so this was the most recent data we had at the time. initially, four variables of interest were identified, being variables that could characterize turbine behavior: wheel chamber pressure, speed, steam throughput, and vacuum pressure.
but through leveraging peter's domain knowledge of turbine systems we ended up ignoring the last variable, the vacuum pressure. the reason is that vacuum pressure has a strong correlation to steam flow, which is one of the variables we were already looking at. knowledge of this saved us a lot of time because it meant we didn't need to look at possibly applying dimensionality reduction techniques to the data.
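as a small illustration of that kind of check, one could quantify the overlap before dropping a variable; the table and column names here are assumptions, not the project code:

```matlab
% small illustrative check: quantify how strongly vacuum pressure tracks steam
% flow before dropping it. the table and column names are assumptions.
T = readtable('turbine_history.csv');        % assumed export of historian data
R = corrcoef(T.SteamFlow, T.VacuumPressure, 'Rows', 'complete');
fprintf('correlation between steam flow and vacuum pressure: %.2f\n', R(1,2));
% a coefficient near +/-1 suggests vacuum pressure carries little independent
% information, so it can be excluded from the feature set
```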
we were also made aware of the operating conditions of the turbine, where the wheel chamber pressure has to be in the range 1,600 to 2,550 kilopascals and the speed has to be above 6,700 rotations per minute. so if we take a look at the graph on the right, which is a plot of speed against time for a single turbine over that six-year period, it is the operation in this region that gives us information about the performance or efficiency of the turbine.
and remember, what we're trying to do is predict the remaining useful life of the turbine, in other words, when we will next need to maintain the turbine. and that deviation from normal operation can only be seen within this region. so for us to estimate the remaining useful life of the turbine, we needed to analyze the life cycle data of each turbine, and we can think of a life cycle as the period of time between two consecutive washes.
but one of the challenges we faced is that there weren't any records or logs of when previous washes occurred over the six years. and so we had to work quite closely with peter to see if it would be possible to tell when washing occurred based on the data. because of this, we analyzed the low speed data, which is the data in the green region.
low speed data in general is representative of ramp-up and ramp-down time, overhauls, and washes. we chose to look at the low speed data in isolation, so this scatter plot is only the data below the 6,700 rpm threshold. we did this because we knew what we were looking for would be in that region, and by removing the high speed data we had less data to process, which saved us a lot of time. at this point, the aim was to look through the low speed data and see if we could detect when washes occurred, so we could analyze the turbine cycles more closely.
after even more conversations with peter (poor guy), he pointed out the turbine speed behavior when it's undergoing a wash. and in order to show that behavior visually, we plotted the scatter plot as a line graph. this red line represents a time when a wash could have occurred, and i'm simply going to zoom into the circled area to give a better view of the turbine behavior.
so what peter explained is that when you wash a turbine the machine undergoes a series of ramp ups and downs. you can see here it's turned off, then it's ramped up to run at about 800 rpm, then it's run down again. and this process takes about eight to 12 hours. and this pattern is actually characteristic of washing a turbine and it's something we never would have known without bothering peter again.
so we found that working really closely with peter, coupling our modeling and data science experience with his experience as the domain expert, worked really well in terms of influencing a lot of the design decisions that were made. from this, we were able to develop an algorithm to detect that pattern throughout the data set, and this enabled us to have a log or record of when past washes occurred.
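to make the idea concrete, a simplified matlab sketch of that detection logic might look something like the following; the thresholds, table, and variable names are assumptions for illustration, not the algorithm that was actually deployed:

```matlab
% simplified sketch of the wash-detection idea: find contiguous low-speed
% periods of roughly 8-12 hours that include the short ramp to ~800 rpm.
% the table 'data', its column names, and the thresholds are assumptions.
speedThresh = 6700;                 % rpm, below this is treated as low-speed data
washRpm     = 800;                  % rpm, the short ramp seen during a wash

t     = data.Timestamp;             % datetime vector from the historian export
speed = data.Speed;                 % rpm

isLow  = speed < speedThresh;
d      = diff([0; isLow(:); 0]);    % mark starts/ends of low-speed intervals
starts = find(d == 1);
stops  = find(d == -1) - 1;

washStart = datetime.empty;
for k = 1:numel(starts)
    idx = starts(k):stops(k);
    dur = hours(t(idx(end)) - t(idx(1)));
    hasRamp = any(speed(idx) > 0.5*washRpm & speed(idx) < 1.5*washRpm);
    if dur >= 8 && dur <= 12 && hasRamp
        washStart(end+1,1) = t(idx(1));   %#ok<SAGROW> % log the detected wash
    end
end
```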
once we extracted those dates, we discarded the low speed data, and this allowed us to focus on the data that spoke to the operation of the turbine: the high speed data. at this stage, we have cleaned our data and we know when previous washes occurred, so the next step was to analyze the turbine life cycles. to be able to visualize and understand how the behavior of a turbine changes as it reaches the end of its life, we used the wash logs to effectively segment the data into a series of life cycles for that specific turbine.
this graph shows the data collected within a single life cycle of a turbine. earlier i mentioned that we had three variables of interest for characterizing turbine behavior: speed, wheel chamber pressure, and steam throughput. we looked at various visualizations of the data, so 3d and 4d plots of different sensor combinations and ratios, trying to identify the key features. one of these was looking at speed against wheel chamber pressure and how this varied over time, which is shown by this color bar.
so the blue region would indicate the time right after a wash, and the yellow would indicate the time just before a wash needs to occur. what we noticed was that as we approach the next wash, as the turbine gets more fouled, the wheel chamber pressure rises and the speed drops, and this is consistent with the theory. the kind of plateauing behavior that we see in this region is the result of some of the control measures that sasol has in place to prevent the pressure from rising past a certain point.
then we looked at the other life cycles of the same turbine. so for example, this turbine was washed five times within that six-year period, and so five life cycles can be observed. we used this view to see if the features mentioned previously, which were identified as being indicative of fouling, were in fact consistent across the other life cycles.
so now we're taking a closer look at one life cycle and bringing in that third variable, the steam throughput. the reason is that bringing in steam throughput provided another level of separation of the data points, particularly when looking at the color gradient, which represents the period of time between two washes.
in this view, what we see is that the steam throughput is rising. from a practical point of view, the reason this happens is that as salts continue to accumulate on the blades of the turbine, more and more steam is required to deliver the same work. but eventually a point is reached where the expected throughput can no longer be maintained, because the throughput gets pulled down by the pressure limit, which caps the steam flow.
and it is at this point that we know our system is operating inefficiently. from this we could determine what the operating threshold needed to be, which is this red star. this is where we don't want to be operating, or anywhere too close to this region. and we can see that point corresponds with the wheel chamber pressure threshold, which is chosen by the operator, and the point where we start seeing the decrease in speed.
so once we are within certain bounds of the operating threshold, this is when we typically want to wash the turbines. so we can start operating in the blue region again. so just to recap, at this stage, we have identified our operating threshold. in other words, the point we don't want to reach. but we know that as time progresses, so as the turbine reaches the end of its life cycle, we get closer and closer to that threshold.
meaning that the 3d distance between the data points and the threshold will decrease over time. so we started by calculating the distance between each data point and the threshold, and that distance is what this graph represents, where the y-axis is the log of the distance, the x-axis is time, and the color band in this instance represents the wheel chamber pressure.
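a rough sketch of how that distance could be computed in matlab is shown below; the threshold point, scaling, and variable names are illustrative assumptions rather than the project's actual values:

```matlab
% rough sketch: normalize the three features, measure how far each sample in a
% life cycle sits from the chosen operating threshold, and take the log.
% 'cycle' is an assumed table for one life cycle; the threshold values are
% purely illustrative, not the project's red-star point.
t = cycle.Timestamp;
X = [cycle.Speed, cycle.WheelChamberPressure, cycle.SteamFlow];

maxFlow        = 70;                          % illustrative flow limit
thresholdPoint = [6700, 2550, maxFlow];       % assumed operating threshold

mu    = mean(X, 1, 'omitnan');
sigma = std(X, 0, 1, 'omitnan');
Xn    = (X - mu) ./ sigma;                    % z-score so no variable dominates
Tn    = (thresholdPoint - mu) ./ sigma;

dist    = vecnorm(Xn - Tn, 2, 2);             % distance per sample to the threshold
logDist = log(dist);                          % the quantity plotted against time
```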
each graph shows a separate life cycle, and we can see that in general each one gradually tends toward zero over time, because remember, the x-axis essentially represents that operating threshold, which we are approaching. the final step required fitting a linear function to the distance data. in this process, we made use of a moving window, with the window size being a certain period of time which gets determined by the operator.
and as the window moves across the data, at each step a linear function is fitted to the data. you then get this collection of linear models. and the reason we use a moving window is to ensure that the model adjusts predictions according to the latest information. and you can see how the model keeps adjusting itself by looking at these decreasing gradients as we move through time. and this is exactly what we expected to see.
we then get an average of the models to get the general trend of the overall data. then finally we extrapolate that average model to the intercept, in other words to the operating threshold, which then tells us what the remaining useful life of the turbine is. at this stage, we are very happy because we now have a predictive model that we can verify.
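to make the mechanics concrete, here is a hedged matlab sketch of that moving-window fit and extrapolation, continuing from the t and logDist series in the previous sketch; the window length, step size, and threshold value are illustrative assumptions, not the operator's actual settings:

```matlab
% sketch of the remaining-useful-life estimate: fit a line inside a sliding
% window, average the fits, and extrapolate to the threshold.
% inputs: t (datetime vector for one life cycle) and logDist from the sketch above.
winDays = 30;                                  % operator-chosen window length, assumed
tDays   = days(t - t(1));                      % elapsed days within the cycle

coeffs = [];
for startDay = 0:7:(tDays(end) - winDays)      % step the window weekly
    inWin = tDays >= startDay & tDays < startDay + winDays;
    if nnz(inWin) > 10                         % skip windows with too little data
        coeffs(end+1,:) = polyfit(tDays(inWin), logDist(inWin), 1); %#ok<SAGROW>
    end
end

avgModel   = mean(coeffs, 1);                  % average slope and intercept
distThresh = 0;                                % log-distance treated as "too close"
tCross     = (distThresh - avgModel(2)) / avgModel(1);   % solve slope*t + intercept = distThresh
rulDays    = tCross - tDays(end);              % remaining useful life from "now"
fprintf('estimated remaining useful life: %.0f days\n', rulDays);
```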
so we verified the performance of the model by applying it to historic, unseen data. these red lines serve as markers of when the model predicted the next wash would need to be. so if we look at the first graph, we can see that the turbine was washed around august or september, but according to the model's prediction, the washing could have happened a few months later.
but from the gap in the data, which is about a month long, we know that the turbine was actually overhauled, which explains why it appears to have been washed too early. we also see that shortly after the overhaul, the efficiency of the turbine dropped quite rapidly, and in this instance peter and the team attributed it to poor steam quality at that time.
so really the results of the model are meant to serve as supporting material for the operators and the decision makers. and this is important because the intention of the predictive model was not for it to be used in isolation or independently of the operator, but to serve as an additional tool or assistive technology for defining a more holistic wash schedule.
for the second life cycle, a wash was performed around september, but the wheel chamber pressure was still quite low, as you can see it was still in the blue-green region, so we washed it too early. and we can see that the prediction had it at about a few months later. not only that, we noticed that shortly after the wash in september, which marks the beginning of the third life cycle, the wheel chamber pressure was already starting to rise again; we were already moving into that yellow region.
and this just highlighted that the quality of the wash was not so great. so from an operational point of view, this allows operators to not only predict the future but also have a retrospective view of why certain decisions were made and how they can be avoided in the future.
once the model is built, the next step was looking at how all of this technology could be made accessible to the relevant people within sasol and peter will give more information around that.
thanks boitumelo. so now we've got a model that has been built; it's in our hands, and from the sasol side of things we need to integrate it into our workflows and our operational environment. that had to factor in a few different things. one is how we are going to actually interface with this model and app, and then how it integrates into our operational databases and things like that, so the back-end side.
and then i want to just touch on some of my learnings there, because i think the audience might find some use in that information. on the right-hand side, you can see an example of what the gui interface looked like. on the left-hand side, we've got a summary section that covers the operation of each machine since its last wash. so you can see there are breaks in operation, but the machine was never washed in that particular period.
and yeah, this is essentially just a high-level look at each machine, and then you can deep-dive into a specific unit, in this case unit 7, and look at what that information looked like for the last wash, or the last number of washes, in the time frame that you've requested. so you can have that retrospective look to see: did we actually improve, are we moving further away from that high wheel chamber pressure, getting these nice dark blues and so on.
so that retrospective analysis is done visually, and just as a secondary point to the actual washing procedure itself. then, other interface considerations. the gui interface is essentially for people like planners and maintenance managers; they're not necessarily looking to dive into the details. they want the information off the bat, nice and quickly, in a summarized and concise form.
so here we can see the landing page of the app: you essentially select the historian data, you import it, you run the model, and then it provides you with a table. and here you can actually spot cases where these machines might need to come off at times very close to one another, and then you can shift their schedules around, maybe bring one forward a little bit, based on production availability and things like that.
so it allows you to plan that, and to have a visual or tabulated view of the data to make that decision. something to note as well on web apps is that they typically just queue the next action to be done. so if you click two buttons in quick succession, it will pull the data, but then in the next processing cycle it will actually run the second action, which may not be what you want once the data has come back.
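as one illustration of the fix peter suggests next, a modal progress dialog inside an app designer callback might look roughly like this; the callback, control, and helper-function names are hypothetical:

```matlab
% minimal pattern for a blocking "processing" dialog in an App Designer app.
% the callback, control, and helper-function names here are hypothetical.
function RunModelButtonPushed(app, event)
    dlg = uiprogressdlg(app.UIFigure, ...
        'Title', 'Running model', ...
        'Message', 'Fetching historian data and fitting models...', ...
        'Indeterminate', 'on');
    cleanup = onCleanup(@() close(dlg));        % dialog always closes, even on error
    results = runWashModel(app.SelectedTags);   % placeholder for the real work
    app.ResultsTable.Data = results;
end
```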
so in cases like this, it's good to use something like a processing dialogue to prevent the user from sending any more inputs to the app. then in terms of the deployment on the back end, we have quite an interesting situation where we have two different types of historians, essentially backups of one another. one is a honeywell historian and the other one is an osisoft pi historian, and both of those historian instances also have opc server deployments on top.
so we had some luxury in terms of how we could integrate this app into our operational facility. i started off using the opc toolbox, which gave me a graphical way of exploring the server and seeing how i was going to get the data, and then you can also create functions to go and pull data or utilize any of the opc functions. that just helps with creating code if you don't specifically know the architecture.
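for reference, a minimal opc toolbox sketch along those lines could look like this; the host, server id, and tag names are placeholders rather than sasol's actual configuration:

```matlab
% illustrative OPC Toolbox (classic DA) usage for browsing a server and
% reading a few tags. host name, server ID, and tag names are placeholders.
info = opcserverinfo('localhost');                       % list available OPC servers
da   = opcda('localhost', 'Honeywell.OPCServer.1');      % assumed server ID
connect(da);

grp  = addgroup(da, 'TurbineTags');
itms = additem(grp, {'Unit7.Speed', 'Unit7.WheelChamberPressure'});

snapshot = read(grp);                                    % current values for the group
disconnect(da);
```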
and then the other method is actually using the dlls or the apis natively within matlab, which you can do via NET.addAssembly or a loadlibrary call. this is actually a slightly better method because the apis carry a little bit less overhead, and they remove some of the inherent limitations that might exist on the opc deployments.
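a hedged sketch of that route is shown below; NET.addAssembly is the real matlab mechanism, but the assembly path, namespace, class, and method names are hypothetical stand-ins for the vendor sdk:

```matlab
% sketch of calling a vendor .NET API directly from MATLAB. NET.addAssembly is
% the real mechanism; the assembly path, namespace, class, and methods below
% are hypothetical stand-ins for the historian SDK.
NET.addAssembly('C:\Vendor\HistorianClient.dll');

server = VendorSdk.HistorianServer('historian-host');    % hypothetical class
server.Connect();

raw = server.FetchData('Unit7.Speed', ...
                       System.DateTime(2018,1,1), System.DateTime(2018,2,1));
values = double(raw);    % a System.Double[] converts straight to MATLAB doubles
```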
and then lastly, i was struggling to get the honeywell api natively into matlab, but i was able to pull it into python. and because of some other work that we're doing within python, we actually decided to just use python as the basis for that.
having made that decision, we then needed to ensure that we had the correct matlab wrappers to take whatever comes out of the python or .net scripts and make sure it's compatible with the data coming into matlab. and then the last step is, once you've got all that data, you process it, you figure out some of the downsides of the gui, and you optimize it to make sure it's quite user-friendly.
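a minimal sketch of such a wrapper, assuming a hypothetical python module and a list-of-floats return value, might look like this:

```matlab
% minimal wrapper pattern for the Python route: call a hypothetical module that
% talks to the Honeywell API, then convert the result to MATLAB types before
% the rest of the workflow touches it.
mod  = py.importlib.import_module('honeywell_client');    % hypothetical module
resp = mod.fetch_tag('Unit7.Speed', '2018-01-01', '2018-02-01');

% assuming resp is a Python list of floats, convert element-wise
values = cellfun(@double, cell(resp));
```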
and in that instance, we found that the information was taking quite a while to process. as boitumelo mentioned, there are multiple linear models being fitted because you've got that moving window, and essentially that takes quite a lot of computation. so what we are doing here is making sure that we can leverage the multiple cpus on the machine by using a parallel for-loop.
but that in itself mandated a little bit of a rewrite of the code, just in terms of how the function communicates with the for-loop, because you still need to keep track of which machine you're dealing with. that you can do quite easily; you can look in the matlab documentation, but you would essentially use a data queue to make sure that you can get the values back.
the last thing is that when you use something like a parallel for-loop, it's going to start a parallel pool with however many workers you've got set, and that takes some time. so these are not necessarily advised for situations where the overhead is more than the cost of the processing.
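a small sketch of that parallelization pattern is shown below; the worker count and the fitting function are placeholders:

```matlab
% sketch of parallelizing the per-turbine fits with parfor. the worker count
% and fitWashModel are placeholders; the pool start-up cost only pays off when
% each iteration does substantial work.
turbineIDs = 1:7;
results    = cell(1, numel(turbineIDs));

if isempty(gcp('nocreate'))
    parpool('local', 4);          % assumed worker count
end

parfor k = 1:numel(turbineIDs)
    % iterations are independent, so parfor can slice 'results' safely
    results{k} = fitWashModel(turbineIDs(k));
end
```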
then some interesting points on the historians that i'd like to mention: they look very similar in how you would call and get data from them. you've got these tag constructs and you've got functions that are built onto the tag, but what's interesting is that i've got a tag construct and i then have a server construct, and the fetch-data request is on the server construct on the honeywell side of things.
whereas if you look at the pi side, you've got the same tag construct, but the tag construct has something called a summary, which is a way to fetch data based on some calculation. you don't have to put in a calculation; you could just ask for the raw data. and then here's an example of, say, the current value.
so the important thing here is that each point actually sends a request, and not necessarily each server, and that is just the difference in how those connections are handled. on the honeywell side of things, the connection is handled per server request, and the timestamps and the requested data are aggregated in that server instance, so you don't need to make sure they're there in each call.
whereas on the pi side it's actually handled as part of these functions, so it's handled internally within that summary function or method, for example. something to note is that, because of the way this occurs, when you want to run through multiple tags serially, it's actually slightly faster to use what osisoft built into the api, and that's a point list. that's essentially a similar way of doing it to the server approach, in that you've got a list of points for which you then request the data.
so that makes things a lot faster, because you can parallelize the call on the server side and you don't have the multiple waits that you get when you serially request the data. so please feel free to visit the support documentation; i only have the osisoft support website documented here. if you have a honeywell server, you should be able to get hold of the documentation in pdf form.
moving on: so now we've got this functionality in matlab and in python, but we still have an issue in that these apis are actually .net apis, so they're using system-type variables. firstly, because you're using python for this, you need to get them into python data types, but preferably you choose the types that are analogous to the matlab data types. so in this case things like doubles, integers, and singles are very easy to move between python and matlab, so that is fairly trivial.
but moving them from a system array to a python data type might not be. there are various ways to skin this proverbial cat. one of them is to use for-loops, looping through the array constructs and converting each individual data point; that is typically quite time consuming, especially when you compare it to the next option, which came from a github post. i've put it in there for interest and you're welcome to go and look at it.
this is essentially a memory move of certain data types that are identical to python data types, and that makes it very quick to convert them. in our case, all of the data was actually doubles, and most process historians store values as either doubles or integers, or maybe booleans.
i've come across very few that actually use string data types, so that approach works for most or all of the data i was dealing with. the last thing that's a bit tricky is the date-time. that was coming through as a system datetime, which you could store as a string value, but then parsing the string and making sure that your date is the same on either end is quite time consuming. passing strings is obviously not just a single action per data point; you've got however many characters you have in there.
so obviously passing a character string is quite time consuming, because you have multiple characters that you need to pass. the other option, though, is that when we make this call to the database, we can request the date-time as a utc date number, which means it's very quick to change that over to a datetime in matlab; it's essentially just an addition to a zero date number. and that is just float64 or double data points, so it makes that conversion quite easy.
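for illustration, converting numeric timestamps in bulk in matlab is straightforward; which epoch applies depends on the api, so both common conversions are sketched here with made-up sample values:

```matlab
% converting numeric timestamps in bulk instead of passing strings. the sample
% values are made up; which conversion applies depends on the API's epoch.
utcSeconds = [1514764800; 1514766000];       % seconds since the Unix epoch (UTC)
t1 = datetime(utcSeconds, 'ConvertFrom', 'posixtime', 'TimeZone', 'UTC');

dateNumbers = [737061.0; 737061.5];          % MATLAB-style serial date numbers
t2 = datetime(dateNumbers, 'ConvertFrom', 'datenum');
```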
and i would advise, if you're going to do something like this, to use that method: convert into a known date number and then use that formula going forward. and then lastly, now that we've got this model, we've actually been able to deploy it and now we can use it. so essentially we managed to create a model within the 100 hours of available time. it provides us with retrospective analysis, it's repeatable and comparable, and it does exactly what we needed it to do.
in the case of efficiencies, we weren't specifically requiring that we look at the efficiency of the turbine as such; we were looking more for the point of operation where we start to bottleneck the facility, because that is a much bigger loss for us and that was our main goal. and so we've now got this model, we've integrated it, and we've used it.
so lastly, in terms of a workflow like this, here are some details that are handy to remember. if you can understand your problem from first principles, or physics, or some design basis, then i would urge you to do that, because it might highlight points where you can take a physical and analytical approach to either reduce the dimensionality of the data or understand relationships between certain data points, so as not to duplicate a relationship.
and then also understand your deployment requirements. in our case, we wanted it to be a gui-type deployment so that we could have the retrospective analysis. if you have a specific goal, you could also write this into a script that runs in the background and just delivers, say, a report or a single value or whatever the case might be. but like i mentioned, we wanted the retrospective side of things.
and then also understand and leverage the expertise that's available to you. in our case, i'm not a data scientist by trade, so the data science portion of it would have taken me much longer to do and implement. my expertise lies in the machines and how i can bring some knowledge into the model based on that. so it's important to collaborate in things like this.
this whole process, as we've detailed, was a massive collaboration between the two parties: making sure that my goals as a technical person were achieved, bringing that technical knowledge into the data science, and having opti-num, or boitumelo, provide visual evaluations and checks to see whether the model is doing the right thing. that was a really good way to work, leveraging those two things.
so i would advise that as well to whoever wants to approach a problem like this. and then, for plant personnel who are working and breathing in these operational environments: establish whether you have any of these problems, and whether you can find a better or more efficient way of dealing with information that you handle on a repeated and frequent basis. question that, and see whether you can improve and expand on what you're doing by moving away from manual actions towards something a little more automated, which gives you more time to do these interesting examples and workflows and to optimize your time a bit better.
but in order to do that, you need to get to know your systems, understand what the limitations are in terms of data flow, what data is available, and what you could actually achieve. and in doing that you might also identify some future hurdles and projects that need to be undertaken in order to make this a reality.
so yes. thank you for your time. i hope you guys enjoyed it and i hope you found value in it. we'll now take some time to answer some of your questions. so feel free to contact us if you have any specific questions around the content.
thank you everybody.
featured product
statistics and machine learning toolbox