Reinforcement Learning for Developing Field-Oriented Control
Use reinforcement learning and the DDPG algorithm for field-oriented control of a permanent magnet synchronous motor. This demonstration replaces two PI controllers with a reinforcement learning agent in the inner loop of the standard field-oriented control architecture and shows how to set up and train an agent using the reinforcement learning workflow.
In this video, we show how to use reinforcement learning for field-oriented control of a permanent magnet synchronous motor.
To showcase this, we start with an example that uses the typical field-oriented control architecture, where the outer-loop controller is responsible for speed control and the inner-loop PI controllers regulate the d-axis and q-axis currents.
We then create and validate a reinforcement learning agent that replaces the inner-loop controllers of this architecture.
Using an RL agent is especially beneficial when the system is nonlinear; in that case we can train a single RL agent instead of tuning PI controllers at multiple operating conditions.
In this example, we use a linear motor model to showcase the workflow of field-oriented control using reinforcement learning; the same workflow applies to a complex nonlinear motor as well.
Let's look at the Simulink model that implements the field-oriented control architecture.
This model contains two control loops: an outer speed loop and an inner current loop.
The outer loop is implemented in the 'Speed Control' subsystem and contains a PI controller that generates the reference currents for the inner loop.
The inner loop is implemented in the 'Current Control' subsystem and contains two PI controllers that determine the reference voltages in the dq frame.
The reference voltages are then used to generate the PWM signals that control the semiconductor switches of the inverter, which in turn drives the permanent magnet synchronous motor to achieve the desired torque and flux.
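As a rough illustration of what one of these inner-loop PI controllers computes, here is a simplified sketch in MATLAB. It is not the implementation inside the 'Current Control' subsystem; the gains, sample time, voltage limit, and signal values are placeholders.

```matlab
% Simplified sketch of one inner-loop PI current controller (d-axis shown).
% Gains, sample time, limits, and sample signals are placeholders.
Kp = 1.0;  Ki = 200;  Ts = 2e-4;            % proportional gain, integral gain, sample time
vLimit = 24;                                % inverter voltage limit (placeholder)

idRef  = 0;                                 % d-axis reference current from the outer loop
idMeas = 0.1;                               % measured d-axis current (placeholder)
integrator = 0;                             % integrator state

% One controller update: compute the d-axis reference voltage.
idError    = idRef - idMeas;                    % current tracking error
integrator = integrator + Ki*Ts*idError;        % discrete integral action
vdRef      = Kp*idError + integrator;           % PI control law
vdRef      = max(min(vdRef, vLimit), -vLimit);  % saturate to inverter limits
```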
Let's go ahead and run the Simulink model.
We can see that the tracking performance of the controllers is good and the desired speed is tracked closely.
Let's save this result for later comparison with the reinforcement learning controller.
Now we update the existing model by replacing the two PI controllers in the current loop with an RL Agent block.
In this example we use DDPG as the reinforcement learning algorithm, which trains an actor and a critic simultaneously to learn an optimal policy that maximizes the long-term reward.
Once the Simulink model is updated with the RL Agent block, we follow the reinforcement learning workflow to set up, train, and simulate the controller.
The reinforcement learning workflow is as follows:
The first step is to create an environment. In this example, we already have a Simulink model that contains the permanent magnet synchronous motor, modeled using Motor Control Blockset and Simscape Electrical, within the 'Plant and Inverter' subsystem.
We then use this Simulink model to create a reinforcement learning environment interface with appropriate observations and actions.
Here the observations to the RL Agent block are the stator current errors 'id error' and 'iq error' and the stator currents 'id' and 'iq'.
The actions are the stator voltages 'vd' and 'vq'.
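A minimal sketch of this step is shown below, assuming hypothetical model and block names; the observation and action bounds are placeholders as well.

```matlab
% Sketch of the environment interface; 'pmsm_foc_rl' and the block path are
% hypothetical names, and the action bounds are placeholders.
mdl      = 'pmsm_foc_rl';
agentBlk = [mdl '/Current Control/RL Agent'];

% Observations: id error, iq error, id, iq (four signals).
obsInfo = rlNumericSpec([4 1]);
obsInfo.Name = 'observations';

% Actions: stator voltages vd and vq, normalized to [-1, 1] here.
actInfo = rlNumericSpec([2 1], 'LowerLimit', -1, 'UpperLimit', 1);
actInfo.Name = 'voltages';

% Create the Simulink environment interface.
env = rlSimulinkEnv(mdl, agentBlk, obsInfo, actInfo);
```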
Next we create the reward signal, which lets the reinforcement learning agent know how good or bad the actions it selects during training are, based on its interaction with the environment.
Here we shape a quadratic reward that penalizes both the distance from the goal and the control effort.
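As an illustration, a quadratic reward of this form could be computed in a MATLAB Function block as sketched below; the weights are placeholders, not the values used in the example.

```matlab
% Quadratic reward sketch: penalize current tracking error and control effort.
% The weights q1, q2, and r are placeholders.
function reward = computeReward(idError, iqError, vd, vq)
    q1 = 1;  q2 = 1;  r = 0.1;                      % placeholder weights
    trackingPenalty = q1*idError^2 + q2*iqError^2;  % distance from the goal
    effortPenalty   = r*(vd^2 + vq^2);              % control effort
    reward = -(trackingPenalty + effortPenalty);    % larger (less negative) is better
end
```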
Then we move on to creating the network architecture.
Here we construct the actor and critic networks required by the DDPG algorithm programmatically, using MATLAB functions for layers and representations.
The neural networks can also be constructed using the Deep Network Designer app and then imported into MATLAB.
The critic network in this example takes observations and actions as inputs and outputs the estimated Q-value.
The actor network, on the other hand, takes observations as inputs and outputs actions.
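A condensed sketch of this step is given below, reusing the observation and action specifications defined earlier. The layer sizes are placeholders, and the exact representation functions depend on the Reinforcement Learning Toolbox release.

```matlab
% Critic: takes observations and actions, outputs an estimated Q-value.
obsPath = [featureInputLayer(4, 'Name', 'obs')
           fullyConnectedLayer(64, 'Name', 'obsFC')];
actPath = [featureInputLayer(2, 'Name', 'act')
           fullyConnectedLayer(64, 'Name', 'actFC')];
comPath = [additionLayer(2, 'Name', 'add')
           reluLayer('Name', 'relu')
           fullyConnectedLayer(1, 'Name', 'qValue')];

criticNet = layerGraph(obsPath);
criticNet = addLayers(criticNet, actPath);
criticNet = addLayers(criticNet, comPath);
criticNet = connectLayers(criticNet, 'obsFC', 'add/in1');
criticNet = connectLayers(criticNet, 'actFC', 'add/in2');

critic = rlQValueRepresentation(criticNet, obsInfo, actInfo, ...
    'Observation', {'obs'}, 'Action', {'act'});

% Actor: takes observations, outputs the two voltage actions.
actorNet = [featureInputLayer(4, 'Name', 'obs')
            fullyConnectedLayer(64, 'Name', 'fc1')
            reluLayer('Name', 'relu1')
            fullyConnectedLayer(2, 'Name', 'fc2')
            tanhLayer('Name', 'actions')];          % bound actions to [-1, 1]

actor = rlDeterministicActorRepresentation(actorNet, obsInfo, actInfo, ...
    'Observation', {'obs'}, 'Action', {'actions'});
```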
With the actor and critic representations created, we can create a DDPG agent.
The sample time of the DDPG agent is configured depending on the execution requirements of the control loop.
In general, agents with a smaller sample time take longer to train, as each episode involves a greater number of simulation steps.
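A sketch of the agent creation step is shown below, assuming the actor and critic created above; the sample time and option values are placeholders.

```matlab
% Create the DDPG agent; Ts is a placeholder sample time for the current loop.
Ts = 2e-4;
agentOpts = rlDDPGAgentOptions( ...
    'SampleTime', Ts, ...
    'DiscountFactor', 0.99, ...
    'MiniBatchSize', 128, ...
    'ExperienceBufferLength', 1e6);

agent = rlDDPGAgent(actor, critic, agentOpts);
```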
We are now ready to train the agent.
First, we specify the training options.
Here we specify that we want to run the training for at most 2000 episodes and stop training early if the average reward exceeds the provided value.
We then use the 'train' command to start the training process.
In general, it is best practice to randomize the reference signals to the controller during training to obtain a more robust policy. This can be done by writing a local reset function for the environment, as sketched below.
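The following sketch shows the training step under the same assumptions; the episode length, stop threshold, and reset logic are placeholders.

```matlab
% Training options: at most 2000 episodes, stop early on a high average reward.
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes', 2000, ...
    'MaxStepsPerEpisode', 500, ...              % placeholder episode length
    'ScoreAveragingWindowLength', 100, ...
    'StopTrainingCriteria', 'AverageReward', ...
    'StopTrainingValue', -200);                 % placeholder stop threshold

% Randomize the speed reference at the start of each episode for robustness;
% 'speedRef' is a placeholder workspace variable name.
env.ResetFcn = @(in) setVariable(in, 'speedRef', 0.5 + 0.5*rand);

% Start the training; progress appears in the Episode Manager.
trainingStats = train(agent, env, trainOpts);
```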
During the training process, progress can be monitored in the Episode Manager.
Once the training is complete, we can simulate and verify the control policy from the trained agent.
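A minimal sketch of this verification step is shown below; the number of simulation steps is a placeholder.

```matlab
% Simulate the trained agent against the environment and log the results.
simOpts = rlSimulationOptions('MaxSteps', 2000);   % placeholder step count
experience = sim(env, agent, simOpts);

% The logged experience (observations, actions, rewards) can then be compared
% against the previously saved PI controller results.
```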
By simulating the model with the trained agent, we see that the speed-tracking performance of field-oriented control is good, with the reinforcement learning agent controlling the stator currents.
Comparing this result with the previously saved output, we see that field-oriented control with the reinforcement learning agent performs comparably to its PI controller counterpart.
This concludes the video.