Creating and Training Reinforcement Learning Agents Interactively
Design, train, and simulate reinforcement learning agents using a visual interactive workflow in the Reinforcement Learning Designer app. Use the app to set up a reinforcement learning problem in Reinforcement Learning Toolbox™ without writing MATLAB® code. Work through the entire reinforcement learning workflow to:
- Import an existing environment into the app (see the command-line sketch after this list)
- Import an agent or create a new agent for your environment and select appropriate hyperparameters for the agent
- Use the default neural network architectures created by Reinforcement Learning Toolbox or import custom architectures
- Train the agent on single or multiple workers and simulate the trained agent against the environment
- Analyze simulation results and refine agent parameters
- Export the final agent to the MATLAB workspace for further use and deployment
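Before walking through the app, it helps to see the command-line starting point it builds on. Below is a minimal sketch (the variable names are our own, not from the video) that creates an environment object in the workspace and opens the app so the object can be imported:

```matlab
% Create a predefined cart-pole environment with a discrete action space.
env = rlPredefinedEnv("CartPole-Discrete");

% The observation and action specifications appear in the app's Preview pane.
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Open Reinforcement Learning Designer; the env object can then be imported
% from the MATLAB workspace inside the app.
reinforcementLearningDesigner
```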
As of the R2021a release of MATLAB, Reinforcement Learning Toolbox lets you interactively design, train, and simulate RL agents with the new Reinforcement Learning Designer app. Open the app from the command line or from the MATLAB toolstrip.

First, you need to create the environment object that your agent will train against. Reinforcement Learning Designer lets you import environment objects from the MATLAB workspace, select from several predefined environments, or create your own custom environment. For this example, let's create a predefined cart-pole MATLAB environment with a discrete action space, and also import a custom Simulink environment of a four-legged robot with a continuous action space from the MATLAB workspace. You can delete or rename environment objects from the Environments pane as needed, and you can view the dimensions of the observation and action spaces in the Preview pane.

To create an agent, click New in the Agent section on the Reinforcement Learning tab. Depending on the selected environment and the nature of the observation and action spaces, the app shows a list of compatible built-in training algorithms. For this demo, we will pick the DQN algorithm. The app generates a DQN agent with a default critic architecture. You can adjust some of the default values for the critic as needed before creating the agent. The new agent appears in the Agents pane, and the agent editor shows a summary view of the agent and the hyperparameters that can be tuned. For example, let's change the agent's sample time and the critic's learn rate. Here, we can also adjust the exploration strategy of the agent and see how exploration will progress with respect to the number of training steps.

To view the default critic network, click View Critic Model on the DQN Agent tab. The Deep Learning Network Analyzer opens and displays the critic structure. You can change the critic neural network by importing a different critic network from the workspace. You can also import a different set of agent options or a different critic representation object altogether.

Click Train to specify training options such as stopping criteria for the agent. Here, let's set the maximum number of episodes to 1000 and leave the rest at their default values. To parallelize training, click the Use Parallel button. Parallelization options include additional settings, such as the type of data workers will send back and whether data will be sent synchronously or asynchronously. After setting the training options, you can generate a MATLAB script with the specified settings to use outside the app if needed.

To start training, click Train. During the training process, the app opens the Training Session tab and displays the training progress. If a visualization of the environment is available, you can also view how the environment responds during training. You can stop training at any time and choose to accept or discard the training results. Accepted results appear under the Results pane, and a new trained agent also appears under Agents.

To simulate an agent, go to the Simulate tab and select the appropriate agent and environment object from the drop-down list. For this task, let's import a pretrained agent for the four-legged robot environment we imported at the beginning. Double-click the agent object to open the agent editor. You can see that this is a DDPG agent that takes in 44 continuous observations and outputs 8 continuous torques.
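The generated script captures the agent and training settings chosen in the app. As a rough sketch of what such a script might look like for the cart-pole environment created earlier and a default DQN agent (exact property values are illustrative, and option names can vary slightly by release):

```matlab
% Assumes env, obsInfo, and actInfo from the earlier sketch are in the workspace.

% Agent hyperparameters: sample time and epsilon-greedy exploration schedule.
agentOpts = rlDQNAgentOptions('SampleTime',1);
agentOpts.EpsilonGreedyExploration.EpsilonDecay = 1e-3;

% Create a DQN agent with the default critic architecture for this environment.
agent = rlDQNAgent(obsInfo, actInfo, agentOpts);

% Training options: stop after at most 1000 episodes; set UseParallel to true
% to distribute episodes across parallel workers.
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',1000, ...
    'MaxStepsPerEpisode',500, ...
    'UseParallel',false);

% Train the agent and collect training statistics.
trainingStats = train(agent, env, trainOpts);
```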
In the Simulate tab, select the desired number of simulations and the simulation length. If you need to run a large number of simulations, you can run them in parallel. After clicking Simulate, the app opens the Simulation Session tab. If available, you can view the visualization of the environment at this stage as well. When the simulations are completed, you can see the reward for each simulation as well as the reward mean and standard deviation. Remember that the reward signal is provided as part of the environment.

To analyze the simulation results, click Inspect Simulation Data. In the Simulation Data Inspector, you can view the saved signals for each simulation episode. If you want to keep the simulation results, click Accept.

When you finish your work, you can choose to export any of the agents shown under the Agents pane. For convenience, you can also directly export the underlying actor or critic representations, actor or critic neural networks, and agent options. To save the app session for future use, click Save Session on the Reinforcement Learning tab. For more information, please refer to the Reinforcement Learning Toolbox documentation.
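The same simulation and export steps can also be scripted outside the app. A minimal sketch, assuming a trained agent variable named agent and the environment env from the earlier sketches (for MATLAB environments, the logged reward is a timeseries):

```matlab
% Run several simulations of fixed length; set UseParallel to true to run them on workers.
simOpts = rlSimulationOptions('NumSimulations',10, 'MaxSteps',500, 'UseParallel',false);
experiences = sim(env, agent, simOpts);

% Summarize the total reward obtained in each simulation episode.
totalReward = arrayfun(@(e) sum(e.Reward.Data), experiences);
fprintf("Mean reward: %.2f (std %.2f)\n", mean(totalReward), std(totalReward));

% Export the agent for later use or deployment by saving it to a MAT-file.
save("trainedAgent.mat", "agent");
```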
Featured Product
Reinforcement Learning Toolbox