Run Tests in Multiple Releases of MATLAB
If you have more than one release of MATLAB® installed, you can run tests in multiple releases. Starting with R2011b, you can also run tests in releases that do not have Simulink® Test™. Running tests in multiple releases enables you to use test functionality from later releases while running the tests in your preferred release of Simulink. You can also compare test results across multiple releases to better understand Simulink changes before upgrading to a new version of MATLAB and Simulink.
Although you can run test cases on models in previous releases, the release you run the test in must support the features of the test. For example, if your test involves test harnesses or test sequences, the release must support those features for the test to run.
Before you can create tests that use additional releases, add the releases to your list of available releases using Test Manager preferences. See Add Releases Using Test Manager Preferences.
Considerations for Testing in Multiple Releases
Testing Models in Previous or Later Releases
Your model or test harness must be compatible with the MATLAB version running your test.
To test a model created in a newer version of MATLAB in a previous version, export the model to the previous version and simulate the exported model with that MATLAB version. For more information, see the Simulink documentation on exporting a model to a previous version.
To test a model in a more recent version of MATLAB, consider using the Upgrade Advisor to upgrade your model for the more recent release. For more information, see the Upgrade Advisor documentation.
Test Case Compatibility with Previous Releases
When collecting coverage in multiple-release tests, you can run test cases in releases up to three years (six releases) prior to the current release. Tests that contain logical or temporal assessments are supported in R2016b and later releases.
Test Case Limitations with Multiple-Release Testing
Certain features are not supported for multiple-release testing:
- Parallel test execution
- Running test cases with the MATLAB Unit Test framework
- Real-time tests
- Models with Observers
- Input data defined in an external Excel® document
- Including custom figures from test case callbacks
Add Releases Using Test Manager Preferences
Before you can create tests for multiple releases, use Test Manager preferences to include the MATLAB release you want to test in. You can also delete a release that you added to the available releases list. However, you cannot delete the release from which you are running the Test Manager.
1. In the Test Manager, click Preferences.
2. In the Preferences dialog box, click Release. The Release pane lists the release you are running the Test Manager from.
3. In the Release pane, click Add/Remove Releases to open the release manager.
4. In the release manager, click Add.
5. Browse to the location of the MATLAB release you want to add and click OK.
6. To change the release name that appears in the Test Manager, edit the Name field.
7. Close the release manager. The Preferences dialog box shows the selected releases. Deselect releases you do not want to make available for running tests.
Run Baseline Tests in Multiple Releases
When you run a baseline test with the Test Manager set up for multiple releases, you can:
- Create the baseline in the release you want to see the results in, for example, to try different parameters and apply tolerances.
- Create the baseline in one release and run it in another release. With this approach you can, for example, determine whether a newer release produces the same simulation outputs as an earlier release.
Create the baseline:
1. Make sure that the release has been added to your Test Manager preferences.
2. Create a test file, if necessary, and add a baseline test case to it.
3. Select the test case.
4. Under System Under Test, enter the name of the model you want to test.
5. Set up the rest of the test.
6. Capture the baseline. Under Baseline Criteria, click Capture. Specify the format and file in which to save the baseline, and select the release in which to capture the baseline. Then click Capture to simulate the model.
For more information about capturing baselines, see the Simulink Test documentation on baseline criteria.
After you create the baseline, run the test in the selected releases. Each release you select generates a set of results:
1. In the test case, expand Simulation Setting and Release Overrides and, in the Select releases for simulation drop-down menu, select the releases you want to use to compare against your baseline.
2. Specify the test options.
3. From the toolstrip, click Run.
For each release that you select when you run the test case, the pass-fail results appear in the Results and Artifacts pane. For results from a release other than the one you are running the Test Manager from, the release number appears in the name.
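You can also script this workflow with the sltest.testmanager programmatic interface. The following is a minimal sketch, assuming a model named myModel is on the path; the commented-out 'Release' property for the release override is an assumption, so confirm the property name in the documentation for your release.

% Minimal sketch: create and run a baseline test case from the command line.
% 'myModel' and the file names are placeholders.
tf = sltest.testmanager.TestFile('myBaselineTests');
ts = getTestSuites(tf);                            % default test suite
tc = createTestCase(ts,'baseline','myBaselineCase');

setProperty(tc,'Model','myModel');                 % system under test

% Capture baseline criteria to a MAT-file and attach them to the test case.
captureBaselineCriteria(tc,'myBaseline.mat',true);

% ASSUMPTION: property name for the release override; verify it in the
% sltest.testmanager documentation for your release.
% setProperty(tc,'Release','R2020b');

results = run(tc);                                 % run the test case
sltest.testmanager.view                            % inspect results in the Test Manager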
Run Equivalence Tests in Multiple Releases
When you run an equivalence test, you compare two simulations. Each simulation runs in a single release; the two releases can be the same or different. Examples of equivalence tests include comparing models run in different simulation modes, such as normal and software-in-the-loop (SIL), or comparing different tolerance settings.
1. Make sure that the releases have been added to your Test Manager preferences.
2. Create a test file, if necessary, and add an equivalence test case to it.
3. Select the test case.
4. Under Simulation 1, System Under Test, enter the name of the model you want to test.
5. Expand Simulation Setting and Release Overrides and, in the Select releases for simulation drop-down menu, select the release for Simulation 1 of the equivalence test. For an equivalence test, only one release can be selected for each simulation.
6. Set up the rest of the test.
7. Repeat steps 4 through 6 for Simulation 2.
8. In the toolstrip, click Run.
The test runs each simulation in the release you selected and compares the results for equivalence. For each release that you selected when you ran the test case, the pass-fail results appear in the Results and Artifacts pane. For results from a release other than the one you are running the Test Manager from, the release number appears in the name.
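A comparable command-line sketch for an equivalence test case follows; each simulation is addressed through the SimulationIndex argument. Again, myModel is a placeholder, and the commented-out release overrides are assumptions to verify against your release's documentation.

% Minimal sketch: an equivalence test case comparing two simulations.
tf = sltest.testmanager.TestFile('myEquivalenceTests');
ts = getTestSuites(tf);
tc = createTestCase(ts,'equivalence','myEquivalenceCase');

% Configure each simulation of the equivalence test by its index.
setProperty(tc,'Model','myModel','SimulationIndex',1);
setProperty(tc,'Model','myModel','SimulationIndex',2);

% ASSUMPTION: per-simulation release overrides; verify the property name.
% setProperty(tc,'Release','R2019b','SimulationIndex',1);
% setProperty(tc,'Release','R2020b','SimulationIndex',2);

results = run(tc);   % runs both simulations and compares them for equivalence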
Run Simulation Tests in Multiple Releases
Running a simulation test simulates the model in each release you select, using the criteria you specify in the test case.
1. Make sure that the releases have been added to your Test Manager preferences.
2. Create a test file, if necessary, and add a simulation test case template to it.
3. Select the test case.
4. Under System Under Test, enter the model you want to test.
5. Expand Simulation Setting and Release Overrides and, in the Select releases for simulation drop-down menu, select the release options for the simulation.
6. Under Simulation Outputs, select the signals to log.
7. In the toolstrip, click Run.
The test runs, simulating the model in each release you selected. For each release, the pass-fail results appear in the Results and Artifacts pane. For results from a release other than the one you are running the Test Manager from, the release number appears in the name.
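For completeness, a simulation test case can be scripted the same way. This minimal sketch assumes a placeholder model myModel and leaves release selection to the Test Manager UI described above.

% Minimal sketch: create and run a simulation test case.
tf = sltest.testmanager.TestFile('mySimulationTests');
ts = getTestSuites(tf);
tc = createTestCase(ts,'simulation','mySimulationCase');
setProperty(tc,'Model','myModel');   % placeholder system under test

results = run(tc);                   % simulate and evaluate pass-fail criteria
sltest.testmanager.view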
Assess Temporal Logic in Multiple Releases
You can run tests that contain logical and temporal assessments in multiple releases to test signal logic for models created in an earlier release. You can also compare assessment results across releases when you run the tests in multiple releases. For more information, see Assess Temporal Logic by Using Temporal Assessments.
You can run these test case types with logical and temporal assessments:
- Baseline tests
- Equivalence tests
- Simulation tests
Run Tests with Logical and Temporal Assessments
To run tests with logical and temporal assessments in multiple releases:
1. Start MATLAB R2021b or later.
2. Open the Test Manager.
3. In the Test Manager, add the releases to your Test Manager preferences. For more information, see Add Releases Using Test Manager Preferences.
4. Create a new test file with a baseline, equivalence, or simulation test case, or open an existing one.
5. In the Test Manager, specify your test case properties, including the system under test and any other properties that you want to apply.
6. Add a logical or temporal assessment to your test case. For more information, see Assess Temporal Logic by Using Temporal Assessments.
7. Select the releases to run the test in. In the Test Manager, select your test case. In System Under Test, under Simulation Settings and Release Overrides, next to Select releases for simulation, select the releases to run the test case in from the list.
If you are using a baseline or simulation test case, you can run the test in multiple releases in a single run by selecting multiple releases from the list. If you are using an equivalence test case, you can select one release under Simulation 1 and another release under Simulation 2.
8. Run the test. In the Test Manager, click Run, or run from the command line as in the sketch after this list.
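If the test file already exists, a short script can load and run it; the file name below is a placeholder.

% Minimal sketch: load an existing test file that contains logical or
% temporal assessments, run it, and open the Test Manager to review results.
tf = sltest.testmanager.load('myAssessmentTests.mldatx');   % placeholder file
resultSet = sltest.testmanager.run;    % run the enabled tests in the Test Manager
sltest.testmanager.view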
Evaluate Assessment Results
The Results and Artifacts pane displays the test results for each release you selected. The test release appears in the name of each test result from a release other than the one you ran the Test Manager from.
You can evaluate the assessment results independently from other pass-fail criteria. For example, while a baseline test case might fail due to failing baseline criteria, a logical or temporal assessment in the test case might pass.
You can also examine detailed assessment signal behavior. For more information, see View Assessment Results.
Collect Coverage in Multiple-Release Tests
To add coverage collection for multiple releases, you must have a Simulink Coverage™ license. Set up your test as described in Run Baseline Tests in Multiple Releases, Run Equivalence Tests in Multiple Releases, or Run Simulation Tests in Multiple Releases. You can use external test harnesses to increase coverage for multiple-release tests. Before you capture the baseline or run the equivalence or simulation test, enable coverage collection:
1. Click the test file that contains your test case. To collect coverage for test suites or test cases, you must enable coverage at the test file level.
2. In the Coverage Settings section, select Record coverage for system under test, Record coverage for referenced models, or both.
3. Under Coverage Metrics to Collect, select the types of coverage to collect. For a command-line alternative, see the sketch after these steps.
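Coverage collection can also be enabled programmatically through the test file's coverage settings object. A minimal sketch, assuming an existing test file named myBaselineTests.mldatx:

% Minimal sketch: enable coverage collection at the test file level.
tf = sltest.testmanager.load('myBaselineTests.mldatx');   % placeholder file

cov = getCoverageSettings(tf);
cov.RecordCoverage = true;     % record coverage for the system under test
cov.MdlRefCoverage = true;     % record coverage for referenced models
cov.MetricSettings = 'dcm';    % decision, condition, and MCDC metrics

resultSet = sltest.testmanager.run;   % run the tests with coverage enabled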
After you run the test, the Results and Artifacts pane shows the pass-fail results for each release in the test suite.
To view the coverage results for a release, select its test case and expand the Coverage Results section. The table lists the model, release, and the coverage percentages for the metrics you selected.
To view aggregated coverage results for the releases in your test, select the test suite that contains the releases and expand the Aggregated Coverage Results section.
To use the current release to add tests for missing coverage to an older release, click the row and click Add Tests for Missing Coverage. You can also use coverage filters, generate reports, merge results, import and export results, and scope coverage to linked requirements. For more information, see the Simulink Coverage documentation.