
Fix Requirements-Based Testing Issues

This example shows how to address common traceability issues in model requirements and tests by using the Model Testing Dashboard. The dashboard analyzes the testing artifacts in a project and reports metric data on quality and completeness measurements, such as traceability and coverage, which reflect guidelines in industry-recognized software development standards such as ISO 26262 and DO-178C. The dashboard widgets summarize the data so that you can track your requirements-based testing progress and fix the gaps that the dashboard highlights. You can click the widgets to open tables with detailed information, where you can find and fix the testing artifacts that do not meet the corresponding standards.

Collect Metrics for the Testing Artifacts in a Project

The dashboard displays testing data for a model and the artifacts that the unit traces to within a project. For this example, open the project and collect metric data for the artifacts.

  1. Open the project that contains the models and testing artifacts. For this example, in the MATLAB® Command Window, enter dashboardCCProjectStart("incomplete").

  2. Open the dashboard window. To open the Model Testing Dashboard, on the Project tab, click Model Testing Dashboard, or enter modelTestingDashboard at the command line. For a command-line version of steps 1 and 2, see the sketch after this list.

  3. In the Project panel, the dashboard organizes unit models under the component models that contain them in the model hierarchy. View the metric results for the unit cc_DriverSwRequest. In the Project panel, click the name of the unit, cc_DriverSwRequest. When you initially select cc_DriverSwRequest, the dashboard collects the metric results for the uncollected metrics and populates the widgets with the data for the unit.
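
If you prefer to run these first steps from the MATLAB command line, here is a minimal sketch using the two commands named above (the project setup function shipped with this example and the dashboard command):

    % Open the example project that contains the models and testing artifacts.
    % The "incomplete" option opens the version with unresolved testing gaps.
    dashboardCCProjectStart("incomplete")

    % Open the Model Testing Dashboard. Metric results for a unit are
    % collected when you first select that unit in the Project panel.
    modelTestingDashboard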

Link a Requirement to Its Implementation in a Model

The Artifacts panel shows artifacts such as requirements, tests, and test results that trace to the unit selected in the Project panel.

In the Artifacts panel, the Trace Issues folder shows artifacts that do not trace to unit models in the project. The Trace Issues folder contains subfolders for:

  • Unexpected implementation links — Requirement links of type Implements for a requirement of type Container or type Informational. The dashboard does not expect these links to be of type Implements because container requirements and informational requirements do not contribute to the implementation and verification status of the requirement set that they are in. If a requirement is not meant to be implemented, you can change the link type, either in the Requirements Editor or programmatically, as shown in the sketch after this list. For example, you can change the links to a requirement of type Informational to the link type Related to.

  • Unresolved and unsupported links — Requirement links that are broken or that the dashboard does not support. For example, if a model block implements a requirement, but you delete the model block, the requirement link becomes unresolved. The Model Testing Dashboard does not support traceability analysis for some artifacts and links. If you expect a link to trace to a unit and it does not, see the troubleshooting guidance for the dashboard.

  • Untraced tests — Tests that execute on models or subsystems that are not on the project path.

  • Untraced results — Results that the dashboard can no longer trace to a test. For example, if a test produces results, but you delete the test, the results can no longer be traced to the test.
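
If you need to change several flagged link types at once, you can also do it programmatically. The following is a minimal sketch, not the documented workflow for this example: it assumes that the Requirements Toolbox functions slreq.open, find, and slreq.inLinks behave as shown, and that the Type property of the returned link objects is directly settable.

    % Load the requirement set used in this example.
    reqSet = slreq.open("cc_SoftwareReqs");

    % Find informational requirements; these should not have Implements links.
    infoReqs = find(reqSet, "Type", "Informational");

    % Change each incoming Implement-type link to a Relate-type link.
    for k = 1:numel(infoReqs)
        links = slreq.inLinks(infoReqs(k));    % links that target this requirement
        for j = 1:numel(links)
            if strcmp(links(j).Type, "Implement")
                links(j).Type = "Relate";      % assumption: Type is settable
            end
        end
    end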

Address Testing Traceability Issues

The widgets in the Test Analysis section of the Model Testing Dashboard show data about the unit requirements, tests for the unit, and links between them. The widgets indicate if there are gaps in testing and traceability for the implemented requirements.

Link Requirements and Tests

For the unit cc_DriverSwRequest, the Tests Linked to Requirements section shows that some of the tests are missing links to requirements in the model.

To see detailed information about the missing links, in the Tests Linked to Requirements section, click the Unlinked widget. The dashboard opens the metric details for the widget, showing a table of metric values with hyperlinks to each related artifact. The table is filtered to show only the tests that are implemented in the unit but do not have links to requirements.

The test Detect Long Decrement is missing linked requirements.

  1. In the Artifact column of the table, point to Detect Long Decrement. The tooltip shows that the test Detect Long Decrement is in the test suite Unit test for DriverSwRequest, in the test file cc_DriverSwRequest_Tests.

  2. Click Detect Long Decrement to open the test in the Test Manager. For this example, the test needs to link to three requirements that already exist in the project. If the requirements did not already exist, you could add them by using the Requirements Editor.

  3. Open the software requirements in the Requirements Editor. In the Artifacts panel of the dashboard window, expand the folder Functional Requirements > Implemented and double-click the requirement file cc_SoftwareReqs.slreqx.

  4. View the software requirements in the container with the summary Driver Switch Request Handling. Expand cc_SoftwareReqs > Driver Switch Request Handling.

  5. Select multiple software requirements. Hold down the Ctrl key as you click Output Request Mode, Avoid Repeating Commands, and Long Increment/Decrement Switch Recognition. Keep these requirements selected in the Requirements Editor.

  6. In the Test Manager, expand the Requirements section for the test Detect Long Decrement. Click the arrow next to the Add button and select Link to Selected Requirement. The traceability link indicates that the test Detect Long Decrement verifies the three requirements Output Request Mode, Avoid Repeating Commands, and Long Increment/Decrement Switch Recognition. For a command-line alternative to this linking step, see the sketch after this list.

  7. The metric results in the dashboard reflect only the saved artifact files. To save the test file cc_DriverSwRequest_Tests.mldatx, in the Test Browser pane, right-click cc_DriverSwRequest_Tests and click Save.
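
As an alternative to linking in the Test Manager, you can create the verification links programmatically. This is a sketch under stated assumptions, not part of the documented example: it assumes the test case is in the first test suite of the file, that slreq.find can locate requirements by their Summary text, and that slreq.createLink accepts a Test Manager test case as the link source.

    % Load the unit test file and locate the test case that is missing links.
    tf = sltest.testmanager.load("cc_DriverSwRequest_Tests.mldatx");
    ts = getTestSuites(tf);
    tests = getTestCases(ts(1));
    tc = tests(strcmp({tests.Name}, "Detect Long Decrement"));

    % Link the test case to the three requirements by their summaries.
    reqNames = ["Output Request Mode", "Avoid Repeating Commands", ...
        "Long Increment/Decrement Switch Recognition"];
    for k = 1:numel(reqNames)
        req = slreq.find("Type", "Requirement", "Summary", reqNames(k));
        slreq.createLink(tc, req);   % assumption: a test case is a valid link source
    end

    % The dashboard analyzes saved files only, so save the test file.
    saveToFile(tf);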

Refresh Metric Results in the Dashboard

The dashboard detects that the metric results are now stale and shows a warning banner at the top of the dashboard.

  1. Click the Collect button on the warning banner to re-collect the metric data so that the dashboard reflects the traceability link between the test and the requirements. For a programmatic way to re-collect, see the sketch after this list.

  2. View the updated dashboard widgets by returning to the model testing results. At the top of the dashboard, there is a breadcrumb trail from the metric details back to the model testing results. Click the breadcrumb button for cc_DriverSwRequest to return to the model testing results for the unit.
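
Clicking Collect is the documented workflow. If you want to re-collect results from a script instead, a minimal sketch run from inside the open project could look like the following. This assumes that the Simulink Check metric API (the metric.Engine class with its updateArtifacts, getAvailableMetricIds, and execute functions) is available in your release.

    % Create a metric engine for the current project.
    metricEngine = metric.Engine();

    % Refresh artifact traceability, then collect results for the available metrics.
    updateArtifacts(metricEngine)
    execute(metricEngine, getAvailableMetricIds(metricEngine))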

The Tests Linked to Requirements section shows that there are no unlinked tests. The Requirements Linked to Tests section shows that there are 3 unlinked requirements. Typically, before running the tests, you investigate and address these testing traceability issues by adding tests and linking them to the requirements. For this example, leave the unlinked artifacts and continue to the next step of running the tests.

Test the Model and Analyze Failures and Gaps

After you create and link unit tests that verify the requirements, run the tests to check that the functionality of the model meets the requirements. To see a summary of the test results and coverage measurements, use the widgets in the Simulation Test Result Analysis section of the dashboard. The widgets help show testing failures and gaps. Use the metric results to analyze the underlying artifacts and to address the issues.

Perform Unit Testing

Run the tests for the model by using the Test Manager. Save the test results in your project and review them in the Model Testing Dashboard.

  1. Open the unit tests for the model in the Test Manager. In the Model Testing Dashboard, in the Artifacts panel, expand the folder Tests > Unit Tests and double-click the test file cc_DriverSwRequest_Tests.mldatx.

  2. In the Test Manager, click Run.

  3. Select the results in the Results and Artifacts pane.

  4. Save the test results as a file in the project. On the Tests tab, in the Results section, click Export. Name the results file Results1.mldatx and save it under the project root folder. A programmatic run-and-export sketch follows this list.
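
If you prefer scripting these steps, the sketch below loads and runs the unit test file and then exports the results with the Test Manager API. It is an illustration under the assumption that the test file loads cleanly and that the run returns a single result set; the file names match this example.

    % Load and run the unit test file, then export the results so that the
    % dashboard can trace them to the unit.
    sltest.testmanager.load("cc_DriverSwRequest_Tests.mldatx");
    results = sltest.testmanager.run;

    % Save the result set under the project root folder.
    sltest.testmanager.exportResults(results, "Results1.mldatx");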

The Model Testing Dashboard detects the results and automatically updates the Artifacts panel to include the new test results for the unit in the subfolder Test Results > Model.

The dashboard also detects that the metric results are now stale and shows a warning banner at the top of the dashboard.

The stale icon appears on the widgets in the Simulation Test Result Analysis section to indicate that they are showing stale data that does not include the changes.

Click the Collect button on the warning banner to re-collect the metric data and to update the stale widgets with data from the current artifacts.

Address Testing Failures and Gaps

For the unit cc_DriverSwRequest, the Model Test Status section of the dashboard indicates that one test failed and one test was disabled during the latest test run.

  1. To view the disabled test, in the dashboard, click the Disabled widget. The table shows the disabled tests for the model.

  2. Open the disabled test in the Test Manager. In the table, click the test artifact Detect Long Decrement.

  3. Enable the test. In the Test Browser pane, right-click the test and click Enabled.

  4. Re-run the test. In the Test Browser pane, right-click the test and click Run, and then save the test file. A programmatic version of steps 3 and 4 appears after this list.

  5. View the updated number of disabled tests. In the dashboard, click the Collect button on the warning banner. Note that there are now zero disabled tests reported in the Model Test Status section of the dashboard.

  6. View the failed test in the dashboard. Click the breadcrumb button for cc_DriverSwRequest to return to the model testing results and click the Failed widget.

  7. Open the failed test in the Test Manager. In the table, click the test artifact Detect Set.

  8. Examine the test failure in the Test Manager. You can determine if you need to update the test or the model by using the test results and links to the model. For this example, instead of fixing the failure, use the breadcrumbs in the dashboard to return to the model testing results and continue on to examine test coverage.
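
For steps 3 and 4, a command-line equivalent is sketched below. It assumes that the Enabled property of a Test Manager test case is settable and that the disabled test is in the first test suite of the file, as in this example.

    % Locate the disabled test case in the unit test file.
    tf = sltest.testmanager.load("cc_DriverSwRequest_Tests.mldatx");
    ts = getTestSuites(tf);
    tests = getTestCases(ts(1));
    tc = tests(strcmp({tests.Name}, "Detect Long Decrement"));

    % Enable the test case, re-run only that test, and save the test file so
    % that the dashboard picks up the change on the next collection.
    tc.Enabled = true;        % assumption: Enabled is directly settable
    result = run(tc);
    saveToFile(tf);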

Check if the tests that you ran fully exercised the model design by using the coverage metrics. For this example, the Model Coverage section of the dashboard indicates that some conditions in the model were not covered. Place your cursor over the Decision bar in the widget to see what percentage of decision coverage was achieved.

  1. View details about the decision coverage by clicking one of the Decision bars. For this example, click the Decision bar for achieved coverage.

  2. In the table, expand the model artifact. The table shows the test results for the model and the results files that contain them. For this example, click the hyperlink to the source file Results1.mldatx to open the results file in the Test Manager.

  3. To see detailed coverage results, use the Test Manager to open the model in the Coverage perspective. In the Test Manager, in the Aggregated Coverage Results section, in the Analyzed Model column, click cc_DriverSwRequest.

  4. Coverage highlighting on the model shows the points that were not covered by the tests. For this example, do not fix the missing coverage. For a point that is not covered in your project, you can add a test to cover it. You can find the requirement that is implemented by the model element or, if there is none, add a requirement for it. Then you can link the new test to the requirement. If the point should not be covered, you can justify the missing coverage by using a filter.

Once you have updated the unit tests to address failures and gaps in your project, run the tests and save the results. Then examine the results by collecting the metrics in the dashboard.

Iterative Requirements-Based Testing with the Model Testing Dashboard

In a project with many artifacts and traceability connections, you can monitor the status of the design and testing artifacts whenever there is a change to a file in the project. After you change an artifact, use the dashboard to check if there are downstream testing impacts by updating the tracing data and metric results. Use the metric details tables to find and fix the affected artifacts. Track your progress by updating the dashboard widgets until they show that the model testing quality meets the standards for the project.

