Module 13 - T202
T202: Overview of Test Design Specifications, Test Cases, and Test Procedures
HTML of the Course Transcript
(Note: This document has been converted from the transcript to 508-compliant HTML. The formatting has been adjusted for 508 compliance, but all the original text content is included.)
After you complete this training, we hope that you will tell colleagues and customers about the latest ITS standards and encourage them to take advantage of the archived version of the webinars. ITS Standards training is one of the first offerings of our updated Professional Capacity Building Program. Through the PCB program we prepare professionals to adopt proven and emerging ITS technologies that will make surface transportation safer, smarter, and greener, which improves livability for us all. You can find information on additional modules and training programs on our web site www.pcb.its.dot.gov. Please help us make even more improvements to our training modules through the evaluation process. We look forward to hearing your comments. Thank you again for participating and we hope you find this module to be helpful.
Your instructor Russ Brookshire is the Product Manager for Embedded Systems for Intelligent Devices. He has been involved in manufacturing and testing ITS products for over 20 years. Most recently he worked with Patrick Chen on developing the test procedures for NTCIP 1203 version 3 dynamic message signs. The next voice you hear will be that of your instructor.
There are a few recommended prerequisites for this course; they include T101: Introduction to ITS Standards Testing and T201: How to Write a Test Plan. These form part of the curriculum path for testing, which begins with T101: Introduction to ITS Standards Testing.
During the T101 webinar you will have learned about IEEE 829, which is the standard for software test documentation, and we'll be going over how IEEE 829 fits in with testing here in this webinar. That course also defined the phases of testing along with some methods of testing, and we'll be showing how those can be used in the testing process. The next webinar was T201: How to Write a Test Plan, and this explained the elements that are included in a test plan per IEEE 829. In this module, T202: Overview of Test Design Specifications, Test Cases and Test Procedures, we'll be going over all the elements of those individual documents along with providing quite a few examples. Following from this module you can see there are several choices. At the very top you'll see T311: Applying your Test Plan to the NTCIP 1203 v03 Dynamic Message Sign Standard. Below that, T313: Applying your Test Plan to the NTCIP 1204 v03 Environmental Sensor Station Standard. And finally, T3XX: there are additional modules that will be created in the future which would follow from this, applying your test plan to other NTCIP Standards, TMDD Standards and ATC Standards.
Once you've completed this module, these are some of the things that you will be capable of doing, including describing, within the context of the testing lifecycle, the role of Test Plans, Test Design Specifications (TDS), Test Cases and Test Procedures. In addition, you'll be able to describe the purpose and content of Test Design Specifications, Test Cases and Test Procedures. Next, for standards that were created using the Systems Engineering Process (SEP), you'll be able to detail the manner in which Protocol Requirements Lists and Requirements to Test Cases Traceability Matrices can be used to create Test Specifications, and we'll have an example of that coming from the NTCIP 1203 Dynamic Message Sign Standard. I will also show how dialogs, another element of the NTCIP standards created using SEP, can be used to create Test Specifications, and we'll use the Transportation Sensor Systems standard, NTCIP 1209, for that example. Some of the NTCIP standards do not use SEP, the Systems Engineering Process, so you'll also be able to detail the manner in which Conformance Groups can be used to create Test Specifications.
So let's start with a general overview of testing and how it fits into an ITS Project. Someone who has an ITS Project in mind would start in the upper left hand corner of this diagram, which is termed the V diagram. The first thing they would take into account would be the Regional Architecture: what do they already have in place that could be used, hopefully leveraging existing facilities or existing functions that are already in their network. Next they might look at a Feasibility Study or some Concept Exploration for actually implementing these functions. Next, working down the left side of the V diagram, which is termed Decomposition and Definition, we'll go through and create the Concept of Operations, work up System Requirements, and finally work into the High Level Design and Detailed Design for that actual project. Once you hit the High Level Design and Detailed Design, it's possible that these are taking place at the manufacturer rather than at the specifying agency. Finally, at the very bottom, there is the Software and Hardware Development and Field Installation. The Field Installation may actually take place at a slightly later time, after some testing has been performed, rather than at this stage. Next, working up the right side, we can see that this is where the testing actually takes place. There might be Unit or Device Testing that would occur, followed by Subsystem Verification, where the device would be placed in the field and the communications channels that are used to communicate with those devices are also tested, along with the central system that would be controlling and monitoring those devices. This would just be for a single device type, such as the Dynamic Message Sign or Environmental Sensor Station; it wouldn't be testing interaction between these subsystems at this point.
Once you move up to System Verification and Deployment, that's when the interaction would be tested, and finally System Validation would occur: does the item actually perform as expected? Next the project would move into Operations and Maintenance; there might be some changes and upgrades during the life of the unit, and finally Retirement and Replacement. Looking at the central section we can see that associated with each of the design phases are testing plans that correspond to the test phases. So starting from the bottom we might have a Unit or Device Test Plan. This could be a factory acceptance test, it could be a proof of concept test, something of that nature. Moving up we have a Subsystem Verification Plan; these tests typically occur in the field. Then there is the final System Acceptance or System Verification Plan, and at the last and highest level would be the System Validation Plan. So this is the manner in which testing works into the project lifecycle.
So there's obviously a lot of time and effort involved here, so one question would be: why test at all? Well, several reasons. One is to validate the system against the user needs. Next would be to verify compliance with the procurement specifications, to make sure that, having gone to all this trouble to put together these specifications, the agency got what they actually were requiring. And finally would be to verify conformance to the standard; this would ensure interoperability and interchangeability. Note that validation and verification occur at different stages in the project life cycle, with the verification typically occurring before the validation, and compliance and conformance testing, the two last points there, can sometimes be combined. But the NTCIP test procedures only ensure conformance to the NTCIP standards. As far as how to go about testing, there are several testing methods that can be used. These range from inspection to analysis, demonstration, or formal testing. Starting with inspection, this would be verification by physical and visual examination. Some examples of this might be the taking of measurements, verifying the application of compliance labels that would be applied to the unit itself, or perhaps counting items that are associated with the unit. The next method would be analysis; this is verification by means of calculations. Some examples of this might include simulating wind forces on a camera tower in a computer program. This is definitely something you would not want to try to verify with formal testing: if it does not pass that test, you will have destroyed a camera tower. Another would be structural analysis of the forces caused by ice load on a Dynamic Message Sign. Again, you would not want to fail that test; it could be quite expensive. Better to perform the structural analysis beforehand. Next, another testing method would be demonstration.
So this would be verification of a function observed under a specific condition. An example of this would be testing a weather monitoring system in the factory; in this case the specific conditions would be room temperature with no wind, a relative humidity of about 50% and no rain, so that only gets you so far. With formal testing we're looking at verification of a function observed under controlled exercises using real or simulated stimulus. Here we now have the ability to change those conditions, so we'd be testing a weather monitoring system outside or using simulated conditions so we could run the full temperature range of -34°C to +74°C. We could run from calm up to hurricane winds, and from low humidity all the way up to where the moisture actually condenses on the unit. And as far as precipitation, we could simulate fog, mist, rain, snow, sleet, hail, freezing rain, anything that's possible to monitor. So any of those are possible testing methods. NTCIP uses several of them; it typically does not use analysis, but inspection, demonstration and formal testing are all methods that could be used in an NTCIP test procedure.
Now let's move along to an activity: what are some of the benefits of NTCIP Conformance Testing? Anyone who has an idea of what some of those benefits might be, feel free to go ahead and enter those in the chat box.
Okay, I'm getting a couple of answers now, and we're seeing "to ensure interoperability," or make sure the devices are interoperable, and another is to ensure meeting of requirements. So in this case we can see yes, but note that testing for compliance with the project specifications only shows the system works as specified. That would be the distinction with ensuring meeting of requirements: those are related to project specifications, and that is what would be termed compliance testing.
Conformance testing is against a standard, so there's typically a published standard, and the major reason for doing this is that it promotes interoperability of system elements by means of standardized dialogs, test cases and test procedures, and we'll be going over some of those as they have been implemented in some of the NTCIP standards.
Standardization also reduces overall system cost and risk because manufacturers can perform this testing before the specification has actually been issued by the agency. It's possible for them to work out a lot of the issues that may have otherwise developed.
Okay, I'll start off with Test Plans. These are defined in IEEE 829 and they were covered in detail in Module T201 - How to Write a Test Plan. Because the Test Design Specifications, Test Cases and Test Procedures follow closely on the design of a test plan, we'll now provide a brief overview of the Test Plans and how they fit into the Project Life Cycle. Test Plans are typically developed during the “Decomposition and Definition” phase of the Project Life Cycle; as you recall, this was the left side of that V diagram. For NTCIP Conformance Testing of a field device, this would take place during the detailed design phase. The Test Plan itself is a high level document, which means the Test Plans are primarily intended to provide enough information to ensure that all parties involved in the testing are on the same page. The details will be provided in the Test Design Specifications, Test Cases and Test Procedures, which are generated later in the Project Life Cycle. As far as the Test Plan itself, what it's going to do is define what item is to be tested and when it is to be tested, in what detail the item is to be tested, how the item is to be tested, and who is to design and perform the testing.
So let's go over each of those elements of the Test Plan in detail, starting with what item is to be tested and when it is to be tested. The what item is relatively straightforward depending on the device that's been specified; it might be a Dynamic Message Sign, Weather Station, etc. But as to when it is to be tested, this can be looked at in terms of the relation to other milestones in the project rather than a specific date at this time; as we mentioned, this is a high level document. The Unit/Device Test would cover the item and its interfaces. An example there would be a field device or central system where the communications interfaces are simulated, and this is typically how a lot of NTCIP testing itself occurs. A subsystem verification would test the item, its communications, and the other items that communicate with the test item, such as central systems as an example. So that's going to basically test that whole combination, but just for a single field device type; in many instances a subsystem verification may just be testing a single instance of a central system with a single device. Once we go to System Verification, this ensures that the entire system meets the system requirements for all of the interactions between all of the different subsystems, and it still includes the interaction with the individual devices, but in this case it's going to be testing every single device. And finally there would be System Validation, and this will be used to show that the system as implemented meets the original user needs. So NTCIP conformance testing typically occurs at the Unit/Device Test stage, the Subsystem Verification test and/or the System Verification stage.
So in what detail is the item to be tested? This is defined in the Test Plan, and we can divide it up into some categories. As far as communications, we're looking at: does the item conform to the communications standard as specified? The communications themselves might be via serial or Ethernet, or might be a wireless communication system, and this would be where packet errors would be tested for. Another category would be the functionality of the device: does the unit exhibit the functionality defined in the specifications? Some examples of functionality would be the ability of a camera to zoom in and out, the ability of a Dynamic Message Sign to change the brightness of the message being displayed, and, for an environmental sensor station, the ability to monitor the air temperature. There might also be some other devices that would monitor air temperature, but that would obviously be a primary function of an environmental sensor station. Some other categories would be performance: the speed of the device; the reliability, or how often it's available or how often it breaks down; perhaps the capacity of the device in terms of memory or in terms of sheer volume of what it does. Another category is hardware. What is the device actually made of, and how robust is it? What materials does it consist of, what is the strength of those materials, and how do they withstand vibration? And finally would be the environmental category: in this case, how well does the device withstand extremes of temperature, humidity, water intrusion and ice buildup? Notice these can apply to any field device. The NTCIP standards and testing typically include communications and functionality requirements, as can obviously be seen from here, but they might also include hardware or performance requirements. A hardware requirement example comes from NTCIP 2101, PMPP using RS-232, where the physical interface is defined as a female 25 pin connector.
So what's usually thought of as a communication standard also contains a physical hardware requirement, and that would be an item that could be tested for in an NTCIP test plan. As to a performance requirement, there's an example from NTCIP 1203, the Dynamic Message Sign Standard, where the DMS is required to update fields at least every 60 seconds, a field being an area of the sign where a number can be placed that changes based on external input such as radar or perhaps temperature. So there's a performance requirement. The NTCIP standards typically don't cover environmental, because in this instance environmental refers to the ability of the field device to withstand the elements, not to measure them. The measuring of the environmental parameters would actually fall under functionality, and that would be covered by NTCIP 1204 Environmental Sensor Stations.
So next, another question answered by the Test Plan is how is the item to be tested? NTCIP testing itself is a combination of communications testing and functional testing. The communications testing is typically performed by NTCIP test software and/or off the shelf protocol analyzers. These have the benefit of allowing you to customize the testing that you're performing; in addition, they have the ability to capture exactly what happened throughout the testing process, so it's possible to review the results later. The functional testing may require specialized equipment to simulate testable conditions. An easy example of that would be the ability of an environmental sensor station to monitor temperature: you're going to have to change the temperature surrounding that station. And the data that are communicated with the device must correlate at some point with observable behavior, which constitutes the functionality to be tested. Obviously, if we were just sending data back and forth and nothing was actually changing out there in the field, we'd just be talking to a database, and typically with ITS devices we want to either change something out in the field or retrieve information as to what is happening out in the field.
Next is who is to design and perform the testing, another question that's answered by the Test Plan. There are several possibilities here: it could be agency personnel, it could be an out-of-house expert, or perhaps the manufacturer's representatives, someone who is actually producing the device. Note that all of these have pros and cons. The agency personnel are very familiar with the user needs but may not be as familiar with the technical details of the devices themselves. An out-of-house expert should have the technical aspect covered but may be expensive or unavailable. And a manufacturer's representative is very familiar with the device but not as familiar with the user needs, so the testing may not be well tailored to the final application. So each of these has pros and cons, but in many instances a combination of those, dividing the tasks up, may be the answer.
Okay, some additional considerations for the Test Plans might be the Item Pass/Fail Criteria. Is it a requirement that the items being tested pass all individual tests to receive a passing grade? This really depends on the stage of testing that is being performed. The criteria being used for the pass/fail would vary, perhaps being a bit less strict at a proof of concept, where it may be that a grade of 70% would be considered passing. As you move up through a prototype factory acceptance test, first article site acceptance test, subsystem test and system test, you're going to want that to go up; obviously, the farther along you get, the more dire are the consequences of not having passed some portion of the test. Next would be the Suspension Criteria and the Resumption Requirements for the testing itself. Here we're speaking in particular of whether you have to stop the testing for planned stops, such as lunch breaks that occur during testing, or perhaps overnight. Some tests actually require that they run for extended periods of time. An example of that might be some of the temperature tests for environmental sensor stations, where a 24 hour minimum and maximum are recorded. You want to make sure that nothing occurs during that testing period, as it may be that people don't have access to that equipment and can't monitor it for the entire 24 hour period. So you need to define these beforehand so everyone is aware of the kind of location that's necessary for the equipment and of the access to that equipment.
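The stage-dependent pass/fail idea above can be sketched in a few lines of code. This is a minimal illustration, not anything from IEEE 829 or NTCIP: the 70% proof-of-concept figure comes from the transcript, but the other threshold values, the stage names, and the `item_passes` function are illustrative assumptions.

```python
# Sketch of stage-dependent Item Pass/Fail Criteria: the passing threshold
# rises as testing moves from proof of concept toward system test.
PASS_THRESHOLDS = {
    "proof of concept": 0.70,         # 70% figure mentioned in the transcript
    "factory acceptance test": 0.90,  # assumed value for illustration
    "site acceptance test": 0.95,     # assumed value for illustration
    "system test": 1.00,              # assumed: every individual test must pass
}

def item_passes(stage, results):
    """results: list of booleans, one per individual test executed."""
    fraction_passed = sum(results) / len(results)
    return fraction_passed >= PASS_THRESHOLDS[stage]

# Example: 8 of 10 tests passed (80%) is enough at proof of concept,
# but not at a factory acceptance test.
results = [True] * 8 + [False] * 2
print(item_passes("proof of concept", results))         # True
print(item_passes("factory acceptance test", results))  # False
```

The point is simply that the Test Plan should pin these numbers down before testing starts, so pass/fail is mechanical rather than negotiated after the fact.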
Another item to take into account would be any retesting requirements, and this goes right back to the Item Pass/Fail Criteria. If for some reason the item does not pass one of the required tests, then the question would be: is it required that the item be run through the entire litany of tests, or just a subset of those tests, or simply the single test which was not passed? This should be decided upon very early in the process so that everyone is aware of the consequences should a test not pass. Note that the Test Plans are not included in the NTCIP Standards and so are the responsibility of the specifying agency.
So as a summary of Test Plans: it's a management level document that provides an overview of the testing that's to be performed. Once you have a Test Plan, the next step is to define the test specifications, and these include the Test Design Specifications (TDS), the Test Case Specifications (TCS) and the Test Procedure Specifications (TPS). These three documents are the primary focus of this webinar. Following after the creation of the Test Plan, they're used to detail the testing that's to be performed. Test Design Specifications specify the details of a test approach for a feature and identify the associated tests to be performed to verify the feature. Test Design Specifications differ from Test Plans in the level of detail that's given. However, as far as the actual tests themselves are concerned, the Test Design Specifications simply list them; the tests are defined using the Test Case Specifications and Test Procedure Specifications. The Test Case Specifications are a document specifying the inputs, the outputs and the conditions under which the testing is to be performed, and the Test Procedure Specifications are a document that specifies a sequence of actions for the execution of a test. So the Test Case Specifications and the Test Procedure Specifications are tightly linked, and we'll see exactly how here shortly, but they come together to be the documents that are used in the final testing. Just as with the Test Plans, the TDS, TCS and TPS are typically generated in the Decomposition and Definition stage of the Project Life Cycle; as you remember, on the V diagram that would be the left hand side.
So let's show the relationship between these test specifications; it'll make the roles a bit more clear. Starting at the top of this slide we can see the Test Plan, and each Test Plan has a Test Design Specification. Each Test Design Specification may reference multiple Test Case Specifications, and typically does. After this, you'll note that each Test Case may reference one or more Test Procedures and vice versa; each Test Procedure may be referenced by multiple Test Cases. So let's go to an example that shows two Test Cases, Test Case One and Test Case Two, and note that they're linked to a single Test Procedure.
So our example is going to have a test item of a calculator, a simple hand held calculator. Starting from the Test Design Specification, the feature we are going to be testing is addition. We have two different Test Cases: Test Case One on the left is the ability to add two positive numbers, and Test Case Two on the right would be an error condition, which is that the sum is too large. As you recall, the Test Cases are used to define inputs, conditions and results. So if we look at Test Case One, we can see that input one is seven and input two is 12, our operation is plus for addition, and our expected result would be that the calculator displays 19. The same kind of information is shown for Test Case Two. If we go down to the Test Procedure, you'll note that all it lists are the actual steps used to perform the test. In this case the Test Procedure is called arithmetic; it could cover addition, subtraction, multiplication, division, etc. The steps themselves are: (1) enter input one, (2) enter the operation, (3) enter input two, (4) press the “=” key, and (5) verify the result. So if we go through the Test Procedure using the inputs from Test Case Two, we can see: enter input one of 500, enter the operation, which would be the “+” sign, enter input two of 500, press the “=” key and verify the result. Our result should be that the calculator displays error; you'll note that this calculator doesn't have very many digits. So in this case you can see how you can have multiple Test Cases, obviously many more than we have here: it could be adding a positive and a negative number, adding two negative numbers, multiplying positive or negative numbers, etc. All of these would have to be tested to ensure that your calculator performs properly, but you can use that single Test Procedure over and over again with all of these different Test Cases. So that's a very basic example of how Test Design Specifications, Test Cases and Test Procedures link together.
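The calculator example above can be sketched in code to make the split concrete: the Test Procedure is one reusable sequence of steps, and each Test Case supplies only data (inputs, operation, expected result). The `Calculator` class here is hypothetical, standing in for the hand-held unit with a small three-digit display.

```python
class Calculator:
    """Stand-in for the device under test: a 3-digit hand-held calculator."""
    MAX_DISPLAY = 999  # largest value the display can show

    def __init__(self):
        self._first = None
        self._op = None
        self._second = None

    def enter_number(self, value):
        # Before an operation is entered, digits go to the first operand.
        if self._op is None:
            self._first = value
        else:
            self._second = value

    def enter_operation(self, op):
        self._op = op

    def press_equals(self):
        result = self._first + self._second if self._op == "+" else None
        if result is None or abs(result) > self.MAX_DISPLAY:
            return "ERROR"  # sum too large for the display
        return result

# Test Cases: pure data, no steps.
TEST_CASES = [
    {"name": "TC1 add two positive numbers", "in1": 7, "op": "+", "in2": 12, "expect": 19},
    {"name": "TC2 sum too large", "in1": 500, "op": "+", "in2": 500, "expect": "ERROR"},
]

def arithmetic_procedure(case):
    """The single Test Procedure: the same five steps for every Test Case."""
    calc = Calculator()
    calc.enter_number(case["in1"])      # step 1: enter input one
    calc.enter_operation(case["op"])    # step 2: enter the operation
    calc.enter_number(case["in2"])      # step 3: enter input two
    displayed = calc.press_equals()     # step 4: press the "=" key
    return displayed == case["expect"]  # step 5: verify the result

for case in TEST_CASES:
    print(case["name"], "PASS" if arithmetic_procedure(case) else "FAIL")
```

Adding a new Test Case (two negatives, a positive and a negative, and so on) is just another data entry; the procedure never changes, which is exactly the reuse the slide illustrates.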
Moving on, we've got our poll here again, a quick question: which of the following are included in the NTCIP standards? The choices are (1) Test Design Specifications, (2) Test Case Specifications, (3) Test Procedure Specifications, (4) all three, or (5) it depends. Go ahead and make your selection of which ones you think are included in the NTCIP Standards. Okay, we've got a few folks still left to vote, if you want to tally in your vote. Okay, the polls are about to close. And here we have the results. As you can see, quite a few people say that Test Design Specifications, Test Case Specifications and Test Procedure Specifications are all included in the NTCIP Standards, and that is definitely true, except they're not included in all NTCIP Standards; they're only included in some. So the answer would be five, it depends, and it depends on the actual standard itself. Test Design Specifications are not included; they would actually be generated separately. The Test Case Specifications and Test Procedure Specifications are only included in some of the newer standards that have been updated to include them. So what we're going to do is go through and show, using some examples, how to actually generate the documents that are not included in NTCIP, such as the Test Design Specifications, and, if they are included, how to select the correct Test Case and Test Procedure Specifications. If they are not included in the standard, then we'll show how to generate them from what is available in the individual standards.
So we'll start with a case study, but first a summary. We have the Test Plan, which is the overview of the entire testing process; the Test Design Specification, which specifies the details of the test approach; the Test Case Specification, which specifies the inputs, the outputs, the testing conditions and also the results that are to be expected; and the Test Procedures, which simply specify the steps to be taken to execute the test. So let's go to a case study to see exactly how these are all created, or used if they're already included in the standard.
So we'll start with a case study of a Dynamic Message Sign; this comes from the NTCIP 1203 v03 standard. This standard was created using the Systems Engineering Process (SEP), so it includes Test Cases and Test Procedures, and what we will show here is how to create the actual Test Design Specification and how to select the correct Test Cases and Test Procedures based on what requirements are necessary for that Dynamic Message Sign. We're using v03 here. v01 of the standard simply defined the functionality for the Dynamic Message Sign. v02 added in the Systems Engineering Process: it added sections for the user needs, functional requirements, a requirements traceability matrix, a protocol requirements list and dialogs, and we'll be going over a lot of these elements here shortly. And v03 added the Test Procedures and Test Cases.
So starting our case study, here we have an example Dynamic Message Sign. We can see the configuration is a character matrix, and what this means is that there is space between the individual modules such that characters can only be displayed at certain locations on the sign. In this case we can display three lines of characters, each line having 18 characters. And if you look very, very closely you can see that each character is seven rows high by five columns wide. So how does the specifying authority define these requirements for the sign? This is done through the Test Design Specification, and in particular the protocol requirements list that's included in the NTCIP Standard.
If you'd like to follow along, you can look at page six of the supplement. Note that the Test Design Specification is not included in the NTCIP Standards; it's not published. It is actually generated by the specifying agency. These are the requirements of a Test Design Specification as defined by IEEE 829, starting with the Test Design Specification identifier. It must be unique; you wouldn't want it to get confused with another Test Design Specification. But note that IEEE 829 does not require any particular format; it can be whatever the agency would like to use.
Next would be the features to be tested; this identifies the test items and the specific features that are going to be tested for each. Note that the NTCIP Standards have two ways of allowing specifying agencies to select the required features: the protocol requirements list, which is what's being used in this standard, and conformance groups, which we'll cover in a later standard. There might also be some approach refinements included; these may be specific test techniques that are to be used, or they may summarize common attributes of any of the test cases. This would be helpful if you were having to write the Test Cases yourself; in this standard the Test Cases have already been generated, so it makes our job a lot easier. Next would be Test Identification, actually identifying the Test Cases to be used. At this stage you'll note that the Test Cases are identified but not detailed: there are no particular inputs or outputs, results, etc. As you recall, that goes into the Test Case Specification. And next would be the Feature pass/fail criteria. Here we're not looking at the item pass/fail criteria of the Test Plan but rather just the individual features.
Now into the nitty gritty: we are looking at an actual excerpt from the NTCIP 1203 standard. This is the protocol requirements list I was referring to, and this is where the agency would go to actually begin selecting the features that they require for their implementation. We're going to go through specifying just the configuration of the Dynamic Message Sign itself. In this case, you'll notice that on the left there are functional requirements, and over in the conformance column we have Ms and Os, M meaning mandatory and O meaning optional. You'll note that under the support project requirement, anytime there's a mandatory it's always listed as yes. For an optional conformance, however, it's up to the agency to select whether they want that or not. So in this case you'll note that non-matrix is an option and it was selected as no, because the sign we are specifying is not a non-matrix sign, it is a matrix sign, and so you'll see the next requirement is selected as yes. You'll also note that under the additional project requirements there is a requirement that's been filled out that says the pitch between pixels shall be at least 66 mm. Now, originally there was an underline under the 66, a blank I should say, so there was a selection there of between 0 and 255 mm pitch between the pixels that could be made by the agency. What you'll note is that for the options, in some instances they'll simply say conformance optional. Another format that can be used, as seen here, is what's termed an option group, and in this case we're seeing two option groups listed: option group two, listed as O.2, and option group three, listed as O.3. For each of these there's also a parenthesis after it indicating the range. So for option group two it's simply saying that out of all of the possibilities in the group, and we can see there are two of them, either non-matrix or matrix, you have to select at least one.
So actually you have to select one and only one, and that was done here by selecting matrix. For option group three you must select one of those, and this was done at the very bottom by selecting character matrix. The range could also have been given in another format, 1..*, indicating that you have to select at least one but after that you can select as many as you would like. Obviously these particular options are mutually exclusive; the sign is either full matrix or it's line matrix, it can't be both. But there are several other options where, I'm sure you can imagine, you might select multiple of them, and that would be done with a range given as 1..*. Now notice here for the matrix requirement there's a reference, matrix in parentheses, and this is referred to as a conditional reference. Basically what this means is that it allows, when selected, other requirements to follow from your having selected matrix. In other words, if the agency selects matrix, then these other requirements become mandatory, and we'll see some of those here on the next slide. You'll note that because matrix was selected, several requirements are now mandatory, including the ones shown here in the bottom three lines. The predicate, matrix, indicates that if matrix was selected on the previous slide, then those individual functional requirements are now mandatory. You can see a lot of the items here are already mandatory, and the conditional ones were the bottom three: determine sign face size in pixels, determine character size in pixels, and determine pixel spacing. Because matrix was selected these are all mandatory and so were selected as yes.
So what if you wanted more information about a functional requirement, such as determining character size in pixels or determining the matrix capabilities? In this case we can look at the Requirements Traceability Matrix. This is another element that's included in any of the NTCIP standards created using the Systems Engineering Process, and there are quite a few of those out there now. What the Requirements Traceability Matrix does is map functional requirements to dialogs and objects. We'll take a look at the determine sign face size in pixels requirement here, and what we'll see is that it maps to a dialog ID of G.1. I'll go ahead and break the suspense: this is basically just a generic SNMP Get interface. The central sends the request to the device, saying please send me this information back, and the sign responds with that information. There's not a lot in that dialog; many of the other dialogs are more complex, and I'm going to go over one of those later in the webinar. But in addition it shows the objects that are actually involved. In this case there are two objects, vmsSignHeightPixels and vmsSignWidthPixels. As you'll recall we had three lines, each seven pixels high, for a total of 21 for vmsSignHeightPixels; if we asked the sign for that object it would return 21. The sign width in pixels would be 18 times 5, the number of columns for each individual module, which is 90.
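The generic Get dialog described above is simple enough to sketch in code. Below is a minimal illustration, assuming a mocked transport (a plain dict standing in for the sign's MIB and a plain function standing in for the SNMP exchange); the two object names come from the standard, everything else is hypothetical.

```python
# Mocked device MIB for the example sign: 3 lines x 7 pixels = 21 high,
# 18 modules x 5 columns = 90 wide.
sign_mib = {
    "vmsSignHeightPixels.0": 3 * 7,   # 21
    "vmsSignWidthPixels.0": 18 * 5,   # 90
}

def snmp_get(mib, object_name):
    """Stand-in for a single SNMP Get request/response exchange (dialog G.1)."""
    return mib[object_name]

height = snmp_get(sign_mib, "vmsSignHeightPixels.0")
width = snmp_get(sign_mib, "vmsSignWidthPixels.0")
print(height, width)  # 21 90
```

A real test tool would perform the exchange over SNMP rather than a dict lookup, but the dialog itself is just this request/response pair per object.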
So how about linking the functional requirement, looking forward, to the Test Cases and Test Procedures? In this case we have another matrix, another element of the standards that is available, called the Requirements to Test Cases Traceability Matrix, and we can see an excerpt of that printed here at the bottom. You'll note that this is only included in standards that have Test Cases and Test Procedures. What it does is link a functional requirement (reusing the exact same one here, determine sign face size in pixels) to one or more Test Cases. In this case the Test Case it links to is C.3.1.6, determine sign face size in pixels. Note that there may have been more than one Test Case listed for that functional requirement; in this case there was only one, but all of the Test Cases referenced would have to be performed to verify conformance.
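The Requirements to Test Cases Traceability Matrix can be thought of as a simple lookup table. Here is a hedged sketch of that idea; the single entry mirrors the excerpt discussed above, and the structure itself is an assumption for illustration.

```python
# One functional requirement maps to one or more Test Cases; every listed
# case must pass before conformance to that requirement can be claimed.
rtctm = {
    "Determine Sign Face Size in Pixels": ["C.3.1.6"],
    # Other requirements may map to several Test Cases, e.g.:
    # "Some Other Requirement": ["C.3.2.1", "C.3.2.2"],
}

def cases_for(requirement):
    """Return the Test Cases that must be performed for a requirement."""
    return rtctm.get(requirement, [])

print(cases_for("Determine Sign Face Size in Pixels"))  # ['C.3.1.6']
```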
So let's see what constitutes an actual Test Case Specification. In accordance with IEEE 829, a Test Case Specification is a document specifying the inputs, the predicted results, and a set of execution conditions for a test item. Let's look at the sections referenced on the slide; if you'd like, refer to pages seven through nine in the supplement. We'll go over the sections except for the Input Specifications, which are shown on the following slide. For the Test Case Specification we start again with a Test Case Specification identifier, which would be unique; in the supplement we see that the Test Case Specification identifier is TCS-C-DMS. Next it would have a purpose, which defines the purpose of the document itself. Next come the test items that are to be tested, so there's a short definition there of what the actual items to be tested are. The Input Specifications we're going to go through here in the next slide. The Output Specifications would be the results, and we're also going to see those in the next slide. Then there are any environmental needs; this is a broad term covering basically everything that's required to perform this testing, from the physical space to any kind of special test tools and anything else that might be necessary. Next would be any Special Procedural Requirements; there may be variations to the Test Plan that are required, and these can be listed here. Finally there might be Intercase Dependencies: it may be that a certain test has to be performed before another simply because it will not work in any other order, or it may be that the testing can simply be performed quicker if certain Test Cases are performed before or after others.
So let's go back and take a look at the actual Input Specifications. In this case we're now looking at what is given on page nine as Appendix A; this was developed for this Test Case. Here we're seeing the Test Case/Test Procedure ID of 1.6, determine sign face size in pixels, and we defined several variables, a reference for each, and the expected value that is to be returned. In this case we don't have so much inputs as expected results. As I mentioned, the required sign pixel height would be 21 pixels (3 times 7), and the required sign pixel width would be 18 times 5, which is 90. This would be the Test Procedure that's actually being referenced, and this is published in the NTCIP standard. Now, here you'll notice there's a slight variance from IEEE 829; it does not follow exactly the way that IEEE 829 defines Test Cases and Test Procedures. You'll notice that in NTCIP 1203 the Test Case is listed as 1.6, determine sign face size in pixels, with the variables we were referring to previously along with some pass/fail criteria. So there are a lot of elements that come from a Test Case. But then right after it you see that the Test Procedure, the actual test steps, are defined, so there is no means, as this is laid out, to define multiple Test Cases linked to a single Test Procedure. In many cases the actual NTCIP Test Procedures simply include those possible Test Cases in the Test Procedure itself, so it will actually test for multiple inputs, or different inputs, in a single Test Procedure. Per IEEE 829, it is recommended to separate those into Test Cases and Test Procedures so you can simply call the same Test Procedure multiple times from different Test Cases, thus increasing the reusability of that Test Procedure.
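The IEEE 829 separation just described can be sketched as one reusable procedure driven by several Test Cases. This is a minimal illustration, assuming a dict-backed mock device; all identifiers and values here are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    identifier: str
    expected: dict  # object name -> predicted value (inputs/expected results)

def get_and_compare_procedure(device, case):
    """The reusable steps: Get each object and compare it to the prediction."""
    for obj, want in case.expected.items():
        if device.get(obj) != want:
            return "FAIL"
    return "PASS"

# Two different Test Cases invoking the same Test Procedure:
device = {"vmsSignHeightPixels.0": 21, "vmsSignWidthPixels.0": 90}
case_a = TestCase("TC-a", {"vmsSignHeightPixels.0": 21})
case_b = TestCase("TC-b", {"vmsSignWidthPixels.0": 99})  # deliberately wrong
print(get_and_compare_procedure(device, case_a))  # PASS
print(get_and_compare_procedure(device, case_b))  # FAIL
```

The point is the division of labor: the procedure holds only the steps to execute, while each case supplies the inputs and expected results.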
So as far as the Test Procedure Specification, per IEEE 829 this is a document that specifies a sequence of actions for the execution of a test. You'll note, as I mentioned, that many NTCIP standards combine the Test Case Specifications and Test Procedure Specifications. The standard Test Procedures ensure that the conformance testing is performed in the same manner on separate test occasions, rather than simply running through a test based only on the inputs and outputs (yes, I tested that for these particular inputs and yes, I got these actual outputs). In this case we're making sure that the procedure that is followed is very particular. And note that it's important not to skip any steps in the Test Procedures, to ensure proper conformance testing.
So now the graphic at the right shows only the Test Procedure steps from Test Case 1.6; that was the lower section. Steps one and two record the values for pixel height and pixel width recorded in the PRL, and steps three and four record the actual sign height and width. Step five retrieves the objects that were referenced in the Requirements Traceability Matrix, and steps six and seven compare the retrieved values to the values that were recorded in the PRL. You'll note that these are examples of verifying communication requirements using the demonstration method of testing. Steps eight and nine verify the retrieved values against the actual sign width and height in pixels, so someone would have to actually look at the sign. As you can see, this is an example of verifying a functional requirement by means of the inspection method of testing, because someone actually has to view the sign and count the number of pixels. You can see at the far right the results column, which has references in parentheses that trace back to the entries in the PRL. The entries in the PRL are sometimes organized by user needs section number and sometimes by the functional requirement section numbers, but those should be unique, so you should be able to find them in the PRL.
So now, a summary of what we've gone over with this particular case study, which was for standards created using the Systems Engineering Process with Test Cases and Test Procedures. The Test Design Specification was created using the PRL; the Test Case Specifications were determined using the Requirements to Test Cases Traceability Matrix (and note that some of the inputs may be defined in the Protocol Requirements List); and finally, the Test Procedure Specifications themselves are included in the standard.
So next, what about standards that were created using the Systems Engineering Process but did not have Test Case Specifications or Test Procedure Specifications? Well, this would be a mixed case study, which is NTCIP 1209 v02, Transportation Sensor Systems. A Transportation Sensor System is used to monitor vehicle volume, occupancy, and speed over a selected period of time, possibly over multiple zones. Some examples could use radar, laser, acoustic sensors, loop detectors, or even video. This standard, NTCIP 1209 v02, was created using the Systems Engineering Process, so it has a Protocol Requirements List and dialogs, but it does not yet have any Test Cases or Test Procedures.
So what can we do? You'll note that we can create Test Design Specifications just as we showed in the previous DMS case study. In this case study we're just going to show how to create the Test Cases and Test Procedures using what's available in the standard. Starting from the Protocol Requirements List at the very top of this slide, you can see that we selected a functional requirement for resetting the system; in particular, the functional requirement is to restart the system, and the conformance is mandatory, so we have to do this. Now, in the DMS case study we could have looked in the Requirements to Test Cases Traceability Matrix and simply found the Test Cases and Test Procedures that were necessary. Well, we don't have any of those in this standard, so instead what we're going to do is look at the Requirements Traceability Matrix, which is at the bottom of the screen here, and from this we'll be able to find the dialog that's associated with this particular functional requirement.
So here we see that restart the TSS has a dialog ID pointing to reset and synchronize the TSS. Now let's take a look at that dialog and see what it looks like. This is the dialog that's published in the standard, reset and synchronize the TSS, and it has several steps that it goes through. The purpose of a dialog is to ensure that a central system, in performing a function, performs it the same way as any other central system. From the device manufacturer's perspective, it means there is a method shown to him as to how that function should be performed, and when the central performs that function in the same way as is expected by the device manufacturer, then we have interoperability between those elements.
So what we can do now is actually use this dialog as a template for creating the Test Procedure. This gives us a basic flow of how the Test Procedure itself would look. From the dialog we'll note that only one parameter needs to be passed to the Test Procedure itself, which is the command; you can see in the Test Case listing below, under Variables, we have reset command equals restart. There may be other Test Cases that would simply use a different type of reset command, and you see the result we're looking for would be sensorSystemStatus.0 = OK; that would be the response we would get from the device. So we're going to create a Test Procedure that basically uses the dialog to create those steps, and you'll see that these actually match up pretty well. Perform a Get on the system status, and then based on what the response value is, either exit the Test Procedure or continue on; it may be that the Transportation Sensor System is not properly configured at that point for accepting this, and that was included in the dialog. You'll see that in step three we Set the sensor system reset object to the reset command; that's the variable we set in our previous step to restart, so it would then execute the restart command. Then we actually verify, as seen in step six, that the response we receive back equals OK, which was the expected response. You'll notice that this is very basic; there may be additional requirements listed in the standard for any of these objects, and it would be important to look through the standard and ensure that the person writing these Test Procedures is familiar with them. There may be some special cases that would occur that would have to be included that are not included in the standard dialog. But the dialog definitely gives you a great starting place for working up that Test Procedure.
So in summary, if you have a standard that was created using the Systems Engineering Process but does not yet have any Test Cases or Test Procedures, then you can use the Protocol Requirements List, the Requirements Traceability Matrix, and the dialogs to create Test Design Specifications, Test Cases, and Test Procedures.
We have a question that's come up: can I specify how to determine the sign height, tape measure versus laser, or is it method independent? That's an excellent question, and it goes to exactly what kind of tools you would need. In this case, NTCIP does not define how that would be required to be verified. It may be that there's a very specific requirement for the height of the Dynamic Message Sign, in which case something as accurate as a laser may be required. An example would be if the sign has to slide inside of a concrete opening, and the sign is on the order of 50 feet wide by 20 feet tall; there would be a concern that a tape measure may not be accurate enough over those kinds of distances, so yes, that would be a possible requirement there. But otherwise it would not typically be listed; as I mentioned, it's not in NTCIP itself, so it comes down to the Test Design Specifications to define how that function would be verified. Excellent question!
Okay, so our next poll. Now we have: according to IEEE 829, which of the following are included in Test Procedures? We have inputs, execution conditions, the steps to execute, the expected results, or all of the above. Notice in particular we're referring to according to IEEE 829, not according to how we often find them in NTCIP standards. So which of these are included in Test Procedures? Go ahead and start entering your results. Okay, I've got a few more folks left to vote. Okay, we'll close the poll, and here we have the results: 86% saying all of the above. In this case the actual answer is the steps to execute, because according to IEEE 829, where Test Procedures are divided from Test Cases, the Test Procedures themselves include just the steps to execute.
The Test Cases would include the inputs, the execution conditions, and the expected results. So there's a bit of a distinction between those two, and that would explain the discrepancy in the results. Moving on to standards that were developed without the Systems Engineering Process: we're talking about the way these standards were originally developed, and many of them have not been updated since. So now we have no user needs, no functional requirements, no Protocol Requirements Lists; we're kind of on our own here.
So in this case the Test Design Specifications, Test Cases, and Test Procedures must all be generated by the specifying authority, or by someone they specify. What we do have in standards of this type are what's termed conformance groups; conformance groups are used by NTCIP standards that were not created using the Systems Engineering Process. Conformance groups perform two functions: they combine similar objects into broadly defined groups, and they then define whether each of those groups, and the objects within those groups, are mandatory or optional for purposes of conforming to the standard. This sounds similar to a Protocol Requirements List, but you'll notice the format is a bit different. You'll recall that Protocol Requirements Lists are used for these purposes in standards that were created using the Systems Engineering Process.
So conformance groups can be used to help generate the Test Specifications, and the following case study will help make this clear. For this case study we're going to use an NTCIP 1205 Closed Circuit Television example. You can see in the picture a camera directed down at traffic; in this case we're not using it to count vehicles, but rather to actually transfer an image back to a central system so someone can, for example, determine exactly why it is that the traffic is jammed up (they may have determined that from a separate monitoring system of some sort). The standard itself, NTCIP 1205 for cameras, was not created using the Systems Engineering Process, so there are no Protocol Requirements Lists, no Test Cases, and no Test Procedures. We'll be using conformance groups to actually generate these documents.
So here's an example of a conformance statement table from NTCIP 1205, shown at the bottom of the screen. You'll note again that there are mandatory and optional groups; in this case, looking at the bottom, the CCTV configuration conformance group is mandatory and the CCTV motion control conformance group is optional. Notice that we would be selecting the latter as supported simply because we would want the ability to control the movement of the actual camera. An example of a camera that did not require this, and the reason it is listed as optional, is that you may simply have a static camera, one that is fixed and cannot be moved.
So we'll be looking at the CCTV motion control conformance group for a particular object that's used to control the zoom function. These are some excerpts from the standard, from two conformance groups, that show the mandatory objects for the zoom feature itself. In this case we have the configuration conformance group, which is showing rangeZoomLimit and timeoutZoom, and from the motion control conformance group we have positionZoomLens. With conformance groups it may be necessary to select one or more optional groups and objects to specify the desired features, whereas PRLs are conveniently organized by functional requirements. We'll now take a look at what these particular objects do: the positionZoomLens object allows the central to command the camera to change its zoom setting; rangeZoomLimit defines the maximum zoom level, how far in you can zoom; and timeoutZoom limits how long a zoom can continue. Now, rangeZoomLimit and timeoutZoom apply to three of the zoom modes, but for the example Test Cases we're doing here we'll only apply them against a single mode. This is an example of one of the tradeoffs that must be made in testing, thoroughness versus cost and time, and in many cases the specifying agency will make these decisions based on how they will be using the actual device and what is most important to them.
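The conformance-group selection described above can be sketched as a small data structure when deriving a Test Design Specification from a non-SEP standard. The group and object names follow this case study; the structure and helper function are assumptions for illustration.

```python
# Each conformance group bundles objects and carries a mandatory/optional status.
conformance_groups = {
    "CCTV configuration": {"mandatory": True,
                           "objects": ["rangeZoomLimit", "timeoutZoom"]},
    "CCTV motion control": {"mandatory": False,  # optional, selected here
                            "objects": ["positionZoomLens"]},
}

def objects_to_test(groups, selected_optional):
    """Collect objects from mandatory groups plus any selected optional ones."""
    names = []
    for group, info in groups.items():
        if info["mandatory"] or group in selected_optional:
            names.extend(info["objects"])
    return names

print(objects_to_test(conformance_groups, {"CCTV motion control"}))
# ['rangeZoomLimit', 'timeoutZoom', 'positionZoomLens']
```

The resulting object list is the starting point for writing Test Cases by hand, which the next slides walk through.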
So here's an excerpt from the Test Case Specification; this is included in the student supplement. It only shows three of the eight Test Cases required to show conformance to the zoom feature; the entire set of Test Cases, as I said, is included in the supplement. The first column of the table provides the Test Case Specification identifier, in this case the Test Case number. The second column shows the Test Case name. The Objects under Test indicate the objects that are used in the Test Case. The Variables and Results columns define the inputs and the outputs. Finally, the Test Procedure Steps column for each Test Case provides traceability into the Test Procedure Specifications; this could also be done in a separate table. Test Case TC1205-001 is the first Test Case, used to test the ability of the camera to zoom in to its maximum telephoto setting. This is tested in Test Procedure TP1205-003, steps 4-10, and it's tested using the positionZoomLens object with the mode set to two, absolute, because what we're doing is going to a particular zoom setting. There are other modes available, such as delta, where you might move a certain amount from where you are, or continuous, which doesn't make as much sense with zoom (it makes much more sense with the pan command), but all the modes are used alike across all of the different methods of moving the camera. You'll note another variable is the speed at which it's being moved; in this case we're moving it fast. Finally there is the offset, which is the rangeZoomLimit, and so what we have as a result is that the camera should zoom to its maximum setting using its fastest speed.
So that was the Test Case Specification, and this is the Test Procedure that is used for actually testing against that Test Case. This only shows the first ten steps of the Test Procedure for verifying the zoom feature; it's referenced by Test Case TC1205-001, as we just showed. The upper left-hand box shows the identifier, in this case TP1205-003. The box to the right shows the title, which is zoom camera, along with a brief description of the purpose of the test, the pass/fail criteria, and the variables that are used. In this case you can see that each of the variables has an acceptable range, and we find values for each of them as necessary in the Test Case Specification document. You'll note the first three steps of this test are used to determine the camera's maximum telephoto zoom position, which is stored in the NTCIP object rangeZoomLimit, and to prevent the camera from stopping a zoom prematurely by clearing the object timeoutZoom. Step four is to send the command to the camera to zoom in to its maximum telephoto position, the actual functionality we're testing here. Step five is simply a delay, waiting for it to occur, after which step six has the test operator verify that the camera has zoomed to its maximum telephoto position, verifying the functional requirement. The remaining steps query the camera to verify that it indicates it has zoomed to its maximum telephoto zoom position; you'll note these are examples of the formal method of testing. So this covers taking a device that uses a standard without the Systems Engineering Process and how to go about creating all of the test documents necessary for performing testing against that device.
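The ten steps just described can be condensed into a sketch of the zoom procedure driven by the Test Case's variables (mode=absolute, speed=fast, offset=rangeZoomLimit). The camera model, its behavior, and the method names are mocked assumptions; the operator's visual check in steps five and six is not modeled.

```python
class MockCamera:
    """Stand-in for an NTCIP 1205 camera's zoom-related objects."""
    def __init__(self, range_zoom_limit=255):
        self.range_zoom_limit = range_zoom_limit  # rangeZoomLimit
        self.timeout_zoom = 0                     # timeoutZoom
        self.zoom_position = 0                    # reported zoom position

    def position_zoom_lens(self, mode, speed, offset):
        """Mocked positionZoomLens command; mode 2 = absolute."""
        if mode == 2:
            self.zoom_position = min(offset, self.range_zoom_limit)

def zoom_to_max_procedure(cam):
    # Steps 1-3: read rangeZoomLimit and clear timeoutZoom so nothing
    # stops the zoom prematurely.
    limit = cam.range_zoom_limit
    cam.timeout_zoom = 0
    # Step 4: command maximum telephoto at the fastest speed.
    cam.position_zoom_lens(mode=2, speed="fast", offset=limit)
    # Steps 5-6: delay and operator verification (not modeled here).
    # Steps 7-10: query the camera and verify the reported position.
    return "PASS" if cam.zoom_position == limit else "FAIL"

print(zoom_to_max_procedure(MockCamera()))  # PASS
```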
So, a summary of what we've gone through. For standards that do have Systems Engineering Process Test Cases and Test Procedures, you can start from the Protocol Requirements List to create the Test Design Specification, and then use the Requirements to Test Cases Traceability Matrix to actually determine which Test Cases and Test Procedures are required. You'll note that there may be additional requirements listed in the Protocol Requirements List which would define some of the inputs for the Test Cases. As you'll remember, for this example we used the Dynamic Message Sign standard, NTCIP 1203. For standards with the Systems Engineering Process that don't yet have Test Cases or Test Procedures, you can start from the Protocol Requirements List to create the Test Design Specifications, just as we did with the DMS, but then the Test Case and Test Procedure Specifications must be created using the Requirements Traceability Matrix, the dialogs, and possibly any additional requirements that were listed in the PRL. For standards without the Systems Engineering Process, we just went through this example with the camera: use the conformance groups to create the Test Design Specifications, and then create the Test Case Specifications and Test Procedure Specifications. One thing to look for in the Test Case Specifications is to be sure to include boundary and error conditions.
There may be some additional items there that wouldn't jump out at you at first. Okay, several questions have come in, so let's go through a couple of those. Are you using a software tool to generate the case study examples, that is, something to help automate the traceability of requirements all the way through the SEP? In this particular case I am not. I understand there are some people working on software that would allow that to be automatically generated, for example from a MIB, but up till now everything that's in the standards, and the small examples I've done here, has just been done by hand. In many cases you start, as we've done here, from the dialogs that are available; if they're not available, then you start from the objects themselves and their definitions, and then you add in any experience you may have with those devices and issues you've run across that may need to be tested, to ensure those don't occur in the field.
Another question was: how does the tester verify the camera has zoomed to the absolute position, i.e., what does verify mean? In this case you would actually be monitoring the output from the camera, typically on a video monitor, and in zooming to its absolute position you're going to see a change in the image itself. Many of these cameras will have an indication of the zoom level; it may have gone to 8x, 16x, 32x, etc., so that would be one way of verifying. Note that NTCIP didn't specify that the camera must go to a certain level of zoom, such as 32x. The NTCIP standard for zoom simply has a number, and so the camera zooms in to that maximum number, zooms out to its minimum number, and between those two the central can control the camera through its entire range. If the agency requirements have a particular need, for example a 32x zoom, and the camera itself doesn't indicate that, or if they want to verify it, that would be going beyond the conformance testing of NTCIP and into compliance testing against the specifications. That's not to say you can't do them at the same time, but it would be required that the agency actually specify that it be tested. Maybe the way they would do that is by zooming all the way out to the wide level and determining the angle that is actually displayed on the video, then zooming all the way in to telephoto and determining how much of that image is now displayed; divide those two and you would get an actual zoom range in terms of magnification, 32x, etc. So, excellent questions.
In addition to the test documents we've been going over, there are additional test documents defined in IEEE 829. These would be used in addition to the Test Plan and the Test Specification documents. They include a Test Item Transmittal form, which is used to document transferring the test item between entities and includes its status. For example, once an item is ready for formal testing, the manufacturer might send along a Test Item Transmittal with the device to indicate that this item has now passed in-house testing and is ready for formal testing. Or it may be that an item fails testing, has to be modified and retested, and is then shipped back for formal testing; again, a Test Item Transmittal would follow that device and would include its status, which is to say that it has now been corrected and retested. As part of the testing itself, Test Incident Reports provide a means of recording the anomalies that occurred during the testing. This would be recording only the items that either failed outright or were in some fashion not quite what was expected, anything of that nature, such as a test having to be restarted for some reason. Next, the Test Summary is typically a one-page report that provides the results of the testing; hopefully it simply says passed, which would be the best Test Summary, but it may be that certain other items would also be necessary there. Finally, the Test Logs document the testing that occurred. These could be simply the Test Procedures having been checked off by someone, or they may also include actual recordings of all the communications that took place between the test devices, or the test software, and the actual device under test. So one of the things that many agencies will want to define is what form this Test Log should take. Is it going to be in a form that is readable by anyone, such as a PDF or Word document?
Or is this information going to be in a proprietary format that can only be read back by the actual test equipment? If testing was done out of house, it may be that the DOT themselves don't have that test equipment, and the logs would be unreadable to them. So that's something that would typically need to be defined early in the process.
So let's go back over the learning objectives for this webinar. The first was to be able to describe, within the context of the testing lifecycle, the role of Test Plans, Test Design Specifications, Test Cases, and Test Procedures; we went over several examples. Second, describe the purpose and content of Test Design Specifications, Test Cases, and Test Procedures. Next, for standards using the Systems Engineering Process, detail the manner in which Protocol Requirements Lists and the Requirements to Test Cases Traceability Matrix can be used to create Test Specifications; you'll recall the Dynamic Message Sign example for that. Finally, for standards that do not use the Systems Engineering Process, detail the manner in which conformance groups can be used to create Test Specifications; this was done using the camera example.
So next, one last activity. In this case: what did we learn today? We'll start with number one; go ahead and type your answers into the chat box. The blank is created early in the project life cycle and defines the testing to be performed from a management-level perspective. Everyone go ahead and type in your answers in the chat box for the first item. And that would be the Test Plan. Next, the blank details the testing to be performed, and this would be another document. Okay, that's it, the Test Design Specification. Next, the Test Cases define the blank, the expected results, and the test conditions. Okay, they define the inputs, the expected results, and the test conditions. Number four: the Test Procedures define the blank to be performed to execute the tests. Okay, getting several answers, excellent, and yes, the Test Procedures define the steps to be performed to execute the tests. The next one is multiple choice: blank and blank are two items found in standards created using the Systems Engineering Process. I've got a lot of possible answers here. Okay, any others? In this case, one of them is the Protocol Requirements List and another is the Requirements Traceability Matrix. Other options are the user needs, the functional requirements, or dialogs. You'll note that the Requirements to Test Cases Traceability Matrix and the Test Cases and Test Procedures might be in there, or they might not, just because the standard was created using the Systems Engineering Process; there's at least one standard out there now that used the Systems Engineering Process but does not yet have Test Cases and Test Procedures included. Generally, though, yes, most of the NTCIP standards created using the Systems Engineering Process now have those. And finally, getting ahead of myself, the blank is found in standards created without using the Systems Engineering Process.
I'm not sure if that got out over the air or not. So this was the example of the camera and that's correct, it would be conformance groups. Okay excellent.
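The document hierarchy rehearsed in that quiz can be sketched in code. This is a minimal illustrative model, not the actual IEEE 829 templates; the field names are assumptions chosen to mirror the definitions given above (Test Plan at the management level, Test Design Specification detailing the testing, Test Case with inputs, expected results and conditions, Test Procedure with steps).

```python
from dataclasses import dataclass, field

@dataclass
class TestProcedure:
    """Defines the steps to be performed to execute the tests."""
    identifier: str
    steps: list

@dataclass
class TestCase:
    """Defines the inputs, the expected results, and the test conditions."""
    identifier: str
    inputs: dict
    expected_results: dict
    conditions: list

@dataclass
class TestDesignSpecification:
    """Details the testing to be performed for a feature."""
    feature: str
    cases: list = field(default_factory=list)

@dataclass
class TestPlan:
    """Created early in the project life cycle; defines the testing
    to be performed from a management-level perspective."""
    project: str
    designs: list = field(default_factory=list)

# Hypothetical usage for a Dynamic Message Sign project:
plan = TestPlan(project="DMS deployment")
design = TestDesignSpecification(feature="Display basic messages")
design.cases.append(TestCase("TC-1",
                             inputs={"message": "ROAD WORK AHEAD"},
                             expected_results={"displayed": True},
                             conditions=["sign powered on"]))
plan.designs.append(design)
print(len(plan.designs))  # 1
```

The nesting mirrors the traceability the module described: a plan points at design specifications, which point at cases, which are executed by procedures.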
So if you'd like some more information, there are several great references available. The V systems engineering model came from the Systems Engineering Guidebook, listed there at the top, and we were also referencing IEEE Standard 829, so that's referenced there. Several NTCIP standards are listed; all of these are available at the NTCIP website, www.ntcip.org. I'll particularly recommend the last one listed there, NTCIP 9001, the NTCIP Guide, which gives a great overview of NTCIP and also goes into as much detail as you would like on the communication standard itself.
Okay, so as to the curriculum path for testing that we went over earlier: starting with T101 and T201, we are now finished with T202. Moving on from here there are several possibilities: T311, Applying your Test Plan to the Dynamic Message Sign standard, is one option; T313, Applying your Test Plan to the Environmental Sensor Station standard, is another; and there will be additional modules in this same vein created shortly.
Okay, do we have any additional questions? Give me a second to see if anyone has anything about what we've gone over here, or about testing in general. Okay, the question is: "To be sure I understand the scope, this deals primarily with conformance testing, not compliance testing; however, on the V diagram, once a device is deployed in the field, is it expected that both compliance and conformance testing would be performed?" In particular, what we are going over is conformance testing; that's what the NTCIP standards and their Test Procedures test for. It's the responsibility of the agency to determine, through compliance testing, that the device covers all the requirements they have put into the specifications they released. So it may be that the two of those are done at the same time as part of a single testing process, or it may be that there is a separate test just for NTCIP conformance. Typically that's performed as a factory acceptance test on a single item, whereas compliance testing may be performed during that same factory acceptance test or performed independently. But yes, in particular we've been going over conformance testing to the NTCIP standards.
So another question is: will Test Procedures be standardized? Yes, that's what we're actually going over here. As much as possible, what you're looking to do is standardize your Test Procedures; for the published standards that NTCIP has produced, these are the Test Procedures that must be performed to be able to state that your device is conformant to NTCIP. You'll note there's a lot of latitude given there: it's not required that you support every single feature that NTCIP supports, and it's not even required that you support the full range of a feature. So just because NTCIP allows a Dynamic Message Sign to support thousands of messages doesn't mean that's required to be able to say your device is conformant to NTCIP. If an agency only needs five messages, or even a blank-out sign with just blank and one message, that would still constitute a conformant NTCIP device, as long as those two messages are displayed in accordance with the standard and have been tested in accordance with the Test Procedures. So yes, we're looking to standardize the Test Procedures so that everyone can perform them in the same manner and achieve the same results each time. It doesn't matter whether it's the manufacturer doing it beforehand, the consultant performing it as part of the factory acceptance test, or the agency performing it later when they think there might be an issue; you should always get the same result.
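The repeatability point above can be sketched as code. This is a hypothetical illustration, not an actual NTCIP test procedure: the step descriptions, the checks, and the device model are all invented. The idea it shows is that a standardized procedure is an ordered list of steps that any party can execute against a device and reach the same verdict.

```python
# A procedure is an ordered list of (description, check) steps.
# Run in order; stop and report the failing step, or report PASS.
def run_procedure(device, steps):
    for description, check in steps:
        if not check(device):
            return ("FAIL", description)
    return ("PASS", None)

# A hypothetical sign that supports only two messages can still be
# conformant, as long as what it does support behaves per the standard.
device = {"max_messages": 2,
          "messages": {1: "", 2: "ROAD WORK AHEAD"}}

steps = [
    ("Verify the device reports a message capacity of at least one",
     lambda d: d["max_messages"] >= 1),
    ("Verify a stored message can be retrieved",
     lambda d: d["messages"].get(2) == "ROAD WORK AHEAD"),
]

print(run_procedure(device, steps))  # ('PASS', None)
```

Because the steps and pass criteria are fixed up front, the manufacturer, the consultant, and the agency all run the same function over the same inputs and get the same result.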
Okay, we have another question: if someone were to write a requirement like "cameras shall be NTCIP compliant," is that meaningless? Do they need to state in what way? Yes, I would have to say that's pretty near meaningless. The camera standard is an older standard, so what they would want to do is actually define the conformance groups that are required, listing the mandatory ones and defining in particular the optional ones. In many cases, the way you would see this done previously, the specification would say the camera shall be conformant to NTCIP, it shall support all mandatory conformance groups, and it shall support the following objects with these ranges for the optional conformance groups. In that manner it's very clear in the specifications, to the person who has to actually verify that the camera is conformant, exactly what's included in that definition in accordance with the standard. Once you go to the newer standards that were created with the Systems Engineering Process, it's done through the PRL. In that case they would simply copy the PRL with, as I showed, the yes/no selections made and the ranges as necessary in the additional requirements column on the right of the PRL. That would then provide a very clear indication of what's required to be NTCIP conformant. So yes, in many instances you will see specifications come out that simply say "must be NTCIP compliant or conformant," and it really puts the onus on the manufacturer to go through the complete specification, see what functionality it's calling out, and from that discern exactly how that maps to NTCIP, so it's very easy for there to be some confusion. That's the reason for making sure that everyone specifies NTCIP requirements in the same manner; that's where the conformance groups came in initially, and they were followed by the PRL.
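The PRL mechanics just described can be sketched as a small table lookup. This is an assumption-laden illustration: the requirement titles and section numbers below are invented, not taken from any NTCIP standard. Each row carries a conformance column (mandatory "M" or optional "O") and the agency's yes/no selection, and the project's required set falls out mechanically.

```python
# A filled-in PRL, modeled as rows of {conformance, support selection}.
# "M" rows are always required; "O" rows are required only if the
# agency selected "Yes". (Requirement IDs here are hypothetical.)
prl = {
    "3.5.1 Display a message":      {"conf": "M", "support": "Yes"},
    "3.5.2 Activate a message":     {"conf": "M", "support": "Yes"},
    "3.6.4 Flash text":             {"conf": "O", "support": "No"},
    "3.6.7 Display dynamic fields": {"conf": "O", "support": "Yes"},
}

def required_requirements(prl):
    """Everything mandatory, plus any optional row selected 'Yes'."""
    return sorted(req for req, row in prl.items()
                  if row["conf"] == "M" or row["support"] == "Yes")

for req in required_requirements(prl):
    print(req)
```

The point of copying the filled-in PRL into a procurement specification is exactly this determinism: both the vendor and the tester can derive the same required list without interpreting prose.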
Okay, so as to another question: what level of an object's range would you test, a sample or the full set? This goes back to the fact that many standards are very complex; trying to test every single combination that is possible would result in testing that is incredibly expensive and takes quite some time. So what you're looking to do in particular is verify, first of all, that the device performs your critical functionality. It may be that for a Dynamic Message Sign you know you're just going to put up regular messages, nothing crazy: you're not going to move text left and right, you're not going to flash text, and you have no additional sensors attached to the device for displaying changing information using fields. So you're going to concentrate on the ability to display basic messages. You may also say, well, we might use some of these features later, so we want to verify that, yes, it does support fields, and the way we're going to do that is to state that the requirement is to support fields and here are the fields that must be displayable: temperature, time, vehicle speed, and so on. But you're not going to indicate what ranges would be included, and you could simply test a particular value. So that's something you would have to determine on a case-by-case basis, but it's allowable through the Test Case selection process, where you're selecting the inputs for those individual Test Procedures. Okay, any other questions? Okay, this then completes the ITS Standards Training Webinar T202, Overview of Test Design Specifications, Test Cases and Test Procedures, and thank you all for attending.
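The input-sampling idea from that last answer can be sketched as follows. This is a hedged illustration of one common approach, not a prescribed NTCIP method: instead of exercising an object's full range, a Test Case selects the boundary values plus a few representative interior points. The example range is invented.

```python
def sample_inputs(low, high, samples=3):
    """Return the range boundaries plus evenly spaced interior values,
    a cheap alternative to exhaustively testing every value."""
    step = (high - low) // (samples + 1)
    interior = [low + step * i for i in range(1, samples + 1)]
    return [low] + interior + [high]

# e.g. a hypothetical object whose standard range is 0..65535
print(sample_inputs(0, 65535))  # [0, 16383, 32766, 49149, 65535]
```

Each selected value then becomes an input to one execution of the relevant Test Procedure, which keeps the test effort proportional to the risk rather than to the size of the range.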