Dive into Testing
Traceability Matrix
Definition :
A traceability matrix is a document, usually in the form of a table, that correlates any two baselined documents that require a many-to-many relationship, in order to determine the completeness of that relationship. It is often used to trace high-level requirements (these often consist of marketing requirements) and detailed requirements of the software product to the matching parts of high-level design, detailed design, test plan, and test cases.
A requirements traceability matrix may be used to check whether the current project requirements are being met, and to help in the creation of a Request for Proposal, various deliverable documents, and project plan tasks. Common usage is to take the identifier for each of the items of one document and place them in the left column. The identifiers for the other document are placed across the top row. When an item in the left column is related to an item across the top, a mark is placed in the intersecting cell. The number of relationships is added up for each row and each column. This value indicates the mapping of the two items. A zero value indicates that no relationship exists; it must then be determined whether one should be created. A large value implies that the relationship is too complex and should be simplified. To ease the creation of traceability matrices, it is advisable to add the relationships to the source documents for both backward traceability and forward traceability. In other words, when an item is changed in one baselined document, it is easy to see what needs to be changed in the other.
Description :
A table that traces each requirement to the system deliverable component, in the current stage, that responds to it. For each requirement, identify the component in the current stage that responds to the requirement. The requirement may be mapped to such items as a hardware component, an application unit, or a section of a design specification.
Traceability Matrix Requirements :
Traceability matrices can be established using a variety of tools including requirements management software, databases, spreadsheets, or even with tables or hyperlinks in a word processor. A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and the product tested to meet the requirement.
In traceability, the relationship of driver to satisfier can be one-to-one, one-to-many, many-to-one, or many-to-many. Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan. Traceability ensures completeness, that all lower level requirements come from higher level requirements, and that all higher level requirements are allocated to lower level requirements. Traceability is also used to manage change and provides the basis for test planning.
Use a Traceability Matrix to:
• verify and validate system specifications
• ensure that all final deliverable documents are included in the system specification, such as process models and data models
• improve the quality of a system by identifying requirements that are not addressed by configuration items during design and code reviews and by identifying extra configuration items that are not required. Examples of configuration items are software modules and hardware devices
• provide input to change requests and future project plans when missing requirements are identified
• provide a guide for system and acceptance test plans of what needs to be tested.
Need for Relating Requirements to a Deliverable :
Taking the time to cross-reference each requirement to a deliverable ensures that a deliverable is consistent with the system requirements. A requirement that cannot be mapped to a deliverable is an indication that something is missing from the deliverable. Likewise, a deliverable that cannot be traced back to a requirement may mean the system is delivering more than required.
Use a Traceability Matrix to Match Requirements to a Deliverable :
There are many ways to relate requirements to the deliverable for each stage of the system life cycle.
One method is to:
• create a two-dimensional table
• allow one row for each requirements specification paragraph (identified by paragraph number from the requirements document)
• allow one column per identified configuration item (such as software module or hardware device)
• put a check mark at the intersection of row and column if the configuration item satisfies the stated requirement
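As a minimal sketch of this method, the two-dimensional table can be represented in code as a grid of requirements against configuration items, with row totals flagging gaps or overly complex relationships. The requirement and configuration-item identifiers below are hypothetical and the marks are illustrative only:

    public class TraceabilityMatrixSketch {
        public static void main(String[] args) {
            // Hypothetical identifiers: requirements down the left,
            // configuration items across the top.
            String[] requirements = {"REQ-1.1", "REQ-1.2", "REQ-2.1"};
            String[] configItems  = {"ModuleA", "ModuleB", "Workstation"};
            // A mark (true) means the configuration item satisfies the requirement.
            boolean[][] mark = {
                {true,  false, false},   // REQ-1.1
                {false, true,  true},    // REQ-1.2
                {false, false, false},   // REQ-2.1 -- a zero row: a gap to investigate
            };
            for (int r = 0; r < requirements.length; r++) {
                int total = 0;
                StringBuilder row = new StringBuilder(String.format("%-8s", requirements[r]));
                for (int c = 0; c < configItems.length; c++) {
                    row.append(mark[r][c] ? "  X " : "  . ");
                    if (mark[r][c]) total++;
                }
                // A total of 0 flags an unmet requirement; a large total flags an
                // overly complex relationship that may need simplifying.
                System.out.println(row + "  total=" + total);
            }
        }
    }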
Useful Traceability Matrices :
Various traceability matrices may be utilized throughout the system life cycle. Useful ones include:
• Functional specification to requirements document: It shows that each requirement (obtained from a preliminary requirements statement provided by the customer or produced in the Concept Definition stage) has been covered in an appropriate section of the functional specification.
• Top level configuration item to functional specification: For example, a top level configuration item, Workstation, may be one of the configuration items that satisfies the function Input Order Information. On the matrix, each configuration item would be written down the left hand column and each function would be written across the top.
• Low level configuration item to top level configuration item: For example, the top level configuration item, Workstation, may contain the low level configuration items Monitor, CPU, keyboard, and network interface card.
• Design specification to functional specification verifies that each function has been covered in the design.
• System test plan to functional specification ensures you have identified a test case or test scenario for each process and each requirement in the functional specification.
Although the construction and maintenance of traceability matrices may be time-consuming, they are a quick reference during verification and validation tasks.
Source : http://www.onestoptesting.com/
A traceability matrix is a document, usually in the form of a table, that correlates any two baseline documents that require a many to many relationship to determine the completeness of the relationship. It is often used with high-level requirements (these often consist of marketing requirements) and detailed requirements of the software product to the matching parts of high-level design, detailed design, test plan, and test cases.
A requirements traceability matrix may be used to check to see if the current project requirements are being met, and to help in the creation of a Request for Proposal, various deliverable documents, and project plan tasks.Common usage is to take the identifier for each of the items of one document and place them in the left column. The identifiers for the other document are placed across the top row. When an item in the left column is related to an item across the top, a mark is placed in the intersecting cell. The number of relationships are added up for each row and each column. This value indicates the mapping of the two items. Zero values indicate that no relationship exists. It must be determined if one must be made. Large values imply that the relationship is too complex and should be simplified. To ease the creation of traceability matrices, it is advisable to add the relationships to the source documents for both backward traceability and forward traceability. In other words, when an item is changed in one baselined document, it's easy to see what needs to be changed in the other.
Description :
A table that traces the requirements to the system deliverable component for that stage that responds to the requirement.Size and Format. For each requirement, identify the component in the current stage that responds to the requirement. The requirement may be mapped to such items as a hardware component, an application unit, or a section of a design specification.
Traceability Matrix Requirements :
Traceability matrices can be established using a variety of tools including requirements management software, databases, spreadsheets, or even with tables or hyperlinks in a word processor. A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and the product tested to meet the requirement.
In traceability, the relationship of driver to satisfier can be one-to-one, one-to-many, many-to-one, or many-to-many. Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan. Traceability ensures completeness, that all lower level requirements come from higher level requirements, and that all higher level requirements are allocated to lower level requirements. Traceability is also used to manage change and provides the basis for test planning.
Use a Traceability Matrix to:
• verify and validate system specifications
• ensure that all final deliverable documents are included in the system specification, such as process models and data models
• improve the quality of a system by identifying requirements that are not addressed by configuration items during design and code reviews and by identifying extra configuration items that are not required. Examples of configuration items are software modules and hardware devices
• provide input to change requests and future project plans when missing requirements are identified
• provide a guide for system and acceptance test plans of what needs to be tested.
Need for Relating Requirements to a Deliverable :
Taking the time to cross-reference each requirement to a deliverable ensures that a deliverable is consistent with the system requirements. A requirement that cannot be mapped to a deliverable is an indication that something is missing from the deliverable. Likewise, a deliverable that cannot be traced back to a requirement may mean the system is delivering more than required.
Use a Traceability Matrix to Match Requirements to a Deliverable :
There are many ways to relate requirements to the deliverable for each stage of the system life cycle.
One method is to:
• create a two-dimensional table
• allow one row for each requirements specification paragraph (identified by paragraph number from the requirements document)
• allow one column per identified configuration item (such as software module or hardware device)
• put a check mark at the intersection of row and column if the configuration item satisfies the stated requirement
Useful Traceability Matrices :
Various traceability matrices may be utilized throughout the system life cycle. Useful ones include:
• Functional specification to requirements document: It shows that each requirement (obtained from a preliminary requirements statement provided by the customer or produced in the Concept Definition stage) has been covered in an appropriate section of the functional specification.
• Top level configuration item to functional specification: For example, a top level configuration item, Workstation, may be one of the configuration items that satisfies the function Input Order Information. On the matrix, each configuration item would be written down the left hand column and each function would be written across the top.
• Low level configuration item to top level configuration item: For example, the top level configuration item, Workstation, may contain the low level configuration items Monitor, CPU, keyboard, and network interface card.
• Design specification to functional specification verifies that each function has been covered in the design.
• System test plan to functional specification ensures you have identified a test case or test scenario for each process and each requirement in the functional specification.
Although the construction and maintenance of traceability matrices may be time-consuming, they are a quick reference during verification and validation tasks.
Source : http://www.onestoptesting.com/
BVA & ECP
Boundary value analysis :
Testing experience has shown that the boundaries of input ranges to a software module are especially liable to bugs. A developer implementing, for example, the range 1 to 31 at an input (which stands for the days of January) has in his code a line checking for this range. This may look like: if (day > 0 && day < 32). It is exactly in such comparisons (>, >=, <, <=) that off-by-one mistakes creep in.
Definition :
BVA is a methodology for designing test cases that concentrates software testing effort on cases near the limits of valid ranges. BVA is a method which refines ECP, and it generates test cases that highlight errors better than ECP does. The trick is to concentrate software testing efforts at the extreme ends of the equivalence classes, because at those points, where input values change from valid to invalid, errors are most likely to occur. As well, BVA broadens the portions of the business requirement document used to generate tests: unlike ECP, it takes into account the output specifications when deriving test cases.
Description :
The purpose of BVA is to concentrate the testing effort on error-prone areas by accurately pinpointing the boundaries of conditions (e.g., a developer may specify >, when the requirement states > or =). To set up BVA test cases you first have to determine which boundaries exist at the interface of a software module. This has to be done by applying the ECP technique; BVA and ECP are inevitably linked together. For the example of the day in a date you would have the following partitions:
 ... -2 -1 0 | 1 .......................... 30 31 | 32 33 ...
   invalid   |              valid               | invalid
Applying BVA, you now have to select a test case on each side of the boundary between two partitions. In the above example this would be 0 and 1 for the lower boundary, as well as 31 and 32 for the upper boundary. Each of these pairs consists of a "+ve" and a "-ve" test case. A "+ve" test case should give you a valid operation result of your program. A "-ve" test case should lead to a correct and specified input error treatment, such as the limiting of values, the usage of a substitute value, or, in the case of a program with a user interface, a warning and a request to enter correct data. BVA can thus produce six test cases: n-1, n, n+1 for the lower limit and n-1, n, n+1 for the upper limit.
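A minimal sketch of these six boundary test cases in Java follows; isValidDay is a hypothetical stand-in for the range check under test:

    public class DayBoundaryTest {
        // Hypothetical validator standing in for the code under test.
        static boolean isValidDay(int day) {
            return day >= 1 && day <= 31;
        }
        public static void main(String[] args) {
            // {input, expected-valid?}: n-1, n, n+1 at each limit of the 1..31 range.
            int[][] cases = {
                {0, 0}, {1, 1}, {2, 1},    // lower limit
                {30, 1}, {31, 1}, {32, 0}  // upper limit
            };
            for (int[] c : cases) {
                boolean expected = c[1] == 1;
                boolean actual = isValidDay(c[0]);
                System.out.printf("day=%d expected=%b actual=%b %s%n",
                    c[0], expected, actual, expected == actual ? "PASS" : "FAIL");
            }
        }
    }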
A further set of boundaries has to be considered when you set up your test cases. A solid testing strategy also has to consider the natural boundaries of the data types used in the program. If you are working with signed values, this is especially the range around zero (-1, 0, +1). Similar to the typical range-check faults, developers tend to have weaknesses in their programs in this range. For example, this could be a division-by-zero problem where a zero value may occur although the developer always thought the range started at 1, or a sign problem when a value turns out to be negative in some rare cases although the developer always expected it to be positive. Even if this critical natural boundary is clearly within an equivalence partition, it should lead to additional test cases checking the range around zero. A further natural boundary is the lower and upper limit of the data type itself. For example, an unsigned 8-bit value has the range 0 to 255; a good test strategy would also check how the program reacts to an input of -1 and 0 as well as 255 and 256. The tendency is to relate BVA more to so-called black-box testing, which strictly checks a software module at its interfaces without consideration of the internal structures of the software. But looking closer at the subject, there are cases where it also applies to white-box testing. After determining the necessary test cases with ECP and subsequent BVA, it is necessary to define the combinations of the test cases when there are multiple inputs to a software module.
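To illustrate probing these natural data-type boundaries, here is a small hedged sketch; inUnsignedByteRange is a hypothetical validator for a field specified as an unsigned 8-bit value:

    public class NaturalBoundaryProbe {
        // Hypothetical validator for a field specified as unsigned 8-bit (0..255).
        static boolean inUnsignedByteRange(int value) {
            return value >= 0 && value <= 255;
        }
        public static void main(String[] args) {
            // Probe the type's own limits plus the range around zero.
            int[] probes = {-1, 0, 1, 254, 255, 256};
            for (int p : probes) {
                System.out.printf("value=%d accepted=%b%n", p, inUnsignedByteRange(p));
            }
        }
    }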
Implementation of BVA :
There are two steps:
STEP 1: IDENTIFY EQUIVALENCE CLASSES
Follow the same rules you used in ECP. However, consider the output specifications as well. For example, if the total number of days in the month of January is 31, then add the following classes to the ones you found previously:
1. the valid class (1 <= days in January <= 31)
2. the invalid class (days in January < 1)
3. the invalid class (days in January > 31)
STEP 2: DESIGN TEST CASES
In this step, you derive test cases from the equivalence classes. The process is similar to that of ECP, but the rules for designing test cases differ. With ECP, you may select any test case within a range and any one on either side of it; with boundary analysis, you focus your attention on cases close to the edges of the range.
1. If the condition is a range of values, create valid test cases for each end of the range and invalid test cases just beyond each end of the range. For example, if a valid range of days on hand is 1 to 31, write test cases that include:
1. the valid test case days on hand is 1
2. the valid test case days on hand is 31
3. the invalid test case days on hand is 0 and
4. the invalid test case days on hand is 32
You may combine valid classes wherever possible, just as you did with ECP, and, once again, you may not combine invalid classes. Don't forget to consider output conditions as well. In our inventory example the output conditions generate the following test cases:
1. the valid test case total days on hand is 1
2. the valid test case total days on hand is 31
3. the invalid test case total days on hand is 0 and
4. the invalid test case total days on hand is 32
2. A similar rule applies where the condition states that the number of values must lie within a certain range: select two valid test cases, one for each boundary of the range, and two invalid test cases, one just below and one just above the acceptable range.
3. Design tests that highlight the first and last records in an input or output file.
4. Look for any other extreme input or output conditions, and generate a test for each of them.
Error Guessing :
Error Guessing is a test case design technique where the tester has to guess what faults might occur and design tests to represent them. The ability to guess is based on previous experience in the software testing environment.
It is an ad hoc method of identifying tests likely to expose errors, based on experience and intuition. Some areas to guess at are empty or null strings, zero instances or occurrences, blank or null characters in strings, and negative numbers. The purpose of error guessing is to focus the testing activity on areas that have not been handled by the other, more formal techniques such as ECP and boundary value analysis. Error guessing is the process of making an educated guess as to other types of areas to be tested. For example, educated guesses can be based on items such as metrics from past testing experiences, or the tester's identification of situations in the Functional Design Specification or Detailed Design Specification that are not addressed clearly.
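As a hedged illustration, these classic error-guessing inputs can be fed to the code under test in a simple probe loop; parseQuantity below is a hypothetical function standing in for whatever the application does with a form field:

    public class ErrorGuessingProbes {
        // Hypothetical function under test: parses a quantity field from a form.
        static int parseQuantity(String raw) {
            if (raw == null || raw.isBlank()) {
                throw new IllegalArgumentException("quantity is required");
            }
            return Integer.parseInt(raw.trim());
        }
        public static void main(String[] args) {
            // Classic error-guessing inputs: null, empty, blank, zero, negative.
            String[] probes = {null, "", "   ", "0", "-1"};
            for (String p : probes) {
                try {
                    System.out.printf("input=%s -> %d%n", p, parseQuantity(p));
                } catch (RuntimeException e) {
                    System.out.printf("input=%s -> %s%n", p, e);
                }
            }
        }
    }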
Equivalence Class Partitioning :
ECP is a method for deriving test cases. In this method, classes of input conditions called equivalence classes are identified, such that each member of a class causes the same kind of processing and output to occur. The tester identifies the various equivalence classes for partitioning. A class is a set of input conditions that is likely to be handled the same way by the system: if the system were to handle one case in the class erroneously, it would handle all cases erroneously.
Description :
ECP is a software testing technique to minimize the number of permutations and combinations of input data. In ECP, data is selected in such a way that it gives as many different outputs as possible with a minimal set of data. If the software behaves in an identical way for a set of values, then the set is termed an equivalence class or a partition. It can safely be assumed that the functionality of the software will be the same for any data value from that equivalence class or partition. In ECP, input data is analyzed and divided into equivalence classes which produce different outputs. Data from these classes can then be representative of all the input values that your software expects, because for equivalence classes it can be assumed that the software will behave in exactly the same way for any data value from the same partition.
The testing theory related to ECP says that only one test case from each partition is needed to evaluate the behaviour of the program for the related partition. In other words, it is sufficient to select one test case out of each partition to check the behaviour of the program; using more or even all test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent", so the number of test cases can be reduced considerably. An additional effect of applying this technique is that you also find the so-called "dirty" test cases. An inexperienced tester may be tempted to use as test cases the input data 1 to 12 for the month and forget to select some out of the invalid partitions. This would lead to a huge number of unnecessary test cases on the one hand, and a lack of test cases for the dirty ranges on the other hand.
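A minimal sketch of picking one representative per partition for the month example follows; isValidMonth is a hypothetical validator and the representative values are arbitrary picks from each partition:

    public class MonthPartitionRepresentatives {
        // Hypothetical validator: months 1..12 are valid.
        static boolean isValidMonth(int month) {
            return month >= 1 && month <= 12;
        }
        public static void main(String[] args) {
            // One representative per partition is enough:
            // invalid-low (-5), valid (7), invalid-high (20).
            int[] representatives = {-5, 7, 20};
            boolean[] expected = {false, true, false};
            for (int i = 0; i < representatives.length; i++) {
                boolean actual = isValidMonth(representatives[i]);
                System.out.printf("month=%d expected=%b actual=%b%n",
                    representatives[i], expected[i], actual);
            }
        }
    }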
Implementation of ECP :
The tendency is to relate ECP to black-box testing, which strictly checks a software module at its interface without consideration of the internal structures of the software. But having a closer look at the subject, there are cases where it applies to white-box testing as well. Imagine an interface to a module which has a valid range between 1 and 31, as in the example above. Internally, however, the function may differentiate between the values 1 to 15 and the values 16 to 31. Depending on the input value, the software will internally run through different paths to perform slightly different actions. Regarding the input and output interfaces to the module, this difference will not be noticed; however, in your white-box testing you would like to make sure that both paths are examined. To achieve this it is necessary to introduce additional equivalence partitions which would not be needed for black-box testing. For this example these would be:
 ... -2 -1 0 | 1 ......... 15 | 16 .......... 31 | 32 33 34 ...
   invalid1  |    valid1     |      valid2      |   invalid2
      T1     |      T2       |        T3        |      T4
To check for the expected results you would need to evaluate some internal intermediate values rather than the output interface.
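A hedged sketch of such a module follows: processDay is hypothetical, with an internal branch at 15/16 that motivates the extra partitions and the four representative tests T1 to T4:

    public class InternalPartitionExample {
        // Hypothetical function: valid input is 1..31, but internally the first
        // and second half of the month take different code paths.
        static String processDay(int day) {
            if (day < 1 || day > 31) {
                return "error";                 // invalid1 / invalid2 partitions
            } else if (day <= 15) {
                return "first-half handling";   // valid1 partition (1..15)
            } else {
                return "second-half handling";  // valid2 partition (16..31)
            }
        }
        public static void main(String[] args) {
            // T1..T4: one representative per partition, including the internal split.
            int[] tests = {0, 8, 23, 32};
            for (int t : tests) {
                System.out.printf("day=%d -> %s%n", t, processDay(t));
            }
        }
    }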
Types of Equivalence Classes
• Continuous classes, or ranges of values, run from one point to another, with no clear separations of values. An example is a temperature range.
• Discrete classes have clear separation of values. Discrete classes are sets, or enumerations.
• Boolean classes have only two values: true or false, on or off, yes or no. An example is whether a checkbox is checked or unchecked.
Why ECP?
ECP drastically cuts down the number of test cases required to test a system reasonably. It is an attempt to get a good 'hit rate': to find the most errors with the smallest number of test cases.
Designing Test Cases Using ECP :
Take each input condition described in the specification and derive at least two equivalence classes for it. One class represents the set of cases which satisfy the condition (the valid class) and one represents cases which do not (the invalid class). Following are some general guidelines for identifying equivalence classes:
a) If the requirements state that a numeric value is input to the system and must be within a range of values, identify one valid class (inputs within the valid range) and two invalid equivalence classes (inputs which are too low and inputs which are too high). For example, if a month can have 1 to 31 days, identify the following classes: 1. the valid class (DAY is greater than or equal to 1 and less than or equal to 31); 2. the invalid class (DAY is less than 1); 3. the invalid class (DAY is greater than 31).
b) If the requirements state that the number of values input to the system at some point must lie within a certain range, specify one valid class where the number of inputs is within the valid range, one invalid class where there are too few inputs, and one invalid class where there are too many inputs. For example, if the specifications state that a maximum of 2 values can be registered against any one day, the equivalence classes are: the valid class (number of values is greater than or equal to 1 and less than or equal to 2); the invalid class (number of values > 2); the invalid class (number of values < 1).
Conclusion : BVA, ECP, and Error Guessing are important methods in preparing and implementing test cases.
Source : http://www.onestoptesting.com/
Test Strategy vs. Test Plan
Test Strategy :
A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques and tools to be used. A test strategy should ideally be organization-wide, being applicable to all of the organization's software developments. The application of a test strategy to a software development project should be detailed in the project's software quality plan.
The next stage of test design, which is the first stage within a software development project, is the development of a test plan. A test plan states what the items to be tested are, at what level they will be tested, what sequence they are to be tested in, how the test strategy will be applied to the testing of each item, and describes the test environment.
Components in the Test Strategy are as follows:
1. Scope and objective
2. Business issues
3. Roles and responsibilities
4. Communication and status reporting
5. Test deliverables
6. Test approach
7. Test automation and tools
8. Testing measurements and metrics
9. Risks and mitigation
10. Defect reporting and tracking
11. Change and configuration management
12. Training plan
Test Plan :
A test plan describes the approach, the features to be tested, the testers assigned, and whatever else you plan for your project. A test plan is usually prepared by a manager or team lead, but not exclusively; it depends on what the test plan is intended for. Some companies have defined a test plan as being what most would consider a test case, meaning that it covers one part of the functionality validation.
A test plan may be project wide, or may in fact be a hierarchy of plans relating to the various levels of specification and testing:
• An Acceptance Test Plan, describing the plan for acceptance testing of the software. This would usually be published as a separate document, but might be published with the system test plan as a single document.
• A System Test Plan, describing the plan for system integration and testing. This would also usually be published as a separate document, but might be published with the acceptance test plan.
• A Software Integration Test Plan, describing the plan for integration of tested software components. This may form part of the Architectural Design Specification.
• Unit Test Plan(s), describing the plans for testing of individual units of software. These may form part of the Detailed Design Specifications.
The objective of each test plan is to provide a plan for verification, by testing the software, that the software produced fulfils the requirements or design statements of the appropriate software specification. In the case of acceptance testing and system testing, this means the Requirements Specification.
The test plan is the frozen document developed from the SRS (Software Requirements Specification). After completion of testing-team formation and risk analysis, the test lead prepares the test plan document in terms of what to test, how to test, who tests, and when to test. There is one Master Test Plan, consisting of a reviewed Project Test Plan and Phase Test Plans, so the general talk is about the Project Test Plan.
Components are as follows:
1. Test Plan id
2. Introduction
3. Test items
4. Features to be tested
5. Features not to be tested
6. Approach
7. Testing tasks
8. Suspension criteria
9. Feature pass or fail criteria
10. Test environment (Entry criteria, Exit criteria)
11. Test deliverables
12. Staff and training needs
13. Responsibilities
14. Schedule
15. Risk and mitigation
16. Approvals
Conclusion : The test plan is the document which deals with when, what, and who will do the project, while the test strategy is the document which deals with how to do the project. In case I am wrong anywhere, kindly give feedback.
Why does software have bugs?
1. Miscommunication or no communication - as to the specifics of what an application should or shouldn't do (the application's requirements).
2. Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development.
3. Programming errors - programmers "can" make mistakes.
4. Changing requirements - A redesign, rescheduling of engineers, effects on other projects, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors.
5. Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
6. Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented, which results in bugs.
7. Software development tools - various tools often introduce their own bugs or are poorly documented, resulting in added bugs.
Non-Functional Testing
Performance vs. load vs. stress testing
Here's a good interview question for a tester: how do you define performance/load/stress testing? Many times people use these terms interchangeably, but they in fact have quite different meanings. This post is a quick review of these concepts, based on my own experience and on definitions given by others.
Performance testing
The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing. To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough so that this process can proceed smoothly.
A clearly defined set of expectations is essential for meaningful performance testing. If you don't know where you want to go in terms of the performance of the system, then it matters little which direction you take. For example, for a Web application, you need to know at least two things:
1. expected load in terms of concurrent users or HTTP connections
2. acceptable response times
Once you know where you want to be, you can start on your way there by constantly increasing the load on the system while looking for bottlenecks. To take again the example of a Web application, these bottlenecks can exist at multiple levels, and to pinpoint them you can use a variety of tools:
1. At the application level, developers can use profilers to spot inefficiencies in their code (such as poor algorithms)
2. At the database level, developers and DBAs can use database-specific profilers and query optimizers
3. At the operating system level, system engineers can use utilities such as top, vmstat, iostat (on Unix-type systems) and PerfMon (on Windows) to monitor hardware resources such as CPU, memory, swap, disk and I/O; specialized kernel-monitoring software can also be used
4. At the network level, network engineers can use packet sniffers such as tcpdump, network protocol analyzers such as Ethereal, and various utilities such as netstat, MRTG, ntop and mii-tool
From a testing point of view, the activities described above all take a white-box approach, where the system is inspected and monitored "from the inside out" and from a variety of angles. Measurements are taken and analyzed, and as a result, tuning is done.
However, testers also take a black-box approach in running the load tests against the system under test. For a Web application, testers will use tools that simulate concurrent users/HTTP connections and measure response times.
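As a rough illustration of what such tools do under the hood, here is a minimal, hedged Java sketch that simulates a handful of concurrent users and measures per-request response times; the target URL and user count are placeholders:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MiniLoadDriver {
        public static void main(String[] args) throws InterruptedException {
            int concurrentUsers = 10;                  // simulated concurrent users
            String target = "http://localhost:8080/";  // hypothetical application URL
            Thread[] users = new Thread[concurrentUsers];
            for (int i = 0; i < concurrentUsers; i++) {
                users[i] = new Thread(() -> {
                    try {
                        long start = System.nanoTime();
                        HttpURLConnection conn =
                            (HttpURLConnection) new URL(target).openConnection();
                        int status = conn.getResponseCode(); // issues the GET request
                        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                        System.out.printf("status=%d responseTime=%dms%n", status, elapsedMs);
                    } catch (Exception e) {
                        System.out.println("request failed: " + e);
                    }
                });
                users[i].start();
            }
            for (Thread u : users) u.join(); // wait for all simulated users to finish
        }
    }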
When the results of the load test indicate that the performance of the system does not meet its expected goals, it is time for tuning, starting with the application and the database. You want to make sure your code runs as efficiently as possible and your database is optimized on the given OS/hardware configuration. Once a particular function or method has been profiled and tuned, developers can then wrap its unit tests in JUnitPerf and ensure that it meets performance requirements of load and timing, an approach known as "continuous performance testing".
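A minimal sketch of that idea, assuming the JUnitPerf 1.x library on top of JUnit 3; the ExampleTest class and the user/timing thresholds below are hypothetical:

    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;
    import com.clarkware.junitperf.LoadTest;
    import com.clarkware.junitperf.TimedTest;

    public class CriticalPathPerfTest {
        // Hypothetical JUnit 3 test exercising the function that was profiled and tuned.
        public static class ExampleTest extends TestCase {
            public void testCriticalPath() {
                // ... invoke the tuned function here ...
            }
        }
        public static Test suite() {
            Test unitTest = new TestSuite(ExampleTest.class);
            Test loadTest = new LoadTest(unitTest, 10); // 10 simulated concurrent users
            return new TimedTest(loadTest, 2000);       // fail if not done within 2000 ms
        }
    }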
If, after tuning the application and the database, the system still doesn't meet its expected goals in terms of performance, a wide array of tuning procedures is available at all the levels discussed before. Here are some examples of things you can do to enhance the performance of a Web application outside of the application code per se:
1. Use Web cache mechanisms, such as the one provided by Squid
2. Publish highly-requested Web pages statically, so that they don't hit the database
3. Scale the Web server farm horizontally via load balancing
4. Scale the database servers horizontally and split them into read/write servers and read-only servers, then load balance the read-only servers
5. Scale the Web and database servers vertically, by adding more hardware resources (CPU, RAM, disks)
6. Increase the available network bandwidth
Performance tuning can sometimes be more art than science, due to the sheer complexity of the systems involved in a modern Web application. Care must be taken to modify one variable at a time and redo the measurements, otherwise multiple changes can have subtle interactions that are hard to qualify and repeat.
In a standard test environment such as a test lab, it will not always be possible to replicate the production server configuration. In such cases, a staging environment is used which is a subset of the production environment. The expected performance of the system needs to be scaled down accordingly.
The cycle "run load test->measure performance->tune system" is repeated until the system under test achieves the expected levels of performance. At this point, testers have a baseline for how the system behaves under normal conditions. This baseline can then be used in regression tests to gauge how well a new version of the software performs.
Another common goal of performance testing is to establish benchmark numbers for the system under test, and many hardware/software vendors will fine-tune their systems in such ways as to obtain a high ranking in the TPC top tens. It is common knowledge that one needs to be wary of any performance claims that do not include a detailed specification of all the hardware and software configurations that were used in that particular test.
TPC
The Transaction Processing Performance Council defines transaction processing and database benchmarks and delivers trusted results to the industry.
Load testing
We have already seen load testing as part of the process of performance testing and tuning. In that context, it meant constantly increasing the load on the system via automated tools. For a Web application, the load is defined in terms of concurrent users or HTTP connections.
In the testing literature, the term "load testing" is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing.
Examples of volume testing:
1. testing a word processor by editing a very large document
2. testing a printer by sending it a very large job
3. testing a mail server with thousands of user mailboxes
4. a specific case of volume testing is zero-volume testing, where the system is fed empty tasks
Examples of longevity/endurance testing:
1. testing a client-server application by running the client in a loop against the server over an extended period of time
Goals of load testing:
1. expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc.
2. ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.
Although performance testing and load testing can seem similar, their goals are different. On one hand, performance testing uses load testing techniques and tools for measurement and benchmarking purposes and uses various load levels. On the other hand, load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly. Note that load testing does not aim to break the system by overwhelming it, but instead tries to keep the system constantly humming like a well-oiled machine.
In the context of load testing, I want to emphasize the extreme importance of having large datasets available for testing. In my experience, many important bugs simply do not surface unless you deal with very large entities, such as thousands of users in repositories such as LDAP/NIS/Active Directory, thousands of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies on file systems, etc. Testers obviously need automated tools to generate these large data sets, but fortunately any good scripting language worth its salt will do the job.
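For instance, a few lines of code are enough to generate a large synthetic user dataset; this hedged sketch writes 100,000 records to a CSV file with an entirely hypothetical field layout:

    import java.io.IOException;
    import java.io.PrintWriter;

    public class UserDataGenerator {
        public static void main(String[] args) throws IOException {
            // Write 100,000 synthetic user records; the field layout is made up
            // for illustration.
            try (PrintWriter out = new PrintWriter("users.csv")) {
                out.println("uid,login,mailbox");
                for (int i = 1; i <= 100_000; i++) {
                    out.printf("%d,user%06d,user%06d@example.com%n", i, i, i);
                }
            }
        }
    }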
Stress testing :
Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose behind this madness is to make sure that the system fails and recovers gracefully -- this quality is known as recoverability.
Where performance testing demands a controlled environment and repeatable measurements, stress testing joyfully induces chaos and unpredictability. To take again the example of a Web application, here are some ways in which stress can be applied to the system:
1. double the baseline number for concurrent users/HTTP connections
2. randomly shut down and restart ports on the network switches/routers that connect the servers (via SNMP commands, for example)
3. take the database offline, then restart it
4. rebuild a RAID array while the system is running
5. run processes that consume resources (CPU, memory, disk, network) on the Web and database servers (a crude sketch of such a resource hog follows this list)
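As referenced in item 5, here is a deliberately crude, hedged Java sketch of a resource hog that pegs the CPUs and holds memory; the 512 MB cap and 60-second lifetime are arbitrary safety limits:

    import java.util.ArrayList;
    import java.util.List;

    public class ResourceHog {
        public static void main(String[] args) throws InterruptedException {
            // Burn CPU: one busy-spinning daemon thread per available core.
            int cores = Runtime.getRuntime().availableProcessors();
            for (int i = 0; i < cores; i++) {
                Thread spinner = new Thread(() -> {
                    long x = 0;
                    while (!Thread.currentThread().isInterrupted()) { x++; }
                });
                spinner.setDaemon(true);
                spinner.start();
            }
            // Hold memory: grab 1 MB blocks up to an arbitrary 512 MB safety cap.
            List<byte[]> hoard = new ArrayList<>();
            for (int mb = 0; mb < 512; mb++) {
                hoard.add(new byte[1024 * 1024]);
            }
            System.out.println("Spinning on " + cores + " cores, holding " + hoard.size() + " MB.");
            Thread.sleep(60_000); // keep the pressure on for one minute, then exit
        }
    }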
I'm sure devious testers can enhance this list with their favorite ways of breaking systems. However, stress testing does not break the system purely for the pleasure of breaking it, but instead it allows testers to observe how the system reacts to failure. Does it save its state or does it crash suddenly? Does it just hang and freeze or does it fail gracefully? On restart, is it able to recover from the last good state? Does it print out meaningful error messages to the user, or does it merely display incomprehensible hex codes? Is the security of the system compromised because of unexpected failures? And the list goes on.
Conclusion
I am aware that I only scratched the surface in terms of issues, tools and techniques that deserve to be mentioned in the context of performance, load and stress testing. After reading Please Comment in case if i was wrong.
Here's a good interview question for a tester: how do you define performance/load/stress testing? Many times people use these terms interchangeably, but they have in fact quite different meanings. This post is a quick review of these concepts, based on my own experience and on definitions given by others.
Performance testing
The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing. To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough so that this process can proceed smoothly.
A clearly defined set of expectations is essential for meaningful performance testing. If you don't know where you want to go in terms of the performance of the system, then it matters little which direction you take. For example, for a Web application, you need to know at least two things:
1.expected load in terms of concurrent users or HTTP connections
2.acceptable response times
Once you know where you want to be, you can start on your way there by constantly increasing the load on the system while looking for bottlenecks. To take again the example of a Web application, these bottlenecks can exist at multiple levels, and to pinpoint them you can use a variety of tools:
1.At the application level, developers can use profilers to spot inefficiencies in their code, such as poor algorithms (a minimal profiling sketch follows this list)
2.At the database level, developers and DBAs can use database-specific profilers and query optimizers
3.At the operating system level, system engineers can use utilities such as top, vmstat and iostat (on Unix-type systems) and PerfMon (on Windows) to monitor hardware resources such as CPU, memory, swap and disk I/O; specialized kernel-monitoring software can also be used
4.At the network level, network engineers can use packet sniffers such as tcpdump, network protocol analyzers such as Ethereal, and various utilities such as netstat, MRTG, ntop and mii-tool
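For the application level in particular, here is a minimal Python profiling sketch using the standard library's cProfile module. The deliberately naive duplicate finder is a hypothetical stand-in for real application code, not anything from a real system:

    import cProfile
    import pstats

    def find_duplicates(items):
        # Deliberately naive O(n^2) duplicate finder -- a stand-in for real code.
        dupes = []
        for i, a in enumerate(items):
            for b in items[i + 1:]:
                if a == b and a not in dupes:
                    dupes.append(a)
        return dupes

    if __name__ == "__main__":
        data = list(range(2000)) + list(range(500))  # synthetic input with duplicates
        profiler = cProfile.Profile()
        profiler.enable()
        find_duplicates(data)
        profiler.disable()
        # Print the ten most expensive calls, sorted by cumulative time;
        # the naive nested loop should dominate the listing.
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)

A profile like this tells you where the time goes; the fix (a better algorithm, caching, etc.) is then a development task, after which the measurements are redone.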
From a testing point of view, the activities described above all take a white-box approach, where the system is inspected and monitored "from the inside out" and from a variety of angles. Measurements are taken and analyzed, and as a result, tuning is done.
However, testers also take a black-box approach in running load tests against the system under test. For a Web application, testers will use tools that simulate concurrent users/HTTP connections and measure response times; a minimal sketch of this approach follows.
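As a rough sketch of that black-box approach, the Python script below simulates concurrent users with a thread pool and records per-request response times. The URL and the load numbers are placeholder assumptions, and a real load test would normally use a dedicated tool (JMeter, for example) rather than a hand-rolled script:

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/"   # hypothetical system under test
    CONCURRENT_USERS = 50            # assumed load level
    REQUESTS_PER_USER = 10

    def user_session(_):
        # Simulate one user issuing a series of requests, timing each one.
        times = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            with urlopen(URL) as response:
                response.read()
            times.append(time.perf_counter() - start)
        return times

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            sessions = pool.map(user_session, range(CONCURRENT_USERS))
            all_times = [t for session in sessions for t in session]
        print(f"requests:        {len(all_times)}")
        print(f"mean response:   {statistics.mean(all_times):.3f}s")
        print(f"95th percentile: {statistics.quantiles(all_times, n=20)[18]:.3f}s")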
When the results of the load test indicate that the performance of the system does not meet its expected goals, it is time for tuning, starting with the application and the database. You want to make sure your code runs as efficiently as possible and that your database is optimized for a given OS/hardware configuration. Once a particular function or method has been profiled and tuned, developers can wrap its unit tests in JUnitPerf to ensure that it keeps meeting its load and timing requirements, an approach sometimes called "continuous performance testing".
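JUnitPerf itself is a Java library, but the idea carries over to any xUnit framework. Here is a minimal Python analogue under stated assumptions: process_order is a hypothetical function that has already been tuned, and the timing budget is a number you would take from your performance baseline:

    import time
    import unittest

    def process_order(order_id):
        # Hypothetical tuned function under a performance contract.
        time.sleep(0.01)  # stand-in for real work
        return order_id

    class ProcessOrderPerfTest(unittest.TestCase):
        MAX_SECONDS = 0.05  # assumed budget from the performance baseline

        def test_meets_timing_requirement(self):
            start = time.perf_counter()
            process_order(42)
            elapsed = time.perf_counter() - start
            self.assertLess(elapsed, self.MAX_SECONDS,
                            f"took {elapsed:.3f}s, budget is {self.MAX_SECONDS}s")

    if __name__ == "__main__":
        unittest.main()

Run as part of the regular suite, such a test turns a performance requirement into a regression check, which is the essence of continuous performance testing.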
If, after tuning the application and the database, the system still doesn't meet its expected goals in terms of performance, a wide array of tuning procedures is available at all the levels discussed before. Here are some examples of things you can do to enhance the performance of a Web application outside of the application code per se:
1.Use Web cache mechanisms, such as the one provided by Squid
2.Publish highly-requested Web pages statically, so that they don't hit the database
3.Scale the Web server farm horizontally via load balancing
4.Scale the database servers horizontally and split them into read/write servers and read-only servers, then load balance the read-only servers
5.Scale the Web and database servers vertically, by adding more hardware resources (CPU, RAM, disks)
6.Increase the available network bandwidth
Performance tuning can sometimes be more art than science, due to the sheer complexity of the systems involved in a modern Web application. Care must be taken to modify one variable at a time and redo the measurements, otherwise multiple changes can have subtle interactions that are hard to qualify and repeat.
In a standard test environment such as a test lab, it will not always be possible to replicate the production server configuration. In such cases, a staging environment is used which is a subset of the production environment. The expected performance of the system needs to be scaled down accordingly.
The cycle "run load test->measure performance->tune system" is repeated until the system under test achieves the expected levels of performance. At this point, testers have a baseline for how the system behaves under normal conditions. This baseline can then be used in regression tests to gauge how well a new version of the software performs.
Another common goal of performance testing is to establish benchmark numbers for the system under test, and many hardware/software vendors will fine-tune their systems in such ways as to obtain a high ranking in the TPC top tens. It is common knowledge that one needs to be wary of any performance claims that do not include a detailed specification of all the hardware and software configurations that were used in that particular test.
TPC
The Transaction Processing Performance Council defines transaction processing and database benchmarks and delivers trusted results to the industry.
Load testing
We have already seen load testing as part of the process of performance testing and tuning. In that context, it meant constantly increasing the load on the system via automated tools. For a Web application, the load is defined in terms of concurrent users or HTTP connections.
In the testing literature, the term "load testing" is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing.
Examples of volume testing:
1.testing a word processor by editing a very large document
2.testing a printer by sending it a very large job
3.testing a mail server with thousands of user mailboxes
4.a specific case of volume testing is zero-volume testing, where the system is fed empty tasks
Examples of longevity/endurance testing:
1.testing a client-server application by running the client in a loop against the server over an extended period of time
Goals of load testing:
1.expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc.
2.ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.
Although performance testing and load testing can seem similar, their goals are different. On one hand, performance testing uses load testing techniques and tools for measurement and benchmarking purposes and uses various load levels. On the other hand, load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly. Note that load testing does not aim to break the system by overwhelming it, but instead tries to keep the system constantly humming like a well-oiled machine.
In the context of load testing, I want to emphasize the extreme importance of having large datasets available for testing. In my experience, many important bugs simply do not surface unless you deal with very large entities: thousands of users in repositories such as LDAP/NIS/Active Directory, thousands of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies on file systems, etc. Testers obviously need automated tools to generate these large data sets, and fortunately any scripting language worth its salt will do the job.
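For instance, here is a minimal Python sketch that writes a large synthetic user/mailbox dataset to CSV for bulk loading; the field layout, sizes and quota values are hypothetical examples, not a schema from any particular system:

    import csv
    import random
    import string

    NUM_USERS = 10_000  # assumed target size; scale up as needed

    def random_word(length=8):
        return "".join(random.choices(string.ascii_lowercase, k=length))

    def generate_users(path, count=NUM_USERS):
        # Write `count` synthetic users to a CSV file suitable for
        # bulk-loading into a directory service, mail server or database.
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["uid", "username", "email", "mailbox_quota_mb"])
            for uid in range(count):
                name = random_word()
                writer.writerow([uid, name, f"{name}{uid}@example.com",
                                 random.choice([100, 500, 1000])])

    if __name__ == "__main__":
        generate_users("test_users.csv")
        print(f"wrote {NUM_USERS} synthetic users to test_users.csv")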
Stress testing :
Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose behind this madness is to make sure that the system fails and recovers gracefully -- this quality is known as recoverability.
Where performance testing demands a controlled environment and repeatable measurements, stress testing joyfully induces chaos and unpredictability. To take again the example of a Web application, here are some ways in which stress can be applied to the system:
1.double the baseline number for concurrent users/HTTP connections
2.randomly shut down and restart ports on the network switches/routers that connect the servers (via SNMP commands for example)
3.take the database offline, then restart it
4.rebuild a RAID array while the system is running
5.run processes that consume resources (CPU, memory, disk, network) on the Web and database servers (a crude sketch of such a resource hog follows below)
I'm sure devious testers can enhance this list with their favorite ways of breaking systems. However, stress testing does not break the system purely for the pleasure of breaking it, but instead it allows testers to observe how the system reacts to failure. Does it save its state or does it crash suddenly? Does it just hang and freeze or does it fail gracefully? On restart, is it able to recover from the last good state? Does it print out meaningful error messages to the user, or does it merely display incomprehensible hex codes? Is the security of the system compromised because of unexpected failures? And the list goes on.
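To make the resource-consumption item on the list above concrete, here is a crude Python resource hog that pins CPU cores and holds a memory allocation for a while. Dedicated tools such as stress or stress-ng do this job far better; this is only a sketch, and all the defaults are assumptions:

    import argparse
    import multiprocessing
    import time

    def burn_cpu(seconds):
        # Spin in a tight loop to keep one core busy.
        end = time.time() + seconds
        counter = 0
        while time.time() < end:
            counter += 1  # meaningless work

    def hog_memory(megabytes, seconds):
        # Hold a large allocation for a while, then release it.
        block = bytearray(megabytes * 1024 * 1024)
        time.sleep(seconds)
        del block

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Crude resource hog for stress tests")
        parser.add_argument("--cpus", type=int, default=multiprocessing.cpu_count())
        parser.add_argument("--mem-mb", type=int, default=256)
        parser.add_argument("--seconds", type=int, default=60)
        args = parser.parse_args()

        workers = [multiprocessing.Process(target=burn_cpu, args=(args.seconds,))
                   for _ in range(args.cpus)]
        workers.append(multiprocessing.Process(target=hog_memory,
                                               args=(args.mem_mb, args.seconds)))
        for w in workers:
            w.start()
        for w in workers:
            w.join()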
Conclusion
I am aware that I have only scratched the surface in terms of the issues, tools and techniques that deserve to be mentioned in the context of performance, load and stress testing. Please comment if you find anything here to be inaccurate.
Source: http://agiletesting.blogspot.com
Basic Definitions in Testing for Test Engineers
Everyone has heard of testing, but most people cannot say precisely what testing is. Here are some of the basic concepts.
Software testing :Software Testing is the process of executing a program or system with the intent of finding errors or it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.
Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes for software is generally infeasible.
Black box Testing :The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. It is also termed data-driven, input/output-driven or requirements-based testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing -- a testing method that emphasizes executing the functions and examining their input and output data. The tester treats the software under test as a black box: only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs for corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate correctness. All test cases are derived from the specification. No implementation details of the code are considered.
White box Testing :Contrary to black-box testing, in white-box testing the software is viewed as a white box (or glass box): the structure and flow of the software under test are visible to the tester. Test plans are made according to the details of the software implementation, such as programming language, logic and style, and test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing or design-based testing.
There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of, and attention to, the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch (branch coverage), or covering all possible combinations of true and false condition predicates (multiple condition coverage). Control-flow testing, loop testing and data-flow testing all map the corresponding flow structure of the software into a directed graph, and test cases are selected on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code -- code that is of no use or never gets executed at all -- which cannot be discovered by functional testing.
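To illustrate the difference between statement coverage and branch coverage, consider the small hypothetical function below: a single test with a negative input already executes every statement, but branch coverage additionally requires a test in which the if condition is false. (The coverage.py tool, run as coverage run --branch, can verify this.)

    import unittest

    def absolute(n):
        # One `if` without an `else`: the classic case where statement
        # coverage and branch coverage differ.
        if n < 0:
            n = -n
        return n

    class AbsoluteTests(unittest.TestCase):
        def test_negative_input(self):
            # This test alone executes every statement in absolute()
            # (100% statement coverage)...
            self.assertEqual(absolute(-5), 5)

        def test_non_negative_input(self):
            # ...but only this test exercises the false branch of the
            # `if`, which is needed for 100% branch coverage.
            self.assertEqual(absolute(3), 3)

    if __name__ == "__main__":
        unittest.main()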
Good test engineer :A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer or user, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful: it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.
Software QA engineer :The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.
'Test case':
• A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
• Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible. (One possible representation of a test case is sketched below.)
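As one possible shape for such a document, here is a hypothetical Python record mirroring the particulars listed above; real teams usually keep this in a test management tool or spreadsheet rather than in code:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestCase:
        # Minimal test case record mirroring the particulars listed above.
        identifier: str
        name: str
        objective: str
        setup: str
        input_data: str
        steps: List[str] = field(default_factory=list)
        expected_result: str = ""

    login_tc = TestCase(
        identifier="TC-LOGIN-001",  # hypothetical naming scheme
        name="Valid login",
        objective="Verify a registered user can log in",
        setup="User 'alice' exists with password 'secret'",
        input_data="username=alice, password=secret",
        steps=["Open the login page",
               "Enter the username and password",
               "Click 'Log in'"],
        expected_result="User lands on the dashboard page",
    )
    print(login_tc.identifier, "-", login_tc.name)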
ECP and how you will prepare test cases :ECP -- Equivalence Class Partitioning. It is a software testing technique used when writing test cases: it divides the input range into partitions (classes) whose members the software is expected to treat the same way, so that one representative value can stand in for each class.
The main purposes of this technique are (see the sketch after this list):
1) To reduce the number of test cases to a necessary minimum.
2) To select the right test cases to cover all the scenarios.
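Here is a small hypothetical sketch of the technique: an input field that accepts ages 18 through 60 yields three equivalence classes, and one or two representative values per class (boundaries included) are enough to cover them:

    # Hypothetical example: a field that accepts ages 18..60 inclusive.
    # ECP yields three classes; representatives include the boundaries.
    PARTITIONS = [
        ("below range (invalid)", [17], False),
        ("within range (valid)",  [18, 40, 60], True),
        ("above range (invalid)", [61], False),
    ]

    def accepts_age(age):
        # Stand-in for the system under test.
        return 18 <= age <= 60

    if __name__ == "__main__":
        for name, representatives, expected in PARTITIONS:
            for age in representatives:
                result = accepts_age(age)
                status = "PASS" if result == expected else "FAIL"
                print(f"{status}: {name}, age={age}, accepted={result}")

Five representative values cover the whole domain instead of one test per possible age, which is exactly the "necessary minimum" the technique aims for.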
Role of documentation in QA :QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and for determining which document will contain a particular piece of information. Change management for documentation should be used if possible.
Bug :Something that has been implemented, but whose functionality does not work according to the specifications, is a bug.
Issue :Something that has not been implemented properly according to the specifications is called an issue.
Error :Something that has been implemented and whose functionality works, but not according to the specification -- a code problem or a security problem, for example -- is called an error.
Levels of Testing :Ideally, these are the levels of testing:
1.Smoke Testing or Build Acceptance Testing
2.Sanity Testing
3.Functionality Testing
4.Retesting (when a new build arrives with bug fixes)
5.Regression testing
6.Integration testing
7.Performance testing (stress, volume, security)
8.System Testing
9.End to End testing (before stopping the testing)
10.Beta Testing (before releasing to the client)
11.Acceptance testing (client side)
Smoke Test :To test whether the build is stable enough for further testing.
Sanity Testing :To test whether the high-priority functionalities are working properly according to the specifications.
Functionality Testing :To test every corner of the application against the specification and execute all the test cases.
Retesting :To test whether the reported bugs have been fixed in a new build, following the defect tracking life cycle (DTLC).
Regression testing :To test, after the old bugs are resolved, both the fixed areas and the entire application, to make sure the fixes did not break anything else.
Integration Testing :To test two or three modules after merging them, checking that they work together.
Performance testing :To check the application's behavior and capacity by maintaining load on the application (this is not feasible manually).
System Testing :To check the whole application, after merging all the modules, as a system (compatibility, system configuration, supported add-ons, etc.).
End to End testing :To test the application corner to corner, in order to determine the error rate and decide whether testing can stop.
Beta Testing :To check the application before releasing it to the client, in front of the PM and company management.
Acceptance Testing :Testing the application as a whole at the client side.
'Configuration management' :Configuration management covers the processes used to control, coordinate and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes.
How can it be known when to stop testing :This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are listed below (a small sketch of a mechanical check follows the list):
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends
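Several of these factors lend themselves to a mechanical check. The sketch below evaluates a few of them against made-up numbers; the metric names and thresholds are assumptions that would come from the actual test plan:

    # Hypothetical exit criteria; real values come from the test plan.
    CRITERIA = {
        "pass_rate": 0.95,         # fraction of executed test cases that passed
        "requirement_coverage": 0.90,
        "open_critical_bugs": 0,   # maximum allowed
    }

    def ready_to_stop(m):
        # Return (decision, unmet criteria) for the current test cycle.
        unmet = []
        if m["passed"] / m["executed"] < CRITERIA["pass_rate"]:
            unmet.append("pass rate below threshold")
        if m["covered_reqs"] / m["total_reqs"] < CRITERIA["requirement_coverage"]:
            unmet.append("requirement coverage below threshold")
        if m["open_critical"] > CRITERIA["open_critical_bugs"]:
            unmet.append("critical bugs still open")
        return (not unmet), unmet

    if __name__ == "__main__":
        cycle = {"executed": 480, "passed": 462, "covered_reqs": 88,
                 "total_reqs": 95, "open_critical": 1}  # made-up numbers
        stop, reasons = ready_to_stop(cycle)
        print("Stop testing" if stop else "Keep testing: " + "; ".join(reasons))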