
Basic Definitions in Testing for Test Engineers

Everyone knows about testing, but most people do not know what testing really is. Here are some of the basic concepts in testing.

Software Testing: Software testing is the process of executing a program or system with the intent of finding errors. More broadly, it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.
Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes for software is generally infeasible.

Black box Testing: The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. It is also termed data-driven, input/output-driven or requirements-based testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing -- a testing method that emphasizes executing the functions and examining their input and output data. The tester treats the software under test as a black box: only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs for corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate correctness. All test cases are derived from the specification; no implementation details of the code are considered.
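As a sketch, a black-box test derives its data purely from a stated requirement. The function and the "10% discount for orders of 100 or more" rule below are hypothetical, invented only to illustrate the method:

```python
# Black-box test sketch: the tester knows only the specification --
# "discount(amount) gives 10% off for orders of 100 or more, else no
# discount" -- never the implementation. (Function and rule are hypothetical.)

def discount(amount):
    """Implementation under test (treated as a black box by the tester)."""
    return amount * 0.10 if amount >= 100 else 0.0

def test_discount_black_box():
    # Test data derived purely from the specified requirements:
    assert discount(100) == 10.0   # boundary: exactly 100 qualifies
    assert discount(250) == 25.0   # typical qualifying order
    assert discount(99) == 0.0     # just below the threshold
    return True
```

If the implementation later changes internally but still meets the specification, these cases remain valid unchanged, which is the point of the approach.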

White box Testing: Contrary to black-box testing, software is viewed as a white box (or glass box) in white-box testing, as the structure and flow of the software under test are visible to the tester. Test plans are made according to the details of the software implementation, such as programming language, logic and style. Test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing or design-based testing.
There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of and attention to the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch (branch coverage), or covering all possible combinations of true and false condition predicates (multiple condition coverage). Control-flow testing, loop testing and data-flow testing all map the corresponding flow structure of the software onto a directed graph. Test cases are carefully selected based on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover "dead" code -- code that is of no use or never gets executed at all -- which cannot be discovered by functional testing.
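By contrast, a white-box test is chosen from the code's structure. In this sketch (classify() is a hypothetical function), one input is picked per branch outcome so that every branch is traversed at least once:

```python
# White-box test sketch aiming at branch coverage: exercise both the true
# and the false outcome of every condition in the code under test.
# (classify() is a hypothetical example function.)

def classify(n):
    if n < 0:          # condition 1
        return "negative"
    if n == 0:         # condition 2
        return "zero"
    return "positive"  # fall-through path

def test_classify_branch_coverage():
    # One input per branch outcome covers every path through the function:
    assert classify(-5) == "negative"  # condition 1 true
    assert classify(0) == "zero"       # condition 1 false, condition 2 true
    assert classify(7) == "positive"   # both conditions false
    return True
```

Note that these inputs were chosen by reading the code, not the specification; a change to the branching structure would require re-deriving the cases.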

Good test engineer: A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer or user, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful, as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.
Software QA engineer: The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it fits into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems, as well as to see 'what's missing', is important for inspections and reviews.

'Test case':
• A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
• Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
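The particulars listed above can be captured as a simple record. A sketch in Python, with purely illustrative placeholder values:

```python
# A test case's required particulars (identifier, name, objective,
# conditions/setup, input data, steps, expected results) modeled as a
# record. All field values here are illustrative placeholders.

test_case = {
    "id": "TC-001",
    "name": "Login with valid credentials",
    "objective": "Verify a registered user can log in",
    "setup": "User 'alice' exists with password 'secret'",
    "input_data": {"username": "alice", "password": "secret"},
    "steps": [
        "Open the login page",
        "Enter the username and password",
        "Click 'Sign in'",
    ],
    "expected_result": "User is redirected to the dashboard",
}

REQUIRED_FIELDS = {"id", "name", "objective", "setup",
                   "input_data", "steps", "expected_result"}

def is_complete(tc):
    """Check that the test case documents every required particular."""
    return REQUIRED_FIELDS.issubset(tc)
```

Writing the record forces the author to think the operation through, which is exactly how test-case development surfaces gaps in requirements.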

ECP and how to prepare test cases: ECP -- Equivalence Class Partitioning. It is a software-testing technique used for writing test cases: it breaks the input range into partitions whose values are all expected to behave the same.
The main purposes of this technique are:
1) To reduce the number of test cases to a necessary minimum.
2) To select the right test cases to cover all the scenarios.
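A minimal sketch of ECP, assuming a hypothetical field that accepts ages 18 through 60: the range breaks into three partitions (below, inside, above), and one representative value per partition stands in for its whole class:

```python
# Equivalence Class Partitioning sketch. The "ages 18-60 are valid" rule
# is a hypothetical specification used only for illustration.

def is_valid_age(age):
    """Implementation under test: accepts ages 18 through 60 inclusive."""
    return 18 <= age <= 60

# One representative value per equivalence class -- every other value in
# the same class is expected to behave identically, so testing one is enough.
partitions = {
    "invalid_low": 10,    # any value below 18
    "valid": 35,          # any value in 18..60
    "invalid_high": 70,   # any value above 60
}

def run_ecp_tests():
    results = {name: is_valid_age(value) for name, value in partitions.items()}
    assert results["invalid_low"] is False
    assert results["valid"] is True
    assert results["invalid_high"] is False
    return results
```

Three cases cover the whole input range, which is how ECP reduces the test count to a necessary minimum while still covering every scenario.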


Role of documentation in QA: QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and determining which documentation will have a particular piece of information. Change management for documentation should be used if possible.

Bug: Something that is technically implemented, but whose functionality does not work according to the specifications, is a bug.

Issue: Something that is not technically implemented properly according to the specifications is called an issue.

Error: Something that is technically implemented, and whose functionality works, but not according to the specification (e.g., a code problem or a security problem), is called an error.

Levels of Testing: Ideally, this is the list of levels in testing:
1. Smoke Testing or Build Acceptance Testing
2. Sanity Testing
3. Functionality Testing
4. Retesting (when a fix in the first build fails verification)
5. Regression Testing
6. Integration Testing
7. Performance Testing (Stress, Volume, Security)
8. System Testing
9. End-to-End Testing (before stopping the testing)
10. Beta Testing (before releasing to the client)
11. Acceptance Testing (client side)

Smoke Test: To test whether the build is stable or not for further testing.

Sanity Testing: To test whether the high-priority functionalities are working properly according to the specifications.

Functionality Testing: To test every corner of the application according to the specification and execute all the test cases.

Retesting: To test whether the reported bugs are fixed or not in a new build, following the DTLC.

Regression Testing: After the old bugs are resolved, to test the resolved areas and the entire application, to make sure the fixes have not broken existing functionality.
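As a sketch of the idea, a regression run reruns the whole existing suite after a fix, not just the case that exposed the bug. The add() function and the suite below are hypothetical:

```python
# Regression-testing sketch: after a bug fix, rerun the entire existing
# suite to confirm the fix did not break previously passing functionality.
# (add() and the suite are hypothetical examples.)

def add(a, b):
    return a + b  # the fixed version; suppose an earlier build returned a - b

regression_suite = [
    ((2, 3), 5),    # the case that originally exposed the bug
    ((0, 0), 0),    # previously passing cases, rerun to catch side effects
    ((-1, 1), 0),
]

def run_regression(fn, suite):
    """Return every (args, expected, actual) triple that fails; empty = no regressions."""
    return [(args, expected, fn(*args))
            for args, expected in suite if fn(*args) != expected]
```

An empty result means the fix held and nothing previously working was disturbed, which is exactly what regression testing is meant to establish.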

Integration Testing: After merging two or more modules, to check that they work together as a whole.

Performance Testing: To check the application's behaviour and capacity by placing load on the application (this is not feasible manually).

System Testing: To check the whole application, after merging all the modules, as a system (e.g., compatibility, system configuration, supported add-ons, etc.).

End-to-End Testing: To test the application corner to corner, to determine the error rate and decide when to stop testing.

Beta Testing: To check the application before releasing it to the client, in front of the PM and company management.

Acceptance Testing: Testing the application as a whole at the client side.

'Configuration management': Configuration management covers the processes used to control, coordinate and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes.

How can it be known when to stop testing: This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends
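Once a team agrees on concrete thresholds, these factors can be checked mechanically. A sketch in Python, with purely illustrative threshold values:

```python
# Stop-testing decision sketch. The default thresholds (95% pass rate,
# 90% coverage, bug rate <= 0.5 per period) are illustrative assumptions,
# not industry standards -- each project sets its own.

def should_stop_testing(pct_cases_passed, coverage_pct, open_bug_rate,
                        budget_left, deadline_reached,
                        pass_target=95.0, coverage_target=90.0,
                        bug_rate_limit=0.5):
    """Return the list of exit conditions currently met (empty = keep testing)."""
    reasons = []
    if deadline_reached:
        reasons.append("deadline reached")
    if budget_left <= 0:
        reasons.append("test budget depleted")
    if (pct_cases_passed >= pass_target
            and coverage_pct >= coverage_target
            and open_bug_rate <= bug_rate_limit):
        reasons.append("quality targets met")
    return reasons
```

For example, a run at 97% cases passed, 92% coverage and a bug rate of 0.2, with budget left and no deadline hit, meets only the quality-target condition; a run at 80% passed with budget remaining meets none, signalling that testing should continue.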