I have been asked this question a lot of times.
Error: A deviation between the actual and the expected/theoretical value.
Bug: An error found in the development environment before the product is shipped to the customer.
Defect: An error found in the product itself after it has been shipped to the customer.
Sunday, November 15, 2009
Difference Between Validation and Verification
Verification ensures that the application complies with standards and processes. This answers the question "Did we build the system in the right way?"
Eg: Design reviews, code walkthroughs and inspections.
Validation checks whether the application that was built actually meets the user's requirements. This answers the question "Did we build the right system?"
Eg: Unit Testing, Integration Testing, System Testing and User Acceptance Testing.
Important Testing Terms
Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.
Accessibility Testing: Verifying that a product is accessible to people with disabilities (e.g. visual, hearing or cognitive impairments).
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.
Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.
Automated Testing:
* Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
* The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
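For illustration, a minimal automated test in Python's built-in unittest framework might look like the sketch below; the Calculator class is a made-up component under test, not from any particular product:

```python
import unittest

class Calculator:
    """Hypothetical component under test."""
    def add(self, a, b):
        return a + b

class TestCalculator(unittest.TestCase):
    def setUp(self):
        # Test precondition: create a fresh instance before each test.
        self.calc = Calculator()

    def test_add_returns_expected_sum(self):
        # Compare the actual outcome to the predicted outcome.
        self.assertEqual(self.calc.add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()  # Executes the tests and reports results without manual intervention.
```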
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.
Beta Testing: Testing of a pre-release version of a software product conducted by customers.
Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI (Application Binary Interface) specification.
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Boundary Value Analysis: In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner cases".
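For example, if a hypothetical field is specified to accept ages from 18 to 60 inclusive, boundary value analysis would pick test inputs at and just around those limits. A minimal sketch in Python (accept_age is an invented rule used only for illustration):

```python
def accept_age(age):
    """Hypothetical validation rule: ages 18-60 inclusive are valid."""
    return 18 <= age <= 60

# Boundary values: the limits themselves plus values just inside/outside them.
assert accept_age(17) is False   # just below the lower boundary
assert accept_age(18) is True    # on the lower boundary
assert accept_age(19) is True    # just above the lower boundary
assert accept_age(59) is True    # just below the upper boundary
assert accept_age(60) is True    # on the upper boundary
assert accept_age(61) is False   # just above the upper boundary
```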
Branch Testing: Testing in which all branches in the program source code are tested at least once.
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.
Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
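As a rough illustration of the idea, the sketch below uses a shared counter as a stand-in for a shared record and runs several threads against the same code path, checking that locking keeps the result consistent:

```python
import threading

counter = 0
lock = threading.Lock()

def update(times):
    """Hypothetical shared operation, e.g. several users updating the same record."""
    global counter
    for _ in range(times):
        with lock:  # without the lock, concurrent updates could be lost
            counter += 1

# Simulate multiple concurrent users hitting the same code.
threads = [threading.Thread(target=update, args=(10_000,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 5 * 10_000, "concurrent updates were lost"
```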
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing: A test that exercises a feature of a product in full detail.
Dynamic Testing: Testing software by executing it.
End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
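Continuing the hypothetical age field from the boundary value sketch above, equivalence partitioning splits the input domain into classes (too young, valid, too old) and tests one representative from each:

```python
def accept_age(age):
    """Same hypothetical rule as above: ages 18-60 inclusive are valid."""
    return 18 <= age <= 60

# One representative value per equivalence class.
partitions = {
    "below valid range":  (10, False),
    "within valid range": (35, True),
    "above valid range":  (75, False),
}

for name, (value, expected) in partitions.items():
    assert accept_age(value) == expected, f"equivalence class '{name}' failed"
```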
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
Functional Testing:
* Testing the features and operational behavior of a product to ensure they correspond to its specifications.
* Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
Gorilla Testing: Heavily testing one particular module or piece of functionality.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing: Confirms that the application under test installs, configures and (where relevant) uninstalls correctly on the supported platforms and configurations, and works as expected afterwards.
Load Testing: Testing an application's behaviour under expected load, for example when a large number of users log in to the application concurrently.
Monkey Testing: Testing a system or application on the fly, i.e. running just a few tests here and there to ensure the system or application does not crash.
Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail".
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing: Testing aimed at showing software works. Also known as "test to pass".
Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Sanity Testing: A brief test of the major functional elements of a piece of software to determine whether it is basically operational.
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.
Security Testing: Testing which confirms that the program restricts access to authorized personnel only, and that those personnel can access only the functions available at their security level.
Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how.
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing.
Importance of Software Testing
Software testing is a process that ensures a software application is delivered with the highest quality.
Whenever an application is developed for end users, it is the responsibility of the software test engineer to test the entire application for bugs before it is released.
Even though the developers may believe their application is bug free, it should still be thoroughly tested.
How Should an Application Be Tested?
Step 1
Requirement Phase
An application is built based on the requirements of the end user. A software test engineer studies these requirements and creates a test plan for them. In this example, we will use black box testing and an automated regression suite.
Step 2
Test Strategy
The test engineer breaks the requirements down into small modules and decides which functionality should be automated and which requires manual testing. Here, that means black box testing plus an automated regression suite.
Step 3
Test Case Writing
For each module, TCOs (Test Case Outlines) are identified, and these TCOs are then broken down into test cases. (A test case in software engineering is a set of conditions or variables under which a tester determines whether an application or software system is working correctly or not.)
E.g., let's take the Gmail login page. What are the test cases?
1. The user should be able to log in with a correct username and password.
2. The user should not be able to log in with an incorrect username or password.
These are basic functional test cases for the Gmail login page. You can also include some negative test cases, but we will cover those later.
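If these two cases were later automated as part of the regression suite mentioned in Steps 1 and 2, they might look roughly like the sketch below. The authenticate function and the credentials are purely hypothetical placeholders, not Gmail's actual login API:

```python
# Hypothetical stand-in for the real login back end (not Gmail's actual API).
VALID_USERS = {"alice@example.com": "correct-password"}

def authenticate(username, password):
    """Return True only for a known username with its matching password."""
    return VALID_USERS.get(username) == password

def test_login_with_correct_credentials():
    # Test case 1: a valid username and password should log in.
    assert authenticate("alice@example.com", "correct-password") is True

def test_login_with_incorrect_credentials():
    # Test case 2: a wrong password should be rejected.
    assert authenticate("alice@example.com", "wrong-password") is False

if __name__ == "__main__":
    test_login_with_correct_credentials()
    test_login_with_incorrect_credentials()
    print("Both login test cases passed.")
```

In a real regression suite, a browser automation tool such as Selenium would replace the authenticate stub with actual interaction against the login page.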
Step 4
Reviewing of Test Cases
This is an important step in the test plan, where a test lead/manager reviews the test cases. The review checks that there are test cases covering every requirement and that each test case is correct. In some cases, such as in Agile methodologies, a developer may also review the test cases.
Step 5
Traceability Matrix
A traceability matrix is then created, mapping the test cases to the requirements. This is usually done by a test manager or test lead.
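In its simplest form the matrix is just a mapping from each requirement to the test cases that cover it; the requirement and test-case IDs below are invented for illustration:

```python
# Hypothetical requirement IDs mapped to the test cases that cover them.
traceability_matrix = {
    "REQ-001 Login with valid credentials": ["TC-01"],
    "REQ-002 Reject invalid credentials":   ["TC-02"],
    "REQ-003 Password reset":               [],        # gap: no coverage yet
}

# A quick check a test lead might run: flag requirements with no test cases.
uncovered = [req for req, cases in traceability_matrix.items() if not cases]
print("Requirements without test coverage:", uncovered)
```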
Step 6
What will the testing cycle be?
Two test cycles of two test passes each; these will include functional testing.
At the end of each test cycle there will be regression testing.
After the two test cycles there will be a performance test.
Meanwhile, the developers build the application and release it for testing.
Step 7
Tools Used
Here, the tools used for application/test management and bug tracking are chosen.
Step 8
Test Pass
The software test engineer tests the application against the test cases that have been written and logs bugs whenever a test fails. He or she also performs ad hoc testing on the application. The bugs are then assigned to the respective developers and, once fixed, are rechecked by the testers.
Step 9
Timelines
Here the timeline is decided. For example: two test passes, each run on a three-day cycle.
Step 10
Bug Matrix
A bug matrix is created for all the bugs, matching each bug with its respective test case.
Step 11
Bug Triage Meeting
Here the project's senior stakeholders sit together and decide which bugs need to be fixed and which ones the team can live with. Mostly, low-priority bugs (P4 and P5) are deferred.
Step 12
Exit Criteria
100% of test cases passed, and no P1 or P2 bugs left in the open or resolved (not yet verified) state.
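As a rough sketch, these exit criteria could even be checked mechanically; the function name and the figures below are invented for illustration:

```python
def exit_criteria_met(pass_rate, open_or_resolved_bugs):
    """Hypothetical check: 100% pass rate and no P1/P2 bugs still open or resolved."""
    blocking = [b for b in open_or_resolved_bugs if b["priority"] in ("P1", "P2")]
    return pass_rate == 1.0 and not blocking

# Example data (invented): one P2 bug is still open, so the release is blocked.
bugs = [{"id": "BUG-7", "priority": "P2", "status": "open"}]
print(exit_criteria_met(pass_rate=1.0, open_or_resolved_bugs=bugs))  # False
```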
Step 13
Deliverables
Test run report: which test cases passed or failed in each test run.
Traceability matrix: the matrix mapping test cases to requirements (see Step 5).