It is a type of user acceptance testing in which the application is tested in pre-production. A pre-production environment closely mirrors production: the hardware components, software, and web servers are all the same as in production.
For example:
In a bank, whenever a new change is made to banking transactions, such as adding new functionality to accounts, the change is first applied to an account in the pre-production environment, which behaves almost exactly like the live system. If it succeeds there, the change request is moved to production.
Saturday, March 22, 2008
Defect Life Cycle
Bug life cycle:
The bug life cycle contains seven statuses (a small sketch follows this list):
New: When the tester finds a bug for the first time, he sets the status to New.
Closed: After the developer has fixed the bug, if the tester is no longer able to reproduce it, he sets the status to Closed.
Fixed: After the tester sends the bug report, if the developer fixes the bug, he sets the status to Fixed.
Rejected: After the tester sends the bug report, if the developer is not convinced that it is a bug, he sets the status to Rejected.
Open: After the tester sends the bug report, if the developer needs more clarification on the bug, he sets the status to Open and adds the comment "needs clarification".
After the developer has fixed the bug, if the tester reproduces it again while performing regression testing, he sets the status to Open (reopened).
As per requirements: If the requirements have changed and the tester, not knowing about the change, logs the behaviour as a bug, the developer sets its status to As per requirements.
Not able to reproduce: If a bug was found at the client location but cannot be reproduced at the company location, the status is set to Not able to reproduce.
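To make the seven statuses concrete, here is a minimal Python sketch that models them as an enum together with a few of the transitions described above. The class and the transition table are purely illustrative assumptions, not taken from any particular bug-tracking tool.

from enum import Enum

class BugStatus(Enum):
    NEW = "New"
    OPEN = "Open"
    FIXED = "Fixed"
    CLOSED = "Closed"
    REJECTED = "Rejected"
    AS_PER_REQUIREMENTS = "As per requirements"
    NOT_ABLE_TO_REPRODUCE = "Not able to reproduce"

# Allowed transitions, following the life cycle described above.
ALLOWED_TRANSITIONS = {
    BugStatus.NEW: {BugStatus.FIXED, BugStatus.REJECTED, BugStatus.OPEN,
                    BugStatus.AS_PER_REQUIREMENTS, BugStatus.NOT_ABLE_TO_REPRODUCE},
    BugStatus.OPEN: {BugStatus.FIXED, BugStatus.REJECTED},
    BugStatus.FIXED: {BugStatus.CLOSED, BugStatus.OPEN},  # reopened if it reappears in regression
}

def move(current: BugStatus, new: BugStatus) -> BugStatus:
    """Return the new status, or raise if the transition is not allowed."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move from {current.value} to {new.value}")
    return new

# Example: tester logs a bug, developer fixes it, tester verifies and closes it.
status = BugStatus.NEW
status = move(status, BugStatus.FIXED)
status = move(status, BugStatus.CLOSED)
print(status.value)  # Closed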
Description of the bug life cycle:
There are two ways to report bugs:
1) Using a bug tracking tool such as Bugzilla
2) Using an Excel sheet
Whenever a new build is released, the tester tests the build using his test case document.
If a bug is found, he sets its status to New. The developer then checks whether it really is a bug, using the cross-reference matrix. If he can reproduce the bug, he fixes it; otherwise he rejects it and sets its status to Rejected.
The tester then retests the old functionality to confirm whether the bug has been fixed. If it has, he sets the status to Closed; otherwise he sets it to Open. If he finds a new bug while testing the old functionality, he sets its status to New. In other words, when the tester finds a bug for the first time he sets the status to New; otherwise he sets it to Open. When the developer wants more clarification on a bug, he also sets the status to Open and rectifies the bug later. And if the requirements have changed and the tester, not knowing about the change, logs the behaviour as a bug, the developer sets its status to As per requirements.
After testing is completed in the company, user acceptance testing is performed once again in the client environment. If a bug is found at the client location but cannot be reproduced at the company location, its status is set to Not able to reproduce.
Once user acceptance testing is completed, the build is moved into production.
If a severe issue occurs for which there is no test case, it is called a hotfix.
The developer then fixes that bug immediately.
Tuesday, January 29, 2008
TESTING LIFE CYCLE
Testing life cycle:
The testing life cycle contains six phases:
Test planning
Test development
Test execution
Result analysis
Bug tracking
Reporting
Test planning: The test plan is a document that describes how to perform testing on an application in an effective, efficient, and optimized way. It contains the test objective, coverage of testing, test strategy, base criteria, test scope, test deliverables, test environment, test bed, resource planning, scheduling, staffing and training, risks and contingency plans, assumptions, and approval information.
Test objective: The purpose of the test plan document is clearly described in this section.
Reference documents: The documents referred to while preparing the test plan are listed in this section.
Coverage of testing:
Features to be tested: All the features that are within the scope of testing are listed in this section.
Features not to be tested: Features that are not planned for testing, based on the following criteria, are listed in this section:
(1) Features out of scope
(2) Future functionalities
(3) Low-risk areas
(4) Features skipped because of time constraints
Test strategy: The test strategy is an organization-level document used for testing all the projects of an organization. The test strategy is the same for all projects, whereas the test plan differs from project to project.
The test strategy consists of:
Levels of testing
Types of testing
Test design techniques
Configuration management
Test metrics
Terminology
Automation plan
List of automated tools
Levels of testing: All the levels of testing used in the organization are listed in this section. Mainly, unit-level, module-level, integration, system-level, and acceptance testing are performed.
Types of testing: All the types of testing followed by the company are listed in this section.
Test design techniques: All the techniques used to perform testing in an effective, efficient, and optimized way, while producing the same result, are listed in this section.
Software configuration management (SCM): Software configuration management is a process that mainly controls, coordinates, and tracks code, requirements, documents, problems, change requests, designs, and tools.
Test metrics: All the tasks planned for maintaining metrics in the organization are listed in this section. Quality metrics, process improvement metrics, and productivity metrics are some of the different types of metrics.
Terminology: All the terms used in the organization, along with their meanings, are clearly mentioned in this section.
Automation plan: All the areas planned for automation are listed in this section.
List of automation tools: All the automated tools used in the company are listed in this section.
Test bed: A test bed is the execution environment configured for software testing, consisting of specific hardware, network topology, operating system, configuration of the product under test, system software, and other applications. The test plan for a project should be developed from the test beds to be used.
Test deliverables: All the documents delivered by the testing department are listed in this section. Test cases, defect reports, the test plan, status reports, review reports, the test summary report, etc. are the test deliverables of the testing phase. This is not a definitive set of test deliverables, but it will help any test organization begin the process of determining an appropriate set of deliverables. The goal of test deliverables is to capture the required content in a useful and consistent framework, as concisely as possible.
Test environment: The customer-specified environment is called the test environment.
Test case: A test case is a document that describes the input, action, or event and the expected output, in order to determine whether the application is working correctly or not.
It is also useful for finding problems in the requirements or design, because writing it requires thinking through the operation of the application completely.
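As a rough illustration of the input/action/expected-output structure described above, a test case could be represented as a simple record. The field names and the login example below are assumptions chosen only for this sketch, not a standard template.

from dataclasses import dataclass

@dataclass
class TestCase:
    test_case_id: str
    description: str        # the action or event to perform
    input_data: str
    expected_output: str
    actual_output: str = ""  # filled in during test execution
    result: str = ""         # "Pass" or "Fail", filled in during result analysis

# Example test case for a login screen (illustrative values only).
tc = TestCase(
    test_case_id="TC_LOGIN_001",
    description="Enter a valid user name and password and click Login",
    input_data="user=alice, password=secret123",
    expected_output="Home page is displayed",
)
print(tc)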
Base criteria:
Acceptance criteria: When to stop testing in a full-fledged model is clearly described in this section.
Suspension criteria: When to suspend testing is clearly described in this section.
Resource planning: Whether the resources in the company (hardware, software, and human resources) are sufficient for the project, and who has to do what, is clearly described in this section.
Scheduling: The start date and end dates are clearly described in this section.
Staffing and training: How much staff is to be recruited and what kind of training is to be provided for them and for existing employees is clearly planned in this section.
Risks and contingency plans: All the potential risks and the corresponding solutions are listed in this section.
Risks in an organization:
(1) Delay in delivery of the software
(2) Employees may leave the organization
(3) Lack of expertise
(4) Customer-imposed deadlines
(5) Inability to test all the features within the available time
Contingencies:
(1) Proper planning and assurance
(2) Employees should be maintained on the bench
(3) Training should be provided
(4) What is not to be tested should be planned in case of customer-imposed deadlines; perform ad hoc testing
(5) Testing should be done based on severity and priority
Assumptions: All the assumptions that are to be ensured by a test engineer are clearly mentioned in this section.
Approval information: Who has to approve what is clearly described in this section.
Test development: In this phase the test engineer prepares the test case documents that will be used during test execution.
Test execution: In this phase the test engineer does the following:
He performs the action described in the description column of the test case.
He observes the actual behaviour of the application.
He documents the observed value in the actual column of the test case document.
Result analysis: In this phase the test engineer compares the actual value with the expected value. If they match, he documents the result as Pass; otherwise he documents the result as Fail.
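The comparison done during result analysis can be sketched as a tiny Python function; the string values below are made up purely for illustration.

def analyse_result(expected: str, actual: str) -> str:
    """Compare the actual value with the expected value and return Pass or Fail."""
    return "Pass" if actual == expected else "Fail"

# The tester documents the observed value in the 'actual' column and records the result.
print(analyse_result("Home page is displayed", "Home page is displayed"))  # Pass
print(analyse_result("Home page is displayed", "Error 500"))               # Fail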
Bug tracking: Bug tracking is the process of identifying and isolating a bug.
Defect profile template: A defect profile contains the defect ID, defect description, steps to reproduce, submitter, date of submission, build number, version number, assignee, severity, priority, and status (a small sketch follows this list of fields).
Defect ID: The sequence number of the defect is mentioned in this field.
Steps to reproduce: The steps the test engineer followed to identify the defect are listed in this field.
Submitter: The alias of the test engineer who submitted the defect is specified in this field.
Date of submission: The date on which the defect was submitted is mentioned in this field.
Severity: Severity describes the seriousness of the defect; in other words, how serious the defect is.
Severity is classified into four types:
Fatal - Sev1 (S1)
Major - Sev2 (S2)
Minor - Sev3 (S3)
Suggestion - Sev4 (S4)
Fatal: If the problems are related to navigation blocks or unavailability of functionality, such defects are treated as fatal defects.
Major: If the problems are related to the functionality of the application, such defects are treated as major defects.
Minor: If the problems are related to the appearance or look and feel of the application, such defects are treated as minor defects.
Suggestion: If the problems are related to the value of the application, such defects are treated as suggestions.
Priority: Priority defines which bug is to be fixed first.
Priority is classified into four types:
Showstopper
High
Medium
Low
Usually, fatal defects are given showstopper priority, major defects high priority, minor defects low priority, and suggestions no priority.
But in some situations the highest-severity defects may be given the lowest priority, and the lowest-severity defects may be given the highest priority.
Low severity, highest priority:
Whenever there is a customer visit, look-and-feel defects are given the highest priority.
Highest severity, low priority: When some functionality is not yet available and it is a known issue because that part of the module is still under construction, it is given the lowest priority even though it is a fatal issue.
Tester comments: If the tester has any comments on the bug, he writes them in this field.
Developer comments: If the developer wants to add any comments on the bug, he writes them in this field.
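Here is a minimal sketch of the defect profile described above, with severity and priority kept as simple enumerations. The field names and the sample values are assumptions for illustration, not the schema of any specific defect-tracking tool.

from dataclasses import dataclass
from datetime import date
from enum import IntEnum

class Severity(IntEnum):
    FATAL = 1       # Sev1
    MAJOR = 2       # Sev2
    MINOR = 3       # Sev3
    SUGGESTION = 4  # Sev4

class Priority(IntEnum):
    SHOWSTOPPER = 1
    HIGH = 2
    MEDIUM = 3
    LOW = 4

@dataclass
class DefectReport:
    defect_id: str
    description: str
    steps_to_reproduce: list
    submitter: str
    date_of_submission: date
    build_number: str
    version_number: str
    severity: Severity
    priority: Priority
    status: str = "New"
    tester_comments: str = ""
    developer_comments: str = ""

# Usually a fatal defect is raised with showstopper priority.
bug = DefectReport(
    defect_id="DEF-0042",
    description="Login button does nothing, blocking all further navigation",
    steps_to_reproduce=["Open the login page", "Enter valid credentials", "Click Login"],
    submitter="tester_alias",
    date_of_submission=date(2008, 3, 22),
    build_number="B-105",
    version_number="1.2",
    severity=Severity.FATAL,
    priority=Priority.SHOWSTOPPER,
)
print(bug.defect_id, bug.severity.name, bug.priority.name)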
Friday, January 18, 2008
Testing terminology in company
Review: A review is a process of studying or checking, depending upon the role involved in it.
Ex: Developers review the process, whereas quality assurance people check whether the process is correct or not.
Review report: A review report is the outcome document of a review, which may contain either a list of doubts or a list of comments, depending on the role involved in it.
Peer review: It is an alternative form of testing, where colleagues are invited to examine your work products for defects and improvement opportunities.
Peer review report: It is the outcome document of a peer review, which contains a list of comments.
Pass-around: It is a multiple, concurrent peer desk-check in which several people are invited to provide comments on the product.
Workaround: It is a temporary alternative solution provided when a problem arises.
Inspection: An inspection is a process of checking conducted on roles without any prior intimation. Usually the details are dug into during this process. It is a more systematic and rigorous type of peer review. Inspections are more effective at finding defects than informal reviews.
Ex: In the Motorola Iridium project, nearly 80% of defects were detected through inspections, whereas only 60% of defects were detected through informal reviews.
Audit: An audit is a process of checking conducted on roles or departments with prior intimation, well in advance.
There are two types of audits:
(1) Internal audit
(2) External audit
If the audit is conducted by resources within the company, it is known as an internal audit.
If the audit is conducted by third-party resources, it is called an external audit.
Team review: It is a planned and structured approach, but less formal and less rigorous compared to inspections.
Walkthrough: A walkthrough is defined as an informal meeting between the roles.
Code walkthrough: A code walkthrough is a process of checking conducted on roles by the quality assurance people, in which they go through the source code in order to check the coding standards.
Checklist: The checklist helps improve the quality of the code so that it is easy to understand.
(1) Comments for every page, method, and class
(2) Naming conventions
(3) Tool tips for every control
(4) Validations according to metadata
(5) Consistent look and feel (CSS)
(6) Only labels used for writing the field names
(7) Comments added to tables, views, and stored procedures in the database
(8) Exception handling
(9) Code partitioned into regions with appropriate names
Validation: Validation is the process of validating the developed product. Validation is done by the testing team, the customer, or a third-party testing team.
Verification: Verification is the process of monitoring and checking whether the product is being developed in the right manner or not. Usually quality assurance people do it.
Quality control: Quality control is the process of validating the developed product; usually the quality control team does the validation.
Quality assurance: Quality assurance is the process of monitoring and guiding each and every role in the organization to follow the guidelines, right from the start of the process until the end of the process. Usually they do verification with the help of reviews, checklists, inspections, and audits.
Non-conformance (NC): Whenever a role does not follow the process, a penalty is given in terms of an NC.
Software configuration management (SCM): Software configuration management is a process that mainly concentrates on two tasks:
Change control: Change control is the process of updating the related documents whenever changes are made to the application, in order to keep the documents and the application in sync with each other at any point of time.
Version control: Version control is the process of maintaining the naming conventions and version numbers.
Common repository: It is basically a server that acts as a storage place and can be accessed only by authorized users.
Check-in: It is the process of uploading documents to the common repository.
Check-out: It is the process of downloading documents from the common repository.
Baselining: Baselining is the process of finalizing the documents.
Publish: Publishing is the process of making the baselined documents available to the relevant roles by uploading them to the common repository with a special icon representation.
Release: It is the process of sending the application from the development department to the testing department.
Delivery: It is the process of sending the tested application from the company to the client.
Software release note: It is a document prepared by the development department and sent to the testing department during the release. It contains the following information:
(1) Build path information
(2) Deployment document path information
(3) Test data path information
(4) Known issues document path information
(5) Release manager mail ID
(6) Build number
(7) Version number
(8) Release date
(9) Module name
Software delivery note: It is a document prepared by the testing department with the support of the project manager and business analyst, and it is given to the customer at the time of delivery. It contains the following information:
(1) User manual
(2) Known issues
Corrective action: Whenever a role has committed a repairable mistake, corrective action is taken in order to correct that mistake.
Preventive action: If a role has committed an irreparable mistake, preventive action is taken in order to prevent that type of mistake in the future.
Defect product: If a product does not satisfy some of the requirements but is still usable, it is called a defect product.
Defective product: If the entire product does not satisfy the requirements and is also not usable, it is called a defective product.
Benchmark: A benchmark is defined as a standard against which we usually compare. In the testing department we can consider the expected values as benchmarks.
Prototype: It is defined as a roughly and rapidly developed model which is used for demonstration to the client, in order to gather clear requirements and win the confidence of the customer.
Pair programming: Two developers work together on the same program at a single workstation, continuously reviewing each other's work.
Brainstorming: A learning technique involving open group discussion intended to expand the range of available ideas; a meeting to generate creative ideas.
Change request: A change request is the process of requesting the developers to incorporate changes into the application.
Impact analysis: After the changes are requested by the customer, the analysts analyze how much impact will fall on the already developed part if they incorporate these changes. This process is called impact analysis.
Slippage: It is defined as the extra time taken to accomplish a specific task.
Slippage = actual time taken - expected time
Escalation: Escalation is the process of intimating any kind of problematic situation to the superiors. It is usually done at different levels:
1st level - team lead
2nd level - quality lead
3rd level - quality manager
4th level - team manager
5th level - project manager
Code optimization: Code optimization is the process of reducing the number of lines or the complexity of the code in order to increase the performance of the application. This is also called fine tuning.
Traceability matrix: It is a document used for tracking the requirements, test cases, and defects. It is prepared to satisfy the client that the coverage is complete, end to end. It contains the requirement baseline document reference number, the test case/condition, and the bug or defect ID. Using this document a person can trace a requirement from a defect ID; the linking information is used for tracing back in ambiguous situations. It is also called a cross-reference matrix.
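A traceability matrix can be kept as something as simple as a mapping from requirement IDs to test cases and defect IDs; the identifiers below are made up only for this sketch.

# One row per requirement: which test cases cover it and which defects trace back to it.
traceability_matrix = {
    "REQ-001": {"test_cases": ["TC-001", "TC-002"], "defects": ["DEF-0042"]},
    "REQ-002": {"test_cases": ["TC-003"],           "defects": []},
}

def requirements_for_defect(matrix: dict, defect_id: str) -> list:
    """Trace back from a defect ID to the requirements it affects."""
    return [req for req, row in matrix.items() if defect_id in row["defects"]]

print(requirements_for_defect(traceability_matrix, "DEF-0042"))  # ['REQ-001']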
Hard coding: It is the practice of incorporating constant values directly into the program.
Patch: Whenever the test engineer rejects a build, the developer does some patch work on the build and releases the same build as a patch.
Project report: It is a report prepared by the test lead, well in advance of the periodic project meeting.
Periodic project meeting (PPM): A periodic project meeting is conducted periodically in order to discuss the status of the project, where the following points are discussed:
(1) Percentage of the project covered during the period
(2) Percentage of the project not covered during the period
(3) Tasks completed during the period
(4) Total defect metrics
(5) Any slippages
(6) Reasons for the slippages
(7) Team-related HR issues
(8) Technical issues
Management representative meeting (MRM): It is a meeting conducted in order to discuss the status of the company, where the following points are discussed:
(1) Success rate and growth rate of the company
(2) Projects that were recently signed off
(3) Projects that are in the pipeline
(4) Customer appraisals
(5) Negative comments from customers
(6) Future plans
(7) Internal audit reports
(8) Individual appraisals
Ticket: When a role needs access permissions or has any technical or facility-related issues, the role sends a mail to the concerned persons.
Knowledge transfer (KT): It is a training program in which the project leads or project managers give domain knowledge to the team, so that before working on the project the roles understand the requirements and functionality and can develop the code or the test cases, depending upon their roles.
Defect leakage: It occurs at the customer or end-user side after the application is delivered. If a user finds any kind of defect while using the application, it is called defect leakage.
Hotfix: A high-priority bug found at the customer site for which there is no test case; it has to be fixed immediately.
Tuesday, December 25, 2007
Testing basics
What is testing and why do we perform testing?
Testing is a process in which defects are identified, isolated, and subjected to rectification with the help of test cases, inspections, and checklists, to ensure that the product is defect-free and satisfies the customer requirements. The main objective of testing is to detect errors.
It is a process used to help identify the correctness, completeness, security, and quality of the developed computer software. The main intention of testing is to find errors.
For more information on testing log on to
http://www.questioningsoftware.com/2007/11/what-is-software-testing.html
Ways of testing
Conventional testing
Unconventional testing
Conventional testing: It is a kind of testing in which the quality assurance people test each and every outcome document, right from the initial phase of the software development life cycle.
Unconventional testing: If testing is performed by the developer, then we say that it is unconventional testing.
Testing techniques:
There are 3 methods for testing
(1)Black box testing
(2)White box testing
(3)Grey box testing
(1) Black box testing: If one performs testing only on the functional part of the application, without having any knowledge of its structure, this method of testing is known as black box testing. Usually test engineers perform it.
(2) White box testing: If one performs testing on the structural part of the application, this type of testing is known as white box testing. Usually developers perform it.
(3) Grey box testing: If one performs testing on the functional as well as the structural part of the application, this type of testing is known as grey box testing. A test engineer with structural knowledge performs grey box testing.
Levels of Testing:
Unit Testing
Module Testing
Integration Testing
System Testing
User acceptance Testing
Unit testing: If one performs testing on a single unit or small component, this type of testing is called unit testing. It is white box testing. It is low-level testing and is done by the developer.
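A minimal unit-test sketch using pytest conventions; the monthly_interest function is a made-up unit, not taken from any particular project.

import pytest

# Unit under test (hypothetical example).
def monthly_interest(balance: float, annual_rate: float) -> float:
    """Return one month's interest on the given balance."""
    if balance < 0 or annual_rate < 0:
        raise ValueError("balance and rate must be non-negative")
    return balance * annual_rate / 12

# Unit tests; in practice these would live in their own test file and be run with pytest.
def test_monthly_interest_typical():
    assert monthly_interest(1200.0, 0.06) == pytest.approx(6.0)

def test_monthly_interest_rejects_negative_balance():
    with pytest.raises(ValueError):
        monthly_interest(-1.0, 0.06)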
Module testing: If one performs testing on a module, it is called module testing.
It is black box testing. Usually test engineers perform it.
Integration testing: After developing the modules, the developer wants to integrate them so that all the modules combine to form the project. The developer integrates the modules using linking programs called interfaces. Once the modules are linked, it is the developer's duty to test whether these interfaces are working correctly. This is known as integration testing.
There are four ways in which the developer can integrate the modules (a small stub sketch follows the list):
Top-down approach
In this approach the parent modules are integrated with the sub-modules.
While integrating the modules in the top-down approach, if any mandatory sub-module is missing, it is replaced with a temporary program called a stub.
Bottom-up approach
In this approach the child modules are integrated with the parent modules.
While integrating the modules in the bottom-up approach, if any mandatory module is missing, it is replaced by a temporary program called a driver.
Hybrid approach
This is a mix of both the top-down and bottom-up approaches.
Big bang approach
Once all the modules are ready, integrating all of them at the same time is known as the big bang approach.
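As mentioned for the top-down approach, here is a small sketch of a stub: the parent module is ready, but a sub-module that calculates interest is not yet developed, so a temporary stub returning a fixed value stands in for it. The module and function names are assumptions used only for illustration.

# Stub standing in for a sub-module that is not yet developed.
def interest_stub(account_id: str) -> float:
    """Temporary stand-in: always returns a fixed, known value."""
    return 10.0

# Parent module under integration testing; the real sub-module would replace the stub later.
def month_end_statement(account_id: str, balance: float, interest_fn=interest_stub) -> float:
    """Return the closing balance after adding the month's interest."""
    return balance + interest_fn(account_id)

# The developer can now test the parent module's interface before the sub-module exists.
assert month_end_statement("ACC-1", 100.0) == 110.0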
System-level testing: If one performs testing after deploying the complete application into the environment, it is called system-level testing. It is black box testing; usually test engineers perform it.
Usually black box integration testing, load testing, and stress testing are performed at this level.
User acceptance testing: If one performs the same system testing in the presence of the user, this type of testing is known as user acceptance testing.
Different types of testing:
Regression testing: It is a type of testing in which we test the already tested functionality in order to ensure that the old functionality works fine with respect to the new functionality.
It is done in two situations:
(1) Whenever the test engineer raises a defect, he reports it to the development department; the developer fixes the defect and sends the next build to the testing department. The test engineer then tests the old functionality around the fixed defect in order to ensure that the old functionality still works fine with respect to the defect fix. This type of testing is called bug regression testing.
(2) Whenever the customer wants new changes in the application, the development department adds the new changes to the application. The test engineer then performs testing on the old functionalities with respect to the newly added functionality, to ensure that the old functionality is not affected by the new functionality.
Progressive testing: Testing of new features after the regression testing of previous features.
Build verification testing: It is a type of testing in which one tests the released build in order to check whether the application can properly handle the load. It is similar to hardware testing. It is also called sanity testing.
Smoke testing: It is a type of software testing used to check whether the basic functionality is working or not. If roughly 70% of it passes, we say that the build is stable.
Re-testing: Re-testing means testing the application once again in order to check whether it is working correctly or not. It is a type of testing in which a test engineer tests the application with multiple sets of values in order to increase the scope of the test.
Structural testing: Structural testing is white box testing and is based on the algorithm or code.
Functional testing: It is black box testing, in which the tester verifies the functional specification; that is, the tester tests the functionality of the application.
Negative testing: Testing the application with negative (invalid) data.
E.g.: if the password should be a minimum of 6 characters, testing it with 4 characters.
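A small negative-testing sketch for the password example above, written with pytest; validate_password is a hypothetical function assumed only for this sketch.

import pytest

def validate_password(password: str) -> bool:
    """Hypothetical rule: a password must be at least 6 characters long."""
    if len(password) < 6:
        raise ValueError("password must be at least 6 characters")
    return True

def test_password_with_only_4_characters_is_rejected():
    # Negative test: invalid data should be rejected, not accepted silently.
    with pytest.raises(ValueError):
        validate_password("abcd")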
Comparison testing: This is nothing but comparing the software's strengths and weaknesses with another competing product.
Alpha testing: It is a type of user acceptance testing which is done within our company by our own test engineers.
The advantage is that if any defects are found in this type of testing, there is a chance of rectifying them immediately.
Beta testing: It is a type of user acceptance testing which is done at the client site, either by third-party test engineers or by the end users.
Static testing: It is a type of testing in which we perform testing without executing the application.
Ex: GUI testing, document testing, code review, etc.
Dynamic testing: It is a type of testing in which we perform actions on the application, i.e., by executing the application.
Scalability testing: It is similar to load testing. Scalability is nothing but how many users the application should handle, or the maximum number of users the system can handle.
Installation testing: It is a type of testing in which one installs the application into the environment by following the guidelines given in the deployment document. If the installation is successful, one concludes that the given guidelines are correct; otherwise one concludes that the given guidelines are not correct.
Compatibility testing: It is a type of testing in which the test engineer may have to deploy the application into multiple environments, prepared with multiple combinations of environmental components, in order to check whether the application is compatible with those environments. The computing environment includes the operating system, database, browser compatibility, backwards compatibility, the computing capacity of the hardware platform, and the compatibility of peripherals.
Ex: Suppose we develop the application in a Windows environment; we then install the application into a Unix environment and check whether it is compatible with that environment or not. This is usually done for products.
Monkey testing: It is a type of testing in which a test engineer intentionally performs abnormal actions on the application in order to check the stability of the application.
Exploratory testing: It is a type of testing in which domain experts test the application without any requirements, by exploring its functionality.
Usability testing: Usability testing is used for checking the user-friendliness of the application.
End-to-end testing: It is a type of testing in which one tests a complete transaction from one end to the other end, in order to ensure that the end-to-end scenario works fine.
Port testing: It is a compatibility test in which one installs the application into the client environment and checks whether it is compatible with that environment or not.
Security testing: It is a type of testing in which we perform testing in the following areas:
Authentication
Direct URL testing
Firewall leakage testing
Authentication: In this type of testing the test engineer tests whether the application allows only authorized users or not.
Direct URL testing: In this type of testing a test engineer tries to access unauthorized or secure pages directly by entering their URLs (see the sketch after these descriptions).
Firewall leakage testing: In this testing the test engineer tries to access pages of another access level by logging in as a user of a different level.
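A rough sketch of automating direct URL testing with the requests library; the URLs and the accepted status codes below are assumptions, since the right behaviour (redirect to the login page, 401, or 403) depends on the application under test.

import requests

# Secure pages that should not be reachable without logging in (hypothetical URLs).
SECURE_PAGES = [
    "https://example.com/admin/users",
    "https://example.com/account/statement",
]

def test_secure_pages_require_login():
    for url in SECURE_PAGES:
        # A fresh request carries no authentication cookie or session.
        response = requests.get(url, allow_redirects=False, timeout=10)
        # Expect the server to refuse or redirect to the login page, never serve 200 OK.
        assert response.status_code in (301, 302, 401, 403), url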
Reliability testing: It is a type of testing in which one tests the application for a longer period of time in order to check its stability. It is also called soak testing.
Mutation testing: It is a type of testing in which the developer makes some changes to the program and checks its behaviour; because it works with multiple mutants of the program, it is called mutation testing.
Ad hoc testing: After understanding the requirements, if we perform testing on the application in our own way, this type of testing is called ad hoc testing.
Error-handling testing: It determines the ability of the application system to process incorrect transactions properly. Errors encompass all unexpected conditions.
Ex: In some systems, approximately 50% of the programming effort is devoted to handling error conditions.
Load testing: It is a type of testing in which one applies an initial load on the application and continues with sequentially increasing loads, in order to check whether the target loads are met or not and to assess the critical load.
Critical or peak load: The load beyond which the application starts degrading in performance is known as the critical load.
Performance testing: It is a type of testing in which one applies predefined, quantified requests in order to check the response time.
Stress testing: It is a type of testing in which one either performs abnormal actions or tests the application for a long period of time, in order to check the stability of the application.
Database testing: It is back-end testing. We conduct this testing based on data validation and data integrity; it is used to validate the data.
Data validation means checking whether the front-end values are correctly stored in the table contents; data integrity means checking whether front-end operations have the intended impact on the back-end table contents.
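A minimal data-validation sketch using Python's built-in sqlite3 module as a stand-in back end: after a (simulated) front-end operation, the check confirms that the value really landed in the table. The table and function names are assumptions; a real project would query its actual database instead.

import sqlite3

# In-memory database standing in for the application's back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (account_id TEXT PRIMARY KEY, balance REAL)")

def create_account(account_id: str, opening_balance: float) -> None:
    """Stand-in for the front-end 'create account' operation."""
    conn.execute("INSERT INTO accounts VALUES (?, ?)", (account_id, opening_balance))
    conn.commit()

# Data validation: the value entered at the front end must appear in the table.
create_account("ACC-1", 500.0)
row = conn.execute("SELECT balance FROM accounts WHERE account_id = ?", ("ACC-1",)).fetchone()
assert row is not None and row[0] == 500.0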
Volume testing: The purpose of volume testing is to find weaknesses in the system with respect to handling large amounts of data during a short period of time.
Maintenance testing: It is a type of testing which is performed either to identify equipment problems, to diagnose equipment problems, or to confirm that repair measures have been effective. It can be performed at the system level, the equipment level, or the component level. It is conducted to ensure functionality and performance, and may include functionality and accuracy checks.
Maintenance falls into the following categories:
Preventive maintenance
Corrective maintenance
Perfective maintenance
Adaptive maintenance
Test Design techniques:
While developing the test cases, if the test engineers find it difficult, they use these techniques to develop the test cases in an easy manner.
These are the test design techniques used in many companies:
Boundary value analysis
Equivalence class partitioning
Error guessing
Cause-effect graphing
Boundary value analysis: While developing the test cases for a range kind of input, the test engineer usually uses this technique (see the example after these descriptions).
Equivalence class partitioning: Whenever there are many validations for a feature, the test engineer usually uses the equivalence class partitioning technique to develop the test cases in an easy manner.
Error guessing: Error guessing is a test case design technique in which the tester guesses what faults might occur and designs tests to expose them.
Cause-effect graphing: It is a testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relate causes to effects. It has a beneficial effect in pointing out incompleteness and ambiguities in specifications.
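For the minimum-password-length rule used earlier, boundary value analysis and equivalence class partitioning might produce test data like the following pytest sketch; the 6-to-12-character limits and the accept_password function are assumed purely for this example.

import pytest

def accept_password(password: str) -> bool:
    """Hypothetical rule: length must be between 6 and 12 characters inclusive."""
    return 6 <= len(password) <= 12

# Boundary value analysis: values at and around both boundaries.
# Equivalence class partitioning: one representative per class (too short, valid, too long).
@pytest.mark.parametrize("password,expected", [
    ("a" * 5, False),    # just below the lower boundary
    ("a" * 6, True),     # lower boundary
    ("a" * 7, True),     # just above the lower boundary
    ("a" * 12, True),    # upper boundary
    ("a" * 13, False),   # just above the upper boundary
    ("ab", False),       # representative of the 'too short' class
    ("validpass", True), # representative of the 'valid' class
])
def test_accept_password(password, expected):
    assert accept_password(password) is expected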
Testing is a process in which defects are identified, isolated and subjected to rectification with the help of test cases, inspections and checklists, to ensure that the product is defect free and satisfies the customer requirements. It is also used to help identify the correctness, completeness, security and quality of the developed software. The main objective of testing is to detect errors.
For more information on testing, see
http://www.questioningsoftware.com/2007/11/what-is-software-testing.html
Ways of testing:
Conventional testing
Unconventional testing
Conventional testing: It is a sort of testing in which the quality assurance people test each and every outcome document, right from the initial phase of the software development life cycle.
Unconventional testing: If testing is performed by the developer, then we say that it is unconventional testing.
Testing techniques:
There are three methods of testing:
(1) Black box testing
(2) White box testing
(3) Grey box testing
(1) Black box testing: If one performs testing only on the functional part of the application, without any knowledge of its structure, then this method of testing is known as black box testing. Usually test engineers perform it.
(2) White box testing: If one performs testing on the structural part (the code) of the application, then it is known as white box testing. Usually developers perform it.
(3) Grey box testing: If one performs testing on both the functional and the structural parts of the application, then it is known as grey box testing. A test engineer with structural knowledge performs grey box testing.
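To make the difference concrete, here is a minimal sketch in Python; the discount function is hypothetical and not from any real project. The black box test is derived only from the stated requirement, while the white box test is written while looking at the code in order to cover a specific branch.

import unittest

def discount(amount):
    # Hypothetical function under test: 10% off orders of 100 or more.
    if amount >= 100:
        return amount * 0.9
    return amount

class BlackBoxTests(unittest.TestCase):
    # Black box: based only on the requirement, no knowledge of the code.
    def test_small_order_is_not_discounted(self):
        self.assertEqual(discount(50), 50)

    def test_large_order_gets_ten_percent_off(self):
        self.assertEqual(discount(200), 180)

class WhiteBoxTests(unittest.TestCase):
    # White box: written to cover the boundary of the `amount >= 100` branch.
    def test_boundary_of_discount_branch(self):
        self.assertEqual(discount(100), 90)   # exercises the discount branch
        self.assertEqual(discount(99), 99)    # exercises the fall-through branch

if __name__ == "__main__":
    unittest.main()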
Levels of Testing:
Unit Testing
Module Testing
Integration Testing
System Testing
User acceptance Testing
Unit Testing: If one performs testing on a single unit or small component, then it is called unit testing. It is white box testing, it is low-level testing, and it is done by the developer.
Module Testing: If one performs testing on a module, then it is called module testing.
It is black box testing, and test engineers usually perform it.
Integration Testing: After developing the modules, the developer integrates them so that all the modules combine to form the project. The modules are linked together using linking programs called interfaces. Once the modules are linked, it is the duty of the developer to test whether these interfaces are working fine or not. This is known as integration testing.
There are four ways in which the developer can integrate the modules:
Top down approach
In this approach the parent modules are integrated first and then the sub modules.
While integrating the modules in the top-down approach, if any mandatory module is missing, it is replaced with a temporary program called a stub.
Bottom up approach
In this approach the child modules are integrated with the parent modules.
While integrating the modules in the bottom-up approach, if any mandatory module is missing, it is replaced by a temporary program called a driver (a small sketch of a stub and a driver follows the big bang approach below).
Hybrid approach
This is a mix of the top-down and bottom-up approaches.
Big bang approach
Once all the modules are ready, integrating all of them at a time is known as the big bang approach.
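To make the idea of stubs and drivers concrete, here is a minimal sketch; the names calculate_invoice and get_tax_rate are hypothetical. The stub is a hard-coded stand-in for a missing child module, and the driver is a temporary caller used when the real parent is not ready.

# Parent module: calls a child module that has not been developed yet.
def calculate_invoice(order_total, customer_id):
    tax = get_tax_rate(customer_id) * order_total   # call into the child module
    return order_total + tax

# Stub (top-down integration): a temporary, hard-coded stand-in for the missing child module.
def get_tax_rate(customer_id):
    return 0.10

# Driver (bottom-up integration): a temporary caller used when the real parent module is missing.
def driver():
    assert calculate_invoice(100, customer_id=1) == 110
    print("integration check passed")

if __name__ == "__main__":
    driver()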
System level testing: If one performs testing after deploying the complete application into the environment, it is called system level testing. It is black box testing, and test engineers usually perform it.
Usually they perform black box testing, load testing and stress testing at this level.
User acceptance testing: If one performs the same system testing in the presence of the user, then it is known as user acceptance testing.
Different types of testing:
Regression testing: It is a type of testing in which we test already tested functionality in order to ensure that the old functionality still works fine with respect to the new functionality.
It is done in two situations:
(1) Whenever the test engineer raises a defect, he reports it to the development department; the developer fixes the defect and sends the next build to the testing department. The test engineer then tests the old functionality with respect to the defect fix, in order to ensure that the old functionality is working fine with respect to the fixed functionality. This is called bug regression testing.
(2) Whenever the customer wants new changes in the application, the development department adds the new changes. The test engineer then performs testing on the old functionalities with respect to the newly added functionality, so that the old functionality is not affected by the new add-in functionality.
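A minimal sketch of how a bug regression test might look; the transfer_fee function and the defect number are hypothetical. The point is that the test written when the old defect was fixed stays in the suite and is rerun on every new build.

import unittest

def transfer_fee(amount):
    # Hypothetical function that once had a defect: fees were charged on zero-amount transfers.
    if amount <= 0:
        return 0.0
    return round(amount * 0.01, 2)

class RegressionTests(unittest.TestCase):
    def test_defect_zero_amount_transfer_has_no_fee(self):
        # Written when the defect was fixed; rerun on every build so the old
        # behaviour cannot silently break again.
        self.assertEqual(transfer_fee(0), 0.0)

    def test_existing_fee_calculation_still_works(self):
        self.assertEqual(transfer_fee(200), 2.0)

if __name__ == "__main__":
    unittest.main()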
Progressive testing: Testing of new features after the regression testing of previous features.
Build verification testing: It is a type of testing in which one performs testing on the released build in order to check whether the build is stable enough to take up further testing. The idea is similar to a basic hardware check. It is also called sanity testing.
Smoke testing: It is a type of software testing used to check whether the basic functionality is working fine or not. If most of the basic checks pass (roughly 70%), the build is considered stable.
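A minimal smoke-check sketch, assuming a hypothetical web application running at localhost:8000; it only confirms that a handful of basic pages respond before deeper testing of the build is started.

import urllib.request

# Hypothetical basic pages that must respond before deeper testing starts.
BASIC_PAGES = [
    "http://localhost:8000/login",
    "http://localhost:8000/accounts",
    "http://localhost:8000/reports",
]

def smoke_test():
    passed = 0
    for url in BASIC_PAGES:
        try:
            if urllib.request.urlopen(url).status == 200:
                passed += 1
        except Exception:
            pass  # an unreachable or failing page simply counts as a failed check
    print(f"{passed}/{len(BASIC_PAGES)} basic checks passed")
    return passed == len(BASIC_PAGES)

if __name__ == "__main__":
    smoke_test()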
Re-testing: Re-testing means testing the application once again in order to check whether it is working fine or not. It is a type of testing in which a test engineer tests the application with multiple sets of values in order to increase the scope of the test.
Structural testing: Structural testing is white box testing; it is based on the algorithm (or) the code.
Functional testing: It is black box testing, in which the tester verifies the functional specification; that is, the tester performs testing on the functionality of the application.
Negative testing: Testing the application with negative data.
Eg: for a password field that should be a minimum of 6 characters, testing it with 4 characters.
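A minimal sketch of the password example above; the validation function is hypothetical and only encodes the stated rule of a minimum of 6 characters.

import unittest

def is_valid_password(password):
    # Hypothetical validation rule from the example: minimum 6 characters.
    return len(password) >= 6

class NegativePasswordTests(unittest.TestCase):
    def test_four_character_password_is_rejected(self):
        # Negative data: deliberately below the minimum length.
        self.assertFalse(is_valid_password("abcd"))

    def test_empty_password_is_rejected(self):
        self.assertFalse(is_valid_password(""))

if __name__ == "__main__":
    unittest.main()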
Comparison testing: This is nothing but comparing the software's strengths and weaknesses with another, competing product.
Alpha testing: It is a type of user acceptance testing which is done in our company by our test engineers.
The advantage is that if any defects are found in this type of testing, there is a chance of rectifying them immediately.
Beta testing: It is a type of user acceptance testing which is done at the client's place, either by third party test engineers or by the end users.
Static testing: It is a type of testing in which we perform testing without executing the application.
Ex: GUI testing, document testing, code review, etc.
Dynamic testing: It is a type of testing in which we perform some actions on the application, i.e., by executing the application.
Scalability testing: It is similar to load testing. Scalability is nothing but how many users the application can handle, i.e., the maximum number of users that the system can handle.
Installation testing: It is a type of testing in which one installs the application into the environment by following the guidelines given in the deployment document. If the installation is successful, the conclusion is that the given guidelines are correct; otherwise, the conclusion is that the given guidelines are not correct.
Compatibility testing: It is a type of testing in which the test engineer may have to deploy the application into multiple environments, prepared with multiple combinations of environmental components, in order to check whether the application is compatible with those environments. The computing environment includes the operating system, database, browser, backward compatibility, the computing capacity of the hardware platform and the compatibility of peripherals.
Ex: suppose we develop the application in a Windows environment; we then install the application into a Unix environment and check whether it is compatible with that environment or not. This is usually done for a product.
Monkey testing: It is a type of testing in which a test engineer intentionally performs abnormal actions on the application in order to check its stability.
Exploratory testing: It is a type of testing in which domain experts test the application without any documented requirements, by exploring its functionality.
Usability testing: Usability testing is used for checking the user friendliness of the application.
End-to-end testing: It is a type of testing in which one tests a complete transaction from one end to the other, in order to ensure that the end-to-end scenario is working fine.
Port testing: It is a compatibility testing in which one installs the application in the client environment and checks whether it is compatible with that environment or not.
Security testing: It is a type of testing in which we can perform testing on the following areas.
Authentication
Direct URL testing
Firewall leakage testing
Authentication: In this type of testing the test engineer tests whether the application allows only authorized users or not.
Direct URL testing: In this type of testing a test engineer tries to access unauthorized (or secure) pages directly by giving their URL (a small sketch of this check follows these definitions).
Firewall leakage testing: In this testing the test engineer tries to access pages of another access level by entering as a user of a different level.
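A minimal sketch of a direct URL check, assuming a hypothetical secure page on localhost:8000; an unauthenticated request should either be refused (401/403) or redirected to a login page.

import unittest
import urllib.request
import urllib.error

# Hypothetical secure page; in a real project this would be a known admin-only URL.
SECURE_URL = "http://localhost:8000/admin/reports"

class DirectUrlAccessTest(unittest.TestCase):
    def test_secure_page_is_not_served_without_login(self):
        request = urllib.request.Request(SECURE_URL)
        try:
            response = urllib.request.urlopen(request)
            # If the server answered 200, it must at least have redirected us to a login page.
            self.assertIn("login", response.geturl().lower())
        except urllib.error.HTTPError as error:
            # A 401/403 response is also an acceptable outcome for an unauthenticated user.
            self.assertIn(error.code, (401, 403))

if __name__ == "__main__":
    unittest.main()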
Reliability testing: It is a type of testing in which one tests the application for a longer period of time in order to check its stability. It is also called soak testing.
Mutation testing: It is a type of testing in which the developer makes small changes (mutants) to the program and checks whether the existing tests detect them; because the program is associated with multiple mutants, it is called mutation testing.
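Real mutation testing is usually done with a tool that generates the mutants automatically, but a hand-made illustration (with a hypothetical is_adult function) shows the idea: a good test should fail when run against the mutant.

import unittest

def is_adult(age):
    return age >= 18              # original code

def is_adult_mutant(age):
    return age > 18               # mutant: `>=` changed to `>`

class AgeTests(unittest.TestCase):
    def test_boundary_age_kills_the_mutant(self):
        self.assertTrue(is_adult(18))          # passes against the original
        self.assertFalse(is_adult_mutant(18))  # the same input exposes the mutant

if __name__ == "__main__":
    unittest.main()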
Ad hoc testing: After understanding the requirements, if we test the application in our own way, then this type of testing is called ad hoc testing.
Error-handling testing: It determines the ability of an application system to process incorrect transactions properly. Errors encompass all unexpected conditions.
Ex: in some systems, approximately 50% of the programming effort is devoted to handling error conditions.
Load testing: It is a type of testing in which one applies an initial load on the application and continues with sequentially increasing loads, in order to check whether the target loads are met (or not) and to assess the critical load (a small sketch follows the critical load definition).
Critical or peak load: the load beyond which the application starts degrading in performance is known as the critical load.
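A minimal sketch of applying sequentially increasing loads, assuming a hypothetical endpoint at localhost:8000; it fires batches of concurrent requests and reports the average response time. The point where that time starts degrading sharply is the critical (peak) load.

import time
import concurrent.futures
import urllib.request

# Hypothetical endpoint under test.
URL = "http://localhost:8000/search?q=test"

def one_request():
    start = time.time()
    urllib.request.urlopen(URL).read()
    return time.time() - start

def apply_load(users):
    # Fire `users` concurrent requests and return the average response time.
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(lambda _: one_request(), range(users)))
    return sum(timings) / len(timings)

if __name__ == "__main__":
    # Start with an initial load and increase it step by step.
    for users in (10, 50, 100, 200):
        print(users, "users ->", round(apply_load(users), 3), "s average response time")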
Performance testing: It is a type of testing in which one applies a predefined, quantified set of requests in order to check the response time.
Stress testing: It is a type of testing in which one either performs abnormal actions or tests the application for a long period of time, in order to check the stability of the application.
Database testing: It is back-end testing. We conduct this testing based on data validation and data integrity; it is used to validate the data.
Data validation means checking whether front-end values are correctly stored in the table contents or not. Data integrity means checking whether the impact of front-end operations is correctly reflected in the back-end table contents.
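A minimal data validation sketch using an in-memory SQLite table; the accounts table and the save_account_from_front_end helper are hypothetical stand-ins for the real front end and back end.

import sqlite3
import unittest

class DataValidationTest(unittest.TestCase):
    def setUp(self):
        # In-memory database stands in for the application's real back end.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, holder TEXT, balance REAL)")

    def save_account_from_front_end(self, holder, balance):
        # Stand-in for the front-end operation that writes to the database.
        self.db.execute("INSERT INTO accounts (holder, balance) VALUES (?, ?)", (holder, balance))
        self.db.commit()

    def test_front_end_value_is_stored_correctly(self):
        self.save_account_from_front_end("Ravi", 2500.0)
        row = self.db.execute("SELECT holder, balance FROM accounts WHERE holder = ?", ("Ravi",)).fetchone()
        self.assertEqual(row, ("Ravi", 2500.0))

if __name__ == "__main__":
    unittest.main()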
Volume testing: the purpose of volume testing is to find weaknesses in the system with respect to handling large amounts of data during a short period of time.
Maintenance testing: It is a type of testing which is performed either to identify equipment problems, to diagnose equipment problems or to confirm that repair measures have been effective. It can be performed at the system level, the equipment level or the component level, and it is conducted to ensure functionality, performance and accuracy.
Maintenance falls into the following categories:
Preventive maintenance
Corrective maintenance
Perfective maintenance
Adaptive maintenance
Test Design techniques:
While developing the test cases, if the test engineers are finding it difficult, they use these techniques to develop the test cases in an easier manner.
These are the test design techniques used in several companies:
Boundary value analysis
Equivalence class partition
Error guessing
Cause effect graphing.
Boundary value analysis: While developing test cases for range kind of inputs, the test engineer usually uses this technique.
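A minimal sketch of boundary value analysis for a hypothetical quantity field that accepts 1 to 100: the interesting test values sit just below, on, and just above each boundary.

import unittest

def accept_quantity(quantity):
    # Hypothetical rule: the field accepts values in the range 1 to 100.
    return 1 <= quantity <= 100

class BoundaryValueTests(unittest.TestCase):
    def test_boundary_values(self):
        # Classic boundary selection: just below, on, and just above each boundary.
        for value, expected in [(0, False), (1, True), (2, True),
                                (99, True), (100, True), (101, False)]:
            self.assertEqual(accept_quantity(value), expected)

if __name__ == "__main__":
    unittest.main()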
Equivalence class partition: Whenever there are a large number of validations for a feature, a test engineer usually uses the equivalence class partition technique to develop the test cases in an easy manner.
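A minimal sketch of equivalence class partitioning for a hypothetical age field that accepts 18 to 60: instead of testing every value, one representative per class (below the range, inside it, above it) is enough to cover the class.

import unittest

def accept_age(age):
    # Hypothetical rule: the field accepts ages from 18 to 60.
    return 18 <= age <= 60

class EquivalencePartitionTests(unittest.TestCase):
    def test_one_representative_per_class(self):
        self.assertFalse(accept_age(10))   # invalid class: age < 18
        self.assertTrue(accept_age(35))    # valid class: 18 <= age <= 60
        self.assertFalse(accept_age(70))   # invalid class: age > 60

if __name__ == "__main__":
    unittest.main()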
Error guessing: Error guessing is a test case design technique where the tester has to guess what faults might occur and to design the tests to represent them.
Cause effect graphing: It is a testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects. It has a beneficial effect in pointing out incompleteness and ambiguities in specifications.