Fundamentals of Software Testing
Verification Testing
Verification is the process of examining a product to find its defects.
• Verification techniques include:
1. Desk checks
2. Peer reviews
3. Walk-throughs
4. Formal inspections
• Verification tools include:
1. Standards
2. Checklists
3. Static analyzers (see the sketch below)
4. Use Cases
5. Judgment and experience
• The objectives of verification testing are to find defects:
1. Using human examination and experience.
2. As early as possible in the life of the defect.
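Of the verification tools listed above, static analyzers are the only automated ones: they examine code without executing it. The following is a loose illustration of the idea in Python, using the standard ast module; the two rules and the parameter threshold are assumptions chosen for this example, not the rule set of any particular product.

import ast

MAX_PARAMS = 5  # arbitrary threshold chosen for this illustration

SAMPLE_SOURCE = '''
def add(a, b):
    return a + b

def configure(host, port, user, password, timeout, retries):
    """Configure the connection."""
    return (host, port)
'''

def check(source):
    """Return checklist-style findings for the given source text."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            if ast.get_docstring(node) is None:
                findings.append(f"line {node.lineno}: '{node.name}' has no docstring")
            if len(node.args.args) > MAX_PARAMS:
                findings.append(f"line {node.lineno}: '{node.name}' takes too many parameters")
    return findings

if __name__ == "__main__":
    for finding in check(SAMPLE_SOURCE):
        print(finding)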
Inspection criteria:
1. If an input specification is used, it has been previously verified.
2. Special criteria can occur as a list of questions or a targeted checklist.
3. Inspection criteria are usually tied to the type of deliverable rather than to the specific software product being developed.
The typical formal inspection assigns the following responsibilities:
Producer - Have the product ready and available on time; work with the facilitator to establish and meet schedules; clarify issues for reviewers; resolve problems identified by the inspection team.
Facilitator - Learn the information being inspected; select the reviewers; schedule and coordinate meetings; lead the discussion on the inspection issues and mediate disputes; assign responsibilities; track each problem to closure.
Reader - Become familiar with the test subject; paraphrase the information into logical 'chunks'.
Reviewer - Learn the test subject and the inspection criteria; identify discrepancies between the two before the meeting; focus on identifying problems, not solving them; remain objective; review the product, not the producer.
Recorder - Become familiar with the work product and the inspection criteria; accurately record all issues raised by the review team.
Formal Inspection
• An inspection is a formal review process that evaluates a test subject using:
1. A structured inspection process.
2. A set of inspection criteria or an input specification.
3. A meeting facilitator, a recorder, and an optional reader.
4. A trained inspection team of 3 to 6 reviewers.
5. Reviewer preparation.
6. A formal report of findings, rework, and follow-up.
• Inspections apply a set of criteria to sections of a deliverable.
• The advantages of formal inspections are:
1. They are highly structured and require closure.
2. Success criteria are very visible.
3. They expose reviewers to the process and the deliverable.
• The disadvantages of formal inspections are:
1. They are very detailed and time-consuming.
2. Reviewers must be trained in inspection and product development.
3. The inspection criteria must be sound.
Walk-Through
• A walk-through evaluates a test subject by simulating (walking through) the process that it represents using simple test cases. For example:
1. A code walk-through simulates the process of running an application using test data (see the sketch at the end of this section).
2. A requirements walk-through simulates user events being addressed in the way that the document specifies.
• The objective of a walk-through is to provide an interactive arena using the simulation as the basis for discussion.
• A walk-through consists of
1. The test subject (usually a document).
2. The producer who leads the review.
3. 3 - 5 reviewers, often subject matter experts.
4. An optional report of findings.
• Walk-throughs are very effective for evaluations that benefit from creative thought.
• The advantages of walk-throughs are:
1. They apply diverse expertise and promote creative evaluation.
2. They usually make defects easily visible to the producer.
• The disadvantages of walk-throughs are:
1. They are not structured.
2. They rely solely on reviewer expertise.
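As a concrete illustration of the code walk-through described above, reviewers hand-simulate a small routine with simple test data and compare what the code actually does with what the specification says it should do. The routine and the discount rule below are invented for this example.

# Hypothetical routine under review: the spec says orders of 100 units
# or more receive a 10% discount.
def order_total(quantity, unit_price):
    """Return the total price, applying a 10% discount for large orders."""
    total = quantity * unit_price
    if quantity > 100:          # walk-through finding: the spec says ">= 100"
        total *= 0.90
    return total

# Walk-through: the team traces these simple cases aloud.
#   quantity=10,  unit_price=2.0 -> expected 20.00,  code gives 20.00  (agrees)
#   quantity=100, unit_price=2.0 -> expected 180.00, code gives 200.00 (boundary defect)
#   quantity=101, unit_price=2.0 -> expected 181.80, code gives 181.80 (agrees)
if __name__ == "__main__":
    for qty in (10, 100, 101):
        print(qty, order_total(qty, 2.0))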
Peer Review
• A peer review is an evaluation of a deliverable done by a group of individuals who do the same kind of work.
• Peer reviews typically focus on specific aspects of the subject rather than the deliverable as a whole. For example:
1. A code review may address only programming efficiency and style.
2. A design review may focus on the accuracy of algorithms.
3. A requirements review may focus on testability.
• The advantages of a peer review are:
1. People generally learn best from, and are best motivated by, their peers.
2. Diversity pays off, both in finding defects and in learning from them.
3. Peers become familiar with one another's deliverables.
4. Errors found in reviews cost less to fix than those found later.
• The disadvantages of a peer review are:
1. Reviews can be very taxing.
2. Peers may be uncomfortable critiquing each other's work.
3. The technique can cause resentment.
Desk Check
• A desk check is an evaluation of a deliverable by its producer.
• Desk checks are most commonly used by designers and programmers, but they can also be used for other deliverables.
• The advantages of a desk check are:
1. The skills required to do a desk check are the same as those needed to develop the product.
2. The producer does the check so there is no preparation time, no meetings to schedule, and no communication issues.
3. The desk check is a redundant check of the product's requirements.
• The disadvantages of a desk check are:
1. It is often difficult for an individual to identify defects in his or her own work.
2. It will probably NOT find any misinterpretations of the requirements or standards.
Use Case Outline
A typical use case outline covers: Transaction, Actors, Pre-Conditions, Inputs, Steps, Wrap-Up, and Related Use Cases.
Use Cases
• A use case is a tool that defines a system requirement from an external (user) perspective.
• It defines an atomic business event as:
1. A set of one or more business conditions
2. A series of sequenced activities
3. A set of inputs
4. A set of one or more measurable outputs.
• The business conditions define the business event, the sources for the event, and the state of the system prior to the event.
• The series of activities describe the steps taken by the user to accomplish the business event.
• The set of inputs defines the data entered into the system to accomplish the event.
• The set of one or more measurable outputs describes the transactions performed by the system and the behaviors that the system must support.
• Use cases are used in many different ways during system development.
1. Designers use this tool to remove ambiguity from models of system functionality, behavior, and interfaces.
2. Testers use this tool to explore the software requirements and verify them against atomic business events.
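To make the structure concrete, a use case can be captured as a simple record. The sketch below is one possible way to model the outline above in Python; the field names and the sample cash-withdrawal content are assumptions made purely for illustration.

from dataclasses import dataclass, field

@dataclass
class UseCase:
    """A minimal record of an atomic business event, mirroring the outline above."""
    name: str
    actors: list[str]
    preconditions: list[str]          # business conditions / system state before the event
    steps: list[str]                  # sequenced activities taken by the user
    inputs: list[str]                 # data entered to accomplish the event
    outputs: list[str]                # measurable, observable results
    related_use_cases: list[str] = field(default_factory=list)

# Hypothetical example content for illustration only.
withdraw_cash = UseCase(
    name="Withdraw cash",
    actors=["Account holder"],
    preconditions=["Card is valid", "Account balance covers the amount"],
    steps=["Insert card", "Enter PIN", "Select amount", "Take cash and card"],
    inputs=["PIN", "Withdrawal amount"],
    outputs=["Cash dispensed", "Account debited", "Receipt printed"],
)

if __name__ == "__main__":
    print(withdraw_cash.name, "-", len(withdraw_cash.steps), "steps")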
If you verify nothing else in the entire system, DO verify the requirements!
Sample Requirements Checklist
1. Ambiguous (Do requirements contain words that can be misinterpreted by readers? Are complex subjects displayed graphically? Are assumptions stated explicitly? Do we know who/what is doing the acting at all times?)
2. Inconsistent (Do they support the objectives of preceding phases?)
3. Contradictory (Does any requirement disagree with the statements, measures, or
intention of any other requirement?)
4. Incomplete (Are all user and system objectives clearly specified? Do the requirements
define all the information displayed to the user? Do they address all error conditions and
required responses? Data integrity mechanisms? Transaction authorization? Precision of
calculations? System maintenance requirements? Recovery and reconstruction? etc.)
5. Achievable (Can each requirement be met? Can each be sustained for the life of the
product? Are these requirements feasible given the project constraints?)
6. Measurable (Is each requirement quantified and measurable?)
7. Traceable (Are the requirements arranged in such a way that they can be used as a
source for all subsequent software products?)
8. Changeable (Can these requirements be maintained the way that they are written?)
Suggestion: When reviewing requirements, watch out for statements that contain adjectives and adverbs rather than measurable items. For example, compare:
- “The system must be VERY fast under all normal circumstances.” OR
- “The system must deliver 3 second response time to ad hoc queries and 1 second
response time to pre-defined queries. Response time refers to the period of time
between the start of a query and the appearance of the last line of output.”
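A measurable requirement like the second one can later be checked by an automated test, whereas the first cannot. The sketch below assumes a hypothetical run_ad_hoc_query function standing in for the system under test, and takes the 3-second limit from the example requirement.

import time

# Hypothetical stand-in for the system under test; in a real project this
# would issue the ad hoc query against the application.
def run_ad_hoc_query(query):
    time.sleep(0.1)  # simulate work
    return ["row 1", "row 2"]

def test_ad_hoc_query_response_time():
    """Requirement: ad hoc queries must complete within 3 seconds."""
    start = time.perf_counter()
    run_ad_hoc_query("SELECT * FROM orders")
    elapsed = time.perf_counter() - start
    assert elapsed <= 3.0, f"ad hoc query took {elapsed:.2f}s, limit is 3.0s"

if __name__ == "__main__":
    test_ad_hoc_query_response_time()
    print("response-time requirement met")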
Verifying Requirements
• When we verify requirements we are checking that:
1. we understand the users' needs before design begins.
2. we can effectively evaluate the product in terms of these needs.
• It is critical to verify the requirements document for several reasons:
1. This is THE document that records the users' expectations.
2. It is the foundation of the whole software product; all work products are indirect products of this document.
3. All work products, including testware, must trace back to specific requirements.
4. The majority of software defects originate with errors in this document.
• There are several ways to verify a requirements specification:
1. Conduct peer reviews or walk-throughs.
2. Review the document using a requirements checklist.
3. Compare it to the concept document or a specification for a competitive product.
4. Issue it to people and ask them to describe the system.
5. Develop simple use cases and see whether the document addresses all of the issues raised.
The following are ideas for creating a verification checklist for a functional design.
Omissions
1. Is every requirement represented in the design? Are requirements referenced?
2. Are all the screens, reports, commands, inputs, and responses included?
3. Are there enough examples and diagrams?
4. Where necessary, are the reasons for design choices explained?
Errors
1. Are there mistakes in the translation of the user requirements?
2. Are there mistakes in definitions presented?
3. Are there errors made in calculations?
4. Are there any features that are NOT included in the requirements specification?
Ambiguity
1. Can general statements be interpreted in multiple ways?
2. Are the verbs used the best ones to explain the intended behavior?
3. Are pronouns used properly, or can their subject be misunderstood?
4. Is it always clear which is acting, the program or the user?
Verifying the Functional Design
• Software design translates product requirements into specifications of:
1. How the system will be built - its internal design.
2. How the system will function - its functional or external design.
• The functional design describes the product's interface and behavior from the perspective of the USER.
• When testers speak of verifying the software design, they are almost always referring to verifying the functional design.
• When verifying the design, we look for:
1. omissions
2. errors
3. ambiguity.
• The functional design should be presented concisely, using language and diagrams that can be readily understood by both users and developers. Even though technical reviews are usually carried out by developers, testers can learn a lot in these sessions. We might also pick up useful information by reading the internal design documents. In these we should see things like product limits, possible failure conditions, boundary conditions, and other white box testing considerations.
Note: all these terms are discussed in the next few chapters.
A complete code checklist is too long to present here. Most checklists cover the common kinds of programming errors, including mistakes in:
i. data referencing
ii. logic
iii. computations
iv. function interfaces
v. external functions
vi. standards for naming, comments, etc.
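As an illustration of what such a checklist targets, the short invented routine below contains two of the error kinds listed above, the sort a code inspection would be expected to catch.

def average(values):
    """Return the arithmetic mean of a list of numbers."""
    total = 0
    for i in range(1, len(values)):   # logic error: skips the first element
        total += values[i]
    return total / len(values)        # data-referencing error: empty list divides by zero

# An inspection against the checklist would note:
#   ii. logic     - the loop should start at index 0 (or simply use sum(values))
#   i.  data refs - an empty list raises ZeroDivisionError; the spec should say
#                   what to return or raise in that case
if __name__ == "__main__":
    print(average([2, 4, 6]))   # prints 3.333..., not the expected 4.0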
Verifying the Internal Design and the Code
• Other software products that are usually verified include the internal design and the program code.
• These products require technical knowledge to verify, so these reviews are usually carried out by developers.
• Components of the internal design are: data structures, data flows, program structure and program logic.
1. When verifying an internal design, we look primarily for omissions and errors.
2. IEEE Recommended Practice for Software Design
• The code is the application itself; it is often verified in desk checks, walk-throughs, AND inspections.
• Code inspections rely on checklists developed using standards documents and common errors.
• Code walk-throughs simulate the running application using simple scenarios that get participants thinking about, and questioning, program assumptions and implementation.
Verifying Testware
• Testware, like software, should be verified as it is built.
1. For testware, the 'requirements specification' is the test plan.
2. Test plans and tests should be verified before they are used.
• Test plans should be checked for the following:
1. A clear and feasible testing strategy
2. A functional description of what is to be tested and to what degree
3. Resource requirements and schedule
4. Testing dependencies
5. Descriptions of tests or test suites
6. Realistic completion criteria.
• Tests should be checked for the following:
1. Author, objectives, test subject, and trace information
2. Configuration, resource, and setup requirements
3. Sequenced test steps
4. Expected results that correspond with source documents
5. Evaluation and disposition criteria.
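One way to make those items checkable is to record them directly in the test itself. The sketch below shows a unittest-style test whose docstring carries the author, objective, and trace information; the login function and the requirement ID are hypothetical and exist only for this example.

import unittest

# Hypothetical function under test.
def login(username, password):
    return username == "admin" and password == "secret"

class LoginTests(unittest.TestCase):
    """
    Author:       J. Tester
    Objective:    Verify that valid credentials are accepted and invalid ones rejected.
    Test subject: login()
    Traces to:    requirement REQ-AUTH-01 (hypothetical)
    Setup:        none beyond the defaults in this module
    """

    def test_valid_credentials_accepted(self):
        # Step 1: submit valid credentials. Expected result: login succeeds.
        self.assertTrue(login("admin", "secret"))

    def test_invalid_credentials_rejected(self):
        # Step 2: submit an incorrect password. Expected result: login fails.
        self.assertFalse(login("admin", "wrong"))

if __name__ == "__main__":
    unittest.main()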
Verifying User Documentation
• User documentation, when well written, improves user satisfaction and lowers customer support costs.
• We verify user documentation looking for problems of:
1. Omission, such as missing features or incomplete explanations.
2. Accuracy, ranging from typos to incorrect commands, diagrams, or references.
3. Clarity, arising from confusing or ambiguous discussion or examples.
4. Organization problems that make the document less usable.
• Use requirements and functional specifications (and any other information you have) to verify the document(s).
• Check every explicit and implicit fact presented.
• Enter every keystroke in every example and try every suggestion provided.
• Check every screen diagram against the working program.
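Where the documentation includes code or command examples, part of this checking can be automated. The sketch below uses Python's standard doctest module to execute the examples embedded in a docstring exactly as a reader would type them; the temperature-conversion function and its documented session are invented for this example.

import doctest

def to_fahrenheit(celsius):
    """Convert a Celsius temperature to Fahrenheit.

    The user guide shows this example session:

    >>> to_fahrenheit(100)
    212.0
    >>> to_fahrenheit(0)
    32.0
    """
    return celsius * 9 / 5 + 32

if __name__ == "__main__":
    # Run every documented example and compare the actual output with
    # what the documentation promises.
    failed, attempted = doctest.testmod(verbose=True)
    print(f"{attempted} documented examples checked, {failed} failed")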
The Code Review Log is used when reviewing project code. It can be employed regardless of the verification technique selected (Formal Inspection, Walk-Through, Peer Review or Desk Check). It provides a record of defects found during the Code Review.
At the top of the Code Review Log enter the following:
• Project name
• Release number
• Date of review
• Component being reviewed
• Module being reviewed
• Module version number
• The names of all review participants next to their role
In the table, record each defect found during the review. Space is provided to enter the
following information for each defect:
• Page #
• Line #
• Description of defect
• Category
• Type of defect
• Severity level
Sample Categories follow:
• Extra
• Incorrect
• Missing
• Suggestion/Enhancement
Sample Types follow:
• Data
• Documentation
• Functionality
• Interface
• Logic
• Performance
• Standard
• Other
The Code Review Summary Report should be used in conjunction with the Code Review Log. The Code Review Log lists the defects found during the review, and the Code Review Summary Report summarizes the types of defects found by category and severity. It also documents the outcome of the review (pass or fail) and captures metrics such as the time spent on the review.
The information at the top of the Code Review Summary Report is the same as in the Code Review Log, with the addition of a "Prep Time" column. In the "Prep Time" column, enter the number of hours each participant spent preparing for the review. In the "Defect types by category and severity" table, tally the number of defects of each type per category and severity level. In the "Inspection Summary" table, enter the information requested in the right column. At the bottom of the form, check whether the module passed or failed the review. If the module failed, enter:
• the estimated number of hours to fix the defects
• the date the fixes should be completed
• the date of the next review
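The log and the summary can also be kept in a lightweight structured form. The sketch below models one defect per log entry and produces the category-by-severity tally that the summary report asks for; the field values are invented for illustration.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Defect:
    """One row of the Code Review Log."""
    page: int
    line: int
    description: str
    category: str     # Extra, Incorrect, Missing, Suggestion/Enhancement
    defect_type: str  # Data, Documentation, Functionality, Interface, Logic, ...
    severity: str     # e.g. Major / Minor

# Hypothetical findings from a single review session.
log = [
    Defect(3, 42, "Loop skips first array element", "Incorrect", "Logic", "Major"),
    Defect(5, 10, "Missing null check on input record", "Missing", "Data", "Major"),
    Defect(7, 88, "Variable name violates naming standard", "Incorrect", "Standard", "Minor"),
]

# Tally for the Code Review Summary Report: defects by category and severity.
tally = Counter((d.category, d.severity) for d in log)
for (category, severity), count in sorted(tally.items()):
    print(f"{category:12} {severity:6} {count}")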
The Review Tracking Sheet is used when reviewing project documents. It can be employed regardless of the verification technique selected (Formal Inspection, Walk-Through, Peer Review or Desk Check). It provides a record of defects found during the document review. Enter the following Document Information:
• Project name
• Release number
• Title of the document being reviewed
• Document Number
• Document Version number
• Document Date
• Date the document will be distributed for review
• Date when the document must be reviewed OR
• Meeting review date
Enter the following Author Information:
• Name(s)
• Group (e.g., Development, Test, QA, Project Management, etc.)
• Location (e.g., Address, Building #, Floor #, Room #, Cube/Office #)
• Phone #
Enter the following Participant Information:
• Name of participant next to corresponding role
• Group (e.g., Development, Test, QA, Project Management, etc.)
• Prep Time - Amount of time spent preparing for the review
• Rating (use the following scale)
1. Approved
2. Approved with optional comments
3. Approved after required comments are incorporated
4. Not Approved. Another review is required
Review Tracking Sheet (form layout)
Document Information: Project, Release, Document Title, Document Number, Document Version, Document Date, Document Distribution Date, Must Be Reviewed by Date, Meeting Review Date (if applicable)
Author Information: Name, Group, Location, Phone
Participant Information (one row each for Facilitator, Reader, Recorder, and Reviewer): Name, Role, Group, Prep Time, Rating, Initial & Date
Notes
Enter the following Document Comments:
• A comment about the document under review
• The name of the person who submitted the comment
• Note whether the comment is optional or required
• The Status column is for use after the review. The author can use this column to:
1. Specify whether the comment has been incorporated into the document.
2. Note any defect numbers opened as a result of the comment.
Enter the following Action Item information:
• Description of the action item
• Name of person responsible for resolving the action item
• The date the action item is required to be resolved
• The status of the action item (e.g., open, closed, etc.)