What makes a good test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.
What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.
What makes a good QA or Test manager?
A good QA, test, or QA/Test (combined) manager should:
• be familiar with the software development process
• be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems)
• be able to promote teamwork to increase productivity
• be able to promote cooperation between software, test, and QA engineers
• have the diplomatic skills needed to promote improvements in QA processes
• have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to
• have people judgement skills for hiring and keeping skilled personnel
• be able to communicate with technical and non-technical people, engineers, managers, and customers.
• be able to run meetings and keep them focused
What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information. Change management for documentation should be used if possible.
What's the big deal about 'requirements'?
One of the most reliable methods of ensuring problems, or failure, in a complex software project is to have poorly documented requirements specifications. Requirements are the details describing an application's externally-perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would be something like 'the user must enter their previously-assigned password to access the application'. Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different methods are available depending on the particular project. Many books are available that describe various approaches to this task. (See the Bookstore section's 'Software Requirements Engineering' category for books on Software Requirements.)
Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house or external, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if their expectations aren't met should be included if possible.
Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements are spelled out in a document with statements such as 'The product shall.....'. 'Design' specifications should not be confused with 'requirements'; design specifications should be traceable back to the requirements.
In some organizations requirements may end up in high level project plans, functional specification documents, in design documents, or in other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.
'Agile' methods such as XP use methods requiring close interaction and cooperation between programmers and customers/end-users to iteratively develop requirements. The programmer uses 'test-first' development to create automated unit test code before the application code is written; this test code essentially embodies the requirements.
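As an illustration of how a testable requirement (such as the password example above) can be embodied in automated unit test code, here is a minimal sketch in Python; the authenticate() function, its behavior, and the sample data are assumptions invented for this example, not part of any particular application.

    import unittest

    # Hypothetical function under test; its name and behavior are
    # assumptions for illustration only.
    def authenticate(user_db, username, password):
        """Return True only if the previously-assigned password matches."""
        return user_db.get(username) == password

    class TestPasswordRequirement(unittest.TestCase):
        """Embodies the requirement: 'the user must enter their
        previously-assigned password to access the application'."""

        def setUp(self):
            self.user_db = {"alice": "s3cret"}

        def test_correct_password_grants_access(self):
            self.assertTrue(authenticate(self.user_db, "alice", "s3cret"))

        def test_wrong_password_denies_access(self):
            self.assertFalse(authenticate(self.user_db, "alice", "wrong"))

        def test_unknown_user_denies_access(self):
            self.assertFalse(authenticate(self.user_db, "bob", "s3cret"))

    if __name__ == "__main__":
        unittest.main()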
What steps are needed to develop and run software tests?
The following are some of the steps to consider:
• Obtain requirements, functional design, and internal design specifications and other necessary documents
• Obtain budget and schedule requirements
• Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
• Identify the application's higher-risk aspects, set priorities, and determine scope and limitations of tests
• Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
• Determine test environment requirements (hardware, software, communications, etc.)
• Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
• Determine test input data requirements
• Identify tasks, those responsible for tasks, and labor requirements
• Set schedule estimates, timelines, milestones
• Determine input equivalence classes, boundary value analyses, error classes (a sketch follows this list)
• Prepare test plan document and have needed reviews/approvals
• Write test cases
• Have needed reviews/inspections/approvals of test cases
• Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
• Obtain and install software releases
• Perform tests
• Evaluate and report results
• Track problems/bugs and fixes
• Retest as needed
• Maintain and update test plans, test cases, test environment, and testware through life cycle
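As referenced in the equivalence class/boundary value step above, the following is a minimal sketch (assuming, purely for illustration, an input field that accepts integers from 1 to 100) of how equivalence classes and boundary values might be enumerated as test input data:

    # Hedged sketch: assume a field that accepts integers in the range 1-100.
    # The range and the validate() stub are assumptions for illustration only.
    VALID_MIN, VALID_MAX = 1, 100

    def validate(value):
        """Stub for the input-validation logic under test."""
        return VALID_MIN <= value <= VALID_MAX

    # Equivalence classes: one representative value per class is usually enough.
    equivalence_classes = {
        "below valid range": -5,     # invalid class
        "within valid range": 50,    # valid class
        "above valid range": 500,    # invalid class
    }

    # Boundary values: test at, just below, and just above each boundary.
    boundary_values = [VALID_MIN - 1, VALID_MIN, VALID_MIN + 1,
                       VALID_MAX - 1, VALID_MAX, VALID_MAX + 1]

    if __name__ == "__main__":
        for name, value in equivalence_classes.items():
            print(f"{name}: validate({value}) -> {validate(value)}")
        for value in boundary_values:
            print(f"boundary: validate({value}) -> {validate(value)}")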
What's a 'test plan'?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
• Test environment validity analysis - differences between the test and production systems and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
What's a 'test case'?
• A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
• Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
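The particulars listed above might be captured in a simple structured form such as the following sketch; all field names and values here are hypothetical examples, not taken from any real project.

    # Minimal sketch of a structured test case record; all field values are
    # hypothetical examples invented for illustration.
    test_case = {
        "id": "TC-042",
        "name": "Login with valid credentials",
        "objective": "Verify that a registered user can log in",
        "preconditions": "User 'alice' exists with a known password",
        "input_data": {"username": "alice", "password": "s3cret"},
        "steps": [
            "Open the login screen",
            "Enter the username and password",
            "Click the 'Log in' button",
        ],
        "expected_result": "The user's home page is displayed",
    }

    def print_test_case(tc):
        """Render the test case in a reviewer-friendly form."""
        for key, value in tc.items():
            print(f"{key}: {value}")

    if __name__ == "__main__":
        print_test_case(test_case)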
What should be done after a bug is found?
The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:
• Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
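A minimal sketch of how some of the items above could be captured as a structured record follows; the field names and example values are illustrative assumptions, and a real problem-tracking tool would define its own schema.

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Hedged sketch of a bug record covering a subset of the items listed above.
    # Field names and the example values are assumptions for illustration only.
    @dataclass
    class BugReport:
        bug_id: str
        status: str                      # e.g. 'New', 'Released for Retest'
        application: str
        version: str
        summary: str                     # one-line description
        description: str
        severity: int                    # 1 (critical) to 5 (low)
        reproducible: bool
        steps_to_reproduce: List[str] = field(default_factory=list)
        assigned_to: Optional[str] = None
        fix_description: Optional[str] = None
        retest_result: Optional[str] = None

    if __name__ == "__main__":
        bug = BugReport(
            bug_id="BUG-1234",
            status="New",
            application="OrderEntry",
            version="2.1.0",
            summary="Crash when saving an empty order",
            description="Clicking 'Save' with no line items raises an unhandled error.",
            severity=2,
            reproducible=True,
            steps_to_reproduce=["Open a new order", "Click 'Save' immediately"],
        )
        print(bug)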
What is 'configuration management'?
Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes. (See the 'Tools' section for web resources with listings of configuration management tools. Also see the Bookstore section's 'Configuration Management' category for useful books with more information.)
What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.
How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends
What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
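One informal way to apply the considerations above is a simple risk score (likelihood times impact) used to rank where testing effort should go first. The following sketch shows the idea; the feature names and 1-5 ratings are entirely invented for illustration.

    # Hedged sketch: rank features by a simple risk score = likelihood x impact.
    # Feature names and ratings (1 = low, 5 = high) are invented for illustration.
    features = [
        {"name": "checkout/payment", "likelihood": 4, "impact": 5},
        {"name": "search",           "likelihood": 3, "impact": 3},
        {"name": "help pages",       "likelihood": 2, "impact": 1},
    ]

    for f in features:
        f["risk"] = f["likelihood"] * f["impact"]

    # Focus testing on the highest-risk items first.
    for f in sorted(features, key=lambda f: f["risk"], reverse=True):
        print(f"{f['name']}: risk score {f['risk']}")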
What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the same considerations as described previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.
What can be done if requirements are changing continuously?
A common problem and a major headache.
• Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.
• It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.
• If the code is well-commented and well-documented this makes changes easier for the developers.
• Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
• The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
• Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
• Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
• Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.
• Balance the effort put into setting up automated testing against the expected effort required to redo the tests to deal with changes.
• Try to design some flexibility into automated test scripts (see the data-driven sketch after this list).
• Focus initial automated testing on application aspects that are most likely to remain unchanged.
• Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
• Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans)
• Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).
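One common way to build flexibility into automated test scripts, as mentioned in the list above, is to keep the test data separate from the test logic so that many requirement changes only require editing data. The following is a hedged sketch; the apply_discount() function and its discount rule are assumptions for illustration only.

    # Hedged sketch of a data-driven test: the expected behavior lives in a data
    # table, so a changed requirement usually means editing one row, not the code.
    # The apply_discount() function and its rule are invented for illustration.
    def apply_discount(order_total):
        """Assumed rule: 10% discount on orders of 100 or more."""
        return order_total * 0.9 if order_total >= 100 else order_total

    # (order_total, expected_result) pairs; update these rows when requirements change.
    test_table = [
        (50, 50),
        (99.99, 99.99),
        (100, 90.0),
        (250, 225.0),
    ]

    if __name__ == "__main__":
        for order_total, expected in test_table:
            actual = apply_discount(order_total)
            result = "PASS" if abs(actual - expected) < 1e-9 else "FAIL"
            print(f"{result}: apply_discount({order_total}) = {actual}, expected {expected}")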
What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.
How can Software QA processes be implemented without stifling productivity?
By implementing QA processes slowly over time, using consensus to reach agreement on processes, and adjusting and experimenting as an organization grows and matures, productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings, and promote training as part of the QA process. However, no one - especially talented technical types - likes rules or bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug-fixing and calming of irate customers.
What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than:
• Hire good people
• Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer
• Everyone in the organization should be clear on what 'quality' means to the customer
How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing. (See the 'Tools' section for web resources with listings that include these kinds of test tools.)
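As a very rough illustration of the load/stress idea (not a substitute for a commercial load testing tool), the following sketch issues concurrent HTTP requests against a test server and records response times; the URL, concurrency level, and request count are placeholders.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # Hedged sketch of a tiny load test. The URL, concurrency, and request count
    # are placeholders; a real load test would use a dedicated tool.
    TEST_URL = "http://localhost:8080/health"   # placeholder test-environment URL
    CONCURRENT_CLIENTS = 10
    REQUESTS_PER_CLIENT = 20

    def timed_request(_):
        start = time.time()
        try:
            with urlopen(TEST_URL, timeout=10) as response:
                response.read()
            ok = True
        except Exception:
            ok = False
        return ok, time.time() - start

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CONCURRENT_CLIENTS) as pool:
            results = list(pool.map(timed_request,
                                    range(CONCURRENT_CLIENTS * REQUESTS_PER_CLIENT)))
        times = [t for ok, t in results if ok]
        failures = sum(1 for ok, _ in results if not ok)
        if times:
            print(f"requests: {len(results)}, failures: {failures}, "
                  f"avg: {sum(times)/len(times):.3f}s, max: {max(times):.3f}s")
        else:
            print(f"all {len(results)} requests failed")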
How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
• What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times). What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
• Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
• What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
• Will down time for server and content maintenance/upgrades be allowed? How much?
• What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested?
• How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
• Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
• How will internal and external links be validated and updated? How often? (See the link-checking sketch after this list.)
• Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
• How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
• How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
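As referenced in the link-validation item above, here is a hedged sketch of a simple one-page link checker using only the Python standard library; the page URL is a placeholder, and a real site would also need handling for authentication, robots exclusion, and request throttling.

    from html.parser import HTMLParser
    from urllib.request import urlopen
    from urllib.error import URLError, HTTPError
    from urllib.parse import urljoin

    # Hedged sketch of a one-page link checker; the page URL is a placeholder.
    PAGE_URL = "http://localhost:8080/index.html"

    class LinkCollector(HTMLParser):
        """Collect href attributes from anchor tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_links(page_url):
        with urlopen(page_url, timeout=10) as response:
            html = response.read().decode("utf-8", errors="replace")
        collector = LinkCollector()
        collector.feed(html)
        for link in collector.links:
            target = urljoin(page_url, link)
            try:
                urlopen(target, timeout=10).close()
                print(f"OK      {target}")
            except (HTTPError, URLError) as err:
                print(f"BROKEN  {target}  ({err})")

    if __name__ == "__main__":
        check_links(PAGE_URL)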
Some sources of site security information include the Usenet newsgroup 'comp.security.announce' and links concerning web site security in the 'Other Resources' section.
Some usability guidelines to consider - these are subjective and may or may not apply to a given situation (Note: more information on usability testing issues can be found in articles about web site usability in the 'Other Resources' section):
• Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
• The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.
• Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.
• All pages should have links external to the page; there should be no dead-end pages.
• The page owner, revision date, and a link to a contact person or organization should be included on each page.
Many new web site test tools have appeared in recent years and more than 280 of them are listed in the 'Web Test Tools' section.
How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well-designed this can simplify test design.
What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck who described the approach in his book 'Extreme Programming Explained' (See the Softwareqatest.com Books page.). Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first - before the application is developed. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected. For more info see the XP-related listings in the Softwareqatest.com 'Other Resources' section.
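A minimal sketch of the 'test first' cycle described above follows; the fizz_buzz() function and its specification are invented for illustration, and in a real red/green cycle the test would be run (and fail) before the function exists.

    import unittest

    # Hedged sketch of the XP 'test first' idea: the test below is written
    # before fizz_buzz() is implemented. The function and its spec are
    # invented for illustration only.

    # Step 1 (red): write the test first; running it at this point would fail
    # because fizz_buzz() does not exist yet.
    class TestFizzBuzz(unittest.TestCase):
        def test_multiples_of_three(self):
            self.assertEqual(fizz_buzz(9), "Fizz")

        def test_other_numbers(self):
            self.assertEqual(fizz_buzz(7), "7")

    # Step 2 (green): write just enough code to make the test pass.
    def fizz_buzz(n):
        return "Fizz" if n % 3 == 0 else str(n)

    if __name__ == "__main__":
        unittest.main()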
What is 'Software Quality Assurance'?
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'. (See the Bookstore section's 'Software QA' category for a list of useful books on Software Quality Assurance.)
What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and evaluating the results (eg, 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'. (See the Bookstore section's 'Software Testing' category for a list of useful books on Software Testing.)
• Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.
What are some recent major computer system failures caused by software bugs?
• A major U.S. retailer was reportedly hit with a large government fine in October of 2003 due to web site errors that enabled customers to view one another's online orders.
• News stories in the fall of 2003 stated that a manufacturing company recalled all their transportation products in order to fix a software problem causing instability in certain circumstances. The company found and reported the bug itself and initiated the recall procedure in which a software upgrade fixed the problems.
• In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage company could proceed; the lawsuit reportedly involved claims that the company was not fixing system problems that sometimes resulted in failed stock trades, based on the experiences of 4 plaintiffs during an 8-month period. A previous lower court's ruling that "...six miscues out of more than 400 trades does not indicate negligence." was invalidated.
• In April of 2003 it was announced that the largest student loan company in the U.S. made a software error in calculating the monthly payments on 800,000 loans. Although borrowers were to be notified of an increase in their required payments, the company will still reportedly lose $8 million in interest. The error was uncovered when borrowers began reporting inconsistencies in their bills.
• News reports in February of 2003 revealed that the U.S. Treasury Department mailed 50,000 Social Security checks without any beneficiary names. A spokesperson indicated that the missing names were due to an error in a software change. Replacement checks were subsequently mailed out with the problem corrected, and recipients were then able to cash their Social Security checks.
• In March of 2002 it was reported that software bugs in Britain's national tax system resulted in more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems.
• A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the problem, and shared the information, that U.S. officials became aware of the problems.
• According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems with a large retirement plan management system. According to the reports, the client claimed that system deliveries were late, the software had excessive defects, and it caused other systems to crash.
• In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were started by altering the control system's date settings.
• News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn't work.
• In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district's CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors.
• In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.
• Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.
• In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.
• A small town in Illinois in the U.S. received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local power company to deal with Y2K software issues.
• In early 1999 a major computer game company recalled all copies of a popular new product due to software problems. The company made a public apology for releasing a product before it was ready.
Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied,
"I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords."
"My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors."
"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."
Why does software have bugs?
• miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
• software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well-engineered.
• programming errors - programmers, like anyone else, can make mistakes.
• changing requirements (whether documented or undocumented) - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control - see 'What can be done if requirements are changing continuously?' in Part 2 of the FAQ.
• time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
• egos - people prefer to say things like:
'no problem'
'piece of cake'
'I can whip that out in a few hours'
'it should be easy to update that old code'
instead of:
'that adds a lot of complexity and we could end up making a lot of mistakes'
'we have no idea if we can do that; we'll wing it'
'I can't estimate how long it will take, until I take a close look at it'
'we can't figure out what that old spaghetti code did in the first place'
If there are too many unrealistic 'no problem's', the result is bugs.
• poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
• software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.
How can new Software QA processes be introduced in an existing organization?
• A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
• Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.
• For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.
• The most value for effort will be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation and (b) design inspections and code inspections.
What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.
What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.
What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often hard for management to get serious about quality assurance?'. Their skill may have low visibility but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.
What kinds of testing should be considered?
• Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
• White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
• unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
• incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
• integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
• functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)
• system testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
• end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
• sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
• regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
• acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
• load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
• stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
• performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
• usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
• install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
• recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
• security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
• compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
• exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
• ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
• user acceptance testing - determining if software is satisfactory to an end-user or customer.
• comparison testing - comparing software weaknesses and strengths to competing products.
• alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
• beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
• mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources. (A minimal illustration follows this list.)
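As referenced in the mutation testing item above, the following sketch shows the basic idea by hand; all functions and test data are invented for illustration, and real mutation testing tools generate and run the mutants automatically.

    # Hedged sketch of the mutation testing idea: run the same tests against the
    # original code and a deliberately 'mutated' copy; if the tests still pass
    # against the mutant, the test data is too weak to detect that 'bug'.

    def is_adult(age):
        """Original implementation."""
        return age >= 18

    def is_adult_mutant(age):
        """Mutant: the boundary operator has been deliberately changed."""
        return age > 18

    def run_tests(func, cases):
        """Return True if every (input, expected) case passes for func."""
        return all(func(value) == expected for value, expected in cases)

    # A weak test set that misses the boundary value 18.
    weak_cases = [(10, False), (30, True)]
    # A stronger test set that includes the boundary value.
    strong_cases = weak_cases + [(18, True)]

    if __name__ == "__main__":
        for name, cases in [("weak", weak_cases), ("strong", strong_cases)]:
            original_ok = run_tests(is_adult, cases)
            mutant_killed = not run_tests(is_adult_mutant, cases)
            print(f"{name} test set: original passes={original_ok}, "
                  f"mutant detected={mutant_killed}")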
What are 5 common problems in the software development process?
• poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
• unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.
• inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.
• featuritis - requests to pile on new features after development is underway; extremely common.
• miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.
What are 5 common solutions to software development problems?
• solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements.
• realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
• adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing.
• stick to initial requirements as much as possible - be prepared to defend against changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect. This will provide them a higher comfort level with their requirements decisions and minimize changes later on.
• communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes early on so that customers' expectations are clarified.
What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.
What is 'good code'?
'Good code' is code that works, is bug free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards.
For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation:
• minimize or eliminate use of global variables.
• use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
• use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
• function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.
• function descriptions should be clearly spelled out in comments preceding a function's code.
• organize code for readability.
• use whitespace generously - vertically and horizontally
• each line of code should contain 70 characters max.
• one code statement per line.
• coding style should be consistent throughout a program (eg, use of brackets, indentations, naming conventions, etc.)
• in adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.
• no matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or if possible a separate flow chart and detailed program documentation.
• make extensive use of error handling procedures and status and error logging.
• for C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading.)
• for C++, keep class methods small, less than 50 lines of code per method is preferable.
• for C++, make liberal use of exception handlers
What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. (See further discussion of functional and internal design in 'What's the big deal about requirements?' in FAQ #2.) For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help; some common rules-of-thumb include:
• the program should act in a way that least surprises the user
• it should always be evident to the user what can be done next and how to exit
• the program shouldn't let the users do something stupid without warning them.
What is SEI? CMM? ISO? IEEE? ANSI? Will it help?
• SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
• CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of organizational 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance.
• ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.ch/ for the latest information. In the U.S. the standards can be purchased via the ASQ web site at http://e-standards.asq.org/
• IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
• ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).
• Other software development process assessment methods besides CMM and ISO 9000 include SPICE, Trillium, TickIT, and Bootstrap.
What is the 'software life cycle'?
The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.
Will automated testing tools make testing easier?
• Possibly. For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or ongoing long-term projects, they can be valuable.
• A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc., the application might then be retested by just 'playing back' the 'recorded' actions, and comparing the logging results to check effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms. (A small sketch of this playback-and-compare idea appears after the tool list below.)
• Other automated tools can include:
code analyzers - monitor code complexity, adherence to standards, etc.
coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
memory analyzers - such as bounds-checkers and leak detectors.
load/performance test tools - for testing client/server and web applications under various load
levels.
web test tools - to check that links are valid, that HTML code usage is correct, that client-side and server-side programs work, and that a web site's interactions are secure. (A small link-checking sketch also appears after this list.)
other tools - for test case management, documentation management, bug reporting, and configuration management.
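To make the record/playback idea described above concrete, here is a minimal, tool-agnostic sketch in Python. The recorded-step format, the apply_action() callback, and the baseline file are illustrative assumptions, not the output of any particular commercial tool.

```python
# A minimal, tool-agnostic sketch of the 'record/playback' idea: replay a list
# of recorded UI actions and diff the observed results against a baseline log.
# The step format, apply_action() callback, and baseline file are hypothetical.

import json

def play_back(recorded_steps, apply_action):
    """Replay recorded actions and collect the observed results."""
    observed = []
    for step in recorded_steps:
        # Each step might look like {"action": "click", "target": "OK button"}.
        result = apply_action(step)
        observed.append({"step": step, "result": result})
    return observed

def compare_to_baseline(observed, baseline_path):
    """Diff the replayed results against the previously logged baseline."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    mismatches = [
        (new, old)
        for new, old in zip(observed, baseline)
        if new["result"] != old["result"]
    ]
    return mismatches  # an empty list suggests no regressions were detected
```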
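As an illustration of the 'links are valid' check performed by web test tools, here is a small sketch using only the Python standard library; the starting URL and the decision to follow only anchor tags are assumptions made for the example.

```python
# A minimal sketch of a link-validity check, using only the standard library.

from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect href values from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    """Fetch a page, extract its anchors, and report links that do not resolve."""
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    broken = []
    for href in collector.links:
        target = urljoin(page_url, href)
        try:
            urlopen(target)          # a 4xx/5xx response raises HTTPError
        except (HTTPError, URLError) as exc:
            broken.append((target, exc))
    return broken

# Example with a placeholder URL:
# for url, error in check_links("http://example.com/"):
#     print("broken link:", url, error)
```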
Monday, July 14, 2008
Software Testing Interview Questions
Q: What if the application has functionality that wasn't
in the requirements?
A: It may take serious effort to determine if an application has significant
unexpected or hidden functionality, which would indicate deeper problems in
the software development process. If the functionality isn't necessary to the
purpose of the application, it should be removed, as it may have unknown
impacts or dependencies that were not taken into account by the designer or the
customer.
If not removed, design information will be needed to determine added testing
needs or regression testing needs. Management should be made aware of any
significant added risks as a result of the unexpected functionality. If the
functionality only affects low-risk areas, such as minor improvements in the user
interface, it may not be a significant risk.
Q: How can software QA processes be implemented
without stifling productivity?
A: Implement QA processes slowly over time. Use consensus to reach
agreement on processes and adjust and experiment as an organization grows
and matures. Productivity will be improved instead of stifled. Problem prevention
will lessen the need for problem detection. Panics and burnout will decrease and
there will be improved focus and less wasted effort. At the same time, attempts
should be made to keep processes simple and efficient, minimize paperwork,
promote computer-based processes and automated tracking and reporting,
minimize time required in meetings and promote training as part of the QA
process. However, no one, especially talented technical types, likes bureaucracy,
and in the short run things may slow down a bit. A typical scenario would be that
more days of planning and development will be needed, but less time will be
required for late-night bug fixing and calming of irate customers.
Q: What if an organization is growing so fast that fixed
QA processes are impossible?
A: This is a common problem in the software industry, especially in new
technology areas. There is no easy solution in this situation, other than:
· Hire good people (i.e. hire Rob Davis);
· Ruthlessly prioritize quality issues and maintain focus on the customer;
· Everyone in the organization should be clear on what quality means to the customer.
Q: How is testing affected by object-oriented designs?
A: A well-engineered object-oriented design can make it easier to trace from
code to internal design to functional design to requirements. While there will be
little effect on black box testing (where an understanding of the internal design of
the application is unnecessary), white-box testing can be oriented to the
application's objects. If the application was well designed this can simplify test
design.
Q: Why do you recommend that we test during the design phase?
A: Because testing during the design phase can prevent defects later on. I recommend we verify three things:
1. Verify the design is good, efficient, compact, testable and maintainable.
2. Verify the design meets the requirements and is complete (specifies all relationships between modules, how to pass data, what happens in exceptional circumstances, the starting state of each module and how to guarantee the state of each module).
3. Verify the design incorporates enough memory, I/O devices and quick enough runtime for the final product.
Q: What is software quality assurance?
A: Software Quality Assurance (SWQA), when Rob Davis does it, is oriented to
*prevention*. It involves the entire software development process. Prevention means
monitoring and improving the process, making sure any agreed-upon standards and
procedures are followed, and ensuring problems are found and dealt with.
Software Testing, when performed by Rob Davis, is oriented to *detection*. Testing
involves operating a system or application under controlled conditions and evaluating
the results.
Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they are the combined responsibility of one group or individual. Also common
are project teams that include a mix of test engineers, testers and developers who work
closely together, with overall QA processes monitored by project managers. It depends
on what best fits your organization's size and business structure. Rob Davis can provide
QA and/or SWQA. This document details some aspects of how he can provide software
testing/QA services.
Q: What is quality assurance?
A: Quality Assurance ensures all parties concerned with the project adhere to the
process and procedures, standards and templates and test readiness reviews.
Rob Davis' QA service depends on the customers and projects. A lot will depend on team
leads or managers, feedback to developers and communications among customers,
managers, developers, test engineers and testers.
Q: Processes and procedures - why follow them?
A: Detailed and well-written processes and procedures ensure the correct steps are being executed to facilitate a successful completion of a task. They also ensure a
process is repeatable. Once Rob Davis has learned and reviewed a customer's business
processes and procedures, he will follow them. He will also recommend improvements
and/or additions.
Q: Standards and templates - what is supposed to be in a document?
A: All documents should be written to a certain standard and template. Standards and
templates maintain document uniformity. They also help in learning where information is
located, making it easier for a user to find what they want. Lastly, with standards and
templates, information will not be accidentally omitted from a document. Once Rob Davis
has learned and reviewed your standards and templates, he will use them. He will also
recommend improvements and/or additions.
Q: What are the different levels of testing?
A: Rob Davis has expertise in testing at all of the testing levels listed in these FAQs. At each test level, he documents the results. Each level of testing is considered either black box or white box testing.
Q: What is black box testing?
A: Black box testing is functional testing, not based on any knowledge of internal
software design or code. Black box testing is based on requirements and functionality.
Q: What is white box testing?
A: White box testing is based on knowledge of the internal logic of an application's code.
Tests are based on coverage of code statements, branches, paths and conditions.
Q: What is unit testing?
A: Unit testing is the first level of dynamic testing and is first the responsibility of
developers and then that of the test engineers. Unit testing is considered complete when
the expected test results are met or differences are explainable/acceptable.
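As a concrete illustration of developer-level unit testing, here is a minimal sketch using Python's built-in unittest module; the discount() function is a hypothetical unit under test, not part of any application discussed here.

```python
# A minimal unit test sketch: one small function, a few focused test cases.

import unittest

def discount(price, percent):
    """Return price reduced by the given percentage (hypothetical unit under test)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTests(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```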
Q: What is parallel/audit testing?
A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations
correctly.
Q: What is functional testing?
A: Functional testing is a black-box type of testing geared to the functional requirements of an
application. Test engineers should perform functional testing.
Q: What is usability testing?
A: Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends
on the targeted end-user or customer. User interviews, surveys, video recording of user
sessions and other techniques can be used. Test engineers are needed, because
programmers and developers are usually not appropriate as usability testers.
Q: What is incremental integration testing?
A: Incremental integration testing is continuous testing of an application as new
functionality is added. This may require that various aspects of an application's
functionality are independent enough to work separately, before all parts of the program
are completed, or that test drivers are developed as needed. This type of testing may be
performed by programmers, software engineers, or test engineers.
Q: What is integration testing?
A: Upon completion of unit testing, integration testing begins. Integration testing is black
box testing. The purpose of integration testing is to ensure distinct components of the
application still work in accordance with customer requirements. Test cases are developed
with the express purpose of exercising the interfaces between the components. This
activity is carried out by the test team. Integration testing is considered complete, when
actual results and expected results are either in line or differences are
explainable/acceptable based on client input.
Q: What is system testing?
A: System testing is black box testing, performed by the Test Team, and at the start of
the system testing the complete system is configured in a controlled environment. The
purpose of system testing is to validate an application's accuracy and completeness in
performing the functions as designed. System testing simulates real life scenarios that
occur in a "simulated real life" test environment and test all functions of the system that
are required in real life. System testing is deemed complete when actual results and
expected results are either in line or differences are explainable or acceptable, based on
client input.
Upon completion of integration testing, system testing is started. Before system testing,
all unit and integration test results are reviewed by SWQA to ensure all problems have
been resolved. For a higher level of testing it is important to understand unresolved
problems that originate at unit and integration test levels.
Q: What is end-to-end testing?
A: End-to-end testing is similar to system testing, the *macro* end of the test
scale; it involves testing a complete application in a situation that mimics real-life
use, such as interacting with a database, using network communication, or
interacting with other hardware, applications, or systems.
Q: What is regression testing?
A: The objective of regression testing is to ensure the software remains intact. A
baseline set of data and scripts is maintained and executed to verify that
changes introduced during the release have not "undone" any previous code.
Expected results from the baseline are compared to results of the software under
test. All discrepancies are highlighted and accounted for, before testing proceeds
to the next level.
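The baseline comparison at the heart of regression testing can be sketched roughly as follows; the CSV layout and file names are assumptions made for the example.

```python
# A rough sketch of comparing current test results against a baseline.
# The CSV columns (test_id, result) and file names are assumptions.

import csv

def load_results(path):
    """Load test results as a {test_id: result} mapping."""
    with open(path, newline="") as f:
        return {row["test_id"]: row["result"] for row in csv.DictReader(f)}

def regression_diff(baseline_path, current_path):
    """Return test IDs whose current results no longer match the baseline."""
    baseline = load_results(baseline_path)
    current = load_results(current_path)
    return sorted(
        test_id
        for test_id, expected in baseline.items()
        if current.get(test_id) != expected
    )

# Example with hypothetical files:
# for test_id in regression_diff("baseline_results.csv", "current_results.csv"):
#     print("regression suspected in", test_id)
```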
Q: What is sanity testing?
A: Sanity testing is cursory testing; it is performed whenever cursory testing
is sufficient to prove the application is functioning according to specifications.
This level of testing is a subset of regression testing. It normally includes a set of
core tests of basic GUI functionality to demonstrate connectivity to the database,
application servers, printers, etc.
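A sanity suite of the kind described above might amount to little more than connectivity probes against the core services. The sketch below assumes hypothetical host/port pairs; a real environment would substitute its own.

```python
# A minimal sketch of a sanity/smoke check: verify that the core services the
# application depends on are reachable before deeper testing starts.
# The host/port pairs are placeholders for a real test environment.

import socket

CORE_SERVICES = {
    "database": ("db.example.test", 5432),
    "application server": ("app.example.test", 8080),
    "print server": ("print.example.test", 631),
}

def sanity_check(services=CORE_SERVICES, timeout=3.0):
    """Return a {service: reachable?} map of basic connectivity results."""
    results = {}
    for name, (host, port) in services.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[name] = True
        except OSError:
            results[name] = False
    return results

# for service, ok in sanity_check().items():
#     print(service, "reachable" if ok else "NOT reachable")
```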
Q: What is performance testing?
A: Performance testing verifies loads, volumes and response times, as defined
by requirements. Although performance testing is a part of system testing, it can
be regarded as a distinct level of testing.
Q: What is load testing?
A: Load testing is testing an application under heavy loads, such as the testing of
a web site under a range of loads to determine at what point the system
response time will degrade or fail.
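A very small sketch of the load-testing idea above: drive a URL at increasing levels of concurrency and watch how the average response time changes. The target URL and load levels are placeholders, and a real load test would use a dedicated tool rather than this toy loop.

```python
# A toy load test: time requests at several concurrency levels to see where
# response time starts to degrade. URL and load levels are placeholders.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def timed_request(url):
    start = time.perf_counter()
    urlopen(url).read()
    return time.perf_counter() - start

def load_test(url, load_levels=(1, 5, 10, 25)):
    """Report average response time at each simulated load level."""
    for users in load_levels:
        with ThreadPoolExecutor(max_workers=users) as pool:
            timings = list(pool.map(timed_request, [url] * users))
        avg = sum(timings) / len(timings)
        print(f"{users:3d} concurrent requests: avg {avg:.3f}s")

# load_test("http://example.com/")  # hypothetical target
```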
Q: What is installation testing?
A: Installation testing is the testing of a full, partial, or upgrade install/uninstall
process. The installation test is conducted with the objective of demonstrating
production readiness. This test includes the inventory of configuration items,
performed by the application's System Administrator, the evaluation of data
readiness, and dynamic tests focused on basic system functionality. Following
installation testing, a sanity test is performed when necessary.
Q: What is security/penetration testing?
A: Security/penetration testing is testing how well the system is protected against
unauthorized internal or external access, or willful damage. This type of testing
usually requires sophisticated testing techniques.
Q: What is recovery/error testing?
A: Recovery/error testing is testing how well a system recovers from crashes,
hardware failures, or other catastrophic problems.
Q: What is compatibility testing?
A: Compatibility testing is testing how well software performs in a particular
hardware, software, operating system, or network environment.
Q: What is comparison testing?
A: Comparison testing is testing that compares software weaknesses and
strengths to those of competitors' products.
Q: What is acceptance testing?
A: Acceptance testing is black box testing that gives the client/customer/project
manager the opportunity to verify the system functionality and usability prior to
the system being released to production. The acceptance test is the
responsibility of the client/customer or project manager; however, it is conducted
with the full support of the project team. The test team also works with the
client/customer/project manager to develop the acceptance criteria.
Q: What is alpha testing?
A: Alpha testing is testing of an application when development is nearing
completion. Minor design changes can still be made as a result of alpha testing.
Alpha testing is typically performed by end-users or others, not programmers,
software engineers, or test engineers.
Q: What is beta testing?
A: Beta testing is testing an application when development and testing are
essentially completed and final bugs and problems need to be found before the
final release. Beta testing is typically performed by end-users or others, not
programmers, software engineers, or test engineers.
Q: What testing roles are standard on most testing
projects?
A: Depending on the organization, the following roles are more or less standard
on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA
Manager, System Administrator, Database Administrator, Technical Analyst, Test
Build Manager and Test Configuration Manager. Depending on the project, one
person may wear more than one hat. For instance, Test Engineers may also
wear the hat of Technical Analyst, Test Build Manager and Test Configuration
Manager.
Q: What is a Test/QA Team Lead?
A: The Test/QA Team Lead coordinates the testing activity, communicates
testing status to management and manages the test team.
Q: What is a Test Engineer?
A: A Test Engineer is an engineer who specializes in testing. Test engineers
create test cases, procedures, scripts and generate data. They execute test
procedures and scripts, analyze standards of measurements, evaluate results of
system/integration/regression testing. They also:
· Speed up the work of your development staff;
· Reduce your risk of legal liability;
· Give you the evidence that your software is correct and operates properly;
· Improve problem tracking and reporting;
· Maximize the value of your software;
· Maximize the value of the devices that use it;
· Assure the successful launch of your product by discovering bugs and design flaws, before users get discouraged, before shareholders lose their cool and before employees get bogged down;
· Help the work of your development staff, so the development team can devote its time to building up your product;
· Promote continual improvement;
· Provide documentation required by the FDA, FAA, other regulatory agencies and your customers;
· Save money by discovering defects 'early' in the design process, before failures occur in production or in the field;
· Save the reputation of your company by discovering bugs and design flaws before they damage that reputation.
Q: What is a Test Build Manager?
A: Test Build Managers deliver current software versions to the test environment,
install the application's software and apply software patches to both the
application and the operating system, and set up, maintain and back up test
environment hardware. Depending on the project, one person may wear more
than one hat. For instance, a Test Engineer may also wear the hat of a Test Build
Manager.
Q: What is a System Administrator?
A: Test Build Managers, System Administrators, Database Administrators deliver
current software versions to the test environment, install the application's
software and apply software patches to both the application and the operating
system, and set up, maintain and back up test environment hardware. Depending on
the project, one person may wear more than one hat. For instance, a Test
Engineer may also wear the hat of a System Administrator.
Q: What is a Database Administrator?
A: Database Administrators, Test Build Managers, and System Administrators
deliver current software versions to the test environment, install the application's
software and apply software patches to both the application and the operating
system, and set up, maintain and back up test environment hardware. Depending on
the project, one person may wear more than one hat. For instance, a Test
Engineer may also wear the hat of a Database Administrator.
Q: What is a Technical Analyst?
A: Technical Analysts perform test assessments and validate system/functional
test requirements. Depending on the project, one person may wear more than
one hat. For instance, Test Engineers may also wear the hat of a Technical
Analyst.
Q: What is a Test Configuration Manager?
A: Test Configuration Managers maintain test environments, scripts, software
and test data. Depending on the project, one person may wear more than one
hat. For instance, Test Engineers may also wear the hat of a Test Configuration
Manager.
Q: What is a test schedule?
A: The test schedule identifies all tasks required for a successful testing effort, lays out a schedule of all test activities and lists the resource requirements.
Q: What is software testing methodology?
A: One software testing methodology is a three-step process of:
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be used and molded to your organization's needs. Rob
Davis believes that using this methodology is important in the development and
ongoing maintenance of his customers' applications.
Q: What is the general testing process?
A: The general testing process is the creation of a test strategy (which
sometimes includes the creation of test cases), creation of a test plan/design
(which usually includes test cases and test procedures) and the execution of
tests.
Q: How do you create a test strategy?
A: The test strategy is a formal description of how a software product will be
tested. A test strategy is developed for all levels of testing, as required. The test
team analyzes the requirements, writes the test strategy and reviews the plan
with the project team. The test plan may include test cases, conditions, the test
environment, a list of related tasks, pass/fail criteria and risk assessment.
Inputs for this process:
· A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
· A description of roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
· Testing methodology. This is based on known standards.
· Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
· Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
· An approved and signed-off test strategy document and test plan, including test cases.
· Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.
Q: How do you create a test plan/design?
A: Test scenarios and/or cases are prepared by reviewing functional
requirements of the release and preparing logical groups of functions that can be
further broken into test procedures. Test procedures define test conditions, data
to be used for testing and expected results, including database updates, file
outputs and report results. Generally speaking:
· Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
· Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
· It is the test team who, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing.
· Test scenarios are executed through the use of test procedures or scripts.
· Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
· Test procedures or scripts include the specific data that will be used for testing the process or transaction.
· Test procedures or scripts may cover multiple test scenarios.
· Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope.
· Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
· Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
· A pre-test meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
Inputs for this process:
· Approved Test Strategy Document.
· Test tools, or automated test tools, if applicable.
· Previously developed scripts, if applicable.
· Test documentation problems uncovered as a result of testing.
· A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.
Outputs for this process:
· Approved documents of test scenarios, test cases, test conditions and test data.
· Reports of software design issues, given to software developers for correction.
Q: How do you execute tests?
A: Execution of tests is completed by following the test documents in a
methodical manner. As each test procedure is performed, an entry is recorded in
a test execution log to note the execution of the procedure and whether or not
the test procedure uncovered any defects. Checkpoint meetings are held
throughout the execution phase. Checkpoint meetings are held daily, if required,
to address and discuss testing issues, status and activities.
· The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.
· Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool.
· Proposed fixes are delivered to the testing environment, based on the severity of the problem. Fixes are regression tested and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA (SWQA) Manager and/or Test Team Lead.
· After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager's formal acceptance.
· The test team reviews test document problems identified during testing, and updates documents where appropriate.
Inputs for this process:
· Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
· Test tools, including automated test tools, if applicable.
· Developed scripts.
· Changes to the design, i.e. Change Request Documents.
· Test data.
· Availability of the test team and project team.
· General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.
· Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
· Test Readiness Document.
· Document Updates.
Outputs for this process:
· Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed off with revised testing deliverables.
· Changes to the code, also known as test fixes.
· Test document problems uncovered as a result of testing. Examples are Requirements Document and Design Document problems.
· Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
· Formal record of test incidents, usually part of problem tracking.
· Baselined package, also known as tested source and object code, ready for migration to the next level.
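One simple way to keep the test execution log described above is an append-only record per executed procedure. The sketch below assumes a CSV layout with illustrative field names rather than any prescribed format.

```python
# A simple sketch of a test execution log: one row per executed test
# procedure, noting whether it uncovered any defects. Field names are
# assumptions, not a prescribed format.

import csv
import os
from datetime import datetime

LOG_FIELDS = ["timestamp", "procedure_id", "tester", "result", "defect_ids"]

def log_execution(log_path, procedure_id, tester, result, defect_ids=()):
    """Append one test-procedure execution record to the log."""
    new_file = not os.path.exists(log_path) or os.path.getsize(log_path) == 0
    with open(log_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if new_file:
            writer.writeheader()                   # header on first use only
        writer.writerow({
            "timestamp": datetime.now().isoformat(timespec="seconds"),
            "procedure_id": procedure_id,
            "tester": tester,
            "result": result,                      # e.g. "pass" or "fail"
            "defect_ids": ";".join(defect_ids),    # bug-tracker references, if any
        })

# log_execution("test_execution_log.csv", "TP-042", "R. Davis", "fail", ["BUG-1301"])
```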
Software Testing
Software testing is the process of checking software, to verify that it satisfies its requirements and to detect errors.
Software testing is an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under test, with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding software bugs.
Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behaviour of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (S.Q.A.), which encompasses all business process areas, not just testing.
Over its existence, computer software has continued to grow in complexity and size. Every software product has a target audience. For example, the audience for video game software is completely different from banking software. Therefore, when an organization develops or otherwise invests in a software product, it presumably must assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment.
A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing was performed.
Scope
A primary purpose for testing is to detect software failures so that defects may be uncovered and corrected. This is a non-trivial pursuit. Testing cannot establish that a product functions properly under all conditions, but can only establish that it does not function properly under specific conditions. The scope of software testing often includes examination of code as well as execution of that code in various environments and conditions, and also examines quality aspects of the code: does it do what it is supposed to do and do what it needs to do? In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
Defects and failures
Software faults occur through the following process. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new hardware platform, alterations in source data, or interacting with different software. A single defect may result in a wide range of failure symptoms.
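A tiny illustration of the error-defect-failure chain: the hypothetical function below contains a defect that only becomes a visible failure for certain inputs.

```python
# The defect (dividing by a hard-coded 10 instead of len(values)) only
# produces a failure when the input list does not have exactly 10 elements.

def average(values):
    return sum(values) / 10   # defect: should be len(values)

print(average(list(range(10))))   # happens to give the right answer: 4.5
print(average([2, 4, 6]))         # failure: prints 1.2 instead of 4.0
```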
Input combinations and preconditions
A problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product. This means that the number of defects in a software product can be very large, and defects that occur infrequently are difficult to find in testing. More significantly, parafunctional dimensions of quality (how it is supposed to be versus what it is supposed to do), for example usability, scalability, performance, compatibility and reliability, can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.
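The infeasibility of exhaustive input testing is easy to make concrete; the field sizes below are purely illustrative.

```python
# Even a small form has an enormous number of input combinations.
# The field sizes here are illustrative, not drawn from any real application.

from math import prod

field_choices = {
    "country": 195,           # one of roughly 195 countries
    "age": 120,               # 0-119
    "account_type": 4,
    "free_text_length": 256,  # lengths alone, ignoring actual content
}

total = prod(field_choices.values())
print(f"{total:,} combinations for just {len(field_choices)} fields")
# 23,961,600 combinations, before considering preconditions or field content
```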
Static vs. dynamic testing
There are many approaches to software testing. Reviews, walkthroughs or inspections are considered as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing.
Software verification and validation
Software testing is used in association with verification and validation:
Verification: Have we built the software right (i.e., does it match the specification)?
Validation: Have we built the right software (i.e., is this what the customer wants)?
The software testing team
Software testing can be done by software testers. Until the 1950s the term "software tester" was used generally, but later testing was also seen as a separate profession. Corresponding to the different periods and goals of software testing, different roles have been established: test lead/manager, test designer, tester, test automator/automation developer, and test administrator.
Software Quality Assurance (SQA)
Though controversial, software testing may be viewed as an important part of the software quality assurance (SQA) process. In SQA, software process specialists and auditors take a broader view of software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the defect rate. What constitutes an acceptable defect rate depends on the nature of the software. An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than mission-critical software such as that used to control the functions of an airliner. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.
History
The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979.[8] Although his attention was on breakage testing, it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Dr. Dave Gelperin and Dr. William C. Hetzel classified in 1988 the phases and goals in software testing in the following stages:[9]
Until 1956 - Debugging oriented
1957-1978 - Demonstration oriented
1979-1982 - Destruction oriented
1983-1987 - Evaluation oriented
1988-2000 - Prevention oriented
Testing methods
Software testing methods are traditionally divided into black box testing and white box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.
Black box testing
Black box testing treats the software as a black box, without any knowledge of its internal behavior. It aims to test the functionality according to the requirements. Thus, the tester inputs data and only sees the output from the test object. This level of testing usually requires thorough test cases to be provided to the tester, who can then simply verify that for a given input, the output value (or behavior) is the same as the expected value specified in the test case. Black box testing methods include equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, and the use of a traceability matrix.
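For illustration, here is a minimal sketch (in Python) of boundary value analysis for a hypothetical rule that an input must be an integer between 1 and 100. The accepts() function is a stand-in for the system under test, not a real API; the point is the choice of test values on and just beyond each boundary, plus a representative from each partition.

def accepts(value):
    # Stand-in for the system under test (hypothetical rule: integer in [1, 100]).
    return isinstance(value, int) and 1 <= value <= 100

boundary_cases = {
    0: False,     # just below the lower boundary
    1: True,      # lower boundary
    2: True,      # just above the lower boundary
    50: True,     # representative of the valid partition
    99: True,     # just below the upper boundary
    100: True,    # upper boundary
    101: False,   # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"boundary case failed for {value}"
print("all boundary value cases passed")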
White box testing
White box testing, by contrast, is testing in which the tester has access to the internal data structures, code, and algorithms.
Types of white box testing
The following types of white box testing exist:
code coverage - creating tests to satisfy some criteria of code coverage. For example, the test designer can create tests to cause all statements in the program to be executed at least once.
mutation testing methods.
fault injection methods.
static testing - White box testing includes all static testing.
Code completeness evaluation
White box testing methods can also be used to evaluate the completeness of a test suite that was created with black box testing methods. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.
Two common forms of code coverage are:
function coverage, which reports on functions executed
and statement coverage, which reports on the number of lines executed to complete the test.
They both return a coverage metric, measured as a percentage.
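To make the idea behind statement coverage concrete, the following Python sketch traces which lines of a toy function execute while a small set of test cases runs, and reports them as a percentage of the function's executable source lines. This is only a rough demonstration of the concept; real projects would use a dedicated coverage tool, and the triangle_type() function here is a made-up example.

import inspect
import sys

def triangle_type(a, b, c):
    # Toy function under test (hypothetical example).
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def run_with_line_trace(func, tests):
    # Record which line numbers of func execute while the tests run.
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if frame.f_code is code and event == "line":
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for args, expected in tests:
            assert func(*args) == expected, (args, expected)
    finally:
        sys.settrace(None)
    return executed

tests = [((2, 2, 2), "equilateral"), ((2, 2, 3), "isosceles")]  # no scalene case on purpose
executed = run_with_line_trace(triangle_type, tests)
source, start = inspect.getsourcelines(triangle_type)
# Rough denominator: every source line of the function except the def line and comments.
candidate = {start + i for i, line in enumerate(source)
             if i > 0 and line.strip() and not line.strip().startswith("#")}
print("statement coverage: %.0f%%" % (100.0 * len(executed & candidate) / len(candidate)))

Because no scalene triangle is ever tested, the reported statement coverage stays below 100%, which is exactly the kind of gap this measure is meant to expose.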
Grey box testing
In recent years the term grey box testing has come into common usage. This involves having access to internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box level.
Manipulating input data and formatting output do not qualify as grey-box because the input and output are clearly outside of the black-box we are calling the software under test. This is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. Grey box testing may also include reverse engineering to determine, for instance, boundary values.
Non-functional software testing
Special methods exist to test non-functional aspects of software.
Performance testing checks to see if the software can handle large quantities of data or users.
Usability testing is needed to check if the user interface is easy to use and understand.
Security testing is essential for software which processes confidential data and to prevent system intrusion by hackers.
Internationalization and localization testing checks these aspects of the software; a pseudolocalization method can be used for this purpose, as sketched below.
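As one concrete example of the last point, the following Python sketch shows a simple pseudolocalization transform: replacing ASCII letters with accented look-alikes and padding the string makes hard-coded, untranslated, or easily truncated UI text stand out before real translations exist. The character mapping and expansion factor are illustrative choices, not a standard.

# Map a handful of ASCII vowels to accented look-alikes (illustrative, not exhaustive).
ACCENTED = str.maketrans("AEIOUaeiou", "ÀÉÎÖÛàéîöû")

def pseudolocalize(text, expansion=0.3):
    # Pad the string to simulate languages whose translations run longer than English,
    # and bracket it so truncation at the UI layer is easy to spot.
    padded = text + "·" * int(len(text) * expansion)
    return "[" + padded.translate(ACCENTED) + "]"

print(pseudolocalize("Save changes"))  # e.g. "[Sàvé chàngés···]"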
Testing process
A common practice is for software testing to be performed by an independent group of testers after the functionality is developed and before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and to continue it as a continuous process until the project finishes.
In counterpoint, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). Of course these tests fail initially, as they are expected to. Then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and are generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process).
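As a minimal sketch of the test-first idea using Python's unittest module (the parse_price() helper is a made-up example, not part of any real library): in a test-driven workflow the tests below would be written first, fail, and only pass once the function is implemented.

import unittest

def parse_price(text):
    # Toy implementation written after the tests: convert "$1,234.50" to a float.
    return float(text.replace("$", "").replace(",", ""))

class ParsePriceTest(unittest.TestCase):
    # In a test-first workflow these tests exist (and fail) before parse_price() does.
    def test_plain_number(self):
        self.assertEqual(parse_price("42"), 42.0)

    def test_currency_symbol_and_commas(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.50)

if __name__ == "__main__":
    unittest.main()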
Testing can be done on the following levels:
Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.
Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.
System testing tests a completely integrated system to verify that it meets its requirements.
System integration testing verifies that a system is integrated with any external or third-party systems defined in the system requirements.
Before shipping the final version of software, alpha and beta testing are often done additionally:
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.
Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to obtain feedback from the widest possible range of future users.
Finally, acceptance testing can be conducted by the end-user, customer, or client to validate whether or not to accept the product. Acceptance testing may be performed as part of the hand-off process between any two phases of development.
Regression testing
Main article: Regression testing
After modifying software, either for a change in functionality or to fix defects, a regression test re-runs previously passing tests on the modified software to ensure that the modifications haven't unintentionally caused a regression of previous functionality. Regression testing can be performed at any or all of the above test levels. These regression tests are often automated.
More specific forms of regression testing are known as sanity testing (quickly checking for bizarre behaviour) and smoke testing (checking for basic functionality).
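As a hedged sketch of how a smoke subset might be carved out of a larger automated regression suite with Python's unittest (the test names and checks are invented for illustration): a small, hand-picked suite of basic checks runs on every delivery, while the full suite runs less often.

import unittest

class CheckoutTests(unittest.TestCase):
    def test_smoke_cart_total(self):
        # Basic functionality check suitable for a smoke run after each delivery.
        self.assertEqual(sum([2, 3]), 5)

    def test_full_discount_rules(self):
        # Slower, more detailed check reserved for the full regression run.
        self.assertAlmostEqual(100 * 0.85, 85.0)

# Hand-pick the smoke subset; the full regression suite would load every test instead.
smoke_suite = unittest.TestSuite()
smoke_suite.addTest(CheckoutTests("test_smoke_cart_total"))

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(smoke_suite)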
Finding faults early
It is commonly believed that the earlier a defect is found, the cheaper it is to fix. The following table shows the relative cost of fixing a defect depending on the stage at which it was found. For example, if a problem in the requirements is found only post-release, it can cost 10-100 times more to fix than if the same fault had been found during the requirements review.
Time Introduced   Time Detected
                  Requirements   Architecture   Construction   System Test   Post-Release
Requirements      1              3              5-10           10            10-100
Architecture      -              1              10             15            25-100
Construction      -              -              1              10            10-25
Measuring software testing
Usually, quality is constrained to topics such as correctness, completeness, and security, but it can also include more technical requirements as described in the ISO 9126 standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of common software measures, often called "metrics", which are used to measure the state of the software or the adequacy of the testing.
Test case
A test case is a software testing document which consists of an event, action, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result. This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often the steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy), but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and the system configuration used to generate those results. These past results would usually be stored in a separate table.
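To make the structure above concrete, here is a hedged Python sketch of how such a test-case record might be represented in code; the field names are illustrative and not drawn from any particular tool or standard.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestCase:
    case_id: str                           # an optional field in the description above
    related_requirements: List[str]
    steps: List[str]                       # often kept in a shared test procedure instead
    input_data: str
    expected_result: str
    actual_result: Optional[str] = None    # recorded when the test is executed
    automated: bool = False

# Example record; in practice such records might live in a spreadsheet or database.
tc = TestCase(
    case_id="TC-001",
    related_requirements=["REQ-12"],
    steps=["Open the login page", "Enter the previously assigned password"],
    input_data="password=<assigned value>",
    expected_result="User is granted access to the application",
)
print(tc.case_id, "->", tc.expected_result)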
Test script
The test script is the combination of a test case, test procedure, and test data. Initially the term was derived from the product of work created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.
Test suite
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
Test plan
A test specification is called a test plan. The developers are made aware of which test plans will be executed, and this information is available to them. This makes the developers more cautious when developing their code and ensures that their code is not subjected to any surprise test cases or test plans.
Test harness
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
A sample testing cycle
Although variations exist between organizations, there is a typical cycle for testing:
Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests work.
Test planning: Test strategy, test plan, and testbed creation. Many activities will be carried out during testing, so a plan is needed.
Test development: Test procedures, test scenarios, test cases, test scripts to use in testing software.
Test execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
Test result analysis: Also called defect analysis, this is done by the development team, usually together with the client, in order to decide which defects should be treated (fixed), rejected (i.e., the software is found to be working properly), or deferred to be dealt with at a later time.
Retesting resolved defects: Once a defect has been dealt with by the development team, it is retested by the testing team.
Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified or fixed software, in order to ensure that the latest delivery has not ruined anything, and that the software product as a whole is still working correctly.
Controversy
Main article: Software testing controversies
Some of the major controversies include:
What constitutes responsible software testing? - Members of the "context-driven" school of testing believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.
Agile vs. traditional - Should testers learn to work under conditions of uncertainty and constant change, or should they aim at process "maturity"? The agile testing movement has grown in popularity since 2006, mainly in commercial circles, whereas government and military software providers have been slow to embrace this methodology and mostly still hold to CMM.
Exploratory vs. scripted - Should tests be designed at the same time as they are executed or should they be designed beforehand?
Manual vs. automated - Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. Others, such as advocates of agile development, recommend automating 100% of all tests.
Software design vs. software implementation - Should testing be carried out only at the end or throughout the whole process?
Who watches the watchmen? - The idea is that any form of observation is also an interaction, that the act of testing can also affect that which is being tested.
Certification
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification currently offered actually requires the applicant to demonstrate the ability to test software, and no certification is based on a widely accepted body of knowledge. This has led some to declare that the testing field is not ready for certification. Certification itself cannot measure an individual's productivity, skill, or practical knowledge, and it cannot guarantee their competence or professionalism as a tester.
Software testing certification types
Certifications can be grouped into: exam-based and education-based.
Exam-based certifications: These require passing an exam, for which candidates can also prepare by self-study, e.g. the ISTQB or QAI certifications.
Education-based certifications: These are instructor-led sessions, where each course has to be passed, e.g. those of the IIST (International Institute for Software Testing).
Testing certifications
Certified Software Tester (CSTE) offered by the Quality Assurance Institute (QAI)
Certified Software Test Professional (CSTP) offered by the International Institute for Software Testing
CSTP (TM) (Australian version) offered by K. J. Ross & Associates
CATe offered by the International Institute for Software Testing
ISEB offered by the Information Systems Examinations Board
Certified Tester, Foundation Level (CTFL) offered by the International Software Testing Qualification Board
Certified Tester, Advanced Level (CTAL) offered by the International Software Testing Qualification Board
CBTS offered by the Brazilian Certification of Software Testing (ALATS)
Quality assurance certifications
CSQE offered by the American Society for Quality (ASQ)
CSQA offered by the Quality Assurance Institute (QAI)
See also
Dynamic program analysis
Formal verification
Reverse Semantic Traceability
Static code analysis
GUI software testing
Web testing
Source: http://en.wikipedia.org/wiki/Software_testing