The Comprehensive Guide To Basic Terminologies And Definition Of Software Testing!
An overview of Software testing terminologies!
Every industry has its own set of terminologies. Below is a comprehensive list of around 150 software testing terminologies that are used frequently in this industry. The list also gives a definition of software testing. These are the basic testing concepts.
Hope you learn a lot!
Software testing meaning and terminologies
Here are some software testing terminologies that are widely and popularly used in the software testing and Information Technology industries:
Acceptance testing – Formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entity to determine whether or not to accept the system.
Alpha testing – Simulated or actual operational testing by potential users/customers or an independent testing team at the developers’ site, but outside the development organization. Alpha testing is often employed as a form of internal acceptance testing.
Attack – Directed and focused attempt on quality assessment, especially reliability, of a test object, trying to force specific failures.
Beta testing – Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing to get market feedback.
Black box testing – Functional or non-functional tests, without reference to the internal structure of the component or system.
Boundary value analysis – A black box test design technique in which test cases are designed based on boundary values.
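For instance, boundary value analysis can be sketched as follows. The eligibility check and its 18–65 range are hypothetical; the point is that test cases cluster at and just around each boundary:

```python
# Hypothetical function under test: accepts ages from 18 to 65 inclusive.
def is_eligible(age):
    return 18 <= age <= 65

# Boundary value analysis picks values at and just around each boundary.
boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # on the lower boundary
    (19, True),   # just above the lower boundary
    (64, True),   # just below the upper boundary
    (65, True),   # on the upper boundary
    (66, False),  # just above the upper boundary
]

for age, expected in boundary_cases:
    assert is_eligible(age) == expected, f"unexpected result for age={age}"
```

Off-by-one defects tend to live exactly at these edges, which is why the technique concentrates tests there rather than in the middle of the range.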
Branch coverage – The percentage of branches that have been exercised by a test suite. One hundred percent branch coverage implies both 100% decision coverage and 100% statement coverage.
Code coverage – An analysis method that determines which parts of the software were run (covered) by the test suite and which parts were not executed.
Compiler – A software tool that translates programs expressed in a programming language into their machine-language equivalents.
Complexity: The degree to which a component or system has an internal design and/or structure that is difficult to understand, maintain, and verify.
Component integration testing: Tests performed to identify defects in interfaces and interactions between integrated components.
Component testing: Testing individual software components.
Configuration control: A configuration management element that consists of evaluating, coordinating, approving or disapproving, and implementing changes to configuration items after formal establishment of their configuration identification.
Configuration item: An aggregation of hardware, software, or both, which is intended for configuration management and treated as a single entity in the configuration management process.
Configuration management: A discipline that applies technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, track changes to these characteristics, record and report change processing and implementation status, and verify compliance with the specified requirements.
Control flow: An abstract representation of all possible sequences of events and paths in execution through a component or system.
Coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
Coverage tool: A tool that provides objective measurements of which structural elements were executed by the test suite.
Cyclomatic complexity: The number of independent paths through a program.
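As a rough illustration, cyclomatic complexity can be counted as the number of decision points plus one. The `sign` function below is a made-up example:

```python
# Hypothetical function with two decision points (the if and the elif),
# so its cyclomatic complexity is 2 + 1 = 3. The three independent
# paths are: a < 0, a == 0, and a > 0.
def sign(a):
    if a < 0:
        return -1
    elif a == 0:
        return 0
    return 1

# One test per independent path exercises all three paths.
assert sign(-3) == -1
assert sign(0) == 0
assert sign(7) == 1
```

The measure is useful for testing because it gives a lower bound on the number of test cases needed to cover every independent path.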
Data-driven testing: A scripting technique that stores test inputs and expected results in a table or spreadsheet so that a single control script can run all the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools.
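A minimal data-driven sketch, assuming a hypothetical `discount` function: the test data lives in a table and a single control loop drives every case:

```python
# Hypothetical function under test: order discount by total amount.
def discount(total):
    if total >= 100:
        return 0.10
    if total >= 50:
        return 0.05
    return 0.0

# The "data file": each row is (input_total, expected_discount).
test_table = [
    (30, 0.0),
    (50, 0.05),
    (99, 0.05),
    (100, 0.10),
]

# One control script runs every row in the table.
for total, expected in test_table:
    assert discount(total) == expected, f"failed for total={total}"
```

Adding a new test then means adding a row to the table, not writing a new script.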
Data flow: An abstract representation of the sequence and possible changes in the state of data objects, where the state of an object is any creation, use, or destruction.
Debugging: The process of finding, analyzing, and removing the causes of software failures.
Debugging tool: A tool used by programmers to reproduce failures, investigate the state of programs, and find the corresponding defect. Debuggers allow programmers to run programs step by step, stop a program in any program instruction, and define and examine program variables.
Decision coverage: The percentage of decision outcomes that have been exercised by a test suite. One hundred percent decision coverage implies both 100% branch coverage and 100% statement coverage.
Decision table testing: A black box testing technique in which test cases are designed to perform the combinations of inputs and/or stimuli (causes) shown in a decision table.
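As an illustrative sketch, the decision table below combines two conditions (membership and order size) with the expected action; the shipping rule itself is hypothetical:

```python
# Hypothetical rule under test: free shipping for members over 100.
def free_shipping(is_member, total):
    return is_member and total > 100

# Decision table: every combination of the two conditions, with the
# expected action. Totals 150 and 50 represent "over 100" and "not".
decision_table = [
    # (is_member, total, expected_free_shipping)
    (True,  150, True),
    (True,   50, False),
    (False, 150, False),
    (False,  50, False),
]

for member, total, expected in decision_table:
    assert free_shipping(member, total) == expected
```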
Defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, can cause a failure of the component or system.
Defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in terms of standard measurement, for example, lines of code, number of classes, or function points).
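The calculation itself is simple; the numbers below are purely illustrative:

```python
# Defect density = defects found / size of the component or system.
defects_found = 12
lines_of_code = 8000

# Expressed per thousand lines of code (KLOC):
density_per_kloc = defects_found / (lines_of_code / 1000)
print(f"{density_per_kloc:.2f} defects per KLOC")  # prints: 1.50 defects per KLOC
```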
Defect management: The process of recognition, investigation, action, and elimination of defects. It involves recording defects, classifying them, and identifying the impact. It is a very important software testing terminology.
Driver: A software component or test tool that replaces a component and takes care of the control and/or the calling of a component or system.
Dynamic analysis tool: A tool that provides run-time information about the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic, monitor the allocation, use, and de-allocation of memory, and highlight memory leaks.
Dynamic testing: Testing that involves the execution of the software of a component or system.
Entry criteria: The set of generic and specific conditions to allow a process to proceed with a defined task.
Error guessing: Another important software testing terminology is error guessing. It is a test design technique where the tester’s experience is used to anticipate which defects may be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
Exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task that have not been finished. Exit criteria are used in testing to report against and to plan when to stop testing.
Exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.
Exploratory testing: Test where the tester actively controls the design of tests when these tests are performed and uses the information obtained during testing to design new and better tests.
Failure: Actual deviation of the component or system from its expected delivery, service, or result. The inability of a system or component to perform a required function within specified limits.
Failure rate: The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs.
Formal review: A review characterized by documented procedures and requirements, e.g. inspection.
Functional requirement: A requirement that specifies a function that a component or system must perform.
Functional testing: Testing based on an analysis of the specification of the functionality of a component or system.
Horizontal traceability: Tracking requirements for a test level through the test documentation layers (for example, test plan, test project specification, test case specification, and test procedure specification).
Impact analysis: The evaluation of the change to the development documentation layers, test documentation, and components to implement a particular change to the specified requirements.
Incident: Any event that occurs during the test that requires investigation.
Incident Report: A document that reports any event that occurs during testing, which requires investigation.
Incident management tool: A tool that facilitates the recording and status tracking of incidents found during testing. Such tools often have workflow-oriented facilities to track and control the allocation, correction, and re-testing of incidents, and provide reporting facilities.
Incremental development model: A development lifecycle in which a project is divided into a series of increments, each of which provides a piece of functionality in the overall requirements of the project. Requirements are prioritized and delivered in order of priority in the appropriate increment. In some (but not all) versions of this lifecycle model, each subproject follows a “mini V-model” with its own design, coding, and testing phases.
Independence of testing: Separation of responsibilities, which encourages the performance of objective tests.
Informal review: A review not based on a formal procedure (documented).
Inspection: A type of review that relies on visual examination of documents to detect defects. Another important software testing terminology.
Intake test: A special example of a smoke test to decide whether the component or system is ready for detailed and additional testing.
Integration: The process of combining components or systems into larger assemblies.
Integration Tests: Tests performed to expose defects in interfaces and integrations between components or integrated systems.
Interoperability testing: The testing process for determining the interoperability of a software product.
ISTQB: International Software Testing Qualifications Board, a non-profit association that develops international certification for software testers. An important software testing terminology.
Keyword-driven testing: A scripting technique that uses data files to contain not only test data and expected results but also keywords related to the application being tested. Keywords are interpreted by special support scripts that are called by the control script for testing. See also data tests.
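A minimal keyword-driven sketch, with all names invented for illustration: each test step is a keyword plus its arguments, and a small dispatcher maps keywords to the support functions that implement them:

```python
# Hypothetical application state touched by the support functions.
state = {"logged_in": False}

def login(user):
    state["logged_in"] = True

def logout():
    state["logged_in"] = False

# The dispatcher: keyword name -> support function.
keywords = {"login": login, "logout": logout}

# The data file would contain rows like these: (keyword, arguments).
test_steps = [
    ("login", ["alice"]),
    ("logout", []),
]

# The control script interprets each step via the dispatcher.
for keyword, args in test_steps:
    keywords[keyword](*args)

assert state["logged_in"] is False
```

Because the test data only names keywords, non-programmers can compose new tests without touching the support scripts.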
Maintainability testing: The testing process for determining the maintainability of a software product.
Metric: A measurement scale and method used for measurement.
Moderator: The leader and principal responsible for an inspection or other review process.
Modeling tool: Tool that supports the validation of software or system models.
N-switch coverage: The percentage of sequences of N+1 transitions that have been exercised by a test suite.
N-switch testing: A form of state transition testing in which test cases are designed to perform all valid sequences of N+1 transitions.
Non-functional requirement: Requirement that does not refer to functionality, but attributes of it, such as reliability, efficiency, ease of use, maintenance, and portability.
Off-the-shelf Software: A software product that is developed for the general market, that is, for a large number of customers, and that is delivered to many customers in an identical format.
Performance testing: The testing process for determining the performance of a software product.
Performance testing tool: A tool to support performance testing that typically has two main facilities: load generation and test transaction measurement. A load generator can simulate multiple users or large volumes of input data. During execution, response-time measurements are taken from selected transactions and recorded. Performance testing tools typically provide reports based on the test logs, and graphs of load against response times.
Portability testing: The testing process for determining the portability of a software product.
Probe effect: The effect on the component or system when it is being measured, e.g. by a performance test tool or monitor.
Product risk: Risk directly related to the test object.
Project risk: A risk related to the management and control of a project, e.g. lack of staff, strict deadlines, changing requirements, etc.
Project test plan: A test plan that typically meets multiple test levels.
Quality: The degree to which a component, system, or process meets the requirements and/or needs and expectations of users/customers.
RAD: Rapid Application Development, a software development model.
Regression testing: Testing of a previously tested program after modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed.
Reliability testing: The testing process for determining the reliability of a software product.
Requirement: A condition or capability needed by a user to solve a problem or achieve an objective, which must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
Requirement management tool: A tool that supports the recording of requirements, requirements attributes (for example, priority and owner) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and checking for violations of predefined requirements rules.
Re-testing: Running test cases that failed the last time they were run, in order to verify the success of corrective actions.
Review: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walk-through.
Review tool: A tool that supports the review process. Typical features include review planning and tracking, communication support, collaborative reviews, and a repository for collecting and reporting metrics.
Reviewer: The person involved in the review who identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different views and roles in the review process.
Risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood.
Risk-based testing: An approach to testing to reduce the level of product risk and inform stakeholders about its status, starting in the early stages of a project. It involves identifying the risks of the product and its use in guiding the testing process.
Robustness testing: Testing to determine the robustness of the software product.
SBTM: Session-based test management, an approach for managing ad hoc and exploratory testing, based on fixed-duration sessions (from 30 to 120 minutes) during which testers explore part of the application.
Scribe: The person who has to record each defect mentioned and any suggestions for improvement during a review meeting, on a registration form. The scribe shall ensure that the registration form is legible and understandable.
Scripting language: A programming language in which executable test scripts are written, used by a test execution tool (for example, a capture/replay tool).
Security testing: Testing to determine software product security.
Site acceptance testing: Acceptance testing by users/customers at their own site, to determine whether a component or system satisfies the user/customer needs and fits within the business processes, normally including both hardware and software.
SLA: Service level agreement, service agreement between a vendor and its customer, defining the level of service that a customer can expect from the provider.
Smoke test: A subset of all defined/planned test cases that covers the core functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices.
State transition: A transition between two states of a component or system.
State transition testing: A black box testing technique in which test cases are designed to perform valid and invalid state transitions.
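A sketch of state transition testing for a hypothetical door controller: one test exercises a valid transition, another checks that an invalid transition is rejected:

```python
# Hypothetical state machine: the valid transitions of a door.
transitions = {
    ("closed", "open_cmd"): "open",
    ("open", "close_cmd"): "closed",
}

def next_state(state, event):
    if (state, event) not in transitions:
        raise ValueError(f"invalid transition: {state} + {event}")
    return transitions[(state, event)]

# Valid transition: closing door opens on the open command.
assert next_state("closed", "open_cmd") == "open"

# Invalid transition: an already-open door must reject "open_cmd".
try:
    next_state("open", "open_cmd")
    assert False, "expected the invalid transition to be rejected"
except ValueError:
    pass
```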
Statement coverage: The percentage of executable statements that have been exercised by a test suite.
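One small example of why statement coverage is the weakest of these measures: the single test below executes every statement of a toy `absolute` function, yet never takes the path where the condition is false:

```python
# Toy function under test.
def absolute(x):
    if x < 0:
        x = -x
    return x

# This one test achieves 100% statement coverage: every statement runs.
assert absolute(-5) == 5

# But the branch where x >= 0 (the if-condition is false) was never
# exercised; branch coverage additionally requires a case such as:
assert absolute(3) == 3
```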
Static analysis: Analysis of software artifacts, e.g. requirements or code, carried out without executing these artifacts.
Static analysis tool: A tool that performs static code analysis. The tool checks source code for certain properties, such as compliance with coding standards, quality metrics, or data flow anomalies.
Static testing: Testing a component or system at the specification or implementation level without running that software, e.g. static code analysis.
Stress testing: A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers. See also performance testing, load testing.
Stress testing tool: A tool that supports stress testing.
Structural testing: See white-box testing.
Stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls it or is otherwise dependent on it. It replaces a called component.
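A small sketch of a stub, with all names invented: the component under test calls a payment gateway, and the stub stands in for the real gateway with a canned response:

```python
# Stub: replaces the real payment gateway the checkout code would call.
class PaymentGatewayStub:
    def charge(self, amount):
        # Canned response; a real gateway would contact a remote service.
        return {"status": "approved", "amount": amount}

# Component under test: depends on a gateway object being passed in.
def checkout(gateway, amount):
    result = gateway.charge(amount)
    return result["status"] == "approved"

# The test exercises checkout() without any real payment infrastructure.
assert checkout(PaymentGatewayStub(), 42) is True
```

This is the mirror image of a driver: a stub replaces a component that is called, while a driver replaces a component that does the calling.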
System integration testing: Testing the integration of systems and packages; testing interfaces to external organizations (e.g. electronic data interchange, internet).
System testing: The process of testing an integrated system to verify that it meets the specified requirements.
Technical review: A peer group discussion activity that focuses on reaching consensus on the technical approach to be taken. A technical review is also known as peer review.
Test: A set of one or more test cases.
Test approach: The implementation of the test strategy for a specific project. It typically includes the decisions made, based on the (test) project’s goal and the risk assessment performed, about the starting points for the test process, the test design techniques to be applied, the exit criteria, and the types of tests to be performed.
Test basis: All documents from which the requirements of a component or system can be inferred. The documentation on which test cases are based. If a document can only be amended utilizing a formal amending procedure, the test basis is called the frozen test basis.
Test case: A set of input values, run preconditions, expected results, and post-execution conditions, developed for a particular test objective or condition, such as exercising a specific program path or to verify compliance with a specific requirement.
Test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution conditions) for a test item.
Test comparator: A test tool that performs automated comparison of actual results with expected results.
Test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, quality attribute, or structural element.
Test control: A test management task that handles the development and application of a set of corrective actions to get a test project on track when monitoring shows a deviation from the plan.
Test coverage: See coverage.
Test data: Data that exists (for example, in a database) before a test is run, and that affects or is affected by the component or system being tested.
Test data preparation tool: A type of test tool that allows data to be selected from existing databases or created, generated, manipulated, and edited for use in tests.
Test design: The process of transforming overall test objectives into tangible test conditions and test cases.
Test design specification: A document specifying the test conditions (coverage items) for a test item, such as a detailed test approach and identification of associated high-level test cases.
Test design technique: A method used to derive or select test cases.
Test design tool: A tool that supports test project activity by generating test inputs from a specification that can be maintained in a CASE tool store, e.g. requirements management tools, or from specified test conditions maintained in the tool itself.
Test-driven development: An agile development method in which tests are designed and automated before the code is written (from requirements or specifications), and then the minimum amount of code is written to make the tests pass. The method is iterative and ensures that the code continues to meet the requirements as the tests are re-run.
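A toy red-green sketch of the cycle (the `slugify` example is invented): the test exists first and would fail until the minimal implementation below it is written:

```python
# Step 1 (red): write the test first; it fails while slugify is missing.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write the minimum code needed to pass the test.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3: run the test; further requirements would repeat the cycle.
test_slugify()
```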
Test environment: An environment containing hardware, instrumentation, simulators, software tools, and other supporting elements needed to perform a test.
Test execution: The process of running a test on the component or system under test, producing actual results.
Test execution schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be run.
Test execution tool: A type of test tool that can run other software using an automated test script, e.g. Capture/Playback.
Test harness: A test environment composed of stubs and drivers required to perform a test.
Test leader: See Test manager.
Test level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project; examples are component, integration, system, and acceptance testing.
Test log: A chronological record of relevant details about running tests.
Test management: The planning, estimation, monitoring, and control of tests and activities, usually performed by a test manager.
Test Manager: The person responsible for testing and evaluating a test object; the individual who directs, controls, administers, plans, and regulates the evaluation of a test object.
Test monitoring: A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals with what was planned.
Test objective: A reason or purpose for designing and running a test.
Test oracle: A source to determine the expected results to compare with the actual result of the software under test. An oracle can be the existing system (for a benchmark), a user manual, or the specialized knowledge of an individual, but it should not be code.
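One common pattern is using a trusted reference implementation as the oracle; the sketch below is illustrative, with the "implementation under test" standing in for real code:

```python
import random

def reference_sort(xs):
    # The oracle: trusted (perhaps slow) source of expected results.
    return sorted(xs)

def sort_under_test(xs):
    # Stand-in for the implementation actually being checked.
    return sorted(xs)

# Compare actual results against the oracle over many random inputs.
for _ in range(100):
    data = [random.randint(0, 99) for _ in range(20)]
    assert sort_under_test(data) == reference_sort(data)
```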
Test plan: A document describing the scope, approach, resources, and schedule of the intended test activities. Among other things, it identifies the test items, the features to be tested, the test tasks, who will perform each task, the degree of tester independence, the test environment, the test design techniques, and the test measurement techniques to be used. It is a record of the test planning process.
Test policy: A high-level document that describes the principles, approach, and key objectives of the organization concerning testing.
Test procedure specification: A document specifying a sequence of actions for running a test; Also known as test script or manual test script.
Test script: Commonly used to refer to a test procedure specification, especially an automated one.
Test strategy: A high-level document that defines the test levels to be performed and the testing within those levels for a program (one or more projects).
Test suite: A set of several test cases for a component or system under test, where the post-condition of one test is often used as the precondition for the next.
Test summary report: A document that summarizes the test activities and results. It also contains an evaluation of the corresponding test items against the output criteria.
Tester: A technically qualified professional who is involved in testing a component or system.
Testware: Artifacts produced during the testing process required to plan, design, and run tests, such as documentation, scripts, inputs, expected results, installation and cleanup procedures, files, databases, environment, and any additional software or utilities used in testing.
Thread testing: A component integration test version where the progressive integration of components follows the implementation of subsets of requirements, as opposed to integrating components by levels of a hierarchy.
Traceability: The ability to identify related items in documentation and software, such as requirements with associated tests.
Usability testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate, and attractive to the users under specified conditions.
Use case testing: A black box test design technique in which test cases are designed to run user scenarios.
User acceptance testing: See acceptance testing.
V-Model: A framework describing the software development lifecycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development lifecycle.
Validation: Confirmation by testing and by providing objective evidence that the requirements for a specific use or application have been met.
Verification: Confirmation by testing and by providing objective evidence that the specified requirements have been met.
Vertical traceability: The tracing of requirements through the layers of development documentation to components.
Walk-through: A step-by-step presentation by the author of a document to gather information and establish a common understanding of its content.
White-box testing: Testing based on an analysis of the internal structure of the component or system.
So, that was our list of software testing terminologies. Be sure to mention any and all terms we may have missed in the comments below! Good luck!