Software Testing Interview Questions
What kinds of testing should be considered?
- Black box testing - not based on any
knowledge of internal design or code. Tests are based on requirements and
functionality.
- White box testing - based on knowledge of
the internal logic of an application's code. Tests are based on coverage
of code statements, branches, paths, conditions.
- unit testing - the most 'micro' scale of
testing; to test particular functions or code modules. Typically done by
the programmer and not by testers, as it requires detailed knowledge of
the internal program design and code. Not always easily done unless the
application has a well-designed architecture with tight code; may require
developing test driver modules or test harnesses.
- incremental integration testing -
continuous testing of an application as new functionality is added;
requires that various aspects of an application's functionality be
independent enough to work separately before all parts of the program are
completed, or that test drivers be developed as needed; done by
programmers or by testers.
- integration testing - testing of combined
parts of an application to determine if they function together correctly.
The 'parts' can be code modules, individual applications, client and
server applications on a network, etc. This type of testing is especially
relevant to client/server and distributed systems.
- functional testing - black-box type testing
geared to functional requirements of an application; this type of testing
should be done by testers. This doesn't mean that the programmers
shouldn't check that their code works before releasing it (which of course
applies to any stage of testing.)
- system testing - black-box type testing
that is based on overall requirements specifications; covers all combined
parts of a system.
- end-to-end testing - similar to system
testing; the 'macro' end of the test scale; involves testing of a complete
application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting
with other hardware, applications, or systems if appropriate.
- sanity testing or smoke testing - typically
an initial testing effort to determine if a new software version is
performing well enough to accept it for a major testing effort. For
example, if the new software is crashing systems every 5 minutes, bogging
down systems to a crawl, or corrupting databases, the software may not be
in a 'sane' enough condition to warrant further testing in its current state.
- regression testing - re-testing after fixes
or modifications of the software or its environment. It can be difficult
to determine how much re-testing is needed, especially near the end of the
development cycle. Automated testing tools can be especially useful for
this type of testing.
- acceptance testing - final testing based on
specifications of the end-user or customer, or based on use by
end-users/customers over some limited period of time.
- load testing - testing an application under
heavy loads, such as testing of a web site under a range of loads to
determine at what point the system's response time degrades or fails.
- stress testing - term often used
interchangeably with 'load' and 'performance' testing. Also used to
describe such tests as system functional testing while under unusually
heavy loads, heavy repetition of certain actions or inputs, input of large
numerical values, large complex queries to a database system, etc.
- performance testing - term often used
interchangeably with 'stress' and 'load' testing. Ideally 'performance'
testing (and any other 'type' of testing) is defined in requirements
documentation or QA or Test Plans.
- usability testing - testing for
'user-friendliness'. Clearly this is subjective, and will depend on the
targeted end-user or customer. User interviews, surveys, video recording
of user sessions, and other techniques can be used. Programmers and
testers are usually not appropriate as usability testers.
- install/uninstall testing - testing of
full, partial, or upgrade install/uninstall processes.
- recovery testing - testing how well a
system recovers from crashes, hardware failures, or other catastrophic
problems.
- failover testing - typically used
interchangeably with 'recovery testing'.
- security testing - testing how well the
system protects against unauthorized internal or external access, willful
damage, etc; may require sophisticated testing techniques.
- compatibility testing - testing how well
software performs in a particular hardware/software/operating
system/network environment.
- exploratory testing - often taken to mean a
creative, informal software test that is not based on formal test plans or
test cases; testers may be learning the software as they test it.
- ad-hoc testing - similar to exploratory
testing, but often taken to mean that the testers have significant
understanding of the software before testing it.
- context-driven testing - testing driven by
an understanding of the environment, culture, and intended use of
software. For example, the testing approach for life-critical medical
equipment software would be completely different than that for a low-cost
computer game.
- user acceptance testing - determining if
software is satisfactory to an end-user or customer.
- comparison testing - comparing software
weaknesses and strengths to competing products.
- alpha testing - testing of an application
when development is nearing completion; minor design changes may still be
made as a result of such testing. Typically done by end-users or others,
not by programmers or testers.
- beta testing - testing when development and
testing are essentially completed and final bugs and problems need to be
found before final release. Typically done by end-users or others, not by
programmers or testers.
- mutation testing - a method for determining
if a set of test data or test cases is useful, by deliberately introducing
various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation
requires large computational resources.
What are 5 common problems in the software development process?
- poor requirements - if requirements are
unclear, incomplete, too general, and not testable, there will be problems.
- unrealistic schedule - if too much work is
crammed in too little time, problems are inevitable.
- inadequate testing - no one will know
whether or not the program is any good until the customer complains or
systems crash.
- featuritis - requests to pile on new
features after development is underway; extremely common.
- miscommunication - if developers don't know
what's needed or customers have erroneous expectations, problems are
guaranteed.
What are 5 common solutions to software development problems?
- solid requirements - clear, complete,
detailed, cohesive, attainable, testable requirements that are agreed to
by all players. Use prototypes to help nail down requirements. In
'agile'-type environments, continuous coordination with
customers/end-users is necessary.
- realistic schedules - allow adequate time
for planning, design, testing, bug fixing, re-testing, changes, and
documentation; personnel should be able to complete the project without
burning out.
- adequate testing - start testing early on,
re-test after fixes or changes, plan for adequate time for testing and
bug-fixing. 'Early' testing ideally includes unit testing by developers
and built-in testing and diagnostic capabilities.
- stick to initial requirements as much as
possible - be prepared to defend against excessive changes and additions
once development has begun, and be prepared to explain consequences. If
changes are necessary, they should be adequately reflected in related
schedule changes. If possible, work closely with customers/end-users to
manage expectations. This will provide them a higher comfort level with
their requirements decisions and minimize excessive changes later on.
- communication - require walkthroughs and
inspections when appropriate; make extensive use of group communication
tools - e-mail, groupware, networked bug-tracking tools and change
management tools, intranet capabilities, etc.; ensure that
information/documentation is available and up-to-date - preferably
electronic, not paper; promote teamwork and cooperation; use prototypes if
possible to clarify customers' expectations.
What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget,
meets requirements and/or expectations, and is maintainable.
However, quality is obviously a subjective term. It will depend on who the
'customer' is and their overall influence in the scheme of things. A
wide-angle view of the 'customers' of a software development project might
include end-users, customer acceptance testers, customer contract officers,
customer management, the development organization's
management/accountants/testers/salespeople, future software maintenance
engineers, stockholders, magazine columnists, etc. Each type of 'customer'
will have their own slant on 'quality' - the accounting department might
define quality in terms of profits while an end-user might define quality as
user-friendly and bug-free.
What is 'good code'?
'Good code' is code that works, is bug free, and is readable and maintainable.
Some organizations have coding 'standards' that all developers are supposed to
adhere to, but everyone has different ideas about what's best, or what is too
many or too few rules. There are also various theories and metrics, such as
McCabe Complexity metrics. It should be kept in mind that excessive use of
standards and rules can stifle productivity and creativity. 'Peer reviews',
'buddy checks', code analysis tools, etc. can be used to check for problems
and enforce standards.
For C and C++ coding, here are some typical ideas to consider in setting
rules/standards; these may or may not apply to a particular situation:
- minimize or eliminate use of global variables.
- use descriptive function and method names -
use both upper and lower case, avoid abbreviations, use as many characters
as necessary to be adequately descriptive (use of more than 20 characters
is not out of line); be consistent in naming conventions.
- use descriptive variable names - use both
upper and lower case, avoid abbreviations, use as many characters as
necessary to be adequately descriptive (use of more than 20 characters is
not out of line); be consistent in naming conventions.
- function and method sizes should be
minimized; less than 100 lines of code is good, less than 50 lines is
preferable.
- function descriptions should be clearly
spelled out in comments preceding a function's code.
- organize code for readability.
- use whitespace generously - vertically and horizontally.
- each line of code should contain 70 characters maximum.
- one code statement per line.
- coding style should be consistent throughout
a program (e.g., use of brackets, indentations, naming conventions, etc.)
- in adding comments, err on the side of too
many rather than too few comments; a common rule of thumb is that there
should be at least as many lines of comments (including header blocks) as
lines of code.
- no matter how small, an application should
include documentation of the overall program function and flow (even a few
paragraphs is better than nothing); or if possible a separate flow chart
and detailed program documentation.
- make extensive use of error handling
procedures and status and error logging.
- for C++, to minimize complexity and
increase maintainability, avoid too many levels of inheritance in class
hierarchies (relative to the size and complexity of the application).
Minimize use of multiple inheritance, and minimize use of operator
overloading (note that the Java programming language eliminates multiple
inheritance and operator overloading.)
- for C++, keep class methods small, less
than 50 lines of code per method is preferable.
- for C++, make liberal use of exception handlers.
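A short, self-contained C++ fragment (hypothetical names throughout)
illustrating several of the rules above at once - descriptive names, a
comment block preceding the function, small function size, error handling
with status reporting, and an exception handler:

    #include <fstream>
    #include <iostream>
    #include <stdexcept>
    #include <string>

    // readConfigurationValue
    // Reads a single configuration value from the given file.
    // Throws std::runtime_error if the file cannot be opened, so that
    // callers are forced to handle the failure.
    std::string readConfigurationValue(const std::string& configurationFileName)
    {
        std::ifstream configurationFile(configurationFileName);

        if (!configurationFile.is_open()) {
            // Error handling: report a descriptive status rather than
            // silently returning an empty value.
            throw std::runtime_error(
                "cannot open configuration file: " + configurationFileName);
        }

        std::string configurationValue;
        std::getline(configurationFile, configurationValue);
        return configurationValue;
    }

    int main()
    {
        try {
            const std::string value = readConfigurationValue("app.cfg");
            std::cout << "configuration value: " << value << '\n';
        } catch (const std::runtime_error& openFailure) {
            std::cerr << openFailure.what() << '\n';
            return 1;
        }
        return 0;
    }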
What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design'
or 'internal design'. Good internal design is indicated by software code whose
overall structure is clear, understandable, easily modifiable, and
maintainable; is robust with sufficient error-handling and status logging
capability; and works correctly when implemented. Good functional design is
indicated by an application whose functionality can be traced back to customer
and end-user requirements.
For programs that have a user interface, it's often a good idea to assume that
the end user will have little computer knowledge and may not read a user
manual or even the on-line help; some common rules-of-thumb include:
- the program should act in a way that least
surprises the user
- it should always be evident to the user
what can be done next and how to exit
- the program shouldn't let the users do
something stupid without warning them.
What is SEI? CMM? CMMI? ISO? IEEE? ANSI?
Will it help?
- SEI = 'Software Engineering Institute' at
Carnegie-Mellon University; initiated by the U.S. Defense Department to
help improve software development processes.
- CMM = 'Capability Maturity Model', now
called the CMMI ('Capability Maturity Model Integration'), developed by
the SEI. It's a model of 5 levels of process 'maturity' that determine
effectiveness in delivering quality software. It is geared to large
organizations such as large U.S. Defense Department contractors. However,
many of the QA processes involved are appropriate to any organization, and
if reasonably applied can be helpful. Organizations can receive CMMI
ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts
required by individuals to successfully complete projects. Few if any
processes in place; successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic
planning, and configuration management processes are in place; successful
practices can be repeated.
Level 3 - standard software development and maintenance processes are
integrated throughout an organization; a Software Engineering Process Group
is in place to oversee software processes, and training programs are used
to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products.
Project performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new
processes and technologies can be predicted and effectively implemented
when required.
Perspective on CMM ratings: During 1997-2001, 1018 organizations were
assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4,
and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23%
at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was
100 software engineering/maintenance personnel; 32% of organizations were U.S.
federal contractors or agencies. For those rated at Level 1, the most
problematical key process area was in Software Quality Assurance.
- ISO = 'International Organisation for
Standardization' - The ISO 9001:2000 standard (which replaces the previous
standard of 1994) concerns quality systems that are assessed by outside
auditors, and it applies to many kinds of production and manufacturing
organizations, not just software. It covers documentation, design,
development, production, testing, installation, servicing, and other
processes. The full set of standards consists of: (a)Q9001-2000 - Quality
Management Systems: Requirements; (b)Q9000-2000 - Quality Management
Systems: Fundamentals and Vocabulary; (c)Q9004-2000 - Quality Management
Systems: Guidelines for Performance Improvements. To be ISO 9001
certified, a third-party auditor assesses an organization, and
certification is typically good for about 3 years, after which a complete
reassessment is required. Note that ISO certification does not necessarily
indicate quality products - it indicates only that documented processes are
followed. Also see http://www.iso.ch/ for the latest information. In the U.S. the
standards can be purchased via the ASQ web site at http://e-standards.asq.org/
- IEEE = 'Institute of Electrical and
Electronics Engineers' - among other things, creates standards such as
'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829),
'IEEE Standard of Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE
Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730),
and others.
- ANSI = 'American National Standards
Institute', the primary industrial standards body in the U.S.; publishes
some software-related standards in conjunction with the IEEE and ASQ
(American Society for Quality).
- Other software development/IT management
process assessment methods besides CMMI and ISO 9000 include SPICE,
Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.
What is the 'software life cycle'?
The life cycle begins when an application is first conceived and ends when it
is no longer in use. It includes aspects such as initial concept, requirements
analysis, functional design, internal design, documentation planning, test
planning, coding, document preparation, integration, testing, maintenance,
updates, retesting, phase-out, and other aspects.
Will automated testing tools make testing easier?
- Possibly. For small projects, the time
needed to learn and implement them may not be worth it. For larger
projects, or on-going long-term projects, they can be valuable.
- A common type of automated tool is the
'record/playback' type. For example, a tester could click through all
combinations of menu choices, dialog box choices, buttons, etc. in an
application GUI and have them 'recorded' and the results logged by a tool.
The 'recording' is typically in the form of text based on a scripting
language that is interpretable by the testing tool. If new buttons are
added, or some underlying code in the application is changed, etc. the
application might then be retested by just 'playing back' the 'recorded'
actions, and comparing the logging results to check effects of the
changes. The problem with such tools is that if there are continual
changes to the system being tested, the 'recordings' may have to be
changed so much that it becomes very time-consuming to continuously update
the scripts. Additionally, interpretation and analysis of results
(screens, data, logs, etc.) can be a difficult task. Note that there are
record/playback tools for text-based interfaces also, and for all types of
platforms.
- Another common type of approach for
automation of functional testing is 'data-driven' or 'keyword-driven'
automated testing, in which the test drivers are separated from the data
and/or actions utilized in testing (an 'action' would be something like
'enter a value in a text box'). Test drivers can be in the form of
automated test tools or custom-written testing software. The data and
actions can be more easily maintained - such as via a spreadsheet - since
they are separate from the test drivers. The test drivers 'read' the
data/action information to perform specified tests. This approach can
enable more efficient control, development, documentation, and maintenance
of automated tests/test cases (a minimal sketch of this approach appears
after this list).
- Other automated tools can include:
  - code analyzers - monitor code complexity, adherence to standards, etc.
  - coverage analyzers - these tools check which parts of the code have
    been exercised by a test, and may be oriented to code statement
    coverage, condition coverage, path coverage, etc.
  - memory analyzers - such as bounds-checkers and leak detectors.
  - load/performance test tools - for testing client/server and web
    applications under various load levels.
  - web test tools - to check that links are valid, HTML code usage is
    correct, client-side and server-side programs work, and a web site's
    interactions are secure.
  - other tools - for test case management, documentation management, bug
    reporting, and configuration management.
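As a rough illustration of the data-driven/keyword-driven approach
described above, the following self-contained C++ sketch separates the test
driver from a table of keyword/target/value rows. The keywords and the
applyAction dispatch are hypothetical; a real driver would call into the
application under test or a GUI-automation layer, and the table would
normally live in an external spreadsheet or file rather than in the code:

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Each row of the (normally external) test table holds:
    // keyword;target;value  -- e.g. "enter_text;userNameBox;alice"
    static const std::vector<std::string> testTable = {
        "enter_text;userNameBox;alice",
        "click;loginButton;",
        "verify_text;statusLabel;Welcome alice",
    };

    // Hypothetical dispatch point: a real driver would perform the
    // action against the application here.
    void applyAction(const std::string& keyword,
                     const std::string& target,
                     const std::string& value)
    {
        std::cout << "action=" << keyword
                  << " target=" << target
                  << " value=" << value << '\n';
    }

    int main()
    {
        // The driver 'reads' data/action rows and performs them;
        // maintaining tests means editing the table, not the driver.
        for (const std::string& row : testTable) {
            std::istringstream fields(row);
            std::string keyword, target, value;
            std::getline(fields, keyword, ';');
            std::getline(fields, target, ';');
            std::getline(fields, value, ';');
            applyAction(keyword, target, value);
        }
        return 0;
    }

The design point is the separation itself: adding or changing a test case
touches only the data rows, which is what makes this style easier to
maintain than embedding actions directly in scripts.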