Course Taxonomy: Testing

AI for Software Testing

Part 1: Introducing Generative AI for Software Testing

Part 2: Let’s Test with AI

  • Use AI agents to generate and run tests

Part 3: Modelling for Testing

  • Apply different ways to structure a problem and organize the testing process

Part 4: Test Planning with AI

  • Use AI to help create an overall test strategy, using a Test Strategy Canvas and Testing Quadrants

Part 5: Testing Single Functions

  • Learn how AI can assist with equivalence partitioning, boundary value analysis, state and preconditions when defining tests

Part 6: Evaluate Tests

  • Identify missing and redundant tests, as well as the level of test coverage

Part 7: Activities and Processes

  • Use AI to generate use cases in several forms (traditional, Given-When-Then, and graphical) and to produce detailed test cases

Part 8: Planning the End Game

  • Create AI-generated test plans for UAT, alpha, beta, and usability testing

Part 9: Stories and Scenarios

  • Use AI to present a user story in terms of a set of scenarios that need to pass

Part 10: Automation

  • Use AI to generate automated test cases

Part 11: Quality Attributes & Non-functional Requirements

Part 12: Evaluating AI Readiness

  • Ethical considerations and emerging trends

Fundamentals of Software Testing

Part 1: Introduction and Overview

Establishes a foundation for the course, provides a workable definition of software quality and shows how testing fits into the overall quality process.

Part 2: What to Test and How to Test it — The Universal Testing Method

Testers follow the same basic process that scientists use; we follow the principles of experimentation and measurement. In this course, we map your testing method back to those principles and show how, at each step in your testing, you’re making complex decisions about what to test and how to test it. By combining skills, tactics, practices, and tools, this section helps build a base that testers in any context (and at any skill level) can apply to solve testing problems.

  1. Model the Testing Space. Compose, describe and work with visual models of the software to identify relevant dimensions, variables, and dynamics so you can develop an effective test strategy.
  2. Determine Test Coverage. Understand a variety of common measures of test coverage and choose the most appropriate ones for each project; determine the scope of testing; establish a structure to track test coverage
  3. Determine Test Oracles. Identify the sources of truth to determine whether the application has passed or failed tests; review common formal and heuristic oracles 
  4. Determine Test Procedures. Define what test procedures and test cases are; identify common test procedure types; learn how to document test procedures in a practical, efficient manner
  5. Configure the Test System. See how to ensure you have everything needed to support testing; discuss common challenges to test configuration; consider test lab requirements and realities
  6. Operate the Test System. Learn how to manage tester contact with the application under test (AUT); discuss different methods of interaction with the system to address different testing objectives; identify common artifacts and practices related to test operation
  7. Observe the Test System. Learn what empirical data to capture about the application under test and how to preserve testing interactions for review and reproducibility; consider common tools used to assist with test observation; identify common problems and human tendencies related to observation
  8. Evaluate Test Results. Discuss possibilities and probabilities related to test results (not every test failure is a bug!); identify typical test result evaluation tasks; consider performance test results interpretation; learn key factors related to defect communications
  9. Report Test Results. Learn how to make credible, professional reports of testing results and testing progress that address the varied needs of your target audiences; identify guiding principles for report design; review best practices and examples for defect reporting, progress status reporting, and quality statistics reporting

Part 3: Test Case Strategies

The heart of good testing is coming up with good test cases.  In this section, we will define what makes test cases “good”, and discuss these strategies for identifying test cases in specific contexts:

  1. White Box strategies
  2. Black Box strategies
  3. Input and data-based strategies
  4. User interface oriented strategies
  5. Business Process flow strategies
  6. Strategies based on your personal and organizational experiences

Part 4: Common Phases of Testing

Different testing activities take place as the software progresses through its life cycle. (Agile testers perform these same testing activities, even though they are not project phases.) This section explains the common phases of software testing, including the purpose of each, who normally performs it, and the typical types of tests that are done.

Test phases or types discussed:

  1. Unit and Software
  2. Integration
  3. System and System Integration
  4. Product Readiness
  5. Regression
  6. User Acceptance

Part 5: Approaches to Testing

Different approaches to testing are used to address different testing objectives and different project conditions. Some approaches are more formal, lengthy, traceable, and reproducible. Others are more free-form, quicker, less traceable, and less reproducible. The range of such approaches forms a continuum from which testers select the optimal combination for a given project. The best selection of approaches addresses the needs for both positive and negative testing.

  1. The Testing Approach Continuum
  2. Scripted Testing
  3. Freestyle Testing
  4. Middle-Ground (Charters, Checklists, Scenarios)

Part 6: Non-Functional Testing

Without question, functional testing is a must-have for software quality. However, there’s more to the picture than that. This section describes several key types of non-functional testing and identifies what each one covers and what techniques or best practices apply.

  1. Performance
  2. Usability
  3. Accessibility
  4. Security
  5. Portability
  6. Localization

Part 7: Platform Specialization

Software does not run only on the desktop; it runs on numerous platforms, and all of it needs to be tested. This section considers multiple platforms, identifies each platform’s unique challenges, and describes the testing approaches best suited to each.

  1. Web-Based
  2. Mobile
  3. SOA (Service-Oriented Architecture)
  4. Telephony and Voice
  5. DW/BI (Data Warehouse and Business Intelligence)
  6. COTS/MOTS Package Implementations

Part 8: Test Automation — Bonus Section

Many organizations have set out to implement test automation in their projects, and many have failed. This section identifies the different types of tools and practices that fall into the “automation” category and helps set realistic expectations and goals for automated testing. Learn how to optimize your test automation investment and plan properly for long-term success. This is a bonus section that is discussed as time permits.

  1. Automated Test Tools
  2. System Monitor Tools
  3. File/Database Comparison Tools
  4. Static Analysis Tools

Part 9: Behavior Driven Development (BDD) & Test Driven Development (TDD) — Bonus Section

BDD and TDD are related approaches to software development that came out of the Agile movement and have proven to have a significant positive impact on software quality. This section provides an introduction to the concepts so testers can be prepared to adopt them together with developers and other project members using iterative development methods. This is a bonus section that is discussed as time permits.

  1. Test-Driven Development activities
  2. Behavior-Driven Development activities
  3. Tools for Different Languages

Part 10: Managing Testing Projects

Whether you lead a team of testers or work as the lone tester on a project, effectively managing the testing work is key to your ability to successfully test the software on time with the resources at hand. In this section, we will address the basics of managing your work in a way that is relevant to individual contributors and team leads alike.

  1. Planning for Testing (Universal Testing Method Steps 1-4)
  2. Requirements Traceability
  3. Test Resource Needs
  4. Testing Risks and Issues
  5. Testing Entry and Exit Criteria
  6. Measuring Testing Progress

Software Tester Certification Boot Camp

Part 1: Course and Exam Overview

  1. ISTQB and ASTQB overview
  2. Exam format
  3. Study and exam-taking tips
  4. Course flow and agenda topics
  5. Outline of the daily schedule (varies on day 3)
  6. Explanation of supplementary material

Part 2: Fundamentals of Testing

  1. Testing overview and key terminology
  2. Common testing principles
  3. Basic test process
  4. Psychology of testing
  5. Code of ethics
  • Interactive Session: Testing missions and test objectives
  • Pop Quiz: Seven testing principles
  • Interactive Session: Context drivers for testing

Part 3: Testing throughout the Software Life Cycle

  1. Software development models
  2. Test levels and test types
  3. Maintenance testing
  • Interactive Session: Understanding test impacts of software development models
  • Interactive Session: Illustrating verification and validation for better understanding
  • Interactive Session: Linking test levels with entry and exit criteria
  • Interactive Session: Compare and contrast black box and white box testing
  • Interactive Session: Understanding goals, targets, and issues within test levels
  • Interactive Session: Compare and contrast test types using examples

Part 4: Test Management

  1. Test organization
  2. Planning and estimation
  3. Progress monitoring and control
  4. Configuration management
  5. Risk and testing
  6. Incident management
  • Pop Quiz: Understanding project constraints
  • Pop Quiz: Test team organizational structures
  • Pop Quiz: Driving more accurate test estimates
  • Pop Quiz: Choosing a test approach
  • Interactive Session and Pop Quiz: Performing risk assessment
  • Pop Quiz: Identifying incidents
  • Hands-on Exercise: Write an incident report
  • Hands-on Exercise: Perform a review session
  • Interactive Session: Developing oracles

Part 5: Test Design Techniques

  1. The test development process
  2. Specification-based techniques
  3. Structure-based techniques
  4. Experience-based techniques
  5. Selecting test techniques
  • Pop Quiz: Using specification-based techniques
  • Interactive Session: Review tests designed with equivalence partitioning
  • Hands-on Exercise: Use equivalence partitioning as a test design method
  • Hands-on Exercise: Use boundary value analysis to create tests
  • Interactive Session: Analyze and map out complex logic in requirements
  • Hands-on Exercise: Use a decision table to develop tests
  • Interactive Session: Walk through a state model
  • Hands-on Exercise: Use a state model to build tests
  • Pop Quiz: Use case basics
  • Interactive Session: Generate tests from use cases
  • Interactive Session: Analyze code flow with control flow diagrams
  • Hands-on Exercise: Develop structural tests for code and analyze coverage
  • Pop Quiz: Differentiate experience-based techniques
  • Pop Quiz: Choose a test technique

Part 6: Static Techniques

  1. Static testing techniques
  2. The review process
  3. Static analysis by tools
  • Review test sets to evaluate test design
  • Perform a peer review and feedback session (these practice sessions are embedded elsewhere in the course so that reviews are performed on real targets)

Part 7: Tool Support for Testing

  1. Types of tools
  2. Effective use of tools
  3. Introducing tools into an organization
  • Pop Quiz: Test frameworks
  • Pop Quiz: Understanding probe effect
  • Pop Quiz: Pros and cons of tools
  • Pop Quiz: Piloting a tool

Part 8: Course Wrap-up

  1. Exam tips, procedures, and guidelines
  2. Exam cram
  • Open review session
  • Practice exam review

At the conclusion of the software testing training course, you will have the opportunity to take the ISTQB™ Certified Tester Foundation Level exam. The exam is held at 3:30 p.m. on the third day of the course. The ISTQB™ Certified Tester Foundation Level certification exam is independently administered by the American Software Testing Qualifications Board, Inc. (ASTQB).

Effective User Acceptance Testing

Part 1: What UAT Is and Is Not

  • We will begin by describing how UAT differs from other software testing and how it fits into various software development lifecycles (including Waterfall and Agile). Along the way, we'll define a variety of key terms and identify the players.

Part 2: Understanding the Business Need

  • Business Need has many dimensions from correct computations to ease of use. We will explore each of those dimensions so you can ensure that your UAT addresses each of them in an appropriate way.

Part 3: What Could Go Wrong?

  • Of all the things we could test, which should we focus on? We will apply Risk-Based testing to focus our UAT where it will be most valuable.

Part 4: "U" is for User

  • Effective UAT includes testing from the standpoint of all of the users (both active and passive ones). We will discuss ways to identify all of the users and ensure that their viewpoints are included in our UAT.

Part 5: Incremental UAT

  • UAT is usually the final gate before deployment, but any problems found at that point in the project can be costly and time-consuming to correct. So we will introduce an incremental approach to UAT that can be integrated into any software development lifecycle (even Waterfall). This incremental approach enables you to identify issues earlier (when they are easier to fix), and reduces the likelihood of unpleasant surprises at the project's end.

Part 6: Preparing Test Data

  • Good tests require appropriate test data. We will discuss how to identify and prepare test data that will enable good Acceptance Testing. Along the way we will discuss the limitations, dangers and (in some cases) illegality of using production data for testing, and we will look at options for addressing those issues.

Part 7: The Acceptance Test Plan

  • As the old adage says, "Fail to plan; plan to fail." The plans for UAT will be different from those for other testing activities. We will provide guidance for UAT plans, including how to find the "sweet spot" of providing enough guidance to ensure effective and repeatable tests, while enabling the testers to exercise the system as they will use it after it is deployed.

Part 8: Performing UAT

  • Testing is more than just using a computer. We will provide guidance for how Acceptance Testers should operate while performing UAT. We will discuss following UAT plans as well as going beyond them to explore how the system works. We will also discuss evaluating test results, reporting issues and raising questions.

Part 9: "A" is For Acceptance

  • We will finish with a discussion of deciding if the system is acceptable or not. We will explore this both from each tester's perspective, and for UAT as a whole. Along the way, we will talk about "minor" defects, unresolved issues, and what it means for the system to be "good enough" in a particular context.

Agile Testing (ICP-TST)

Part 1: Agile Testing Mindset

Much like Agile itself, many Agile testing techniques were espoused well before the Agile Manifesto was created. But Agile testing is very different from testing performed in traditional software development approaches. This topic anchors the ideas of Agile testing in earlier work, while also providing insight into the major differences between Agile testing and testing performed as part of traditional (phase-based) software development.

The 12 Principles of the Agile Manifesto establish guiding principles not only for the Agile movement but also for Agile testing as a discipline. The Agile mindset includes these ideas: quality is not “owned” by a particular role; testers become facilitators of the team’s quality efforts; and Agile testing provides critical insights and feedback into the software process. This topic helps learners understand how the Agile Manifesto is realized within an Agile testing process and approach, and to adopt the requisite Agile mindset.

  1. Overview of Agile Testing
    • Origins of Agile Testing
    • Agile Testing vs. Traditional Approaches
  2. Mindset & Culture
    • Agile Testing Principles
    • Whole Team Approach
    • Building Quality In
    • Continuous Improvement and Feedback
    • Ingraining The Agile Testing Mindset (Hands-on Exercise)

Part 2: Testing Techniques

Testing activities can be broken into various categories (or Quadrants) of testing based on their purpose and value. Automated testing can be performed at various levels (the automation pyramid) within a software application, and appropriate testing techniques must be applied to each. This topic provides the learner with a sound understanding of the purpose of various categories of testing, opportunities for automation, and testing techniques so they can be applied appropriately and at the right time within an Agile environment.

Developer testing of individual software units and associated components is critical to detecting implementation defects within the software. Unit and component tests are leveraged within TDD as well. This topic helps the learner to thoroughly understand the purpose and approach to successfully implementing unit and component testing on Agile projects and how testers support developer testing during development cycles.

Test-Driven Development (TDD) and its derivatives, Acceptance Test-Driven Development (ATDD), Behavior-Driven Development (BDD) and Spec by Example are techniques for assuring that Stories are implemented in a manner that satisfies the customers’ needs. This topic helps the learner to thoroughly understand the purpose and approach to successfully implementing these techniques on Agile projects.

Testing of User Stories is critical to successful development of software within an Agile project. This testing is often performed using the techniques above but can be done in other ways as is appropriate or necessary. This topic enables the learner to thoroughly understand the various options for testing User Stories during software development; this is an extension to ATDD to include boundary conditions and other types of testing such as exploratory testing.

  1. Categories of Testing
    • Agile Testing Quadrants
    • Automation Pyramid – Introduction
    • Testing Techniques
  2. Collaborating with Developers
    • Unit and Component Testing
    • Pairing Between Developer and Tester
  3. Example Driven Development
    • Acceptance Test-Driven Development (ATDD)
    • Behavior-Driven Development (BDD)
    • Spec by Example
  4. Feature and Story Testing
    • User Story Testing
    • Feature Testing
    • Exploratory Testing
    • Non-Functional Testing

Part 3: Agile Testing Process

Testing during an Agile project is team-oriented, so it is common for every member of the team to provide some level of testing support. This includes the Product Owner and other Business Representatives and the programmers in addition to the testers. This topic provides the learner with an understanding that within an Agile project, the entire project team is responsible for testing activities, with a specific focus on how this affects specific roles.

Lightweight planning and documentation are typical of Agile projects. The best Agile projects do just enough planning and documenting to support project activities and the needs of the end users. This topic provides the learner with an understanding of how lightweight test strategy and planning is performed on Agile projects, and how decisions are made regarding what type of test documentation, records, metrics, and reports are needed and how much is enough.

Agile projects employ a variety of techniques around the delivery of the product, including Time-boxed Iterations and Continuous Delivery, that all have a very strong focus on testing. This topic helps the learner to appreciate all of the various ways that testing is used in the “End-Game” activities (which don’t just happen at the end of the Agile project!).

Multiple environments are often necessary to support testing activities during iterations and the release process. This topic provides the learner with an understanding of the typical test environments that must be set up and maintained to support testing activities during iterations and releases and how the product must be managed as it progresses through those environments.

Distributed teams are a fact of life in most organizations and must be dealt with to make Agile testing initiatives successful. This topic provides the learner with an understanding of how communication and coordination of testing activities can be most effective on distributed teams.

  1. Roles and Responsibilities
    • Team-Based Testing Approach
    • Typical Business Representative Role in Testing
    • Typical Programmer Role in Testing
    • Typical Tester Role in Testing
    • Role of Test Managers in Agile
  2. Test Strategy and Planning
    • Different Strategies Based on Levels of Precision
    • During Iteration Planning/Kickoff
    • Lightweight Test Plan Documentation
    • Defect Tracking and Management
    • Results Reporting
    • Test Metrics
    • Regression Tests
  3. Successful Delivery
    • Time-Boxed Delivery
    • Continuous Delivery
    • Post-Development Test Cycles
    • Iteration Wrap-Up
    • Definition of a Release/End Game
    • User Acceptance Test (UAT)
    • System-Wide and Cross-Team Testing
    • Post-Release Testing
    • Documentation for Regulatory Requirements
  4. Test Environments and Infrastructure
    • Typical Environments for Test
    • Build Pipeline
    • Automated Builds
    • Testing the Proper Build
    • Test Data Management
  5. Working on Distributed Teams
    • Distributed Team Communication
    • Distributed Team Coordination

Test-Driven Development with Java

Part 1: Agile Overview

Test-Driven Development is a key component of Agile software development and of the overall DevOps movement, so it is helpful to have at least a high-level understanding of Agile practices and Scrum ceremonies, and of how TDD fits into the overall Agile, Scrum, and DevOps landscape. Part 1 serves as a leveling exercise to ensure that every team member is speaking the same language during the upcoming labs and discussions.

  1. What is Agile Software Development
    • DevOps Overview
    • The Agile Manifesto
    • Scrum vs Agile
  2. Components of Agile
    • User Stories
    • Tasks
    • Bugs
    • Automated Builds
    • Automated Tests
    • Continuous Inspection
  3. The Role of TDD in Agile Development
    • Automated Unit Tests
    • Automated Acceptance Tests

Lab: Explore the Board of an Agile Project

  • Kanban Board
  • User Stories
  • Tasks
  • Bugs
  • Work in Progress
  • Burndown Chart

Part 2: Unit Testing

Unit testing is a critical component of Test-Driven Development (TDD). Small units of code are tested throughout the development process, isolating functionality to ensure that individual parts work correctly. A minimal JUnit sketch follows the lab outline below.

  1. Unit Test Fundamentals
    • Reason to do Unit Testing
    • What to Test: Right BICEP
    • CORRECT Boundary Conditions
    • Properties of Good Tests
  2. Frameworks
    • What is JUnit
    • JUnit Building Blocks
    • Test Cases
    • Test Suites
    • Examples
  3. Agile Testing Strategy
    • Agile Testing Quadrant
    • Automation Pyramid
    • Assertions
  4. Test Attributes
    • Setup / TearDown
    • JUnit Lifecycle
    • System Under Test
    • Test Design Strategy
    • Naming our Tests
    • Exceptions

Lab: Introduction to Unit Testing

  • IDE and Project Setup
  • Running our first Unit Test
  • Explore the JUnit framework
  • Test Attributes
  • Assert Statements
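
As a taste of what this lab covers, here is a minimal, hedged JUnit 5 sketch; the Calculator class and its add method are hypothetical stand-ins rather than part of the course's example app.

  import static org.junit.jupiter.api.Assertions.assertEquals;
  import static org.junit.jupiter.api.Assertions.assertThrows;

  import org.junit.jupiter.api.BeforeEach;
  import org.junit.jupiter.api.Test;

  // Hypothetical class under test, shown inline so the sketch is self-contained.
  class Calculator {
      int add(int a, int b) { return Math.addExact(a, b); }
  }

  class CalculatorTest {

      private Calculator calculator;  // the system under test (SUT)

      @BeforeEach
      void setUp() {
          // JUnit lifecycle: a fresh SUT is created before each test method.
          calculator = new Calculator();
      }

      @Test
      void addsTwoPositiveNumbers() {
          assertEquals(5, calculator.add(2, 3));
      }

      @Test
      void overflowRaisesAnException() {
          // A CORRECT-style boundary check at the edge of the int range.
          assertThrows(ArithmeticException.class,
                  () -> calculator.add(Integer.MAX_VALUE, 1));
      }
  }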

Part 3: Test Driven Development

Essential TDD techniques require developers to write programs in short development cycles, and there are critical steps that must be followed: tests are created before the code is written, and once the code passes its tests, it is refactored to meet accepted standards. A short red-green-refactor sketch follows the lab outline below.

  1. TDD Rhythm
    • TDD Overview
    • Red, Green, Refactor
    • TDD Benefits
  2. Sustainable TDD
    • Development without TDD
    • Test Last
    • Test Last in Parallel
    • Test Driven Development
  3. Supporting Practices
    • Collective Ownership
    • Continuous Integration
  4. Eight Wastes of Software Development
    • Ripple effect of defects
    • Partially Done Work
    • Extra Features
    • Relearning
    • Handoffs
    • Task Switching
    • Delays
    • Defects
  5. Test Automation
    • Automate, Automate, Automate
    • Automate Early and Often
    • Additional Topics Identified

Lab: Test Driven Development

  • Start Test Driven Development on our example App
  • Write unit test cases
  • Experience the Red, Green, Refactor process
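
As a hedged illustration of the rhythm described above, one red-green-refactor pass on a hypothetical PriceCalculator (not part of the course's example app) might look like this:

  import static org.junit.jupiter.api.Assertions.assertEquals;

  import org.junit.jupiter.api.Test;

  class PriceCalculatorTest {

      // RED: the test is written first; at that point PriceCalculator.totalWithTax
      // does not exist yet, so the test fails.
      @Test
      void addsTenPercentTaxToTheNetPrice() {
          assertEquals(110, new PriceCalculator().totalWithTax(100));
      }
  }

  // GREEN: write the simplest code that makes the test pass.
  class PriceCalculator {
      int totalWithTax(int netPrice) {
          return netPrice + netPrice / 10;
      }
  }

  // REFACTOR: with the test green, improve names and remove duplication,
  // re-running the test after every small change to stay green.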

Part 4: Principles of Agile Development

TDD is directly influenced by design, so taking design into account during implementation will be a priority. Considering design principles enables teams to experiment with different approaches and move toward a more functional style of programming. A sketch of a hand-rolled test double follows the lab outline below.

  1. Design Principles Overview
  2. Coding Principles
  3. Isolation of the SUT
  4. Developing independently testable units
  5. Test doubles
    • Introducing test doubles
    • Stubs
    • Fakes
    • Mocks

Lab: Continue development on example App

  • Setting up Test doubles for our example app
  • Discuss and implement test dummies and test stubs
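
A hedged sketch of a hand-rolled test double: a hypothetical OrderService (not the course's example app) depends on an ExchangeRateProvider interface, and a stub isolates the SUT from the real, external rate lookup.

  import static org.junit.jupiter.api.Assertions.assertEquals;

  import org.junit.jupiter.api.Test;

  // The dependency is expressed as an interface so it can be doubled.
  interface ExchangeRateProvider {
      double rateFor(String currency);
  }

  // Hypothetical class under test.
  class OrderService {
      private final ExchangeRateProvider rates;
      OrderService(ExchangeRateProvider rates) { this.rates = rates; }

      double totalInCurrency(double amountUsd, String currency) {
          return amountUsd * rates.rateFor(currency);
      }
  }

  class OrderServiceTest {
      @Test
      void convertsTheTotalUsingTheProvidedRate() {
          // Stub: returns a canned answer instead of calling a live service.
          ExchangeRateProvider stubRates = currency -> 0.5;

          OrderService service = new OrderService(stubRates);

          assertEquals(50.0, service.totalInCurrency(100.0, "EUR"), 0.0001);
      }
  }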

Part 5: Refactoring

Refactoring is another essential technique of TDD, and most software development teams already do some form of refactoring regularly. Refactoring can be used in a number of different workflows, which will be explored in this Part; a small before-and-after sketch follows the lab outline below.

  1. Why Refactor?
    • Red, Green, Refactor
    • Benefits
    • Development without TDD
  2. Refactoring Methods
  3. Refactoring Cycle
    • Reduce Local Variable Scope
    • Replace Temp with Query
    • Remove Dead Code
    • Extract Method
    • Remove Unnecessary Code

Lab: Continue our example project

  • Implement new test cases
  • TDD Cycle
  • Discuss and implement Refactoring Needs
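
A small before-and-after sketch of the Extract Method refactoring listed above, using hypothetical invoice-printing code:

  // Before: the calculation and the formatting are tangled together.
  class InvoicePrinterBefore {
      String print(java.util.List<Double> lineItems) {
          double total = 0;
          for (double item : lineItems) {
              total += item;
          }
          return "Total: " + String.format("%.2f", total);
      }
  }

  // After: the calculation is extracted into a named, independently testable method.
  class InvoicePrinterAfter {
      String print(java.util.List<Double> lineItems) {
          return "Total: " + String.format("%.2f", total(lineItems));
      }

      private double total(java.util.List<Double> lineItems) {
          double total = 0;
          for (double item : lineItems) {
              total += item;
          }
          return total;
      }
  }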

Part 6: Pair Programming

Pair Programming is an effective way to improve code quality. In this Part, we will discuss pairing and how it leads to better software design and a lower cost of development.

  1. Pair Programming
    • Pairing Setup
    • Results: Time
    • Results: Defects
    • How it works
  2. Advantages of Pairing
    • Both Halves of the Brain
    • Focus
    • Reduce Interruptions
    • Reduce Resource Constraints
    • Multiple Monitors
    • Challenges
  3. Pairing Techniques
    • Pair Rotation
    • Ping Pong Pairing
    • Promiscuous Pairs
    • Pair Stairs
    • Cross-Functional Pairing

Lab: Experience pair programming and continue developing our example app

Part 7: Acceptance Test Driven Development (ATDD) & Behavior-Driven Development (BDD)

Acceptance tests are an important form of functional specification, and Behavior-Driven Development expresses expected behavior in terms of the examples those tests exercise. In this Part, we will cover the differences between ATDD and BDD and how the two are closely related; a short framework sketch follows the lab outline below.

  1. Acceptance Testing
    • Acceptance Tests
    • Why Acceptance Tests?
    • Acceptance Test Execution
    • Who Writes Acceptance Tests
    • Pair Test Writing
  2. Best Practices for Effective Testing
    • Keys to Good Acceptance Tests
    • Writing Acceptance Criteria
    • Acceptance Test Example
    • Acceptance Test-Driven Development (ATDD)
  3. BDD vs. ATDD
    • Specification by Example
    • BDD Frameworks
    • BDD Examples

Lab: Experience ATDD and BDD

  • Experience ATDD or BDD and discuss the impact to TDD
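
A hedged sketch of how a BDD framework ties an acceptance criterion to code, assuming Cucumber-JVM and a hypothetical Cart class; the Gherkin scenario is shown in the comment above its Java step definitions.

  // Feature file (e.g. cart.feature), written together with the business representative:
  //
  //   Scenario: Adding an item updates the cart total
  //     Given an empty shopping cart
  //     When the customer adds an item priced at 20
  //     Then the cart total is 20

  import static org.junit.jupiter.api.Assertions.assertEquals;

  import io.cucumber.java.en.Given;
  import io.cucumber.java.en.Then;
  import io.cucumber.java.en.When;

  public class CartSteps {

      // Minimal hypothetical Cart so the sketch is self-contained.
      static class Cart {
          private int total;
          void addItem(int price) { total += price; }
          int total() { return total; }
      }

      private Cart cart;

      @Given("an empty shopping cart")
      public void anEmptyShoppingCart() {
          cart = new Cart();
      }

      @When("the customer adds an item priced at {int}")
      public void theCustomerAddsAnItemPricedAt(int price) {
          cart.addItem(price);
      }

      @Then("the cart total is {int}")
      public void theCartTotalIs(int expectedTotal) {
          assertEquals(expectedTotal, cart.total());
      }
  }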

Part 8: Advanced TDD

To implement unit tests efficiently, it is critical that developers understand the properties of good tests and the techniques for handling legacy code and external dependencies. A sketch of a mocking framework in use follows the lab outline below.

  1. TDD Solutions
    • Continuous Unit Testing
    • Collective Ownership
  2. Advanced Unit Testing
    • Unit Testing Legacy Applications
    • Techniques for Legacy Code
    • External Dependencies
    • Mocking frameworks
    • Unit Testing the Database
  3. Outside In vs Inside Out
    • Working with database
    • Working with mocking frameworks
  4. Test Strategy
    • Continuous Integration
    • Batch Execution of Test Cases
  5. Unit Test Examples
    • More Tests
    • Algorithm
  6. Advanced Refactoring

Lab: Demonstration for Mocking/doubles with our example App

  • Working with database
  • Working with mocking frameworks
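
A hedged sketch of a mocking framework in use, assuming Mockito and a hypothetical PaymentGateway that a CheckoutService (also hypothetical) calls as an external dependency:

  import static org.junit.jupiter.api.Assertions.assertTrue;
  import static org.mockito.Mockito.mock;
  import static org.mockito.Mockito.verify;
  import static org.mockito.Mockito.when;

  import org.junit.jupiter.api.Test;

  // Hypothetical external dependency, unavailable (or too slow) in unit tests.
  interface PaymentGateway {
      boolean charge(String account, double amount);
  }

  // Hypothetical class under test.
  class CheckoutService {
      private final PaymentGateway gateway;
      CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }

      boolean checkout(String account, double amount) {
          return gateway.charge(account, amount);
      }
  }

  class CheckoutServiceTest {
      @Test
      void chargesTheGatewayForTheOrderAmount() {
          PaymentGateway gateway = mock(PaymentGateway.class);
          when(gateway.charge("acct-1", 25.0)).thenReturn(true);

          CheckoutService service = new CheckoutService(gateway);

          assertTrue(service.checkout("acct-1", 25.0));
          // Verify the interaction with the external dependency.
          verify(gateway).charge("acct-1", 25.0);
      }
  }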

Part 9: Simulation

Experience Agile development with test-driven development and pair programming.

Test Automation Boot Camp (ICP-ATA)

Part 1: Introducing Test Automation

  1. Watch an Automated Test
  2. Requirements
    1. Exercise: Identify different requirement types
    2. Exercise: Make requirements testable
  3. Testing Types
    1. Black-box vs. white-box
    2. System testing vs. integration testing vs. unit testing vs. acceptance testing
  4. Application Types
    1. Process-driven or data-driven: no "one size fits all"
    2. Exercise: Define different kinds of tests for different application types
  5. The Alphabet Soup of Tools and Methods
    1. Selenium. Gherkin. Cucumber. HPQC. Jira—you hear all of these. What do they mean? How do they fit together? Which do you need and which can you safely ignore?
    2. Exercise: Testing facts and fallacies

Part 2: Preparing for Test Automation

  1. Effective Partitioning Schemes
    1. Exercise: Structure a system into processes (actor goals), activities, actions
  2. Use Cases and Test Cases
    1. Exercise: Create a test case for a single activity from a written use case (happy path)
  3. Behavior-Driven Languages
    1. Exercise: Write a test case as a Gherkin scenario
  4. Modeling and Diagramming Techniques
    1. Exercise: Construct a UI navigation map for normal and alternate flows
  5. Equivalence Partitioning and Boundary Value Analysis
    1. Exercise: Define input value choices and use TAME to construct test alternatives

Part 3: Recording Automated Tests

  1. Automated Test Steps:
    1. Pre-checks, Inputs, Events, and Post-Checks
    2. Exercise: Write the steps of an automated test
  2. Record and playback a single test
    1. Exercise: Record and play back a test in Selenium WebDriver
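
Exported recordings usually boil down to a short WebDriver script. A hedged Java sketch, assuming Selenium WebDriver with ChromeDriver available and a hypothetical login page (the URL, element IDs, and expected title are placeholders):

  import org.openqa.selenium.By;
  import org.openqa.selenium.WebDriver;
  import org.openqa.selenium.chrome.ChromeDriver;

  public class LoginPlaybackTest {

      public static void main(String[] args) {
          WebDriver driver = new ChromeDriver();
          try {
              // Inputs and events, in the order the recorder captured them.
              driver.get("https://example.test/login");          // placeholder URL
              driver.findElement(By.id("username")).sendKeys("demo");
              driver.findElement(By.id("password")).sendKeys("secret");
              driver.findElement(By.id("login-button")).click();

              // Post-check: a simple assertion on the resulting page title.
              if (!driver.getTitle().contains("Dashboard")) {
                  throw new AssertionError("Expected the dashboard after login");
              }
          } finally {
              driver.quit();
          }
      }
  }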

Part 4: Dissecting Automated Tests

  1. Recorded Test Steps
    1. Exercise: Examine recorded steps in Selenium
  2. UI Element Repositories
    1. Exercise: Examine the components of a UI page
    2. Exercise: Create path expressions to locate page elements

Part 5: Assembling Automated Tests from Modules

  1. Test Suites, Test Cases, and Modules
    1. Exercise: Partition a recorded test into reusable modules
  2. Modular Test Development
    1. Exercise: Construct test cases from existing modules
    2. Exercise: Construct new modules for alternate behaviors
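
One common way to package recorded steps as a reusable module is a page-object-style class; a hedged sketch, assuming Selenium WebDriver and the same hypothetical login page as in Part 3:

  import org.openqa.selenium.By;
  import org.openqa.selenium.WebDriver;

  // A reusable module: any test case that needs a logged-in session calls this
  // method instead of repeating the recorded steps.
  public class LoginModule {

      private final WebDriver driver;

      public LoginModule(WebDriver driver) {
          this.driver = driver;
      }

      public void loginAs(String user, String password) {
          driver.get("https://example.test/login");   // placeholder URL
          driver.findElement(By.id("username")).sendKeys(user);
          driver.findElement(By.id("password")).sendKeys(password);
          driver.findElement(By.id("login-button")).click();
      }
  }

  // Usage inside a test case assembled from modules:
  //   new LoginModule(driver).loginAs("demo", "secret");
  //   new SearchModule(driver).searchFor("widgets");   // hypothetical second module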

Part 6: Coding Automated Tests

  1. Code always, Code sometimes, or Code never
  2. The Skills Pyramid
  3. Open-source and commercial tools
    1. Exercise: Compare tools and team capabilities

Part 7: Exploiting Automated Testing

  1. Test-driven development: test cases as specifications
  2. Data-driven tests (see the sketch after this list)
    1. Exercise: Define data tables for equivalence partitions and boundary value analysis
  3. Multi-platform and cross-browser testing
    1. Exercise: Run test cases on multiple web browsers
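
A hedged sketch of a data-driven test, assuming JUnit 5 parameterized tests and a hypothetical AgeValidator that accepts ages 18 through 65; the data rows cover the equivalence partitions and their boundary values:

  import static org.junit.jupiter.api.Assertions.assertEquals;

  import org.junit.jupiter.params.ParameterizedTest;
  import org.junit.jupiter.params.provider.CsvSource;

  // Hypothetical class under test: valid ages are 18..65 inclusive.
  class AgeValidator {
      boolean isValid(int age) { return age >= 18 && age <= 65; }
  }

  class AgeValidatorTest {

      @ParameterizedTest
      @CsvSource({
              "17, false",   // just below the lower boundary
              "18, true",    // lower boundary
              "40, true",    // middle of the valid partition
              "65, true",    // upper boundary
              "66, false"    // just above the upper boundary
      })
      void checksEachPartitionAndBoundary(int age, boolean expected) {
          assertEquals(expected, new AgeValidator().isValid(age));
      }
  }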

Part 8: Enabling Continuous Integration with Test Automation

  1. Regression test suites
  2. Development events trigger test runs
  3. Configure test subsets
    1. Exercise: Define a minimal "smoke test" and contrast with a full regression suite (see the sketch after this list)
  4. Report test results
    1. Exercise: Design a dashboard for quick reporting of test results
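
One way to configure test subsets is to tag tests and let the continuous integration job select by tag; a hedged sketch using JUnit 5 tags (the build-tool configuration that filters on these tags is assumed, not shown):

  import static org.junit.jupiter.api.Assertions.assertTrue;

  import org.junit.jupiter.api.Tag;
  import org.junit.jupiter.api.Test;

  class CheckoutFlowTest {

      @Test
      @Tag("smoke")        // minimal subset: run on every commit
      void applicationStartsAndLoginPageLoads() {
          assertTrue(true); // placeholder for a fast, broad health check
      }

      @Test
      @Tag("regression")   // full suite: run nightly or before a release
      void discountCodesStillApplyToInternationalOrders() {
          assertTrue(true); // placeholder for a slower, detailed check
      }
  }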

Part 9: Course Summary

  1. Quiz: Testing facts and fallacies
  2. Exercise: Plan your own test automation strategy