Boundary Value Analysis & Equivalence Partitioning


Boundary value analysis and equivalence partitioning are the first test design techniques every QA engineer should master. Together, they let you minimize the number of test cases while efficiently detecting the places where bugs are most likely to hide.

Used correctly, boundary value analysis and equivalence partitioning let you resist the urge to test every possible value — and instead focus your testing exactly where bugs are most likely to appear.

📌 Who This Article Is For

  • QA engineers and developers who want to learn test design fundamentals systematically
  • Anyone who struggles with “how much testing is enough?”
  • Those whose test cases have grown too numerous to manage
  • Engineers who want to apply boundary values and equivalence classes to automated test design

✅ What You’ll Learn

  • The concepts and usage of equivalence partitioning and boundary value analysis
  • How to reduce test cases while maintaining quality
  • How to apply these techniques to the form validation scenarios you encounter every day

👤 About the Author

Written by Yoshi, a QA and test automation engineer with 15+ years of hands-on experience. Boundary value analysis and equivalence partitioning are techniques used on real projects every day — and they connect directly to automated test design. Code is publicly available on GitHub: github.com/YOSHITSUGU728/automated-testing-portfolio

📌 Key Takeaways

  • Equivalence Partitioning: Group data that behaves the same way and test with one representative value per group
  • Boundary Value Analysis: Test the values at the edges (boundaries) of each group
  • Combining both techniques lets you achieve high bug detection rates with a minimal number of test cases

If you try to test every possible value, test cases multiply without end. But time and resources are always finite. Test design techniques help solve problems like these:

  • Test cases become unmanageable in volume
  • It’s unclear what level of testing is “enough”
  • The places most likely to contain bugs get overlooked

This article explains the two most widely used test design techniques in QA — equivalence partitioning and boundary value analysis — with concrete examples throughout.

What Are Test Design Techniques and Why Do You Need Them?

Test design techniques are a systematic approach to answering: “which values should I test to find bugs efficiently with the fewest test cases?”

For example, testing an age input field (valid range: 1–120) by trying every value from 1 to 120 would be thorough — but completely impractical. Test design techniques let you maintain the same level of confidence while drastically reducing the number of tests needed.

Problems that arise without test design:

  • Test cases multiply without a clear ceiling
  • It becomes vague what has and hasn’t been tested
  • The “boundaries” where bugs lurk get missed
  • Time runs out before critical tests are executed
  • Different team members test different things with no consistency

Comparison | Without Design Techniques | With Design Techniques
Number of test cases | Overwhelming, hard to manage | Minimal and well-organized
Bug detection efficiency | Many gaps and missed areas | Key areas covered systematically
Team sharing | Ad hoc and inconsistent | Reproducible and easy to share
Compatibility with automation | Too many tests, slow to run | Works naturally with parametrize

① Equivalence Partitioning

Equivalence partitioning means dividing input data into groups (equivalence classes) where each group is expected to behave the same way, then testing with just one representative value per group.

Because any value within a group should produce the same result, the entire group can be represented by a single test value.

Example: Age Input Field (Valid Range: 1–120)

Equivalence Class | Value Range | Representative Value | Expected Result
✅ Valid class | 1–120 | 50 | Accepted normally
❌ Invalid class ① | 0 or below | -1 or 0 | Error message shown
❌ Invalid class ② | 121 or above | 121 or 200 | Error message shown
❌ Invalid class ③ | Non-numeric input | “abc” or empty string | Error message shown
💡 Key Point: Every value between 1 and 120 should behave identically. If 50 passes, 10 and 80 should too — that’s the logic behind equivalence partitioning. A value near the middle of the range is typically a good representative choice.
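As a concrete illustration, the table above can be sketched as a small validator. The function name `validate_age` and the string-based input are assumptions for this example, not code from a real project:

```python
def validate_age(value: str) -> bool:
    """Return True if the input is an integer in the valid range 1-120."""
    try:
        age = int(value)
    except ValueError:
        return False  # invalid class ③: non-numeric or empty input
    return 1 <= age <= 120

# One representative value per equivalence class is enough:
assert validate_age("50")        # valid class (1-120)
assert not validate_age("0")     # invalid class ① (0 or below)
assert not validate_age("121")   # invalid class ② (121 or above)
assert not validate_age("abc")   # invalid class ③ (non-numeric)
assert not validate_age("")      # invalid class ③ (empty string)
```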

Tips for Finding Equivalence Classes

Look at the conditions in your specification to identify classes.

Specification Condition | Valid Class | Invalid Classes
Integer from 1 to 120 | Integer 1–120 | ≤0 / ≥121 / decimal / string
Password: 8–20 characters | String 8–20 chars | ≤7 chars / ≥21 chars / empty
Email address format | xxx@xxx.xxx format | No @ / no domain / empty
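The password row of this table translates directly into representative values. `is_valid_length` is a hypothetical helper used only for this sketch:

```python
def is_valid_length(password: str) -> bool:
    """Valid class from the spec: string of 8-20 characters."""
    return 8 <= len(password) <= 20

# One representative per class, derived straight from the spec condition
assert is_valid_length("a" * 14)      # valid class: 8-20 chars
assert not is_valid_length("a" * 7)   # invalid class: 7 chars or fewer
assert not is_valid_length("a" * 21)  # invalid class: 21 chars or more
assert not is_valid_length("")        # invalid class: empty string
```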

② Boundary Value Analysis

Boundary value analysis means focusing your testing on the values at the edges (boundaries) of each equivalence class. In practice, bugs cluster near boundaries: comparison-operator mistakes (>=, <=, etc.) are common there, and boundary values are exactly the ones developers tend to overlook.

Why Do Bugs Cluster at Boundaries?

Here are the most common developer mistakes that boundary testing catches.

Common Mistake | Incorrect Code | Correct Code
Off-by-one error | if age > 1 and age < 120 | if age >= 1 and age <= 120
Wrong comparison operator | if length > 8 (rejects exactly 8 chars) | if length >= 8 (8 chars accepted)
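To see why the boundary matters, here is a minimal sketch contrasting the buggy and fixed comparisons (both function names are invented for this example):

```python
def is_valid_age_buggy(age: int) -> bool:
    # Off-by-one error: > and < wrongly exclude the boundary values 1 and 120
    return age > 1 and age < 120

def is_valid_age_fixed(age: int) -> bool:
    return 1 <= age <= 120

# A mid-range value hides the bug: both versions agree on 50
assert is_valid_age_buggy(50) == is_valid_age_fixed(50)

# Only the boundary values expose the discrepancy
assert is_valid_age_fixed(1) and not is_valid_age_buggy(1)
assert is_valid_age_fixed(120) and not is_valid_age_buggy(120)
```

A test suite that only samples mid-range values would pass against both versions; the boundary values are what catch the defect.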

How to Choose Boundary Values (3-Point Method)

For each boundary, test three points: one below the boundary, the boundary value itself, and one above it.

Boundary | Boundary − 1 | Boundary Value | Boundary + 1
Lower bound (1) | 0 ❌ Invalid | 1 ✅ Valid | 2 ✅ Valid
Upper bound (120) | 119 ✅ Valid | 120 ✅ Valid | 121 ❌ Invalid
💡 2-Point vs 3-Point Method: Strictly speaking, the 2-point method (testing the boundary value plus one value just outside it) is sufficient. Adding the value just inside the boundary (3-point method) improves coverage further. Choose based on the time available for testing.
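One way to make boundary selection mechanical is a tiny helper that emits the three points per boundary. `three_point_values` is an illustrative name, not a standard API:

```python
def three_point_values(boundary: int) -> list[int]:
    """Return the 3-point test values for one boundary: just below, on, just above."""
    return [boundary - 1, boundary, boundary + 1]

# Age field (valid range 1-120): six values cover both boundaries
values = three_point_values(1) + three_point_values(120)
print(values)  # [0, 1, 2, 119, 120, 121]
```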

③ Combining Equivalence Partitioning and Boundary Value Analysis

In practice, both techniques are used together. The standard pattern is to use equivalence partitioning to define your groups, then apply boundary value analysis to test the edges of each group.

Real Example: Password Length Validation (8–20 Characters)

Test Case | Value | Classification | Expected Result
TC-01 | 7 chars | Lower boundary − 1 (invalid) | ❌ Error
TC-02 | 8 chars | Lower boundary value (valid) | ✅ Pass
TC-03 | 9 chars | Lower boundary + 1 (valid) | ✅ Pass
TC-04 | 14 chars | Valid class representative (equivalence partitioning) | ✅ Pass
TC-05 | 19 chars | Upper boundary − 1 (valid) | ✅ Pass
TC-06 | 20 chars | Upper boundary value (valid) | ✅ Pass
TC-07 | 21 chars | Upper boundary + 1 (invalid) | ❌ Error
TC-08 | 0 chars (empty) | Invalid class representative (equivalence partitioning) | ❌ Error

These 8 test cases cover virtually every important pattern for password length validation. Running these 8 reliably delivers higher quality than a disorganized list of 50.

④ Applying These Techniques to pytest parametrize

Test cases designed with equivalence partitioning and boundary value analysis map perfectly to pytest’s @pytest.mark.parametrize. You can pass the values from your design table directly as parameters.

import pytest

def validate_password_length(password: str) -> bool:
    """Validate that the password is between 8 and 20 characters"""
    return 8 <= len(password) <= 20

# Equivalence partitioning x boundary value analysis → directly parametrized
@pytest.mark.parametrize("password, expected", [
    # Boundary values (lower bound)
    ("a" * 7,  False),  # TC-01: 7 chars → invalid
    ("a" * 8,  True),   # TC-02: 8 chars → valid (lower boundary)
    ("a" * 9,  True),   # TC-03: 9 chars → valid

    # Valid class representative (equivalence partitioning)
    ("a" * 14, True),   # TC-04: 14 chars → valid

    # Boundary values (upper bound)
    ("a" * 19, True),   # TC-05: 19 chars → valid
    ("a" * 20, True),   # TC-06: 20 chars → valid (upper boundary)
    ("a" * 21, False),  # TC-07: 21 chars → invalid

    # Invalid class representative (equivalence partitioning)
    ("",       False),  # TC-08: empty string → invalid
])
def test_password_validation(password, expected):
    assert validate_password_length(password) == expected
💡 Pro Tip: The best practice in production is to build your test case table first, then translate it directly into code. Keeping comments that explain why each value is being tested makes the tests far easier to maintain later.

⑤ Common Failure Patterns in Test Design

Even after learning these techniques, these mistakes are easy to make in practice. Knowing them in advance will significantly improve your design accuracy.

① Testing only boundary values without defining equivalence classes

Only testing the boundary values (0, 1, 2, 119, 120, 121, etc.) and calling it done is a common trap. Without equivalence classes, you'll miss entire invalid classes — like non-numeric input (strings, empty values, etc.). Always define your equivalence classes with partitioning first, then select boundary values from them.

② Testing only invalid classes and skipping valid class tests

Verifying that errors are shown without checking that valid input actually works is another common mistake. Skipping valid class tests (happy path) means you never confirm that correct inputs produce correct outputs. Both valid and invalid class tests are required.

③ Creating test cases by intuition instead of reading the specification

Picking values like "let's try 5, 10, and 100" without a reason is the opposite of test design. Equivalence classes must always be derived from the conditions in the specification (e.g. "1–120", "8–20 characters"). If the spec is vague, clarify it before designing tests.

④ Testing only one boundary value per boundary

Thinking "the upper limit is 120, so I just need to test 120" misses the point of boundary value analysis. Its power comes from testing multiple points around each boundary. Testing only 120 means you never verify that 121 correctly errors out or that 119 correctly passes. Always test at least the boundary value and one point outside it.

FAQ

Q. Which should I do first — equivalence partitioning or boundary value analysis?

Always start with equivalence partitioning. Identify your groups (equivalence classes) first, then apply boundary value analysis to test the boundaries of each group. Boundary values can't be defined without groups, so partitioning is the foundation everything else builds on.

Q. Do I really need to test valid classes? Can't I just test invalid ones?

Both are necessary. Valid class tests (happy path) confirm that correct inputs produce correct outputs. Invalid class tests (error cases) confirm that incorrect inputs are properly rejected. Skipping either side leaves gaps in quality coverage.

Q. Do I always need to test all three points (boundary − 1, boundary, boundary + 1)?

The 3-point method is ideal when time allows, but at minimum always test the boundary value itself and at least one point outside it. In practice, apply 3-point testing to high-priority cases and simplify to 2-point for lower-priority ones.

Q. Can these techniques be used for automated tests, not just manual testing?

These techniques actually pair exceptionally well with automated testing. With pytest's @pytest.mark.parametrize, you can pass the test case table you designed directly as parameters. The design and implementation connect naturally, so the intent behind each test is readable directly from the code.

📋 Summary

  • Equivalence Partitioning: Group values that behave the same way and test with one representative per group — reduces test count significantly
  • Boundary Value Analysis: Focus tests on edge values to efficiently catch off-by-one errors and other boundary-related bugs
  • Use them together: Define groups with partitioning, then apply 3-point boundary testing — this is the standard production pattern
  • Works great with pytest: @pytest.mark.parametrize lets you convert your design table directly into test code

The biggest benefit of learning test design techniques is the freedom from the pressure of "I have to test everything." With equivalence partitioning and boundary value analysis, you can achieve high reliability with a small, focused set of test cases. Start by applying them to a form validation in a project you're already working on.
