The goal of test automation is not to automate everything — it’s to identify what’s worth automating and maximize the return on your investment.
📌 This article is for:
- Engineers who want to start automating but aren’t sure where to begin
- Teams who tried to automate everything — and watched maintenance costs explode
- QA engineers who struggle with deciding when to use manual vs automated testing
- Lead engineers or QA leads who want to give their team a shared decision framework
✅ What you’ll learn in this article
- 5 characteristics of tests that are well-suited for automation
- 4 characteristics of tests that should stay manual
- A practical checklist for making automation decisions on real projects
- The ideal division of labor between manual and automated testing
📌 The Bottom Line
Tests worth automating are repetitive, have clear pass/fail criteria, and are prone to human error when run by hand. Tests to keep manual require human judgment, have frequently changing specs, or are run only once. Making this distinction is the first step toward automation that actually lasts.
“We introduced test automation, but now all we do is fix broken tests.” “Nobody trusts the automated test results anymore.” — Most of these failures share a common root cause: trying to automate tests that shouldn’t be automated.
Test automation is not a silver bullet. When you try to automate everything, the cost of maintaining tests will eventually outpace the cost of running them manually. In this article, we’ll cover the practical decision framework for knowing when to automate — and when to stop.
Manual vs Automated Testing: Strengths at a Glance
Before diving into which tests to automate, let’s clarify the strengths of each approach. This isn’t about which is better — they’re good at different things.
| Category | 🙋 Manual Testing | 🤖 Automated Testing |
|---|---|---|
| Execution Speed | 🐢 Slow | ⚡ Fast |
| Repeated Execution | ❌ Fatigue and errors | ✅ Perfectly consistent |
| Intuition & Aesthetics | ✅ Strong | ❌ Weak |
| Adapting to Spec Changes | ✅ Immediate | ❌ Requires code changes |
| Large Data Sets | ❌ Limited | ✅ Handles easily |
| Upfront Cost | ✅ Low | ❌ High (code authoring) |
✅ 5 Characteristics of Tests Worth Automating
Tests with the following characteristics deliver high ROI when automated.
✅ Signs a test is a good automation candidate
- Runs repeatedly before every release (regression testing)
- Has clear pass/fail criteria — “if X is displayed, it passes”
- Requires covering a large number of data patterns
- Repetitive and tedious for humans — prone to fatigue-induced errors
- Needs to be verified across multiple browsers or environments
**Regression testing:** Verifying that existing features still work after every new release. Running this manually before each deploy is unsustainably expensive.
**Large data patterns:** Testing 100+ input variations by hand is slow and error-prone as concentration fades. Automation runs them all in seconds.
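As a sketch of what data-pattern coverage looks like in code, Python's built-in `unittest` can loop a table of cases through `subTest` so every pattern is reported individually. The `is_valid_email` function here is a hypothetical toy under test, not from any real project:

```python
import unittest

# Hypothetical function under test — a deliberately simple email validator.
def is_valid_email(value: str) -> bool:
    domain = value.split("@")[-1]
    return "@" in value and "." in domain and " " not in value

class EmailPatternTest(unittest.TestCase):
    # A table of input patterns; real suites often cover 100+ rows like these.
    CASES = [
        ("user@example.com", True),
        ("no-at-sign.com", False),
        ("user@nodot", False),
        ("two words@example.com", False),
    ]

    def test_all_patterns(self):
        # subTest keeps going after a failure, so one run reports every bad row.
        for value, expected in self.CASES:
            with self.subTest(value=value):
                self.assertEqual(is_valid_email(value), expected)
```

Run it with `python -m unittest`; adding another hundred rows costs nothing but a line each.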
**Cross-browser testing:** Running the same test suite across Chrome, Firefox, and Safari manually triples your effort. Automation can run them in parallel.
**API testing:** Status codes, response values, and response times are all numerically defined, making API tests an ideal automation candidate.
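To make "numerically defined" concrete, here's a minimal Python sketch of the purely mechanical checks an API test performs. `ApiResponse` and `check_response` are hypothetical names standing in for whatever your HTTP client returns:

```python
from dataclasses import dataclass

# Hypothetical stand-in for an HTTP client's response object.
@dataclass
class ApiResponse:
    status_code: int
    body: dict
    elapsed_seconds: float

def check_response(resp: ApiResponse) -> list:
    """Return failure messages; an empty list means the test passes."""
    failures = []
    if resp.status_code != 200:
        failures.append(f"expected status 200, got {resp.status_code}")
    if "id" not in resp.body:
        failures.append("response body is missing the 'id' field")
    if resp.elapsed_seconds > 3.0:
        failures.append(f"response took {resp.elapsed_seconds:.2f}s (limit: 3s)")
    return failures

# Every criterion is numeric or structural — no human judgment required.
assert check_response(ApiResponse(200, {"id": 42}, 0.3)) == []
```

Because each check is a comparison against a number or a field name, a machine can make the call with perfect consistency.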
**CI smoke tests:** A minimal set of “is the system alive?” checks that run on every push. Completes in minutes and catches regressions at the earliest possible moment.
❌ 4 Characteristics of Tests to Keep Manual
Forcing automation onto tests with these characteristics wastes both the upfront cost of building them and the ongoing cost of maintaining them.
❌ Signs a test should stay manual
- Evaluating design, layout, or visual appearance — requires human aesthetic judgment
- Specs change frequently — code maintenance costs will exceed the benefit
- Tests that will only be run once
- Exploratory testing — finding unexpected bugs through intuition and curiosity
**Design and visual evaluation:** “Is this button color right?” “Does the font look consistent?” “Is the spacing balanced?” — These require human aesthetic judgment. Automated tools have no concept of “beautiful.”
**Frequently changing specs:** In the early stages of a project when requirements shift weekly, updating test code every time eliminates the benefit of automation entirely.
**Exploratory testing:** “Something feels off here.” “What happens if I try this?” — Exploratory testing harnesses human intuition and curiosity to find unexpected bugs. It’s highly effective, but inherently can’t be scripted.
**One-time tests:** Data migrations, campaign-period checks, or any test you’ll only run once — the investment in writing automation code will never pay off.
⚠️ The Gray Zone: Tests That Could Go Either Way
For some tests, the call isn’t a simple yes or no. Use the following guidance to decide.
| Test Type | Verdict | Reasoning / Conditions |
|---|---|---|
| E2E tests for new features | △ Conditional | Wait for specs to stabilize before automating. Explore manually first. |
| Error & edge case tests | ◎ Recommended | Clear pass/fail criteria = high automation value (e.g., verify 404 / 500 status) |
| Performance testing | ◎ Recommended | If you have a defined threshold (e.g., response under 3 seconds), automate it |
| Security testing | △ Partial | Basic auth errors (403) can be automated. Deep vulnerability scanning needs specialists. |
| Usability testing | ✕ Keep manual | “Is this easy to use?” is a human judgment call — not something automation can evaluate. |
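The performance row above hinges on having a defined threshold. As a minimal sketch of what a threshold-based check can look like (function names and the stand-in workload are my own, assuming the 3-second limit from the table):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

def assert_within(elapsed: float, threshold: float = 3.0) -> None:
    """Fail loudly when the measured time exceeds the agreed threshold."""
    if elapsed > threshold:
        raise AssertionError(f"{elapsed:.2f}s exceeds the {threshold}s threshold")

# Example: a trivial stand-in workload instead of a real request or page load.
_, took = timed(sum, range(1000))
assert_within(took, threshold=3.0)
```

Once the threshold is written down as a number, the test is automatable; without one, "fast enough" stays a judgment call.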
A Practical Automation Decision Checklist
When in doubt, run through this checklist. If you answer YES to three or more questions, the test is worth seriously considering for automation.
📋 Automation Decision Checklist
|   | Question | Guidance |
|---|---|---|
| ☑ | Will it be run repeatedly? | If it runs at least once a month, automation is likely worth it |
| ☑ | Are the pass/fail criteria clearly defined? | Can you say “if X happens, it passes”? |
| ☑ | Is it slow or error-prone when done manually? | Repetitive, tedious tasks are prime automation candidates |
| ☑ | Are the specs stable? | Frequent spec changes make automation expensive to maintain |
| ☑ | Does it need to run across multiple browsers or environments? | Parallel execution is one of automation’s biggest advantages |
| ☑ | Do you want it to run automatically in CI/CD? | Tests that trigger on every push deliver the highest ongoing value |
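If your team wants the rule of thumb in executable form, a minimal sketch might look like this (the question wording and function name are my own, following the checklist above):

```python
# The six checklist questions, in order (wording paraphrased from the table).
CHECKLIST = [
    "Will it be run repeatedly?",
    "Are the pass/fail criteria clearly defined?",
    "Is it slow or error-prone when done manually?",
    "Are the specs stable?",
    "Does it need to run across multiple browsers or environments?",
    "Do you want it to run automatically in CI/CD?",
]

def should_automate(answers: list) -> bool:
    """Apply the rule of thumb: three or more YES answers → consider automating."""
    assert len(answers) == len(CHECKLIST), "answer every question"
    return sum(answers) >= 3
```

Encoding the rule isn’t the point — the point is that the whole team applies the same cutoff instead of arguing case by case.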
The Ideal Division of Labor Between Manual and Automated Testing
The ultimate goal is a state where automated tests handle all routine coverage, freeing people to focus on exploratory testing and quality strategy.
| 🤖 Let Automation Handle | 🙋 Keep for Humans |
|---|---|
| Full regression test suite | Exploratory testing & new feature validation |
| CI/CD smoke tests | UI appearance & usability evaluation |
| Large-scale data validation | Quality strategy & test design |
| API happy-path & error testing | Bug analysis & quality improvement proposals |
| Cross-browser & multi-environment testing | Developer feedback & collaboration |
🔑 The Right Mindset
- Automation doesn’t make people redundant — it elevates them to higher-value work
- Doing manual testing well — especially exploratory testing — is a genuine and respected QA skill
- “Automate selectively and thoughtfully” produces better long-term quality than “automate everything”
Summary
In this article, we covered the practical framework for deciding which tests to automate — and which ones to keep manual.
📋 Key Takeaways
- Good automation candidates: repetitive, clear criteria, large data sets, cross-browser
- Keep manual: human judgment required, frequently changing specs, one-time runs, exploratory
- Gray zone: ask “will it be run repeatedly?” and “are the specs stable?”
- Use the checklist (3+ YES = worth automating) to make consistent, team-wide decisions
- The ideal state: automation owns routine coverage, humans own exploration and strategy
Knowing what to automate is just as important a skill as knowing how to automate. Share this checklist with your team, and use it to build automation that stays useful — not automation that becomes a burden.
Ready to put automation into practice? Start with E2E Test Automation with Playwright (Beginner’s Guide) 👇
