How to Output Python API Test Results to CSV & HTML Report | pytest × requests

test-automation

Automating API tests with Python is only half the job: if you can’t share the results, the work isn’t much use in a real-world project.

This article explains how to automatically output API test results from pytest and requests into both an HTML report (pytest-html) and a CSV file. All code is production-ready and usable by practicing QA engineers.

📌 Who This Article Is For

  • QA engineers who want to output API test results as HTML reports or CSV files
  • Those who want to learn how to use pytest-html
  • Those who want to use test results as evidence for their portfolio or client reports
  • Those who want to learn how to write test results to CSV with Python

What You Will Learn

  • How to configure pytest-html to auto-generate HTML reports
  • How to write API test results to CSV using Python’s csv module
  • How to add custom information (timestamp, status, response time) to test results
  • How to create evidence usable for portfolios and client reporting

👨‍💻 About the Author

I work as a QA engineer handling API test automation with Python, pytest, and requests in real-world projects. All code used in this article is publicly available on GitHub and has been verified to work as described. View code on GitHub →


00. Why API Test Reporting and CSV Output Matter

In real-world projects, it’s not enough to just run tests — you also need a system to record and share the results.

Output Format   | Use Case                     | Benefits
HTML Report     | Visualizing test results     | Easy to read in a browser, easy to share on GitHub
CSV File        | Recording and analyzing data | Opens in Excel, enables comparison with past results
Terminal Output | Immediate confirmation       | Useful for debugging during development

💡 Key Takeaway: If you’re publishing a portfolio, simply including an HTML report in your GitHub repository instantly makes it look more professional. It can also be shared directly with clients as evidence.


01. API Test Report Environment with pytest + requests

If you haven’t set up your environment yet, install the required libraries first.

pip install requests pytest pytest-html

Library                | Role
pytest                 | Running and managing tests
pytest-html            | Auto-generating HTML reports
csv (standard library) | Writing to CSV files (no extra install needed)

02. How to Auto-Generate Python API Test HTML Reports

pytest.ini Configuration (One-Time Setup)

Place the following configuration at your project root. Once set, pytest automatically generates an HTML report every time it runs — no additional code required.

# pytest.ini
[pytest]
addopts = --html=report.html --self-contained-html

With this in place, report.html is regenerated on every pytest run; no extra command-line flags are needed.

Run Command

# Run tests and auto-generate HTML report
pytest test_api.py -v

Folder Structure After Running

project/
├── pytest.ini
├── test_api.py
└── report.html   ← Auto-generated!

💡 Key Takeaway: Adding --self-contained-html embeds CSS and images directly into the HTML, producing a single self-contained file. This makes uploading to GitHub or sharing with clients straightforward.
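
pytest-html also exposes a few hooks you can use from conftest.py. As a small optional sketch (assuming pytest-html 2.1 or later, which provides the pytest_html_report_title hook), you can replace the default report title with your project name:

# conftest.py (optional): customize the pytest-html report title
# Assumes pytest-html 2.1+, which provides the pytest_html_report_title hook
def pytest_html_report_title(report):
    # Shown as the heading of report.html instead of the default title
    report.title = "API Test Report"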


03. How to Write Python API Test Results to CSV

Python’s csv module is part of the standard library, so no additional installation is needed. Using this module, you can easily save test results to a CSV file.

Basic CSV Output Code

Combining Python’s csv and datetime modules lets you write the test ID, test name, result, status code, and response time to CSV one row at a time.

import csv
import datetime
import requests

BASE_URL = "https://jsonplaceholder.typicode.com"
CSV_FILE = "test_results.csv"

def write_result_to_csv(tc_id, test_name, status, status_code, response_time_ms, memo=""):
    """Write test result to CSV"""
    # ✅ utf-8-sig: prevents garbled text when opening in Excel
    with open(CSV_FILE, mode="a", newline="", encoding="utf-8-sig") as f:
        writer = csv.writer(f)
        writer.writerow([
            datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
            tc_id,
            test_name,
            status,
            status_code,
            f"{response_time_ms:.0f}ms",
            memo
        ])

def test_get_user_with_csv():
    """TC01: GET test + CSV output"""
    # ✅ timeout=5: won't hang forever if the API goes down
    # ✅ response.elapsed: response time measured by requests itself (more accurate than timing the call manually)
    response = requests.get(f"{BASE_URL}/users/1", timeout=5)
    elapsed_ms = response.elapsed.total_seconds() * 1000

    try:
        assert response.status_code == 200
        write_result_to_csv("TC01", "GET /users/1", "PASS", response.status_code, elapsed_ms)
        print(f"\n✅ TC01 PASS | {elapsed_ms:.0f}ms")
    except AssertionError:
        write_result_to_csv("TC01", "GET /users/1", "FAIL", response.status_code, elapsed_ms,
                            f"Expected:200 Got:{response.status_code}")
        raise

💡 Key Takeaway: Using mode="a" enables append mode. Results are added every time tests run, preserving a history of past executions.
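
Because the file only grows in append mode, you can also read it back later for a quick summary. The sketch below is an illustrative helper (not part of the test suite) that counts PASS rows using the standard library; the column name "Result" matches the header defined in the next section.

import csv

CSV_FILE = "test_results.csv"

def summarize_results():
    """Count PASS rows accumulated in the CSV (illustrative helper)"""
    with open(CSV_FILE, newline="", encoding="utf-8-sig") as f:
        rows = list(csv.DictReader(f))
    passed = sum(1 for r in rows if r["Result"] == "PASS")
    print(f"{passed}/{len(rows)} results are PASS across all recorded runs")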


04. How to Auto-Create the CSV Header for Python API Tests

Add logic to write headers only when the CSV file doesn’t yet exist. Checking with os.path.exists() before writing ensures headers are never duplicated across multiple test runs.

import csv
import os

CSV_FILE = "test_results.csv"

def initialize_csv():
    """Write headers to CSV only if the file doesn't exist"""
    if not os.path.exists(CSV_FILE):
        # ✅ utf-8-sig: prevents garbled text when opening in Excel
        with open(CSV_FILE, mode="w", newline="", encoding="utf-8-sig") as f:
            writer = csv.writer(f)
            writer.writerow([
                "Timestamp",
                "Test ID",
                "Test Name",
                "Result",
                "Status Code",
                "Response Time",
                "Notes"
            ])

💡 Key Takeaway: Checking with os.path.exists() before writing ensures headers are never duplicated on subsequent runs.


05. Complete Python API Test Report Code

Here is the complete code combining the HTML report and CSV output. The CSV is initialized via a pytest fixture, and the production patterns from the previous sections are applied throughout: timeout, response.elapsed, and utf-8-sig.

"""
API Test with CSV Output & HTML Report
Target: JSONPlaceholder (https://jsonplaceholder.typicode.com)
Framework: Python + requests + pytest + pytest-html
"""
import csv
import os
import datetime
import requests
import pytest

BASE_URL = "https://jsonplaceholder.typicode.com"
CSV_FILE = "test_results.csv"


def initialize_csv():
    """Write headers to CSV only if the file doesn't exist"""
    if not os.path.exists(CSV_FILE):
        # ✅ utf-8-sig: prevents garbled text when opening in Excel
        with open(CSV_FILE, mode="w", newline="", encoding="utf-8-sig") as f:
            writer = csv.writer(f)
            writer.writerow(["Timestamp", "Test ID", "Test Name", "Result",
                              "Status Code", "Response Time", "Notes"])


# ✅ Initialize CSV via fixture (runs once at pytest session start)
@pytest.fixture(scope="session", autouse=True)
def setup_csv():
    initialize_csv()


def write_result(tc_id, test_name, status, status_code, elapsed_ms, memo=""):
    """Append test result to CSV"""
    # ✅ utf-8-sig: prevents garbled text when opening in Excel
    with open(CSV_FILE, mode="a", newline="", encoding="utf-8-sig") as f:
        writer = csv.writer(f)
        writer.writerow([
            datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
            tc_id,
            test_name,
            status,
            status_code,
            f"{elapsed_ms:.0f}ms",
            memo
        ])


def test_get_user():
    """TC01: GET test"""
    # ✅ timeout=5: won't hang forever if the API goes down
    # ✅ response.elapsed: official requests response time measurement
    response = requests.get(f"{BASE_URL}/users/1", timeout=5)
    elapsed_ms = response.elapsed.total_seconds() * 1000

    try:
        assert response.status_code == 200
        assert response.json()["id"] == 1
        write_result("TC01", "GET /users/1", "PASS", response.status_code, elapsed_ms)
        print(f"\n✅ TC01 PASS | {elapsed_ms:.0f}ms")
    except AssertionError as e:
        write_result("TC01", "GET /users/1", "FAIL", response.status_code, elapsed_ms, str(e))
        raise


def test_post_user():
    """TC02: POST test"""
    new_user = {"name": "Yoshitsugu Tester", "email": "test@example.com"}
    response = requests.post(f"{BASE_URL}/users", json=new_user, timeout=5)
    elapsed_ms = response.elapsed.total_seconds() * 1000

    try:
        assert response.status_code == 201
        assert "id" in response.json()
        write_result("TC02", "POST /users", "PASS", response.status_code, elapsed_ms)
        print(f"\n✅ TC02 PASS | {elapsed_ms:.0f}ms")
    except AssertionError as e:
        write_result("TC02", "POST /users", "FAIL", response.status_code, elapsed_ms, str(e))
        raise


def test_put_user():
    """TC03: PUT test"""
    updated_user = {"id": 1, "name": "Updated User", "email": "updated@example.com"}
    response = requests.put(f"{BASE_URL}/users/1", json=updated_user, timeout=5)
    elapsed_ms = response.elapsed.total_seconds() * 1000

    try:
        assert response.status_code == 200
        write_result("TC03", "PUT /users/1", "PASS", response.status_code, elapsed_ms)
        print(f"\n✅ TC03 PASS | {elapsed_ms:.0f}ms")
    except AssertionError as e:
        write_result("TC03", "PUT /users/1", "FAIL", response.status_code, elapsed_ms, str(e))
        raise


def test_delete_user():
    """TC04: DELETE test + post-deletion GET verification"""
    # Step 1: Delete
    response = requests.delete(f"{BASE_URL}/users/1", timeout=5)
    elapsed_ms = response.elapsed.total_seconds() * 1000

    try:
        assert response.status_code == 200
        write_result("TC04", "DELETE /users/1", "PASS", response.status_code, elapsed_ms)
        print(f"\n✅ TC04 PASS | {elapsed_ms:.0f}ms")
    except AssertionError as e:
        write_result("TC04", "DELETE /users/1", "FAIL", response.status_code, elapsed_ms, str(e))
        raise

    # Step 2: Verify resource is gone after deletion
    # ※ JSONPlaceholder is a mock API — data is not actually deleted,
    #   so GET may still return 200. In a real API, expect 404.
    response2 = requests.get(f"{BASE_URL}/users/1", timeout=5)
    assert response2.status_code in [200, 404], \
        f"Expected 404 after deletion (mock may return 200): {response2.status_code}"
    print(f"\n✅ TC04 post-DELETE GET | status: {response2.status_code}")

Run Command

pytest test_report_api.py -v -s

Generated Files

project/
├── pytest.ini
├── test_report_api.py
├── report.html        ← Auto-generated by pytest-html
└── test_results.csv   ← Generated by Python code

Sample CSV Output

Timestamp,Test ID,Test Name,Result,Status Code,Response Time,Notes
2026-04-06 10:23:01,TC01,GET /users/1,PASS,200,342ms,
2026-04-06 10:23:02,TC02,POST /users,PASS,201,289ms,
2026-04-06 10:23:03,TC03,PUT /users/1,PASS,200,310ms,
2026-04-06 10:23:04,TC04,DELETE /users/1,PASS,200,298ms,

06. Pitfalls & Lessons Learned

Here are the key issues I encountered during implementation. I hope this helps others who run into the same problems.


① Garbled text when opening CSV in Excel

When opening the CSV in Excel, Japanese characters appeared garbled. The cause was the encoding setting.

# ❌ utf-8 may cause garbled text when opened in Excel
with open(CSV_FILE, mode="w", encoding="utf-8") as f:
    ...

# ✅ utf-8-sig displays correctly in Excel
with open(CSV_FILE, mode="w", encoding="utf-8-sig") as f:
    ...

💡 Key Takeaway: utf-8-sig is UTF-8 with a BOM (Byte Order Mark). Use utf-8-sig when opening CSV files directly in Excel to prevent garbled characters.


② CSV headers duplicated on every test run

A header row was being added every time the tests ran.

# ❌ Writing the header unconditionally adds a duplicate header row on every run
with open(CSV_FILE, mode="a", newline="", encoding="utf-8-sig") as f:
    writer = csv.writer(f)
    writer.writerow(["Timestamp", "Test ID", "Test Name", "Result", ...])  # Duplicated each run

# ✅ Check if file exists before writing headers
if not os.path.exists(CSV_FILE):
    with open(CSV_FILE, mode="w", newline="", encoding="utf-8-sig") as f:
        writer = csv.writer(f)
        writer.writerow(["Timestamp", "Test ID", "Test Name", "Result", ...])

💡 Key Takeaway: Checking with os.path.exists() before writing ensures headers are never duplicated.


③ CSV not written on FAIL

When an assert failed, an exception was raised and the CSV write code was never reached.

# ❌ write_result() is never called when assert fails
assert response.status_code == 200
write_result("TC01", "GET", "PASS", ...)  # Never reached

# ✅ Use try/except to write to CSV on both PASS and FAIL
try:
    assert response.status_code == 200
    write_result("TC01", "GET /users/1", "PASS", response.status_code, elapsed_ms)
except AssertionError as e:
    write_result("TC01", "GET /users/1", "FAIL", response.status_code, elapsed_ms, str(e))
    raise  # ← Don't forget raise! This tells pytest the test failed

⚠️ Note: If you forget raise inside except, pytest treats the test as passed even though it failed. Always re-raise the exception.


④ report.html is overwritten every run, losing past results

pytest-html overwrites the same report.html each time. To preserve past reports, include a timestamp in the filename.

# pytest.ini doesn't support variables, so use conftest.py
# Add to conftest.py
import datetime
import os

def pytest_configure(config):
    now = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    os.makedirs("reports", exist_ok=True)  # make sure the reports/ folder exists
    config.option.htmlpath = f"reports/report_{now}.html"

💡 Key Takeaway: Using conftest.py lets you dynamically generate timestamped report filenames, accumulating all past test results in a folder.


⑤ Response times vary between runs

Running the same test multiple times gave different response times each run, making comparisons difficult.

# ✅ Run multiple times and take the average
import statistics

times = []
for _ in range(3):
    response = requests.get(f"{BASE_URL}/users/1", timeout=5)
    times.append(response.elapsed.total_seconds() * 1000)

avg_ms = statistics.mean(times)
write_result("TC01", "GET /users/1", "PASS", response.status_code, avg_ms, "3-run average")

💡 Key Takeaway: For performance testing, running multiple times and taking the average produces more stable and reliable measurements.


07. Frequently Asked Questions (FAQ)

Q. Where is the pytest-html report saved?
A. By default it’s saved as report.html in the directory where pytest is run. You can change the save location by specifying a path in pytest.ini like --html=reports/report.html.
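
For example, to collect reports under a reports/ folder instead of the project root:

# pytest.ini — save the HTML report under reports/
[pytest]
addopts = --html=reports/report.html --self-contained-html

(Create the reports/ folder up front if your pytest-html version doesn’t create it automatically.)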

Q. Should I use CSV or HTML reports?
A. Use them for different purposes. HTML reports are best for sharing with people and presenting visually; CSV is best for accumulating and analyzing data. In real-world projects, outputting both is common.

Q. How do I upload reports to GitHub?
A. Just include report.html in your repository and push it. Since GitHub can’t render HTML directly, consider using GitHub Pages or pasting a screenshot of the report in your README.

Q. Can I send test results to Slack or email?
A. Yes. Using Python’s smtplib (email) or the Slack API, you can automatically send results after tests complete. Combined with CI/CD, you can set up a system that automatically notifies on test results with every deployment.
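
As a minimal sketch, posting a summary to Slack needs nothing beyond requests and an Incoming Webhook; the webhook URL below is a placeholder you generate in your own Slack workspace.

import requests

# Placeholder: create your own Incoming Webhook URL in Slack
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXXX/YYYY/ZZZZ"

def notify_slack(passed, failed):
    """Send a one-line test summary to a Slack channel via Incoming Webhook"""
    message = f"API tests finished: {passed} passed / {failed} failed"
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=5)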

Q. What’s the difference between Allure reports and pytest-html?
A. pytest-html is simple and easy to set up, but its appearance is basic. Allure offers a rich UI with graphs and trend analysis for more sophisticated reporting. It’s recommended to start with pytest-html and migrate to Allure when needed.
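
If you do try Allure, a typical setup looks like the following (assuming the allure-pytest plugin; the Allure command-line tool is installed separately):

# Assumes allure-pytest; the Allure CLI is a separate install
pip install allure-pytest
pytest --alluredir=allure-results
allure serve allure-results   # builds and opens the report in a browser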


08. Summary

When adding reporting to Python API tests, combining pytest-html and the csv module lets you automatically generate production-level evidence. Visualizing test results is an important part of quality assurance.

This article covered how to generate CSV output and HTML reports for API tests using pytest and requests.

Feature                    | Implementation
Auto-generate HTML report  | Add --html=report.html --self-contained-html to pytest.ini
CSV output                 | Python csv module + try/except to handle both PASS and FAIL
Prevent duplicate headers  | Check file existence with os.path.exists()
Prevent garbled text       | Use utf-8-sig encoding
Timestamped reports        | Generate dynamic filenames via conftest.py
Fixture-based CSV init     | @pytest.fixture(scope="session", autouse=True)
Accurate response time     | Use response.elapsed.total_seconds() * 1000
Timeout                    | Add timeout=5 to all requests

Being able to output test results in both HTML and CSV means you can demonstrate not just “I can write tests” but “I can manage and report test results” — a powerful differentiator as a QA engineer.
