Python Mocks Test Helpers

| python | programming | ci |

I’ve been writing a Python wrapper for the CircleCI API over the last week. I wanted to do this “the right way,” with test-driven development.

I have a couple of integration tests that actually hit the CircleCI API, but most of the unit tests so far use MagicMock to ensure that the basic functions work as expected.

This generally involves the tedious process of dumping out JSON, saving it to a file, and then reloading that file later on to actually test it.

I wrote two helper functions that make this process slightly less tedious.

Load Mock

The first is a function that loads a file and overrides every request to return that file’s contents (typically JSON).
    def loadMock(self, filename):
        """helper function to open mock responses"""
        filename = 'tests/mocks/{0}'.format(filename)
        with open(filename, 'r') as f:
            self.c._request = MagicMock(return_value=f.read())
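For context, this helper assumes a unittest.TestCase whose setUp builds the client as self.c, and that the wrapper routes every HTTP call through a single internal _request method; replacing that one attribute with a MagicMock is what makes every public call return the canned file contents. A minimal sketch of that idea (the CircleCI class below is illustrative, not the real wrapper):

```python
import json
from unittest.mock import MagicMock

class CircleCI:
    """Illustrative stand-in for the real wrapper, not its actual API."""
    def _request(self, method, endpoint):
        # the real wrapper would perform an HTTP call here
        raise NotImplementedError

    def clear_cache(self, username, project):
        # every public method funnels through _request, so patching
        # _request stubs out all of them at once
        return self._request('DELETE',
                             '{0}/{1}/build-cache'.format(username, project))

# patching the single choke point, just as loadMock does with file contents
c = CircleCI()
c._request = MagicMock(return_value='{"status": "build dependency caches deleted"}')
print(json.loads(c.clear_cache('levlaz', 'circleci-sandbox'))['status'])
# build dependency caches deleted
```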

Test Helper

The second is a function that runs a real request for the first time and dumps the output to a file.
    def test_helper(self):
        resp = self.c.add_circle_key()
        print(resp)
        with open('tests/mocks/mock_add_circle_key_response', 'w') as f:
            json.dump(resp, f)
Naming it test_helper allows it to be picked up and run along with your test suite, since by default unittest collects any methods whose names start with test.
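You can see this discovery rule directly with unittest’s TestLoader: only methods whose names start with test are collected (a quick sketch, not code from the original post):

```python
import unittest

class SuiteExample(unittest.TestCase):
    def test_helper(self):
        pass  # collected: name starts with 'test'

    def load_mock(self):
        pass  # ignored by discovery: no 'test' prefix

names = unittest.TestLoader().getTestCaseNames(SuiteExample)
print(names)  # ['test_helper']
```

Once the mock file has been captured, you may want to rename or skip test_helper so the suite stops hitting the real API on every run.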

Usage

An actual example is shown below.
    def test_clear_cache(self):
        self.loadMock('mock_clear_cache_response')
        resp = json.loads(self.c.clear_cache('levlaz', 'circleci-sandbox'))
        self.assertEqual('build dependency caches deleted', resp['status'])

Writing the tests is easy: we just copy and paste the name of the file that was created by test_helper and verify that its contents are what we expect them to be.

This approach has been working very well for me so far. One thing to keep in mind when writing these types of tests is that you should also include some general integration tests against the API that you are working with. That way you can catch regressions in your library if the API ever changes. As a basic sanity check, however, mocking these requests is good practice and much less prone to flakiness.
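One common way to keep those integration tests in the suite without every run hitting the network is to gate them behind an environment variable (a hedged sketch; the CIRCLE_TOKEN variable name is my assumption, not from the post):

```python
import os
import unittest

# CIRCLE_TOKEN is a hypothetical variable name for illustration
@unittest.skipUnless(os.environ.get('CIRCLE_TOKEN'),
                     'integration tests need a real CircleCI API token')
class TestCircleCIIntegration(unittest.TestCase):
    def test_clear_cache_live(self):
        # would exercise the real API through the wrapper here
        pass
```

Run locally or in CI with the token set, these tests hit the real API; everywhere else they are reported as skipped rather than failing.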

Thank you for reading! Share your thoughts with me on bluesky, mastodon, or via email.
