Python Mocks Test Helpers

| python | programming | ci |

I’ve been writing a Python wrapper for the CircleCI API over the last week. I wanted to do this “the right way” with test-driven development.

I have a couple of integration tests that actually hit the CircleCI API, but most of the unit tests so far use MagicMock to ensure that the basic functions are working as expected.

This generally involves the tedious process of dumping out JSON, saving it to a file, and then reloading that file later on to actually test it.

I wrote two helper functions that make this process slightly less tedious.

Load Mock

The first is a function that loads a file and overrides every request to return the contents of that file (typically JSON).
    def loadMock(self, filename):
        """Helper function to open mock responses."""
        filename = 'tests/mocks/{0}'.format(filename)
        with open(filename, 'r') as f:
            self.c._request = MagicMock(return_value=f.read())
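Note that _request is mocked to return the raw string contents of the file, so the tests call json.loads on the response before making any assertions, as shown in the usage example below.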

Test Helper

The second is a function that runs a real request once and dumps the output to a file.
    def test_helper(self):
        resp = self.c.add_circle_key()
        print(resp)
        with open('tests/mocks/mock_add_circle_key_response', 'w') as f:
            json.dump(resp, f)
Naming it test_helper allows it to be picked up and run when you run your test suite, since by default unittest collects any methods whose names start with test.
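
For context, here is a minimal sketch of how these helpers fit into a test case. The circleci module and CircleCIApi class are assumptions (the wrapper itself isn't shown in this post); substitute your actual client class.

    import json  # used by the tests shown in this post
    import unittest
    from unittest.mock import MagicMock

    from circleci import CircleCIApi  # hypothetical wrapper class


    class TestCircleCIApi(unittest.TestCase):

        def setUp(self):
            # self.c is the client instance whose internal _request
            # method loadMock replaces with a MagicMock
            self.c = CircleCIApi('fake-token')

        # loadMock, test_helper, and the individual tests from this
        # post all live in this class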

Usage

An actual example is shown below.
    def test_clear_cache(self):
        self.loadMock('mock_clear_cache_response')
        resp = json.loads(self.c.clear_cache('levlaz', 'circleci-sandbox'))
        self.assertEqual('build dependency caches deleted', resp['status'])

Writing the tests is easy: we just copy and paste the name of the file that was created with test_helper and verify that the contents are what we expect them to be.

This approach has been working very well for me so far. One thing to keep in mind when writing these types of tests is that you should also include some general integration tests against the API that you are working with. This way you can catch regressions in your library in the event that the API changes. However, as a basic sanity check, mocking these requests is good practice and less prone to flakiness.
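
As a rough illustration, an integration test in this style might look like the following sketch. It reuses the clear_cache call from the usage example but hits the live API with a real token from the environment (the CircleCIApi class and CIRCLECI_TOKEN variable are assumptions):

    import json
    import os
    import unittest

    from circleci import CircleCIApi  # hypothetical wrapper class


    class TestCircleCIIntegration(unittest.TestCase):

        def setUp(self):
            # no mocking here: a real token means requests hit the API
            self.c = CircleCIApi(os.environ['CIRCLECI_TOKEN'])

        def test_clear_cache_live(self):
            resp = json.loads(self.c.clear_cache('levlaz', 'circleci-sandbox'))
            # assert on a field the endpoint returns rather than the exact
            # message, so the test is less brittle if the wording changes
            self.assertIn('status', resp)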
