Testing Python
Chapter 1
- Writing code without tests in general is going to lead to problems down the line.
- Choreographing
- One of the worst traps a developer can fall into is writing a bunch of code and then going back and testing it all at the end.
- With the advent of social networks and the ever-increasing pressure of media attention, defects in your code could be costly to both you and your reputation or that of any company you may represent.
- The key advantage of writing tests, especially as part of the development process, is that testing gives you confidence in your code.
Chapter 2 - writing unittests
- An application is one of the great examples of the whole being greater than the sum of its parts.
- With a good test suite in place, refactoring is easy because you know when you change your code you haven’t broken any previous behavior.
- Before you write any code you give thought to the kind of tests you will be writing to check the methods will work as expected.
assertEqual()
with assertRaises(Exception)
assertAlmostEqual(1, 1.2, delta=0.5)
assertAlmostEqual(1, 1.00001, places=4)
assertDictContainsSubset(expected, actual, msg=None)
assertDictEqual(d1, d2, msg=None)
assertGreater(a, b, msg=None)
assertGreaterEqual(a, b, msg=None)
assertIn(member, container, msg=None)
assertIsNotNone(obj, msg=None)
assertLess(a, b, msg=None)
assertItemsEqual(a, b, msg=None) (renamed assertCountEqual in Python 3)
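- A quick, self-contained sketch of a few of these assertions in use (the values are illustrative only):
{% highlight python %}
import unittest


class TestAssertionExamples(unittest.TestCase):
    def test_examples(self):
        self.assertEqual(4, 2 + 2)
        self.assertAlmostEqual(1, 1.2, delta=0.5)
        self.assertIn(3, [1, 2, 3])
        self.assertGreater(10, 5)
        self.assertIsNotNone("value")
        with self.assertRaises(ZeroDivisionError):
            1 / 0


if __name__ == '__main__':
    unittest.main()
{% endhighlight %}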
- Unit tests should be placed under a `test/unit` directory at the top level of your project folder.
- All unit test files should mirror the name of the file they are testing, with `_test` as the suffix.
- Example:
{% highlight python %}
import unittest
from calculate import Calculate


class TestCalculate(unittest.TestCase):
    def setUp(self) -> None:
        self.calc = Calculate()

    def test_add_method_returns_correct_result(self):
        self.assertEqual(4, self.calc.add(2, 2))

    def test_add_method_incorrect_type(self):
        self.assertRaises(TypeError, self.calc.add, "Hello", "World")

    def test_assert_raises(self):
        with self.assertRaises(AttributeError):
            [].get


if __name__ == '__main__':
    unittest.main()
{% endhighlight %}
Chapter 3 - python test tools
nose
- Nose looks in directories for test files ending in `_test.py`.
- `nosetests some_test.py` runs a specific test file.
- `-s` (don't capture stdout) and `-v` (verbose) flags.
- `nosetests --pdb` drops into the debugger on errors:
n(ext), w(here), d(own), u(p), b(reak) [[filename:]lineno | function[, condition]]
- coverage:
pip install coverage
nosetests --with-coverage test/calculate_test.py
- ignore some code blocks from the report:
if __name__ == '__main__':  # pragma: no cover
    ...
- rednose:
pip install rednose
nosetests --rednose
py.test
pip install pytest
- by default it collects all tests automatically, like nose.
py.test specific_test.py
py.test --pdb
- coverage:
$ pip install pytest-cov
$ py.test --cov app/ test/
$ cat .coveragerc
[run]
omit=*__init__.py
mock library
pip install mock
- basic usage:
mock = Mock()
mock.my_method.return_value = "hello"
mock.get.side_effect = ConnectionError()
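- A short, self-contained sketch tying these together (`my_method` and `get` are just placeholder names):
{% highlight python %}
from unittest.mock import Mock  # `from mock import Mock` with the standalone package

mock = Mock()
mock.my_method.return_value = "hello"
mock.get.side_effect = ConnectionError()

print(mock.my_method())             # -> "hello"
mock.my_method.assert_called_once_with()

try:
    mock.get("http://example.com")  # raises the configured side_effect
except ConnectionError:
    print("ConnectionError raised")
{% endhighlight %}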
- patching:
{% highlight python %}
import unittest
from mock import Mock, patch


class TestAccount(unittest.TestCase):
    @patch('app.account.requests')
    def test_get_current_balance_returns_data_correctly(self, mock_requests):
        mock_requests.get.return_value = '500'
        ...
{% endhighlight %}
Chapter 4 - doctest
- handler:
if __name__ == "__main__":
import doctest
doctest.testmod()
- pass `-v` for verbose output (e.g. `python -m doctest -v calculate.py`)
- initialization in handler:
doctest.testmod(extraglobs={'c': Calculate()})
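- A minimal sketch of the docstring tests themselves, reusing the `Calculate.add` example from earlier and the `extraglobs` handler above:
{% highlight python %}
class Calculate:
    def add(self, x, y):
        """Return the sum of two numbers.

        The `c` instance is supplied via `extraglobs` in the handler below.

        >>> c.add(2, 2)
        4
        >>> c.add(5, 5)
        10
        """
        return x + y


if __name__ == "__main__":
    import doctest
    doctest.testmod(extraglobs={'c': Calculate()})
{% endhighlight %}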
- integration with nose:
nosetests --with-doctest
Chapter 5 - TDD
- TDD gives you great confidence that each new piece of functionality you write in your code is backed by a test, which confirms how it behaves.
- Agile Manifesto
- In Scrum, ideally the card will be worked on by a pair who follow the TDD approach to complete the work.
- The basic concept of TDD is to write a failing test first, before writing any code. TDD is a cycle so that once you have your failing tests you can begin coding.
- Ping-pong programming: One person writes the failing test, and the other writes the code to make it pass.
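- A minimal sketch of one red-green cycle (the `multiply` example is illustrative; in practice the test and the production code live in separate files):
{% highlight python %}
import unittest


# Red: the failing test comes first -- at this point Calculate.multiply()
# does not exist yet, so the test fails.
class TestCalculateMultiply(unittest.TestCase):
    def test_multiply_method_returns_correct_result(self):
        self.assertEqual(6, Calculate().multiply(2, 3))


# Green: write just enough code to make the test pass, then refactor
# while keeping the test green.
class Calculate:
    def multiply(self, x, y):
        return x * y


if __name__ == '__main__':
    unittest.main()
{% endhighlight %}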
Chapter 6 - BDD
- If unit testing verifies that the code does exactly what the programmer expects it to do, then acceptance testing verifies that the code does what the user expects it to do.
- Development cycle with unit and acceptance testing:
- write failing acceptance tests.
- write failing unit tests
- write the code.
- check that the unit tests still pass.
- ensure your acceptance tests pass now.
- Gherkin syntax:
{% highlight gherkin %}
Feature: Retrieve customer balance
    As a customer of the bank
    I wish to be able to view my current balance

    Scenario: Customer retrieves balance successfully
        Given account number 0001 is a valid account
        When I try to retrieve the balance for account number 0001
        Then the balance of the account is "50"
{% endhighlight %}
- the code is in the step file:
from behave import when

@when('I enter the account number "{account_number}"')
def _(context, account_number):
    ...
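- A slightly fuller sketch of a step file for the scenario above (the step wording matches the feature; the `Bank` helper and its methods are assumptions, not the book's exact code):
{% highlight python %}
# test/bdd/features/steps/steps.py
from behave import given, when, then

from bank import Bank  # hypothetical helper class


@given('account number {account_number} is a valid account')
def _(context, account_number):
    context.bank = Bank()
    context.bank.create_account(account_number, balance=50)


@when('I try to retrieve the balance for account number {account_number}')
def _(context, account_number):
    context.balance = context.bank.get_balance(account_number)


@then('the balance of the account is "{expected_balance}"')
def _(context, expected_balance):
    assert str(context.balance) == expected_balance
{% endhighlight %}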
behave:
pip install behave
Scenario Outline:
{% highlight gherkin %}
Scenario Outline: Retrieve customer balance
    Given I create account "<account_number>" with balance of "<balance>"
    And I visit the homepage
    When I enter the account number "<account_number>"
    Then I see a balance of "<balance>"

    Examples: customer_account_table
        |account_number |balance|
        |1111           |50     |
        |2222           |100    |
        |3333           |500    |
        |4444           |1000   |
{% endhighlight %}
You can also pass a table of data directly into a step definition.
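- A sketch of that table-in-a-step variant (step text and column names are illustrative; `context.bank` reuses the hypothetical helper from the step-file sketch above):
{% highlight python %}
# Feature file step:
#   Given the following accounts exist
#       | account_number | balance |
#       | 1111           | 50      |
#       | 2222           | 100     |
from behave import given


@given('the following accounts exist')
def _(context):
    # behave exposes the inline table as context.table, one Row per data line
    for row in context.table:
        context.bank.create_account(row['account_number'],
                                    balance=int(row['balance']))
{% endhighlight %}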
Acceptance testing is as vital a part of the testing process as unit testing.
$ tree
.
├── account.py
├── bank_app.py
├── bank.py
├── Pipfile
├── templates
│ └── index.html
└── test
├── account_test.py
├── bank_test.py
└── bdd
└── features
├── bank.feature
├── environment.py
└── steps
└── steps.py
$ behave test/bdd/features
Chapter 7 - acceptance test tools
Cucumber
- behave tags let you run selected scenarios or flag work in progress:
$ behave --tags="@tag1 or tag2" test/bdd/features
$ behave --tags="@wip" test/bdd/features # work in progress only
$ behave -w test/bdd/features # -w is shorthand for --wip
Chapter 8 - maximizing code’s performance
jmeter
- get jmeter
- get the plugin manager and put it into `lib/ext`
- Test Plan -> Add -> Config Element -> HTTP Request Defaults (localhost:5000)
- Test Plan -> Add -> Threads (Users) -> Thread Group (20)
- ThreadGroup (right click) -> Add -> Sampler -> HTTP Request
- ThreadGroup (right click) -> Add -> Listener -> View Results Tree
- Add -> {Timer, Listener} for modeling throughput tests
cprofile
- include cProfile in your code
import cProfile
...
cProfile.run('app.run(debug=True)')
- or run it through the CLI: `$ python -m cProfile bank_app.py`, using `-s` to control the sort order of the output.
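- A minimal sketch of profiling a single call and sorting the report (the `slow_add` function is purely illustrative):
{% highlight python %}
import cProfile


def slow_add(x, y):
    total = 0
    for _ in range(1_000_000):
        total += 1
    return x + y


# Profile one call and sort the printed report by cumulative time,
# similar to `python -m cProfile -s cumulative bank_app.py` on the CLI.
cProfile.run('slow_add(2, 2)', sort='cumulative')
{% endhighlight %}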
- Alternatively use line_profiler
- Visualisation:
$ sudo apt-get install graphviz
$ pip install pycallgraph
$ pycallgraph graphviz -- bank_app.py
Chapter 9 - lint
pylint
pip install pylint
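- point it at a module or package to get a report and a score (the file names here are just examples):
pylint account.py
pylint --generate-rcfile > .pylintrc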
coverage
pip install coverage
nosetests --with-coverage --cover-erase
nosetests --with-coverage --cover-erase --cover-html
nosetests --with-coverage --cover-erase --cover-package=bank.account
- enforce a minimum coverage percentage (the run fails if coverage drops below it):
nosetests --with-coverage --cover-erase --cover-package=bank.account --cover-min-percentage=100
- use the `# pragma: no cover` directive to exclude a line/block from the report