Thirty-six hints for writing analysis functions

  1. Create an issue describing the new analysis function. Also define its name and its two-letter abbreviation (which must not collide with existing ones).

  2. Always track open items in task lists in the issue or in the PR itself.

  3. Think about what should happen at each event. As a rule of thumb, do only what is really needed in MID_SWEEP_EVENT (if that event is used at all). The common QC entries are determined in the following events (a skeleton sketch follows the list):

    • Sweep QC in POST_SWEEP_EVENT

    • Set QC in POST_SET_EVENT

    • Baseline QC in MID_SWEEP_EVENT/POST_SWEEP_EVENT.
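
    A minimal sketch of the usual event dispatch. The event constants and the AnalysisFunction_V3 struct are real MIES symbols; the function name and the comments only describe assumed behaviour, not a finished implementation:

      Function MyAnaFunc(string device, STRUCT AnalysisFunction_V3 &s)

          switch(s.eventType)
              case PRE_SET_EVENT:
                  // one-time setup and checks for the whole set
                  break
              case MID_SWEEP_EVENT:
                  // only what really must happen during acquisition,
                  // e.g. ongoing baseline QC
                  break
              case POST_SWEEP_EVENT:
                  // determine sweep QC and write it to the labnotebook
                  break
              case POST_SET_EVENT:
                  // determine set QC from the gathered sweep QC entries
                  break
              default:
                  // nothing to do for the other events
                  break
          endswitch

          return 0
      End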

  4. Create a list of the used labnotebook keys. To decide which keys are required, think about how the dashboard will interpret the results of the analysis function run: only when the dashboard can determine, from labnotebook entries alone, exactly why the run failed for every possible outcome have you thought of all labnotebook keys.
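
    For writing such entries the PSQ_* functions follow a common pattern. The sketch below assumes the helpers LBN_GetNumericWave(), CreateAnaFuncLBNKey() and ED_AddEntryToLabnotebook() as used in MIES_AnalysisFunctions_PatchSeq.ipf; verify the names against the current source:

      // device and type come from the surrounding analysis function
      string key
      variable setPassed = 1

      WAVE result = LBN_GetNumericWave()
      result[INDEP_HEADSTAGE] = setPassed
      key = CreateAnaFuncLBNKey(type, PSQ_FMT_LBN_SET_PASS)
      ED_AddEntryToLabnotebook(device, key, result, unit = LABNOTEBOOK_BINARY_UNIT)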

  5. Create a list of the used user epochs. User epochs define interesting x-ranges of the stimulus set for the analysis function and should be preferred over other, similar approaches.
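
    User epochs are added via EP_AddUserEpoch(); the call below is a sketch with made-up values and tags, check MIES_Epochs.ipf for the exact signature and tag conventions:

      // mark a hypothetical x-range [epBegin, epEnd] on the given DA channel
      EP_AddUserEpoch(device, XOP_CHANNEL_TYPE_DAC, dac, epBegin, epEnd, \
                      "Type=MyAnaFunc;Purpose=Evaluation", shortName = "EVAL")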

  6. Create a list of analysis function parameters, including their required/optional state and, for the optional ones, their default values.

  7. Units for labnotebook keys should be added where possible. For physical units we prefer base units without prefix, e.g. Ω instead of GΩ.

  8. Decide whether the new labnotebook entries should be headstage-dependent or not. The existing entries are not a good guide here. Ideally, the DEPEND/INDEP type of an entry would not have to change if the analysis function had to support more or fewer headstages.

  9. Make a list of additional features and/or changes you need in the common PSQ_/MSQ_ functions.

  10. Draw a preliminary flowchart; on paper is fine. This is a good way to think the behaviour through. Have a look at existing flowcharts for inspiration.

  11. Create a stimulus set for testing. Test stimsets can be loaded via LoadStimsets and saved via SaveStimsets, both available in HardwareTests.pxp.

  12. At this point you should have a pretty good idea of what needs to be done. Discuss your plan with your boss.

  13. Add a skeleton analysis function, see here, and add all analysis parameters together with their help messages and check code; a sketch of the parameter companion functions follows below.
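
    Analysis parameters are declared via companion functions named after the analysis function. The sketch below follows that convention with made-up parameter names; see existing PSQ_* implementations for the exact parameter-list format and struct members:

      Function/S MyAnaFunc_GetParams()
          // brackets mark optional parameters (defaults live in the code)
          return "BaselineChunkLength:variable,[NextStimSetName:string]"
      End

      Function/S MyAnaFunc_GetHelp(string name)
          strswitch(name)
              case "BaselineChunkLength":
                  return "Length of a single baseline QC chunk [ms]"
              default:
                  return ""
          endswitch
      End

      Function/S MyAnaFunc_CheckParam(string name, STRUCT CheckParametersStruct &s)
          variable val

          strswitch(name)
              case "BaselineChunkLength":
                  val = AFH_GetAnalysisParamNumerical(name, s.params)
                  if(!(val > 0))
                      return "must be positive"
                  endif
                  break
          endswitch

          return "" // an empty string means the parameter is valid
      End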

  14. Add documentation for the labnotebook keys and user epochs to the tables at the top of MIES_AnalysisFunctions_PatchSeq.ipf/MIES_AnalysisFunctions_MultiPatchSeq.ipf.

  15. Implement the test override entries in PSQ_CreateOverrideResults()/MSQ_CreateOverrideResults(), including documentation (a sketch of how a test fills them follows below).
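
    In the tests the override wave is then filled per test case. The layer assignment below is made up; the real meaning of each layer (and the exact signature) must be taken from PSQ_CreateOverrideResults() itself:

      WAVE wv = PSQ_CreateOverrideResults(device, headstage, PSQ_SEAL_EVALUATION)
      wv = 0        // default: every overridden check fails
      wv[][][0] = 1 // hypothetical: layer 0 makes baseline QC pass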

  16. Implement the behaviour for each event. Going from easy to difficult has proven to work.

  17. Now you should have a first version. Congratulations! Pat yourself on the back and take a break, because now the real fun starts.

  18. Add preliminary dashboard support. For every test case we also check that the dashboard works.

  19. Create a new test suite and add it to UTF_HardwareAnalysisFunctions.ipf. Be sure to base it on the test suite of the most recently added analysis function to avoid copying deprecated approaches.

  20. Add a first test case where all test override entries result in failed QC.

  21. As a rule of thumb for what to check in each test case: add test assertions for all added labnotebook entries (except the standard baseline entries) and check the positions of the user epochs (see the sketch below).
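
    A sketch of such assertions, assuming the IUTF CHECK_* macros and the usual labnotebook query helpers; the entry name and query mode are placeholders:

      string key
      WAVE numericalValues = GetLBNumericalValues(device)

      // the set QC entry must exist and must read "failed" for this test case
      key = CreateAnaFuncLBNKey(type, PSQ_FMT_LBN_SET_PASS, query = 1)
      WAVE/Z setPass = GetLastSetting(numericalValues, sweepNo, key, UNKNOWN_MODE)
      CHECK_WAVE(setPass, NUMERIC_WAVE)
      CHECK_EQUAL_VAR(setPass[INDEP_HEADSTAGE], 0)
      // similarly, fetch the epoch info and check the user epoch positions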

  22. Make that first test case pass; this takes a surprisingly long time. The function LBV_PlotAllAnalysisFunctionLBNKeys() helps with debugging.

  23. After this first test case passes, reassess the test assertions. Are you testing enough or too much?

  24. Write up a test matrix to determine what needs to be tested; a first version on paper is fine. The columns are the inputs, usually test overrides and analysis parameters (an example layout follows the list below).

    See UTF_PatchSeqSealEvaluation.ipf for an example.

    We always have the following three test cases:

    • The first has all QC (except Sampling Interval QC) failing.

    • The second has all QC passing.

    • The last one only has Sampling Interval QC failing.
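
    A made-up example of how such a matrix can start (the QC columns are hypothetical and depend on the analysis function):

      Test case       | Baseline QC | Seal QC | Sampling Interval QC
      ----------------+-------------+---------+---------------------
      AllFailing      | fail        | fail    | pass
      AllPassing      | pass        | pass    | pass
      SamplingFailing | pass        | pass    | fail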

  25. Implement all test cases, fixing bugs in the analysis function along the way.

  26. Run all tests for this analysis function with instrumentation.

  27. Check the coverage output to see if you still have relevant gaps in testing.

  28. Add new test cases for filling coverage gaps.

  29. Repeat the last three points until the coverage is good enough.

  30. Check whether some helper code etc. can or should be split out into its own pull request.

  31. Be sure to include documentation and tests if your analysis function publishes ZeroMQ messages. See CheckPipetteInBathPublishing and PSQ_PB_Publish() for an example. Also add it to CheckPublishedMessage (a rough sketch follows below).
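
    A rough sketch of the publishing side. The JSON wrapper calls come from the json-xop as used throughout MIES, while PublishMessage() and the filter string are pure placeholders; see PSQ_PB_Publish() for the real pattern:

      variable jsonID = JSON_New()
      JSON_AddString(jsonID, "/device", device)
      JSON_AddVariable(jsonID, "/sweep number", sweepNo)
      // PublishMessage() and the filter string are placeholders
      PublishMessage("ANALYSIS_FUNCTION_MYANAFUNC", jsonID)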

  32. Tell your boss to test the current state.

  33. Check and fill any gaps in the documentation:

    • Analysis function comment with ASCII-art stimulus sets

    • Labnotebook entries

    • User epochs

  34. Create digital versions of the test matrix and the flowchart. For the latter see here.

  35. Clean up the commits.

  36. You're done!