# Improving Jami's Quality

## Unit tests

* Unit-testing the Jami project is hard because of race conditions across its multiple layers of dependencies.

* There are about 30 unit tests and 26% coverage. Because of the high demand to ship new functionality to users quickly, the tests are not maintained by the developers or by a QA department.

* We use lcov for coverage; the lcov configuration is in the daemon's Makefile.am. The coverage report is published at https://docs.jami.net/coverage/

* A process needs to be put in place to convince the team to write a unit test for new code before merging.

* You can run the tests with `make check` in the daemon folder, or individually in the unit-test folder with gdb: `gdb ut_media_encoder`

* The build must be configured with `--disable-shared` when running `./configure`.

## Framework tests

* Framework tests are declared in the daemon's Makefile.am and launched with `make integration`. This calls the jami_test.py script in the tools/dringctrl folder, which uses dringctrl.py and controller.py to control Jami from the shell.

* It makes a series of calls to check that Jami's OpenDHT network is stable.

* Other framework tests need to be added in the future to test Jami's functionality as a whole.

## Integration tests

* Each commit goes through integration tests in Docker containers on the build machines; you can find the details at jenkins.jami.net

* Code review is done by a fellow developer. Sometimes the code is reviewed by its own author; this should be avoided to benefit from Linus's law. The 'Jenkins verified' label is sometimes discarded and replaced by a +1 from a developer; this should also be avoided.

* SonarQube lets Jenkins build Jami and verify linting.
  You can find the filters and results at sonar-jami.savoirfairelinux.net. Sonar uses clang-tidy for linting; the clang-tidy filters are in the .clang-tidy file in the daemon folder.

* On sflvault, the SonarQube instance can be found at service m#2637 and the admin logins at service s#7169.

## Documentation and feedback

* You can find all the documentation at docs.jami.net

* Issues are filed by developers and users on git.jami.net

## Monitoring

* A script runs every 30 minutes on a virtual machine, jami-monitorpeervm-01 (sflvault service s#7209), and calls another virtual client, jami-monitorpeer-02 (service s#7224). A series of calls is made and the failure rate is reported. You can find all the details at https://wiki.savoirfairelinux.com/wiki/Jami-monitorpeervm-01.mtl.sfl.

* If needed, the manual command is `./script.sh --peer 031acbb73f2a3385b2babc7161f13325be103431`

* It plots a real-time, point-by-point graph at https://monitoring.savoirfairelinux.com/grafana/dashboard/script/dyndash.js?host=jami-monitorpeervm-01.mtl.sfl&service=Check%20JamiCall&panelId=1&fullscreen&orgId=1

## Smoke tests

Before each release, every client MUST pass a list of scenarios.

<!-- Scenarios are described here: -->
<!-- [SmokeTestsJami.ods](./SmokeTestsJami.ods) -->

The scenarios are reviewed by the QA department before being sent to the developers if needed.

If a release contains a merged network commit, the QA department should be able to automate the different connectivity tests (as described below in "Calls configurations").

### Calls configurations

This is the list of network configurations that need to be tested:

(IPv4 | IPv6) + (TURN | !TURN) + (STUN | !STUN) + (UPnP | !UPnP) for both sides.

If both sides are IPv4 only, without TURN/STUN/UPnP, the call should only work locally.
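The configuration matrix above can be enumerated mechanically, which is a useful starting point for automating the connectivity tests. A minimal sketch (the function and profile names are illustrative, not part of any existing Jami tooling):

```python
import itertools

# Each side's connectivity profile: IP family plus optional NAT-traversal helpers.
FAMILIES = ("IPv4", "IPv6")
TOGGLES = ("TURN", "STUN", "UPnP")

def side_profiles():
    """Yield every (family, enabled_helpers) profile for one peer."""
    for family in FAMILIES:
        for mask in itertools.product((False, True), repeat=len(TOGGLES)):
            helpers = frozenset(t for t, on in zip(TOGGLES, mask) if on)
            yield (family, helpers)

def call_matrix():
    """Yield every (caller, callee) pair that the connectivity tests should cover."""
    profiles = list(side_profiles())
    yield from itertools.product(profiles, profiles)

def local_only(caller, callee):
    """True when the call can only work locally:
    both sides IPv4 with no TURN/STUN/UPnP at all."""
    return all(family == "IPv4" and not helpers
               for family, helpers in (caller, callee))
```

Each side has 2 × 2³ = 16 profiles, so the full matrix is 256 caller/callee pairs; a test harness could iterate over `call_matrix()` and configure both peers accordingly before placing a test call.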
## Special note: F-Droid

The script that generates the merge request is in the client-android repo (fdroidMergeRequest.sh).

## What needs to be done

* Push coverage closer to 60%.

* Establish a system within the team to ensure the maintenance and creation of unit tests.

* Each major piece of functionality should be tested as a whole by adding a framework test (e.g. making sure a message was received, that a call ended cleanly on both sides, etc.).

* Each new piece of functionality should be tested on each platform before merging, to reduce regressions.

* Integrate SonarQube on each client.

* Automate the testing of Jami's behaviour across network configurations.

* Make the make_ring.py script work on Windows as well.
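One possible building block for the coverage goal above: a small gate that parses the summary printed by `lcov --summary coverage.info` and fails when line coverage drops below the target. This is only a sketch; the `lines......:` output format varies between lcov versions, so the regex may need adjusting:

```python
import re

# Matches the "lines......: 26.0% (...)" figure printed by `lcov --summary`.
# NOTE: the output format is an assumption based on common lcov versions.
LINE_RE = re.compile(r"lines\.*:\s*([0-9.]+)%")

def line_coverage(lcov_summary: str) -> float:
    """Extract the line-coverage percentage from an lcov summary dump."""
    match = LINE_RE.search(lcov_summary)
    if match is None:
        raise ValueError("no line-coverage figure found in lcov output")
    return float(match.group(1))

def coverage_gate(lcov_summary: str, target: float = 60.0) -> bool:
    """Return True when line coverage meets the target, False otherwise."""
    return line_coverage(lcov_summary) >= target
```

Wired into the CI pipeline, such a gate would make the 60% target enforceable rather than aspirational.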