# Unit Test Framework Package

## About

This package adds a unit test framework capable of building tests for multiple contexts including
the UEFI shell environment and host-based environments. It allows unit test development to focus
on the tests and leaves error logging, result formatting, context persistence, and test running to the framework.
The unit test framework works well for low-level unit tests as well as system-level tests and
fits easily into automation frameworks.

### UnitTestLib

The main "framework" library. The core of the framework is the Framework object, which can have any number
of test cases and test suites registered with it. The Framework object is also what drives test execution.

The Framework also provides helper macros and functions for checking test conditions and
reporting errors. Status and error info will be logged into the test context. There are a number
of Assert macros that make the unit test code friendly to view and easy to understand.

Finally, the Framework also supports logging strings during test execution. This data is logged
to the test context and will be available in the test reporting phase. This should be used for
logging test details and helpful messages to resolve test failures.

### UnitTestPersistenceLib

The persistence lib has the main job of saving and restoring test context to a storage medium, so that state
can be maintained for tests that require exiting and re-entering the active process. This is critical
in supporting a system reboot in the middle of a test run.

### UnitTestResultReportLib

This library provides a function to run at the end of a framework test run and handles formatting the report.
This is a common customization point that allows the unit test framework to fit its output reports into
other test infrastructure. In this package, a simple library instance has been supplied to output test
results to the console as plain text.

## Samples

A sample unit test is provided, both as an example of how to write a unit test and as a demonstration of
many of the features of the framework. This sample can be found in the `Test/UnitTest/Sample/SampleUnitTest`
directory.

The sample is provided in PEI, SMM, DXE, and UEFI App flavors. It also has a flavor for the HOST_APPLICATION
build type, which can be run on a host system without needing a target.

## Usage

This section is built a lot like a "Getting Started". We'll go through some of the components that are needed
when constructing a unit test and some of the decisions that are made by the test writer. We'll also describe
how to check for expected conditions in test cases and a bit of the logging characteristics.

Most of these examples will refer to the SampleUnitTestUefiShell app found in this package.

### Requirements - INF

In our INF file, we'll need to bring in the `UnitTestLib` library. Conveniently, the interface
header for the `UnitTestLib` is located in `MdePkg`, so you shouldn't need to depend on any other
packages. As long as your DSC file knows where to find the lib implementation that you want to use,
you should be good to go.

See this example in 'SampleUnitTestApp.inf'...

```
[Packages]
  MdePkg/MdePkg.dec

[LibraryClasses]
  UefiApplicationEntryPoint
  BaseLib
  DebugLib
  UnitTestLib
  PrintLib
```

### Requirements - Code

Not to state the obvious, but let's make sure we have the following include before getting too far along...

```c
#include <Library/UnitTestLib.h>
```

Now that we've got that squared away, let's look at our 'Main()' routine (or DriverEntryPoint() or whatever).

### Configuring the Framework

Everything in the UnitTestPkg framework is built around an object called -- conveniently -- the Framework.
This Framework object will contain all the information about our test, the test suites and test cases associated
with it, the current location within the test pass, and any results that have been recorded so far.

To get started with a test, we must first create a Framework instance. The function for this is
`InitUnitTestFramework`. It takes in `CHAR8` strings for the long name, short name, and test version.
The long name and version strings are just for user presentation and are relatively flexible. The short name
will be used to name any cache files and/or test results, so it should be a name that makes sense in that context.
These strings are copied internally to the Framework, so using stack-allocated or literal strings is fine.

In the 'SampleUnitTestUefiShell' app, the module name is used as the short name, so the init looks like this.

```c
DEBUG(( DEBUG_INFO, "%a v%a\n", UNIT_TEST_APP_NAME, UNIT_TEST_APP_VERSION ));

//
// Start setting up the test framework for running the tests.
//
Status = InitUnitTestFramework( &Framework, UNIT_TEST_APP_NAME, gEfiCallerBaseName, UNIT_TEST_APP_VERSION );
```

The `&Framework` returned here is the handle to the Framework. If it's successfully returned, we can start adding
test suites and test cases.

Test suites exist purely to help organize test cases and to differentiate the results in reports. If you're writing
a small unit test, you can conceivably put all test cases into a single suite. However, if you end up with 20+ test
cases, it may be beneficial to organize them according to purpose. You _must_ have at least one test suite, even if
it's just a catch-all. The function to create a test suite is `CreateUnitTestSuite`. It takes in a handle to
the Framework object, a `CHAR8` string for the suite title and package name, and optional function pointers for
a setup function and a teardown function.

The suite title is for user presentation. The package name is for xUnit-type reporting and uses a '.'-separated
hierarchical format (see 'SampleUnitTestApp' for an example). If provided, the setup and teardown functions will be
called once at the start of the suite (before _any_ tests have run) and once at the end of the suite (after _all_
tests have run), respectively. If either or both of these are unneeded, pass `NULL`. The function prototypes are
`UNIT_TEST_SUITE_SETUP` and `UNIT_TEST_SUITE_TEARDOWN`.

Looking at the 'SampleUnitTestUefiShell' app, you can see that the first test suite is created as below...

```c
//
// Populate the SimpleMathTests Unit Test Suite.
//
Status = CreateUnitTestSuite( &SimpleMathTests, Fw, "Simple Math Tests", "Sample.Math", NULL, NULL );
```

This test suite has no setup or teardown functions. The `&SimpleMathTests` returned here is a handle to the suite and
will be used when adding test cases.

Great! Now we've finished some of the cruft, red tape, and busy work. We're ready to add some tests. Adding a test
to a test suite is accomplished with the -- you guessed it -- `AddTestCase` function. It takes in the suite handle;
a `CHAR8` string for the description and class name; a function pointer for the test case itself; additional, optional
function pointers for prerequisite check and cleanup routines; and an optional pointer to a context structure.

Okay, that's a lot. Let's take it one piece at a time. The description and class name strings are very similar in
usage to the suite title and package name strings in the test suites. The former is for user presentation and the
latter is for xUnit parsing. The test case function pointer is what is actually executed as the "test" and the
prototype should be `UNIT_TEST_FUNCTION`. The last three parameters require a little bit more explaining.

The prerequisite check function has a prototype of `UNIT_TEST_PREREQUISITE` and -- if provided -- will be called
immediately before the test case. If this function returns any error, the test case will not be run and will be
recorded as `UNIT_TEST_ERROR_PREREQUISITE_NOT_MET`. The cleanup function (prototype `UNIT_TEST_CLEANUP`) will be called
immediately after the test case to provide an opportunity to reset any global state that may have been changed in the
test case. In the event of a prerequisite failure, the cleanup function will also be skipped. If either of these
functions is not needed, pass `NULL`.

The context pointer is entirely case-specific. It will be passed to the test case upon execution. One of the purposes
of the context pointer is to allow test case reuse with different input data. (Another use is for testing that wraps
around a system reboot, but that's beyond the scope of this guide.) The test case must know how to interpret the context
pointer, so it could be a simple value or it could be a complex structure. If unneeded, pass `NULL`.

In 'SampleUnitTestUefiShell' app, the first test case is added using the code below...

```c
AddTestCase( SimpleMathTests, "Adding 1 to 1 should produce 2", "Addition", OnePlusOneShouldEqualTwo, NULL, NULL, NULL );
```

This test case calls the function `OnePlusOneShouldEqualTwo` and has no prerequisite, cleanup, or context.

Once all the suites and cases are added, it's time to run the Framework.

```c
//
// Execute the tests.
//
Status = RunAllTestSuites( Framework );
```

### A Simple Test Case

We'll take a look at the below test case from 'SampleUnitTestApp'...

```c
UNIT_TEST_STATUS
EFIAPI
OnePlusOneShouldEqualTwo (
  IN UNIT_TEST_FRAMEWORK_HANDLE  Framework,
  IN UNIT_TEST_CONTEXT           Context
  )
{
  UINTN A, B, C;

  A = 1;
  B = 1;
  C = A + B;

  UT_ASSERT_EQUAL(C, 2);
  return UNIT_TEST_PASSED;
} // OnePlusOneShouldEqualTwo()
```

The prototype for this function matches the `UNIT_TEST_FUNCTION` prototype. It takes in a handle to the Framework
itself and the context pointer. The context pointer could be cast and interpreted as anything within this test case,
which is why it's important to configure contexts carefully. The test case returns a value of `UNIT_TEST_STATUS`, which
will be recorded in the Framework and reported at the end of all suites.

In this test case, the `UT_ASSERT_EQUAL` assertion is being used to establish that the business logic has functioned
correctly. There are several assertion macros, and you are encouraged to use one that matches as closely to your
intended test criterion as possible, because the logging is specific to the macro and more specific macros have more
detailed logs. When in doubt, there are always `UT_ASSERT_TRUE` and `UT_ASSERT_FALSE`. Assertion macros that fail their
test criterion will immediately return from the test case with `UNIT_TEST_ERROR_TEST_FAILED` and log an error string.
_Note_ that this early return can have implications for memory leakage.

At the end, if all test criteria pass, you should return `UNIT_TEST_PASSED`.

### More Complex Cases

To write more advanced tests, first take a look at all the Assertion and Logging macros provided in the framework.

Beyond that, if you're writing host-based tests and want to take a dependency on the UnitTestFrameworkPkg, you can
leverage the `cmocka.h` interface and write tests with all the features of the Cmocka framework.

Documentation for Cmocka can be found here:
https://api.cmocka.org/

## Development

When using the EDK2 Pytools for CI testing, the host-based unit tests will be built and run on any build that includes the `NOOPT` build target.

If you are trying to iterate on a single test, a convenient pattern is to build only that test module. For example, the following command will build only the SafeIntLib host-based test from the MdePkg...

```bash
stuart_ci_build -c .pytool/CISettings.py TOOL_CHAIN_TAG=VS2017 -p MdePkg -t NOOPT BUILDMODULE=MdePkg/Test/UnitTest/Library/BaseSafeIntLib/TestBaseSafeIntLib.inf
```

## Known Limitations

### PEI, DXE, SMM

While sample tests have been provided for these execution environments, only cursory build validation
has been performed. Care has been taken while designing the frameworks to allow for execution during
boot phases, but only UEFI Shell and host-based tests have been thoroughly evaluated. Full support for
PEI, DXE, and SMM is forthcoming, but should be considered beta/staging for now.

### Host-Based Support vs Other Tests

The host-based test framework is powered internally by the Cmocka framework. As such, it has abilities
that the target-based tests don't (yet). It would be awesome if this meant that it was a superset of
the target-based tests, and it worked just like the target-based tests but with more features. Unfortunately,
this is not the case. While care has been taken to keep them as close as possible, there are a few known
inconsistencies that we're still ironing out. For example, the logging messages in the target-based tests
are cached internally and associated with the running test case. They can be saved later as part of the
reporting lib. This isn't currently possible with host-based tests; only the assertion failures are logged.

We will continue trying to make these as similar as possible.

## Copyright

Copyright (c) Microsoft Corporation.
SPDX-License-Identifier: BSD-2-Clause-Patent