Running Unit Tests
==================

How to run s3-tests locally
---------------------------

RGW code can be tested by building Ceph locally from source, starting a vstart
cluster, and running the "s3-tests" suite against it.

The following instructions should work on jewel and above.

Step 1 - build Ceph
^^^^^^^^^^^^^^^^^^^

Refer to :doc:`/install/build-ceph`.

You can do step 2 separately while it is building.
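
For reference, a typical cmake build from the top-level directory of the git
clone looks like the following (a minimal sketch; see
:doc:`/install/build-ceph` for the authoritative steps and options)::

    $ ./install-deps.sh
    $ ./do_cmake.sh
    $ cd build
    $ ninja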

Step 2 - vstart
^^^^^^^^^^^^^^^

When the build completes, and while still in the top-level directory of the
git clone where you built Ceph, do the following (for cmake builds)::

    cd build/
    RGW=1 ../src/vstart.sh -n

This will produce a lot of output as the vstart cluster is started up. At the
end you should see a message like::

    started. stop.sh to stop. see out/* (e.g. 'tail -f out/????') for debug output.

This means the cluster is running.
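
To confirm that the cluster is up, you can query its status with the freshly
built binaries. Run this from the ``build/`` directory, where vstart places
the cluster's configuration::

    $ ./bin/ceph -s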

Step 3 - run s3-tests
^^^^^^^^^^^^^^^^^^^^^

.. highlight:: console

To run the s3-tests suite, do the following::

    $ ../qa/workunits/rgw/run-s3tests.sh
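
If you need more control (for example, to run a single test case), you can
also drive the `s3-tests <https://github.com/ceph/s3-tests>`_ suite by hand
against the running vstart cluster. A minimal sketch, assuming you have
cloned the s3-tests repository and written a configuration file pointing at
the vstart RGW endpoint (``your.conf`` is a placeholder, and the exact
invocation depends on the s3-tests version; recent versions use tox)::

    $ cd ~/s3-tests        # wherever you cloned ceph/s3-tests
    $ S3TEST_CONF=your.conf tox -- s3tests_boto3/functional/test_s3.py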

Running tests using vstart_runner.py
------------------------------------
CephFS and Ceph Manager code can be tested using `vstart_runner.py`_.

Running your first test
^^^^^^^^^^^^^^^^^^^^^^^
The Python tests in the Ceph repository can be executed on your local machine
using `vstart_runner.py`_. To do that, you need `teuthology`_ installed::

    $ virtualenv --python=python3 venv
    $ source venv/bin/activate
    $ pip install 'setuptools >= 12'
    $ pip install teuthology[test]@git+https://github.com/ceph/teuthology
    $ deactivate
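
As a quick sanity check before moving on, you can verify that teuthology is
importable from the virtual environment (a minimal sketch)::

    $ source venv/bin/activate
    $ python -c 'import teuthology; print(teuthology.__file__)'
    $ deactivate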

The above steps install teuthology in a virtual environment. Before running
a test locally, build Ceph successfully from source (refer to
:doc:`/install/build-ceph`) and do::

    $ cd build
    $ ../src/vstart.sh -n -d -l
    $ source ~/path/to/teuthology/venv/bin/activate

To run a specific test, say `test_reconnect_timeout`_ from
`TestClientRecovery`_ in ``qa/tasks/cephfs/test_client_recovery``, you can
do::

    $ python ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery.TestClientRecovery.test_reconnect_timeout

The above command runs vstart_runner.py and passes the test to be executed as
an argument to vstart_runner.py. In a similar way, you can also run a group
of tests::

    $ # run all tests in class TestClientRecovery
    $ python ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery.TestClientRecovery
    $ # run all tests in test_client_recovery.py
    $ python ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery

Based on the argument passed, vstart_runner.py collects the tests and
executes them as it would execute a single test.

vstart_runner.py can take the following options:

--clear-old-log        delete the old log file before running the test
--create               create the Ceph cluster before running a test
--create-cluster-only  create the cluster and quit; tests can be issued
                       later
--interactive          drop into a Python shell when a test fails
--log-ps-output        log ps output; might be useful while debugging
--teardown             tear the Ceph cluster down after the test(s) have
                       finished running
--kclient              use the kernel CephFS client instead of FUSE
--brxnet=<net/mask>    specify a new net/mask for the mount clients' network
                       namespace container (default: 192.168.0.0/16)
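
For example, to create a fresh cluster, run a single test against it, and
tear the cluster down afterwards in one go (combining the options listed
above)::

    $ python ../qa/tasks/vstart_runner.py --create --teardown tasks.cephfs.test_client_recovery.TestClientRecovery.test_reconnect_timeout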

.. note:: If using the FUSE client, ensure that the fuse package is installed
          and enabled on the system and that ``user_allow_other`` is added
          to ``/etc/fuse.conf``.

.. note:: If using the kernel client, the user must have the ability to run
          commands with passwordless sudo access.

.. note:: A failure on the kernel client may crash the host, so it's
          recommended to use this functionality within a virtual machine.

Internal working of vstart_runner.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vstart_runner.py primarily does three things:

* collects and runs the tests
    vstart_runner.py sets up and tears down the cluster, and collects and
    runs the tests. This is implemented using the methods ``scan_tests()``,
    ``load_tests()`` and ``exec_test()``. This is where all the options that
    vstart_runner.py takes are implemented, along with other features like
    logging and copying the traceback to the bottom of the log.

* provides an interface for issuing and testing shell commands
    The tests are written assuming that the cluster exists on remote
    machines. vstart_runner.py provides an interface to run the same tests
    with a cluster that exists on the local machine. This is done using the
    class ``LocalRemote``. The class ``LocalRemoteProcess`` manages the
    processes that execute the commands from ``LocalRemote``, the class
    ``LocalDaemon`` provides an interface to handle Ceph daemons, and the
    class ``LocalFuseMount`` can create and handle FUSE mounts.

* provides an interface to operate the Ceph cluster
    ``LocalCephManager`` provides methods to run Ceph cluster commands with
    and without the admin socket, and ``LocalCephCluster`` provides methods
    to set or clear ``ceph.conf``.

.. note:: vstart_runner.py deletes ``adjust-ulimits`` and ``ceph-coverage``
          from the command arguments unconditionally since they are not
          applicable when tests are run on a developer's machine.

.. note:: ``omit_sudo`` is reset to ``False`` unconditionally for the
          commands ``passwd`` and ``chown``.

.. note:: The presence of a binary file named after the first command
          argument is checked in ``<ceph-repo-root>/build/bin/``. If
          present, the first argument is replaced with the path to that
          binary file.
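
To illustrate that last point, a command written in a test as follows (a
hypothetical example) would be rewritten before execution::

    # as issued by the test
    ceph daemon osd.0 config show
    # as actually executed by vstart_runner.py
    <ceph-repo-root>/build/bin/ceph daemon osd.0 config show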

Running Workunits Using vstart_environment.sh
---------------------------------------------

Code can be tested by building Ceph locally from source, starting a vstart
cluster, and running any suite against it.
Similar to the s3-tests, other workunits can be run against the cluster by
configuring your environment.

Set up the environment
^^^^^^^^^^^^^^^^^^^^^^

Configure your environment::

    $ . ./build/vstart_environment.sh

Running a test
^^^^^^^^^^^^^^

To run a workunit (e.g. ``mon/osd.sh``), do the following::

    $ ./qa/workunits/mon/osd.sh
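
The available workunits live under ``qa/workunits/`` in the source tree; you
can browse that directory to find other suites to run the same way::

    $ ls ./qa/workunits/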

.. _test_reconnect_timeout: https://github.com/ceph/ceph/blob/master/qa/tasks/cephfs/test_client_recovery.py#L133
.. _TestClientRecovery: https://github.com/ceph/ceph/blob/master/qa/tasks/cephfs/test_client_recovery.py#L86
.. _teuthology: https://github.com/ceph/teuthology
.. _vstart_runner.py: https://github.com/ceph/ceph/blob/master/qa/tasks/vstart_runner.py