======================================
Ceph Object Gateway Config Reference
======================================

The following settings may be added to the Ceph configuration file (usually
``ceph.conf``) under the ``[client.radosgw.{instance-name}]`` section. These
settings have default values; if a setting is not specified in the Ceph
configuration file, its default value is applied automatically.

Configuration variables set under the ``[client.radosgw.{instance-name}]``
section do not apply to ``rgw`` or ``radosgw-admin`` commands run without an
instance name. Variables meant to apply to all RGW instances or to all
``radosgw-admin`` invocations can therefore be placed in the ``[global]`` or
``[client]`` section, which avoids the need to specify ``instance-name``.

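For example, a minimal sketch of how these settings can be laid out in
``ceph.conf`` (the instance name ``gateway-node1`` and the values shown are
illustrative, not recommendations)::

   [global]
   # Applies to every RGW instance, and to radosgw-admin commands
   # run without an instance name.
   rgw_enable_usage_log = true

   [client.radosgw.gateway-node1]
   # Applies only to the instance named "gateway-node1".
   rgw_frontends = beast port=8080
   rgw_dns_name = objects.example.com
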
.. confval:: rgw_frontends
.. confval:: rgw_data
.. confval:: rgw_enable_apis
.. confval:: rgw_cache_enabled
.. confval:: rgw_cache_lru_size
.. confval:: rgw_dns_name
.. confval:: rgw_script_uri
.. confval:: rgw_request_uri
.. confval:: rgw_print_continue
.. confval:: rgw_remote_addr_param
.. confval:: rgw_op_thread_timeout
.. confval:: rgw_op_thread_suicide_timeout
.. confval:: rgw_thread_pool_size
.. confval:: rgw_num_control_oids
.. confval:: rgw_init_timeout
.. confval:: rgw_mime_types_file
.. confval:: rgw_s3_success_create_obj_status
.. confval:: rgw_resolve_cname
.. confval:: rgw_obj_stripe_size
.. confval:: rgw_extended_http_attrs
.. confval:: rgw_exit_timeout_secs
.. confval:: rgw_get_obj_window_size
.. confval:: rgw_get_obj_max_req_size
.. confval:: rgw_multipart_min_part_size
.. confval:: rgw_relaxed_s3_bucket_names
.. confval:: rgw_list_buckets_max_chunk
.. confval:: rgw_override_bucket_index_max_shards
.. confval:: rgw_curl_wait_timeout_ms
.. confval:: rgw_copy_obj_progress
.. confval:: rgw_copy_obj_progress_every_bytes
.. confval:: rgw_max_copy_obj_concurrent_io
.. confval:: rgw_admin_entry
.. confval:: rgw_content_length_compat
.. confval:: rgw_bucket_quota_ttl
.. confval:: rgw_user_quota_bucket_sync_interval
.. confval:: rgw_user_quota_sync_interval
.. confval:: rgw_bucket_default_quota_max_objects
.. confval:: rgw_bucket_default_quota_max_size
.. confval:: rgw_user_default_quota_max_objects
.. confval:: rgw_user_default_quota_max_size
.. confval:: rgw_verify_ssl
.. confval:: rgw_max_chunk_size

Lifecycle Settings
==================

Bucket lifecycle configuration can be used to manage your objects so that they
are stored effectively throughout their lifetime. In past releases, lifecycle
processing was rate-limited because it was single-threaded. Since the Nautilus
release, the Ceph Object Gateway processes bucket lifecycles in parallel, both
across multiple threads and across additional Ceph Object Gateway instances,
and replaces the in-order index shard enumeration with a randomly ordered
sequence.

Two options in particular are worth examining when you want to increase the
aggressiveness of lifecycle processing:

.. confval:: rgw_lc_max_worker
.. confval:: rgw_lc_max_wp_worker

These values can be tuned to your specific workload to further increase the
aggressiveness of lifecycle processing. For a workload with a large number of
buckets (thousands), consider increasing :confval:`rgw_lc_max_worker` from its
default value of 3. For a workload with a smaller number of buckets but a
higher number of objects per bucket (hundreds of thousands), consider
increasing :confval:`rgw_lc_max_wp_worker` from its default value of 3.

.. note:: Before tuning either of these values, validate current cluster
   performance and Ceph Object Gateway utilization.

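As a sketch of how these might be applied with the ``ceph config`` command
(the values shown are illustrative, not recommendations):

.. prompt:: bash $

   ceph config set client.rgw rgw_lc_max_worker 5
   ceph config set client.rgw rgw_lc_max_wp_worker 5
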
Garbage Collection Settings
===========================

The Ceph Object Gateway allocates storage for new objects immediately.

The Ceph Object Gateway purges the storage space used for deleted and
overwritten objects in the Ceph Storage cluster some time after the gateway
deletes the objects from the bucket index. The process of purging the deleted
object data from the Ceph Storage cluster is known as Garbage Collection, or
GC.

To view the queue of objects awaiting garbage collection, execute the
following command:

.. prompt:: bash $

   radosgw-admin gc list

.. note:: Specify ``--include-all`` to list all entries, including unexpired
   Garbage Collection objects.

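For example, to list all entries, including those that have not yet expired:

.. prompt:: bash $

   radosgw-admin gc list --include-all
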
Garbage collection is a background activity that may execute continuously or
during times of low load, depending upon how the administrator configures the
Ceph Object Gateway. By default, the Ceph Object Gateway conducts GC
operations continuously. Since GC operations are a normal part of Ceph Object
Gateway operations, especially with object delete operations, objects eligible
for garbage collection exist most of the time.

Some workloads may temporarily or permanently outpace the rate of garbage
collection activity. This is especially true of delete-heavy workloads, where
many objects get stored for a short period of time and then deleted. For these
types of workloads, administrators can increase the priority of garbage
collection operations relative to other operations with the following
configuration parameters:

.. confval:: rgw_gc_max_objs
.. confval:: rgw_gc_obj_min_wait
.. confval:: rgw_gc_processor_max_time
.. confval:: rgw_gc_processor_period
.. confval:: rgw_gc_max_concurrent_io

:Tuning Garbage Collection for Delete Heavy Workloads:

As an initial step towards tuning Ceph garbage collection to be more
aggressive, the following options are suggested to be increased from their
default configuration values::

   rgw_gc_max_concurrent_io = 20
   rgw_gc_max_trim_chunk = 64

.. note:: Modifying these values requires a restart of the RGW service.

After these values have been increased from their defaults, monitor cluster
performance during garbage collection to verify that the increased values
cause no adverse performance issues.

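As a sketch, these values can be applied with ``ceph config`` and the RGW
service then restarted (this assumes a cephadm-managed deployment; the service
name ``rgw.default`` is illustrative, so substitute the name used in your
cluster):

.. prompt:: bash $

   ceph config set client.rgw rgw_gc_max_concurrent_io 20
   ceph config set client.rgw rgw_gc_max_trim_chunk 64
   ceph orch restart rgw.default
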
Multisite Settings
==================

.. versionadded:: Jewel

You may include the following settings in your Ceph configuration file under
each ``[client.radosgw.{instance-name}]`` instance.

.. confval:: rgw_zone
.. confval:: rgw_zonegroup
.. confval:: rgw_realm
.. confval:: rgw_run_sync_thread
.. confval:: rgw_data_log_window
.. confval:: rgw_data_log_changes_size
.. confval:: rgw_data_log_obj_prefix
.. confval:: rgw_data_log_num_shards
.. confval:: rgw_md_log_max_shards
.. confval:: rgw_data_sync_poll_interval
.. confval:: rgw_meta_sync_poll_interval
.. confval:: rgw_bucket_sync_spawn_window
.. confval:: rgw_data_sync_spawn_window
.. confval:: rgw_meta_sync_spawn_window

.. important:: The values of :confval:`rgw_data_log_num_shards` and
   :confval:`rgw_md_log_max_shards` should not be changed after sync has
   started.

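For example, a sketch of the per-instance multisite settings in ``ceph.conf``
(the realm, zonegroup, and zone names are placeholders for those of your own
deployment)::

   [client.radosgw.gateway-node1]
   rgw_realm = example-realm
   rgw_zonegroup = us
   rgw_zone = us-east
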
S3 Settings
===========

.. confval:: rgw_s3_auth_use_ldap

Swift Settings
==============

.. confval:: rgw_enforce_swift_acls
.. confval:: rgw_swift_tenant_name
.. confval:: rgw_swift_token_expiration
.. confval:: rgw_swift_url
.. confval:: rgw_swift_url_prefix
.. confval:: rgw_swift_auth_url
.. confval:: rgw_swift_auth_entry
.. confval:: rgw_swift_account_in_url
.. confval:: rgw_swift_versioning_enabled
.. confval:: rgw_trust_forwarded_https

Logging Settings
================

.. confval:: rgw_log_nonexistent_bucket
.. confval:: rgw_log_object_name
.. confval:: rgw_log_object_name_utc
.. confval:: rgw_usage_max_shards
.. confval:: rgw_usage_max_user_shards
.. confval:: rgw_enable_ops_log
.. confval:: rgw_enable_usage_log
.. confval:: rgw_ops_log_rados
.. confval:: rgw_ops_log_socket_path
.. confval:: rgw_ops_log_data_backlog
.. confval:: rgw_usage_log_flush_threshold
.. confval:: rgw_usage_log_tick_interval
.. confval:: rgw_log_http_headers

Keystone Settings
=================

.. confval:: rgw_keystone_url
.. confval:: rgw_keystone_api_version
.. confval:: rgw_keystone_admin_domain
.. confval:: rgw_keystone_admin_project
.. confval:: rgw_keystone_admin_token
.. confval:: rgw_keystone_admin_token_path
.. confval:: rgw_keystone_admin_tenant
.. confval:: rgw_keystone_admin_user
.. confval:: rgw_keystone_admin_password
.. confval:: rgw_keystone_admin_password_path
.. confval:: rgw_keystone_accepted_roles
.. confval:: rgw_keystone_token_cache_size
.. confval:: rgw_keystone_verify_ssl
.. confval:: rgw_keystone_service_token_enabled
.. confval:: rgw_keystone_service_token_accepted_roles
.. confval:: rgw_keystone_expired_token_cache_expiration

Server-Side Encryption Settings
===============================

.. confval:: rgw_crypt_s3_kms_backend

Barbican Settings
=================

.. confval:: rgw_barbican_url
.. confval:: rgw_keystone_barbican_user
.. confval:: rgw_keystone_barbican_password
.. confval:: rgw_keystone_barbican_tenant
.. confval:: rgw_keystone_barbican_project
.. confval:: rgw_keystone_barbican_domain

HashiCorp Vault Settings
========================

.. confval:: rgw_crypt_vault_auth
.. confval:: rgw_crypt_vault_token_file
.. confval:: rgw_crypt_vault_addr
.. confval:: rgw_crypt_vault_prefix
.. confval:: rgw_crypt_vault_secret_engine
.. confval:: rgw_crypt_vault_namespace

SSE-S3 Settings
===============

.. confval:: rgw_crypt_sse_s3_backend
.. confval:: rgw_crypt_sse_s3_vault_secret_engine
.. confval:: rgw_crypt_sse_s3_key_template
.. confval:: rgw_crypt_sse_s3_vault_auth
.. confval:: rgw_crypt_sse_s3_vault_token_file
.. confval:: rgw_crypt_sse_s3_vault_addr
.. confval:: rgw_crypt_sse_s3_vault_prefix
.. confval:: rgw_crypt_sse_s3_vault_namespace
.. confval:: rgw_crypt_sse_s3_vault_verify_ssl
.. confval:: rgw_crypt_sse_s3_vault_ssl_cacert
.. confval:: rgw_crypt_sse_s3_vault_ssl_clientcert
.. confval:: rgw_crypt_sse_s3_vault_ssl_clientkey

QoS Settings
============

.. versionadded:: Nautilus

The ``civetweb`` frontend has a threading model that uses a thread per
connection and is therefore automatically throttled by the
:confval:`rgw_thread_pool_size` configurable when accepting connections. The
newer ``beast`` frontend is not restricted by the thread pool size when
accepting new connections, so a scheduler abstraction was introduced in the
Nautilus release to support future methods of scheduling requests.

Currently the scheduler defaults to a throttler that limits active connections
to a configured value. QoS based on mClock is currently in an *experimental*
phase and is not yet recommended for production. The current implementation of
the *dmclock_client* op queue divides RGW ops into admin, auth (Swift auth,
STS), metadata, and data requests.

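For example, to raise the cap enforced by the default throttler, an
administrator might adjust :confval:`rgw_max_concurrent_requests` (the value
shown is illustrative, not a recommendation):

.. prompt:: bash $

   ceph config set client.rgw rgw_max_concurrent_requests 2048
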
.. confval:: rgw_max_concurrent_requests
.. confval:: rgw_scheduler_type
.. confval:: rgw_dmclock_auth_res
.. confval:: rgw_dmclock_auth_wgt
.. confval:: rgw_dmclock_auth_lim
.. confval:: rgw_dmclock_admin_res
.. confval:: rgw_dmclock_admin_wgt
.. confval:: rgw_dmclock_admin_lim
.. confval:: rgw_dmclock_data_res
.. confval:: rgw_dmclock_data_wgt
.. confval:: rgw_dmclock_data_lim
.. confval:: rgw_dmclock_metadata_res
.. confval:: rgw_dmclock_metadata_wgt
.. confval:: rgw_dmclock_metadata_lim