======================================
 Ceph Object Gateway Config Reference
======================================

The following settings may be added to the Ceph configuration file (usually
``ceph.conf``) under the ``[client.radosgw.{instance-name}]`` section. Each
setting has a default value: if a setting is not specified in the Ceph
configuration file, its default value is used.

Configuration variables set under the ``[client.radosgw.{instance-name}]``
section will not apply to ``rgw`` or ``radosgw-admin`` commands run without an
``instance-name`` specified. Variables meant to apply to all RGW instances or
to all ``radosgw-admin`` invocations can therefore be put into the ``[global]``
or the ``[client]`` section to avoid specifying ``instance-name``.
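
For example, a minimal ``ceph.conf`` stanza for a single gateway instance might
look like the following; the instance name ``gateway-node1`` and the values
shown are illustrative only::

  [client.radosgw.gateway-node1]
  rgw_frontends = beast port=8080
  rgw_dns_name = objects.example.com
  rgw_thread_pool_size = 512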

.. confval:: rgw_frontends
.. confval:: rgw_data
.. confval:: rgw_enable_apis
.. confval:: rgw_cache_enabled
.. confval:: rgw_cache_lru_size
.. confval:: rgw_dns_name
.. confval:: rgw_script_uri
.. confval:: rgw_request_uri
.. confval:: rgw_print_continue
.. confval:: rgw_remote_addr_param
.. confval:: rgw_op_thread_timeout
.. confval:: rgw_op_thread_suicide_timeout
.. confval:: rgw_thread_pool_size
.. confval:: rgw_num_control_oids
.. confval:: rgw_init_timeout
.. confval:: rgw_mime_types_file
.. confval:: rgw_s3_success_create_obj_status
.. confval:: rgw_resolve_cname
.. confval:: rgw_obj_stripe_size
.. confval:: rgw_extended_http_attrs
.. confval:: rgw_exit_timeout_secs
.. confval:: rgw_get_obj_window_size
.. confval:: rgw_get_obj_max_req_size
.. confval:: rgw_multipart_min_part_size
.. confval:: rgw_relaxed_s3_bucket_names
.. confval:: rgw_list_buckets_max_chunk
.. confval:: rgw_override_bucket_index_max_shards
.. confval:: rgw_curl_wait_timeout_ms
.. confval:: rgw_copy_obj_progress
.. confval:: rgw_copy_obj_progress_every_bytes
.. confval:: rgw_max_copy_obj_concurrent_io
.. confval:: rgw_admin_entry
.. confval:: rgw_content_length_compat
.. confval:: rgw_bucket_quota_ttl
.. confval:: rgw_user_quota_bucket_sync_interval
.. confval:: rgw_user_quota_sync_interval
.. confval:: rgw_bucket_default_quota_max_objects
.. confval:: rgw_bucket_default_quota_max_size
.. confval:: rgw_user_default_quota_max_objects
.. confval:: rgw_user_default_quota_max_size
.. confval:: rgw_verify_ssl
.. confval:: rgw_max_chunk_size

Lifecycle Settings
==================

Bucket Lifecycle configuration can be used to manage your objects so that they
are stored effectively throughout their lifetime. In past releases, lifecycle
processing was rate-limited because it was single-threaded. Since the Nautilus
release, the Ceph Object Gateway processes bucket lifecycles with parallel
threads, can spread that work across multiple Ceph Object Gateway instances,
and enumerates index shards in random order rather than in sequence.

There are two options in particular to look at when seeking to increase the
aggressiveness of lifecycle processing:

.. confval:: rgw_lc_max_worker
.. confval:: rgw_lc_max_wp_worker

These values can be tuned to your specific workload to further increase the
aggressiveness of lifecycle processing. For a workload with a large number of
buckets (thousands), consider increasing :confval:`rgw_lc_max_worker` from its
default value of 3. For a workload with a smaller number of buckets but a
higher number of objects per bucket (hundreds of thousands), consider
increasing :confval:`rgw_lc_max_wp_worker` from its default value of 3.

.. note:: Before increasing either of these values, validate current cluster
   performance and Ceph Object Gateway utilization.
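
As a sketch of how these values might be raised through the centralized
configuration database, assuming the gateways read their configuration from a
``client.rgw`` section (adjust the section or daemon name and the values to
match your deployment):

.. prompt:: bash $

   ceph config set client.rgw rgw_lc_max_worker 5
   ceph config set client.rgw rgw_lc_max_wp_worker 5

Depending on the release, the gateway instances may need to be restarted for
the new values to take effect.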

Garbage Collection Settings
===========================

The Ceph Object Gateway allocates storage for new objects immediately.

The Ceph Object Gateway purges the storage space used for deleted and
overwritten objects in the Ceph Storage cluster some time after the gateway
deletes the objects from the bucket index. The process of purging the deleted
object data from the Ceph Storage cluster is known as Garbage Collection, or
GC.

To view the queue of objects awaiting garbage collection, run the following
command:

.. prompt:: bash $

   radosgw-admin gc list

.. note:: Specify ``--include-all`` to list all entries, including unexpired
   ones.
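
A garbage collection pass can also be triggered manually, which can be useful
when validating tuning changes. As with ``gc list``, adding ``--include-all``
processes all entries, including those whose grace period has not yet expired:

.. prompt:: bash $

   radosgw-admin gc process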

Garbage collection is a background activity that may execute continuously or
during times of low load, depending upon how the administrator configures the
Ceph Object Gateway. By default, the Ceph Object Gateway conducts GC
operations continuously. Because GC operations are a normal part of Ceph
Object Gateway operations, especially with object delete operations, objects
eligible for garbage collection exist most of the time.

Some workloads may temporarily or permanently outpace the rate of garbage
collection activity. This is especially true of delete-heavy workloads, where
many objects are stored for a short period of time and then deleted. For these
types of workloads, administrators can increase the priority of garbage
collection operations relative to other operations with the following
configuration parameters:

.. confval:: rgw_gc_max_objs
.. confval:: rgw_gc_obj_min_wait
.. confval:: rgw_gc_processor_max_time
.. confval:: rgw_gc_processor_period
.. confval:: rgw_gc_max_concurrent_io

:Tuning Garbage Collection for Delete Heavy Workloads:

As an initial step towards making garbage collection more aggressive, consider
increasing the following options from their default values::

  rgw_gc_max_concurrent_io = 20
  rgw_gc_max_trim_chunk = 64

.. note:: Modifying these values requires a restart of the RGW service.

After increasing these values from their defaults, monitor cluster performance
during garbage collection to verify that the higher settings do not cause any
adverse performance issues.
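
As a sketch, the same options can also be applied through the centralized
configuration database rather than by editing ``ceph.conf``; the ``client.rgw``
section name is an example, so substitute the section or daemon name used by
your deployment, and restart the gateways afterwards as noted above:

.. prompt:: bash $

   ceph config set client.rgw rgw_gc_max_concurrent_io 20
   ceph config set client.rgw rgw_gc_max_trim_chunk 64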

Multisite Settings
==================

.. versionadded:: Jewel

You may include the following settings in your Ceph configuration
file under each ``[client.radosgw.{instance-name}]`` instance.

.. confval:: rgw_zone
.. confval:: rgw_zonegroup
.. confval:: rgw_realm
.. confval:: rgw_run_sync_thread
.. confval:: rgw_data_log_window
.. confval:: rgw_data_log_changes_size
.. confval:: rgw_data_log_obj_prefix
.. confval:: rgw_data_log_num_shards
.. confval:: rgw_md_log_max_shards
.. confval:: rgw_data_sync_poll_interval
.. confval:: rgw_meta_sync_poll_interval
.. confval:: rgw_bucket_sync_spawn_window
.. confval:: rgw_data_sync_spawn_window
.. confval:: rgw_meta_sync_spawn_window

.. important:: The values of :confval:`rgw_data_log_num_shards` and
   :confval:`rgw_md_log_max_shards` should not be changed after sync has
   started.
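
For example, a gateway instance that participates in a multisite configuration
might carry a stanza like the following; the instance, realm, zonegroup, and
zone names are placeholders for the ones defined in your own deployment::

  [client.radosgw.gateway-node1]
  rgw_realm = movies
  rgw_zonegroup = us
  rgw_zone = us-east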

S3 Settings
===========

.. confval:: rgw_s3_auth_use_ldap

Swift Settings
==============

.. confval:: rgw_enforce_swift_acls
.. confval:: rgw_swift_tenant_name
.. confval:: rgw_swift_token_expiration
.. confval:: rgw_swift_url
.. confval:: rgw_swift_url_prefix
.. confval:: rgw_swift_auth_url
.. confval:: rgw_swift_auth_entry
.. confval:: rgw_swift_account_in_url
.. confval:: rgw_swift_versioning_enabled
.. confval:: rgw_trust_forwarded_https

Logging Settings
================

.. confval:: rgw_log_nonexistent_bucket
.. confval:: rgw_log_object_name
.. confval:: rgw_log_object_name_utc
.. confval:: rgw_usage_max_shards
.. confval:: rgw_usage_max_user_shards
.. confval:: rgw_enable_ops_log
.. confval:: rgw_enable_usage_log
.. confval:: rgw_ops_log_rados
.. confval:: rgw_ops_log_socket_path
.. confval:: rgw_ops_log_data_backlog
.. confval:: rgw_usage_log_flush_threshold
.. confval:: rgw_usage_log_tick_interval
.. confval:: rgw_log_http_headers

Keystone Settings
=================

.. confval:: rgw_keystone_url
.. confval:: rgw_keystone_api_version
.. confval:: rgw_keystone_admin_domain
.. confval:: rgw_keystone_admin_project
.. confval:: rgw_keystone_admin_token
.. confval:: rgw_keystone_admin_token_path
.. confval:: rgw_keystone_admin_tenant
.. confval:: rgw_keystone_admin_user
.. confval:: rgw_keystone_admin_password
.. confval:: rgw_keystone_admin_password_path
.. confval:: rgw_keystone_accepted_roles
.. confval:: rgw_keystone_token_cache_size
.. confval:: rgw_keystone_verify_ssl
.. confval:: rgw_keystone_service_token_enabled
.. confval:: rgw_keystone_service_token_accepted_roles
.. confval:: rgw_keystone_expired_token_cache_expiration

Server-side encryption Settings
===============================

.. confval:: rgw_crypt_s3_kms_backend

Barbican Settings
=================

.. confval:: rgw_barbican_url
.. confval:: rgw_keystone_barbican_user
.. confval:: rgw_keystone_barbican_password
.. confval:: rgw_keystone_barbican_tenant
.. confval:: rgw_keystone_barbican_project
.. confval:: rgw_keystone_barbican_domain

HashiCorp Vault Settings
========================

.. confval:: rgw_crypt_vault_auth
.. confval:: rgw_crypt_vault_token_file
.. confval:: rgw_crypt_vault_addr
.. confval:: rgw_crypt_vault_prefix
.. confval:: rgw_crypt_vault_secret_engine
.. confval:: rgw_crypt_vault_namespace

SSE-S3 Settings
===============

.. confval:: rgw_crypt_sse_s3_backend
.. confval:: rgw_crypt_sse_s3_vault_secret_engine
.. confval:: rgw_crypt_sse_s3_key_template
.. confval:: rgw_crypt_sse_s3_vault_auth
.. confval:: rgw_crypt_sse_s3_vault_token_file
.. confval:: rgw_crypt_sse_s3_vault_addr
.. confval:: rgw_crypt_sse_s3_vault_prefix
.. confval:: rgw_crypt_sse_s3_vault_namespace
.. confval:: rgw_crypt_sse_s3_vault_verify_ssl
.. confval:: rgw_crypt_sse_s3_vault_ssl_cacert
.. confval:: rgw_crypt_sse_s3_vault_ssl_clientcert
.. confval:: rgw_crypt_sse_s3_vault_ssl_clientkey

QoS settings
============

.. versionadded:: Nautilus

The ``civetweb`` frontend has a threading model that uses a thread per
connection, so the number of connections it accepts is automatically throttled
by the :confval:`rgw_thread_pool_size` setting. The newer ``beast`` frontend is
not restricted by the thread pool size when accepting new connections, so a
scheduler abstraction was introduced in the Nautilus release to support future
methods of scheduling requests.

Currently the scheduler defaults to a throttler that limits the number of
active connections to a configured value. QoS based on mClock is currently in
an *experimental* phase and is not yet recommended for production. The current
implementation of the *dmclock_client* op queue divides RGW operations into
admin, auth (Swift auth, STS), metadata, and data requests.
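
As a sketch, the active-request limit enforced by the default throttler can be
raised for a ``beast`` frontend by adjusting
:confval:`rgw_max_concurrent_requests`; the instance name and the value shown
here are illustrative only::

  [client.radosgw.gateway-node1]
  rgw_frontends = beast port=8080
  rgw_max_concurrent_requests = 2048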

.. confval:: rgw_max_concurrent_requests
.. confval:: rgw_scheduler_type
.. confval:: rgw_dmclock_auth_res
.. confval:: rgw_dmclock_auth_wgt
.. confval:: rgw_dmclock_auth_lim
.. confval:: rgw_dmclock_admin_res
.. confval:: rgw_dmclock_admin_wgt
.. confval:: rgw_dmclock_admin_lim
.. confval:: rgw_dmclock_data_res
.. confval:: rgw_dmclock_data_wgt
.. confval:: rgw_dmclock_data_lim
.. confval:: rgw_dmclock_metadata_res
.. confval:: rgw_dmclock_metadata_wgt
.. confval:: rgw_dmclock_metadata_lim