[/===========================================================================
  Copyright (c) 2013-2015 Kyle Lutz <kyle.r.lutz@gmail.com>

  Distributed under the Boost Software License, Version 1.0
  See accompanying file LICENSE_1_0.txt or copy at
  http://www.boost.org/LICENSE_1_0.txt
=============================================================================/]

[section:faq Frequently Asked Questions]

[h3 How do I report a bug, issue, or feature request?]

Please submit an issue on the GitHub issue tracker at
[@https://github.com/boostorg/compute/issues].


[h3 Where can I find more documentation?]

* The main documentation is here: [@http://boostorg.github.io/compute/]
* The README is here: [@https://github.com/boostorg/compute/blob/master/README.md]
* The wiki is here: [@https://github.com/boostorg/compute/wiki]
* The contributor guide is here: [@https://github.com/boostorg/compute/blob/master/CONTRIBUTING.md]
* The reference is here: [@http://boostorg.github.io/compute/compute/reference.html]


[h3 Where is the best place to ask questions about the library?]

The mailing list at [@https://groups.google.com/forum/#!forum/boost-compute].


[h3 What compute devices (e.g. GPUs) are supported?]

Any device which implements the OpenCL standard is supported. This includes
GPUs from NVIDIA, AMD, and Intel, CPUs from AMD and Intel, and accelerators
such as the Intel Xeon Phi.


[h3 Can you compare Boost.Compute to other GPGPU libraries such as Thrust,
Bolt and VexCL?]

Thrust implements a C++ STL-like API for GPUs and CPUs. It is built
with multiple backends: NVIDIA GPUs use the CUDA backend, and
multi-core CPUs can use the Intel TBB or OpenMP backends. However,
Thrust will not work with AMD graphics cards or other lesser-known
accelerators. I feel Boost.Compute is superior in that it uses the
vendor-neutral OpenCL library to achieve portability across all types
of compute devices.

Bolt is an AMD-specific C++ wrapper around the OpenCL API which
extends the C99-based OpenCL language to support C++ features (most
notably templates). It is similar to NVIDIA's Thrust library and
shares the same shortcoming: a lack of portability.

VexCL is an expression-template-based linear-algebra library for
OpenCL. Its aims and scope are a bit different from those of the
Boost.Compute library. VexCL is closer in nature to the Eigen library
while Boost.Compute is closer to the C++ standard library. I don't feel
that Boost.Compute really fills the same role as VexCL. In fact, recent
versions of VexCL can use Boost.Compute as one of their backends, which
makes interoperation between the two libraries a breeze.

Also see this StackOverflow question:
[@http://stackoverflow.com/questions/20154179/differences-between-vexcl-thrust-and-boost-compute]


[h3 Why not just write a new OpenCL back-end for Thrust?]

It would not be possible to provide the same API that Thrust expects
for OpenCL. The fundamental reason is that the functions/functors passed
to Thrust algorithms are actual compiled C++ functions, whereas those
passed to Boost.Compute algorithms are expression objects which are
translated into C99 code and then compiled by OpenCL at run-time.


[h3 Why not target CUDA and/or support multiple back-ends?]

CUDA and OpenCL are two very different technologies. OpenCL works by
compiling C99 code at run-time to generate kernel objects which can
then be executed on the GPU. CUDA, on the other hand, works by
compiling its kernels using a special compiler (nvcc) which then
produces binaries which can be executed on the GPU.

OpenCL already has multiple implementations which allow it to be used
on a variety of platforms (e.g. NVIDIA GPUs, Intel CPUs, etc.). I feel
that adding another abstraction level within Boost.Compute would only
complicate and bloat the library.


[h3 Is it possible to use ordinary C++ functions/functors or C++11
lambdas with Boost.Compute?]

Unfortunately no. OpenCL relies on having C99 source code available at
run-time in order to execute code on the GPU. Thus compiled C++
functions or C++11 lambdas cannot simply be passed to the OpenCL
environment to be executed on the GPU.

This is the reason why I wrote the Boost.Compute lambda library.
Basically it takes C++ lambda expressions (e.g. `_1 * sqrt(_1) + 4`) and
transforms them into C99 source code fragments (e.g.
`input[i] * sqrt(input[i]) + 4`) which are then passed to the
Boost.Compute STL-style algorithms for execution. While not perfect,
this allows the user to write code closer to C++ that can still be
executed through OpenCL.
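
For example, a lambda expression can be passed directly to an algorithm
such as `boost::compute::transform()`. A minimal sketch (the device,
vector size, and values here are arbitrary):

    #include <boost/compute/algorithm/transform.hpp>
    #include <boost/compute/container/vector.hpp>
    #include <boost/compute/core.hpp>
    #include <boost/compute/lambda.hpp>

    namespace compute = boost::compute;

    int main()
    {
        using compute::lambda::_1;

        // use the default command queue created by the system
        compute::command_queue queue = compute::system::default_queue();

        // transfer three values to the device
        float host_data[] = { 1.f, 4.f, 9.f };
        compute::vector<float> vec(host_data, host_data + 3, queue);

        // evaluate the lambda expression for each element on the device
        compute::transform(
            vec.begin(), vec.end(), vec.begin(), _1 * sqrt(_1) + 4, queue
        );

        return 0;
    }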

Also check out the `BOOST_COMPUTE_FUNCTION()` macro which allows OpenCL
functions to be defined inline with C++ code. An example can be found in
the `monte_carlo` example code.
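
For instance, a custom function can be defined with the macro and passed
to an algorithm. A minimal sketch (the function name and values are
invented for illustration, not taken from the `monte_carlo` example):

    #include <boost/compute/algorithm/fill.hpp>
    #include <boost/compute/algorithm/transform.hpp>
    #include <boost/compute/container/vector.hpp>
    #include <boost/compute/core.hpp>
    #include <boost/compute/function.hpp>

    namespace compute = boost::compute;

    // define an OpenCL function inline with the C++ code;
    // note that the function body is OpenCL C, not C++
    BOOST_COMPUTE_FUNCTION(float, add_four, (float x),
    {
        return x + 4;
    });

    int main()
    {
        compute::command_queue queue = compute::system::default_queue();

        // create and fill a vector on the device
        compute::vector<float> vec(16, queue.get_context());
        compute::fill(vec.begin(), vec.end(), 1.f, queue);

        // apply add_four() to each element on the device
        compute::transform(vec.begin(), vec.end(), vec.begin(), add_four, queue);

        return 0;
    }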


[h3 What is the command_queue argument that appears in all of the algorithms?]

Command queues specify the context and device for the algorithm's
execution. For all of the standard algorithms the `command_queue`
parameter is optional. If not provided, a default `command_queue` will
be created for the default GPU device and the algorithm will be
executed there.
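
For example, the same algorithm call can be made with an explicitly
constructed queue or with the implicit default. A minimal sketch (the
values here are arbitrary):

    #include <boost/compute/algorithm/sort.hpp>
    #include <boost/compute/container/vector.hpp>
    #include <boost/compute/core.hpp>

    namespace compute = boost::compute;

    int main()
    {
        // explicitly choose a device and build a context and queue for it
        compute::device device = compute::system::default_device();
        compute::context context(device);
        compute::command_queue queue(context, device);

        // transfer some values to the device
        int host_data[] = { 5, 3, 1, 4, 2 };
        compute::vector<int> vec(host_data, host_data + 5, queue);

        // pass the queue explicitly; omitting the last argument would
        // run the algorithm on the default queue instead
        compute::sort(vec.begin(), vec.end(), queue);

        return 0;
    }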


[h3 How can I print out the contents of a buffer/vector on the GPU?]

This can be accomplished easily using the generic `boost::compute::copy()`
algorithm along with `std::ostream_iterator<T>`. For example:

[import ../example/print_vector.cpp]
[print_vector_example]


[h3 Does Boost.Compute support zero-copy memory?]

Zero-copy memory allows OpenCL kernels to directly operate on regions of host
memory (if supported by the platform).

Boost.Compute supports zero-copy memory in multiple ways. The low-level
interface is provided by allocating [classref boost::compute::buffer buffer]
objects with the `CL_MEM_USE_HOST_PTR` flag. The high-level interface is
provided by the [classref boost::compute::mapped_view mapped_view<T>] class
which provides a `std::vector`-like interface to a region of host memory and
can be used directly with all of the Boost.Compute algorithms.
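
For example, `mapped_view<T>` can be used to run an algorithm directly on
host memory. A minimal sketch (the values here are arbitrary):

    #include <boost/compute/algorithm/sort.hpp>
    #include <boost/compute/container/mapped_view.hpp>
    #include <boost/compute/core.hpp>

    namespace compute = boost::compute;

    int main()
    {
        compute::context context = compute::system::default_context();
        compute::command_queue queue = compute::system::default_queue();

        // host memory made visible to the device without an explicit copy
        float data[] = { 4.f, 1.f, 3.f, 2.f };
        compute::mapped_view<float> view(data, 4, context);

        // sort the host data in place on the device
        compute::sort(view.begin(), view.end(), queue);

        // wait for the device to finish before reading data[] on the host
        queue.finish();

        return 0;
    }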


[h3 Is Boost.Compute thread-safe?]

The low-level Boost.Compute APIs offer the same thread-safety guarantees as
the underlying OpenCL library implementation. However, the high-level APIs
make use of a few global static objects for features such as automatic program
caching, which makes them not thread-safe by default.

To compile Boost.Compute in thread-safe mode, define `BOOST_COMPUTE_THREAD_SAFE`
before including any of the Boost.Compute headers. By default this will require
linking your application/library with the Boost.Thread library.
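
For example, the define must appear before the first Boost.Compute header
is included (or be passed on the compiler command line, e.g.
`-DBOOST_COMPUTE_THREAD_SAFE`):

    // enable thread-safe mode before including any Boost.Compute headers
    #define BOOST_COMPUTE_THREAD_SAFE
    #include <boost/compute.hpp>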


[h3 What applications/libraries use Boost.Compute?]

Boost.Compute is used by a number of open-source libraries and applications
including:

* ArrayFire ([@http://arrayfire.com])
* Ceemple ([@http://www.ceemple.com])
* Odeint ([@http://headmyshoulder.github.io/odeint-v2])
* VexCL ([@https://github.com/ddemidov/vexcl])

If you use Boost.Compute in your project and would like it to be listed here
please send an email to Kyle Lutz (kyle.r.lutz@gmail.com).


[h3 How can I contribute?]

We are actively seeking additional C++ developers with experience in
GPGPU and parallel computing.

Please send an email to Kyle Lutz (kyle.r.lutz@gmail.com) for more information.

Also see the
[@https://github.com/boostorg/compute/blob/master/CONTRIBUTING.md contributor guide]
and check out the list of issues at
[@https://github.com/boostorg/compute/issues].

[endsect] [/ faq ]