[/===========================================================================
 Copyright (c) 2013-2015 Kyle Lutz <kyle.r.lutz@gmail.com>

 Distributed under the Boost Software License, Version 1.0
 See accompanying file LICENSE_1_0.txt or copy at
 http://www.boost.org/LICENSE_1_0.txt
=============================================================================/]

[section:faq Frequently Asked Questions]

[h3 How do I report a bug, issue, or feature request?]

Please submit an issue on the GitHub issue tracker at
[@https://github.com/boostorg/compute/issues].


[h3 Where can I find more documentation?]

* The main documentation is here: [@http://boostorg.github.io/compute/]
* The README is here: [@https://github.com/boostorg/compute/blob/master/README.md]
* The wiki is here: [@https://github.com/boostorg/compute/wiki]
* The contributor guide is here: [@https://github.com/boostorg/compute/blob/master/CONTRIBUTING.md]
* The reference is here: [@http://boostorg.github.io/compute/compute/reference.html]


[h3 Where is the best place to ask questions about the library?]

The mailing list at [@https://groups.google.com/forum/#!forum/boost-compute].

[h3 What compute devices (e.g. GPUs) are supported?]

Any device which implements the OpenCL standard is supported. This includes
GPUs from NVIDIA, AMD, and Intel, CPUs from AMD and Intel, and accelerator
cards such as the Xeon Phi.
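
For example, the compute devices visible to Boost.Compute can be listed with a
few lines of code (a minimal sketch; the output depends on which OpenCL
drivers are installed):

    #include <iostream>

    #include <boost/compute/system.hpp>

    int main()
    {
        // print the name of every OpenCL device found on the system
        for(const boost::compute::device &device :
                boost::compute::system::devices()){
            std::cout << device.name() << std::endl;
        }

        return 0;
    }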
36 | ||
37 | ||
38 | [h3 Can you compare Boost.Compute to other GPGPU libraries such as Thrust, | |
39 | Bolt and VexCL?] | |
40 | ||
41 | Thrust implements a C++ STL-like API for GPUs and CPUs. It is built | |
42 | with multiple backends. NVIDIA GPUs use the CUDA backend and | |
43 | multi-core CPUs can use the Intel TBB or OpenMP backends. However, | |
44 | thrust will not work with AMD graphics cards or other lesser-known | |
45 | accelerators. I feel Boost.Compute is superior in that it uses the | |
46 | vendor-neutral OpenCL library to achieve portability across all types | |
47 | of compute devices. | |
48 | ||
49 | Bolt is an AMD specific C++ wrapper around the OpenCL API which | |
50 | extends the C99-based OpenCL language to support C++ features (most | |
51 | notably templates). It is similar to NVIDIA's Thrust library and | |
52 | shares the same failure, lack of portability. | |
53 | ||
54 | VexCL is an expression-template based linear-algebra library for | |
55 | OpenCL. The aims and scope are a bit different from the Boost Compute | |
56 | library. VexCL is closer in nature to the Eigen library while | |
57 | Boost.Compute is closer to the C++ standard library. I don't feel that | |
58 | Boost.Compute really fills the same role as VexCL. In fact, the recent versions | |
59 | of VexCL allow to use Boost.Compute as one of the backends, which makes | |
60 | the interaction between the two libraries a breeze. | |
61 | ||
62 | ||
63 | ||
64 | Also see this StackOverflow question: | |
65 | [@http://stackoverflow.com/questions/20154179/differences-between-vexcl-thrust-and-boost-compute] | |
66 | ||
67 | ||
[h3 Why not just write a new OpenCL back-end for Thrust?]

It would not be possible to provide the same API that Thrust expects
for OpenCL. The fundamental reason is that functions/functors passed
to Thrust algorithms are actual compiled C++ functions, whereas for
Boost.Compute these form expression objects which are translated
into C99 code and then compiled for OpenCL.


[h3 Why not target CUDA and/or support multiple back-ends?]

CUDA and OpenCL are two very different technologies. OpenCL works by
compiling C99 code at run-time to generate kernel objects which can
then be executed on the GPU. CUDA, on the other hand, works by
compiling its kernels using a special compiler (nvcc) which then
produces binaries which can be executed on the GPU.

OpenCL already has multiple implementations which allow it to be used
on a variety of platforms (e.g. NVIDIA GPUs, Intel CPUs, etc.). I feel
that adding another abstraction level within Boost.Compute would only
complicate and bloat the library.


[h3 Is it possible to use ordinary C++ functions/functors or C++11
lambdas with Boost.Compute?]

Unfortunately no. OpenCL relies on having C99 source code available at
run-time in order to execute code on the GPU. Thus compiled C++
functions or C++11 lambdas cannot simply be passed to the OpenCL
environment to be executed on the GPU.

This is the reason why I wrote the Boost.Compute lambda library.
Basically it takes C++ lambda expressions (e.g. `_1 * sqrt(_1) + 4`) and
transforms them into C99 source code fragments (e.g. `input[i] *
sqrt(input[i]) + 4`) which are then passed to the Boost.Compute
STL-style algorithms for execution. While not perfect, it allows the
user to write code closer to C++ that can still be executed through
OpenCL.
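
For example, a small program along these lines (a sketch only, assuming the
default compute device and a few illustrative input values) applies that
lambda expression to every element of a device vector:

    #include <vector>

    #include <boost/compute/system.hpp>
    #include <boost/compute/lambda.hpp>
    #include <boost/compute/algorithm/transform.hpp>
    #include <boost/compute/container/vector.hpp>

    namespace compute = boost::compute;

    int main()
    {
        using compute::lambda::_1;
        using compute::lambda::sqrt;

        compute::command_queue queue = compute::system::default_queue();

        // copy some illustrative values to the device
        std::vector<float> host = { 1.0f, 4.0f, 9.0f, 16.0f };
        compute::vector<float> vec(host.begin(), host.end(), queue);

        // the lambda expression is translated to OpenCL C and compiled
        // at run-time, then applied to each element of the vector
        compute::transform(
            vec.begin(), vec.end(), vec.begin(), _1 * sqrt(_1) + 4, queue
        );

        return 0;
    }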
106 | ||
107 | Also check out the BOOST_COMPUTE_FUNCTION() macro which allows OpenCL | |
108 | functions to be defined inline with C++ code. An example can be found in | |
109 | the monte_carlo example code. | |
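
As a brief sketch of its usage (the `add_four` function and the input values
here are purely illustrative; the macro's arguments are the return type, the
function name, the parameter list, and the OpenCL C function body):

    #include <boost/compute/system.hpp>
    #include <boost/compute/function.hpp>
    #include <boost/compute/algorithm/fill.hpp>
    #include <boost/compute/algorithm/transform.hpp>
    #include <boost/compute/container/vector.hpp>

    namespace compute = boost::compute;

    // defines an OpenCL function; the body is OpenCL C source code
    // which is compiled at run-time
    BOOST_COMPUTE_FUNCTION(float, add_four, (float x),
    {
        return x + 4;
    });

    int main()
    {
        compute::command_queue queue = compute::system::default_queue();

        compute::vector<float> vec(16, queue.get_context());
        compute::fill(vec.begin(), vec.end(), 1.0f, queue);

        // apply add_four() to each element of the vector on the device
        compute::transform(
            vec.begin(), vec.end(), vec.begin(), add_four, queue
        );

        return 0;
    }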
110 | ||
111 | ||
112 | [h3 What is the command_queue argument that appears in all of the algorithms?] | |
113 | ||
114 | Command queues specify the context and device for the algorithm's | |
115 | execution. For all of the standard algorithms the command_queue | |
116 | parameter is optional. If not provided, a default command_queue will | |
117 | be created for the default GPU device and the algorithm will be | |
118 | executed there. | |
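
For example, a queue for a particular device can be created and passed
explicitly (a minimal sketch; `sort` stands in for any of the algorithms):

    #include <boost/compute/core.hpp>
    #include <boost/compute/algorithm/sort.hpp>
    #include <boost/compute/container/vector.hpp>

    namespace compute = boost::compute;

    int main()
    {
        // explicitly set up a context and queue for the default device
        compute::device device = compute::system::default_device();
        compute::context context(device);
        compute::command_queue queue(context, device);

        // a device vector (contents left uninitialized for brevity)
        compute::vector<int> vec(1024, context);

        // pass the queue explicitly; omitting it would use a default queue
        compute::sort(vec.begin(), vec.end(), queue);

        return 0;
    }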
119 | ||
120 | ||
121 | [h3 How can I print out the contents of a buffer/vector on the GPU?] | |
122 | ||
123 | This can be accompilshed easily using the generic boost::compute::copy() | |
124 | algorithm along with std::ostream_iterator<T>. For example: | |
125 | ||
126 | [import ../example/print_vector.cpp] | |
127 | [print_vector_example] | |
128 | ||
129 | ||
[h3 Does Boost.Compute support zero-copy memory?]

Zero-copy memory allows OpenCL kernels to directly operate on regions of host
memory (if supported by the platform).

Boost.Compute supports zero-copy memory in multiple ways. The low-level
interface is provided by allocating [classref boost::compute::buffer buffer]
objects with the `CL_MEM_USE_HOST_PTR` flag. The high-level interface is
provided by the [classref boost::compute::mapped_view mapped_view<T>] class
which provides a std::vector-like interface to a region of host memory and can
be used directly with all of the Boost.Compute algorithms.
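
For example, a plain host array can be wrapped in a `mapped_view` and passed
straight to an algorithm (a minimal sketch with arbitrary input values):

    #include <boost/compute/system.hpp>
    #include <boost/compute/algorithm/reduce.hpp>
    #include <boost/compute/container/mapped_view.hpp>

    namespace compute = boost::compute;

    int main()
    {
        compute::command_queue queue = compute::system::default_queue();

        // a plain array in host memory
        float data[] = { 1.0f, 2.0f, 3.0f, 4.0f };

        // wrap the host array in a zero-copy mapped_view
        compute::mapped_view<float> view(data, 4, queue.get_context());

        // run an algorithm directly on the mapped host memory
        float sum = 0;
        compute::reduce(view.begin(), view.end(), &sum, queue);

        return 0;
    }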
141 | ||
142 | ||
143 | [h3 Is Boost.Compute thread-safe?] | |
144 | ||
145 | The low-level Boost.Compute APIs offer the same thread-safety guarantees as | |
146 | the underyling OpenCL library implementation. However, the high-level APIs | |
147 | make use of a few global static objects for features such as automatic program | |
148 | caching which makes them not thread-safe by default. | |
149 | ||
150 | To compile Boost.Compute in thread-safe mode define `BOOST_COMPUTE_THREAD_SAFE` | |
151 | before including any of the Boost.Compute headers. By default this will require | |
152 | linking your application/library with the Boost.Thread library. | |
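
For example, at the top of each translation unit (or equivalently via a
compiler flag such as `-DBOOST_COMPUTE_THREAD_SAFE`):

    // must be defined before any Boost.Compute header is included
    #define BOOST_COMPUTE_THREAD_SAFE
    #include <boost/compute.hpp>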
153 | ||
154 | ||
155 | [h3 What applications/libraries use Boost.Compute?] | |
156 | ||
157 | Boost.Compute is used by a number of open-source libraries and applications | |
158 | including: | |
159 | ||
160 | * ArrayFire ([@http://arrayfire.com]) | |
161 | * Ceemple ([@http://www.ceemple.com]) | |
162 | * Odeint ([@http://headmyshoulder.github.io/odeint-v2]) | |
163 | * VexCL ([@https://github.com/ddemidov/vexcl]) | |
164 | ||
165 | If you use Boost.Compute in your project and would like it to be listed here | |
166 | please send an email to Kyle Lutz (kyle.r.lutz@gmail.com). | |
167 | ||
168 | ||
[h3 How can I contribute?]

We are actively seeking additional C++ developers with experience in
GPGPU and parallel computing.

Please send an email to Kyle Lutz (kyle.r.lutz@gmail.com) for more information.

Also see the
[@https://github.com/boostorg/compute/blob/master/CONTRIBUTING.md contributor guide]
and check out the list of issues at:
[@https://github.com/boostorg/compute/issues].

[endsect] [/ faq ]