From dcf092e98465df792e1522e07fa706fd9ae9c27e Mon Sep 17 00:00:00 2001 From: randyh62 Date: Wed, 11 Dec 2024 16:48:37 -0800 Subject: [PATCH 01/46] Add note to README to point to HIP documentation. --- README.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/README.md b/README.md index 610b2a89c7..2493f09990 100644 --- a/README.md +++ b/README.md @@ -12,6 +12,9 @@ Key features include: New projects can be developed directly in the portable HIP C++ language and can run on either NVIDIA or AMD platforms. Additionally, HIP provides porting tools which make it easy to port existing CUDA codes to the HIP layer, with no loss of performance as compared to the original CUDA application. HIP is not intended to be a drop-in replacement for CUDA, and developers should expect to do some manual coding and performance tuning work to complete the port. +> [!NOTE] +> The published documentation is available at [HIP documentation](https://rocm.docs.amd.com/projects/HIP/en/latest/index.html) in an organized, easy-to-read format, with search and a table of contents. The documentation source files reside in the `HIP/docs` folder of this GitHub repository. As with all ROCm projects, the documentation is open source. For more information on contributing to the documentation, see [Contribute to ROCm documentation](https://rocm.docs.amd.com/en/latest/contribute/contributing.html). + ## DISCLAIMER The information presented in this document is for informational purposes only and may contain technical inaccuracies, omissions, and typographical errors. The information contained herein is subject to change and may be rendered inaccurate for many reasons, including but not limited to product and roadmap changes, component and motherboard versionchanges, new model and/or product releases, product differences between differing manufacturers, software changes, BIOS flashes, firmware upgrades, or the like. Any computer system has risks of security vulnerabilities that cannot be completely prevented or mitigated.AMD assumes no obligation to update or otherwise correct or revise this information. However, AMD reserves the right to revise this information and to make changes from time to time to the content hereof without obligation of AMD to notify any person of such revisions or changes.THIS INFORMATION IS PROVIDED ‘AS IS.” AMD MAKES NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE CONTENTS HEREOF AND ASSUMES NO RESPONSIBILITY FOR ANY INACCURACIES, ERRORS, OR OMISSIONS THAT MAY APPEAR IN THIS INFORMATION. AMD SPECIFICALLY DISCLAIMS ANY IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR ANY PARTICULAR PURPOSE. IN NO EVENT WILL AMD BE LIABLE TO ANY PERSON FOR ANY RELIANCE, DIRECT, INDIRECT, SPECIAL, OR OTHER CONSEQUENTIAL DAMAGES ARISING FROM THE USE OF ANY INFORMATION CONTAINED HEREIN, EVEN IF AMD IS EXPRESSLY ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. AMD, the AMD Arrow logo, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies. From ddbf7599c1927785a1246274cfb9446a90e4edd5 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 6 Dec 2024 00:11:28 +0000 Subject: [PATCH 02/46] Bump rocm-docs-core[api_reference] from 1.10.0 to 1.11.0 in /docs/sphinx Bumps [rocm-docs-core[api_reference]](https://github.com/ROCm/rocm-docs-core) from 1.10.0 to 1.11.0. 
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases) - [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md) - [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.10.0...v1.11.0) --- updated-dependencies: - dependency-name: rocm-docs-core[api_reference] dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] --- docs/sphinx/requirements.in | 2 +- docs/sphinx/requirements.txt | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/sphinx/requirements.in b/docs/sphinx/requirements.in index 0a0f69fc45..298669727c 100644 --- a/docs/sphinx/requirements.in +++ b/docs/sphinx/requirements.in @@ -1,2 +1,2 @@ -rocm-docs-core[api_reference]==1.10.0 +rocm-docs-core[api_reference]==1.11.0 sphinxcontrib.doxylink diff --git a/docs/sphinx/requirements.txt b/docs/sphinx/requirements.txt index b23b89a21d..11b4ba8c7b 100644 --- a/docs/sphinx/requirements.txt +++ b/docs/sphinx/requirements.txt @@ -116,7 +116,7 @@ requests==2.32.3 # via # pygithub # sphinx -rocm-docs-core[api-reference]==1.10.0 +rocm-docs-core[api-reference]==1.11.0 # via -r requirements.in six==1.16.0 # via python-dateutil From f8b9cb01e5149c07159538a174c2078089fb4729 Mon Sep 17 00:00:00 2001 From: Istvan Kiss Date: Fri, 6 Dec 2024 16:24:14 +0100 Subject: [PATCH 03/46] Minor fix in unified memory page --- .../hip_runtime_api/memory_management/unified_memory.rst | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/docs/how-to/hip_runtime_api/memory_management/unified_memory.rst b/docs/how-to/hip_runtime_api/memory_management/unified_memory.rst index 5c55035151..c253416928 100644 --- a/docs/how-to/hip_runtime_api/memory_management/unified_memory.rst +++ b/docs/how-to/hip_runtime_api/memory_management/unified_memory.rst @@ -111,8 +111,7 @@ allocator can be used. ❌: **Unsupported** :sup:`1` Works only with ``XNACK=1`` and kernels with HMM support. First GPU -access causes recoverable page-fault. For more details, visit `GPU memory -`_. +access causes recoverable page-fault. .. _unified memory allocators: @@ -144,8 +143,7 @@ GPUs, it is essential to configure the environment variable ``XNACK=1`` and use a kernel that supports `HMM `_. Without this configuration, the behavior will be similar to that of systems without HMM -support. For more details, visit -`GPU memory `_. +support. The table below illustrates the expected behavior of managed and unified memory functions on ROCm and CUDA, both with and without HMM support. From 8aeb7ec69c078eea6b943acfaf4edb5d49bd80a9 Mon Sep 17 00:00:00 2001 From: Matthias Knorr Date: Thu, 28 Nov 2024 11:09:33 +0100 Subject: [PATCH 04/46] Docs: Update device memory pages --- .../memory_management/device_memory.rst | 283 ++++++++++++++++-- .../device_memory/texture_fetching.rst | 113 ++++--- include/hip/hip_runtime_api.h | 2 +- 3 files changed, 328 insertions(+), 70 deletions(-) diff --git a/docs/how-to/hip_runtime_api/memory_management/device_memory.rst b/docs/how-to/hip_runtime_api/memory_management/device_memory.rst index 32f3fd7d58..fba559f7ad 100644 --- a/docs/how-to/hip_runtime_api/memory_management/device_memory.rst +++ b/docs/how-to/hip_runtime_api/memory_management/device_memory.rst @@ -1,52 +1,283 @@ .. meta:: :description: This chapter describes the device memory of the HIP ecosystem ROCm software. - :keywords: AMD, ROCm, HIP, device memory + :keywords: AMD, ROCm, HIP, GPU, device memory, global, constant, texture, surface, shared .. 
_device_memory: -******************************************************************************* +******************************************************************************** Device memory -******************************************************************************* +******************************************************************************** -Device memory exists on the device, e.g. on GPUs in the video random access -memory (VRAM), and is accessible by the kernels operating on the device. Recent -architectures use graphics double data rate (GDDR) synchronous dynamic -random-access memory (SDRAM) such as GDDR6, or high-bandwidth memory (HBM) such -as HBM2e. Device memory can be allocated as global memory, constant, texture or -surface memory. +Device memory is random access memory that is physically located on a GPU. In +general it is memory with a bandwidth that is an order of magnitude higher +compared to RAM available to the host. That high bandwidth is only available to +on-device accesses, accesses from the host or other devices have to go over a +special interface which is considerably slower, usually the PCIe bus or the AMD +Infinity Fabric. + +On certain architectures like APUs, the GPU and CPU share the same physical +memory. + +There is also a special local data share on-chip directly accessible to the +:ref:`compute units `, that can be used for shared +memory. + +The physical device memory can be used to back up several different memory +spaces in HIP, as described in the following. Global memory ================================================================================ -Read-write storage visible to all threads on a given device. There are -specialized versions of global memory with different usage semantics which are -typically backed by the same hardware, but can use different caching paths. +Global memory is the general read-write accessible memory visible to all threads +on a given device. Since variables located in global memory have to be marked +with the ``__device__`` qualifier, this memory space is also referred to as +device memory. + +Without explicitly copying it, it can only be accessed by the threads within a +kernel operating on the device, however :ref:`unified memory` can be used to +let the runtime manage this, if desired. + +Allocating global memory +-------------------------------------------------------------------------------- + +This memory needs to be explicitly allocated. + +It can be allocated from the host via the :ref:`HIP runtime memory management +functions ` like :cpp:func:`hipMalloc`, or can be +defined using the ``__device__`` qualifier on variables. + +It can also be allocated within a kernel using ``malloc`` or ``new``. +The specified amount of memory is allocated by each thread that executes the +instructions. The recommended way to allocate the memory depends on the use +case. If the memory is intended to be shared between the threads of a block, it +is generally beneficial to allocate one large block of memory, due to the way +the memory is accessed. + +.. note:: + Memory allocated within a kernel can only be freed in kernels, not by the HIP + runtime on the host, like :cpp:func:`hipFree`. It is also not possible to + free device memory allocated on the host, with :cpp:func:`hipMalloc` for + example, in a kernel. + + +An example for how to share memory allocated within a kernel by only one thread +is given in the following example. 
In case the device memory is only needed for +communication between the threads in a single block, :ref:`shared_memory` is the +better option, but is also limited in size. + +.. code-block:: cpp + + __global__ void kernel_memory_allocation(TYPE* pointer){ + // The pointer is stored in shared memory, so that all + // threads of the block can access the pointer + __shared__ int *memory; + + size_t blockSize = blockDim.x; + constexpr size_t elementsPerThread = 1024; + if(threadIdx.x == 0){ + // allocate memory in one contiguous block + memory = new int[blockDim.x * elementsPerThread]; + } + __syncthreads(); + + // load pointer into thread-local variable to avoid + // unnecessary accesses to shared memory + int *localPtr = memory; + + // work with allocated memory, e.g. initialization + for(int i = 0; i < elementsPerThread; ++i){ + // access in a contiguous way + localPtr[i * blockSize + threadIdx.x] = i; + } + + // synchronize to make sure no thread is accessing the memory before freeing + __syncthreads(); + if(threadIdx.x == 0){ + delete[] memory; + } +} + +Copying between device and host +-------------------------------------------------------------------------------- + +When not using :ref:`unified memory`, memory has to be explicitly copied between +the device and the host, using the HIP runtime API. + +.. code-block:: cpp + + size_t elements = 1 << 20; + size_t size_bytes = elements * sizeof(int); + + // allocate host and device memory + int *host_pointer = new int[elements]; + int *device_input, *device_result; + HIP_CHECK(hipMalloc(&device_input, size_bytes)); + HIP_CHECK(hipMalloc(&device_result, size_bytes)); + + // copy from host to the device + HIP_CHECK(hipMemcpy(device_input, host_pointer, size_bytes, hipMemcpyHostToDevice)); + + // Use memory on the device, i.e. execute kernels + + // copy from device to host, to e.g. get results from the kernel + HIP_CHECK(hipMemcpy(host_pointer, device_result, size_bytes, hipMemcpyDeviceToHost)); + + // free memory when not needed any more + HIP_CHECK(hipFree(device_result)); + HIP_CHECK(hipFree(device_input)); + delete[] host_pointer; Constant memory ================================================================================ -Read-only storage visible to all threads on a given device. It is a limited -segment backed by device memory with queryable size. It needs to be set by the -host before kernel execution. Constant memory provides the best performance -benefit when all threads within a warp access the same address. +Constant memory is read-only storage visible to all threads on a given device. +It is a limited segment backed by device memory, that takes a different caching +route than normal device memory accesses. It needs to be set by the host before +kernel execution. + +In order to get the highest bandwidth from the constant memory, all threads of +a warp have to access the same memory address. If they access different +addresses, the accesses get serialized and the bandwidth is therefore reduced. + +Using constant memory +-------------------------------------------------------------------------------- + +Constant memory can not be dynamically allocated, and the size has to be +specified during compile time. If the values can not be specified during compile +time, they have to be set by the host before the kernel, that accesses the +constant memory, is called. + +.. 
code-block:: cpp + + constexpr size_t const_array_size = 32; + __constant__ double const_array[const_array_size]; + + void set_constant_memory(double* values){ + hipMemcpyToSymbol(const_array, values, const_array_size * sizeof(double)); + } + + __global__ void kernel_using_const_memory(double* array){ + + int warpIdx = threadIdx.x / warpSize; + // uniform access of warps to const_array for best performance + array[blockDim.x] *= const_array[warpIdx]; + } Texture memory ================================================================================ -Read-only storage visible to all threads on a given device and accessible -through additional APIs. Its origins come from graphics APIs, and provides -performance benefits when accessing memory in a pattern where the -addresses are close to each other in a 2D representation of the memory. +Texture memory is special read-only memory visible to all threads on a given +device and accessible through additional APIs. Its origins come from graphics +APIs, and provides performance benefits when accessing memory in a pattern where +the addresses are close to each other in a 2D or 3D representation of the +memory. It also provides additional features like filtering and addressing for +out-of-bounds accesses, which are further explained in :ref:`texture_fetching`. + +The original use of the texture cache was also to take pressure off the global +memory and other caches, however on modern GPUs, that support textures, the L1 +cache and texture cache are combined, so the main purpose is to make use of the +texture specific features. + +To find out whether textures are supported on a device, query +:cpp:enumerator:`hipDeviceAttributeImageSupport`. + +Using texture memory +-------------------------------------------------------------------------------- + +Textures are more complex than just a region of memory, so their layout has to +be specified. They are represented by ``hipTextureObject_t`` and created using +:cpp:func:`hipCreateTextureObject`. -The :ref:`texture management module ` of the HIP -runtime API reference contains the functions of texture memory. +The underlying memory is a 1D, 2D or 3D ``hipArray_t``, that needs to be +allocated using :cpp:func:`hipMallocArray`. + +On the device side, texture objects are accessed using the ``tex1D/2D/3D`` +functions. + +The texture management functions can be found in the :ref:`Texture management +API reference ` + +A full example for how to use textures can be found in the `ROCm texture +management example `_ Surface memory ================================================================================ -A read-write version of texture memory, which can be useful for applications -that require direct manipulation of 1D, 2D, or 3D hipArray_t. +A read-write version of texture memory. It is created in the same way as a +texture, but with :cpp:func:`hipCreateSurfaceObject`. + +Since surfaces are also cached in the read-only texture cache, the changes +written back to the surface can't be observed in the same kernel. A new kernel +has to be launched in order to see the updated surface. + +The corresponding functions are listed in the :ref:`Surface object API reference +`. + +Shared memory +================================================================================ + +Shared memory is read-write memory, that is only visible to the threads within a +block. 
It is allocated per thread block, and needs to be either statically +allocated at compile time, or can be dynamically allocated when launching the +kernel, but not during kernel execution. Its general use-case is to share +variables between the threads within a block, but can also be used as scratch +pad memory. + +Shared memory is not backed by the same physical memory as the other address +spaces. It is on-chip memory local to the :ref:`compute units +`, providing low-latency, high-bandwidth access, +comparable to the L1 cache. It is however limited in size, and as it is +allocated per block, can restrict how many blocks can be scheduled to a compute +unit concurrently, thereby potentially reducing occupancy. + +An overview of the size of the local data share (LDS), that backs up shared +memory, is given in the +:doc:`GPU hardware specifications `. + +Allocate shared memory +-------------------------------------------------------------------------------- + +Memory can be dynamically allocated by declaring an ``extern __shared__`` array, +whose size can be set during kernel launch, which can then be accessed in the +kernel. + +.. code-block:: cpp + + extern __shared__ int dynamic_shared[]; + __global__ void kernel(int array1SizeX, int array1SizeY, int array2Size){ + // at least (array1SizeX * array1SizeY + array2Size) * sizeof(int) bytes + // dynamic shared memory need to be allocated when the kernel is launched + int* array1 = dynamic_shared; + // array1 is interpreted as 2D of size: + int array1Size = array1SizeX * array1SizeY; + + int* array2 = &(array1[array1Size]); + + if(threadIdx.x < array1SizeX && threadIdx.y < array1SizeY){ + // access array1 with threadIdx.x + threadIdx.y * array1SizeX + } + if(threadIdx.x < array2Size){ + // access array2 threadIdx.x + } + } + +A more in-depth example on dynamically allocated shared memory can be found in +the `ROCm dynamic shared example +`_. + +To statically allocate shared memory, just declare it in the kernel. The memory +is allocated per block, not per thread. If the kernel requires more shared +memory than is available to the architecture, the compilation fails. + +.. code-block:: cpp + + __global__ void kernel(){ + __shared__ int array[128]; + __shared__ double result; + } + +A more in-depth example on statically allocated shared memory can be found in +the `ROCm shared memory example +`_. -The :ref:`surface objects module ` of HIP runtime API -contains the functions for creating, destroying and reading surface memory. \ No newline at end of file diff --git a/docs/how-to/hip_runtime_api/memory_management/device_memory/texture_fetching.rst b/docs/how-to/hip_runtime_api/memory_management/device_memory/texture_fetching.rst index a7f2873dd5..646d8afca6 100644 --- a/docs/how-to/hip_runtime_api/memory_management/device_memory/texture_fetching.rst +++ b/docs/how-to/hip_runtime_api/memory_management/device_memory/texture_fetching.rst @@ -5,56 +5,67 @@ .. _texture_fetching: -******************************************************************************* +******************************************************************************** Texture fetching -******************************************************************************* - -`Textures <../../../../doxygen/html/group___texture.html>`_ are more than just a buffer -interpreted as a 1D, 2D, or 3D array. - -As textures are associated with graphics, they are indexed using floating-point -values. The index can be in the range of [0 to size-1] or [0 to 1]. 
- -Depending on the index, texture sampling or texture addressing is performed, -which decides the return value. - -**Texture sampling**: When a texture is indexed with a fraction, the queried -value is often between two or more texels (texture elements). The sampling -method defines what value to return in such cases. - -**Texture addressing**: Sometimes, the index is outside the bounds of the -texture. This condition might look like a problem but helps to put a texture on -a surface multiple times or to create a visible sign of out-of-bounds indexing, -in computer graphics. The addressing mode defines what value to return when -indexing a texture out of bounds. - -The different sampling and addressing modes are described in the following -sections. - -Here is the sample texture used in this document for demonstration purposes. It +******************************************************************************** + +Textures give access to specialized hardware on GPUs that is usually used in +graphics processing. In particular, textures use a different way of accessing +their underlying device memory. Memory accesses to textures are routed through +a special read-only texture cache, that is optimized for logical spatial +locality, e.g. locality in 2D grids. This can also benefit certain algorithms +used in GPGPU computing, when the access pattern is the same as used when +accessing normal textures. + +Additionally, textures can be indexed using floating-point values. This is used +in graphics applications to interpolate between neighboring values of a texture. +Depending on the interpolation mode the index can be in the range of ``0`` to +``size - 1`` or ``0`` to ``1``. Textures also have a way of handling +out-of-bounds accesses. + +Depending on the value of the index, :ref:`texture filtering ` +or :ref:`texture addressing ` is performed. + +Here is the example texture used in this document for demonstration purposes. It is 2x2 texels and indexed in the [0 to 1] range. .. figure:: ../../../../data/how-to/hip_runtime_api/memory_management/textures/original.png :width: 150 - :alt: Sample texture + :alt: Example texture :align: center Texture used as example -Texture sampling -=============================================================================== +In HIP textures objects are of type :cpp:struct:`hipTextureObject_t` and created +using :cpp:func:`hipCreateTextureObject`. + +For a full list of available texture functions see the :ref:`HIP texture API +reference `. + +A code example for how to use textures can be found in the `ROCm texture +management example `_ + +.. _texture_filtering: -Texture sampling handles the usage of fractional indices. It is the method that -describes, which nearby values will be used, and how they are combined into the -resulting value. +Texture filtering +================================================================================ -The various texture sampling methods are discussed in the following sections. +Texture filtering handles the usage of fractional indices. When the index is a +fraction, the queried value lies between two or more texels (texture elements), +depending on the dimensionality of the texture. The filtering method defines how +to interpolate between these values. + +The filter modes are specified in :cpp:enumerator:`hipTextureFilterMode`. + +The various texture filtering methods are discussed in the following sections. .. 
_texture_fetching_nearest: -Nearest point sampling +Nearest point filtering ------------------------------------------------------------------------------- +This filter mode corresponds to ``hipFilterModePoint``. + In this method, the modulo of index is calculated as: ``tex(x) = T[floor(x)]`` @@ -70,22 +81,24 @@ of the nearest texel. .. figure:: ../../../../data/how-to/hip_runtime_api/memory_management/textures/nearest.png :width: 300 - :alt: Texture upscaled with nearest point sampling + :alt: Texture upscaled with nearest point filtering :align: center - Texture upscaled with nearest point sampling + Texture upscaled with nearest point filtering .. _texture_fetching_linear: Linear filtering ------------------------------------------------------------------------------- +This filter mode corresponds to ``hipFilterModeLinear``. + The linear filtering method does a linear interpolation between values. Linear interpolation is used to create a linear transition between two values. The formula used is ``(1-t)P1 + tP2`` where ``P1`` and ``P2`` are the values and ``t`` is within the [0 to 1] range. -In the case of texture sampling the following formulas are used: +In the case of linear texture filtering the following formulas are used: * For one dimensional textures: ``tex(x) = (1-α)T[i] + αT[i+1]`` * For two dimensional textures: ``tex(x,y) = (1-α)(1-β)T[i,j] + α(1-β)T[i+1,j] + (1-α)βT[i,j+1] + αβT[i+1,j+1]`` @@ -95,7 +108,7 @@ Where x, y, and, z are the floating-point indices. i, j, and, k are the integer indices and, α, β, and, γ values represent how far along the sampled point is on the three axes. These values are calculated by these formulas: ``i = floor(x')``, ``α = frac(x')``, ``x' = x - 0.5``, ``j = floor(y')``, ``β = frac(y')``, ``y' = y - 0.5``, ``k = floor(z')``, ``γ = frac(z')`` and ``z' = z - 0.5`` -This following image shows a texture stretched out to a 4x4 pixel quad, but +The following image shows a texture stretched out to a 4x4 pixel quad, but still indexed in the [0 to 1] range. The in-between values are interpolated between the neighboring texels. @@ -106,12 +119,18 @@ between the neighboring texels. Texture upscaled with linear filtering +.. _texture_addressing: + Texture addressing =============================================================================== -Texture addressing mode handles the index that is out of bounds of the texture. -This mode describes which values of the texture or a preset value to use when -the index is out of bounds. +The texture addressing modes are specified in +:cpp:enumerator:`hipTextureAddressMode`. + +The texture addressing mode handles out-of-bounds accesses to the texture. This +can be used in graphics applications to e.g. repeat a texture on a surface +multiple times in various ways or create visible signs of out-of-bounds +indexing. The following sections describe the various texture addressing methods. @@ -120,8 +139,10 @@ The following sections describe the various texture addressing methods. Address mode border ------------------------------------------------------------------------------- -In this method, the texture fetching returns a border value when indexing out of -bounds. The border value must be set before texture fetching. +This addressing mode is set using ``hipAddressModeBorder``. + +This addressing mode returns a border value when indexing out of bounds. The +border value must be set before texture fetching. The following image shows the texture on a 4x4 pixel quad, indexed in the [0 to 3] range. 
The out-of-bounds values are the border color, which is yellow. @@ -141,6 +162,8 @@ the addressing begins. Address mode clamp ------------------------------------------------------------------------------- +This addressing mode is set using ``hipAddressModeClamp``. + This mode clamps the index between [0 to size-1]. Due to this, when indexing out-of-bounds, the values on the edge of the texture repeat. The clamp mode is the default addressing mode. @@ -164,6 +187,8 @@ the addressing begins. Address mode wrap ------------------------------------------------------------------------------- +This addressing mode is set using ``hipAddressModeWrap``. + Wrap mode addressing is only available for normalized texture coordinates. In this addressing mode, the fractional part of the index is used: @@ -189,6 +214,8 @@ the addressing begins. Address mode mirror ------------------------------------------------------------------------------- +This addressing mode is set using ``hipAddressModeMirror``. + Similar to the wrap mode the mirror mode is only available for normalized texture coordinates and also creates a repeating image, but mirroring the neighboring instances. diff --git a/include/hip/hip_runtime_api.h b/include/hip/hip_runtime_api.h index f1488e82a3..d29ca8dfe6 100644 --- a/include/hip/hip_runtime_api.h +++ b/include/hip/hip_runtime_api.h @@ -6381,7 +6381,7 @@ hipError_t hipExtLaunchKernel(const void* function_address, dim3 numBlocks, dim3 * * @returns #hipSuccess, #hipErrorInvalidValue, #hipErrorNotSupported, #hipErrorOutOfMemory * - * @note 3D liner filter isn't supported on GFX90A boards, on which the API @p hipCreateTextureObject will + * @note 3D linear filter isn't supported on GFX90A boards, on which the API @p hipCreateTextureObject will * return hipErrorNotSupported. 
* */ From 34ec86354288507e7ff0bca29503d57ff94ad9c2 Mon Sep 17 00:00:00 2001 From: Istvan Kiss Date: Sun, 15 Dec 2024 19:09:18 +0100 Subject: [PATCH 05/46] Minor fixes --- docs/sphinx/_toc.yml.in | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in index ba90d82efd..aca33a2c62 100644 --- a/docs/sphinx/_toc.yml.in +++ b/docs/sphinx/_toc.yml.in @@ -95,6 +95,7 @@ subtrees: - file: reference/hip_runtime_api/modules/runtime_compilation - file: reference/hip_runtime_api/modules/callback_activity_apis - file: reference/hip_runtime_api/modules/graph_management + - file: reference/hip_runtime_api/modules/opengl_interoperability - file: reference/hip_runtime_api/modules/graphics_interoperability - file: reference/hip_runtime_api/modules/opengl_interoperability - file: reference/hip_runtime_api/modules/cooperative_groups_reference From b549345a80a004ad26bbdb08b5a0208478f622ac Mon Sep 17 00:00:00 2001 From: Istvan Kiss Date: Thu, 19 Dec 2024 18:41:09 +0100 Subject: [PATCH 06/46] Documentation build error fix --- docs/sphinx/_toc.yml.in | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in index aca33a2c62..ba90d82efd 100644 --- a/docs/sphinx/_toc.yml.in +++ b/docs/sphinx/_toc.yml.in @@ -95,7 +95,6 @@ subtrees: - file: reference/hip_runtime_api/modules/runtime_compilation - file: reference/hip_runtime_api/modules/callback_activity_apis - file: reference/hip_runtime_api/modules/graph_management - - file: reference/hip_runtime_api/modules/opengl_interoperability - file: reference/hip_runtime_api/modules/graphics_interoperability - file: reference/hip_runtime_api/modules/opengl_interoperability - file: reference/hip_runtime_api/modules/cooperative_groups_reference From 83a36df28c51574f021fa7c1aae95f9d146a3dca Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 24 Dec 2024 20:59:21 +0000 Subject: [PATCH 07/46] Bump rocm-docs-core[api_reference] from 1.11.0 to 1.12.0 in /docs/sphinx Dependabot couldn't find the original pull request head commit, 00084b8251f26746e3f1f41fcd76368883817ac1. --- docs/sphinx/requirements.in | 2 +- docs/sphinx/requirements.txt | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/sphinx/requirements.in b/docs/sphinx/requirements.in index 298669727c..a218a01602 100644 --- a/docs/sphinx/requirements.in +++ b/docs/sphinx/requirements.in @@ -1,2 +1,2 @@ -rocm-docs-core[api_reference]==1.11.0 +rocm-docs-core[api_reference]==1.12.0 sphinxcontrib.doxylink diff --git a/docs/sphinx/requirements.txt b/docs/sphinx/requirements.txt index 11b4ba8c7b..66559f66a9 100644 --- a/docs/sphinx/requirements.txt +++ b/docs/sphinx/requirements.txt @@ -116,7 +116,7 @@ requests==2.32.3 # via # pygithub # sphinx -rocm-docs-core[api-reference]==1.11.0 +rocm-docs-core[api-reference]==1.12.0 # via -r requirements.in six==1.16.0 # via python-dateutil From ffcc6119f2348cea05a2a270690fc7f6a9e076f9 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 3 Jan 2025 00:10:54 +0000 Subject: [PATCH 08/46] Bump rocm-docs-core[api_reference] from 1.12.0 to 1.12.1 in /docs/sphinx Bumps [rocm-docs-core[api_reference]](https://github.com/ROCm/rocm-docs-core) from 1.12.0 to 1.12.1. 
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases) - [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md) - [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.12.0...v1.12.1) --- updated-dependencies: - dependency-name: rocm-docs-core[api_reference] dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] --- docs/sphinx/requirements.in | 2 +- docs/sphinx/requirements.txt | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/sphinx/requirements.in b/docs/sphinx/requirements.in index a218a01602..1069b749d1 100644 --- a/docs/sphinx/requirements.in +++ b/docs/sphinx/requirements.in @@ -1,2 +1,2 @@ -rocm-docs-core[api_reference]==1.12.0 +rocm-docs-core[api_reference]==1.12.1 sphinxcontrib.doxylink diff --git a/docs/sphinx/requirements.txt b/docs/sphinx/requirements.txt index 66559f66a9..67e2eb9896 100644 --- a/docs/sphinx/requirements.txt +++ b/docs/sphinx/requirements.txt @@ -116,7 +116,7 @@ requests==2.32.3 # via # pygithub # sphinx -rocm-docs-core[api-reference]==1.12.0 +rocm-docs-core[api-reference]==1.12.1 # via -r requirements.in six==1.16.0 # via python-dateutil From d7b11c7b777e1433b40357ace43b63195e8c6538 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 7 Jan 2025 00:19:48 +0000 Subject: [PATCH 09/46] Bump rocm-docs-core[api_reference] from 1.12.1 to 1.13.0 in /docs/sphinx Bumps [rocm-docs-core[api_reference]](https://github.com/ROCm/rocm-docs-core) from 1.12.1 to 1.13.0. - [Release notes](https://github.com/ROCm/rocm-docs-core/releases) - [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md) - [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.12.1...v1.13.0) --- updated-dependencies: - dependency-name: rocm-docs-core[api_reference] dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] --- docs/sphinx/requirements.in | 2 +- docs/sphinx/requirements.txt | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/sphinx/requirements.in b/docs/sphinx/requirements.in index 1069b749d1..0bd422bee5 100644 --- a/docs/sphinx/requirements.in +++ b/docs/sphinx/requirements.in @@ -1,2 +1,2 @@ -rocm-docs-core[api_reference]==1.12.1 +rocm-docs-core[api_reference]==1.13.0 sphinxcontrib.doxylink diff --git a/docs/sphinx/requirements.txt b/docs/sphinx/requirements.txt index 67e2eb9896..bfbafa055a 100644 --- a/docs/sphinx/requirements.txt +++ b/docs/sphinx/requirements.txt @@ -116,7 +116,7 @@ requests==2.32.3 # via # pygithub # sphinx -rocm-docs-core[api-reference]==1.12.1 +rocm-docs-core[api-reference]==1.13.0 # via -r requirements.in six==1.16.0 # via python-dateutil From 4cf933072e8a43c59558ca70b65482e33c9fb4c5 Mon Sep 17 00:00:00 2001 From: Matthias Knorr Date: Thu, 21 Nov 2024 18:01:07 +0100 Subject: [PATCH 10/46] Docs: Refactor cpp_language_extensions and cpp_language_support --- .wordlist.txt | 9 + docs/faq.rst | 2 +- docs/how-to/hip_cpp_language_extensions.rst | 922 ++++++++++++++ docs/how-to/hip_porting_guide.md | 4 +- docs/how-to/kernel_language_cpp_support.rst | 209 ++++ docs/index.md | 4 +- docs/reference/cpp_language_extensions.rst | 1209 ------------------- docs/reference/cpp_language_support.rst | 171 --- docs/sphinx/_toc.yml.in | 6 +- docs/what_is_hip.rst | 4 +- 10 files changed, 1150 insertions(+), 1390 deletions(-) create mode 100644 docs/how-to/hip_cpp_language_extensions.rst create mode 100644 docs/how-to/kernel_language_cpp_support.rst delete mode 100644 docs/reference/cpp_language_extensions.rst delete mode 100644 docs/reference/cpp_language_support.rst diff --git a/.wordlist.txt b/.wordlist.txt index b3b8686678..a88f752b84 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -12,11 +12,14 @@ backtrace Bitcode bitcode bitcodes +blockDim +blockIdx builtins Builtins CAS clr compilable +constexpr coroutines Ctx cuBLASLt @@ -51,6 +54,7 @@ FNUZ fp gedit GPGPU +gridDim GROMACS GWS hardcoded @@ -87,6 +91,7 @@ iteratively Lapack latencies libc +libhipcxx libstdc lifecycle linearizing @@ -97,6 +102,7 @@ makefile Malloc malloc MALU +maxregcount MiB memset multicore @@ -125,6 +131,7 @@ preconditioners predefining prefetched preprocessor +printf profilers PTX PyHIP @@ -153,6 +160,7 @@ SYCL syntaxes texel texels +threadIdx tradeoffs templated toolkits @@ -167,5 +175,6 @@ unregister upscaled variadic vulkan +warpSize WinGDB zc diff --git a/docs/faq.rst b/docs/faq.rst index 2867cf06e7..5e67ec465d 100644 --- a/docs/faq.rst +++ b/docs/faq.rst @@ -47,7 +47,7 @@ The :doc:`HIP API documentation ` describes each API and its limitations, if any, compared with the equivalent CUDA API. The kernel language features are documented in the -:doc:`/reference/cpp_language_extensions` page. +:doc:`/how-to/hip_cpp_language_extensions` page. Relation to other GPGPU frameworks ================================== diff --git a/docs/how-to/hip_cpp_language_extensions.rst b/docs/how-to/hip_cpp_language_extensions.rst new file mode 100644 index 0000000000..6b18bd01e3 --- /dev/null +++ b/docs/how-to/hip_cpp_language_extensions.rst @@ -0,0 +1,922 @@ +.. meta:: + :description: This chapter describes the built-in variables and functions that + are accessible from HIP kernels and HIP's C++ support. It's + intended for users who are familiar with CUDA kernel syntax and + want to learn how HIP differs from CUDA. 
+ :keywords: AMD, ROCm, HIP, CUDA, c++ language extensions, HIP functions + +################################################################################ +HIP C++ language extensions +################################################################################ + +HIP extends the C++ language with additional features designed for programming +heterogeneous applications. These extensions mostly relate to the kernel +language, but some can also be applied to host functionality. + +******************************************************************************** +HIP qualifiers +******************************************************************************** + +Function-type qualifiers +================================================================================ + +HIP introduces three different function qualifiers to mark functions for +execution on the device or the host, and also adds new qualifiers to control +inlining of functions. + +.. _host_attr: + +__host__ +-------------------------------------------------------------------------------- + +The ``__host__`` qualifier is used to specify functions for execution +on the host. This qualifier is implicitly defined for any function where no +``__host__``, ``__device__`` or ``__global__`` qualifier is added, in order to +not break compatibility with existing C++ functions. + +You can't combine ``__host__`` with ``__global__``. + +__device__ +-------------------------------------------------------------------------------- + +The ``__device__`` qualifier is used to specify functions for execution on the +device. They can only be called from other ``__device__`` functions or from +``__global__`` functions. + +You can combine it with the ``__host__`` qualifier and mark functions +``__host__ __device__``. In this case, the function is compiled for the host and +the device. Note that these functions can't use the HIP built-ins (e.g., +:ref:`threadIdx.x ` or :ref:`warpSize `), as +they are not available on the host. If you need to use HIP grid coordinate +functions, you can pass the necessary coordinate information as an argument. + +__global__ +-------------------------------------------------------------------------------- + +Functions marked ``__global__`` are executed on the device and are referred to +as kernels. Their return type must be ``void``. Kernels have a special launch +mechanism, and have to be launched from the host. + +There are some restrictions on the parameters of kernels. Kernels can't: + +* have a parameter of type ``std::initializer_list`` or ``va_list`` +* have a variable number of arguments +* use references as parameters +* use parameters having different sizes in host and device code, e.g. long double arguments, or structs containing long double members. +* use struct-type arguments which have different layouts in host and device code. + +Kernels can have variadic template parameters, but only one parameter pack, +which must be the last item in the template parameter list. + +.. note:: + Unlike CUDA, HIP does not support dynamic parallelism, meaning that kernels + can not be called from the device. + +Calling __global__ functions +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The launch mechanism for kernels differs from standard function calls, as they +need an additional configuration, that specifies the grid and block dimensions +(i.e. 
the amount of threads to be launched), as well as specifying the amount of +shared memory per block and which stream to execute the kernel on. + +Kernels are called using the triple chevron ``<<<>>>`` syntax known from CUDA, +but HIP also supports the ``hipLaunchKernelGGL`` macro. + +When using ``hipLaunchKernelGGL``, the first five configuration parameters must +be: + +* ``symbol kernelName``: The name of the kernel you want to launch. To support + template kernels that contain several template parameters separated by use the + ``HIP_KERNEL_NAME`` macro to wrap the template instantiation + (:doc:`HIPIFY ` inserts this automatically). +* ``dim3 gridDim``: 3D-grid dimensions that specifies the number of blocks to + launch. +* ``dim3 blockDim``: 3D-block dimensions that specifies the number of threads in + each block. +* ``size_t dynamicShared``: The amount of additional shared dynamic memory to + allocate per block. +* ``hipStream_t``: The stream on which to run the kernel. A value of ``0`` + corresponds to the default stream. + +The kernel arguments are listed after the configuration parameters. + +.. code-block:: cpp + + #include + #include + + #define HIP_CHECK(expression) \ + { \ + const hipError_t err = expression; \ + if(err != hipSuccess){ \ + std::cerr << "HIP error: " << hipGetErrorString(err) \ + << " at " << __LINE__ << "\n"; \ + } \ + } + + // Performs a simple initialization of an array with the thread's index variables. + // This function is only available in device code. + __device__ void init_array(float * const a, const unsigned int arraySize){ + // globalIdx uniquely identifies a thread in a 1D launch configuration. + const int globalIdx = threadIdx.x + blockIdx.x * blockDim.x; + // Each thread initializes a single element of the array. + if(globalIdx < arraySize){ + a[globalIdx] = globalIdx; + } + } + + // Rounds a value up to the next multiple. + // This function is available in host and device code. + __host__ __device__ constexpr int round_up_to_nearest_multiple(int number, int multiple){ + return (number + multiple - 1)/multiple; + } + + __global__ void example_kernel(float * const a, const unsigned int N) + { + // Initialize array. + init_array(a, N); + // Perform additional work: + // - work with the array + // - use the array in a different kernel + // - ... + } + + int main() + { + constexpr int N = 100000000; // problem size + constexpr int blockSize = 256; //configurable block size + + //needed number of blocks for the given problem size + constexpr int gridSize = round_up_to_nearest_multiple(N, blockSize); + + float *a; + // allocate memory on the GPU + HIP_CHECK(hipMalloc(&a, sizeof(*a) * N)); + + std::cout << "Launching kernel." << std::endl; + example_kernel<<>>(a, N); + // make sure kernel execution is finished by synchronizing. The CPU can also + // execute other instructions during that time + HIP_CHECK(hipDeviceSynchronize()); + std::cout << "Kernel execution finished." << std::endl; + + HIP_CHECK(hipFree(a)); + } + +Inline qualifiers +-------------------------------------------------------------------------------- + +HIP adds the ``__noinline__`` and ``__forceinline__`` function qualifiers. + +``__noinline__`` is a hint to the compiler to not inline the function, whereas +``__forceinline__`` forces the compiler to inline the function. These qualifiers +can be applied to both ``__host__`` and ``__device__`` functions. + +``__noinline__`` and ``__forceinline__`` can not be used in combination. 
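
For illustration, the following minimal sketch shows how these qualifiers can be
applied; the helper functions and the kernel are only examples and not part of
any HIP header:

.. code-block:: cpp

    // Hint to the compiler to keep this helper as a separate function.
    __device__ __noinline__ float polynomial(float x){
        return x * x + 2.0f * x + 1.0f;
    }

    // Ask the compiler to always inline this small helper into its callers.
    __host__ __device__ __forceinline__ float square(float x){
        return x * x;
    }

    __global__ void apply_helpers(float* data){
        const int globalIdx = threadIdx.x + blockIdx.x * blockDim.x;
        data[globalIdx] = square(polynomial(data[globalIdx]));
    }
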
+ +__launch_bounds__ +-------------------------------------------------------------------------------- + +GPU multiprocessors have a fixed pool of resources (primarily registers and +shared memory) which are shared by the actively running warps. Using more +resources per thread can increase executed instructions per cycle but reduces +the resources available for other warps and may therefore limit the occupancy, +i.e. the number of warps that can be executed simultaneously. Thus GPUs have to +balance resource usage between instruction- and thread-level parallelism. + +``__launch_bounds__`` allows the application to provide hints that influence the +resource (primarily registers) usage of the generated code. It is a function +attribute that must be attached to a __global__ function: + +.. code-block:: cpp + + __global__ void __launch_bounds__(MAX_THREADS_PER_BLOCK, MIN_WARPS_PER_EXECUTION_UNIT) + kernel_name(/*args*/); + +The ``__launch_bounds__`` parameters are explained in the following sections: + +MAX_THREADS_PER_BLOCK +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +This parameter is a guarantee from the programmer, that kernel will not be +launched with more threads than ``MAX_THREADS_PER_BLOCK``. + +If no ``__launch_bounds__`` are specified, ``MAX_THREADS_PER_BLOCK`` is +the maximum block size supported by the device (see +:doc:`../reference/hardware_features`). Reducing ``MAX_THREADS_PER_BLOCK`` +allows the compiler to use more resources per thread than an unconstrained +compilation. This might however reduce the amount of blocks that can run +concurrently on a CU, thereby reducing occupancy and trading thread-level +parallelism for instruction-level parallelism. + +``MAX_THREADS_PER_BLOCK`` is particularly useful in cases, where the compiler is +constrained by register usage in order to meet requirements of large block sizes +that are never used at launch time. + +The compiler can only use the hints to manage register usage, and does not +automatically reduce shared memory usage. The compilation fails, if the compiler +can not generate code that satisfies the launch bounds. + +On NVCC this parameter maps to the ``.maxntid`` PTX directive. + +When launching kernels HIP will validate the launch configuration to make sure +the requested block size is not larger than ``MAX_THREADS_PER_BLOCK`` and +return an error if it is exceeded. + +If :doc:`AMD_LOG_LEVEL <./logging>` is set, detailed information will be shown +in the error log message, including the launch configuration of the kernel and +the specified ``__launch_bounds__``. + +MIN_WARPS_PER_EXECUTION_UNIT +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +This parameter specifies the minimum number of warps that must be able to run +concurrently on an execution unit. +``MIN_WARPS_PER_EXECUTION_UNIT`` is optional and defaults to 1 if not specified. +Since active warps compete for the same fixed pool of resources, the compiler +must constrain the resource usage of the warps. This option gives a lower +bound to the occupancy of the kernel. + +From this parameter, the compiler derives a maximum number of registers that can +be used in the kernel. The amount of registers that can be used at most is +:math:`\frac{\text{available registers}}{\text{MIN_WARPS_PER_EXECUTION_UNIT}}`, +but it might also have other, architecture specific, restrictions. + +The available registers per Compute Unit are listed in +:doc:`rocm:reference/gpu-arch-specs`. 
Beware that these values are per Compute +Unit, not per Execution Unit. On AMD GPUs a Compute Unit consists of 4 Execution +Units, also known as SIMDs, each with their own register file. For more +information see :doc:`../understand/hardware_implementation`. +:cpp:struct:`hipDeviceProp_t` also has a field ``executionUnitsPerMultiprocessor``. + +Porting from CUDA __launch_bounds__ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +CUDA also defines a ``__launch_bounds__`` qualifier which works similar to HIP's +implementation, however it uses different parameters: + +.. code-block:: cpp + + __launch_bounds__(MAX_THREADS_PER_BLOCK, MIN_BLOCKS_PER_MULTIPROCESSOR) + +The first parameter is the same as HIP's implementation, but +``MIN_BLOCKS_PER_MULTIPROCESSOR`` must be converted to +``MIN_WARPS_PER_EXECUTION``, which uses warps and execution units rather than +blocks and multiprocessors. This conversion is performed automatically by +:doc:`HIPIFY `, or can be done manually with the following +equation. + +.. code-block:: cpp + + MIN_WARPS_PER_EXECUTION_UNIT = (MIN_BLOCKS_PER_MULTIPROCESSOR * MAX_THREADS_PER_BLOCK) / warpSize + +Directly controlling the warps per execution unit makes it easier to reason +about the occupancy, unlike with blocks, where the occupancy depends on the +block size. + +The use of execution units rather than multiprocessors also provides support for +architectures with multiple execution units per multiprocessor. For example, the +AMD GCN architecture has 4 execution units per multiprocessor. + +maxregcount +"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" + +Unlike ``nvcc``, ``amdclang++`` does not support the ``--maxregcount`` option. +Instead, users are encouraged to use the ``__launch_bounds__`` directive since +the parameters are more intuitive and portable than micro-architecture details +like registers. The directive allows per-kernel control. + +Memory space qualifiers +================================================================================ + +HIP adds qualifiers to specify the memory space in which the variables are +located. + +Generally, variables allocated in host memory are not directly accessible within +device code, while variables allocated in device memory are not directly +accessible from the host code. More details on this can be found in +:ref:`unified_memory`. + +__device__ +-------------------------------------------------------------------------------- + +Variables marked with ``__device__`` reside in device memory. It can be +combined together with one of the following qualifiers, however these qualifiers +also imply the ``__device__`` qualifier. + +By default it can only be accessed from the threads on the device. In order to +access it from the host, its address and size need to be queried using +:cpp:func:`hipGetSymbolAddress` and :cpp:func:`hipGetSymbolSize` and copied with +:cpp:func:`hipMemcpyToSymbol` or :cpp:func:`hipMemcpyFromSymbol`. + +__constant__ +-------------------------------------------------------------------------------- + +Variables marked with ``__constant__`` reside in device memory. Variables in +that address space are routed through the constant cache, but that address space +has a limited logical size. +This memory space is read-only from within kernels and can only be set by the +host before kernel execution. 
+ +To get the best performance benefit, these variables need a special access +pattern to benefit from the constant cache - the access has to be uniform within +a warp, otherwise the accesses are serialized. + +The constant cache reduces the pressure on the other caches and may enable +higher throughput and lower latency accesses. + +To set the ``__constant__`` variables the host must copy the data to the device +using :cpp:func:`hipMemcpyToSymbol`, for example: + +.. code-block:: cpp + + __constant__ int const_array[8]; + + void set_constant_memory(){ + int host_data[8] {1,2,3,4,5,6,7,8}; + + hipMemcpyToSymbol(const_array, host_data, sizeof(int) * 8); + + // call kernel that accesses const_array + } + +__shared__ +-------------------------------------------------------------------------------- + +Variables marked with ``__shared__`` are only accessible by threads within the +same block and have the lifetime of that block. It is usually backed by on-chip +shared memory, providing fast access to all threads within a block, which makes +it perfectly suited for sharing variables. + +Shared memory can be allocated statically within the kernel, but the size +of it has to be known at compile time. + +In order to dynamically allocate shared memory during runtime, but before the +kernel is launched, the variable has to be declared ``extern``, and the kernel +launch has to specify the needed amount of ``extern`` shared memory in the launch +configuration. The statically allocated shared memory is allocated without this +parameter. + +.. code-block:: cpp + + #include + + extern __shared__ int shared_array[]; + + __global__ void kernel(){ + // initialize shared memory + shared_array[threadIdx.x] = threadIdx.x; + // use shared memory - synchronize to make sure, that all threads of the + // block see all changes to shared memory + __syncthreads(); + } + + int main(){ + //shared memory in this case depends on the configurable block size + constexpr int blockSize = 256; + constexpr int sharedMemSize = blockSize * sizeof(int); + constexpr int gridSize = 2; + + kernel<<>>(); + } + +__managed__ +-------------------------------------------------------------------------------- + +Managed memory is a special qualifier, that makes the marked memory available on +the device and on the host. For more details see :ref:`unified_memory`. + +__restrict__ +-------------------------------------------------------------------------------- + +The ``__restrict__`` keyword tells the compiler that the associated memory +pointer does not alias with any other pointer in the function. This can help the +compiler perform better optimizations. For best results, every pointer passed to +a function should use this keyword. + +******************************************************************************** +Built-in constants +******************************************************************************** + +HIP defines some special built-in constants for use in device code. + +These built-ins are not implicitly defined by the compiler, the +``hip_runtime.h`` header has to be included instead. + +Index built-ins +================================================================================ + +Kernel code can use these identifiers to distinguish between the different +threads and blocks within a kernel. + +These built-ins are of type dim3, and are constant for each thread, but differ +between the threads or blocks, and are initialized at kernel launch. 
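
For illustration, the following minimal kernel sketch (assuming a
one-dimensional launch configuration; the kernel name and arguments are only
examples) combines these built-ins to compute a per-thread index and to guard
against out-of-bounds accesses:

.. code-block:: cpp

    __global__ void scale(float* data, float factor, unsigned int N){
        // unique index of this thread within the whole grid
        const unsigned int globalIdx = threadIdx.x + blockIdx.x * blockDim.x;
        // total number of threads in the grid
        const unsigned int gridStride = blockDim.x * gridDim.x;

        // grid-stride loop: also handles arrays larger than the grid
        for(unsigned int i = globalIdx; i < N; i += gridStride){
            data[i] *= factor;
        }
    }
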
blockDim and gridDim
--------------------------------------------------------------------------------

``blockDim`` and ``gridDim`` contain the sizes specified at kernel launch.
``blockDim`` contains the number of threads in the x-, y- and z-dimensions of
the block of threads. Similarly ``gridDim`` contains the number of blocks in the
grid.

.. _thread_and_block_idx:

threadIdx and blockIdx
--------------------------------------------------------------------------------

``threadIdx`` and ``blockIdx`` can be used to identify the threads and blocks
within the kernel.

``threadIdx`` identifies the thread within a block, meaning its values are
within ``0`` and ``blockDim.{x,y,z} - 1``. Likewise ``blockIdx`` identifies the
block within the grid, and the values are within ``0`` and ``gridDim.{x,y,z} - 1``.

A globally unique thread identifier within a three-dimensional grid can be
calculated using the following code:

.. code-block:: cpp

    (threadIdx.x + blockIdx.x * blockDim.x) +
    (threadIdx.y + blockIdx.y * blockDim.y) * (blockDim.x * gridDim.x) +
    (threadIdx.z + blockIdx.z * blockDim.z) * (blockDim.x * gridDim.x) * (blockDim.y * gridDim.y)

.. _warp_size:

warpSize
================================================================================

The ``warpSize`` constant contains the number of threads per warp for the given
target device. It can differ between architectures, and on RDNA architectures it
can even differ between kernel launches, depending on whether they run in CU or
WGP mode. See the :doc:`hardware features <../reference/hardware_features>` for
more information.

Since ``warpSize`` can differ between devices, it cannot be assumed to be a
compile-time constant on the host. It has to be queried using
:cpp:func:`hipDeviceGetAttribute` or :cpp:func:`hipGetDeviceProperties`, e.g.:

.. code-block:: cpp

    int val;
    hipDeviceGetAttribute(&val, hipDeviceAttributeWarpSize, deviceId);

.. note::

    ``warpSize`` should not be assumed to be a specific value in portable HIP
    applications. NVIDIA devices return 32 for this variable; AMD devices return
    64 for gfx9 and 32 for gfx10 and above. While code that assumes a ``warpSize``
    of 32 can run on devices with a ``warpSize`` of 64, it only utilizes half of
    the compute resources.

********************************************************************************
Vector types
********************************************************************************

These types are not automatically provided by the compiler. The
``hip_vector_types.h`` header, which is also included by ``hip_runtime.h``, has
to be included to use these types.

Fundamental vector types
================================================================================

Fundamental vector types derive from the `fundamental C++ integral and
floating-point types `_. These
types are defined in ``hip_vector_types.h``, which is included by
``hip_runtime.h``.

All vector types can be created with ``1``, ``2``, ``3`` or ``4`` elements; the
corresponding type is ``i``, where ``i`` is the number of
elements.

All vector types support a constructor function of the form
``make_()``. For example,
``float3 make_float3(float x, float y, float z)`` creates a vector of type
``float3`` with value ``(x,y,z)``.
The elements of the vectors can be accessed using their members ``x``, ``y``,
``z``, and ``w``.

..
code-block:: cpp

    double2 d2_vec = make_double2(2.0, 4.0);
    double first_elem = d2_vec.x;

HIP supports vectors created from the following fundamental types:

.. list-table::
    :widths: 50 50

    *
      - **Integral Types**
      -
    *
      - ``char``
      - ``uchar``
    *
      - ``short``
      - ``ushort``
    *
      - ``int``
      - ``uint``
    *
      - ``long``
      - ``ulong``
    *
      - ``longlong``
      - ``ulonglong``
    *
      - **Floating-Point Types**
      -
    *
      - ``float``
      -
    *
      - ``double``
      -

.. _dim3:

dim3
================================================================================

``dim3`` is a special three-dimensional unsigned integer vector type that is
commonly used to specify grid and group dimensions for kernel launch
configurations.

Its constructor accepts up to three arguments. The unspecified dimensions are
initialized to 1.

********************************************************************************
Built-in device functions
********************************************************************************

.. _memory_fence_instructions:

Memory fence instructions
================================================================================

HIP does not enforce a strict ordering on memory operations, meaning that the
order in which memory accesses are executed is not necessarily the order in
which other threads observe these changes. It can therefore not be assumed that
data written by one thread is visible to another thread without
synchronization.

Memory fences are a way to enforce a sequentially consistent order on memory
operations. This means that all writes to memory made before a memory fence are
observed by all threads after the fence. The scope of these fences depends on
which specific memory fence is called.

HIP supports ``__threadfence()``, ``__threadfence_block()`` and
``__threadfence_system()``:

* ``__threadfence_block()`` orders memory accesses for all threads within a thread block.
* ``__threadfence()`` orders memory accesses for all threads on a device.
* ``__threadfence_system()`` orders memory accesses for all threads in the system, making writes to memory visible to other devices and the host.

.. _synchronization_functions:

Synchronization functions
================================================================================

Synchronization functions cause all threads in a group to wait at a
synchronization point until all threads of the group have reached it. These
functions implicitly include a :ref:`threadfence `, thereby ensuring
visibility of memory accesses for the threads in the group.

The ``__syncthreads()`` function comes in different versions.

``void __syncthreads()`` simply synchronizes the threads of a block. The other
versions additionally evaluate a predicate:

``int __syncthreads_count(int predicate)`` returns the number of threads for
which the predicate evaluates to non-zero.

``int __syncthreads_and(int predicate)`` returns non-zero if the predicate
evaluates to non-zero for all threads.

``int __syncthreads_or(int predicate)`` returns non-zero if any of the
predicates evaluates to non-zero.

The Cooperative Groups API offers options to synchronize threads on a
developer-defined set of thread groups. For further information, check the
:ref:`Cooperative Groups API reference ` or the
:ref:`Cooperative Groups section in the programming guide
`.
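As a short illustration of these synchronization functions, the following
kernel sketch reverses a block-sized tile through shared memory, guarded by
``__syncthreads()``, and uses ``__syncthreads_count()`` to count how many
elements of the tile are negative. The kernel name, the block size of 256 and
the output layout are assumptions made for this example:

.. code-block:: cpp

    // Hypothetical kernel: reverses one tile per block and counts the
    // negative elements of that tile.
    __global__ void reverse_and_count(const float* in, float* out, int* negatives){
        __shared__ float tile[256]; // assumes a block size of 256 threads

        const int tid = threadIdx.x;
        const int gid = blockIdx.x * blockDim.x + tid;

        tile[tid] = in[gid];
        // Make sure the whole tile has been written before any thread reads it.
        __syncthreads();

        out[gid] = tile[blockDim.x - 1 - tid];

        // Every thread evaluates the predicate; the returned value is the
        // number of threads in the block for which it was non-zero.
        const int negativeCount = __syncthreads_count(tile[tid] < 0.0f);
        if(tid == 0){
            negatives[blockIdx.x] = negativeCount;
        }
    }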
+

Math functions
================================================================================

HIP-Clang supports a set of math operations that are callable from the device.
HIP supports most of the device functions supported by CUDA. These are described
on :ref:`Math API page `.

Texture functions
================================================================================

The supported texture functions are listed in ``texture_fetch_functions.h`` and
``texture_indirect_functions.h`` header files in the
`HIP-AMD backend repository `_.

Texture functions are not supported on some devices. To determine if texture
functions are supported on your device, check whether the macro
``__HIP_NO_IMAGE_SUPPORT`` is defined as ``1`` in device code. In host runtime
code, you can query the attribute ``hipDeviceAttributeImageSupport`` to check
if texture functions are supported.

Surface functions
================================================================================

The supported surface functions are listed on the :ref:`Surface object
reference page `.

Timer functions
================================================================================

HIP provides device functions to read a high-resolution timer from within the
kernel.

The following functions count the cycles on the device, where the rate varies
with the actual clock frequency.

.. code-block:: cpp

    clock_t clock()
    long long int clock64()

.. note::

    ``clock()`` and ``clock64()`` do not work properly on AMD RDNA3 (GFX11) graphics processors.

The difference between the returned values represents the cycles used.

.. code-block:: cpp

    __global__ void kernel(){
        long long int start = clock64();
        // kernel code
        long long int stop = clock64();
        long long int cycles = stop - start;
    }

``long long int wall_clock64()`` returns the wall clock time on the device, with a constant, fixed frequency.
The frequency is device dependent and can be queried using:

.. code-block:: cpp

    int wallClkRate = 0; //in kilohertz
    hipDeviceGetAttribute(&wallClkRate, hipDeviceAttributeWallClockRate, deviceId);

.. _atomic functions:

Atomic functions
================================================================================

Atomic functions are read-modify-write (RMW) operations whose result is visible
to all other threads in the scope of the atomic operation once the operation
completes.

If multiple instructions from different devices or threads target the same
memory location, the instructions are serialized in an undefined order.

Atomic operations in kernels can operate on block scope (i.e. shared memory),
device scope (global memory), or system scope (system memory), depending on
:doc:`hardware support <../reference/hardware_features>`.

The listed functions are also available with the ``_system`` (e.g.
``atomicAdd_system``) suffix, operating on system scope, which includes host
memory and other GPUs' memory. The functions without suffix operate on shared
or global memory on the executing device, depending on the memory space of the
variable.

HIP supports the following atomic operations, where ``TYPE`` is one of ``int``,
``unsigned int``, ``unsigned long``, ``unsigned long long``, ``float`` or
``double``, while ``INTEGER`` is ``int``, ``unsigned int``, ``unsigned long``,
``unsigned long long``:

..
list-table:: Atomic operations + + * - ``TYPE atomicAdd(TYPE* address, TYPE val)`` + + * - ``TYPE atomicSub(TYPE* address, TYPE val)`` + + * - ``TYPE atomicMin(TYPE* address, TYPE val)`` + * - ``long long atomicMin(long long* address, long long val)`` + + * - ``TYPE atomicMax(TYPE* address, TYPE val)`` + * - ``long long atomicMax(long long* address, long long val)`` + + * - ``TYPE atomicExch(TYPE* address, TYPE val)`` + + * - ``TYPE atomicCAS(TYPE* address, TYPE compare, TYPE val)`` + + * - ``INTEGER atomicAnd(INTEGER* address, INTEGER val)`` + + * - ``INTEGER atomicOr(INTEGER* address, INTEGER val)`` + + * - ``INTEGER atomicXor(INTEGER* address, INTEGER val)`` + + * - ``unsigned int atomicInc(unsigned int* address)`` + + * - ``unsigned int atomicDec(unsigned int* address)`` + +Unsafe floating-point atomic operations +-------------------------------------------------------------------------------- + +Some HIP devices support fast atomic operations on floating-point values. For +example, ``atomicAdd`` on single- or double-precision floating-point values may +generate a hardware instruction that is faster than emulating the atomic +operation using an atomic compare-and-swap (CAS) loop. + +On some devices, fast atomic instructions can produce results that differ from +the version implemented with atomic CAS loops. For example, some devices +will use different rounding or denormal modes, and some devices produce +incorrect answers if fast floating-point atomic instructions target fine-grained +memory allocations. + +The HIP-Clang compiler offers compile-time options to control the generation of +unsafe atomic instructions. By default the compiler does not generate unsafe +instructions. This is the same behaviour as with the ``-mno-unsafe-fp-atomics`` +compilation flag. The ``-munsafe-fp-atomics`` flag indicates to the compiler +that all floating-point atomic function calls are allowed to use an unsafe +version, if one exists. For example, on some devices, this flag indicates to the +compiler that no floating-point ``atomicAdd`` function can target fine-grained +memory. These options are applied globally for the entire compilation. + +HIP provides special functions that override the global compiler option for safe +or unsafe atomic functions. + +The ``safe`` prefix always generates safe atomic operations, even when +``-munsafe-fp-atomics`` is used, whereas ``unsafe`` always generates fast atomic +instructions, even when ``-mno-unsafe-fp-atomics``. The following table lists +the safe and unsafe atomic functions, where ``FLOAT_TYPE`` is either ``float`` +or ``double``. + +.. list-table:: AMD specific atomic operations + + * - ``FLOAT_TYPE unsafeAtomicAdd(FLOAT_TYPE* address, FLOAT_TYPE val)`` + + * - ``FLOAT_TYPE safeAtomicAdd(FLOAT_TYPE* address, FLOAT_TYPE val)`` + +.. _warp-cross-lane: + +Warp cross-lane functions +================================================================================ + +Threads in a warp are referred to as ``lanes`` and are numbered from ``0`` to +``warpSize - 1``. Warp cross-lane functions cooperate across all lanes in a +warp. AMD GPUs guarantee, that all warp lanes are executed in lockstep, whereas +NVIDIA GPUs that support Independent Thread Scheduling might require additional +synchronization, or the use of the ``__sync`` variants. + +Note that different devices can have different warp sizes. You should query the +:ref:`warpSize ` in portable code and not assume a fixed warp size. 
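A minimal host-side sketch of such a query, assuming device ``0`` is used and
that the block size should simply be a multiple of the warp size:

.. code-block:: cpp

    #include <hip/hip_runtime.h>

    int main(){
        hipDeviceProp_t props;
        hipGetDeviceProperties(&props, 0); // device 0, assumed for this example

        // Choose a block size based on the queried warp size instead of
        // hard-coding 32 or 64.
        const int blockSize = 4 * props.warpSize;

        // ... launch kernels with blockSize, and pass props.warpSize as a
        // kernel argument where device code needs it ...
    }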
+ +All mask values returned or accepted by these built-ins are 64-bit unsigned +integer values, even when compiled for a device with 32 threads per warp. On +such devices the higher bits are unused. CUDA code ported to HIP requires +changes to ensure that the correct type is used. + +Note that the ``__sync`` variants are made available in ROCm 6.2, but disabled by +default to help with the transition to 64-bit masks. They can be enabled by +setting the preprocessor macro ``HIP_ENABLE_WARP_SYNC_BUILTINS``. These built-ins +will be enabled unconditionally in the next ROCm release. Wherever possible, the +implementation includes a static assert to check that the program source uses +the correct type for the mask. + +The ``_sync`` variants require a 64-bit unsigned integer mask argument that +specifies the lanes of the warp that will participate. Each participating thread +must have its own bit set in its mask argument, and all active threads specified +in any mask argument must execute the same call with the same mask, otherwise +the result is undefined. + +.. _warp_vote_functions: + +Warp vote and ballot functions +-------------------------------------------------------------------------------- + +.. code-block:: cpp + + int __all(int predicate) + int __any(int predicate) + unsigned long long __ballot(int predicate) + unsigned long long __activemask() + + int __all_sync(unsigned long long mask, int predicate) + int __any_sync(unsigned long long mask, int predicate) + unsigned long long __ballot_sync(unsigned long long mask, int predicate) + +You can use ``__any`` and ``__all`` to get a summary view of the predicates evaluated by the +participating lanes. + +* ``__any()``: Returns 1 if the predicate is non-zero for any participating lane, otherwise it returns 0. + +* ``__all()``: Returns 1 if the predicate is non-zero for all participating lanes, otherwise it returns 0. + +To determine if the target platform supports the any/all instruction, you can +query the ``hasWarpVote`` device property on the host or use the +``HIP_ARCH_HAS_WARP_VOTE`` compiler definition in device code. + +``__ballot`` returns a bit mask containing the 1-bit predicate value from each +lane. The nth bit of the result contains the bit contributed by the nth lane. + +``__activemask()`` returns a bit mask of currently active warp lanes. The nth +bit of the result is 1 if the nth lane is active. + +Note that the ``__ballot`` and ``__activemask`` built-ins in HIP have a 64-bit return +value (unlike the 32-bit value returned by the CUDA built-ins). Code ported from +CUDA should be adapted to support the larger warp sizes that the HIP version +requires. + +Applications can test whether the target platform supports the ``__ballot`` or +``__activemask`` instructions using the ``hasWarpBallot`` device property in host +code or the ``HIP_ARCH_HAS_WARP_BALLOT`` macro defined by the compiler for device +code. + +Warp match functions +-------------------------------------------------------------------------------- + +.. code-block:: cpp + + unsigned long long __match_any(T value) + unsigned long long __match_all(T value, int *pred) + + unsigned long long __match_any_sync(unsigned long long mask, T value) + unsigned long long __match_all_sync(unsigned long long mask, T value, int *pred) + +``T`` can be a 32-bit integer type, 64-bit integer type or a single precision or +double precision floating point type. 
+

``__match_any`` returns a bit mask where the n-th bit is set to 1 if the n-th
lane has the same ``value`` as the current lane, and 0 otherwise.

``__match_all`` returns a bit mask in which the bits of the participating lanes
are set to 1 if all lanes have the same ``value``, and 0 otherwise.
The predicate ``pred`` is set to true if all participating threads have the same
``value``, and false otherwise.

Warp shuffle functions
--------------------------------------------------------------------------------

.. code-block:: cpp

    T __shfl       (T var, int srcLane, int width=warpSize);
    T __shfl_up    (T var, unsigned int delta, int width=warpSize);
    T __shfl_down  (T var, unsigned int delta, int width=warpSize);
    T __shfl_xor   (T var, int laneMask, int width=warpSize);

    T __shfl_sync      (unsigned long long mask, T var, int srcLane, int width=warpSize);
    T __shfl_up_sync   (unsigned long long mask, T var, unsigned int delta, int width=warpSize);
    T __shfl_down_sync (unsigned long long mask, T var, unsigned int delta, int width=warpSize);
    T __shfl_xor_sync  (unsigned long long mask, T var, int laneMask, int width=warpSize);

``T`` can be a 32-bit integer type, 64-bit integer type or a single precision or
double precision floating point type.

The warp shuffle functions exchange values between threads within a warp.

The optional ``width`` argument specifies the size of the subgroups into which
the warp is divided for the shuffle.
It has to be a power of two smaller than or equal to ``warpSize``. If it is
smaller than ``warpSize``, the warp is divided into separate subgroups that are
each indexed from 0 to ``width - 1`` as if they were their own entity, and only
the lanes within that subgroup participate in the shuffle. The lane indices in
the subgroup are given by ``laneIdx % width``.

The different shuffle functions behave as follows:

``__shfl``
  The thread reads the value from the lane specified in ``srcLane``.

``__shfl_up``
  The thread reads ``var`` from lane ``laneIdx - delta``, thereby "shuffling"
  the values of the lanes of the warp "up". If the resulting source lane is out
  of range, the thread returns its own ``var``.

``__shfl_down``
  The thread reads ``var`` from lane ``laneIdx + delta``, thereby "shuffling"
  the values of the lanes of the warp "down". If the resulting source lane is
  out of range, the thread returns its own ``var``.

``__shfl_xor``
  The thread reads ``var`` from lane ``laneIdx xor laneMask``. If ``width`` is
  smaller than ``warpSize``, a thread can read values from subgroups before its
  current subgroup. If it tries to read values from later subgroups, the
  function returns the ``var`` of the calling thread.

Warp matrix functions
--------------------------------------------------------------------------------

Warp matrix functions allow a warp to cooperatively operate on small matrices
that have elements spread over lanes in an unspecified manner.

HIP does not support warp matrix types or functions.

Cooperative groups functions
================================================================================

You can use cooperative groups to synchronize groups of threads across thread
blocks. Cooperative groups also provide a way of communicating between these
groups.

For further information, check the :ref:`Cooperative Groups API reference
` or the :ref:`Cooperative Groups programming
guide `.
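As a short illustration, the following kernel sketch uses the cooperative
groups interface instead of the raw built-ins to synchronize a block. The
kernel name, the block size of 256 and the data layout are assumptions made for
this example:

.. code-block:: cpp

    #include <hip/hip_cooperative_groups.h>

    namespace cg = cooperative_groups;

    // Hypothetical kernel: each block reverses one tile of the input.
    __global__ void reverse_blockwise(const float* in, float* out){
        cg::thread_block block = cg::this_thread_block();

        __shared__ float tile[256]; // assumes a block size of 256 threads

        const unsigned int rank   = block.thread_rank();
        const unsigned int offset = block.group_index().x * block.size();

        tile[rank] = in[offset + rank];

        // Synchronize the whole block through the group object.
        block.sync();

        out[offset + rank] = tile[block.size() - 1 - rank];
    }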
diff --git a/docs/how-to/hip_porting_guide.md b/docs/how-to/hip_porting_guide.md index bc3a2deda9..adda988ee5 100644 --- a/docs/how-to/hip_porting_guide.md +++ b/docs/how-to/hip_porting_guide.md @@ -373,7 +373,9 @@ run hipcc when appropriate. ### ``warpSize`` -Code should not assume a warp size of 32 or 64. See [Warp Cross-Lane Functions](https://rocm.docs.amd.com/projects/HIP/en/latest/reference/cpp_language_extensions.html#warp-cross-lane-functions) for information on how to write portable wave-aware code. +Code should not assume a warp size of 32 or 64. See the +:ref:`HIP language extension for warpSize ` for information on how +to write portable wave-aware code. ### Kernel launch with group size > 256 diff --git a/docs/how-to/kernel_language_cpp_support.rst b/docs/how-to/kernel_language_cpp_support.rst new file mode 100644 index 0000000000..e5ad9c733f --- /dev/null +++ b/docs/how-to/kernel_language_cpp_support.rst @@ -0,0 +1,209 @@ +.. meta:: + :description: This chapter describes HIP's kernel language's C++ support. + :keywords: AMD, ROCm, HIP, C++ support + +################################################################################ +Kernel language C++ support +################################################################################ + +The HIP host API can be compiled with any conforming C++ compiler, as long as no +kernel launch is present in the code. + +To compile device code and include kernel launches, a compiler with full HIP +support is needed, such as ``amdclang++``. For more information, see :doc:`ROCm +compilers `. + +In host code all modern C++ standards that are supported by the compiler can be +used. Device code compilation has some restrictions on modern C++ standards, but +in general also supports all C++ standards. The biggest restriction is the +reduced support of the C++ standard library in device code, as functions are +only compiled for the host by default. An exception to this are ``constexpr`` +functions that are resolved at compile time and can be used in device code. +There are ongoing efforts to implement C++ standard library functionality with +`libhipcxx `_. + +******************************************************************************** +Supported kernel language C++ features +******************************************************************************** + +This section describes HIP's kernel language C++ feature support for the +different versions of the standard. + +General C++ features +=============================================================================== + +Exception handling +------------------------------------------------------------------------------- + +An important difference between the host and device code C++ support is +exception handling. In device code, exceptions aren't available due to +the hardware architecture. The device code must use return codes to handle +errors. + +Assertions +-------------------------------------------------------------------------------- + +The ``assert`` function is supported in device code. Assertions are used for +debugging purposes. When the input expression equals zero, the execution will be +stopped. HIP provides its own implementation for ``assert`` for usage in device +code in ``hip/hip_runtime.h``. + +.. code-block:: cpp + + void assert(int input) + +HIP also provides the function ``abort()`` which can be used to terminate the +application when terminal failures are detected. It is implemented using the +``__builtin_trap()`` function. 
+ +This function produces a similar effect as using CUDA's ``asm("trap")``. +In HIP, ``abort()`` terminates the entire application, while in CUDA, +``asm("trap")`` only terminates the current kernel and the application continues +to run. + +printf +-------------------------------------------------------------------------------- + +``printf`` is supported in device code, and can be used just like in host code. + +.. code-block:: cpp + + #include + + __global__ void run_printf() { printf("Hello World\n"); } + + int main() { + run_printf<<>>(); + } + +Device-Side Dynamic Global Memory Allocation +-------------------------------------------------------------------------------- + +Device code can use ``new`` or ``malloc`` to dynamically allocate global +memory on the device, and ``delete`` or ``free`` to deallocate global memory. + +Classes +-------------------------------------------------------------------------------- + +Classes work on both host and device side, with some constraints on the device +side. + +Member functions with the appropriate qualifiers can be called in host and +device code, and the corresponding overload is executed. + +``virtual`` member functions are also supported, however calling these functions +from the host if the object was created on the device, or the other way around, +is undefined behaviour. + +The ``__host__``, ``__device__``, ``__managed__``, ``__shared__`` and +``__constant__`` memory space qualifiers can not be applied to member variables. + +C++11 support +=============================================================================== + +``constexpr`` + Full support in device code. ``constexpr`` implicitly defines ``__host__ + __device__``, so standard library functions that are marked ``constexpr`` can + be used in device code. + ``constexpr`` variables can be used in both host and device code. + +Lambdas + Lambdas are implicitly marked with ``__host__ __device__``. To mark them as + only executable for the host or the device, they can be explicitly marked like + any other function. There are restrictions on variable capture, however. Host + and device specific variables can only be accessed on other devices or the + host by explicitly copying them. Accessing captured the variables by + reference, when the variable is not located on the executing device or host, + causes undefined behaviour. + +Polymorphic function wrappers + HIP does not support the polymorphic function wrapper ``std::function`` + + +C++14 support +=============================================================================== + +All `C++14 language features `_ are +supported. + +C++17 support +=============================================================================== + +All `C++17 language features `_ are +supported. + +C++20 support +=============================================================================== + +Most `C++20 language features `_ are +supported, but some restrictions apply. Coroutines are not available in device +code. + +******************************************************************************** +Compiler features +******************************************************************************** + +Pragma Unroll +================================================================================ + +The unroll pragma for unrolling loops with a compile-time constant is supported: + +.. code-block:: cpp + + #pragma unroll 16 /* hint to compiler to unroll next loop by 16 */ + for (int i=0; i<16; i++) ... + +.. 
code-block:: cpp + + #pragma unroll 1 /* tell compiler to never unroll the loop */ + for (int i=0; i<16; i++) ... + +.. code-block:: cpp + + #pragma unroll /* hint to compiler to completely unroll next loop. */ + for (int i=0; i<16; i++) ... + +In-Line Assembly +================================================================================ + +GCN ISA In-line assembly can be included in device code. + +It has to be mentioned however, that in-line assembly should be used carefully. +For more information, please refer to the +:doc:`Inline ASM statements section of amdclang`. + +A short example program including inline assembly can be found in +`HIP inline_assembly sample +`_. + +For information on what special AMD GPU hardware features are available +through assembly, please refer to the `ISA manuals of the corresponding +architecture +`_. + +Kernel Compilation +================================================================================ + +``hipcc`` now supports compiling C++/HIP kernels to binary code objects. The +file format for the binary files is usually ``.co`` which means Code Object. +The following command builds the code object using ``hipcc``. + +.. code-block:: bash + + hipcc --genco --offload-arch=[TARGET GPU] [INPUT FILE] -o [OUTPUT FILE] + + [TARGET GPU] = GPU architecture + [INPUT FILE] = Name of the file containing source code + [OUTPUT FILE] = Name of the generated code object file + +For an example on how to use these object files, refer to the `HIP module_api +sample +`_. + +Architecture specific code +================================================================================ + +``amdclang++`` defines ``__gfx*__`` macros based on the GPU architecture to be +compiled for. These macros can be used to include GPU architecture specific +code. Refer to the sample in `HIP gpu_arch sample +`_. diff --git a/docs/index.md b/docs/index.md index 7b3f3bc513..fdee24b518 100644 --- a/docs/index.md +++ b/docs/index.md @@ -30,6 +30,8 @@ The HIP documentation is organized into the following categories: * [Debugging with HIP](./how-to/debugging) * {doc}`./how-to/logging` * {doc}`./how-to/hip_runtime_api` +* {doc}`./how-to/hip_cpp_language_extensions` +* {doc}`./how-to/kernel_language_cpp_support` * [HIP porting guide](./how-to/hip_porting_guide) * [HIP porting: driver API guide](./how-to/hip_porting_driver_api) * {doc}`./how-to/hip_rtc` @@ -41,8 +43,6 @@ The HIP documentation is organized into the following categories: * [HIP runtime API](./reference/hip_runtime_api_reference) * [HSA runtime API for ROCm](./reference/virtual_rocr) -* [C++ language extensions](./reference/cpp_language_extensions) -* [C++ language support](./reference/cpp_language_support) * [HIP math API](./reference/math_api) * [HIP environment variables](./reference/env_variables) * [Comparing syntax for different APIs](./reference/terms) diff --git a/docs/reference/cpp_language_extensions.rst b/docs/reference/cpp_language_extensions.rst deleted file mode 100644 index 09a6d8f5dc..0000000000 --- a/docs/reference/cpp_language_extensions.rst +++ /dev/null @@ -1,1209 +0,0 @@ -.. meta:: - :description: This chapter describes the built-in variables and functions that are accessible from the - HIP kernel. It's intended for users who are familiar with CUDA kernel syntax and want to - learn how HIP differs from CUDA. 
- :keywords: AMD, ROCm, HIP, CUDA, c++ language extensions, HIP functions - -******************************************************************************** -C++ language extensions -******************************************************************************** - -HIP provides a C++ syntax that is suitable for compiling most code that commonly appears in -compute kernels (classes, namespaces, operator overloading, and templates). HIP also defines other -language features that are designed to target accelerators, such as: - -* A kernel-launch syntax that uses standard C++ (this resembles a function call and is portable to all - HIP targets) -* Short-vector headers that can serve on a host or device -* Math functions that resemble those in ``math.h``, which is included with standard C++ compilers -* Built-in functions for accessing specific GPU hardware capabilities - -.. note:: - - This chapter describes the built-in variables and functions that are accessible from the HIP kernel. It's - intended for users who are familiar with CUDA kernel syntax and want to learn how HIP differs from - CUDA. - -Features are labeled with one of the following keywords: - -* **Supported**: HIP supports the feature with a CUDA-equivalent function -* **Not supported**: HIP does not support the feature -* **Under development**: The feature is under development and not yet available - -Function-type qualifiers -======================================================== - -``__device__`` ------------------------------------------------------------------------ - -Supported ``__device__`` functions are: - - * Run on the device - * Called from the device only - -You can combine ``__device__`` with the host keyword (:ref:`host_attr`). - -``__global__`` ------------------------------------------------------------------------ - -Supported ``__global__`` functions are: - - * Run on the device - * Called (launched) from the host - -HIP ``__global__`` functions must have a ``void`` return type. - -HIP doesn't support dynamic-parallelism, which means that you can't call ``__global__`` functions from -the device. - -.. _host_attr: - -``__host__`` ------------------------------------------------------------------------ - -Supported ``__host__`` functions are: - - * Run on the host - * Called from the host - -You can combine ``__host__`` with ``__device__``; in this case, the function compiles for the host and the -device. Note that these functions can't use the HIP grid coordinate functions (e.g., ``threadIdx.x``). If -you need to use HIP grid coordinate functions, you can pass the necessary coordinate information as -an argument. - -You can't combine ``__host__`` with ``__global__``. - -HIP parses the ``__noinline__`` and ``__forceinline__`` keywords and converts them into the appropriate -Clang attributes. - -Calling ``__global__`` functions -============================================================= - -`__global__` functions are often referred to as *kernels*. When you call a global function, you're -*launching a kernel*. When launching a kernel, you must specify an execution configuration that includes the -grid and block dimensions. The execution configuration can also include other information for the launch, -such as the amount of additional shared memory to allocate and the stream where you want to execute the -kernel. - -HIP introduces a standard C++ calling convention (``hipLaunchKernelGGL``) to pass the run -configuration to the kernel. However, you can also use the CUDA ``<<< >>>`` syntax. 
- -When using ``hipLaunchKernelGGL``, your first five parameters must be: - - * ``symbol kernelName``: The name of the kernel you want to launch. To support template kernels - that contain ``","``, use the ``HIP_KERNEL_NAME`` macro (HIPIFY tools insert this automatically). - * ``dim3 gridDim``: 3D-grid dimensions that specify the number of blocks to launch. - * ``dim3 blockDim``: 3D-block dimensions that specify the number of threads in each block. - * ``size_t dynamicShared``: The amount of additional shared memory that you want to allocate - when launching the kernel (see :ref:`shared-variable-type`). - * ``hipStream_t``: The stream where you want to run the kernel. A value of ``0`` corresponds to the - NULL stream (see :ref:`synchronization_functions`). - -You can include your kernel arguments after these parameters. - -.. code-block:: cpp - - // Example hipLaunchKernelGGL pseudocode: - __global__ void MyKernel(float *A, float *B, float *C, size_t N) - { - ... - } - - MyKernel<<>> (a,b,c,n); - - // Alternatively, you can launch the kernel using: - // hipLaunchKernelGGL(MyKernel, dim3(gridDim), dim3(groupDim), 0/*dynamicShared*/, 0/*stream), a, b, c, n); - -You can use HIPIFY tools to convert CUDA launch syntax to ``hipLaunchKernelGGL``. This includes the -conversion of optional ``<<< >>>`` arguments into the five required ``hipLaunchKernelGGL`` -parameters. - -.. note:: - - HIP doesn't support dimension sizes of :math:`gridDim * blockDim \ge 2^{32}` when launching a kernel. - -.. _kernel-launch-example: - -Kernel launch example -========================================================== - -.. code-block:: cpp - - // Example showing device function, __device__ __host__ - // <- compile for both device and host - #include - // Example showing device function, __device__ __host__ - __host__ __device__ float PlusOne(float x) // <- compile for both device and host - { - return x + 1.0; - } - - __global__ void MyKernel (const float *a, const float *b, float *c, unsigned N) - { - const int gid = threadIdx.x + blockIdx.x * blockDim.x; // <- coordinate index function - if (gid < N) { - c[gid] = a[gid] + PlusOne(b[gid]); - } - } - - void callMyKernel() - { - float *a, *b, *c; // initialization not shown... - unsigned N = 1000000; - const unsigned blockSize = 256; - const int gridSize = (N + blockSize - 1)/blockSize; - - MyKernel<<>> (a,b,c,N); - // Alternatively, kernel can be launched by - // hipLaunchKernelGGL(MyKernel, dim3(gridSize), dim3(blockSize), 0, 0, a,b,c,N); - } - -Variable type qualifiers -======================================================== - -``__constant__`` ------------------------------------------------------------------------------ - -The host writes constant memory before launching the kernel. This memory is read-only from the GPU -while the kernel is running. The functions for accessing constant memory are: - -* ``hipGetSymbolAddress()`` -* ``hipGetSymbolSize()`` -* ``hipMemcpyToSymbol()`` -* ``hipMemcpyToSymbolAsync()`` -* ``hipMemcpyFromSymbol()`` -* ``hipMemcpyFromSymbolAsync()`` - -.. note:: - - Add ``__constant__`` to a template can lead to undefined behavior. Refer to `HIP Issue #3201 `_ for details. - -.. _shared-variable-type: - -``__shared__`` ------------------------------------------------------------------------------ - -To allow the host to dynamically allocate shared memory, you can specify ``extern __shared__`` as a -launch parameter. - -.. 
note:: - - Prior to the HIP-Clang compiler, dynamic shared memory had to be declared using the - ``HIP_DYNAMIC_SHARED`` macro in order to ensure accuracy. This is because using static shared - memory in the same kernel could've resulted in overlapping memory ranges and data-races. The - HIP-Clang compiler provides support for ``extern __shared_`` declarations, so ``HIP_DYNAMIC_SHARED`` - is no longer required. - -``__managed__`` ------------------------------------------------------------------------------ - -Managed memory, including the ``__managed__`` keyword, is supported in HIP combined host/device -compilation. - -``__restrict__`` ------------------------------------------------------------------------------ - -``__restrict__`` tells the compiler that the associated memory pointer not to alias with any other pointer -in the kernel or function. This can help the compiler generate better code. In most use cases, every -pointer argument should use this keyword in order to achieve the benefit. - -Built-in variables -==================================================== - -Coordinate built-ins ------------------------------------------------------------------------------ - -The kernel uses coordinate built-ins (``thread*``, ``block*``, ``grid*``) to determine the coordinate index -and bounds for the active work item. - -Built-ins are defined in ``amd_hip_runtime.h``, rather than being implicitly defined by the compiler. - -Coordinate variable definitions for built-ins are the same for HIP and CUDA. For example: ``threadIdx.x``, -``blockIdx.y``, and ``gridDim.y``. The products ``gridDim.x * blockDim.x``, ``gridDim.y * blockDim.y``, and -``gridDim.z * blockDim.z`` are always less than ``2^32``. - -Coordinate built-ins are implemented as structures for improved performance. When used with -``printf``, they must be explicitly cast to integer types. - -``warpSize`` ------------------------------------------------------------------------------ -The ``warpSize`` variable type is ``int``. It contains the warp size (in threads) for the target device. -``warpSize`` should only be used in device functions that develop portable wave-aware code. - -.. note:: - - NVIDIA devices return 32 for this variable; AMD devices return 64 for gfx9 and 32 for gfx10 and above. - -Vector types -==================================================== - -The following vector types are defined in ``hip_runtime.h``. They are not automatically provided by the -compiler. - -Short vector types --------------------------------------------------------------------------------------------- - -Short vector types derive from basic integer and floating-point types. These structures are defined in -``hip_vector_types.h``. The first, second, third, and fourth components of the vector are defined by the -``x``, ``y``, ``z``, and ``w`` fields, respectively. All short vector types support a constructor function of the -form ``make_()``. For example, ``float4 make_float4(float x, float y, float z, float w)`` creates -a vector with type ``float4`` and value ``(x,y,z,w)``. 
- -HIP supports the following short vector formats: - -* Signed Integers: - - * ``char1``, ``char2``, ``char3``, ``char4`` - * ``short1``, ``short2``, ``short3``, ``short4`` - * ``int1``, ``int2``, ``int3``, ``int4`` - * ``long1``, ``long2``, ``long3``, ``long4`` - * ``longlong1``, ``longlong2``, ``longlong3``, ``longlong4`` - -* Unsigned Integers: - - * ``uchar1``, ``uchar2``, ``uchar3``, ``uchar4`` - * ``ushort1``, ``ushort2``, ``ushort3``, ``ushort4`` - * ``uint1``, ``uint2``, ``uint3``, ``uint4`` - * ``ulong1``, ``ulong2``, ``ulong3``, ``ulong4`` - * ``ulonglong1``, ``ulonglong2``, ``ulonglong3``, ``ulonglong4`` - -* Floating Points: - - * ``float1``, ``float2``, ``float3``, ``float4`` - * ``double1``, ``double2``, ``double3``, ``double4`` - -.. _dim3: - -dim3 --------------------------------------------------------------------------------------------- - -``dim3`` is a three-dimensional integer vector type that is commonly used to specify grid and group -dimensions. - -The dim3 constructor accepts between zero and three arguments. By default, it initializes unspecified -dimensions to 1. - -.. code-block:: cpp - - typedef struct dim3 { - uint32_t x; - uint32_t y; - uint32_t z; - - dim3(uint32_t _x=1, uint32_t _y=1, uint32_t _z=1) : x(_x), y(_y), z(_z) {}; - }; - -.. _memory_fence_instructions: - -Memory fence instructions -==================================================== - -HIP supports ``__threadfence()`` and ``__threadfence_block()``. If you're using ``threadfence_system()`` in the HIP-Clang path, you can use the following workaround: - -#. Build HIP with the ``HIP_COHERENT_HOST_ALLOC`` environment variable enabled. -#. Modify kernels that use ``__threadfence_system()`` as follows: - - * Ensure the kernel operates only on fine-grained system memory, which should be allocated with - ``hipHostMalloc()``. - * Remove ``memcpy`` for all allocated fine-grained system memory regions. - -.. _synchronization_functions: - -Synchronization functions -==================================================== - -Synchronization functions causes all threads in the group to wait at this synchronization point, and for all shared and global memory accesses by the threads to complete, before running synchronization. This guarantees the visibility of accessed data for all threads in the group. - -The ``__syncthreads()`` built-in function is supported in HIP. The ``__syncthreads_count(int)``, -``__syncthreads_and(int)``, and ``__syncthreads_or(int)`` functions are under development. - -The Cooperative Groups API offer options to do synchronization on a developer defined set of thread groups. For further information, check :ref:`Cooperative Groups API ` or :ref:`Cooperative Groups how to `. - -Math functions -==================================================== - -HIP-Clang supports a set of math operations that are callable from the device. -HIP supports most of the device functions supported by CUDA. These are described -on :ref:`Math API page `. - -Texture functions -=============================================== - -The supported texture functions are listed in ``texture_fetch_functions.h`` and -``texture_indirect_functions.h`` header files in the -`HIP-AMD backend repository `_. - -Texture functions are not supported on some devices. To determine if texture functions are supported -on your device, use ``Macro __HIP_NO_IMAGE_SUPPORT == 1``. You can query the attribute -``hipDeviceAttributeImageSupport`` to check if texture functions are supported in the host runtime -code. 
- -Surface functions -=============================================== - -The supported surface functions are located on :ref:`Surface object reference -page `. - -Timer functions -=============================================== - -To read a high-resolution timer from the device, HIP provides the following built-in functions: - -* Returning the incremental counter value for every clock cycle on a device: - - .. code-block:: cpp - - clock_t clock() - long long int clock64() - - The difference between the values that are returned represents the cycles used. - -* Returning the wall clock count at a constant frequency on the device: - - .. code-block:: cpp - - long long int wall_clock64() - - This can be queried using the HIP API with the ``hipDeviceAttributeWallClockRate`` attribute of the - device in HIP application code. For example: - - .. code-block:: cpp - - int wallClkRate = 0; //in kilohertz - HIPCHECK(hipDeviceGetAttribute(&wallClkRate, hipDeviceAttributeWallClockRate, deviceId)); - - Where ``hipDeviceAttributeWallClockRate`` is a device attribute. Note that wall clock frequency is a - per-device attribute. - - Note that ``clock()`` and ``clock64()`` do not work properly on AMD RDNA3 (GFX11) graphic processors. - -.. _atomic functions: - -Atomic functions -=============================================== - -Atomic functions are run as read-modify-write (RMW) operations that reside in global or shared -memory. No other device or thread can observe or modify the memory location during an atomic -operation. If multiple instructions from different devices or threads target the same memory location, -the instructions are serialized in an undefined order. - -To support system scope atomic operations, you can use the HIP APIs that contain the ``_system`` suffix. -For example: - -* ``atomicAnd``: This function is atomic and coherent within the GPU device running the function - -* ``atomicAnd_system``: This function extends the atomic operation from the GPU device to other CPUs and GPU devices in the system. - -HIP supports the following atomic operations. - -.. 
list-table:: Atomic operations - - * - **Function** - - **Supported in HIP** - - **Supported in CUDA** - - * - ``int atomicAdd(int* address, int val)`` - - ✓ - - ✓ - - * - ``int atomicAdd_system(int* address, int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicAdd(unsigned int* address,unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicAdd_system(unsigned int* address, unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicAdd(unsigned long long* address,unsigned long long val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicAdd_system(unsigned long long* address, unsigned long long val)`` - - ✓ - - ✓ - - * - ``float atomicAdd(float* address, float val)`` - - ✓ - - ✓ - - * - ``float atomicAdd_system(float* address, float val)`` - - ✓ - - ✓ - - * - ``double atomicAdd(double* address, double val)`` - - ✓ - - ✓ - - * - ``double atomicAdd_system(double* address, double val)`` - - ✓ - - ✓ - - * - ``float unsafeAtomicAdd(float* address, float val)`` - - ✓ - - ✗ - - * - ``float safeAtomicAdd(float* address, float val)`` - - ✓ - - ✗ - - * - ``double unsafeAtomicAdd(double* address, double val)`` - - ✓ - - ✗ - - * - ``double safeAtomicAdd(double* address, double val)`` - - ✓ - - ✗ - - * - ``int atomicSub(int* address, int val)`` - - ✓ - - ✓ - - * - ``int atomicSub_system(int* address, int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicSub(unsigned int* address,unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicSub_system(unsigned int* address, unsigned int val)`` - - ✓ - - ✓ - - * - ``int atomicExch(int* address, int val)`` - - ✓ - - ✓ - - * - ``int atomicExch_system(int* address, int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicExch(unsigned int* address,unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicExch_system(unsigned int* address, unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicExch(unsigned long long int* address,unsigned long long int val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicExch_system(unsigned long long* address, unsigned long long val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicExch_system(unsigned long long* address, unsigned long long val)`` - - ✓ - - ✓ - - * - ``float atomicExch(float* address, float val)`` - - ✓ - - ✓ - - * - ``int atomicMin(int* address, int val)`` - - ✓ - - ✓ - - * - ``int atomicMin_system(int* address, int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicMin(unsigned int* address,unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicMin_system(unsigned int* address, unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicMin(unsigned long long* address,unsigned long long val)`` - - ✓ - - ✓ - - * - ``int atomicMax(int* address, int val)`` - - ✓ - - ✓ - - * - ``int atomicMax_system(int* address, int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicMax(unsigned int* address,unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicMax_system(unsigned int* address, unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicMax(unsigned long long* address,unsigned long long val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicInc(unsigned int* address)`` - - ✗ - - ✓ - - * - ``unsigned int atomicDec(unsigned int* address)`` - - ✗ - - ✓ - - * - ``int atomicCAS(int* address, int compare, int val)`` - - ✓ - - ✓ - - * - ``int atomicCAS_system(int* address, int compare, int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicCAS(unsigned int* address,unsigned int compare,unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicCAS_system(unsigned int* address, 
unsigned int compare, unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicCAS(unsigned long long* address,unsigned long long compare,unsigned long long val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicCAS_system(unsigned long long* address, unsigned long long compare, unsigned long long val)`` - - ✓ - - ✓ - - * - ``int atomicAnd(int* address, int val)`` - - ✓ - - ✓ - - * - ``int atomicAnd_system(int* address, int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicAnd(unsigned int* address,unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicAnd_system(unsigned int* address, unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicAnd(unsigned long long* address,unsigned long long val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicAnd_system(unsigned long long* address, unsigned long long val)`` - - ✓ - - ✓ - - * - ``int atomicOr(int* address, int val)`` - - ✓ - - ✓ - - * - ``int atomicOr_system(int* address, int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicOr(unsigned int* address,unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicOr_system(unsigned int* address, unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicOr_system(unsigned int* address, unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicOr(unsigned long long int* address,unsigned long long val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicOr_system(unsigned long long* address, unsigned long long val)`` - - ✓ - - ✓ - - * - ``int atomicXor(int* address, int val)`` - - ✓ - - ✓ - - * - ``int atomicXor_system(int* address, int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicXor(unsigned int* address,unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned int atomicXor_system(unsigned int* address, unsigned int val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicXor(unsigned long long* address,unsigned long long val)`` - - ✓ - - ✓ - - * - ``unsigned long long atomicXor_system(unsigned long long* address, unsigned long long val)`` - - ✓ - - ✓ - -Unsafe floating-point atomic RMW operations ----------------------------------------------------------------------------------------------------------------- -Some HIP devices support fast atomic RMW operations on floating-point values. For example, -``atomicAdd`` on single- or double-precision floating-point values may generate a hardware RMW -instruction that is faster than emulating the atomic operation using an atomic compare-and-swap -(CAS) loop. - -On some devices, fast atomic RMW instructions can produce results that differ from the same -functions implemented with atomic CAS loops. For example, some devices will use different rounding -or denormal modes, and some devices produce incorrect answers if fast floating-point atomic RMW -instructions target fine-grained memory allocations. - -The HIP-Clang compiler offers a compile-time option, so you can choose fast--but potentially -unsafe--atomic instructions for your code. On devices that support these instructions, you can include -the ``-munsafe-fp-atomics`` option. This flag indicates to the compiler that all floating-point atomic -function calls are allowed to use an unsafe version, if one exists. For example, on some devices, this -flag indicates to the compiler that no floating-point ``atomicAdd`` function can target fine-grained -memory. - -If you want to avoid using unsafe use a floating-point atomic RMW operations, you can use the -``-mno-unsafe-fp-atomics`` option. 
Note that the compiler default is to not produce unsafe -floating-point atomic RMW instructions, so the ``-mno-unsafe-fp-atomics`` option is not necessarily -required. However, passing this option to the compiler is good practice. - -When you pass ``-munsafe-fp-atomics`` or ``-mno-unsafe-fp-atomics`` to the compiler's command line, -the option is applied globally for the entire compilation. Note that if some of the atomic RMW function -calls cannot safely use the faster floating-point atomic RMW instructions, you must use -``-mno-unsafe-fp-atomics`` in order to ensure that your atomic RMW function calls produce correct -results. - -HIP has four extra functions that you can use to more precisely control which floating-point atomic -RMW functions produce unsafe atomic RMW instructions: - -* ``float unsafeAtomicAdd(float* address, float val)`` -* ``double unsafeAtomicAdd(double* address, double val)`` (Always produces fast atomic RMW - instructions on devices that have them, even when ``-mno-unsafe-fp-atomics`` is used) -* `float safeAtomicAdd(float* address, float val)` -* ``double safeAtomicAdd(double* address, double val)`` (Always produces safe atomic RMW - operations, even when ``-munsafe-fp-atomics`` is used) - -.. _warp-cross-lane: - -Warp cross-lane functions -======================================================== - -Threads in a warp are referred to as ``lanes`` and are numbered from ``0`` to ``warpSize - 1``. -Warp cross-lane functions operate across all lanes in a warp. The hardware guarantees that all warp -lanes will execute in lockstep, so additional synchronization is unnecessary, and the instructions -use no shared memory. - -Note that NVIDIA and AMD devices have different warp sizes. You can use ``warpSize`` built-ins in you -portable code to query the warp size. - -.. tip:: - Be sure to review HIP code generated from the CUDA path to ensure that it doesn't assume a - ``waveSize`` of 32. "Wave-aware" code that assumes a ``waveSize`` of 32 can run on a wave-64 - machine, but it only utilizes half of the machine's resources. - -To get the default warp size of a GPU device, use ``hipGetDeviceProperties`` in you host functions. - -.. code-block:: cpp - - cudaDeviceProp props; - cudaGetDeviceProperties(&props, deviceID); - int w = props.warpSize; - // implement portable algorithm based on w (rather than assume 32 or 64) - -Only use ``warpSize`` built-ins in device functions, and don't assume ``warpSize`` to be a compile-time -constant. - -Note that assembly kernels may be built for a warp size that is different from the default. -All mask values either returned or accepted by these builtins are 64-bit -unsigned integer values, even when compiled for a wave-32 device, where all the -higher bits are unused. CUDA code ported to HIP requires changes to ensure that -the correct type is used. - -Note that the ``__sync`` variants are made available in ROCm 6.2, but disabled by -default to help with the transition to 64-bit masks. They can be enabled by -setting the preprocessor macro ``HIP_ENABLE_WARP_SYNC_BUILTINS``. These builtins -will be enabled unconditionally in the next ROCm release. Wherever possible, the -implementation includes a static assert to check that the program source uses -the correct type for the mask. - -.. _warp_vote_functions: - -Warp vote and ballot functions -------------------------------------------------------------------------------------------------------------- - -.. 
code-block:: cpp - - int __all(int predicate) - int __any(int predicate) - unsigned long long __ballot(int predicate) - unsigned long long __activemask() - - int __all_sync(unsigned long long mask, int predicate) - int __any_sync(unsigned long long mask, int predicate) - unsigned long long __ballot_sync(unsigned long long mask, int predicate) - -You can use ``__any`` and ``__all`` to get a summary view of the predicates evaluated by the -participating lanes. - -* ``__any()``: Returns 1 if the predicate is non-zero for any participating lane, otherwise it returns 0. - -* ``__all()``: Returns 1 if the predicate is non-zero for all participating lanes, otherwise it returns 0. - -To determine if the target platform supports the any/all instruction, you can use the ``hasWarpVote`` -device property or the ``HIP_ARCH_HAS_WARP_VOTE`` compiler definition. - -``__ballot`` returns a bit mask containing the 1-bit predicate value from each -lane. The nth bit of the result contains the 1 bit contributed by the nth warp -lane. - -``__activemask()`` returns a bit mask of currently active warp lanes. The nth bit -of the result is 1 if the nth warp lane is active. - -Note that the ``__ballot`` and ``__activemask`` builtins in HIP have a 64-bit return -value (unlike the 32-bit value returned by the CUDA builtins). Code ported from -CUDA should be adapted to support the larger warp sizes that the HIP version -requires. - -Applications can test whether the target platform supports the ``__ballot`` or -``__activemask`` instructions using the ``hasWarpBallot`` device property in host -code or the ``HIP_ARCH_HAS_WARP_BALLOT`` macro defined by the compiler for device -code. - -The ``_sync`` variants require a 64-bit unsigned integer mask argument that -specifies the lanes in the warp that will participate in cross-lane -communication with the calling lane. Each participating thread must have its own -bit set in its mask argument, and all active threads specified in any mask -argument must execute the same call with the same mask, otherwise the result is -undefined. - -Warp match functions -------------------------------------------------------------------------------------------------------------- - -.. code-block:: cpp - - unsigned long long __match_any(T value) - unsigned long long __match_all(T value, int *pred) - - unsigned long long __match_any_sync(unsigned long long mask, T value) - unsigned long long __match_all_sync(unsigned long long mask, T value, int *pred) - -``T`` can be a 32-bit integer type, 64-bit integer type or a single precision or -double precision floating point type. - -``__match_any`` returns a bit mask containing a 1-bit for every participating lane -if and only if that lane has the same value in ``value`` as the current lane, and -a 0-bit for all other lanes. - -``__match_all`` returns a bit mask containing a 1-bit for every participating lane -if and only if they all have the same value in ``value`` as the current lane, and -a 0-bit for all other lanes. The predicate ``pred`` is set to true if and only if -all participating threads have the same value in ``value``. - -The ``_sync`` variants require a 64-bit unsigned integer mask argument that -specifies the lanes in the warp that will participate in cross-lane -communication with the calling lane. Each participating thread must have its own -bit set in its mask argument, and all active threads specified in any mask -argument must execute the same call with the same mask, otherwise the result is -undefined. 
- -Warp shuffle functions -------------------------------------------------------------------------------------------------------------- - -The default width is ``warpSize`` (see :ref:`warp-cross-lane`). Half-float shuffles are not supported. - -.. code-block:: cpp - - T __shfl (T var, int srcLane, int width=warpSize); - T __shfl_up (T var, unsigned int delta, int width=warpSize); - T __shfl_down (T var, unsigned int delta, int width=warpSize); - T __shfl_xor (T var, int laneMask, int width=warpSize); - - T __shfl_sync (unsigned long long mask, T var, int srcLane, int width=warpSize); - T __shfl_up_sync (unsigned long long mask, T var, unsigned int delta, int width=warpSize); - T __shfl_down_sync (unsigned long long mask, T var, unsigned int delta, int width=warpSize); - T __shfl_xor_sync (unsigned long long mask, T var, int laneMask, int width=warpSize); - -``T`` can be a 32-bit integer type, 64-bit integer type or a single precision or -double precision floating point type. - -The ``_sync`` variants require a 64-bit unsigned integer mask argument that -specifies the lanes in the warp that will participate in cross-lane -communication with the calling lane. Each participating thread must have its own -bit set in its mask argument, and all active threads specified in any mask -argument must execute the same call with the same mask, otherwise the result is -undefined. - -Cooperative groups functions -============================================================== - -You can use cooperative groups to synchronize groups of threads. Cooperative groups also provide a -way of communicating between groups of threads at a granularity that is different from the block. - -HIP supports the following kernel language cooperative groups types and functions: - -.. list-table:: Cooperative groups functions - - * - **Function** - - **Supported in HIP** - - **Supported in CUDA** - - * - ``void thread_group.sync();`` - - ✓ - - ✓ - - * - ``unsigned thread_group.size();`` - - ✓ - - ✓ - - * - ``unsigned thread_group.thread_rank()`` - - ✓ - - ✓ - - * - ``bool thread_group.is_valid();`` - - ✓ - - ✓ - - * - ``grid_group this_grid()`` - - ✓ - - ✓ - - * - ``void grid_group.sync()`` - - ✓ - - ✓ - - * - ``unsigned grid_group.size()`` - - ✓ - - ✓ - - * - ``unsigned grid_group.thread_rank()`` - - ✓ - - ✓ - - * - ``bool grid_group.is_valid()`` - - ✓ - - ✓ - - * - ``multi_grid_group this_multi_grid()`` - - ✓ - - ✓ - - * - ``void multi_grid_group.sync()`` - - ✓ - - ✓ - - * - ``unsigned multi_grid_group.size()`` - - ✓ - - ✓ - - * - ``unsigned multi_grid_group.thread_rank()`` - - ✓ - - ✓ - - * - ``bool multi_grid_group.is_valid()`` - - ✓ - - ✓ - - * - ``unsigned multi_grid_group.num_grids()`` - - ✓ - - ✓ - - * - ``unsigned multi_grid_group.grid_rank()`` - - ✓ - - ✓ - - * - ``thread_block this_thread_block()`` - - ✓ - - ✓ - - * - ``multi_grid_group this_multi_grid()`` - - ✓ - - ✓ - - * - ``void multi_grid_group.sync()`` - - ✓ - - ✓ - - * - ``void thread_block.sync()`` - - ✓ - - ✓ - - * - ``unsigned thread_block.size()`` - - ✓ - - ✓ - - * - ``unsigned thread_block.thread_rank()`` - - ✓ - - ✓ - - * - ``bool thread_block.is_valid()`` - - ✓ - - ✓ - - * - ``dim3 thread_block.group_index()`` - - ✓ - - ✓ - - * - ``dim3 thread_block.thread_index()`` - - ✓ - - ✓ - -For further information, check :ref:`Cooperative Groups API ` or :ref:`Cooperative Groups how to `. 
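
As an illustration of the shuffle builtins listed above, the following sketch
performs a warp-level sum reduction with ``__shfl_down``. It is written in
terms of ``warpSize`` rather than assuming a width of 32 or 64; the kernel and
buffer names are illustrative and error checking is omitted.

.. code-block:: cpp

    #include <hip/hip_runtime.h>
    #include <cstdio>

    // After the loop, lane 0 of every warp holds the sum of the values
    // contributed by all lanes of that warp.
    __global__ void warp_reduce(const float* in, float* out)
    {
        const unsigned int gid = threadIdx.x + blockIdx.x * blockDim.x;
        float value = in[gid];

        // Halve the shuffle distance every step, starting from warpSize / 2.
        for (int offset = warpSize / 2; offset > 0; offset /= 2) {
            value += __shfl_down(value, offset);
        }

        if ((threadIdx.x % warpSize) == 0) {
            out[gid / warpSize] = value; // one partial sum per warp
        }
    }

    int main()
    {
        const int n = 256;
        float h_in[n];
        for (int i = 0; i < n; ++i) h_in[i] = 1.0f; // every warp sum equals warpSize

        float *d_in = nullptr, *d_out = nullptr;
        hipMalloc(&d_in, n * sizeof(float));
        hipMalloc(&d_out, n * sizeof(float)); // more than one slot per warp is fine
        hipMemcpy(d_in, h_in, n * sizeof(float), hipMemcpyHostToDevice);

        warp_reduce<<<1, n>>>(d_in, d_out);
        hipDeviceSynchronize();

        float first_sum = 0.0f;
        hipMemcpy(&first_sum, d_out, sizeof(float), hipMemcpyDeviceToHost);
        printf("first warp sum: %f (expected to equal warpSize)\n", first_sum);

        hipFree(d_in);
        hipFree(d_out);
        return 0;
    }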
- -Warp matrix functions -============================================================ - -Warp matrix functions allow a warp to cooperatively operate on small matrices that have elements -spread over lanes in an unspecified manner. - -HIP does not support kernel language warp matrix types or functions. - -.. list-table:: Warp matrix functions - - * - **Function** - - **Supported in HIP** - - **Supported in CUDA** - - * - ``void load_matrix_sync(fragment<...> &a, const T* mptr, unsigned lda)`` - - ✗ - - ✓ - - * - ``void load_matrix_sync(fragment<...> &a, const T* mptr, unsigned lda, layout_t layout)`` - - ✗ - - ✓ - - * - ``void store_matrix_sync(T* mptr, fragment<...> &a, unsigned lda, layout_t layout)`` - - ✗ - - ✓ - - * - ``void fill_fragment(fragment<...> &a, const T &value)`` - - ✗ - - ✓ - - * - ``void mma_sync(fragment<...> &d, const fragment<...> &a, const fragment<...> &b, const fragment<...> &c , bool sat)`` - - ✗ - - ✓ - -Independent thread scheduling -============================================================ - -Certain architectures that support CUDA allow threads to progress independently of each other. This -independent thread scheduling makes intra-warp synchronization possible. - -HIP does not support this type of scheduling. - -Profiler Counter Function -============================================================ - -The CUDA ``__prof_trigger()`` instruction is not supported. - -Assert -============================================================ - -The assert function is supported in HIP. -Assert function is used for debugging purpose, when the input expression equals to zero, the execution will be stopped. - -.. code-block:: cpp - - void assert(int input) - -There are two kinds of implementations for assert functions depending on the use sceneries, -- One is for the host version of assert, which is defined in ``assert.h``, -- Another is the device version of assert, which is implemented in ``hip/hip_runtime.h``. -Users need to include ``assert.h`` to use ``assert``. For assert to work in both device and host functions, users need to include ``"hip/hip_runtime.h"``. - -HIP provides the function ``abort()`` which can be used to terminate the application when terminal failures are detected. It is implemented using the ``__builtin_trap()`` function. - -This function produces a similar effect of using ``asm("trap")`` in the CUDA code. - -.. note:: - - In HIP, the function terminates the entire application, while in CUDA, ``asm("trap")`` only terminates the dispatch and the application continues to run. - - -``printf`` -============================================================ - -``printf`` function is supported in HIP. -The following is a simple example to print information in the kernel. - -.. code-block:: cpp - - #include - - __global__ void run_printf() { printf("Hello World\n"); } - - int main() { - run_printf<<>>(); - } - - -Device-Side Dynamic Global Memory Allocation -============================================================ - -Device-side dynamic global memory allocation is under development. HIP now includes a preliminary -implementation of malloc and free that can be called from device functions. - -``__launch_bounds__`` -============================================================ - -GPU multiprocessors have a fixed pool of resources (primarily registers and shared memory) which are shared by the actively running warps. 
Using more resources can increase IPC of the kernel but reduces the resources available for other warps and limits the number of warps that can be simultaneously running. Thus GPUs have a complex relationship between resource usage and performance. - -``__launch_bounds__`` allows the application to provide usage hints that influence the resources (primarily registers) used by the generated code. It is a function attribute that must be attached to a __global__ function: - -.. code-block:: cpp - - __global__ void __launch_bounds__(MAX_THREADS_PER_BLOCK, MIN_WARPS_PER_EXECUTION_UNIT) - MyKernel(hipGridLaunch lp, ...) - ... - -``__launch_bounds__`` supports two parameters: -- MAX_THREADS_PER_BLOCK - The programmers guarantees that kernel will be launched with threads less than MAX_THREADS_PER_BLOCK. (On NVCC this maps to the ``.maxntid`` PTX directive). If no launch_bounds is specified, MAX_THREADS_PER_BLOCK is the maximum block size supported by the device (typically 1024 or larger). Specifying MAX_THREADS_PER_BLOCK less than the maximum effectively allows the compiler to use more resources than a default unconstrained compilation that supports all possible block sizes at launch time. -The threads-per-block is the product of (``blockDim.x * blockDim.y * blockDim.z``). -- MIN_WARPS_PER_EXECUTION_UNIT - directs the compiler to minimize resource usage so that the requested number of warps can be simultaneously active on a multi-processor. Since active warps compete for the same fixed pool of resources, the compiler must reduce resources required by each warp(primarily registers). MIN_WARPS_PER_EXECUTION_UNIT is optional and defaults to 1 if not specified. Specifying a MIN_WARPS_PER_EXECUTION_UNIT greater than the default 1 effectively constrains the compiler's resource usage. - -When launch kernel with HIP APIs, for example, ``hipModuleLaunchKernel()``, HIP will do validation to make sure input kernel dimension size is not larger than specified launch_bounds. -In case exceeded, HIP would return launch failure, if AMD_LOG_LEVEL is set with proper value (for details, please refer to ``docs/markdown/hip_logging.md``), detail information will be shown in the error log message, including -launch parameters of kernel dim size, launch bounds, and the name of the faulting kernel. It's helpful to figure out which is the faulting kernel, besides, the kernel dim size and launch bounds values will also assist in debugging such failures. - -Compiler Impact --------------------------------------------------------------------------------------------- - -The compiler uses these parameters as follows: -- The compiler uses the hints only to manage register usage, and does not automatically reduce shared memory or other resources. -- Compilation fails if compiler cannot generate a kernel which meets the requirements of the specified launch bounds. -- From MAX_THREADS_PER_BLOCK, the compiler derives the maximum number of warps/block that can be used at launch time. -Values of MAX_THREADS_PER_BLOCK less than the default allows the compiler to use a larger pool of registers : each warp uses registers, and this hint constrains the launch to a warps/block size which is less than maximum. -- From MIN_WARPS_PER_EXECUTION_UNIT, the compiler derives a maximum number of registers that can be used by the kernel (to meet the required #simultaneous active blocks). -If MIN_WARPS_PER_EXECUTION_UNIT is 1, then the kernel can use all registers supported by the multiprocessor. 
-- The compiler ensures that the registers used in the kernel is less than both allowed maximums, typically by spilling registers (to shared or global memory), or by using more instructions. -- The compiler may use heuristics to increase register usage, or may simply be able to avoid spilling. The MAX_THREADS_PER_BLOCK is particularly useful in this cases, since it allows the compiler to use more registers and avoid situations where the compiler constrains the register usage (potentially spilling) to meet the requirements of a large block size that is never used at launch time. - -CU and EU Definitions --------------------------------------------------------------------------------------------- - -A compute unit (CU) is responsible for executing the waves of a work-group. It is composed of one or more execution units (EU) which are responsible for executing waves. An EU can have enough resources to maintain the state of more than one executing wave. This allows an EU to hide latency by switching between waves in a similar way to symmetric multithreading on a CPU. In order to allow the state for multiple waves to fit on an EU, the resources used by a single wave have to be limited. Limiting such resources can allow greater latency hiding, but can result in having to spill some register state to memory. This attribute allows an advanced developer to tune the number of waves that are capable of fitting within the resources of an EU. It can be used to ensure at least a certain number will fit to help hide latency, and can also be used to ensure no more than a certain number will fit to limit cache thrashing. - -Porting from CUDA ``__launch_bounds`` --------------------------------------------------------------------------------------------- - -CUDA defines a ``__launch_bounds`` which is also designed to control occupancy: - -.. code-block:: cpp - - __launch_bounds(MAX_THREADS_PER_BLOCK, MIN_BLOCKS_PER_MULTIPROCESSOR) - -- The second parameter ``__launch_bounds`` parameters must be converted to the format used __hip_launch_bounds, which uses warps and execution-units rather than blocks and multi-processors (this conversion is performed automatically by HIPIFY tools). - -.. code-block:: cpp - - MIN_WARPS_PER_EXECUTION_UNIT = (MIN_BLOCKS_PER_MULTIPROCESSOR * MAX_THREADS_PER_BLOCK) / 32 - -The key differences in the interface are: -- Warps (rather than blocks): -The developer is trying to tell the compiler to control resource utilization to guarantee some amount of active Warps/EU for latency hiding. Specifying active warps in terms of blocks appears to hide the micro-architectural details of the warp size, but makes the interface more confusing since the developer ultimately needs to compute the number of warps to obtain the desired level of control. -- Execution Units (rather than multiprocessor): -The use of execution units rather than multiprocessors provides support for architectures with multiple execution units/multi-processor. For example, the AMD GCN architecture has 4 execution units per multiprocessor. The ``hipDeviceProps`` has a field ``executionUnitsPerMultiprocessor``. -Platform-specific coding techniques such as ``#ifdef`` can be used to specify different launch_bounds for NVCC and HIP-Clang platforms, if desired. - -``maxregcount`` --------------------------------------------------------------------------------------------- - -Unlike NVCC, HIP-Clang does not support the ``--maxregcount`` option. 
Instead, users are encouraged to use the hip_launch_bounds directive since the parameters are more intuitive and portable than -micro-architecture details like registers, and also the directive allows per-kernel control rather than an entire file. hip_launch_bounds works on both HIP-Clang and NVCC targets. - -Asynchronous Functions -============================================================ - -The supported asynchronous functions reference are located on the following pages: - -* :ref:`stream_management_reference` -* :ref:`stream_ordered_memory_allocator_reference` -* :ref:`peer_to_peer_device_memory_access_reference` -* :ref:`memory_management_reference` -* :ref:`external_resource_interoperability_reference` - -Register Keyword -============================================================ - -The register keyword is deprecated in C++, and is silently ignored by both NVCC and HIP-Clang. You can pass the option ``-Wdeprecated-register`` the compiler warning message. - -Pragma Unroll -============================================================ - -Unroll with a bounds that is known at compile-time is supported. For example: - -.. code-block:: cpp - - #pragma unroll 16 /* hint to compiler to unroll next loop by 16 */ - for (int i=0; i<16; i++) ... - -.. code-block:: cpp - - #pragma unroll 1 /* tell compiler to never unroll the loop */ - for (int i=0; i<16; i++) ... - -.. code-block:: cpp - - #pragma unroll /* hint to compiler to completely unroll next loop. */ - for (int i=0; i<16; i++) ... - -In-Line Assembly -============================================================ - -GCN ISA In-line assembly is supported. - -There are some usage limitations in ROCm compiler for inline asm support, please refer to `Inline ASM statements `_ for details. - -Users can get related background resources on `how to use inline assembly `_ for any usage of inline assembly features. - -A short example program including an inline assembly statement can be found at `inline asm tutorial `_. - -For further usage of special AMD GPU hardware features that are available through assembly, please refer to the ISA manual for `AMDGPU usage `_, in which AMD GCN is listed from gfx906 to RDNA 3.5. - -C++ Support -============================================================ - -The following C++ features are not supported: - -* Run-time-type information (RTTI) -* Try/catch - -Partially supported features: - -* Virtual functions - -Virtual functions are not supported if objects containing virtual function tables are passed between GPU's of different offload arch's, e.g. between gfx906 and gfx1030. Otherwise virtual functions are supported. - -Kernel Compilation -============================================================ - -hipcc now supports compiling C++/HIP kernels to binary code objects. -The file format for binary is ``.co`` which means Code Object. The following command builds the code object using ``hipcc``. - -.. code-block:: bash - - hipcc --genco --offload-arch=[TARGET GPU] [INPUT FILE] -o [OUTPUT FILE] - - [TARGET GPU] = GPU architecture - [INPUT FILE] = Name of the file containing kernels - [OUTPUT FILE] = Name of the generated code object file - -.. note:: - - When using binary code objects is that the number of arguments to the kernel is different on HIP-Clang and NVCC path. Refer to the `HIP module_api sample `_ for differences in the arguments to be passed to the kernel. 
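
The following host-side sketch shows one way such a code object can be loaded
and launched through the module API. The file name ``kernel.co``, the kernel
name ``vector_add``, and its argument list are assumptions made for this
example; the sketch uses the ``kernelParams`` argument path and omits error
checking.

.. code-block:: cpp

    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // Load a code object built with `hipcc --genco` and launch a kernel assumed
    // to have the signature: vector_add(float* out, const float* in, size_t n).
    int main()
    {
        const size_t n = 1024;
        std::vector<float> h_in(n, 1.0f), h_out(n, 0.0f);

        float *d_in = nullptr, *d_out = nullptr;
        hipMalloc(&d_in, n * sizeof(float));
        hipMalloc(&d_out, n * sizeof(float));
        hipMemcpy(d_in, h_in.data(), n * sizeof(float), hipMemcpyHostToDevice);

        hipModule_t module;
        hipFunction_t kernel;
        hipModuleLoad(&module, "kernel.co");                // code object from --genco
        hipModuleGetFunction(&kernel, module, "vector_add");

        size_t count = n;
        void* args[] = { &d_out, &d_in, &count };           // pointers to each argument

        hipModuleLaunchKernel(kernel,
                              static_cast<unsigned int>(n / 256), 1, 1, // grid
                              256, 1, 1,                                // block
                              0, nullptr,                               // shared mem, stream
                              args, nullptr);

        hipMemcpy(h_out.data(), d_out, n * sizeof(float), hipMemcpyDeviceToHost);
        printf("h_out[0] = %f\n", h_out[0]);

        hipModuleUnload(module);
        hipFree(d_in);
        hipFree(d_out);
        return 0;
    }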
- -gfx-arch-specific-kernel -============================================================ - -Clang defined '__gfx*__' macros can be used to execute gfx arch specific codes inside the kernel. Refer to the sample in `HIP 14_gpu_arch sample `_. diff --git a/docs/reference/cpp_language_support.rst b/docs/reference/cpp_language_support.rst deleted file mode 100644 index 1635258ccf..0000000000 --- a/docs/reference/cpp_language_support.rst +++ /dev/null @@ -1,171 +0,0 @@ -.. meta:: - :description: This chapter describes the C++ support of the HIP ecosystem - ROCm software. - :keywords: AMD, ROCm, HIP, C++ - -******************************************************************************* -C++ language support -******************************************************************************* - -The ROCm platform enables the power of combined C++ and HIP (Heterogeneous-computing -Interface for Portability) code. This code is compiled with a ``clang`` or ``clang++`` -compiler. The official compilers support the HIP platform, or you can use the -``amdclang`` or ``amdclang++`` included in the ROCm installation, which are a wrapper for -the official versions. - -The source code is compiled according to the ``C++03``, ``C++11``, ``C++14``, ``C++17``, -and ``C++20`` standards, along with HIP-specific extensions, but is subject to -restrictions. The key restriction is the reduced support of standard library in device -code. This is due to the fact that by default a function is considered to run on host, -except for ``constexpr`` functions, which can run on host and device as well. - -.. _language_modern_cpp_support: - -Modern C++ support -=============================================================================== - -C++ is considered a modern programming language as of C++11. This section describes how -HIP supports these new C++ features. - -C++11 support -------------------------------------------------------------------------------- - -The C++11 standard introduced many new features. These features are supported in HIP host -code, with some notable omissions on the device side. The rule of thumb here is that -``constexpr`` functions work on device, the rest doesn't. This means that some important -functionality like ``std::function`` is missing on the device, but unfortunately the -standard library wasn't designed with HIP in mind, which means that the support is in a -state of "works as-is". - -Certain features have restrictions and clarifications. For example, any functions using -the ``constexpr`` qualifier or the new ``initializer lists``, ``std::move`` or -``std::forward`` features are implicitly considered to have the ``__host__`` and -``__device__`` execution space specifier. Also, ``constexpr`` variables that are static -members or namespace scoped can be used from both host and device, but only for read -access. Dereferencing a static ``constexpr`` outside its specified execution space causes -an error. - -Lambdas are supported, but there are some extensions and restrictions on their usage. For -more information, see the `Extended lambdas`_ section below. - -C++14 support -------------------------------------------------------------------------------- - -The C++14 language features are supported. - -C++17 support -------------------------------------------------------------------------------- - -All C++17 language features are supported. 
- -C++20 support -------------------------------------------------------------------------------- - -All C++20 language features are supported, but extensions and restrictions apply. C++20 -introduced coroutines and modules, which fundamentally changed how programs are written. -HIP doesn't support these features. However, ``consteval`` functions can be called from -host and device, even if specified for host use only. - -The three-way comparison operator (spaceship operator ``<=>``) works with host and device -code. - -.. _language_restrictions: - -Extensions and restrictions -=============================================================================== - -In addition to the deviations from the standard, there are some general extensions and -restrictions to consider. - -Global functions -------------------------------------------------------------------------------- - -Functions that serve as an entry point for device execution are called kernels and are -specified with the ``__global__`` qualifier. To call a kernel function, use the triple -chevron operator: ``<<< >>>``. Kernel functions must have a ``void`` return type. These -functions can't: - -* have a ``constexpr`` specifier -* have a parameter of type ``std::initializer_list`` or ``va_list`` -* use an rvalue reference as a parameter. -* use parameters having different sizes in host and device code, e.g. long double arguments, or structs containing long double members. -* use struct-type arguments which have different layout in host and device code. - -Kernels can have variadic template parameters, but only one parameter pack, which must be -the last item in the template parameter list. - -Device space memory specifiers -------------------------------------------------------------------------------- - -HIP includes device space memory specifiers to indicate whether a variable is allocated -in host or device memory and how its memory should be allocated. HIP supports the -``__device__``, ``__shared__``, ``__managed__``, and ``__constant__`` specifiers. - -The ``__device__`` and ``__constant__`` specifiers define global variables, which are -allocated within global memory on the HIP devices. The only difference is that -``__constant__`` variables can't be changed after allocation. The ``__shared__`` -specifier allocates the variable within shared memory, which is available for all threads -in a block. - -The ``__managed__`` variable specifier creates global variables that are initially -undefined and unaddressed within the global symbol table. The HIP runtime allocates -managed memory and defines the symbol when it loads the device binary. A managed variable -can be accessed in both device and host code. - -It's important to know where a variable is stored because it is only available from -certain locations. Generally, variables allocated in the host memory are not accessible -from the device code, while variables allocated in the device memory are not directly -accessible from the host code. Dereferencing a pointer to device memory on the host -results in a segmentation fault. Accessing device variables in host code should be done -through kernel execution or HIP functions like ``hipMemCpyToSymbol``. - -Exception handling -------------------------------------------------------------------------------- - -An important difference between the host and device code is exception handling. In device -code, this control flow isn't available due to the hardware architecture. The device -code must use return codes to handle errors. 
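
To illustrate the memory space specifiers described above, the following
sketch declares one variable of each kind and uses them from a kernel. The
variable names and values are arbitrary, and error checking is omitted.

.. code-block:: cpp

    #include <hip/hip_runtime.h>
    #include <cstdio>

    __constant__ float scale = 2.0f; // constant memory: read-only in device code
    __device__ float last_value;     // global variable in device memory
    __managed__ int processed;       // managed variable, visible to host and device

    __global__ void demo(const float* in, float* out, size_t n)
    {
        __shared__ float tile[256];  // shared memory: visible to one block

        const size_t i = blockIdx.x * blockDim.x + threadIdx.x;
        tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
        __syncthreads();

        if (i < n) {
            out[i] = tile[threadIdx.x] * scale;
        }
        if (i == 0) {
            last_value = tile[0] * scale;
            processed  = static_cast<int>(n);
        }
    }

    int main()
    {
        const size_t n = 256;
        float *d_in = nullptr, *d_out = nullptr;
        hipMalloc(&d_in, n * sizeof(float));
        hipMalloc(&d_out, n * sizeof(float));
        hipMemset(d_in, 0, n * sizeof(float));

        demo<<<1, 256>>>(d_in, d_out, n);
        hipDeviceSynchronize();

        // The managed variable can be read directly from host code.
        printf("processed %d elements\n", processed);

        hipFree(d_in);
        hipFree(d_out);
        return 0;
    }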
- -Kernel parameters -------------------------------------------------------------------------------- - -There are some restrictions on kernel function parameters. They cannot be passed by -reference, because these functions are called from the host but run on the device. Also, -a variable number of arguments is not allowed. - -Classes -------------------------------------------------------------------------------- - -Classes work on both the host and device side, but there are some constraints. The -``static`` member functions can't be ``__global__``. ``Virtual`` member functions work, -but a ``virtual`` function must not be called from the host if the parent object was -created on the device, or the other way around, because this behavior is undefined. -Another minor restriction is that ``__device__`` variables, that are global scoped must -have trivial constructors. - -Polymorphic function wrappers -------------------------------------------------------------------------------- - -HIP doesn't support the polymorphic function wrapper ``std::function``, which was -introduced in C++11. - -Extended lambdas -------------------------------------------------------------------------------- - -HIP supports Lambdas, which by default work as expected. - -Lambdas have implicit host device attributes. This means that they can be executed by -both host and device code, and works the way you would expect. To make a lambda callable -only by host or device code, users can add ``__host__`` or ``__device__`` attribute. The -only restriction is that host variables can only be accessed through copy on the device. -Accessing through reference will cause undefined behavior. - -Inline namespaces -------------------------------------------------------------------------------- - -Inline namespaces are supported, but with a few exceptions. The following entities can't -be declared in namespace scope within an inline unnamed namespace: - -* ``__managed__``, ``__device__``, ``__shared__`` and ``__constant__`` variables -* ``__global__`` function and function templates -* variables with surface or texture type diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in index ba90d82efd..823543a0aa 100644 --- a/docs/sphinx/_toc.yml.in +++ b/docs/sphinx/_toc.yml.in @@ -55,6 +55,8 @@ subtrees: - file: how-to/hip_runtime_api/multi_device - file: how-to/hip_runtime_api/opengl_interop - file: how-to/hip_runtime_api/external_interop + - file: how-to/hip_cpp_language_extensions + - file: how-to/kernel_language_cpp_support - file: how-to/hip_porting_guide - file: how-to/hip_porting_driver_api - file: how-to/hip_rtc @@ -106,10 +108,6 @@ subtrees: - file: doxygen/html/annotated - file: doxygen/html/files - file: reference/virtual_rocr - - file: reference/cpp_language_extensions - title: C++ language extensions - - file: reference/cpp_language_support - title: C++ language support - file: reference/math_api - file: reference/env_variables - file: reference/terms diff --git a/docs/what_is_hip.rst b/docs/what_is_hip.rst index d5af8d5937..0e4d0560d2 100644 --- a/docs/what_is_hip.rst +++ b/docs/what_is_hip.rst @@ -95,5 +95,5 @@ language features that are designed to target accelerators, such as: * Math functions that resemble those in ``math.h``, which is included with standard C++ compilers * Built-in functions for accessing specific GPU hardware capabilities -For further details, check :doc:`C++ language extensions ` -and :doc:`C++ language support `. 
+For further details, check :doc:`HIP C++ language extensions ` +and :doc:`Kernel language C++ support `. From 1a3a1e32014eeec084a81a8435a3a8d8be3f035d Mon Sep 17 00:00:00 2001 From: Matthias Knorr Date: Wed, 8 Jan 2025 14:40:42 +0100 Subject: [PATCH 11/46] Docs: Fix device memory refs --- .../hip_runtime_api/memory_management/device_memory.rst | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/docs/how-to/hip_runtime_api/memory_management/device_memory.rst b/docs/how-to/hip_runtime_api/memory_management/device_memory.rst index fba559f7ad..13fba386bb 100644 --- a/docs/how-to/hip_runtime_api/memory_management/device_memory.rst +++ b/docs/how-to/hip_runtime_api/memory_management/device_memory.rst @@ -35,7 +35,7 @@ with the ``__device__`` qualifier, this memory space is also referred to as device memory. Without explicitly copying it, it can only be accessed by the threads within a -kernel operating on the device, however :ref:`unified memory` can be used to +kernel operating on the device, however :ref:`unified_memory` can be used to let the runtime manage this, if desired. Allocating global memory @@ -101,7 +101,7 @@ better option, but is also limited in size. Copying between device and host -------------------------------------------------------------------------------- -When not using :ref:`unified memory`, memory has to be explicitly copied between +When not using :ref:`unified_memory`, memory has to be explicitly copied between the device and the host, using the HIP runtime API. .. code-block:: cpp @@ -214,6 +214,8 @@ has to be launched in order to see the updated surface. The corresponding functions are listed in the :ref:`Surface object API reference `. +.. _shared_memory: + Shared memory ================================================================================ From 8082ec161f4a81c41f6ddd64baccc754251e4e61 Mon Sep 17 00:00:00 2001 From: Matthias Knorr Date: Thu, 9 Jan 2025 17:02:35 +0100 Subject: [PATCH 12/46] Docs: Update example references The ROCm-examples replace the HIP-examples repository and the samples in hip-tests. --- README.md | 14 ++------------ docs/index.md | 3 +-- docs/install/build.rst | 2 +- docs/sphinx/_toc.yml.in | 4 +--- docs/understand/compilers.rst | 5 +++-- 5 files changed, 8 insertions(+), 20 deletions(-) diff --git a/README.md b/README.md index 2493f09990..32031be961 100644 --- a/README.md +++ b/README.md @@ -127,19 +127,9 @@ provides source portability to either platform. HIP provides the _hipcc_ compi ## Examples and Getting Started -* A sample and [blog](https://github.com/ROCm/hip-tests/tree/develop/samples/0_Intro/square) that uses any of [HIPIFY](https://github.com/ROCm/HIPIFY/blob/amd-staging/README.md) tools to convert a simple app from CUDA to HIP: +* The [ROCm-examples](https://github.com/ROCm/rocm-examples) repository includes many examples with explanations that help users getting started with HIP, as well as providing advanced examples for HIP and its libraries. - ```shell - cd samples/01_Intro/square - # follow README / blog steps to hipify the application. - ``` - -* Guide to [Porting a New Cuda Project](https://rocm.docs.amd.com/projects/HIP/en/latest/how-to/hip_porting_guide.html#porting-a-new-cuda-project) - -## More Examples - -The GitHub repository [HIP-Examples](https://github.com/ROCm/HIP-Examples) contains a hipified version of benchmark suite. 
-Besides, there are more samples in Github [HIP samples](https://github.com/ROCm/hip-tests/tree/develop/samples), showing how to program with different features, build and run. +* HIP's documentation includes a guide for [Porting a New Cuda Project](https://rocm.docs.amd.com/projects/HIP/en/latest/how-to/hip_porting_guide.html#porting-a-new-cuda-project). ## Tour of the HIP Directories diff --git a/docs/index.md b/docs/index.md index fdee24b518..eb2eb1e6da 100644 --- a/docs/index.md +++ b/docs/index.md @@ -55,8 +55,7 @@ The HIP documentation is organized into the following categories: :::{grid-item-card} Tutorial * [HIP basic examples](https://github.com/ROCm/rocm-examples/tree/develop/HIP-Basic) -* [HIP examples](https://github.com/ROCm/HIP-Examples) -* [HIP test samples](https://github.com/ROCm/hip-tests/tree/develop/samples) +* [HIP examples](https://github.com/ROCm/rocm-examples) * [SAXPY tutorial](./tutorial/saxpy) * [Reduction tutorial](./tutorial/reduction) * [Cooperative groups tutorial](./tutorial/cooperative_groups_tutorial) diff --git a/docs/install/build.rst b/docs/install/build.rst index 4f8f8bf505..64deba241b 100644 --- a/docs/install/build.rst +++ b/docs/install/build.rst @@ -238,4 +238,4 @@ Run HIP ================================================= After installation and building HIP, you can compile your application and run. -A simple example is `square sample `_. +Simple examples can be found in the `ROCm-examples repository `_. diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in index 823543a0aa..a0baf6299e 100644 --- a/docs/sphinx/_toc.yml.in +++ b/docs/sphinx/_toc.yml.in @@ -122,10 +122,8 @@ subtrees: entries: - url: https://github.com/ROCm/rocm-examples/tree/develop/HIP-Basic title: HIP basic examples - - url: https://github.com/ROCm/HIP-Examples + - url: https://github.com/ROCm/rocm-examples title: HIP examples - - url: https://github.com/ROCm/hip-tests/tree/develop/samples - title: HIP test samples - file: tutorial/saxpy - file: tutorial/reduction - file: tutorial/cooperative_groups_tutorial diff --git a/docs/understand/compilers.rst b/docs/understand/compilers.rst index 12273f800d..53512e76e5 100644 --- a/docs/understand/compilers.rst +++ b/docs/understand/compilers.rst @@ -96,5 +96,6 @@ Static libraries ar rcsD libHipDevice.a hipDevice.o hipcc libHipDevice.a test.cpp -fgpu-rdc -o test.out -For more information, see `HIP samples host functions `_ -and `device functions `_. +A full example for this can be found in the ROCm-examples, see the examples for +`static host libraries `_ +or `static device libraries `_. 
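
A minimal sketch of what the two translation units in the commands above might
contain is shown below. The function and kernel names are illustrative, and
the compile line for ``hipDevice.o`` is an assumption based on relocatable
device code (``-fgpu-rdc``) being required in every step.

.. code-block:: cpp

    // hipDevice.cpp: device code that goes into the static library.
    // Assumed build steps:
    //   hipcc -c -fgpu-rdc hipDevice.cpp -o hipDevice.o
    //   ar rcsD libHipDevice.a hipDevice.o
    #include <hip/hip_runtime.h>

    __device__ float scale_by_two(float x) { return 2.0f * x; }

    // test.cpp: links against the static library with relocatable device code:
    //   hipcc libHipDevice.a test.cpp -fgpu-rdc -o test.out
    #include <hip/hip_runtime.h>
    #include <cstdio>

    __device__ float scale_by_two(float x); // defined in libHipDevice.a

    __global__ void apply(float* data, size_t n)
    {
        const size_t i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            data[i] = scale_by_two(data[i]);
        }
    }

    int main()
    {
        const size_t n = 256;
        float* d = nullptr;
        hipMalloc(&d, n * sizeof(float));
        hipMemset(d, 0, n * sizeof(float));

        apply<<<1, 256>>>(d, n);
        hipDeviceSynchronize();

        printf("done\n");
        hipFree(d);
        return 0;
    }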
From 099d8c3300c2605891dde92753e2abf50b5c8d27 Mon Sep 17 00:00:00 2001 From: Istvan Kiss Date: Tue, 14 Jan 2025 08:45:09 +0100 Subject: [PATCH 13/46] Add asynchronous execution documentation page --- .wordlist.txt | 3 +- .../sequential_async_event.drawio | 274 +++++++++ .../asynchronous/sequential_async_event.svg | 2 + docs/how-to/hip_runtime_api.rst | 1 + docs/how-to/hip_runtime_api/asynchronous.rst | 534 ++++++++++++++++++ docs/how-to/performance_guidelines.rst | 8 +- docs/sphinx/_toc.yml.in | 5 +- 7 files changed, 822 insertions(+), 5 deletions(-) create mode 100644 docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.drawio create mode 100644 docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.svg create mode 100644 docs/how-to/hip_runtime_api/asynchronous.rst diff --git a/.wordlist.txt b/.wordlist.txt index a88f752b84..3a390c8309 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -7,7 +7,7 @@ APUs AQL AXPY asm -Asynchrony +asynchrony backtrace Bitcode bitcode @@ -124,6 +124,7 @@ overindexing oversubscription overutilized parallelizable +parallelized pixelated pragmas preallocated diff --git a/docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.drawio b/docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.drawio new file mode 100644 index 0000000000..2ea9376cf3 --- /dev/null +++ b/docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.drawio @@ -0,0 +1,274 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.svg b/docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.svg new file mode 100644 index 0000000000..fe52799858 --- /dev/null +++ b/docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.svg @@ -0,0 +1,2 @@ +
[Figure: sequential_async_event.svg. Three timelines compare sequential calls, asynchronous calls, and asynchronous calls with hipEvent. Each timeline shows H2D data1/data2, kernel data1/data2, and D2H data1/data2 operations on the default stream, stream1, and stream2, with event, eventA, and eventB markers along the time axis.]
\ No newline at end of file diff --git a/docs/how-to/hip_runtime_api.rst b/docs/how-to/hip_runtime_api.rst index 65c89a60ed..f76851e078 100644 --- a/docs/how-to/hip_runtime_api.rst +++ b/docs/how-to/hip_runtime_api.rst @@ -40,6 +40,7 @@ Here are the various HIP Runtime API high level functions: * :doc:`./hip_runtime_api/initialization` * :doc:`./hip_runtime_api/memory_management` * :doc:`./hip_runtime_api/error_handling` +* :doc:`./hip_runtime_api/asynchronous` * :doc:`./hip_runtime_api/cooperative_groups` * :doc:`./hip_runtime_api/hipgraph` * :doc:`./hip_runtime_api/call_stack` diff --git a/docs/how-to/hip_runtime_api/asynchronous.rst b/docs/how-to/hip_runtime_api/asynchronous.rst new file mode 100644 index 0000000000..81769da48e --- /dev/null +++ b/docs/how-to/hip_runtime_api/asynchronous.rst @@ -0,0 +1,534 @@ +.. meta:: + :description: This topic describes asynchronous concurrent execution in HIP + :keywords: AMD, ROCm, HIP, asynchronous concurrent execution, asynchronous, async, concurrent, concurrency + +.. _asynchronous_how-to: + +******************************************************************************* +Asynchronous concurrent execution +******************************************************************************* + +Asynchronous concurrent execution is important for efficient parallelism and +resource utilization, with techniques such as overlapping computation and data +transfer, managing concurrent kernel execution with streams on single or +multiple devices, or using HIP graphs. + +Streams and concurrent execution +=============================================================================== + +All asynchronous APIs, such as kernel execution, data movement and potentially +data allocation/freeing all happen in the context of device streams. + +Streams are FIFO buffers of commands to execute in order on a given device. +Commands which enqueue tasks on a stream all return promptly and the task is +executed asynchronously. Multiple streams can point to the same device and +those streams might be fed from multiple concurrent host-side threads. Multiple +streams tied to the same device are not guaranteed to execute their commands in +order. + +Managing streams +------------------------------------------------------------------------------- + +Streams enable the overlap of computation and data transfer, ensuring +continuous GPU activity. By enabling tasks to run concurrently within the same +GPU or across different GPUs, streams improve performance and throughput in +high-performance computing (HPC). + +To create a stream, the following functions are used, each defining a handle +to the newly created stream: + +- :cpp:func:`hipStreamCreate`: Creates a stream with default settings. +- :cpp:func:`hipStreamCreateWithFlags`: Creates a stream, with specific + flags, listed below, enabling more control over stream behavior: + + - ``hipStreamDefault``: creates a default stream suitable for most + operations. The default stream is a blocking operation. + - ``hipStreamNonBlocking``: creates a non-blocking stream, allowing + concurrent execution of operations. It ensures that tasks can run + simultaneously without waiting for each other to complete, thus improving + overall performance. + +- :cpp:func:`hipStreamCreateWithPriority`: Allows creating a stream with a + specified priority, enabling prioritization of certain tasks. 
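
A brief sketch of these creation functions is shown below;
:cpp:func:`hipDeviceGetStreamPriorityRange` is used to query the valid
priority values. The stream names are arbitrary and error checking is omitted
for brevity.

.. code-block:: cpp

    #include <hip/hip_runtime.h>
    #include <iostream>

    int main()
    {
        // Stream with default settings.
        hipStream_t defaultStream;
        hipStreamCreate(&defaultStream);

        // Non-blocking stream, which does not synchronize with the default stream.
        hipStream_t asyncStream;
        hipStreamCreateWithFlags(&asyncStream, hipStreamNonBlocking);

        // Query the valid priority range and create a stream with the highest priority.
        int leastPriority = 0, greatestPriority = 0;
        hipDeviceGetStreamPriorityRange(&leastPriority, &greatestPriority);
        hipStream_t priorityStream;
        hipStreamCreateWithPriority(&priorityStream, hipStreamDefault, greatestPriority);

        std::cout << "stream priorities range from " << leastPriority
                  << " (least) to " << greatestPriority << " (greatest)" << std::endl;

        hipStreamDestroy(defaultStream);
        hipStreamDestroy(asyncStream);
        hipStreamDestroy(priorityStream);
        return 0;
    }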
+ +The :cpp:func:`hipStreamSynchronize` function is used to block the calling host +thread until all previously submitted tasks in a specified HIP stream have +completed. It ensures that all operations in the given stream, such as kernel +executions or memory transfers, are finished before the host thread proceeds. + +.. note:: + + If the :cpp:func:`hipStreamSynchronize` function input stream is 0 (or the + default stream), it waits for all operations in the default stream to + complete. + +Concurrent execution between host and device +------------------------------------------------------------------------------- + +Concurrent execution between the host (CPU) and device (GPU) allows the CPU to +perform other tasks while the GPU is executing kernels. Kernels are launched +asynchronously using ``hipLaunchKernelGGL`` or using the triple chevron with a stream, +enabling the CPU to continue executing other code while the GPU processes the +kernel. Similarly, memory operations like :cpp:func:`hipMemcpyAsync` are +performed asynchronously, allowing data transfers between the host and device +without blocking the CPU. + +Concurrent kernel execution +------------------------------------------------------------------------------- + +Concurrent execution of multiple kernels on the GPU allows different kernels to +run simultaneously to maximize GPU resource usage. Managing dependencies +between kernels is crucial for ensuring correct execution order. This can be +achieved using :cpp:func:`hipStreamWaitEvent`, which allows a kernel to wait +for a specific event before starting execution. + +Independent kernels can only run concurrently if there are enough registers +and shared memory for the kernels. To enable concurrent kernel executions, the +developer may have to reduce the block size of the kernels. The kernel runtimes +can be misleading for concurrent kernel runs, that is why during optimization +it is a good practice to check the trace files, to see if one kernel is blocking +another kernel, while they are running in parallel. For more information about +the application tracing, check::doc:`rocprofiler:/how-to/using-rocprof`. + +When running kernels in parallel, the execution time can increase due to +contention for shared resources. This is because multiple kernels may attempt +to access the same GPU resources simultaneously, leading to delays. + +Multiple kernels executing concurrently is only beneficial under specific conditions. It +is most effective when the kernels do not fully utilize the GPU's resources. In +such cases, overlapping kernel execution can improve overall throughput and +efficiency by keeping the GPU busy without exceeding its capacity. + +Overlap of data transfer and kernel execution +=============================================================================== + +One of the primary benefits of asynchronous operations and multiple streams is +the ability to overlap data transfer with kernel execution, leading to better +resource utilization and improved performance. + +Asynchronous execution is particularly advantageous in iterative processes. For +instance, if a kernel is initiated, it can be efficient to prepare the input +data simultaneously, provided that this preparation does not depend on the +kernel's execution. Such iterative data transfer and kernel execution overlap +can be find in the :ref:`async_example`. 
+ +Querying device capabilities +------------------------------------------------------------------------------- + +Some AMD HIP-enabled devices can perform asynchronous memory copy operations to +or from the GPU concurrently with kernel execution. Applications can query this +capability by checking the ``asyncEngineCount`` device property. Devices with +an ``asyncEngineCount`` greater than zero support concurrent data transfers. +Additionally, if host memory is involved in the copy, it should be page-locked +to ensure optimal performance. Page-locking (or pinning) host memory increases +the bandwidth between the host and the device, reducing the overhead associated +with data transfers. For more details, visit :ref:`host_memory` page. + +Asynchronous memory operations +------------------------------------------------------------------------------- + +Asynchronous memory operations do not block the host while copying data and, +when used with multiple streams, allow data to be transferred between the host +and device while kernels are executed on the same GPU. Using operations like +:cpp:func:`hipMemcpyAsync` or :cpp:func:`hipMemcpyPeerAsync`, developers can +initiate data transfers without waiting for the previous operation to complete. +This overlap of computation and data transfer ensures that the GPU is not idle +while waiting for data. :cpp:func:`hipMemcpyPeerAsync` enables data transfers +between different GPUs, facilitating multi-GPU communication. + +:ref:`async_example`` include launching kernels in one stream while performing +data transfers in another. This technique is especially useful in applications +with large data sets that need to be processed quickly. + +Concurrent data transfers with intra-device copies +------------------------------------------------------------------------------- + +Devices that support the ``concurrentKernels`` property can perform +intra-device copies concurrently with kernel execution. Additionally, devices +that support the ``asyncEngineCount`` property can perform data transfers to +or from the GPU simultaneously with kernel execution. Intra-device copies can +be initiated using standard memory copy functions with destination and source +addresses residing on the same device. + +Synchronization, event management and synchronous calls +=============================================================================== + +Synchronization and event management are important for coordinating tasks and +ensuring correct execution order, and synchronous calls are necessary for +maintaining data consistency. + +Synchronous calls +------------------------------------------------------------------------------- + +Synchronous calls ensure task completion before moving to the next operation. +For example, :cpp:func:`hipMemcpy` for data transfers waits for completion +before returning control to the host. Similarly, synchronous kernel launches +are used when immediate completion is required. When a synchronous function is +called, control is not returned to the host thread before the device has +completed the requested task. The behavior of the host thread—whether to yield, +block, or spin—can be specified using :cpp:func:`hipSetDeviceFlags` with +appropriate flags. Understanding when to use synchronous calls is important for +managing execution flow and avoiding data races. 
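
The copy and execution overlap capabilities referred to in the preceding
sections can be queried at runtime, for example as in the following sketch;
device 0 is assumed and error checking is omitted.

.. code-block:: cpp

    #include <hip/hip_runtime.h>
    #include <iostream>

    int main()
    {
        int deviceId = 0;
        hipDeviceProp_t props;
        hipGetDeviceProperties(&props, deviceId);

        // Greater than zero: the device can overlap kernel execution with
        // host-device memory transfers.
        std::cout << "asyncEngineCount:  " << props.asyncEngineCount << std::endl;

        // Non-zero: several kernels can execute concurrently on the device.
        std::cout << "concurrentKernels: " << props.concurrentKernels << std::endl;

        return 0;
    }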
+ +Events for synchronization +------------------------------------------------------------------------------- + +By creating an event with :cpp:func:`hipEventCreate` and recording it with +:cpp:func:`hipEventRecord`, developers can synchronize operations across +streams, ensuring correct task execution order. :cpp:func:`hipEventSynchronize` +lets the application wait for an event to complete before proceeding with the next +operation. + +Programmatic dependent launch and synchronization +------------------------------------------------------------------------------- + +While CUDA supports programmatic dependent launches allowing a secondary kernel +to start before the primary kernel finishes, HIP achieves similar functionality +using streams and events. By employing :cpp:func:`hipStreamWaitEvent`, it is +possible to manage the execution order without explicit hardware support. This +mechanism allows a secondary kernel to launch as soon as the necessary +conditions are met, even if the primary kernel is still running. + +.. _async_example: + +Example +------------------------------------------------------------------------------- + +The examples shows the difference between sequential, asynchronous calls and +asynchronous calls with ``hipEvents``. + +.. figure:: ../../data/how-to/hip_runtime_api/asynchronous/sequential_async_event.svg + :alt: Compare the different calls + :align: center + +The example codes + +.. tab-set:: + + .. tab-item:: Sequential + + .. code-block:: cpp + + #include + #include + #include + + #define HIP_CHECK(expression) \ + { \ + const hipError_t status = expression; \ + if(status != hipSuccess){ \ + std::cerr << "HIP error " \ + << status << ": " \ + << hipGetErrorString(status) \ + << " at " << __FILE__ << ":" \ + << __LINE__ << std::endl; \ + } \ + } + + // GPU Kernels + __global__ void kernelA(double* arrayA, size_t size){ + const size_t x = threadIdx.x + blockDim.x * blockIdx.x; + if(x < size){arrayA[x] += 1.0;} + }; + __global__ void kernelB(double* arrayA, double* arrayB, size_t size){ + const size_t x = threadIdx.x + blockDim.x * blockIdx.x; + if(x < size){arrayB[x] += arrayA[x] + 3.0;} + }; + + int main() + { + constexpr int numOfBlocks = 1 << 20; + constexpr int threadsPerBlock = 1024; + constexpr int numberOfIterations = 50; + // The array size smaller to avoid the relatively short kernel launch compared to memory copies + constexpr size_t arraySize = 1U << 25; + double *d_dataA; + double *d_dataB; + + double initValueA = 0.0; + double initValueB = 2.0; + + std::vector vectorA(arraySize, initValueA); + std::vector vectorB(arraySize, initValueB); + // Allocate device memory + HIP_CHECK(hipMalloc(&d_dataA, arraySize * sizeof(*d_dataA))); + HIP_CHECK(hipMalloc(&d_dataB, arraySize * sizeof(*d_dataB))); + for(int iteration = 0; iteration < numberOfIterations; iteration++) + { + // Host to Device copies + HIP_CHECK(hipMemcpy(d_dataA, vectorA.data(), arraySize * sizeof(*d_dataA), hipMemcpyHostToDevice)); + HIP_CHECK(hipMemcpy(d_dataB, vectorB.data(), arraySize * sizeof(*d_dataB), hipMemcpyHostToDevice)); + // Launch the GPU kernels + hipLaunchKernelGGL(kernelA, dim3(numOfBlocks), dim3(threadsPerBlock), 0, 0, d_dataA, arraySize); + hipLaunchKernelGGL(kernelB, dim3(numOfBlocks), dim3(threadsPerBlock), 0, 0, d_dataA, d_dataB, arraySize); + // Device to Host copies + HIP_CHECK(hipMemcpy(vectorA.data(), d_dataA, arraySize * sizeof(*vectorA.data()), hipMemcpyDeviceToHost)); + HIP_CHECK(hipMemcpy(vectorB.data(), d_dataB, arraySize * sizeof(*vectorB.data()), 
hipMemcpyDeviceToHost)); + } + // Wait for all operations to complete + HIP_CHECK(hipDeviceSynchronize()); + + // Verify results + const double expectedA = (double)numberOfIterations; + const double expectedB = + initValueB + (3.0 * numberOfIterations) + + (expectedA * (expectedA + 1.0)) / 2.0; + bool passed = true; + for(size_t i = 0; i < arraySize; ++i){ + if(vectorA[i] != expectedA){ + passed = false; + std::cerr << "Validation failed! Expected " << expectedA << " got " << vectorA[i] << " at index: " << i << std::endl; + break; + } + if(vectorB[i] != expectedB){ + passed = false; + std::cerr << "Validation failed! Expected " << expectedB << " got " << vectorB[i] << " at index: " << i << std::endl; + break; + } + } + + if(passed){ + std::cout << "Sequential execution completed successfully." << std::endl; + }else{ + std::cerr << "Sequential execution failed." << std::endl; + } + + // Cleanup + HIP_CHECK(hipFree(d_dataA)); + HIP_CHECK(hipFree(d_dataB)); + + return 0; + } + + .. tab-item:: Asynchronous + + .. code-block:: cpp + + #include + #include + #include + + #define HIP_CHECK(expression) \ + { \ + const hipError_t status = expression; \ + if(status != hipSuccess){ \ + std::cerr << "HIP error " \ + << status << ": " \ + << hipGetErrorString(status) \ + << " at " << __FILE__ << ":" \ + << __LINE__ << std::endl; \ + } \ + } + + // GPU Kernels + __global__ void kernelA(double* arrayA, size_t size){ + const size_t x = threadIdx.x + blockDim.x * blockIdx.x; + if(x < size){arrayA[x] += 1.0;} + }; + __global__ void kernelB(double* arrayA, double* arrayB, size_t size){ + const size_t x = threadIdx.x + blockDim.x * blockIdx.x; + if(x < size){arrayB[x] += arrayA[x] + 3.0;} + }; + + int main() + { + constexpr int numOfBlocks = 1 << 20; + constexpr int threadsPerBlock = 1024; + constexpr int numberOfIterations = 50; + // The array size smaller to avoid the relatively short kernel launch compared to memory copies + constexpr size_t arraySize = 1U << 25; + double *d_dataA; + double *d_dataB; + + double initValueA = 0.0; + double initValueB = 2.0; + + std::vector vectorA(arraySize, initValueA); + std::vector vectorB(arraySize, initValueB); + // Allocate device memory + HIP_CHECK(hipMalloc(&d_dataA, arraySize * sizeof(*d_dataA))); + HIP_CHECK(hipMalloc(&d_dataB, arraySize * sizeof(*d_dataB))); + // Create streams + hipStream_t streamA, streamB; + HIP_CHECK(hipStreamCreate(&streamA)); + HIP_CHECK(hipStreamCreate(&streamB)); + for(unsigned int iteration = 0; iteration < numberOfIterations; iteration++) + { + // Stream 1: Host to Device 1 + HIP_CHECK(hipMemcpyAsync(d_dataA, vectorA.data(), arraySize * sizeof(*d_dataA), hipMemcpyHostToDevice, streamA)); + // Stream 2: Host to Device 2 + HIP_CHECK(hipMemcpyAsync(d_dataB, vectorB.data(), arraySize * sizeof(*d_dataB), hipMemcpyHostToDevice, streamB)); + // Stream 1: Kernel 1 + hipLaunchKernelGGL(kernelA, dim3(numOfBlocks), dim3(threadsPerBlock), 0, streamA, d_dataA, arraySize); + // Wait for streamA finish + HIP_CHECK(hipStreamSynchronize(streamA)); + // Stream 2: Kernel 2 + hipLaunchKernelGGL(kernelB, dim3(numOfBlocks), dim3(threadsPerBlock), 0, streamB, d_dataA, d_dataB, arraySize); + // Stream 1: Device to Host 2 (after Kernel 1) + HIP_CHECK(hipMemcpyAsync(vectorA.data(), d_dataA, arraySize * sizeof(*vectorA.data()), hipMemcpyDeviceToHost, streamA)); + // Stream 2: Device to Host 2 (after Kernel 2) + HIP_CHECK(hipMemcpyAsync(vectorB.data(), d_dataB, arraySize * sizeof(*vectorB.data()), hipMemcpyDeviceToHost, streamB)); + } + // Wait for all operations in 
both streams to complete + HIP_CHECK(hipStreamSynchronize(streamA)); + HIP_CHECK(hipStreamSynchronize(streamB)); + // Verify results + double expectedA = (double)numberOfIterations; + double expectedB = + initValueB + (3.0 * numberOfIterations) + + (expectedA * (expectedA + 1.0)) / 2.0; + bool passed = true; + for(size_t i = 0; i < arraySize; ++i){ + if(vectorA[i] != expectedA){ + passed = false; + std::cerr << "Validation failed! Expected " << expectedA << " got " << vectorA[i] << " at index: " << i << std::endl; + break; + } + if(vectorB[i] != expectedB){ + passed = false; + std::cerr << "Validation failed! Expected " << expectedB << " got " << vectorB[i] << " at index: " << i << std::endl; + break; + } + } + if(passed){ + std::cout << "Asynchronous execution completed successfully." << std::endl; + }else{ + std::cerr << "Asynchronous execution failed." << std::endl; + } + + // Cleanup + HIP_CHECK(hipStreamDestroy(streamA)); + HIP_CHECK(hipStreamDestroy(streamB)); + HIP_CHECK(hipFree(d_dataA)); + HIP_CHECK(hipFree(d_dataB)); + + return 0; + } + + .. tab-item:: hipStreamWaitEvent + + .. code-block:: cpp + + #include + #include + #include + + #define HIP_CHECK(expression) \ + { \ + const hipError_t status = expression; \ + if(status != hipSuccess){ \ + std::cerr << "HIP error " \ + << status << ": " \ + << hipGetErrorString(status) \ + << " at " << __FILE__ << ":" \ + << __LINE__ << std::endl; \ + } \ + } + + // GPU Kernels + __global__ void kernelA(double* arrayA, size_t size){ + const size_t x = threadIdx.x + blockDim.x * blockIdx.x; + if(x < size){arrayA[x] += 1.0;} + }; + __global__ void kernelB(double* arrayA, double* arrayB, size_t size){ + const size_t x = threadIdx.x + blockDim.x * blockIdx.x; + if(x < size){arrayB[x] += arrayA[x] + 3.0;} + }; + + int main() + { + constexpr int numOfBlocks = 1 << 20; + constexpr int threadsPerBlock = 1024; + constexpr int numberOfIterations = 50; + // The array size smaller to avoid the relatively short kernel launch compared to memory copies + constexpr size_t arraySize = 1U << 25; + double *d_dataA; + double *d_dataB; + double initValueA = 0.0; + double initValueB = 2.0; + + std::vector vectorA(arraySize, initValueA); + std::vector vectorB(arraySize, initValueB); + // Allocate device memory + HIP_CHECK(hipMalloc(&d_dataA, arraySize * sizeof(*d_dataA))); + HIP_CHECK(hipMalloc(&d_dataB, arraySize * sizeof(*d_dataB))); + // Create streams + hipStream_t streamA, streamB; + HIP_CHECK(hipStreamCreate(&streamA)); + HIP_CHECK(hipStreamCreate(&streamB)); + // Create events + hipEvent_t event, eventA, eventB; + HIP_CHECK(hipEventCreate(&event)); + HIP_CHECK(hipEventCreate(&eventA)); + HIP_CHECK(hipEventCreate(&eventB)); + for(unsigned int iteration = 0; iteration < numberOfIterations; iteration++) + { + // Stream 1: Host to Device 1 + HIP_CHECK(hipMemcpyAsync(d_dataA, vectorA.data(), arraySize * sizeof(*d_dataA), hipMemcpyHostToDevice, streamA)); + // Stream 2: Host to Device 2 + HIP_CHECK(hipMemcpyAsync(d_dataB, vectorB.data(), arraySize * sizeof(*d_dataB), hipMemcpyHostToDevice, streamB)); + // Stream 1: Kernel 1 + hipLaunchKernelGGL(kernelA, dim3(numOfBlocks), dim3(threadsPerBlock), 0, streamA, d_dataA, arraySize); + // Record event after the GPU kernel in Stream 1 + HIP_CHECK(hipEventRecord(event, streamA)); + // Stream 2: Wait for event before starting Kernel 2 + HIP_CHECK(hipStreamWaitEvent(streamB, event, 0)); + // Stream 2: Kernel 2 + hipLaunchKernelGGL(kernelB, dim3(numOfBlocks), dim3(threadsPerBlock), 0, streamB, d_dataA, d_dataB, arraySize); + 
// Stream 1: Device to Host 2 (after Kernel 1) + HIP_CHECK(hipMemcpyAsync(vectorA.data(), d_dataA, arraySize * sizeof(*vectorA.data()), hipMemcpyDeviceToHost, streamA)); + // Stream 2: Device to Host 2 (after Kernel 2) + HIP_CHECK(hipMemcpyAsync(vectorB.data(), d_dataB, arraySize * sizeof(*vectorB.data()), hipMemcpyDeviceToHost, streamB)); + // Wait for all operations in both streams to complete + HIP_CHECK(hipEventRecord(eventA, streamA)); + HIP_CHECK(hipEventRecord(eventB, streamB)); + HIP_CHECK(hipStreamWaitEvent(streamA, eventA, 0)); + HIP_CHECK(hipStreamWaitEvent(streamB, eventB, 0)); + } + // Verify results + double expectedA = (double)numberOfIterations; + double expectedB = + initValueB + (3.0 * numberOfIterations) + + (expectedA * (expectedA + 1.0)) / 2.0; + bool passed = true; + for(size_t i = 0; i < arraySize; ++i){ + if(vectorA[i] != expectedA){ + passed = false; + std::cerr << "Validation failed! Expected " << expectedA << " got " << vectorA[i] << std::endl; + break; + } + if(vectorB[i] != expectedB){ + passed = false; + std::cerr << "Validation failed! Expected " << expectedB << " got " << vectorB[i] << std::endl; + break; + } + } + if(passed){ + std::cout << "Asynchronous execution with events completed successfully." << std::endl; + }else{ + std::cerr << "Asynchronous execution with events failed." << std::endl; + } + + // Cleanup + HIP_CHECK(hipEventDestroy(event)); + HIP_CHECK(hipEventDestroy(eventA)); + HIP_CHECK(hipEventDestroy(eventB)); + HIP_CHECK(hipStreamDestroy(streamA)); + HIP_CHECK(hipStreamDestroy(streamB)); + HIP_CHECK(hipFree(d_dataA)); + HIP_CHECK(hipFree(d_dataB)); + + return 0; + } + +HIP Graphs +=============================================================================== + +HIP graphs offer an efficient alternative to the standard method of launching +GPU tasks via streams. Comprising nodes for operations and edges for +dependencies, HIP graphs reduce kernel launch overhead and provide a high-level +abstraction for managing dependencies and synchronization. By representing +sequences of kernels and memory operations as a single graph, they simplify +complex workflows and enhance performance, particularly for applications with +intricate dependencies and multiple execution stages. +For more details, see the :ref:`how_to_HIP_graph` documentation. diff --git a/docs/how-to/performance_guidelines.rst b/docs/how-to/performance_guidelines.rst index d71c646657..33dbbb4af4 100644 --- a/docs/how-to/performance_guidelines.rst +++ b/docs/how-to/performance_guidelines.rst @@ -3,6 +3,8 @@ developers optimize the performance of HIP-capable GPU architectures. :keywords: AMD, ROCm, HIP, CUDA, performance, guidelines +.. _how_to_performance_guidelines: + ******************************************************************************* Performance guidelines ******************************************************************************* @@ -32,12 +34,14 @@ reveal and efficiently provide as much parallelism as possible. The parallelism can be performed at the application level, device level, and multiprocessor level. +.. _application_parallel_execution: + Application level -------------------------------------------------------------------------------- To enable parallel execution of the application across the host and devices, use -asynchronous calls and streams. Assign workloads based on efficiency: serial to -the host or parallel to the devices. +:ref:`asynchronous calls and streams `. Assign workloads +based on efficiency: serial to the host or parallel to the devices. 
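As a minimal sketch of this pattern, the following program enqueues the copies and the kernel on a non-default stream so that independent host work can proceed until the results are needed. The kernel, buffer size, and launch configuration are illustrative placeholders and error checking is omitted; for real copy/compute overlap the host buffer would typically be pinned, for example with ``hipHostMalloc``.

.. code-block:: cpp

   #include <hip/hip_runtime.h>
   #include <vector>

   __global__ void scaleKernel(float* data, size_t size)
   {
       const size_t i = threadIdx.x + static_cast<size_t>(blockIdx.x) * blockDim.x;
       if (i < size) { data[i] *= 2.0f; }
   }

   int main()
   {
       constexpr size_t size = 1 << 20;
       std::vector<float> hostData(size, 1.0f);

       float* deviceData = nullptr;
       hipMalloc(&deviceData, size * sizeof(float));

       hipStream_t stream;
       hipStreamCreate(&stream);

       // Enqueue the copies and the kernel without blocking the host thread.
       hipMemcpyAsync(deviceData, hostData.data(), size * sizeof(float), hipMemcpyHostToDevice, stream);
       scaleKernel<<<(size + 255) / 256, 256, 0, stream>>>(deviceData, size);
       hipMemcpyAsync(hostData.data(), deviceData, size * sizeof(float), hipMemcpyDeviceToHost, stream);

       // Independent serial host work can run here, concurrently with the device.

       hipStreamSynchronize(stream); // block only when the device results are needed

       hipStreamDestroy(stream);
       hipFree(deviceData);
       return 0;
   }

Because all three operations are issued to the same stream, they still execute in order with respect to each other; only the host is decoupled from the device work.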
For parallel workloads, when threads belonging to the same block need to synchronize to share data, use :cpp:func:`__syncthreads()` (see: diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in index a0baf6299e..04e1ce18a6 100644 --- a/docs/sphinx/_toc.yml.in +++ b/docs/sphinx/_toc.yml.in @@ -49,9 +49,10 @@ subtrees: - file: how-to/hip_runtime_api/memory_management/virtual_memory - file: how-to/hip_runtime_api/memory_management/stream_ordered_allocator - file: how-to/hip_runtime_api/error_handling - - file: how-to/hip_runtime_api/cooperative_groups - - file: how-to/hip_runtime_api/hipgraph - file: how-to/hip_runtime_api/call_stack + - file: how-to/hip_runtime_api/asynchronous + - file: how-to/hip_runtime_api/hipgraph + - file: how-to/hip_runtime_api/cooperative_groups - file: how-to/hip_runtime_api/multi_device - file: how-to/hip_runtime_api/opengl_interop - file: how-to/hip_runtime_api/external_interop From b5efd024c1432019d7f8ecb01926c5e776f7f032 Mon Sep 17 00:00:00 2001 From: randyh62 <42045079+randyh62@users.noreply.github.com> Date: Tue, 14 Jan 2025 16:25:56 -0800 Subject: [PATCH 14/46] Update terms.md Add SEO metadata to the markdown file --- docs/reference/terms.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/docs/reference/terms.md b/docs/reference/terms.md index ea2b9d96ab..713bf6eb81 100644 --- a/docs/reference/terms.md +++ b/docs/reference/terms.md @@ -1,3 +1,9 @@ + + + + + + # Table comparing syntax for different compute APIs |Term|CUDA|HIP|OpenCL| From 4659a04ff25f05d13e9ab9c1dece596332dec035 Mon Sep 17 00:00:00 2001 From: randyh62 <42045079+randyh62@users.noreply.github.com> Date: Tue, 14 Jan 2025 16:36:36 -0800 Subject: [PATCH 15/46] Update hip_porting_guide.md add metadata to markdown file --- docs/how-to/hip_porting_guide.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/docs/how-to/hip_porting_guide.md b/docs/how-to/hip_porting_guide.md index adda988ee5..a6027d4801 100644 --- a/docs/how-to/hip_porting_guide.md +++ b/docs/how-to/hip_porting_guide.md @@ -1,3 +1,9 @@ + + + + + + # HIP porting guide In addition to providing a portable C++ programming environment for GPUs, HIP is designed to ease From 74d39a9e3fa2727a1c9446648a8c84d12f1722ee Mon Sep 17 00:00:00 2001 From: randyh62 <42045079+randyh62@users.noreply.github.com> Date: Tue, 14 Jan 2025 16:38:25 -0800 Subject: [PATCH 16/46] Update debugging_env.rst add metadata to RST file --- docs/how-to/debugging_env.rst | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/docs/how-to/debugging_env.rst b/docs/how-to/debugging_env.rst index 7b3204143d..b3544a967f 100644 --- a/docs/how-to/debugging_env.rst +++ b/docs/how-to/debugging_env.rst @@ -1,3 +1,7 @@ +.. meta:: + :description: Debug environment variables for HIP. + :keywords: AMD, ROCm, HIP, debugging, Environment variables, ROCgdb + .. list-table:: :header-rows: 1 :widths: 35,14,51 From 7ecca36c039928ea1aabe5a6b536a22dafe45655 Mon Sep 17 00:00:00 2001 From: randyh62 <42045079+randyh62@users.noreply.github.com> Date: Tue, 14 Jan 2025 16:40:50 -0800 Subject: [PATCH 17/46] Update hip_rtc.md add metadata to markdown file --- docs/how-to/hip_rtc.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/docs/how-to/hip_rtc.md b/docs/how-to/hip_rtc.md index b96c069cb2..0bf3a56570 100644 --- a/docs/how-to/hip_rtc.md +++ b/docs/how-to/hip_rtc.md @@ -1,3 +1,9 @@ + + + + + + # Programming for HIP runtime compiler (RTC) HIP lets you compile kernels at runtime with the `hiprtc*` APIs. 
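To make that flow concrete, the short sketch below creates a program from a source string, compiles it, and retrieves the generated code object, which can then be loaded with `hipModuleLoadData` and launched with `hipModuleLaunchKernel`. The kernel source and program name are placeholders and error handling is kept minimal; compile options such as `--gpu-architecture=...` can be passed to `hiprtcCompileProgram` when needed.

```cpp
// Minimal HIPRTC sketch: compile a kernel from a source string at runtime.
// The kernel source and program name are placeholders; error handling is minimal.
#include <hip/hiprtc.h>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    static const char* kernelSource = R"(
        extern "C" __global__ void add_one(float* data, int size)
        {
            int i = threadIdx.x + blockIdx.x * blockDim.x;
            if (i < size) { data[i] += 1.0f; }
        })";

    hiprtcProgram prog;
    hiprtcCreateProgram(&prog, kernelSource, "add_one_program", 0, nullptr, nullptr);

    hiprtcResult compileResult = hiprtcCompileProgram(prog, 0, nullptr);

    // The log is worth inspecting even when compilation succeeds.
    size_t logSize = 0;
    hiprtcGetProgramLogSize(prog, &logSize);
    if (logSize > 1) {
        std::string log(logSize, '\0');
        hiprtcGetProgramLog(prog, &log[0]);
        std::cout << log << std::endl;
    }

    if (compileResult == HIPRTC_SUCCESS) {
        size_t codeSize = 0;
        hiprtcGetCodeSize(prog, &codeSize);
        std::vector<char> code(codeSize);
        hiprtcGetCode(prog, code.data());
        std::cout << "Compiled code object of " << codeSize << " bytes." << std::endl;
    }

    hiprtcDestroyProgram(&prog);
    return 0;
}
```

Compiling at runtime in this way lets the kernel source or the compile options be chosen from information that is only available when the application runs, such as the architecture reported by the device.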
From 2c0346286cefd058e397f5b95f2b0573af82f8e8 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Wed, 22 Jan 2025 00:52:23 +0000 Subject: [PATCH 18/46] Bump rocm-docs-core[api_reference] from 1.13.0 to 1.14.1 in /docs/sphinx Bumps [rocm-docs-core[api_reference]](https://github.com/ROCm/rocm-docs-core) from 1.13.0 to 1.14.1. - [Release notes](https://github.com/ROCm/rocm-docs-core/releases) - [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md) - [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.13.0...v1.14.1) --- updated-dependencies: - dependency-name: rocm-docs-core[api_reference] dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] --- docs/sphinx/requirements.in | 2 +- docs/sphinx/requirements.txt | 136 +++++++++++++++++++++++++++++++++-- 2 files changed, 133 insertions(+), 5 deletions(-) diff --git a/docs/sphinx/requirements.in b/docs/sphinx/requirements.in index 0bd422bee5..0609244abd 100644 --- a/docs/sphinx/requirements.in +++ b/docs/sphinx/requirements.in @@ -1,2 +1,2 @@ -rocm-docs-core[api_reference]==1.13.0 +rocm-docs-core[api_reference]==1.14.1 sphinxcontrib.doxylink diff --git a/docs/sphinx/requirements.txt b/docs/sphinx/requirements.txt index bfbafa055a..29c613ece8 100644 --- a/docs/sphinx/requirements.txt +++ b/docs/sphinx/requirements.txt @@ -8,6 +8,13 @@ accessible-pygments==0.0.5 # via pydata-sphinx-theme alabaster==1.0.0 # via sphinx +asttokens==3.0.0 + # via stack-data +attrs==24.3.0 + # via + # jsonschema + # jupyter-cache + # referencing babel==2.16.0 # via # pydata-sphinx-theme @@ -28,11 +35,18 @@ click==8.1.7 # via # click-log # doxysphinx + # jupyter-cache # sphinx-external-toc click-log==0.4.0 # via doxysphinx +comm==0.2.2 + # via ipykernel cryptography==43.0.1 # via pyjwt +debugpy==1.8.12 + # via ipykernel +decorator==5.1.1 + # via ipython deprecated==1.2.14 # via pygithub docutils==0.21.2 @@ -43,20 +57,56 @@ docutils==0.21.2 # sphinx doxysphinx==3.3.10 # via rocm-docs-core +exceptiongroup==1.2.2 + # via ipython +executing==2.1.0 + # via stack-data fastjsonschema==2.20.0 - # via rocm-docs-core + # via + # nbformat + # rocm-docs-core gitdb==4.0.11 # via gitpython gitpython==3.1.43 # via rocm-docs-core +greenlet==3.1.1 + # via sqlalchemy idna==3.8 # via requests imagesize==1.4.1 # via sphinx +importlib-metadata==8.6.1 + # via + # jupyter-cache + # myst-nb +ipykernel==6.29.5 + # via myst-nb +ipython==8.31.0 + # via + # ipykernel + # myst-nb +jedi==0.19.2 + # via ipython jinja2==3.1.4 # via # myst-parser # sphinx +jsonschema==4.23.0 + # via nbformat +jsonschema-specifications==2024.10.1 + # via jsonschema +jupyter-cache==1.0.1 + # via myst-nb +jupyter-client==8.6.3 + # via + # ipykernel + # nbclient +jupyter-core==5.7.2 + # via + # ipykernel + # jupyter-client + # nbclient + # nbformat libsass==0.22.0 # via doxysphinx lxml==4.9.4 @@ -67,20 +117,52 @@ markdown-it-py==3.0.0 # myst-parser markupsafe==2.1.5 # via jinja2 +matplotlib-inline==0.1.7 + # via + # ipykernel + # ipython mdit-py-plugins==0.4.1 # via myst-parser mdurl==0.1.2 # via markdown-it-py mpire==2.10.2 # via doxysphinx -myst-parser==4.0.0 +myst-nb==1.1.2 # via rocm-docs-core +myst-parser==4.0.0 + # via myst-nb +nbclient==0.10.2 + # via + # jupyter-cache + # myst-nb +nbformat==5.10.4 + # via + # jupyter-cache + # myst-nb + # nbclient +nest-asyncio==1.6.0 + # via ipykernel numpy==1.26.4 # via doxysphinx packaging==24.1 # via + # ipykernel # pydata-sphinx-theme # 
sphinx +parso==0.8.4 + # via jedi +pexpect==4.9.0 + # via ipython +platformdirs==4.3.6 + # via jupyter-core +prompt-toolkit==3.0.50 + # via ipython +psutil==6.1.1 + # via ipykernel +ptyprocess==0.7.0 + # via pexpect +pure-eval==0.2.3 + # via stack-data pycparser==2.22 # via cffi pydata-sphinx-theme==0.15.4 @@ -92,6 +174,7 @@ pygithub==2.4.0 pygments==2.18.0 # via # accessible-pygments + # ipython # mpire # pydata-sphinx-theme # sphinx @@ -106,18 +189,34 @@ pyparsing==3.1.4 # doxysphinx # sphinxcontrib-doxylink python-dateutil==2.9.0.post0 - # via sphinxcontrib-doxylink + # via + # jupyter-client + # sphinxcontrib-doxylink pyyaml==6.0.2 # via + # jupyter-cache + # myst-nb # myst-parser # rocm-docs-core # sphinx-external-toc +pyzmq==26.2.0 + # via + # ipykernel + # jupyter-client +referencing==0.36.1 + # via + # jsonschema + # jsonschema-specifications requests==2.32.3 # via # pygithub # sphinx -rocm-docs-core[api-reference]==1.13.0 +rocm-docs-core[api-reference]==1.14.1 # via -r requirements.in +rpds-py==0.22.3 + # via + # jsonschema + # referencing six==1.16.0 # via python-dateutil smmap==5.0.1 @@ -129,6 +228,7 @@ soupsieve==2.6 sphinx==8.0.2 # via # breathe + # myst-nb # myst-parser # pydata-sphinx-theme # rocm-docs-core @@ -162,17 +262,45 @@ sphinxcontrib-qthelp==2.0.0 # via sphinx sphinxcontrib-serializinghtml==2.0.0 # via sphinx +sqlalchemy==2.0.37 + # via jupyter-cache +stack-data==0.6.3 + # via ipython +tabulate==0.9.0 + # via jupyter-cache tomli==2.0.1 # via sphinx +tornado==6.4.2 + # via + # ipykernel + # jupyter-client tqdm==4.66.5 # via mpire +traitlets==5.14.3 + # via + # comm + # ipykernel + # ipython + # jupyter-client + # jupyter-core + # matplotlib-inline + # nbclient + # nbformat typing-extensions==4.12.2 # via + # ipython + # myst-nb # pydata-sphinx-theme # pygithub + # referencing + # sqlalchemy urllib3==2.2.2 # via # pygithub # requests +wcwidth==0.2.13 + # via prompt-toolkit wrapt==1.16.0 # via deprecated +zipp==3.21.0 + # via importlib-metadata From 435d3577ce819d039cd7b98771d61bb88fc246cd Mon Sep 17 00:00:00 2001 From: Adel Johar Date: Mon, 20 Jan 2025 13:53:16 +0100 Subject: [PATCH 19/46] Docs: Add local spellcheck file for spellcheck workflow --- .gitignore | 1 + .spellcheck.local.yaml | 10 ++++++++++ .wordlist.txt | 1 + 3 files changed, 12 insertions(+) create mode 100644 .spellcheck.local.yaml diff --git a/.gitignore b/.gitignore index 6bdb3a4030..ffb0b5f8c0 100644 --- a/.gitignore +++ b/.gitignore @@ -1,5 +1,6 @@ .* !.gitignore +!.spellcheck.local.yaml *.o *.exe *.swp diff --git a/.spellcheck.local.yaml b/.spellcheck.local.yaml new file mode 100644 index 0000000000..a4adbe401f --- /dev/null +++ b/.spellcheck.local.yaml @@ -0,0 +1,10 @@ +matrix: +- name: Markdown + sources: + - ['!docs/doxygen/mainpage.md'] +- name: reST + sources: + - [] +- name: Cpp + sources: + - ['include/hip/*'] diff --git a/.wordlist.txt b/.wordlist.txt index 3a390c8309..2746b336b7 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -157,6 +157,7 @@ sinewave SOMA SPMV structs +struct's SYCL syntaxes texel From 862482486b441941168a0e71e05a0ce9ad7f4e83 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Thu, 30 Jan 2025 00:22:29 +0000 Subject: [PATCH 20/46] Bump rocm-docs-core[api_reference] from 1.14.1 to 1.15.0 in /docs/sphinx Bumps [rocm-docs-core[api_reference]](https://github.com/ROCm/rocm-docs-core) from 1.14.1 to 1.15.0. 
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases) - [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md) - [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.14.1...v1.15.0) --- updated-dependencies: - dependency-name: rocm-docs-core[api_reference] dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] --- docs/sphinx/requirements.in | 2 +- docs/sphinx/requirements.txt | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/sphinx/requirements.in b/docs/sphinx/requirements.in index 0609244abd..3a34f77e71 100644 --- a/docs/sphinx/requirements.in +++ b/docs/sphinx/requirements.in @@ -1,2 +1,2 @@ -rocm-docs-core[api_reference]==1.14.1 +rocm-docs-core[api_reference]==1.15.0 sphinxcontrib.doxylink diff --git a/docs/sphinx/requirements.txt b/docs/sphinx/requirements.txt index 29c613ece8..9b8da47ab3 100644 --- a/docs/sphinx/requirements.txt +++ b/docs/sphinx/requirements.txt @@ -211,7 +211,7 @@ requests==2.32.3 # via # pygithub # sphinx -rocm-docs-core[api-reference]==1.14.1 +rocm-docs-core[api-reference]==1.15.0 # via -r requirements.in rpds-py==0.22.3 # via From ef76024c66652a07e871cf66b2230420b87d7265 Mon Sep 17 00:00:00 2001 From: Istvan Kiss Date: Fri, 31 Jan 2025 08:16:50 +0100 Subject: [PATCH 21/46] Add virtual aliases support section --- .../memory_management/virtual_memory.rst | 307 +++++++++++++++--- include/hip/hip_runtime_api.h | 5 +- 2 files changed, 272 insertions(+), 40 deletions(-) diff --git a/docs/how-to/hip_runtime_api/memory_management/virtual_memory.rst b/docs/how-to/hip_runtime_api/memory_management/virtual_memory.rst index 597b54040f..6663c52b82 100644 --- a/docs/how-to/hip_runtime_api/memory_management/virtual_memory.rst +++ b/docs/how-to/hip_runtime_api/memory_management/virtual_memory.rst @@ -28,7 +28,7 @@ reduce memory usage and unnecessary ``memcpy`` calls. .. _memory_allocation_virtual_memory: Memory allocation -================================================================================ +================= Standard memory allocation uses the :cpp:func:`hipMalloc` function to allocate a block of memory on the device. However, when using virtual memory, this process @@ -37,10 +37,34 @@ is separated into multiple steps using the :cpp:func:`hipMemCreate`, :cpp:func:`hipMemSetAccess` functions. This guide explains what these functions do and how you can use them for virtual memory management. +.. _vmm_support: + +Virtual memory management support +--------------------------------- + +The first step is to check if the targeted device or GPU supports virtual memory management. +Use the :cpp:func:`hipDeviceGetAttribute` function to get the +``hipDeviceAttributeVirtualMemoryManagementSupported`` attribute for a specific GPU, as shown in the following example. + +.. code-block:: cpp + + int vmm = 0, currentDev = 0; + hipDeviceGetAttribute( + &vmm, hipDeviceAttributeVirtualMemoryManagementSupported, currentDev + ); + + if (vmm == 0) { + std::cout << "GPU " << currentDev << " doesn't support virtual memory management." << std::endl; + } else { + std::cout << "GPU " << currentDev << " support virtual memory management." << std::endl; + } + +.. 
_allocate_physical_memory: + Allocate physical memory --------------------------------------------------------------------------------- +------------------------ -The first step is to allocate the physical memory itself with the +The next step is to allocate the physical memory using the :cpp:func:`hipMemCreate` function. This function accepts the size of the buffer, an ``unsigned long long`` variable for the flags, and a :cpp:struct:`hipMemAllocationProp` variable. :cpp:struct:`hipMemAllocationProp` @@ -48,42 +72,54 @@ contains the properties of the memory to be allocated, such as where the memory is physically located and what kind of shareable handles are available. If the allocation is successful, the function returns a value of :cpp:enumerator:`hipSuccess`, with :cpp:type:`hipMemGenericAllocationHandle_t` -representing a valid physical memory allocation. The allocated memory size must -be aligned with the granularity appropriate for the properties of the -allocation. You can use the :cpp:func:`hipMemGetAllocationGranularity` function -to determine the correct granularity. +representing a valid physical memory allocation. + +The allocated memory must be aligned with the appropriate granularity. The +granularity value can be queried with :cpp:func:`hipMemGetAllocationGranularity`, +and its value depends on the target device hardware and the type of memory +allocation. If the allocation size is not aligned, meaning it is not cleanly +divisible by the minimum granularity value, :cpp:func:`hipMemCreate` will return +an out-of-memory error. .. code-block:: cpp size_t granularity = 0; hipMemGenericAllocationHandle_t allocHandle; hipMemAllocationProp prop = {}; - prop.type = HIP_MEM_ALLOCATION_TYPE_PINNED; - prop.location.type = HIP_MEM_LOCATION_TYPE_DEVICE; + // The pinned allocation type cannot be migrated from its current location + // while the application is actively using it. + prop.type = hipMemAllocationTypePinned; + // Set the location type to device, currently there are no other valid option. + prop.location.type = hipMemLocationTypeDevice; + // Set the device id, where the memory will be allocated. prop.location.id = currentDev; - hipMemGetAllocationGranularity(&granularity, &prop, HIP_MEM_ALLOC_GRANULARITY_MINIMUM); + hipMemGetAllocationGranularity(&granularity, &prop, hipMemAllocationGranularityMinimum); padded_size = ROUND_UP(size, granularity); hipMemCreate(&allocHandle, padded_size, &prop, 0); +.. _reserve_virtual_address: + Reserve virtual address range --------------------------------------------------------------------------------- +----------------------------- -After you have acquired an allocation of physical memory, you must map it before -you can use it. To do so, you need a virtual address to map it to. Mapping -means the physical memory allocation is available from the virtual address range -it is mapped to. To reserve a virtual memory range, use the -:cpp:func:`hipMemAddressReserve` function. The size of the virtual memory must -match the amount of physical memory previously allocated. You can then map the -physical memory allocation to the newly-acquired virtual memory address range -using the :cpp:func:`hipMemMap` function. +After you have acquired an allocation of physical memory, you must map it to a +virtual address before you can use it. Mapping means the physical memory +allocation is available from the virtual address range it is mapped to. To +reserve a virtual memory range, use the :cpp:func:`hipMemAddressReserve` +function. 
The size of the virtual memory must match the amount of physical +memory previously allocated. You can then map the physical memory allocation to +the newly-acquired virtual memory address range using the :cpp:func:`hipMemMap` +function. .. code-block:: cpp hipMemAddressReserve(&ptr, padded_size, 0, 0, 0); hipMemMap(ptr, padded_size, 0, allocHandle, 0); +.. _set_memory_access: + Set memory access --------------------------------------------------------------------------------- +----------------- Finally, use the :cpp:func:`hipMemSetAccess` function to enable memory access. It accepts the pointer to the virtual memory, the size, and a @@ -103,16 +139,39 @@ devices. .. code-block:: cpp hipMemAccessDesc accessDesc = {}; - accessDesc.location.type = HIP_MEM_LOCATION_TYPE_DEVICE; + accessDesc.location.type = hipMemLocationTypeDevice; accessDesc.location.id = currentDev; - accessDesc.flags = HIP_MEM_ACCESS_FLAGS_PROT_READWRITE; + accessDesc.flags = hipMemAccessFlagsProtReadwrite; hipMemSetAccess(ptr, padded_size, &accessDesc, 1); At this point the memory is allocated, mapped, and ready for use. You can read and write to it, just like you would a C style memory allocation. +.. _usage_virtual_memory: + +Dynamically increase allocation size +------------------------------------ + +To increase the amount of pre-allocated memory, use +:cpp:func:`hipMemAddressReserve`, which accepts the starting address, and the +size of the reservation in bytes. This allows you to have a continuous virtual +address space without worrying about the underlying physical allocation. + +.. code-block:: cpp + + hipMemAddressReserve(&new_ptr, (new_size - padded_size), 0, ptr + padded_size, 0); + hipMemMap(new_ptr, (new_size - padded_size), 0, newAllocHandle, 0); + hipMemSetAccess(new_ptr, (new_size - padded_size), &accessDesc, 1); + +The code sample above assumes that :cpp:func:`hipMemAddressReserve` was able to +reserve the memory address at the specified location. However, this isn't +guaranteed to be true, so you should validate that ``new_ptr`` points to a +specific virtual address before using it. + +.. _free_virtual_memory: + Free virtual memory --------------------------------------------------------------------------------- +------------------- To free the memory allocated in this manner, use the corresponding free functions. To unmap the memory, use :cpp:func:`hipMemUnmap`. To release the @@ -128,27 +187,197 @@ synchronizes the device. This causes worse resource usage and performance. hipMemRelease(allocHandle); hipMemAddressFree(ptr, size); -.. _usage_virtual_memory: +Example code +============ + +The virtual memory management example follows these steps: + +1. Check virtual memory management :ref:`support `: + The :cpp:func:`hipDeviceGetAttribute` function is used to check the virtual + memory management support of the GPU with ID 0. + +2. Physical memory :ref:`allocation `: Physical memory + is allocated using :cpp:func:`hipMemCreate` with pinned memory on the + device. + +3. Virtual memory :ref:`reservation `: Virtual address + range is reserved using :cpp:func:`hipMemAddressReserve`. + +4. Mapping virtual address to physical memory: The physical memory is mapped + to a virtual address (``virtualPointer``) using :cpp:func:`hipMemMap`. + +5. Memory :ref:`access permissions`: Permission is set for + pointer to allow read and write access using :cpp:func:`hipMemSetAccess`. + +6. Memory operation: Data is written to the memory via ``virtualPointer``. + +7. 
Launch kernels: The ``zeroAddr`` and ``fillAddr`` kernels are + launched using the virtual memory pointer. + +8. :ref:`Cleanup `: The mappings, physical memory, and + virtual address are released at the end to avoid memory leaks. + +.. code-block:: cpp -Memory usage + #include + #include + + #define ROUND_UP(SIZE,GRANULARITY) ((1 + SIZE / GRANULARITY) * GRANULARITY) + + #define HIP_CHECK(expression) \ + { \ + const hipError_t err = expression; \ + if(err != hipSuccess){ \ + std::cerr << "HIP error: " \ + << hipGetErrorString(err) \ + << " at " << __LINE__ << "\n"; \ + } \ + } + + __global__ void zeroAddr(int* pointer) { + *pointer = 0; + } + + __global__ void fillAddr(int* pointer) { + *pointer = 42; + } + + + int main() { + + int currentDev = 0; + + // Step 1: Check virtual memory management support on device 0 + int vmm = 0; + HIP_CHECK( + hipDeviceGetAttribute( + &vmm, hipDeviceAttributeVirtualMemoryManagementSupported, currentDev + ) + ); + + std::cout << "Virtual memory management support value: " << vmm << std::endl; + + if (vmm == 0) { + std::cout << "GPU 0 doesn't support virtual memory management."; + return 0; + } + + // Size of memory to allocate + size_t size = 4 * 1024; + + // Step 2: Allocate physical memory + hipMemGenericAllocationHandle_t allocHandle; + hipMemAllocationProp prop = {}; + prop.type = hipMemAllocationTypePinned; + prop.location.type = hipMemLocationTypeDevice; + prop.location.id = currentDev; + size_t granularity = 0; + HIP_CHECK( + hipMemGetAllocationGranularity( + &granularity, + &prop, + hipMemAllocationGranularityMinimum)); + size_t padded_size = ROUND_UP(size, granularity); + HIP_CHECK(hipMemCreate(&allocHandle, padded_size * 2, &prop, 0)); + + // Step 3: Reserve a virtual memory address range + void* virtualPointer = nullptr; + HIP_CHECK(hipMemAddressReserve(&virtualPointer, padded_size, granularity, nullptr, 0)); + + // Step 4: Map the physical memory to the virtual address range + HIP_CHECK(hipMemMap(virtualPointer, padded_size, 0, allocHandle, 0)); + + // Step 5: Set memory access permission for pointer + hipMemAccessDesc accessDesc = {}; + accessDesc.location.type = hipMemLocationTypeDevice; + accessDesc.location.id = currentDev; + accessDesc.flags = hipMemAccessFlagsProtReadWrite; + + HIP_CHECK(hipMemSetAccess(virtualPointer, padded_size, &accessDesc, 1)); + + // Step 6: Perform memory operation + int value = 42; + HIP_CHECK(hipMemcpy(virtualPointer, &value, sizeof(int), hipMemcpyHostToDevice)); + + int result = 1; + HIP_CHECK(hipMemcpy(&result, virtualPointer, sizeof(int), hipMemcpyDeviceToHost)); + if( result == 42) { + std::cout << "Success. Value: " << result << std::endl; + } else { + std::cout << "Failure. Value: " << result << std::endl; + } + + // Step 7: Launch kernels + // Launch zeroAddr kernel + zeroAddr<<<1, 1>>>((int*)virtualPointer); + HIP_CHECK(hipDeviceSynchronize()); + + // Check zeroAddr kernel result + result = 1; + HIP_CHECK(hipMemcpy(&result, virtualPointer, sizeof(int), hipMemcpyDeviceToHost)); + if( result == 0) { + std::cout << "Success. zeroAddr kernel: " << result << std::endl; + } else { + std::cout << "Failure. zeroAddr kernel: " << result << std::endl; + } + + // Launch fillAddr kernel + fillAddr<<<1, 1>>>((int*)virtualPointer); + HIP_CHECK(hipDeviceSynchronize()); + + // Check fillAddr kernel result + result = 1; + HIP_CHECK(hipMemcpy(&result, virtualPointer, sizeof(int), hipMemcpyDeviceToHost)); + if( result == 42) { + std::cout << "Success. 
fillAddr kernel: " << result << std::endl; + } else { + std::cout << "Failure. fillAddr kernel: " << result << std::endl; + } + + // Step 8: Cleanup + HIP_CHECK(hipMemUnmap(virtualPointer, padded_size)); + HIP_CHECK(hipMemRelease(allocHandle)); + HIP_CHECK(hipMemAddressFree(virtualPointer, padded_size)); + + return 0; + } + +Virtual aliases ================================================================================ -Dynamically increase allocation size --------------------------------------------------------------------------------- +Virtual aliases are multiple virtual memory addresses mapping to the same +physical memory on the GPU. When this occurs, different threads, processes, or memory +allocations to access shared physical memory through different virtual +addresses on different devices. + +Multiple virtual memory mappings can be created using multiple calls to +:cpp:func:`hipMemMap` on the same memory allocation. + +.. note:: + + RDNA cards may not produce correct results, if users access two different + virtual addresses that map to the same physical address. In this case, the + L1 data caches will be incoherent due to the virtual-to-physical aliasing. + These GPUs will produce correct results if users access virtual-to-physical + aliases using volatile pointers. -The :cpp:func:`hipMemAddressReserve` function allows you to increase the amount -of pre-allocated memory. This function accepts a parameter representing the -requested starting address of the virtual memory. This allows you to have a -continuous virtual address space without worrying about the underlying physical -allocation. + NVIDIA GPUs require special fences to produce correct results when + using virtual aliases. + +In the following code block, the kernels input device pointers are virtual +aliases of the same memory allocation: .. code-block:: cpp - hipMemAddressReserve(&new_ptr, (new_size - padded_size), 0, ptr + padded_size, 0); - hipMemMap(new_ptr, (new_size - padded_size), 0, newAllocHandle, 0); - hipMemSetAccess(new_ptr, (new_size - padded_size), &accessDesc, 1); + __global__ void updateBoth(int* pointerA, int* pointerB) { + // May produce incorrect results on RDNA and NVIDIA cards. + *pointerA = 0; + *pointerB = 42; + } + + __global__ void updateBoth_v2(volatile int* pointerA, volatile int* pointerB) { + // May produce incorrect results on NVIDIA cards. + *pointerA = 0; + *pointerB = 42; + } -The code sample above assumes that :cpp:func:`hipMemAddressReserve` was able to -reserve the memory address at the specified location. However, this isn't -guaranteed to be true, so you should validate that ``new_ptr`` points to a -specific virtual address before using it. diff --git a/include/hip/hip_runtime_api.h b/include/hip/hip_runtime_api.h index d29ca8dfe6..7a88760253 100644 --- a/include/hip/hip_runtime_api.h +++ b/include/hip/hip_runtime_api.h @@ -1089,7 +1089,10 @@ typedef enum hipMemAccessFlags { hipMemAccessFlagsProtReadWrite = 3 ///< Set the address range read-write accessible } hipMemAccessFlags; /** - * Memory access descriptor + * Memory access descriptor structure is used to specify memory access + * permissions for a virtual memory region in Virtual Memory Management API. + * This structure changes read, and write permissions for + * specific memory regions. 
*/ typedef struct hipMemAccessDesc { hipMemLocation location; ///< Location on which the accessibility has to change From c424986fcbf37ca95e1a3543cd710e4d67705250 Mon Sep 17 00:00:00 2001 From: randyh62 Date: Fri, 31 Jan 2025 11:46:11 -0800 Subject: [PATCH 22/46] added compilation cache content --- docs/how-to/hip_rtc.md | 26 ++++++++++++++++++++++---- 1 file changed, 22 insertions(+), 4 deletions(-) diff --git a/docs/how-to/hip_rtc.md b/docs/how-to/hip_rtc.md index 0bf3a56570..14584828be 100644 --- a/docs/how-to/hip_rtc.md +++ b/docs/how-to/hip_rtc.md @@ -9,11 +9,13 @@ HIP lets you compile kernels at runtime with the `hiprtc*` APIs. Kernels can be stored as a text string and can be passed to HIPRTC APIs alongside options to guide the compilation. -NOTE: +:::{note} -* This library can be used on systems without HIP installed nor AMD GPU driver installed at all (offline compilation). Therefore, it does not depend on any HIP runtime library. -* But it does depend on Code Object Manager (comgr). You may try to statically link comgr into HIPRTC to avoid any ambiguity. -* Developers can decide to bundle this library with their application. +* This library can be used on systems without HIP installed nor AMD GPU driver installed at all (offline compilation). Therefore, it doesn't depend on any HIP runtime library. +* This library depends on Code Object Manager (comgr). You can try to statically link comgr into HIPRTC to avoid ambiguity. +* Developers can bundle this library with their application. + +::: ## Compilation APIs @@ -230,6 +232,22 @@ int main() { } ``` +## Kernel Compilation Cache + +HIPRTC incorporates a cache to avoid recompiling kernels between program executions. The contents of the cache include the kernel source code (including the contents of any `#include` headers), the compilation flags, and the compiler version. After a ROCm version update, the kernels are progressively recompiled, and the new results are cached. When the cache is disabled, each kernel is recompiled every time it is requested. + +Use the following environment variables to manage the cache status as enabled or disabled, the location for storing the cache contents, and the cache eviction policy: + +* `AMD_COMGR_CACHE` By default this variable has a value of `0` and the compilation cache feature is disabled. To enable the feature set the environment variable to a value of `1` (or any value other than `0`). This behavior may change in a future release. + +* `AMD_COMGR_CACHE_DIR`: By default the value of this environment variable is defined as `$XDG_CACHE_HOME/comgr_cache`, which defaults to `$USER/.cache/comgr_cache` on Linux, and `%LOCALAPPDATA%\cache\comgr_cache` on Windows. You can specify a different directory for the environment variable to change the path for cache storage. If the runtime fails to access the specified cache directory, or the environment variable is set to an empty string (""), the cache is disabled. + +* `AMD_COMGR_CACHE_POLICY`: If assigned a value, the string is interpreted and applied to the cache pruning policy. The string format is consistent with [Clang's ThinLTO cache pruning policy](https://rocm.docs.amd.com/projects/llvm-project/en/latest/LLVM/clang/html/ThinLTO.html#cache-pruning). The default policy is defined as: `prune_interval=1h:prune_expiration=0h:cache_size=75%:cache_size_bytes=30g:cache_size_files=0`. If the runtime fails to parse the defined string, or the environment variable is set to an empty string (""), the cache is disabled. 
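These variables are read from the process environment by the compiler support library, so they are normally set in the shell before the application starts. The following small sketch, which is ordinary C++ rather than part of the HIPRTC API, simply prints the values seen by the current process; unset variables fall back to the defaults described above.

```cpp
// Diagnostic sketch: print the cache-related environment variables as seen by
// this process. The variables are interpreted by the compiler support library,
// not by this code.
#include <cstdlib>
#include <iostream>

int main()
{
    const char* names[] = {"AMD_COMGR_CACHE", "AMD_COMGR_CACHE_DIR", "AMD_COMGR_CACHE_POLICY"};
    for (const char* name : names) {
        const char* value = std::getenv(name);
        std::cout << name << " = " << (value ? value : "(unset, default applies)") << "\n";
    }
    return 0;
}
```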
+ +:::{note} + This cache is also shared with the OpenCL runtime shipped with ROCm. +::: + ## HIPRTC specific options HIPRTC provides a few HIPRTC specific flags From 8a8a11037f94dc7bb33ea64fc0f3efb222f36194 Mon Sep 17 00:00:00 2001 From: Adel Johar Date: Wed, 22 Jan 2025 13:53:12 +0100 Subject: [PATCH 23/46] Docs: Replace terms.md page with page that provides example of API mapping --- docs/faq.rst | 35 +------ docs/index.md | 2 +- docs/reference/api_syntax.rst | 176 ++++++++++++++++++++++++++++++++++ docs/reference/terms.md | 44 --------- docs/sphinx/_toc.yml.in | 3 +- 5 files changed, 180 insertions(+), 80 deletions(-) create mode 100644 docs/reference/api_syntax.rst delete mode 100644 docs/reference/terms.md diff --git a/docs/faq.rst b/docs/faq.rst index 5e67ec465d..15308e437c 100644 --- a/docs/faq.rst +++ b/docs/faq.rst @@ -65,39 +65,8 @@ platforms. Additional porting might be required to deal with architecture feature queries or CUDA capabilities that HIP doesn't support. -How does HIP compare with OpenCL? ---------------------------------- - -HIP offers several benefits over OpenCL: - -* Device code can be written in modern C++, including templates, lambdas, - classes and so on. -* Host and device code can be mixed in the source files. -* The HIP API is less verbose than OpenCL and is familiar to CUDA developers. -* Porting from CUDA to HIP is significantly easier than from CUDA to OpenCL. -* HIP uses development tools specialized for each platform: :doc:`amdclang++ ` - for AMD GPUs or `nvcc `_ - for NVIDIA GPUs, and profilers like :doc:`ROCm Compute Profiler ` or - `Nsight Systems `_. -* HIP provides - * pointers and host-side pointer arithmetic. - * device-level control over memory allocation and placement. - * an offline compilation model. - -How does porting CUDA to HIP compare to porting CUDA to OpenCL? ---------------------------------------------------------------- - -OpenCL differs from HIP and CUDA when considering the host runtime, -but even more so when considering the kernel code. -The HIP device code is a C++ dialect, while OpenCL is C99-based. -OpenCL does not support single-source compilation. - -As a result, the OpenCL syntax differs significantly from HIP, and porting tools -must perform complex transformations, especially regarding templates or other -C++ features in kernels. - -To better understand the syntax differences, see :doc:`here` or -the :doc:`HIP porting guide `. +To better understand the syntax differences, see :doc:`CUDA to HIP API Function Comparison ` +or the :doc:`HIP porting guide `. Can I install CUDA and ROCm on the same machine? ------------------------------------------------ diff --git a/docs/index.md b/docs/index.md index eb2eb1e6da..7678aaae79 100644 --- a/docs/index.md +++ b/docs/index.md @@ -45,7 +45,7 @@ The HIP documentation is organized into the following categories: * [HSA runtime API for ROCm](./reference/virtual_rocr) * [HIP math API](./reference/math_api) * [HIP environment variables](./reference/env_variables) -* [Comparing syntax for different APIs](./reference/terms) +* [CUDA to HIP API Function Comparison](./reference/api_syntax) * [List of deprecated APIs](./reference/deprecated_api_list) * [FP8 numbers in HIP](./reference/fp8_numbers) * {doc}`./reference/hardware_features` diff --git a/docs/reference/api_syntax.rst b/docs/reference/api_syntax.rst new file mode 100644 index 0000000000..ead33fa5e1 --- /dev/null +++ b/docs/reference/api_syntax.rst @@ -0,0 +1,176 @@ +.. 
meta:: + :description: Maps CUDA API syntax to HIP API syntax with an example + :keywords: AMD, ROCm, HIP, CUDA, syntax, HIP syntax + +******************************************************************************** +CUDA to HIP API Function Comparison +******************************************************************************** + +This page introduces key syntax differences between CUDA and HIP APIs with a focused code +example and comparison table. For a complete list of mappings, visit :ref:`HIPIFY `. + +The following CUDA code example illustrates several CUDA API syntaxes. + +.. code-block:: cpp + + #include + #include + #include + + __global__ void block_reduction(const float* input, float* output, int num_elements) + { + extern __shared__ float s_data[]; + + int tid = threadIdx.x; + int global_id = blockDim.x * blockIdx.x + tid; + + if (global_id < num_elements) + { + s_data[tid] = input[global_id]; + } + else + { + s_data[tid] = 0.0f; + } + __syncthreads(); + + for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) + { + if (tid < stride) + { + s_data[tid] += s_data[tid + stride]; + } + __syncthreads(); + } + + if (tid == 0) + { + output[blockIdx.x] = s_data[0]; + } + } + + int main() + { + int threads = 256; + const int num_elements = 50000; + + std::vector h_a(num_elements); + std::vector h_b((num_elements + threads - 1) / threads); + + for (int i = 0; i < num_elements; ++i) + { + h_a[i] = rand() / static_cast(RAND_MAX); + } + + float *d_a, *d_b; + cudaMalloc(&d_a, h_a.size() * sizeof(float)); + cudaMalloc(&d_b, h_b.size() * sizeof(float)); + + cudaStream_t stream; + cudaStreamCreateWithFlags(&stream, cudaStreamNonBlocking); + + cudaEvent_t start_event, stop_event; + cudaEventCreate(&start_event); + cudaEventCreate(&stop_event); + + cudaMemcpyAsync(d_a, h_a.data(), h_a.size() * sizeof(float), cudaMemcpyHostToDevice, stream); + + cudaEventRecord(start_event, stream); + + int blocks = (num_elements + threads - 1) / threads; + block_reduction<<>>(d_a, d_b, num_elements); + + cudaMemcpyAsync(h_b.data(), d_b, h_b.size() * sizeof(float), cudaMemcpyDeviceToHost, stream); + + cudaEventRecord(stop_event, stream); + cudaEventSynchronize(stop_event); + + cudaEventElapsedTime(&milliseconds, start_event, stop_event); + std::cout << "Kernel execution time: " << milliseconds << " ms\n"; + + cudaFree(d_a); + cudaFree(d_b); + + cudaEventDestroy(start_event); + cudaEventDestroy(stop_event); + cudaStreamDestroy(stream); + + return 0; + } + +The following table maps CUDA API functions to corresponding HIP API functions, as demonstrated in the +preceding code examples. + +.. 
list-table:: + :header-rows: 1 + :name: syntax-mapping-table + + * + - CUDA + - HIP + + * + - ``#include `` + - ``#include `` + + * + - ``cudaError_t`` + - ``hipError_t`` + + * + - ``cudaEvent_t`` + - ``hipEvent_t`` + + * + - ``cudaStream_t`` + - ``hipStream_t`` + + * + - ``cudaMalloc`` + - ``hipMalloc`` + + * + - ``cudaStreamCreateWithFlags`` + - ``hipStreamCreateWithFlags`` + + * + - ``cudaStreamNonBlocking`` + - ``hipStreamNonBlocking`` + + * + - ``cudaEventCreate`` + - ``hipEventCreate`` + + * + - ``cudaMemcpyAsync`` + - ``hipMemcpyAsync`` + + * + - ``cudaMemcpyHostToDevice`` + - ``hipMemcpyHostToDevice`` + + * + - ``cudaEventRecord`` + - ``hipEventRecord`` + + * + - ``cudaEventSynchronize`` + - ``hipEventSynchronize`` + + * + - ``cudaEventElapsedTime`` + - ``hipEventElapsedTime`` + + * + - ``cudaFree`` + - ``hipFree`` + + * + - ``cudaEventDestroy`` + - ``hipEventDestroy`` + + * + - ``cudaStreamDestroy`` + - ``hipStreamDestroy`` + +In summary, this comparison highlights the primary differences between CUDA and HIP APIs. diff --git a/docs/reference/terms.md b/docs/reference/terms.md deleted file mode 100644 index 713bf6eb81..0000000000 --- a/docs/reference/terms.md +++ /dev/null @@ -1,44 +0,0 @@ - - - - - - -# Table comparing syntax for different compute APIs - -|Term|CUDA|HIP|OpenCL| -|---|---|---|---| -|Device|`int deviceId`|`int deviceId`|`cl_device`| -|Queue|`cudaStream_t`|`hipStream_t`|`cl_command_queue`| -|Event|`cudaEvent_t`|`hipEvent_t`|`cl_event`| -|Memory|`void *`|`void *`|`cl_mem`| -||||| -| |grid|grid|NDRange| -| |block|block|work-group| -| |thread|thread|work-item| -| |warp|warp|sub-group| -||||| -|Thread-
index | `threadIdx.x` | `threadIdx.x` | `get_local_id(0)` | -|Block-
index | `blockIdx.x` | `blockIdx.x` | `get_group_id(0)` | -|Block-
dim | `blockDim.x` | `blockDim.x` | `get_local_size(0)` | -|Grid-dim | `gridDim.x` | `gridDim.x` | `get_num_groups(0)` | -||||| -|Device Kernel|`__global__`|`__global__`|`__kernel`| -|Device Function|`__device__`|`__device__`|Implied in device compilation| -|Host Function|`__host_` (default)|`__host_` (default)|Implied in host compilation| -|Host + Device Function|`__host__` `__device__`|`__host__` `__device__`| No equivalent| -|Kernel Launch|`<<< >>>`|`hipLaunchKernel`/`hipLaunchKernelGGL`/`<<< >>>`|`clEnqueueNDRangeKernel`| -|||||| -|Global Memory|`__global__`|`__global__`|`__global`| -|Group Memory|`__shared__`|`__shared__`|`__local`| -|Constant|`__constant__`|`__constant__`|`__constant`| -|||||| -||`__syncthreads`|`__syncthreads`|`barrier(CLK_LOCAL_MEMFENCE)`| -|Atomic Builtins|`atomicAdd`|`atomicAdd`|`atomic_add`| -|Precise Math|`cos(f)`|`cos(f)`|`cos(f)`| -|Fast Math|`__cos(f)`|`__cos(f)`|`native_cos(f)`| -|Vector|`float4`|`float4`|`float4`| - -## Notes - -The indexing functions (starting with `thread-index`) show the terminology for a 1D grid. Some APIs use reverse order of `xyz` / 012 indexing for 3D grids. diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in index 04e1ce18a6..ed0d7f914d 100644 --- a/docs/sphinx/_toc.yml.in +++ b/docs/sphinx/_toc.yml.in @@ -111,8 +111,7 @@ subtrees: - file: reference/virtual_rocr - file: reference/math_api - file: reference/env_variables - - file: reference/terms - title: Comparing syntax for different APIs + - file: reference/api_syntax - file: reference/deprecated_api_list title: List of deprecated APIs - file: reference/fp8_numbers From 9bd108bcc35c9b082b312931ab0aada285262720 Mon Sep 17 00:00:00 2001 From: Stefan Weigl-Bosker Date: Wed, 5 Feb 2025 15:12:38 -0500 Subject: [PATCH 24/46] Update README.md --- README.md | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/README.md b/README.md index 32031be961..4df0b4c6a9 100644 --- a/README.md +++ b/README.md @@ -42,8 +42,8 @@ HIP releases are typically naming convention for each ROCM release to help diffe * [HIP FAQ](docs/faq.rst) * [HIP C++ Language Extensions](docs/reference/cpp_language_extensions.rst) * [HIP Porting Guide](docs/how-to/hip_porting_guide.md) -* [HIP Porting Driver Guide](docs/how-to/hip_porting_driver_api.md) -* [HIP Programming Guide](docs/how-to/programming_manual.md) +* [HIP Porting Driver Guide](docs/how-to/hip_porting_driver_api.rst) +* [HIP Programming Guide](docs/programming_guide.rst) * [HIP Logging](docs/how-to/logging.rst) * [Building HIP From Source](docs/install/build.rst) * [HIP Debugging](docs/how-to/debugging.rst) @@ -51,15 +51,15 @@ HIP releases are typically naming convention for each ROCM release to help diffe * [HIP Terminology](docs/reference/terms.md) (including Rosetta Stone of GPU computing terms across CUDA/HIP/OpenCL) * [HIPIFY](https://github.com/ROCm/HIPIFY/blob/amd-staging/README.md) * Supported CUDA APIs: - * [Runtime API](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/tables/CUDA_Runtime_API_functions_supported_by_HIP.md) - * [Driver API](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/tables/CUDA_Driver_API_functions_supported_by_HIP.md) - * [cuComplex API](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/tables/cuComplex_API_supported_by_HIP.md) - * [Device API](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/tables/CUDA_Device_API_supported_by_HIP.md) - * [cuBLAS](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/tables/CUBLAS_API_supported_by_ROC.md) - * 
[cuRAND](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/tables/CURAND_API_supported_by_HIP.md) - * [cuDNN](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/tables/CUDNN_API_supported_by_HIP.md) - * [cuFFT](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/tables/CUFFT_API_supported_by_HIP.md) - * [cuSPARSE](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/tables/CUSPARSE_API_supported_by_HIP.md) + * [Runtime API](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUDA_Runtime_API_functions_supported_by_HIP.md) + * [Driver API](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUDA_Driver_API_functions_supported_by_HIP.md) + * [cuComplex API](https://github.com/ROCm/HIPIFY/blob/amd-staging/reference/docs/tables/cuComplex_API_supported_by_HIP.md) + * [Device API](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUDA_Device_API_supported_by_HIP.md) + * [cuBLAS](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUBLAS_API_supported_by_ROC.md) + * [cuRAND](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CURAND_API_supported_by_HIP.md) + * [cuDNN](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUDNN_API_supported_by_HIP.md) + * [cuFFT](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUFFT_API_supported_by_HIP.md) + * [cuSPARSE](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUSPARSE_API_supported_by_HIP.md) * [Developer/CONTRIBUTING Info](CONTRIBUTING.md) * [Release Notes](RELEASE.md) From 2cc5b82976007ba0fd8d1ebfabc2c1e03c2829dc Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 7 Feb 2025 00:39:15 +0000 Subject: [PATCH 25/46] Bump sphinxcontrib-doxylink from 1.12.3 to 1.12.4 in /docs/sphinx Bumps [sphinxcontrib-doxylink](https://github.com/sphinx-contrib/doxylink) from 1.12.3 to 1.12.4. - [Release notes](https://github.com/sphinx-contrib/doxylink/releases) - [Changelog](https://github.com/sphinx-contrib/doxylink/blob/master/CHANGELOG.md) - [Commits](https://github.com/sphinx-contrib/doxylink/compare/1.12.3...1.12.4) --- updated-dependencies: - dependency-name: sphinxcontrib-doxylink dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] --- docs/sphinx/requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sphinx/requirements.txt b/docs/sphinx/requirements.txt index 9b8da47ab3..e88b29e61e 100644 --- a/docs/sphinx/requirements.txt +++ b/docs/sphinx/requirements.txt @@ -252,7 +252,7 @@ sphinxcontrib-applehelp==2.0.0 # via sphinx sphinxcontrib-devhelp==2.0.0 # via sphinx -sphinxcontrib-doxylink==1.12.3 +sphinxcontrib-doxylink==1.12.4 # via -r requirements.in sphinxcontrib-htmlhelp==2.1.0 # via sphinx From d6b5821e12c69f3884c62363acfe3fc094256483 Mon Sep 17 00:00:00 2001 From: Adel Johar Date: Thu, 23 Jan 2025 13:44:27 +0100 Subject: [PATCH 26/46] Docs: Consolidate environment variables into a single RST file, use a script to pick tables --- .wordlist.txt | 1 + docs/.gitignore | 1 + docs/conf.py | 10 +- docs/data/env_variables_hip.rst | 263 +++++++++++++++++++++++++++++ docs/extension/__init__.py | 0 docs/extension/custom_directive.py | 59 +++++++ docs/how-to/debugging.rst | 3 +- docs/how-to/debugging_env.rst | 99 ----------- docs/reference/env_variables.rst | 164 ++---------------- 9 files changed, 349 insertions(+), 251 deletions(-) create mode 100644 docs/data/env_variables_hip.rst create mode 100644 docs/extension/__init__.py create mode 100644 docs/extension/custom_directive.py delete mode 100644 docs/how-to/debugging_env.rst diff --git a/.wordlist.txt b/.wordlist.txt index 2746b336b7..32d489abc8 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -145,6 +145,7 @@ rocgdb ROCm's rocTX roundtrip +rst RTC RTTI rvalue diff --git a/docs/.gitignore b/docs/.gitignore index 53b7787fbd..76d890c082 100644 --- a/docs/.gitignore +++ b/docs/.gitignore @@ -6,3 +6,4 @@ /doxygen/html /doxygen/xml /sphinx/_toc.yml +__pycache__ diff --git a/docs/conf.py b/docs/conf.py index aed3ead08d..8261240fb0 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -5,6 +5,8 @@ # https://www.sphinx-doc.org/en/master/usage/configuration.html import re +import sys +from pathlib import Path from typing import Any, Dict, List from rocm_docs import ROCmDocs @@ -38,7 +40,10 @@ for sphinx_var in ROCmDocs.SPHINX_VARS: globals()[sphinx_var] = getattr(docs_core, sphinx_var) -extensions += ["sphinxcontrib.doxylink"] +# Add the _extensions directory to Python's search path +sys.path.append(str(Path(__file__).parent / 'extension')) + +extensions += ["sphinxcontrib.doxylink", "custom_directive"] cpp_id_attributes = ["__global__", "__device__", "__host__", "__forceinline__", "static"] cpp_paren_attributes = ["__declspec"] @@ -50,5 +55,6 @@ exclude_patterns = [ "doxygen/mainpage.md", "understand/glossary.md", - 'how-to/debugging_env.rst' + 'how-to/debugging_env.rst', + "data/env_variables_hip.rst" ] \ No newline at end of file diff --git a/docs/data/env_variables_hip.rst b/docs/data/env_variables_hip.rst new file mode 100644 index 0000000000..6186671ecf --- /dev/null +++ b/docs/data/env_variables_hip.rst @@ -0,0 +1,263 @@ +.. meta:: + :description: HIP environment variables + :keywords: AMD, HIP, environment variables, environment + +The GPU isolation environment variables in HIP are collected in the following table. + +.. _hip-env-isolation: +.. list-table:: + :header-rows: 1 + :widths: 70,30 + + * - **Environment variable** + - **Value** + + * - | ``ROCR_VISIBLE_DEVICES`` + | A list of device indices or UUIDs that will be exposed to applications. + - Example: ``0,GPU-DEADBEEFDEADBEEF`` + + * - | ``GPU_DEVICE_ORDINAL`` + | Devices indices exposed to OpenCL and HIP applications. 
+ - Example: ``0,2`` + + * - | ``HIP_VISIBLE_DEVICES`` or ``CUDA_VISIBLE_DEVICES`` + | Device indices exposed to HIP applications. + - Example: ``0,2`` + +The profiling environment variables in HIP are collected in the following table. + +.. _hip-env-prof: +.. list-table:: + :header-rows: 1 + :widths: 70,30 + + * - **Environment variable** + - **Value** + + * - | ``HSA_CU_MASK`` + | Sets the mask on a lower level of queue creation in the driver, + | this mask will also be set for queues being profiled. + - Example: ``1:0-8`` + + * - | ``ROC_GLOBAL_CU_MASK`` + | Sets the mask on queues created by the HIP or the OpenCL runtimes, + | this mask will also be set for queues being profiled. + - Example: ``0xf``, enables only 4 CUs + + * - | ``HIP_FORCE_QUEUE_PROFILING`` + | Used to run the app as if it were run in rocprof. Forces command queue + | profiling on by default. + - | 0: Disable + | 1: Enable + +The debugging environment variables in HIP are collected in the following table. + +.. _hip-env-debug: +.. list-table:: + :header-rows: 1 + :widths: 35,14,51 + + * - **Environment variable** + - **Default value** + - **Value** + + * - | ``AMD_LOG_LEVEL`` + | Enables HIP log on various level. + - ``0`` + - | 0: Disable log. + | 1: Enables error logs. + | 2: Enables warning logs next to lower-level logs. + | 3: Enables information logs next to lower-level logs. + | 4: Enables debug logs next to lower-level logs. + | 5: Enables debug extra logs next to lower-level logs. + + * - | ``AMD_LOG_LEVEL_FILE`` + | Sets output file for ``AMD_LOG_LEVEL``. + - stderr output + - + + * - | ``AMD_LOG_MASK`` + | Specifies HIP log filters. Here is the ` complete list of log masks `_. + - ``0x7FFFFFFF`` + - | 0x1: Log API calls. + | 0x2: Kernel and copy commands and barriers. + | 0x4: Synchronization and waiting for commands to finish. + | 0x8: Decode and display AQL packets. + | 0x10: Queue commands and queue contents. + | 0x20: Signal creation, allocation, pool. + | 0x40: Locks and thread-safety code. + | 0x80: Kernel creations and arguments, etc. + | 0x100: Copy debug. + | 0x200: Detailed copy debug. + | 0x400: Resource allocation, performance-impacting events. + | 0x800: Initialization and shutdown. + | 0x1000: Misc debug, not yet classified. + | 0x2000: Show raw bytes of AQL packet. + | 0x4000: Show code creation debug. + | 0x8000: More detailed command info, including barrier commands. + | 0x10000: Log message location. + | 0x20000: Memory allocation. + | 0x40000: Memory pool allocation, including memory in graphs. + | 0x80000: Timestamp details. + | 0xFFFFFFFF: Log always even mask flag is zero. + + * - | ``HIP_LAUNCH_BLOCKING`` + | Used for serialization on kernel execution. + - ``0`` + - | 0: Disable. Kernel executes normally. + | 1: Enable. Serializes kernel enqueue, behaves the same as ``AMD_SERIALIZE_KERNEL``. + + * - | ``HIP_VISIBLE_DEVICES`` (or ``CUDA_VISIBLE_DEVICES``) + | Only devices whose index is present in the sequence are visible to HIP + - Unset by default. + - 0,1,2: Depending on the number of devices on the system. + + * - | ``GPU_DUMP_CODE_OBJECT`` + | Dump code object. + - ``0`` + - | 0: Disable + | 1: Enable + + * - | ``AMD_SERIALIZE_KERNEL`` + | Serialize kernel enqueue. + - ``0`` + - | 0: Disable + | 1: Wait for completion before enqueue. + | 2: Wait for completion after enqueue. + | 3: Both + + * - | ``AMD_SERIALIZE_COPY`` + | Serialize copies + - ``0`` + - | 0: Disable + | 1: Wait for completion before enqueue. + | 2: Wait for completion after enqueue. 
+ | 3: Both + + * - | ``AMD_DIRECT_DISPATCH`` + | Enable direct kernel dispatch (Currently for Linux; under development for Windows). + - ``1`` + - | 0: Disable + | 1: Enable + + * - | ``GPU_MAX_HW_QUEUES`` + | The maximum number of hardware queues allocated per device. + - ``4`` + - The variable controls how many independent hardware queues HIP runtime can create per process, + per device. If an application allocates more HIP streams than this number, then HIP runtime reuses + the same hardware queues for the new streams in a round-robin manner. Note that this maximum + number does not apply to hardware queues that are created for CU-masked HIP streams, or + cooperative queues for HIP Cooperative Groups (single queue per device). + +The memory management related environment variables in HIP are collected in the +following table. + +.. _hip-env-memory: +.. list-table:: + :header-rows: 1 + :widths: 35,14,51 + + * - **Environment variable** + - **Default value** + - **Value** + + * - | ``HIP_HIDDEN_FREE_MEM`` + | Amount of memory to hide from the free memory reported by hipMemGetInfo. + - ``0`` + - | 0: Disable + | Unit: megabyte (MB) + + * - | ``HIP_HOST_COHERENT`` + | Specifies if the memory is coherent between the host and GPU in ``hipHostMalloc``. + - ``0`` + - | 0: Memory is not coherent. + | 1: Memory is coherent. + | Environment variable has effect, if the following conditions are statisfied: + | - One of the ``hipHostMallocDefault``, ``hipHostMallocPortable``, ``hipHostMallocWriteCombined`` or ``hipHostMallocNumaUser`` flag set to 1. + | - ``hipHostMallocCoherent``, ``hipHostMallocNonCoherent`` and ``hipHostMallocMapped`` flags set to 0. + + * - | ``HIP_INITIAL_DM_SIZE`` + | Set initial heap size for device malloc. + - ``8388608`` + - | Unit: Byte + | The default value corresponds to 8 MB. + + * - | ``HIP_MEM_POOL_SUPPORT`` + | Enables memory pool support in HIP. + - ``0`` + - | 0: Disable + | 1: Enable + + * - | ``HIP_MEM_POOL_USE_VM`` + | Enables memory pool support in HIP. + - | ``0``: other OS + | ``1``: Windows + - | 0: Disable + | 1: Enable + + * - | ``HIP_VMEM_MANAGE_SUPPORT`` + | Virtual Memory Management Support. + - ``1`` + - | 0: Disable + | 1: Enable + + * - | ``GPU_MAX_HEAP_SIZE`` + | Set maximum size of the GPU heap to % of board memory. + - ``100`` + - | Unit: Percentage + + * - | ``GPU_MAX_REMOTE_MEM_SIZE`` + | Maximum size that allows device memory substitution with system. + - ``2`` + - | Unit: kilobyte (KB) + + * - | ``GPU_NUM_MEM_DEPENDENCY`` + | Number of memory objects for dependency tracking. + - ``256`` + - + + * - | ``GPU_STREAMOPS_CP_WAIT`` + | Force the stream memory operation to wait on CP. + - ``0`` + - | 0: Disable + | 1: Enable + + * - | ``HSA_LOCAL_MEMORY_ENABLE`` + | Enable HSA device local memory usage. + - ``1`` + - | 0: Disable + | 1: Enable + + * - | ``PAL_ALWAYS_RESIDENT`` + | Force memory resources to become resident at allocation time. + - ``0`` + - | 0: Disable + | 1: Enable + + * - | ``PAL_PREPINNED_MEMORY_SIZE`` + | Size of prepinned memory. + - ``64`` + - | Unit: kilobyte (KB) + + * - | ``REMOTE_ALLOC`` + | Use remote memory for the global heap allocation. + - ``0`` + - | 0: Disable + | 1: Enable + +The following table lists environment variables that are useful but relate to +different features in HIP. + +.. _hip-env-other: +.. 
list-table:: + :header-rows: 1 + :widths: 35,14,51 + + * - **Environment variable** + - **Default value** + - **Value** + + * - | ``HIPRTC_COMPILE_OPTIONS_APPEND`` + | Sets compile options needed for ``hiprtc`` compilation. + - None + - ``--gpu-architecture=gfx906:sramecc+:xnack``, ``-fgpu-rdc`` diff --git a/docs/extension/__init__.py b/docs/extension/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/docs/extension/custom_directive.py b/docs/extension/custom_directive.py new file mode 100644 index 0000000000..3c18396d7e --- /dev/null +++ b/docs/extension/custom_directive.py @@ -0,0 +1,59 @@ +import os +import re +from docutils.parsers.rst import Directive +from docutils.statemachine import StringList + +class TableInclude(Directive): + required_arguments = 1 + optional_arguments = 0 + final_argument_whitespace = True + option_spec = { + 'table': str + } + + def run(self): + # Get the file path from the first argument + file_path = self.arguments[0] + + # Get the environment to resolve the full path + env = self.state.document.settings.env + src_dir = os.path.abspath(env.srcdir) + full_file_path = os.path.join(src_dir, file_path) + + # Check if the file exists + if not os.path.exists(full_file_path): + raise self.error(f"RST file {full_file_path} does not exist.") + + # Read the entire file content + with open(full_file_path, 'r', encoding='utf-8') as f: + content = f.read() + + # Find all tables with named targets + table_pattern = r'(?:^\.\.\ _(.+?):\n)(.. list-table::.*?(?:\n\s*\*\s*-.*?)+)(?=\n\n|\Z)' + table_matches = list(re.finditer(table_pattern, content, re.MULTILINE | re.DOTALL)) + + # Get the specific table name from options + table_name = self.options.get('table') + + # If no table specified, merge compatible tables + if not table_name: + raise self.error("The ':table:' option is required to specify which table to include.") + + # Find the specific table + matching_tables = [ + match for match in table_matches + if match.group(1).strip() == table_name + ] + + if not matching_tables: + raise self.error(f"Table '{table_name}' not found in {full_file_path}") + + # Extract the matched table content + table_content = matching_tables[0].group(2) + + # Insert the table content into the current document + self.state_machine.insert_input(table_content.splitlines(), full_file_path) + return [] + +def setup(app): + app.add_directive('include-table', TableInclude) diff --git a/docs/how-to/debugging.rst b/docs/how-to/debugging.rst index 433d31de10..6d5ff2ff24 100644 --- a/docs/how-to/debugging.rst +++ b/docs/how-to/debugging.rst @@ -273,7 +273,8 @@ HIP environment variable summary Here are some of the more commonly used environment variables: -.. include:: ../how-to/debugging_env.rst +.. include-table:: data/env_variables_hip.rst + :table: hip-env-debug General debugging tips ====================================================== diff --git a/docs/how-to/debugging_env.rst b/docs/how-to/debugging_env.rst deleted file mode 100644 index b3544a967f..0000000000 --- a/docs/how-to/debugging_env.rst +++ /dev/null @@ -1,99 +0,0 @@ -.. meta:: - :description: Debug environment variables for HIP. - :keywords: AMD, ROCm, HIP, debugging, Environment variables, ROCgdb - -.. list-table:: - :header-rows: 1 - :widths: 35,14,51 - - * - **Environment variable** - - **Default value** - - **Value** - - * - | ``AMD_LOG_LEVEL`` - | Enables HIP log on various level. - - ``0`` - - | 0: Disable log. - | 1: Enables error logs. - | 2: Enables warning logs next to lower-level logs. 
- | 3: Enables information logs next to lower-level logs. - | 4: Enables debug logs next to lower-level logs. - | 5: Enables debug extra logs next to lower-level logs. - - * - | ``AMD_LOG_LEVEL_FILE`` - | Sets output file for ``AMD_LOG_LEVEL``. - - stderr output - - - - * - | ``AMD_LOG_MASK`` - | Specifies HIP log filters. Here is the ` complete list of log masks `_. - - ``0x7FFFFFFF`` - - | 0x1: Log API calls. - | 0x2: Kernel and copy commands and barriers. - | 0x4: Synchronization and waiting for commands to finish. - | 0x8: Decode and display AQL packets. - | 0x10: Queue commands and queue contents. - | 0x20: Signal creation, allocation, pool. - | 0x40: Locks and thread-safety code. - | 0x80: Kernel creations and arguments, etc. - | 0x100: Copy debug. - | 0x200: Detailed copy debug. - | 0x400: Resource allocation, performance-impacting events. - | 0x800: Initialization and shutdown. - | 0x1000: Misc debug, not yet classified. - | 0x2000: Show raw bytes of AQL packet. - | 0x4000: Show code creation debug. - | 0x8000: More detailed command info, including barrier commands. - | 0x10000: Log message location. - | 0x20000: Memory allocation. - | 0x40000: Memory pool allocation, including memory in graphs. - | 0x80000: Timestamp details. - | 0xFFFFFFFF: Log always even mask flag is zero. - - * - | ``HIP_LAUNCH_BLOCKING`` - | Used for serialization on kernel execution. - - ``0`` - - | 0: Disable. Kernel executes normally. - | 1: Enable. Serializes kernel enqueue, behaves the same as ``AMD_SERIALIZE_KERNEL``. - - * - | ``HIP_VISIBLE_DEVICES`` (or ``CUDA_VISIBLE_DEVICES``) - | Only devices whose index is present in the sequence are visible to HIP - - Unset by default. - - 0,1,2: Depending on the number of devices on the system. - - * - | ``GPU_DUMP_CODE_OBJECT`` - | Dump code object. - - ``0`` - - | 0: Disable - | 1: Enable - - * - | ``AMD_SERIALIZE_KERNEL`` - | Serialize kernel enqueue. - - ``0`` - - | 0: Disable - | 1: Wait for completion before enqueue. - | 2: Wait for completion after enqueue. - | 3: Both - - * - | ``AMD_SERIALIZE_COPY`` - | Serialize copies - - ``0`` - - | 0: Disable - | 1: Wait for completion before enqueue. - | 2: Wait for completion after enqueue. - | 3: Both - - * - | ``AMD_DIRECT_DISPATCH`` - | Enable direct kernel dispatch (Currently for Linux; under development for Windows). - - ``1`` - - | 0: Disable - | 1: Enable - - * - | ``GPU_MAX_HW_QUEUES`` - | The maximum number of hardware queues allocated per device. - - ``4`` - - The variable controls how many independent hardware queues HIP runtime can create per process, - per device. If an application allocates more HIP streams than this number, then HIP runtime reuses - the same hardware queues for the new streams in a round-robin manner. Note that this maximum - number does not apply to hardware queues that are created for CU-masked HIP streams, or - cooperative queues for HIP Cooperative Groups (single queue per device). diff --git a/docs/reference/env_variables.rst b/docs/reference/env_variables.rst index 7d336c0b4e..fb1732311d 100644 --- a/docs/reference/env_variables.rst +++ b/docs/reference/env_variables.rst @@ -12,162 +12,38 @@ on AMD platform, which are grouped by functionality. GPU isolation variables ================================================================================ -The GPU isolation environment variables in HIP are collected in the next table. +The GPU isolation environment variables in HIP are collected in the following table. For more information, check :doc:`GPU isolation page `. -.. 
list-table:: - :header-rows: 1 - :widths: 70,30 - - * - **Environment variable** - - **Value** - - * - | ``ROCR_VISIBLE_DEVICES`` - | A list of device indices or UUIDs that will be exposed to applications. - - Example: ``0,GPU-DEADBEEFDEADBEEF`` - - * - | ``GPU_DEVICE_ORDINAL`` - | Devices indices exposed to OpenCL and HIP applications. - - Example: ``0,2`` - - * - | ``HIP_VISIBLE_DEVICES`` or ``CUDA_VISIBLE_DEVICES`` - | Device indices exposed to HIP applications. - - Example: ``0,2`` +.. include-table:: data/env_variables_hip.rst + :table: hip-env-isolation Profiling variables ================================================================================ -The profiling environment variables in HIP are collected in the next table. For +The profiling environment variables in HIP are collected in the following table. For more information, check :doc:`setting the number of CUs page `. -.. list-table:: - :header-rows: 1 - :widths: 70,30 - - * - **Environment variable** - - **Value** - - * - | ``HSA_CU_MASK`` - | Sets the mask on a lower level of queue creation in the driver, - | this mask will also be set for queues being profiled. - - Example: ``1:0-8`` - - * - | ``ROC_GLOBAL_CU_MASK`` - | Sets the mask on queues created by the HIP or the OpenCL runtimes, - | this mask will also be set for queues being profiled. - - Example: ``0xf``, enables only 4 CUs - - * - | ``HIP_FORCE_QUEUE_PROFILING`` - | Used to run the app as if it were run in rocprof. Forces command queue - | profiling on by default. - - | 0: Disable - | 1: Enable +.. include-table:: data/env_variables_hip.rst + :table: hip-env-prof Debug variables ================================================================================ -The debugging environment variables in HIP are collected in the next table. For +The debugging environment variables in HIP are collected in the following table. For more information, check :ref:`debugging_with_hip`. -.. include:: ../how-to/debugging_env.rst +.. include-table:: data/env_variables_hip.rst + :table: hip-env-debug Memory management related variables ================================================================================ The memory management related environment variables in HIP are collected in the -next table. - -.. list-table:: - :header-rows: 1 - :widths: 35,14,51 - - * - **Environment variable** - - **Default value** - - **Value** - - * - | ``HIP_HIDDEN_FREE_MEM`` - | Amount of memory to hide from the free memory reported by hipMemGetInfo. - - ``0`` - - | 0: Disable - | Unit: megabyte (MB) - - * - | ``HIP_HOST_COHERENT`` - | Specifies if the memory is coherent between the host and GPU in ``hipHostMalloc``. - - ``0`` - - | 0: Memory is not coherent. - | 1: Memory is coherent. - | Environment variable has effect, if the following conditions are statisfied: - | - One of the ``hipHostMallocDefault``, ``hipHostMallocPortable``, ``hipHostMallocWriteCombined`` or ``hipHostMallocNumaUser`` flag set to 1. - | - ``hipHostMallocCoherent``, ``hipHostMallocNonCoherent`` and ``hipHostMallocMapped`` flags set to 0. - - * - | ``HIP_INITIAL_DM_SIZE`` - | Set initial heap size for device malloc. - - ``8388608`` - - | Unit: Byte - | The default value corresponds to 8 MB. - - * - | ``HIP_MEM_POOL_SUPPORT`` - | Enables memory pool support in HIP. - - ``0`` - - | 0: Disable - | 1: Enable - - * - | ``HIP_MEM_POOL_USE_VM`` - | Enables memory pool support in HIP. 
- - | ``0``: other OS - | ``1``: Windows - - | 0: Disable - | 1: Enable - - * - | ``HIP_VMEM_MANAGE_SUPPORT`` - | Virtual Memory Management Support. - - ``1`` - - | 0: Disable - | 1: Enable - - * - | ``GPU_MAX_HEAP_SIZE`` - | Set maximum size of the GPU heap to % of board memory. - - ``100`` - - | Unit: Percentage - - * - | ``GPU_MAX_REMOTE_MEM_SIZE`` - | Maximum size that allows device memory substitution with system. - - ``2`` - - | Unit: kilobyte (KB) - - * - | ``GPU_NUM_MEM_DEPENDENCY`` - | Number of memory objects for dependency tracking. - - ``256`` - - - - * - | ``GPU_STREAMOPS_CP_WAIT`` - | Force the stream memory operation to wait on CP. - - ``0`` - - | 0: Disable - | 1: Enable - - * - | ``HSA_LOCAL_MEMORY_ENABLE`` - | Enable HSA device local memory usage. - - ``1`` - - | 0: Disable - | 1: Enable - - * - | ``PAL_ALWAYS_RESIDENT`` - | Force memory resources to become resident at allocation time. - - ``0`` - - | 0: Disable - | 1: Enable - - * - | ``PAL_PREPINNED_MEMORY_SIZE`` - | Size of prepinned memory. - - ``64`` - - | Unit: kilobyte (KB) - - * - | ``REMOTE_ALLOC`` - | Use remote memory for the global heap allocation. - - ``0`` - - | 0: Disable - | 1: Enable +following table. + +.. include-table:: data/env_variables_hip.rst + :table: hip-env-memory Other useful variables ================================================================================ @@ -175,15 +51,5 @@ Other useful variables The following table lists environment variables that are useful but relate to different features. -.. list-table:: - :header-rows: 1 - :widths: 35,14,51 - - * - **Environment variable** - - **Default value** - - **Value** - - * - | ``HIPRTC_COMPILE_OPTIONS_APPEND`` - | Sets compile options needed for ``hiprtc`` compilation. - - None - - ``--gpu-architecture=gfx906:sramecc+:xnack``, ``-fgpu-rdc`` +.. include-table:: data/env_variables_hip.rst + :table: hip-env-other From f19534a65084d2faefd278727b579c2283ad7448 Mon Sep 17 00:00:00 2001 From: Adel Johar Date: Wed, 15 Jan 2025 13:07:35 +0100 Subject: [PATCH 27/46] Updated description of hipDeviceProp_t::major and hipDeviceProp_t::minor --- include/hip/hip_runtime_api.h | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/include/hip/hip_runtime_api.h b/include/hip/hip_runtime_api.h index 7a88760253..ecb911a3b0 100644 --- a/include/hip/hip_runtime_api.h +++ b/include/hip/hip_runtime_api.h @@ -113,12 +113,15 @@ typedef struct hipDeviceProp_t { int clockRate; ///< Max clock frequency of the multiProcessors in khz. size_t totalConstMem; ///< Size of shared constant memory region on the device ///< (in bytes). - int major; ///< Major compute capability. On HCC, this is an approximation and features may - ///< differ from CUDA CC. See the arch feature flags for portable ways to query - ///< feature caps. - int minor; ///< Minor compute capability. On HCC, this is an approximation and features may - ///< differ from CUDA CC. See the arch feature flags for portable ways to query + int major; ///< Major compute capability version. This indicates the core instruction set + ///< of the GPU architecture. For example, a value of 11 would correspond to + ///< Navi III (RDNA3). See the arch feature flags for portable ways to query ///< feature caps. + int minor; ///< Minor compute capability version. This indicates a particular configuration, + ///< feature set, or variation within the group represented by the major compute + ///< capability version. 
For example, different models within the same major version + ///< might have varying levels of support for certain features or optimizations. + ///< See the arch feature flags for portable ways to query feature caps. size_t textureAlignment; ///< Alignment requirement for textures size_t texturePitchAlignment; ///< Pitch alignment requirement for texture references bound to int deviceOverlap; ///< Deprecated. Use asyncEngineCount instead From d2ed0df2e078846ae3c91bcb056f467cec113240 Mon Sep 17 00:00:00 2001 From: Adel Johar Date: Mon, 3 Feb 2025 09:14:25 +0100 Subject: [PATCH 28/46] Docs: remove virtual_rocr.rst --- .../memory_management/virtual_memory.rst | 20 ++++++----- docs/index.md | 1 - docs/reference/virtual_rocr.rst | 35 ------------------- docs/sphinx/_toc.yml.in | 1 - 4 files changed, 12 insertions(+), 45 deletions(-) delete mode 100644 docs/reference/virtual_rocr.rst diff --git a/docs/how-to/hip_runtime_api/memory_management/virtual_memory.rst b/docs/how-to/hip_runtime_api/memory_management/virtual_memory.rst index 6663c52b82..b771b8c902 100644 --- a/docs/how-to/hip_runtime_api/memory_management/virtual_memory.rst +++ b/docs/how-to/hip_runtime_api/memory_management/virtual_memory.rst @@ -25,6 +25,10 @@ issue of reallocation when the extra buffer runs out. Virtual memory management solves these memory management problems. It helps to reduce memory usage and unnecessary ``memcpy`` calls. +HIP virtual memory management is built on top of HSA, which provides low-level +access to AMD GPU memory. For more details on the underlying HSA runtime, +see :doc:`ROCr documentation ` + .. _memory_allocation_virtual_memory: Memory allocation @@ -43,7 +47,7 @@ Virtual memory management support --------------------------------- The first step is to check if the targeted device or GPU supports virtual memory management. -Use the :cpp:func:`hipDeviceGetAttribute` function to get the +Use the :cpp:func:`hipDeviceGetAttribute` function to get the ``hipDeviceAttributeVirtualMemoryManagementSupported`` attribute for a specific GPU, as shown in the following example. .. code-block:: cpp @@ -246,7 +250,7 @@ The virtual memory management example follows these steps: int main() { int currentDev = 0; - + // Step 1: Check virtual memory management support on device 0 int vmm = 0; HIP_CHECK( @@ -261,10 +265,10 @@ The virtual memory management example follows these steps: std::cout << "GPU 0 doesn't support virtual memory management."; return 0; } - + // Size of memory to allocate size_t size = 4 * 1024; - + // Step 2: Allocate physical memory hipMemGenericAllocationHandle_t allocHandle; hipMemAllocationProp prop = {}; @@ -279,7 +283,7 @@ The virtual memory management example follows these steps: hipMemAllocationGranularityMinimum)); size_t padded_size = ROUND_UP(size, granularity); HIP_CHECK(hipMemCreate(&allocHandle, padded_size * 2, &prop, 0)); - + // Step 3: Reserve a virtual memory address range void* virtualPointer = nullptr; HIP_CHECK(hipMemAddressReserve(&virtualPointer, padded_size, granularity, nullptr, 0)); @@ -306,12 +310,12 @@ The virtual memory management example follows these steps: } else { std::cout << "Failure. 
Value: " << result << std::endl; } - + // Step 7: Launch kernels // Launch zeroAddr kernel zeroAddr<<<1, 1>>>((int*)virtualPointer); HIP_CHECK(hipDeviceSynchronize()); - + // Check zeroAddr kernel result result = 1; HIP_CHECK(hipMemcpy(&result, virtualPointer, sizeof(int), hipMemcpyDeviceToHost)); @@ -354,7 +358,7 @@ Multiple virtual memory mappings can be created using multiple calls to :cpp:func:`hipMemMap` on the same memory allocation. .. note:: - + RDNA cards may not produce correct results, if users access two different virtual addresses that map to the same physical address. In this case, the L1 data caches will be incoherent due to the virtual-to-physical aliasing. diff --git a/docs/index.md b/docs/index.md index 7678aaae79..247c58e2fd 100644 --- a/docs/index.md +++ b/docs/index.md @@ -42,7 +42,6 @@ The HIP documentation is organized into the following categories: :::{grid-item-card} Reference * [HIP runtime API](./reference/hip_runtime_api_reference) -* [HSA runtime API for ROCm](./reference/virtual_rocr) * [HIP math API](./reference/math_api) * [HIP environment variables](./reference/env_variables) * [CUDA to HIP API Function Comparison](./reference/api_syntax) diff --git a/docs/reference/virtual_rocr.rst b/docs/reference/virtual_rocr.rst deleted file mode 100644 index 7510e6f78d..0000000000 --- a/docs/reference/virtual_rocr.rst +++ /dev/null @@ -1,35 +0,0 @@ -.. meta:: - :description: This chapter lists user-mode API interfaces and libraries - necessary for host applications to launch compute kernels to - available HSA ROCm kernel agents. - :keywords: AMD, ROCm, HIP, HSA, ROCR runtime, virtual memory management - -******************************************************************************* -HSA runtime API for ROCm -******************************************************************************* - -The following functions are located in the https://github.com/ROCm/ROCR-Runtime repository. - -.. doxygenfunction:: hsa_amd_vmem_address_reserve - -.. doxygenfunction:: hsa_amd_vmem_address_free - -.. doxygenfunction:: hsa_amd_vmem_handle_create - -.. doxygenfunction:: hsa_amd_vmem_handle_release - -.. doxygenfunction:: hsa_amd_vmem_map - -.. doxygenfunction:: hsa_amd_vmem_unmap - -.. doxygenfunction:: hsa_amd_vmem_set_access - -.. doxygenfunction:: hsa_amd_vmem_get_access - -.. doxygenfunction:: hsa_amd_vmem_export_shareable_handle - -.. doxygenfunction:: hsa_amd_vmem_import_shareable_handle - -.. doxygenfunction:: hsa_amd_vmem_retain_alloc_handle - -.. 
doxygenfunction:: hsa_amd_vmem_get_alloc_properties_from_handle diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in index ed0d7f914d..34050b2448 100644 --- a/docs/sphinx/_toc.yml.in +++ b/docs/sphinx/_toc.yml.in @@ -108,7 +108,6 @@ subtrees: - file: reference/hip_runtime_api/global_defines_enums_structs_files/driver_types - file: doxygen/html/annotated - file: doxygen/html/files - - file: reference/virtual_rocr - file: reference/math_api - file: reference/env_variables - file: reference/api_syntax From a6fb360c21f5f74563de0bf042e705ad47e50f20 Mon Sep 17 00:00:00 2001 From: Istvan Kiss Date: Wed, 12 Feb 2025 11:54:13 +0100 Subject: [PATCH 29/46] Fix documentation warnings --- docs/how-to/hip_runtime_api/asynchronous.rst | 2 +- .../memory_management/device_memory.rst | 48 +++++++++---------- 2 files changed, 25 insertions(+), 25 deletions(-) diff --git a/docs/how-to/hip_runtime_api/asynchronous.rst b/docs/how-to/hip_runtime_api/asynchronous.rst index 81769da48e..82c024969f 100644 --- a/docs/how-to/hip_runtime_api/asynchronous.rst +++ b/docs/how-to/hip_runtime_api/asynchronous.rst @@ -136,7 +136,7 @@ This overlap of computation and data transfer ensures that the GPU is not idle while waiting for data. :cpp:func:`hipMemcpyPeerAsync` enables data transfers between different GPUs, facilitating multi-GPU communication. -:ref:`async_example`` include launching kernels in one stream while performing +:ref:`async_example` include launching kernels in one stream while performing data transfers in another. This technique is especially useful in applications with large data sets that need to be processed quickly. diff --git a/docs/how-to/hip_runtime_api/memory_management/device_memory.rst b/docs/how-to/hip_runtime_api/memory_management/device_memory.rst index 13fba386bb..54651a3f9f 100644 --- a/docs/how-to/hip_runtime_api/memory_management/device_memory.rst +++ b/docs/how-to/hip_runtime_api/memory_management/device_memory.rst @@ -69,34 +69,34 @@ better option, but is also limited in size. .. code-block:: cpp __global__ void kernel_memory_allocation(TYPE* pointer){ - // The pointer is stored in shared memory, so that all - // threads of the block can access the pointer - __shared__ int *memory; - - size_t blockSize = blockDim.x; - constexpr size_t elementsPerThread = 1024; - if(threadIdx.x == 0){ - // allocate memory in one contiguous block - memory = new int[blockDim.x * elementsPerThread]; - } - __syncthreads(); + // The pointer is stored in shared memory, so that all + // threads of the block can access the pointer + __shared__ int *memory; + + size_t blockSize = blockDim.x; + constexpr size_t elementsPerThread = 1024; + if(threadIdx.x == 0){ + // allocate memory in one contiguous block + memory = new int[blockDim.x * elementsPerThread]; + } + __syncthreads(); - // load pointer into thread-local variable to avoid - // unnecessary accesses to shared memory - int *localPtr = memory; + // load pointer into thread-local variable to avoid + // unnecessary accesses to shared memory + int *localPtr = memory; - // work with allocated memory, e.g. initialization - for(int i = 0; i < elementsPerThread; ++i){ - // access in a contiguous way - localPtr[i * blockSize + threadIdx.x] = i; - } + // work with allocated memory, e.g. 
initialization + for(int i = 0; i < elementsPerThread; ++i){ + // access in a contiguous way + localPtr[i * blockSize + threadIdx.x] = i; + } - // synchronize to make sure no thread is accessing the memory before freeing - __syncthreads(); - if(threadIdx.x == 0){ - delete[] memory; + // synchronize to make sure no thread is accessing the memory before freeing + __syncthreads(); + if(threadIdx.x == 0){ + delete[] memory; + } } -} Copying between device and host -------------------------------------------------------------------------------- From 07f8e0fb246837b410a72c08a6ed34a4c3da567e Mon Sep 17 00:00:00 2001 From: Istvan Kiss Date: Wed, 12 Feb 2025 15:00:51 +0100 Subject: [PATCH 30/46] Reformat HIP RTC --- docs/how-to/hip_rtc.md | 535 ----------------------------- docs/how-to/hip_rtc.rst | 726 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 726 insertions(+), 535 deletions(-) delete mode 100644 docs/how-to/hip_rtc.md create mode 100644 docs/how-to/hip_rtc.rst diff --git a/docs/how-to/hip_rtc.md b/docs/how-to/hip_rtc.md deleted file mode 100644 index 14584828be..0000000000 --- a/docs/how-to/hip_rtc.md +++ /dev/null @@ -1,535 +0,0 @@ - - - - - - -# Programming for HIP runtime compiler (RTC) - -HIP lets you compile kernels at runtime with the `hiprtc*` APIs. -Kernels can be stored as a text string and can be passed to HIPRTC APIs alongside options to guide the compilation. - -:::{note} - -* This library can be used on systems without HIP installed nor AMD GPU driver installed at all (offline compilation). Therefore, it doesn't depend on any HIP runtime library. -* This library depends on Code Object Manager (comgr). You can try to statically link comgr into HIPRTC to avoid ambiguity. -* Developers can bundle this library with their application. - -::: - -## Compilation APIs - -To use HIPRTC functionality, HIPRTC header needs to be included first. -`#include ` - -Kernels can be stored in a string: - -```cpp -static constexpr auto kernel_source { -R"( - extern "C" - __global__ void vector_add(float* output, float* input1, float* input2, size_t size) { - int i = threadIdx.x; - if (i < size) { - output[i] = input1[i] + input2[i]; - } - } -)"}; -``` - -Now to compile this kernel, it needs to be associated with `hiprtcProgram` type, which is done by declaring `hiprtcProgram prog;` and associating the string of kernel with this program: - -```cpp -hiprtcCreateProgram(&prog, // HIPRTC program handle - kernel_source, // HIP kernel source string - "vector_add.cpp", // Name of the HIP program, can be null or an empty string - 0, // Number of headers - NULL, // Header sources - NULL); // Name of header files -``` - -`hiprtcCreateProgram` API also allows you to add headers which can be included in your RTC program. -For online compilation, the compiler pre-defines HIP device API functions, HIP specific types and macros for device compilation, but does not include standard C/C++ headers by default. Users can only include header files provided to `hiprtcCreateProgram`. - -After associating the kernel string with `hiprtcProgram`, you can now compile this program using: - -```cpp -hiprtcCompileProgram(prog, // hiprtcProgram - 0, // Number of options - options); // Clang Options [Supported Clang Options](clang_options.md) -``` - -`hiprtcCompileProgram` returns a status value which can be converted to string via `hiprtcGetErrorString`. If compilation is successful, `hiprtcCompileProgram` will return `HIPRTC_SUCCESS`. 
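As an illustrative sketch (not part of the original page), the returned status value can be checked and turned into a readable message with `hiprtcGetErrorString`; the `prog` and `options` variables are assumed from the snippets above:

```cpp
// Sketch only: capture the result of hiprtcCompileProgram and print a
// human-readable description of any failure via hiprtcGetErrorString.
hiprtcResult status = hiprtcCompileProgram(prog, 0, options);
if (status != HIPRTC_SUCCESS) {
    std::cout << "hiprtcCompileProgram failed: "
              << hiprtcGetErrorString(status) << std::endl;
}
```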
- -If the compilation fails, you can look up the logs via: - -```cpp -size_t logSize; -hiprtcGetProgramLogSize(prog, &logSize); - -if (logSize) { - string log(logSize, '\0'); - hiprtcGetProgramLog(prog, &log[0]); - // Corrective action with logs -} -``` - -If the compilation is successful, you can load the compiled binary in a local variable. - -```cpp -size_t codeSize; -hiprtcGetCodeSize(prog, &codeSize); - -vector kernel_binary(codeSize); -hiprtcGetCode(prog, kernel_binary.data()); -``` - -After loading the binary, `hiprtcProgram` can be destroyed. -`hiprtcDestroyProgram(&prog);` - -The binary present in `kernel_binary` can now be loaded via `hipModuleLoadData` API. - -```cpp -hipModule_t module; -hipFunction_t kernel; - -hipModuleLoadData(&module, kernel_binary.data()); -hipModuleGetFunction(&kernel, module, "vector_add"); -``` - -And now this kernel can be launched via `hipModule` APIs. - -The full example is below: - -```cpp -#include -#include - -#include -#include -#include - -#define CHECK_RET_CODE(call, ret_code) \ - { \ - if ((call) != ret_code) { \ - std::cout << "Failed in call: " << #call << std::endl; \ - std::abort(); \ - } \ - } -#define HIP_CHECK(call) CHECK_RET_CODE(call, hipSuccess) -#define HIPRTC_CHECK(call) CHECK_RET_CODE(call, HIPRTC_SUCCESS) - -// source code for hiprtc -static constexpr auto kernel_source{ - R"( - extern "C" - __global__ void vector_add(float* output, float* input1, float* input2, size_t size) { - int i = threadIdx.x; - if (i < size) { - output[i] = input1[i] + input2[i]; - } - } -)"}; - -int main() { - hiprtcProgram prog; - auto rtc_ret_code = hiprtcCreateProgram(&prog, // HIPRTC program handle - kernel_source, // kernel source string - "vector_add.cpp", // Name of the file - 0, // Number of headers - NULL, // Header sources - NULL); // Name of header file - - if (rtc_ret_code != HIPRTC_SUCCESS) { - std::cout << "Failed to create program" << std::endl; - std::abort(); - } - - hipDeviceProp_t props; - int device = 0; - HIP_CHECK(hipGetDeviceProperties(&props, device)); - std::string sarg = std::string("--gpu-architecture=") + - props.gcnArchName; // device for which binary is to be generated - - const char* options[] = {sarg.c_str()}; - - rtc_ret_code = hiprtcCompileProgram(prog, // hiprtcProgram - 0, // Number of options - options); // Clang Options - if (rtc_ret_code != HIPRTC_SUCCESS) { - std::cout << "Failed to create program" << std::endl; - std::abort(); - } - - size_t logSize; - HIPRTC_CHECK(hiprtcGetProgramLogSize(prog, &logSize)); - - if (logSize) { - std::string log(logSize, '\0'); - HIPRTC_CHECK(hiprtcGetProgramLog(prog, &log[0])); - std::cout << "Compilation failed with: " << log << std::endl; - std::abort(); - } - - size_t codeSize; - HIPRTC_CHECK(hiprtcGetCodeSize(prog, &codeSize)); - - std::vector kernel_binary(codeSize); - HIPRTC_CHECK(hiprtcGetCode(prog, kernel_binary.data())); - - HIPRTC_CHECK(hiprtcDestroyProgram(&prog)); - - hipModule_t module; - hipFunction_t kernel; - - HIP_CHECK(hipModuleLoadData(&module, kernel_binary.data())); - HIP_CHECK(hipModuleGetFunction(&kernel, module, "vector_add")); - - constexpr size_t ele_size = 256; // total number of items to add - std::vector hinput, output; - hinput.reserve(ele_size); - output.reserve(ele_size); - for (size_t i = 0; i < ele_size; i++) { - hinput.push_back(static_cast(i + 1)); - output.push_back(0.0f); - } - - float *dinput1, *dinput2, *doutput; - HIP_CHECK(hipMalloc(&dinput1, sizeof(float) * ele_size)); - HIP_CHECK(hipMalloc(&dinput2, sizeof(float) * ele_size)); - 
HIP_CHECK(hipMalloc(&doutput, sizeof(float) * ele_size)); - - HIP_CHECK(hipMemcpy(dinput1, hinput.data(), sizeof(float) * ele_size, hipMemcpyHostToDevice)); - HIP_CHECK(hipMemcpy(dinput2, hinput.data(), sizeof(float) * ele_size, hipMemcpyHostToDevice)); - - struct { - float* output; - float* input1; - float* input2; - size_t size; - } args{doutput, dinput1, dinput2, ele_size}; - - auto size = sizeof(args); - void* config[] = {HIP_LAUNCH_PARAM_BUFFER_POINTER, &args, HIP_LAUNCH_PARAM_BUFFER_SIZE, &size, - HIP_LAUNCH_PARAM_END}; - - HIP_CHECK(hipModuleLaunchKernel(kernel, 1, 1, 1, ele_size, 1, 1, 0, nullptr, nullptr, config)); - - HIP_CHECK(hipMemcpy(output.data(), doutput, sizeof(float) * ele_size, hipMemcpyDeviceToHost)); - - for (size_t i = 0; i < ele_size; i++) { - if ((hinput[i] + hinput[i]) != output[i]) { - std::cout << "Failed in validation: " << (hinput[i] + hinput[i]) << " - " << output[i] - << std::endl; - std::abort(); - } - } - std::cout << "Passed" << std::endl; - - HIP_CHECK(hipFree(dinput1)); - HIP_CHECK(hipFree(dinput2)); - HIP_CHECK(hipFree(doutput)); -} -``` - -## Kernel Compilation Cache - -HIPRTC incorporates a cache to avoid recompiling kernels between program executions. The contents of the cache include the kernel source code (including the contents of any `#include` headers), the compilation flags, and the compiler version. After a ROCm version update, the kernels are progressively recompiled, and the new results are cached. When the cache is disabled, each kernel is recompiled every time it is requested. - -Use the following environment variables to manage the cache status as enabled or disabled, the location for storing the cache contents, and the cache eviction policy: - -* `AMD_COMGR_CACHE` By default this variable has a value of `0` and the compilation cache feature is disabled. To enable the feature set the environment variable to a value of `1` (or any value other than `0`). This behavior may change in a future release. - -* `AMD_COMGR_CACHE_DIR`: By default the value of this environment variable is defined as `$XDG_CACHE_HOME/comgr_cache`, which defaults to `$USER/.cache/comgr_cache` on Linux, and `%LOCALAPPDATA%\cache\comgr_cache` on Windows. You can specify a different directory for the environment variable to change the path for cache storage. If the runtime fails to access the specified cache directory, or the environment variable is set to an empty string (""), the cache is disabled. - -* `AMD_COMGR_CACHE_POLICY`: If assigned a value, the string is interpreted and applied to the cache pruning policy. The string format is consistent with [Clang's ThinLTO cache pruning policy](https://rocm.docs.amd.com/projects/llvm-project/en/latest/LLVM/clang/html/ThinLTO.html#cache-pruning). The default policy is defined as: `prune_interval=1h:prune_expiration=0h:cache_size=75%:cache_size_bytes=30g:cache_size_files=0`. If the runtime fails to parse the defined string, or the environment variable is set to an empty string (""), the cache is disabled. - -:::{note} - This cache is also shared with the OpenCL runtime shipped with ROCm. -::: - -## HIPRTC specific options - -HIPRTC provides a few HIPRTC specific flags - -* `--gpu-architecture` : This flag can guide the code object generation for a specific gpu arch. Example: `--gpu-architecture=gfx906:sramecc+:xnack-`, its equivalent to `--offload-arch`. - * This option is compulsory if compilation is done on a system without AMD GPUs supported by HIP runtime. 
- * Otherwise, HIPRTC will load the hip runtime and gather the current device and its architecture info and use it as option. -* `-fgpu-rdc` : This flag when provided during the `hiprtcCompileProgram` generates the bitcode (HIPRTC doesn't convert this bitcode into ISA and binary). This bitcode can later be fetched using `hiprtcGetBitcode` and `hiprtcGetBitcodeSize` APIs. - -### Bitcode - -In the usual scenario, the kernel associated with `hiprtcProgram` is compiled into the binary which can be loaded and run. However, if `-fpu-rdc` option is provided in the compile options, HIPRTC calls comgr and generates only the LLVM bitcode. It doesn't convert this bitcode to ISA and generate the final binary. - -```cpp -std::string sarg = std::string("-fgpu-rdc"); -const char* options[] = { - sarg.c_str() }; -hiprtcCompileProgram(prog, // hiprtcProgram - 1, // Number of options - options); -``` - -If the compilation is successful, one can load the bitcode in a local variable using the bitcode APIs provided by HIPRTC. - -```cpp -size_t bitCodeSize; -hiprtcGetBitcodeSize(prog, &bitCodeSize); - -vector kernel_bitcode(bitCodeSize); -hiprtcGetBitcode(prog, kernel_bitcode.data()); -``` - -### CU Mode vs WGP mode - -AMD GPUs consist of an array of workgroup processors, each built with 2 compute units (CUs) capable of executing SIMD32. All the CUs inside a workgroup processor use local data share (LDS). - -gfx10+ support execution of wavefront in CU mode and work-group processor mode (WGP). Please refer to section 2.3 of [RDNA3 ISA reference](https://www.amd.com/content/dam/amd/en/documents/radeon-tech-docs/instruction-set-architectures/rdna3-shader-instruction-set-architecture-feb-2023_0.pdf). - -gfx9 and below only supports CU mode. - -In WGP mode, 4 warps of a block can simultaneously be executed on the workgroup processor, where as in CU mode only 2 warps of a block can simultaneously execute on a CU. In theory, WGP mode might help with occupancy and increase the performance of certain HIP programs (if not bound to inter warp communication), but might incur performance penalty on other HIP programs which rely on atomics and inter warp communication. This also has effect of how the LDS is split between warps, please refer to [RDNA3 ISA reference](https://www.amd.com/content/dam/amd/en/documents/radeon-tech-docs/instruction-set-architectures/rdna3-shader-instruction-set-architecture-feb-2023_0.pdf) for more information. - -HIPRTC assumes **WGP mode by default** for gfx10+. This can be overridden by passing `-mcumode` to HIPRTC compile options in `hiprtcCompileProgram`. - -## Linker APIs - -The bitcode generated using the HIPRTC Bitcode APIs can be loaded using `hipModule` APIs and also can be linked with other generated bitcodes with appropriate linker flags using the HIPRTC linker APIs. This also provides more flexibility and optimizations to the applications who want to generate the binary dynamically according to their needs. The input bitcodes can be generated only for a specific architecture or it can be a bundled bitcode which is generated for multiple architectures. - -### Example - -Firstly, HIPRTC link instance or a pending linker invocation must be created using `hiprtcLinkCreate`, with the appropriate linker options provided. 
- -```cpp -hiprtcLinkCreate( num_options, // number of options - options, // Array of options - option_vals, // Array of option values cast to void* - &rtc_link_state ); // HIPRTC link state created upon success -``` - -Following which, the bitcode data can be added to this link instance via `hiprtcLinkAddData` (if the data is present as a string) or `hiprtcLinkAddFile` (if the data is present as a file) with the appropriate input type according to the data or the bitcode used. - -```cpp -hiprtcLinkAddData(rtc_link_state, // HIPRTC link state - input_type, // type of the input data or bitcode - bit_code_ptr, // input data which is null terminated - bit_code_size, // size of the input data - "a", // optional name for this input - 0, // size of the options - 0, // Array of options applied to this input - 0); // Array of option values cast to void* -``` - -```cpp -hiprtcLinkAddFile(rtc_link_state, // HIPRTC link state - input_type, // type of the input data or bitcode - bc_file_path.c_str(), // path to the input file where bitcode is present - 0, // size of the options - 0, // Array of options applied to this input - 0); // Array of option values cast to void* -``` - -Once the bitcodes for multiple architectures are added to the link instance, the linking of the device code must be completed using `hiprtcLinkComplete` which generates the final binary. - -```cpp -hiprtcLinkComplete(rtc_link_state, // HIPRTC link state - &binary, // upon success, points to the output binary - &binarySize); // size of the binary is stored (optional) -``` - -If the `hiprtcLinkComplete` returns successfully, the generated binary can be loaded and run using the `hipModule*` APIs. - -```cpp -hipModuleLoadData(&module, binary); -``` - -#### Note - -* The compiled binary must be loaded before HIPRTC link instance is destroyed using the `hiprtcLinkDestroy` API. - -```cpp -hiprtcLinkDestroy(rtc_link_state); -``` - -* The correct sequence of calls is : `hiprtcLinkCreate`, `hiprtcLinkAddData` or `hiprtcLinkAddFile`, `hiprtcLinkComplete`, `hiprtcModuleLoadData`, `hiprtcLinkDestroy`. - -### Input Types - -HIPRTC provides `hiprtcJITInputType` enumeration type which defines the input types accepted by the Linker APIs. Here are the `enum` values of `hiprtcJITInputType`. However only the input types `HIPRTC_JIT_INPUT_LLVM_BITCODE`, `HIPRTC_JIT_INPUT_LLVM_BUNDLED_BITCODE` and `HIPRTC_JIT_INPUT_LLVM_ARCHIVES_OF_BUNDLED_BITCODE` are supported currently. - -`HIPRTC_JIT_INPUT_LLVM_BITCODE` can be used to load both LLVM bitcode or LLVM IR assembly code. However, `HIPRTC_JIT_INPUT_LLVM_BUNDLED_BITCODE` and `HIPRTC_JIT_INPUT_LLVM_ARCHIVES_OF_BUNDLED_BITCODE` are only for bundled bitcode and archive of bundled bitcode. - -```cpp -HIPRTC_JIT_INPUT_CUBIN = 0, -HIPRTC_JIT_INPUT_PTX, -HIPRTC_JIT_INPUT_FATBINARY, -HIPRTC_JIT_INPUT_OBJECT, -HIPRTC_JIT_INPUT_LIBRARY, -HIPRTC_JIT_INPUT_NVVM, -HIPRTC_JIT_NUM_LEGACY_INPUT_TYPES, -HIPRTC_JIT_INPUT_LLVM_BITCODE = 100, -HIPRTC_JIT_INPUT_LLVM_BUNDLED_BITCODE = 101, -HIPRTC_JIT_INPUT_LLVM_ARCHIVES_OF_BUNDLED_BITCODE = 102, -HIPRTC_JIT_NUM_INPUT_TYPES = (HIPRTC_JIT_NUM_LEGACY_INPUT_TYPES + 3) -``` - -### Backward Compatibility of LLVM Bitcode/IR - -For HIP applications utilizing HIPRTC to compile LLVM bitcode/IR, compatibility is assured only when the ROCm or HIP SDK version used for generating the LLVM bitcode/IR matches the version used during the runtime compilation. 
When an application requires the ingestion of bitcode/IR not derived from the currently installed AMD compiler, it must run with HIPRTC and comgr dynamic libraries that are compatible with the version of the bitcode/IR. - -comgr, a shared library, incorporates the LLVM/Clang compiler that HIPRTC relies on. To identify the bitcode/IR version that comgr is compatible with, one can execute "clang -v" using the clang binary from the same ROCm or HIP SDK package. For instance, if compiling bitcode/IR version 14, the HIPRTC and comgr libraries released by AMD around mid 2022 would be the best choice, assuming the LLVM/Clang version included in the package is also version 14. - -To ensure smooth operation and compatibility, an application may choose to ship the specific versions of HIPRTC and comgr dynamic libraries, or it may opt to clearly specify the version requirements and dependencies. This approach guarantees that the application can correctly compile the specified version of bitcode/IR. - -### Link Options - -* `HIPRTC_JIT_IR_TO_ISA_OPT_EXT` - AMD Only. Options to be passed on to link step of compiler by `hiprtcLinkCreate`. -* `HIPRTC_JIT_IR_TO_ISA_OPT_COUNT_EXT` - AMD Only. Count of options passed on to link step of compiler. - -Example: - -```cpp -const char* isaopts[] = {"-mllvm", "-inline-threshold=1", "-mllvm", "-inlinehint-threshold=1"}; -std::vector jit_options = {HIPRTC_JIT_IR_TO_ISA_OPT_EXT, - HIPRTC_JIT_IR_TO_ISA_OPT_COUNT_EXT}; -size_t isaoptssize = 4; -const void* lopts[] = {(void*)isaopts, (void*)(isaoptssize)}; -hiprtcLinkState linkstate; -hiprtcLinkCreate(2, jit_options.data(), (void**)lopts, &linkstate); -``` - -## Error Handling - -HIPRTC defines the `hiprtcResult` enumeration type and a function `hiprtcGetErrorString` for API call error handling. `hiprtcResult` `enum` defines the API result codes. HIPRTC APIs return `hiprtcResult` to indicate the call result. `hiprtcGetErrorString` function returns a string describing the given `hiprtcResult` code, e.g., HIPRTC_SUCCESS to "HIPRTC_SUCCESS". For unrecognized enumeration values, it returns "Invalid HIPRTC error code". - -`hiprtcResult` `enum` supported values and the `hiprtcGetErrorString` usage are mentioned below. - -```cpp -HIPRTC_SUCCESS = 0, -HIPRTC_ERROR_OUT_OF_MEMORY = 1, -HIPRTC_ERROR_PROGRAM_CREATION_FAILURE = 2, -HIPRTC_ERROR_INVALID_INPUT = 3, -HIPRTC_ERROR_INVALID_PROGRAM = 4, -HIPRTC_ERROR_INVALID_OPTION = 5, -HIPRTC_ERROR_COMPILATION = 6, -HIPRTC_ERROR_LINKING = 7, -HIPRTC_ERROR_BUILTIN_OPERATION_FAILURE = 8, -HIPRTC_ERROR_NO_NAME_EXPRESSIONS_AFTER_COMPILATION = 9, -HIPRTC_ERROR_NO_LOWERED_NAMES_BEFORE_COMPILATION = 10, -HIPRTC_ERROR_NAME_EXPRESSION_NOT_VALID = 11, -HIPRTC_ERROR_INTERNAL_ERROR = 12 -``` - -```cpp -hiprtcResult result; -result = hiprtcCompileProgram(prog, 1, opts); -if (result != HIPRTC_SUCCESS) { -std::cout << "hiprtcCompileProgram fails with error " << hiprtcGetErrorString(result); -} -``` - -## HIPRTC General APIs - -HIPRTC provides the following API for querying the version. - -`hiprtcVersion(int* major, int* minor)` - This sets the output parameters major and minor with the HIP Runtime compilation major version and minor version number respectively. - -Currently, it returns hardcoded value. This should be implemented to return HIP runtime major and minor version in the future releases. - -## Lowered Names (Mangled Names) - -HIPRTC mangles the `__global__` function names and names of `__device__` and `__constant__` variables. 
If the generated binary is being loaded using the HIP Runtime API, the kernel function or `__device__/__constant__` variable must be looked up by name, but this is very hard when the name has been mangled. To overcome this, HIPRTC provides API functions that map `__global__` function or `__device__/__constant__` variable names in the source to the mangled names present in the generated binary. - -The two APIs `hiprtcAddNameExpression` and `hiprtcGetLoweredName` provide this functionality. First, a 'name expression' string denoting the address for the `__global__` function or `__device__/__constant__` variable is provided to `hiprtcAddNameExpression`. Then, the program is compiled with `hiprtcCompileProgram`. During compilation, HIPRTC will parse the name expression string as a C++ constant expression at the end of the user program. Finally, the function `hiprtcGetLoweredName` is called with the original name expression and it returns a pointer to the lowered name. The lowered name can be used to refer to the kernel or variable in the HIP Runtime API. - -### Note - -* The identical name expression string must be provided on a subsequent call to `hiprtcGetLoweredName` to extract the lowered name. -* The correct sequence of calls is : `hiprtcAddNameExpression`, `hiprtcCompileProgram`, `hiprtcGetLoweredName`, `hiprtcDestroyProgram`. -* The lowered names must be fetched using `hiprtcGetLoweredName` only after the HIPRTC program has been compiled, and before it has been destroyed. - -### Example - -kernel containing various definitions `__global__` functions/function templates and `__device__/__constant__` variables can be stored in a string. - -```cpp -static constexpr const char gpu_program[] { -R"( -__device__ int V1; // set from host code -static __global__ void f1(int *result) { *result = V1 + 10; } -namespace N1 { -namespace N2 { -__constant__ int V2; // set from host code -__global__ void f2(int *result) { *result = V2 + 20; } -} -} -template -__global__ void f3(int *result) { *result = sizeof(T); } -)"}; -``` - -`hiprtcAddNameExpression` is called with various name expressions referring to the address of `__global__` functions and `__device__/__constant__` variables. - -```cpp -kernel_name_vec.push_back("&f1"); -kernel_name_vec.push_back("N1::N2::f2"); -kernel_name_vec.push_back("f3"); -for (auto&& x : kernel_name_vec) hiprtcAddNameExpression(prog, x.c_str()); -variable_name_vec.push_back("&V1"); -variable_name_vec.push_back("&N1::N2::V2"); -for (auto&& x : variable_name_vec) hiprtcAddNameExpression(prog, x.c_str()); -``` - -After which, the program is compiled using `hiprtcCompileProgram` and the generated binary is loaded using `hipModuleLoadData`. And the mangled names can be fetched using `hirtcGetLoweredName`. - -```cpp -for (decltype(variable_name_vec.size()) i = 0; i != variable_name_vec.size(); ++i) { - const char* name; - hiprtcGetLoweredName(prog, variable_name_vec[i].c_str(), &name); -} -``` - -```cpp -for (decltype(kernel_name_vec.size()) i = 0; i != kernel_name_vec.size(); ++i) { - const char* name; - hiprtcGetLoweredName(prog, kernel_name_vec[i].c_str(), &name); -} -``` - -The mangled name of the variables are used to look up the variable in the module and update its value. - -```cpp -hipDeviceptr_t variable_addr; -size_t bytes{}; -hipModuleGetGlobal(&variable_addr, &bytes, module, name); -hipMemcpyHtoD(variable_addr, &initial_value, sizeof(initial_value)); -``` - -Finally, the mangled name of the kernel is used to launch it using the `hipModule` APIs. 
- -```cpp -hipFunction_t kernel; -hipModuleGetFunction(&kernel, module, name); -hipModuleLaunchKernel(kernel, 1, 1, 1, 1, 1, 1, 0, nullptr, nullptr, config); -``` - -Please have a look at `hiprtcGetLoweredName.cpp` for the detailed example. - -## Versioning - -HIPRTC follows the below versioning. - -* Linux - * HIPRTC follows the same versioning as HIP runtime library. - * The `so` name field for the shared library is set to MAJOR version. For example, for HIP 5.3 the `so` name is set to 5 (`hiprtc.so.5`). -* Windows - * HIPRTC dll is named as `hiprtcXXYY.dll` where XX is MAJOR version and YY is MINOR version. For example, for HIP 5.3 the name is `hiprtc0503.dll`. - -## HIP header support - -* Added HIPRTC support for all the hip common header files such as library_types.h, hip_math_constants.h, hip_complex.h, math_functions.h, surface_types.h etc. from 6.1. HIPRTC users need not include any HIP macros or constants explicitly in their header files. All of these should get included via HIPRTC builtins when the app links to HIPRTC library. - -## Deprecation notice - -* Currently HIPRTC APIs are separated from HIP APIs and HIPRTC is available as a separate library `libhiprtc.so`/`libhiprtc.dll`. But on Linux, HIPRTC symbols are also present in `libamdhip64.so` in order to support the existing applications. Gradually, these symbols will be removed from HIP library and applications using HIPRTC will be required to explicitly link to HIPRTC library. However, on Windows `hiprtc.dll` must be used as the `amdhip64.dll` doesn't contain the HIPRTC symbols. -* Data types such as `uint32_t`, `uint64_t`, `int32_t`, `int64_t` defined in std namespace in HIPRTC are deprecated earlier and are being removed from ROCm release 6.1 since these can conflict with the standard C++ data types. These data types are now prefixed with `__hip__`, e.g. `__hip_uint32_t`. Applications previously using `std::uint32_t` or similar types can use `__hip_` prefixed types to avoid conflicts with standard std namespace or application can have their own definitions for these types. Also, type_traits templates previously defined in std namespace are moved to `__hip_internal` namespace as implementation details. diff --git a/docs/how-to/hip_rtc.rst b/docs/how-to/hip_rtc.rst new file mode 100644 index 0000000000..734bf60284 --- /dev/null +++ b/docs/how-to/hip_rtc.rst @@ -0,0 +1,726 @@ +.. meta:: + :description: HIP runtime compiler (RTC) + :keywords: AMD, ROCm, HIP, CUDA, RTC, HIP runtime compiler + +.. _hip_runtime_compiler_how-to: + +******************************************************************************* +Programming for HIP runtime compiler (RTC) +******************************************************************************* + +HIP supports the kernels compilation at runtime with the ``hiprtc*`` APIs. +Kernels can be stored as a text string and can be passed to HIPRTC APIs +alongside options to guide the compilation. + +.. note:: + + * This library can be used for compilation on systems without AMD GPU drivers + installed (offline compilation). However, running the compiled code still + requires both the HIP runtime library and GPU drivers on the target system. + * This library depends on Code Object Manager (comgr). You can try to + statically link comgr into HIPRTC to avoid ambiguity. + * Developers can bundle this library with their application. + +Compilation APIs +=============================================================================== + +To use HIPRTC functionality the header needs to be included: + +.. 
code-block:: cpp + + #include + +Kernels can be stored in a string: + +.. code-block:: cpp + + static constexpr auto kernel_source { + R"( + extern "C" + __global__ void vector_add(float* output, float* input1, float* input2, size_t size) { + int i = threadIdx.x; + if (i < size) { + output[i] = input1[i] + input2[i]; + } + } + )"}; + +To compile this kernel, it needs to be associated with +:cpp:struct:`hiprtcProgram` type, which is done by declaring :code:`hiprtcProgram prog;` +and associating the string of kernel with this program: + +.. code-block:: cpp + + hiprtcCreateProgram(&prog, // HIPRTC program handle + kernel_source, // HIP kernel source string + "vector_add.cpp", // Name of the HIP program, can be null or an empty string + 0, // Number of headers + NULL, // Header sources + NULL); // Name of header files + +:cpp:func:`hiprtcCreateProgram` API also allows you to add headers which can be +included in your RTC program. For online compilation, the compiler pre-defines +HIP device API functions, HIP specific types and macros for device compilation, +but doesn't include standard C/C++ headers by default. Users can only include +header files provided to :cpp:func:`hiprtcCreateProgram`. + +After associating the kernel string with :cpp:struct:`hiprtcProgram`, you can +now compile this program using: + +.. code-block:: cpp + + hiprtcCompileProgram(prog, // hiprtcProgram + 0, // Number of options + options); // Clang Options [Supported Clang Options](clang_options.md) + +:cpp:func:`hiprtcCompileProgram` returns a status value which can be converted +to string via :cpp:func:`hiprtcGetErrorString`. If compilation is successful, +:cpp:func:`hiprtcCompileProgram` will return ``HIPRTC_SUCCESS``. + +if the compilation fails or produces warnings, you can look up the logs via: + +.. code-block:: cpp + + size_t logSize; + hiprtcGetProgramLogSize(prog, &logSize); + + if (logSize) { + string log(logSize, '\0'); + hiprtcGetProgramLog(prog, &log[0]); + // Corrective action with logs + } + +If the compilation is successful, you can load the compiled binary in a local +variable. + +.. code-block:: cpp + + size_t codeSize; + hiprtcGetCodeSize(prog, &codeSize); + + vector kernel_binary(codeSize); + hiprtcGetCode(prog, kernel_binary.data()); + +After loading the binary, :cpp:struct:`hiprtcProgram` can be destroyed. +:code:`hiprtcDestroyProgram(&prog);` + +The binary present in ``kernel_binary`` can now be loaded via +:cpp:func:`hipModuleLoadData` API. + +.. code-block:: cpp + + hipModule_t module; + hipFunction_t kernel; + + hipModuleLoadData(&module, kernel_binary.data()); + hipModuleGetFunction(&kernel, module, "vector_add"); + +And now this kernel can be launched via ``hipModule`` APIs. + +The full example is below: + +.. 
code-block:: cpp + + #include + #include + + #include + #include + #include + + #define CHECK_RET_CODE(call, ret_code) \ + { \ + if ((call) != ret_code) { \ + std::cout << "Failed in call: " << #call << std::endl; \ + std::abort(); \ + } \ + } + #define HIP_CHECK(call) CHECK_RET_CODE(call, hipSuccess) + #define HIPRTC_CHECK(call) CHECK_RET_CODE(call, HIPRTC_SUCCESS) + + // source code for hiprtc + static constexpr auto kernel_source{ + R"( + extern "C" + __global__ void vector_add(float* output, float* input1, float* input2, size_t size) { + int i = threadIdx.x; + if (i < size) { + output[i] = input1[i] + input2[i]; + } + } + )"}; + + int main() { + hiprtcProgram prog; + auto rtc_ret_code = hiprtcCreateProgram(&prog, // HIPRTC program handle + kernel_source, // kernel source string + "vector_add.cpp", // Name of the file + 0, // Number of headers + NULL, // Header sources + NULL); // Name of header file + + if (rtc_ret_code != HIPRTC_SUCCESS) { + std::cout << "Failed to create program" << std::endl; + std::abort(); + } + + hipDeviceProp_t props; + int device = 0; + HIP_CHECK(hipGetDeviceProperties(&props, device)); + std::string sarg = std::string("--gpu-architecture=") + + props.gcnArchName; // device for which binary is to be generated + + const char* options[] = {sarg.c_str()}; + + rtc_ret_code = hiprtcCompileProgram(prog, // hiprtcProgram + 0, // Number of options + options); // Clang Options + if (rtc_ret_code != HIPRTC_SUCCESS) { + std::cout << "Failed to create program" << std::endl; + std::abort(); + } + + size_t logSize; + HIPRTC_CHECK(hiprtcGetProgramLogSize(prog, &logSize)); + + if (logSize) { + std::string log(logSize, '\0'); + HIPRTC_CHECK(hiprtcGetProgramLog(prog, &log[0])); + std::cout << "Compilation failed or produced warnings: " << log << std::endl; + std::abort(); + } + + size_t codeSize; + HIPRTC_CHECK(hiprtcGetCodeSize(prog, &codeSize)); + + std::vector kernel_binary(codeSize); + HIPRTC_CHECK(hiprtcGetCode(prog, kernel_binary.data())); + + HIPRTC_CHECK(hiprtcDestroyProgram(&prog)); + + hipModule_t module; + hipFunction_t kernel; + + HIP_CHECK(hipModuleLoadData(&module, kernel_binary.data())); + HIP_CHECK(hipModuleGetFunction(&kernel, module, "vector_add")); + + constexpr size_t ele_size = 256; // total number of items to add + std::vector hinput, output; + hinput.reserve(ele_size); + output.reserve(ele_size); + for (size_t i = 0; i < ele_size; i++) { + hinput.push_back(static_cast(i + 1)); + output.push_back(0.0f); + } + + float *dinput1, *dinput2, *doutput; + HIP_CHECK(hipMalloc(&dinput1, sizeof(float) * ele_size)); + HIP_CHECK(hipMalloc(&dinput2, sizeof(float) * ele_size)); + HIP_CHECK(hipMalloc(&doutput, sizeof(float) * ele_size)); + + HIP_CHECK(hipMemcpy(dinput1, hinput.data(), sizeof(float) * ele_size, hipMemcpyHostToDevice)); + HIP_CHECK(hipMemcpy(dinput2, hinput.data(), sizeof(float) * ele_size, hipMemcpyHostToDevice)); + + struct { + float* output; + float* input1; + float* input2; + size_t size; + } args{doutput, dinput1, dinput2, ele_size}; + + auto size = sizeof(args); + void* config[] = {HIP_LAUNCH_PARAM_BUFFER_POINTER, &args, HIP_LAUNCH_PARAM_BUFFER_SIZE, &size, + HIP_LAUNCH_PARAM_END}; + + HIP_CHECK(hipModuleLaunchKernel(kernel, 1, 1, 1, ele_size, 1, 1, 0, nullptr, nullptr, config)); + + HIP_CHECK(hipMemcpy(output.data(), doutput, sizeof(float) * ele_size, hipMemcpyDeviceToHost)); + + for (size_t i = 0; i < ele_size; i++) { + if ((hinput[i] + hinput[i]) != output[i]) { + std::cout << "Failed in validation: " << (hinput[i] + hinput[i]) << " - " << 
output[i] + << std::endl; + std::abort(); + } + } + std::cout << "Passed" << std::endl; + + HIP_CHECK(hipFree(dinput1)); + HIP_CHECK(hipFree(dinput2)); + HIP_CHECK(hipFree(doutput)); + } + + +Kernel Compilation Cache +=============================================================================== + +HIPRTC incorporates a cache to avoid recompiling kernels between program +executions. The contents of the cache include the kernel source code (including +the contents of any ``#include`` headers), the compilation flags, and the +compiler version. After a ROCm version update, the kernels are progressively +recompiled, and the new results are cached. When the cache is disabled, each +kernel is recompiled every time it is requested. + +Use the following environment variables to manage the cache status as enabled or +disabled, the location for storing the cache contents, and the cache eviction +policy: + +* ``AMD_COMGR_CACHE`` By default this variable has a value of ``0`` and the + compilation cache feature is disabled. To enable the feature set the + environment variable to a value of ``1`` (or any value other than ``0``). + +* ``AMD_COMGR_CACHE_DIR``: By default the value of this environment variable is + defined as ``$XDG_CACHE_HOME/comgr_cache``, which defaults to + ``$USER/.cache/comgr_cache`` on Linux, and ``%LOCALAPPDATA%\cache\comgr_cache`` + on Windows. You can specify a different directory for the environment variable + to change the path for cache storage. If the runtime fails to access the + specified cache directory, or the environment variable is set to an empty + string (""), the cache is disabled. + +* ``AMD_COMGR_CACHE_POLICY``: If assigned a value, the string is interpreted and + applied to the cache pruning policy. The string format is consistent with + `Clang's ThinLTO cache pruning policy `_. + The default policy is defined as: + ``prune_interval=1h:prune_expiration=0h:cache_size=75%:cache_size_bytes=30g:cache_size_files=0``. + If the runtime fails to parse the defined string, or the environment variable + is set to an empty string (""), the cache is disabled. + +.. note:: + + This cache is also shared with the OpenCL runtime shipped with ROCm. + +HIPRTC specific options +=============================================================================== + +HIPRTC provides a few HIPRTC specific flags: + +* ``--gpu-architecture`` : This flag can guide the code object generation for a + specific GPU architecture. Example: + ``--gpu-architecture=gfx906:sramecc+:xnack-``, its equivalent to + ``--offload-arch``. + + * This option is compulsory if compilation is done on a system without AMD + GPUs supported by HIP runtime. + + * Otherwise, HIPRTC will load the hip runtime and gather the current device + and its architecture info and use it as option. + +* ``-fgpu-rdc`` : This flag when provided during the + :cpp:func:`hiprtcCreateProgram` generates the bitcode (HIPRTC doesn't convert + this bitcode into ISA and binary). This bitcode can later be fetched using + :cpp:func:`hiprtcGetBitcode` and :cpp:func:`hiprtcGetBitcodeSize` APIs. + +Bitcode +------------------------------------------------------------------------------- + +In the usual scenario, the kernel associated with :cpp:struct:`hiprtcProgram` is +compiled into the binary which can be loaded and run. However, if ``-fgpu-rdc`` +option is provided in the compile options, HIPRTC calls comgr and generates only +the LLVM bitcode. It doesn't convert this bitcode to ISA and generate the final +binary. + +.. 
code-block:: cpp + + std::string sarg = std::string("-fgpu-rdc"); + const char* options[] = { + sarg.c_str() }; + hiprtcCompileProgram(prog, // hiprtcProgram + 1, // Number of options + options); + +If the compilation is successful, one can load the bitcode in a local variable +using the bitcode APIs provided by HIPRTC. + +.. code-block:: cpp + + size_t bitCodeSize; + hiprtcGetBitcodeSize(prog, &bitCodeSize); + + vector kernel_bitcode(bitCodeSize); + hiprtcGetBitcode(prog, kernel_bitcode.data()); + +CU Mode vs WGP mode +------------------------------------------------------------------------------- + +AMD GPUs consist of an array of workgroup processors, each built with 2 compute +units (CUs) capable of executing SIMD32. All the CUs inside a workgroup +processor use local data share (LDS). + +gfx10+ support execution of wavefront in CU mode and work-group processor mode +(WGP). Please refer to section 2.3 of `RDNA3 ISA reference `_. + +gfx9 and below only supports CU mode. + +In WGP mode, 4 warps of a block can simultaneously be executed on the workgroup +processor, where as in CU mode only 2 warps of a block can simultaneously +execute on a CU. In theory, WGP mode might help with occupancy and increase the +performance of certain HIP programs (if not bound to inter warp communication), +but might incur performance penalty on other HIP programs which rely on atomics +and inter warp communication. This also has effect of how the LDS is split +between warps, please refer to `RDNA3 ISA reference `_ for more information. + +.. note:: + + HIPRTC assumes **WGP mode by default** for gfx10+. This can be overridden by + passing ``-mcumode`` to HIPRTC compile options in + :cpp:func:`hiprtcCompileProgram`. + +Linker APIs +=============================================================================== + +The bitcode generated using the HIPRTC Bitcode APIs can be loaded using +``hipModule`` APIs and also can be linked with other generated bitcodes with +appropriate linker flags using the HIPRTC linker APIs. This also provides more +flexibility and optimizations to the applications who want to generate the +binary dynamically according to their needs. The input bitcodes can be generated +only for a specific architecture or it can be a bundled bitcode which is +generated for multiple architectures. + +Example +------------------------------------------------------------------------------- + +Firstly, HIPRTC link instance or a pending linker invocation must be created +using :cpp:func:`hiprtcLinkCreate`, with the appropriate linker options +provided. + +.. code-block:: cpp + + hiprtcLinkCreate( num_options, // number of options + options, // Array of options + option_vals, // Array of option values cast to void* + &rtc_link_state ); // HIPRTC link state created upon success + +Following which, the bitcode data can be added to this link instance via +:cpp:func:`hiprtcLinkAddData` (if the data is present as a string) or +:cpp:func:`hiprtcLinkAddFile` (if the data is present as a file) with the +appropriate input type according to the data or the bitcode used. + +.. code-block:: cpp + + hiprtcLinkAddData(rtc_link_state, // HIPRTC link state + input_type, // type of the input data or bitcode + bit_code_ptr, // input data which is null terminated + bit_code_size, // size of the input data + "a", // optional name for this input + 0, // size of the options + 0, // Array of options applied to this input + 0); // Array of option values cast to void* + +.. 
code-block:: cpp + + hiprtcLinkAddFile(rtc_link_state, // HIPRTC link state + input_type, // type of the input data or bitcode + bc_file_path.c_str(), // path to the input file where bitcode is present + 0, // size of the options + 0, // Array of options applied to this input + 0); // Array of option values cast to void* + +Once the bitcodes for multiple architectures are added to the link instance, the +linking of the device code must be completed using :cpp:func:`hiprtcLinkComplete` +which generates the final binary. + +.. code-block:: cpp + + hiprtcLinkComplete(rtc_link_state, // HIPRTC link state + &binary, // upon success, points to the output binary + &binarySize); // size of the binary is stored (optional) + +If the :cpp:func:`hiprtcLinkComplete` returns successfully, the generated binary +can be loaded and run using the ``hipModule*`` APIs. + +.. code-block:: cpp + + hipModuleLoadData(&module, binary); + +.. note:: + + * The compiled binary must be loaded before HIPRTC link instance is destroyed + using the :cpp:func:`hiprtcLinkDestroy` API. + + .. code-block:: cpp + + hiprtcLinkDestroy(rtc_link_state); + + * The correct sequence of calls is : :cpp:func:`hiprtcLinkCreate`, + :cpp:func:`hiprtcLinkAddData` or :cpp:func:`hiprtcLinkAddFile`, + :cpp:func:`hiprtcLinkComplete`, :cpp:func:`hipModuleLoadData`, + :cpp:func:`hiprtcLinkDestroy`. + +Input Types +------------------------------------------------------------------------------- + +HIPRTC provides ``hiprtcJITInputType`` enumeration type which defines the input +types accepted by the Linker APIs. Here are the ``enum`` values of +``hiprtcJITInputType``. However only the input types +``HIPRTC_JIT_INPUT_LLVM_BITCODE``, ``HIPRTC_JIT_INPUT_LLVM_BUNDLED_BITCODE`` and +``HIPRTC_JIT_INPUT_LLVM_ARCHIVES_OF_BUNDLED_BITCODE`` are supported currently. + +``HIPRTC_JIT_INPUT_LLVM_BITCODE`` can be used to load both LLVM bitcode or LLVM +IR assembly code. However, ``HIPRTC_JIT_INPUT_LLVM_BUNDLED_BITCODE`` and +``HIPRTC_JIT_INPUT_LLVM_ARCHIVES_OF_BUNDLED_BITCODE`` are only for bundled +bitcode and archive of bundled bitcode. + +.. code-block:: cpp + + HIPRTC_JIT_INPUT_CUBIN = 0, + HIPRTC_JIT_INPUT_PTX, + HIPRTC_JIT_INPUT_FATBINARY, + HIPRTC_JIT_INPUT_OBJECT, + HIPRTC_JIT_INPUT_LIBRARY, + HIPRTC_JIT_INPUT_NVVM, + HIPRTC_JIT_NUM_LEGACY_INPUT_TYPES, + HIPRTC_JIT_INPUT_LLVM_BITCODE = 100, + HIPRTC_JIT_INPUT_LLVM_BUNDLED_BITCODE = 101, + HIPRTC_JIT_INPUT_LLVM_ARCHIVES_OF_BUNDLED_BITCODE = 102, + HIPRTC_JIT_NUM_INPUT_TYPES = (HIPRTC_JIT_NUM_LEGACY_INPUT_TYPES + 3) + +Backward Compatibility of LLVM Bitcode/IR +------------------------------------------------------------------------------- + +For HIP applications utilizing HIPRTC to compile LLVM bitcode/IR, compatibility +is assured only when the ROCm or HIP SDK version used for generating the LLVM +bitcode/IR matches the version used during the runtime compilation. When an +application requires the ingestion of bitcode/IR not derived from the currently +installed AMD compiler, it must run with HIPRTC and comgr dynamic libraries that +are compatible with the version of the bitcode/IR. + +`Comgr `_ is a +shared library that incorporates the LLVM/Clang compiler that HIPRTC relies on. +To identify the bitcode/IR version that comgr is compatible with, one can +execute "clang -v" using the clang binary from the same ROCm or HIP SDK package. 
+For instance, if compiling bitcode/IR version 14, the HIPRTC and comgr libraries +released by AMD around mid 2022 would be the best choice, assuming the +LLVM/Clang version included in the package is also version 14. + +To ensure smooth operation and compatibility, an application may choose to ship +the specific versions of HIPRTC and comgr dynamic libraries, or it may opt to +clearly specify the version requirements and dependencies. This approach +guarantees that the application can correctly compile the specified version of +bitcode/IR. + +Link Options +------------------------------------------------------------------------------- + +* ``HIPRTC_JIT_IR_TO_ISA_OPT_EXT`` - AMD Only. Options to be passed on to link + step of compiler by :cpp:func:`hiprtcLinkCreate`. + +* ``HIPRTC_JIT_IR_TO_ISA_OPT_COUNT_EXT`` - AMD Only. Count of options passed on + to link step of compiler. + +Example: + +.. code-block:: cpp + + const char* isaopts[] = {"-mllvm", "-inline-threshold=1", "-mllvm", "-inlinehint-threshold=1"}; + std::vector jit_options = {HIPRTC_JIT_IR_TO_ISA_OPT_EXT, + HIPRTC_JIT_IR_TO_ISA_OPT_COUNT_EXT}; + size_t isaoptssize = 4; + const void* lopts[] = {(void*)isaopts, (void*)(isaoptssize)}; + hiprtcLinkState linkstate; + hiprtcLinkCreate(2, jit_options.data(), (void**)lopts, &linkstate); + +Error Handling +=============================================================================== + +HIPRTC defines the ``hiprtcResult`` enumeration type and a function +:cpp:func:`hiprtcGetErrorString` for API call error handling. ``hiprtcResult`` +``enum`` defines the API result codes. HIPRTC APIs return ``hiprtcResult`` to +indicate the call result. :cpp:func:`hiprtcGetErrorString` function returns a +string describing the given ``hiprtcResult`` code, for example HIPRTC_SUCCESS to +"HIPRTC_SUCCESS". For unrecognized enumeration values, it returns +"Invalid HIPRTC error code". + +``hiprtcResult`` ``enum`` supported values and the +:cpp:func:`hiprtcGetErrorString` usage are mentioned below. + +.. code-block:: cpp + + HIPRTC_SUCCESS = 0, + HIPRTC_ERROR_OUT_OF_MEMORY = 1, + HIPRTC_ERROR_PROGRAM_CREATION_FAILURE = 2, + HIPRTC_ERROR_INVALID_INPUT = 3, + HIPRTC_ERROR_INVALID_PROGRAM = 4, + HIPRTC_ERROR_INVALID_OPTION = 5, + HIPRTC_ERROR_COMPILATION = 6, + HIPRTC_ERROR_LINKING = 7, + HIPRTC_ERROR_BUILTIN_OPERATION_FAILURE = 8, + HIPRTC_ERROR_NO_NAME_EXPRESSIONS_AFTER_COMPILATION = 9, + HIPRTC_ERROR_NO_LOWERED_NAMES_BEFORE_COMPILATION = 10, + HIPRTC_ERROR_NAME_EXPRESSION_NOT_VALID = 11, + HIPRTC_ERROR_INTERNAL_ERROR = 12 + +.. code-block:: cpp + + hiprtcResult result; + result = hiprtcCompileProgram(prog, 1, opts); + if (result != HIPRTC_SUCCESS) { + std::cout << "hiprtcCompileProgram fails with error " << hiprtcGetErrorString(result); + } + +HIPRTC General APIs +=============================================================================== + +HIPRTC provides ``hiprtcVersion(int* major, int* minor)`` for querying the +version. This sets the output parameters major and minor with the HIP Runtime +compilation major version and minor version number respectively. + +Currently, it returns hardcoded values. This should be implemented to return HIP +runtime major and minor version in the future releases. + +Lowered Names (Mangled Names) +=============================================================================== + +HIPRTC mangles the ``__global__`` function names and names of ``__device__`` and +``__constant__`` variables. 
If the generated binary is loaded using the
HIP Runtime API, the kernel function or ``__device__/__constant__`` variable
must be looked up by name, which is difficult once the name has been mangled.
To overcome this, HIPRTC provides API functions that map ``__global__`` function
or ``__device__/__constant__`` variable names in the source to the mangled names
present in the generated binary.

The two APIs :cpp:func:`hiprtcAddNameExpression` and
:cpp:func:`hiprtcGetLoweredName` provide this functionality. First, a 'name
expression' string denoting the address of the ``__global__`` function or
``__device__/__constant__`` variable is provided to
:cpp:func:`hiprtcAddNameExpression`. Then, the program is compiled with
:cpp:func:`hiprtcCompileProgram`. During compilation, HIPRTC parses the name
expression string as a C++ constant expression at the end of the user program.
Finally, the function :cpp:func:`hiprtcGetLoweredName` is called with the
original name expression and it returns a pointer to the lowered name. The
lowered name can be used to refer to the kernel or variable in the HIP Runtime
API.

.. note::

   * The identical name expression string must be provided on a subsequent call
     to :cpp:func:`hiprtcGetLoweredName` to extract the lowered name.

   * The correct sequence of calls is: :cpp:func:`hiprtcAddNameExpression`,
     :cpp:func:`hiprtcCompileProgram`, :cpp:func:`hiprtcGetLoweredName`,
     :cpp:func:`hiprtcDestroyProgram`.

   * The lowered names must be fetched using :cpp:func:`hiprtcGetLoweredName`
     only after the HIPRTC program has been compiled, and before it has been
     destroyed.

Example
-------------------------------------------------------------------------------

A kernel containing definitions of ``__global__`` functions, function templates,
and ``__device__/__constant__`` variables can be stored in a string:

.. code-block:: cpp

   static constexpr const char gpu_program[]{
       R"(
           __device__ int V1; // set from host code
           static __global__ void f1(int *result) { *result = V1 + 10; }
           namespace N1 {
               namespace N2 {
                   __constant__ int V2; // set from host code
                   __global__ void f2(int *result) { *result = V2 + 20; }
               }
           }
           template<typename T>
           __global__ void f3(int *result) { *result = sizeof(T); }
       )"};

:cpp:func:`hiprtcAddNameExpression` is called with various name expressions
referring to the address of ``__global__`` functions and
``__device__/__constant__`` variables:

.. code-block:: cpp

   kernel_name_vec.push_back("&f1");
   kernel_name_vec.push_back("N1::N2::f2");
   kernel_name_vec.push_back("f3<int>");
   for (auto&& x : kernel_name_vec) hiprtcAddNameExpression(prog, x.c_str());
   variable_name_vec.push_back("&V1");
   variable_name_vec.push_back("&N1::N2::V2");
   for (auto&& x : variable_name_vec) hiprtcAddNameExpression(prog, x.c_str());

After that, the program is compiled using :cpp:func:`hiprtcCompileProgram`, the
generated binary is loaded using :cpp:func:`hipModuleLoadData`, and the mangled
names can be fetched using :cpp:func:`hiprtcGetLoweredName`:

.. code-block:: cpp

   for (decltype(variable_name_vec.size()) i = 0; i != variable_name_vec.size(); ++i) {
     const char* name;
     hiprtcGetLoweredName(prog, variable_name_vec[i].c_str(), &name);
   }

.. 
code-block:: cpp + + for (decltype(kernel_name_vec.size()) i = 0; i != kernel_name_vec.size(); ++i) { + const char* name; + hiprtcGetLoweredName(prog, kernel_name_vec[i].c_str(), &name); + } + +The mangled name of the variables are used to look up the variable in the module +and update its value. + +.. code-block:: cpp + + hipDeviceptr_t variable_addr; + size_t bytes{}; + hipModuleGetGlobal(&variable_addr, &bytes, module, name); + hipMemcpyHtoD(variable_addr, &initial_value, sizeof(initial_value)); + + +Finally, the mangled name of the kernel is used to launch it using the +``hipModule`` APIs. + +.. code-block:: cpp + + hipFunction_t kernel; + hipModuleGetFunction(&kernel, module, name); + hipModuleLaunchKernel(kernel, 1, 1, 1, 1, 1, 1, 0, nullptr, nullptr, config); + +Versioning +=============================================================================== + +HIPRTC uses the following versioning: + +* Linux + + * HIPRTC follows the same versioning as HIP runtime library. + * The ``so`` name field for the shared library is set to MAJOR version. For + example, for HIP 5.3 the ``so`` name is set to 5 (``hiprtc.so.5``). + +* Windows + + * HIPRTC dll is named as ``hiprtcXXYY.dll`` where ``XX`` is MAJOR version and + ``YY`` is MINOR version. For example, for HIP 5.3 the name is + ``hiprtc0503.dll``. + +HIP header support +=============================================================================== + +Added HIPRTC support for all the hip common header files such as +``library_types.h``, ``hip_math_constants.h``, ``hip_complex.h``, +``math_functions.h``, ``surface_types.h`` etc. from 6.1. HIPRTC users need not +include any HIP macros or constants explicitly in their header files. All of +these should get included via HIPRTC builtins when the app links to HIPRTC +library. + +Deprecation notice +=============================================================================== + +* Currently HIPRTC APIs are separated from HIP APIs and HIPRTC is available as a + separate library ``libhiprtc.so``/ ``libhiprtc.dll``. But on Linux, HIPRTC + symbols are also present in ``libamdhip64.so`` in order to support the + existing applications. Gradually, these symbols will be removed from HIP + library and applications using HIPRTC will be required to explicitly link to + HIPRTC library. However, on Windows ``hiprtc.dll`` must be used as the + ``amdhip64.dll`` doesn't contain the HIPRTC symbols. + +* Data types such as ``uint32_t``, ``uint64_t``, ``int32_t``, ``int64_t`` + defined in std namespace in HIPRTC are deprecated earlier and are being + removed from ROCm release 6.1 since these can conflict with the standard + C++ data types. These data types are now prefixed with ``__hip__``, for example + ``__hip_uint32_t``. Applications previously using ``std::uint32_t`` or similar + types can use ``__hip_`` prefixed types to avoid conflicts with standard std + namespace or application can have their own definitions for these types. Also, + type_traits templates previously defined in std namespace are moved to + ``__hip_internal`` namespace as implementation details. 
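As a minimal sketch of the renaming described above (the kernel and its names
are hypothetical and only serve as an illustration), a HIPRTC kernel source
string can use the ``__hip_`` prefixed fixed-width types directly:

.. code-block:: cpp

   // Hypothetical HIPRTC kernel source: __hip_uint32_t replaces the removed
   // std::uint32_t alias.
   static constexpr auto iota_kernel_source{
       R"(
           extern "C"
           __global__ void iota(__hip_uint32_t* out, __hip_uint32_t n) {
               __hip_uint32_t i = blockIdx.x * blockDim.x + threadIdx.x;
               if (i < n) {
                   out[i] = i;
               }
           }
       )"};

  Alternatively, applications can keep their own typedefs for these widths, as
  noted above.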
From b7f8f1090cd73daafda493833dd8fb846872b633 Mon Sep 17 00:00:00 2001 From: Matthias Knorr Date: Mon, 13 Jan 2025 13:07:51 +0100 Subject: [PATCH 31/46] Docs: Refactor HIP porting guide --- .wordlist.txt | 7 +- docs/how-to/hip_cpp_language_extensions.rst | 2 +- docs/how-to/hip_porting_guide.md | 582 ------------------- docs/how-to/hip_porting_guide.rst | 604 ++++++++++++++++++++ docs/how-to/logging.rst | 13 + docs/understand/compilers.rst | 75 +++ 6 files changed, 695 insertions(+), 588 deletions(-) delete mode 100644 docs/how-to/hip_porting_guide.md create mode 100644 docs/how-to/hip_porting_guide.rst diff --git a/.wordlist.txt b/.wordlist.txt index 32d489abc8..de7c91b31a 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -24,7 +24,8 @@ coroutines Ctx cuBLASLt cuCtx -CUDA's +CUDA +cuda cuDNN cuModule dataflow @@ -35,7 +36,6 @@ Dereferencing dll DirectX EIGEN -EIGEN's enqueue enqueues entrypoint @@ -61,7 +61,6 @@ hardcoded HC hcBLAS HIP-Clang -HIP's hipcc hipCtx hipexamine @@ -142,7 +141,6 @@ quad representable RMW rocgdb -ROCm's rocTX roundtrip rst @@ -158,7 +156,6 @@ sinewave SOMA SPMV structs -struct's SYCL syntaxes texel diff --git a/docs/how-to/hip_cpp_language_extensions.rst b/docs/how-to/hip_cpp_language_extensions.rst index 6b18bd01e3..73462ba526 100644 --- a/docs/how-to/hip_cpp_language_extensions.rst +++ b/docs/how-to/hip_cpp_language_extensions.rst @@ -469,7 +469,7 @@ compile-time constant on the host. It has to be queried using applications. NVIDIA devices return 32 for this variable; AMD devices return 64 for gfx9 and 32 for gfx10 and above. While code that assumes a ``warpSize`` of 32 can run on devices with a ``warpSize`` of 64, it only utilizes half of - the the compute resources. + the compute resources. ******************************************************************************** Vector types diff --git a/docs/how-to/hip_porting_guide.md b/docs/how-to/hip_porting_guide.md deleted file mode 100644 index a6027d4801..0000000000 --- a/docs/how-to/hip_porting_guide.md +++ /dev/null @@ -1,582 +0,0 @@ - - - - - - -# HIP porting guide - -In addition to providing a portable C++ programming environment for GPUs, HIP is designed to ease -the porting of existing CUDA code into the HIP environment. This section describes the available tools -and provides practical suggestions on how to port CUDA code and work through common issues. - -## Porting a New CUDA Project - -### General Tips - -* Starting the port on a CUDA machine is often the easiest approach, since you can incrementally port pieces of the code to HIP while leaving the rest in CUDA. (Recall that on CUDA machines HIP is just a thin layer over CUDA, so the two code types can interoperate on NVCC platforms.) Also, the HIP port can be compared with the original CUDA code for function and performance. -* Once the CUDA code is ported to HIP and is running on the CUDA machine, compile the HIP code using the HIP compiler on an AMD machine. -* HIP ports can replace CUDA versions: HIP can deliver the same performance as a native CUDA implementation, with the benefit of portability to both NVIDIA and AMD architectures as well as a path to future C++ standard support. You can handle platform-specific features through conditional compilation or by adding them to the open-source HIP infrastructure. -* Use **[hipconvertinplace-perl.sh](https://github.com/ROCm/HIPIFY/blob/amd-staging/bin/hipconvertinplace-perl.sh)** to hipify all code files in the CUDA source directory. 
- -### Scanning existing CUDA code to scope the porting effort - -The **[hipexamine-perl.sh](https://github.com/ROCm/HIPIFY/blob/amd-staging/bin/hipexamine-perl.sh)** tool will scan a source directory to determine which files contain CUDA code and how much of that code can be automatically hipified. - -```shell -> cd examples/rodinia_3.0/cuda/kmeans -> $HIP_DIR/bin/hipexamine-perl.sh. -info: hipify ./kmeans.h =====> -info: hipify ./unistd.h =====> -info: hipify ./kmeans.c =====> -info: hipify ./kmeans_cuda_kernel.cu =====> - info: converted 40 CUDA->HIP refs( dev:0 mem:0 kern:0 builtin:37 math:0 stream:0 event:0 err:0 def:0 tex:3 other:0 ) warn:0 LOC:185 -info: hipify ./getopt.h =====> -info: hipify ./kmeans_cuda.cu =====> - info: converted 49 CUDA->HIP refs( dev:3 mem:32 kern:2 builtin:0 math:0 stream:0 event:0 err:0 def:0 tex:12 other:0 ) warn:0 LOC:311 -info: hipify ./rmse.c =====> -info: hipify ./cluster.c =====> -info: hipify ./getopt.c =====> -info: hipify ./kmeans_clustering.c =====> -info: TOTAL-converted 89 CUDA->HIP refs( dev:3 mem:32 kern:2 builtin:37 math:0 stream:0 event:0 err:0 def:0 tex:15 other:0 ) warn:0 LOC:3607 - kernels (1 total) : kmeansPoint(1) -``` - -hipexamine-perl scans each code file (cpp, c, h, hpp, etc.) found in the specified directory: - -* Files with no CUDA code (`kmeans.h`) print one line summary just listing the source file name. -* Files with CUDA code print a summary of what was found - for example the `kmeans_cuda_kernel.cu` file: - -```shell -info: hipify ./kmeans_cuda_kernel.cu =====> - info: converted 40 CUDA->HIP refs( dev:0 mem:0 kern:0 builtin:37 math:0 stream:0 event:0 -``` - -* Interesting information in `kmeans_cuda_kernel.cu` : - * How many CUDA calls were converted to HIP (40) - * Breakdown of the CUDA functionality used (`dev:0 mem:0` etc). This file uses many CUDA builtins (37) and texture functions (3). - * Warning for code that looks like CUDA API but was not converted (0 in this file). - * Count Lines-of-Code (LOC) - 185 for this file. - -* hipexamine-perl also presents a summary at the end of the process for the statistics collected across all files. This has similar format to the per-file reporting, and also includes a list of all kernels which have been called. An example from above: - -```shell -info: TOTAL-converted 89 CUDA->HIP refs( dev:3 mem:32 kern:2 builtin:37 math:0 stream:0 event:0 err:0 def:0 tex:15 other:0 ) warn:0 LOC:3607 - kernels (1 total) : kmeansPoint(1) -``` - -### Converting a project "in-place" - -```shell -> hipify-perl --inplace -``` - -For each input file FILE, this script will: - -* If `FILE.prehip` file does not exist, copy the original code to a new file with extension `.prehip`. Then hipify the code file. -* If `FILE.prehip` file exists, hipify `FILE.prehip` and save to FILE. - -This is useful for testing improvements to the hipify toolset. - -The [hipconvertinplace-perl.sh](https://github.com/ROCm/HIPIFY/blob/amd-staging/bin/hipconvertinplace-perl.sh) script will perform inplace conversion for all code files in the specified directory. -This can be quite handy when dealing with an existing CUDA code base since the script preserves the existing directory structure -and filenames - and includes work. After converting in-place, you can review the code to add additional parameters to -directory names. - -```shell -> hipconvertinplace-perl.sh MY_SRC_DIR -``` - -### Library Equivalents - -Most CUDA libraries have a corresponding ROCm library with similar functionality and APIs. 
However, ROCm also provides HIP marshalling libraries that greatly simplify the porting process because they more precisely reflect their CUDA counterparts and can be used with either the AMD or NVIDIA platforms (see "Identifying HIP Target Platform" below). There are a few notable exceptions: - -* MIOpen does not have a marshalling library interface to ease porting from cuDNN. -* RCCL is a drop-in replacement for NCCL and implements the NCCL APIs. -* hipBLASLt does not have a ROCm library but can still target the NVIDIA platform, as needed. -* EIGEN's HIP support is part of the library. - -| CUDA Library | HIP Library | ROCm Library | Comment | -|------------- | ----------- | ------------ | ------- | -| cuBLAS | hipBLAS | rocBLAS | Basic Linear Algebra Subroutines -| cuBLASLt | hipBLASLt | N/A | Basic Linear Algebra Subroutines, lightweight and new flexible API -| cuFFT | hipFFT | rocFFT | Fast Fourier Transfer Library -| cuSPARSE | hipSPARSE | rocSPARSE | Sparse BLAS + SPMV -| cuSOLVER | hipSOLVER | rocSOLVER | Lapack library -| AmgX | N/A | rocALUTION | Sparse iterative solvers and preconditioners with algebraic multigrid -| Thrust | N/A | rocThrust | C++ parallel algorithms library -| CUB | hipCUB | rocPRIM | Low Level Optimized Parallel Primitives -| cuDNN | N/A | MIOpen | Deep learning Solver Library -| cuRAND | hipRAND | rocRAND | Random Number Generator Library -| EIGEN | EIGEN | N/A | C++ template library for linear algebra: matrices, vectors, numerical solvers, -| NCCL | N/A | RCCL | Communications Primitives Library based on the MPI equivalents - -## Distinguishing Compiler Modes - -### Identifying HIP Target Platform - -All HIP projects target either AMD or NVIDIA platform. The platform affects which headers are included and which libraries are used for linking. - -* `__HIP_PLATFORM_AMD__` is defined if the HIP platform targets AMD. -Note, `__HIP_PLATFORM_HCC__` was previously defined if the HIP platform targeted AMD, it is deprecated. -* `__HIP_PLATFORM_NVDIA__` is defined if the HIP platform targets NVIDIA. -Note, `__HIP_PLATFORM_NVCC__` was previously defined if the HIP platform targeted NVIDIA, it is deprecated. - -### Identifying the Compiler: hip-clang or NVCC - -Often, it's useful to know whether the underlying compiler is HIP-Clang or NVCC. This knowledge can guard platform-specific code or aid in platform-specific performance tuning. - -```cpp -#ifdef __HIP_PLATFORM_AMD__ -// Compiled with HIP-Clang -#endif -``` - -```cpp -#ifdef __HIP_PLATFORM_NVIDIA__ -// Compiled with nvcc -// Could be compiling with CUDA language extensions enabled (for example, a ".cu file) -// Could be in pass-through mode to an underlying host compile OR (for example, a .cpp file) - -``` - -```cpp -#ifdef __CUDACC__ -// Compiled with nvcc (CUDA language extensions enabled) -``` - -Compiler directly generates the host code (using the Clang x86 target) and passes the code to another host compiler. Thus, they have no equivalent of the `__CUDACC__` define. - -### Identifying Current Compilation Pass: Host or Device - -NVCC makes two passes over the code: one for host code and one for device code. -HIP-Clang will have multiple passes over the code: one for the host code, and one for each architecture on the device code. -`__HIP_DEVICE_COMPILE__` is set to a nonzero value when the compiler (HIP-Clang or NVCC) is compiling code for a device inside a `__global__` kernel or for a device function. `__HIP_DEVICE_COMPILE__` can replace `#ifdef` checks on the `__CUDA_ARCH__` define. 
- -```cpp -// #ifdef __CUDA_ARCH__ -#if __HIP_DEVICE_COMPILE__ -``` - -Unlike `__CUDA_ARCH__`, the `__HIP_DEVICE_COMPILE__` value is 1 or undefined, and it doesn't represent the feature capability of the target device. - -### Compiler Defines: Summary - -|Define | HIP-Clang | NVCC | Other (GCC, ICC, Clang, etc.) -|--- | --- | --- |--- | -|HIP-related defines:| -|`__HIP_PLATFORM_AMD__` | Defined | Undefined | Defined if targeting AMD platform; undefined otherwise | -|`__HIP_PLATFORM_NVIDIA__` | Undefined | Defined | Defined if targeting NVIDIA platform; undefined otherwise | -|`__HIP_DEVICE_COMPILE__` | 1 if compiling for device; undefined if compiling for host | 1 if compiling for device; undefined if compiling for host | Undefined -|`__HIPCC__` | Defined | Defined | Undefined -|`__HIP_ARCH_*` | 0 or 1 depending on feature support (see below) | 0 or 1 depending on feature support (see below) | 0 -|NVCC-related defines:| -|`__CUDACC__` | Defined if source code is compiled by NVCC; undefined otherwise | Undefined -|`__NVCC__` Undefined | Defined | Undefined -|`__CUDA_ARCH__` | Undefined | Unsigned representing compute capability (e.g., "130") if in device code; 0 if in host code | Undefined -|hip-clang-related defines:| -|`__HIP__` | Defined | Undefined | Undefined -|HIP-Clang common defines: | -|`__clang__` | Defined | Defined | Undefined | Defined if using Clang; otherwise undefined - -## Identifying Architecture Features - -### HIP_ARCH Defines - -Some CUDA code tests `__CUDA_ARCH__` for a specific value to determine whether the machine supports a certain architectural feature. For instance, - -```cpp -#if (__CUDA_ARCH__ >= 130) -// doubles are supported -``` - -This type of code requires special attention, since AMD and CUDA devices have different architectural capabilities. Moreover, you can't determine the presence of a feature using a simple comparison against an architecture's version number. HIP provides a set of defines and device properties to query whether a specific architectural feature is supported. - -The `__HIP_ARCH_*` defines can replace comparisons of `__CUDA_ARCH__` values: - -```cpp -//#if (__CUDA_ARCH__ >= 130) // non-portable -if __HIP_ARCH_HAS_DOUBLES__ { // portable HIP feature query - // doubles are supported -} -``` - -For host code, the `__HIP_ARCH__*` defines are set to 0. You should only use the `__HIP_ARCH__` fields in device code. - -### Device-Architecture Properties - -Host code should query the architecture feature flags in the device properties that `hipGetDeviceProperties` returns, rather than testing the "major" and "minor" fields directly: - -```cpp -hipGetDeviceProperties(&deviceProp, device); -//if ((deviceProp.major == 1 && deviceProp.minor < 2)) // non-portable -if (deviceProp.arch.hasSharedInt32Atomics) { // portable HIP feature query - // has shared int32 atomic operations ... -} -``` - -### Table of Architecture Properties - -The table below shows the full set of architectural properties that HIP supports. 
- -|Define (use only in device code) | Device Property (run-time query) | Comment | -|------- | --------- | ----- | -|32-bit atomics: | | -|`__HIP_ARCH_HAS_GLOBAL_INT32_ATOMICS__` | `hasGlobalInt32Atomics` |32-bit integer atomics for global memory -|`__HIP_ARCH_HAS_GLOBAL_FLOAT_ATOMIC_EXCH__` | `hasGlobalFloatAtomicExch` |32-bit float atomic exchange for global memory -|`__HIP_ARCH_HAS_SHARED_INT32_ATOMICS__` | `hasSharedInt32Atomics` |32-bit integer atomics for shared memory -|`__HIP_ARCH_HAS_SHARED_FLOAT_ATOMIC_EXCH__` | `hasSharedFloatAtomicExch` |32-bit float atomic exchange for shared memory -|`__HIP_ARCH_HAS_FLOAT_ATOMIC_ADD__` | `hasFloatAtomicAdd` |32-bit float atomic add in global and shared memory -|64-bit atomics: | | -|`__HIP_ARCH_HAS_GLOBAL_INT64_ATOMICS__` | `hasGlobalInt64Atomics` |64-bit integer atomics for global memory -|`__HIP_ARCH_HAS_SHARED_INT64_ATOMICS__` | `hasSharedInt64Atomics` |64-bit integer atomics for shared memory -|Doubles: | | -|`__HIP_ARCH_HAS_DOUBLES__` | `hasDoubles` |Double-precision floating point -|Warp cross-lane operations: | | -|`__HIP_ARCH_HAS_WARP_VOTE__` | `hasWarpVote` |Warp vote instructions (`any`, `all`) -|`__HIP_ARCH_HAS_WARP_BALLOT__` | `hasWarpBallot` |Warp ballot instructions -|`__HIP_ARCH_HAS_WARP_SHUFFLE__` | `hasWarpShuffle` |Warp shuffle operations (`shfl_*`) -|`__HIP_ARCH_HAS_WARP_FUNNEL_SHIFT__` | `hasFunnelShift` |Funnel shift two input words into one -|Sync: | | -|`__HIP_ARCH_HAS_THREAD_FENCE_SYSTEM__` | `hasThreadFenceSystem` |`threadfence_system` -|`__HIP_ARCH_HAS_SYNC_THREAD_EXT__` | `hasSyncThreadsExt` |`syncthreads_count`, `syncthreads_and`, `syncthreads_or` -|Miscellaneous: | | -|`__HIP_ARCH_HAS_SURFACE_FUNCS__` | `hasSurfaceFuncs` | -|`__HIP_ARCH_HAS_3DGRID__` | `has3dGrid` | Grids and groups are 3D -|`__HIP_ARCH_HAS_DYNAMIC_PARALLEL__` | `hasDynamicParallelism` | - -## Finding HIP - -Makefiles can use the following syntax to conditionally provide a default HIP_PATH if one does not exist: - -```shell -HIP_PATH ?= $(shell hipconfig --path) -``` - -## Identifying HIP Runtime - -HIP can depend on rocclr, or CUDA as runtime - -* AMD platform -On AMD platform, HIP uses ROCm Compute Language Runtime, called ROCclr. -ROCclr is a virtual device interface that HIP runtimes interact with different backends which allows runtimes to work on Linux , as well as Windows without much efforts. - -* NVIDIA platform -On NVIDIA platform, HIP is just a thin layer on top of CUDA. - -The environment variable `HIP_PLATFORM` specifies the runtime to use. The -platform is detected automatically by HIP. When an AMD graphics driver and an -AMD GPU is detected, `HIP_PLATFORM` is set to `amd`. If both runtimes are -installed, and a specific one should be used, or HIP can't detect the runtime, -setting the environment variable manually tells `hipcc` what compilation path to -choose. To use the CUDA compilation path, set the environment variable to -`HIP_PLATFORM=nvidia`. - -## `hipLaunchKernelGGL` - -`hipLaunchKernelGGL` is a macro that can serve as an alternative way to launch kernel, which accepts parameters of launch configurations (grid dims, group dims, stream, dynamic shared size) followed by a variable number of kernel arguments. -It can replace <<< >>>, if the user so desires. - -## Compiler Options - -hipcc is a portable compiler driver that will call NVCC or HIP-Clang (depending on the target system) and attach all required include and library options. It passes options through to the target compiler. 
Tools that call hipcc must ensure the compiler options are appropriate for the target compiler. -The `hipconfig` script may helpful in identifying the target platform, compiler and runtime. It can also help set options appropriately. - -### Compiler options supported on AMD platforms - -Here are the main compiler options supported on AMD platforms by HIP-Clang. - -| Option | Description | -| ------ | ----------- | -| `--amdgpu-target=` | [DEPRECATED] This option is being replaced by `--offload-arch=`. Generate code for the given GPU target. Supported targets are gfx701, gfx801, gfx802, gfx803, gfx900, gfx906, gfx908, gfx1010, gfx1011, gfx1012, gfx1030, gfx1031. This option could appear multiple times on the same command line to generate a fat binary for multiple targets. | -| `--fgpu-rdc` | Generate relocatable device code, which allows kernels or device functions calling device functions in different translation units. | -| `-ggdb` | Equivalent to `-g` plus tuning for GDB. This is recommended when using ROCm's GDB to debug GPU code. | -| `--gpu-max-threads-per-block=` | Generate code to support up to the specified number of threads per block. | -| `-O` | Specify the optimization level. | -| `-offload-arch=` | Specify the AMD GPU [target ID](https://clang.llvm.org/docs/ClangOffloadBundler.html#target-id). | -| `-save-temps` | Save the compiler generated intermediate files. | -| `-v` | Show the compilation steps. | - -## Linking Issues - -### Linking With hipcc - -hipcc adds the necessary libraries for HIP as well as for the accelerator compiler (NVCC or AMD compiler). We recommend linking with hipcc since it automatically links the binary to the necessary HIP runtime libraries. It also has knowledge on how to link and to manage the GPU objects. - -### `-lm` Option - -hipcc adds `-lm` by default to the link command. - -## Linking Code With Other Compilers - -CUDA code often uses NVCC for accelerator code (defining and launching kernels, typically defined in `.cu` or `.cuh` files). -It also uses a standard compiler (g++) for the rest of the application. NVCC is a preprocessor that employs a standard host compiler (gcc) to generate the host code. -Code compiled using this tool can employ only the intersection of language features supported by both NVCC and the host compiler. -In some cases, you must take care to ensure the data types and alignment of the host compiler are identical to those of the device compiler. Only some host compilers are supported---for example, recent NVCC versions lack Clang host-compiler capability. - -HIP-Clang generates both device and host code using the same Clang-based compiler. The code uses the same API as gcc, which allows code generated by different gcc-compatible compilers to be linked together. For example, code compiled using HIP-Clang can link with code compiled using "standard" compilers (such as gcc, ICC and Clang). Take care to ensure all compilers use the same standard C++ header and library formats. - -### libc++ and libstdc++ - -hipcc links to libstdc++ by default. This provides better compatibility between g++ and HIP. - -If you pass `--stdlib=libc++` to hipcc, hipcc will use the libc++ library. Generally, libc++ provides a broader set of C++ features while libstdc++ is the standard for more compilers (notably including g++). - -When cross-linking C++ code, any C++ functions that use types from the C++ standard library (including std::string, std::vector and other containers) must use the same standard-library implementation. 
They include the following: - -* Functions or kernels defined in HIP-Clang that are called from a standard compiler -* Functions defined in a standard compiler that are called from HIP-Clang. - -Applications with these interfaces should use the default libstdc++ linking. - -Applications which are compiled entirely with hipcc, and which benefit from advanced C++ features not supported in libstdc++, and which do not require portability to NVCC, may choose to use libc++. - -### HIP Headers (`hip_runtime.h`, `hip_runtime_api.h`) - -The `hip_runtime.h` and `hip_runtime_api.h` files define the types, functions and enumerations needed to compile a HIP program: - -* `hip_runtime_api.h`: defines all the HIP runtime APIs (e.g., `hipMalloc`) and the types required to call them. A source file that is only calling HIP APIs but neither defines nor launches any kernels can include `hip_runtime_api.h`. `hip_runtime_api.h` uses no custom Heterogeneous Compute (HC) language features and can be compiled using a standard C++ compiler. -* `hip_runtime.h`: included in `hip_runtime_api.h`. It additionally provides the types and defines required to create and launch kernels. hip_runtime.h can be compiled using a standard C++ compiler but will expose a subset of the available functions. - -CUDA has slightly different contents for these two files. In some cases you may need to convert hipified code to include the richer `hip_runtime.h` instead of `hip_runtime_api.h`. - -### Using a Standard C++ Compiler - -You can compile `hip_runtime_api.h` using a standard C or C++ compiler (e.g., gcc or ICC). The HIP include paths and defines (`__HIP_PLATFORM_AMD__` or `__HIP_PLATFORM_NVIDIA__`) must pass to the standard compiler; `hipconfig` then returns the necessary options: - -```bash -> hipconfig --cxx_config - -D__HIP_PLATFORM_AMD__ -I/home/user1/hip/include -``` - -You can capture the `hipconfig` output and passed it to the standard compiler; below is a sample makefile syntax: - -```bash -CPPFLAGS += $(shell $(HIP_PATH)/bin/hipconfig --cpp_config) -``` - -NVCC includes some headers by default. However, HIP does not include default headers, and instead all required files must be explicitly included. -Specifically, files that call HIP run-time APIs or define HIP kernels must explicitly include the appropriate HIP headers. -If the compilation process reports that it cannot find necessary APIs (for example, `error: identifier hipSetDevice is undefined`), -ensure that the file includes hip_runtime.h (or hip_runtime_api.h, if appropriate). -The hipify-perl script automatically converts `cuda_runtime.h` to `hip_runtime.h`, and it converts `cuda_runtime_api.h` to `hip_runtime_api.h`, but it may miss nested headers or macros. - -#### `cuda.h` - -The HIP-Clang path provides an empty `cuda.h` file. Some existing CUDA programs include this file but don't require any of the functions. - -### Choosing HIP File Extensions - -Many existing CUDA projects use the `.cu` and `.cuh` file extensions to indicate code that should be run through the NVCC compiler. -For quick HIP ports, leaving these file extensions unchanged is often easier, as it minimizes the work required to change file names in the directory and #include statements in the files. - -For new projects or ports which can be re-factored, we recommend the use of the extension `.hip.cpp` for source files, and -`.hip.h` or `.hip.hpp` for header files. 
-This indicates that the code is standard C++ code, but also provides a unique indication for make tools to -run hipcc when appropriate. - -## Workarounds - -### ``warpSize`` - -Code should not assume a warp size of 32 or 64. See the -:ref:`HIP language extension for warpSize ` for information on how -to write portable wave-aware code. - -### Kernel launch with group size > 256 - -Kernel code should use `__attribute__((amdgpu_flat_work_group_size(,)))`. - -For example: - -```cpp -__global__ void dot(double *a,double *b,const int n) __attribute__((amdgpu_flat_work_group_size(1, 512))) -``` - -## `memcpyToSymbol` - -HIP support for `hipMemcpyToSymbol` is complete. This feature allows a kernel -to define a device-side data symbol which can be accessed on the host side. The symbol -can be in __constant or device space. - -Note that the symbol name needs to be encased in the HIP_SYMBOL macro, as shown in the code example below. This also applies to `hipMemcpyFromSymbol`, `hipGetSymbolAddress`, and `hipGetSymbolSize`. - -For example: - -Device Code: - -```cpp -#include -#include -#include - -#define HIP_ASSERT(status) \ - assert(status == hipSuccess) - -#define LEN 512 -#define SIZE 2048 - -__constant__ int Value[LEN]; - -__global__ void Get(int *Ad) -{ - int tid = threadIdx.x + blockIdx.x * blockDim.x; - Ad[tid] = Value[tid]; -} - -int main() -{ - int *A, *B, *Ad; - A = new int[LEN]; - B = new int[LEN]; - for(unsigned i=0;i(&ptr), sizeof(double)); -hipPointerAttribute_t attr; -hipPointerGetAttributes(&attr, ptr); /*attr.type will have value as hipMemoryTypeDevice*/ - -double* ptrHost; -hipHostMalloc(&ptrHost, sizeof(double)); -hipPointerAttribute_t attr; -hipPointerGetAttributes(&attr, ptrHost); /*attr.type will have value as hipMemoryTypeHost*/ -``` - -Please note, `hipMemoryType` enum values are different from `cudaMemoryType` enum values. - -For example, on AMD platform, `hipMemoryType` is defined in `hip_runtime_api.h`, - -```cpp -typedef enum hipMemoryType { - hipMemoryTypeHost = 0, ///< Memory is physically located on host - hipMemoryTypeDevice = 1, ///< Memory is physically located on device. (see deviceId for specific device) - hipMemoryTypeArray = 2, ///< Array memory, physically located on device. (see deviceId for specific device) - hipMemoryTypeUnified = 3, ///< Not used currently - hipMemoryTypeManaged = 4 ///< Managed memory, automaticallly managed by the unified memory system -} hipMemoryType; -``` - -Looking into CUDA toolkit, it defines `cudaMemoryType` as following, - -```cpp -enum cudaMemoryType -{ - cudaMemoryTypeUnregistered = 0, // Unregistered memory. - cudaMemoryTypeHost = 1, // Host memory. - cudaMemoryTypeDevice = 2, // Device memory. - cudaMemoryTypeManaged = 3, // Managed memory -} -``` - -In this case, memory type translation for `hipPointerGetAttributes` needs to be handled properly on NVIDIA platform to get the correct memory type in CUDA, which is done in the file `nvidia_hip_runtime_api.h`. - -So in any HIP applications which use HIP APIs involving memory types, developers should use `#ifdef` in order to assign the correct enum values depending on NVIDIA or AMD platform. - -As an example, please see the code from the [link](https://github.com/ROCm/hip-tests/tree/develop/catch/unit/memory/hipMemcpyParam2D.cc). - -With the `#ifdef` condition, HIP APIs work as expected on both AMD and NVIDIA platforms. - -Note, `cudaMemoryTypeUnregstered` is currently not supported in `hipMemoryType` enum, due to HIP functionality backward compatibility. 
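The following is a minimal sketch of that `#ifdef` pattern (the helper function is illustrative and not part of any HIP API); it compares against the platform's native enum value when checking the pointer type, as described above:

```cpp
#include <hip/hip_runtime.h>

// Illustrative helper: returns true if ptr refers to device memory,
// selecting the platform-specific memory type enum value.
static bool isDevicePointer(const void* ptr) {
  hipPointerAttribute_t attr;
  if (hipPointerGetAttributes(&attr, ptr) != hipSuccess) {
    return false;
  }
#ifdef __HIP_PLATFORM_NVIDIA__
  return attr.type == cudaMemoryTypeDevice; // CUDA enum values on NVIDIA
#else
  return attr.type == hipMemoryTypeDevice;  // HIP enum values on AMD
#endif
}
```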
- -## `threadfence_system` - -`threadfence_system` makes all device memory writes, all writes to mapped host memory, and all writes to peer memory visible to CPU and other GPU devices. -Some implementations can provide this behavior by flushing the GPU L2 cache. -HIP/HIP-Clang does not provide this functionality. As a workaround, users can set the environment variable `HSA_DISABLE_CACHE=1` to disable the GPU L2 cache. This will affect all accesses and for all kernels and so may have a performance impact. - -### Textures and Cache Control - -Compute programs sometimes use textures either to access dedicated texture caches or to use the texture-sampling hardware for interpolation and clamping. The former approach uses simple point samplers with linear interpolation, essentially only reading a single point. The latter approach uses the sampler hardware to interpolate and combine multiple samples. AMD hardware, as well as recent competing hardware, has a unified texture/L1 cache, so it no longer has a dedicated texture cache. But the NVCC path often caches global loads in the L2 cache, and some programs may benefit from explicit control of the L1 cache contents. We recommend the `__ldg` instruction for this purpose. - -AMD compilers currently load all data into both the L1 and L2 caches, so `__ldg` is treated as a no-op. - -We recommend the following for functional portability: - -* For programs that use textures only to benefit from improved caching, use the `__ldg` instruction -* Programs that use texture object and reference APIs, work well on HIP - -## More Tips - -### HIP Logging - -On an AMD platform, set the AMD_LOG_LEVEL environment variable to log HIP application execution information. - -The value of the setting controls different logging level, - -```cpp -enum LogLevel { -LOG_NONE = 0, -LOG_ERROR = 1, -LOG_WARNING = 2, -LOG_INFO = 3, -LOG_DEBUG = 4 -}; -``` - -Logging mask is used to print types of functionalities during the execution of HIP application. -It can be set as one of the following values, - -```cpp -enum LogMask { - LOG_API = 1, //!< (0x1) API call - LOG_CMD = 2, //!< (0x2) Kernel and Copy Commands and Barriers - LOG_WAIT = 4, //!< (0x4) Synchronization and waiting for commands to finish - LOG_AQL = 8, //!< (0x8) Decode and display AQL packets - LOG_QUEUE = 16, //!< (0x10) Queue commands and queue contents - LOG_SIG = 32, //!< (0x20) Signal creation, allocation, pool - LOG_LOCK = 64, //!< (0x40) Locks and thread-safety code. - LOG_KERN = 128, //!< (0x80) Kernel creations and arguments, etc. - LOG_COPY = 256, //!< (0x100) Copy debug - LOG_COPY2 = 512, //!< (0x200) Detailed copy debug - LOG_RESOURCE = 1024, //!< (0x400) Resource allocation, performance-impacting events. - LOG_INIT = 2048, //!< (0x800) Initialization and shutdown - LOG_MISC = 4096, //!< (0x1000) Misc debug, not yet classified - LOG_AQL2 = 8192, //!< (0x2000) Show raw bytes of AQL packet - LOG_CODE = 16384, //!< (0x4000) Show code creation debug - LOG_CMD2 = 32768, //!< (0x8000) More detailed command info, including barrier commands - LOG_LOCATION = 65536, //!< (0x10000) Log message location - LOG_MEM = 131072, //!< (0x20000) Memory allocation - LOG_MEM_POOL = 262144, //!< (0x40000) Memory pool allocation, including memory in graphs - LOG_ALWAYS = -1 //!< (0xFFFFFFFF) Log always even mask flag is zero -}; -``` - -### Debugging hipcc - -To see the detailed commands that hipcc issues, set the environment variable HIPCC_VERBOSE to 1. 
Doing so will print to ``stderr`` the HIP-clang (or NVCC) commands that hipcc generates. - -```bash -export HIPCC_VERBOSE=1 -make -... -hipcc-cmd: /opt/rocm/bin/hipcc --offload-arch=native -x hip backprop_cuda.cu -``` - -### Editor Highlighting - -See the utils/vim or utils/gedit directories to add handy highlighting to hip files. diff --git a/docs/how-to/hip_porting_guide.rst b/docs/how-to/hip_porting_guide.rst new file mode 100644 index 0000000000..af248b3ec9 --- /dev/null +++ b/docs/how-to/hip_porting_guide.rst @@ -0,0 +1,604 @@ +.. meta:: + :description: This chapter presents how to port CUDA source code to HIP. + :keywords: AMD, ROCm, HIP, CUDA, porting, port + +################################################################################ +HIP porting guide +################################################################################ + +HIP is designed to ease the porting of existing CUDA code into the HIP +environment. This page describes the available tools and provides practical +suggestions on how to port CUDA code and work through common issues. + +******************************************************************************** +Porting a CUDA Project +******************************************************************************** + +General Tips +================================================================================ + +* You can incrementally port pieces of the code to HIP while leaving the rest in CUDA. HIP is just a thin layer over CUDA, so the two languages can interoperate. +* Starting to port on an NVIDIA machine is often the easiest approach, as the code can be tested for functionality and performance even if not fully ported to HIP. +* Once the CUDA code is ported to HIP and is running on the CUDA machine, compile the HIP code for an AMD machine. +* You can handle platform-specific features through conditional compilation or by adding them to the open-source HIP infrastructure. +* Use the `HIPIFY `_ tools to automatically convert CUDA code to HIP, as described in the following section. + +HIPIFY +================================================================================ + +:doc:`HIPIFY ` is a collection of tools that automatically +translate CUDA to HIP code. There are two flavours available, ``hipfiy-clang`` +and ``hipify-perl``. + +:doc:`hipify-clang ` is, as the name implies, a Clang-based +tool, and actually parses the code, translates it into an Abstract Syntax Tree, +from which it then generates the HIP source. For this, ``hipify-clang`` needs to +be able to actually compile the code, so the CUDA code needs to be correct, and +a CUDA install with all necessary headers must be provided. + +:doc:`hipify-perl ` uses pattern matching, to translate the +CUDA code to HIP. It does not require a working CUDA installation, and can also +convert CUDA code, that is not syntactically correct. It is therefore easier to +set up and use, but is not as powerful as ``hipfiy-clang``. + +Scanning existing CUDA code to scope the porting effort +-------------------------------------------------------------------------------- + +The ``--examine`` option, supported by the clang and perl version, tells hipify +to do a test-run, without changing the files, but instead scan CUDA code to +determine which files contain CUDA code and how much of that code can +automatically be hipified. + +There also are ``hipexamine-perl.sh`` or ``hipexamine.sh`` (for +``hipify-clang``) scripts to automatically scan directories. 
+ +For example, the following is a scan of one of the +`cuda-samples `_: + +.. code-block:: shell + + > cd Samples/2_Concepts_and_Techniques/convolutionSeparable/ + > hipexamine-perl.sh + [HIPIFY] info: file './convolutionSeparable.cu' statistics: + CONVERTED refs count: 2 + TOTAL lines of code: 214 + WARNINGS: 0 + [HIPIFY] info: CONVERTED refs by names: + cooperative_groups.h => hip/hip_cooperative_groups.h: 1 + cudaMemcpyToSymbol => hipMemcpyToSymbol: 1 + + [HIPIFY] info: file './main.cpp' statistics: + CONVERTED refs count: 13 + TOTAL lines of code: 174 + WARNINGS: 0 + [HIPIFY] info: CONVERTED refs by names: + cudaDeviceSynchronize => hipDeviceSynchronize: 2 + cudaFree => hipFree: 3 + cudaMalloc => hipMalloc: 3 + cudaMemcpy => hipMemcpy: 2 + cudaMemcpyDeviceToHost => hipMemcpyDeviceToHost: 1 + cudaMemcpyHostToDevice => hipMemcpyHostToDevice: 1 + cuda_runtime.h => hip/hip_runtime.h: 1 + + [HIPIFY] info: file 'GLOBAL' statistics: + CONVERTED refs count: 15 + TOTAL lines of code: 512 + WARNINGS: 0 + [HIPIFY] info: CONVERTED refs by names: + cooperative_groups.h => hip/hip_cooperative_groups.h: 1 + cudaDeviceSynchronize => hipDeviceSynchronize: 2 + cudaFree => hipFree: 3 + cudaMalloc => hipMalloc: 3 + cudaMemcpy => hipMemcpy: 2 + cudaMemcpyDeviceToHost => hipMemcpyDeviceToHost: 1 + cudaMemcpyHostToDevice => hipMemcpyHostToDevice: 1 + cudaMemcpyToSymbol => hipMemcpyToSymbol: 1 + cuda_runtime.h => hip/hip_runtime.h: 1 + +``hipexamine-perl.sh`` reports how many CUDA calls are going to be converted to +HIP (e.g. ``CONVERTED refs count: 2``), and lists them by name together with +their corresponding HIP-version (see the lines following ``[HIPIFY] info: +CONVERTED refs by names:``). It also lists the total lines of code for the file +and potential warnings. In the end it prints a summary for all files. + +Automatically converting a CUDA project +-------------------------------------------------------------------------------- + +To directly replace the files, the ``--inplace`` option of ``hipify-perl`` or +``hipify-clang`` can be used. This creates a backup of the original files in a +``.prehip`` file and overwrites the existing files, keeping their file +endings. If the ``--inplace`` option is not given, the scripts print the +hipified code to ``stdout``. + +``hipconvertinplace.sh``or ``hipconvertinplace-perl.sh`` operate on whole +directories. + +Library Equivalents +================================================================================ + +ROCm provides libraries to ease porting of code relying on CUDA libraries. +Most CUDA libraries have a corresponding HIP library. + +There are two flavours of libraries provided by ROCm, ones prefixed with ``hip`` +and ones prefixed with ``roc``. While both are written using HIP, in general +only the ``hip``-libraries are portable. The libraries with the ``roc``-prefix +might also run on CUDA-capable GPUs, however they have been optimized for AMD +GPUs and might use assembly code or a different API, to achieve the best +performance. + +.. note:: + + If the application is only required to run on AMD GPUs, it is recommended to + use the ``roc``-libraries. + +In the case where a library provides a ``roc``- and a ``hip``- version, the +``hip`` version is a marshalling library, which is just a thin layer that is +redirecting the function calls to either the ``roc``-library or the +corresponding CUDA library, depending on the platform, to provide compatibility. + +.. 
list-table:: + :header-rows: 1 + + * + - CUDA Library + - ``hip`` Library + - ``roc`` Library + - Comment + * + - cuBLAS + - `hipBLAS `_ + - `rocBLAS `_ + - Basic Linear Algebra Subroutines + * + - cuBLASLt + - `hipBLASLt `_ + - + - Linear Algebra Subroutines, lightweight and new flexible API + * + - cuFFT + - `hipFFT `_ + - `rocFFT `_ + - Fast Fourier Transfer Library + * + - cuSPARSE + - `hipSPARSE `_ + - `rocSPARSE `_ + - Sparse BLAS + SPMV + * + - cuSOLVER + - `hipSOLVER `_ + - `rocSOLVER `_ + - Lapack library + * + - AmgX + - + - `rocALUTION `_ + - Sparse iterative solvers and preconditioners with algebraic multigrid + * + - Thrust + - + - `rocThrust `_ + - C++ parallel algorithms library + * + - CUB + - `hipCUB `_ + - `rocPRIM `_ + - Low Level Optimized Parallel Primitives + * + - cuDNN + - + - `MIOpen `_ + - Deep learning Solver Library + * + - cuRAND + - `hipRAND `_ + - `rocRAND `_ + - Random Number Generator Library + * + - NCCL + - + - `RCCL `_ + - Communications Primitives Library based on the MPI equivalents + RCCL is a drop-in replacement for NCCL + +******************************************************************************** +Distinguishing compilers and platforms +******************************************************************************** + +Identifying the HIP Target Platform +================================================================================ + +HIP projects can target either the AMD or NVIDIA platform. The platform affects +which backend-headers are included and which libraries are used for linking. The +created binaries are not portable between AMD and NVIDIA platforms. + +To write code that is specific to a platform the C++-macros specified in the +following section can be used. + +Compiler Defines: Summary +-------------------------------------------------------------------------------- + +This section lists macros that are defined by compilers and the HIP/CUDA APIs, +and what compiler/platform combinations they are defined for. + +The following table lists the macros that can be used when compiling HIP. Most +of these macros are not directly defined by the compilers, but in +``hip_common.h``, which is included by ``hip_runtime.h``. + +.. list-table:: HIP-related defines + :header-rows: 1 + + * + - Macro + - ``amdclang++`` + - ``nvcc`` when used as backend for ``hipcc`` + - Other (GCC, ICC, Clang, etc.) + * + - ``__HIP_PLATFORM_AMD__`` + - Defined + - Undefined + - Undefined, needs to be set explicitly + * + - ``__HIP_PLATFORM_NVIDIA__`` + - Undefined + - Defined + - Undefined, needs to be set explicitly + * + - ``__HIPCC__`` + - Defined when compiling ``.hip`` files or specifying ``-x hip`` + - Defined when compiling ``.hip`` files or specifying ``-x hip`` + - Undefined + * + - ``__HIP_DEVICE_COMPILE__`` + - 1 if compiling for device + undefined if compiling for host + - 1 if compiling for device + undefined if compiling for host + - Undefined + * + - ``__HIP_ARCH___`` + - 0 or 1 depending on feature support of targeted hardware (see :ref:`identifying_device_architecture_features`) + - 0 or 1 depending on feature support of targeted hardware + - 0 + * + - ``__HIP__`` + - Defined when compiling ``.hip`` files or specifying ``-x hip`` + - Undefined + - Undefined + +The following table lists macros related to ``nvcc`` and CUDA as HIP backend. + +.. list-table:: NVCC-related defines + :header-rows: 1 + + * + - Macro + - ``amdclang++`` + - ``nvcc`` when used as backend for ``hipcc`` + - Other (GCC, ICC, Clang, etc.) 
+ * + - ``__CUDACC__`` + - Undefined + - Defined + - Undefined + (Clang defines this when explicitly compiling CUDA code) + * + - ``__NVCC__`` + - Undefined + - Defined + - Undefined + * + - ``__CUDA_ARCH__`` [#cuda_arch]_ + - Undefined + - Defined in device code + Integer representing compute capability + Must not be used in host code + - Undefined + +.. [#cuda_arch] the use of ``__CUDA_ARCH__`` to check for hardware features is + discouraged, as this is not portable. Use the ``__HIP_ARCH_HAS_`` + macros instead. + +Identifying the compilation target platform +-------------------------------------------------------------------------------- + +Despite HIP's portability, it can be necessary to tailor code to a specific +platform, in order to provide platform-specific code, or aid in +platform-specific performance improvements. + +For this, the ``__HIP_PLATFORM_AMD__`` and ``__HIP_PLATFORM_NVIDIA__`` macros +can be used, e.g.: + +.. code-block:: cpp + + #ifdef __HIP_PLATFORM_AMD__ + // This code path is compiled when amdclang++ is used for compilation + #endif + +.. code-block:: cpp + + #ifdef __HIP_PLATFORM_NVIDIA__ + // This code path is compiled when nvcc is used for compilation + // Could be compiling with CUDA language extensions enabled (for example, a ".cu file) + // Could be in pass-through mode to an underlying host compiler (for example, a .cpp file) + #endif + +When using ``hipcc``, the environment variable ``HIP_PLATFORM`` specifies the +runtime to use. When an AMD graphics driver and an AMD GPU is detected, +``HIP_PLATFORM`` is set to ``amd``. If both runtimes are installed, and a +specific one should be used, or ``hipcc`` can't detect the runtime, the +environment variable has to be set manually. + +To explicitly use the CUDA compilation path, use: + +.. code-block:: bash + + export HIP_PLATFORM=nvidia + hipcc main.cpp + +Identifying Host or Device Compilation Pass +-------------------------------------------------------------------------------- + +``amdclang++`` makes multiple passes over the code: one for the host code, and +one each for the device code for every GPU architecture to be compiled for. +``nvcc`` makes two passes over the code: one for host code and one for device +code. + +The ``__HIP_DEVICE_COMPILE__``-macro is defined when the compiler is compiling +for the device. + + +``__HIP_DEVICE_COMPILE__`` is a portable check that can replace the +``__CUDA_ARCH__``. + +.. code-block:: cpp + + #include "hip/hip_runtime.h" + #include + + __host__ __device__ void call_func(){ + #ifdef __HIP_DEVICE_COMPILE__ + printf("device\n"); + #else + std::cout << "host" << std::endl; + #endif + } + + __global__ void test_kernel(){ + call_func(); + } + + int main(int argc, char** argv) { + test_kernel<<<1, 1, 0, 0>>>(); + + call_func(); + } + +.. _identifying_device_architecture_features: + +******************************************************************************** +Identifying Device Architecture Features +******************************************************************************** + +GPUs of different generations and architectures do not all provide the same +level of :doc:`hardware feature support <../reference/hardware_features>`. To +guard device-code using these architecture dependent features, the +``__HIP_ARCH___`` C++-macros can be used. 
+ +Device Code Feature Identification +================================================================================ + +Some CUDA code tests ``__CUDA_ARCH__`` for a specific value to determine whether +the GPU supports a certain architectural feature, depending on its compute +capability. This requires knowledge about what ``__CUDA_ARCH__`` supports what +feature set. + +HIP simplifies this, by replacing these macros with feature-specific macros, not +architecture specific. + +For instance, + +.. code-block:: cpp + + //#if __CUDA_ARCH__ >= 130 // does not properly specify, what feature is required, not portable + #if __HIP_ARCH_HAS_DOUBLES__ == 1 // explicitly specifies, what feature is required, portable between AMD and NVIDIA GPUs + // device code + #endif + +For host code, the ``__HIP_ARCH___`` defines are set to 0, if +``hip_runtime.h`` is included, and undefined otherwise. It should not be relied +upon in host code. + +Host Code Feature Identification +================================================================================ + +Host code must not rely on the ``__HIP_ARCH___`` macros, as the GPUs +available to a system can not be known during compile time, and their +architectural features differ. + +Host code can query architecture feature flags during runtime, by using +:cpp:func:`hipGetDeviceProperties` or :cpp:func:`hipDeviceGetAttribute`. + +.. code-block:: cpp + + #include + #include + #include + + #define HIP_CHECK(expression) { \ + const hipError_t err = expression; \ + if (err != hipSuccess){ \ + std::cout << "HIP Error: " << hipGetErrorString(err)) \ + << " at line " << __LINE__ << std::endl; \ + std::exit(EXIT_FAILURE); \ + } \ + } + + int main(){ + int deviceCount; + HIP_CHECK(hipGetDeviceCount(&deviceCount)); + + int device = 0; // Query first available GPU. Can be replaced with any + // integer up to, not including, deviceCount + hipDeviceProp_t deviceProp; + HIP_CHECK(hipGetDeviceProperties(&deviceProp, device)); + + std::cout << "The queried device "; + if (deviceProp.arch.hasSharedInt32Atomics) // portable HIP feature query + std::cout << "supports"; + else + std::cout << "does not support"; + std::cout << " shared int32 atomic operations" << std::endl; + } + +Table of Architecture Properties +================================================================================ + +The table below shows the full set of architectural properties that HIP +supports, together with the corresponding macros and device properties. + +.. 
list-table:: + :header-rows: 1 + + * + - Macro (for device code) + - Device Property (host runtime query) + - Comment + * + - ``__HIP_ARCH_HAS_GLOBAL_INT32_ATOMICS__`` + - ``hasGlobalInt32Atomics`` + - 32-bit integer atomics for global memory + * + - ``__HIP_ARCH_HAS_GLOBAL_FLOAT_ATOMIC_EXCH__`` + - ``hasGlobalFloatAtomicExch`` + - 32-bit float atomic exchange for global memory + * + - ``__HIP_ARCH_HAS_SHARED_INT32_ATOMICS__`` + - ``hasSharedInt32Atomics`` + - 32-bit integer atomics for shared memory + * + - ``__HIP_ARCH_HAS_SHARED_FLOAT_ATOMIC_EXCH__`` + - ``hasSharedFloatAtomicExch`` + - 32-bit float atomic exchange for shared memory + * + - ``__HIP_ARCH_HAS_FLOAT_ATOMIC_ADD__`` + - ``hasFloatAtomicAdd`` + - 32-bit float atomic add in global and shared memory + * + - ``__HIP_ARCH_HAS_GLOBAL_INT64_ATOMICS__`` + - ``hasGlobalInt64Atomics`` + - 64-bit integer atomics for global memory + * + - ``__HIP_ARCH_HAS_SHARED_INT64_ATOMICS__`` + - ``hasSharedInt64Atomics`` + - 64-bit integer atomics for shared memory + * + - ``__HIP_ARCH_HAS_DOUBLES__`` + - ``hasDoubles`` + - Double-precision floating-point operations + * + - ``__HIP_ARCH_HAS_WARP_VOTE__`` + - ``hasWarpVote`` + - Warp vote instructions (``any``, ``all``) + * + - ``__HIP_ARCH_HAS_WARP_BALLOT__`` + - ``hasWarpBallot`` + - Warp ballot instructions + * + - ``__HIP_ARCH_HAS_WARP_SHUFFLE__`` + - ``hasWarpShuffle`` + - Warp shuffle operations (``shfl_*``) + * + - ``__HIP_ARCH_HAS_WARP_FUNNEL_SHIFT__`` + - ``hasFunnelShift`` + - Funnel shift two input words into one + * + - ``__HIP_ARCH_HAS_THREAD_FENCE_SYSTEM__`` + - ``hasThreadFenceSystem`` + - :cpp:func:`threadfence_system` + * + - ``__HIP_ARCH_HAS_SYNC_THREAD_EXT__`` + - ``hasSyncThreadsExt`` + - :cpp:func:`syncthreads_count`, :cpp:func:`syncthreads_and`, :cpp:func:`syncthreads_or` + * + - ``__HIP_ARCH_HAS_SURFACE_FUNCS__`` + - ``hasSurfaceFuncs`` + - Supports :ref:`surface functions `. + * + - ``__HIP_ARCH_HAS_3DGRID__`` + - ``has3dGrid`` + - Grids and groups are 3D + * + - ``__HIP_ARCH_HAS_DYNAMIC_PARALLEL__`` + - ``hasDynamicParallelism`` + - Ability to launch a kernel from within a kernel + +******************************************************************************** +Finding HIP +******************************************************************************** + +Makefiles can use the following syntax to conditionally provide a default HIP_PATH if one does not exist: + +.. code-block:: shell + + HIP_PATH ?= $(shell hipconfig --path) + +******************************************************************************** +Compilation +******************************************************************************** + +``hipcc`` is a portable compiler driver that calls ``nvcc`` or ``amdclang++`` +and forwards the appropriate options. It passes options through +to the target compiler. Tools that call ``hipcc`` must ensure the compiler +options are appropriate for the target compiler. + +``hipconfig`` is a helpful tool in identifying the current systems platform, +compiler and runtime. It can also help set options appropriately. + +HIP Headers +================================================================================ + +The ``hip_runtime.h`` headers define all the necessary types, functions, macros, +etc., needed to compile a HIP program, this includes host as well as device +code. ``hip_runtime_api.h`` is a subset of ``hip_runtime.h``. + +CUDA has slightly different contents for these two files. 
In some cases you may +need to convert hipified code to include the richer ``hip_runtime.h`` instead of +``hip_runtime_api.h``. + +Using a Standard C++ Compiler +================================================================================ + +You can compile ``hip_runtime_api.h`` using a standard C or C++ compiler +(e.g., ``gcc`` or ``icc``). +A source file that is only calling HIP APIs but neither defines nor launches any +kernels can be compiled with a standard host compiler (e.g. ``gcc`` or ``icc``) +even when ``hip_runtime_api.h`` or ``hip_runtime.h`` are included. + +The HIP include paths and platform macros (``__HIP_PLATFORM_AMD__`` or +``__HIP_PLATFORM_NVIDIA__``) must be passed to the compiler. + +``hipconfig`` can help in finding the necessary options, for example on an AMD +platform: + +.. code-block:: bash + + hipconfig --cpp_config + -D__HIP_PLATFORM_AMD__= -I/opt/rocm/include + +``nvcc`` includes some headers by default. ``hipcc`` does not include +default headers, and instead all required files must be explicitly included. + +The ``hipify`` tool automatically converts ``cuda_runtime.h`` to +``hip_runtime.h``, and it converts ``cuda_runtime_api.h`` to +``hip_runtime_api.h``, but it may miss nested headers or macros. + +******************************************************************************** +warpSize +******************************************************************************** + +Code should not assume a warp size of 32 or 64, as that is not portable between +platforms and architectures. The ``warpSize`` built-in should be used in device +code, while the host can query it during runtime via the device properties. See +the :ref:`HIP language extension for warpSize ` for information on +how to write portable wave-aware code. diff --git a/docs/how-to/logging.rst b/docs/how-to/logging.rst index ecf40fa192..3c8b8c5a53 100644 --- a/docs/how-to/logging.rst +++ b/docs/how-to/logging.rst @@ -240,3 +240,16 @@ information when calling the backend runtime. :3:C:\constructicon\builds\gfx\two\22.40\drivers\compute\hipamd\src\hip_memory.cpp:681 : 605414524092 us: 29864: [tid:0x9298] hipMemGetInfo: Returned hipSuccess : memInfo.total: 12.06 GB memInfo.free: 11.93 GB (99%) + +Logging hipcc commands +================================================================================ + +To see the detailed commands that hipcc issues, set the environment variable +``HIPCC_VERBOSE``. Doing so will print the HIP-clang (or NVCC) commands that +hipcc generates to ``stderr``. + +.. code-block:: shell + + export HIPCC_VERBOSE=1 + hipcc main.cpp + hipcc-cmd: /opt/rocm/lib/llvm/bin/clang++ --offload-arch=gfx90a --driver-mode=g++ -O3 --hip-link -x hip main.cpp diff --git a/docs/understand/compilers.rst b/docs/understand/compilers.rst index 53512e76e5..ccd2dbbec6 100644 --- a/docs/understand/compilers.rst +++ b/docs/understand/compilers.rst @@ -21,6 +21,81 @@ On NVIDIA CUDA platform, ``hipcc`` takes care of invoking compiler ``nvcc``. ``amdclang++`` is based on the ``clang++`` compiler. For more details, see the :doc:`llvm project`. +HIPCC +================================================================================ + +Common Compiler Options +-------------------------------------------------------------------------------- + +The following table shows the most common compiler options supported by +``hipcc``. + +.. 
list-table:: + :header-rows: 1 + + * + - Option + - Description + * + - ``--fgpu-rdc`` + - Generate relocatable device code, which allows kernels or device functions + to call device functions in different translation units. + * + - ``-ggdb`` + - Equivalent to `-g` plus tuning for GDB. This is recommended when using + ROCm's GDB to debug GPU code. + * + - ``--gpu-max-threads-per-block=`` + - Generate code to support up to the specified number of threads per block. + * + - ``-offload-arch=`` + - Generate code for the given GPU target. + For a full list of supported compilation targets see the `processor names in AMDGPU's llvm documentation `_. + This option can appear multiple times to generate a fat binary for multiple + targets. + The actual support of the platform's runtime may differ. + * + - ``-save-temps`` + - Save the compiler generated intermediate files. + * + - ``-v`` + - Show the compilation steps. + +Linking +-------------------------------------------------------------------------------- + +``hipcc`` adds the necessary libraries for HIP as well as for the accelerator +compiler (``nvcc`` or ``amdclang++``). We recommend linking with ``hipcc`` since +it automatically links the binary to the necessary HIP runtime libraries. + +Linking Code With Other Compilers +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +``nvcc`` by default uses ``g++`` to generate the host code. + +``amdclang++`` generates both device and host code. The code uses the same API +as ``gcc``, which allows code generated by different ``gcc``-compatible +compilers to be linked together. For example, code compiled using ``amdclang++`` +can link with code compiled using compilers such as ``gcc``, ``icc`` and +``clang``. Take care to ensure all compilers use the same standard C++ header +and library formats. + +libc++ and libstdc++ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +``hipcc`` links to ``libstdc++`` by default. This provides better compatibility +between ``g++`` and HIP. + +In order to link to ``libc++``, pass ``--stdlib=libc++`` to ``hipcc``. +Generally, libc++ provides a broader set of C++ features while ``libstdc++`` is +the standard for more compilers, notably including ``g++``. + +When cross-linking C++ code, any C++ functions that use types from the C++ +standard library, such as ``std::string``, ``std::vector`` and other containers, +must use the same standard-library implementation. This includes cross-linking +between ``amdclang++`` and other compilers. + + HIP compilation workflow ================================================================================ From e0609d24c021c9c49821cc15d3f341cd51d23599 Mon Sep 17 00:00:00 2001 From: Istvan Kiss Date: Thu, 13 Feb 2025 17:43:29 +0100 Subject: [PATCH 32/46] Docs: Expand HIP porting guide and CUDA driver porting guide --- docs/how-to/hip_cpp_language_extensions.rst | 37 ----- docs/how-to/hip_porting_driver_api.rst | 155 ++++++++++++++++---- docs/how-to/hip_porting_guide.rst | 68 +++++++-- 3 files changed, 182 insertions(+), 78 deletions(-) diff --git a/docs/how-to/hip_cpp_language_extensions.rst b/docs/how-to/hip_cpp_language_extensions.rst index 73462ba526..2ee1ac874d 100644 --- a/docs/how-to/hip_cpp_language_extensions.rst +++ b/docs/how-to/hip_cpp_language_extensions.rst @@ -250,43 +250,6 @@ Units, also known as SIMDs, each with their own register file. For more information see :doc:`../understand/hardware_implementation`. 
:cpp:struct:`hipDeviceProp_t` also has a field ``executionUnitsPerMultiprocessor``. -Porting from CUDA __launch_bounds__ -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -CUDA also defines a ``__launch_bounds__`` qualifier which works similar to HIP's -implementation, however it uses different parameters: - -.. code-block:: cpp - - __launch_bounds__(MAX_THREADS_PER_BLOCK, MIN_BLOCKS_PER_MULTIPROCESSOR) - -The first parameter is the same as HIP's implementation, but -``MIN_BLOCKS_PER_MULTIPROCESSOR`` must be converted to -``MIN_WARPS_PER_EXECUTION``, which uses warps and execution units rather than -blocks and multiprocessors. This conversion is performed automatically by -:doc:`HIPIFY `, or can be done manually with the following -equation. - -.. code-block:: cpp - - MIN_WARPS_PER_EXECUTION_UNIT = (MIN_BLOCKS_PER_MULTIPROCESSOR * MAX_THREADS_PER_BLOCK) / warpSize - -Directly controlling the warps per execution unit makes it easier to reason -about the occupancy, unlike with blocks, where the occupancy depends on the -block size. - -The use of execution units rather than multiprocessors also provides support for -architectures with multiple execution units per multiprocessor. For example, the -AMD GCN architecture has 4 execution units per multiprocessor. - -maxregcount -"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" - -Unlike ``nvcc``, ``amdclang++`` does not support the ``--maxregcount`` option. -Instead, users are encouraged to use the ``__launch_bounds__`` directive since -the parameters are more intuitive and portable than micro-architecture details -like registers. The directive allows per-kernel control. - Memory space qualifiers ================================================================================ diff --git a/docs/how-to/hip_porting_driver_api.rst b/docs/how-to/hip_porting_driver_api.rst index d4d9da1673..41a7aff497 100644 --- a/docs/how-to/hip_porting_driver_api.rst +++ b/docs/how-to/hip_porting_driver_api.rst @@ -1,6 +1,6 @@ .. meta:: :description: This chapter presents how to port the CUDA driver API and showcases equivalent operations in HIP. - :keywords: AMD, ROCm, HIP, CUDA, driver API + :keywords: AMD, ROCm, HIP, CUDA, driver API, porting, port .. _porting_driver_api: @@ -8,26 +8,25 @@ Porting CUDA driver API ******************************************************************************* -NVIDIA provides separate CUDA driver and runtime APIs. The two APIs have -significant overlap in functionality: - -* Both APIs support events, streams, memory management, memory copy, and error - handling. - -* Both APIs deliver similar performance. +CUDA provides separate driver and runtime APIs. The two APIs generally provide +the similar functionality and mostly can be used interchangeably, however the +driver API allows for more fine-grained control over the kernel level +initialization, contexts and module management. This is all taken care of +implicitly by the runtime API. * Driver API calls begin with the prefix ``cu``, while runtime API calls begin with the prefix ``cuda``. For example, the driver API contains ``cuEventCreate``, while the runtime API contains ``cudaEventCreate``, which has similar functionality. -* The driver API defines a different, but largely overlapping, error code space - than the runtime API and uses a different coding convention. For example, the - driver API defines ``CUDA_ERROR_INVALID_VALUE``, while the runtime API defines - ``cudaErrorInvalidValue``. 
+* The driver API offers two additional low-level functionalities not exposed by + the runtime API: module management ``cuModule*`` and context management + ``cuCtx*`` APIs. -The driver API offers two additional functionalities not provided by the runtime -API: ``cuModule`` and ``cuCtx`` APIs. +HIP does not explicitly provide two different APIs, the corresponding functions +for the CUDA driver API are available in the HIP runtime API, and are usually +prefixed with ``hipDrv``. The module and context functionality is available with +the ``hipModule`` and ``hipCtx`` prefix. cuModule API ================================================================================ @@ -120,12 +119,21 @@ For context reference, visit :ref:`context_management_reference`. HIPIFY translation of CUDA driver API ================================================================================ -The HIPIFY tools convert CUDA driver APIs for streams, events, modules, devices, memory management, context, and the profiler to the equivalent HIP calls. For example, ``cuEventCreate`` is translated to ``hipEventCreate``. -HIPIFY tools also convert error codes from the driver namespace and coding conventions to the equivalent HIP error code. HIP unifies the APIs for these common functions. - -The memory copy API requires additional explanation. The CUDA driver includes the memory direction in the name of the API (``cuMemcpyH2D``), while the CUDA driver API provides a single memory copy API with a parameter that specifies the direction. It also supports a "default" direction where the runtime determines the direction automatically. -HIP provides APIs with both styles, for example, ``hipMemcpyH2D`` as well as ``hipMemcpy``. -The first version might be faster in some cases because it avoids any host overhead to detect the different memory directions. +The HIPIFY tools convert CUDA driver APIs such as streams, events, modules, +devices, memory management, context, and the profiler to the equivalent HIP +calls. For example, ``cuEventCreate`` is translated to :cpp:func:`hipEventCreate`. +HIPIFY tools also convert error codes from the driver namespace and coding +conventions to the equivalent HIP error code. HIP unifies the APIs for these +common functions. + +The memory copy API requires additional explanation. The CUDA driver includes +the memory direction in the name of the API (``cuMemcpyHtoD``), while the CUDA +runtime API provides a single memory copy API with a parameter that specifies +the direction. It also supports a "default" direction where the runtime +determines the direction automatically. +HIP provides both versions, for example, :cpp:func:`hipMemcpyHtoD` as well as +:cpp:func:`hipMemcpy`. The first version might be faster in some cases because +it avoids any host overhead to detect the different memory directions. HIP defines a single error space and uses camel case for all errors (i.e. ``hipErrorInvalidValue``). @@ -134,16 +142,25 @@ For further information, visit the :doc:`hipify:index`. Address spaces -------------------------------------------------------------------------------- -HIP-Clang defines a process-wide address space where the CPU and all devices allocate addresses from a single unified pool. -This means addresses can be shared between contexts. Unlike the original CUDA implementation, a new context does not create a new address space for the device. +HIP-Clang defines a process-wide address space where the CPU and all devices +allocate addresses from a single unified pool. 
+This means addresses can be shared between contexts. Unlike the original CUDA +implementation, a new context does not create a new address space for the device. Using hipModuleLaunchKernel -------------------------------------------------------------------------------- -Both CUDA driver and runtime APIs define a function for launching kernels, called ``cuLaunchKernel`` or ``cudaLaunchKernel``. The equivalent API in HIP is ``hipModuleLaunchKernel``. -The kernel arguments and the execution configuration (grid dimensions, group dimensions, dynamic shared memory, and stream) are passed as arguments to the launch function. -The runtime API additionally provides the ``<<< >>>`` syntax for launching kernels, which resembles a special function call and is easier to use than the explicit launch API, especially when handling kernel arguments. -However, this syntax is not standard C++ and is available only when NVCC is used to compile the host code. +Both CUDA driver and runtime APIs define a function for launching kernels, +called ``cuLaunchKernel`` or ``cudaLaunchKernel``. The equivalent API in HIP is +``hipModuleLaunchKernel``. +The kernel arguments and the execution configuration (grid dimensions, group +dimensions, dynamic shared memory, and stream) are passed as arguments to the +launch function. +The runtime API additionally provides the ``<<< >>>`` syntax for launching +kernels, which resembles a special function call and is easier to use than the +explicit launch API, especially when handling kernel arguments. +However, this syntax is not standard C++ and is available only when NVCC is used +to compile the host code. Additional information -------------------------------------------------------------------------------- @@ -186,12 +203,24 @@ functions. Kernel launching -------------------------------------------------------------------------------- -HIP-Clang supports kernel launching using either the CUDA ``<<<>>>`` syntax, ``hipLaunchKernel``, or ``hipLaunchKernelGGL``. The last option is a macro which expands to the CUDA ``<<<>>>`` syntax by default. It can also be turned into a template by defining ``HIP_TEMPLATE_KERNEL_LAUNCH``. +HIP-Clang supports kernel launching using either the CUDA ``<<<>>>`` syntax, +``hipLaunchKernel``, or ``hipLaunchKernelGGL``. The last option is a macro which +expands to the CUDA ``<<<>>>`` syntax by default. It can also be turned into a +template by defining ``HIP_TEMPLATE_KERNEL_LAUNCH``. -When the executable or shared library is loaded by the dynamic linker, the initialization functions are called. In the initialization functions, the code objects containing all kernels are loaded when ``__hipRegisterFatBinary`` is called. When ``__hipRegisterFunction`` is called, the stub functions are associated with the corresponding kernels in the code objects. +When the executable or shared library is loaded by the dynamic linker, the +initialization functions are called. In the initialization functions, the code +objects containing all kernels are loaded when ``__hipRegisterFatBinary`` is +called. When ``__hipRegisterFunction`` is called, the stub functions are +associated with the corresponding kernels in the code objects. HIP-Clang implements two sets of APIs for launching kernels. -By default, when HIP-Clang encounters the ``<<<>>>`` statement in the host code, it first calls ``hipConfigureCall`` to set up the threads and grids. It then calls the stub function with the given arguments. 
The stub function calls ``hipSetupArgument`` for each kernel argument, then calls ``hipLaunchByPtr`` with a function pointer to the stub function. In ``hipLaunchByPtr``, the actual kernel associated with the stub function is launched. +By default, when HIP-Clang encounters the ``<<<>>>`` statement in the host code, +it first calls ``hipConfigureCall`` to set up the threads and grids. It then +calls the stub function with the given arguments. The stub function calls +``hipSetupArgument`` for each kernel argument, then calls ``hipLaunchByPtr`` +with a function pointer to the stub function. In ``hipLaunchByPtr``, the actual +kernel associated with the stub function is launched. NVCC implementation notes ================================================================================ @@ -199,7 +228,9 @@ NVCC implementation notes Interoperation between HIP and CUDA driver -------------------------------------------------------------------------------- -CUDA applications might want to mix CUDA driver code with HIP code (see the example below). This table shows the equivalence between CUDA and HIP types required to implement this interaction. +CUDA applications might want to mix CUDA driver code with HIP code (see the +example below). This table shows the equivalence between CUDA and HIP types +required to implement this interaction. .. list-table:: Equivalence table between HIP and CUDA types :header-rows: 1 @@ -547,3 +578,67 @@ The HIP version number is defined as an integer: .. code-block:: cpp HIP_VERSION=HIP_VERSION_MAJOR * 10000000 + HIP_VERSION_MINOR * 100000 + HIP_VERSION_PATCH + +******************************************************************************** +CU_POINTER_ATTRIBUTE_MEMORY_TYPE +******************************************************************************** + +To get the pointer's memory type in HIP, developers should use +:cpp:func:`hipPointerGetAttributes`. First parameter of the function is +`hipPointerAttribute_t`. Its ``type`` member variable indicates whether the +memory pointed to is allocated on the device or the host. + +For example: + +.. code-block:: cpp + + double * ptr; + hipMalloc(&ptr, sizeof(double)); + hipPointerAttribute_t attr; + hipPointerGetAttributes(&attr, ptr); /*attr.type is hipMemoryTypeDevice*/ + if(attr.type == hipMemoryTypeDevice) + std::cout << "ptr is of type hipMemoryTypeDevice" << std::endl; + + double* ptrHost; + hipHostMalloc(&ptrHost, sizeof(double)); + hipPointerAttribute_t attr; + hipPointerGetAttributes(&attr, ptrHost); /*attr.type is hipMemoryTypeHost*/ + if(attr.type == hipMemorTypeHost) + std::cout << "ptrHost is of type hipMemoryTypeHost" << std::endl; + +Note that ``hipMemoryType`` enum values are different from the +``cudaMemoryType`` enum values. + +For example, on AMD platform, `hipMemoryType` is defined in `hip_runtime_api.h`, + +.. code-block:: cpp + + typedef enum hipMemoryType { + hipMemoryTypeHost = 0, ///< Memory is physically located on host + hipMemoryTypeDevice = 1, ///< Memory is physically located on device. (see deviceId for specific device) + hipMemoryTypeArray = 2, ///< Array memory, physically located on device. (see deviceId for specific device) + hipMemoryTypeUnified = 3, ///< Not used currently + hipMemoryTypeManaged = 4 ///< Managed memory, automaticallly managed by the unified memory system + } hipMemoryType; + +Looking into CUDA toolkit, it defines `cudaMemoryType` as following, + +.. code-block:: cpp + + enum cudaMemoryType + { + cudaMemoryTypeUnregistered = 0, // Unregistered memory. 
+ cudaMemoryTypeHost = 1, // Host memory. + cudaMemoryTypeDevice = 2, // Device memory. + cudaMemoryTypeManaged = 3, // Managed memory + } + +In this case, memory type translation for `hipPointerGetAttributes` needs to be handled properly on NVIDIA platform to get the correct memory type in CUDA, which is done in the file `nvidia_hip_runtime_api.h`. + +So in any HIP applications which use HIP APIs involving memory types, developers should use `#ifdef` in order to assign the correct enum values depending on NVIDIA or AMD platform. + +As an example, please see the code from the `link `_. + +With the `#ifdef` condition, HIP APIs work as expected on both AMD and NVIDIA platforms. + +Note, `cudaMemoryTypeUnregistered` is currently not supported as `hipMemoryType` enum, due to HIP functionality backward compatibility. diff --git a/docs/how-to/hip_porting_guide.rst b/docs/how-to/hip_porting_guide.rst index af248b3ec9..e4cf93d14c 100644 --- a/docs/how-to/hip_porting_guide.rst +++ b/docs/how-to/hip_porting_guide.rst @@ -14,10 +14,22 @@ suggestions on how to port CUDA code and work through common issues. Porting a CUDA Project ******************************************************************************** +Mixing HIP and CUDA code results in valid CUDA code. This enables users to +incrementally port CUDA to HIP, and still compile and test the code during the +transition. + +The only notable exception is ``hipError_t``, which is not just an alias to +``cudaError_t``. In these cases HIP provides functions to convert between the +error code spaces: + +:cpp:func:`hipErrorToCudaError` +:cpp:func:`hipErrorToCUResult` +:cpp:func:`hipCUDAErrorTohipError` +:cpp:func:`hipCUResultTohipError` + General Tips ================================================================================ -* You can incrementally port pieces of the code to HIP while leaving the rest in CUDA. HIP is just a thin layer over CUDA, so the two languages can interoperate. * Starting to port on an NVIDIA machine is often the easiest approach, as the code can be tested for functionality and performance even if not fully ported to HIP. * Once the CUDA code is ported to HIP and is running on the CUDA machine, compile the HIP code for an AMD machine. * You can handle platform-specific features through conditional compilation or by adding them to the open-source HIP infrastructure. @@ -533,16 +545,6 @@ supports, together with the corresponding macros and device properties. - ``hasDynamicParallelism`` - Ability to launch a kernel from within a kernel -******************************************************************************** -Finding HIP -******************************************************************************** - -Makefiles can use the following syntax to conditionally provide a default HIP_PATH if one does not exist: - -.. code-block:: shell - - HIP_PATH ?= $(shell hipconfig --path) - ******************************************************************************** Compilation ******************************************************************************** @@ -555,6 +557,12 @@ options are appropriate for the target compiler. ``hipconfig`` is a helpful tool in identifying the current systems platform, compiler and runtime. It can also help set options appropriately. +As an example, it can provide a path to HIP, in Makefiles for example: + +.. 
code-block:: shell + + HIP_PATH ?= $(shell hipconfig --path) + HIP Headers ================================================================================ @@ -602,3 +610,41 @@ platforms and architectures. The ``warpSize`` built-in should be used in device code, while the host can query it during runtime via the device properties. See the :ref:`HIP language extension for warpSize ` for information on how to write portable wave-aware code. + +******************************************************************************** +Porting from CUDA __launch_bounds__ +******************************************************************************** + +CUDA also defines a ``__launch_bounds__`` qualifier which works similar to HIP's +implementation, however it uses different parameters: + +.. code-block:: cpp + + __launch_bounds__(MAX_THREADS_PER_BLOCK, MIN_BLOCKS_PER_MULTIPROCESSOR) + +The first parameter is the same as HIP's implementation, but +``MIN_BLOCKS_PER_MULTIPROCESSOR`` must be converted to +``MIN_WARPS_PER_EXECUTION``, which uses warps and execution units rather than +blocks and multiprocessors. This conversion is performed automatically by +:doc:`HIPIFY `, or can be done manually with the following +equation. + +.. code-block:: cpp + + MIN_WARPS_PER_EXECUTION_UNIT = (MIN_BLOCKS_PER_MULTIPROCESSOR * MAX_THREADS_PER_BLOCK) / warpSize + +Directly controlling the warps per execution unit makes it easier to reason +about the occupancy, unlike with blocks, where the occupancy depends on the +block size. + +The use of execution units rather than multiprocessors also provides support for +architectures with multiple execution units per multiprocessor. For example, the +AMD GCN architecture has 4 execution units per multiprocessor. + +maxregcount +================================================================================ + +Unlike ``nvcc``, ``amdclang++`` does not support the ``--maxregcount`` option. +Instead, users are encouraged to use the ``__launch_bounds__`` directive since +the parameters are more intuitive and portable than micro-architecture details +like registers. The directive allows per-kernel control. From 5e90d1431594dbc98635528ab73550583cc06bd9 Mon Sep 17 00:00:00 2001 From: Istvan Kiss Date: Fri, 14 Feb 2025 07:50:47 +0100 Subject: [PATCH 33/46] Minor fix --- docs/how-to/hip_porting_driver_api.rst | 17 ++++--- docs/how-to/hip_porting_guide.rst | 63 +++++++++++++------------- 2 files changed, 42 insertions(+), 38 deletions(-) diff --git a/docs/how-to/hip_porting_driver_api.rst b/docs/how-to/hip_porting_driver_api.rst index 41a7aff497..7d7ebbc24d 100644 --- a/docs/how-to/hip_porting_driver_api.rst +++ b/docs/how-to/hip_porting_driver_api.rst @@ -579,9 +579,8 @@ The HIP version number is defined as an integer: HIP_VERSION=HIP_VERSION_MAJOR * 10000000 + HIP_VERSION_MINOR * 100000 + HIP_VERSION_PATCH -******************************************************************************** CU_POINTER_ATTRIBUTE_MEMORY_TYPE -******************************************************************************** +================================================================================ To get the pointer's memory type in HIP, developers should use :cpp:func:`hipPointerGetAttributes`. 
First parameter of the function is @@ -633,12 +632,18 @@ Looking into CUDA toolkit, it defines `cudaMemoryType` as following, cudaMemoryTypeManaged = 3, // Managed memory } -In this case, memory type translation for `hipPointerGetAttributes` needs to be handled properly on NVIDIA platform to get the correct memory type in CUDA, which is done in the file `nvidia_hip_runtime_api.h`. +In this case, memory type translation for ``hipPointerGetAttributes`` needs to +be handled properly on NVIDIA platform to get the correct memory type in CUDA, +which is done in the file ``nvidia_hip_runtime_api.h``. -So in any HIP applications which use HIP APIs involving memory types, developers should use `#ifdef` in order to assign the correct enum values depending on NVIDIA or AMD platform. +So in any HIP applications which use HIP APIs involving memory types, developers +should use ``#ifdef`` in order to assign the correct enum values depending on +NVIDIA or AMD platform. As an example, please see the code from the `link `_. -With the `#ifdef` condition, HIP APIs work as expected on both AMD and NVIDIA platforms. +With the ``#ifdef`` condition, HIP APIs work as expected on both AMD and NVIDIA +platforms. -Note, `cudaMemoryTypeUnregistered` is currently not supported as `hipMemoryType` enum, due to HIP functionality backward compatibility. +Note, ``cudaMemoryTypeUnregistered`` is currently not supported as +``hipMemoryType`` enum, due to HIP functionality backward compatibility. diff --git a/docs/how-to/hip_porting_guide.rst b/docs/how-to/hip_porting_guide.rst index e4cf93d14c..136084f66b 100644 --- a/docs/how-to/hip_porting_guide.rst +++ b/docs/how-to/hip_porting_guide.rst @@ -2,17 +2,16 @@ :description: This chapter presents how to port CUDA source code to HIP. :keywords: AMD, ROCm, HIP, CUDA, porting, port -################################################################################ +******************************************************************************** HIP porting guide -################################################################################ +******************************************************************************** HIP is designed to ease the porting of existing CUDA code into the HIP environment. This page describes the available tools and provides practical suggestions on how to port CUDA code and work through common issues. -******************************************************************************** Porting a CUDA Project -******************************************************************************** +================================================================================ Mixing HIP and CUDA code results in valid CUDA code. This enables users to incrementally port CUDA to HIP, and still compile and test the code during the @@ -22,21 +21,26 @@ The only notable exception is ``hipError_t``, which is not just an alias to ``cudaError_t``. 
In these cases HIP provides functions to convert between the error code spaces: -:cpp:func:`hipErrorToCudaError` -:cpp:func:`hipErrorToCUResult` -:cpp:func:`hipCUDAErrorTohipError` -:cpp:func:`hipCUResultTohipError` +* :cpp:func:`hipErrorToCudaError` +* :cpp:func:`hipErrorToCUResult` +* :cpp:func:`hipCUDAErrorTohipError` +* :cpp:func:`hipCUResultTohipError` General Tips -================================================================================ +-------------------------------------------------------------------------------- -* Starting to port on an NVIDIA machine is often the easiest approach, as the code can be tested for functionality and performance even if not fully ported to HIP. -* Once the CUDA code is ported to HIP and is running on the CUDA machine, compile the HIP code for an AMD machine. -* You can handle platform-specific features through conditional compilation or by adding them to the open-source HIP infrastructure. -* Use the `HIPIFY `_ tools to automatically convert CUDA code to HIP, as described in the following section. +* Starting to port on an NVIDIA machine is often the easiest approach, as the + code can be tested for functionality and performance even if not fully ported + to HIP. +* Once the CUDA code is ported to HIP and is running on the CUDA machine, + compile the HIP code for an AMD machine. +* You can handle platform-specific features through conditional compilation or + by adding them to the open-source HIP infrastructure. +* Use the `HIPIFY `_ tools to automatically + convert CUDA code to HIP, as described in the following section. HIPIFY -================================================================================ +-------------------------------------------------------------------------------- :doc:`HIPIFY ` is a collection of tools that automatically translate CUDA to HIP code. There are two flavours available, ``hipfiy-clang`` @@ -126,7 +130,7 @@ hipified code to ``stdout``. directories. Library Equivalents -================================================================================ +-------------------------------------------------------------------------------- ROCm provides libraries to ease porting of code relying on CUDA libraries. Most CUDA libraries have a corresponding HIP library. @@ -213,12 +217,11 @@ corresponding CUDA library, depending on the platform, to provide compatibility. - Communications Primitives Library based on the MPI equivalents RCCL is a drop-in replacement for NCCL -******************************************************************************** Distinguishing compilers and platforms -******************************************************************************** +================================================================================ Identifying the HIP Target Platform -================================================================================ +-------------------------------------------------------------------------------- HIP projects can target either the AMD or NVIDIA platform. The platform affects which backend-headers are included and which libraries are used for linking. The @@ -388,9 +391,8 @@ for the device. .. 
_identifying_device_architecture_features: -******************************************************************************** Identifying Device Architecture Features -******************************************************************************** +================================================================================ GPUs of different generations and architectures do not all provide the same level of :doc:`hardware feature support <../reference/hardware_features>`. To @@ -398,7 +400,7 @@ guard device-code using these architecture dependent features, the ``__HIP_ARCH___`` C++-macros can be used. Device Code Feature Identification -================================================================================ +-------------------------------------------------------------------------------- Some CUDA code tests ``__CUDA_ARCH__`` for a specific value to determine whether the GPU supports a certain architectural feature, depending on its compute @@ -422,7 +424,7 @@ For host code, the ``__HIP_ARCH___`` defines are set to 0, if upon in host code. Host Code Feature Identification -================================================================================ +-------------------------------------------------------------------------------- Host code must not rely on the ``__HIP_ARCH___`` macros, as the GPUs available to a system can not be known during compile time, and their @@ -464,7 +466,7 @@ Host code can query architecture feature flags during runtime, by using } Table of Architecture Properties -================================================================================ +-------------------------------------------------------------------------------- The table below shows the full set of architectural properties that HIP supports, together with the corresponding macros and device properties. @@ -545,9 +547,8 @@ supports, together with the corresponding macros and device properties. - ``hasDynamicParallelism`` - Ability to launch a kernel from within a kernel -******************************************************************************** Compilation -******************************************************************************** +================================================================================ ``hipcc`` is a portable compiler driver that calls ``nvcc`` or ``amdclang++`` and forwards the appropriate options. It passes options through @@ -564,7 +565,7 @@ As an example, it can provide a path to HIP, in Makefiles for example: HIP_PATH ?= $(shell hipconfig --path) HIP Headers -================================================================================ +-------------------------------------------------------------------------------- The ``hip_runtime.h`` headers define all the necessary types, functions, macros, etc., needed to compile a HIP program, this includes host as well as device @@ -575,7 +576,7 @@ need to convert hipified code to include the richer ``hip_runtime.h`` instead of ``hip_runtime_api.h``. Using a Standard C++ Compiler -================================================================================ +-------------------------------------------------------------------------------- You can compile ``hip_runtime_api.h`` using a standard C or C++ compiler (e.g., ``gcc`` or ``icc``). @@ -601,9 +602,8 @@ The ``hipify`` tool automatically converts ``cuda_runtime.h`` to ``hip_runtime.h``, and it converts ``cuda_runtime_api.h`` to ``hip_runtime_api.h``, but it may miss nested headers or macros. 
-******************************************************************************** warpSize -******************************************************************************** +================================================================================ Code should not assume a warp size of 32 or 64, as that is not portable between platforms and architectures. The ``warpSize`` built-in should be used in device @@ -611,9 +611,8 @@ code, while the host can query it during runtime via the device properties. See the :ref:`HIP language extension for warpSize ` for information on how to write portable wave-aware code. -******************************************************************************** Porting from CUDA __launch_bounds__ -******************************************************************************** +================================================================================ CUDA also defines a ``__launch_bounds__`` qualifier which works similar to HIP's implementation, however it uses different parameters: @@ -642,7 +641,7 @@ architectures with multiple execution units per multiprocessor. For example, the AMD GCN architecture has 4 execution units per multiprocessor. maxregcount -================================================================================ +-------------------------------------------------------------------------------- Unlike ``nvcc``, ``amdclang++`` does not support the ``--maxregcount`` option. Instead, users are encouraged to use the ``__launch_bounds__`` directive since From c9d719ff7be2b14017b1955ef7403605f9aba8bd Mon Sep 17 00:00:00 2001 From: Adel Johar Date: Fri, 14 Feb 2025 14:27:17 +0100 Subject: [PATCH 34/46] Docs: Update environment variables file --- docs/data/env_variables_hip.rst | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/docs/data/env_variables_hip.rst b/docs/data/env_variables_hip.rst index 6186671ecf..4192db7387 100644 --- a/docs/data/env_variables_hip.rst +++ b/docs/data/env_variables_hip.rst @@ -2,6 +2,9 @@ :description: HIP environment variables :keywords: AMD, HIP, environment variables, environment +HIP GPU isolation variables +-------------------------------------------------------------------------------- + The GPU isolation environment variables in HIP are collected in the following table. .. _hip-env-isolation: @@ -24,6 +27,9 @@ The GPU isolation environment variables in HIP are collected in the following ta | Device indices exposed to HIP applications. - Example: ``0,2`` +HIP profiling variables +-------------------------------------------------------------------------------- + The profiling environment variables in HIP are collected in the following table. .. _hip-env-prof: @@ -50,6 +56,9 @@ The profiling environment variables in HIP are collected in the following table. - | 0: Disable | 1: Enable +HIP debug variables +-------------------------------------------------------------------------------- + The debugging environment variables in HIP are collected in the following table. .. _hip-env-debug: @@ -149,6 +158,9 @@ The debugging environment variables in HIP are collected in the following table. number does not apply to hardware queues that are created for CU-masked HIP streams, or cooperative queues for HIP Cooperative Groups (single queue per device). +HIP memory management related variables +-------------------------------------------------------------------------------- + The memory management related environment variables in HIP are collected in the following table. 
@@ -245,6 +257,9 @@ following table. - | 0: Disable | 1: Enable +HIP miscellaneous variables +-------------------------------------------------------------------------------- + The following table lists environment variables that are useful but relate to different features in HIP. From 98e06aa708ed356ddbd52a4ac59da5775b8a50ec Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Thu, 20 Feb 2025 00:45:05 +0000 Subject: [PATCH 35/46] Bump rocm-docs-core[api_reference] from 1.15.0 to 1.17.0 in /docs/sphinx Bumps [rocm-docs-core[api_reference]](https://github.com/ROCm/rocm-docs-core) from 1.15.0 to 1.17.0. - [Release notes](https://github.com/ROCm/rocm-docs-core/releases) - [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md) - [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.15.0...v1.17.0) --- updated-dependencies: - dependency-name: rocm-docs-core[api_reference] dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] --- docs/sphinx/requirements.in | 2 +- docs/sphinx/requirements.txt | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/sphinx/requirements.in b/docs/sphinx/requirements.in index 3a34f77e71..f7fb1c634a 100644 --- a/docs/sphinx/requirements.in +++ b/docs/sphinx/requirements.in @@ -1,2 +1,2 @@ -rocm-docs-core[api_reference]==1.15.0 +rocm-docs-core[api_reference]==1.17.0 sphinxcontrib.doxylink diff --git a/docs/sphinx/requirements.txt b/docs/sphinx/requirements.txt index e88b29e61e..3707dd92a3 100644 --- a/docs/sphinx/requirements.txt +++ b/docs/sphinx/requirements.txt @@ -211,7 +211,7 @@ requests==2.32.3 # via # pygithub # sphinx -rocm-docs-core[api-reference]==1.15.0 +rocm-docs-core[api-reference]==1.17.0 # via -r requirements.in rpds-py==0.22.3 # via From acd1a883b59e64ad81716697558a614a3aeb58f1 Mon Sep 17 00:00:00 2001 From: Istvan Kiss Date: Fri, 7 Feb 2025 14:11:52 +0100 Subject: [PATCH 36/46] Remove release.md and remove more-info section from readme --- README.md | 27 ------- RELEASE.md | 216 ----------------------------------------------------- 2 files changed, 243 deletions(-) delete mode 100644 RELEASE.md diff --git a/README.md b/README.md index 4df0b4c6a9..ed6db9581b 100644 --- a/README.md +++ b/README.md @@ -36,33 +36,6 @@ HIP releases are typically naming convention for each ROCM release to help diffe * rocm x.yy: These are the stable releases based on the ROCM release. 
This type of release is typically made once a month.* -## More Info - -* [Installation](docs/install/install.rst) -* [HIP FAQ](docs/faq.rst) -* [HIP C++ Language Extensions](docs/reference/cpp_language_extensions.rst) -* [HIP Porting Guide](docs/how-to/hip_porting_guide.md) -* [HIP Porting Driver Guide](docs/how-to/hip_porting_driver_api.rst) -* [HIP Programming Guide](docs/programming_guide.rst) -* [HIP Logging](docs/how-to/logging.rst) -* [Building HIP From Source](docs/install/build.rst) -* [HIP Debugging](docs/how-to/debugging.rst) -* [HIP RTC](docs/how-to/hip_rtc.md) -* [HIP Terminology](docs/reference/terms.md) (including Rosetta Stone of GPU computing terms across CUDA/HIP/OpenCL) -* [HIPIFY](https://github.com/ROCm/HIPIFY/blob/amd-staging/README.md) -* Supported CUDA APIs: - * [Runtime API](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUDA_Runtime_API_functions_supported_by_HIP.md) - * [Driver API](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUDA_Driver_API_functions_supported_by_HIP.md) - * [cuComplex API](https://github.com/ROCm/HIPIFY/blob/amd-staging/reference/docs/tables/cuComplex_API_supported_by_HIP.md) - * [Device API](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUDA_Device_API_supported_by_HIP.md) - * [cuBLAS](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUBLAS_API_supported_by_ROC.md) - * [cuRAND](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CURAND_API_supported_by_HIP.md) - * [cuDNN](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUDNN_API_supported_by_HIP.md) - * [cuFFT](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUFFT_API_supported_by_HIP.md) - * [cuSPARSE](https://github.com/ROCm/HIPIFY/blob/amd-staging/docs/reference/tables/CUSPARSE_API_supported_by_HIP.md) -* [Developer/CONTRIBUTING Info](CONTRIBUTING.md) -* [Release Notes](RELEASE.md) - ## How do I get set up? See the [Installation](docs/install/install.rst) notes. diff --git a/RELEASE.md b/RELEASE.md deleted file mode 100644 index 15fb221549..0000000000 --- a/RELEASE.md +++ /dev/null @@ -1,216 +0,0 @@ -# Release notes - -We have attempted to document known bugs and limitations - in particular the [HIP Kernel Language](docs/markdown/hip_kernel_language.md) document uses the phrase "Under Development", and the [HIP Runtime API issue list](https://github.com/ROCm/HIP/issues) lists known bugs. - - -=================================================================================================== - - -## Revision History: - -=================================================================================================== -Release: 1.5 -Date: -- Support threadIdx, blockIdx, blockDim directly (no need for hipify conversions in kernels.) HIP - Kernel syntax is now identical to CUDA kernel syntax - no need for extra parms or conversions. -- Refactor launch syntax. HIP now extracts kernels from the executable and launches them using the - existing module interface. Kernels dispatch no longer flows through HCC. Result is faster - kernel launches and with less resource usage (no signals required). -- Remove requirement for manual "serializers" previously required when passing complex structures - into kernels. 
-- Remove need for manual destructors -- Provide printf in device code -- Support for globals when using module API -- hipify-clang now supports using newer versions of clang -- HIP texture support equivalent to CUDA texture driver APIs -- Updates to hipify-perl, hipify-clang and documentation - - -=================================================================================================== -Release: 1.4 -Date: 2017.10.06 -- Improvements to HIP event management -- Added new HIP_TRACE_API options -- Enabled device side assert support -- Several bug fixes including hipMallocArray, hipTexture fetch -- Support for RHEL/CentOS 7.4 -- Updates to hipify-perl, hipify-clang and documentation - - -=================================================================================================== -Release: 1.3 -Date: 2017.08.16 -- hipcc now auto-detects amdgcn arch. No need to specify the arch when building for same system. -- HIP texture support (run-time APIs) -- Implemented __threadfence_support -- Improvements in HIP context management logic -- Bug fixes in several APIs including hipDeviceGetPCIBusId, hipEventDestroy, hipMemcpy2DAsync -- Updates to hipify-clang and documentation -- HIP development now fully open and on GitHub. Developers should submit pull requests. - - -=================================================================================================== -Release: 1.2 -Date: 2017.06.29 -- new APIs: hipMemcpy2DAsync, hipMallocPitch, hipHostMallocCoherent, hipHostMallocNonCoherent -- added support for building hipify-clang using clang 3.9 -- hipify-clang updates for CUDA 8.0 runtime+driver support -- renamed hipify to hipify-perl -- initial implementation of hipify-cmakefile -- several documentation updates & bug fixes -- support for abort() function in device code - - -=================================================================================================== -Release: 1.0.17102 -Date: 2017.03.07 -- Lots of improvements to hipify-clang. -- Added HIP package config for cmake. -- Several bug fixes and documentation updates. - - -=================================================================================================== -Release: 1.0.17066 -Date: 2017.02.11 -- Improved support for math device functions. -- Added several half math device functions. -- Enabled support for CUDA 8.0 in hipify-clang. -- Lots of bug fixes and documentation updates. - - -=================================================================================================== -Release: 1.0.17015 -Date: 2017.01.06 -- Several improvements to the hipify-clang infrastructure. -- Refactored module and function APIs. -- HIP now defaults to linking against the shared runtime library. -- Documentation updates. - - -=================================================================================================== -Release: 1.0.16502 -Date: 2016.12.13 -- Added several fast math and packaged math instrincs -- Improved debug and profiler documentation -- Support for building and linking to HIP shared library -- Several improvements to hipify-clang -- Several bug fixes - - -=================================================================================================== -Release: 1.0.16461 -Date: 2016.11.14 -- Significant changes to the HIP Profiling APIs. 
Refer to the documentation for details -- Improvements to P2P support -- New API: hipDeviceGetByPCIBusId -- Several bug fixes in NV path -- hipModuleLaunch now works for multi-dim kernels - - -=================================================================================================== -Release:1.0 -Date: 2016.11.8 -- Initial implementation for FindHIP.cmake -- HIP library now installs as a static library by default -- Added support for HIP context and HIP module APIs -- Major changes to HIP signal & memory management implementation -- Support for complex data type and math functions -- clang-hipify is now known as hipify-clang -- Added several new HIP samples -- Preliminary support for new APIs: hipMemcpyToSymbol, hipDeviceGetLimit, hipRuntimeGetVersion -- Added support for async memcpy driver API (for example hipMemcpyHtoDAsync) -- Support for memory management device functions: malloc, free, memcpy & memset -- Removed deprecated HIP runtime header locations. Please include "hip/hip_runtime.h" instead of "hip_runtime.h". You can use `find . -type f -exec sed -i 's:#include "hip_runtime.h":#include "hip/hip_runtime.h":g' {} +` to replace all such references - - -=================================================================================================== -Release:0.92.00 -Date: 2016.8.14 -- hipLaunchKernel supports one-dimensional grid and/or block dims, without explicit cast to dim3 type (actually in 0.90.00) -- fp16 software support -- Support for Hawaii dGPUs using environment variable ROCM_TARGET=hawaii -- Support hipArray -- Improved profiler support -- Documentation updates -- Improvements to clang-hipify - - -=================================================================================================== -Release:0.90.00 -Date: 2016.06.29 -- Support dynamic shared memory allocations -- Min HCC compiler version is > 16186. -- Expanded math functions (device and host). Document unsupported functions. -- hipFree with null pointer initializes runtime and returns success. -- Improve error code reporting on nvcc. -- Add hipPeekAtError for nvcc. - - -=================================================================================================== -Release:0.86.00 -Date: 2016.06.06 -- Add clang-hipify : clang-based hipify tool. Improved parsing of source code, and automates - creation of hipLaunchParm variable. -- Implement memory register / unregister commands (hipHostRegister, hipHostUnregister) -- Add cross-linking support between G++ and HCC, in particular for interfaces that use - standard C++ libraries (ie std::vectors, std::strings). HIPCC now uses libstdc++ by default on the HCC - compilation path. -- More samples including gpu-burn, SHOC, nbody, rtm. See [HIP-Examples](https://github.com/ROCm/HIP-Examples) - - -=================================================================================================== -Release:0.84.01 -Date: 2016.04.25 -- Refactor HIP make and install system: - - Move to CMake. Refer to the installation section in README.md for details. - - Split source into multiple modular .cpp and .h files. - - Create static library and link. - - Set HIP_PATH to install. -- Make hipDevice and hipStream thread-safe. - - Preferred hipStream usage is still to create new streams for each new thread, but it works even if you don;t. -- Improve automated platform detection: If AMD GPU is installed and detected by driver, default HIP_PLATFORM to hcc. -- HIP_TRACE_API now prints arguments to the HIP function (in addition to name of function). 
-- Deprecate hipDeviceGetProp (Replace with hipGetDeviceProp) -- Deprecate hipMallocHost (Replace with hipHostMalloc) -- Deprecate hipFreeHost (Replace with hipHostFree) -- The mixbench benchmark tool for measuring operational intensity now has a HIP target, in addition to CUDA and OpenCL. Let the comparisons begin. :) -See here for more : https://github.com/ekondis/mixbench. - - -=================================================================================================== -Release:0.82.00 -Date: 2016.03.07 -- Bump minimum required HCC workweek to 16074. -- Bump minimum required ROCK-Kernel-Driver and ROCR-Runtime to Developer Preview 2. -- Enable multi-GPU support. - * Use hipSetDevice to select a device for subsequent kernel calls and memory allocations. - * CUDA_VISIBLE_DEVICES / HIP_VISIBLE_DEVICE environment variable selects devices visible to the runtime. -- Support hipStreams – send sequences of copy and kernel commands to a device. - * Asynchronous copies supported. -- Optimize memory copy operations. -- Support hipPointerGetAttribute – can determine if a pointer is host or device. -- Enable atomics to local memory. -- Support for LC Direct-To-ISA path. -- Improved free memory reporting. - * hipMemGetInfo (report full memory used in current process). - * hipDeviceReset (deletes all memory allocated by current process). - - -=================================================================================================== -Release:0.80.01 -Date: 2016.02.18 -- Improve reporting and support for device-side math functions. -- Update Runtime Documentation. -- Improve implementations of cross-lane operations (_ballot, _any, _all). -- Provide shuffle intrinsics (performance optimization in-progress). -- Support hipDeviceAttribute for querying "one-shot" device attributes, as an alternative to hipGetDeviceProperties. - - -=================================================================================================== -Release:0.80.00 -Date: 2016.01.25 - -Initial release with GPUOpen Launch. - - - From 9453139bcca6649c4de5a5463c896f27640f27c5 Mon Sep 17 00:00:00 2001 From: Adel Johar Date: Tue, 28 Jan 2025 17:13:25 +0100 Subject: [PATCH 37/46] Docs: Add xnack to unified memory page --- .../memory_management/unified_memory.rst | 152 +++++++++++++++++- 1 file changed, 144 insertions(+), 8 deletions(-) diff --git a/docs/how-to/hip_runtime_api/memory_management/unified_memory.rst b/docs/how-to/hip_runtime_api/memory_management/unified_memory.rst index c253416928..7fc4e168ff 100644 --- a/docs/how-to/hip_runtime_api/memory_management/unified_memory.rst +++ b/docs/how-to/hip_runtime_api/memory_management/unified_memory.rst @@ -10,7 +10,7 @@ Unified memory management ******************************************************************************* In conventional architectures CPUs and attached devices have their own memory -space and dedicated physical memory backing it up, e.g. normal RAM for CPUs and +space and dedicated physical memory backing it up, for example normal RAM for CPUs and VRAM on GPUs. This way each device can have physical memory optimized for its use case. GPUs usually have specialized memory whose bandwidth is a magnitude higher than the RAM attached to CPUs. @@ -46,7 +46,7 @@ Hardware supported on-demand page migration When a kernel on the device tries to access a memory address that is not in its memory, a page-fault is triggered. The GPU then in turn requests the page from -the host or an other device, on which the memory is located. 
The page is then +the host or another device, on which the memory is located. The page is then unmapped from the source, sent to the device and mapped to the device's memory. The requested memory is then available to the processes running on the device. @@ -110,9 +110,145 @@ allocator can be used. ❌: **Unsupported** -:sup:`1` Works only with ``XNACK=1`` and kernels with HMM support. First GPU +:sup:`1` Works only with ``HSA_XNACK=1`` and kernels with HMM support. First GPU access causes recoverable page-fault. +.. _xnack: + +XNACK +----- + +On specific GPU architectures (referenced in the previous table), there is an +option to automatically migrate pages of memory between host and device. This is important +for managed memory, where the locality of the data is important for performance. +Depending on the system, page migration may be disabled by default in which case managed +memory will act like pinned host memory and suffer degraded performance. + +**XNACK** describes the GPU's ability to retry memory accesses that failed due to a page fault +(which normally would lead to a memory access error), and instead retrieve the missing page. +To enable this behavior, set the environment variable ``HSA_XNACK=1``. + +This also affects memory allocated by the system as indicated by the first table in +:ref:`unified memory allocators`. + +Below is a small example that demonstrates an explicit page fault and how **XNACK** affects +the page fault behavior. + +.. code-block:: cpp + + #include + #include + + #define HIP_CHECK(expression) \ + { \ + const hipError_t err = expression; \ + if(err != hipSuccess){ \ + std::cerr << "HIP error: " \ + << hipGetErrorString(err) \ + << " at " << __LINE__ << "\n"; \ + exit(EXIT_FAILURE); \ + } \ + } + + __global__ void write_to_memory(int* data, int size) + { + int idx = blockIdx.x * blockDim.x + threadIdx.x; + if (idx < size) + { + // Writing to memory that may not have been allocated in GPU memory + data[idx] = idx * 2; // Triggers a page fault if not resident + } + } + + int main() + { + const int N = 1024; // 1K elements + const int blocksize = 256; + int* data; + + // Allocate unified memory + HIP_CHECK(hipMallocManaged(&data, N * sizeof(int))); + + // Intentionally don't initialize or prefetch any part of the data + // No initialization: data is uninitialized but accessible + + // Launch kernel that writes to all elements + dim3 threads(blocksize); + dim3 blocks(N / blocksize); + hipLaunchKernelGGL(write_to_memory, blocks, threads, 0, 0, data, N); + + // Synchronize to ensure kernel completion/termination and fault resolution + HIP_CHECK(hipDeviceSynchronize()); + + // Check results + bool pass = true; + for (int i = 0; i < N; ++i) + { + if (data[i] != (i * 2)) + { + pass = false; + std::cout << "Failed at position" << i << " with value " << data[i] <`_. Without this configuration, the behavior will be similar to that of systems without HMM @@ -157,10 +293,10 @@ functions on ROCm and CUDA, both with and without HMM support. 
:header-rows: 1 * - call - - Allocation origin without HMM or ``XNACK=0`` - - Access outside the origin without HMM or ``XNACK=0`` - - Allocation origin with HMM and ``XNACK=1`` - - Access outside the origin with HMM and ``XNACK=1`` + - Allocation origin without HMM or ``HSA_XNACK=0`` + - Access outside the origin without HMM or ``HSA_XNACK=0`` + - Allocation origin with HMM and ``HSA_XNACK=1`` + - Access outside the origin with HMM and ``HSA_XNACK=1`` * - ``new``, ``malloc()`` - host - not accessible on device From 99d763678fa616a556807135c11780652b729855 Mon Sep 17 00:00:00 2001 From: Adel Johar Date: Wed, 5 Feb 2025 15:30:11 +0100 Subject: [PATCH 38/46] Docs: Update FP8 page to show both FP8 and FP16 types --- .wordlist.txt | 1 + docs/index.md | 2 +- docs/reference/fp8_numbers.rst | 230 ---------------- docs/reference/low_fp_types.rst | 470 ++++++++++++++++++++++++++++++++ docs/sphinx/_toc.yml.in | 4 +- 5 files changed, 474 insertions(+), 233 deletions(-) delete mode 100644 docs/reference/fp8_numbers.rst create mode 100644 docs/reference/low_fp_types.rst diff --git a/.wordlist.txt b/.wordlist.txt index de7c91b31a..009c63b73f 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -9,6 +9,7 @@ AXPY asm asynchrony backtrace +bfloat Bitcode bitcode bitcodes diff --git a/docs/index.md b/docs/index.md index 247c58e2fd..23f352e306 100644 --- a/docs/index.md +++ b/docs/index.md @@ -46,7 +46,7 @@ The HIP documentation is organized into the following categories: * [HIP environment variables](./reference/env_variables) * [CUDA to HIP API Function Comparison](./reference/api_syntax) * [List of deprecated APIs](./reference/deprecated_api_list) -* [FP8 numbers in HIP](./reference/fp8_numbers) +* [Low Precision Floating Point Types](./reference/low_fp_types) * {doc}`./reference/hardware_features` ::: diff --git a/docs/reference/fp8_numbers.rst b/docs/reference/fp8_numbers.rst deleted file mode 100644 index 00d85a9f15..0000000000 --- a/docs/reference/fp8_numbers.rst +++ /dev/null @@ -1,230 +0,0 @@ -.. meta:: - :description: This page describes FP8 numbers present in HIP. - :keywords: AMD, ROCm, HIP, fp8, fnuz, ocp - -******************************************************************************* -FP8 Numbers -******************************************************************************* - -`FP8 numbers `_ were introduced to accelerate deep learning inferencing. They provide higher throughput of matrix operations because the smaller size allows more of them in the available fixed memory. - -HIP has two FP8 number representations called *FP8-OCP* and *FP8-FNUZ*. - -Open Compute Project(OCP) number definition can be found `here `_. - -Definition of FNUZ: fnuz suffix means only finite and NaN values are supported. Unlike other types, Inf are not supported. -NaN is when sign bit is set and all other exponent and mantissa bits are 0. All other values are finite. -This provides one extra value of exponent and adds to the range of supported FP8 numbers. - -FP8 Definition -============== - -FP8 numbers are composed of a sign, an exponent and a mantissa. Their sizes are dependent on the format. -There are two formats of FP8 numbers, E4M3 and E5M2. - -- E4M3: 1 bit sign, 4 bit exponent, 3 bit mantissa -- E5M2: 1 bit sign, 5 bit exponent, 2 bit mantissa - -HIP Header -========== - -The `HIP header `_ defines the FP8 ocp/fnuz numbers. - -Supported Devices -================= - -.. 
list-table:: Supported devices for fp8 numbers - :header-rows: 1 - - * - Device Type - - FNUZ FP8 - - OCP FP8 - * - Host - - Yes - - Yes - * - gfx940/gfx941/gfx942 - - Yes - - No - * - gfx1200/gfx1201 - - No - - Yes - -Usage -===== - -To use the FP8 numbers inside HIP programs. - -.. code-block:: c - - #include - -FP8 numbers can be used on CPU side: - -.. code-block:: c - - __hip_fp8_storage_t convert_float_to_fp8( - float in, /* Input val */ - __hip_fp8_interpretation_t interpret, /* interpretation of number E4M3/E5M2 */ - __hip_saturation_t sat /* Saturation behavior */ - ) { - return __hip_cvt_float_to_fp8(in, sat, interpret); - } - -The same can be done in kernels as well. - -.. code-block:: c - - __device__ __hip_fp8_storage_t d_convert_float_to_fp8( - float in, - __hip_fp8_interpretation_t interpret, - __hip_saturation_t sat) { - return __hip_cvt_float_to_fp8(in, sat, interpret); - } - -An important thing to note here is if you use this on gfx94x GPU, it will be fnuz number but on any other GPU it will be an OCP number. - -The following code example does roundtrip FP8 conversions on both the CPU and GPU and compares the results. - -.. code-block:: c - - #include - #include - #include - #include - - #define hip_check(hip_call) \ - { \ - auto hip_res = hip_call; \ - if (hip_res != hipSuccess) { \ - std::cerr << "Failed in hip call: " << #hip_call \ - << " with error: " << hipGetErrorName(hip_res) << std::endl; \ - std::abort(); \ - } \ - } - - __device__ __hip_fp8_storage_t d_convert_float_to_fp8( - float in, __hip_fp8_interpretation_t interpret, __hip_saturation_t sat) { - return __hip_cvt_float_to_fp8(in, sat, interpret); - } - - __device__ float d_convert_fp8_to_float(float in, - __hip_fp8_interpretation_t interpret) { - __half hf = __hip_cvt_fp8_to_halfraw(in, interpret); - return hf; - } - - __global__ void float_to_fp8_to_float(float *in, - __hip_fp8_interpretation_t interpret, - __hip_saturation_t sat, float *out, - size_t size) { - int i = threadIdx.x; - if (i < size) { - auto fp8 = d_convert_float_to_fp8(in[i], interpret, sat); - out[i] = d_convert_fp8_to_float(fp8, interpret); - } - } - - __hip_fp8_storage_t - convert_float_to_fp8(float in, /* Input val */ - __hip_fp8_interpretation_t - interpret, /* interpretation of number E4M3/E5M2 */ - __hip_saturation_t sat /* Saturation behavior */ - ) { - return __hip_cvt_float_to_fp8(in, sat, interpret); - } - - float convert_fp8_to_float( - __hip_fp8_storage_t in, /* Input val */ - __hip_fp8_interpretation_t - interpret /* interpretation of number E4M3/E5M2 */ - ) { - __half hf = __hip_cvt_fp8_to_halfraw(in, interpret); - return hf; - } - - int main() { - constexpr size_t size = 32; - hipDeviceProp_t prop; - hip_check(hipGetDeviceProperties(&prop, 0)); - bool is_supported = (std::string(prop.gcnArchName).find("gfx94") != std::string::npos) || // gfx94x - (std::string(prop.gcnArchName).find("gfx120") != std::string::npos); // gfx120x - if(!is_supported) { - std::cerr << "Need a gfx94x or gfx120x, but found: " << prop.gcnArchName << std::endl; - std::cerr << "No device conversions are supported, only host conversions are supported." << std::endl; - return -1; - } - - const __hip_fp8_interpretation_t interpret = (std::string(prop.gcnArchName).find("gfx94") != std::string::npos) - ? 
__HIP_E4M3_FNUZ // gfx94x - : __HIP_E4M3; // gfx120x - constexpr __hip_saturation_t sat = __HIP_SATFINITE; - - std::vector in; - in.reserve(size); - for (size_t i = 0; i < size; i++) { - in.push_back(i + 1.1f); - } - - std::cout << "Converting float to fp8 and back..." << std::endl; - // CPU convert - std::vector cpu_out; - cpu_out.reserve(size); - for (const auto &fval : in) { - auto fp8 = convert_float_to_fp8(fval, interpret, sat); - cpu_out.push_back(convert_fp8_to_float(fp8, interpret)); - } - - // GPU convert - float *d_in, *d_out; - hip_check(hipMalloc(&d_in, sizeof(float) * size)); - hip_check(hipMalloc(&d_out, sizeof(float) * size)); - - hip_check(hipMemcpy(d_in, in.data(), sizeof(float) * in.size(), - hipMemcpyHostToDevice)); - - float_to_fp8_to_float<<<1, size>>>(d_in, interpret, sat, d_out, size); - - std::vector gpu_out(size, 0.0f); - hip_check(hipMemcpy(gpu_out.data(), d_out, sizeof(float) * gpu_out.size(), - hipMemcpyDeviceToHost)); - - hip_check(hipFree(d_in)); - hip_check(hipFree(d_out)); - - // Validation - for (size_t i = 0; i < size; i++) { - if (cpu_out[i] != gpu_out[i]) { - std::cerr << "cpu round trip result: " << cpu_out[i] - << " - gpu round trip result: " << gpu_out[i] << std::endl; - std::abort(); - } - } - std::cout << "...CPU and GPU round trip convert matches." << std::endl; - } - -There are C++ style classes available as well. - -.. code-block:: c - - __hip_fp8_e4m3_fnuz fp8_val(1.1f); // gfx94x - __hip_fp8_e4m3 fp8_val(1.1f); // gfx120x - -Each type of FP8 number has its own class: - -- __hip_fp8_e4m3 -- __hip_fp8_e5m2 -- __hip_fp8_e4m3_fnuz -- __hip_fp8_e5m2_fnuz - -There is support of vector of FP8 types. - -- __hip_fp8x2_e4m3: holds 2 values of OCP FP8 e4m3 numbers -- __hip_fp8x4_e4m3: holds 4 values of OCP FP8 e4m3 numbers -- __hip_fp8x2_e5m2: holds 2 values of OCP FP8 e5m2 numbers -- __hip_fp8x4_e5m2: holds 4 values of OCP FP8 e5m2 numbers -- __hip_fp8x2_e4m3_fnuz: holds 2 values of FP8 fnuz e4m3 numbers -- __hip_fp8x4_e4m3_fnuz: holds 4 values of FP8 fnuz e4m3 numbers -- __hip_fp8x2_e5m2_fnuz: holds 2 values of FP8 fnuz e5m2 numbers -- __hip_fp8x4_e5m2_fnuz: holds 4 values of FP8 fnuz e5m2 numbers - -FNUZ extensions will be available on gfx94x only. diff --git a/docs/reference/low_fp_types.rst b/docs/reference/low_fp_types.rst new file mode 100644 index 0000000000..7fe450a35f --- /dev/null +++ b/docs/reference/low_fp_types.rst @@ -0,0 +1,470 @@ +.. meta:: + :description: This page describes the FP8 and FP16 types present in HIP. + :keywords: AMD, ROCm, HIP, fp8, fnuz, ocp + +******************************************************************************* +Low precision floating point types +******************************************************************************* + +Modern computing tasks often require balancing numerical precision against hardware resources +and processing speed. Low precision floating point number formats in HIP include FP8 (Quarter Precision) +and FP16 (Half Precision), which reduce memory and bandwidth requirements compared to traditional +32-bit or 64-bit formats. The following sections detail their specifications, variants, and provide +practical guidance for implementation in HIP. + +FP8 (Quarter Precision) +======================= + +`FP8 (Floating Point 8-bit) numbers `_ were introduced +as a compact numerical format specifically tailored for deep learning inference. By reducing +precision while maintaining computational effectiveness, FP8 allows for significant memory +savings and improved processing speed. 
This makes it particularly beneficial for deploying +large-scale models with strict efficiency constraints. + +Unlike traditional floating-point formats such as FP32 or even FP16, FP8 further optimizes +performance by enabling a higher volume of matrix operations per second. Its reduced bit-width +minimizes bandwidth requirements, making it an attractive choice for hardware accelerators +in deep learning applications. + +There are two primary FP8 formats: + +- **E4M3 Format** + + - Sign: 1 bit + - Exponent: 4 bits + - Mantissa: 3 bits + +- **E5M2 Format** + + - Sign: 1 bit + - Exponent: 5 bits + - Mantissa: 2 bits + +The E4M3 format offers higher precision with a narrower range, while the E5M2 format provides +a wider range at the cost of some precision. + +Additionally, FP8 numbers have two representations: + +- **FP8-OCP (Open Compute Project)** + + - `This `_ + is a standardized format developed by the Open Compute Project to ensure compatibility + across various hardware and software implementations. + +- **FP8-FNUZ (Finite and NaN Only)** + + - A specialized format optimized for specific computations, supporting only finite and NaN values + (no Inf support). + - This provides one extra value of exponent and adds to the range of supported FP8 numbers. + - **NaN Definition**: When the sign bit is set, and all other exponent and mantissa bits are zero. + +The FNUZ representation provides an extra exponent value, expanding the range of representable +numbers compared to standard FP8 formats. + + +HIP Header +---------- + +The `HIP FP8 header `_ +defines the FP8 ocp/fnuz numbers. + +Supported Devices +----------------- + +Different GPU models support different FP8 formats. Here's a breakdown: + +.. list-table:: Supported devices for fp8 numbers + :header-rows: 1 + + * - Device Type + - FNUZ FP8 + - OCP FP8 + * - Host + - Yes + - Yes + * - CDNA1 + - No + - No + * - CDNA2 + - No + - No + * - CDNA3 + - Yes + - No + * - RDNA2 + - No + - No + * - RDNA3 + - No + - No + +Using FP8 Numbers in HIP Programs +--------------------------------- + +To use the FP8 numbers inside HIP programs. + +.. code-block:: cpp + + #include + +FP8 numbers can be used on CPU side: + +.. code-block:: cpp + + __hip_fp8_storage_t convert_float_to_fp8( + float in, /* Input val */ + __hip_fp8_interpretation_t interpret, /* interpretation of number E4M3/E5M2 */ + __hip_saturation_t sat /* Saturation behavior */ + ) { + return __hip_cvt_float_to_fp8(in, sat, interpret); + } + +The same can be done in kernels as well. + +.. code-block:: cpp + + __device__ __hip_fp8_storage_t d_convert_float_to_fp8( + float in, + __hip_fp8_interpretation_t interpret, + __hip_saturation_t sat) { + return __hip_cvt_float_to_fp8(in, sat, interpret); + } + +Note: On a gfx94x GPU, the type will default to the fnuz type. + +The following code example does roundtrip FP8 conversions on both the CPU and GPU and compares the results. + +.. 
code-block:: cpp + + #include + #include + #include + #include + + #define hip_check(hip_call) \ + { \ + auto hip_res = hip_call; \ + if (hip_res != hipSuccess) { \ + std::cerr << "Failed in HIP call: " << #hip_call \ + << " at " << __FILE__ << ":" << __LINE__ \ + << " with error: " << hipGetErrorString(hip_res) << std::endl; \ + std::abort(); \ + } \ + } + + __device__ __hip_fp8_storage_t d_convert_float_to_fp8( + float in, __hip_fp8_interpretation_t interpret, __hip_saturation_t sat) { + return __hip_cvt_float_to_fp8(in, sat, interpret); + } + + __device__ float d_convert_fp8_to_float(float in, + __hip_fp8_interpretation_t interpret) { + __half hf = __hip_cvt_fp8_to_halfraw(in, interpret); + return hf; + } + + __global__ void float_to_fp8_to_float(float *in, + __hip_fp8_interpretation_t interpret, + __hip_saturation_t sat, float *out, + size_t size) { + int i = threadIdx.x; + if (i < size) { + auto fp8 = d_convert_float_to_fp8(in[i], interpret, sat); + out[i] = d_convert_fp8_to_float(fp8, interpret); + } + } + + __hip_fp8_storage_t + convert_float_to_fp8(float in, /* Input val */ + __hip_fp8_interpretation_t + interpret, /* interpretation of number E4M3/E5M2 */ + __hip_saturation_t sat /* Saturation behavior */ + ) { + return __hip_cvt_float_to_fp8(in, sat, interpret); + } + + float convert_fp8_to_float( + __hip_fp8_storage_t in, /* Input val */ + __hip_fp8_interpretation_t + interpret /* interpretation of number E4M3/E5M2 */ + ) { + __half hf = __hip_cvt_fp8_to_halfraw(in, interpret); + return hf; + } + + int main() { + constexpr size_t size = 32; + hipDeviceProp_t prop; + hip_check(hipGetDeviceProperties(&prop, 0)); + bool is_supported = (std::string(prop.gcnArchName).find("gfx94") != std::string::npos); // gfx94x + if(!is_supported) { + std::cerr << "Need a gfx94x, but found: " << prop.gcnArchName << std::endl; + std::cerr << "No device conversions are supported, only host conversions are supported." << std::endl; + return -1; + } + + const __hip_fp8_interpretation_t interpret = (std::string(prop.gcnArchName).find("gfx94") != std::string::npos) + ? __HIP_E4M3_FNUZ // gfx94x + : __HIP_E4M3; + constexpr __hip_saturation_t sat = __HIP_SATFINITE; + + std::vector in; + in.reserve(size); + for (size_t i = 0; i < size; i++) { + in.push_back(i + 1.1f); + } + + std::cout << "Converting float to fp8 and back..." << std::endl; + // CPU convert + std::vector cpu_out; + cpu_out.reserve(size); + for (const auto &fval : in) { + auto fp8 = convert_float_to_fp8(fval, interpret, sat); + cpu_out.push_back(convert_fp8_to_float(fp8, interpret)); + } + + // GPU convert + float *d_in, *d_out; + hip_check(hipMalloc(&d_in, sizeof(float) * size)); + hip_check(hipMalloc(&d_out, sizeof(float) * size)); + + hip_check(hipMemcpy(d_in, in.data(), sizeof(float) * in.size(), + hipMemcpyHostToDevice)); + + float_to_fp8_to_float<<<1, size>>>(d_in, interpret, sat, d_out, size); + + std::vector gpu_out(size, 0.0f); + hip_check(hipMemcpy(gpu_out.data(), d_out, sizeof(float) * gpu_out.size(), + hipMemcpyDeviceToHost)); + + hip_check(hipFree(d_in)); + hip_check(hipFree(d_out)); + + // Validation + for (size_t i = 0; i < size; i++) { + if (cpu_out[i] != gpu_out[i]) { + std::cerr << "cpu round trip result: " << cpu_out[i] + << " - gpu round trip result: " << gpu_out[i] << std::endl; + std::abort(); + } + } + std::cout << "...CPU and GPU round trip convert matches." << std::endl; + } + +There are C++ style classes available as well. + +.. 
code-block:: cpp + + __hip_fp8_e4m3_fnuz fp8_val(1.1f); // gfx94x + __hip_fp8_e4m3 fp8_val(1.1f); + +Each type of FP8 number has its own class: + +- __hip_fp8_e4m3 +- __hip_fp8_e5m2 +- __hip_fp8_e4m3_fnuz +- __hip_fp8_e5m2_fnuz + +There is support of vector of FP8 types. + +- __hip_fp8x2_e4m3: holds 2 values of OCP FP8 e4m3 numbers +- __hip_fp8x4_e4m3: holds 4 values of OCP FP8 e4m3 numbers +- __hip_fp8x2_e5m2: holds 2 values of OCP FP8 e5m2 numbers +- __hip_fp8x4_e5m2: holds 4 values of OCP FP8 e5m2 numbers +- __hip_fp8x2_e4m3_fnuz: holds 2 values of FP8 fnuz e4m3 numbers +- __hip_fp8x4_e4m3_fnuz: holds 4 values of FP8 fnuz e4m3 numbers +- __hip_fp8x2_e5m2_fnuz: holds 2 values of FP8 fnuz e5m2 numbers +- __hip_fp8x4_e5m2_fnuz: holds 4 values of FP8 fnuz e5m2 numbers + +FNUZ extensions will be available on gfx94x only. + +FP16 (Half Precision) +===================== + +FP16 (Floating Point 16-bit) numbers offer a balance between precision and +efficiency, making them a widely adopted standard for accelerating deep learning +inference. With higher precision than FP8 but lower memory requirements than FP32, +FP16 enables faster computations while preserving model accuracy. + +Deep learning workloads often involve massive datasets and complex calculations, +making FP32 computationally expensive. FP16 helps mitigate these costs by reducing +storage and bandwidth demands, allowing for increased throughput without significant +loss of numerical stability. This format is particularly useful for training and +inference in GPUs and TPUs optimized for half-precision arithmetic. + +There are two primary FP16 formats: + +- **float16 Format** + + - Sign: 1 bit + - Exponent: 5 bits + - Mantissa: 10 bits + +- **bfloat16 Format** + + - Sign: 1 bit + - Exponent: 8 bits + - Mantissa: 7 bits + +The float16 format offers higher precision with a narrower range, while the bfloat16 +format provides a wider range at the cost of some precision. + +Additionally, FP16 numbers have standardized representations developed by industry +initiatives to ensure compatibility across various hardware and software implementations. +Unlike FP8, which has specific representations like OCP and FNUZ, FP16 is more uniformly +supported with its two main formats, float16 and bfloat16. + +HIP Header +---------- + +The `HIP FP16 header `_ +defines the float16 format. + +The `HIP BF16 header `_ +defines the bfloat16 format. + +Supported Devices +----------------- + +Different GPU models support different FP16 formats. Here's a breakdown: + +.. list-table:: Supported devices for fp16 numbers + :header-rows: 1 + + * - Device Type + - float16 + - bfloat16 + * - Host + - Yes + - Yes + * - CDNA1 + - Yes + - Yes + * - CDNA2 + - Yes + - Yes + * - CDNA3 + - Yes + - Yes + * - RDNA2 + - Yes + - Yes + * - RDNA3 + - Yes + - Yes + +Using FP16 Numbers in HIP Programs +---------------------------------- + +To use the FP16 numbers inside HIP programs. + +.. code-block:: cpp + + #include // for float16 + #include // for bfloat16 + +The following code example adds two float16 values on the GPU and compares the results +against summed float values on the CPU. + +.. 
code-block:: cpp + + #include + #include + #include + #include + + #define hip_check(hip_call) \ + { \ + auto hip_res = hip_call; \ + if (hip_res != hipSuccess) { \ + std::cerr << "Failed in HIP call: " << #hip_call \ + << " at " << __FILE__ << ":" << __LINE__ \ + << " with error: " << hipGetErrorString(hip_res) << std::endl; \ + std::abort(); \ + } \ + } + + __global__ void add_half_precision(__half* in1, __half* in2, float* out, size_t size) { + int idx = threadIdx.x; + if (idx < size) { + // Load as half, perform addition in float, store as float + float sum = __half2float(in1[idx] + in2[idx]); + out[idx] = sum; + } + } + + int main() { + constexpr size_t size = 32; + constexpr float tolerance = 1e-1f; // Allowable numerical difference + + // Initialize input vectors as floats + std::vector in1(size), in2(size); + for (size_t i = 0; i < size; i++) { + in1[i] = i + 1.1f; + in2[i] = i + 2.2f; + } + + // Compute expected results in full precision on CPU + std::vector cpu_out(size); + for (size_t i = 0; i < size; i++) { + cpu_out[i] = in1[i] + in2[i]; // Direct float addition + } + + // Allocate device memory (store input as half, output as float) + __half *d_in1, *d_in2; + float *d_out; + hip_check(hipMalloc(&d_in1, sizeof(__half) * size)); + hip_check(hipMalloc(&d_in2, sizeof(__half) * size)); + hip_check(hipMalloc(&d_out, sizeof(float) * size)); + + // Convert input to half and copy to device + std::vector<__half> in1_half(size), in2_half(size); + for (size_t i = 0; i < size; i++) { + in1_half[i] = __float2half(in1[i]); + in2_half[i] = __float2half(in2[i]); + } + + hip_check(hipMemcpy(d_in1, in1_half.data(), sizeof(__half) * size, hipMemcpyHostToDevice)); + hip_check(hipMemcpy(d_in2, in2_half.data(), sizeof(__half) * size, hipMemcpyHostToDevice)); + + // Launch kernel + add_half_precision<<<1, size>>>(d_in1, d_in2, d_out, size); + + // Copy result back to host + std::vector gpu_out(size, 0.0f); + hip_check(hipMemcpy(gpu_out.data(), d_out, sizeof(float) * size, hipMemcpyDeviceToHost)); + + // Free device memory + hip_check(hipFree(d_in1)); + hip_check(hipFree(d_in2)); + hip_check(hipFree(d_out)); + + // Validation with tolerance + for (size_t i = 0; i < size; i++) { + if (std::fabs(cpu_out[i] - gpu_out[i]) > tolerance) { + std::cerr << "Mismatch at index " << i << ": CPU result = " << cpu_out[i] + << ", GPU result = " << gpu_out[i] << std::endl; + std::abort(); + } + } + + std::cout << "Success: CPU and GPU half-precision addition match within tolerance!" << std::endl; + } + + +There are C++ style classes available as well. + +.. code-block:: cpp + + __half fp16_val(1.1f); // float16 + __hip_bfloat16 fp16_val(1.1f); // bfloat16 + +Each type of FP16 number has its own class: + +- __half +- __hip_bfloat16 + +There is support of vector of FP16 types. 
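+The two packed types are listed after the following minimal sketch of packed half-precision
+arithmetic on the device. In the sketch, the kernel name ``add_pairs_packed``, the problem
+size, and the omission of error checking are illustrative choices for this example only; the
+packed intrinsics ``__floats2half2_rn``, ``__hadd2``, ``__low2float`` and ``__high2float`` are
+provided by ``hip_fp16.h``.
+
+.. code-block:: cpp
+
+    #include <hip/hip_runtime.h>
+    #include <hip/hip_fp16.h>
+    #include <cmath>
+    #include <iostream>
+    #include <vector>
+
+    // Each thread packs one pair of floats into a __half2, adds both lanes with a single
+    // __hadd2, and unpacks the two half results back to float.
+    __global__ void add_pairs_packed(const float* a, const float* b, float* out, int n_pairs) {
+        int i = blockIdx.x * blockDim.x + threadIdx.x;
+        if (i < n_pairs) {
+            __half2 ha = __floats2half2_rn(a[2 * i], a[2 * i + 1]); // low = element 0, high = element 1
+            __half2 hb = __floats2half2_rn(b[2 * i], b[2 * i + 1]);
+            __half2 hs = __hadd2(ha, hb);                           // adds both lanes at once
+            out[2 * i]     = __low2float(hs);
+            out[2 * i + 1] = __high2float(hs);
+        }
+    }
+
+    int main() {
+        constexpr int n_pairs = 128;   // 256 floats processed as 128 packed __half2 pairs
+        constexpr int n = 2 * n_pairs;
+        std::vector<float> a(n), b(n), out(n);
+        for (int i = 0; i < n; ++i) { a[i] = 0.25f * i; b[i] = 1.5f; }
+
+        // Error checking is omitted for brevity; production code should check every HIP call.
+        float *d_a, *d_b, *d_out;
+        hipMalloc(&d_a, n * sizeof(float));
+        hipMalloc(&d_b, n * sizeof(float));
+        hipMalloc(&d_out, n * sizeof(float));
+        hipMemcpy(d_a, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
+        hipMemcpy(d_b, b.data(), n * sizeof(float), hipMemcpyHostToDevice);
+
+        add_pairs_packed<<<1, n_pairs>>>(d_a, d_b, d_out, n_pairs);
+        hipMemcpy(out.data(), d_out, n * sizeof(float), hipMemcpyDeviceToHost);
+
+        // Compare against float addition with a small tolerance to absorb half-precision rounding.
+        bool ok = true;
+        for (int i = 0; i < n; ++i) {
+            if (std::abs(out[i] - (a[i] + b[i])) > 0.1f) { ok = false; break; }
+        }
+        std::cout << (ok ? "Packed half addition matches." : "Mismatch found.") << std::endl;
+
+        hipFree(d_a); hipFree(d_b); hipFree(d_out);
+        return 0;
+    }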
+ +- __half2: holds 2 values of float16 numbers +- __hip_bfloat162: holds 2 values of bfloat16 numbers diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in index 34050b2448..2f08ffcd5a 100644 --- a/docs/sphinx/_toc.yml.in +++ b/docs/sphinx/_toc.yml.in @@ -113,8 +113,8 @@ subtrees: - file: reference/api_syntax - file: reference/deprecated_api_list title: List of deprecated APIs - - file: reference/fp8_numbers - title: FP8 numbers in HIP + - file: reference/low_fp_types + title: Low Precision Floating Point Types - file: reference/hardware_features - caption: Tutorials From c6889bf281a6631e07a9b1aa0dd72a65228a58d3 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 3 Mar 2025 00:44:37 +0000 Subject: [PATCH 39/46] Bump sphinxcontrib-doxylink from 1.12.4 to 1.13.0 in /docs/sphinx Bumps [sphinxcontrib-doxylink](https://github.com/sphinx-contrib/doxylink) from 1.12.4 to 1.13.0. - [Release notes](https://github.com/sphinx-contrib/doxylink/releases) - [Changelog](https://github.com/sphinx-contrib/doxylink/blob/master/CHANGELOG.md) - [Commits](https://github.com/sphinx-contrib/doxylink/compare/1.12.4...1.13.0) --- updated-dependencies: - dependency-name: sphinxcontrib-doxylink dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] --- docs/sphinx/requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sphinx/requirements.txt b/docs/sphinx/requirements.txt index 3707dd92a3..9582abd942 100644 --- a/docs/sphinx/requirements.txt +++ b/docs/sphinx/requirements.txt @@ -252,7 +252,7 @@ sphinxcontrib-applehelp==2.0.0 # via sphinx sphinxcontrib-devhelp==2.0.0 # via sphinx -sphinxcontrib-doxylink==1.12.4 +sphinxcontrib-doxylink==1.13.0 # via -r requirements.in sphinxcontrib-htmlhelp==2.1.0 # via sphinx From 5adf65a670cd3ddc5449360994f32865a94873cc Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 4 Mar 2025 00:17:04 +0000 Subject: [PATCH 40/46] Bump rocm-docs-core[api_reference] from 1.17.0 to 1.17.1 in /docs/sphinx Bumps [rocm-docs-core[api_reference]](https://github.com/ROCm/rocm-docs-core) from 1.17.0 to 1.17.1. - [Release notes](https://github.com/ROCm/rocm-docs-core/releases) - [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md) - [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.17.0...v1.17.1) --- updated-dependencies: - dependency-name: rocm-docs-core[api_reference] dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] --- docs/sphinx/requirements.in | 2 +- docs/sphinx/requirements.txt | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/sphinx/requirements.in b/docs/sphinx/requirements.in index f7fb1c634a..1d65d55880 100644 --- a/docs/sphinx/requirements.in +++ b/docs/sphinx/requirements.in @@ -1,2 +1,2 @@ -rocm-docs-core[api_reference]==1.17.0 +rocm-docs-core[api_reference]==1.17.1 sphinxcontrib.doxylink diff --git a/docs/sphinx/requirements.txt b/docs/sphinx/requirements.txt index 9582abd942..6db9f3e428 100644 --- a/docs/sphinx/requirements.txt +++ b/docs/sphinx/requirements.txt @@ -211,7 +211,7 @@ requests==2.32.3 # via # pygithub # sphinx -rocm-docs-core[api-reference]==1.17.0 +rocm-docs-core[api-reference]==1.17.1 # via -r requirements.in rpds-py==0.22.3 # via From f970e1832f30d3c29de642f4474caf293e3bd0cf Mon Sep 17 00:00:00 2001 From: Adel Johar Date: Fri, 24 Jan 2025 15:00:27 +0100 Subject: [PATCH 41/46] Docs: Update math api page --- .wordlist.txt | 3 + docs/conf.py | 2 +- docs/reference/math_api.rst | 2399 +++++++++++++++++++++-------------- 3 files changed, 1421 insertions(+), 983 deletions(-) diff --git a/.wordlist.txt b/.wordlist.txt index 009c63b73f..c35dfb045d 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -154,6 +154,7 @@ sceneries shaders SIMT sinewave +sinf SOMA SPMV structs @@ -167,6 +168,8 @@ templated toolkits transfering typedefs +ULP +ULPs unintuitive UMM unmap diff --git a/docs/conf.py b/docs/conf.py index 8261240fb0..6e4b994bfb 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -57,4 +57,4 @@ "understand/glossary.md", 'how-to/debugging_env.rst', "data/env_variables_hip.rst" -] \ No newline at end of file +] diff --git a/docs/reference/math_api.rst b/docs/reference/math_api.rst index 58203de055..16c97c3637 100644 --- a/docs/reference/math_api.rst +++ b/docs/reference/math_api.rst @@ -8,1114 +8,1549 @@ HIP math API ******************************************************************************** -HIP-Clang supports a set of math operations that are callable from the device. HIP supports most of the device functions supported by NVIDIA CUDA. These are described in the following sections. +HIP-Clang provides device-callable math operations, supporting most functions available in +NVIDIA CUDA. + +This section documents: + +- Maximum error bounds for supported HIP math functions +- Currently unsupported functions + +For a comprehensive analysis of mathematical function accuracy—including detailed evaluations +in single, double, and quadruple precision and a discussion of the IEEE 754 standard's recommendations +on correct rounding — see the paper +`Accuracy of Mathematical Functions `_. + +Error bounds on this page are measured in units in the last place (ULPs), representing the absolute +difference between a HIP math function result and its corresponding C++ standard library function +(e.g., comparing HIP's sinf with C++'s sinf). + +The following C++ example shows a simplified method for computing ULP differences between +HIP and standard C++ math functions by first finding where the maximum absolute error +occurs. + +.. 
code-block:: cpp + + #include + #include + #include + #include + #include + + #define HIP_CHECK(expression) \ + { \ + const hipError_t err = expression; \ + if (err != hipSuccess) { \ + std::cerr << "HIP error: " \ + << hipGetErrorString(err) \ + << " at " << __LINE__ << "\n"; \ + exit(EXIT_FAILURE); \ + } \ + } + + // Simple ULP difference calculator + int64_t ulp_diff(float a, float b) { + if (a == b) return 0; + union { float f; int32_t i; } ua{a}, ub{b}; + + // For negative values, convert to a positive-based representation + if (ua.i < 0) ua.i = std::numeric_limits::max() - ua.i; + if (ub.i < 0) ub.i = std::numeric_limits::max() - ub.i; + + return std::abs((int64_t)ua.i - (int64_t)ub.i); + } + + // Test kernel + __global__ void test_sin(float* out, int n) { + int i = blockIdx.x * blockDim.x + threadIdx.x; + if (i < n) { + float x = -M_PI + (2.0f * M_PI * i) / (n - 1); + out[i] = sin(x); + } + } + + int main() { + const int n = 1000000; + const int blocksize = 256; + std::vector outputs(n); + float* d_out; + + HIP_CHECK(hipMalloc(&d_out, n * sizeof(float))); + dim3 threads(blocksize); + dim3 blocks((n + blocksize - 1) / blocksize); // Fixed grid calculation + test_sin<<>>(d_out, n); + HIP_CHECK(hipPeekAtLastError()); + HIP_CHECK(hipMemcpy(outputs.data(), d_out, n * sizeof(float), hipMemcpyDeviceToHost)); + + // Step 1: Find the maximum absolute error + double max_abs_error = 0.0; + float max_error_output = 0.0; + float max_error_expected = 0.0; + + for (int i = 0; i < n; i++) { + float x = -M_PI + (2.0f * M_PI * i) / (n - 1); + float expected = std::sin(x); + double abs_error = std::abs(outputs[i] - expected); + + if (abs_error > max_abs_error) { + max_abs_error = abs_error; + max_error_output = outputs[i]; + max_error_expected = expected; + } + } + + // Step 2: Compute ULP difference based on the max absolute error pair + int64_t max_ulp = ulp_diff(max_error_output, max_error_expected); + + // Output results + std::cout << "Max Absolute Error: " << max_abs_error << std::endl; + std::cout << "Max ULP Difference: " << max_ulp << std::endl; + std::cout << "Max Error Values -> Got: " << max_error_output + << ", Expected: " << max_error_expected << std::endl; + + HIP_CHECK(hipFree(d_out)); + return 0; + } + +Standard mathematical functions +=============================== + +The functions in this section prioritize numerical accuracy and correctness, making them well-suited for +applications that require high precision and predictable results. Unless explicitly specified, all +math functions listed below are available on the device side. + +Arithmetic +---------- +.. tab-set:: + + .. tab-item:: Single Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``float abs(float x)`` + | Returns the absolute value of :math:`x` + - :math:`x \in [-20, 20]` + - 0 + + * - | ``float fabsf(float x)`` + | Returns the absolute value of `x` + - :math:`x \in [-20, 20]` + - 0 + + * - | ``float fdimf(float x, float y)`` + | Returns the positive difference between :math:`x` and :math:`y`. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``float fmaf(float x, float y, float z)`` + | Returns :math:`x \cdot y + z` as a single operation. + - | :math:`x \in [-100, 100]` + | :math:`y \in [-10, 10]` + | :math:`z \in [-10, 10]` + - 0 + + * - | ``float fmaxf(float x, float y)`` + | Determine the maximum numeric value of :math:`x` and :math:`y`. 
+ - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``float fminf(float x, float y)`` + | Determine the minimum numeric value of :math:`x` and :math:`y`. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``float fmodf(float x, float y)`` + | Returns the floating-point remainder of :math:`x / y`. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``float modff(float x, float* iptr)`` + | Break down :math:`x` into fractional and integral parts. + - :math:`x \in [-10, 10]` + - 0 + + * - | ``float remainderf(float x, float y)`` + | Returns single-precision floating-point remainder. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``float remquof(float x, float y, int* quo)`` + | Returns single-precision floating-point remainder and part of quotient. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``float fdividef(float x, float y)`` + | Divide two floating point values. + - | :math:`x \in [-100, 100]` + | :math:`y \in [-100, 100]` + - 0 + + + .. tab-item:: Double Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``double abs(double x)`` + | Returns the absolute value of :math:`x` + - :math:`x \in [-20, 20]` + - 0 + + * - | ``double fabs(double x)`` + | Returns the absolute value of `x` + - :math:`x \in [-20, 20]` + - 0 + + * - | ``double fdim(double x, double y)`` + | Returns the positive difference between :math:`x` and :math:`y`. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``double fma(double x, double y, double z)`` + | Returns :math:`x \cdot y + z` as a single operation. + - | :math:`x \in [-100, 100]` + | :math:`y \in [-10, 10]` + | :math:`z \in [-10, 10]` + - 0 + + * - | ``double fmax(double x, double y)`` + | Determine the maximum numeric value of :math:`x` and :math:`y`. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``double fmin(double x, double y)`` + | Determine the minimum numeric value of :math:`x` and :math:`y`. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``double fmod(double x, double y)`` + | Returns the floating-point remainder of :math:`x / y`. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``double modf(double x, double* iptr)`` + | Break down :math:`x` into fractional and integral parts. + - :math:`x \in [-10, 10]` + - 0 + + * - | ``double remainder(double x, double y)`` + | Returns double-precision floating-point remainder. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``double remquo(double x, double y, int* quo)`` + | Returns double-precision floating-point remainder and part of quotient. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + +Classification +-------------- +.. tab-set:: + + .. tab-item:: Single Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``bool isfinite(float x)`` + | Determine whether :math:`x` is finite. + - | :math:`x \in [-\text{FLT_MAX}, \text{FLT_MAX}]` + | Special values: :math:`\pm\infty`, NaN + - 0 + + * - | ``bool isinf(float x)`` + | Determine whether :math:`x` is infinite. + - | :math:`x \in [-\text{FLT_MAX}, \text{FLT_MAX}]` + | Special values: :math:`\pm\infty`, NaN + - 0 + + * - | ``bool isnan(float x)`` + | Determine whether :math:`x` is a ``NAN``. 
+ - | :math:`x \in [-\text{FLT_MAX}, \text{FLT_MAX}]` + | Special values: :math:`\pm\infty`, NaN + - 0 + + * - | ``bool signbit(float x)`` + | Return the sign bit of :math:`x`. + - | :math:`x \in [-\text{FLT_MAX}, \text{FLT_MAX}]` + | Special values: :math:`\pm\infty`, :math:`\pm0`, NaN + - 0 + + * - | ``float nanf(const char* tagp)`` + | Returns "Not a Number" value. + - | Input strings: ``""``, ``"1"``, ``"2"``, + | ``"quiet"``, ``"signaling"``, ``"ind"`` + - 0 + + .. tab-item:: Double Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``bool isfinite(double x)`` + | Determine whether :math:`x` is finite. + - | :math:`x \in [-\text{DBL_MAX}, \text{DBL_MAX}]` + | Special values: :math:`\pm\infty`, NaN + - 0 + + * - | ``bool isin(double x)`` + | Determine whether :math:`x` is infinite. + - | :math:`x \in [-\text{DBL_MAX}, \text{DBL_MAX}]` + | Special values: :math:`\pm\infty`, NaN + - 0 + + * - | ``bool isnan(double x)`` + | Determine whether :math:`x` is a ``NAN``. + - | :math:`x \in [-\text{DBL_MAX}, \text{DBL_MAX}]` + | Special values: :math:`\pm\infty`, NaN + - 0 + + * - | ``bool signbit(double x)`` + | Return the sign bit of :math:`x`. + - | :math:`x \in [-\text{DBL_MAX}, \text{DBL_MAX}]` + | Special values: :math:`\pm\infty`, :math:`\pm0`, NaN + - 0 + + * - | ``double nan(const char* tagp)`` + | Returns "Not a Number" value. + - | Input strings: ``""``, ``"1"``, ``"2"``, + | ``"quiet"``, ``"signaling"``, ``"ind"`` + - 0 + +Error and Gamma +--------------- +.. tab-set:: + + .. tab-item:: Single Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``float erff(float x)`` + | Returns the error function of :math:`x`. + - :math:`x \in [-4, 4]` + - 4 + + * - | ``float erfcf(float x)`` + | Returns the complementary error function of :math:`x`. + - :math:`x \in [-4, 4]` + - 2 + + * - | ``float erfcxf(float x)`` + | Returns the scaled complementary error function of :math:`x`. + - :math:`x \in [-2, 2]` + - 5 + + * - | ``float lgammaf(float x)`` + | Returns the natural logarithm of the absolute value of the gamma function of :math:`x`. + - :math:`x \in [0.5, 20]` + - 4 + + * - | ``float tgammaf(float x)`` + | Returns the gamma function of :math:`x`. + - :math:`x \in [0.5, 15]` + - 6 + + .. tab-item:: Double Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``double erf(double x)`` + | Returns the error function of :math:`x`. + - :math:`x \in [-4, 4]` + - 4 + + * - | ``double erfc(double x)`` + | Returns the complementary error function of :math:`x`. + - :math:`x \in [-4, 4]` + - 2 + + * - | ``double erfcx(double x)`` + | Returns the scaled complementary error function of :math:`x`. + - :math:`x \in [-2, 2]` + - 5 + + * - | ``double lgamma(double x)`` + | Returns the natural logarithm of the absolute value of the gamma function of :math:`x`. + - :math:`x \in [0.5, 20]` + - 2 + + * - | ``double tgamma(double x)`` + | Returns the gamma function of :math:`x`. + - :math:`x \in [0.5, 15]` + - 6 + +Exponential and Logarithmic +--------------------------- +.. tab-set:: + + .. tab-item:: Single Precision Floating-point + + .. 
list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``float expf(float x)`` + | Returns :math:`e^x`. + - :math:`x \in [-10, 10]` + - 1 + + * - | ``float exp2f(float x)`` + | Returns :math:`2^x`. + - :math:`x \in [-10, 10]` + - 1 + + * - | ``float exp10f(float x)`` + | Returns :math:`10^x`. + - :math:`x \in [-4, 4]` + - 1 + + * - | ``float expm1f(float x)`` + | Returns :math:`ln(x - 1)` + - :math:`x \in [-10, 10]` + - 1 + + * - | ``float log10f(float x)`` + | Returns the base 10 logarithm of :math:`x`. + - :math:`x \in [10^{-6}, 10^6]` + - 2 + + * - | ``float log1pf(float x)`` + | Returns the natural logarithm of :math:`x + 1`. + - :math:`x \in [-0.9, 10]` + - 1 + + * - | ``float log2f(float x)`` + | Returns the base 2 logarithm of :math:`x`. + - :math:`x \in [10^{-6}, 10^6]` + - 1 + + * - | ``float logf(float x)`` + | Returns the natural logarithm of :math:`x`. + - :math:`x \in [10^{-6}, 10^6]` + - 2 + + .. tab-item:: Double Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``double exp(double x)`` + | Returns :math:`e^x`. + - :math:`x \in [-10, 10]` + - 1 + + * - | ``double exp2(double x)`` + | Returns :math:`2^x`. + - :math:`x \in [-10, 10]` + - 1 + + * - | ``double exp10(double x)`` + | Returns :math:`10^x`. + - :math:`x \in [-4, 4]` + - 1 + + * - | ``double expm1(double x)`` + | Returns :math:`ln(x - 1)` + - :math:`x \in [-10, 10]` + - 1 + + * - | ``double log10(double x)`` + | Returns the base 10 logarithm of :math:`x`. + - :math:`x \in [10^{-6}, 10^6]` + - 1 + + * - | ``double log1p(double x)`` + | Returns the natural logarithm of :math:`x + 1`. + - :math:`x \in [-0.9, 10]` + - 1 + + * - | ``double log2(double x)`` + | Returns the base 2 logarithm of :math:`x`. + - :math:`x \in [10^{-6}, 10^6]` + - 1 + + * - | ``double log(double x)`` + | Returns the natural logarithm of :math:`x`. + - :math:`x \in [10^{-6}, 10^6]` + - 1 + +Floating Point Manipulation +--------------------------- +.. tab-set:: + + .. tab-item:: Single Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``float copysignf(float x, float y)`` + | Create value with given magnitude, copying sign of second value. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``float frexpf(float x, int* nptr)`` + | Extract mantissa and exponent of :math:`x`. + - :math:`x \in [-10, 10]` + - 0 + + * - | ``int ilogbf(float x)`` + | Returns the unbiased integer exponent of :math:`x`. + - :math:`x \in [0.01, 100]` + - 0 + + * - | ``float logbf(float x)`` + | Returns the floating point representation of the exponent of :math:`x`. + - :math:`x \in [10^{-6}, 10^6]` + - 0 + + * - | ``float ldexpf(float x, int exp)`` + | Returns the natural logarithm of the absolute value of the gamma function of :math:`x`. + - | :math:`x \in [-10, 10]` + | :math:`\text{exp} \in [-4, 4]` + - 0 + + * - | ``float nextafterf(float x, float y)`` + | Returns next representable single-precision floating-point value after argument. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``float scalblnf(float x, long int n)`` + | Scale :math:`x` by :math:`2^n`. + - | :math:`x \in [-10, 10]` + | :math:`n \in [-4, 4]` + - 0 + + * - | ``float scalbnf(float x, int n)`` + | Scale :math:`x` by :math:`2^n`. 
+ - | :math:`x \in [-10, 10]` + | :math:`n \in [-4, 4]` + - 0 + + .. tab-item:: Double Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``double copysign(double x, double y)`` + | Create value with given magnitude, copying sign of second value. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``double frexp(double x, int* nptr)`` + | Extract mantissa and exponent of :math:`x`. + - :math:`x \in [-10, 10]` + - 0 + + * - | ``int ilogb(double x)`` + | Returns the unbiased integer exponent of :math:`x`. + - :math:`x \in [0.01, 100]` + - 0 + + * - | ``double logb(double x)`` + | Returns the floating point representation of the exponent of :math:`x`. + - :math:`x \in [10^{-6}, 10^6]` + - 0 + + * - | ``double ldexp(double x, int exp)`` + | Returns the natural logarithm of the absolute value of the gamma function of :math:`x`. + - | :math:`x \in [-10, 10]` + | :math:`\text{exp} \in [-4, 4]` + - 0 + + * - | ``double nextafter(double x, double y)`` + | Returns next representable double-precision floating-point value after argument. + - | :math:`x \in [-10, 10]` + | :math:`y \in [-3, 3]` + - 0 + + * - | ``double scalbln(double x, long int n)`` + | Scale :math:`x` by :math:`2^n`. + - | :math:`x \in [-10, 10]` + | :math:`n \in [-4, 4]` + - 0 + + * - | ``double scalbn(double x, int n)`` + | Scale :math:`x` by :math:`2^n`. + - | :math:`x \in [-10, 10]` + | :math:`n \in [-4, 4]` + - 0 + +Hypotenuse and Norm +------------------- +.. tab-set:: + + .. tab-item:: Single Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``float hypotf(float x, float y)`` + | Returns the square root of the sum of squares of :math:`x` and :math:`y`. + - | :math:`x \in [-10, 10]` + | :math:`y \in [0, 10]` + - 1 + + * - | ``float rhypotf(float x, float y)`` + | Returns one over the square root of the sum of squares of two arguments. + - | :math:`x \in [-100, 100]` + | :math:`y \in [-10, 100]` + - 1 + + * - | ``float norm3df(float x, float y, float z)`` + | Returns the square root of the sum of squares of :math:`x`, :math:`y` and :math:`z`. + - | All inputs in + | :math:`[-10, 10]` + - 1 + + * - | ``float norm4df(float x, float y, float z, float w)`` + | Returns the square root of the sum of squares of :math:`x`, :math:`y`, :math:`z` and :math:`w`. + - | All inputs in + | :math:`[-10, 10]` + - 2 + + * - | ``float rnorm3df(float x, float y, float z)`` + | Returns one over the square root of the sum of squares of three coordinates of the argument. + - | All inputs in + | :math:`[-10, 10]` + - 1 + + * - | ``float rnorm4df(float x, float y, float z, float w)`` + | Returns one over the square root of the sum of squares of four coordinates of the argument. + - | All inputs in + | :math:`[-10, 10]` + - 2 + + * - | ``float normf(int dim, const float *a)`` + | Returns the square root of the sum of squares of any number of coordinates. + - | :math:`\text{dim} \in [2,4]` + | :math:`a[i] \in [-10, 10]` + - | Error depends on the number of coordinates + | e.g. ``dim = 2`` -> 1 + | e.g. ``dim = 3`` -> 1 + | e.g. ``dim = 4`` -> 1 + + * - | ``float rnormf(int dim, const float *a)`` + | Returns the reciprocal of square root of the sum of squares of any number of coordinates. + - | :math:`\text{dim} \in [2,4]` + | :math:`a[i] \in [-10, 10]` + - | Error depends on the number of coordinates + | e.g. 
``dim = 2`` -> 1 + | e.g. ``dim = 3`` -> 1 + | e.g. ``dim = 4`` -> 1 + + .. tab-item:: Double Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``double hypot(double x, double y)`` + | Returns the square root of the sum of squares of :math:`x` and :math:`y`. + - | :math:`x \in [-10, 10]` + | :math:`y \in [0, 10]` + - 1 + + * - | ``double rhypot(double x, double y)`` + | Returns one over the square root of the sum of squares of two arguments. + - | :math:`x \in [-100, 100]` + | :math:`y \in [-10, 100]` + - 1 + + * - | ``double norm3d(double x, double y, double z)`` + | Returns the square root of the sum of squares of :math:`x`, :math:`y` and :math:`z`. + - | All inputs in + | :math:`[-10, 10]` + - 1 + + * - | ``double norm4d(double x, double y, double z, double w)`` + | Returns the square root of the sum of squares of :math:`x`, :math:`y`, :math:`z` and :math:`w`. + - | All inputs in + | :math:`[-10, 10]` + - 2 + + * - | ``double rnorm3d(double x, double y, double z)`` + | Returns one over the square root of the sum of squares of three coordinates of the argument. + - | All inputs in + | :math:`[-10, 10]` + - 1 + + * - | ``double rnorm4d(double x, double y, double z, double w)`` + | Returns one over the square root of the sum of squares of four coordinates of the argument. + - | All inputs in + | :math:`[-10, 10]` + - 1 + + * - | ``double norm(int dim, const double *a)`` + | Returns the square root of the sum of squares of any number of coordinates. + - | :math:`\text{dim} \in [2,4]` + | :math:`a[i] \in [-10, 10]` + - | Error depends on the number of coordinates + | e.g. ``dim = 2`` -> 1 + | e.g. ``dim = 3`` -> 1 + | e.g. ``dim = 4`` -> 1 + + * - | ``double rnorm(int dim, const double *a)`` + | Returns the reciprocal of square root of the sum of squares of any number of coordinates. + - | :math:`\text{dim} \in [2,4]` + | :math:`a[i] \in [-10, 10]` + - | Error depends on the number of coordinates + | e.g. ``dim = 2`` -> 1 + | e.g. ``dim = 3`` -> 1 + | e.g. ``dim = 4`` -> 1 + + +Power and Root +-------------- +.. tab-set:: + + .. tab-item:: Single Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``float cbrtf(float x)`` + | Returns the cube root of :math:`x`. + - :math:`x \in [-100, 100]` + - 2 + + * - | ``float powf(float x, float y)`` + | Returns :math:`x^y`. + - | :math:`x \in [-4, 4]` + | :math:`y \in [-2, 2]` + - 1 + + * - | ``float powif(float base, int iexp)`` + | Returns the value of first argument to the power of second argument. + - | :math:`\text{base} \in [-10, 10]` + | :math:`\text{iexp} \in [-4, 4]` + - 1 + + * - | ``float sqrtf(float x)`` + | Returns the square root of :math:`x`. + - :math:`x \in [0, 100]` + - 1 + + * - | ``float rsqrtf(float x)`` + | Returns the reciprocal of the square root of :math:`x`. + - :math:`x \in [0.01, 100]` + - 1 + + * - | ``float rcbrtf(float x)`` + | Returns the reciprocal cube root function. + - :math:`x \in [-100, 100]` + - 1 + + .. tab-item:: Double Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``double cbrt(double x)`` + | Returns the cube root of :math:`x`. + - :math:`x \in [-100, 100]` + - 1 + + * - | ``double pow(double x, double y)`` + | Returns :math:`x^y`. 
+ - | :math:`x \in [-4, 4]` + | :math:`y \in [-2, 2]` + - 1 + + * - | ``double powi(double base, int iexp)`` + | Returns the value of first argument to the power of second argument. + - | :math:`\text{base} \in [-10, 10]` + | :math:`\text{iexp} \in [-4, 4]` + - 1 + + * - | ``double sqrt(double x)`` + | Returns the square root of :math:`x`. + - :math:`x \in [0, 100]` + - 1 + + * - | ``double rsqrt(double x)`` + | Returns the reciprocal of the square root of :math:`x`. + - :math:`x \in [0.01, 100]` + - 1 + + * - | ``double rcbrt(double x)`` + | Returns the reciprocal cube root function. + - :math:`x \in [-100, 100]` + - 1 + +Rounding +-------- +.. tab-set:: + + .. tab-item:: Single Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``float ceilf(float x)`` + | Returns ceiling of :math:`x`. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``float floorf(float x)`` + | Returns the largest integer less than or equal to :math:`x`. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``long int lroundf(float x)`` + | Round to nearest integer value. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``long long int llroundf(float x)`` + | Round to nearest integer value. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``long int lrintf(float x)`` + | Round :math:`x` to nearest integer value. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``long long int llrintf(float x)`` + | Round :math:`x` to nearest integer value. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``float nearbyintf(float x)`` + | Round :math:`x` to the nearest integer. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``float roundf(float x)`` + | Round to nearest integer value in floating-point. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``float rintf(float x)`` + | Round input to nearest integer value in floating-point. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``float truncf(float x)`` + | Truncate :math:`x` to the integral part. + - :math:`x \in [-4, 4]` + - 0 + + .. tab-item:: Double Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``double ceil(double x)`` + | Returns ceiling of :math:`x`. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``double floor(double x)`` + | Returns the largest integer less than or equal to :math:`x`. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``long int lround(double x)`` + | Round to nearest integer value. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``long long int llround(double x)`` + | Round to nearest integer value. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``long int lrint(double x)`` + | Round :math:`x` to nearest integer value. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``long long int llrint(double x)`` + | Round :math:`x` to nearest integer value. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``double nearbyint(double x)`` + | Round :math:`x` to the nearest integer. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``double round(double x)`` + | Round to nearest integer value in floating-point. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``double rint(double x)`` + | Round input to nearest integer value in floating-point. + - :math:`x \in [-4, 4]` + - 0 + + * - | ``double trunc(double x)`` + | Truncate :math:`x` to the integral part. + - :math:`x \in [-4, 4]` + - 0 + +Trigonometric and Hyperbolic +---------------------------- +.. tab-set:: + + .. tab-item:: Single Precision Floating-point + + .. 
list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``float acosf(float x)`` + | Returns the arc cosine of :math:`x`. + - :math:`x \in [-1, 1]` + - 1 + + * - | ``float acoshf(float x)`` + | Returns the nonnegative arc hyperbolic cosine of :math:`x`. + - :math:`x \in [1, 100]` + - 1 + + * - | ``float asinf(float x)`` + | Returns the arc sine of :math:`x`. + - :math:`x \in [-1, 1]` + - 2 + + * - | ``float asinhf(float x)`` + | Returns the arc hyperbolic sine of :math:`x`. + - :math:`x \in [-10, 10]` + - 1 + + * - | ``float atanf(float x)`` + | Returns the arc tangent of :math:`x`. + - :math:`x \in [-10, 10]` + - 2 + + * - | ``float atan2f(float x, float y)`` + | Returns the arc tangent of the ratio of :math:`x` and :math:`y`. + - | :math:`x \in [-4, 4]` + | :math:`y \in [-2, 2]` + - 1 + + * - | ``float atanhf(float x)`` + | Returns the arc hyperbolic tangent of :math:`x`. + - :math:`x \in [-0.9, 0.9]` + - 1 + + * - | ``float cosf(float x)`` + | Returns the cosine of :math:`x`. + - :math:`x \in [-\pi, \pi]` + - 1 + + * - | ``float coshf(float x)`` + | Returns the hyperbolic cosine of :math:`x`. + - :math:`x \in [-5, 5]` + - 1 + + * - | ``float sinf(float x)`` + | Returns the sine of :math:`x`. + - :math:`x \in [-\pi, \pi]` + - 1 + + * - | ``float sinhf(float x)`` + | Returns the hyperbolic sine of :math:`x`. + - :math:`x \in [-5, 5]` + - 1 + + * - | ``void sincosf(float x, float *sptr, float *cptr)`` + | Returns the sine and cosine of :math:`x`. + - :math:`x \in [-3, 3]` + - | ``sin``: 1 + | ``cos``: 1 + + * - | ``float tanf(float x)`` + | Returns the tangent of :math:`x`. + - :math:`x \in [-1.47\pi, 1.47\pi]` + - 1 + + * - | ``float tanhf(float x)`` + | Returns the hyperbolic tangent of :math:`x`. + - :math:`x \in [-5, 5]` + - 2 + + * - | ``float cospif(float x)`` + | Returns the cosine of :math:`\pi \cdot x`. + - :math:`x \in [-0.3, 0.3]` + - 1 + + * - | ``float sinpif(float x)`` + | Returns the hyperbolic sine of :math:`\pi \cdot x`. + - :math:`x \in [-0.625, 0.625]` + - 2 + + * - | ``void sincospif(float x, float *sptr, float *cptr)`` + | Returns the sine and cosine of :math:`\pi \cdot x`. + - :math:`x \in [-0.3, 0.3]` + - | ``sinpi``: 2 + | ``cospi``: 1 + + .. tab-item:: Double Precision Floating-point + + .. list-table:: + :widths: 50,20,30 + + * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** + + * - | ``double acos(double x)`` + | Returns the arc cosine of :math:`x`. + - :math:`x \in [-1, 1]` + - 1 + + * - | ``double acosh(double x)`` + | Returns the nonnegative arc hyperbolic cosine of :math:`x`. + - :math:`x \in [1, 100]` + - 1 + + * - | ``double asin(double x)`` + | Returns the arc sine of :math:`x`. + - :math:`x \in [-1, 1]` + - 1 + + * - | ``double asinh(double x)`` + | Returns the arc hyperbolic sine of :math:`x`. + - :math:`x \in [-10, 10]` + - 1 + + * - | ``double atan(double x)`` + | Returns the arc tangent of :math:`x`. + - :math:`x \in [-10, 10]` + - 1 + + * - | ``double atan2(double x, double y)`` + | Returns the arc tangent of the ratio of :math:`x` and :math:`y`. + - | :math:`x \in [-4, 4]` + | :math:`y \in [-2, 2]` + - 1 + + * - | ``double atanh(double x)`` + | Returns the arc hyperbolic tangent of :math:`x`. + - :math:`x \in [-0.9, 0.9]` + - 1 + + * - | ``double cos(double x)`` + | Returns the cosine of :math:`x`. + - :math:`x \in [-\pi, \pi]` + - 1 + + * - | ``double cosh(double x)`` + | Returns the hyperbolic cosine of :math:`x`. 
+ - :math:`x \in [-5, 5]` + - 1 -Single precision mathematical functions -======================================= + * - | ``double sin(double x)`` + | Returns the sine of :math:`x`. + - :math:`x \in [-\pi, \pi]` + - 1 + * - | ``double sinh(double x)`` + | Returns the hyperbolic sine of :math:`x`. + - :math:`x \in [-5, 5]` + - 1 -Following is the list of supported single precision mathematical functions. + * - | ``void sincos(double x, double *sptr, double *cptr)`` + | Returns the sine and cosine of :math:`x`. + - :math:`x \in [-3, 3]` + - | ``sin``: 1 + | ``cos``: 1 -.. list-table:: Single precision mathematical functions + * - | ``double tan(double x)`` + | Returns the tangent of :math:`x`. + - :math:`x \in [-1.47\pi, 1.47\pi]` + - 1 - * - **Function** - - **Supported on Host** - - **Supported on Device** - - * - | ``float abs(float x)`` - | Returns the absolute value of :math:`x` - - ✓ - - ✓ - - * - | ``float acosf(float x)`` - | Returns the arc cosine of :math:`x`. - - ✓ - - ✓ - - * - | ``float acoshf(float x)`` - | Returns the nonnegative arc hyperbolic cosine of :math:`x`. - - ✓ - - ✓ - - * - | ``float asinf(float x)`` - | Returns the arc sine of :math:`x`. - - ✓ - - ✓ - - * - | ``float asinhf(float x)`` - | Returns the arc hyperbolic sine of :math:`x`. - - ✓ - - ✓ - - * - | ``float atanf(float x)`` - | Returns the arc tangent of :math:`x`. - - ✓ - - ✓ - - * - | ``float atan2f(float x, float y)`` - | Returns the arc tangent of the ratio of :math:`x` and :math:`y`. - - ✓ - - ✓ - - * - | ``float atanhf(float x)`` - | Returns the arc hyperbolic tangent of :math:`x`. - - ✓ - - ✓ - - * - | ``float cbrtf(float x)`` - | Returns the cube root of :math:`x`. - - ✓ - - ✓ - - * - | ``float ceilf(float x)`` - | Returns ceiling of :math:`x`. - - ✓ - - ✓ - - * - | ``float copysignf(float x, float y)`` - | Create value with given magnitude, copying sign of second value. - - ✓ - - ✓ - - * - | ``float cosf(float x)`` - | Returns the cosine of :math:`x`. - - ✓ - - ✓ - - * - | ``float coshf(float x)`` - | Returns the hyperbolic cosine of :math:`x`. - - ✓ - - ✓ - - * - | ``float cospif(float x)`` - | Returns the cosine of :math:`\pi \cdot x`. - - ✓ - - ✓ - - * - | ``float cyl_bessel_i0f(float x)`` - | Returns the value of the regular modified cylindrical Bessel function of order 0 for :math:`x`. - - ✗ - - ✗ - - * - | ``float cyl_bessel_i1f(float x)`` - | Returns the value of the regular modified cylindrical Bessel function of order 1 for :math:`x`. - - ✗ - - ✗ - - * - | ``float erff(float x)`` - | Returns the error function of :math:`x`. - - ✓ - - ✓ - - * - | ``float erfcf(float x)`` - | Returns the complementary error function of :math:`x`. - - ✓ - - ✓ - - * - | ``float erfcinvf(float x)`` - | Returns the inverse complementary function of :math:`x`. - - ✓ - - ✓ - - * - | ``float erfcxf(float x)`` - | Returns the scaled complementary error function of :math:`x`. - - ✓ - - ✓ - - * - | ``float erfinvf(float x)`` - | Returns the inverse error function of :math:`x`. - - ✓ - - ✓ - - * - | ``float expf(float x)`` - | Returns :math:`e^x`. - - ✓ - - ✓ - - * - | ``float exp10f(float x)`` - | Returns :math:`10^x`. - - ✓ - - ✓ - - * - | ``float exp2f( float x)`` - | Returns :math:`2^x`. - - ✓ - - ✓ - - * - | ``float expm1f(float x)`` - | Returns :math:`ln(x - 1)` - - ✓ - - ✓ - - * - | ``float fabsf(float x)`` - | Returns the absolute value of `x` - - ✓ - - ✓ - - * - | ``float fdimf(float x, float y)`` - | Returns the positive difference between :math:`x` and :math:`y`. 
- - ✓ - - ✓ - - * - | ``float fdividef(float x, float y)`` - | Divide two floating point values. - - ✓ - - ✓ - - * - | ``float floorf(float x)`` - | Returns the largest integer less than or equal to :math:`x`. - - ✓ - - ✓ - - * - | ``float fmaf(float x, float y, float z)`` - | Returns :math:`x \cdot y + z` as a single operation. - - ✓ - - ✓ - - * - | ``float fmaxf(float x, float y)`` - | Determine the maximum numeric value of :math:`x` and :math:`y`. - - ✓ - - ✓ - - * - | ``float fminf(float x, float y)`` - | Determine the minimum numeric value of :math:`x` and :math:`y`. - - ✓ - - ✓ - - * - | ``float fmodf(float x, float y)`` - | Returns the floating-point remainder of :math:`x / y`. - - ✓ - - ✓ - - * - | ``float modff(float x, float* iptr)`` - | Break down :math:`x` into fractional and integral parts. - - ✓ - - ✗ - - * - | ``float frexpf(float x, int* nptr)`` - | Extract mantissa and exponent of :math:`x`. - - ✓ - - ✗ - - * - | ``float hypotf(float x, float y)`` - | Returns the square root of the sum of squares of :math:`x` and :math:`y`. - - ✓ - - ✓ - - * - | ``int ilogbf(float x)`` - | Returns the unbiased integer exponent of :math:`x`. - - ✓ - - ✓ - - * - | ``bool isfinite(float x)`` - | Determine whether :math:`x` is finite. - - ✓ - - ✓ - - * - | ``bool isinf(float x)`` - | Determine whether :math:`x` is infinite. - - ✓ - - ✓ - - * - | ``bool isnan(float x)`` - | Determine whether :math:`x` is a ``NAN``. - - ✓ - - ✓ - - * - | ``float j0f(float x)`` - | Returns the value of the Bessel function of the first kind of order 0 for :math:`x`. - - ✓ - - ✓ - - * - | ``float j1f(float x)`` - | Returns the value of the Bessel function of the first kind of order 1 for :math:`x`. - - ✓ - - ✓ - - * - | ``float jnf(int n, float x)`` - | Returns the value of the Bessel function of the first kind of order n for :math:`x`. - - ✓ - - ✓ - - * - | ``float ldexpf(float x, int exp)`` - | Returns the natural logarithm of the absolute value of the gamma function of :math:`x`. - - ✓ - - ✓ - - * - | ``float lgammaf(float x)`` - | Returns the natural logarithm of the absolute value of the gamma function of :math:`x`. - - ✓ - - ✗ - - * - | ``long int lrintf(float x)`` - | Round :math:`x` to nearest integer value. - - ✓ - - ✓ - - * - | ``long long int llrintf(float x)`` - | Round :math:`x` to nearest integer value. - - ✓ - - ✓ - - * - | ``long int lroundf(float x)`` - | Round to nearest integer value. - - ✓ - - ✓ - - * - | ``long long int llroundf(float x)`` - | Round to nearest integer value. - - ✓ - - ✓ - - * - | ``float log10f(float x)`` - | Returns the base 10 logarithm of :math:`x`. - - ✓ - - ✓ - - * - | ``float log1pf(float x)`` - | Returns the natural logarithm of :math:`x + 1`. - - ✓ - - ✓ - - * - | ``float log2f(float x)`` - | Returns the base 2 logarithm of :math:`x`. - - ✓ - - ✓ - - * - | ``float logf(float x)`` - | Returns the natural logarithm of :math:`x`. - - ✓ - - ✓ - - * - | ``float logbf(float x)`` - | Returns the floating point representation of the exponent of :math:`x`. - - ✓ - - ✓ - - * - | ``float nanf(const char* tagp)`` - | Returns "Not a Number" value. - - ✗ - - ✓ - - * - | ``float nearbyintf(float x)`` - | Round :math:`x` to the nearest integer. - - ✓ - - ✓ - - * - | ``float nextafterf(float x, float y)`` - | Returns next representable single-precision floating-point value after argument. - - ✓ - - ✗ - - * - | ``float norm3df(float x, float y, float z)`` - | Returns the square root of the sum of squares of :math:`x`, :math:`y` and :math:`z`. 
- - ✓ - - ✓ - - * - | ``float norm4df(float x, float y, float z, float w)`` - | Returns the square root of the sum of squares of :math:`x`, :math:`y`, :math:`z` and :math:`w`. - - ✓ - - ✓ - - * - | ``float normcdff(float y)`` - | Returns the standard normal cumulative distribution function. - - ✓ - - ✓ - - * - | ``float normcdfinvf(float y)`` - | Returns the inverse of the standard normal cumulative distribution function. - - ✓ - - ✓ - - * - | ``float normf(int dim, const float *a)`` - | Returns the square root of the sum of squares of any number of coordinates. - - ✓ - - ✓ - - * - | ``float powf(float x, float y)`` - | Returns :math:`x^y`. - - ✓ - - ✓ - - * - | ``float powif(float base, int iexp)`` - | Returns the value of first argument to the power of second argument. - - ✓ - - ✓ - - * - | ``float remainderf(float x, float y)`` - | Returns single-precision floating-point remainder. - - ✓ - - ✓ - - * - | ``float remquof(float x, float y, int* quo)`` - | Returns single-precision floating-point remainder and part of quotient. - - ✓ - - ✓ - - * - | ``float roundf(float x)`` - | Round to nearest integer value in floating-point. - - ✓ - - ✓ - - * - | ``float rcbrtf(float x)`` - | Returns the reciprocal cube root function. - - ✓ - - ✓ - - * - | ``float rhypotf(float x, float y)`` - | Returns one over the square root of the sum of squares of two arguments. - - ✓ - - ✓ - - * - | ``float rintf(float x)`` - | Round input to nearest integer value in floating-point. - - ✓ - - ✓ - - * - | ``float rnorm3df(float x, float y, float z)`` - | Returns one over the square root of the sum of squares of three coordinates of the argument. - - ✓ - - ✓ - - * - | ``float rnorm4df(float x, float y, float z, float w)`` - | Returns one over the square root of the sum of squares of four coordinates of the argument. - - ✓ - - ✓ - - * - | ``float rnormf(int dim, const float *a)`` - | Returns the reciprocal of square root of the sum of squares of any number of coordinates. - - ✓ - - ✓ - - * - | ``float scalblnf(float x, long int n)`` - | Scale :math:`x` by :math:`2^n`. - - ✓ - - ✓ - - * - | ``float scalbnf(float x, int n)`` - | Scale :math:`x` by :math:`2^n`. - - ✓ - - ✓ - - * - | ``bool signbit(float x)`` - | Return the sign bit of :math:`x`. - - ✓ - - ✓ - - * - | ``float sinf(float x)`` - | Returns the sine of :math:`x`. - - ✓ - - ✓ - - * - | ``float sinhf(float x)`` - | Returns the hyperbolic sine of :math:`x`. - - ✓ - - ✓ - - * - | ``float sinpif(float x)`` - | Returns the hyperbolic sine of :math:`\pi \cdot x`. - - ✓ - - ✓ - - * - | ``void sincosf(float x, float *sptr, float *cptr)`` - | Returns the sine and cosine of :math:`x`. - - ✓ - - ✓ - - * - | ``void sincospif(float x, float *sptr, float *cptr)`` - | Returns the sine and cosine of :math:`\pi \cdot x`. - - ✓ - - ✓ - - * - | ``float sqrtf(float x)`` - | Returns the square root of :math:`x`. - - ✓ - - ✓ - - * - | ``float rsqrtf(float x)`` - | Returns the reciprocal of the square root of :math:`x`. - - ✗ - - ✓ - - * - | ``float tanf(float x)`` - | Returns the tangent of :math:`x`. - - ✓ - - ✓ - - * - | ``float tanhf(float x)`` - | Returns the hyperbolic tangent of :math:`x`. - - ✓ - - ✓ - - * - | ``float tgammaf(float x)`` - | Returns the gamma function of :math:`x`. - - ✓ - - ✓ - - * - | ``float truncf(float x)`` - | Truncate :math:`x` to the integral part. - - ✓ - - ✓ - - * - | ``float y0f(float x)`` - | Returns the value of the Bessel function of the second kind of order 0 for :math:`x`. 
- - ✓ - - ✓ - - * - | ``float y1f(float x)`` - | Returns the value of the Bessel function of the second kind of order 1 for :math:`x`. - - ✓ - - ✓ - - * - | ``float ynf(int n, float x)`` - | Returns the value of the Bessel function of the second kind of order n for :math:`x`. - - ✓ - - ✓ - -Double precision mathematical functions -======================================= - -Following is the list of supported double precision mathematical functions. - -.. list-table:: Double precision mathematical functions + * - | ``double tanh(double x)`` + | Returns the hyperbolic tangent of :math:`x`. + - :math:`x \in [-5, 5]` + - 1 - * - **Function** - - **Supported on Host** - - **Supported on Device** - - * - | ``double abs(double x)`` - | Returns the absolute value of :math:`x` - - ✓ - - ✓ - - * - | ``double acos(double x)`` - | Returns the arc cosine of :math:`x`. - - ✓ - - ✓ - - * - | ``double acosh(double x)`` - | Returns the nonnegative arc hyperbolic cosine of :math:`x`. - - ✓ - - ✓ - - * - | ``double asin(double x)`` - | Returns the arc sine of :math:`x`. - - ✓ - - ✓ - - * - | ``double asinh(double x)`` - | Returns the arc hyperbolic sine of :math:`x`. - - ✓ - - ✓ - - * - | ``double atan(double x)`` - | Returns the arc tangent of :math:`x`. - - ✓ - - ✓ - - * - | ``double atan2(double x, double y)`` - | Returns the arc tangent of the ratio of :math:`x` and :math:`y`. - - ✓ - - ✓ - - * - | ``double atanh(double x)`` - | Returns the arc hyperbolic tangent of :math:`x`. - - ✓ - - ✓ - - * - | ``double cbrt(double x)`` - | Returns the cube root of :math:`x`. - - ✓ - - ✓ - - * - | ``double ceil(double x)`` - | Returns ceiling of :math:`x`. - - ✓ - - ✓ - - * - | ``double copysign(double x, double y)`` - | Create value with given magnitude, copying sign of second value. - - ✓ - - ✓ - - * - | ``double cos(double x)`` - | Returns the cosine of :math:`x`. - - ✓ - - ✓ - - * - | ``double cosh(double x)`` - | Returns the hyperbolic cosine of :math:`x`. - - ✓ - - ✓ - - * - | ``double cospi(double x)`` - | Returns the cosine of :math:`\pi \cdot x`. - - ✓ - - ✓ - - * - | ``double cyl_bessel_i0(double x)`` - | Returns the value of the regular modified cylindrical Bessel function of order 0 for :math:`x`. - - ✗ - - ✗ - - * - | ``double cyl_bessel_i1(double x)`` - | Returns the value of the regular modified cylindrical Bessel function of order 1 for :math:`x`. - - ✗ - - ✗ - - * - | ``double erf(double x)`` - | Returns the error function of :math:`x`. - - ✓ - - ✓ - - * - | ``double erfc(double x)`` - | Returns the complementary error function of :math:`x`. - - ✓ - - ✓ - - * - | ``double erfcinv(double x)`` - | Returns the inverse complementary function of :math:`x`. - - ✓ - - ✓ - - * - | ``double erfcx(double x)`` - | Returns the scaled complementary error function of :math:`x`. - - ✓ - - ✓ - - * - | ``double erfinv(double x)`` - | Returns the inverse error function of :math:`x`. - - ✓ - - ✓ - - * - | ``double exp(double x)`` - | Returns :math:`e^x`. - - ✓ - - ✓ - - * - | ``double exp10(double x)`` - | Returns :math:`10^x`. - - ✓ - - ✓ - - * - | ``double exp2( double x)`` - | Returns :math:`2^x`. - - ✓ - - ✓ - - * - | ``double expm1(double x)`` - | Returns :math:`ln(x - 1)` - - ✓ - - ✓ - - * - | ``double fabs(double x)`` - | Returns the absolute value of `x` - - ✓ - - ✓ - - * - | ``double fdim(double x, double y)`` - | Returns the positive difference between :math:`x` and :math:`y`. - - ✓ - - ✓ - - * - | ``double floor(double x)`` - | Returns the largest integer less than or equal to :math:`x`. 
- - ✓ - - ✓ - - * - | ``double fma(double x, double y, double z)`` - | Returns :math:`x \cdot y + z` as a single operation. - - ✓ - - ✓ - - * - | ``double fmax(double x, double y)`` - | Determine the maximum numeric value of :math:`x` and :math:`y`. - - ✓ - - ✓ - - * - | ``double fmin(double x, double y)`` - | Determine the minimum numeric value of :math:`x` and :math:`y`. - - ✓ - - ✓ - - * - | ``double fmod(double x, double y)`` - | Returns the floating-point remainder of :math:`x / y`. - - ✓ - - ✓ - - * - | ``double modf(double x, double* iptr)`` - | Break down :math:`x` into fractional and integral parts. - - ✓ - - ✗ - - * - | ``double frexp(double x, int* nptr)`` - | Extract mantissa and exponent of :math:`x`. - - ✓ - - ✗ - - * - | ``double hypot(double x, double y)`` - | Returns the square root of the sum of squares of :math:`x` and :math:`y`. - - ✓ - - ✓ - - * - | ``int ilogb(double x)`` - | Returns the unbiased integer exponent of :math:`x`. - - ✓ - - ✓ - - * - | ``bool isfinite(double x)`` - | Determine whether :math:`x` is finite. - - ✓ - - ✓ - - * - | ``bool isin(double x)`` - | Determine whether :math:`x` is infinite. - - ✓ - - ✓ - - * - | ``bool isnan(double x)`` - | Determine whether :math:`x` is a ``NAN``. - - ✓ - - ✓ - - * - | ``double j0(double x)`` - | Returns the value of the Bessel function of the first kind of order 0 for :math:`x`. - - ✓ - - ✓ - - * - | ``double j1(double x)`` - | Returns the value of the Bessel function of the first kind of order 1 for :math:`x`. - - ✓ - - ✓ - - * - | ``double jn(int n, double x)`` - | Returns the value of the Bessel function of the first kind of order n for :math:`x`. - - ✓ - - ✓ - - * - | ``double ldexp(double x, int exp)`` - | Returns the natural logarithm of the absolute value of the gamma function of :math:`x`. - - ✓ - - ✓ - - * - | ``double lgamma(double x)`` - | Returns the natural logarithm of the absolute value of the gamma function of :math:`x`. - - ✓ - - ✗ - - * - | ``long int lrint(double x)`` - | Round :math:`x` to nearest integer value. - - ✓ - - ✓ - - * - | ``long long int llrint(double x)`` - | Round :math:`x` to nearest integer value. - - ✓ - - ✓ - - * - | ``long int lround(double x)`` - | Round to nearest integer value. - - ✓ - - ✓ - - * - | ``long long int llround(double x)`` - | Round to nearest integer value. - - ✓ - - ✓ - - * - | ``double log10(double x)`` - | Returns the base 10 logarithm of :math:`x`. - - ✓ - - ✓ - - * - | ``double log1p(double x)`` - | Returns the natural logarithm of :math:`x + 1`. - - ✓ - - ✓ - - * - | ``double log2(double x)`` - | Returns the base 2 logarithm of :math:`x`. - - ✓ - - ✓ - - * - | ``double log(double x)`` - | Returns the natural logarithm of :math:`x`. - - ✓ - - ✓ - - * - | ``double logb(double x)`` - | Returns the floating point representation of the exponent of :math:`x`. - - ✓ - - ✓ - - * - | ``double nan(const char* tagp)`` - | Returns "Not a Number" value. - - ✗ - - ✓ - - * - | ``double nearbyint(double x)`` - | Round :math:`x` to the nearest integer. - - ✓ - - ✓ - - * - | ``double nextafter(double x, double y)`` - | Returns next representable double-precision floating-point value after argument. - - ✓ - - ✓ - - * - | ``double norm3d(double x, double y, double z)`` - | Returns the square root of the sum of squares of :math:`x`, :math:`y` and :math:`z`. - - ✓ - - ✓ - - * - | ``double norm4d(double x, double y, double z, double w)`` - | Returns the square root of the sum of squares of :math:`x`, :math:`y`, :math:`z` and :math:`w`. 
- - ✓ - - ✓ - - * - | ``double normcdf(double y)`` - | Returns the standard normal cumulative distribution function. - - ✓ - - ✓ - - * - | ``double normcdfinv(double y)`` - | Returns the inverse of the standard normal cumulative distribution function. - - ✓ - - ✓ - - * - | ``double norm(int dim, const double *a)`` - | Returns the square root of the sum of squares of any number of coordinates. - - ✓ - - ✓ - - * - | ``double pow(double x, double y)`` - | Returns :math:`x^y`. - - ✓ - - ✓ - - * - | ``double powi(double base, int iexp)`` - | Returns the value of first argument to the power of second argument. - - ✓ - - ✓ - - * - | ``double remainder(double x, double y)`` - | Returns double-precision floating-point remainder. - - ✓ - - ✓ - - * - | ``double remquo(double x, double y, int* quo)`` - | Returns double-precision floating-point remainder and part of quotient. - - ✓ - - ✗ - - * - | ``double round(double x)`` - | Round to nearest integer value in floating-point. - - ✓ - - ✓ - - * - | ``double rcbrt(double x)`` - | Returns the reciprocal cube root function. - - ✓ - - ✓ - - * - | ``double rhypot(double x, double y)`` - | Returns one over the square root of the sum of squares of two arguments. - - ✓ - - ✓ - - * - | ``double rint(double x)`` - | Round input to nearest integer value in floating-point. - - ✓ - - ✓ - - * - | ``double rnorm3d(double x, double y, double z)`` - | Returns one over the square root of the sum of squares of three coordinates of the argument. - - ✓ - - ✓ - - * - | ``double rnorm4d(double x, double y, double z, double w)`` - | Returns one over the square root of the sum of squares of four coordinates of the argument. - - ✓ - - ✓ - - * - | ``double rnorm(int dim, const double *a)`` - | Returns the reciprocal of square root of the sum of squares of any number of coordinates. - - ✓ - - ✓ - - * - | ``double scalbln(double x, long int n)`` - | Scale :math:`x` by :math:`2^n`. - - ✓ - - ✓ - - * - | ``double scalbn(double x, int n)`` - | Scale :math:`x` by :math:`2^n`. - - ✓ - - ✓ - - * - | ``bool signbit(double x)`` - | Return the sign bit of :math:`x`. - - ✓ - - ✓ - - * - | ``double sin(double x)`` - | Returns the sine of :math:`x`. - - ✓ - - ✓ - - * - | ``double sinh(double x)`` - | Returns the hyperbolic sine of :math:`x`. - - ✓ - - ✓ - - * - | ``double sinpi(double x)`` - | Returns the hyperbolic sine of :math:`\pi \cdot x`. - - ✓ - - ✓ - - * - | ``void sincos(double x, double *sptr, double *cptr)`` - | Returns the sine and cosine of :math:`x`. - - ✓ - - ✓ - - * - | ``void sincospi(double x, double *sptr, double *cptr)`` - | Returns the sine and cosine of :math:`\pi \cdot x`. - - ✓ - - ✓ - - * - | ``double sqrt(double x)`` - | Returns the square root of :math:`x`. - - ✓ - - ✓ - - * - | ``double rsqrt(double x)`` - | Returns the reciprocal of the square root of :math:`x`. - - ✗ - - ✓ - - * - | ``double tan(double x)`` - | Returns the tangent of :math:`x`. - - ✓ - - ✓ - - * - | ``double tanh(double x)`` - | Returns the hyperbolic tangent of :math:`x`. - - ✓ - - ✓ - - * - | ``double tgamma(double x)`` - | Returns the gamma function of :math:`x`. - - ✓ - - ✓ - - * - | ``double trunc(double x)`` - | Truncate :math:`x` to the integral part. - - ✓ - - ✓ - - * - | ``double y0(double x)`` - | Returns the value of the Bessel function of the second kind of order 0 for :math:`x`. - - ✓ - - ✓ - - * - | ``double y1(double x)`` - | Returns the value of the Bessel function of the second kind of order 1 for :math:`x`. 
- - ✓ - - ✓ - - * - | ``double yn(int n, double x)`` - | Returns the value of the Bessel function of the second kind of order n for :math:`x`. - - ✓ - - ✓ + * - | ``double cospi(double x)`` + | Returns the cosine of :math:`\pi \cdot x`. + - :math:`x \in [-0.3, 0.3]` + - 2 -Integer intrinsics -================== + * - | ``double sinpi(double x)`` + | Returns the hyperbolic sine of :math:`\pi \cdot x`. + - :math:`x \in [-0.625, 0.625]` + - 2 -Following is the list of supported integer intrinsics. Note that intrinsics are supported on device only. + * - | ``void sincospi(double x, double *sptr, double *cptr)`` + | Returns the sine and cosine of :math:`\pi \cdot x`. + - :math:`x \in [-0.3, 0.3]` + - | ``sinpi``: 2 + | ``cospi``: 2 -.. list-table:: Integer intrinsics mathematical functions +No C++ STD Implementation +------------------------- - * - **Function** +This table lists HIP device functions that do not have a direct equivalent in the C++ standard library. +These functions were excluded from comparison due to the complexity of implementing a precise +reference version within the standard library's constraints. - * - | ``unsigned int __brev(unsigned int x)`` - | Reverse the bit order of a 32 bit unsigned integer. +.. tab-set:: - * - | ``unsigned long long int __brevll(unsigned long long int x)`` - | Reverse the bit order of a 64 bit unsigned integer. + .. tab-item:: Single Precision Floating-point - * - | ``unsigned int __byte_perm(unsigned int x, unsigned int y, unsigned int z)`` - | Return selected bytes from two 32-bit unsigned integers. + .. list-table:: - * - | ``unsigned int __clz(int x)`` - | Return the number of consecutive high-order zero bits in 32 bit integer. + * - **Function** - * - | ``unsigned int __clzll(long long int x)`` - | Return the number of consecutive high-order zero bits in 64 bit integer. + * - | ``float j0f(float x)`` + | Returns the value of the Bessel function of the first kind of order 0 for :math:`x`. - * - | ``unsigned int __ffs(int x)`` - | Find the position of least significant bit set to 1 in a 32 bit integer. + * - | ``float j1f(float x)`` + | Returns the value of the Bessel function of the first kind of order 1 for :math:`x`. - * - | ``unsigned int __ffsll(long long int x)`` - | Find the position of least significant bit set to 1 in a 64 bit signed integer. + * - | ``float jnf(int n, float x)`` + | Returns the value of the Bessel function of the first kind of order n for :math:`x`. - * - | ``unsigned int __fns32(unsigned long long mask, unsigned int base, int offset)`` - | Find the position of the n-th set to 1 bit in a 32-bit integer. + * - | ``float y0f(float x)`` + | Returns the value of the Bessel function of the second kind of order 0 for :math:`x`. - * - | ``unsigned int __fns64(unsigned long long int mask, unsigned int base, int offset)`` - | Find the position of the n-th set to 1 bit in a 64-bit integer. + * - | ``float y1f(float x)`` + | Returns the value of the Bessel function of the second kind of order 1 for :math:`x`. - * - | ``unsigned int __funnelshift_l(unsigned int lo, unsigned int hi, unsigned int shift)`` - | Concatenate :math:`hi` and :math:`lo`, shift left by shift & 31 bits, return the most significant 32 bits. + * - | ``float ynf(int n, float x)`` + | Returns the value of the Bessel function of the second kind of order n for :math:`x`. 
- * - | ``unsigned int __funnelshift_lc(unsigned int lo, unsigned int hi, unsigned int shift)`` - | Concatenate :math:`hi` and :math:`lo`, shift left by min(shift, 32) bits, return the most significant 32 bits. + * - | ``float erfcinvf(float x)`` + | Returns the inverse complementary function of :math:`x`. - * - | ``unsigned int __funnelshift_r(unsigned int lo, unsigned int hi, unsigned int shift)`` - | Concatenate :math:`hi` and :math:`lo`, shift right by shift & 31 bits, return the least significant 32 bits. + * - | ``float erfinvf(float x)`` + | Returns the inverse error function of :math:`x`. - * - | ``unsigned int __funnelshift_rc(unsigned int lo, unsigned int hi, unsigned int shift)`` - | Concatenate :math:`hi` and :math:`lo`, shift right by min(shift, 32) bits, return the least significant 32 bits. + * - | ``float normcdff(float y)`` + | Returns the standard normal cumulative distribution function. - * - | ``unsigned int __hadd(int x, int y)`` - | Compute average of signed input arguments, avoiding overflow in the intermediate sum. + * - | ``float normcdfinvf(float y)`` + | Returns the inverse of the standard normal cumulative distribution function. - * - | ``unsigned int __rhadd(int x, int y)`` - | Compute rounded average of signed input arguments, avoiding overflow in the intermediate sum. + .. tab-item:: Double Precision Floating-point - * - | ``unsigned int __uhadd(int x, int y)`` - | Compute average of unsigned input arguments, avoiding overflow in the intermediate sum. + .. list-table:: - * - | ``unsigned int __urhadd (unsigned int x, unsigned int y)`` - | Compute rounded average of unsigned input arguments, avoiding overflow in the intermediate sum. + * - **Function** - * - | ``int __sad(int x, int y, int z)`` - | Returns :math:`|x - y| + z`, the sum of absolute difference. + * - | ``double j0(double x)`` + | Returns the value of the Bessel function of the first kind of order 0 for :math:`x`. - * - | ``unsigned int __usad(unsigned int x, unsigned int y, unsigned int z)`` - | Returns :math:`|x - y| + z`, the sum of absolute difference. + * - | ``double j1(double x)`` + | Returns the value of the Bessel function of the first kind of order 1 for :math:`x`. - * - | ``unsigned int __popc(unsigned int x)`` - | Count the number of bits that are set to 1 in a 32 bit integer. + * - | ``double jn(int n, double x)`` + | Returns the value of the Bessel function of the first kind of order n for :math:`x`. - * - | ``unsigned int __popcll(unsigned long long int x)`` - | Count the number of bits that are set to 1 in a 64 bit integer. + * - | ``double y0(double x)`` + | Returns the value of the Bessel function of the second kind of order 0 for :math:`x`. - * - | ``int __mul24(int x, int y)`` - | Multiply two 24bit integers. + * - | ``double y1(double x)`` + | Returns the value of the Bessel function of the second kind of order 1 for :math:`x`. - * - | ``unsigned int __umul24(unsigned int x, unsigned int y)`` - | Multiply two 24bit unsigned integers. + * - | ``double yn(int n, double x)`` + | Returns the value of the Bessel function of the second kind of order n for :math:`x`. - * - | ``int __mulhi(int x, int y)`` - | Returns the most significant 32 bits of the product of the two 32-bit integers. + * - | ``double erfcinv(double x)`` + | Returns the inverse complementary function of :math:`x`. - * - | ``unsigned int __umulhi(unsigned int x, unsigned int y)`` - | Returns the most significant 32 bits of the product of the two 32-bit unsigned integers. 
+ * - | ``double erfinv(double x)`` + | Returns the inverse error function of :math:`x`. - * - | ``long long int __mul64hi(long long int x, long long int y)`` - | Returns the most significant 64 bits of the product of the two 64-bit integers. + * - | ``double normcdf(double y)`` + | Returns the standard normal cumulative distribution function. - * - | ``unsigned long long int __umul64hi(unsigned long long int x, unsigned long long int y)`` - | Returns the most significant 64 bits of the product of the two 64 unsigned bit integers. + * - | ``double normcdfinv(double y)`` + | Returns the inverse of the standard normal cumulative distribution function. -The HIP-Clang implementation of ``__ffs()`` and ``__ffsll()`` contains code to add a constant +1 to produce the ``ffs`` result format. -For the cases where this overhead is not acceptable and programmer is willing to specialize for the platform, -HIP-Clang provides ``__lastbit_u32_u32(unsigned int input)`` and ``__lastbit_u32_u64(unsigned long long int input)``. -The index returned by ``__lastbit_`` instructions starts at -1, while for ``ffs`` the index starts at 0. +Unsupported +----------- -Floating-point Intrinsics -========================= +This table lists functions that are not supported by HIP. + +.. tab-set:: + + .. tab-item:: Single Precision Floating-point + + .. list-table:: -Following is the list of supported floating-point intrinsics. Note that intrinsics are supported on device only. + * - **Function** + + * - | ``float cyl_bessel_i0f(float x)`` + | Returns the value of the regular modified cylindrical Bessel function of order 0 for :math:`x`. + + * - | ``float cyl_bessel_i1f(float x)`` + | Returns the value of the regular modified cylindrical Bessel function of order 1 for :math:`x`. + + .. tab-item:: Double Precision Floating-point + + .. list-table:: + + * - **Function** + + * - | ``double cyl_bessel_i0(double x)`` + | Returns the value of the regular modified cylindrical Bessel function of order 0 for :math:`x`. + + * - | ``double cyl_bessel_i1(double x)`` + | Returns the value of the regular modified cylindrical Bessel function of order 1 for :math:`x`. + +Intrinsic mathematical functions +================================ + +Intrinsic math functions are optimized for performance on HIP-supported hardware. These functions often +trade some precision for faster execution, making them ideal for applications where computational +efficiency is a priority over strict numerical accuracy. Note that intrinsics are supported on device only. + +Floating-point Intrinsics +------------------------- .. note:: - Only the nearest even rounding mode supported on AMD GPUs by defaults. The ``_rz``, ``_ru`` and - ``_rd`` suffixed intrinsic functions are existing in HIP AMD backend, if the + Only the nearest-even rounding mode is supported by default on AMD GPUs. The ``_rz``, ``_ru``, and ``_rd`` + suffixed intrinsic functions exist in the HIP AMD backend if the ``OCML_BASIC_ROUNDED_OPERATIONS`` macro is defined. .. list-table:: Single precision intrinsics mathematical functions + :widths: 50,20,30 * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** * - | ``float __cosf(float x)`` | Returns the fast approximate cosine of :math:`x`. + - :math:`x \in [-\pi, \pi]` + - 4 * - | ``float __exp10f(float x)`` | Returns the fast approximate for 10 :sup:`x`. + - :math:`x \in [-4, 4]` + - 18 * - | ``float __expf(float x)`` | Returns the fast approximate for e :sup:`x`. 
+ - :math:`x \in [-10, 10]` + - 6 * - | ``float __fadd_rn(float x, float y)`` | Add two floating-point values in round-to-nearest-even mode. + - | :math:`x \in [-1000, 1000]` + | :math:`y \in [-1000, 1000]` + - 0 * - | ``float __fdiv_rn(float x, float y)`` - | Divide two floating point values in round-to-nearest-even mode. + | Divide two floating-point values in round-to-nearest-even mode. + - | :math:`x \in [-100, 100]` + | :math:`y \in [-100, 100]` + - 0 * - | ``float __fmaf_rn(float x, float y, float z)`` | Returns ``x × y + z`` as a single operation in round-to-nearest-even mode. + - | :math:`x \in [-100, 100]` + | :math:`y \in [-10, 10]` + | :math:`z \in [-10, 10]` + - 0 * - | ``float __fmul_rn(float x, float y)`` | Multiply two floating-point values in round-to-nearest-even mode. + - | :math:`x \in [-100, 100]` + | :math:`y \in [-100, 100]` + - 0 * - | ``float __frcp_rn(float x, float y)`` | Returns ``1 / x`` in round-to-nearest-even mode. + - :math:`x \in [-100, 100]` + - 0 * - | ``float __frsqrt_rn(float x)`` | Returns ``1 / √x`` in round-to-nearest-even mode. + - :math:`x \in [0.01, 100]` + - 1 * - | ``float __fsqrt_rn(float x)`` | Returns ``√x`` in round-to-nearest-even mode. + - :math:`x \in [0, 100]` + - 1 * - | ``float __fsub_rn(float x, float y)`` | Subtract two floating-point values in round-to-nearest-even mode. + - | :math:`x \in [-1000, 1000]` + | :math:`y \in [-1000, 1000]` + - 0 * - | ``float __log10f(float x)`` | Returns the fast approximate for base 10 logarithm of :math:`x`. + - :math:`x \in [10^{-6}, 10^6]` + - 2 * - | ``float __log2f(float x)`` | Returns the fast approximate for base 2 logarithm of :math:`x`. + - :math:`x \in [10^{-6}, 10^6]` + - 1 * - | ``float __logf(float x)`` | Returns the fast approximate for natural logarithm of :math:`x`. + - :math:`x \in [10^{-6}, 10^6]` + - 2 * - | ``float __powf(float x, float y)`` | Returns the fast approximate of x :sup:`y`. + - | :math:`x \in [-4, 4]` + | :math:`y \in [-2, 2]` + - 1 * - | ``float __saturatef(float x)`` | Clamp :math:`x` to [+0.0, 1.0]. + - :math:`x \in [-2, 3]` + - 0 * - | ``float __sincosf(float x, float* sinptr, float* cosptr)`` | Returns the fast approximate of sine and cosine of :math:`x`. + - :math:`x \in [-3, 3]` + - | ``sin``: 18 + | ``cos``: 4 * - | ``float __sinf(float x)`` | Returns the fast approximate sine of :math:`x`. + - :math:`x \in [-\pi, \pi]` + - 18 * - | ``float __tanf(float x)`` | Returns the fast approximate tangent of :math:`x`. + - :math:`x \in [-1.47\pi, 1.47\pi]` + - 1 .. list-table:: Double precision intrinsics mathematical functions + :widths: 50,20,30 * - **Function** + - **Test Range** + - **ULP Difference of Maximum Absolute Error** * - | ``double __dadd_rn(double x, double y)`` | Add two floating-point values in round-to-nearest-even mode. + - | :math:`x \in [-1000, 1000]` + | :math:`y \in [-1000, 1000]` + - 0 * - | ``double __ddiv_rn(double x, double y)`` | Divide two floating-point values in round-to-nearest-even mode. + - | :math:`x \in [-100, 100]` + | :math:`y \in [-100, 100]` + - 0 * - | ``double __dmul_rn(double x, double y)`` | Multiply two floating-point values in round-to-nearest-even mode. + - | :math:`x \in [-100, 100]` + | :math:`y \in [-100, 100]` + - 0 * - | ``double __drcp_rn(double x, double y)`` | Returns ``1 / x`` in round-to-nearest-even mode. + - :math:`x \in [-100, 100]` + - 0 * - | ``double __dsqrt_rn(double x)`` | Returns ``√x`` in round-to-nearest-even mode. 
+ - :math:`x \in [0, 100]` + - 0 * - | ``double __dsub_rn(double x, double y)`` | Subtract two floating-point values in round-to-nearest-even mode. + - | :math:`x \in [-1000, 1000]` + | :math:`y \in [-1000, 1000]` + - 0 * - | ``double __fma_rn(double x, double y, double z)`` | Returns ``x × y + z`` as a single operation in round-to-nearest-even mode. + - | :math:`x \in [-100, 100]` + | :math:`y \in [-10, 10]` + | :math:`z \in [-10, 10]` + - 0 + +Integer intrinsics +------------------ + +This section covers HIP integer intrinsic functions. ULP error values are omitted +since they only apply to floating-point operations, not integer arithmetic. + +.. list-table:: Integer intrinsics mathematical functions + + * - **Function** + + * - | ``unsigned int __brev(unsigned int x)`` + | Reverse the bit order of a 32 bit unsigned integer. + + * - | ``unsigned long long int __brevll(unsigned long long int x)`` + | Reverse the bit order of a 64 bit unsigned integer. + + * - | ``unsigned int __byte_perm(unsigned int x, unsigned int y, unsigned int z)`` + | Return selected bytes from two 32-bit unsigned integers. + + * - | ``unsigned int __clz(int x)`` + | Return the number of consecutive high-order zero bits in 32 bit integer. + + * - | ``unsigned int __clzll(long long int x)`` + | Return the number of consecutive high-order zero bits in 64 bit integer. + + * - | ``unsigned int __ffs(int x)`` [1]_ + | Returns the position of the first set bit in a 32 bit integer. + | Note: if ``x`` is ``0``, will return ``0`` + + * - | ``unsigned int __ffsll(long long int x)`` [1]_ + | Returns the position of the first set bit in a 64 bit signed integer. + | Note: if ``x`` is ``0``, will return ``0`` + + * - | ``unsigned int __fns32(unsigned long long mask, unsigned int base, int offset)`` + | Find the position of the n-th set to 1 bit in a 32-bit integer. + | Note: this intrinsic is emulated via software, so performance can be potentially slower + + * - | ``unsigned int __fns64(unsigned long long int mask, unsigned int base, int offset)`` + | Find the position of the n-th set to 1 bit in a 64-bit integer. + | Note: this intrinsic is emulated via software, so performance can be potentially slower + + * - | ``unsigned int __funnelshift_l(unsigned int lo, unsigned int hi, unsigned int shift)`` + | Concatenate :math:`hi` and :math:`lo`, shift left by shift & 31 bits, return the most significant 32 bits. + + * - | ``unsigned int __funnelshift_lc(unsigned int lo, unsigned int hi, unsigned int shift)`` + | Concatenate :math:`hi` and :math:`lo`, shift left by min(shift, 32) bits, return the most significant 32 bits. + + * - | ``unsigned int __funnelshift_r(unsigned int lo, unsigned int hi, unsigned int shift)`` + | Concatenate :math:`hi` and :math:`lo`, shift right by shift & 31 bits, return the least significant 32 bits. + + * - | ``unsigned int __funnelshift_rc(unsigned int lo, unsigned int hi, unsigned int shift)`` + | Concatenate :math:`hi` and :math:`lo`, shift right by min(shift, 32) bits, return the least significant 32 bits. + + * - | ``unsigned int __hadd(int x, int y)`` + | Compute average of signed input arguments, avoiding overflow in the intermediate sum. + + * - | ``unsigned int __rhadd(int x, int y)`` + | Compute rounded average of signed input arguments, avoiding overflow in the intermediate sum. + + * - | ``unsigned int __uhadd(int x, int y)`` + | Compute average of unsigned input arguments, avoiding overflow in the intermediate sum. 
+ + * - | ``unsigned int __urhadd (unsigned int x, unsigned int y)`` + | Compute rounded average of unsigned input arguments, avoiding overflow in the intermediate sum. + + * - | ``int __sad(int x, int y, int z)`` + | Returns :math:`|x - y| + z`, the sum of absolute difference. + + * - | ``unsigned int __usad(unsigned int x, unsigned int y, unsigned int z)`` + | Returns :math:`|x - y| + z`, the sum of absolute difference. + + * - | ``unsigned int __popc(unsigned int x)`` + | Count the number of bits that are set to 1 in a 32 bit integer. + + * - | ``unsigned int __popcll(unsigned long long int x)`` + | Count the number of bits that are set to 1 in a 64 bit integer. + + * - | ``int __mul24(int x, int y)`` + | Multiply two 24bit integers. + + * - | ``unsigned int __umul24(unsigned int x, unsigned int y)`` + | Multiply two 24bit unsigned integers. + + * - | ``int __mulhi(int x, int y)`` + | Returns the most significant 32 bits of the product of the two 32-bit integers. + + * - | ``unsigned int __umulhi(unsigned int x, unsigned int y)`` + | Returns the most significant 32 bits of the product of the two 32-bit unsigned integers. + + * - | ``long long int __mul64hi(long long int x, long long int y)`` + | Returns the most significant 64 bits of the product of the two 64-bit integers. + + * - | ``unsigned long long int __umul64hi(unsigned long long int x, unsigned long long int y)`` + | Returns the most significant 64 bits of the product of the two 64 unsigned bit integers. + +.. [1] The HIP-Clang implementation of ``__ffs()`` and ``__ffsll()`` contains code to add a constant +1 to produce the ``ffs`` result format. + For the cases where this overhead is not acceptable and programmer is willing to specialize for the platform, + HIP-Clang provides ``__lastbit_u32_u32(unsigned int input)`` and ``__lastbit_u32_u64(unsigned long long int input)``. + The index returned by ``__lastbit_`` instructions starts at -1, while for ``ffs`` the index starts at 0. From f705a2c6b3bda58470d548542b9df1b68a5319e5 Mon Sep 17 00:00:00 2001 From: Istvan Kiss Date: Mon, 10 Mar 2025 16:08:52 +0100 Subject: [PATCH 42/46] Remove external link --- docs/reference/math_api.rst | 5 ----- 1 file changed, 5 deletions(-) diff --git a/docs/reference/math_api.rst b/docs/reference/math_api.rst index 16c97c3637..504054ff91 100644 --- a/docs/reference/math_api.rst +++ b/docs/reference/math_api.rst @@ -16,11 +16,6 @@ This section documents: - Maximum error bounds for supported HIP math functions - Currently unsupported functions -For a comprehensive analysis of mathematical function accuracy—including detailed evaluations -in single, double, and quadruple precision and a discussion of the IEEE 754 standard's recommendations -on correct rounding — see the paper -`Accuracy of Mathematical Functions `_. - Error bounds on this page are measured in units in the last place (ULPs), representing the absolute difference between a HIP math function result and its corresponding C++ standard library function (e.g., comparing HIP's sinf with C++'s sinf). 
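To make this concrete, the following is a minimal, hypothetical sketch of how such a ULP comparison can be made for a single function (``sinf`` is used here): the function is evaluated in a kernel and the results are compared against the host C++ standard library on the same inputs. This is only an illustration, not the harness used to produce the tables above; error checking of the HIP calls is omitted, and the bit-pattern trick in ``ulp_distance`` assumes finite results of the same sign.

.. code-block:: cpp

   #include <hip/hip_runtime.h>
   #include <algorithm>
   #include <cmath>
   #include <cstdint>
   #include <cstdio>
   #include <cstring>
   #include <vector>

   // Device evaluation of sinf for every input element.
   __global__ void device_sinf(const float* x, float* y, int n) {
       int i = blockIdx.x * blockDim.x + threadIdx.x;
       if (i < n) y[i] = sinf(x[i]);
   }

   // Distance in units in the last place between two finite floats of equal sign.
   static uint32_t ulp_distance(float a, float b) {
       int32_t ia, ib;
       std::memcpy(&ia, &a, sizeof ia);
       std::memcpy(&ib, &b, sizeof ib);
       int64_t diff = static_cast<int64_t>(ia) - static_cast<int64_t>(ib);
       return static_cast<uint32_t>(diff < 0 ? -diff : diff);
   }

   int main() {
       const int n = 1024;
       std::vector<float> x(n), y(n);
       for (int i = 0; i < n; ++i)
           x[i] = -3.0f + 6.0f * static_cast<float>(i) / n;  // sample the test range

       float *dx, *dy;
       hipMalloc(&dx, n * sizeof(float));
       hipMalloc(&dy, n * sizeof(float));
       hipMemcpy(dx, x.data(), n * sizeof(float), hipMemcpyHostToDevice);
       device_sinf<<<(n + 255) / 256, 256>>>(dx, dy, n);
       hipMemcpy(y.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);

       uint32_t max_ulp = 0;
       for (int i = 0; i < n; ++i)
           max_ulp = std::max(max_ulp, ulp_distance(y[i], std::sin(x[i])));  // host reference: float overload of std::sin
       std::printf("max ULP difference vs the host standard library: %u\n",
                   static_cast<unsigned>(max_ulp));

       hipFree(dx);
       hipFree(dy);
       return 0;
   }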
From cefca2c98c808755b104a216bb65573d28a67d84 Mon Sep 17 00:00:00 2001 From: randyh62 Date: Wed, 15 Jan 2025 15:14:36 -0800 Subject: [PATCH 43/46] Update programming model --- .wordlist.txt | 6 +- .../hip_runtime_api/cooperative_groups.rst | 2 +- docs/understand/hardware_implementation.rst | 9 +- docs/understand/programming_model.rst | 400 ++++++++++++------ 4 files changed, 286 insertions(+), 131 deletions(-) diff --git a/.wordlist.txt b/.wordlist.txt index c35dfb045d..fab10155f6 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -7,7 +7,8 @@ APUs AQL AXPY asm -asynchrony +Asynchronicity +Asynchrony backtrace bfloat Bitcode @@ -71,6 +72,7 @@ hipModule hipModuleLaunchKernel hipother HIPRTC +hyperthreading icc IILE iGPU @@ -116,6 +118,7 @@ NDRange nonnegative NOP Numa +ns Nsight ocp omnitrace @@ -124,6 +127,7 @@ overindexing oversubscription overutilized parallelizable +pipelining parallelized pixelated pragmas diff --git a/docs/how-to/hip_runtime_api/cooperative_groups.rst b/docs/how-to/hip_runtime_api/cooperative_groups.rst index 3170e197ef..a3e32cd294 100644 --- a/docs/how-to/hip_runtime_api/cooperative_groups.rst +++ b/docs/how-to/hip_runtime_api/cooperative_groups.rst @@ -164,7 +164,7 @@ The ``thread_rank()`` , ``size()``, ``cg_type()``, ``is_valid()``, ``sync()``, ` Coalesced groups ------------------ -Threads (64 threads on CDNA and 32 threads on RDNA) in a warp cannot execute different instructions simultaneously, so conditional branches are executed serially within the warp. When threads encounter a conditional branch, they can diverge, resulting in some threads being disabled, if they do not meet the condition to execute that branch. The active threads referred as coalesced, and coalesced group represents an active thread group within a warp. +Threads (64 threads on CDNA and 32 threads on RDNA) in a warp cannot execute different instructions simultaneously, so conditional branches are executed serially within the warp. When threads encounter a conditional branch, they can diverge, resulting in some threads being disabled if they do not meet the condition to execute that branch. The active threads are referred to as coalesced, and coalesced group represents an active thread group within a warp. .. note:: diff --git a/docs/understand/hardware_implementation.rst b/docs/understand/hardware_implementation.rst index 7038262812..e57f7d4505 100644 --- a/docs/understand/hardware_implementation.rst +++ b/docs/understand/hardware_implementation.rst @@ -45,12 +45,13 @@ The amount of warps that can reside concurrently on a CU, known as occupancy, is determined by the warp's resource usage of registers and shared memory. +.. _gcn_cu: + .. figure:: ../data/understand/hardware_implementation/compute_unit.svg :alt: Diagram depicting the general structure of a compute unit of an AMD GPU. - An AMD Graphics Core Next (GCN) CU. The CDNA and RDNA CUs are based on - variations of the GCN CU. + AMD Graphics Core Next (GCN) CU On AMD GCN GPUs the basic structure of a CU is: @@ -102,6 +103,8 @@ The scalar unit performs instructions that are uniform within a warp. It thereby improves efficiency and reduces the pressure on the vector ALUs and the vector register file. +.. _cdna3_cu: + CDNA architecture ================= @@ -121,6 +124,8 @@ multiply-accumulate operations for Block Diagram of a CDNA3 Compute Unit. +.. 
_rdna3_cu: + RDNA architecture ================= diff --git a/docs/understand/programming_model.rst b/docs/understand/programming_model.rst index 6c7015996f..64a92df470 100644 --- a/docs/understand/programming_model.rst +++ b/docs/understand/programming_model.rst @@ -7,67 +7,91 @@ .. _programming_model: ******************************************************************************* -HIP programming model +Introduction to HIP programming model ******************************************************************************* The HIP programming model makes it easy to map data-parallel C/C++ algorithms to massively parallel, wide single instruction, multiple data (SIMD) architectures, -such as GPUs. +such as GPUs. HIP supports many imperative languages, such as Python via PyHIP, +but this document focuses on the original C/C++ API of HIP. -While the model may be expressed in most imperative languages, (for example -Python via PyHIP) this document will focus on the original C/C++ API of HIP. - -A basic understanding of the underlying device architecture helps you +While GPUs may be capable of running applications written for CPUs if properly ported +and compiled, it would not be an efficient use of GPU resources. GPUs are different +from CPUs in fundamental ways, and should be used accordingly to achieve optimum +performance. A basic understanding of the underlying device architecture helps you make efficient use of HIP and general purpose graphics processing unit (GPGPU) -programming in general. +programming in general. The following topics introduce you to the key concepts of +GPU-based programming, and the HIP programming model. + +Getting into Hardware: CPU vs GPU +================================= + +CPUs and GPUs have been designed for different purposes. CPUs have been designed +to quickly execute a single thread, decreasing the time it takes for a single +operation, increasing the amount of sequential instructions that can be executed. +This includes fetching data, and reducing pipeline stalls where the ALU has to +wait for previous instructions to finish. + +On CPUs the goal is to quickly process operations. CPUs provide low latency processing for +serial instructions. On the other hand, GPUs have been designed to execute many similar commands, or threads, +in parallel, achieving higher throughput. Latency is the delay from when an operation +is started to when it returns, such as 2 ns, while throughput is the number of operations completed +in a period of time, such as ten thousand threads completed. -RDNA & CDNA architecture summary -================================ +For the GPU, the objective is to process as many operations in parallel, rather +than to finish a single instruction quickly. GPUs in general are made up of basic +building blocks called compute units (CUs), that execute the threads of a kernel. +As described in :ref:`hardware_implementation`, these CUs provide the necessary +resources for the threads: the Arithmetic Logical Units (ALUs), register files, +caches and shared memory for efficient communication between the threads. -GPUs in general are made up of basic building blocks called compute units (CUs), -that execute the threads of a kernel. These CUs provide the necessary resources -for the threads: the Arithmetic Logical Units (ALUs), register files, caches and -shared memory for efficient communication between the threads. 
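As a rough, illustrative calculation (the numbers are invented for the example): if a single operation takes 10 ns on a CPU core but 100 ns on a GPU, the CPU clearly wins on latency. If, however, the GPU keeps 10,000 such operations in flight at the same time, it completes about :math:`10^{4} / (100 \cdot 10^{-9}\,\text{s}) = 10^{11}` operations per second, while the CPU working through them one after another manages about :math:`1 / (10 \cdot 10^{-9}\,\text{s}) = 10^{8}` operations per second. The GPU is therefore designed to maximize throughput rather than the latency of any individual operation.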
+The following defines a few hardware differences between CPUs and GPUs: -This design allows for efficient execution of kernels while also being able to -scale from small GPUs embedded in APUs with few CUs up to GPUs designed for data -centers with hundreds of CUs. Figure :ref:`rdna3_cu` and :ref:`cdna3_cu` show -examples of such compute units. +* CPU: -For architecture details, check :ref:`hardware_implementation`. + - Optimized for sequential processing with a few powerful cores (4-64 typically) + - High clock speeds (3-5 GHz) + - One register file per thread. On modern CPUs you have at most 2 register files per core, called hyperthreading. + - One ALU executing the thread. -.. _rdna3_cu: + - Designed to quickly execute instructions of the same thread. + - Complex branch prediction. -.. figure:: ../data/understand/programming_model/rdna3_cu.png - :alt: Block diagram showing the structure of an RDNA3 Compute Unit. It - consists of four SIMD units, each including a vector and scalar register - file, with the corresponding scalar and vector ALUs. All four SIMDs - share a scalar and instruction cache, as well as the shared memory. Two - of the SIMD units each share an L0 cache. + - Large L1/L2 cache per core, shared by fewer threads (maximum of 2 when hyperthreading is available). + - A disadvantage is switching execution from one thread to another (or context switching) takes a considerable amount of time: the ALU pipeline needs to be emptied, the register file has to be written to memory to free the register for another thread. + +* GPU: - Block Diagram of an RDNA3 Compute Unit. + - Designed for parallel processing with many simpler cores (hundreds/thousands) + - Lower clock speeds (1-2 GHz) + - Streamlined control logic + - Small caches, more registers + - Register files are shared among threads. The number of threads that can be run in parallel depends on the registers needed per thread. + - Multiple ALUs execute a collection of threads having the same operations, also known as a wavefront or warp. This is called single-instruction, multiple threads (SIMT) operation as described in :ref:`programming_model_simt`. -.. _cdna3_cu: + - The collection of ALUs is called SIMD. SIMDs are an extension to the hardware architecture, that allows a `single instruction` to concurrently operate on `multiple data` inputs. + - For branching threads where conditional instructions lead to thread divergence, ALUs still processes the full wavefront, but the result for divergent threads is masked out. This leads to wasted ALU cycles, and should be a consideration in your programming. Keep instructions consistent, and leave conditionals out of threads. -.. figure:: ../data/understand/programming_model/cdna3_cu.png - :alt: Block diagram showing the structure of a CDNA3 compute unit. It includes - Shader Cores, the Matrix Core Unit, a Local Data Share used for sharing - memory between threads in a block, an L1 Cache and a Scheduler. The - Shader Cores represent the vector ALUs and the Matrix Core Unit the - matrix ALUs. The Local Data Share is used as the shared memory. + - The advantage for GPUs is that context switching is easy. All threads that run on a core/compute unit have their registers on the compute unit, so they don't need to be stored to global memory, and each cycle one instruction from any wavefront that resides on the compute unit can be issued. - Block Diagram of a CDNA3 Compute Unit. 
+When programming for a heterogeneous system, which incorporates CPUs and GPUs, you must +write your program to take advantage of the strengths of the available hardware. +Use the CPU for tasks that require complex logic with conditional branching, to reduce the +time to reach a decision. Use the GPU for parallel operations of the same instruction +across large datasets, with little branching, where the volume of operations is the key. + +.. _heterogeneous_programming: Heterogeneous Programming ========================= -The HIP programming model assumes two execution contexts. One is referred to as -*host* while compute kernels execute on a *device*. These contexts have -different capabilities, therefor slightly different rules apply. The *host* -execution is defined by the C++ abstract machine, while *device* execution -follows the :ref:`SIMT model` of HIP. These execution contexts in -code are signified by the ``__host__`` and ``__device__`` decorators. There are -a few key differences between the two: +The HIP programming model has two execution contexts. The main application starts on the CPU, or +the *host* processor, and compute kernels are launched on the *device* such as `Instinct +accelerators `_ or AMD GPUs. +The host execution is defined by the C++ abstract machine, while device execution +follows the :ref:`SIMT model` of HIP. These two execution contexts +are signified by the ``__host__`` and ``__global__`` (or ``__device__``) decorators +in HIP program code. There are a few key differences between the two contexts: * The C++ abstract machine assumes a unified memory address space, meaning that one can always access any given address in memory (assuming the absence of @@ -75,65 +99,123 @@ a few key differences between the two: from one means nothing in another. Moreover, not all address spaces are accessible from all contexts. - Looking at :ref:`rdna3_cu` and :ref:`cdna3_cu`, you can see that - every CU has an instance of storage backing the namespace ``__shared__``. - Even if the host were to have access to these regions of - memory, the performance benefits of the segmented memory subsystem are + Looking at the :ref:`gcn_cu` figure, you can see that every CU has an instance of storage + backing the namespace ``__shared__``. Even if the host were to have access to these + regions of memory, the performance benefits of the segmented memory subsystem are supported by the inability of asynchronous access from the host. -* Not all C++ language features map cleanly to typical device architectures, - some are very expensive (meaning slow) to implement on GPU devices, therefor - they are forbidden in device contexts to avoid users tapping into features - that unexpectedly decimate their program's performance. Offload devices targeted - by HIP aren't general purpose devices, at least not in the sense that a CPU is. - HIP focuses on data parallel computations and as such caters to throughput - optimized architectures, such as GPUs or accelerators derived from GPU - architectures. +* Not all C++ language features map cleanly to typical GPU device architectures. + Some C++ features have poor latency when implemented on GPU devices, therefore + they are forbidden in device contexts to avoid using features that unexpectedly + decimate the program's performance. Offload devices targeted by HIP aren't general + purpose devices, at least not in the sense that a CPU is. 
HIP focuses on data + parallel computations and as such caters to throughput optimized architectures, + such as GPUs or accelerators derived from GPU architectures. -* Asynchrony is at the forefront of the HIP API. Computations launched on the device +* Asynchronicity is at the forefront of the HIP API. Computations launched on the device execute asynchronously with respect to the host, and it is the user's responsibility to synchronize their data dispatch/fetch with computations on the device. .. note:: - HIP does perform implicit synchronization on occasions, more advanced than other - APIs such as OpenCL or SYCL, in which the responsibility of synchronization mostly - depends on the user. + HIP performs implicit synchronization on occasions, unlike some + APIs where the responsibility for synchronization is left to the user. + +Host programming +---------------- + +In heterogeneous programming, the CPU is available for processing operations but the host application has the additional task of managing data and computation exchanges between the CPU (host) and GPU (device). The host acts as the application manager, coordinating the overall workflow and directing operations to the appropriate context, handles data preparation and data transfers, and manages GPU tasks and synchronization. Here is a typical sequence of operations: + +1. Initialize the HIP runtime and select the GPU: As described in :ref:`initialization`, refers to identifying and selecting a target GPU, setting up a context to let the CPU interact with the GPU. +2. Data preparation: As discussed in :ref:`memory_management`, this includes allocating the required memory on the host and device, preparing input data and transferring it from the host to the device. The data is both transferred to the device, and passed as an input parameter when launching the kernel. +3. Configure and launch the kernel on the GPU: As described in :ref:`device_program`, define and load the kernel or kernels to be run, launch kernels using the triple chevron syntax or appropriate API call (for example ``hipLaunchKernelGGL``), and pass parameters as needed. On the GPU, kernels are run on streams, or a queue of operations. Within the same stream operations run in the order they were issued, but different streams are independent and can execute concurrently. In the HIP runtime, kernels run on the default stream when one is not specified, but specifying a stream for the kernel lets you increase concurrency in task scheduling and resource utilization, and launch and manage multiple kernels from the host program. +4. Synchronization: As described in :ref:`asynchronous_how-to`, kernel execution occurs in the context of device streams, specifically the default (`0`) stream. You can use streams and events to manage task dependencies, overlap computation with data transfers, and manage asynchronous processes to ensure proper sequencing of operations. Wait for events or streams to finish execution and transfer results from the GPU back to the host. +5. Error handling: As described in :ref:`error_handling`, you should catch and handle potential errors from API calls, kernel launches, or memory operations. For example, use ``hipGetErrorString`` to retrieve error messages. +6. Cleanup and resource management: Validate results, clean up GPU contexts and resources, and free allocated memory on the host and devices. 
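+
+The following is a minimal sketch of this host-side workflow. It is not taken
+from the HIP documentation sources: it reuses the ``AddKernel`` kernel shown in
+the SIMT section below, assumes a problem size ``N`` that divides evenly by the
+block size, and omits most per-call error checking for brevity.
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+   #include <iostream>
+   #include <vector>
+
+   // Same element-wise kernel as in the SIMT section below.
+   __global__ void AddKernel(float* a, const float* b)
+   {
+       int global_idx = threadIdx.x + blockIdx.x * blockDim.x;
+       a[global_idx] += b[global_idx];
+   }
+
+   int main()
+   {
+       constexpr int N = 1024; // assumed problem size, divisible by the block size
+       std::vector<float> host_a(N, 1.0f), host_b(N, 2.0f);
+
+       // 1. Select a GPU; the HIP runtime initializes implicitly on first use.
+       hipSetDevice(0);
+
+       // 2. Data preparation: allocate device memory and copy the inputs over.
+       float *dev_a = nullptr, *dev_b = nullptr;
+       hipMalloc(reinterpret_cast<void**>(&dev_a), N * sizeof(float));
+       hipMalloc(reinterpret_cast<void**>(&dev_b), N * sizeof(float));
+       hipMemcpy(dev_a, host_a.data(), N * sizeof(float), hipMemcpyHostToDevice);
+       hipMemcpy(dev_b, host_b.data(), N * sizeof(float), hipMemcpyHostToDevice);
+
+       // 3. Configure and launch the kernel on the default stream.
+       constexpr int threads_per_block = 256;
+       constexpr int number_of_blocks = N / threads_per_block;
+       AddKernel<<<number_of_blocks, threads_per_block>>>(dev_a, dev_b);
+
+       // 5. Error handling: catch launch configuration errors.
+       hipError_t err = hipGetLastError();
+       if (err != hipSuccess) {
+           std::cerr << "Kernel launch failed: " << hipGetErrorString(err) << "\n";
+       }
+
+       // 4. Synchronization: wait for the kernel, then copy the results back.
+       hipDeviceSynchronize();
+       hipMemcpy(host_a.data(), dev_a, N * sizeof(float), hipMemcpyDeviceToHost);
+
+       // 6. Cleanup: free device memory.
+       hipFree(dev_a);
+       hipFree(dev_b);
+       return 0;
+   }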
+ +This structure allows for efficient use of GPU resources and facilitates the acceleration of compute-intensive tasks while keeping the host CPU available for other tasks. + +.. _device_program: + +Device programming +------------------ + +The device or kernel program acts as workers on the GPU application, distributing operations to be handled quickly and efficiently. Launching a kernel in the host application starts the kernel program running on the GPU, defining the parallel operations to repeat the same instructions across many datasets. Understanding how the kernel works and the processes involved is essential to writing efficient GPU applications. Threads, blocks, and grids provide a hierarchical approach to parallel operations. Understanding the thread hierarchy is critical to distributing work across the available CUs, managing parallel operations, and optimizing memory access. The general flow of the kernel program looks like this: + +1. Thread Grouping: As described in :ref:`inherent_thread_model`, threads are organized into a hierarchy consisting of threads which are individual instances of parallel operations, blocks that group the threads together, and grids that group blocks into the kernel. Each thread runs an instance of the kernel in parallel with other threads in the block. +2. Indexing: The kernel computes the unique index for each thread to access the relevant data to be processed by the thread. +3. Data Fetch: Threads fetch input data from memory previously transferred from the host to the device. As described in :ref:`memory_hierarchy`, the hierarchy of threads is influenced by the memory subsystem of GPUs. The memory hierarchy includes local memory per-thread with very fast access, shared memory for the block of threads which also supports quick access, and larger amounts of global memory visible to the whole kernel,but accesses are expensive due to high latency. Understanding the memory model is a key concept for kernel programming. +4. Computation: Threads perform the required computations on the input data, and generate any needed output. Each thread of the kernel runs the same instruction simultaneously on the different datasets. This sometimes require multiple iterations when the number of operations exceeds the resources of the CU. +5. Synchronization: When needed, threads synchronize within their block to ensure correct results when working with shared memory. + +Kernels can be simple single instruction programs deployed across multiple threads in wavefronts, as described below and as demonstrated in the `Hello World tutorial `_ or :doc:`../tutorial/saxpy`. However, heterogeneous GPU applications can also become quite complex, managing hundreds, thousands, or hundreds of thousands of operations with repeated data transfers between host and device to support massive parallelization, using multiple streams to manage concurrent asynchronous operations, using rich libraries of functions optimized for GPU hardware as described in the `ROCm documentation `_. .. _programming_model_simt: Single instruction multiple threads (SIMT) ========================================== -The SIMT programming model behind the HIP device-side execution is a middle-ground -between SMT (Simultaneous Multi-Threading) programming known from multicore CPUs, -and SIMD (Single Instruction, Multiple Data) programming mostly known from exploiting -relevant instruction sets on CPUs (for example SSE/AVX/Neon). 
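+As a concrete illustration of the kernel-side steps listed in the device
+programming section above, the following sketch shows a block-wide summation.
+It is illustrative only and not taken from the documentation sources; it
+assumes a launch configuration of exactly 256 threads per block and an input
+size that is a multiple of the block size.
+
+.. code-block:: cpp
+
+   __global__ void BlockSumKernel(const float* input, float* block_sums)
+   {
+       __shared__ float cache[256];          // shared memory, one slot per thread
+
+       // 2. Indexing: compute this thread's global element index.
+       const int global_idx = threadIdx.x + blockIdx.x * blockDim.x;
+
+       // 3. Data fetch: stage one element per thread into shared memory.
+       cache[threadIdx.x] = input[global_idx];
+       __syncthreads();
+
+       // 4. Computation: tree reduction over the block's staged values.
+       for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
+           if (threadIdx.x < stride) {
+               cache[threadIdx.x] += cache[threadIdx.x + stride];
+           }
+           // 5. Synchronization: wait for all threads before the next step.
+           __syncthreads();
+       }
+
+       if (threadIdx.x == 0) {
+           block_sums[blockIdx.x] = cache[0];  // one partial sum per block
+       }
+   }
+
+Each block writes a single partial sum, so a second kernel launch or a short
+host-side loop is still needed to combine the per-block results.
+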
+The HIP kernel code, which is written as a series of scalar instructions for multiple +threads with different thread indices, gets mapped to the SIMD units of the GPUs. +Every single instruction, which is executed for every participating thread of a +kernel, gets mapped to the SIMD. -A HIP device compiler maps SIMT code written in HIP C++ to an inherently SIMD -architecture (like GPUs). This is done by scalarizing the entire kernel and issuing the scalar -instructions of multiple kernel instances (called threads) to each of the SIMD engine lanes, rather -than exploiting data parallelism within a single instance of a kernel and spreading -identical instructions over the available SIMD engines. +This is done by grouping threads into warps, which contain as many threads as there +are physical lanes in a SIMD, and issuing that instruction to the SIMD for every +warp of a kernel. Ideally the SIMD is always fully utilized, however if the number of threads +can't be evenly divided by the warpSize, then the unused lanes are masked out +from the corresponding SIMD execution. -Consider the following kernel: +A kernel follows the same C++ rules as the functions on the host, but it has a special ``__global__`` label to mark it for execution on the device, as shown in the following example: .. code-block:: cpp - __global__ void k(float4* a, const float4* b) + __global__ void AddKernel(float* a, const float* b) { - int tid = threadIdx.x; - int bid = blockIdx.x; - int dim = blockDim.x; + int global_idx = threadIdx.x + blockIdx.x * blockDim.x; + + a[global_idx] += b[global_idx]; + } - a[tid] += (tid + bid - dim) * b[tid]; +One of the first things you might notice is the usage of the special ``threadIdx``, +``blockIdx`` and ``blockDim`` variables. Unlike normal C++ host functions, a kernel +is not launched once, but as often as specified by the user. Each of these instances +is a separate thread, with its own values for ``threadIdx``, ``blockIdx`` and ``blockDim``. + +The kernel program is launched from the host application using a language extension +called the triple chevron syntax, which looks like the following: + +.. code-block:: cpp + + AddKernel<<>>(a, b); + +Inside the angle brackets you provide the following: + +* The number of blocks to launch, which defines the grid size (relating to blockDim). +* The number of threads in a block, which defines the block size (relating to blockIdx). +* The amount of shared memory to allocate by the host, not specified above. +* The device stream to enqueue the operation on, not specified above so the default stream is used. + +.. note:: + The kernel can also be launched through other methods, such as the ``hipLaunchKernel()`` function. + +Here the total number of threads launched for the ``AddKernel`` program is defined by +``number_of_blocks * threads_per_block``. You define these values when launching the +kernel program to address the problem to be solved with the available resources within +the system. In other words, the thread configuration is customized to the needs of the +operations and the available hardware. + +For comparison, the ``AddKernel`` program could be written in plain C++ as a ``FOR`` loop: + +.. code-block:: cpp + + for(int i = 0; i < (number_of_blocks * threads_per_block); ++i){ + a[i] += b[i]; } -The incoming four-vector of floating-point values ``b`` is multiplied by a -scalar and then added element-wise to the four-vector floating-point values of -``a``. 
On modern SIMD-capable architectures, the four-vector ops are expected to -compile to a single SIMD instruction. However, GPU execution of this kernel will -typically break down the vector elements into 4 separate threads for parallel execution, -as seen in the following figure: +In HIP, lanes of the SIMD architecture are fed by mapping threads of a SIMT +execution, one thread down each lane of an SIMD engine. Execution parallelism +usually isn't exploited from the width of the built-in vector types, but across +multiple threads via the thread ID constants ``threadIdx.x``, ``blockIdx.x``, etc. .. _simt: @@ -143,28 +225,26 @@ as seen in the following figure: inside and ellipsis between the arrows. The instructions represented in the arrows are, from top to bottom: ADD, DIV, FMA, FMA, FMA and FMA. - Instruction flow of the sample SIMT program. - -In HIP, lanes of the SIMD architecture are fed by mapping threads of a SIMT -execution, one thread down each lane of an SIMD engine. Execution parallelism -usually isn't exploited from the width of the built-in vector types, but across multiple threads via the thread ID constants ``threadIdx.x``, ``blockIdx.x``, etc. + Instruction flow of a sample SIMT program. .. _inherent_thread_model: -Inherent thread model -===================== - -The SIMT nature of HIP is captured by the ability to execute user-provided -device programs, expressed as single-source C/C++ functions or sources compiled -online/offline to binaries, in bulk. +Hierarchical thread model +--------------------- -All threads of a kernel are uniquely identified by a set of integral values, called thread IDs. -The set of integers identifying a thread relate to the hierarchy in which the threads execute. +As previously discussed, all threads of a kernel are uniquely identified by a set +of integral values called thread IDs. The hierarchy consists of three levels: thread, +blocks, and grids. -The thread hierarchy inherent to how AMD GPUs operate is depicted in the -following figure. +* Threads are single instances of kernel operations, running concurrently across warps +* Blocks group threads together and enable cooperation and shared memory +* Grids define the number of thread blocks for a single kernel launch +* Blocks, and grids can be defined in 3 dimensions (``x``, ``y``, ``z``) +* By default, the Y and Z dimensions are set to 1 -.. _inherent_thread_hierarchy: +The combined values represent the thread index, and relate to the sequence that the +threads execute. The thread hierarchy is integral to how AMD GPUs operate, and is +depicted in the following figure. .. figure:: ../data/understand/programming_model/thread_hierarchy.svg :alt: Diagram depicting nested rectangles of varying color. The outermost one @@ -175,10 +255,13 @@ following figure. Hierarchy of thread groups. +.. _wavefront: + Warp (or Wavefront) - The innermost grouping of threads is called a warp, or a wavefront in ISA terms. A warp - is the most tightly coupled groups of threads, both physically and logically. Threads - inside a warp are also called lanes, and the integral value identifying them is the lane ID. + The innermost grouping of threads is called a warp. A warp is the most tightly + coupled groups of threads, both physically and logically. Threads inside a warp + are executed in lockstep, with each thread executing the same instruction. Threads + in a warp are also called lanes, and the value identifying them is the lane ID. .. tip:: @@ -187,41 +270,53 @@ Warp (or Wavefront) calculated values to be. 
The size of a warp is architecture dependent and always fixed. For AMD GPUs - the wavefront is typically 64 threads, though sometimes 32 threads. Warps are + the warp is typically 64 threads, though sometimes 32 threads. Warps are signified by the set of communication primitives at their disposal, as discussed in :ref:`warp-cross-lane`. .. _inherent_thread_hierarchy_block: Block - The middle grouping is called a block or thread block. The defining feature - of a block is that all threads in a block will share an instance of memory - which they may use to share data or synchronize with one another. - - The size of a block is user-configurable but is limited by the queryable - capabilities of the executing hardware. The unique ID of the thread within a - block is 3-dimensional as provided by the API. When linearizing thread IDs - within a block, assume the "fast index" being dimension ``x``, followed by - the ``y`` and ``z`` dimensions. + The next level of the thread hierarchy is called a thread block, or block. The + defining feature of a block is that all threads in the block have shared memory + that they can use to share data or synchronize with one another, as described in + :ref:`memory_hierarchy`. + + The size of a block, or the block dimension, is the user-configurable number of + threads per block, but is limited by the queryable capabilities of the executing + hardware. The unique ID of the thread within a block can be 1, 2, or 3-dimensional + as provided by the HIP API. You can configure the thread block to best represent + the data associated with the kernel instruction set. + + .. note:: + When linearizing thread IDs within a block, assume the *fast index* is the ``x`` + dimension, followed by the ``y`` and ``z`` dimensions. .. _inherent_thread_hierarchy_grid: Grid - The outermost grouping is called a grid. A grid manifests as a single - dispatch of kernels for execution. The unique ID of each block within a grid - is 3-dimensional, as provided by the API and is queryable by every thread - within the block. + The top-most level of the thread hierarchy is a grid. A grid is the number of blocks + needed for a single launch of the kernel. The unique ID of each block within + a grid can be 1, 2, or 3-dimensional, as provided by the API and is queryable + by every thread within the block. + +The three-dimensional thread hierarchy available to a kernel program lends itself to solutions +that align closely to the computational problem. The following are some examples: + +* 1 dimensional: array processing, linear data structures, or sequential data transformation +* 2 dimensional: Image processing, matrix operations, 2 dimensional simulations +* 3 dimensions: Volume rendering, 3D scientific simulations, spatial algorithms Cooperative groups thread model ------------------------------- -The Cooperative groups API introduces new APIs to launch, group, subdivide, +The Cooperative groups API introduces new functions to launch, group, subdivide, synchronize and identify threads, as well as some predefined group-collective -algorithms, but most importantly a matching threading model to think in terms of. -It relaxes some restrictions of the :ref:`inherent_thread_model` imposed by the -strict 1:1 mapping of architectural details to the programming model. Cooperative -groups let you define your own set of thread groups which may fit your user-cases -better than the defaults defined by the hardware. +algorithms. 
Most importantly it offers a matching thread model to think of the +cooperative groups in terms of. It relaxes some restrictions of the :ref:`inherent_thread_model` +imposed by the strict 1:1 mapping of architectural details to the programming model. +Cooperative groups let you define your own set of thread groups which may better +fit your use-case than the defaults defined by the hardware. .. note:: The implicit groups defined by kernel launch parameters are still available @@ -229,14 +324,15 @@ better than the defaults defined by the hardware. For further information, see :doc:`Cooperative groups
`. +.. _memory_hierarchy: + Memory model ============ -The hierarchy of threads introduced by the :ref:`inherent_thread_model` is induced -by the memory subsystem of GPUs. The following figure summarizes the memory -namespaces and how they relate to the various levels of the threading model. +The thread structure of the :ref:`inherent_thread_model` is supported by the memory +subsystem of GPUs. The following figure summarizes the memory namespaces and how +they relate to the various levels of the threading model. -.. _memory_hierarchy: .. figure:: ../data/understand/programming_model/memory_hierarchy.svg :alt: Diagram depicting nested rectangles of varying color. The outermost one @@ -250,10 +346,11 @@ namespaces and how they relate to the various levels of the threading model. Local or per-thread memory Read-write storage only visible to the threads defining the given variables, - also called per-thread memory. The size of a block for a given kernel, and thereby - the number of concurrent warps, are limited by local memory usage. - This relates to an important aspect: occupancy. This is the default memory - namespace. + also called per-thread memory. This is the default memory namespace. + The size of the blocks for a given kernel, and thereby the number of concurrent + warps, are limited by local memory usage. This relates to the *occupancy* of the + CU as described in :doc:`Compute Units <./hardware_implementation>`, + an important concept in resource usage and performance optimization. Shared memory Read-write storage visible to all the threads in a given block. @@ -274,10 +371,60 @@ Global Surface A read-write version of texture memory. +Using different memory types +---------------------------- + +* Use global memory when: + + - You are transferring data from the host to the device + - You have large data sets, and latency isn't an issue + - You are sharing data between thread blocks + +* Use shared memory when: + + - The data is reused within a thread block + - Cross-thread communication is needed + - To reduce global memory bandwidth + +* Use local memory when: + + - The data is specific to a thread + - To store automatic variables for the thread + - To provide register pressure relief for the thread + +* Use constant memory when: + + - The data is read-only + - The same value is used across threads + - The data size is small + +Memory access patterns and best practices +----------------------------------------- + +While you should refer to the :ref:`memory_management`, the following are a few memory +access patterns and best practices: + +* Global memory: Coalescing reduces memory transactions. +* Shared memory: Avoiding bank conflicts is crucial. +* Texture memory: Spatial locality improves caching. +* Unified memory: Structured access minimizes page migration overhead. + +When a kernel accesses global memory, the memory transactions typically occur in chunks of 32, 64, or 128 bytes. If threads access memory in a coalesced manner, meaning consecutive threads read or write consecutive memory locations, the memory controller can merge these accesses into a single transaction. Coalesced access primarily applies to global memory, which is the largest but slowest type of memory on a GPU and coalesced access significantly improves performance by reducing memory latency and increasing bandwidth efficiency. + +To achieve coalesced memory access in HIP, ensure that memory addresses accessed by consecutive threads are aligned. 
Structure data for coalesced access by storing it in a contiguous manner so that thread[i] can access array[i], and not some random location. Avoid strided access patterns, for example array[i * stride] can lead to memory bank conflicts and inefficient access. If all the threads in a warp can access consecutive memory locations, memory access is fully coalesced. + +Shared memory is a small, fast memory region inside the CU. Unlike global memory, shared memory accesses do not require coalescing, but they can suffer from bank conflicts, which are another form of inefficient memory access. Shared memory is divided into multiple memory banks (usually 32 banks on modern GPUs). If multiple threads within a warp try to access different addresses that map to the same memory bank, accesses get serialized, leading to poor performance. To optimize shared memory usage ensure that consecutive threads access different memory banks. Use padding if necessary to avoid conflicts. + +Texture memory is read-only memory optimized for spatial locality and caching rather than coalescing. Texture memory is cached, unlike standard global memory, and it provides optimized access patterns for 2D and spatially local data. Accessing neighboring values results in cache hits, improving performance. Therefore, instead of worrying about coalescing, optimal memory access patterns involve ensuring that threads access spatially adjacent texture elements, and the memory layout aligns well with the 2D caching mechanism. + +Unified memory allows the CPU and GPU to share memory seamlessly, but performance depends on access patterns. Unified memory enables automatic page migration between CPU and GPU memory. However, if different threads access different pages, it can lead to expensive page migrations and slow throughput performance. Accessing unified memory in a structured, warp-friendly manner reduces unnecessary page transfers. Ensure threads access memory in a structured, consecutive manner, minimizing page faults. Prefetch data to the GPU before computation by using ``hipMemPrefetchAsync()``. In addition, using small batch transfers as described below, can reduce unexpected page migrations when using unified memory. + +Memory transfers between the host and the device can become a major bottleneck if not optimized. One method is to use small batch memory transfers where data is transferred in smaller chunks instead of a dealing with large datasets to avoid long blocking operations. Small batch transfers offer better PCIe bandwidth utilization over large data transfers. Small batch transfers offer performance improvement by offering reduced latency with small batches that run asynchronously using ``hipMemcpyAsync()`` as described in :ref:`asynchronous_how-to`, pipelining data transfers and kernel execution using separate streams. Finally, using pinned memory with small batch transfers enables faster DMA transfers without CPU involvement, greatly improving memory transfer performance. + Execution model =============== -HIP programs consist of two distinct scopes: +As previously discussed in :ref:`heterogeneous_programming`, HIP programs consist of two distinct scopes: * The host-side API running on the host processor. There are two APIs available: @@ -362,4 +509,3 @@ intended use-cases. compiler itself and not intended towards end-user code. Should you be writing a tool having to launch device code using HIP, consider using these over the alternatives. 
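+
+To make the data-transfer guidance above concrete, the following sketch
+pipelines small-batch transfers and kernel launches over two streams using
+pinned host memory. It is illustrative only and not taken from the
+documentation sources: the batch size, the two-stream split and the
+``ProcessBatch`` kernel are assumptions, ``total_elems`` is assumed to be a
+multiple of the batch size, and error checking is omitted.
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+   #include <cstring>
+
+   // Stand-in kernel: doubles each element of one batch.
+   __global__ void ProcessBatch(float* data, int batch_elems)
+   {
+       int idx = threadIdx.x + blockIdx.x * blockDim.x;
+       if (idx < batch_elems) {
+           data[idx] *= 2.0f;
+       }
+   }
+
+   void ProcessInBatches(const float* host_input, int total_elems)
+   {
+       constexpr int batch_elems = 1 << 20;   // assumed batch size
+       const int num_batches = total_elems / batch_elems;
+
+       // Pinned host memory allows DMA transfers without CPU involvement.
+       float* pinned = nullptr;
+       hipHostMalloc(reinterpret_cast<void**>(&pinned),
+                     total_elems * sizeof(float), hipHostMallocDefault);
+       std::memcpy(pinned, host_input, total_elems * sizeof(float));
+
+       float* dev = nullptr;
+       hipMalloc(reinterpret_cast<void**>(&dev), total_elems * sizeof(float));
+
+       hipStream_t streams[2];
+       hipStreamCreate(&streams[0]);
+       hipStreamCreate(&streams[1]);
+
+       for (int b = 0; b < num_batches; ++b) {
+           hipStream_t stream = streams[b % 2];
+           const size_t offset = static_cast<size_t>(b) * batch_elems;
+           // Copy one batch asynchronously, then process it on the same stream;
+           // work queued on the other stream can overlap with this copy.
+           hipMemcpyAsync(dev + offset, pinned + offset,
+                          batch_elems * sizeof(float), hipMemcpyHostToDevice, stream);
+           ProcessBatch<<<batch_elems / 256, 256, 0, stream>>>(dev + offset, batch_elems);
+       }
+
+       hipStreamSynchronize(streams[0]);
+       hipStreamSynchronize(streams[1]);
+
+       hipStreamDestroy(streams[0]);
+       hipStreamDestroy(streams[1]);
+       hipFree(dev);
+       hipHostFree(pinned);
+   }
+
+Queuing the copy and the kernel for a batch on the same stream preserves their
+ordering, while alternating between two streams lets the transfer for one batch
+overlap with the computation of another.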
- From 6c2960b3428e4c530495d5ab08593fc0d95bb0c2 Mon Sep 17 00:00:00 2001 From: Istvan Kiss Date: Tue, 11 Mar 2025 12:58:56 +0100 Subject: [PATCH 44/46] Update docs/understand/programming_model.rst Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com> --- .wordlist.txt | 3 +- .../cpu-gpu-comparison.drawio | 181 +++++++++++ .../programming_model/cpu-gpu-comparison.svg | 1 + .../programming_model/host-device-flow.drawio | 61 ++++ .../programming_model/host-device-flow.svg | 1 + .../programming_model/memory-access.drawio | 237 ++++++++++++++ .../programming_model/memory-access.svg | 1 + .../programming_model/multi-gpu.drawio | 64 ++++ .../programming_model/multi-gpu.svg | 1 + .../programming_model/simt-execution.drawio | 124 ++++++++ .../programming_model/simt-execution.svg | 1 + .../understand/programming_model/simt.drawio | 148 --------- .../understand/programming_model/simt.svg | 1 - .../programming_model/stream-workflow.drawio | 97 ++++++ .../programming_model/stream-workflow.svg | 1 + docs/index.md | 1 - docs/programming_guide.rst | 83 ----- docs/sphinx/_toc.yml.in | 2 - docs/understand/programming_model.rst | 298 ++++++++++-------- 19 files changed, 933 insertions(+), 373 deletions(-) create mode 100644 docs/data/understand/programming_model/cpu-gpu-comparison.drawio create mode 100644 docs/data/understand/programming_model/cpu-gpu-comparison.svg create mode 100644 docs/data/understand/programming_model/host-device-flow.drawio create mode 100644 docs/data/understand/programming_model/host-device-flow.svg create mode 100644 docs/data/understand/programming_model/memory-access.drawio create mode 100644 docs/data/understand/programming_model/memory-access.svg create mode 100644 docs/data/understand/programming_model/multi-gpu.drawio create mode 100644 docs/data/understand/programming_model/multi-gpu.svg create mode 100644 docs/data/understand/programming_model/simt-execution.drawio create mode 100644 docs/data/understand/programming_model/simt-execution.svg delete mode 100644 docs/data/understand/programming_model/simt.drawio delete mode 100644 docs/data/understand/programming_model/simt.svg create mode 100644 docs/data/understand/programming_model/stream-workflow.drawio create mode 100644 docs/data/understand/programming_model/stream-workflow.svg delete mode 100644 docs/programming_guide.rst diff --git a/.wordlist.txt b/.wordlist.txt index fab10155f6..7bc0f65fa0 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -9,6 +9,7 @@ AXPY asm Asynchronicity Asynchrony +asynchrony backtrace bfloat Bitcode @@ -93,7 +94,6 @@ iteratively Lapack latencies libc -libhipcxx libstdc lifecycle linearizing @@ -176,6 +176,7 @@ ULP ULPs unintuitive UMM +uncoalesced unmap unmapped unmapping diff --git a/docs/data/understand/programming_model/cpu-gpu-comparison.drawio b/docs/data/understand/programming_model/cpu-gpu-comparison.drawio new file mode 100644 index 0000000000..a7e851b3d5 --- /dev/null +++ b/docs/data/understand/programming_model/cpu-gpu-comparison.drawio @@ -0,0 +1,181 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/data/understand/programming_model/cpu-gpu-comparison.svg 
b/docs/data/understand/programming_model/cpu-gpu-comparison.svg new file mode 100644 index 0000000000..552290299f --- /dev/null +++ b/docs/data/understand/programming_model/cpu-gpu-comparison.svg @@ -0,0 +1 @@ +
[SVG text content: "CPU versus GPU Architecture" — a CPU panel with eight CPU Cores labeled "Large Complex Cores", "High Clock Speed (3-5 GHz)" and "Large Cache per Core", beside a GPU panel with a grid of CUs labeled "Many Simple Cores", "Lower Clock Speed (1-2 GHz)" and "Shared Memory across Cores"; SVG markup omitted]
\ No newline at end of file diff --git a/docs/data/understand/programming_model/host-device-flow.drawio b/docs/data/understand/programming_model/host-device-flow.drawio new file mode 100644 index 0000000000..2ee8c43ae9 --- /dev/null +++ b/docs/data/understand/programming_model/host-device-flow.drawio @@ -0,0 +1,61 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/data/understand/programming_model/host-device-flow.svg b/docs/data/understand/programming_model/host-device-flow.svg new file mode 100644 index 0000000000..02bce96c5d --- /dev/null +++ b/docs/data/understand/programming_model/host-device-flow.svg @@ -0,0 +1 @@ +
[SVG text content: "Host-Device Data Flow" — Host (CPU) and Device (GPU) connected by "1. Initialize", "2. Transfer Data", "3. Execute Kernel" and "4. Return Results"; SVG markup omitted]
\ No newline at end of file diff --git a/docs/data/understand/programming_model/memory-access.drawio b/docs/data/understand/programming_model/memory-access.drawio new file mode 100644 index 0000000000..3577772532 --- /dev/null +++ b/docs/data/understand/programming_model/memory-access.drawio @@ -0,0 +1,237 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/data/understand/programming_model/memory-access.svg b/docs/data/understand/programming_model/memory-access.svg new file mode 100644 index 0000000000..5f0dbd8aae --- /dev/null +++ b/docs/data/understand/programming_model/memory-access.svg @@ -0,0 +1 @@ +
[SVG text content: "Memory Access Patterns" — an "Uncoalesced Access" panel and a "Coalesced Access" panel, each mapping Threads 0 ... 63 to Memory 0 ... 63; SVG markup omitted]
\ No newline at end of file diff --git a/docs/data/understand/programming_model/multi-gpu.drawio b/docs/data/understand/programming_model/multi-gpu.drawio new file mode 100644 index 0000000000..17eca3c318 --- /dev/null +++ b/docs/data/understand/programming_model/multi-gpu.drawio @@ -0,0 +1,64 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/data/understand/programming_model/multi-gpu.svg b/docs/data/understand/programming_model/multi-gpu.svg new file mode 100644 index 0000000000..190f2593d2 --- /dev/null +++ b/docs/data/understand/programming_model/multi-gpu.svg @@ -0,0 +1 @@ +
[SVG text content: "Multi-GPU Workload Distribution" — a Host CPU dispatching 25% of the workload to each of GPU 0, GPU 1, GPU 2 and GPU 3; SVG markup omitted]
\ No newline at end of file diff --git a/docs/data/understand/programming_model/simt-execution.drawio b/docs/data/understand/programming_model/simt-execution.drawio new file mode 100644 index 0000000000..1e2652f51f --- /dev/null +++ b/docs/data/understand/programming_model/simt-execution.drawio @@ -0,0 +1,124 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/data/understand/programming_model/simt-execution.svg b/docs/data/understand/programming_model/simt-execution.svg new file mode 100644 index 0000000000..412b9265e7 --- /dev/null +++ b/docs/data/understand/programming_model/simt-execution.svg @@ -0,0 +1 @@ +
[SVG text content: "SIMT Execution Model" — the expression a[i] = b[i] + c[i] fanned out to Thread 0 (b[0]=5, c[0]=3, a[0]=8), Thread 1 (b[1]=2, c[1]=4, a[1]=6), Thread 2 (b[2]=7, c[2]=1, a[2]=8) and Thread 3 (b[3]=3, c[3]=5, a[3]=8); SVG markup omitted]
\ No newline at end of file diff --git a/docs/data/understand/programming_model/simt.drawio b/docs/data/understand/programming_model/simt.drawio deleted file mode 100644 index 4c5c5a3f26..0000000000 --- a/docs/data/understand/programming_model/simt.drawio +++ /dev/null @@ -1,148 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/data/understand/programming_model/simt.svg b/docs/data/understand/programming_model/simt.svg deleted file mode 100644 index c149ab88e4..0000000000 --- a/docs/data/understand/programming_model/simt.svg +++ /dev/null @@ -1 +0,0 @@ -
[SVG text content of the removed figure: two identical instruction-flow columns with the instructions ADD, DIV, FMA, FMA, FMA and FMA; SVG markup omitted]
\ No newline at end of file diff --git a/docs/data/understand/programming_model/stream-workflow.drawio b/docs/data/understand/programming_model/stream-workflow.drawio new file mode 100644 index 0000000000..616dd28d78 --- /dev/null +++ b/docs/data/understand/programming_model/stream-workflow.drawio @@ -0,0 +1,97 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/data/understand/programming_model/stream-workflow.svg b/docs/data/understand/programming_model/stream-workflow.svg new file mode 100644 index 0000000000..9648351cad --- /dev/null +++ b/docs/data/understand/programming_model/stream-workflow.svg @@ -0,0 +1 @@ +
[SVG text content: "Stream and Event Workflow" — Stream 1, Stream 2 and Stream 3 timelines annotated with Operation and Event markers; SVG markup omitted]
\ No newline at end of file diff --git a/docs/index.md b/docs/index.md index 23f352e306..9be67a91d3 100644 --- a/docs/index.md +++ b/docs/index.md @@ -22,7 +22,6 @@ The HIP documentation is organized into the following categories: :::{grid-item-card} Programming guide -* [Introduction](./programming_guide) * {doc}`./understand/programming_model` * {doc}`./understand/hardware_implementation` * {doc}`./understand/compilers` diff --git a/docs/programming_guide.rst b/docs/programming_guide.rst deleted file mode 100644 index 7444408866..0000000000 --- a/docs/programming_guide.rst +++ /dev/null @@ -1,83 +0,0 @@ -.. meta:: - :description: HIP programming guide introduction - :keywords: HIP programming guide introduction, HIP programming guide - -.. _hip-programming-guide: - -******************************************************************************** -HIP programming guide introduction -******************************************************************************** - -This topic provides key HIP programming concepts and links to more detailed -information. - -Write GPU Kernels for Parallel Execution -================================================================================ - -To make the most of the parallelism inherent to GPUs, a thorough understanding -of the :ref:`programming model ` is helpful. The HIP -programming model is designed to make it easy to map data-parallel algorithms to -architecture of the GPUs. HIP employs the SIMT-model (Single Instruction -Multiple Threads) with a multi-layered thread hierarchy for efficient execution. - -Understand the Target Architecture (CPU and GPU) -================================================================================ - -The :ref:`hardware implementation ` topic outlines the -GPUs supported by HIP. In general, GPUs are made up of Compute Units that excel -at executing parallelizable, computationally intensive workloads without complex -control-flow. - -Increase parallelism on multiple level -================================================================================ - -To maximize performance and keep all system components fully utilized, the -application should expose and efficiently manage as much parallelism as possible. -:ref:`Parallel execution ` can be achieved at the -application, device, and multiprocessor levels. - -The application’s host and device operations can achieve parallel execution -through asynchronous calls, streams, or HIP graphs. On the device level, -multiple kernels can execute concurrently when resources are available, and at -the multiprocessor level, developers can overlap data transfers with -computations to further optimize performance. - -Memory management -================================================================================ - -GPUs generally have their own distinct memory, also called :ref:`device -memory `, separate from the :ref:`host memory `. -Device memory needs to be managed separately from the host memory. This includes -allocating the memory and transfering it between the host and the device. These -operations can be performance critical, so it's important to know how to use -them effectively. For more information, see :ref:`Memory management `. - -Synchronize CPU and GPU Workloads -================================================================================ - -Tasks on the host and devices run asynchronously, so proper synchronization is -needed when dependencies between those tasks exist. 
The asynchronous execution -of tasks is useful for fully utilizing the available resources. Even when only a -single device is available, memory transfers and the execution of tasks can be -overlapped with asynchronous execution. - -Error Handling -================================================================================ - -All functions in the HIP runtime API return an error value of type -:cpp:enum:`hipError_t` that can be used to verify whether the function was -successfully executed. It's important to confirm these returned values, in order -to catch and handle those errors, if possible. An exception is kernel launches, -which don't return any value. These errors can be caught with specific functions -like :cpp:func:`hipGetLastError()`. - -For more information, see :ref:`error_handling` . - -Multi-GPU and Load Balancing -================================================================================ - -Large-scale applications that need more compute power can use multiple GPUs in -the system. This requires distributing workloads across multiple GPUs to balance -the load to prevent GPUs from being overutilized while others are idle. - -For more information, see :ref:`multi-device` . \ No newline at end of file diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in index 2f08ffcd5a..dacc58d884 100644 --- a/docs/sphinx/_toc.yml.in +++ b/docs/sphinx/_toc.yml.in @@ -24,8 +24,6 @@ subtrees: - caption: Programming guide entries: - - file: programming_guide - title: Introduction - file: understand/programming_model - file: understand/hardware_implementation - file: understand/compilers diff --git a/docs/understand/programming_model.rst b/docs/understand/programming_model.rst index 64a92df470..3cac7e374a 100644 --- a/docs/understand/programming_model.rst +++ b/docs/understand/programming_model.rst @@ -7,36 +7,41 @@ .. _programming_model: ******************************************************************************* -Introduction to HIP programming model +Introduction to the HIP programming model ******************************************************************************* -The HIP programming model makes it easy to map data-parallel C/C++ algorithms to -massively parallel, wide single instruction, multiple data (SIMD) architectures, -such as GPUs. HIP supports many imperative languages, such as Python via PyHIP, -but this document focuses on the original C/C++ API of HIP. +The HIP programming model enables mapping data-parallel C/C++ algorithms to massively +parallel SIMD (Single Instruction, Multiple Data) architectures like GPUs. HIP +supports many imperative languages, such as Python via PyHIP, but this document +focuses on the original C/C++ API of HIP. While GPUs may be capable of running applications written for CPUs if properly ported -and compiled, it would not be an efficient use of GPU resources. GPUs are different -from CPUs in fundamental ways, and should be used accordingly to achieve optimum +and compiled, it would not be an efficient use of GPU resources. GPUs fundamentally differ +from CPUs and should be used accordingly to achieve optimum performance. A basic understanding of the underlying device architecture helps you make efficient use of HIP and general purpose graphics processing unit (GPGPU) programming in general. The following topics introduce you to the key concepts of -GPU-based programming, and the HIP programming model. +GPU-based programming and the HIP programming model. 
-Getting into Hardware: CPU vs GPU -================================= +Hardware differences: CPU vs GPU +================================ -CPUs and GPUs have been designed for different purposes. CPUs have been designed -to quickly execute a single thread, decreasing the time it takes for a single -operation, increasing the amount of sequential instructions that can be executed. -This includes fetching data, and reducing pipeline stalls where the ALU has to -wait for previous instructions to finish. +CPUs and GPUs have been designed for different purposes. CPUs quickly execute a single thread, decreasing the time for a single operation while increasing the number of sequential instructions that can be executed. This includes fetching data and reducing pipeline stalls where the ALU has to wait for previous instructions to finish. -On CPUs the goal is to quickly process operations. CPUs provide low latency processing for +.. figure:: ../data/understand/programming_model/cpu-gpu-comparison.svg + :alt: Diagram depicting the differences between CPU and GPU hardware. + The CPU block shows four large processing cores, lists Large Cache per + Core, and High Clock Speed of 3 to 5 gigahertz. The GPU block shows 42 + smaller processing cores, lists Shared Memory across Cores, and Lower + Clock Speeds of 1 to 2 gigahertz. + + Differences in CPUs and GPUs + +With CPUs, the goal is to quickly process operations. CPUs provide low-latency processing for serial instructions. On the other hand, GPUs have been designed to execute many similar commands, or threads, -in parallel, achieving higher throughput. Latency is the delay from when an operation -is started to when it returns, such as 2 ns, while throughput is the number of operations completed -in a period of time, such as ten thousand threads completed. +in parallel, achieving higher throughput. Latency is the time between starting an +operation and receiving its result, such as 2 ns, while throughput is the rate of +completed operations, for example, operations per second. For the GPU, the objective is to process as many operations in parallel, rather than to finish a single instruction quickly. GPUs in general are made up of basic @@ -45,7 +50,7 @@ As described in :ref:`hardware_implementation`, these CUs provide the necessary resources for the threads: the Arithmetic Logical Units (ALUs), register files, caches and shared memory for efficient communication between the threads. -The following defines a few hardware differences between CPUs and GPUs: +The following describes a few hardware differences between CPUs and GPUs: * CPU: @@ -69,8 +74,8 @@ The following defines a few hardware differences between CPUs and GPUs: - Register files are shared among threads. The number of threads that can be run in parallel depends on the registers needed per thread. - Multiple ALUs execute a collection of threads having the same operations, also known as a wavefront or warp. This is called single-instruction, multiple threads (SIMT) operation as described in :ref:`programming_model_simt`. - - The collection of ALUs is called SIMD. SIMDs are an extension to the hardware architecture, that allows a `single instruction` to concurrently operate on `multiple data` inputs. - - For branching threads where conditional instructions lead to thread divergence, ALUs still processes the full wavefront, but the result for divergent threads is masked out. This leads to wasted ALU cycles, and should be a consideration in your programming. 
Keep instructions consistent, and leave conditionals out of threads. + - The collection of ALUs is called SIMD. SIMDs are an extension to the hardware architecture that allows a `single instruction` to concurrently operate on `multiple data` inputs. + - For branching threads where conditional instructions lead to thread divergence, ALUs still process the full wavefront, but the result for divergent threads is masked out. This leads to wasted ALU cycles and should be a consideration in your programming. Keep instructions consistent and leave conditionals out of threads. - The advantage for GPUs is that context switching is easy. All threads that run on a core/compute unit have their registers on the compute unit, so they don't need to be stored to global memory, and each cycle one instruction from any wavefront that resides on the compute unit can be issued. @@ -82,7 +87,7 @@ across large datasets, with little branching, where the volume of operations is .. _heterogeneous_programming: -Heterogeneous Programming +Heterogeneous programming ========================= The HIP programming model has two execution contexts. The main application starts on the CPU, or @@ -127,13 +132,21 @@ In heterogeneous programming, the CPU is available for processing operations but 1. Initialize the HIP runtime and select the GPU: As described in :ref:`initialization`, refers to identifying and selecting a target GPU, setting up a context to let the CPU interact with the GPU. 2. Data preparation: As discussed in :ref:`memory_management`, this includes allocating the required memory on the host and device, preparing input data and transferring it from the host to the device. The data is both transferred to the device, and passed as an input parameter when launching the kernel. -3. Configure and launch the kernel on the GPU: As described in :ref:`device_program`, define and load the kernel or kernels to be run, launch kernels using the triple chevron syntax or appropriate API call (for example ``hipLaunchKernelGGL``), and pass parameters as needed. On the GPU, kernels are run on streams, or a queue of operations. Within the same stream operations run in the order they were issued, but different streams are independent and can execute concurrently. In the HIP runtime, kernels run on the default stream when one is not specified, but specifying a stream for the kernel lets you increase concurrency in task scheduling and resource utilization, and launch and manage multiple kernels from the host program. +3. Configure and launch the kernel on the GPU: As described in :ref:`device_program`, this defines kernel configurations and arguments, launches kernel to run on the GPU device using the triple chevron syntax or appropriate API call (for example ``hipLaunchKernelGGL``). On the GPU, multiple kernels can run on streams, with a queue of operations. Within the same stream, operations run in the order they were issued, but on multiple streams operations are independent and can execute concurrently. In the HIP runtime, kernels run on the default stream when one is not specified, but specifying a stream for the kernel lets you increase concurrency in task scheduling and resource utilization, and launch and manage multiple kernels from the host program. 4. Synchronization: As described in :ref:`asynchronous_how-to`, kernel execution occurs in the context of device streams, specifically the default (`0`) stream. 
You can use streams and events to manage task dependencies, overlap computation with data transfers, and manage asynchronous processes to ensure proper sequencing of operations. Wait for events or streams to finish execution and transfer results from the GPU back to the host. 5. Error handling: As described in :ref:`error_handling`, you should catch and handle potential errors from API calls, kernel launches, or memory operations. For example, use ``hipGetErrorString`` to retrieve error messages. 6. Cleanup and resource management: Validate results, clean up GPU contexts and resources, and free allocated memory on the host and devices. This structure allows for efficient use of GPU resources and facilitates the acceleration of compute-intensive tasks while keeping the host CPU available for other tasks. +.. figure:: ../data/understand/programming_model/host-device-flow.svg + :alt: Diagram depicting a host CPU and device GPU rectangles of varying color. + There are arrows pointing between the rectangles showing from the Host + to the Device the initialization, data transfer, and Kernel execution + steps, and from the Device back to the Host the returning results. + + Interaction of Host and Device in a GPU application + .. _device_program: Device programming @@ -141,30 +154,40 @@ Device programming The device or kernel program acts as workers on the GPU application, distributing operations to be handled quickly and efficiently. Launching a kernel in the host application starts the kernel program running on the GPU, defining the parallel operations to repeat the same instructions across many datasets. Understanding how the kernel works and the processes involved is essential to writing efficient GPU applications. Threads, blocks, and grids provide a hierarchical approach to parallel operations. Understanding the thread hierarchy is critical to distributing work across the available CUs, managing parallel operations, and optimizing memory access. The general flow of the kernel program looks like this: -1. Thread Grouping: As described in :ref:`inherent_thread_model`, threads are organized into a hierarchy consisting of threads which are individual instances of parallel operations, blocks that group the threads together, and grids that group blocks into the kernel. Each thread runs an instance of the kernel in parallel with other threads in the block. +1. Thread Grouping: As described in :ref:`inherent_thread_model`, threads are organized into a hierarchy consisting of threads, which are individual instances of parallel operations, blocks that group the threads, and grids that group blocks into the kernel. Each thread runs an instance of the kernel in parallel with other threads in the block. 2. Indexing: The kernel computes the unique index for each thread to access the relevant data to be processed by the thread. 3. Data Fetch: Threads fetch input data from memory previously transferred from the host to the device. As described in :ref:`memory_hierarchy`, the hierarchy of threads is influenced by the memory subsystem of GPUs. The memory hierarchy includes local memory per-thread with very fast access, shared memory for the block of threads which also supports quick access, and larger amounts of global memory visible to the whole kernel,but accesses are expensive due to high latency. Understanding the memory model is a key concept for kernel programming. 4. Computation: Threads perform the required computations on the input data, and generate any needed output. 
Each thread of the kernel runs the same instruction simultaneously on the different datasets. This sometimes requires multiple iterations when the number of operations exceeds the resources of the CU. 5. Synchronization: When needed, threads synchronize within their block to ensure correct results when working with shared memory. -Kernels can be simple single instruction programs deployed across multiple threads in wavefronts, as described below and as demonstrated in the `Hello World tutorial `_ or :doc:`../tutorial/saxpy`. However, heterogeneous GPU applications can also become quite complex, managing hundreds, thousands, or hundreds of thousands of operations with repeated data transfers between host and device to support massive parallelization, using multiple streams to manage concurrent asynchronous operations, using rich libraries of functions optimized for GPU hardware as described in the `ROCm documentation `_. +Kernels are parallel programs that execute the same instruction set across multiple threads, organized in wavefronts, as described below and as demonstrated in the `Hello World tutorial `_ or :doc:`../tutorial/saxpy`. However, heterogeneous GPU applications can also become quite complex, managing hundreds, thousands, or hundreds of thousands of operations with repeated data transfers between host and device to support massive parallelization, using multiple streams to manage concurrent asynchronous operations, using rich libraries of functions optimized for GPU hardware as described in the `ROCm documentation `_. .. _programming_model_simt: Single instruction multiple threads (SIMT) ========================================== -The HIP kernel code, which is written as a series of scalar instructions for multiple +The HIP kernel code, written as a series of scalar instructions for multiple threads with different thread indices, gets mapped to the SIMD units of the GPUs. Every single instruction, which is executed for every participating thread of a kernel, gets mapped to the SIMD. This is done by grouping threads into warps, which contain as many threads as there are physical lanes in a SIMD, and issuing that instruction to the SIMD for every -warp of a kernel. Ideally the SIMD is always fully utilized, however if the number of threads +warp of a kernel. Ideally, the SIMD is always fully utilized. However, if the number of threads can't be evenly divided by the warpSize, then the unused lanes are masked out from the corresponding SIMD execution. +.. _simt: + +.. figure:: ../data/understand/programming_model/simt-execution.svg + :alt: Diagram depicting the SIMT execution model. There is a red rectangle + which contains the expression a[i] = b[i] + c[i], and below that four + arrows that point to Thread 0,1,2, and 3. Each thread contains different + values for b, c, and a, showing the parallel operations of this equation. + + Instruction flow of a sample SIMT program + A kernel follows the same C++ rules as the functions on the host, but it has a special ``__global__`` label to mark it for execution on the device, as shown in the following example: .. code-block:: cpp @@ -188,7 +211,7 @@ called the triple chevron syntax, which looks like the following: AddKernel<<<number_of_blocks, threads_per_block>>>(a, b); -Inside the angle brackets you provide the following: +Inside the angle brackets, provide the following: * The number of blocks to launch, which defines the grid size (relating to gridDim). * The number of threads in a block, which defines the block size (relating to blockDim).
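Putting these pieces together, the following is a minimal sketch of a complete host program built around this launch pattern. The kernel body, the vector-add signature ``AddKernel(const float*, const float*, float*, int)``, and the grid and block values are illustrative assumptions rather than part of the original example, and checks of the API return codes are omitted for brevity.

.. code-block:: cpp

   #include <hip/hip_runtime.h>

   #include <cstdio>
   #include <vector>

   // Illustrative element-wise addition kernel: each thread handles one index.
   __global__ void AddKernel(const float* a, const float* b, float* c, int n)
   {
       const int i = blockIdx.x * blockDim.x + threadIdx.x; // unique thread index
       if (i < n) // guard threads past the end of the data
       {
           c[i] = a[i] + b[i];
       }
   }

   int main()
   {
       constexpr int n = 1 << 20;
       std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

       // Allocate device memory and copy the inputs from the host.
       float *d_a, *d_b, *d_c;
       hipMalloc(&d_a, n * sizeof(float));
       hipMalloc(&d_b, n * sizeof(float));
       hipMalloc(&d_c, n * sizeof(float));
       hipMemcpy(d_a, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
       hipMemcpy(d_b, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

       // Launch enough blocks of 256 threads to cover all n elements.
       constexpr int threads_per_block = 256;
       const int number_of_blocks = (n + threads_per_block - 1) / threads_per_block;
       AddKernel<<<number_of_blocks, threads_per_block>>>(d_a, d_b, d_c, n);

       // A copy on the default stream waits for the kernel to finish first.
       hipMemcpy(c.data(), d_c, n * sizeof(float), hipMemcpyDeviceToHost);
       std::printf("c[0] = %f\n", c[0]); // expect 3.0

       hipFree(d_a);
       hipFree(d_b);
       hipFree(d_c);
       return 0;
   }

Each thread derives its global index from ``blockIdx``, ``blockDim``, and ``threadIdx``, which is the indexing step described earlier, and the guard against ``n`` handles the case where the total thread count is not an exact multiple of the data size.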
@@ -198,7 +221,7 @@ Inside the angle brackets you provide the following: .. note:: The kernel can also be launched through other methods, such as the ``hipLaunchKernel()`` function. -Here the total number of threads launched for the ``AddKernel`` program is defined by +Here, the total number of threads launched for the ``AddKernel`` program is defined by ``number_of_blocks * threads_per_block``. You define these values when launching the kernel program to address the problem to be solved with the available resources within the system. In other words, the thread configuration is customized to the needs of the @@ -217,16 +240,6 @@ execution, one thread down each lane of an SIMD engine. Execution parallelism usually isn't exploited from the width of the built-in vector types, but across multiple threads via the thread ID constants ``threadIdx.x``, ``blockIdx.x``, etc. -.. _simt: - -.. figure:: ../data/understand/programming_model/simt.svg - :alt: Image representing the instruction flow of a SIMT program. Two identical - arrows pointing downward with blocks representing the instructions - inside and ellipsis between the arrows. The instructions represented in - the arrows are, from top to bottom: ADD, DIV, FMA, FMA, FMA and FMA. - - Instruction flow of a sample SIMT program. - .. _inherent_thread_model: Hierarchical thread model @@ -239,7 +252,7 @@ blocks, and grids. * Threads are single instances of kernel operations, running concurrently across warps * Blocks group threads together and enable cooperation and shared memory * Grids define the number of thread blocks for a single kernel launch -* Blocks, and grids can be defined in 3 dimensions (``x``, ``y``, ``z``) +* Blocks and grids can be defined in 3 dimensions (``x``, ``y``, ``z``) * By default, the Y and Z dimensions are set to 1 The combined values represent the thread index, and relate to the sequence that the @@ -303,20 +316,19 @@ Grid The three-dimensional thread hierarchy available to a kernel program lends itself to solutions that align closely to the computational problem. The following are some examples: -* 1 dimensional: array processing, linear data structures, or sequential data transformation -* 2 dimensional: Image processing, matrix operations, 2 dimensional simulations -* 3 dimensions: Volume rendering, 3D scientific simulations, spatial algorithms +* 1-dimensional: array processing, linear data structures, or sequential data transformation +* 2-dimensional: Image processing, matrix operations, 2 dimensional simulations +* 3-dimensional: Volume rendering, 3D scientific simulations, spatial algorithms Cooperative groups thread model ------------------------------- The Cooperative groups API introduces new functions to launch, group, subdivide, synchronize and identify threads, as well as some predefined group-collective -algorithms. Most importantly it offers a matching thread model to think of the -cooperative groups in terms of. It relaxes some restrictions of the :ref:`inherent_thread_model` -imposed by the strict 1:1 mapping of architectural details to the programming model. -Cooperative groups let you define your own set of thread groups which may better -fit your use-case than the defaults defined by the hardware. +algorithms. Cooperative groups let you define your own set of thread groups which +may fit your use-cases better than those defined by the hardware. It relaxes some +restrictions of the :ref:`inherent_thread_model` imposed by the strict 1:1 mapping +of architectural details to the programming model. .. 
note:: The implicit groups defined by kernel launch parameters are still available @@ -329,10 +341,12 @@ For further information, see :doc:`Cooperative groups
`, an important concept in resource usage and performance optimization. + Use local memory when the data is specific to a thread, to store variables generated + by the thread, or to provide register pressure relief for the thread. + Shared memory - Read-write storage visible to all the threads in a given block. + Read-write storage visible to all the threads in a given block. Use shared memory + when the data is reused within a thread block, when cross-thread communication + is needed, or to minimize global memory transactions by using device memory + whenever possible. Global Read-write storage visible to all threads in a given grid. There are specialized versions of global memory with different usage semantics which - are typically backed by the same hardware storing global. + are typically backed by the same hardware storing global. + + Use global memory when you have large datasets, are transferring memory between + the host and the device, and when you are sharing data between thread blocks. Constant Read-only storage visible to all threads in a given grid. It is a limited - segment of global with queryable size. + segment of global with queryable size. Use constant memory for read-only data + that is shared across multiple threads, and that has a small data size. Texture Read-only storage visible to all threads in a given grid and accessible @@ -371,104 +395,86 @@ Global Surface A read-write version of texture memory. -Using different memory types ----------------------------- - -* Use global memory when: - - - You are transferring data from the host to the device - - You have large data sets, and latency isn't an issue - - You are sharing data between thread blocks +Memory optimizations and best practices +--------------------------------------- -* Use shared memory when: +.. figure:: ../data/understand/programming_model/memory-access.svg + :alt: Diagram depicting an example memory access pattern for coalesced memory. + The diagram has uncoalesced access on the left side, with consecutive + threads accessing memory in a random pattern. With coalesced access on the + right showing consecutive threads accessing consecutive memory addresses. - - The data is reused within a thread block - - Cross-thread communication is needed - - To reduce global memory bandwidth + Coalesced memory accesses -* Use local memory when: +The following are a few memory access patterns and best practices to improve performance. You can find additional information in :ref:`memory_management` and :doc:`../how-to/performance_guidelines`. - - The data is specific to a thread - - To store automatic variables for the thread - - To provide register pressure relief for the thread +* **Global memory**: Coalescing reduces the number of memory transactions. -* Use constant memory when: + Coalesced memory access in HIP refers to the optimization of memory transactions to maximize throughput when accessing global memory. When a kernel accesses global memory, the memory transactions typically occur in chunks of 32, 64, or 128 bytes, which must be naturally aligned. Coalescing memory accesses means aligning and organizing these accesses so that multiple threads in a warp can combine their memory requests into the fewest possible transactions. If threads access memory in a coalesced manner, meaning consecutive threads read or write consecutive memory locations, the memory controller can merge these accesses into a single transaction. 
This is crucial because global memory bandwidth is relatively low compared to on-chip bandwidths, and non-optimal memory accesses can significantly impact performance. If all the threads in a warp can access consecutive memory locations, memory access is fully coalesced. - - The data is read-only - - The same value is used across threads - - The data size is small + To achieve coalesced memory access in HIP, you should: -Memory access patterns and best practices ------------------------------------------ + 1. *Align Data*: Use data types that are naturally aligned and ensure that structures and arrays are aligned properly. + 2. *Optimize Access Patterns*: Arrange memory accesses so that consecutive threads in a warp access consecutive memory locations. For example, if threads access a 2D array, the array and thread block widths should be multiples of the warp size. + 3. *Avoid strided access*: For example array[i * stride] can lead to memory bank conflicts and inefficient access. + 4. *Pad Data*: If necessary, pad data structures to ensure alignment and coalescing. -While you should refer to the :ref:`memory_management`, the following are a few memory -access patterns and best practices: +* **Shared memory**: Avoiding bank conflicts reduces the serialization of memory transactions. -* Global memory: Coalescing reduces memory transactions. -* Shared memory: Avoiding bank conflicts is crucial. -* Texture memory: Spatial locality improves caching. -* Unified memory: Structured access minimizes page migration overhead. + Shared memory is a small, fast memory region inside the CU. Unlike global memory, shared memory accesses do not require coalescing, but they can suffer from bank conflicts, which are another form of inefficient memory access. Shared memory is divided into multiple memory banks (usually 32 banks on modern GPUs). If multiple threads within a warp try to access different addresses that map to the same memory bank, accesses get serialized, leading to poor performance. To optimize shared memory usage, ensure that consecutive threads access different memory banks. Use padding if necessary to avoid conflicts. -When a kernel accesses global memory, the memory transactions typically occur in chunks of 32, 64, or 128 bytes. If threads access memory in a coalesced manner, meaning consecutive threads read or write consecutive memory locations, the memory controller can merge these accesses into a single transaction. Coalesced access primarily applies to global memory, which is the largest but slowest type of memory on a GPU and coalesced access significantly improves performance by reducing memory latency and increasing bandwidth efficiency. +* **Texture memory**: Spatial locality improves caching performance. -To achieve coalesced memory access in HIP, ensure that memory addresses accessed by consecutive threads are aligned. Structure data for coalesced access by storing it in a contiguous manner so that thread[i] can access array[i], and not some random location. Avoid strided access patterns, for example array[i * stride] can lead to memory bank conflicts and inefficient access. If all the threads in a warp can access consecutive memory locations, memory access is fully coalesced. + Texture memory is read-only memory optimized for spatial locality and caching rather than coalescing. Texture memory is cached, unlike standard global memory, and it provides optimized access patterns for 2D and spatially local data. Accessing neighboring values results in cache hits, improving performance. 
Therefore, instead of worrying about coalescing, optimal memory access patterns involve ensuring that threads access spatially adjacent texture elements, and the memory layout aligns well with the 2D caching mechanism. -Shared memory is a small, fast memory region inside the CU. Unlike global memory, shared memory accesses do not require coalescing, but they can suffer from bank conflicts, which are another form of inefficient memory access. Shared memory is divided into multiple memory banks (usually 32 banks on modern GPUs). If multiple threads within a warp try to access different addresses that map to the same memory bank, accesses get serialized, leading to poor performance. To optimize shared memory usage ensure that consecutive threads access different memory banks. Use padding if necessary to avoid conflicts. +* **Unified memory**: Structured access reduces the overhead of page migrations. -Texture memory is read-only memory optimized for spatial locality and caching rather than coalescing. Texture memory is cached, unlike standard global memory, and it provides optimized access patterns for 2D and spatially local data. Accessing neighboring values results in cache hits, improving performance. Therefore, instead of worrying about coalescing, optimal memory access patterns involve ensuring that threads access spatially adjacent texture elements, and the memory layout aligns well with the 2D caching mechanism. + Unified memory allows the CPU and GPU to share memory seamlessly, but performance depends on access patterns. Unified memory enables automatic page migration between CPU and GPU memory. However, if different threads access different pages, it can lead to expensive page migrations and slow throughput performance. Accessing unified memory in a structured, warp-friendly manner reduces unnecessary page transfers. Ensure threads access memory in a structured, consecutive manner, minimizing page faults. Prefetch data to the GPU before computation by using ``hipMemPrefetchAsync()``. In addition, using small batch transfers as described below, can reduce unexpected page migrations when using unified memory. -Unified memory allows the CPU and GPU to share memory seamlessly, but performance depends on access patterns. Unified memory enables automatic page migration between CPU and GPU memory. However, if different threads access different pages, it can lead to expensive page migrations and slow throughput performance. Accessing unified memory in a structured, warp-friendly manner reduces unnecessary page transfers. Ensure threads access memory in a structured, consecutive manner, minimizing page faults. Prefetch data to the GPU before computation by using ``hipMemPrefetchAsync()``. In addition, using small batch transfers as described below, can reduce unexpected page migrations when using unified memory. +* **Small batch transfers**: Enable pipelining and improve PCIe bandwidth use. -Memory transfers between the host and the device can become a major bottleneck if not optimized. One method is to use small batch memory transfers where data is transferred in smaller chunks instead of a dealing with large datasets to avoid long blocking operations. Small batch transfers offer better PCIe bandwidth utilization over large data transfers. 
Small batch transfers offer performance improvement by offering reduced latency with small batches that run asynchronously using ``hipMemcpyAsync()`` as described in :ref:`asynchronous_how-to`, pipelining data transfers and kernel execution using separate streams. Finally, using pinned memory with small batch transfers enables faster DMA transfers without CPU involvement, greatly improving memory transfer performance. + Memory transfers between the host and the device can become a major bottleneck if not optimized. One method is to use small batch memory transfers where data is transferred in smaller chunks instead of dealing with large datasets to avoid long blocking operations. Small batch transfers offer better PCIe bandwidth utilization over large data transfers. Small batch transfers offer performance improvement by offering reduced latency with small batches that run asynchronously using ``hipMemcpyAsync()`` as described in :ref:`asynchronous_how-to`, pipelining data transfers and kernel execution using separate streams. Finally, using pinned memory with small batch transfers enables faster DMA transfers without CPU involvement, greatly improving memory transfer performance. Execution model =============== As previously discussed in :ref:`heterogeneous_programming`, HIP programs consist of two distinct scopes: -* The host-side API running on the host processor. There are two APIs available: - - * The HIP runtime API which enables use of the single-source programming - model. - - * The HIP driver API which sits at a lower level and most importantly differs - by removing some facilities provided by the runtime API, most - importantly around kernel launching and argument setting. It is geared - towards implementing abstractions atop, such as the runtime API itself. - Offers two additional pieces of functionality not provided by the Runtime - API: ``hipModule`` and ``hipCtx`` APIs. For further details, check - :doc:`HIP driver API
`. - -* The device-side kernels running on GPUs. Both the host and the device-side - APIs have synchronous and asynchronous functions in them. +* The host-side API running on the host processor. +* The device-side kernels running on GPUs. -.. note:: - - The HIP does not present two *separate* APIs link NVIDIA CUDA. HIP only extends - the HIP runtime API with new APIs for ``hipModule`` and ``hipCtx``. +Both the host and the device-side APIs have synchronous and asynchronous functions. Host-side execution ------------------- -The part of the host-side API which deals with device management and their -queries are synchronous. All asynchronous APIs, such as kernel execution, data -movement and potentially data allocation/freeing all happen in the context of -device streams. +The host-side API dealing with device management and their queries are synchronous. +All asynchronous APIs, such as kernel execution, data movement and potentially data +allocation/freeing all happen in the context of device streams, as described in `Managing streams <../how-to/hip_runtime_api/asynchronous.html#managing-streams>`_. Streams are FIFO buffers of commands to execute relating to a given device. -Commands which enqueue tasks on a stream all return promptly and the command is +Operations that enqueue tasks on a stream all return promptly, and the command is executed asynchronously. All side effects of a command on a stream are visible to all subsequent commands on the same stream. Multiple streams may point to the same device and those streams may be fed from multiple concurrent host-side threads. Execution on multiple streams may be concurrent but isn't required to be. -Asynchronous APIs involving a stream all return a stream event which may be +Asynchronous APIs involving a stream all return a stream event, which can be used to synchronize the execution of multiple streams. A user may enqueue a -barrier onto a stream referencing an event. The barrier will block until -the command related to the event does not complete, at which point all -side effects of the command shall be visible to commands following the barrier, -even if those side effects manifest on different devices. +barrier onto a stream referencing an event. The barrier will block activity on the +stream until the operation related to the event completes. After the event completes, all +side effects of the operation will be visible to subsequent commands even if those +side effects manifest on different devices. + +.. figure:: ../data/understand/programming_model/stream-workflow.svg + :alt: Diagram depicting the stream and event workflow, with an example of + multiple streams working together. The diagram shows operations as red + rectangles, and events as white dots. There are three streams labelled + Stream 1, 2, and 3. The streams each have multiple operations and events + that require synchronization between the streams. + + Multiple stream workflow Streams also support executing user-defined functions as callbacks on the host. The stream will not launch subsequent commands until the callback completes. @@ -476,17 +482,8 @@ The stream will not launch subsequent commands until the callback completes. 
Device-side execution --------------------- -The SIMT programming model behind the HIP device-side execution is a -middle-ground between SMT (Simultaneous Multi-Threading) programming known from -multicore CPUs, and SIMD (Single Instruction, Multiple Data) programming -mostly known from exploiting relevant instruction sets on CPUs (for example -SSE/AVX/Neon). - -Kernel launch -------------- - -Kernels may be launched in multiple ways all with different syntaxes and -intended use-cases. +Kernels may be launched in multiple ways, all with different syntaxes and +intended use cases. * Using the triple-chevron ``<<<...>>>`` operator on a ``__global__`` annotated function. @@ -495,17 +492,44 @@ intended use-cases. .. tip:: - This name by default is a macro expanding to triple-chevron. In cases where + This name, by default, is a macro expanding to the triple-chevron syntax. In cases where language syntax extensions are undesirable, or where launching templated and/or overloaded kernel functions define the ``HIP_TEMPLATE_KERNEL_LAUNCH`` preprocessor macro before including the HIP headers to turn it into a templated function. -* Using the launch APIs supporting the triple-chevron syntax directly. +Asynchronous execution +---------------------- + +Asynchronous operations between the host and the kernel provide a variety of opportunities, +or challenges, for managing synchronization, as described in :ref:`asynchronous_how-to`. +For instance, a basic model would be to launch an asynchronous operation on a kernel +in a stream, create an event to track the operation, continue operations in the host +program, and when the event shows that the asynchronous operation is complete, synchronize the kernel to return the results. + +However, one of the opportunities of asynchronous operation is the pipelining of operations +between launching kernels and transferring memory. In this case, you would be working +with multiple streams running concurrently, or at least overlapping in some regard, +and managing any dependencies between the streams in the host application. +The producer-consumer paradigm can be used to convert a sequential program +into parallel operations to improve performance. This process can employ multiple +streams to kick off asynchronous kernels, provide data to the kernels, perform operations, +and return the results for further processing in the host application. + +These asynchronous activities call for stream management strategies. In the case +of the single stream, the only management would be the stream synchronization +when the work was complete. However, with multiple streams you have +overlapping execution of operations and synchronization becomes more complex, as shown +in the variations of the example in `Programmatic dependent launch and synchronization <../how-to/hip_runtime_api/asynchronous.html#programmatic-dependent-launch-and-synchronization>`_. +You need to manage each stream's activities, evaluate the availability of results, evaluate the critical path of the tasks, allocate resources on the hardware, and manage the execution order. + +Multi-GPU and load balancing +---------------------------- - .. caution:: +For applications requiring additional computational power beyond a single device, +HIP supports utilizing multiple GPUs within a system. Large-scale applications +that need more compute power can use multiple GPUs in the system. 
This enables +the runtime to distribute workloads across multiple GPUs to balance the load and prevent some GPUs +from being over-utilized while others are idle. - These APIs are intended to be used/generated by tools such as the HIP - compiler itself and not intended towards end-user code. Should you be - writing a tool having to launch device code using HIP, consider using these - over the alternatives. +For more information, see :ref:`multi-device`. From 511d5a0991c449bfd9af3961add3c176d0cdf4c5 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Thu, 13 Mar 2025 00:37:46 +0000 Subject: [PATCH 45/46] Bump rocm-docs-core[api_reference] from 1.17.1 to 1.18.1 in /docs/sphinx Bumps [rocm-docs-core[api_reference]](https://github.com/ROCm/rocm-docs-core) from 1.17.1 to 1.18.1. - [Release notes](https://github.com/ROCm/rocm-docs-core/releases) - [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md) - [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.17.1...v1.18.1) --- updated-dependencies: - dependency-name: rocm-docs-core[api_reference] dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] --- docs/sphinx/requirements.in | 2 +- docs/sphinx/requirements.txt | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/sphinx/requirements.in b/docs/sphinx/requirements.in index 1d65d55880..07e229101b 100644 --- a/docs/sphinx/requirements.in +++ b/docs/sphinx/requirements.in @@ -1,2 +1,2 @@ -rocm-docs-core[api_reference]==1.17.1 +rocm-docs-core[api_reference]==1.18.1 sphinxcontrib.doxylink diff --git a/docs/sphinx/requirements.txt b/docs/sphinx/requirements.txt index 6db9f3e428..4f3afc36dc 100644 --- a/docs/sphinx/requirements.txt +++ b/docs/sphinx/requirements.txt @@ -211,7 +211,7 @@ requests==2.32.3 # via # pygithub # sphinx -rocm-docs-core[api-reference]==1.17.1 +rocm-docs-core[api-reference]==1.18.1 # via -r requirements.in rpds-py==0.22.3 # via From 0ef68e955d02c961794a358b331bae3bf7adcf1f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Juan=20Manuel=20Martinez=20Caama=C3=B1o?= Date: Fri, 14 Mar 2025 09:41:24 +0100 Subject: [PATCH 46/46] Update docs: the compilation cache is enabled by default --- docs/how-to/hip_rtc.rst | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/how-to/hip_rtc.rst b/docs/how-to/hip_rtc.rst index 734bf60284..223e11081c 100644 --- a/docs/how-to/hip_rtc.rst +++ b/docs/how-to/hip_rtc.rst @@ -265,17 +265,17 @@ Use the following environment variables to manage the cache status as enabled or disabled, the location for storing the cache contents, and the cache eviction policy: -* ``AMD_COMGR_CACHE`` By default this variable has a value of ``0`` and the - compilation cache feature is disabled. To enable the feature set the - environment variable to a value of ``1`` (or any value other than ``0``). +* ``AMD_COMGR_CACHE`` By default this variable is unset and the + compilation cache feature is enabled. To disable the feature set the + environment variable to a value of ``0``. * ``AMD_COMGR_CACHE_DIR``: By default the value of this environment variable is - defined as ``$XDG_CACHE_HOME/comgr_cache``, which defaults to - ``$USER/.cache/comgr_cache`` on Linux, and ``%LOCALAPPDATA%\cache\comgr_cache`` + defined as ``$XDG_CACHE_HOME/comgr``, which defaults to + ``$USER/.cache/comgr`` on Linux, and ``%LOCALAPPDATA%\cache\comgr`` on Windows. 
You can specify a different directory for the environment variable to change the path for cache storage. If the runtime fails to access the - specified cache directory, or the environment variable is set to an empty - string (""), the cache is disabled. + specified cache directory the cache is disabled. If the environment variable + is set to an empty string (``""``), the default directory is used. * ``AMD_COMGR_CACHE_POLICY``: If assigned a value, the string is interpreted and applied to the cache pruning policy. The string format is consistent with