GPUs

Support for NVIDIA Tesla Graphics Processing Units (GPUs) is provided when the CUDA library is installed on the system and the Magma CUDA executable magma.cuda.exe is used (suitably renamed for the driving magma command-line script if need be).

The GPU mode is selected by default in the Magma CUDA executable when running on a computer with CUDA support; the procedure call SetGPU(false); disables the CUDA matrix algorithms in this executable, while SetGPU(true); enables them again. (For the standard non-CUDA executable, SetGPU is ignored since the GPU mode is irrelevant.)

Multi-GPU linear algebra has been supported since V2.26.

SetGPU(b) : BoolElt ->
GetGPU() : -> BoolElt
Set the NVIDIA GPU mode to b; this determines whether Magma should use NVIDIA GPUs via CUDA when supported. This is only relevant to a CUDA-enabled executable (typically downloaded as magma.cuda.exe) and is true by default in that case (so the GPU is used by default); for a non-CUDA-enabled executable, the procedure has no effect. Currently, a GPU is exploited in matrix multiplication over GF(2) and small prime finite fields, and consequently in anything which depends on such multiplication, such as the dense F4 Gröbner basis algorithm over such fields.
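For example, assuming a CUDA-enabled executable, one might disable the CUDA matrix algorithms temporarily (say, to compare timings against the CPU code) and then re-enable them:

    > SetGPU(false);  // disable CUDA matrix algorithms
    > GetGPU();
    false
    > SetGPU(true);   // use the GPU again
    > GetGPU();
    true

In a non-CUDA executable the calls above are ignored, so GetGPU() would simply report the unchanged mode.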
SetNGPUs(n) : RngIntElt ->
Set the number of GPUs to be used to n (uses devices 0 .. n - 1 by default).
GetNGPUs() : -> RngIntElt
Return the number of GPUs currently in use.
SetGPUDevices(S) : SeqEnum ->
Set the specific GPU devices to be used to the device numbers in the sequence S (this also sets the number of devices to #S); the entries of S must be distinct and ≥0.
SetGPUDevice(k) : RngIntElt ->
Set the single GPU device to be used to k; this is equivalent to calling SetGPUDevices([k]).

Being able to select specific devices provides finer-grained management when a machine has more than one GPU. For example, suppose that the machine has two GPUs (device numbers 0 and 1). One could simply call SetNGPUs(2) so that a single job uses both GPUs, but one might prefer to run two separate jobs, each using a single GPU. In that case the first job could specify SetGPUDevice(0) while the second job specifies SetGPUDevice(1).
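The two-job scenario above might be set up as follows (the device numbers 0 and 1 assume a two-GPU machine); each job is a separate Magma process:

    // Job 1 (first Magma process):
    > SetGPUDevice(0);   // equivalent to SetGPUDevices([0])
    > GetNGPUs();
    1

    // Job 2 (second Magma process):
    > SetGPUDevice(1);   // this job uses device 1 only

Since SetGPUDevice(k) is equivalent to SetGPUDevices([k]), each job reports one GPU in use, and the two processes do not contend for the same device.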

NB: The utility nvtop (downloadable from https://github.com/Syllo/nvtop) is very useful for monitoring GPU usage during a run. It continuously shows how heavily each GPU is being used, in both compute time and memory.

V2.28, 13 July 2023