Once you are logged on to the instance, you can query the GPU using the command
$ nvidia-smi
It should print out information similar to the following:
$ nvidia-smi
Fri May 31 07:14:35 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           On   | 00000000:00:1E.0 Off |                    0 |
| N/A   58C    P0    73W / 149W |      0MiB / 11441MiB |     96%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
This indicates that the GPU is a Tesla K80 of the Kepler family with nearly 12GiB of memory (11441MiB), and that we have version 418.67 of the drivers and version 10.1 of CUDA installed.
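You can confirm the same details from inside a program using the CUDA runtime API. The following is a minimal sketch, not one of the toolkit samples; the file name query_device.cu is just a placeholder. It asks the runtime for each visible device's properties via cudaGetDeviceProperties and can be compiled with, for example, nvcc query_device.cu -o query_device.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // Ask the runtime how many CUDA-capable devices are visible.
    cudaGetDeviceCount(&count);
    printf("Found %d CUDA device(s)\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Compute capability 3.7 corresponds to the Kepler-class Tesla K80.
        printf("Device %d: %s, compute capability %d.%d, %zu MiB of memory\n",
               i, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}

On this instance it should report a single Tesla K80 with compute capability 3.7 and roughly the 11441MiB of memory shown by nvidia-smi above.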
The Nvidia CUDA toolkit contains a large number of substantive examples. With the latest version of the toolkit, everything related to CUDA is installed in /usr/local/cuda; the samples can be found in /usr/local/cuda/samples. I encourage you to look through some of them; the 6_Advanced subdirectory in particular has some interesting and non-trivial examples.