
GPU Server Overview and Use Cases

Understand LightYear GPU server offerings, available GPU models, and which workloads benefit most from GPU acceleration.

beginner
7 min read
LightYear Docs Team
Updated April 24, 2026
Tags: gpu, ai, ml, cuda, compute

[Diagram: Cloud Server Deployment Flow]

LightYear GPU servers provide on-demand access to high-performance NVIDIA GPUs for AI/ML training, inference, rendering, and scientific computing workloads.

Available GPU Models

| GPU Model          | VRAM          | Best For                 | Price    |
|--------------------|---------------|--------------------------|----------|
| NVIDIA RTX 4090    | 24 GB GDDR6X  | AI inference, rendering  | $0.74/hr |
| NVIDIA A100 80GB   | 80 GB HBM2e   | Large model training     | $2.49/hr |
| NVIDIA H100 SXM    | 80 GB HBM3    | LLM training, HPC        | $3.99/hr |
| NVIDIA L40S        | 48 GB GDDR6   | Multi-modal AI, rendering| $1.49/hr |
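Because billing is hourly, total job cost is easy to estimate up front. The sketch below uses the rates from the table above; the plan slugs other than gpu-a100-80gb (which appears in the deployment example later) are illustrative guesses, not confirmed API identifiers, and billing granularity is assumed to be exactly hourly.

```python
# Illustrative cost estimate using the hourly rates from the table above.
# Plan slugs other than "gpu-a100-80gb" are assumptions, and per-second
# or minimum-billing rules (if any) are not modeled here.

HOURLY_RATES = {
    "gpu-rtx4090": 0.74,
    "gpu-a100-80gb": 2.49,
    "gpu-h100-sxm": 3.99,
    "gpu-l40s": 1.49,
}

def estimate_cost(plan: str, hours: float) -> float:
    """Return the estimated cost in USD for running `plan` for `hours`."""
    return round(HOURLY_RATES[plan] * hours, 2)

# A 12-hour fine-tuning run on an A100 80GB:
print(estimate_cost("gpu-a100-80gb", 12))  # → 29.88
```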

Common Use Cases

AI/ML Model Training Training deep learning models with frameworks like PyTorch, TensorFlow, and JAX benefits significantly from GPU acceleration. A task that takes 24 hours on CPU may complete in 30 minutes on an A100.

LLM Inference Serving large language models (LLaMA 3, Mistral, Qwen) requires substantial VRAM. The A100 80GB can serve a 70B-parameter model with 4-bit quantization.
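A quick back-of-the-envelope check for whether a quantized model fits in VRAM: weight memory is roughly parameters times bits-per-weight divided by 8. The 20% headroom factor for KV cache and activations below is a rough assumption for illustration, not a LightYear or vendor figure.

```python
def fits_in_vram(params_billion: float, bits: int, vram_gb: float,
                 overhead: float = 1.2) -> bool:
    """Rough check: weight memory = params * bits/8 bytes, plus ~20%
    headroom (assumed factor) for KV cache and activations."""
    weights_gb = params_billion * (bits / 8)  # 1B params at 8-bit ≈ 1 GB
    return weights_gb * overhead <= vram_gb

# A 70B model at 4-bit: ~35 GB of weights, ~42 GB with headroom.
print(fits_in_vram(70, 4, 80))   # A100 80GB → True
print(fits_in_vram(70, 16, 80))  # fp16 weights alone are ~140 GB → False
```

This is why the A100 80GB handles a 4-bit 70B model comfortably, while serving the same model in fp16 requires multiple GPUs.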

Computer Vision Object detection, image segmentation, and video processing pipelines run efficiently on GPU servers.

Scientific Computing CUDA-accelerated libraries (cuBLAS, cuFFT, RAPIDS) enable GPU-accelerated data processing and simulation.
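One common way to reach those CUDA libraries from Python is CuPy, which mirrors the NumPy API so CPU code often ports with an import swap. CuPy is not pre-installed on the server; installing it (e.g. pip install cupy-cuda12x) is an assumption here, and the sketch falls back to NumPy when no GPU stack is present:

```python
# CuPy mirrors NumPy's API; on a GPU server the same code runs on CUDA
# (cuBLAS/cuFFT under the hood). Falling back to NumPy lets this sketch
# run anywhere. Assumption: cupy installed via `pip install cupy-cuda12x`.
try:
    import cupy as xp   # GPU-backed arrays
except ImportError:
    import numpy as xp  # CPU fallback with the same interface

signal = xp.random.randn(4096)
spectrum = xp.fft.fft(signal)   # cuFFT on GPU, pocketfft on CPU
power = xp.abs(spectrum) ** 2   # power spectrum, always non-negative

print(power.shape)
```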

Deploy a GPU Server

$ curl -X POST https://api.lightyear.host/v1/servers \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
      "region": "sgp-01",
      "plan": "gpu-a100-80gb",
      "os_id": 1743,
      "label": "ml-training-01",
      "ssh_key_ids": ["key_abc123"]
    }'
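If you are scripting deployments, the same request can be made from Python. The endpoint and request fields below are taken from the curl example; the helper function names and the use of the requests library are illustrative, not part of an official LightYear SDK.

```python
# The same deployment request as the curl example above, using the
# third-party `requests` library (pip install requests). Endpoint and
# body fields come from the curl call; the helpers are illustrative.
import requests

API_URL = "https://api.lightyear.host/v1/servers"

def build_deploy_payload() -> dict:
    """Request body matching the curl example above."""
    return {
        "region": "sgp-01",
        "plan": "gpu-a100-80gb",
        "os_id": 1743,
        "label": "ml-training-01",
        "ssh_key_ids": ["key_abc123"],
    }

def deploy_gpu_server(api_key: str) -> dict:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json=build_deploy_payload(),
        timeout=30,
    )
    resp.raise_for_status()  # surface 4xx/5xx errors immediately
    return resp.json()

# Example (requires a valid key):
# server = deploy_gpu_server(os.environ["LIGHTYEAR_API_KEY"])
```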

Verify GPU Availability After Deployment

$ nvidia-smi
OUTPUT
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.12   Driver Version: 535.104.12   CUDA Version: 12.2    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  NVIDIA A100-SXM...  Off  | 00000000:00:05.0 Off |                    0 |
| N/A   32C    P0    55W / 400W |      0MiB / 81920MiB |      0%      Default |
+-----------------------------------------------------------------------------+

Install CUDA Toolkit

GPU servers come with NVIDIA drivers pre-installed. To install the CUDA toolkit:

$ sudo apt update
$ sudo apt install -y cuda-toolkit-12-2

Verify the installation:

$ nvcc --version
OUTPUT
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.2, V12.2.91

[!TIP] GPU servers are billed hourly. Stop or delete the server when training is complete to avoid unnecessary charges.
