# Cloud Server Deployment Flow
LightYear GPU servers provide on-demand access to high-performance NVIDIA GPUs for AI/ML training, inference, rendering, and scientific computing workloads.
## Available GPU Models
| GPU Model | VRAM | Best For | Price |
|---|---|---|---|
| NVIDIA RTX 4090 | 24 GB GDDR6X | AI inference, rendering | $0.74/hr |
| NVIDIA A100 80GB | 80 GB HBM2e | Large model training | $2.49/hr |
| NVIDIA H100 SXM | 80 GB HBM3 | LLM training, HPC | $3.99/hr |
| NVIDIA L40S | 48 GB GDDR6 | Multi-modal AI, rendering | $1.49/hr |
## Common Use Cases
- **AI/ML Model Training**: Training deep learning models with frameworks like PyTorch, TensorFlow, and JAX benefits significantly from GPU acceleration. A task that takes 24 hours on a CPU may complete in 30 minutes on an A100.
- **LLM Inference**: Serving large language models (LLaMA 3, Mistral, Qwen) requires substantial VRAM. The A100 80GB can serve a 70B-parameter model in 4-bit quantization.
- **Computer Vision**: Object detection, image segmentation, and video processing pipelines run efficiently on GPU servers.
- **Scientific Computing**: CUDA libraries (cuBLAS, cuFFT, RAPIDS) enable GPU-accelerated data processing and simulation.
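The VRAM figure behind the 70B-model claim can be sanity-checked with simple arithmetic. This is a rough sketch of weight memory only; a real deployment also needs room for the KV cache, activations, and runtime overhead:

```python
def quantized_weight_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB (1 GB = 10**9 bytes)."""
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

# 70B parameters at 4 bits each: ~35 GB of weights,
# which fits comfortably in an A100's 80 GB of HBM2e.
print(f"{quantized_weight_gb(70, 4):.0f} GB")  # 35 GB
```

The same model in 16-bit precision would need roughly 140 GB, which is why quantization is what makes single-GPU serving of 70B models feasible.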
## Deploy a GPU Server
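For scripted deployments, the API call in this section can be sketched with Python's standard library. The endpoint, headers, and payload fields below mirror the curl example on this page; treat this as an illustrative sketch, not an official client SDK:

```python
import json
import urllib.request

API_URL = "https://api.lightyear.host/v1/servers"

def build_deploy_request(api_key: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) the server-creation POST request."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

payload = {
    "region": "sgp-01",
    "plan": "gpu-a100-80gb",
    "os_id": 1743,
    "label": "ml-training-01",
    "ssh_key_ids": ["key_abc123"],
}
req = build_deploy_request("YOUR_API_KEY", payload)
# urllib.request.urlopen(req) would send the request; it is omitted here
# so the sketch does not make a live API call.
```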
```bash
curl -X POST https://api.lightyear.host/v1/servers \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "region": "sgp-01",
    "plan": "gpu-a100-80gb",
    "os_id": 1743,
    "label": "ml-training-01",
    "ssh_key_ids": ["key_abc123"]
  }'
```

## Verify GPU Availability After Deployment
```
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.12   Driver Version: 535.104.12   CUDA Version: 12.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  NVIDIA A100-SXM...  Off  | 00000000:00:05.0 Off |                    0 |
| N/A   32C    P0    55W / 400W |      0MiB / 81920MiB |      0%      Default |
+-----------------------------------------------------------------------------+
```

## Install CUDA Toolkit
GPU servers come with NVIDIA drivers pre-installed. To install the CUDA toolkit:
```bash
apt install -y cuda-toolkit-12-2
```

Verify the installation:
```
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.2, V12.2.91
```

> [!TIP]
> GPU servers are billed hourly. Stop or delete the server when training is complete to avoid unnecessary charges.
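Since billing is hourly, a run's cost is simply rate times hours. A quick estimator using the hourly rates from the GPU model table on this page (the plan keys are illustrative shorthand, and rates may change):

```python
# Hourly rates (USD) taken from the GPU model table on this page.
HOURLY_RATE = {
    "rtx-4090": 0.74,
    "a100-80gb": 2.49,
    "h100-sxm": 3.99,
    "l40s": 1.49,
}

def run_cost(plan: str, hours: float) -> float:
    """Estimated cost in USD for keeping a server running for `hours`."""
    return round(HOURLY_RATE[plan] * hours, 2)

# A 24-hour A100 training run:
print(f"${run_cost('a100-80gb', 24):.2f}")  # $59.76
```

Comparing this against the CPU-vs-GPU speedup in the use cases above often shows the GPU run is cheaper in total, not just faster, since the server is up for far fewer billable hours.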
