Cloud GPU

GPU Cloud for AI & ML

Deploy NVIDIA A16, A40, A100, and L40S GPU instances in seconds. Purpose-built for AI training, LLM fine-tuning, and high-performance computing.

Starting at $53.75/mo · Hourly billing · No minimum commitment

Choose Your GPU

Select the GPU tier that matches your workload requirements.

NVIDIA A16
NVIDIA A16 GPU instances with up to 64 GB of VRAM per card; larger tiers combine multiple GPUs for up to 256 GB of VRAM. Ideal for AI inference, ML training, and GPU-accelerated workloads at an accessible price point.
AI/ML inference · Small model training · GPU rendering · Scientific computing
All tiers run on NVIDIA A16 GPUs with 10 Gbps networking.

VRAM      Monthly      Hourly      vCPU   RAM      NVMe SSD   Bandwidth
2 GB      $53.75       $0.0738     2      8 GB     50 GB      1 TB
4 GB *    $107.50      $0.1475     2      16 GB    80 GB      2 TB
8 GB      $215.00      $0.2950     3      32 GB    170 GB     3 TB
16 GB     $430.00      $0.5888     6      64 GB    350 GB     6 TB
32 GB     $860.00      $1.1775     12     128 GB   700 GB     10 TB
64 GB     $1718.75     $2.3550     24     256 GB   1200 GB    12 TB
128 GB    $3437.50     $4.7088     48     496 GB   1500 GB    15 TB
256 GB    $6875.00     $9.4175     96     960 GB   1700 GB    25 TB

* Most popular tier.
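As a quick sanity check on the pricing above, the monthly and hourly rates line up with a 730-hour billing month, a common cloud convention (an observation from the listed numbers, not something the provider states):

```python
# Verify that each listed monthly price implies the listed hourly rate,
# assuming a 730-hour month (365 days / 12 months * 24 hours ≈ 730).
HOURS_PER_MONTH = 730  # assumed convention, not stated on the page

plans = [
    (53.75, 0.0738),     # 2 GB VRAM tier
    (430.00, 0.5888),    # 16 GB VRAM tier
    (3437.50, 4.7088),   # 128 GB VRAM tier
]

for monthly, hourly in plans:
    implied_hourly = monthly / HOURS_PER_MONTH
    # Listed rates agree with the implied rate to within ~0.5% (rounding)
    assert abs(implied_hourly - hourly) / hourly < 0.005
```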

Built for AI Workloads

Every GPU instance is optimized for machine learning, AI training, and high-performance computing.

Deploy in Seconds

Spin up GPU instances in under 60 seconds. No waiting, no queues.

NVMe SSD Storage

Ultra-fast local NVMe SSD storage for low-latency data access.

10 Gbps Network

High-bandwidth networking for distributed training and data pipelines.

Hourly Billing

Pay only for what you use. Billed by the hour, with no minimum commitment.
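A minimal sketch of how a month's bill works out under this model, assuming the common convention that hourly charges are capped at the plan's monthly price (the cap is an assumption, not stated on this page):

```python
def estimate_monthly_bill(hours_used: float, hourly_rate: float, monthly_cap: float) -> float:
    """Hours used times the hourly rate, capped at the monthly price.

    The cap is an assumed convention; check the provider's billing docs.
    """
    return round(min(hours_used * hourly_rate, monthly_cap), 2)

# Example: the 2 GB VRAM tier ($0.0738/hr, $53.75/mo)
estimate_monthly_bill(100, 0.0738, 53.75)  # -> 7.38 for 100 hours
estimate_monthly_bill(800, 0.0738, 53.75)  # -> 53.75 (capped)
```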

CUDA Ready

All instances come pre-configured with CUDA drivers and GPU toolkits.
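Since instances ship with CUDA drivers, `nvidia-smi` should be available after boot. A small helper for checking what the instance sees, built around `nvidia-smi`'s CSV query output (the sample line below is illustrative, not captured from a real instance):

```python
from typing import Dict, List

def parse_gpu_query(csv_output: str) -> List[Dict[str, str]]:
    """Parse the CSV output of:

        nvidia-smi --query-gpu=name,memory.total --format=csv,noheader

    Returns one dict per GPU with its name and total memory.
    """
    gpus = []
    for line in csv_output.strip().splitlines():
        name, memory = (field.strip() for field in line.split(","))
        gpus.append({"name": name, "memory": memory})
    return gpus

# Illustrative sample line for a single-GPU tier:
sample = "NVIDIA A16, 16384 MiB"
print(parse_gpu_query(sample))  # [{'name': 'NVIDIA A16', 'memory': '16384 MiB'}]
```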

Full GPU Access

Dedicated GPU resources — no sharing, no throttling, full VRAM access.

Free DDoS Protection

Always-on network-level DDoS mitigation included on every GPU instance at no extra cost.

What Will You Build?

From training large language models to running real-time inference, LightYear GPU Cloud handles it all.

LLM Training

Train and fine-tune large language models with A100 or A40 GPUs.

AI Inference

Run production inference workloads at scale with low latency.

Computer Vision

Train image classification, detection, and segmentation models.

Scientific HPC

Accelerate simulations, molecular dynamics, and research workloads.

Ready to Deploy?

Start with any GPU tier. Scale up as your workload grows. Cancel anytime.
