## Available GPU Plans
| Plan | GPU | VRAM | vCPUs | RAM | Storage | Price/hr |
|---|---|---|---|---|---|---|
| GPU-A100-40 | NVIDIA A100 40GB | 40 GB | 8 | 64 GB | 500 GB NVMe | $1.89 |
| GPU-A100-80 | NVIDIA A100 80GB | 80 GB | 16 | 128 GB | 1 TB NVMe | $2.49 |
| GPU-H100-80 | NVIDIA H100 80GB | 80 GB | 24 | 192 GB | 2 TB NVMe | $3.99 |
| GPU-RTX4090 | NVIDIA RTX 4090 | 24 GB | 8 | 32 GB | 250 GB NVMe | $0.99 |
| GPU-A6000 | NVIDIA RTX A6000 | 48 GB | 12 | 96 GB | 500 GB NVMe | $1.49 |
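To compare plans at a glance, here is a quick sketch that computes price per GB of VRAM per hour and the cost of an example 24-hour job. The figures are copied from the table above; the 24-hour job length is only an illustrative assumption.

```python
# Rough plan comparison using the VRAM and hourly prices from the table above.
plans = {
    "GPU-A100-40": {"vram_gb": 40, "price_hr": 1.89},
    "GPU-A100-80": {"vram_gb": 80, "price_hr": 2.49},
    "GPU-H100-80": {"vram_gb": 80, "price_hr": 3.99},
    "GPU-RTX4090": {"vram_gb": 24, "price_hr": 0.99},
    "GPU-A6000":   {"vram_gb": 48, "price_hr": 1.49},
}

job_hours = 24  # illustrative: a day-long training or fine-tuning run

for name, p in plans.items():
    per_gb_hr = p["price_hr"] / p["vram_gb"]
    total = p["price_hr"] * job_hours
    print(f"{name}: ${per_gb_hr:.3f} per GB-hr, ${total:.2f} for {job_hours} h")
```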
## Choosing the Right GPU
### Training Large Language Models (LLMs)
Use the H100 80GB for maximum throughput. The H100 delivers roughly 3× the BF16/FP16 throughput of the A100, adds native FP8 support, and supports NVLink for multi-GPU configurations.
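To see why 80 GB cards and multi-GPU nodes are the baseline for full-parameter training, here is a rough sketch using the common mixed-precision Adam accounting of about 16 bytes per parameter (16-bit weights and gradients plus fp32 master weights and two optimizer moments). Activation memory is extra and depends on batch size and sequence length.

```python
def training_weight_memory_gb(params_billion: float, bytes_per_param: int = 16) -> float:
    """Approximate memory for weights, gradients and Adam optimizer state
    in mixed-precision training (~16 bytes/param); activations not included."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

for size in (7, 13, 70):
    print(f"{size}B params -> ~{training_weight_memory_gb(size):.0f} GB before activations")
# Even 7B exceeds a single 80 GB card, which is why full training runs
# use multi-GPU nodes and sharded optimizer state (e.g. ZeRO/FSDP).
```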
### Fine-Tuning (LoRA / QLoRA)
The A100 80GB and RTX A6000 48GB are cost-effective for fine-tuning models of up to 70B parameters with 4-bit quantisation (QLoRA).
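As a rough sanity check on the 70B figure, the sketch below estimates the QLoRA footprint as 4-bit frozen base weights plus a small set of 16-bit trainable adapters. The adapter size (200M parameters) and the 16-bytes-per-trainable-parameter figure are illustrative assumptions, not exact requirements.

```python
def qlora_memory_gb(params_billion: float, lora_params_million: float = 200) -> float:
    """Approximate VRAM for QLoRA fine-tuning: 4-bit (0.5 byte/param) frozen base
    weights plus LoRA adapters trained with ~16 bytes per trainable parameter
    (weights, gradients, optimizer state). Activations are not included."""
    base = params_billion * 1e9 * 0.5          # 4-bit quantised base model
    adapters = lora_params_million * 1e6 * 16  # assumed adapter size + optimizer state
    return (base + adapters) / 1024**3

print(f"70B base in 4-bit + adapters: ~{qlora_memory_gb(70):.0f} GB")
# Quantised base weights alone are ~33 GB, so 48 GB is workable and
# 80 GB leaves headroom for activations and longer sequences.
```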
### Inference / Serving
The RTX 4090 offers the best price-per-token for models up to 13B parameters (quantised to fit its 24 GB). For larger models, use the A100 40GB.
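A quick way to check whether a model fits a given card for serving is weight memory at the chosen precision plus the KV cache for your batch size and context length. The sketch below is an approximation with assumed model dimensions (40 layers, hidden size 5120 for a 13B-class model) and ignores framework overhead and grouped-query attention.

```python
def inference_memory_gb(params_billion, weight_bytes=2,
                        layers=40, hidden=5120,
                        batch=1, context=4096, kv_bytes=2):
    """Approximate serving footprint: weights at `weight_bytes` per parameter
    plus the KV cache (2 tensors per layer, each batch x context x hidden)."""
    weights = params_billion * 1e9 * weight_bytes
    kv_cache = 2 * layers * batch * context * hidden * kv_bytes
    return (weights + kv_cache) / 1024**3

# A 13B-class model at a 4096-token context:
print(f"~{inference_memory_gb(13):.0f} GB in fp16")   # ~27 GB: over the 4090's 24 GB
print(f"~{inference_memory_gb(13, weight_bytes=1):.0f} GB in int8")  # fits comfortably
```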
### Computer Vision / Image Generation
The RTX 4090 is ideal for Stable Diffusion and similar workloads, offering high compute throughput and memory bandwidth relative to cost.
## Multi-GPU Configurations
Contact support to provision bare-metal nodes with 4× or 8× H100 GPUs for large-scale distributed training.
## Spot vs On-Demand
LightYear currently offers on-demand pricing only. Spot instances are on the roadmap for Q3 2025.
