Documentation

Everything you need to build with LimitlessAI

Quick Start

Get up and running with LimitlessAI in under 5 minutes.

1. Install the CLI

npm install -g @limitlessai/cli
# or
curl -sSL https://get.limitlessai.com | bash

2. Authenticate

limitless auth login
# Enter your API key when prompted

3. Deploy Your First Instance

limitless deploy --name my-first-app \
  --type gpu-rtx4090 \
  --region us-west-1 \
  --image ubuntu:22.04

Pro Tip: Use limitless deploy --help to see all available options.

Installation

Choose your preferred installation method:

# Install SDK
npm install @limitlessai/sdk

// Usage
const LimitlessAI = require('@limitlessai/sdk');

const client = new LimitlessAI({
  apiKey: 'your-api-key'
});

// Create an instance (await needs an async context in CommonJS)
async function main() {
  const instance = await client.instances.create({
    name: 'gpu-worker',
    type: 'gpu-rtx4090',
    region: 'us-west-1'
  });
  console.log(instance);
}

main();
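If you prefer to call the HTTP API directly instead of going through the SDK, the same create call can be sketched with Python's standard library. Note that the endpoint URL, payload field names, and Bearer-token scheme below are assumptions for illustration, not documented API details.

```python
import json
import urllib.request

def build_create_request(api_key, name, gpu_type, region):
    """Build (but do not send) a hypothetical instance-creation request."""
    payload = json.dumps({"name": name, "type": gpu_type, "region": region}).encode()
    return urllib.request.Request(
        "https://api.limitlessai.com/v1/instances",  # assumed endpoint
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_create_request("your-api-key", "gpu-worker", "gpu-rtx4090", "us-west-1")
```

Sending the request (e.g. with urllib.request.urlopen) would then return the created instance as JSON.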

Authentication

All API requests require authentication using your API key.

Getting Your API Key

  1. Sign in to your dashboard
  2. Navigate to Settings → API Keys
  3. Click "Generate New Key"
  4. Store your key securely

Using Your API Key

# Via environment variable
export LIMITLESS_API_KEY="your-api-key"

# Via CLI flag
limitless --api-key="your-api-key" instances list

# Via config file
mkdir -p ~/.limitless
echo "api_key: your-api-key" > ~/.limitless/config.yaml

Security Note: Never commit your API keys to version control.
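The three methods above are typically resolved in a precedence order. A plain-Python sketch of flag > environment variable > config file resolution (the precedence order itself is an assumption; the CLI's actual behavior may differ):

```python
import os

def resolve_api_key(cli_flag=None, environ=None, config=None):
    """Return the first API key found: --api-key flag, then the
    LIMITLESS_API_KEY environment variable, then the config file."""
    environ = environ if environ is not None else os.environ
    config = config or {}
    if cli_flag:
        return cli_flag
    if environ.get("LIMITLESS_API_KEY"):
        return environ["LIMITLESS_API_KEY"]
    if config.get("api_key"):
        return config["api_key"]
    raise RuntimeError("No API key configured")
```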

GPU Computing Guide

Optimize your GPU workloads for maximum performance.

Available GPU Types

GPU Model   VRAM   CUDA Cores   Best For                 Price/hr
RTX 4090    24GB   16,384       Large models, training   $1.79
A100        80GB   6,912        Enterprise ML            $2.79
T4          16GB   2,560        Inference                $0.55
V100        32GB   5,120        Research                 $1.29
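The hourly rates above make it easy to estimate job cost up front. For example, the cost of a 12-hour training run on each GPU type:

```python
# Hourly prices from the table above (USD)
prices = {"RTX 4090": 1.79, "A100": 2.79, "T4": 0.55, "V100": 1.29}

hours = 12
costs = {gpu: round(rate * hours, 2) for gpu, rate in prices.items()}
print(costs)  # a 12-hour run on an RTX 4090 comes to $21.48
```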

CUDA Setup

import torch
import torch.nn as nn
import limitlessai

# Check CUDA availability
print(f"CUDA Available: {torch.cuda.is_available()}")
print(f"GPU Count: {torch.cuda.device_count()}")

# Initialize model on GPU (YourModel is a placeholder for your own nn.Module)
model = YourModel().cuda()

# Multi-GPU training: replicate the model across all visible GPUs
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
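nn.DataParallel works by scattering each input batch across the available GPUs, running a model replica on each slice, and gathering the outputs. The scatter step can be sketched in plain Python (a simplified illustration, not PyTorch's actual implementation):

```python
def split_batch(batch, num_devices):
    """Split a batch into near-equal chunks, one per device."""
    chunk = -(-len(batch) // num_devices)  # ceiling division
    return [batch[i:i + chunk] for i in range(0, len(batch), chunk)]

# 10 samples across 4 GPUs -> slices of 3, 3, 3, 1
print(split_batch(list(range(10)), 4))
```

For production multi-GPU training, the PyTorch documentation recommends DistributedDataParallel over DataParallel, since it avoids the single-process scatter/gather bottleneck sketched above.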