TorchRank

Miner / Provider Guide

Bring compute and models to subnets that pay for accuracy, reliability, and low latency. Win on unit economics and uptime.

What a Miner Does

You supply the “thinking power.” The network routes tasks; you respond quickly and accurately. The better you perform, the better you’re scored and rewarded.

  • Supply (Compute/Models): serve requests for embeddings, agents, diffusion, etc. Match subnets to your strengths.
  • Operate (Reliability): high availability, predictable latency, graceful degradation under load.
  • Earn (Rewards): score-linked emissions for useful work, scaled by consistency.

Mindset: This is ops. Ship observability first, fancy models second.

Common Workloads

  • Embeddings: CPU-friendly; batchable. Throughput and latency are king.
  • Agents: multi-step + tool use. Concurrency, timeouts, and retries matter.
  • Diffusion / Vision: GPU-heavy; tune presets for quality vs cost. Cache and reuse where possible.

Hardware & Footprint

Baselines

  • CPU: modern 8–16 cores
  • RAM: 32–64GB
  • Disk: NVMe SSD, logs rotated
  • GPU: match VRAM to task; avoid overbuying

Networking

  • Stable low-jitter link
  • Ingress protection, rate limits
  • Multi-AZ/region optional as you scale
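Ingress rate limiting is commonly implemented as a token bucket. A minimal sketch, with illustrative names and numbers (not a TorchRank API):

```python
class TokenBucket:
    """Token-bucket rate limiter: allows bursts up to `capacity`,
    sustains `rate` requests per second on average."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2.0)
# Burst of two passes, the third is shed, and a later request refills.
results = [bucket.allow(t) for t in (0.0, 0.1, 0.2, 2.5)]
```

Rejecting early at ingress is cheaper than queueing work you will time out on anyway.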

Reliability & SLOs

1) Health
Heartbeats; circuit breakers; bounded queues; per-subnet watchdogs.
2) Observability
Latency histograms, error budgets, GPU utilization, autoscaler logs.
3) Rollouts
Canary new model/container; fast rollback on regression.
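A count-based circuit breaker, one of the health mechanisms listed above, can be sketched in a few lines (illustrative, not a TorchRank API):

```python
class CircuitBreaker:
    """Trip after `threshold` consecutive failures; while open,
    shed load instead of queueing doomed work."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True  # trip the breaker

    def allow(self) -> bool:
        return not self.open

    def reset(self) -> None:
        """Close the breaker, e.g. after a successful health probe."""
        self.failures = 0
        self.open = False

cb = CircuitBreaker(threshold=2)
cb.record(False)
cb.record(False)
tripped = not cb.allow()  # breaker is now open
cb.reset()
```

Real deployments usually add a half-open state that lets a trickle of probe requests through before fully closing; this sketch keeps only the trip/reset core.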

Economics

Inputs

  • Task volume
  • Acceptance rate
  • Score curve
  • Spot price

Costs

  • GPU hours & power
  • Bandwidth & egress
  • Orchestration time
  • Failures/retries
# Simple miner math
rev = tasks * acceptance_rate * reward_per_task
total_cost = gpu_hours * price_per_hour + power + ops
unit_cost = total_cost / tasks
profit = rev - total_cost    # equivalently: rev - unit_cost * tasks
# Win by picking subnets where your latency/price profile is advantaged.
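With illustrative numbers plugged in (assumptions for the sketch, not network data), the miner math runs directly:

```python
# All figures below are made up for illustration.
tasks = 10_000             # tasks served per day
acceptance_rate = 0.95     # fraction of responses accepted
reward_per_task = 0.0025   # reward units per accepted task
gpu_hours = 24.0           # one GPU, full day
price_per_hour = 0.60      # cost per GPU-hour
power, ops = 2.0, 3.0      # daily power and orchestration overhead

rev = tasks * acceptance_rate * reward_per_task
total_cost = gpu_hours * price_per_hour + power + ops
unit_cost = total_cost / tasks
profit = rev - total_cost
```

Note how sensitive `profit` is to `acceptance_rate`: a few points of rejected work can flip a thin margin negative, which is why retries and latency misses show up in the "What kills margins?" answer below.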

Scale Plan

Prove stability on one subnet → expand.

  • Phase 1, Single Subnet: lock SLOs; gather 7–14 day consistency data.
  • Phase 2, Multi-Subnet: spread load; reuse infra; keep per-subnet dashboards.
  • Phase 3, Automation: autoscale on queue depth & latency; preemptible where safe.
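Scaling on queue depth and latency can be a simple rule before it is anything fancier. A toy decision function, with thresholds that are illustrative assumptions rather than TorchRank defaults:

```python
def desired_replicas(current: int, queue_depth: int, p95_latency_ms: float,
                     max_queue_per_replica: int = 50,
                     latency_slo_ms: float = 500.0,
                     min_replicas: int = 1, max_replicas: int = 16) -> int:
    """Scale up when queue depth or p95 latency breaches its target;
    scale down only when both are comfortably under. All thresholds
    here are made-up defaults for illustration."""
    if (queue_depth > current * max_queue_per_replica
            or p95_latency_ms > latency_slo_ms):
        target = current + 1
    elif (queue_depth < current * max_queue_per_replica // 2
            and p95_latency_ms < latency_slo_ms * 0.5):
        target = current - 1
    else:
        target = current
    return max(min_replicas, min(max_replicas, target))
```

The asymmetry is deliberate: scale up on either signal, scale down only when both agree, so brief latency spikes do not cause replica flapping.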

FAQ

How do I pick a subnet?

Match your hardware profile to the task. Favor subnets with steady demand and fair scoring.

What kills margins?

Idle GPUs, retries, and latency misses. Batch, cache, and cap concurrency.
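Capping concurrency is a one-liner with a semaphore. A minimal sketch (illustrative names, not a TorchRank API) that also records peak in-flight work to show the cap holds:

```python
import asyncio

async def handle(task_id: int, sem: asyncio.Semaphore,
                 in_flight: list, peak: list) -> int:
    """Run one task under a shared concurrency cap."""
    async with sem:
        in_flight[0] += 1
        peak[0] = max(peak[0], in_flight[0])
        await asyncio.sleep(0.01)  # stand-in for model inference
        in_flight[0] -= 1
    return task_id

async def serve(n_tasks: int, limit: int) -> int:
    sem = asyncio.Semaphore(limit)  # hard cap on concurrent requests
    in_flight, peak = [0], [0]
    await asyncio.gather(*(handle(i, sem, in_flight, peak)
                           for i in range(n_tasks)))
    return peak[0]

peak = asyncio.run(serve(n_tasks=20, limit=4))
```

A cap like this keeps tail latency predictable under load spikes, which matters more for your score than squeezing out a few extra concurrent requests.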

What should I publish?

Basic transparency — uptime, version, changes — helps validators trust your outputs.

When to scale?

After 2–4 weeks of stable score + positive unit economics. Then expand gradually.
