Summary

Private endpoints are authenticated. Public endpoints are rate-limited per IP address, while private endpoints are rate-limited per profile.

REST API Rate Limits

Public Endpoints

  • Requests per second per IP: VAR::REST_PUB_REQS_PER_SEC_PER_IP
  • Requests per second per IP in bursts: Up to VAR::REST_PUB_REQS_PER_SEC_PER_IP_BURST

Private Endpoints

  • Requests per second per profile: VAR::REST_PRV_REQS_PER_SEC_PER_PROFILE
  • Requests per second per profile in bursts: Up to VAR::REST_PRV_REQS_PER_SEC_PER_PROFILE_BURST

Private /fills Endpoint

  • Requests per second per profile: VAR::REST_PRV_FILLS_REQS_PER_SEC_PER_PROFILE
  • Requests per second per profile in bursts: Up to VAR::REST_PRV_FILLS_REQS_PER_SEC_PER_PROFILE_BURST

Private /loans Endpoint

  • Requests per second per profile: VAR::REST_PRV_LOANS_REQS_PER_SEC_PER_PROFILE

These rate limits do not apply to the List loan assets endpoint (/loans/assets), which is not private. A client-side pacing sketch follows below.
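To stay within the REST limits above, a client can pace its own outgoing requests. The sketch below is illustrative only: the value 10 is a stand-in for VAR::REST_PUB_REQS_PER_SEC_PER_IP, and send_request is a hypothetical callable representing whatever HTTP call your client makes.

```python
import time

# Assumption: 10 is a placeholder for VAR::REST_PUB_REQS_PER_SEC_PER_IP.
REQUESTS_PER_SECOND = 10
MIN_INTERVAL = 1.0 / REQUESTS_PER_SECOND

def send_paced(requests, send_request):
    """Send requests one at a time, never faster than the steady per-IP rate.

    send_request is a hypothetical caller-supplied function that performs the
    actual HTTP request.
    """
    last_sent = float("-inf")
    for req in requests:
        # Sleep only as long as needed to keep the steady rate.
        wait = MIN_INTERVAL - (time.monotonic() - last_sent)
        if wait > 0:
            time.sleep(wait)
        last_sent = time.monotonic()
        send_request(req)
```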

FIX API Rate Limits

FIX 4.2 Rate Limits

  • Requests per rolling second per session: VAR::FIX_REQS_PER_ROLLING_SEC_PER_SESSION
  • Messages per second in bursts: VAR::FIX_MSGS_PER_SEC_BURST

FIX 5.0 Rate Limits

  • VAR::FIX_FIVE_LOGON_PER_SEC logons per second per API key
  • VAR::FIX_FIVE_REQS_PER_SEC requests per second

Your FIX 5.0 session is disconnected if you exceed VAR::FIX_FIVE_REQS_PER_SEC_DISCONNECT messages per second.

FIX Maximums

  • Maximum API keys per session/connection: VAR::FIX_MAX_API_KEYS_PER_SESSION
  • Maximum connections per profile: VAR::FIX_MAX_CONNECTIONS_PER_PROFILE. See FIX Best Practices.
  • Maximum connections per user across all profiles: VAR::FIX_MAX_CONNECTIONS_PER_USER
  • Maximum profiles per user: VAR::FIX_MAX_PROFILES_PER_USER
  • Maximum orders per batch message (new and cancelled): VAR::FIX_MAX_BATCH_ORDERS

Websocket Rate Limits

  • Requests per second per IP: VAR::WS_REQS_PER_SEC_PER_IP
  • Requests per second per IP in bursts: Up to VAR::WS_REQS_PER_SEC_PER_IP_BURST
  • Messages sent by the client every second per IP: VAR::WS_CLIENT_MSGS_PER_SEC_PER_IP

Other

  • Maximum open orders: VAR::MAX_OPEN_ORDERS

How Rate Limits Work

Rate-limiting for both the Exchange REST API and the FIX API uses a lazy-fill token bucket implementation.

A TokenBucket stores a maximum number of tokens, which is the burst size, and fills at a given rate called the refresh rate. The bucket starts full, and as requests are received, a token is removed for each request. Tokens are continuously added to the bucket at the refresh rate until it is full.

When a user sends a request, the TokenBucket calculates whether to rate limit the user as follows (a short code sketch follows these steps):

  1. Fill the user’s TokenBucket to a new token amount using the formula: token_amount = min(burst, previous_token_amount + (current_time - previous_request_time) * refresh_rate)
  2. Remove 1 token if possible, otherwise rate limit the request.
  3. Repeat Steps 1 and 2 for each subsequent request.
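As an illustration (not the exchange's actual implementation), the lazy-fill logic above can be written in a few lines of Python; the class and method names here are chosen for clarity.

```python
class TokenBucket:
    """Lazy-fill token bucket: tokens are recomputed only when a request arrives."""

    def __init__(self, burst: float, refresh_rate: float):
        self.burst = burst                # maximum number of tokens (burst size)
        self.refresh_rate = refresh_rate  # tokens added per second
        self.tokens = burst               # the bucket starts full
        self.previous_request_time = 0.0  # time of the last request seen

    def consume(self, current_time: float) -> bool:
        """Return True if the request is allowed, False if it is rate limited."""
        # Step 1: fill based on the time elapsed since the previous request,
        # capping at the burst size.
        elapsed = current_time - self.previous_request_time
        self.tokens = min(self.burst, self.tokens + elapsed * self.refresh_rate)
        self.previous_request_time = current_time
        # Step 2: remove one token if possible; otherwise rate limit the request.
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```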

TokenBucket Example

Let’s say you have a TokenBucket with burst = 3 and refresh_rate = 1. The table below represents the state of your token bucket after a series of requests:

| Action | Time | Tokens | Notes |
| --- | --- | --- | --- |
| Initial State | 0.0 | 3.0 | New TokenBucket is initialized to max capacity (burst) |
| Request 1 | 0.5 | 2.0 | Fill TokenBucket, then remove a token. Because we are at max capacity, we simply subtract 1 token from 3 |
| Request 2 | 0.8 | 1.3 | Fill TokenBucket to 2.3 (min(3, 2 + (0.8 - 0.5) * 1.0) = min(3, 2.3) = 2.3), then subtract 1 |
| Request 3 | 0.9 | 0.4 | Fill TokenBucket to 1.4 (min(3, 1.3 + (0.9 - 0.8) * 1.0) = min(3, 1.4) = 1.4), then subtract 1 |
| Request 4 | 1.0 | 0.5 | Fill TokenBucket to 0.5 (min(3, 0.4 + (1.0 - 0.9) * 1.0) = min(3, 0.5) = 0.5). Rate limit because we don’t have enough tokens available |
| Request 5 | 1.4 | 0.9 | Fill TokenBucket to 0.9 (min(3, 0.5 + (1.4 - 1.0) * 1.0) = min(3, 0.9) = 0.9). Rate limit because we don’t have enough tokens available |
| Request 6 | 1.8 | 0.3 | Fill TokenBucket to 1.3 (min(3, 0.9 + (1.8 - 1.4) * 1.0) = min(3, 1.3) = 1.3), then subtract 1 |
| Request 7 | 5.0 | 2.0 | Fill TokenBucket to 3.0 (min(3, 0.3 + (5.0 - 1.8) * 1.0) = min(3, 3.5) = 3.0), since we would “overflow” with our calculations, then subtract 1 |
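Running the hypothetical TokenBucket sketch from above with burst = 3, refresh_rate = 1, and the same request times reproduces the table:

```python
bucket = TokenBucket(burst=3.0, refresh_rate=1.0)
for i, t in enumerate([0.5, 0.8, 0.9, 1.0, 1.4, 1.8, 5.0], start=1):
    allowed = bucket.consume(t)
    print(f"Request {i} at t={t}: "
          f"{'allowed' if allowed else 'rate limited'}, tokens = {bucket.tokens:.1f}")
```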