NewsDataHub Learning Center

What's the Difference Between Offset and Cursor Pagination? How to Choose the Right Approach

Pagination divides large result sets into smaller chunks for performance and a better user experience. It reduces memory pressure, avoids timeouts, and improves perceived responsiveness.

Offset pagination

Offset pagination selects a page by number or by how many rows to skip.

  • Client sends: page_size and page or offset
  • Server returns that slice of rows
-- Initial page
SELECT *
FROM posts
ORDER BY created_at DESC
LIMIT 10;

-- Subsequent page using a page number: offset = (page - 1) * page_size
SELECT *
FROM posts
ORDER BY created_at DESC
LIMIT 10 OFFSET 10; -- page 2
Strengths:

  • Simple to implement and reason about
  • Easy jump‑to‑page UX
  • Fine for small, relatively static datasets

Weaknesses:

  • Performance degrades at high offsets because the database must scan and skip rows
  • Susceptible to duplicates or gaps when new rows are inserted or deleted between requests
  • Requires deterministic ordering or results will be inconsistent
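For concreteness, the derived-offset query can be sketched with Python's standard-library sqlite3 module. The posts table mirrors the article's example; the helper name and demo data are illustrative:

```python
import sqlite3

def fetch_page(conn, page, page_size=10):
    """Return one page of posts using LIMIT/OFFSET (page is 1-based)."""
    offset = (page - 1) * page_size
    return conn.execute(
        "SELECT id, created_at FROM posts "
        "ORDER BY created_at DESC, id DESC "  # deterministic ordering
        "LIMIT ? OFFSET ?",
        (page_size, offset),
    ).fetchall()

# Demo with an in-memory database holding 25 rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.executemany(
    "INSERT INTO posts (id, created_at) VALUES (?, ?)",
    [(i, f"2024-01-{i:02d}") for i in range(1, 26)],
)

page2 = fetch_page(conn, page=2)
print([row[0] for row in page2])  # [15, 14, 13, 12, 11, 10, 9, 8, 7, 6]
```

Note that the database still has to walk and discard the first `offset` rows on every request, which is exactly why deep pages get slow.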

More background on LIMIT/OFFSET behavior is available in the PostgreSQL documentation.

Cursor pagination

Cursor pagination continues from a specific position in a sorted sequence.

  • Client requests the first page with page_size
  • Server responds with rows plus an opaque cursor
  • Client requests the next page by sending back that cursor
Request 1:
page_size: 50
cursor: None
Response 1:
{
  "items": [ ... 50 rows ... ],
  "cursor": "some opaque string"
}
Request 2:
page_size: 50
cursor: "some opaque string"
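One common way to produce the opaque cursor is to serialize the last row's sort-key values. This is a sketch, not the NewsDataHub wire format: the payload fields and the base64-of-JSON encoding are assumptions.

```python
import base64
import json

def encode_cursor(last_created_at: str, last_id: int) -> str:
    """Pack the last row's sort-key values into an opaque token (assumed format)."""
    payload = {"created_at": last_created_at, "id": last_id}
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

def decode_cursor(cursor: str) -> dict:
    """Server-side inverse; clients should treat the token as opaque and never parse it."""
    return json.loads(base64.urlsafe_b64decode(cursor.encode()))

cursor = encode_cursor("2024-06-01T12:00:00Z", 42)
print(decode_cursor(cursor))  # {'created_at': '2024-06-01T12:00:00Z', 'id': 42}
```

Keeping the token opaque lets the server change the payload later without breaking clients.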

Under the hood, the server uses a monotonic sort key and seeks from the last item rather than skipping N rows.

-- First page
SELECT *
FROM posts
ORDER BY created_at DESC, id DESC
LIMIT 10;

-- Next page using the last known sort-key values
SELECT *
FROM posts
WHERE (created_at, id) < (:last_created_at, :last_id)
ORDER BY created_at DESC, id DESC
LIMIT 10;
Strengths:

  • Scales well on large datasets by seeking instead of skipping
  • Stable under inserts and deletes when using a deterministic sort key
  • Lower latency at deep pages

Weaknesses:

  • More complex to implement
  • Jump‑to‑page UX requires a separate service‑side abstraction
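A minimal keyset-pagination loop, assuming the same (created_at, id) sort key as the SQL above (sqlite3 is used only for the demo; SQLite has supported the row-value comparison since 3.15):

```python
import sqlite3

def fetch_after(conn, last_key=None, page_size=10):
    """Fetch the next page by seeking past the last (created_at, id) seen."""
    if last_key is None:  # first page: no predicate, just sort and limit
        sql = ("SELECT id, created_at FROM posts "
               "ORDER BY created_at DESC, id DESC LIMIT ?")
        args = (page_size,)
    else:  # seek: strictly before the last row in the sort order
        sql = ("SELECT id, created_at FROM posts "
               "WHERE (created_at, id) < (?, ?) "
               "ORDER BY created_at DESC, id DESC LIMIT ?")
        args = (*last_key, page_size)
    return conn.execute(sql, args).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(i, f"2024-01-{i:02d}") for i in range(1, 26)])

page1 = fetch_after(conn)
last = (page1[-1][1], page1[-1][0])  # (created_at, id) of the last row served
page2 = fetch_after(conn, last)
print([r[0] for r in page2])  # [15, 14, 13, 12, 11, 10, 9, 8, 7, 6]
```

With a matching index on (created_at DESC, id DESC), the seek touches only the rows it returns, regardless of how deep the page is.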

Choosing between offset and cursor (quick checklist)

Choose cursor if:

  • The dataset is large or grows quickly
  • Results change frequently and you need consistency
  • You care about deep pagination performance
  • You can enforce a monotonic sort with a unique tie‑breaker

Choose offset if:

  • The dataset is small and rarely changes
  • You need exact page numbers for UX or reporting
  • Implementation simplicity is the priority
Best practices and pitfalls

  • Sorting
    • Use a monotonic key plus a unique tie‑breaker, for example: ORDER BY created_at DESC, id DESC
    • Create a matching index to support the sort and seek efficiently
  • Stability
    • Keep filters constant during a pagination run; changing filters invalidates the current stream
  • Reliability
    • Log the last cursor and the first/last item IDs per page to support resumability
  • Pitfalls
    • Missing ORDER BY with offset leads to inconsistent pages
    • Interpreting the cursor string client‑side causes fragile clients
    • Attempting page jumps with cursor without a server abstraction leads to confusion
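The duplicate/gap pitfall above is easy to reproduce: a row inserted between two offset requests shifts the window, and page 2 re-serves a row from page 1 (illustrative sqlite3 demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(i, f"2024-01-{i:02d}") for i in range(1, 21)])

q = "SELECT id FROM posts ORDER BY created_at DESC, id DESC LIMIT 5 OFFSET ?"
page1 = [r[0] for r in conn.execute(q, (0,))]   # ids 20..16

# A new post arrives before the client asks for page 2,
# pushing every existing row down by one position.
conn.execute("INSERT INTO posts VALUES (21, '2024-01-21')")

page2 = [r[0] for r in conn.execute(q, (5,))]   # ids 16..12
print(sorted(set(page1) & set(page2)))          # [16] -- duplicate across pages
```

Cursor pagination avoids this because the seek predicate anchors to the last row's values rather than to a positional offset.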
Further reading

  • What is an API?[1]
  • HTTP Methods Explained[2]
  • API Authentication: Keys, Tokens, and OAuth[3]
  • NewsDataHub API Pagination: Efficient Data Fetching with Cursors[4]
  • Wikipedia: Pagination[5]
  • Slack Engineering on pagination[6]
  • PostgreSQL docs on LIMIT and OFFSET[7]
FAQ

  • When should I prefer cursor pagination?
    • When datasets are large or frequently updated, and you need consistent ordering and performance.
  • Do I need a special index for cursor pagination?
    • Yes. Index the sort key and tie‑breaker, e.g., (created_at DESC, id DESC).
  • Can I jump to page 50 with cursors?
    • Not directly. Build a service abstraction that maps page numbers to anchor points if this UX is required.
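One possible shape for that page-jump abstraction (a sketch under assumptions, not a prescribed design): precompute an anchor sort key per page boundary, then serve each page with the seek query shown earlier.

```python
import sqlite3

def build_anchors(conn, page_size=10):
    """Map page number -> (created_at, id) of the row just before that page.

    A full scan is fine for a sketch; a real service would refresh these
    anchors periodically or maintain them incrementally.
    """
    rows = conn.execute(
        "SELECT id, created_at FROM posts ORDER BY created_at DESC, id DESC"
    ).fetchall()
    anchors = {1: None}  # page 1 starts from the top, no seek needed
    for n, i in enumerate(range(page_size, len(rows), page_size), start=2):
        prev = rows[i - 1]            # last row of the previous page
        anchors[n] = (prev[1], prev[0])
    return anchors

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(i, f"2024-01-{i:02d}") for i in range(1, 26)])

print(build_anchors(conn)[2])  # ('2024-01-16', 16) -- seek past this for page 2
```

Because anchors go stale as rows are inserted or deleted, page numbers served this way are approximate unless the anchors are rebuilt on every change.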

Olga S.

Founder of NewsDataHub — Distributed Systems & Data Engineering
