Practical Guidance for Using Supabase

Last updated: 2026-04-22

This guide is a practical, engineer-focused overview of Supabase: what it is good at, where it can bite you, how to structure projects, how to think about security and cost, and how to ship with it responsibly.

It is written to be useful for real product work rather than as a feature brochure.


Table of Contents

  1. What Supabase Actually Is
  2. When Supabase Is a Great Fit
  3. When Supabase Is a Poor Fit
  4. A Good Mental Model
  5. Current Pricing Model and Cost Control
  6. Recommended Project Setup
  7. Local Development and Migrations
  8. Database Design Advice
  9. Auth, JWTs, and Row Level Security
  10. Using Supabase Auth with AWS Lambda
  11. Storage Patterns
  12. Realtime Guidance
  13. Edge Functions vs Database Functions vs Client Logic
  14. Performance, Scaling, and Observability
  15. Backups, Recovery, and Branching
  16. Practical iOS / Swift Guidance
  17. Common Pitfalls
  18. A Launch Checklist
  19. Suggested Starter Architecture
  20. Official References

What Supabase Actually Is

Supabase is not just “hosted Postgres.” A Supabase project gives you a dedicated Postgres instance plus a bundle of productized backend services around it: Auth, auto-generated data APIs, Storage, Realtime, and Edge Functions.

That combination is why Supabase often feels much faster than rolling your own Postgres + API + auth + file storage stack.

The core design principle is important:

Supabase is PostgreSQL-first.

If you understand relational schema design, SQL, migrations, roles, and policies, you can get a lot of value out of it.


When Supabase Is a Great Fit

Supabase is a strong choice when you want managed Postgres plus auth, storage, realtime, and serverless functions without assembling and operating each piece yourself.

Typical good fits: relational product data, small teams that need to ship quickly, and apps where RLS can carry most of the authorization model.


When Supabase Is a Poor Fit

Supabase may be the wrong default when your main need is one of these:

In those cases, raw Postgres on AWS RDS / Aurora or a more custom backend may be a better long-term fit.


A Good Mental Model

flowchart TD
    A[Client App<br/>Web / iOS / Android] --> B[Supabase Auth]
    A --> C[Data API / Client SDK]
    A --> D[Storage]
    A --> E[Realtime]
    C --> F[(Postgres)]
    B --> F
    D --> F
    G[Edge Functions] --> F
    G --> H[Third-party APIs]
    I[SQL Migrations / CLI / Branches] --> F

The practical way to think about it

If you build with that mental model, Supabase usually feels coherent.


Current Pricing Model and Cost Control

Pricing changes over time. Always verify current limits and rates on the official pricing pages before making commitments.

1. Billing structure

Supabase bills by organization, and each organization has its own subscription plan. Different plans cannot be mixed inside one organization. If you want some projects on Free and others on Pro, you need separate organizations.

Also, each project has its own dedicated Postgres instance, and every project increases compute cost.

2. Practical pricing takeaway

For many teams, the base plan is not the whole bill. In practice, your total cost can include the base plan fee, compute for each additional project, usage overages (MAUs, storage, egress, Realtime), and add-ons such as PITR or read replicas.

3. Current officially documented examples and rates

At the time of writing, the official docs/pages state:

4. Spend Cap

Supabase’s Spend Cap is one of the most important cost-control features.

When Spend Cap is on, usage beyond quota for covered items is blocked instead of billed. When Spend Cap is off, your services continue and you pay overages.

Important details:

5. Cost advice that actually matters

  1. Count projects, not just users. Every project adds compute.
  2. Use Pro deliberately. The base plan is only part of the bill.
  3. Monitor MAUs carefully. MAU counts distinct users who log in or refresh a token during the billing cycle.
  4. Watch Realtime and Storage. These are easy to ignore until they are not.
  5. Treat PITR and replicas as production add-ons, not defaults.
  6. If you buy via AWS Marketplace, re-check billing behavior, because marketplace billing has some differences.
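
To make point 3 concrete, here is a toy sketch of the counting model (the types and names are hypothetical illustrations, not Supabase's billing code): MAU counts distinct users, not total auth events.

```typescript
// Toy illustration: one user logging in many times is still one MAU.
type AuthEvent = { userId: string; at: Date };

function monthlyActiveUsers(
  events: AuthEvent[],
  year: number,
  monthIndex: number // 0-based, like Date#getUTCMonth()
): number {
  const ids = new Set(
    events
      .filter(
        (e) =>
          e.at.getUTCFullYear() === year && e.at.getUTCMonth() === monthIndex
      )
      .map((e) => e.userId)
  );
  return ids.size;
}

const events: AuthEvent[] = [
  { userId: "ana", at: new Date(Date.UTC(2026, 3, 1)) },
  { userId: "ana", at: new Date(Date.UTC(2026, 3, 2)) },
  { userId: "ben", at: new Date(Date.UTC(2026, 3, 3)) },
];

console.log(monthlyActiveUsers(events, 2026, 3)); // 2
```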

6. A realistic cost mindset

Supabase pricing is usually easier to reason about than per-read/per-write systems, but it is still very possible to overspend if you spin up many projects, leave branches and replicas running, enable add-ons by default, or ignore Realtime, storage, and egress usage.


Small team / startup recommendation

A very practical default is:

Environment strategy

Use this default split:

Environment | Purpose | Guidance
Local | fast iteration, migrations, testing | always use CLI + Docker
Staging | integration checks, QA | keep config close to prod
Production | user traffic | minimal manual edits

Organization strategy

Because plans are organization-based, decide early whether you want a single organization on one plan, or separate organizations so that some projects stay on Free while others run on Pro.

Branching strategy

Supabase branches are useful for safe experimentation. Each branch has its own isolated database, endpoints, and configuration.

Branches are isolated, which is powerful, but it also means they are real environments. Do not create them casually and forget them.


Local Development and Migrations

Supabase CLI is one of the best reasons to take Supabase seriously.

Why local development matters

It gives you:

What you need installed

At minimum: the Supabase CLI, Docker (the local stack runs in containers), and Node.js if you install the CLI via npm.

Quick local workflow

# install the CLI (one option)
npm install supabase --save-dev

# initialize Supabase in your repo
npx supabase init

# start the local Supabase stack
npx supabase start

What you should see after supabase start

The CLI will print the local endpoints and keys you need for development.

It will look roughly like this:

API URL: http://localhost:54321
DB URL: postgresql://postgres:postgres@localhost:54322/postgres
Studio URL: http://localhost:54323
Mailpit URL: http://localhost:54324
anon key: <local-anon-key>
service_role key: <local-service-role-key>

In practice, the API URL (your app's endpoint), the DB URL (for psql and other tools), and the Studio URL (the local dashboard) are the ones you will reach for most.

Minimal local repo layout

After supabase init, your repo will have a supabase/ directory.

A practical setup quickly becomes:

your-app/
  supabase/
    config.toml
    migrations/
    seed.sql

You should treat these as source-controlled project files, not throwaway local state.

Concrete example 1: prove the local database works

If you are new to Supabase local dev, start with a tiny table and seed data before adding auth or storage.

Create a migration:

npx supabase migration new create_todos_table

Then edit the generated file in supabase/migrations/<timestamp>_create_todos_table.sql:

create table public.todos (
  id bigint generated always as identity primary key,
  title text not null,
  is_done boolean not null default false,
  created_at timestamptz not null default now()
);

Create supabase/seed.sql:

insert into public.todos (title, is_done)
values
  ('Install the Supabase CLI', true),
  ('Start the local stack', true),
  ('Create the first migration', true),
  ('Build the app feature', false);

Then reset the local database so migrations and seeds are applied from scratch:

npx supabase db reset

At that point you should be able to open local Studio, browse public.todos, and see the four seeded rows.

That is the fastest way to confirm your local environment is actually working.
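
For a quick sanity check, you can run a query like this in local Studio's SQL editor (or via psql against the DB URL printed by supabase start):

```sql
-- Should return the four rows inserted by seed.sql.
select id, title, is_done
from public.todos
order by id;
```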

Concrete example 2: connect a local app to the local Supabase stack

Suppose you are building a small web app with Vite.

Create .env.local:

VITE_SUPABASE_URL=http://localhost:54321
VITE_SUPABASE_ANON_KEY=<local-anon-key>

Create src/lib/supabase.ts:

import { createClient } from "@supabase/supabase-js";

export const supabase = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_ANON_KEY
);

Then query the local table:

import { supabase } from "./lib/supabase";

async function loadTodos() {
  const { data, error } = await supabase
    .from("todos")
    .select("id, title, is_done, created_at")
    .order("id", { ascending: true });

  if (error) {
    console.error(error);
    return;
  }

  console.table(data);
}

loadTodos();

This is useful because it proves three things at once: the local API is reachable, your environment variables are wired correctly, and your migration and seed actually ran.

Concrete example 3: test local auth without a real email provider

One of the nicest local-dev features is that auth emails are captured by Mailpit.

For a simple email/password flow, your app code can be:

const { data, error } = await supabase.auth.signUp({
  email: "dev@example.com",
  password: "dev-password-1234",
});

if (error) {
  console.error(error);
} else {
  console.log("Signed up", data.user?.id);
}

Then open http://localhost:54324 to inspect the confirmation email locally.

That gives you a practical auth loop for development:

  1. sign up from the app
  2. inspect the email in Mailpit
  3. click the confirmation link
  4. sign in and continue testing

This is much better than trying to wire real SMTP on day one.

Security note for local dev

If you are on an untrusted network, Supabase docs recommend binding the local stack to 127.0.0.1 via a dedicated Docker network. Do not expose your local stack publicly.

Migration workflow recommendation

A practical workflow:

  1. Make schema changes locally
  2. Capture them as SQL migrations
  3. Commit migration files
  4. Apply them in staging
  5. Apply them in production

Example: capture dashboard changes as migrations

If you like using local Studio to design tables first, that is fine. Just make sure you capture the result as SQL before you move on.

A practical local flow looks like this:

# 1. make table or column changes in local Studio

# 2. create a migration from the local database diff
npx supabase db diff --schema public -f add_status_to_todos

# 3. rebuild local state from migrations + seed
npx supabase db reset

# 4. lint the local schema
npx supabase db lint

This gives you a durable migration file in supabase/migrations/ instead of leaving the schema change trapped in the dashboard.

Example: use local config for auth providers

If you want to test OAuth locally, configure it in supabase/config.toml rather than assuming the hosted project settings will magically apply.

For example:

[auth.external.github]
enabled = true
client_id = "env(SUPABASE_AUTH_GITHUB_CLIENT_ID)"
secret = "env(SUPABASE_AUTH_GITHUB_SECRET)"
redirect_uri = "http://localhost:54321/auth/v1/callback"

And in your project root .env:

SUPABASE_AUTH_GITHUB_CLIENT_ID="your-github-client-id"
SUPABASE_AUTH_GITHUB_SECRET="your-github-client-secret"

After changing auth config, restart the local stack:

npx supabase stop
npx supabase start

Good rule

Avoid making production-only dashboard edits when the same change should live in migration files.

Use the dashboard for exploration and inspection; use migrations for durable change management.


Database Design Advice

Supabase becomes much easier to operate when your schema is boring in a good way.

Good defaults

Suggested conventions

Example starter schema

create table public.profiles (
  id uuid primary key references auth.users(id) on delete cascade,
  username text unique not null,
  avatar_path text,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

create table public.projects (
  id uuid primary key default gen_random_uuid(),
  owner_id uuid not null references auth.users(id) on delete cascade,
  name text not null,
  description text,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

create table public.project_members (
  project_id uuid not null references public.projects(id) on delete cascade,
  user_id uuid not null references auth.users(id) on delete cascade,
  role text not null check (role in ('owner', 'editor', 'viewer')),
  created_at timestamptz not null default now(),
  primary key (project_id, user_id)
);
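
The profiles and projects tables both carry an updated_at column; one conventional way to keep it accurate is a plain Postgres trigger (a generic sketch, not Supabase-specific):

```sql
-- Keep updated_at current on every row update.
create or replace function public.set_updated_at()
returns trigger
language plpgsql
as $$
begin
  new.updated_at := now();
  return new;
end;
$$;

create trigger projects_set_updated_at
before update on public.projects
for each row
execute function public.set_updated_at();
```

The same trigger function can be reused on profiles or any other table with an updated_at column.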

Use database functions for stable business actions

A good heuristic:

Examples of good DB function candidates:
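
As one hedged example building on the project_members table above (the function itself is illustrative): a "join project" action is a good database-function candidate because it is a single transactional, data-centric write tied to the caller's identity.

```sql
-- Hypothetical RPC: the signed-in caller joins a project as a viewer.
-- SECURITY INVOKER keeps the caller's RLS context in force.
create or replace function public.join_project(target_project_id uuid)
returns void
language plpgsql
security invoker
as $$
begin
  insert into public.project_members (project_id, user_id, role)
  values (target_project_id, auth.uid(), 'viewer')
  on conflict (project_id, user_id) do nothing;
end;
$$;
```

A client could then call it with supabase.rpc("join_project", { target_project_id }).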


Auth, JWTs, and Row Level Security

This is the section where many Supabase projects either become elegant or become dangerous.

The core model

What “built-in auth” actually means

When people say Supabase has built-in auth, they do not mean “security is automatic” or “authorization is solved.”

They mean Supabase already includes a managed authentication system as part of the platform, so you do not need to separately build or bolt on the common identity pieces yourself.

In practice, that usually means Supabase gives you these pieces out of the box:

That is the “built-in” part: the identity system is already there and wired into the rest of the platform.

What built-in auth does not mean

Built-in auth does not mean:

Authentication answers:

Who is this user?

Authorization answers:

What is this user allowed to do?

Supabase Auth helps with the first question. RLS is how you enforce the second.

Example 1: normal email/password sign-up

Suppose you are building a notes app.

Without a built-in auth system, you would need to create or integrate:

With Supabase Auth, the basic sign-up flow can be as simple as:

const { data, error } = await supabase.auth.signUp({
  email: "ana@example.com",
  password: "correct-horse-battery-staple"
});

What Supabase handles for you here: secure password storage, the user record in auth.users, the confirmation email, and session/JWT issuance.

Your app still needs to decide what that user can access after login.

Example 2: passwordless email login

Suppose you want a lower-friction sign-in flow for a mobile app or lightweight SaaS.

const { error } = await supabase.auth.signInWithOtp({
  email: "ana@example.com"
});

Supabase can handle the email-based login flow and session creation for you. That means you avoid building password storage and reset UX entirely.

This is still built-in auth, because Supabase is operating the identity workflow rather than your team building it from scratch.

Example 3: Google login

Suppose users want to sign in with an existing Google account.

const { data, error } = await supabase.auth.signInWithOAuth({
  provider: "google"
});

Supabase handles the OAuth dance, receives the identity result, and creates the authenticated session your client will use afterward.

Again, built-in auth does not mean Google users can see every project in your database. It only means the platform has identified the user and issued the session.

Example 4: built-in auth plus RLS

Here is the key relationship:

That is why a policy like this matters:

create policy "users can read own notes"
on public.notes
for select
using (auth.uid() = user_id);

In plain English: a signed-in user may select a row from public.notes only when that row's user_id equals their own auth.uid().

So if Ana signs in successfully, she is authenticated. But she still cannot read Ben’s notes, because the database policy blocks it.

That is the practical meaning of “built-in auth” in Supabase: the platform establishes who the user is, and your RLS policies decide what that user can touch.

Example 5: server-side admin actions

Sometimes you need trusted backend-only behavior, such as inviting a user before they ever sign in.

That is where built-in auth also helps on the server side:

const { data, error } = await supabase.auth.admin.createUser({
  email: "new-user@example.com",
  email_confirm: true
});

This is useful for admin tooling, internal systems, migrations, or invite flows, but it should run only in a trusted server context with a server-only key. For supabase.auth.admin.*, that means a service_role key.

This is another good example of what built-in auth means: Supabase is not just storing tokens. It exposes an actual managed identity service with both end-user and admin workflows.

The single most important rule

Do not rely on the client app for authorization.

Use RLS to enforce it in the database.

Second most important rule

Never ship server-only secrets to the client.

For public/mobile/web apps, use the project URL plus a publishable key in the client. Keep server-only keys only in trusted server contexts.

RLS principles

  1. Enable RLS on exposed tables
  2. Write policies for each action you allow
  3. Assume the client can call your API directly
  4. Make the database prove access is allowed

Example: user-owned rows

create table public.notes (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null references auth.users(id) on delete cascade,
  body text not null,
  created_at timestamptz not null default now()
);

alter table public.notes enable row level security;

create policy "users can read own notes"
on public.notes
for select
using (auth.uid() = user_id);

create policy "users can insert own notes"
on public.notes
for insert
with check (auth.uid() = user_id);

create policy "users can update own notes"
on public.notes
for update
using (auth.uid() = user_id)
with check (auth.uid() = user_id);

create policy "users can delete own notes"
on public.notes
for delete
using (auth.uid() = user_id);

Example: membership-based access

alter table public.projects enable row level security;

create policy "members can read projects"
on public.projects
for select
using (
  exists (
    select 1
    from public.project_members pm
    where pm.project_id = projects.id
      and pm.user_id = auth.uid()
  )
);
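
Because this policy runs on every read of projects, the membership lookup should be index-backed. The composite primary key on project_members already covers lookups by project; an additional index (name hypothetical) covers per-user lookups:

```sql
-- Speeds up "which projects can this user see" lookups.
create index if not exists project_members_user_id_idx
  on public.project_members (user_id);
```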

Good security habits

Things that commonly go wrong


Using Supabase Auth with AWS Lambda

Supabase Auth works fine with AWS Lambda.

The important design choice is this:

flowchart LR
    A[Web or Mobile App] -->|Sign in| B[Supabase Auth]
    B -->|Access token JWT| A
    A -->|Authorization Bearer JWT| C[API Gateway]
    C --> D[AWS Lambda]
    D -->|Verify token signature| E[Supabase JWKS]
    D -->|Publishable key plus user JWT| F[Supabase Data API]
    F --> G[(Postgres + RLS)]

The practical rule is simple:

If the request is acting on behalf of a signed-in user, Lambda should usually operate with that user’s JWT, not with a service role key.

That keeps your existing RLS policies in force.

End-to-end example: client to Lambda to Supabase

In a browser or mobile client, send the Supabase access token to your AWS API:

const {
  data: { session },
} = await supabase.auth.getSession();

if (!session) {
  throw new Error("Not signed in");
}

const response = await fetch(`${API_BASE_URL}/notes`, {
  headers: {
    Authorization: `Bearer ${session.access_token}`,
  },
});

Then in Lambda, verify the JWT and create a Supabase client that uses the same user token.

import type { APIGatewayProxyHandlerV2 } from "aws-lambda";
import { createClient } from "@supabase/supabase-js";
import { createRemoteJWKSet, jwtVerify } from "jose";

const SUPABASE_URL = process.env.SUPABASE_URL!;
const SUPABASE_PUBLISHABLE_KEY = process.env.SUPABASE_PUBLISHABLE_KEY!;
const SUPABASE_ISSUER = `${SUPABASE_URL}/auth/v1`;

const PROJECT_JWKS = createRemoteJWKSet(
  new URL(`${SUPABASE_ISSUER}/.well-known/jwks.json`)
);

function getBearerToken(headerValue?: string) {
  if (!headerValue?.startsWith("Bearer ")) {
    return null;
  }

  return headerValue.slice("Bearer ".length);
}

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  const authHeader = event.headers.authorization ?? event.headers.Authorization;
  const accessToken = getBearerToken(authHeader);

  if (!accessToken) {
    return {
      statusCode: 401,
      body: JSON.stringify({ error: "Missing bearer token" }),
    };
  }

  try {
    const { payload } = await jwtVerify(accessToken, PROJECT_JWKS, {
      issuer: SUPABASE_ISSUER,
    });

    if (typeof payload.sub !== "string") {
      return {
        statusCode: 401,
        body: JSON.stringify({ error: "Invalid subject claim" }),
      };
    }

    const supabase = createClient(
      SUPABASE_URL,
      SUPABASE_PUBLISHABLE_KEY,
      {
        accessToken: async () => accessToken,
      }
    );

    const { data, error } = await supabase
      .from("notes")
      .select("id, body, created_at")
      .order("created_at", { ascending: false })
      .limit(20);

    if (error) {
      throw error;
    }

    return {
      statusCode: 200,
      body: JSON.stringify({
        userId: payload.sub,
        notes: data,
      }),
    };
  } catch {
    return {
      statusCode: 401,
      body: JSON.stringify({ error: "Invalid or expired token" }),
    };
  }
};

Why this pattern is good: the token is verified before any data access, RLS remains in force for every query, and the Lambda never needs an elevated key for ordinary user traffic.

Why the accessToken option matters

When Lambda calls Supabase as the user, prefer giving the client the user’s JWT via the accessToken option.

That means your Lambda is effectively saying:

“Run this query as the signed-in user.”

This is usually the cleanest way to preserve your existing RLS model.
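
When debugging this flow, it can help to inspect a token's claims locally. The sketch below (Node runtime assumed; decodeJwtPayload is a hypothetical helper) decodes a JWT payload without verifying it, which is fine for logging but never a substitute for signature verification:

```typescript
// Decode -- do NOT trust -- a JWT payload, e.g. to log the sub claim.
// Signature verification must still happen separately (e.g. with jose).
function decodeJwtPayload(token: string): Record<string, unknown> {
  const parts = token.split(".");
  if (parts.length !== 3) {
    throw new Error("Not a JWT");
  }
  // The payload is the middle segment, base64url-encoded JSON.
  const json = Buffer.from(parts[1], "base64url").toString("utf8");
  return JSON.parse(json);
}
```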

Practical Lambda notes

Privileged admin or scheduled job flow

Some Lambda functions are not acting on behalf of a single end user.

Examples: scheduled cleanup jobs, admin invite flows, and system-level data migrations.

That is a different path:

flowchart LR
    A[Scheduler Queue or Admin API] --> B[AWS Lambda]
    B -->|Secret key or service_role key| C[Supabase Admin or Data API]
    C --> D[(Postgres)]
    B -. bypasses RLS .-> D

The important warning is:

Secret keys and service_role bypass RLS.

That is correct for trusted backend jobs, but it is the wrong default for ordinary user traffic.

Example: admin Lambda for invite flows

For supabase.auth.admin.*, Supabase requires a service_role key. Keep it only in Lambda environment variables and never expose it to clients.

import type { ScheduledHandler } from "aws-lambda";
import { createClient } from "@supabase/supabase-js";

const supabaseAdmin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
  {
    auth: {
      persistSession: false,
      autoRefreshToken: false,
      detectSessionInUrl: false,
    },
  }
);

export const handler: ScheduledHandler = async () => {
  const { data, error } = await supabaseAdmin.auth.admin.createUser({
    email: "new-user@example.com",
    email_confirm: true,
  });

  if (error) {
    throw error;
  }

  console.log("Created user", data.user?.id);
};

Use this path only when Lambda itself is the trusted authority.

Good default rules

  1. If the request starts with a signed-in user, forward the user’s JWT to Lambda and keep RLS active.
  2. If the function is an internal system job, use an elevated key deliberately and keep the scope narrow.
  3. Do not use a service_role key for routine user reads and writes unless Lambda is also enforcing authorization itself.

Storage Patterns

Supabase Storage integrates nicely with RLS, but only if you set it up correctly.

Important default

By default, Storage does not allow uploads to buckets without RLS policies. Policies are written against storage.objects.

Public assets bucket

Use for:

Rule of thumb:

Private user bucket

Use for:

Store files under a predictable path structure, for example:

users/<user_id>/avatars/<filename>
users/<user_id>/documents/<filename>
projects/<project_id>/attachments/<filename>

Then align RLS rules to those path patterns.
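
A small client-side helper (the name and kinds below are hypothetical) can keep uploads on that convention and reject filenames that would escape the user's prefix:

```typescript
// Builds storage object paths matching users/<user_id>/<kind>/<filename>,
// so RLS policies can key off the path segments.
function userFilePath(
  userId: string,
  kind: "avatars" | "documents",
  filename: string
): string {
  // Refuse separators and traversal so a caller cannot escape their prefix.
  if (
    filename.length === 0 ||
    filename.includes("/") ||
    filename.includes("..")
  ) {
    throw new Error("Invalid filename");
  }
  return `users/${userId}/${kind}/${filename}`;
}
```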

Example policy idea
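
For instance, here is a hedged sketch (the bucket name 'user-files' is an assumption) of a policy on storage.objects that matches the users/<user_id>/... path convention using the storage.foldername helper:

```sql
-- Users may read and write only objects under users/<their-own-id>/...
create policy "users manage own files"
on storage.objects
for all
using (
  bucket_id = 'user-files'
  and (storage.foldername(name))[1] = 'users'
  and (storage.foldername(name))[2] = auth.uid()::text
)
with check (
  bucket_id = 'user-files'
  and (storage.foldername(name))[1] = 'users'
  and (storage.foldername(name))[2] = auth.uid()::text
);
```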

Storage advice


Realtime Guidance

Supabase Realtime is powerful, but not every screen needs it.

Good uses for Realtime

Bad uses for Realtime

Practical rule

Use Realtime where the user experience is meaningfully improved.

Do not add it just because it is available.

Cost note

Realtime is billed on messages and peak connections, so careless use can become expensive.


Edge Functions vs Database Functions vs Client Logic

A lot of architecture confusion goes away if you decide where code belongs.

Put logic here | Best for | Avoid when
Client app | UI state, local validation, optimistic updates | security or trusted secrets are required
Database function (RPC) | data-centric actions, transactions, authorization-sensitive operations | you need third-party API calls or long external workflows
Edge Function | webhooks, Stripe, secret handling, trusted orchestration | the action is just a simple SQL transaction

Use a Database Function when

Use an Edge Function when

Use client logic when


Performance, Scaling, and Observability

Supabase is still Postgres. Most performance wins are still normal Postgres wins.

The practical performance ladder

  1. Fix schema design
  2. Add the right indexes
  3. Reduce query volume
  4. Inspect slow queries
  5. Tune app behavior
  6. Only then scale compute
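
For inspecting slow queries, pg_stat_statements is the usual starting point. Assuming the extension is enabled on your instance, a query like this surfaces the most expensive statements:

```sql
-- Most expensive queries by total execution time.
select
  calls,
  round(total_exec_time::numeric, 1) as total_ms,
  round(mean_exec_time::numeric, 2) as mean_ms,
  query
from pg_stat_statements
order by total_exec_time desc
limit 10;
```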

Useful built-in/available tools

Compute guidance

Each project has a dedicated Postgres instance. Free projects use Nano; paid projects start from Micro. Compute changes can incur downtime, so treat upgrades like production changes.

Disk guidance

Compute size affects baseline disk throughput and IOPS. Smaller instances can burst, but sustained load will expose their baseline limits.

AI / vector workloads

If you are using pgvector:
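
As a hedged sketch of a typical pgvector setup (the table name, column name, and 1536-dimension size are assumptions for illustration):

```sql
-- Embeddings table with an HNSW index for cosine similarity search.
create table public.documents (
  id bigint generated always as identity primary key,
  content text not null,
  embedding vector(1536)
);

create index on public.documents
using hnsw (embedding vector_cosine_ops);
```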


Backups, Recovery, and Branching

Daily backups vs PITR

Supabase projects are backed up daily, and paid plans can enable Point-in-Time Recovery (PITR) as an add-on.

PITR lets you restore to a chosen point with much finer granularity than daily backups, but it is a paid add-on and adds cost for every environment you enable it on.

When to use PITR

Use PITR when losing even a few minutes of production data would be a serious problem.

Do not turn it on blindly for every non-production environment.

Branching guidance

Supabase branching is great for:

A practical remote workflow from the docs is:

  1. create a preview branch
  2. switch to it
  3. make schema changes
  4. pull them locally with supabase db pull
  5. commit migrations
  6. push to Git

That is a good reminder that dashboard changes still need to end up in source control.


Practical iOS / Swift Guidance

If you are building an iOS app, Supabase can be a strong fit because it gives you:

For most apps:

Minimal initialization example

import Supabase

let supabase = SupabaseClient(
  supabaseURL: URL(string: "YOUR_SUPABASE_URL")!,
  supabaseKey: "YOUR_SUPABASE_PUBLISHABLE_KEY"
)

Query example

struct Instrument: Decodable, Identifiable {
  let id: Int
  let name: String
}

let instruments: [Instrument] = try await supabase
  .from("instruments")
  .select()
  .execute()
  .value

Practical mobile advice


Common Pitfalls

1. Treating Supabase like “magic backend”

It is still your database, your policies, and your schema.

2. Shipping without real RLS coverage

This is probably the most common serious mistake.

3. Doing everything from the client

Clients should not be trusted for secret handling or privileged actions.

4. Overusing jsonb

Use relational tables first. Reach for jsonb when flexibility is truly required.

5. Ignoring migrations

Dashboard-only changes drift fast.

6. Creating too many projects/environments

Every project costs compute. Branches and replicas also cost attention.

7. Turning on Realtime everywhere

Use it for clear UX value, not as a default transport.

8. Forgetting storage cleanup

Deleting database rows does not automatically delete the associated files in Storage.

9. Using the wrong key type

Public/mobile clients should not receive server-only keys.

10. Scaling compute before fixing queries

That usually burns money before it solves the actual problem.


A Launch Checklist

Security

Data

Cost

Operations


Suggested Starter Architecture

Here is a practical default for many apps:

flowchart TD
    A[App Client] --> B[Supabase Auth]
    A --> C[Supabase Data API / SDK]
    A --> D[Supabase Storage]
    A --> E[Realtime]
    C --> F[(Postgres + RLS)]
    G[Edge Functions] --> F
    G --> H[Stripe / Email / External APIs]
    I[Supabase CLI + Migrations] --> F
    J[Branches / Staging] --> F

Keep this architecture simple

That split will take you a surprisingly long way.


Final Advice

If you use Supabase well, it can be one of the fastest ways to ship a serious app without giving up relational data modeling.

The winning pattern is not “put everything in Supabase.” The winning pattern is:

  1. Use Postgres as the core truth
  2. Use RLS as the real authorization layer
  3. Use Edge Functions only where trust or external orchestration is needed
  4. Keep schema changes in migrations
  5. Monitor cost and environment sprawl early

If you ignore those rules, Supabase can become messy. If you follow them, it is often a very productive platform.


Official References

These are the primary official sources used to shape this guide. Re-check them for the latest details.