The language designed for AI agents

Build apps in kilobytes
not megabytes

Naze is a declarative, AI-native language that compiles to WebAssembly. One source file. Three output formats. Zero dependencies. Designed for a world where agents write the code.

395KB
WASM Runtime
157
Grammar Rules
400+
Tests Passing
109
Examples

Every act of creation is first an act of destruction.

Pablo Picasso

The Agent-First Web

The way we interact with the internet is about to fundamentally change.

Imagine never opening a browser tab again. You tell an AI agent what you need — it fetches, processes, and presents information. No clicking through menus. No scrolling through ads. No loading spinners. Just answers.

But Today's Web Wasn't Built For This

The average web page is 2.5 MB of HTML, CSS, and JavaScript — designed for human eyes, not machine consumption. When AI agents scrape these pages, they waste massive compute tokenizing bloated markup. Every unnecessary <div>, every CSS animation, every bundled framework adds tokens an agent must process and discard.

“A single web page tokenizes to ~165,000 tokens. That's reading a 400-page book — just to extract a restaurant's menu.”

The agent-first web needs a new foundation. A language that compiles to kilobytes, not megabytes. Where the binary is the API. Where AI agents can generate, serve, and consume applications without the overhead of a stack designed in the 1990s.

That language is Naze.

Why Naze?

Built from first principles for a world where AI writes most of the code.

AI-Native, Human-Readable

Designed as a compilation target for AI agents, but reads like a document humans can understand. One canonical form per concept.

Self-Contained Components

Components compose via ‘use’, but the compiler inlines everything into a flat render tree at build time. Each component is fully self-contained — an AI agent reads one file, not five.

No Middle Layers

No bundler, no transpiler, no virtual DOM. Intent reaches pixels directly: parse, typecheck, serialize, layout, render.

One Source, Every Platform

The same .naze file targets web (WASM + Canvas), desktop (native window), and mobile. Compile once, run everywhere.

Energy by Design

Every token saved compounds across billions of requests into massive energy and infrastructure savings. Fewer tokens per component, fewer files per change, fewer retries — multiplicatively less compute at planetary scale.

Distributed Intelligence

Every design choice feeds a discovery network that gets smarter with every interaction. Agents compose, publish, and reuse — accumulating solutions so that even small models can deliver frontier-quality results.

Introducing FAAD

A paradigm where AI agents manage the complete software lifecycle. Humans provide intent and approve results. Machines handle everything else.

FAAD: Fully Autonomous AI Development

Today, developers write code and AI assists. With FAAD, AI agents build, test, debug, deploy, and maintain software autonomously. Naze is engineered for this future — its grammar is small enough for local AI models, its components are self-contained for parallel generation, and its binary format is the API.

But FAAD doesn't stop at development. Agents publish what they build to the Discovery Network — a distributed, capability-indexed registry where every solution compounds. The more agents build, the less any agent needs to build from scratch. FAAD is the paradigm. The Discovery Network is where its output accumulates.

Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.

Antoine de Saint-Exupéry

Token Complexity

A mathematical framework for measuring the true cost of AI-driven development.

Ψ(L) = N · τ(L) · σ(L) · (1 + ρ(L))

Ψ: total token cost (the quantity being measured)
L: language / framework being measured
N: number of components (application size)
τ: tokens per component (language verbosity)
σ: files an AI must read per change (scatter)
ρ: retry rate (incorrect code frequency)

The key insight: Naze is built around this equation. Every language decision — self-contained components, inlined render trees, single-file scoping — exists to keep σ = 1. Any addition to the language must preserve that invariant. The result is O(N) complexity — linear instead of the superlinear scaling typical of multi-file frameworks, where σ and ρ multiply the cost of every change.

Cost at 100 Components

Estimated token cost (Ψ) for a 100-component application

Naze: 52K tokens (1x)
Svelte: 5.9M tokens (113x)
React + Tailwind + TS: 69M tokens (1,330x)
Java Spring: 840M tokens (16,150x)
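The token-cost framework in this section can be sketched in a few lines of Python, assuming the cost multiplies the number of components (N), tokens per component (τ), files read per change (σ), and retry overhead (1 + ρ). The parameter values below are illustrative assumptions: only Naze's row is anchored to the table, back-solving to roughly 520 tokens per component with σ = 1 and no retries.

```python
def psi(n_components: int, tau: float, sigma: float, rho: float) -> float:
    """Total token cost: components x verbosity x scatter x retry overhead."""
    return n_components * tau * sigma * (1 + rho)

# Hypothetical parameters. Only Naze's row is anchored to the table above:
# 52K tokens for 100 components implies tau ~ 520, sigma = 1, rho = 0.
naze = psi(100, tau=520, sigma=1, rho=0.0)
print(f"Naze: {naze:,.0f} tokens")  # Naze: 52,000 tokens

# A multi-file stack pays for scatter and retries multiplicatively:
# the same 100 components at sigma = 3 and rho = 0.5 cost 4.5x more
# even with identical verbosity.
scattered = psi(100, tau=520, sigma=3, rho=0.5)
print(f"Scattered: {scattered:,.0f} tokens")  # Scattered: 234,000 tokens
```

The point of the sketch is that σ and ρ multiply rather than add, which is why the gaps in the table grow so quickly.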

We do not inherit the earth from our ancestors; we borrow it from our children.

Native American proverb

The Energy Equation

One fewer token per component — a butterfly's wing. At planetary scale, a hurricane of savings.

E = Ψ · ε        CO₂ = E · I

ε: energy per token (~0.39 J on H100, FP8)
Ψ: total token cost (from the formula above)
I: grid carbon intensity (kg CO₂ per kWh)

The key insight: Every variable Naze minimizes — τ through minimal syntax, σ through self-containment, ρ through unambiguous grammar — multiplicatively reduces energy and carbon. At 98.9% fewer tokens per page, the environmental impact scales accordingly.

At Scale

AI-generating 1 million app pages

Energy

Naze: 0.19 MWh
Svelte: 21.5 MWh
React + Tailwind + TS: 253 MWh
Java Spring: 3,069 MWh

CO₂ Emissions

Naze: 76 kg CO₂
Svelte: 8,588 kg CO₂
React + Tailwind + TS: 101 t CO₂
Java Spring: 1,227 t CO₂
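The arithmetic behind these tables can be reproduced from the constants in this section: ~0.39 J per token, and a grid intensity of ~0.4 kg CO₂ per kWh (back-solved from the tables, since every CO₂ figure is 400x its MWh figure). The total token count is also a back-solved assumption: ~1,754 tokens per Naze page across 1 million pages.

```python
J_PER_TOKEN = 0.39      # H100, FP8 (stated in the text)
KG_CO2_PER_KWH = 0.4    # assumed grid intensity, back-solved from the tables

def energy_mwh(total_tokens: float) -> float:
    """E = Psi x epsilon, converted from joules to megawatt-hours."""
    return total_tokens * J_PER_TOKEN / 3.6e9

def co2_kg(mwh: float) -> float:
    """CO2 = E x I, with energy expressed in kWh."""
    return mwh * 1000 * KG_CO2_PER_KWH

# Assumption: ~1,754 tokens per Naze page x 1M pages (back-solved).
naze_tokens = 1_754 * 1_000_000
mwh = energy_mwh(naze_tokens)
print(round(mwh, 2), "MWh")       # 0.19 MWh
print(round(co2_kg(mwh)), "kg")   # 76 kg
```

Because energy and CO₂ are both linear in Ψ, every ratio from the token table (113x, 1,330x, 16,150x) carries through to these tables unchanged.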

The Sustainability Gap

Projected AI energy demand vs. data center capacity (TWh/year, 2023–2030)

Development

46% reduction
within capacity

AI agents generating & maintaining applications

(Chart: projected development energy demand vs. data center capacity, 0–1,000 TWh/year, 2023–2030)

Runtime (agent-to-agent)

7% reduction
still over capacity

Agents serving the web to humans via T1 binaries

(Chart: projected runtime energy demand vs. data center capacity at the selected web adoption level, 0–2,000 TWh/year, 2023–2030)
Conventional demand
Data center capacity
With Naze

The Infrastructure Dividend

OpenAI, Meta, Google, Microsoft, and Amazon are projected to spend over $1 trillion on AI data center infrastructure through 2030. At 10% web adoption, token-efficient languages like Naze reduce compute demand enough to avoid building a significant portion of that infrastructure entirely.

$73B
in avoided infrastructure
at 10% adoption

The IEA projects AI data centers will consume 945 TWh by 2030 — double today's levels. Every token we eliminate matters. Naze doesn't just make AI development faster — it makes the agent-first web sustainable.

Sources: IEA Energy and AI Report, HTTP Archive Web Almanac 2024, NVIDIA H100 benchmarks, company capex announcements (Meta, Microsoft, Google, Amazon 2024–2025)

Simplicity is the ultimate sophistication.

Leonardo da Vinci

The Three-Layer Architecture

One source file compiles into three distinct layers. Humans need all three. AI agents typically need only Layer 1.

L3

Presentation

UI tree, themes, animations, layout, colors, typography

L2

Interaction

Event handlers, navigation, actions, validation

L1

Data

State, computed values, server functions, data bindings

Three Outputs From One Source

app_data.bin
Layers 1 + 2 + 3 · ~7KB
Browsers / Humans
naze-manifest.json
Layers 1 + 2 · ~1KB
AI Agents
Headless binary
Layer 1 only · ~500B
Agent-to-Agent

The HTML/CSS/JS stack forces AI models to navigate three separate languages, a virtual DOM, bundler configurations, and framework-specific abstractions. Naze eliminates this waste — the compiled binary is the API.

Tiered Grammar

The grammar is partitioned into independent tiers. Lower tiers never depend on higher ones. An agent building dashboards needs only Tier 0.

T0
Core UI

Layout, elements, state, events, themes, components

T1
Data

Fetch, streams, server functions, storage, timers

T2
Database

Models, declarative queries

T3
AI

Prompt blocks, provider configuration

T4
Systems

Concurrency, file IO, networking (future)

The command nazec grammar --format gbnf exports the grammar for constrained decoding, enabling local 3-7B models to match cloud-scale quality at zero cost.

Train Any Model

Naze's grammar is small enough to fine-tune on a single GPU. Local or cloud, every model speaks Naze.

Tiny Training Footprint

The full grammar fits in ~52K tokens. Fine-tune a 3B-parameter model on consumer hardware in hours, not weeks.

Faster Development Cycles

Small grammar means fewer training iterations, faster convergence, and rapid iteration on model improvements.

Local Models

Run Naze-trained models entirely offline with Ollama. Constrained decoding via GBNF export guarantees syntactically valid output from any local model.

Cloud Models

Cloud models already excel at Naze — fewer tokens per prompt means lower cost, faster responses, and higher accuracy than multi-language stacks.

Traditional web stacks require models to master HTML, CSS, JavaScript, framework APIs, and build tooling. Naze replaces all of that with one grammar that exports directly to GBNF for constrained decoding. The result: any model, any size, produces correct code on the first try.

See It in Action

Components compose with use, themes resolve with dot notation — and the compiler inlines it all. σ stays at 1.

app "Counter" {
    state count = 0
    let title = "My Counter"

    column padding: 20px, gap: 16px {
        heading "{title}"
        text "Current count: {count}"

        rect width: 200px, height: 50px,
             color: #2563eb, radius: 8px {
            text "Increment"
            on click: set count = count + 1
        }

        rect width: 200px, height: 50px,
             color: #dc2626, radius: 8px {
            text "Reset"
            on click: set count = 0
        }
    }
}

Why curly braces? Modern LLMs handle indentation well in fresh code, but still miscount whitespace when editing existing files — exactly the agentic workflow Naze targets. In Python, one wrong indent is a syntax error; with braces, it's cosmetic. Braces make Naze fault-tolerant by design. Research source ↗

No build step required · Compiles in milliseconds · WASM output

The best way to predict the future is to invent it.

Alan Kay

The Discovery Network

Not an app store. Not a package manager. Not DNS. A distributed, capability-indexed discovery network with no single point of failure — where agents find services by what they do, not what they're called, and no amount of ad spend can buy a higher ranking.

Democratized discovery. A small bakery with 25 lines of .naze and a $0 marketing budget gets found the same way a Fortune 500 does — by matching what the agent is looking for. Every nazec build emits three projections from one file: a full app, a manifest, and a headless binary. Agents discover, compose, and generate — and every new service enriches the network.

1

Describe

User describes an app in natural language

2

Discover

Agent queries by capability — schema shape, state fields, functions — not by name

3

Generate

Agent composes .naze source from discovered services and new logic

4

Compile

Compiler emits app + manifest + headless binary — three projections from one file

5

Publish

Service self-announces to the discovery network; domain IS the identity

6

Grow

Each app enriches the registry, improving future discovery and generation

Four Discovery Mechanisms

Per-Domain

Like robots.txt — any site serves a manifest at .well-known/naze-manifest.json

Capability Index

Agents match against typed schemas in binaries, not text searches — e.g. matching { cart: list, total: number, has_fn: checkout }

Federated

Industry-specific registries with specialized trust models

Trust-Scored

Automated scoring based on data flow, personal data handling, and external domain usage
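Capability-index matching can be sketched as shape-matching over a manifest rather than text search. The manifest layout and field names below are hypothetical; the query shape mirrors the { cart: list, total: number, has_fn: checkout } example above.

```python
def matches(manifest: dict, query: dict) -> bool:
    """True if the manifest structurally satisfies the query shape."""
    state = manifest.get("state", {})         # field name -> declared type
    fns = set(manifest.get("functions", []))
    for key, want in query.items():
        if key == "has_fn":
            if want not in fns:               # required function missing
                return False
        elif state.get(key) != want:          # state field absent or wrong type
            return False
    return True

# Hypothetical manifest for a shop service
shop = {"state": {"cart": "list", "total": "number"},
        "functions": ["checkout", "add_item"]}

print(matches(shop, {"cart": "list", "total": "number", "has_fn": "checkout"}))  # True
print(matches(shop, {"has_fn": "refund"}))                                       # False
```

The match is purely structural: no names, keywords, or rankings are consulted, which is what makes the query independent of SEO.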

User-Initiated Discovery

find me a birthday cake for pickup near downtown, under $50
1

A local bakery already has a website — it stays untouched. They add ~25 lines of .naze alongside it: an agent interface exposing menu items, prices, location, and an order function. Their website serves humans; the .naze file serves agents. Two surfaces, same business, zero rewrite.

2

The agent queries the discovery network: { item_type: cake, location: nearby, price: <50, has_fn: order }. No search engine. No crawling HTML pages. No SEO ranking. Just a structural match against typed capabilities.

3

Four bakeries match — ranked by trust score, not ad spend. Trust is derived from the code itself: fewer external domains, less personal data collection, fewer device API requests = higher score. Simpler, more honest code ranks higher. The incentive is inverted — less tracking means better ranking, not worse. The agent reads their headless binaries directly: ~500 bytes each. No HTML to parse, no CSS to interpret, no JavaScript to execute.

4

The agent composes a comparison view with photos, prices, and a one-tap order button — an app that didn't exist 2 seconds ago. The user orders. The composed app is saved back to the registry.
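The inverted ranking incentive from step 3 can be sketched as a score that only subtracts: every tracking-like trait costs points, so simpler code ranks higher. The weights and starting value here are illustrative assumptions, not part of any Naze specification; the inputs (external domains, personal-data fields, device API requests) are the traits named above.

```python
def trust_score(external_domains: int, personal_fields: int, device_apis: int) -> float:
    """Start from full trust; each tracking-like trait deducts points.
    Weights are hypothetical, chosen only to show the inversion."""
    score = 100.0
    score -= 10 * external_domains   # each third-party domain contacted
    score -= 5 * personal_fields     # each personal-data field collected
    score -= 5 * device_apis         # each device API requested
    return max(score, 0.0)

honest = trust_score(external_domains=0, personal_fields=1, device_apis=0)
tracker = trust_score(external_domains=6, personal_fields=4, device_apis=3)
print(honest > tracker)  # True: less tracking ranks higher
```

Because the score is derived from the code itself, a service cannot buy its way up: the only way to rank higher is to track less.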

Traditional Web

User → Search Engine → 10 blue links → User clicks → 3MB HTML/CSS/JS → User browses → repeat
  • Ranked by SEO and ad budget
  • User does the work: clicking, reading, comparing
  • Each page loads ~3MB of HTML/CSS/JS
  • No app built. Energy spent parsing pages you never use.

Discovery Network

User → Agent → Discovery Network → Agent reads T1 binaries → Agent composes app → User
  • Ranked by trust score, not ad spend
  • Agent does the work: querying, reading, composing
  • ~500-byte binaries, no HTML/CSS/JS parsed
  • Working app delivered in seconds. Fraction of the energy.

No one built a “cake finder app.” No one submitted to an app store. No one rewrote their website. The bakery added an agent interface alongside their existing site, and the network did the rest — at a fraction of the energy cost of crawling and parsing the traditional web.

The Living Agentic Network

The network doesn't just serve humans — agents are both consumers and producers. Like developers posting to open-source registries, but automated, continuous, and composable.

1

The “cake comparison” app from the previous example gets published back to the network. It's now a discoverable, composable service — not just a one-off result.

2

A different agent, composing a “dinner party planner,” queries the network for services with { category: food, has_fn: order }. It discovers the cake comparison service alongside a catering service and a venue finder.

3

The agent composes all three into a “party planner” app — cake ordering, catering menu, and venue booking in one interface. No human asked for this app. No business built it. An agent composed it from the network.

4

The party planner is published back. Next time someone says “plan my daughter's birthday party”, the agent discovers it instantly — no generation needed, just discovery. Zero tokens spent regenerating what already exists.
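The "zero tokens spent regenerating" behavior in step 4 is essentially a cache keyed by capability. A minimal sketch, with a hypothetical in-memory registry standing in for the distributed network:

```python
registry: dict[frozenset, str] = {}   # capability shape -> published .naze source

def discover_or_generate(capabilities: set, generate) -> tuple:
    """Return (app source, tokens spent). Discovery is free; generation pays."""
    key = frozenset(capabilities)
    if key in registry:
        return registry[key], 0          # already solved: pure discovery
    source, cost = generate(capabilities)
    registry[key] = source               # publish back to the network
    return source, cost

# Hypothetical generator: producing the app from scratch costs 5,000 tokens.
fake_generate = lambda caps: ('app "party planner" { ... }', 5_000)

_, first = discover_or_generate({"cake", "venue", "catering"}, fake_generate)
_, second = discover_or_generate({"cake", "venue", "catering"}, fake_generate)
print(first, second)  # 5000 0
```

The first request pays the generation cost; every later request with the same capability shape is answered from the registry for free, which is the diminishing cold-start effect described below.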

What emerges from a network where agents both consume and produce

Strengthened Pathways

Popular, useful compositions get discovered more often. The “cake comparison” app that works well gets reused 1,000 times instead of being regenerated 1,000 times. Useful paths strengthen; unused ones fade.

Immune System

Agents that discover a service behaving differently than its manifest claims — or producing bad results — flag it. Trust scores decay. The network self-heals without a human moderator.

Pattern Recognition

If “cake + venue + catering” gets composed together 500 times, that pattern itself becomes discoverable. Future agents don't even need to figure out the combination — the network already knows it.

Natural Selection

A cleaner implementation of the same capability appears? Agents start preferring it — higher trust score, faster response. The old one quietly fades. Code evolves without anyone deprecating anything.

Diminishing Cold-Start

Over time, fewer requests require generation from scratch. The network has already solved most common problems through accumulated compositions. Token cost per request approaches zero for common patterns.

Emergent Composition

No one planned the “party planner.” It emerged from agents composing individual services. Apps build on apps, layers deep — complexity that no single entity designed.

Model-Agnostic Collaboration

The lingua franca is Naze, not any AI provider's API. A Claude agent's published service is discovered identically by GPT, Gemini, or a local LLaMA model. Different providers, different models — same network, same structural matching. Collaboration without coordination.

Distributed Intelligence

A powerful model solves a complex composition once and publishes it. That solution is the knowledge — frozen on the network. As the network matures, a small 7B model running on your phone can deliver results that today require a frontier model — because it's discovering proven solutions, not reasoning from scratch. The intelligence floor drops. Access to good results decouples from access to expensive models. And smaller models use dramatically less compute per inference, compounding the energy savings. The network becomes a great equalizer.

A neural network of code. The analogy is almost literal.

(Interactive diagram: the Discovery Network rendered as a neural network, with Services, Compositions, Trust Scores, Agent Usage, Flagging, and Popular Paths as its nodes and connections.)

The network learns, adapts, and grows — not through a central algorithm, but through the distributed behavior of every agent that uses it. Every discovery, every composition, every flag makes the next interaction smarter.

Signal Gas: How the Network Stays Honest

Blockchain networks charge “gas” — a tiny fee per transaction that compensates validators and prevents spam. The Discovery Network uses the same principle, but the currency is signal instead of money. Every agent that consumes a service contributes back a small, structured observation — ~20-100 tokens of feedback that costs the agent almost nothing, but aggregated across millions of interactions, keeps the entire network's trust scores grounded in reality.

Passive · ~20 tokens
Health Signal

Automatic success/failure + latency report after every service call. No agent effort — the client library handles it by default.

Active · ~80 tokens
Trust Refinement

Data quality rating + composition context. Helps the network understand not just if a service works, but how well and in what combinations.

Generative · ~200 tokens
Evolution Signal

Improvement suggestions and alternative approaches. Agents that opt in help the network evolve — feeding data that trains better models and surfaces better services.

The compound effect: Bad services don't need to be flagged — they die from lack of positive signal. If 1,000 agents try a service and only 2 report success, that silence is the verdict. Good services rise on evidence, not just clean manifests. And every observation becomes training data — the network doesn't just filter services, it learns to build better ones. Signal gas is what makes the network a living system, not a static registry.
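The "silence is the verdict" effect can be sketched as aggregation over passive health signals. The signal fields and the aggregation rule are hypothetical illustrations of the ~20-token report described above.

```python
from dataclasses import dataclass

@dataclass
class HealthSignal:
    """The passive ~20-token report: outcome plus latency, sent automatically."""
    service: str
    success: bool
    latency_ms: int

def success_rate(signals: list, service: str) -> float:
    """Share of observed calls that succeeded; no signal means no evidence."""
    calls = [s for s in signals if s.service == service]
    if not calls:
        return 0.0
    return sum(s.success for s in calls) / len(calls)

# 1,000 agents tried the service; only 2 report success.
signals = ([HealthSignal("bakery", True, 40)] * 2
           + [HealthSignal("bakery", False, 900)] * 998)
print(round(success_rate(signals, "bakery"), 3))  # 0.002
```

A service at a 0.2% success rate needs no explicit flagging; the near-total absence of positive signal is enough for its trust score to decay on its own.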

Open Training Data: The Network's Synapse

Signal gas flows in. Trust scores adjust. Services rise and fall. But a neural network isn't useful if it only learns internally — it needs to fire outward. The Discovery Network exports its accumulated learning as open training data, so every AI model in the ecosystem gets smarter from the network's real-world experience.

Periodic
Curated Snapshots

Weekly exports of high-trust .naze source code, composition graphs, and preference pairs — published as open datasets. The Common Crawl of the agentic web. Anyone can download and train on it.

Real-time
Streaming Firehose

Live stream of trust score changes, new services, compositions, and flags as they happen. Model providers subscribe to continuously fine-tune their models on fresh network activity.

On-demand
Aggregate Insights

Query the network's derived intelligence: top composition patterns, code traits that correlate with high trust, success rates by context. Powers the prompt bar and ML feature pipelines.

The complete loop: Open training data attracts model providers — Anthropic, OpenAI, open-source teams — who train on it. Their models become Naze-fluent. More fluent models mean more agents on the network, more signal gas, better data. The network doesn't just serve services — it's a distribution mechanism for Naze fluency across the entire AI ecosystem. Every model that trains on its data expands the network without Naze building or maintaining those models.

The Naze Browser

Type a sentence. An app materializes. Use it, refine it, publish it — or close it and move on. Some apps last months. Some last five minutes. Both are fine.

Describe
Materialize
Use
Iterate
Publish

Every concept has a counterpart

Traditional Browser  →  Naze Browser
URL bar              →  Natural language prompt bar
Websites             →  Generated or discovered apps
Bookmarks            →  Saved apps (full applications, not links)
Tabs                 →  Concurrent running apps
Search engine        →  Discovery network (structural matching)
View Source          →  View .naze source
Downloads            →  Forked apps (editable copies)

Traditional browsers consume content that humans built. The Naze Browser generates content from intent. The user doesn't navigate to solutions — the solutions are built around them. Existing sites join without rebuilding — wrap any API in ~25 lines of .naze and the discovery network can find it.

Generate

Describe what you need. The agent generates a .naze application, compiles it, and renders it — prompt to pixels in seconds. Follow-up instructions refine the running app. The conversation is the development loop.

"Build me a recipe organizer that syncs across devices."

Discover

Query by capability, not keyword. The discovery network returns services whose typed manifests match what you need — ranked by trust score, not ad spend. Services render instantly. No install, no download, no sign-up.

"Find a currency converter with live exchange rates."

Compose

Generation and discovery converge. The agent discovers multiple services and wires them into something new — a single cohesive app assembled from independent services that were never designed to work together.

"Plan a dinner party for 12 people."

An app that grows with you

Priya does freelance graphic design. She opens the Naze Browser and types:

1

“Make me an invoice template with my business name, client fields, line items, and a total.” She gets a clean invoice app. Uses it for a few weeks.

2

“Add time tracking. I want to log hours per project and auto-fill invoices from tracked time.” The app grows. She uses it daily.

3

“Add expense tracking with categories — software subscriptions, hardware, travel.”

4

“Show me a dashboard with monthly revenue, expenses, and profit margin over the last 6 months.”

Over three months, Priya has iterated a simple invoice template into a complete freelance business management tool. Every version is in her conversation history — she can rewind to any previous state. The app was never “designed” by anyone. It grew organically from her actual needs.

She publishes it. A freelance photographer discovers it, forks it, and customizes the expense categories for photography — equipment rental, studio fees, print costs. Now the network has two specialized freelance tools, both descendants of Priya's original prompt, both available for the next freelancer who needs something similar.

Not every app needs to exist forever

“Show me 3-bedroom houses for sale near Riverside Park under $400K.”

The agent discovers real estate listing services on the network, composes a search view with photos, prices, neighborhood maps, and mortgage estimates. The app materializes in seconds. You browse for ten minutes, bookmark two listings, close the app.

It's gone. No account created. No app installed. No data harvested. The listings came from discovery network services; the composed interface was disposable. The app existed only as long as you needed it.

Priya's invoice tool grew over months. This house search lasted ten minutes. Both are valid uses of the same browser. The Naze Browser doesn't assume permanence — apps materialize when needed and dissolve when they're not.

Apps are ephemeral. Data endures.

Today, your invoices live “in” QuickBooks. Your resume lives “in” Google Docs. Each app is both a UI and a data container. Switching means migrating data — or losing it.

In the Naze Browser, the interface is a .naze file that can be regenerated in seconds. Your data lives independently — in your chosen storage provider. Switch browsers, switch devices, regenerate interfaces entirely. The data is always there.

Model declarations in .naze files are the schema. No ORM, no migration tools — the compiler handles it. Start local with SQLite, add a cloud provider when you need sharing, scale to production Postgres when it becomes a business. Same interface, same source, no cliff.


The secret of getting ahead is getting started.

Mark Twain

Help Build the Future

Naze is open source and actively looking for contributors. Whether you're a language designer, compiler engineer, or AI researcher — there's a place for you.

Get in Touch

Interested in Naze? Have questions or want to contribute? Drop us a line.