Client build · 2026ApeFX

Line zero to closed beta in 54 days. No marketing launch. ChatGPT started sending signups anyway.

ApeFX is a creative platform that unifies 66 AI models for image, video, 3D, and audio generation behind a single prompt. Fatboy Studios wrote the code, designed the brand, wired the analytics, and ran the marketing infrastructure. The site went live on a Sunday with no campaigns, no content push, no social posts. Signups arrived before any of that was planned. The referrer was ChatGPT.

Engagement · Custom: product, brand, growth
Duration · 54 days · Feb 26 to Apr 21, 2026
Live waitlist · 72 signups · $0 paid spend
Beta generations · 236 · 100% success rate
72
Waitlist signups · live
236
Beta generations shipped
22
Battle-tested · 44 loaded
100%
Generation success rate
54
Days line-zero to beta
9
Product surfaces shipped
Proof · Launch telemetry · March 2026
The signal

The site went live. No marketing. ChatGPT started sending signups.

The platform was switched live and indexed. No campaigns, no content, no social push, no announcement. The intention was to let it breathe. There was a feeling early signups might find their way in; nobody expected near-instant waitlist submissions from a source like ChatGPT.

What actually happened

The waitlist went live on a Sunday in March. No post went out. No email dropped. No ad fired. Within the same day, real people were submitting their email addresses. When we checked the attribution, the referrer was ChatGPT. The platform was being cited by an LLM in response to a specific query, with no content written to target that query, no backlinks pointing to it, no paid placement behind it.

We had built the site with the right structural signals for how LLMs read and summarise content. That work paid off before we intended to use it. The signups were real. The referrer was real. And because first-touch attribution was wired before the first visitor arrived, we caught all of it from the start.

GEO before it was a campaign: The copy structure and technical setup primed the platform for LLM pickup. It happened faster than expected.
Tracking live before traffic: Attribution, conversion events, and unit economics instrumented before the site went public. Nothing pieced together in retrospect.
Zero spend, real signal: Every signup arrived through earned pickup. The paid flywheel is still loaded and waiting.
Mixpanel graph showing waitlist form submissions filtered by Initial Referrer: chatgpt.com, February through April 2026
Source · Mixpanel · apefx.ai · filter: initial referrer = chatgpt.com
Waitlist form submissions attributed to chatgpt.com as initial referrer. Spike begins at launch with zero paid spend, zero published content. The curve is the story.
01 / Brief
What we walked into

The founder came with a vision. We came with a factory.

No existing codebase. No prior designs. No analytics. Just a direction: build a platform where 9 provider ecosystems behave like one tool, and where a first-time user gets cinematic output from a five-word prompt.

Hidden reactor thermal

What the market saw

One clean interface. A prompt box. A model switcher. Outputs that look like they came out of a post house.

What we built underneath

A multi-engine router that picks the right model per prompt. A credit-metering engine with per-API cost accounting. A pattern-recognition layer that writes /learn modules from provider docs and verified community runs. An attribution pipeline that knows which ad, influencer, or search result made a user sign up.

Pitch the platform, not the plumbing. The product looks simple because the infrastructure isn't.
No duct tape. Every provider wired through a single typed interface, gracefully degrading across speed, cost, and quality axes.
Day-one telemetry. First-touch attribution, conversion events, and unit economics wired before the first public signup.
02 / Innovation
9-Shot Cinema

9-Shot Cinema. One prompt. A nine-shot sequence that cuts together.

Most AI tools give you one output per prompt. We built a system that reads your intent and returns nine connected shots of the same world. Same lighting, same mood, same subject, same palette. A sequence you can edit into a scene. One beta tester cut a full trailer from a single 9-shot output, using every one of the nine shots.

Not nine variants. A nine-shot sequence.

Nine angles of the same moment would be a contact sheet. 9-Shot Cinema is nine connected moments of the same world, rendered to the same grade, shot-matched by the orchestration layer so the sequence cuts together without correction. Every shot lands at production quality. No throwaways, no picks.

The LLM reads the prompt, writes a nine-beat narrative with continuity constraints locked across the set (same lighting, same subject, same palette, consistent camera language), then fires nine prompts in parallel through the best-fit render engine. What returns is a scene, not a mood board.

Continuity locked: Lighting, mood, subject, palette, camera language. Constrained identically across all nine. The sequence cuts together without grade work.
Every shot production-grade: All nine return at maximum quality. Designed to be used together, not sifted for the one good frame.
Trailer-cuttable: A beta tester edited a full trailer from one 9-shot output using every shot. That's the ceiling the system is designed for.
9-Shot Cinema · Skyline mountain drift sequence
Mechanic

Sequence orchestration

The LLM writes a nine-beat narrative from the prompt. Each shot has its place in a cut, not its place on a contact sheet. Openers, mid-beats, and resolution shots are composed deliberately.

Mechanic

Continuity lock

Lighting, subject, palette, and camera language are constrained identically across all nine prompts. The sequence returns already shot-matched. No grade pass needed before it can cut.

Mechanic

Parallel render · max quality

Nine renders fire in parallel through the best-fit engine. All nine return at production grade. First shot streams back in under ten seconds. One credit transaction, nine usable shots.
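The three mechanics above can be sketched in a few lines. This is a hypothetical TypeScript sketch, not ApeFX's real orchestration code: the `Continuity` fields, `planBeats` stand-in, and `render` callback are all illustrative assumptions. The point it shows is the shape of the fan-out, where one continuity block is locked into all nine prompts before they fire in parallel.

```typescript
// Hypothetical sketch of the 9-shot fan-out. A shared continuity block is
// appended to every beat so all nine renders come back shot-matched.
type Beat = { label: string; action: string };

interface Continuity {
  lighting: string;
  palette: string;
  subject: string;
  camera: string;
}

const SHOT_LABELS = ["ELS", "LS", "MLS", "MS", "MCU", "CU", "ECU", "LOW", "HIGH"];

// Stand-in for the LLM step that writes a nine-beat narrative from the prompt.
function planBeats(prompt: string): Beat[] {
  return SHOT_LABELS.map((label, i) => ({
    label,
    action: `${prompt}, beat ${i + 1}`,
  }));
}

// Lock identical continuity constraints into every render prompt.
function buildShotPrompts(prompt: string, c: Continuity): string[] {
  const lock = `lighting: ${c.lighting}; palette: ${c.palette}; subject: ${c.subject}; camera: ${c.camera}`;
  return planBeats(prompt).map((b) => `[${b.label}] ${b.action} | ${lock}`);
}

// Fire all nine in parallel; `render` is a placeholder for the engine call.
async function nineShot(
  prompt: string,
  c: Continuity,
  render: (p: string) => Promise<string>,
): Promise<string[]> {
  return Promise.all(buildShotPrompts(prompt, c).map(render));
}
```

Because the constraint block is identical across the set, the sequence returns pre-matched; the only per-shot variable is the beat.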

Continuity lock, two ways
Character and grammar
Single-character 9-shot · same man across all nine frames
Character lock
Same man, nine shots, one world.

One lean thirty-something, three-piece navy suit, distinctive jaw scar. Every shot in the same smoky lighting. Identity holds across wide, mid, close. Cut as is.

Fantasy ensemble 9-shot grid · ELS, LS, MLS, MS, MCU, CU, ECU, LOW, HIGH
Shot grammar
ELS → ECU. Full coverage, one prompt.

Fantasy ensemble rendered across the nine canonical shot sizes. Wide establish, mid, close, extreme close, low-angle, high-angle. Same lighting, same palette, same subject. A cinematographer's coverage board from five words.

When I came to Kyle with the ApeFX idea, my concern was walking into a competitive field where everyone is building on top of the same models. He reeled me back in. We don't need to invent a new model, we need to change how we interact with the ones that already exist so the output is sharper than everyone else's, regardless of which engine is behind it. And we have to be able to track that north star from day one.

I came to him with an idea and a bullet-point list of what I wanted. Within a week the infrastructure was live and every PRD for the MVP was signed off. I've never had that experience with an agency before.

Working with Kyle feels like having a CPO, CMO, CTO and data scientist in one person. But the thing I'll actually say about him is that for every single problem we ran into, and there were plenty, he didn't come back with just a solution. He came back with a creative, original one. I've never seen anyone work as fast, or think as sideways about problems, as he does.

Claire
Founder & CEO · APE AI PTY LTD (ApeFX)
03 / Layer
Prompt-injection enhancement

Prompt-injection enhancement. Thin input, thick output.

The router sits between the user and the model. It reads context. What the prompt is for, which model is about to receive it, what the user's account tier allows. Then it rewrites the prompt on the fly. The user never touches the router. The output quality jumps anyway.

Three layers on every prompt

Context layer: What is the user actually trying to make? Portrait, product, landscape, narrative, UI mockup, packaging. The router reads it and tags.
Model layer: What does this model need to perform at its peak? Each engine has its own prompt vocabulary. We wrote the style cards.
Grammar layer: Composition, lighting, lens choice, grade. Cinematography and photography grammar layered per category.

The effect for the user: same prompt, visibly better output. The effect for the business: output quality scales with the router, not with the user's learning curve. Retention follows.

Prompt injection thermal
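The three-layer rewrite can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the category tags, the keyword tagger standing in for the real classifier, and the style-card and grammar tables are invented, not ApeFX's production data.

```typescript
// Hypothetical three-layer prompt rewrite: context tag, model style card,
// photographic grammar. Tables and tagger are illustrative stand-ins.
type Category = "portrait" | "product" | "landscape";

// Context layer: crude keyword tagger standing in for the real classifier.
function tagCategory(prompt: string): Category {
  if (/face|person|portrait/i.test(prompt)) return "portrait";
  if (/bottle|pack|product/i.test(prompt)) return "product";
  return "landscape";
}

// Model layer: per-engine prompt vocabulary ("style cards").
const STYLE_CARDS: Record<string, string> = {
  "flux-pro": "high detail, natural texture",
  "seedream": "cinematic color science",
};

// Grammar layer: composition, lighting, and lens grammar per category.
const GRAMMAR: Record<Category, string> = {
  portrait: "85mm, shallow depth of field, soft key light",
  product: "macro lens, seamless backdrop, studio strobes",
  landscape: "wide angle, golden hour, graduated ND",
};

// The user's prompt goes in thin; the enhanced prompt goes to the model thick.
function enhance(prompt: string, model: string): string {
  const category = tagCategory(prompt);
  const card = STYLE_CARDS[model] ?? "";
  return [prompt, card, GRAMMAR[category]].filter(Boolean).join(", ");
}
```

The user sees none of this; the same five-word prompt simply returns visibly better output per engine.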
04 / Engine
The /learn engine

The /learn engine. A custom LLM algorithm that turns every model into a tutor.

We built a pattern-recognition layer that ingests the full documentation set for every provider, then cross-references it against verified community use cases. The output is a model-by-model learning track. Digestible, specific, and sorted by what actually works in production.

What went in

Provider docs: Every parameter, every accepted aspect ratio, every hidden feature across all 66 wired models.
Verified community runs: Prompts that landed. Prompts that failed. What worked for which model. What changed between versions.
Output patterns: What a good result looks like. What a bad one looks like. What the model is and isn't for.

What comes out

Model-to-model modules: If you can use Seedream, here's how to get the same result from Flux, with the prompt translated.
Use-case playbooks: Product photography. Editorial portrait. Architecture visualisation. Typography poster. Each one model-ranked.
Progressive ladders: Start with a three-word prompt; end with a cinematographer's brief. The ladder is guided.
Learn pattern-recognition thermal
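A module produced by that pipeline might carry a shape like the sketch below. The interface fields and the evidence-weighted ranking are assumptions for illustration, not the real /learn schema; the idea shown is "sorted by what actually works in production", meaning verified community runs weighted by how often they succeeded.

```typescript
// Illustrative shape for a /learn module; field names are assumptions.
interface LearnModule {
  model: string;        // engine the lesson targets, e.g. "kling"
  useCase: string;      // e.g. "product photography"
  promptFormula: string;
  verifiedRuns: number; // community runs that confirmed the pattern
  successRate: number;  // 0..1 across those runs
}

// Rank modules by evidence: verified runs weighted by success rate.
function rankModules(mods: LearnModule[]): LearnModule[] {
  return [...mods].sort(
    (a, b) => b.verifiedRuns * b.successRate - a.verifiedRuns * a.successRate,
  );
}
```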
05 / Router
Multi-Engine auto-routing

Multi-Engine auto-routing. Nine categories, one interface, sixty-six engines.

Image. Image-edit. Text-to-video. Image-to-video. Upscale. Image-to-3D. Text-to-3D. TTS. Music. Nine model categories, sixty-six engines wired behind a single typed interface. 22 are battle-tested in closed beta today, carrying real user traffic with a 100% completion rate and zero fallback errors. The remaining 44 are wired, typed, and loaded for public launch. The router picks. The user just prompts.

Multi-engine router thermal

Why this matters commercially

Users don't want to learn 30 dashboards. They want output. The auto-router lets the platform absorb model-market churn without the user ever having to care. New releases, deprecations, price changes, all invisible.

For the business, it means we can swap an expensive engine for a cheaper one behind the scenes and keep margins intact. For the user, it means every new model release silently improves their result without a setting to toggle.

Provider-agnostic: No UI change when a new model wires in. No retraining for the user.
Cost-aware: Credit metering knows every engine's per-call cost. Router factors it in.
Quality-graded: Models ranked per category. Best-for-job selection is automatic.
Image · 18 models
  • Nano Banana Pro · Google
  • GPT Image 1.5 · OpenAI
  • Seedream 5 · ByteDance
  • Flux Pro / Klein · BFL
  • Recraft V4 · Recraft
  • Ideogram V3 · Ideogram
  • Imagen 4 Ultra · Google
  • Multi-Engine · Fatboy
Video · 14 models
  • Sora 2 / 2 Pro · OpenAI
  • Veo 3.1 · Google
  • Kling 3.0 · Kuaishou
  • Seedance 1.5 Pro · ByteDance
  • Hailuo 2 · MiniMax
  • LTX 2.3 · Lightricks
  • Cosmos 2.5 · NVIDIA
  • Multi-Engine · Fatboy
3D · 4 models
  • Meshy V6 · Meshy
  • Image-to-3D · Wired
  • Text-to-3D · Wired
  • Rig-ready exports · FBX · GLB · OBJ
Audio · 6 models
  • Kokoro TTS · Voice
  • ACE-Step · Music
  • Upscale models · Topaz · Bria
  • Background edits · Recraft · Kie
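The cost-aware, quality-graded selection described above reduces to a scoring problem. This is a hypothetical sketch under stated assumptions: the `Engine` fields, the linear score, and the weights are illustrative, and the real router presumably factors in far more (tier, latency, fallback state).

```typescript
// Hypothetical best-fit selection: weigh a per-category quality grade
// against per-call API cost. Weights and fields are illustrative.
interface Engine {
  name: string;
  category: string;    // "image", "video", ...
  costPerCall: number; // USD, from the credit meter
  quality: number;     // 0..10, graded per category
}

function pickEngine(
  engines: Engine[],
  category: string,
  qualityWeight = 1.0,
  costWeight = 0.5,
): Engine {
  const pool = engines.filter((e) => e.category === category);
  if (pool.length === 0) throw new Error(`no engine wired for ${category}`);
  const score = (e: Engine) =>
    e.quality * qualityWeight - e.costPerCall * costWeight;
  return pool.reduce((best, e) => (score(e) > score(best) ? e : best));
}
```

Because selection is a pure function of the engine table, swapping an expensive engine for a cheaper one is a data change, not a UI change, which is the commercial point above.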
06 / Infrastructure
Boring, fast, secure

Boring, fast, secure. The parts that matter.

The platform runs on a stack we've hardened across dozens of builds. Every table has row-level security. Every API route has explicit timeout handling. Every third-party key rotates through the same vault. No console logs in production.

Pipeline infrastructure thermal

Edge-first, queue-backed

Public routes sit on the edge for global latency. Long-running generation jobs get queued through Redis for reliability. Uploaded assets route through object storage with a custom CDN surface. Errors route to Sentry with replay capture on the real user flow, not synthetic tests.

Row-level security on every table. Nobody reads another user's row, ever.
Per-route maxDuration tuning. Polling routes explicitly lifted above the default 10-second cap.
CDN custom domain. Media served from the brand's own subdomain, not a vendor URL.
EU-resident analytics. GDPR-compliant event pipeline from day one.
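The per-route `maxDuration` point maps directly onto Next.js App Router segment config, where each route file can export its own duration cap. The sketch below is illustrative; the file path, handler body, and chosen value are assumptions, not the real ApeFX route.

```typescript
// app/api/generation/status/route.ts (hypothetical path)
// Next.js route segment config: lift this polling route above the
// default 10-second serverless cap, per-route rather than globally.
export const runtime = "nodejs";
export const maxDuration = 60; // seconds

export async function GET(req: Request) {
  const id = new URL(req.url).searchParams.get("id");
  // ...poll job state from the queue here (elided in this sketch)
  return Response.json({ id, status: "pending" });
}
```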
Frontend
  • Next.js 16 · App Router
  • React 19 · Server Components
  • Tailwind 4 · Design tokens
  • Framer Motion · Animation
  • Zustand · State
Backend · Data
  • Supabase · Postgres + RLS
  • Upstash Redis · Rate limit + queue
  • R2 Object Storage · Media + custom CDN
  • Typed API routes · Zod at the edge
Growth · Trust
  • Mixpanel EU · Product events
  • GA4 · Attribution
  • Sentry · Error + replay
  • Dodo Payments · Checkout
  • Resend + React Email · Lifecycle
Delivery
  • Vercel · Edge hosting
  • Cloudflare · DNS + WAF
  • Bun · Package runtime
  • Node 22 · Runtime floor
07 / Growth
Attribution, economics, affiliates

Attribution, economics, and affiliates, wired before launch.

Most AI startups wire analytics after product-market fit. We wire it before. On ApeFX we had first-touch attribution, thirty-plus product events, and full unit economics reporting working on day one. No retrofit, no backfill.

Growth and attribution thermal

First-touch attribution, persisted

UTM, referrer, click IDs, and landing page pinned in cold storage on first visit. Sign-up day seven still knows the ad that brought the user in. Conversion day thirty still credits the right campaign.
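The write-once behaviour described above is the whole trick: capture on first visit, then never overwrite. A minimal sketch, assuming a key-value store stands in for durable storage; the storage key, field names, and `Map` stand-in are illustrative, not the real pipeline.

```typescript
// Hypothetical write-once first-touch capture. Field names and the
// "first_touch" key are assumptions; a Map stands in for durable storage.
interface FirstTouch {
  utmSource: string | null;
  utmCampaign: string | null;
  referrer: string | null;
  landingPage: string;
  capturedAt: string;
}

function captureFirstTouch(
  url: string,
  referrer: string,
  store: Map<string, string>,
): FirstTouch {
  const existing = store.get("first_touch");
  if (existing) return JSON.parse(existing); // write-once: never overwrite

  const u = new URL(url);
  const touch: FirstTouch = {
    utmSource: u.searchParams.get("utm_source"),
    utmCampaign: u.searchParams.get("utm_campaign"),
    referrer: referrer || null,
    landingPage: u.pathname,
    capturedAt: new Date().toISOString(),
  };
  store.set("first_touch", JSON.stringify(touch));
  return touch;
}
```

Because later visits return the stored record untouched, a day-thirty conversion still credits the original campaign, which is exactly how the chatgpt.com referrer in the Mixpanel chart survived to signup.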

Credit-aware economics

Every generation routes through a credit meter that knows the real API cost. Margins are enforced at the router, not retroactively. Reports tell the founder which users, which models, and which tiers are profitable, by week one.

Affiliate conversions table: Creator-partner attribution baked into the signup flow. Payouts match the product.
Funnel events on every surface: Waitlist, promo, pricing, auth, generation, gallery. Thirty-plus instrumented events.
Cost per output, live: Per-model profitability visible before a month of data exists.
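Enforcing margin "at the router, not retroactively" means pricing each call in credits before it fires. The arithmetic below is an illustrative sketch; the credit price, target margin, and `Meter` shape are assumptions, not ApeFX's real figures.

```typescript
// Illustrative margin gate: price a generation in credits so that
// revenue covers the engine's real API cost at the target gross margin.
interface Meter {
  creditPriceUsd: number; // what one credit sells for
  targetMargin: number;   // e.g. 0.6 = 60% gross margin
}

function creditsFor(apiCostUsd: number, m: Meter): number {
  // Require (revenue - cost) / revenue >= targetMargin,
  // i.e. revenue >= cost / (1 - targetMargin); round up to whole credits.
  const requiredRevenue = apiCostUsd / (1 - m.targetMargin);
  return Math.ceil(requiredRevenue / m.creditPriceUsd);
}
```

Worked example under these assumed numbers: a $0.05 API call at a 60% target margin needs $0.125 of revenue, which at $0.02 per credit rounds up to 7 credits.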
08 / Brand
The design system

A design system the product earned.

Electric blue for conviction. Amber orange for heat. Deep carbon for quiet. Typography picked for engineering clarity and cinematic weight. Tokens that scale from a 10px label to a 72px hero.

Electric Blue
#1B17FF
Primary · CTA
Signal Amber
#FF991C
Accent · Heat
Carbon Base
#0A0D12
Canvas
Surface
#181D27
Panels
Card
#252B37
Content blocks
Display · Inter Variable
Cinematic output
from a five-word prompt.
Weights 500 · 600 · 700 · letter-spacing -0.025em on hero scale
Body · Inter Regular

Prompt-injection enhancement lets a beginner get film-still output with five words. The better the prompt, the better the lift. Retention follows. No training, no fine-tune, just the right grammar at the right seam in the pipeline.

16 / 18 / 20px · line-height 1.55 · max-width 68ch
09 / Surfaces
What the user sees

What the user sees. Every surface, live.

Nine product surfaces shipped in sequence. Each one instrumented, tested, tokenised, and wired to the credit system. Captured from the running beta. Two of them loop because they're better in motion.

01 / Landing
Homepage · animated hero in motion
Wavy-terrain shader behind the product headline. Live waitlist signup, 27+ models surfaced. Tap to visit.
/generate image surface with multi-engine output grid
02 / Generate
/generate · image surface
Prompt box, model picker, aspect picker, parallel outputs grid. All nine aspects live.
9-Shot Cinema sequence grid
03 / 9-Shot
9-Shot Cinema sequence grid
Nine shots of one world from one prompt. Shot-matched, cut-ready.
04 / Video
/video · text-to-video surface
Text-to-Video, Image-to-Video, 9-Shot-to-Video. Running a live generation in the loop.
All Models modal
05 / Models
The model shelf · one modal, every engine
Multi-Engine, Nano Banana, GPT Image, Seedream, Flux, Recraft, more. Router picks. User just prompts.
/learn · prompt formula
06 / Learn
/learn · model-by-model modules
Custom LLM algorithm writes tracks from provider docs and verified runs. Lesson shown: Kling prompt formula.
07 / Gallery
/gallery · community picks
Beta-generated video in a live grid. Auto-looping previews, filter by model, re-roll in one click.
/pricing · four-tier matrix
08 / Pricing
/pricing · tier matrix
Free · Creator $12 · Pro $29 · Studio $79. Credit packs below. Dodo checkout wired.
Waitlist signup · first-touch
09 / Waitlist
First-touch · the beta door
Pre-launch signup with first-touch attribution wired. 72 signups on zero paid spend.
10 / Timeline
54 days

54 days. Line zero to beta.

Feb 26 · Day 0
Initial commit from Create Next App

Clean slate. No prior codebase, no design system, no infra. First commit on record. Nothing inherited.

Day 1 → 14
Stack hardening + first engines wired

Supabase schema + RLS, Redis rate limiting, Sentry + replay, Mixpanel EU, R2 + custom CDN. First three image models wired behind a typed interface.

Day 15 → 28
9-Shot Cinema prototype

Context-aware router + prompt-injection layer. First beta testers run nine parallel renders from five-word prompts. Reaction informs the v2 rewrite.

Day 29 → 42
Video + /learn engine

Fourteen video models wired. Custom LLM algorithm ingests provider docs + community patterns; first model-to-model learning modules generated and reviewed.

Day 43 → 54
Payments, pricing, polish

Dodo checkout, credit-pack shelf, affiliate attribution, tier gating. Final pass on every surface. Beta doors open to the waitlist.

Apr 21 · Beta
Beta live · 72 waitlist · 236 beta generations · 100% completion rate

Closed beta running. Waitlist converting daily. Beta cohort generating across 22 different models with zero routing failures. First-touch attribution credits the right campaign on every signup. Public launch ships once beta feedback is folded in.

11 / Outcomes
What we hand over

What the founder got. What we hand over.

Product

Beta platform · 22 live, 44 loaded

22 models battle-tested in closed beta today, 44 more wired and loaded for public launch. Nine categories. One interface. A router in the middle picking best-fit per prompt. 100% generation success rate across the live fleet.

  • Unified typed API
  • Multi-Engine fallback
  • Credit metering wired
Innovation

9-Shot Cinema + /learn

Two proprietary layers beta testers raved about. Both orchestration. No training, no fine-tune. Both defensible.

  • Context-aware routing
  • Prompt-injection grammar
  • Pattern-recognition tutor
Growth

First-touch attribution live day one

Every signup credits the right campaign. Every generation costs a known amount. Every report is real from the first event.

  • 30+ instrumented events
  • Affiliate pipeline
  • Live unit economics
Trust

Security + compliance baseline

RLS on every table. EU analytics. GDPR-ready events. Sentry with replay on real flows. No secrets in client code.

  • Row-level security
  • Rate limiting + queue
  • Error replay capture
Brand

Design system the product earned

Tokens, typography, patterns. Figma source, codebase source, one source. Every component references the same token.

  • 5-swatch palette
  • Inter Variable
  • Tailwind token bridge
Handover

Every line documented

Environment docs, API docs, routing docs, cost docs. Founder can hire the next engineer from this repo without a conversation.

  • CLAUDE.md architecture rules
  • UNIT_ECONOMICS.md
  • ANALYTICS_EVENTS.md
Kyle du Randt, founder of Fatboy Studios
Your direct line

Kyle du Randt

Founder · Fatboy Studios

The team is the machine. Systems, pipelines, and AI running in parallel, built to function as a full product, marketing, and growth department without the briefing chains, account managers, or handoff lag that slow traditional agencies down. You work directly with the person who built all of it.

If you sign up, you get Kyle. The ApeFX build was Kyle from line zero. The campaign calendar on your Growth tier is Kyle. The automation in your Machine tier is Kyle. And when Kyle doesn't know something, you get a timeline for when you'll have a data-backed answer. Not a vibe. A date.

We build the factory so the founder can lead.

Full product, full stack, full brand, full growth. Fatboy Studios is the agency of record for teams who want to ship a platform, not a prototype.