ApeFX is a creative platform that unifies 66 AI image, video, audio, and 3D models behind a single prompt. Fatboy Studios wrote the code, designed the brand, wired the analytics, and ran the marketing infrastructure. The site went live on a Sunday with no campaigns, no content push, and no social posts. Signups arrived before any of that was planned. The referrer was ChatGPT.
The platform was switched to live and properly indexed, then left to breathe: no campaigns, no content, no social push, no announcement. There was a feeling early signups might find their way in. Nobody expected same-day waitlist submissions, least of all from ChatGPT.
The waitlist went live on a Sunday in March. No post went out. No email dropped. No ad fired. Within the same day, real people were submitting their email addresses, and the attribution showed the referrer: ChatGPT. The platform was being cited by an LLM in response to a specific query, with no content written to target that query, no backlinks pointing to it, and no paid placement behind it.
We had built the site with the right structural signals for how LLMs read and summarise content. That work paid off before we intended to use it. The signups were real. The referrer was real. And because first-touch attribution was wired before the first visitor arrived, we caught all of it from the start.

No existing codebase. No prior designs. No analytics. Just a direction: build a platform where 9 provider ecosystems behave like one tool, and where a first-time user gets cinematic output from a five-word prompt.

One clean interface. A prompt box. A model switcher. Outputs that look like they came out of a post house.
A multi-engine router that picks the right model per prompt. A credit-metering engine with per-API cost accounting. A pattern-recognition layer that writes /learn modules from provider docs and verified community runs. An attribution pipeline that knows which ad, influencer, or search result made a user sign up.
Most AI tools give you one output per prompt. We built a system that reads your intent and returns nine connected shots of the same world. Same lighting, same mood, same subject, same palette. A sequence you can edit into a scene. One beta tester cut a full trailer from a single 9-shot output, using every one of the nine shots.
Nine angles of the same moment would be a contact sheet. 9-Shot Cinema is nine connected moments of the same world, rendered to the same grade, shot-matched by the orchestration layer so the sequence cuts together without correction. Every shot lands at production quality. No throwaways to cull, no hunting for the one usable pick.
The LLM reads the prompt, writes a nine-beat narrative with continuity constraints locked across the set (same lighting, same subject, same palette, consistent camera language), then fires nine prompts in parallel through the best-fit render engine. What returns is a scene, not a mood board.

The LLM writes a nine-beat narrative from the prompt. Each shot has its place in a cut, not its place on a contact sheet. Openers, mid-beats, and resolution shots are composed deliberately.
Lighting, subject, palette, and camera language are constrained identically across all nine prompts. The sequence returns already shot-matched. No grade pass needed before it can cut.
Nine renders fire in parallel through the best-fit engine. All nine return at production grade. First shot streams back in under ten seconds. One credit transaction, nine usable shots.
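
For the engineering-minded reader, the fan-out is easy to picture in code. The sketch below is illustrative only: `ContinuityConstraints`, `ShotBeat`, and the `engine.generate` call are stand-ins, not ApeFX's actual orchestration types.

```typescript
// Illustrative sketch of the nine-shot fan-out. All names here are
// assumptions, not ApeFX internals.
interface ContinuityConstraints {
  lighting: string;
  subject: string;
  palette: string;
  cameraLanguage: string;
}

interface ShotBeat {
  role: "opener" | "mid-beat" | "resolution"; // place in the cut
  action: string;                             // what happens in this beat
}

async function renderNineShot(
  beats: ShotBeat[], // nine beats from the LLM's narrative pass
  constraints: ContinuityConstraints,
  engine: { generate(prompt: string): Promise<string> } // returns asset URL
): Promise<string[]> {
  // Every per-shot prompt carries the same locked constraint block,
  // so the nine renders come back already shot-matched.
  const prompts = beats.map(
    (b) =>
      `${b.action}. Lighting: ${constraints.lighting}. ` +
      `Subject: ${constraints.subject}. Palette: ${constraints.palette}. ` +
      `Camera: ${constraints.cameraLanguage}. Role in cut: ${b.role}.`
  );
  // All nine fire in parallel; one batch, one credit transaction.
  return Promise.all(prompts.map((p) => engine.generate(p)));
}
```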

One lean thirty-something, three-piece navy suit, distinctive jaw scar. Every shot in the same smoky lighting. Identity holds across wide, mid, close. Cut as is.

Fantasy ensemble rendered across the nine canonical shot sizes, from wide establish through mid and close to extreme close-up, low angle to high angle. Same lighting, same palette, same subject. A cinematographer's coverage board from five words.
“When I came to Kyle with the ApeFX idea, my concern was walking into a competitive field where everyone is building on top of the same models. He reeled me back in. We don't need to invent a new model, we need to change how we interact with the ones that already exist so the output is sharper than everyone else's, regardless of which engine is behind it. And we have to be able to track that north star from day one.
I came to him with an idea and a bullet-point list of what I wanted. Within a week the infrastructure was live and every PRD for the MVP was signed off. I've never had that experience with an agency before.
Working with Kyle feels like having a CPO, CMO, CTO and data scientist in one person. But the thing I'll actually say about him is that for every single problem we ran into, and there were plenty, he didn't come back with just a solution. He came back with a creative, original one. I've never seen anyone work as fast, or think as sideways about problems, as he does.”
The router sits between the user and the model. It reads context: what the prompt is for, which model is about to receive it, what the user's account tier allows. Then it rewrites the prompt on the fly. The user never touches the router. The output quality jumps anyway.
The effect for the user: same prompt, visibly better output. The effect for the business: output quality scales with the router, not with the user's learning curve. Retention follows.
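
A minimal sketch of that rewrite step, assuming a routing context of intent, target engine, and account tier. The grammar table and every name in it are illustrative; the production router is more involved than this.

```typescript
// Hypothetical context-aware rewrite. Names and grammar strings are
// placeholders, not the real injection layer.
type Tier = "free" | "pro" | "studio";

interface RouteContext {
  intent: "image" | "video" | "upscale"; // what the prompt is for
  targetModel: string;                   // which engine receives it
  tier: Tier;                            // what the account allows
}

// Per-model grammar the router injects; the user never sees this.
const modelGrammar: Record<string, string> = {
  "engine-a": "cinematic film still, 35mm, volumetric light",
  "engine-b": "photoreal, natural color grade, shallow depth of field",
};

function rewritePrompt(userPrompt: string, ctx: RouteContext): string {
  const grammar = modelGrammar[ctx.targetModel] ?? "";
  // Higher tiers unlock heavier enhancement; free tier gets a light touch.
  const boost = ctx.tier === "free" ? "" : "masterful composition";
  return [userPrompt, grammar, boost].filter(Boolean).join(", ");
}
```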

We built a pattern-recognition layer that ingests the full documentation set for every provider, then cross-references it against verified community use cases. The output is a model-by-model learning track. Digestible, specific, and sorted by what actually works in production.

Image. Image-edit. Text-to-video. Image-to-video. Upscale. Image-to-3D. Text-to-3D. TTS. Music. Nine model categories, sixty-six engines wired behind a single typed interface. 22 are battle-tested in closed beta today, carrying real user traffic with a 100% completion rate and zero fallback errors. The remaining 44 are wired, typed, and loaded for public launch. The router picks. The user just prompts.
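
One way to picture the best-fit pick: a registry keyed by category, filtered to live engines, scored on quality per unit cost. The `Engine` fields and the heuristic below are assumptions standing in for whatever the real router actually weighs.

```typescript
// Assumed shape of the engine registry; not the real schema.
type Category =
  | "image" | "image-edit" | "text-to-video" | "image-to-video"
  | "upscale" | "image-to-3d" | "text-to-3d" | "tts" | "music";

interface Engine {
  id: string;
  category: Category;
  live: boolean;        // battle-tested in beta vs wired for launch
  costPerRun: number;   // real API cost, in cents
  qualityScore: number; // internal benchmark, 0..1
}

function pickEngine(registry: Engine[], category: Category): Engine {
  const candidates = registry.filter((e) => e.category === category && e.live);
  if (candidates.length === 0) {
    throw new Error(`no live engine for ${category}`);
  }
  // Best quality per cent spent wins; the user just prompts.
  return candidates.reduce((best, e) =>
    e.qualityScore / e.costPerRun > best.qualityScore / best.costPerRun
      ? e
      : best
  );
}
```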

Users don't want to learn 30 dashboards. They want output. The auto-router lets the platform absorb model-market churn without the user ever having to care. New releases, deprecations, price changes, all invisible.
For the business, it means we can swap an expensive engine for a cheaper one behind the scenes and keep margins intact. For the user, it means every new model release silently improves their result without a setting to toggle.
The platform runs on a stack we've hardened across dozens of builds. Every table has row-level security. Every API route has explicit timeout handling. Every third-party key rotates through the same vault. No console logs in production.
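
The timeout rule, for example, reduces to a small wrapper. This is a generic sketch built on the standard `AbortController` pattern, not ApeFX's actual middleware:

```typescript
// Generic timeout wrapper: any provider call that hangs past its budget
// is aborted, not left dangling on the route.
async function fetchWithTimeout(
  url: string,
  ms: number,
  init: RequestInit = {}
): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timer); // always clean up, success or failure
  }
}
```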

Public routes sit on the edge for global latency. Long-running generation jobs get queued through Redis for reliability. Uploaded assets route through object storage with a custom CDN surface. Errors route to Sentry with replay capture on the real user flow, not synthetic tests.
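
The queueing step might look like the sketch below, shown with BullMQ as one plausible Redis-backed implementation. The case study names Redis, not a specific library, so treat the queue name, payload, and retry policy as placeholders.

```typescript
import { Queue } from "bullmq";

// Long-running generation jobs go on a Redis-backed queue; the HTTP
// route returns immediately and a worker picks the job up.
const generations = new Queue("generations", {
  connection: { host: "127.0.0.1", port: 6379 },
});

// Illustrative job: retries and backoff are explicit, not implicit.
await generations.add(
  "nine-shot-render",
  { userId: "u_123", prompt: "noir alley, rain" }, // placeholder payload
  { attempts: 3, backoff: { type: "exponential", delay: 2000 } }
);
```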
Most AI startups wire analytics after product-market fit. We wire it before. On ApeFX we had first-touch attribution, thirty-plus product events, and full unit economics reporting working on day one. No retrofit, no backfill.

UTM, referrer, click IDs, and landing page pinned in cold storage on first visit. A sign-up on day seven still knows the ad that brought the user in. A conversion on day thirty still credits the right campaign.
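
Conceptually the pin is a write-once record. The sketch below shows the client-side shape with an assumed storage key and field names; in production the record lands server-side, in the cold storage described above.

```typescript
// Write-once first-touch record. Key and field names are assumptions.
interface FirstTouch {
  utm: Record<string, string>;
  referrer: string;
  clickIds: Record<string, string>; // e.g. gclid, fbclid
  landingPage: string;
  capturedAt: string;
}

function pinFirstTouch(): void {
  const KEY = "first_touch";
  if (localStorage.getItem(KEY)) return; // first touch only: never overwrite

  const params = new URLSearchParams(window.location.search);
  const pick = (names: string[]): Record<string, string> => {
    const out: Record<string, string> = {};
    for (const n of names) {
      const v = params.get(n);
      if (v) out[n] = v;
    }
    return out;
  };

  const touch: FirstTouch = {
    utm: pick(["utm_source", "utm_medium", "utm_campaign", "utm_content", "utm_term"]),
    referrer: document.referrer,
    clickIds: pick(["gclid", "fbclid", "ttclid"]),
    landingPage: window.location.pathname,
    capturedAt: new Date().toISOString(),
  };
  localStorage.setItem(KEY, JSON.stringify(touch));
}
```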
Every generation routes through a credit meter that knows the real API cost. Margins are enforced at the router, not reconciled retroactively. By week one, reports tell the founder which users, which models, and which tiers are profitable.
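
The metering math itself is small once the real API cost is known. The margin figure and credit rate below are invented for illustration, not ApeFX's pricing:

```typescript
const CENTS_PER_CREDIT = 1;  // assumed pricing unit
const TARGET_MARGIN = 0.4;   // assumed 40% gross margin

// Price = cost / (1 - margin), rounded up so no run sells below target.
function creditsForRun(apiCostCents: number): number {
  const priceCents = apiCostCents / (1 - TARGET_MARGIN);
  return Math.ceil(priceCents / CENTS_PER_CREDIT);
}

// A run that costs the platform 12 cents bills 20 credits, locking the
// margin at generation time instead of reconciling it later.
console.log(creditsForRun(12)); // 20
```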
Electric blue for conviction. Amber orange for heat. Deep carbon for quiet. Typography picked for engineering clarity and cinematic weight. Tokens that scale from a 10px label to a 72px hero.
Prompt-injection enhancement lets a beginner get film-still output with five words. The better the prompt, the better the lift. Retention follows. No training, no fine-tune, just the right grammar at the right seam in the pipeline.
Nine product surfaces shipped in sequence. Each one instrumented, tested, tokenised, and wired to the credit system. Captured from the running beta. Two of them loop because they're better in motion.

Clean slate. No prior codebase, no design system, no infra. First commit on record. Nothing inherited.
Supabase schema + RLS, Redis rate limiting, Sentry + replay, Mixpanel EU, R2 + custom CDN. First three image models wired behind a typed interface.
Context-aware router + prompt-injection layer. First beta testers run nine parallel renders from five-word prompts. Reaction informs the v2 rewrite.
Fourteen video models wired. Custom LLM algorithm ingests provider docs + community patterns; first model-by-model learning modules generated and reviewed.
Dodo checkout, credit-pack shelf, affiliate attribution, tier gating. Final pass on every surface. Beta doors open to the waitlist.
Closed beta running. Waitlist converting daily. Beta cohort generating across 22 different models with zero routing failures. First-touch attribution credits the right campaign on every signup. Public launch ships once beta feedback is folded in.
22 models battle-tested in closed beta today, 44 more wired and loaded for public launch. Nine categories. One interface. A router in the middle picking best-fit per prompt. 100% generation success rate across the live fleet.
Two proprietary layers beta testers raved about. Both are orchestration, not models: no training, no fine-tune. Both defensible.
Every signup credits the right campaign. Every generation costs a known amount. Every report is real from the first event.
RLS on every table. EU analytics. GDPR-ready events. Sentry with replay on real flows. No secrets in client code.
Tokens, typography, patterns. Figma and codebase share one source of truth. Every component references the same token.
Environment docs, API docs, routing docs, cost docs. Founder can hire the next engineer from this repo without a conversation.

The team is the machine. Systems, pipelines, and AI running in parallel, built to function as a full product, marketing, and growth department without the briefing chains, account managers, or handoff lag that slow traditional agencies down. You work directly with the person who built all of it.
If you sign up, you get Kyle. The ApeFX build was Kyle from line zero. The campaign calendar on your Growth tier is Kyle. The automation in your Machine tier is Kyle. And when Kyle doesn't know something, you get a timeline for when you'll have a data-backed answer. Not a vibe. A date.
Full product, full stack, full brand, full growth. Fatboy Studios is the agency of record for teams who want to ship a platform, not a prototype.