
I Built an Enterprise Application. I Didn't Write a Single Line of Code.

50,000 lines of code. 264 files. 9 interconnected applications. A production-grade grants management platform — deployed, tested, and live.

I didn't write a single line.

The Experiment

My background is in tech. I learned to code in the early 90s and built an early ecommerce site. It didn't go anywhere, but the bug bit, and I've worked in and around technology in one form or another for over 30 years.

I've been spending a lot of time thinking about where AI is headed and what it means for how we work. Not in the abstract — in the specific. So I decided to run an experiment.

I took a real problem — one we deal with every day in state government — and asked: can an AI build a production enterprise application from scratch, guided only by natural language?

The tool: Claude Code, Anthropic's coding agent, powered by their latest model, Claude Opus 4.6. I described what I wanted. It architected, coded, tested, debugged, and deployed. It broke the project into discrete pieces and dispatched agents to code them in parallel. When it hit errors, it diagnosed and fixed them — sometimes fixing bugs in its own earlier code. I gave direction. It did the engineering.

The result is Grantify. (A note: this is just a demo experiment, running on a demo server. The name is trademarked by someone else, and I'm not marketing this commercially. Please don't sue me.)

Why Grants Management

Connecticut manages hundreds of millions of dollars in state and federal grants every year. The lifecycle is complex — opportunity posting, applications, scoring, awards, financial tracking, reporting, closeout. In practice, a lot of this still runs on spreadsheets, PDFs, and email chains.

A real RFP exists for a modernized system. That made it the perfect test case. Not a toy demo. Not a to-do app. A real, government-scale, multi-role enterprise platform with authentication, audit trails, financial controls, and compliance requirements.

If AI could build this, that would tell me something.

The Result

Grantify covers the full grant lifecycle, end to end:

**Opportunity through Closeout.** Agencies post grant programs. Applicants discover and apply. Reviewers score against configurable rubrics. Awards are issued with DocuSign e-signatures. Financial officers track budgets and drawdowns. Program staff manage progress reports and SF-425 federal filings. Closeout and compliance verification wrap it up.

**46 database models across 9 applications** — each cleanly separated: core, portal, grants, applications, reviews, awards, financial, reporting, and closeout.

**7 user roles** from Agency Admin to Federal Coordinator, each with tailored dashboards and permissioning.
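
The post doesn't show how that permissioning is wired up, so here's a minimal, framework-free sketch of role-gated views. Grantify itself is Django, and only "Agency Admin" and "Federal Coordinator" are named above — the other role names and all function names here are illustrative assumptions, not the actual code:

```python
from functools import wraps

# Role list is partly assumed: only "Agency Admin" and "Federal Coordinator"
# are named in the post; the rest are plausible placeholders.
ROLES = {"Agency Admin", "Applicant", "Reviewer", "Financial Officer",
         "Program Staff", "Auditor", "Federal Coordinator"}

def require_role(*allowed):
    """Decorator: only users whose role is in `allowed` may call the view."""
    def decorator(view):
        @wraps(view)
        def wrapped(user, *args, **kwargs):
            if user.get("role") not in allowed:
                return {"status": 403, "body": "forbidden"}
            return view(user, *args, **kwargs)
        return wrapped
    return decorator

@require_role("Agency Admin")
def agency_dashboard(user):
    """A role-gated view: one tailored dashboard per role."""
    return {"status": 200, "body": f"dashboard for {user['name']}"}
```

In Django the same idea is usually expressed with groups or a `UserPassesTestMixin` on class-based views, but the shape is the same: the role check wraps the view, not the template.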

**AI-powered grant matching** — and yes, the irony isn't lost on me — the platform uses Claude's own API to match applicants with relevant funding opportunities, scoring relevance and explaining recommendations in plain language.
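
The matching feature's internals aren't shown in the post, so here's a sketch of how that kind of LLM-backed matching typically works: build a prompt over the applicant profile and open opportunities, ask the model for structured JSON scores, and rank the reply. The prompt wording, JSON schema, and function names are all assumptions, not Grantify's actual implementation:

```python
import json

# Hypothetical prompt template -- the real system's wording is not public.
MATCH_PROMPT = (
    "You are a grants advisor. Given the applicant profile and the funding "
    "opportunities below, return a JSON list of objects with "
    "'opportunity_id', 'score' (0-100), and a one-sentence 'rationale'.\n\n"
    "Applicant profile:\n{profile}\n\nOpportunities:\n{opportunities}"
)

def build_match_prompt(profile: str, opportunities: list[dict]) -> str:
    """Render the matching prompt that would be sent to the model."""
    opp_text = "\n".join(
        f"- [{o['id']}] {o['title']}: {o['summary']}" for o in opportunities
    )
    return MATCH_PROMPT.format(profile=profile, opportunities=opp_text)

def rank_matches(raw_model_reply: str, top_n: int = 3) -> list[dict]:
    """Parse the model's JSON reply and return the top-scoring matches."""
    matches = json.loads(raw_model_reply)
    return sorted(matches, key=lambda m: m["score"], reverse=True)[:top_n]
```

In production the rendered prompt would go to Anthropic's Messages API (e.g. `client.messages.create(...)` from the `anthropic` SDK), with the reply's JSON fed through `rank_matches` to produce the scored, plain-language recommendations described above.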

**An interactive map** of all 169 Connecticut municipalities built with Mapbox, showing grant distribution as a choropleth with filters by agency and program.

**Analytics dashboard** with real-time KPIs, four chart types, and per-agency funding breakdowns.

**Microsoft Entra ID single sign-on** with multi-factor authentication. Bilingual support in English and Spanish. A REST API with 11 endpoints. A CI/CD pipeline. 138 automated tests.

It's deployed to production right now. You can try it.

What's Remarkable

I want to be specific about what happened here, because the details matter more than the headline.

I sat in a conversation window and described what I needed. Not in code — in English. "Build me a grants management system with role-based access control, a review workflow, financial tracking, and DocuSign integration." Then I iterated. "Add a map view showing awards by municipality." "Add AI-powered grant matching." "Make it bilingual." "Harden it for production."

Claude Code made architectural decisions — choosing Django's class-based views, designing database schemas with proper indexes, implementing rate limiting and open-redirect protections. When the production deployment threw a 500 error, it read the stack trace, identified a decorator incompatibility with class-based views, fixed it, ran the tests, and pushed the fix. I watched.
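
That decorator bug is a classic Django pitfall: a function-style decorator applied directly to a class-based view receives the class object, not a request-handling function, and blows up at request time. Django's fix is `django.utils.decorators.method_decorator`, which adapts the decorator onto a named method. Here's a framework-free sketch of both the failure mode and the fix — the names are illustrative, not Grantify's actual code:

```python
from functools import wraps

def require_login(view_func):
    """Function-style decorator: assumes it wraps a plain view function."""
    @wraps(view_func)
    def wrapped(request, *args, **kwargs):
        if not request.get("user"):
            return {"status": 302, "location": "/login"}  # redirect to login
        return view_func(request, *args, **kwargs)
    return wrapped

def method_decorator(deco, name):
    """Adapt a function decorator so it wraps a named method of a class
    (a minimal stand-in for Django's django.utils.decorators.method_decorator)."""
    def class_decorator(cls):
        original = getattr(cls, name)
        def adapted(self, request, *args, **kwargs):
            # Bind `self` first, then let the function decorator wrap the call.
            bound = lambda req, *a, **kw: original(self, req, *a, **kw)
            return deco(bound)(request, *args, **kwargs)
        setattr(cls, name, adapted)
        return cls
    return class_decorator

# Applying @require_login directly to the class would hand it the class
# object, not a view function -- the kind of 500 described above. Adapted
# onto dispatch(), it works:
@method_decorator(require_login, name="dispatch")
class AwardView:
    def dispatch(self, request):
        return {"status": 200, "body": "award detail"}
```

In real Django the equivalent is `@method_decorator(login_required, name="dispatch")` on the view class, which is the standard pattern for reusing function decorators with class-based views.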

This isn't autocomplete. This is an AI that can hold the full context of a complex system in its head and make engineering decisions across it.

Opus 4.6 is a step change. It's among the first frontier models where AI played a significant role in its own training process. AI is now training AI. You can feel the difference in how it reasons about architecture and trade-offs.

What This Means

This is one of the reasons we're investing so heavily in AI and quantum in Connecticut. The leverage is real and it's here now.

A system that would typically take a team of developers months to build can be prototyped, tested, and validated in days. That doesn't replace the need for proper procurement, security review, and institutional governance — it accelerates the path to getting there. Agencies can test ideas before writing an RFP. Small states can punch above their weight.

I'm releasing Grantify as open source under the MIT license. Any state agency, any municipality, any state in the country can take it, adapt it, and deploy it. Free.

Try It

**Live demo:** https://web-production-4b928.up.railway.app — click "Try the System" and log in as any role.

**Source code:** github.com/okeefedaniel/grantify

**License:** MIT — use it, modify it, ship it.


Connecticut is building the future. Let's get after it.

Dan O'Keefe
Commissioner,
Chief Innovation Officer
State of Connecticut