Cloud Run Hackathon: Building an AI Learning Platform

How Google Cloud Run, Gemini AI, Firebase, and modern dev tools helped me ship FlashLearn AI in less than 2 weeks while working full-time. A practical guide to the stack.

The Challenge: Build a production-ready AI-powered learning platform in less than 2 weeks while working a full-time job. Here's how the right tools made it possible.


The Context

When I found the Google Cloud Run hackathon with only 2 weeks left, I knew time was my biggest constraint. Working full-time meant I had maybe 25-30 hours total to design, build, test, and deploy a complete application.

The result? FlashLearn AI - a full-stack adaptive learning platform that generates personalized flashcards, tracks progress, and adapts difficulty using AI. Live on Cloud Run, built with modern tools that each solved a specific problem.

This isn't a chronological story. This is about the tools that made impossible deadlines possible, and how you can use them too.


1. Google Cloud Run: Deployment Without the Overhead

What It Is: Serverless container platform that scales to zero.

Why I Chose It: Coming from Azure App Service, I wanted true serverless with container flexibility. No server management, pay only for what you use, and automatic scaling.

How It Helped

Problem: I needed to deploy a backend API and frontend app, focus on code not infrastructure, and keep costs near zero during development.

Solution: Cloud Run's container-first approach meant I could:

  • Write a Dockerfile, push, deploy - that's it
  • Scale to zero when not in use (no idle costs)
  • Get HTTPS endpoints automatically
  • Update with zero downtime

Real Example - Deployment Config:

# Backend deployment config
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
    name: flashlearn-ai-backend
spec:
    template:
        metadata:
            annotations:
                autoscaling.knative.dev/maxScale: '10'
                autoscaling.knative.dev/minScale: '0'
        spec:
            containers:
                - image: gcr.io/PROJECT_ID/flashlearn-ai-backend
                  resources:
                      limits:
                          memory: 1Gi
                          cpu: '1'

Time Saved: No VMs to provision, no load balancers to set up, no autoscaling rules to tune. Saved ~8 hours of DevOps work.

Cost: Development = $0 (scaled to zero). Production with 100+ users = ~$5/month.

Key Takeaway

Cloud Run removed infrastructure as a bottleneck. When you have 30 hours total, spending 8 hours on server config isn't an option. Container → Deploy → Done.


2. Gemini AI: The Content Engine

What It Is: Google's multimodal AI model with a powerful API.

Why I Chose It: The app needed to generate flashcards, quizzes, and assessments. Writing these manually would take weeks. Gemini could generate them in seconds.

How It Helped

Problem: Generate high-quality, contextual learning content that adapts to user skill level.

Solution: Gemini API with careful prompt engineering.

Real Implementation:

// Generate adaptive flashcards based on user performance.
// Method on a service class that wraps the Gemini client; this.model is the SDK's generative model instance.
async generateFlashcards(
  topic: string,
  difficulty: 'easy' | 'medium' | 'hard',
  count: number,
  userContext: string
): Promise<Flashcard[]> {
  const prompt = `Generate ${count} ${difficulty} level flashcards about ${topic}.

  User context: ${userContext}

  Requirements:
  - Return valid JSON array
  - Format: [{ "front": "question", "back": "answer", "difficulty": "${difficulty}", "tags": ["tag1"] }]
  - Make them practical and actionable
  - Focus on key concepts`;

  const result = await this.model.generateContent(prompt);
  const text = result.response.text();

  return this.parseAndValidateFlashcards(text);
}

What I Learned:

  1. Structured prompts work best: Explicit format requirements reduced parsing errors by 90%
  2. Always validate outputs: AI is creative, not deterministic. I built robust parsing with fallbacks (sketched below)
  3. Context matters: Including user skill level dramatically improved content quality
  4. Fast iteration: Generate → Test → Refine prompt. Cycle time: 2 minutes vs. 2 hours writing manually
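
For illustration, here's a minimal sketch of what that validation step could look like. It mirrors the parseAndValidateFlashcards call from the snippet above, written as a standalone function; the card shape follows the prompt's format, and the fallback behaviour (return an empty array so the caller can retry) is my assumption, not the production code.

// Hypothetical sketch: parse Gemini's raw text output and keep only well-formed cards
interface GeneratedFlashcard {
  front: string;
  back: string;
  difficulty: 'easy' | 'medium' | 'hard';
  tags: string[];
}

function parseAndValidateFlashcards(text: string): GeneratedFlashcard[] {
  // Models sometimes wrap JSON in markdown fences; strip them before parsing
  const cleaned = text.replace(/```(json)?/g, '').trim();

  let parsed: unknown;
  try {
    parsed = JSON.parse(cleaned);
  } catch {
    return []; // fallback: caller can retry with a stricter prompt
  }

  if (!Array.isArray(parsed)) return [];

  return parsed.filter((card): card is GeneratedFlashcard =>
    typeof card?.front === 'string' &&
    typeof card?.back === 'string' &&
    ['easy', 'medium', 'hard'].includes(card?.difficulty) &&
    Array.isArray(card?.tags)
  );
}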

Time Saved: Would have taken 40+ hours to write quality flashcards manually. With Gemini: 2 hours to build the integration, then instant generation forever.

Key Takeaway

Gemini transformed content from my bottleneck to my superpower. Don't write what AI can generate. Focus on the integration logic and validation.


3. Firebase: Backend in a Box

What It Is: Google's backend platform (Firestore database + Authentication).

Why I Chose It: Needed a database and auth system fast. Firebase provides both with SDKs that handle 90% of the work.

How It Helped

Problem: Build user authentication, store learning objectives, track progress, manage sessions - all while moving fast.

Solution: Firebase gives you:

  • Firestore: NoSQL database with real-time sync
  • Firebase Auth: User management and JWT tokens
  • Admin SDK: Server-side operations with one library

Real Implementation - Auth Flow:

// Backend: Create user and return JWT
const userRecord = await admin.auth().createUser({
    email,
    password,
    displayName: name,
});

const token = jwt.sign(
    { uid: userRecord.uid, email: userRecord.email },
    process.env.JWT_SECRET!,
    { expiresIn: '7d' }
);

return { token, user: userRecord };
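
On every subsequent request the backend verifies that token before touching user data. Here's a minimal middleware sketch, assuming Express and the same jsonwebtoken library and JWT_SECRET as above; the AuthedRequest name is hypothetical:

// Hypothetical sketch: middleware that verifies the JWT issued above on protected routes
import jwt from 'jsonwebtoken';
import type { Request, Response, NextFunction } from 'express';

export interface AuthedRequest extends Request {
  user?: { uid: string; email: string };
}

export function requireAuth(req: AuthedRequest, res: Response, next: NextFunction) {
  const header = req.headers.authorization;
  if (!header?.startsWith('Bearer ')) {
    return res.status(401).json({ error: 'Missing token' });
  }

  try {
    // Same JWT_SECRET used to sign the token above
    req.user = jwt.verify(header.slice(7), process.env.JWT_SECRET!) as {
      uid: string;
      email: string;
    };
    next();
  } catch {
    res.status(401).json({ error: 'Invalid or expired token' });
  }
}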

Real Implementation - Data Storage:

// Store learning objective with progress tracking
await firebaseService.createDocument('objectives', {
    userId: user.uid,
    title: 'Learn React Hooks',
    skillLevel: 'beginner',
    learningPaths: [],
    progress: 0,
    createdAt: new Date(),
});
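
The firebaseService.createDocument call is a thin wrapper around the Admin SDK's Firestore client. A minimal sketch of what such a wrapper might look like - the wrapper shape is my assumption, the underlying collection().add() call is standard:

// Hypothetical sketch of the wrapper: add a document and return its generated id
// (assumes admin.initializeApp() has already run at startup)
import * as admin from 'firebase-admin';

export const firebaseService = {
  async createDocument(collection: string, data: Record<string, unknown>): Promise<string> {
    const ref = await admin.firestore().collection(collection).add(data);
    return ref.id;
  },
};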

What Made It Fast:

  • No schema migrations: NoSQL means iterate on data structure freely
  • No server setup: Managed service, just use it
  • Built-in security: Firebase rules protect data
  • Real-time updates: Frontend reflects changes instantly

Time Saved: Building auth + database from scratch = 12-15 hours minimum. With Firebase: 3 hours including learning curve.

Key Takeaway

Firebase eliminated the "build vs buy" decision for backend services. Focus on your app's unique logic, not reinventing authentication.


4. TypeScript Everywhere: Catch Bugs Before They Ship

What It Is: JavaScript with static typing.

Why I Chose It: With limited time for testing and debugging, I needed the compiler to catch errors before runtime.

How It Helped

Problem: Ship fast without breaking things. No time for hours of manual testing.

Solution: TypeScript caught bugs at compile time that would've been production disasters.

Real Example - Type Safety:

// Type catches mistake before running code
interface Flashcard {
    front: string;
    back: string;
    difficulty: 'easy' | 'medium' | 'hard'; // Limited to these values
    mastery: number; // Must be a number
}

// These assignments fail to compile:
const card: Flashcard = {
    front: 'Question',
    back: 'Answer',
    difficulty: 'super-hard', // ❌ Type error!
    mastery: 'high', // ❌ Type error!
};

Stats from my project:

  • 0 runtime type errors in production
  • Refactored 3 major features without breaking anything
  • Self-documenting: Types = inline documentation

Time Saved: Avoided ~10 hours of debugging runtime errors. Every hour writing types saved 3 hours debugging.

Key Takeaway

TypeScript is an investment that pays off the moment you refactor. In time-constrained projects, it's mandatory, not optional.


5. Google AI Studio: Brainstorming Partner

What It Is: Google's web interface for experimenting with Gemini AI.

Why I Chose It: Needed to validate ideas fast before writing code.

How It Helped

Problem: What should I build? What features are realistic in 2 weeks?

Solution: Used AI Studio during lunch breaks to:

  • Brainstorm app concepts
  • Validate technical feasibility
  • Refine feature scope
  • Test prompt engineering before coding

Real Workflow:

  1. Day 1: "What's a good hackathon project for Cloud Run that uses AI?"
  2. Day 2: "How would an adaptive learning algorithm work?"
  3. Day 3: "Generate example flashcards for learning React" → Test quality
  4. Day 4: Ready to code with validated concept

Time Saved: Avoided building the wrong thing. Saved ~6 hours of false starts and pivots.

Key Takeaway

AI Studio let me think with AI before coding with AI. Validate ideas conversationally before committing to code.


6. React + Vite: Modern Frontend Speed

What It Is: React 18 with the Vite build tool.

Why I Chose It: Needed a fast dev experience and modern UI framework.

How It Helped

Problem: Build a responsive, interactive UI quickly.

Solution: React + Vite + Tailwind CSS combo:

  • Vite: Dev server starts in <1 second, hot reload is instant
  • React: Component-based, huge ecosystem
  • Tailwind: No hand-written CSS, just utility classes (see the component sketch below)
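
To give a feel for the combination, here's a tiny, hypothetical flashcard component styled entirely with Tailwind utility classes:

// Hypothetical sketch: a flashcard component with no hand-written CSS
import { useState } from 'react';

export function FlashcardView({ front, back }: { front: string; back: string }) {
  const [flipped, setFlipped] = useState(false);

  return (
    <button
      onClick={() => setFlipped(!flipped)}
      className="w-full rounded-xl bg-white p-6 text-left shadow-md transition hover:shadow-lg"
    >
      <span className="block text-lg font-medium text-gray-900">
        {flipped ? back : front}
      </span>
      <span className="mt-2 block text-sm text-gray-500">
        {flipped ? 'Answer' : 'Tap to reveal'}
      </span>
    </button>
  );
}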

Real Example - Build Speed:

# Development
npm run dev
# Server ready in 847ms

# Production build
npm run build
# Built in 2.4s

Compare to older tooling (Webpack): 30+ seconds to start, 10s rebuilds.

Time Saved: Fast iteration = more experiments. ~5 hours saved from faster feedback loops.

Key Takeaway

Developer experience matters. Fast tools = more iterations = better product.


7. Docker Multi-Stage Builds: Small & Fast Containers

What It Is: Build containers in stages, ship only what's needed.

Why I Chose It: Cloud Run charges by memory-time. Smaller containers = lower costs and faster cold starts.

How It Helped

Problem: Node.js projects can be 500MB+ with dependencies. Cloud Run cold starts suffer with large images.

Solution: Multi-stage builds separate build-time from runtime.

Real Example:

# Stage 1: Build
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production (only built code, no dev deps)
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]

Results:

  • Before: 520MB image, 3s cold start
  • After: 170MB image, <2s cold start
  • Cost: Lower memory usage = ~30% cost reduction

Key Takeaway

Optimize your containers for serverless. Every MB matters in cold starts and costs.


8. GitHub Actions: Zero-Touch Deployment

What It Is: CI/CD built into GitHub.

Why I Chose It: Needed automated testing and deployment without learning Jenkins/CircleCI.

How It Helped

Problem: Manual deployments are error-prone and slow. Need confidence every push works.

Solution: Automated pipeline: Test → Build → Deploy.

Real Workflow:

# .github/workflows/deploy.yml (abridged)
name: Deploy to Cloud Run

on:
    push:
        branches: [main]

jobs:
    deploy:
        runs-on: ubuntu-latest
        steps:
            - uses: actions/checkout@v4

            - uses: actions/setup-node@v4
              with:
                  node-version: 22

            # GCP authentication (e.g. google-github-actions/auth) is configured here in the real workflow

            - name: Test Backend
              run: npm test --workspace=backend

            - name: Build Docker Image
              run: docker build -t gcr.io/$PROJECT/backend .

            - name: Deploy to Cloud Run
              run: gcloud run deploy backend --image gcr.io/$PROJECT/backend

Result: Push code → 4 minutes later → Live in production.

Time Saved: No manual deployments. Saved ~3 hours + reduced deployment anxiety.

Key Takeaway

Automate early. CI/CD isn't optional for side projects - it's how you ship confidently at 11pm on a Sunday.


9. AI-Powered Coding Assistants: The 3x Multiplier

What They Are: Tools like Cursor and GitHub Copilot that understand your codebase.

Why I Used Them: 30 hours to build an app means every minute counts.

How They Helped

Problem: Writing boilerplate, debugging, refactoring - all time sinks.

Solution: AI writes code, I review and guide architecture.

Real Examples:

Boilerplate:

// Me: "Create Express middleware for JWT auth"
// AI: *generates 50 lines of correct middleware in 10 seconds*

Debugging:

// Me: *paste error* "Fix this TypeScript error"
// AI: *identifies type mismatch, suggests fix*

Refactoring:

// Me: "Convert this class to functional component with hooks"
// AI: *refactors in 5 seconds*

Time Saved: Conservatively, AI assistance gave me 3-4x productivity boost. What would take 30 hours took ~10 hours of actual coding.

Important: I reviewed every line. AI accelerates; it doesn't replace engineering judgment.

Key Takeaway

AI coding tools are mandatory for time-constrained projects. Spend time thinking, not typing.


10. Secret Manager: Secure by Default

What It Is: Google's service for managing API keys and secrets.

Why I Chose It: Hardcoding secrets = security disaster. Plaintext env files = still risky.

How It Helped

Problem: Need Gemini API key, Firebase credentials, JWT secret in production securely.

Solution: Secret Manager + Cloud Run integration.

Setup:

# Create secret
echo -n 'your-api-key' | gcloud secrets create gemini-api-key --data-file=-

# Mount in Cloud Run
gcloud run deploy backend \
  --set-secrets "GEMINI_API_KEY=gemini-api-key:latest"

In code:

// Just use it like environment variable
const apiKey = process.env.GEMINI_API_KEY;
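
Wiring the secret into the Gemini client is then no different from any other environment variable. A minimal sketch, assuming the @google/generative-ai Node SDK (the model name is illustrative):

// Sketch: the secret arrives as a normal env var, so client setup is unchanged
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });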

Benefits:

  • No secrets in code or Git
  • Rotate keys without redeploying
  • Audit who accessed what
  • Automatic encryption

Key Takeaway

Security shouldn't be hard. Secret Manager makes it easy to do the right thing.


The Stack in Action: How It All Works Together

Here's how these tools combine into a working system:

1. User creates learning objective (React + Tailwind UI)
   ↓
2. Frontend sends request (Axios → Nginx → Backend)
   ↓
3. Backend validates JWT (Firebase Auth)
   ↓
4. Stores objective (Firestore)
   ↓
5. Generates flashcards (Gemini AI)
   ↓
6. Returns response (TypeScript ensures type safety)
   ↓
7. Frontend updates (React state management)
   ↓
8. All hosted on Cloud Run (auto-scales, $0 when idle)
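
Condensed into code, steps 3-6 might look roughly like the route below. The route path, module paths, and service names are hypothetical glue; they reuse the requireAuth, firebaseService, and generateFlashcards pieces sketched earlier in the post:

// Hypothetical sketch: one Express route tying steps 3-6 together
import express from 'express';
import { requireAuth, type AuthedRequest } from './middleware/auth';
import { firebaseService } from './services/firebase';
import { geminiService } from './services/gemini';

const app = express();
app.use(express.json());

app.post('/api/objectives', requireAuth, async (req: AuthedRequest, res) => {
  const { title, skillLevel } = req.body;

  // 4. Store the objective in Firestore
  const objectiveId = await firebaseService.createDocument('objectives', {
    userId: req.user!.uid,
    title,
    skillLevel,
    progress: 0,
    createdAt: new Date(),
  });

  // 5. Generate flashcards for the new objective with Gemini
  const flashcards = await geminiService.generateFlashcards(
    title,
    'easy',
    10,
    `skill level: ${skillLevel}`
  );

  // 6. Return the result; shared TypeScript types keep frontend and backend in sync
  res.status(201).json({ objectiveId, flashcards });
});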

Deploy flow:

Push code → GitHub Actions → Run tests → Build Docker → Deploy to Cloud Run → Live in 4 minutes

Key Takeaways

1. Cloud Run = Infrastructure Solved

Focus on code, not servers. Deploy containers in minutes, scale automatically, pay for nothing when idle.

2. Gemini AI = Content at Scale

Don't write what AI can generate. Build the integration, let AI handle the content.

3. Firebase = Backend Without the Backend

Auth + Database in one managed service. Perfect for moving fast.

4. TypeScript = Bugs Caught Early

Type safety isn't overhead - it's insurance against runtime disasters.

5. AI Tools = Force Multiplier

Google AI Studio for planning, AI coding assistants for implementation. 3-4x productivity boost.

6. Modern DevTools = Speed

Vite, GitHub Actions, Docker - choose tools that make you faster.


Final Thoughts

Building FlashLearn AI in less than 2 weeks while working full-time seemed impossible. The right tools made it inevitable.

Each tool solved a specific problem:

  • Cloud Run: Deployment
  • Gemini AI: Content generation
  • Firebase: Backend infrastructure
  • TypeScript: Code quality
  • AI Studio: Planning
  • Modern DevTools: Speed

The lesson? Don't fight your tools. Choose tools that multiply your effectiveness.

In 2025, a single developer with the right tools can build and deploy production-ready AI applications in weeks, not months. The only question is: What will you build?

Salomon Nghukam

Software Engineer at CGI France