Engineering

Vibe Coding at Scale: Why We Test AI Tool Proficiency in Our Engineering Interviews

Spore Engineering Team · 11 min read

Vibe coding has crossed a chasm. When AI coding tools first emerged, they were like flying a tiny single-engine plane with no autopilot—you're managing every control surface, monitoring every instrument, hand-flying every mile. GitHub Copilot would suggest a line of code here and there, but you were still doing all the real work. You'd accept maybe 10% of its suggestions, treat each one with suspicion, and never trust it with anything critical.

Now we're flying 737s with full autopilot engaged. These aren't just tools anymore—they're systems that can handle entire flights from takeoff to landing. The pilot is still there, still essential, still ultimately responsible, but they're operating at a completely different level. They're managing systems, monitoring for anomalies, and making strategic decisions while the aircraft handles the mechanical complexity of flight.

That's where we are with AI coding tools today. What started as a pejorative term for "fast, loose and irresponsible" AI-assisted development has quietly become standard practice at companies you'd never expect. Microsoft engineers are shipping production features with Copilot Workspace. Google's teams are using their internal AI agents for core product development. Vercel's v0 isn't just a demo anymore—it's how they prototype internally before building their components.

The distinction between "real engineering" and "vibe coding" has dissolved. Simon Willison recently proposed calling the mature version "vibe engineering," and while the terminology is still finding its feet, the practice is already mainstream.

The shift happened faster than most of us expected. When Claude Code dropped in February 2025, followed by OpenAI's Codex CLI and Gemini CLI, something fundamental changed. These weren't just autocomplete tools anymore—they were development partners that could iterate, test, and refine code autonomously. A senior engineer could parallelize their thinking across multiple problem spaces, running different agents on different features simultaneously.

Google built internal agents specifically trained on their codebase and engineering practices. These aren't general-purpose tools; they understand Google's specific architectural patterns, their testing requirements, their code review standards. It's vibe coding that speaks fluent Google.

Why We're Following Big Tech's Lead

At Spore, we're building an AI-influenced platform for monitoring and creating brand activity and mentions across social platforms like YouTube, Twitter, Reddit, and TikTok, review sites like Yelp and Google Reviews, and forums like Hacker News. We're tracking how brands appear in AI responses, understanding recommendation patterns, and building systems to influence AI training data. This isn't traditional engineering, and we realized early that we needed engineers who could think and build differently.

We recently wrapped up a hiring round for engineers, and we did something that would have seemed absurd even a year or two ago: we made AI tool proficiency a core part of our technical interview.

Here's what we did: candidates were given a complex, real-world problem from our codebase (SvelteKit 5, TypeScript, Postgres) — the kind of issue that would typically take an hour to solve properly. They had 10 minutes, full access to multiple Claude Code instances, Cursor, or whatever AI tools they preferred, and one instruction: ship something that functions and passes our tests.

The engineers who succeeded didn't just know how to prompt descriptively; they had a deeper understanding of the models' strengths and weaknesses. They also knew how to validate AI-generated code quickly, and when to take control versus when to let the tool run. They were conducting an orchestra, not playing every instrument. A thing of beauty, really (if you can set aside worries about shrinking workforces, the need for UBI, and the other massive open questions about the direction we're moving in as an industry).

The weakest candidates either over-trusted the AI (accepting obviously flawed solutions) or under-utilized it (using it sparingly, or falling back on older autocomplete-style tools like Windsurf). The strongest candidates knew exactly when to step in and course-correct.

The Uncomfortable Truth

If you're not using these tools, you're already behind. Not because you can't code—but because AI-enabled competitors are shipping at 5x your velocity with a similar quality bar.

We had two senior engineers (Apple & Intel) with decades of experience fail our 10-minute challenge, not because they couldn't solve the problem, but because they couldn't adapt their workflow to leverage AI tools. Meanwhile, engineers with three years of experience shipped production-ready solutions by knowing how to decompose and delegate to AI.

This isn't a judgment on traditional engineering skills; those remain crucial. You still need to understand system design, still need to know when O(n²) will kill you, still need to grok distributed systems. But now you also need to be fluent in a new kind of collaboration, one where your partner thinks in tokens and probabilities rather than logic and determinism.

What This Means for Engineering Culture

We're building our entire engineering culture around this reality. Code reviews now include prompts. Our documentation includes both human-readable specs and AI-optimized problem descriptions. We maintain a library of successful prompt patterns for common tasks in our codebase.
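To make the prompt-pattern library concrete, here's a minimal sketch of what one entry might look like. The shape and names below (`PromptPattern`, `addDbMigration`) are hypothetical illustrations, not our actual library format:

```typescript
// Hypothetical shape of a prompt-pattern entry. The interface and the
// example pattern are illustrative, not Spore's real library.
interface PromptPattern {
  task: string; // what kind of work this pattern is for
  template: (ctx: Record<string, string>) => string; // fills in task specifics
}

const addDbMigration: PromptPattern = {
  task: "Add a Postgres migration",
  template: ({ table, change }) =>
    [
      "You are working in a SvelteKit 5 + TypeScript + Postgres codebase.",
      `Write a SQL migration that ${change} on the "${table}" table.`,
      "Include a matching down migration and keep it idempotent where possible.",
    ].join("\n"),
};

// Render a concrete prompt from the reusable pattern.
console.log(
  addDbMigration.template({ table: "mentions", change: "adds a sentiment column" })
);
```

The point of a structure like this is that the reviewed, battle-tested part is the template, while each engineer only supplies the task-specific context.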

Is this vibe coding? Vibe engineering? Does the distinction matter? What matters is that it works, it's how the best companies are building software now, and it's how we're building Spore.

If you're an engineer who gets excited about this shift, who sees AI tools not as a threat but as a force multiplier, we're hiring. We're looking for people who can architect systems and shepherd AI agents with equal skill. People who understand that the future of engineering isn't human or AI—it's human and AI.

The interview process? Yes, you'll need to solve complex problems with AI tools in real-time. But if you've been paying attention to where the industry is heading, you're probably already practicing that every day.


We're doing another hiring run soon at Spore. If you're interested in building the infrastructure for how brands exist in an AI-mediated world, reach out. And yes, we use our own platform to monitor how Spore gets discussed across AI responses (even this blog post). It's pretty meta.

Tags: vibe coding, AI coding tools, engineering interviews, Claude Code, Cursor AI, AI-assisted development
