spize.ai
LOOKING FOR DESIGN PARTNERS

See how engineers
actually use AI.

Candidates run one command. Our agent captures everything — and distills it into the signals that matter.

Become a Design Partner
$ npx @spize/cli <token>
spize-session — zsh
~ npx @spize/cli abc-123-def
spize Activating session...
spize Challenge ready. Work normally — we're watching.
Session active · 1h 30m remaining
Signals ● OBSERVING
[14:02] Candidate prompted AI → modified output before committing
[14:08] 3 min pause — reviewed AI output, then rewrote auth logic manually
[14:12] Caught AI hallucination in error handler — corrected it
[14:18] ⚠ Hardcoded API key in config — security flag raised
[14:23] Switched tools mid-task — novel prompting technique detected

Hiring is broken.

84% of developers use AI tools. Your interviews pretend they don't.

🧠

LeetCode

Tests memorization, not engineering.

Take-homes

Tests free time. Zero visibility into process.

🚫

"No AI Allowed"

Tests a workflow nobody uses anymore.

The spize way

Let them use AI. Watch how.

Their own tools
Your real codebase
Full observability
AI proficiency score

Three Steps

01

Create Assessment

Pick a repo. Set a time limit. Send the candidate a token.

02

Candidate Works

One command. Their tools, their workflow. Our agent observes silently in the background.

03

Review + Defend

We distill the session into a report. Then the candidate defends their decisions.

Signals You Can't Get Anywhere Else

We don't just record. We surface what matters.

🎯

AI Proficiency

Are they vibing — just accepting whatever comes back? Or are they orchestrating — leading the AI with intent, reviewing output, iterating on prompts?

🔍

Code Review Habits

How long between getting AI output and committing it? Did they actually read it? We surface the pauses where real engineering happens.

🛡️

Security Awareness

Did they hardcode secrets? Paste sensitive data into prompts? Accept insecure AI suggestions? Find out before they're on your team.
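To make the first check concrete, here is a minimal sketch of the pattern being flagged next to its usual fix. The `loadApiKey` helper and the key value are hypothetical illustrations, not part of any spize API.

```typescript
// Flagged pattern: a literal secret committed in a config file.
// The value below is a fake placeholder, not a real credential.
const HARDCODED_API_KEY = "sk-live-0000";

// Safer pattern: read the secret from the environment and fail fast
// when it is missing. `loadApiKey` is a hypothetical helper.
function loadApiKey(env: Record<string, string | undefined>): string {
  const key = env.API_KEY;
  if (!key) {
    throw new Error("API_KEY is not set");
  }
  return key;
}

console.log(loadApiKey({ API_KEY: "from-env" })); // prints "from-env"
```

The difference is exactly what an observing agent can see: a string literal that never changes versus a lookup that keeps the secret out of the repo.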

🐛

Error Correction

When the AI hallucinated or got it wrong — did they catch it? Did they fix it? Or did they ship it?

🧪

Novel Techniques

Creative prompting strategies. Chaining tools together. Techniques you haven't seen before. The stuff that separates good from great.

🛠️

Tool Selection

Which AI tools are they reaching for? When do they switch? When do they put the AI down and just code? The toolbelt tells a story.

The Defense

After coding, the candidate answers auto-generated questions about their actual decisions.

SPIZE You accepted this auth middleware suggestion without changes. Walk me through why.
CANDIDATE It matched the existing JWT pattern. I verified it against the schema before accepting.
SPIZE You spent 3 minutes reviewing the AI output at 14:08 before rewriting it. What did you find?
CANDIDATE The error handling was wrong — it was catching generic exceptions instead of specific ones. I rewrote it to handle TokenExpiredError separately.
SPIZE You hardcoded an API key in config.ts at 14:18. Can you explain?
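The rewrite the candidate describes at 14:08, replacing a catch-all with type-specific error handling, might look like the sketch below. The `TokenExpiredError` class and the status mapping are assumptions for illustration (JWT libraries such as jsonwebtoken export a similarly named error), not code from any real session.

```typescript
// Stand-in for a JWT library's expiry error (jsonwebtoken, for
// example, exports a TokenExpiredError with a similar shape).
class TokenExpiredError extends Error {
  constructor(public expiredAt: Date) {
    super("jwt expired");
    this.name = "TokenExpiredError";
  }
}

interface ErrorResponse {
  status: number;
  body: string;
}

// Before: a generic catch-all. Every verification failure looks the same.
function handleAuthErrorGeneric(_err: unknown): ErrorResponse {
  return { status: 401, body: "unauthorized" };
}

// After: the expired-token case is handled separately, so clients can
// tell "log in again" apart from "token invalid".
function handleAuthError(err: unknown): ErrorResponse {
  if (err instanceof TokenExpiredError) {
    return { status: 401, body: "token expired" };
  }
  return { status: 403, body: "token invalid" };
}

console.log(handleAuthError(new TokenExpiredError(new Date())).body); // prints "token expired"
```

A session recording can show the catch-all being replaced; the defense asks the candidate to explain why the distinction matters.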
👤
Founders

Hire the engineer, not the interviewee.

📊
Hiring Managers

Real signal from real work.

⌨️
Candidates

Your tools. Just build.

FAQ