Kaizen Approach
Customer briefing · Source → Acceptance → Placement

A full-cycle recruiting agent that earns its keep on the only event that pays — the placement.

Kaizen Approach Recruiting is a governed agentic recruiting desk: it reads the contract, sweeps every sourcing channel, scores resumes against the labor category, walks the prime’s approval chain, and chases acceptance — measured per-source, per-recruiter, per-contract, all the way to a $3,000 placement event.

Only have 5 minutes? Skip ahead to the guided demo — it walks the five stages of the agent loop, end to end.

Live system

The desk you’re evaluating, right now.

$3,000 / placement

Active contracts

4

Live government contracting (GovCon) vehicles in flight

Candidate pool

64

Cleared & technical, deduped across sources

Placements landed

Recognized revenue events

Median time to first sub

Req-open → first prime submission

What is the revenue event?

Resume accepted → candidate placed

We don't earn until the prime says yes and the candidate starts. Every metric in the platform ladders to that event.

01

What we measure first

Acceptance rate × source × LCAT

Acceptance rate is the unit of truth. We trend it by channel and labor category so the next sourcing dollar is informed by the last one.

02

What we ship

Frozen package, walked chain, clean ack

Every submission is a versioned package, routed through the contract's approval chain and audit-stamped on release — nothing leaves on a hunch.

03

Why this isn’t an Applicant Tracking System (ATS) or a generic AI recruiter

Built for the GovCon revenue loop, not for activity reports.

An ATS tracks activity. Generic AI recruiters generate content. Kaizen runs the desk and is graded on acceptance. The shape of the product reflects that — every surface is wired to the revenue event.

| Capability | Kaizen Approach | Generic ATS | Generic AI recruiter |
| --- | --- | --- | --- |
| Reads the statement of work (SOW) & spawns requisitions | Yes — extracts every labor category (LCAT), clearance, cert, headcount | Manual job posting | Generic JD parsing |
| Sources across cleared + technical + internal | Multi-source sweep, deduped, provenance-tagged | Boolean search box | Single-channel scrape |
| Scores against the contract, not the job board | LCAT-aware fit + explainable factors | Keyword match | Black-box embedding score |
| Tailors and freezes the submission package | Resume + cover + RTR, version-frozen, audit-stamped | Attach files manually | Resume rewrite, no governance |
| Walks the contract's approval chain | Per-contract ladder, gated release, rework loop | Free-form approver field | No approval concept |
| Chases the prime to acceptance | D+3 / D+7 cadence, escalation before deadline | Status fields only | Reminder pings |
| Source-channel ROI on every placement | CPA, CPP, margin per source, debatable assumptions | Activity reports | Volume metrics |
| Auditable for prime / customer review | Every AI action and human decision recorded | Comment threads | Generally not |

What makes this different

Six things that survive a careful read.

Every item below has a corresponding screen, audit trail, and metric. Open the workspace and you can verify each one in under five minutes.

01 · The desk, not a tool

Governed agentic recruiting desk

Most products give you another inbox. Kaizen runs the desk — sourcing, scoring, outreach, screening, packaging, and prime follow-up — with the recruiter as editor-in-chief, not data-entry.

  • Autonomous where it's safe (sweeps, ingest, scoring, follow-ups)
  • Assisted where it goes out under your name (outreach, screen)
  • Gated where the contract demands a human signature

02 · Money knows the channel

Source-channel return on investment (ROI), on every dollar

Every candidate carries provenance. Cost per accepted resume, cost per placement, gross margin and accept-rate are computed per source — so the next dollar of sourcing budget goes where it actually pays back.

  • Modeled cost per accepted resume (CPA), cost per placement (CPP), and margin per source — auditable assumptions
  • Acceptance-rate trend window-over-window per channel
  • Reallocation cues surfaced in the Kaizen brief, not buried in a report
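The per-source arithmetic is simple enough to sketch. A minimal illustration with hypothetical channel costs and counts — the `SourceLedger` fields and figures here are assumptions for demonstration, not the product's schema (the real numbers live in the /economics ledger):

```python
from dataclasses import dataclass

@dataclass
class SourceLedger:
    """One sourcing channel's window — all figures hypothetical."""
    name: str
    monthly_cost: float    # modeled seat/license cost for the window
    accepted_resumes: int  # resumes the prime accepted
    placements: int        # placement events landed

REVENUE_PER_PLACEMENT = 3_000  # set per deployment in admin settings

def roi(src: SourceLedger) -> dict:
    # CPA = cost per accepted resume; CPP = cost per placement;
    # both undefined (None) when the denominator is zero
    cpa = src.monthly_cost / src.accepted_resumes if src.accepted_resumes else None
    cpp = src.monthly_cost / src.placements if src.placements else None
    margin = src.placements * REVENUE_PER_PLACEMENT - src.monthly_cost
    return {"source": src.name, "CPA": cpa, "CPP": cpp, "gross_margin": margin}

ledger = [
    SourceLedger("ClearanceJobs", 2_200, 11, 2),
    SourceLedger("internal", 0, 4, 1),
]
for row in map(roi, ledger):
    print(row)
```

Because the assumptions are plain inputs, changing one (say, a seat price) recomputes every downstream number — which is what makes the ledger defensible.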

03 · The job is acceptance

Resume acceptance intelligence

We optimize for the only event the prime cares about: the resume getting accepted. Every score, every tailoring decision, every package is grounded in what the contract's reviewers actually approve.

  • Labor-category-aware (LCAT) scoring with explainable fit factors
  • Acceptance rate by source × labor category × reviewer
  • Tailoring guided by what passed at this contract before

04 · One contract at a time

Contract / labor-category fit, not generic match

A score against a job board isn't a score against your prime. Kaizen reads the actual contract — clearance, certs, must-haves, escalation chain — and tailors the package and the approval path to that specific customer.

  • Per-contract approval ladder, reviewers, and notification channel
  • Per-LCAT must-haves and risk flags surfaced before submission
  • Frozen package version on submission — every byte is auditable

05 · The package wins or loses the deal

Submission package quality

A tailored resume, cover letter, and right-to-represent letter (RTR) — assembled from the candidate's actual record, reshaped to the labor category, and frozen at the moment the human approves. Nothing leaves the door without a recorded decision.

  • Resume reshaping that defends the score, not invents skills
  • Cover letter and RTR generated from contract context
  • Version-frozen, audit-stamped, prime-ready package

06 · The chain is the customer

Human approval gates, by design

Every contract has its own chain — TA lead, BD, program manager, prime POC. Kaizen walks that chain, notifies the right person on the right channel, and refuses to release until every gate clears.

  • Configurable chains per contract, per labor category
  • Reviewer-aware notifications with deadline-aware escalation
  • Rework loop preserves package history, not just the final cut
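As a sketch of how a gated release with a history-preserving rework loop can behave — the `Gate`, `SubmissionPackage`, and `release` names are illustrative, not the product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    role: str              # e.g. "TA lead", "BD", "program manager", "prime POC"
    approved: bool = False

@dataclass
class SubmissionPackage:
    candidate: str
    version: int = 1
    history: list = field(default_factory=list)

    def rework(self, note: str) -> None:
        # rework bumps the version but keeps every prior cut on record
        self.history.append((self.version, note))
        self.version += 1

def release(package: SubmissionPackage, chain: list[Gate]) -> bool:
    """Refuse to release until every gate in the contract's chain clears."""
    return all(g.approved for g in chain)

chain = [Gate("TA lead"), Gate("BD"), Gate("program manager"), Gate("prime POC")]
pkg = SubmissionPackage("candidate-042")
print(release(pkg, chain))  # False until every reviewer clears their gate
```

The key property is that release is a pure function of the chain's state: no gate cleared, no package out, and the version history survives every rework cycle.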

Operating posture

Three modes. One audit trail. Zero surprises.

The agent has three modes, and the mode is visible on every action. The recruiter is editor-in-chief. The contract’s chain is the customer. The audit trail is the proof.

Open the audit trail
Autonomous

Sourcing sweeps, resume ingest, scoring, dedup, tailoring drafts, prime follow-up cadence

Assisted

Outreach drafts, screening prompts, interview briefs — go out under the recruiter's name once approved

Gated

Submission release, offer, placement event — every contract's chain, no exceptions
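A minimal sketch of a fail-closed mode map under that posture — the action names and `Mode` enum here are illustrative assumptions, not the shipped configuration:

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"  # safe, repeatable work
    ASSISTED = "assisted"      # goes out under the recruiter's name once approved
    GATED = "gated"            # the contract's chain must clear, no exceptions

# Hypothetical action → mode map mirroring the posture above
ACTION_MODES = {
    "sourcing_sweep": Mode.AUTONOMOUS,
    "resume_ingest": Mode.AUTONOMOUS,
    "scoring": Mode.AUTONOMOUS,
    "outreach_draft": Mode.ASSISTED,
    "interview_brief": Mode.ASSISTED,
    "submission_release": Mode.GATED,
    "placement_event": Mode.GATED,
}

def mode_for(action: str) -> Mode:
    # Unknown actions default to GATED — fail closed, never open
    return ACTION_MODES.get(action, Mode.GATED)
```

Defaulting unknown actions to the most restrictive mode is the design choice that makes "zero surprises" more than a slogan.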

Placement economics

Every dollar of sourcing is graded against the placement event.

Per-source CPA, CPP, and gross margin — using transparent, debatable cost assumptions. Tune the model and the entire ledger recomputes. Your team can defend the number to anyone who asks.

Open the economics ledger

Revenue / placement

$3,000

Set per deployment in admin settings

Cost per accepted resume

Modeled per source

ClearanceJobs ≈ $2.2k/mo, LinkedIn ≈ $1.85k, Dice ≈ $1.1k, internal $0

Cost per placement

Modeled, deterministic

Sourcing cost ÷ placements — recomputes when assumptions change

Gross margin

Auditable per source

Revenue − modeled cost, surfaced per channel and recruiter

Executive impact

Why this changes the business — in numbers your CFO can defend.

The Stage 00 labor-rate floor, the LCAT-aware acceptance lift, and the agent-prepped queue compound into a measurable annual gross-profit move. Pilot ranges below are conservative and fully editable — slide your own numbers and watch the model recompute.

Plug in your numbers

Model the lift on a typical year of seats.

Conservative pilot ranges are pre-loaded — every assumption is editable. Numbers recompute as you slide. Nothing here leaves the browser.

Modeled annual lift

Gross profit added per year

$229,106

Above current run-rate, holding bill rate, hours, and margin constant. Driven by acceptance lift on the open-seat pipeline.

Placements added
+3.8

10.8 → 14.6 per year

GP per placement
$60,610

Annual revenue × target margin

Time to first sub
14.4h

48h → 14.4h (−33.6h)

Acceptance rate
24.3%

18% baseline · +35% lift applied

Model assumptions are deliberately conservative and fully editable. CPA / CPP and per-source margin are tracked live in the /economics ledger once a pilot is wired in.
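The headline figures above fall out of simple arithmetic. A sketch reproducing them from the pre-loaded pilot ranges (10.8 baseline placements, +35% acceptance lift, $60,610 GP per placement) — the function name and shape are illustrative, not the in-browser model's code:

```python
def modeled_annual_lift(baseline_placements: float,
                        acceptance_lift: float,
                        gp_per_placement: float) -> dict:
    """Holding bill rate, hours, and margin constant, an acceptance-rate
    lift scales placements on the same open-seat pipeline."""
    added = baseline_placements * acceptance_lift
    return {
        "placements_added": round(added, 1),
        "annual_gp_lift": round(added * gp_per_placement),
    }

# Pre-loaded conservative pilot ranges from the page:
# 10.8 × 0.35 = 3.78 added placements; 3.78 × $60,610 ≈ $229,106
result = modeled_annual_lift(baseline_placements=10.8,
                             acceptance_lift=0.35,
                             gp_per_placement=60_610)
print(result)

# The acceptance figure is the same arithmetic: 18% baseline × 1.35 = 24.3%
print(round(0.18 * 1.35, 3))
```

Every input is an editable slider in the page's model, so the output recomputes the moment an assumption changes.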

Proof of value timeline

A 90-day pilot that earns the rollout, on the record.

Two contracts. Real labor rates. The agent runs in parallel to your live desk. By Day 90, the lift is either on the record — or it isn’t. Either way, the decision is informed by your data, not a slide.

  1. Day 0 — 30

    Stand up & configure

    • Cloud Run deploy in your tenant; sign-on wired to your identity provider.
    • Two contracts loaded with their labor-rate ledger (Stage 00) and approval chain.
    • Connector SDK pointed at your existing sourcing seats — ClearanceJobs, LinkedIn, Dice, internal pool.
  2. Day 30 — 60

    Run the desk in parallel

    • Agent shadows live requisitions: sourcing sweeps, scoring, drafted outreach, package tailoring.
    • Recruiter approves every send under their name; gated submissions walk the chain.
    • Acceptance rate, time-to-first-sub, and source ROI logged daily into /economics.
  3. Day 60 — 90

    Prove the lift, expand

    • Pilot vs. baseline window-over-window: acceptance, CPA, CPP, gross margin per source.
    • Kaizen brief surfaces the experiments to scale (templates, sources, approver routing).
    • Decision gate: roll to all live contracts, with the pilot economics on the record.

Objection handling

The questions a careful buyer asks first.

Governance, audit posture, profitability protection, and what the recruiter’s day looks like. Answered against how the product actually runs — not the marketing surface.

Does the agent send anything without us?

No. Outreach goes out under the recruiter's identity only after approval. Submissions walk the contract's approval chain — every gate must clear before the package leaves the door. Autonomous mode is restricted to safe, repeatable work (sweeps, ingest, scoring, follow-up cadence) and the mode is visible on every action.

How does this protect our prime relationships and audit posture?

Every AI action and human decision is recorded with a timestamp, actor, and frozen artifact. Submission packages are version-frozen on release. The full audit trail is exportable and built to be reviewable by the prime, your compliance lead, or the customer.

What if the lift assumptions don't hold for our contracts?

The 90-day pilot is structured to find that out fast. We measure acceptance rate × source × LCAT against a frozen baseline window. Cost assumptions in /economics are transparent and editable — you can defend any number to your prime or your CFO. If the lift isn't there, you don't roll forward.

Will this replace our recruiters?

It changes what they spend their day on. Sourcing sweeps, resume ingest, scoring, drafting, and follow-up cadence are agent work. Judgment calls — voice on outreach, screen decisions, package release, offer — stay with the recruiter. Drop-in new hires are productive in their first hour because the agent has already pre-sourced and pre-drafted.

What about data residency and customer separation?

Single-tenant Cloud Run deploy with your storage and your identity provider. Per-contract approval chains, notification channels, and outreach templates are configured without code changes. Your data does not co-mingle with another customer's.

Can the agent ingest Controlled Unclassified Information (CUI), including U//FOUO?

Not a blocker — but it changes hosting and security posture and requires customer approval of the operating environment. CUI ingestion is addressable as a risk-based deployment decision: deploy into a controlled environment such as Google Assured Workloads or AWS GovCloud, with role-based access controls, encryption in transit and at rest with customer-managed keys, immutable audit logging, U.S. data residency and per-customer segmentation, defined retention and disposition, and model-boundary controls that keep CUI out of any third-party model context. The recommended path is a written agreement on classification scope, the chosen environment, and the boundary controls before any CUI material is loaded.

How is profitability protected at submission?

Stage 00 — labor-rate intake — locks bill rate, pay rate, loaded cost, and target margin per LCAT before sourcing begins. Every downstream stage carries the gross-profit signal. Below-target LCATs surface a warning before submission; high-margin LCATs are prioritized in the sourcing queue.
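As an illustration of that floor check, with made-up rates — the field names, figures, and 30% target here are assumptions, not the product's schema:

```python
from dataclasses import dataclass

@dataclass
class LcatRate:
    """One Stage 00 labor-rate intake row (illustrative)."""
    lcat: str
    bill_rate: float      # $/hr billed to the prime
    loaded_cost: float    # pay rate plus fringe/overhead, $/hr
    target_margin: float  # e.g. 0.30 for a 30% floor

    @property
    def gross_margin(self) -> float:
        return (self.bill_rate - self.loaded_cost) / self.bill_rate

    def below_floor(self) -> bool:
        # surfaced as a warning before any submission goes out
        return self.gross_margin < self.target_margin

rates = [
    LcatRate("Systems Engineer III", bill_rate=145.0, loaded_cost=105.0,
             target_margin=0.30),  # ≈27.6% — warns before submission
    LcatRate("Cloud Architect", bill_rate=170.0, loaded_cost=105.0,
             target_margin=0.30),  # ≈38.2% — prioritized in the queue
]
warnings = [r.lcat for r in rates if r.below_floor()]
print(warnings)
```

Because the floor is evaluated per LCAT before sourcing begins, the gross-profit signal travels with the requisition through every downstream stage.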

How a deployment lands

Cloud Run-ready. Configurable per customer. Recruiters in within an hour.

  1. Stand up on Cloud Run with your data and your contracts.

  2. Connect your existing sourcing channels via the Connector SDK.

  3. Configure approval chains and notification routing per customer.

  4. Recruiters drop in within an hour. Acceptance rate moves in the first window.