Scoring Dimensions

Every resume is scored across 6 dimensions. Each platform weighs these differently based on how it actually processes resumes. See Scoring Methodology for the full math behind how these combine into a final score.

| # | Dimension | Range | Description |
|---|-----------|-------|-------------|
| d_1 | Formatting | 0-100 | Parser compatibility and layout quality |
| d_2 | Keyword Match | 0-100 | Terminology overlap with JD or industry standards |
| d_3 | Section Completeness | 0-100 | Presence of standard resume sections |
| d_4 | Experience Relevance | 0-100 | Bullet quality, action verbs, role relevance |
| d_5 | Education Match | 0-100 | Degree, institution, and date formatting |
| d_6 | Quantification | 0-100 | Ratio of quantified achievement bullets |

Formatting (d_1)

How well your resume would survive each platform’s parser. This is a deduction-based score:

F = \max\!\left(0,\; 100 - \sum_{k} p_k \cdot \sigma\right)

You start at 100 and lose points for each detected issue, scaled by the platform’s parsing strictness σ.

What triggers deductions:

  • Multi-column layouts (p = 15): parsers read text out of order
  • Tables (p = 12): content inside tables may be skipped entirely
  • Images and graphics (p = 8): text embedded in images is invisible
  • Pages exceeding 2 (p = 5): may be truncated
  • Very short content under 150 words (p = 10): likely a parsing failure
  • Very long content over 1500 words (p = 3): consider trimming
  • High special character density (p = 8): encoding problems
  • Excessive all-caps lines (p = 3): confuses section detection
  • Inconsistent bullet styles (p = 2): minor formatting noise
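The deduction rule above can be sketched in a few lines. This is a minimal illustration, not the scorer’s actual implementation: the penalty keys and the issue-detection input are invented for the example, and only the penalty values and the formula come from the lists above.

```python
# Deduction-based formatting score: F = max(0, 100 - sum(p_k) * sigma).
# Penalty values p_k are taken from the issue list above; the keys
# naming them are invented for this sketch.
PENALTIES = {
    "multi_column": 15,
    "tables": 12,
    "images": 8,
    "over_two_pages": 5,
    "under_150_words": 10,
    "over_1500_words": 3,
    "special_chars": 8,
    "all_caps_lines": 3,
    "mixed_bullets": 2,
}

def formatting_score(detected_issues, sigma):
    """Start at 100; each detected issue deducts p_k scaled by strictness sigma."""
    deduction = sum(PENALTIES[k] for k in detected_issues) * sigma
    return max(0.0, 100 - deduction)

# A resume with a multi-column layout and a table, scored on Workday (sigma = 0.90):
print(round(formatting_score({"multi_column", "tables"}, sigma=0.90), 2))  # 75.7
```

The same two issues on Lever (σ = 0.35) would only cost 27 × 0.35 = 9.45 points, which is the strictness effect shown in the next table.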

How strictness changes the impact:

The same issue costs you very different amounts depending on the platform:

| Issue | Workday (σ = 0.90) | iCIMS (σ = 0.60) | Lever (σ = 0.35) |
|-------|--------------------|--------------------|--------------------|
| Multi-column | −13.5 | −9.0 | −5.25 |
| Tables | −10.8 | −7.2 | −4.2 |
| Images | −7.2 | −4.8 | −2.8 |

Positive signals (noted but don’t add points):

  • Clean single-column layout
  • Appropriate page length (1-2 pages)
  • Word count in the ideal range (300-800)

How much each platform cares:

| Platform | Weight (w_1) | Strictness (σ) |
|----------|--------------|----------------|
| Workday | 0.25 | 0.90 |
| SuccessFactors | 0.25 | 0.85 |
| Taleo | 0.20 | 0.85 |
| iCIMS | 0.15 | 0.60 |
| Greenhouse | 0.10 | 0.40 |
| Lever | 0.08 | 0.35 |

Workday and SuccessFactors tie for the highest formatting weight. If you’re applying to Fortune 500 companies (overwhelmingly Workday), a clean single-column PDF is non-negotiable.

Keyword Match (d_2)

How well your resume’s terminology matches what the platform is looking for. This is the single most impactful dimension across most platforms.

The formula:

K = \min\!\left(100,\; \frac{|M| + 0.8 \cdot |S|}{|J|} \times 100\right)

  • M = the set of exact keyword matches
  • S = the set of synonym or partial matches (depends on matching strategy)
  • J = the set of keywords extracted from the job description
  • 0.8 coefficient: synonym matches are worth 80% of exact matches
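The formula translates directly into code. A minimal sketch, assuming the match sets have already been computed by whichever matching strategy applies; the example keyword sets are invented:

```python
# K = min(100, (|M| + 0.8*|S|) / |J| * 100)
def keyword_score(exact_matches, synonym_matches, jd_keywords):
    """Score terminology overlap against the JD keyword set."""
    if not jd_keywords:
        # General mode (no JD): K = 100 by convention.
        return 100.0
    raw = (len(exact_matches) + 0.8 * len(synonym_matches)) / len(jd_keywords) * 100
    return min(100.0, raw)

jd = {"python", "sql", "project manager", "agile", "aws"}
# 2 exact hits plus 1 synonym hit out of 5 JD keywords:
# (2 + 0.8 * 1) / 5 * 100, i.e. roughly 56
print(keyword_score({"python", "sql"}, {"project manager"}, jd))
```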

In general mode (no JD): the scorer evaluates industry-standard terminology, technical skills, certifications, action verbs, and professional language. Without a JD to compare against, K = 100 by convention and the AI evaluates keyword quality holistically.

In targeted mode (with JD): direct comparison between JD requirements and resume content. This is where the matching strategy matters most.

Matching strategies by platform:

| Strategy | Platforms | What Counts as M | What Counts as S |
|----------|-----------|------------------|------------------|
| Exact | Workday, Taleo, SuccessFactors | Literal term match only | ∅ (nothing) |
| Fuzzy | iCIMS | Literal match | Synonym database + canonical forms |
| Semantic | Greenhouse, Lever | Literal match | Synonyms + partial string containment (≥ 3 chars) |

Why this matters: if the JD says “Project Manager” and your resume says “PM”:

  • Exact platforms: miss. “PM” ≠ “Project Manager”. You need both forms.
  • Fuzzy platforms: hit (synonym match, worth 80%).
  • Semantic platforms: hit (synonym match, worth 80%).
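The three strategies can be sketched as one classifier. The synonym table here is invented for illustration; real platforms use much larger synonym databases (iCIMS) or semantic models (Greenhouse, Lever):

```python
# Toy synonym table mapping abbreviations to canonical forms.
# Invented for this example; not any platform's actual database.
SYNONYMS = {"pm": "project manager", "js": "javascript"}

def canonical(term):
    return SYNONYMS.get(term.lower(), term.lower())

def match(jd_term, resume_term, strategy):
    """Classify a JD/resume term pair under one of the three strategies."""
    jd, res = jd_term.lower(), resume_term.lower()
    if jd == res:
        return "exact"        # counts toward M on every platform
    if strategy == "exact":
        return None           # S = empty set: literal matches only
    if canonical(jd) == canonical(res):
        return "synonym"      # counts toward S, worth 0.8
    if strategy == "semantic" and len(res) >= 3 and (res in jd or jd in res):
        return "partial"      # containment match, >= 3 chars
    return None

print(match("Project Manager", "PM", "exact"))  # None: a miss
print(match("Project Manager", "PM", "fuzzy"))  # synonym
```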

How much each platform cares:

| Platform | Weight (w_2) | Strategy |
|----------|--------------|----------|
| Taleo | 0.35 | exact |
| Workday | 0.30 | exact |
| iCIMS | 0.30 | fuzzy |
| Greenhouse | 0.25 | semantic |
| SuccessFactors | 0.25 | exact |
| Lever | 0.22 | semantic |

Taleo’s 0.35 keyword weight combined with exact-only matching is why it has a reputation for being the strictest platform. Over a third of your score depends on having the literal terms from the JD on your resume.

Section Completeness (d_3)

Whether your resume has the standard sections that ATS parsers expect.

Required sections (expected by all platforms):

  • Contact information (name, email, phone)
  • Professional experience / work history
  • Education
  • Skills / technical skills

Bonus sections (improve the score):

  • Summary or objective
  • Certifications
  • Projects
  • Publications
  • Volunteer work
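A hypothetical sketch of the completeness check. The header aliases and the bonus cap below are invented for illustration; real parsers recognize far more header variants:

```python
# Required sections, keyed to illustrative header aliases (invented).
REQUIRED = {
    "contact": {"contact", "contact information"},
    "experience": {"experience", "work history", "professional experience"},
    "education": {"education"},
    "skills": {"skills", "technical skills"},
}
# Bonus sections improve the score.
BONUS = {"summary", "objective", "certifications", "projects",
         "publications", "volunteer work"}

def section_score(headers):
    """Score presence of required sections, plus a small bonus-section lift."""
    found = {h.lower() for h in headers}
    required_hits = sum(1 for aliases in REQUIRED.values() if aliases & found)
    base = required_hits / len(REQUIRED) * 100
    bonus = min(10, 2 * len(BONUS & found))  # illustrative cap, not from the spec
    return min(100, base + bonus)

# Missing a contact header, but has one bonus section:
print(section_score(["Experience", "Education", "Skills", "Projects"]))  # 77.0
```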

Platform differences:

| Platform | Weight (w_3) | Behavior |
|----------|--------------|----------|
| SuccessFactors | 0.20 | Textkernel parser needs clearly structured sections |
| Workday | 0.15 | Requires standard headings (“Experience”, “Education”) |
| Taleo | 0.15 | Requires clearly labeled sections |
| iCIMS | 0.15 | ALEX parser is moderately flexible |
| Greenhouse | 0.10 | Lenient on section naming |
| Lever | 0.10 | Lenient on section naming |

SuccessFactors cares the most about section structure. Its Textkernel-based parser relies on detecting standard section headers to route content into the right fields. Non-standard or creative headers (like “Where I’ve Been” instead of “Experience”) can cause parsing failures.

Experience Relevance (d_4)

Quality and relevance of your professional experience.

What’s evaluated:

  • Quantified achievements (numbers, percentages, dollar amounts)
  • Action verb usage (“Led,” “Built,” “Increased” vs “Responsible for”)
  • Recency weighting (recent experience valued more)
  • Relevance to stated field or JD requirements
  • Bullet point quality and specificity

Example scoring contrast:

“Increased quarterly revenue by 34% through implementation of automated lead scoring”

This scores high: quantified achievement + action verb + specific methodology.

“Responsible for various projects and tasks as assigned”

This scores low: no quantification, passive language, zero specificity.
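The contrast between those two bullets can be approximated with simple heuristics. The verb list and weak-phrase list below are invented examples, not the scorer’s actual lists:

```python
import re

# Invented, abbreviated lists for illustration only.
ACTION_VERBS = {"led", "built", "increased", "launched", "reduced", "designed"}
WEAK_PHRASES = ("responsible for", "tasks as assigned", "various")

def bullet_signals(bullet):
    """Flag quantification, leading action verb, and passive/vague language."""
    text = bullet.lower()
    return {
        "quantified": bool(re.search(r"\d+%?|\$\d", text)),
        "action_verb": text.split()[0] in ACTION_VERBS,
        "weak_language": any(p in text for p in WEAK_PHRASES),
    }

print(bullet_signals("Increased quarterly revenue by 34% through automated lead scoring"))
# {'quantified': True, 'action_verb': True, 'weak_language': False}
print(bullet_signals("Responsible for various projects and tasks as assigned"))
# {'quantified': False, 'action_verb': False, 'weak_language': True}
```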

How much each platform cares:

| Platform | Weight (w_4) |
|----------|--------------|
| Lever | 0.30 |
| Greenhouse | 0.25 |
| iCIMS | 0.20 |
| Workday | 0.15 |
| Taleo | 0.15 |
| SuccessFactors | 0.15 |

Lever and Greenhouse weight experience the highest because they’re designed around structured human review. These platforms surface your experience bullets directly to hiring managers through structured scorecards, so quality matters more than keyword count.

Education Match (d_5)

How well your education section parses and meets expectations.

What’s evaluated:

  • Degree presence and level
  • Field of study relevance
  • Institution name parseability
  • Date formatting (graduation year)
  • GPA/honors (if applicable and strong)
  • Certifications and continuing education

All platforms weight this equally at w_5 = 0.10. Education is a baseline requirement check rather than a differentiator in ATS scoring. It matters for pass/fail (does the degree meet the minimum requirement?) but has less weight in the composite score.

Quantification (d_6)

The ratio of experience bullets that contain measurable achievements. This dimension is derived directly from the experience analysis:

d_6 = \begin{cases} \left\lfloor \dfrac{b_q}{b_t} \times 100 \right\rfloor & \text{if } b_t > 0 \\ 0 & \text{if } b_t = 0 \end{cases}

where b_q = bullets with numbers/percentages/dollar amounts and b_t = total bullets.

Examples:

  • 2 quantified out of 10 total: d_6 = 20
  • 5 quantified out of 10 total: d_6 = 50
  • 8 quantified out of 10 total: d_6 = 80
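The ratio maps directly to code. The regex below is an invented approximation of “numbers, percentages, and dollar amounts”; only the formula itself comes from the definition above:

```python
import re
from math import floor

# Rough detector for quantified bullets (percentages, dollar amounts, bare numbers).
QUANT_PATTERN = re.compile(r"\d+%|\$[\d,]+|\b\d+\b")

def quantification_score(bullets):
    """d6 = floor(b_q / b_t * 100); 0 when there are no bullets at all."""
    if not bullets:
        return 0
    b_q = sum(1 for b in bullets if QUANT_PATTERN.search(b))
    return floor(b_q / len(bullets) * 100)

bullets = [
    "Increased revenue by 34%",
    "Managed a team",
    "Cut costs by $120,000",
]
print(quantification_score(bullets))  # floor(2/3 * 100) = 66
```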

How much each platform cares:

| Platform | Weight (w_6) |
|----------|--------------|
| Greenhouse | 0.20 |
| Lever | 0.20 |
| iCIMS | 0.10 |
| Workday | 0.05 |
| Taleo | 0.05 |
| SuccessFactors | 0.05 |

There’s a clear split here. Modern platforms (Greenhouse, Lever) weight quantification 4x more than legacy systems (Workday, Taleo, SuccessFactors). The reasoning: modern ATS platforms are built to surface candidate data to hiring managers through structured scorecards and evaluation workflows. Quantified achievements are the kind of concrete signal that helps a reviewer make a quick decision.

For legacy systems, the parser just needs to extract and index your content. Whether your bullets say “increased revenue by 34%” or “responsible for revenue” doesn’t change how Taleo’s keyword index stores it.

To see how different the scoring profiles really are, here’s how each platform distributes its weight budget across the 6 dimensions:

| Platform | Profile Shape |
|----------|---------------|
| Workday | Formatting and keywords dominate (0.25 + 0.30 = 0.55) |
| Taleo | Keywords dominate everything else (0.35) |
| iCIMS | Balanced between keywords and experience (0.30 + 0.20) |
| Greenhouse | Experience and quantification lead (0.25 + 0.20 = 0.45) |
| Lever | Experience-heavy with quantification (0.30 + 0.20 = 0.50) |
| SuccessFactors | Formatting, keywords, and sections balanced (0.25 + 0.25 + 0.20 = 0.70) |
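Assuming the final platform score is a simple weighted sum of the six dimension scores (the full combination rules are covered in Scoring Methodology, so this is a sketch, not the definitive formula), a platform’s weight profile applies like this. The dimension scores in the example are invented; the weights are Workday’s from the tables above:

```python
# Workday's weights w_1..w_6, collected from the per-dimension tables above.
WORKDAY_WEIGHTS = {
    "formatting": 0.25, "keywords": 0.30, "sections": 0.15,
    "experience": 0.15, "education": 0.10, "quantification": 0.05,
}

def composite(dimension_scores, weights):
    """Weighted sum of the six 0-100 dimension scores."""
    return sum(weights[d] * dimension_scores[d] for d in weights)

# Invented example scores for one resume:
scores = {"formatting": 80, "keywords": 60, "sections": 90,
          "experience": 70, "education": 100, "quantification": 40}
print(round(composite(scores, WORKDAY_WEIGHTS), 1))  # 74.0
```

Swapping in Lever’s weights (0.08, 0.22, 0.10, 0.30, 0.10, 0.20) for the same resume would reward the experience score far more and the formatting score far less, which is exactly the profile difference the table above describes.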

The shift from legacy to modern platforms is clear: old systems care about parsing correctly and matching keywords. New systems care about experience quality and measurable impact.