Documentation

Learn how our assessments work, how we develop curriculum, and how to get the most out of solvelab.

Assessment Methodology

Our assessments are designed to measure practical AI fluency, not theoretical knowledge or tool familiarity. We focus on observable behaviors that indicate whether someone can effectively work with AI in real scenarios.

What We Measure

Task Decomposition

How well candidates break complex problems into AI-solvable components. Strong performers identify what AI can and cannot do.

Prompt Quality

The clarity, specificity, and effectiveness of instructions given to AI. We analyze prompt structure, context provision, and constraint setting.
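
As a hypothetical illustration (the task is invented, not an actual assessment item), compare:

  Weak:   "Summarize this report."

  Strong: "Summarize the attached Q3 sales report for a non-technical executive audience. Focus on revenue trends and the two largest risks. Keep it under 150 words and avoid jargon."

The stronger prompt supplies context (source document, audience), sets constraints (length, tone), and states the outcome it needs, which is the kind of structure we look for.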

Iteration Patterns

How candidates refine AI outputs. Effective users iterate purposefully, building on previous responses rather than starting over.

Output Validation

Critical evaluation of AI-generated content. We measure whether candidates verify, test, and appropriately trust or question AI outputs.

Business Application

The ability to connect AI capabilities to practical outcomes. Strong performers understand when AI adds value and when it doesn't.

Scoring

Each dimension is scored on a 0-100 scale. Scores are based on observable actions during the assessment, not self-reported abilities. We use a combination of automated analysis and expert-designed rubrics.

Overall scores are weighted based on role requirements. A data analyst role might weight output validation higher, while a creative role might emphasize iteration patterns.
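
The arithmetic is a weighted average of the five dimension scores. The sketch below, in Python, is illustrative only: the scores and weights are invented and are not the weights solvelab uses for any actual role.

    # Illustrative sketch: combine per-dimension scores (0-100) into a
    # weighted overall score. Weights here are hypothetical, not solvelab's.
    scores = {
        "task_decomposition": 80,
        "prompt_quality": 70,
        "iteration_patterns": 60,
        "output_validation": 90,
        "business_application": 80,
    }
    # Hypothetical weighting for a role that emphasizes output validation;
    # weights should sum to 1.0.
    weights = {
        "task_decomposition": 0.20,
        "prompt_quality": 0.20,
        "iteration_patterns": 0.15,
        "output_validation": 0.30,
        "business_application": 0.15,
    }
    overall = sum(scores[d] * weights[d] for d in scores)
    print(round(overall, 1))  # 78.0 for these illustrative numbers

Shifting weight toward output validation, as a data analyst role might, pulls the overall score toward that dimension; emphasizing iteration patterns for a creative role works the same way.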

Curriculum Development

Our assessment content is developed by AI practitioners with experience implementing AI in enterprise environments. We don't test abstract knowledge. We test skills that matter in real work.

Development Process

1. Role Analysis

We analyze job requirements and identify the specific AI skills that matter for each role type. This includes reviewing real job postings, interviewing hiring managers, and studying workflow patterns.

2. Scenario Design

Expert practitioners design realistic scenarios based on common tasks in each role. Scenarios are reviewed for authenticity and relevance by industry professionals.

3. Rubric Development

We create detailed scoring rubrics that define what good, average, and poor performance looks like. Rubrics are calibrated against known performers to ensure accuracy.

4. Validation

All assessments go through pilot testing with real users. We measure reliability, validity, and fairness before any assessment goes live.

Content Standards

  • All scenarios reflect real-world tasks, not artificial puzzles
  • Content is reviewed quarterly and updated as AI tools evolve
  • Industry-specific content is validated by domain experts
  • Assessments are designed to be fair across different AI tools and approaches

Getting Started

Setting up your first assessment takes just a few minutes. Here's how to get started.

1. Describe the role

Tell us about the position you're hiring for or the team you're assessing. Include:

  • Job title and responsibilities
  • Industry and company context
  • Specific AI tools or workflows they'll use
  • What "good" looks like for this role

2. Review the generated assessment

We'll create a custom assessment based on your input. You can:

  • Preview all scenarios and questions
  • Adjust difficulty and time limits
  • Add your own custom questions
  • Modify scoring weights

3. Invite candidates

Send assessment links to candidates or team members. They'll receive:

  • A unique, secure link to their assessment
  • Clear instructions and time expectations
  • Access to the AI assistant during the test

4. Review results

Once assessments are complete, you'll get:

  • Individual score breakdowns by dimension
  • AI interaction analysis and patterns
  • Comparative rankings across candidates
  • Recommendations for each person

Ready to get started?

Request a demo and we'll walk you through the platform.