AI Assessment

How the Assessment Agent grades open-ended responses and provides learner feedback.

The Assessment Agent automatically grades responses that can't be scored by simple keyword matching — short answers, essays, and AI Dialogue conversations.

What the Assessment Agent grades

Block / Question type     What's evaluated
Short Answer questions    Response against a model answer
Essay questions           Response against a rubric or criteria
AI Dialogue blocks        Full conversation against success criteria

Short Answer grading

When you create a Short Answer question, you provide a model answer. The Assessment Agent:

  1. Reads the learner's response
  2. Compares it semantically to the model answer (not just keyword matching)
  3. Assigns a score based on how well the key points are covered
  4. Optionally provides written feedback
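
To make the comparison step concrete, here is a toy sketch of semantic-style grading. This is illustrative only, not the product's actual implementation: a real Assessment Agent compares meaning with a language model or embeddings, but a simple bag-of-words cosine similarity shows the shape of the idea (partial credit for overlapping key points, rather than exact keyword matching).

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercased word counts; a stand-in for a real embedding."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts, in the range 0.0 to 1.0."""
    va, vb = tokenize(a), tokenize(b)
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def grade_short_answer(response: str, model_answer: str, max_score: int = 10) -> int:
    """Scale similarity to a score out of max_score."""
    return round(similarity(response, model_answer) * max_score)

model = "Photosynthesis converts sunlight, water, and carbon dioxide into glucose and oxygen."
print(grade_short_answer("Plants use sunlight and water to make glucose, releasing oxygen.", model))
```

A response that restates the model answer's key points scores high even with different wording, which is the behavior keyword matching cannot provide.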

Setting it up:

  • In the Quiz block, add a Short Answer question
  • Enter your model answer in the Correct Answer field
  • Optionally add feedback text for learners

Essay grading

Essay questions support rubric-based grading.

Setting it up:

  1. Add an Essay question to a Quiz block
  2. In the Grading Rubric field, describe what a full-marks response looks like
  3. Optionally set criteria categories (e.g., "Clarity", "Accuracy", "Examples")
  4. Set the maximum score

The Assessment Agent reads the learner's essay, evaluates it against the rubric, assigns a score, and writes personalized feedback explaining what was done well and what could be improved.

Tip: The more specific your rubric, the more consistent and useful the grading. Vague rubrics produce vague feedback.
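
To illustrate how criteria categories can roll up into a single score, here is a hypothetical sketch. The category names, weights, and averaging scheme are assumptions for illustration, not the Assessment Agent's actual algorithm:

```python
def rubric_score(criterion_scores: dict[str, float], weights: dict[str, float], max_score: int) -> float:
    """Weighted average of per-criterion scores (each 0.0-1.0), scaled to max_score."""
    total_weight = sum(weights.values())
    weighted = sum(criterion_scores[c] * weights[c] for c in weights)
    return round(weighted / total_weight * max_score, 1)

# Hypothetical criteria matching the example categories above,
# with Accuracy weighted twice as heavily as the others.
scores = {"Clarity": 0.8, "Accuracy": 0.9, "Examples": 0.5}
weights = {"Clarity": 1.0, "Accuracy": 2.0, "Examples": 1.0}
print(rubric_score(scores, weights, max_score=20))  # → 15.5
```

Separating criteria like this is also why specific rubrics grade more consistently: each category can be judged on its own rather than as one holistic impression.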


AI Dialogue grading

After a learner finishes an AI Dialogue conversation, the Assessment Agent reviews the full transcript.

What it evaluates:

  • Did the learner achieve the stated goal?
  • Were the success criteria met?
  • Was the communication of high quality (tone, approach, accuracy)?

A score and written debrief are shown to the learner at the end of the dialogue.

Setting success criteria: In the AI Dialogue block settings, define:

  • The goal of the conversation
  • Success criteria — what a successful outcome looks like
  • Optional scoring dimensions (e.g., Empathy, Accuracy, Resolution)
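
Conceptually, the transcript review turns each success criterion into a met/not-met judgment plus a note, then combines them into a score and debrief. The sketch below is an assumed shape for illustration (criterion names, the pass-fraction scoring, and the debrief format are all hypothetical, and in the real product the met/missed judgments come from the Assessment Agent reading the transcript):

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    met: bool   # in practice, judged by the grading model from the transcript
    note: str

def debrief(criteria: list[Criterion], max_score: int = 10) -> tuple[int, str]:
    """Score = fraction of criteria met, scaled; debrief lists each outcome."""
    met = sum(c.met for c in criteria)
    score = round(met / len(criteria) * max_score)
    lines = [f"[{'met' if c.met else 'missed'}] {c.name}: {c.note}" for c in criteria]
    return score, "\n".join(lines)

results = [
    Criterion("Achieved the stated goal", True, "Customer issue was resolved."),
    Criterion("Maintained an empathetic tone", True, "Acknowledged frustration early."),
    Criterion("Offered a concrete next step", False, "No follow-up was proposed."),
]
score, text = debrief(results)
print(score)  # → 7
print(text)
```

Writing success criteria as individually checkable statements, as modeled here, tends to produce clearer debriefs than a single broad goal.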

Scores and records

All AI-assessed scores are:

  • Recorded in the learner's progress record
  • Visible in the Learners dashboard
  • Included in overall course completion calculations

Instructors can review individual scores and feedback from the learner detail view.


Limitations

  • AI grading is high-quality but not infallible. For high-stakes assessments, consider manual review.
  • Grading consumes AI credits.
  • Feedback is generated in the course's configured content language.