This section covers the methods of the client.evals module. See the API Reference for full details.
Evaluations
Create
Creates a brand-new evaluation and returns the complete object.

Returns a fully-populated Evaluation instance with the following fields:
- Unique identifier for the evaluation.
- Human-readable evaluation name.
- ID of the owning project.
- default_inference_settings: default inference settings inherited by iterations (nullable).
- documents (array[EvaluationDocument]): list of linked documents.
- List of linked iterations.
- Unix timestamp (seconds) of creation.
from uiform import UiForm
client = UiForm()
schema = {
    "title": "InvoiceEvaluation",
    "type": "object",
    "properties": {
        "invoice_number": {"type": "string"},
        "total": {"type": "number"}
    },
    "required": ["invoice_number", "total"]
}

invoice_eval = client.evals.create(
    name="My Invoice Eval",
    json_schema=schema
)
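Before uploading annotations against this schema, it can be useful to sanity-check them locally. Below is a minimal sketch using only the standard library; the check_annotation helper is hypothetical and not part of the uiform SDK:

```python
# Hypothetical helper (not part of the uiform SDK): checks an annotation
# dict against the simple invoice schema defined above before uploading.
def check_annotation(annotation, schema):
    # Every required key must be present.
    for key in schema["required"]:
        if key not in annotation:
            return False
    # Each present key must match its declared JSON Schema type.
    type_map = {"string": str, "number": (int, float)}
    for key, spec in schema["properties"].items():
        if key in annotation and not isinstance(annotation[key], type_map[spec["type"]]):
            return False
    return True

schema = {
    "title": "InvoiceEvaluation",
    "type": "object",
    "properties": {
        "invoice_number": {"type": "string"},
        "total": {"type": "number"}
    },
    "required": ["invoice_number", "total"]
}

print(check_annotation({"invoice_number": "INV-0425", "total": 1234.56}, schema))  # True
print(check_annotation({"invoice_number": "INV-0425"}, schema))                    # False: missing "total"
```

This only covers the two JSON Schema types used above; a full validator (e.g. the jsonschema package) handles nested objects, formats, and more.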
Get
Gets a single evaluation by ID.
invoice_eval = client.evals.get(evaluation_id="eval_01HZX4H9Q0J2M4P1ZS6E6FY8MN")
Update
Edits one or more fields. Omitted parameters remain unchanged.
updated = client.evals.update(
    evaluation_id="eval_01HZX4H9Q0J2M4P1ZS6E6FY8MN",
    name="Invoice Eval v2",
)
List
Lists all evaluations that belong to a given project.
Returns an array of evaluations under the data key.
project_evals = client.evals.list(project_id="proj_abc123")
Delete
Permanently deletes an evaluation.
client.evals.delete(evaluation_id="eval_01HZX4H9Q0J2M4P1ZS6E6FY8MN")
Documents
Documents are the input data for UiForm evaluations. They contain:
- Content: files, images, PDFs, or other data to be analyzed
- Annotation: user-supplied annotations for evaluation
- ID: unique identifier
Documents can be uploaded from file paths, URLs, file objects, PIL Images, or MIMEData. The client.evals.documents
API lets you create, list, retrieve, update, and delete documents associated with evaluations.
Create
Uploads a document (file path, URL, file-like object, PIL Image, or pre-built MIMEData) and attaches ground-truth annotations.

Returns an EvaluationDocument Object with the following fields:
- object: always "evaluation.document".
- annotation: user-supplied annotations.
from pathlib import Path
sales_invoice = client.evals.documents.create(
    evaluation_id="eval_01HZX4H9Q0J2M4P1ZS6E6FY8MN",
    document=Path("invoices/2025-04-invoice.pdf"),
    annotation={"invoice_number": "INV-0425", "total": 1234.56}
)
List
Returns an array[EvaluationDocument].
docs = client.evals.documents.list(evaluation_id="eval_01HZX4H9Q0J2M4P1ZS6E6FY8MN")
Get
single_doc = client.evals.documents.get(
    evaluation_id="eval_01HZX4H9Q0J2M4P1ZS6E6FY8MN",
    document_id="doc_f7c8…"
)
Update
Replaces or amends the ground-truth annotation. Omitted fields remain unchanged.
client.evals.documents.update(
    evaluation_id="eval_…",
    document_id="doc_f7c8…",
    annotation={"invoice_number": "INV-0425-CORRECTED", "total": 1240.00}
)
Delete
client.evals.documents.delete(evaluation_id="eval_4988392", document_id="doc_f7c8…")
Iterations
List
iters = client.evals.iterations.list(evaluation_id="eval_…")
Create
Runs the evaluation on a chosen model.

Returns an iteration whose object field is always "evaluation.iteration". Enumerated fields:
- modality: native | text | image | image+text
- reasoning_effort: auto | low | medium | high
- status: queued | running | finished | failed
iteration = client.evals.iterations.create(
    evaluation_id="eval_…",
    model="gpt-4.1-nano",
    temperature=0.0,
    modality="text",
    n_consensus=3
)
Delete
client.evals.iterations.delete(iteration_id="iter_01HZ…")
Compute Distances
Returns embedding-level distances between a specific document and the ground-truth/reference in a given iteration. The returned object field is always "distance.result".
client.evals.iterations.compute_distances(
    iteration_id="iter_01HZ…",
    document_id="doc_f7c8…"
)
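The exact metric used by compute_distances is determined by the API and is not specified here. As an illustration of what an "embedding-level distance" means, one common choice is cosine distance between two embedding vectors:

```python
import math

# Illustration only: cosine distance, one common embedding-level distance.
# The metric compute_distances actually uses is defined by the API, not here.
def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0 (same direction)
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 (orthogonal)
```

A distance of 0.0 means the extracted value's embedding points in the same direction as the ground truth's; larger values indicate greater semantic divergence.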