2. Overview
2.1. Description
Python library for computing diefficiency metrics dief@t and dief@k.
The metrics dief@t and dief@k measure the diefficiency of an approach during an elapsed time period t or while the first k answers are produced, respectively. Both metrics rely on computing the area under the curve (AUC) of answer traces, and thus capture the concentration of the answer rate over a time interval.
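The following is a minimal sketch of the idea behind dief@t: the AUC of the answer trace (cumulative number of answers over time) up to a time point t. The function name and the step-wise integration used here are illustrative assumptions; the library's own implementation may integrate the trace differently.

```python
import numpy as np

def dief_at_t_sketch(answer_times: np.ndarray, t: float) -> float:
    """Approximate dief@t as the AUC of the cumulative answer count up to time t.

    answer_times: sorted timestamps (e.g., in seconds) at which each answer arrived.
    t: elapsed time period over which diefficiency is measured.
    (Illustrative sketch only; not the library's implementation.)
    """
    times = answer_times[answer_times <= t]
    if times.size == 0:
        return 0.0
    counts = np.arange(1, times.size + 1)   # cumulative answers produced
    points = np.append(times, t)            # close the interval at t
    # Step-function AUC: each answer count holds until the next answer (or until t).
    return float(np.sum(counts * np.diff(points)))

# Example: three answers at 1s, 2s, and 4s; diefficiency over the first 5 seconds.
print(dief_at_t_sketch(np.array([1.0, 2.0, 4.0]), t=5.0))  # 1*1 + 2*2 + 3*1 = 8.0
```

A higher value indicates that more answers were concentrated earlier in the interval, i.e., a more continuously efficient approach.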
2.2. Attributes
Default colors used for plotting: yellow, red, blue.
2.3. Functions
The library provides the following functions (see the usage sketch after this list):

- Compares dief@k at different answer completeness percentages.
- Computes the dief@k metric for a specific test at a given number of answers k.
- Computes the dief@k metric for a specific test at a given percentage of answers kp.
- Computes the dief@t metric for a specific test at a given time point t.
- Reads the other metrics from a CSV file.
- Reads answer traces from a CSV file.
- Compares dief@t with other conventional metrics used in query performance analysis.
- Plots the answer traces of all tests; one plot per test.
- Generates radar plots that compare dief@k at different answer completeness percentages; one plot per test.
- Generates radar plots that compare dief@t with conventional metrics; one plot per test.
- Plots the answer trace of a given test for all approaches.
- Generates a radar plot that compares dief@k at different answer completeness percentages for a specific test.
- Creates a bar chart with the overall execution time for all the tests and approaches in the metrics data.
- Generates a radar plot that compares dief@t with conventional metrics for a specific test.
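The sketch below illustrates how the functions above are typically combined: load the answer traces and conventional metrics from CSV files, then compute dief@t and dief@k for a given test. The import name diefpy, the function names (load_trace, load_metrics, dieft, diefk), the CSV file names, and the test label "Q1" are assumptions for illustration; check the library's API documentation for the exact names and signatures.

```python
# Hedged usage sketch; names and signatures are assumptions, not the definitive API.
import diefpy

traces = diefpy.load_trace("traces.csv")      # answer traces (test, approach, answer, time)
metrics = diefpy.load_metrics("metrics.csv")  # conventional metrics per test and approach

dt = diefpy.dieft(traces, "Q1")      # dief@t for test "Q1" over the measured time period
dk = diefpy.diefk(traces, "Q1", 10)  # dief@k for test "Q1" while producing the first 10 answers

print(dt)
print(dk)
```

The plotting functions listed above can then be used on the same traces and metrics data to produce the answer-trace plots, radar plots, and execution-time bar charts, using the default colors (yellow, red, blue) or custom ones.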