public class TopNPrecisionRecallMetric extends ListOnlyTopNMetric<TopNPrecisionRecallMetric.Context>
A metric to compute the precision and recall of a recommender given a set of candidate items to recommend from and a set of desired items. The aggregate results are means of the user results.
This can be used to compute metrics like fallout (probability that a recommendation is bad) by configuring bad items as the test item set.
This metric is registered under the name pr. It has two configuration parameters:

- suffix
- goodItems
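To make the metric's semantics concrete, here is a plain-Java sketch of the per-user quantities it reports (the class and method names are illustrative, not LensKit API): precision is the fraction of recommended items that are good, and recall is the fraction of good items that were recommended. As noted above, configuring the bad items as the "good" set turns the precision value into fallout.

```java
import java.util.List;
import java.util.Set;

// Illustrative sketch only, not LensKit's actual implementation.
public class PrecisionRecallSketch {
    // Precision: fraction of recommended items that are good.
    public static double precision(List<Long> recs, Set<Long> good) {
        if (recs.isEmpty()) return 0.0;
        long hits = recs.stream().filter(good::contains).count();
        return (double) hits / recs.size();
    }

    // Recall: fraction of good items that appear in the recommendations.
    public static double recall(List<Long> recs, Set<Long> good) {
        if (good.isEmpty()) return 0.0;
        long hits = recs.stream().filter(good::contains).count();
        return (double) hits / good.size();
    }

    public static void main(String[] args) {
        List<Long> recs = List.of(1L, 2L, 3L, 4L);
        Set<Long> good = Set.of(2L, 4L, 9L);
        System.out.println(precision(recs, good)); // 2 of 4 recs are good -> 0.5
        System.out.println(recall(recs, good));    // 2 of 3 good items recommended
    }
}
```

The aggregate columns reported by the metric are then simply the means of these per-user values.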
Modifier and Type | Class and Description |
---|---|
static class | TopNPrecisionRecallMetric.Context |
static class | TopNPrecisionRecallMetric.PresRecResult |
Constructor and Description |
---|
TopNPrecisionRecallMetric() Construct a new precision-recall metric using the user’s test items as good. |
TopNPrecisionRecallMetric(ItemSelector good, java.lang.String sfx) Construct a new recall and precision top-N metric. |
TopNPrecisionRecallMetric(PRMetricSpec spec) Construct a precision-recall metric from a spec. |
Modifier and Type | Method and Description |
---|---|
TopNPrecisionRecallMetric.Context | createContext(AlgorithmInstance algorithm, DataSet dataSet, RecommenderEngine rec) Create the context for an experimental condition (algorithm/data set pair). |
MetricResult | getAggregateMeasurements(TopNPrecisionRecallMetric.Context context) Get the aggregate results from an accumulator. |
MetricResult | measureUser(Recommender rec, TestUser user, int targetLength, LongList recs, TopNPrecisionRecallMetric.Context context) Measurement method that only uses the recommend list. |
Methods inherited from class ListOnlyTopNMetric: measureUser
Methods inherited from class Metric: getAggregateColumnLabels, getColumnLabels, getRequiredRoots
public TopNPrecisionRecallMetric()
Construct a new precision-recall metric using the user’s test items as good.
public TopNPrecisionRecallMetric(PRMetricSpec spec)
Construct a precision-recall metric from a spec.
spec - The precision-recall metric spec.

public TopNPrecisionRecallMetric(ItemSelector good, java.lang.String sfx)
Construct a new recall and precision top-N metric.
good - an item selector for the good items.
sfx - the suffix label for this evaluation, or null for no suffix.

@Nonnull public MetricResult measureUser(Recommender rec, TestUser user, int targetLength, LongList recs, TopNPrecisionRecallMetric.Context context)
Description copied from class: ListOnlyTopNMetric
Measurement method that only uses the recommend list.
Thread Safety: This method may be called concurrently by multiple threads with the same recommender and context.
Specified by: measureUser in class ListOnlyTopNMetric<TopNPrecisionRecallMetric.Context>
rec - The recommender used to recommend for this user.
user - The user.
targetLength - The target list length.
recs - The list of recommendations.
context - The context.

@Nullable public TopNPrecisionRecallMetric.Context createContext(AlgorithmInstance algorithm, DataSet dataSet, RecommenderEngine rec)
Description copied from class: Metric
Create the context for an experimental condition (algorithm/data set pair). The default implementation returns null.
Note: Contexts must be thread-safe, in that multiple concurrent calls to the appropriate user-measurement function with the same context must be safe. This can be handled either by the context itself, or by the user-measurement function.
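One way a context can satisfy this thread-safety requirement itself is to accumulate per-user results with lock-free adders, so concurrent measurement calls need no external synchronization. The sketch below is a hypothetical illustration, not LensKit's actual Context class:

```java
import java.util.concurrent.atomic.DoubleAdder;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical thread-safe accumulation context. Multiple threads may call
// addUser concurrently; DoubleAdder/LongAdder make each update safe without locks.
public class SafeContextSketch {
    private final DoubleAdder precisionSum = new DoubleAdder();
    private final DoubleAdder recallSum = new DoubleAdder();
    private final LongAdder users = new LongAdder();

    // Record one user's measurement; callable from any thread.
    public void addUser(double precision, double recall) {
        precisionSum.add(precision);
        recallSum.add(recall);
        users.increment();
    }

    // Aggregate results are the means of the user results.
    public double meanPrecision() {
        long n = users.sum();
        return n == 0 ? 0.0 : precisionSum.sum() / n;
    }

    public double meanRecall() {
        long n = users.sum();
        return n == 0 ? 0.0 : recallSum.sum() / n;
    }
}
```

The alternative the note mentions, synchronizing inside the user-measurement function instead, also works but serializes the updates.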
Overrides: createContext in class Metric<TopNPrecisionRecallMetric.Context>
algorithm - The algorithm.
dataSet - The data set.
rec - The LensKit recommender engine, if applicable. This can be null for an external algorithm that does not provide a LensKit recommender.

Returns: the context, or null.

@Nonnull public MetricResult getAggregateMeasurements(TopNPrecisionRecallMetric.Context context)
Description copied from class: Metric
Get the aggregate results from an accumulator. The default implementation returns MetricResult.empty().
Overrides: getAggregateMeasurements in class Metric<TopNPrecisionRecallMetric.Context>
context - The context for an experimental condition.