This repository has been archived by the owner on Sep 1, 2023. It is now read-only.

Fixed anomaly likelihood doc problems. #3629

Merged 1 commit on May 16, 2017
4 changes: 2 additions & 2 deletions docs/README.md
@@ -156,8 +156,8 @@ nupic
│   ├── stats.py [TODO]
│   └── topology.py [TODO]
├── regions
-│   ├── AnomalyLikelihoodRegion.py [TODO]
-│   ├── AnomalyRegion.py [TODO]
+│   ├── AnomalyLikelihoodRegion.py [OK]
+│   ├── AnomalyRegion.py [OK]
│   ├── CLAClassifierRegion.py [TODO]
│   ├── KNNAnomalyClassifierRegion.py [TODO]
│   ├── KNNClassifierRegion.py [TODO]
31 changes: 17 additions & 14 deletions src/nupic/algorithms/anomaly_likelihood.py
@@ -21,64 +21,67 @@

"""
This module analyzes and estimates the distribution of averaged anomaly scores
-from a given model. Given a new anomaly score `s`, estimates `P(score >= s)`.
+from a given model. Given a new anomaly score ``s``, estimates
+``P(score >= s)``.

-The number `P(score >= s)` represents the likelihood of the current state of
+The number ``P(score >= s)`` represents the likelihood of the current state of
predictability. For example, a likelihood of 0.01 or 1% means we see this much
predictability about one out of every 100 records. The number is not as unusual
as it seems. For records that arrive every minute, this means once every hour
and 40 minutes. A likelihood of 0.0001 or 0.01% means we see it once out of
10,000 records, or about once every 7 days.
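
The recurrence figures above follow from simple arithmetic. A minimal sketch (assuming, as the text does, one record per minute; the function name here is hypothetical, not part of nupic):

```python
# Convert an anomaly likelihood into an expected recurrence interval,
# assuming records arrive once per minute (as in the examples above).

def records_between_events(likelihood):
    # An event with probability `likelihood` per record recurs about
    # once every 1/likelihood records.
    return 1.0 / likelihood

minutes = records_between_events(0.01)             # 1% -> every 100 minutes
days = records_between_events(0.0001) / (60 * 24)  # 0.01% -> about 7 days
```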

-**USAGE**
+USAGE
++++++

There are two ways to use the code: using the
:class:`.anomaly_likelihood.AnomalyLikelihood` helper class or using the raw
individual functions :func:`~.anomaly_likelihood.estimateAnomalyLikelihoods` and
:func:`~.anomaly_likelihood.updateAnomalyLikelihoods`.
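
To illustrate the helper-class flavor, here is a minimal self-contained stand-in. It is not the nupic class itself: the real estimation logic is replaced by a plain Gaussian tail probability, and all names in it are hypothetical.

```python
from statistics import NormalDist, mean, stdev

class LikelihoodHelper:
    """Toy stand-in for a stateful anomaly-likelihood helper."""

    def __init__(self, learning_period=5):
        self.scores = []
        self.learning_period = learning_period

    def anomaly_probability(self, anomaly_score):
        # Return P(score >= anomaly_score) under the scores seen so far,
        # modeled (crudely) as a single Gaussian.
        self.scores.append(anomaly_score)
        if len(self.scores) < self.learning_period:
            return 0.5  # not enough history to estimate a distribution
        dist = NormalDist(mean(self.scores), stdev(self.scores) or 1e-6)
        return 1.0 - dist.cdf(anomaly_score)

helper = LikelihoodHelper()
for s in [0.1, 0.12, 0.11, 0.1, 0.13]:
    helper.anomaly_probability(s)           # warm up on ordinary scores
surprise = helper.anomaly_probability(0.9)  # far above anything seen
```

A score far above the history gets a small probability (here ``surprise`` comes out around 0.02), which is exactly the "we rarely see this much unpredictability" reading described above.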


-**Low-Level Function Usage**
+Low-Level Function Usage
+++++++++++++++++++++++++


-There are two primary interface routines:
+There are two primary interface routines.

- :func:`~.anomaly_likelihood.estimateAnomalyLikelihoods`: batch routine, called
  initially and once in a while
- :func:`~.anomaly_likelihood.updateAnomalyLikelihoods`: online routine, called
  for every new data point

-1. Initially::
+Initially:

+.. code-block:: python

likelihoods, avgRecordList, estimatorParams = \\
estimateAnomalyLikelihoods(metric_data)

-2. Whenever you get new data::
+Whenever you get new data:

+.. code-block:: python

likelihoods, avgRecordList, estimatorParams = \\
updateAnomalyLikelihoods(data2, estimatorParams)

-3. And again (make sure you use the new estimatorParams returned in the above
-call to updateAnomalyLikelihoods!)::
+And again (make sure you use the new estimatorParams returned in the above call
+to updateAnomalyLikelihoods!).

+.. code-block:: python

likelihoods, avgRecordList, estimatorParams = \\
updateAnomalyLikelihoods(data3, estimatorParams)

-4. Every once in a while update estimator with a lot of recent data::
+Every once in a while update estimator with a lot of recent data.

+.. code-block:: python

likelihoods, avgRecordList, estimatorParams = \\
estimateAnomalyLikelihoods(lots_of_metric_data)
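
Put together, the four steps read as follows. The functions below are toy stand-ins with the same call shape as the documented routines (a plain Gaussian fit over raw scores); the actual nupic implementations average scores over a window and take full records rather than bare floats:

```python
from statistics import NormalDist, mean, stdev

def estimateAnomalyLikelihoods(scores):
    # Batch step: fit a distribution to historical scores (toy version).
    params = {"mean": mean(scores), "stdev": stdev(scores) or 1e-6}
    dist = NormalDist(params["mean"], params["stdev"])
    return [1.0 - dist.cdf(s) for s in scores], scores, params

def updateAnomalyLikelihoods(scores, params):
    # Online step: score new points against the existing estimate.
    dist = NormalDist(params["mean"], params["stdev"])
    return [1.0 - dist.cdf(s) for s in scores], scores, params

# 1. Initially:
likelihoods, avgRecordList, estimatorParams = \
    estimateAnomalyLikelihoods([0.1, 0.2, 0.15, 0.1, 0.12])

# 2./3. For new data, always pass the params returned by the last call:
likelihoods, avgRecordList, estimatorParams = \
    updateAnomalyLikelihoods([0.11, 0.9], estimatorParams)
assert likelihoods[1] < 0.05  # 0.9 is far outside the history

# 4. Every once in a while, re-estimate from a larger recent batch:
likelihoods, avgRecordList, estimatorParams = \
    estimateAnomalyLikelihoods([0.1, 0.12, 0.15, 0.2, 0.11, 0.9])
```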


-**PARAMS**
+PARAMS
++++++

The parameters dict returned by the above functions has the following
structure. Note: the client does not need to know the details of this.