
Bump version: 0.3.2 → 0.3.4 #46

Merged
merged 29 commits on Aug 8, 2016
Commits
f1f6373
support both python 2 and 3
Jul 21, 2016
e1d765a
- changed deps and envs in tox in preparation for python3+ support
mkolarek Aug 2, 2016
a056691
- changed deps
mkolarek Aug 2, 2016
45a3f81
support both python 2 and 3
Jul 21, 2016
e3247c2
- changed deps and envs in tox in preparation for python3+ support
mkolarek Aug 2, 2016
aba27bb
- changed deps
mkolarek Aug 2, 2016
460bed6
rebase on dev
mkolarek Aug 4, 2016
e1f49ce
Merge branch 'feature/2to3' of github.com:zalando/expan into feature/…
mkolarek Aug 4, 2016
0e827b8
removed Dockerfile, added pyenvs to travis.yml
mkolarek Aug 4, 2016
53a12f6
- added virtualenv system_site_packages flag and a before_install tha…
mkolarek Aug 4, 2016
5b57b14
- removed pip install requirements from install:
mkolarek Aug 4, 2016
9a53673
- moved additional pyenvs from tox.ini to .travis.yml
mkolarek Aug 5, 2016
d65686d
- toxenvs were still specified in .travis.yml, fixed
mkolarek Aug 5, 2016
c32c22f
- removed system packages support as it seems to have caused build fa…
mkolarek Aug 5, 2016
1801651
support both python 2 and 3
Jul 21, 2016
7f4058d
- merged incorrectly in previous commits which broke py34 testing, fixed
mkolarek Aug 5, 2016
5b7aff1
- added installation of requirements explicitly in travis
mkolarek Aug 5, 2016
c366329
- upgrade pip in travis
mkolarek Aug 5, 2016
6b59423
- upgrade setuptools in travis
mkolarek Aug 5, 2016
82b795d
- removed py26 and py33 from .travis.yml because of dependency issues
mkolarek Aug 8, 2016
eac0006
- changed tox.ini to use pytest instead of setup.py test
mkolarek Aug 8, 2016
8482317
- added instructions for running tests with tox
mkolarek Aug 8, 2016
a69115c
Merge pull request #43 from zalando/feature/2to3
mkolarek Aug 8, 2016
f2517b6
Bump version: 0.3.2 → 0.3.3
mkolarek Aug 8, 2016
8acd603
- added pip 8.1.0 to requirements.txt
mkolarek Aug 8, 2016
6d7ef38
- added requirements.txt in MANIFEST.ini so that setup.py could find it
mkolarek Aug 8, 2016
cf71379
[skip ci]
mkolarek Aug 8, 2016
cd0ff5d
Bump version: 0.3.3 → 0.3.4
mkolarek Aug 8, 2016
5b6e3b3
Merge pull request #45 from zalando/bug/fix_dependencies
mkolarek Aug 8, 2016
12 changes: 9 additions & 3 deletions .travis.yml
@@ -15,9 +15,15 @@ deploy:
distributions: sdist bdist_wheel
repo: zalando/expan
env:
- TOXENV=py27
- TOXENV=py
install:
- pip install -U tox
- pip install -U pip setuptools tox
- pip install -r requirements.txt
language: python
python: 2.7
python:
#- 2.6
- 2.7
#- 3.3
- 3.4
- 3.5
script: tox
1 change: 1 addition & 0 deletions MANIFEST.in
@@ -3,6 +3,7 @@ include CONTRIBUTING.rst
include HISTORY.rst
include LICENSE
include README.rst
include requirements.txt

recursive-include tests *
recursive-exclude * __pycache__
27 changes: 7 additions & 20 deletions README.rst
@@ -135,6 +135,11 @@ with the following exceptions:

- Use *tabs instead of spaces* - this allows all individuals to have visual depth of indentation they prefer, without changing the source code at all, and it is simply smaller

Testing
-------

The easiest way to run the tests is to run ``tox`` from the terminal. The default Python environments for testing are py27 and py34, but you can specify your own by running e.g. ``tox -e py35``.

Branching / Release
-------------------

@@ -152,7 +157,7 @@ Versioning
when doing the analysis!**

We use semantic versioning (http://semver.org), and the current version of
ExpAn is: v0.3.2.
ExpAn is: v0.3.4.

The version is maintained in ``setup.cfg``, and propagated from there to various files
by the ``bumpversion`` program. The most important propagation destination is
@@ -176,7 +181,7 @@ repository.

>>> import core.binning
>>> core.version()
'v0.3.2'
'v0.3.4'
>>> core.version('{major}.{minor}..{commits}')
'0.0..176'
>>> core.version('{commit}')
@@ -263,21 +268,3 @@ IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


History
=======

0.2.5
-----

* Inclusion of cli in install
* many other minor changes since open-sourcing...

0.2.0 (2016-05-03)
------------------

* First opensource release to GitHub

0.1.0 (2016-04-29)
------------------
9 changes: 5 additions & 4 deletions expan/__init__.py
@@ -1,6 +1,7 @@
from core import *
from core.version import __version__
from data import *
from cli import *
from __future__ import absolute_import
from expan.core import *
from expan.core.version import __version__
from expan.data import *
from expan.cli import *

__all__ = ["core", "data", "cli"]
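The package-qualified imports above are required once ``from __future__ import absolute_import`` is in effect (and always on Python 3): a bare name like ``core`` is no longer resolved relative to the enclosing package. A minimal sketch with a throwaway package (the ``pkg``/``answer`` names are illustrative, not part of ExpAn):

```python
import os
import sys
import tempfile

# Build a tiny throwaway package mirroring the pattern in the diff above.
tmp = tempfile.mkdtemp()
core_dir = os.path.join(tmp, "pkg", "core")
os.makedirs(core_dir)
with open(os.path.join(tmp, "pkg", "__init__.py"), "w") as f:
    # Absolute import, as in the diff: works on both Python 2 and 3.
    # The old spelling ``from core import answer`` would raise ImportError
    # on Python 3, where bare names are never package-relative.
    f.write("from pkg.core import answer\n")
with open(os.path.join(core_dir, "__init__.py"), "w") as f:
    f.write("answer = 42\n")

sys.path.insert(0, tmp)
import pkg
print(pkg.answer)  # -> 42
```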
9 changes: 5 additions & 4 deletions expan/cli/cli.py
@@ -45,7 +45,8 @@ def run_analysis(features_file, kpis_file, metadata_file):
return (exp.delta(), exp.sga())


def run_expan((features_file, kpis_file, metadata_file, output_file)):
def run_expan(xxx_todo_changeme):
(features_file, kpis_file, metadata_file, output_file) = xxx_todo_changeme
(delta_result, sga_result) = run_analysis(features_file, kpis_file, metadata_file)
print_results(delta_result, sga_result, output_file)
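The ``xxx_todo_changeme`` name above is 2to3's mechanical workaround for PEP 3113, which removed tuple parameters from function signatures in Python 3. A sketch of the idiomatic cleanup (``run_expan_sketch`` is a hypothetical name, not the project's function):

```python
def run_expan_sketch(files):
    # Unpack with meaningful names in the body instead of the signature,
    # which is valid on both Python 2 and 3.
    features_file, kpis_file, metadata_file, output_file = files
    return features_file, output_file

print(run_expan_sketch(("f.csv", "k.csv", "m.json", "out.txt")))
# -> ('f.csv', 'out.txt')
```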

@@ -58,12 +59,12 @@ def print_results(delta, sga, output_file):
output.write(delta_s)
output.write(sga_s)
else:
print delta_s
print sga_s
print(delta_s)
print(sga_s)


def check_input_data(args):
print args
print(args)
if not args.kpis:
raise UsageError('Kpis file shall be provided (-k cli parameter)')
if not args.metadata:
6 changes: 4 additions & 2 deletions expan/core/__init__.py
@@ -1,5 +1,7 @@
from __future__ import absolute_import

__all__ = ["binning", "debugging", "experiment", "experimentdata", "results", "statistics", "util", "version"]

from version import __version__, version
from expan.core.version import __version__, version

print 'ExpAn core init: {}'.format(version())
print(('ExpAn core init: {}'.format(version())))
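The doubled parentheses in the last line are a 2to3 artifact: it wraps the entire argument of a Python 2 ``print`` statement in one extra set of parentheses, which is redundant for a single argument. Both spellings behave identically; a quick check (the version string is illustrative):

```python
import contextlib
import io

msg = 'ExpAn core init: {}'.format('v0.3.4')  # illustrative value
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print((msg))  # 2to3 output: the extra parentheses are harmless but noisy
    print(msg)    # idiomatic cleanup, identical behavior
print(buf.getvalue() == msg + '\n' + msg + '\n')  # -> True
```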
16 changes: 8 additions & 8 deletions expan/core/binning.py
@@ -13,7 +13,7 @@

def dbg(lvl, msg):
if lvl <= dbg_lvl:
print
print()
'D{:d}|{}'.format(lvl, msg)


@@ -157,9 +157,9 @@ def _labels(self, format_str='{standard}'):

if '{iter.' in format_str:
if '{iter.uppercase}' in format_str:
it = iter(string.uppercase)
it = iter(string.ascii_uppercase)
elif '{iter.lowercase}' in format_str:
it = iter(string.lowercase)
it = iter(string.ascii_lowercase)
elif '{iter.integer}' in format_str:
import itertools
it = itertools.count()
@@ -184,10 +184,10 @@ def _labels(self, format_str='{standard}'):
format_args['set_notation'] = '{unseen}'

if it is not None:
print
print()
'ii: ' + str(ii)
if not is_catchall:
format_args['iterator'] = it.next()
format_args['iterator'] = next(it)
else:
format_args['iterator'] = '?'

@@ -445,9 +445,9 @@ def _labels(self, format_str='{standard}'):

if '{iter.' in format_str:
if '{iter.uppercase}' in format_str:
it = iter(string.uppercase)
it = iter(string.ascii_uppercase)
elif '{iter.lowercase}' in format_str:
it = iter(string.lowercase)
it = iter(string.ascii_lowercase)
elif '{iter.integer}' in format_str:
import itertools
it = itertools.count()
@@ -485,7 +485,7 @@ def _labels(self, format_str='{standard}'):
'up_bracket': ']' if uc else ')',
}
if (not is_catchall) and (it is not None):
format_args['iterator'] = it.next()
format_args['iterator'] = next(it)

lbl = (format_str if not is_catchall else catchall_format_str).format(**format_args)

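Two Python 3 renames recur throughout this file: the locale-dependent ``string.uppercase``/``string.lowercase`` constants became the locale-independent ``string.ascii_uppercase``/``string.ascii_lowercase``, and the ``.next()`` iterator method became the ``next()`` builtin. A minimal sketch of the label-iterator pattern used above:

```python
import itertools
import string

# Letter labels, as in the '{iter.uppercase}' branch:
it = iter(string.ascii_uppercase)
print([next(it) for _ in range(3)])  # -> ['A', 'B', 'C']

# The '{iter.integer}' branch uses itertools.count(), which also
# works with the next() builtin on both Python 2 and 3:
counter = itertools.count()
print(next(counter), next(counter))  # -> 0 1
```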
13 changes: 7 additions & 6 deletions expan/core/experiment.py
@@ -2,19 +2,20 @@
# the proper interface is through the Experiment instance functions, surely?

# import numpy as np
import statistics as statx

import expan.core.statistics as statx
import warnings

import binning as binmodule # name conflict with binning...
import expan.core.binning as binmodule # name conflict with binning...
import numpy as np
import pandas as pd
from experimentdata import ExperimentData
from results import Results, delta_to_dataframe_all_variants, feature_check_to_dataframe
from expan.core.experimentdata import ExperimentData
from expan.core.results import Results, delta_to_dataframe_all_variants, feature_check_to_dataframe

# raise the same warning multiple times
warnings.simplefilter('always', UserWarning)

from debugging import Dbg
from expan.core.debugging import Dbg

def _binned_deltas(df, variants, n_bins=4, binning=None, cumulative=False,
assume_normal=True, percentiles=[2.5, 97.5],
@@ -258,7 +259,7 @@ def time_dependent_deltas(df, variants, time_step=1, cumulative=False,

# create binning manually, ASSUMING uniform sampling
tpoints = np.unique(df.iloc[:,1])
binning = binmodule.NumericalBinning(uppers=tpoints, lowers=tpoints,
binning = binmodule.NumericalBinning(uppers=tpoints, lowers=tpoints,
up_closed=[True]*len(tpoints), lo_closed=[True]*len(tpoints))

# Push computation to _binned_deltas() function
25 changes: 13 additions & 12 deletions expan/core/results.py
@@ -1,15 +1,16 @@
from __future__ import absolute_import
import datetime
from copy import deepcopy

import numpy as np
import pandas as pd
import version
import statistics as statx
from expan.core.version import __version__
import expan.core.statistics as statx
from scipy.stats import norm

# from tests.tests_core.test_data import generate_random_data

from debugging import Dbg
from expan.core.debugging import Dbg
from pdb import set_trace

class Results(object):
@@ -42,7 +43,7 @@ def __init__(self, df, metadata={}, dbg=None):
"""
self.df = df
self.metadata = metadata
self.metadata['version'] = version.__version__
self.metadata['version'] = __version__
self.metadata['errors'] = {}
self.metadata['warnings'] = {}

@@ -275,7 +276,7 @@ def to_hdf(self, fpath):
hfile = h5py.File(fpath)
md = hfile.require_group('metadata')
datetime_conversions = set(md.attrs.get('_datetime_attributes', set()))
for k, v in self.metadata.iteritems():
for k, v in list(self.metadata.items()):
if k == '_datetime_attributes':
continue
if v is None:
@@ -350,7 +351,7 @@ def from_hdf(fpath, dbg=None):
md = hfile['metadata']
datetime_conversions = set(md.attrs.get('_datetime_attributes', set()))
metadata = {}
for k, v in md.attrs.iteritems():
for k, v in list(md.attrs.items()):
if k == '_datetime_attributes':
continue
dbg(3, 'from_hdf: retrieving metadata {}'.format(k))
@@ -371,8 +372,8 @@ def delta_to_dataframe(metric, variant, mu, pctiles, samplesize_variant, samples
'metric': metric,
'variant': variant,
'statistic': 'pctile',
'pctile': pctiles.keys(),
'value': pctiles.values(),
'pctile': list(pctiles.keys()),
'value': list(pctiles.values()),
'subgroup_metric': subgroup_metric,
'subgroup': subgroup
})
@@ -403,8 +404,8 @@ def delta_to_dataframe_all_variants(metric, mu, pctiles, samplesize_variant,
df = pd.DataFrame({
'metric': metric,
'statistic': 'uplift_pctile',
'pctile': pctiles.keys(),
'value': pctiles.values(),
'pctile': list(pctiles.keys()),
'value': list(pctiles.values()),
'subgroup_metric': subgroup_metric,
'subgroup': subgroup
})
@@ -435,8 +436,8 @@ def feature_check_to_dataframe(metric,
if pval is None:
df = pd.DataFrame({'metric': metric,
'statistic': 'pre_treatment_diff_pctile',
'pctile': pctiles.keys(),
'value': pctiles.values(),
'pctile': list(pctiles.keys()),
'value': list(pctiles.values()),
'subgroup_metric': '-',
'subgroup': None})
df = df.append(pd.DataFrame({
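The ``list(...)`` wrappers in this file are 2to3 being conservative: Python 3's ``dict.items()``, ``.keys()``, and ``.values()`` return lazy views rather than lists. Plain iteration works directly on the view; materializing is only needed when list semantics are genuinely required. A small sketch (the metadata values are illustrative):

```python
metadata = {'version': 'v0.3.4', 'errors': {}}  # illustrative values

# Plain iteration needs no list(): the view is directly iterable.
for k, v in metadata.items():
    pass

# Materialize only when a real list is needed (indexing, a stable
# snapshot while mutating, or an API that demands a sequence):
keys = list(metadata.keys())
print(sorted(keys))  # -> ['errors', 'version']
```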
6 changes: 3 additions & 3 deletions expan/core/statistics.py
@@ -70,7 +70,7 @@ def delta(x, y, assume_normal=True, percentiles=[2.5, 97.5],
# Set mean to nan
mu = np.nan
# Create nan dictionary
c_i = dict(zip(percentiles, np.empty(len(percentiles)) * np.nan))
c_i = dict(list(zip(percentiles, np.empty(len(percentiles)) * np.nan)))
else:
# Computing the mean
mu = _delta_mean(_x, _y)
@@ -239,7 +239,7 @@ def bootstrap(x, y, func=_delta_mean, nruns=10000, percentiles=[2.5, 97.5],
# Checking if enough observations are left after dropping NaNs
if min(ss_x, ss_y) < min_observations:
# Create nan percentile dictionary
c_val = dict(zip(percentiles, np.empty(len(percentiles)) * np.nan))
c_val = dict(list(zip(percentiles, np.empty(len(percentiles)) * np.nan)))
return (c_val, None)
else:
# Initializing bootstraps array and random sampling for each run
@@ -254,7 +254,7 @@
if relative:
bootstraps -= np.nanmean(bootstraps)
# Confidence values per given percentile as dictionary
c_val = dict(zip(percentiles, np.percentile(bootstraps, q=percentiles)))
c_val = dict(list(zip(percentiles, np.percentile(bootstraps, q=percentiles))))
return (c_val, None) if not return_bootstraps else (c_val, bootstraps)
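Likewise, ``dict(list(zip(...)))`` is a conservative 2to3 rewrite: ``zip()`` returns a lazy iterator on Python 3, but ``dict()`` consumes any iterable of pairs, so the inner ``list()`` is redundant. A sketch using ``math.nan`` in place of the ``np.empty(...) * np.nan`` idiom:

```python
import math

percentiles = [2.5, 97.5]
# Equivalent to the diff's dict(list(zip(...))) without the extra list():
c_val = dict(zip(percentiles, [math.nan] * len(percentiles)))
print(sorted(c_val))  # -> [2.5, 97.5]
```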


10 changes: 5 additions & 5 deletions expan/core/version.py
@@ -1,5 +1,5 @@
#
__version__ = "0.3.2"
__version__ = "0.3.4"


def version_numbers():
@@ -12,7 +12,7 @@ def git_commit_count():

http://programmers.stackexchange.com/a/151558
"""
print
print()
'dd'
commit_count = None

@@ -30,7 +30,7 @@ def git_latest_commit():

http://programmers.stackexchange.com/a/151558
"""
print
print()
'ee'
latest_commit = None
import subprocess
@@ -67,7 +67,7 @@ def version(format_str='{short}'):


if __name__ == '__main__':
print
print()
version('Short Version String: "{short}"')
print
print()
version('Full Version String: "{long}"')
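Worth noting: in the Python 2 original, ``print`` sat alone on its line with the message as a bare expression on the next line, so the message was never printed, only a blank line. 2to3 faithfully preserves that bug as ``print()`` followed by an unused string, as seen here and in ``binning.py`` and ``csv_fetcher.py``. A sketch contrasting the converted code with the presumed intent:

```python
import contextlib
import io

def as_converted():
    # What 2to3 produced: prints an empty line, then discards the string.
    print()
    'dd'

def presumed_intent(msg):
    # The likely intended behavior: actually print the message.
    print(msg)

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    as_converted()
print(repr(buf.getvalue()))  # -> '\n'  (the message never appears)

buf2 = io.StringIO()
with contextlib.redirect_stdout(buf2):
    presumed_intent('dd')
print(repr(buf2.getvalue()))  # -> 'dd\n'
```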
6 changes: 3 additions & 3 deletions expan/data/csv_fetcher.py
@@ -30,21 +30,21 @@ def get_data(folder_path):
try:
metrics = pd.read_csv(folder_path + '/' + f)
except Exception as e:
print
print()
e

elif 'metadata' in f:
try:
with open(folder_path + '/' + f, 'r') as input_json:
metadata = json.load(input_json)
except ValueError as e:
print
print()
e
raise

return ExperimentData(metrics=metrics, metadata=metadata)

except AssertionError as e:
print
print()
e
raise
1 change: 1 addition & 0 deletions requirements.txt
@@ -1,3 +1,4 @@
pip >= 8.1.0
pandas >= 0.17.1
scipy >= 0.17.0
numpy >= 1.10.4
2 changes: 1 addition & 1 deletion setup.cfg
@@ -1,5 +1,5 @@
[bumpversion]
current_version = 0.3.2
current_version = 0.3.4
commit = True
tag = True
