From 3abb5a402fbdf04db68cc760359123f81ba71540 Mon Sep 17 00:00:00 2001
From: pyalex
Date: Thu, 3 Feb 2022 16:57:30 +0800
Subject: [PATCH 1/4] dqm tutorial in docs

Signed-off-by: pyalex
---
 docs/getting-started/concepts/dataset.md      |   6 +-
 docs/tutorials/tutorials-overview.md          |   2 +
 .../validating-historical-features.md         | 989 ++++++++++++++++++
 3 files changed, 996 insertions(+), 1 deletion(-)
 create mode 100644 docs/tutorials/validating-historical-features.md

diff --git a/docs/getting-started/concepts/dataset.md b/docs/getting-started/concepts/dataset.md
index 9bdbbfffdf..59f7168905 100644
--- a/docs/getting-started/concepts/dataset.md
+++ b/docs/getting-started/concepts/dataset.md
@@ -43,4 +43,8 @@ Saved dataset can be later retrieved using `get_saved_dataset` method:
 ```python
 dataset = store.get_saved_dataset('my_training_dataset')
 dataset.to_df()
-```
\ No newline at end of file
+```
+
+---
+
+Check out our [tutorial on validating historical features](../../tutorials/validating-historical-features.md) to see how this concept can be applied in a real-world use case.
\ No newline at end of file

diff --git a/docs/tutorials/tutorials-overview.md b/docs/tutorials/tutorials-overview.md
index 86a8c25371..036f78af05 100644
--- a/docs/tutorials/tutorials-overview.md
+++ b/docs/tutorials/tutorials-overview.md
@@ -9,3 +9,5 @@ These Feast tutorials showcase how to use Feast to simplify end to end model tra
 {% page-ref page="real-time-credit-scoring-on-aws.md" %}
 
 {% page-ref page="driver-stats-using-snowflake.md" %}
+
+{% page-ref page="validation-historical-features.md" %}

diff --git a/docs/tutorials/validating-historical-features.md b/docs/tutorials/validating-historical-features.md
new file mode 100644
index 0000000000..039e8f4289
--- /dev/null
+++ b/docs/tutorials/validating-historical-features.md
@@ -0,0 +1,989 @@
# Data Quality Monitoring

## Validating Historical Features with Great Expectations

In this tutorial, we will use the public dataset of Chicago taxi trips to present the data validation capabilities of Feast. The original dataset is stored in BigQuery and consists of raw data for each taxi trip (one row per trip) since 2013. We will generate several training datasets (aka historical features in Feast) for different periods and evaluate expectations made on one dataset against another. Our features will represent aggregations of the raw data with daily intervals (e.g., trips per day, average fare or speed for a specific day, etc.). We will craft some features using SQL while pulling data from BigQuery (like total trip time or total miles travelled). Another chunk of features will be implemented using Feast's on-demand transformations - features calculated on the fly when requested.

Our plan:

0. Prepare environment
1. Pull data from BigQuery (optional)
2. Declare & apply features and feature views in Feast
3. Generate reference dataset
4. Develop & test profiler function
5. Run validation on different dataset using reference dataset & profiler

### 0. Setup

Install the Feast Python SDK and Great Expectations:


```python
!pip install 'feast[ge]'
```


### 1. Dataset preparation (Optional)

**You can skip this step if you don't have a GCP account.
Please use the parquet files that come with this tutorial instead.**
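
If you skip the BigQuery steps, you can still take a quick look at the bundled files before moving on. A minimal sketch — it assumes `trips_stats.parquet` and `entities.parquet` (the files produced by the steps below) sit next to the notebook:

```python
import pyarrow.parquet

# Peek at the pre-built files shipped with the tutorial
trips_stats = pyarrow.parquet.read_table("trips_stats.parquet")
entities = pyarrow.parquet.read_table("entities.parquet")

# Expected columns: taxi_id, day, total_miles_travelled,
# total_trip_seconds, total_earned, trip_count
print(trips_stats.schema)
print(trips_stats.num_rows, entities.num_rows)
```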

```python
!pip install google-cloud-bigquery
```


```python
import pyarrow.parquet

from google.cloud.bigquery import Client
```


```python
bq_client = Client(project='kf-feast')
```

    /Users/pyalex/projects/feast/venv/lib/python3.7/site-packages/google/auth/_default.py:70: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
      warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)


Running some basic aggregations while pulling data from BigQuery. Grouping by taxi_id and day:


```python
data_query = """SELECT
    taxi_id,
    TIMESTAMP_TRUNC(trip_start_timestamp, DAY) as day,
    SUM(trip_miles) as total_miles_travelled,
    SUM(trip_seconds) as total_trip_seconds,
    SUM(fare) as total_earned,
    COUNT(*) as trip_count
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE
    trip_miles > 0 AND trip_seconds > 60 AND
    trip_start_timestamp BETWEEN '2019-01-01' and '2020-12-31' AND
    trip_total < 1000
GROUP BY taxi_id, TIMESTAMP_TRUNC(trip_start_timestamp, DAY)"""
```


```python
driver_stats_table = bq_client.query(data_query).to_arrow()

# Storing resulting dataset into parquet file
pyarrow.parquet.write_table(driver_stats_table, "trips_stats.parquet")
```


```python
def entities_query(year):
    return f"""SELECT
    distinct taxi_id
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE
    trip_miles > 0 AND trip_seconds > 0 AND
    trip_start_timestamp BETWEEN '{year}-01-01' and '{year}-12-31'
"""
```


```python
entities_2019_table = bq_client.query(entities_query(2019)).to_arrow()

# Storing entities (taxi ids) into parquet file
pyarrow.parquet.write_table(entities_2019_table, "entities.parquet")
```


```python
#entities_2020_table = bq_client.query(entities_query(2020)).to_arrow()
#pyarrow.parquet.write_table(entities_2019_table, "entities_2020.parquet")
```


## 2. Declaring features
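
Feast reads its configuration from a `feature_store.yaml` file in the repository directory — this is the file picked up by `FeatureStore(".")` below. The exact contents live in the tutorial repository; a minimal local configuration looks roughly like this (the project name and paths here are illustrative, not taken from the repo):

```yaml
project: validating_historical_features  # illustrative; check the tutorial repo for the real name
registry: registry.db                    # local file-based registry
provider: local
online_store:
    path: online_store.db                # SQLite online store; not used in this tutorial
```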


```python
import pyarrow.parquet
import pandas as pd

from feast import Feature, FeatureView, Entity, FeatureStore
from feast.value_type import ValueType
from feast.data_format import ParquetFormat
from feast.on_demand_feature_view import on_demand_feature_view
from feast.infra.offline_stores.file_source import FileSource
from feast.infra.offline_stores.file import SavedDatasetFileStorage

from google.protobuf.duration_pb2 import Duration
```


```python
batch_source = FileSource(
    event_timestamp_column="day",
    path="trips_stats.parquet",  # using the parquet file that we created in the previous step
    file_format=ParquetFormat()
)
```


```python
taxi_entity = Entity(name='taxi', join_key='taxi_id')
```


```python
trips_stats_fv = FeatureView(
    name='trip_stats',
    entities=['taxi'],
    features=[
        Feature("total_miles_travelled", ValueType.DOUBLE),
        Feature("total_trip_seconds", ValueType.DOUBLE),
        Feature("total_earned", ValueType.DOUBLE),
        Feature("trip_count", ValueType.INT64),
    ],
    ttl=Duration(seconds=86400),
    batch_source=batch_source,
)
```

*Read more about feature views in [Feast docs](https://docs.feast.dev/getting-started/concepts/feature-view)*


```python
@on_demand_feature_view(
    features=[
        Feature("avg_fare", ValueType.DOUBLE),
        Feature("avg_speed", ValueType.DOUBLE),
        Feature("avg_trip_seconds", ValueType.DOUBLE),
        Feature("earned_per_hour", ValueType.DOUBLE),
    ],
    inputs={
        "stats": trips_stats_fv
    }
)
def on_demand_stats(inp):
    out = pd.DataFrame()
    out["avg_fare"] = inp["total_earned"] / inp["trip_count"]
    out["avg_speed"] = 3600 * inp["total_miles_travelled"] / inp["total_trip_seconds"]
    out["avg_trip_seconds"] = inp["total_trip_seconds"] / inp["trip_count"]
    out["earned_per_hour"] = 3600 * inp["total_earned"] / inp["total_trip_seconds"]
    return out
```

*Read more about on demand feature views [here](https://docs.feast.dev/reference/alpha-on-demand-feature-view)*


```python
store = FeatureStore(".")  # using feature_store.yaml that is stored in the same directory
```


```python
store.apply([taxi_entity, trips_stats_fv, on_demand_stats])  # writing to the registry
```


## 3. Generating training (reference) dataset


```python
taxi_ids = pyarrow.parquet.read_table("entities.parquet").to_pandas()
```

Generating a range of timestamps with daily frequency:


```python
timestamps = pd.DataFrame()
timestamps["event_timestamp"] = pd.date_range("2019-06-01", "2019-07-01", freq='D')
```

A cross merge (i.e., a Cartesian product) produces an entity dataframe with each taxi_id repeated for each timestamp:


```python
entity_df = pd.merge(taxi_ids, timestamps, how='cross')
entity_df
```

|        | taxi_id                                            | event_timestamp |
|--------|----------------------------------------------------|-----------------|
| 0      | 91d5288487e87c5917b813ba6f75ab1c3a9749af906a2d...  | 2019-06-01      |
| 1      | 91d5288487e87c5917b813ba6f75ab1c3a9749af906a2d...  | 2019-06-02      |
| 2      | 91d5288487e87c5917b813ba6f75ab1c3a9749af906a2d...  | 2019-06-03      |
| 3      | 91d5288487e87c5917b813ba6f75ab1c3a9749af906a2d...  | 2019-06-04      |
| 4      | 91d5288487e87c5917b813ba6f75ab1c3a9749af906a2d...  | 2019-06-05      |
| ...    | ...                                                | ...             |
| 156979 | 7ebf27414a0c7b128e7925e1da56d51a8b81484f7630cf...  | 2019-06-27      |
| 156980 | 7ebf27414a0c7b128e7925e1da56d51a8b81484f7630cf...  | 2019-06-28      |
| 156981 | 7ebf27414a0c7b128e7925e1da56d51a8b81484f7630cf...  | 2019-06-29      |
| 156982 | 7ebf27414a0c7b128e7925e1da56d51a8b81484f7630cf...  | 2019-06-30      |
| 156983 | 7ebf27414a0c7b128e7925e1da56d51a8b81484f7630cf...  | 2019-07-01      |

156984 rows × 2 columns
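
Note that `how='cross'` was added in pandas 1.2. On older pandas versions the same Cartesian product can be built with a dummy join key — a small sketch:

```python
# Fallback for pandas < 1.2, where merge(how='cross') is unavailable:
# join both frames on a constant key, then drop it.
entity_df = (
    taxi_ids.assign(_key=1)
    .merge(timestamps.assign(_key=1), on="_key")
    .drop(columns="_key")
)
```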

Retrieving historical features for the resulting entity dataframe and persisting the output as a saved dataset:


```python
job = store.get_historical_features(
    entity_df=entity_df,
    features=[
        "trip_stats:total_miles_travelled",
        "trip_stats:total_trip_seconds",
        "trip_stats:total_earned",
        "trip_stats:trip_count",
        "on_demand_stats:avg_fare",
        "on_demand_stats:avg_trip_seconds",
        "on_demand_stats:avg_speed",
        "on_demand_stats:earned_per_hour",
    ]
)

store.create_saved_dataset(
    from_=job,
    name='my_training_ds',
    storage=SavedDatasetFileStorage(path='my_training_ds.parquet')
)
```

    /Users/pyalex/projects/feast/sdk/python/feast/feature_store.py:853: RuntimeWarning: Saving dataset is an experimental feature. This API is unstable and it could and most probably will be changed in the future. We do not guarantee that future changes will maintain backward compatibility.
      RuntimeWarning,

    , full_feature_names = False, tags = {}, _retrieval_job = , min_event_timestamp = 2019-06-01 00:00:00, max_event_timestamp = 2019-07-01 00:00:00)>


## 4. Developing dataset profiler

A dataset profiler is a function that accepts a dataset and generates a set of its characteristics. These characteristics will then be used to evaluate (validate) the next datasets.

**Important: datasets are not compared to each other!
Feast uses a reference dataset and a profiler function to generate a reference profile.
This profile will then be used during validation of the tested dataset.**


```python
import numpy as np

from feast.dqm.profilers.ge_profiler import ge_profiler

from great_expectations.core.expectation_suite import ExpectationSuite
from great_expectations.dataset import PandasDataset
```

    02/02/2022 02:43:45 PM WARNING:/Users/pyalex/projects/feast/venv/lib/python3.7/site-packages/great_expectations/render/view/view.py:116: DeprecationWarning: 'contextfilter' is renamed to 'pass_context', the old name will be removed in Jinja 3.1.
      def add_data_context_id_to_url(self, jinja_context, url, add_datetime=True):


Loading saved dataset first and exploring the data:


```python
ds = store.get_saved_dataset('my_training_ds')
ds.to_df()
```

    /Users/pyalex/projects/feast/sdk/python/feast/feature_store.py:904: RuntimeWarning: Retrieving datasets is an experimental feature. This API is unstable and it could and most probably will be changed in the future. We do not guarantee that future changes will maintain backward compatibility.
      RuntimeWarning,

|        | total_earned | avg_trip_seconds | taxi_id                                           | total_miles_travelled | trip_count | earned_per_hour | event_timestamp           | total_trip_seconds | avg_fare  | avg_speed |
|--------|--------------|------------------|---------------------------------------------------|-----------------------|------------|-----------------|---------------------------|--------------------|-----------|-----------|
| 0      | 68.25        | 2270.000000      | 91d5288487e87c5917b813ba6f75ab1c3a9749af906a2d... | 24.70                 | 2.0        | 54.118943       | 2019-06-01 00:00:00+00:00 | 4540.0             | 34.125000 | 19.585903 |
| 1      | 221.00       | 560.500000       | 7a4a6162eaf27805aef407d25d5cb21fe779cd962922cb... | 54.18                 | 24.0       | 59.143622       | 2019-06-01 00:00:00+00:00 | 13452.0            | 9.208333  | 14.499554 |
| 2      | 160.50       | 1010.769231      | f4c9d05b215d7cbd08eca76252dae51cdb7aca9651d4ef... | 41.30                 | 13.0       | 43.972603       | 2019-06-01 00:00:00+00:00 | 13140.0            | 12.346154 | 11.315068 |
| 3      | 183.75       | 697.550000       | c1f533318f8480a59173a9728ea0248c0d3eb187f4b897... | 37.30                 | 20.0       | 47.415956       | 2019-06-01 00:00:00+00:00 | 13951.0            | 9.187500  | 9.625116  |
| 4      | 217.75       | 1054.076923      | 455b6b5cae6ca5a17cddd251485f2266d13d6a2c92f07c... | 69.69                 | 13.0       | 57.206451       | 2019-06-01 00:00:00+00:00 | 13703.0            | 16.750000 | 18.308692 |
| ...    | ...          | ...              | ...                                               | ...                   | ...        | ...             | ...                       | ...                | ...       | ...       |
| 156979 | 38.00        | 1980.000000      | 0cccf0ec1f46d1e0beefcfdeaf5188d67e170cdff92618... | 14.90                 | 1.0        | 69.090909       | 2019-07-01 00:00:00+00:00 | 1980.0             | 38.000000 | 27.090909 |
| 156980 | 135.00       | 551.250000       | beefd3462e3f5a8e854942a2796876f6db73ebbd25b435... | 28.40                 | 16.0       | 55.102041       | 2019-07-01 00:00:00+00:00 | 8820.0             | 8.437500  | 11.591837 |
| 156981 | NaN          | NaN              | 9a3c52aa112f46cf0d129fafbd42051b0fb9b0ff8dcb0e... | NaN                   | NaN        | NaN             | 2019-07-01 00:00:00+00:00 | NaN                | NaN       | NaN       |
| 156982 | 63.00        | 815.000000       | 08308c31cd99f495dea73ca276d19a6258d7b4c9c88e43... | 19.96                 | 4.0        | 69.570552       | 2019-07-01 00:00:00+00:00 | 3260.0             | 15.750000 | 22.041718 |
| 156983 | NaN          | NaN              | 7ebf27414a0c7b128e7925e1da56d51a8b81484f7630cf... | NaN                   | NaN        | NaN             | 2019-07-01 00:00:00+00:00 | NaN                | NaN       | NaN       |

156984 rows × 10 columns
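
As a quick sanity check before profiling, you can verify that the on-demand features are consistent with the raw aggregates in the frame we just retrieved (plain pandas/numpy, using the columns shown above):

```python
df = ds.to_df()

# avg_fare must equal total_earned / trip_count wherever the join produced values
mask = df["trip_count"].notna()
assert np.allclose(
    df.loc[mask, "avg_fare"],
    df.loc[mask, "total_earned"] / df.loc[mask, "trip_count"],
)
```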

Feast uses [Great Expectations](https://docs.greatexpectations.io/docs/) as a validation engine and an [ExpectationSuite](https://legacy.docs.greatexpectations.io/en/latest/autoapi/great_expectations/core/expectation_suite/index.html#great_expectations.core.expectation_suite.ExpectationSuite) as a dataset's profile. Hence, we need to develop a function that will generate an ExpectationSuite. This function will receive an instance of [PandasDataset](https://legacy.docs.greatexpectations.io/en/latest/autoapi/great_expectations/dataset/index.html?highlight=pandasdataset#great_expectations.dataset.PandasDataset) (a wrapper around pandas.DataFrame), so we can utilize both the Pandas DataFrame API and some helper functions from PandasDataset during profiling.


```python
DELTA = 0.1  # controls the allowed window, as a fraction of the observed value, on the scale [0, 1]

@ge_profiler
def stats_profiler(ds: PandasDataset) -> ExpectationSuite:
    # simple checks on data consistency
    ds.expect_column_values_to_be_between(
        "avg_speed",
        min_value=0,
        max_value=60,
        mostly=0.99  # allow some outliers
    )

    ds.expect_column_values_to_be_between(
        "total_miles_travelled",
        min_value=0,
        max_value=500,
        mostly=0.99  # allow some outliers
    )

    # expectation of means based on observed values
    observed_mean = ds.trip_count.mean()
    ds.expect_column_mean_to_be_between("trip_count",
                                        min_value=observed_mean * (1 - DELTA),
                                        max_value=observed_mean * (1 + DELTA))

    observed_mean = ds.earned_per_hour.mean()
    ds.expect_column_mean_to_be_between("earned_per_hour",
                                        min_value=observed_mean * (1 - DELTA),
                                        max_value=observed_mean * (1 + DELTA))

    # expectation of quantiles
    qs = [0.5, 0.75, 0.9, 0.95]
    observed_quantiles = ds.avg_fare.quantile(qs)

    ds.expect_column_quantile_values_to_be_between(
        "avg_fare",
        quantile_ranges={
            "quantiles": qs,
            "value_ranges": [[None, max_value] for max_value in observed_quantiles]
        })

    return ds.get_expectation_suite()
```

Testing our profiler function:


```python
ds.get_profile(profiler=stats_profiler)
```

    02/02/2022 02:43:47 PM INFO: 5 expectation(s) included in expectation_suite. result_format settings filtered.

**Verify that all expectations that we coded in our profiler are present here. If some expectations are missing, it means that they failed to pass on the reference dataset (failing silently is the default behavior of Great Expectations).**

Now we can create a validation reference from the dataset and the profiler function:


```python
validation_reference = ds.as_reference(profiler=stats_profiler)
```

and test it against our existing retrieval job:


```python
_ = job.to_df(validation_reference=validation_reference)
```

    /Users/pyalex/projects/feast/sdk/python/feast/infra/offline_stores/offline_store.py:93: RuntimeWarning: Dataset validation is an experimental feature. This API is unstable and it could and most probably will be changed in the future. We do not guarantee that future changes will maintain backward compatibility.
      RuntimeWarning,
    02/02/2022 02:43:52 PM INFO: 5 expectation(s) included in expectation_suite. result_format settings filtered.
    02/02/2022 02:43:53 PM INFO:Validating data_asset_name None with expectation_suite_name default


Validation passed successfully, as no exceptions were raised.
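
The reference profile itself can also be inspected or stored outside Feast, e.g., versioned next to the feature repository. A sketch — treating `expectation_suite` as the attribute that holds the generated suite is an assumption based on the Feast 0.18-era `GEProfile`, so double-check it against your installed version:

```python
import json

profile = ds.get_profile(profiler=stats_profiler)

# Assumption: GEProfile exposes the generated suite as `expectation_suite`,
# and Great Expectations suites serialize to plain JSON dicts.
suite_dict = profile.expectation_suite.to_json_dict()

with open("reference_profile.json", "w") as f:
    json.dump(suite_dict, f, indent=2)
```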
### 5. Validating a new historical retrieval

Creating new timestamps for Dec 2020:


```python
from feast.dqm.errors import ValidationFailed
```


```python
timestamps = pd.DataFrame()
timestamps["event_timestamp"] = pd.date_range("2020-12-01", "2020-12-07", freq='D')
```


```python
entity_df = pd.merge(taxi_ids, timestamps, how='cross')
entity_df
```

|       | taxi_id                                            | event_timestamp |
|-------|----------------------------------------------------|-----------------|
| 0     | 91d5288487e87c5917b813ba6f75ab1c3a9749af906a2d...  | 2020-12-01      |
| 1     | 91d5288487e87c5917b813ba6f75ab1c3a9749af906a2d...  | 2020-12-02      |
| 2     | 91d5288487e87c5917b813ba6f75ab1c3a9749af906a2d...  | 2020-12-03      |
| 3     | 91d5288487e87c5917b813ba6f75ab1c3a9749af906a2d...  | 2020-12-04      |
| 4     | 91d5288487e87c5917b813ba6f75ab1c3a9749af906a2d...  | 2020-12-05      |
| ...   | ...                                                | ...             |
| 35443 | 7ebf27414a0c7b128e7925e1da56d51a8b81484f7630cf...  | 2020-12-03      |
| 35444 | 7ebf27414a0c7b128e7925e1da56d51a8b81484f7630cf...  | 2020-12-04      |
| 35445 | 7ebf27414a0c7b128e7925e1da56d51a8b81484f7630cf...  | 2020-12-05      |
| 35446 | 7ebf27414a0c7b128e7925e1da56d51a8b81484f7630cf...  | 2020-12-06      |
| 35447 | 7ebf27414a0c7b128e7925e1da56d51a8b81484f7630cf...  | 2020-12-07      |

35448 rows × 2 columns
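
Before running the formal validation, it can be instructive to eyeball how the new period differs from the reference one. A quick sketch using a plain (non-validated) retrieval; the reference ranges in the comments come from the failing report below:

```python
# Plain retrieval, no validation reference - just to compare distributions by hand
new_df = store.get_historical_features(
    entity_df=entity_df,
    features=["trip_stats:trip_count", "on_demand_stats:earned_per_hour"],
).to_df()

print(new_df["trip_count"].mean())       # ~6.69 vs the expected range 10.39..12.70
print(new_df["earned_per_hour"].mean())  # ~68.99 vs the expected range 52.32..63.95
```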


```python
job = store.get_historical_features(
    entity_df=entity_df,
    features=[
        "trip_stats:total_miles_travelled",
        "trip_stats:total_trip_seconds",
        "trip_stats:total_earned",
        "trip_stats:trip_count",
        "on_demand_stats:avg_fare",
        "on_demand_stats:avg_trip_seconds",
        "on_demand_stats:avg_speed",
        "on_demand_stats:earned_per_hour",
    ]
)
```

Execute the retrieval job with the validation reference:


```python
try:
    df = job.to_df(validation_reference=validation_reference)
except ValidationFailed as exc:
    print(exc.validation_report)
```

    /Users/pyalex/projects/feast/sdk/python/feast/infra/offline_stores/offline_store.py:93: RuntimeWarning: Dataset validation is an experimental feature. This API is unstable and it could and most probably will be changed in the future. We do not guarantee that future changes will maintain backward compatibility.
      RuntimeWarning,
    02/02/2022 02:43:58 PM INFO: 5 expectation(s) included in expectation_suite. result_format settings filtered.
    02/02/2022 02:43:59 PM INFO:Validating data_asset_name None with expectation_suite_name default


    [
        {
            "expectation_config": {
                "expectation_type": "expect_column_mean_to_be_between",
                "kwargs": {
                    "column": "trip_count",
                    "min_value": 10.387244591346153,
                    "max_value": 12.695521167200855,
                    "result_format": "COMPLETE"
                },
                "meta": {}
            },
            "meta": {},
            "result": {
                "observed_value": 6.692920555429092,
                "element_count": 35448,
                "missing_count": 31055,
                "missing_percent": 87.6071992778154
            },
            "exception_info": {
                "raised_exception": false,
                "exception_message": null,
                "exception_traceback": null
            },
            "success": false
        },
        {
            "expectation_config": {
                "expectation_type": "expect_column_mean_to_be_between",
                "kwargs": {
                    "column": "earned_per_hour",
                    "min_value": 52.320624975640214,
                    "max_value": 63.94743052578249,
                    "result_format": "COMPLETE"
                },
                "meta": {}
            },
            "meta": {},
            "result": {
                "observed_value": 68.99268345164135,
                "element_count": 35448,
                "missing_count": 31055,
                "missing_percent": 87.6071992778154
            },
            "exception_info": {
                "raised_exception": false,
                "exception_message": null,
                "exception_traceback": null
            },
            "success": false
        },
        {
            "expectation_config": {
                "expectation_type": "expect_column_quantile_values_to_be_between",
                "kwargs": {
                    "column": "avg_fare",
                    "quantile_ranges": {
                        "quantiles": [0.5, 0.75, 0.9, 0.95],
                        "value_ranges": [
                            [null, 16.4],
                            [null, 26.229166666666668],
                            [null, 36.4375],
                            [null, 42.0]
                        ]
                    },
                    "result_format": "COMPLETE"
                },
                "meta": {}
            },
            "meta": {},
            "result": {
                "observed_value": {
                    "quantiles": [0.5, 0.75, 0.9, 0.95],
                    "values": [19.5, 28.1, 38.0, 44.125]
                },
                "element_count": 35448,
                "missing_count": 31055,
                "missing_percent": 87.6071992778154,
                "details": {
                    "success_details": [false, false, false, false]
                }
            },
            "exception_info": {
                "raised_exception": false,
                "exception_message": null,
                "exception_traceback": null
            },
            "success": false
        }
    ]


Validation failed since several expectations didn't pass:
* Trip count (mean) decreased by more than 10% (which is expected when comparing Dec 2020 vs June 2019)
* Average fare increased: all quantiles are higher than expected
* Earned per hour (mean) increased by more than 10% (most probably due to the increased fare)

From 98f0cbfbb8c817e9d91a91ef863e9cb3351af6f5 Mon Sep 17 00:00:00 2001
From: pyalex
Date: Thu, 3 Feb 2022 16:59:13 +0800
Subject: [PATCH 2/4] gh link Signed-off-by: pyalex --- docs/tutorials/validating-historical-features.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/docs/tutorials/validating-historical-features.md b/docs/tutorials/validating-historical-features.md index 039e8f4289..f3be20711b 100644 --- a/docs/tutorials/validating-historical-features.md +++ b/docs/tutorials/validating-historical-features.md @@ -13,6 +13,9 @@ Our plan: 4. Develop & test profiler function 5. Run validation on different dataset using reference dataset & profiler + +> The original notebook and datasets for this tutorial can be found on [GitHub](https://github.com/feast-dev/dqm-tutorial). + ### 0. Setup Install Feast Python SDK and great expectations: From 122302a85f71e56fd7bf6a0d4312da7c1b02408e Mon Sep 17 00:00:00 2001 From: pyalex Date: Thu, 3 Feb 2022 17:04:45 +0800 Subject: [PATCH 3/4] cleanup Signed-off-by: pyalex --- .../validating-historical-features.md | 96 ++----------------- 1 file changed, 7 insertions(+), 89 deletions(-) diff --git a/docs/tutorials/validating-historical-features.md b/docs/tutorials/validating-historical-features.md index f3be20711b..8dcf82c011 100644 --- a/docs/tutorials/validating-historical-features.md +++ b/docs/tutorials/validating-historical-features.md @@ -47,10 +47,6 @@ from google.cloud.bigquery import Client bq_client = Client(project='kf-feast') ``` - /Users/pyalex/projects/feast/venv/lib/python3.7/site-packages/google/auth/_default.py:70: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ - warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) - - Running some basic aggregations while pulling data from BigQuery. Grouping by taxi_id and day: @@ -99,12 +95,6 @@ pyarrow.parquet.write_table(entities_2019_table, "entities.parquet") ``` -```python -#entities_2020_table = bq_client.query(entities_query(2020)).to_arrow() -#pyarrow.parquet.write_table(entities_2019_table, "entities_2020.parquet") -``` - - ## 2. Declaring features @@ -217,19 +207,6 @@ entity_df
- @@ -326,15 +303,9 @@ store.create_saved_dataset( ) ``` - /Users/pyalex/projects/feast/sdk/python/feast/feature_store.py:853: RuntimeWarning: Saving dataset is an experimental feature. This API is unstable and it could and most probably will be changed in the future. We do not guarantee that future changes will maintain backward compatibility. - RuntimeWarning, - - - - - - , full_feature_names = False, tags = {}, _retrieval_job = , min_event_timestamp = 2019-06-01 00:00:00, max_event_timestamp = 2019-07-01 00:00:00)> - +```python +, full_feature_names = False, tags = {}, _retrieval_job = , min_event_timestamp = 2019-06-01 00:00:00, max_event_timestamp = 2019-07-01 00:00:00)> +``` ## 4. Developing dataset profiler @@ -355,10 +326,6 @@ from great_expectations.core.expectation_suite import ExpectationSuite from great_expectations.dataset import PandasDataset ``` - 02/02/2022 02:43:45 PM WARNING:/Users/pyalex/projects/feast/venv/lib/python3.7/site-packages/great_expectations/render/view/view.py:116: DeprecationWarning: 'contextfilter' is renamed to 'pass_context', the old name will be removed in Jinja 3.1. - def add_data_context_id_to_url(self, jinja_context, url, add_datetime=True): - - Loading saved dataset first and exploring the data: @@ -368,27 +335,7 @@ ds = store.get_saved_dataset('my_training_ds') ds.to_df() ``` - /Users/pyalex/projects/feast/sdk/python/feast/feature_store.py:904: RuntimeWarning: Retrieving datasets is an experimental feature. This API is unstable and it could and most probably will be changed in the future. We do not guarantee that future changes will maintain backward compatibility. - RuntimeWarning, - - - - -
-
@@ -611,13 +558,7 @@ Testing our profiler function: ```python ds.get_profile(profiler=stats_profiler) ``` - 02/02/2022 02:43:47 PM INFO: 5 expectation(s) included in expectation_suite. result_format settings filtered. - - - - - -
@@ -827,8 +750,6 @@ entity_df - - ```python job = store.get_historical_features( entity_df=entity_df, @@ -855,11 +776,8 @@ except ValidationFailed as exc: print(exc.validation_report) ``` - /Users/pyalex/projects/feast/sdk/python/feast/infra/offline_stores/offline_store.py:93: RuntimeWarning: Dataset validation is an experimental feature. This API is unstable and it could and most probably will be changed in the future. We do not guarantee that future changes will maintain backward compatibility. - RuntimeWarning, - 02/02/2022 02:43:58 PM INFO: 5 expectation(s) included in expectation_suite. result_format settings filtered. - 02/02/2022 02:43:59 PM INFO:Validating data_asset_name None with expectation_suite_name default - + 02/02/2022 02:43:58 PM INFO: 5 expectation(s) included in expectation_suite. result_format settings filtered. + 02/02/2022 02:43:59 PM INFO: Validating data_asset_name None with expectation_suite_name default [ { From 9eaa5f530ae2d6f5379e2c93d5ae71a256e0d856 Mon Sep 17 00:00:00 2001 From: pyalex Date: Thu, 3 Feb 2022 17:06:34 +0800 Subject: [PATCH 4/4] typo Signed-off-by: pyalex --- docs/tutorials/tutorials-overview.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/tutorials-overview.md b/docs/tutorials/tutorials-overview.md index 036f78af05..e28e5836f7 100644 --- a/docs/tutorials/tutorials-overview.md +++ b/docs/tutorials/tutorials-overview.md @@ -10,4 +10,4 @@ These Feast tutorials showcase how to use Feast to simplify end to end model tra {% page-ref page="driver-stats-using-snowflake.md" %} -{% page-ref page="validation-historical-features.md" %} +{% page-ref page="validating-historical-features.md" %}