# ---
# jupyter:
# jupytext:
# formats: ipynb,py:light
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.0
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Console Analysis
#
# ## Introduction
#
# This notebook enables easy monitoring of training progress in the AWS DeepRacer console. It integrates directly with the console to retrieve information and metrics,
# so it is easy to reload to get updated charts, and it provides more detail than the current console UI.
#
# ### Usage
#
# Out of the box, the notebook only needs the name of your model to load the data.
#
# ### Contributions
#
# As usual, your ideas are very welcome and encouraged, so if you have any suggestions, either bring them to [the AWS DeepRacer Community](http://join.deepracing.io) or share them as code contributions.
#
# ### Requirements
#
# Before you start using the notebook, you will need to install some dependencies. If you haven't done so yet, have a look at [The README.md file](/edit/README.md#running-the-notebooks) to find out what you need to install.
# This notebook requires `deepracer-utils>=0.23`.
#
# Apart from the install, you also have to configure programmatic access to AWS. Have a look at the guides below; the AWS documentation will lead you by the hand:
#
# AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
#
# Boto Configuration: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html
# ## Core configuration
#
# ## Installs and setups
#
# If you are using an AWS SageMaker Notebook or SageMaker Studio Lab to run the log analysis, you will need to install the required dependencies. To do that, uncomment and run the following:
# +
# Make sure you have the required pre-reqs
# import sys
# # !{sys.executable} -m pip install --upgrade -r requirements.txt
# -
# Run the imports
# +
from deepracer.console import ConsoleHelper
from deepracer.logs import AnalysisUtils as au
from deepracer.logs.metrics import TrainingMetrics
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import warnings
import os
from IPython.display import display
from boto3.exceptions import PythonDeprecationWarning
warnings.filterwarnings("ignore", category=PythonDeprecationWarning)
# -
# ## Login
#
# Login to AWS.
# Uncomment and use this section of code if the machine you're using for analysis isn't already authenticated to your AWS account:
# +
#os.environ["AWS_DEFAULT_REGION"] = "" #<-Add your region
#os.environ["AWS_ACCESS_KEY_ID"] = "" #<-Add your access key
#os.environ["AWS_SECRET_ACCESS_KEY"] = "" #<-Add you secret access key
#os.environ["AWS_SESSION_TOKEN"] = "" #<-Add your session key if you have one
# -
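# To confirm that your credentials are being picked up, you can ask STS who you are. This is a quick sanity check, not part of the original workflow:
# +
import boto3

# Raises NoCredentialsError if no credentials are configured.
print(boto3.client('sts').get_caller_identity()['Arn'])
# -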
# ## Load model from console
# +
# Enter the name of your model as it appears in the console
MODEL = 'Analysis-Demo'

ch = ConsoleHelper()
model_arn = ch.find_model(MODEL)
if model_arn is None:
    raise ValueError("Could not find a model named {}".format(MODEL))
print("Found model {} as {}".format(MODEL, model_arn))

training_job = ch.get_training_job(model_arn)
training_job_status = training_job['ActivityJob']['Status']['JobStatus']
metrics_url = training_job['ActivityJob']['MetricsPreSignedUrl']
print("Training status is {}".format(training_job_status))
# -
# ## Metrics Analysis
#
# ### Loading Metrics Data
#
# The basic setup covers loading data from a single model. It obtains the metrics.json URL via an API call and loads it directly into a TrainingMetrics object.
tm = TrainingMetrics(None, url=metrics_url)
rounds = np.array([[1, 1]])  # one row per training round; a model that has not been cloned has a single round
NUM_ROUNDS = len(rounds)
# ### Analysis
# The first analysis displays the basic statistics of the last five iterations.
summary = tm.getSummary(method='mean', summary_index=['r-i', 'master_iteration'])
display(summary[-5:])
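# If the mean is skewed by crashes, the same summary can be taken with a different aggregation. A variation on the call above (assuming `getSummary` accepts the same methods as `plotProgress`):
# +
summary_max = tm.getSummary(method='max', summary_index=['r-i', 'master_iteration'])
display(summary_max[-5:])
# -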
# +
train = tm.getTraining()
ev = tm.getEvaluation()

print("Latest iteration: %s / master %i" % (max(train['r-i']), max(train['master_iteration'])))
print("Episodes: %i" % len(train))
# -
# ### Plotting progress
#
# The next command displays the progress chart. It shows the data per iteration (dots) and a rolling average, which makes trends easy to spot.
#
# One can control the number of charts shown through the choice of metrics; `min`, `max`, `median` and `mean` are some of the available options.
#
# By altering the `rounds` parameter one can choose not to display all training rounds.
_ = tm.plotProgress(method=['median','mean','max'], rolling_average=5, figsize=(20,5), rounds=rounds[:,0])
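# As a variation (a sketch using the same parameters as the call above), a single metric with a longer rolling average makes slow trends easier to see:
# +
_ = tm.plotProgress(method=['median'], rolling_average=15, figsize=(20, 5), rounds=rounds[:, 0])
# -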
# ### Best laps
#
# The following cells show the five fastest complete training and evaluation laps.
train_complete_lr = train[(train['round'] > (NUM_ROUNDS - 1)) & (train['complete'] == 1)]
display(train_complete_lr.nsmallest(5, ['time']))
eval_complete_lr = ev[(ev['round'] > (NUM_ROUNDS - 1)) & (ev['complete'] == 1)]
display(eval_complete_lr.nsmallest(5, ['time']))
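# A small follow-on sketch (not in the original notebook): compare the single best complete lap between training and evaluation.
# +
if not train_complete_lr.empty and not eval_complete_lr.empty:
    print("Best training lap: %.2f s" % train_complete_lr['time'].min())
    print("Best evaluation lap: %.2f s" % eval_complete_lr['time'].min())
# -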
# ### Best lap progression
#
# The plot below shows how the best training and evaluation laps change over time. This is useful for checking whether your model is getting faster.
plt.figure(figsize=(15, 5))
plt.title('Best lap progression')
plt.scatter(train_complete_lr['master_iteration'], train_complete_lr['time'], alpha=0.4, label='Training')
plt.scatter(eval_complete_lr['master_iteration'], eval_complete_lr['time'], alpha=0.4, label='Evaluation')
plt.legend()
plt.show()
# ### Lap progress
#
# The plot below shows the completion percentage for each training and evaluation episode in the latest round.
plt.figure(figsize=(15, 5))
plt.title('Progress per episode')
train_r = train[train['round'] == NUM_ROUNDS]
eval_r = ev[ev['round'] == NUM_ROUNDS]
plt.scatter(train_r['episode'], train_r['completion'], alpha=0.5, label='Training')
plt.scatter(eval_r['episode'], eval_r['completion'], c='orange', alpha=0.5, label='Evaluation')
plt.legend()
plt.show()
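# A sketch to complement the scatter above (assuming `completion` is a percentage): a rolling mean of training completion makes the trend easier to read.
# +
plt.figure(figsize=(15, 5))
plt.title('Rolling mean of training completion')
plt.plot(train_r['episode'], train_r['completion'].rolling(15, min_periods=1).mean())
plt.xlabel('Episode')
plt.ylabel('Completion (%)')
plt.show()
# -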
# ## Robomaker Log Analysis
# If the training status is COMPLETED, it is also possible to load the Robomaker log file directly.
# Here we only provide a brief summary of the analysis; see `Training_analysis.ipynb` for more details.
#
# ### Load log file
# +
if training_job_status == "COMPLETED":
    df = ch.get_training_log_robomaker(model_arn)
else:
    # The Robomaker log is only available once training has completed.
    df = pd.DataFrame()

simulation_agg = au.simulation_agg(df)
complete_ones = simulation_agg[simulation_agg['progress'] == 100]
# -
# ### Analysis
au.analyze_training_progress(simulation_agg, title='Training progress')
au.scatter_aggregates(simulation_agg, 'Stats for all laps')
# View the five fastest complete laps
display(complete_ones.nsmallest(5, 'time'))
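# A brief extra sketch (not in the original notebook): the distribution of complete-lap times from the Robomaker log, assuming `time` holds the lap time in seconds.
# +
if not complete_ones.empty:
    complete_ones['time'].plot.hist(bins=20, figsize=(15, 5), title='Complete lap times')
    plt.xlabel('Lap time (s)')
    plt.show()
# -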