\documentclass[12pt, titlepage]{article}
\usepackage{amsmath, mathtools}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{colortbl}
\usepackage{xr}
\usepackage{hyperref}
\usepackage{longtable}
\usepackage{xfrac}
\usepackage{tabularx}
\usepackage{float}
\usepackage{siunitx}
\usepackage{booktabs}
\usepackage{caption}
\usepackage{pdflscape}
\usepackage{afterpage}
\usepackage[round]{natbib}
%\usepackage{refcheck}
\hypersetup{
bookmarks=true, % show bookmarks bar?
colorlinks=true, % false: boxed links; true: colored links
linkcolor=red, % color of internal links (change box color with linkbordercolor)
citecolor=green, % color of links to bibliography
filecolor=magenta, % color of file links
urlcolor=cyan % color of external links
}
% For easy change of table widths
\newcommand{\colZwidth}{1.0\textwidth}
\newcommand{\colAwidth}{0.13\textwidth}
\newcommand{\colBwidth}{0.82\textwidth}
\newcommand{\colCwidth}{0.1\textwidth}
\newcommand{\colDwidth}{0.05\textwidth}
\newcommand{\colEwidth}{0.8\textwidth}
\newcommand{\colFwidth}{0.17\textwidth}
\newcommand{\colGwidth}{0.5\textwidth}
\newcommand{\colHwidth}{0.28\textwidth}
% Used so that cross-references have a meaningful prefix
\newcounter{defnum} %Definition Number
\newcommand{\dthedefnum}{GD\thedefnum}
\newcommand{\dref}[1]{GD\ref{#1}}
\newcounter{datadefnum} %Datadefinition Number
\newcommand{\ddthedatadefnum}{DD\thedatadefnum}
\newcommand{\ddref}[1]{DD\ref{#1}}
\newcounter{theorynum} %Theory Number
\newcommand{\tthetheorynum}{T\thetheorynum}
\newcommand{\tref}[1]{T\ref{#1}}
\newcounter{tablenum} %Table Number
\newcommand{\tbthetablenum}{T\thetablenum}
\newcommand{\tbref}[1]{TB\ref{#1}}
\newcounter{assumpnum} %Assumption Number
\newcommand{\atheassumpnum}{P\theassumpnum}
\newcommand{\aref}[1]{A\ref{#1}}
\newcounter{goalnum} %Goal Number
\newcommand{\gthegoalnum}{P\thegoalnum}
\newcommand{\gsref}[1]{GS\ref{#1}}
\newcounter{instnum} %Instance Number
\newcommand{\itheinstnum}{IM\theinstnum}
\newcommand{\iref}[1]{IM\ref{#1}}
\newcounter{reqnum} %Requirement Number
\newcommand{\rthereqnum}{P\thereqnum}
\newcommand{\rref}[1]{R\ref{#1}}
\newcounter{lcnum} %Likely change number
\newcommand{\lthelcnum}{LC\thelcnum}
\newcommand{\lcref}[1]{LC\ref{#1}}
\usepackage{fullpage}
\begin{document}
\title{Classification of Data Obfuscated By Blurring and Encryption}
\author{Peter Michalski}
\date{\today}
\maketitle
\newpage
\tableofcontents
\addtocontents{toc}{\protect\thispagestyle{empty}}
~\newpage
\pagenumbering{gobble}
\pagenumbering{arabic}
\section{Abstract}
We examined the classification accuracy of VGGNet and Autoencoder neural networks on data sets that have been obfuscated by both blurring and encryption. The MNIST data set was blurred with Gaussian noise of increasing variance, and a block encryption algorithm was then used to further obfuscate the data.
Building on related work, the classification accuracy of these multi-obfuscated data sets was compared to the classification accuracy of data sets that were either blurred or encrypted. Classification accuracy dropped significantly for data sets that were obfuscated by multiple techniques, and we attribute this to internal covariate shift. To mitigate this effect, we incorporated batch normalization into our networks, which significantly improved the classification accuracy of data that had been both blurred and encrypted. We hope that our findings regarding the application of batch normalization to multi-obfuscated data will be helpful in solving problems suffering from internal covariate shift in the future.\\
~\newpage
\section{Introduction}
Social networking sites have been leveraging technical solutions of third parties into their business model. As \cite{ahmed2018obfuscated} discuss, these third parties provide secure data storage and classification services for the social networking sites. Through the use of obfuscated data classification techniques, the stored data is used to make friend or hobby suggestions for the social media user without ever compromising privacy. As an alternative to the business model described by Ahmed et al., we believe that third parties providing such services could also gather metadata for resale without compromising the privacy of their clients' users.\\
\noindent In regard to obfuscated data classification, studies such as the one conducted by Ahmed et al. have focused on data that has been obfuscated by either blurring or encryption techniques. Furthermore, Ahmed et al. have specifically focused on using Visual Geometry Group networks (VGGNet) and Autoencoder networks to classify obfuscated data.\\
\noindent This study will again attempt to use networks similar in style to those of Ahmed et al. to classify images that have been obfuscated. Building on their work, this study will attempt to classify images that have been obfuscated by both blurring and encryption techniques. The objective, using the social media model described above, is to enable third party data service providers to make friend or hobby suggestions and gather metadata using noisy data. We believe that noisy data can provide meaningful user information and should not be overlooked. While uploaded images are not likely to be noisy, a non-negligible number of uploaded video frames could be impacted by intermittent light interference, slow video focus, and unstable recording. Information gathered from these frames could be used to build a better model of the social media user and their environment.\\
\noindent We believe that the incorporation of multiple obfuscation techniques on test data will have a profound negative effect on the classification accuracy of the neural network models used by Ahmed et al. relative to their classification accuracy on data concealed by a single obfuscation technique. The additional obfuscation of the data serves to further eliminate patterns and increase convolution. In regard to neural network processing of this data, this has the effect of increasing the rate and size of changes to network parameters during learning. These changes are further amplified in deeper layers of the network. As layers in the network stay in a prolonged state of non-trivial adaptations, the network has difficulty efficiently converging on a final solution for its network parameters. As \cite{ioffe2015batch} term it, this is known as internal covariate shift. We believe that incorporating batch normalization into Ahmed et al.'s models can mitigate this effect. Batch normalization will scale the activation and output of nodes and effectively normalize the input to each layer, better allowing the model to efficiently and effectively converge on a solution.\\
\noindent Further to the scenario above, the impact and contribution of this research can be extended to applications of classifying encrypted noisy data. For instance, noisy passive security camera footage may be classified while in a secure state. Additionally, analog data that is securely stored can be classified for anomalies or meta-features by third party storage systems. Overall, the impact of this study is to reveal how effectively secure noisy data can be classified using VGGNet and Autoencoder neural networks with and without a batch normalization technique.\\
\noindent Work related to our research can be found in Section \ref{RelatedWork} of this report. Background information, including data from the related work on which our research builds, can be found in Section \ref{Background}. The methodology of our research, including the techniques we used for image obfuscation and our neural network models, is outlined in Section \ref{Methodology}. The classification results of our models are presented in Section \ref{Evaluations}. Concluding remarks are found in Section \ref{Conclusions}.\\
~\newpage
\section{Related Work}\label{RelatedWork}
The problem addressed in this paper is the classification of data obfuscated by multiple obfuscation techniques. The works discussed below address related studies and relevant data classification approaches and techniques.
\subsection{Obfuscated image classification for secure image-centric friend recommendation}
\noindent \cite{ahmed2018obfuscated} focus on the classification of images that have been obfuscated using one of several techniques, including blurring or encryption. Their motivation is a scenario where a third party cloud storage entity will classify secure private social media images for a social media provider with the purpose of generating friend recommendations for social media users. The model is shown in Figure \ref{Ahmed_social_model} below.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\textwidth]{ahmed_social_model}
\caption{Social Model Architecture}
\label{Ahmed_social_model}
\end{center}
\end{figure}
\noindent In their paper, Ahmed et al. use Gaussian blurring in two dimensions and AES-128 encryption in ECB block cipher mode to obfuscate the MNIST, CIFAR-10, and MirFlickr-25K data sets. They classify these obfuscated data sets using a deep convolutional neural network based on the Visual Geometry Group network (VGGNet) of \cite{simonyan2014very}, as well as an Autoencoder neural network. Specifically, they classify the blurred images using the deep convolutional network and the encrypted images using the Autoencoder network. Their models and results are described in Section \ref{Background} of this report. \\
\subsection{Very Deep Convolutional Networks for Large-Scale Image Recognition}
\noindent \cite{simonyan2014very} evaluate networks of increased depth (11--19 layers) with very small ($3 \times 3$) convolutional filters. They describe several network configurations for validating and classifying $224 \times 224$ RGB images. The networks incorporate several convolutional layers and filters, several max-pooling layers, and several fully-connected layers. The hidden layers of their networks are equipped with the ReLU activation function. The network configurations are described in Figure \ref{ConvNet} below.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\textwidth]{ConvNet}
\caption{ConvNet Configurations}
\label{ConvNet}
\end{center}
\end{figure}
\noindent The ConvNet training is carried out by optimizing the multinomial logistic regression objective using mini-batch gradient descent. The paper demonstrated that increased neural network representation depth is beneficial for classification accuracy. Simonyan and Zisserman's networks are referred to as VGGNet in reference to the University of Oxford's Visual Geometry Group, of which they were members at the time of publication.\\
\subsection{Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising}
\cite{zhang2017beyond} focus on developing feed-forward denoising convolutional neural networks, specifically for Gaussian denoising. The proposed DnCNN uses a residual learning formulation together with batch normalization to improve training speed and performance. Batch normalization alleviates internal covariate shift by incorporating a normalization step and a scale-and-shift step. The paper found that batch normalization can greatly benefit neural network learning performance, and this finding was conducive to our decision to include the technique in our neural networks.\\
\subsection{Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift}
\cite{ioffe2015batch} designate the difficulty of training deep neural networks due to the compounding effects of node initialization on deeper layers as internal covariate shift. They outline the difficulty in convergence that deep neural networks experience when they need to continuously adapt to new distributions of layer inputs. Ioffe and Szegedy suggest keeping the distribution of non-linearity inputs more stable as the network trains, so that the optimizer is less likely to get stuck in the saturated regime. They propose a new mechanism, which they call batch normalization, that reduces internal covariate shift via additional neural network steps that fix the means and variances of layer inputs.\\
~\newpage
\section{Background}\label{Background}
\noindent As our report intends to test the accuracy of classifying images obfuscated by multiple techniques on the neural network models of \cite{ahmed2018obfuscated}, this section briefly outlines the methodology and results of their framework. The end of this section covers the batch normalization technique which we intend to introduce into Ahmed et al.'s models. \\
\noindent Ahmed et al. used a Gaussian low-pass filter of size $[x, y]$ and standard deviation $\sigma$, as shown in Figure \ref{gaussianahmed}, to blur the MNIST data set. This was implemented using the fspecial tool of the MATLAB image filtering toolbox. \\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\textwidth]{gaussianahmed}
\caption{Ahmed et al. Gaussian Filter}
\label{gaussianahmed}
\end{center}
\end{figure}
\noindent To encrypt the images, Ahmed et al. used AES-128 in Electronic Codebook mode. This was implemented using the GitHub JImageEncryptor library by raphleon.\\
\noindent Ahmed et al. used an Autoencoder neural network as well as \cite{simonyan2014very}'s VGGNet neural network in their study. Their Autoencoder framework included a two-step training process of pre-training and fine-tuning, as shown in Figure \ref{nnahmed}. Ahmed et al. did not specify which of the VGGNets listed in Figure \ref{ConvNet} was used for their study; an assumption is made in our methodology in Section \ref{Methodology}.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.7\textwidth]{nnahmed}
\caption{Ahmed et al. Autoencoder Architecture}
\label{nnahmed}
\end{center}
\end{figure}
\noindent Ahmed et al. had the following results on the MNIST data set: An average accuracy of 99.503\% on the non-secure version, 95.93\% on the blurred version, and 83.93\% on the encrypted version.\\
\noindent We intend to classify the MNIST data set after it has been both blurred and encrypted with techniques similar to those of Ahmed et al. We predict that the classification accuracy will be significantly lower than classification on the data set when it is either blurred or encrypted, due to the additional convolution of the data. We believe that this convolution will cause instability in the form of internal covariate shift, and we intend to mitigate it by adding a batch normalization technique to Ahmed et al.'s models. As \cite{ioffe2015batch} describe in their paper, this technique helps to stabilize the network. They proposed that batch normalization can alleviate internal covariate shift by incorporating two steps into the network: a normalization step and a scale-and-shift step before the non-linearity in each layer.\\
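\noindent For reference, these two steps can be stated concretely. For a mini-batch $\mathcal{B} = \{x_1, \ldots, x_m\}$ of inputs to a layer node, batch normalization computes
\begin{equation*}
\hat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta,
\end{equation*}
where $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}^2$ are the mini-batch mean and variance, $\epsilon$ is a small constant for numerical stability, and $\gamma$ and $\beta$ are learned scale and shift parameters applied before the non-linearity \citep{ioffe2015batch}.\\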
~\newpage
\section{Methodology}\label{Methodology}
\subsection{Image Obfuscation}
\noindent We begin by downloading a copy of the MNIST data set from \href{http://yann.lecun.com/exdb/mnist/}{here}.
We will refer to this original MNIST data set as non-secure. The MNIST data set consists of 48,000 training and 12,000 test images of handwritten digits. A sample of such images is shown in Figure \ref{MNIST_BASE}.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\textwidth]{MNIST_BASE}
\caption{Non-Secure MNIST Image Samples}
\label{MNIST_BASE}
\end{center}
\end{figure}
\noindent We will then generate a set of secure (obfuscated) versions of the MNIST data set. The associated MATLAB files can be viewed in the GitHub repository found \href{https://github.com/peter-michalski/CAS771/tree/master/MATLAB}{here}.\\
~\newpage
\subsubsection{Blurring}\label{BlurringMethodology}
We create 8 blurred MNIST data sets, each including training and testing data, using the imnoise MATLAB function. The data sets differ in the variance of Gaussian noise. The first data set we create is labeled Var0\_04 and has a variance of Gaussian noise setting of 0.04. The second data set is labeled Var0\_12 and has a variance of Gaussian noise setting of 0.12. We continue to increment the variance of Gaussian noise by 0.08 until we have our 8th data set labeled Var0\_60. A sample of such images is shown in Figure \ref{MNIST_BLURRED}.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\textwidth]{MNIST_BLUR}
\caption{Blurred MNIST}
\label{MNIST_BLURRED}
\end{center}
\end{figure}
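\noindent The blurring step above is performed with MATLAB's imnoise function. As an illustration only, and not the code used in this study, a minimal Python/NumPy sketch of the same operation (adding zero-mean Gaussian noise of a given variance to an image scaled to $[0, 1]$) could look as follows:
\begin{verbatim}
import numpy as np

def add_gaussian_noise(image, variance, seed=None):
    # Approximates imnoise(I, 'gaussian', 0, variance) for an image in [0, 1].
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, np.sqrt(variance), size=image.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixel values in the valid range
\end{verbatim}
\noindent Generating the Var0\_04 through Var0\_60 data sets then amounts to applying this operation with variances 0.04, 0.12, \ldots, 0.60 to every training and test image.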
~\newpage
\subsubsection{Encryption}\label{EncryptionMethodology}
\noindent We then create an encrypted-only version of MNIST using a simple block encryption algorithm. A sample of such images is shown in Figure \ref{MNIST_ENCRYPTED}.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\textwidth]{MNIST_EN}
\caption{Encrypted MNIST}
\label{MNIST_ENCRYPTED}
\end{center}
\end{figure}
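\noindent We do not reproduce the encryption algorithm here. As a purely hypothetical illustration of what a simple block cipher applied to an image might look like (not the algorithm used in this study), one could permute non-overlapping pixel blocks with a key-seeded permutation:
\begin{verbatim}
import numpy as np

def block_scramble(image, key, block=4):
    # Hypothetical example only: shuffle non-overlapping blocks of the image
    # according to a permutation derived from the key.
    rng = np.random.default_rng(key)
    h, w = image.shape
    blocks = [image[r:r + block, c:c + block]
              for r in range(0, h, block)
              for c in range(0, w, block)]
    order = rng.permutation(len(blocks))
    out = np.zeros_like(image)
    i = 0
    for r in range(0, h, block):
        for c in range(0, w, block):
            out[r:r + block, c:c + block] = blocks[order[i]]
            i += 1
    return out
\end{verbatim}
\noindent A scheme of this kind is reversible given the key, while still leaving local block statistics that a classifier can learn from.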
\noindent Finally, we create data sets that have been both blurred and encrypted by running our blurred data sets through our encryption algorithm.
The first such data set is labeled 0\_04\_en and is the encrypted version of Var0\_04. We follow this procedure with our remaining 7 blurred data sets. A sample of such images is shown in Figure \ref{MNIST_BLURRED_EN}.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.7\textwidth]{MNIST_BLUR_ER}
\caption{Blurred and Encrypted MNIST}
\label{MNIST_BLURRED_EN}
\end{center}
\end{figure}
~\newpage
\subsection{Original VGGNet}
\noindent Similar to \cite{ahmed2018obfuscated}, we use a VGGNet network from \cite{simonyan2014very} to measure classification accuracy. Ahmed et al. did not specify which VGGNet model they used. We have decided to use VGG13, as the MNIST data is neither large nor complex and should not require a deeper neural network such as VGG16 or VGG19. The VGG13 network can be found
\href{https://github.com/peter-michalski/CAS771/blob/master/python/VGG/MYinitial_VGG.py}{here}.\\
\subsection{Modified VGGNet}
\noindent We incorporated batch normalization into our VGGNet neural network by adding a batch normalization step after each 2D convolutional layer. The updated VGG13 network can be found \href{https://github.com/peter-michalski/CAS771/blob/master/python/VGG/MYupdated_VGG.py}{here}.\\
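\noindent The framework and full layer definitions are given in the linked source files. As a minimal sketch of the modification itself, assuming a Keras-style API that may differ from the repository code, a VGG-style convolutional block with the added batch normalization step could be written as:
\begin{verbatim}
from tensorflow.keras import layers

def vgg_block(x, filters, convs=2):
    # One VGG-style block: stacked 3x3 convolutions, each followed by
    # batch normalization (the added step) and ReLU, then max pooling.
    for _ in range(convs):
        x = layers.Conv2D(filters, (3, 3), padding="same")(x)
        x = layers.BatchNormalization()(x)  # added after each 2D convolution
        x = layers.Activation("relu")(x)
    return layers.MaxPooling2D((2, 2))(x)
\end{verbatim}
\noindent The original VGGNet of the previous subsection corresponds to the same block without the batch normalization call.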
\subsection{Original Autoencoder}
\noindent \cite{ahmed2018obfuscated} also used a two-step Autoencoder network to measure classification accuracy. Similarly, our network has a two-step pre-training and fine-tuning architecture. The Autoencoder network can be found
\href{https://github.com/peter-michalski/CAS771/blob/master/python/Autoencoder/MYoriginal_autoencoder.py}{here}.\\
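\noindent As a rough sketch of the two-step procedure, again assuming a Keras-style API that may differ from the repository code, the encoder is first trained as part of an autoencoder on a reconstruction objective and is then reused, with a classification head, for fine-tuning:
\begin{verbatim}
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D((2, 2))(x)
encoded = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(x)

# Step 1: pre-train the encoder and decoder on image reconstruction.
d = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(encoded)
d = layers.UpSampling2D((2, 2))(d)
decoded = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(d)
autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Step 2: reuse the trained encoder and fine-tune a classifier on the labels.
c = layers.Flatten()(encoded)
c = layers.Dense(128, activation="relu")(c)
outputs = layers.Dense(10, activation="softmax")(c)
classifier = models.Model(inputs, outputs)
classifier.compile(optimizer="adam", loss="categorical_crossentropy",
                   metrics=["accuracy"])
\end{verbatim}
\noindent Because the classifier is built on the same encoder layers, fitting the autoencoder first and the classifier second mirrors the pre-training and fine-tuning steps described above.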
\subsection{Modified Autoencoder}
We incorporated batch normalization into our Autoencoder neural network by adding a batch normalization step after each 2D convolutional layer. The updated Autoencoder network can be found \href{https://github.com/peter-michalski/CAS771/blob/master/python/Autoencoder/MYupgraded_autoencoder.py}{here}.\\
~\newpage
\section{Evaluations}\label{Evaluations}
\subsection{Original VGGNet}\label{EvalOrigVGG}
The classification result for the non-secure MNIST data set is found in Table \ref{table:basicVGG_MNIST}: Non-Secure MNIST Classification Results: Original VGGNet. The data set produced an accuracy of 99.31\% averaged over three tests. This accuracy is similar to the 99.503\% accuracy of Ahmed et al. for non-secure MNIST.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
MNIST & 0.9931\\
\hline
\end{tabular}
\caption{Non-Secure MNIST Classification Results: Original VGGNet}
\label{table:basicVGG_MNIST}
\end{center}
\end{table}
\noindent The classification result for the encrypted MNIST data set is found in Table \ref{table:basicVGG_Encryption}: Encrypted MNIST Classification Results: Original VGGNet. The data set produced an accuracy of 96.03\% averaged over three tests. This accuracy is significantly greater than the 83.93\% accuracy of Ahmed et al. for encrypted MNIST. This difference is attributed to our use of a novel but simple block encryption algorithm, chosen to maintain convergence of our models when classifying images that have been both blurred and encrypted, as outlined in Section \ref{EncryptionMethodology}: Encryption.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
Encrypted MNIST & 0.9603\\
\hline
\end{tabular}
\caption{Encrypted MNIST Classification Results: Original VGGNet}
\label{table:basicVGG_Encryption}
\end{center}
\end{table}
\noindent The classification result for the blurred MNIST data sets, averaged over three tests, is found in Table \ref{table:basicVGG_Blurred}: Blurred MNIST Classification Results: Original VGGNet. The highest accuracy of the data sets was 99.04\% for the Var0\_04 data set, which has the lowest variance of Gaussian noise at 0.04, as outlined in Section \ref{BlurringMethodology}: Blurring. The lowest accuracy of the data sets was 89.60\% for the Var0\_60 data set, which has the highest variance of Gaussian noise at 0.60. A linear increase in variance is observed to cause an exponential decrease in accuracy. As a comparison, the average classification accuracy in Ahmed et al. for blurred MNIST was 95.93\%, suggesting that our blurring technique has a similar effect.
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
Var0\_04 & 0.9904\\
\hline
Var0\_12 & 0.9883\\
\hline
Var0\_20 & 0.9763\\
\hline
Var0\_28 & 0.9656\\
\hline
Var0\_36 & 0.9566\\
\hline
Var0\_44 & 0.9441\\
\hline
Var0\_52 & 0.9282\\
\hline
Var0\_60 & 0.8960\\
\hline
\end{tabular}
\caption{Blurred MNIST Classification Results: Original VGGNet}
\label{table:basicVGG_Blurred}
\end{center}
\end{table}
~\newpage
\noindent The classification result for the blurred and encrypted MNIST data sets, averaged over three tests, is found in Table \ref{table:basicVGG_BlurredEncrypted}: Blurred and Encrypted MNIST Classification Results: Original VGGNet. The highest accuracy of the data sets was 56.29\% for the 0\_04\_en data set, which has the lowest variance of Gaussian noise at 0.04, as outlined in Section \ref{BlurringMethodology}: Blurring. The lowest accuracy of the data sets was 47.29\% for the 0\_60\_en data set, which has the highest variance of Gaussian noise at 0.60. Similar to the blurred MNIST data sets, a linear increase in variance is observed to cause an exponential decrease in accuracy in these blurred and encrypted data sets. The rate of change of accuracy relative to variance of Gaussian noise is inherited from the blurred data sets as they are further encrypted.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
0\_04\_en & 0.5629\\
\hline
0\_12\_en & 0.5794\\
\hline
0\_20\_en & 0.5781\\
\hline
0\_28\_en & 0.5599\\
\hline
0\_36\_en & 0.5609\\
\hline
0\_44\_en & 0.4866\\
\hline
0\_52\_en & 0.4878\\
\hline
0\_60\_en & 0.4729\\
\hline
\end{tabular}
\caption{Blurred and Encrypted MNIST Classification Results: Original VGGNet}
\label{table:basicVGG_BlurredEncrypted}
\end{center}
\end{table}
~\newpage
\noindent A graphical presentation of the classification results for the blurred MNIST data sets compared to the blurred and encrypted MNIST data sets can be observed in Figure \ref{GRAPH_OriginalVGG}: Original VGGNet Accuracy Comparison. As stated previously, we can observe that both sets inherit the characteristic that a linear increase in the variance of Gaussian noise results in an exponential decrease in classification accuracy. We can also observe that further encryption has resulted in a profound decrease in classification accuracy for all blurred data sets. While the encrypted-only MNIST produced a high classification accuracy result of 96.03\%, as stated in Table \ref{table:basicVGG_Encryption}, the encryption of the blurred-only data sets with such a mild encryption algorithm has profoundly decreased their classification accuracy, as found in Table \ref{table:basicVGG_BlurredEncrypted} and observed in Figure \ref{GRAPH_OriginalVGG} below. The encryption of the fairly accurate blurred data sets has resulted in a drop of roughly 40\% in accuracy.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{O_VGG_BL_vs_BEN}
\caption{Original VGGNet Accuracy Comparison}
\label{GRAPH_OriginalVGG}
\end{center}
\end{figure}
~\newpage
\subsection{Modified VGGNet}\label{EvalModVGG}
The classification result for the non-secure MNIST data set is found in Table \ref{table:modVGG_MNIST}: Non-Secure MNIST Classification Results: Modified VGGNet. The data set produced an accuracy of 99.39\% averaged over three tests. The accuracy is slightly improved over the 99.31\% accuracy result of the original VGGNet neural network found in Section \ref{EvalOrigVGG}.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
MNIST & 0.9939\\
\hline
\end{tabular}
\caption{Non-Secure MNIST Classification Results: Modified VGGNet}
\label{table:modVGG_MNIST}
\end{center}
\end{table}
\noindent The classification result for the encrypted MNIST data set is found in Table \ref{table:modVGG_Encryption}: Encrypted MNIST Classification Results: Modified VGGNet. The data set produced an accuracy of 98.21\% averaged over three tests. The change in accuracy is non-negligible when compared to the 96.03\% accuracy of the original VGGNet found in Section \ref{EvalOrigVGG}. The addition of batch normalization to the original VGGNet has reduced the misclassification of encrypted MNIST images by roughly half.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
Encrypted MNIST & 0.9821\\
\hline
\end{tabular}
\caption{Encrypted MNIST Classification Results: Modified VGGNet}
\label{table:modVGG_Encryption}
\end{center}
\end{table}
\noindent The classification result for the blurred MNIST data sets, averaged over three tests, is found in Table \ref{table:modVGG_Blurred}: Blurred MNIST Classification Results: Modified VGGNet. The highest accuracy of the data sets was 99.21\% for the Var0\_04 data set, which has the lowest variance of Gaussian noise at 0.04, as outlined in Section \ref{BlurringMethodology}. The lowest accuracy of the data sets was 90.31\% for the Var0\_60 data set, which has the highest variance of Gaussian noise at 0.60. Similar to the results of the original VGGNet, as found in Section \ref{EvalOrigVGG}, a linear increase in variance is observed to cause an exponential decrease in accuracy.
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
Var0\_04 & 0.9921\\
\hline
Var0\_12 & 0.9891\\
\hline
Var0\_20 & 0.9812\\
\hline
Var0\_28 & 0.9704\\
\hline
Var0\_36 & 0.9585\\
\hline
Var0\_44 & 0.9468\\
\hline
Var0\_52 & 0.9298\\
\hline
Var0\_60 & 0.9031\\
\hline
\end{tabular}
\caption{Blurred MNIST Classification Results: Modified VGGNet}
\label{table:modVGG_Blurred}
\end{center}
\end{table}
~\newpage
\noindent The classification result for the blurred and encrypted MNIST data sets, averaged over three tests, is found in Table \ref{table:modVGG_BlurredEncrypted}: Blurred and Encrypted MNIST Classification Results: Modified VGGNet. The highest accuracy of the data sets was 91.40\% for the 0\_04\_en data set, which has the lowest variance of Gaussian noise at 0.04, as outlined in Section \ref{BlurringMethodology}: Blurring. The lowest accuracy of the data sets was 81.06\% for the 0\_60\_en data set, which has the highest variance of Gaussian noise at 0.60. Similar to the results of the original VGGNet found in Section \ref{EvalOrigVGG}, a linear increase in variance is observed to cause an exponential decrease in accuracy in these blurred and encrypted data sets. Section \ref{VGGAnalysis}: VGGNet Analysis addresses the observation that classification accuracy results for blurred and encrypted data sets are significantly higher in the modified VGGNet when compared to the original VGGNet.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
0\_04\_en & 0.9140\\
\hline
0\_12\_en & 0.9076\\
\hline
0\_20\_en & 0.8942\\
\hline
0\_28\_en & 0.8813\\
\hline
0\_36\_en & 0.8688\\
\hline
0\_44\_en & 0.8491\\
\hline
0\_52\_en & 0.8323\\
\hline
0\_60\_en & 0.8106\\
\hline
\end{tabular}
\caption{Blurred and Encrypted MNIST Classification Results: Modified VGGNet}
\label{table:modVGG_BlurredEncrypted}
\end{center}
\end{table}
~\newpage
\noindent A graphical presentation of the classification results for the blurred MNIST data sets compared to the blurred and encrypted MNIST data sets can be observed in Figure \ref{GRAPH_ModifiedVGG}: Modified VGGNet Accuracy Comparison. As stated previously, we can observe that both sets inherit the characteristic that a linear increase in the variance of Gaussian noise results in an exponential decrease in classification accuracy. We can also observe that further encryption has resulted in a marginal decrease in classification accuracy for all blurred data sets. While the encrypted-only MNIST produced a high classification accuracy result of 98.21\%, as stated in Table \ref{table:modVGG_Encryption}, the encryption of the blurred-only data sets with such a mild encryption algorithm has marginally decreased their classification accuracy, as found in Table \ref{table:modVGG_BlurredEncrypted} and observed in Figure \ref{GRAPH_ModifiedVGG} below. The encryption of the blurred data sets has resulted in a drop of roughly 10\% in accuracy.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{Mod_VGG_BL_vs_BEN}
\caption{Modified VGGNet Accuracy Comparison}
\label{GRAPH_ModifiedVGG}
\end{center}
\end{figure}
~\newpage
\subsection{VGGNet Analysis}\label{VGGAnalysis}
The classification results of the original and modified VGGNet for the blurred MNIST data sets can be observed in Figure \ref{GRAPH_COMP_VGG_BL}: VGGNet Accuracy Comparison - Blurred. We observe that the addition of batch normalization has not significantly increased classification accuracy in the modified VGGNet. The exponential effect of linearly increasing the variance of Gaussian noise is seen in both neural networks.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{Mod_vs_Or_VGG_BL}
\caption{VGGNet Accuracy Comparison - Blurred}
\label{GRAPH_COMP_VGG_BL}
\end{center}
\end{figure}
~\newpage
\noindent The classification results of the original and modified VGGNet for the blurred and encrypted MNIST data sets can be observed in Figure \ref{GRAPH_COMP_VGG_BEN}: VGGNet Accuracy Comparison - Blurred and Encrypted. We observe that the addition of batch normalization has significantly increased classification accuracy in the modified VGGNet. The data set with the highest accuracy when using the original VGGNet, 0\_04\_en with 56.29\%, has an accuracy of 91.40\% when using the modified VGGNet. The data set with the lowest accuracy when using the original VGGNet, 0\_60\_en with 47.29\%, has an accuracy of 81.06\% when using the modified VGGNet. The stabilizing effect of batch normalization on internal covariate shift is clear from these results.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{Mod_vs_Or_VGG_BEN}
\caption{VGGNet Accuracy Comparison - Blurred and Encrypted}
\label{GRAPH_COMP_VGG_BEN}
\end{center}
\end{figure}
~\newpage
\subsection{Original Autoencoder}\label{EvalOrigAE}
The classification result for the non-secure MNIST data set is found in Table \ref{table:basicAE_MNIST}: Non-Secure MNIST Classification Results: Original Autoencoder. The data set produced an accuracy of 98.64\% averaged over three tests. This accuracy is similar to the 99.503\% accuracy of Ahmed et al. for non-secure MNIST.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
MNIST & 0.9864\\
\hline
\end{tabular}
\caption{Non-Secure MNIST Classification Results: Original Autoencoder}
\label{table:basicAE_MNIST}
\end{center}
\end{table}
\noindent The classification result for the encrypted MNIST data set is found in Table \ref{table:basicAE_Encryption}: Encrypted MNIST Classification Results: Original Autoencoder. The data set produced an accuracy of 96.98\% averaged over three tests. Similar to the VGGNet model, this accuracy is significantly greater than the 83.93\% accuracy of Ahmed et al. for encrypted MNIST. This difference is attributed to our use of a novel but simple block encryption algorithm, chosen to maintain convergence of our models when classifying images that have been both blurred and encrypted, as outlined in Section \ref{EncryptionMethodology}: Encryption.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
Encrypted MNIST & 0.9698\\
\hline
\end{tabular}
\caption{Encrypted MNIST Classification Results: Original Autoencoder}
\label{table:basicAE_Encryption}
\end{center}
\end{table}
\noindent The classification result for the blurred MNIST data sets, averaged over three tests, is found in Table \ref{table:basicAE_Blurred}: Blurred MNIST Classification Results: Original Autoencoder. The highest accuracy of the data sets was 98.94\% for the Var0\_04 data set, which has the lowest variance of Gaussian noise at 0.04, as outlined in Section \ref{BlurringMethodology}: Blurring. The lowest accuracy of the data sets was 89.77\% for the Var0\_60 data set, which has the highest variance of Gaussian noise at 0.60. A linear increase in variance is once again observed to cause an exponential decrease in accuracy. As a comparison, the average classification accuracy in Ahmed et al. for blurred MNIST was 95.93\%, suggesting that our blurring technique has a similar effect.
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
Var0\_04 & 0.9894\\
\hline
Var0\_12 & 0.9838\\
\hline
Var0\_20 & 0.9739\\
\hline
Var0\_28 & 0.9644\\
\hline
Var0\_36 & 0.9461\\
\hline
Var0\_44 & 0.9147\\
\hline
Var0\_52 & 0.9193\\
\hline
Var0\_60 & 0.8977\\
\hline
\end{tabular}
\caption{Blurred MNIST Classification Results: Original Autoencoder}
\label{table:basicAE_Blurred}
\end{center}
\end{table}
~\newpage
\noindent The classification result for the blurred and encrypted MNIST data sets, averaged over three tests, is found in Table \ref{table:basicAE_BlurredEncrypted}: Blurred and Encrypted MNIST Classification Results: Original Autoencoder. The highest accuracy of the data sets was 46.88\% for the 0\_04\_en data set, which has the lowest variance of Gaussian noise at 0.04, as outlined in Section \ref{BlurringMethodology}: Blurring. The lowest accuracy among the data sets that converged on a solution was 17.29\% for the 0\_36\_en data set, which has a variance of Gaussian noise of 0.36. Data sets with a higher variance of Gaussian noise did not converge on a solution and are marked as DNC (Did Not Converge). Similar to the blurred MNIST data sets, a linear increase in variance is observed to cause an exponential decrease in accuracy in these blurred and encrypted data sets. The rate of change of accuracy relative to variance of Gaussian noise is inherited from the blurred data sets as they are further encrypted.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
0\_04\_en & 0.4688\\
\hline
0\_12\_en & 0.4365\\
\hline
0\_20\_en & 0.4301\\
\hline
0\_28\_en & 0.3193\\
\hline
0\_36\_en & 0.1729\\
\hline
0\_44\_en & DNC\\
\hline
0\_52\_en & DNC\\
\hline
0\_60\_en & DNC\\
\hline
\end{tabular}
\caption{Blurred and Encrypted MNIST Classification Results: Original Autoencoder}
\label{table:basicAE_BlurredEncrypted}
\end{center}
\end{table}
~\newpage
\noindent A graphical presentation of the classification results for the blurred MNIST data sets compared to the blurred and encrypted MNIST data sets can be observed in Figure \ref{GRAPH_OriginalAE}: Original Autoencoder Accuracy Comparison. As stated previously, we can observe that both sets inherit the characteristic that a linear increase in the variance of Gaussian noise results in an exponential decrease in classification accuracy. This is more apparent in the blurred and encrypted data sets for the Autoencoder neural network. We can also observe that further encryption has resulted in a profound decrease in classification accuracy for all blurred data sets, with the most blurred data sets not converging on a solution. While the encrypted-only MNIST produced a high classification accuracy result of 96.98\%, as stated in Table \ref{table:basicAE_Encryption}, the encryption of the blurred-only data sets with such a mild encryption algorithm has significantly decreased their classification accuracy, as found in Table \ref{table:basicAE_BlurredEncrypted} and observed in Figure \ref{GRAPH_OriginalAE} below. The encryption of the fairly accurate blurred data sets has resulted in a drop of more than 50\% in accuracy. The three data sets with the highest variance of Gaussian noise did not converge.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{OAE_B_vs_BEN}
\caption{Original Autoencoder Accuracy Comparison}
\label{GRAPH_OriginalAE}
\end{center}
\end{figure}
\subsection{Modified Autoencoder}\label{EvalModAE}
The classification result for the non-secure MNIST data set is found in Table \ref{table:modAE_MNIST}: Non-Secure MNIST Classification Results: Modified Autoencoder. The data set produced an accuracy of 98.65\% averaged over three tests. The accuracy is similar to the 98.64\% accuracy result of the original Autoencoder neural network found in Section \ref{EvalOrigAE}.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
MNIST & 0.9865\\
\hline
\end{tabular}
\caption{Non-Secure MNIST Classification Results: Modified Autoencoder}
\label{table:modAE_MNIST}
\end{center}
\end{table}
\noindent The classification result for the encrypted MNIST data set is found in Table \ref{table:modAE_Encryption}: Encrypted MNIST Classification Results: Modified Autoencoder. The data set produced an accuracy of 97.27\% averaged over three tests. The change in accuracy is marginal when compared to the 96.98\% accuracy of the original Autoencoder found in Section \ref{EvalOrigAE}. The addition of batch normalization to the original Autoencoder has only slightly reduced the misclassification of encrypted MNIST images.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
Encrypted MNIST & 0.9727\\
\hline
\end{tabular}
\caption{Encrypted MNIST Classification Results: Modified Autoencoder}
\label{table:modAE_Encryption}
\end{center}
\end{table}
\noindent The classification result for the blurred MNIST data sets, averaged over three tests, is found in Table \ref{table:modAE_Blurred}: Blurred MNIST Classification Results: Modified Autoencoder. The highest accuracy of the data sets was 98.67\% for the Var0\_04 data set, which has the lowest variance of Gaussian noise at 0.04, as outlined in Section \ref{BlurringMethodology}. The lowest accuracy of the data sets was 90.93\% for the Var0\_60 data set, which has the highest variance of Gaussian noise at 0.60. Similar to the results of the original Autoencoder, as found in Section \ref{EvalOrigAE}, a linear increase in variance is observed to cause an exponential decrease in accuracy.
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
Var0\_04 & 0.9867\\
\hline
Var0\_12 & 0.9823\\
\hline
Var0\_20 & 0.9756\\
\hline
Var0\_28 & 0.9649\\
\hline
Var0\_36 & 0.9478\\
\hline
Var0\_44 & 0.9382\\
\hline
Var0\_52 & 0.9266\\
\hline
Var0\_60 & 0.9093\\
\hline
\end{tabular}
\caption{Blurred MNIST Classification Results: Modified Autoencoder}
\label{table:modAE_Blurred}
\end{center}
\end{table}
~\newpage
\noindent The classification result for the blurred and encrypted MNIST data sets, averaged over three tests, is found in Table \ref{table:modAE_BlurredEncrypted}: Blurred and Encrypted MNIST Classification Results: Modified Autoencoder. The highest accuracy of the data sets was 86.56\% for the 0\_04\_en data set, which has the lowest variance of Gaussian noise at 0.04, as outlined in Section \ref{BlurringMethodology}: Blurring. The lowest accuracy of the data sets was 75.04\% for the 0\_60\_en data set, which has the highest variance of Gaussian noise at 0.60. Similar to the results of the original Autoencoder found in Section \ref{EvalOrigAE}, a linear increase in variance is observed to cause an exponential decrease in accuracy in these blurred and encrypted data sets, except for the 0\_12\_en data set, which is an outlier with a lower than expected average accuracy of 81.52\%. Additional testing may raise this average. Section \ref{AEAnalysis}: Autoencoder Analysis addresses the observation that classification accuracy results for blurred and encrypted data sets are significantly higher in the modified Autoencoder when compared to the original Autoencoder.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{| c | c |}
\hline
\textbf{Data-Set} & \textbf{Accuracy}\\
\hline
0\_04\_en & 0.8656\\
\hline
0\_12\_en & 0.8152\\
\hline
0\_20\_en & 0.8433\\
\hline
0\_28\_en & 0.8311\\
\hline
0\_36\_en & 0.8155\\
\hline
0\_44\_en & 0.7766\\
\hline
0\_52\_en & 0.7864\\
\hline
0\_60\_en & 0.7504\\
\hline
\end{tabular}
\caption{Blurred and Encrypted MNIST Classification Results: Modified Autoencoder}
\label{table:modAE_BlurredEncrypted}
\end{center}
\end{table}
~\newpage
\noindent A graphical presentation of the classification results for the blurred MNIST data sets compared to the blurred and encrypted MNIST data sets can be observed in Figure \ref{GRAPH_ModifiedAE}: Modified Autoencoder Accuracy Comparison. As stated previously, we can observe that both sets inherit the characteristic that a linear increase in the variance of Gaussian noise results in a slightly exponential decrease in classification accuracy. We can also observe that further encryption has resulted in a marginal decrease in classification accuracy for all blurred data sets. While the encrypted-only MNIST produced a high classification accuracy result of 97.27\%, as stated in Table \ref{table:modAE_Encryption}, the encryption of the blurred-only data sets with such a mild encryption algorithm has marginally decreased their classification accuracy, as found in Table \ref{table:modAE_BlurredEncrypted} and observed in Figure \ref{GRAPH_ModifiedAE} below. The encryption of the blurred data sets has resulted, on average, in a drop of roughly 15\% in accuracy.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{Mod_AE_BL_vs_BEN}
\caption{Modified Autoencoder Accuracy Comparison}
\label{GRAPH_ModifiedAE}
\end{center}
\end{figure}
~\newpage
\subsection{Autoencoder Analysis}\label{AEAnalysis}
The classification results of the original and modified Autoencoder for the blurred MNIST data sets can be observed in Figure \ref{GRAPH_COMP_AE_BL}: Autoencoder Accuracy Comparison - Blurred. We observe that the addition of batch normalization has not increased classification accuracy in the modified Autoencoder for most data sets; however, a slight divergence is apparent at larger variances of Gaussian noise. The exponential effect of linearly increasing the variance of Gaussian noise is seen in both neural networks.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{Mod_vs_Or_AE_BL}
\caption{Autoencoder Accuracy Comparison - Blurred}
\label{GRAPH_COMP_AE_BL}
\end{center}
\end{figure}
~\newpage
\noindent The classification results of the original and modified Autoencoder for the blurred and encrypted MNIST data sets can be observed in Figure \ref{GRAPH_COMP_AE_BEN}: Autoencoder Accuracy Comparison - Blurred and Encrypted. We observe that the addition of batch normalization has significantly increased classification accuracy in the modified Autoencoder. The data set with the highest accuracy when using the original Autoencoder, 0\_04\_en with 46.88\%, has an accuracy of 86.56\% when using the modified Autoencoder. The data set with the lowest converged accuracy when using the original Autoencoder, 0\_36\_en with 17.29\%, has an accuracy of 81.55\% when using the modified Autoencoder. The stabilizing effect of batch normalization on internal covariate shift is clear from these results.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.0\textwidth]{Mod_vs_Or_AE_BEN}
\caption{Autoencoder Accuracy Comparison - Blurred and Encrypted}
\label{GRAPH_COMP_AE_BEN}
\end{center}
\end{figure}
~\newpage
\section{Conclusions}\label{Conclusions}
\noindent This report presented findings on the classification accuracy of data that has been both blurred and encrypted. We had hypothesized that the classification accuracy of such data in VGGNet and Autoencoder neural networks would be significantly lower than the classification accuracy of data that has been either blurred or encrypted. We further hypothesized that the incorporation of batch normalization into these networks would improve the classification accuracy of the data that had been both blurred and encrypted.\\
\noindent The results of our study have shown that basic VGGNet and Autoencoder neural networks are very accurate in classifying MNIST images that have been blurred by a Gaussian function or encrypted using block encryption techniques. We also found that the basic forms of these neural networks are much less accurate in classifying data that has been both blurred and encrypted, with the VGGNet producing slightly better results than the Autoencoder in such a capacity. It was also noted that a linear increase in variance of Gaussian noise caused an exponential decrease in classification accuracy. The classification results noted here can be observed in Section \ref{EvalOrigVGG} and Section \ref{EvalOrigAE}.\\
\noindent The VGGNet and Autoencoder neural networks produced much better results at classifying data that has been blurred and encrypted after they were modified to include batch normalization. This is due to the stabilizing effect of batch normalization on internal covariate shift. The modified VGGNet neural network had a slightly higher accuracy than the modified Autoencoder in classifying the blurred and encrypted MNIST data. The classification results noted here can be observed in Section \ref{EvalModVGG} and Section \ref{EvalModAE}.\\
\noindent This work addressed an interesting research problem in the field of security, specifically the classification of noisy data. We hope that our findings regarding the application of batch normalization to multi-obfuscated data will be helpful in solving problems suffering from internal covariate shift in the future.\\
~\newpage
\section{Acknowledgments}
This work was supported by Dr. He of McMaster University. I would also like to acknowledge Aditya Sharma for the terrific Autoencoder tutorial hosted at the website DataCamp.
~\newpage
\bibliographystyle {plainnat}
\bibliography {../../References}
\end{document}