The goal of this project is to characterize the power and performance of deep learning on a CPU-GPU platform (NVIDIA Jetson TX1).

Goal

To profile the relationship between performance and power on an embedded platform while the GPU performs deep neural network inference and the CPU runs SPEC workloads.

GPU - Inferencing Scenarios

  • Different neural network architectures
  • (Different GPU frequencies; see the frequency-pinning sketch below)
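
As a concrete starting point, the sketch below shows how the low/mid/high GPU frequencies could be selected and pinned through the devfreq sysfs interface. The device node name (57000000.gpu) is an assumption that may differ across L4T releases, and the writes require root.

```python
# gpu_freq.py -- sketch: list and pin TX1 GPU frequencies via the devfreq sysfs interface.
# Assumption: the TX1 GPU shows up as 57000000.gpu; adjust the path for your L4T release.
# Writing min_freq/max_freq requires root.
from pathlib import Path

GPU_DEVFREQ = Path("/sys/class/devfreq/57000000.gpu")  # assumed device node

def available_gpu_freqs():
    """Return the GPU frequencies (Hz) exposed by devfreq, sorted ascending."""
    text = (GPU_DEVFREQ / "available_frequencies").read_text()
    return sorted(int(f) for f in text.split())

def set_gpu_freq(freq_hz):
    """Pin the GPU to a single frequency by clamping min_freq and max_freq."""
    lowest = available_gpu_freqs()[0]
    (GPU_DEVFREQ / "min_freq").write_text(str(lowest))   # drop min first so max can move freely
    (GPU_DEVFREQ / "max_freq").write_text(str(freq_hz))
    (GPU_DEVFREQ / "min_freq").write_text(str(freq_hz))

if __name__ == "__main__":
    freqs = available_gpu_freqs()
    low, mid, high = freqs[0], freqs[len(freqs) // 2], freqs[-1]
    print("low/mid/high GPU frequencies (Hz):", low, mid, high)
```

Pinning via min_freq/max_freq avoids depending on which devfreq governor happens to be active.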

CPU - Running other tasks

  • Run benchmarks (MiBench) to understand the relationship between CPU utilization and GPU speed

  • Different CPU frequencies (see the sketch after this list)

    • How it affects CPU utilization
    • How it affects CPU power consumption
    • How it affects overall system power consumption
    • How it affects GPU speed
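
A minimal sketch, using the standard cpufreq sysfs interface and /proc/stat, of how the CPU frequency could be pinned and utilization sampled while a MiBench workload runs. The userspace governor and the cpu0 scaling files are assumptions about the board's kernel configuration, and the writes require root.

```python
# cpu_freq_util.py -- sketch: pin the CPU frequency (cpufreq sysfs) and sample utilization (/proc/stat).
# Assumption: the userspace cpufreq governor is available; writes require root.
import time
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def set_cpu_freq(freq_khz):
    """Pin cpu0 to a fixed frequency in kHz (on the TX1 the cores may share one clock domain)."""
    (CPUFREQ / "scaling_governor").write_text("userspace")
    (CPUFREQ / "scaling_setspeed").write_text(str(freq_khz))

def _cpu_times():
    """Return (idle, total) jiffies from the aggregate 'cpu' line of /proc/stat."""
    fields = Path("/proc/stat").read_text().splitlines()[0].split()[1:]
    values = [int(v) for v in fields]
    idle = values[3] + values[4]          # idle + iowait
    return idle, sum(values)

def cpu_utilization(interval=1.0):
    """Average CPU utilization (0..1) over `interval` seconds."""
    idle0, total0 = _cpu_times()
    time.sleep(interval)
    idle1, total1 = _cpu_times()
    return 1.0 - (idle1 - idle0) / (total1 - total0)

if __name__ == "__main__":
    freqs = (CPUFREQ / "scaling_available_frequencies").read_text().split()
    print("available CPU frequencies (kHz):", freqs)
    print("utilization over 1 s: {:.1%}".format(cpu_utilization()))
```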

Action Items

  1. Come up with 3 image-classification DNNs that run on the TX1

    • Vary the network and the GPU frequency (low/mid/high) and measure power consumption and runtime to build Figure 1 (3x3-point scatter plot); see the measurement-sweep sketch after this list.
  2. Compile MiBench and pick 3 benchmarks to run on the CPU (branch-, memory-, and compute-bound)

    • Produce a dual-axis plot of CPU utilization and power consumption at different frequencies (3 curves); see the plotting sketch after this list.
  3. For each GPU setting (3x3), produce a dual-axis plot of GPU speed and overall power consumption while the CPU tasks run at different frequencies (3x3 plots)
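
For action item 1, a sketch of the measurement loop that would populate the 3x3 grid behind Figure 1. The model names, the `./run_inference` command, and the power-rail sysfs node are hypothetical placeholders for this repository's actual setup; `gpu_freq` refers to the frequency-pinning sketch above.

```python
# sweep.py -- sketch: 3 DNNs x 3 GPU frequencies -> runtime and mean power samples for Figure 1.
# MODELS, POWER_NODE, and the ./run_inference command are placeholders, not part of this repo.
import csv
import subprocess
import threading
import time
from pathlib import Path

from gpu_freq import available_gpu_freqs, set_gpu_freq  # frequency-pinning sketch above

MODELS = ["alexnet", "googlenet", "resnet18"]            # placeholder model names
POWER_NODE = Path("/path/to/in_power0_input")            # placeholder: board power rail (mW)

def run_and_measure(model):
    """Run one inference job while sampling board power; return (runtime_s, mean_power_mW)."""
    samples, stop = [], threading.Event()

    def sampler():
        while not stop.is_set():
            samples.append(int(POWER_NODE.read_text()))
            time.sleep(0.1)

    t = threading.Thread(target=sampler)
    t.start()
    start = time.time()
    subprocess.run(["./run_inference", "--model", model], check=True)  # placeholder command
    runtime = time.time() - start
    stop.set()
    t.join()
    return runtime, sum(samples) / max(len(samples), 1)

if __name__ == "__main__":
    freqs = available_gpu_freqs()
    grid = [freqs[0], freqs[len(freqs) // 2], freqs[-1]]  # low / mid / high
    with open("figure1.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["model", "gpu_freq_hz", "runtime_s", "mean_power_mw"])
        for model in MODELS:
            for freq in grid:
                set_gpu_freq(freq)
                writer.writerow([model, freq, *run_and_measure(model)])
```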
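
For the dual-axis plots in items 2 and 3, a minimal matplotlib sketch using a shared x-axis and `twinx()`. The frequency and measurement arrays are placeholders to be replaced with the values collected above, not results.

```python
# biaxes_plot.py -- sketch: dual-axis plot of CPU utilization and power consumption vs. frequency.
# The arrays below are placeholders showing the plot structure, not measured results.
import matplotlib.pyplot as plt

freq_mhz = [500, 1000, 1700]      # placeholder low / mid / high CPU frequencies
util_pct = [0.0, 0.0, 0.0]        # fill in: measured utilization per frequency
power_mw = [0.0, 0.0, 0.0]        # fill in: measured power per frequency

fig, ax_util = plt.subplots()
ax_power = ax_util.twinx()        # second y-axis sharing the same x-axis

ax_util.plot(freq_mhz, util_pct, "o-", color="tab:blue", label="CPU utilization")
ax_power.plot(freq_mhz, power_mw, "s--", color="tab:red", label="CPU power")

ax_util.set_xlabel("CPU frequency (MHz)")
ax_util.set_ylabel("Utilization (%)")
ax_power.set_ylabel("Power (mW)")
fig.legend(loc="upper left")
fig.tight_layout()
fig.savefig("cpu_util_power_vs_freq.png")
```

Repeating this per MiBench benchmark gives the 3-curve plot of item 2, and repeating it per GPU setting gives the 3x3 plots of item 3.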

Expected Results

  1. Categorize CPU benchmarks under Temperature, Power, Memory, and Latency constraints for given DNN benchmarks. (Table)

  2. Compare with baseline results

  3. (Potential future work) Perform the same analysis for DNN training
