Advent of Code 2021

Getting Started

  • Clone the repository
  • Each folder is a day's challenge. The challenge is listed in that folder's README.
  • The input for the challenge is normally embedded at the bottom of the package. This is just so I don't have to fiddle with reading from a file.

Benchmarking

# Assumes you're in the project root

# Run all tests, including benchmarks
go test -bench . ./...

# Run tests for a specific subdirectory
DIR='20'
go test -v ./$DIR/...
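
For context, go test -bench . picks up any function in a _test.go file whose name starts with Benchmark and takes a *testing.B. The sketch below shows what such a benchmark could look like; the solver, input, and names here are hypothetical, not this repo's actual code.

package day01

import "testing"

// partOne stands in for a day's solver; in this repo each day's package
// exposes its own solution functions and embeds its puzzle input.
func partOne(input []int) int {
    count := 0
    for i := 1; i < len(input); i++ {
        if input[i] > input[i-1] {
            count++
        }
    }
    return count
}

// BenchmarkPartOne is discovered by go test -bench . because of its name
// and its *testing.B parameter; b.N controls how many times the solver runs.
func BenchmarkPartOne(b *testing.B) {
    input := []int{199, 200, 208, 210, 200, 207, 240, 269, 260, 263}
    for i := 0; i < b.N; i++ {
        partOne(input)
    }
}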

Results

Day Title Part One (ms) Part Two (ms) Notes
01 Sonar Sweep N/A N/A Not measured -- I hadn't started benchmarking at this stage.
02 Dive! N/A 0.0014 Easy peasy.
03 Binary Diagnostic 0.116 0.155 Pretty easy, though it was definitely my first time bit-pushing with Go. Not altogether a bad experience.
04 Giant Squid (Bingo) 0.227 0.372 Not too bad. I seem to remember that efficiency was annoying, but it still ran, well, faster than is ever necessary.
05 Hydrothermal Venture 34.193 57.359 Not too difficult from an execution standpoint -- the interesting part was parsing the input into lines. Since the coordinate system only accepted integers, each line could be broken into a finite set of points and each point mapped to its occurrence count, making filtering for "number of coordinates with a count greater than X" pretty easy. (A sketch of this approach follows the table.)
06 Lanternfish 0.046 0.129 Not much to say about this one -- pretty trivial, as long as you're not trying to track each fish individually through a list or stack. Maps are the best! (A sketch follows the table.)
07 The Treachery of Whales 0.065 0.021 This one was pretty interesting. Using the median in the first part was obvious to me, but the mean in the second part took me a bit to figure out. I'm sure there's a better way to resolve split medians and to decide whether to floor or ceil the mean, but brute forcing the candidates was still super fast. This is the first one where I've seen a faster time on Part Two than Part One, simply because calculating the mean doesn't require sorting the list like the median does. (A sketch follows the table.)
09 Smoke Basin 6.155 8.569 This was a super fun depth-first search algorithm to implement. So fun, in fact, that I packaged this one with a super cool terminal visualization. It should be pretty self-explanatory how to view it -- just look in main.go.
10 Syntax Scoring 0.607 0.652 I found this one to be interesting and fun. I didn't particularly prioritize blazing-fast performance; rather, I did my best from the outset to build a legit tag parsing system. I figured I'd have to do something with incomplete tags, so I went ahead and dealt with them during Part One. The design means that I pay the penalty of parsing lines that I could throw away for both Parts One and Two, but it also means that I could share a LogDump object between them if this were a real-world scenario. The looping/recursion combination I used for parsing everything in Part One translated over to Part Two extremely well -- for Part Two, I only had to add one line of code to the existing base, plus the scoring logic for incomplete lines.
13 Transparent Origami 0.113 0.654 I was really a fan of this one. Super fun to print each stage, and the fact that the solution was a visual one was really neat. Seems to be a pretty performant solution, too -- I used a map keyed by a string representation of each dot, pointing to its "data structure" representation. This made merging the dots after a fold super easy, and it also made counting visible dots really inexpensive, because I can just len(map). (A sketch follows the table.)
14 Extended Polymerization
Iterative benchmarks (times in ms, deltas in µs):
  • 10 iterations: 0.039, +/-0
  • 20 iterations: 0.078, +39
  • 30 iterations: 0.116, +38
  • 40 iterations: 0.154, +38
  • 50 iterations: 0.206, +52
  • 60 iterations: 0.230, +24
  • 70 iterations: 0.275, +45
  • 80 iterations: 0.305, +30
  • 90 iterations: 0.364, +59
  • 100 iterations: 0.391, +37
  • ...
  • 150 iterations: 0.610, +65
  • ...
  • 200 iterations: 0.796, +67
This was unquestionably my favorite one so far. I was one of the chumps who brute forced it with a string at first, only to be slapped in the face by the exponential blowup in Part Two. I rewrote the solution as a map of pair counts, which was not too hard at all. Since Part One and Part Two are the same thing with more iterations, I decided to bench this one a little differently. As you can see, it runs in roughly O(N) time in the number of iterations, which makes sense: there are only so many possible polymer pairs, and that ceiling is reached very early on. (A sketch of the pair-count approach follows the table.)
15 Chiton 8.523 199.635 This one was easy only because I cheated... sort of. I knew Dijkstra's or A* was pretty much the way to go for this, so I found a library for Dijkstra's. No point in implementing very well-defined functionality like that. As usual, parsing the input was the hardest part -- it's times like these when I really wish Go had array.Map and company.
21 Dirac Dice 0.005 413.700 Well, isn't that a jump from Part One to Part Two. Part One was a good time, and I had a bit too much fun with the solution. Part Two kicked my butt, as my earlier design didn't support what I needed to do at all. I'm still not super happy with the performance, but it's already memoized and I really don't want to optimize further.
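
Solution sketches

The snippets below are illustrative sketches of the approaches described in the notes above. None of them is this repo's actual code, the names are made up, and the inputs are either the puzzles' published examples or trimmed-down toy data.

Day 05 (Hydrothermal Venture): because the coordinates are integers, every vent line expands into a finite set of points, and a map from point to count makes the overlap query trivial.

package main

import "fmt"

type point struct{ x, y int }

// sign returns -1, 0, or 1 so a line can be walked one unit step at a time.
func sign(n int) int {
    switch {
    case n > 0:
        return 1
    case n < 0:
        return -1
    }
    return 0
}

func main() {
    // Each vent line is a start point and an end point (toy data here).
    lines := [][2]point{
        {{0, 9}, {5, 9}},
        {{0, 9}, {2, 9}},
        {{3, 4}, {1, 4}},
    }

    // Expand every line into points and count how often each point is hit.
    counts := map[point]int{}
    for _, l := range lines {
        dx, dy := sign(l[1].x-l[0].x), sign(l[1].y-l[0].y)
        for p := l[0]; ; p.x, p.y = p.x+dx, p.y+dy {
            counts[p]++
            if p == l[1] {
                break
            }
        }
    }

    // "Number of coordinates with a count greater than X" is now a filter.
    overlaps := 0
    for _, c := range counts {
        if c >= 2 {
            overlaps++
        }
    }
    fmt.Println(overlaps) // 3 for these three lines
}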
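
Day 06 (Lanternfish): track how many fish share each timer value rather than tracking individual fish -- the population grows exponentially, the map doesn't.

package main

import "fmt"

func main() {
    // The puzzle's example starting timers.
    timers := []int{3, 4, 3, 1, 2}
    counts := map[int]int{}
    for _, t := range timers {
        counts[t]++
    }

    // Each day every timer ticks down; zeroes reset to 6 and spawn an 8.
    for day := 0; day < 80; day++ {
        next := map[int]int{}
        for t, n := range counts {
            if t == 0 {
                next[6] += n // parents reset
                next[8] += n // newborns
            } else {
                next[t-1] += n
            }
        }
        counts = next
    }

    total := 0
    for _, n := range counts {
        total += n
    }
    fmt.Println(total) // 5934 after 80 days for the example
}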
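
Day 07 (The Treachery of Whales): the median minimizes total linear distance for Part One; for Part Two the optimum sits within half a unit of the mean, so checking the floored mean and the value above it covers both rounding choices.

package main

import (
    "fmt"
    "sort"
)

func abs(n int) int {
    if n < 0 {
        return -n
    }
    return n
}

func main() {
    // The puzzle's example crab positions.
    positions := []int{16, 1, 2, 0, 4, 2, 7, 1, 2, 14}

    // Part One: align on the median, pay one fuel per step.
    sorted := append([]int(nil), positions...)
    sort.Ints(sorted)
    median := sorted[len(sorted)/2]
    partOne := 0
    for _, p := range positions {
        partOne += abs(p - median)
    }

    // Part Two: a move of d steps costs 1+2+...+d = d*(d+1)/2 fuel.
    cost := func(target int) int {
        total := 0
        for _, p := range positions {
            d := abs(p - target)
            total += d * (d + 1) / 2
        }
        return total
    }
    sum := 0
    for _, p := range positions {
        sum += p
    }
    mean := sum / len(positions) // floored mean
    partTwo := cost(mean)
    if c := cost(mean + 1); c < partTwo {
        partTwo = c
    }

    fmt.Println(partOne, partTwo) // 37 and 168 for the example
}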
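
Day 13 (Transparent Origami): the note above describes keying a map by a string representation of each dot; the same effect falls out of using a comparable struct as the key directly, since merged dots collapse into one entry and len(map) is the visible-dot count.

package main

import "fmt"

type dot struct{ x, y int }

// foldUp reflects every dot below the horizontal fold line onto the top half.
// Duplicate dots collapse into a single map entry automatically.
func foldUp(dots map[dot]bool, y int) map[dot]bool {
    folded := map[dot]bool{}
    for d := range dots {
        if d.y > y {
            d.y = 2*y - d.y
        }
        folded[d] = true
    }
    return folded
}

func main() {
    // The puzzle's example dots.
    dots := map[dot]bool{
        {6, 10}: true, {0, 14}: true, {9, 10}: true, {0, 3}: true,
        {10, 4}: true, {4, 11}: true, {6, 0}: true, {6, 12}: true,
        {4, 1}: true, {0, 13}: true, {10, 12}: true, {3, 4}: true,
        {3, 0}: true, {8, 4}: true, {1, 10}: true, {2, 14}: true,
        {8, 10}: true, {9, 0}: true,
    }
    dots = foldUp(dots, 7) // first instruction: fold along y=7
    fmt.Println(len(dots)) // 17 visible dots
}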
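
Day 14 (Extended Polymerization): count pairs instead of building the string. Every insertion turns one pair into two, so a step is a pass over at most alphabet-squared map keys, no matter how long the "string" would be.

package main

import "fmt"

func main() {
    // The puzzle's example template and insertion rules.
    template := "NNCB"
    rules := map[string]string{
        "CH": "B", "HH": "N", "CB": "H", "NH": "C", "HB": "C", "HC": "B",
        "HN": "C", "NN": "C", "BH": "H", "NC": "B", "NB": "B", "BN": "B",
        "BB": "N", "BC": "B", "CC": "N", "CN": "C",
    }

    // Seed the pair counts from the template.
    pairs := map[string]int{}
    for i := 0; i+1 < len(template); i++ {
        pairs[template[i:i+2]]++
    }

    // Each step splits every pair AB into A+ins and ins+B.
    for step := 0; step < 10; step++ {
        next := map[string]int{}
        for p, n := range pairs {
            ins := rules[p]
            next[string(p[0])+ins] += n
            next[ins+string(p[1])] += n
        }
        pairs = next
    }

    // Every element is the first character of some pair, except the template's
    // final element, which never changes and is counted once by hand.
    elements := map[byte]int{template[len(template)-1]: 1}
    for p, n := range pairs {
        elements[p[0]] += n
    }

    least, most := -1, 0
    for _, n := range elements {
        if least < 0 || n < least {
            least = n
        }
        if n > most {
            most = n
        }
    }
    fmt.Println(most - least) // 1588 after 10 steps for the example
}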

About

My Advent of Code solutions. I prioritize readability over performance where possible.
