<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link rel="shortcut icon" type="image/png" href="assets/s.png"/>
<title>sam</title>
<style>
body {
font-size: 20px;
padding: 0 2rem;
}
.summary-row {
display: inline-grid;
grid-template-columns: 16em 6em 0fr;
align-items: baseline;
padding: 4px 0;
max-width: 90%;
}
@media (max-width: 1000px) {
.summary-row {
grid-template-columns: 50vw 6em 0fr;
}
}
.summary-row:hover {
font-weight: bold;
animation: wiggle 12s alternate infinite ease-in-out 4s;
}
.dropdown-content {
margin: 0.5rem 1.5rem 1rem 1.5rem;
}
.gallery {
display: flex;
flex-direction: row;
align-items: center;
justify-content: flex-start;
transform-origin: left;
flex-wrap: wrap;
}
.gallery-item {
margin: 1rem 1rem 0 0;
}
h1 {
display: inline;
font-weight: bold;
font-size: inherit;
margin: 0;
}
h2 {
display: inline;
font-weight: inherit;
font-size: inherit;
margin: 0;
}
.image-thumb {
height: 2rem;
width: auto;
transform: translate(0, 0.5rem);
margin-left: 4px;
}
.image-small {
height: 10rem;
}
.image-medium {
height: 20rem;
}
.text-small {
font-size: 16px;
}
@keyframes wiggle {
to { transform: scale(1.01,3); }
}
</style>
</head>
<body><main>
<!-- about me -->
<details open>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_s.png" alt="" />
<h1>sam engel</h1>
</div>
<span></span>
</div>
</summary>
<div class="dropdown-content">
I'm a software engineer with a focus on creative audiovisual, spatial, and scientific tools. You can reach me at <a href="mailto:samuel.d.engel@gmail.com">samuel.d.engel@gmail.com</a>. I use he/they pronouns. You can find some things I've made on this web page!
</div>
</details>
<!-- web sketches -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_websketches.png" alt="" />
<h2>web sketches</h2>
</div>
<span><a href="https://legnes.github.io/web-sketches">live project</a></span>
<span><a href="https://github.com/legnes/web-sketches">source</a></span>
</div>
</summary>
<div class="dropdown-content">
Experiments in graphics and gpu compute for the web: proofs of concept, paper/blog implementations, small-scale demos, etc., written using WebGL and WebGPU.
<div class="gallery">
<figure class="gallery-item">
<img loading="lazy" class="image-medium" src="assets/websketches1.jpg" alt="Grid of Stanford bunnies blurred around a focal distance" />
<figcaption class="text-small">WebGPU depth of field using camera jitter</figcaption>
</figure>
<figure class="gallery-item">
<img loading="lazy" class="image-medium" src="assets/websketches2.jpg" alt="Primordial particle system looking like a microscope slide" />
<figcaption class="text-small">WebGPU primordial particle system</figcaption>
</figure>
<figure class="gallery-item">
<img loading="lazy" class="image-medium" src="assets/websketches3.jpg" alt="Fluid-like simulation" />
<figcaption class="text-small">WebGL reintegration tracking</figcaption>
</figure>
</div>
</div>
</details>
<!-- weekend raytracer -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_wkrt.png" alt="" />
<h2>weekend raytracer</h2>
</div>
<span></span>
<span><a href="https://github.com/legnes/weekend-raytracer-rs">source</a></span>
</div>
</summary>
<div class="dropdown-content">
A rust implementation of Peter Shirley, Trevor David Black, and Steve Hollasch's <a href="https://raytracing.github.io/books/RayTracingInOneWeekend.html">Ray Tracing in One Weekend</a>, with help from Daniel Busch's <a href="https://heyjuvi.github.io/raytracinginrust/">rust version of the book</a>. I added on a few extras, including image-based lighting, high dynamic range, direct lighting with shadow rays, parallelization, and a BVH based on <a href="https://jacco.ompf2.com/2022/04/13/how-to-build-a-bvh-part-1-basics/">this blog post</a> by Jacco Bikker. I also experimented with a surface area heuristic and sorted traversal for the BVH.
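The heart of a "one weekend" raytracer is the ray-sphere intersection test. A minimal JavaScript sketch of the standard quadratic-discriminant form (the project itself is written in rust, so this is an illustration, not its actual code):

```javascript
// Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
// Returns the nearest positive hit distance, or null for a miss.
function hitSphere(origin, dir, center, radius) {
  const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  const oc = origin.map((v, i) => v - center[i]);
  const a = dot(dir, dir);
  const halfB = dot(oc, dir);
  const c = dot(oc, oc) - radius * radius;
  const disc = halfB * halfB - a * c;
  if (disc < 0) return null;             // ray misses the sphere entirely
  const t = (-halfB - Math.sqrt(disc)) / a; // nearer of the two roots
  return t > 0 ? t : null;
}
```

A BVH accelerates exactly this: instead of testing every sphere, the ray descends a tree of bounding boxes and only runs this test on the handful of leaves it reaches.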
<div class="gallery">
<figure class="gallery-item">
<img loading="lazy" class="image-small" src="assets/wkrt2.jpg" alt="A mix of large and small spheres with different colors and reflectivities on a gray background" />
<figcaption class="text-small">Multiple directional lights, shadows</figcaption>
</figure>
<figure class="gallery-item">
<img loading="lazy" class="image-small" src="assets/wkrt3.jpg" alt="A mix of large and small spheres, seen from afar, reflecting a sunset by the sea" />
<figcaption class="text-small">Image-based lighting</figcaption>
</figure>
<figure class="gallery-item">
<img loading="lazy" class="image-small" src="assets/wkrt4.jpg" alt="A mix of large and small spheres reflecting a sunset" />
<figcaption class="text-small">Image-based lighting</figcaption>
</figure>
<figure class="gallery-item">
<img loading="lazy" class="image-small" src="assets/wkrt1.jpg" alt="Three spheres: one glass, one matte, and one metallic, on a yellow ground. The left half of the image has duller colors than the right half." />
<figcaption class="text-small">Tonemapper comparison (Uncharted 2 vs. approximate ACES)</figcaption>
</figure>
</div>
</div>
</details>
<!-- semaphore telegraph -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_telegraph.png" alt="" />
<h2>semaphore telegraph</h2>
</div>
<span><a href="https://legnes.github.io/semaphore-telegraph">live project</a></span>
<span><a href="https://github.com/legnes/semaphore-telegraph">source</a></span>
</div>
</summary>
<div class="dropdown-content">
At the end of the 18th century, engineers in France constructed a network of <a href="https://en.wikipedia.org/wiki/Optical_telegraph">optical telegraphs</a>, allowing messages to be transmitted hundreds of miles in a matter of minutes. As a production tool for the short film <a href="https://f.io/hMAeqFMa">Telegraph Tower</a>, I wrote a blender script to programmatically animate a model of a semaphore telegraph transmitting an arbitrary input string.
<div>
<video class="image-medium" controls><source src="assets/telegraph1.webm"></video>
</div>
I also put together a simple web app version of the tool with additional animation options:
<div>
<img loading="lazy" class="image-medium" src="assets/telegraph2.jpg" alt="The semaphore telegraph web app" />
</div>
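The core job of such a script, turning an input string into per-character arm poses that the animation system can keyframe, might be sketched like this (the pose table below is invented for illustration, not the historical Chappe code, and the names are hypothetical):

```javascript
// Hypothetical sketch: map each character of a message to a set of arm
// angles (regulator beam plus two indicators, in degrees) and a start frame.
// Real semaphore codes assigned a distinct pose per sign; this tiny table
// is a made-up stand-in.
const POSES = { a: [0, 45, 90], b: [0, 90, 45], c: [45, 0, 90] };

function messageToKeyframes(message, framesPerSign = 24) {
  const frames = [];
  for (const ch of message.toLowerCase()) {
    const pose = POSES[ch];
    if (!pose) continue; // skip characters with no assigned sign
    frames.push({ frame: frames.length * framesPerSign, angles: pose });
  }
  return frames;
}
```

In blender the resulting list would drive `keyframe_insert` calls on the arm bones; the web app version does the same interpolation in javascript.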
</div>
</details>
<!-- sdf text -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_sdf.png" alt="" />
<h2>sdf text</h2>
</div>
<span><a href="https://legnes.github.io/sdf-text">live project</a></span>
<span><a href="https://github.com/legnes/sdf-text">source</a></span>
</div>
</summary>
<div class="dropdown-content">
SDF Text is a small webgl and javascript tool for turning text into customizable word art:
<div>
<img loading="lazy" class="image-medium" src="assets/sdfs1.jpg" alt="The sdf-text app. On the left are a number of font settings. On the right, the words 'sdf word art!' appear in two tone pink and white with a dithered halo." />
</div>
It includes a live glsl shader editor that you can use to come up with new styles. I owe much of the inspiration and logic to <a href="https://steamcdn-a.akamaihd.net/apps/valve/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf">Valve's 2007 paper on SDFs</a>, <a href="https://cs.brown.edu/people/pfelzens/papers/dt-final.pdf">Pedro Felzenszwalb and Dan Huttenlocher's paper on calculating SDFs</a>, and the practical write-up of shapes and shaping in <a href="https://thebookofshaders.com/">The Book of Shaders</a>. At some point I may revisit this to replace the old cpu-side SDF generation with a gpu-side jump flood approach, but overall I'm pleased with how it came out! Here are some of my favorite results:
<div>
<img loading="lazy" class="image-small" src="assets/sdfs2.jpg" alt="The words 'type here!' dissolving into vertical lines" />
<img loading="lazy" class="image-small" src="assets/sdfs3.jpg" alt="The word 'type' looking sort of like a bubble" />
<img loading="lazy" class="image-small" src="assets/sdfs4.jpg" alt="The letters 'adsf' with wood grain patterned extrusions" />
<img loading="lazy" class="image-small" src="assets/sdfs5.jpg" alt="The letters 'asdf' printed three times in three colors (red, white, blue), overlapping" />
<img loading="lazy" class="image-small" src="assets/sdfs6.jpg" alt="The letters 'asdf' trailing horizontal icicle-like shapes" />
</div>
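The edge-shading trick from the Valve paper, ported from GLSL to JavaScript for illustration: a distance sample near 0.5 marks the glyph boundary, and smoothstep turns it into antialiased coverage, with extra bands giving outlines and halos. The specific band thresholds below are invented, not the app's values:

```javascript
// GLSL's smoothstep, reimplemented: clamp, then cubic Hermite interpolation.
function smoothstep(e0, e1, x) {
  const t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1);
  return t * t * (3 - 2 * t);
}

// Shade one pixel given its sampled signed-distance value in [0, 1],
// where 0.5 is the glyph edge. `softness` widens the antialiasing band;
// the outline band edges (0.35, 0.4) are arbitrary example values.
function shade(dist, softness = 0.05) {
  const fill = smoothstep(0.5 - softness, 0.5 + softness, dist);
  const outline = smoothstep(0.35, 0.4, dist) - fill;
  return { fill, outline };
}
```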
</div>
</details>
<!-- crossword corpus -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_crosswords.png" alt="" />
<h2>crossword corpus</h2>
</div>
<span><a href="https://xw-data.herokuapp.com">live project</a></span>
<span><a href="https://github.com/legnes/xw-data">source</a></span>
</div>
</summary>
<div class="dropdown-content">
Crossword Corpus started as a complaint: why do words like ALOE, ERA, and OAT show up so often in crossword puzzles? It turned into a kind of blog/web app hybrid about language, data, and crossword puzzles.
<div>
<img loading="lazy" class="image-medium" src="assets/crosswords1.jpg" alt="A graph showing year-by-year usage of three crossword answers, SDI (peaks around 2000), DSL (peaks around 2011), and OMG (peaks around 2020)" />
<img loading="lazy" class="image-medium" src="assets/crosswords2.jpg" alt="A heatmap of vowel usage in a 15x15 crossword grid" />
</div>
To get started, I put together a scraper in bash and collected two and a half decades of puzzles, then read them with a home-grown <a href="https://code.google.com/archive/p/puz/wikis/FileFormat.wiki">.puz file</a> parser. Once I could actually interpret the puzzles, I ingested them into a postgres database behind a node/express web app on heroku. The app hosts blog pages, exposes an api for running data analysis, and serves the results into tables and graphs by way of plotlyjs and HTML CustomElements. Once I could run tests and visualize findings, it was finally time to read up on corpus linguistics and get some answers.
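The CustomElements approach amounts to small classes that fetch or receive data and render their own markup, with no framework in between. A minimal sketch of the pattern (the element name, attribute, and helper below are illustrative, not the app's actual API):

```javascript
// Pure helper: turn (answer, count) pairs into table rows. Keeping the
// markup generation separate from the element makes it easy to test.
function rowsToHtml(rows) {
  return rows
    .map(([answer, count]) => `<tr><td>${answer}</td><td>${count}</td></tr>`)
    .join("");
}

// A hypothetical <term-frequency> element that renders its `data`
// attribute as a table when attached to the page. Guarded so the helper
// above still works outside a browser.
if (typeof customElements !== "undefined") {
  class TermFrequency extends HTMLElement {
    connectedCallback() {
      const rows = JSON.parse(this.getAttribute("data") || "[]");
      this.innerHTML = `<table>${rowsToHtml(rows)}</table>`;
    }
  }
  customElements.define("term-frequency", TermFrequency);
}
```

Usage is then just `<term-frequency data='[["ALOE", 512]]'></term-frequency>` in a blog page, with custom HTML events carrying filter changes between elements.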
<div>
<img loading="lazy" class="image-medium" src="assets/crosswords3.jpg" alt="A 2d histogram of crossword answers with x letters and y vowels" />
<img loading="lazy" class="image-medium" src="assets/crosswords4.jpg" alt="A graph showing crossword answer births and deaths over time" />
</div>
Along the way I ran afoul of some data management problems. Brotli and gzip helped with the high network usage, and more efficient data structures improved performance when working with the large language corpora. On the front-end, I was happy to find that CustomElements and custom HTML events are a viable alternative to bulkier frameworks. I learned a lot about language, so give the blog a read if you're interested!
</div>
</details>
<!-- arsiliath unity compute workshop -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_compute.png" alt="" />
<h2>arsiliath compute workshop</h2>
</div>
<span></span>
<span></span>
</div>
</summary>
<div class="dropdown-content">
In December 2020 I took <a href="https://twitter.com/arsiliath">Arsiliath's</a> <a href="https://arsiliath.gumroad.com/">workshop</a> on creating biology-inspired simulations using compute shaders in Unity. We implemented algorithms from the literature on cellular automata, reaction-diffusion, physarum, primordial particle systems, flocking, etc. Here are some of my favorite results:
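To give a flavor of what those compute shaders do: a reaction-diffusion system like Gray–Scott updates two chemical concentrations per cell each frame from their diffusion and reaction terms. A single-cell sketch in JavaScript, assuming the Laplacians are computed from the neighboring cells elsewhere (the parameter values are typical published ones, not necessarily the workshop's):

```javascript
// One explicit Euler step of the Gray–Scott reaction-diffusion model for a
// single cell. lapU/lapV are the discrete Laplacians of u and v at this
// cell; the shader version runs this in parallel for every pixel.
function grayScottStep(u, v, lapU, lapV, dt = 1.0) {
  const Du = 0.16, Dv = 0.08, feed = 0.035, kill = 0.06;
  const uvv = u * v * v;                        // the reaction: u + 2v -> 3v
  const du = Du * lapU - uvv + feed * (1 - u);  // diffuse, react, feed u
  const dv = Dv * lapV + uvv - (feed + kill) * v; // diffuse, react, remove v
  return [u + du * dt, v + dv * dt];
}
```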
<div class="gallery">
<figure class="gallery-item">
<video class="image-medium" controls><source src="assets/compute1.webm"></video>
<figcaption class="text-small">Reaction diffusion simulation</figcaption>
</figure>
<figure class="gallery-item">
<video class="image-medium" controls><source src="assets/compute2.webm"></video>
<figcaption class="text-small">Simple cyclic cellular automaton</figcaption>
</figure>
<figure class="gallery-item">
<video class="image-medium" controls><source src="assets/compute3.webm"></video>
<figcaption class="text-small">Diffusion-limited aggregation</figcaption>
</figure>
<figure class="gallery-item">
<video class="image-medium" controls><source src="assets/compute4.webm"></video>
<figcaption class="text-small">Cellular automaton using "rule 388"</figcaption>
</figure>
<figure class="gallery-item">
<video class="image-medium" controls><source src="assets/compute5.webm"></video>
<figcaption class="text-small">Three primordial particle systems</figcaption>
</figure>
<figure class="gallery-item">
<video class="image-medium" controls><source src="assets/compute6.webm"></video>
<figcaption class="text-small">Reaction diffusion algorithm visualized in 3d</figcaption>
</figure>
<figure class="gallery-item">
<video class="image-medium" controls><source src="assets/compute7.webm"></video>
<figcaption class="text-small">Primordial particle system in 3d</figcaption>
</figure>
</div>
</div>
</details>
<!-- checkerboard -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_check.png" alt="" />
<h2>checkerboard</h2>
</div>
<span><a href="https://legnes.github.io/checkerboard">live project</a></span>
<span><a href="https://github.com/legnes/checkerboard">source</a></span>
</div>
</summary>
<div class="dropdown-content">
Checkerboard is a threejs/webgl take on a doodle I've done since I was a kid (it looks like this):
<div>
<img loading="lazy" class="image-medium" src="assets/check1.jpg" alt="A warped checkerboard sketched on graph paper." />
<img loading="lazy" class="image-medium" src="assets/check2.jpg" alt="Several more warped checkerboards sketched on graph paper." />
</div>
Initially conceived as an excuse to play with shaders, Checkerboard grew into a small-scale particle system driven by a physical simulation running on the gpu. I had a blast working out a Runge-Kutta integrator and some simple lens optics!
<div>
<img loading="lazy" class="image-medium" src="assets/check3.jpg" alt="Checkerboard app, including an options menu with settings for particles, the checkerboard, gravity, the lens, and others." />
</div>
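The integrator mentioned above is the classic fourth-order Runge-Kutta scheme. A one-dimensional sketch for a particle with position and velocity under an arbitrary acceleration function (the project's real version runs per-particle on the gpu, so this is just the shape of the math):

```javascript
// Classic RK4 step for the system x' = v, v' = accel(x, v).
// Evaluates the acceleration at four staged points and blends them
// with weights 1-2-2-1 for fourth-order accuracy.
function rk4Step(x, v, accel, dt) {
  const a1 = accel(x, v);
  const x2 = x + v * dt / 2, v2 = v + a1 * dt / 2;
  const a2 = accel(x2, v2);
  const x3 = x + v2 * dt / 2, v3 = v + a2 * dt / 2;
  const a3 = accel(x3, v3);
  const x4 = x + v3 * dt, v4 = v + a3 * dt;
  const a4 = accel(x4, v4);
  return [
    x + (dt / 6) * (v + 2 * v2 + 2 * v3 + v4),
    v + (dt / 6) * (a1 + 2 * a2 + 2 * a3 + a4),
  ];
}
```

Compared to plain Euler integration, this keeps the particle orbits stable at much larger time steps, which matters when the simulation has to hit frame rate.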
</div>
</details>
<!-- folds -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_folds.png" alt="" />
<h2>folds</h2>
</div>
<span><a href="https://legnes.github.io/folds">live project</a></span>
<span><a href="https://github.com/legnes/folds">source</a></span>
</div>
</summary>
<div class="dropdown-content">
<div>
<img loading="lazy" class="image-medium" src="assets/folds1.jpg" alt="Folds app. In the main window, a blue and pink square has a complex pattern of creases. On the right is a settings menu, including variables and commands for visualization and sonification." />
</div>
Fold a paper with webgl/threejs. A simple edge detection pass tracks folds. For prettier creases, folds get a cheap separable gaussian blur before they are composited on top of previous creases and the base texture. Here are some patterns I made:
<div>
<img loading="lazy" class="image-medium" src="assets/folds2.jpg" alt="Star-shaped creases" />
<img loading="lazy" class="image-medium" src="assets/folds3.jpg" alt="Spider web-like creases" />
<img loading="lazy" class="image-medium" src="assets/folds4.jpg" alt="Graph nodes or wallpaper-like creases" />
<img loading="lazy" class="image-medium" src="assets/folds5.jpg" alt="Mix of thin and thick creases" />
</div>
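The separable blur above works because a 2D Gaussian factors into two 1D passes (horizontal then vertical), costing 2k taps per pixel instead of k². Building the normalized 1D weights might look like:

```javascript
// Build a normalized 1D Gaussian kernel of (2*radius + 1) taps.
// Applied once along x and once along y, it is equivalent to the full
// 2D Gaussian but far cheaper. The sigma default is a common heuristic.
function gaussianKernel(radius, sigma = radius / 2) {
  const weights = [];
  let sum = 0;
  for (let i = -radius; i <= radius; i++) {
    const w = Math.exp(-(i * i) / (2 * sigma * sigma));
    weights.push(w);
    sum += w;
  }
  return weights.map((w) => w / sum); // normalize so brightness is preserved
}
```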
The audio component is just an overtone series. Each fold adds/amplifies a simple sine/cosine wave to the note or chord. The "amount" of paper creased (acting as a proxy for "frequency" of the crease on the unfolded paper) determines which partial gets added.
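That mapping from creased area to overtone partial might be sketched as follows; the base frequency, partial range, and exact mapping here are illustrative guesses, not the app's actual values:

```javascript
// Hypothetical sketch: a fold creasing fraction `amount` of the paper
// selects a partial of a base frequency. A big crease is a low spatial
// frequency, so here it maps to a low partial; small creases add higher
// partials to the chord. All constants are invented for illustration.
function foldToPartial(amount, baseHz = 110, maxPartial = 16) {
  const partial = Math.max(1, Math.round(maxPartial * (1 - amount)) + 1);
  return partial * baseHz; // frequency of the added sine/cosine component
}
```

Each returned frequency would then drive one oscillator (or one term of an additive synth), so the chord grows richer as the paper accumulates folds.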
</div>
</details>
<!-- freecell -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_freecell.png" alt="" />
<h2>mom's freecell</h2>
</div>
<span><a href="https://legnes.github.io/freecell">live project</a></span>
<span><a href="https://github.com/legnes/freecell">source</a></span>
</div>
</summary>
<div class="dropdown-content">
I made this freecell for my mom, who used to play religiously but couldn't find any simple, minimal versions online. My implementation owes its chassis and much of its polish to a <a href="https://github.com/deck-of-cards/deck-of-cards">Deck of Cards</a> javascript library, which I overrode and extended to create the gameplay. I've included some aspects of progressive web apps like installability and a service worker for offline caching.
<div>
<img loading="lazy" class="image-medium" src="assets/freecell1.jpg" alt="Freecell app at the beginning of a game" />
<img loading="lazy" class="image-medium" src="assets/freecell2.jpg" alt="Freecell app after completing a game" />
</div>
</div>
</details>
<!-- waves -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_jueves.png" alt="" />
<h2>waves</h2>
</div>
<span><a href="https://legnes.github.io/jueves">live project</a></span>
<span><a href="https://github.com/legnes/jueves">source</a></span>
</div>
</summary>
<div class="dropdown-content">
I built Waves for the Global Game Jam 2017. It's an expansion and variation on <a href="https://threejs.org/examples/#webgl_gpgpu_water">an existing threejs demo</a>. No small part of the ~40 hours of jamming time went towards creating the canvas/key control scheme and tuning the simulation for gameplay (there was a nasty energy leak from the input model). I stuck with pretty simple shader logic, but I ended up happy with the look!
<div>
<img loading="lazy" class="image-medium" src="assets/jueves1.jpg" alt="Waves app. A small ball floating on a bumpy, ethereal surface." />
</div>
</div>
</details>
<!-- l systems -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_robotanical.png" alt="" />
<h2>robotanical</h2>
</div>
<span><a href="https://legnes.github.io/robotanical">live project</a></span>
<span><a href="https://github.com/legnes/robotanical">source</a></span>
</div>
</summary>
<div class="dropdown-content">
My friend <a href="https://ethanmedwards.wixsite.com/portfolio">Ethan Edwards</a> and I collaborated on Robotanical for ProcJam 2016, although busy schedules left it in a somewhat unfinished state. The idea was to cultivate a virtual garden of plants grown by Lindenmayer systems and parsed into svg by a turtle renderer. The version here demonstrates the core functionality by implementing a few well-known L-systems.
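An L-system is just repeated string rewriting: each symbol is replaced by its production rule, and a turtle then interprets the result (F = move forward, +/- = turn) to emit svg segments. A sketch, using the well-known Koch curve rules rather than any of Robotanical's specific grammars:

```javascript
// Rewrite an L-system axiom `iterations` times. Symbols without a rule
// (like + and -) pass through unchanged.
function rewrite(axiom, rules, iterations) {
  let s = axiom;
  for (let i = 0; i < iterations; i++) {
    s = [...s].map((ch) => rules[ch] ?? ch).join("");
  }
  return s;
}

// Koch curve: every forward segment becomes a square bump.
const koch = rewrite("F", { F: "F+F-F-F+F" }, 2);
```

The exponential growth of these strings is why even a few iterations produce the dense fractal foliage in the screenshot.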
<div>
<img loading="lazy" class="image-medium" src="assets/robotanical1.jpg" alt="Robotanical app. Several fractals and branching plant-like structures arranged in a grid." />
</div>
</div>
</details>
<!-- walk -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_walk.png" alt="" />
<h2>walk</h2>
</div>
<span><a href="https://legnes.github.io/walk">live project</a></span>
<span><a href="https://github.com/legnes/walk">source</a></span>
</div>
</summary>
<div class="dropdown-content">
Walk is an animation/experiment that attempts to fill a discretized grid by propagating a random(ish), non-self-intersecting path through it. Since the odds against that are pretty high, Walk is more like a movie of a computer painting itself into a corner over and over. Try changing the step pattern!
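The core loop is a self-avoiding random walk: from the current cell, pick a random step that stays on the grid and hasn't been visited; when no step exists, the walk is stuck. A sketch, using the four cardinal neighbors as the step pattern (the app also offers others, like knight moves):

```javascript
// Self-avoiding random walk on a size x size grid, starting at (0, 0).
// Returns the path walked before getting stuck (or filling the grid).
function walk(size, steps = [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
  const visited = new Set(["0,0"]);
  let [x, y] = [0, 0];
  const path = [[x, y]];
  while (true) {
    // candidate cells: on the grid and never visited before
    const options = steps
      .map(([dx, dy]) => [x + dx, y + dy])
      .filter(([nx, ny]) =>
        nx >= 0 && ny >= 0 && nx < size && ny < size && !visited.has(`${nx},${ny}`));
    if (options.length === 0) return path; // painted into a corner
    [x, y] = options[Math.floor(Math.random() * options.length)];
    visited.add(`${x},${y}`);
    path.push([x, y]);
  }
}
```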
<div>
<img loading="lazy" class="image-medium" src="assets/walk1.jpg" alt="Walk app. Continents of pink pixels on a white background sit above the numbers 8, 4, 2, and an icon of a chess knight." />
</div>
</div>
</details>
<!-- phenomenol -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_phenomenol.png" alt="" />
<h2>phenomenol</h2>
</div>
<span><a href="https://legnes.github.io/phenomenol">live project</a></span>
<span></span>
</div>
</summary>
<div class="dropdown-content">
I should probably explain the controls to you now. When I made Phenomenol back in 2014, I wanted it to be a sort of 3d line rider. There are two third-person points of view from which you can draw platforms in the space (mouse to draw, z to toggle pov). While in this drawing mode, you can control (using wasd) a marker that determines how far away from the camera the platforms will be placed. The idea is to place platforms so that the first-person character (wasd + mouse) can, by traversing them, collect the spinning cubes around the level. If you run out of platforms, don't worry; in first-person mode you can shoot (lmb) projectiles to collect/recall them. Last thing: those blue boxes on the ground in each corner will launch you into the air, which can be helpful. Phenomenol came out as a tedious and unforgiving game, but if you approach it with patience it ~can~ be (strangely) compelling.
NB: Phenomenol was originally made in a much older version of unity, and rebuilding it for the post-flash era introduced some strangeness. Then again, it was pretty strange to begin with...
<div>
<img loading="lazy" class="image-medium" src="assets/phenomenol1.jpg" alt="Phenomenol game. Neon platforms float in a stark drab room." />
</div>
</div>
</details>
<!-- eprnd -->
<details>
<summary>
<div class="summary-row">
<div>
<img loading="eager" class="image-thumb" src="assets/thumb_eprnd.png" alt="" />
<h2>epr&d</h2>
</div>
<span><a href="https://legnes.github.io/eprnd">live project</a></span>
<span></span>
</div>
</summary>
<div class="dropdown-content">
For the Global Game Jam 2014, I teamed up with my friend <a href="https://ethanmedwards.wixsite.com/portfolio">Ethan Edwards</a> to make EPR&D. There's always a theme for GGJ, and in 2014 it was "We don't see things as they are, we see them as we are". I'd had this idea in my head for a while about a game whose mechanic only works when you can't see it work, and we decided to go with that.
NB: This is a rebuild of an old unity project and as such may have even more bugs than it did originally.
<div>
<img loading="lazy" class="image-medium" src="assets/eprnd1.jpg" alt="EPRnD game. An eyeball, a computer, walls, and a box sitting on a computer keyboard." />
</div>
</div>
</details>
</main></body>
</html>