The future of scientific software development will be cloud-based, with
+apps built on web technologies rather than platform-specific (“native”)
+applications, despite recent advances in mobile computing hardware.
+Advances in computing tools and languages are already changing science,
+for example by improving the reproducibility of results and making
+collaboration easier. These same tools are moving development itself
+into the cloud and migrating the community toward web-based technologies
+and away from native apps and frameworks.
+
+
Mobile development for the scientific community now means programming on
+a laptop since there are very few scientific tools available on tablets
+and phones. “Mobile” in the everyday sense refers to, of course,
+smartphones and tablets. Eventually, scientific programming will move to
+these mobile platforms. I’m thinking of a tablet that can perform
+analysis, run a notebook environment, or even run certain kinds of
+simulations. You will be able to hook it up to measurement devices[1] or
+controllers. At conferences, you will be able to answer questions by
+running your actual simulation live with different variables and show it
+to someone. There is a lot of great desktop-class software, proprietary
+and open-source, that powers science today. None of this will be a part
+of the mobile future. It will all be done in the cloud and with web
+technologies.
+
+
The discussion around native versus web technology frameworks is already
+robust in programming circles[2], so I approach the topic as a
+researcher looking for mobile and cross-platform solutions. I try to
+answer these two questions:
+
+
+
What does software development look like in the future for science?
+
How are existing cross-platform and mobile frameworks shaping the future of scientific development?
+
+
+
I briefly describe the problems of the current fragmented ecosystem, how
+that ecosystem is converging on open-source tools, and then how the
+emerging cloud-based computing paradigm will shape scientific computing
+on mobile devices.
+
+
The fragmented ecosystem
+
+
The trajectory of scientific programming is interesting because it seems
+to be converging on a few tools from a historically fragmented and
+siloed ecosystem. Chemists, for example, use their particular flavors of
+modeling and analysis software (like Gaussian or ORCA), and Fortran is
+used for much of climate science. The fragmentation makes sense because
+of the wide range of applications that scientific programming must
+serve, including modeling, analysis, visualization, and instrument
+control. Furthermore, scientists are often not trained in programming,
+leading to large gaps in ability even within a single laboratory.
+
+
These factors lead to several problems and realities within the
+scientific programming community. These include:
+
+
+
+
Code that is often not reusable or readable across (or within) scientific disciplines. An example of this is the graduate student who writes software for their project, which nobody knows how to modify after they leave.
+
+
+
Domain-specific applications that inhibit cross-disciplinary collaboration. This includes proprietary software that, while effective, is not shareable because of cost or underutilization. Barriers to entry also exist because only a subset of people learn how to use a particular piece of software and would-be collaborators use something different.
+
+
+
Complicated old code that stalls development. Changing an old code base is a monumental task because the expertise that created the code has moved on. This is often the case with complex and large code bases that work, but nobody knows how. Making changes or sharing can require a complete rewrite.
+
+
+
+
The problems are more apparent today because the frontiers of science
+are increasingly cross-disciplinary. Without shareable and reusable
+code, there is considerable friction when trying to collaborate[3].
+
+
Convergence to open-source tools
+
+
Several technologies are now maturing and their convergence is solving
+some of these problems. The transition will take a long time —
+decades-old code bases need to be rewritten and new libraries need to be
+built — but I expect the scientific programming landscape to be very
+different ten years from now.
+
+
The widespread adoption of Python, R, and Jupyter in the scientific
+community has solved many of the readability and shareability
+problems[4]. Many projects now bundle Jupyter notebooks to demonstrate
+how the code works. Python is easy to read, easy to write, and
+open-source, making it an obvious choice for many to replace proprietary
+analysis software. The interactive coding environment of Jupyter is also
+having a major impact on scientific coding. Someone reading a
+scientific paper no longer has to take the author’s word that the
+modeling and analysis are sound; they can go on GitHub and run the
+software themselves.
+
+
One level above programming languages are the apps for developing
+scientific software and doing analysis. There are a lot of apps out there, but a
+major component of development will use web technologies because of
+their inherent interoperability. Jupyter notebooks, for example, can be
+opened in the browser, meaning anyone can create and share something
+created in Jupyter without obscure or proprietary software. Jupyter can
+now also be used in Visual Studio Code,
+the popular, flexible, and rapidly-improving editor that is based on the
+web-technology platform, Electron.
+
+
The growing popularity of web technologies in science foreshadows the
+biggest change on the horizon: the move to cloud-based computing.
+
+
Cloud-based computing for science
+
+
Mobile devices are finally powerful and flexible enough that most
+people’s primary computing device is a smartphone. If this is the case,
+then one might think that they must be powerful enough for scientific
+applications. So, where are all of these great tools?
+
+
Ever since the iPad Pro came out in 2018[5], I have been searching for
+ways to fit it into my research workflow. So far, the best use case for
+it is reading and annotating journal articles. This is great, but it is
+nowhere near the mobile computing workstation I outlined above. The
+reason I still cannot do analysis or share a simulation on an iPad is
+that Python, Jupyter, an editor, graphing software, etc. are not
+available for it — and my iPad is faster and more powerful (in many
+respects) than my Mac[6].
+
+
As I look around for solutions, it seems that the answer is to wait for
+cloud-based development to mature. Jupyter already has notebooks in the
+cloud via JupyterHub. A
+service called Binder promises to host notebook
+repositories and make code “immediately reproducible by anyone,
+anywhere”. GitHub will soon debut its
+Codespaces cloud platform, and
+the community around Julia (a promising open-source scientific
+programming language) has put its resources into Jupyter and VS Code.
+Julia Computing has also introduced JuliaHub, Julia’s
+answer to cloud computing. Legacy tools for science trying to stay
+relevant are also moving to the cloud (see MATLAB in the cloud,
+Mathematica Online, etc.). Any app or platform that does not make the
+move will likely become irrelevant as code bases transition.
+
+
There are no mobile-first solutions from any of the major players in
+scientific software despite the incredible progress in mobile
+hardware[7]. Today I can write and run my software in a first-generation
+cloud-based environment or switch to my traditional computing
+workstation.
+
+
Conclusion
+
+
What lies ahead for scientific programming? Maybe Julia will continue
+its meteoric trajectory and become the de facto programming language
+for science, and scientific papers will arrive with Jupyter
+notebooks attached. Maybe code will become so easy to share and reuse that the
+niche and proprietary software that keeps the disciplines siloed will
+become obsolete. These would be huge changes for the scientific
+community, but I think any of these kinds of changes in the software
+space are compounded by the coming cloud computing shift. Scientific
+development will happen in the cloud and code will be more reproducible
+and shareable than it is today as a result.
+
+
This future is different from the mobile computing world that I
+imagined, where devices would shrink and simultaneously become powerful
+enough that a thin computing slab empowered by a suite of on-device
+scientific tools could fulfill most of my computing needs. Instead, the
+mobile device will become a window to servers that will host my
+software. Reproducible and reusable code will proliferate as a result,
+but where does that leave the raw power of mobile computing devices?
+
+
+
+
+
+
+
This just became possible with the Moku devices coming out of Liquid Instruments. ↩
Katharine Hyatt describes these problems in the first few minutes of an excellent talk on using Julia for Quantum Physics. ↩
+
+
+
Another potential avenue for convergence is the ascent of the Julia open-source programming language, which promises to replace both high-performance code and higher-level analysis software while making code reuse easy and natural. The language is still far from any sort of standard, but there are promising examples of its use. ↩
+
+
+
The iPad is, unfortunately, the only real contender in the mobile platform space. The Android ecosystem has not yet come up with a serious competitor that matches the performance of the iPad. ↩
+
+
+
Specifically, Apple does not allow code execution on its mobile operating systems. ↩
The modern news cycle is a periodic deluge. I don’t get the sense that the James Webb Space Telescope launch has hit the public in the same way that the Hubble did. It seems like everyone moved on pretty quickly. I can’t help but keep going back to the Webb images and looking at them in awe; modern computer displays give a far better viewing experience than anything available to those who saw the Hubble images for the first time.
Martin Rees talks to The Economist’s Alok Jha on existential risks to civilization and the
+importance of science and science communication in the 21st century running up to his new book
+coming out this November (I already pre-ordered).
+
+
There is a constant buzz on Twitter about the pains of academic research.
+Martin Rees agrees that aspects of university research need to be changed.
+Administrative bloat and scientists staying in their positions past retirement age discourage blue-sky research and gum up the promotion pipeline.
+He criticises the scope of the UK’s ARIA (Advanced Research and Invention Agency) program, which is supposed to function similarly to the US’s high-risk, high-reward DARPA (Defense Advanced Research Projects Agency):
+
+
+
In that perspective, it’s just a sideshow.
+ The ministers say this is a wonderful way in which scientists can work in a long-term way on blue skies research without too much administrative hassle.
+ They’d be doing far more good if they reduced the amount of such administrative hassle in those who are supported by UKRI,
+ which is supporting fifty times as much research as ARIA will ever do.
+
+
+
Science in the last ten years or so, I feel, has really gotten bogged down.
+I agree that blue-sky thinking has sort of gone out of fashion.
+How much this is a function of perverse publishing incentives, administrative hurdles, or the constant firehose of publications to keep up with, I don’t know.
+I’m glad a prominent and highly respected figure in the science community is calling out the inefficiencies and problems in the way science is practiced.
This is a bit of an old post, but one that I liked a lot considering that I am also in the business of manufacturing quasiparticles (mine are polaritons).
+It’s fascinating that the quasiparticles that appear because of material excitations can be described using many of the same models as “real” elementary particles.
I have seen a few articles in the press this year on sewage monitoring for tracking disease and the health of a city. Sara Reardon, writing for Scientific American, reports on how wastewater monitoring has been taken up as a tool by the CDC and local communities for tracking COVID and other diseases in the US. The impact of wastewater data aggregation and analysis could be huge, in both positive and negative directions. It strikes me that governments are largely reactive to changes in public health. Little attention is paid to preventative measures. This could change that.
+
+
Thinking more broadly, I think this tool has much greater potential than disease tracking. Combining wastewater data with other inputs could be a monumental shift in understanding the health of a community on quite a granular level — both in terms of what substances are circulating in a community and the potential for real-time fidelity. You can imagine wastewater data being combined with data from hospitals, air quality monitoring, or even news of major events affecting the mental health of a city.
Geoff Anders of Leverage Research, a non-profit that writes scientific papers without publishing them in peer-reviewed journals (so it seems), writes in Palladium Magazine a brief summary of the role of science through the ages. His overall theme is clear from the section headers. He sees science as going from largely an endeavor of wealthy individuals to one that obtained authority from the state. I quibble with parts of Anders’s historical narrative, but there are some good ideas in his conclusion.
+
+
I’m not sure how big of a phase “science as a public phenomenon” was. He makes it seem like science was a circus show in the 16th, 17th, and parts of the 18th century. I think this is an exaggeration, but I’m not a science historian (and neither is he). Anders also relies too heavily on a single instance of science being used authoritatively (King Louis XVI’s commission to investigate Franz Mesmer’s methods of apparent hypnosis) to make the case that science had become broadly authoritative. This strikes me as a weak way to make the argument. I would be hesitant to say science has ever had the authority he seems to imply that it had. A massive influence? Definitely. A justification? Probably, and sometimes a scapegoat. But I wouldn’t call it an “authority”. I think if one were to use that word it needs a bit more context, which Anders does not provide.
+
+
His section on science and the state overemphasizes military technologies and glosses over quality of life improvements that raised large parts of the global population out of abject poverty (see Bradford DeLong’s excellent grand narrative, Slouching Towards Utopia published recently).
+
+
The one area where I think Anders has something going for him is his conclusion. The scientific community is at some sort of crossroads in terms of funding and elements of how it is structured. I see a lot of complaints that funding for blue-sky ideas is drying up and there are reports that hiring is becoming difficult. Some of this likely has to do with how universities are funded — and that is a whole other can of worms.
+
+
I like Anders’s idea of splitting science into two camps: exploratory science and settled science. At first glance one might say that science is exploratory and that the settled part is taken care of by applied scientists and engineers, but I think Anders’s argument is more subtle. He says that funding might be restructured so that exploratory science is decentralized and career tracks are split into “later-stage” and “earlier-stage” science. I would take this idea further. First, an exploratory wing of science would have the explicit mission of taking big risks with the expectation of failure — and failures should be reported and praised. This wing would be analogous to the US’s DARPA initiative.
+
+
Second, a later-stage wing of science wouldn’t have to feign novelty where none exists. It would be free to solidify existing science[1]. Maybe it could bundle a few studies from the exploratory stage and make that science robust, ready to pass the baton to the applied scientists and engineers.
+
+
I think the separation into early-phase and late-phase science would be a boon for the scientific endeavor. It would strengthen the pipeline from basic science to societal improvements. It would also clarify the mission of any given scientific project. Having a later-stage project would carry just as much importance as an exploratory project within its domain, and exploratory labs would be free to try out pie-in-the-sky ideas without fear of blowback from funding agencies.
+
+
+
+
+
I’m not sure what this has to do with Anders’s idea of “don’t trust the science”. I think he is throwing a bunch of ideas together without a clear thread (what does decentralization get you besides being hip with the crypto crowd?), but there are some nuggets in here worth thinking about. ↩
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/2022/10/26/Jumping-over-the-time-to-first-plot-problem-in-Julia.html b/2022/10/26/Jumping-over-the-time-to-first-plot-problem-in-Julia.html
new file mode 100644
index 0000000..39955a7
--- /dev/null
+++ b/2022/10/26/Jumping-over-the-time-to-first-plot-problem-in-Julia.html
@@ -0,0 +1,141 @@
I’ve been using Julia for about a year now after moving my entire workflow
+from Python.
+When I sometimes revisit Python I am so glad I made the switch. No regrets whatsoever. Julia still has one pain point,
+which is time to first execution (TTFX) or time to first plot (TTFP)[1].
+But even this “pain point” is somewhat bizarre because Julia is a compiled language. Of
+course there is going to be a compilation step that will make it slow to get going. What makes this a pain
+point is the desire to have it all — “we are greedy,” say the founders of the language.
+Julia wants to be interactive and dynamic, but compiled and fast.
+But the fact that it’s compiled means that when a user wants to make a simple line plot it takes two minutes to precompile the plotting library, compile the plotting functions, and finally show the plot on screen.
+Only after that initial setup are all subsequent plots instant — as long as you keep your session active. There are many programmers in the community more talented than I am, and one user in a recent Discourse thread explained the tradeoff and the difficulty in reducing compile time:
+
+
+
A tangent: I believe it is worthwhile to discuss why this is such a phenomenally big problem in julia. Julia has two very special features other languages do not share: (1) multimethods as the fundamental principle for the entirety of the ecosystem and (2) compiled code. It is very difficult to know what code you need compiled and to not discard the vast majority of already compiled code when importing new libraries that add new methods for pre-existing functions. No one has had to deal with this problem before julia. It is being slowly dealt with. Sysimages basically carry the promise that no significant amount of new methods will be defined, hence they can cache more compiled code (this is very oversimplified borderline misleading explanation).
+
+
+
That last point about sysimages is interesting. Making a sysimage in Visual Studio Code is a big workflow improvement, and I recommend all Julia users try it. It essentially compiles all the libraries from your project, and any other files you specify, and puts them into a file. I guess you could say it freezes your Julia session to use later. This is faster than precompiling each time. It’s built into the Julia extension and easy to set up. Detailed instructions are on the Julia VS Code extension website, but in a nutshell the steps are:
+
+
+
Open your project folder in VS Code with the Julia extension installed (and make sure it’s activated)
+
Make a new folder called .vscode
+
Make a file called JuliaSysimage.toml in that folder
+
Paste the [sysimage] text below this list into that file
+
Select Tasks: Run Build Task and then select Julia: Build custom sysimage for current environment
+
Check the useCustomSysimage setting in the Julia extension settings in VS Code
+
Restart the Julia REPL. (Hit the trash can button and open a new REPL session from the Command Palette)
+
+
+
Copy and paste this into a JuliaSysimage.toml file:
+
[sysimage]
+exclude=[] # Additional packages to be excluded in the system image
+statements_files=[] # Precompile statements files to be used, relative to the project folder
+execution_files=[] # Precompile execution files to be used, relative to the project folder
+
+
+
The extension automatically uses the sysimage instead of precompiling your project. And now your project should run much faster and TTFX will be significantly sped up. On my M1 iMac I use the powerful but compiler-heavy Makie plotting library, and I went from waiting about 2 minutes for precompilation and maybe 30 seconds for that first plot to almost no compile time, and execution in less than a second. (Others have properly benchmarked this; I’m not going to do that here.) I see similar results on my 2019 Intel MacBook Pro.
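+
+The extension’s build task is a front-end for the PackageCompiler.jl package, so you can build an equivalent sysimage outside VS Code. Here is a minimal sketch, with example package and file names (swap in your own):
+
+using PackageCompiler
+
+# Compile the heavy plotting stack, plus a warm-up script that exercises
+# typical plotting calls, into a reusable sysimage file.
+create_sysimage(
+    ["CairoMakie"];                                 # packages to bake in (example)
+    sysimage_path = "MakieSysimage.so",
+    precompile_execution_file = "warmup_plots.jl"   # hypothetical warm-up script
+)
+
+# Afterwards, start Julia with: julia --sysimage MakieSysimage.so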
+
+
But here’s what really got my workflow sailing. I’m a PhD student working in experimental physics. I have a lot of messy data and I need to make a ton of plots to explore that data. I have a top-level folder for my experiment.
+In there I have separate folders for raw data, daily scripts, and results/plots.
+Then I have a src folder where plotting, analysis, and file reading/writing scripts go. The files in src rarely change, so that means I can add them to the execution_files section in my JuliaSysimage.toml file. These scripts get compiled along with all my plotting packages into the sysimage. This makes everything fast. As long as I don’t change these files, their functions load instantly. The functions in these files are used in my lab_notebook files with an include() statement at the top (e.g. include("plotting_functions.jl")). For example, I have custom plotting functions and themes that make an interactive grid of plots with toggles and settings so I can look at and compare data exactly the way I want. Recreating the sysimage a couple of times a month (or even once a week) is not a big deal compared to the time savings I get every day.
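+
+As a sketch of what a daily script then looks like (the file and function names below are hypothetical stand-ins for my actual helpers), the includes and calls all run with essentially no compile lag because everything heavy is already in the sysimage:
+
+# Daily lab-notebook script. "src/plotting_functions.jl" and "src/data_io.jl"
+# are listed under execution_files in .vscode/JuliaSysimage.toml, so they are
+# compiled into the sysimage along with the plotting packages.
+include("src/plotting_functions.jl")
+include("src/data_io.jl")
+
+spectrum = load_spectrum("raw_data/2022-10-26_sample_A.csv")   # hypothetical helper from data_io.jl
+fig = plot_grid([spectrum]; theme = lab_theme())               # hypothetical helpers from plotting_functions.jl
+save_figure("results/2022-10-26_sample_A.png", fig)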
+
+
As an aside, I recommend everyone have some kind of setup like this where you reuse plotting and analysis functions, no matter what language you’re using. If you are editing these functions every day then either these scripts have not settled down yet or something isn’t quite right with the workflow. It is worth it to sit down and figure out what tools you need to build to smooth out day-to-day computational tasks instead of writing scripts from scratch each time you have to make a graph of some data. For the most part, the file format for my data is the same, so I only need a handful of plotting and data read/write functions. Once they’re written, that’s it. I can move on.
+
+
As many others have said, the time-to-first-X problem is a priority for the Julia developers. The version 1.8 update this year saw some speedups,
+and I think the expectation is that this will continue in future 1.x releases.
+These improvements to the compilation stage, both in VS Code and in the language itself, have surpassed my expectations. I thought Julia would always have an initial lag and that people would have to make hacks and workarounds. This really is exciting, and there is a lot to look forward to in Julia’s future.
+
+
+
+
+
+
+
The plotting libraries generally take the longest to precompile. ↩
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/2022/11/01/More-on-sewage.html b/2022/11/01/More-on-sewage.html
new file mode 100644
index 0000000..14898e5
--- /dev/null
+++ b/2022/11/01/More-on-sewage.html
@@ -0,0 +1,62 @@
the national reporting system for collecting and testing samples from wastewater treatment systems for Covid remains limited, uncoordinated and insufficiently standardized for a robust national surveillance system. If public health officials can’t track the data to mobilize a response to a crisis, the information that has been collected doesn’t do much good.
+
+
+
I had thought a program like this would have made it into the recently-passed infrastructure package. It would be a shame if health and safety monitoring systems (like wildfire or region-wide earthquake monitoring) were not built or strengthened in the near future.
+
+
Here are more details on sewage monitoring on Jim Al-Khalili’s excellent BBC podcast, The Life Scientific:
I’m excited to introduce my first software package written for broad use. TransferMatrix.jl is a general 4 x 4 transfer matrix implementation written in the Julia programming language. The transfer matrix method analyzes the propagation of an electromagnetic wave through a multi-layered medium. You can compute the reflectance and transmittance spectra, as well as calculate the electric field profile as a function of position within the medium.
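+
+For background, in the textbook 2 x 2 scalar version of the method (the package implements the more general 4 x 4 formalism, which also handles polarization mixing), the total matrix M is the ordered product of the interface and propagation matrices of each layer, and it relates the forward- and backward-going field amplitudes on the incident side to the transmitted amplitude. In LaTeX notation:
+
+\begin{pmatrix} E_0^+ \\ E_0^- \end{pmatrix}
+  = M \begin{pmatrix} E_t^+ \\ 0 \end{pmatrix},
+\qquad
+r = \frac{M_{21}}{M_{11}}, \qquad
+t = \frac{1}{M_{11}},
+\qquad R = |r|^2
+
+The reflectance and transmittance spectra mentioned above follow directly from r and t evaluated at each wavelength.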
+
+
+
+
I started with some simple code in Python for my own projects, sharing it with others in my lab, but it had some limitations and I was growing to love coding in Julia. I didn’t want to switch back to Python just for this one thing. I started rewriting the code in Julia on the weekends, but I didn’t just want to reimplement what I had done in Python. You see, I’ve found that there are a lot of transfer matrix implementations on the web. It seems like every grad student doing something in optics or thin films writes one, plops it on the web, and lets it get stale when they graduate. A simple 2 x 2 algorithm is not hard to write but it can’t be fully generalized. I was also frustrated that there are all of these papers that try to improve the method (transfer matrices, apparently, are still an active area of research), but the code is difficult to read, poorly documented, untested, written with poor programming practices, and abandoned.
+
+
+
+
I wanted to write something based on the latest developments that dealt with the shortfalls of the traditional transfer matrix (singularities and numerical instabilities), while being highly modular, reusable, and with great documentation and tutorials. And I wanted it in Julia to take advantage of Julia’s speed and the scientific community over there.
+
+
High modularity means that each function is as small as it can be. This makes it easy for someone to replace one or more steps with something custom to test a new idea and improve on the method in their own research. It means that it is easy to test and easy to read the code (in pure Julia).
+
+
Julia’s package manager makes it easy to install. Everything is documented and I have written an extensive tutorial — all of the code in the tutorial can be run as is.
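+
+For instance, installation from the Julia REPL is the usual one-liner (assuming the registered package name matches the repository name):
+
+using Pkg
+Pkg.add("TransferMatrix")   # then load it with `using TransferMatrix`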
+
+
Sharing and reuse is easy. You can make a config file with all of the simulation parameters (even the refractive index data from a file) and reproduce the results for that structure. You can create multiple variations easily this way and share the exact configuration that you used with others. Even complicated periodic structures are easy to make this way.
+
+
This implementation is based on the latest research in general transfer matrix methods and every piece of research that I use is cited at the function level, complete with the DOI so that you can follow everything that has been done and make precise modifications. A full list of references is also on the documentation website.
+
+
My hope is for this to be at least a first stop for someone looking for a transfer matrix algorithm. If the community likes it, then I would like this to become a part of a standard set of science or physics packages that currently exist in the Julia ecosystem. Ease of use and readability really were my priorities — there is little boilerplate code. And Julia’s speed means you can do wavelength and angle-tuning simulations to produce 2D contour plots quickly. Together with the generality of this implementation based on current research, I hope that others can use TransferMatrix.jl to try out new ideas.
Zeynep Tufekci, in an op-ed for the New York Times, ponders the implications of generative AI as we very likely enter the dawn of a new technological era.
+
+
+
Teachers could assign a complicated topic and allow students to use such tools as part of their research. Assessing the veracity and reliability of these A.I.-generated notes and using them to create an essay would be done in the classroom, with guidance and instruction from teachers. The goal would be to increase the quality and the complexity of the argument.
+
+
This would require more teachers to provide detailed feedback. Unless sufficient resources are provided equitably, adapting to conversational A.I. in flipped classrooms could exacerbate inequalities.
+
+
In schools with fewer resources, some students may end up turning in A.I.-produced essays without obtaining useful skills or really knowing what they have written.
+
+
+
I 100% agree. The coming sophisticated AI will demand that schools use more labor-intensive teaching methods.
+She is also right that this will exacerbate inequalities unless we as a society put a lot more money into our education systems.
This piece by Judith Enck, a former EPA regional administrator, and Jan Dell, a chemical engineer, in the Atlantic highlights three main problems with plastic recycling.
+
+
+
The large number of types of plastics makes sorting and recycling difficult.
+
+
Just one fast-food meal can involve many different types of single-use plastic, including PET#1, HDPE#2, LDPE#4, PP#5, and PS#6 cups, lids, clamshells, trays, bags, and cutlery, which cannot be recycled together.
+
+
+
Processing plastic waste is toxic and wasteful.
+
+
Unlike metal and glass, plastics are not inert. Plastic products can include toxic additives and absorb chemicals, and are generally collected in curbside bins filled with possibly dangerous materials such as plastic pesticide containers. According to a report published by the Canadian government, toxicity risks in recycled plastic prohibit “the vast majority of plastic products and packaging produced” from being recycled into food-grade packaging.
+
+
+
Recycling plastic is not economical.
+
+
Yet another problem is that plastic recycling is simply not economical. Recycled plastic costs more than new plastic because collecting, sorting, transporting, and reprocessing plastic waste is exorbitantly expensive. The petrochemical industry is rapidly expanding, which will further lower the cost of new plastic.
+
+
+
+
+
In addition, there is a growing body of evidence showing that plastics break down into microplastics that permeate the environment, and humans and animals end up ingesting them. There are microplastics in all corners of the earth and researchers have been trying to understand their effects on human health.
+
+
This is frankly alarming, and I’ve been more and more shocked every year since the plastics issue started making it into the mainstream press. I’ve been cutting down on the amount of plastic goods I purchase, and several years ago I stopped storing food in plastic containers, opting for metal or glass. Last year, I stopped buying clothing made from synthetic materials (as much as I can), since clothing releases a huge amount of microplastics into the water supply in every wash. Now I buy cotton, wool, and linen clothing almost exclusively. I find myself paying more attention to the materials of pretty much every product I plan to purchase. It definitely feels like an uphill battle because of the sheer amount of plastic that is reported to be in our surroundings.
The New York Times just released New York Times Audio, an app for “audio journalism”. It curates all of the New York Times podcasts (including a new daily podcast called “Headlines”) as well as podcasts from third parties, like Foreign Policy and This American Life. It will also include audio versions of written articles.
+
+
+I think it will be difficult for it to penetrate the well-established spoken-word market. Podcasts are dominated by Apple Podcasts, and Spotify has had a hard time turning podcasting into a core part of its business. I can see NYT Audio being a niche product that appeals to a small subset of NYT subscribers, but not much more. I’m guessing the goal is to charge a fee for third parties to access NYT subscribers. I don’t really see how this app would generate more revenue from existing subscribers, both because I don’t see huge numbers using the app and because podcasts are traditionally free and use open web standards. Again, see Spotify’s and others’ attempts to make proprietary podcasting formats.
+
+
I’ll try the app, but I don’t see it becoming a habit. Overcast is already on my Home Screen and adding another podcast app is a tall order. If I find something I like, I will most likely just add it to a playlist in Overcast.
+
+
+
+
+
\ No newline at end of file
diff --git a/2023/06/23/newer-Macs-support-for-lossless-audio.html b/2023/06/23/newer-Macs-support-for-lossless-audio.html
new file mode 100644
index 0000000..018b0fd
--- /dev/null
+++ b/2023/06/23/newer-Macs-support-for-lossless-audio.html
@@ -0,0 +1,99 @@
+
+
+
+
+ Garrek.org
+
+
+
+
+
+
+
+
I bought the Blue Mo-Fi headphones shortly after they came out in 2014.
+They are great headphones, but the fake leather on the ear pads has almost completely flaked off, and now
+that Blue has been bought by Logitech, which is killing the Blue mic brand, there is little hope of getting replacement parts or repairing them in the future (I have tried).
+Besides, they didn’t have stellar reviews when they came out, and now I’m getting into high-fidelity audio.
+
+
Wading through the online world of audiophile hardware was making me consider buying a DAC and amp in addition to new headphones, but then I found Apple’s Support pages for lossless audio and it appears that Apple Silicon Macs not only support lossless audio output, but also have a built-in DAC and amp that can drive high-impedance headphones. That solves that problem. I’ll just buy some entry-level audiophile headphones and go.
+The built-in hardware is probably not as sophisticated as dedicated hardware, but I doubt I’ll ever be that into the highest-end audio equipment. There are other things to be obsessed about.
+
+
I can’t find any information on the built-in DAC or amplifier in System Information, but the
+Audio MIDI Setup app (comes with macOS) allows you to select the input and output sample rate and other settings.
“The entire Apple Music catalog is encoded in ALAC in resolutions ranging from 16-bit/44.1 kHz (CD Quality) up to 24-bit/192 kHz.”
+
+
+
Supported on iPhone, iPad, Mac, HomePod, Apple TV 4K (not greater than 48 kHz), and Android.
+
+
This page says only the 14-inch and 16-inch MacBook Pros support native playback up to 96 kHz, but
+I think this is outdated because the other support pages all say otherwise.
“To set the sample rate for the headphone jack, use the Audio Midi Setup app, which is located in the Utilities folder of your Applications folder. Make sure to connect your device to the headphone jack. In the sidebar of Audio MIDI Setup, select External Headphones, then choose a sample rate from the Format pop-up menu. For best results, match the sample rate for the headphone jack with the sample rate of your source material.”
The Economist has moved all of its podcasts behind a paywall except for its daily news show The Intelligence.
+This is disappointing because I somewhat regularly would share episodes with friends who don’t subscribe to the newspaper.
+It also means that the analogy of podcasts being like radio that you download no longer holds.
+Another idea recedes into the past.
+I get the move — the advertising market is drying up and many independent podcasts are moving toward membership models.
+Spotify has pushed the industry towards subscription and now Apple Podcasts has gotten on board.
+Still, if The Economist is going to move its content behind a paywall, at least they have done it the right way.
+You can still use any podcast player to listen to shows.
+They provide a subscriber RSS feed in addition to hooking into the subscriber features of the big podcast apps. This is definitely the way to go, and I’m glad they are continuing to use RSS instead of inventing their own format.
+
+
Previously I posted about The New York Times launching its own audio app.
+I still think this is doomed to fail. They have since launched more shows that are available only on the app.
+I don’t know their numbers, but I suspect they won’t see a lot of growth in the long run.
+Thinking big picture, the internet is now going through a phase of decentralization.
+This is most apparent in the social media space with the rise of new microblogging platforms like Mastodon and Threads — and more importantly ActivityPub which allows them all to interconnect — and the slowly disintegrating Twitter/X.
+Podcasts have always used use-it-anywhere RSS feeds and I don’t see that changing any time soon.
+
+
+
+
+
\ No newline at end of file
diff --git a/404.html b/404.html
new file mode 100644
index 0000000..fead809
--- /dev/null
+++ b/404.html
@@ -0,0 +1,68 @@
+All of the CSS and HTML — layout, fonts, colors (like 'em or not) — on this site are my own design.
+Simple, fast, and a careful selection of shades of blue.
+The only JavaScript is the pop-over side menu.
+No tracking.
+The site is built with Jekyll and hosted on Github Pages.
+
+
+
+Being a tech enthusiast, I use and try lots of software.
+Here are some of my favorites:
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/contact/index.html b/contact/index.html
new file mode 100644
index 0000000..3644cf4
--- /dev/null
+++ b/contact/index.html
@@ -0,0 +1,64 @@
+Send me your comments regarding what I write on this site.
+The social web is a bit of a mess right now, so there isn't one public place to comment on posts.
+My username is @GarrekStemo on most social media platforms.
+
Previously I posted about The New York Times launching its own audio app.
+I still think this is doomed to fail. They have since launched more shows that are available only on the app.
+I don’t know their numbers, but I suspect they won’t see a lot of growth in the long run.
+Thinking big picture, the internet is now going through a phase of decentralization.
+This is most apparent in the social media space with the rise of new microblogging platforms like Mastodon and Threads — and more importantly ActivityPub which allows them all to interconnect — and the slowly disintegrating Twitter/X.
+Podcasts have always used use-it-anywhere RSS feeds and I don’t see that changing any time soon.
]]>Apple Silicon Macs have a DAC that supports high-impedance headphones2023-06-23T00:00:00+00:002023-06-23T00:00:00+00:00https://garrek.org/2023/06/23/newer-Macs-support-for-lossless-audioI bought the Blue Mo-Fi headphones shortly after they came out in 2014.
+They great headphones, but the fake leather on the ear pads have almost completely flaked off and now
+that Blue has been bought by Logitech and is killing the Blue mic brand there is little hope of trying to get replacement parts or repair them in the future (I have tried).
+Besides, they didn’t have stellar reviews when they came out and now I’m getting into high-fidelity audio.
+
+
Wading through the online world of audiophile hardware was making me consider buying a DAC and amp in addition to new headphones, but then I found Apple’s Support pages for lossless audio and it appears that Apple Silicon Macs not only support lossless audio output, but also have a built-in DAC and amp that can drive high-impedance headphones. That solves that problem. I’ll just buy some entry-level audiophile headphones and go.
+The built-in hardware is probably not as sophisticated as dedicated hardware, but I doubt I’ll ever be that into the highest-end audio equipment. There are other things to be obsessed about.
+
+
I can’t find any information on the built-in DAC or amplifier in System Information, but the
+Audio MIDI Setup app (comes with macOS) allows you to select the input and output sample rate and other settings.
“The entire Apple Music catalog is encoded in ALAC in resolutions ranging from 16-bit/44.1 kHz (CD Quality) up to 24-bit/192 kHz.”
+
+
+
Supported on iPhone, iPad, Mac, HomePod, Apple TV 4K (not greater than 48 kHz), and Android.
+
+
This page says only the 14-inch and 16-inch MacBook Pros support native playback up to 96 kHz, but
+I think this is outdated because the other support pages all say otherwise.
“To set the sample rate for the headphone jack, use the Audio Midi Setup app, which is located in the Utilities folder of your Applications folder. Make sure to connect your device to the headphone jack. In the sidebar of Audio MIDI Setup, select External Headphones, then choose a sample rate from the Format pop-up menu. For best results, match the sample rate for the headphone jack with the sample rate of your source material.”
]]>The New York Times Makes a Podcast-like App2023-05-23T00:00:00+00:002023-05-23T00:00:00+00:00https://garrek.org/2023/05/23/The-New-York-Times-Makes-a-Podcast-like-AppThe New York Times just released New York Times Audio, an app for “audio journalism”. It curates all of the New York Times podcasts (including a new daily podcast called “Headlines”) as well as podcasts from third parties, like Foreign Policy and This American Life. It will also include audio versions of written articles.
+
+
I think it will be difficult to penetrate the pretty well established spoken-word market. Podcasts are dominated by Apple Podcasts, and Spotify has had a hard time turning podcasting into a core part of its business. I can see NYT Audio being a niche product that appeals to a small subset of NYT subscribers, but not much more. I’m guessing the goal is to charge a fee for third parties to access NYT subscribers. I don’t really see how this app would generate more revenue from existing subscribers both because I don’t see huge numbers using the app and because podcasts are traditionally free and use open web standards. Again, see Spotify’s and other attempts to make proprietary podcasting formats.
+
+
I’ll try the app, but I don’t see it becoming a habit. Overcast is already on my Home Screen and adding another podcast app is a tall order. If I find something I like, I will most likely just add it to a playlist in Overcast.
]]>Plastics Are Almost All Downside2022-12-29T00:00:00+00:002022-12-29T00:00:00+00:00https://garrek.org/2022/12/29/Plastics-are-almost-all-downsideThis piece by Judith Enck, a former EPA regional administrator, and Jan Dell, a chemical engineer, in the Atlantic highlight three main problems with plastic recycling.
+
+
+
The large number of types of plastics make sorting and recycling difficult.
+
+
Just one fast-food meal can involve many different types of single-use plastic, including PET#1, HDPE#2, LDPE#4, PP#5, and PS#6 cups, lids, clamshells, trays, bags, and cutlery, which cannot be recycled together.
+
+
+
Processing plastic waste is toxic and wasteful.
+
+
Unlike metal and glass, plastics are not inert. Plastic products can include toxic additives and absorb chemicals, and are generally collected in curbside bins filled with possibly dangerous materials such as plastic pesticide containers. According to a report published by the Canadian government, toxicity risks in recycled plastic prohibit “the vast majority of plastic products and packaging produced” from being recycled into food-grade packaging.
+
+
+
Recycling plastic is not economical.
+
+
Yet another problem is that plastic recycling is simply not economical. Recycled plastic costs more than new plastic because collecting, sorting, transporting, and reprocessing plastic waste is exorbitantly expensive. The petrochemical industry is rapidly expanding, which will further lower the cost of new plastic.
+
+
+
+
+
In addition, there is a growing body of evidence showing that plastics break down into microplastics that permeate the environment, and humans and animals end up ingesting them. There are microplastics in all corners of the earth and researchers have been trying to understand their effects on human health.
+
+
This is frankly alarming, and I’ve been more and more shocked every year since the plastics issue has been making it into the mainstream press. I’ve been cutting down on the amount of plastic goods I purchased and several years ago I stopped storing food in plastic containers, opting for metal or glass. Last year, I stopped buying clothing made from synthetic materials (as much as I can), since clothing releases a huge amount of microplastics into the water supply in every wash. Now I buy cotton, wool, and linen clothing almost exclusively. I find myself paying more attention to the materials of pretty much every product I plan to purchase. It definitely feels like an uphill battle because of the sheer amount of plastic that is reported to be in our surroundings.
]]>Classrooms will need to change with generative AI2022-12-16T00:00:00+00:002022-12-16T00:00:00+00:00https://garrek.org/2022/12/16/Classrooms-need-to-change-with-generative-AIZeynep Tufekci, in an op-ed for the New York Times, ponders the implications of generative AI as we very likely enter the dawn of a new technological era.
+
+
+
Teachers could assign a complicated topic and allow students to use such tools as part of their research. Assessing the veracity and reliability of these A.I.-generated notes and using them to create an essay would be done in the classroom, with guidance and instruction from teachers. The goal would be to increase the quality and the complexity of the argument.
+
+
This would require more teachers to provide detailed feedback. Unless sufficient resources are provided equitably, adapting to conversational A.I. in flipped classrooms could exacerbate inequalities.
+
+
In schools with fewer resources, some students may end up turning in A.I.-produced essays without obtaining useful skills or really knowing what they have written.
+
+
+
I 100% agree. The coming sophisticated AI will demand that schools use more labor-intensive teaching methods.
+She is also right that this will exacerbate inequalities unless we as a society put a lot more money into our education systems.
Introducing TransferMatrix.jl (2022-11-04)

I'm excited to introduce my first software package written for broad use. TransferMatrix.jl is a general 4 x 4 transfer matrix implementation written in the Julia programming language. The transfer matrix method analyzes the propagation of an electromagnetic wave through a multi-layered medium. You can compute the reflectance and transmittance spectra, as well as calculate the electric field profile as a function of position within the medium.
+
+
+
+
I started with some simple code in Python for my own projects, sharing it with others in my lab, but it had limitations and I was growing to love coding in Julia. I didn't want to switch back to Python for just this one thing. I started rewriting the code in Julia on the weekends, but I didn't want to simply reimplement what I had done in Python. You see, I've found that there are a lot of transfer matrix implementations on the web. It seems like every grad student doing something in optics or thin films writes one, plops it on the web, and lets it go stale when they graduate. A simple 2 x 2 algorithm is not hard to write, but it can't be fully generalized. I was also frustrated that there are all of these papers that try to improve the method (transfer matrices, apparently, are still an active area of research), but the code is difficult to read, poorly documented, untested, poorly written, and abandoned.
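For contrast, here is roughly what that simple 2 x 2 method looks like: a minimal, normal-incidence sketch I am including purely for illustration, not TransferMatrix.jl's general 4 x 4 algorithm.

using LinearAlgebra  # for the identity matrix I

# Minimal 2 x 2 transfer matrix, normal incidence, isotropic layers only.
function transfer_2x2(ns, ds, λ)
    # ns: refractive indices [incident medium, layer 1, ..., exit medium]
    # ds: interior layer thicknesses, same units as λ (length(ns) - 2 of them)
    function interface(n1, n2)
        r = (n1 - n2) / (n1 + n2)   # Fresnel reflection coefficient
        t = 2n1 / (n1 + n2)         # Fresnel transmission coefficient
        (1 / t) * [1 r; r 1]
    end
    M = Matrix{ComplexF64}(I, 2, 2)
    for j in eachindex(ds)
        δ = 2π * ns[j+1] * ds[j] / λ   # phase accumulated crossing layer j
        M *= interface(ns[j], ns[j+1]) * [exp(-im * δ) 0; 0 exp(im * δ)]
    end
    M *= interface(ns[end-1], ns[end])
    R = abs2(M[2, 1] / M[1, 1])                          # reflectance
    T = abs2(1 / M[1, 1]) * real(ns[end]) / real(ns[1])  # transmittance
    return R, T
end

# Example: a 100 nm film with n = 2.0 on glass, probed at 500 nm.
R, T = transfer_2x2([1.0, 2.0, 1.5], [100e-9], 500e-9)

At oblique incidence, or with anisotropic or magneto-optic layers, this simple picture breaks down, which is exactly where the general 4 x 4 formalism comes in.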
+
+
+
+
I wanted to write something based on the latest developments that dealt with the shortfalls of the traditional transfer matrix (singularities and numerical instabilities), while being highly modular, reusable, and with great documentation and tutorials. And I wanted it in Julia to take advantage of Julia’s speed and the scientific community over there.
+
+
High modularity means that each function is as small as it can be. This makes it easy for someone to replace one or more steps with something custom to test a new idea and improve on the method in their own research. It means that it is easy to test and easy to read the code (in pure Julia).
+
+
Julia’s package manager makes it easy to install. Everything is documented and I have written an extensive tutorial — all of the code in the tutorial can be run as is.
+
+
Sharing and reuse is easy. You can make a config file with all of the simulation parameters (even the refractive index data from a file) and reproduce the results for that structure. You can create multiple variations easily this way and share the exact configuration that you used with others. Even complicated periodic structures are easy to make this way.
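To give a flavor of what I mean, here is a hypothetical config sketch; the field names are illustrative only, not the package's actual schema:

[structure]
layers = ["air", "Au", "SiO2", "Au", "air"]                        # hypothetical field names
thicknesses_nm = [0, 10, 2000, 10, 0]                              # 0 = semi-infinite end media
refractive_index_files = ["", "Au.csv", "SiO2.csv", "Au.csv", ""]  # per-layer index data

[simulation]
wavelength_start_um = 2.0
wavelength_stop_um = 12.0
points = 500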
+
+
This implementation is based on the latest research in general transfer matrix methods and every piece of research that I use is cited at the function level, complete with the DOI so that you can follow everything that has been done and make precise modifications. A full list of references is also on the documentation website.
+
+
My hope is for this to be at least a first stop for someone looking for a transfer matrix algorithm. If the community likes it, then I would like this to become a part of a standard set of science or physics packages that currently exist in the Julia ecosystem. Ease of use and readability really were my priorities — there is little boilerplate code. And Julia’s speed means you can do wavelength and angle-tuning simulations to produce 2D contour plots quickly. Together with the generality of this implementation based on current research, I hope that others can use TransferMatrix.jl to try out new ideas.
More on sewage monitoring (2022-11-01)

Sewage monitoring continues to make low-level but persistent appearances in the news.
+Former members of President Biden's Covid-19 advisory board write in the New York Times:
+
+
+
the national reporting system for collecting and testing samples from wastewater treatment systems for Covid remains limited, uncoordinated and insufficiently standardized for a robust national surveillance system. If public health officials can’t track the data to mobilize a response to a crisis, the information that has been collected doesn’t do much good.
+
+
+
I had thought a program like this would have made it into the recently-passed infrastructure package. It would be a shame if health and safety monitoring systems (like wildfire or region-wide earthquake monitoring) were not built or strengthened in the near future.
+
+
Here are more details on sewage monitoring on Jim Al-Khalili's excellent BBC podcast, The Life Scientific:
Jumping over the time-to-first-plot problem in Julia (2022-10-26)

I've been using Julia for about a year now after moving my entire workflow
+from Python.
+When I sometimes revisit Python I am so glad I made the switch. No regrets whatsoever. Julia still has one pain point,
+which is time to first execution (TTFX) or time to first plot (TTFP)1.
+But even this “pain point” is somewhat bizarre because Julia is a compiled language. Of
+course there is going to be a compilation step that will make it slow to get going. What makes this a pain
+point is the desire to have it all — “we are greedy,” say the founders of the language.
+Julia wants to be interactive and dynamic, but compiled and fast.
+But the fact that it’s compiled means that when a user wants to make a simple line plot it takes two minutes to precompile the plotting library, compile the plotting functions, and finally show the plot on screen.
+Only after that initial setup are all subsequent plots instant — as long as you keep your session active. There are many more talented programmers in the community than me, and one user in a recent Discourse thread explained the tradeoff and the difficulty in reducing compile time:
+
+
+
A tangent: I believe it is worthwhile to discuss why this is such a phenomenally big problem in julia. Julia has two very special features other languages do not share: (1) multimethods as the fundamental principle for the entirety of the ecosystem and (2) compiled code. It is very difficult to know what code you need compiled and to not discard the vast majority of already compiled code when importing new libraries that add new methods for pre-existing functions. No one has had to deal with this problem before julia. It is being slowly dealt with. Sysimages basically carry the promise that no significant amount of new methods will be defined, hence they can cache more compiled code (this is very oversimplified borderline misleading explanation).
+
+
+
That last point about sysimages is interesting. Making a sysimage in Visual Studio Code is a big workflow improvement, and I recommend all Julia users try it. It essentially compiles all the libraries from your project, and any other files you specify, and puts them into a file. I guess you could say it freezes your Julia session to use later. This is faster than precompiling each time. It’s built into the Julia extension and easy to set up. Detailed instructions are on the Julia VS Code extension website, but in a nutshell the steps are:
+
+
+
Open your project folder in VS Code with the Julia extension installed (and make sure it’s activated)
+
Make a new folder called .vscode
+
Make a file called JuliaSysimage.toml in that folder
+
Paste the [sysimage] text below this list into that file
+
Select Tasks: Run Build Task and then select Julia: Build custom sysimage for current environment
+
Check the useCustomSysimage setting in the Julia extension settings in VS Code
+
Restart the Julia REPL. (Hit the trash can button and open a new REPL session from the Command Palette)
+
+
+
Copy and paste this into a JuliaSysimage.toml file:
+
[sysimage]
+exclude=[] # Additional packages to be excluded from the system image
+statements_files=[] # Precompile statements files to be used, relative to the project folder
+execution_files=[] # Precompile execution files to be used, relative to the project folder
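For illustration, here is the sort of thing an execution file can contain: just a script that exercises the plotting calls you use every day so they get baked into the sysimage. The file name and backend below are my assumptions, not something the extension requires.

# precompile_plots.jl -- hypothetical execution file you could list in
# execution_files = ["precompile_plots.jl"]. It simply runs your everyday
# plotting calls so they are compiled into the sysimage.
using CairoMakie   # assumption: swap in whichever backend you actually use

x = 0:0.01:2π
fig = Figure()
ax = Axis(fig[1, 1]; xlabel = "x", ylabel = "y")
lines!(ax, x, sin.(x))
scatter!(ax, x[1:20:end], cos.(x[1:20:end]))
save("precompile_dummy.png", fig)   # running save() compiles the render path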
+
+
+
The extension automatically uses the sysimage instead of precompiling your project, and now your project should run much faster: TTFX is significantly sped up. On my M1 iMac, using the powerful but compiler-heavy Makie plotting library, I went from waiting about 2 minutes for precompilation and maybe 30 seconds for the first plot to almost no compile time and execution in less than a second. (Other people have properly benchmarked this; I'm not going to do that here.) I see similar results on my 2019 Intel MacBook Pro.
+
+
But here's what really got my workflow sailing. I'm a PhD student working in experimental physics. I have a lot of messy data and I need to make a ton of plots to explore that data. I have a top-level folder for my experiment.
+In there I have separate folders for raw data, daily scripts, and results/plots.
+Then I have a src folder where plotting, analysis, and file reading/writing scripts go. The files in src rarely change, so I can add them to the execution_files section in my JuliaSysimage.toml file, and they get compiled, along with all my plotting packages, into the sysimage. As long as I don't change these files, their functions load instantly. The functions in these files are used in my lab_notebook files with an include() statement at the top (e.g. include("plotting_functions.jl")). For example, I have custom plotting functions and themes that make an interactive grid of plots with toggles and settings so I can look at and compare data exactly the way I want. Recreating the sysimage a couple of times a month (or even once a week) is not a big deal compared to the time savings I get every day.
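A daily script in this setup might look like the sketch below; every file and function name is illustrative, not taken from my actual files.

# lab_notebook_2022_10_26.jl -- hypothetical daily script. The src/ helpers
# are assumed to exist and to be compiled into the sysimage already.
include("../src/plotting_functions.jl")  # loads instantly thanks to the sysimage
include("../src/file_io.jl")

data = read_spectrum("../raw_data/2022_10_26_sampleA.csv")  # assumed helper in src/
fig  = plot_comparison(data; normalize = true)              # assumed custom plotting function
save_figure(fig, "../results/2022_10_26_sampleA.png")       # assumed helper in src/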
+
+
As an aside, I recommend everyone have some kind of setup like this where you reuse plotting and analysis functions, no matter what language you’re using. If you are editing these functions every day then either these scripts have not settled down yet or something isn’t quite right with the workflow. It is worth it to sit down and figure out what tools you need to build to smooth out day-to-day computational tasks instead of writing scripts from scratch each time you have to make a graph of some data. For the most part, the file format for my data is the same, so I only need a handful of plotting and data read/write functions. Once they’re written, that’s it. I can move on.
+
+
As many others have said, the time-to-first-X problem is a priority for the Julia developers. The version 1.8 update this year saw some speedups,
+and I think the expectation is that this will continue in future 1.x releases.
+These improvements to the compilation stage, both in VS Code and the work being done in the language itself, have surpassed my expectations. I thought Julia would always have an initial lag and that people would have to make hacks and workarounds. This really is exciting, and there is a lot to look forward to in Julia’s future.
+
+
+
+
+
+
+
The plotting libraries generally take the longest to precompile. ↩
+
+
+
The Structures of the Scientific Enterprise (2022-10-17)

Geoff Anders of Leverage Research, a non-profit that writes scientific papers without, it seems, publishing them in peer-reviewed journals, writes in Palladium Magazine a brief summary of the role of science through the ages. His overall theme is clear from the section headers: he sees science as going from largely an endeavor of wealthy individuals to one that obtained its authority from the state. I quibble with parts of Anders's historical narrative, but there are some good ideas in his conclusion.
+
+
I'm not sure how big of a phase “science as a public phenomenon” was. He makes it seem like science was a circus show in the 16th, 17th, and parts of the 18th century. I think this is an exaggeration, but I'm not a science historian (and neither is he). Anders also relies too heavily on a single instance of science being used authoritatively (King Louis XVI's commission to investigate Franz Mesmer's methods of apparent hypnosis) to make the case that science had become broadly authoritative. This strikes me as a weak way to make the argument. I would be hesitant to say science has ever had the authority he seems to imply it had. A massive influence? Definitely. A justification? Probably, and sometimes a scapegoat. But I wouldn't call it an “authority”. Anyone using that word needs to give a bit more context, which Anders does not provide.
+
+
His section on science and the state overemphasizes military technologies and glosses over the quality-of-life improvements that raised large parts of the global population out of abject poverty (see Bradford DeLong's excellent grand narrative, Slouching Towards Utopia, published recently).
+
+
The one area that I think Anders has something going for him is his conclusion. The scientific community is at some sort of crossroads in terms of funding and elements of how it is structured. I see a lot of complaints that funding for blue-sky ideas is drying up and there are reports that hiring is becoming difficult. Some of this likely has to do with how universities are funded — and that is a whole other can of worms.
+
+
I like Anders's idea of splitting science into two camps: exploratory science and settled science. At first glance one might say that science is exploratory and that the settled part is taken care of by applied scientists and engineers, but I think Anders's argument is more subtle. He says that funding might be restructured so that exploratory science is decentralized and career tracks are split into “later-stage” and “earlier-stage” science. I would take this idea further. First, by establishing an exploratory wing of science, its mission would be to take big risks with the expectation of failure — and failures should be reported and praised. This wing would be analogous to the US's DARPA initiative.
+
+
Second, a later-stage wing of science wouldn’t have to feign novelty where none exists. They would be free to solidify existing science1. Maybe they can bundle a few studies from the exploratory stage and make that science robust, ready to pass the baton to the applied scientists and engineers.
+
+
I think the separation into early-phase and late-phase science would be a boon for the scientific endeavor. It would strengthen the pipeline from basic science to societal improvements. It would also clarify the mission of any given scientific project. Having a later-stage project would carry just as much importance as an exploratory project within its domain, and exploratory labs would be free to try out pie-in-the-sky ideas without fear of blowback from funding agencies.
+
+
+
+
+
I'm not sure what this has to do with Anders's idea of “don't trust the science”. I think he is throwing a bunch of ideas together without a clear thread (what does decentralization get you besides being hip with the crypto crowd?), but there are some nuggets in here worth thinking about. ↩
+
+
+
Sewage Monitoring (2022-10-08)

I have seen a few articles in the press this year on sewage monitoring for tracking disease and the health of a city. Sara Reardon, writing for Scientific American, reports on how wastewater monitoring has been taken up by the CDC and local communities as a tool for tracking COVID and other diseases in the US. The impact of wastewater data aggregation and analysis could be huge, in both positive and negative directions. It strikes me that governments are largely reactionary to changes in public health; little attention is paid to preventative measures. This could change that.
+
+
Thinking more broadly, I think this tool has much greater potential than disease tracking. Combining wastewater data with other inputs could be a monumental shift in understanding the health of a community on quite a granular level — both in terms of what substances are circulating in a community and the potential for real-time fidelity. You can imagine wastewater data being combined with data from hospitals, air quality monitoring, or even news of major events affecting the mental health of a city.
The Economist has moved all of its podcasts behind a paywall except for its daily news show The Intelligence.
+This is disappointing because I somewhat regularly would share episodes with friends who don’t subscribe to the newspaper.
+It also means that the analogy of podcasts being like radio that you download no longer holds.
+Another idea recedes into the past.
+I get the move — the advertising market is drying up and many independent podcasts are moving toward membership models.
+Spotify has pushed the industry towards subscriptions and now Apple Podcasts has gotten on board.
+Still, if The Economist is going to move its content behind a paywall, at least they have done it the right way.
+You can still use any podcast player to listen to shows.
+They provide a subscriber RSS feed in addition to hooking into the subscriber features of the big podcast apps. This is definitely the way to go, and I'm glad they are continuing to use RSS instead of inventing their own format.
+
+
Previously I posted about The New York Times launching its own audio app.
+I still think this is doomed to fail. They have since launched more shows that are available only on the app.
+I don’t know their numbers, but I suspect they won’t see a lot of growth in the long run.
+Thinking big picture, the internet is now going through a phase of decentralization.
+This is most apparent in the social media space with the rise of new microblogging platforms like Mastodon and Threads — and more importantly ActivityPub which allows them all to interconnect — and the slowly disintegrating Twitter/X.
+Podcasts have always used use-it-anywhere RSS feeds and I don’t see that changing any time soon.
I bought the Blue Mo-Fi headphones shortly after they came out in 2014.
+They are great headphones, but the fake leather on the ear pads has almost completely flaked off, and now
+that Blue has been bought by Logitech, which is killing the Blue mic brand, there is little hope of getting replacement parts or repairing them in the future (I have tried).
+Besides, they didn't have stellar reviews when they came out, and now I'm getting into high-fidelity audio.
+
+
Wading through the online world of audiophile hardware was making me consider buying a DAC and amp in addition to new headphones, but then I found Apple’s Support pages for lossless audio and it appears that Apple Silicon Macs not only support lossless audio output, but also have a built-in DAC and amp that can drive high-impedance headphones. That solves that problem. I’ll just buy some entry-level audiophile headphones and go.
+The built-in hardware is probably not as sophisticated as dedicated hardware, but I doubt I’ll ever be that into the highest-end audio equipment. There are other things to be obsessed about.
+
+
I can’t find any information on the built-in DAC or amplifier in System Information, but the
+Audio MIDI Setup app (comes with macOS) allows you to select the input and output sample rate and other settings.
“The entire Apple Music catalog is encoded in ALAC in resolutions ranging from 16-bit/44.1 kHz (CD Quality) up to 24-bit/192 kHz.”
+
+
+
Supported on iPhone, iPad, Mac, HomePod, Apple TV 4K (not greater than 48 kHz), and Android.
+
+
This page says only the 14-inch and 16-inch MacBook Pros support native playback up to 96 kHz, but
+I think this is outdated because the other support pages all say otherwise.
“To set the sample rate for the headphone jack, use the Audio Midi Setup app, which is located in the Utilities folder of your Applications folder. Make sure to connect your device to the headphone jack. In the sidebar of Audio MIDI Setup, select External Headphones, then choose a sample rate from the Format pop-up menu. For best results, match the sample rate for the headphone jack with the sample rate of your source material.”
The New York Times just released New York Times Audio, an app for “audio journalism”. It curates all of the New York Times podcasts (including a new daily podcast called “Headlines”) as well as podcasts from third parties, like Foreign Policy and This American Life. It will also include audio versions of written articles.
+
+
This is a bit of an old post, but one that I liked a lot considering that I am also in the business of manufacturing quasiparticles (mine are polaritons).
+It’s fascinating that the quasiparticles that appear because of material excitations can be described using many of the same models as “real” elementary particles.
Martin Rees talks to The Economist’s Alok Jha on existential risks to civilization and the
+importance of science and science communication in the 21st century, in the run-up to his new book
+coming out this November (I already pre-ordered).
+
+
There is a constant buzz on Twitter about the pains of academic research.
+Martin Rees agrees that aspects of university research need to be changed.
+Administrative bloat and scientists staying in their positions past retirement age discourage blue-sky research and gum up the promotion pipeline.
+He criticises the scope of the UK's ARIA (Advanced Research and Invention Agency) program, which is supposed to function similarly to the US's high-risk, high-reward DARPA (Defense Advanced Research Projects Agency) program:
+
+
+
In that perspective, it’s just a sideshow.
+ The ministers say this is a wonderful way in which scientists can work in a long-term way on blue skies research without too much administrative hassle.
+ They’d be doing far more good if they reduced the amount of such administrative hassle in those who are supported by UKRI,
+ which is supporting fifty times as much research as ARIA will ever do.
+
+
+
Science in the last ten years or so, I feel, has really gotten bogged down.
+I agree that blue-sky thinking has sort of gone out of fashion.
+How much this is a function of perverse publishing incentives, administrative hurdles, or the constant firehose of publications to keep up with, I don’t know.
+I’m glad a prominent and highly respected figure in the science community is calling out the inefficiencies and problems in the way science is practiced.
The modern news cycle is a periodic deluge. I don't get the sense that the James Webb Space Telescope launch has hit the public in the same way that the Hubble did. It seems like everyone moved on pretty quickly. I can't help but keep going back to the Webb images and looking at them in awe, and on modern computer displays the viewing experience is far better than anything available to those who saw the Hubble images for the first time.
The future of scientific software development will be cloud-based
+together with apps that use web technologies rather than
+platform-specific (“native”) applications despite recent mobile
+computing hardware advances. Advancements in computing tools and
+languages are already changing science to, for example, improve
+reproducibility of results and facilitate better collaboration. These
+same tools are helping to move development itself into the cloud and are
+migrating the community to web-based technologies and away from native
+apps and frameworks.
+
+
Mobile development for the scientific community now means programming on
+a laptop since there are very few scientific tools available on tablets
+and phones. “Mobile” in the everyday sense refers to, of course,
+smartphones and tablets. Eventually, scientific programming will move to
+these mobile platforms. I’m thinking of a tablet that can perform
+analysis, run a notebook environment, or even run certain kinds of
+simulations. You will be able to hook it up to measurement devices1 or
+controllers. At conferences, you will be able to answer questions by
+running your actual simulation live with different variables and show it
+to someone. There is a lot of great desktop-class software, proprietary
+and open-source, that powers science today. None of this will be a part
+of the mobile future. It will all be done in the cloud and with web
+technologies.
+
+
The discussion around native versus web technology frameworks is already
+robust in programming circles2, so I approach the topic as a
+researcher looking for mobile and cross-platform solutions. I try to
+answer these two questions:
+
+
+
What does software development look like in the future for science?
+
How are existing cross-platform and mobile frameworks shaping the future of scientific development?
+
+
+
I briefly describe the problems of the current fragmented ecosystem, how
+that ecosystem is converging on open-source tools, and then how the
+emerging cloud-based computing paradigm will shape scientific computing
+on mobile devices.
+
+
The fragmented ecosystem
+
+
The trajectory of scientific programming is interesting because it seems
+to be converging on a few tools from a historically fragmented and
+siloed ecosystem. Chemists, for example, use their particular flavors of
+modeling and analysis software (like Gaussian or ORCA), and Fortran is
+used for much of climate science. The fragmentation makes sense because
+of the wide range of applications that scientific programming must
+serve, including modeling, analysis, visualization, and instrument
+control. Furthermore, scientists are often not trained in programming,
+leading to large gaps in ability even within a single laboratory.
+
+
These factors lead to several problems and realities within the
+programmatic scientific community. These include:
+
+
+
+
Code that is often not reusable or readable across (or within) scientific disciplines. An example of this is the graduate student who writes software for their project, which nobody knows how to modify after they leave.
+
+
+
Domain-specific applications that inhibit cross-disciplinary collaboration. This includes proprietary software that, while effective, is not shareable because of cost or underutilization. Barriers to entry exist also because only a subset of people learn how to use a particular piece of software and would-be collaborators use something different.
+
+
+
Complicated old code that stalls development. Changing an old code base is a monumental task because the expertise that created the code has moved on. This is often the case with complex and large code bases that work, but nobody knows how. Making changes or sharing can require a complete rewrite.
+
+
+
+
The problems are more apparent today because the frontiers of science
+are increasingly cross-disciplinary. Without shareable and reusable
+code, there is considerable friction when trying to collaborate3.
+
+
Convergence to open-source tools
+
+
Several technologies are now maturing and their convergence is solving
+some of these problems. The transition will take a long time —
+decades-old code bases need to be rewritten and new libraries need to be
+built — but I expect the scientific programming landscape to be very
+different ten years from now.
+
+
The widespread adoption of Python, R, and Jupyter in the scientific
+community has solved many of the readability and shareability
+problems4. Many projects now bundle Jupyter notebooks to demonstrate
+how the code works. Python is easy to read, easy to write, and
+open-source, making it an obvious choice for many to replace proprietary
+analysis software. The interactive coding environment of Jupyter is also
+having a
+major impact on scientific coding. Someone reading a
+scientific paper no longer has to take the author’s word that the
+modeling and analysis are sound; they can go on GitHub and run the
+software themselves.
+
+
A level above programming languages are the apps for developing scientific
+software and doing analysis. There are a lot of apps out there, but a
+major component of development will use web technologies because of
+their inherent interoperability. Jupyter notebooks, for example, can be
+opened in the browser, meaning anyone can create and share something
+created in Jupyter without obscure or proprietary software. Jupyter can
+now also be used in Visual Studio Code,
+the popular, flexible, and rapidly-improving editor that is based on the
+web-technology platform, Electron.
+
+
The growing popularity of web technologies in science foreshadows the
+biggest change on the horizon, the move to cloud-based computing.
+
+
Cloud-based computing for science
+
+
Mobile devices are finally powerful and flexible enough that most
+people’s primary computing device is a smartphone. If this is the case,
+then one might think that they must be powerful enough for scientific
+applications. So, where are all of these great tools?
+
+
Ever since the iPad Pro came out in 20185, I have been searching for
+ways to fit it into my research workflow. So far, the best use-case for
+it is reading and annotating journal articles. This is great, but nowhere
+near the mobile computing workstation I outlined above. The reason I
+still cannot do analysis or share a simulation on an iPad is that
+Python, Jupyter, an editor, graphing software, etc. are not available
+for it — and my iPad is faster and more powerful (in many respects) than
+my Mac6.
+
+
As I look around for solutions, it seems that the answer is to wait for
+cloud-based development to mature. Jupyter already has notebooks in the
+cloud via JupyterHub. A
+service called Binder promises to host notebook
+repositories and make code “immediately reproducible by anyone,
+anywhere”. GitHub will soon debut its
+Codespaces cloud platform, and
+the Julia community (a promising open-source scientific programming
+language) has put their resources into Jupyter and VS Code. Julia
+Computing has also introduced JuliaHub, Julia’s
+answer to cloud computing. Legacy tools for science trying to stay
+relevant are also moving to the cloud (see MATLAB in the cloud,
+Mathematica Online, etc.). Any app or platform that does not make the
+move will likely become irrelevant as code-bases transition.
+
+
There are no mobile-first solutions from any of the major players in
+scientific software despite the incredible progress in mobile
+hardware7. Today I can write and run my software in a first-generation
+cloud-based environment or switch to my traditional computing
+workstation.
+
+
Conclusion
+
+
What lies ahead for scientific programming? Maybe Julia will continue
+its meteoric trajectory and become the de facto programming language
+for science, and scientific papers will arrive with Jupyter
+notebooks attached. Maybe code will become so easy to share and reuse that the
+niche and proprietary software that keeps the disciplines siloed will
+become obsolete. These would be huge changes for the scientific
+community, but I think any of these kinds of changes in the software
+space are compounded by the coming cloud computing shift. Scientific
+development will happen in the cloud and code will be more reproducible
+and shareable than it is today as a result.
+
+
This future is different from the mobile computing world that I
+imagined, where devices would shrink and simultaneously become powerful
+enough that a thin computing slab empowered by a suite of on-device
+scientific tools could fulfill most of my computing needs. Instead, the
+mobile device will become a window to servers that will host my
+software. Reproducible and reusable code will proliferate as a result,
+but where does that leave the raw power of mobile computing devices?
+
+
+
+
+
+
+
This just became possible with the Moku devices coming out of Liquid Instruments. ↩
Katharine Hyatt describes these problems in the first few minutes of an excellent talk on using Julia for Quantum Physics. ↩
+
+
+
Another potential avenue for convergence is the ascent of the Julia open-source programming language, which promises to replace both high-performance code and higher-level analysis software, while making code reuse easy and natural. The language is still far from any sort of standard, but there are promising examples of its use. ↩
+
+
+
The iPad is, unfortunately, the only real contender in the mobile platform space. The Android ecosystem has not yet come up with a serious competitor that matches the performance of the iPad. ↩
+
+
+
Specifically, Apple does not allow code execution on its mobile operating systems. ↩
A general 4 × 4 transfer matrix implementation for Julia, built for reusability, ease of use, and shareability, based on some of the latest peer-reviewed research on the topic, with full documentation and extensive tutorials.
I can’t say enough how much I’m in love with the Makie plotting software. One of the backends uses the GPU to render plots, which makes it responsive and interactive. This is a collection of examples of how to make interactive plots in Makie demonstrating some basic physics concepts. If you have ideas for more examples, open an issue or a pull request.
+
+
Other Projects
+
+
Check out my GitHub page for other projects I’m working on.
+
+
I also want to give a shout-out to Makie.jl. This is the workhorse of my research and the best plotting software I’ve ever used. The plots that it generates look good out of the box, and its GPU-powered and interactive features let me do modeling and data exploration easily. I try to push the boundaries of the interactive features of Makie and occasionally post about any rough edges and any cool things that I find on the Julia Discourse.
+
+
+
+
+
Research
The bonds of molecules vibrate. They stretch, they twist, and they rotate kind of like when two balls are attached to a spring.
+Just like the balls-and-spring can be stretched and compressed faster or slower, so can molecular bonds.
+
+
The stretching frequency is associated with an energy — the more energy you put into the spring, the faster it will stretch and compress. In a molecule, you can directly excite a molecular bond with light.
+
+
When matter interacts with confined light, the two can couple together. Why would this be the case? Light in free space can of course be absorbed by the molecule (it might be absorbed by the bond), but the molecular vibration will eventually decay and the light will be re-emitted back into free space. If you confine light, say, by placing two mirrors face-to-face, then you make a standing wave. Two people holding opposite ends of a rope and waving it up and down make a standing wave; think of the opposing mirrors as the two people holding the rope. The light bounces back and forth between them and, just like with the rope, the standing light wave can only exist at certain frequencies. (Try this! There are lots of videos online demonstrating it. At a slow speed, you just get a single arc going up and down. If you wave a little faster, the structure breaks down and the rope is all over the place. But go at just the right frequency, and you get two arcs going in opposite directions, one up and the other down, with a node in the middle that doesn't move. You can keep going with more energy as long as you don't tire out!)
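If you like numbers, the rule behind those “right frequencies” is simple; here is a toy calculation with made-up rope parameters:

# Standing waves on a rope of length L: only whole numbers of half-wavelengths
# fit between the fixed ends, so only certain frequencies work.
L = 2.0                 # rope length in meters (made up)
v = 4.0                 # wave speed on the rope in m/s (made up)
f(n) = n * v / (2L)     # n-th allowed frequency
[f(n) for n in 1:3]     # 1.0, 2.0, 3.0 Hz: one arc, two arcs, three arcs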
+
+
When the frequency of the light is the same as the vibration frequency of the molecule, then the two can couple. The molecule absorbs the light (a photon) and re-emits it back into the cavity. Then it gets re-absorbed by the molecule and the cycle repeats until the photon leaks out of the cavity or decays into the molecular bath.
+
+
+
Light-matter coupling is analogous to two pendulums attached by a spring. Separate, each pendulum can oscillate at its own frequency independent of the other. Once they are attached by a spring, then there are two “resonances”. They can either swing together: going together to the right, then to the left. Or they can swing opposite one another: they swing apart and the stretched spring forces them back together, and the now compressed spring pushes them apart again. These are two “normal modes” of the coupled pendulum. The confined light and matter, when coupled together, behave much like the coupled pendulum (details of the physics here).
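For the mathematically inclined, the coupled-pendulum picture fits in a few lines; the numbers below are made up for illustration, not taken from any experiment:

using LinearAlgebra

# Two coupled oscillators: a cavity photon and a molecular vibration,
# with coupling strength g. This is the standard textbook two-level model.
ω_cavity    = 2000.0   # cavity resonance (cm⁻¹)
ω_vibration = 2000.0   # molecular vibration (cm⁻¹)
g           = 50.0     # light-matter coupling (cm⁻¹)

H = Symmetric([ω_cavity g; g ω_vibration])
ω_lower, ω_upper = eigvals(H)   # the two normal modes (the polaritons)
ω_upper - ω_lower               # on resonance the splitting is 2g = 100 cm⁻¹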
+
+
What is different in the coupled case versus just the molecule or cavity alone? And how long do the coupled states last? These questions require ultrafast lasers. Since we are looking at molecular vibrations, we also need the light to be in the mid-infrared region.
+The power of an ultrafast laser comes in pulses that repeat at a certain rate. The temporal width of a pulse determines what kind of processes you can study. Molecular vibrations (once excited) decay very rapidly — on the order of picoseconds (10⁻¹² s) — so in order to see them a femtosecond (10⁻¹⁵ s) pulse width is required.
+
+
The experiment goes like this. An initial pulse is sent to excite the system — we put some energy into the light-matter coupled system. Then, while it is decaying, we send in a second pulse which gathers information about the system at some later time. This second pulse makes its way to a detector, which converts the light into an electrical signal that we can record. It's kind of like a strobe light and camera: you flash a light on the subject and then snap a picture. In fact, if you want to take a picture of a fast-moving subject (a bullet going through an apple), you need a lot of light and a very fast shutter speed. This is very similar to the femtosecond excitation pulse and measurement pulse process.
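As a toy illustration of the idea (with made-up numbers, not our actual laser parameters):

# Toy pump-probe signal: an excited population decays with a picosecond
# lifetime, and the probe pulse samples it at a series of delays.
τ = 5.0                        # assumed lifetime (ps)
delays = 0.0:0.5:20.0          # pump-probe delay times (ps)
population(t) = exp(-t / τ)    # normalized excited-state population
signal = population.(delays)   # what the detector records vs. delay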
+
+
+
+
Why study interactions between molecular vibrations and light?
+One of the most exciting things about these systems is that
+they can be used to modify chemical reactions. There are already demonstrations of this happening, but nobody knows how it happens.
+My goal is to figure out the fundamental properties of vibration-light coupling so that the chemistry might be explained.
+
+
+
+
Interests (on a slightly more technical level)
+
+
I’m interested in the physics of vibration-cavity photon systems (vibrational polaritons) and how they
+facilitate modified chemical reactivity. Ultrafast laser spectroscopy is
+the tool for this job. It allows me to study how the molecule-light
+interaction changes over short time scales.
+There is still a lot to uncover about these curious quasi-particles. We
+don’t have a full picture of how vibrational polaritons relax from
+excited states, for example. The role of polariton coherence, population transfer, and interactions with the reservoir states (uncoupled molecules) also need to be studied further to understand how all of these factors might play into modified reactivity.