The Return of the Western!

Westerns might be the first genre of movie that I came to love growing up, though by the time I was watching movies, Westerns had largely died out and were rarely made anymore. In 2021, it seems like the Western came back in a big way!

I have had this website for a while now, and I kept changing my ideas about what to do with it. I’ve used it to share tutoring materials, old lab projects, photos… But if anyone is familiar with my Facebook page, the only thing I seem to be able to do consistently is watch and talk about movies. So I thought I might as well do that here too.

Clint Eastwood was my favorite movie star growing up.

2021 was, somehow, a big year for Westerns. The Power of the Dog, Old Henry, and The Harder They Fall were all released towards the end of 2021. Even though, due to the ongoing pandemic, 2021 was not a great year for theatrical releases, all three films received a warm critical reception. Is the Western back? Is it here to stay? While I think all three of these films are worth seeing, only one makes me want to put it into the rotation of movies that I re-watch periodically.

Old Henry

Old Henry feels like a familiar movie. An old homesteader is doing his best to move on from his mysterious past and live a peaceful life, but trouble still finds him, and he is forced to take up his gun once again…

Tim Blake Nelson is not your typical leading man, but he delivers in Old Henry.

The biggest thing that sets Old Henry apart is its star. Tim Blake Nelson is a long-time character actor who you will usually find in supporting roles. He has occasionally functioned as a co-lead in ensemble pieces like the Coen brothers’ O Brother, Where Art Thou? and The Ballad of Buster Scruggs. But here Tim Blake Nelson is the titular Old Henry, and he seems perfectly at home in the spotlight. Nelson and fellow underappreciated actor Stephen Dorff carry us through a fairly simple and predictable but satisfying little flick.

Old Henry is a fun, simple movie that does everything it needs to do in its brisk 99-minute runtime. It’s not especially pretty, nor is the score particularly exciting, but it is still a well-crafted little movie. It has well-choreographed shootouts and just enough twists and turns to stay interesting. Plus, young Gavin Lewis turns in a surprisingly compelling performance as Henry’s frustrated teenage son. I will have to keep an eye out for his future performances.
Old Henry is reminiscent of Unforgiven, but with a tighter focus and less commitment to its anti-violent message.

Some credit Unforgiven with ushering in the brutal, realistic age of the revisionist Western – killing the classic Western adventure in the process. Old Henry feels like an attempt to turn back the clock. Old Henry is gritty and visceral – just like Unforgiven. But unlike Unforgiven where every character feels like a real, living person, the bad guys in Old Henry are just that – bad guys. Even though the movie gives some lip service to an anti-violence message, it still feels like a return to the halcyon days when Clint Eastwood or John Wayne mowed down waves of nameless goons without challenging the audience to really think about the cost of the violence he was inflicting. Old Henry is a fun watch, and I would definitely recommend seeing it once. However, I do not foresee myself returning to watch it again. I’d probably just re-watch Unforgiven.

The Power of the Dog

Speaking of revisionist westerns… The Power of the Dog is a slow-burning psychological drama. The gentle pacing and emotional focus of this film remind me more of a dramatic Western miniseries like Lonesome Dove or abstract period piece Oscar-bait like There Will Be Blood. However, the setting in early 20th century Montana, themes of isolation and independence, and the use of natural environments are enough for me to count it as a Western. Not every Western has to have a shootout.

The Power of the Dog is all about sexuality, but struggles to get to the point.

The story centers around recently remarried widow Rose (Kirsten Dunst), her son Peter (Kodi Smit-McPhee), and her hostile brother-in-law Phil (Benedict Cumberbatch). The film is an exploration of masculinity, and centers on a conflict between Phil’s rough outdoorsy machismo and Peter’s soft-spoken femininity. Phil is something of a master of passive-aggressive emotional torture, which he uses to great effect on Rose. It’s Old-West Gaslight.

The Power of the Dog is beautiful to look at and has a minimalist score that eases us into the film before settling into subtly menacing us the rest of the movie. The cast is top-notch, and on some level, I can understand why it is raking in nominations and awards left and right. But the film didn’t quite come together for me. Kirsten Dunst’s performance is great, but she gets sidelined mid-movie and has very little to do in the third act. The emotion of Benedict Cumberbatch’s performance comes through, but his American accent occasionally falters, and the film never really lets him show the full ugliness of his conflicted and self-hating character. And while the film explores some interesting themes – masculinity, self-reliance, sexuality, social roles… it also never really feels like it comes to a conclusion or makes any strong statements. It explores without really finding anything.

I still recommend The Power of the Dog, and I want to see it again. Perhaps I will appreciate it more on a second viewing. At minimum, it is a beautifully-shot film with some excellent performances. Still, I feel that it has some structural and tonal issues that make it less powerful than it ought to have been.

If you want to get your Kirsten Dunst fix from a period drama that explores the tension between individual sexuality and social norms, I would recommend Sofia Coppola’s The Beguiled instead. It, too, is beautiful to look at, full of incredible performances, and digs deep into human nature. However, it is much more tightly structured, and I think it has more to say and does a better job getting the message across.

I liked The Power of the Dog just fine, but I loved The Beguiled.

The Harder They Fall

I told you that there was one Western in 2021 that deserved a place in the pantheon, and this was it. Guys – this movie rocks.

The Harder They Fall. See it ASAP with a big screen, a loud sound system, and lots of popcorn.

There is a format to a typical Clint Eastwood Western. He’s a peaceful man, minding his own business. The Bad Guy wrongs him. Our hero spends the rest of the movie shooting people. It’s a simple formula and it works.

The Harder They Fall does follow the Eastwood formula, but it builds a lot on top of the familiar format. First and foremost, the entire principal cast is Black – a rarity in the Western genre, and a reality in actual Western history. To drive that point home, the movie opens with “These. People. Existed.” The characters in the film are based, albeit very loosely, on real historical figures. Add to that a perfectly calibrated soundtrack and some cinematography that rivals The Power of the Dog, and you have a movie that is both exciting entertainment and compelling art.

The cast is loaded – Idris Elba, Zazie Beetz, Regina King, Delroy Lindo, LaKeith Stanfield… Elba AND Stanfield! The main person I was not familiar with was the lead, Jonathan Majors, but he had no trouble convincing me he was a bona fide movie star.

Have I mentioned yet that this movie is pretty?

Somehow, even though Old Henry was a continuation of the revisionist Western style and The Power of the Dog was a somewhat unstructured drama that didn’t fit well into any particular category, it was The Harder They Fall’s rehashing of a well-worn formula that felt the most fresh and exciting. This movie is fun, but it also has character-driven drama that gives it real stakes. Even the antagonist, who at first seems like a typical vicious Western baddie, has a surprising amount of depth. And the final showdown is… I won’t spoil it.

There have been some pretty good Westerns in recent memory. The remake of The Magnificent Seven wasn’t bad, but wasn’t great. Django Unchained was good, but dipped its toe too deep into real human suffering to be much fun and had too many wild gunfights and too much wacky humor to actually say anything about that suffering. The Hateful Eight was loaded with on-screen talent, made good use of a beautiful setting, and had an incredible score. But it was also a bit of a slow, bloated, and boring mess that badly needed an editor. The Harder They Fall feels like everything that the Tarantino Westerns wanted to be and more, but in a much sleeker, more coherent package.

I definitely recommend The Harder They Fall. It is the most fun I have had watching a Western made this millennium, and one of the only ones (along with The Proposition and True Grit) that I am likely to re-watch regularly.

Convergence of Pi: Interactive Post

In an earlier post, I discussed how Pi can be expressed as a summation.

$$4 \sum _{n=1}^{\infty } \frac{(-1)^{n-1}}{2 n-1}=4 \left(\frac{1}{1}-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}-\cdots\right)=\pi$$

I thought I would create some embedded Mathematica applets that show how the terms of the series converge to π. To start out, let’s look at how the summation converges as we add more and more terms.

Notice how the approximation jumps between overestimating and underestimating π.

The series gradually converges to π as more terms are added.

You can try evaluating the summation to different numbers of terms with this interactive tool.
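For anyone who wants to reproduce the computation without Mathematica, here is a quick Python sketch of the same partial sums (the function name is my own):

```python
import math

def pi_partial_sum(n_terms):
    """4 times the sum of the first n_terms of the alternating series (-1)^(n-1) / (2n - 1)."""
    return 4 * sum((-1) ** (n - 1) / (2 * n - 1) for n in range(1, n_terms + 1))

for n in (1, 2, 10, 100, 1000):
    approx = pi_partial_sum(n)
    side = "over" if approx > math.pi else "under"
    print(f"{n:>5} terms: {approx:.6f} ({side}estimates pi)")
```

Odd numbers of terms overestimate π and even numbers underestimate it, which is the jumping behavior visible in the applet.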

Scripted Drafting

Suppose you want to create a drawing for a simple part, like a typical tensile dogbone.

An “A” type tensile dogbone according to ASTM E345.

You could draw up this part according to specifications in a conventional drafting program. But parts like this often come in different sizes with different proportions, depending on the material and your testing needs. So for this kind of part, there’s a strong chance you would need to draft different versions to satisfy different specifications. When that need arises, your first impulse might be to draft a whole new part or modify an old one as needed, but there is a more efficient approach.

A dogbone drafted exactly according to E345.

The key is in recognizing that all of the dogbones you are making have the same basic features, and you can follow the same basic steps when drafting each one. Here is one possible workflow:

  • Draw a rectangle for the reduced middle section
  • Draw rectangles for the grips at the ends
  • Subtract out the rounded fillets

Regardless of how long, wide, or thick the dogbone is, you can make it following those same steps. All that changes are the measurements.

A dogbone that is half as wide, but four times as thick as the last one.

So if we approach our drafting project as a sequence of steps, we can draft practically every conceivable dogbone with the effort of drafting just one. It turns out that many software suites out there have some kind of scripting capability. For our example, there is a good free option: OpenSCAD.

Scripts on the left panel lay out the steps of drafting a part. The right panel shows a preview of the result.

OpenSCAD only lets you draft parts with scripts. Making an all-purpose dogbone-making script is simply a matter of setting up the variables which describe the part dimensions at the beginning of the script.

Note how I decided to build in a little math for the grip lengths so they do not need to be defined separately.

Then, when it comes time to create the part, the same set of steps are followed every time.

The steps to creating a “dogbone” part are always the same. Only the measurements change.

So now you only need to change the numbers at the beginning of the script any time you want to draft a dogbone with new dimensions. This concept of simple, scripted drafting should work with a variety of parts. Keep an eye out for situations where:

  • You are designing simple parts
  • The parts have interrelated measurements that can be handled algebraically
  • Any part can be made by repeating the same set of steps
  • You will need to draft multiple versions of the part

If you use scripted drafting, you can save yourself the trouble of having to go back and re-invent the part from scratch each time. All it takes is a little thought focusing on the steps it takes to make the part rather than just the end result.

A wide, thick dogbone made by simply changing a few numbers and re-rendering the part.


In memory of Dr. Alex Punnoose

Dr. Alex Punnoose was one of my favorite teachers at Boise State. Tragically, he passed away in 2016. However, his memory lives on in his students and what he taught us about the universe and our roles as scientists.

Dr. Punnoose’s “Probe-Interaction-Signal” method of classifying material characterization techniques is one of the most useful lessons he taught me. I do not know if he invented this approach to categorizing characterization techniques, but his implementation of Probe-Interaction-Signal was very intuitive, and I still use it as a memory tool and when writing methods sections in my papers.


In essence, most material characterization methods can be understood by breaking the process into 3 steps: Probe, Interaction, and Signal.

  • Probe: Whatever you use to interrogate the specimen: light, radiation, physical contact with a tool, bombardment with particles, etc.
  • Interaction: The physics-based principle which describes how the probe and specimen interact. Usually, this incorporates some material properties of the specimen.
  • Signal: What you measure to understand the material: emitted particles, reflected or emitted radiation, reaction forces, etc.

This framework is essentially the same as classifying the components of an experiment as independent factors, theory, and dependent response variables. The Probe and Signal are the components of the experiment which a scientist can measure and record. The Interaction involves the theory and equations which we can use to gain insight into the specimen we are interrogating.

Let’s start with a really simple, silly example.

Poking Something with a Stick

So suppose you are a child walking along the beach, and you see a crab sitting on the sand. It’s not moving. What do you do? Well, you can design an experiment within the Probe-Interaction-Signal framework.

  • Probe: Poke it with a stick.
  • Interaction: If the crab is alive, it will feel you poking it, and it will move. If it is dead, it will not react.
  • Signal: You look at the crab to see if it moves or does not move.

So here we have a simple example of a characterization method based on the physical principle that living things react to being poked while nonliving things do not. The Interaction theory tells us how to draw a conclusion based on the Signal we measure after applying a Probe.

X-ray Photoelectron Spectroscopy (XPS)

XPS is an excellent choice of name for a characterization technique since the name describes almost exactly how XPS works.

  • Probe: Monochromatic X-rays of known energy
  • Interaction: Electrons are released with an energy equal to the incoming X-rays minus the binding energy that held them to the specimen
  • Signal: The kinetic energy of the electrons released from the material

In many cases, the Interaction portion of the process is based on an underlying physical principle which we can describe mathematically. XPS is based on measuring binding energies which are characteristic of the atoms which emitted the photoelectron.

$$ E_{binding} = E_{X-ray} - (E_{kinetic} + \phi) $$

We controlled the X-ray energy and measured the kinetic energy. The φ part of the equation is just a constant representing the work function of the electron detector. (Some versions of the equation will leave φ off, presuming that it is accounted for in the kinetic energy term.) So this equation lets us compute the binding energy of the atoms which emitted the photoelectrons, which in turn can be matched to the specimen composition.
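The arithmetic is simple enough to script as a sanity check. In this sketch, 1486.6 eV is the standard Al Kα source energy, while the kinetic energy and detector work function are illustrative values I chose so the result lands on a familiar line:

```python
def binding_energy(e_xray, e_kinetic, work_function):
    """E_binding = E_xray - (E_kinetic + phi); all energies in eV."""
    return e_xray - (e_kinetic + work_function)

# Al K-alpha source, illustrative measured kinetic energy and work function:
print(binding_energy(1486.6, 1197.3, 4.5))  # ~284.8 eV, the adventitious C 1s line
```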

Understanding the Interaction also helps us anticipate the limitations of a characterization technique. Since XPS measures the energy of emitted electrons, and electrons’ ability to escape a material diminishes with depth below the surface, we know that XPS measures surface composition. Anything that would interfere with the electrons’ path to the detector could interfere with the measurement, so XPS is best performed in a vacuum.

Let’s take a quick look at a few other techniques through the Probe-Interaction-Signal lens.

X-ray Fluorescence Spectroscopy (XRF)

  • Probe: X-rays from a source (can be monochromatic or not)
  • Interaction: Electron transitions induced by the probe X-rays result in the emission of “secondary” X-rays which are characteristic of the atoms which produce them
  • Signal: The secondary X-rays

XRF is another spectroscopic technique because the Signal is X-rays which are characteristic of the element which produced them. XRF is also very similar to XPS because they share the same Probe. However, they measure different Signals and require different, but related, Interaction theories to interpret their results.

Optical Microscopy

  • Probe: Light from a source
  • Interaction: Light is scattered by features of the specimen surface
  • Signal: Scattered light is magnified and directed to a human eye or a camera

Microscopy is intuitive because, like human vision, it lets us measure the spatial locations of features on the surface of a specimen. However, the Probe-Interaction-Signal framework helps us understand how modifications to optical microscopic techniques could be made to extract additional information.

For example, we could modify the probe and signal by adding filters, making the interaction more complex and providing us with new information. By adding polarizing filters, we could deliberately exclude light which changed polarization upon interaction with the specimen. This can help determine the orientation of crystals of birefringent materials, such as an aluminum oxide or barium titanate.

  • Probe: Polarized light from a source
  • Interaction: Light is scattered by features of the specimen surface and changes polarization angle depending on the orientation of birefringent surface features
  • Signal: Scattered light which is filtered by a polarizer, magnified, and directed to a human eye or a camera

Transmission Electron Microscopy (TEM)

  • Probe: Monochromatic electron beam
  • Interaction: Electrons are transmitted through the specimen and scattered by its atoms
  • Signal: Transmitted electrons form an image in the image plane and a diffraction pattern in the back focal plane

TEM is a far more powerful and sophisticated technique than my description makes it sound. My goal is to emphasize one important point: the spatial information from TEM images comes primarily from the Signal part of the technique. Let’s compare this to how scanning electron microscopy works.

Scanning Electron Microscopy (SEM)

  • Probe: “Primary” electron beam directed to pre-determined locations on a grid
  • Interaction: “Secondary” electrons are knocked loose from the specimen surface
  • Signal: Secondary electrons are collected by a detector

Notice how the location information for SEM comes from the Probe part of SEM rather than directly from the Signal like TEM and optical microscopy. It can be tempting to focus on the information carried in the Signal part of an experiment. However, the Probe-Interaction-Signal framework helps illustrate how important it is to interpret the dependent variables of an experiment in the context of the independent variables.

A Useful Model

We have only scratched the surface of material characterization techniques out there, but the Probe-Interaction-Signal concept can be applied to practically all of them. I find it especially handy when learning a new technique for the first time or training someone else. It’s also very useful in troubleshooting misbehaving instruments.

I hope you find Probe-Interaction-Signal useful, and I suspect it will not be the last of Dr. Punnoose’s lessons I transcribe into a blog post.

Image-Centric vs Deformation-Centric DIC Analysis

Digital Image Correlation Analysis

Digital image correlation (DIC) is a technique for measuring the deformation (i.e. strain) of materials using a sequence of images taken over the course of a mechanical experiment. DIC analysis generally follows 4 steps: capturing an image sequence, tracking features, deformation analysis, and data mining.

Flowchart mapping out the 4 steps of DIC analysis and the type of data produced by each step.

Problem: Image-Centric DIC Limitations

Our research group used DIC as our primary tool for non-contact measurement of material deformation. However, our work was inhibited by several limitations of our existing DIC analysis software.

  1. Computationally inefficient and error-prone motion tracking
  2. No measurement validation or error estimation
  3. Poor record-keeping of the steps of completed DIC analyses
  4. A single type of strain result which required long computation times and large amounts of disk space
  5. Limited data mining potential due to strain analysis, static reference states, and lack of error estimation

However, the biggest limitation was that our old DIC software was image-centric. From beginning to end, the data was formatted, processed, and saved in arrays matching pixels in the original image. This approach ignores the fact that the discrete feature tracking and the deformation gradient calculation trade location accuracy for displacement and deformation information while introducing and propagating error. Handling the data at the resolution of the image wastes computing resources while misrepresenting the accuracy of the analysis.

Solution: Deformation-Centric DIC

I started by retrofitting our old DIC software with new features and add-ons to compensate for its shortcomings.

  1. Improved motion-tracking based on the Lucas-Kanade optical flow function set from the OpenCV library
  2. A “rigid body motion” test for motion tracking validation, error estimation, and artifact detection
  3. A robust documentation policy that saves every step of the analysis in a human- and machine-readable directory tree, e.g. .\images\motion tracking\deformation\data mining
  4. Deformation gradient-focused data structures which save computational resources and enable a wider variety of data mining strategies without sacrificing accuracy
  5. Flexible reference states, making it possible to target sections of experiments and compute deformation rates

In addition to these improvements, my DIC software was built fundamentally differently from our old DIC platform. Rather than handling data in an image-centric manner, I implemented a deformation-centric scheme. My DIC software treats deformation data in a manner similar to the nodes and elements found in finite element analysis. This approach facilitates error estimation and enables faster computation times and more efficient use of memory.
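To illustrate the element-style idea, here is a minimal numpy sketch: fit a local deformation gradient to a small cluster of tracked features, then derive a strain measure from it. This is my own illustration of the concept, not code from the actual analysis suite.

```python
import numpy as np

def deformation_gradient(ref_pts, cur_pts):
    """Least-squares fit of the affine map cur ~ F @ ref + c over a cluster
    of tracked features, returning the 2x2 deformation gradient F."""
    ref = np.asarray(ref_pts, dtype=float)
    cur = np.asarray(cur_pts, dtype=float)
    A = np.hstack([ref, np.ones((len(ref), 1))])      # rows of [X, Y, 1]
    coeffs, *_ = np.linalg.lstsq(A, cur, rcond=None)  # 3x2: linear part + offset
    return coeffs[:2].T

def green_lagrange_strain(F):
    """E = (F^T F - I) / 2."""
    return 0.5 * (F.T @ F - np.eye(2))

# Four tracked features under a 10% uniaxial stretch along x:
ref = [(0, 0), (1, 0), (0, 1), (1, 1)]
cur = [(0, 0), (1.1, 0), (0, 1), (1.1, 1)]
F = deformation_gradient(ref, cur)
E = green_lagrange_strain(F)
print(F)   # close to [[1.1, 0], [0, 1]]
print(E)   # E_xx close to 0.105
```

Working with a handful of fitted gradients per "element" instead of per-pixel arrays is what saves the memory and computation described above.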

Impact: Unprecedented Access to Material Deformation

My analysis suite has become the primary tool of my former research group at Georgia Tech. They are now able to distinguish real deformation from artifacts, validate their measurements, and perform sophisticated strain analyses which are not possible with conventional DIC analysis. For my dissertation, I combined deformation rate analyses with in-situ stress measurements, creating the first-ever real-time deformation energy measurement based entirely on empirical measurement without simulation or model. By building a DIC analysis suite from the ground up with deformation analysis in mind, I granted my research group significantly more powerful tools for mining mechanical testing data.

Automating Research Team Management Using Experiment Design

Problem: Organizing an Inexperienced Team

In my final year at Georgia Tech, I had three research assistants who had the potential to be productive laboratory technicians, but they lacked experience and confidence. I needed to create an environment where they could hit the ground running: gathering high-quality experimental data, documenting their work, and organizing their data without too much micromanagement. Meanwhile, I had publications and a dissertation to write and needed to ensure the quality of my assistants’ work without spending too much time looking over their shoulders.

Solution: Data Management as a Means to Automate Team Management

A data management strategy can do more than just make data easier to navigate. A strategy which incorporates automated progress updates allows the team members to focus on their delegated roles rather than administrative tasks. Technicians are freed to focus more on their laboratory experiments, analysts can be updated the instant new data is added, and team meetings become more efficient and focused on the next step of the project.

So I began building a data management strategy based on the needs of our project. I would:

  1. Design a set of experiments to explore a parameter space.
  2. Create a data organization system based on that parameter space.
  3. Build tools to automatically track data placed into the system, eliminating the need for my assistants to report their progress.
  4. Expand upon those tools to automate data analysis.

I used Design of Experiments (DOE) principles as the basis of our experiment design and my data management strategy. I started by writing a Python script which built a directory tree organized according to the parameters of each experiment. Each level of the directory tree corresponded to one parameter from the experiment design.

For example, I developed this data management strategy for a study of fracture toughness in various thin-sheet metallic specimens. The parameter space had three axes: material, thickness, and type of crack, so the directory structure for the project had three layers. An experiment on a copper sheet 100 µm thick with a single edge notch would be stored in the directory .\Cu\B100um\SE.

The ordering of the parameters / directory layers was arbitrary, and I simply chose an arrangement that made it easy for my assistants to find where to put the result of their experiments. For the purpose of organizing our data, the directory tree offered an intuitive, human-readable structure. This structure also made tracking progress straightforward.

Once we started collecting data, I wrote a directory crawler script that automatically generated reports on my team’s progress based on the data they collected. This eliminated the need to interrupt my assistants’ laboratory work for progress updates and allowed me to observe our large experimental dataset taking shape in real-time and plan our next move. Finally, I augmented the directory crawler to retrieve and analyze experimental data on-demand.
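A minimal sketch of what such a crawler could look like is below. The CSV extension and the three-level depth are assumptions for illustration, not the actual implementation:

```python
import tempfile
from pathlib import Path

def progress_report(root, depth=3):
    """Count data files under each parameter combination (the first `depth`
    directory levels). A sketch; the real crawler was project-specific."""
    root = Path(root)
    counts = {}
    for f in root.rglob("*.csv"):                  # assume raw data lands as CSV
        combo = f.relative_to(root).parts[:depth]  # e.g. ('Cu', 'B100um', 'SE')
        counts[combo] = counts.get(combo, 0) + 1
    return counts

# Demo on a throwaway tree shaped like <material>\<thickness>\<notch>:
with tempfile.TemporaryDirectory() as tmp:
    for p in ["Cu/B100um/SE/test1.csv", "Cu/B100um/SE/test2.csv",
              "Al/B50um/MN/test1.csv"]:
        f = Path(tmp) / p
        f.parent.mkdir(parents=True, exist_ok=True)
        f.touch()
    report = progress_report(tmp)
    print(report[("Cu", "B100um", "SE")])   # experiments logged for that combination
```

Because the directory layers mirror the experiment design, counting files is the same thing as measuring progress through the parameter space.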

Flowchart showing how I used software tools to streamline the team management process.

Once the directory crawler could retrieve data, it was straightforward to adapt the tool to perform analyses. For analytical purposes, it was not always advantageous to view the data in the way it was organized in the directory tree. So I added the ability to filter the data retrieved by the directory crawler based on arbitrary criteria.

For example, consider our project exploring different materials, sheet thicknesses, and notch types. I could retrieve and compare data from any combination of the three parameters we explored. I could look at all of the single edge-notched experiments on all materials and all thicknesses. Or I could narrow my focus to only single-edge notched specimens from aluminum sheets. All of the data retrieval and visualization was automatic, and I merely needed to identify the scope I wished to examine.
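The filtering idea can be sketched in a few lines of Python. The records and the `select` helper below are hypothetical illustrations of the concept, keyed by the same (material, thickness, notch) parameters:

```python
# Hypothetical experiment counts keyed by (material, thickness, notch):
records = {
    ("Al", "B50um", "SE"): 3,
    ("Al", "B100um", "MN"): 2,
    ("Cu", "B100um", "SE"): 4,
}

def select(records, **criteria):
    """Keep records whose key matches every named criterion; no criteria keeps all."""
    fields = ("material", "thickness", "notch")
    return {key: count for key, count in records.items()
            if all(key[fields.index(name)] == value
                   for name, value in criteria.items())}

print(select(records, notch="SE"))                  # every single edge-notched set
print(select(records, material="Al", notch="SE"))   # narrowed to aluminum only
```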

Impact: An Efficient Research Team

Developing my automated team management strategy catalyzed my research team’s progress to such a degree that we accomplished more in one summer than in the previous two or three years. Tasks like getting a progress report from my assistants, developing a work plan for the week, or applying a new analysis to old data took only a few minutes rather than hours of work and meetings. My management framework eliminated my need to micromanage my team, helped my assistants see how their work fit into the larger project, and allowed everybody to focus on the science rather than the minutiae of organizing the group’s efforts.

My new tools helped me create an optimal environment for my assistants where they always had clear goals, instant feedback on their progress, and a clear record of how their work contributed to the project. At the beginning of every week, I used my directory crawler scripts to compile a big-picture report of the team’s data, identify target areas for the coming week, and set goals for my assistants. I could check on my team’s progress at any time over the course of the week without needing to interrupt their work and then deliver praise or constructive feedback at lunch or the end of the day. This view of my assistants’ daily work even helped me with “sanity management” – making sure my assistants’ days had enough variety and challenges to keep them engaged.

The new management strategy created a highly efficient data collection and analysis pipeline which let me stop worrying about how to collect enough data for the project and shift my focus on developing new theories and models. I had built an efficient data pipeline, and the challenge of analyzing my pipeline’s output sparked my interest in data science. In one summer, my undergraduates collected more high-quality data than some grad students collect in their entire dissertation research. The dataset is so thoroughly documented and well-organized that more than a year later, my old research group is still mining the data for new insights.

Modern data acquisition technology has lowered the barrier to collecting data to such a degree that we can afford to collect data from experiments without every measurement necessarily being aimed at testing a particular hypothesis. Our success that summer hinged on a data management strategy which had the experiment design baked in. By exploring our parameter space in a systematic, disciplined way, we created a dataset conducive to developing and testing new hypotheses and models based on the comprehensive data we had already collected.

Design of experiments allowed us to avoid the traditional hypothesis – experiment – refinement – hypothesis … cycle of the scientific method, where progress is limited by the rate at which experiments can be run. Instead, we explored a large parameter space, and our physical science project became a data science problem. Our progress was only limited by how quickly and insightfully we could develop new models to understand our data.

Postscript: Experiment Specifics

I have kept this post focused on my research team management strategy with minimal discussion of actual research we were working on. I left those details out to avoid complicating the narrative. But for the curious, I thought I would write out some notes on the parameter space we were exploring.

The Goal

Develop a new fracture toughness parameter for thin, ductile metal sheets where traditional analyses such as K and J cannot be applied.

Experiment Parameter Space

  1. Sheet material
    • Al
    • Cu
    • Sn
  2. Sheet thickness (varied depending on material)
  3. Starting notch type
    • Single edge notch
    • Middle notch
    • No notch (tensile)

Raw Data Acquired

  • Force
  • Displacement
  • Optical micrograph sequences (videos of the experiment)
  • Fractographs (micrographs of the fracture surfaces)


The analyses we developed were targeted at characterizing the driving force and resistance to crack growth in the ductile sheets. The goal was to find analytical approaches which were insensitive to certain parameters, such as the starting notch type. (That is, we could use forensic fractography to identify certain parameters as good negative controls.)

But the real beauty of our approach is that we were able to run multiple hypothesis – antithesis – synthesis cycles without the rate-limiting step of waiting to complete more physical experiments. The dataset was large and robust enough that we could simply test different analyses over many different scopes – from single experiments to a comprehensive analysis of the entire set and everything in between. I suspect that there may be a few more Ph.D. dissertations worth of insight still waiting to be discovered in our dataset.

Here are some examples of analyses I developed. The parenthetical statements list the raw data for each analysis.

  • Crack length (micrographs)
  • Stress (force & micrographs)
  • Crack propagation stress evolution (force & micrographs)
  • ANOVA of crack propagation stress convergence (force, micrographs & set groupings based on parameter space)
  • Strain distributions (digital image correlation deformation mapping)
  • Work of fracture (force, displacement, & crack length)
  • Deformation energy distribution (force & deformation maps)
  • Specific deformation energy accumulation rate (force, deformation maps, and crack length)

Measuring Color Part 3: The Observer Functions

Now that we’ve laid out the problem of taking a color measurement from a continuous spectrum, we can actually execute the conversion. We can start by considering the wavelengths of light that correspond to certain colors. For example, red light has a wavelength of about 650 nm, green light is about 550 nm, and blue light is down around 425 nm. In the real world, though, light can exist at any wavelength, so color is more complex than just measuring the intensity of light at one specific wavelength.

An example of a continuous spectrum collected from some red-dyed diesel fuel. We want to convert this kind of data into a 3-channel RGB color representation.

The Observer Functions

The human eye perceives red, green, and blue light each over a range of wavelengths. The sensitivity of the eye also varies depending on wavelength. For our project, we will apply some mathematical equations called the “Observer Functions” which approximate the sensitivity of human eyes at each wavelength.

The observer functions approximate how the eye responds to red, green, and blue light. The CIE C Standard approximates the standard light source required for an ASTM D1500 color measurement test. This light source appears white because it stimulates all 3 areas of sensitivity in the human eye.

The observer functions show a few interesting things. First, the human eye is far more sensitive to blue light than green or red, but that sensitivity is over a narrower range of wavelengths. Perception of “red” light mostly covers the longer visible wavelengths. However, the red observer function has a secondary peak at about the same wavelength range as blue light. This suggests that the light we think of as “blue” also stimulates human “red” receptors, albeit not as much as the longer wavelengths we associate with red light.

So how do we use these observer functions? Well, we simply need to multiply each of the observer functions by the continuous transmission spectrum that came from our UV/Vis spectrophotometer and compare that result to the original observer function. Thus if a specimen transmitted 100% of light in the “red” range of wavelengths, the observer function would remain unchanged, and we would see that the red receptors in our “eye” would be 100% stimulated.

Comparison of the ideal observer and standard light source functions (left) compared to an actual spectrum applied to the observer functions (right). Notice how the observer functions on the right are reduced in magnitude according to where they match up to the spectrum.

In the example above, we can see an analysis of a typical red-dyed diesel specimen. The grey curve on the right is the spectrum measured from the specimen, and we can see how the red, green, and blue observer functions are changed by it. The weighted blue observer function is practically non-existent, the red came through at nearly full strength, and the green was reduced in intensity but not suppressed entirely. If we quantify how strongly each range of receptors in our “eye” was stimulated, we can measure the (red, green, blue) color as (0.649, 0.345, 0.004). Mathematically, we accomplish this by summing up the observer functions on the right and comparing them to the observer functions on the left.
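The weighting step can be sketched in a few lines of Python. The observer sensitivities and the spectrum below are toy values on a coarse three-point wavelength grid, invented for illustration; a real implementation would use finely tabulated CIE observer data.

```python
# Toy observer sensitivities on a coarse wavelength grid (450, 550, 650 nm).
# These are invented values for illustration, not real observer tables.
observer = {
    "red":   [0.1, 0.4, 1.0],
    "green": [0.2, 1.0, 0.2],
    "blue":  [1.0, 0.1, 0.0],
}

# Toy transmittance spectrum of a red-dyed specimen: blocks short
# wavelengths, passes long ones.
transmittance = [0.0, 0.3, 0.95]

def channel_response(sensitivity, spectrum):
    """Fraction of a channel's total sensitivity stimulated by the spectrum."""
    stimulated = sum(s * t for s, t in zip(sensitivity, spectrum))
    return stimulated / sum(sensitivity)

# Weight each observer function by the spectrum, then compare the result
# to the unweighted observer function.
rgb = {name: channel_response(sens, transmittance)
       for name, sens in observer.items()}
```

With these toy numbers the red channel comes out strongest and the blue channel nearly zero, mirroring the red-dyed diesel example.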

So this gives us a color measurement! All that remains is to make some adjustments to make sure our measurement accounts for the conditions of an actual experiment.

Simulating a Standard Experiment

Light Source

While our color measurement does represent the color of the actual specimen, there is an important difference between the UV/Vis spectrophotometer and the standard method: the light source! If we were to follow the standard method, we would use a light source which produces different intensities of light at different wavelengths. The UV/Vis automatically corrects for the light source intensity, so the transmission measurements it provides are as if they were collected using a light source which was equally intense at every wavelength.

For our experiment, correcting for the light source is fairly simple. We just take the spectrum of the standard light source (the “CIE C source” in the plots above) and apply it to our transmission spectrum before we perform our calculations using the observer functions.
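That correction amounts to a pointwise multiplication. A sketch, with made-up source intensities standing in for the real CIE C tabulation:

```python
# Made-up relative intensities of a standard light source at three
# wavelengths; the real CIE C source is tabulated across the whole
# visible range.
source_intensity = [1.2, 1.0, 0.9]

# Transmittance spectrum from the UV/Vis, reported as if the source
# were equally intense at every wavelength.
transmittance = [0.0, 0.3, 0.95]

# Re-weight each wavelength by the standard source's intensity before
# applying the observer functions.
effective_spectrum = [i * t for i, t in zip(source_intensity, transmittance)]
```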


Specimen Thickness

There is another difference between our UV/Vis experiment and the standard method: the standard uses a 33 mm thick observation vial, while the UV/Vis spectrophotometer uses 10 mm wide cuvettes. So the light in our measurements traveled through less of the specimen than the standard mandates, and less of it was absorbed in total. We can compensate for this using Beer’s Law.

$$A = \epsilon c b$$

Beer’s law states that absorbance, A, scales proportionally with the specimen thickness, b. (The other constants are the concentration, c, and molar absorptivity, ε, of the specimen, which do not change.) So to simulate the standard measurement, we simply need to scale the absorbance by 3.3 to get an equivalent result! However, our measurement is not in absorbance; it is in transmittance. Fortunately, the conversion between absorbance, A, and transmittance, T, is straightforward.

$$A = \log_{10}\left(\frac{1}{T}\right) \\
T = 10^{-A} $$

So dealing with the difference in vial thicknesses is simply a matter of converting to absorbance, applying Beer’s Law to find the equivalent absorbance of a 33 mm vial from the 10 mm cuvette, then converting back to transmittance. Then we will have simulated the standard experiment accounting for both the light source and the thickness of the specimen vial.
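That round trip (transmittance → absorbance → scale → transmittance) takes only a few lines. Here is a sketch for a single transmittance value; mapping it over the whole spectrum is the same operation at each wavelength.

```python
import math

def rescale_transmittance(T, b_measured=10.0, b_target=33.0):
    """Simulate a different path length using Beer's law.

    Converts transmittance to absorbance, scales the absorbance by the
    ratio of path lengths (33 mm vial vs 10 mm cuvette), and converts
    back to transmittance.
    """
    A = math.log10(1.0 / T)                  # T -> A
    A_scaled = A * (b_target / b_measured)   # A scales with thickness
    return 10.0 ** (-A_scaled)               # A -> T

# Example: 50% transmittance through the 10 mm cuvette becomes roughly
# 10% through the simulated 33 mm vial.
T33 = rescale_transmittance(0.5)
```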


I was able to build a color-measurement tool which converted from a continuous spectrum to 3-channel RGB color, which in turn let me match the color to standardized references. Building an automated alternative to the human eye-based standard method required a multi-step conversion.

  • Use of the observer functions to convert a continuous spectrum to 3-channel color.
  • Simulation of the standard light source called for in the standard method.
  • Correction for the smaller-thickness specimen which we used compared to the standard.

To me, the most interesting aspect of this project was understanding how the human eye perceives color. Seeing how the observer functions work showed how in a universe where light can have any wavelength, it is possible to simulate images using only 3 color channels. The usefulness of 3-color images is a result of just which wavelengths of light the human eye is sensitive to.

It is fascinating what this implies. Images displayed on 3-color monitors, printouts, and even paintings might look realistic to humans, but any animal with different color sensitivity would likely see them very differently. An image collected by a camera using UV, infrared, X-ray, or other non-visible light could also be represented simply by applying a different, specially-designed set of observer functions to convert to 3-channel RGB color. The human eye’s sensitivity is also wavelength-dependent, so a red and a blue light might have equal intensities in reality, but a human eye would see the blue light as brighter.

This all makes me appreciate the human body a little bit more, too. Just “eyeballing” a color measurement seemed like a pretty shaky foundation to build a standardized test on. Using an instrument still gives us a more detailed, objective, and quantitative record of the measurement. However, the eye is actually a rather complex and sophisticated device!

Measuring Color Part 2: The Visible Spectrum vs 3-Channel RGB Color

In my previous post, I laid out our overall goal: use a UV/Vis spectrophotometer to judge the color of fuel specimens. This post will introduce the two different kinds of information we will need to understand in order to solve the problem: the continuous spectrum measured by the UV/Vis spectrophotometer and the 3-channel RGB color value we want to use to identify the color.

The Visible Light Spectrum

We will gloss over the less interesting challenges of repairing the UV/Vis and getting it to talk to a computer. In short, the device just needed some adjustments to its light sources, and it had an RS-232 serial port which I could use to transmit data through an adapter to a listening computer.

The most important piece of the puzzle is the kind of data our instrument produces. To use the spectrophotometer, we place a 1 cm wide cuvette containing our specimen into the sample holder. Then the instrument measures how much light at different wavelengths is transmitted through the cuvette and specimen.

The white-capped cuvette in the center of this image contains our specimen.

So suppose we take a sample of fuel, pipette it into a cuvette, and take a quick spectrum measurement. This is what the resulting data looks like.

Transmittance spectrum collected from a sample of red-dyed diesel fuel.

What we get is a spectrum. The vertical axis shows the transmittance, which ranges from 0.0 to 1.0. A value of 1.0 would mean that 100% of the light was transmitted. A value of 0.0 would mean that absolutely no light was able to get through the specimen. This spectrum was collected within the range of wavelengths that humans can perceive, which runs from roughly 400 nm (violet) to 700 nm (deep red) wavelength. (If you don’t know what I mean by wavelength, you might want to read up a bit here.)

This specimen appeared red to the human eye. The light was mostly transmitted in the 650–700 nm range, which corresponds to red and orange colors and makes sense given how the specimen looked before I scanned it. Practically no light was transmitted below 550 nm wavelength, which means the specimen blocked blue and violet light. Somewhere in these ranges of transmitted or blocked light, we will find the information we need to categorize the color of this specimen.

3-Channel RGB Color

But how does this spectrum compare to RGB color? The idea behind RGB color is that 3 numbers – one each for the red, green, and blue color channels – can be used to describe almost any color visible to the human eye.

A screengrab from GIMP showing how different RGB color values can be selected to reproduce a particular color. It is also worth noting that there are many other frameworks for representing color. This one just happens to be the best match to the standard method.

One thing is immediately apparent upon comparing our example of RGB color (255,36,0) to the spectrum from the UV/Vis spectrophotometer. The spectrum contains a great deal more information!

This brings us to the real challenge of the project: converting a continuous spectrum into RGB color values. To do that, we need to consider how the human eye perceives color, which I will discuss in my next post.

Measuring Color Part 1: Setting Up the Problem

This is the story about how a combination of a simple color experiment and limited resources prompted me to design a new experiment and analysis for the lab where I worked. What started as a simple fuel test turned into a project that taught me a great deal about how humans perceive light and color.

The Problem

Suppose you work in a laboratory, and your clients want you to grade the color of some fuel specimens. They specify that you are supposed to assign a number to the fuel from 1 to 8, ranging from pale yellow (1) through red (4) to nearly black (8), with various shades and numbers in between. Well, it’s good to have clients, and the test sounds simple enough. Great!

But we can’t just eyeball that measurement, and the standard method calls for equipment you don’t have. The lab has limited resources, and you can’t seem to talk your boss into buying the parts you need or turning the clients down. So what do you do?


The standard methods of getting a color measurement require either visual matching to a reference material using a standard light source or use of a specialized piece of equipment. However, visual matching does not leave a quantitative record of the measurement, and we lacked the resources to acquire a specialized instrument. Furthermore, it would be advantageous to analyze colors that fell outside the narrowly-defined yellow-red-black range.

Maybe there was a way to get a measurement that was not only documentable, calibratable, and reproducible, but arguably better than the standard method of visually comparing the specimen to a piece of reference glass!

I found two key resources that would help me escape my conundrum:

  • The lab had an old, out-of-commission UV/Vis spectrophotometer.
  • ASTM D1500 provides the RGB color values for the different color numbers of specimens observed using a standard light source.

Maybe this old spectrophotometer could find a second life measuring fuel color!

So maybe I could get that spectrophotometer working, convert its output to RGB color values, and match it to the nearest standard color number! Then I would have a quantitative record and the capability to identify colors outside of the standard’s yellow-red-black palette.

The Challenge

So I had a path towards a method that I could feel good putting in front of a client.

  1. Bring the defunct UV/Vis back to life.
  2. Find a way to transfer the UV/Vis spectra to a computer.
  3. Convert a continuous spectrum to RGB space.
  4. Match the RGB coordinates to their nearest standardized color value.
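Step 4, for what it’s worth, amounts to a nearest-neighbor search in RGB space. A sketch with invented reference values (the actual table comes from ASTM D1500):

```python
import math

# Invented reference RGB values for a few color numbers; the actual
# values come from the ASTM D1500 standard.
REFERENCE_COLORS = {
    1: (0.95, 0.90, 0.55),  # pale yellow (placeholder)
    4: (0.65, 0.35, 0.00),  # red (placeholder)
    8: (0.05, 0.03, 0.02),  # nearly black (placeholder)
}

def nearest_color_number(rgb):
    """Return the color number whose reference RGB is closest to rgb."""
    return min(REFERENCE_COLORS,
               key=lambda n: math.dist(rgb, REFERENCE_COLORS[n]))

# A measured value like (0.649, 0.345, 0.004) would match color number 4.
measured = (0.649, 0.345, 0.004)
```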

Step 3 is where I will focus this series of blog posts because working on the spectrum → RGB conversion taught me a great deal about human vision and color perception. After all, how does the human eye really perceive the visible spectrum? How is it that we take just red, green, and blue light and still create what appears to be the entire spectrum of visible electromagnetic radiation? How can we convert a continuous spectrum to those 3 color channels?

The next post will get to the heart of the problem: How the continuous visible spectrum relates to the red, green, and blue color channels that humans use in our cameras and displays.

Rules of Exponents


$$x^a$$

We would call this “x raised to the a power”
x is called the base
a is called the exponent

$$7^{11}$$ is called “seven raised to the eleventh power”
the base is 7
the exponent is 11

Integer exponents
An integer exponent just means multiplying a number by itself that many times.

$$x^n = \underbrace{x \cdot x \cdots x}_{n\ \text{times}}$$

For example, $$2^3 = 2 \cdot 2 \cdot 2 = 8$$

Fraction exponents
An exponent which is the reciprocal of an integer means taking a root.

$$x^{1/n} = \sqrt[n]{x}$$

For example, $$9^{1/2} = \sqrt{9} = 3$$

Negative exponents
A negative exponent indicates a reciprocal.

$$x^{-a} = \frac{1}{x^a}$$

NOTICE: This means you don’t ever have to deal with fractions or roots ever again if you don’t want to! You can treat them all like exponents.

Algebra rules

Combining exponents
When two numbers with the same base are multiplied, the exponents may be added.

$$x^a \cdot x^b = x^{a+b}$$

Exponents in ratios
When the numerator and denominator have the same base, their exponents may be subtracted.

$$\frac{x^a}{x^b} = x^a \cdot x^{-b} = x^{a-b}$$

NOTE: This is just a combination of the negative exponent rule with the multiplication rule!

Distributing exponents
If you have 2 layers of exponents, they may be multiplied together.

$$\left(x^a\right)^b = x^{ab}$$

Negative bases
When the base is a negative number, the sign of the overall expression depends on whether the exponent is even or odd.

$$(-x)^n = -(x^n) \quad \text{if } n \text{ is an odd integer}$$
$$(-x)^n = x^n \quad \text{if } n \text{ is an even integer}$$

For example, $$(-2)^3 = -8$$ and $$(-2)^4 = 16$$


A warning about addition
Exponents DO NOT distribute cleanly over addition.

$$(x + y)^n \neq x^n + y^n$$

You either have to find a way to simplify the base first OR use the definition of the exponent to expand things, e.g. $$(x + y)^2 = (x + y)(x + y) = x^2 + 2xy + y^2$$
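As a quick worked example combining the multiplication, ratio, and negative exponent rules:

$$\frac{x^5 \cdot x^{-2}}{x^4} = x^{5 + (-2) - 4} = x^{-1} = \frac{1}{x}$$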