In memory of Dr. Alex Punnoose

Dr. Alex Punnoose was one of my favorite teachers at Boise State. Tragically, he passed away in 2016. However, his memory lives on in his students and what he taught us about the universe and our roles as scientists.

Dr. Punnoose’s “Probe-Interaction-Signal” method of classifying material characterization techniques is one of the most useful lessons he taught me. I do not know if he invented this approach to categorizing characterization techniques, but his implementation of Probe-Interaction-Signal was very intuitive, and I still use it as a memory tool and when writing methods sections in my papers.


In essence, most material characterization methods can be understood by breaking the process into 3 steps: Probe, Interaction, and Signal.

  • Probe: Whatever you use to interrogate the specimen: light, radiation, physical contact with a tool, bombardment with particles, etc.
  • Interaction: The physics-based principle which describes how the probe and specimen interact. Usually, this incorporates some material properties of the specimen.
  • Signal: What you measure to understand the material: emitted particles, reflected or emitted radiation, reaction forces, etc.

This framework is essentially the same as classifying the components of an experiment as independent factors, theory, and dependent response variables. The Probe and Signal are the components of the experiment which a scientist can measure and record. The Interaction involves the theory and equations which we can use to gain insight into the specimen we are interrogating.

Let’s start with a really simple, silly example.

Poking Something with a Stick

So suppose you are a child walking along the beach, and you see a crab sitting on the sand. It’s not moving. What do you do? Well, you can design an experiment within the Probe-Interaction-Signal framework.

  • Probe: Poke it with a stick.
  • Interaction: If the crab is alive, it will feel you poking it, and it will move. If it is dead, it will not react.
  • Signal: You look at the crab to see if it moves or does not move.

So here we have a simple example of a characterization method based on the physical principle that living things react to being poked while nonliving things do not. The Interaction theory tells us how to draw a conclusion based on the Signal we measure after applying a Probe.

X-ray Photoelectron Spectroscopy (XPS)

XPS is an excellent choice of name for a characterization technique since the name describes almost exactly how XPS works.

  • Probe: Monochromatic X-rays of known energy
  • Interaction: Electrons are released with a kinetic energy equal to the energy of the incoming X-rays minus the binding energy that held them to the specimen
  • Signal: The kinetic energy of the electrons released from the material

In many cases, the Interaction portion of the process is based on an underlying physical principle which we can describe mathematically. XPS is based on measuring binding energies which are characteristic of the atoms which emitted the photoelectron.

$$ E_{binding} = E_{X-ray} - (E_{kinetic} + \phi) $$

We controlled the X-ray energy and measured the kinetic energy. The φ part of the equation is just a constant representing the work function of the electron detector. (Some versions of the equation leave φ off, presuming that it is accounted for in the kinetic energy term.) So this equation lets us compute the binding energy of the atoms which emitted the photoelectrons, which in turn can be matched to the specimen composition.
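
To make the arithmetic concrete, here is a minimal Python sketch of the equation above. The kinetic energy and work function values are hypothetical, while 1486.6 eV is the energy of a common aluminum K-alpha laboratory X-ray source:

```python
# Sketch of the XPS binding-energy calculation (illustrative values only).
E_xray = 1486.6      # eV, Al K-alpha X-ray probe energy
E_kinetic = 1201.5   # eV, measured photoelectron kinetic energy (hypothetical)
phi = 4.5            # eV, detector work function (hypothetical)

# Binding energy = probe energy minus what the electron kept plus the
# detector's work function.
E_binding = E_xray - (E_kinetic + phi)
print(f"Binding energy: {E_binding:.1f} eV")
```

A binding energy computed this way would then be compared against tabulated values to identify which elements are present at the specimen surface.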

Understanding the Interaction also helps us anticipate the limitations of a characterization technique. Since XPS measures the energy of emitted electrons, and electrons’ ability to escape a material diminishes with depth below the surface, we know that XPS measures surface composition. Anything that would interfere with the electrons’ path to the detector could interfere with the measurement, so XPS is best performed in a vacuum.

Let’s take a quick look at a few other techniques through the Probe-Interaction-Signal lens.

X-ray Fluorescence Spectroscopy (XRF)

  • Probe: X-rays from a source (can be monochromatic or not)
  • Interaction: Electron transitions induced by the probe X-rays result in the emission of “secondary” X-rays which are characteristic of the atoms which produce them
  • Signal: The secondary X-rays

XRF is another spectroscopic technique because the Signal is X-rays which are characteristic of the element which produced them. XRF is also very similar to XPS because they share the same Probe. However, they measure different Signals and require different, but related, Interaction theories to interpret their results.

Optical Microscopy

  • Probe: Light from a source
  • Interaction: Light is scattered by features of the specimen surface
  • Signal: Scattered light is magnified and directed to a human eye or a camera

Microscopy is intuitive because, like human vision, it lets us measure the spatial locations of features on the surface of a specimen. However, the Probe-Interaction-Signal framework helps us understand how modifications to optical microscopic techniques could be made to extract additional information.

For example, we could modify the probe and signal by adding filters, making the interaction more complex and providing us with new information. By adding polarizing filters, we could deliberately exclude light which changed polarization upon interaction with the specimen. This can help determine the orientation of crystals in birefringent materials, such as aluminum oxide or barium titanate.

  • Probe: Polarized light from a source
  • Interaction: Light is scattered by features of the specimen surface and changes polarization angle depending on the orientation of birefringent surface features
  • Signal: Scattered light which is filtered by a polarizer, magnified, and directed to a human eye or a camera

Transmission Electron Microscopy (TEM)

  • Probe: Monochromatic electron beam
  • Interaction: Electrons are transmitted through the specimen and scattered by its atoms
  • Signal: Transmitted electrons form an image in the image plane and a diffraction pattern in the back focal plane

TEM is a far more powerful and sophisticated technique than my description makes it sound. My goal is to emphasize one important point: the spatial information from TEM images comes primarily from the Signal part of the technique. Let’s compare this to how scanning electron microscopy works.

Scanning Electron Microscopy (SEM)

  • Probe: “Primary” electron beam directed to pre-determined locations on a grid
  • Interaction: “Secondary” electrons are knocked loose from the specimen surface
  • Signal: Secondary electrons are collected by a detector

Notice how the location information for SEM comes from the Probe part of SEM rather than directly from the Signal like TEM and optical microscopy. It can be tempting to focus on the information carried in the Signal part of an experiment. However, the Probe-Interaction-Signal framework helps illustrate how important it is to interpret the dependent variables of an experiment in the context of the independent variables.

A Useful Model

We have only scratched the surface of material characterization techniques out there, but the Probe-Interaction-Signal concept can be applied to practically all of them. I find it especially handy when learning a new technique for the first time or training someone else. It’s also very useful in troubleshooting misbehaving instruments.

I hope you find Probe-Interaction-Signal useful, and I suspect it will not be the last of Dr. Punnoose’s lessons I transcribe into a blog post.

Measuring Color Part 3: The Observer Functions

Now that we’ve laid out the problem of taking a color measurement from a continuous spectrum, we can actually execute the conversion. We can start by considering the wavelengths of light that correspond to certain colors. For example, red light has a wavelength of about 650 nm, green light is about 550 nm, and blue light is down around 425 nm. In the real world, though, light can exist at any wavelength, so color is more complex than just measuring the intensity of light at one specific wavelength.

An example of a continuous spectrum collected from some red-dyed diesel fuel. We want to convert this kind of data into a 3-channel RGB color representation.

The Observer Functions

The human eye perceives red, green, and blue light each over a range of wavelengths. The sensitivity of the eye also varies depending on wavelength. For our project, we will apply some mathematical equations called the “Observer Functions” which approximate the sensitivity of human eyes at each wavelength.

The observer functions approximate how the eye responds to red, green, and blue light. The CIE C Standard approximates the standard light source required for an ASTM D1500 color measurement test. This light source appears white because it stimulates all 3 areas of sensitivity in the human eye.

The observer functions show a few interesting things. First, the human eye is far more sensitive to blue light than green or red, but that sensitivity is over a narrower range of wavelengths. Perception of “red” light mostly covers the longer visible wavelengths. However, the red observer function has a secondary peak at about the same wavelength range as blue light. This suggests that the light we think of as “blue” also stimulates human “red” receptors, albeit not as much as the longer wavelengths we associate with red light.

So how do we use these observer functions? Well, we simply need to multiply each of the observer functions by the continuous transmission spectrum that came from our UV/Vis spectrophotometer and compare that result to the original observer function. Thus if a specimen transmitted 100% of light in the “red” range of wavelengths, the observer function would remain unchanged, and we would see that the red receptors in our “eye” would be 100% stimulated.

Comparison of the ideal observer and standard light source functions (left) compared to an actual spectrum applied to the observer functions (right). Notice how the observer functions on the right are reduced in magnitude according to where they match up to the spectrum.

In the example above, we can see an analysis of a typical red-dyed diesel specimen. The grey curve on the right is the spectrum measured from the specimen, and we can see how the red, green, and blue observer functions are changed by the spectrum. The blue spectrum is practically non-existent. The red spectrum came through at nearly full strength. Finally, the green spectrum was reduced in intensity but has not been suppressed entirely. If we quantify how strongly each range of receptors in our “eye” was stimulated, we could measure the (red, green, blue) color as (0.649, 0.345, 0.004). Mathematically, we accomplish this by summing up the observer functions on the right and comparing them to the observer functions on the left.
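
The weighting scheme can be sketched in a few lines of Python. Note that the Gaussian “observer functions” and the step-shaped spectrum below are toy stand-ins I made up for illustration, not the real CIE curves or measured data:

```python
import numpy as np

# Wavelength grid over the visible range, in nm.
wavelengths = np.arange(400, 701, 5)

def gaussian(center, width):
    """Toy stand-in for a real observer function."""
    return np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

observers = {
    "red": gaussian(600, 40),
    "green": gaussian(550, 40),
    "blue": gaussian(450, 25),
}

# Hypothetical transmission spectrum of a red-dyed specimen:
# blocks short wavelengths, ramps up to full transmission at long ones.
transmission = np.clip((wavelengths - 550) / 100, 0, 1)

# Stimulation of each receptor: the observer function weighted by the
# spectrum, compared to the unobstructed observer function (100%
# transmission at every wavelength).
color = {name: (obs * transmission).sum() / obs.sum()
         for name, obs in observers.items()}
print(color)
```

With this red-passing spectrum, the “red” channel comes out strongest and the “blue” channel nearly vanishes, mirroring the diesel example above.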

So this gives us a color measurement! All that remains is to make some adjustments to make sure our measurement accounts for the conditions of an actual experiment.

Simulating a Standard Experiment

Light Source

While our color measurement does represent the color of the actual specimen, there is an important difference between the UV/Vis spectrophotometer and the standard method: the light source! If we were to follow the standard method, we would use a light source which produces different intensities of light at different wavelengths. The UV/Vis automatically corrects for the light source intensity, so the transmission measurements it provides are as if they were collected using a light source which was equally intense at every wavelength.

For our experiment, correcting for the light source is fairly simple. We just take the spectrum of the standard light source (the “CIE C source” in the plots above) and apply it to our transmission spectrum before we perform our calculations using the observer functions.
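
A minimal sketch of that correction, using a made-up smooth curve in place of the real CIE C intensity data:

```python
import numpy as np

# Wavelength grid over the visible range, in nm.
wavelengths = np.arange(400, 701, 5)

# Hypothetical flat transmission spectrum from the UV/Vis, which reports
# results as if the source were equally bright at every wavelength.
transmission = np.ones_like(wavelengths, dtype=float)

# Stand-in for the standard illuminant's relative intensity (NOT real
# CIE C data; just a smooth curve for illustration).
illuminant = 1.0 + 0.001 * (wavelengths - 550)

# Weighting the transmission by the illuminant simulates viewing the
# specimen under the standard light source.
effective_spectrum = transmission * illuminant
```

The `effective_spectrum` would then be fed into the observer-function calculation in place of the raw transmission data.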


Vial Thickness

There is another difference between our UV/Vis experiment and the standard method: the standard uses a 33 mm thick observation vial, while the UV/Vis spectrophotometer uses 10 mm wide cuvettes. So the light in our measurements traveled through less of the specimen than the standard mandates and was absorbed less in total. We can compensate for this using Beer’s Law.

$$A = \epsilon c b$$

Beer’s law states that absorbance, A, scales proportionally with the specimen thickness b. (The other constants are related to the concentration, c, and molar absorptivity, ε, of the specimen, which do not change.) So we know that to simulate the standard measurement, we simply need to scale the absorbance by 3.3 to get an equivalent result! However, our measurement is not in absorbance, it is in transmittance. Fortunately, the conversion between absorbance, A, and transmittance, T, is straightforward.

$$A = \log_{10}\left(\frac{1}{T}\right) \\
T = 10^{-A} $$

So dealing with the difference in vial thicknesses is simply a matter of converting to absorbance, applying Beer’s Law to find the equivalent absorbance of a 33 mm vial from the 10 mm cuvette, then converting back to transmittance. Then we will have simulated the standard experiment accounting for both the light source and the thickness of the specimen vial.
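
Here is a small Python sketch of that round trip. The function name and the 80% example value are my own choices; the 10 mm and 33 mm thicknesses come from the description above:

```python
import numpy as np

def scale_transmittance(T, b_measured=10.0, b_standard=33.0):
    """Simulate transmittance through b_standard mm of specimen,
    given a measurement through b_measured mm."""
    T = np.asarray(T, dtype=float)
    A = np.log10(1.0 / T)                      # transmittance -> absorbance
    A_scaled = A * (b_standard / b_measured)   # Beer's Law: A scales with path length
    return 10.0 ** (-A_scaled)                 # absorbance -> transmittance

# Example: a specimen transmitting 80% through the 10 mm cuvette.
print(scale_transmittance(0.8))
```

Because absorbance scales linearly with thickness, this is equivalent to raising the transmittance to the power 3.3, so a specimen always looks darker in the simulated 33 mm vial than in the 10 mm cuvette.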


I was able to build a color-measurement tool which converted from a continuous spectrum to 3-channel RGB color, which in turn let me match the color to standardized references. Building an automated alternative to the human eye-based standard method required a multi-step conversion.

  • Use of the observer functions to convert a continuous spectrum to 3-channel color.
  • Simulation of the standard light source called for in the standard method.
  • Correction for the smaller-thickness specimen which we used compared to the standard.

To me, the most interesting aspect of this project was understanding how the human eye perceives color. Seeing how the observer functions work showed how in a universe where light can have any wavelength, it is possible to simulate images using only 3 color channels. The usefulness of 3-color images is a result of just which wavelengths of light the human eye is sensitive to.

It is fascinating what this implies. Images displayed on 3-color monitors, printouts, and even paintings might look realistic to humans, but any animal with different color sensitivity would likely see them very differently. Representing an image collected by a camera using UV, infrared, X-ray, or other non-visible light could also be accomplished simply by applying a different, specially-designed set of observer functions to convert to 3-channel RGB color. The human eye’s sensitivity is also wavelength-dependent. So a red and a blue light might have equal intensities in reality, but a human eye would see the blue light as being brighter.

This all makes me appreciate the human body a little bit more, too. Just “eyeballing” a color measurement seemed like a pretty shaky foundation to build a standardized test on. Using an instrument still gives us a more detailed, objective, and quantitative record of the measurement. However, the eye is actually a rather complex and sophisticated device!

Measuring Color Part 2: The Visible Spectrum vs 3-Channel RGB Color

In my previous post, I laid out our overall goal: use a UV/Vis spectrophotometer to judge the color of fuel specimens. This post will introduce the two different kinds of information we will need to understand in order to solve the problem: the continuous spectrum measured by the UV/Vis spectrophotometer and the 3-channel RGB color value we want to use to identify the color.

The Visible Light Spectrum

We will gloss over the less interesting challenges of repairing the UV/Vis and getting it to talk to a computer. In short, the device just needed some adjustments to its light sources, and it had an RS-232 serial port which I could use to transmit data through an adapter to a listening computer.

The most important piece of the puzzle is the kind of data our instrument produces. To use the spectrophotometer, we place a 1 cm wide cuvette containing our specimen into the sample holder. Then the instrument measures how much light at different wavelengths is transmitted through the cuvette and specimen.

The white-capped cuvette in the center of this image contains our specimen.

So suppose we take a sample of fuel, pipette it into a cuvette, and take a quick spectrum measurement. This is what the resulting data looks like.

Transmittance spectrum collected from a sample of red-dyed diesel fuel.

What we get is a spectrum. The vertical axis shows the transmittance, which ranges from 0.0 to 1.0. A value of 1.0 would mean that 100% of the light was transmitted. A value of 0.0 would mean that absolutely no light was able to get through the specimen. This spectrum was collected within the range of wavelengths that humans can perceive, which runs from roughly 400 nm (violet) to 700 nm (deep red) wavelength. (If you don’t know what I mean by wavelength, you might want to read up a bit here.)

This specimen appeared red to the human eye. The light was mostly transmitted in 650-700 nm range of wavelength, which would correspond to red and orange colors, which makes sense given how the specimen looked before I scanned it. Practically no light was transmitted below 550 nm wavelength, which means the specimen blocked blue and violet light. Somewhere in these ranges of transmitted or blocked light, we will find the information we need to categorize the color of this specimen.

3-Channel RGB Color

But how does this spectrum compare to RGB color? The idea behind RGB color is that 3 numbers – one each for the red, green, and blue color channels – can be used to describe almost any color visible to the human eye.

A screengrab from GIMP showing how different RGB color values can be selected to reproduce a particular color. It is also worth noting that there are many other frameworks for representing color. This one just happens to be the best match to the standard method.

One thing is immediately apparent upon comparing our example of RGB color (255,36,0) to the spectrum from the UV/Vis spectrophotometer. The spectrum contains a great deal more information!

This brings us to the real challenge of the project: converting a continuous spectrum into RGB color values. To do that, we need to consider how the human eye perceives color, which I will discuss in my next post.

Measuring Color Part 1: Setting Up the Problem

This is the story about how a combination of a simple color experiment and limited resources prompted me to design a new experiment and analysis for the lab where I worked. What started as a simple fuel test turned into a project that taught me a great deal about how humans perceive light and color.

The Problem

Suppose you work in a laboratory, and your clients want you to grade the color of some fuel specimens. They specify that you are supposed to assign the fuel a number from 1 to 8, ranging from pale yellow (1) through red (4) to nearly black (8), with various shades and numbers in between. Well, it’s good to have clients, and the test sounds simple enough. Great!

But we can’t just eyeball that measurement, and the standard method calls for equipment you don’t have. The lab has limited resources, and you can’t seem to talk your boss into buying the parts you need or turning the clients down. So what do you do?


The standard methods of getting a color measurement require either visual matching to a reference material using a standard light source or use of a specialized piece of equipment. However, visual matching does not leave a quantitative record of the measurement, and we lacked the resources to acquire a specialized instrument. Furthermore, it would be advantageous to analyze colors that fell outside the narrowly-defined yellow-red-black range.

Maybe there was a way to get a measurement that was not only documentable, calibratable, and reproducible, but arguably better than the standard method of visually comparing the specimen to a piece of reference glass!

I found two key resources that would help me escape my conundrum:

  • The lab had an old, out-of-commission UV/Vis spectrophotometer.
  • ASTM D1500 provides the RGB color values for the different color numbers of specimens observed using a standard light source.

Maybe this old spectrophotometer could find a second life measuring fuel color!

So maybe I could get that spectrophotometer working, convert its output to RGB color values, and match it to the nearest standard color number! Then I would have a quantitative record and the capability to identify colors outside of the standard’s yellow-red-black palette.

The Challenge

So I had a path towards a method that I could feel good putting in front of a client.

  1. Bring the defunct UV/Vis back to life.
  2. Find a way to transfer the UV/Vis spectra to a computer.
  3. Convert a continuous spectrum to RGB space.
  4. Match the RGB coordinates to their nearest standardized color value.
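
Step 4 can be sketched as a simple nearest-neighbor search in RGB space. The reference table below is purely illustrative and is not the actual ASTM D1500 data:

```python
# Hypothetical reference table mapping color numbers to RGB values
# (NOT the real ASTM D1500 values; for illustration only).
reference_colors = {
    1.0: (0.95, 0.90, 0.55),  # pale yellow (hypothetical)
    4.0: (0.75, 0.25, 0.10),  # red (hypothetical)
    8.0: (0.10, 0.05, 0.05),  # nearly black (hypothetical)
}

def nearest_color_number(rgb):
    """Return the reference color number with the smallest RGB distance."""
    def distance_sq(ref):
        return sum((a - b) ** 2 for a, b in zip(rgb, ref))
    return min(reference_colors, key=lambda num: distance_sq(reference_colors[num]))

print(nearest_color_number((0.70, 0.30, 0.10)))  # closest to the red entry
```

A real implementation would carry the full table of standard color numbers, but the matching logic stays this simple.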

Step 3 is where I will focus this series of blog posts because working on the spectrum → RGB conversion taught me a great deal about human vision and color perception. After all, how does the human eye really perceive the visible spectrum? How is it that we take just red, green, and blue light and still create what appears to be the entire spectrum of visible electromagnetic radiation? How can we convert a continuous spectrum to those 3 color channels?

The next post will get to the heart of the problem: How the continuous visible spectrum relates to the red, green, and blue color channels that humans use in our cameras and displays.