Scripted Drafting

Suppose you want to create a drawing for a simple part, like a typical tensile dogbone.

An “A” type tensile dogbone according to ASTM E345.

You could draw up this part according to specifications in a conventional drafting program. But parts like this often come in different sizes with different proportions, depending on the material and your testing needs. So for this kind of part, there’s a strong chance you would need to draft different versions to satisfy different specifications. When that need arises, your first impulse might be to draft a whole new part or modify an old one as needed, but there is a more efficient approach.

A dogbone drafted exactly according to E345.

The key is in recognizing that all of the dogbones you are making have the same basic features, and you can follow the same basic steps when drafting each one. Here is one possible workflow:

  • Draw a rectangle for the reduced middle section
  • Draw rectangles for the grips at the ends
  • Subtract out the rounded fillets

Regardless of how long, wide, or thick the dogbone is, you can make it following those same steps. All that changes are the measurements.

A dogbone that is half as wide, but four times as thick as the last one.

So if we approach our drafting project as a sequence of steps, we can draft practically every conceivable dogbone with the effort of drafting just one. Many drafting suites have some kind of scripting capability, and for our example there is a good free option: OpenSCAD.

Scripts on the left panel lay out the steps of drafting a part. The right panel shows a preview of the result.

OpenSCAD only lets you draft parts with scripts, so making an all-purpose dogbone script is simply a matter of setting up the variables which describe the part dimensions at the beginning of the script.

Note how I decided to build in a little math for the grip lengths so they do not need to be defined separately.

Then, when it comes time to create the part, the same set of steps is followed every time.

The steps to creating a “dogbone” part are always the same. Only the measurements change.
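
Below is a minimal sketch of what such a script might look like in OpenSCAD. The dimension values are placeholders for illustration, not numbers from E345 or any other standard:

```openscad
// ---- Part dimensions (mm): placeholder values, not from any standard ----
gauge_length = 32;   // length of the reduced middle section
gauge_width  = 6;    // width of the reduced middle section
grip_width   = 10;   // width of the grip ends
grip_length  = 30;   // length of each grip end
fillet_r     = 6;    // radius of the fillets joining gauge to grips
thickness    = 3;    // out-of-plane thickness

// A little math so the transition length need not be defined separately
half_step = (grip_width - gauge_width) / 2;
trans_len = sqrt(fillet_r*fillet_r - (fillet_r - half_step)*(fillet_r - half_step));

linear_extrude(height = thickness)
difference() {
    union() {
        // Step 1: draw a rectangle for the reduced middle section
        square([gauge_length, gauge_width], center = true);
        // Step 2: draw rectangles for the grips (plus transitions) at the ends
        for (s = [-1, 1])
            translate([s * (gauge_length + trans_len + grip_length) / 2, 0])
                square([trans_len + grip_length, grip_width], center = true);
    }
    // Step 3: subtract out the rounded fillets
    for (sx = [-1, 1], sy = [-1, 1])
        translate([sx * gauge_length / 2, sy * (gauge_width / 2 + fillet_r)])
            circle(r = fillet_r, $fn = 64);
}
```

Note that this construction assumes the fillet radius is at least as large as the half-width change between the grip and gauge sections.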

So now you only need to change the numbers at the beginning of the script any time you want to draft a dogbone with new dimensions. This concept of simple, scripted drafting should work with a variety of parts. Keep an eye out for situations where:

  • You are designing simple parts
  • The parts have interrelated measurements that can be handled algebraically
  • Every version can be made by repeating the same set of steps
  • You will need to draft multiple versions of the part

If you use scripted drafting, you can save yourself the trouble of having to go back and re-invent the part from scratch each time. All it takes is a little thought focusing on the steps it takes to make the part rather than just the end result.

A wide, thick dogbone made by simply changing a few numbers and re-rendering the part.

Probe-Interaction-Signal

In memory of Dr. Alex Punnoose
1968-2016

Dr. Alex Punnoose was one of my favorite teachers at Boise State. Tragically, he passed away in 2016. However, his memory lives on in his students and what he taught us about the universe and our roles as scientists.

Dr. Punnoose’s “Probe-Interaction-Signal” method of classifying material characterization techniques is one of the most useful lessons he taught me. I do not know if he invented this approach to categorizing characterization techniques, but his implementation of Probe-Interaction-Signal was very intuitive, and I still use it as a memory tool and when writing methods sections in my papers.

Probe-Interaction-Signal

In essence, most material characterization methods can be understood by breaking the process into 3 steps: Probe, Interaction, and Signal.

  • Probe: Whatever you use to interrogate the specimen: light, radiation, physical contact with a tool, bombardment with particles, etc.
  • Interaction: The physics-based principle which describes how the probe and specimen interact. Usually, this incorporates some material properties of the specimen.
  • Signal: What you measure to understand the material: emitted particles, reflected or emitted radiation, reaction forces, etc.

This framework is essentially the same as classifying the components of an experiment as independent factors, theory, and dependent response variables. The Probe and Signal are the components of the experiment which a scientist can measure and record. The Interaction involves the theory and equations which we can use to gain insight into the specimen we are interrogating.

Let’s start with a really simple, silly example.

Poking Something with a Stick

So suppose you are a child walking along the beach, and you see a crab sitting on the sand. It’s not moving. What do you do? Well, you can design an experiment within the Probe-Interaction-Signal framework.

  • Probe: Poke it with a stick.
  • Interaction: If the crab is alive, it will feel you poking it, and it will move. If it is dead, it will not react.
  • Signal: You look at the crab to see if it moves or does not move.

So here we have a simple example of a characterization method based on the physical principle that living things react to being poked while nonliving things do not. The Interaction theory tells us how to draw a conclusion based on the Signal we measure after applying a Probe.

X-ray Photoelectron Spectroscopy (XPS)

XPS is an excellent choice of name for a characterization technique since the name describes almost exactly how XPS works.

  • Probe: Monochromatic X-rays of known energy
  • Interaction: Electrons are released with an energy equal to the incoming X-rays minus the binding energy that held them to the specimen
  • Signal: The kinetic energy of the electrons released from the material

In many cases, the Interaction portion of the process is based on an underlying physical principle which we can describe mathematically. XPS is based on measuring binding energies which are characteristic of the atoms which emitted the photoelectron.

$$ E_{\text{binding}} = E_{\text{X-ray}} - (E_{\text{kinetic}} + \phi) $$

We controlled the X-ray energy and measured the kinetic energy. The φ part of the equation is just a constant representing the work function of the electron detector. (Some versions of the equation will leave φ off, presuming that it is accounted for in the kinetic energy term.) So this equation lets us compute the binding energy of the atoms which emitted the photoelectrons, which in turn can be matched to the specimen composition.
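
As a quick worked example with assumed, illustrative numbers: a standard Al Kα source produces X-rays at 1486.6 eV. If the detector records photoelectrons with a kinetic energy of 1201.8 eV (folding φ into the kinetic energy term), then

$$ E_{\text{binding}} = 1486.6\ \text{eV} - 1201.8\ \text{eV} = 284.8\ \text{eV} $$

which matches the characteristic C 1s binding energy, indicating carbon at the specimen surface.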

Understanding the Interaction also helps us anticipate the limitations of a characterization technique. Since XPS measures the energy of emitted electrons, and electrons’ ability to escape a material diminishes with depth below the surface, we know that XPS measures surface composition. Anything that would interfere with the electrons’ path to the detector could interfere with the measurement, so XPS is best performed in a vacuum.

Let’s take a quick look at a few other techniques through the Probe-Interaction-Signal lens.

X-ray Fluorescence Spectroscopy (XRF)

  • Probe: X-rays from a source (can be monochromatic or not)
  • Interaction: Electron transitions induced by the probe X-rays result in the emission of “secondary” X-rays which are characteristic of the atoms which produce them
  • Signal: The secondary X-rays

XRF is another spectroscopic technique because the Signal is X-rays which are characteristic of the element which produced them. XRF is also very similar to XPS because they share the same Probe. However, they measure different Signals and require different, but related, Interaction theories to interpret their results.

Optical Microscopy

  • Probe: Light from a source
  • Interaction: Light is scattered by features of the specimen surface
  • Signal: Scattered light is magnified and directed to a human eye or a camera

Microscopy is intuitive because, like human vision, it lets us measure the spatial locations of features on the surface of a specimen. However, the Probe-Interaction-Signal framework helps us understand how modifications to optical microscopic techniques could be made to extract additional information.

For example, we could modify the Probe and Signal by adding filters, making the Interaction more complex and providing us with new information. By adding polarizing filters, we could deliberately exclude light which changed polarization upon interaction with the specimen. This can help determine the orientation of crystals of birefringent materials, such as aluminum oxide or barium titanate.

  • Probe: Polarized light from a source
  • Interaction: Light is scattered by features of the specimen surface and changes polarization angle depending on the orientation of birefringent surface features
  • Signal: Scattered light which is filtered by a polarizer, magnified, and directed to a human eye or a camera

Transmission Electron Microscopy (TEM)

  • Probe: Monochromatic electron beam
  • Interaction: Electrons are transmitted through the specimen and scattered by its atoms
  • Signal: Transmitted electrons form an image in the image plane and a diffraction pattern in the back focal plane

TEM is a far more powerful and sophisticated technique than my description makes it sound. My goal is to emphasize one important point: the spatial information from TEM images comes primarily from the Signal part of the technique. Let’s compare this to how scanning electron microscopy works.

Scanning Electron Microscopy (SEM)

  • Probe: “Primary” electron beam directed to pre-determined locations on a grid
  • Interaction: “Secondary” electrons are knocked loose from the specimen surface
  • Signal: Secondary electrons are collected by a detector

Notice how the location information for SEM comes from the Probe part of SEM rather than directly from the Signal, as in TEM and optical microscopy. It can be tempting to focus on the information carried in the Signal part of an experiment. However, the Probe-Interaction-Signal framework helps illustrate how important it is to interpret the dependent variables of an experiment in the context of the independent variables.

A Useful Model

We have only scratched the surface of material characterization techniques out there, but the Probe-Interaction-Signal concept can be applied to practically all of them. I find it especially handy when learning a new technique for the first time or training someone else. It’s also very useful in troubleshooting misbehaving instruments.

I hope you find Probe-Interaction-Signal useful, and I suspect it will not be the last of Dr. Punnoose’s lessons I transcribe into a blog post.

Image-Centric vs Deformation-Centric DIC Analysis

Digital Image Correlation Analysis

Digital image correlation (DIC) is a technique for measuring the deformation (i.e. strain) of materials using a sequence of images taken over the course of a mechanical experiment. DIC analysis generally follows 4 steps: capturing an image sequence, tracking features, deformation analysis, and data mining.

Flowchart mapping out the 4 steps of DIC analysis and the type of data produced by each step.

Problem: Image-Centric DIC Limitations

Our research group used DIC as our primary tool for non-contact measurement of material deformation. However, our work was inhibited by several limitations of our existing DIC analysis software.

  1. Computationally inefficient and error-prone motion tracking
  2. No measurement validation or error estimation
  3. Poor record-keeping of the steps of completed DIC analyses
  4. A single type of strain result which required long computation times and large amounts of disk space
  5. Limited data mining potential due to strain analysis, static reference states, and lack of error estimation

However, the biggest limitation was that our old DIC software was image-centric. From beginning to end, the data was formatted, processed, and saved in arrays matching pixels in the original image. This approach ignores the fact that the discrete feature tracking and the deformation gradient calculation trade location accuracy for displacement and deformation information while introducing and propagating error. Handling the data at the resolution of the image wastes computing resources while misrepresenting the accuracy of the analysis.

Solution: Deformation-Centric DIC

I started by retrofitting our old DIC software with new features and add-ons to compensate for its shortcomings.

  1. Improved motion tracking based on the Lucas-Kanade optical flow functions from the OpenCV library (a generic sketch follows this list)
  2. A “rigid body motion” test for motion tracking validation, error estimation, and artifact detection
  3. A robust documentation policy that saves every step of the analysis in a human- and machine-readable directory tree, e.g. .\images\motion tracking\deformation\data mining
  4. Deformation gradient-focused data structures which save computational resources and enable a wider variety of data mining strategies without sacrificing accuracy
  5. Flexible reference states, making it possible to target sections of experiments and compute deformation rates
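
For illustration, here is a minimal sketch of grid-point tracking with OpenCV's Lucas-Kanade functions in Python. The file names, grid spacing, and tracking parameters are assumptions for the example; this is a generic illustration, not my DIC implementation:

```python
# Minimal sketch: track a grid of points between two frames with
# OpenCV's pyramidal Lucas-Kanade optical flow. Hypothetical file names.
import cv2
import numpy as np

prev_img = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Seed points on a regular grid over the region of interest
ys, xs = np.mgrid[50:prev_img.shape[0] - 50:20, 50:prev_img.shape[1] - 50:20]
pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32).reshape(-1, 1, 2)

# Track the grid points from one frame to the next
new_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev_img, next_img, pts, None,
    winSize=(21, 21), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01),
)

# Keep displacements only for points that tracked successfully
ok = status.ravel() == 1
displacements = (new_pts - pts)[ok].reshape(-1, 2)
```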

In addition to these improvements, my DIC software was built fundamentally differently from our old DIC platform. Rather than handling data in an image-centric manner, I implemented a deformation-centric scheme. My DIC software treats deformation data in a manner similar to the nodes and elements found in finite element analysis. This approach facilitates error estimation and enables faster computation times and more efficient use of memory.
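
For context, the central quantity in such a deformation-centric scheme is the standard deformation gradient of continuum mechanics, from which strain measures (here, the Green-Lagrange strain) follow per element rather than per pixel:

$$ \mathbf{F} = \frac{\partial \mathbf{x}}{\partial \mathbf{X}}, \qquad \mathbf{E} = \frac{1}{2}\left(\mathbf{F}^{\mathsf{T}}\mathbf{F} - \mathbf{I}\right) $$

where X is a point in the reference configuration and x is its deformed position.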

Impact: Unprecedented Access to Material Deformation

My analysis suite has become the primary tool of my former research group at Georgia Tech. They are now able to distinguish real deformation from artifacts, validate their measurements, and perform sophisticated strain analyses which are not possible with conventional DIC analysis. For my dissertation, I combined deformation rate analyses with in-situ stress measurements, creating the first-ever real-time deformation energy measurement based entirely on empirical measurement without simulation or modeling. By building a DIC analysis suite from the ground up with deformation analysis in mind, I gave my research group significantly more powerful tools for mining mechanical testing data.

K: The Stress Intensity Factor

Fracture Mechanics is the study of how cracks grow and materials break apart. One of the most important concepts in the world of fracture mechanics is K, the linear-elastic stress intensity factor.

It is easy to confuse what K is used for with what it actually means. K is useful for measuring a material’s resistance to fracture starting at a sharp-tipped crack. However, K is defined as a measure of the magnitude of the stress distribution around a sharp crack tip in a linear-elastic solid. K only has meaning in the context of materials that can be expected to form a certain distribution of stress around a sharp crack tip!

So if we have something that is predominantly linear-elastic (high-strength steels, glass, etc.) and it contains a sharp-tipped crack, then the stress distribution around that crack can be given in terms of this parameter we call K. The stress distributions for a crack in tension perpendicular to the crack faces (aka Mode I loading) are:

$$\sigma _{\text{yy}}=\frac{K}{\sqrt{2 \pi r}} \cos \left(\frac{\theta }{2}\right) \left(1+\sin \left(\frac{\theta }{2}\right) \sin \left(\frac{3 \theta }{2}\right)\right)$$

$$\sigma _{\text{xx}}=\frac{K}{\sqrt{2 \pi r}} \cos \left(\frac{\theta }{2}\right) \left(1-\sin \left(\frac{\theta }{2}\right) \sin \left(\frac{3 \theta }{2}\right)\right)$$

$$\sigma _{\text{xy}}=\frac{K}{\sqrt{2 \pi r}} \sin \left(\frac{\theta }{2}\right) \cos \left(\frac{\theta }{2}\right) \cos \left(\frac{3 \theta }{2}\right)$$
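
To put numbers to these equations, consider the stress directly ahead of the crack tip (θ = 0), where they reduce to σ_yy = σ_xx = K/√(2πr) and σ_xy = 0. For K = 1 MPa √m at a distance r = 1 mm ahead of the tip:

$$ \sigma _{\text{yy}} = \frac{1\ \text{MPa}\sqrt{\text{m}}}{\sqrt{2\pi \cdot 0.001\ \text{m}}} \approx 12.6\ \text{MPa} $$

Doubling K to 2 MPa √m doubles the stress at every point.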

These plots show how the stress distribution looks for stress intensity factors of K = 1 MPa √m and K = 2 MPa √m.


Stress distribution for a stress intensity factor of K = 1 MPa √m. The crack tip is located at (0,0) and the crack follows the x-axis along x < 0.

Stress distribution for a stress intensity factor of K = 2 MPa √m. Note that I kept the same scale so that we can see the effect on the spatial distribution.

Hopefully, this helps illustrate what the stress intensity factor, K, means in a mathematical sense. Though K is useful for measuring fracture toughness, it is only applicable for sharp crack tips in linear-elastic materials that exhibit stress distributions similar to what I have shown here.

Note that these solutions are approximations: they are the first, most significant term of a summation. But K is still a very useful parameter for materials that obey linear-elastic fracture mechanics.

I created a live-view applet for Wolfram Cloud, but it wasn’t playing nice with the cloud, hence my screenshots above. In the event I get it working, I’ll put it here.

Ce qui est simple est toujours faux. Ce qui ne l’est pas est inutilisable.
What is simple is always wrong. What is not is unusable.

Paul Valéry