Blog

Hidden Blade (2023)

I don’t think I have ever had such mixed feelings about how to rate a movie. If I were grading it on a rubric, it would have either a perfect score or a zero in every category. Hidden Blade is a very mixed bag.

A little historical background is needed to understand this movie. Hidden Blade is set in the late 1930s and early 1940s in Shanghai under occupation by a declining Empire of Japan and kept in check by a puppet government led by Wang Jingwei. Shanghai is rife with tension between the occupying Japanese forces, the puppet government’s terrifying secret police, an underground resistance movement, and the civilians caught in the crossfire. One real historical figure referenced in the film is Ding Mocun, the brutal head of the secret police who was notorious for using torture as his preferred method of keeping the populace in line.

At least it looks good!

First, the good. Hidden Blade looks gorgeous and has an outstanding cast. Tony Leung Chiu-wai and Wang Yibo are the leads and carry their scenes effortlessly. Tony plays a charming but ruthless head of the secret police (sound familiar?) while Wang Yibo plays a low-level goon and interrogator who ingratiates himself with the Japanese officer overseeing the occupation. Both are charming, and their performances reveal the characters’ inner turmoil. But the best performances might actually come from a pair of minor female characters – one a Communist resistance fighter who lures Japanese soldiers to their deaths and the other a Nationalist spy who is caught and interrogated by Tony.

Hidden Blade is also gorgeous. It embraces a washed-out neo noir look, and there are plenty of moody shots in dimly-lit rooms or rainy streets. Most of the film feels bleak and claustrophobic. Occasionally, the story switches over to a war drama in the vein of Schindler’s List or Saving Private Ryan, and those scenes, while horrifying in content, are beautifully executed. The score seems to crib from the Denis Villeneuve/Hans Zimmer playbook (Dune/Blade Runner 2049) of throwing a beautiful image on screen, waiting half a beat, then blasting the audience with a huge full-orchestra chord. It’s not the most original take on movie-making, but it is very well-executed.

Tony is awesome, but not awesome enough to save this movie.

If I loved the acting and visuals, surely Hidden Blade must get high marks, right? Well, this is where things fall apart: The writing and editing are a disorienting mess. As a piece of storytelling, Hidden Blade fails to be entertaining, emotionally moving, or to make any kind of sense. I throw editing in there because I can’t shake the feeling that with some judicious cuts and a little reorganization, there might be a great movie hidden somewhere in this mess. I wouldn’t be surprised if there is a better version out there. But as released, there is no way to tell what any character is trying to accomplish or what they care about. And because the characters seem to change their personalities between scenes and have no consistent motivations, there is no drama or tension. So many great scenes roll by, but they are disconnected vignettes that fail to coalesce into a story.

To illustrate the problem, I have to use some big spoilers. Consider this your warning.

Hidden Blade is obviously propaganda-laden, but I actually don’t hold that against it. Clearly their intent was to write some historical fiction where the heroes were brave Communists fighting off the evil occupiers and treacherous collaborators. That’s fine. It might be ahistorical, but a movie like that could still be good art and entertaining. But Hidden Blade isn’t even good propaganda. Let me explain…

So suppose Hidden Blade was intended to paint Communists in a good light. That should be easy to pull off. Have some brave Communist resistance fighters battle the evil occupiers. Make them go undercover, build some tension, put them in a few morally difficult situations, and ultimately have our heroes take down the occupation from the inside. We even have Ding Mocun as the real-life inspiration for a truly horrifying villain. Easy peasy.

But that’s not what Hidden Blade does. Wang Yibo is a low-level enforcer for the secret police, but also a secret Communist freedom fighter. He does horrible things for the occupation – capturing, torturing, and killing resistance fighters and innocent people. Sure, he gets in one brawl with some Japanese soldiers, but that’s more about a personal grievance than any desire to end the occupation. Does our secret freedom fighter ever really do anything to take down the occupiers to justify all his evil deeds? Nope. (More on this later.)

The best character in the movie is also barely in the movie.

Part of the problem is that the secret police force Wang Yibo’s character has infiltrated is run by Tony Leung’s character, ANOTHER secret Communist freedom-fighter. He interrogates prisoners. He signs death warrants. He sends Wang Yibo off to torture and kill people. He’s basically Ding Mocun, but sexier because he’s played by Tony Leung. His most significant acts of rebellion are silencing a defector and letting a Nationalist spy go free. But for the most part, he does an excellent job of terrorizing the local populace. He does more to help the Japanese occupation than he does to hinder it. Not exactly heroic behavior.

Our heroes aren’t revealed as secret Communist freedom-fighters until the very end. It’s a bewildering decision. If their identities had been revealed earlier, it would have added dramatic tension. Will their secret be found out? Do they have inner turmoil about the things they must do to maintain their cover? Never mind that – it would be too interesting. Instead we watch awful collaborators commit war crimes for two hours. The big reveal is practically hidden in a post-credits scene.

There is an 11th-hour plot point where the Japanese commander shows Wang Yibo’s character intelligence on Japanese defenses in Manchuria, but it’s far too late in the film to mean anything to the story. Plus, it’s total nonsense. Why would the local garrison officer have secret battle plans for a region hundreds of miles to the north? There’s no reason he would have those plans, no reason anyone else would try to get them from him, and it’s not clear why the plans would make any difference in defeating the Empire. It makes zero sense. Think about it for one moment and you realize that Wang Yibo’s character was a loyal enforcer who got lucky after the occupation ended – no thanks to him. The movie comes across as the story of two secret Communist spies collaborating with the Japanese occupation instead of the story of two brave freedom fighters.

I still wonder if Hidden Blade could be re-edited into a good movie. Unless a director’s cut comes out, I can’t recommend it. Hidden Blade feels like an attempt to make a pro-Communist remake of Lust, Caution, but somehow it fails to be a movie and even fails to be effective propaganda.

Second Opinion: My partner says “When propaganda goes too hard, it makes everything stupid. I’m not surprised, but still disappointed. I have such a beef with this movie that I think I’ll just stick to Taiwanese movies from now on.” Harsh criticism from a diehard fan of both Tony Leung and Wang Yibo!

Instead, just watch Lust, Caution – which is an absolute masterpiece with the same historical setting. It has a tight focus on a sympathetic central character, the story builds layers upon layers of suspense, and director Ang Lee is at the top of his game. And, in contrast to the chaste but extremely violent Hidden Blade, Lust, Caution is absolutely saturated in sex.

See Tony Leung in a movie that is good enough to deserve him!

Game of the Rings of Time: Fantasy Streaming Series Showdown

One series to rule them all? Time for a showdown between The Wheel of Time (Amazon) vs. House of the Dragon (HBO) vs. The Rings of Power (Amazon) vs. Willow (Disney).

One would think that fans of the fantasy genre would be in nirvana with every streaming service cranking out a bonanza of big-budget fantasy series. I have been negligent not writing reviews lately, so I will play catch-up and write one giant review for four of the latest, greatest series!

House of the Dragon

King Viserys affectionately grasps his brother Daemon's arm. Still image from House of the Dragon (HBO).
It’s a family show! It is a show about a big mess of a family, and most of them want to kill each other. (HBO)

Game of Thrones kicked off the current wave of fantasy shows – and rightfully so. Inspired casting and excellent character-driven writing helped GoT expand its reach outside of typical fantasy fandom, though things fell apart once the show ran beyond the bounds of the source books. (Hardly an original take from me, there.) Fortunately, House of the Dragon brought back the strong character-centric storytelling, assembled an excellent cast, and generally looks and feels like the better seasons of GoT.

In some ways, HoTD surpasses GoT. It is easy to forget how dry season 1 of GoT could be. That is not to say it lacked compelling storytelling or an incredible cast. But introducing such a massive number of characters with long histories, and doing it on a tight budget, made the first season of GoT move along pretty slowly. The writers of HoTD came up with a clever way to streamline the world-building. Instead of alluding to the backstory of the principal characters and conflicts, we get to see them develop over decades. The early episodes leap forward years at a time, showing us huge shifts in character development and changing allegiances.

All in all, I think HoTD is an impressive return to what made GoT great, with some added flourishes. The leaps forward in time are executed beautifully, and it is much more exciting to see things happen than to hear about them in expositional dialogue. Watching players maneuver for advantage in anticipation of a coming conflict makes for a tense, exciting story even without any open fighting. The characters are all centered in one place, rather than sprawled across continents. Some writer also seems to have fixated on the idea that childbirth is absolutely terrifying, and the combination of pregnancy and medieval medicine ends up being scarier than either dragons or torture chambers.

At its best, HoTD explores difficult family relationships. My favorite scenes are between the kind-hearted King Viserys (Paddy Considine), his mercurial brother Daemon (Matt Smith), and Princess Rhaenyra (Emma D’Arcy/Milly Alcock). Their struggles as they attempt to balance self-interest and their responsibility to their family and the realm give the series a strong central conflict. HoTD is off to a strong start, and seems to understand that a core of well-developed characters was the key to GoT’s success. Season 1 did a great job setting the stage, and hopefully we are in for an exciting ride for the rest of the series.

Second Opinion: My partner would cut straight to the chase and declare House of the Dragon the winner of this 4-way showdown. “It’s like The Handmaid’s Tale transplanted into a medieval fantasy.”

The Rings of Power

Elven ranger Arondir and human healer Bronwyn sit under a tree on a sunny day. Still image from The Rings of Power (Amazon).
Pretty people and pretty scenery. Middle-Earth is looking good. (Amazon)

Like House of the Dragon, The Rings of Power is a show continuing a legacy. As far as I am concerned, The Lord of the Rings movies are still the pinnacle of fantasy filmmaking. The LoTR movies adapted J.R.R. Tolkien’s vast fantasy world with unreserved sincerity. The makers of the LoTR movies gave them everything they had, and everything from the acting to the costumes to the music reflected a level of care and craftsmanship rarely seen in any film or show. Just thinking about it makes me want to watch them again. The Hobbit movies did not live up to the LoTR standard, but I was still perfectly happy revisiting Middle-Earth again – even if a heavier reliance on computer effects felt a little more artificial.

If The Rings of Power series has one thing going for it, it is absolutely beautiful visuals. Where HoTD felt a little more drab and lived-in, RoP seems determined to throw the prettiest image possible on the screen at every opportunity. The show has also returned to relying more on makeup and costuming for its fantasy creatures, which only adds to the great look and feel of the show. It makes me want to watch it on the biggest screen possible so I can take it all in.

As a long-time Tolkien reader, I also could not help but love getting to see more of that world. Númenor! Khazad-dûm! Valinor! Lindon! Yes, this show is so pretty that I got excited about fantasy geography. It was awesome to see such a beautifully-realized world.

I feel the biggest weakness of RoP is that the performances, while sincere and still well-done, felt just a little one-note. Galadriel is always really intense, so it is a little hard to tell when she’s at an emotional high or low. Elrond never deviates far from placid and calm, so any inner conflict he feels does not resonate as strongly as it should. The performances were not bad by any means, but perhaps choosing to make the elven characters always feel dignified and serene created a problem with making them feel relatable. But I feel the actors are good enough that they can strike the right tone in later seasons.

To me, the little people were the highlight of the series. The dwarf prince Durin IV (Owain Arthur) and the hobbit-like harfoot Nori (Markella Kavenagh) felt like more relatable, emotional characters. When the show hit a strong emotional note, it was usually those two characters who pulled it off. I never felt bored with The Rings of Power, but the gorgeous images on screen definitely helped paper over some occasional lulls in the drama.

Second Opinion: We are a house divided – my partner downright disliked The Rings of Power. “It’s like someone wanted to write original fiction, and the Lord of the Rings characters and settings were just a veneer.”

Preliminary Verdict: It’s a tough call in the Battle of the Prequels. House of the Dragon was more coherent, better-written, better-acted, and more consistently excellent episode-to-episode. But with The Rings of Power I felt more immersed in the world, excited by the beauty on screen, and occasionally crying along with Durin or Nori. It is the more flawed of the two, and I cannot pretend not to have a long-standing bias in favor of The Lord of the Rings, but I would give the crown to The Rings of Power. It’s a close call, but I am more excited about spending future seasons in Middle Earth.

The Wheel of Time

What a beautiful, fully-realized fantasy world. Surely nothing bad will ever happen here. (Amazon)

The Wheel of Time novels by Robert Jordan are one of the most well-regarded American-written high fantasy series. The series is also infamously long, consisting of 15 books and more than 4.4 million words, and it outlived its original author (Brandon Sanderson finished the last 3 novels). Given all of this, I was surprised that Amazon’s Wheel of Time series didn’t make a bigger splash.

The show is a fairly faithful adaptation, and has a lot of the elements I liked most from the novels. It deftly juggles a large cast of characters as they navigate a complex world of clashing cultures, political intrigue, religious fanaticism, and a struggle to unite squabbling factions in the face of looming danger. I was particularly pleased with how they cast and presented my favorite characters from the books: Nynaeve (Zoë Robins) and Perrin (Marcus Rutherford). More than that, the show lets character-driven events introduce us to the world without letting the world-building kill the pace of the storytelling.

The Wheel of Time doesn’t commit quite as hard to slick political drama as House of the Dragon and is not quite so pretty as The Rings of Power, but it still has some tight writing and excellent visuals. More importantly, The Wheel of Time seems to be playing with a broader palette than its peers. The characters are more likeable, the show has a better sense of humor, and some scenes are genuinely frightening.

The horror elements, in particular, stand out. The trollocs, WoT’s analog of an orc, never stop feeling threatening. Where other fantasy series tend to have hordes of faceless baddies just for our heroes to mow down, even a lone trolloc feels like a serious threat. In the show, when a village is assaulted by trollocs, it results in some of the most harrowing moments I can recall seeing in any fantasy series. Adapting the novels’ description of trollocs to the screen resulted in some truly bizarre, janky-looking monsters, and the show uses that to its advantage.

The Wheel of Time may be based on a book series, but unlike HotD and RoP, there was no prior movie or show adaptation it could lean on to help draw the audience in. In spite of this, The Wheel of Time does a huge amount of world-building without letting it bog down the narrative. It also moves between different characters’ storylines without giving any of them short shrift. It is an impressive feat of adaptation, and I am excited that Amazon has renewed the series for at least two more seasons. Given the books it is based on, the show could go on for who knows how long without running out of source material.

Second Opinion: I was unable to pry my partner away from the Taiwanese drama Light the Night to watch this one. I think they missed out, even if Light the Night is also awesome.

Willow

Heroic questing! Magic! Hijinks! More Hijinks. Possibly too many Hijinks. (Disney)

Unlike the other shows, which were based on book series, the Willow series is the sequel to a movie. The movie, which had a story by George Lucas, is a little hard to categorize. It follows some familiar fantasy tropes: an evil sorceress queen seeks to kill the baby prophesied to destroy her. Willow (the movie) subverted conventions by having Warwick Davis play the lead and setting up Val Kilmer as a sort of sidekick and comedic foil.

But Willow 1988 was also a hot mess. The accents were all over the place, the tone swung from dramatic to ridiculous at the drop of a hat, and new characters and creatures popped out of nowhere without reason or warning. It might have had the veneer of a high fantasy epic, but the characters embodied the humor and sensibility of late-1980s America to a distracting degree. Still, it is an endearingly original watch if you are willing to ignore whatever expectations you might normally have for a high fantasy movie.

Willow the series is very much in tune with Willow the movie: It’s another hot mess. The pace is all over the place – sometimes I couldn’t keep up, and at other times the show languishes for a whole episode. The accents are still all over the place. The production value and writing vary wildly in quality from episode to episode. In spite of all of that, or maybe because it is such a hot mess, Willow is still really fun.

The best thing Willow has going for it is its cast. There’s a lot of personality on display, and I was particularly surprised by how charmed I was by the obvious comedic relief character, Boorman (Amar Chadha-Patel). Warwick Davis is still great, and the rest of the cast have lots of neat little moments. For better or for worse, I have a suspicion that a lot of improv was involved. The quality of the show was all over the place, but I watched it through because I wanted to see where the characters wound up.

Unfortunately, Willow is the only one of the four series that I can give only a conditional recommendation. It might wink at the camera too much for some viewers, and you definitely have to ride out some rough patches to get to the good stuff. I found it refreshingly different and original, but the ups and downs might try your patience.

Second Opinion: My partner praised Willow for having a lesbian relationship as the main romance like it was no big deal. But they also bailed out when the show started meandering a couple episodes in, and did not finish the series.

And the Winner Is…

Conan! What [fantasy series] is best in life [on streaming]? I think every show here has something to offer, depending on what you like.

  • House of the Dragon has outstanding casting, tight writing, and is perfect if you want a dark, serious fantasy drama that hearkens back to the better episodes of Game of Thrones. The downside is that it is a little drab and colorless by comparison to its peers, and Season 1 feels like stage-setting for future seasons (albeit extremely well-executed stage-setting).
  • The Rings of Power is visually vibrant and perfect if all you want is to bask in the beauty of Middle Earth. Not every storyline was compelling enough to keep me engaged on their own, but the visuals were so pretty that I didn’t really care.
  • The Wheel of Time is a well-balanced blend of storytelling and world-building, supported by an outstanding ensemble cast and beautiful visuals. It’s the first adaptation of a truly great novel series, and the quality of the first season has me excited for future installments. This could become something beautiful.
  • Willow is a charming, disjointed, strangely-paced mess. It’s a lot of fun, and some parts are excellent. Other times, it really loses its way. I thought it was a nice change of pace. Willow took some chances. Some paid off, some did not.

For me, The Wheel of Time was the best-in-show. The ensemble cast was awesome, the pacing was fast without being disorienting, the visuals were striking – there was no aspect of the show that didn’t feel like it was crafted with care. But more than anything, I loved getting a look into a world that really felt vast, deep, and lived-in. Maybe it’s because the other shows have baggage that The Wheel of Time does not. Maybe it’s because The Wheel of Time has the advantage of adapting one of the most well-regarded novel series in the fantasy genre. Whatever it is, I am excited for what comes next.

I’m not sure I want it to be as long as the book series, but more Wheel of Time, please!

The Return of the Western!

Westerns might be the first genre of movie that I came to love growing up. Though, by the time I was watching movies, Westerns had largely died out and were rarely made any more. In 2021, it seems like the Western came back in a big way!

I have had this website for a while now, and I kept changing my ideas about what to do with it. I’ve used it to share tutoring materials, old lab projects, photos… But if anyone is familiar with my Facebook page, the only thing I seem to be able to do consistently is watch and talk about movies. So I thought I might as well do that here too.

Clint Eastwood was my favorite movie star growing up.

2021 was, somehow, a big year for Westerns. The Power of the Dog, Old Henry, and The Harder They Fall were all released towards the end of 2021. Even though, due to the ongoing pandemic, 2021 was not a great year for theatrical releases, all three films received a warm critical reception. Is the Western back? Is it here to stay? While I think all three of these films are worth seeing, only one makes me want to put it into the rotation of movies that I re-watch periodically.

Old Henry

Old Henry feels like a familiar movie. An old homesteader is doing his best to move on from his mysterious past and live a peaceful life, but trouble still finds him, and he is forced to take up his gun once again…

Tim Blake Nelson is not your typical leading man, but he delivers in Old Henry.

The biggest thing that sets Old Henry apart is its star. Tim Blake Nelson is a long-time character actor who you will usually find in supporting roles. He has occasionally functioned as a co-lead in ensemble pieces like the Coen brothers’ O Brother, Where Art Thou? and The Ballad of Buster Scruggs. But here Tim Blake Nelson is the titular Old Henry, and he seems perfectly at home in the spotlight. Nelson and fellow underappreciated actor Stephen Dorff carry us through a fairly simple and predictable but satisfying little flick.

Old Henry is a fun, simple movie that does everything it needs to do in its brisk 99-minute runtime. It’s not especially pretty, nor is the score particularly exciting, but it is still a well-crafted little movie. It has well-choreographed shootouts and just enough twists and turns to stay interesting. Plus, young Gavin Lewis turns in a surprisingly compelling performance as Henry’s frustrated teenage son. I will have to keep an eye out for his future performances.

Old Henry is reminiscent of Unforgiven, but with a tighter focus and less commitment to its anti-violence message.

Some credit Unforgiven with ushering in the brutal, realistic age of the revisionist Western – killing the classic Western adventure in the process. Old Henry feels like an attempt to turn back the clock. Old Henry is gritty and visceral – just like Unforgiven. But unlike Unforgiven, where every character feels like a real, living person, the bad guys in Old Henry are just that – bad guys. Even though the movie gives some lip service to an anti-violence message, it still feels like a return to the halcyon days when Clint Eastwood or John Wayne mowed down waves of nameless goons without challenging the audience to really think about the cost of the violence they were inflicting. Old Henry is a fun watch, and I would definitely recommend seeing it once. However, I do not foresee myself returning to watch it again. I’d probably just re-watch Unforgiven.

The Power of the Dog

Speaking of revisionist westerns… The Power of the Dog is a slow-burning psychological drama. The gentle pacing and emotional focus of this film remind me more of a dramatic Western miniseries like Lonesome Dove or abstract period piece Oscar-bait like There Will Be Blood. However, the setting in early 20th century Montana, themes of isolation and independence, and the use of natural environments are enough for me to count it as a Western. Not every Western has to have a shootout.

The Power of the Dog is all about sexuality, but struggles to get to the point.

The story centers around recently remarried widow Rose (Kirsten Dunst), her son Peter (Kodi Smit-McPhee), and her hostile brother-in-law Phil (Benedict Cumberbatch). The film is an exploration of masculinity, and centers on a conflict between Phil’s rough outdoorsy machismo and Peter’s soft-spoken femininity. Phil is something of a master of passive-aggressive emotional torture, which he uses to great effect on Rose. It’s Old-West Gaslight.

The Power of the Dog is beautiful to look at and has a minimalist score that eases us into the film before settling into subtly menacing us for the rest of the movie. The cast is top-notch, and on some level, I can understand why it is raking in nominations and awards left and right. But the film didn’t quite come together for me. Kirsten Dunst’s performance is great, but she gets sidelined mid-movie and has very little to do in the third act. The emotion of Benedict Cumberbatch’s performance comes through, but his American accent occasionally falters, and the film never really lets him show the full ugliness of his conflicted and self-hating character. And while the film explores some interesting themes – masculinity, self-reliance, sexuality, social roles – it never really feels like it comes to a conclusion or makes any strong statements. It explores without really finding anything.

I still recommend The Power of the Dog, and I want to see it again. Perhaps I will appreciate it more on a second viewing. At minimum, it is a beautifully-shot film with some excellent performances. Still, I feel that it has some structural and tonal issues that make it less powerful than it ought to have been.

If you want to get your Kirsten Dunst fix from a period drama that explores the tension between individual sexuality and social norms, I would recommend Sofia Coppola’s The Beguiled instead. It, too, is beautiful to look at, full of incredible performances, and digs deep into human nature. However, it is much more tightly structured, and I think it has more to say and does a better job getting the message across.

I liked The Power of the Dog just fine, but I loved The Beguiled.

The Harder They Fall

I told you that there was one Western in 2021 that deserved a place in the pantheon, and this was it. Guys – this movie rocks.

The Harder They Fall. See it ASAP with a big screen, a loud sound system, and lots of popcorn.

There is a format to a typical Clint Eastwood Western. He’s a peaceful man, minding his own business. The Bad Guy wrongs him. Our hero spends the rest of the movie shooting people. It’s a simple formula and it works.

The Harder They Fall does follow the Eastwood formula, but it builds a lot on top of the familiar format. First and foremost, the entire principal cast is Black – a rarity in the Western genre, and a reality in actual Western history. To drive that point home, the movie opens with “These. People. Existed.” The characters in the film are based, albeit very loosely, on real historical figures. Add to that a perfectly calibrated soundtrack and some cinematography that rivals The Power of the Dog, and you have a movie that is both exciting entertainment and compelling art.

The cast is loaded – Idris Elba, Zazie Beetz, Regina King, Delroy Lindo, LaKeith Stanfield… Elba AND Stanfield! The main person I was not familiar with was the lead, Jonathan Majors, but he had no trouble convincing me he was a bona fide movie star.

Did I mention yet that this movie is pretty?

Somehow, even though Old Henry was a continuation of the revisionist Western style and The Power of the Dog was a somewhat unstructured drama that didn’t fit well into any particular category, it was The Harder They Fall‘s rehashing of a well-worn formula that still felt the most fresh and exciting. This movie is fun, but it also has character-driven drama that gives it real stakes. Even the antagonist, who at first seems like a typical vicious Western baddie, has a surprising amount of depth. And the final showdown is… I won’t spoil it.

There have been some pretty good Westerns in recent memory. The remake of The Magnificent Seven wasn’t bad, but wasn’t great. Django Unchained was good, but dipped its toe too deep into real human suffering to be much fun and had too many wild gunfights and too much wacky humor to actually say anything about that suffering. The Hateful Eight was loaded with on-screen talent, made good use of a beautiful setting, and had an incredible score. But it was also a bit of a slow, bloated, and boring mess that badly needed an editor. The Harder They Fall feels like everything that the Tarantino Westerns wanted to be and more, but in a much sleeker, more coherent package.

I definitely recommend The Harder They Fall. It is the most fun I have had watching a Western made this millennium, and one of the only ones (along with The Proposition and True Grit) that I am likely to re-watch regularly.

Convergence of Pi: Interactive Post

In an earlier post, I discussed how Pi can be expressed as a summation.

$$4 \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{2n-1} = 4 \left(\frac{1}{1} - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots\right) = \pi$$

I thought I would create some embedded Mathematica applets that show how the terms of the series converge to π. To start out, let’s look at how the summation converges as we add more and more terms.
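If the embedded applets do not load for you, here is a quick Python sketch (not the Mathematica code behind the applets) that prints the same partial sums:

```python
import math

def leibniz_partial_sum(n_terms):
    """Return 4 * sum of (-1)^(n-1) / (2n - 1) for n = 1 .. n_terms."""
    return 4 * sum((-1) ** (n - 1) / (2 * n - 1) for n in range(1, n_terms + 1))

# Watch the partial sums bounce above and below pi as more terms are added.
for n in (1, 2, 3, 10, 100, 1000):
    approx = leibniz_partial_sum(n)
    print(f"{n:>5} terms: {approx:.6f} (error {approx - math.pi:+.6f})")
```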

Notice how the approximation jumps between overestimating and underestimating π.

The series gradually converges to π as more terms are added.

You can try evaluating the summation to different numbers of terms with this interactive tool.

Scripted Drafting

Suppose you want to create a drawing for a simple part, like a typical tensile dogbone.

An “A” type tensile dogbone according to ASTM E345.

You could draw up this part according to specifications in a conventional drafting program. But parts like this often come in different sizes with different proportions, depending on the material and your testing needs. So for this kind of part, there’s a strong chance you would need to draft different versions to satisfy different specifications. When that need arises, your first impulse might be to draft a whole new part or modify an old one as needed, but there is a more efficient approach.

A dogbone drafted exactly according to E345.

The key is in recognizing that all of the dogbones you are making have the same basic features, and you can follow the same basic steps when drafting each one. Here is one possible workflow:

  • Draw a rectangle for the reduced middle section
  • Draw rectangles for the grips at the ends
  • Subtract out the rounded fillets

Regardless of how long, wide, or thick the dogbone is, you can make it following those same steps. All that changes are the measurements.

A dogbone that is half as wide, but four times as thick as the last one.

So if we approach our drafting project as a sequence of steps, we can draft practically every conceivable dogbone with the effort of drafting just one. It turns out that many software suites out there have some kind of scripting capability. For our example, there is a good free option: OpenSCAD.

Scripts on the left panel lay out the steps of drafting a part. The right panel shows a preview of the result.

OpenSCAD only lets you draft parts with scripts. Making an all-purpose dogbone-making script is simply a matter of setting up the variables which describe the part dimensions at the beginning of the script.

Note how I decided to build in a little math for the grip lengths so they do not need to be defined separately.

Then, when it comes time to create the part, the same set of steps are followed every time.

The steps to creating a “dogbone” part are always the same. Only the measurements change.
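My actual script is the one shown in the screenshots, but to give a flavor of the idea in plain text, here is a minimal Python sketch (with made-up dimension names, not the ones in my script) that writes an equivalent parametric OpenSCAD file following the same steps:

```python
# Dimensions for one dogbone variant, in mm (names are made up for this sketch).
dims = {
    "overall_length": 100.0,   # total specimen length
    "gauge_length": 32.0,      # reduced middle section
    "gauge_width": 6.0,
    "grip_width": 10.0,
}
# A little math so the grip length never needs to be defined separately.
dims["grip_length"] = (dims["overall_length"] - dims["gauge_length"]) / 2

scad = """\
// Auto-generated parametric dogbone outline (fillets omitted for brevity)
gauge_length = {gauge_length};
gauge_width  = {gauge_width};
grip_width   = {grip_width};
grip_length  = {grip_length};

union() {{
    // Step 1: rectangle for the reduced middle section
    square([gauge_length, gauge_width], center = true);
    // Step 2: rectangles for the grips at the ends
    translate([-(gauge_length / 2 + grip_length), -grip_width / 2])
        square([grip_length, grip_width]);
    translate([gauge_length / 2, -grip_width / 2])
        square([grip_length, grip_width]);
}}
""".format(**dims)

with open("dogbone.scad", "w") as out:
    out.write(scad)
```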

So now you only need to change the numbers at the beginning of the script any time you want to draft a dogbone with new dimensions. This concept of simple, scripted drafting should work with a variety of parts. Keep an eye out for situations where:

  • You are designing simple parts
  • The parts have interrelated measurements that can be handled algebraically
  • Any part can be made by repeating the same set of steps
  • You will need to draft multiple versions of the part

If you use scripted drafting, you can save yourself the trouble of having to go back and re-invent the part from scratch each time. All it takes is a little thought about the steps needed to make the part rather than just the end result.

A wide, thick dogbone made by simply changing a few numbers and re-rendering the part.

Probe-Interaction-Signal

In memory of Dr. Alex Punnoose
1968-2016

Dr. Alex Punnoose was one of my favorite teachers at Boise State. Tragically, he passed away in 2016. However, his memory lives on in his students and what he taught us about the universe and our roles as scientists.

Dr. Punnoose’s “Probe-Interaction-Signal” method of classifying material characterization techniques is one of the most useful lessons he taught me. I do not know if he invented this approach to categorizing characterization techniques, but his implementation of Probe-Interaction-Signal was very intuitive, and I still use it as a memory tool and when writing methods sections in my papers.

Probe-Interaction-Signal

In essence, most material characterization methods can be understood by breaking the process into 3 steps: Probe, Interaction, and Signal.

  • Probe: Whatever you use to interrogate the specimen: light, radiation, physical contact with a tool, bombardment with particles, etc.
  • Interaction: The physics-based principle which describes how the probe and specimen interact. Usually, this incorporates some material properties of the specimen.
  • Signal: What you measure to understand the material: emitted particles, reflected or emitted radiation, reaction forces, etc.

This framework is essentially the same as classifying the components of an experiment as independent factors, theory, and dependent response variables. The Probe and Signal are the components of the experiment which a scientist can measure and record. The Interaction involves the theory and equations which we can use to gain insight into the specimen we are interrogating.

Let’s start with a really simple, silly example.

Poking Something with a Stick

So suppose you are a child walking along the beach, and you see a crab sitting on the sand. It’s not moving. What do you do? Well, you can design an experiment within the Probe-Interaction-Signal framework.

  • Probe: Poke it with a stick.
  • Interaction: If the crab is alive, it will feel you poking it, and it will move. If it is dead, it will not react.
  • Signal: You look at the crab to see if it moves or does not move.

So here we have a simple example of a characterization method based on the physical principle that living things react to being poked while nonliving things do not. The Interaction theory tells us how to draw a conclusion based on the Signal we measure after applying a Probe.

X-ray Photoelectron Spectroscopy (XPS)

XPS is an excellent choice of name for a characterization technique since the name describes almost exactly how XPS works.

  • Probe: Monochromatic X-rays of known energy
  • Interaction: Electrons are released with an energy equal to the incoming X-rays minus the binding energy that held them to the specimen
  • Signal: The kinetic energy of the electrons released from the material

In many cases, the Interaction portion of the process is based on an underlying physical principle which we can describe mathematically. XPS is based on measuring binding energies which are characteristic of the atoms which emitted the photoelectron.

$$ E_{\text{binding}} = E_{\text{X-ray}} - \left(E_{\text{kinetic}} + \phi\right) $$

We controlled the X-ray energy and measured the kinetic energy. The φ part of the equation is just a constant representing the work function of the electron detector. (Some versions of the equation will leave φ off, presuming that it is accounted for in the kinetic energy term.) So this equation lets us compute the binding energy of the atoms which emitted the photoelectrons, which in turn can be matched to the specimen composition.
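As a trivial illustration of the bookkeeping in that equation (the numbers below are placeholders, not real spectrometer readings):

```python
def binding_energy(e_xray, e_kinetic, work_function=0.0):
    """E_binding = E_X-ray - (E_kinetic + phi), all in eV."""
    return e_xray - (e_kinetic + work_function)

# Placeholder example with an Al K-alpha source (1486.6 eV):
print(binding_energy(e_xray=1486.6, e_kinetic=1201.0, work_function=4.5))  # 281.1 eV
```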

Understanding the Interaction also helps us anticipate the limitations of a characterization technique. Since XPS measures the energy of emitted electrons, and electrons’ ability to escape a material diminishes with depth below the surface, we know that XPS measures surface composition. Anything that would interfere with the electrons’ path to the detector could interfere with the measurement, so XPS is best performed in a vacuum.

Let’s take a quick look at a few other techniques through the Probe-Interaction-Signal lens.

X-ray Fluorescence Spectroscopy (XRF)

  • Probe: X-rays from a source (can be monochromatic or not)
  • Interaction: Electron transitions induced by the probe X-rays result in the emission of “secondary” X-rays which are characteristic of the atoms which produce them
  • Signal: The secondary X-rays

XRF is another spectroscopic technique because the Signal is X-rays which are characteristic of the element which produced them. XRF is also very similar to XPS because they share the same Probe. However, they measure different Signals and require different, but related, Interaction theories to interpret their results.

Optical Microscopy

  • Probe: Light from a source
  • Interaction: Light is scattered by features of the specimen surface
  • Signal: Scattered light is magnified and directed to a human eye or a camera

Microscopy is intuitive because, like human vision, it lets us measure the spatial locations of features on the surface of a specimen. However, the Probe-Interaction-Signal framework helps us understand how modifications to optical microscopic techniques could be made to extract additional information.

For example, we could modify the probe and signal by adding filters, making the interaction more complex and providing us with new information. By adding polarizing filters, we could deliberately exclude light which changed polarization upon interaction with the specimen. This can help determine the orientation of crystals of birefringent materials, such as aluminum oxide or barium titanate.

  • Probe: Polarized light from a source
  • Interaction: Light is scattered by features of the specimen surface and changes polarization angle depending on the orientation of birefringent surface features
  • Signal: Scattered light which is filtered by a polarizer, magnified, and directed to a human eye or a camera

Transmission Electron Microscopy (TEM)

  • Probe: Monochromatic electron beam
  • Interaction: Electrons are transmitted through the specimen and scattered by its atoms
  • Signal: Transmitted electrons form an image in the image plane and a diffraction pattern in the back focal plane

TEM is a far more powerful and sophisticated technique than my description makes it sound. My goal is to emphasize one important point: the spatial information from TEM images comes primarily from the Signal part of the technique. Let’s compare this to how scanning electron microscopy works.

Scanning Electron Microscopy (SEM)

  • Probe: “Primary” electron beam directed to pre-determined locations on a grid
  • Interaction: “Secondary” electrons are knocked loose from the specimen surface
  • Signal: Secondary electrons are collected by a detector

Notice how the location information for SEM comes from the Probe part of SEM rather than directly from the Signal like TEM and optical microscopy. It can be tempting to focus on the information carried in the Signal part of an experiment. However, the Probe-Interaction-Signal framework helps illustrate how important it is to interpret the dependent variables of an experiment in the context of the independent variables.

A Useful Model

We have only scratched the surface of material characterization techniques out there, but the Probe-Interaction-Signal concept can be applied to practically all of them. I find it especially handy when learning a new technique for the first time or training someone else. It’s also very useful in troubleshooting misbehaving instruments.

I hope you find Probe-Interaction-Signal useful, and I suspect it will not be the last of Dr. Punnoose’s lessons I transcribe into a blog post.

Image-Centric vs Deformation-Centric DIC Analysis

Digital Image Correlation Analysis

Digital image correlation (DIC) is a technique for measuring the deformation (i.e. strain) of materials using a sequence of images taken over the course of a mechanical experiment. DIC analysis generally follows 4 steps: image sequence capture, feature tracking, deformation analysis, and data mining.

Flowchart mapping out the 4 steps of DIC analysis and the type of data produced by each step.

Problem: Image-Centric DIC Limitations

Our research group used DIC as our primary tool for non-contact measurement of material deformation. However, our work was inhibited by several limitations of our existing DIC analysis software.

  1. Computationally inefficient and error-prone motion tracking
  2. No measurement validation or error estimation
  3. Poor record-keeping of the steps of completed DIC analyses
  4. A single type of strain result which required long computation times and large amounts of disk space
  5. Limited data mining potential due to strain analysis, static reference states, and lack of error estimation

However, the biggest limitation was that our old DIC software was image-centric. From beginning to end, the data was formatted, processed, and saved in arrays matching pixels in the original image. This approach ignores the fact that the discrete feature tracking and the deformation gradient calculation trade location accuracy for displacement and deformation information while introducing and propagating error. Handling the data at the resolution of the image wastes computing resources while misrepresenting the accuracy of the analysis.

Solution: Deformation-Centric DIC

I started by retrofitting our old DIC software with new features and add-ons to compensate for its shortcomings.

  1. Improved motion-tracking based on the Lucas-Kanade optical flow function set from the OpenCV library
  2. A “rigid body motion” test for motion tracking validation, error estimation, and artifact detection
  3. A robust documentation policy that saves every step of the analysis in a human- and machine-readable directory tree, e.g., .\images\motion tracking\deformation\data mining
  4. Deformation gradient-focused data structures which save computational resources and enable a wider variety of data mining strategies without sacrificing accuracy
  5. Flexible reference states, making it possible to target sections of experiments and compute deformation rates

In addition to these improvements, my DIC software was built fundamentally differently from our old DIC platform. Rather than handling data in an image-centric manner, I implemented a deformation-centric scheme. My DIC software treats deformation data in a manner similar to the nodes and elements found in finite element analysis. This approach facilitates error estimation and enables faster computation times and more efficient use of memory.
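I will not reproduce the real data structures here, but the conceptual difference looks something like the toy sketch below (the names and fields are illustrative, not my actual implementation):

```python
from dataclasses import dataclass
import numpy as np

# Image-centric: every quantity is stored on the pixel grid of the original image,
# e.g. a full-resolution strain map even though the underlying measurement is coarser.
image_centric_strain = np.zeros((2048, 2048))   # one value per pixel

# Deformation-centric: data live on tracked points ("nodes") and on the regions
# connecting them ("elements"), much like a finite element mesh.
@dataclass
class Element:
    node_ids: tuple          # indices into the list of tracked points
    F: np.ndarray            # 2x2 deformation gradient for this element
    error_estimate: float    # e.g. residual from a rigid-body-motion test

nodes = np.array([[10.2, 14.7], [35.9, 15.1], [22.8, 40.3]])   # tracked points, in pixels
elements = [Element(node_ids=(0, 1, 2), F=np.eye(2), error_estimate=0.05)]
```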

Impact: Unprecedented Access to Material Deformation

My analysis suite has become the primary tool of my former research group at Georgia Tech. They are now able to distinguish real deformation from artifacts, validate their measurements, and perform sophisticated strain analyses which are not possible with conventional DIC analysis. For my dissertation, I combined deformation rate analyses with in-situ stress measurements, creating the first-ever real-time deformation energy measurement based entirely on empirical measurement without simulation or modeling. By building a DIC analysis suite from the ground up with deformation analysis in mind, I gave my research group significantly more powerful tools for mining mechanical testing data.

Automating Research Team Management Using Experiment Design

Problem: Organizing an Inexperienced Team

In my final year at Georgia Tech, I had three research assistants who had the potential to be productive laboratory technicians, but they lacked experience and confidence. I needed to create an environment where they could hit the ground running: gathering high-quality experimental data, documenting their work, and organizing their data without too much micromanagement. Meanwhile, I had publications and a dissertation to write and needed to ensure the quality of my assistants’ work without spending too much time looking over their shoulders.

Solution: Data Management as a Means to Automate Team Management

A data management strategy can do more than just make data easier to navigate. A strategy which incorporates automated progress updates allows the team members to focus on their delegated roles rather than administrative tasks. Technicians are freed to focus more on their laboratory experiments, analysts can be updated the instant new data is added, and team meetings become more efficient and focused on the next step of the project.

So I began building a data management strategy based on the needs of our project. I would:

  1. Design a set of experiments to explore a parameter space.
  2. Create a data organization system based on that parameter space.
  3. Build tools to automatically track data placed into the system, eliminating the need for my assistants to report their progress.
  4. Expand upon those tools to automate data analysis.

I used Design of Experiments (DOE) principles as the basis of our experiment design and my data management strategy. I started by writing a Python script which built a directory tree organized according to the parameters of each experiment. Each level of the directory tree corresponded to one parameter from the experiment design.

For example, I developed this data management strategy for a study of fracture toughness in various thin-sheet metallic specimens. The parameter space had three axes – material, thickness, and type of crack – so the directory structure for the project had three layers. An experiment on a copper sheet 100 µm thick with a single edge notch would be stored in the directory .\Cu\B100um\SE.
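A stripped-down sketch of that directory-building script might look like the following (the thickness and notch codes here are illustrative; the real script pulled the values from our experiment design):

```python
import os
from itertools import product

# One directory level per experimental parameter.
materials   = ["Al", "Cu", "Sn"]
thicknesses = ["B25um", "B50um", "B100um"]   # illustrative; actual values varied by material
notch_types = ["SE", "M", "T"]               # e.g. single edge notch, middle notch, tensile (no notch)

root = "fracture_study"
for material, thickness, notch in product(materials, thicknesses, notch_types):
    os.makedirs(os.path.join(root, material, thickness, notch), exist_ok=True)
```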

The ordering of the parameters / directory layers was arbitrary, and I simply chose an arrangement that made it easy for my assistants to find where to put the result of their experiments. For the purpose of organizing our data, the directory tree offered an intuitive, human-readable structure. This structure also made tracking progress straightforward.

Once we started collecting data, I wrote a directory crawler script that automatically generated reports on my team’s progress based on the data they collected. This eliminated the need to interrupt my assistants’ laboratory work for progress updates and allowed me to observe our large experimental dataset taking shape in real-time and plan our next move. Finally, I augmented the directory crawler to retrieve and analyze experimental data on-demand.

Flowchart showing how I used software tools to streamline the team management process.

Once the directory crawler could retrieve data, it was straightforward to adapt the tool to run analyses. For analytical purposes, it was not always advantageous to view the data in the way it was organized in the directory tree. So I added the ability to filter the data retrieved by the directory crawler based on any arbitrary criteria.

For example, consider our project exploring different materials, sheet thicknesses, and notch types. I could retrieve and compare data from any combination of the three parameters we explored. I could look at all of the single edge-notched experiments on all materials and all thicknesses. Or I could narrow my focus to only single-edge notched specimens from aluminum sheets. All of the data retrieval and visualization was automatic, and I merely needed to identify the scope I wished to examine.
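A minimal sketch of what that kind of crawler-with-filtering can look like (the real version also generated reports and plots, and it knew how to parse our data files):

```python
import os
from collections import Counter

def crawl(root, material=None, thickness=None, notch=None):
    """Yield (material, thickness, notch, path) for every data file in the tree,
    optionally filtered on any combination of the three parameters."""
    for dirpath, _dirnames, filenames in os.walk(root):
        parts = os.path.relpath(dirpath, root).split(os.sep)
        if len(parts) != 3:
            continue  # data lives only in the material/thickness/notch leaf directories
        mat, thk, ntc = parts
        if material and mat != material:
            continue
        if thickness and thk != thickness:
            continue
        if notch and ntc != notch:
            continue
        for name in filenames:
            yield mat, thk, ntc, os.path.join(dirpath, name)

# Progress report: how many files exist for each parameter combination?
progress = Counter((m, t, n) for m, t, n, _path in crawl("fracture_study"))

# Analysis scope: only the single-edge-notched aluminum experiments.
al_se_files = [path for _m, _t, _n, path in crawl("fracture_study", material="Al", notch="SE")]
```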

Impact: An Efficient Research Team

Developing my automated team management strategy catalyzed my research team’s progress to such a degree that we accomplished more in one summer than in the previous two or three years. Tasks like getting a progress report from my assistants, developing a work plan for the week, or applying a new analysis to old data took only a few minutes rather than hours of work and meetings. My management framework eliminated my need to micromanage my team, helped my assistants see how their work fit into the larger project, and allowed everybody to focus on the science rather than the minutiae of organizing the group’s efforts.

My new tools helped me create an optimal environment for my assistants where they always had clear goals, instant feedback on their progress, and a clear record of how their work contributed to the project. At the beginning of every week, I used my directory crawler scripts to compile a big-picture report of the team’s data, identify target areas for the coming week, and set goals for my assistants. I could check on my team’s progress at any time over the course of the week without needing to interrupt their work and then deliver praise or constructive feedback at lunch or the end of the day. This view of my assistants’ daily work even helped me with “sanity management” – making sure my assistants’ days had enough variety and challenges to keep them engaged.

The new management strategy created a highly efficient data collection and analysis pipeline which let me stop worrying about how to collect enough data for the project and shift my focus to developing new theories and models. I had built an efficient data pipeline, and the challenge of analyzing my pipeline’s output sparked my interest in data science. In one summer, my undergraduates collected more high-quality data than some grad students collect in their entire dissertation research. The dataset is so thoroughly documented and well-organized that more than a year later, my old research group is still mining the data for new insights.

Modern data acquisition technology has lowered the barrier to collecting data to such a degree that we can afford to collect data from experiments without every measurement necessarily being aimed at testing a particular hypothesis. Our success that summer hinged on a data management strategy which had the experiment design baked in. By exploring our parameter space in a systematic, disciplined way, we created a dataset conducive to developing and testing new hypotheses and models based on the comprehensive data we had already collected.

Design of experiments allowed us to avoid the traditional hypothesis – experiment – refinement – hypothesis … cycle of the scientific method, where progress is limited by the rate at which experiments can be run. Instead, we explored a large parameter space, and our physical science project became a data science problem. Our progress was only limited by how quickly and insightfully we could develop new models to understand our data.

Postscript: Experiment Specifics

I have kept this post focused on my research team management strategy with minimal discussion of the actual research we were working on. I left those details out to avoid complicating the narrative. But for the curious, I thought I would write out some notes on the parameter space we were exploring.

The Goal

Develop a new fracture toughness parameter for thin, ductile metal sheets where traditional analyses such as K and J cannot be applied.

Experiment Parameter Space

  1. Sheet material
    • Al
    • Cu
    • Sn
  2. Sheet thickness (varied depending on material)
  3. Starting notch type
    • Single edge notch
    • Middle notch
    • No notch (tensile)

Raw Data Acquired

  • Force
  • Displacement
  • Optical micrograph sequences (videos of the experiment)
  • Fractographs (micrographs of the fracture surfaces)

Analyses

The analyses we developed were targeted at characterizing the driving force and resistance to crack growth in the ductile sheets. The goal was to find analytical approaches which were insensitive to certain parameters, such as the starting notch type. (I.e. we could use forensic fractography to identify certain parameters as good negative controls.)

But the real beauty of our approach is that we were able to run multiple hypothesis – antithesis – synthesis cycles without the rate-limiting step of waiting to complete more physical experiments. The dataset was large and robust enough that we could simply test different analyses over many different scopes – from single experiments to a comprehensive analysis of the entire set and everything in between. I suspect that there may be a few more Ph.D. dissertations worth of insight still waiting to be discovered in our dataset.

Here are some examples of analyses I developed. The parenthetical statements list the raw data for each analysis.

  • Crack length (micrographs)
  • Stress (force & micrographs)
  • Crack propagation stress evolution (force & micrographs)
  • ANOVA of crack propagation stress convergence (force, micrographs & set groupings based on parameter space)
  • Strain distributions (digital image correlation deformation mapping)
  • Work of fracture (force, displacement, & crack length)
  • Deformation energy distribution (force & deformation maps)
  • Specific deformation energy accumulation rate (force, deformation maps, and crack length)

Measuring Color Part 3: The Observer Functions

Now that we’ve laid out the problem of taking a color measurement from a continuous spectrum, we can actually execute the conversion. We can start by considering the wavelengths of light that correspond to certain colors. For example, red light has a wavelength of about 650 nm, green light is about 550 nm, and blue light is down around 425 nm. In the actual world, light can exist at any wavelength. Color is more complex than just measuring the intensity of light at one specific wavelength.

An example of a continuous spectrum collected from some red-dyed diesel fuel. We want to convert this kind of data into a 3-channel RGB color representation.

The Observer Functions

The human eye perceives red, green, and blue light each over a range of wavelengths. The sensitivity of the eye also varies depending on wavelength. For our project, we will apply some mathematical equations called the “Observer Functions” which approximate the sensitivity of human eyes at each wavelength.

The observer functions approximate how the eye responds to red, green, and blue light. The CIE C Standard approximates the standard light source required for an ASTM D1500 color measurement test. This light source appears white because it stimulates all 3 areas of sensitivity in the human eye.

The observer functions show a few interesting things. First, the human eye is far more sensitive to blue light than green or red, but that sensitivity is over a narrower range of wavelengths. Perception of “red” light mostly covers the longer visible wavelengths. However, the red observer function has a secondary peak at about the same wavelength range as blue light. This suggests that the light we think of as “blue” also stimulates human “red” receptors, albeit not as much as the longer wavelengths we associate with red light.

So how do we use these observer functions? Well, we simply need to multiply each of the observer functions by the continuous transmission spectrum that came from our UV/Vis spectrophotometer and compare that result to the original observer function. Thus if a specimen transmitted 100% of light in the “red” range of wavelengths, the observer function would remain unchanged, and we would see that the red receptors in our “eye” would be 100% stimulated.

Comparison of the ideal observer and standard light source functions (left) compared to an actual spectrum applied to the observer functions (right). Notice how the observer functions on the right are reduced in magnitude according to where they match up to the spectrum.

In the example above, we can see an analysis of a typical red-dyed diesel specimen. The grey curve on the right is the spectrum measured from the specimen, and we can see how the red, green, and blue observer functions are changed by the spectrum. The blue curve is practically non-existent, the red curve came through at nearly full strength, and the green curve was reduced in intensity but not suppressed entirely. If we quantify how strongly each range of receptors in our “eye” was stimulated, we could measure the (red, green, blue) color as (0.649, 0.345, 0.004). Mathematically, we accomplish this by summing up the observer functions on the right and comparing them to the observer functions on the left.
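
Here is a minimal sketch of that multiply-and-compare step, assuming the transmittance spectrum and the three observer functions are sampled on a common wavelength grid. The names are hypothetical, and the per-channel normalization is just one reading of the comparison described above:

```python
import numpy as np

def spectrum_to_rgb(wavelength_nm, transmittance, obs_r, obs_g, obs_b):
    """Convert a transmittance spectrum into a (red, green, blue) triple.

    All inputs are 1-D arrays sampled on the same wavelength grid; obs_r,
    obs_g, and obs_b are the observer functions for each color channel.
    """
    channels = []
    for observer in (obs_r, obs_g, obs_b):
        # Weight the observer function by the measured spectrum (the curves
        # on the right in the figure above)...
        stimulated = np.trapz(observer * transmittance, wavelength_nm)
        # ...and compare to the unmodified observer function (the curves on
        # the left), giving the fraction of that receptor's full stimulation.
        reference = np.trapz(observer, wavelength_nm)
        channels.append(stimulated / reference)
    return tuple(channels)
```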

So this gives us a color measurement! All that remains is to make some adjustments to make sure our measurement accounts for the conditions of an actual experiment.

Simulating a Standard Experiment

Light Source

While our color measurement does represent the color of the actual specimen, there is an important difference between the UV/Vis spectrophotometer and the standard method: the light source! If we were to follow the standard method, we would use a light source which produces different intensities of light at different wavelengths. The UV/Vis automatically corrects for the light source intensity, so the transmission measurements it provides are as if they were collected using a light source which was equally intense at every wavelength.

For our experiment, correcting for the light source is fairly simple. We just take the spectrum of the standard light source (the “CIE C source” in the plots above) and apply it to our transmission spectrum before we perform our calculations using the observer functions.
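
In code, this correction is just an element-wise multiplication before the observer-function step. Here, `cie_c_intensity` is a hypothetical name for the relative spectral power of the CIE C source, sampled on the same wavelength grid as the spectrum:

```python
# Weight the measured transmittance by the standard light source so the rest of
# the calculation behaves as if the specimen were illuminated by that source
# rather than by the UV/Vis's idealized flat source.
illuminated = transmittance * cie_c_intensity

# Then proceed exactly as in the spectrum_to_rgb sketch above.
rgb = spectrum_to_rgb(wavelength_nm, illuminated, obs_r, obs_g, obs_b)
```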

Attenuation

There is another difference between our UV/Vis experiment and the standard method: the standard calls for a 33 mm thick observation vial, while the UV/Vis spectrophotometer uses 10 mm wide cuvettes. So the light in our measurements passed through less of the specimen than the standard mandates, and less of it was absorbed in total. We can compensate for this using Beer’s Law.

$$A = \epsilon c b$$

Beer’s law states that absorbance, A, scales proportionally with the specimen thickness, b. (The other constants are the concentration, c, and molar absorptivity, ε, of the specimen, which do not change.) So to simulate the standard measurement, we simply need to scale the absorbance by a factor of 33/10 = 3.3 to get an equivalent result! However, our measurement is in transmittance, not absorbance. Fortunately, the conversion between absorbance, A, and transmittance, T, is straightforward.

$$A = \log_{10}\left(\frac{1}{T}\right) \\
T = 10^{-A}$$

So dealing with the difference in vial thicknesses is simply a matter of converting to absorbance, applying Beer’s Law to find the equivalent absorbance of a 33 mm vial from the 10 mm cuvette, then converting back to transmittance. Then we will have simulated the standard experiment accounting for both the light source and the thickness of the specimen vial.
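
A minimal sketch of that conversion, assuming a 10 mm cuvette and the 33 mm vial from the standard (the function and argument names are mine):

```python
import numpy as np

def rescale_path_length(transmittance, measured_mm=10.0, target_mm=33.0):
    """Convert transmittance measured through one path length into the
    equivalent transmittance for another path length using Beer's law.
    """
    ratio = target_mm / measured_mm                      # 3.3 for 10 mm -> 33 mm
    # Transmittance -> absorbance (clip to avoid log10(0) in opaque regions).
    absorbance = -np.log10(np.clip(transmittance, 1e-12, None))
    # Scale absorbance with path length, then convert back to transmittance.
    return 10.0 ** (-(absorbance * ratio))
```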

Conclusions

I was able to build a color-measurement tool that converts a continuous spectrum into 3-channel RGB color, which in turn let me match the color to standardized references. Building an automated alternative to the human-eye-based standard method required a multi-step conversion, tied together in a short sketch after this list:

  • Use of the observer functions to convert a continuous spectrum to 3-channel color.
  • Simulation of the standard light source called for in the standard method.
  • Correction for the smaller-thickness specimen which we used compared to the standard.
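
Chaining the three sketches from the sections above, the whole conversion might look roughly like this (applying the thickness correction before the light source weighting is my own assumption about the ordering):

```python
# 1. Rescale the 10 mm cuvette measurement to the 33 mm vial via Beer's law.
corrected = rescale_path_length(transmittance)

# 2. Simulate illumination by the standard CIE C light source.
illuminated = corrected * cie_c_intensity

# 3. Apply the observer functions to collapse the spectrum to three channels.
rgb = spectrum_to_rgb(wavelength_nm, illuminated, obs_r, obs_g, obs_b)
```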

To me, the most interesting aspect of this project was understanding how the human eye perceives color. Seeing how the observer functions work showed how, in a universe where light can have any wavelength, it is possible to reproduce convincing images using only 3 color channels. The usefulness of 3-color images is a result of exactly which wavelengths of light the human eye is sensitive to.

It is fascinating what this implies. Images displayed on 3-color monitors, printouts, and even paintings might look realistic to humans, but any animal with different color sensitivity would likely see them very differently. An image collected by a camera using UV, infrared, X-ray, or other non-visible light could also be represented simply by applying a different, specially-designed set of observer functions to convert it to 3-channel RGB color. The human eye’s sensitivity is also wavelength-dependent, so a red light and a blue light might have equal intensities in reality, yet a human eye would perceive the blue light as brighter.

This all makes me appreciate the human body a little bit more, too. Just “eyeballing” a color measurement seemed like a pretty shaky foundation to build a standardized test on, and an instrument certainly gives us a more detailed, objective, and quantitative record of the measurement. Still, the eye is a remarkably complex and sophisticated device!

Measuring Color Part 2: The Visible Spectrum vs 3-Channel RGB Color

In my previous post, I laid out our overall goal: use a UV/Vis spectrophotometer to judge the color of fuel specimens. This post will introduce the two different kinds of information we will need to understand in order to solve the problem: the continuous spectrum measured by the UV/Vis spectrophotometer and the 3-channel RGB color value we want to use to identify the color.

The Visible Light Spectrum

We will gloss over the less interesting challenges of repairing the UV/Vis and getting it to talk to a computer. In short, the device just needed some adjustments to its light sources, and it had an RS-232 serial port which I could use to transmit data through an adapter to a listening computer.

The most important piece of the puzzle is the kind of data our instrument produces. To use the spectrophotometer, we place a 1 cm wide cuvette containing our specimen into the sample holder. Then the instrument measures how much light at different wavelengths is transmitted through the cuvette and specimen.

The white-capped cuvette in the center of this image contains our specimen.

So suppose we take a sample of fuel, pipette it into a cuvette, and take a quick spectrum measurement. This is what the resulting data looks like.

Transmittance spectrum collected from a sample of red-dyed diesel fuel.

What we get is a spectrum. The vertical axis shows the transmittance, which ranges from 0.0 to 1.0. A value of 1.0 would mean that 100% of the light was transmitted. A value of 0.0 would mean that absolutely no light was able to get through the specimen. This spectrum was collected within the range of wavelengths that humans can perceive, which runs from roughly 400 nm (violet) to 700 nm (deep red) wavelength. (If you don’t know what I mean by wavelength, you might want to read up a bit here.)

This specimen appeared red to the human eye. The light was mostly transmitted in the 650-700 nm range of wavelengths, which corresponds to red and orange colors and matches how the specimen looked before I scanned it. Practically no light was transmitted below 550 nm, which means the specimen blocked blue and violet light. Somewhere in these ranges of transmitted and blocked light, we will find the information we need to categorize the color of this specimen.
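
For concreteness, the data behind a plot like this is just a pair of arrays: one for the wavelength grid and one for the transmittance at each wavelength. The names and values below are purely illustrative, not the measured data:

```python
import numpy as np

# Wavelength grid in nm, covering roughly the visible range scanned by the UV/Vis.
wavelength_nm = np.arange(400, 701, 1)

# Transmittance between 0.0 (opaque) and 1.0 (fully transparent) at each wavelength.
# A red-dyed specimen sits near 0 below ~550 nm and climbs toward 1 above ~650 nm;
# this ramp is a crude stand-in for the measured curve, not real data.
transmittance = np.clip((wavelength_nm - 550.0) / 100.0, 0.0, 1.0)
```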

3-Channel RGB Color

But how does this spectrum compare to RGB color? The idea behind RGB color is that 3 numbers – one each for the red, green, and blue color channels – can be used to describe almost any color visible to the human eye.

A screengrab from GIMP showing how different RGB color values can be selected to reproduce a particular color. It is also worth noting that there are many other frameworks for representing color. This one just happens to be the best match to the standard method.

One thing is immediately apparent upon comparing our example of RGB color (255,36,0) to the spectrum from the UV/Vis spectrophotometer. The spectrum contains a great deal more information!

This brings us to the real challenge of the project: converting a continuous spectrum into RGB color values. To do that, we need to consider how the human eye perceives color, which I will discuss in my next post.