There was but one question he left unasked, and it vibrated between his lines: if gross miscalculations of a person’s value could occur on a baseball field, before a live audience of thirty thousand, and a television audience of millions more, what did that say about the measurement of performance in other lines of work? If professional baseball players could be over- or undervalued, who couldn’t? - Michael Lewis, Moneyball: The Art of Winning an Unfair Game.
I found myself asking a similar question as a maintenance officer in a CH-53E squadron. I knew I had Marines who were excellent maintainers, but I struggled to quantify any individual's value to a given task. While we were awash in data, maintenance statistics rarely led to meaningful conclusions about individual contributions. Yet when I wrote Marines up for awards, I would dutifully pull information on how many man-hours they had spent on the aircraft or how many maintenance action forms they had completed. While this was useful in getting a Marine recognized for what I felt was a significant contribution, I knew it was also an empty statistic. I mostly used those numbers to back up the eye test I had already applied to the Marine's performance. These numbers did not tell the whole story of individual performance because the data was not organized in a way that revealed an individual Marine's impact. More important than recognizing Marines for awards, I felt that proper analysis of performance could increase unit efficiency by improving training and procedures while incorporating best practices across the fleet.
If Marine Corps units could better utilize data analysis, they could better understand what actually happens during maintenance operations fleet-wide. For instance, do more maintenance hours mean that a Marine is working harder, or simply longer? How do hours on the aircraft translate to readiness? Was the task completed faster or slower than the average rate? And what about the task itself? Was the aircraft more or less likely to need additional maintenance or rework after the initial work was done?
There is an analytic approach to answering these questions. In fact, Marine Corps aviation already collects maintenance data through the Optimized Organizational Maintenance Activity (OOMA), but our systems were not designed to answer the questions above. A more significant point is that we often lack "clean data" to provide accurate answers to questions about maintenance performance. Data becomes corrupt when Marines improperly log maintenance actions, which often happens when they do not see the value in logging correctly; for example, a Marine leaves a work task in progress and never marks it complete. Data entry is often seen as an unrewarding task, and Marines do not understand the implications of "bad data."
Big Data Revolution
Analytics have revolutionized many industries concerned with individual performance in team activities. The sports world, for instance, has undergone a revolution in the last twenty years, starting with professional baseball and sabermetrics. Sabermetricians collect and summarize relevant data from in-game activity to answer specific questions, allowing teams to evaluate past performances and predict future ones to determine a player's contributions. This mathematical approach has allowed teams to properly value their talent and to assemble the strongest teams possible in a highly competitive environment. While sports teams can compare players' Wins Above Replacement (WAR), we have no analysis of a Marine's Repairs Above Replacement (RAR). Lacking proper statistical analysis means we have a limited understanding of a Marine's ability to perform the most essential tasks of the profession. Such an analysis could allow the Marine Corps to build better teams and ensure it has the appropriate Marines to execute tasks in a given time.
It is also true that aviation maintenance, or any maintenance in the Marine Corps for that matter, is not a perfect match to baseball. Baseball statistics work well because there is a defined data set that is universally observable, and games generally occur under relatively controlled conditions. Maintenance procedures deviate more, based on location or the requirement for field-expedient repairs, which complicates maintenance data. More importantly, baseball maintains meticulously accurate statistical records because the entire institution of baseball has a self-interest in compiling them accurately.
Baseball players, managers, owners, coaches, and reporters all have vested interests in the correct logging of statistics because it provides each of them direct, tangible benefits. Players are directly rewarded for their performance on the field in both pay and recognition. Coaches use statistics to make decisions based on how a player matches up in a given situation. Owners are able to hire and fire players based on past performance and its correlation to future performance. In contrast, maintenance records in the Marine Corps are plagued with inaccuracies due to a lack of incentives.
Proper logging of data within the Marine Corps writ large depends almost entirely on personal integrity and altruistic data input. This creates issues across the Marine Corps, from admin shops to Defense Readiness Reporting System (DRRS) reports, and it leads to inaccurate or misleading maintenance statistics, such as Ready Basic Aircraft, that do not produce accurate assessments of squadron readiness.
Additionally, the logging of maintenance tasks is not uniformly executed among similar units; the effect is that different units are essentially playing different games. A proper understanding of the Marine Corps' data problems must start with analyzing the systems of data input and learning to apply a human-centered design approach that can increase the accuracy of maintenance data.
Our proposal: develop an analytical, evidence-based approach to the maintenance talent-management system that maximizes the value of each Marine's individual contributions to mission success. Improve maintenance efficiency by properly resourcing all tasks. Provide commanders with predictive maintenance timetables based on proven performance so that flight-schedule decisions can be informed. Identify keys to success and improve teaching methods at the schoolhouse and in the fleet. Create data-driven solutions to increase readiness so that Marines are prepared for worldwide deployment.
This approach could target many key attributes of human performance but would initially focus on three key indicators of success for an aircraft maintainer, ranked in order of impact to readiness. These three attributes could be averaged to generate an overall indicator score for the maintainer, but could also be used individually to aid leadership in assigning the right people to the right tasks, particularly in forward-deployed operations when the trial-and-error method is untenable. Moreover, Marines are competitive. A score similar to WAR (overall contribution to unit aircraft readiness) that is directly tied to their maintenance data input, but that could not easily be manipulated by simply entering bad data, would serve a decades-long HQMC initiative to improve data entry by our junior maintainers, because proper data entry would directly improve their own careers.
1. How effective are a maintainer's maintenance actions? That is, how long does an aircraft system remain in a functional status following one of their repair actions? Quite often, aircraft display intermittent faults in or between flights that can be difficult to duplicate on the ground. These discrepancies are typically signed off as "no repair required/could not duplicate discrepancy" for some time, until maintainers and pilots grow frustrated, leading to often-ineffective triage and haphazard component replacements. Since the fault is intermittent, these actions can appear to generate repairs, only for the same discrepancy to return two flights later. The cause is often a small and innocuous discrepancy in an ancillary or parasitic system that only the most gifted and dedicated maintainers are able or willing to find and repair. By tracking the average time a discrepancy remains repaired following a given maintainer's actions, a more accurate success rate can be generated and factored into a benefit-analysis score. This would be analogous to the wRC+ metric used within sabermetrics.
2. How efficient is a given maintainer at completing maintenance tasks over a given period? While man-hours are an excellent indicator of time commitment, they are not necessarily an effective measure of efficiency. By linking man-hours with the number of actions completed, the Subsystem Capability and Impact Reporting (SCIR) impact of those discrepancies, and the impact to readiness (future flight hours for that aircraft), the "impact" of a maintainer's man-hours can be ascertained. This could become a more all-encompassing metric, similar to SIERA.
3. How effective is a maintainer at troubleshooting system discrepancies to the correct defective component? Quite often, frustrated maintenance departments begin "throwing parts" at a problem to find the right solution, as paragraph 1 describes. This wastes manpower (O-level, I-level, supply, and D-level), induces discrepancies as connectors are needlessly removed and replaced, ties up test equipment at the I-level, wastes money if components are O-to-D items, and degrades readiness for the enterprise as components are needlessly depleted from spare reserves. A maintainer's success in component removal can be ascertained by tracking removed components through the enterprise repair process. By tracking the higher-echelon activities' "no fault found" rate against the serial numbers of components removed by the O-level maintainer, a percentage of erroneous removals can be calculated. An effective enterprise metric would also need to include a portion of paragraph 1, as disparities between I- and D-level test equipment and aircraft performance occasionally lead to false "no fault found" results, but these are a minority. The overall score could be validated with a loopback to whether replacing the part at the O-level resulted in long-term (in flight hours) sustainment of that system.
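To make the three indicators concrete, the following is a minimal sketch of how they, and a simple averaged overall score, might be computed from per-action records. All field names, sample values, and the min-max normalization are illustrative assumptions for discussion, not an actual OOMA schema or an endorsed scoring formula.

```python
# Notional per-action maintenance records. Fields are assumptions:
#   man_hours      - hours expended on the action
#   uptime_after   - flight hours the system stayed functional after the repair
#   part_removed   - serial number of any component removed (None if no removal)
#   nff            - True if the removed part came back "no fault found"
from statistics import mean

actions = [
    {"maintainer": "Cpl Doe", "man_hours": 6.0, "uptime_after": 120.0,
     "part_removed": "SN-1001", "nff": False},
    {"maintainer": "Cpl Doe", "man_hours": 4.0, "uptime_after": 95.0,
     "part_removed": None, "nff": False},
    {"maintainer": "LCpl Roe", "man_hours": 9.0, "uptime_after": 2.5,
     "part_removed": "SN-2002", "nff": True},
    {"maintainer": "LCpl Roe", "man_hours": 3.0, "uptime_after": 40.0,
     "part_removed": "SN-2003", "nff": True},
]

def indicators(records):
    """Compute the three per-maintainer indicators described above."""
    by_maint = {}
    for r in records:
        by_maint.setdefault(r["maintainer"], []).append(r)
    scores = {}
    for name, recs in by_maint.items():
        # 1. Effectiveness: mean flight hours a system stays up after repair.
        effectiveness = mean(r["uptime_after"] for r in recs)
        # 2. Efficiency: repaired flight hours generated per man-hour expended.
        efficiency = (sum(r["uptime_after"] for r in recs)
                      / sum(r["man_hours"] for r in recs))
        # 3. Troubleshooting: share of removals NOT returned "no fault found".
        removals = [r for r in recs if r["part_removed"] is not None]
        good = sum(1 for r in removals if not r["nff"])
        troubleshooting = good / len(removals) if removals else 1.0
        scores[name] = {"effectiveness": effectiveness,
                        "efficiency": efficiency,
                        "troubleshooting": troubleshooting}
    return scores

def overall(scores):
    """Min-max normalize each indicator across maintainers, then average."""
    keys = ["effectiveness", "efficiency", "troubleshooting"]
    normed = {}
    for k in keys:
        vals = [s[k] for s in scores.values()]
        lo, hi = min(vals), max(vals)
        for name, s in scores.items():
            norm = (s[k] - lo) / (hi - lo) if hi > lo else 1.0
            normed.setdefault(name, []).append(norm)
    return {name: sum(v) / len(v) for name, v in normed.items()}
```

A real implementation would weight the indicators by impact to readiness rather than averaging them equally, and would draw `uptime_after` and `nff` from OOMA and the supply chain's repair records rather than hand-entered rows; the sketch only shows that each indicator reduces to arithmetic over data the enterprise already collects.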
These examples all relate to aviation maintenance because that is what our group is most familiar with, but we believe there is wider application for this type of analysis. It could directly translate to ground-based maintenance and could also improve performance in many other fields across the Marine Corps. Like the Oakland A's described in Moneyball, the Marine Corps competes on a global stage with severely constrained resource and personnel budgets, alongside joint-service allies and against capable opponents. As General Charles C. Krulak so eloquently put it: "For over 221 years our Corps has done two things for this great Nation. We make Marines, and we win battles." Data and metrics are not the total answer; we cannot and do not propose reducing an individual Marine to a number. Rather, they can provide tools for leaders to make better decisions that increase proficiency and unit readiness.
-Major Karl Fisher is a CH-53E pilot and a Naval Concepts Officer at the Marine Corps Warfighting Lab
-Captain Kevin Champaigne is an Avionics Officer at Naval Air Systems Command