
By Rob Shaul, Founder
I judge tactical fitness programming by asking the coach or programmer to tell me the three to six most important, mission-direct fitness demands for the athletes they train.
Any professional coach should be able to answer this clearly and without hesitation. If the response is slow, vague, or hedged, red flags go up immediately. If he lists ten or twelve demands, it tells me he hasn’t prioritized at all—and that his programming likely tries to train everything, all the time. Occasionally, someone answers clearly and decisively. That’s a good sign—but it’s only the first question.
The follow-up question:
What assessment do you use for each of those demands? And how are your athletes scoring?
This is where almost every coach fails.
Most coaches have never designed or deployed assessments that align with what they claim matters most. They may talk about strength, endurance, work capacity, or durability, but they can’t point to a clean, repeatable way they measure those qualities. They don’t know how their athletes are actually performing, and they can’t tell whether their programming is working.
That gap tells me everything I need to know.
If a coach can’t clearly identify the primary fitness demands of the job, can’t assess them, and can’t tell me how his athletes are scoring, then his programming isn’t anchored to reality. It may be hard. It may be creative. It may even be popular. But it isn’t accountable.
Assessments force that accountability.
They also force hard decisions most coaches would rather avoid. You have to decide what fitness qualities actually matter for performance. You have to decide which ones matter most. And because an assessment can’t take all damn day, you have to leave some things out. That act of prioritization is critical—and revealing.
Once the demands are identified, the next set of decisions is just as difficult: how to assess them accurately, efficiently, and fairly. Textbooks aren't much help here. They often recommend technically complex exercises whose results depend on skill, familiarity, and coaching time, none of which has much to do with fitness. That's a problem.
When we design assessments at MTI, we follow a few hard rules.
First, every event must reflect a mission-direct fitness demand of the job. If it doesn’t show up in the field, it doesn’t belong in the assessment.
Second, the exercises must be simple. The goal is to assess fitness, not technique. Highly technical movements may be useful for training, but they make poor assessment tools if success depends more on skill than capacity.
Third, the equipment must be readily available. Athletes must be able to train for the assessment. If the test requires specialized equipment they can’t access beforehand, it’s flawed.
Fourth, scoring has to be simple and transparent. If it takes a spreadsheet and a briefing to understand what happened, the assessment isn’t doing its job.
Finally, the assessment has to be time-efficient. If it takes all damn day, it won’t be deployed consistently—and an assessment that isn’t used might as well not exist.
Making these decisions is risky. Designing your own assessments opens you up to criticism from academics, administrators, lawyers, and other coaches. It’s much safer to hide behind established tests, even when those tests fail to assess what actually matters.
MTI has chosen to accept that risk because actionable truth beats safe and lazy ambiguity.
Another challenge is that many mission-direct fitness demands don’t exist cleanly in the literature at all. For example, textbooks say little about the work capacity required for movement under fire—one of the most dangerous scenarios faced by military and law-enforcement athletes. Because it wasn’t defined, we had to build ways to assess it ourselves.
Scoring introduces another layer of judgment. What metric do you use? Are events equally weighted? Should they be? In many cases, equal weighting is wrong. Some attributes matter more than others depending on the occupation. Pretending otherwise produces misleading results.
At MTI, many occupational assessments are scored simply as Excellent, Good, or Poor. The intent isn't certification or bureaucracy. It's focus and accountability. Our standard is Good. That language matters. It sets expectations without claiming precision that doesn't exist.
Loading decisions are equally fraught. Fixed loads disadvantage lighter athletes, who must move the same absolute weight with smaller frames. Bodyweight-based loads disadvantage heavier ones, who must move more total weight to earn the same score. Percentages introduce complexity. Every choice carries tradeoffs. There is no perfect solution, only decisions that better reflect the demands of the job.
Assessments don’t just measure fitness. They direct training. Athletes train for what is tested. Coaches program toward what is measured. If the assessment is wrong, training will drift no matter how well intentioned the coach is.
This is why MTI’s assessments are often uncomfortable. They expose athletes under fatigue. They combine demands. They reveal weaknesses without apology. That isn’t about being clever or cruel. Fatigue reveals truth, and real performance rarely happens fresh.
One hard rule we’ve learned is this: if an assessment can’t be trained for fairly, it’s flawed. If success depends on tricks, pacing games, or familiarity rather than fitness, the assessment isn’t doing its job.
Assessments at MTI are living instruments. Like our programming, they are revised, refined, and sometimes scrapped entirely as we learn more. If we find a better way to assess the demands of a job, we change the test. No assessment is protected.
Assessments don’t replace judgment. They don’t guarantee performance. And they don’t matter more than training itself. But when they are designed honestly and deployed consistently, they force clarity. They keep programming anchored to reality—and make it much harder to lie to ourselves about what actually matters.
That’s why we keep building them.
MTI has designed and deployed dozens of occupation-specific fitness assessments over the past two decades. A current list can be found here: