Is your test cheatable?

Written by Sparta Science | Jan 7, 2015

One of the biggest reasons we chose the vertical jump as the test that generates the Movement Signature is that it is homoscedastic. As discussed in the previous post, that is a complicated way of saying that the way we use the vertical jump (in the test) makes it just as reliable a measurement tool for a basketball player as it is for a swimmer. There is no advantage in being a more skilled jumper when it comes to the signature. Another big reason we chose the vertical jump is that it is a general movement that requires less skill than other tests, like a sprint or a rotational movement. Because it is such a simple movement, we could create a testing protocol that limits our athletes’ ability to “beat” the test.
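If you want to make that idea concrete, here is a minimal sketch (not our actual analysis) of what checking homoscedasticity can look like: testing whether the spread of a jump-test variable is similar across two groups of athletes. The group labels and numbers below are hypothetical.

```python
# Minimal sketch: is the variability of a jump-test variable similar across
# two athlete groups? Values are hypothetical, not real Sparta data.
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
basketball = rng.normal(loc=55.0, scale=4.0, size=40)  # hypothetical scores
swimmers = rng.normal(loc=48.0, scale=4.2, size=40)    # similar spread, different mean

stat, p = levene(basketball, swimmers)
# A large p-value means we cannot reject equal variances: the measurement is
# comparably consistent for both groups, regardless of jumping skill.
print(f"Levene statistic={stat:.3f}, p={p:.3f}")
```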

In sports, as well as in medicine, there are ways to “beat” lots of tests. One of the most obvious is eating before a fasting blood draw, which can cause a doctor to prescribe medication for values that are not truly abnormal. Because the vertical jump test we use evaluates the quality (how) of the athletes’ movement rather than the end result (height), it potentially creates an incentive for the athlete to jump differently (unnaturally) than they normally would to produce a desired change in their Movement Signature. If our test did not account for this, we could give an athlete an incorrect training plan. So we had to create test protocols that limit these errors, often referred to in medicine as false positives or negatives, since the results could inaccurately dictate the next steps. There are really three methods to limit these errors.

The first method is a brief, standardized warm-up that limits the athletes’ ability to alter their result unnaturally (for example, with an alternative arm swing or a shortened eccentric load).

The second step is choosing the correct variables for actionable results. Analysis of our own data shows that the internal consistency for LOAD and EXPLODE is excellent and that for DRIVE it is very good (near excellent). Internal consistency is a measure of reliability; basically, the correlation between different jump test occasions when rating the same group of people. However, internal consistency does not necessarily mean the exact same result; rather, it means the variability of each Movement Signature variable is similar. This similar variability existed not only across our diverse athlete population but also within the same individual athletes, suggesting that effects above and beyond jump technique are expressed in the scan. The reason for this accuracy, irrespective of jump technique, is that the force plate variables expressed by LOAD, EXPLODE, and DRIVE are averages along the force-time curve, rather than peaks or maximum outputs.
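To illustrate why averages along the force-time curve are harder to distort than peaks, here is a minimal sketch with made-up numbers: adding one brief, unnatural spike to a force-time trace barely moves the phase average but dramatically changes the peak.

```python
# Minimal sketch: phase averages are more stable than peaks.
# The force-time trace and spike below are hypothetical.
import numpy as np

t = np.linspace(0.0, 0.5, 500)                     # 0.5 s phase, 500 samples
force = 1500.0 + 400.0 * np.sin(np.pi * t / 0.5)   # smooth force-time curve (N)

spiky = force.copy()
spiky[250] += 600.0                                # one brief, unnatural spike

print("average force:", force.mean(), "vs", spiky.mean())  # nearly identical
print("peak force:   ", force.max(), "vs", spiky.max())    # very different
```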

The third step is the processing of the trials’ variables into a Movement Signature. These results are the average of the 3 best jumps (identified by jump height) out of 6 total jumps. Our statisticians performed a Principal Components Analysis (PCA) to explain any patterns, and this averaging process explained at least 90% of the variability in these assessments. Think of this averaging process as a filter for purifying water, removing the athletes’ ability to repeat an (unnatural) dynamic movement pattern. In summary, every athlete would have to try to trick the force plate in the same exact way to make the test unreliable, which is impossible.
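Here is a minimal sketch of that averaging step, using hypothetical trial values: keep the 3 highest jumps (by jump height) out of 6 and average each variable across those trials.

```python
# Minimal sketch of "best 3 of 6" averaging. The numbers are hypothetical;
# the variable names simply mirror the ones discussed in the post.
import numpy as np

# Each row is one trial: [jump_height_cm, LOAD, EXPLODE, DRIVE]
trials = np.array([
    [48.1, 61.0, 55.2, 50.3],
    [50.4, 62.5, 56.0, 51.1],
    [47.2, 60.1, 54.8, 49.9],
    [51.0, 63.0, 56.4, 51.5],
    [49.8, 62.2, 55.9, 50.8],
    [46.5, 59.7, 54.1, 49.4],
])

best3 = trials[np.argsort(trials[:, 0])[-3:]]   # 3 highest by jump height
signature = best3[:, 1:].mean(axis=0)           # average LOAD, EXPLODE, DRIVE
print("LOAD, EXPLODE, DRIVE:", np.round(signature, 1))
```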

The takeaway message here is that if you choose a test that is easily “beaten,” you incentivize your subjects to “cheat,” and you won’t really be getting the useful information and data you’re after.