
Influence of familiarisation and competitive level on the reliability of countermovement vertical jump kinetic and kinematic variables

 
Nibali, ML, Tombleson, T, Brady, PH, and Wagner, P. Influence of familiarization and competitive level on the reliability of countermovement vertical jump kinetic and kinematic variables. J Strength Cond Res 29(10)/2827–2835, 2015.
 

KEY TAKEAWAYS

  • The countermovement jump (CMJ) assessment as described is homoscedastic: measurement error does not vary with the size of the score, so the test can be interpreted consistently regardless of skill level.
  • The CMJ test can be performed without the need for familiarization trials.
  • The CMJ test is a reliable measure of vertical JUMP HEIGHT.
  • A change in LOAD, EXPLODE, or DRIVE of 1 t-score or more is a significant change.
  • The EXPLODE and DRIVE variables are highly reliable and can be used to determine real changes in performance.
  • LOAD is highly variable; however, changes in LOAD may still be sensitive to training responses and fatigue.
POPULATION: One hundred eighteen male and 60 female athletes participated in this study. The 3 strata comprised 113 high school athletes, 30 college athletes, and 35 professional athletes, competing in the sports of baseball, basketball, American football, rugby union, soccer, tennis, volleyball, and water polo. Subjects were experienced athletes and were engaged in a structured resistance training program with a minimum of 12 months of experience.  

SUMMARY

The questions covered:
  • If an athlete performs a vertical jump test and improves their results each time, are the improvements the result of improved athletic ability, or is the athlete learning how to perform the test better (i.e., ‘cheating’ the test to get a better result)?
  • Is there greater reliability in vertical jump results for professional athletes compared to high school or college athletes?
The study investigated the reliability of three measurements of vertical ground reaction forces during a countermovement jump test. The three force measurements were average eccentric rate of force development (LOAD), average concentric force (EXPLODE), and concentric impulse (DRIVE). A total of 178 athletes performed repeated jump trials with between 24 h and 14 d between trials. The changes in an athlete’s mean scores between trials were compared to identify any learning effect. The non-uniformity of error was compared between the professional, college, and high school athletes to see whether the reliability was consistent across different levels of competition. The study found that a reliable measurement can be obtained the first time an athlete performs a vertical jump test. EXPLODE and DRIVE were highly reliable. LOAD was highly variable between jump trials. However, when the change in eccentric rate of force development (LOAD) is greater than the typical error of measurement, it may still be considered sensitive to training responses and fatigue. The three variables LOAD, EXPLODE, and DRIVE are converted to standardized t-scores, and a t-score change of 1 or more can be considered significant: because it is larger than the typical error, the change can be considered a real change.
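To make the t-score logic concrete, the sketch below (Python) standardizes a raw score against a reference group and flags a change of at least 1 t-score as a real change, as described above. The mean-50/SD-10 scaling, the function names, and the example squad values are illustrative assumptions, not the scoring method actually used by the authors.

```python
# Minimal sketch of the t-score logic described above (illustrative only; the
# variable names and the mean-50 / SD-10 scaling are assumptions, not the
# authors' published implementation).
import statistics

def to_t_score(value, group_values):
    """Standardize a raw value against a reference group (mean 50, SD 10)."""
    mean = statistics.mean(group_values)
    sd = statistics.stdev(group_values)
    return 50 + 10 * (value - mean) / sd

def is_real_change(t_before, t_after, threshold=1.0):
    """Flag a change as 'real' when it meets or exceeds the 1 t-score
    threshold cited in the summary (i.e., larger than the typical error)."""
    return abs(t_after - t_before) >= threshold

# Example: an athlete's EXPLODE (average concentric force, N/kg) before and
# after a training block, standardized against hypothetical squad values.
group = [22.5, 24.1, 23.0, 25.6, 21.8, 26.3]
before = to_t_score(23.0, group)
after = to_t_score(24.5, group)
print(round(before, 1), round(after, 1), is_real_change(before, after))
```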

ABSTRACT

Nibali, ML, Tombleson, T, Brady, PH, and Wagner, P. Influence of familiarization and competitive level on the reliability of countermovement vertical jump kinetic and kinematic variables. J Strength Cond Res 29(10)/2827–2835, 2015. — Understanding typical variation of vertical jump (VJ) performance and confounding sources of its typical variability (i.e., familiarization and competitive level) is pertinent in the routine monitoring of athletes. We evaluated the presence of systematic error (learning effect) and nonuniformity of error (heteroscedasticity) across VJ performances of athletes that differ in competitive level and quantified the reliability of VJ kinetic and kinematic variables relative to the smallest worthwhile change (SWC). One hundred thirteen high school athletes, 30 college athletes, and 35 professional athletes completed repeat VJ trials. Average eccentric rate of force development (RFD), average concentric (CON) force, CON impulse, and jump height measurements were obtained from vertical ground reaction force (VGRF) data. Systematic error was assessed by evaluating changes in the mean of repeat trials. Heteroscedasticity was evaluated by plotting the difference score (trial 2 − trial 1) against the mean of the trials. Variability of jump variables was calculated as the typical error (TE) and coefficient of variation (%CV). No substantial systematic error (effect size range: −0.07 to 0.11) or heteroscedasticity was present for any of the VJ variables. Vertical jump can be performed without the need for familiarization trials, and the variability can be conveyed as either the raw TE or the %CV. Assessment of VGRF variables is an effective and reliable means of assessing VJ performance. Average CON force and CON impulse are highly reliable (%CV: 2.7% ×/÷ 1.10), although jump height was the only variable to display a %CV ≤ SWC. Eccentric RFD is highly variable yet should not be discounted from VJ assessments on this factor alone because it may be sensitive to changes in response to training or fatigue that exceed the TE.


Influence of familiarization and competitive level on the reliability of countermovement vertical jump kinetic and kinematic variables

Introduction

The performance of a maximal vertical jump (VJ) is a complex human movement that requires the coordinated activation, synchronization, and contraction of monoarticular and biarticular muscles of the leg extensors (i.e., hip extensors, knee extensors, and plantar flexors) (29), and the sequencing of segmental motions (5,39). Direct measurement of VJ kinetic and kinematic variables provides valuable insight pertaining to (a) the neuromuscular strategies used to achieve maximal jump performance, thus reflecting the movement efficiency of the athlete; (b) the neuromuscular status of athletes in response to training and competition, thus intimating the presence of adaptation or fatigue (7,8,12,26,27,38); and (c) the lower-body explosive qualities of the athlete (11,36,40), thus highlighting areas of deficiency to better direct training program design (27). As such, assessment of VJ kinetic and kinematic variables is a useful tool in the routine monitoring of athletes. However, to make informed decisions regarding an individual's performance, sport practitioners and researchers must first be aware of the typical variation or reliability associated with VJ performance and its related kinetic and kinematic variables (2,13).

Reliability refers to the repeatability of a measure or an individual’s performance (2,13) and encompasses both biological (i.e., within-subject) and nonsystematic measurement (e.g., equipment, tester) error, which renders the observed value of a measure different from the true value (13). The reliability of a measurement or performance variable quantifies the degree of precision associated with that variable and therefore has important implications for the interpretation of athletic data. When observing changes in performance, the magnitude of the observed change in a given variable needs to exceed the typical variation or random error (“noise”) for there to be any confidence that the true change in performance is a “real” effect. To this end, measurement variables need to demonstrate adequate reliability to detect small, but practically meaningful, changes in athletic performance (2).

Previous research on athletic populations has reported “acceptable” levels of reliability for VJ (or its derivatives) kinetic and kinematic variables. Cormack et al. (9) calculated the intraday and interday reliability of countermovement jump (CMJ) variables in elite Australian Rules Football players, reporting coefficient of variation (CV) values of 1.1–7.1% (intraday) and 1.0–5.7% (interday). Sheppard et al. (36) assessed unloaded (body mass) and loaded (body mass +25%) CMJs in elite and developmental athletes, and observed CVs of 3.5% (peak force) to 36.3% (concentric [CON] peak rate of force development [RFD]) in unloaded jumps, and 3.0% (mean power) to 47.4% (CON peak RFD) in loaded jumps. To date, few authors have considered the reliability of kinetic and kinematic jump variables with respect to the smallest worthwhile change (SWC) in performance (9,33,38). In sport science, the SWC is defined as the smallest effect or change in performance that elicits a practically meaningful outcome (17). The sensitivity of a measure to detect small changes in performance is therefore reliant on the typical error (TE) being smaller than (or close to) the SWC (35). Further examination of the reliability of kinetic and kinematic variables and their relationship to the SWC is required to better direct the interpretation of jump performance in practical and research settings.
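As a rough illustration of how these quantities relate, the sketch below computes the TE, the %CV, and an SWC of 0.2 × between-athlete SD from two repeat trials. It is a minimal example built from the standard definitions, not the spreadsheet analysis cited in the Methods; the example data and the simple pooling of the between-athlete SD are assumptions for illustration.

```python
import math
import statistics

def reliability_stats(trial1, trial2):
    """Typical error, %CV, and smallest worthwhile change from paired trials.

    trial1/trial2: one score per athlete, in the same order.
    TE  = SD of the difference scores / sqrt(2)
    %CV = TE of the log-transformed scores, back-transformed to a percent
    SWC = 0.2 x between-athlete SD (here approximated by the SD of each
          athlete's two-trial mean)
    """
    diffs = [b - a for a, b in zip(trial1, trial2)]
    te = statistics.stdev(diffs) / math.sqrt(2)

    log_diffs = [math.log(b) - math.log(a) for a, b in zip(trial1, trial2)]
    te_log = statistics.stdev(log_diffs) / math.sqrt(2)
    cv_pct = 100 * (math.exp(te_log) - 1)

    athlete_means = [(a + b) / 2 for a, b in zip(trial1, trial2)]
    swc = 0.2 * statistics.stdev(athlete_means)
    return te, cv_pct, swc

# Hypothetical jump heights (m) for six athletes in two sessions
t1 = [0.42, 0.55, 0.47, 0.61, 0.39, 0.52]
t2 = [0.43, 0.54, 0.49, 0.60, 0.40, 0.53]
te, cv, swc = reliability_stats(t1, t2)
print(f"TE={te:.3f} m, CV={cv:.1f}%, SWC={swc:.3f} m, sensitive={te <= swc}")
```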
Evaluation of the magnitude of moderate (MWC) and large (LWC) worthwhile changes (analogous to moderate and large effect sizes) in jump performance may be of additional value when interpreting changes in the routine monitoring of athletes. Vertical jump ability has traditionally centered on the generation of maximal muscular power or jump height because of their associated relationship with athletic performance (3,20,25,37). Although these measures quantify the overall “outcome” of the movement, they provide no insight into the mechanics (i.e., jump strategy) that characterize the “execution” of the movement. Peak force, RFD, and impulse are reported to correlate highly with maximal power production and jump height (6,21,28). However, although RFD and impulse provide valuable insight pertaining to jump strategy, they are reported to demonstrate higher variability compared with peak power and jump height (6,28,30,31,36,38). Moreover, the availability of reliability statistics for eccentric (ECC) RFD, and specifically average ECC RFD, is limited. Moir et al. (31) reported CVs for peak and average ECC RFD ranging from 17 to 21% in physically active men and women. To date, no reliability statistics exist for peak or average ECC RFD in highly trained or elite athlete populations. Furthermore, no research has assessed whether the reliability of jump kinetic and kinematic variables differs between athletes of varying competitive level (i.e., developmental, subelite, elite), nor has the effect of competitive level on the systematic error (i.e., learning effect, familiarization) of VJ performance been examined.

Systematic error (or systematic changes in the mean) is an important component of reliability that requires consideration in the routine monitoring of athletes. Systematic error refers to the trend for a measurement to differ in a specific direction with the performance of repeat trials (2). When the intention of a performance test is to assess the specific qualities of an athlete or to monitor changes in performance in response to an intervention or training, it is important to eradicate this so-called “learning effect” (13). Moir et al. (30) examined the influence of familiarization sessions on squat jump (i.e., CON-only CMJ) and CMJ (31) performance of physically active subjects. No systematic bias was reported for kinetic and kinematic variables with the exception of CMJ CON peak RFD (31). Similarly, no learning effect was observed in the performance of 30 consecutive loaded jump squats in male soldiers (1). Whether a learning effect in VJ performance is observed in younger, less trained subjects compared with subelite or elite athletes is yet to be substantiated.

A further component of reliability that requires consideration in the assessment of athletes is how the measurement error relates to the magnitude of the measurement variable (2). The presence of heteroscedasticity (or nonuniformity of error) exists when the TE varies in a systematic manner between subjects and ultimately dictates how the reliability should be analyzed and expressed (2). Typically, heteroscedasticity occurs when participants with larger values for a measurement variable display larger TEs (2,13), which has implications for the assessment of athletes differing in competitive level or sport, where the proficiency of the athlete in performing the test confounds the observed reliability. Analysis of reliability after logarithmic transformation addresses the issue of heteroscedasticity, providing an estimate of the typical percentage error (CV) that is unaffected by the magnitude of the measure (13).
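A minimal sketch of the difference-versus-mean check just described: correlate each athlete's T2 − T1 difference with their trial mean, on both raw and log-transformed scores. The helper names and example data are illustrative only, not the published spreadsheet methods.

```python
import math
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def heteroscedasticity_check(trial1, trial2):
    """Correlate the T2 - T1 difference with the trial mean for each athlete,
    on both the raw and the log-transformed scores. Correlations near zero on
    the raw scale, and not clearly improved by log-transformation, suggest the
    error is uniform across the range of the measure."""
    diffs = [b - a for a, b in zip(trial1, trial2)]
    means = [(a + b) / 2 for a, b in zip(trial1, trial2)]
    r_raw = pearson_r(means, diffs)

    log_diffs = [math.log(b) - math.log(a) for a, b in zip(trial1, trial2)]
    log_means = [(math.log(a) + math.log(b)) / 2 for a, b in zip(trial1, trial2)]
    r_log = pearson_r(log_means, log_diffs)
    return r_raw, r_log

t1 = [0.42, 0.55, 0.47, 0.61, 0.39, 0.52]   # hypothetical jump heights (m)
t2 = [0.43, 0.54, 0.49, 0.60, 0.40, 0.53]
print(heteroscedasticity_check(t1, t2))
```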
In order for jump vertical ground reaction force (VGRF) data to be a useful monitoring tool of athletic ability and the neuromuscular status of athletes, a comprehensive understanding of the factors that influence the reliability of VJ performance and its associated kinetic and kinematic variables is required. We therefore sought to determine whether the reliability of VJ performance in athletes that differ in competitive level and sport is affected by (a) systematic error (i.e., learning effect, familiarization) and (b) nonuniformity of error (heteroscedasticity). We further sought to quantify the reliability of VJ kinetic and kinematic variables with respect to the SWC that elicits a practically meaningful outcome in VJ performance and to quantify the magnitude of MWC and LWC to aid in the interpretation of VJ assessments.

Methodology

Experimental Approach to the Problem: A total of 178 male (n = 118) and female (n = 60) athletes performed repeat jump trials (i.e., testing sessions). Between 2 and 6 trials were performed to assess the reliability of VJ (defined here as a CMJ performed with an arm swing) kinetic and kinematic variables and to determine whether systematic error and nonuniformity of error are present in the performance of repeat trials. Pairwise comparisons of testing sessions were separated by a minimum of 24 hours and a maximum of 14 days. To determine the effect of competition level on the reliability and systematic error, athletes were categorized into 3 strata on the basis of their competitive divisions—high school, college, or professional. Before testing sessions, athletes performed a standardized warm-up consisting of soft-tissue preparation, mobility exercises, dynamic stretches of the lower-body musculature, and a series of warm-up jumps in preparation for maximal-effort VJ testing. After completion of the warm-up, a 3-minute rest period was provided before commencement of jump testing.

Subjects: One hundred eighteen male (age: 18.7 ± 3.9 years; body mass: 82.0 ± 14.5 kg) and 60 female (age: 17.4 ± 3.6 years; body mass: 65.8 ± 8.6 kg) athletes participated in this study. The 3 strata comprised 113 high school athletes (age: 16.1 ± 2.2 years; body mass: 70.9 ± 10.9 kg), 30 college athletes (age: 19.4 ± 1.5 years; body mass: 80.1 ± 14.4 kg), and 35 professional athletes (age: 23.7 ± 3.6 years; body mass: 91.7 ± 15.6 kg), competing in the sports of baseball, basketball, American football, rugby union, soccer, tennis, volleyball, and water polo. Subjects were experienced athletes engaged in a structured resistance training program, with a minimum of 12 months of training experience. Subjects had all procedures explained to them and were informed of the possible risks. The data collection process was completed free of injuries and was conducted as part of the athletes' routine training at the SPARTA Performance Science training facility (Menlo Park, CA, USA). The subjects (or parent/legal guardian for subjects under the age of 18 years) provided written consent for testing, data collection, and the publication of results as part of their agreement with SPARTA Performance Science; as such, ethical approval for this study was not sought.

Jump Testing: Subjects participated in 2–6 repeat trials that were separated by a minimum of 24 hours and a maximum of 14 days.
They completed 6 maximal-effort unloaded (i.e., body mass only) VJs for each trial, performed with a countermovement to a self-selected depth and an arm swing. They were instructed to perform each jump to achieve maximal VJ height and to reset their position between each jump effort; the 6 jumps were performed 30 seconds apart. No further technical instruction pertaining to the execution of the VJ was provided. Depending on the individual training schedule of the athletes, trials were conducted in AM or PM sessions; however, testing times were kept consistent (within a 4-hour window) for each subject across repeat trials. We deliberately refrained from extensively modifying routine training and nutritional practices before testing sessions to ensure application of findings to the daily training environment of highly trained and professional athletes.

Data Collection and Signal Processing: Jumps were performed with the subject standing on a commercially available piezoelectric force plate (9260AA6; Kistler Instruments, Winterthur, Switzerland). The force plate was interfaced with a data acquisition system (5691; Kistler Instruments) for signal processing of VGRF data, sampling at a frequency of 1,000 Hz. Analysis of kinetic and kinematic variables was performed using custom-designed software (SpartaTrac; SPARTA Performance Science). Numerical integration of the VGRF data produced force-time, velocity-time, acceleration-time, and displacement-time curves for determination of kinetic and kinematic variables (23). Jump height was calculated using the impulse-momentum relationship as described previously (23). Kinetic variables were assessed for both the ECC and CON phases of the VJ, defined as follows: ECC phase: the point at which VGRF exceeds body mass during the countermovement to the point of minimum displacement of the countermovement (approximately zero velocity); and CON phase: the next sample (0.001 seconds) from the end of the ECC phase (i.e., minimum displacement) to the point of take-off (i.e., VGRF approximates zero). Average ECC RFD was determined between the minimum and maximum force during the ECC phase. Average CON force was determined as the average force achieved during the CON phase and is expressed relative to body mass (N·kg⁻¹). Concentric impulse was calculated as the integral of the VGRF over the duration of the CON phase and is expressed relative to body mass (N·s·kg⁻¹). Vertical jump height was determined as the maximum vertical displacement achieved.
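The force-plate processing just described (numerical integration, the ECC/CON phase definitions, and the impulse-momentum jump height) can be sketched roughly as follows. This is not the SpartaTrac implementation; the event-detection thresholds, take-off criterion, and function names are simplifying assumptions for illustration.

```python
import numpy as np

def cmj_variables(vgrf, body_mass, fs=1000):
    """Derive CMJ variables from a single vertical ground reaction force
    trace (vgrf, in N) sampled at fs Hz, using simplified event detection.
    Illustrative sketch only."""
    g = 9.81
    dt = 1.0 / fs
    weight = body_mass * g

    # Numerical integration: net force -> acceleration -> velocity -> displacement
    accel = (vgrf - weight) / body_mass
    vel = np.cumsum(accel) * dt
    disp = np.cumsum(vel) * dt

    # Minimum displacement of the countermovement (~zero velocity)
    lowest = int(np.argmin(disp))
    # ECC phase start: first sample after the unweighting trough at which
    # VGRF exceeds body weight (simplified detection)
    trough = int(np.argmin(vgrf[:lowest]))
    ecc_start = trough + int(np.argmax(vgrf[trough:lowest] > weight))
    # Take-off: VGRF approximates zero (assumed 10 N threshold)
    takeoff = lowest + int(np.argmax(vgrf[lowest:] < 10.0))

    # Average ECC RFD (LOAD): between the minimum and maximum force of the ECC phase
    ecc = vgrf[ecc_start:lowest + 1]
    t_span = max(abs(int(np.argmax(ecc)) - int(np.argmin(ecc))), 1) * dt
    ecc_rfd = (ecc.max() - ecc.min()) / t_span

    # Average CON force (EXPLODE), relative to body mass (N/kg)
    con_force = vgrf[lowest + 1:takeoff].mean() / body_mass
    # CON impulse (DRIVE): integral of VGRF over the CON phase, relative to body mass
    con_impulse = np.sum(vgrf[lowest + 1:takeoff]) * dt / body_mass
    # Jump height from the impulse-momentum relationship: v_takeoff^2 / (2g)
    jump_height = vel[takeoff - 1] ** 2 / (2 * g)
    return ecc_rfd, con_force, con_impulse, jump_height
```

In practice, the resulting variables from the best 3 of 6 jumps (by jump height) would then be averaged for each trial, as described in the statistical analyses below.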
Statistical Analyses: The average of the best 3 of 6 VJs (determined by jump height values) in each trial was calculated for each subject. Measures of central tendency and spread of the data are presented as mean values and SDs (mean ± SD). Athletes were categorized into 3 strata for analyses, which were delineated by the level of their competitive divisions—high school, college, or professional. Systematic error was assessed by evaluating changes in the mean of repeat trials (i.e., trial 2 − trial 1 [T2 − T1], trial 3 − trial 2 [T3 − T2], trial 4 − trial 3 [T4 − T3], trial 5 − trial 4 [T5 − T4], trial 6 − trial 5 [T6 − T5]) using a publicly available spreadsheet (15). Uncertainty of the estimates is presented as confidence limits (CLs) at the 90% level, which is appropriate for the kind of mechanistic measures reported here (18). Magnitudes of standardized differences between repeat trials of <0.2, <0.6, <1.2, <2.0, and >2.0 are interpreted as trivial, small, moderate, large, and very large effect sizes (ES) (18). Where the ±CLs for the ES extend beyond the boundaries of −0.2 to 0.2, effects are deemed unclear. Thresholds for assigning qualitative terms to the likelihood that the true effect is substantial (i.e., greater than the smallest practically important effect) are as follows: most unlikely, <0.5%; very unlikely, 0.5–5%; unlikely, 5–25%; possibly, 25–75%; likely, 75–95%; very likely, 95–99.5%; and most likely, >99.5% (24). Nonuniformity of error was assessed by plotting the difference score (T2 − T1) against the mean of the 2 trials for each subject (2,4,13). This was performed for both the raw and the log-transformed data, and a Pearson correlation coefficient was calculated (2). Where the Pearson correlation coefficients (i.e., heteroscedasticity correlations) approach zero or are not substantially improved with log-transformation, there is no evidence of nonuniformity of the error (2). Magnitudes of correlations of <0.1, 0.1–0.3, 0.3–0.5, 0.5–0.7, 0.7–0.9, and >0.9 are interpreted qualitatively as trivial, small, moderate, large, very large, and almost perfect correlations, respectively (14). Reliability was calculated (T2 − T1) using a publicly available spreadsheet (16) and is presented as the TE and the %CV. Confidence limits are expressed as "±" for uncertainties of the TE and as "×/÷" factor uncertainties for CVs. Practically meaningful changes in average ECC RFD, average CON force, CON impulse, and jump height are quantified as 0.2 (small), 0.6 (moderate), and 1.2 (large) × between-athlete SD (18,32) for the raw and log-transformed data (which was then back-transformed and expressed as a %CV).

Results

Mean values (±SD) for kinetic and kinematic variables for subjects categorized by competitive division are presented in Table 1.

Systematic Error

Changes in the mean between pairwise comparisons of trials in the high school division were insubstantial for all kinetic and kinematic variables (ES range: −0.12 to 0.12; likely trivial to very likely trivial; Figure 1). The college stratum displayed insubstantial changes in the mean with the exception of T3 − T2, T4 − T3, and T6 − T5, which demonstrated unclear or possibly small effects in some variables (Figure 1); however, the changes in the mean were nonsystematic (i.e., changes were not in the same direction across repeat trials). Similarly, the professional stratum displayed unclear or possibly small (nonsystematic) effects for some variables in all pairwise trials with the exception of T2 − T1, which demonstrated trivial effects (ES range: −0.05 to 0.12; likely trivial to very likely trivial; Figure 1). When data for the 3 strata were pooled, no systematic error was present between pairwise comparisons of trials for any of the kinetic or kinematic variables (ES range: −0.07 to 0.11; likely trivial to very likely trivial; Figure 1).

Non-uniformity of Error

The difference score (T2 − T1) plotted against the mean of the 2 trials for each subject is presented in Figure 2. Pearson correlation coefficients were trivial to small and were only marginally improved, if at all, with log-transformation (Table 2); thus, heteroscedasticity does not seem to be present in any of the VJ variables assessed.

Reliability

Comparisons of the coefficients of variation and their uncertainties (i.e., 90% CL) reveal insubstantial differences between the 3 strata for average ECC RFD and jump height, despite a tendency toward better reliability in the professional stratum (Figure 3).
The high school stratum displays marginally higher variability (i.e., poorer reliability) compared with the college stratum for average CON force (%CV: 3.0 ×/÷ 0.37 and 1.9 ×/÷ 0.44, respectively). Similarly, the high school stratum (%CV: 3.1 ×/÷ 0.38) displayed greater variability in CON impulse compared with the college (%CV: 1.7 ×/÷ 0.39) and professional (%CV: 2.0 ×/÷ 0.44) strata. Overall reliability for the 3 strata combined is presented in Table 3. Average CON force and CON impulse displayed the best reliability, although jump height was the only variable to display a TE or %CV ≤ SWC. All variables were capable of detecting moderate and large effect sizes (Table 3). Average ECC RFD seems to be highly variable (Table 3).

Discussion

The degree of precision associated with VJ performance and its associated kinetic and kinematic variables has important implications for the interpretation of true lower-body explosive capacity and meaningful changes in VJ performance. This study provides a comprehensive quantification of the typical variation of VJ kinetic and kinematic variables (with respect to the practically meaningful SWC) in a large sample of athletes that differ in competitive level and sport. It further evaluates the influence of confounding sources of variability, such as jump familiarization (i.e., learning effect) and athletic ability (i.e., level of competitive division, jump proficiency), providing novel insight into the factors that dictate the reliability of VJ performance.

Evaluation of repeat trials revealed trivial or small nonsystematic changes in the mean for average ECC RFD, average CON force, CON impulse, and jump height (Figure 1), suggesting no evidence of systematic error in any of the 3 competitive divisions nor for the combined pool encompassing all levels of athletes (Figure 1). It seems that familiarization trials before VJ assessment are not necessary in athletes, irrespective of competitive level or sport, which speaks to the proficiency of athletes in VJ performance. This is the first published investigation to examine a systematic learning effect in VJ performance in a large sample of athletes ranging from highly trained to elite and (or) professional; therefore, direct comparison of our findings with similar investigations is not possible. However, our findings are consistent with those reported in physically trained males (30,31), females (31), and soldiers (1). Nuzzo et al. (34) reported systematic error in the CMJ performance of trained and untrained subjects; however, the learning effect was specific to intrasession trials (i.e., number of jumps performed in each session) and is not analogous with the investigation of intersession (i.e., repeat trials from session to session) systematic error investigated here. Additionally, although not specific to (but inclusive of) VJ performance, a meta-analysis on the reliability of power in physical performance tests revealed substantial learning effects, with a reported increase in the CV of 1.2% (likely range 0.5–1.9%) between the first 2 trials (19). It is likely that the inclusion of physical performance assessments such as constant-work and constant-duration tests, which involve an element of pacing strategy and are more reliant on maintaining high levels of motivation, elevates the requirement for familiarization.
It is also possible that the requirement for familiarization trials in VJ performance is diminished because of the nature of the task and its similarity to athletic movements, with athletes already possessing competency in the motor patterns required to perform the movement consistently (30,31). Furthermore, although we found insubstantial changes in the mean of repeat trials for all levels of athletes, it is likely that some, if not the majority, of athletes assessed had previously performed VJs as part of their routine training or sport. Thus, the authors cannot guarantee that the same findings will apply to athletes with no previous VJ experience.

There is no clear evidence of heteroscedasticity in the kinetic and kinematic variables assessed (Figure 2). Log-transformation marginally improved the Pearson correlation coefficients for average ECC RFD, average CON force, and jump height; however, correlations approximate zero and remain trivial to small; no change was observed for CON impulse (Table 2). It is stated that the presence of heteroscedasticity should inform the analysis and expression of reliability; despite this, it receives little attention in reliability studies (2). We are aware of only 1 other investigation that has examined nonuniformity of error in reliability studies of VJ performance (34), which, contrary to our findings, reported heteroscedasticity in CMJ height. The assumption of heteroscedasticity was accepted because of the reduction of Pearson correlation coefficients after log-transformation (females: r = 0.17 to 0.01; males: r = 0.16 to −0.03) (34). However, given that the magnitudes of the correlations approximate zero and are trivial to small, the authors' conclusion of heteroscedasticity could be disputed. Nonetheless, in the present investigation, findings suggest that the reliability of VJ kinetic and kinematic variables can be presented as either the TE (raw data) or the CV (log-transformed data).

Although there are no formal criteria to define acceptable levels of reliability, many researchers have adopted an arbitrary threshold for CV values of <10% to infer “good” reliability (2,9,36,38). Accordingly, our findings demonstrate good levels of reliability irrespective of competitive level for average CON force, CON impulse, and jump height (Figure 3). When the data were combined, we observed CVs (i.e., TEs expressed as percentages) for these variables ranging between 2.7 and 3.5% (90% CL: ×/÷ 1.10 to 1.11; Table 3). Although these variables demonstrate acceptable reliability and can be used to determine “real” changes in performance (i.e., changes that fall outside the margins of the TE), they do not convey information pertaining to the magnitude or “meaningfulness” of the observed change in performance. Quantification of small, moderate, and large changes in performance variables indicated that jump height was the only variable to demonstrate a TE or %CV that is ≤ SWC. Interestingly, previous research is inconsistent with this finding, reporting a TE ≤ SWC for peak force (38) and mean force (9,38) only; this may be explained by disparities in the methods used to determine jump height. Although not capable of detecting small changes in performance, all other variables examined in our investigation were sensitive to moderate and large variations (Table 3).
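A small sketch of how an observed change might be interpreted against the noise and the magnitude thresholds used above (0.2, 0.6, and 1.2 × between-athlete SD): the threshold values follow the paper, while the function name and the example numbers are illustrative assumptions.

```python
def classify_change(change, typical_error, between_sd):
    """Interpret an observed change against the noise (TE) and the
    0.2 / 0.6 / 1.2 x between-athlete SD thresholds described above."""
    if abs(change) <= typical_error:
        return "within noise"
    magnitude = abs(change) / between_sd
    if magnitude < 0.2:
        return "trivial"
    if magnitude < 0.6:
        return "small"
    if magnitude < 1.2:
        return "moderate"
    return "large"

# Example: a 0.03 m jump-height change with TE = 0.01 m and between-athlete SD = 0.06 m
print(classify_change(0.03, 0.01, 0.06))   # -> "small"
```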
Contrary to the other kinetic and kinematic variables investigated, average ECC RFD displayed poor reliability for each of the 3 strata (Figure 3) and for the combined athlete pool (%CV: 21.3 ×/÷ 1.11; Table 3), which is comparable with previously reported CV values for average ECC RFD in males and females (%CV: ~17%) (31), albeit in nonathletic populations. Despite its poor reliability, average ECC RFD should not be automatically discounted in the assessment of jump performance, because tests that are deemed reliable are not necessarily the most effective for monitoring changes in performance (19). The relationship between jump variables and successful performance must also be given due consideration (38). Average ECC RFD is reported to be highly correlated with the jump height of elite athletes (21,22) and is a key variable in discerning the unique jump profile exhibited by athletes competing in different sports, particularly when it is examined in relation to average CON force (21). Furthermore, it is stated to be a contributing factor in the improved stretch-shorten cycle function observed in response to ballistic training, which results in subsequent enhancement of jump performance (10). It may be that average ECC RFD is a sensitive measure in discerning training adaptations despite its variability, owing to the magnitude of observed changes in ECC RFD in response to training. Cormie et al. (10) reported significant improvements in ECC RFD in the stronger power (SP), weaker power (WP), and weaker strength (WS) groups after 10 weeks of ballistic (0–30% 1 repetition maximum [1RM] jump squats) or strength (75–90% 1RM back squats) training. Although a significant improvement was reported, the magnitude of change in ECC RFD was not presented. For the purpose of comparison with this investigation, we calculated the percent changes (log-transformed) between baseline and midtest and posttest values reported by Cormie et al. (10). Our calculations indicate changes in ECC RFD in the order of 91.6% (SP), 198.7% (WP), and 79.4% (WS) after 5 weeks (midtest) of training, and 124.9% (SP), 320.6% (WP), and 120.8% (WS) after 10 weeks (posttest) of training. With respect to the TE calculated for average ECC RFD in the present investigation, the magnitudes of changes observed by Cormie et al. (10) exceed the TE of 21.3% and would further be deemed moderate to large changes (Table 3). To this end, the high variability and subsequent inability of ECC RFD to detect small “meaningful” changes in performance would seem irrelevant and do not necessarily preclude the use of ECC RFD as a diagnostic tool for monitoring training-induced changes in VJ performance or the neuromuscular status of athletes.

Practical Application

To enhance the diagnostic value and utility of VJ assessments, it is imperative that sport practitioners and researchers have an understanding of the sources of variability of VJ kinetic and kinematic variables and the magnitude of practically meaningful changes in jump performance. We have quantified the reliability of VJ average ECC RFD, average CON force, CON impulse, and jump height and determined small, moderate, and large practically meaningful changes in these variables to aid practitioners with the interpretation of their data. Furthermore, our findings provide novel insight into the factors that dictate the reliability of VJ performance and can be applied to better direct assessment protocols.
Vertical jump assessments can be performed without the need for familiarization trials, irrespective of the competitive level of the athlete. In light of the absence of heteroscedasticity observed here, the precision of the observed value (i.e., variability) can be conveyed using either the raw TE or the %CV, although when presenting changes in VJ variables, reporting the percent change and CV is recommended. Additionally, practitioners can relay the magnitude of changes in performance to their athletes and coaches as small, moderate, or large changes in the observed value; however, only small changes in jump height can be interpreted with confidence, as this was the only measure that was effective in monitoring small changes in performance. Average CON force and CON impulse are the most reliable variables but are less effective in detecting small changes in performance. Average ECC RFD seems unreliable and is incapable of detecting small changes; however, this does not imply that it should be discounted in the assessment of VJ performance, particularly in light of its demonstrated contribution to enhanced VJ ability. It is plausible that average ECC RFD is sensitive to change and that the magnitudes of changes observed in response to training adaptations or fatigue typically exceed the TE; this warrants further investigation.

Acknowledgements

The authors thank Eric Drinkwater at Edith Cowan University for the valuable feedback provided on drafts of this article. The results of this study do not constitute endorsement of the product by the authors or the NSCA.