Why the Force Plate Does Not Matter

Written by Sparta Science | Jan 4, 2019

Oftentimes it is easy for coaches and sports organizations to get caught up in the concrete, treating hardware as a magical solution that only spews good information. Force plates may appear to be the total solution, or, more often in the US, GPS is perceived as the catch-all for sports science. The reality is that neither piece of hardware is the answer on its own, because each demands something far more challenging than the hardware itself: consistency and subsequent validated actions to be effective. First, you must ask questions like…

  • How often do you assess athletes?

  • Why did you choose that frequency of assessment?

  • What is the standardized protocol to collect the data?

  • Which variables do you monitor?

  • How do you know such variables are important?

  • What is the plan for acting on the collected data?

Reliability is the Cornerstone…Built from Habits

As these questions begin to be addressed, the hardware of choice becomes less important and the habits of the organization and staff become increasingly powerful, omnipotent in fact. Most technology solutions in sport skip over the most important first step in best practices: reliability of the data. This first step is boring because it requires consistency of data collection. Processes must first be defined, such as how often you assess (daily with GPS? monthly with a force plate?).
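As a concrete illustration of what a reliability check can look like, here is a minimal sketch in Python. The metric name and trial values are hypothetical; the point is simply that a standardized protocol should produce consistent numbers across repeated trials before anyone tries to interpret them.

```python
# Minimal sketch: within-session reliability of a force plate metric,
# expressed as a coefficient of variation (CV) across repeated trials.
# Metric and values are hypothetical placeholders.
import numpy as np

def coefficient_of_variation(trials):
    """CV (%) across repeated trials of the same test on the same day."""
    trials = np.asarray(trials, dtype=float)
    return 100.0 * trials.std(ddof=1) / trials.mean()

# Three jump trials from one athlete (hypothetical peak force values, in N)
cv = coefficient_of_variation([2450.0, 2481.0, 2466.0])
print(f"Within-session CV: {cv:.1f}%")  # a lower CV suggests a more reliable protocol
```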

When I am on the road, the most frequently asked question from athletes and staff is, “How often should I assess?” The answer is fairly simple, yet abstract: you should assess only when you are prepared to take action. You must be emotionally and intellectually prepared to adjust your plan based on the results.

Validity is Sexy, Not Required, and Not Actionable

The second step after reliability is meaning: what do these variables mean for my goals, generally injuries and sport performance? This is the step where many solutions begin because it is sexier. For example, organizations choose total distance from GPS because coaches understand the concept and, if collected appropriately (units turned on at the same time), it tends to be a consistent metric.

The problem arises when we try to link total distance to injury rates, return-to-play protocols, or even performance improvements. At Sparta, we have run validity analyses to investigate the relationship between sports performance and our force plate variables; the results show that force plate variables change with more minutes played. This kind of validity is not limited to force plate metrics, and there are many other data sets within our software that have relationships to injuries and performance. The main requirement for such validity is volume, informally referred to as a database. The real key for a database is depth rather than width. Rather than collecting dozens of variables you think might be important, gather a few key variables often and over a long period of time. You want to be able to track trends over consecutive seasons, ideally combining data sets from other organizations to discover such relationships sooner than you would within an isolated database.
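For illustration, a simple validity check can be as modest as asking whether a key variable tracks with playing time over a season. The sketch below assumes a hypothetical data set with made-up column names and values; it is not the analysis referenced above, just the shape of the question.

```python
# Minimal sketch: relating one force plate variable to minutes played.
# Column names and values are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "force_variable":  [48, 52, 55, 43, 60, 50, 57, 46],       # hypothetical athlete scores
    "minutes_played":  [310, 540, 690, 220, 880, 500, 760, 280],
})

rho, p_value = spearmanr(df["force_variable"], df["minutes_played"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A repeatable association across consecutive seasons, not a single season's
# coincidence, is what turns a consistent metric into a valid one.
```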

Predictive Models Make Data Actionable

The final piece is being actionable with the data collected, specifically by using predictive models. Unfortunately, the word predictive is starting to be thrown around like “analytics” in the sports world. Technology companies claim to predict injuries, yet they have little to no data to support such relationships. Instead, the message about predicting injuries is anecdotal: visuals of fancy graphs and charts and infographics of how much money is lost per year in sport.

The reality is that predictive models require years of consistent, reliable data collection that is also meaningful (valid). A predictive model is built to show an increased or decreased odds ratio, the likelihood that a specific injury or performance outcome could occur. Think of this scenario like blackjack: there are no guarantees from the information, only increased or decreased chances of certain outcomes with each hand. The last piece in predictive models is to validate the model: use observations that were not included in the creation of the model and run them through your predictive model to see if it holds up. The process of gathering enough data to build a predictive model and enough data to validate it takes years.
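To make the odds-ratio and hold-out-validation ideas concrete, here is a minimal sketch using logistic regression on synthetic data. The feature names, the outcome, and the data itself are all hypothetical; the sketch only shows the structure of building a model on one portion of the data and validating it on observations the model never saw.

```python
# Minimal sketch: an odds-ratio style injury model with held-out validation.
# Features, outcome, and data are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical longitudinal data set: one row per athlete-season
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "load_change":  rng.normal(0, 1, 400),    # change in a force plate variable
    "prior_injury": rng.integers(0, 2, 400),  # previous injury flag
    "injured":      rng.integers(0, 2, 400),  # outcome to be predicted
})

X, y = df[["load_change", "prior_injury"]], df["injured"]
X_build, X_hold, y_build, y_hold = train_test_split(X, y, test_size=0.3, random_state=1)

model = LogisticRegression().fit(X_build, y_build)
odds_ratios = np.exp(model.coef_[0])          # >1 raises the odds, <1 lowers them
print(dict(zip(X.columns, odds_ratios.round(2))))

# Validate on observations that were not used to build the model
auc = roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1])
print(f"Hold-out AUC: {auc:.2f}")             # no guarantees, only shifted odds
```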

The Take Home Steps

  • Is your data reliable? Reliability studies are generally not proprietary, so if they are not published, this should be your first red flag.

  • Which variables are the most valid? This part requires discipline to use longitudinal data trends and avoid being reactive to every variable, Excel column, or chart you can create.

  • What changes are meaningful? Even small changes can predict different outcomes, and predictive models help dictate what actions should be taken and when.