
Getting the Most out of Performance Testing

Updated: Apr 29, 2020

Testing is the bread and butter of athletes, sport science personnel and strength & conditioning staff, but are we using it to the greatest effect?

In this article I will try to highlight some common pitfalls coaches fall into when designing and performing a testing battery, and then suggest a better alternative that hopefully makes life a little easier whilst providing richer data.




Pitfall No. 1) Performing every test known to man

This is the equivalent of the "seeing what sticks" approach: hopefully something meaningful jumps out amongst all that noise and leads to better decision making by the multidisciplinary team.

Why is it a pitfall?


TIME! An athlete's time is precious, particularly for team sport athletes in-season. Between technical training, gym training, physiotherapy appointments, team meetings, media interviews and all the other nonsense a high-level sportsperson needs to fit into their week, you now want to spend that time performing a testing battery when you have no idea what you're looking for!



Tackling the Issue - Choose a Differentiating Testing Battery

A typical approach compares the athlete's results to data from norm tables, then sets about improving the qualities found lacking. This doesn't provide rich data or tell you how to help your specific athlete, merely how well he/she is doing compared to someone else. Instead, choose tests that allow you to gather deep, meaningful data to optimise your own athlete's specific physical qualities. For example, John Goodwin suggests an approach consisting of a Back Squat max, Squat Jump, CMJ, 40 cm Drop Jump and 80 cm Drop Jump for identifying limiting factors in sprinters (alongside actually sprinting, of course!). Each of these tests measures a unique physiological capacity, and the ratios between the results, such as the popular Eccentric Utilisation Ratio, signpost the direction training needs to take in order to improve athletic performance. Whilst the desired ratios between exercises are undetermined in the literature, they are likely to be unique to each sport and athlete. The art and science of coaching is working out where to take your athlete and how best to get there.
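To make the ratio idea concrete, here is a minimal sketch in Python of how the Eccentric Utilisation Ratio could be computed from jump test results. The jump heights are invented example values, not norms, and taking CMJ over Squat Jump is one common way of expressing the ratio:

# Sketch: Eccentric Utilisation Ratio (EUR) from a jump test battery.
# EUR is commonly expressed as CMJ performance divided by Squat Jump performance.

def eccentric_utilisation_ratio(cmj_height_cm, sj_height_cm):
    """Countermovement jump result divided by squat jump result."""
    return cmj_height_cm / sj_height_cm

cmj_height = 42.0  # countermovement jump height in cm (invented example value)
sj_height = 38.0   # squat jump height in cm (invented example value)

print(f"EUR = {eccentric_utilisation_ratio(cmj_height, sj_height):.2f}")

# A ratio near 1.0 suggests the athlete gains little from the stretch-shortening
# cycle; as noted above, the "right" ratio is likely unique to each sport and athlete.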



Pitfall No. 2) Not using the Data

Similar to the above in terms of wasting an athlete's time. In order to "make room" for testing, something somewhere else in the athlete's schedule will have to "give" - make sure it is worth it!

Why is it a pitfall?


If you don't look at the results after you test, objectively measure improvement and tweak programmes/plans to suit this newly emergent data, then why test at all? Testing should inform your decision making (what to do, how to do it and when) and check the efficacy of a previous intervention. If you can't decide what to do next, or can't work out whether your athlete even improved, then you need to go back to the drawing board and re-evaluate your testing battery.



Tackling the Issue - Objectively Measuring Improvements

Often after testing, athletes ask their coach whether their improvements are "good or not", with coaches often stuttering in reply. What is a meaningful improvement in a broad jump? Or a 10 m sprint time? Will Hopkins realised that there is a problem in sport science: traditional statistics can't be used to compare single cases or athletes. He instead suggested using the standard deviation (SD), the normal bandwidth of performance that the athlete operates in, to infer improvements in performance. If an athlete now averages higher than their previous bandwidth, you can say that the athlete has made a meaningful change to their performance.





The larger the difference, the greater the improvement made. If an athlete's improvement is within their previous SD, the improvement is trivial and it's back to the drawing board. The diagram above shows no improvement and improvement when compared with the baseline results using Hopkins' criteria.
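As a rough illustration of Hopkins' approach, here is a minimal sketch in Python. The scores are invented example data, and treating one SD of the athlete's recent results as their bandwidth is a simple reading of the idea, not a prescription:

from statistics import mean, stdev

# Invented example data: an athlete's last five broad jump results (metres).
previous_scores = [2.45, 2.50, 2.48, 2.52, 2.47]
new_score = 2.60

baseline = mean(previous_scores)
bandwidth = stdev(previous_scores)  # the athlete's normal spread of performance

if new_score > baseline + bandwidth:
    print(f"Meaningful improvement: {new_score:.2f} m exceeds "
          f"{baseline:.2f} +/- {bandwidth:.2f} m")
elif new_score < baseline - bandwidth:
    print("Meaningful decline - worth investigating")
else:
    print("Within the athlete's normal bandwidth - treat the change as trivial")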



Pitfall No. 3) Testing too Sporadically

Rightly or wrongly, entry and exit phase testing is commonplace in the sport science discipline. These tests largely confirm the effect of an intervention after the fact and may not be all that useful.

Why is it a pitfall?


Performing testing on a day predetermined six weeks ago doesn't take into account an athlete's physical readiness: the interaction of recovery status, previous sleep patterns, soreness and motivation, just to name a few. So an athlete's score might soar on a good day, or plummet on a bad one. Additionally, practitioners usually try to pick a "good day" to test their athletes, introducing bias into the experiment.

If you wait two months, or until the end of a phase, to conduct your testing, you neglect all the little things that happen in between (the sessions missed, minor injuries and lost games) which may influence the outcome. Finally, what if your intervention didn't work? Or what if it worked so well that you could have changed focus earlier? You may not have optimised the limited time you have with that athlete, or you could have wasted it entirely. You'll never know, because you didn't check!


Tackling the Issue - Embedded Testing

Find a way to incorporate elements of your testing battery into your average training session. An easy example is the predicted 1-rep max (1RM) formula. Using the formula below, you can track the adaptation to strength training on a session-by-session basis without maximal testing every week:

Predicted 1RM = kg + (kg x (Reps Performed + Reps Left in Reserve) x 0.0333)

However, this approach does not work for all types of performance testing. You can incorporate timing gates into speed sessions or velocity-based feedback into power work, but you wouldn't want to run a Yo-Yo test every conditioning session to check whether you are making progress.
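As a rough sketch of what that session-by-session tracking could look like in Python (the loads, reps and reps in reserve below are invented example data):

# Embedded strength tracking using the predicted 1RM formula above
# (0.0333 is roughly 1/30, as in the Epley estimate).

def predicted_1rm(weight_kg, reps_performed, reps_left):
    """Estimate 1RM from a submaximal set, crediting reps left in reserve."""
    return weight_kg + weight_kg * (reps_performed + reps_left) * 0.0333

# Invented example data: the top working set logged from each week's session.
sessions = [
    (100.0, 5, 2),  # 100 kg for 5 reps with ~2 reps left in the tank
    (102.5, 5, 2),
    (105.0, 4, 2),
]

for week, (kg, reps, rir) in enumerate(sessions, start=1):
    print(f"Week {week}: predicted 1RM = {predicted_1rm(kg, reps, rir):.1f} kg")

If the predicted 1RM trends upwards across sessions, the strength work is doing its job; if it stalls, you know long before the next formal testing day.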



So that is my two pennies' worth on testing. It may not be perfect, but it is a little better than the traditional way of doing things. Hopefully this article makes you think critically about what you might do in your own practice.










 
 
 
