Here’s a quick thought:
“Spend each day trying to be a little wiser than you were when you woke up. Day by day, and at the end of the day- if you live long enough- like most people, you will get out of life what you deserve.” – Charlie Munger
Charlie Munger is worth $2.5 billion, nearly all of which was accumulated through a career managing investments in his role at Berkshire Hathaway.
He didn’t even start a career in investing until he was 38 years old.
He wakes up each day, explores his curiosity, reads a boatload of books, and plays the long game.
If steady improvement worked for him, I bet it works for you too.
Can we predict which minor league pitchers will get hurt?
Title: Including Modifiable and Nonmodifiable Factors Improves Injury Risk Assessment in Professional Baseball Pitchers
Authors: Ellen Shanley et al.
What is it?
A prospective cohort study of minor league pitchers over a 10-year period, evaluating an injury risk prediction model that included both modifiable (e.g., ROM, pitch count) and nonmodifiable (e.g., years played professionally, throwing arm) risk factors.
Why does it matter?
Injury prediction models are the gateway to the holy grail of sports performance/physical therapy: injury prevention.
If we can predict who’s going to get injured and what’s going to cause it, then we can intervene and keep athletes on the field.
The problem? Current injury prediction models suck.
What did they find?
- The primary prediction model produced good discrimination and calibration values (see the section below to figure out what in the world that means).
- The primary prediction model was best when only considering arm injuries that occurred within 90 days after testing.
- This makes sense. Risk factors change over time and periodic reassessment might improve the prediction.
- Overall model fit (R2) was fair (0.22).
- That sounds stupid low, but accounting for 22% of the variability in a topic as complex as human injury rates in a high performance environment actually ain’t bad.
- Models that included only the modifiable predictors (pitch count and shoulder ROM), or only shoulder ROM, showed worse prediction performance (discrimination, calibration, and R2).
The big conclusion: when evaluating professional pitchers, both modifiable and nonmodifiable predictors should be incorporated to better ascertain arm injury risk.
What was the process?
A total of 407 different minor league pitchers from 1 organization (comprising 593 pitcher seasons) were analyzed from 2009-2019.
Testing was done during spring training and then pitchers were followed through the length of the season.
An injury was defined as something that caused them to miss at least 7 days of practice or games.
They collected all the data and then did some fancy machine learning stuff that my brain isn’t big enough to comprehend right now.
The factors included in the primary prediction model were:
- BMI (18-34)
- Throwing arm (R/L)
- Years played professionally (5+, 1-2, 3-4)
- Previous arm injury (Y/N)
- Performance of an individualized arm injury prevention program (Y/N)
- Number of pitches thrown in the previous season (0 to 2,800)
- Difference in humeral torsion between throwing and non-throwing arm (-30° to 60°)
- Dominant arm total shoulder range of motion (120° to 200°)
- Dominant shoulder horizontal adduction (-70° to 50°)
They then isolated modifiable risk factors of pitch count and shoulder ROM to compare their prediction capabilities to the primary prediction model.
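If you want a feel for what that comparison looks like, here's a toy Python sketch. To be clear, this is not the authors' actual pipeline (whatever fancy modeling they really used was run on real pitcher data); it's randomly generated numbers and a plain logistic regression, just to show the shape of "full model vs. modifiable-only model."

```python
# Toy sketch only: fake data + plain logistic regression, NOT the study's methods.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 593  # pitcher-seasons, to match the study's sample size

# Fake versions of the predictors listed above (ranges roughly mirror the paper's)
X_full = np.column_stack([
    rng.uniform(18, 34, n),       # BMI
    rng.integers(0, 2, n),        # throwing arm (0 = right, 1 = left)
    rng.integers(0, 3, n),        # years played professionally (binned)
    rng.integers(0, 2, n),        # previous arm injury (0/1)
    rng.integers(0, 2, n),        # did an individualized arm injury prevention program (0/1)
    rng.uniform(0, 2800, n),      # pitches thrown in the previous season
    rng.uniform(-30, 60, n),      # humeral torsion difference (degrees)
    rng.uniform(120, 200, n),     # dominant total shoulder ROM (degrees)
    rng.uniform(-70, 50, n),      # dominant shoulder horizontal adduction (degrees)
])
modifiable_cols = [5, 7, 8]       # pitch count + the two shoulder ROM measures
y = rng.binomial(1, 0.25, n)      # fake "arm injury this season" outcome (0/1)

X_tr, X_te, y_tr, y_te = train_test_split(X_full, y, test_size=0.3, random_state=0)

for name, cols in [("full model", list(range(X_full.shape[1]))),
                   ("modifiable only", modifiable_cols)]:
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_tr[:, cols], y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te[:, cols])[:, 1])
    print(f"{name}: AUC = {auc:.2f}")  # on random data both will hover around 0.5
```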
The models were evaluated based on the following (there's a quick code sketch after this list showing how each one is computed):
- Discrimination
- The ability of the model to separate who will get injured sooner from who will get injured later or not at all
- Measured by the area under the curve (AUC). An AUC of 0.5 means the model ranks risk no better than a coin flip; an AUC of 1.0 makes you the perfect second coming of Nostradamus.
- Calibration
- Agreement between the predicted and observed number of events.
- Optimal calibration has a slope of 1.
- Calibration in the large
- The mean difference between predicted and actual outcomes.
- The closer the number is to 0, the better.
- Coefficient of determination
- The fun R2 value (the share of the variability in the outcome that the model explains), which is probably the only statistics thing you remember from grad school
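If that metric jargon lands better as code, here's a minimal sketch of how those four scorecards get computed, using made-up predictions. The R2 here is just a crude squared-correlation stand-in for whatever exact flavor the authors reported, and the calibration slope uses the standard trick of regressing the outcome on the log-odds of the prediction.

```python
# Minimal sketch with made-up predictions; not the study's data or exact statistics.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 593
true_risk = rng.uniform(0.05, 0.5, n)    # each pitcher's "real" injury risk
injured = rng.binomial(1, true_risk)     # what actually happened (0/1)
predicted = np.clip(true_risk + rng.normal(0, 0.05, n), 0.01, 0.99)  # model's predicted risk

# 1) Discrimination: AUC. 0.5 = coin-flip ranking, 1.0 = perfect ranking.
auc = roc_auc_score(injured, predicted)

# 2) Calibration slope: regress the outcome on the log-odds of the prediction.
#    A slope of 1 means predicted risk rises and falls at the right rate.
log_odds = np.log(predicted / (1 - predicted)).reshape(-1, 1)
slope = LogisticRegression(C=1e6).fit(log_odds, injured).coef_[0, 0]  # huge C ~ unpenalized

# 3) Calibration-in-the-large: mean predicted risk vs. mean observed rate; closer to 0 is better.
citl = predicted.mean() - injured.mean()

# 4) Coefficient of determination: squared correlation between predicted risk and
#    the observed outcome (a rough stand-in for the paper's R2).
r2 = np.corrcoef(predicted, injured)[0, 1] ** 2

print(f"AUC {auc:.2f} | calibration slope {slope:.2f} | "
      f"calibration-in-the-large {citl:+.3f} | R2 {r2:.2f}")
```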
My thoughts.
I’d grade this injury prediction model as “not trash,” which is better than most other injury prediction models.
The authors believe the model should be toyed around with more to improve accuracy and then externally validated. I agree.
There might be something here in the future.
Is it currently that helpful? Not really. Its accuracy is solid, but not overwhelming.
How can you use it?
- If you’ve got a professional pitcher in front of you and an ultrasound unit at your disposal, it’s not a complete waste of your time to collect the risk factors measured in this study to see where he lands. I’d then see what modifiable risk factors we can target to decrease injury risk.