As I watch Stanford Volleyball’s recording of the women’s team celebrating their national championship win, I am struck by the emotions etched on the faces of the players and coach Kevin Hambly. It was a mix of unadulterated joy for some, and for others, particularly libero Morgan Hentz, a look of desperate sadness. In the audio portion almost all the players commented on the sadness of seeing their four years end, a sadness that came with the recognition that this was the last time this team, this particular alchemy of people, would ever play together. Ever. The finality of the thought is brutal but honest.
However, it is Morgan’s demeanor and her human response to that finality that captured my thoughts about the reasons for coaching, or at least the most important reason. Her tear-streaked face alongside her big, broad smile captures the conflicting emotions that had enveloped her. Her absolute honesty and integrity made me reflect on this moment, fraught as it is with conflicting thoughts.
This scene plays out all over the country at the end of fall, as high school and college teams end their seasons. At least half of those seasons end in defeat, so most teams never experience the euphoria the Stanford team felt at that moment in PPG Paints Arena in Pittsburgh; only a very few get to do that. Johns Hopkins in Division III, Cal State San Bernardino in Division II, and Marian in the NAIA all get to do the celebratory dance, as do the junior college champions. No doubt their celebrations are joyous and over the top.
But the sense of sadness, the sense of finality of the last match, hits every team without regard to winning or losing; we just get to see Morgan express her loss publicly. No doubt there are heartfelt expressions of love and loss in the locker rooms, both the winning and the losing ones. No doubt there are coaching staffs sitting stoically in the seats of the arena, processing the meaning of the last match and the sense of loss which finally hits them after the adrenaline of the match has worn off. No doubt players, staff, and coaches feel the weight of regret for things left unsaid, acts of friendship left unperformed, love unexpressed, hugs unhugged. For those lucky enough to win their last match together, it is a mixture of happiness, gratitude, sadness, and regret. For those who lose their last match together, it is pangs of goals unmet and missions unaccomplished, mixed with the same sadness and regret. The common denominator is the sadness and regret. From the team that did not win a match all season to the team that did not lose a match all season, the common denominator is the team, with all the adjectives that inadequately describe the meaning of the term.
Coaches try to build teams from day one. They preach about family, they admonish the players to have each other’s backs, they cajole them to be vulnerable with each other, and they think up ridiculous exercises to bond the mélange of players into a team; all to capture that magical alchemy called a cohesive team. Some think that team chemistry is a formula, a recipe: if we give the players an opportunity to do this or that, then at the end we will have a team. I am much more romantic than that. Each team is much greater than the sum of its parts, but the parts are important. There are as many disparate personalities, temperaments, cultures, logics, and mindsets as there are players, and the job of melding them all into a strong and bonded collective seems next to impossible. Team-building tactics and activities do help in moving the team toward its goals, but there is an element of magic, unpredictable and undetectable, in all the interpersonal interactions that happen on a team. That magic must happen serendipitously; there are catalysts, but their effects are also uncertain. There is no way to replicate the magic year after year, and no way to capture it if you don’t have it. You sow the ground the best you can and then you hope for the best. Prepare the ground, make sure it is fecund, and then let it happen. Or not.
For a coach, watching the end of a chapter in your team or program is the ultimate test of your coaching philosophy. John Kessel used to always ask beginning coaches what they were coaching. He would play gotcha with them if they answered: volleyball. “NO!” he would bellow, scaring the dickens out of the group, “you don’t coach volleyball, you coach people!”
It is because we coach people that we value, actually treasure a true team.
It is because we coach people that we, volleyball coaches, are so touched and moved by the elation and sadness of the scene in PPG Paints Arena. We don’t do this to win matches; the extrinsic rewards are obviously fantastic, but we do it for the intrinsic rewards, rewards we enjoy in the privacy of our minds and hearts, rewards that are inexpressible to those who have not been where we have been. We do it for so many human and emotional reasons, and the real reward comes from witnessing and experiencing our teams become one and reveling in the presence of one another. You don’t need to win a national championship to experience that euphoria and love.
Friday, December 27, 2019
Monday, December 16, 2019
Stats For Spikes-Variance
I had wanted to do a little explaining about probability and statistics tools. One of them is the concept of variance, so it was with much delight that I saw Coach Jim Stone write the articles below about his observations regarding variance.
Definitions
First, some definitions.
The mean is the average of the same performance measurement taken over a long time and sampled at regular intervals. In manufacturing, or any engineering-related activity, the mean of the measurements is compared to what the designer intended and designed to achieve; that reference value is a goal to measure against, and the comparison decides the accuracy of the process. The formula for the mean is just the numerical average of all the measurements of the metric: hitting percentage, conversion percentage, or, as in the articles, hitting efficiency. The article compares the average hitting efficiency of various players in 2018 to their hitting efficiency in 2019.
In Coach Stone’s articles, his use of the term variance refers to a comparison of the 2018 and 2019 numbers: he uses the 2018 hitting efficiency as the reference and compares the 2019 hitting efficiency against it. The variance he talks about is the difference, for each player, from 2018 to 2019.
In the statistical sciences, however, variance is defined as the average of the squared differences of the measurements from the mean of many measurements. In statistical language, the variance of n measurements x1, x2, … xn with mean x̄ can be calculated as

s² = Σ(xi − x̄)² / (n − 1)

Figure 1: Formula to calculate variance. (WikiHow Staff 2019)
Standard deviation is a measure of how spread out the measurements are from the mean; it is the square root of the variance. The calculation is simple, and if you don’t want to do it by hand, most spreadsheet programs have built-in functions. In Excel the mean is =AVERAGE(x1, x2, … xn) and the standard deviation is =STDEV(x1, x2, … xn). You may need to enable the statistical function package to make them work, but they are very simple to use.
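The same calculations can be done in Python’s standard library. Here is a minimal sketch, using a hypothetical list of per-match hitting efficiencies (the numbers are invented for illustration):

```python
import statistics

# Hypothetical per-match hitting efficiencies for one player
efficiencies = [0.310, 0.250, 0.400, 0.180, 0.290, 0.330, 0.220]

mean = statistics.mean(efficiencies)          # same as Excel's =AVERAGE(...)
stdev = statistics.stdev(efficiencies)        # sample standard deviation, like Excel's =STDEV(...)
variance = statistics.variance(efficiencies)  # the square of the sample standard deviation

print(f"mean = {mean:.3f}, stdev = {stdev:.3f}, variance = {variance:.4f}")
```

The sample mean here works out to about .283 with a standard deviation of about .073, so roughly two thirds of this hypothetical player’s matches would fall between .210 and .356.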
Statisticians use the mean and standard deviation to decide just how accurate and precise the thing being measured is, whether it is a player or a team. The standard deviation tells us the precision of the process that we are trying to measure.
An illustration is better at getting that point across. The first illustration shows the Normal, or Gaussian, probability distribution. The mean of the measurements, compared to the reference value, tells us how accurate the measured performance of the process/team/people is; the width of the spread of the normal distribution tells us how dispersed the measurements are, and that gives us a measure of how precise the performance is.
Figure 2: Illustration of the meaning of accuracy and precision. (Medcalc Staff 2019)
Another illustration uses a picture of a bull’s eye to better show the relationship.
Figure 3
Bull's eye explanation of the differences in interpretation of accuracy and
precision. (Circuit Globe 2019)
In the world of athletic performance, it is next to impossible to use accuracy intelligently, because people and teams will perform to their best ability in that time and at that place; there are too many extraneous variables to account for and no hypothetical performance standard to uphold. Many coaches on VCT ask for reference values as a goal for their teams to achieve, rather than as a means of assessing where their team’s performance stands compared to a generic measure. The difference is subtle but important. Using a reference measure as a comparison is normal practice: you want to know what the “average” standard is for a team of a certain age and gender. The problem is that each team is unique and each player is unique; the performances of each unique member of a unique team can be averaged to get a measurable “team average,” but comparing your team to a generic measure is unrealistic, and depending on how you use that reference value, the practice can cause more problems than it solves. There are too many complicating factors for the reference measure to be meaningful. A better way to measure your team’s performance is to do what Coach Stone did: compare your present performance with your past performance, assuming you have a record of past performance. Making the comparison relative to previous performance gives the coach a more concrete measure of the team’s performance.
It is also good to keep in mind that even though good measured performance increases the probability of success for teams and players, it is not the determining factor for success. That is, a great hitting efficiency tells us that the chances of winning are better, but it does not guarantee a win. Having good measures is not predictive; in this case correlation definitely does not equal causality.
The standard deviation, or precision, is something important for coaches to examine, as Coach Stone has said in his articles. The statistics that we gather on the bench give us a performance measure of the player and the team for that set and that match. The measures taken during a set or match (hitting percentage, blocks, assists, digs, passing efficiency, etc.) are all a function of the opponent’s strengths and weaknesses; whether the match is home, neutral, or away; the temperature and humidity in the gym; and so on. So when we average the same performance measures across matches played against different opponents, in different locations and atmospheric conditions, we are making an assumption: that the variations inherent in playing under different circumstances affect the performance measures, but that we can still get the information we want about our team and players by taking the mean of the performance measure across those conditions. In fact, taking the average of many performances is the preferred way to isolate the actual team performance, because the primary performance characteristics of our players, good or bad, show themselves in the average more than in a bunch of data from individual matches. The effects of playing different opponents in different locations and conditions are all accounted for in the variances that we see in the performance measures; we take for granted that those variances are part of the performance capability of the team and players. Indeed, by taking the average of the performance measures, we are in effect smoothing out the transient performances of each individual match, allowing the variations from environment and opponent to average themselves out. We hope that by letting this averaging take place we filter out the extraneous effects and get the team’s actual capability over a designated time span. This is the Law of Large Numbers, which states that as we take many measurements or samples, the mean that we calculate from all the measurements ends up closer to the real mean of the player or team that we are measuring.
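The Law of Large Numbers can be sketched with a small simulation. Assume a hypothetical hitter whose “real” efficiency is .280 and whose match-to-match results vary randomly around it (both numbers are invented for illustration); the more matches we average, the closer the sample mean gets to the real mean:

```python
import random

random.seed(42)

TRUE_MEAN = 0.280  # hypothetical "real" hitting efficiency of a player
NOISE = 0.080      # hypothetical match-to-match variation

def simulate_match():
    """One match's measured efficiency: the true ability plus random
    variation from opponent, venue, conditions, etc."""
    return random.gauss(TRUE_MEAN, NOISE)

# Average over more and more simulated matches; the sample mean
# drifts toward TRUE_MEAN as the number of matches grows.
for n in (5, 50, 500, 5000):
    sample_mean = sum(simulate_match() for _ in range(n)) / n
    print(f"{n:5d} matches: sample mean = {sample_mean:.3f}")
```

With only five matches the average can land well away from .280; with thousands it sits within a few thousandths of it, which is exactly the smoothing-out the paragraph above describes.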
In Coach Stone’s article on variance and lineups, his measure of volatility is what I take to be the standard deviation. In his case, the volatility of a player matters because we are assuming that all the extenuating circumstances have been smoothed away uniformly, so that the remaining volatility, the standard deviation, reflects the actual ability of the player to execute precisely.
Coach Stone was making the point that when coaches decide on starting lineups, the volatility (standard deviation) of a player’s statistics should be considered in conjunction with the mean. I heartily agree, except that I would warn that these are probabilistic descriptions of performance, not deterministic ones; that is, there is randomness and uncertainty embedded in the numbers.
In Coach Stone’s article, he said that he would trade higher efficiency for less volatility: a prudent decision that reflects his personal preference for stability. In his personal probability he saw that there was safety in lower volatility. There is a high probability that his decision is a sound one, and the results may bear it out, but it is also possible that the decision works out to the contrary; events such as a volleyball match are probabilistic in nature, and a player’s performance may not follow her personal performance curve. He compares the hitting efficiency of Kathryn Plummer and Jazz Sweet and states that:
If you look one standard deviation from her average
efficiency, you can see that Plummer will hit between .200 and .370 almost 70%
of her outings. This is what the team and coach can generally expect on
any given night. Conversely, one of the more volatile players would be
Jazz Sweet from Nebraska. Her volatility is high (relative to Plummer) so
her range of performance will be broader. One could expect that 70% of
the matches Sweet will hit between .000%-.320%.
While I agree with the sentiment, a better wording is that Plummer’s performance will be somewhere between 0.200 and 0.370 about 70% of the time. The turn of phrase is not merely semantics; it turns the argument toward the information that the Normal curve actually provides. Rather than saying that Plummer hits between 0.200 and 0.370 in 70% of her outings, it says that the probability is 70% that she will hit between 0.200 and 0.370. We are putting probabilistic thinking into play for the decision maker: the term “probability” gives the decision maker food for thought while introducing the reality that there is a 30% chance that she will hit below 0.200 or above 0.370. Instead of thinking that hitting in that range is a sure thing, the thought becomes that there is a 15% chance that she hits below the range and a 15% chance that she hits above it. Thinking in probability terms, because we already have the data available to us, means that we can contemplate the possibility of failure, and it helps temper our expectations. This is why we play the games out rather than run simulations on laptops. It is a matter of what your personal probability tells you.
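The 70%/15%/15% arithmetic can be checked with a normal model. Assuming a mean of .285 and a standard deviation of .085 (numbers read off the quoted .200–.370 range, so approximations rather than Plummer’s actual season statistics), the probabilities fall out of the normal CDF:

```python
from statistics import NormalDist

# Hypothetical normal model of a hitter's efficiency; the range
# 0.200-0.370 is mean +/- one standard deviation under this model.
perf = NormalDist(mu=0.285, sigma=0.085)

p_in_range = perf.cdf(0.370) - perf.cdf(0.200)  # within one sd of the mean
p_below = perf.cdf(0.200)                       # more than one sd below
p_above = 1 - perf.cdf(0.370)                   # more than one sd above

print(f"P(0.200 <= eff <= 0.370) = {p_in_range:.2f}")  # ~0.68, the "almost 70%"
print(f"P(eff < 0.200) = {p_below:.2f}")               # ~0.16
print(f"P(eff > 0.370) = {p_above:.2f}")               # ~0.16
```

The exact one-standard-deviation figure is 68.3%, with about 15.9% of outings in each tail, which is where the “almost 70%” and the two roughly 15% chances come from.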
One way to refine our decision making would be to have prior data on the performance of the players in exactly these situations; in that case one could use Bayes’ Theorem to recompute the probabilities.
One other myth regarding the mean and standard deviation that I would like to dispel is the following. I once heard an anecdote about a coach who would, as a matter of habit, pull a player out of the lineup when the statistics indicated that the player had been performing well above his season mean for a significant amount of time. The rationale for this move relies on the mistaken belief that since the player is outperforming himself for the season, he is due for a low performance, and that by pulling him out of the lineup at that moment, the team can avoid or bypass the low performance, so that the player resumes his high performance in his next start. Statistics doesn’t work that way. While the standard deviation measure says that there should be instances of lower performance to balance the higher performances, nothing in statistics says that a low performance must happen symmetrically, or immediately after a series of high performances. In fact, the lower performance may come when you least expect it. The thing to remember is that the measures are cumulative over a long period of time. Once again, the Law of Large Numbers tells us that the balancing of highs and lows comes after a large number of measurements, not instantaneously. Indeed, in the world of Statistical Process Control, a series of measures that oscillates continuously above and below the mean is an indication that something is wrong with the process.
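The anecdote can be tested with a simulation: generate independent match performances for a hypothetical player (mean .280, standard deviation .080, both invented for illustration) and check how often a below-average match follows three straight above-average ones.

```python
import random

random.seed(1)

MEAN, SD = 0.280, 0.080  # hypothetical player's true mean and volatility
N = 200_000              # number of simulated matches

matches = [random.gauss(MEAN, SD) for _ in range(N)]

# Find every run of three straight above-mean matches and count how
# often the NEXT match comes in below the mean.
streaks = 0
low_after_streak = 0
for i in range(3, N):
    if all(m > MEAN for m in matches[i - 3:i]):
        streaks += 1
        if matches[i] < MEAN:
            low_after_streak += 1

print(f"P(below mean | 3 straight above) = {low_after_streak / streaks:.3f}")
# Stays near 0.5: a hot streak does not make a low match "due"
```

The conditional probability stays at about one half, the same as for any other match: if performances are independent, nothing about a hot streak makes the next match more likely to be a dud, which is exactly the gambler’s fallacy the anecdotal coach fell into.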
Next: Statistical Process Control tools, Six Sigma and its significance, and whether we should apply those criteria.
Works Cited
Circuit Globe. 2019. Circuit Globe / Accuracy and Precision. May. Accessed August 17, 2019. https://circuitglobe.com/accuracy-and-precision.html.
Medcalc Staff. 2019. Medcalc.org / Accuracy and Precision. August. Accessed August 20, 2019. https://www.medcalc.org/manual/accuracy_precision.php.
WikiHow Staff. 2019. WikiHow. October 23. Accessed December 15, 2019. https://www.wikihow.com/Calculate-Variance.