EDITOR’S NOTE:
To celebrate the imminent release of the new title Prepared: Unlocking Human Performance with Lessons from Elite Sport we are sharing this special post. The following is an excerpt from the book. I hope you enjoy the read and stay safe out there…
THE ILLOGIC OF BEING DATA-DRIVEN…
In the digital era there is a great onus on being ‘data-driven’ across all domains. The drive to be objective and to quantify inputs and outputs is eminently understandable. The well-worn phrase that gets thrown around is ‘how can you manage something if you don’t measure it?’
We do however need to be very careful about what metrics we use as a proxy for the thing we are attempting to evaluate. There is inevitably a separation between the measure we can objectively quantify and the complex entity that it represents. We need to be very confident about the specificity and sensitivity of the particular measure in relation to what we are seeking to evaluate, to avoid false positives (detecting something that isn’t there) and false negatives (failing to detect something that is there). On a more fundamental level, complex phenomena defy simple measurement.
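The notions of sensitivity and specificity mentioned above have simple arithmetic definitions, and a worked example may make the trade-off concrete. The following is a minimal sketch, with entirely hypothetical screening numbers invented for illustration:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Proportion of genuine cases the measure detects (guards against false negatives)."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    """Proportion of non-cases the measure correctly rules out (guards against false positives)."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical screening metric applied to 100 athletes, 20 of whom
# genuinely have the condition we are trying to detect. Suppose the
# metric flags 16 of the 20 (4 false negatives) and incorrectly flags
# 8 of the 80 unaffected athletes (8 false positives):
print(sensitivity(16, 4))   # 0.8
print(specificity(72, 8))   # 0.9
```

Even a measure that looks strong on both counts will, applied to a large squad, still generate a steady trickle of false alarms and misses, which is why the metric alone should not drive the decision.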
I have publicly argued that the present onus on being ‘data-driven’ is, by definition, illogical. The advent of big data makes this all the more problematic. The data are not sentient, so it is nonsensical to grant them the role of driving our decisions and behaviours. Proclaiming that our practice is data-driven, as a badge of honour to signal how advanced we are, is frankly bizarre. Any meaning is born of our interpretation of the data. Accordingly, being data-informed seems a more worthy (and logical) proposition.
Similarly, the use of metrics that is prevalent in all sectors becomes problematic if we do not understand the nuances (and paradoxical effects) of their application. By extension, we need to be very considered in how we interpret and act upon the data, and in what weight we assign to particular metrics when making decisions. We must understand the limitations and uncertainty inherent in any metric we choose to employ, and account for these throughout our decision-making process.
As leaders and managers we further need to be very mindful in what we communicate to those we lead when it comes to what metrics we are using to evaluate them. The observer effect derived from the field of physics is relevant here. In essence, to measure something is to change it. This applies all the more when working with sentient beings; when we select a metric and assign importance to it, this inevitably influences subsequent behaviour.
THE DISTRACTION OF METRICS…
To be clear, use of data should be a fundamental part of what informs our decision-making, and we should absolutely seek to collect quality data to support this process. Equally, professional sports provide a host of examples of the unintended consequences that come when we let data alone drive our judgements. The burgeoning use of data and increasing attention given to an ever growing array of performance metrics is changing how teams play. Moneyball famously brought data analytics in sport to mass consciousness, and general managers in many professional team sports now use data analytics extensively when recruiting athletes.
The use of data to recruit athletes in professional sport seems benign and largely positive. However, as coaches seek to leverage data, this is coming to influence the tactics employed by teams and, in turn, technical development among players to some extent. In professional sports it is becoming abundantly evident (ironically, from trends in data) that the actions of players are also being shaped by the metrics the coaching staff are looking at. As specific aspects of players’ performance are evaluated (and rewarded) based on selected metrics, this inevitably drives how they subsequently operate on the field, on the court, or on the ice.
As we stated earlier, to measure something is to change it. Moreover, Goodhart’s Law (this time from the field of economics) describes how the act of (publicly) assigning importance to a metric inevitably changes behaviour, such that the metric no longer provides valid insight into independent and freely chosen behaviour (the metric itself having become the driver of that behaviour).
A generic example is the standardised assessment employed in education: given the stakes involved, students are steered to direct their efforts towards preparing for the test, rather than learning the material to understand it. As a result, test scores reflect students’ ability to prepare for the test, rather than necessarily their understanding of the material or their ability to apply it. This is something of an issue, as learning for understanding was the original purpose and is the central premise of education. Making the measure the target, by attaching to it a high degree of importance in terms of future prospects, negates its ability to evaluate what we were originally seeking to measure, and alters behaviour in ways that are contrary to our original purpose.
A similar scenario can be observed in sport, and more specifically within the area of strength and conditioning. When particular assessments are employed to evaluate the progress of the athlete, and thereby the effectiveness of the training programme, naturally this leads practitioners to train the athlete to score well on the assessments, as opposed to preparing them to compete in the sport. Once again, assigning importance to the metric steers behaviour away from the original objective.
Similarly, the currently popular practice of velocity-based training, whereby athletes are given feedback on bar velocity following each repetition, predictably prompts athletes to chase numbers, with little regard for how they are performing the movement. Indeed chasing numbers is an apt description of the many scenarios where an emphasis on metrics adversely affects the quality of the output, and distracts from, or even displaces, the original purpose of the endeavour.
There are also numerous examples from a host of domains that demonstrate how humans (and other sentient beings) find ways to game the chosen metric, so that the metric itself drives their behaviour to gain greater rewards, rather than simply engaging in the original task. One such example was the introduction of GPS monitoring in team sports: when players were evaluated on metrics such as total distance covered in a game, some took to running around pointlessly during breaks in play simply to drive their total distance numbers up.
Incentivising work with performance-based metrics further changes the source of motivation. What we are seeking to foster and preserve is intrinsic motivation, derived from the work itself and the inherent satisfaction of doing good work. In turn, this is associated with a feeling of purpose and a sense of meaning in our work. When we start to employ metrics in an attempt to reward desired behaviours, by definition we introduce extrinsic motivation, which is fickle, and this comes at the expense of the intrinsic motivation that we seek.
Once again, the industry approach of incentivising performance with rewards (or punishment) based on metrics inevitably tends towards a transactional mindset, and this is poison to a sense of purpose, meaning, and ultimately loyalty or attachment to the team or organisation.
EXERCISING JUDGEMENT VERSUS DRIVE FOR OPTIMISATION…
As we alluded to earlier, when we select a metric and attach importance to it, naturally this will lead us to optimise for that metric, and thereby the metric starts to drive our behaviour rather than the task itself. At a surface level, optimising performance would seem to be a good thing. However, human performance is a complex phenomenon. The drive for optimisation and efficiency is worthwhile and makes sense when it comes to procedures and operations, which lend themselves to standardisation and evaluation. More complex endeavours with humans are far less amenable to this, and when we attempt to employ this approach it follows that we run into trouble.
Simple systems are easy to optimise. Complex systems are not. Our attempts to optimise complex systems inevitably lead us to oversimplify: we substitute simpler, easier-to-quantify metrics for performance itself. In different realms of human performance we select key performance indicators. Indicators of performance are separate from performance itself, particularly when it comes to human performance. Once again, when we select key performance indicators, there is a natural tendency to optimise for those metrics. In doing so, behaviour becomes driven by optimising the key performance indicator, rather than performance itself.
In this drive to employ metrics and optimise clearly we have lost our original purpose. When dealing with complex phenomena such as human performance it follows we need to temper the industry drive for optimisation.
We can avoid these issues when our use of data and our optimisation efforts are tempered with critical thinking. Exercising professional judgement would seem a much better approach when dealing with complex adaptive systems such as humans, rather than relying solely upon metrics and standard operating procedures, independently of critical thinking and professional judgement.
Professional judgement is, oddly, becoming largely forgotten in the era of big data and the present drive to optimise and operate in a data-driven manner. A rare exception in the realm of sport is the professional judgement and decision making framework proposed by Martindale and Collins. This framework was designed as a tool to assess the applied practice of support staff and to evaluate the effectiveness of their input and interventions (as opposed to relying on standardised evaluation). It remains something of an oasis in the sea of data-driven and metrics-based assessment of output that is so prevalent in professional sport, a trend that looks set to continue.