Saturday, July 28, 2007

 

Welcome to Woodshedder's Re-Education Camp

"Ya got time to breathe, ya got time for music."


Comments:
So, is this your not so subtle way of endorsing ethanol stocks?
 
The only ethanol stock I like is Brown Forman.

Woodford Reserve dude, buy it for the long term hold!
 
Holy Shast, "Dooley!" An' ya left Ernest T. Bass outta it!

(Save for his brick)

You know that was Uncle Jessie playin' the jug flute, dontcha?
 
woodshedder--

wtf? where's the bravado, the big "first out of the blocks" boo-yah?....judging from your post, I take it that getting drunk and hitting on your wife was non-productive.
 
Stockhead, I canned that schtick back in early March, when I first blogged here. What you are looking at is a more refined guest blogger schtick.

Believe me, Danny will, I'm sure, give us a double dose of bravado.
 
hell tits yeah
 
God kill me if this music ever comes back.
 
Sin 1: Forecasting (Pride)
An enormous amount of evidence suggests that we simply can't forecast. The root of this inability seems to lie in the fact that we are all over-optimistic and over-confident. For instance, we've found that around 75% of fund managers think they are above average at their jobs! Whether it is forecasting bonds, equities, earnings or pretty much anything else, we are simply far too sure about our ability to forecast the future.
Given the dreadful track records that even a cursory glance at the data reveals, why do we bother to keep using forecasts, let alone put them at the very heart of the investment process? (A mistake that probably 95% of the investment processes I've come across persist in making.)
The answer probably lies in a trait known as anchoring: in the face of uncertainty, we will cling to any irrelevant number as support. Little wonder, then, that investors continue to rely on forecasts.
Some have argued that any forecast is better than no forecast at all. For instance, Joe Nocera, writing in the New York Times (1 October 2005), opined: "Indeed, I wound up thinking that forecasting is to the market what gravity is to the earth. As much as we like to poke fun at faulty predictions, we can't function without them. Even if we disagree with, say, the analysts' consensus on Cisco, that consensus gives us a basis that helps us form our own judgments about whether it is overvalued or undervalued. Without forecasts, the market would no longer be grounded to anything."
This misses the point on many levels. Firstly, when it comes to anchoring, we know that irrelevant numbers can influence people's behaviour. For instance, Englich, Mussweiler and Strack [1] show that legal experts were influenced by irrelevant anchors when setting jail sentences, even when the experts were fully aware of the irrelevance of the input.
In one study, participants (judges) were asked to roll dice to determine the sentencing request from the prosecution. The pair of dice they used were loaded to show either a low pair (1 and 2, totalling 3) or a high pair (3 and 6, totalling 9). Having rolled the dice, participants were told to sum the scores, and this total represented the prosecution's demand. Since the judges themselves rolled the dice, they could clearly see the input was totally irrelevant. However, the group who rolled a total of 3 issued an average sentence of 5.3 months; those who rolled a total of 9 issued an average sentence of 7.8 months! So merely providing a forecast makes people likely to cling to it.
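The design reduces to a two-group comparison of mean sentencing demands. A minimal sketch, using simulated data centred on the reported group means (the study's raw data are not given here):

```python
# Sketch of the anchoring analysis. The data below are simulated, not the
# original study's: two groups of hypothetical sentences (in months) centred
# on the 5.3 and 7.8 figures reported in the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
low_anchor = rng.normal(loc=5.3, scale=2.0, size=20)   # dice total 3
high_anchor = rng.normal(loc=7.8, scale=2.0, size=20)  # dice total 9

t, p = stats.ttest_ind(high_anchor, low_anchor)
print(f"low-anchor mean:  {low_anchor.mean():.1f} months")
print(f"high-anchor mean: {high_anchor.mean():.1f} months")
print(f"t = {t:.2f}, p = {p:.4f}")
```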
[1] Englich, Mussweiler and Strack (2005) Playing dice with criminal sentences: The influence of irrelevant anchors on experts' judicial decision making, Personality and Social Psychology Bulletin, forthcoming.
Secondly, and this is a really radical idea, how about we anchor share values in something we can measure, like dividends! Since we know people will stumble into the pitfall of anchoring, our best hope is to get them to anchor to something vaguely sensible. Support for this idea is offered by the work of Hirota and Sunder [2]. They show that in experimental markets, bubbles are much more likely to appear when investors lack dividends as an anchor.
 
The folly of forecasting
Those who have endured one of my behavioural finance presentations will have heard me rant and rave about the pointlessness of forecasting. I have finally got around to putting pen to paper on this subject [4].
The 6th century BC philosopher Lao Tzu observed: "Those who have knowledge don't predict. Those who predict don't have knowledge." Despite these age-old words of wisdom, our industry seems eternally to persist in basing the investment process around forecasts.
Before exploring the reasons for our dependency upon irrelevant guesses about an unknowable future, I had better buttress my case by showing just how bad the track record of forecasting actually is. The charts below set out the forecasting performance of so-called professionals. For ease of data access, all the series below are taken from the Federal Reserve Bank of Philadelphia's Livingston survey or its Survey of Professional Forecasters. The findings are not the artefact of a strange data set, however; I have used different data and found that similar patterns exist across them all.
The first chart shows economists' attempts to forecast the rate of inflation as measured by the GDP deflator. Sadly, it reveals a pattern that will become all too common in the next few charts. Economists are really very good at telling you what has just happened! They constantly seem to lag reality. Inflation forecasts appear to be largely a function of past inflation rates.
[Chart: US GDP deflator and forecasts, 1970-2004: actual GDP deflator vs. forecast (%). Source: DrKW Macro research]
Our second category is the bond forecasters. We have previously analysed their behaviour in depth (see Global Equity Strategy, 22 February 2005). Much like the economists above, their performance is severely lacking. Not only are bond forecasters bad at guessing the level of the yield, they can't get the direction of yield changes right either. The table below shows that when yields were forecast to rise, they actually fell 55% of the time!
[4] I was much inspired to write this after reading Nassim Taleb's recent paper The Scandal of Prediction (2005). He renewed my vigour for this subject.
[Chart: Consensus one-year-ahead bond yield forecasts and reality (%): actual 10-year bond yield vs. forecast, 1993-2005. Source: DrKW Macro research]
Predicted vs. actual yield movement (four quarters ahead, 1992-2004), % of occurrences:

                   Actual up   Actual down
Predicted up          45           55
Predicted down        22           78

Source: DrKW Macro research
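The direction hit rate in a table like this can be computed from any pair of forecast and realised series. A minimal sketch, using hypothetical toy inputs rather than the Livingston data:

```python
# Sketch: build the predicted-vs-actual direction table from two series of
# yield changes: forecast changes and realised changes (hypothetical inputs).
import numpy as np

def direction_table(predicted_change, actual_change):
    """Return % of actual up/down moves conditional on the predicted direction."""
    pred_up = predicted_change > 0
    act_up = actual_change > 0
    table = {}
    for label, mask in [("Predicted up", pred_up), ("Predicted down", ~pred_up)]:
        n = mask.sum()
        table[label] = {
            "actual up %": 100 * act_up[mask].mean() if n else float("nan"),
            "actual down %": 100 * (~act_up[mask]).mean() if n else float("nan"),
        }
    return table

# Toy example with made-up quarterly yield changes.
pred = np.array([0.5, 0.3, -0.2, 0.4, -0.1])
act = np.array([-0.1, 0.2, -0.3, -0.2, 0.1])
print(direction_table(pred, act))
```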
Just in case you think this is just a case of an equity man picking on debt, the chart below
shows the feeble forecasting abilities of equity strategists. They too seem to think that the
recent past is best extrapolated into the future, and hence end up lagging reality.
Acknowledgement of our own limitations is one of the reasons why we don’t even attempt
to produce index forecasts.
[Chart: S&P 500 and forecasts, Jun-91 to Jun-04: actual S&P 500 vs. forecast. Source: DrKW Macro research]
Our last category of truly inept seers is the analysts. Their inability is perhaps the most worrying, as their forecasts are probably taken far more seriously than the average macro forecast.
The chart below is constructed by removing the linear time trend from both the operating earnings series for the S&P 500 and the analyst forecasts of those same earnings, and plotting the deviations from trend. It clearly shows that, just like the other forecasters examined here, analysts are terribly good at telling us what has just happened but of little use in telling us what is going to happen in the future.
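The detrending step itself is mechanical. A minimal sketch, assuming a simple least-squares linear trend (the note does not specify its exact fitting procedure) and hypothetical numbers:

```python
# Sketch of the detrending step: fit a linear time trend and inspect the
# deviations from trend. The input array is hypothetical.
import numpy as np

def deviations_from_trend(y):
    """Remove a least-squares linear time trend; return the residuals."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, deg=1)
    return y - (slope * t + intercept)

# Toy example with a made-up trending 'earnings' series ($/sh).
earnings = np.array([10.0, 11.0, 13.0, 12.0, 15.0, 17.0, 16.0, 19.0])
print(deviations_from_trend(earnings).round(2))
```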
[Chart: Analysts lag reality: operating earnings and forecasts, deviations from trend ($/sh), Jan-86 to Jan-05. Source: DrKW Macro research]
Overconfidence as a driver of poor forecasting
The two most common biases that psychologists have documented are over-optimism and over-confidence. Technically speaking, overconfidence refers to a situation where people are surprised more often than they expect to be. Statistically, we describe such individuals as 'not well calibrated'. What we mean is this: if we ask people for a forecast and then for 98% confidence intervals around it, the true answer should lie outside those bounds just 2% of the time; in practice it tends to lie outside the bounds 30-40% of the time! People are simply far too sure about their ability to predict.
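Calibration in this sense is easy to score. A minimal sketch, with hypothetical intervals and answers, of the miss-rate computation:

```python
# Sketch: score interval calibration. A well-calibrated 98% interval should
# miss the true value ~2% of the time. Inputs below are hypothetical.
def miss_rate(intervals, truths):
    """Fraction of true values falling outside the stated [low, high] bounds."""
    misses = sum(1 for (low, high), truth in zip(intervals, truths)
                 if not (low <= truth <= high))
    return misses / len(truths)

# Toy example with three stated 98% intervals and the true answers.
stated = [(100, 200), (5, 15), (0.5, 2.0)]
actual = [250, 10, 1.2]
print(f"miss rate: {miss_rate(stated, actual):.0%}")  # 33% here: overconfident
```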
Russo and Schoemaker [5] devised a simple test. Before you go any further, try to answer the questions below and see how you do.
Self-test of overconfidence (for each item, give a low and a high estimate such that you are 90% confident the answer lies between them):

Martin Luther King's age at death
Length of the Nile River
Number of countries that are members of OPEC
Number of books in the Old Testament
Diameter of the moon in miles
Weight of an empty Boeing 747 in pounds
Year in which Wolfgang Amadeus Mozart was born
Gestation period (in days) of an Asian elephant
Air distance from London to Tokyo
Deepest (known) point in the ocean (in feet)

Source: Russo and Schoemaker
The answers can be found in the footnote below [6]. If you are properly calibrated, only one of your answers should lie outside the limits you wrote down. When I took the test, two of my answers were outside the bounds, so I, like everyone else, am overconfident. However, compared to Russo and Schoemaker's sample of over 1,000 participants, I didn't do too badly: less than 1% got nine or more answers correct, and most respondents missed four to seven items!
One key finding in the literature on overconfidence is that experts are even more overconfident than lay people. Experts do know more than lay people, but sadly this extra knowledge seems to trigger even higher levels of overconfidence.
[5] Russo and Schoemaker (1989) Decision Traps: Ten Barriers to Brilliant Decision Making and How to Overcome Them, Simon & Schuster.
[6] 39 years; 4,187 miles; 13 countries; 39 books; 2,160 miles; 390,000 pounds; 1756; 645 days; 5,959 miles; 36,198 feet.
Overconfidence and experts
The chart below shows the calibration curves for two groups of experts: weathermen and doctors. Each group was given information relevant to its own discipline, so the weathermen were given weather patterns and asked to predict the weather, while the doctors were given case notes and asked to diagnose the patient.
We are measuring predicted probability (confidence) against actual probability, so the 45° line represents perfect statistical calibration. Weather forecasters actually do remarkably well. In contrast, doctors are a terrifying bunch of people: when they were 90% sure they were correct, they were actually right less than 15% of the time!
So why the difference in performance between these two groups? It largely appears to relate to the illusion of knowledge (defined as a situation where we think we know more than everyone else). Weathermen get rapid, undeniable evidence on their abilities as forecasters; after all, all you have to do is look out of the window to see whether they got it right. Doctors, in contrast, often lack feedback, and so find it far harder to know when they have been right or wrong.
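A calibration curve like the one plotted below can be built by binning stated confidence and comparing it with realised accuracy. A minimal sketch with hypothetical data:

```python
# Sketch: an empirical calibration curve. Bin stated confidence levels and
# compare each bin's mean confidence with the fraction of correct calls.
import numpy as np

def calibration_curve(confidence, correct, bins=10):
    """Return (mean stated confidence, empirical accuracy) pairs per bin."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, bins + 1)
    points = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Top bin is inclusive so that confidence == 1.0 is not dropped.
        mask = (confidence >= lo) & ((confidence < hi) | (hi == 1.0))
        if mask.any():
            points.append((confidence[mask].mean(), correct[mask].mean()))
    return points  # points on the 45-degree line mean perfect calibration

# Toy example: 90% stated confidence but only 50% accuracy -> overconfident.
conf = [0.9, 0.9, 0.9, 0.9, 0.6, 0.6]
hit = [1, 0, 0, 1, 1, 0]
print(calibration_curve(conf, hit))
```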
[Chart: Calibration of weathermen and doctors: predicted probability vs. actual probability (%). Source: Plous (1991) The Psychology of Judgement and Decision-Making]
It might be tempting to think of our industry as akin to the weathermen: if we make decisions or forecasts, we should be able to see fairly soon whether they were correct. However, recent evidence suggests that most investors are more akin to doctors than weathermen, at least in terms of the scale of their overconfidence.
The chart below is based on a recent study by Torngren and Montgomery [7]. Participants were asked to select, from a pair of stocks, the one they thought would outperform each month. All the stocks were well-known blue-chip names, and players were given the name, industry and prior 12 months' performance for each stock. Both laypeople (undergraduates in psychology) and professional investors (portfolio managers, analysts and brokers) took part in the study.
Overall, the students were around 59% confident in their stock-picking abilities; the professionals averaged 65% confidence. The bad news is that both groups were worse than sheer luck. That is to say, you could have beaten both groups just by tossing a coin!
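To make 'worse than sheer luck' concrete: with binary picks, chance gives 50%, so a group's hit rate can be tested against a coin toss. A minimal sketch with hypothetical counts (not the study's raw data):

```python
# Sketch: is an observed hit rate significantly below the 50% a coin toss
# would give? Counts below are hypothetical, for illustration only.
from scipy.stats import binomtest

k, n = 40, 100  # hypothetical: 40 correct picks out of 100 pairs
result = binomtest(k, n, p=0.5, alternative="less")
print(f"accuracy = {k/n:.0%}, p-value vs. coin toss = {result.pvalue:.3f}")
```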
[7] Torngren and Montgomery (2004) Worse than chance? Performance and confidence among professionals and laypeople in the stock market, Journal of Behavioural Finance, Vol. 5.
[Chart: Average accuracy and confidence on stock selection (%), students vs. professionals. Source: Torngren and Montgomery (2004)]
In addition to the overall statistics, at each selection players were asked to state how confident they were in the predicted outcome. The even worse news was that the professionals were really dreadful, underperforming the laypeople by a large margin. For instance, when the professionals were 100% sure they were correct, they were actually right less than 15% of the time!
[Chart: Accuracy and confidence on stock selection: calibration curves for laypeople and professionals vs. perfect calibration. Source: Torngren and Montgomery (2004)]
Players were also asked to rank the inputs they used in reaching their decisions. The chart below shows the average scores for the inputs. Laypeople were essentially just guessing, though they were also influenced by prior price performance. In contrast, the professionals thought they were using their knowledge to pick the winners. It is hard to imagine a better example of the illusion of knowledge driving confidence.
[Chart: Average rating of input importance (previous month's results, other knowledge, intuition, guessing), laypeople vs. professionals. Source: Torngren and Montgomery (2004)]
Glaser, Langer and Weber [8] investigated overconfidence among professional investors and laypeople by asking both groups to answer ten general-knowledge questions and ten finance questions, much like the self-test set out earlier. If people are well calibrated, roughly one answer in ten should fall outside their stated limits. The chart below shows the actual number of answers that fell outside the confidence limits (the general-knowledge and finance questions have been averaged together to give a score out of ten).
The professional investors had a median of nearly eight questions outside their confidence intervals; the laypeople (students) had a median of six, once again confirming that experts are more overconfident than the rest of us.
[Chart: Average number of questions outside of the confidence interval, experts vs. students. Source: Glaser, Langer and Weber (2005)]
A new paper by Stotz and Nitzsch [9] surveyed analysts at major investment banks. They were asked to say how many of their rivals were more accurate and less accurate than they themselves were, with respect to both earnings forecasts and target prices. Unsurprisingly, the analysts thought that they were all above average: the average analyst's overconfidence was 68.44% with regard to earnings forecasts and 61.49% with respect to target prices.
[8] Glaser, Langer and Weber (2005) Overconfidence of professionals and lay men: Individual differences within and between tasks, University of Mannheim working paper.
[9] Stotz and Nitzsch (2005) The perception of control and the level of overconfidence: Evidence from analysts' earnings estimates and price targets, The Journal of Behavioural Finance, Vol. 6.
Stotz and Nitzsch also asked the analysts to give reasons for their assessment of their ability. They found that when it came to target prices (where analysts were less overconfident), analysts often argued that "prices sometimes happen by chance", or that they were the result of "irrational investors", or that successful price forecasts had a large element of luck. In contrast, when it came to explaining their earnings forecasts, analysts said "detailed knowledge of the company or sector" helped to make good forecasts, as did "experience" and "hard work". This would seem to be further evidence of the illusion of control and the illusion of knowledge driving overconfidence.
[Chart: Average analyst confidence (%) in their ability to forecast earnings and prices. Source: Stotz and Nitzsch (2005)]
I have recently been subjecting participants at my behavioural finance seminars to a questionnaire designed to measure their behavioural biases. I have been collating the results and will soon publish a note on the findings. However, as a sneak preview, one of the questions is: are you above average at your job? I have around 200 respondents, all of them professional fund managers [10]. A stunning 75% of those asked think themselves above average at their jobs. Many have written things like, "I know everyone thinks they are above average, but I am"!
[Chart: % of fund managers who rate themselves as above average at their jobs, above average vs. equal to or below average. Source: DrKW Macro research]
All of this raises at least two questions. Firstly, why do professionals keep forecasting, given the evidence that they can't? Secondly, why do we keep using these useless forecasts? Let's examine each in turn.
[10] If anyone is interested in taking the test, please email me and I will send you the questionnaire and add your response to the sample. James.Montier@drkw.com
Why forecast when the evidence shows you can’t?
Two areas of psychology help to explain how forecasters keep forecasting in the face of pretty overwhelming evidence that they aren't any good at it. They can perhaps be summarised as ignorance (not knowing that the overconfidence exists) and arrogance (ego defence mechanisms).
Unskilled and unaware
David Dunning and a variety of co-authors have, over the years, documented a disturbing pattern of behaviour: those who are among the worst performers are the most overconfident.
For instance, Kruger and Dunning [11] asked people to rate how they had performed on a logical-reasoning test. The chart below shows the perceived score and the actual score. Those in the bottom two quartiles by actual score thought they would be around the 60th percentile (i.e. well above average). However, their actual scores put the bottom quartile at the 10th percentile. A massive case of overconfidence.
[Chart: Perceived and actual scores on a logical-reasoning test, by actual-score quartile ('unskilled and unaware'). Source: Kruger and Dunning (1999)]
In a follow-up paper, Dunning et al [12] explore some of the mechanisms that prevent people from realizing just how unskilled they actually are. They note: "People fail to recognize their own incompetence because that incompetence carries with it a double curse... the skills needed to produce correct responses are virtually identical to those needed to evaluate the accuracy of one's responses... Thus, if people lack the skills to produce correct answers, they are also cursed with an inability to know when their own answers, or anyone else's, are right or wrong."
Dunning et al also point out that very often people's estimates of their ability arise from a 'top-down' approach. That is to say, people start with a preconceived belief about their skills or abilities (along the lines of 'I'm good at my job' or 'I'm good at forecasting') and use those beliefs to estimate how well they will do at a specific task.
Unfortunately, all the evidence suggests that people's impressions of their skills and abilities are at best moderately correlated, and frequently uncorrelated, with their actual performance. This is nicely evidenced by the example above, where every group had a perceived score of between 50 and 60%, bearing no relation to the actual outturn!
[11] Kruger and Dunning (1999) Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments, Journal of Personality and Social Psychology, Vol. 77.
[12] Dunning, Johnson, Ehrlinger and Kruger (2003) Why people fail to recognize their own incompetence, Current Directions in Psychological Science.
Ego defence mechanism
A second group of techniques deployed by forecasters could best be described as ego defence mechanisms. Philip Tetlock [13] has investigated the use of 'excuses' for forecast failures among experts on world politics, having monitored experts' views in real time for more than a decade. He notes: "Almost as many experts as not thought that the Soviet Communist Party would remain firmly in the saddle of power in 1993, that Canada was doomed by 1997, that neo-fascism would prevail in Pretoria by 1994, that EMU would collapse by 1997... that the Persian Gulf Crisis would be resolved peacefully."
He found that, across a vast array of predictions on a wide range of political events, experts who reported 80% or higher confidence in their predictions were actually correct only around 45% of the time. Across all predictions, the experts were little better than coin tossers. As Tetlock notes, "Expertise thus may not translate into predictive accuracy but it does translate into the ability to generate explanations for predictions that experts themselves find so compelling that the result is massive overconfidence."
After each event had passed and the forecasts were shown to be either right or wrong, Tetlock returned to the experts and asked them to reassess how well they thought they understood the underlying processes and forces at work. The table below shows the experts' belief in their own abilities both before and after the events. Look at the judged probabilities, pre- and post-event, for those whose forecasts were incorrect: they are virtually identical. So despite incontrovertible evidence that they were wrong, the experts showed no sign of cutting their faith in their own understanding of the situation. A true Bayesian would have slashed the assigned probability (last column in the table below). This is prime evidence of the conservatism bias: a tendency to hang on to your views for too long, and to adjust away from them only slowly.
Subjective probabilities experts assigned to their understanding of the underlying forces at the beginning and end of the forecast periods

Predicting the   Status of    Judged prior probability     Judged posterior probability   Bayesian predicted
future of        forecast     (before outcome is known)    (after outcome is known)       posterior probability
Soviet Union     Inaccurate   0.74                         0.70                           0.49
                 Accurate     0.69                         0.83                           0.80
South Africa     Inaccurate   0.72                         0.69                           0.42
                 Accurate     0.70                         0.77                           0.82
EMU              Inaccurate   0.66                         0.68                           0.45
                 Accurate     0.71                         0.78                           0.85
Canada           Inaccurate   0.65                         0.67                           0.39
                 Accurate     0.68                         0.81                           0.79

Source: Tetlock (2002)
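For concreteness, here is the update a true Bayesian would perform, via Bayes' rule, after seeing a forecast fail. The two likelihoods below are illustrative assumptions of mine, not figures from Tetlock's study; the point is only that the posterior falls well below the prior:

```python
# Sketch of the Bayesian update the experts failed to make. The likelihoods
# are illustrative assumptions, not numbers from Tetlock's study.
def posterior_understanding(prior, p_miss_given_understand, p_miss_given_not):
    """P(I understand the forces | my forecast was wrong), via Bayes' rule."""
    num = p_miss_given_understand * prior
    den = num + p_miss_given_not * (1 - prior)
    return num / den

# Prior faith of 0.74 (Soviet Union row); a wrong forecast should cut it sharply.
print(posterior_understanding(0.74, p_miss_given_understand=0.3,
                              p_miss_given_not=0.8))  # ~0.52, well below 0.74
```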
Tetlock identified five common strategies or defences used to explain away forecast error whilst preserving faith in the underlying view:
1. The 'if only' defence – if only the Federal Reserve had raised rates, then the US stock price bubble would have been avoided. Effectively, the experts claim they would have been correct 'if only' their original advice or analysis had been followed. This turns their forecast into a historical counterfactual, which is impossible to prove.
[13] Tetlock (2002) Theory-driven reasoning about plausible pasts and probable futures in world politics, in Gilovich, Griffin and Kahneman (eds.) Heuristics and Biases: The Psychology of Intuitive Judgement, CUP.
2. The 'ceteris paribus' defence – although the expert's advice or analysis was correct, something else occurred, covered by the ubiquitous ceteris paribus clause, that blew the forecast off course. So the stock market would have crashed but for government-led manipulation.
3. The 'I was almost right' defence – although the predicted outcome did not occur, it 'almost' did. Tetlock gives examples of so-called close-call counterfactuals, such as "the hardliners almost overthrew Gorbachev" or "the EU almost disintegrated during the currency crisis of 1992".
4. The 'it just hasn't happened yet' defence – although the predicted outcome has not yet occurred, it will eventually come to pass. This is one of my favourites! I know that I regularly use this defence to assert that high valuations will inevitably and eventually lead to low returns for investors, thus maintaining my faith in my view of markets.
5. The 'single prediction' defence – although the conditions of the forecast were met, and the outcome never came close to occurring and now never will, this failure shouldn't be held against the framework or view that inspired it. The argument runs: everyone knows (or should know) that forecasting is pointless, so the analysis remains valid; it was the act of forecasting that was flawed.
These five defence mechanisms are regularly deployed by experts to excuse the dismal failure of their forecasts. The table below shows scores (on a nine-point scale) of how heavily each defence was relied upon. Unsurprisingly, those who gave inaccurate forecasts relied much more heavily upon the mechanisms than those who gave accurate forecasts. In fact, across the four cases used here, those who gave inaccurate forecasts were 1.6x more likely to rely on one of these defence mechanisms than the accurate forecasters.
Average reactions of experts to confirmation and disconfirmation of their conditional forecasts (nine-point scale)

Predicting the   Status of    If only   Ceteris   I was         It just hasn't   Single
future of        forecast               paribus   almost right  happened yet     prediction
Soviet Union     Inaccurate   7.0       7.1       6.8           6.4              7.3
                 Accurate     4.1       3.9       3.6           5.0              3.1
South Africa     Inaccurate   7.1       7.0       7.3           7.3              7.1
                 Accurate     4.5       3.5       3.3           4.0              4.8
EMU              Inaccurate   7.2       5.9       6.2           7.8              7.0
                 Accurate     5.1       4.6       4.9           3.8              4.3
Canada           Inaccurate   7.6       6.8       6.5           8.0              7.2
                 Accurate     6.8       3.7       4.2           4.4              4.5

Source: Adapted from Tetlock (2002)
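The 1.6x figure can be recovered directly from the table above by comparing mean scores across the two groups:

```python
# Reproducing the ~1.6x figure from the table above: the ratio of mean
# defence-mechanism scores for inaccurate vs. accurate forecasters.
inaccurate = [7.0, 7.1, 6.8, 6.4, 7.3,
              7.1, 7.0, 7.3, 7.3, 7.1,
              7.2, 5.9, 6.2, 7.8, 7.0,
              7.6, 6.8, 6.5, 8.0, 7.2]
accurate = [4.1, 3.9, 3.6, 5.0, 3.1,
            4.5, 3.5, 3.3, 4.0, 4.8,
            5.1, 4.6, 4.9, 3.8, 4.3,
            6.8, 3.7, 4.2, 4.4, 4.5]
ratio = (sum(inaccurate) / len(inaccurate)) / (sum(accurate) / len(accurate))
print(f"inaccurate/accurate reliance ratio: {ratio:.2f}")  # ~1.63
```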
[Chart: Average use of each defence mechanism across the four cases, accurate vs. inaccurate forecasters. Source: Adapted from Tetlock (2002)]
Tyszka and Zielonka [14] applied Tetlock's approach to analysts and weathermen. As we have already noted, weathermen are one of those rare groups that are actually well calibrated. Financial analysts, in contrast, have been found to be very overconfident, as documented above.
Tyszka and Zielonka asked financial analysts to predict the stock market's level about a month and a half ahead. Weathermen were asked to predict the average temperature in April (again around one and a half months into the future). In both cases, three mutually exclusive and exhaustive outcomes were specified in such a way that each was roughly equally likely (i.e. had a 0.33 chance of happening). For example, the analysts were asked whether the index would be below x, between x and y, or above y. They were also asked how confident they were in their predictions.
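Constructing three roughly equally likely ranges is a matter of taking terciles of some reference distribution. A minimal sketch with a hypothetical index history (the paper does not say exactly how the break-points were chosen):

```python
# Sketch: three roughly equally likely outcome ranges, as in the Tyszka and
# Zielonka design. The break-points x and y are terciles of a reference
# distribution; the index history below is hypothetical.
import numpy as np

history = np.array([980, 1010, 995, 1040, 1020, 1005, 990, 1035, 1015, 1000])
x, y = np.quantile(history, [1 / 3, 2 / 3])  # tercile break-points
print(f"below {x:.0f} / between {x:.0f} and {y:.0f} / above {y:.0f}")
# A well-calibrated forecaster should attach ~33% confidence to any one range.
```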
The chart below shows the average scale of overconfidence reported. Remember that the three choices were constructed so that each option was roughly equally likely, so a well-calibrated individual would have reported 33% confidence. However, the analysts had an average confidence of just over 58%, and the weathermen just over 50%. So both groups were, as usual, overconfident, but the analysts were more so.
[Chart: Average confidence probability (%), financial analysts vs. weathermen. Source: Tyszka and Zielonka (2002)]
In fact, only around one third of the analysts were actually correct, while around two thirds of the weathermen were. Those who gave incorrect forecasts were once again contacted and asked to assign importance ratings, on an eight-point scale, to various reasons for their forecast failure.
It is interesting to note that the less confident weathermen's single biggest justification for their forecast failure was a lack of personal experience, followed by an acknowledgment that the weather is inherently unforecastable. The analysts, on the other hand, argued that they shouldn't be judged on the basis of a single prediction (the 'single prediction' defence), and that something else happened that altered the outcome that would otherwise have been achieved (the 'ceteris paribus' defence from above).
So, just like Tetlock's political experts, financial analysts seem to be using mental defence mechanisms to protect themselves from the reality of their appallingly bad track record at forecasting.
[14] Tyszka and Zielonka (2002) Expert judgements: Financial analysts versus weather forecasters, The Journal of Psychology and Financial Markets, Vol. 3.
 
you're a fucking asshole
 
?
 
?
 

DISCLAIMER: This is a personal web site, reflecting the opinions of its author. It is not a production of my employer, and it is unaffiliated with any FINRA broker/dealer. Statements on this site do not represent the views or policies of anyone other than myself. The information on this site is provided for discussion purposes only and is not an investing recommendation. Under no circumstances does this information represent a recommendation to buy or sell securities.