You may have read, and been intrigued by, a recent BBC article on how poll predictions can go badly wrong before an election.  Or you may not:

See: Election polls 2015: What went wrong?  http://www.bbc.co.uk/news/uk-politics-35350274

With the upcoming EU referendum in the UK and the Presidential elections in the US, what are the chances of us predicting the outcomes before the ballot papers are counted?

In a recent New York Times article by Frank Bruni (“Our Insane Addiction to Polls”, Sunday January 24th, 2016), Ralph Reed is quoted as saying “There seems to be an inverse relationship between the preponderance of polling and the reliability of polling”, which gets to the kernel of the issue (and the sampling of muesli is another matter).

This isn’t the first time that such issues have been raised – and it won’t be the last either.  In the US election between Dewey and Truman in 1948, the polls got it so badly wrong that a smiling Truman was able to hold up a newspaper predicting the incorrect outcome…

So what went wrong?  The BBC’s conclusion was ‘unrepresentative sampling’.  This may or may not be true – we’ll see later.  And what’s the relation of all this to particle size analysis?  W Edwards Deming and his mentor Walter Shewhart – two of the most famous names in quality control – spent a lot of time on poll analysis in the 1930s (before the Truman election referred to above).  Did they get it wrong?

Of course not…  The math is actually incredibly simple in both the polling and the particle size distribution examples.  The standard error is proportional to 1/√n, where n is the number of people interviewed, experiments, particles in a particle size distribution etc.  The standard error (SE) is the measured standard deviation around a mean value for repeated samplings.  The more samples we take, the nearer the measured result will be to the correct result or ‘truth’.  The rule of thumb is that there’s a two-thirds probability that the ‘truth’ will lie within +/- 1 SE of the measured mean.
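The 1/√n rule of thumb can be sketched in a few lines of Python.  Note that this is the author’s simplified rule as stated above; the exact binomial standard error for a proportion p is √(p(1−p)/n), which is somewhat smaller near p = 50%:

```python
import math

def standard_error_pct(n):
    """Rule-of-thumb standard error, in percent, for a sample of size n."""
    return 100.0 / math.sqrt(n)

# Quadrupling the effort only halves the error: 100 -> 10%, 1000 -> ~3.2%, 10000 -> 1%
for n in (100, 1000, 10000):
    print(f"n = {n:>5}: SE ~ {standard_error_pct(n):.1f}%")
```

The inverse-square-root scaling is the whole story: each extra decimal place of precision costs a hundredfold more sampling.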

Let’s take a pertinent example.  Imagine that the election is poised at 52% to Candidate A and 48% to Candidate B.  We don’t know this, of course, as we can’t determine the outcome of the election until everyone has voted – we have to take a sample beforehand.  Let’s take a random and representative sample of 1000 people.  Sounds a lot – at least it’ll be a lot of work for the interviewing company.  The standard error is 1/√1000, or ~ 3.2%, so the measured value will have an error associated with it of around 3%.  For a normal or Gaussian distribution, 68% (roughly two-thirds) of results will lie within +/- 1 SE of the measured mean, and thus the ‘truth’ will be in this range 68% of the time.  For our election example, this gives margins of error for the 1000-person representative sample of 48 +/- 3% and 52 +/- 3% – enough to swing the election the other way.  And that’s only one standard error – for 95% confidence we’d need +/- 6%… and many, many elections are decided by smaller margins than this.  To reduce the margin of error to 1% we would need to sample 10000 random and representative people.  So we need to decide, before we even begin, what precision we require in the interviewing or particle size distribution analysis process.
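Here is a minimal sketch of that 52/48 example, again using the 1/√n rule of thumb (the function names are my own, purely for illustration, not from any polling library):

```python
import math

def required_sample_size(target_se_pct):
    # Invert SE = 100/sqrt(n) to get the sample size needed for a target SE.
    return math.ceil((100.0 / target_se_pct) ** 2)

n = 1000
se = 100.0 / math.sqrt(n)            # ~3.2% for a 1000-person poll
a_low, a_high = 52 - se, 52 + se     # Candidate A's 1-SE band
b_low, b_high = 48 - se, 48 + se     # Candidate B's 1-SE band

print(f"A: {a_low:.1f}-{a_high:.1f}%   B: {b_low:.1f}-{b_high:.1f}%")
print("Bands overlap:", a_low < b_high)          # the poll can't call it
print("n needed for 1% SE:", required_sample_size(1.0))
```

The two 1-SE bands overlap, so at this sample size the poll genuinely cannot separate the candidates – exactly the situation described above.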

It’s even worse if the sample is unrepresentative, if the polls themselves have influenced the actual outcome, or if the polled sample has lied or distorted their responses…  Again a pertinent example: if we decided to take a poll by phoning 1000 people taken from the telephone directory, then imagine the possible problems:

• We’d probably underestimate the Millennial generation, which probably doesn’t have a landline but only a mobile device
• We’d not represent those people who couldn’t afford a telephone at all but could still vote
• The time of interview would be crucial – if we phoned during the day then we wouldn’t get the people at work.  If we phoned in the evenings we wouldn’t get the people going out to a restaurant.  If we phoned at the weekend, well, you complete the sentence….

There’s also another big issue in elections – a pool, often huge, of the undecided who can swing an election one way or another.  There are people who may say one thing at interview, then change their minds, or vote for a more conventional or unconventional party.  So what would the polls look like for the Monster Raving Loony Party in relation to the actual election?  In New Hampshire, in the USA, there’s an almost traditional ritual, of which the people are apparently proud, of not making one’s mind up until one is in the actual polling booth and faced with the ballot paper…  A statistical polling nightmare…

If the pool of interviewed people isn’t representative at all, then the situation is worse still.  The analogy is segregation in the particle size situation.  We’re not taking a sample in this case – we’re taking a ‘specimen’, as the mining sampling experts would say.  In this case, all bets are off.  We’re in the garbage in = garbage out situation and there’s no remedy.  We simply have nonsense on which to base decisions.

OK, so where’s the particle size analysis link?  If we have a particle size distribution, we may think that we can specify it exactly with a sample.  The key word here is ‘distribution’, implying a true value with an associated margin of error.  So, imagine that we wanted to specify the x99 point of the distribution to a precision of 1% – what implications would this have?  Well, first we’d need 10000 representative particles above the x99 point of the distribution.  Clearly this top 1% would only make up 1/100 of the total mass of the system.  If this top end was at, say, 500 microns, we’d need a large amount of particle mass to give us those 10000 particles.  Try the calculation for silica (~ 2650 kg/m3).  You should get around 173 g if you do the math correctly…
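The 173 g figure can be checked with a short calculation, assuming spherical particles of 500 microns at the top end (a simplification, of course – real particles aren’t perfect spheres):

```python
import math

diameter_m = 500e-6    # top-end particle size, 500 microns, in metres
density = 2650.0       # silica density, kg/m^3
n_particles = 10000    # particles needed above x99 for ~1% precision

# Mass of one spherical particle: (pi/6) * d^3 * rho
particle_mass_kg = (math.pi / 6) * diameter_m**3 * density
top_mass_kg = n_particles * particle_mass_kg   # mass of the top 1% alone
total_mass_g = top_mass_kg * 100 * 1000        # top 1% is 1/100 of the total; kg -> g

print(f"Minimum representative sample mass ~ {total_mass_g:.0f} g")  # ~173 g
```

Note that the minimum mass scales with the cube of the top-end particle size – double the diameter and you need eight times the sample.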

The battle with our customers is that they want to use a tiny amount of their ‘expensive’ material.  This is akin to sampling the contents of my desk by removing a single pen.  If a smaller mass than this minimum is taken, the margin of error increases.  The only reasonable and logical action is to widen the specification to accommodate the increased margin of error.  This the customer doesn’t want to do – you can’t get more from less, and we’re back to garbage in = garbage out.

The outlined standard error calculation provides the minimum error based solely on the heterogeneity of the material.  All other errors add to this minimum error.  Pierre Gy (who sadly died last year, on November 5th) listed six errors other than this fundamental sampling error (FSE).  These include the nugget effect – now how do you find this in a gold mine?  Not by sampling, that’s for sure (it’s equivalent to the undeterminable x100) – we can’t inspect the particle in…  We have delimitation errors (you can find out about these at my short course: see below)…  We have the analytical error – normally at least two orders of magnitude lower than the FSE…  Plus others that I’ll not bother you with, except to remind you of segregation, where a representative sample simply isn’t possible unless the whole sample mass is taken…  This is the classic Brazil nut effect – and it’s not related at all to the fact that the Brazil nut is probably the most radioactive food we’d consume – way above the banana.  Indeed, eating 7 Brazil nuts would give you a similar radioactive intake to being at Fukushima in Japan when the reactor fractured after the tsunami.  And why is the name of the country Brasil on its stamps when we all spell it with a ‘z’?

Watch an interesting video on segregation by Mark Murphy: “All samples are wrong – some more than others”


References

1. W. A. Shewhart & W. Edwards Deming, Statistical Method from the Viewpoint of Quality Control, The Graduate School, The Department of Agriculture, Washington (1939)
2. D. Huff, How to Lie with Statistics, W. W. Norton & Company Inc., New York (1954)
3. O. L. Davies (Editor), Statistical Methods in Research and Production, Oliver and Boyd, London and Edinburgh (1961)
4. P. Gy, Sampling of Particulate Materials: Theory and Practice, 2nd Edition, Elsevier, Amsterdam (1982)