(UPDATE: 27/11/11 - raw churnalism data available HERE - JSON format, zipped, 448kb)
After the churn analysis of the Environment Agency press releases (please read that article for more details and important caveats if you haven't read one of my churn posts before), I followed up with DEFRA. I will be making the raw data publicly available tomorrow for both the Environment Agency and DEFRA churn analyses.
This time I was able to construct the spiders and process the data faster. I also avoided (most of) the unicode problems that plagued me with the EA data, so this analysis can be considered slightly more accurate and slightly less forgiving of the media organisations (though it still strictly follows the rules I set down previously, such as no editing of the press releases to remove extraneous information). Between the inevitable difficult characters that still slipped through and the lack of any editing of the press releases, the data will naturally favour the media organisations. As I said in previous posts, once I make the raw data publicly available, churn analysis by other people will very likely improve upon my methods and yield results even less flattering to the media.
In any case - onto the results. The summary is presented below (click for full size image):
"Quality" press churn results
Summary of results:
- A total of 386 press releases were analysed, from 13th May 2010 to 24th November 2011. These generated 1959 detectable cases of churn. Again, there is probably a lot of interesting data within the "detectable" category that deserves analysis at a later date. For now it is discarded.
- Out of those, 173 were classified as "significant" and 18 as "major".
- The Guardian was the leader in both categories by a long way - accounting for 19.65% of significant churn and 27.78% of major churn.
- The BBC followed close behind in terms of significant churn with 16.18%, though for major churn was beaten into third place by both the Independent and the Daily Mail with a joint 16.67%.
- The Independent came third in the significant churn classification.
- A common factor in the most highly churned articles both in this analysis and the previous two appears to be lack of a named author in most cases (though see one of the exceptions detailed below). This suggests the media organisations are aware that what they are doing is not kosher.
- Continuing a theme from the last two churn analyses, the tabloids consistently embarrass the so-called "quality press". This time I pulled out the statistics for the UK's major tabloids for comparison (click for full size image):
Tabloid press churn results
When I first started these analyses I fully expected to see a much higher showing of churn by the tabloids, so the contrast is interesting. Of the churn analyses done so far, it is consistently the Mirror that has the highest percentage of churn among the tabloids.
As usual I select a few of the more egregious cases of churn for your entertainment (and importantly - provide a manual submission to the churnalism database so they can be seen visually):
'Bonfire of the Quangos'
Remember that list of Quangos that were to go? Completely cut and pasted from a press release. This one is particularly fascinating because in the two worst cases what was cut and pasted was the list provided in the press release. The press release actually included several paragraphs laying out context that was not copied across - had the original been just the list, both articles would have scored close to 100% pastes.....
The pastes are so large in any case that the churnalism engine falls over when the 'view' button is clicked to see the visualised version. Be warned if you click it, your browser may hang.
'New service for householders to stop unwanted advertising mail'
Absolute carnage on the churning front here with the majority of the main media outlets represented. The Guardian appeared to like this story so much they cut and pasted it twice - and this time each article has a named author. Where the hell was the editor?
(This blogpost should perhaps also be titled - 'What I did/didn't/did say at the Guardian today....')
There's nothing quite like rank hypocrisy to boil my piss. However, to ensure it is fully evaporated in anger, combine rank hypocrisy with crass stupidity, naked opportunism, complete resistance to facts or reason and censorship.
Hickman decided it was time to form a posse comitatus to try tracking down the source of the climategate emails, laughably using the README textfile included in the latest tranche of releases as the primary source of evidence.
This was one of those pieces - especially as it was in the 'comment is free if you agree' section - that really reveals the Guardian's true colours. Numerous commentators, including me (prior to the first round of censorship - sorry - 'comment adjustment'), attempted to point out the Guardian's and Hickman's rank hypocrisy on this issue. The most striking and obvious example was the paper's massive support for Wikileaks, but there were many others, including the anonymous Enron whistleblower, as another commenter pointed out. As was repeated again and again, it appeared that in the Guardian's eyes all leakers were equal but some were more equal than others.
This was of course brushed off by Hickman and his part-time principle party of followers in the comments section.
Next I pointed out (prior to 'comment adjustment') that claiming it was the work of a hacker was still just an assumption. Hickman replied to me directly on that and similarly brushed it off, claiming it was irrelevant. The poor dear didn't seem to realise that if he assumed it was the work of a hacker when it was in fact a leaker, his "investigation" would lead him down all sorts of blind alleys, not least because the MO and levels of access would be completely different (not to mention the trail of evidence left behind).
There were a plethora of delightfully dense comments in support of Hickman et al, with stunning leaps of reasoning. These people were also apparently immune to criticism because they "knew" what they were claiming was true, especially regarding the "hacker" claim. Many made completely ill-informed pronouncements showing that i) they knew nothing about IT security and ii) they couldn't even be bothered to use Google to check details. After all, the difference between an internal security breach and a carefully coordinated external breach is vast - Pointman gave an excellent overview after the first climategate, here. Moreover, they absolutely did not care about their ignorance. What a familiar pattern, eh? No wonder they were immediately supportive of the "scientists" at the heart of the climategate storm - they're just like them!
There were some absolute crackers amongst the received wisdom of this bunch of easily led zealots and I highly recommend you read through the comments - well those that are left - as it is a laugh a minute.
Komment Macht Frei
Speaking of the comments - when the piece first appeared this morning, it was absolute devastation from the moderator. ALL of my comments bar the first one were censored, as were numerous other comments by others. I had no clue why they'd been removed beyond the fact that we all seemed to disagree intensely with Hickman.
Now I should point out something important here for Guardian watchers - they have two types of post moderation. There is the one we're all familiar with, where the boilerplate 'this comment was moderated because it breached our community (puke) standards' notice is left behind. But there's also a much more insidious type, which I only noticed because I've been paying a lot of attention to their censorship patterns over the last couple of years - it's what I call "nuking". In this case they remove all evidence that the comment was ever there. It's particularly chilling for freedom of speech: not only can one no longer assess the general level of censorship by looking at the comments, but if it's *your* comment that disappears this way, it's only your word that it was ever there in the first place....
Now bizarrely, after the comments spilled over onto two pages, I happened to click back to the first page to see what else had been censored and was surprised to see that most of my previously "moderated" comments had reappeared (except for the "nuked" ones). I don't know if this is a bug in their software or a disagreement between moderators, but it adds even more to the general sense of confusion and latent fear of arbitrary censorship that completely fucks any meaningful contribution over there.
Another important point to be aware of: one way to guarantee being censored on the Guardian is to make a reference to your own, or someone else's, comment having been censored - you will immediately be censored, and they often use the "nuke" option too.
The Guardian is - as a media institution - utterly reprehensible. Most other media outlets are of course too, across the political spectrum. But none outside the BBC attempt to present themselves so often as the default "good guys", nor do their followers similarly regard it as received wisdom...
The climategate 'gait' or the 'out of context paradox'
There's a regular pattern that occurs in any discussion of climategate (1 or 2). The pattern itself is incoherent, but entirely consistent with the unthinking nature of many of those who promulgate it:
i) They assert that the emails were "taken out of context"
ii) Responder says that they are not.
iii) A request is then made for evidence.
iv) Responder invites them to read the emails - there are numerous complete email chains, supporting claims against the "scientists" that ONLY MAKE SENSE IN CONTEXT. But the trick is you have to actually read the emails....
A modern day climate "scientist"
Now given how unambiguous some of the exchanges are (in particular those that involve purposefully frustrating FOI inquiries and deleting emails....) one is then prompted to ask exactly what standard of evidence is required. The evidence before us - if, for example, we stick with complete email chains rather than individual comments - is an order of magnitude above the typical standard accepted in the vast majority of journalism we ever read or see. To be consistent, anyone who completely rejects these email chains as sufficient evidence would have to throw out almost every received opinion on any quoted person in the press they have ever encountered. Will the zealots do that? No, of course they won't. But then consistency sits in the same disused box in their basement as a regard for truth....
One final delicious irony is that 'The Team' will surely be scratching their heads now, trying to remember what on earth was said to whom. But because they very likely deleted these emails after they had been copied from the mailserver, they have only one place to go to check.....
In several comments I read at Biased BBC on my previous churnalism work, the activist group 'Frack Off' was mentioned, and questions were asked about whether they generated any detectable churn in the media, as Biased BBC readers often found online links to the group on the BBC and Guardian sites.
I decided to have a look.
The group is very new and they have only released four official press releases. This meant I could work fast with this data as I wouldn't need to write specialised spiders for gathering, analysing or submitting the data as it could be done manually with such a small data set.
Again I find myself being surprised by the results:
A reminder on the scoring criteria:
- >=100 is classified as "detectable" churn. I usually discard these results, and they will always be discarded when comparing one data set to another (e.g. this analysis to the previous Environment Agency analysis). However, they still yield a rich seam of data, and as this data set is so small compared to the previous one I decided to take some time to look into some of them.
- >=500 is classified as "significant" churn.
- >=1000 is classified as "major" churn - in these cases the articles simply could not have been written without cutting and pasting the bulk of their material from the press release.
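Expressed as code, these thresholds amount to a tiny classifier. The category names are the ones I use in these posts; the scores themselves come from the Churnalism API:

```python
def classify_churn(score):
    """Classify a Churnalism engine score using my thresholds."""
    if score >= 1000:
        return "major"        # bulk of the article cut and pasted
    if score >= 500:
        return "significant"  # more than one paragraph copied
    if score >= 100:
        return "detectable"   # roughly a paragraph copied
    return None               # below the baseline - ignored entirely
```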
- Out of four press releases, three have generated a total of 13 articles with detectable churn according to my criteria (score of >= 100 from the Churnalism engine). 14 were originally found but I removed one Guardian article as it was detected twice (probably because of the similar screeds issued by 'Frack Off' in their press releases).
- Out of those, 5 were significant churn (score of >= 500) - and as they were only a handful I have entered them manually into the Churnalism database so you can see the side by side comparisons yourself. 2 were from the Guardian, 1 from the Mirror, 1 from the Times and 1 from the Daily Mail.
- Several of the remaining 8 articles with detectable churn come very close to the >=500 criterion (details below), and indeed in a couple of cases the Churnalism engine considers them significant enough to display when manually input (an API input, by contrast, gives an exact score; the churnalism engine seems to have a less forgiving standard than mine and will display many articles with a score between 400 and 500 as significant).
Now, even being generous to the media organisations and 'Frack Off', this means 50% of their press releases are being significantly churned - primarily by the Guardian and the Mirror (one of the Mirror's contributions, with a score of 870, came close to being a "major" (>=1000) piece of churn). I suspect that had I been less forgiving with my methodology - removing extraneous elements from the initial press release (links, contact information etc) and not having to strip some characters to ease processing (e.g. single quotes) - this may well have scored as 'major churn'. You can eyeball it yourself in any case here.
I say the 50% assessment is 'being generous' to 'Frack Off' also because it includes a press release they issued yesterday, which will not have had time to percolate through the media yet. If that press release results in any significant churn (which I will check again in a week or two), the percentage will climb to 75%.
Further details on the data for each press release:
26th October 2011 press release - submitting it to the API yielded 3 detectable cases of churn and one borderline (score 96), so I looked at them in more detail. One of these articles is from the Guardian and already highlighted by the Churnalism engine as containing significant churn from one of the other FO press releases. Two are Telegraph articles, one by Louise Gray - both name-check the activist group. The other is a Times article and unfortunately I can't verify anything beyond the paywall.
2nd November 2011 press release - 7 detectable cases of churn. 4 were "significant" churn, and a further 2 came very close to being considered "significant" by my scoring criteria - one from the Independent (score: 436) and one from the Guardian (score: 448). Notice how the Independent article also cites Chris Huhne, the WWF, Friends of the Earth and Greenpeace. Cuadrilla Resources - the company responsible for the Fracking at the heart of the article - gets one response of similar length to the others, along with a neutral response from the shadow energy secretary, Tom Greatrex. Cuadrilla don't get a single response in the Guardian article, which mentions the independent report commissioned by Cuadrilla but fails to note that the report concluded another earthquake incident was unlikely.
The Churnalism engine considers significant one of the Guardian articles that doesn't hit my threshold (500) - it scores 448 - so the manual search on Churnalism.com yields an entry that the engine considers significant. At the same time, the Daily Mail entry with a score of 521 isn't listed on the manual search, so this balances out.
See visual breakdowns of significant / detectable churn for this press release here.
3rd November 2011 press release - four detectable cases of churn, one of which (from the Guardian) qualifies as "significant".
See visual breakdowns of significant / detectable churn for this press release here.
23rd November 2011 press release - no detectable churn via either manual entry to Churnalism.com or to the more sensitive API system. This could well change however as it was only yesterday this PR was issued.
I find these results very concerning. A small single-issue activist group that has only existed for a matter of months should not be generating such a significant amount of churn with just four press releases. The Guardian and the Mirror in particular appear to be giving the group a free ride (though also see the commentary above on one of the higher scoring Independent articles). The Environment Agency at least has some kind of mandate and does carry out a wide variety of tasks - to that extent it's no surprise some of its press releases are churned (though this is no excuse for the journalists concerned, or indeed for any EA employees who are aware of and have no issue with any kind of symbiotic relationship here).
This is extremely dangerous, especially for such an important issue, and shows how groups like 'Frack Off' can be so polarising. On a personal level I do believe there are legitimate environmental concerns regarding Shale Gas. Unlike 'Frack Off', however, I consider these issues surmountable. They make clear from their website that they want this to be a polarising issue regardless of the facts. They primarily cite Gasland - a "documentary" that is itself making rational discussion of Fracking all but impossible, being primarily a series of anecdotes that are seriously problematic as evidence. 'Frack Off' et al should be citing careful and replicable research such as this for discussion. Why don't they? And why don't the media, who are apparently falling over themselves to repeat 'Frack Off' claims, do the very research most of us expect of them? My guess is that the research isn't nearly alarmist enough for them and has plenty of caveats that prevent them from easily reporting it as such.
(UPDATE 27/11/11 : raw churnalism data now available HERE (JSON format, zipped, 2.1MB))
My work on bot-writing and churnalism has finally started to bear fruit. And it's not often I write a blog here that is an "exclusive" - this is one of the few!
A large part of my research over the last year has focused on the nature of digital and virtual technologies and in particular on the nature of censorship and propaganda online. A perennial concern, as outlined in my Privacy 2.0 post is the arrival of 'mass dataveillance', where the monitoring of an individual is far less important - in both import and consequences - than the gathering of masses of data about lots of individuals. This is because those masses of data are able to reveal patterns and links that the individual members of the data set are very likely unaware of.
However, this is something that works for us as much as against us, if we're willing to use the freely available data in the public domain. It's not just the large corporations one is suspicious of who can gather, analyse and deploy the masses of data.
Nick Davies, in his excellent book 'Flat Earth News', popularised the term "Churnalism". His book, and the research it is based on, was an absolutely damning indictment of the UK media, with similar implications for the entirety of Western media. Davies said journalists: “....are no longer gathering news but are reduced instead to passive processors of whatever material comes their way, churning out stories, whether real event or PR artifice, important or trivial, true or false”.
His book was substantially based on research he commissioned from Cardiff University. Amongst other things, that research "suggests that 60% of press articles and 34% of broadcast stories come wholly or mainly from one of these ‘pre-packaged’ sources" - a phenomenon that many of my regular readers will no doubt be familiar with, and one that has now sadly become commonplace in other areas too: we have been moving ineluctably from mere 'journalism by press release' to 'science by press release' as well.
Upon reading this research over a year ago, one of my first thoughts having had years of programming experience was, 'that could be automated!'. And not only could it be automated, but with enough data, direct patterns of bias and influence could also be detected. Lo and behold, around that time, the wonderful churnalism engine appeared.
This site plugs into the Journalisted database, where every single press item is archived online. The churnalism engine uses an algorithm that enables anyone to manually copy and paste text from any source (though usually a press release) and compare it quickly to the entire Journalisted database. It is then able to report back any cases that are likely to have been cut and pasted, along with an estimate of the proportions copied. This means one can effectively trace the provenance of many news stories back to press releases and assess how much has been copied directly into the article - articles that we are so often led to believe uphold high journalistic standards.
I decided that it should be possible to combine the churnalism utility with masses of data in a way that would not have been possible even a decade ago. The ability to write programs ('spiders' or 'bots' in this context) that could gather all of the press releases from a single organisation and then submit them to the churnalism facility, combined with cheap and readily available computing power means this is an entirely achievable goal.
So I programmed spiders that were able to gather every single one of the Environment Agency's press releases, filter out the formatting tags (from the web page) to get the original text, submit them to the churnalism engine and then store and analyse the results.
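The actual spider scripts aren't published here, but the tag-filtering step can be sketched with nothing beyond the Python standard library. Treat this as a minimal reconstruction of that one stage, not my actual code (fetching and API submission are omitted):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the text content of a scraped press-release page,
    discarding the formatting tags around it."""
    def __init__(self):
        super().__init__()
        self._parts = []

    def handle_data(self, data):
        # keep only non-empty runs of text between tags
        if data.strip():
            self._parts.append(data.strip())

    def text(self):
        return " ".join(self._parts)

def strip_tags(html):
    """Return the plain text of an HTML fragment."""
    extractor = TextExtractor()
    extractor.feed(html)
    return extractor.text()
```

The resulting plain text is what would be submitted to the churnalism engine for scoring.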
The data shows several clear patterns:
- The so-called "quality press" are the worst offenders for churning Environment Agency press releases - whilst there were many entries from the tabloids and local papers, their cutting and pasting was less egregious than that of the "quality press".
- The BBC is by far and away the worst offender for simply repeating whatever the Environment Agency claimed in its press releases. Out of the 393 articles where "significant" churn had taken place, the BBC was responsible for 44%. Likewise, for the 49 articles that had "major" churn (meaning in most cases they were almost complete cut and pastes of the press releases), the BBC was responsible for 30.6%.
- I was able to grab a total of 1962 press releases from the Environment Agency, giving almost complete coverage of their press release output for the last two years. This total was originally slightly higher; however, since the first pass of my bots (to gather the links before downloading with the second pass), the EA has inexplicably removed 10 press releases. I also found a handful of duplicates. The lowest level of granularity I was willing to accept as "detectable" churn (see below) yielded 5089 articles.
- I had set the bar very high for counting articles as "churnalism". The churnalism API uses a "scoring" system that identifies how many 15-character chunks had been copied and/or pasted. It represents a compromise between the lengths of the source and end articles - so, for example, if a large proportion of the original press release has been copied, but this represents a lower proportion of the end article, this will still yield a high score. And vice versa (Louise Gray's articles in the Telegraph often followed this latter pattern for example - with her article being much shorter than the original press release it is copied from).
- Setting the filter on the data I gathered to a score of >=100 yielded 5089 articles. I regard this as an acceptable baseline for "detectable" churn, though for current purposes I am discarding this larger data set. One of my hypotheses is that any distinct patterns should be visible throughout and indeed this appears to be the case - those I counted as "detectable" were made up of 1983 articles from the BBC - a very similar proportion to those in the higher scoring category (38.9%).
- My methodology (to be detailed in a much longer post later on censoring.me) has massively favoured the media organisations, primarily in three respects:
i) for submitting the press releases to the churnalism engine, I did not edit the press releases to remove anything extraneous - so they often included repeated titles, contact details etc that would lower the percentage of hits in the churnalism engine.
ii) the original 'screen scrapes' of the website press releases included many characters that were difficult to work with programmatically. Single quotes cause problems for database processing, so these were often removed. The data also regularly contained unicode - and not all of this would be correctly re-encoded when sent to the churnalism web server. This further reduced the percentage of hits.
iii) Finally - as I already mentioned, I set the bar very high for what I counted as cases of definite "churnalism". I decided upon three categories of scoring: 1) "detectable" churn - with scores of 100 or more (in practice this would mean maybe a paragraph had been copied); 2) "significant" churn - with scores of 500 or more (in practice this means more than one paragraph had been copied); and finally 3) "major" churn - a score of 1000 or more, meaning the majority, or a substantial minority, of either or both of press release and final article had been copied.
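The character-handling compromise described in point ii) can be sketched roughly as follows. This is my after-the-fact reconstruction of the steps described above, information losses included, not the original processing code:

```python
import unicodedata

def sanitise(text):
    """Crude pre-processing of scraped press-release text: drop single
    quotes (they upset the database layer) and force awkward unicode
    down to plain ASCII, discarding anything that won't re-encode."""
    text = text.replace("'", "")
    # decompose accented characters so the base letter survives the
    # ASCII conversion; anything else non-ASCII is simply dropped
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")
```

Every character dropped this way is a character that can no longer match the press release, which is one reason the scores understate the true level of churn.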
With these three aspects in mind, I consider it almost certain that when other people make use of my data (which I will make publicly available, along with a longer article detailing technical issues) they will find a noticeably higher proportion of churnalism. There would also be a case for lowering the significance score of 500. I also think there is probably a lot of useful information to be found in the 5089 articles with "detectable" churn, however I won't be drawing any conclusions from this larger data set at present.
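For intuition on the chunk-based scoring described above, here is a toy approximation: count the distinct 15-character substrings of a press release that reappear verbatim in an article. The real Churnalism engine's algorithm (including its compromise between source and article lengths) is more sophisticated than this; the sketch only illustrates the principle:

```python
def chunk_matches(press_release, article, n=15):
    """Count distinct n-character substrings of the press release
    that appear verbatim in the article. A toy stand-in for the
    Churnalism engine's chunk matching, not its actual algorithm."""
    chunks = {press_release[i:i + n]
              for i in range(len(press_release) - n + 1)}
    return sum(1 for chunk in chunks if chunk in article)
```

A wholesale cut and paste lights up nearly every chunk, whereas a genuinely rewritten story shares almost none.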
With those caveats and details in mind, here is the summary of the final results, calculated after any duplicates and outliers had been removed (click for larger image):
It should be noted that this data set alone is comparable in size to that used by the research that formed the basis of Nick Davies' 'Flat Earth News'. It took a team of several researchers many months to pore over the same number of press releases and articles and come to the conclusions he presents in the book. It took me a week to write the spiders - and I'll be doing this again and again for different organisations, likely revealing more hidden patterns in the data. And yes, I'll be reporting them here first and sharing the data publicly!
For those interested, I aim to give access to the data on censoring.me by next week, which I am revamping this weekend as I finally have some substantial research to use the site to showcase!
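When the raw data does go up, a loader along these lines should suffice for a zipped JSON archive. I'm assuming a single JSON file per zip; inspect the structure yourself rather than relying on any particular field names:

```python
import io
import json
import zipfile

def load_churn_data(path_or_file):
    """Load a zipped JSON churn-data archive. Assumes the archive
    holds a single JSON file; returns the parsed object."""
    with zipfile.ZipFile(path_or_file) as archive:
        name = archive.namelist()[0]          # first (only) member
        with archive.open(name) as fh:
            return json.load(io.TextIOWrapper(fh, encoding="utf-8"))
```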
For the most appalling cases of churn I submitted them manually to the Churnalism site (a different process to submitting them in an automated fashion to the API). Doing this enables me to save individual examples in the Churnalism.com site database and it yields a very useful side by side comparison so it is possible for one to see visually (and compare manually) the worst cases.
For your amusement, amongst my favourites were:
- "Glaciergate should not distract us from climate battle"
Here the chairman of the Environment Agency is asked to write a piece for the BBC. It repeats exactly the majority of a press release issued by the Environment Agency two months beforehand claiming to quote the chairman by declaring what he is going to say at a forthcoming event. Yes I had trouble getting my head around that too.
- "Flood defence project gets small seal of approval"
An absolutely stunning 94% paste job by the BBC. Even the churnalism engine struggles to represent it visually - make sure you click through to the BBC article itself. You can see from eyeballing it and comparing it to the submitted press release that it is a straight cut and paste.
For an explanation of the 'Ammo' prefix, please see here.
My research work is now reaching the stage where I'm able to usefully share parts of it (updates to censoring.me with actual working parts is coming soon too...). This is the first of those contributions and is a necessary prelude to some upcoming posts that will require reading this first.
I was prompted to write this blog entry today by the very sad news of the death of one of the co-founders of Diaspora, 22-year old Ilya Zhitomirskiy. I'll be writing on Diaspora soon and making important contrasts with Facebook, Google+ and Twitter. I think Diaspora is incredibly useful and will argue why in a later post - including recommending why you should switch from those three technologies to Diaspora.
First though, it's important to understand Jonathan Zittrain's concept of 'Privacy 2.0'. And why is this included in the 'Ammo' series? Simply because our conceptions of privacy are outdated - scholars and technologists such as Zittrain and his peers (Morozov, Benkler etc) are beginning to provide immensely useful analyses and concepts for understanding the brave new digital world we find ourselves in. The powers that be, including the mass media, have been caught flat-footed by the new social media technologies. And it's more than vested interests trying (and failing) to protect their turf: they simply don't have the conceptual know-how to even begin to grasp this new domain and the promise (and pitfalls) it offers. Staying ahead of their (albeit very slow) curve will arm you (and help to protect you against the mendacious individuals who have already grasped it...).
Below is a summary of Zittrain's 'Privacy 2.0' concept. It is written in a very academic style and I make no apologies for that - it is one component amongst many in my current academic toolbox and so is expressed that way. I hope you find it useful:
Instead of simply ‘privacy’, this concept appends the ‘2.0’ to reflect a new era of digital and internet privacy problems that are still being addressed using concepts, practices and legal precedents belonging to an ‘earlier’ conception of privacy – ‘privacy 1.0’.
The “generative” technologies that form the basis of digital, networking and internet devices and behaviours put old problems of privacy into new and often unexpected configurations. In both the digital and internet landscapes, broadly understood, there are enormous amounts of uncoordinated actions by actors that can be combined in new and often unpredictable ways thanks to these same technologies.
Limitations on actors to preserve freedom have until very recently focused on constraining institutional actors (governments, large corporations etc). The new privacy problems, however, go beyond this traditional paradigm, which centres on the collation of data in centralised databases logging potentially sensitive information on individuals. Whilst this is still an issue within the purview of ‘Privacy 2.0’, it is only a small part of a much wider breed of new problems. More modern legislation in the UK, such as the Data Protection Act, recognises this to a limited degree, yet still largely targets the same institutional actors as previous ‘Privacy 1.0’ legislation. The precedent-setting Privacy Act of 1974 in the U.S. remained limited to public institutions. The 1998 UK Data Protection Act recognises part of the new privacy problem by casting the net wider to “data controllers” generally and investing them with legal responsibilities. The fears motivating both pieces of legislation, however, originate from the idea of “mass dataveillance” – i.e. de facto surveillance via centralised data collection. Solutions such as restraint, disclosure and encryption are appropriate for these ‘Privacy 1.0’ concerns but extremely limited against the new generative technologies.
The generative mosaic
'Generative mosaic' is a term coined by Jonathan Zittrain, in ‘The Future of the Internet’ that I think elegantly expresses the data mining privacy issues now coming to the fore.
Certain datasets collected on individuals, even if only focused on one aspect of their behaviour, allow patterns to be mined from the data that the individuals themselves may have no awareness of (and thus may never have cause to complain if such information – potentially advantageous to whoever holds it – is used against them).
Such data can be immensely powerful even when only gathered for a very narrow range of behaviour. For example, Amazon were able to roll out differential pricing of their products according to past customer behaviour. They were caught out when some individuals deleted their browser cookies and discovered that the advertised price changed (Amazon no longer having a reference point for the individual’s previous behaviour).
This example brings home the fact that data mining can very quickly produce tangible results with a comprehensive enough data set. Wonga.com currently claim to have a “market leading” bespoke algorithm for predicting whether a customer is likely to default on their loan. The standard assumed default rate in the retail loan sector is 10%. Wonga claim that their default rate remains in single figures, which is particularly astonishing considering their risky lending sector (short term loans of hundreds of pounds at an astronomical interest rate). Their two primary sources of information are a set of approximately thirty questions asked on the initial application followed by “thousands” of online data points. The fact that Wonga have monetised such information so efficiently demonstrates that there are hidden behavioural cues in people’s data available online that most are not aware of.
Compounding the ‘generative mosaic’ problem is the fact that government and corporate databases are an increasingly less threatening privacy concern than those created by our ‘generative’ digital technologies and those who use them (virtually everyone in the first world and ever increasing numbers in the second and third worlds).
Ever cheaper processors, networks and sensor technology have created billions of constant data gatherers worldwide. Further, the flow of data from (and to, and between) said data gatherers is not generally impeded by gatekeepers – unlike in the relatively restrained government and corporate sectors.
A key feature of “Web 2.0” is peer production, and the rise of the ‘prosumer’ – people who are constantly consuming and producing new content, often ‘remixing’ the content they have consumed. The process is chaotic, ever changing and usually without gatekeepers. As a result, the surveillers (and ‘sousveillers’, to borrow Steve Mann’s lexicon) are us. Government and corporate actors and their intermediaries represent an ever shrinking portion of the ‘Privacy 2.0’ landscape.
From Intellectual Property and Copyright to ‘Privacy 2.0’
“The intellectual property conflicts raised by the generative Internet, where people can still copy large amounts of copyrighted music without fear of repercussion, are rehearsals for the problems of Privacy 2.0” – Jonathan Zittrain, ‘The Future of the Internet’, p.210.
Whilst intellectual property and copyright issues generated by modern digital technologies and environments (‘Web 2.0’) are at the forefront – with various legal scholars, such as Yochai Benkler and Lawrence Lessig, at the coal face philosophising about the quintessential issues and concepts – the impact of ‘Privacy 2.0’ is yet to be truly felt or understood. We are, as Zittrain puts it, effectively ‘all on notice’, as anyone can become a YouTube superstar in minutes.
Daniel Solove, in ‘The Future of Reputation’, considers the impact this can have, highlighting examples such as the ‘bus uncle’ of Hong Kong and ‘dog poo girl’ of South Korea. Incidents which, whilst public, would have remained relatively ephemeral in the past can now be recorded and spread virally across the globe, often with undesirable results due to a mass public reaction that would never before have been possible. ‘Bus uncle’ was later assaulted at his workplace in a targeted attack and ‘dog poo girl’ left her job, both as a result of the firestorms the videos generated. Lives are easily ruined in such cases because the total outrage generated is completely disproportionate to the social norm (or possibly law) violated at the time. And as Zittrain puts it, “…ridicule or mere celebrity can be as chilling as outright disapprobation”.
A strong counter-argument to the idea of the participatory panopticon – an idea which itself rests on assumptions of inevitability and technological determinism (cf. the quote from McNealy; similar sentiments have been expressed by other prominent figures in the IT and data mining industries, such as ex-Google CEO Eric Schmidt) – is the charge that such extensive scrutiny renders us all into automatons: hence the “chilling” effect referred to by Zittrain above. Indeed, Zittrain compares the situation to that of politicians whenever they are in the public eye. They have been, he implies, the first to understand and adapt to “Privacy 2.0” in their public behaviour (even if they are disastrous at articulating and applying these concepts), and as such we should regard their behaviour in modern times as the canary in the coal mine: “Ubiquitous sensors threaten to push everyone toward treating each public encounter as if it were a press conference, creating fewer spaces in which citizens can express their private selves.” (Zittrain, ‘The Future of the Internet’, p.212).
Public statements by politicians cleave to an uncontroversial and bland centre ground. This isn’t just a result of realpolitik; it is also a direct result of ubiquitous media coverage (increasingly now an activity of citizens – especially those from opposing camps) and the ease with which a sentence can be taken out of context. This has a chilling effect that stifles behavioural outliers, of which politicians are only the most prominent example. Speech and behaviour in the past were subject to the disapprobation of only a relatively small group. Now the exposure group can potentially be society-wide in seconds.
New conceptions of ‘Public’ and ‘Private’
It isn’t just legal conceptions of the ‘Privacy 1.0’ type that lag behind; so does a whole raft of concepts. One of the most important pairings here is our notions of ‘public’ and ‘private’, which still inform debates on privacy using ‘Privacy 1.0’ understandings. The most ubiquitous uses of these terms are not subtle enough to capture what we as individuals may want, in privacy terms, in the ‘Privacy 2.0’ world. Whilst behaviour ‘in public’ is technically open to the public eye, it was typically observed by only a small number of eyewitnesses, often strangers, and remained ephemeral. What were previously private public spaces become public public spaces.
The principle of freedom of speech framed in the U.S. constitution assumed that private conversations in public spaces would not become public broadcasts; there were no means at the time to make them so. Now that those means do exist, we are effectively naked in the eyes of the law, with no defence and a potentially chilling effect on our behaviour – one that may leave new or radical behaviours and speech only to those who are so completely disconnected from existing norms that they do not fear them.
Combine this ‘conceptual slippage’ (generally ignored, since commercial interests are rarely threatened by ‘Privacy 2.0’ in the way they are by ‘Web 2.0’ and its attendant dilution of Intellectual Property and Copyright) with the mass generation of fresh data and ever more convenient and accurate means of tagging and identifying (including the kind of facial recognition being pioneered by Facebook, Google and others), and you have the perfect ‘Privacy 2.0’ storm.
Generative ‘mash-ups’ of data and the vast array of tools and APIs available for processing them mean it is increasingly trivial to find answers to questions such as ‘where was person x on date y’. And the answers will increasingly be coming from the general public, not from government or corporate surveillance.
For a practical application of this, see for example Tom Owad’s application for identifying subversives on Amazon via their wishlists, using a bot that queries the site. A mashup combining this with photo recognition technologies would be relatively straightforward, and could itself be combined and recombined endlessly with other mashups to provide a much sharper slice of someone’s life than is now visible through even the most invasive state database systems in the world. The ‘peer production’ technologies have, effectively, been disruptive for Intellectual Property and Copyright, whilst restrictive of privacy. Even were it possible to circumscribe a database describing the total picture of an individual that is accessible digitally, this would be of little use, for what the database itself contains changes rapidly, often from one moment to the next. The emergent and inscrutable nature of the outputs of these technologies means that the current ‘Privacy 1.0’ concepts and legal structures we operate within – from the government to the corporate to the academic worlds – are urgently in need of revision.
In Flanders fields the poppies blow
Between the crosses, row on row,
That mark our place; and in the sky
The larks, still bravely singing, fly
Scarce heard amid the guns below.

We are the dead. Short days ago
We lived, felt dawn, saw sunset glow,
Loved, and were loved, and now we lie
In Flanders fields.

Take up our quarrel with the foe:
To you from failing hands we throw
The torch; be yours to hold it high.
If ye break faith with us who die
We shall not sleep, though poppies grow
In Flanders fields.

– Lt.-Col. John McCrae
Be prepared for truly colossal weapons-grade fuckwittery.
"Here’s an upload of a video from a recent ITV documentary into Colonel Gaddhafi’s support of the IRA. It contains shocking footage of a helicopter being shot down using weapons allegedly supplied by that baddie.
Except. Umm. It’s actually ArmA 2." [A COMPUTER GAME]
Do go and read the whole thing. Your blood pressure will shoot up right about the point where the likely provenance of the video is pointed out.
ITV - if brains were dynamite, you wouldn't have enough to blow your fucking hat off. Unfortunately your Stupid is only the apex of the current media-class general cockpuppetfuckwittery.
I used to find the film 'Idiocracy' funny. I don't anymore.
....the catastrophic prophecies of climate disaster require an essential ingredient that all too often is completely missing in debates on the subject: positive water vapour feedback. Without increasing amounts of water vapour in the atmosphere (being a much more powerful GHG than CO2), allegedly as a result of human contributed CO2 (in reality, something that could be caused by any heating mechanism), then any catastrophic predictions are bunk. Period.
(Click for larger version)
Over 50 years of data with no sign of the fabled feedback.
No not money itself, the company that loans it out.
Wonga.com have come in for a significant amount of flak for their business practices. I have to admit, I gawped as soon as I saw the interest rate of a whopping 4,214% typical APR. I initially shrugged it off as a business that would only appeal to the most desperate of people.
However, the creators of the business gave an interview to the UK edition of Wired this month and they have a lot of interesting points to make – none of which are likely to escape the techno-fanboy realm of Wired, and they certainly won't make it into the pages of the likes of the Guardian.
Wonga argue that they are disrupting what has fundamentally been a monolithic monopoly market by providing small, short term loans. Something that the current banking sector in the UK simply does not service. In the interview the creators claim that customers want three things from such a service: "Firstly, simplicity - the ability to borrow what they want, when they want. Secondly, speed - the transaction needs to happen fast. Thirdly, they want to know exactly and clearly what the loan is going to cost them whether they pay on time, pay early or even miss their deadline."
Banks, they argue, have no incentives to meet these needs. Internet technologies, however, make all three perfectly serviceable.
Errol Damelin (one of Wonga's creators) even goes on to make the corporatist point oft made by libertarians and rarely heard from anyone else: "Banks love regulation. They have been better than anyone else at co-opting it to suit themselves. They love embedding themselves into the messy greyness of how policy is created. And that makes it harder for them to innovate. They're all wink-wink. They don't compete. When did you last hear of a bank competing to bring down the cost of CHAPS payments? When was the last time you saw an interesting new Barclays product? I don't think ever."
And looking at the figures Wonga no longer seems so bad or as exploitative as their detractors have made out. Despite the horrifying interest rate, in practice it would mean a loan of £400 repayable over 30 days would come to £525.48. I've occasionally been up the proverbial creek because an employer has screwed up my pay and my rent and bills are due the next day. I can certainly see occasions where Wonga's services could have been useful in the past and in today's increasingly hand-to-mouth economy I can see why other people might have cause to use them too.
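As a sanity check on those figures, here is a minimal sketch of what the quoted repayment implies as a 30-day cost. The £400 and £525.48 numbers are the ones from above; the per-period rate is derived purely from those two numbers, not from Wonga's actual fee schedule:

```python
# Sketch: what the quoted Wonga figures imply for the cost of a loan.
# The repayment figure (£525.48 on £400 over 30 days) comes from the
# article above; everything else is simple arithmetic on those numbers.

def thirty_day_cost(principal, repayment):
    """Return (interest paid, periodic rate) for a 30-day loan."""
    interest = repayment - principal
    rate = interest / principal
    return interest, rate

interest, rate = thirty_day_cost(400.00, 525.48)
print(f"Interest paid: £{interest:.2f}")  # Interest paid: £125.48
print(f"30-day rate: {rate:.1%}")         # 30-day rate: 31.4%
```

In other words, the headline four-figure APR cashes out as roughly a 31% charge over the 30-day term, which is the number a borrower actually experiences.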
Their position is that they are not exploiting people, but providing a service to a generation that has long expected flexibility and speed in this sector – expectations that simply are not being met.
Damelin puts its utility this way in the interview: "We do small, short-term things, and the cost of delivering that service is high.....Catching a cab might be expensive, but it's convenient and nobody complains that being charged £15 for getting across London is immoral."
And the competition?
It gets more interesting: in order to provide an incredibly fast turnaround for their loans, Wonga have opted to have same-day loan applications processed solely by their algorithm. You read that right. No human input.
Their algorithm uses a bespoke method of credit scoring that learns from previous transactions. I, and many others it seems, supposed that the business must have a high default rate. Initially they used the same credit scoring methods as the rest of the finance industry, leading to a 50% default rate among the customers who were granted loans. However, as they moved away from this system and onto their own algorithm, the default rate dropped to single figures (they refuse to release the exact figure, though they claim it is industry leading). The typical default rate assumed by banks is 10%.
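To make the idea concrete, here is a purely illustrative sketch of the kind of scoring such an algorithm might perform: a weighted combination of many data points passed through a logistic function, with lending gated on the 10% industry default rate mentioned above. The feature names, weights and threshold are all invented for illustration – Wonga publish none of the details:

```python
import math

# Hypothetical sketch of learned credit scoring: many data points,
# each with a learned weight, squashed into a default probability.
# Features and weights here are invented for illustration only.

def default_probability(features, weights, bias=0.0):
    """Logistic score: P(default) from weighted data points."""
    z = bias + sum(weights.get(k, 0.0) * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"years_at_address": 4, "application_time_secs": 180,
             "social_profiles_found": 3}
weights = {"years_at_address": -0.3, "application_time_secs": -0.002,
           "social_profiles_found": -0.4}

p = default_probability(applicant, weights, bias=0.5)
if p < 0.10:  # lend only if predicted risk beats the 10% industry norm
    print(f"approve (p={p:.2f})")
else:
    print(f"decline (p={p:.2f})")
```

The point of a system like this is that the weights are re-learned from every past transaction, so the "thousands of data points" become more discriminating the more loans are made.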
This means that despite being in the most high-risk financial sector, Wonga are outperforming the high-street banks. They speculate that they could possibly do the same for consumer banking. It's an interesting thought that such an upstart could seriously challenge the retail banking monopolies, isn't it?
Your virtual body of evidence
In amongst the innovations and the counterpoints to the bad press Wonga have got, there is an important warning – one that our errant media has failed to note. Wonga's nearest competitor, the U.S.-based prosper.com, processes all of its applications manually and yet ends up with a 40% default rate. Wonga's algorithm works on up to approximately 30 pieces of information initially supplied by the applicant and then finds a further "6,000 to 8,000 online data points that relate to the applicant".
Wonga won't say what those data points are, so one is immediately drawn to wondering what they must be. Social media will definitely be a large factor (there exists a similar facility named Duedil that harvests social media such as LinkedIn to assess the soundness of companies, for example), which means an enormous number of people are revealing patterns in their online data that they are probably not aware of themselves. This is a recurring theme in my own research - finding and helping people to expose such patterns, with the normative judgement that people should have access to their own data - and patterns - to enable them to be self-aware enough to avoid being manipulated by others who hold that knowledge. Wonga have mastered these techniques to the point of being able to create a multi-million pound turnover on the back of them, which means the data and the patterns hidden within are of significant value.....
Point someone to this blog entry the next time they question you as to why you don't want the state gathering more information on you than you want it to....
Today a debate took place in parliament regarding an absolutely crucial issue for the future of the UK and the financial well-being of its citizens: Mark Reckless MP proposed an anti-bailout motion that would have been the first strong anti-Euro (and with it - anti EU) sentiment expressed by the house in decades.
Many a strong word was spoken in the chamber earlier against the EU by many of the handful (circa 50) present - and by MPs from both sides of the aisle. For once I actually felt proud of the MPs participating, and I recommend you watch the debate on iplayer, or read the proceedings in Hansard tomorrow. It was such a sharp contrast to the puppet show that passes for the Prime Minister's Question time and I had to fancy what Parliament would be like if these individuals occupied the front benches instead of the back.
It didn't matter what the attendees actually said. The whips ensured another 250 MPs turned up to guarantee it was the amended version that went through and not the original motion, turning what was a strongly worded motion into a wet-through general statement of discontent that the government can safely ignore.
I sometimes find it difficult containing my anger over such shenanigans in parliament - regarding how easily results can be 'fixed' in this fashion. I don't remember sleeping in and missing the vote for which whips I wanted. Do you? And I find it intolerable that the people who both proposed the wrecking amendment and voted are not obliged to participate in the debate.
The People's Pledge valiantly attempted to encourage people to write to their MPs beforehand to ask them to vote against the amendment – not that doing so would have had any effect where the majority of MPs are concerned. Aside from the whipped MPs, I note with utter disgust that ALL of the MPs from Sheffield (which has been my home for far too long now...) failed even to turn up to either the debate or the vote. So much for the salt-of-the-earth Labour MPs standing up for the working man and woman, eh? My current MP, Mr. Paul "I ran Sheffield University Student's Union" Blomfield, ignored my letter on the topic - as indeed he has done with the last few letters sent to him.
(UPDATE: One of my friends has just informed me, however, that "Blomfield did manage to find the time to attend a drinks reception for Sheffield Union officers")
The final vote was 267 to 46. Had it been just those present for the debate voting I am confident the original motion, not the amended version, would have been carried. Such is our utterly dysfunctional democracy and its attendant handmaiden Fourth Estate that not only did this happen, but it won't even be reported.
It doesn't matter what we think. It doesn't matter what we want. It doesn't even matter what most of our MPs do either.
The One Ring has called and the Nazgul are answering their master's call.
In a previous post I highlighted the fantastic new facility, Churnalism.com. I used the example of a recent idiotic climate change story to highlight how useful the site was.
However, aside from individual queries, it is possible to use the site to process more substantial amounts of data that can potentially reveal trends or biases, and that is what I have done today.
My interest piqued by the utterly moronic 'Llama' story (eviscerated here) I decided to carry out a thorough survey of exactly how many press releases are cut and pasted (and how much is copied) by our media class in the UK.
My findings shocked even cynical old me.
I took all of the press releases from May and June 2010 available on the Environment Agency site. (Oddly, as I was working through the data, the public link to the May 2010 data simply disappeared... However, this isn't a problem, as I stored the URLs of every single press release so others can verify this for themselves.)
The research covered a total of 166 press releases. I will work out the statistical significance later; however, I believe the sample is representative as it covers two out of twelve months of data. The data needs to be arranged in a tidier format; when this is done I will add an update to this post with the data in an Excel spreadsheet so others can replicate my results.
My methodology was simple: every press release was copied into churnalism.com in its entirety. Some had a lot of material that could have been regarded as extraneous, but I didn't want to start editorialising, and putting all of the data in is more favourable to the media organisations. I have carried out some analysis of the data below; I'm sure there are more interesting links and patterns to be found - you are welcome to see what you can do with the data yourself when I upload it. Additionally, the figure I used as a percentage was the percentage of the article that is made up of pasted material. Every entry can be verified on churnalism.com.
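For anyone wanting to replicate or extend this, the pasted-content percentage can be approximated very simply. Below is a rough sketch using shared word 5-grams; this is my own approximation for illustration – churnalism.com's actual matching algorithm is more sophisticated than this:

```python
import re

# Rough sketch of the measure reported above: the percentage of an
# article's text that also appears in a press release, approximated
# here by the fraction of the article's word 5-grams shared with it.

def _shingles(text, n=5):
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def pasted_percentage(article, press_release, n=5):
    """Percentage of the article's word n-grams found in the release."""
    art = _shingles(article, n)
    if not art:
        return 0.0
    return 100.0 * len(art & _shingles(press_release, n)) / len(art)

release = ("The Environment Agency today announced new flood defence "
           "works along the river")
article = ("The Environment Agency today announced new flood defence "
           "works along the river, a spokesman confirmed yesterday")
print(f"{pasted_percentage(article, release):.0f}% pasted")  # 67% pasted
```

A crude measure like this is enough to flag candidates for manual checking, which is essentially the workflow I followed here.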
Summary by Media organisation:
The BBC: The BBC cut and pasted significant content (at least 20%) from 78 articles. Out of those, 39 had 50% or more of their content cut and pasted from the press release. In a number of cases the BBC also generated 2-4 articles from the same press release. In percentage terms, and regarding the sample as representative, I conclude that the BBC cuts and pastes content from 45% of Environment Agency press releases. Of all press releases, 23% receive partial (20-49%) copies and 23% receive majority (50%+) copies of the press release content.
Can you see the political bias yet?
I probably don't have to spell out the implications of this - any level of churnalism is obviously undesirable, yet such an enormous amount carried out by the BBC, uncritically, on behalf of the Environment Agency, is absolutely not acceptable and completely shocking. BBC 'impartiality is in our genes', or BBC 'the world's greatest media organisation', my arse. Not to mention the fact that it has billions at its disposal, is publicly funded (by compulsory tax) and is the nation's media monopoly organisation.
None of the other media organisations came even close:
The Independent: 14 Articles cut and pasted. Of those 12 were less than 50% pasted content and 2 were more.
The Telegraph: 10 articles cut and pasted. Of those 7 were less than 50% pasted content and 3 were more.
The Daily Mirror: 7 articles cut and pasted, all of which were less than 50% of content pasted.
The Guardian: 7 articles cut and pasted. All were less than 50% of content pasted.
The Express: 6 articles cut and pasted, all of which were less than 50% of content pasted.
The Daily Mail: 6 articles cut and pasted, all of which were less than 50% of content pasted.
Financial Times: 1 single article, less than 50% pasted.
The Times: 1 single article, less than 50% pasted.
Caveats: The data should not be trusted for statistical analysis on the basis of dates, as I believe the Environment Agency may have put the wrong dates on some of their press releases - sometimes up to a week ahead of articles that have copied them.
There may be some crossover either in articles cut and pasting material from more than one article or from parts of the Environment Agency's press releases being repeated from one press release to another. Further work is required to weed these out (which could either strengthen or weaken the churnalism respectively).
Finally there are likely some minor errors in calculations; I've gone through the data twice so far but will be applying a finer level of attention later.
**Data to be uploaded later in the week**
I'd like to invite others to replicate this work - potentially crowdsourcing such efforts could allow us to check the entire content for the Environment Agency's press releases and churnalist enablers such as the BBC so there would be no ambiguity whatsoever, especially if the data is kept public.
There are countless other comparisons that could be made using churnalism.com between particular organisations, individuals and media organisations to reveal what was previously hidden.
If you're coming and we don't already know each other in person, please do come and say hi. I'll be the scruffy dodgy looking one who you might mistake for a long haired stunt double for Frodo Baggins. My secret night time vigilante identity is 'the incredible hobbit'.
No I haven't been drinking. Yes I will be tonight. :)
As you'd expect, the story is torn to shreds both in the comments on the original story at the Telegraph and at WUWT. Worse than the fact that this cubed idiocy is even tolerated is the fact that it has been reported uncritically by our ever useless media. Just how uncritical is the reporting, though? Now we have a superb tool to discover the answer to this question: churnalism.com. This story was an ideal test case.
The premise behind the site is that the majority of content we receive from the media is simply recycled propaganda - a.k.a. 'press releases' - from institutions, organisations and individuals to whom the media are happy to give an uncritical free ride. The site has access to the 'journalisted' database. What the site creators ask users to do is to find press releases and paste the content in, which it then compares with its archive of news articles and provides links to any articles that appear to have been cut and pasted, along with a percentage of content so copied.
Finding the press release is usually fairly straightforward - just look for the organisation or body they are referencing and look for press releases on their site. In the case of the llama-fish article, I soon found the Environment Agency press release here. Copying and pasting this content into churnalism.com, here are the results (click on the image to see a full size version):
As you can see, the press release has been cut and pasted for the majority of content in three of our largest media organisations. The publicly funded BBC has 75% of its content cut and pasted. Great value for licence fee money, eh?
The churnalism database now has a permanent record of this here.
Now that a number of people have started adding content to the database, the site is able to offer some interesting categorical breakdowns here.
The more people who make use of this facility, the more useful it will be. In addition, it offers API functionality for programmers, opening interesting possibilities for automating the checking of the proliferation of bullshit from certain sources (Number 10, I'm looking at you). It also opens up fascinating avenues for genuine comparative research - for example, how much and how often the BBC uncritically cuts and pastes content from certain sources relative to others.
It also hands those of us in the libertarian leaning or climate sceptic communities an immensely powerful weapon - especially if we all make regular use of it and highlight the media's pathetically compliant churnalism. But we need to use it and promote it! Please do share this around and make use of the facility for your own blogs. Rest assured there will be plenty more 'churnalism' stories coming up here thanks to the research facility churnalism.com now offers.
This is a short blog post to explain the new series of blog entries I am beginning today, 'Ammo for the Freedom Fighter'.
Too many of us libertarian leaning bloggers are unduly focused on bad news and analysis of said material. Whilst this is an essential activity for us to undertake (don't expect any fewer angry commentaries from me), what is often lacking is a focus, if only occasional, on very real actions, techniques or technologies that we can use in our ongoing war against statism and the coming endarkenment.
Forthcoming over the next few weeks will be a continuing series from me sharing tools, practices and philosophies that have been of great benefit to myself and others. Some of these will simply help with gathering, analysing, dissecting and disseminating the constant diet of crap fed to us by the political and media classes. Others will be much more positive, focusing on very real ways to help yourself and others become much stronger, more powerful individuals. Literally. I'm increasingly of the belief that it is truly the latter that our hateful would-be overlords and corporatists fear most, so I hope you give at least one of them a sincere try.
The first entry will be up later this afternoon. Remember to keep on keeping on, all of you.
It's not about the size of the army, but the fury of its onslaught.
This is a story for everyone (probably meaning all of you) who has been on the receiving end of appalling customer service, possibly involving an endless problem with no solution in sight.
Here is a solid gold rant from a friend of mine on such an issue with Virgin Media regarding internet bandwidth which, owing to its ferocity, looks like it might actually achieve something - as long as he "promises to not use offensive language in future". This might be the future model for us consumers in the modern age of de rigueur bureaucratic resistance.
Also of great interest is that his very angry email was sparked because Virgin Media apparently pay close attention to comments made about them on Twitter. The Virgin staff watching Twitter contacted him after noticing angry tweets condemning their service.
After being invited to phone a particular member of staff directly to deal with his issue, he was further reprimanded for "offensive language" for referring to the staff he had previously dealt with as "lackeys", leading him to write in his conclusion that:
"To be honest, this wound up being one of the most surreal conversations I think I’ve ever had with a living breathing human being. At one point, I briefly considered whether or not this could be some sort of prank, the sort of pseudo-comical prank call the Fonejacker or a radio station might make, but even then it was simply too imaginative, too ludicrous for that. Instead, the insanity can only be the reality - that this genuinely was the way Virgin Media dealt with angry customer complaints, like an offended schoolteacher telling you off for using the word tits in front of the girls."
It has now reached the point for me that whenever I hear British officials, Eurocrats or climate catastrophists speak, alarming us with some new supposed crisis or other, all I can hear in my head is this song. I find it a great palliative and recommend its regular use:
Lots of people have been openly wondering about this, including me.
If this video is to be believed then there is a list of reasons as long as your arm that would make any rational person in Assange's situation stay well out of Sweden's reach.
Just as one example, you could stop at the fact that Karl Rove is advising the Swedish government - yes, that Karl Rove, whose attitude to an underling who wouldn't follow the party line was to say "We will fuck him. Do you hear me? We will fuck him. We will ruin him. Like no one has ever fucked him!". The same Karl Rove who appeared to be instrumental in punishing former ambassador Joseph Wilson, in nakedly political revenge for his casting doubt on the case for Iraq's possession of WMDs, by outing his wife, Valerie Plame - who was at the centre of a CIA operation monitoring Iran's WMD programmes; part of the fallout being that the CIA estimated the exposure had set its monitoring of Iran's putative ambitions back by 10 years.
Autonomous Mind has just published further information on the Met Office story, showing that in internal discussions, the Met Office's 'forecasts that were not forecasts' were nevertheless referred to numerous times as forecasts. As AM puts it, "The Met Office logic is that although it quacks like a duck and walks like a duck it is actually a horse."
There's lots of spin going on here, but there is no avoiding the fact that the 'forecast that was not a forecast' was used by the National Grid to determine their winter preparedness report, as I pointed out on Friday. They apparently didn't even have access to the now notorious "secret" report (the "secret" adjective originating from the BBC's Roger Harrabin).
The Mystic Met Office has now responded to the Register's inquiries on this issue, stating that it "has never suggested that we warned cabinet office of an 'exceptionally cold early winter'." - thus throwing Harrabin immediately under the bus.
Harrabin's response? Now that is another interesting story in and of itself. He claims: "This doesn't match a more conclusive forecast I gleaned from a Met Office contact in December".
So what is this "more conclusive forecast" and who gave it to you Roger?
I note that he is also not paying attention when he says: 'I note a blog report (which I cannot yet verify) saying that a civil servant commented: "The Met Office seasonal outlook for the period November to January is showing no clear signals for the winter."'
Roger, that "comment" is clearly visible in the FOIA material I received, amongst the email traffic. It's more than just a "comment", it is the government outlining its official position, with which the Met Office appears to agree in its return email.
On that particular point Roger, it leaves a striking odd one out. You.
There's also another deception in play here. Harrabin goes on:
'A spokesman for the Cabinet Office told me they had passed the forecast to key stakeholders ("Government departments, local council as appropriate - we don't have a list").' [My emphasis]
Are we really to believe that the "key stakeholders" didn't include the National Grid? (Not that it would have helped much anyway).
You may be wondering why I highlighted that word. Here's another clue:
Both Autonomous Mind and the Register articles highlight a claim found in the Mail: "Last night the Met Office confirmed it had passed on the advice, but a spokesman denied that withholding it from the public was motivated by embarrassment.
‘We did brief the Cabinet Office in October on what we believed would be an exceptionally cold and long winter,’ she said."
Whilst Roger Harrabin is definitely not off the hook - and neither, if the Mail's source is to be believed, is the Mystic Met Office - there is another underlying cultural problem:
The use of 'spokespeople'. These are used so often that it now drifts into the background consciousness for most of us, most of the time. Yet it is an extremely insidious propagandistic technique. The use of an anonymous spokesperson gets the organisation, or person, they are representing off the hook. There is no chain of accountability and any statements they make can easily be dismissed in the future.
The fact that both the Met Office and the Cabinet Office are deploying spokespeople on this issue concerns me. It means we're not going to get to the actual truth without some serious hard work, and implies that someone definitely does have something to hide, even if it is just their own bumbling incompetence.
Whenever this occurs journalists should immediately insist on knowing the identity of the person providing the information. Of course they don't, because they want easy copy, and access to the source of that copy. So it's up to the rest of us to apply the pressure and ask, every single time, who? The reasons why this is such a pressing issue are described eloquently by Heather Brooke in her excellent new book, 'The Silent State':
"Official spokespeople are powerful because they speak for the powerful; anonymity means they can exercise that power without being held individually accountable for it....When a 'spokesman' makes an accusation or spreads a smear, what recourse is there for the target? Anonymising spokespeople suits some journalists because if every source is simply a 'spokesman' or 'official', then it's easy to make up any old quote to suit your story.....As long as secrecy and anonymity reign, public sector bureaucracies will be the hiding places for the incompetent, lazy and corrupt"
And if that doesn't hammer the point home enough, try this summation from Brooke:
"...we cannot be an informed electorate without access to information and a right to hold officials to account. And if we're not an informed electorate then we cannot call ourselves a democracy." [Emphasis mine]
This use of selective anonymity also has the potential to inflict very real damage beyond just the nature of our democracy:
"...special advisers and spin doctors operate a principle of never admitting a fault. Can't we be treated by our leaders as grown-ups? Spin is costly for taxpayers because small problems aren't acknowledged, they are spun into successes or stifled until they reach a magnitude of catastrophic proportion."
And we've just seen this principle in action with regard to the Met Office, the government and the BBC:
Due to yet another year of officially sanctioned lack of preparedness, chaos, suffering and likely unnecessary deaths have occurred. The game of pass-the-blame-parcel will be no comfort to those identified by Anna Raccoon as on the receiving end - "Those pensioners found frozen solid in their front garden, the scenes of half starved refugees huddled against the cold at Heathrow airport, the two kilometre long lines of frozen travellers queuing round the block at St Pancreas Station, the double dip recession caused by the ‘extreme weather’"
All of which no one is willing to take responsibility for as the parties at risk of having to take it are hiding behind 'spokespeople' already. This looks like a tough battle ahead to pin down who is responsible, who is telling us the truth and who is lying.
And it's a battle - yet another - only being fought in earnest by the 'fifth estate' of the blogosphere, with little assistance from the 'fourth estate' as they languish in the doldrums of increasing irrelevancy and an ever more distant relationship to the truth.