Divided Mind: The Psychology Debate over Video Game Violence and Its Effects

I wrote this for Edwin Evans-Thirlwell over at OXM in 2013 – but the article got lost in the great GamesRadar link destruction of 2014. Anyway, videogame violence is blamed for all of society’s ills – Ed had me look at the science.

The scene, to the average mind, is incongruous. The Supreme Court of the United States striking down a law backed by onetime action movie thug and then governor of California, Arnold Schwarzenegger. Arnold’s people claimed that video games increase aggression, cause neurological damage and more, and were seeking to pass a bill restricting access to them for minors. They pointed to a huge corpus of peer-reviewed scientific evidence backing up this claim. Given the mainstream rhetoric about violent videogames, the pile of evidence and the conservatism of the court, the outcome seemed a foregone conclusion.

Yet the Supreme Court’s decision, by an unusual 7–2 majority, was to reject the bill on First Amendment grounds – that is, had it passed, the bill would have restricted individuals’ freedom of expression. Unusually, the justices went on to comment that the scientific claims themselves were without merit, so the bill would likely have been rejected regardless. The court decision is so clear that it’s worth quoting at length:

“The State’s (California’s) evidence is not compelling. California relies primarily on the research of Dr. Anderson and a few other research psychologists whose studies purport to show a connection between exposure to violent video games and harmful effects on children. These studies have been rejected by every court to consider them, and with good reason: They do not prove that violent video games cause minors to act aggressively (which would at least be a beginning). Instead, “[n]early all of the research is based on correlation, not evidence of causation, and most of the studies suffer from significant, admitted flaws in methodology.” They show at best some correlation between exposure to violent entertainment and minuscule real-world effects, such as children’s feeling more aggressive or making louder noises in the few minutes after playing a violent game than after playing a nonviolent game.”

The judges went on in a similar vein for many damning pages. As the judgement makes clear, this wasn’t the first time that a court has called into question the quality of psychology research into videogames. Amazingly, no court in the US has upheld a single law that seeks to regulate games on the basis of violence, with many of them being rejected on similar grounds.

So what’s going on with the psychology of video games? Why are these scientists claiming more than they should? And will we ever get to the truth about whether games do make adults and children act more aggressively than they otherwise would?

The divide

The key problem here is that there’s no consensus. There’s even disagreement over the levels of disagreement. Psychologists like Christopher Ferguson of the University of Texas (see interview boxout) argue that neither the pro- nor the anti-videogames camp has been doing good science, and that the field as a whole needs to be more careful about the strength of the claims it makes. From the viewpoint of an outsider, by criticising the methods employed on both sides, he appears to be taking the middle ground. Yet the anti-videogames lobby lumps Ferguson in with the pro- lobby as an opponent.

The other problem is that, before any proper research was done, there was a consensus. With the advent of Doom and Mortal Kombat in the 1990s, games had taken on a more visually violent aspect. Combine that with a spike in youth violence (Federal Interagency Forum on Child and Family Statistics, 2010) and school shootings (notably at Columbine High School in 1999), and it’s understandable parents looked to the new medium as a cause. The timeline below shows you that the consensus political narrative until 2005 was that violent media damages children and that studies, when they’re done, will show this.

That they haven’t shown it has meant that scientists have consistently overstated their results to fit what they were expected to find. The claims come mainly from a small set of researchers, mostly American, including Dr Anderson. Now, it’s not unusual in a scientific field for a single researcher to specialise to the extent of dominating a subject, and Anderson has taken that role in research into the effects of videogames. Starting from a background in aggression research, he’s been involved in the publication of over 190 papers (by our count), a third of those since 2005. He’s been so prolific that when the California court rejected Schwarzenegger’s bill (before the appeal to the Supreme Court), the decision noted that “approximately half the evidence was from a single scholar” – Dr Anderson, we presume?

His papers claim, as did many papers produced in that era, that video games cause aggression. Anderson himself has claimed repeatedly in the media, research and courts that they do. Many of his cadre have gone beyond that. Huesmann (2007) claimed that the effects were similar in magnitude to those of smoking and lung cancer. Strasberger (2007) claimed videogames could explain up to 30% of societal violence. In 2009, the president of the American Academy of Pediatrics weighed in, claiming erroneously that of 3,500 studies done, only 18 had not found effects of media violence. Even the American Psychological Association released a 2005 report explicitly linking video games with increased aggression. (Dr Anderson sat on both the AAP and APA committees that agreed these statements, having reviewed the literature produced mainly by… Dr Anderson.)

Yet this entire corpus, according to repeated court decisions, is erroneous and proved nothing. Listen to the ruling of the US District Court that struck down two Illinois anti-games laws in 2005. “Neither Dr Anderson’s testimony nor his research establish a causal link between violent video game exposure and aggressive thinking and behavior… researchers in this field have not eliminated the most obvious alternative explanation: aggressive individuals may themselves be attracted to violent videogames.” The court was also unconvinced that any demonstrable impact lasted beyond the short term. (Meanwhile the “expert” testimony by a Dr Kronenberger that gaming reduced frontal lobe brain activity was almost laughed out of court.)

To prove that games cause violence, after all, you must first show that exposure to violent media correlates with aggression, then that it correlates with violence, and then that there’s a causative link. It’s a hard chain to establish. So far the science, as evaluated by the courts and independent observers, has shown at most a mild short-term correlation between gaming and aggression, with Dr Ferguson arguing that even this small correlation is down to statistical anomalies. The effect size is comparable, by Dr Anderson’s own begrudging admission to another court, to that of other violent media – such as when children watch a Bugs Bunny cartoon or play Sonic the Hedgehog. The sole qualitative difference between games and music, books and films is, after all, their fundamental interactivity; but is that any worse than my imagining a grisly death in a Stephen King book or seeing someone get dissected in a torture-porn film? The statistics don’t seem to show that.

So how did the causationist scientists end up claiming so much more than the evidence suggests? Of course, research into this area is a self-selecting process, so it’s understandable that it may have attracted researchers with preconceived ideas, who then verified each other’s positions. This applies to the other side too; researchers have been attracted to the field because of the overstated positions of the establishment, seeking to right the wrongs they see. And we’re not saying that this was deliberate; it’s more like a science teacher believing in the science he or she is teaching, even when half the class fail an experiment. 

Psychologists on both sides of the debate have been stating their positions with too much certainty, making “spurious comparisons with medical research” as Ferguson puts it, ignoring contradictory evidence, and claiming more than their data show.

The major problem is that few in the field are performing scientific enquiry as it’s meant to be done. Many psychologists seem to start with the assumption that certain things are true, then try to prove them – a circular approach. They ignore evidence to the contrary, such as the US Department of Health’s 2001 report or the 2002 US Secret Service and Department of Education report, which found no evidence of a link.

The studies also often aren’t very good research. Possibly because of failures in the peer-review process, there’s been little push towards standardisation of measurements or methodology. For example, how do you measure aggression? Some studies measure it by acts of violence against inanimate objects (such as the Bobo doll experiments of the 1960s), others by word-association games. Obviously, these measures are just not comparable.

Similarly, as the Australian Government’s 2010 report points out, “researchers have not devoted sufficient attention to the severity of violent content (e.g. cartoonish vs realistic violence) and whether it has differing effects. Some studies appear to show games featuring cartoonish violence are just as harmful as games featuring realistic violence. It is not known whether socially acceptable violence (such as in the course of playing sports) has a different effect to antisocial violence.”

Further issues

The funding for some of the papers arguing that games cause violence is dubious. At one time Anderson’s funding came from a now-defunct organisation called the National Institute on Media and the Family. The erratic NIMF produced misleading parental advice condemned by both the US National Parent Teacher Association and the national video games rating agency in the USA.

Another organisation that funds this research is the Center for Successful Parenting. Despite the name, it exclusively funds research into the “effect of media violence on the brain development of children” (taken from its charity mission statement) – mainly the work of a group around Dr Vincent P. Matthews, which produces studies with results that match the Center’s agenda. We couldn’t find out much about this organisation: its two websites are full of dead links, missing pages and out-of-context rhetoric, and our emails bounced. The registered address of its website shows up on Google Street View as a boarded-up Indiana building, and is also the registered address of many, many other companies. The only data we can find is on a charities site, which says the Center was set up by one Steve Stoughton, who runs (or ran) a company specialising in setting up campaigning websites (including Mediaviolence.org), and that the charity received an income of just under $400,000 in 2011, with $450,000 in assets.

Of course, the majority of research is not funded by organisations like these. Globally, research is funded mainly by governments and universities – which is why organisations like these concern us so. Imagine the outcry if Activision funded research into videogame violence.

Our conclusion

We are not scientists. We’re not equipped to assess all the data that’s out there, nor to mediate the conflicting claims of different authors. From the papers we’ve read, particularly the impartial Australian government meta-review of 2010, there is evidence that video games have short-term effects on aggression, though that evidence has been very badly presented, with many methodological flaws. While that conclusion seems like common sense, it’s worth noting that psychology is not about verifying or debunking common sense; common sense is of little use in science, because many of the things it tells us are simplifications or plain wrong – such as the sun going around a flat, immobile earth.

And even if violent video games increase aggression, it has to be shown that this happens in more than the short term, in a way that’s of a significant degree, and that isn’t caused by other underlying factors. Indeed, Ferguson’s most recent research paper found that “depression, antisocial personality traits, exposure to family violence and peer influences were the best predictors of aggression-related outcomes”; violent video games didn’t get a look-in.

This field of science is extremely frustrating to research. If violent video games do increase aggression, we would like to know this, so we could act on it. After all, even a small effect can cause problems if you play several hundred hours of a game. Yet the work of scientists like Craig Anderson actually hinders this cause; if he were a more thorough scientist, if his review bodies actually reviewed his papers rather than acting as a claim to greater authority and if he dealt better with the criticisms of his peers, then we would have better data in this area.


A Timeline of Modern Moral Panic

  • 400 BC The father of philosophy himself, Plato, criticises theatre and poetry for corrupting the population. (Ironic, as Plato’s idolised mentor Socrates was put to death for corrupting the youth of Athens.)
  • 1954 The US Senate Subcommittee on Juvenile Delinquency takes an interest in comic books after moral crusaders blame them for poor grades, delinquency and drug use. Many comic publishers adopt a stringent moral code that drives other companies underground or out of business. Parents groups hold comic book burnings; some cities ban comic books.
  • 1964 A Canadian philosopher called Marshall McLuhan revives the term ‘moral panic’, meaning “intense feeling expressed in a population about an issue that appears to threaten the social order”, inadvertently predicting the next fifty years of hyperbole about negative media influence.
  • 1969 A US National Commission on Causes and Prevention of Violence condemns, in mild terms, the rise of violence on television.
  • 1972 A report commissioned by the US Surgeon General’s office explicitly links TV / movie violence and aggressive behaviour.
  • 1980 and 1982. Two suicides of depressed children who happened to play Dungeons & Dragons convince several Christian fundamentalist groups and elements of the media that roleplaying games are satanic and damage children. Later findings show that D&D players are significantly less likely to kill themselves than the national average.
  • 1992 The book “Big World, Small Screen” wins plaudits from the APA for making a clear link between television and addiction / stereotyping.
  • 1999 The Columbine High School massacre is carried out by two children who play violent video games, notably Doom.
  • 1999 David Grossman testifies before Senate Commerce Committee that US Marine Corps uses the game Doom to train marines. His book “On Killing” says games are murder simulators giving children the skill and will to kill.
  • 2001 Indiana Amusement Machine Ordinance seeks to restrict children’s access to games on grounds of obscenity. It’s struck down by the United States Court of Appeals.
  • 2002 A Canadian scholar named Jonathan Freedman points out that youth violence has been declining as media violence has been increasing. He is mostly ignored by the media.
  • 2003 St Louis County Ordinance 20,193, restricting children’s access to games on grounds of psychological damage, is struck down by the United States Court of Appeals.
  • 2005 Two 2002 Illinois laws limiting children’s access to violent and sexual video games are struck down.
  • 2005 The Family Entertainment Protection Act (FEPA), sponsored by Hillary Clinton and Joseph Lieberman to “limit the exposure of children to violent video games”, fails to become law.
  • 2005 An APA resolution explicitly links video games and aggressive behaviour, thoughts, and decreased sociability.
  • 2011 The Supreme Court strikes down the California law, citing the Brothers Grimm, Looney Tunes and The Divine Comedy as other equally violent media, all of which have freedom of speech protection under the First Amendment.
  • 2013 The APA website still carries many articles arguing that all forms of media cause violence, many with Craig Anderson’s name on. The moral panic goes on.


On the couch with Chris Ferguson, Psychology & Communication Professor at The University of Texas.

What’s your background? What’s your interest in this field?

I’m a clinical psychologist and licensed as a psychologist in Texas. I’d actually been interested more in violent behavior in a general sense at the start of my research career. It was when I started to see people making extreme statements about media violence, like that the effects were similar to smoking and lung cancer or that all the debate was over, that my curiosity got piqued. Once I started to look at the data, I was startled by how little the data actually supported the kinds of extreme claims people were making. I knew that something was going wrong.

The field seems completely split. How did these two camps come into existence? What do they respectively have invested that makes them so fervent in their opinions?

Yes, the field is pretty split. I think there is an “opposite and equal reaction” kind of issue. Prior to the Columbine massacre, video game research was pretty calm, and most scholars acknowledged the research was inconsistent. Then after Columbine a group of scholars started to make more and more extreme statements. Eventually that invited a lot of scrutiny about those claims, and ultimately, harsh criticisms. I think some of those scholars stepped so far out onto the plank that it’s just difficult to retreat to more moderate language without losing face. A few have taken funding from anti-media advocacy groups too, but I think the main issue has to do mainly with personal egos, and a rigid ideology that has grown up about media violence generally over the past few decades and then was rigidly applied to video games in the 2000s post-Columbine.

Would it be unfair to paint this as the D&D and video nasties hysteria all over again?

Actually it’s a pretty direct parallel. I use the example of comic books in the 1950s, where psychiatrists and congress together made extreme claims about the “harm” of such media. Ironically, you’re seeing much the same pattern now. It’s not so much that the hypothesis is bad, or that you couldn’t even make an honest argument for negative effects. It’s that so often the arguments are dishonest, simply ignoring evidence against the speaker’s personal views. In one recent analysis of children, we included a list of things to look for when people are “moral panicking”…I’ll include that study, the list comes toward the end of the paper.

Were the repeated rejections by the various US courts of the anti-videogame lobby’s (for want of a clearer name) conclusions expected at the time? How did the lobby handle them? Have they reduced the strength of their claims?

Well I think initially I expected the typical “moral panic” to hold sway, so in a way I was indeed surprised by the savvy of the jurists. Particularly with the Supreme Court, I don’t think anyone knew what they were thinking. Fortunately they were able to see through the nonsense. The “lobby” as you say, hehe, did not handle it well. They have very clearly doubled-down making, if anything, more extreme claims. Before they used to be good, at least, about not extending their research to societal violence. Now they have dropped all such pretenses and have made direct attributions in the press between video game violence and even mass shootings despite no evidence to link the two. Some of them also have begun advocating scientific censorship…that journalists should not speak to scholars who disagree with them. I suppose this is to be expected…it is an ideology under fire, not an objective science.

Craig Anderson, in particular, seems to have produced much of the literature that drove these cases and was rejected by the courts. Can you point to any of his material, or the anti-videogames lobby’s material, that stands up to scrutiny? Is he a controversial figure in psychology?

Well, I don’t want to personalize it too much. No, I can’t point to any of his work though that would or should survive scrutiny. The courts were quite right to reject it. I suppose the whole field is becoming controversial.

The flaws in these papers seem to be mainly in methodology and standardisation. Does that seem fair, or is there more to it than that?

There’s certainly that, also the way video games are matched in experimental conditions. For instance comparing Modern Warfare to Tetris…sure one is violent, the other not, but they differ in multiple other ways. But perhaps more crucially is the language some of the “lobby” have employed…citation bias, ignoring work that differs with their views…as I understand it, that is an ethical violation, but few people have been courageous enough to call them on that. In general the extreme rhetoric they employ is as much a problem as anything else.

Has the pro-videogame lobby been guilty of similar problems? Your paper seemed to focus more on the egregious mistakes of the antis.

I sometimes see people claim, “No studies have ever linked video games with aggression.” That, of course, is not true. Some studies have, but other studies haven’t. It’s really a matter of the bigger picture…the studies are inconsistent, but combined with the societal data showing declines in youth violence, and a lack of a cross-national pattern of video game consumption correlating with societal violence, we can say that the evidence for a link there is pretty weak. But people on both sides have to be careful of avoiding sweeping generalizations. That having been said, I do tend to feel the antis, as you say, largely set this up by making the first set of extreme statements, provoking that “opposite and equal reaction.” In a broad sense, the scientific peer review process failed in this case.

Looking into the funding behind some of the research into this area seems to throw up shell organisations – for example, the Center for Successful Parenting which seems determined to obfuscate its funding and management. Is it normal in the USA for research to be funded this way?

No, not really, certainly not organizations as murky as the CSP. Most scholars actually do research based on local grants from the university and such. Some get large federal grants such as from NIH or NSF, and others from reputable private foundations (Bill and Melinda Gates, Pew Research, etc.) However advocacy groups, just like the industry, have a financial axe to grind, and as I make clear, I find this to be a conflict of interest when scholars take money from these organizations, just like taking money from the video game industry would be a conflict of interest.

How Are Games Changing SF Literature?


This article originally appeared in PlaySF magazine, way back in 2012.

Walking into any bookshop, the science-fiction section, seen from a distance, is healthy; an island of colour and variety amidst the sad faces of the ‘misery memoirs’, the black and bone of the ‘Dark Romance’, and the silver-backed Penguin classics. Yet get closer, and there’s something strange. The colour comes in bursts, great streaks of the same style dominating the shelves, logos iterating across shelf after shelf. Star Wars and Star Trek are there, for sure, but they’re not in charge; video game franchises are dominating science fiction and fantasy.

The video game market is huge, especially compared to original science fiction. Yet game fiction is often ignored by the publishing industry. I talked to Tony Gonzales, the writer of the Eve Online tie-in novels The Empyrean Age and Templar One, who bemoaned the short shrift given to game fiction: “it’s all piled into Fantasy / Science Fiction,” he said, “located in the most inconspicuous section of the store. It’s the same with digital sales. The obscurity is compounded by the fact that some literary trade publications won’t even review game tie-ins.” So why do SF literary journalists turn their noses up at this burgeoning genre, when it’s bringing new readers to the market?

Gonzales thinks that “general SF purists scoff at gaming because most games reuse ideas and concepts that have been in print for a decade…” Hard science fiction fans also have particular problems with games. Gonzales explains: “(also) most enjoyable games make some patent flubs to science in the name of creating fun gameplay. That’s pure sacrilege to the hard SF fan because it shatters their immersion… the game audience is used to instant gratification… they have short attention spans and authors trying to capture them better get to the point quickly.”

Given this, it seems SF authors will need to adapt to gamers’ tastes by avoiding challenging material – and this is already happening. “I see a certain amount of literary science fiction trying to appeal to the gamer audience,” said Niall Harrison, Editor-in-Chief of the speculative fiction magazine Strange Horizons, “mostly in near-future thrillers that incorporate MMORPGs or ARGs as a plot element – I’m thinking of Charles Stross’ Halting State, and Walter Jon Williams’ This Is Not a Game, not to mention Stephenson’s Reamde.”

It’s also happening in the way that further-future SF is written. “Ten years ago I might have talked about a ‘blockbuster’ sensibility in the work of writers like Richard Morgan (who has since worked on the story for Crysis 2),” says Harrison. “Now I’m thinking more of an ‘FPS’ sensibility in novels like Greg Bear’s Hull Zero Three (and Bear has of course written Halo tie-in novels).” Gonzales agreed: “There’s a struggle between what audiences want to read in SF versus what authors who work in the genre want to write.”

Neil Tringham, an ex-games designer and now an editor on the Encyclopedia of Science Fiction, told me that “there are presumably some people who buy spinoffs from SF games such as Starcraft who wouldn’t otherwise read SF.” What’s new is that the generic science fiction of the past has been replaced by branded tie-ins, including games. “I do suspect that the part of the book market that was occupied by long-running but not especially original sf adventure series, such as E C Tubb’s Dumarest sequence, has to some extent been taken over by long-running but not very innovative series based on company-owned concepts, such as Games Workshop’s Warhammer 40,000,” said Tringham.

Not that branded science fiction is new, as Strange Horizons’ Niall Harrison notes: “there has been tie-in fiction for decades and well-respected writers have written it in all periods of the field’s history. It’s always looked down on by the ‘serious’ sf readers, and it’s almost always sold buckets more than the original stuff.” Gonzales hasn’t produced his own universe fiction yet, but if he did, “just about all brand-driven fiction would outsell my work… the marketing resources that can support that brand will be vastly greater than will ever be thrown by publishers at standalone books.”

It’s just a pity that so many of the in-game narratives and worlds are cheesy, badly conceived or safe. Take the Mass Effect universe, where the height of daring for the writers is to accurately depict the same-sex relationships that exist in our society today. “I do see a certain amount of gentle mocking of the Mass Effect universe for being built from elements of umpteen existing franchises,” says Harrison. “A possible exception might be BioShock, thanks to its dialogue with the work of Ayn Rand.” (Rand’s ideas – about superhuman entrepreneurs being held back by the average man – informed the story of the dystopian shooter BioShock.) On the whole, though, when the fiction is reviewed it garners bad scores – probably worse than it would get if it weren’t branded.

However, it’s not only literary journalists who decry the quality of game fiction. Consider the recent comments of EA’s Chuck Beaver, the producer of the Dead Space franchise: “Gears of War… contains atrocious, offensive violations of story basics. Yet it doesn’t seem to ruin it for many, many people. It’s literally the worst writing in games, but seems to have no ill effects.” He even admitted that his own company’s Dead Space was itself “just a simple haunted house story that we later pasted a personal narrative on top of – a lost girlfriend who is really dead.” (He apologised after this statement.) Admittedly, Beaver’s not talking about the books, but if the original narrative of a game is bad, how far can the fiction improve on it? Is game tie-in fiction just bad?

A Quick Philosophy Lesson
Most game fiction falls into the ‘space opera’ category, AKA ‘science fantasy’; that is, unscientific futuristic fiction. It’s enjoyable, but it’s pulp fiction, like Mass Effect. Is there a moral argument for valuing hard science fiction over fantasy, beyond keeping educated SF fans immersed? Well, let’s assume we want to make as many people happy as possible, beyond the fleeting pleasure of actually reading the fiction. An old exponent of keeping people happy was the utilitarian philosopher John Stuart Mill. In On Liberty he talked about “experiments in living” like so;

“As it is useful that while mankind are imperfect there should be different opinions, so is it that there should be different experiments of living; that free scope should be given to varieties of character, short of injury to others; and that the worth of different modes of life should be proved practically, when any one thinks fit to try them.”

Now, speculative fiction has always aimed for this. It’s shown people other ways of living, on the basis of other ways the world could be, and explored the personal (Daniel Keyes’ Flowers for Algernon), social (H.G. Well’s The Time Machine) and moral (James Blish’s A Case of Conscience) consequences of this. It’s been damn good entertainment, yes, but it’s also opened up people’s minds to the possibilities of other ways our societies could work.

The best at this has been hard science fiction, because it takes the technology of the near future and extrapolates how our society would alter just from that – Clarke and Asimov are the classic examples here, though we might also point to John Brunner’s scarily prescient Stand on Zanzibar or Frederik Pohl’s Man Plus. This is exactly the sort of fiction that the new gaming audience doesn’t like, and it’s getting squeezed out.

And the worst at throwing up these models of living? I’d argue it’s ‘space opera’. Its relevance to our lives rests purely on an unjustified assumption of social parallels. And that’s what’s dominating the shelves, because SF video games are pure science fantasy. Indeed, they’re bringing new forms of science fantasy into existence, as the Science Fiction Encyclopedia’s Tringham explains: “There are two trends in recent books which have at least been influenced by developments in SF games. I’m thinking of what the online Encyclopedia of Science Fiction describes as Science and Sorcery (the ‘genre-blending juxtaposition of sf and fantasy settings’) and Medieval Futurism (‘sf… with heavy overtones of the Middle Ages feudal systems as the governing bodies’).”

As a gamer, I’m overjoyed that games have such a large cultural impact; as an SF reader, I’m ecstatic that they may be extending the reach of SF beyond its niche; and I can’t deny that many of these books tell a terribly good yarn. Yet, as a good utilitarian, I’m depressed to see something so dominant which rarely mingles its undeniable entertainment value with philosophical lessons or images of our possible futures.

A counter-argument, as made to me by PlaySF’s editor Richie when discussing this article, is that moral questions can still be raised by all fiction, including space opera: “I thought the patent flub of having clones in, say, Eve Online actually brings about interesting questions about the value of life when death becomes only a minor financial concern – which has been done to death in proper SF.” Indeed, SF games have had a positive effect on the acceptance of SF and fantasy ideas across all media, similar to the way that Margaret Atwood’s or George Orwell’s near-future dystopias managed to avoid the label of SF. “I rarely hear SF games discussed for their interest as SF,” says Strange Horizons’ Harrison. “People are excited by Portal because it’s charming and the mechanic is cool, rather than because it includes any particularly new SF ideas.”

Despite my personal pessimism, it’s likely that the fiction of games will improve, dragged up perhaps by the fresh innovation and quality we see coming from the indie development scene, like the hard science of Waking Mars or the softer humanism of To The Moon. Perhaps games will even be fed by the great SF books of the past, as Roadside Picnic informed the S.T.A.L.K.E.R. series. If we’re lucky, this fiction will trickle over into mainstream games and hence into the books pushed to the SF market. I can hope for all this, but given the market’s appetite (and the prevalence of games like the appalling Dark Star), it seems unlikely. As Theodore Sturgeon famously said, “ninety percent of everything is crap.” That’s true of both games and fiction, derived or not.

Interview: Casey Wimsatt, Symbionica on how games help the Autistic.

To the tune of: Tim Buckley – I Know I’d Recognize Your Face

Again, this was research for a feature I did for PC Gamer on disabled gaming. Casey Wimsatt of Facesay makes games proven to improve the social skills of children with Autism.

Can you back up your claims that your technology helps autistic children?
A peer-reviewed paper published just last week might be of interest. The paper is about a randomized controlled study (N=49) showing how my silly FaceSay computer games help kids with autism – both high- and low-functioning – improve their social interactions on the playground. This is a first, after a decade of brilliant attempts with other tech-based interventions.

The heavyweights in the autism research field have provided nice quotes. As your story suggests, the broader question of whether computer games can make a difference is of interest.

How does the technology help the children? Do you think it’s merely shaping their behaviour along Pavlovian lines, or is it helping them actually understand other people’s feelings better?
The games do use some Skinnerish elements of classical Applied Behavioral Analysis, such as errorless learning, but I concocted the games, à la a stranger on a new planet, by synthesizing a “stew” of dozens of ideas. Since there are so many elements, we can’t officially say what contributes what, but I do have some hypotheses and pending patents. Broadly speaking, the games are designed to help the kids become aware of the social value of the movements and features of the face, particularly in the area around the eyes. Attention to the eyes is important for a number of “upstream” skills, such as emotion recognition, joint attention and imitation.

As indirect evidence that the learning/acquisition is not just through conditioning and mastery, FaceSay never mentions emotion labels such as “happy” or “sad”, but in two RCTs the FaceSay participants have performed better on standard emotion recognition tests (I think because they are attending more to the clues in the area around the eyes). In the most recent RCT, in 2010 (n=31) at a California school district, the FaceSay participants also improved on a standardized neuropsych assessment for Theory of Mind (NEPSY-II), even though there is no explicit conditioning/training for the questions in the test.

Do you think there’s a large market for games that are therapeutic like this?
The “serious games” market has promise, but is far from booming. In an era where services funding is on the chopping block, inherently high-fidelity interventions like this should provide big savings and be adopted. Resistance to using computers in a social domain is one challenge (ranging from an ongoing debate over whether computers are just more screen time away from people, to some job insecurity in the service-provider sector).

Another challenge is getting scientific recognition of the therapeutic value. Posit Science’s games may be the current poster child. They are planning to apply for FDA approval – something I admire and would like to attempt for FaceSay. In the field of autism, there have been so many failures and so few successes, whether computer-based or not, that ABA therapy is the only accepted form. As the data for FaceSay continues to grow, I hope that it will become accepted by insurers, more of whom are being asked to cover autism intervention services.

If you were making a mainstream game, would you make it with the disabled in mind?
Sure. Broadly speaking, I think designing for a wide audience can enhance your creative process.

What tips would you give to mainstream developers who are looking to make games for autistic children?
This is a tough one. There are several dozen things to consider, and the kids, as a group, are very diverse. There’s a cliché: “if you’ve met one autistic person, you’ve met one autistic person”. Thinking about sensory processing is one key, though. Talking with parents and autistic adults, and piloting with autistic kids, is another.

Interview: Dr Stephen Thaler and his Dreaming AIs.

I did a series of interviews for a feature for Computer Shopper way back in January; this was the most interesting of them, with Dr Stephen Thaler, a man who is either doing stuff on the edge of our current knowledge or is a charlatan. He’s nearly convinced me he’s telling the truth and just deliberately misusing language, but if that’s the case then we need to reappraise the current state of play of AI. Key point: he thinks his AIs dream.

Warning: He can really talk. This is a long one.

To the tune of: Marina and The Diamonds – I Am Not A Robot

Dr Stephen Thaler

Apart from your Creativity Machines, what is the state of the art in AI?
I’m not really seeing much out there except for hype. My feeling is that large companies and universities are catering to the public misconception that size, speed, and complexity will somehow lead to human-level machine intelligence or the kind of AI anticipated by science fiction. That’s why the press typically jumps on the mere announcement that some monstrous corporation is about to embark upon building a human brain simulation (we hold the patents for that ultimately, but they have the cash). It’s also why so much attention is given to projects like Watson. Put a HAL-like voice behind an impressive database search and 90% of the population mistakenly thinks ‘wow, with just a few more computers, the thing will become conscious’.

From my vantage point, evolutionary computing and Bayesian networks, both of which require immense efforts by human beings and tackle only low-dimensionality problems, seem to be taking on prominence due to the sweat and toil of university PR departments. Certainly, these are not the AI systems anticipated by science fiction. They would require constant repair and update, and would lack the intuition and creativity characteristic of the brain.

So, slightly side-stepping your question, the Creativity Machine is not only the state of the art of AI, it is the fundamental building block of future machine intelligence. Pull out the schematic for any soon-to-be-created contemplative and creative AI system and you will be able to circle the two primary ingredients of the Creativity Machine paradigm: at least one noise-irritated neural network generating potential ideas, while at least one other net critiques and guides the stream of consciousness of the other(s). This fundamental architecture will then be able to query other, non-creative, non-contemplative, but fast computational systems like Watson.

Are your CMs scruffy or logic-based? Could you give a very quick summary of how they technically work?
It’s either one, depending upon one’s point of view. Creativity Machine Paradigm is at first blush a scruffy, since it can make itself arbitrarily complex and opaque to humans. The paradigm is neat (what you allude to as logic-based, I think) when one realizes that the very same system can dissect itself to reveal its underlying discrete, fuzzy, and intuitive (i.e., statistical) logic to “meat-based brains” like ours.

Artificial neural networks (ANN), the building blocks of Creativity Machines, are IMHO likewise both “scruffy” and logic-based. They may be thought of as “switches,” real or simulated, that interconnect themselves so as to achieve arbitrarily complex input-output programs, once exposed to representative input and output patterns called “exemplars.” As they are thusly exposed to such data, much of what we think of as intelligence automatically “grows” among the connections joining these switches, establishing the repeating themes within the presented input data environment as well as the relationships between such entities. In short, they can either absorb discrete logic, if presented with appropriate Boolean patterns, or develop the fuzzy logic we typically think of as intuition when presented with data relationships having less “systematic” and more “statistical” interrelationships.
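
As a toy illustration of Thaler’s point about an ANN “absorbing” discrete logic from exemplars, here’s a minimal sketch (my own, not anything from IEI) of a single-neuron perceptron learning Boolean AND from four input-output patterns:

```python
import numpy as np

# Training exemplars for Boolean AND: input patterns and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

# A single artificial neuron: a weighted "switch" with a threshold.
w = np.zeros(2)
b = 0.0

def predict(x):
    return 1 if x @ w + b > 0 else 0

# Perceptron learning rule: nudge the weights toward each exemplar
# until the input-output programme has been absorbed.
for _ in range(10):
    for xi, yi in zip(X, y):
        error = yi - predict(xi)
        w += 0.1 * error * xi
        b += 0.1 * error

print([predict(xi) for xi in X])  # → [0, 0, 0, 1], matching the exemplars
```

The same loop, fed noisier, more statistical data instead of clean Boolean patterns, settles into the fuzzy, intuition-like mappings Thaler describes.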

Before proceeding, consider the principal limitation of such an ANN: It is dependent upon a turnover of data patterns arriving from the outside world. Turn off the external traffic of things and events at its input side, and the ANN simply rests idly, producing zero turnover of output patterns. What is sorely needed to achieve the equivalent of brain function is contemplation, in which the network activates into a series of patterns (thoughts) totally independent of what’s going on in the environment, while on occasion activating within loose context to external activity (i.e., some event within the network’s environment triggers a series of associated patterns, which neural network aficionados call memories).

Looking to the brain for how we can induce contemplative behavior in ANNs, the first thing we realize is that the brain is not only fed meaningful inputs, what we think of sensory data, but also noise in the form of transient disturbances to the “status quo” within its biological neural nets. In biology, such noise emerges from the inevitable energetic fluctuations within neurons and their interconnections. Summarily speaking, it is Murphy’s Law at work on a microscopic scale, wherein the order acquired by the network during learning is transiently and reversibly destroyed by factors that have their origin in entropy.

We can emulate such disordering effects through the introduction of synthetic perturbations to an ANN. Applying small numerical perturbations to connections or neurons within a trained neural net, it begins to “hallucinate” things and events within the external environment that it has already experienced (see for instance one of those “non-existent” refereed papers, “Virtual Input Phenomena” Within the Death of a Simple Pattern Associator, Neural Networks, 8(1), 55–65). Slightly increment the average perturbation level in the net and it does something extremely profound: It transitions from memory generation to idea formation (discussed in a conference paper at http://imagination-engines.com/mind2.htm.). In other words the net generates patterns that are distinct from what it already knows through its learning experience, to things and/or scenarios that could be. THIS IS PROFOUND! The mathematician would say that the patterns generated in this noise regime largely obey the many constraint relations previously “soaked up” by the ANN during training. More philosophical sorts might say that it produces ideas and strategies that obey the “zen” of the conceptual space the net has previously “seen” in training.

Such critically perturbed neural nets, what I call “imagitrons,” form the idea-generating component of a contemplative and creative AI system. The remaining component is a “stock” item within the field of ANNs that is called a “perceptron.” In short, these nets exemplify how the brain forms opinions about the world, associating an environmental pattern (i.e., the taste of chocolate) with other stored memory patterns (i.e., pleasant experiences, if one likes chocolate). In the Creativity Machine architecture, the perceptron forms opinions not about environmental patterns, but about the potential ideas streaming from the imagitron. In effect, the CM is a client-server algorithm, with the server serving up ideas while the client selects those it deems most advantageous, numerically taking charge of the noise in a variety of ways to coax the imagitron in the most valuable directions. In effect, both the brain and the Creativity Machine are “Jon Lovitz Machines,” in that some neural nets make computational mistakes while others opportunistically proclaim, “Yeah! That’s the ticket!”
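
The imagitron-plus-critic loop Thaler describes can be sketched in miniature. What follows is my own illustrative reconstruction, not IEI’s implementation: a toy “trained” network whose weights are transiently perturbed with noise to generate candidate patterns, and a critic that watches the stream and keeps whatever it deems best. The names and the scoring function are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Imagitron": a toy trained network, here just fixed weights mapping a
# 4-d latent code to an 8-d output pattern. In a real system these
# weights would have been learned from exemplars.
W = rng.normal(size=(8, 4))

def imagitron(noise_level):
    """Generate a candidate pattern by transiently perturbing the
    trained weights -- the 'noise-irritated' net."""
    W_perturbed = W + rng.normal(scale=noise_level, size=W.shape)
    latent = rng.normal(size=4)
    return np.tanh(W_perturbed @ latent)

def critic(pattern, target):
    """Perceptron stand-in: score a candidate by how close it comes to
    some desired property (here, similarity to a target pattern)."""
    return -np.linalg.norm(pattern - target)

# Generate-and-select loop: the critic watches the imagitron's stream
# of candidate "ideas" and keeps the best one seen so far.
target = np.tanh(W @ np.ones(4))
best_pattern, best_score = None, -np.inf
for _ in range(200):
    candidate = imagitron(noise_level=0.1)
    score = critic(candidate, target)
    if score > best_score:
        best_pattern, best_score = candidate, score
```

At low noise the perturbed net mostly “relives” patterns near what it was trained on; crank the noise up and the candidates drift further from anything in its experience, which is the memory-to-idea transition the interview describes.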

Of course this is only the start. In short, Creativity Machines may take control over the connection of other neural nets into vast, contemplative, brain-like structures called “Supernets.” Likewise they may selectively reinforce the memories of what were in the last instant ideas, thereafter hybridizing these notions into even better ones.

AI is a narrow-but-deep field; people in certain specialisations of it don’t talk to the others. How is general intelligence (strong AI) looking? Where is it lagging? Where are we ahead? (Relative to humans, if that’s possible.) Do you think we need something to be AI-complete to solve all the other problems of, for example, NLP or social intelligence?
Trying my best to speak as a scientist, rather than a self-promoting capitalist, the only hope of building general artificial intelligence is Creativity Machine paradigm. All other attempts at approaching this problem involve embarrassingly slow humans (i.e., Bayesian and genetic approaches). Nothing else out there assembles itself and then builds, for instance, its own brain-like vision pathways for object recognition, invents self-consistent interpretations to natural language (i.e., semantic disambiguation), and improvises oftentimes Machiavellian tactics and strategies for coping with newly arising situations (our battlefield robots). Here is the answer to the AI-complete problems, all met and exceeded by critically-perturbed neural nets watched and managed by onlooker nets. It’s the simple and elegant way of building general AI, although threatening to our culture’s cherished preconceptions about brain and mind.

Let me offer some additional observations:

  1. All that brain does falls into three classes: (a) pattern-based learning, (b) pattern recognition (i.e., perception) and (c) pattern generation (idea formation), the latter being achieved via the teaming of neural nets in brainstorming sessions (Creativity Machine paradigm). Even class (b), pattern recognition, is governed by the Creativity Machine principle, since the patterns our brains witness, originating in the external world, are largely ambiguous, so we wind up having to invent significance for raw sensory input (i.e., sense making).
  2. Creativity Machines may take charge of vast swarms of ANNs, allowing them to knit themselves into vast brain-like structures called “Supernets.” These Supernets have exceeded the brain’s 100 billion neuron threshold (August, 1997), but now with millions of connections between synthetic neurons rather than the brain’s meager 10,000. The problem is that such immense neural structures need equally vast inputs, and typically need decades to become “wise,” absorbing both their successes and failures in bootstrapping their competencies. [Stay tuned for what that Supernet did. It’s related to that Thalamocortical paper you read…]
  3. Supernets organize into a general AI for any system of sensors and actuators they connect. In that sense, they are general artificial intelligence. Add the equivalent of the human senses and effectors, and they will develop the general intelligence to learn and improvise.
  4. Place a number of such synthetic life forms together and they either annihilate each other or implement collaborative strategies with one another. Those synthetic life forms that survive manifest a social intelligence.
  5. CM paradigm can likewise invent “high-level” psychological theories like NLP. They can even emulate the “wars” that go on between high-level and low-level (computational) psychologists by developing contrasting and competing theories.
  6. To build human-like, general artificial intelligence, one needs a human-like body, because much of what brain does is the monitoring and regulation of the corporeal. Otherwise, we are already building the contemplative, non-corporeal domain experts, where the real financial support comes from. [One needs hands, or at least nubs, to truly appreciate the meaning of grasping a concept.]

Summarily, human-style general intelligence is attainable given sufficient computational resources. In effect, Creativity Machine Paradigm is the “intensive” principle behind such systems, while the “extensive” portion is the hardware.

You talk in rather anthropomorphic terms about your CMs “dreaming” and so forth; is this just for marketing, is it to push a transhumanist agenda, or is this something you believe?
(Let me clear the air about my being a transhumanist. I’m much more realistic, even though I have developed the most important component of a singularitarian vision, the trans-human level intelligence that can invent everything else.)

It’s not something I believe. It’s something I know. They are truly dreaming.

Whether we are talking about the brain or CMs composed of artificial neural networks, memories are stored in the same way, through the connection weights that “grow” between them. Add a little noise and they relive such memories. Add a little more noise and their memories degenerate into false memories (i.e., confabulations) of things and events they have never directly experienced. This progression of activation patterns (thoughts) is called “dreaming” and that’s exactly what we see when we watch a brain dreaming via fMRI or PET, an evolution of neural activation patterns that seem to come from nowhere. Under the hood, the brain is housecleaning, interrogating some of its nets with noise while others take note of what memories or ideas are worthy of reinforcement. Once again, this process is Creativity Machine paradigm at work.

One of the reasons that we perceive the biological neuron as special is the complex structures and mechanisms for protoplasmic growth and maintenance. Otherwise, it is just a switch built from inorganic elements like carbon, hydrogen, and nitrogen.

There’s another fundamental reason for our intrinsic prejudice against the very thought of a “machine” dreaming: The main cognitive loop of the brain, the thalamocortical loop, is actually a Creativity Machine. The neural net based cortex thinks things, driven by its internal noise, while the neural network based thalamus gets interested in this stream of pattern-based thoughts within the former, cortical net. Attendant, watching neural nets attach meaning to what is, to them, a mysterious stream of thoughts coming from out of the blue.
The thalamocortical loops that have survived the evolutionary process are machines for self-reinforcing delusions, one of which is that mind/brain are somehow special, noble, and separate from any inorganic simulations thereof. This gives us the inspiration to avoid cliffs and shotguns pointed toward the head. The net result is that such delusionary minds reproduce and dominate! (sorry)

Admission that synthetic neurons can dream is one of those ultimate scientific truths that breaks away from the subjective, comforting, and societally reinforced delusion that they don’t.

Is there room for development in your technique, or is it something that, whilst productive and efficient, doesn’t itself have emergent properties? Does any form of AI support emergence?
Everything a CM does is an emergent property, especially when one contrasts what they have learned through direct experience and what they imagine. That’s why an IEI battlefield robot starts as “cybernetic road kill” one moment, and within minutes has developed cunning, even Machiavellian strategies.

The reason I’ve been skeptical is that I can’t find any papers about your tech and I can’t see any articles which verify your claims (except written by similarly low-brow journalists like me.) You could be a genius but you don’t seem shy of publicity, so I’m confused at the lack of articles; convince me!
Where are you looking for articles? I know that there are some ad hominem attacks on the Internet, but I think they’re authored by individuals who haven’t taken the time to look. Sometimes these accusations come from disgruntled academicians who know that I’ve crashed their party as an outsider.

The truth is that there is plentiful reading out there:

  • You’ll need access to various military and government documents such as DTIC. Nevertheless, I have published profusely therein (hundreds of pages). Otherwise, many documents will never see the light of day.
  • There are approximately a thousand pages to read, refereed by neural network and AI specialists out of academia, and then published by the USPTO and patent offices around the world.
  • The fundamental principles behind the Creativity Machine were laid out in the peer-reviewed journal “Neural Networks”, and in the beginning I wrote many papers in the area of materials discovery, once again in refereed journals.

I hope you understand that I run a company, and can neither afford to publish my trade secrets nor take the time to write such papers. Keep in mind that I have what many have called the “ultimate idea,” so I’m working the horizontal markets (applications A-ZZZ) rather than the vertical (i.e., the academic guilds). Furthermore, the majority of projects I work on are behind the closed doors of government.

The proof is in the pudding. Look at the wealth of big-name corporations and government agencies doing business with my company. Look at the vast suite of US and international patents that have effectively planted the flag and teach, in very plain language, how to implement such systems.

Try these peer-reviewed resources for starters:

As one additional note to this question, it’s hard for some of my academic friends to understand that the science I have developed has such broad and diverse application that I am hard-pressed to thoroughly document what I’ve done. I’m in a relentless race to complete one practical project after another…

Ask A Neuroscientist!

To the tune of: Green Day – Brain Stew

My preamble: One of the few blessings of attending Oxford, save for the acquisition of an archaic process of thought, was my acquaintance with my admirable friend Dr Paul Taylor. Paul is, apart from being an awesome trumpeter, a professor of Neuroscience, with a speciality in attention and… uh, I wasn’t listening to the rest. Something about decision-making and consciousness. Anyway, this is the future, the human brain, the great unknown canyons of the mind; focus!

Paul’s preamble: As part of my reply I have been doing such things as starting to read Neuromancer and then forgetting about it for a bit. My widdlings follow —

Paul's Brain

You stimulate reactions in your patients with what is basically a big magnet. Is this procedure something that could eventually be automated – that is, before any interface is installed, a short automated configuration period would be needed to identify the relevant brain centres?

—what we sometimes do at the moment is first scan people to see which areas ‘light up’ in response to a task, and then use some clever registration software involving an infrared camera and some tracking pointers so that we can find the part on their skull directly above that activation ‘blob’ – and then zap it. So it could be in a different place for different people but we’d find it.

How fine is our current level of interaction with these areas? What technical innovations would be needed to improve this?

—in one sense not fine at all and in another quite impressive. These stimulators I use are pretty bulky and definitely activate millions of brain cells immediately after this pulse. On the other hand some of the effects can be very sensitive – often we’re able for example to specifically produce a twitch in someone’s right index finger, for example, rather than the other fingers. To improve it we need some means of producing extremely focal magnetic pulses which are strongest at some distance from the stimulating device – it’s still a centimetre or even two from the top of your head down through the skull to the first bit of brain. Or, tiny little nanobots that can somehow cross the blood-brain barrier and be remotely triggered and moved around.

People have been known to kill themselves over things as innocuous as tinnitus – are there any dangers from direct brain stimulation?
–funny you mention tinnitus, it’s one of the few things which transcranial magnetic stimulation has been suggested to be used to treat. Tinnitus – a hallucinatory auditory experience – can be caused by all sorts of different things from the ear to the brain. Some types, maybe, you can make go away entirely by zapping the right bit.
Important to mention here is that the clinical doses are way beyond the experimental doses I use. For example, I use a MAXIMUM of 1500 quite weak pulses in a day. These clinical doses use ten to a hundred times that, every day for months.

The brain is highly adaptable. Does this adaptability vary with age? Brains can rewire themselves to bypass paralysis, etc. – is this something we could induce to allow connection with unfamiliar hardware? E.g. implanting a coprocessor at an early age which allows a defined level of control over a drone unit, and allowing the brain time to work out how to use this as an extra limb, say?

–OK, lots of things here. Brains are, as you say, highly adaptable – almost to the point of being the best definition of what a brain is: adaptable. The rewiring after paralysis (‘plasticity’) is currently studied a lot, including with TMS. For example, I mentioned above the thing where you can stimulate the motor cortex to produce a motor twitch. You can also do other things, like slow someone at doing something complicatedly fiddly, such as wiring a plug. If you stimulate their ‘premotor’ cortex – another part of the brain – normally people get worse at using the hand represented by that side of the brain. If someone has a stroke such that one side of the brain is damaged, then sometimes the other side of the brain starts to take over. You can show that stimulating that other side of the brain starts to have the effects which the old side used to. So now they only get worse if you stimulate the other side.
—you could have a prosthesis implanted so you could control an arm. The wiring would be a bit fiddly though. The best way to go about it, though, is to make the most of the brain’s adaptability. Rather than trying to plug something in the right way to just replace or extend something like a hand, say, instead just splash something in the cortex and let people figure it out. People are very good at this. Did you hear about the recent experiment at Duke University with the monkey operating a joystick in New York with its motor cortex?
—I was at a conference recently where they were showing video footage of people who’d had both hands amputated after an accident. They then transplanted new hands onto the old arms. People could use them – one patient could very easily strip the wire off a plug and, amazingly, also waved his hands around automatically as he talked. They, again, stimulated the motor cortex to demonstrate how the brain had reorganised.

The Outside Of Prof. Taylor's Head.

Fiction writers always imagine technology to be invasive; are there any benefits from implanted brain tech? What disadvantages are there to implants?

–I guess the problem would be that presumably the implant would have a single fixed function. The brain is always changing, though – from millisecond to millisecond and from year to year. So one problem would be having an implant that could change with the brain.
–The other problem is – well, it’s like this. To be honest, neuroscientists as a whole don’t really have the flying first idea what most parts of the brain are really doing. Neuroscientists individually do, but there’s no real agreement on anything. So everyone would have a different idea of which bit should be plugged in where.

The brain seems to be able to transfer routine actions to a non-conscious brain element eventually; is there anything barring integrated technologies from being subconscious as well?
– absolutely nothing barring that, one would expect that to happen as a matter of course.

This is all stream of consciousness on my part – what have I missed out?

–there’s so much to be said here, I don’t know where to begin. There’s a big research initiative here in Munich on robotics; I’ll look out for new findings that might be of interest.

Thank you, Paul!

Veni, Vidi, Validity – Blue Monday and Valid Arguments

(This post to the tune of…)

Professional statistics-mangler Professor Cliff Arnall is conquering the news again today, with his yearly profile-raiser about this being the most depressing day of the year. As Ben Goldacre has pointed out, he was paid to produce this research by Porter Novelli, a PR firm, who pitched the idea and date to several academics back in 2005, to persuade people to buy holidays from a client of theirs. However, as any fule logician knos, merely because something has dodgy premises, that doesn’t mean it isn’t true – and vice versa: just because something is right doesn’t mean it was arrived at validly.

Admittedly, Arnall’s premises are totally flawed. His first assumption is not only that depression is measurable, but that it’s the same for people all around the world; his statement is so all-encompassing that the ridiculousness of the equation he came up with isn’t really undermined by his self-deprecating honesty in saying “I’m only doing this for the money” – essentially he’s renting out his qualifications to the PR firm. The travel firm had chosen this date because it was the date they wanted people to start booking their holidays, and it was a cheap way of getting lots of national newspaper coverage (compared to advertising).
There’s an argument about validity here – an argument can be valid without its conclusion being true, and a statement can be true without having been validly arrived at. Cliff Arnall’s argument is valid, like so:
1: The day that maximises this equation is the most depressing day
2: January 18th maximises the equation.
C: Therefore January 18th is the most depressing day.
Sadly, his first premise is false, as his equation is utter bollocks – but there’s a second point: it’s possible to have a true conclusion even when the premises are false.
1: Everything that has either Perpetual Yeast or Infundibulum Baking Soda in it rises every day.
2: The sun is 90% Perpetual Yeast.
C: Therefore the sun rises every day.
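As an aside, this validity test can be done mechanically. Here’s a minimal sketch (my own illustration, not anything from Arnall or Goldacre) that checks the yeast syllogism by brute-forcing truth assignments; the function name `is_valid` and the variable encoding are hypothetical choices for the example:

```python
from itertools import product

# Encode: y = "the sun is 90% Perpetual Yeast", r = "the sun rises every day".
# Premise 1 is y -> r, premise 2 is y, and the conclusion is r (modus ponens).

def is_valid(premises, conclusion, n_vars):
    """An argument form is valid iff no truth assignment makes
    every premise true while the conclusion is false."""
    return not any(
        all(p(*vals) for p in premises) and not conclusion(*vals)
        for vals in product([False, True], repeat=n_vars)
    )

premises = [
    lambda y, r: (not y) or r,  # material conditional: y -> r
    lambda y, r: y,
]
conclusion = lambda y, r: r

print(is_valid(premises, conclusion, 2))  # True – the form is valid

# Soundness is a separate matter: under the real-world assignment
# y = False (the sun is not made of yeast), premise 2 is false,
# yet the conclusion r = True still holds – false premises, true conclusion.
```

Validity is a property of the argument’s shape, which is why the checker never needs to know which assignment actually describes the world – that question belongs to soundness, which is exactly where Arnall’s argument falls over.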
So this could be the most depressing day, independent of his nonsense – and it has to be admitted that this _is_ a tremendously depressing day in Britain, the day when the glow of the holidays has completely gone and the grind of the next eleven months becomes apparent. Doing a quick straw poll of Facebook and Twitter, there’s a significant number of people (above the normal Monday whingers) complaining about this being a rubbish day/week. I’m not going to claim that this is statistically significant – just that my experience seems to bear out Arnall’s arbitrary claim. This could, of course, be because those people have seen the Blue Monday coverage in the news and are highly impressionable.
There’s also the point that even if this is the most miserable day of any year – which I doubt, considering the snowbound depression many people were in early in the year, the Mumbai attacks of November 2008, or the London tube attacks of 2005 – even if it were for Britain, it’s not for the rest of the world. As Goldacre has said, seasonal suicide peaks vary from country to country and there have been no consistent findings amongst studies. Of course, again, one shouldn’t link suicide peaks to depression peaks – though our intuition is that the two should be linked, the connection isn’t a necessary one, especially not when talking about the population at large. Many people were depressed when, say, England lost the cricket, or the Princess of our Hearts forgot to put her seatbelt on.
Cliff Arnall is wrong on so many levels – moral, factual, mathematical – that one should really just ignore him; but the utter falsity of his premises sadly doesn’t make his conclusion false.