Interview: Casey Wimsatt of Symbionica on how games help autistic children.

Again, this was research for a feature I did for PC Gamer on disabled gaming. Casey Wimsatt of FaceSay makes games proven to improve the social skills of children with autism.

To the tune of: Tim Buckley – I Know I’d Recognize Your Face

Can you back up your claims that your technology helps autistic children?
A peer-reviewed paper published just last week might be of interest. The paper is about a randomized controlled study (N=49) showing how my silly FaceSay computer games help kids with autism – both high- and low-functioning – improve their social interactions on the playground. This is a first, after a decade of brilliant attempts with other tech-based interventions.


The heavyweights in the autism research field have provided nice quotes. As your story suggests, the broader question of whether computer games can make a difference is of interest.

How does the technology help the children? Do you think it’s merely shaping their behaviour along Pavlovian lines, or is it helping them actually understand other people’s feelings better?
The games do use some Skinnerish elements of classical Applied Behavioral Analysis, such as errorless learning, but I concocted the games, à la a stranger on a new planet, by synthesizing a “stew” of dozens of ideas. Since there are so many elements, we can’t officially say what contributes what, but I do have some hypotheses and pending patents. Broadly speaking, the games are designed to help the kids become aware of the social value of the movements and features of the face, particularly in the area around the eyes. Attention to the eyes is important for a number of “upstream” skills, such as emotion recognition, joint attention and imitation.

As indirect evidence that the learning/acquisition is not just through conditioning and mastery, FaceSay never mentions emotion labels such as “happy” or “sad”, but in two RCTs, the FaceSay participants have performed better on standard emotion recognition tests (I think because they are attending more to the clues in the area around the eyes). In the most recent RCT in 2010 (n=31) at a California school district, the FaceSay participants also improved on a standardized neuropsych assessment for Theory of Mind (NEPSY-II), even though there is no explicit conditioning/training for the questions in the test.

Do you think there’s a large market for games that are therapeutic like this?
The “serious games” market has promise, but is far from booming. In an era where services funding is on the chopping block, inherently high-fidelity interventions like this should provide big savings and be adopted. Resistance to using computers in a social domain is one challenge (ranging from an ongoing debate over whether computers are just more screen time away from people, to some job insecurity in the service provider sector).

Another challenge is getting scientific recognition of the therapeutic value. Posit Science’s games may be the current poster child. They are planning to apply for FDA approval – something I admire and would like to attempt for FaceSay. In the field of autism, there have been so many failures, so few successes, whether computer-based or not, that ABA therapy is the only accepted form. As data continues to grow for FaceSay, I hope that it will become accepted by insurers, more of whom are being asked to cover autism intervention services.

If you were making a mainstream game, would you make it with the disabled in mind?
Sure. Broadly speaking, I think designing for a wide audience can enhance your creative process.

What tips would you give to mainstream developers who are looking to make games for autistic children?
This is a tough one. There are several dozen things to consider, and the kids, as a group, are very diverse. There’s a cliché: “if you’ve met one autistic person, you’ve met one autistic person”. Thinking about sensory processing is one key, though. Talking with parents and autistic adults, and piloting with autistic kids, is another.

Interview: Dr Stephen Thaler and his Dreaming AIs.

I did a series of interviews for a feature for Computer Shopper way back in January; this was the most interesting of them, with Dr Stephen Thaler, a man who is either doing stuff on the edge of our current knowledge or is a charlatan. He’s nearly convinced me he’s telling the truth and just deliberately misusing language, but if that’s the case then we need to reappraise the current state of play of AI. Key point: he thinks his AIs dream.

Warning: He can really talk. This is a long one.

To the tune of: Marina and The Diamonds – I Am Not A Robot

Dr Stephen Thaler

Apart from your Creativity Machines, what is the state of the art in AI?
I’m not really seeing much out there except for hype. My feeling is that large companies and universities are catering to the public misconception that size, speed, and complexity will somehow lead to human-level machine intelligence or the kind of AI anticipated by science fiction. That’s why the press typically jumps on just the announcement that some monstrous corporation is about to embark upon building a human brain simulation (we hold the patents for that ultimately, but they have the cash). It’s also why so much attention is given to projects like Watson. Put a HAL-like voice behind an impressive database search and 90% of the population mistakenly thinks ‘wow’, with just a few more computers, the thing will become conscious.

From my vantage point, evolutionary computing and Bayesian networks, both of which require immense efforts by human beings and tackle only low-dimensionality problems, seem to be taking on prominence due to the sweat and toil of university PR departments. Certainly, these are not the AI systems anticipated by science fiction. They would require constant repair and update and would lack the intuition and creativity characteristic of the brain.

So, slightly side-stepping your question, the Creativity Machine is not only the state of the art in AI, it is the fundamental building block of future machine intelligence. Pull out the schematic for any soon-to-be-created contemplative and creative AI system and you will be able to circle the two primary ingredients of the Creativity Machine paradigm: at least one noise-irritated neural network generating potential ideas, while at least one other net critiques and guides the stream of consciousness of the other(s). This fundamental architecture will then be able to query other, non-creative, non-contemplative, but fast computational systems like Watson.

Are your CMs scruffy or logic-based? Could you give a very quick summary of how they technically work?
It’s either one, depending upon one’s point of view. Creativity Machine Paradigm is at first blush a scruffy, since it can make itself arbitrarily complex and opaque to humans. The paradigm is neat (what you allude to as logic-based, I think) when one realizes that the very same system can dissect itself to reveal its underlying discrete, fuzzy, and intuitive (i.e., statistical) logic to “meat-based brains” like ours.

Artificial neural networks (ANN), the building blocks of Creativity Machines, are IMHO likewise both “scruffy” and logic-based. They may be thought of as “switches,” real or simulated, that interconnect themselves so as to achieve arbitrarily complex input-output programs, once exposed to representative input and output patterns called “exemplars.” As they are thusly exposed to such data, much of what we think of as intelligence automatically “grows” among the connections joining these switches, establishing the repeating themes within the presented input data environment as well as the relationships between such entities. In short, they can either absorb discrete logic, if presented with appropriate Boolean patterns, or develop the fuzzy logic we typically think of as intuition when presented with data relationships having less “systematic” and more “statistical” interrelationships.

Before proceeding, consider the principal limitation of such an ANN: It is dependent upon a turnover of data patterns arriving from the outside world. Turn off the external traffic of things and events at its input side, and the ANN simply rests idly, producing zero turnover of output patterns. What is sorely needed to achieve the equivalent of brain function is contemplation, in which the network activates into a series of patterns (thoughts) totally independent of what’s going on in the environment, while on occasion activating within loose context to external activity (i.e., some event within the network’s environment triggers a series of associated patterns, which neural network aficionados call memories).

Looking to the brain for how we can induce contemplative behavior in ANNs, the first thing we realize is that the brain is not only fed meaningful inputs, what we think of sensory data, but also noise in the form of transient disturbances to the “status quo” within its biological neural nets. In biology, such noise emerges from the inevitable energetic fluctuations within neurons and their interconnections. Summarily speaking, it is Murphy’s Law at work on a microscopic scale, wherein the order acquired by the network during learning is transiently and reversibly destroyed by factors that have their origin in entropy.

We can emulate such disordering effects through the introduction of synthetic perturbations to an ANN. Applying small numerical perturbations to connections or neurons within a trained neural net, it begins to “hallucinate” things and events within the external environment that it has already experienced (see for instance one of those “non-existent” refereed papers, “Virtual Input Phenomena” Within the Death of a Simple Pattern Associator, Neural Networks, 8(1), 55–65). Slightly increment the average perturbation level in the net and it does something extremely profound: It transitions from memory generation to idea formation (discussed in a conference paper). In other words, the net generates patterns that are distinct from what it already knows through its learning experience – things and/or scenarios that could be. THIS IS PROFOUND! The mathematician would say that the patterns generated in this noise regime largely obey the many constraint relations previously “soaked up” by the ANN during training. More philosophical sorts might say that it produces ideas and strategies that obey the “zen” of the conceptual space the net has previously “seen” in training.

Such critically perturbed neural nets, what I call “imagitrons,” form the idea-generating component of a contemplative and creative AI system. The remaining component is a “stock” item within the field of ANNs that is called a “perceptron.” In short, these nets exemplify how the brain forms opinions about the world, associating an environmental pattern (i.e., the taste of chocolate) with other stored memory patterns (i.e., pleasant experiences, if one likes chocolate.) In the Creativity Machine architecture, the perceptron forms opinions not about environmental patterns, but about the potential ideas streaming from the imagitron. In effect, the CM is a client-server algorithm, with the latter serving up ideas, while the client selects those it deems most advantageous, while numerically taking charge of the noise in a variety of ways, to coax the imagitron in the most valuable directions. In effect, both the brain and the Creativity Machine are “Jon Lovitz Machines,” in that some neural nets make computational mistakes while others opportunistically proclaim, “Yeah! That’s the ticket!”
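The imagitron-plus-perceptron arrangement Thaler describes can be sketched as a toy loop. Everything below – the tiny random network, the quadratic critic, the target pattern and noise level – is an illustrative assumption for the sake of the sketch, not his actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "imagitron": a small feed-forward net whose trained weights we
# perturb with noise so it emits novel candidate patterns. The weights
# here are random stand-ins for weights learned from experience.
W = rng.normal(size=(8, 4))

def imagitron(noise_level):
    """Generate one candidate pattern from the noise-perturbed network."""
    perturbed = W + rng.normal(scale=noise_level, size=W.shape)
    x = rng.normal(size=8)            # stand-in for internal activation
    return np.tanh(x @ perturbed)     # candidate "idea" pattern

def critic(pattern, target):
    """A stand-in perceptron that scores how useful a candidate looks."""
    return -np.sum((pattern - target) ** 2)

# The CM loop: the critic watches the imagitron's stream of candidates
# and keeps whichever it deems most advantageous.
target = np.array([0.5, -0.5, 0.5, -0.5])
best_score, best = -np.inf, None
for _ in range(200):
    candidate = imagitron(noise_level=0.1)
    score = critic(candidate, target)
    if score > best_score:
        best_score, best = score, candidate
```

In a fuller version the critic would also adjust `noise_level` itself – more noise when the stream stagnates, less when good candidates appear – which is the "taking charge of the noise" role described above.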

Of course this is only the start. In short, Creativity Machines may take control over the connection of other neural nets into vast, contemplative, brain-like structures called “Supernets.” Likewise they may selectively reinforce the memories of what were in the last instant ideas, thereafter hybridizing these notions into even better ones.

AI is a narrow-but-deep field; people in certain specialisations of it don’t talk to the others. How is general intelligence (strong AI) looking? Where is it lagging? Where are we ahead? (Relative to humans, if that’s possible.) Do you think we need something to be AI-complete to solve all the other problems of, for example, NLP or social intelligence?
Trying my best to speak as a scientist, rather than a self-promoting capitalist, the only hope of building general artificial intelligence is Creativity Machine paradigm. All other attempts at approaching this problem involve embarrassingly slow humans (i.e., Bayesian and genetic approaches). Nothing else out there assembles itself and then builds, for instance, its own brain-like vision pathways for object recognition, invents self-consistent interpretations to natural language (i.e., semantic disambiguation), and improvises oftentimes Machiavellian tactics and strategies for coping with newly arising situations (our battlefield robots). Here is the answer to the AI-complete problems, all met and exceeded by critically-perturbed neural nets watched and managed by onlooker nets. It’s the simple and elegant way of building general AI, although threatening to our culture’s cherished preconceptions about brain and mind.

Let me offer some additional observations:

  1. All that brain does falls into three classes: (a) pattern-based learning, (b) pattern recognition (i.e., perception) and (c) pattern generation (idea formation), the latter being achieved via the teaming of neural nets in brainstorming sessions (Creativity Machine paradigm). Even class (b), pattern recognition, is governed by the Creativity Machine principle, since the patterns our brains witness, originating in the external world, are largely ambiguous, so we wind up having to invent significance for raw sensory input (i.e., sense-making).
  2. Creativity Machines may take charge of vast swarms of ANNs allowing them to knit themselves into vast brain-like structures called “Supernets.” These Supernets have exceeded the brain’s 100 billion neuron threshold (August, 1997), but now with millions of connections between synthetic neurons rather than the brain’s meager 10,000. The problem is that such immense neural structures need equally vast inputs, and typically need decades to become “wise,” absorbing both their successes and failures in bootstrapping their competencies. [Stay tuned for what that Supernet did. It’s related to that Thalamocortical paper you read…]
  3. Supernets organize into a general AI for any system of sensors and actuators they connect. In that sense, they are general artificial intelligence. Add the equivalent of the human senses and effectors, and they will develop the general intelligence to learn and improvise.
  4. Place a number of such synthetic life forms together and they either annihilate each other or implement collaborative strategies with one another. Those synthetic life forms that survive manifest a social intelligence.
  5. CM paradigm can likewise invent “high-level” psychological theories like NLP. They can even emulate the “wars” that go on between high-level and low-level (computational) psychologists by developing contrasting and competing theories.
  6. To build human-like, general artificial intelligence, one needs a human-like body, because much of what brain does is the monitoring and regulation of the corporeal. Otherwise, we are already building the contemplative, non-corporeal domain experts, where the real financial support comes from. [One needs hands, or at least nubs, to truly appreciate the meaning of grasping a concept.]

Summarily, human-style general intelligence is attainable given sufficient computational resources. In effect, Creativity Machine Paradigm is the “intensive” principle behind such systems, while the “extensive” portion is the hardware.

You talk in rather anthropomorphic terms about your CMs “dreaming” and so forth; is this just for marketing, is it to push a transhumanist agenda, or is this something you believe?
(Let me clear the air about my being a transhumanist. I’m much more realistic, even though I have developed the most important component of a singularitarian vision, the trans-human level intelligence that can invent everything else.)

It’s not something I believe. It’s something I know. They are truly dreaming.

Whether we are talking about the brain or CMs composed of artificial neural networks, memories are stored in the same way, through the connection weights that “grow” between them. Add a little noise and they relive such memories. Add a little more noise and their memories degenerate into false memories (i.e., confabulations) of things and events they have never directly experienced. This progression of activation patterns (thoughts) is called “dreaming” and that’s exactly what we see when we watch a brain dreaming via fMRI or PET, an evolution of neural activation patterns that seem to come from nowhere. Under the hood, the brain is housecleaning, interrogating some of its nets with noise while others take note of what memories or ideas are worthy of reinforcement. Once again, this process is Creativity Machine paradigm at work.

One of the reasons that we perceive the biological neuron as special is the complex structures and mechanisms for protoplasmic growth and maintenance. Otherwise, it is just a switch built from inorganic elements like carbon, hydrogen, and nitrogen.

There’s another fundamental reason for our intrinsic prejudice against the very thought of a “machine” dreaming: The main cognitive loop of the brain, the thalamocortical loop, is actually a Creativity Machine. The neural net based cortex thinks things, driven by its internal noise, while the neural network based thalamus gets interested in this stream of pattern-based thoughts within the former, cortical net. Attendant, watching neural nets attach meaning to what is, to them, a mysterious stream of thoughts coming from out of the blue.
The thalamocortical loops that have survived the evolutionary process are machines for self-reinforcing delusions, one of which is that mind/brain are somehow special, noble, and separate from any inorganic simulations thereof. This gives us the inspiration to avoid cliffs and shotguns pointed toward the head. The net result is that such delusionary minds reproduce and dominate! (sorry)

Admission that synthetic neurons can dream is one of those ultimate scientific truths that breaks away from the subjective, comforting, and societally reinforced delusion that they don’t.

Is there room for development in your technique, or is it something that, whilst productive and efficient, doesn’t itself have emergent properties? Does any form of AI support emergence?
Everything a CM does is an emergent property, especially when one contrasts what they have learned through direct experience and what they imagine. That’s why an IEI battlefield robot starts as “cybernetic road kill” one moment, and within minutes has developed cunning, even Machiavellian strategies.

The reason I’ve been skeptical is that I can’t find any papers about your tech and I can’t see any articles which verify your claims (except written by similarly low-brow journalists like me.) You could be a genius but you don’t seem shy of publicity, so I’m confused at the lack of articles; convince me!
Where are you looking for articles? I know that there are some ad hominem attacks on the Internet, but I think they’re authored by individuals who haven’t taken the time to look. Sometimes these accusations come from disgruntled academicians who know that I’ve crashed their party as an outsider.

The truth is that there is plentiful reading out there:

  • You’ll need access to various military and government documents such as DTIC. Nevertheless, I have published profusely therein (hundreds of pages). Otherwise, many documents will never see the light of day.
  • There are approximately a thousand pages to read, refereed by neural network and AI specialists out of academia, and then published by the USPTO and patent offices around the world.
  • The fundamental principles behind the Creativity Machine were laid out in the peer-reviewed journal “Neural Networks”, and in the beginning I wrote many papers in the area of materials discovery, once again in refereed journals.

I hope you understand that I run a company, and can neither afford to publish my trade secrets nor take the time to write such papers. Keep in mind that I have what many have called the “ultimate idea,” so I’m working the horizontal markets (applications A-ZZZ) rather than the vertical (i.e., the academic guilds). Furthermore, the majority of projects I work are behind the closed doors of government.

The proof is in the pudding. Look at the wealth of big-name corporations and government agencies doing business with my company. Look at the vast suite of US and international patents that have effectively planted the flag and teach, in very plain language, how to implement such systems.

Try these peer-reviewed resources for starters:

As one additional note to this question, it’s hard for some of my academic friends to understand that the science I have developed has such broad and diverse application that I am hard-pressed to thoroughly document what I’ve done. I’m in a relentless race to complete one practical project after another…

Interview: Jesse Schell on gaming, the social sciences, identity loss and behavioural shaping.

Previously Creative Director of the Disney Imagineering Virtual Reality Studio, Jesse Schell thinks hard about the future of games; he’s worked on Toontown Online and teaches game design at Carnegie Mellon university. You can see his amazing DICE talk on the future of gaming here. This interview was conducted for a PC Gamer piece on Social Gaming about a year ago.

To the tune of: Björk – Human Behaviour

Jesse is very small.

You’ve posited that social gaming, or at least the tools developed for it, will become the backbone to how technology integrates with our lives. Your vision, in particular, focussed on direct ‘nudge marketing’ and how, if done crudely, it could become invasive. Do you honestly believe this will happen?
I think you are asking whether there will be annoying kinds of advertising related to games. Have you been on Facebook? Yes! Totally! There will be LOTS and LOTS and LOTS of annoying marketing games, in shapes and forms we can only start to imagine. “Buy a 24-pack of Coca-Cola, and get 100 free gold in World of Warcraft!” “Tweet about NBC TV shows five times this week, and get 20 farmcash, and a coupon for McDonald’s!” And on, and on, and on…

Is it a good thing? (Use your own moral code here, class).
Is it a good thing? I would say that no, mostly advances in annoying advertising are not good. I mean, a lot of cool and weird game experiments will show up because of this, and that’s good, but for every cool one, there will be twenty that are just irritating.

Will you be pushing this in your own projects (no matter, whether you think it’s good or bad)?
Well, part of what we’re doing at Schell Games are facebook games and other social network games. And for those to succeed, they have to be viral. And to be viral, you have to risk being annoying sometimes. Taking that risk goes with the territory.

Most of us get our happiness from others – so in social games, relationships should be first, content second. So few of them feature any real relationships at all, though, and very little content. How do they get away with this?
I wondered who took my happiness! It was you!

It’s not true when you say they don’t feature real relationships. If that was true, facebook games would work just as well with strangers as they do with your real friends. But they don’t. We don’t want to be ashamed in front of our real friends, and we want to feel equal, or superior, to our real friends, and so, there are powerful forces at work that make us want to succeed at games when our real friends are involved. So, real relationships are at the fore. The games don’t develop these relationships, but they do use them. And as for “very little content”, since when do games require “lots of content”? Where is the “content” in chess? Or draughts (yeah, I’m in my UK groove!)? or football? All a good game needs is a simple interaction with someone whose opinion I care about.

Is this just another consequence of our more efficient living – work has got more efficient, but instead of saving us time we’ve ended up doing more than ever. Now we’re saving time on socialising too. The ultimate form of socialising is to feel the long-lasting happiness from being social in the shortest time.
Definitely, part of the appeal of social networks is to be able to socialize efficiently. That’s not a bad thing, historically, that’s what letter writing was for — a way to stay in touch that didn’t involve having to make a journey. Now we just have methods that are 100x more efficient than letter writing. How you choose to use them is up to you.

What are the great unanswered questions in social sciences that gaming could help answer?
One of them is surely this: Exactly what do people find rewarding? The social gaming universe right now is a Darwinian experiment, evolving at 100x the speed of traditional videogaming, to find out what people find it rewarding to play, and to spend money on.

Are the major social gaming companies being short-sighted? The way they used playgen payment models, the way their systems don’t merely utilise social networks but almost abuse them – they’re driving the public away. At the moment, they’re still growing quickly enough no-one notices how many are dropping out, but if it ever gets to the stage that it becomes harder to drop out…
Some techniques definitely will gain money and players in the short term, and lose them in the long term. Is it crazy to use these techniques? It’s crazy to use them in the long-term, but in the short term, it will get you money and players, so it would be crazy not to use them! You can always change techniques later — in fact, you definitely will, since players, games, and technology are all changing so fast. None of us know what this stuff really looks like in the long term, so, yeah, a lot of companies are focused on the short term right now.

Is this technology repeatedly top-slicing our society, splicing off those who know how to access and manipulate these new information sources, and leaving them in a position of power over the rest of us?
No — it’s doing the opposite. Wasting the time and money of those who understand the most — which gives everyone else a chance to catch up!

Normally our value systems are inculcated in us through a combination of school and parental behavioural shaping, and a hint of our own personality depending on how troublesome we prove. How are these things going to compete with relentless personalised marketing?
It’s a fascinating question! Does personalized marketing change us, or make us more like ourselves? Given the choice between the impersonal marketing that dominated the 20th century, and the freeform, personal marketing of the 21st century, I guess I prefer the latter. But to your question — in the 21st century, people will have an unprecedented freedom to become what they want to become — which means if you don’t like yourself, it’s your own fault.

This behavioural shaping isn’t good in another way – it only reinforces certain acquisitive behaviour. Will moral institutions (religions, humanists, illuminati) have to reorganise as digital lobbyists for the human soul, shifting their millions away from lobbying government for laws to shape behaviour to building their own incentive structures and social networks?
Yes, this is starting to happen now. There are countless grants to try to create videogames to encourage positive behavior of all kinds — better health habits, better learning habits, better environmental habits. It’s a tough battle though — for how can the government afford better games than the junk food, entertainment, and manufacturing industries?

Science fiction writers have been positing a total corporate societal takeover for years, but it hasn’t happened yet (I think). It won’t happen with this either, will it?
You mean like in Jennifer Government, where you need to have a credit card ready when you call an ambulance (everyone should read Jennifer Government, by the way! It would make a great movie, but I don’t think Hollywood has the guts to put out a movie where Nike is the villain)? No, corporations won’t take over the government through games, but they will nibble away at our identities with them, bit by bit.

Back to social gaming. The market’s not matured yet, in any way. Is this still the Wild West? Rife with Red Indies, and the big corporations laying railroads down and trying to tame a land they don’t yet understand?
Yes, mostly.

Facebook has established itself as the premium platform for social games. Do you think that was the only mistake World of Warcraft made – not establishing itself as a platform in its own right when it had such a huge userbase? Do you see Facebook ever being superseded?
“Ever” is a long time. I will say that I believe that Facebook will be the dominant social network five years from now.

Evony – the advertising scandal and Gifford’s admission, in court, of being a liar for marketing purposes – shouldn’t detract from them having made a passable strategy game. Can marketing and game design continue to be separated like this?
No comment on this question — I don’t know enough about the situation.

Most social games aren’t really games – just addictive mechanics designed to elicit cash. Also not really fun. In fact, in that they keep you from your friends and waste your time, are they completely insidious?
If they weren’t games, and they weren’t engaging, people wouldn’t keep playing them. And sometimes people don’t keep playing them. But when people do play them, and pay to play them, it’s because they are engaging. Remember, games don’t have to be “fun” all the time, they just have to be engaging.

If you were going to make a social game that appealed only to hardcore gamers, what would you do?
We have that! It’s called multiplayer FPS! Remember, it doesn’t have to be on facebook to be a social game!

E3 Day Zero: Paranoia

It was when the morbidly-obese man’s armpit started sweating on my shoulder, as the Armenian driver hurled the minibusload of LA entrants around the corners of downtown, that I realised my hands were aching fit to burst. ‘That would be from all the hand-wringing’ I thought, ‘which would be a natural lead into a flas…’


To the tune of: Ennio Morricone – Paranoia Prima

I’m sat in the plane. I’m going to be deported. There’s no way around it. I’ve sat here for ten hours, shocked and traumatised, and I’ve come up with a huge range of ideas and excuses. I haven’t moved, I haven’t watched a movie, I’ve just stared at the pixelated plane arcing towards DOOM-LA (on the interactive map, which is bizarrely in Spanish) and thought of plans for getting out of it. My hands hurt so much from the endless wringing, but at least my fingernails have been chewed a bit shorter, which is lucky as I had to drop my nail-scissors in the sharps bin at security… Anyway, PLANS:

  1. (River, Canada.) I tell them that I’m not a journalist, I’m a writer, and confuse them with etymology THEN MAKE MY ESCAPE.
  2. (The floes off Greenland are flat like damp sugar, impossibly large and hostile. Glaciers grind over the uninhabitable land.) I grab a guard’s gun and get him to shoot me in the foot, then claim he attacked me, then on the way to the hospital MAKE MY ESCAPE.
  3. (Manitoba is passing beneath, at midday, and the sun and clouds are perfectly reflected in something that might be water or frozen oil. It’s impossible to tell scale from up here.) I tell them honestly that I’m a journalist, but rely on the email I’ve just sent (which I really sent, making me look like a huge dick) telling all my contacts I wasn’t going to do the work for them after all and that I’ve come to LA just to collect assets for Gamespress (which would have been true.)
  4. I walk down the steps from the plane, erroneously assuming they exist, and just keep walking, grabbing a Mexican worker’s dungarees to disguise myself and walk off into LA, MAKING MY ESCAPE.
  5. I try speaking in Greek to them and when a translator turns up, I speak English to him/her, just to confuse them WHILE I MAKE MY…
  6. I admit everything and break down in tears. (This plan almost has me crying on the plane.)
  7. (Flying over Utah and Vegas now, the great stained desert, desolate, mostly uninhabited.) I claim to be a consultant, point out what proportion of my income is from writing (sadly small), and use the contract from Warner Bros, which I inadvertently brought with me, to prove that I’m a bigwig, ringing Rob Donald if necessary to prove that I’ve worked for them and that I’m not a journalist, oh no no.
  8. If they don’t let me talk, I’ll tell them how beautiful their country is from the air, so clear on this day, and how I regret nothing, nothing! Then GET DEPORTED.
  9. (LA is so huge. How many people are lost in that? I stare and stare and the fear grows as the plane comes in to land.) I change planes when I get into the airport and sneak in over the border with Mexicanos, disguised as an itinerant Hermanos Rabbi. If I don’t GET SHOT or GET DEPORTED then everything will be hunky-dory.
  10. Actually, most of the imagined plans ended with AND THEN I’M DEPORTED or AND THEN I GET SHOT.

Why am I talking about this? I was going to LA. I was going to LA and…


…a nice old man in a brylon British Airways waistcoat took my passport, just as the departure gate was closing, looked up at me and said:

“A journalist are we, Mr Griliopoulos?”
“URK” I gasp, eloquently.
“Doing any work out at E3?”
“No, you’d have to be mad as crabs,” I actually said, reddening.

He started laughing. My passport didn’t mention journalist or E3. He knew. He KNEW.


My journalist iVisa has expired. I’ve had it since I left OXM, all those years ago, the Dorian picture of OXM Grill not ageing as Dan does. It let me go to the USA and write stuff but, finally, it’s expired. Just before I’m due to fly to LA. To write stuff. I ring the American embassy, at great expense. A nice lady on the other end of the phone starts organising me an appointment to get a new iVisa, after taking my name, passport number and flight date, then pauses and sucks her teeth audibly.

“Can you go to Belfast?” she asks.
“URK” I gasp, eloquently.
“Otherwise you can’t get your Visa in time and you can’t fly.”
“Can’t I just…?”
“BUT… I’m not just a journalist, I do other things, like.”
“SIR! I cannot advise that you travel under false pretences. “
“… I have to fly.”


I’m at customs. I’m sure they’re going to pounce. I think of revealing myself, a new plan, pre-empting them, explaining the situation to catch them off balance, and then…

…I’m talking to the security guy. He’s coffee and blue, lots of numbers and badges. And he’s looking up from my biometrics and passport, and raising an eyebrow and:

“What are you here for?”
“Oh, E3, the games convention.”
“Cool! You games guys. That’s why you’re so tired right, you’ve been up all night?”
“YEAH.” I smile fixedly and rub my hands beneath the counter.
“Well, have a great time!” The smile sticks and I MAKE MY ESCAPE.


And that’s why it hurts so much to type this. GOOD START TO LA, I think, as I am forced sideways into the large man’s moobs by the latest lost soul the Armenian driver has crammed into the minibus as he talks loudly about how much better life was under Communism, under a totalitarian ordered system, where he didn’t have to work a seven-day week just to feed his family, where he knew his neighbours…