
Interview: Dr Stephen Thaler and his Dreaming AIs.

To the tune of: Marina and The Diamonds – I Am Not A Robot

I did a series of interviews for a feature for Computer Shopper way back in January; this was the most interesting of them, with Dr Stephen Thaler, a man who is either doing stuff on the edge of our current knowledge or is a charlatan. He’s nearly convinced me he’s telling the truth and just deliberately misusing language, but if that’s the case then we need to reappraise the current state of play of AI. Key point: he thinks his AIs dream.

Warning: He can really talk. This is a long one.

Dr Stephen Thaler

Apart from your Creativity Machines, what is the state of the art in AI?
I’m not really seeing much out there except for hype. My feeling is that large companies and universities are catering to the public misconception that size, speed, and complexity will somehow lead to human-level machine intelligence or the kind of AI anticipated by science fiction. That’s why the press typically jumps on the mere announcement that some monstrous corporation is about to embark upon building a human brain simulation (we hold the patents for that ultimately, but they have the cash). It’s also why so much attention is given to projects like Watson. Put a HAL-like voice behind an impressive database search and 90% of the population mistakenly thinks, “wow, with just a few more computers, the thing will become conscious.”

From my vantage point, evolutionary computing and Bayesian networks, both of which require immense efforts by human beings and tackle only low-dimensionality problems, seem to be taking on prominence due to the sweat and toil of university PR departments. Certainly, these are not the AI systems anticipated by science fiction. They would require constant repair and update and would lack the intuition and creativity characteristic of the brain.

So, slightly side-stepping your question, the Creativity Machine is not only the state of the art in AI, it is the fundamental building block of future machine intelligence. Pull out the schematic for any soon-to-be-created contemplative and creative AI system and you will be able to circle the two primary ingredients of the Creativity Machine paradigm: at least one noise-irritated neural network generating potential ideas, while at least one other net critiques and guides the stream of consciousness of the other(s). This fundamental architecture will then be able to query other, non-creative, non-contemplative, but fast computational systems like Watson.

Are your CMs scruffy or logic-based? Could you give a very quick summary of how they technically work?
It’s either one, depending upon one’s point of view. The Creativity Machine paradigm is, at first blush, scruffy, since it can make itself arbitrarily complex and opaque to humans. The paradigm is neat (what you allude to as logic-based, I think) when one realizes that the very same system can dissect itself to reveal its underlying discrete, fuzzy, and intuitive (i.e., statistical) logic to “meat-based brains” like ours.

Artificial neural networks (ANN), the building blocks of Creativity Machines, are IMHO likewise both “scruffy” and logic-based. They may be thought of as “switches,” real or simulated, that interconnect themselves so as to achieve arbitrarily complex input-output programs, once exposed to representative input and output patterns called “exemplars.” As they are thusly exposed to such data, much of what we think of as intelligence automatically “grows” among the connections joining these switches, establishing the repeating themes within the presented input data environment as well as the relationships between such entities. In short, they can either absorb discrete logic, if presented with appropriate Boolean patterns, or develop the fuzzy logic we typically think of as intuition when presented with data relationships having less “systematic” and more “statistical” interrelationships.
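(A quick aside for readers who want something concrete: the sketch below is my own minimal toy of the “exemplar” training he describes, a tiny network absorbing the Boolean XOR relation via plain backpropagation. Every name, size, and number in it is an assumption of mine, not anything from IEI.)

```python
# Minimal illustrative sketch (not IEI code): a tiny two-layer network
# "growing" connection weights from Boolean exemplars -- here, XOR.
import numpy as np

rng = np.random.default_rng(0)

# Exemplars: input patterns and the output patterns they should map to.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Connection weights and biases: the "switch settings" that training grows.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)             # hidden "switch" activations
    out = sigmoid(h @ W2 + b2)           # the net's current input-output map
    d_out = (out - Y) * out * (1 - out)  # backpropagated error signals
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(0)

# Outputs should approach [0, 1, 1, 0]: the XOR relation absorbed from data.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```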

Before proceeding, consider the principal limitation of such an ANN: It is dependent upon a turnover of data patterns arriving from the outside world. Turn off the external traffic of things and events at its input side, and the ANN simply rests idly, producing zero turnover of output patterns. What is sorely needed to achieve the equivalent of brain function is contemplation, in which the network activates into a series of patterns (thoughts) totally independent of what’s going on in the environment, while on occasion activating within loose context to external activity (i.e., some event within the network’s environment triggers a series of associated patterns, which neural network aficionados call memories).

Looking to the brain for how we can induce contemplative behavior in ANNs, the first thing we realize is that the brain is not only fed meaningful inputs, what we think of as sensory data, but also noise in the form of transient disturbances to the “status quo” within its biological neural nets. In biology, such noise emerges from the inevitable energetic fluctuations within neurons and their interconnections. Summarily speaking, it is Murphy’s Law at work on a microscopic scale, wherein the order acquired by the network during learning is transiently and reversibly destroyed by factors that have their origin in entropy.

We can emulate such disordering effects through the introduction of synthetic perturbations to an ANN. Apply small numerical perturbations to connections or neurons within a trained neural net and it begins to “hallucinate” things and events within the external environment that it has already experienced (see for instance one of those “non-existent” refereed papers, “‘Virtual Input Phenomena’ Within the Death of a Simple Pattern Associator,” Neural Networks, 8(1), 55–65). Slightly increment the average perturbation level in the net and it does something extremely profound: it transitions from memory generation to idea formation (discussed in a conference paper at http://imagination-engines.com/mind2.htm). In other words, the net’s generated patterns move beyond what it already knows from its learning experience to things and/or scenarios that could be. THIS IS PROFOUND! The mathematician would say that the patterns generated in this noise regime largely obey the many constraint relations previously “soaked up” by the ANN during training. More philosophical sorts might say that it produces ideas and strategies that obey the “zen” of the conceptual space the net has previously “seen” in training.
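(To make that regime a bit more concrete, here is a toy of my own devising, not anything from his patents: store a couple of patterns in a simple Hopfield-style associator, perturb its connection weights with increasing noise, and watch recall drift from faithful memories toward patterns it never learned. The network type, noise levels, and names are all illustrative assumptions.)

```python
# Illustrative only: perturb the connection weights of a simple pattern
# associator and sample its output at increasing noise levels.
import numpy as np

rng = np.random.default_rng(1)
N = 16

# Two stored bipolar (+1/-1) "memory" patterns, learned Hebbian-style.
memories = rng.choice([-1.0, 1.0], size=(2, N))
W = memories.T @ memories / N          # order acquired during "learning"
np.fill_diagonal(W, 0.0)

def noisy_recall(probe, W, sigma, sweeps=10):
    """Transiently disorder the weights, then let the net settle from a probe."""
    W_noisy = W + rng.normal(0.0, sigma, W.shape)
    state = probe.copy()
    for _ in range(sweeps):
        state = np.sign(W_noisy @ state)
        state[state == 0] = 1.0
    return state

probe = memories[0]
for sigma in (0.0, 0.05, 0.25, 1.0):
    out = noisy_recall(probe, W, sigma)
    overlap = float(out @ memories[0]) / N
    print(f"weight noise sigma={sigma}: overlap with stored memory = {overlap:+.2f}")

# Typical behaviour: near-zero noise gives faithful recall (overlap near +1),
# moderate noise gives near-memories ("confabulations"), and larger noise
# yields patterns never stored, though still shaped by the learned weights.
```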

Such critically perturbed neural nets, what I call “imagitrons,” form the idea-generating component of a contemplative and creative AI system. The remaining component is a “stock” item within the field of ANNs called a “perceptron.” In short, these nets exemplify how the brain forms opinions about the world, associating an environmental pattern (e.g., the taste of chocolate) with other stored memory patterns (e.g., pleasant experiences, if one likes chocolate). In the Creativity Machine architecture, the perceptron forms opinions not about environmental patterns, but about the potential ideas streaming from the imagitron. In effect, the CM is a client-server algorithm: the imagitron serves up ideas while the perceptron, as client, selects those it deems most advantageous, numerically taking charge of the noise in a variety of ways to coax the imagitron in the most valuable directions. The result is that both the brain and the Creativity Machine are “Jon Lovitz Machines,” in that some neural nets make computational mistakes while others opportunistically proclaim, “Yeah! That’s the ticket!”
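(Again, a reader-oriented sketch rather than his implementation: a noise-perturbed “imagitron” proposes candidate patterns, a second net scores them, and the noise level is nudged up when the search stalls and down when it improves. The networks, function names, and scoring rule below are placeholders of mine.)

```python
# Toy two-net loop in the spirit of the described architecture; the weights
# here are random stand-ins for trained nets, purely for illustration.
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_gen = rng.normal(0, 1, (6, 8))    # "imagitron": seed -> candidate pattern
W_critic = rng.normal(0, 1, 8)      # "perceptron": pattern -> desirability

def propose(seed, sigma):
    """Generate a candidate by perturbing the generator's connections."""
    W_noisy = W_gen + rng.normal(0.0, sigma, W_gen.shape)
    return sigmoid(seed @ W_noisy)

def critique(candidate):
    """The onlooker net's scalar opinion of a candidate pattern."""
    return float(sigmoid(candidate @ W_critic))

seed = rng.random(6)
sigma, best, best_score = 0.5, None, -np.inf
for step in range(200):
    candidate = propose(seed, sigma)
    score = critique(candidate)
    if score > best_score:
        best, best_score = candidate, score
        sigma *= 0.95                       # calm the noise when progress is made
    else:
        sigma = min(sigma * 1.05, 4.0)      # agitate the imagitron when stuck

print("best score found:", round(best_score, 3))
```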

Of course this is only the start. In short, Creativity Machines may take control over the connection of other neural nets into vast, contemplative, brain-like structures called “Supernets.” Likewise they may selectively reinforce the memories of what were in the last instant ideas, thereafter hybridizing these notions into even better ones.
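(One more toy illustration of that reinforcement step, assuming nothing about IEI’s actual code: noise-born patterns that a critic approves are folded back into the weight matrix with a Hebbian update, so yesterday’s “ideas” become part of tomorrow’s memories.)

```python
# Illustrative sketch: selectively reinforce critic-approved "ideas" by
# adding them back into a Hebbian memory matrix. All choices are mine.
import numpy as np

rng = np.random.default_rng(3)
N = 16
memories = rng.choice([-1.0, 1.0], size=(2, N))
W = memories.T @ memories / N
np.fill_diagonal(W, 0.0)

def dream(W, sigma, sweeps=10):
    """Settle the net under perturbed weights from a random starting state."""
    W_noisy = W + rng.normal(0.0, sigma, W.shape)
    state = rng.choice([-1.0, 1.0], size=N)
    for _ in range(sweeps):
        state = np.sign(W_noisy @ state)
        state[state == 0] = 1.0
    return state

def critic_score(pattern):
    """Toy 'opinion': here, simply how many +1 units the pattern contains."""
    return float((pattern > 0).sum())

kept = 0
for _ in range(50):
    idea = dream(W, sigma=0.3)
    if critic_score(idea) >= 12:           # "Yeah! That's the ticket!"
        W += np.outer(idea, idea) / N      # reinforce the approved idea
        np.fill_diagonal(W, 0.0)
        kept += 1

print(f"{kept} noise-born patterns judged worthy and folded back into memory")
```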

AI is a narrow-but-deep field; people in certain specialisations of it don’t talk to the others. How is general intelligence (strong AI) looking? Where is it lagging? Where are we ahead? (Relative to humans, if that’s possible.) Do you think we need something to be AI-complete to solve all the other problems of, for example, NLP or social intelligence?
Trying my best to speak as a scientist, rather than a self-promoting capitalist, the only hope of building general artificial intelligence is the Creativity Machine paradigm. All other attempts at approaching this problem involve embarrassingly slow humans (i.e., Bayesian and genetic approaches). Nothing else out there assembles itself and then builds, for instance, its own brain-like vision pathways for object recognition, invents self-consistent interpretations of natural language (i.e., semantic disambiguation), and improvises oftentimes Machiavellian tactics and strategies for coping with newly arising situations (our battlefield robots). Here is the answer to the AI-complete problems, all met and exceeded by critically perturbed neural nets watched and managed by onlooker nets. It’s the simple and elegant way of building general AI, although threatening to our culture’s cherished preconceptions about brain and mind.

Let me offer some additional observations:

  1. All that the brain does falls into three classes: (a) pattern-based learning, (b) pattern recognition (i.e., perception), and (c) pattern generation (idea formation), the last being achieved via the teaming of neural nets in brainstorming sessions (Creativity Machine paradigm). Even class (b), pattern recognition, is governed by the Creativity Machine principle, since the patterns our brains witness, originating in the external world, are largely ambiguous, so we wind up having to invent significance for raw sensory input (i.e., sense-making).
  2. Creativity Machines may take charge of vast swarms of ANNs, allowing them to knit themselves into vast brain-like structures called “Supernets.” These Supernets have exceeded the brain’s 100 billion neuron threshold (August, 1997), but now with millions of connections per synthetic neuron rather than the brain’s meager 10,000. The problem is that such immense neural structures need equally vast inputs, and typically need decades to become “wise,” absorbing both their successes and failures in bootstrapping their competencies. [Stay tuned for what that Supernet did. It’s related to that Thalamocortical paper you read…]
  3. Supernets organize into a general AI for any system of sensors and actuators they connect. In that sense, they are general artificial intelligence. Add the equivalent of the human senses and effectors, and they will develop the general intelligence to learn and improvise.
  4. Place a number of such synthetic life forms together and they either annihilate each other or implement collaborative strategies with one another. Those synthetic life forms that survive manifest a social intelligence.
  5. CM paradigm can likewise invent “high-level” psychological theories like NLP. They can even emulate the “wars” that go on between high-level and low-level (computational) psychologists by developing contrasting and competing theories.
  6. To build human-like, general artificial intelligence, one needs a human-like body, because much of what brain does is the monitoring and regulation of the corporeal. Otherwise, we are already building the contemplative, non-corporeal domain experts, where the real financial support comes from. [One needs hands, or at least nubs, to truly appreciate the meaning of grasping a concept.]

Summarily, human-style general intelligence is attainable given sufficient computational resources. In effect, Creativity Machine Paradigm is the “intensive” principle behind such systems, while the “extensive” portion is the hardware.

You talk in rather anthropomorphic terms about your CMs “dreaming” and so forth; is this just for marketing, is it to push a transhumanist agenda, or is this something you believe?
(Let me clear the air about my being a transhumanist. I’m much more realistic, even though I have developed the most important component of a singularitarian vision, the trans-human level intelligence that can invent everything else.)

It’s not something I believe. It’s something I know. They are truly dreaming.

Whether we are talking about the brain or CMs composed of artificial neural networks, memories are stored in the same way, through the connection weights that “grow” between them. Add a little noise and they relive such memories. Add a little more noise and their memories degenerate into false memories (i.e., confabulations) of things and events they have never directly experienced. This progression of activation patterns (thoughts) is called “dreaming” and that’s exactly what we see when we watch a brain dreaming via fMRI or PET, an evolution of neural activation patterns that seem to come from nowhere. Under the hood, the brain is housecleaning, interrogating some of its nets with noise while others take note of what memories or ideas are worthy of reinforcement. Once again, this process is Creativity Machine paradigm at work.

One of the reasons that we perceive the biological neuron as special is the complex structures and mechanisms for protoplasmic growth and maintenance. Otherwise, it is just a switch built from inorganic elements like carbon, hydrogen, and nitrogen.

There’s another fundamental reason for our intrinsic prejudice against the very thought of a “machine” dreaming: the main cognitive loop of the brain, the thalamocortical loop, is actually a Creativity Machine. The neural net based cortex thinks things, driven by its internal noise, while the neural network based thalamus gets interested in this stream of pattern-based thoughts within the former, cortical net. Attendant, watching neural nets attach meaning to what, to them, is a mysterious stream of thoughts coming from out of the blue.
The thalamocortical loops that have survived the evolutionary process are machines for self-reinforcing delusions, one of which is that mind/brain are somehow special, noble, and separate from any inorganic simulations thereof. This gives us the inspiration to avoid cliffs and shotguns pointed toward the head. The net result is that such delusionary minds reproduce and dominate! (sorry)

Admission that synthetic neurons can dream is one of those ultimate scientific truths that breaks away from the subjective, comforting, and societally reinforced delusion that they don’t.

Is there room for development in your technique, or is it something that, whilst productive and efficient, doesn’t itself have emergent properties? Does any form of AI support emergence?
Everything a CM does is an emergent property, especially when one contrasts what they have learned through direct experience and what they imagine. That’s why an IEI battlefield robot starts as “cybernetic road kill” one moment, and within minutes has developed cunning, even Machiavellian strategies.

The reason I’ve been skeptical is that I can’t find any papers about your tech and I can’t see any articles which verify your claims (except those written by similarly low-brow journalists like me). You could be a genius, but you don’t seem shy of publicity, so I’m confused at the lack of articles; convince me!
Where are you looking for articles? I know that there are some ad hominem attacks on the Internet, but I think they’re authored by individuals who haven’t taken the time to look. Sometimes these accusations come from disgruntled academicians who know that I’ve crashed their party as an outsider.

The truth is that there is plentiful reading out there:

  • You’ll need access to various military and government document repositories such as DTIC. Nevertheless, I have published profusely therein (hundreds of pages). Other documents will never see the light of day.
  • There are approximately a thousand pages to read, refereed by neural network and AI specialists out of academia, and then published by the USPTO and patent offices around the world.
  • The fundamental principles behind the Creativity Machine were laid out in the peer-reviewed journal Neural Networks, and in the beginning I wrote many papers in the area of materials discovery, once again in refereed journals.

I hope you understand that I run a company, and can neither afford to publish my trade secrets nor take the time to write such papers. Keep in mind that I have what many have called the “ultimate idea,” so I’m working the horizontal markets (applications A-ZZZ) rather than the vertical (i.e., the academic guilds). Furthermore, the majority of projects I work are behind the closed doors of government.

The proof is in the pudding. Look at the wealth of big-name corporations and government agencies doing business with my company. Look at the vast suite of US and international patents that have effectively planted the flag and teach, in very plain language, how to implement such systems.

Try these peer-reviewed resources for starters:

As one additional note to this question, it’s hard for some of my academic friends to understand that the science I have developed has such broad and diverse application that I am hard-pressed to thoroughly document what I’ve done. I’m in a relentless race to complete one practical project after another…


Comments (5)

  1. Anonymous R&D in AI

    He is a charlatan! His model of self-training ANNs has a big flaw. It says that one ANN X monitors ANN Y and then ANN X feeds ANN Y. But he never says how the weights/connections of the ANN are changed! The process of training an ANN is just a process of finding the best weights that can solve a problem, and the weights are just floating-point numbers!

    He could also release some video of the “machine” working and doing a real AI task.

    And anyone can create patents and companies and fake any crazy idea with them.


    1. Patricia Eriksson

      I feel you may be the charlatan. He doesn’t say that one net feeds another. He says that one net trains another, meaning the trainer is correcting those floating point weights within the trainee (http://imagination-engines.com/iei_stanno.htm). Many in academia have been trying to achieve just this, but he did it decades ago.

      See http://topdocumentaryfilms.com/in-its-image/ for the “machine” working, for instance.

      Sorry, the patents have withstood critical review by patent offices around the world, many of the examiners coming from academia. The company has raked in millions using what you call a “fake and crazy idea,” from government and major corporations.

      Anyone can say they do R&D in AI, so I’m not convinced that you are real. I think you’re a wannabe.


  2. john

    Stephen Thaler is my hero !!! Rock on Man …

    Long live synthetic intelligence!!

    jb


  3. Anon

    Unfortunately, this is the kind of knee-jerk reaction one usually gets when people don’t take the time to read the available information and instead let their ego take the driver’s seat. You may want to check out his paper titled “The Fragmentation of the Universe and the Devolution of Consciousness” (http://www.imagination-engines.com/documents/devo6.pdf) wherein he goes a little more in-depth into how the weights are changed. It is not that hard to find if one is truly interested. 🙂

