This was one of those panels with a stacked premise right in the title: “The Myth of the Intelligent Machine.” The panel, more or less as one, rejected that stacked premise. David G, with whom I had shared the panel on computers in SF literature, ran a very tight ship and moderated this one quite well. As part of our introductory remarks, he had each of us state whether we felt a machine intelligence could be created that had parity with a human intelligence. My answer was a resounding yes. I don’t remember what the overall definition of parity was, but mine was an intelligence that, given the same inputs, would achieve the same range of outputs. Thus, a machine intelligence emulating me, fed the panel through emulated optical and auditory senses, should produce responses that I could plausibly make. The non-determinism of the human animal, and the possibility of those outputs differing from day to day, was covered, but I pointed out that we take too narrow a view when we treat only the current sensory information as the inputs. I suggested that all the previous days are inputs too, and that if the machine intelligence were given the same information that put me in a bad mood, it is only reasonable that its responses would come from the grumpier end of the spectrum.
We talked about neural networks, the limits on learning, and that sort of thing. I brought up work by some researchers (I couldn’t remember which school they were from) who built backpropagation neural networks with delays closer to those of human neurons, where the firing time was more like 0.1 s per neuron rather than the near-instantaneous response of most AI systems. I had seen some of their work on picking speech out of audio with noise introduced, and the network could pick out things I found inaudible. Someone in the audience corroborated this, so I was glad I wasn’t misremembering it (or dreaming it altogether).
Somewhere late in the panel, I decided that machine representation of human-level intelligence, by any definition you want, was provably possible. If you accept the premise that with sufficient computing power an atom can be modeled exactly, and accept that interactions between atoms and molecules at the quantum level can eventually be modeled, then as an outer bound, the me sitting in that room could be modeled by brute force, simulating the volume of space I occupied. It would take more computing power than presently exists on Earth to model even a single atom, I’m sure, but if you accept those premises then you have to accept the possibility of human intelligence being emulated somewhere well past the singularity.
I remember Rick and Michael both having plenty of (humanly) intelligent things to say and being quite interesting, but sadly I can’t remember the specifics anymore. David, as I said, did a fine job, and overall this was quite a good panel, with lots of audience interaction and lots of smart things said in the crowd.