Friday, August 19, 2011

Life, Psychology, and Intelligence: A Three-Level Model of Intelligence (V2)

Artificial Intelligence (AI) has been around for many years, but it has not yet produced a comprehensive, successful solution, due to the lack of a measurement of intelligence and of a dynamics of intelligence.

Measurement instruments played critical roles in the development of dynamics: the clock, the telescope, the microscope, and so on. We do not yet have a comparably good way to measure intelligence.

The concept of computational complexity measures computation, not intelligence. Current supercomputers already have computing power and memory comparable to rough estimates for the human brain, yet they are nowhere close to human intelligence.
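A back-of-envelope comparison makes this concrete. The numbers below are rough, commonly cited estimates as of 2011; the brain figures in particular are contested and serve only to show the orders of magnitude:

```python
# Order-of-magnitude estimates only; the brain figures are contested.
SYNAPSES = 1e14              # ~10^14 synapses in a human brain
AVG_FIRING_RATE_HZ = 10      # ~10 Hz average neuron firing rate
brain_events_per_sec = SYNAPSES * AVG_FIRING_RATE_HZ   # ~1e15 per second

K_COMPUTER_FLOPS = 8.2e15    # K computer, ~8.2 petaflops (June 2011)

print(f"brain:      ~{brain_events_per_sec:.0e} synaptic events/s")
print(f"K computer: ~{K_COMPUTER_FLOPS:.0e} FLOPS")
# Comparable orders of magnitude of raw throughput -- and yet
# nothing comparable in intelligence.
```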

This is a clear indication that computation is different from intelligence, and therefore that the Turing machine and computational complexity theory are not the right theoretical framework for intelligence and AI.

The Turing machine and its equivalents are heavily influenced by deterministic models. Turning back to the dynamics of the real world, quantum theories and related results suggest the possibility of nondeterminism.
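The formal distinction is easy to state: a deterministic machine's transition function returns exactly one successor configuration, while a nondeterministic machine's returns a set of them. A minimal sketch of the two signatures (the type names are my own, purely illustrative):

```python
from typing import Callable, FrozenSet, Tuple

State, Symbol = str, str
Move = int  # -1 = move left, +1 = move right

# Deterministic Turing machine: exactly one successor
# for each (state, symbol) pair.
DeltaDTM = Callable[[State, Symbol], Tuple[State, Symbol, Move]]

# Nondeterministic Turing machine: a set of successors,
# among which a run may "choose".
DeltaNTM = Callable[[State, Symbol], FrozenSet[Tuple[State, Symbol, Move]]]
```

It is worth noting that classical nondeterministic Turing machines compute exactly the same functions as deterministic ones, so any nondeterminism suggested by quantum theories would have to go beyond this formal device.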

Some scientists have tried to build models of the human mind on quantum theories. However, the gap between the time scales of neuron firing and of excitations in microtubules on one side, and the decoherence time on the other, tells us that some links in the middle are still missing.

That is why I propose this three-level model, Life, Psychology, and Intelligence, to fill the gap. Psychology is a subdomain of life, built on top of the other parts of life. Intelligence is a subdomain of both life and psychology, built on top of the other parts of both.

Ilya Prigogine's theory of systems far from equilibrium is a good foundation for life phenomena. My goal is to further study and accumulate knowledge and models of psychology based on the theories of Prigogine and the Brussels-Austin group. Once a concrete enough foundation has been constructed for this middleware, we could combine quantum theories with intelligence models.
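For a concrete picture of what "far from equilibrium" means, the canonical toy model from Prigogine's school is the Brusselator, a chemical oscillator that sustains oscillations only while it is held away from equilibrium by a constant feed. Below is a minimal simulation sketch; the parameter values are illustrative only:

```python
# Brusselator: dX/dt = A - (B + 1)X + X^2 Y,  dY/dt = BX - X^2 Y.
# For B > 1 + A^2 the steady state (X, Y) = (A, B/A) loses stability
# and the far-from-equilibrium system settles into a limit cycle.
A, B = 1.0, 3.0              # B > 1 + A^2 = 2, so oscillations appear
x, y = 1.0, 1.0              # initial concentrations
dt, steps = 0.001, 50_000    # plain Euler integration up to t = 50

for step in range(steps):
    dx = A - (B + 1.0) * x + x * x * y
    dy = B * x - x * x * y
    x, y = x + dt * dx, y + dt * dy
    if step % 10_000 == 0:
        print(f"t = {step * dt:5.1f}   X = {x:.3f}   Y = {y:.3f}")
```

At equilibrium the concentrations would simply relax to constants; sustained structure in time appears only far from equilibrium, which is Prigogine's point about life.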

So far, most AI research is based on equilibrium methods, and these approaches find it difficult to make progress in many areas. People should probably try to refocus AI research on far-from-equilibrium methods, rather than only on equilibrium ones.

There are discussions of the ontological and the epistemic aspects in the Brussels-Austin approach. In the real world, depending on measurability, there are several possibilities (a toy sketch follows this list):
1) Measurable determinism: determined by measurable factors, such as the environment and internal states.
2) Unmeasurable determinism: determined by factors that are not measurable.
3) Nondeterminism: under this model, there could be such things as free will.
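The distinction is easier to see with a toy illustration from an observer's point of view. This is a minimal sketch of my own, purely hypothetical; note that a pseudorandom generator can only mimic case 3):

```python
import random

def measurable_deterministic(state: int) -> int:
    # 1) The observer can read `state`, so outcomes are predictable.
    return (3 * state + 1) % 7

_hidden = 5  # an internal variable the observer cannot measure

def unmeasurable_deterministic(state: int) -> int:
    # 2) Fully determined -- but by `_hidden`, which is unmeasurable,
    #    so to the observer it is indistinguishable from randomness.
    return (3 * state + _hidden) % 7

def nondeterministic(state: int) -> int:
    # 3) Genuinely undetermined by any prior state; `random` here
    #    merely stands in for that, since software cannot provide it.
    return random.choice([(3 * state + k) % 7 for k in range(7)])
```

From the outside, cases 2) and 3) look identical, which is exactly why the measurability question is epistemic as well as ontological.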

Systems far from equilibrium, combined with these possibilities of measurability and unmeasurability, could be the way to capture the complexity of psychology and intelligence.

Within my three-level model, psychological concepts such as 'will' or 'free will' could be re-studied on the basis of systems far from equilibrium. New concepts and models of intelligence could be proposed from further studies.

Since life is a typical far-from-equilibrium system, very different from other systems, the Brussels-Austin approach could even gain hints from studies based on my proposal, re-energizing and pushing forward its research.

In this proposal, there are eight main points which, if no one has proposed them before, are my new contributions:
1) The concept of a dynamics of intelligence.
2) AI has failed as a comprehensive solution so far due to the lack of a measurement of intelligence and of a dynamics of intelligence.
3) Computation is different from intelligence. Computational complexity measures computation, not intelligence. The Turing machine is not the right theoretical framework for artificial intelligence.
4) The gap between the time scales of neuron firing and of microtubule excitations and the decoherence time does not exclude the possibility of building psychology and intelligence models on quantum theories; people just need to build the missing middleware between them first.
5) Far-from-equilibrium theories could be used as the foundation of the middleware mentioned in 4).
6) AI research would do better to refocus on far-from-equilibrium methods, rather than only on equilibrium methods.
7) The Brussels-Austin approach could use life phenomena as good study targets to gain hints, re-energize its efforts, and push its research forward.
8) A three-level model of intelligence: Life, Psychology, and Intelligence. Psychology depends on the other parts of life. Intelligence depends on the other parts of both life and psychology.

If someone has already proposed some or all of these ideas, please let me know. I would really appreciate it.

Saturday, August 6, 2011

The Proposal for a Three-Level Intelligence Model

I have written three articles for my proposal of a new intelligence model; it is a pure research proposal.

In The Things AI and Robots Can Do Well and Not Well So Far, I summarize the current status of AI and robotics to justify the need for more advanced AI theories and methodologies.

In Life, Psychology, and Intelligence, I introduce a three-level model of intelligence based on quantum theories and far-from-equilibrium systems. Measurability and unmeasurability strongly affect knowledge models, and life is a type of far-from-equilibrium system; these are critical factors for a generic intelligence model.

In Reductionism and Whole/Parts, I describe the philosophical reasoning for building this model on low-level theories: quantum theories and far-from-equilibrium systems. As explained there, this model is not pure reductionism.

This model assumes the existence of the physical world. Although unmeasurability is mentioned, the model does not take an agnostic position, because things can be measured to certain degrees, and we do not yet know where the upper limit is, even if it DOES exist.

The Difference between Life and Machine

Machines are very useful. However, the tasks they can do are constrained by many factors, and the range of those tasks now grows slowly. To explore the potential of AI to sustain that growth, people need to understand the differences between life and machines.
Mechanical clocks and watches of extremely high precision and sophistication appeared long before computers, as did the old automatons. The steam engines and centrifugal governors of the 18th century could generate power far better than humans. However, none of them had AI. They were only machines.

It is obvious that there are still huge differences between life and current machines.

The difficult part is what causes the differences, and by how much they could be reduced. Similar debates on this topic have been repeated again and again for decades without much progress.

This is a clear indication that we do not yet have a proper theoretical framework for AI to answer this question. We cannot even define the most basic concept for AI: intelligence itself.

Doing some work better than humans is not evidence of intelligence. A great machine does not necessarily have great AI; AI itself is still very limited in its applications.

There is a concept of strong AI, meaning "the intelligence of a machine that can successfully perform any intellectual task that a human being can" (from Wikipedia). Obviously, there is no strong AI so far.

We do not even have a way to evaluate the current status, to measure how far away AI systems are from strong AI, which may create illusions for many people.

We have the concept of weak AI, which does not "match or exceed the capabilities of human beings, as opposed to strong AI" (from Wikipedia). However, weak AI is only a vague phrase; it does not measure how strong or weak an AI system is.

This is another indication that people do not yet have a proper theoretical framework for AI.

To get a good estimate of how strong an AI system is, we could redefine strong AI as strong human AI, and break the big category of weak AI down into finer categories such as the following (a sketch of this scale as a data structure follows the list):
1) strong ape AI: matches or exceeds apes' capabilities in all intellectual aspects
2) strong monkey AI: matches or exceeds monkeys' capabilities in all intellectual aspects
3) strong cat AI: matches or exceeds cats' capabilities in all intellectual aspects
4) strong bee AI: matches or exceeds bees' capabilities in all intellectual aspects
5) strong fish AI: matches or exceeds fishes' capabilities in all intellectual aspects
etc.
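As a sketch, this fine-grained scale can be written down as an ordered type, so that a system is tagged with the highest level it fully matches. The names and ordering below are my own illustration, following the list above, not an established taxonomy:

```python
from enum import IntEnum

class StrongAILevel(IntEnum):
    """Highest form of life whose intellectual capabilities a system
    matches or exceeds in ALL aspects (illustrative ordering)."""
    NONE = 0
    FISH = 1
    BEE = 2
    CAT = 3
    MONKEY = 4
    APE = 5
    HUMAN = 6   # strong AI in the classical sense

def is_strong_ai(level: StrongAILevel) -> bool:
    # The classical strong AI claim is exactly the human level.
    return level >= StrongAILevel.HUMAN

# As the next paragraphs estimate, current systems sit under CAT:
current = StrongAILevel.BEE          # placeholder guess, not a measurement
print(current < StrongAILevel.CAT)   # True: no strong cat AI yet
```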

People can add more categories as they like. With these fine-grained categories, people can at least have a clear sense of the differences between AI systems and a specific form of life.

It looks as though we do not even have strong cat AI yet. The technological singularity is still far away.

Strong human AI would be a superset of both life and machine. We do not yet know whether it is achievable; it remains the last boundary to cross between life and machine. I propose a three-level model of Life, Psychology, and Intelligence, and I hope further research on this model can illustrate how close machines can be brought to life and to humans.