Gu Test: A Progressive Measurement of Generic Intelligence (V4)

Abstract
Do computers already have human-level intelligence? Could they understand and process the semantics of irrational numbers without knowing their exact values? Humans can. What about uncountable sets? Both are necessary for building sciences and for real-world modelling. Does human intelligence exceed the power of the Turing Machine? This paper explains that the behavior-based Turing Test cannot measure some intrinsic human intelligence, due to the bottleneck in expression, the bottleneck in capacity, and the blackbox issue, among others. Nor does it provide a progressive measurement up to human-level intelligence. Similar issues exist in other current testing methods, owing to their behavior-based, knowledge-based, or task-based limitations. Measurements based on intrinsic mechanisms could provide better testing. This paper identifies several design goals to improve the measurement. Gu Test, a progressive generic intelligence measurement with levels and potential structures, is proposed based on these goals, to measure the intrinsic mechanisms for semantics, potentials, and other intelligence. The semantics of irrational numbers and uncountable sets are identified as two test levels. More work needs to be done to expand the test feature sets and structures, and to suggest directions for future Artificial Intelligence (AI) research.
1. The Measurement of Generic Intelligence
Machines like clocks could already do some things better than humans long ago. However, this does not mean these machines have generic intelligence, or human-level intelligence. So some measurement of intelligence is needed.
Before discussing the measurement of generic intelligence, there is a prior question: is generic intelligence needed at all? If throwing in more computing power and designing better algorithms based on the Turing Machine model can solve all problems, there is no need for generic intelligence.
Unfortunately, computers still lack certain mechanisms present in human intelligence, and humans so far have no idea how to add them. Computers cannot write software from scratch; they only run software written by humans, or generate code specified by humans. More generally, humans are highly adaptive and innovative, can learn many types of knowledge and skills, and can switch quickly from one task to another. Developing intelligence for scientific research is even more challenging.
Due to this adaptive, innovative, and evolutionary nature, it is extremely difficult, if not impossible, to define generic human intelligence accurately. But there are obviously big differences between current computers and human intelligence, and testing methods could be used to measure such differences. Clocks can measure time without an accurate definition of time.
The Turing Test [1] was the first such testing method proposed. Several others were suggested in later years. They can be classified as indistinguishability (or imitation) tests, knowledge aggregation tests, task aggregation tests, etc.
Testing methods can only test a small portion of intelligence, due to limited time and availability, so what to test and how to test are critical. However, the existing testing methods cannot test some intrinsic human intelligence capabilities, such as the ability to understand and use the semantics of irrational numbers and uncountable sets, which are fundamental to sciences and real-world modelling. Nor can they test the potential of humans to develop better capabilities.
Current computers can only approximate the values of irrational numbers, with very limited semantics. Due to the sensitivity to initial conditions and the exponential divergence in nonlinear chaotic phenomena, such approximations cause problems. In reality, nonlinearity is the norm rather than the exception.
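To make this concern concrete, here is a minimal Python sketch, added for this discussion and not part of the original argument: two floating-point approximations of the same irrational initial condition, differing only far beyond everyday precision, quickly become unrelated under the chaotic logistic map.

```python
import math

def logistic(x, r=4.0):
    # Logistic map; fully chaotic at r = 4
    return r * x * (1.0 - x)

# Two approximations of the irrational value 1/sqrt(2),
# differing only around the 15th decimal place.
x = 1.0 / math.sqrt(2.0)
y = x + 1e-15

for _ in range(60):
    x, y = logistic(x), logistic(y)

# After a few dozen iterations the two trajectories are unrelated,
# even though the initial truncation error was tiny.
print(abs(x - y))
```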
In fact, nonlinearity and the butterfly effect were the main frustrations to von Neumann's meteorology ambitions. It is highly questionable whether algorithms based on the Turing Machine model could achieve generic intelligence.
Since the existing test methods cannot really measure generic intelligence, they cannot provide good guidance for AI research. Indeed, very little progress in generic intelligence has been made during the past decades. It is time to change this. Measurements based on intrinsic mechanisms could provide better testing.
The following sections first discuss the bottlenecks and issues in the Turing Test and other existing test methods. Several design goals are identified to address these issues and provide better measurement. Gu Test is proposed to accomplish most of these design goals. Finally, some directions for future research are discussed.
2. Turing Test and the Chinese Room Concern
Alan Turing described an imitation game, now known as the Turing Test, in his paper Computing Machinery and Intelligence: it tests whether a human can distinguish a computer from another human only via communication, without seeing either of them.
The Turing Test provides only two results: pass or fail. It cannot measure partial generic intelligence, i.e. how close a computer system is to generic human intelligence. The test results depend on the subjective judgement of testers, without objective criteria. Objective criteria are needed in scientific experiments, especially for phenomena in the macroscopic physical world.
John Searle also raised the Chinese Room issue [2]: computers could pass this test by symbolic processing without really understanding the meanings of those symbols. Due to the limited number of phrases in real usage, it is possible to build a computer system with enough associations between phrases that humans cannot distinguish the system from humans within limited testing time. However, this does not mean such computers have human-level intelligence.
The Chinese Room argument also raises the semantics issue: could computers really understand the semantics of natural languages?
More importantly, there are the bottleneck in expression, the bottleneck in capacity, and the blackbox issue, among others, as described below, which make the Turing Test unable to really test generic intelligence.
The Turing Test tests by interrogation, so it can only test those human characteristics that are already well understood by humans and can be expressed in communication. In certain environments, people manage to understand each other through body language, rich tones, analogy, metaphor, implication, suggestion, etc., which cannot be expressed in pure symbolic processing. So a Turing Test conducted behind veils is not the right way to test these intrinsic intelligence abilities. Even without veils, current test methods still cannot test abilities or potentials that humans do not yet understand well, and which therefore cannot yet be expressed well. This is the bottleneck in expression.
There is also a bottleneck in the capacity of communication or storage: even if those rich, subtle varieties of information could be digitized, their size could far exceed the capacity of communication or storage. Current von Neumann architectures have only finitely many memory units. The Turing Machine has infinitely many, but countably many, memory units. Could the Turing Machine be enhanced with uncountably many memory units?
There is also a blackbox issue. Say a system can produce a huge number of digits of an irrational number. It is impractical to wait for these digits one by one within limited testing time. However, it is straightforward to examine the code and see whether it implements such a feature correctly.
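A toy illustration of the point, under the assumption that the claimed feature is "produce decimal digits of sqrt(2)" (the function name and approach below are hypothetical, chosen only for this sketch): a blackbox tester would have to sit through the digits, while a whitebox tester can read a few lines of code and verify by reasoning that they compute floor(sqrt(2) * 10^k) exactly.

```python
from math import isqrt

def sqrt2_digits(k):
    # First k+1 decimal digits of sqrt(2), via exact integer arithmetic:
    # isqrt(2 * 10**(2*k)) == floor(sqrt(2) * 10**k)
    return str(isqrt(2 * 10 ** (2 * k)))

print(sqrt2_digits(40))  # 14142135623730950488016887242096980785696
```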
The Chinese Room issue, the bottlenecks in expression and capacity, and the blackbox issue all stem from the testing methods themselves. Because of these problems, certain intrinsic intelligence and potentials cannot be tested with such methods.
However, with whitebox methods, people could measure mechanisms instead of behaviors. The designers of the systems explain what they implement in their software and hardware, and how. Testers analyze whether these claims are true based on reasoning, and examine the systems to see whether they are implemented as expected. This is the procedure of Gu Test.
3. Other Test Methods
There are several other methods aiming to test generic intelligence. Although some of them provide some test levels, they cannot measure higher-level intelligence close to that of humans, because they do not measure the mechanisms behind generic intelligence. So they miss the understanding and processing of real semantics, and cannot test the potentials humans have for further development.
One is the Feigenbaum test. According to Edward Feigenbaum, "Human intelligence is very multidimensional", "computational linguists have developed superb models for the processing of human language grammars. Where they have lagged is in the 'understand' part", "For an artifact, a computational intelligence, to be able to behave with high levels of performance on complex intellectual tasks, perhaps surpassing human level, it must have extensive knowledge of the domain." [3].
The Feigenbaum test is actually a good method for testing the knowledge in expert systems. It tries to produce generic intelligence by aggregating many expert systems, which is why it needs to test extensive knowledge.
However, since these types of knowledge are still expressed and stored as symbolic data, the bottlenecks in expression and capacity still exist, and it is still a blackbox test. Although it tries to address the "understand" part, there is so far no way to test the real semantics of knowledge from such symbolic data.
Another issue with the Feigenbaum test is that individual humans may not have very extensive knowledge in many domains, yet they have certain potentials. So testing extensive knowledge may be unnecessary, if not impossible. What needs to be figured out is how to test these potentials.
The Minimum Intelligent Signal Test (MIST) [4] is similar to the Feigenbaum test, but it only uses binary "yes" or "no" answers as test results, so it can leverage statistical inference to analyse them. The bottlenecks in expression and capacity still exist, and it is still a blackbox test. By using binary answers, it oversimplifies the knowledge, with even less understanding of semantics than the Feigenbaum test.
Another method is Shane Legg and Marcus Hutter's solution [5], which is agent-based and is a good test of performance on specific tasks. In their framework, an agent sends its actions to the environment and receives observations and rewards from it. Using this framework to test generic intelligence assumes that all interactions between humans and their environment can be modeled by actions, observations, rewards, etc. This assumption has not been validated. The bottlenecks in expression and capacity still exist in the definitions of actions, observations, and rewards.
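For readers unfamiliar with that framework, the agent-environment loop it presupposes can be sketched in a few lines of Python; the interface names here (Environment, Agent, run_episode) are illustrative assumptions of this sketch, not Legg and Hutter's formal definitions.

```python
from typing import Protocol, Tuple

class Environment(Protocol):
    def step(self, action: int) -> Tuple[object, float]:
        """Apply an action; return (observation, reward)."""

class Agent(Protocol):
    def act(self, observation: object, reward: float) -> int:
        """Choose the next action from the latest observation and reward."""

def run_episode(agent: Agent, env: Environment, steps: int) -> float:
    # The whole interaction is reduced to this loop: everything the agent
    # can ever be credited for must flow through (observation, reward).
    total, observation, reward = 0.0, None, 0.0  # no observation before the first step
    for _ in range(steps):
        action = agent.act(observation, reward)
        observation, reward = env.step(action)
        total += reward
    return total
```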
Furthermore, humans have very diversified specialties, and it is impractical to aggregate performance over a very large number of tasks. Humans have the potential to learn new tasks and to innovate: they can gain deeper observations, take better actions, and obtain rewards beyond those in the specified task definitions. Such potentials cannot be tested by blackbox performance testing on specified tasks. So this method does not really test generic intelligence either.
If the Turing Test were enhanced with vision and manipulation abilities, it would become similar to Shane Legg and Marcus Hutter's solution, with interrogation turning into task performance. Even without the veils, these bottlenecks and issues still exist.
In summary, the existing testing methods do not measure generic intelligence as well as expected. As a result, the study of generic intelligence still lacks direction. To design a better measurement of generic intelligence, the existing bottlenecks and issues should be resolved, and design goals should be identified to provide good directions and better solutions.
4. The Design Goals for Better Measurement of Generic Intelligence
Based on the analysis in the previous sections, some design goals are suggested here:
1) Resolve the Chinese Room issue, i.e., test the real understanding of semantics, not just behavior imitation or symbolic processing.
2) Resolve the bottleneck in expression by not relying purely on interrogation. Find ways to test those intrinsic intelligence abilities that have not yet been understood and expressed well.
3) Resolve the bottleneck in capacity by leveraging some properties of concepts and semantics.
4) Use whitebox tests to examine the implemented mechanisms directly.
5) Involve as little domain knowledge as possible, since regular humans may not have much knowledge in specific domains; but find ways to test the potential to develop intelligence.
6) Develop leveled test schemes up to generic human intelligence, to measure continuous progress in intelligence.
7) Develop a framework to test structured and associated intelligence, adaptive and innovative abilities, diversified specialties, etc.
5. Gu Test
Based on these design goals, Gu Test is proposed. It comprises a testing procedure with selected testing features and structures, so it can measure the intrinsic mechanisms of intelligence. Initially it includes two test levels: the understanding and processing of the semantics of irrational numbers, and of uncountable sets.
More levels and structures could be added in the future. However, to keep testing the full range of intelligence feasible, it should include only critical features, and as few of them as possible.
Humans can derive new uses of irrational numbers without knowing the exact values of these numbers; obviously they understand the semantics. The situation is similar with uncountable sets, but at a more difficult level, whereas regular people with an average education have the potential to understand irrational numbers. These forms of intelligence are critical to sciences and real-world modelling.
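As a rough contrast, and purely as an illustration added here (SymPy is not mentioned in the original proposal), a symbolic-algebra library can manipulate sqrt(2) exactly without ever producing its decimal value, which hints at the gap between semantic handling and numeric approximation:

```python
import sympy

r = sympy.sqrt(2)                  # an exact symbolic object, not a float
print(r.is_rational)               # False: the system records that it is irrational
print(sympy.simplify(r * r - 2))   # 0, exactly, with no approximation
print(float(r))                    # only here is a lossy decimal approximation made
```

Whether such symbolic machinery amounts to the kind of understanding Gu Test asks about is exactly the sort of claim the whitebox procedure below is meant to examine.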
Gu Test tests whether computers or machines have such intelligence. Humans possess these abilities or potentials, but do not yet understand why and how they work, and cannot yet express these semantics, potentials, and intelligence as pure symbolic data.
It is a whitebox test. The test procedure is as follows:
1) It is up to the designers of the systems to explain what semantics, potentials, or other intelligence they want to implement, and how. In this way, Gu Test does not restrict what the designers implement or how, and allows full exploration.
2) Testers analyze whether these claims are true or false based on reasoning. The interpretation and representation of intelligence features can only be judged by reasoning.
3) Testers examine the software and hardware of the systems to see whether these mechanisms (including whatever representation of intelligence was accepted in step 2) are really implemented as expected.
This procedure can be applied to the selected features, including irrational numbers and uncountable sets, to other intelligence features added to Gu Test in the future, or to features claimed by customers. The procedure can also be applied to lower-level claims, such as whether certain mechanisms at the physical, biological, or psychological level are accomplished.
The test does not rely on blackbox interrogation, so it opens the door for designers' and testers' imaginations to test whatever intelligence or mechanisms humans have, without the external bottlenecks in expression or capacity stemming from the testing method itself.
The irrational number is a primitive concept developed in Pythagoras' age; it is necessary to many domains, but involves very little domain-specific knowledge. The uncountable set is an advanced concept used in modern sciences and mathematics. Physical semantics could lie in completely different dimensions, and adding intelligence in different domains would pose very different challenges, although concepts such as time, distance, and energy could be good candidates.
The current efforts aim to achieve design goals 1) to 6). The work to meet goal 7), i.e., to test structured and associated intelligence, adaptive and innovative abilities, diversified specialties, etc., is left to future research.
6. Comparison With Other Test Methods
As discussed, Gu Test is very different from indistinguishability (or imitation) tests, knowledge aggregation tests, and task aggregation tests. It selects only critical testing features, and as few of them as possible. It is a whitebox test: it requires human designers to explain what intelligence their systems implement and how, and human testers to analyze whether these claims are true and to examine the systems to see whether they implement these mechanisms as expected. So it tests intrinsic mechanisms instead of behaviors, knowledge, or tasks.
It does not suffer from the bottlenecks in expression or capacity that stem from the testing method, and could test higher-level intelligence, such as semantic understanding, up to or even beyond human intelligence.
Gu Test represents a complete paradigm shift from previous test methods. It provides guidance and insights related to generic human intelligence, without restricting how these are to be implemented.
7. Future Research
Much more work needs to be done to add more test levels to Gu Test and to meet design goal 7).
The analysis of the bottlenecks and issues of the Turing Test naturally leads to questions about the power and limitations of the Turing Machine and the von Neumann architecture. This paper does not conclude which platforms or architectures are better for generic intelligence, as long as they can truly pass the test. Rather, it opens the door for full exploration.
To really understand the essentials of intelligence, people have to study the history of knowledge development, including philosophy, mathematics, and the sciences. A reasonable option is to develop intelligence models based on a multi-level structure of physics, life sciences, and psychology.
References
[1] Turing, A. M., 1950, "Computing machinery and intelligence". Mind 59, 433–460.
[2] Searle, John R., 1980, "Minds, brains, and programs". Behavioral and Brain Sciences 3 (3): 417-457.
[3] Feigenbaum, Edward A., 2003, "Some challenges and grand challenges for computational intelligence". Journal of the ACM 50 (1): 32–40.
[4] McKinstry, Chris, 1997, "Minimum Intelligent Signal Test: An Alternative Turing Test". Canadian Artificial Intelligence (41).
[5] Legg, S. & Hutter, M., 2006, "A Formal Measure of Machine Intelligence". Proc. 15th Annual Machine Learning Conference of Belgium and The Netherlands, pp. 73-80.
Different Approaches For Knowledge System Development (v4)
1) Introduction
To really understand how humans develop intelligence and knowledge systems, people have to retrace the whole history of philosophy and mathematics back to ancient times, and study various kinds of methodologies for different purposes.
This article is not a complete review of philosophies and sciences. It only addresses the essentials and methodologies of knowledge development, in the interests of sciences, philosophy, education, artificial intelligence, etc. So although Dmitri Mendeleev and Albert Einstein are extremely important scientists, they did not contribute significantly different approaches; essentially they followed the Galileo-Newton approach, the classic scientific approach.
This article does not try to cross the boundary between knowledge and religions. Only religious rites are mentioned.
In this paper, "approach" means a coherent approach. Coherence does not guarantee correctness; however, incoherence is always prone to problems and errors.
2) Various Approaches
Several approaches from ancient times are identified here: the Pythagoras-Plato approach; the Socrates-Stoicism approach; the Euclid-Archimedes approach; the Yi-Jing approach from ancient China; approaches from ancient India; and approaches from other countries.
Some approaches from the medieval age played important roles in sciences: Ibn al-Haytham's approach, Al-Biruni's approach, and Avicenna's approach, among others.
The Galileo-Newton approach, the foundation of current sciences, evolved from the Euclid-Archimedes and Ibn al-Haytham approaches, but there are differences between them. Some non-classic approaches, from Charles Darwin, Adam Smith, Sigmund Freud, etc., also differ from the Galileo-Newton approach.
The following sections first discuss the strengths and limitations of the first three ancient approaches, then the medieval approaches and the Galileo-Newton approach. Several non-classic approaches and important theoretical issues are also discussed.
The ancient approaches from Yi-Jing, India, and other countries will be discussed in separate articles, if possible.
3) Pythagoras-Plato Approach
Thales was a pioneer of mathematical proof. The Egyptians and Babylonians might have known Thales' Theorem before him, but he was likely the first to provide a valid proof of it.
Although Thales tried to explain natural phenomena without recourse to mythology, he was a hylozoist who believed everything is alive and that there is no difference between the living and the dead. He did not build a coherent approach to developing knowledge, but he had a significant influence on Pythagoras.
Pythagoras was probably the first to build coherent methodologies for developing knowledge and systematic views. He formed a school of scholars to study philosophy, mathematics, music, etc.
The Pythagoreans are famous for the Pythagorean Theorem. They were pioneers in mathematics as a systematic study. They also proposed a non-geocentric model in which the Earth revolves around a central fire, suggesting that neither the Sun nor the Earth is the center of the universe. They might have developed or formulated the idea that the Earth is round.
Pythagoras also taught religious rites and practices in his school, and from these came his beliefs. He and Plato believed in a perfect and persistent abstract world alongside an imperfect and sensible one, and pursued the beauty of abstract perfection. Plato followed this philosophy and developed it to maturity.
The Pythagoreans made many early contributions to knowledge. They tried to construct complicated things from simpler ones: they believed whole numbers to be simple and perfect, and tried to represent all numbers as quotients of two whole numbers. Here they faced the first mathematical crisis: some numbers, in fact most numbers, cannot be expressed in such a way. These were called irrational numbers.
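In modern notation, the classic argument behind that crisis can be reconstructed as follows (a reconstruction added for the reader, not part of the original text):

```latex
% Reconstruction of the classic argument that \sqrt{2} is irrational.
\text{Suppose } \sqrt{2} = \tfrac{p}{q} \text{ with whole numbers } p, q \text{ sharing no common factor.} \\
\text{Then } p^2 = 2q^2 \;\Rightarrow\; p \text{ is even} \;\Rightarrow\; p = 2m
\;\Rightarrow\; q^2 = 2m^2 \;\Rightarrow\; q \text{ is even,} \\
\text{contradicting the assumption that } p \text{ and } q \text{ share no common factor.}
```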
This discloses the problem of the Pythagoras-Plato approach: the way they constructed or interpreted mathematics may not fit reality, and the beauty of mathematics may not be able to explain the imperfect and sensible world. Constructivism is important, but how should one construct, and to what extent does it work?
Such an issue even has consequences for modern sciences and technologies: how are macroscopic nonlinearity and microscopic quantum phenomena related to irrational numbers? How should irrational numbers be handled on computers?
The issue of irrational numbers is actually related to measurement. Euclid described it in a better way, as mentioned in the section on the Euclid-Archimedes approach below, though this is not properly understood by many modern mathematicians. Measurement is in turn associated with nonlinearity, chaos, deterministic uncertainty, computer modelling, etc. So people had better think of this issue as the tip of an iceberg rather than a solved problem they can forget about.
4) Socrates-Stoicism Approach
Socrates held different beliefs. He did not take the beauty of abstraction as a doctrine. His Socratic method asked people to question each other: when people discuss their arguments explicitly, they can find the problems and understand them better. The Socratic method is an important element of the scientific approach.
Socrates asked many questions but gave few answers. This might be a good attitude. Socrates taught by being a role model: by admitting his own ignorance, he suggested that other people also admit theirs.
Admitting one's own ignorance is not beautiful, not even pleasant, but it is a critical step toward further progress. However, this attitude is offensive to many people. Socrates was eventually voted to death, probably under the accumulated anger of others.
Then Socrates acted as a role model again, accepting death to demonstrate the rule of law.
Also, Aristotle believed it was Socrates who identified the methods of definition and induction, which would be essential to the future sciences.
Socrates promoted rationality and ethics, which were followed by Cynicism and Stoicism. However, the strange behaviors of the Cynics actually showed the frustrations faced by this approach: they did not find effective ways to discover much more knowledge. That task would later be achieved by the Euclid-Archimedes, Ibn al-Haytham, Galileo-Newton, and Darwin approaches, among others.
Sophism was an enemy of Socrates, and it was also an enemy of the future sciences. It does not provide a coherent approach; rather, it tries to gain advantages by twisting the facts.
5) The Confusion of Aristotle's Philosophy
Aristotelianism is not coherent either. Although Aristotle admired Socrates and claimed to be against sophism, his way is actually an incoherent mixture of Socrates, Plato, and sophism. So he included assertions such as "a flying arrow is at rest" in his book. He may have enjoyed such a twist, and could not distinguish sophism from real profundity.
His syllogism is an unsuccessful summary and imitation of the methods of mathematical proof. Aristotle tried to develop a perfect logic to guarantee the correctness of thinking. However, this only shows that he did not really know how mathematics works. He missed a very important factor: how to make the premises of a syllogism valid and concrete, i.e., the first principle. Euclid later provided a solution to this issue for some problems.
Furthermore, he was not aware that logic based on ignorance or shallowness could suppress innovation and scientific progress. His assumed profundity is a mental trap which hindered scientific progress for a very long time. His problems were exposed by Galileo and Gödel later.
Since Aristotle did not know how to apply logic and reasoning correctly, he did not know how to build coherent theories. In his book The Physics he simply put together whatever he knew or believed into huge collections without paying attention to coherence, which brought very limited value to physics, but much that was misleading.
Aristotle was actually a naturalist. He made some contributions to zoological taxonomy based on observation, though he was not the first to use observation.
Aristotle's world view was later invalidated by Galileo, and his logic was undermined by Gödel. His single logic would lead to Social Darwinism, which actually takes positions opposite to real Darwinism on many issues, as mentioned in later sections.
His philosophy is to develop knowledge only good enough for real life, which might help people attract funds, build influence with assumed profundity (declaring that some simple logic based on ignorance or shallowness is the only correct mindset, which is really fascinating), or even gain power.
So his philosophy is regarded by many people as a kind of realism.
6) Euclid-Archimedes Approach
Euclid was the first to establish a concrete systematic theory for a domain. He was more like a scientist than a pure mathematician.
He was not much concerned with the beauty of abstraction. In his proof about the number of primes, he used the word "measure" instead of saying that one number divides another. Many modern mathematicians think that not being abstract enough is Euclid's limitation.
However, Euclid's use of measurement for division implies that the exact value of an irrational number cannot be measured by rational numbers. Measurability is still a critical problem in modern sciences and computer modelling. So such an expression is not Euclid's limitation, but his insight: he caught the essence of the problem.
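For reference, the prime-number proof mentioned above (commonly cited as Elements IX.20) can be restated in his "measuring" language; this restatement is a modern paraphrase added here, not a quotation of Euclid:

```latex
% Given any primes p_1, \dots, p_n, consider
N = p_1 p_2 \cdots p_n + 1 .
% Each p_i measures p_1 p_2 \cdots p_n but cannot measure N,
% for it would then measure their difference, 1, which is impossible.
% So some prime outside the list measures N, and no finite list of primes is complete.
```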
He usually constructed solutions rather than merely proving the existence of unknown solutions. His system remains incredibly concrete after more than two thousand years, much more concrete than many parts of modern mathematics.
He constructed the system of geometry from a few simple axioms, and derived the other theorems from them. Although this looks like the Pythagoreans' way of constructing complicated numbers from simpler whole numbers, they are different.
Euclid guaranteed the correctness of the derived theorems by making the axioms simple and straightforward, thus self-evident. So Euclid solved the first-principle issue to a certain extent. The Pythagoreans did not show why whole numbers could express all numbers; they simply believed it to be beautiful. Euclid's geometry is a good example of correct logic.
Euclid's approach works mainly in mathematics and cannot find all the theorems and laws that real needs demand. When Euclid worked on optics, his approach to first principles faced a problem: he had no justified reason to choose emission theory over intromission theory, though at that time there was no justified reason for the opposite choice either. These limitations were solved, only partially, by the Ibn al-Haytham and Galileo-Newton approaches.
Euclid was truly regarded as a scientist in the early days. Only after Galileo founded physics did Euclid retire from the scientists and become a mathematician only.
Archimedes, one of the greatest engineers, was highly influenced by Euclid.
7) Medieval Approaches
Pythagoras was very insightful in mathematics, philosophy, music, religious practices, etc. Plato developed the philosophy in his style to maturity. Socrates and the Stoics knew how to develop rationality and ethics. Euclid and Archimedes designed theoretical and real systems in rigorous forms.
They all made big contributions to knowledge development and still have a big influence today, but also with problems, and their accomplishments are of very limited extent. It was some medieval scholars who brought in new factors critical to the future sciences.
Ibn al-Haytham used a procedure for scientific research: observe, form conjectures, test and/or criticize a hypothesis with experimentation, etc. He used this procedure to prove intromission theory.
However, this procedure is not a complete scientific approach. He could only use it to reach individual results, still under Euclid's geometric view. Although he did some brilliant work in optics, because of this geometric view and his way of thinking he could not gain a deep and comprehensive understanding of physics.
More importantly, Ibn al-Haytham did not recognize the problems in Aristotle's world views. So he did not know how to get out of the mental trap created by Aristotle's philosophy, to establish new paradigms in the sciences and develop systematic theories, which are critical to scientific revolutions. The task of changing mentality was left to Copernicus, Galileo, Darwin, and others.
Al-Biruni put an emphasis on experimentation. He tried to conceptualize both systematic errors and observational biases, and used repeated experiments to control errors. He might also have been a pioneer of comparative sociology.
Avicenna discussed the philosophy of science and summarized several methods for finding a first principle. He developed a theory distinguishing the inclination to move from the force. He discussed the soul and the body, the perceptions, etc. He is probably a very important scholar misunderstood by many modern people.
Only after their efforts did big scientific progress become possible.
8) Galileo-Newton Approach
Nicolaus Copernicus constructed a systematic theory deviating from Aristotle's world views, thus starting the new age of sciences. However, he (and Johannes Kepler) still followed the Euclid-Archimedes approach, a geometric view, and left the problems of world views and the mental trap unexplained.
It was Galileo Galilei who formed a much more substantial understanding of the physical world and triggered a scientific revolution with his comprehensive and systematic thinking. In his remarkable work "Dialogue Concerning the Two Chief World Systems", Galileo combined the Socratic method and Ibn al-Haytham's method. More importantly, he showed the differences in world views and asked people to get out of the mental trap of Aristotle's philosophy. His systematic thinking and world view led to new scientific theories and established new paradigms in the physical sciences. This had not been done by Ibn al-Haytham's approach, which could only reach individual, isolated conclusions.
Isaac Newton developed his great theories based on Galileo's approach, so this classic scientific approach is here named the Galileo-Newton approach. Although Newton did great work, he did not explain why he was able to do it, i.e., he did not summarize well what the real differences are between their approach and Ibn al-Haytham's, or what the limitations of their approach are. It is the responsibility of future philosophers to explain the mechanisms behind these.
The Galileo-Newton approach mainly works in worlds where the effects of life are not considered. Thomas Robert Malthus and John Maynard Keynes followed this classic scientific approach and tried to apply it to life and human societies. They made some progress, but very limited progress, and they missed something very important to human beings.
Actually this approach does not work well in artificial intelligence, psychology, economics, and the other humanities and social sciences, all fields that relate to humans. Measurability and modelling are still big concerns. People could look at Newton's four rules of reasoning, stated in his Mathematical Principles of Natural Philosophy, to figure out what the problems really are:
"1.We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
2.Therefore to the same natural effects we must, as far as possible, assign the same causes.
3.The qualities of bodies, which admit neither intension nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.
4.In experimental philosophy we are to look upon propositions collected by general induction from phænomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions".
Actually, many important knowledge systems were not built with this approach.
9) Non-Classic Approaches
Great academic contributors such as Charles Darwin, Adam Smith, and Sigmund Freud did not build their theories with the Galileo-Newton approach. These three are listed here in descending order of closeness to the sciences. Not surprisingly, their theories relate to humans.
Some people classify Charles Darwin as a follower of Francis Bacon's methodology. This is not true. Darwin did much more than observation, induction, and other empiricist methods to construct his great theories of life and human history. He is the greatest historian in history. He established a paradigm that combines sciences, life, and humanity.
Although Alfred Russel Wallace is regarded by many people as a co-founder of the theory of evolution, I only discuss Charles Darwin here, for purely academic reasons.
It is also the responsibility of future philosophers to illustrate the differences between Darwin's approach and Ibn al-Haytham's approach, and between Darwin's approach and the Galileo-Newton approach.
People should also pay attention to the differences between Darwinism and Social Darwinism, which actually take opposite positions on many issues. Darwinism cherishes diversity and does not agree with the single logic of Aristotelianism.
Epicurus, Arthur Schopenhauer, and Friedrich Nietzsche also illustrated many important understandings related to human nature. Their ways are also different from the classic approaches.
10) The Progress in Natural Sciences So Far
After the Galileo-Newton approach was established, many theories in the natural sciences were proposed, such as the periodic table, genetics, the relativity theories, and quantum theories. They mainly followed the Galileo-Newton approach. The relativity theories are refinements of Newtonian dynamics, just as Kepler refined Copernicus' circular orbits with elliptical orbits. Today people know that planets do not run in elliptical orbits either, so people do not know whether the relativity theories are the final theories.
There are good examples of work across different approaches, such as the synthesis of evolution and genetics in the life sciences. But people do not yet have good theories of psychology and intelligence, and do not know whether the Galileo-Newton approach would work well in these fields. Even in biology, most mechanisms cannot be explained yet.
The progress mentioned above happened mainly before World War II. After World War II came the genome and the Standard Model of particle physics. The significance of these two advances could be compared with the periodic table. However, since the periodic table is the first model of this type and was proposed much earlier, the genome and the Standard Model of particle physics are less important than the periodic table, and far less important than the theories of Euclid, Copernicus, Newton, and Darwin.
11) Inadequate Summarizing Efforts
Many people have tried to summarize the knowledge systems, such as Francis Bacon, René Descartes, David Hume, and Immanuel Kant.
Georg Wilhelm Friedrich Hegel, Karl Marx, and Bertrand Russell made even more ambitious efforts.
But they all missed some or many important aspects.
12) Gödel's Theorems and the Limitations of Mathematics and Positivism
Gödel's incompleteness theorems illustrate the intrinsic problems of mathematical systems beyond a certain complexity. In fact, any effectively axiomatized system that includes Peano Arithmetic cannot be both complete and consistent. So the main goals of Hilbert's program cannot be achieved. This is part of the foundational crisis in mathematics.
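For reference, a standard modern paraphrase of the first incompleteness theorem (added here; the original post does not state it formally):

```latex
% If T is a consistent, effectively axiomatized theory that contains
% enough arithmetic (e.g. Peano Arithmetic), then there is a sentence
% G_T in the language of T such that
T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T ,
% so T is incomplete.
```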
Indeed, Gödel's theorems show that Aristotle's philosophy is wrong: there is no perfect logic in this world. This happened almost three hundred years after Galileo showed Aristotle's world views to be wrong. Darwinism, too, does not agree with the single logic of Aristotelianism.
Positivism is an ideal goal for the Galileo-Newton approach. However, as discussed, Charles Darwin, Adam Smith, Sigmund Freud, and many others did not build their theories and systems strictly with this approach.
Within the whole knowledge system, positivism, like perfect logic, is more a utopia than a reality. Even so, as one of the important doctrines of the sciences, positivism should not be ignored, especially at the conclusion stage.
13) Future Research
Some of the challenges for the future are identified here:
1. How can people establish new paradigms and develop better world views? These are the mechanisms missing from Ibn al-Haytham's approach. Galileo, Newton, and Darwin did these well, but did not summarize well how they achieved them. Francis Bacon, René Descartes, David Hume, Immanuel Kant, etc., did not summarize them well either.
2. Identify the limitations of the Galileo-Newton approach.
3. Identify the limitations of Darwin's approach.
4. Develop an approach for future research on life, psychology, intelligence, etc.
It is up to future philosophers to figure these out and to illustrate the mechanisms behind them.
To really understand how humans develop intelligence and knowledge systems, people have to retrace the whole history of philosophy and mathematics back to ancient time, and study various kinds of methodologies for different purposes.
This article is not a complete review of philosophies and sciences. It only addresses the essentials and methodologies for knowledge development, for the interests of sciences, philosophy, education, and artificial intellignece, etc. So although Dmitri Mendeleev and Albert Einstein are extremely important scientists, they did not contribute significant different approaches. Essentially they followed Galileo-Newton's: the classic scientific approach.
This article does not try to cross the boundary between knowledge and religions. Only religious rites are mentioned.
By approach, it means coherent approach in this paper. Coherence does not guarantee correctness. However, incohenrence is always prone to problems and errors.
2) Various Approaches
Several approaches from ancient time are identified here. They are: Pythagoras-Plato approach; Socrates-Stoicism approach; Euclid-Archimedes approach; Yi-Jing approach from ancient China; approaches from ancient India; approaches from other countries, etc.
Some approaches from medieval age played important roles in sciences: they are Ibn al-Haytham's approach, Al-Biruni's approach, and Avicenna's approach, etc.
Galileo-Newton approach, the foundation of current sciences, evolved from Euclid-Archimedes and Ibn al-Haytham approaches. But there are differences between them. Some non-classic approaches from Charles Darwin, Adam Smith, Sigmund Freud, etc., are also different from Galileo-Newton approach.
The following sections will first discuss the strengths and limitations of the first three ancient approaches, then the medieval approaches, and Galileo-Newton approach; Several non-classic approaches and important theoretic issues will also be discussed.
The ancient approaches from Yi-Jing, India, and other countries, would be discussed in separate articles, if possible.
3) Pythagoras-Plato Approach
Thales is a pioneer in mathematical proof. Egyptians and Babylonians might know Thales Theorem before him, but he was likely the first one providing a valid proof for it.
Although Thales tried to explain natural phenomena not based on mythology, he is a Hylozoist who believed everything is alive, and there is no difference between the living and the dead. He did not build a coherence approach to develop knowledge, but had significant influences on Pythagoras.
Pythagoras probably was the first one who built coherent methodologies to develop knowledge and systematic views. He formed a school of scholars to study philosophy, mathematics, music, etc.
Pythagorean are famous for Pythagorean Theorem. They were pioneers in mathematics as systematic studies. They also proposed a non-geocentric model that the Earth runs around a central fire which suggests both the Sun and the Earth are not the center of universe. They might develop or formulate the idea the Earth is round.
Pythagoras also taught religious rites and practices in his school, so came his beliefs. He and Plato believed there be a perfect and persistent abstract world, and an imperfect and sensible world. They pursued the beauty of abstract perfection. Plato followed this philosophy and developed it into maturity.
Pythagorean made many early contributions to knowledge. They tried to construct complicated things with simpler ones. They believed whole numbers be simple and perfect, and tried to represent all numbers with quotient of two whole numbers. Here they faced the first mathematical crisis: some, actually most of the numbers, cannot be expressed in such a way. They called these irrational numbers.
This discloses the problems of Pythagoras-Plato approach: the way they construct or interpret mathematics may not fit into the reality. The beauty of mathematics may not be able to explain the imperfect and sensible world. Constructivism is important. But how to construct, and to what extent it works ?
Such an issue even has a consequence on modern sciences and technologies: how macro nonlinearity and micro quantum phenomena are related to irrational numbers ? How should irrational numbers be handled on computers ?
The issue of irrational numbers is actually related to measurement. Euclid described it in a better way as mentioned in Section 5, which is not understood properly by many modern mathematicians. Measurement is again associated to nonlinear, chaos, deterministic uncertainty, or computer modelling, etc. So people better think of this issue as a tip of an iceberg, rather than a solved problem which they could forget about it.
4) Socrates-Stoicism approach
Socrates led different beliefs. He did not take the beauty of abstraction as a doctrine. There is a Socratic method. He asked people to question each other. When people discuss their arguments explicitly, they could find the problems and understand them better. Socratic method is an important element of scientific approach.
Socrates asked many questions, but did not give many answers. This might be a good attitude. Socrates taught by playing a role model. By admitting his ignorance, he suggested other people also to admit their ignorance.
Admitting their own ignorance is not beautiful, not even pleasant, but it is an extremely critical step to make further progress. However, this attitude is offensive to many people. Socrates was voted to death eventually, probabily under accumulated anger from others.
Then Socrates played a role model again by accepting the death to show the rule of laws.
Also, Aristotle believed it was Socrates who identified the method of definition and induction, etc., which are essential to sciences in future.
Socrates promoted rationale and ethics, which was followed by Cynicism and Stoicism. However, the strange behaviors of Cynicism actually showed the frustrations faced by this approach: they did not find effective ways to discover much more knowledge. This task would be achieved by Euclid-Archimedes, Ibn al-Haytham, Galileo-Newton, Darwin approaches, etc., later.
Sophism was an enemy to Socrates, and is also an enemy to future sciences. It does not provide a coherent approach, rather it tries to gain advantages by twisting the facts.
5) The Confusion of Aristotle's Philosophy
Aristotelianism is not coherent, too. Although Aristotle adored Socrates and claimed he was against sophism, his way is actually a mixture of Socrates, Plato and sophism without coherence. So he included assertions like a flying arrow is at rest in his book. He might enjoy such a twist, and cannot distinguish sophism from real profoundness.
His syllogism is an unsuccessful summarization and simulation of the methods in mathematical proofs. Aristotle tried to develop a perfect logic to guarantee the correctness of thinking. However, this only shows he did not really know how mathematics work. So he missed a very important factor: how to make the premises in syllogism valid and concrete, i.e. the first principle. Euclid provided a solution on this issue for some problems later.
Furthermore, he was not aware of that the logic based on ignorance or shallowness could suppress innovation and scientific progress. His assumed profoundness is a mental trap which hindered scientific progress for a very long time. His problems were illustrated by Galileo and Gödel later.
Since Aristotle did not know how to apply logic and reasoning correctly, he did not know how to build coherent theories. In his book The Physics he just put togather whatever he knew or believed into huge collections without paying attention to coherence, which brought very limited values to physics, but many misleadings.
Aristotle was actually a naturalist. He made some contributions to zoological taxonomy based on observation. He is not the first one using observation.
Aristotle's world view was invalided by Galileo in future. His logic was disproved by Godel later. His single logic would lead to social Darwinism, which actually takes opposite positions from real Darwinism on many issues, as mentioned in later sections.
His philosophy is to develop knowledge only good enough for real life, which might help people to absorb funds, build influences with assumed profoundness (declaring some simple logics based on ignorance or shallowness would be the only correct mindset, which is really fascinating), or even gain powers, etc.
So his philosophy is thought as one kind of realisms by many people.
6) Euclid-Archimedes Approach
Euclid is the first one who established a concrete systematic theory for a domain. He is more like a scientist, than a pure mathematician.
He did not concern much of the beauty of abstraction. In the proof of the number of prime numbers, he used the word "measure", instead of a number divides another number. Many modern mathematicians think not being abstract enough is Euclid's limitation.
However, Euclid using measurement for division, implies the accurate value of irrational numbers cannot be measured by rational numbers. Measurability is still a critical problem in modern sciences and computer modelling. So, such an expression is not Euclid's limitation, but his insight. He caught the essentials of the problems.
He usually constructed solutions rather than just proving the existence of unknown solutions. His system is incredible concrete after more than two thousand years, much more concrete than many parts of modern mathematics.
He constructed the geometry system by some simple axioms, and derived other theorems from these axioms. Although this looks like Pythagorean's way to construct complicated numbers by simpler whole numbers, they are different.
Euclid guaranteed the correctness of derived theorems by making axioms simple and straightforward, thus self-evident. So Euclid solved the first principle issue in certain extent. Pythagorean did not show why whole numbers could be used to express all numbers, they just believed it be the beauty. Euclid's geometry is a good example of correct logic.
Euclid's approach mainly works in mathematics, and cannot find all theorems and laws in real needs. When Euclid worked on optics, his approach for first principle faced a problem: he did not have justified reasons to choose emission theory instead of intromission theory, although no justified reasons to make the opposite choice at that time, too. These limitations are partially solved by Ibn al-Haytham approach and Galileo-Newton approach, only partially.
Euclid was truly thought as a scientist in early days. Only after Galileo founded Physics, Euclid retired from scientists, and became a mathematician only.
Archimedes, one of the greatest engineers, was highly influenced by Euclid.
7) Medieval approaches
Pythagoras is very insightful in mathematics, philosophy, music, religious practices, etc. Plato developed the philosophy in his style into maturity. Socrates and Stoicism knew the way to develop rationale and ethics. Euclid and Archimedes designed theoretic and real systems in rigorous forms.
They all made big contributions to knowledge development, and still have big influences so far, but also with problems. Their accomplishments are still in very limited extent. It is some medieval scholars who brought in some new factors critical to future sciences.
Ibn al-Haytham used some procedure to do scientific research: observe, form conjectures, Testing and/or criticism of a hypothesis with experimentation, etc. He used this procedure to prove intromission theory.
However, this procedure is not a complete scientific approach. He can only use it to reach individual results, still under Euclid's geometry view. Although he did some brilliant work in optics, due to this geometry view and his way of thinking, he cannot gain deep and comprehensive understanding of physics.
More important, Ibn al-Haytham did not know the problems of Aristotle's world views. So he did not know to get out of the mental trap created by Aristotle's philosophy, to establish new paradigns in sciences and develop systematic theories, which are critical to scientific revolutions. The tasks to change mentality were left to Copernicus, Galileo and Darwin, etc.
Al-Biruni put an emphasis on experimentation. He tried to conceptualize both systematic errors and observational biases, and used repeated experiments to control errors. He might also be a pioneer of comparative sociology.
Avicenna discussed the philosophy of science and summarized several methods to find a first principle. He developed a theory to distinguish the inclination to move and the force. He discussed the soul and the body, the perceptions, etc. Probably he is a very important scholar misunderstood by many modern people.
Only after their efforts, big scientific progresses became possible.
8) Galileo-Newton Approach
Nicolaus Copernicus constructed a systematic theory deviating from Aristotle's world views, thus starting the new age of sciences. However, he (and Johannes Kepler) still followed Euclid-Archimedes approach, a geometry view. He left these problems of world views and mental trap unexplained.
It is Galileo Galilei who formed much substantial understanding of the physical world and triggered a scientific revolution with his comprehensive and systematic thinking. With his remarkable work "Dialogue Concerning the Two Chief World Systems", Galileo combined Socratic Method and Ibn al-Haytham's method. More important, he showed the differences in world views and asked people to get out of the mental trap of Aristotle's philosophy. His systematic thinking and world view led to new scientific theories and establish new paradigms in physical sciences. These had not be done by Ibn al-Haytham's approach, who only could make individual isolated conclusions.
Isaac Newton developed his great theories based on Galileo's approach. So this classic scientific approach is named as Galileo-Newton approach. Although Newton did a great work, he did not explain why he could do these, i.e., summarize well what are the real differences between their approach and Ibn al-Haytham's approach, and what are the limitations of their approach. It is the responsibility of future philosophers to explain the mechanisms behind these.
Galileo-Newton approach mainly work in worlds without considering the effects of life. Thomas Robert Malthus and John Maynard Keynes followed this classific scientific approach and tried to apply it to life and human societies. They made some progresses, but very limited. And they missed something very important to human beings.
Actually this approach does not work well in artificial intelligence, psychology, economics, and other humanity and social sciences, etc. Those fields relate to humans. Measurability and modelling are still big concerns. People could look at Newton's four rules of reasoning stated in his Mathematical Principles of Natural Philosophy, to figure out what the problems really are:
"1.We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
2.Therefore to the same natural effects we must, as far as possible, assign the same causes.
3.The qualities of bodies, which admit neither intension nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.
4.In experimental philosophy we are to look upon propositions collected by general induction from phænomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions".
Actually, many important knowledge systems were not built with this approach.
9) Non-Classic Approaches
As great academic contributors, Charles Darwin, Adam Smith, and Sigmund Freud built their theories not with Galileo-Newton approach. These three are listed here in the descending order of closeness to sciences. No surprise, their theories relate to humans.
Some people classified Charles Darwin as a follower of Francis Bacon's methodology. It is not true. Darwin did much more than observation, induction, etc., empiricism methods, to construct his great theories for life and human history. He is the greatest historian in history. He established a paradigm to combine sciences, life, and humanity, etc.
Although Alfred Russel Wallace was thought as a co-founder of evolution theory by many people, I only discussed Charles Darwin here due to pure academic reasons.
It is also the responsibility of future philosophers to illustrate the differences between Darwin's approach and Ibn al-Haytham's approach, and the differences between Darwin's approach and Galileo-Newton approach.
People also should pay attention to the differences between Darwinism and Social Darwinism. They actually took opposite positions on many issues. Darwinism cherishes diversities, and does not agree with the single logic of Aristotelianism.
Epicurus, Arthur Schopenhauer, Friedrich Nietzsche also illustrated many important understandings, related to human natures. Their ways are different from classic approaches, too.
10) The Progress in Natural Sciences So Far
After the Galileo-Newton approach was established, many theories in natural sciences were proposed, such as the periodic table, genetics, relativity theories, quantum theories, etc. They mainly followed the Galileo-Newton approach. Relativity theories are just refinements of Newtonian dynamics, just as Kepler refined Copernicus' circular orbits into elliptical orbits. Today people know planets do not run in perfect elliptical orbits either. So people do not know whether relativity theories are the final theories.
There are good examples of work across different approaches, such as the synthesis of evolution and genetics theories in the life sciences. But people do not have good theories in psychology and intelligence yet, and do not know whether the Galileo-Newton approach would work well in these fields. Even in biology, most mechanisms cannot be explained yet.
The progress mentioned above mainly happened before World War II. After World War II came the genome and the Standard Model of particle physics. The significance of these two advances could be compared with that of the periodic table. However, since the periodic table was the first model of this type and was proposed much earlier, the genome and the Standard Model of particle physics are less important than the periodic table, and far less important than the theories of Euclid, Copernicus, Newton, and Darwin.
11) Inadequate Summarizing Efforts
Many people tried to summarize the knowledge systems, such as Francis Bacon, René Descartes, David Hume, Immanuel Kant, etc.
Georg Wilhelm Friedrich Hegel, Karl Marx, and Bertrand Russell made even more ambitious efforts.
But they all missed some or many important aspects.
12) Gödel's Theorems and the Limitations of Mathematics and Positivism
Gödel's incompleteness theorems illustrate the intrinsic problems in mathematical systems beyond a certain complexity. Any effectively axiomatized system that includes Peano Arithmetic cannot be both complete and consistent. So the main goals of Hilbert's program cannot be achieved. This is the foundational crisis in mathematics.
Actually, Gödel's theorems show that Aristotle's philosophy is wrong. There is no perfect logic in this world. This happened almost three hundred years after Galileo showed that Aristotle's world views are wrong. Darwinism also does not agree with the single logic of Aristotelianism.
Positivism is an ideal goal for the Galileo-Newton approach. However, as said, Charles Darwin, Adam Smith, Sigmund Freud, and many others did not build their theories and systems strictly with this approach.
In the whole knowledge system, positivism, just like perfect logic, is more like a utopia than a reality. Even so, as one of the important doctrines in sciences, positivism should not be ignored, especially in the conclusion stage.
13) Future Research
Some of the challenges for the future are identified here:
1. How could people establish new paradigms and develop better world views ? These are the mechanisms missed in Ibn al-Haytham's approach. Galileo, Newton, and Darwin did these well. But they did not summarize well how they achieved these. Francis Bacon, René Descartes, David Hume, Immanuel Kant, etc., did not summarize these well either.
2. Identify the limitations in Galileo-Newton approach.
3. Identify the limitations in Darwin's approach.
4. Develop an approach for future researches on life, psychology, and intelligence, etc.
It is up to future philosophers to figure these out and illustrate the mechanisms behind these.
Tuesday, August 7, 2012
Gu Test: A Progressive Measurement of Generic Intelligence (V4)
Abstract
The purposes of testing are to rule out false claims and to measure what has been accomplished, etc. Do computers already have human level intelligence ? Could they understand and process the semantics of irrational numbers without knowing the exact values ? Humans can. How about uncountable sets ? These are necessary for sciences. Are there things in human intelligence which exceed the power of the Turing Machine ? This paper explains that Turing Test cannot measure some intrinsic human intelligence, due to the bottleneck in expression, the bottleneck in capacity, and the blackbox issue, etc. And it does not provide a progressive measurement for partial human-type intelligence. Similar issues exist in other current test methods. Several design goals are suggested to improve the measurement. Gu Test, a progressive generic intelligence measurement with levels, is proposed based on these goals to address the semantics and other intrinsic intelligence. The semantics of irrational numbers and uncountable sets are identified as two test levels. More work needs to be done to expand the test feature set, and to provide some guides for the direction of future Artificial Intelligence (AI) research.
1. The Measurement of Generic Intelligence
Machines like clocks could do some things better than humans a long, long time ago. However, this does not mean these machines have generic intelligence, or human level intelligence. So some measurement of intelligence is needed.
Before discussing the measurement of generic intelligence, there is a question: is generic intelligence needed ? If throwing in more computing power and designing better algorithms based on the Turing Machine model can solve all problems, there is no need for generic intelligence.
Unfortunately, computers still lack certain things which are in human intelligence. Humans have no idea how to add these into computers so far. Computers cannot write software from scratch. They only run software written by humans, or generate code specified by humans. More generically, humans are highly adaptive and innovative, can learn many types of knowledge and skills, can switch from one task to another quickly, etc. Developing intelligence for scientific research is even more challenging.
Due to such an adaptive, innovative, and evolutionary nature, it is extremely difficult to define generic human intelligence accurately, if not impossible. But it is obvious there are big differences between current computers and human intelligence. Test methods could be used to measure such differences. Clocks could measure time without an accurate definition of time.
Turing Test [1] was the first such testing method proposed. Several others were suggested in later years. They could be classified into indistinguishability (or imitation) tests, knowledge aggregation tests, or task aggregation tests, etc.
Testing methods can only test a small portion of intelligence due to time limits and availability. So what to test and how to test are very critical. However, the existing test methods cannot test some intrinsic human intelligence capabilities, such as how to understand and use the semantics of irrational numbers and uncountable sets, etc., which are fundamental to sciences.
Current computers can only approximate the values of irrational numbers with very limited semantics. Due to the sensitivity to initial conditions and exponential divergence in nonlinear chaotic phenomena, there are problems in such approximations. In reality, nonlinearity is the norm rather than the exception.
Actually, nonlinearity and the butterfly effect were the main frustrations to von Neumann's meteorology ambitions. It is highly questionable whether algorithms based on the Turing Machine model could accomplish generic intelligence.
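As a concrete illustration of this sensitivity (a minimal sketch added here, not taken from the references), the chaotic logistic map x -> 4x(1 - x), started from a 64-bit floating-point approximation of sqrt(2) - 1, quickly loses all agreement with a higher-precision computation of the same trajectory:

    # Sketch: rounding error in an approximated irrational initial value grows
    # exponentially under a chaotic map, so the float trajectory becomes useless.
    from decimal import Decimal, getcontext

    getcontext().prec = 60                   # 60-digit reference computation
    x_float = 2 ** 0.5 - 1                   # sqrt(2) - 1 as a 64-bit float
    x_ref = Decimal(2).sqrt() - 1            # the same value to 60 digits

    for step in range(1, 61):
        x_float = 4 * x_float * (1 - x_float)
        x_ref = 4 * x_ref * (1 - x_ref)
        if step % 20 == 0:
            print(step, x_float, float(x_ref))
    # By step 60 the float trajectory has completely diverged from the reference.

The point is not the specific map: any finite approximation of an irrational value, however many digits it keeps, is eventually amplified into a completely wrong answer by this kind of dynamics.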
Since the existing test methods cannot really measure generic intelligence, they cannot provide good guidance to AI research. Actually, very little progress in generic intelligence has been made during the past decades. It is time to change this.
The following sections will first discuss the bottlenecks and issues in Turing Test and other existing test methods. Several design goals are identified to address these issues and better measure generic intelligence. Gu Test is proposed to accomplish most of these design goals. Some directions for future work are discussed.
2. Turing Test and the Chinese Room Concern
Alan Turing described an imitation game in his paper Computing Machinery and Intelligence, i.e., the Turing Test, which tests whether a human can distinguish a computer from another human only via communication, without seeing each other.
Turing Test provides two results: pass or fail. It cannot measure partial generic intelligence, i.e., how close a computer system is to generic human intelligence. The testing results depend on the subjective judgement of testers without objective criteria. Objective criteria are needed in scientific experiments, especially for phenomena in the macro physical world.
John Searle also raised the Chinese Room issue [2], i.e., computers could pass this test by symbolic processing without really understanding the meanings of these symbols. Due to the limited number of phrases in real usage, it is possible to build a computer system with enough associations between phrases such that humans cannot distinguish the system from humans within limited testing time. However, this does not mean the computers already have human level intelligence.
The Chinese Room argument also raises the semantics issue: could computers understand the semantics of natural languages ?
More importantly, there are the bottleneck in expression, the bottleneck in capacity, and the issues of blackbox testing, etc., as described below, which make Turing Test unable to really test generic intelligence.
Turing Test uses interrogation to test, so it can only test those human characteristics which are already well understood by humans and can be expressed in communication. Some people can manage to understand each other by body language, rich tones, analogy, metaphor, implication and suggestion, etc., in certain environments, which cannot be expressed in pure symbolic processing. So a Turing Test conducted behind veils is not the right way to test these intrinsic intelligence abilities. This is the bottleneck in expression.
There is also a bottleneck in the capacity of communication or storage: even if those rich subtle varieties of information could be digitized, the size of this information could far exceed the capacity of communication or storage. The current von Neumann architectures only have finitely many memory units. A Turing Machine has infinitely many, but countable, memory units. Could a Turing Machine be enhanced with uncountably many memory units ?
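To make the countable versus uncountable distinction concrete, the standard reason the set of all infinite binary sequences cannot be matched one-to-one with the natural numbers, Cantor's diagonal construction, can itself be written as a tiny program (an illustration added here, not part of the original argument):

    # Sketch: diagonalization against a stand-in enumeration of binary sequences.
    def enumerated(i, n):
        """The n-th bit of the i-th sequence in some claimed enumeration."""
        return (i >> n) & 1                  # a concrete stand-in enumeration

    def diagonal(n):
        """A sequence that differs from the n-th enumerated sequence at bit n."""
        return 1 - enumerated(n, n)

    # The diagonal sequence disagrees with every enumerated sequence at position n:
    print(all(diagonal(n) != enumerated(n, n) for n in range(1000)))   # True

Whatever enumeration is plugged in, the diagonal sequence escapes it, which is why countably many indexed memory cells can never enumerate all such sequences.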
The bottleneck in expression and the bottleneck in capacity stem from the testing methods themselves. Due to the Chinese Room issue, the bottleneck in expression, and the bottleneck in capacity, certain intrinsic intelligence cannot be tested in a blackbox way as in Turing Test. However, with whitebox methods, the designers of the systems could explain what they implement in their software and hardware and how. Testers could analyze whether these claims are true or false based on reasoning, and examine the systems to see whether they are implemented as expected.
Say a system can produce a huge number of digits of an irrational number. It is impractical to wait for these digits one by one within limited testing time. However, it is straightforward to examine the code to see whether it implements such a feature correctly.
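For instance, a digit generator like the hypothetical one below can be verified by inspection: it produces the leading decimal digits of sqrt(2) exactly, using only integer arithmetic, so a tester can confirm its correctness by reading the code instead of waiting for the output (this is a sketch for illustration, not a prescribed implementation):

    # Sketch: exact leading digits of sqrt(2) via integer arithmetic only.
    from math import isqrt

    def sqrt2_digits(n):
        """Return the first n decimal digits of sqrt(2), including the leading 1."""
        # isqrt(2 * 10**(2k)) equals floor(sqrt(2) * 10**k), i.e. sqrt(2)
        # truncated to k decimal places with the decimal point removed.
        return str(isqrt(2 * 10 ** (2 * (n - 1))))

    print(sqrt2_digits(30))   # 141421356237309504880168872420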
Turing Test cannot resolve these bottlenecks and issues. It is a blackbox test, purely based on behavior. Computers could pass this kind of test by imitating humans without understanding the semantics.
3. Other Test Methods
There are several other methods aimed at testing generic intelligence. Although some of them could provide some test levels, they cannot measure higher-level intelligence close to humans. They still lack the understanding and processing of real semantics.
One is Feigenbaum test. According to Edward Feigenbaum, "Human intelligence is very multidimensional", "computational linguists have developed superb models for the processing of human language grammars. Where they have lagged is in the 'understand' part", "For an artifact, a computational intelligence, to be able to behave with high levels of performance on complex intellectual tasks, perhaps surpassing human level, it must have extensive knowledge of the domain." [3].
Feigenbaum test is actually a good method to test the knowledge in expert systems. The test tries to produce generic intelligence by aggregating many expert systems. That is why it needs to test extensive knowledge.
However, since these types of knowledge are still expressed and stored as symbolic data, the bottlenecks of expression and capacity still exist. It is still a blackbox test. Although it tries to solve the "understand" part, there are no solutions so far to test the real semantics of knowledge from such symbolic data.
Another issue with the Feigenbaum test is that individual humans may not have very extensive knowledge in many domains, but they have certain potentials. So testing extensive knowledge may be unnecessary, and may not even be possible. What needs to be figured out is how to test these potentials.
The Minimum Intelligent Signal Test (MIST) [4] is similar to the Feigenbaum test. But it only uses the binary answers "yes" or "no" as test results, so it can leverage statistical inference to analyse the results. The bottlenecks in expression and capacity still exist. It is still a blackbox test. By using binary answers, it oversimplifies the knowledge, with even less understanding of semantics than the Feigenbaum test.
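To show what the statistical-inference side amounts to (a sketch of standard binomial reasoning, not MIST's published protocol), binary answers let a tester ask how unlikely a given score would be if the system were merely guessing:

    # Sketch: one-sided binomial test against chance performance on yes/no items.
    from math import comb

    def binomial_p_value(correct, n, p=0.5):
        """Probability of scoring at least `correct` out of n by guessing alone."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(correct, n + 1))

    print(binomial_p_value(70, 100))   # about 4e-05: very unlikely to be pure guessing

Such a calculation says something about whether the answers carry signal, but nothing about whether any semantics stand behind them.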
Another method is Shane Legg and Marcus Hutter's solution [5], which is actually agent-based, a good test for task performance. In their framework, an agent sends its actions to the environment and receives observations and rewards from it. If their framework is used to test generic intelligence, then it assumes that all the interactions between humans and their environment can be modeled by actions, observations, rewards, etc. This assumption has not been tested yet. The bottlenecks in expression and in capacity still exist in the definitions of actions, observations, rewards, etc.
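A minimal sketch of such an agent-environment loop (all names below are hypothetical and only show the shape of the framework) makes the assumption explicit: every interaction must be reduced to actions, observations, and scalar rewards.

    # Sketch: the action/observation/reward loop assumed by agent-based measures.
    import random

    class GuessBitEnvironment:
        """Toy environment: reward 1.0 if the agent's action matches a hidden bit."""
        def step(self, action):
            hidden = random.randint(0, 1)
            reward = 1.0 if action == hidden else 0.0
            observation = hidden             # revealed after the fact
            return observation, reward

    class RandomAgent:
        def act(self, observation):
            return random.randint(0, 1)

    env, agent = GuessBitEnvironment(), RandomAgent()
    observation, total_reward = None, 0.0
    for _ in range(1000):
        action = agent.act(observation)
        observation, reward = env.step(action)
        total_reward += reward
    print(total_reward / 1000)               # about 0.5: task performance, nothing more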
Furthermore, humans have very diversified specialties. It is impractical to aggregate performance over a very large number of tasks. Humans have the potential to learn new tasks and be innovative. They could gain deeper observations, take better actions, and obtain rewards other than those in the specified task definitions. Such potentials cannot be tested by blackbox performance testing on specified tasks. So this method does not really test generic intelligence either.
If Turing Test is enhanced with vision and manipulation abilities, it could become similar to Shane Legg and Marcus Hutter's solution. Interrogation could become task performing. The same problems exist.
In summary, the existing testing methods do not measure generic intelligence as well as expected. As a result, the studies of generic intelligence are still without clear direction. To design a better measurement of generic intelligence, the existing bottlenecks and issues should be resolved. Some design goals should be identified to provide good directions and better solutions.
4. The Design Goals for Better Measurement of Generic Intelligence
Based on the analysis done in previous sections, some design goals are suggested here:
1) Resolve the Chinese Room issue, i.e., test the real understanding of semantics, not just behavior imitation or symbolic processing.
2) Resolve the bottleneck in expression, by not purely relying on interrogation. Find some ways to test those intrinsic intelligence abilities which have not been understood and expressed well.
3) Resolve the bottleneck in capacity, by leveraging some properties of concepts and semantics.
4) Use whitebox testing to examine the implemented mechanisms directly.
5) Involve as little domain knowledge as possible, since regular humans may not have much knowledge in specific domains. But find some ways to test the potential to develop intelligence.
6) Develop leveled test schemes up to generic human intelligence, to measure continuous progress in intelligence.
7) Develop a framework to test structured and associated intelligence, adaptive and innovative abilities, and diversified specialties, etc.
5. Gu Test
Based on these design goals, Gu Test is proposed. Initially it includes two test levels: the understanding and processing of the semantics of irrational numbers and of uncountable sets. More levels could be added in the future.
Humans can derive new usages of irrational numbers without knowing the exact values of these numbers. Obviously they understand these semantics. The situation is similar with uncountable sets, but at a more difficult level, whereas regular people with an average education have the potential to understand irrational numbers.
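One possible illustration of what handling such semantics without numeric values could look like on a machine is exact arithmetic in Q(sqrt(2)), sketched below; Gu Test itself does not prescribe this or any particular representation:

    # Sketch: numbers a + b*sqrt(2) kept as exact pairs of rationals, so identities
    # about sqrt(2) hold exactly without ever approximating its value.
    from fractions import Fraction

    class QSqrt2:
        """An element a + b*sqrt(2) of the field Q(sqrt(2)), with exact arithmetic."""
        def __init__(self, a, b=0):
            self.a, self.b = Fraction(a), Fraction(b)
        def __mul__(self, other):
            # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
            return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                          self.a * other.b + self.b * other.a)
        def __repr__(self):
            return f"{self.a} + {self.b}*sqrt(2)"

    root2 = QSqrt2(0, 1)
    print(root2 * root2)                 # 2 + 0*sqrt(2): sqrt(2) squared is exactly 2
    print(QSqrt2(1, 1) * QSqrt2(-1, 1))  # 1 + 0*sqrt(2): (1 + sqrt2)(sqrt2 - 1) is exactly 1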
Gu Test is to test whether computers or machines have such intelligence. These capabilities are critical to sciences, an important part of modern human activities and progress. Humans possess such abilities, but they do not understand why and how these abilities work, and cannot yet express these semantics, knowledge, and intelligence as pure symbolic data.
It is a whitebox test. The test procedure is as below:
1) It is up to the designers of the systems to explain what semantics they implement and how they implement it. In this way, Gu Test does not restrict what and how the designers want to implement, and allows full exploration.
2) Testers analyze whether these claims are true or false based on reasoning. The interpretation and representation of semantics can only be judged based on reasoning.
3) Testers examine the software and hardware of the systems, to see whether these mechanisms (including whatever representation of semantics passed in step 2) are really implemented as expected.
This procedure could be applied to irrational numbers, uncountable sets, or other test features in the future. So the test does not rely on interrogation, but can test some intrinsic abilities. Testers could test whatever intelligence or mechanisms humans have, without the external bottlenecks in expression or capacity stemming from the testing methods.
The irrational number is a primitive concept developed in Pythagoras' age. The concept is necessary in many domains, but involves very little domain-specific knowledge. The uncountable set is an advanced concept used in modern sciences and mathematics. Physical semantics could lie in completely different dimensions. Adding intelligence in different domains would pose very different challenges.
The current efforts are to achieve design goals 1) to 6). The work to meet goal 7), i.e., to test structured and associated intelligence, adaptive and innovative abilities, and diversified specialties, etc., is left to future research.
6. Comparison With Other Test Methods
As said, Gu Test is very different from indistinguishability (or imitation) tests, knowledge aggregation tests, or task aggregation tests, etc. It is a whitebox test. It requires human designers to explain what intelligence their systems implement and how, and human testers to analyze whether these claims are true or false and examine the systems to see whether they implement these mechanisms as expected.
So it does not have the bottlenecks in expression or in capacity stemming from test methods, and could test higher-level intelligence such as semantic understanding, up to and even beyond human intelligence.
Gu Test represents a complete paradigm shift from previous test methods. It provides some guidance and insights related to generic human intelligence, without restricting how to implement it.
7. Future Research
Much more work needs to be done to add more test levels to Gu Test and to meet design goal 7).
The analysis of the bottlenecks and issues of Turing Test naturally leads to questions about the power and limitations of the Turing Machine and the von Neumann architecture. This paper does not draw any conclusion on which platforms or architectures are better for generic intelligence, as long as they can truly pass the test. Rather, it opens the door for people to make full exploration.
To really understand the essentials of intelligence, people have to study the history of knowledge development, including philosophy, mathematics, and sciences, etc. It is a reasonable option to develop intelligence models based on a multi-level structure of physics, life sciences, and psychology.
References
[1] Turing, A. M., 1950, "Computing machinery and intelligence". Mind 59, 433–460.
[2] Searle, John R., 1980, "Minds, brains, and programs". Behavioral and Brain Sciences 3 (3): 417-457.
[3] Feigenbaum, Edward A., 2003, "Some challenges and grand challenges for computational intelligence". Journal of the ACM 50 (1): 32–40.
[4] McKinstry, Chris, 1997, "Minimum Intelligent Signal Test: An Alternative Turing Test". Canadian Artificial Intelligence (41).
[5] Legg, S. & Hutter, M., 2006, "A Formal Measure of Machine Intelligence". Proc. 15th Annual Machine Learning Conference of Belgium and The Netherlands, pp. 73-80.
Saturday, May 5, 2012
Different Approaches For Knowledge System Development (v3)
1) Introduction
To really understand how humans develop intelligence and knowledge systems, people have to retrace the whole history of philosophy and mathematics back to ancient times, and study various kinds of methodologies for different purposes.
This article is not a complete review of philosophies and sciences. It only addresses the essentials and methodologies for knowledge development, for the interests of sciences, philosophy, education, and artificial intelligence, etc. So although Dmitri Mendeleev and Albert Einstein are extremely important scientists, they are not discussed here, since they essentially followed the classic scientific approach: the Galileo-Newton approach.
It does not try to cross the boundary between knowledge and religions. Only religious rites are mentioned.
In this paper, approach means a coherent approach. Coherence does not guarantee correctness. However, incoherence is always prone to problems and errors.
2) Various Approaches
Several approaches from ancient times are identified here. They are: Pythagoras-Plato approach; Socrates-Stoicism approach; Euclid-Archimedes approach; Yi-Jing approach from ancient China; approaches from ancient India; approaches from other countries, etc.
Some approaches from medieval age played important roles in sciences: they are Ibn al-Haytham's approach, Al-Biruni's approach, and Avicenna's approach, etc.
Galileo-Newton approach, the foundation of current sciences, evolved from Euclid-Archimedes and Ibn al-Haytham approaches. But there are differences between them. Some non-classic approaches from Charles Darwin, Adam Smith, Sigmund Freud, etc., are also different from Galileo-Newton approach.
The following sections will first discuss the strengths and limitations of the first three ancient approaches, the medieval approaches, then Galileo-Newton approach. Several non-classic approaches and important theoretic issues will also be discussed.
The ancient approaches from Yi-Jing, India, and other countries, would be discussed in separate articles, if possible.
3) Pythagoras-Plato Approach
Thales was a pioneer in mathematical proof. The Egyptians and Babylonians might have known Thales' Theorem before him, but he was likely the first to provide a valid proof of it.
Although Thales tried to explain natural phenomena without relying on mythology, he was a hylozoist who believed everything is alive and that there is no difference between the living and the dead. He did not develop a coherent approach, but had a significant influence on Pythagoras.
The first philosopher should be Pythagoras, who built a coherent systematic view. He formed a school of scholars to study philosophy, mathematics, music, etc.
The Pythagoreans are famous for the Pythagorean Theorem. They were pioneers in mathematics as a systematic study. They also proposed a non-geocentric model in which the Earth revolves around a central fire, which suggests that neither the Sun nor the Earth is the center of the universe. They might also have developed or formulated the idea that the Earth is round.
Pythagoras also taught religious rites and practices in his school, and from there came his beliefs. He and Plato believed there is a perfect and persistent abstract world, and an imperfect, sensible world. They pursued the beauty of abstract perfection. Plato followed this philosophy and developed it to maturity.
The Pythagoreans made many early contributions to knowledge. They tried to construct complicated things from simpler ones. They believed whole numbers to be simple and perfect, and tried to represent all numbers as the quotient of two whole numbers. Here they faced the first mathematical crisis: some numbers, actually most numbers, cannot be expressed in such a way. They called these irrational numbers.
This discloses the problems of the Pythagoras-Plato approach: the way they construct or interpret mathematics may not fit reality. The beauty of mathematics may not be able to explain the sensible world. Constructivism is important. But how should one construct, and to what extent does it work ?
Such an issue even has consequences for modern sciences and technologies: how are macro nonlinearity and micro quantum phenomena related to irrational numbers ? How should irrational numbers be handled on computers ?
The issue of irrational numbers is actually related to measurement. Euclid described it in a better way, as mentioned in Section 5, which is not properly understood by many modern mathematicians. Measurement is again associated with nonlinearity, chaos, deterministic uncertainty, computer modelling, etc. So people had better think of this issue as the tip of an iceberg, rather than a solved problem they can forget about.
4) Socrates-Stoicism Approach
Socrates held different beliefs. He did not take the beauty of abstraction as a doctrine. There is the Socratic method: he asked people to question each other. When people discuss their arguments explicitly, they can find the problems and understand them better.
Socrates asked many questions, but did not give many answers. This might be a good attitude. Socrates taught by acting as a role model. By admitting his own ignorance, he encouraged other people to admit their ignorance too.
Admitting one's ignorance is not beautiful, not even pleasant, but it is an extremely critical step toward further progress. However, this attitude is offensive to many people. Socrates was eventually voted to death, probably under the accumulated anger of others.
Then Socrates acted as a role model again by accepting death to show the rule of law.
Also, "Aristotle attributed to Socrates the discovery of the method of definition and induction, which he regarded as the essence of the scientific method." (from Wikipedia).
Socrates promoted reason and ethics, which were followed by Cynicism and Stoicism. However, the strange behaviors of the Cynics actually showed the frustrations faced by this approach: they did not find effective ways to discover much more knowledge. This task would be accomplished later by the Euclid-Archimedes, Ibn al-Haytham, Galileo-Newton, and Darwin approaches.
Sophism was an enemy to Socrates, and is also an enemy to future sciences. It does not provide a coherent approach.
Aristotelianism is not coherent either. Although Aristotle adored Socrates and claimed he was against sophism, his way is actually a mixture of Socrates, Plato, and sophism without coherence. So he included in his book assertions such as that a flying arrow is at rest.
His syllogism is an unsuccessful summarization and simulation of the methods in mathematical proofs. Aristotle did not know how logic really works in mathematics. So he missed a very important factor: how to make the premises in a syllogism valid and concrete, i.e., the first principle. Euclid later provided a solution to this issue for some problems.
Since Aristotle did not know how to apply logic and reasoning correctly, he did not know how to build coherent theories. His book The Physics brought little value to physics, but much that was misleading. He just put together whatever he knew or believed into huge collections without paying attention to coherence.
Aristotle was actually a naturalist. He made some contributions to zoological taxonomy based on observation. He was not the first to use observation. And he did not have a coherent approach.
5) Euclid-Archimedes Approach
Euclid was the first to establish a concrete systematic theory for a domain. He was more like a scientist than a pure mathematician.
He did not concern himself much with the beauty of abstraction. In the proof of the infinitude of prime numbers, he used the word "measure", instead of saying a number divides another number. Many modern mathematicians think that not being abstract enough is Euclid's limitation.
However, Euclid's use of measurement for division implies that the accurate value of an irrational number cannot be measured by rational numbers. Measurability is still a critical problem in modern sciences and computer modelling. So such an expression is not Euclid's limitation, but his insight. He caught the essence of the problem.
He usually constructed solutions rather than just proving the existence of unknown solutions. His system is incredibly concrete after more than two thousand years, much more concrete than much of modern mathematics.
He constructed the geometry system from some simple axioms, and derived other theorems from these axioms. Although this looks like the Pythagoreans' way of constructing complicated numbers from simpler whole numbers, they are different.
Euclid guarantees the correctness of derived theorems by making the axioms simple and straightforward, thus self-evident. So Euclid solved the first-principle issue to a certain extent. The Pythagoreans did not show why whole numbers could be used to express all numbers; they just believed in their beauty. Euclid's geometry is a good example of correct logic.
Euclid's approach mainly works in mathematics, and cannot find all the theorems and laws needed in reality. When Euclid worked on optics, his approach to first principles faced a problem: he did not have justified reasons to choose the emission theory instead of the intromission theory, although there were no justified reasons to make the opposite choice at that time either. These limitations were later addressed by the Ibn al-Haytham and Galileo-Newton approaches, but only partially.
Euclid was truly thought of as a scientist in the early days. Only after Galileo founded physics did Euclid retire from the ranks of scientists and become only a mathematician.
Archimedes, one of the greatest engineers, was highly influenced by Euclid.
6) Medieval Approaches
Pythagoras was very insightful in mathematics, philosophy, music, religious practices, etc. Plato developed the philosophy in this style to maturity. Socrates and the Stoics knew how to develop reason and ethics. Euclid and Archimedes designed theoretical and real systems in rigorous forms.
They all made big contributions to knowledge development, and still have big influences today, but also with problems. Their accomplishments are still of very limited extent. It was some medieval scholars who brought in new factors critical to future sciences.
Ibn al-Haytham used a procedure to do scientific research: observation, forming conjectures, testing and/or criticizing a hypothesis using experimentation, etc. He used this procedure to prove the intromission theory.
However, this procedure is not a complete scientific approach. He could only use it to reach individual results, still under Euclid's geometric view. Although he did some brilliant work in optics, due to this geometric view and his way of thinking, he could not gain a deep and comprehensive understanding of physics.
More importantly, Ibn al-Haytham did not tell how to build big theories and establish new paradigms in sciences. Those are critical tasks for scientific revolutions, which were left to the Galileo-Newton approach and Darwin's non-classic approach, etc.
Al-Biruni put an emphasis on experimentation. He tried to conceptualize both systematic errors and observational biases, and used repeated experiments to control errors. He might also be a pioneer of comparative sociology.
Avicenna discussed the philosophy of science and summarized several methods to find a first principle. He developed a theory to distinguish the inclination to move and the force. He discussed the soul and the body, the perceptions. Probably he is a very important scholar misunderstood by many modern people.
Only after their efforts did big scientific progress become possible.
7) Galileo-Newton Approach
Although Nicolaus Copernicus started the new age of sciences, he (and Johannes Kepler) still followed the Euclid-Archimedes approach, a geometric view.
It is Galileo Galilei who formed a substantial understanding of the physical world and triggered a scientific revolution with his comprehensive and systematic thinking. Galileo showed how such systematic thinking could lead to new theories of the world and establish new paradigms in physical sciences, which is different from Ibn al-Haytham's approach. Without such a way of thinking, people cannot build a big new theory from individual isolated conclusions. But he did not summarize well what really makes these differences.
Isaac Newton developed his great theories based on Galileo's approach. So this classic scientific approach is named the Galileo-Newton approach. Although Newton did great work, he did not explain why he could do it, i.e., he did not summarize well the real differences between their approach and Ibn al-Haytham's approach. Francis Bacon could not explain the real differences either. The mechanisms behind these differences are left to be explained by future philosophers.
The Galileo-Newton approach mainly works in domains that do not consider the effects of life. Thomas Robert Malthus and John Maynard Keynes followed this classic scientific approach and tried to apply it to life and human societies. They made some progress, but it was very limited. And they missed something very important to human beings.
There are still fundamental limitations in this approach. It does not work well in artificial intelligence, psychology, economics, and other humanities and social sciences, etc. Those fields relate to humans. Measurability and modelling are still big concerns. People could look at Newton's four rules of reasoning, stated in his Mathematical Principles of Natural Philosophy, to figure out what the limitations really are:
"1.We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
2.Therefore to the same natural effects we must, as far as possible, assign the same causes.
3.The qualities of bodies, which admit neither intension nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.
4.In experimental philosophy we are to look upon propositions collected by general induction from phænomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions".
Two issues are identified here: 1) Galileo, Newton, and Bacon did not summarize well the differences, and the mechanisms behind these differences, between the classic scientific approach and Ibn al-Haytham's approach. 2) The Galileo-Newton approach itself has fundamental problems when applied to life and humans. Actually, many important knowledge systems were not built with this approach.
8) Non-Classic Approaches
Although they are great academic contributors, Charles Darwin, Adam Smith, and Sigmund Freud did not build their theories with the Galileo-Newton approach. These three are listed here in descending order of closeness to sciences. Not surprisingly, their theories relate to humans.
Some people classified Charles Darwin as a follower of Francis Bacon's methodology. This is not true. Darwin did much more than the empiricist methods of observation and induction to construct his great theories of life and human history. He is the greatest historian in history. He established a paradigm combining sciences, life, and humanity, etc.
The differences between Darwin's approach and Ibn al-Haytham's approach, and the differences between Darwin's approach and Galileo-Newton approach, are to be illustrated by future philosophers.
People should also pay attention to the differences between Darwinism and Social Darwinism. They actually took opposite positions on many issues.
Epicurus, Arthur Schopenhauer, and Friedrich Nietzsche also articulated many important opinions related to human nature. Their ways are also different from the classic approaches.
9) The Progress in Natural Sciences So Far
After the Galileo-Newton approach was established, many theories in natural sciences were proposed, such as the periodic table, genetics, relativity theories, quantum theories, etc. They mainly followed the Galileo-Newton approach. Relativity theories are just refinements of Newtonian dynamics, just as Kepler refined Copernicus' circular orbits into elliptical orbits. Today people know planets do not run in perfect elliptical orbits either. So people do not know whether relativity theories are the final theories.
There are good examples of work across different approaches, such as the synthesis of evolution and genetics theories in the life sciences. But people do not have good theories in psychology and intelligence yet, and do not know whether the Galileo-Newton approach would work well in these fields. Even in biology, most mechanisms cannot be explained yet.
10) Inadequate Summarizing Efforts
Many people tried to summarize the knowledge systems, such as Francis Bacon, René Descartes, David Hume, Immanuel Kant, etc.
Georg Wilhelm Friedrich Hegel, Karl Marx, and Bertrand Russell made even more ambitious efforts.
But they all missed some or many important aspects.
11) Gödel Theorems and the Limitations of Mathematics and Positivism
Gödel's theorems illustrate the intrinsic problems in mathematical systems beyond a certain complexity. There is a foundational crisis in mathematics, as described at http://en.wikipedia.org/wiki/Foundations_of_mathematics#Foundational_crisis.
More importantly, mathematics cannot explain all the potentials in reality.
Positivism is an ideal goal for the Galileo-Newton approach. However, as said, Charles Darwin, Adam Smith, Sigmund Freud, and many others did not build their theories and systems strictly with this approach.
In the whole knowledge system, positivism is more like a utopia than a reality. Even so, as one of the important doctrines in sciences, positivism should not be ignored, especially in the conclusion stage.
12) Future Research
Some of the challenges for the future are identified here:
1. How could people construct big theories and establish new paradigms ? These are the mechanisms missed in Ibn al-Haytham's approach. Galileo, Newton, and Darwin did these well. But they did not summarize well how they achieved these. Francis Bacon, René Descartes, David Hume, Immanuel Kant, etc., did not summarize these well either.
2. Identify the limitations in Galileo-Newton approach.
3. Identify the limitations in Darwin's approach.
4. Develop an approach for future researches on life, psychology, and intelligence, etc.
It is up to future philosophers to figure these out and illustrate the mechanisms behind these.
(I deleted the content related to computer intelligence and Gu Test in this version. That material is already presented in a separate paper: Gu Test: A Measurement of Generic Intelligence.)
Wednesday, April 18, 2012
Gu Test: A Measurement of Generic Intelligence (v3)
Abstract
Could computers understand and represent irrational numbers without knowing their exact values, which may be necessary to build sciences? Humans can. How about uncountable sets? Are there things in human intelligence which exceed the power of the Turing Machine? The measurement of generic intelligence is critical to the further development of artificial intelligence (AI). However, there are various bottlenecks and issues in the existing methods, Turing Test and its variants, which cannot really measure intrinsic intelligence capabilities. Based on studies of knowledge development, several essential design goals for intelligence measurement are identified to address these issues. A new method, Gu Test, is proposed to meet some of these design goals, distinguish strong AI from regular machines, and provide insights into the future directions of AI. Further improvement could be done in the future.
1. The Measurement of Generic Intelligence
Could computers understand the concept of irrational numbers, and represent these numbers and the theories built on them, without knowing their exact values? Such concepts and theories are necessary to build sciences and advanced human intelligence. How about uncountable sets? Such intrinsic intelligence capabilities are important milestones for machine intelligence levels.
The measurement of generic intelligence capabilities is critical to AI, to estimate the current status and look for future improvement. However, the existing measurement methods, such as Turing Test and its variants, are mainly behavior-based, knowledge-based, or task-based. There are various bottlenecks and issues in these solutions, and they cannot really measure intrinsic intelligence capabilities.
People can design algorithms on the Turing Machine or its improved models. These are mathematical models limited by Gödel's incompleteness theorems. Even worse, there are still problems in implementing these models physically.
Current computers use rational numbers to approximate irrational numbers. Due to the sensitivity to initial conditions and exponential divergence in nonlinear chaotic phenomena, there are problems with such approximations. In reality, nonlinearity is the norm rather than the exception. How does human intelligence guide behavior in real physical situations?
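To make this concrete, here is a minimal sketch in Python (the logistic map with r = 4 and the chosen perturbation are assumptions used purely for illustration, not anything from this paper) showing how two floating-point approximations of nearly identical initial conditions diverge exponentially, so a finite rational approximation quickly loses predictive value:

# Minimal sketch (assumption: the logistic map x -> r*x*(1-x) with r = 4
# stands in for "nonlinear chaotic phenomena"). Two double-precision
# approximations of nearly identical initial values diverge exponentially.

def logistic(x, r=4.0):
    """One step of the logistic map."""
    return r * x * (1.0 - x)

x_a = 0.3            # an initial condition, as well as a float can store it
x_b = 0.3 + 1e-15    # the same value perturbed at the limit of double precision

for step in range(1, 61):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.3e}")

# The difference grows from about 1e-15 to order 1 within a few dozen steps,
# which is the divergence described above.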
Are there things in human intelligence which exceed the power of the Turing Machine and its current improvements? A good measurement should point to possible bridges between mathematical models, physical implementations, and human intelligence. Gu Test is a new measurement to address these issues, distinguish strong AI from regular machines, and provide insights into future directions.
The following sections first discuss Turing Test and its variants, with their bottlenecks and issues. Several design goals are then identified to address these issues and better measure generic intelligence. Gu Test is proposed to achieve these design goals. Finally, some directions for future work are discussed.
2. Turing Test and the Chinese Room Concern
Alan Turing described an imitation game in his paper Computing Machinery and Intelligence [1], which tests whether a human could distinguish a computer from another human only via communication without seeing each other.
It is a blackbox test, purely based on behavior. Computers could pass this kind of test by imitating humans.
So John Searle raised the Chinese Room issue [2], i.e., computers could pass this test by symbolic processing without really understanding the meanings of the symbols.
More importantly, there are bottlenecks of communication and storage, in expression and in capacity, and the issues of blackbox testing and understanding, as described below, which make the current ways of symbolic processing inadequate as generic intelligence.
Turing Test relies on interrogation, so it can only test those human characteristics which are already well understood by humans and can be expressed in communication. Humans still have very limited understanding of life, psychology, and intelligence. Some people manage to understand each other face to face, through analogy, metaphor, implication, suggestion, etc., on things which cannot be handled purely by symbolic processing. Some people may not. Humans do not yet know why these methods work or fail. So intrinsic intelligence abilities that are not yet well understood cannot be expressed or tested via interrogation behind a veil. Turing Test does not work in these cases. This is the bottleneck in expression.
Even if the bottleneck in expression could be resolved for some problems, the capacity of communication or storage could still be an issue when relying purely on symbolic processing: say, how to represent the value of an irrational number, and how many irrational numbers could be represented, finite or infinite, countable or uncountable? Current von Neumann architectures only have finitely many memory units. The Turing Machine has infinitely many, but countably many, memory units. Could the Turing Machine be enhanced with uncountably many memory units?
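As a hedged illustration of this capacity contrast (the representation below is a hypothetical construction, not something prescribed by this paper), a finite, concept-level description of the square root of 2 occupies constant storage and supports exact answers, while any digit-level representation only ever holds a rational prefix whose storage grows with the number of digits requested:

from fractions import Fraction

# Assumption: a tiny hand-rolled stand-in for "the positive root of x*x = 2",
# representing the concept rather than its digits.
class SqrtTwo:
    """Finite, exact description of sqrt(2); no digits are stored."""
    def squared(self) -> Fraction:
        return Fraction(2)               # exact, answered from the definition alone
    def approximate(self, digits: int) -> Fraction:
        """Enumerating digits only ever yields a rational prefix."""
        approx = Fraction(1)
        tolerance = Fraction(1, 10 ** (digits + 1))
        while abs(approx * approx - 2) > tolerance:
            approx = (approx + Fraction(2) / approx) / 2   # Newton iteration
        scale = 10 ** digits
        return Fraction(int(approx * scale), scale)

concept = SqrtTwo()
print(concept.squared())                 # prints 2: exact, from constant storage
print(float(concept.approximate(10)))    # 1.4142135623: a truncated rational stand-in

# Storing more digits needs more memory cells; the concept itself does not.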
Since the methods of face-to-face communication, analogy, metaphor, implication, suggestion, etc., do not work in Turing Test or other blackbox tests, is it still possible for computers to be programmed to understand things like irrational numbers or uncountable sets? There is a blackbox-test issue in verifying this.
Assume infinite but countable storage, as in the Turing Machine, or interrogators with infinite testing time, and a computer that is able to compute the value of an irrational number digit by digit. In a blackbox test, how could the interrogators know whether the computer is only going to display a huge rational number whose digits match a prefix of an irrational number, or whether it is going to display a true irrational number? This issue could be resolved by whitebox tests, which review the program inside the computer to verify whether it really understands.
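A minimal sketch of this blackbox problem follows (both digit generators are hypothetical illustrations, not part of the original argument): from the outside, a finite transcript of digits cannot separate a program that keeps producing digits of an irrational number from one that merely replays a stored rational prefix; only inspecting the program, a whitebox view, reveals the difference.

from decimal import Decimal, getcontext

# Assumption: two illustrative digit generators standing in for the machines
# discussed above.

def sqrt2_digits():
    """Yields decimal digits of sqrt(2) one by one, for as long as asked."""
    n = 1
    while True:
        getcontext().prec = n + 5               # a few guard digits
        digits = str(Decimal(2).sqrt()).replace(".", "")
        yield digits[n - 1]
        n += 1

def fake_digits():
    """Replays a finite stored prefix, then pads with zeros forever."""
    stored = "1414213562373095"                 # finite, rational data
    for d in stored:
        yield d
    while True:
        yield "0"                               # the displayed number is rational

def interrogate(make_generator, k):
    """All a blackbox interrogator ever sees: the first k digits."""
    generator = make_generator()
    return "".join(next(generator) for _ in range(k))

print(interrogate(sqrt2_digits, 12))            # 141421356237
print(interrogate(fake_digits, 12))             # 141421356237: indistinguishable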
Turing Test cannot resolve these bottlenecks and issues.
3. Variants of Turing Test
There are several variants of Turing Test which aim at improving on it.
One is Feigenbaum test. According to Edward Feigenbaum, "Human intelligence is very multidimensional", "computational linguists have developed superb models for the processing of human language grammars. Where they have lagged is in the 'understand' part", "For an artifact, a computational intelligence, to be able to behave with high levels of performance on complex intellectual tasks, perhaps surpassing human level, it must have extensive knowledge of the domain." [3].
Feigenbaum test is actually a good method for testing the knowledge in expert systems. The test tries to approach generic intelligence by averaging over many expert systems. That is why it needs to test extensive knowledge.
Here, the bottlenecks of communication and storage, in expression and in capacity, and the issues of blackbox testing and understanding, still remain in Feigenbaum test, as well as in other variants of Turing Test. There are differences between knowledge, concepts, and data. The "understanding" part is still to be resolved.
One more issue with Feigenbaum test is that individual humans may not have very extensive knowledge in many domains, but they have potential. So extensive knowledge may not be necessary, but tests for potential are.
Another variant is Shane Legg and Marcus Hutter's solution [4], which is agent-based and a good test for tasks. Their solution tries to test generic intelligence by averaging over many tasks. It still relies on behavior imitation and comparison, so it is a blackbox test and a variant of Turing Test.
In their framework, an agent sends its actions to the environment and receives observations and rewards from it. If their framework is used to test strong AI, it assumes that all the interactions between humans and their environment can be modeled by actions, observations, and rewards. This assumption has not been tested yet. The bottlenecks of communication and storage, in expression and in capacity, and the issues of blackbox testing and understanding, still remain.
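To make the actions/observations/rewards framing concrete, here is a minimal interaction-loop sketch (the toy environment and agent below are hypothetical examples; Legg and Hutter's actual measure averages reward over a weighted class of computable environments rather than one hand-picked task):

import random

# Assumption: a toy number-guessing environment and a bisecting agent,
# standing in for the agent-based framework described above.

class GuessEnvironment:
    """Hides a number; observation says 'higher'/'lower'; reward is 1 on a hit."""
    def __init__(self):
        self.target = random.randint(0, 99)
    def step(self, action):
        if action == self.target:
            return "correct", 1.0
        return ("higher" if action < self.target else "lower"), 0.0

class BisectingAgent:
    """Chooses actions by bisection, using only the observations it has seen."""
    def __init__(self):
        self.low, self.high = 0, 99
    def act(self):
        return (self.low + self.high) // 2
    def observe(self, action, observation):
        if observation == "higher":
            self.low = action + 1
        elif observation == "lower":
            self.high = action - 1

environment, agent, total_reward = GuessEnvironment(), BisectingAgent(), 0.0
for _ in range(10):            # the loop: act, then receive observation and reward
    action = agent.act()
    observation, reward = environment.step(action)
    total_reward += reward
    agent.observe(action, observation)
    if observation == "correct":
        break
print("total reward:", total_reward)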
Furthermore, there are differences between humans and the definitions of agents. Humans can play some roles of agents, but they are not just agents. They can drive paradigm evolution or shift, which usually means gaining deeper observations, taking better actions, and obtaining more rewards than anything already captured in any definition.
Even if Turing Test is enhanced with vision and manipulation abilities, or with methods like statistical inference, it still does not resolve the bottlenecks of communication and storage, in expression and in capacity, or the issues of blackbox testing and understanding.
These issues could be addressed by concept understanding, whitebox testing, etc. The measurement of generic intelligence is not just producing the digits of one or a few irrational numbers.
4. The Design Goals for Generic Intelligence Measurement
Based on the analysis done in previous sections, some design goals are proposed here:
1) Resolve the Chinese Room issue, i.e., test real understanding, not just behavior imitation or symbolic processing.
2) Resolve the bottleneck in expression by not relying purely on interrogation. Find ways to test those intrinsic intelligence abilities which have not yet been well understood or expressed.
3) Resolve the bottleneck in capacity by leveraging the differences between concepts, knowledge, and data.
4) Use whitebox tests to resolve the blackbox test issue.
5) Involve as little domain knowledge as possible, since regular humans may not have much knowledge in specific domains. But include those essential intrinsic capabilities commonly needed across many domains, with which humans have the potential to develop intelligence in many domains.
6) Include sequential test levels, since humans are able to make continuous progress in intelligence.
7) Include a framework to test structured intelligence and the ability to make paradigm evolution or shift, since humans have such abilities.
5. Gu Test
Based on these design goals, Gu Test is proposed. It should include sequential test levels, and be able to test structured intelligence and the ability to make paradigm evolution or shift gradually.
The current effort is to achieve design goals 1) to 6). The work to meet goal 7) is left to future research.
The first test step of Gu Test is to test whether testees can understand irrational numbers without knowing their exact values. It is a whitebox test. Average humans with basic education can. Current computers most likely cannot. An advanced step could be to test the understanding of uncountable sets.
These steps test real understanding. They do not rely on interrogation, but test an intrinsic ability; humans have this ability without the bottlenecks in expression or capacity, though they probably do not yet know why they have it. They test concepts and knowledge which cannot be represented as data. Irrational number is a primitive concept developed in the age of Pythagoras, a pioneer in philosophy and mathematics; the concept is necessary in many domains, yet involves very little domain-specific knowledge. Uncountable set is an advanced concept.
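As a hedged sketch of what a whitebox reviewer might look for at this first level (the classes and the review function below are hypothetical; the paper does not prescribe a concrete implementation), a system whose internal representation carries the defining property of the square root of 2 and the reason it is irrational could pass, while one that merely stores a buffer of digits could not:

# Assumption: two hypothetical internal representations a whitebox reviewer
# might find when inspecting a system, used only to illustrate the criterion.

class ConceptSqrt2:
    """Represents sqrt(2) by its defining property, not by its digits."""
    defining_property = "the positive x with x * x == 2"
    def squared_equals_two(self):
        return True                  # follows directly from the definition
    def is_rational(self):
        # Classic argument: if x = p/q in lowest terms and x*x == 2, then
        # p*p == 2*q*q forces both p and q to be even -- a contradiction.
        return False
    def stored_digits(self):
        return None                  # no digit data is held at all

class DigitBufferSqrt2:
    """Stores a finite approximation; carries nothing about its meaning."""
    def stored_digits(self):
        return "1.4142135623730951"

def whitebox_review(system):
    """Passes only representations that carry the concept, not just data."""
    if system.stored_digits() is not None:
        return False                 # only a buffer of digits: data, not concept
    return system.squared_equals_two() and not system.is_rational()

print(whitebox_review(ConceptSqrt2()))      # True: concept-level representation
print(whitebox_review(DigitBufferSqrt2()))  # False: digits without understanding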
Due to these characteristics, Gu Test is very different from Turing Test and its variants: it tests the understanding part.
Irrational numbers and uncountable sets are just mathematical concepts. Physical concepts are in completely different dimensions. Making generic intelligence understand and represent physical concepts would be a very different challenge.
6. Comparison with Other Test Methods
As described above, Gu Test is very different from behavior-based, knowledge-based, and task-based tests. It is a whitebox test, requiring humans to analyze whether a system achieves certain intelligence levels through certain internal capabilities.
So Gu Test represents a complete paradigm shift from previous test methods, and it is not directly comparable with them.
7. Future Research
Much more work needs to be done to extend Gu Test to include various test levels and to meet design goal 7).
The analysis of the bottlenecks and issues of Turing Test naturally leads to questions about the power and limitations of the Turing Machine, and about what better models for artificial intelligence might be. Does such a model need uncountably many memory units? If yes, how could they be implemented? If not, how could the power be enhanced? People probably have to revisit the Church-Turing thesis. It is possible to build models exceeding the power of the Turing Machine mathematically. However, it would be a challenge to develop such a model that matches physical reality.
To really understand the essentials of intelligence, people have to study the history of knowledge development, philosophy, mathematics, sciences, etc.
References
[1] Turing, A. M., 1950, "Computing machinery and intelligence". Mind 59, 433–460.
[2] Searle, John R., 1980, "Minds, brains, and programs". Behavioral and Brain Sciences 3 (3): 417–457.
[3] Feigenbaum, Edward A., 2003, "Some challenges and grand challenges for computational intelligence". Journal of the ACM 50 (1): 32–40.
[4] Legg, S. & Hutter, M., 2006, "A Formal Measure of Machine Intelligence", Proc. 15th Annual Machine Learning Conference of Belgium and The Netherlands, pp. 73–80.
Friday, March 23, 2012
The Growth of Souls -- Brain, Heart and Pleasure
Brain: there are two types of intelligence: task-oriented and world-oriented.
Heart: it is the heart which makes people connected, attached, or committed to something, somewhere, or someone else.
Pleasure: it can be fast-changing, diversified, and temporary. So it is better that you choose pleasure, rather than letting pleasure choose you.
Souls combine hearts with brains and pleasure, and make people coherent.
To think about growth and the future is to understand your soul, with your brain, heart, and pleasure.
Genes are not completely selfish. Be serious and meticulous about what you have already committed to and what you are going to commit to. Try to understand the past and the future with coherence.