Abstract
This chapter introduces a novel learning paradigm that underpins the rapid learning ability of noological systems: effective causal learning. The learning process is rapid, requiring only a handful of training instances. The causal rules learned are instrumental in problem solving, which is the primary processing backbone of a noological system. Causal rules are characterized as consisting of a diachronic component and a synchronic component, which distinguishes our formulation of causal rules from that of other research. A classic problem, the spatial movement to goal problem, is used to illustrate the power of causal learning in vastly reducing the problem solving search space involved; this is contrasted with the traditional AI A* algorithm, which requires a huge search space. As a result, the method is scalable to real-world situations. The script, a knowledge structure consisting of a start state, action steps, an outcome/goal, and counterfactual information, is proposed as the fundamental noologically efficacious unit of intelligent behavior. The discussions culminate in a general forward search framework for noological systems that is applied to various scenarios in the rest of the book.
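The A* baseline (Hart et al. 1968) that the abstract contrasts with causal learning can be sketched on a small grid. The grid size, wall positions, and unit step costs below are illustrative assumptions, not the chapter's experimental setup:

```python
# Minimal A* search on a 2-D grid with a Manhattan-distance heuristic.
# Grid, walls, and costs are illustrative assumptions only.
import heapq

def astar(start, goal, walls, size=10):
    """Return a shortest 4-connected path from start to goal, or None."""
    def h(p):                        # admissible Manhattan heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, position, path)
    seen = {}                        # best g found so far per position
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path              # first goal pop is optimal
        if seen.get(pos, float("inf")) <= g:
            continue
        seen[pos] = g
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in walls:
                heapq.heappush(frontier, (g + 1 + h((nx, ny)), g + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None

path = astar((0, 0), (4, 4), walls={(2, 2), (2, 3)})
print(len(path) - 1)   # 8 moves: the walls do not force a detour here
```

Even on this tiny grid, the frontier explores many states that a learned causal rule ("moving toward the goal reduces the distance") would prune immediately, which is the contrast the chapter develops.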
Notes
- 1.
If earlier the places at which the efficacious force was experienced were Place 1, Place 2, …, Place N, then the disjunctive sum totality of them would be Place 1 or Place 2 … or Place N, and if these are places on earth or "in space as we know it," then the synchronic precondition is "on earth" or "in space as we know it."
References
Agresti, A., & Franklin, C. (2007). Statistics: The art and science of learning from data (3rd ed.). Boston: Pearson Education, Inc.
Fire, A., & Zhu, S.-C. (2015). Learning perceptual causality from video. ACM Transactions on Intelligent Systems and Technology, 7(2), 23. doi:10.1145/2809782.
Greenspan, J. (2013). Coyotes in the crosswalks? Fuggedaboutit! Scientific American, 309(4), 17. New York: Scientific American.
Hart, P. E., Nilsson, N. J., & Raphael, B. (1968). A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, SSC-4(2), 100–107.
Ho, S.-B. (2014). On effective causal learning. In Proceedings of the 7th international conference on artificial general intelligence, Quebec City (pp. 43–52). Berlin: Springer.
Milberger, S., Davis, R. M., Douglas, C. E., Beasley, J. K., Burns, D., Houston, T., & Shopland, D. (2006). Tobacco manufacturers’ defence against plaintiffs’ claims of cancer causation: Throwing mud at the wall and hoping some of it will stick. Tobacco Control, 15(4), iv17–iv26. doi:10.1136/tc.2006.016956.
Moore, D. S., McCabe, G. P., & Craig, B. A. (2009). Introduction to the practice of statistics (6th ed.). New York: W. H. Freeman.
Nolte, J. (2009). The human brain: An introduction to its functional anatomy (6th ed.). Philadelphia: Mosby Elsevier.
Pearl, J. (2009). Causality: Models, reasoning, and inference (2nd ed.). Cambridge: Cambridge University Press.
Schank, R., & Abelson, R. (1977). Scripts, plans, goals and understanding. Hillsdale: Lawrence Erlbaum Associates.
Smolin, L. (2013). Time reborn: From the crisis of physics to the future of the universe. Boston: Houghton Mifflin Harcourt.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.
Problem
The crawling robot problem is defined as follows. In Fig. 2.29a a robot, resting on the ground, is shown to have a body and two hinged arms extending from its "front" (its right "face"). The first arm, which is hinged to the front of the body, makes an angle α with the vertical face. The second arm is hinged to the first arm as shown and makes an angle β with a vertical line. The second arm's distant tip is labeled "Tip." The desired movement for the robot is to keep moving to the right. Both arms have to be moved in the correct sequence for the desired movement to take place. For example, given the states of the arms shown in Fig. 2.29a, if the second arm were to "swing outward," reducing β, the robot would move to the left, which is undesired. The correct solution is first to lift the first arm, reducing the angle α, as shown in Fig. 2.29b. The second arm is then no longer touching the ground (Tip is above the ground), and it can swing outward (reducing β) until it swings past the vertical line, as shown in Fig. 2.29c. The first arm can now be lowered until the second arm's Tip touches the ground, as shown in Fig. 2.29d. After this, if the second arm swings inward toward the body of the robot (reducing the current β), the robot will move to the right. If this correct sequence of actions is applied repeatedly, the robot will keep moving to the right. The problem is to learn/discover this correct sequence of actions.
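The arm dynamics described above can be captured in a toy discrete model. The state encoding (whether the first arm is raised, whether the second arm has swung out, and the body's x position) and the unit displacements below are simplifying assumptions for illustration, not the chapter's formulation:

```python
# Toy discretization of the crawling robot (hypothetical model): the
# body moves only when the second arm swings while its Tip is grounded.

def step(state, action):
    """Apply one action; return the new (arm1_raised, arm2_out, x) state."""
    arm1_raised, arm2_out, x = state
    if action == "raise_arm1":
        arm1_raised = True           # Tip leaves the ground
    elif action == "lower_arm1":
        arm1_raised = False          # Tip touches the ground again
    elif action == "swing_out" and not arm2_out:
        arm2_out = True
        if not arm1_raised:
            x -= 1                   # grounded outward swing drags body left
    elif action == "swing_in" and arm2_out:
        arm2_out = False
        if not arm1_raised:
            x += 1                   # grounded inward swing pulls body right
    return (arm1_raised, arm2_out, x)

# The correct cycle: lift, swing out in the air, plant, swing in.
cycle = ["raise_arm1", "swing_out", "lower_arm1", "swing_in"]

state = (False, False, 0)
for _ in range(3):                   # three cycles of four actions each
    for a in cycle:
        state = step(state, a)
print(state)   # (False, False, 3)
```

Three cycles leave the arms in their starting configuration with the body displaced three units to the right, while a grounded outward swing alone moves it left, matching the failure mode described above.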
Reinforcement learning (Sutton and Barto 1998) has been successfully applied to this problem (Kranf Site: www.applied-mathematics.net/qlearning/qlearning.html): each time the robot executes a sequence of actions that leads to the desired direction of movement, a positive reinforcement is given, and when it makes an error (i.e., the sequence of actions results in the robot moving to the left), a negative reinforcement is given. However, reinforcement learning requires many learning episodes. Apply rapid causal learning to the problem to obtain a faster solution (see Ho (2014) for hints).
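As a sketch of how rapid causal learning might beat episode-heavy reinforcement learning here (the encoding below is a hypothetical illustration; see Ho (2014) for the actual formulation): the learner tries each action once per arm configuration, records each observed displacement as a diachronic rule whose synchronic precondition is the arm configuration, and then plans directly over the learned rules instead of accumulating rewards.

```python
# Rapid causal learning sketch on a toy crawling-robot model
# (hypothetical encoding, not the chapter's formulation).
from itertools import product

ACTIONS = ["raise_arm1", "lower_arm1", "swing_out", "swing_in"]

def step(state, action):
    """Toy environment: (arm1_raised, arm2_out, x) -> new state."""
    up, out, x = state
    if action == "raise_arm1":
        up = True
    elif action == "lower_arm1":
        up = False
    elif action == "swing_out" and not out:
        out = True
        if not up:
            x -= 1                   # grounded outward swing moves body left
    elif action == "swing_in" and out:
        out = False
        if not up:
            x += 1                   # grounded inward swing moves body right
    return (up, out, x)

# Causal learning pass: a single trial per (configuration, action) pair,
# i.e., 16 observations in total rather than many reward episodes.
rules = {}                           # (up, out, action) -> (up', out', dx)
for up, out, a in product([False, True], [False, True], ACTIONS):
    nup, nout, dx = step((up, out, 0), a)
    rules[(up, out, a)] = (nup, nout, dx)

# Plan with the learned rules: find a 4-action cycle with positive net
# displacement that returns the arms to their start configuration.
for seq in product(ACTIONS, repeat=4):
    up, out, dx = False, False, 0
    for a in seq:
        nup, nout, d = rules[(up, out, a)]
        up, out, dx = nup, nout, dx + d
    if (up, out) == (False, False) and dx > 0:
        print(seq, dx)  # ('raise_arm1', 'swing_out', 'lower_arm1', 'swing_in') 1
        break
```

The 16 learning trials plus a search over the learned model recover the lift–swing–plant–swing cycle in one pass, whereas the Q-learning formulation cited above needs repeated reinforcement episodes to converge on the same behavior.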
Copyright information
© 2016 Springer International Publishing Switzerland
About this chapter
Cite this chapter
Ho, SB. (2016). Rapid Unsupervised Effective Causal Learning. In: Principles of Noology. Socio-Affective Computing, vol 3. Springer, Cham. https://doi.org/10.1007/978-3-319-32113-4_2
DOI: https://doi.org/10.1007/978-3-319-32113-4_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-32111-0
Online ISBN: 978-3-319-32113-4
eBook Packages: Biomedical and Life Sciences (R0)