Intelligence is the ability to reason, understand, create, learn from experience, plan, and execute complex tasks. Artificial intelligence can be defined as the endeavor of giving machines the ability to perform tasks normally associated with human intelligence. According to Barr and Feigenbaum, "Artificial intelligence is the part of computer science concerned with designing intelligent computer systems, i.e., systems that exhibit the characteristics we associate with intelligence in human behavior."
The different definitions of AI given by different books and writers can basically be divided into the following dimensions:
Natural language processing: the machine must be able to communicate in English (or another human language).
Total Turing test: the total Turing test adds video signals and physical manipulation so that the interrogator can test the subject's perceptual abilities and ability to manipulate objects. To pass the total Turing test, a computer needs the following additional capabilities: computer vision, to perceive objects, and robotics, to manipulate objects and move about.
Thinking Humanly (The cognitive modeling approach)
To say that a given program thinks like a human, we must have some way of determining how humans think; we need to get at the actual workings of human minds. This can be done in two ways: through introspection, trying to catch our own thoughts as they go by, and through psychological experiments, observing human behavior.
Once we have a sufficiently precise theory of the mind, it becomes possible to express that theory as a computer program.
Thinking Rationally (The laws of thought approach)
The laws of thought, first codified by Aristotle through his study of the reasoning process, were supposed to govern the operation of the mind. Aristotle gave syllogisms, argument patterns that always yield correct conclusions when given correct premises. For example:
Ram is a man.
All men are mortal.
∴ Ram is mortal.
This study initiated the field of logic in artificial intelligence, which hopes to create intelligent systems by using logic programming. However, there are two major obstacles to this approach: first, it is not easy to state informal knowledge in the formal terms required by logical notation, particularly when the knowledge is uncertain; second, there is a big difference between being able to solve a problem in principle and doing so in practice, since even a modest number of facts can exhaust the resources of a naive reasoner.
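The syllogism above can be sketched as a tiny forward-chaining inference loop. This is a minimal illustration, not a real logic programming system; the fact and rule encodings are assumptions made for the example.

```python
# Facts are (predicate, subject) pairs; rules map a premise predicate
# to a conclusion predicate ("all men are mortal").
facts = {("man", "Ram")}                 # premise: Ram is a man
rules = [(("man",), "mortal")]           # premise: all men are mortal

def forward_chain(facts, rules):
    """Apply every rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for pred, subject in list(derived):
                if pred in premises and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(("mortal", "Ram") in forward_chain(facts, rules))  # True: Ram is mortal
```

A real logic programming language such as Prolog performs a far more general version of this kind of mechanical deduction, but the principle, deriving new conclusions from premises by rule application, is the same.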
Acting Rationally (The rational agent approach)
An agent can be defined as anything that acts. A computer agent is expected to have attributes such as autonomous control, the ability to perceive its environment, persistence over a prolonged period of time, adaptation to change, and the capacity to take on another agent's goals.
Rational behavior means doing the right thing, where the right thing is that which is expected to maximize goal achievement given the available information.
Hence, a rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
In the laws of thought approach to AI, the emphasis was on correct inferences. Making correct inferences is sometimes part of being a rational agent, because one way to act rationally is to reason logically to a conclusion and act on that conclusion. However, there are also ways of acting rationally that do not involve inference. For example, recoiling from a hot stove is a reflex action that is usually more successful than a slower action taken after careful deliberation.
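The idea of choosing the best expected outcome can be made concrete with a small sketch. The actions, probabilities, and values below are invented for illustration only; they are not from the text.

```python
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one action."""
    return sum(p * v for p, v in outcomes)

def rational_choice(actions):
    """Pick the action whose expected outcome value is highest."""
    return max(actions, key=lambda name: expected_value(actions[name]))

# Illustrative numbers only, echoing the hot-stove example: the fast
# reflex usually does better than slow deliberation, despite a small risk.
actions = {
    "deliberate": [(1.0, 2.0)],               # certain but slow, modest value
    "reflex":     [(0.9, 5.0), (0.1, -1.0)],  # usually better, small downside
}
print(rational_choice(actions))  # "reflex": 0.9*5.0 + 0.1*(-1.0) = 4.4 > 2.0
```

The agent is "rational" here simply in the sense of the definition above: with the information it has, it selects the action that maximizes the expected achievement of its goal.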
Brief history of AI
In 1943, Warren McCulloch and Walter Pitts developed a model of artificial neurons in which each neuron computes a boolean function of its inputs. This was the first step toward connectionist computation; Donald Hebb (1949) later proposed a simple rule for updating the connection strengths between neurons (Hebbian learning).
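A McCulloch-Pitts neuron can be sketched in a few lines: it fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold. The particular weights and thresholds below are assumptions chosen to realize AND and OR.

```python
def mcp_neuron(inputs, weights, threshold):
    """Boolean threshold unit: fire (1) when sum(w_i * x_i) >= threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Such units can compute basic boolean functions:
AND = lambda x, y: mcp_neuron((x, y), weights=(1, 1), threshold=2)
OR  = lambda x, y: mcp_neuron((x, y), weights=(1, 1), threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

Networks of such units can in principle compute any boolean function, which is what made the model a plausible abstraction of neural computation.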
In 1950, Alan Turing published the article "Computing Machinery and Intelligence," which articulated the first complete vision of AI.
The birth of AI (1956):
The term "artificial intelligence" was coined at the 1956 Dartmouth workshop organized by John McCarthy, which brought together the founders of the field.
Great expectations (1952-1969):
Newell and Simon introduced the General Problem Solver, which imitated human problem-solving. In 1952, Arthur Samuel investigated game playing (checkers) with great success. In 1958, John McCarthy invented Lisp, the second-oldest high-level programming language, and advocated a logic-oriented approach with a separation between knowledge and reasoning. Marvin Minsky, by contrast, supervised work on micro-worlds such as the blocks world, limited domains that nonetheless required intelligence to solve; this line of work was anti-logic in spirit.
Collapse in AI research (1966 - 1973):
During the period 1966-1973, progress in AI was slower than the unrealistic early predictions had suggested. Many systems failed to scale up because of the combinatorial explosion in search. Fundamental limitations of the basic techniques and representations were also exposed, most famously in Minsky and Papert's book Perceptrons (1969).
AI revival through knowledge-based systems (1969-1979):
AI becomes an industry (1980 - present):
Connectionist revival (1986 - present): (Return of Neural Network):
Parallel distributed processing (Rumelhart and McClelland, 1986); back-propagation.
AI becomes a science (1987 - present):
The emergence of intelligent agents (1995 - present):
The whole-agent problem: "How does an agent act/behave when embedded in a real environment with continuous sensory inputs?"