Brief Introduction to AI

Intelligence is the ability to reason, understand, create, learn from experience, and plan and execute complex tasks. Artificial intelligence can be defined as the endeavor of giving machines the ability to perform tasks normally associated with human intelligence. According to Barr and Feigenbaum, “Artificial intelligence is the part of computer science concerned with designing intelligent computer systems, i.e., systems that exhibit the characteristics we associate with intelligence in human behavior.”
The various definitions of AI given by different books and writers can broadly be divided along the following dimensions:

Human-centered approaches form an empirical science, involving hypothesis formation and experimental confirmation. Rationalist approaches involve a combination of mathematics and engineering.
Acting humanly (the Turing test approach)
The Turing test, proposed by Alan Turing in 1950, was designed to determine whether a particular machine can think. He suggested a test based on a machine's indistinguishability from an undeniably intelligent entity: a human being. The test involves a human interrogator who interacts with one human and one machine. Within a given time, the interrogator has to find out which of the two is the human and which is the machine, by posing queries to both. If the interrogator cannot tell which respondent produced a given answer, the machine is said to have passed the Turing test. The Turing test avoids physical interaction with the interrogator, because physical simulation of a human being is not necessary for testing intelligence.
To pass the Turing test, a machine must have the following capabilities:

  • Natural language processing: to be able to communicate in a human language such as English.
  • Knowledge representation: to be able to store what it knows.
  • Automated reasoning: to be able to justify its answers.
  • Machine learning: to be able to adapt to changes.
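As a purely illustrative sketch of the protocol (the responder functions, label scheme, and guessing interrogator are invented for this example, not a real system), the imitation game can be outlined in Python:

```python
import random

# Hypothetical sketch of the imitation-game protocol: the interrogator
# questions two unlabeled respondents and must decide which is the machine.

def human_respond(query):
    return "I'd say: " + query.lower()   # stand-in for a real person

def machine_respond(query):
    return "I'd say: " + query.lower()   # a machine imitating the human

def imitation_game(queries, guesser):
    # Randomly assign the human and the machine to the labels "A" and "B".
    respondents = [("human", human_respond), ("machine", machine_respond)]
    random.shuffle(respondents)
    labels = {"A": respondents[0], "B": respondents[1]}
    # Pass every query to both respondents and record the answers.
    transcript = {lab: [(q, fn(q)) for q in queries]
                  for lab, (kind, fn) in labels.items()}
    guess = guesser(transcript)          # interrogator names the machine's label
    machine_label = "A" if labels["A"][0] == "machine" else "B"
    # The machine "passes" if the interrogator fails to identify it.
    return guess != machine_label

# An interrogator who can do no better than guessing identifies the
# machine only at chance level over repeated trials.
passes = sum(imitation_game(["What is 2+2?"], lambda t: random.choice("AB"))
             for _ in range(1000))
```

Here the two respondents answer identically, so even over many trials a guessing interrogator unmasks the machine only about half the time.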

Total Turing test: The total Turing test adds video signals and manipulation capability, so that the interrogator can test the subject's perceptual abilities and its ability to manipulate objects. To pass the total Turing test, a computer must have the following additional capabilities:

  • Computer Vision to perceive objects
  • Robotics to manipulate objects and move

Thinking humanly (the cognitive modeling approach)
To say that a given program thinks like a human, we must have some way of determining how humans actually think; we need to get inside the workings of human minds. There are two ways to do this: through introspection, trying to catch our own thoughts as they go by, and through psychological experiments, observing human behavior.
Once we have a sufficiently precise theory of the mind, it becomes possible to express that theory as a computer program.

Thinking rationally (the laws of thought approach)
The laws of thought, first codified by Aristotle in his study of the reasoning process, were supposed to govern the operation of the mind. Aristotle gave syllogisms that always yield correct conclusions when correct premises are given. For example:
Ram is a man.
All men are mortal.
∴ Ram is mortal.
This study initiated the field of logic within artificial intelligence, which hopes to create intelligent systems using logic programming. However, there are two major obstacles to this approach:

  1. It is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain.
  2. Solving a problem in principle is different from solving it in practice. Even problems with a few dozen facts can exhaust the computational resources of any computer unless it has some guidance as to which reasoning steps to try first.
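To make the syllogism concrete, it can be encoded as a tiny forward-chaining sketch (the fact representation and rule format here are my own minimal assumptions, not a standard logic-programming system):

```python
# Minimal forward chaining over unary predicates, encoding the syllogism:
# man(Ram) plus the rule "all men are mortal" derives mortal(Ram).
facts = {("man", "Ram")}
rules = [("man", "mortal")]   # every X with man(X) also has mortal(X)

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:             # repeat until no new fact can be derived
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

# ("mortal", "Ram") is now derivable from the premises.
result = forward_chain(facts, rules)
```

The loop also hints at the second obstacle above: with many facts and rules, the number of candidate inference steps explodes, so a real system needs guidance on which step to try first.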

Acting rationally (the rational agent approach)
An agent can be defined as anything that acts. A computer agent is expected to have attributes such as autonomous control, the ability to perceive its environment, persistence over a prolonged period of time, adaptation to change, and the capability of taking on another's goals.
Rational behavior means doing the right thing, where the right thing is whatever is expected to maximize goal achievement, given the available information.
Hence, a rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
In the laws of thought approach to AI, the emphasis was on correct inference. Making correct inferences is sometimes part of being a rational agent, because one way to act rationally is to reason logically to a conclusion and then act on that conclusion. However, there are also ways of acting rationally that do not involve inference. For example, recoiling from a hot stove is a reflex action that is usually more successful than a slower action taken after careful deliberation.
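The reflex example can be sketched as a simple condition-action agent (the percept names and rules below are invented for illustration, not a standard agent design):

```python
# Hypothetical simple reflex agent: condition-action rules map percepts
# straight to actions, with no intermediate chain of inference.
REFLEX_RULES = {
    "hand_on_hot_stove": "recoil",
    "obstacle_ahead":    "turn",
}

def reflex_agent(percept):
    # Acting rationally here means choosing the action expected to do
    # best for the current percept, without deliberating first.
    return REFLEX_RULES.get(percept, "no_op")
```

Such an agent acts rationally in its narrow setting even though it never reasons to a conclusion; a logic-based agent would reach the same action only after a slower chain of inference.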


The rational-agent approach has two advantages:

  • It is more general than the laws of thought approach, because correct inference is just one of several possible mechanisms for achieving rationality.
  • It is more amenable to scientific development than the approaches based on human behavior or human thought, because the standard of rationality is clearly defined and completely general.

Brief history of AI
In 1943, Warren McCulloch and Walter Pitts developed a model of artificial Boolean neurons that could perform computations. This was the first step toward connectionist computation and learning (Hebbian learning).
In 1950, Alan Turing published “Computing Machinery and Intelligence,” which presented the first complete vision of AI.

The birth of AI (1956):

  • The Dartmouth Workshop brought together top minds in automata theory, neural nets, and the study of intelligence.
  • Allen Newell and Herbert Simon presented the Logic Theorist, the first non-numeric thinking program, used for theorem proving.
  • For the next 20 years the field was dominated by these participants.

Great expectations (1952-1969):
Newell and Simon introduced the General Problem Solver, which imitated human problem-solving. In 1952, Arthur Samuel investigated game playing (checkers) with great success. In 1958, John McCarthy invented Lisp, the second-oldest high-level programming language; his approach was logic-oriented, with a separation between knowledge and reasoning. Also in 1958, Marvin Minsky introduced micro-worlds, such as the blocks world, that required intelligence to solve; this line of work was anti-logic-oriented.

Collapse in AI research (1966 - 1973):
During 1966-1973, progress in AI was slower than expected, owing to earlier unrealistic predictions. Some systems lacked scalability because of the combinatorial explosion in search. Fundamental limitations on techniques and representations were also exposed, notably in Minsky and Papert's Perceptrons (1969).

AI revival through knowledge-based systems (1969-1979):

  • General-purpose vs. domain-specific systems
    • e.g., the DENDRAL project (Buchanan et al., 1969), the first successful knowledge-intensive system
  • Expert systems
    • MYCIN, to diagnose blood infections (Feigenbaum et al.)
    • Introduction of uncertainty in reasoning
  • Increase in knowledge representation research
    • Logic, frames, semantic nets, …

AI becomes an industry (1980 - present):

  • R1 at DEC (McDermott, 1982)
  • Fifth Generation project in Japan (1981)
  • American response …

This period put an end to the AI winter.

Connectionist revival (1986 - present): (Return of Neural Network):
Parallel distributed processing (Rumelhart and McClelland, 1986); the backpropagation algorithm.

AI becomes a science (1987 - present):

  • In speech recognition: hidden Markov models
  • In neural networks
  • In uncertain reasoning and expert systems: Bayesian network formalism

The emergence of intelligent agents (1995 - present):
The whole-agent problem: “How does an agent act/behave when embedded in a real environment with continuous sensory inputs?”


