Sunday, 18 January 2009

Lists

The list is a simple data structure widely used in non-numeric programming.
A list is a sequence of any number of items, such as ann, tennis, tom, skiing. Such a list can be written in Prolog as:
[ann,tennis,tom,skiing]
Lists can be empty or non-empty. The empty list is written as the atom []. A non-empty list can be viewed as consisting of two things:
The first item, called the head;
The remaining part, called the tail.
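For example, the head and tail can be read off directly by matching against the [Head|Tail] notation (output as shown by SWI-Prolog):
?- [ann,tennis,tom,skiing] = [Head|Tail].
Head = ann,
Tail = [tennis, tom, skiing].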

Unification Rules

A constant unifies only with itself.
Two structures unify if and only if they have the same functor and the same number of arguments, and the corresponding arguments unify recursively.
A variable unifies with anything.
Unification = Matching + Occurs check
What happens when we ask Prolog:
?- X = f(X).
Matching succeeds (no occurs check is performed), but true unification fails: no finite term can be equal to f of itself.
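In SWI-Prolog this can be observed directly: = performs matching, while the built-in unify_with_occurs_check/2 performs full unification:

?- X = f(X).
X = f(X).          % matching succeeds, building a cyclic term

?- unify_with_occurs_check(X, f(X)).
false.             % the occurs check makes unification fail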

Unification

How do we ask "what does Fred eat?", given the fact:
eats(fred,oranges).
If we try:
?- eats(fred,what).
Prolog will say "no" to this question. The reason is that what, written in lower case, is an atom, so Prolog looks for the relation eats(fred,what) in its database and cannot find it.
In this case we have to use a variable, which will be unified so as to match a relation given in the program. This process is known as unification.


Now that we know how to use variables, we can ask the same question as before using the variable What instead of an atom:
?- eats(fred,What).
What = oranges
Yes
In this case Prolog tries to unify the variable with an atom. "What = oranges" means that the query is successful when What is unified with oranges.


With the same program, if we ask:
?- eats(Who,oranges).
which means "who eats oranges?", Prolog should answer:
Who = fred
yes

Monkey and Banana

A monkey is in a room. Suspended from the ceiling is a bunch of bananas, beyond the monkey's reach. In the corner of the room is a box.
How can the monkey get the bananas?
The solution is that the monkey must push the box under the bananas, then stand on the box, and then grab the bananas.
In other variants of the problem, the bananas are in a chest and the monkey first has to open the chest using a key.
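In the program below, a situation is represented as state(MonkeyPosition, MonkeyOnBoxOrFloor, BoxPosition, HasOrHasNot); this reading of the four arguments follows Bratko's formulation of the problem and can be checked against the clauses themselves.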


move( state(middle, onbox, middle, hasnot), grasp,      % Grasp banana
      state(middle, onbox, middle, has) ).

move( state(P, onfloor, P, H), climb,                   % Climb box
      state(P, onbox, P, H) ).

move( state(P1, onfloor, P1, H), push(P1, P2),          % Push box from P1 to P2
      state(P2, onfloor, P2, H) ).

move( state(P1, onfloor, B, H), walk(P1, P2),           % Walk from P1 to P2
      state(P2, onfloor, B, H) ).

% canget(State, Moves): monkey in State can get banana by performing Moves
canget( state(_, _, _, has), [] ).
canget( State1, [M|List] ) :-
    move(State1, M, State2),
    canget(State2, List).

solve(Moves) :-
    canget( state(atdoor, onfloor, atwindow, hasnot), Moves ).


For help, use ?- help(Topic). or ?- apropos(Word).

?- [mb].
% mb compiled 0.00 sec, 2,280 bytes
Yes

?- solve(Moves).
Moves = [walk(atdoor, atwindow), push(atwindow, middle), climb, grasp]
Yes

Order of Clauses and Goals: Infinite Loops

Consider the following clause:
p :- p.
This means "p is true if p is true". Declaratively this is perfectly correct, but procedurally it is quite useless.
?- p.
Using the clause above, the goal p is replaced by the same goal p; this will in turn be replaced by p, and so on. Prolog enters an infinite loop without noticing that no progress is being made.
Changes in the order of clauses and goals can likewise cause infinite loops. For example, you cannot move if you have not got up first.


Different orderings of clauses and goals produce different procedural behavior.
The predecessor procedure consists of two clauses, and one of them has two goals in the body. There are, therefore, four variations of this program, all with the same declarative meaning, obtained by:
Swapping the two clauses, and
Swapping the goals within each clause ordering.


The original version:
pred1(X, Z) :- parent(X, Z).
pred1(X, Z) :- parent(X, Y), pred1(Y, Z).
Variation a:
pred2(X, Z) :- parent(X, Y), pred2(Y, Z).
pred2(X, Z) :- parent(X, Z).
Variation b:
pred3(X, Z) :- parent(X, Z).
pred3(X, Z) :- pred3(X, Y), parent(Y, Z).
Variation c:
pred4(X, Z) :- pred4(X, Y), parent(Y, Z).
pred4(X, Z) :- parent(X, Z).
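All four variations are meant to be run against the same parent/2 database. A minimal example (the classic family tree from Bratko's textbook; any parent/2 facts in which tom is an ancestor of pat would do) is:

parent(pam, bob).
parent(tom, bob).
parent(tom, liz).
parent(bob, ann).
parent(bob, pat).
parent(pat, jim).

Here tom is a predecessor of pat through bob.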



Let’s see what happens if we ask whether Tom is a predecessor of Pat using the four variations of the predecessor relation:

?- pred1(tom, pat).
yes

?- pred2(tom, pat).
yes

?- pred3(tom, pat).
yes

?- pred4(tom, pat).

Prolog cannot find the answer: the first clause of pred4 calls pred4 again before trying anything else, so the recursion never makes progress. This manifests on the terminal as a Prolog message such as "More core needed" or "Stack overflow".

Procedural Meaning

The procedural meaning specifies how Prolog answers questions.
Input: a program and a goal list.
Output: a success/failure indicator and an instantiation of variables.
The meaning of the two output results is as follows:
The success/failure indicator is ‘yes’ if the goals are satisfiable and ‘no’ otherwise.
An instantiation of variables is only produced in the case of a successful termination; in the case of failure there is no instantiation.

Declarative Meaning

Declarative readings of the clause "P :- Q, R" are:
P is true if Q and R are true.
From Q and R follows P.
Procedural readings of this clause are:
To solve problem P, first solve the subproblem Q and then the subproblem R.
To satisfy P, first satisfy Q and then R.


Note that the ',' character denotes the AND operator.
Note that the ';' character denotes the OR operator.
The AND operator has a higher priority than the OR operator.
The clauses "P :- Q, R; S, T, U." and "P :- (Q, R); (S, T, U)." give the same results.
The same clause can be written as two separate clauses:
P :- Q, R.
P :- S, T, U.
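As a small illustration (weekend/1 is a hypothetical predicate, not a built-in), the clause

weekend(Day) :- Day = saturday ; Day = sunday.

behaves exactly like the pair of clauses:

weekend(saturday).
weekend(sunday).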

Matching

Matching is an operation on terms (structures).
Given two terms, they match if:
They are identical, or
They can be made identical by properly instantiating the variables in both terms.
Matching two dates:
date(D1, M1, 2006) = date(D2, june, Y2).
This causes the variables to be instantiated as:
D1 = D2
M1 = june
Y2 = 2006

The terms below do not match, because the constants 2008 and 2000 differ:
date(D, M, 2008) = date(D1, M1, 2000).
To check in Prolog whether two terms match, we use the '=' operator:
?- date(D, M, 2008) = date(D1, may, Y1).

Example:
?- date(D, M, 2001) = date(D1, may, Y1),
   date(D, M, 2001) = date(15, M, Y).
D = 15, D1 = 15, M = may, Y1 = 2001, Y = 2001
Example:
?- date(D1, M1, 2006) = date(D2, june, Y2),
   date(D1, M1, 2006) = date(17, M3, Y3).
D1 = 17, D2 = 17, M1 = june, M3 = june, Y2 = 2006, Y3 = 2006



Matching succeeds or fails; if it succeeds, it results in the most general instantiation.

To decide whether terms S and T match:
If S and T are constants, they match only if they are identical.
If S is a variable and T is anything, matching succeeds and S is instantiated to T (and vice versa).
If S and T are structures, they match only if:
S and T have the same principal functor, and
all their corresponding components match.



Structures

Structure objects are objects that have several components; the components can themselves be structures. The general form is functor(Component1, ..., ComponentN), for example:
date(Day, Month, Year).
A structure can be depicted as a tree: the root of the tree is the functor and the leaves are the components. Structures can also be used to represent simple geometric objects.
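For instance (the classic geometric examples from Bratko's textbook; point, seg and triangle are illustrative functors, not built-ins):

point(1, 1)
seg( point(1,1), point(2,3) )
triangle( point(4,2), point(6,4), point(7,1) )

In seg( point(1,1), point(2,3) ) the functor seg is the root of the tree, its two components are themselves point structures, and the leaves are the numbers 1, 1, 2 and 3.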

Variables

Variables are strings of letters, digits and underscore characters. They start with an upper-case letter or an underscore character:
X
X2
X_Y
_x123
_12
Variables without a name are called anonymous variables, which is written as a single underscore character.
hasachild(X):-parent(X,_).


Each time a single underscore character occurs in a clause, it represents a new anonymous variable.
"somebody_has_a_child :- parent(_,_)." is equivalent to
"somebody_has_a_child :- parent(X,Y)." but not to
"somebody_has_a_child :- parent(X,X)."
(Note that the head must start with a lower-case letter; a capitalised Somebody_has_a_child would be a variable.)
Does Ali have any children, and if so, what are their names?
?- parent(ali,Y).
Does Ali have any children?
?- parent(ali,_).

Atoms and Numbers

Atoms and variables are built from the following characters:
Upper-case letters: A, B, …, Z
Lower-case letters: a, b, …, z
Digits: 0, 1, …, 9
Special characters: +, -, <, >, /, *, =, :, ., _, ~

Atoms can be constructed in three ways:
Strings of letters, digits and the underscore character, starting with a lower-case letter:
anna
x25
x_
x__y


Strings of special characters:

.:.
::=
Strings of characters enclosed in single quotes:
'Tom'
'South_America'
'Sarah Jones'

Numbers used in Prolog include integer numbers and real numbers.
1
-97
-0.035
Using real numbers may cause numerical errors due to rounding; because of this, they are not used very often.


An example of rounding error with real numbers (the output below is from an implementation with low-precision floats):
sum(X,Y,Z):-Z is X+Y.

?- sum(2,0.3,C).
C = 2.3 ;
No
?- sum(200,0.0003,C).
C = 200.0 ;
No
The evaluation of the expression "10000 + 0.0001 - 10000" may result in 0 instead of the correct result 0.0001.
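The same effect can be demonstrated directly in SWI-Prolog, where floats are IEEE double-precision numbers:

?- X is 0.1 + 0.2.
X = 0.30000000000000004.

?- 0.1 + 0.2 =:= 0.3.
false.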

First Program

Rules:
In order to work, people must be full and delighted.
work(X) :- delighted(X), full(X).
In order to be full, people must eat something.
full(X) :- ate(X).
In order to be delighted, people must sleep and have fun.
delighted(X) :- slept(X), fun(X).
Facts:
Mustafa ate.
ate(mustafa).
Mustafa slept.
slept(mustafa).
Mustafa had fun.
fun(mustafa).
Query:
Can Mustafa work?
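Putting the three rules and three facts above into one file and consulting it, the query succeeds, since delighted(mustafa) follows from slept and fun, and full(mustafa) follows from ate:

?- work(mustafa).
yes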




Facts:
eats(fred,oranges).
eats(tony,apple).
eats(john,apple).
Queries:
?- eats(fred,oranges). /* yes */
?- eats(john,apple).   /* yes */
?- eats(mike,apple).   /* no */

Atoms

likes(john,mary).
In the above fact, john and mary are two atoms.
Atoms are usually made from letters and digits, beginning with a lower-case letter.
Atoms can also legally be made from symbols. The following are legal atoms:
atoms
hello
zz42
two_words
====>
'two words'


The following are not legal atoms:
Hello
4hello
_Hello
two words
two-word
The fact likes(john,mary). says that there is a relation between john and mary. By itself it can be read as either "john likes mary" or "mary likes john"; which reading is intended is a convention chosen by the programmer.

Rules

Prolog clauses such as
"offspring(Y,X) :- parent(X,Y)."
are called rules.
There is a difference between facts and rules.
A fact, such as parent(tom,liz)., is unconditionally true.
A rule has two parts: a condition part (the body, to the right of :-) and a conclusion part (the head, to the left of :-).
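The two parts can be labelled directly on the rule shown above:

offspring(Y, X)  :-  parent(X, Y).
%     head       if      body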

Facts with Arguments

A general model of a fact with arguments:
relation(<argument1>, <argument2>, ..., <argumentN>).
The arguments can be any legal Prolog term.
The basic Prolog terms are an integer, an atom, a variable or a structure. Various Prolog implementations enhance this basic list with other data types, such as floating-point numbers or strings.
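A small illustration (employee/3 and its argument layout are hypothetical, chosen only to show the different term types):

% arguments: an atom, an integer, and a structure
employee(mary, 34000, date(1, june, 2005)).
employee('Tom Smith', 28500, date(15, january, 2008)).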

Simple Facts

In Prolog we can make statements by using facts. Facts either consist of a particular item or a relation between items. For example, we can represent the fact that it is sunny by writing the program:
sunny.

We can now ask a query of Prolog by asking
?- sunny.

?- is the Prolog prompt. To this query, Prolog will answer yes. sunny is true because (from above) Prolog matches it in its database of facts.

Facts have some simple rules of syntax.
Facts should always begin with a lowercase letter and end with a full stop.
The facts themselves can consist of any letter or number combination, as well as the underscore _ character.
However, names containing the characters -,+,*,/, or other mathematical operators should be avoided.

SWI-Prolog

SWI-Prolog is an open-source implementation of the programming language Prolog, commonly used for teaching and semantic web applications, whose development was started by Jan Wielemaker in 1987.
SWI-Prolog runs on Unix, Windows, and Macintosh platforms.
SWI-Prolog can be downloaded from “http://www.swi-prolog.org”.


The program, sometimes called the database, is a text file (*.pl or *.pro) that contains the facts and rules to be used by the user of the program; it contains all the relations that make up the program.
First you have to create this facts-and-rules file, which can be done with any text editor, such as Notepad.
When you launch SWI-Prolog you are in query mode.
This mode is indicated by the ?- sign at the beginning of the line.
In query mode you ask questions about the relations described in the program.

You can load the facts-and-rules file by double-clicking it, by using the "Consult" item of the "File" menu after starting SWI-Prolog, or by typing the file name in brackets after the ?- sign.
After the file is loaded, all of its rules and facts are stored in Prolog.
You can now use queries.

What is Prolog?

Prolog is a logic programming language, which has an association with artificial intelligence and computational linguistics.
It was designed in 1972 by Alain Colmerauer, a French computer scientist.
Prolog's major implementations: BProlog, ECLiPSe, Ciao Prolog, GNU Prolog, Quintus, SICStus, Strawberry, SWI-Prolog, YAP-Prolog, tuProlog.
Prolog's dialects: ISO Prolog, Edinburgh Prolog.


Having its roots in formal logic, and unlike many other programming languages, Prolog is declarative.
The program logic is expressed in terms of relations, and execution is triggered by running queries over these relations.
Relations and queries are constructed using Prolog's single data type, the term.
Relations are defined by clauses.

Clauses are statements about what is true of a problem, rather than instructions for how to compute the solution; the Prolog system uses the clauses to work out how to accomplish the solution by searching through the space of possible solutions.

Classification of Strategies for State Space Search

Towards the solution:
Complete – the strategy is guaranteed to find a solution path, provided such a path exists:
Admissible – guaranteed to find the cheapest solution path;
Defensible – guaranteed to find the shortest solution path;
Concessible – not guaranteed to find the cheapest or the shortest solution path, although it may occasionally do so;
Incomplete


Towards the use of heuristics:
Blind (exhaustive, uninformed, naive) – explore the state space systematically, without any "suspicion" of where the final state might be or of the direction of the search towards it:
Breadth-first search;
Depth-first search (depth-first search with leap-frogging, or just depth-first search; depth-first search with backtracking, or just backtracking);
Uniform-cost search;
Iterative deepening;
Etc.
Heuristic (informed) – exploit state descriptions to select the "most promising" node.

Example 3: 8-Tiles Puzzle

States? – A description which specifies the location of each of the 8 tiles (a simple 3x3 array). For efficiency, it is useful to also record the location of the blank.
Initial state? - Any state can be initial
Moves?
move the blank up
move the blank down
move the blank left
move the blank right
Final state? - Any state can be final
Arc costs? – 1
9! = 362,880 possible states (one for each arrangement of the 8 tiles and the blank on the 3x3 board)
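One of the four moves can be sketched in Prolog as follows (assumptions: the state is a list of nine cells in row-major order and the integer 0 marks the blank; move/3, swap/4 and set_nth0/4 are illustrative helpers, not built-ins):

% move(left, +State, -Next): slide the blank one cell to the left.
move(left, State, Next) :-
    nth0(I, State, 0),        % locate the blank
    I mod 3 > 0,              % blank is not in the leftmost column
    J is I - 1,
    swap(I, J, State, Next).

% swap the elements at positions I and J
swap(I, J, State, Next) :-
    nth0(I, State, A),
    nth0(J, State, B),
    set_nth0(I, State, B, Tmp),
    set_nth0(J, Tmp, A, Next).

set_nth0(0, [_|T], X, [X|T]).
set_nth0(N, [H|T], X, [H|T2]) :-
    N > 0,
    N1 is N - 1,
    set_nth0(N1, T, X, T2).

For example:
?- move(left, [1,2,3,4,0,5,6,7,8], S).
S = [1, 2, 3, 0, 4, 5, 6, 7, 8].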

Example 1: Missionaries and Cannibals Problem

N (where N >= 1) missionaries and N cannibals arrive on the left bank of a river. They have a rowing boat with them, which will be used for transporting them across the river to the right bank. Each of them knows how to row, but there are two constraints:
The boat, which requires one person to row, cannot carry more than two persons at a time; and
If the number of missionaries on a bank is one or more, the number of cannibals on that bank cannot be more than the number of missionaries on that bank.
How are the missionaries and the cannibals to be transported across the river?


To set up the state space of this problem we must find a suitable formal representation of:

State – a triple <α1, α2, α3>, where
α1 – the persons on the left bank;
α2 – the persons on the right bank;
α3 – the boat position (L – the boat is on the left bank; R – the boat is on the right bank).
If m denotes missionaries and c cannibals, the initial state is <NmNc, 0, L> and the final state is <0, NmNc, R>.


To set up the state space of this problem we must find a suitable formal representation of:
Moves


There are five alternative moves:
1m – send one missionary in the boat to sail
2m – … two missionaries…
1m1c – … one missionary and one cannibal…
1c – … one cannibal…
2c – … two cannibals…
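The second constraint from the problem statement translates directly into Prolog (safe_bank/2 and safe/4 are illustrative predicates; the argument convention, missionaries then cannibals per bank, is an assumption):

% a bank with M missionaries and C cannibals is safe
safe_bank(0, _).                       % no missionaries: any number of cannibals
safe_bank(M, C) :- M > 0, M >= C.      % otherwise missionaries must not be outnumbered

% a state is safe when both banks are safe
safe(LM, LC, RM, RC) :-
    safe_bank(LM, LC),
    safe_bank(RM, RC).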

State Space Representation of Problems

Explicitly – all states and arrows between them (suitable for small state spaces)
Implicitly – definition of the initial state and the set of allowed moves; it is a potential graph

State Space Search Revised: Introduction

Many AI problems can be represented in terms of state-space theory. State-space search techniques provide the conceptual backbone of almost every approach to the systematic exploration of alternatives. They play a key role in many historically important achievements in areas such as:
Computer games (puzzles, chess, strategic games, etc.) – the IBM program Deep Blue beat Kasparov at chess (1997);
Planning and scheduling;
Natural language processing – the program PEGASUS performs speech recognition in real time for travel reservations;
Automated reasoning and theorem proving – the program EQP solved a problem that had stood unsolved for 60 years (1996);
Computer vision and robotics – "No hands across America" – autonomous car control from Pittsburgh to San Diego;
Expert systems – the real-time expert system MARVEL monitors the huge data stream sent from the Voyager spacecraft, handles routine tasks, and raises alarms for serious problems;
Etc.

Backtracking. Idea

Status of the state:
Partly expanded – we have generated at least one of its children, but not all of its children;
Fully expanded – we cannot generate any more children;

The key moment in the idea of this strategy:
We generate one child at a time only!


Characterization of Depth-First Search

Stack
Memory – a linear function of the path length
For finite graphs – a complete strategy
An incomplete strategy for graphs with infinite branches (a frequent case)
Depth-bound algorithms
Cost-bound algorithms
Efficient when many branches are of approximately equal length
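A minimal depth-bounded depth-first search sketch in Prolog, assuming a problem-specific successor relation s/2 and a test goal/1 (neither is defined here); the recursion stack plays the role of the stack mentioned above, and memory use is linear in the depth bound:

% dfs(+Node, +Bound, -Path): Path is a solution path of length at most Bound.
dfs(Node, _Bound, [Node]) :-
    goal(Node).
dfs(Node, Bound, [Node|Path]) :-
    Bound > 0,
    s(Node, Next),
    Bound1 is Bound - 1,
    dfs(Next, Bound1, Path).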

Breadth-First Search Algorithm

procedure breadth_first_search;
begin
  OPEN := [START];
  CLOSED := [ ];

  while OPEN <> [ ] do
  begin
    X := dequeue(OPEN);
    if final(X) then
      return(success)
    else
    begin
      add X to CLOSED;
      find the successors of X;
      kill the successors of X that are in CLOSED or OPEN;
      enqueue the surviving successors of X into OPEN;
    end; {else}
  end; {while}

  return(failure);
end; {breadth_first_search}
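The same algorithm can be sketched in Prolog (again assuming s/2 and goal/1; the OPEN list holds reversed paths rather than bare states so the solution path can be reconstructed, and for brevity new states are checked against CLOSED and the current path only, not the whole OPEN list):

bfs(Start, Path) :-
    bfs_queue([[Start]], [], Path).

% bfs_queue(+Open, +Closed, -Path)
bfs_queue([[Node|Rev]|_], _Closed, Path) :-
    goal(Node),
    reverse([Node|Rev], Path).
bfs_queue([[Node|Rev]|Rest], Closed, Path) :-
    findall([Next, Node|Rev],
            ( s(Node, Next),
              \+ member(Next, Closed),
              \+ member(Next, [Node|Rev])
            ),
            Children),
    append(Rest, Children, Open1),     % enqueue at the back: FIFO order
    bfs_queue(Open1, [Node|Closed], Path).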


Data structures used

START – initial state
GOAL – a final state, or a function final()
X – current state explored
OPEN – list of reached but not yet explored states: the frontier, or fringe
CLOSED – set of already explored states

Blind State Space Search : General Idea

We will be looking here for one solution only

Expand a state (find its successors / descendants / children / adjacent states)
Forward strategy
Backwards strategy
Bidirectional search

Exit:
A final state has been reached [Success]
All reachable states have been explored and no solution has been found [Failure]

After success – usually reconstruct the solution path

Agent types

Four basic types in order of increasing generality:

Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents

Environment types

Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.

Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic)

Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself. The current action does not depend on the history, and it does not affect future actions.


Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does)

Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions.

Single agent (vs. multiagent): An agent operating by itself in an environment.

Interactive English tutor

Performance measure: Maximize student's score on test
Environment: Set of students
Actuators: Screen display (exercises, suggestions, corrections)
Sensors: Keyboard

Part-picking robot

Performance measure: Percentage of parts in correct bins
Environment: Conveyor belt with parts, bins
Actuators: Jointed arm and hand
Sensors: Camera, joint angle sensors

Medical diagnosis system

Performance measure: Healthy patient, minimize costs, lawsuits
Environment: Patient, hospital, staff
Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)
Sensors: Keyboard (entry of symptoms, findings, patient's answers)

PEAS

PEAS: Performance measure, Environment, Actuators, Sensors
To design an intelligent agent, we must first specify the setting in terms of these four components.

Rational agents

An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful

Performance measure: An objective criterion for success of an agent's behavior

E.g., performance measure of a vacuum-cleaner agent could be amount of dirt cleaned up with penalty for amount of time taken, amount of electricity consumed, amount of noise generated, etc.

Better: One point for a clean square at each time unit

Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Rationality is distinct from omniscience (all-knowing with infinite knowledge)
Rationality <> Perfection



‘Looking’ actions - information gathering, exploration


Learning:
In an environment that is not completely known, the agent should improve its behavior through learning; not all of its information comes from the designer.

An agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt)

Intelligent Agents

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators

Human agent:
Sensors: eyes, ears, and other organs
Actuators: hands, legs, mouth, and other body parts
Robotic agent:
Sensors: cameras, infrared range sensors, etc.
Actuators: various motors

State of the art

IBM’s Deep Blue defeated the reigning world chess champion Garry Kasparov in 1997
Proved a mathematical conjecture (Robbins conjecture) unsolved for decades
No hands across America project (driving autonomously 98% of the time from Pittsburgh to San Diego)
During the 1991 Gulf War, US forces deployed an AI logistics planning and scheduling program that involved up to 50,000 vehicles, cargo, and people
NASA's on-board autonomous planning program controlled the scheduling of operations for a spacecraft
Proverb solves crossword puzzles better than most humans

Abridged history of AI

1943 McCulloch & Pitts: Boolean circuit model of brain based on artificial neurons
1950 Turing's "Computing Machinery and Intelligence" – Turing test, subareas, machine learning, genetic algorithms

1952—69 Early enthusiasm, great expectations
1956 Dartmouth meeting: the phrase "Artificial Intelligence" adopted (John McCarthy, Minsky, Shannon, Newell/Simon, …); the Logic Theorist program
Next 20 years: dominated by MIT, CMU, Stanford, IBM
1950s Early AI programs, including Samuel's checkers program (learned to play better than its inventor), Newell & Simon's General Problem Solver, Gelernter's Geometry Engine
1958 Lisp – McCarthy at MIT
1965 Robinson's complete algorithm for logical reasoning


Focus on Strong AI – solving math and logic problems, engaging in dialogues
Focus on Top-down approach – simulate concepts of human brain (planning, reasoning, language understanding, …)
Bottom-up approaches emerge – model low-level concepts (neurons, learning at much lower level, …)

Traditional top-down approach = Good Old-Fashioned AI (GOFAI)
Neat / Scruffy approaches

Neat: formal, pure, provable approach
Scruffy, messy: less provable but still yielding useful and significant results


AI's Winter (a dose of reality): 1966–1973, mid-1980s, now?
Predictions did not materialize
Simon (1957): within 10 years a computer would be chess champion and a major theorem would be proven. It actually took about 40 years.
Domain knowledge is needed (e.g., automatic translation: 'the spirit is willing but the flesh is weak' <> 'the vodka is good but the meat is rotten'); US government funding for automatic translation projects was cancelled.
AI discovers computational complexity – intractable problems.
Neural network research almost disappears (small mutations of programs …)
Negative results, e.g. Minsky and Papert's Perceptrons (1969)


1969—79 Early development of knowledge-based systems
Early AI based on general-purpose search mechanisms. Weak methods – do not scale up. Alternative – use powerful domain-specific knowledge base
DENDRAL (the first knowledge-intensive system) – infers molecular structure from mass-spectrometer information
Knowledge-based systems, expert systems; intensive work on knowledge representation

1973 Prolog (in France)


1980-present Results-oriented applications
Expert systems
NL systems


1981 Fifth Generation project (Japan), MCC (US) – never met their ambitious goals but lots of useful results, chips, …


1986– AI re-emerges
Strong AI – goal to emulate the full range of human cognitive capabilities
Weak AI – solve specific problems
Now focus on Weak AI, more realistic goals
Neural networks return to popularity
Connectionist models
Fuzzy logic, fuzzy controllers
Speech recognition and generation
Numerous applications
Artificial Life systems
Artificial Immune systems
Algorithms transition from AI algorithms to standard algorithms once they become practically useful


1987-- AI becomes a science
Build on existing theories rather than invent new
Real-world applications
A proper scientific method – hypotheses are subjected to rigorous experiments and statistical analysis of results

Data mining
Gentle revolutions in robotics, computer vision, knowledge representation


1995-- The emergence of intelligent agents
Complete agent architecture
Web bots
1999 NASA – Deep Space I spacecraft – an agent to provide autonomy of spacecraft for limited durations of time

AI prehistory

Philosophy
Mathematics
Economics
Neuroscience
Psychology
Computer engineering and computer science
Control theory and cybernetics
Linguistics

Acting rationally: rational agent

Rational behavior: doing the right thing

The right thing: that which is expected to maximize goal achievement, given the available information

Doesn't necessarily involve thinking – e.g., blinking reflex – but thinking should be in the service of rational action


An agent is an entity that perceives and acts

Intelligent agents – more than mere programs – attributes such as operating autonomously, perceiving their environment, adapting to change

Uncertainty may be involved

For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance

Computational limitations make perfect rationality unachievable



Logic reasoning is needed but not enough
Correct inference is just one of several possible mechanisms for achieving rationality
All the skills from the Turing test are needed

Thinking rationally: "laws of thought"

Aristotle: what are correct arguments? Those that yield correct conclusions when given correct premises.
Several Greek schools developed various forms of logic: notation and rules of derivation
Direct line through mathematics and philosophy to modern AI: logicist tradition

1965 – programs that could, in principle, solve any solvable problem described in logic notation


Problems:
Not easy to take informal knowledge and state it in formal terms (particularly uncertain knowledge)
Solvable in principle <> practically solvable
Not all intelligent behavior is mediated by logical deliberation
What is the purpose of thinking? What thoughts should I have?

Thinking humanly: cognitive modeling

The General Problem Solver (Newell and Simon) – what mattered was not just whether the problem was solved correctly, but whether the steps corresponded to the way humans think

Cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the working of the human mind
How to validate? This requires:
1) Predicting and testing the behavior of human subjects (top-down), or
2) Direct identification from neurological data (bottom-up).
Both approaches (roughly, Cognitive Science and Cognitive Neuroscience) are now distinct from AI

Acting humanly: Turing Test

Turing (1950) "Computing machinery and intelligence":
"Can machines think?" "Can machines behave intelligently?"
Operational test for intelligent behavior: the Imitation Game

Predicted that by 2000, a machine might have a 30% chance of fooling a lay person for 5 minutes
Anticipated all major arguments against AI in following 50 years

Turing suggested the major components (areas) of AI:
Knowledge representation
Automated reasoning (use the stored knowledge and draw new conclusions)
Machine learning (detect patterns, adapt to circumstances)
Natural language processing (to communicate)
For physical simulation of human also:
Computer vision (perceive objects)
Robotics (manipulate objects, move about)
Full Turing test

Introduction to Artificial Intelligence

For thousands of years we have tried to understand how we think
AI goes further: not only understand but also build intelligent systems


Artificial intelligence = machine intelligence = computational intelligence = intelligent systems