2. What is intelligence?
• Rational agent model
– Choosing among alternatives so as to maximize
achievement of goals within time and other resource constraints
• Ability to make accurate (enough) predictions
• Requires
– Ability to receive and process information
– Remember
– Learn and abstract from information
– Model
– Plan
– Act
– Evaluate progress
3. Strong AI vs Weak/Narrow AI
• Strong AI is general artificial intelligence
– In principle able to learn and act intelligently in a
broad general range, as humans can.
• Narrow AI
– Constrained in problem sets / domains
– Set of techniques for intelligent decisions / actions
– Ubiquitous across many software systems
– Does not attempt to solve the problem of general
intelligence
– Most AI today is narrow AI
5. Knowledge Acquisition
• Input Modalities
– Senses
• Vision
• Hearing
• Data communication
• Touch
• Accelerometers
• Other tech..
– Text/Video
• Linear modalities
• Speech recognition
• Natural language Processing
– Preassembled knowledge / data structures
6. Memory
• Temporal Memory
– Crucial to temporal reasoning
• Cause and effect inference
• Prediction
• Factual Memory
– Searchable fact stores
– Enabling inference
• Associative Memory
– Association between memories. How are memories and inferences strengthened or weakened
by new memories?
• Memory Trimming
– What is the proper tradeoff between detail and size/speed? How is saliency determined for
current and future goals? How does the memory structure cache and prune over time?
• Learning
– What can be inferred or generalized?
– What patterns and abstractions subsuming many facts and saving resources can be garnered?
• Search and Retrieval
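A searchable fact store of the kind described above can be sketched in a few lines; the class and the triples below are illustrative, not from the source.

```python
# Minimal sketch of a searchable fact store: facts are
# (subject, relation, object) triples, and retrieval matches a
# pattern in which None acts as a wildcard.

class FactStore:
    def __init__(self):
        self.facts = set()

    def add(self, subject, relation, obj):
        self.facts.add((subject, relation, obj))

    def query(self, subject=None, relation=None, obj=None):
        """Return all facts matching the pattern; None matches anything."""
        return [f for f in self.facts
                if (subject is None or f[0] == subject)
                and (relation is None or f[1] == relation)
                and (obj is None or f[2] == obj)]

store = FactStore()
store.add("tweety", "is_a", "canary")
store.add("canary", "is_a", "bird")
print(store.query(relation="is_a", obj="bird"))  # [('canary', 'is_a', 'bird')]
```

Real systems add indexing so queries do not scan every fact, which is where the detail-versus-speed tradeoffs above come in.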
7. Knowledge Representation
• Fundamental Goal
– Represent knowledge in a manner facilitating efficient, accurate
retrieval and reasoning
• Categories and Objects
– Categorization of objects is a basic central abstraction form and
greatly enhances efficiency
• Events
• Mental events and objects
• Reasoning Systems for Categories
– Semantic networks
– Description logics
• Reasoning with defaults
– Default facts are specified at category level and inherited
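Category-level defaults with inheritance can be sketched as follows; the hierarchy and attribute names are illustrative (the classic "penguins don't fly" exception).

```python
# Default facts attached to categories, inherited by members, and
# overridable at more specific category levels.

categories = {
    "bird":    {"parent": None,   "defaults": {"flies": True}},
    "penguin": {"parent": "bird", "defaults": {"flies": False}},
    "robin":   {"parent": "bird", "defaults": {}},
}

def lookup(category, attribute):
    """Walk up the hierarchy; the most specific default wins."""
    while category is not None:
        defaults = categories[category]["defaults"]
        if attribute in defaults:
            return defaults[attribute]
        category = categories[category]["parent"]
    return None

print(lookup("robin", "flies"))    # True  (inherited from bird)
print(lookup("penguin", "flies"))  # False (overridden)
```

This is the inheritance mechanism semantic networks and description logics formalize.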
9. KR - Events
• Event = (fluent, time)
– A fluent is a condition that varies over time; an event
asserts it at a particular time or time period.
– So KR events are statements about facts
concerning referents in time
– Time-intervals and interval reasoning
– Time varying knowledge
• e.g., President(USA, time)
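The President(USA, time) fluent above can be sketched as a set of value/interval pairs queried by time:

```python
# Time-varying knowledge: a fluent such as President(USA, t) stored as
# (value, start_year, end_year) intervals.

president_usa = [
    ("Clinton", 1993, 2001),
    ("Bush",    2001, 2009),
    ("Obama",   2009, 2017),
]

def holds(fluent_intervals, year):
    """Return the value of the fluent at the given time, if any."""
    for value, start, end in fluent_intervals:
        if start <= year < end:
            return value
    return None

print(holds(president_usa, 2005))  # Bush
```

Interval reasoning generalizes this lookup to relations between intervals (before, during, overlaps, and so on).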
10. KR – Mental Events
• Beliefs
– Beliefs are internal states of the AI
• Subject to change with changing information
• Self-modeling
• internal state
• Knowledge about internal state of other
agents
• Modal logic
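Modal operators are standardly evaluated over possible worlds; a minimal Kripke-style sketch (worlds, propositions, and accessibility below are illustrative):

```python
# "Necessarily p" holds at a world if p is true in every accessible
# world; "possibly p" if p is true in at least one.

worlds = {
    "w1": {"john_happy": True},
    "w2": {"john_happy": True},
    "w3": {"john_happy": False},
}
accessible = {"w1": ["w1", "w2", "w3"]}  # worlds reachable from w1

def necessarily(world, prop):
    return all(worlds[w].get(prop, False) for w in accessible[world])

def possibly(world, prop):
    return any(worlds[w].get(prop, False) for w in accessible[world])

print(possibly("w1", "john_happy"))     # True  (happy in some accessible world)
print(necessarily("w1", "john_happy"))  # False (not happy in every one)
```

The same machinery models belief: the accessible worlds are those consistent with what an agent believes.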
11. Inference
• Logical Inference
– Deduction
• Derives b from a where b is a formal consequence of a. Deriving consequences of what is
known or assumed.
– Induction
• Reasons from experience to a hypothesis generalizing that experience; a kind of
“jumping to conclusions”. Does not guarantee accuracy.
– Abduction
• Seeks plausible explanations or necessities for the facts to be as they are.
• Backward chaining from sought result to possible evidence.
• Backward chaining
– Starting from a goal and looking for conditions that support / infer that goal
recursively until known facts or sufficiently strong beliefs are found.
• Forward chaining
– A form of deductive inference chaining that may or may not converge on a
goal.
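Both chaining directions can be sketched over propositional Horn rules; the rules and facts below are illustrative.

```python
# Each rule is (set of premises, conclusion).
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]

def forward_chain(facts, rules):
    """Fire any rule whose premises are all known until nothing new appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Check whether the goal is a known fact, or is concluded by a rule
    whose premises can all themselves be established, recursively."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

known = {"has_feathers", "lays_eggs", "can_fly"}
print("can_migrate" in forward_chain(known, rules))  # True
print(backward_chain("can_migrate", known, rules))   # True
```

Note the asymmetry: forward chaining derives everything derivable, while backward chaining explores only what the goal requires (and, as sketched, assumes acyclic rules).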
12. Goal Seeking
• Intelligence is difficult to speak of without goals: that which a
system seeks
• Goal seeking requires
– Goal formulation
– Problem formulation of what actions to consider in light of state and
goal
• Considers cost, likelihood of success of each possible action
– Performance measure
• An intelligent agent optimizes its performance measure
• Examples
– Touring problems
– Traveling salesman,
– Robot navigation
– Assembly sequencing
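The traveling-salesman example makes the pieces concrete: the goal is a shortest closed tour, and the performance measure is total tour length. A brute-force sketch on an illustrative four-city instance:

```python
# Exhaustive search over tours: optimal but factorial in the number of
# cities, which is why heuristics matter for larger instances.
from itertools import permutations

dist = {  # symmetric distances between four example cities
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
}

def d(x, y):
    return dist.get((x, y)) or dist[(y, x)]

def tour_length(tour):
    """Total length of the closed tour, returning to the start city."""
    return (sum(d(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
            + d(tour[-1], tour[0]))

# Fix the start city and try every ordering of the rest.
best = min(permutations("BCD"), key=lambda p: tour_length(("A",) + p))
print(("A",) + best, tour_length(("A",) + best))  # ('A', 'B', 'D', 'C') 80
```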
13. Basic Simple Search Techniques
• Exhaustive
– Can be exponential from combinatorial explosion but will find a solution if one exists
• Uninformed search
– No information for preferring one choice over another at each point
– Breadth-first, uniform-cost, depth-first, depth-limited, iterative deepening,
bidirectional
• Informed (heuristic) search
– Greedy, best-first
– A* search
• Combines cost to reach node with estimated distance from node to goal
– Memory bound heuristic search (combining iterative deepening with A*)
– Heuristic Sources
• Relaxing problem constraints
• Subproblem recognition from pattern database
• From experience
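A* itself fits in a short sketch; the grid, walls, and Manhattan-distance heuristic below are illustrative.

```python
# A* on a small grid: f(n) = g(n) + h(n) combines the cost so far with
# an admissible estimate (Manhattan distance) of the remaining cost.
import heapq

def astar(start, goal, walls, size=5):
    frontier = [(0, start)]          # priority queue ordered by f = g + h
    g = {start: 0}                   # best known cost to each cell
    while frontier:
        _, (x, y) = heapq.heappop(frontier)
        if (x, y) == goal:
            return g[(x, y)]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in walls:
                cost = g[(x, y)] + 1
                if cost < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = cost
                    h = abs(nx - goal[0]) + abs(ny - goal[1])  # heuristic
                    heapq.heappush(frontier, (cost + h, (nx, ny)))
    return None  # goal unreachable

print(astar((0, 0), (4, 4), walls={(1, 1), (2, 2), (3, 3)}))  # 8
```

Manhattan distance is exactly the "relaxed problem" heuristic from the list above: it is the true cost if the wall constraint is dropped.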
14. More sophisticated search
• Optimization problems
• Best state according to objective function (global maximum or minimum)
• Hill climbing search
• Simulated annealing
• Local beam search
• Genetic algorithm
– Successor states generated by combining two or more states with modification
• Continuous space searches
• Searching with nondeterministic actions
– And-or trees
• Each node/action has several possible outcomes or a range of outcomes (the “and” branches)
• Searching with incomplete perception
• Online search problems
– Real travel cost not just computational for each node traversed
• Depth first is best choice often
• Hill climbing is also workable
– Learning a map of the environment as it goes is important
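Hill climbing, the simplest of the optimization searches above, can be sketched as follows; the objective function and neighborhood are illustrative.

```python
# Move to the best neighbor until no neighbor improves the objective.
# Simple and memory-light, but it can stop at a local maximum, which is
# what simulated annealing and beam search try to escape.

def hill_climb(start, objective, neighbors):
    current = start
    while True:
        best = max(neighbors(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current  # no improving neighbor: (local) maximum
        current = best

# Objective with a single peak at x = 3 over the integers 0..10.
f = lambda x: -(x - 3) ** 2
nbrs = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 10]
print(hill_climb(9, f, nbrs))  # 3
```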
15. Machine Learning Algorithm Types
• Supervised
– Input data is pre-labeled as to appropriate results. The learner approximates
the labeling function.
• Unsupervised
– Models set of inputs, like clustering
• Semi-supervised
– Combines labeled and unlabeled samples
• Reinforcement
– Learns from feedback (reward or penalty) resulting from each attempt/guess.
Often combined with neural nets
• Transduction
– Tries to predict new outputs based on training inputs/outputs and on test
inputs
• Learning to learn
– Learns its own inductive bias based on experience
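Supervised learning in miniature: a 1-nearest-neighbour classifier approximates the labeling function from pre-labeled samples. The training data below is illustrative.

```python
# Pre-labeled samples: (feature vector, label).
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((5.0, 5.0), "large"), ((4.8, 5.2), "large")]

def classify(point):
    """Label a new point with the label of its nearest training sample."""
    def dist2(sample):
        fx, fy = sample[0]
        return (fx - point[0]) ** 2 + (fy - point[1]) ** 2
    return min(train, key=dist2)[1]

print(classify((1.1, 0.9)))  # small
print(classify((4.9, 5.1)))  # large
```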
In artificial intelligence, the labels "neats" and "scruffies" refer to one of the continuing philosophical disputes in AI research, over a serious question: what is the best way to design an intelligent system? Neats consider that solutions should be elegant, clear, and provably correct. Scruffies believe that intelligence is too complicated (or computationally intractable) to be solved with the sorts of homogeneous systems such neat requirements usually mandate.
http://artificial-intuition.com/index.html
Depending on the model that AI development takes, an AI may or may not be able to transfer its knowledge to another AI. But the entire AI is likely much easier to fully copy.
http://artificial-intuition.com/index.html
The Cyc project attempted to input millions of “common sense” facts to support AI common sense. The project largely failed in its main goal but did produce a corpus for possible future work. A deep weakness of this approach is encoding millions of factoids in formal logic instead of an organic understanding of interrelated concepts. It is not clear that attempting to introspect this interrelation and encode it in such formal terms is possible.
General important categories include Events, Times, Physical Objects, Beliefs. The general field of creating representations for these sorts of things is sometimes called Ontological Engineering.
Modal logic is a type of formal logic that includes modalities like possibility, belief, and frequency. Consider the difference between “John is happy” and “John is usually happy”.
Inference is reaching conclusions, not present in the known facts, from what is known. It is also a basis for abstraction / concept formation.
Inductive reasoning (also known as induction or inductive logic, or "educated guess" in colloquial English) is a kind of reasoning that constructs or evaluates inductive arguments. The premises of an inductive argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it. Inductive reasoning allows for the possibility that the conclusion is false even where all of the premises are true. For example: "All of the swans we have seen are white; therefore all swans are white."
Of the candidate systems for an inductive logic, the most influential is Bayesianism. As a logic of induction rather than a theory of belief, Bayesianism does not determine which beliefs are a priori rational, but rather how we should rationally change the beliefs we have when presented with evidence. We begin by committing to a (really any) hypothesis, and when faced with evidence, we adjust the strength of our belief in that hypothesis in a precise manner using Bayesian logic.
http://en.wikipedia.org/wiki/Deductive_reasoning
http://en.wikipedia.org/wiki/Induction
http://en.wikipedia.org/wiki/Abduction_(logic)
http://en.wikipedia.org/wiki/Abductive_reasoning#Deduction.2C_induction.2C_and_abduction
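The Bayesian updating described in the note can be sketched numerically; the prior and likelihoods below are illustrative, not from the source.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where P(E) sums over H and not-H.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / evidence

# Hypothesis H: "all swans are white". A white-swan sighting is certain
# under H (likelihood 1.0) and merely likely (0.9) under not-H.
belief = 0.5                       # initial commitment to H
for _ in range(5):                 # five white-swan observations
    belief = bayes_update(belief, 1.0, 0.9)
print(round(belief, 3))
```

Each observation strengthens the belief, but it never reaches 1: a single black swan (likelihood 0 under H) would collapse it to 0, which is the inductive fallibility the swan example illustrates.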
http://en.wikipedia.org/wiki/Inductive_bias
Decision tree learning uses a decision tree as a predictive model which maps observations about an item to conclusions about the item's target value. Association rule learning is a method for discovering interesting relations between variables in large databases.
http://en.wikipedia.org/wiki/Decision_tree_learning
http://en.wikipedia.org/wiki/Association_rule_learning
http://en.wikipedia.org/wiki/Artificial_neural_network
http://en.wikipedia.org/wiki/Genetic_programming
http://en.wikipedia.org/wiki/Inductive_logic_programming
http://en.wikipedia.org/wiki/Support_vector_machines
http://en.wikipedia.org/wiki/Cluster_analysis
http://en.wikipedia.org/wiki/Bayesian_network
http://en.wikipedia.org/wiki/Reinforcement_learning
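A decision tree as a predictive model can be shown in miniature with a hand-built tree; the attributes and target values below are illustrative (learned trees would be induced from labeled data instead).

```python
# A tiny fixed decision tree mapping observations about an item to a
# conclusion about its target value.

def predict(item):
    if item["outlook"] == "sunny":
        # Inner node: test a second attribute.
        return "play" if item["humidity"] == "normal" else "stay_in"
    return "play"  # leaf: non-sunny outlooks all predict "play"

print(predict({"outlook": "sunny", "humidity": "high"}))   # stay_in
print(predict({"outlook": "rainy", "humidity": "high"}))   # play
```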