Non-Procedural Programming Languages CMPUT325

Lecture 01 - Introduction to Functional Programming

Basic Computer Architecture

What Does the Program Look like Mid-execution?

Program State

Updating State

Programming Paradigms

Basic, Fortran: Some of the First

Imperative: Algol → Pascal → C

Object-oriented: Smalltalk, Java, C++, Python

Functional: Lisp, Various Others

Logical: Prolog, Answer Set Programming

Program Concepts

Fun - A Simple Functional Language

Syntax Terminology and Interpreters

f(x, y) = x * x + y

Functions

Higher-order Functions

Types of Objects

Primitive Functions on Lists
Other Primitive Functions

Writing Simple Programs

count(L) = 
    if null(L) then 0
    else 1 + count(r(L))

Lecture 02 - Working with Lists

Implementing Simple List Functions

Member

member(x, L) (abbreviated m)

Append

// list[any_1] list[any_2] -> list[any_1 | any_2]

app(L1, L2) = 
    if null(L1) then L2
    else app(rmlast(L1), cons(last(L1), L2))

rmlast(L) = 
    if null(rest(L)) then nil
    else cons(first(L), rmlast(rest(L)))
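The same append-from-the-end recursion can be sketched in Python (my translation of the Fun definitions; list slicing stands in for first/rest, and the point is to mirror the recursion, not to be an efficient append):

```python
def last(L):
    # last element: recurse until rest(L) is empty
    return L[0] if len(L) == 1 else last(L[1:])

def rmlast(L):
    # all but the last element
    return [] if len(L) == 1 else [L[0]] + rmlast(L[1:])

def app(L1, L2):
    if not L1:                                 # null(L1) -> L2
        return L2
    return app(rmlast(L1), [last(L1)] + L2)    # cons(last(L1), L2)
```

For example, `app([1, 2], [3, 4])` gives `[1, 2, 3, 4]`.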
    

Binary Tree

Lecture 03 - LISP Language Constructs

Accessing List Items

Functions in LISP

(defun fun_name (params...) (body...))

Function Examples

Append
(defun append (L1 L2)
    (if (null L1)
        L2
        (cons (car L1) (append (cdr L1) L2))))
Reverse (using append)
(defun reverse (L) 
    (if (null L) 
        L
        (append (reverse (cdr L)) (cons (car L) nil))))
Cartesian Product
(defun cartesian (L1 L2)
    (if (null L1)
        nil
        (append (pair-all (car L1) L2) 
                (cartesian (cdr L1) L2))))

(defun pair-all (x L)
    (if (null L)
    nil
    (cons (cons x (cons (car L) nil)) 
          (pair-all x (cdr L)))))
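The same recursion, mirrored in Python for checking the behavior (list literals stand in for cons cells; the translation is mine):

```python
def pair_all(x, L):
    # pair x with every element of L, one cons at a time
    return [] if not L else [[x, L[0]]] + pair_all(x, L[1:])

def cartesian(L1, L2):
    # append the pairings of each element of L1 with all of L2
    return [] if not L1 else pair_all(L1[0], L2) + cartesian(L1[1:], L2)
```

For example, `cartesian(["a", "b"], [1, 2])` gives `[['a', 1], ['a', 2], ['b', 1], ['b', 2]]`.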

nil In LISP

let And Scoping

(let ((id_1 value_1) (id_2 value_2) ... ) (expression))

Equality in LISP

Logical Connectives

Conditionals

cond

(cond 
    (P1 x11 ... x1m)
    (P2 x21 ... x2m)
    ...
    (Pn xn1 ... xnm))

if

(if P e1 e2)

Lecture 04 - S-expressions

Dotted Pairs

(car (x . y)) -> x
(cdr (x . y)) -> y

Machine-level Representation

Simplest Form

Storing Function Definitions

; (defun f-name (x y) (body))
'(defun . (f-name . ((x . (y . /)) . (body . /))))

Lecture 05 - Dotted Pairs and Expressions

Dotted Pairs

; IDENTITIES: DOTTED PAIRS

(car (x . y)) -> x
(cdr (x . y)) -> y

Machine-level Representation

Simplest Form

(a . (b c)) ; example s-expression

; Full dotted-pair form
(a . (b . (c . nil)))

; simplest form
(a b c)
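A quick way to see the simplest-form rule is to model cons cells as Python pairs and write a printer that prefers list notation whenever the cdr chain ends in nil (None plays the role of nil; the encoding and helper names are mine):

```python
def cons(a, d): return (a, d)
def car(p): return p[0]
def cdr(p): return p[1]

def to_simplest(s):
    # render an s-expression, using list notation where possible
    if s is None:
        return "nil"
    if not isinstance(s, tuple):
        return str(s)
    items = []
    while isinstance(s, tuple):
        items.append(to_simplest(car(s)))
        s = cdr(s)
    if s is None:                       # proper list: drop the dots
        return "(" + " ".join(items) + ")"
    return "(" + " ".join(items) + " . " + to_simplest(s) + ")"
```

`to_simplest(cons("a", cons("b", cons("c", None))))` gives `"(a b c)"`, while a genuine dotted pair like `cons("a", "b")` still prints as `"(a . b)"`.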

Storing Function Definitions

; (defun f-name (x y) (body))
'(defun . (f-name . ((x . (y . /)) . (body . /))))

Dotted Pairs

Dot syntax: represents car and cdr in list notation, but doesn't require writing it out

Generalizing the Idea of cons

Symbolic Expressions (s-expressions)

Function Definitions

Lecture 06 - Higher Order Functions

Higher-order Functions

Common Higher-order Functions

First-class Functions as a Language Feature

Lecture 07 - λ Calculus I

[[lambda-cal.pdf]]

[[lambda-reductions.pdf]]

Lambda Calculus

Lambda Functions

; LISP syntax: Lambda expression
(lambda (arg1 ... argn) (body ...))

; LISP syntax: Lambda application
((lambda (arg1 ... argn) (body ...)) val1 ... valn)
Internal Representation
; the function special operator applied to a lambda expression
(function (lambda (x) (+ x 1)))

; result
#<FUNCTION (LAMBDA (X)) {11EAF6E5}>

Syntax of the Lambda Calculus

Land of 10,000 Asides

Currying

; greater than function
; no currying
(lambda (x y) (> x y))
; curried version
(lambda (x) (lambda (y) (> x y)))

; curried application
; arguments: arg-x, arg-y
((lambda (x) ((lambda (y) (> x y)) arg-y)) arg-x)
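The same currying pattern works in Python, where nested lambdas behave the same way (this translation is mine):

```python
gt = lambda x, y: x > y            # uncurried greater-than
gt_c = lambda x: lambda y: x > y   # curried version

# curried application: one argument per call
result = gt_c(3)(2)                # same as gt(3, 2)

# partial application falls out for free: "is 3 greater than ...?"
gt_from_3 = gt_c(3)
```

`gt_from_3` can now be passed around and applied later, e.g. `gt_from_3(5)` is `False`.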
Procedure
Application

Reductions in Lambda Calculus

[[lambda-example.pdf|Lambda Reduction Examples]]

Beta-reduction

; LISP expression
((lambda (x) (+ x 1)) a)
; beta-reduced LISP expression
(+ a 1)

Alpha-reduction

Free and Bound Variables

Beta vs. Alpha Use-case

((lambda (x) (lambda (z) (x z))) z)
; both a bound z in the lambdas and a free z as a parameter
; direct substitution without renaming
((lambda (x) (lambda (z) (x z))) z)
; sub x -> z
(lambda (z) (z z))
; naming conflict! big L!
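The conflict above is exactly what a capture-avoiding substitution must detect: when the binder of the body occurs free in the value being substituted, alpha-rename first. A sketch in Python (terms are tuples `("lam", var, body)` / `("app", f, a)`; the encoding and helper names are my own):

```python
import itertools

# fresh variable names for alpha-renaming
fresh = (f"z{i}" for i in itertools.count(1))

def free_vars(term):
    if isinstance(term, str):
        return {term}
    if term[0] == "app":
        return free_vars(term[1]) | free_vars(term[2])
    return free_vars(term[2]) - {term[1]}      # ("lam", v, body)

def subst(term, var, value):
    """Substitute value for var in term, alpha-renaming to avoid capture."""
    if isinstance(term, str):
        return value if term == var else term
    if term[0] == "app":
        return ("app", subst(term[1], var, value), subst(term[2], var, value))
    _, v, body = term
    if v == var:                    # var is shadowed inside this lambda
        return term
    if v in free_vars(value):       # direct substitution would capture v
        v2 = next(fresh)            # alpha-rename the binder first
        body, v = subst(body, v, v2), v2
    return ("lam", v, subst(body, var, value))

# ((lambda (x) (lambda (z) (x z))) z): substitute z for x in the inner lambda
renamed = subst(("lam", "z", ("app", "x", "z")), "x", "z")
```

`renamed` is `('lam', 'z1', ('app', 'z', 'z1'))` rather than the wrong `(lambda (z) (z z))`.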

Lambda Normal Form

Lecture 08 - λ Calculus II, Electric Boogaloo

[[lambda-reductions.pdf]]

Lambda Calculus Cont.

Order of Reduction

Normal Order Reduction (NOR)
Applicative Order Reduction (AOR)

Church-Rosser Theorem

Theorem

  1. If \(A \to B\) and \(A \to C\), then there exists an expression \(D\) such that \(B \to D\) and \(C \to D\)
    • True even if \(A\) doesn't have a normal form
  2. If \(A\) has a normal form \(E\), then there is a normal order reduction \(A \to E\)

where \(\to\) means a sequence of \(0\) or more reduction steps

Church-Rosser Theorem on Wikipedia

Recursive Primitives

Church Encoding on Wikipedia

Primitive Functions

Natural Numbers
| Numeric | Symbolic ("shorthand") | Functional |
| --- | --- | --- |
| \(0\) | \(0\) | \((\lambda sx \mid x)\) |
| \(1\) | \(s(0)\) | \((\lambda sx \mid sx)\) |
| \(2\) | \(s(s(0))\) | \((\lambda sx \mid s(sx))\) |
| \(n\) | \(s(\dots s(0)\dots)\) | \((\lambda sx \mid s(\dots(sx)\dots))\) |
Addition
ADD = (λwzsx | ws(zsx))
Control Flow
T = (λxy | x)
F = (λxy | y)

IF = (λxyz | xyz)
Logical Connectives
NOT = (λx | xFT)

AND = (λxy | xyF)
OR = (λxy | xTy)
Boolean Functions
ZEROT = (λx | x F NOT F)
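These encodings run directly as Python lambdas (Python is strict, but these particular terms still normalize fine; `SUCC` and `to_int` are my additions for building and decoding numerals, the rest mirrors the definitions above):

```python
ZERO = lambda s: lambda x: x
SUCC = lambda n: lambda s: lambda x: s(n(s)(x))              # successor
ADD = lambda w: lambda z: lambda s: lambda x: w(s)(z(s)(x))  # ADD = (λwzsx | ws(zsx))

T = lambda x: lambda y: x
F = lambda x: lambda y: y
NOT = lambda x: x(F)(T)
AND = lambda x: lambda y: x(y)(F)
OR = lambda x: lambda y: x(T)(y)
ZEROT = lambda x: x(F)(NOT)(F)       # zero test

def to_int(n):
    # decode a numeral by counting applications of s
    return n(lambda k: k + 1)(0)

one, two = SUCC(ZERO), SUCC(SUCC(ZERO))
three = ADD(one)(two)
```

Here `to_int(three)` gives `3`, and `ZEROT(ZERO)` reduces to `T` while `ZEROT(two)` reduces to `F`.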

Recursion, Generalized

Combinators
// DEFINITION
Y = (λy | (λx|y(xx)) (λx|y(xx)))

// APPLICATION
Y N = (λy | (λx|y(xx)) (λx|y(xx))) N  
      -> (λx|N(xx)) (λx|N(xx))  
      -> N ((λx|N(xx)) (λx|N(xx)))  
      -> N (N ((λx|N(xx)) (λx|N(xx))))  
      ...
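In a strict language the Y above unfolds forever before the body ever runs; the eta-expanded Z combinator (a standard workaround, not from the notes) wraps the self-application in a lambda so it only unfolds when the recursive call is actually made, and it works in Python:

```python
# Z combinator: x(x) is delayed behind lambda v, so evaluation is lazy enough
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# factorial with no self-reference in its own definition
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

For example, `fact(5)` is `120`.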

Lecture 09 - Context and Closure

[[context-based-intepreter.pdf]]

Context and Closure

Deferred Substitution

Eval Walkthrough

Asides

Function Application in a Context

Asides: let

; let 
(let ((x1 e1) ... (xk ek)) e)
; lambda function equivalent to let
((lambda (x1 ... xk) e) e1 ... ek)
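The same equivalence holds in Python: a let-binding is just an immediately applied lambda.

```python
# (let ((x 1) (y 2)) (+ x y)) as an applied lambda
result = (lambda x, y: x + y)(1, 2)
```

`result` is `3`, exactly as if `x` and `y` had been locally bound.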

Lecture 10 - "Interpreter? I barely know her!"

[[context-based-intepreter.pdf]]

Implementing a Context for Our Interpreter

The eval Function

Evaluation Patterns

Simple Cases

Functions

C = eval[e, n, v]
z = evalList[(e1 ... ek), n, v]

eval[(e e1 ... ek), n, v] ->

eval[
    body(C),
    cons(params(C), names(C)),
    cons(z, values(C))
]

Scoping

z = evalList[(e1 ... ek), n, v]

eval[(let (x1.e1) ... (xk.ek) e), n, v] -> eval[e, cons((x1 ... xk), n), cons(z, v)]
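A runnable sketch of these rules (the tuple encoding of expressions and all names here are my own assumptions): `n` is a list of name frames and `v` the parallel list of value frames, extended together exactly as in the function-application and let rules above.

```python
def lookup(x, n, v):
    # search the parallel frames for identifier x
    for names, values in zip(n, v):
        if x in names:
            return values[names.index(x)]
    raise NameError(x)

def ev(e, n, v):
    if isinstance(e, int):                       # constant
        return e
    if isinstance(e, str):                       # identifier
        return lookup(e, n, v)
    if e[0] == "lambda":                         # ("lambda", params, body)
        return ("closure", e[1], e[2], n, v)     # capture defining context
    if e[0] == "let":                            # ("let", [(x, e), ...], body)
        names = [x for x, _ in e[1]]
        values = [ev(ei, n, v) for _, ei in e[1]]
        return ev(e[2], [names] + n, [values] + v)
    if e[0] == "+":                              # one sample primitive
        return ev(e[1], n, v) + ev(e[2], n, v)
    # application (e e1 ... ek): evaluate e to a closure C, then evaluate
    # body(C) with C's context extended by params(C) -> evaluated args
    _, params, body, cn, cv = ev(e[0], n, v)
    args = [ev(a, n, v) for a in e[1:]]
    return ev(body, [params] + cn, [args] + cv)

# (let ((x 1)) ((lambda (y) (+ x y)) 2))
result = ev(("let", [("x", 1)],
             (("lambda", ["y"], ("+", "x", "y")), 2)), [], [])
```

`result` is `3`: the closure remembers the frame binding `x`, and application adds a frame binding `y`.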

Lecture 11 - SECD "Architecture"

[[SECD Machine.pdf]]

Execution

Stacks 💸

Structure

Operations

| OP | Description | Definition | Explanation |
| --- | --- | --- | --- |
| NIL | push a nil pointer | \(s \, e\,(\text{NIL}.c)\, d \to (\text{NIL}.s)\, e \, c \, d\) | moves a nil pointer from the control stack to the evaluation stack |
| LD | load from the environment | \(s \, e\,((\text{LD}\,(i.j)).c)\, d \to (\text{locate}((i.j),e).s)\, e \, c \, d\) | uses locate (auxiliary function) to find the value of a variable in the environment |
| LDC | load constant | \(s \, e\,((\text{LDC}\, x).c)\, d \to (x.s)\, e \, c \, d\) | moves a constant from the control stack to the evaluation stack |
| LDF | load function | \(s \, e\,((\text{LDF}\, f).c)\, d \to ((f.e).s)\, e \, c \, d\) | adds a closure (function code paired with the current environment) to the evaluation stack |
| AP | apply function | \(((f.e')\, v.s)\, e\, (\text{AP}.c)\, d \to \text{NIL}\, (v.e')\, f\, (s\, e\, c.d)\) | applies a function, analogous to a JAL instruction |
| RTN | return | \((x.z)\, e'\, (\text{RTN}.q)\, (s\, e\, c.d) \to (x.s)\, e\, c\, d\) | restores the environment from when the function was called |
| SEL | select in if-statement | \((x.s)\, e\,((\text{SEL}\, c_t\, c_f).c)\, d \to s\, e \, c' \, (c.d)\) | used to compile if-statements, which need special handling so that only one branch executes; \(c'\) is \(c_t\) or \(c_f\) depending on \(x\) |
| JOIN | rejoin main control | \(s \, e\,(\text{JOIN}.c)\, (c_r.d) \to s\, e \, c_r \, d\) | used with SEL; restores the saved control from the dump stack |
| RAP | recursive apply | (details omitted) | |
| DUM | create a dummy env | | used with RAP |
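The transitions can be made concrete with a toy machine over a few of the opcodes (my own simplification: Python lists model the four stacks, the argument list is loaded as a single LDC constant instead of being built with NIL/CONS, and locate is plain indexing):

```python
def run(code, env=None):
    # the four registers: stack, environment, control, dump
    s, e, c, d = [], env or [], list(code), []
    while c:
        op = c.pop(0)
        if op == "NIL":
            s.insert(0, None)
        elif op == "LDC":                    # load the next constant
            s.insert(0, c.pop(0))
        elif op == "LD":                     # locate((i.j), e): frame i, slot j
            i, j = c.pop(0)
            s.insert(0, e[i][j])
        elif op == "LDF":                    # closure = (code . env)
            s.insert(0, (c.pop(0), e))
        elif op == "AP":                     # save (s e c) on dump, enter closure
            (f, fe), args = s[0], s[1]
            d.insert(0, (s[2:], e, c))
            s, e, c = [], [args] + fe, list(f)
        elif op == "RTN":                    # pop dump, push result on old stack
            x, (s0, e0, c0) = s[0], d.pop(0)
            s, e, c = [x] + s0, e0, c0
    return s[0]

# ((lambda (x) x) 5): push the argument list, push the closure, apply
ident5 = ["LDC", [5], "LDF", ["LD", (0, 0), "RTN"], "AP"]
```

Running `run(ident5)` returns `5`: AP saves the caller's registers on the dump, LD fetches slot (0, 0) of the new environment, and RTN restores the caller.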

Compilation (LISP)

Built-in Functions

If-then-else

Non-recursive Functions

Recursive Functions

Note: Generating Indices for Identifiers

Lecture 13 - Intro to Logic Programming through Prolog

Prolog Quickstart

Prolog Debugger Overview

Prolog

Grammar

Types of Terms
Binding Variables
% "simple" binding
X = 2

Program Structure

Facts
% relation over constants
coach(trish, nolan).

% universally quantified statement: everything is awesome
awesome(X).
Rules
acquaintance(X, Y) :- coach(X, Y).
Conjunction
% X is a banana if it's yellow, a fruit, and bendy
banana(X) :- yellow(X), fruit(X), bendy(X).
Facts as Rules
Queries
?- acquaintance(trish, nolan).
?- C1, C2, ..., Ck.
Evaluation Algorithm
Code Style for Queries
p(W) :- append([a1, a2], [b1], W).
?- p(a).        % for some constant a

Lecture 14 - Data Structures in Prolog, Unification, Inference Engine

Data Structures

Lists

[]           % empty list
[a, b, c]    % list with a, b, and c
[F|R]        % a pattern with F and R

Unification

Examples

Most General Unifier

Unification Algorithm

Example
| Step | System of Equations | Substitution | Explanation |
| --- | --- | --- | --- |
| 1 | {p(f(g(X,a)), X) == p(f(Y), b)} | \(\set{}\) | The equation of the two terms is the only equation in the system |
| 2 | {f(g(X,a)) == f(Y), X == b} | \(\set{}\) | Both terms have the structure p(A, B), so their arguments are equated pairwise; the system now has two equations |
| 3 | {f(g(b,a)) == f(Y), b == b} | \(\set{X/b}\) | To solve the second equation X == b, we substitute \(X/b\) and add it to the substitution. The second equation now holds |
| 4 | {g(b,a) == Y, b == b} | \(\set{X/b}\) | Both sides of the first equation are wrapped in \(f\), so they can be equated without it |
| 5 | {g(b,a) == g(b,a), b == b} | \(\set{Y/g(b,a), X/b}\) | Finally, we substitute \(Y/g(b,a)\) to solve the first equation. Both equations hold, so the unification is complete |
Explanation

Occurs Check
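Combining the algorithm with the occurs check, a Python sketch (my own term encoding: capitalized strings are variables, tuples `("functor", arg1, ...)` are compound terms, anything else is a constant):

```python
def is_var(t):
    # variables are capitalized strings (my encoding)
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    # follow variable bindings in substitution s
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(x, t, s):
    # occurs check: does variable x appear inside term t?
    t = walk(t, s)
    if t == x:
        return True
    if isinstance(t, tuple):
        return any(occurs(x, a, s) for a in t[1:])
    return False

def unify(a, b, s=None):
    # returns a most general unifier as a dict, or None on failure
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return None if occurs(a, b, s) else {**s, a: b}
    if is_var(b):
        return unify(b, a, s)
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and a[0] == b[0] and len(a) == len(b)):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)           # equate arguments pairwise
            if s is None:
                return None
        return s
    return None

# the worked example: p(f(g(X,a)), X) == p(f(Y), b)
mgu = unify(("p", ("f", ("g", "X", "a")), "X"),
            ("p", ("f", "Y"), "b"))
# X == f(X) fails only because of the occurs check
no = unify("X", ("f", "X"))
```

`mgu` binds `X` to `b` and `Y` to `g(X, a)` (walking the bindings gives `g(b, a)`), while `no` is `None`.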

Lecture 15 - In-built Predicates and Prolog's Inference Engine

Inference Engine

Built-in Predicates

Arithmetic, Equality, Comparison

In addition to the "regular" arithmetic and comparison operators +, -, *, //, <, =<, >, >=, Prolog has the following equality operators

| Operator | Example | Explanation |
| --- | --- | --- |
| is | X is E | True if X unifies with the value of arithmetic expression E; if X is a variable, it becomes bound to that value |
| = | X = Y | True if X and Y are unifiable, i.e. if they can be matched (unified). \= is the negation of this |
| =:= | E1 =:= E2 | True if the values of arithmetic expressions E1 and E2 are equal |
| =\= | E1 =\= E2 | True if the values of arithmetic expressions E1 and E2 are not equal |
| == | T1 == T2 | True if terms T1 and T2 are identical, i.e. syntactically equivalent. The only case where this differs from string equality is when T1 and T2 contain free variables with different names |
| \== | T1 \== T2 | True if terms T1 and T2 are not identical, i.e. the negation of == |

Metalogic

| Predicate | Explanation |
| --- | --- |
| var(X) | Tests whether X is uninstantiated, i.e. unbound, i.e. a free variable |
| nonvar(X) | Negation of var(X) |
| atom(X) | Tests whether X is, or is bound to, an atom (if it's a variable) |
| integer(X), number(X) | Tests whether X is an integer or a number, respectively |
| atomic(X) | Tests whether X is either an atom or a number, i.e. the disjunction of atom and number |

Finding All Solutions

| Predicate | Explanation |
| --- | --- |
| findall(X, Q, L) | Binds L to a list of all values of X that satisfy query Q, e.g. findall(X, likes(X, Y), L). X should parametrize Q |

Inference Engine

We discuss inference for Horn clauses, which are simplified versions of "regular" clausal logic. We will use the syntax of Prolog to express the inference algorithm, but it exists independently.

% Clause (for n >= 0, i.e. possibly an empty body)
A :- B1, ..., Bn

% Goal, with **subgoals** C1, ..., Ck
?- C1, ..., Ck

The resolution principle (or just resolution) is the process by which we find proofs (refutations).

Deriving Goals

We first proceed by attempting to unify the first subgoal C1 with the head A of a clause. If unifier \(w\) unifies C1 and A, the new goal, called a derived goal, becomes:

?- w(B1, ..., Bn, C2, ..., Ck)

B1, ..., Bn, C2, …, Ck are the terms that haven't been resolved yet, so they become our derived (new) goal. Notice that we chose to unify the first subgoal C1; we keep doing this, recursively generating derived goals, until we reach an empty goal, implying that all the subgoals have been proved and thus a proof exists.

Since clauses may have different resolutions, each resolution gets searched through backtracking, implying a tree structure. Indeed, the path from the main goal ?- C1 ... to ?- w(B1, ..., C2, ...) is an edge on the tree; each possible unifier \(w\) is a different branch from the root. The resolution of ?- w(B1, ..., C2, ...) is a subtree from \(w\)'s node.

Thus, a successful proof (a refutation) is represented by a path from the root to an empty goal, which is a leaf. Note that leaves are either empty goals (successful resolutions) or resolution failures, since these are the only cases that don't derive goals recursively.

A failure occurs when two expressions cannot be unified, e.g. they are non-variables with different values. When a failure is reached, the algorithm backtracks to the last successful resolution (i.e. the parent node) and evaluates the next unifier.

The set of root-to-empty-goal paths is the set of solutions to the query; Prolog searches through this tree to find them. Since the resolution algorithm completes the first subgoal before moving on and backtracks on failure, the tree is searched using depth-first search.
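The search can be mimicked in miniature for propositional programs (a toy sketch of my own: clause bodies are lists of atoms, each alternative body for a head is a branch, and Python's recursion does the depth-first backtracking):

```python
# a :- b, c.   a :- d.   b.   d.   (no clauses for c)
clauses = {
    "a": [["b", "c"], ["d"]],
    "b": [[]],
    "d": [[]],
}

def solve(goals):
    if not goals:                           # empty goal: refutation found
        return True
    first, rest = goals[0], goals[1:]
    for body in clauses.get(first, []):     # each matching clause is a branch
        if solve(body + rest):              # derived goal; on failure, backtrack
            return True
    return False
```

`solve(["a"])` first tries `a :- b, c`, fails on `c` (a leaf failure), backtracks to the parent choice point, and succeeds via `a :- d`.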

Lecture 16 - Cut, Negation, and a Simple Interpreter

Example of Prolog Problem: The N-Queens problem (eclass)

Notes on eclass

Cut !

In Prolog, the cut ! is a goal that succeeds when first reached, but fails if Prolog attempts to backtrack through it. So, it forces Prolog to commit to the choices made before the cut.

?- q1, ..., qn, !, r1, ..., rn

The cut is used to constrain the possible values returned by a predicate appearing earlier, e.g. if we only want to consider one unification of that clause. It is used to indicate clauses that don't need to be recalculated, often leading to performance improvements.

Note that since the cut prevents some of the resolution search from happening, the order of the goals (including the cut) now impacts the actual set of results returned.

Usage Example

In Assignment 3, my clique predicate was defined as

clique(L) :- findall(K, node(K), Nodes), !, subset(L, Nodes), connected(L).

The cut is placed there so that we don't re-evaluate findall: any ordering of the node list would be a valid binding for Nodes, but every order produces the same result, so checking one suffices. The cut prevents backtracking into findall.

We can also use it to prevent "multiple matching" scenario when just one matching is sufficient, like for the function member.

% here, we have the cut so that we stop looking once we find the member
% otherwise, the query would return n trues for n occurences of
% X in the list
member(X, [X|_]) :- !.
member(X, [_|T]) :- member(X, T).

Construction of Conditionals

We can use the cut to implement conditional logic. Here, we are expressing if p, then q, otherwise r.

x :- p, !, q.
x :- r.

Negation

The negation operator \+ in Prolog corresponds to the logical not \(\lnot\), implemented as negation as failure.

It can be defined in terms of the cut:

not(X) :- X, !, fail.
not(X).

Semantics of Negation

The definition of negation in terms of cut implies that negation is the failure to prove, i.e. \+ P succeeds iff P cannot be proved, which seems to make sense. However, when variables are involved, this isn't always the same as logical negation. For example, consider:

even(N) :- \+ odd(N), integer(N).
integer(6).
odd(weird).

Here the query even(6) succeeds, but even(X) with X unbound fails: \+ odd(X) first tries to prove odd(X), which succeeds by binding X to weird, so the negation fails before integer(6) ever gets a chance to bind X.

Prolog's Internal Database

Meta-predicates

Prolog has built-in predicates that directly affect the knowledge of the program (i.e. the database) at evaluation time, so they are useful for the DB.

Name Description
assert(Clause) Adds Clause to the current program being interpreted. Note that where in the program it gets added depends on implementation.
E.g. assert(above(X, Y) :- on(X, Y)).
asserta(Clause) Inserts Clause as the first clause for that predicate
assertz(Clause) Inserts Clause as the last clause for that predicate
clause(Head, Body) Searches the program for a clause whose head matches (i.e. unifies to) Head (i.e. searches for a clause). Head must not be a free variable.
retract(Clause) The first clause matching Clause in the program is deleted
retractAll(Clause) Every clause matching Clause in the program is deleted

Prolog as a Database

These meta-predicates enable us to use Prolog like a simple database (which is quite fast), where predicates are used to encode attributes:

A Prolog Interpreter in Prolog

We can write a prolog interpreter in prolog (homeomorphic spaces, iykyk).

% if we interpret true or the empty goal, the program must be true
interp(true).
interp([]).

% interpreting a list of subgoals is like a conjunction, so we
% defer to the conjunction in prolog (,)
% tail-recursive case of interpreter
% we have a cut so that the interpretation can only happen once
interp([H|T]) :- !, interp(H), interp(T).

% if there is a clause whose head matches (unifies with) P, we evaluate its body
% this is the "search and match" step that prolog takes
% the bindings for P's variables get passed to Y automatically
interp(P) :- clause(P, Y), interp(Y).

% next, we check if P is a built-in predicate by letting prolog evaluate it directly
interp(P) :- P.

% finally, if this fails, we can't evaluate the goal, so it fails and backtracks

Lecture 17 - Constraint Programming and Satisfaction

What is constraint programming (eclass)

Constraint programming studies models of computation based on constraints, the main idea being that if a set of constraints can fully characterize a problem, finding a solution to the constraints solves the problem.

Examples of Constraint Problems

The following problems are (most?) efficiently solved using constraint programming.

Cryptarithmetic Puzzles

Cryptarithmetic puzzles are solved by assigning digits to letters (or more broadly, symbols); these symbols are related by an "equation" like the one below.

    S E N D  
+   M O R E  
--------------  
    M O N E Y
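A pure generate-and-test solver for this puzzle is easy to sketch in Python (brute force over digit assignments, which is not how a constraint solver works, but it shows the problem really is just a set of constraints):

```python
from itertools import permutations

def send_more_money():
    # assign distinct digits to S, E, N, D, M, O, R, Y and test the equation
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:                  # no leading zeros
            continue
        send = 1000*s + 100*e + 10*n + d
        more = 1000*m + 100*o + 10*r + e
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + more == money:
            return send, more, money
```

`send_more_money()` returns `(9567, 1085, 10652)`, the puzzle's unique solution.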

N-Queens Problem

N-queens problem: place \(N\) queens on an \(N\times N\) board such that no two queens are in the same row, column, or diagonal.

NP-Complete Problems

These include the travelling salesman problem, map-colouring, planning problems, etc.

Constraint-Solving Approaches

When a problem is solved algorithmically, the programmer develops, implements, and runs an algorithm in a programming language; the work is coming up with this algorithm

When a problem is solved with constraint programming, the programmer must formulate/model the problem in the underlying language. The "algorithm" part is deferred to the language implementation

The two main constraint programming approaches are Boolean Satisfiability and Constraint Satisfaction, the latter of which we will study.

Constraint Satisfaction

Constraint Satisfaction Problem

Definition
A constraint satisfaction problem (CSP) consists of

  1. A set of variables \(X=\set{x_1, \dots, x_n}\)
  2. A domain \(D_i\), i.e. set of possible values, for each \(x_i\in X\)
    • These are often consecutive integers, but can be anything, including non-numbers
  3. A set of constraints restricting which values variables can simultaneously have

A solution to a constraint satisfaction problem is an assignment of every variable to a value from its domain in a way where all constraints are satisfied.

A single constraint is local (i.e. applies to just the variables involved), but a solution to all the constraints is global (i.e. involves all the variables). So, an assignment set may satisfy a single constraint, but it won't be a solution if it fails any other.

Systematic Search Algorithms

As alluded to, the generate-and-test paradigm (GT), which generates and tests all possible value assignment sets, is strictly correct but inefficient.

The backtracking paradigm incrementally attempts to extend partial solutions towards complete ones by picking the "next" value in the domain for a variable. Backtracking occurs when a constraint-breaking value is picked.
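That backtracking loop can be sketched generically (the function names and the `consistent` predicate are my own assumptions):

```python
def backtrack(variables, domains, consistent, assignment=None):
    # extend a partial assignment one variable at a time
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return dict(assignment)               # complete: a solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:                # pick the "next" value
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(variables, domains, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]                   # constraint broken: backtrack
    return None

# three variables that must all differ
sol = backtrack(["x", "y", "z"],
                {"x": [1], "y": [1, 2], "z": [1, 2, 3]},
                lambda a: len(set(a.values())) == len(a))
```

Here `sol` is `{"x": 1, "y": 2, "z": 3}`: `y = 1` breaks the all-different check and is undone before `y = 2` is tried.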

Consistency Techniques

Constraints and the domains of their variables might make the set of constraints inconsistent, especially if some variables have already been bound.

Essentially, we are checking for values that are not possible to satisfy at our current place in the backtrack, implying we shouldn't search through them on this backtracking cycle.

A technique is applied to a constraint to reduce the domain of the variable(s) involved; a domain reduced to \(\emptyset\) indicates unsatisfiability.

Node Consistency

If a constraint has one unassigned variable (e.g. \(X>2\) or \(X = Y\) where \(Y\) is bound from previous backtracking), the variable's domain \(D\) can be filtered using the constraint as a predicate, i.e. all \(d\in D\) not satisfying the constraint are removed from \(D\)

Arc Consistency

If a constraint has two unassigned variables \(X\) and \(Y\), and some value \(x\) for \(X\) has no value of \(Y\) in its domain satisfying the constraint, then this \(x\) can be removed from the domain of \(X\) (for this backtrack).
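The domain-filtering step for a binary constraint reduces to a single "revise" function (a sketch of my own; the constraint is modeled as a Python predicate):

```python
def revise(dx, dy, constraint):
    # keep only values of X that have at least one supporting value in D_Y
    return [x for x in dx if any(constraint(x, y) for y in dy)]

# X < Y over D_X = D_Y = {1,...,4}: x = 4 has no support, so it is dropped
reduced = revise([1, 2, 3, 4], [1, 2, 3, 4], lambda x, y: x < y)
```

`reduced` is `[1, 2, 3]`; a result of `[]` would signal unsatisfiability.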


More info can be found on the Wikipedia Page for the Constraint Satisfaction Problem

Lecture 18 - Constraint Logic Programming (CLP)

Topic 18: Constraint Logic Programming (eclass)

Constraint logic programming combines constraint programming and logic programming. In Prolog, this can be achieved by embedding a constraint solver, which replaces Prolog's unification process

Constraint solvers are defined by their domains; common ones include finite domains (useful), \(\mathbb{R}\), \(\mathbb{Q}\), booleans, etc.

Solving Strategy

Example

% this program is defined over a number system, e.g. the naturals, reals, etc
p(f(X)) :- X > 1, q(X).
q(X) :- X < 5.

% query
?- p(Z).
| Step | Program Evaluation | Constraint Store |
| --- | --- | --- |
| 1 | ?- p(Z) | {} |
| 2 | X > 1, q(X) | {Z = f(X)} |
| 3 | q(X) | {Z = f(X), X > 1} |
| 4 | X < 5 | {Z = f(X), X > 1} |
| 5 | \(\varnothing\) | {Z = f(X), X > 1, X < 5} |

Here, the constraint solver must then be used to determine if {Z = f(X), X > 1, X < 5} is satisfiable, which depends on the domain over which it is defined. These particular constraints are satisfiable over \(\mathbb{R}\), \(\mathbb{N}\), etc.

Constraint Solving in Prolog

Both packages below (clpr and clpfd) come with SWI-Prolog.

Solving over \(\mathbb{R}\)

The clpr (constraint logic programming over reals) library can be used in prolog to invoke a constraint solver over \(\mathbb{R}\)

:- use_module(library(clpr)).

We program in prolog normally, but enclose constraints we want clpr to solve in curly braces {}

% here, we want X > 1 to be constraint-solved over the reals
% q(X) should still be solved through unification, since it's a constraint on an equation, not a real number
p(f(X)) :- {X > 1}, q(X).
q(X) :- {X < 5}.

Solving over Finite Domains

More info on clpfd can be found in the prolog manual and tutorial page.

Finite domains tend to be the most useful domains to solve constraints over because they are the best at representing "useful" problems, like scheduling, planning, packing, etc.

Structure of a clpfd Program

First, we must load the clpfd library

:- use_module(library(clpfd)).
Specifying Constraints

The operators of constraints are prefixed with # to distinguish them from the regular prolog operators.

Both sides of the constraint can be arithmetic expressions, unlike regular prolog.

Restricting Domains

Note: in clpfd, in/2 and ins/2 restrict variables to ranges of integers; this parallels how we restricted terms in regular Prolog with type tests like number, atom, etc.

The in operator can restrict a value to a range, which is defined with the notation a..b corresponding to the range \([a, b]\). The union operator \/ can be used to join ranges.

X in 1..4             % regular range
[X, Y, Z] in 1..4     % list of variables in range
X in 1..4\/10..29     % union of ranges

The all_different/1 constraint holds if the values in a list are pairwise distinct.

An example of the domain restrictions required for the SEND MONEY problem can be found on eclass.

Searching for Solutions

The labelling predicates label/1 and labeling/2 tell the (clpfd) solver to solve for the given list of variables, i.e. assign domain values to them in order, backtracking until a solution is found. This process is called labelling.

label([X, Y, Z]).

Essentially, labelling is the process that turns constraints into lists of possible bindings for all the variables.

An implementation of label gives insight into how it works:

mylabel([]).  
% indomain(V) binds V to each value in its domain;
% backtracking through it yields the domain values in increasing order,
% mirroring the labelling process
mylabel([V|Vs]) :- indomain(V), mylabel(Vs).
Search Control Options

The labeling/2 predicate accepts a list of search options/parameters that specify the order in which variables and domain values are tried.

Variable order
| Name | Option | Explanation |
| --- | --- | --- |
| Leftmost | leftmost | The leftmost unbound variable is selected (default option) |
| Minimum | min | The variable with the smallest lower bound is selected |
| Maximum | max | The variable with the largest upper bound is selected |
| First-fail | ff | The variable with the smallest domain is selected (tiebreaking defers to leftmost) |
| Most constrained | ffc | Like ff, but tiebreaking defers to the largest number of constraints, then leftmost |

Value order
| Name | Option | Explanation |
| --- | --- | --- |
| Upward | up | Domain values are tried in ascending order (default) |
| Downward | down | Domain values are tried in descending order |

Lecture 19, 20 - CLPFD Predicates and Reification

Local Constraints

Arithmetic Constraints

Arithmetic constraints are the same as their prolog counterparts, but prefixed with #: #=, #\=, #>, #<, #>=, #=<.

Boolean Operators

| Name | Operator | Explanation |
| --- | --- | --- |
| Logical or | #\/ | Logical or connective |
| Logical and | #/\ | Logical and connective |
| Logical not | #\ | Logical not |
| Implication | #==> | The logical implication connective, equivalent to (#\ A) #\/ B. #<== is the converse |
| Equivalence | #<==> | The logical equivalence connective ("if and only if"), equivalent to (A #==> B) #/\ (B #==> A) |

Global Constraints

Global constraint predicates have built-in constraint propagation; they relate many variables at once instead of just a local pair of expressions. Generally, their clpfd implementations are more efficient than user-defined ones.

Predicates

Reification

Reification is the process of turning a constraint into a variable (in clpfd) that holds the value true or false. So, we can explicitly reason about whether the variable (and thus the constraint) is true or not within the program itself.

In natural language (English), reification is the process of making something abstract more concrete, sometimes by replacing an abstract concept with a particular instance.

Reification Example - occurs

% NOT REIFIED

% query
q(I,L,N) :- length(L,4), L ins 1..10, occur0(I,L,N), label(L).
% predicate: occur0
% defined recursively over the list, clause by clause
occur0(_,[],0).
occur0(I,[I|L],N) :- N#=N1+1, occur0(I,L,N1).
occur0(I,[J|L],N) :- I #\= J, occur0(I,L,N).

% REIFIED

% query
t(I,L,N) :- length(L,4), L ins 1..10, occur(I,L,N), label(L).
% predicate: occur
occur(I,Vars,N) :- generate_list(I,Vars,L), sum(L,#=,N).
% helper function: generate_list (encapsulates occur0 logic, written in a reified way). The generated list is an indicator vector; entries 0 mark "don't occur" and 1 the opposite
generate_list(_,[],[]).
% we define the check in terms of boolean logic directly
generate_list(I,[A|R],[T|S]) :- (I #= A #==> T#=1), (I #\= A #==> T#=0), generate_list(I,R,S).

Appendix: General Answer Set Programming Interpretation of Reification

Shoutout to Spencer for explaining this on the discord and providing the example

In ASP in general, reification takes a "regular" program (e.g. a Prolog program) and turns it into something that another program can reason about; this "other program" may be a constraint solver, for example.

An example of reification is below:

% non-reified
bagel(1, 2, 3). bread(42) :- bagel(1, 2, 3).

% reified
% aside: what is this representation and how is the reification step implemented?
atom_tuple(0). atom_tuple(0,1). literal_tuple(0). rule(disjunction(0),normal(0)). atom_tuple(1). atom_tuple(1,2). rule(disjunction(1),normal(0)). output(bagel(1,2,3),0). output(bread(42),0).

Lecture 21 - Answer Set Programming

Answer set programming is a declarative programming paradigm for problem/constraint solving and knowledge representation.

Much of the following ASP content is similar in concept and syntax to prolog, but they are fundamentally different things.

In this class, we use the ASP solver clingo.

The ASP Paradigm

Answer Set Programming

Paradigm
Problems are defined in terms of the rules that govern their actual set of solutions. Stable models or answer sets are solutions to a problem

Programs

We encode problem instance \(Q\) in a program \(P\), expressed as non-monotonic logic. Stable models of \(P\) are computed by an ASP solver; these correspond to the solutions to \(Q\).

A normal program is a finite set of rules of the form \(A \leftarrow B_1 \dots B_k, \text{ not } C_1, \dots, \text{ not } C_n\), read "If \(B_1\dots B_k\) are in a solution but none of the \(C_i\) are, then \(A\) is in the same solution". The terms \(A, B_i, C_i\) are atoms in the underlying propositional language.

Programs can be written with variables to generalize non-ground rules over all the constants in a program. Like in prolog, variables are denoted with capital letters.

A constraint is characterized by a rule with an empty (false) head; semantically, its body cannot hold in any answer set, so candidate solutions satisfying the body are ruled out.

Grounding

A ground program is a program that doesn't contain any variables and has the same answer set as some original program. Grounding is the process of translating a non-ground program into a ground program.

Examples

More examples are in the [[intro-ASP-for-c325 (1).pdf|class slides]].

Answer Sets

Answer Set

Definition
An answer set or stable model \(M\) of program \(P\) is a set of ground atoms of grounded \(P\) that satisfies the following properties

Essentially, an answer set is a set of attributes on the constants of a program that satisfy all of the rules and facts of \(P\). A program may have many, one, or no answer sets.

Finding Answer Sets

First, the program is grounded: all the variables are replaced with variable-free terms, (presumably) instantiating all the general rules with the appropriate constants to turn them into facts.

Then, an ASP solver computes the set of models that satisfy the rules.

ASP Language Features (Lparse)

Sets are denoted with \(\set{a_1; \dots; a_n}\) (i.e. with \(;\) delimiters), or \(\set{a_1, \dots, a_n}\) in earlier LParse syntax.

A cardinality constraint \(x \set{a_1, \dots, a_n} \, y\) for \(x, y\in \mathbb{N}\) specifies any subset \(S \subseteq \set{a_1, \dots, a_n}\) with size between \(x\) and \(y\) inclusive, i.e. \(x \leq |S| \leq y\).

Sets can be expressed concisely using the conditional literal \(a : D\), where \(a\) is an atom and \(D\) is a domain predicate. Roughly, it specifies the set of atoms \(a\) that satisfy the domain predicate \(D\).

ASP vs. Prolog

ASP and Prolog have similar syntax (ASP isn't strictly a language, but the syntax of LParse is a common standard), but different paradigms.

In Prolog, we define a knowledge base in terms of rules and facts, then run queries over it; computation happens when "evaluating" these queries by resolution.

In ASP, we still define knowledge and rules, but the program evaluates to a solution, namely the list of answer sets. ASP is a more faithful adaptation of Horn clause logic than Prolog.

Lecture 22 - Answer Set Planning

Planning is a problem in AI that concerns autonomously finding a sequence of steps (a strategy) to solve a given problem.

Representing Planning Problems

In a planning problem, we need to represent the following:

Actions

To generate the sequence of actions that is our solution, we need to know

We also must assume the frame axiom: if an object isn't affected by any action in a given state, then its fluents carry over unchanged to the next state.

The following scheme represents an action system in ASP:

% an action may have pre-conditions that must be met
action(Obj,T) :- pre-conditions. 

% conflicting actions can be ruled out by a constraint stating that
% both cannot happen at the same time; if not specified, any actions
% can happen at the same time. E.g.:
:- action(Obj,T), conflicting-action(Obj,T). 
:- action(Obj1,T), action(Obj2,T). 

% we can define whether an action affects an object
affected(Obj,T) :- action(Obj,T). 

% an action changes a property of an object;
% this is essentially the definition of the action's effect
property2(Obj,T+1) :- action(Obj,T), property1(Obj,T). 

% frame axiom: properties of unaffected objects persist
property(Obj,T+1) :- not affected(Obj,T), property(Obj,T).

We can define auxiliary predicates to tell us information about the state; these are not used to find the actual solution.

Aside: Planning Problems as Graphs

We can represent planning problems as graphs.

This lets us use graph-theoretic knowledge and algorithms to solve planning problems.

This interpretation follows from the fact that binary predicates over a set inherently define a graph structure: the elements are nodes and the predicate instances are edges.

Lecture 23 - Foundations of Logic Programming

[[logic-foundations-w2024.pdf|Notes]]

1. Syntax of Logic Systems

The syntax of a logical system defines which logical expressions, constructed from logical symbols, are legal, i.e. well-formed.

A given logical syntax contains an alphabet, i.e. a lexicon of the symbols that may be used. These usually include:

2. Semantics of Logic Systems

The semantics of logic defines the relationships between formulae.

For set of formulae \(\Sigma\) (interpreted as the conjunction \(\displaystyle\bigwedge\limits_{i=1}^{|\Sigma|}\Sigma_i\)) and formula \(A\), we define the relation \(\Sigma \vDash A\) to mean that every way of interpreting (evaluating) \(\Sigma\) that makes it true also makes \(A\) true.

For propositional logic, since there are a finite number of possible assignments over propositions (since there are a finite number of variables), we can simply define logical consequence by constructing a truth table, i.e. testing every possible value.
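The brute-force idea can be sketched in Python (encoding formulae as functions over an assignment dict is an assumption of this sketch, not course notation):

```python
from itertools import product

def entails(sigma, a, variables):
    """Sigma |= a: every assignment satisfying all formulae in sigma
    also satisfies a. Formulae are predicates over an assignment dict."""
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(f(v) for f in sigma) and not a(v):
            return False  # found a counterexample row in the truth table
    return True

# {p, p -> q} |= q   (modus ponens, checked by brute force)
sigma = [lambda v: v["p"], lambda v: (not v["p"]) or v["q"]]
result = entails(sigma, lambda v: v["q"], ["p", "q"])
```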

3. Inference Rules of Logical Systems

For predicate logic, since we have quantifiers, there may be an infinite number of valuations for a given formula, so we can't simply use a truth table.

Instead, we use an inference system, which consists of a collection of inference rules that can be used to derive new formulae from existing ones.

4. Predicate Logic

Our main question is what the logical consequences of a Horn clause L1 <- L2, ..., Lm are, where the variables are all universally quantified (\(\forall\)).

We define the Herbrand Universe \(H_u\) as the set of all "objects" we can use predicates to relate (ground terms); specifically:

We define a ground atom as any atom whose variables are instantiated by terms in \(H_u\). We can construct the set of ground atoms that are logical consequences of \(P\) iteratively:

By definition of this process, everything in a given \(S_i\) is a logical consequence of \(P\).


The remaining material is optional

Lattices and Fixed point Theorem

The Knaster-Tarski fixed-point theorem states that every monotonic operator defined on a complete lattice has a least fixed point, which can be computed iteratively by repeatedly applying the operator starting from the least ("first") element of the domain.

This sequence of computational steps can be characterized as the repeated application of an operator to a set, namely \(S_0\). We define this operator as \(T_P(S) = \set{a \mid a \leftarrow b_1, \dots, b_n \text{ is a ground instance of a clause in } P, b_1 \dots, b_n \in S}\). We can show that the fixed-point theorem applies:

The least fixed point of \(T_P\) is the set of ground atoms that are logical consequences of \(P\).
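The iteration can be sketched in Python for a ground program (the pair encoding of clauses and the helper names are assumptions of this sketch):

```python
def tp(program, s):
    """One application of the immediate-consequence operator T_P:
    the heads of ground clauses whose bodies are all in s."""
    return {head for head, body in program if set(body) <= s}

def least_fixed_point(program):
    """Iterate T_P starting from the empty set until it stabilizes."""
    s = set()
    while True:
        nxt = tp(program, s)
        if nxt == s:
            return s
        s = nxt

# p.   q :- p.   r :- q, p.
program = [("p", []), ("q", ["p"]), ("r", ["q", "p"])]
lfp = least_fixed_point(program)
```

Each pass corresponds to one \(S_i\): facts come in on the first pass, their direct consequences on the second, and so on until nothing new is derived.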

Introduction

I had fun with this course; it should be obvious from all the asides. Hope they are well received :)

Appendix 01 - Accumulators

(defun reverse (L) (reverse-helper L nil))
(defun reverse-helper (L acc) 
    (cond ((null L) acc)
          (t (reverse-helper (cdr L) (cons (car L) acc)))))
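The same accumulator pattern, sketched in Python for comparison (list slicing stands in for car/cdr):

```python
def reverse(lst):
    return reverse_helper(lst, [])

def reverse_helper(lst, acc):
    # the accumulator carries the partial result, so the
    # recursive call is in tail position
    if not lst:
        return acc
    return reverse_helper(lst[1:], [lst[0]] + acc)

out = reverse([1, 2, 3])
```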

Differences from Tail Recursion

Appendix 02 - Using Prolog

Sample session (eclass)

Prolog debugging (eclass)

Startup

We install SWI-Prolog and start it by typing swipl into the terminal. This opens a session. Prolog can be used exclusively from the terminal, where each line adds to the program.

To load a program from a file (which is how any development will happen), we type

[fileName].

If the program is valid, it will return true and print that to the terminal. Otherwise, it will print false. It may also provide warnings about your program; most commonly, that you have a singleton variable that should be a _ instead.

Queries

Prolog programs define knowledge, and "execution/evaluation" happens by running a query over the program's knowledge. Prolog prompts you for queries with ?- in the terminal, once a program has been loaded.

When a query evaluates, it prints the result. If there are more results (i.e. the query can be satisfied in more than one way), we can press ; to move through all the results. Once the last result has been reached, the query will "finish" and Prolog will prompt you for another query with ?-.

Debug

Ports

Regular prolog has four ports that describe the steps taken in program evaluation: Call for predicates getting evaluated, Exit when a clause is done being evaluated, Fail when a predicate cannot be unified, and Redo when a previously evaluated predicate is evaluated with different bindings after a backtrack.

SWI-Prolog also has a Unify port, indicating unification of a clause.

Tracing

We can trace how a program evaluates through all of the ports.

?- trace, predicate(arg1, arg2, etc...)

This will show us the port used at each step, and prompt us with what to do next. Use creep to move to the next step.

The visibility of each port in the trace can be controlled, which is useful. For example, we might only care about where the program fails, so we wouldn't look at the Call port.

?- visible(+port)   % make port visible
?- visible(-port)   % make port invisible

leash and spy can further be used to control which ports are visible.

Exit

Prolog can be exited by pressing e (for exit) in the terminal (^D also kills the process, but is less graceful).

If a query (or rather, its process) is currently evaluating, it can be stopped by entering a (for abort); this will give you debug information and prompt you with another query. This is useful if your program has an infinite loop; you can get out of it without restarting prolog entirely.

Other

Prolog will guess if you make typos; these are most commonly capitalization errors, but actual mistypes are also detected. Typos are only flagged if they are "close" to an existing predicate or variable. If a typo is detected, Prolog won't execute the query until you either correct it or confirm you want to run what you entered.

If we want prolog to fail an undefined predicate (instead of raising an exception), we can declare ?- dynamic(predicateName) before we run the rest of the queries. We can also include this in the program, but it is most useful for debugging.

Appendix 03 - My Final Exam Cheat Sheet

A common problem with making cheat sheets is an instinct to provide as full a summary of the course as possible. In theory, this maximizes its usefulness for an arbitrary student, but not necessarily for any particular student. Being aware of this, I created this cheat sheet with my own weaknesses in mind; its weighting of concepts does not represent that of the course. In particular, it favours logic programming concepts, since those were less familiar to me at the time.

E.g. (a . nil) is the same as (a)

Simplest form: the representation with the fewest dots. To get it: remove each dot . that is immediately followed by an open parenthesis (, along with that parenthesis and its matching close.

Alpha (\(\alpha\)) reduction: renaming a variable that is already bound. Beta (\(\beta\)) reduction: given an expression, replace all occurrences of the parameters with the arguments specified. Use alpha when beta fails due to a naming conflict. A lambda expression that cannot be further reduced by beta reduction is in normal form.

Normal Order Reduction (NOR): evaluate the leftmost outermost function application f(g(2)) -> g(2) + g(2) -> 3 + g(2) -> 3 + 3 -> 6. Applicative Order Reduction (AOR): evaluate the leftmost innermost function application f(g(2)) -> f(3) -> 3 + 3 -> 6.
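The difference in how much work each order does can be simulated in Python (call-by-value stands in for AOR, thunks for NOR; the call counter is just for illustration), using f(x) = x + x and g(x) = x + 1 as in the example above:

```python
# Count how often the argument g(2) is evaluated under each order.
calls = {"g": 0}

def g(x):
    calls["g"] += 1
    return x + 1

# AOR (call-by-value): evaluate g(2) once, then the body of f
calls["g"] = 0
arg = g(2)          # g(2) -> 3
aor = arg + arg     # 3 + 3 -> 6
aor_calls = calls["g"]

# NOR (call-by-name): substitute the unevaluated g(2) into the body,
# so each occurrence evaluates it again
calls["g"] = 0
thunk = lambda: g(2)
nor = thunk() + thunk()   # g(2) + g(2) -> 3 + g(2) -> 3 + 3 -> 6
nor_calls = calls["g"]
```

Both orders reach 6, but NOR evaluates the argument twice here, which is why lazy languages memoize thunks (call-by-need).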

Church Rosser: If A → B and A→C reductions exist, then a D exists where B → D and C → D. Pt 2. If A has normal form E, then there is a normal order reduction A → E. If a normal form exists, NOR guarantees termination.

ADD = (λwzsx | ws(zsx)), T = (λxy | x), F = (λxy | y), IF = (λxyz | xyz), NOT = (λx | xFT), AND = (λxy | xyF), OR = (λxy | xTy), Y = (λy | (λx|y(xx)) (λx|y(xx)))

A closure is a pair \([\lambda, \text{CT}]\), where \(\lambda\) is a lambda function and \(\text{CT}\) is a (possibly empty) context, always unioned with the global context \(\text{CT}_0\). Lambda functions evaluate to a closure which contains the function body, variable list, and the context when the function was defined. Parts of a closure C: params(C), body(C), names(C), values(C)

The evaluation stack is used to evaluate expressions; the environment is used to keep track of bindings; the control is used to store instructions; the dump is used to store suspended invocation contexts, i.e. evaluations that we will come back to later

Built-in functions: A built-in LISP function (OP e1 ... e2) is compiled to "SECD language" instructions (ek' || ... || e' || (OP)) , If-then-else: the LISP function (if e1 e2 e3) is compiled to e1' || (SEL) || (e2' || (JOIN)) || (e3' || (JOIN)), E.g. (* (+ 6 2) 3) is compiled to (LDC 3 LDC 2 LDC 6 + *), || is append, (* (+ 1 2) (- 3 4)) becomes RP-notated 4 3 - 2 1 + *, Lambda Functions: A LISP lambda function (lambda (arg1 ...) (body...)) is compiled to (LDF) || (body' || (RTN)), Function application: A LISP function application (e e1 ... ek) is compiled to (NIL) || ek' || (CONS) || ... || (CONS) || e1' || (CONS) || e' || (AP), Scoping (let) statement: A LISP let statement (let (x1 ... xk) (e1 ... ek) exp) is compiled to (NIL) || ek' || (CONS) || ... || e1' || (CONS LDF) || (e' || (RTN)) || (AP), (letrec (f1 ... fk) (e1 ... ek) exp) is compiled to (DUM NIL) || ek' || (CONS) || ... || e1' || (CONS LDF) || (exp' || (RTN)) || (RAP), E.g. ((lambda (z) ((lambda (x y) (+ (- x y) z)) 3 5)) 6) compiles to (LD (1.1)), (LD (1.2)), (LD (2.1))
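A toy stack machine for the LDC/arithmetic fragment (a Python sketch, not the full SECD machine; the `run` helper is invented here) shows how the compiled code from the example evaluates:

```python
def run(code):
    """Tiny stack machine: LDC pushes a constant, + and * pop two
    operands and push the result."""
    stack = []
    it = iter(code)
    for op in it:
        if op == "LDC":
            stack.append(next(it))  # the constant follows the opcode
        elif op == "+":
            stack.append(stack.pop() + stack.pop())
        elif op == "*":
            stack.append(stack.pop() * stack.pop())
    return stack[-1]

# (* (+ 6 2) 3) compiles to (LDC 3 LDC 2 LDC 6 + *)
value = run(["LDC", 3, "LDC", 2, "LDC", 6, "+", "*"])
```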

For goal ?- C1, C2, ..., Ck, we evaluate each subgoal from left to right, We evaluate by finding a clause in the program whose head "matches" the subgoal, replacing the subgoal with the body of the clause (applying variable bindings if necessary), and (recursively) evaluating that. If the subgoals are eventually solved, the original goal is as well

However, some unifiers are more general than others; \(w_1\) is more general than \(w_2\) if \(w_1\) can be obtained from \(w_2\) (by variable replacement), but \(w_2\) cannot be obtained from \(w_1\). Any unifiable \(t_1\), \(t_2\) have a unique most general unifier.
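A minimal unification sketch in Python (uppercase strings as variables and tuples as compound terms are an encoding assumed for this sketch; there is no occurs check):

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def substitute(t, subst):
    """Apply a substitution to a term, following variable chains."""
    if is_var(t):
        return substitute(subst[t], subst) if t in subst else t
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(a, subst) for a in t[1:])
    return t

def unify(t1, t2, subst=None):
    """Return a most general unifier as a dict, or None if none exists."""
    subst = dict(subst or {})
    t1, t2 = substitute(t1, subst), substitute(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        subst[t1] = t2
        return subst
    if is_var(t2):
        subst[t2] = t1
        return subst
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # same functor and arity: unify the arguments pairwise
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# f(X, b) and f(a, Y) unify with MGU {X -> a, Y -> b}
mgu = unify(("f", "X", "b"), ("f", "a", "Y"))
```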

is arithmetic matching, = unifiable, =:= values of arithmetic expressions are equal, =\= values of arithmetic expressions not equal, == syntactically equivalent. findall(X, Q, L) binds to L a list of all values for X that satisfy query Q, e.g. findall(X, likes(X, Y), L). X should parametrize Q

clause A :- B1, ..., Bn, goal ?- C1, ..., Ck, We first proceed by attempting to unify the first goal C1 with clause A. If unifier \(w\) unifies C1 and A, the new goal, called a derived goal, becomes: ?- w(B1, ..., Bn, C2, ..., Ck). If the new goal is empty we have found a unifier.

cut ! is a goal that succeeds when first reached, but fails if Prolog attempts to backtrack through it. So, it forces prolog to commit to the choices made before the cut. having a cut as a last goal in the clause makes sure only one solution gets returned.

% here, we have the cut so that we stop looking once we find the member
% otherwise, the query would return n trues for n occurrences of
% X in the list
member(X, [X|T]) :- !.
member(X, [H|T]) :- member(X, T).

if p, then q, otherwise r: x :- p, !, q. x :- r. not(X) :- X, !, fail. not(X). (this defines negation as failure). Negation means definitely false, not unable to prove. So even(N) :- \+ odd(N), integer(N). integer(6). odd(weird). ?- even(X) fails (\+ odd(X) is tried with X unbound, and odd(weird) succeeds)

Insertion of a tuple: assert(pred(const)). Deletion of a tuple: retract(pred(X)). Query tuples: clause(Head, _). assert(Clause) (assert(above(X, Y) :- on(X, Y))), asserta(Clause), assertz(Clause), clause(Head, Body), retract(Clause), retractall(Clause)

interp(true). interp([]). interp([H|T]) :- !, interp(H), interp(T). interp(P) :- clause(P, Y), interp(Y). interp(P) :- P.

A constraint satisfaction problem has a set of variables, a domain those variables are in, and a set of constraints over the variables. The constraints can be "assembled" into primitive constraints and put in the constraint store. A solution is a set of bindings for the variables that satisfy all the constraints.

Node consistency: unary constraints like \(X>2\) are checked and values not satisfying them are removed from the domain. Arc consistency: binary constraints between two variables remove unsupported values from each other's domains. E.g. with \(X<Y, Y<Z\) from before, \(X=3\) cannot give a solution for \(Y\), so \(3\) is removed from the domain of \(X\), preventing further search there. Consistency is checked at every step of a search to prune it.
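One arc-consistency revision step can be sketched in Python (the `revise` helper is a simplification of what a real solver does):

```python
def revise(dx, dy, constraint):
    """Arc-consistency step: drop values of X that have no supporting
    value in Y under the binary constraint(x, y)."""
    return {x for x in dx if any(constraint(x, y) for y in dy)}

# X < Y over domains {1, 2, 3}: x = 3 has no support in Y, so it is removed
dx = revise({1, 2, 3}, {1, 2, 3}, lambda x, y: x < y)
```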

X in 1..4, [X, Y, Z] in 1..4, X in 1..4\/10..29, all_different/1, label([X, Y, Z]). label/1 and labelling/2 are used to tell the (clpfd) solver to solve the given list of variables, i.e. assign domain values to them in order and backtrack until a solution is found. Impl: mylabel([]). mylabel([V|Vs]) :- indomain(V), mylabel(Vs).

OR #\/, AND #/\, NOT #\, implication #==>. These operators can turn true/false values into 1 and 0; this is an example of reification.

We encode problem instance \(Q\) in a program \(P\), expressed as non-monotonic logic. Stable models of \(P\) are computed by an ASP solver; these correspond to the solutions to \(Q\). A normal program is a finite set of rules of the form \(A \leftarrow B_1 \dots B_k, \text{ not } C_1, \dots, \text{ not } C_n\), read "If \(B_1\dots B_k\) are in a solution but none of the \(C_i\) are, then \(A\) is in the same solution". The terms \(A, B_i, C_i\) are atoms in the underlying propositional language. A constraint is characterized by a rule with an empty (false) head. A ground program is a program that doesn't contain any variables and has the same answer set as some original program. Grounding is the process of translating a non-ground program into a ground program.
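The stable-model condition can be checked directly via the Gelfond-Lifschitz reduct (a Python sketch; the triple encoding of rules is an assumption here):

```python
def reduct(program, m):
    """Gelfond-Lifschitz reduct P^M: drop rules with `not c` for some
    c in M, then delete the remaining negative literals.
    Rules are (head, positive_body, negative_body)."""
    return [(h, pos) for h, pos, neg in program
            if not any(c in m for c in neg)]

def least_model(definite):
    """Least model of a negation-free program, by iteration."""
    s = set()
    while True:
        nxt = {h for h, pos in definite if set(pos) <= s}
        if nxt == s:
            return s
        s = nxt

def is_stable(program, m):
    # M is stable iff it is exactly the least model of its own reduct
    return least_model(reduct(program, m)) == m

# p :- not q.   q :- not p.   has two stable models: {p} and {q}
program = [("p", [], ["q"]), ("q", [], ["p"])]
```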

A cardinality constraint \(x \set{a_1, \dots, a_n} \, y\) for \(x, y\in \mathbb{N}\) specifies any subset of \(\set{a_1, \dots, a_n}\) with size between \(x\) and \(y\) inclusive, i.e. \(x \leq \#\set{a_1, \dots, a_n} \leq y\). Conditional literal \(1 \set{\text{setColor}(v,C): \text{color}(C)}1\) is shorthand for \(1\set{\text{setColor}(v,\text{red}); \text{setColor}(v,\text{blue}); \text{setColor}(v,\text{yellow})} 1\)

Planning problems: steps: time(0..steps), next-state(T2, T1) :- T2 = T1+1, fluents: on(a, table, T), actions: move-to(a, b, T), initial state: on(a, b, 0). goal: goal(T) :- time(T), on(a,c,T), ..., goal :- time(T), goal(T). :- not goal. The frame axiom: if an object isn't affected by an action at a given state, then its fluents preserve the current state of the object.

Sound: new formulae must be consequences of existing ones. This is required; completeness is not. We define the Herbrand Universe \(H_u\) as the set of all "objects" we can use predicates to relate (ground terms); specifically: any constant \(c\in H_u\); if \(f\) is a function with arity \(n\) and \(t_1, \dots, t_n\in H_u\), then \(f(t_1, \dots, t_n)\in H_u\) as well. We define a ground atom as any atom whose variables are instantiated by terms in \(H_u\). In general, \(S_{n+1}\) contains everything in \(S_n\), as well as the head of every ground instance H :- B1, ..., Bn of a clause in \(P\) where \(B_1, \dots, B_n \in S_n\) (\(S_0\) is the empty set)