Unlike in set theory, functions in type theory are by definition computable.
open import Data.Nat
f : ℕ → ℕ
f x = x + 2
f' : ℕ → ℕ
f' = λ x → x + 2
n : ℕ
n = 3
β-reduction
{-
f' 3
= (λ x → x + 2) 3
= (x + 2)[x ≔ 3]
= (3 + 2)
= 5
∎
-}
Currying
g : ℕ → (ℕ → ℕ)
g = λ x → (λ y → x + y)
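Partial application then yields a function directly; for instance (the name add3 is only illustrative):
add3 : ℕ → ℕ
add3 = g 3   -- behaves as λ y → 3 + y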
Polymorphism
variable A B C : Set
id : A → A
id x = x
_∘_ : (B → C) → (A → B) → (A → C)
f ∘ g = λ x → f (g x)
{-
(f ∘ f) 2
= f (f 2)
= (2 + 2) + 2
= 4 + 2
= 6
∎
-}
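As a small usage sketch (the names three and idFun are only illustrative), the type argument A of id is inferred at each use site:
three : ℕ
three = id 3

idFun : (ℕ → ℕ) → (ℕ → ℕ)
idFun = id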
Combinators
- Every pure λ-term can be translated into a combination of S and K.
K : A → B → A
K x y = x
S : (A → B → C) → (A → B) → A → C
S f g x = f x (g x)
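As an illustration of the claim above, S K K behaves like the identity function (a hand reduction in the style of the traces above, not checked by Agda):
{-
S K K x
= K x (K x)
= x
∎
-}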
η-reduction
λ x → f x ≡ f, a.k.a. η-equality
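A minimal check of η-equality on the earlier f', assuming the standard library's propositional equality (the name η-f' is only illustrative):
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

η-f' : f' ≡ (λ x → f' x)
η-f' = refl   -- holds definitionally, by η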
We write A × B : Set for products and A ⊎ B : Set for sums.
-- product defined as codata
record _×_ (A B : Set) : Set where
  constructor _,_
  field
    proj₁ : A
    proj₂ : B
open _×_
{-
note: this is the manual product definition, which is automated
by the `constructor _,_` declaration in the record above:
_,_ : A → B → A × B
a , b = record { proj₁ = a ; proj₂ = b }
-- elements of a record are literally its projections
-}
- Records are codata.
`curry` and `uncurry` witness that functions (→) are a right adjoint of ×; a sketch of both follows.
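A possible rendering, written against the _×_ record and its projections defined above:
curry : (A × B → C) → (A → B → C)
curry f a b = f (a , b)

uncurry : (A → B → C) → (A × B → C)
uncurry f p = f (proj₁ p) (proj₂ p)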
-- coproduct (also called sum) defined as data
data _⊎_ (A B : Set) : Set where
  inj₁ : A → A ⊎ B
  inj₂ : B → A ⊎ B
`case` and `uncase` witness that ⊎ is a left adjoint of the diagonal; a sketch of both is given after the list below.
- If functions are the right adjoint of ×, what is the left adjoint of ⊎?
- Data is defined by constructors such as ⊎
- Codata (record) is defined by destructors ×, →
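A possible rendering of case and uncase, written against the _⊎_ and _×_ defined above (the name uncase follows the text; it is not a standard-library name):
case : (A → C) → (B → C) → (A ⊎ B → C)
case f g (inj₁ a) = f a
case f g (inj₂ b) = g b

uncase : (A ⊎ B → C) → (A → C) × (B → C)
uncase h = (λ a → h (inj₁ a)) , (λ b → h (inj₂ b))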
Keep in mind that, for example, × can be defined by both data or codata, e.g.
-- product alternatively defined as data
data _×'_ (A B : Set) : Set where
  _,_ : A → B → A ×' B

proj₁' : A ×' B → A
proj₁' (a , b) = a
Thus, × can be defined either by saying what its projections are, or by specifying an exact constructor.
- Nullary product (also called unit or `()`) is a product with no projections.
-- the unit type
record ⊤ : Set where
  constructor tt
-- the empty type, with no elements
data ⊥ : Set where
case⊥ : ⊥ → C
case⊥ ()
- Note that the absurd pattern `()` in the clause for ⊥ is Agda's representation of an impossible case.
In classical logic, prop = Bool. In intuitionistic logic, a prop is instead the type of evidence for the proposition.
prop = Set -- a naive notion of evidence; see HoTT for hProp, which has at most one inhabitant
variable
  P Q R : prop
infixl 5 _∧_
_∧_ : prop → prop → prop
P ∧ Q = P × Q
infixl 3 _∨_
_∨_ : prop → prop → prop
P ∨ Q = P ⊎ Q
infixr 2 _⇒_
_⇒_ : prop → prop → prop
P ⇒ Q = P → Q
-- thus, types as propositions
True : prop
True = ⊤
False : prop
False = ⊥
¬ : prop → prop
¬ P = P ⇒ False
infix 1 _⇔_
_⇔_ : prop → prop → prop
P ⇔ Q = (P ⇒ Q) ∧ (Q ⇒ P)
An example
deMorgan : ¬ (P ∨ Q) ⇔ ¬ P ∧ ¬ Q
proj₁ deMorgan x = (λ p → x (inj₁ p)) , λ q → x (inj₂ q)
proj₂ deMorgan y (inj₁ p) = proj₁ y p
proj₂ deMorgan y (inj₂ q) = proj₂ y q
- Constructors / destructors are a bit more precise than traditional intro/elim rules; data/codata allow the derivation of what Altenkirch calls "co-eliminators".
-- Tertium non datur
TND : prop → prop
TND P = P ∨ ¬ P
-- the `deMorgan` dual shows the difference between boolean truth-table semantics and evidence-based semantics
deMorgan' : TND P → ¬ (P ∧ Q) ⇔ ¬ P ∨ ¬ Q
proj₁ (deMorgan' (inj₁ p)) h = inj₂ (λ q → h (p , q))
proj₁ (deMorgan' (inj₂ np)) h = inj₁ np
proj₂ (deMorgan' tnd) (inj₁ np) (p , q) = np p
proj₂ (deMorgan' _) (inj₂ nq) (p , q) = nq q
The disjunction ¬ P ∨ ¬ Q requires additional positive information: which of the two disjuncts holds.
A classical approach to proving P is to assume ¬ P and derive a contradiction, a.k.a. reductio ad absurdum.
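One way to state this as a proposition, in the style of TND above (a sketch; this exact formulation as double-negation elimination is an assumption):
-- reductio ad absurdum as a proposition
RAA : prop → prop
RAA P = ¬ (¬ P) ⇒ P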
- Note that the examples used to prove the relations between TND and RAA do not seem to compile, due to some issue with Set not being inductively defined.