linear algebra crash course / review
COS 350 - Computer Graphics
notation
matrix and vector notation
\[
M = \mat{
m_{11} & m_{12} & \ldots & m_{1N} \\
m_{21} & m_{22} & \ldots & m_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
m_{M1} & m_{M2} & \ldots & m_{MN} \\
} = \mat{m_{ij}}
\]
\[\v = \mat{v_1 \\ \vdots \\ v_M} = \mat{v_1 & \ldots & v_M}^T = \left( v_1, \ldots, v_M \right)\]
vectors use column matrix or ordered tuple
will use decorations to denote different "types" of vectors
linear algebra crash course / review
vector representation
vectors
2D/3D objects represented using mathematical vectors
point: location in space
\( \point{p}_\text{2D} = (p_x, p_y)\), \(\point{p}_\text{3D} = (p_x, p_y, p_z) \)
vector: direction and magnitude
\( \v_\text{2D} = (v_x, v_y)\), \(\v_\text{3D} = (v_x, v_y, v_z) \)
on board, may use: \( \overset{\rightharpoonup}{\v} \)
points vs vectors/directions (vs normals)
Although we represent points in vector notation, points and vectors/directions are very different types, and they should be treated differently.
There are ways to distinguish between point and vector types, which we will get to.
But know that most graphics systems or game engines simply use vector notation for all, and it is up to you (the coder) to know how to handle everything correctly.
Also, there is one other vector-like type that we will introduce later which is different from points and vectors: normals.
vector/point operations
addition: component-wise addition
\[\u + \v = \mat{u_1 + v_1 \\ \vdots \\ u_M + v_M}\]
Types:
\(\point{p} + \v = \point{q}\)
point + vector = point
\(\x + \y = \u\)
vector + vector = vector
vector/point operations
subtraction: component-wise subtraction
\[\u - \v = \mat{u_1 - v_1 \\ \vdots \\ u_M - v_M}\]
Types:
\(\point{p} - \v = \point{q}\)
point - vector = point
\(\x - \y = \u\)
vector - vector = vector
\(\point{p} - \point{q} = \v\)
point - point = vector
Note: the difference of two points is the vector from the 2nd point to the 1st point
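The addition and subtraction rules above can be sketched in plain Python. The tuple representation and helper names here are illustrative, not any particular engine's API:

```python
# Component-wise addition and subtraction on tuples (illustrative sketch).

def add(u, v):
    """Component-wise addition: point + vector, or vector + vector."""
    return tuple(a + b for a, b in zip(u, v))

def sub(u, v):
    """Component-wise subtraction: point - point gives the vector
    from the second point to the first."""
    return tuple(a - b for a, b in zip(u, v))

p = (1.0, 2.0, 3.0)   # point
v = (0.5, 0.0, -1.0)  # vector
q = add(p, v)         # point + vector = point -> (1.5, 2.0, 2.0)
d = sub(p, q)         # point - point = vector from q to p -> (-0.5, 0.0, 1.0)
```

Note that, as the slide warns, nothing in this representation stops you from adding two points; the type discipline is on the coder.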
vector/point operations
Note: adding two points does not make sense as a typed operation, but it can still be useful or convenient in practice.
For example, compute midpoint of \(\point{p}\) and \(\point{q}\):
incorrect:
\((\point{p} + \point{q}) / 2\)
correct:
\(\point{p} + (\point{q} - \point{p}) / 2\)
both forms compute the same coordinates, but only the latter is type-correct
CAUTION: notation is often abused, especially in systems that make no distinction between vectors and points
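The type-correct midpoint formula can be sketched as follows (points as tuples; the helper name is illustrative):

```python
# Midpoint of two points, computed the type-correct way:
# p + (q - p) / 2, i.e. point + vector = point.

def midpoint(p, q):
    return tuple(pi + (qi - pi) / 2 for pi, qi in zip(p, q))

m = midpoint((0.0, 0.0), (4.0, 2.0))  # -> (2.0, 1.0)
```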
vector/point operations
scaling: magnitude multiplication
\[\alpha\v = \mat{\alpha v_1 \\ \vdots \\ \alpha v_M}\]
result is vector
scaling vector by scalar magnifies the magnitude
not type-correct, but useful: scaling a point pushes it away from / toward the origin
Special cases:
scaling by \(-1\) reverses the direction of the vector and keeps its magnitude
scaling by \(0\) results in the 0-vector (undefined direction, \(0\) magnitude)
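Scaling and its special cases, as a minimal Python sketch (tuple representation and helper name are illustrative):

```python
# Scaling multiplies each component by the scalar alpha,
# magnifying the vector's magnitude by |alpha|.

def scale(alpha, v):
    return tuple(alpha * vi for vi in v)

v = (3.0, 4.0)
w = scale(2.0, v)    # -> (6.0, 8.0): same direction, twice the magnitude
r = scale(-1.0, v)   # -> (-3.0, -4.0): reversed direction, same magnitude
z = scale(0.0, v)    # -> (0.0, 0.0): the 0-vector
```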
vector/point operations
dot product: one way of "multiplying" two vectors
\[\u \* \v = \sum_i u_iv_i = u_1v_1 + \ldots + u_Mv_M = ||\u||\ ||\v||\ \ct\]
where \(\theta\) is angle between \(\u\) and \(\v\)
result is scalar
when \(\u\) and \(\v\) are orthogonal/perpendicular, \(\u \* \v\) results in \(0\)
Note: dot product uses a centered dot, which is commonly used when multiplying two scalars (ex: \(2 \cdot 3 \equiv 2 * 3\)).
If you represent each scalar as a 1D vector, then the dot product reduces to ordinary scalar multiplication, exactly as expected.
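Both forms of the dot product can be checked in a few lines of plain Python (names are illustrative):

```python
import math

# Dot product: sum of component-wise products, equal to ||u|| ||v|| cos(theta).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

u = (1.0, 0.0)
v = (0.0, 2.0)
s = dot(u, v)  # -> 0.0, because u and v are orthogonal

# Recovering the angle theta between two vectors from the dot product:
a = (1.0, 0.0)
b = (1.0, 1.0)
theta = math.acos(dot(a, b) / (math.hypot(*a) * math.hypot(*b)))  # -> pi/4
```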
vector/point operations
cross product: another way of "multiplying" vectors
\[\u \xx \v = \mat{u_2v_3 - u_3v_2 \\ u_3v_1 - u_1v_3 \\ u_1v_2 - u_2v_1}\]
defined only for 3D vectors
result is vector
has direction orthogonal to both input vectors, RHR
has length equal to \(||\u||\ ||\v||\ \st\) where \(\theta\) is angle between \(\u\) and \(\v\)
many other useful properties!
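A direct transcription of the cross-product formula (tuple representation is illustrative); crossing the x and y axes yields the z axis, as the right-hand rule predicts:

```python
# Cross product of two 3D vectors: result is orthogonal to both inputs
# (right-hand rule), with length ||u|| ||v|| sin(theta).

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

x = (1.0, 0.0, 0.0)
y = (0.0, 1.0, 0.0)
z = cross(x, y)  # -> (0.0, 0.0, 1.0)
```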
vector/point operations
component-wise product: yet another way of "multiplying" vectors
\[
\vector{a} * \vector{b} = \mat{a_1 \\ \vdots \\ a_M} * \mat{b_1 \\ \vdots \\ b_M} = \mat{a_1 * b_1 \\ \vdots \\ a_M * b_M}
\]
result is vector
most of the time, when you "multiply" two vectors, this is not the operation you will use
except if the vectors are representing colors (r,g,b)
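The color case is the common use: modulating a surface color by a light color, component by component. A sketch (values and helper name are illustrative):

```python
# Component-wise product, e.g. modulating an RGB surface color
# by an RGB light color.

def mul(a, b):
    return tuple(ai * bi for ai, bi in zip(a, b))

surface = (1.0, 0.5, 0.0)      # orange surface
light   = (0.5, 0.5, 0.5)      # dim white light
shaded  = mul(surface, light)  # -> (0.5, 0.25, 0.0)
```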
vector/point operations
length: magnitude of vector
\[||\v|| = \sqrt{\sum_i v_i^2} = \sqrt{v_1^2 + \ldots + v_M^2} = \sqrt{\v \* \v}\]
result is scalar
will sometimes simplify \(||\v||\) as \(|\v|\) (not absolute value)
Notes:
When representing a scalar in vector notation, the length is equal to the absolute value of the scalar.
Technically, a positive number has two square roots (positive and negative), but we take the positive one, since lengths are non-negative.
The equation above is the usual way of computing length/magnitude, but there are other ways...
vector/point operations
L1 norm or Manhattan norm or Taxicab norm
\[||\v||_1 = \sum_i |v_i| = |v_1| + \ldots + |v_M|\]
L2 norm or Euclidean norm (previous slide)
\[||\v||_2 = \sqrt{\sum_i v_i^2} = \sqrt{v_1^2 + \ldots + v_M^2} = \sqrt{\v \* \v}\]
L∞ norm or Uniform norm or Maximum norm
\[||\v||_\infty = \max_i |v_i| = \max(|v_1|, \ldots, |v_M|)\]
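The three norms can be evaluated side by side on one vector; for \(\v = (3, -4)\) they give three different "lengths" (function names are illustrative):

```python
import math

# L1, L2, and L-infinity norms of the same vector.

def norm_l1(v):
    return sum(abs(vi) for vi in v)             # Manhattan / taxicab

def norm_l2(v):
    return math.sqrt(sum(vi * vi for vi in v))  # Euclidean length

def norm_linf(v):
    return max(abs(vi) for vi in v)             # maximum norm

v = (3.0, -4.0)
# norm_l1(v) -> 7.0, norm_l2(v) -> 5.0, norm_linf(v) -> 4.0
```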
vector/point operations
normalize: a function that returns the direction of a given vector
\[\mathit{norm}(\v^+) = \frac{\v^+}{||\v^+||} = \hat{\v}\]
input: any non-zero vector (\(\mathit{norm}\) is undefined on 0-vector)
output: direction in same direction as given vector
recall \(|| \direction{v} || = 1\)
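Normalization is the length computation plus a component-wise divide; the zero-vector check mirrors the "undefined on 0-vector" caveat above (helper name is illustrative):

```python
import math

# Normalize: divide a non-zero vector by its length to get a unit direction.

def normalize(v):
    length = math.sqrt(sum(vi * vi for vi in v))
    if length == 0.0:
        raise ValueError("normalize is undefined on the zero vector")
    return tuple(vi / length for vi in v)

d = normalize((3.0, 4.0))  # -> (0.6, 0.8), which has length 1
```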
quiz: vector/point operations
Given:
\[ \u = \mat{ 2 \\ 3 } \qquad \v = \mat{ 4 \\ -1 }\]
What is the result of \(\u + \v\)?
\(8\)
\(\mat{6 \\ 2}\)
\(\mat{2 & 4 \\ 3 & -1}\)
\(\mat{6 & 2}\)
quiz: vector/point operations
Given:
\[ \u = \mat{2 \\ 3} \qquad \alpha = 4 \]
What is the result of \(\alpha \u\)?
\(20\)
\(\mat{8 \\ 3 }\)
\(\mat{8 \\ 12}\)
\(\mat{2 \\ 12}\)
linear algebra crash course / review
matrix representation
matrix operations
addition
\[T = M + N\]
\[\mat{t_{ij}} = \mat{m_{ij} + n_{ij}}\]
scalar multiplication
\[T = \alpha M\]
\[\mat{t_{ij}} = \mat{\alpha m_{ij}}\]
matrix operations
matrix-matrix multiplication
row-column multiplication
associative, not commutative
\[T = MN = \mat{t_{ij}} = \mat{\sum_k m_{ik} n_{kj}}\]
\[\mat{\color{red}{t_{11}} & t_{12} \\ t_{21} & t_{22}} = \mat{\color{red}{m_{11}} & \color{red}{m_{12}} \\ m_{21} & m_{22}} \mat{\color{red}{n_{11}} & n_{12} \\ \color{red}{n_{21}} & n_{22}}\]
\[\begin{array}{rcl}
\color{red}{t_{11}} & \color{red}{=} & \color{red}{m_{11} \cdot n_{11} + m_{12} \cdot n_{21}} \\
t_{12} & = & m_{11} \cdot n_{12} + m_{12} \cdot n_{22} \\
t_{21} & = & m_{21} \cdot n_{11} + m_{22} \cdot n_{21} \\
t_{22} & = & m_{21} \cdot n_{12} + m_{22} \cdot n_{22} \\
\end{array}\]
\[\mat{t_{11} & \color{red}{t_{12}} \\ t_{21} & t_{22}} = \mat{\color{red}{m_{11}} & \color{red}{m_{12}} \\ m_{21} & m_{22}} \mat{n_{11} & \color{red}{n_{12}} \\ n_{21} & \color{red}{n_{22}}}\]
\[\begin{array}{rcl}
t_{11} & = & m_{11} \cdot n_{11} + m_{12} \cdot n_{21} \\
\color{red}{t_{12}} & \color{red}{=} & \color{red}{m_{11} \cdot n_{12} + m_{12} \cdot n_{22}} \\
t_{21} & = & m_{21} \cdot n_{11} + m_{22} \cdot n_{21} \\
t_{22} & = & m_{21} \cdot n_{12} + m_{22} \cdot n_{22} \\
\end{array}\]
\[\mat{t_{11} & t_{12} \\ \color{red}{t_{21}} & t_{22}} = \mat{m_{11} & m_{12} \\ \color{red}{m_{21}} & \color{red}{m_{22}}} \mat{\color{red}{n_{11}} & n_{12} \\ \color{red}{n_{21}} & n_{22}}\]
\[\begin{array}{rcl}
t_{11} & = & m_{11} \cdot n_{11} + m_{12} \cdot n_{21} \\
t_{12} & = & m_{11} \cdot n_{12} + m_{12} \cdot n_{22} \\
\color{red}{t_{21}} & \color{red}{=} & \color{red}{m_{21} \cdot n_{11} + m_{22} \cdot n_{21}} \\
t_{22} & = & m_{21} \cdot n_{12} + m_{22} \cdot n_{22} \\
\end{array}\]
\[\mat{t_{11} & t_{12} \\ t_{21} & \color{red}{t_{22}}} = \mat{m_{11} & m_{12} \\ \color{red}{m_{21}} & \color{red}{m_{22}}} \mat{n_{11} & \color{red}{n_{12}} \\ n_{21} & \color{red}{n_{22}}}\]
\[\begin{array}{rcl}
t_{11} & = & m_{11} \cdot n_{11} + m_{12} \cdot n_{21} \\
t_{12} & = & m_{11} \cdot n_{12} + m_{12} \cdot n_{22} \\
t_{21} & = & m_{21} \cdot n_{11} + m_{22} \cdot n_{21} \\
\color{red}{t_{22}} & \color{red}{=} & \color{red}{m_{21} \cdot n_{12} + m_{22} \cdot n_{22}} \\
\end{array}\]
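The row-column rule walked through above can be sketched directly (list-of-rows representation is illustrative, not a library API):

```python
# Row-column matrix multiplication: t_ij = sum_k m_ik * n_kj.

def matmul(M, N):
    rows, inner, cols = len(M), len(N), len(N[0])
    assert len(M[0]) == inner, "inner dimensions must match"
    return [[sum(M[i][k] * N[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

M = [[1, 2],
     [3, 4]]
N = [[5, 6],
     [7, 8]]
T = matmul(M, N)  # -> [[19, 22], [43, 50]]
# Not commutative: matmul(N, M) -> [[23, 34], [31, 46]]
```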
matrix operations
matrix-vector multiplication
treat vector as column matrix
row-column multiplication
\[\u = M\v\]
\[\mat{\color{red}{u_{1}} \\ u_{2}} = \mat{\color{red}{m_{11}} & \color{red}{m_{12}} \\ m_{21} & m_{22}} \mat{\color{red}{v_{1}} \\ \color{red}{v_{2}}}\]
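Treating the vector as a column matrix, each output component is one row of \(M\) dotted with \(\v\). A minimal sketch (representation and helper name are illustrative):

```python
# Matrix-vector multiplication: u_i = sum_j m_ij * v_j.

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(len(v)))
                 for i in range(len(M)))

M = [[2, 0],
     [0, 3]]
u = matvec(M, (5.0, 7.0))  # -> (10.0, 21.0)
```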
matrix operations
transpose
\[T = M^T\]
\[\mat{t_{ij}} = \mat{m_{ji}}\]
inverse
important: not all matrices have an inverse
we will not compute inverse directly (expensive, can be numerically unstable)
\[T = M^{-1}, \quad MT = M M^{-1} = M^{-1} M = I\]
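Transpose is a pure index swap, which a sketch makes concrete (list-of-rows representation is illustrative; as noted above, we do not compute inverses directly):

```python
# Transpose swaps rows and columns: t_ij = m_ji.

def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

M = [[1, 2, 3],
     [4, 5, 6]]
T = transpose(M)  # -> [[1, 4], [2, 5], [3, 6]]
```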
special matrices
identity matrix
identity for multiplication: multiplying by \(I\) leaves any matrix unchanged
\[I = \mat{i_{ij}}, \quad i_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}\]
\[I = \mat{1 & 0 \\ 0 & 1}\]
\[\forall M : M = MI = IM\]
special matrices
zero matrix
\[O = \mat{o_{ij}}, \quad o_{ij} = 0\]
\[O = \mat{0 & 0 \\ 0 & 0}\]
\[\forall M : M = M + O = O + M\]
matrix operation properties
linearity of multiplication and addition
\[\alpha (A + B) = \alpha A + \alpha B\]
\[M(\alpha A + \beta B) = \alpha MA + \beta MB\]
associativity of multiplication
\[A(BC) = (AB)C\]
matrix operation properties
transpose and inverse of multiplication
\[(AB)^T = B^T A^T\]
\[(AB)^{-1} = B^{-1} A^{-1}\]
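Note how the factors swap order on the right-hand side. The transpose identity is easy to check numerically on a small example (list-of-rows representation and helper names are illustrative):

```python
# Numerically checking (AB)^T = B^T A^T on a 2x2 example.

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))]
            for i in range(len(M))]

def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]
lhs = transpose(matmul(A, B))             # (AB)^T
rhs = matmul(transpose(B), transpose(A))  # B^T A^T
# lhs == rhs == [[10, 20], [5, 11]]
```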
quiz: matrix operations
What is the result of \(AB\) if... ?
\[ A = \mat{ 1 & 2 \\ 3 & 4 \\ 5 & 6 } \qquad B = \mat{ 2 \\ 0 }\]
\(\mat{2 \\ 6 \\ 10}\)
\(\mat{2 & 4 \\ 6 & 8 \\ 10 & 12}\)
\(\mat{2 & 4}\)
Cannot multiply \(A\) and \(B\) (incompatible sizes)