Numerical Analysis / Midterm
🧭 CPSC
🔹 1. Floating-Point Representation & Error Analysis
1.1 Definition
A floating-point number has the form
[
x = \pm (d_0.d_1 d_2 \dots d_{t-1})_\beta \times \beta^E
]
where:
β — base (commonly 2 or 10)
t — precision (# of digits in mantissa)
E ∈ [L, U] — exponent range
The set of representable numbers is denoted F(β, t, L, U).
1.2 Key Properties
Machine epsilon (εₘ):
[
\varepsilon_m = \tfrac{1}{2}\,\beta^{1-t} \quad \text{(for rounding to nearest)}
]
Smallest ε such that fl(1 + ε) > 1. Relative representation error is bounded by εₘ.
Spacing: constant in a relative sense, not in an absolute sense,
i.e., the distance between consecutive representable numbers grows with magnitude.
1.3 Rounding and Chopping
| Method | Description | Max Relative Error |
|---|---|---|
| Round to nearest | Rounds to closest representable | ½ β^{1−t} |
| Chop (truncate) | Truncates digits beyond precision | β^{1−t} |

Round-to-nearest always yields the smaller average relative error.
1.4 Floating-Point Arithmetic Pitfalls
Cancellation error: subtraction of nearly equal numbers → significant digits lost.
Example: (1 + ε) − 1 may evaluate to 0 if ε < machine epsilon. Use algebraic reformulation (e.g., trigonometric identities or rationalized forms).
Overflow: |x| > largest representable number.
Underflow: |x| < smallest normalized number.
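These pitfalls are easy to demonstrate. A quick sketch (shown in Python for convenience; MATLAB doubles behave the same way, since both follow IEEE 754):

```python
import sys

eps = sys.float_info.epsilon        # machine epsilon for IEEE double (~2.2e-16)

# Absorption/cancellation: eps/2 added to 1 is lost entirely
print((1.0 + eps / 2) - 1.0)        # 0.0
print((1.0 + eps) - 1.0)            # eps: smallest visible perturbation of 1

# Overflow and underflow
print(sys.float_info.max * 2)       # inf (overflow)
print(sys.float_info.min / 2**60)   # 0.0 (underflow past the subnormals)
```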
1.5 Stability vs Conditioning
Conditioning: a property of the problem.
Well-conditioned ⇢ small Δinput → small Δoutput.
Ill-conditioned ⇢ small Δinput → large Δoutput.
Stability: a property of the algorithm.
Stable ⇢ behaves as if solving a slightly perturbed problem.
A stable algorithm can still yield poor results if the problem itself is ill-conditioned.
1.6 Typical NaN & Inf Behavior (MATLAB)
Examples:
0/0 % → NaN
Inf - Inf % → NaN
0*Inf % → NaN
1/0 % → Inf
🔹 2. Error Types and Convergence
| Type | Description | Example |
|---|---|---|
| Roundoff error | due to finite precision | computing (1 + ε) − 1 |
| Truncation error | from discretization/approximation | finite difference formula |
| Convergence error | stopping iteration early | iteration tolerance not met |

Total error = roundoff + truncation (+ convergence).
2.1 Discretization Example
Using the forward difference formula:
[
f'(x_0) \approx \frac{f(x_0 + h) - f(x_0)}{h}
]
Error: O(h) truncation, plus O(εₘ/h) roundoff.
Optimal h: h_{opt} ≈ √εₘ (balances the two error terms).
Central difference improves the truncation error to O(h²).
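The truncation/roundoff trade-off is easy to see empirically. A sketch (Python for convenience; sin/cos is an arbitrary choice of test function):

```python
import math

def forward_diff(f, x, h):
    # O(h) truncation error
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # O(h^2) truncation error
    return (f(x + h) - f(x - h)) / (2 * h)

x0, exact = 1.0, math.cos(1.0)       # d/dx sin(x) = cos(x)
for h in [1e-1, 1e-4, 1e-8, 1e-12]:
    e_fwd = abs(forward_diff(math.sin, x0, h) - exact)
    e_ctr = abs(central_diff(math.sin, x0, h) - exact)
    print(f"h={h:.0e}  fwd={e_fwd:.2e}  ctr={e_ctr:.2e}")
```

Plotting these errors against h on a loglog scale gives the classic U-shape: error falls as h shrinks until the roundoff term εₘ/h takes over near h ≈ √εₘ.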
🔹 3. Root-Finding Methods
3.1 Bisection Method
Requires a sign change over [a, b].
Each iteration halves the interval length:
[
|b_n - a_n| = \frac{|b_0 - a_0|}{2^n}
]
Convergence: linear with rate ½.
Always convergent if f(a)f(b) < 0.
Example: to guarantee |b_n − a_n| ≤ tol, take n ≥ log₂(|b₀ − a₀| / tol) iterations.
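A minimal bisection sketch (Python for convenience; cos on [0, 2] is an assumed test problem, with root π/2):

```python
import math

def bisect(f, a, b, tol=1e-10):
    # Requires a sign change: f(a)*f(b) < 0
    if f(a) * f(b) >= 0:
        raise ValueError("no sign change on [a, b]")
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:    # root lies in [a, m]
            b = m
        else:                   # root lies in [m, b]
            a = m
    return (a + b) / 2

root = bisect(math.cos, 0.0, 2.0)   # cos changes sign at pi/2
```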
3.2 Fixed-Point Iteration
Given f(x) = 0, rewrite it as x = g(x).
Iteration: x_{k+1} = g(x_k).
Fixed-Point Theorem:
If:
g continuous on [a, b],
g([a, b]) ⊆ [a, b],
|g′(x)| ≤ k < 1 on [a, b],
then a unique fixed point exists and convergence is linear with rate k.
If |g′(r)| > 1, divergence may occur.
Common mistake: choosing a g(x) whose derivative exceeds 1 in magnitude near the root.
Better: scale the equation or choose a g with a smaller derivative.
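A minimal fixed-point sketch (Python for convenience; x = cos(x) is an assumed illustration, which converges since |g′| = |sin| < 1 near the fixed point ≈ 0.739):

```python
import math

def fixed_point(g, x0, tol=1e-12, maxit=200):
    # x_{k+1} = g(x_k); linear convergence when |g'(r)| < 1
    for _ in range(maxit):
        x1 = g(x0)
        if abs(x1 - x0) < tol:
            return x1
        x0 = x1
    raise RuntimeError("no convergence within maxit")

r = fixed_point(math.cos, 1.0)      # solves x = cos(x); r ≈ 0.7391
```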
3.3 Newton’s Method
[
x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}
]
Quadratic convergence if the starting point is sufficiently close and f′(r) ≠ 0.
If f′ ≈ 0 near the root, the method may diverge. Requires evaluating the derivative each iteration.
Modified version (constant derivative d):
[
x_{k+1} = x_k - \frac{f(x_k)}{d}
]
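A minimal Newton sketch (Python for convenience; f(x) = x² − 2 with root √2 is an assumed test problem):

```python
def newton(f, fprime, x0, tol=1e-12, maxit=50):
    # x_{k+1} = x_k - f(x_k)/f'(x_k); quadratic near a simple root
    for _ in range(maxit):
        step = f(x0) / fprime(x0)
        x0 = x0 - step
        if abs(step) < tol:
            return x0
    raise RuntimeError("no convergence within maxit")

# Assumed test problem: f(x) = x^2 - 2, root sqrt(2)
r = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```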
3.4 Secant Method
[
x_{k+1} = x_k - f(x_k)\,\frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})}
]
Uses a finite-difference approximation of f′.
Superlinear convergence (order ≈ 1.618). Two starting guesses required.
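A minimal secant sketch (Python for convenience; f(x) = x² − 2 with guesses 1 and 2 is an assumed test problem):

```python
def secant(f, x0, x1, tol=1e-12, maxit=50):
    # Replaces f' in Newton's step by a finite-difference slope
    for _ in range(maxit):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("no convergence within maxit")

# Assumed test problem: f(x) = x^2 - 2, root sqrt(2)
r = secant(lambda x: x * x - 2, 1.0, 2.0)
```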
3.5 Comparison
| Method | Order | Requires derivative | Guaranteed convergence? |
|---|---|---|---|
| Bisection | 1 (linear) | No | Yes (given a sign change) |
| Fixed-point | 1 (linear) | Possibly (to check the rate) | If k < 1 (contraction) |
| Secant | ≈ 1.618 | No | No |
| Newton | 2 | Yes | If close enough |
3.6 Newton’s Failure Example
Newton fails when f′(x_k) ≈ 0 at an iterate: the step −f(x_k)/f′(x_k) becomes huge and the iteration can overshoot, diverge, or cycle. Flat regions and inflection points near the root are the typical culprits.
🔹 4. Numerical Conditioning & Reformulation
4.1 Well vs Ill-conditioned
Example: f(x) = (e^x − 1)/x suffers cancellation for small x, since e^x − 1 loses leading digits.
Reformulate:
[
\frac{e^x - 1}{x} = 1 + \frac{x}{2} + \frac{x^2}{6} + \dots \quad \text{(series for small } x\text{)}
]
Numerically stable for small x (or use a library routine such as expm1).
4.2 Catastrophic Cancellation
Avoid expressions like √(x+1) − √x for large x.
Reformulate using the conjugate: √(x+1) − √x = 1 / (√(x+1) + √x).
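Both reformulations can be checked numerically. A sketch (Python for convenience; `math.expm1` computes e^x − 1 without the cancellation):

```python
import math

# exp(x) - 1 for small x: math.expm1 avoids the cancellation
x = 1e-12
naive = (math.exp(x) - 1) / x       # loses roughly 4 digits at this x
stable = math.expm1(x) / x          # accurate: ~1 + x/2

# sqrt(x+1) - sqrt(x) for large x: conjugate form avoids the subtraction
a = 1e12
naive2 = math.sqrt(a + 1) - math.sqrt(a)
stable2 = 1.0 / (math.sqrt(a + 1) + math.sqrt(a))
```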
🔹 5. Linear Algebra Refresher (for short answers)
5.1 Orthogonal Matrices
Q^T Q = I; columns (and rows) are orthonormal.
Properties:
||Qx||₂ = ||x||₂, det(Q) = ±1, nonsingular, inverse = Qᵀ.
5.2 Diagonalizability
A is diagonalizable if ∃P: P^{−1}AP = D. Sufficient conditions:
n distinct eigenvalues, or
A symmetric (real case).
5.3 Norm and Trace
Frobenius norm:
[
||A||_F = \sqrt{\sum_{ij} |a_{ij}|^2} = \sqrt{\mathrm{tr}(A^T A)} = \sqrt{\sum_i \sigma_i^2}
]
Trace identity: tr(AB) = tr(BA).
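The entrywise and trace forms of the norm can be checked on a tiny example (Python for convenience; the 2×2 matrix is an arbitrary choice):

```python
import math

# Check ||A||_F = sqrt(tr(A^T A)) on a small example
A = [[1.0, 2.0],
     [3.0, 4.0]]

fro_entries = math.sqrt(sum(a * a for row in A for a in row))
AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
fro_trace = math.sqrt(AtA[0][0] + AtA[1][1])
# both equal sqrt(1 + 4 + 9 + 16) = sqrt(30)
```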
5.4 Permutation Matrices
Obtained by permuting rows/cols of identity.
Orthogonal: P^T P = I, and ||PA||_F = ||A||_F.
5.5 Skew-Symmetric Matrices
Aᵀ = −A ⇒ eigenvalues purely imaginary (±iλ). Singular values = |λ|.
🔹 6. Numerical Integration Recurrence Example (Assignment 1)
Given a linear recurrence for u_n, stability depends on the factor multiplying the previous term: the computation is stable when that factor has magnitude ≤ 1, and unstable when it exceeds 1, since roundoff errors are then amplified geometrically at every step.
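A classic demonstration of this effect (an assumed stand-in, not necessarily the assignment's recurrence): Iₙ = ∫₀¹ xⁿ/(x+5) dx satisfies Iₙ = 1/n − 5Iₙ₋₁, with I₀ = ln(6/5).

```python
import math

# Forward recurrence I_n = 1/n - 5*I_{n-1}: each step multiplies the
# error by -5, so roundoff in I_0 = ln(6/5) is amplified by 5^25.
I = math.log(6 / 5)
for n in range(1, 26):
    I = 1 / n - 5 * I
print(I)                 # grossly wrong; true I_25 is about 0.007

# Backward recurrence I_{n-1} = (1/n - I_n)/5: each step divides the
# error by 5, so even a crude guess for I_30 yields an accurate I_20.
J = 0.0
for n in range(30, 20, -1):
    J = (1 / n - J) / 5
print(J)                 # accurate: bounds give 1/126 < I_20 < 1/105
```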
🔹 7. MATLAB Vectorization Review
7.1 Vector Operations
| Goal | MATLAB Command |
|---|---|
| Element-wise product | x .* y |
| Inner product | x' * y |
| Diagonal matrix from vector | diag(v) |
| Outer product | x * y' |
7.2 Control & Plotting
loglog(h,err,'-*')
semilogy(iter, abs(x - xstar))
legend('Bisection','Newton','Secant')
Use loglog for power-law error scaling (the slope gives the convergence order).
7.3 Symbolic & VPA
digits(n) sets variable-precision arithmetic to n significant digits. Useful to analyze roundoff vs true error.
Compare double vs single vs vpa precision.
7.4 Common Numerical Checks
if abs(x_new - x_old) < tol
    disp('Converged')
end
Use unique() to remove duplicates (for minima problems).
Always confirm the algorithm stops when |Δx| < tolerance.
🔹 8. Convergence Speed Summary
| Method | Error Reduction per Iteration | Notes |
|---|---|---|
| Bisection | ½ | guaranteed, slow |
| Fixed point | depends on g′(r) | linear |
| Secant | superlinear (≈ 1.618) | no derivative needed |
| Newton | quadratic | fast, needs derivative |
| Modified Newton (constant d) | linear | slower but simpler |
🔹 9. Typical Conceptual Questions
Difference between a stable algorithm and a well-conditioned problem.
Identify which formula causes cancellation and how to fix it.
Compute number of bisection iterations needed for desired tolerance.
Explain why Newton’s method may fail for flat derivatives.
Describe relationship between trace and Frobenius norm.
State conditions for convergence of fixed-point iteration.
Evaluate effect of chopping vs rounding on floating-point representation.
Determine condition number interpretation (sensitivity).
✅ Key Mental Checklist for Exam
Can I classify the type of error (roundoff, truncation, convergence)?
Can I justify whether an algorithm is stable and/or convergent?
Do I know conditions on derivatives that ensure convergence?
Can I apply floating-point rules to predict overflow, underflow, or NaN?
Do I know orthogonality, trace, Frobenius norm, and SVD relations?
Can I recognize or derive U-shaped error vs h plots?
Can I vectorize any MATLAB expression without loops?