1. Sampling Theorem
▶ Theory:
The Sampling Theorem, also known as the Nyquist-Shannon Sampling Theorem, is the foundation of digital signal and image processing. It states that a band-limited analog signal can be perfectly reconstructed from its samples if it is sampled at a rate greater than twice the maximum frequency present in the signal (the Nyquist rate).
▶ In image processing:
Images are sampled in both horizontal and vertical directions. Insufficient sampling leads to loss of detail and aliasing.
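The theorem can be demonstrated numerically. The sketch below (a minimal NumPy illustration; the helper name `dominant_freq` is made up for this example) samples a 5 Hz sine at two rates: above the Nyquist rate the correct frequency is recovered, while below it the tone "folds" down to an alias.

```python
import numpy as np

f_signal = 5.0          # signal frequency in Hz
duration = 2.0          # seconds of signal

def dominant_freq(fs):
    """Sample a 5 Hz sine at rate fs and return the strongest frequency seen."""
    n = int(fs * duration)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs[np.argmax(spectrum)]

# Sampling above the Nyquist rate (2 * 5 = 10 Hz) recovers the true frequency.
print(dominant_freq(50.0))  # 5.0
# Sampling below it makes the sine masquerade as a lower frequency (aliasing):
# 5 Hz sampled at 8 Hz folds to 8 - 5 = 3 Hz.
print(dominant_freq(8.0))   # 3.0
```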
2. Anti-Aliasing
▶ Theory:
Aliasing is the effect of different signals becoming indistinguishable when sampled, leading to visual artifacts like moiré patterns. It occurs when the sampling rate is too low.
Anti-aliasing techniques involve pre-filtering the image using a low-pass filter to remove high-frequency components before sampling. This ensures the sampled image retains important features without distortion.
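As a rough sketch of this idea (using a simple moving average as a crude stand-in for a proper low-pass filter), pre-filtering before downsampling attenuates the high-frequency component that would otherwise alias into the pass band:

```python
import numpy as np

fs, f_hi = 100.0, 40.0            # original rate and a high-frequency tone
n = 200
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_hi * t)  # 40 Hz tone, above the new Nyquist limit

factor = 4                        # downsample to 25 Hz (Nyquist limit 12.5 Hz)

# Naive decimation: the 40 Hz tone aliases into the pass band at full strength.
naive = x[::factor]

# Pre-filter with a length-4 moving average (a crude low-pass), then decimate.
kernel = np.ones(factor) / factor
filtered = np.convolve(x, kernel, mode="same")[::factor]

def peak_amplitude(y):
    """Height of the largest spectral peak."""
    return np.abs(np.fft.rfft(y)).max()

# The aliased peak is much weaker after pre-filtering.
print(peak_amplitude(filtered) < peak_amplitude(naive))  # True
```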
3. Image Quantization
▶ Theory:
Quantization is the process of mapping continuous values into a finite set of discrete values. In images, this usually refers to reducing the number of gray levels or color values.
- Spatial Quantization → Reduces resolution.
- Intensity Quantization → Reduces the number of brightness levels.
▶ Example:
For an 8-bit image, intensity values range from 0–255. Reducing it to 4 bits maps all values into 16 levels.
Quantization introduces errors (quantization noise), but techniques such as dithering or careful placement of the levels can keep the perceived quality high.
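The 8-bit to 4-bit example above can be sketched in a few lines of NumPy (uniform quantization by integer division; the variable names are illustrative):

```python
import numpy as np

img = np.arange(256, dtype=np.uint8)      # stand-in for 8-bit pixel values

bits = 4                                  # target bit depth
levels = 2 ** bits                        # 16 levels
step = 256 // levels                      # 16 input values per level

# Map each 8-bit value down to one of 16 levels, then scale back to 0-255
# so the quantization error shows up as a value difference.
quantized = (img // step) * step

print(len(np.unique(quantized)))                              # 16
print(np.abs(img.astype(int) - quantized.astype(int)).max())  # 15
```

The maximum error of 15 is the quantization noise: every input value inside a 16-wide bin collapses to the bin's lowest value.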
4. Orthogonal and Unitary Transforms
▶ Orthogonal Transforms:
- Orthogonal transforms use basis vectors that are mutually perpendicular.
- They preserve energy and allow lossless transformations.
- Examples: DCT, DFT, Haar, Hadamard.
▶ Unitary Transforms:
- A unitary matrix is the complex counterpart of an orthogonal matrix.
- It satisfies U^H U = I, where U^H is the conjugate transpose of U and I is the identity matrix.
- Useful in transforms involving complex values, e.g., the DFT.
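Both properties can be checked directly on the normalized DFT matrix (a small NumPy sketch; the construction below builds the matrix explicitly rather than calling an FFT routine):

```python
import numpy as np

n = 8
m, k = np.meshgrid(np.arange(n), np.arange(n))
# Normalized DFT matrix: U[m, k] = exp(-2*pi*i*m*k/n) / sqrt(n)
U = np.exp(-2j * np.pi * m * k / n) / np.sqrt(n)

# Unitary check: the conjugate transpose of U times U is the identity.
print(np.allclose(U.conj().T @ U, np.eye(n)))   # True

# Energy preservation (Parseval): ||U x|| equals ||x||.
x = np.random.default_rng(0).standard_normal(n)
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))  # True
```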
5. Discrete Fourier Transform (DFT)
▶ Theory:
The DFT transforms a signal or image from the spatial domain to the frequency domain. It represents the image in terms of its frequency components, where low frequencies describe smooth areas, and high frequencies describe edges and noise.
▶ Applications:
- Image filtering
- Image compression
- Frequency analysis
6. Discrete Cosine Transform (DCT)
▶ Theory:
The DCT expresses an image as a sum of cosine functions oscillating at different frequencies. It’s similar to the DFT but uses only cosine components, making it real-valued and more efficient.
▶ Advantage:
- DCT is highly efficient in energy compaction, making it ideal for image compression, e.g., JPEG.
7. Hadamard Transform
▶ Theory:
The Hadamard transform uses a matrix with only +1 and -1 values and operates on image data using simple addition and subtraction. It is orthogonal and fast to compute.
▶ Use:
- Image compression
- Pattern recognition
8. Haar Transform
▶ Theory:
The Haar transform is the earliest wavelet transform. It represents data as a set of averages and differences, making it ideal for multi-resolution analysis (processing the image at multiple scales).
▶ Properties:
- Simple and fast
- Good for edge detection
- Used in image compression and analysis
9. Karhunen-Loeve Transform (KLT) / PCA
▶ Theory:
KLT is a statistical method that transforms data into a set of uncorrelated variables using eigenvalue decomposition. It is data-dependent and optimal for decorrelation and energy compaction.
▶ Steps:
- Compute the covariance matrix.
- Calculate eigenvectors and eigenvalues.
- Project the image onto these eigenvectors.
▶ Applications:
- Face recognition (Eigenfaces)
- Compression
- Dimensionality reduction