Wednesday, April 23, 2025

Unit 2 - Image Quantization and Image Transforms

1. Sampling Theorem

 Definition:

The Sampling Theorem (Shannon-Nyquist) states that a continuous signal can be perfectly reconstructed from its samples if it is sampled at a rate at least twice the maximum frequency present in the signal.

 Formula:

f_s \geq 2 f_{max}

where:

  • f_s = sampling frequency

  • f_{max} = highest frequency component in the signal

 Application:

Used in digitizing analog images to ensure no information is lost during sampling.
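
As a quick illustration, here is a minimal NumPy sketch of sampling a 1-D sinusoid above and below the Nyquist rate; the signal frequency and the two sampling rates are arbitrary example values:

```python
import numpy as np

f_max = 10.0                       # highest frequency in the signal (Hz), example value
for f_s in (40.0, 12.0):           # 40 Hz satisfies f_s >= 2*f_max; 12 Hz does not
    t = np.arange(0, 1, 1 / f_s)                    # sample instants over one second
    samples = np.sin(2 * np.pi * f_max * t)         # sampled sinusoid
    spectrum = np.abs(np.fft.rfft(samples))         # spectrum of the sampled signal
    freqs = np.fft.rfftfreq(len(samples), d=1 / f_s)
    print(f"f_s = {f_s:4.0f} Hz -> dominant frequency ~ {freqs[spectrum.argmax()]:.0f} Hz")
```

Sampling at 40 Hz recovers the 10 Hz component; at 12 Hz the same sinusoid shows up at about 2 Hz, i.e. it aliases.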


2. Anti-Aliasing

Aliasing:

Occurs when sampling is done below the Nyquist rate, leading to overlapping frequency components and distortion.

Anti-Aliasing:

A process to suppress high frequencies before sampling using low-pass filters to prevent aliasing.
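
A minimal sketch of anti-aliased downsampling, assuming SciPy is available: apply a Gaussian low-pass filter before discarding every other row and column (the filter width is a tunable assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(img, antialias=True):
    """Halve the image size; optionally low-pass filter first to suppress
    frequencies above the new, lower Nyquist limit."""
    if antialias:
        img = gaussian_filter(img.astype(float), sigma=1.0)  # sigma=1.0 is an assumed value
    return img[::2, ::2]

img = np.random.rand(256, 256)     # stand-in for a real grayscale image
print(downsample(img).shape)       # (128, 128)
```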


3. Image Quantization

 Definition:

Process of mapping a range of continuous pixel values to a finite number of levels.

 Types:

  • Scalar Quantization: Each pixel is quantized independently.

  • Vector Quantization: Blocks of pixels are quantized together.

Quantization Error:

\text{Error} = \text{Original Pixel} - \text{Quantized Pixel}

Too few levels → loss of detail, visible banding.
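
A minimal sketch of uniform scalar quantization and the resulting error, assuming an 8-bit grayscale input:

```python
import numpy as np

def quantize(img, levels):
    """Uniformly map pixel values 0-255 onto `levels` representative values."""
    step = 256 / levels
    return (np.floor(img / step) * step + step / 2).astype(np.uint8)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
q = quantize(img, levels=8)                 # few levels -> visible banding
error = img.astype(int) - q.astype(int)     # Error = Original Pixel - Quantized Pixel
print("max |error|:", np.abs(error).max())  # bounded by half the step size (16 here)
```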


4. Orthogonal and Unitary Transforms

 Orthogonal Transform:

A linear transformation using an orthogonal matrix T where:

T^T T = I

  • Preserves energy.

  • Examples: DFT, DCT, Haar, Hadamard.

 Unitary Transform:

A generalization using complex numbers:

T^H T = I

where T^H is the conjugate transpose of T.
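
As a small numerical check of the unitary condition, the normalized DFT matrix satisfies T^H T = I (illustrative sketch):

```python
import numpy as np

N = 8
n = np.arange(N)
# Normalized DFT matrix: T[k, m] = exp(-j*2*pi*k*m/N) / sqrt(N)
T = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
print(np.allclose(T.conj().T @ T, np.eye(N)))   # True -> unitary, so energy is preserved
```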


5. Discrete Fourier Transform (DFT)

Formula:

F(u,v) = \sum_x \sum_y f(x,y) \cdot e^{-j2\pi \left(\frac{ux}{M} + \frac{vy}{N}\right)}

  • Converts spatial image to frequency domain.

  • Captures periodic patterns.

  • Used in filtering, compression.
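
A hedged sketch of the 2-D DFT using NumPy's FFT routines on a stand-in image:

```python
import numpy as np

img = np.random.rand(128, 128)            # stand-in for a grayscale image f(x, y)
F = np.fft.fft2(img)                      # F(u, v): spatial -> frequency domain
F_shifted = np.fft.fftshift(F)            # move the zero-frequency term to the centre
magnitude = np.log1p(np.abs(F_shifted))   # log-scaled magnitude spectrum for viewing
img_back = np.fft.ifft2(F).real           # inverse transform recovers the image
print(np.allclose(img, img_back))         # True
```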


6. Discrete Cosine Transform (DCT)

Formula (1D):

X_k = \sum_{n=0}^{N-1} x_n \cdot \cos\left[\frac{\pi}{N}(n + 0.5)k\right]

  • Like DFT, but uses only real cosine terms.

  • Energy compaction is high → used in JPEG compression.
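
A small sketch checking the 1-D formula above against SciPy's DCT-II (SciPy's unnormalized DCT-II is exactly twice this sum), assuming SciPy is available:

```python
import numpy as np
from scipy.fft import dct

x = np.random.rand(8)
N = len(x)
n = np.arange(N)

# Direct evaluation of X_k = sum_n x_n * cos(pi/N * (n + 0.5) * k)
X = np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k)) for k in range(N)])

print(np.allclose(2 * X, dct(x, type=2, norm=None)))   # True
```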


7. Hadamard Transform

 Properties:

  • Uses only +1 and −1 (binary values).

  • Fast to compute (no multiplications).

  • Not based on sinusoidal functions.

 Matrix:

Hadamard matrix is recursively defined:

H_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \quad H_{2^n} = \begin{bmatrix} H_{2^{n-1}} & H_{2^{n-1}} \\ H_{2^{n-1}} & -H_{2^{n-1}} \end{bmatrix}
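
A minimal sketch building Hadamard matrices from this recursion with a Kronecker product:

```python
import numpy as np

def hadamard(order):
    """Return the 2**order x 2**order Hadamard matrix via the block recursion above."""
    H = np.array([[1, 1], [1, -1]])
    for _ in range(order - 1):
        H = np.kron(np.array([[1, 1], [1, -1]]), H)   # [[H, H], [H, -H]] block structure
    return H

H8 = hadamard(3)
print(np.allclose(H8 @ H8.T, 8 * np.eye(8)))   # rows are orthogonal; entries are only +1/-1
```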


8. Haar Transform

 Properties:

  • Simplest wavelet transform.

  • Breaks signal into approximation and detail parts.

  • Useful for multi-resolution analysis.

 Steps:

  • Divide signal into pairs.

  • Calculate average and difference.

  • Recurse on averages.
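
A minimal sketch of one Haar level following these steps (plain average/half-difference convention, an assumption; other texts scale by √2):

```python
import numpy as np

def haar_step(signal):
    """One Haar level: pairwise averages (approximation) and half-differences (detail)."""
    a, b = signal[0::2], signal[1::2]
    return (a + b) / 2, (a - b) / 2

x = np.array([9.0, 7.0, 3.0, 5.0])
approx, detail = haar_step(x)            # approx = [8, 4], detail = [1, -1]
approx2, detail2 = haar_step(approx)     # recurse on the averages: [6], [2]
print(approx, detail, approx2, detail2)
```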


9. Karhunen-Loeve Transform (KLT / PCA)

 Definition:

A statistical transform that decorrelates data. Also known as Principal Component Analysis (PCA).

 Steps:

  1. Calculate covariance matrix.

  2. Compute eigenvalues and eigenvectors.

  3. Transform data using eigenvectors.

 Advantage:

  • Optimal energy compaction.

  • Basis vectors are data-dependent.

  • Used in face recognition, compression.
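
A minimal sketch of the three KLT/PCA steps on made-up data:

```python
import numpy as np

X = np.random.rand(100, 4)                    # 100 samples, 4 features (made-up data)
Xc = X - X.mean(axis=0)                       # center the data first

cov = np.cov(Xc, rowvar=False)                # 1. covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)        # 2. eigenvalues / eigenvectors (symmetric matrix)
order = np.argsort(eigvals)[::-1]             #    sort by decreasing variance
eigvecs = eigvecs[:, order]

Y = Xc @ eigvecs                              # 3. transform the data using the eigenvectors
print(np.round(np.cov(Y, rowvar=False), 6))   # off-diagonal entries ~ 0: decorrelated
```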

Unit 1 - Introduction to Image Processing

 

What is Image Processing?

Image processing is a method to perform operations on an image to enhance it or extract useful information. It is a type of signal processing where the input is an image, and the output may be either an image or characteristics/features associated with that image.

Goals of Image Processing

  • Image Enhancement: Improving visual appearance (e.g., contrast, sharpness)

  • Image Restoration: Removing noise or distortion

  • Image Compression: Reducing the amount of data required to represent an image

  • Feature Extraction: Identifying objects, edges, or patterns

  • Image Analysis: Understanding and interpreting image content

  • Object Recognition: Detecting and identifying objects in an image

What is an Image?

An image is a two-dimensional function f(x, y), where x and y are spatial coordinates and f is the intensity (brightness or color) at that point. For digital images, x, y, and f are all finite and discrete.

Types of Image Representation

  1. Spatial Domain Representation: Direct representation using pixel intensity values in a grid.

  2. Frequency Domain Representation: Using transforms like Fourier to represent the image in terms of its frequency components.

Types of Images

  • Binary Image: Only black and white (pixel values: 0 or 1)

  • Grayscale Image: Shades of gray (pixel values: 0 to 255)

  • Color Image: Consists of multiple channels, commonly RGB (Red, Green, Blue)

  • Indexed Image: Uses a colormap or palette to store color information

Image Models

  1. Geometric Model: Describes the shape and position of image elements.

  2. Photometric Model: Describes the brightness/intensity or color of each point.

  3. Color Models:

    • RGB: Red, Green, Blue components

    • HSV: Hue, Saturation, Value

    • YCbCr: Used in video compression

    • CMYK: Used in printing

Resolution

  • Spatial Resolution: Amount of detail in an image (measured in pixels)

  • Gray-level Resolution: Number of distinct gray levels available (e.g., 8-bit = 256 levels)

Image Size

  • Described in terms of width × height × number of channels (e.g., 512 × 512 × 3 for RGB)

2D Linear System

  • A 2D linear system in image processing refers to a system where the output image is a linear transformation of the input image, usually involving operations like convolution.

  • Linearity implies two properties:

    1. Additivity: T[f1 + f2] = T[f1] + T[f2]

    2. Homogeneity (Scaling): T[a·f] = a·T[f]

  • Spatial Invariance: The system's response doesn’t change when the input is shifted.

  • Example: Applying a kernel (filter) over an image using convolution is a classic example of a 2D linear system:

    g(x, y) = \sum_m \sum_n h(m, n) \cdot f(x - m, y - n)
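
A short sketch of this convolution using SciPy, with a 3×3 averaging kernel as an assumed example filter:

```python
import numpy as np
from scipy.signal import convolve2d

f = np.random.rand(64, 64)               # input image f(x, y)
h = np.ones((3, 3)) / 9.0                # 3x3 averaging kernel h(m, n) (example choice)

g = convolve2d(f, h, mode='same', boundary='symm')   # g = h * f, same size as the input
print(g.shape)                                       # (64, 64)
```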

Luminance

  • The measured intensity of light emitted or reflected from a surface in a given direction.

  • Closely related to the perceived brightness, but it's a physical quantity.

  • Important in grayscale and color image processing.

Contrast

  • The difference in luminance or color that makes an object distinguishable from others or the background.

  • High contrast makes features pop; low contrast makes the image appear flat.

  • Often enhanced using techniques like contrast stretching or histogram equalization.

Brightness

  • A subjective visual perception of how much light an image appears to emit or reflect.

  • Can be increased by adding a constant to all pixel intensities.
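
A tiny sketch of a brightness shift (add a constant) and a simple contrast stretch about the mean on an 8-bit image; the offset and gain values are arbitrary:

```python
import numpy as np

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

brighter = np.clip(img.astype(int) + 40, 0, 255).astype(np.uint8)        # brightness: add a constant
mean = img.mean()
stretched = np.clip((img - mean) * 1.5 + mean, 0, 255).astype(np.uint8)  # contrast: scale about the mean
```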

Color Representation

Images can be represented using various color models, each suitable for different applications:

RGB (Red, Green, Blue)

  • Additive color model (used in screens).

  • Each color is a mix of Red, Green, and Blue components.

CMY/CMYK (Cyan, Magenta, Yellow, Key/Black)

  • Subtractive color model (used in printing).

HSV (Hue, Saturation, Value)

  • Hue: Color type (0° to 360°)

  • Saturation: Color purity

  • Value: Brightness of the color

YUV / YCbCr

  • Used in video processing.

  • Separates brightness (Y) from color information (U and V or Cb and Cr).
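
As a sketch of this separation, the full-range BT.601 weights (the convention used by JPEG/JFIF) convert RGB to YCbCr:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr (the JPEG/JFIF convention)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    y  = 0.299 * r + 0.587 * g + 0.114 * b              # luma
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b    # blue-difference chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b    # red-difference chroma
    return np.stack([y, cb, cr], axis=-1)

rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
print(rgb_to_ycbcr(rgb).shape)   # (4, 4, 3)
```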

Visibility Functions

  • Visibility functions describe how sensitive the human eye is to different spatial frequencies.

  • The Contrast Sensitivity Function (CSF) is a common example. It shows that humans are:

    • Most sensitive to mid-range spatial frequencies

    • Less sensitive to very low or very high frequencies

  • Important in compression algorithms and display optimization.


Monochrome and Color Vision Models

Monochrome Vision Model

  • Uses only intensity (luminance) values.

  • No color, only grayscale from black to white.

  • Basis of early vision systems and useful in medical/scientific imaging.

Color Vision Model

  • Based on how the human eye perceives color using three types of cones:

    • L (long wavelengths) → Red

    • M (medium) → Green

    • S (short) → Blue

  • Color models (like RGB, HSV) are built around this biological model.

  • Opponent Process Theory: Human vision processes color differences (Red-Green, Blue-Yellow) rather than absolute colors.


Shannon Nyquist Theorem

 What should be the ideal size of a pixel? Should it be big or small?

The answer is given by the Shannon-Nyquist theorem. As per this theorem, the sampling frequency should be greater than or equal to 2 f_{max}, where f_{max} is the highest spatial frequency present in the image.


Subsampling

The key idea in image sub-sampling is to throw away every other row and column to create a half-size image. When the sampling rate gets too low, we are not able to capture the details in the image anymore.

Instead, we should have a minimum signal/image rate, called the Nyquist rate.

Using Shannon's sampling theorem, the minimum sampling rate should satisfy f_s \geq 2 f_{max}.
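
A one-line NumPy illustration of the naive half-size subsampling described above; without the low-pass filter sketched under Anti-Aliasing above, fine detail can alias:

```python
import numpy as np

img = np.random.rand(256, 256)   # stand-in for a grayscale image
half = img[::2, ::2]             # throw away every other row and column
print(half.shape)                # (128, 128)
```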

Simple Image Model

I(x, y, l) = σ(x, y, l) × L(l)

where σ(x, y, l) is the reflectance of the object at point (x, y) for wavelength l, and L(l) is the spectral distribution of the light source.

Mach Bands

The Mach band effect is a phenomenon caused by lateral inhibition among the retinal receptors (rods and cones): the visual system exaggerates the perceived brightness change at sharp intensity transitions, producing illusory light and dark bands near edges.


Components of Digital Camera

The essential components of a digital camera are as follows:

  • A sensor subsystem to capture the image; it uses photodiodes to convert light energy into electrical signals.

  • A subsystem that converts the analog signals to digital data.

  • A storage subsystem for storing the captured images.

A digital camera also has an immediate feedback system (a display) to view the captured image, and it can be connected to a computer through a cable to transfer images to the computer system.

Tuesday, April 22, 2025

Digital Imaging System

  A digital imaging system is a set of devices for acquiring, storing, manipulating, and transmitting digital images.

Components:

  1. Human beings perceive objects because of light. Light sources are of two types: primary and secondary.
  2. The sun and lamps are examples of primary light sources.

Wavelength- The distance between two successive wave crests (or troughs) in the direction of travel.

Amplitude- The maximum distance the oscillation travels away from its horizontal axis.

Frequency- The number of waves crossing a point per unit time.

Modes of Imaging

Reflective Mode Imaging

 Reflective mode imaging is the simplest form of imaging: a radiation source illuminates the object and a sensor captures the reflected radiation to form the digital image. All video cameras, digital cameras, and scanners use some type of sensor for capturing the image.

Emissive Type Imaging 

Emissive type imaging is the second type, where images are acquired from self-luminous objects without the help of an external radiation source. The radiation emitted by the object is directly captured by the sensor to form an image. Thermal imaging is an example of emissive type imaging.

Transmissive Imaging

Transmissive imaging is the third type, where the radiation source illuminates the object and part of the radiation passes through it. How much is absorbed depends on the nature of the material; the attenuated radiation that passes through is captured by the sensor to form the image.

Image Processing Environment

Radiation source → object (reflective mode) / self-luminous object (emissive mode) / transparent object (transmissive mode) → Sensor → Analog signal → Digitizer → Digital computer
