
Unit 1: Introduction to Image Processing

 

What is Image Processing?

Image processing is the technique of performing operations on an image in order to enhance it or to extract useful information from it. It is a form of signal processing in which the input is an image and the output is either an image or a set of characteristics/features associated with that image.

Goals of Image Processing

  • Image Enhancement: Improving visual appearance (e.g., contrast, sharpness)

  • Image Restoration: Removing noise or distortion

  • Image Compression: Reducing the amount of data required to represent an image

  • Feature Extraction: Identifying objects, edges, or patterns

  • Image Analysis: Understanding and interpreting image content

  • Object Recognition: Detecting and identifying objects in an image

What is an Image?

An image is a two-dimensional function f(x, y), where x and y are spatial coordinates, and f is the intensity (brightness or color) at that point. For a digital image, x, y, and f are all finite and discrete quantities.
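As a minimal sketch of this definition, the NumPy array below (NumPy is an assumption; any 2D array would do) stands in for a tiny grayscale image, and indexing it plays the role of evaluating f(x, y). The intensity values are made up for illustration.

```python
import numpy as np

# A 4x4 "digital image": a discrete 2D function f(x, y).
# Values are hypothetical 8-bit intensities (0 = black, 255 = white).
f = np.array([
    [ 12,  50,  50,  12],
    [ 50, 200, 200,  50],
    [ 50, 200, 255,  50],
    [ 12,  50,  50,  12],
], dtype=np.uint8)

x, y = 2, 2                 # spatial coordinates (row, column here)
print(f[x, y])              # intensity f(x, y) -> 255
print(f.shape)              # (rows, columns) -> (4, 4)
```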

Types of Image Representation

  1. Spatial Domain Representation: Direct representation using pixel intensity values in a grid.

  2. Frequency Domain Representation: Using transforms like Fourier to represent the image in terms of its frequency components.
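A rough sketch of the two representations, assuming NumPy is available: the array itself is the spatial-domain representation, and `numpy.fft.fft2` gives a frequency-domain representation of the same data. The 8×8 random patch is purely illustrative.

```python
import numpy as np

# Hypothetical 8x8 grayscale patch (spatial-domain representation).
img = np.random.default_rng(0).integers(0, 256, size=(8, 8)).astype(float)

# Frequency-domain representation via the 2D discrete Fourier transform.
F = np.fft.fft2(img)
F_shifted = np.fft.fftshift(F)        # move the zero-frequency term to the centre

magnitude = np.abs(F_shifted)         # strength of each spatial frequency
print(magnitude.shape)                # same size as the input: (8, 8)

# The inverse transform recovers the original spatial-domain image.
img_back = np.fft.ifft2(F).real
print(np.allclose(img, img_back))     # True
```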

Types of Images

  • Binary Image: Only black and white (pixel values: 0 or 1)

  • Grayscale Image: Shades of gray (pixel values: 0 to 255)

  • Color Image: Consists of multiple channels, commonly RGB (Red, Green, Blue)

  • Indexed Image: Uses a colormap or palette to store color information
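The following sketch (NumPy assumed, values hypothetical) illustrates the first three types from the list above: a grayscale patch, a binary image obtained by thresholding it, and a three-channel array with the shape of a color image.

```python
import numpy as np

# Hypothetical 3x3 grayscale image (values 0..255).
gray = np.array([[ 10, 120, 240],
                 [ 60, 130, 200],
                 [  0,  90, 255]], dtype=np.uint8)

# Binary image: threshold the grayscale values to 0 or 1.
binary = (gray > 127).astype(np.uint8)
print(binary)

# Color image: three stacked channels (R, G, B); here each channel just
# repeats the grayscale values to illustrate the shape.
rgb = np.stack([gray, gray, gray], axis=-1)
print(rgb.shape)          # (3, 3, 3) -> height x width x channels
```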

Image Models

  1. Geometric Model: Describes the shape and position of image elements.

  2. Photometric Model: Describes the brightness/intensity or color of each point.

  3. Color Models:

    • RGB: Red, Green, Blue components

    • HSV: Hue, Saturation, Value

    • YCbCr: Used in video compression

    • CMYK: Used in printing
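As a small illustration of moving between two of the color models listed above, Python's standard `colorsys` module converts an RGB triple (scaled to the 0–1 range, a convention of that module) to HSV and back. The example pixel is arbitrary.

```python
import colorsys

# An orange-ish pixel, with RGB components scaled to [0, 1].
r, g, b = 1.0, 0.5, 0.0

h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h * 360, s, v)                  # hue in degrees (~30), saturation, value

# Round-trip back to RGB.
print(colorsys.hsv_to_rgb(h, s, v))   # (1.0, 0.5, 0.0)
```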

Resolution

  • Spatial Resolution: Amount of detail in an image (measured in pixels)

  • Gray-level Resolution: Number of distinct gray levels available (e.g., 8-bit = 256 levels)
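Gray-level resolution can be demonstrated by re-quantizing an 8-bit ramp to fewer levels. This is only a sketch; the `quantize` helper below is a hypothetical function written for this example, with NumPy assumed.

```python
import numpy as np

gray = np.arange(0, 256, dtype=np.uint8)   # a ramp containing all 256 levels

def quantize(img, bits):
    """Reduce an 8-bit image to 2**bits gray levels (illustrative helper)."""
    levels = 2 ** bits
    step = 256 // levels
    return (img // step) * step

print(len(np.unique(quantize(gray, 8))))   # 256 levels (unchanged)
print(len(np.unique(quantize(gray, 4))))   # 16 levels
print(len(np.unique(quantize(gray, 1))))   # 2 levels (binary-like)
```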

Image Size

  • Described in terms of width × height × number of channels (e.g., 512 × 512 × 3 for RGB)
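The width × height × channels description also gives the uncompressed storage size directly, as in this small arithmetic sketch (8 bits per channel assumed).

```python
# Uncompressed size of a 512 x 512 RGB image with 8 bits per channel.
width, height, channels = 512, 512, 3
bytes_per_channel = 1                      # 8 bits

size_bytes = width * height * channels * bytes_per_channel
print(size_bytes)                # 786432 bytes
print(size_bytes / 1024)         # 768.0 KiB
```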

2D Linear System

  • A 2D linear system in image processing refers to a system where the output image is a linear transformation of the input image, usually involving operations like convolution.

  • Linearity implies two properties:

    1. Additivity: T[f1 + f2] = T[f1] + T[f2]

    2. Homogeneity (Scaling): T[a·f] = a·T[f]

  • Spatial Invariance: The system's response doesn’t change when the input is shifted.

  • Example: Applying a kernel (filter) over an image using convolution is a classic example of a 2D linear system:

    g(x, y) = \sum_m \sum_n h(m, n) \cdot f(x - m, y - n)
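A direct, loop-based sketch of this convolution sum is shown below, assuming NumPy, zero padding at the borders, and an odd-sized kernel centred at the origin; real code would normally use a library routine instead.

```python
import numpy as np

def convolve2d(f, h):
    """g(x, y) = sum_m sum_n h(m, n) * f(x - m, y - n).

    'Same' output size, zero padding at the borders, odd-sized kernel assumed.
    """
    kh, kw = h.shape
    ph, pw = kh // 2, kw // 2
    fp = np.pad(f, ((ph, ph), (pw, pw)), mode="constant")
    g = np.zeros_like(f, dtype=float)
    h_flipped = h[::-1, ::-1]              # flipping the kernel gives true convolution
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(h_flipped * fp[x:x + kh, y:y + kw])
    return g

img = np.array([[0, 0, 0, 0, 0],
                [0, 9, 9, 9, 0],
                [0, 9, 9, 9, 0],
                [0, 0, 0, 0, 0]], dtype=float)

box = np.ones((3, 3)) / 9.0                # 3x3 averaging (smoothing) kernel
print(convolve2d(img, box))                # blurred version of img
```

Because the averaging kernel is a linear, spatially invariant operator, it satisfies the additivity, homogeneity, and shift-invariance properties listed above.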

Luminance

  • The measured intensity of light emitted or reflected from a surface in a given direction.

  • Closely related to the perceived brightness, but it's a physical quantity.

  • Important in grayscale and color image processing.

Contrast

  • The difference in luminance or color that makes an object distinguishable from others or the background.

  • High contrast makes features pop; low contrast makes the image appear flat.

  • Often enhanced using techniques like contrast stretching or histogram equalization.

Brightness

  • A subjective visual perception of how much light an image appears to emit or reflect.

  • Can be increased by adding a constant to all pixel intensities.
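The sketch below (NumPy assumed, pixel values hypothetical) illustrates both ideas: brightness is raised by adding a constant to every pixel, and contrast is improved by linearly stretching the image's intensity range to the full 0–255 scale.

```python
import numpy as np

gray = np.array([[ 60,  80, 100],
                 [ 90, 110, 130],
                 [ 70, 120, 140]], dtype=np.uint8)   # low-contrast patch

# Brightness: add a constant to every pixel (clip to stay within 0..255).
brighter = np.clip(gray.astype(int) + 40, 0, 255).astype(np.uint8)

# Contrast stretching: linearly map [min, max] of the image onto [0, 255].
lo, hi = gray.min(), gray.max()
stretched = ((gray.astype(float) - lo) / (hi - lo) * 255).astype(np.uint8)

print(brighter)
print(stretched)
```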

Color Representation

Images can be represented using various color models, each suitable for different applications:

RGB (Red, Green, Blue)

  • Additive color model (used in screens).

  • Each color is a mix of Red, Green, and Blue components.

CMY/CMYK (Cyan, Magenta, Yellow, Key/Black)

  • Subtractive color model (used in printing).

HSV (Hue, Saturation, Value)

  • Hue: Color type (0° to 360°)

  • Saturation: Color purity

  • Value: Brightness of the color

YUV / YCbCr

  • Used in video processing.

  • Separates brightness (Y) from color information (U and V or Cb and Cr).
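To make the separation of brightness and color concrete, here is a sketch of an RGB-to-YCbCr conversion using what I take to be the usual full-range BT.601 (JPEG-style) coefficients; the helper function is written for this example only.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 / JPEG-style conversion (coefficients assumed).

    r, g, b are 8-bit values in 0..255; the returned Y, Cb, Cr are on the same scale.
    """
    y  =  0.299 * r + 0.587 * g + 0.114 * b            # luma: weighted brightness
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b   # blue-difference chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b   # red-difference chroma
    return y, cb, cr

print(rgb_to_ycbcr(255, 0, 0))      # pure red: modest Y, Cr well above 128
print(rgb_to_ycbcr(128, 128, 128))  # gray: Y = 128, Cb = Cr = 128 (no color)
```

Keeping Y separate is what lets video codecs subsample Cb and Cr, since the eye is less sensitive to fine color detail than to fine brightness detail.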

Visibility Functions

  • Visibility functions describe how sensitive the human eye is to different spatial frequencies.

  • The Contrast Sensitivity Function (CSF) is a common example. It shows that humans are:

    • Most sensitive to mid-range spatial frequencies

    • Less sensitive to very low or very high frequencies

  • Important in compression algorithms and display optimization.


Monochrome and Color Vision Models

Monochrome Vision Model

  • Uses only intensity (luminance) values.

  • No color, only grayscale from black to white.

  • Basis of early vision systems and useful in medical/scientific imaging.

Color Vision Model

  • Based on how the human eye perceives color using three types of cones:

    • L (long wavelengths) → Red

    • M (medium) → Green

    • S (short) → Blue

  • Color models (like RGB, HSV) are built around this biological model.

  • Opponent Process Theory: Human vision processes color differences (Red-Green, Blue-Yellow) rather than absolute colors.

