Image Processing


Over 35 projects in Image Processing

Image Quantization Using DCT


Color quantization reduces the number of colors used in an image; this is important for displaying images on devices that support a limited number of colors and for efficiently compressing certain kinds of images. The human eye is fairly good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high-frequency (rapidly varying) brightness variation. This fact allows the amount of information required to be reduced by ignoring the high-frequency components, which is done by simply dividing each component in the frequency domain by a constant for that component and then rounding to the nearest integer. This is the main lossy operation in the whole process. As a result, many of the higher-frequency components are typically rounded to zero, and many of the rest become small positive or negative numbers. In this project, DCT-based image quantization is implemented and the results are analyzed across the picture's color levels.
Color image quantization | Lossless Image Compression
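The quantization step described above (divide each frequency component by a per-component constant, then round) can be sketched in plain Python; the projects here use MATLAB, so this is only an illustrative stand-in with a 1-D 8-point DCT and a single hypothetical quantizer of 16:

```python
import math

def dct_1d(block):
    # 8-point DCT-II: turn spatial samples into frequency coefficients
    N = len(block)
    out = []
    for k in range(N):
        s = sum(block[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def quantize(coeffs, q):
    # the lossy step: divide each coefficient by its quantizer and round
    return [round(c / q) for c in coeffs]

# a smooth 8-sample row: the high-frequency coefficients round to zero
row = [52, 55, 61, 66, 70, 61, 64, 73]
quant = quantize(dct_1d(row), 16)
print(quant)  # most entries after the first are zero
```

JPEG does the same thing on 8x8 blocks with a full 2-D DCT and a per-position quantization table rather than the single constant used here.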
Digital Color Detection in Image Processing


Object detection is the task of identifying and locating objects in an image or video. In our project, detection is done on the basis of color: the pixels of the image are examined and classified according to which color each pixel value signifies. Color detection using image processing has applications for many purposes, and the detection can be based on the mean color or on a histogram. The system also includes a distance-information calculation unit, which divides a reference captured image (from a plurality of image-capture units) into pixel blocks, individually retrieves the corresponding pixel positions within the other captured image for each block, and individually calculates distance information; and a histogram-generation module, which divides the range image representing the distance information of the pixel blocks into segments of predetermined size, provides histograms of the distance information for the respective segments, and casts the distance information of the pixel blocks into the histograms of the respective segments.
Color Feature Detection | Object detector
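Pixel-wise color detection as described above can be sketched in a few lines of Python (the project itself is MATLAB-based); the reference colors here are hypothetical, and each pixel is assigned to the nearest one by squared RGB distance:

```python
# hypothetical reference colors; classify each pixel to the nearest one
REFERENCES = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

def classify_pixel(rgb):
    # nearest reference color by squared Euclidean distance in RGB space
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCES, key=lambda name: dist2(rgb, REFERENCES[name]))

def color_mask(image, target):
    # boolean mask marking pixels whose nearest reference color is `target`
    return [[classify_pixel(px) == target for px in row] for row in image]

image = [[(250, 10, 5), (12, 240, 8)],
         [(0, 30, 200), (255, 40, 30)]]
print(color_mask(image, "red"))
```

A histogram-based detector would instead compare the distribution of pixel values against a stored model rather than classifying each pixel independently.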
Image Steganography over Bitwise Algorithm


Image encryption schemes have been increasingly studied to meet the demand for real-time secure image transmission over the Internet and through wireless networks. Encryption is the process of transforming information for its security. With the huge growth of computer networks and the latest advances in digital technologies, a huge amount of digital data is being exchanged over various types of networks, and a large part of this information is either confidential or private. The security of images has therefore become more and more important due to the rapid evolution of the Internet, and many different image encryption methods have been proposed to enhance it. Image encryption techniques convert an image into another one that is hard to understand; image decryption retrieves the original image from the encrypted one. In this project, data encryption is implemented on the basis of a bitwise algorithm. The scenario followed for encryption and decryption is:
1. Start with the image in which the data is to be hidden.
2. Enter the message text to be hidden in the image.
3. Select particular bits of the image's pixels as per the algorithm and hide the data there in binary form.
4. Save the image to obtain the encrypted (stego) image.
5. Decryption of the hidden message is the reverse process.
Implementation of LSB Steganography | Steganography with image
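Steps 1-4 above amount to rewriting selected bits of the pixels; a self-contained Python sketch of the simplest variant, hiding one message bit in the least significant bit of each pixel (a grayscale pixel list stands in for the image):

```python
def embed_message(pixels, message):
    # hide each bit of the message in the LSB of one pixel
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "cover image too small"
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_message(pixels, length):
    # read back `length` bytes from the LSBs (the reverse process, step 5)
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
    return data.decode()

cover = list(range(100, 180))     # stand-in for 80 grayscale pixel values
stego = embed_message(cover, "hi")
print(extract_message(stego, 2))  # -> hi
```

Each pixel value changes by at most 1, which is why LSB embedding is visually imperceptible.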
Noise Removal Using Wavelet Thresholding


A very large portion of digital image processing is devoted to image denoising. This includes research in algorithm development and routine, goal-oriented image processing. Image restoration is the removal or reduction of degradations that are incurred while the image is being obtained. Degradation comes from blurring as well as from noise due to electronic and photometric sources. Blurring is a form of bandwidth reduction of the image caused by an imperfect image formation process, such as relative motion between the camera and the original scene or an optical system that is out of focus. Image denoising is often used in photography or publishing, where an image was somehow degraded but needs to be improved before it can be printed. For this type of application we need to know something about the degradation process in order to develop a model for it; once we have such a model, the inverse process can be applied to the image to restore it to its original form. In this project, the technique for image restoration or denoising includes the BayesShrink algorithm for wavelet thresholding.
Wavelet Noise removal | Hard-Soft threshold for noise reduction
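The core of wavelet-threshold denoising is shrinking the detail coefficients toward zero. A Python sketch of soft thresholding with a fixed threshold; BayesShrink would instead estimate the threshold per subband from the noise and signal variances:

```python
def soft_threshold(coeffs, t):
    # shrink each detail coefficient toward zero by t; kill the small ones,
    # which are assumed to be mostly noise
    def shrink(c):
        if abs(c) <= t:
            return 0.0
        return c - t if c > 0 else c + t
    return [shrink(c) for c in coeffs]

# BayesShrink chooses t per subband as sigma_noise^2 / sigma_signal;
# here t is simply fixed for illustration.
detail = [9.0, -0.4, 0.2, -7.5, 1.5, 0.05]
print(soft_threshold(detail, 1.0))  # -> [8.0, 0.0, 0.0, -6.5, 0.5, 0.0]
```

Hard thresholding would keep the large coefficients unchanged instead of shrinking them, which preserves edges better but leaves more residual noise.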
Optical Character Recognition for Character Classification


In the context of script recognition, it is worth studying the characteristics of various writing systems and the structural properties of the characters used in certain major scripts of the world. One interesting and challenging field of research in pattern recognition is Optical Character Recognition (OCR). OCR is the process in which a paper document is optically scanned and then converted into a computer-processable electronic format by recognizing and associating a symbolic identity with every individual character in the document. With the increasing demand for creating a paperless world, many OCR algorithms have been developed over the years. However, most OCR systems are script-specific in the sense that they can read characters written in one particular script only. A script is defined as the graphic form of the writing system used to write statements expressible in language; that is, a script class refers to a particular style of writing and the set of characters used in it. Languages throughout the world are typeset in many different scripts. In this project, script recognition is implemented: the system first acquires an image from a webcam, then an OCR algorithm is applied to the captured image to extract features and finally recognize the script.
Optical character recognition | OCR using MATLAB
Contaminant Detection in Cotton


Contamination plays a vital role in deciding the quality of cotton, apart from essential properties such as length, strength, and fineness. Contamination of raw cotton can take place at every step, i.e., from farm picking to the ginning stage. Contamination, even a single foreign fiber, can lead to the downgrading of yarn, fabric, or garments, or even the total rejection of an entire batch, and can cause irreparable harm to the relationship between growers, ginners, merchants, spinners, and textile and clothing mills. The International Textile Manufacturers Federation (ITMF) reported that claims due to contamination amounted to between 1.4-3.2% of total sales of 100% cotton and cotton-blended yarns. A fairly large number of cotton fiber recognition studies are based on the RGB color space. In this project a system is implemented to find contaminants in cotton so that the cotton can be used with confidence: contaminants, or foreign fibers, are detected in cotton on the basis of layer separation and thresholding.
Fiber defect detection | Discontinuity testing
Image watermarking using DCT transform


Owing to personal computers being applied in many fields and the Internet becoming popular and easy to use, most information is transmitted in digital format, so data copying and backup are easier and easier on the World Wide Web and in multimedia. Copyright and authentication gradually lose their security, and how to protect intellectual property has become important in technical study and research. Recently, watermarking techniques were proposed to solve the problem of protecting intellectual property. In this project, a watermark embedded in the host image by the DCT transform has been developed. Several papers use the same manner to embed a watermark into the middle-band coefficients of a DCT block; however, JPEG (Joint Photographic Experts Group) image compression usually discards the high-band frequencies of a DCT block, including some middle-band data. Here the lower-band coefficients of the DCT block are employed, since they are robust against JPEG attacks. To improve imperceptibility, only one bit is embedded in each coefficient of a DCT block.
Image watermarking using DCT
Edge detection based image watermarking


Digital watermarking is a process of embedding a signature into media data with only a few modifications. Adding a visible watermark is a common way of identifying images and protecting them from unauthorized use online, and a common practice is to distribute the watermark (or watermarks) across the entire image. We propose a more effective content-based sharp-point-detection watermarking. To increase the embedding capacity, the concept of a watermark within a watermark is used, and to increase security we embed encrypted watermarks in the image. This provides an additional level of security: for instance, even if the watermarking key is compromised, the attacker will still not be able to identify the watermark because it is encrypted. The system we develop is based on a sharp-point detection algorithm: the algorithm gives us points found by sharp-point detection, and we place our watermark there.
Image watermarking using Harris points | New approach for watermarking
DWT based Image Watermarking


In our project we use DWT (discrete wavelet transform) based image watermarking, one of the best watermarking techniques to date owing to the properties of wavelets. A method and system are disclosed for inserting relationships between or among property values of certain coefficients of a transformed host image; the relationships encode the watermark information. One aspect of the present work is to modify an STD method to adapt it to a perceptual model simplified for the wavelet domain. The digital watermarking methods presented here embed a digital watermark in both the low and high frequencies of an image or other production, providing a watermark that is resistant to a variety of attacks. They optimize the strength of the embedded watermark so that it is as powerful as possible without being perceptible to the human eye, and they do this relatively quickly, in real time, and in an automated fashion using an intelligent system such as a neural network.
Image watermarking using DWT
Blocking Artifact analysis using DCT


In this project we compare different image processing techniques: spatial filtering, localized filtering, and adaptive filtering. The comparison is made on the basis of different parameters: mean square error (MSE), peak signal-to-noise ratio (PSNR), bit error rate (BER), and the visibility of the image. Of these techniques, the adaptive technique shows good results; it smooths the artifacts more than the others. We can compress audio signals, video signals, text, fax, and images. For medical images lossless compression is used, while for other types lossy compression can be used. For compressing an image we can use the DCT technique, but during decompression, when recovering the original image from the compressed one, we can face the problem of blocking artifacts. Various methods can be used for removing blocking artifacts; one of them is DCT filtering, and for better results blocking artifacts can be removed by spatial and hybrid filtering methods. Our experimental results show that hybrid filtering gives better performance in terms of PSNR, BER, and MSE.
Blocking Artifacts in digital images | Ringing effect in images
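The comparison parameters named above (MSE, PSNR, BER) are straightforward to compute; an illustrative Python sketch on two short pixel rows standing in for an original and a processed image:

```python
import math

def mse(a, b):
    # mean square error between two equal-length pixel sequences
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    # peak signal-to-noise ratio in dB for 8-bit data
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

def ber(a, b):
    # fraction of 8-bit pixel-value bits that differ
    wrong = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return wrong / (8 * len(a))

original  = [52, 55, 61, 66, 70, 61, 64, 73]
processed = [54, 55, 60, 66, 71, 61, 64, 72]
print(round(psnr(original, processed), 2), "dB")
```

Higher PSNR and lower MSE/BER indicate the processed image is closer to the original, which is how the filtering techniques in this project are ranked.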
Blocking artifact Removal Approach


We propose an adaptive approach which performs blockiness reduction in both the DCT and spatial domains to reduce block-to-block discontinuities. Blocking artifact detection and reduction is presented in this project. The algorithm first detects the regions of the image which present visible blocking artifacts. This detection is performed in the frequency domain and uses the estimated relative quantization error calculated when the discrete cosine transform (DCT) coefficients are modeled by a Laplacian probability function. Then, for each block affected by blocking artifacts, its DC and AC coefficients are recalculated for artifact reduction. To achieve this, a closed-form representation of the optimal correction of the DCT coefficients is produced by minimizing a novel enhanced form of the mean squared difference of slope for every frequency separately. This correction of each DCT coefficient depends on the eight neighboring coefficients in the subband-like representation of the DCT transform and is constrained by the quantization upper and lower bounds. Experimental results illustrating the performance of the proposed method are presented and evaluated.
Removal of blocking artifacts
Image Fusion Using IHS Methodology


Image fusion is a technique used to integrate a high-resolution panchromatic image with a low-resolution multispectral image to produce a high-resolution multispectral image that contains both the spatial detail of the panchromatic image and the color information of the multispectral image. An increasing number of high-resolution images have become available along with the development of sensor technology. Here we propose a method in which digital images are fused, or mixed, to analyze the color of different objects. Digital color analysis has become an increasingly popular and cost-effective method used by resource managers and scientists for evaluating foliar nutrition and health in response to environmental stresses. We present a computationally efficient color image fusion algorithm for merging infrared and visible images.
Image fusion | Mixing of images
Fusion Techniques Comparative Analysis


Image fusion is the process that combines information from multiple images of the same scene. These images may be captured by different sensors, acquired at different times, or have different spatial and spectral characteristics. An integrated PCA-based image fusion system for stamping split detection is developed and tested on an automotive press line. Different splits of varying shape, size, and number are detected under actual operating conditions. Principal Component Analysis (PCA) is employed to transform the original image to its eigenspace; by retaining the principal components with the most influential eigenvalues, PCA keeps the key features of the original image and reduces the noise level. Pixel-level image fusion algorithms are then developed to fuse the original images from the thermal and visible channels, enhance the resulting image, and reduce undesirable noise. Finally, an automatic split detection algorithm is designed and implemented to perform online, objective automotive stamping split detection.
Wavelet-based image fusion | PCA-based image fusion
Video Watermarking using Image Processing


In this project, a description and comparison of encryption methods and representative video algorithms are presented, with respect not only to their encryption speed but also to their security level and stream size. A tradeoff between the quality of video streaming and the choice of encryption algorithm is shown; achieving efficiency, flexibility, and security at once is a challenge for researchers. The project seeks to develop robust watermarking software based on research work carried out earlier. While exploring the various watermarking techniques and algorithms that have been proposed for a robust watermarking solution, the group implemented a proposed robust watermarking solution. A robust watermark is more resilient to the tampering and attacks that a multimedia object (image, video, or audio) faces, such as compression, image cropping, flipping, and rotation, to name a few.
Video watermarking | Video data encryption
Defect Detection Using Thresholding Approach


This project performs fabric discontinuity detection using the MATLAB Image Processing Toolbox, detecting defects on the basis of good samples used to train the system at the start. Numerous techniques have been developed to detect fabric defects, and one purpose of this project is to categorize and describe these algorithms. Categorization of fabric defect detection techniques is useful in evaluating the qualities of identified features. The characterization of real fabric surfaces using their structure and primitive set has not yet been successful; therefore, on the basis of the nature of the features extracted from fabric surfaces, the proposed approaches are characterized into three categories: statistical, spectral, and model-based. To evaluate the state of the art, the limitations of several promising techniques are identified and their performance is analyzed in the context of their demonstrated results and intended application.
Fabric defect detection | Textile defect detection system
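One simple statistical reading of "detect defects on the basis of good samples" is to learn per-pixel means and a deviation threshold from defect-free samples, then flag outliers; a Python sketch with made-up sample values (the project's actual training scheme is not specified, so this is an assumption):

```python
import statistics

def train(good_samples):
    # learn per-pixel means and a global std-dev from defect-free samples
    n = len(good_samples)
    means = [sum(col) / n for col in zip(*good_samples)]
    resid = [v - m for s in good_samples for v, m in zip(s, means)]
    return means, statistics.pstdev(resid)

def defect_mask(sample, means, sigma, k=3.0):
    # flag pixels deviating more than k standard deviations from the model
    return [abs(v - m) > k * sigma for v, m in zip(sample, means)]

good = [[120, 122, 119, 121], [121, 120, 120, 122], [119, 121, 121, 120]]
means, sigma = train(good)
print(defect_mask([120, 121, 60, 121], means, sigma))
```

The dark pixel (60) stands well outside the learned range and is flagged, while normal texture variation passes the test.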
Fingerprint Recognition System: PCT Algorithm


The proposed method reduces the search space in alignment and, more attractively, obviates the need for extracting minutiae points or the core point to align the fingerprint images. Experimental results show that the proposed method is more robust than using the reference point or the minutiae to align the fingerprint images. The fingerprint is one of the popular biometric traits used for recognizing a person; the properties that make fingerprints popular are their wide public acceptability and the ease of collecting fingerprint data. In this project we propose a method for fingerprint matching based on minutiae matching. However, unlike conventional minutiae matching algorithms, our algorithm also takes into account the region and line structures that exist between minutiae pairs. This allows more structural information of the fingerprint to be accounted for, resulting in stronger certainty of matching minutiae. Also, since most of the region analysis is preprocessed, it does not make the algorithm slower.
Fingerprint matching | Rotation-invariant matching
Fingerprint Recognition System: PHT Algorithm


Polar harmonic transforms (PHTs) are orthogonal rotation-invariant transforms that provide many numerically stable features; their kernel functions consist of sinusoidal functions that are inherently computation-intensive. We develop a fast approach for their computation using recursion and the 8-way symmetry/anti-symmetry property of the kernel functions. PHTs can be used to generate rotation-invariant features, and with PHTs there is no numerical instability issue, unlike with ZMs and PZMs, where such issues often limit practical usefulness. A large part of the computation of the PHT kernels can be precomputed and stored; in the end, for each pixel, as little as three multiplications, one addition, and one cosine and/or sine evaluation are needed to obtain the final kernel value. In this project, three different transforms are introduced: the Polar Complex Exponential Transform (PCET), the Polar Cosine Transform (PCT), and the Polar Sine Transform (PST).
Fingerprint recognition | Polar harmonic transform
Object detection Using MATLAB


Image processing is a technique of bringing variations to an image as per requirements, such as editing, cropping, and detection. In our project, detection is done on the basis of color, shape, and size. The Image Processing Toolbox provides a comprehensive suite of reference-standard algorithms and visualization functions for image analysis tasks such as statistical analysis, feature extraction, and measurement. It is useful for telling similar objects with different colors apart, and for telling similarly colored objects with different sizes apart. Here we made an application to count circular segments, checking the efficiency of the system by analyzing the accuracy of the counter using digital image processing. We also introduce a novel approach for feature extraction based on color and circularity.
Feature Segmentation | Image processing based segment counter
Real Time Image Steganography MATLAB


This project explores steganography from its earliest instances through potential future applications. Steganography is an answer for secure and secret communication. Existing methods in image steganography focus on increasing the embedding capacity of secret data; their experimental results indicate that two pixels are required to embed one secret digit. To improve the embedding capacity, a novel method of Pixel Value Modification (PVM) by a modulus function is proposed. The proposed PVM method can embed one secret digit in one pixel of the cover image, and thus gives good quality of the stego image. The experimental outputs validate that good visual perception of the stego image, with a larger secret data embedding capacity, can be achieved by the proposed method. Our algorithm offers very high capacity for the cover media compared to other existing algorithms. We present experimental results showing the superiority of our algorithm, along with comparative results against other similar algorithms in image-based steganography.
Pixel-based image steganography | Image message encoding and decoding
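The PVM idea, one secret digit per pixel via a modulus function, can be sketched in Python; choosing the candidate value nearest the original pixel keeps the distortion within plus or minus 5 gray levels (the exact adjustment rule of the published method may differ, so treat this as an illustration):

```python
def embed_digits(pixels, digits):
    # make each stego pixel satisfy pixel % 10 == secret digit,
    # choosing the nearest such value to limit distortion
    stego = list(pixels)
    for i, d in enumerate(digits):
        base = stego[i] - (stego[i] % 10)
        candidates = [base - 10 + d, base + d, base + 10 + d]
        stego[i] = min((c for c in candidates if 0 <= c <= 255),
                       key=lambda c: abs(c - pixels[i]))
    return stego

def extract_digits(pixels, count):
    # extraction is just the modulus function again
    return [p % 10 for p in pixels[:count]]

cover = [147, 200, 35, 91]
stego = embed_digits(cover, [3, 9, 0, 7])
print(stego, extract_digits(stego, 4))
```

This is what gives PVM its one-digit-per-pixel capacity, compared with the two-pixels-per-digit rate of the earlier methods mentioned above.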
Face recognition System using Eigen features


Eigenfaces is the name given to a set of eigenvectors when they are used in the computer vision problem of human face recognition. Principal Component Analysis (PCA) is one of the most successful techniques used in image recognition and compression; it is a statistical method under the broad title of factor analysis. In our project, face recognition is implemented using PCA eigenfaces to analyze the accuracy of the system in the fields of security and identification. The face is a complex multidimensional structure and needs good computing techniques for recognition. Our approach treats face recognition as a two-dimensional recognition problem: face images are projected onto a face space that best encodes the variation among known face images.
PCA eigenface-based face recognition
Updated Image Enhancement Approach


Image editing encompasses the processes of altering images, whether they are digital photographs, traditional analog photographs, or illustrations. Traditional analog image editing is known as photo retouching, using tools such as an airbrush to modify photographs, or editing illustrations with any traditional art medium. In this project we enhance the quality of an image on the basis of various image properties, as discussed in the introduction, so that the enhanced image can be used in various applications. We first take an image from the user whose quality is to be enhanced; the next step is to change a specific property of the image, with many options offered to the user; finally, a save option lets the user save the enhanced image as needed. Brightness, contrast, and fade can be adjusted for the display of an image, as well as for plotted output, without affecting the original raster image file.
Image contrast enhancement | Image Property changer
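A Python sketch of the brightness/contrast step, using the common formula of scaling around mid-gray then clamping to the 8-bit range (the project does not specify its exact formula, so this one is an assumption):

```python
def adjust(pixels, brightness=0, contrast=1.0):
    # scale around mid-gray (128) for contrast, shift for brightness,
    # and clamp the result to the valid 8-bit range
    def clamp(v):
        return max(0, min(255, int(round(v))))
    return [clamp(contrast * (p - 128) + 128 + brightness) for p in pixels]

row = [10, 100, 128, 200, 250]
print(adjust(row, brightness=20, contrast=1.2))
```

Note the last pixel saturates at 255; clamping is what keeps the enhanced image a valid 8-bit raster.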
CBIR over Color classification Approach


Content-based means that the search analyzes the contents of the image rather than metadata such as keywords, tags, or descriptions associated with the image. In our project we make use of color feature retrieval using histograms. The main objective of this project is to analyze the current state of the art in content-based image retrieval (CBIR) using image processing in MATLAB. Different implementations of CBIR make use of different types of user queries; the underlying search algorithms may vary depending on the application, but result images should all share common elements with the provided example. Color histograms are widely used for content-based image retrieval; their advantage is insensitivity to small changes in camera viewpoint. However, a histogram is a coarse characterization of an image, so images with very different appearances can have similar histograms. We describe a technique for comparing images called histogram refinement, which imposes additional constraints on histogram-based matching: it splits the pixels in a given bucket into several classes, based upon some local property.
CBIR on the basis of color | Feature-based CBIR
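Histogram-based matching as described above can be sketched in Python: build normalized intensity histograms and compare them by histogram intersection (1.0 for identical distributions, 0.0 for disjoint ones). The bin count and the intersection measure are illustrative choices, not the project's specified ones:

```python
def histogram(pixels, bins=4):
    # coarse intensity histogram, normalized to sum to 1
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def intersection(h1, h2):
    # sum of bin-wise minima: 1.0 = identical, 0.0 = disjoint
    return sum(min(a, b) for a, b in zip(h1, h2))

img_a = [10, 20, 200, 210, 100, 110, 60, 70]
img_b = [15, 25, 205, 215, 105, 115, 50, 75]
print(intersection(histogram(img_a), histogram(img_b)))  # -> 1.0
```

The two "images" here have no pixel in common yet match perfectly, which is exactly the coarseness that histogram refinement is designed to address.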
CBIR over Color and Shape classification


This project reviews the progress of computer vision in the agricultural and food industry, then identifies areas for further research and wider application of the technique. We treat the challenge of automatically inferring the aesthetic quality of pictures from their visual content as a machine learning problem, with a peer-rated online photo-sharing website as the data source. We extract certain visual features based on the intuition that they can discriminate between aesthetically pleasing and displeasing images. Automated classifiers are built using support vector machines and classification trees, and linear regression on polynomial terms of the features is applied to infer numerical aesthetics ratings. The work attempts to explore the relationship between the emotions which pictures arouse in people and their low-level content. Potential applications include content-based image retrieval and digital photography.
Quality Analyzer over various features
Artificial Neural Network for Coin Recognition


Dirty coins frequently require machine cleaning; the variation between images of new and old coins is also discussed. The coin recognition process is divided into seven steps: acquire the RGB coin image, generate a pattern-averaged image, remove shadow from the image, crop and trim the image, convert the RGB image to grayscale, generate a feature vector and pass it as input to the trained NN, and give the appropriate result according to the output of the NN. In this project, we propose a method to design a neural network (NN) and, to demonstrate the effectiveness of the proposed scheme, apply it to coin recognition. In general, as a problem becomes complex and large-scale, the number of operations increases and hardware implementation of real systems using NNs becomes difficult. Therefore, we propose a method which produces a small-sized NN system to achieve cost reduction and to simplify hardware implementation in real machines.
Coin recognition system | Neural network based coin recognition
Image Noising and Denoising with Multiple Noises and Filters


Denoising is a technique first proposed around 1990; the goal of image denoising is to remove noise by differentiating it from the signal. Denoising uses the visual content of images, such as color, texture, and shape, as the image index to retrieve images from a database, and these features never change. In this project we present a new method of unsharp masking for contrast enhancement of images. Image denoising is a well-studied problem in the field of image processing; here, basic filters are used to remove the noise and a comparative analysis between them is made. The approach employs an adaptive median filter that controls the contribution of the sharpening path so that contrast enhancement occurs in high-detail areas, together with a noise-detection technique to remove mixed noise from images. A hybrid cumulative histogram equalization is proposed for adaptive contrast enhancement.
Image noising and denoising | Image noise reduction
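Of the basic filters compared, the median filter is the standard choice against impulse ("salt and pepper") noise; a 1-D Python sketch with border replication (2-D filtering applies the same idea over a square neighborhood):

```python
import statistics

def median_filter(pixels, window=3):
    # replace each pixel with the median of its neighborhood;
    # edges are handled by replicating the border pixels
    r = window // 2
    padded = [pixels[0]] * r + list(pixels) + [pixels[-1]] * r
    return [int(statistics.median(padded[i:i + window]))
            for i in range(len(pixels))]

# a smooth ramp corrupted by two impulse ("salt and pepper") samples
noisy = [10, 12, 255, 14, 16, 0, 20, 22]
print(median_filter(noisy))
```

Both impulses (255 and 0) are replaced by plausible neighborhood values, while the smooth ramp is left largely intact, which is why the median outperforms the mean filter on this noise type.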
Medical Image Enhancement Processing


Speckle is signal-correlated noise. In ultrasound imagery, other sources of noise are also present depending on the specific application; at the least, one has to consider the thermal and electronic noise added at the receiver. This section offers some ideas about various noise reduction techniques. Unfortunately, the presence of speckle noise in these images affects edges and fine details, which limits the contrast resolution and makes diagnosis more difficult. Ultrasound imaging is a widely used diagnostic tool in modern medicine, used to visualize muscles, internal organs of the human body, their size and structure, and injuries. The proposed filtering is a technique for the removal of speckle noise from digital images. Quantitative measures are made using the signal-to-noise ratio, and the noise level is measured by the standard deviation.
Ultrasound image noise reduction | Speckle noise removal
Plant Dimensions Calculation using image processing


Image processing is a methodology used in many applications, whether in research, quality enhancement, or industry. It is also a main technology used to find the dimensions of products. In this project we implement a technique for finding the dimensions of a natural plant: first we acquire an image from the user, then an algorithm finds the height and width of the object in the image, that is, the dimensions of the plant. This technique can be useful for finding the dimensions of any object, which is also useful in military applications and for estimating the size of far-away objects.
Plant Dimensions calculating system
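Once the plant has been segmented into a binary mask, height and width reduce to a bounding box over the foreground pixels; a Python sketch (converting pixels to physical units would need a calibration factor, which the project does not specify):

```python
def plant_dimensions(mask):
    # mask: 2D list of 0/1 values, where 1 marks a plant pixel;
    # the bounding box of the 1s gives height and width in pixels
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    height_px = rows[-1] - rows[0] + 1
    width_px = max(cols) - min(cols) + 1
    return height_px, width_px

mask = [
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
]
print(plant_dimensions(mask))  # -> (4, 3)
```

Multiplying by a known centimeters-per-pixel scale (from a reference object or camera geometry) turns the pixel dimensions into physical ones.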
Character Recognition for Language processing


Optical Character Recognition, or OCR, is a technology that enables you to convert different types of documents, such as scanned paper documents, PDF files, or images captured by a digital camera, into editable and searchable data. In particular, we focus on recognizing characters in situations that would traditionally not be handled well by OCR techniques. We present an annotated database of images containing English characters; the database comprises images of street scenes taken in Bangalore, India, using a standard camera. The problem is addressed in an object categorization framework based on a bag-of-visual-words representation, and we assess the performance of various features using nearest-neighbor and SVM classification. It is demonstrated that the performance of the proposed method, using as few as 15 training images, can be far superior to that of commercial OCR systems.
OCR for sign verification | Character detection
Image compression approach using DCT


In the JPEG image compression algorithm, the input image is divided into 8-by-8 or 16-by-16 blocks, and the two-dimensional DCT is computed for each block. The DCT coefficients are then quantized, coded, and transmitted. The JPEG receiver (or JPEG file reader) decodes the quantized DCT coefficients, computes the inverse two-dimensional DCT of each block, and then puts the blocks back together into a single image. For typical images, many of the DCT coefficients have values close to zero; these coefficients can be discarded without seriously affecting the quality of the reconstructed image. In this project we implement this image compression technique, the discrete cosine transform, and then analyze it using parameters such as peak signal-to-noise ratio, mean square error, and bit error rate.
DCT based image compression_Img
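The forward-DCT, quantize, inverse-DCT round trip on one block can be sketched as below. This is a simplified version with a single uniform quantization step `q` instead of JPEG's per-coefficient quantization table, and no entropy coding:

```python
import numpy as np

N = 8

def dct_matrix(n=N):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)           # DC row has a smaller scale factor
    return c

C = dct_matrix()

def compress_block(block, q=16):
    """DCT -> uniform quantization (the lossy step) -> inverse DCT."""
    coeffs = C @ block @ C.T             # forward 2D DCT (separable)
    quant = np.round(coeffs / q)         # most high-frequency terms -> 0
    return C.T @ (quant * q) @ C         # dequantize and invert

block = np.tile(np.linspace(0, 255, N), (N, 1))  # smooth horizontal ramp
rec = compress_block(block)
print(np.abs(rec - block).max() < 16)    # True: small reconstruction error
```

Because the DCT matrix is orthonormal, the inverse transform is just its transpose, and the only loss comes from the rounding in the quantization step.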
Image compression using wavelet approach


The 2D discrete wavelet transform (DWT) is the most important new image compression technique of the last decade. Conventionally, the 2D DWT is carried out as a separable transform by cascading two 1D transforms in the vertical and horizontal directions. Therefore, vanishing moments of the high-pass wavelet filters exist only in these two directions. The separable transform fails to provide an efficient representation for directional image features, such as edges and lines, not aligned vertically or horizontally, since it spreads the energy of these features across subbands. In this project we implement image compression using the discrete wavelet transform and then analyze the results on the basis of parameters such as peak signal-to-noise ratio, mean square error, and bit error rate.
DWT based image compression_Img
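One level of the separable 2D DWT described above can be sketched with the simplest wavelet, the Haar filter pair (averages and differences of neighbouring pixels); the project itself may use a different wavelet family:

```python
import numpy as np

def haar2d(img):
    """One level of the separable 2D Haar DWT: returns the four
    subbands LL (approximation), LH, HL, HH (details)."""
    a = np.asarray(img, dtype=float)
    # 1D transform along rows: low-pass = scaled sums, high-pass = differences
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    # cascade the same 1D transform along columns
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

img = np.full((4, 4), 10.0)              # constant image: all energy in LL
ll, lh, hl, hh = haar2d(img)
print(np.allclose(ll, 20), np.allclose(hh, 0))  # True True
```

For compression, the small detail coefficients in LH, HL, and HH are quantized or discarded, while most of the energy stays in the LL approximation band.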
Huffman Encoding for image compression


Huffman coding can be used to compress all sorts of data. It is an entropy-based algorithm that relies on an analysis of the frequency of symbols in an array. Huffman coding can be demonstrated most vividly by compressing a raster image. Suppose we have a 5×5 raster image with 8-bit color, i.e. 256 different colors. The uncompressed image will take 5 × 5 × 8 = 200 bits of storage. This compression technique is used broadly to encode music, images, and certain communication protocols. Lossless JPEG compression uses the Huffman algorithm in its pure form. Lossless JPEG is common in medicine as part of the DICOM standard, which is supported by the major medical equipment manufacturers (for use in ultrasound machines, nuclear resonance imaging machines, MRI machines, and electron microscopes). Variations of the Lossless JPEG algorithm are also used in the RAW format, which is popular among photo enthusiasts because it saves data from a camera’s image sensor without losing information. This project contains an implementation of the Huffman technique for image compression and finally analyzes the technique on the basis of the PSNR, BER, and MSE parameters.
Huffman Coding | Huffman based image coding_Img
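The frequency-driven code construction can be sketched as follows: rarer pixel values receive longer bit strings and frequent ones shorter, so the total bit count drops below the fixed 8 bits per pixel:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table {symbol: bitstring} from symbol
    frequencies in `data` (e.g. the pixel values of an image)."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate one-symbol case
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, tiebreak, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:                     # merge two lightest subtrees
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

pixels = [0, 0, 0, 0, 255, 255, 128, 0]      # tiny 8-pixel "image"
codes = huffman_codes(pixels)
bits = sum(len(codes[p]) for p in pixels)
print(bits < 8 * len(pixels))                # True: fewer bits than raw 8-bit
```

Decoding walks the same tree bit by bit, which works because no code word is a prefix of another.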
RLE Encoding for image encoding & compression


Lossless methods are used for text compression and for image compression in certain environments, such as medical imaging, where no loss of information is tolerated. Lossy compression methods are commonly applied in image and audio compression and, depending upon the fidelity required, achieve a higher compression ratio. Digital images require an enormous amount of space for storage. Run-length encoding (RLE) is a very simple form of data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most useful on data that contains many such runs: for example, relatively simple graphic images such as icons, line drawings, and animations. It is not useful with files that don't have many runs, as it could potentially double the file size. Run-length encoding performs lossless data compression and is well suited to palette-based iconic images. In this project, RLE-based image compression is implemented as follows:
1. Select the image to be compressed from the user.
2. Enter the parameters of the RLE algorithm to compress the image.
3. Obtain the compressed image via RLE coding.
4. Calculate the compression ratio to evaluate the compression capability.
RLE based image encoding and decoding | RLE image compression_Img
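The run-length step itself can be sketched in a few lines; for an image, `pixels` would be one scan line of pixel values:

```python
def rle_encode(pixels):
    """Encode a sequence as (value, run-length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run-length) pairs back to the original sequence."""
    return [v for v, n in runs for _ in range(n)]

row = [255, 255, 255, 255, 0, 0, 255, 255]
runs = rle_encode(row)
print(runs)                           # [(255, 4), (0, 2), (255, 2)]
print(rle_decode(runs) == row)        # True: lossless round trip
```

Here 8 pixel values shrink to 3 pairs, but a row with no repeats would expand to twice its size, which is exactly the caveat noted above.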
LZW Encoding for image encoding & compression


A lot of data hiding methods have been developed as a means of secret data communication. Accordingly, numerous techniques have been proposed in the name of either steganography or watermarking, which all belong to data hiding techniques in the wide sense. In data hiding, the most common way is a target-based method, which means that a specific target such as a domain (frequency, time, or spatial) is pre-determined before developing a hiding method. Here we are especially interested in developing a k-sslrcs data hiding method that can be generally applied to many common lossless compression applications. Among the several approaches to data and image compression, LZW is a well-known technique. Since it refers to a dictionary storing single symbols and their combinations, LZW is classified as a dictionary-based technique. In this project LZW is used to compress the digital image, so that the storage capacity of the system can be increased and a new approach to image compression is demonstrated. Finally, the compression ratio is calculated to measure the technique's efficiency.
LZW based image encoding and decoding | LZW image compression_Img
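The dictionary-building behaviour of LZW can be sketched as follows: the table starts with all 256 single-byte strings, and every time an unseen phrase appears it is added under the next free code, so repeated patterns collapse to single codes:

```python
def lzw_encode(data):
    """LZW compression of a byte sequence using a growing dictionary
    seeded with all 256 single-byte strings."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                    # keep extending the current phrase
        else:
            out.append(table[w])      # emit code for the longest known phrase
            table[wc] = len(table)    # register the new phrase
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

data = b"ABABABAB"
codes = lzw_encode(data)
print(codes)  # [65, 66, 256, 258, 66] — 5 codes for 8 input bytes
```

The decoder rebuilds the identical dictionary from the code stream alone, which is why no dictionary needs to be transmitted alongside the compressed image.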
Optic Disc Localization for Biomedical Applications


We propose a method in which detection and localization of the optic disc is done using edge detection. Edge detection is an image processing technique that reduces the image for feature extraction by determining the edges in the image pixel by pixel. In our project, edge detection is used for localization and detection of the optic disc in retinal images for analyzing digital diabetic retinopathy systems. Localizing the optic disc and its center is the first step of most vessel segmentation, disease diagnostic, and retinal recognition algorithms.
Localisation of Optic Disc | Optic disk detection from eye_Img
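The edge detection step can be sketched with the classic 3×3 Sobel operators; this is a generic gradient-magnitude edge map, not the project's specific optic-disc pipeline, which would add circle fitting or brightness cues on top of it:

```python
import numpy as np

def sobel_edges(gray):
    """Gradient-magnitude edge map using the 3x3 Sobel operators."""
    g = np.asarray(gray, dtype=float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient
    ky = kx.T                                            # vertical gradient
    h, w = g.shape
    mag = np.zeros((h, w))
    for i in range(1, h - 1):         # skip the 1-pixel border
        for j in range(1, w - 1):
            patch = g[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            mag[i, j] = np.hypot(gx, gy)
    return mag

# Vertical step edge between a dark and a bright half
img = np.zeros((5, 6)); img[:, 3:] = 100
edges = sobel_edges(img)
print(edges[2, 2] > 0 and edges[2, 0] == 0)  # True: edge found only at the step
```

Thresholding `mag` yields the binary edge map on which the disc boundary is then localized.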
Image segmentation Entropy Methodology


Segmentation is a task of dividing an image into regions on the basis of features such as color or pixel value. In this project, a fast threshold selection algorithm is implemented to speed up the original minimum cross entropy (MCE) thresholding method for image segmentation. Our main aim is to find the various segments in an image on the basis of its features. In segmentation, MCE-based multilevel thresholding is regarded as an effective improvement; however, it is very time consuming for real-time applications. We implement a methodology in which minimum cross entropy is used for image segmentation. The method remains effective when the number of regions in the segmentation varies, when the number of regions is fixed, and when evaluated against theoretically different segmentation methods.
Entropy based Image Segmentation_Img
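A brute-force version of MCE thresholding can be sketched as follows (this is the slow exhaustive search over all gray levels that the project's fast algorithm is meant to accelerate):

```python
import numpy as np

def mce_threshold(gray, levels=256):
    """Minimum cross entropy (Li) threshold: pick the t that minimizes
    the cross entropy between the image and its binarized version."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    g = np.arange(levels) + 0.5              # bin centers (avoids log(0))
    best_t, best_eta = 1, np.inf
    for t in range(1, levels):               # exhaustive search over thresholds
        w1, w2 = hist[:t], hist[t:]
        if w1.sum() == 0 or w2.sum() == 0:
            continue                         # one class would be empty
        mu1 = (g[:t] * w1).sum() / w1.sum()  # mean of the dark class
        mu2 = (g[t:] * w2).sum() / w2.sum()  # mean of the bright class
        # cross entropy criterion (up to a threshold-independent constant)
        eta = -(g[:t] * w1).sum() * np.log(mu1) - (g[t:] * w2).sum() * np.log(mu2)
        if eta < best_eta:
            best_t, best_eta = t, eta
    return best_t

# Bimodal test "image": dark background and a bright object
img = np.concatenate([np.full(500, 40), np.full(200, 200)])
t = mce_threshold(img)
print(40 < t < 200)  # True: threshold separates the two modes
```

Multilevel MCE repeats this criterion over several thresholds at once, which is where the exhaustive search becomes too slow for real-time use.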