M.tech Projects

Huffman Encoding for image compression

2012

115

Huffman coding can be used to compress all sorts of data. It is an entropy-based algorithm that relies on an analysis of the frequency of symbols in an array. Huffman coding can be demonstrated most vividly by compressing a raster image. Suppose we have a 5×5 raster image with 8-bit color, i.e. 256 different colors. The uncompressed image will take 5 × 5 × 8 = 200 bits of storage. This compression technique is used broadly to encode music, images, and certain communication protocols. Lossless JPEG compression uses the Huffman algorithm in its pure form. Lossless JPEG is common in medicine as part of the DICOM standard, which is supported by the major medical equipment manufacturers (for use in ultrasound machines, MRI machines, and electron microscopes). Variations of the Lossless JPEG algorithm are also used in the RAW format, which is popular among photo enthusiasts because it saves data from a camera's image sensor without losing information. This project implements the Huffman technique for image compression and then analyses the technique on the basis of the PSNR, BER, and MSE parameters.
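The idea can be sketched outside MATLAB too. Below is a minimal Python illustration (not the project code) that builds a Huffman table for a 25-pixel, 8-bit image like the 5×5 example above:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table {symbol: bitstring} for a sequence."""
    freq = Counter(symbols)
    if len(freq) == 1:                        # degenerate: only one symbol
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, unique tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)       # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# 25 pixels of an 8-bit image: uncompressed cost is 25 * 8 = 200 bits
pixels = [0] * 12 + [255] * 8 + [128] * 4 + [64]
codes = huffman_codes(pixels)
compressed_bits = sum(len(codes[p]) for p in pixels)
```

With symbol frequencies 12, 8, 4, and 1 the code lengths come out as 1, 2, 3, and 3 bits, so the 200-bit image compresses to 43 bits (before counting the code table itself).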
Huffman Coding | Huffman based image coding_Img
Multiuser detection in CDMA systems

2012

115

Multiuser detection techniques exploit the structure of multiple-access interference (MAI) to achieve interference suppression, and they provide substantial performance gains over conventional single-user detection techniques. In a direct-sequence CDMA system, the signals of multiple direct-sequence users interfere with one another; this interference is referred to as MAI. At the receiving end, the signal of the desired user must be detected in the presence of MAI. Moreover, if some interfering transmitters are located closer to the base station than the desired user, the intended user's receiver sees more interference than it would without this near-far effect. MAI and the near-far problem are thus the two issues that need considerable attention for reliable detection of the desired user's signal. The conventional detector treats MAI as external noise and is referred to as a single-user detection technique. In our project we study two multi-rate access methods of a multi-carrier CDMA system. A decorrelating detector is implemented for multiuser detection to remove MAI from the signal; this detector is an advance over the matched filter.
Decorrelating based MUD | Analysis of Multiuser detection_Img
LEACH protocol designing

2012

115

LEACH (Low Energy Adaptive Clustering Hierarchy) is a hierarchical routing protocol which uses random rotation of the cluster-head role among the nodes to evenly distribute energy consumption across the network. In this project the LEACH protocol is implemented to analyse the working of the whole procedure. The steps we follow are:
1. Obtain the coverage area from the user.
2. Set the initial parameters for the network, such as the number of nodes and the energy of the transmitters and receivers.
3. Form the clusters and then, on the basis of remaining energy, classify each node as alive, half-dead, or dead.
4. On the basis of the dead-node count, analyse the lifetime of the system.
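Step 3 hinges on LEACH's randomized cluster-head election: each node still eligible in the current epoch draws a uniform random number and becomes a cluster head if it falls below the threshold T(n) = p / (1 − p·(r mod 1/p)). A small Python sketch (parameter names are illustrative, not the project's code):

```python
import random

def leach_threshold(p, r):
    """LEACH cluster-head election threshold T(n) for round r,
    where p is the desired fraction of cluster heads."""
    return p / (1 - p * (r % int(1 / p)))

def elect_cluster_heads(node_ids, p, r, rng):
    """Nodes that have not yet served as cluster head in this epoch draw
    a uniform number; those below T(n) become cluster heads this round."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]

rng = random.Random(42)
p = 0.1                                   # want ~10% cluster heads
heads_round0 = elect_cluster_heads(range(100), p, 0, rng)
```

Note that T(n) grows as the epoch progresses (at r = 9 with p = 0.1 it reaches 1.0), which guarantees every node serves as cluster head exactly once per epoch.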
LEACH Protocol | Cluster Heads | WSN | Energy Protocols_Img
ACO Travelling Salesman Problem Solution

2012

115

Ant colony optimization (ACO) belongs to the group of metaheuristic methods. In this project we solve the travelling salesman problem (TSP) using ant colony optimization. First, the user specifies the coverage area, i.e. the area in which the nodes lie. Second, the user gives the number of nodes, which is the total number of cities in the specified area. Then, on the basis of Euclidean distance, we build the initial population of the TSP instance to be solved by ACO. The fitness function is based on distance and maximum city coverage. To maximize the fitness value, we optimize the initial population using ant colony optimization and obtain the best route, with the objective of minimum distance and maximum nodes covered.
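The procedure above can be sketched as follows (illustrative Python; the colony size, α, β, and evaporation rate are hypothetical choices, not the project's settings):

```python
import math, random

def tour_length(tour, pts):
    """Closed-tour Euclidean length."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def aco_tsp(pts, n_ants=20, n_iter=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    rng = random.Random(seed)
    n = len(pts)
    tau = [[1.0] * n for _ in range(n)]                 # pheromone levels
    eta = [[0.0 if i == j else 1.0 / max(math.dist(pts[i], pts[j]), 1e-9)
            for j in range(n)] for i in range(n)]       # heuristic = 1/distance
    best_tour = list(range(n))
    best_len = tour_length(best_tour, pts)
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:                             # build one tour
                i = tour[-1]
                cand = list(unvisited)
                w = [(tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
                unvisited.remove(tour[-1])
            tours.append((tour_length(tour, pts), tour))
        for i in range(n):                               # evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for length, tour in tours:                       # deposit ~ 1/length
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
        it_len, it_tour = min(tours)
        if it_len < best_len:
            best_len, best_tour = it_len, it_tour
    return best_tour, best_len

# cities on a circle: the optimal route is simply the perimeter ordering
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8)) for k in range(8)]
best_tour, best_len = aco_tsp(pts)
```

Shorter tours receive more pheromone, so subsequent ants are biased toward re-using their edges; evaporation keeps the colony from locking onto an early, suboptimal route.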
TSP | ACO based Salesman Problem | ACO based Cities Finding_Img
LMS based Audio Signal Enhancement

2012

115

Signal processing is an operation designed for extracting, enhancing, storing, and transmitting useful information; hence signal processing tends to be application dependent. In contrast to conventional filter design techniques, adaptive filters do not have constant filter coefficients and require no a priori information. Such a filter with adjustable parameters is called an adaptive filter. Adaptive filters adjust their coefficients to minimize an error signal and can be realized as finite impulse response (FIR), infinite impulse response (IIR), lattice, and transform-domain filters. The most common form of adaptive filter is the transversal filter using the least mean squares (LMS) algorithm. In this project the LMS algorithm is implemented; the steps followed are:
• Obtain an audio signal from the user.
• Mix the signal with noise.
• Feed the noisy signal to the adaptive (LMS) filter.
• Perform filtration with the adaptive filter.
• Obtain the final de-noised signal.
• Analyse the result by comparing the de-noised signal with the original signal.
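The steps above map to a short loop around the LMS update w ← w + 2μ·e·x. A sketch in Python rather than the project's MATLAB, assuming the ideal noise-cancellation setup where the filter's reference input is the noise itself:

```python
import numpy as np

def lms_denoise(noisy, reference, order=8, mu=0.02):
    """Adaptive noise canceller: the filter learns to predict the noise
    from the reference input; the error signal is the cleaned audio."""
    w = np.zeros(order)
    out = np.zeros(len(noisy))
    for n in range(order - 1, len(noisy)):
        x = reference[n - order + 1:n + 1][::-1]   # current + past reference
        y = w @ x                                   # noise estimate
        e = noisy[n] - y                            # error = de-noised sample
        w += 2 * mu * e * x                         # LMS coefficient update
        out[n] = e
    return out

rng = np.random.default_rng(0)
t = np.arange(4000) / 8000.0
clean = np.sin(2 * np.pi * 440 * t)                 # stand-in "audio" tone
noise = rng.normal(0.0, 0.5, t.size)
noisy = clean + noise
denoised = lms_denoise(noisy, noise)                # ideal reference = the noise
```

The step size μ trades convergence speed against steady-state error; too large a value makes the coefficient update unstable.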
Adaptive Filters | Noise Removal | Filtration of audio signal_Img
Image compression using wavelet approach

2012

115

The 2D discrete wavelet transform (DWT) is the most important new image compression technique of the last decade. Conventionally, the 2D DWT is carried out as a separable transform by cascading two 1D transforms in the vertical and horizontal directions. Therefore, vanishing moments of the high-pass wavelet filters exist only in these two directions. The separable transform fails to provide an efficient representation for directional image features, such as edges and lines, not aligned vertically or horizontally, since it spreads the energy of these features across subbands. In this project we implement image compression with the discrete wavelet transform and then analyse the technique on the basis of parameters such as peak signal-to-noise ratio (PSNR), mean square error (MSE), and bit error rate (BER).
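A one-level separable 2D DWT can be illustrated with the Haar wavelet (the simplest choice; the project may use a different wavelet family). Small coefficients are zeroed to mimic compression, and MSE/PSNR are computed as in the analysis step:

```python
import numpy as np

def haar2d(x):
    """One level of the separable 2D Haar DWT (rows, then columns)."""
    def fwd(a):  # 1D Haar along the last axis
        s = (a[..., 0::2] + a[..., 1::2]) / np.sqrt(2)   # approximation
        d = (a[..., 0::2] - a[..., 1::2]) / np.sqrt(2)   # detail
        return np.concatenate([s, d], axis=-1)
    return fwd(fwd(x).T).T

def ihaar2d(c):
    """Inverse of haar2d (undo columns, then rows)."""
    def inv(a):
        half = a.shape[-1] // 2
        s, d = a[..., :half], a[..., half:]
        out = np.empty_like(a)
        out[..., 0::2] = (s + d) / np.sqrt(2)
        out[..., 1::2] = (s - d) / np.sqrt(2)
        return out
    return inv(inv(c.T).T)

img = np.outer(np.arange(8.0), np.arange(8.0))        # smooth 8x8 test image
coef = haar2d(img)
coef_c = np.where(np.abs(coef) > 1.0, coef, 0.0)      # drop small coefficients
rec = ihaar2d(coef_c)
mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(img.max() ** 2 / mse) if mse > 0 else float("inf")
```

Because the transform is orthonormal, discarding low-magnitude coefficients discards exactly that much signal energy, which is why smooth images compress well.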
DWT based image compression_Img
Audio Steganography Using LSB methodology

2012

115

Information hiding is a part of information security. Steganography is a technique of information hiding that focuses on hiding the existence of secret messages. The aim of steganographic methods is to hide the existence of the communication and therefore to keep any third party unaware of the presence of the steganographic exchange. Embedding secret messages into digital sound is known as audio steganography; it is usually a more difficult process than embedding messages in other media. Audio steganography methods can embed messages in WAV, AU, and even MP3 sound files. In our project we implement an algorithm to hide text messages in an audio signal, which can be used for message communication; the messages are decoded at the other user's end with decoder software built as the inverse of the embedding algorithm.
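The embed/decode pair described above can be sketched as LSB substitution on PCM samples (illustrative Python; a real implementation would read and write actual WAV frames):

```python
def embed_lsb(samples, message):
    """Hide bytes in the least-significant bits of PCM samples."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(samples):
        raise ValueError("message too long for this cover audio")
    out = list(samples)
    for k, b in enumerate(bits):
        out[k] = (out[k] & ~1) | b        # overwrite only the LSB
    return out

def extract_lsb(samples, n_bytes):
    """Inverse: read the LSBs back and reassemble the bytes."""
    bits = [s & 1 for s in samples[:8 * n_bytes]]
    return bytes(sum(bits[8 * i + j] << j for j in range(8))
                 for i in range(n_bytes))

cover = list(range(1000))                 # stand-in for 16-bit PCM samples
stego = embed_lsb(cover, b"secret")
recovered = extract_lsb(stego, 6)
```

Each cover sample changes by at most one quantization level, which is inaudible in 16-bit audio.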
LSB based audio steganography_Img
Object detection Using MATLAB

2012

115

Image processing is a technique of modifying an image as required, for example editing, cropping, or detection. In our project, detection is done on the basis of color, shape, and size. The Image Processing Toolbox provides a comprehensive suite of reference-standard algorithms and visualization functions for image analysis tasks such as statistical analysis, feature extraction, and measurement. It is useful for telling similar objects with different colors apart, and for telling similarly colored objects with different sizes apart. Here we built an application to count circular segments, and we check the efficiency of the system by analysing the accuracy of the counter using digital image processing. We also introduce a novel approach for feature extraction on a color and circularity basis.
Feature Segmentation | Image processing based segment counter_Img
Fading Channel performance analysis

2012

115

Fading is the term used to describe the rapid fluctuations in the amplitude of the received radio signal over a short period of time. Fading is a common phenomenon in mobile communication channels, where it is caused by interference between two or more versions of the transmitted signal which arrive at the receiver at slightly different times. The resultant received signal can vary widely in amplitude and phase, depending on various factors such as the intensity and relative propagation time of the waves and the bandwidth of the transmitted signal. In this project we have implemented a Simulink model for a communication transmitter and receiver; a Rayleigh fading channel is then introduced to check its effect on communication performance, analysed in terms of BER and related metrics.
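The analysis can be sketched as a Monte-Carlo simulation (Python stand-in for the Simulink model; BPSK modulation is assumed for simplicity) comparing BER over AWGN alone against a flat Rayleigh fading channel at the same SNR:

```python
import numpy as np

def ber_bpsk(snr_db, n_bits=200_000, fading=False, seed=0):
    """Monte-Carlo BER of BPSK, over AWGN or over a flat Rayleigh fading
    channel with ideal channel knowledge at the receiver."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    x = 2.0 * bits - 1.0                              # BPSK mapping
    snr = 10 ** (snr_db / 10)
    noise = rng.normal(0.0, np.sqrt(1 / (2 * snr)), n_bits)
    if fading:
        h = rng.rayleigh(np.sqrt(0.5), n_bits)        # envelope, E[h^2] = 1
        y = (h * x + noise) / h                        # coherent equalization
    else:
        y = x + noise
    return np.mean((y > 0).astype(int) != bits)

ber_awgn = ber_bpsk(8)                # AWGN only
ber_ray = ber_bpsk(8, fading=True)    # Rayleigh fading, same average SNR
```

At 8 dB the fading channel's BER comes out orders of magnitude worse than AWGN alone, which is exactly the degradation the project's analysis quantifies.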
Communication using Fading channels | Analysis of Rayleigh Fading_Img
Video Watermarking using Image Processing

2012

115

In this project, a description and comparison of encryption methods and representative video algorithms is presented, with respect not only to their encryption speed but also their security level and stream size. A trade-off between the quality of video streaming and the choice of encryption algorithm is shown; achieving efficiency, flexibility, and security together is the researcher's challenge. This project seeks to develop robust watermarking software based on research work carried out earlier. While exploring the various watermarking techniques and algorithms that have been proposed for a robust watermarking solution, the group implemented a proposed robust watermarking solution. A robust watermark is more resilient to the tampering and attacks that a multimedia object (image, video, or audio) faces, such as compression, image cropping, image flipping, and image rotation, to name a few.
Video watermarking | Video data encryption_Img
Simulink model for BER analysis of OFDM systems

2012

115

OFDM is an application of a parallel data transmission scheme which reduces the influence of multipath fading, makes complex equalizers unnecessary, and can dramatically increase the data rates of future wireless communications. OFDM is a particular form of multi-carrier transmission and is suited to frequency-selective channels and high data rates. In this project we implement a Simulink model design for a wireless network standard built on the OFDM concept. The system is implemented in the Simulink toolbox of MATLAB, with a transmitter and a receiver using the standard methodology for data transmission between them; at the receiver end an analyzer block helps us evaluate the performance of the system on the basis of parameters like BER and the number of errors.
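The transmitter-receiver chain can be sketched end-to-end in a few lines (Python stand-in for the Simulink blocks; BPSK per subcarrier over AWGN is an assumption, as are the 64-subcarrier and 16-sample cyclic-prefix sizes):

```python
import numpy as np

def ofdm_ber(snr_db, n_sym=400, n_sc=64, cp=16, seed=0):
    """BPSK-OFDM link over AWGN: IFFT + cyclic prefix at the transmitter,
    CP removal + FFT at the receiver, then hard decisions."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, (n_sym, n_sc))
    X = 2.0 * bits - 1.0                               # BPSK per subcarrier
    tx = np.fft.ifft(X, axis=1) * np.sqrt(n_sc)        # unit average power
    tx_cp = np.concatenate([tx[:, -cp:], tx], axis=1)  # add cyclic prefix
    snr = 10 ** (snr_db / 10)
    sigma = np.sqrt(1 / (2 * snr))
    w = sigma * (rng.normal(size=tx_cp.shape) + 1j * rng.normal(size=tx_cp.shape))
    rx = (tx_cp + w)[:, cp:]                           # channel, then drop CP
    Y = np.fft.fft(rx, axis=1) / np.sqrt(n_sc)
    return np.mean((Y.real > 0).astype(int) != bits)   # hard-decision BER

ber_4db = ofdm_ber(4)
```

Over a pure AWGN channel, OFDM with BPSK matches the single-carrier BPSK BER curve; the cyclic prefix only starts paying off once multipath is added.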
OFDM | Error Rate Calculation | Multiplexing_Img
Image Noising and Denoising with Multiple Noises and Filters

2012

115

The goal of image denoising is to remove noise by differentiating it from the signal; it is a well-studied problem in the field of image processing. In this project we present a new method of unsharp masking for contrast enhancement of images, use basic filters to remove noise, and give a comparative analysis between them. The approach employs an adaptive median filter that controls the contribution of the sharpening path in such a way that contrast enhancement occurs in high-detail areas, together with a noise detection technique for removing mixed noise from images. A hybrid cumulative histogram equalization is proposed for adaptive contrast enhancement.
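As one concrete example of the basic filters compared in the project, the median filter is the standard tool against impulse (salt-and-pepper) noise. A small sketch (illustrative Python, synthetic test image):

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median filter with replicate padding at the borders."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

rng = np.random.default_rng(0)
img = np.full((32, 32), 100.0)                    # flat test image
noisy = img.copy()
mask = rng.random(img.shape) < 0.05               # 5% impulse noise
noisy[mask] = rng.choice([0.0, 255.0], mask.sum())
restored = median_filter(noisy)
```

Unlike a mean filter, the median rejects outliers entirely, so isolated salt-and-pepper impulses vanish without blurring edges.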
Image noising and denoising | image noise reduction_Img
Fingerprint recognition system using PHT algorithm

2012

115

Polar harmonic transforms (PHTs) are orthogonal rotation-invariant transforms that provide many numerically stable features, and they can be used to generate rotation-invariant features for recognition. The kernel functions of PHTs consist of sinusoidal functions that are inherently computation-intensive, so we develop a fast approach for their computation using recursion and the 8-way symmetry/anti-symmetry property of the kernel functions. With PHTs there is also no numerical instability issue, unlike with ZMs and PZMs, whose instability often limits their practical usefulness. A large part of the computation of the PHT kernels can be precomputed and stored. In the end, for each pixel, as little as three multiplications, one addition operation, and one cosine and/or sine evaluation are needed to obtain the final kernel value. In this project, three different transforms are introduced: the Polar Complex Exponential Transform (PCET), the Polar Cosine Transform (PCT), and the Polar Sine Transform (PST).
Fingerprint recognition | Polar harmonic transform_Img
Contaminants Detection In Cotton

2012

115

Contamination plays a vital role in deciding the quality of cotton, apart from essential properties such as length, strength, and fineness. Contamination of raw cotton can take place at every step, from farm picking to the ginning stage. Contamination, even if it is a single foreign fiber, can lead to the downgrading of yarn, fabric, or garments, or even the total rejection of an entire batch, and can cause irreparable harm to the relationship between growers, ginners, merchants, spinners, and textile and clothing mills. An International Textile Manufacturers Federation (ITMF) survey reported that claims due to contamination amounted to between 1.4 and 3.2% of total sales of 100% cotton and cotton-blended yarns. A fairly large number of cotton fiber recognition studies are based on the RGB color space. So in this project a system is implemented to find contaminants in cotton so that the cotton can be used with surety. Contaminants, or foreign fibers, are detected in cotton on the basis of layer separation and thresholding.
Fiber defect detection | Discontinuity testing_Img
Image compression approach using DCT

2012

115

In the JPEG image compression algorithm, the input image is divided into 8-by-8 or 16-by-16 blocks, and the two-dimensional DCT is computed for each block. The DCT coefficients are then quantized, coded, and transmitted. The JPEG receiver (or JPEG file reader) decodes the quantized DCT coefficients, computes the inverse two-dimensional DCT of each block, and then puts the blocks back together into a single image. For typical images, many of the DCT coefficients have values close to zero; these coefficients can be discarded without seriously affecting the quality of the reconstructed image. In this project we implement the discrete cosine transform image compression technique and then analyse it on the basis of parameters such as peak signal-to-noise ratio, mean square error, and bit error rate.
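The block transform at the heart of the scheme can be sketched with an orthonormal 8×8 DCT matrix; discarding high-frequency coefficients stands in for quantization (illustrative Python, not JPEG's actual quantization tables):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis: C[u, x] = a(u) * cos(pi*(2x+1)*u / (2n))."""
    x = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x[None, :] + 1) * x[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C

C = dct_matrix(8)
dct2 = lambda b: C @ b @ C.T                          # 2D DCT of an 8x8 block
idct2 = lambda c: C.T @ c @ C                         # inverse 2D DCT

img = np.outer(np.linspace(0, 255, 8), np.ones(8))    # smooth 8x8 block
coef = dct2(img)
mask = np.add.outer(np.arange(8), np.arange(8)) < 4   # keep low frequencies
rec = idct2(coef * mask)
mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
```

For smooth blocks, almost all energy lands in the low-frequency corner of the coefficient grid, so aggressive truncation still reconstructs with high PSNR.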
DCT based image compression_Img
Fusion Techniques Comparative Analysis

2012

115

Image fusion is the process that combines information from multiple images of the same scene. These images may be captured by different sensors, acquired at different times, or have different spatial and spectral characteristics. An integrated PCA-based image fusion system for stamping split detection is developed and tested on an automotive press line. Different splits of varying shape, size, and number are detected under actual operating conditions. Principal Component Analysis (PCA) is employed to transform the original image to its eigenspace. By retaining the principal components with the most influential eigenvalues, PCA keeps the key features of the original image and reduces the noise level. Pixel-level image fusion algorithms are then developed to fuse the original images from the thermal and visible channels, enhance the resulting image, and reduce undesirable noise. Finally, an automatic split detection algorithm is designed and implemented to perform online, objective automotive stamping split detection.
Wavelet based image fusion | PCA based image fusion_Img
Digital Color Detection in Image Processing

2012

115

Object detection is the task of identifying and locating objects in an image or video. In our project, object detection is done on the basis of color. Detection is performed by examining the pixels of the image and classifying each pixel value according to the color it signifies. We use color detection in image processing so the application can serve different purposes; detection of color can be based on the mean pixel value or on a histogram. The system includes a distance-information calculation unit, which divides a reference captured image (from among the images captured by the plurality of image capture units) into pixel blocks, retrieves the corresponding pixel positions in the other captured images, and calculates distance information for each block; and a histogram generation module, which divides the resulting range image into segments of predetermined size and builds, for each segment, a histogram of the distance information of its pixel blocks.
Color Feature Detection | Object detector_Img
Real Time Image Steganography MATLAB

2012

115

This project explores steganography from its earliest instances through potential future applications. Steganography is an answer for secure and secret communication. Existing methods in image steganography focus on increasing the embedding capacity of secret data; according to those methods, experimental results indicate that two pixels are required to embed one secret digit. To improve the embedding capacity, a novel method of Pixel Value Modification (PVM) by modulus function is proposed. The proposed PVM method can embed one secret digit in one pixel of the cover image, and it gives good quality of stego image. The experimental outputs validate that good visual perception of the stego image, together with a higher secret-data embedding capacity, can be achieved by the proposed method. Our algorithm offers very high capacity for the cover media compared to other existing algorithms. We present experimental results showing the superiority of our algorithm, along with comparative results against other similar algorithms in image-based steganography.
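The PVM idea, one digit per pixel via a modulus function, can be sketched as follows (illustrative Python; the exact adjustment rule of the original method may differ in detail):

```python
def pvm_embed(pixels, digits, mod=10):
    """Embed one base-`mod` digit per pixel: move each pixel to the nearest
    value (within 0..255) whose remainder mod `mod` equals the digit."""
    out = []
    for p, d in zip(pixels, digits):
        base = p - (p % mod) + d
        cands = [c for c in (base - mod, base, base + mod) if 0 <= c <= 255]
        out.append(min(cands, key=lambda c: abs(c - p)))
    return out

def pvm_extract(stego, n, mod=10):
    """Recovery is just the remainder of each stego pixel."""
    return [s % mod for s in stego[:n]]

pixels = [0, 17, 128, 250, 255]        # sample cover pixels
digits = [9, 3, 0, 7, 1]               # secret digits, one per pixel
stego = pvm_embed(pixels, digits)
```

Away from the 0/255 clamp, each pixel moves by at most mod/2 = 5 gray levels, which is why the stego image stays visually close to the cover.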
Pixel based image steganography | image message encoding decoding_Img
Character Recognition for Language processing

2012

115

Optical Character Recognition, or OCR, is a technology that enables you to convert different types of documents, such as scanned paper documents, PDF files, or images captured by a digital camera, into editable and searchable data. In particular, we focus on recognizing characters in situations that would traditionally not be handled well by OCR techniques. We present an annotated database of images containing English characters. The database comprises images of street scenes taken in Bangalore, India, using a standard camera. The problem is addressed in an object categorization framework based on a bag-of-visual-words representation. We assess the performance of various features using nearest-neighbour and SVM classification. It is demonstrated that the performance of the proposed method, using as few as 15 training images, can be far superior to that of commercial OCR systems.
OCR for sign verification | Character detection_Img
PAPR reduction Using PTS algorithm

2012

115

Communication is one of the important aspects of life. Signals that were initially sent in the analog domain are being sent more and more in the digital domain, and for better transmission even single carrier waves are being replaced by multiple carriers. Multi-carrier systems like CDMA and OFDM are nowadays implemented commonly. In an OFDM system, orthogonally placed sub-carriers carry the data from the transmitter end to the receiver end, and the guard band in this system deals with the problem of ISI. But the large peak-to-average power ratio (PAPR) of these signals has adverse effects on the communication system. This high PAPR is the major drawback of orthogonal frequency-division multiplexing (OFDM), and it becomes even more substantial if a transmitter with multiple antennas is considered. To overcome this problem, in this project the partial transmit sequences (PTS) method, well known for PAPR reduction in single-antenna systems, is studied for multi-antenna OFDM. Finally, after reduction of PAPR using the PTS algorithm, performance is checked on the basis of the number of errors in the signal, or by calculating the PAPR of the signal.
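The PTS search itself is easy to sketch for a single antenna (illustrative Python; the four-block partition and the {±1, ±j} phase alphabet are typical textbook choices, not necessarily the project's):

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_reduce(X, n_blocks=4, phases=(1, -1, 1j, -1j)):
    """Partial Transmit Sequences: partition the subcarriers into blocks,
    rotate each block by a candidate phase, keep the lowest-PAPR combination."""
    n = len(X)
    step = n // n_blocks
    parts = []
    for b in range(n_blocks):
        Xb = np.zeros(n, dtype=complex)
        Xb[b * step:(b + 1) * step] = X[b * step:(b + 1) * step]
        parts.append(np.fft.ifft(Xb))             # per-block time signal
    best, best_papr = None, np.inf
    for combo in product(phases, repeat=n_blocks):
        x = sum(c * p for c, p in zip(combo, parts))
        pr = papr_db(x)
        if pr < best_papr:
            best, best_papr = x, pr
    return best, best_papr

rng = np.random.default_rng(3)
X = rng.choice([1.0, -1.0], 64).astype(complex)    # BPSK subcarriers
papr_orig = papr_db(np.fft.ifft(X))
x_pts, papr_pts = pts_reduce(X)
```

Since the all-ones phase combination reproduces the original signal, the PTS result can never be worse; in practice the chosen phase vector must be sent to the receiver as side information.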
PAPR reduction in OFDM system | Partial Transmit Sequences_Img
Digital Video Broadcasting Simulink

2012

115

The demand for wireless communication is growing exponentially, and the next generation of wireless broadband multimedia communication systems will integrate various functions and applications in the same system. Supporting large data rates with sufficient robustness to radio channel impairments requires careful choice of modulation technique. The suitable choice is orthogonal frequency division multiplexing (OFDM), a special case of multi-carrier communication in which a single data stream is transmitted over a number of lower-rate sub-carriers. Altogether this leads to the conclusion that one radio frequency channel can be used to transmit more than one TV program. Digital video broadcasting (DVB-T) means broadcasting a multiplex, a package of various services. We have implemented a DVB-T system together with the effective scheme called Orthogonal Frequency Division Multiplexing (OFDM), with which a high bit rate over the frequency-selective channel is guaranteed to some extent.
DVBT implementation | Analysis of OFDM system_Img
Wireless System Design in Simulink

2012

115

In any communication system, there must be an information source (transmitter), a destination (receiver), and a medium to transmit information between the transmitter and the receiver. The message source originates a message such as a human voice, a television picture, a teletype message, or data. The message can be electrical or non-electrical; if it is not electrical, a source transducer converts it into an electrical signal. The transmitter may consist of an analog-to-digital converter, a data compressor, a source encoder, a channel encoder, a modulator, or other complicated subsystems. The receiver may consist of a demodulator, channel and source decoders, a data expander, a digital-to-analog converter, or others. The receiver transducer converts the electrical signal back to its original form, the message, and the message destination is the actual unit to which the message is sent. The channel is the information transmission medium; it can be of different types such as wire, a waveguide, an optical fiber, or a wireless link. As the channel acts as a filter, the signal can be distorted during transmission due to the attenuation and phase shift suffered by its different frequency components, and noise is also added to the transmitted signal as it passes through the channel. In this project a WSN communication module is implemented and its performance is analysed over the AWGN channel, with BER, SNR, and related quantities as the analysis parameters. A signal is generated first, passed through the transmission procedure and the AWGN channel, and processed by the receiver algorithm; at the end the transmitted and received signals are compared on the basis of these parameters.
AWGN Performance analysis | BER_Img
Stable Election Protocol in EEP

2012

115

Due to the small batteries powering nodes in WSNs, efficient utilization of battery power is an important factor. Clustering is an efficient technique to extend the lifetime of sensor networks by reducing energy consumption, and many clustering techniques have been introduced to select cluster heads within a cluster. One of them is the Stable Election Protocol (SEP), a protocol for clustered heterogeneous wireless sensor networks. SEP is based on weighted election probabilities for each node to become cluster head according to the remaining energy in each node. In this project, SEP is implemented. The project also concludes by studying the sensitivity of the SEP protocol to the heterogeneity parameters capturing energy imbalance in the network. The analysis shows that SEP yields a longer stability region for higher values of extra energy brought by the more powerful nodes.
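The weighted probabilities can be written down directly. Following the usual SEP formulation (a fraction m of "advanced" nodes carrying a times extra energy; symbol names are from that formulation, not this project's code):

```python
def sep_probabilities(p_opt, m, a):
    """SEP weighted cluster-head election probabilities for a network
    where a fraction m of nodes are 'advanced' with a times extra energy."""
    p_nrm = p_opt / (1 + a * m)             # normal nodes
    p_adv = p_opt * (1 + a) / (1 + a * m)   # advanced nodes
    return p_nrm, p_adv

# e.g. 10% desired cluster heads, 20% advanced nodes with 100% extra energy
p_nrm, p_adv = sep_probabilities(p_opt=0.1, m=0.2, a=1.0)
```

The weighting is chosen so that the population-average probability still equals p_opt: advanced nodes volunteer as cluster head more often, in proportion to their extra energy, which is what stretches the stability region.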
SEP Protocol | Energy Protocol | Radio Network | Energy Efficient_Img
Artificial Neural Network for Coin Recognition

2012

115

In this project we propose a method to design a neural network (NN) and, to demonstrate the effectiveness of the proposed scheme, apply it to coin recognition. Dirty coins require frequent machine cleaning, and the variations between images of new and old coins are also discussed. The coin recognition process has been divided into seven steps: acquire the RGB coin image; generate the pattern-averaged image; remove shadow from the image; crop and trim the image; convert the RGB image to grayscale; generate the feature vector and pass it as input to the trained NN; and produce the appropriate result according to the output of the NN. In general, as a problem becomes complex and large-scale, the number of operations increases and hardware implementation of NNs in real systems becomes difficult. Therefore, we propose a method which produces a small-sized NN system, to achieve cost reduction and to simplify hardware implementation in real machines.
Coin recognition system | Neural network based Coin recognition_Img
Defect Detection Using Thresholding Approach

2012

115

This project performs fabric discontinuity detection using the MATLAB Image Processing Toolbox, detecting defects on the basis of good samples used to train the system at the start. Numerous techniques have been developed to detect fabric defects, and one purpose of this project is to categorize and describe these algorithms. Categorization of fabric defect detection techniques is useful in evaluating the qualities of identified features. The characterization of real fabric surfaces using their structure and primitive set has not yet been successful. Therefore, on the basis of the nature of features extracted from fabric surfaces, the proposed approaches have been characterized into three categories: statistical, spectral, and model-based. In order to evaluate the state of the art, the limitations of several promising techniques are identified and their performance is analysed in the context of their demonstrated results and intended application.
Fabric defect detection | Textile Defect detection system_Img
ABS system Design using Fuzzy Logic

2012

115

An antilock braking system (ABS) is an important part of improving an automobile's active safety. ABS is used in motor vehicles to maintain tractive contact with the road surface according to driver inputs while braking, preventing the wheels from locking up and avoiding uncontrolled sliding. It is an automated system which operates at a much faster rate and with better control than a human driver. In this project we implement ABS using fuzzy logic: the user gives inputs such as brake force and velocity, and on that basis we develop a fuzzy-logic system capable of deciding how much braking to apply. The fuzzy logic works entirely on if-else conditions given by the developer for the conditions that can arise; the decisions are made through rule sets given to the fuzzy system.
Anti-lock braking system | Fuzzy Logic_Img
DWT based Image Watermarking

2012

115

In our project we use DWT (discrete wavelet transform) based image watermarking, one of the best-performing categories of watermarking techniques to date thanks to the properties of wavelets. A method and system are presented for inserting relationships between or among property values of certain coefficients of a transformed host image; the relationships encode the watermark information. One aspect of the present work is to modify an STD method to adapt it to a perceptual model simplified for the wavelet domain. The digital watermarking methods presented here embed a digital watermark in both the low and high frequencies of an image or other production, providing a digital watermark that is resistant to a variety of attacks. These methods optimize the strength of the embedded digital watermark so that it is as powerful as possible without being perceptible to the human eye, and they do this relatively quickly, in real time, and in an automated fashion using an intelligent system such as a neural network.
Image watermarking using DWT_Img
SVM based recognition system in MATLAB

2012

115

In today's information technology world, security for systems is becoming more and more important. The number of systems that have been compromised is ever increasing, and authentication plays a major role as a first line of defence against intruders. The three main types of authentication are something you know (such as a password), something you have (such as a card or token), and something you are (a biometric). The pressure on today's system administrators to keep systems secure is ever increasing, and one area where security can be improved is authentication. Iris recognition, a biometric, provides one of the most secure methods of authentication and identification thanks to the unique characteristics of the iris. Once the image of the iris has been captured using a standard camera, the authentication process, involving comparing the current subject's iris with the stored version, is one of the most accurate, with very low false acceptance and rejection rates. This makes the technology very useful in areas such as information security, physical access security, ATMs, and airport security. In this project we implement an iris recognition technique based on an SVM (support vector machine) classifier to match static images against database images, for use in security applications.
Iris recognition using SVM classifier | Iris feature extraction_Img
Image Fusion Using I-H-S methodology

2012

115

Image fusion is a technique used to integrate a high-resolution panchromatic image with a low-resolution multispectral image to produce a high-resolution multispectral image, containing both the high-resolution spatial information of the panchromatic image and the color information of the multispectral image; an increasing number of high-resolution images are becoming available as sensor technology develops. Here we propose a method in which digital images are fused, or mixed, to analyse the color of different objects. Digital color analysis has become an increasingly popular and cost-effective method used by resource managers and scientists for evaluating foliar nutrition and health in response to environmental stresses. We present a computationally efficient color image fusion algorithm for merging infrared and visible images.
Image fusion | Mixing of images_Img
CBIR over Color classification Approach

2012

115

Content-based means that the search analyzes the contents of the image rather than metadata such as keywords, tags, or descriptions associated with the image. In our project we make use of color feature retrieval using histograms. The main objective of this project is to analyze the current state of the art in content-based image retrieval (CBIR) using image processing in MATLAB. Different implementations of CBIR make use of different types of user queries, and the underlying search algorithms may vary depending on the application, but the result images should all share common elements with the provided example. Color histograms are widely used for content-based image retrieval; their advantages include insensitivity to small changes in camera viewpoint. However, a histogram is a coarse characterization of an image, so images with very different appearances can have similar histograms. We describe a technique for comparing images called histogram refinement, which imposes additional constraints on histogram-based matching. Histogram refinement splits the pixels in a given bucket into several classes, based upon some local property.
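Histogram matching as described can be sketched with per-channel histograms and the intersection measure (illustrative Python; the bucket count and the synthetic "similar"/"different" test images are arbitrary choices):

```python
import numpy as np

def color_histogram(img, bins=8):
    """Concatenated per-channel histogram, normalized to sum to 1."""
    h = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
         for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def hist_intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]; 1 = identical."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(0)
query = rng.integers(0, 256, (32, 32, 3))
similar = np.clip(query + rng.integers(-10, 10, query.shape), 0, 255)
dark = rng.integers(0, 64, (32, 32, 3))            # very different colors
s_similar = hist_intersection(color_histogram(query), color_histogram(similar))
s_dark = hist_intersection(color_histogram(query), color_histogram(dark))
```

Small pixel perturbations barely move the histogram, so the "similar" image scores much higher than the dark one; this insensitivity is exactly the strength and the coarseness discussed above.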
CBIR on basis of color | Feature Based CBIR_Img
Clustering Approach Development in WSN

2012

115

A sensor node is a tiny autonomous device used for monitoring, tracking, and surveillance. A number of sensor nodes together form a Wireless Sensor Network (WSN). Wireless Sensor Networks are used for a number of applications such as human body monitoring, underwater surveillance, military operations, and traffic control. The use of Wireless Sensor Networks is increasing day by day as sensor nodes become cheaper. Inefficient or manual placement of sensor nodes can lead to the failure of the network, whereas placing the nodes at pre-determined optimized locations allows the sensing range to be minimized. Minimizing the sensing range increases network lifetime, because less energy is consumed while monitoring targets. This project implements an algorithm for efficient clustering and deployment of sensors. The user supplies the initial parameters: the location points of the nodes and the number of sensors. The optimized sensor locations are then found by k-means clustering. The process is as follows:
1. Get the initial parameters for nodes and sensors
2. Find the Euclidean distance between nodes and sensors
3. Apply k-means clustering
4. From the clustering result, find the optimized locations for sensor deployment
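The steps above can be sketched with a plain k-means (Lloyd's algorithm) implementation: assign each node to its nearest center by Euclidean distance, then move each center to its cluster's mean. This is an illustrative Python sketch, not the project's MATLAB code; the node coordinates and the simple first-k-points initialization are assumptions:

```python
import math

def kmeans(points, k, iters=20):
    # Lloyd's algorithm with the first k points as initial centers.
    centers = [points[i] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

# Nodes form two spatial groups; the returned centers are candidate sensor sites.
nodes = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(sorted(kmeans(nodes, 2)))  # two centers, one near each group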
Clustering based Sensor Deployment_Img
Optic Disc Localization for Biomedical Applications

2012

115

We propose a method in which detection and localization of the optic disc is performed using edge detection. Edge detection is an image processing technique that reduces an image to the features needed for extraction by determining the edges in the image at the pixel level. In our project, edge detection is used for localization and detection of the optic disc when analyzing digital diabetic retinopathy systems. We propose a new method for localizing the optic disc in retinal images. Localizing the optic disc and its center is the first step of most vessel segmentation, disease diagnosis, and retinal recognition algorithms.
Localisation of Optic Disc | Optic disk detection from eye_Img
CBIR over Color and Shape classification

2012

115

This project reviews the progress of computer vision in the agricultural and food industry and then identifies areas for further research and wider application of the technique. In this project, we treat the challenge of automatically inferring the aesthetic quality of pictures from their visual content as a machine learning problem, using a peer-rated online photo-sharing website as the data source. We extract certain visual features based on the intuition that they can discriminate between aesthetically pleasing and displeasing images. Automated classifiers are built using support vector machines and classification trees. Linear regression on polynomial terms of the features is also applied to infer numerical aesthetics ratings. The work attempts to explore the relationship between the emotions which pictures arouse in people and their low-level content. Potential applications include content-based image retrieval and digital photography.
Quality Analyzer over various features_Img
Fuzzy Based Rating System

2012

115

Fuzzy logic can be used as an interpretation model for the properties of neural networks, as well as for giving a more precise description of their performance. An expert in a certain field can sometimes produce a simple set of control rules for a dynamical system with less effort than the work involved in training a neural network. In this project, an efficient fuzzy wavelet packet (WP) based feature extraction method and a fuzzy-logic-based disorder assessment technique were used to investigate the voice signals of patients suffering from unilateral vocal fold paralysis. Today, customers typically check the rating of a product before purchasing it, so a fuzzy-logic-based rating system is developed in MATLAB. This project focuses on the deconstruction and analysis of the front-end components. We apply fuzzy logic evaluation to the concepts of the linked constraints in the components. By constructing three kinds of fuzzy logic methods, namely fuzzy synthetic evaluation (FSE), fuzzy interpretive structural modeling (FISM), and fuzzy clustering analysis (FCA), modular product design and planning can be achieved.
Fuzzy logic based classifier_Img
Antenna Designing with Directivity Analysis

2012

115

The three decades of growth and development of satellite communication have provided the world with international and long-distance fixed and mobile satellite services (FSS) that have helped change the world into what we know today as the global village. The global communication satellite market has expanded rapidly into personal communication services, mobile communication services, navigational satellite services, and broadband satellite services. What makes satellite communication such an attractive market can be summarized as wide area coverage, distance insensitivity, flexibility, multiple access, destination capability, and economy. An antenna array is a set of N spatially separated antennas. The number of antennas in an array can be as small as 2 or as large as several thousand (as in the AN/FPS-85 Phased Array Radar Facility operated by the U.S. Air Force). In general, the performance of an antenna array (for whatever application it is being used) increases with the number of antennas (elements) in the array; the drawback, of course, is the increased cost, size, and complexity. This project implements linear and polar antenna array radiation patterns and compares them on the basis of their radiation patterns.
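For a uniform linear array, the radiation pattern is governed by the array factor, the coherent sum of the element phasors. A small Python sketch (the element count and half-wavelength spacing are illustrative assumptions):

```python
import cmath
import math

def array_factor(n, d, theta, wavelength=1.0):
    # Magnitude of the array factor of an n-element uniform linear array
    # with element spacing d (same units as wavelength); theta is the
    # angle in radians measured from broadside.
    k = 2 * math.pi / wavelength          # wavenumber
    psi = k * d * math.sin(theta)         # phase step between elements
    return abs(sum(cmath.exp(1j * i * psi) for i in range(n)))

# At broadside (theta = 0) all elements add in phase, so AF = n.
print(array_factor(8, 0.5, 0.0))  # 8.0
```

Sweeping `theta` over -pi/2..pi/2 and plotting `array_factor` on linear and polar axes reproduces the kind of pattern comparison the project describes; directivity grows with the element count `n`.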
Antenna Designing | Analysis of Antenna Pattern_Img
RLE Encoding for image encoding & compression

2012

115

Lossless methods are used for text compression, and for image compression in certain environments such as medical imaging where no loss of information is tolerated. Lossy compression methods are commonly applied in image and audio compression and, depending upon the fidelity required, achieve higher compression ratios. Digital images require an enormous amount of storage space. Run-length encoding (RLE) is a very simple form of data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most useful on data that contains many such runs: for example, relatively simple graphic images such as icons, line drawings, and animations. It is not useful with files that don't have many runs, as it could potentially double the file size. Run-length encoding performs lossless data compression and is well suited to palette-based iconic images. In this project, RLE-based image compression is implemented as follows:
1. Select the image to be compressed from the user
2. Enter the parameters of the RLE algorithm to compress the image
3. Obtain the compressed image by RLE coding
4. Calculate the compression ratio to evaluate the method's compression capability
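Step 3 amounts to the classic run-length transform. A minimal Python sketch of RLE encoding and decoding (the pixel row is an illustrative assumption):

```python
def rle_encode(data):
    # Collapse each run of identical values into a (value, count) pair.
    encoded = []
    for value in data:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1
        else:
            encoded.append([value, 1])
    return [tuple(pair) for pair in encoded]

def rle_decode(encoded):
    # Expand each (value, count) pair back into its run -- lossless.
    return [value for value, count in encoded for _ in range(count)]

row = [255, 255, 255, 0, 0, 255]
packed = rle_encode(row)
print(packed)  # [(255, 3), (0, 2), (255, 1)]
assert rle_decode(packed) == row
```

Here 6 pixel values pack into 3 pairs; step 4's compression ratio is simply the original size divided by the encoded size, and it exceeds 1 only when runs are common.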
RLE based image encoding and decoding | RLE image compression_Img
Digital Filter Designing using Genetic Algorithm

2012

115

In this project, a new technique is presented for the design and optimization of digital FIR and IIR filters with coefficients represented in canonic signed-digit (CSD) format. Since such an implementation requires no multipliers, it reduces hardware cost and lowers power consumption. The proposed technique pursues three goals: the optimum number of coefficients, the optimum word length, and the optimum set of coefficients that satisfies the desired frequency response and ensures minimum hardware cost by minimizing the number of nonzero digits in the CSD representation of the coefficients using Genetic Algorithms (GA). Compared with the equiripple method, the proposed technique yields roughly a 30-40 percent reduction in hardware cost. Here, FIR or IIR filter design is carried out to improve the SNR of a signal in a communication system; as part of digital filter design for signal enhancement, several recently proposed optimization-based algorithms for the design of FIR and IIR filters and filter banks are reviewed, covering linear-phase FIR filters and IIR filters.
Digital filter designing and optimization_Img
PAPR reduction Using SLM algorithm

2012

115

In this project the performance of a peak-to-average power ratio (PAPR) reduction scheme, the selected mapping (SLM) scheme, is investigated. In the presence of nonlinearity, we analyze the impact of the SLM technique on the bit-error-rate (BER) performance of orthogonal frequency division multiplexing (OFDM) systems in an additive white Gaussian noise channel. The SLM technique was first described by Bauml et al. Selected mapping is a technique in which multiple phase rotations are applied to the constellation points, and the one that minimizes the peak of the time signal is used. Selected mapping involves generating a large set of data vectors all representing the same information; the data vector with the lowest resulting PAPR is selected. Information about the selected and transmitted data vector is coded, and these codes are sent on additional subcarriers.
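The SLM idea (generate several phase-rotated versions of the same data, take the IDFT of each, and transmit the candidate with the lowest PAPR) can be sketched as follows. This is a hedged Python illustration: the candidate count, random phases, and toy BPSK vector are assumptions, and a real system would also transmit the selected phase index as side information.

```python
import cmath
import math
import random

def papr_db(signal):
    # Peak-to-average power ratio of a complex baseband signal, in dB.
    powers = [abs(s) ** 2 for s in signal]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def idft(symbols):
    # Naive inverse DFT: frequency-domain symbols -> OFDM time signal.
    n = len(symbols)
    return [sum(symbols[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def slm(symbols, num_candidates=4, seed=1):
    # Keep the phase-rotated candidate with the lowest PAPR; the
    # unrotated signal is included, so SLM can never do worse.
    rng = random.Random(seed)
    best = idft(symbols)
    for _ in range(num_candidates - 1):
        phases = [cmath.exp(2j * math.pi * rng.random()) for _ in symbols]
        candidate = idft([s * p for s, p in zip(symbols, phases)])
        if papr_db(candidate) < papr_db(best):
            best = candidate
    return best

data = [1, -1, 1, 1, -1, 1, -1, -1]  # toy BPSK symbol vector
print(papr_db(idft(data)), papr_db(slm(data)))  # SLM PAPR is never higher
```

With more candidates the expected PAPR reduction grows, at the cost of extra IDFT computations and side-information bits.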
Peak-to-Average Power Ratio Reduction | SLM_Img
PID controller design with fuzzy logic

2012

115

Fuzzy logic (FL) controllers based on fuzzy set theory are used to represent the experience and knowledge of a human operator in terms of linguistic variables called fuzzy rules. It is also well known that PID controllers are particularly adequate for processes whose dynamics can be effectively modeled by a first- or second-order system. A novel method based on the fuzzification of the set-point weight has been proposed for the tuning of PID controllers. The approach has been shown to be very effective in set-point following for a large number of processes, while the load disturbance attenuation performance obtained is preserved or improved. The devised control structure seems particularly appropriate for adoption in industrial settings, since it requires little computational effort, it is easily tuned, and it is compatible with a classical PID controller; that is, it consists of a module that can be added or excluded without modifying the parameters of the existing PID.
PID controller design | Fuzzy logic based PID controller_Img
Plant Dimensions Calculation using image processing

2012

115

Image processing is a methodology used in many applications, whether in research, quality enhancement, or industry. Image processing is also a key technology for finding the dimensions of products. In this project we implement a technique for finding the dimensions of a natural plant: first we acquire an image from the user, and then an algorithm finds the height and width of the object in the image, i.e. the dimensions of the plant. This technique can be useful for finding the dimensions of any object, which is also useful in military applications and for estimating the size of distant objects.
Plant Dimensions calculating system_Img
Blocking artifact Removal Approach

2012

115

We propose an adaptive approach which performs blockiness reduction in both the DCT and spatial domains to reduce block-to-block discontinuities. Blocking artifact detection and reduction is presented in this project. The algorithm first detects the regions of the image which present visible blocking artifacts. This detection is performed in the frequency domain and uses the estimated relative quantization error calculated when the discrete cosine transform (DCT) coefficients are modeled by a Laplacian probability function. Then, for each block affected by blocking artifacts, its DC and AC coefficients are recalculated for artifact reduction. To achieve this, a closed-form representation of the optimal correction of the DCT coefficients is produced by minimizing a novel enhanced form of the mean squared difference of slope for every frequency separately. This correction of each DCT coefficient depends on the eight neighboring coefficients in the subband-like representation of the DCT transform and is constrained by the quantization upper and lower bounds. Experimental results illustrating the performance of the proposed method are presented and evaluated.
Removal of blocking artifacts_Img
Wireless Routing Protocol Algorithm

2012

115

Distance vector is a term used to describe routing protocols that routers use to forward packets between networks. The purpose of any routing protocol is to dynamically communicate information about all network paths available to reach a destination and to select, from those paths, the best path to the destination network. Distance vector protocols select the best routing path based on a distance metric (the distance), finding the path that has the lowest total metric to reach the destination.
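Distance vector convergence is essentially distributed Bellman-Ford relaxation: each router repeatedly lowers its cost estimate using its neighbors' advertised distances. A centralized Python sketch (the topology and link costs are illustrative assumptions):

```python
def distance_vector(links, source):
    # Bellman-Ford relaxation over undirected links (a, b, cost):
    # repeatedly improve each node's best-known distance to `source`.
    nodes = {n for link in links for n in link[:2]}
    dist = {n: float('inf') for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):  # enough rounds to converge
        for a, b, cost in links:
            if dist[a] + cost < dist[b]:
                dist[b] = dist[a] + cost
            if dist[b] + cost < dist[a]:
                dist[a] = dist[b] + cost
    return dist

links = [('A', 'B', 1), ('B', 'C', 2), ('A', 'C', 5)]
# C is reached via B at total cost 3, beating the direct cost-5 link.
print(distance_vector(links, 'A'))
```

In a real protocol each router runs only its own relaxation step and learns neighbors' distances from periodic advertisements rather than from a global table.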
Routing Protocol | Efficient Routing | DSR_Img
Route Optimization Using ACO

2012

115

Nowadays in wireless communication, data is transferred over wireless media: an administrator generally specifies the source and destination of the data, and many algorithms have been developed for finding the best route, i.e. selecting the next neighbor node so that the data reaches its destination. In this project, Ant Colony Optimization (ACO) is used to select the best node for the next data transfer. First we take input from the user: the coverage area in which the nodes lie and the number of nodes. Then, on the basis of Euclidean distance, we compute the initial population of the problem to be solved by ACO. ACO is an optimization technique that finds the best result for an objective, which in this case is the next transfer node, so our fitness function is based on distance. To maximize the fitness value we optimize the initial population using ant colony optimization and obtain the best route, with the objective of minimum distance, from the source node to the destination node.
Optimized Routing | ACO | Best Route Finding_Img
Image Quantization Using DCT

2012

115

Color quantization reduces the number of colors used in an image; this is important for displaying images on devices that support a limited number of colors and for efficiently compressing certain kinds of images. The human eye is fairly good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high-frequency (rapidly varying) brightness variation. This fact allows one to reduce the amount of information required by ignoring the high-frequency components. This is done by simply dividing each component in the frequency domain by a constant for that component and then rounding to the nearest integer; this is the main lossy operation in the whole process. As a result, it is typically the case that many of the higher-frequency components are rounded to zero, and many of the rest become small positive or negative numbers. In this project, DCT-based image quantization is performed and the results are analyzed over the picture's color levels.
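The divide-and-round step described above can be sketched directly. A hedged Python illustration (the 2x2 coefficient block and the quantization steps are illustrative, loosely JPEG-style values, not the project's actual tables):

```python
def quantize(coeffs, q):
    # Divide each DCT coefficient by its quantizer step and round:
    # the lossy step that zeroes most high-frequency terms.
    return [[round(c / step) for c, step in zip(row, qrow)]
            for row, qrow in zip(coeffs, q)]

def dequantize(quantized, q):
    # Multiply back by the steps to approximate the original block.
    return [[v * step for v, step in zip(row, qrow)]
            for row, qrow in zip(quantized, q)]

coeffs = [[920.0, 31.0], [4.0, -2.0]]   # DC term plus small AC terms
q = [[16, 11], [12, 14]]                # per-coefficient quantizer steps
packed = quantize(coeffs, q)
print(packed)                 # [[58, 3], [0, 0]] -- small AC terms vanish
print(dequantize(packed, q))  # [[928, 33], [0, 0]], close to the original
```

Larger steps for high-frequency positions are what exploit the eye's insensitivity to rapid brightness variation.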
Color image quantization | Lossless Image Compression_Img
Wireless System Protocol Design using MATLAB

2012

115

This project implements a Simulink model for the IEEE 802.11 wireless standard. The system is implemented in the Simulink tool of MATLAB and comprises a transmitter and a receiver, with the standard methodology used for data transmission between them; at the receiver end, an analyzer block helps us evaluate the performance of the system on the basis of parameters such as BER and the number of errors. The main goal of this project is to learn and understand the features of the IEEE 802.11a standard and then, once familiarized with it, to develop an OFDM 802.11a PHY-layer baseband implementation with the characteristics given in the standard. MATLAB/Simulink is used for designing the model.
Ber Analysis | WSN | Simulink Model | Wimax_Img
Wireless Updated Routing Protocol Design

2012

115

This project is an advancement over the distance algorithm for data transfer, with the objectives of delay reduction and fast data transfer. The main objective is to provide maximum throughput or bandwidth for transferring data from source to destination. The algorithm works as follows: a table is created on the basis of acknowledgements, and transmission is carried out on the basis of its contents. The table contains information about node distance, neighboring nodes, and bandwidth, which helps select the shortest path accurately. As communication starts and data is sent, the node which has received the data acts as the source for further communication and transmits the data onwards; the table is then reviewed. If the last node is busy or heavily congested, the next node is chosen on the basis of bandwidth, meaning the node with the highest bandwidth is selected. If the high-bandwidth nodes are also loaded, the choice is made on the basis of distance, i.e. the least distant node is selected. This technique is more efficient than the previous one in terms of delay reduction, fast data transfer, and maximum channel availability.
Bandwidth Efficient | QOS | Channel Availability_Img
Fingerprint recognition system using PCT algorithm

2012

115

The proposed method can reduce the search space in alignment, and, more attractively, it obviates the need to extract minutiae points or the core point to align the fingerprint images. Experimental results show that the proposed method is more robust than using the reference point or the minutiae to align the fingerprint images. The fingerprint is one of the popular biometric traits used for recognizing a person; the properties which make it popular are its wide public acceptability and the ease of collecting fingerprint data. In this project we propose a method for fingerprint matching based on minutiae matching. However, unlike conventional minutiae matching algorithms, our algorithm also takes into account the region and line structures that exist between minutiae pairs. This allows more structural information of the fingerprint to be accounted for, resulting in stronger certainty of matching minutiae. Also, since most of the region analysis is preprocessed, it does not make the algorithm slower.
Finger print matching | Rotation Invariant Matching_Img
Digital Image Quantization for colored images

2012

115

A new lossless image compression scheme based on the DCT was developed. This method caused a significant reduction in entropy, thus making it possible to achieve compression using a traditional entropy coder. The method performed well when compared to the popular lossless JPEG method. Future work will focus on finding an efficient hardware implementation, possibly taking advantage of the commonality between the new method and the existing DCT-based lossy JPEG method. The DCT converts the spatial image representation into a frequency map: the low-order or "DC" term represents the average value in the block.
Image quantization using DCT_Img
Face recognition System using Eigen features

2012

115

Eigenfaces is the name given to a set of eigenvectors when they are used in the computer vision problem of human face recognition. Principal Component Analysis (PCA) is one of the most successful techniques used in image recognition and compression; it is a statistical method under the broad title of factor analysis. In our project, face recognition is implemented using PCA eigenfaces, and the accuracy of the system is analyzed for security and identification applications. The face is a complex multidimensional structure and needs good computing techniques for recognition. Our approach treats face recognition as a two-dimensional recognition problem. In this scheme, face recognition is performed by Principal Component Analysis (PCA): face images are projected onto a face space that best encodes the variation among known face images.
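The core of the eigenface method is finding the dominant eigenvectors of the covariance matrix of mean-centered image vectors. A small Python sketch using power iteration on a toy 2-feature dataset (real eigenfaces use long image vectors; the data and iteration count here are illustrative assumptions):

```python
def principal_component(data, iters=100):
    # Power iteration on the covariance matrix to find the dominant
    # eigenvector, i.e. the direction of maximum variance (the
    # "eigenface" direction when the rows are image vectors).
    n = len(data)
    means = [sum(col) / n for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, means)] for row in data]
    d = len(means)
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Points spread along the line y = x: the first component is ~(0.707, 0.707).
pts = [(0, 0), (1, 1), (2, 2), (3, 3.1)]
print(principal_component(pts))
```

Recognition then projects each centered face onto the top few such components and compares the resulting low-dimensional coefficient vectors.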
Pca eigen based face recognition_Img
Medical image enhancement processing

2012

115

Speckle is signal-correlated noise. In ultrasound imagery, other sources of noise are also present depending on the specific application; at the least, one has to consider the thermal and electronic noise added at the receiver. This section offers some idea of various noise reduction techniques. Unfortunately, the presence of speckle noise in these images affects edges and fine details, which limits the contrast resolution and makes diagnosis more difficult. Ultrasound imaging is a widely used diagnostic tool in modern medicine, used to visualize muscles and the internal organs of the human body, their size and structure, and injuries. The proposed filtering is a technique for the removal of speckle noise from digital images. Quantitative measures are taken using the signal-to-noise ratio, and the noise level is measured by the standard deviation.
Ultrasound image noise reduction | Speckle noise removal_Img
CBIR image classification over data mining

2012

115

The term "content" in this context might refer to colors, shapes, textures, or any other information that can be derived from the image itself.
Content based image retrieval_Img
Optical Character Recognition for Character Classification

2012

115

In the context of script recognition, it is worth studying the characteristics of various writing systems and the structural properties of the characters used in certain major scripts of the world. One interesting and challenging field of research in pattern recognition is Optical Character Recognition (OCR). Optical character recognition is the process in which a paper document is optically scanned and then converted into a computer-processable electronic format by recognizing and associating a symbolic identity with every individual character in the document. With the increasing demand for a paperless world, many OCR algorithms have been developed over the years. However, most OCR systems are script-specific in the sense that they can read characters written in one particular script only. A script is defined as the graphic form of the writing system used to write statements expressible in a language; that is, a script class refers to a particular style of writing and the set of characters used in it. Languages throughout the world are typeset in many different scripts. This project implements script recognition: first the system acquires an image from a webcam, then the OCR algorithm is applied to the captured image, extracting features and finally recognizing the script.
Optical character recognition, OCR using matlab_Img
Edge detection based image watermarking

2012

115

Digital watermarking is a process of embedding a signature into media data with only a few modifications. Adding a visible watermark is a common way of identifying images and protecting them from unauthorized use online. A common practice is to distribute the watermark (or watermarks) across the entire image. We propose more effective content-based sharp-point-detection watermarking. To increase the embedding capacity, the concept of a watermark within a watermark is used. To increase security we embed encrypted watermarks in the image, which provides an additional level of security: for instance, if the watermarking key is hacked, the attacker will still not be able to identify the watermark because it is encrypted. The system we develop is based on a sharp point detection algorithm: the algorithm yields points selected by sharp point detection, and the watermark is placed at those points.
Image watermarking using Kharies Points | New approach for watermarking_Img
LZW Encoding for image encoding & compression

2012

115

A lot of data hiding methods have been developed as a means of secret data communication. Accordingly, numerous techniques have been proposed under the names of steganography and watermarking, which all belong to data hiding techniques in the wide sense. In data hiding, the most common way is a target-based method, which means that a specific target such as a domain (frequency, time, or spatial) is pre-determined before developing a hiding method. In this project, we are especially interested in developing a k-sslrcs data hiding method that can be generally applied to many common lossless compression applications. Among the several approaches to data and image compression, LZW is a well-known technique. Since it refers to a dictionary storing single symbols and their combinations, LZW is classified as a dictionary-based technique. In this project, LZW is used to compress the digital image, so that the storage capacity of the system can be increased, giving a new approach to image compression. Finally, the compression ratio is calculated to measure the technique's efficiency.
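The dictionary-growing behavior of LZW can be sketched compactly. A hedged Python illustration of the encoder only (a real codec also needs the matching decoder and bit-packing of the codes; the input string is an illustrative assumption):

```python
def lzw_encode(data):
    # Start with single-symbol dictionary entries (codes 0-255); grow
    # the dictionary with each new phrase (current string + next symbol).
    dictionary = {chr(i): i for i in range(256)}
    current, out = '', []
    for symbol in data:
        phrase = current + symbol
        if phrase in dictionary:
            current = phrase          # keep extending the match
        else:
            out.append(dictionary[current])
            dictionary[phrase] = len(dictionary)  # learn the new phrase
            current = symbol
    if current:
        out.append(dictionary[current])
    return out

codes = lzw_encode('ABABABA')
print(codes)  # [65, 66, 256, 258] -- 7 symbols emitted as 4 codes
```

The repeated 'AB' and 'ABA' phrases are replaced by dictionary codes 256 and 258, which is where the compression comes from; the same dictionary is rebuilt symmetrically during decoding.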
LZW based image encoding and decoding | LZW image compression_Img
ECG Systems Analysis using DWT

2012

115

The investigation of the ECG has been used extensively for diagnosing many cardiac diseases. The ECG is a realistic record of the direction and magnitude of the electrical activity that is generated by depolarization and re-polarization of the atria and ventricles. The majority of the clinically useful information in the ECG is found in the intervals and amplitudes defined by its features (characteristic wave peaks and time durations). The development of precise and rapid methods for automatic ECG feature extraction is of chief importance, particularly for the examination of long recordings. In this project we implement a system which first extracts the characteristics of the ECG and, on that basis, finds the location and amplitude of the details of the ECG signal, so that we can identify the problem affecting the patient. The ECG signals are taken from a static database of patients.
ECG Feature Extraction | ST Segment Detection | Disease Detection_Img
Blocking Artifact analysis using DCT

2012

115

In this project we compare different image processing techniques: spatial filtering, localized filtering, and adaptive filtering. The comparison is made on the basis of different parameters: mean square error, peak signal-to-noise ratio, bit error rate, and the visibility of the image. Of these techniques, the adaptive technique shows good results, smoothing the artifacts more than the others. We can compress audio signals, video signals, text, faxes, and images; for medical images lossless compression is used, and for other types lossy compression can be used. For compressing an image we can use the DCT technique, but during decompression, when recovering the original image from the compressed one, we can face the problem of blocking artifacts. Various methods can be used for removing blocking artifacts; one of them is DCT filtering. For better results, we can remove blocking artifacts by spatial and hybrid filtering methods. Our experimental results show that hybrid filtering gives better performance on the basis of better PSNR, BER, and MSE.
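The MSE and PSNR metrics used in the comparison are straightforward to compute. A minimal Python sketch (the two sample pixel rows are illustrative assumptions):

```python
import math

def mse(original, processed):
    # Mean squared error between two equal-length pixel sequences.
    n = len(original)
    return sum((a - b) ** 2 for a, b in zip(original, processed)) / n

def psnr(original, processed, peak=255):
    # Peak signal-to-noise ratio in dB; higher means the processed
    # image is closer to the original.
    error = mse(original, processed)
    return float('inf') if error == 0 else 10 * math.log10(peak ** 2 / error)

a = [52, 55, 61, 66]   # original pixel row
b = [54, 55, 60, 66]   # filtered pixel row
print(mse(a, b))                 # 1.25
print(round(psnr(a, b), 2))      # about 47 dB
```

BER is analogous: the fraction of differing bits between the transmitted and recovered bitstreams.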
Blocking Artifacts in digital images | Ringing effect in images_Img
Noise Removal Using Wavelet Thresholding

2012

115

A very large portion of digital image processing is devoted to image denoising. This includes research in algorithm development and routine goal-oriented image processing. Image restoration is the removal or reduction of degradations that are incurred while the image is being obtained. Degradation comes from blurring as well as noise due to electronic and photometric sources. Blurring is a form of bandwidth reduction of the image caused by an imperfect image formation process, such as relative motion between the camera and the original scene or an optical system that is out of focus. Image denoising is often used in the field of photography or publishing, where an image was somehow degraded but needs to be improved before it can be printed. For this type of application we need to know something about the degradation process in order to develop a model for it; when we have a model for the degradation process, the inverse process can be applied to the image to restore it to its original form. In this project, the technique for image restoration or image denoising includes the BayesShrink algorithm for wavelet thresholding.
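Wavelet denoising shrinks the detail coefficients: with soft thresholding, coefficients below the threshold become exactly zero and the rest move toward zero by the threshold amount. A hedged Python sketch (BayesShrink would choose the threshold adaptively from estimated noise and signal variances; here it is fixed by hand on made-up coefficients):

```python
def soft_threshold(coeffs, t):
    # Soft thresholding: kill coefficients with |c| <= t (assumed to be
    # noise) and shrink the survivors toward zero by t.
    return [(abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0
            for c in coeffs]

noisy_details = [5.0, 0.4, -0.3, -6.0, 0.1]
print(soft_threshold(noisy_details, 0.5))  # [4.5, 0.0, 0.0, -5.5, 0.0]
```

Hard thresholding would instead keep the survivors unchanged; the denoised image is obtained by inverse-transforming the thresholded coefficients.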
Wavelet Noise removal | Hard-Soft threshold for noise reduction_Img
ADPCM based Coding And Compression

2012

115

The specification of ADPCM opens the door to a host of applications in telecommunication networks. These applications can be divided into three categories: telephone company use, end customer applications, and new service offerings. The ADPCM coder consists of an adaptive quantizer and an adaptive predictor. The relation between the encoder and the decoder is also depicted: the decoder is simply a subset of the encoder and transmits r(n) as its output instead of c(n). The adaptive predictor, which is composed of two poles and six zeros, computes an input signal estimate ŝ(n), which is subtracted from the input signal s(n), resulting in a difference signal d(n). The adaptive quantizer codes d(n) into the codeword c(n), which is sent over the transmission facility. At the receiving end, an ADPCM decoder uses c(n) to attempt to reconstruct the original signal s(n). Actually, only r(n) can be reconstructed, which is related to the original input signal s(n).
Adpcm based Coding And Compression | Audio Coding_Img
Image segmentation Entropy Methodology

2012

115

Segmentation is a task of dividing an image into regions on the basis of features such as color and pixel values. In this project, a fast threshold selection algorithm is implemented to speed up the original minimum cross entropy (MCE) threshold method in image segmentation. Our main aim is to find the various segments in an image on the basis of its features. The method is effective when the number of regions in the segmentation varies, when the number of regions is fixed, and when evaluating theoretically different segmentation methods. We implement a methodology in which minimum entropy is used for image segmentation. In segmentation, MCE-based multilevel thresholding is regarded as an effective improvement; however, it is very time consuming for real-time applications.
Entropy based Image Segmentation_Img
Image Steganography using Bitwise Algorithm

2012

115

Image encryption schemes have been increasingly studied to meet the demand for real-time secure image transmission over the Internet and through wireless networks. Encryption is the process of transforming information for its security. With the huge growth of computer networks and the latest advances in digital technologies, a huge amount of digital data is being exchanged over various types of networks, and a large part of this information is often either confidential or private. The security of images has become more and more important due to the rapid evolution of the Internet, and many different image encryption methods have been proposed to enhance it. Image encryption techniques try to convert an image into another one that is hard to understand; image decryption, on the other hand, retrieves the original image from the encrypted one. In this project, data hiding is implemented on the basis of a bitwise algorithm. The scenario for hiding and recovering the data is as follows:
1. Take the image in which the data is to be hidden
2. Enter the message text which is to be hidden in the image
3. Select particular bits of the image pixels as per the algorithm and hide the data there in binary form
4. Save the image to obtain the stego image
5. Recovery of the hidden message is the reverse process
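Step 3 is classic least-significant-bit (LSB) substitution. A minimal Python sketch on a flat list of 8-bit pixel values (the cover pixels and message bits are illustrative assumptions):

```python
def embed(pixels, message_bits):
    # Replace the least significant bit of each pixel with one message bit;
    # each pixel changes by at most 1, so the change is imperceptible.
    return [(p & ~1) | bit for p, bit in zip(pixels, message_bits)]

def extract(pixels, n_bits):
    # Step 5: recovery is the reverse -- read the LSB of each stego pixel.
    return [p & 1 for p in pixels[:n_bits]]

cover = [200, 201, 202, 203, 204, 205, 206, 207]
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, bits)
print(extract(stego, 8))  # [1, 0, 1, 1, 0, 0, 1, 0]
```

A complete system would first convert the message text to bits (and typically encrypt it, as the description suggests) before embedding.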
Implementation of LSB Steganography | Steganography with image_Img
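The project is implemented in MATLAB; the steps above can be sketched in NumPy as a classic least-significant-bit embed/extract pair (the NUL terminator and all sizes are illustrative choices, not the project's):

```python
import numpy as np

def embed_lsb(img, message):
    """Hide a text message in the least significant bits of image pixels."""
    data = message.encode() + b"\x00"          # NUL byte marks end of message
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    flat = img.ravel().copy()
    assert bits.size <= flat.size, "message too long for cover image"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(img.shape)

def extract_lsb(img):
    """Read LSBs back out and stop at the NUL terminator."""
    bits = img.ravel() & 1
    data = np.packbits(bits[: bits.size - bits.size % 8]).tobytes()
    return data.split(b"\x00")[0].decode()

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
stego = embed_lsb(cover, "secret")
```

Because only the lowest bit of each pixel changes, no pixel value moves by more than 1, which is why the stego image is visually indistinguishable from the cover.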
Image watermarking using DCT transform

2012

115

With personal computers applied in many fields and the Internet becoming popular and easy to use, most information is transmitted in digital form, and copying and backing up data on the World Wide Web is easier than ever. Copyright protection and authentication have gradually lost their security, so protecting intellectual property has become an important topic of technical study and research. The watermarking technique was proposed to solve this problem. In this project, a watermark is embedded in the host image by the DCT transform. Several papers embed the watermark in the same manner into the middle-band coefficients of the DCT block; however, JPEG (Joint Photographic Experts Group) compression usually discards the high-frequency band of the DCT block, including some middle-band data. Here the lower-band coefficients of the DCT block are employed, since they are robust against JPEG attack. To improve imperceptibility, only one bit is embedded per DCT block.
Image watermarking using DCT_Img
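The project's exact embedding rule is not given, so as a hedged sketch of the general idea, the NumPy snippet below embeds one bit per 8x8 block by snapping a low-band DCT coefficient to an even or odd multiple of a quantization step (quantization index modulation; the step size `q` and coefficient position `pos` are our illustrative choices):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def embed_bit(block, bit, q=16.0, pos=(2, 1)):
    """Force one low-band DCT coefficient to an even/odd multiple of q."""
    C = dct_matrix()
    D = C @ block @ C.T
    m = np.round(D[pos] / q)
    if int(m) % 2 != bit:
        m += 1
    D[pos] = m * q
    return C.T @ D @ C          # back to the spatial domain

def extract_bit(block, q=16.0, pos=(2, 1)):
    C = dct_matrix()
    D = C @ block @ C.T
    return int(np.round(D[pos] / q)) % 2

block = np.full((8, 8), 128.0)   # flat test block
wm = embed_bit(block, 1)
```

Because a uniform brightness shift only changes the DC coefficient, the bit at position (2, 1) survives it, which hints at why low-band embedding is robust.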
Simulink Modal Design For Wireless System

2012

115

In this project we implement a Simulink model of a wireless standard based on OFDM. The system is built in the Simulink toolbox of MATLAB and consists of a transmitter and a receiver connected by the standard data-transmission methodology; at the receiver end an analyzer block helps us evaluate system performance through parameters such as BER and the number of errors. The main goal of this work is to learn and understand the features of wireless transmitter and receiver performance. After the model is designed, BER is computed with an error-rate calculation block to check the accuracy of the model and the synchronization between transmitter and receiver; MATLAB Simulink is used for the implementation.
QAM Transmitter and Receiver Design | Wireless Communication_Img
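The project is a Simulink model; as a minimal sketch of the same transmit/receive chain, the NumPy snippet below runs QPSK-modulated OFDM symbols through IFFT, cyclic prefix insertion, an ideal channel, and FFT demodulation, then counts bit errors (the subcarrier count, symbol count, and prefix length are our illustrative choices):

```python
import numpy as np

# QPSK-over-OFDM round trip: map bits, IFFT, add cyclic prefix,
# pass through an ideal channel, FFT, demap, count bit errors.
rng = np.random.default_rng(2)
n_sub, n_sym, cp = 64, 10, 16

bits = rng.integers(0, 2, (n_sym, n_sub, 2))
symbols = (2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)  # QPSK map

tx = np.fft.ifft(symbols, axis=1)
tx_cp = np.concatenate([tx[:, -cp:], tx], axis=1)   # cyclic prefix

rx_cp = tx_cp                                       # ideal (noise-free) channel
rx = np.fft.fft(rx_cp[:, cp:], axis=1)              # strip prefix, demodulate

rx_bits = np.stack([(rx.real > 0), (rx.imag > 0)], axis=-1).astype(int)
ber = np.mean(rx_bits != bits)                      # error-rate calculation
```

Over a noiseless channel the BER is exactly zero; the Simulink analyzer block plays the role of the final error-rate calculation here.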
Video Steganography for Data Hiding

2012

115

Due to the increased security threats experienced today, confidential information such as medical records and banking data is at risk; thus, multiple layers of data protection beyond conventional encryption must be considered. The current study presents the design and implementation of a steganographic protocol and a suite of tools that can automatically analyze the flash video (FLV) format and effectively hide information within it for application in a digital records environment. This study proposes several methods of hiding information within an FLV and discusses the corresponding advantages and disadvantages of each. The proposed steganographic methods are analyzed qualitatively using auditory-visual perception tests and quantitatively using video tag evolution graphs, histograms, and RGB averaging analysis. The study assumes a system where sensitive data is embedded inside FLVs and then transmitted to several recipients who hold varying access authorization levels.
Data hiding in video | Video steganography_Img
Signal Equalization in OFDM systems

2012

115

In modern digital communications, it is well known that channel equalization plays an important role in compensating for channel distortion. Unfortunately, most channels are time varying and their transfer functions change with time; time-varying multipath interference and multiuser interference are two major limitations for high-speed digital communications. Adaptive equalizers are usually applied to overcome these issues, and for adaptive channel equalization we need a suitable filter structure and proper adaptive algorithms. High-speed digital transmissions mostly suffer from inter-symbol interference (ISI) and additive noise; adaptive equalization algorithms recursively determine the filter coefficients in order to eliminate the effects of both. Several algorithms, such as Least Mean Square (LMS), Recursive Least Squares (RLS), and Normalized Least Mean Square (NLMS), have been proposed to perform equalization. In this project, we study adaptive equalization using the normalized least mean square algorithm.
Adaptive Channel Equalization | LMS and NLMS Algorithms_Img
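The project works in MATLAB; the NLMS recursion it studies can be sketched in NumPy as follows, training an FIR equalizer against a known symbol sequence over a synthetic ISI channel (the tap count, step size, channel coefficients, and training length are all our illustrative choices):

```python
import numpy as np

def nlms_equalize(rx, desired, taps=11, mu=0.5, eps=1e-6):
    """Normalized LMS adaptive equalizer: learn an FIR filter that maps
    distorted received samples back to the transmitted symbols."""
    w = np.zeros(taps)
    err = np.zeros(desired.size)
    for n in range(taps - 1, desired.size):
        x = rx[n - taps + 1:n + 1][::-1]       # most recent sample first
        y = w @ x                              # equalizer output
        e = desired[n] - y
        w += mu * e * x / (eps + x @ x)        # step normalized by input power
        err[n] = e
    return w, err

rng = np.random.default_rng(3)
s = rng.choice([-1.0, 1.0], 5000)                  # BPSK training symbols
rx = np.convolve(s, [1.0, 0.4, 0.2])[:s.size]      # dispersive channel (ISI)
w, err = nlms_equalize(rx, s)

early = np.mean(err[10:500] ** 2)                  # MSE while converging
late = np.mean(err[-500:] ** 2)                    # MSE after convergence
```

The power normalization in the weight update is exactly what distinguishes NLMS from plain LMS: it makes the effective step size insensitive to the input signal level.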
Updated Image Enhancement Approach

2012

115

Image editing encompasses the processes of altering images, whether digital photographs, traditional analog photographs, or illustrations. Traditional analog image editing is known as photo retouching, using tools such as an airbrush to modify photographs or any traditional art medium to edit illustrations. In this project we enhance the quality of an image on the basis of various image properties so that the enhanced image can be used in various applications. The user first selects an image whose quality is to be enhanced, then changes a specific property of the image from the options provided, and finally saves the enhanced image as needed. Brightness, contrast, and fade can be adjusted for display as well as for plotted output without affecting the original raster image file.
Image contrast enhancement | Image Property changer_Img
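The project exposes these adjustments through a MATLAB interface; as an illustrative NumPy sketch, brightness and contrast adjustment can be expressed as a scale around mid-gray followed by a shift and a clip (the mid-gray pivot of 128 and the test values are our assumptions):

```python
import numpy as np

def adjust(img, brightness=0.0, contrast=1.0):
    """Scale pixel values around mid-gray, then shift; clip to the 8-bit range."""
    out = (img.astype(float) - 128.0) * contrast + 128.0 + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(4)
img = rng.integers(40, 200, (32, 32), dtype=np.uint8)
brighter = adjust(img, brightness=30)          # shifts the mean up
higher_contrast = adjust(img, contrast=1.5)    # spreads values from mid-gray
```

Working on a float copy and clipping at the end is what keeps the original raster file untouched, as the description notes.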
Multiuser Detection Comparative Analysis

2012

115

The challenge of enhancing the capacity of a CDMA system therefore lies in interference management; many techniques control and/or suppress interference in CDMA systems through transmit- and/or receiver-side processing. In this project we implement and compare two multiuser detection techniques for a two-user CDMA system, analyzed on the basis of SNR:
• the least mean square (LMS) algorithm
• the blind MUD algorithm
First we generate m-sequences for the system, then generate the data to be sent, and then encode the data and apply both detection algorithms. Finally we analyze the results on the basis of:
• mean square error
• signal-to-noise ratio
Multi user detection | CDMA | Matched Filtration_Img
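The project's algorithms run in MATLAB; as a hedged NumPy sketch of why multiuser detection beats the conventional single-user receiver, the snippet below compares a matched filter against a decorrelating detector for two users with correlated m-sequence codes under a deliberate near-far imbalance (the code length, shift, and amplitudes are our illustrative choices):

```python
import numpy as np

# Two-user synchronous CDMA: matched filter vs. decorrelating detector.
rng = np.random.default_rng(5)
s1 = np.array([1, 1, 1, -1, 1, -1, -1], dtype=float)   # length-7 m-sequence
s2 = np.roll(s1, 3)                                    # shifted, correlated code
S = np.stack([s1, s2], axis=1) / np.sqrt(7)            # unit-energy signatures

bits = rng.choice([-1.0, 1.0], (1000, 2))
amps = np.array([1.0, 8.0])        # strong user 2 creates a near-far problem
r = (bits * amps) @ S.T            # received chips, noise-free for clarity

mf = r @ S                         # matched-filter decision statistics
R = S.T @ S                        # code cross-correlation matrix
deco = r @ S @ np.linalg.inv(R)    # decorrelator removes MAI exactly

mf_err = np.mean(np.sign(mf[:, 0]) != bits[:, 0])      # weak user's errors
deco_err = np.mean(np.sign(deco[:, 0]) != bits[:, 0])
```

The matched filter treats the strong user's MAI as noise and misdecides the weak user roughly half the time, while the decorrelator inverts the cross-correlation matrix and recovers the weak user perfectly in the noise-free case.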
Image Quantization Using DCT

1095

78

Color quantization reduces the number of colors used in an image; this is important for displaying images on devices that support a limited number of colors and for efficiently compressing certain kinds of images. The human eye is fairly good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high-frequency (rapidly varying) brightness variation. This fact allows one to reduce the amount of information required by ignoring the high-frequency components, which is done by simply dividing each component in the frequency domain by a constant for that component and then rounding to the nearest integer. This is the main lossy operation in the whole process; as a result, many of the higher-frequency components are typically rounded to zero, and many of the rest become small positive or negative numbers. In this project, DCT-based image quantization is performed and the results are analyzed over the picture's color levels.
Color image quantization | Lossless Image Compression_Img
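The divide-and-round step described above can be sketched in NumPy on a single 8x8 block (the uniform quantization step of 16 and the smooth test block are our illustrative choices; JPEG proper uses a full 64-entry quantization table):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

C = dct_matrix()
# smooth 8x8 block: a gentle horizontal ramp, typical of natural images
block = np.tile(np.linspace(100, 130, 8), (8, 1))

D = C @ block @ C.T
q = 16.0
quantized = np.round(D / q)                # the lossy step: most terms -> 0
restored = C.T @ (quantized * q) @ C       # dequantize and invert the DCT

zeros = np.count_nonzero(quantized == 0)
max_err = np.abs(restored - block).max()
```

For a smooth block almost every high-frequency coefficient rounds to zero, yet the reconstruction error stays small, which is exactly the trade the description explains.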
Digital Color Detection in Image Processing

1095

78

Object detection is the task of identifying and locating objects in an image or video. In our project, detection is done on the basis of color: the pixels of the image are examined and classified by their values according to the color they represent. Detection of color can be based on the mean or on a histogram, and color detection through image processing serves many different applications. In a multi-camera setup, a distance information calculation unit can divide a reference captured image into pixel blocks, retrieve the corresponding pixel positions in the other captured image for each block, and calculate distance information per block; a histogram generation module then divides the resulting range image into segments of predetermined size, builds histograms of the distance information for each segment, and casts the per-block distance information into the histograms of the respective segments.
Color Feature Detection| Object detector_Img
Image Steganography over Bitwise Algorithm

1095

78

Image encryption schemes have been increasingly studied to meet the demand for real-time secure image transmission over the Internet and through wireless networks. Encryption is the process of transforming information to secure it. With the huge growth of computer networks and the latest advances in digital technologies, a huge amount of digital data is being exchanged over various types of networks, and a large part of this information is confidential or private. The security of images has therefore become more and more important with the rapid evolution of the Internet, and many different image encryption methods have been proposed. Image encryption techniques convert an image into another one that is hard to understand; image decryption retrieves the original image from the encrypted one. In this project, data hiding is implemented with a bitwise algorithm. The encryption and decryption scenario is as follows:
1. Take the image in which the data is to be hidden.
2. Enter the message text to hide in the image.
3. Select particular bits of the image pixels as per the algorithm and hide the data there in binary form.
4. Save the image to obtain the encrypted (stego) image.
5. Decryption of the hidden message is the reverse process.
Implementation of LSB Steganography | Steganography with image_Img
Noise Removal Using Wavelet Thresholding

1095

78

A very large portion of digital image processing is devoted to image denoising, including research in algorithm development and routine goal-oriented image processing. Image restoration is the removal or reduction of degradations incurred while the image is being obtained. Degradation comes from blurring as well as from noise due to electronic and photometric sources. Blurring is a form of bandwidth reduction caused by an imperfect image formation process, such as relative motion between the camera and the original scene or an out-of-focus optical system. Image denoising is often used in photography or publishing, where a degraded image needs to be improved before it can be printed. For this type of application we need to know something about the degradation process in order to model it; given such a model, the inverse process can be applied to restore the image to its original form. In this project, the technique for image restoration and denoising includes the BayesShrink algorithm for wavelet thresholding.
Wavelet Noise removal | Hard-Soft threshold for noise reduction_Img
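The project applies BayesShrink in MATLAB; as a simplified NumPy sketch of wavelet-thresholding denoising, the snippet below runs a one-level Haar transform on a 1-D signal and soft-thresholds the detail band (the Haar wavelet, the fixed threshold, and the sine test signal are our illustrative choices; BayesShrink would estimate the threshold per subband instead):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar DWT, soft-threshold the detail band, inverse transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)            # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)            # detail band (carries noise)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)                # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(6)
clean = np.sin(np.linspace(0, 4 * np.pi, 1024))
noisy = clean + rng.normal(0, 0.2, clean.size)
denoised = haar_denoise(noisy, thresh=0.2 * np.sqrt(2))

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

For a smooth signal the detail band is almost pure noise, so thresholding it cuts the error substantially while barely touching the signal itself.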
Optical Character Recognition for Character Classification

1095

78

One interesting and challenging field of research in pattern recognition is Optical Character Recognition (OCR). In the context of script recognition, it is worth studying the characteristics of various writing systems and the structural properties of the characters used in the major scripts of the world. Optical character recognition is the process in which a paper document is optically scanned and then converted into a computer-processable electronic format by recognizing and associating a symbolic identity with every individual character in the document. With the increasing demand for a paperless world, many OCR algorithms have been developed over the years; however, most OCR systems are script-specific in the sense that they can read characters written in one particular script only. A script is defined as the graphic form of the writing system used to write statements expressible in a language; that is, a script class refers to a particular style of writing and the set of characters used in it, and the world's languages are typeset in many different scripts. In this project, script recognition is implemented: the system first acquires an image from a webcam, then the OCR algorithm is applied to the captured image, extracting features and finally recognizing the script.
Optical character recognition, OCR using matlab_Img
Contaminants Detection In Cotton

1095

78

Contamination plays a vital role in deciding the quality of cotton, apart from essential properties such as length, strength, and fineness. Contamination of raw cotton can take place at every step, from farm picking to the ginning stage. Contamination, even a single foreign fiber, can lead to the downgrading of yarn, fabric, or garments, or even the total rejection of an entire batch, and can cause irreparable harm to the relationships between growers, ginners, merchants, spinners, and textile and clothing mills. The International Textile Manufacturers Federation (ITMF) reported that claims due to contamination amounted to between 1.4 and 3.2% of total sales of 100% cotton and cotton-blended yarns. A fairly large amount of cotton fiber recognition research is based on the RGB color space, so in this project a system is implemented to find contaminants in cotton so that it can be used with confidence. Contaminants, or foreign fibers, are detected in cotton on the basis of layer separation and thresholding.
Fiber defect detection | Discontinuity testing_Img
Image watermarking using DCT transform

1095

78

With personal computers applied in many fields and the Internet becoming popular and easy to use, most information is transmitted in digital form, and copying and backing up data on the World Wide Web is easier than ever. Copyright protection and authentication have gradually lost their security, so protecting intellectual property has become an important topic of technical study and research. The watermarking technique was proposed to solve this problem. In this project, a watermark is embedded in the host image by the DCT transform. Several papers embed the watermark in the same manner into the middle-band coefficients of the DCT block; however, JPEG (Joint Photographic Experts Group) compression usually discards the high-frequency band of the DCT block, including some middle-band data. Here the lower-band coefficients of the DCT block are employed, since they are robust against JPEG attack. To improve imperceptibility, only one bit is embedded per DCT block.
Image watermarking using DCT_Img
Edge detection based image watermarking

1095

78

Digital watermarking embeds a signature into media data with only a few modifications. Adding a visible watermark is a common way of identifying images and protecting them from unauthorized use online, and a common practice is to distribute the watermark (or watermarks) across the entire image. We propose a more effective content-based sharp-point-detection watermarking. To increase embedding capacity, the concept of a watermark within a watermark is used, and to increase security the embedded watermarks are encrypted, providing an additional level of protection: even if the watermarking key is compromised, the attacker still cannot identify the watermark because it is encrypted. The system we develop is based on a sharp point detection algorithm, which gives us interest points in the image, and the watermark is placed at those points.
Image watermarking using Harris points | New approach for watermarking_Img
DWT based Image Watermarking

1095

78

In our project we use DWT (discrete wavelet transform) based image watermarking, one of the best watermarking techniques to date thanks to the properties of wavelets. Relationships are inserted between or among property values of certain coefficients of the transformed host image, and these relationships encode the watermark information. One aspect of this work is to modify an STD method to adapt it to a perceptual model simplified for the wavelet domain. The watermark is embedded in both the low and high frequencies of the image, making it resistant to a variety of attacks. The embedding strength is optimized so that the watermark is as strong as possible without being perceptible to the human eye, and this is done relatively quickly, in real time, and in an automated fashion using an intelligent system such as a neural network.
Image watermarking using DWT_Img
Blocking Artifact analysis using DCT

1095

78

In this project we compare different image processing techniques, namely spatial filtering, localized filtering, and adaptive filtering. The comparison is made on the basis of parameters such as mean square error, peak signal-to-noise ratio, bit error rate, and the visibility of the image. Of these, the adaptive technique shows the best results, smoothing the artifacts more than the others. We can compress audio, video, text, fax, and images; for medical images lossless compression is used, while for other types lossy compression can be used. For compressing an image we can use the DCT technique, but after compression, during decompression and recovery of the original image, we can face the problem of blocking artifacts. Various methods can be used to remove blocking artifacts, one of which is DCT filtering; for better results we can also remove blocking artifacts by spatial and hybrid filtering. Our experimental results show that hybrid filtering gives better performance in terms of PSNR, BER, and MSE.
Blocking Artifacts in digital images | Ringing effect in images_Img
Blocking artifact Removal Approach

1095

78

We propose an adaptive approach that performs blockiness reduction in both the DCT and spatial domains to reduce block-to-block discontinuities. Blocking artifact detection and reduction are presented in this project. The algorithm first detects the regions of the image that present visible blocking artifacts; this detection is performed in the frequency domain and uses the estimated relative quantization error calculated when the discrete cosine transform (DCT) coefficients are modeled by a Laplacian probability function. Then, for each block affected by blocking artifacts, its DC and AC coefficients are recalculated for artifact reduction. To achieve this, a closed-form representation of the optimal correction of the DCT coefficients is produced by minimizing a novel enhanced form of the mean squared difference of slope for every frequency separately. The correction of each DCT coefficient depends on the eight neighboring coefficients in the subband-like representation of the DCT transform and is constrained by the quantization upper and lower bounds. Experimental results illustrating the performance of the proposed method are presented and evaluated.
Removal of blocking artifacts_Img
Image Fusion Using I-H-S methodology

1095

78

Image fusion integrates a high-resolution panchromatic image with a low-resolution multispectral image to produce a high-resolution multispectral image that contains both the spatial detail of the panchromatic image and the color information of the multispectral image; an increasing number of high-resolution images have become available with the development of sensor technology. Here we propose a method in which digital images are fused, or mixed, to analyze the color of different objects. Digital color analysis has become an increasingly popular and cost-effective method used by resource managers and scientists for evaluating foliar nutrition and health in response to environmental stresses. We present a computationally efficient color image fusion algorithm for merging infrared and visible images.
Image fusion | Mixing of images_Img
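The project's fusion runs in MATLAB; the core I-H-S substitution step can be sketched in NumPy as follows, replacing the multispectral intensity with the panchromatic band while preserving the color differences (the image sizes, value ranges, and the mean-of-bands definition of intensity are our illustrative simplifications; fused values may leave the [0, 1] range and would be clipped in practice):

```python
import numpy as np

# Intensity-substitution fusion: swap the multispectral intensity (the I of
# I-H-S) for the high-resolution panchromatic band, keeping the color.
rng = np.random.default_rng(7)
ms = rng.uniform(0.2, 0.8, (16, 16, 3))     # low-res multispectral (upsampled)
pan = rng.uniform(0.0, 1.0, (16, 16))       # high-res panchromatic band

intensity = ms.mean(axis=2)                 # simple intensity component
fused = ms + (pan - intensity)[..., None]   # shift every band by the I change
```

Because each band is shifted by the same amount, the band-to-band differences (the hue and saturation information) are untouched while the intensity plane becomes the sharp panchromatic image.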
Fusion Techniques Comparative Analysis

1095

78

Image fusion is the process of combining information from multiple images of the same scene. These images may be captured by different sensors, acquired at different times, or have different spatial and spectral characteristics. An integrated PCA-based image fusion system for stamping split detection is developed and tested on an automotive press line; splits of varying shape, size, and number are detected under actual operating conditions. Principal Component Analysis (PCA) is employed to transform the original image into its eigenspace: by retaining the principal components with the most influential eigenvalues, PCA keeps the key features of the original image and reduces the noise level. Pixel-level image fusion algorithms are then developed to fuse the original images from the thermal and visible channels, enhance the resulting image, and reduce undesirable noise. Finally, an automatic split detection algorithm is designed and implemented to perform online, objective automotive stamping split detection.
Wavelet based image fusion | Pca based image fusion_Img
Video Watermarking using Image Processing

1095

78

In this project, a description and comparison of encryption methods and representative video algorithms is presented, with respect not only to their encryption speed but also to their security level and stream size. A tradeoff between the quality of video streaming and the choice of encryption algorithm is shown; achieving efficiency, flexibility, and security together is the researcher's challenge. This project seeks to develop robust watermarking software based on research work carried out earlier. While exploring the various watermarking techniques and algorithms that have been proposed for a robust watermarking solution, the group implemented one such proposed solution. A robust watermark is more resilient to the tampering and attacks that a multimedia object (image, video, or audio) faces, such as compression, image cropping, image flipping, and image rotation, to name a few.
Video watermarking | Video data encryption_Img
Defect Detection Using Thresholding Approach

1095

78

This project performs fabric discontinuity detection using the MATLAB image processing toolbox, detecting defects on the basis of good samples used to train the system at the start. Numerous techniques have been developed to detect fabric defects, and the purpose of this project is to categorize and describe these algorithms. Categorization of fabric defect detection techniques is useful in evaluating the qualities of the identified features. The characterization of real fabric surfaces using their structure and primitive set has not yet been successful; therefore, on the basis of the nature of features extracted from fabric surfaces, the proposed approaches are characterized into three categories: statistical, spectral, and model-based. To evaluate the state of the art, the limitations of several promising techniques are identified and their performance is analyzed in the context of their demonstrated results and intended applications.
Fabric defect detection | Textile Defect detection system_Img
Fingerprint Recognition System: PCT Algorithm

1095

78

Fingerprint is one of the most popular biometric traits used for recognizing a person; the properties that make it popular are its wide public acceptability and the ease of collecting fingerprint data. The proposed method reduces the search space in alignment, and, more attractively, it obviates the need to extract minutiae points or the core point to align the fingerprint images. Experimental results show that the proposed method is more robust than using the reference point or the minutiae for alignment. In this project we propose a method for fingerprint matching based on minutiae matching; however, unlike conventional minutiae matching algorithms, our algorithm also takes into account the region and line structures that exist between minutiae pairs. This accounts for more of the fingerprint's structural information, resulting in stronger certainty of matching minutiae, and since most of the region analysis is preprocessed it does not make the algorithm slower.
Finger print matching | Rotation Invariant Matching_Img
Fingerprint Recognition System: PHT Algorithm

1095

78

Polar harmonic transforms (PHTs) are orthogonal rotation-invariant transforms that provide many numerically stable features and can be used to generate rotation-invariant descriptors. The kernel functions of PHTs consist of sinusoidal functions that are inherently computation-intensive, so we develop a fast approach for their computation using recursion and the 8-way symmetry/anti-symmetry of the kernel functions. With PHTs there is also no numerical instability issue, unlike with ZMs and PZMs, where such instability often limits practical usefulness. A large part of the PHT kernel computation can be precomputed and stored; in the end, for each pixel, as little as three multiplications, one addition, and one cosine and/or sine evaluation are needed to obtain the final kernel value. In this project three different transforms are introduced: the Polar Complex Exponential Transform (PCET), the Polar Cosine Transform (PCT), and the Polar Sine Transform (PST).
Fingerprint recognition | Polar harmonic transform_Img
Object detection Using MATLAB

1095

78

Image processing is a technique for modifying an image as required, for example editing, cropping, or detection. In our project, detection is done on the basis of color, shape, and size. The Image Processing Toolbox provides a comprehensive suite of reference-standard algorithms and visualization functions for image analysis tasks such as statistical analysis, feature extraction, and measurement. It is useful for telling similar objects with different colors apart and for telling similarly colored objects with different sizes apart. Here we build an application that counts circular segments and check the efficiency of the system by analyzing the accuracy of the counter using digital image processing. We also introduce a novel approach for feature extraction on a color and circularity basis.
Feature Segmentation | Image processing based segment counter_Img
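The counter itself is a MATLAB application; the core counting step can be sketched in plain Python/NumPy as breadth-first connected-component labeling over a binary mask (the synthetic three-circle test image, the 4-connectivity choice, and all sizes are our illustrative assumptions):

```python
import numpy as np
from collections import deque

def count_objects(mask):
    """Count 4-connected foreground blobs with a simple BFS labeling."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1                       # found a new blob; flood it
                q = deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

# three filled circles on a blank canvas
yy, xx = np.mgrid[0:60, 0:60]
mask = np.zeros((60, 60), dtype=bool)
for cy, cx in ((15, 15), (15, 45), (45, 30)):
    mask |= (yy - cy) ** 2 + (xx - cx) ** 2 <= 49   # radius-7 circles
```

In MATLAB the same job is done by `bwlabel`/`bwconncomp`; the explicit BFS above just makes the mechanism visible.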
Real Time Image Steganography MATLAB

1095

78

This project explores steganography from its earliest instances through potential future applications; steganography is the answer for secure and secret communication. Existing methods in image steganography focus on increasing the embedding capacity of secret data; according to existing methods, experimental results indicate that two pixels are required to embed one secret digit. To improve the embedding capacity, a novel method of Pixel Value Modification (PVM) by modulus function is proposed. The proposed PVM method can embed one secret digit in one pixel of the cover image and thus gives good stego-image quality. The experimental outputs validate that good visual perception of the stego image, with a higher secret-data embedding capacity, can be achieved by the proposed method. Our algorithm offers very high capacity for the cover media compared to other existing algorithms; we present experimental results showing its superiority, along with comparative results against similar image-based steganography algorithms.
Pixel based image steganography | image message encoding decoding_Img
Face recognition System using Eigen features

1095

78

Eigenfaces is the name given to a set of eigenvectors when they are used in the computer vision problem of human face recognition. Principal Component Analysis (PCA) is one of the most successful techniques used in image recognition and compression; it is a statistical method under the broad title of factor analysis. In our project we implement face recognition using PCA eigenfaces and analyze the accuracy of the system for security and identification applications. The face is a complex multidimensional structure and needs good computing techniques for recognition. Our approach treats face recognition as a two-dimensional recognition problem: face images are projected onto a face space that best encodes the variation among known face images.
Pca eigen based face recognition_Img
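The project trains on real face images in MATLAB; as a toy NumPy sketch of the eigenface pipeline, the snippet below builds a PCA face space from random "faces" and matches a noisy probe by nearest neighbour in that space (the image size, training-set size, number of components, and noise level are all our illustrative choices):

```python
import numpy as np

# Tiny eigenfaces sketch: PCA on flattened face vectors, then
# nearest-neighbour matching in the reduced eigenspace.
rng = np.random.default_rng(8)
faces = rng.uniform(0, 1, (5, 16 * 16))        # 5 synthetic training "faces"

mean = faces.mean(axis=0)
A = faces - mean
# eigenvectors of the small 5x5 matrix A A^T give the eigenfaces cheaply
vals, V = np.linalg.eigh(A @ A.T)
eigenfaces = (A.T @ V[:, -3:]).T               # top 3 principal components
eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)

def project(x):
    """Coordinates of a face vector in the eigenface space."""
    return eigenfaces @ (x - mean)

train_w = np.array([project(f) for f in faces])

probe = faces[2] + rng.normal(0, 0.05, faces.shape[1])  # noisy view of face 2
dists = np.linalg.norm(train_w - project(probe), axis=1)
match = int(np.argmin(dists))
```

Diagonalizing the small N x N matrix A A^T instead of the huge covariance matrix A^T A is the classic eigenfaces trick that makes training feasible.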
Updated Image Enhancement Approach

1095

78

Image editing encompasses the processes of altering images, whether digital photographs, traditional analog photographs, or illustrations. Traditional analog image editing is known as photo retouching, using tools such as an airbrush to modify photographs or any traditional art medium to edit illustrations. In this project we enhance the quality of an image on the basis of various image properties so that the enhanced image can be used in various applications. The user first selects an image whose quality is to be enhanced, then changes a specific property of the image from the options provided, and finally saves the enhanced image as needed. Brightness, contrast, and fade can be adjusted for display as well as for plotted output without affecting the original raster image file.
Image contrast enhancement | Image Property changer_Img
CBIR over Color classification Approach

1095

78

Content-based means that the search analyzes the contents of the image rather than metadata such as keywords, tags, or descriptions associated with the image. In our project we use color feature retrieval with histograms; the main objective is to analyze the current state of the art in content-based image retrieval (CBIR) using image processing in MATLAB. Different implementations of CBIR use different types of user queries, and the underlying search algorithms may vary depending on the application, but the result images should all share common elements with the provided example. Color histograms are widely used for content-based image retrieval; their advantage is insensitivity to small changes in camera viewpoint. However, a histogram is a coarse characterization of an image, so images with very different appearances can have similar histograms. We describe a technique for comparing images called histogram refinement, which imposes additional constraints on histogram-based matching: it splits the pixels in a given bucket into several classes based on some local property.
CBIR on basis of color | Feature Based CBIR_Img
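The retrieval system itself is in MATLAB; the histogram comparison at its core can be sketched in NumPy as a joint RGB histogram scored by histogram intersection (the bin count, image sizes, and test images are our illustrative choices):

```python
import numpy as np

def color_histogram(img, bins=8):
    """Joint RGB histogram, normalized to sum to 1."""
    idx = (img.astype(np.int64) // (256 // bins)).reshape(-1, 3)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    h = np.bincount(flat, minlength=bins ** 3).astype(float)
    return h / h.sum()

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 means identical histograms."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(9)
img = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
shifted = np.roll(img, 5, axis=1)          # same colors, different layout
other = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)

same = intersection(color_histogram(img), color_histogram(shifted))
diff = intersection(color_histogram(img), color_histogram(other))
```

The shifted image scores a perfect match despite its different layout, which demonstrates both the viewpoint insensitivity the description praises and the coarseness that histogram refinement is meant to fix.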
CBIR over Color and Shape classification

This project reviews the progress of computer vision in the agricultural and food industry and then identifies areas for further research and wider application of the technique. We treat the challenge of automatically inferring the aesthetic quality of pictures from their visual content as a machine learning problem, using a peer-rated online photo-sharing website as the data source. We extract certain visual features based on the intuition that they can discriminate between aesthetically pleasing and displeasing images. Automated classifiers are built using support vector machines and classification trees. Linear regression on polynomial terms of the features is also applied to infer numerical aesthetics ratings. The work attempts to explore the relationship between the emotions that pictures arouse in people and their low-level content. Potential applications include content-based image retrieval and digital photography.
Quality Analyzer over various features_Img
Artificial Neural Network for Coin Recognition

Dirty coins frequently require machine cleaning. The variations between images of new and old coins are also discussed. The coin recognition process is divided into seven steps: acquire the RGB coin image, generate a pattern-averaged image, remove shadow from the image, crop and trim the image, convert the RGB image to grayscale, generate a feature vector and pass it as input to the trained neural network (NN), and give the appropriate result according to the output of the NN. In this project, we propose a method to design a neural network and, in order to demonstrate the effectiveness of the proposed scheme, apply it to coin recognition. In general, as a problem becomes complex and large-scale, the number of operations increases and hardware implementation of NNs in real systems becomes difficult. Therefore, we propose a method which produces a small-sized NN system to achieve a cost reduction and to simplify hardware implementation in real machines.
Coin recognition system | Neural network based Coin recognition_Img
Image Noising and Denoising with Multiple Noise Types and Filters

The goal of image denoising is to remove noise by differentiating it from the signal. Denoising uses the visual content of images, such as color, texture, and shape, as the image index to retrieve images from the database; these features never change. In this project, we present a new method of unsharp masking for contrast enhancement of images. Image denoising is a well-studied problem in the field of image processing. We use basic filters to remove the noise and perform a comparative analysis between them. The approach employs an adaptive median filter that controls the contribution of the sharpening path so that contrast enhancement occurs in high-detail areas, together with a noise detection technique to remove mixed noise from images. A hybrid cumulative histogram equalization is proposed for adaptive contrast enhancement.
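The median filtering that anchors this comparison can be sketched compactly. A hedged, minimal Python version (the project compares several filters; this shows only the 3×3 median, which is the classic remedy for salt-and-pepper noise):

```python
def median3x3(img):
    """Replace each interior pixel with the median of its 3x3 neighborhood;
    border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]          # median of the 9 window values
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],    # an isolated salt-noise spike
         [10, 10, 10]]
clean = median3x3(noisy)   # the spike is replaced by the neighborhood median
```

Unlike a mean filter, the median discards the outlier entirely instead of smearing it into the neighbors, which is why it suits impulse noise.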
Image noising and denoising | image noise reduction_Img
Medical images enhancement processing

Speckle is signal-correlated noise. In ultrasound imagery, other sources of noise are also present depending on the specific application; at the least, one has to consider the thermal and electronic noise added at the receiver. This section offers some ideas about various noise reduction techniques. Unfortunately, the presence of speckle noise in these images affects edges and fine details, which limits the contrast resolution and makes diagnosis more difficult. Ultrasound imaging is a widely used diagnostic tool in modern medicine, used to visualize muscles and internal organs of the human body, their size and structure, and injuries. The proposed filtering is a technique for the removal of speckle noise from digital images. Quantitative measures are obtained using the signal-to-noise ratio, and the noise level is measured by the standard deviation.
Ultrasound image noise reduction | Speckle noise removal_Img
Plant Dimensions Calculation using image processing

Image processing is a methodology used in many applications, whether in research, quality enhancement, or industry. Image processing is also a main technology for finding the dimensions of products. In this project we implement a technique for finding the dimensions of a natural plant: first we acquire an image from the user, and then an algorithm finds the height and width of the object in the image, i.e. the dimensions of the plant. This technique can be useful for finding the dimensions of any object, including military applications and calculating the size of far-away objects.
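The height/width measurement described above reduces to a bounding box over the segmented plant pixels. A minimal Python sketch, assuming the image has already been thresholded into a binary mask (segmentation itself is not shown):

```python
def plant_dimensions(mask):
    """Given a binary mask (1 = plant pixel), return (height, width) in pixels
    from the bounding box of the foreground."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for row in mask for j, v in enumerate(row) if v]
    return rows[-1] - rows[0] + 1, max(cols) - min(cols) + 1

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0]]
h, w = plant_dimensions(mask)   # 3 rows tall, 2 columns wide
```

Converting pixel dimensions to real-world units would additionally require a known reference scale (e.g. an object of known size, or camera distance and focal length).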
Plant Dimensions calculating system_Img
Character Recognition for Language processing

Optical Character Recognition, or OCR, is a technology that enables you to convert different types of documents, such as scanned paper documents, PDF files, or images captured by a digital camera, into editable and searchable data. In particular, we focus on recognizing characters in situations that would traditionally not be handled well by OCR techniques. We present an annotated database of images containing English characters. The database comprises images of street scenes taken in Bangalore, India, using a standard camera. The problem is addressed in an object categorization framework based on a bag-of-visual-words representation. We assess the performance of various features based on nearest-neighbor and SVM classification. It is demonstrated that the performance of the proposed method, using as few as 15 training images, can be far superior to that of commercial OCR systems.
OCR for sign verification | Character detection_Img
Image compression approach using DCT

In the JPEG image compression algorithm, the input image is divided into 8-by-8 or 16-by-16 blocks, and the two-dimensional DCT is computed for each block. The DCT coefficients are then quantized, coded, and transmitted. The JPEG receiver (or JPEG file reader) decodes the quantized DCT coefficients, computes the inverse two-dimensional DCT of each block, and then puts the blocks back together into a single image. For typical images, many of the DCT coefficients have values close to zero; these coefficients can be discarded without seriously affecting the quality of the reconstructed image. In this project we implement this image compression technique, the discrete cosine transform, and then analyze it on the basis of parameters such as peak signal-to-noise ratio, mean square error, and bit error rate.
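The block transform at the heart of this pipeline can be written out directly. A minimal Python sketch of the orthonormal 8×8 2-D DCT-II and its inverse (quantization and entropy coding, which the project also covers, are omitted); for a flat block, all the energy lands in the single DC coefficient, which is exactly why quantizing away the rest is cheap:

```python
import math

N = 8

def alpha(k):
    """Orthonormal scale factor for the DCT-II basis."""
    return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)

def dct2(block):
    """Orthonormal 2-D DCT-II of an 8x8 block."""
    return [[alpha(u) * alpha(v) * sum(
                block[i][j]
                * math.cos((2 * i + 1) * u * math.pi / (2 * N))
                * math.cos((2 * j + 1) * v * math.pi / (2 * N))
                for i in range(N) for j in range(N))
             for v in range(N)] for u in range(N)]

def idct2(coeff):
    """Inverse (DCT-III) reconstruction of an 8x8 block."""
    return [[sum(alpha(u) * alpha(v) * coeff[u][v]
                 * math.cos((2 * i + 1) * u * math.pi / (2 * N))
                 * math.cos((2 * j + 1) * v * math.pi / (2 * N))
                 for u in range(N) for v in range(N))
             for j in range(N)] for i in range(N)]

block = [[100] * N for _ in range(N)]   # a flat 8x8 block
coeff = dct2(block)                     # energy concentrates in coeff[0][0]
rec = idct2(coeff)                      # round trip recovers the block
```

This O(N^4) direct form is for clarity only; real codecs use fast separable DCTs.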
DCT based image compression_Img
Image compression using wavelet approach

The 2D discrete wavelet transform (DWT) is the most important new image compression technique of the last decade. Conventionally, the 2D DWT is carried out as a separable transform by cascading two 1D transforms in the vertical and horizontal directions. Therefore, vanishing moments of the high-pass wavelet filters exist only in these two directions. The separable transform fails to provide an efficient representation for directional image features, such as edges and lines, not aligned vertically or horizontally, since it spreads the energy of these features across subbands. In this project we implement the discrete wavelet transform image compression technique and then analyze it on the basis of parameters such as peak signal-to-noise ratio, mean square error, and bit error rate.
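The separable row-then-column structure described above is easy to see with the simplest wavelet. A hedged Python sketch of one decomposition level using the Haar transform (the project may use other wavelet families; Haar is chosen here only for brevity):

```python
def haar_1d(v):
    """One level of the 1-D Haar transform: pairwise averages then differences."""
    a = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    d = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return a + d

def haar_2d(img):
    """Separable 2-D transform: rows first, then columns, giving the LL, LH,
    HL, and HH subbands of one decomposition level."""
    rows = [haar_1d(r) for r in img]
    cols = list(zip(*rows))                 # transpose to filter columns
    out = [haar_1d(list(c)) for c in cols]
    return [list(r) for r in zip(*out)]     # transpose back

img = [[10, 10, 20, 20],
       [10, 10, 20, 20],
       [30, 30, 40, 40],
       [30, 30, 40, 40]]
bands = haar_2d(img)   # top-left 2x2 holds the LL (coarse) approximation
```

For this piecewise-constant image the three detail subbands come out exactly zero, illustrating why smooth regions compress so well under the DWT.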
DWT based image compression_Img
Huffman Encoding for image compression

Huffman coding can be used to compress all sorts of data. It is an entropy-based algorithm that relies on an analysis of the frequency of symbols in an array. Huffman coding can be demonstrated most vividly by compressing a raster image. Suppose we have a 5×5 raster image with 8-bit color, i.e. 256 different colors. The uncompressed image will take 5 × 5 × 8 = 200 bits of storage. This compression technique is used broadly to encode music, images, and certain communication protocols. Lossless JPEG compression uses the Huffman algorithm in its pure form. Lossless JPEG is common in medicine as part of the DICOM standard, which is supported by the major medical equipment manufacturers (for use in ultrasound machines, nuclear resonance imaging machines, MRI machines, and electron microscopes). Variations of the Lossless JPEG algorithm are also used in the RAW format, which is popular among photo enthusiasts because it saves data from a camera’s image sensor without losing information. This project contains an implementation of the Huffman technique for image compression and finally analyzes the technique on the basis of the PSNR, BER, and MSE parameters.
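The 5×5 example above can be carried through concretely. An illustrative Python sketch (the pixel color distribution is invented for the demo): repeatedly merging the two least frequent symbols yields the Huffman code lengths, and the total compressed size drops well below the 200-bit uncompressed size.

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Build a Huffman tree from {symbol: count} and return {symbol: code length}."""
    if len(freqs) == 1:
        return {s: 1 for s in freqs}
    # heap entries: (count, tiebreak, {symbol: depth-so-far})
    heap = [(count, i, {sym: 0}) for i, (sym, count) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        c1, _, d1 = heapq.heappop(heap)      # two least frequent subtrees
        c2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}  # one level deeper
        heapq.heappush(heap, (c1 + c2, tick, merged))
        tick += 1
    return heap[0][2]

# 5x5 image, 25 pixels drawn from four colors (hypothetical distribution)
pixels = ["white"] * 13 + ["gray"] * 6 + ["black"] * 4 + ["red"] * 2
lengths = huffman_code_lengths(Counter(pixels))
compressed_bits = sum(lengths[p] for p in pixels)
uncompressed_bits = 25 * 8       # 200 bits at 8 bits per pixel
```

Frequent colors receive short codes (here the dominant color gets a 1-bit code), which is where the entropy saving comes from.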
Huffman Coding | Huffman based image coding_Img
RLE Encoding for image encoding & compression

Lossless methods are used for text compression, and for image compression in certain environments such as medical imaging where no loss of information is tolerated. Lossy compression methods are commonly applied in image and audio compression and, depending upon the fidelity required, achieve higher compression ratios. Digital images require an enormous amount of storage space. Run-length encoding (RLE) is a very simple form of data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most useful on data that contains many such runs: for example, relatively simple graphic images such as icons, line drawings, and animations. It is not useful with files that don't have many runs, as it could potentially double the file size. Run-length encoding performs lossless data compression and is well suited to palette-based iconic images. In this project, RLE-based image compression is implemented as follows: 1. Select the image to be compressed from the user. 2. Enter the parameters of the RLE algorithm to compress the image. 3. Obtain the compressed image via RLE coding. 4. Finally, calculate the compression ratio to check the compression capability.
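The encode/decode/ratio steps listed above fit in a few lines. A minimal Python sketch over one image scanline:

```python
def rle_encode(data):
    """Collapse runs of equal values into [value, run_length] pairs."""
    runs = []
    for v in data:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_decode(runs):
    """Expand [value, run_length] pairs back to the original sequence."""
    return [v for v, n in runs for _ in range(n)]

row = [255, 255, 255, 255, 0, 0, 7, 7, 7, 7, 7, 7]
runs = rle_encode(row)                  # [[255, 4], [0, 2], [7, 6]]
assert rle_decode(runs) == row          # lossless round trip
ratio = len(row) / (2 * len(runs))      # 12 values stored as 3 (value, count) pairs
```

On a row with no repeats, every pixel would become a (value, 1) pair, doubling the size, which is the failure mode the description warns about.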
RLE based image encoding and decoding | RLE image compression_Img
LZW Encoding for image encoding & compression

Many data hiding methods have been developed as a means of secret data communication. Accordingly, numerous techniques have been proposed under the names of steganography or watermarking, which all belong to data hiding techniques in a wide sense. In data hiding, the most common way is a target-based method, meaning that a specific target domain (frequency, time, or spatial) is pre-determined before developing a hiding method. We are especially interested in developing a data hiding method that can be generally applied to many common lossless compression applications. Among the several approaches to data or image compression, LZW is a well-known technique. Since it refers to a dictionary storing single symbols and their combinations, LZW is classified as a dictionary-based technique. In this project LZW is used to compress digital images, so that the storage capacity of the system can be increased and a new approach to image compression is demonstrated. Finally, the compression ratio is calculated to measure the technique's efficiency.
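The dictionary-growing behavior of LZW can be demonstrated directly. A hedged Python sketch of the classic byte-oriented LZW coder applied to a repetitive scanline (real implementations also pack the codes into variable-width bitstreams, which is omitted here):

```python
def lzw_encode(data):
    """Classic LZW: grow a dictionary of seen byte strings, emit dictionary codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                       # keep extending the current phrase
        else:
            out.append(table[w])         # emit code for the longest known phrase
            table[wc] = len(table)       # register the new, longer phrase
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decode(codes):
    """Inverse LZW, rebuilding the same dictionary on the fly."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[:1]  # KwKwK corner case
        out += entry
        table[len(table)] = w + entry[:1]
        w = entry
    return bytes(out)

scanline = b"ABABABABABABABAB"
codes = lzw_encode(scanline)            # 16 bytes shrink to 7 codes
assert lzw_decode(codes) == scanline    # lossless
```

Because the dictionary is rebuilt identically on both sides, no dictionary needs to be transmitted, which is what makes LZW attractive for image formats.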
LZW based image encoding and decoding | LZW image compression_Img
Optic Disc Localization for Biomedical Applications

We propose a method in which the detection and localization of the optic disc is done using a phenomenon called edge detection. Edge detection is an image processing technique that reduces the image for feature extraction by determining the edges of the image per pixel. In our project, edge detection is used for localization and detection of the optic disc when analyzing digital diabetic retinopathy systems. In this article, we propose a new method for localizing the optic disc in retinal images. Localizing the optic disc and its center is the first step of most vessel segmentation, disease diagnostic, and retinal recognition algorithms.
Localisation of Optic Disc | Optic disk detection from eye_Img
Image segmentation Entropy Methodology

Segmentation is a task of division, on bases such as color and pixels in an image. In this project, a fast threshold selection algorithm is implemented to speed up the original minimum cross entropy (MCE) threshold method in image segmentation. Our main aim is to find the various segments in an image on the basis of its features. The method provides effectiveness when the number of regions in the segmentation varies, when the number of regions is fixed, and when evaluated against theoretically different segmentation methods. We implement a methodology in which minimum entropy is used for image segmentation. In segmentation, minimum cross entropy based multilevel thresholding is regarded as an effective improvement; however, it is very time consuming for real-time applications.
Entropy based Image Segmentation_Img
Wireless Routing Protocol Algorithm

Distance vector is a term used to describe routing protocols which are used by routers to forward packets between networks. The purpose of any routing protocol is to dynamically communicate information about all network paths used to reach a destination and to select, from those paths, the best path to reach a destination network. The term distance vector groups together routing protocols that select the best routing path based on a distance metric, finding the path that has the lowest total metric to reach the destination.
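The lowest-total-metric selection that defines distance-vector routing is essentially Bellman-Ford relaxation. A minimal Python sketch over a hypothetical four-router topology (the link metrics are invented for the demo):

```python
def distance_vector(links, source):
    """Bellman-Ford style relaxation: distances are repeatedly updated from
    neighbors' advertised distances until no improvement remains."""
    nodes = {n for a, b, _ in links for n in (a, b)}
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):          # at most |V|-1 rounds to converge
        for a, b, cost in links:             # links are bidirectional
            dist[b] = min(dist[b], dist[a] + cost)
            dist[a] = min(dist[a], dist[b] + cost)
    return dist

# hypothetical topology: (router, router, link metric)
links = [("A", "B", 1), ("B", "C", 2), ("A", "C", 5), ("C", "D", 1)]
dist = distance_vector(links, "A")   # A reaches C via B at cost 3, not directly at 5
```

Real distance-vector protocols run this same relaxation in a distributed fashion, with each router exchanging its table only with its neighbors.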
Routing Protocol | Efficient Routing | DSR_Img
LEACH protocol designing

LEACH (Low Energy Adaptive Clustering Hierarchy) is a hierarchical routing protocol which uses random rotation of the nodes required to be cluster heads to evenly distribute energy consumption in the network. In this project the LEACH protocol is implemented to analyze the working of the whole procedure. The steps are as follows: 1. First, we obtain the coverage area from the user. 2. Then we set the initial parameters of the network, such as the number of nodes and the energy of transmitters and receivers. 3. Clusters are then formed, and on the basis of energy, nodes are classified as half-dead, dead, or alive. 4. Finally, on the basis of the dead nodes we analyze the lifetime of the system.
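The random cluster-head rotation at the core of LEACH uses a per-round election threshold. A hedged Python sketch of the commonly cited threshold T(n) = p / (1 - p * (r mod 1/p)) and a simple election step (network energy bookkeeping is omitted):

```python
import random

def leach_threshold(p, r):
    """LEACH election threshold T(n) for round r: nodes that have not yet been
    cluster head in the current epoch become head with this probability."""
    return p / (1 - p * (r % round(1 / p)))

def elect_heads(node_ids, p, r, rng):
    """Each eligible node draws a random number and self-elects below T(n)."""
    return [n for n in node_ids if rng.random() < leach_threshold(p, r)]

# with p = 0.1 (10% desired heads), the threshold climbs across the 10-round epoch
t0 = leach_threshold(0.1, 0)   # 0.1 at the start of the epoch
t5 = leach_threshold(0.1, 5)   # 0.2 halfway through
heads = elect_heads(range(100), 0.1, 0, random.Random(42))
```

The rising threshold guarantees that nodes which have not yet served become increasingly likely to be elected, so every node is head exactly once per epoch and energy drain is spread evenly.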
LEACH Protocol | Cluster Heads | WSN | Energy Protocols_Img
Stable Election Protocol in EEP

Due to the small batteries in WSNs, efficient utilization of battery power is an important factor. Clustering is an efficient technique to extend the lifetime of sensor networks by reducing energy consumption. Many clustering techniques have been introduced to find cluster heads in a cluster; one of them is the Stable Election Protocol (SEP). SEP is designed for clustered heterogeneous wireless sensor networks and is based on weighted election probabilities of each node becoming a cluster head according to the remaining energy in each node. In this project, an implementation of SEP is done. The project also concludes by studying the sensitivity of the SEP protocol to heterogeneity parameters capturing energy imbalance in the network. The analysis shows that SEP yields a longer stability region for higher values of the extra energy brought by more powerful nodes.
SEP Protocol | Energy Protocol | Radio Network | Energy Efficient_Img
Clustering Approach Development in WSN

A sensor node is a tiny autonomous device which is used for monitoring, tracking, and surveillance. A number of sensor nodes together form a wireless sensor network. Wireless sensor networks are used for a number of applications, such as monitoring of the human body, underwater surveillance, military purposes, and traffic control. The use of wireless sensor networks is increasing day by day as sensor nodes become cheaper. Inefficient or manual placement of sensor nodes leads to failure of sensor networks. By placing the nodes at pre-determined optimized locations, the sensing range can be minimized, which leads to increased lifetime because less energy is consumed while monitoring targets. Here an algorithm for efficient clustering and deployment of sensors is implemented. We take some initial parameters from the user: the location points of the nodes and the number of sensors. The optimized sensor locations are then found using k-means clustering. The process is: 1. Get the initial parameters of the nodes and sensors. 2. Find the Euclidean distance between nodes and sensors. 3. Perform k-means clustering. 4. On the basis of the clustering, find the optimized locations for sensor deployment.
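The four steps above can be sketched with a plain k-means loop: each node is assigned to its nearest sensor by squared Euclidean distance, and each sensor moves to the centroid of its cluster. A minimal Python version with deterministic initialization (node coordinates are invented for the demo):

```python
def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points; initial centroids are the first k points."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(p)      # nearest centroid wins
        for i, cl in enumerate(clusters):
            if cl:                                    # move centroid to cluster mean
                centroids[i] = [sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl)]
    return centroids

# node positions forming two obvious groups; a sensor goes at each centroid
nodes = [(0, 0), (1, 1), (0, 1), (10, 10), (11, 11), (10, 11)]
sensors = kmeans(nodes, 2)   # one sensor near each group's center
```

Placing each sensor at its cluster centroid minimizes the total squared node-to-sensor distance, which is the "optimized location" objective described above.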
Clustering based Sensor Deployment_Img
Wireless System Design in Simulink

In any communication system, there must be an information source (transmitter), a destination (receiver), and a medium to transmit information between them. The message source originates a message such as a human voice, a television picture, a teletype message, or data. The message can be electrical or non-electrical; if it is not electrical, a source transducer will convert it into an electrical signal. The transmitter may consist of an analog-to-digital converter, data compressor, source encoder, channel encoder, modulator, or other complicated subsystems. The receiver may consist of a demodulator, channel and source decoders, data expander, digital-to-analog converter, or others. The receiver transducer converts the electrical signal to its original form, the message. The message destination is the actual unit to which the message is sent. The channel is the information transmission medium, and can be of different types, such as a wire, a waveguide, an optical fiber, or a wireless link. As the channel acts as a filter, during transmission the signal can be distorted due to the attenuation and phase shift suffered by different frequency components of the signal. Noise is also added to the transmitted signal during transmission through the channel. In this project a WSN communication module is implemented and its performance is analyzed over the AWGN channel; the analysis parameters are BER, SNR, etc. A signal is generated, passed through the transmission procedure and the AWGN channel, processed by the receiver algorithm, and finally the transmitted and received signals are compared on the basis of these parameters.
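The generate-transmit-add-noise-decide-compare loop described above can be sketched as a Monte-Carlo BER estimate. A hedged Python version for BPSK over AWGN (the project's actual modulation and block structure may differ; BPSK is used here as the simplest case):

```python
import random

def ber_bpsk_awgn(n_bits, ebno_db, rng):
    """Monte-Carlo BER of BPSK over AWGN: map bits to +/-1, add Gaussian noise
    with variance N0/2 = 1/(2*Eb/N0), decide by sign, count errors."""
    ebno = 10 ** (ebno_db / 10)
    sigma = (1 / (2 * ebno)) ** 0.5
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        rx = (1 if bit else -1) + rng.gauss(0, sigma)   # channel output
        errors += (rx > 0) != bool(bit)                 # sign decision
    return errors / n_bits

rng = random.Random(1)
ber_low = ber_bpsk_awgn(20000, 0, rng)    # 0 dB Eb/N0: noticeable error rate
ber_high = ber_bpsk_awgn(20000, 12, rng)  # 12 dB Eb/N0: errors become rare
```

The estimate at 0 dB should land near the theoretical Q(sqrt(2)) ≈ 0.079, and raising Eb/N0 drives the measured BER sharply down, which is exactly the curve such an analysis block reports.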
AWGN Performance analysis | BER_Img
PAPR reduction Using PTS algorithm

Communication is one of the important aspects of life. Signals that were initially sent in the analog domain are being sent more and more in the digital domain. For better transmission, even single-carrier waves are being replaced by multiple carriers. Multi-carrier systems like CDMA and OFDM are nowadays implemented commonly. In an OFDM system, orthogonally placed subcarriers carry the data from the transmitter end to the receiver end, and the presence of a guard band deals with the problem of ISI. But the large peak-to-average power ratio (PAPR) of these signals has adverse effects on communication systems. The major drawback of orthogonal frequency-division multiplexing (OFDM) is its high PAPR, which becomes even more substantial if a transmitter with multiple antennas is considered. To overcome this problem, in this project the partial transmit sequences (PTS) method, well known for PAPR reduction in single-antenna systems, is studied for multi-antenna OFDM. Finally, after PAPR reduction using the PTS algorithm, performance is checked on the basis of the number of errors in the signal or by calculating the PAPR of the signal.
PAPR reduction in OFDM system | Partial Transmit Sequences_Img
PAPR reduction Using SLM algorithm

In this project the performance of a peak-to-average power ratio (PAPR) reduction scheme, the selected mapping (SLM) scheme, is investigated. In the presence of nonlinearity, we analyze the impact of the SLM technique on the bit-error-rate (BER) performance of orthogonal frequency division multiplexing (OFDM) systems in an additive white Gaussian noise channel. The SLM technique was first described by Bauml et al. Selected mapping is a technique in which multiple phase rotations are applied to the constellation points, and the one that minimizes the time-signal peak is used. It involves generating a large set of data vectors all representing the same information; the data vector with the lowest resulting PAPR is selected. Information about the selected and transmitted data vector is coded, and these codes are sent on additional subcarriers.
Peak-to-Average Power Ratio Reduction | SLM_Img
Simulink Model Design for Wireless System

In our project we implement a Simulink model for a wireless sensor network standard based on the OFDM concept. The system is implemented in the Simulink toolbox of MATLAB and has a transmitter and receiver with the standard data transmission methodology between them; at the receiver end there is an analyzer block which helps us analyze the performance of the system on the basis of parameters like BER and number of errors. The main goal of this project is to learn and understand the features of wireless transmitter and receiver performance. After model design, BER is computed with an error rate calculation block to check the accuracy of the sensor network standard model for transmitter and receiver synchronization. MATLAB/Simulink is used for the implementation of the model.
QAM Transmitter and Receiver Design | Wireless Communication_Img
Signal Equalization in OFDM systems

In modern digital communications, it is well known that channel equalization plays an important role in compensating for channel distortion. Unfortunately, most channels have time-varying characteristics and their transfer functions change with time. Time-varying multipath interference and multiuser interference are also two major limitations for high-speed digital communications. Usually, adaptive equalizers are applied in order to overcome these issues. For adaptive channel equalization, we need a suitable filter structure and proper adaptive algorithms. High-speed digital transmissions mostly suffer from inter-symbol interference (ISI) and additive noise. Adaptive equalization algorithms recursively determine the filter coefficients in order to eliminate the effects of noise and ISI. Several algorithms, such as Least Mean Square (LMS), Recursive Least Squares (RLS), and Normalized Least Mean Square (NLMS), have been proposed to perform this equalization. In this project, we study the adaptive equalization technique using the normalized least mean square algorithm.
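The NLMS update studied here is short enough to write out. A hedged Python sketch: the filter predicts the desired signal from a sliding window of inputs, and the weight update is the LMS gradient step normalized by the input energy. For the demo the "channel" is a trivial identity (desired equals input), so the leading tap should converge to 1 and the error to 0.

```python
import random

def nlms(x, d, taps=4, mu=0.5, eps=1e-6):
    """NLMS adaptive filter: predict d[n] from the last `taps` inputs,
    updating weights with the normalized LMS rule after each sample."""
    w = [0.0] * taps
    errs = []
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]               # newest input first
        y = sum(wi * ui for wi, ui in zip(w, u))      # filter output
        e = d[n] - y                                  # instantaneous error
        norm = sum(ui * ui for ui in u) + eps         # input energy
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
        errs.append(e)
    return w, errs

# identity "channel" (d = x) with BPSK-like training symbols
rng = random.Random(0)
x = [rng.choice((-1.0, 1.0)) for _ in range(500)]
w, errs = nlms(x, x)   # w converges toward [1, 0, 0, 0]; errs decay to ~0
```

The energy normalization is what distinguishes NLMS from plain LMS: the effective step size adapts to the input power, giving stable convergence for 0 < mu < 2 regardless of signal scale.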
Adaptive Channel Equalization | LMS and NLMS Algorithms_Img
Wireless System Protocol Design using MATLAB

This project implements a Simulink model for the 802.11 wireless standard. The system is implemented in the Simulink tool of MATLAB and has a transmitter and receiver with the standard data transmission methodology between them; at the receiver end there is an analyzer block which helps us analyze the performance of the system on the basis of parameters like BER and number of errors. The main goal of this project is to learn and understand the features of the IEEE 802.11a standard and afterwards, once familiarized with it, to develop an OFDM 802.11a PHY layer baseband implementation with the characteristics given in the standard. MATLAB/Simulink is used for designing the model.
Ber Analysis | WSN | Simulink Model | Wimax_Img
Antenna Designing with Directivity Analysis

Three decades of growth and development of satellite communication have provided the world with international and long-distance fixed and mobile satellite services (FSS) that have helped change the world into what we know today as the global village. The global communication satellite market has expanded rapidly into personal communication services, mobile communication services, navigational satellite services, and broadband satellite services. What makes satellite communication such an attractive market can be summarized as wide-area coverage, distance insensitivity, flexibility, multiple access, destination capability, and economic reasons. An antenna array is a set of N spatially separated antennas. The number of antennas in an array can be as small as 2, or as large as several thousand (as in the AN/FPS-85 Phased Array Radar Facility operated by the U.S. Air Force). In general, the performance of an antenna array (for whatever application it is being used) increases with the number of antennas (elements) in the array; the drawback, of course, is the increased cost, size, and complexity. In this project linear and polar antenna array radiation patterns are implemented, and finally a comparison is made between them on the basis of the radiation pattern.
Antenna Designing | Analysis of Antenna Pattern_Img
Route Optimization Using ACO optimization

Nowadays in wireless communication, data is transferred over wireless media, where the administrator generally specifies the source and destination of the data; many algorithms have been developed for finding the best route by selecting the next neighbor node for the data to reach the destination. In this project, ant colony optimization (ACO) is used to select the best node for the next data transfer. First, we take as input from the user the coverage area in which the nodes lie, and then the number of nodes. After that, on the basis of Euclidean distance, we calculate the initial population of the problem to be solved by ACO. ACO is an optimization technique which finds the best result for an objective, which here is the next transfer node, so our fitness function is based on distance. To obtain the maximum fitness value we optimize the initial population using ant colony optimization and obtain the best route, with the objective of minimum distance, from the source node to the destination node.
Optimized Routing | ACO | Best Route Finding_Img
Digital Video Broadcasting Simulink

The demand for wireless communication is growing exponentially, and the next generation of wireless broadband multimedia communication systems will integrate various functions and applications in the same system. Supporting large data rates with sufficient robustness to radio channel impairments requires careful choice of the modulation technique. The suitable choice is orthogonal frequency division multiplexing (OFDM), a special case of multi-carrier communication in which a single data stream is transmitted over a number of lower-rate subcarriers. Altogether this has led to the conclusion that one radio frequency channel can be used to transmit more than one TV program. Digital video broadcasting (DVB-T) means broadcasting a multiplex, a package of various services. We implemented a DVB-T system together with an effective scheme called orthogonal frequency division multiplexing (OFDM), with which a high bit rate over the frequency-selective channel is guaranteed to some extent.
DVBT implementation | Analysis of OFDM system_Img
Wireless Updated Routing Protocol Design

This project is an advancement over the distance algorithm for data transfer, with the objectives of delay reduction and fast data transfer. The main objective is to provide maximum throughput or bandwidth to transfer data from source to destination. The algorithm works as follows. A table is created on the basis of acknowledgements, and transmission is carried out on the basis of its contents. The table consists of information regarding node distance, neighboring nodes, and bandwidth, which helps us select the shortest path with accuracy. As communication starts and data is sent, the node which has received the data acts as the source for further communication and transmits the data onwards. The table is then reviewed. If the next node is busy or massively congested, then the choice of node is made on a bandwidth basis, meaning the node with the highest bandwidth is selected. If the high-bandwidth nodes are themselves loaded, then the choice is made on a distance basis, i.e. the least distant node is selected. This technique is more efficient than the previous one in terms of delay reduction, fast data transfer, and maximum channel availability.
Bandwidth Efficient | QOS | Channel Availabilty_Img
Simulink Model for BER Analysis of OFDM Systems

OFDM is one application of a parallel data transmission scheme, which reduces the influence of multipath fading and makes complex equalizers unnecessary, making it attractive for future wireless communications. OFDM is a particular form of multi-carrier transmission and is suited for frequency-selective channels and high data rates. In this project we implement a Simulink model design for a wireless sensor network standard based on the OFDM concept. The system is implemented in the Simulink toolbox of MATLAB and has a transmitter and receiver with the standard data transmission methodology between them; at the receiver end there is an analyzer block which helps us analyze the performance of the system on the basis of parameters like BER and number of errors.
OFDM | Error Rate Calculation | Multiplexing_Img
Fading Channel performance analysis

Fading is the term used to describe the rapid fluctuations in the amplitude of a received radio signal over a short period of time. Fading is a common phenomenon in mobile communication channels, where it is caused by interference between two or more versions of the transmitted signal which arrive at the receiver at slightly different times. The resultant received signal can vary widely in amplitude and phase, depending on various factors such as the intensity, the relative propagation time of the waves, and the bandwidth of the transmitted signal. In this project we have implemented a Simulink model of a communication transmitter and receiver, and finally a Rayleigh fading channel is introduced to check its effect on communication performance, analyzed via BER, etc.
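The Rayleigh channel used here has a simple generative model: the envelope is the magnitude of a zero-mean complex Gaussian, i.e. the combination of many scattered paths with no line of sight. A hedged Python sketch (frame structure and BER measurement from the Simulink model are omitted; this only generates and sanity-checks the fading gains):

```python
import random

def rayleigh_envelope(n, sigma, rng):
    """Sample n Rayleigh fading gains as the magnitude of a complex Gaussian
    with independent N(0, sigma^2) in-phase and quadrature components."""
    return [(rng.gauss(0, sigma) ** 2 + rng.gauss(0, sigma) ** 2) ** 0.5
            for _ in range(n)]

rng = random.Random(7)
gains = rayleigh_envelope(50000, 1.0, rng)
mean_gain = sum(gains) / len(gains)     # theory: sigma * sqrt(pi/2) ~ 1.2533

# applying the gains to transmitted symbols multiplies each symbol's amplitude
symbols = [1.0, -1.0, 1.0]
faded = [g * s for g, s in zip(gains, symbols)]
```

Deep fades (gains near zero) are what dominate the BER over such a channel, which is why the Rayleigh BER curve falls off much more slowly with SNR than the AWGN curve.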
Communication using Fading channels | Analysis of Rayleigh Fading_Img
Multiuser detection in CDMA systems

Multiuser detection techniques exploit the structure of multiple-access interference (MAI) to achieve interference suppression and provide substantial performance gains over conventional single-user detection techniques. The interference between multiple direct-sequence users is what is referred to as MAI. In our project we study two multi-rate access methods of a multi-carrier CDMA system. A decorrelator detector is implemented for multiuser detection to remove MAI from the signal; this detector is an advance over the matched filter. At the receiving end, it is required to detect the signal of the desired user in the presence of MAI. If some interfering transmitters are located closer to the base station than the desired user, the receiver of the intended user receives more interference than it would have received without the near-far effect. The MAI and the near-far problem are the two issues which need considerable attention for reliable detection of the desired user's signal. The conventional detector treats MAI as external noise and is referred to as a single-user detection technique.
Decorrelating based MUD | Analysis of Multiuser detection_Img
Multiuser detection comparative analysis

The challenge of enhancing the capacity of a CDMA system therefore lies in interference management. Many techniques control and/or suppress interference in CDMA systems through transmit- and/or receiver-side processing. In this project we implement two multiuser detection techniques for a 2-user CDMA system and analyse them on the basis of SNR. The two techniques are:
• Least mean square (LMS) algorithm
• Blind MUD algorithm
First we generate m-sequences for the system, and then the data to be sent. The data is encoded and detected using both algorithms. Finally, we analyse the results on the basis of:
• Mean square error
• Signal-to-noise ratio
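The LMS branch of the comparison can be sketched as a trained linear detector: during a training phase the filter coefficients are adapted against known bits of the desired (weak) user, and the converged filter then suppresses the stronger interferer. The Python sketch below uses made-up codes and parameters and stands in for the project's MATLAB implementation.

```python
import random

def lms_mud_demo(n_train=500, n_test=200, mu=0.01, seed=0):
    """LMS-trained linear detector for the weak user in a 2-user CDMA system."""
    random.seed(seed)
    s1 = [1, -1, 1, -1, 1, -1, 1, -1]        # desired user's spreading code
    s2 = [1, -1, 1, 1, 1, -1, 1, 1]          # interferer (correlated -> MAI)
    n = len(s1)
    def symbol():
        b1 = random.choice([-1, 1])
        b2 = random.choice([-1, 1])
        y = [b1 * s1[i] + 2.0 * b2 * s2[i] + random.gauss(0, 0.1)
             for i in range(n)]
        return b1, y
    w = [0.0] * n
    for _ in range(n_train):                  # supervised training phase
        b1, y = symbol()
        out = sum(w[i] * y[i] for i in range(n))
        err = b1 - out                        # error signal drives adaptation
        for i in range(n):
            w[i] += mu * err * y[i]           # LMS coefficient update
    errors = 0
    for _ in range(n_test):                   # detection on unseen symbols
        b1, y = symbol()
        out = sum(w[i] * y[i] for i in range(n))
        errors += (out >= 0) != (b1 == 1)
    return errors / n_test
```

After convergence the detector approximates the MMSE solution, so the bit error rate for the weak user stays low despite the stronger interferer.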
Multi user detection | CDMA | Matched Filtering_Img
ECG Systems Analysis using DWT

Investigation of the ECG has been used extensively for diagnosing many cardiac diseases. The ECG is a realistic record of the direction and magnitude of the electrical activity generated by depolarization and repolarization of the atria and ventricles. The majority of the clinically useful information in the ECG is found in the intervals and amplitudes defined by its features (characteristic wave peaks and time durations). The development of precise and rapid methods for automatic ECG feature extraction is of chief importance, particularly for the examination of long recordings. In this project we implement a system that first extracts the characteristic features of the ECG and then, on that basis, finds the location and amplitude of the details of the ECG signal, so that the patient's condition can be identified. The ECG signals are taken from a static database of patient records.
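The wavelet-based feature extraction can be sketched in miniature: one level of the Haar DWT smooths the signal, and the R-peak locations and amplitudes are then read from the approximation coefficients. This is a simplified Python illustration (the threshold and the Haar basis are assumptions, not the project's actual wavelet choice).

```python
def haar_dwt(x):
    """One level of the Haar discrete wavelet transform."""
    h = 0.7071067811865476  # 1/sqrt(2)
    approx = [(x[i] + x[i + 1]) * h for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) * h for i in range(0, len(x) - 1, 2)]
    return approx, detail

def find_r_peaks(sig, threshold):
    """Locate R-peak positions from the DWT approximation coefficients."""
    approx, _ = haar_dwt(sig)
    peaks = []
    for i in range(1, len(approx) - 1):
        if (approx[i] > threshold
                and approx[i] >= approx[i - 1]
                and approx[i] >= approx[i + 1]):
            peaks.append(i * 2)   # map back to the original sample index
    return peaks
```

Once the R-peaks are located, intervals such as R-R spacing and segment amplitudes can be measured for disease detection.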
ECG Feature Extraction | ST Segment Detection | Disease Detection_Img
Audio Steganography Using LSB methodology

Information hiding is a part of information security. Steganography is an information-hiding technique that focuses on concealing the existence of secret messages. The aim of steganographic methods is to hide the existence of the communication itself and thereby keep any third party unaware of the steganographic exchange. In this project we implement an algorithm that hides text messages in an audio signal for message communication; at the receiving end, decoder software implementing the inverse of the algorithm recovers the message from the transmitted audio file. Embedding secret messages into digital sound is known as audio steganography. It is usually a more difficult process than embedding messages in other media. Audio steganography methods can embed messages in WAV, AU, and even MP3 sound files.
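The LSB method itself is simple: each message bit replaces the least significant bit of one PCM sample, changing the sample by at most one quantization level. A minimal Python sketch of the embed/extract pair, operating on 16-bit samples held as plain integers:

```python
def embed_lsb(samples, message):
    """Hide message bytes in the LSBs of PCM samples (one bit per sample)."""
    bits = []
    for byte in message:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))  # MSB first
    if len(bits) > len(samples):
        raise ValueError("message too long for this cover signal")
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the least significant bit
    return out

def extract_lsb(samples, n_bytes):
    """Recover n_bytes hidden by embed_lsb."""
    msg = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (samples[b * 8 + i] & 1)
        msg.append(byte)
    return bytes(msg)
```

Because each sample changes by at most one level, the embedded message is inaudible in typical cover audio; in practice the message length (or a terminator) must also be conveyed to the decoder.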
LSB based audio steganography_Img
Video Steganography for Data Hiding

Due to the increased security threats experienced today, confidential information such as medical records and banking data is at risk. Thus, multiple layers of data protection beyond conventional encryption must be considered. The current study presents the design and implementation of a steganographic protocol and a suite of tools that can automatically analyze the flash video (FLV) format and effectively hide information within it for application in a digital records environment. This study proposes several methods of hiding information within an FLV and discusses the corresponding advantages and disadvantages of each. The proposed steganographic methods are analyzed qualitatively by using auditory-visual perception tests and quantitatively by using video tag evolution graphs, histograms, and RGB averaging analysis. This study assumes a system in which sensitive data is embedded inside FLVs and then transmitted to several recipients who hold varying access authorization levels.
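One of the simplest methods in this family is naive EOF injection: the payload is appended after the last FLV tag, where typical players ignore it. The Python sketch below illustrates only this naive idea, not the study's actual protocol; the `MAGIC` marker constant is a made-up assumption.

```python
MAGIC = b"STEG"   # hypothetical marker; any byte sequence unlikely in the cover works

def hide_in_flv(flv_bytes, payload):
    """Naive EOF injection: append a marked payload after the FLV stream."""
    if flv_bytes[:3] != b"FLV":
        raise ValueError("not an FLV stream")
    return flv_bytes + MAGIC + len(payload).to_bytes(4, "big") + payload

def recover_from_flv(stego_bytes):
    """Find the marker from the end and read back the length-prefixed payload."""
    pos = stego_bytes.rfind(MAGIC)
    if pos < 0:
        return None
    n = int.from_bytes(stego_bytes[pos + 4:pos + 8], "big")
    return stego_bytes[pos + 8:pos + 8 + n]
```

EOF injection is trivially detectable by comparing file size against the declared tag stream, which is exactly the kind of trade-off the study's qualitative and quantitative analyses weigh against subtler tag-level embedding.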
Data hiding in video | Video steganography_Img
Digital Filter Designing using Genetic Algorithm

In this project, a new technique is presented for the design and optimization of digital FIR and IIR filters with coefficients represented in canonic signed-digit (CSD) format. Since such an implementation requires no multipliers, it reduces hardware cost and lowers power consumption. The proposed technique pursues three goals: the optimum number of coefficients, the optimum word length, and the optimum set of coefficients that satisfies the desired frequency response while ensuring minimum hardware cost by minimizing the number of nonzero digits in the CSD representation of the coefficients, using genetic algorithms (GA). Compared with the equiripple method, the proposed technique yields roughly a 30-40 percent reduction in hardware cost. Here, FIR or IIR filter design is carried out to improve the SNR of the signal in a communication system; as part of digital filter design for signal enhancement, several recently proposed optimization-based algorithms for the design of linear-phase FIR filters, IIR filters, and filter banks are also reviewed.
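The hardware cost being minimized is the count of nonzero digits in each coefficient's CSD form (each nonzero digit costs one adder/subtractor). A small Python sketch of the CSD conversion for non-negative integer coefficients, illustrating the representation only, not the GA itself:

```python
def to_csd(n):
    """Canonic signed-digit digits of a non-negative integer n
    (least significant digit first, digits in {-1, 0, 1})."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # choose +1 or -1 so no two adjacent digits are nonzero
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def nonzero_digits(digits):
    """Adder cost of one coefficient in a multiplierless realization."""
    return sum(1 for d in digits if d != 0)
```

For example, 7 = 8 - 1 needs only two nonzero CSD digits versus three in plain binary, so a GA fitness function can trade frequency-response error against the total `nonzero_digits` over all coefficients.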
Digital filter designing and optimization_Img
ADPCM based Coding And Compression

The specification of ADPCM opens the door to a host of applications in telecommunication networks. These applications can be divided into three categories: telephone company use, end-customer applications, and new service offerings. An ADPCM codec consists of an adaptive quantizer and an adaptive predictor; the decoder is simply a subset of the encoder and outputs the reconstructed signal r(n) instead of the codeword c(n). The adaptive predictor, which is composed of two poles and six zeros, computes an input signal estimate ŝ(n), which is subtracted from the input signal s(n), resulting in a difference signal d(n). The adaptive quantizer codes d(n) into the codeword c(n), which is sent over the transmission facility. At the receiving end, an ADPCM decoder uses c(n) to attempt to reconstruct the original signal s(n); in fact only r(n), which is related to the original input signal s(n), can be reconstructed.
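The encoder/decoder relationship described above can be shown with a much-simplified toy: a one-tap predictor (last reconstructed sample) and a multiplicative step-size adaptation, instead of the two-pole/six-zero predictor of the real codec. This Python sketch is illustrative only.

```python
def adpcm_encode(samples, step0=4.0):
    """Toy ADPCM encoder: 1-tap predictor plus an adaptive quantizer step."""
    codes, pred, step = [], 0.0, step0
    for s in samples:
        d = s - pred                              # difference signal d(n)
        code = max(-3, min(3, int(round(d / step))))   # codeword c(n)
        codes.append(code)
        pred += code * step                       # reconstruction r(n), mirrors decoder
        step = max(1.0, step * (1.5 if abs(code) == 3 else 0.8))  # adapt step size
    return codes

def adpcm_decode(codes, step0=4.0):
    """Toy ADPCM decoder: the same state update as the encoder's feedback path."""
    out, pred, step = [], 0.0, step0
    for code in codes:
        pred += code * step
        out.append(pred)
        step = max(1.0, step * (1.5 if abs(code) == 3 else 0.8))
    return out
```

Because the decoder repeats exactly the encoder's feedback computation, both sides stay in lock-step, which is the structural point made in the description: the decoder is a subset of the encoder.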
ADPCM based Coding And Compression | Audio Coding_Img
LMS based Audio Signal Enhancement

Signal processing is an operation designed for extracting, enhancing, storing, and transmitting useful information; hence it tends to be application dependent. In contrast to conventional filter design techniques, adaptive filters do not have constant coefficients, and no a priori information is required. A filter with such adjustable parameters is called an adaptive filter. Adaptive filters adjust their coefficients to minimize an error signal and can be realized as finite impulse response (FIR), infinite impulse response (IIR), lattice, and transform-domain filters. The most common form of adaptive filter is the transversal filter using the least mean square (LMS) algorithm. In this project the LMS algorithm is implemented with the following steps:
• Obtain an audio signal from the user
• Mix the signal with noise
• Feed the noisy signal to the adaptive (LMS) filter
• Perform filtration with the adaptive filter
• Obtain the final de-noised signal
• Analyse the result by comparing the de-noised signal with the original
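The steps above can be sketched as adaptive noise cancellation: the LMS filter learns to predict the noise component from a correlated noise reference, and the prediction error is the de-noised output. This Python sketch uses a synthetic sine as the "audio" signal and a made-up noise channel; the parameters are illustrative assumptions.

```python
import math
import random

def lms_denoise(n=2000, order=4, mu=0.01, seed=0):
    """LMS adaptive noise cancellation: returns (clean, noisy, de-noised)."""
    random.seed(seed)
    clean = [math.sin(2 * math.pi * 0.01 * k) for k in range(n)]
    ref = [random.gauss(0, 1) for _ in range(n)]          # noise reference input
    # noise reaching the primary input: reference through a short unknown channel
    noise = [0.5 * ref[k] + 0.3 * (ref[k - 1] if k else 0.0) for k in range(n)]
    primary = [clean[k] + noise[k] for k in range(n)]     # signal mixed with noise
    w = [0.0] * order
    out = []
    for k in range(n):
        x = [ref[k - i] if k - i >= 0 else 0.0 for i in range(order)]
        y = sum(w[i] * x[i] for i in range(order))        # noise estimate
        e = primary[k] - y                                # de-noised output sample
        out.append(e)
        for i in range(order):
            w[i] += mu * e * x[i]                         # LMS coefficient update
    return clean, primary, out
```

After convergence the filter taps approach the unknown channel coefficients, so the residual noise power in the output is far below that of the noisy input, which is exactly the final comparison step listed above.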
Adaptive Filters | Noise Removal | Filtration of audio signal_Img
SVM based recognition system in MATLAB

In today’s information technology world, security for systems is becoming more and more important. The number of systems that have been compromised is ever increasing, and authentication plays a major role as a first line of defence against intruders. The three main types of authentication are something you know (such as a password), something you have (such as a card or token), and something you are (a biometric). The pressure on today’s system administrators to keep systems secure is ever increasing, and one area where security can be improved is authentication. Iris recognition, a biometric, provides one of the most secure methods of authentication and identification thanks to the unique characteristics of the iris. Once an image of the iris has been captured with a standard camera, the authentication process, which compares the current subject’s iris with the stored version, is one of the most accurate, with very low false acceptance and rejection rates. This makes the technology very useful in areas such as information security, physical access control, ATMs, and airport security. In this project we implement an iris recognition technique based on an SVM (support vector machine) classifier that matches static images against database images, for use in security applications.
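A full iris pipeline involves segmentation and feature encoding before classification; the sketch below shows only the SVM classification stage, as a Pegasos-style sub-gradient trainer for a linear SVM on toy 2-D feature vectors. It is a Python illustration with made-up data, not the project's MATLAB classifier; the bias term is omitted, so the two classes are assumed roughly centered about the origin.

```python
def train_linear_svm(data, labels, lam=0.01, epochs=200):
    """Pegasos-style sub-gradient training of a linear SVM (no bias term)."""
    dim = len(data[0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in zip(data, labels):        # labels are +1 / -1
            t += 1
            eta = 1.0 / (lam * t)             # decaying step size
            margin = y * sum(w[i] * x[i] for i in range(dim))
            for i in range(dim):
                w[i] *= 1.0 - eta * lam       # regularization shrinkage
            if margin < 1:                    # hinge-loss violation -> update
                for i in range(dim):
                    w[i] += eta * y * x[i]
    return w

def svm_predict(w, x):
    """Classify a feature vector by the sign of the decision function."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
```

In the real system the 2-D toy points would be replaced by iris feature vectors (e.g. encoded texture features), and a kernel or one-vs-rest scheme would handle multiple enrolled subjects.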
Iris recognition using SVM classifier | Iris feature extraction_Img