Tuesday, July 2, 2013


MATLAB PROJECTS - ABSTRACTS
A Watermarking Based Medical Image Integrity Control System and an Image Moment Signature for Tampering Characterization
In this paper, we present a medical image integrity verification system to detect and approximate local malevolent image alterations (e.g., removal or addition of lesions), as well as to identify the nature of any global processing the image may have undergone (e.g., lossy compression, filtering). 
The proposed integrity analysis process is based on nonsignificant-region watermarking, with signatures extracted from different pixel blocks of interest and compared with recomputed ones at the verification stage. A set of three signatures is proposed. 
The first two, devoted to detection and modification localization, are cryptographic hashes and checksums, while the last is derived from image moment theory. In this paper, we first show how geometric moments can be used to approximate any local modification by its nearest generalized 2D Gaussian. 
We then demonstrate how ratios between original and recomputed geometric moments can be used as image features in a classifier-based strategy to determine the nature of a global image processing. 
Experimental results considering both local and global modifications in MRI and retina images illustrate the overall performance of our approach. With a pixel-block signature about 200 bits long, it is possible to detect a modification, roughly localize it, and characterize the nature of the tampering.
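
As an illustrative sketch of the moment-based signature idea (not the paper's exact construction; `geometric_moment` and the toy 4x4 block are invented for this example), raw geometric moments of a pixel block change when the block is tampered with, so ratios of original to recomputed moments can flag and characterize a modification:

```python
import numpy as np

def geometric_moment(block, p, q):
    """Raw geometric moment m_pq = sum over (x, y) of x^p * y^q * I(x, y)."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    return float(np.sum((x ** p) * (y ** q) * block))

# A toy 4x4 "pixel block"; adding intensity (e.g., a fake lesion) changes
# the recomputed moments relative to the stored originals.
block = np.ones((4, 4))
m00 = geometric_moment(block, 0, 0)
tampered = block.copy()
tampered[1, 1] = 5.0
print(m00, geometric_moment(tampered, 0, 0))
```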


A Hybrid Multiview Stereo Algorithm for Modeling Urban Scenes
We present an original multiview stereo reconstruction algorithm that models urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). 
We adopt a two-step strategy consisting first in segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second in sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. 
The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). 
The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.


Adaptive Fingerprint Image Enhancement With Emphasis on Preprocessing of Data
This article proposes several improvements to an adaptive fingerprint enhancement method that is based on contextual filtering. The term adaptive implies that parameters of the method are automatically adjusted based on the input fingerprint image. 
Five processing blocks comprise the adaptive fingerprint enhancement method, where four of these blocks are updated in our proposed system. 
Hence, the proposed overall system is novel. The four updated processing blocks are: 1) preprocessing; 2) global analysis; 3) local analysis; and 4) matched filtering. In the preprocessing and local analysis blocks, a nonlinear dynamic range adjustment method is used. In the global analysis and matched filtering blocks, different forms of order statistical filters are applied. 
These processing blocks yield an improved and new adaptive fingerprint image processing method. The performance of the updated processing blocks is presented in the evaluation part of this paper. The algorithm is evaluated against the NIST-developed NBIS software for fingerprint recognition on the FVC databases.
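
The order-statistic filtering mentioned for the global-analysis and matched-filtering blocks can be sketched generically as follows (an illustration of the filter family, not the paper's specific filters; `order_statistic_filter` is a name invented here). Choosing the middle rank gives the classic median filter:

```python
import numpy as np

def order_statistic_filter(img, size=3, rank=None):
    """Slide a size x size window over img; output the rank-th order
    statistic of each window. rank = (size*size)//2 is the median filter."""
    if rank is None:
        rank = (size * size) // 2
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + size, j:j + size].ravel()
            out[i, j] = np.sort(window)[rank]
    return out

# An impulse (salt noise) is removed by the median while edges survive.
noisy = np.array([[10, 10, 10],
                  [10, 99, 10],
                  [10, 10, 10]], dtype=float)
print(order_statistic_filter(noisy))
```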


Airborne Vehicle Detection in Dense Urban Areas Using HoG Features and Disparity Maps 
Vehicle detection has been an important research field for years, as it supports many valuable applications ranging from traffic planning to real-time traffic management. Detection of cars in dense urban areas is of particular interest due to the high traffic volume and limited space. In city areas, many car-like objects (e.g., dormers) appear, which can lead to confusion. 
Additionally, the inaccuracy of road databases supporting the extraction process has to be handled in a proper way. This paper describes an integrated real-time processing chain which utilizes multiple occurrence of objects in images. At least two subsequent images, data of exterior orientation, a global DEM, and a road database are used as input data. 
The segments of the road database are projected in the non-geocoded image using the corresponding height information from the global DEM. From amply masked road areas in both images a disparity map is calculated. This map is used to exclude elevated objects above a certain height (e.g., buildings and vegetation). 
Additionally, homogeneous areas are excluded by a fast region-growing algorithm. Remaining parts of one input image are classified based on Histogram of Oriented Gradients (HoG) features. The implemented approach has been verified using image sections from two different flights and manually extracted ground truth data from the inner city of Munich. The evaluation shows a quality of up to 70 percent.
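
A minimal sketch of the HoG idea is the orientation histogram of a single cell, computed with NumPy (the full descriptor adds block normalization and a sliding window; `hog_cell_histogram` is illustrative only):

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Unsigned (0-180 degree) orientation histogram of one cell, with
    gradient magnitudes as votes, L2-normalized."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A vertical step edge: all gradient energy lands in the 0-degree bin.
cell = np.zeros((8, 8))
cell[:, 4:] = 1.0
print(hog_cell_histogram(cell))
```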


An Optimized Wavelength Band Selection for Heavily Pigmented Iris Recognition 
Commercial iris recognition systems usually acquire images of the eye in the 850-nm band of the electromagnetic spectrum. In this work, heavily pigmented iris images are captured at 12 wavelengths, from 420 to 940 nm. 
The purpose is to find the most suitable wavelength band for the heavily pigmented iris recognition. A multispectral acquisition system is first designed for imaging the iris at narrow spectral bands in the range of 420-940 nm. Next, a set of 200 human black irises which correspond to the right and left eyes of 100 different subjects are acquired for an analysis. 
Finally, the most suitable wavelength for heavily pigmented iris recognition is found based on two approaches: 1) texture quality assessment and 2) matching performance, measured by the equal error rate (EER) and false rejection rate (FRR). 
This result is supported by visual observations of magnified detailed local iris texture information. The experimental results suggest that there exists a most suitable wavelength band for heavily pigmented iris recognition when using a single band of wavelength as illumination.
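
The EER criterion can be illustrated on synthetic match scores (a generic sketch; the function name and the score distributions are invented here, not taken from the paper). The EER is the operating point where the false rejection rate equals the false acceptance rate:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep a threshold over match scores (higher = more similar) and
    return the error rate where FRR is closest to FAR."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, best_eer = 1.0, 0.5
    for t in thresholds:
        frr = np.mean(genuine < t)       # genuine pairs rejected
        far = np.mean(impostor >= t)     # impostor pairs accepted
        if abs(frr - far) < best_gap:
            best_gap, best_eer = abs(frr - far), (frr + far) / 2.0
    return best_eer

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.3, 0.1, 1000)
print(equal_error_rate(genuine, impostor))
```

Well-separated score distributions (as in a suitable wavelength band) give a small EER; overlapping distributions push it toward 0.5.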


Analysis Operator Learning and Its Application to Image Reconstruction
Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well established tool for the design of image reconstruction algorithms. 
An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model depends critically on the choice of a suitable operator. 
In this work, we present an algorithm for learning an analysis operator from training images. Our method is based on an ℓ_p-norm minimization on the set of full-rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds and explain the underlying geometry of the constraints. 
Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques.
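
The analysis model itself is easy to demonstrate: applied to the right class of signals, a suitable operator yields a sparse output. The first-order finite-difference matrix below is a standard textbook example of such an operator (not the learned operator from the paper); on a piecewise-constant signal its output is nonzero only at the jumps:

```python
import numpy as np

def finite_difference_operator(n):
    """(n-1) x n first-order difference matrix, a classic analysis operator."""
    omega = np.zeros((n - 1, n))
    for i in range(n - 1):
        omega[i, i], omega[i, i + 1] = -1.0, 1.0
    return omega

x = np.array([2.0, 2.0, 2.0, 5.0, 5.0, 5.0, 5.0, 1.0])  # piecewise constant
omega = finite_difference_operator(len(x))
coeffs = omega @ x
print(np.count_nonzero(coeffs), "nonzeros out of", coeffs.size)
```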
  

Atmospheric Turbulence Mitigation Using Complex Wavelet-Based Fusion 
Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying perturbations, makes a model-based solution difficult and, in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence, which can severely degrade a region of interest (ROI). 
In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. 
We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios.
  

Automatic Detection and Reconstruction of Building Radar Footprints From Single VHR SAR Images 
The spaceborne synthetic aperture radar (SAR) systems Cosmo-SkyMed, TerraSAR-X, and TanDEM-X acquire imagery with very high spatial resolution (VHR), supporting various important application scenarios, such as damage assessment in urban areas after natural disasters. To ensure a reliable, consistent, and fast extraction of information from complex SAR scenes, automatic information extraction methods are essential. Focusing on the analysis of urban areas, which is of prime interest for VHR SAR, in this paper we present a novel method for the automatic detection and 2-D reconstruction of building radar footprints from VHR SAR scenes. 
Unlike most of the literature methods, the proposed approach can be applied to single images. The method is based on the extraction of a set of low-level features from the images and on their composition to more structured primitives using a production system. Then, the concept of semantic meaning of the primitives is introduced and used for both the generation of building candidates and the radar footprint reconstruction. 
The semantic meaning represents the probability that a primitive belongs to a certain scattering class (e.g., double bounce, roof, facade) and has been defined in order to compensate for the lack of detectable features in single images. Indeed, it allows the selection of the most reliable primitives and footprint hypotheses on the basis of fuzzy membership grades. 
The efficiency of the proposed method is demonstrated by processing a 1-m resolution TerraSAR-X spotbeam scene containing flat- and gable-roof buildings at various settings. The results show that the method has a high overall detection rate and that radar footprints are well reconstructed, in particular for medium and large buildings.
  

Compressive Framework for Demosaicing of Natural Images 
Typical consumer digital cameras sense only one out of three color components per image pixel. The problem of demosaicing deals with interpolating those missing color components. In this paper, we present compressive demosaicing (CD), a framework for demosaicing natural images based on the theory of compressed sensing (CS). 
Given sensed samples of an image, CD employs a CS solver to find the sparse representation of that image under a fixed sparsifying dictionary Ψ. As opposed to state-of-the-art CS-based demosaicing approaches, we draw a clear distinction between the interchannel (color) and interpixel correlations of natural images. 
Drawing on well-known facts about the human visual system, these two types of correlations are combined in a nonseparable format to construct the sparsifying transform Ψ. Our simulation results verify that CD performs better (both visually and in terms of PSNR) than leading demosaicing approaches when applied to the majority of standard test images.
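
The one-sample-per-pixel sensing that demosaicing must invert can be simulated with an RGGB Bayer mask (a generic illustration of the measurement model, not the paper's CS sampling operator; `bayer_sample` is invented here):

```python
import numpy as np

def bayer_sample(rgb):
    """Simulate a single-sensor camera: keep one colour channel per pixel
    following the RGGB Bayer pattern; demosaicing must restore the rest."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even row, even col
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even row, odd col
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd row, even col
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd row, odd col
    return mosaic

# A constant test image: R = 1, G = 2, B = 3 makes the pattern visible.
rgb = np.dstack([np.full((4, 4), 1.0), np.full((4, 4), 2.0), np.full((4, 4), 3.0)])
mosaic = bayer_sample(rgb)
print(mosaic)
```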


Context-Based Hierarchical Unequal Merging for SAR Image Segmentation 
This paper presents an image segmentation method named Context-based Hierarchical Unequal Merging for Synthetic aperture radar (SAR) Image Segmentation (CHUMSIS), which uses superpixels as the operation units instead of pixels. 
Based on the Gestalt laws, three rules that realize a new and natural way to manage different kinds of features extracted from SAR images are proposed to represent superpixel context. The rules are prior knowledge from cognitive science and serve as top-down constraints to globally guide the superpixel merging. 
The features, including brightness, texture, edges, and spatial information, locally describe the superpixels of SAR images and are bottom-up forces. While merging superpixels, a hierarchical unequal merging algorithm is designed, which includes two stages: 1) coarse merging stage and 2) fine merging stage. 
The merging algorithm unequally allocates computation resources, spending less running time on superpixels without ambiguity and more on superpixels with ambiguity. Experiments on synthetic and real SAR images indicate that the algorithm strikes a balance between computation speed and segmentation accuracy. Compared with two state-of-the-art Markov random field models, CHUMSIS obtains good segmentation results while successfully reducing running time.


Discrete Wavelet Transform and Data Expansion Reduction in the Homomorphic Encrypted Domain
Signal processing in the encrypted domain is a new technology with the goal of protecting valuable signals from insecure signal processing. In this paper, we propose a method for implementing discrete wavelet transform (DWT) and multiresolution analysis (MRA) in homomorphic encrypted domain. 
We first suggest a framework for performing DWT and inverse DWT (IDWT) in the encrypted domain, then conduct an analysis of data expansion and quantization errors under the framework. To solve the problem of data expansion, which may be very important in practical applications, we present a method for reducing data expansion in the case that both DWT and IDWT are performed. With the proposed method, multilevel DWT/IDWT can be performed with less data expansion in homomorphic encrypted domain. 
We propose a new signal processing procedure, where the multiplicative inverse method is employed as the last step to limit the data expansion. Taking a 2-D Haar wavelet transform as an example, we conduct a few experiments to demonstrate the advantages of our method in secure image processing. 
We also provide computational complexity analyses and comparisons. To the best of our knowledge, there has been no report on the implementation of DWT and MRA in the encrypted domain.
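
The data-expansion issue is visible already in plaintext: homomorphic schemes work over integers, so the Haar filters must stay unnormalized, and coefficient magnitudes grow with each level. A sketch of one unnormalized integer Haar level and its exact inverse (plain integers here; in the actual scheme these additions and subtractions would run on ciphertexts):

```python
import numpy as np

def haar_level_int(x):
    """One level of an unnormalised integer Haar DWT: pairwise sums and
    differences. Amplitudes roughly double per level, which is the
    data-expansion problem discussed above."""
    x = np.asarray(x, dtype=np.int64)
    approx = x[0::2] + x[1::2]
    detail = x[0::2] - x[1::2]
    return approx, detail

def inverse_haar_level_int(approx, detail):
    """Exact inverse: sums and differences are both even, so // 2 is lossless."""
    x = np.empty(2 * len(approx), dtype=np.int64)
    x[0::2] = (approx + detail) // 2
    x[1::2] = (approx - detail) // 2
    return x

x = np.array([5, 3, 8, 8, 1, 7, 0, 2])
a, d = haar_level_int(x)
print(a, d, inverse_haar_level_int(a, d))
```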
  

Estimating Information from Image Colors: An Application to Digital Cameras and Natural Scenes 
The colors present in an image of a scene provide information about its constituent elements, but the amount of information depends on the imaging conditions and on how information is calculated. This work had two aims. The first was to derive explicit estimators of the information available and the information retrieved from the color values at each point in images of a scene under different illuminations. 
The second was to apply these estimators to simulations of images obtained with five sets of sensors used in digital cameras and with the cone photoreceptors of the human eye. Estimates were obtained for 50 hyperspectral images of natural scenes under daylight illuminants with correlated color temperatures 4,000, 6,500, and 25,000 K. Depending on the sensor set, the mean estimated information available across images with the largest illumination difference varied from 15.5 to 18.0 bits and the mean estimated information retrieved after optimal linear processing varied from 13.2 to 15.5 bits (each about 85 percent of the corresponding information available). 
With the best sensor set, 390 percent more points could be identified per scene than with the worst. Capturing scene information from image colors depends crucially on the choice of camera sensors.


General Constructions for Threshold Multiple-Secret Visual Cryptographic Schemes 
A conventional threshold (k out of n) visual secret sharing scheme encodes one secret image P into n transparencies (called shares) such that any group of k transparencies reveals P when superimposed, while any group of fewer than k reveals nothing. 
We define and develop general constructions for threshold multiple-secret visual cryptographic schemes (MVCSs) that are capable of encoding s secret images P1,P2,...,Ps into n shares such that any group of less than k shares obtains none of the secrets, while 1) each group of k, k+1,..., n shares reveals P1, P2, ..., Ps, respectively, when superimposed, referred to as (k, n, s)-MVCS where s=n-k+1; or 2) each group of u shares reveals P(ru) where ru ∈ {0,1,2,...,s} (ru=0 indicates no secret can be seen), k ≤ u ≤ n and 2 ≤ s ≤ n-k+1, referred to as (k, n, s, R)-MVCS in which R=(rk, rk+1, ..., rn) is called the revealing list. 
We use linear programming to model (k, n, s)- and (k, n, s, R)-MVCSs as integer linear programs that minimize the pixel expansions under all necessary constraints. The pixel expansions of different problem scales are explored, which have never been reported in the literature. Our constructions are novel and flexible, and can easily be customized to cope with various kinds of MVCSs.
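
The simplest instance of visual secret sharing, the (2, 2) scheme with pixel expansion 2, shows the mechanics these MVCS constructions generalize (an illustrative textbook sketch, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(7)

def encode_2_of_2(secret):
    """(2,2) visual secret sharing, pixel expansion 2: each secret pixel
    becomes a pair of subpixels on each share (1 = dark). Stacking the
    transparencies (logical OR) reveals the secret; one share alone is random."""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=int)
    s2 = np.zeros((h, 2 * w), dtype=int)
    for i in range(h):
        for j in range(w):
            a = [1, 0] if rng.integers(0, 2) else [0, 1]
            s1[i, 2 * j:2 * j + 2] = a
            # white pixel: identical pairs (stack -> one dark subpixel)
            # black pixel: complementary pairs (stack -> two dark subpixels)
            s2[i, 2 * j:2 * j + 2] = a if secret[i, j] == 0 else [1 - a[0], 1 - a[1]]
    return s1, s2

secret = np.array([[0, 1], [1, 0]])   # 1 = black
s1, s2 = encode_2_of_2(secret)
stacked = np.maximum(s1, s2)          # superimposing transparencies = OR
print(stacked.reshape(2, 2, 2).sum(axis=2))  # dark count: 2 = black, 1 = white
```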


General Framework for Histogram-Shifting-Based Reversible Data Hiding
Histogram shifting (HS) is a useful technique for reversible data hiding (RDH). With HS-based RDH, high capacity and low distortion can be achieved efficiently. In this paper, we revisit the HS technique and present a general framework for constructing HS-based RDH. Under the proposed framework, one can obtain an RDH algorithm by simply designing the so-called shifting and embedding functions. 
Moreover, by taking specific shifting and embedding functions, we show that several RDH algorithms reported in the literature are special cases of this general construction. In addition, two novel and efficient RDH algorithms are also introduced to further demonstrate the universality and applicability of our framework. 
It is expected that more efficient RDH algorithms can be devised according to the proposed framework by carefully designing the shifting and embedding functions.
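
A minimal NumPy sketch of the classic HS special case (assuming, for simplicity, that the zero bin lies above the peak bin; the function names are invented here): bins strictly between the peak and the zero bin are shifted up by one, and each peak-valued pixel then carries one bit.

```python
import numpy as np

def hs_embed(pixels, bits, peak, zero):
    """Shift bins strictly between peak and zero up by one, emptying bin
    peak+1; each peak-valued pixel then carries one bit (0 = stay, 1 = move)."""
    out = pixels.copy()
    out[(out > peak) & (out < zero)] += 1
    carriers = np.flatnonzero(pixels == peak)
    for idx, bit in zip(carriers, bits):
        if bit:
            out[idx] = peak + 1
    return out

def hs_extract(marked, n_bits, peak, zero):
    """Read bits from pixels valued peak/peak+1, then invert the shift."""
    carrier_mask = (marked == peak) | (marked == peak + 1)
    bits = [int(v == peak + 1) for v in marked[carrier_mask]][:n_bits]
    restored = marked.copy()
    restored[restored == peak + 1] = peak
    restored[(restored >= peak + 2) & (restored <= zero)] -= 1
    return bits, restored

pixels = np.array([3, 5, 3, 4, 6, 3, 5])   # peak bin 3, empty bin 7
marked = hs_embed(pixels, [1, 0, 1], peak=3, zero=7)
bits, restored = hs_extract(marked, 3, peak=3, zero=7)
print(bits, np.array_equal(restored, pixels))
```

In the framework's terms, the shift is the "shifting function" and the stay/move rule on peak pixels is the "embedding function".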
  

Hyperspectral Imagery Restoration Using Nonlocal Spectral-Spatial Structured Sparse Representation With Noise Estimation 
Noise reduction is an active research area in image processing due to its importance in improving image quality for object detection and classification. In this paper, we develop a sparse-representation-based noise reduction method for hyperspectral imagery, which relies on the assumption that the non-noise component of an observed signal can be sparsely decomposed over a redundant dictionary while the noise component cannot. 
The main contribution of the paper is the introduction of nonlocal similarity and the spectral-spatial structure of hyperspectral imagery into sparse representation. Nonlocality refers to the self-similarity of an image, by which a whole image can be partitioned into groups of similar patches. The similar patches in each group are sparsely represented with a shared subset of atoms in a dictionary, making the true signal and the noise more easily separated. 
Sparse representation with spectral-spatial structure can exploit the joint spectral and spatial correlations of hyperspectral imagery by using 3-D blocks instead of 2-D patches for sparse coding, which also makes the true signal and the noise more distinguishable. Moreover, hyperspectral imagery contains both signal-independent and signal-dependent noise, so a mixed Poisson and Gaussian noise model is used. 
In order to make sparse representation be insensitive to the various noise distribution in different blocks, a variance-stabilizing transformation (VST) is used to make their variance comparable. The advantages of the proposed methods are validated on both synthetic and real hyperspectral remote sensing data sets.
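
The VST step can be sketched with the generalized Anscombe transform, a standard choice for mixed Poisson-Gaussian noise (shown here as a generic illustration; the paper may use a different variant). After the transform the noise variance is approximately 1 regardless of the local signal level:

```python
import numpy as np

def generalized_anscombe(x, gain=1.0, sigma=0.0):
    """Generalized Anscombe transform: maps mixed Poisson-Gaussian data to
    approximately unit-variance noise, so one sparsity model fits all blocks."""
    return (2.0 / gain) * np.sqrt(np.maximum(gain * x + 0.375 * gain ** 2 + sigma ** 2, 0.0))

rng = np.random.default_rng(1)
# Poisson noise variance equals the mean: heteroscedastic before, ~1 after.
for lam in (5.0, 50.0):
    samples = rng.poisson(lam, 100_000)
    print(lam, np.var(samples), np.var(generalized_anscombe(samples)))
```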


Image Size Invariant Visual Cryptography for General Access Structures Subject to Display Quality Constraints
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. 
This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. 
We begin by formulating a mathematical model of the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous methods. 


Interactive Segmentation for Change Detection in Multispectral Remote-Sensing Images 
In this letter, we propose to solve the change detection (CD) problem in multitemporal remote-sensing images using interactive segmentation methods. The user needs to input markers related to change and no-change classes in the difference image. 
Then, the pixels under these markers are used by a support vector machine classifier to generate a spectral-change map. To further enhance the result, we include spatial contextual information in the decision process using two different solutions, based on Markov random fields and level-set methods. 
While the former is a region-driven method, the latter exploits both region and contour for performing the segmentation task. Experiments conducted on a set of four real remote-sensing images acquired by low as well as very high spatial resolution sensors and referring to different kinds of changes confirm the attractive capabilities of the proposed methods in generating accurate CD maps with simple and minimal interaction.


Intra-and-Inter-Constraint-Based Video Enhancement Based on Piecewise Tone Mapping 
Video enhancement plays an important role in various video applications. In this paper, we propose a new intra-and-inter-constraint-based video enhancement approach aiming to: 1) achieve high intraframe quality of the entire picture where multiple regions-of-interest (ROIs) can be adaptively and simultaneously enhanced, and 2) guarantee the interframe quality consistencies among video frames. 
We first analyze features from different ROIs and create a piecewise tone mapping curve for the entire frame such that the intraframe quality can be enhanced. We further introduce new interframe constraints to improve the temporal quality consistency. 
Experimental results show that the proposed algorithm clearly outperforms state-of-the-art algorithms.
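
At its core, a piecewise tone-mapping curve is just interpolation through knot points. The knots below are invented for illustration (the paper derives its curve from per-ROI feature analysis); here the shadow segment is stretched while highlights are compressed:

```python
import numpy as np

def piecewise_tone_map(img, knots_in, knots_out):
    """Apply a piecewise-linear tone-mapping curve defined by knot pairs.
    Different slopes enhance different intensity ranges (e.g., the range
    a particular ROI's histogram occupies)."""
    return np.interp(img, knots_in, knots_out)

# Intensities in [0, 1]: stretch shadows (slope 2), compress highlights.
curve_in = [0.0, 0.3, 1.0]
curve_out = [0.0, 0.6, 1.0]
print(piecewise_tone_map(np.array([0.15, 0.3, 0.65]), curve_in, curve_out))
```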


Latent Fingerprint Matching Using Descriptor-Based Hough Transform 
Identifying suspects based on impressions of fingers lifted from crime scenes (latent prints) is a routine procedure that is extremely important to forensics and law enforcement agencies. Latents are partial fingerprints that are usually smudged, small in area, and highly distorted. 
Due to these characteristics, latents have a significantly smaller number of minutiae points compared to full (rolled or plain) fingerprints. The small number of minutiae and the noise characteristic of latents make it extremely difficult to automatically match latents to their mated full prints that are stored in law enforcement databases. Although a number of algorithms for matching full-to-full fingerprints have been published in the literature, they do not perform well on the latent-to-full matching problem. 
Further, they often rely on features that are not easy to extract from poor quality latents. In this paper, we propose a new fingerprint matching algorithm which is especially designed for matching latents. The proposed algorithm uses a robust alignment algorithm (descriptor-based Hough transform) to align fingerprints and measures similarity between fingerprints by considering both minutiae and orientation field information. 
To be consistent with the common practice in latent matching (i.e., only minutiae are marked by latent examiners), the orientation field is reconstructed from minutiae. Since the proposed algorithm relies only on manually marked minutiae, it can be easily used in law enforcement applications. 
Experimental results on two different latent databases (NIST SD27 and WVU latent databases) show that the proposed algorithm outperforms two well optimized commercial fingerprint matchers. Further, a fusion of the proposed algorithm and commercial fingerprint matchers leads to improved matching accuracy.


LDFT-Based Watermarking Resilient to Local Desynchronization Attacks 
Designing a watermarking scheme that is robust against desynchronization attacks (DAs) remains a major challenge. Most image watermarking resynchronization schemes in the literature can survive individual global DAs (e.g., rotation, scaling, translation, and other affine transforms), but few are resilient to challenging cropping and local DAs. The main reason is that robust features for watermark synchronization are only globally invariant rather than locally invariant. 
In this paper, we present a blind image watermarking resynchronization scheme that is robust against local transform attacks. First, we propose a new feature transform, the local daisy feature transform (LDFT), which is not only globally but also locally invariant. Then, a binary space partitioning (BSP) tree is used to partition the geometrically invariant LDFT space. In the BSP tree, the location of each pixel is fixed under global transforms, local transforms, and cropping. 
Lastly, the watermarking sequence is embedded bit by bit into each leaf node of the BSP tree by using the logarithmic quantization index modulation watermarking embedding method. Simulation results show that the proposed watermarking scheme can survive numerous kinds of distortions, including common image-processing attacks, local and global DAs, and noninvertible cropping.
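
The embedding primitive here, quantization index modulation, can be sketched in a few lines (plain QIM below; the paper uses a logarithmic variant, and the function names are invented). Each bit selects one of two interleaved quantizer lattices, and extraction picks the nearer lattice:

```python
import numpy as np

def qim_embed(value, bit, delta=8.0):
    """Quantize the value onto one of two lattices offset by delta/2;
    which lattice the result sits on encodes the bit."""
    offset = delta / 2.0 if bit else 0.0
    return delta * np.round((value - offset) / delta) + offset

def qim_extract(value, delta=8.0):
    """Decode by choosing the lattice closest to the (possibly noisy) value."""
    d0 = abs(value - qim_embed(value, 0, delta))
    d1 = abs(value - qim_embed(value, 1, delta))
    return int(d1 < d0)

v = 23.7
print(qim_embed(v, 0), qim_embed(v, 1))  # 24.0 and 20.0
```

Decoding survives any perturbation smaller than delta/4, which is the source of QIM's robustness-distortion trade-off.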


Linear Distance Coding for Image Classification 
The feature coding-pooling framework performs well in image classification tasks because it can generate discriminative and robust image representations. However, the information loss incurred by feature quantization in the coding process and the undesired dependence of pooling on the image spatial layout may severely limit classification performance. 
In this paper, we propose a linear distance coding (LDC) method to capture the discriminative information lost in traditional coding methods while simultaneously alleviating the dependence of pooling on the image spatial layout. The core of the LDC lies in transforming local features of an image into more discriminative distance vectors, where the robust image-to-class distance is employed. 
These distance vectors are further encoded into sparse codes to capture the salient features of the image. The LDC is theoretically and experimentally shown to be complementary to the traditional coding methods, and thus their combination can achieve higher classification accuracy. 
We demonstrate the effectiveness of LDC on six data sets, two of each of three types (specific object, scene, and general object), i.e., Flower 102 and PFID 61, Scene 15 and Indoor 67, Caltech 101 and Caltech 256. The results show that our method generally outperforms the traditional coding methods, and achieves or is comparable to the state-of-the-art performance on these data sets.


Local Directional Number Pattern for Face Analysis: Face and Expression Recognition 
This paper proposes a novel local feature descriptor, local directional number pattern (LDN), for face analysis, i.e., face and expression recognition. LDN encodes the directional information of the face's textures (i.e., the texture's structure) in a compact way, producing a more discriminative code than current methods. 
We compute the structure of each micro-pattern with the aid of a compass mask that extracts directional information, and we encode this information using the prominent direction indices (directional numbers) and sign, which allows us to distinguish among similar structural patterns that have different intensity transitions. 
We divide the face into several regions, and extract the distribution of the LDN features from them. Then, we concatenate these features into a feature vector, and we use it as a face descriptor. We perform several experiments in which our descriptor performs consistently under illumination, noise, expression, and time lapse variations. 
Moreover, we test our descriptor with different masks to analyze its performance in different face analysis tasks.


Noise Reduction Based on Partial-Reference, Dual-Tree Complex Wavelet Transform Shrinkage 
This paper presents a novel way to reduce noise introduced or exacerbated by image enhancement methods, in particular (though not exclusively) algorithms based on the random spray sampling technique. Owing to the nature of sprays, output images of spray-based methods tend to exhibit noise with an unknown statistical distribution. 
To avoid inappropriate assumptions about the statistical characteristics of this noise, a different assumption is made: the non-enhanced image is considered to be either free of noise or affected by non-perceivable levels of noise. Taking advantage of the higher sensitivity of the human visual system to changes in brightness, the analysis can be limited to the luma channel of both the non-enhanced and enhanced images. 
Also, given the importance of directional content in human vision, the analysis is performed through the dual-tree complex wavelet transform (DTWCT). Unlike the discrete wavelet transform, the DTWCT allows for distinction of data directionality in the transform space. For each level of the transform, the standard deviation of the non-enhanced image coefficients is computed across the six orientations of the DTWCT, then it is normalized. 
The result is a map of the directional structures present in the non-enhanced image. Said map is then used to shrink the coefficients of the enhanced image. The shrunk coefficients and the coefficients from the non-enhanced image are then mixed according to data directionality. Finally, a noise-reduced version of the enhanced image is computed via the inverse transforms. A thorough numerical analysis of the results has been performed in order to confirm the validity of the proposed approach.


Query-Adaptive Image Search With Hash Codes
Scalable image search based on visual similarity has been an active topic of research in recent years. State-of-the-art solutions often use hashing methods to embed high-dimensional image features into Hamming space, where search can be performed in real-time based on Hamming distance of compact hash codes. 
Unlike traditional metrics (e.g., Euclidean) that offer continuous distances, the Hamming distances are discrete integer values. As a consequence, there are often a large number of images sharing equal Hamming distances to a query, which largely hurts search results where fine-grained ranking is very important. 
This paper introduces an approach that enables query-adaptive ranking of the returned images with equal Hamming distances to the queries. This is achieved by first learning, offline, bitwise weights of the hash codes for a diverse set of predefined semantic concept classes. 
We formulate the weight learning process as a quadratic programming problem that minimizes intra-class distance while preserving inter-class relationship captured by original raw image features. Query-adaptive weights are then computed online by evaluating the proximity between a query and the semantic concept classes. 
With the query-adaptive bitwise weights, returned images can be easily ordered by weighted Hamming distance at a finer-grained hash code level rather than the original Hamming distance level. Experiments on a Flickr image dataset show clear improvements from our proposed approach.
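The finer-grained ranking idea reduces to a weighted Hamming distance, sketched below. The codes, weights, and values are illustrative, not taken from the paper:

```python
import numpy as np

def weighted_hamming(query, code, weights):
    # query / code: 0/1 hash codes; weights: query-adaptive per-bit weights.
    # Each mismatched bit contributes its weight, so images at the same
    # integer Hamming distance can still be ranked at a finer granularity.
    return float(np.sum(weights * (query != code)))
```

Two database images at integer Hamming distance 1 from the query receive different weighted distances whenever the disagreeing bits carry different weights, which is exactly what breaks the ties.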


Regional Spatially Adaptive Total Variation Super-Resolution with Spatial Information Filtering and Clustering
Total variation is used as a popular and effective image prior model in the regularization-based image processing fields. However, as the total variation model favors a piecewise constant solution, the processing result under high noise intensity in the flat regions of the image is often poor, and some pseudoedges are produced. 
In this paper, we develop a regional spatially adaptive total variation model. Initially, the spatial information is extracted based on each pixel, and then two filtering processes are added to suppress the effect of pseudoedges. In addition, the spatial information weight is constructed and classified with k-means clustering, and the regularization strength in each region is controlled by the clustering center value. 
The experimental results, on both simulated and real datasets, show that the proposed approach can effectively reduce the pseudoedges of the total variation regularization in the flat regions, and maintain the partial smoothness of the high-resolution image. 
More importantly, compared with the traditional pixel-based spatial information adaptive approach, the proposed region-based spatial information adaptive total variation model can better avoid the effect of noise on the spatial information extraction, and maintains robustness with changes in the noise intensity in the super-resolution process.
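The cluster-then-assign idea can be sketched with a tiny 1-D k-means over per-pixel spatial-information values. This is an illustrative stand-in for the paper's scheme; the inverse mapping from cluster center to regularization strength is an assumption for the sketch:

```python
import numpy as np

def regional_strengths(spatial_info, k=2, iters=20):
    # 1-D k-means over the per-pixel spatial-information values; every pixel
    # then receives a regularization strength driven by its cluster center
    # (here simply inversely proportional: flat regions -> stronger smoothing).
    flat = spatial_info.ravel().astype(float)
    centers = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        labels = np.abs(flat[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = flat[labels == j].mean()
    strength = 1.0 / (1.0 + centers)
    return strength[labels].reshape(spatial_info.shape)
```

Because the strength is taken from the cluster center rather than from each pixel's own value, noise in the spatial-information extraction is averaged out within each region, which is the robustness argument made above.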


Reversible Data Hiding With Optimal Value Transfer 
In reversible data hiding techniques, the values of host data are modified according to particular rules, and the original host content can be perfectly restored after extraction of the hidden data on the receiver side. In this paper, the optimal rule of value modification under a payload-distortion criterion is found by using an iterative procedure, and a practical reversible data hiding scheme is proposed. 
The secret data, as well as the auxiliary information used for content recovery, are carried by the differences between the original pixel values and the corresponding values estimated from the neighbors. Here, the estimation errors are modified according to the optimal value transfer rule. 
Also, the host image is divided into a number of pixel subsets, and the auxiliary information of a subset is always embedded into the estimation errors of the next subset. A receiver can successfully extract the embedded secret data and recover the original content in the subsets in inverse order. In this way, good reversible data hiding performance is achieved.
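To make the "carry data in estimation errors, restore exactly" mechanism concrete, here is classic prediction-error expansion, a well-known simplified stand-in for the paper's optimal value-transfer rule (the paper's actual rule is derived iteratively and differs from this):

```python
def embed_bit(pixel, prediction, bit):
    # Prediction-error expansion: the error between the pixel and its
    # neighbor-based prediction is doubled and one payload bit is appended.
    error = pixel - prediction
    return prediction + 2 * error + bit

def extract_bit(marked, prediction):
    # Recover the payload bit and restore the original pixel exactly.
    expanded = marked - prediction
    bit = expanded & 1
    return prediction + (expanded >> 1), bit
```

Because the receiver can recompute the same prediction from the (already restored) neighbors, extraction inverts embedding exactly, which is the reversibility property the abstract relies on.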


Reversible Watermarking Based on Invariant Image Classification and Dynamic Histogram Shifting 
In this paper, we propose a new reversible watermarking scheme. A first contribution is a histogram-shifting modulation that adaptively takes care of the local specificities of the image content. By applying it to the image prediction errors and by considering their immediate neighborhood, the proposed scheme inserts data in textured areas where other methods fail to do so. 
Furthermore, our scheme makes use of a classification process for identifying parts of the image that can be watermarked with the most suited reversible modulation. This classification is based on a reference image derived from the image itself, a prediction of it, which has the property of being invariant to the watermark insertion. 
In that way, the watermark embedder and extractor remain synchronized for message extraction and image reconstruction. The experiments conducted so far, on natural images and on medical images from different modalities, show that for capacities smaller than 0.4 bpp, our method can insert more data with lower distortion than existing schemes. For the same capacity, we achieve a peak signal-to-noise ratio (PSNR) about 1-2 dB greater than with the scheme of Hwang et al., currently the most efficient approach.


Rich Intrinsic Image Decomposition of Outdoor Scenes from Multiple Views
Intrinsic images aim at separating an image into reflectance and illumination layers to facilitate analysis or manipulation. 
Most successful methods rely on user indications [Bousseau et al. 2009], on precise geometry, or require multiple images from the same viewpoint under varying lighting to solve this severely ill-posed problem. 
We propose a method to estimate intrinsic images from multiple views of an outdoor scene at a single time of day without the need for precise geometry and with only a simple manual calibration step.


Robust Face Recognition for Uncontrolled Pose and Illumination Changes
Face recognition has made significant advances in the last decade, but robust commercial applications are still lacking. Current authentication/identification applications are limited to controlled settings, e.g., limited pose and illumination changes, with the user usually aware of being screened and collaborating in the process. 
To address the challenges that arise when these restrictions are loosened, this paper proposes a novel framework for real-world face recognition in uncontrolled settings, named Face Analysis for Commercial Entities (FACE). Its robustness comes from normalization (“correction”) strategies that address pose and illumination variations. 
In addition, two separate image quality indices quantitatively assess pose and illumination changes for each biometric query before it is submitted to the classifier. Samples of poor quality may be discarded, sent for manual classification, or, when possible, trigger a new capture. After this filtering step, template similarity for matching purposes is measured using a localized version of the image correlation index. 
Finally, FACE adopts reliability indices, which estimate the “acceptability” of the final identification decision made by the classifier. Experimental results show that the accuracy of FACE (in terms of recognition rate) compares favorably, and in some cases by significant margins, against popular face recognition methods. In particular, FACE is compared against SVM, incremental SVM, principal component analysis, incremental LDA, ICA, and hierarchical multiscale local binary pattern. 
Testing exploits data from different data sets: CelebrityDB, Labeled Faces in the Wild, SCface, and FERET. The face images used present variations in pose, expression, illumination, image quality, and resolution. 
Our experiments show the benefits of using image quality and reliability indices to enhance overall accuracy, on the one hand, and to provide individualized processing of biometric probes for better decision making, on the other. 
Both kinds of indices, owing to the way they are defined, can be easily integrated within different frameworks and off-the-shelf biometric applications for the following: 1) data fusion; 2) online identity management; and 3) interoperability. The results obtained by FACE witness a significant increase in accuracy when compared with the results produced by the other algorithms considered.


Robust Hashing for Image Authentication Using Zernike Moments and Local Features 
A robust hashing method is developed for detecting image forgery including removal, insertion, and replacement of objects, and abnormal color modification, and for locating the forged area. Both global and local features are used in forming the hash sequence. The global features are based on Zernike moments representing luminance and chrominance characteristics of the image as a whole. 
The local features include position and texture information of salient regions in the image. Secret keys are introduced in feature extraction and hash construction. While being robust against content-preserving image processing, the hash is sensitive to malicious tampering and, therefore, applicable to image authentication. 
The hash of a test image is compared with that of a reference image. When the hash distance is greater than a threshold τ1 and less than τ2, the received image is judged as a fake. By decomposing the hashes, the type of image forgery and location of forged areas can be determined. Probability of collision between hashes of different images approaches zero. Experimental results are presented to show effectiveness of the method.
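The two-threshold decision described above can be written out directly. The labels and threshold values below are illustrative; the paper additionally decomposes the hashes to classify the forgery type:

```python
def authenticate(hash_distance, t1, t2):
    # Two-threshold decision on the distance between the hashes of the
    # test and reference images (t1 < t2; the values are illustrative).
    if hash_distance <= t1:
        return "authentic"        # at most content-preserving processing
    if hash_distance < t2:
        return "fake"             # tampered: decompose hashes to locate forgery
    return "different image"      # too dissimilar to be the same content
```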


Scene Text Detection via Connected Component Clustering and Nontext Filtering 
In this paper, we present a new scene text detection algorithm based on two machine learning classifiers: one allows us to generate candidate word regions and the other filters out nontext ones. To be precise, we extract connected components (CCs) in images by using the maximally stable extremal region algorithm. 
These extracted CCs are partitioned into clusters so that we can generate candidate regions. Unlike conventional methods relying on heuristic rules in clustering, we train an AdaBoost classifier that determines the adjacency relationship and cluster CCs by using their pairwise relations. 
Then we normalize candidate word regions and determine whether each region contains text or not. Since the scale, skew, and color of each candidate can be estimated from CCs, we develop a text/nontext classifier for normalized images. This classifier is based on multilayer perceptrons and we can control recall and precision rates with a single free parameter. 
Finally, we extend our approach to exploit multichannel information. Experimental results on the ICDAR 2005 and 2011 robust reading competition datasets show that our method yields state-of-the-art performance in both speed and accuracy.


Secure Watermarking for Multimedia Content Protection: A Review of its Benefits and Open Issues 
The paper illustrates recent results regarding secure watermarking to the signal processing community, highlighting both benefits and still open issues. Secure signal processing, by which we indicate a set of techniques able to process sensitive signals that have been obfuscated either by encryption or by other privacy-preserving primitives, may offer valuable solutions to these issues. 
More specifically, the adoption of efficient methods for watermark embedding or detection on data that have been secured in some way, which we name in short secure watermarking, provides an elegant way to solve the security concerns of fingerprinting applications.






Monday, July 1, 2013

NS2 Project Titles, NS2 Project Abstracts, NS2 IEEE Project Abstracts, NS2 Projects abstracts for CSE IT MCA, Download NS2 Titles, Download NS2 Project Abstracts, Download IEEE NS2 Abstracts

NS2 PROJECT - ABSTRACTS
A Rank Correlation Based Detection against Distributed Reflection DoS Attacks 
DDoS has posed a serious threat to the Internet since its inception: many controlled hosts flood the victim site with massive numbers of packets. 
Moreover, in Distributed Reflection DoS (DRDoS), attackers fool innocent servers (reflectors) into flooding the victim with packets. However, most current DRDoS detection mechanisms are tied to specific protocols and cannot be used for unknown ones. 
It is found that, because they are stimulated by the same attacking flow, the responsive flows from reflectors have inherent relations: the packet rate of one converged responsive flow may have a linear relationship with that of another. Based on this observation, the Rank Correlation based Detection (RCD) algorithm is proposed. 
Preliminary simulations indicate that RCD can differentiate reflection flows from legitimate ones efficiently and effectively, and can thus serve as a usable indicator for DRDoS.
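The rank correlation at the heart of RCD is Spearman's rho, sketched here over sampled packet-rate series (tie handling omitted for brevity; this is an illustration of the statistic, not the paper's full detector):

```python
import numpy as np

def spearman_rho(x, y):
    # Rank both series, then take the Pearson correlation of the ranks.
    # Responsive flows driven by the same attack flow rank-correlate
    # strongly; independent legitimate flows do not.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))
```

A rho near 1 between two converged flows arriving at the victim is the signature of reflection traffic driven by a common attack source.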


A Resource Allocation Scheme for Scalable Video Multicast in WiMAX Relay Networks
This paper proposes the first resource allocation scheme in the literature to support scalable-video multicast for WiMAX relay networks. 
We prove that when the available bandwidth is limited, the bandwidth allocation problems of 1) maximizing network throughput and 2) maximizing the number of satisfied users are NP-hard. To find the near-optimal solutions to this type of maximization problem in polynomial time, this study first proposes a greedy weighted algorithm, GWA, for bandwidth allocation. By incorporating table-consulting mechanisms, the proposed GWA can intelligently avoid redundant bandwidth allocation and thus accomplish high network performance (such as high network throughput or large number of satisfied users). 
To maintain the high performance gained by GWA and simultaneously improve its worst case performance, this study extends GWA to a bounded version, BGWA, which guarantees that its performance gains are lower bounded. 
This study shows that the computational complexity of BGWA is also polynomial and proves that BGWA can provide at least 1/ρ times the performance of the optimal solution, where ρ is a finite value no less than one. Finally, simulation results show that the proposed BGWA bandwidth allocation scheme can effectively achieve different performance objectives with different parameter settings.
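The flavor of greedy weighted allocation under a bandwidth budget can be sketched as a knapsack-style pass. This is a generic simplification, not the paper's GWA (which adds table-consulting to avoid redundant allocation):

```python
def greedy_allocate(requests, budget):
    # requests: (user, bandwidth_needed, utility) triples. Serve users in
    # order of utility per unit of bandwidth while the budget allows.
    order = sorted(requests, key=lambda r: r[2] / r[1], reverse=True)
    served, used = [], 0
    for user, bw, _util in order:
        if used + bw <= budget:
            served.append(user)
            used += bw
    return served, used
```

Swapping the utility for "1 per satisfied user" turns the same loop toward the second objective (maximizing the number of satisfied users), mirroring how one algorithm can target both NP-hard objectives.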


Adaptive Position Update for Geographic Routing in Mobile Ad Hoc Networks
In geographic routing, nodes need to maintain up-to-date positions of their immediate neighbors for making effective forwarding decisions. Periodic broadcasting of beacon packets that contain the geographic location coordinates of the nodes is a popular method used by most geographic routing protocols to maintain neighbor positions. 
We contend and demonstrate that periodic beaconing regardless of the node mobility and traffic patterns in the network is not attractive from both update cost and routing performance points of view. We propose the Adaptive Position Update (APU) strategy for geographic routing, which dynamically adjusts the frequency of position updates based on the mobility dynamics of the nodes and the forwarding patterns in the network. 
APU is based on two simple principles: 1) nodes whose movements are harder to predict update their positions more frequently (and vice versa), and 2) nodes closer to forwarding paths update their positions more frequently (and vice versa). 
Our theoretical analysis, which is validated by NS2 simulations of a well-known geographic routing protocol, Greedy Perimeter Stateless Routing Protocol (GPSR), shows that APU can significantly reduce the update cost and improve the routing performance in terms of packet delivery ratio and average end-to-end delay in comparison with periodic beaconing and other recently proposed updating schemes. 
The benefits of APU are further confirmed by undertaking evaluations in realistic network scenarios, which account for localization error, realistic radio propagation, and sparse network.
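The first APU principle (beacon only when neighbors' predictions drift) can be sketched as a mobility-prediction check. The linear extrapolation and function shape are illustrative assumptions, not APU's exact rule:

```python
import math

def needs_beacon(last_pos, last_vel, last_time, cur_pos, cur_time, threshold):
    # Neighbors extrapolate our position linearly from the last beacon; a
    # new beacon is sent only when the extrapolation error exceeds the
    # threshold, so predictably moving nodes beacon less often.
    dt = cur_time - last_time
    pred = (last_pos[0] + last_vel[0] * dt, last_pos[1] + last_vel[1] * dt)
    error = math.hypot(cur_pos[0] - pred[0], cur_pos[1] - pred[1])
    return error > threshold
```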


ALERT: An Anonymous Location-Based Efficient Routing Protocol in MANETs 
Mobile Ad Hoc Networks (MANETs) use anonymous routing protocols that hide node identities and/or routes from outside observers in order to provide anonymity protection. However, existing anonymous routing protocols, which rely on either hop-by-hop encryption or redundant traffic, either generate high cost or cannot provide full anonymity protection to data sources, destinations, and routes. 
The high cost exacerbates the inherent resource constraint problem in MANETs especially in multimedia wireless applications. To offer high anonymity protection at a low cost, we propose an Anonymous Location-based Efficient Routing proTocol (ALERT). ALERT dynamically partitions the network field into zones and randomly chooses nodes in zones as intermediate relay nodes, which form a nontraceable anonymous route. In addition, it hides the data initiator/receiver among many initiators/receivers to strengthen source and destination anonymity protection. 
Thus, ALERT offers anonymity protection to sources, destinations, and routes. It also has strategies to effectively counter intersection and timing attacks. We theoretically analyze ALERT in terms of anonymity and efficiency. 
Experimental results exhibit consistency with the theoretical analysis, and show that ALERT achieves better route anonymity protection and lower cost compared to other anonymous routing protocols. Also, ALERT achieves comparable routing efficiency to the GPSR geographical routing protocol.


An Efficient and Robust Addressing Protocol for Node Autoconfiguration in Ad Hoc Networks
Address assignment is a key challenge in ad hoc networks due to the lack of infrastructure. Autonomous addressing protocols require a distributed and self-managed mechanism to avoid address collisions in a dynamic network with fading channels, frequent partitions, and joining/leaving nodes. 
We propose and analyze a lightweight protocol that configures mobile ad hoc nodes based on a distributed address database stored in filters that reduces the control load and makes the proposal robust to packet losses and network partitions. 
We evaluate the performance of our protocol, considering joining nodes, partition merging events, and network initialization. Simulation results show that our protocol resolves all the address collisions and also reduces the control traffic when compared to previously proposed protocols.


Analysis of Distance-Based Location Management in Wireless Communication Networks 
The performance of dynamic distance-based location management schemes (DBLMS) in wireless communication networks is analyzed. A Markov chain is developed as a mobility model to describe the movement of a mobile terminal in 2D cellular structures. The paging area residence time is characterized for arbitrary cell residence time by using the Markov chain. The expected number of paging area boundary crossings and the cost of the distance-based location update method are analyzed by using the classical renewal theory for two different call handling models. 
For the call plus location update model, two cases are considered. In the first case, the intercall time has an arbitrary distribution and the cell residence time has an exponential distribution. In the second case, the intercall time has a hyper-Erlang distribution and the cell residence time has an arbitrary distribution. 
For the call without location update model, both the intercall time and the cell residence time can have arbitrary distributions. Our analysis makes it possible to find the optimal distance threshold that minimizes the total cost of location management in a DBLMS.


Back-Pressure-Based Packet-by-Packet Adaptive Routing in Communication Networks 
Back-pressure-based adaptive routing algorithms where each packet is routed along a possibly different path have been extensively studied in the literature. However, such algorithms typically result in poor delay performance and involve high implementation complexity. In this paper, we develop a new adaptive routing algorithm built upon the widely studied back-pressure algorithm. 
We decouple the routing and scheduling components of the algorithm by designing a probabilistic routing table that is used to route packets to per-destination queues. The scheduling decisions in the case of wireless networks are made using counters called shadow queues. 
The results are also extended to the case of networks that employ simple forms of network coding. In that case, our algorithm provides a low-complexity solution to optimally exploit the routing-coding tradeoff.
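The classic back-pressure rule that the paper builds on selects, per packet, the neighbor with the largest positive per-destination backlog differential. A minimal sketch (data layout and names are illustrative; the paper's contribution replaces this per-packet rule with a probabilistic routing table plus shadow queues):

```python
def backpressure_next_hop(own_backlog, neighbor_backlogs, dest):
    # own_backlog / neighbor_backlogs: per-destination queue lengths.
    # Forward to the neighbor with the largest positive backlog
    # differential for the packet's destination.
    best, best_diff = None, 0
    for neighbor, backlog in neighbor_backlogs.items():
        diff = own_backlog[dest] - backlog.get(dest, 0)
        if diff > best_diff:
            best, best_diff = neighbor, diff
    return best   # None means: hold the packet (no positive differential)
```

The delay problem the paper targets is visible here: with small backlogs the differentials vanish and packets linger, which motivates decoupling routing from scheduling.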


BAHG: Back-Bone-Assisted Hop Greedy Routing for VANET's City Environments 
Using advanced wireless local area network technologies, vehicular ad hoc networks (VANETs) have become viable and valuable for their wide variety of novel applications, such as road safety, multimedia content sharing, commerce on wheels, etc. Multihop information dissemination in VANETs is constrained by the high mobility of vehicles and the frequent disconnections. 
Currently, geographic routing protocols are widely adopted for VANETs as they do not require route construction and route maintenance phases. Again, with connectivity awareness, they perform well in terms of reliable delivery. To obtain destination position, some protocols use flooding, which can be detrimental in city environments. 
Further, in the case of sparse and void regions, frequent use of the recovery strategy elevates hop count. Some geographic routing protocols adopt the minimum weighted algorithm based on distance or connectivity to select intermediate intersections. However, the shortest path or the path with higher connectivity may include numerous intermediate intersections. 
As a result, these protocols yield routing paths with higher hop count. In this paper, we propose a hop greedy routing scheme that yields a routing path with the minimum number of intermediate intersection nodes while taking connectivity into consideration. Moreover, we introduce back-bone nodes that play a key role in providing connectivity status around an intersection. 
Apart from this, by tracking the movement of source as well as destination, the back-bone nodes enable a packet to be forwarded in the changed direction. Simulation results signify the benefits of the proposed routing strategy in terms of high packet delivery ratio and shorter end-to-end delay.


Capacity of Hybrid Wireless Mesh Networks with Random APs 
In conventional Wireless Mesh Networks (WMNs), multihop relays are performed in the backbone, which comprises interconnected Mesh Routers (MRs), and this causes capacity degradation. 
This paper proposes a hybrid WMN architecture in which the backbone can utilize random connections to Access Points (APs) of Wireless Local Area Networks (WLANs). In such a hierarchical architecture, capacity enhancement can be achieved by letting traffic take advantage of the wired connections through APs. 
Theoretical analysis has been conducted for the asymptotic capacity of this three-tier hybrid WMN: per-MR capacity in the backbone is first derived, and per-MC capacity is then obtained. The analytical results reveal that, besides depending on the number of MR cells as in a conventional WMN, the asymptotic capacity of a hybrid WMN is also strongly affected by the number of cells having AP connections, the ratio of access-link bandwidth to backbone-link bandwidth, etc. 
Appropriate configuration of the network can drastically improve the network capacity in our proposed network architecture. It also shows that the traffic balance among the MRs with AP access is very important to have a tighter asymptotic capacity bound. The results and conclusions justify the perspective of having such a hybrid WMN utilizing widely deployed WLANs.


Channel Allocation and Routing in Hybrid Multichannel Multiradio Wireless Mesh Networks 
Many efforts have been devoted to maximizing network throughput in a multichannel multiradio wireless mesh network. Most current solutions are based on either purely static or purely dynamic channel allocation approaches. 
In this paper, we propose a hybrid multichannel multiradio wireless mesh networking architecture, where each mesh node has both static and dynamic interfaces. We first present an Adaptive Dynamic Channel Allocation protocol (ADCA), which considers optimization for both throughput and delay in the channel assignment. 
In addition, we also propose an Interference and Congestion Aware Routing protocol (ICAR) in the hybrid network with both static and dynamic links, which balances the channel usage in the network. 
Our simulation results show that compared to previous works, ADCA reduces the packet delay considerably without degrading the network throughput. The hybrid architecture shows much better adaptivity to changing traffic than purely static architecture without dramatic increase in overhead, and achieves lower delay than existing approaches for hybrid networks.


Coloring-Based Inter-WBAN Scheduling for Mobile Wireless Body Area Networks 
In this study, random incomplete coloring (RIC), with low time complexity and high spatial reuse, is proposed to overcome interference between wireless body area networks (WBANs), which can cause serious throughput degradation and energy waste. Interference-avoidance scheduling of wireless networks can be modeled as a graph coloring problem. 
For instance, high spatial-reuse scheduling for a dense sensor network is mapped to high spatial-reuse coloring; fast convergence scheduling for a mobile ad hoc network (MANET) is mapped to low time-complexity coloring. 
However, for a dense and mobile WBAN, inter-WBAN scheduling (IWS) should simultaneously satisfy both of the following requirements: 1) high spatial-reuse and 2) fast convergence, which are tradeoffs in conventional coloring. By relaxing the coloring rule, the proposed distributed coloring algorithm RIC avoids this tradeoff and satisfies both requirements. 
Simulation results verify that the proposed coloring algorithm effectively overcomes inter-WBAN interference and invariably supports higher system throughput in various mobile WBAN scenarios compared to conventional colorings.
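For contrast with RIC's relaxed rule, conventional interference-avoidance scheduling maps to strict greedy graph coloring, sketched below (nodes are WBANs, edges are interference relations; this is the textbook baseline, not RIC itself):

```python
def greedy_coloring(adjacency):
    # adjacency: node -> set of interfering nodes. Each node takes the
    # smallest color (time/frequency slot) unused by its colored neighbors.
    colors = {}
    for node in adjacency:
        used = {colors[n] for n in adjacency[node] if n in colors}
        color = 0
        while color in used:
            color += 1
        colors[node] = color
    return colors
```

RIC relaxes exactly this rule, allowing some color conflicts to remain, which is what lets it converge fast on mobile topologies while keeping spatial reuse high.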


Cross-Layer Design of Congestion Control and Power Control in Fast-Fading Wireless Networks
Abstract
We study the cross-layer design of congestion control and power allocation with an outage constraint in interference-limited multihop wireless networks. Using a complete-convexification method, we first propose a message-passing distributed algorithm that can attain the globally optimal source rate and link power allocation. Despite its optimality, this algorithm requires a larger message size than the conventional scheme, which increases network overhead. 
Using bounds on the outage probability, we map the outage constraint to an SIR constraint and develop a practical near-optimal distributed algorithm that requires only local SIR measurement at link receivers to limit the message size. However, due to the complicated complete-convexification method, the congestion control of both algorithms no longer preserves the existing TCP stack. 
To preserve the TCP stack, we propose a third algorithm that uses a successive convex approximation method to iteratively transform the original nonconvex problem into approximated convex problems, whose globally optimal solution can then be reached distributively with message passing. Thanks to the tightness of the bounds and the successive approximations, numerical results show that the gap between the three algorithms is almost indistinguishable. 
Although both rely on the complete-convexification method, the numerical comparison shows that the second, near-optimal scheme converges faster than the first, optimal one, which makes the near-optimal scheme more favorable and applicable in practice. Meanwhile, the third scheme also converges faster than a previous approach based on a logarithmic successive approximation method.


DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency in Wireless Mobile Networks
ABSTRACT:
This paper proposes distributed cache invalidation mechanism (DCIM), a client-based cache consistency scheme that is implemented on top of a previously proposed architecture for caching data items in mobile ad hoc networks (MANETs), namely COACS, where special nodes cache the queries and the addresses of the nodes that store the responses to these queries. 
We have also previously proposed a server-based consistency scheme, named SSUM, whereas in this paper we introduce DCIM, which is totally client-based. DCIM is a pull-based algorithm that implements adaptive time to live (TTL), piggybacking, and prefetching, and provides near-strong consistency capabilities. 
Cached data items are assigned adaptive TTL values that correspond to their update rates at the data source, where items with expired TTL values are grouped in validation requests to the data source to refresh them, whereas unexpired ones but with high request rates are prefetched from the server. 
In this paper, DCIM is analyzed to assess the delay and bandwidth gains (or costs) compared to polling every time and to push-based schemes. DCIM was also implemented in ns2 and compared against client-based and server-based schemes to assess its performance experimentally. The consistency ratio, delay, and overhead traffic are reported versus several variables, and DCIM is shown to be superior to the other systems.
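The adaptive-TTL idea (TTL values that track each item's update rate at the source) can be sketched with an exponentially weighted moving average. The smoothing factor and function shape are illustrative assumptions, not DCIM's exact estimator:

```python
def adaptive_ttl(update_intervals, alpha=0.5):
    # Estimate a cached item's TTL from the update intervals observed at
    # the data source. Frequently updated items get short TTLs (and are
    # validated sooner); stable items get long TTLs.
    ttl = float(update_intervals[0])
    for interval in update_intervals[1:]:
        ttl = alpha * interval + (1.0 - alpha) * ttl
    return ttl
```

Items whose TTL expires are then batched into validation requests, while unexpired, frequently requested items are prefetched, as described above.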


Delay Optimal Broadcast for Multihop Wireless Networks Using Self-Interference Cancellation
Conventional wireless broadcast protocols rely heavily on the 802.11-based CSMA/CA model, which avoids interference and collision by conservative scheduling of transmissions. While CSMA/CA is amenable to multiple concurrent unicasts, it tends to degrade broadcast performance significantly, especially in lossy and large-scale networks. 
In this paper, we propose a new protocol called Chorus that improves the efficiency and scalability of broadcast service with a MAC/PHY layer that allows packet collisions. Chorus is built upon the observation that packets carrying the same data can be effectively detected and decoded, even when they overlap with each other and have comparable signal strengths. 
It resolves collision using symbol-level interference cancellation, and then combines the resolved symbols to restore the packet. Such a collision-tolerant mechanism significantly improves the transmission diversity and spatial reuse in wireless broadcast. Chorus' MAC-layer cognitive sensing and scheduling scheme further facilitates the realization of such an advantage, resulting in an asymptotic broadcast delay that is proportional to the network radius. 
We evaluate Chorus' PHY-layer collision resolution mechanism with symbol-level simulation, and validate its network-level performance via ns-2, in comparison with a typical CSMA/CA-based broadcast protocol. Our evaluation validates Chorus' superior performance with respect to scalability, reliability, delay, etc., under a broad range of network scenarios (e.g., single/multiple broadcast sessions, static/mobile topologies).


Detection and Localization of Multiple Spoofing Attackers in Wireless Networks 
Wireless spoofing attacks are easy to launch and can significantly impact the performance of networks. Although the identity of a node can be verified through cryptographic authentication, conventional security approaches are not always desirable because of their overhead requirements. 
In this paper, we propose to use spatial information, a physical property associated with each node that is hard to falsify and not reliant on cryptography, as the basis for 1) detecting spoofing attacks; 2) determining the number of attackers when multiple adversaries masquerade as the same node identity; and 3) localizing multiple adversaries. We propose to use the spatial correlation of received signal strength (RSS) inherited from wireless nodes to detect the spoofing attacks. 
We then formulate the problem of determining the number of attackers as a multiclass detection problem. Cluster-based mechanisms are developed to determine the number of attackers. When the training data are available, we explore using the Support Vector Machines (SVM) method to further improve the accuracy of determining the number of attackers. 
In addition, we developed an integrated detection and localization system that can localize the positions of multiple attackers. We evaluated our techniques through two testbeds using both an 802.11 (WiFi) network and an 802.15.4 (ZigBee) network in two real office buildings. 
Our experimental results show that our proposed methods can achieve over 90 percent Hit Rate and Precision when determining the number of attackers. Our localization results using a representative set of algorithms provide strong evidence of high accuracy of localizing multiple adversaries.
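A toy version of the RSS-clustering step might look as follows, assuming a simple 1-D 2-means and an illustrative separation threshold (the actual system clusters multi-dimensional RSS vectors collected at several landmarks):

```python
def spoofing_report(rss, threshold=10.0, iters=20):
    """RSS samples all claiming one node identity are split by 2-means;
    widely separated cluster centres suggest two distinct physical
    transmitters. The 10 dB threshold is an illustrative assumption."""
    c1, c2 = min(rss), max(rss)           # initial centres
    for _ in range(iters):
        g1 = [x for x in rss if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in rss if abs(x - c1) > abs(x - c2)]
        if g1:
            c1 = sum(g1) / len(g1)
        if g2:
            c2 = sum(g2) / len(g2)
    spoofed = abs(c1 - c2) > threshold
    return spoofed, (c1, c2)
```

A single transmitter yields one tight RSS cluster, so the two centres collapse together; an attacker transmitting from a different location pulls them apart.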


Discovery and Verification of Neighbor Positions in Mobile Ad Hoc Networks 
A growing number of ad hoc networking protocols and location-aware services require that mobile nodes learn the position of their neighbors. However, such a process can be easily abused or disrupted by adversarial nodes. 
In absence of a priori trusted nodes, the discovery and verification of neighbor positions presents challenges that have been scarcely investigated in the literature. In this paper, we address this open issue by proposing a fully distributed cooperative solution that is robust against independent and colluding adversaries, and can be impaired only by an overwhelming presence of adversaries. 
Results show that our protocol can thwart more than 99 percent of the attacks under the best possible conditions for the adversaries, with minimal false positive rates.


Distance Bounding: A Practical Security Solution for Real-Time Location Systems 
The need for implementing adequate security services in industrial applications is increasing. Verifying the physical proximity or location of a device has become an important security service in ad-hoc wireless environments. 
Distance-bounding is a prominent secure neighbor detection method that cryptographically determines an upper bound for the physical distance between two communicating parties based on the round-trip time of cryptographic challenge-response pairs. 
This paper gives a brief overview of distance-bounding protocols and discusses the possibility of implementing such protocols within industrial RFID and real-time location applications, which requires an emphasis on aspects such as reliability and real-time communication. 
The practical resource requirements and performance tradeoffs involved are illustrated using a sample of distance-bounding proposals, and some remaining research challenges with regards to practical implementation are discussed.
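The core distance bound itself is one line; a minimal sketch, assuming the prover's processing delay is known or upper-bounded:

```python
C = 299_792_458.0  # speed of light, m/s

def distance_upper_bound(t_rtt_s, t_proc_s):
    """Upper-bound the prover's distance from a challenge-response
    round-trip time after subtracting the processing delay:
    d <= c * (t_rtt - t_proc) / 2."""
    return C * (t_rtt_s - t_proc_s) / 2.0
```

Because a response cannot travel faster than light, a prover can make itself look farther away (by delaying) but never closer, which is exactly the guarantee real-time location systems need.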


Distributed Cooperative Caching in Social Wireless Networks 
This paper introduces cooperative caching policies for minimizing electronic content provisioning cost in Social Wireless Networks (SWNET). SWNETs are formed by mobile devices, such as data enabled phones, electronic book readers etc., sharing common interests in electronic content, and physically gathering together in public places. 
Electronic object caching in such SWNETs is shown to reduce the content provisioning cost, which depends heavily on the service and pricing relationships among various stakeholders, including content providers (CP), network service providers, and End Consumers (EC). 
Drawing motivation from Amazon's Kindle electronic book delivery business, this paper develops practical network, service, and pricing models which are then used for creating two object caching strategies for minimizing content provisioning costs in networks with homogenous and heterogeneous object demands. 
The paper constructs analytical and simulation models for analyzing the proposed caching strategies in the presence of selfish users that deviate from network-wide cost-optimal policies. It also reports results from an Android phone-based prototype SWNET, validating the presented analytical and simulation results.


EAACK—A Secure Intrusion-Detection System for MANETs 
The migration from wired to wireless networks has been a global trend over the past few decades, as the mobility and scalability of wireless networks enable many new applications. Among contemporary wireless networks, the Mobile Ad hoc NETwork (MANET) is one of the most important and unique. In contrast to traditional network architectures, MANET does not require a fixed network infrastructure; every single node works as both a transmitter and a receiver. 
Nodes communicate directly with each other when they are both within the same communication range. Otherwise, they rely on their neighbors to relay messages. The self-configuring ability of nodes in MANET made it popular among critical mission applications like military use or emergency recovery. 
However, the open medium and wide distribution of nodes make MANET vulnerable to malicious attackers. It is therefore crucial to develop efficient intrusion-detection mechanisms to protect MANET from attacks. With improvements in technology and reductions in hardware costs, we are witnessing a trend of expanding MANETs into industrial applications. 
To adjust to this trend, we strongly believe it is vital to address the potential security issues. In this paper, we propose and implement a new intrusion-detection system named Enhanced Adaptive ACKnowledgment (EAACK), specially designed for MANETs. Compared to contemporary approaches, EAACK demonstrates higher malicious-behavior-detection rates in certain circumstances while not greatly affecting network performance.


Efficient Algorithms for Neighbor Discovery in Wireless Networks 
Neighbor discovery is an important first step in the initialization of a wireless ad hoc network. In this paper, we design and analyze several algorithms for neighbor discovery in wireless networks. Starting with a single-hop wireless network of n nodes, we propose a Θ(nlnn) ALOHA-like neighbor discovery algorithm when nodes cannot detect collisions, and an order-optimal Θ(n) receiver feedback-based algorithm when nodes can detect collisions. Our algorithms neither require nodes to have a priori estimates of the number of neighbors nor synchronization between nodes. 
Our algorithms allow nodes to begin execution at different time instants and to terminate neighbor discovery upon discovering all their neighbors. We finally show that receiver feedback can be used to achieve a Θ(n) running time, even when nodes cannot detect collisions. 
We then analyze neighbor discovery in a general multihop setting. We establish an upper bound of O(Δlnn) on the running time of the ALOHA-like algorithm, where Δ denotes the maximum node degree in the network and n the total number of nodes. 
We also establish a lower bound of Ω(Δ+lnn) on the running time of any randomized neighbor discovery algorithm. Our result thus implies that the ALOHA-like algorithm is at most a factor min(Δ,lnn) worse than optimal.


Enhanced OLSR for defense against DOS attack in ad hoc networks 
A mobile ad hoc network (MANET) is a network designed for special applications in which it is difficult to deploy a backbone network. In MANETs, applications mostly involve sensitive and secret information. Since MANET assumes a trusted environment for routing, security is a major issue. 
In this paper we analyze the vulnerabilities of a pro-active routing protocol called optimized link state routing (OLSR) against a specific type of denial-of-service (DOS) attack called node isolation attack. Analyzing the attack, we propose a mechanism called enhanced OLSR (EOLSR) protocol which is a trust based technique to secure the OLSR nodes against the attack. 
Our technique is capable of finding whether a node is advertising correct topology information or not by verifying its Hello packets, thus detecting node isolation attacks. 
The experimental results show that our protocol is able to achieve routing security with a 45% increase in packet delivery ratio and a 44% reduction in packet loss rate when compared to standard OLSR under node isolation attack. Our technique is lightweight because it does not involve high computational complexity for securing the network.


Exploiting Ubiquitous Data Collection for Mobile Users in Wireless Sensor Networks 
We study the ubiquitous data collection for mobile users in wireless sensor networks. People with handheld devices can easily interact with the network and collect data. We propose a novel approach for mobile users to collect the network-wide data. 
The routing structure for data collection is incrementally updated as the mobile user moves. With this approach, only limited modifications are needed to update the routing structure, while routing performance remains bounded and controlled relative to the optimal. 
The proposed protocol is easy to implement. Our analysis shows that the proposed approach is scalable in maintenance overheads, performs efficiently in the routing performance, and provides continuous data delivery during the user movement. 
We implement the proposed protocol in a prototype system and test its feasibility and applicability by a 49-node testbed. We further conduct extensive simulations to examine the efficiency and scalability of our protocol with varied network settings.


Harvesting-Aware Energy Management for Time-Critical Wireless Sensor Networks With Joint Voltage and Modulation Scaling 
As Cyber-Physical Systems (CPSs) evolve, they will be increasingly relied on to support time-critical and performance-intensive monitoring and control activities. Further, many CPSs that utilize Wireless Sensor Networking (WSN) technologies will require energy harvesting methods to extend their lifetimes. 
For this application class, there are currently few algorithmic techniques that combine performance sensitive processing and communication with efficient management techniques for energy harvesting. Our paper addresses this problem. We first propose a general purpose, multihop WSN architecture capable of supporting time-critical CPS systems using energy harvesting. We then present a set of Harvesting Aware Speed Selection (HASS) algorithms. 
Our technique maximizes the minimum energy reserve for all the nodes in the network, thus ensuring highly resilient performance under emergency or fault-driven situations. We present an optimal centralized solution, along with an efficient, distributed solution. 
We propose a CPS-specific experimental methodology, enabling us to evaluate our approach. Our experiments show that our algorithms yield significantly higher energy reserves than baseline methods.


In-Network Estimation with Delay Constraints in Wireless Sensor Networks 
Wireless sensor networks (WSNs) offer an attractive and promising way to close the loop between cyberspace and physical processes in future control systems. For some real-time control applications, controllers need to accurately estimate the process state within rigid delay constraints. In this paper, we propose a novel in-network estimation approach for state estimation with delay constraints in multihop WSNs. 
For accurately estimating a process state as well as satisfying rigid delay constraints, we address the problem through jointly designing in-network estimation operations and an aggregation scheduling algorithm. 
Our in-network estimation operation, performed at relays, not only optimally fuses the estimates obtained from different sensors but also predicts the upstream sensors' estimates that cannot be aggregated to the sink before their deadlines. 
Our estimate aggregation scheduling algorithm, which is interference free, aggregates as much estimate information as possible from the network to the sink within the delay constraints. We prove the unbiasedness of the in-network estimation and theoretically analyze the optimality of our approach. 
Our simulation results corroborate our theoretical results and show that our in-network estimation approach can obtain significant estimation accuracy gain under different network settings.
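The optimal fusion step at a relay can be sketched as inverse-variance weighting of independent unbiased estimates, a standard construction (the paper's operation additionally includes prediction of missing upstream estimates, omitted here):

```python
def fuse(estimates):
    """Minimum-variance unbiased fusion of independent unbiased estimates
    given as (value, variance) pairs: weight each by 1/variance. A relay
    in the aggregation tree can apply this to its children's estimates."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * x for w, (x, _) in zip(weights, estimates)) / total
    return value, 1.0 / total  # fused value and its (smaller) variance
```

The fused variance 1/Σ(1/vᵢ) is strictly below every input variance, which is why aggregating more estimates before the deadline directly improves accuracy.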


Mobile Relay Configuration in Data-Intensive Wireless Sensor Networks 
Wireless Sensor Networks (WSNs) are increasingly used in data-intensive applications such as microclimate monitoring, precision agriculture, and audio/video surveillance. A key challenge faced by data-intensive WSNs is to transmit all the data generated within an application's lifetime to the base station despite the fact that sensor nodes have limited power supplies. 
We propose using low-cost disposable mobile relays to reduce the energy consumption of data-intensive WSNs. Our approach differs from previous work in two main aspects. 
First, it does not require complex motion planning of mobile nodes, so it can be implemented on a number of low-cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility and wireless transmissions into a holistic optimization framework. Our framework consists of three main algorithms. The first algorithm computes an optimal routing tree assuming no nodes can move. 
The second algorithm improves the topology of the routing tree by greedily adding new nodes exploiting mobility of the newly added nodes. The third algorithm improves the routing tree by relocating its nodes without changing its topology. This iterative algorithm converges on the optimal position for each node given the constraint that the routing tree topology does not change. 
We present efficient distributed implementations for each algorithm that require only limited, localized synchronization. Because we do not necessarily compute an optimal topology, our final routing tree is not necessarily optimal. However, our simulation results show that our algorithms significantly outperform the best existing solutions.


Model-Based Analysis of Wireless System Architectures for Real-Time Applications 
We propose a model-based description and analysis framework for the design of wireless system architectures. Its aim is to address the shortcomings of existing approaches to system verification and the tracking of anomalies in safety-critical wireless systems. We use Architecture Analysis and Description Language (AADL) to describe an analysis-oriented architecture model with highly modular components. 
We also develop the cooperative tool chains required to analyze the performance of a wireless system by simulation. We show how this framework can support a detailed and largely automated analysis of a complicated, networked wireless system using examples from wireless healthcare and video broadcasting.


Network Traffic Classification Using Correlation Information 
Traffic classification has wide applications in network management, from security monitoring to quality of service measurements. Recent research tends to apply machine learning techniques to flow statistical feature based classification methods. The nearest neighbor (NN)-based method has exhibited superior classification performance. 
It also has several important advantages, such as no training procedure, no risk of overfitting parameters, and a natural ability to handle a huge number of classes. However, the performance of the NN classifier can be severely affected if the size of the training data is small. 
In this paper, we propose a novel nonparametric approach for traffic classification, which can improve the classification performance effectively by incorporating correlated information into the classification process. We analyze the new classification approach and its performance benefit from both theoretical and empirical perspectives. 
A large number of experiments are carried out on two real-world traffic data sets to validate the proposed approach. The results show that the traffic classification performance can be improved significantly even under the extremely difficult circumstance of very few training samples.
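The gain from incorporating correlation can be illustrated by classifying the mean feature vector of a bag of correlated flows rather than each flow alone. The features, labels, and the plain 1-NN rule here are illustrative stand-ins for the paper's method:

```python
def classify_bag(bag, training):
    """1-NN on the mean feature vector of a bag of correlated flows
    (flows assumed to come from the same application), instead of
    classifying each flow independently."""
    dim = len(bag[0])
    mean = [sum(flow[i] for flow in bag) / len(bag) for i in range(dim)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # nearest labelled training vector to the bag's mean
    return min(training, key=lambda tv: dist2(tv[0], mean))[1]
```

Averaging over correlated flows suppresses the noise of any individual flow, which is the intuition behind the improvement with few training samples.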


On Exploiting Transient Social Contact Patterns for Data Forwarding in Delay-Tolerant Networks 
Unpredictable node mobility, low node density, and lack of global information make it challenging to achieve effective data forwarding in Delay-Tolerant Networks (DTNs). Most of the current data forwarding schemes choose the nodes with the best cumulative capability of contacting others as relays to carry and forward data, but these nodes may not be the best relay choices within a short time period due to the heterogeneity of transient node contact characteristics. 
In this paper, we propose a novel approach to improve the performance of data forwarding with a short time constraint in DTNs by exploiting the transient social contact patterns. These patterns represent the transient characteristics of contact distribution, network connectivity and social community structure in DTNs, and we provide analytical formulations on these patterns based on experimental studies of realistic DTN traces. 
We then propose appropriate forwarding metrics based on these patterns to improve the effectiveness of data forwarding. When applied to various data forwarding strategies, our proposed forwarding metrics achieve much better performance compared to existing schemes with similar forwarding cost.


Opportunistic MANETs: Mobility Can Make Up for Low Transmission Power
ABSTRACT:
Opportunistic mobile ad hoc networks (MANETs) are a special class of sparse and disconnected MANETs where data communication exploits sporadic contact opportunities among nodes. We consider opportunistic MANETs where nodes move independently at random over a square of the plane. 
Nodes exchange data if they are within distance r of each other, where r is the node transmission radius. The flooding time is the number of time-steps required to broadcast a message from a source node to every node of the network. 
Flooding time is an important measure of how fast information can spread in dynamic networks. We derive the first upper bound on the flooding time, which is a decreasing function of the maximal speed of the nodes. 
The bound holds with high probability, and it is nearly tight. Our bound shows that, thanks to node mobility, even when the network is sparse and disconnected, information spreading can be fast.


Optimal multicast capacity and delay tradeoffs in MANETs: A global perspective 
In this paper, we give a global perspective of multicast capacity and delay analysis in Mobile Ad-hoc Networks (MANETs). 
Specifically, we consider two node mobility models: (1) two-dimensional i.i.d. mobility, (2) one-dimensional i.i.d. mobility. Two mobility time-scales are included in this paper: (i) Fast mobility where node mobility is at the same time-scale as data transmissions; (ii) Slow mobility where node mobility is assumed to occur at a much slower time-scale than data transmissions. 
Given a delay constraint D, we first characterize the optimal multicast capacity for each of the four resulting mobility models (two mobility patterns under each of the two time-scales), and then we develop a scheme that can achieve a capacity-delay tradeoff close to the upper bound up to a logarithmic factor. 
Our study can be further extended to two-dimensional/one-dimensional hybrid random walk fast/slow mobility models and heterogeneous networks.


Power Allocation for Statistical QoS Provisioning in Opportunistic Multi-Relay DF Cognitive Networks 
In this letter, we propose a power allocation scheme for statistical quality-of-service (QoS) provisioning in multi-relay decode-and-forward (DF) cognitive networks (CN). By considering the direct link between the source and destination, the CN first chooses the transmission mode (direct transmission or relay transmission) based on the channel state information. 
Then, according to the determined transmission mode, efficient power allocation will be performed under the given QoS requirement, the average transmit and interference power constraints as well as the peak interference constraint. 
Our proposed power allocation scheme indicates that, in order to achieve the maximum throughput, at most two relays can be involved for the transmission. Simulation results show that our proposed scheme outperforms the max-min criterion and equal power allocation policy.


Proteus: Multiflow Diversity Routing for Wireless Networks with Cooperative Transmissions
ABSTRACT:
In this paper, we consider the use of cooperative transmissions in multihop wireless networks to achieve Virtual Multiple Input Single Output (VMISO) links. Specifically, we investigate how the physical layer VMISO benefits translate into network level performance improvements. 
We show that the improvements are nontrivial (15 to 300 percent depending on the node density) but rely on two crucial algorithmic decisions: the number of cooperating transmitters for each link; and the cooperation strategy used by the transmitters. We explore the tradeoffs in making routing decisions using analytical models and derive the key routing considerations. 
Finally, we present Proteus, an adaptive diversity routing protocol that includes algorithmic solutions to the above two decision problems and leverages VMISO links in multihop wireless network to achieve performance improvements. 
We evaluate Proteus using NS2-based simulations with an enhanced physical layer model that accurately captures the effect of VMISO transmissions.


Quality-Differentiated Video Multicast in Multirate Wireless Networks 
Adaptation of modulation and transmission bit-rates for video multicast in a multirate wireless network is a challenging problem because of network dynamics, variable video bit-rates, and heterogeneous clients who may expect differentiated video qualities. 
Prior work on the leader-based schemes selects the transmission bit-rate that provides reliable transmission for the node that experiences the worst channel condition. However, this may penalize other nodes that can achieve a higher throughput by receiving at a higher rate. 
In this work, we investigate a rate-adaptive video multicast scheme that can provide heterogeneous clients differentiated visual qualities matching their channel conditions. We first propose a rate scheduling model that selects the optimal transmission bit-rate for each video frame to maximize the total visual quality for a multicast group subject to the minimum-visual-quality-guaranteed constraint. 
We then present a practical and easy-to-implement protocol, called QDM, which constructs a cluster-based structure to characterize node heterogeneity and adapts the transmission bit-rate to network dynamics based on video quality perceived by the representative cluster heads. 
Since QDM selects the rate by a sample-based technique, it is suitable for real-time streaming even without any preprocessing. We show that QDM can adapt to network dynamics and variable video bit-rates efficiently, and produce a gain of 2-5 dB in average video quality compared to the leader-based approach.


Strategies for Energy-Efficient Resource Management of Hybrid Programming Models
Many scientific applications are programmed using hybrid programming models that use both message passing and shared memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. 
Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared memory or message passing, in isolation. 
The potential solution space, and thus the challenge, increases substantially when optimizing hybrid models, since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, improved energy efficiency is increasingly needed in hybrid parallel applications on large-scale systems. 
In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. 
We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74 percent on average and up to 13.8 percent) with some performance gain (up to 7.5 percent) or negligible performance loss.
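The joint DCT/DVFS search can be sketched as a small enumeration under an assumed analytical power/time model. All constants and the model form below are illustrative assumptions, not the paper's statistical predictors:

```python
def pick_config(work, deadline, threads_opts, freq_opts,
                p_static=0.5, c_dyn=1.0, eff=0.9):
    """Enumerate (concurrency, frequency) configurations under a toy
    model: time ~ work / (threads**eff * f) (imperfect scaling) and
    power ~ threads * (p_static + c_dyn * f**3) (cubic dynamic power).
    Keep the minimum-energy configuration that meets the deadline."""
    best = None
    for th in threads_opts:
        for f in freq_opts:
            t = work / ((th ** eff) * f)
            if t > deadline:
                continue  # misses the performance constraint
            e = th * (p_static + c_dyn * f ** 3) * t
            if best is None or e < best[0]:
                best = (e, th, f, t)
    return best  # (energy, threads, freq, time) or None
```

Even this toy model shows the interaction the paper exploits: adding threads lets frequency drop, and because dynamic power grows cubically with frequency, the wide-and-slow configuration often wins on energy.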


Target Tracking and Mobile Sensor Navigation in Wireless Sensor Networks 
This work studies the problem of tracking signal-emitting mobile targets using navigated mobile sensors based on signal reception. Since the mobile target's maneuver is unknown, the mobile sensor controller utilizes the measurement collected by a wireless sensor network in terms of the mobile target signal's time of arrival (TOA). 
The mobile sensor controller acquires the TOA measurement information from both the mobile target and the mobile sensor for estimating their locations before directing the mobile sensor's movement to follow the target. We propose a min-max approximation approach to estimate the location for tracking which can be efficiently solved via semidefinite programming (SDP) relaxation, and apply a cubic function for mobile sensor navigation. 
We estimate the location of the mobile sensor and target jointly to improve the tracking accuracy. To further improve the system performance, we propose a weighted tracking algorithm by using the measurement information more efficiently. Our results demonstrate that the proposed algorithm provides good tracking performance and can quickly direct the mobile sensor to follow the mobile target.


Toward a Statistical Framework for Source Anonymity in Sensor Networks
In certain applications, the locations of events reported by a sensor network need to remain anonymous. That is, unauthorized observers must be unable to detect the origin of such events by analyzing the network traffic. Known as the source anonymity problem, this problem has emerged as an important topic in the security of wireless sensor networks, with a variety of techniques, based on different adversarial assumptions, having been proposed. 
In this work, we present a new framework for modeling, analyzing, and evaluating anonymity in sensor networks. The novelty of the proposed framework is twofold: first, it introduces the notion of "interval indistinguishability" and provides a quantitative measure to model anonymity in wireless sensor networks; second, it maps source anonymity to the statistical problem of binary hypothesis testing with nuisance parameters. We then analyze existing solutions for designing anonymous sensor networks using the proposed model. 
We show how mapping source anonymity to binary hypothesis testing with nuisance parameters leads to converting the problem of exposing private source information into searching for an appropriate data transformation that removes or minimizes the effect of the nuisance information. 
By doing so, we transform the problem from analyzing real-valued sample points to binary codes, which opens the door for coding theory to be incorporated into the study of anonymous sensor networks. Finally, we discuss how existing solutions can be modified to improve their anonymity.


Toward Privacy Preserving and Collusion Resistance in a Location Proof Updating System 
Today's location-sensitive services rely on a user's mobile device to determine its current location. This allows malicious users to access a restricted resource or provide bogus alibis by cheating on their locations. 
To address this issue, we propose A Privacy-Preserving LocAtion proof Updating System (APPLAUS) in which colocated Bluetooth enabled mobile devices mutually generate location proofs and send updates to a location proof server. Periodically changed pseudonyms are used by the mobile devices to protect source location privacy from each other, and from the untrusted location proof server. 
We also develop a user-centric location privacy model in which individual users evaluate their location privacy levels and decide whether and when to accept location proof requests. In order to defend against colluding attacks, we also present betweenness ranking-based and correlation clustering-based approaches for outlier detection. 
APPLAUS can be implemented with existing network infrastructure, and can be easily deployed in Bluetooth enabled mobile devices with little computation or power cost. Extensive experimental results show that APPLAUS can effectively provide location proofs, significantly preserve the source location privacy, and effectively detect colluding attacks.





FOR MORE ABSTRACTS, IEEE BASE PAPER / REFERENCE PAPERS AND NON IEEE PROJECT ABSTRACTS

CONTACT US
No.109, 2nd Floor, Bombay Flats, Nungambakkam High Road, Nungambakkam, Chennai - 600 034
Near Ganpat Hotel, Above IOB, Next to ICICI Bank, Opp to Cakes'n'Bakes
044-2823 5816, 98411 93224, 89393 63501
ncctchennai@gmail.com, ncctprojects@gmail.com 


EMBEDDED SYSTEM PROJECTS IN
Embedded Systems using Microcontrollers, VLSI, DSP, Matlab, Power Electronics, Power Systems, Electrical
For Embedded Projects - 044-45000083, 7418497098 
ncctchennai@gmail.com, www.ncct.in


Project Support Services
Complete Guidance | 100% Result for all Projects | On time Completion | Excellent Support | Project Completion Experience Certificate | Free Placements Services | Multi Platform Training | Real Time Experience


TO GET ABSTRACTS / PDF Base Paper / Review PPT / Other Details
Mail your requirements / SMS your requirements / Call and get the same / Directly visit our Office


WANT TO RECEIVE FREE PROJECT DVD...
Want to Receive FREE Projects Titles, List / Abstracts  / IEEE Base Papers DVD… Walk in to our Office and Collect the same Or

Send your College ID scan copy, Your Mobile No & Complete Postal Address, Mentioning you are interested to Receive DVD through Courier at Free of Cost


Own Projects
Own Projects ! or New IEEE Paper… Any Projects…
Mail your Requirements to us and Get it Done with us… or Call us / Email us / SMS us or Visit us Directly

We will do any Projects…




NS2 Project Titles, NS2 Project Abstracts, NS2 IEEE Project Abstracts, NS2 Projects abstracts for CSE IT MCA, Download NS2 Titles, Download NS2 Project Abstracts, Download IEEE NS2 Abstracts