MATLAB PROJECTS - ABSTRACTS
A Watermarking Based Medical Image Integrity Control System and an Image Moment Signature for Tampering Characterization
In this paper, we present a medical image integrity verification system to detect and approximate local malevolent image alterations (e.g., removal or addition of lesions) as well as to identify the nature of global processing the image may have undergone (e.g., lossy compression, filtering).
The proposed integrity analysis process is based on watermarking of non-significant regions, with signatures extracted from different pixel blocks of interest and compared with recomputed signatures at the verification stage. A set of three signatures is proposed.
The first two, devoted to detection and modification localization, are cryptographic hashes and checksums, while the third is derived from image moment theory. In this paper, we first show how geometric moments can be used to approximate any local modification by its nearest generalized 2D Gaussian.
We then demonstrate how ratios between original and recomputed geometric moments can be used as image features in a classifier based strategy in order to determine the nature of a global image processing.
Experimental results considering both local and global modifications in MRI and retina images illustrate the overall performance of our approach. With a pixel block signature about 200 bits long, it is possible to detect, roughly localize, and obtain an indication of the nature of the image tamper.
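To make the moment-based signature concrete: the raw geometric moments m_pq = Σ_x Σ_y x^p y^q I(x, y) of a pixel block are the quantities whose ratios the abstract refers to. A minimal Python sketch (illustrative only; block size and moment orders are assumptions, and the projects themselves are MATLAB-based):

```python
import numpy as np

def geometric_moment(block, p, q):
    """Raw geometric moment m_pq = sum_x sum_y x^p * y^q * I(x, y)."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    return float(np.sum((x ** p) * (y ** q) * block))

block = np.ones((4, 4))
m00 = geometric_moment(block, 0, 0)    # zeroth moment: total intensity
cx = geometric_moment(block, 1, 0) / m00   # x-centroid of the block
```

The zeroth moment is the block's total intensity, and m10/m00, m01/m00 give its centroid; after tampering, the ratios between original and recomputed moments shift in a way that hints at the location and shape of the modification.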
A Hybrid Multiview Stereo Algorithm for Modeling Urban Scenes
We present an original multiview stereo reconstruction algorithm which allows the 3D-modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: Irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori).
We adopt a two-step strategy consisting first in segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second in sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process.
The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout).
The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.
Adaptive Fingerprint Image Enhancement With Emphasis on Preprocessing of Data
This article proposes several improvements to an adaptive fingerprint enhancement method that is based on contextual filtering. The term adaptive implies that parameters of the method are automatically adjusted based on the input fingerprint image.
The adaptive fingerprint enhancement method comprises five processing blocks, four of which are updated in our proposed system.
Hence, the proposed overall system is novel. The four updated processing blocks are: 1) preprocessing; 2) global analysis; 3) local analysis; and 4) matched filtering. In the preprocessing and local analysis blocks, a nonlinear dynamic range adjustment method is used. In the global analysis and matched filtering blocks, different forms of order statistical filters are applied.
These processing blocks yield a new and improved adaptive fingerprint image processing method. The performance of the updated processing blocks is presented in the evaluation part of this paper. The algorithm is evaluated against the NIST-developed NBIS software for fingerprint recognition on the FVC databases.
Airborne Vehicle Detection in Dense Urban Areas Using HoG Features and Disparity Maps
Vehicle detection has been an important research field for years as there are a lot of valuable applications, ranging from support of traffic planners to real-time traffic management. Especially detection of cars in dense urban areas is of interest due to the high traffic volume and the limited space. In city areas many car-like objects (e.g., dormers) appear which might lead to confusion.
Additionally, the inaccuracy of road databases supporting the extraction process has to be handled in a proper way. This paper describes an integrated real-time processing chain which utilizes multiple occurrence of objects in images. At least two subsequent images, data of exterior orientation, a global DEM, and a road database are used as input data.
The segments of the road database are projected in the non-geocoded image using the corresponding height information from the global DEM. From amply masked road areas in both images a disparity map is calculated. This map is used to exclude elevated objects above a certain height (e.g., buildings and vegetation).
Additionally, homogeneous areas are excluded by a fast region-growing algorithm. The remaining parts of one input image are classified based on Histogram of Oriented Gradients (HoG) features. The implemented approach has been verified using image sections from two different flights and manually extracted ground-truth data from the inner city of Munich. The evaluation shows a quality of up to 70 percent.
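As a rough illustration of the HoG features used for the final classification step, here is a minimal Python sketch of the per-cell gradient-orientation histogram that HoG descriptors are built from (cell size, bin count, and normalization here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Unsigned gradient-orientation histogram for one cell (a HoG building block)."""
    gy, gx = np.gradient(cell.astype(float))               # per-axis gradients
    mag = np.hypot(gx, gy)                                 # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0           # unsigned orientation in [0, 180)
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    s = hist.sum()
    return hist / s if s > 0 else hist                     # simple sum normalization
```

A full HoG descriptor concatenates such histograms over a grid of cells with block-wise normalization; a horizontal intensity ramp, for example, puts all its mass in the 0-degree bin.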
An Optimized Wavelength Band Selection for Heavily Pigmented Iris Recognition
Commercial iris recognition systems usually acquire images of the eye in the 850-nm band of the electromagnetic spectrum. In this work, heavily pigmented iris images are captured at 12 wavelengths, from 420 to 940 nm.
The purpose is to find the most suitable wavelength band for the heavily pigmented iris recognition. A multispectral acquisition system is first designed for imaging the iris at narrow spectral bands in the range of 420-940 nm. Next, a set of 200 human black irises which correspond to the right and left eyes of 100 different subjects are acquired for an analysis.
Finally, the most suitable wavelength for heavily pigmented iris recognition is identified based on two approaches: 1) quality assurance of texture and 2) matching performance, measured by equal error rate (EER) and false rejection rate (FRR).
This result is supported by visual observations of magnified detailed local iris texture information. The experimental results suggest that there exists a most suitable wavelength band for heavily pigmented iris recognition when using a single band of wavelength as illumination.
Analysis Operator Learning and Its Application to Image Reconstruction
Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well established tool for the design of image reconstruction algorithms.
An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model depends critically on the choice of a suitable operator.
In this work, we present an algorithm for learning an analysis operator from training images. Our method is based on an $\ell_p$-norm minimization on the set of full rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds, and explain the underlying geometry of the constraints.
Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques.
Atmospheric Turbulence Mitigation Using Complex Wavelet-Based Fusion
Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying, perturbations, makes a model-based solution difficult and in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI).
In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual-tree complex wavelet transform. Finally, contrast enhancement is applied.
We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios.
Automatic Detection and Reconstruction of Building Radar Footprints From Single VHR SAR Images
The spaceborne synthetic aperture radar (SAR) systems Cosmo-SkyMed, TerraSAR-X, and TanDEM-X acquire imagery with very high spatial resolution (VHR), supporting various important application scenarios, such as damage assessment in urban areas after natural disasters. To ensure reliable, consistent, and fast extraction of information from complex SAR scenes, automatic information extraction methods are essential. Focusing on the analysis of urban areas, which is of prime interest for VHR SAR, in this paper we present a novel method for the automatic detection and 2-D reconstruction of building radar footprints from VHR SAR scenes.
Unlike most of the literature methods, the proposed approach can be applied to single images. The method is based on the extraction of a set of low-level features from the images and on their composition to more structured primitives using a production system. Then, the concept of semantic meaning of the primitives is introduced and used for both the generation of building candidates and the radar footprint reconstruction.
The semantic meaning represents the probability that a primitive belongs to a certain scattering class (e.g., double bounce, roof, facade) and has been defined in order to compensate for the lack of detectable features in single images. Indeed, it allows the selection of the most reliable primitives and footprint hypotheses on the basis of fuzzy membership grades.
The efficiency of the proposed method is demonstrated by processing a 1-m resolution TerraSAR-X spotbeam scene containing flat- and gable-roof buildings at various settings. The results show that the method has a high overall detection rate and that radar footprints are well reconstructed, in particular for medium and large buildings.
Compressive Framework for Demosaicing of Natural Images
Typical consumer digital cameras sense only one out of three color components per image pixel. The problem of demosaicing deals with interpolating those missing color components. In this paper, we present compressive demosaicing (CD), a framework for demosaicing natural images based on the theory of compressed sensing (CS).
Given sensed samples of an image, CD employs a CS solver to find the sparse representation of that image under a fixed sparsifying dictionary Ψ. In contrast to state-of-the-art CS-based demosaicing approaches, we draw a clear distinction between the interchannel (color) and interpixel correlations of natural images.
Utilizing some well-known facts about the human visual system, those two types of correlations are exploited in a nonseparable format to construct the sparsifying transform Ψ. Our simulation results verify that CD performs better (both visually and in terms of PSNR) than leading demosaicing approaches when applied to the majority of standard test images.
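For context, the sensing model that demosaicing inverts can be sketched as follows: each sensor pixel retains only one of the three color components, according to a color filter array such as the common RGGB Bayer pattern (assumed here for illustration; the paper's framework is not tied to a specific pattern):

```python
import numpy as np

def bayer_sample(rgb):
    """Keep one color component per pixel following an RGGB Bayer pattern."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic
```

Demosaicing must recover the two missing components at every pixel from this single-channel mosaic, which is why exploiting interchannel and interpixel correlations matters.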
Context-Based Hierarchical Unequal Merging for SAR Image Segmentation
This paper presents an image segmentation method named Context-based Hierarchical Unequal Merging for Synthetic aperture radar (SAR) Image Segmentation (CHUMSIS), which uses superpixels as the operation units instead of pixels.
Based on the Gestalt laws, three rules that realize a new and natural way to manage different kinds of features extracted from SAR images are proposed to represent superpixel context. The rules are prior knowledge from cognitive science and serve as top-down constraints to globally guide the superpixel merging.
The features, including brightness, texture, edges, and spatial information, locally describe the superpixels of SAR images and are bottom-up forces. While merging superpixels, a hierarchical unequal merging algorithm is designed, which includes two stages: 1) coarse merging stage and 2) fine merging stage.
The merging algorithm unequally allocates computational resources, spending less running time on unambiguous superpixels and more on ambiguous ones. Experiments on synthetic and real SAR images indicate that the algorithm strikes a balance between computation speed and segmentation accuracy. Compared with two state-of-the-art Markov random field models, CHUMSIS obtains good segmentation results while successfully reducing running time.
Discrete Wavelet Transform and Data Expansion Reduction in Homomorphic Encrypted Domain
Signal processing in the encrypted domain is a new technology with the goal of protecting valuable signals from insecure signal processing. In this paper, we propose a method for implementing the discrete wavelet transform (DWT) and multiresolution analysis (MRA) in the homomorphic encrypted domain.
We first suggest a framework for performing DWT and inverse DWT (IDWT) in the encrypted domain, then conduct an analysis of data expansion and quantization errors under the framework. To solve the problem of data expansion, which may be very important in practical applications, we present a method for reducing data expansion in the case that both DWT and IDWT are performed. With the proposed method, multilevel DWT/IDWT can be performed with less data expansion in homomorphic encrypted domain.
We propose a new signal processing procedure, where the multiplicative inverse method is employed as the last step to limit the data expansion. Taking a 2-D Haar wavelet transform as an example, we conduct a few experiments to demonstrate the advantages of our method in secure image processing.
We also provide computational complexity analyses and comparisons. To the best of our knowledge, there has been no report on the implementation of DWT and MRA in the encrypted domain.
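The data-expansion issue can be seen already in a plaintext sketch of one unnormalized 2-D Haar level: the additions and subtractions map directly onto homomorphic operations, but skipping the division by 4 (which a homomorphic scheme cannot perform directly) makes coefficient magnitudes grow at each level. A Python illustration with plaintext integers standing in for ciphertexts:

```python
import numpy as np

def haar2d_level(x):
    """One level of the unnormalized 2-D Haar transform on integers.
    Only additions/subtractions are used, mirroring what an additively
    homomorphic scheme allows; the skipped division expands the data range."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return a + b + c + d, a + b - c - d, a - b + c - d, a - b - c + d

def ihaar2d_level(ll, lh, hl, hh):
    """Exact integer inverse: each sum below is divisible by 4."""
    a = (ll + lh + hl + hh) // 4
    b = (ll + lh - hl - hh) // 4
    c = (ll - lh + hl - hh) // 4
    d = (ll - lh - hl + hh) // 4
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]), dtype=ll.dtype)
    out[0::2, 0::2], out[0::2, 1::2] = a, b
    out[1::2, 0::2], out[1::2, 1::2] = c, d
    return out
```

The round trip is exact, but the LL band's range grows by a factor of 4 per level, which is the expansion the paper's multiplicative-inverse step is designed to limit.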
Estimating Information from Image Colors: An Application to Digital Cameras and Natural Scenes
The colors present in an image of a scene provide information about its constituent elements, but the amount of information depends on the imaging conditions and on how information is calculated. This work had two aims. The first was to derive explicit estimators of the information available and the information retrieved from the color values at each point in images of a scene under different illuminations.
The second was to apply these estimators to simulations of images obtained with five sets of sensors used in digital cameras and with the cone photoreceptors of the human eye. Estimates were obtained for 50 hyperspectral images of natural scenes under daylight illuminants with correlated color temperatures 4,000, 6,500, and 25,000 K. Depending on the sensor set, the mean estimated information available across images with the largest illumination difference varied from 15.5 to 18.0 bits and the mean estimated information retrieved after optimal linear processing varied from 13.2 to 15.5 bits (each about 85 percent of the corresponding information available).
With the best sensor set, 390 percent more points could be identified per scene than with the worst. Capturing scene information from image colors depends crucially on the choice of camera sensors.
General Constructions for Threshold Multiple-Secret Visual Cryptographic Schemes
A conventional threshold (k out of n) visual secret sharing scheme encodes one secret image P into n transparencies (called shares) such that any group of k transparencies reveals P when superimposed, while any group of fewer than k reveals nothing.
We define and develop general constructions for threshold multiple-secret visual cryptographic schemes (MVCSs) that are capable of encoding s secret images P1, P2, ..., Ps into n shares such that any group of fewer than k shares obtains none of the secrets, while 1) each group of k, k+1, ..., n shares reveals P1, P2, ..., Ps, respectively, when superimposed, referred to as (k, n, s)-MVCS where s = n-k+1; or 2) each group of u shares reveals P(ru), where ru ∈ {0, 1, 2, ..., s} (ru = 0 indicates that no secret can be seen), k ≤ u ≤ n and 2 ≤ s ≤ n-k+1, referred to as (k, n, s, R)-MVCS, in which R = (rk, rk+1, ..., rn) is called the revealing list.
We adopt the skills of linear programming to model (k, n, s) - and (k, n, s, R) -MVCSs as integer linear programs which minimize the pixel expansions under all necessary constraints. The pixel expansions of different problem scales are explored, which have never been reported in the literature. Our constructions are novel and flexible. They can be easily customized to cope with various kinds of MVCSs.
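The stacking mechanism underlying all of these schemes is easiest to see in the simplest (2, 2) case, where each secret pixel is expanded into two subpixels per share, i.e., a pixel expansion of 2 (the quantity the paper's integer programs minimize). A Python sketch:

```python
import random

def encode_pixel(is_black):
    """(2, 2) visual secret sharing: one secret pixel -> two subpixels per share (1 = dark)."""
    pattern = random.choice([(0, 1), (1, 0)])                 # random base pattern
    other = tuple(1 - v for v in pattern) if is_black else pattern
    return pattern, other

def stack(s1, s2):
    """Superimposing printed transparencies acts as a bitwise OR of dark subpixels."""
    return tuple(a | b for a, b in zip(s1, s2))
```

A black secret pixel yields complementary patterns that stack to two dark subpixels; a white pixel yields identical patterns that leave one subpixel clear. Each share alone is a uniformly random pattern and reveals nothing.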
General Framework for Histogram-Shifting-Based Reversible Data Hiding
Histogram shifting (HS) is a useful technique for reversible data hiding (RDH). With HS-based RDH, high capacity and low distortion can be achieved efficiently. In this paper, we revisit the HS technique and present a general framework for constructing HS-based RDH. With the proposed framework, one can obtain an RDH algorithm by simply designing the so-called shifting and embedding functions.
Moreover, by taking specific shifting and embedding functions, we show that several RDH algorithms reported in the literature are special cases of this general construction. In addition, two novel and efficient RDH algorithms are also introduced to further demonstrate the universality and applicability of our framework.
It is expected that more efficient RDH algorithms can be devised according to the proposed framework by carefully designing the shifting and embedding functions.
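A minimal Python sketch of the classic shifting-and-embedding step that the framework generalizes (peak/zero-bin selection is simplified here, and real schemes also transmit the peak and zero values and handle overflow; none of this is the paper's exact construction):

```python
import numpy as np

def hs_embed(img, bits):
    """Histogram-shifting RDH sketch: embed bits into pixels at the histogram peak.
    Assumes the peak is below 255 and an empty bin exists to its right."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist))
    zero = peak + 1 + int(np.argmin(hist[peak + 1:]))   # least-populated bin right of peak
    out = img.copy().astype(int)
    out[(out > peak) & (out < zero)] += 1               # shift to free the bin peak+1
    carriers = np.flatnonzero(img.ravel() == peak)      # payload-carrying pixels
    flat = out.ravel()
    for pos, bit in zip(carriers, bits):
        flat[pos] = peak + bit                          # bit 0 -> peak, bit 1 -> peak+1
    return flat.reshape(img.shape), peak, zero
```

Extraction reads pixels valued peak as bit 0 and peak+1 as bit 1, then shifts the histogram back by one, restoring the original image exactly; reversibility is why the empty bin is needed.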
Hyperspectral Imagery Restoration Using Nonlocal Spectral-Spatial Structured Sparse Representation With Noise Estimation
Noise reduction is an active research area in image processing due to its importance in improving image quality for object detection and classification. In this paper, we develop a sparse-representation-based noise reduction method for hyperspectral imagery, based on the assumption that the non-noise component of an observed signal can be sparsely decomposed over a redundant dictionary, while the noise component cannot.
The main contribution of the paper is the introduction of nonlocal similarity and the spectral-spatial structure of hyperspectral imagery into sparse representation. Non-locality refers to the self-similarity of an image, by which a whole image can be partitioned into groups of similar patches. The similar patches in each group are sparsely represented with a shared subset of atoms in a dictionary, making true signal and noise easier to separate.
Sparse representation with spectral-spatial structure can exploit spectral and spatial joint correlations of hyperspectral imagery by using 3-D blocks instead of 2-D patches for sparse coding, which also makes true signal and noise more distinguished. Moreover, hyperspectral imagery has both signal-independent and signal-dependent noises, so a mixed Poisson and Gaussian noise model is used.
To make the sparse representation insensitive to the varying noise distributions in different blocks, a variance-stabilizing transformation (VST) is used to make their variances comparable. The advantages of the proposed methods are validated on both synthetic and real hyperspectral remote sensing data sets.
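For reference, the most common VST for Poisson-dominated noise is the Anscombe transform, sketched below; the paper's mixed Poisson-Gaussian model uses a generalized variant of this idea:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson counts to approximately unit variance."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (an exact unbiased inverse differs slightly)."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0
```

After the forward transform, a denoiser designed for roughly constant noise variance can be applied uniformly, and the inverse maps the result back to the original intensity scale.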
Image Size Invariant Visual Cryptography for General Access Structures Subject to Display Quality Constraints
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design.
This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach.
We begin by formulating a mathematical model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous methods.
Interactive Segmentation for Change Detection in Multispectral Remote-Sensing Images
In this letter, we propose to solve the change detection (CD) problem in multitemporal remote-sensing images using interactive segmentation methods. The user needs to input markers related to change and no-change classes in the difference image.
Then, the pixels under these markers are used by a support vector machine classifier to generate a spectral-change map. To further enhance the result, we include spatial contextual information in the decision process using two different solutions based on Markov random fields and level-set methods.
While the former is a region-driven method, the latter exploits both region and contour for performing the segmentation task. Experiments conducted on a set of four real remote-sensing images acquired by low as well as very high spatial resolution sensors and referring to different kinds of changes confirm the attractive capabilities of the proposed methods in generating accurate CD maps with simple and minimal interaction.
Intra-and-Inter-Constraint-Based Video Enhancement Based on Piecewise Tone Mapping
Video enhancement plays an important role in various video applications. In this paper, we propose a new intra-and-inter-constraint-based video enhancement approach aiming to: 1) achieve high intraframe quality of the entire picture where multiple regions-of-interest (ROIs) can be adaptively and simultaneously enhanced, and 2) guarantee the interframe quality consistencies among video frames.
We first analyze features from different ROIs and create a piecewise tone mapping curve for the entire frame such that the intraframe quality can be enhanced. We further introduce new interframe constraints to improve the temporal quality consistency.
Experimental results show that the proposed algorithm clearly outperforms state-of-the-art algorithms.
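A piecewise tone-mapping curve of the kind described can be applied with simple linear interpolation between control knots; the knot values below are purely illustrative, not the adaptively derived curve of the paper:

```python
import numpy as np

def piecewise_tone_map(img, knots_in, knots_out):
    """Apply a piecewise-linear tone curve defined by (input, output) knots."""
    return np.interp(img, knots_in, knots_out)

# Illustrative curve: lift midtones while pinning the black and white points.
curve_in, curve_out = [0, 128, 255], [0, 180, 255]
```

In the paper's setting, the knots would be chosen per frame from the analyzed ROI features, and the interframe constraints keep consecutive curves from drifting apart.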
Latent Fingerprint Matching Using Descriptor-Based Hough Transform
Identifying suspects based on impressions of fingers lifted from crime scenes (latent prints) is a routine procedure that is extremely important to forensics and law enforcement agencies. Latents are partial fingerprints that are usually smudgy, with small area and large distortion.
Due to these characteristics, latents have a significantly smaller number of minutiae points compared to full (rolled or plain) fingerprints. The small number of minutiae and the noise characteristic of latents make it extremely difficult to automatically match latents to their mated full prints that are stored in law enforcement databases. Although a number of algorithms for matching full-to-full fingerprints have been published in the literature, they do not perform well on the latent-to-full matching problem.
Further, they often rely on features that are not easy to extract from poor quality latents. In this paper, we propose a new fingerprint matching algorithm which is especially designed for matching latents. The proposed algorithm uses a robust alignment algorithm (descriptor-based Hough transform) to align fingerprints and measures similarity between fingerprints by considering both minutiae and orientation field information.
To be consistent with the common practice in latent matching (i.e., only minutiae are marked by latent examiners), the orientation field is reconstructed from minutiae. Since the proposed algorithm relies only on manually marked minutiae, it can be easily used in law enforcement applications.
Experimental results on two different latent databases (NIST SD27 and WVU latent databases) show that the proposed algorithm outperforms two well optimized commercial fingerprint matchers. Further, a fusion of the proposed algorithm and commercial fingerprint matchers leads to improved matching accuracy.
LDFT-Based Watermarking Resilient to Local Desynchronization Attacks
Designing a watermarking scheme that is robust against desynchronization attacks (DAs) remains a grand challenge. Most image watermarking resynchronization schemes in the literature can survive individual global DAs (e.g., rotation, scaling, translation, and other affine transforms), but few are resilient to challenging cropping and local DAs. The main reason is that the robust features used for watermark synchronization are only globally, rather than locally, invariant.
In this paper, we present a blind image watermarking resynchronization scheme that is robust against local transform attacks. First, we propose a new feature transform named local daisy feature transform (LDFT), which is not only globally but also locally invariant. Then, a binary space partitioning (BSP) tree is used to partition the geometrically invariant LDFT space. In the BSP tree, the location of each pixel is fixed under global transform, local transform, and cropping.
Lastly, the watermark sequence is embedded bit by bit into each leaf node of the BSP tree using the logarithmic quantization index modulation (QIM) embedding method. Simulation results show that the proposed watermarking scheme can survive numerous kinds of distortions, including common image-processing attacks, local and global DAs, and noninvertible cropping.
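For illustration, plain (non-logarithmic) quantization index modulation embeds one bit per coefficient by snapping it to one of two interleaved lattices; the logarithmic variant used in the paper applies the same idea in a log-mapped domain. A Python sketch with an assumed quantization step:

```python
import numpy as np

def qim_embed(x, bit, step=4.0):
    """Quantize x onto the lattice selected by the bit (offset by step/2 for bit 1)."""
    offset = 0.0 if bit == 0 else step / 2.0
    return step * np.round((x - offset) / step) + offset

def qim_extract(y, step=4.0):
    """Decode by checking which lattice y lies closer to."""
    d0 = abs(y - qim_embed(y, 0, step))
    d1 = abs(y - qim_embed(y, 1, step))
    return 0 if d0 <= d1 else 1
```

The step size trades embedding distortion against robustness: larger steps survive stronger noise at the cost of larger changes to the host coefficients.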
Linear Distance Coding for Image Classification
The feature coding-pooling framework performs well in image classification tasks because it can generate discriminative and robust image representations. However, the unavoidable information loss incurred by feature quantization in the coding process, and the undesired dependence of pooling on the image spatial layout, may severely limit classification performance.
In this paper, we propose a linear distance coding (LDC) method to capture the discriminative information lost in traditional coding methods while simultaneously alleviating the dependence of pooling on the image spatial layout. The core of the LDC lies in transforming local features of an image into more discriminative distance vectors, where the robust image-to-class distance is employed.
These distance vectors are further encoded into sparse codes to capture the salient features of the image. The LDC is theoretically and experimentally shown to be complementary to the traditional coding methods, and thus their combination can achieve higher classification accuracy.
We demonstrate the effectiveness of LDC on six data sets, two of each of three types (specific object, scene, and general object), i.e., Flower 102 and PFID 61, Scene 15 and Indoor 67, Caltech 101 and Caltech 256. The results show that our method generally outperforms the traditional coding methods, and achieves or is comparable to the state-of-the-art performance on these data sets.
Local Directional Number Pattern for Face Analysis: Face and Expression Recognition
This paper proposes a novel local feature descriptor, local directional number pattern (LDN), for face analysis, i.e., face and expression recognition. LDN encodes the directional information of the face's textures (i.e., the texture's structure) in a compact way, producing a more discriminative code than current methods.
We compute the structure of each micro-pattern with the aid of a compass mask that extracts directional information, and we encode such information using the prominent direction indices (directional numbers) and sign, which allows us to distinguish among similar structural patterns that have different intensity transitions.
We divide the face into several regions, and extract the distribution of the LDN features from them. Then, we concatenate these features into a feature vector, and we use it as a face descriptor. We perform several experiments in which our descriptor performs consistently under illumination, noise, expression, and time lapse variations.
Moreover, we test our descriptor with different masks to analyze its performance in different face analysis tasks.
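A common concrete choice of compass mask is the Kirsch operator; below is a Python sketch of LDN-style coding of a single 3x3 patch with Kirsch masks (the exact masks and bit packing in the paper may differ, so treat this as an assumed variant):

```python
import numpy as np

# Kirsch east mask; the other seven directions are 45-degree rotations of it.
KIRSCH_E = np.array([[-3, -3, 5],
                     [-3,  0, 5],
                     [-3, -3, 5]])

def kirsch_masks():
    """Eight compass masks obtained by rotating the border ring of values."""
    masks, m = [], KIRSCH_E.copy()
    for _ in range(8):
        masks.append(m.copy())
        ring = [m[0, 0], m[0, 1], m[0, 2], m[1, 2],
                m[2, 2], m[2, 1], m[2, 0], m[1, 0]]
        ring = ring[-1:] + ring[:-1]                    # rotate by one position (45 deg)
        (m[0, 0], m[0, 1], m[0, 2], m[1, 2],
         m[2, 2], m[2, 1], m[2, 0], m[1, 0]) = ring
    return masks

def ldn_code(patch):
    """Directional-number code of a 3x3 patch: indices of the strongest positive
    and strongest negative compass responses packed into one 6-bit value."""
    r = np.array([float((m * patch).sum()) for m in kirsch_masks()])
    return int(np.argmax(r)) * 8 + int(np.argmin(r))
```

Histograms of these codes over face regions, concatenated region by region, form the descriptor the abstract describes.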
Noise Reduction Based on Partial-Reference, Dual-Tree Complex Wavelet Transform Shrinkage
This paper presents a novel way to reduce noise introduced or exacerbated by image enhancement methods, in particular, though not exclusively, algorithms based on the random spray sampling technique. Owing to the nature of sprays, the output images of spray-based methods tend to exhibit noise with an unknown statistical distribution.
To avoid inappropriate assumptions about the statistical characteristics of the noise, a different assumption is made: the non-enhanced image is considered to be either free of noise or affected by non-perceivable levels of noise. Taking advantage of the higher sensitivity of the human visual system to changes in brightness, the analysis can be limited to the luma channel of both the non-enhanced and enhanced images.
Also, given the importance of directional content in human vision, the analysis is performed through the dual-tree complex wavelet transform (DT-CWT). Unlike the discrete wavelet transform, the DT-CWT allows data directionality to be distinguished in the transform space. For each level of the transform, the standard deviation of the non-enhanced image coefficients is computed across the six orientations of the DT-CWT and then normalized.
The result is a map of the directional structures present in the non-enhanced image. Said map is then used to shrink the coefficients of the enhanced image. The shrunk coefficients and the coefficients from the non-enhanced image are then mixed according to data directionality. Finally, a noise-reduced version of the enhanced image is computed via the inverse transforms. A thorough numerical analysis of the results has been performed in order to confirm the validity of the proposed approach.
Query-Adaptive Image Search With Hash Codes
Scalable image search based on visual similarity has been an active topic of research in recent years. State-of-the-art solutions often use hashing methods to embed high-dimensional image features into Hamming space, where search can be performed in real-time based on Hamming distance of compact hash codes.
Unlike traditional metrics (e.g., Euclidean distance) that offer continuous distances, Hamming distances are discrete integer values. As a consequence, a large number of images often share equal Hamming distances to a query, which severely degrades search results when fine-grained ranking is important.
This paper introduces an approach that enables query-adaptive ranking of returned images with equal Hamming distances to the query. This is achieved by first learning, offline, bitwise weights of the hash codes for a diverse set of predefined semantic concept classes.
We formulate the weight learning process as a quadratic programming problem that minimizes intra-class distance while preserving inter-class relationship captured by original raw image features. Query-adaptive weights are then computed online by evaluating the proximity between a query and the semantic concept classes.
With the query-adaptive bitwise weights, returned images can be easily ordered by weighted Hamming distance at a finer-grained hash code level rather than the original Hamming distance level. Experiments on a Flickr image dataset show clear improvements from our proposed approach.
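The tie-breaking idea above can be sketched in a few lines: with per-bit weights, two codes at the same plain Hamming distance from the query receive different weighted distances. The weight values below are illustrative; the paper learns them offline by quadratic programming per semantic concept class and combines them online per query.

```python
import numpy as np

def weighted_hamming(query_code, db_codes, bit_weights):
    """Rank database hash codes by weighted Hamming distance.

    query_code:  (n_bits,) array of 0/1
    db_codes:    (n_images, n_bits) array of 0/1
    bit_weights: (n_bits,) query-adaptive weights (illustrative values)
    """
    diff = db_codes != query_code   # per-bit disagreement with the query
    return diff @ bit_weights       # weighted Hamming distance per image

q = np.array([1, 0, 1, 1])
db = np.array([[1, 0, 1, 0],       # differs from q in bit 3
               [0, 0, 1, 1]])      # differs from q in bit 0
w = np.array([0.5, 0.1, 0.1, 0.3])
d = weighted_hamming(q, db, w)
# both images have plain Hamming distance 1 to q, but the weighted
# distances (0.3 vs 0.5) break the tie and yield a finer ranking
```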
Regional Spatially Adaptive Total Variation Super-Resolution with Spatial Information Filtering and Clustering
Total variation is a popular and effective image prior model in regularization-based image processing. However, because the total variation model favors a piecewise constant solution, the processing result in flat regions of the image is often poor under high noise intensity, and pseudoedges are produced.
In this paper, we develop a regional spatially adaptive total variation model. Initially, the spatial information is extracted based on each pixel, and then two filtering processes are added to suppress the effect of pseudoedges. In addition, the spatial information weight is constructed and classified with k-means clustering, and the regularization strength in each region is controlled by the clustering center value.
The experimental results, on both simulated and real datasets, show that the proposed approach can effectively reduce the pseudoedges of the total variation regularization in the flat regions, and maintain the partial smoothness of the high-resolution image.
More importantly, compared with the traditional pixel-based spatial information adaptive approach, the proposed region-based spatial information adaptive total variation model can better avoid the effect of noise on the spatial information extraction, and maintains robustness with changes in the noise intensity in the super-resolution process.
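The region-based weighting step can be sketched as follows. This is a toy stand-in, not the paper's method: local variance serves as the spatial-information measure, a plain 1-D k-means partitions it into regions, and the mapping from cluster centre to regularization strength is an assumption for illustration.

```python
import numpy as np

def region_adaptive_weights(image, k=2, n_iter=10):
    """Per-pixel regularization weights from clustered spatial information.

    Local variance in a 3x3 window stands in for the paper's filtered
    spatial-information measure (assumption); the cluster centre value
    controls the strength assigned to each region.
    """
    # Local variance as the spatial-information measure (3x3 window).
    pad = np.pad(image, 1, mode='edge')
    win = np.stack([pad[i:i + image.shape[0], j:j + image.shape[1]]
                    for i in range(3) for j in range(3)])
    info = win.var(axis=0).ravel()

    # Plain 1-D k-means on the spatial-information values.
    centres = np.linspace(info.min(), info.max(), k)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(info[:, None] - centres[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = info[labels == c].mean()

    # Flat regions (low-variance cluster) get strong TV regularization,
    # textured regions a weaker one, controlled by the cluster centre.
    strength = 1.0 / (1.0 + centres[labels])
    return strength.reshape(image.shape)
```

Feeding these weights into a TV solver would then regularize flat regions more strongly than textured ones, suppressing pseudoedges.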
Reversible Data Hiding With Optimal Value Transfer
In reversible data hiding techniques, the values of the host data are modified according to particular rules, and the original host content can be perfectly restored after extraction of the hidden data on the receiver side. In this paper, the optimal rule of value modification under a payload-distortion criterion is found by an iterative procedure, and a practical reversible data hiding scheme is proposed.
The secret data, as well as the auxiliary information used for content recovery, are carried by the differences between the original pixel-values and the corresponding values estimated from the neighbors. Here, the estimation errors are modified according to the optimal value transfer rule.
Also, the host image is divided into a number of pixel subsets, and the auxiliary information of a subset is always embedded into the estimation errors of the next subset. A receiver can extract the embedded secret data and recover the original content by processing the subsets in reverse order. In this way, good reversible data hiding performance is achieved.
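A toy version of prediction-error embedding illustrates the reversibility: errors equal to 0 carry one payload bit each, positive errors are shifted by 1 to free the bin, and the decoder recovers both payload and original pixels exactly. This is a classic histogram-shifting sketch, not the paper's optimal value transfer rule.

```python
def hs_embed(pixels, bits):
    """Embed bits into prediction errors (left-neighbour predictor).
    Toy histogram-shifting sketch; payload is padded with zeros if
    there are more zero errors than bits.
    """
    marked = [pixels[0]]                 # first pixel kept intact
    it = iter(bits)
    for i in range(1, len(pixels)):
        e = pixels[i] - pixels[i - 1]    # error against the ORIGINAL neighbour
        if e == 0:
            e = next(it, 0)              # zero bin carries one payload bit
        elif e > 0:
            e += 1                       # shift positive errors to free bin 1
        marked.append(pixels[i - 1] + e)
    return marked

def hs_extract(marked):
    """Recover payload bits and the original pixels, left to right."""
    orig, bits = [marked[0]], []
    for i in range(1, len(marked)):
        e = marked[i] - orig[i - 1]      # neighbour already restored
        if e in (0, 1):
            bits.append(e)
            e = 0
        elif e > 1:
            e -= 1
        orig.append(orig[i - 1] + e)
    return orig, bits
```

Because each error is computed against the already-restored neighbour, extraction proceeds strictly left to right, mirroring the paper's subset-by-subset reverse-order recovery.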
Reversible Watermarking Based on Invariant Image Classification and Dynamic Histogram Shifting
In this paper, we propose a new reversible watermarking scheme. One first contribution is a histogram shifting modulation which adaptively takes care of the local specificities of the image content. By applying it to the image prediction-errors and by considering their immediate neighborhood, the scheme we propose inserts data in textured areas where other methods fail to do so.
Furthermore, our scheme makes use of a classification process for identifying parts of the image that can be watermarked with the most suited reversible modulation. This classification is based on a reference image derived from the image itself, a prediction of it, which has the property of being invariant to the watermark insertion.
In that way, the watermark embedder and extractor remain synchronized for message extraction and image reconstruction. The experiments conducted so far, on some natural images and on medical images from different modalities, show that for capacities smaller than 0.4 bpp, our method can insert more data with lower distortion than existing schemes. For the same capacity, we achieve a peak signal-to-noise ratio (PSNR) about 1-2 dB greater than with the scheme of Hwang et al., currently the most efficient approach.
Rich Intrinsic Image Decomposition of Outdoor Scenes from Multiple Views
Intrinsic images aim at separating an image into reflectance and illumination layers to facilitate analysis or manipulation.
Most successful methods rely on user indications [Bousseau et al. 2009], precise geometry, or need multiple images from the same viewpoint and varying lighting to solve this severely ill-posed problem.
We propose a method to estimate intrinsic images from multiple views of an outdoor scene at a single time of day without the need for precise geometry and with only a simple manual calibration step.
Robust Face Recognition for Uncontrolled Pose and Illumination Changes
Face recognition has made significant advances in the last decade, but robust commercial applications are still lacking. Current authentication/identification applications are limited to controlled settings, e.g., limited pose and illumination changes, with the user usually aware of being screened and collaborating in the process.
To address the challenges arising from looser restrictions, this paper proposes a novel framework for real-world face recognition in uncontrolled settings, named Face Analysis for Commercial Entities (FACE). Its robustness comes from normalization (“correction”) strategies that address pose and illumination variations.
In addition, two separate image quality indices quantitatively assess pose and illumination changes for each biometric query before it is submitted to the classifier. Poor-quality samples may be discarded, routed to manual classification, or, when possible, trigger a new capture. After this filtering, template similarity for matching is measured using a localized version of the image correlation index.
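One plausible reading of a "localized image correlation index" can be sketched as follows: cut both images into non-overlapping patches, compute Pearson correlation per patch, and average the scores. The patch size and the aggregation by mean are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def localized_correlation(a, b, patch=4):
    """Patch-wise normalized correlation between two equal-size images.

    a, b: 2-D float arrays. Returns the mean per-patch Pearson
    correlation (constant patches score 1.0 by convention here).
    """
    h, w = a.shape
    scores = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            pa = a[i:i + patch, j:j + patch].ravel()
            pb = b[i:i + patch, j:j + patch].ravel()
            pa = pa - pa.mean()
            pb = pb - pb.mean()
            denom = np.linalg.norm(pa) * np.linalg.norm(pb)
            scores.append(pa @ pb / denom if denom > 0 else 1.0)
    return float(np.mean(scores))
```

Localizing the correlation makes the score sensitive to where two faces differ, rather than letting one region dominate a single global correlation.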
Finally, FACE adopts reliability indices, which estimate the “acceptability” of the final identification decision made by the classifier. Experimental results show that the accuracy of FACE (in terms of recognition rate) compares favorably, and in some cases by significant margins, against popular face recognition methods. In particular, FACE is compared against SVM, incremental SVM, principal component analysis, incremental LDA, ICA, and hierarchical multiscale local binary pattern.
Testing exploits data from different data sets: CelebrityDB, Labeled Faces in the Wild, SCface, and FERET. The face images used present variations in pose, expression, illumination, image quality, and resolution.
Our experiments show the benefits of using image quality and reliability indices to enhance overall accuracy, on the one hand, and to provide individualized processing of biometric probes for better decision making, on the other.
Both kinds of indices, owing to the way they are defined, can be easily integrated within different frameworks and off-the-shelf biometric applications for the following: 1) data fusion; 2) online identity management; and 3) interoperability. The results obtained by FACE witness a significant increase in accuracy when compared with the results produced by the other algorithms considered.
Robust Hashing for Image Authentication Using Zernike Moments and Local Features
A robust hashing method is developed for detecting image forgery including removal, insertion, and replacement of objects, and abnormal color modification, and for locating the forged area. Both global and local features are used in forming the hash sequence. The global features are based on Zernike moments representing luminance and chrominance characteristics of the image as a whole.
The local features include position and texture information of salient regions in the image. Secret keys are introduced in feature extraction and hash construction. While being robust against content-preserving image processing, the hash is sensitive to malicious tampering and, therefore, applicable to image authentication.
The hash of a test image is compared with that of a reference image. When the hash distance is greater than a threshold τ1 and less than τ2, the received image is judged as a fake. By decomposing the hashes, the type of image forgery and location of forged areas can be determined. Probability of collision between hashes of different images approaches zero. Experimental results are presented to show effectiveness of the method.
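The two-threshold decision rule described above amounts to a three-way classification of the hash distance. The threshold values in the usage example are illustrative, not taken from the paper.

```python
def authenticate(hash_distance, tau1, tau2):
    """Three-way decision from the abstract's rule:
    distance <= tau1        -> similar (authentic up to
                               content-preserving processing)
    tau1 < distance < tau2  -> forged (tampered copy)
    distance >= tau2        -> different image altogether
    """
    if hash_distance <= tau1:
        return "similar"
    if hash_distance < tau2:
        return "forged"
    return "different image"

# Illustrative thresholds (assumed, not from the paper):
print(authenticate(20, 10, 50))   # a mid-range distance flags a fake
```

In the "forged" case, decomposing the hash into its Zernike-moment and salient-region parts then localizes and characterizes the tampering.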
Scene Text Detection via Connected Component Clustering and Nontext Filtering
In this paper, we present a new scene text detection algorithm based on two machine learning classifiers: one allows us to generate candidate word regions and the other filters out nontext ones. To be precise, we extract connected components (CCs) in images by using the maximally stable extremal region algorithm.
These extracted CCs are partitioned into clusters so that we can generate candidate regions. Unlike conventional methods that rely on heuristic rules for clustering, we train an AdaBoost classifier that determines the adjacency relationship and clusters CCs using their pairwise relations.
Then we normalize candidate word regions and determine whether each region contains text or not. Since the scale, skew, and color of each candidate can be estimated from CCs, we develop a text/nontext classifier for normalized images. This classifier is based on multilayer perceptrons and we can control recall and precision rates with a single free parameter.
Finally, we extend our approach to exploit multichannel information. Experimental results on ICDAR 2005 and 2011 robust reading competition datasets show that our method yields the state-of-the-art performance both in speed and accuracy.
Secure Watermarking for Multimedia Content Protection: A Review of its Benefits and Open Issues
The paper illustrates recent results on secure watermarking to the signal processing community, highlighting both benefits and still-open issues. Secure signal processing, by which we indicate a set of techniques able to process sensitive signals that have been obfuscated either by encryption or by other privacy-preserving primitives, may offer valuable solutions to these issues.
More specifically, the adoption of efficient methods for watermark embedding or detection on data that have been secured in some way, which we name in short secure watermarking, provides an elegant way to solve the security concerns of fingerprinting applications.
CONTACT US
TO GET ABSTRACTS / PDF Base Paper / Review PPT / Other Details
Mail your requirements / SMS your requirements / Call and get the same / Directly visit our Office
WANT TO RECEIVE FREE PROJECT DVD...
Want to Receive FREE Projects Titles, List / Abstracts / IEEE Base Papers DVD… Walk in to our Office and Collect the same Or
Send your College ID scan copy, Your Mobile No & Complete Postal Address, Mentioning you are interested to Receive DVD through Courier at Free of Cost
Own Projects
Own Projects ! or New IEEE Paper… Any Projects…
Mail your Requirements to us and Get it Done with us… or Call us / Email us / SMS us or Visit us Directly
We will do any Projects…
FOR MORE ABSTRACTS, IEEE BASE PAPER / REFERENCE PAPERS AND NON IEEE PROJECT ABSTRACTS
No.109, 2nd Floor, Bombay Flats, Nungambakkam High Road, Nungambakkam, Chennai - 600 034
Near Ganpat Hotel, Above IOB, Next to ICICI Bank, Opp to Cakes'n'Bakes
044-2823 5816, 98411 93224, 89393 63501
ncctchennai@gmail.com, ncctprojects@gmail.com
EMBEDDED SYSTEM PROJECTS IN
Embedded Systems using Microcontrollers, VLSI, DSP, Matlab, Power Electronics, Power Systems, Electrical
For Embedded Projects - 044-45000083, 7418497098
ncctchennai@gmail.com, www.ncct.in
Project Support Services
Complete Guidance | 100% Result for all Projects | On time Completion | Excellent Support | Project Completion Experience Certificate | Free Placements Services | Multi Platform Training | Real Time Experience