Sunday, August 5, 2012

2012, Java IEEE Project Abstracts - Part 1

JAVA IEEE 2012 PROJECT ABSTRACTS

DOMAIN – CLOUD COMPUTING
SLA-BASED OPTIMIZATION OF POWER AND MIGRATION COST IN CLOUD COMPUTING
Cloud computing systems (or hosting datacenters) have attracted a lot of attention in recent years. Utility computing, reliable data storage, and infrastructure-independent computing are example applications of such systems. Electrical energy cost of a cloud computing system is a strong function of the consolidation and migration techniques used to assign incoming clients to existing servers.
Moreover, each client typically has a service level agreement (SLA), which specifies constraints on performance and/or quality of service that it receives from the system. These constraints result in a basic trade-off between the total energy cost and client satisfaction in the system. In this paper, a resource allocation problem is considered that aims to minimize the total energy cost of cloud computing system while meeting the specified client-level SLAs in a probabilistic sense.
The cloud computing system pays a penalty for the percentage of a client’s requests that do not meet a specified upper bound on their service time. An efficient heuristic algorithm based on convex optimization and dynamic programming is presented to solve the aforesaid resource allocation problem. Simulation results demonstrate the effectiveness of the proposed algorithm compared to previous work.
 

*------------*------------*------------*------------*------------*------------*

DOMAIN – CLOUD COMPUTING
SCALABLE JOIN QUERIES IN CLOUD DATA STORES
Cloud data stores provide scalability and high availability properties for Web applications, but do not support complex queries such as joins. Web application developers must therefore design their programs according to the peculiarities of NoSQL data stores rather than established software engineering practice.
This results in complex and error-prone code, especially with respect to subtle issues such as data consistency under concurrent read/write queries. We present join query support in CloudTPS, a middleware layer which stands between a Web application and its data store.
The system enforces strong data consistency and scales linearly under a demanding workload composed of join queries and read-write transactions. In large-scale deployments, CloudTPS outperforms replicated PostgreSQL by up to a factor of three.


*------------*------------*------------*------------*------------*------------*

DOMAIN – CLOUD COMPUTING
REVENUE MANAGEMENT FOR CLOUD PROVIDERS - A POLICY-BASED APPROACH UNDER STOCHASTIC DEMAND
Competition on global markets forces enterprises to make use of new applications, reduce process times and cut the costs of their IT infrastructure. To achieve this, commercial users harness the benefits of Cloud computing, as they can outsource data storage and computation facilities while saving on the overall cost of IT ownership.
Cloud services can be accessed on demand at any time in a pay-as-you-go manner. However, it is this flexibility of customers that results in great challenges for Cloud service providers. They need to maximize their revenue in the presence of limited fixed resources and uncertainty regarding upcoming service requests, while contemporaneously considering their SLAs.
To address this challenge, we introduce models that can predict the revenue and utilization achieved with admission-control-policy-based revenue management under stochastic demand. This allows providers to significantly increase revenue by choosing the optimal policy.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
PERFORMANCE EVALUATION OF MULTIMEDIA SERVICES OVER WIRELESS LAN USING ROUTING PROTOCOL OPTIMIZED LINK STATE ROUTING (OLSR)
The use of multimedia grows year by year, supported by computer networks such as wireless LANs (WLANs) acting as the delivery medium. One common method is streaming multimedia over the Internet from a server to clients, in response to client requests for video and audio content held on a computer network. A key factor affecting streaming is bandwidth.
When bandwidth is insufficient, the streaming process is frequently disrupted, resulting in packet loss and delivery delay. To reduce loss and delay, a routing protocol is needed that can support the quality of service of multimedia packets carried over wireless LAN networks.
This paper evaluates the performance of multimedia services over a WLAN using the routing protocol Optimized Link State Routing (OLSR).


 *------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
PERFORMANCE ANALYSIS OF CLOUD COMPUTING CENTERS USING M/G/m/m+r QUEUING SYSTEMS
Successful development of cloud computing paradigm necessitates accurate performance evaluation of cloud data centers.
As exact modeling of cloud centers is not feasible due to the nature of cloud centers and diversity of user requests, we describe a novel approximate analytical model for performance evaluation of cloud server farms and solve it to obtain accurate estimation of the complete probability distribution of the request response time and other important performance indicators.
The model allows cloud operators to determine the relationship between the number of servers and input buffer size, on one side, and performance indicators such as the mean number of tasks in the system, blocking probability, and the probability that a task will obtain immediate service, on the other.
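The indicators named above can be illustrated with a much simpler, fully Markovian M/M/m/m+r model (the paper's contribution is an approximate analysis for general service-time distributions, which is not reproduced here; the class and method names below are illustrative):

```java
/** Simplified M/M/m/m+r sketch: m servers plus a buffer of r waiting slots. */
public class MMmQueue {

    /** Steady-state probabilities pi[0..m+r] of the birth-death chain. */
    public static double[] steadyState(double lambda, double mu, int m, int r) {
        double[] pi = new double[m + r + 1];
        pi[0] = 1.0;
        for (int k = 1; k <= m + r; k++) {
            // departure rate is k*mu while free servers remain, m*mu once all are busy
            double serviceRate = Math.min(k, m) * mu;
            pi[k] = pi[k - 1] * lambda / serviceRate;
        }
        double sum = 0;
        for (double p : pi) sum += p;
        for (int k = 0; k < pi.length; k++) pi[k] /= sum;  // normalize
        return pi;
    }

    /** An arriving task is blocked when all m servers and all r buffer slots are full. */
    public static double blockingProbability(double lambda, double mu, int m, int r) {
        return steadyState(lambda, mu, m, r)[m + r];
    }

    /** A task obtains immediate service when at least one server is idle on arrival. */
    public static double immediateServiceProbability(double lambda, double mu, int m, int r) {
        double[] pi = steadyState(lambda, mu, m, r);
        double p = 0;
        for (int k = 0; k < m; k++) p += pi[k];
        return p;
    }
}
```

Because arrivals are Poisson, arriving tasks see the time-averaged state distribution, so the two probabilities can be read directly off the steady-state vector.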

 
*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
MINING CONCEPT DRIFTING NETWORK TRAFFIC IN CLOUD COMPUTING ENVIRONMENTS
Anomaly-based network Intrusion Detection Systems (IDS) model patterns of normal activity and detect novel network attacks. However, these systems depend on the availability of the system's normal traffic pattern profile. But the statistical fingerprint of the normal traffic pattern can change and shift over a period of time due to changes in operational or user activity at the networked site, or even system updates.
The changes in normal traffic patterns over time lead to concept drift. Some changes can be temporal or cyclical, and can be short-lived or last for longer periods of time. Depending on a number of factors, the speed at which the change in traffic patterns occurs can also vary, ranging from near instantaneous to change occurring over the span of many months.
These changes in traffic patterns are a cause of concern for IDSs, as they can lead to a significant increase in false positive rates, thereby reducing overall system performance. In order to improve the reliability of the IDS, there is a need for an automated mechanism to detect valid traffic changes and avoid inappropriate ad hoc responses.
ROC curves have historically been used to evaluate the accuracy of IDSs. ROC curves generated using fixed, time- invariant classification thresholds do not characterize the best accuracy that an IDS can achieve in presence of concept-drifting network traffic.
In this paper, we present an integrated supervised machine learning and control-theoretic model (especially for clouds) for detecting concept drift in network traffic patterns.
The model comprises an online support vector machine based classifier (incremental anomaly-based detection), a Kullback-Leibler divergence based relative entropy measurement scheme (quantifying concept drift) and a feedback control engine (adapting ROC thresholding). In our proposed system, any intrusion activity will cause significant variations, and thereby a large error, while a minor aberration in the variations (concept drift) will not be immediately reported as an alert.
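The divergence-measurement component can be sketched as a Kullback-Leibler measure over discrete traffic-feature distributions (class, method names and the threshold rule are hypothetical; the paper couples this with the online SVM and the feedback controller, which are not shown):

```java
public class TrafficDrift {

    /** Kullback-Leibler divergence D(p || q) in nats between two discrete
     *  distributions. q is floored slightly so the measure stays finite
     *  when some q[i] is zero. */
    public static double klDivergence(double[] p, double[] q) {
        double eps = 1e-12, d = 0.0;
        for (int i = 0; i < p.length; i++) {
            if (p[i] > 0) d += p[i] * Math.log(p[i] / Math.max(q[i], eps));
        }
        return d;
    }

    /** Flag concept drift when the divergence between the current traffic
     *  window and the baseline profile exceeds a tuned threshold. */
    public static boolean driftDetected(double[] baseline, double[] window, double threshold) {
        return klDivergence(window, baseline) > threshold;
    }
}
```

A window identical to the baseline yields zero divergence, while a skewed window drives the measure up, which is what lets the controller distinguish slow drift from abrupt intrusion-like variation.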



*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
FRAMEWORK ON LARGE PUBLIC SECTOR IMPLEMENTATION OF CLOUD COMPUTING
Cloud computing enables IT systems to be scalable and elastic. One significant advantage is that users no longer need to determine their exact computing resource requirements upfront. Instead, they request computing resources as required, on demand.
This paper introduces a framework, specific to large public sector entities, for migrating to cloud computing. It can also serve as a reference for organizations seeking to overcome their limitations and to convince their stakeholders to implement the various types of Cloud Computing service models.
 

*------------*------------*------------*------------*------------*------------*

DOMAIN – CLOUD COMPUTING
ENHANCED ENERGY-EFFICIENT SCHEDULING FOR PARALLEL APPLICATIONS IN CLOUD
Energy consumption has become a major concern to the widespread deployment of cloud data centers. The growing importance for parallel applications in the cloud introduces significant challenges in reducing the power consumption drawn by the hosted servers.
In this paper, we propose an enhanced energy-efficient scheduling (EES) algorithm to reduce energy consumption while meeting the performance-based service level agreement (SLA). Since slacking non-critical jobs can achieve significant power savings, we exploit the slack room and allocate it in a global manner in our schedule.
Using randomly generated and real-life application workflows, our results demonstrate that EES is able to reduce energy consumption considerably while still meeting the SLA.
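The slack-reclamation idea can be sketched with a common DVFS energy model, in which dynamic energy scales roughly with the square of the operating frequency; this model and the names below are illustrative assumptions, not the paper's exact formulation:

```java
public class SlackScaling {

    /** Dynamic energy for a job of workCycles cycles run at frequency freq:
     *  power ~ c*f^3, runtime = w/f, so energy ~ c*w*f^2 (constant c dropped). */
    public static double energy(double workCycles, double freq) {
        return workCycles * freq * freq;
    }

    /** Slacking a non-critical job: stretch its runtime into the available
     *  slack by lowering the frequency just enough to still finish on time. */
    public static double slackedEnergy(double workCycles, double fMax, double slackSeconds) {
        double tMin = workCycles / fMax;              // runtime at full speed
        double fNew = workCycles / (tMin + slackSeconds); // just meets the deadline
        return energy(workCycles, fNew);
    }
}
```

Under this model any positive slack strictly reduces energy, which is why the schedule benefits from distributing slack globally rather than leaving it idle.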


*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
CLOUD CHAMBER: A SELF-ORGANIZING FACILITY TO CREATE, EXERCISE, AND EXAMINE SOFTWARE AS A SERVICE TENANTS
Cloud Chamber is a testbed for understanding how web services behave as tenants in a Software as a Service (SaaS) environment.  This work describes the Cloud Chamber testbed to investigate autonomic resource management of web services in a cloud environment. 
Cloud Chamber is a virtualized environment which provides web servers as services, facilities to apply loads to the tenant services, algorithms for autonomic organization and reconfiguration of service assignments as demand changes, and sensors to capture resource consumption and performance metrics.
The testbed inserts sensors into web servers to collect the resource utilization of CPU cycles, memory consumption, and bandwidth consumption of the individual web services, the web server, and the operating system.  This high resolution performance data generates profiles of the resource usage of each web service and the resource availability of each server. 
The testbed, as described in this work, utilizes these profiles to efficiently place services on servers, thus balancing resource consumption, service performance, and service availability.  Once services have been placed, the testbed monitors changes such as traffic levels, server churn, and the introduction of new services.
The information gathered is used to calculate configurations of service placement which better meet the changing requirements of the environment.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
A TIME-SERIES PATTERN BASED NOISE GENERATION STRATEGY FOR PRIVACY PROTECTION IN CLOUD COMPUTING
Cloud computing promises an open environment where customers can deploy IT services in a pay-as-you-go fashion while saving huge capital investment in their own IT infrastructure. Due to the openness, various malicious service providers may exist. Such service providers may record service information in a service process from a customer and then collectively deduce the customer’s private information.
Therefore, from the perspective of cloud computing security, there is a need to take special actions to protect privacy at client sides. Noise obfuscation is an effective approach in this regard by utilising noise data.
For instance, it generates and injects noise service requests into real customer service requests so that service providers would not be able to distinguish which requests are real ones if their occurrence probabilities are about the same. However, existing typical noise generation strategies mainly focus on the entire service usage period to achieve about the same final occurrence probabilities of service requests.
In fact, such probabilities can fluctuate within a time interval, such as three months, and may differ significantly from other time intervals. In this case, service providers may still be able to deduce the customers’ privacy from a specific time interval, although unlikely from the overall period.
That is to say, the existing typical noise generation strategies could fail to protect customers’ privacy for local time intervals. To address this problem, we develop a novel time-series pattern based noise generation strategy.
Firstly, we analyse previous probability fluctuations and propose a group of time-series patterns for predicting future fluctuated probabilities. Then, based on these patterns, we present our strategy by forecasting future occurrence probabilities of real service requests and generating noise requests to reach about the same final probabilities in the next time interval.
The simulation evaluation demonstrates that our strategy can cope with these fluctuations to significantly improve the effectiveness of customers’ privacy protection.
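The core equalization step, injecting enough noise requests per service so that final occurrence counts in the coming interval come out about the same, might be sketched as follows (the forecasting of real-request counts from time-series patterns, the paper's main contribution, is assumed to have happened already; names are illustrative):

```java
import java.util.Arrays;

public class NoiseObfuscation {

    /** Given forecast counts of real requests per service for the next time
     *  interval, return the number of noise requests to inject per service
     *  so that every service ends up with the same total occurrence count,
     *  making real and noise requests indistinguishable by frequency. */
    public static long[] noisePlan(long[] forecastReal) {
        long max = Arrays.stream(forecastReal).max().orElse(0);
        long[] noise = new long[forecastReal.length];
        for (int i = 0; i < forecastReal.length; i++) {
            noise[i] = max - forecastReal[i];  // pad each service up to the busiest one
        }
        return noise;
    }
}
```

Running this per interval, rather than once over the whole usage period, is what closes the local-interval leak the abstract describes.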


*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
A PRICE- AND-TIME-SLOT-NEGOTIATION MECHANISM FOR CLOUD SERVICE RESERVATIONS
When making reservations for Cloud services, consumers and providers need to establish service-level agreements through negotiation. Whereas it is essential for both a consumer and a provider to reach an agreement on the price of a service and when to use the service, to date, there is little or no negotiation support for both price and time-slot negotiations (PTNs) for Cloud service reservations.
This paper presents a multi-issue negotiation mechanism to facilitate the following: 1) PTNs between Cloud agents and 2) tradeoffs between price and time-slot utilities. Unlike many existing negotiation mechanisms in which a negotiation agent can only make one proposal at a time, agents in this work are designed to concurrently make multiple proposals in a negotiation round that generate the same aggregated utility, differing only in terms of individual price and time-slot utilities.
Another novelty of this work is formulating a novel time-slot utility function that characterizes preferences for different time slots. These ideas are implemented in an agent-based Cloud testbed.
Using the testbed, experiments were carried out to compare this work with related approaches. Empirical results show that PTN agents reach agreements faster and achieve higher utilities than related approaches. A case study demonstrates the application of the PTN mechanism for pricing Cloud resources.
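The idea of several concurrent proposals sharing one aggregated utility can be sketched with a simple weighted-sum utility (the paper defines its own time-slot utility function; the linear weighting and the names below are illustrative assumptions):

```java
public class PtnUtility {

    /** Aggregated utility of a proposal as a weighted sum of its price
     *  utility and its time-slot utility (weights sum to 1). */
    public static double aggregate(double priceUtil, double slotUtil, double priceWeight) {
        return priceWeight * priceUtil + (1 - priceWeight) * slotUtil;
    }

    /** For a target aggregated utility and a chosen price utility, derive the
     *  time-slot utility the proposal must offer. Sweeping priceUtil over a
     *  range yields several concurrent proposals with identical aggregated
     *  utility but different price/time-slot tradeoffs. */
    public static double slotUtilFor(double target, double priceUtil, double priceWeight) {
        return (target - priceWeight * priceUtil) / (1 - priceWeight);
    }
}
```

Every (priceUtil, slotUtilFor(...)) pair lies on the same iso-utility line, which is precisely what lets an agent offer multiple equivalent proposals in one round.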


*------------*------------*------------*------------*------------*------------*

DOMAIN – CLOUD COMPUTING
A CLOUD INFRASTRUCTURE FOR OPTIMIZATION OF A MASSIVE PARALLEL SEQUENCING WORKFLOW
Massive Parallel Sequencing is a term used to describe several revolutionary approaches to DNA sequencing, the so-called Next Generation Sequencing technologies.
These technologies generate millions of short sequence fragments in a single run and can be used to measure levels of gene expression and to identify novel splice variants of genes, allowing more accurate analysis. The proposed solution provides novelty in two areas: first, an optimization of the read mapping algorithm has been designed in order to parallelize processes; second, an architecture has been implemented that consists of a Grid platform composed of physical nodes, a Virtual platform composed of virtual nodes set up on demand, and a scheduler that integrates the two platforms.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
TIME AND COST SENSITIVE DATA-INTENSIVE COMPUTING ON HYBRID CLOUDS
Purpose-built clusters permeate many of today’s organizations, providing both large-scale data storage and computing. Within local clusters, competition for resources complicates applications with deadlines. However, given the emergence of the cloud’s pay-as-you-go model, users are increasingly storing portions of their data remotely and allocating compute nodes on demand to meet deadlines.
This scenario gives rise to a hybrid cloud, where data stored across local and cloud resources may be processed over both environments. While a hybrid execution environment may be used to meet time constraints, users must now attend to the costs associated with data storage, data transfer, and node allocation time on the cloud.
In this paper, we describe a modeling-driven resource allocation framework to support both time and cost sensitive execution for data-intensive applications executed in a hybrid cloud setting. We evaluate our framework using two data-intensive applications and a number of time and cost constraints.
Our experimental results show that our system is capable of meeting execution deadlines within a 3.6% margin of error. Similarly, cost constraints are met within a 1.2% margin of error, while minimizing the application’s execution time.
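A minimal sketch of the kind of cost and deadline bookkeeping such a framework must do, under illustrative linear-pricing and ideal-speedup assumptions (not the paper's actual model), might look like this:

```java
public class HybridCostModel {

    /** Estimated cloud-side cost: node allocation time plus data transfer.
     *  The linear pricing model and the parameters are illustrative. */
    public static double cloudCost(int nodes, double hours, double pricePerNodeHour,
                                   double gbTransferred, double pricePerGb) {
        return nodes * hours * pricePerNodeHour + gbTransferred * pricePerGb;
    }

    /** Smallest node count that meets the deadline, assuming the cloud
     *  portion of the work parallelizes with ideal linear speedup. */
    public static int nodesForDeadline(double serialHours, double deadlineHours) {
        return (int) Math.ceil(serialHours / deadlineHours);
    }
}
```

A real allocator would search over splits of data between local and cloud resources, picking the cheapest split whose predicted completion time fits the deadline.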


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
LARGE MARGIN GAUSSIAN MIXTURE MODELS WITH DIFFERENTIAL PRIVACY
As increasing amounts of sensitive personal information are aggregated into data repositories, it has become important to develop mechanisms for processing the data without revealing information about individual data instances.
The differential privacy model provides a framework for the development and theoretical analysis of such mechanisms. In this paper, we propose an algorithm for learning a discriminatively trained multiclass Gaussian mixture model-based classifier that preserves differential privacy using a large margin loss function with a perturbed regularization term.
We present a theoretical upper bound on the excess risk of the classifier introduced by the perturbation.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
ITERATIVE TRUST AND REPUTATION MANAGEMENT USING BELIEF PROPAGATION
In this paper, we introduce the first application of the belief propagation algorithm in the design and evaluation of trust and reputation management systems. We approach the reputation management problem as an inference problem and describe it as computing marginal likelihood distributions from complicated global functions of many variables.
However, we observe that computing the marginal probability functions is computationally prohibitive for large-scale reputation systems. Therefore, we propose to utilize the belief propagation algorithm to efficiently (in linear complexity) compute these marginal probability distributions, resulting in a fully iterative, probabilistic, belief propagation-based approach (referred to as BP-ITRM). BP-ITRM models the reputation system on a factor graph. By using a factor graph, we obtain a qualitative representation of how the consumers (buyers) and service providers (sellers) are related on a graphical structure.
Further, by using such a factor graph, the global functions factor into products of simpler local functions, each of which depends on a subset of the variables. Then, we compute the marginal probability distribution functions of the variables representing the reputation values (of the service providers) by message passing between nodes in the graph. We show that BP-ITRM is reliable in filtering out malicious/unreliable reports.
We provide a detailed evaluation of BP-ITRM via analysis and computer simulations. We prove that BP-ITRM iteratively reduces the error in the reputation values of service providers due to the malicious raters with a high probability. Further, we observe that this probability drops suddenly if a particular fraction of malicious raters is exceeded, which introduces a threshold property to the scheme.
Furthermore, comparison of BP-ITRM with some well-known and commonly used reputation management techniques (e.g., Averaging Scheme, Bayesian Approach, and Cluster Filtering) indicates the superiority of the proposed scheme in terms of robustness against attacks (e.g., ballot stuffing, bad mouthing). Finally, BP-ITRM has linear complexity in the number of service providers and consumers, far exceeding the efficiency of other schemes.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
INCENTIVE COMPATIBLE PRIVACY-PRESERVING DISTRIBUTED CLASSIFICATION
In this paper, we propose game-theoretic mechanisms to encourage truthful data sharing for distributed data mining. One proposed mechanism uses the classic Vickrey-Clarke-Groves (VCG) mechanism, and the other relies on the Shapley value.
Neither relies on the ability to verify the data of the parties participating in the distributed data mining protocol. Instead, we incentivize truth telling based solely on the data mining result. This is especially useful for situations where privacy concerns prevent verification of the data.
Under reasonable assumptions, we prove that these mechanisms are incentive compatible for distributed data mining. In addition, through extensive experimentation, we show that they are applicable in practice.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
ENSURING DISTRIBUTED ACCOUNTABILITY FOR DATA SHARING IN THE CLOUD
Cloud computing enables highly scalable services to be easily consumed over the Internet on an as-needed basis. A major feature of the cloud services is that users’ data are usually processed remotely in unknown machines that users do not own or operate.
While enjoying the convenience brought by this new emerging technology, users’ fears of losing control of their own data (particularly, financial and health data) can become a significant barrier to the wide adoption of cloud services.
To address this problem, in this paper, we propose a novel highly decentralized information accountability framework to keep track of the actual usage of the users’ data in the cloud. In particular, we propose an object-centered approach that enables enclosing our logging mechanism together with users’ data and policies.
We leverage the JAR programmable capabilities to both create a dynamic and traveling object, and to ensure that any access to users’ data will trigger authentication and automated logging local to the JARs. To strengthen users’ control, we also provide distributed auditing mechanisms. We provide extensive experimental studies that demonstrate the efficiency and effectiveness of the proposed approaches.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
ENHANCED PRIVACY ID: A DIRECT ANONYMOUS ATTESTATION SCHEME WITH ENHANCED REVOCATION CAPABILITIES
Direct Anonymous Attestation (DAA) is a scheme that enables the remote authentication of a Trusted Platform Module (TPM) while preserving the user’s privacy. A TPM can prove to a remote party that it is a valid TPM without revealing its identity and without linkability.
In the DAA scheme, a TPM can be revoked only if the DAA private key in the hardware has been extracted and published widely so that verifiers obtain the corrupted private key. If the unlinkability requirement is relaxed, a TPM suspected of being compromised can be revoked even if the private key is not known.
However, with the full unlinkability requirement intact, if a TPM has been compromised but its private key has not been distributed to verifiers, the TPM cannot be revoked. Furthermore, a TPM cannot be revoked from the issuer, if the TPM is found to be compromised after the DAA issuing has occurred. In this paper, we present a new DAA scheme called Enhanced Privacy ID (EPID) scheme that addresses the above limitations.
While still providing unlinkability, our scheme provides a method to revoke a TPM even if the TPM private key is unknown. This expanded revocation property makes the scheme useful for other applications, such as driver’s licenses. Our EPID scheme is efficient and provably secure in the same security model as DAA, i.e., in the random oracle model under the strong RSA assumption and the decisional Diffie-Hellman assumption.


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
ENFORCING MANDATORY ACCESS CONTROL IN COMMODITY OS TO DISABLE MALWARE
Enforcing practical Mandatory Access Control (MAC) in a commercial operating system to tackle the malware problem is a grand challenge but also a promising approach. The firmest barriers to applying MAC to defeat malware are the incompatibility and usability problems of existing MAC systems.
To address these issues, we manually analyze 2,600 malware samples and two types of MAC-enforced operating systems, and then design a novel MAC enforcement approach, named Tracer, which incorporates intrusion detection and tracing in a commercial operating system. The approach conceptually consists of three actions: detecting, tracing, and restricting suspected intruders.
One novelty is that it leverages lightweight intrusion detection and tracing techniques to automate security label configuration, which is widely acknowledged as a tough issue when applying a MAC system in practice. The other is that, rather than restricting information flow as a traditional MAC does, it traces intruders and restricts only their critical malware behaviors, where intruders are processes and executables that are potential agents of a remote attacker.
Our prototype and experiments on Windows show that Tracer can effectively defeat all tested malware samples by blocking their malware behaviors, while not causing significant compatibility problems.


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
DOUBLEGUARD: DETECTING INTRUSIONS IN MULTITIER WEB APPLICATIONS
Internet services and applications have become an inextricable part of daily life, enabling communication and the management of personal information from anywhere. To accommodate this increase in application and data complexity, web services have moved to a multitiered design wherein the webserver runs the application front-end logic and data are outsourced to a database or file server.
In this paper, we present DoubleGuard, an intrusion detection system (IDS) that models the network behavior of user sessions across both the front-end webserver and the back-end database. By monitoring both web and subsequent database requests, we are able to ferret out attacks that an independent IDS would not be able to identify.
Furthermore, we quantify the limitations of any multitier IDS in terms of training sessions and functionality coverage. We implemented DoubleGuard using an Apache webserver with MySQL and lightweight virtualization. We then collected and processed real-world traffic over a 15-day period of system deployment in both dynamic and static web applications.
Finally, using DoubleGuard, we were able to expose a wide range of attacks with 100 percent accuracy while maintaining 0 percent false positives for static web services and 0.6 percent false positives for dynamic web services.


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
DETECTING ANOMALOUS INSIDERS IN COLLABORATIVE INFORMATION SYSTEMS
Collaborative information systems (CISs) are deployed within a diverse array of environments that manage sensitive information. Current security mechanisms detect insider threats, but they are ill-suited to monitor systems in which users function in dynamic teams.
In this paper, we introduce the community anomaly detection system (CADS), an unsupervised learning framework to detect insider threats based on the access logs of collaborative environments.
The framework is based on the observation that typical CIS users tend to form community structures based on the subjects accessed (e.g., patients’ records viewed by healthcare providers). CADS consists of two components: 1) relational pattern extraction, which derives community structures and 2) anomaly prediction, which leverages a statistical model to determine when users have sufficiently deviated from communities.
We further extend CADS into MetaCADS to account for the semantics of subjects (e.g., patients’ diagnoses). To empirically evaluate the framework, we perform an assessment with three months of access logs from a real electronic health record (EHR) system in a large medical center.
The results illustrate that our models exhibit significant performance gains over state-of-the-art competitors. When the number of illicit users is low, MetaCADS is the best model, but as the number grows, commonly accessed semantics lead to hiding in a crowd, such that CADS is more prudent.


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
AUTOMATED SECURITY TEST GENERATION WITH FORMAL THREAT MODELS
Security attacks typically result from unintended behaviors or invalid inputs. Security testing is labor intensive because a real-world program usually has too many invalid inputs. It is highly desirable to automate or partially automate the security testing process.
This paper presents an approach to automated generation of security tests by using formal threat models represented as Predicate/ Transition nets. It generates all attack paths, i.e., security tests, from a threat model and converts them into executable test code according to the given Model-Implementation Mapping (MIM) specification.
We have applied this approach to two real-world systems, Magento (a web-based shopping system being used by many online stores) and FileZilla Server (a popular FTP server implementation in C++). Threat models are built systematically by examining all potential STRIDE (spoofing identity, tampering with data, repudiation, information disclosure, denial of service, and elevation of privilege) threats to system functions. The security tests generated from these models have found multiple security risks in each system.
The test code for most of the security tests can be generated and executed automatically. To further evaluate the vulnerability detection capability of the testing approach, the security tests have been applied to a number of security mutants where vulnerabilities are injected deliberately.
The mutants are created according to the common vulnerabilities in C++ and web applications. Our experiments show that the security tests have killed the majority of the mutants.


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
A LEARNING-BASED APPROACH TO REACTIVE SECURITY
Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack.
Our game-theoretic model follows common practice in the security literature by making worst case assumptions about the attacker: we grant the attacker complete knowledge of the defender’s strategy and do not require the attacker to act rationally.
In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker’s incentives and knowledge.
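The flavor of such an online-learning defender can be sketched with a standard multiplicative-weights update (our own illustration of the general idea, not the paper's exact algorithm or notation): the defender splits its budget over attack surfaces in proportion to per-surface weights, and after each round shifts weight toward the surfaces that were actually attacked, rather than moving everything to the last target.

```java
import java.util.Arrays;

// Sketch of a multiplicative-weights reactive defender (illustrative only).
public class ReactiveDefender {
    private final double[] weights; // one weight per attack surface
    private final double eta;       // learning rate

    public ReactiveDefender(int surfaces, double eta) {
        this.weights = new double[surfaces];
        Arrays.fill(this.weights, 1.0);
        this.eta = eta;
    }

    // Defense budget allocation: proportional to current weights.
    public double[] allocation() {
        double sum = Arrays.stream(weights).sum();
        double[] alloc = new double[weights.length];
        for (int i = 0; i < weights.length; i++) {
            alloc[i] = weights[i] / sum;
        }
        return alloc;
    }

    // After observing per-surface losses this round, multiplicatively
    // reinforce the surfaces that were attacked instead of overreacting.
    public void observe(double[] losses) {
        for (int i = 0; i < weights.length; i++) {
            weights[i] *= Math.exp(eta * losses[i]);
        }
    }
}
```

Because the update is multiplicative rather than winner-take-all, a single attack nudges the allocation instead of redirecting the whole budget, which is the "learning instead of myopically overreacting" behavior the abstract describes.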


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
SECURE FAILURE DETECTION AND CONSENSUS IN TRUSTEDPALS
We present a modular redesign of TrustedPals, a smart card-based security framework for solving Secure Multiparty Computation (SMC). Originally, TrustedPals assumed a synchronous network setting and reduced SMC to the problem of fault-tolerant consensus among smart cards.
We explore how to make TrustedPals applicable in environments with less synchrony and show how it can be used to solve asynchronous SMC. Within the redesign we investigate the problem of solving consensus in a general omission failure model augmented with failure detectors.
To this end, we give novel definitions of both consensus and the class ◇P of failure detectors in the omission model, which we call ◇P(om), and show how to implement ◇P(om) and solve consensus in such a system under very weak synchrony assumptions.
The integration of failure detection and consensus into the TrustedPals framework uses tools from privacy-enhancing techniques such as message padding and dummy traffic.
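The message-padding tool mentioned above can be illustrated with a minimal sketch (our own, with invented names and frame size, not the paper's protocol): every message is padded to a fixed frame length, so an eavesdropper cannot distinguish real payloads from dummy traffic by size.

```java
import java.util.Arrays;

// Minimal fixed-size message padding (illustrative sketch; names invented).
public class Padding {
    static final int FRAME_SIZE = 256; // every frame on the wire is 256 bytes

    // Frame layout: [2-byte big-endian payload length][payload][zero padding].
    public static byte[] pad(byte[] payload) {
        if (payload.length > FRAME_SIZE - 2) {
            throw new IllegalArgumentException("payload too large for one frame");
        }
        byte[] frame = new byte[FRAME_SIZE];
        frame[0] = (byte) (payload.length >> 8);
        frame[1] = (byte) payload.length;
        System.arraycopy(payload, 0, frame, 2, payload.length);
        return frame;
    }

    public static byte[] unpad(byte[] frame) {
        int len = ((frame[0] & 0xFF) << 8) | (frame[1] & 0xFF);
        return Arrays.copyOfRange(frame, 2, 2 + len);
    }
}
```

A dummy-traffic frame is then just `pad(new byte[0])`: on the wire it is byte-for-byte the same length as a frame carrying a real message.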


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
RESILIENT AUTHENTICATED EXECUTION OF CRITICAL APPLICATIONS IN UNTRUSTED ENVIRONMENTS
Modern computer systems are built on a foundation of software components from a variety of vendors. While critical applications may undergo extensive testing and evaluation procedures, the heterogeneity of software sources threatens the integrity of the execution environment for these trusted programs.
For instance, if an attacker can combine an application exploit with a privilege escalation vulnerability, the operating system (OS) can become corrupted. Alternatively, a malicious or faulty device driver running with kernel privileges could threaten the application. While the importance of ensuring application integrity has been studied in prior work, proposed solutions immediately terminate the application once corruption is detected.
Although this approach is sufficient in some cases, it is undesirable for many critical applications. To overcome this shortcoming, we have explored techniques for leveraging a trusted virtual machine monitor (VMM) to observe the application and potentially repair damage that occurs.
In this paper, we describe our system design, which leverages efficient coding and authentication schemes, and we present the details of our prototype implementation to quantify the overhead of our approach. Our work shows that it is feasible to build a resilient execution environment, even in the presence of a corrupted OS kernel, with a reasonable amount of storage and performance overhead.
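One building block behind such authenticated execution is keyed integrity checking of application state. The following standalone sketch (our own analogue using the standard `javax.crypto` API, not the paper's scheme) shows how a trusted monitor could tag a memory region and later detect tampering by a compromised kernel:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;

// Sketch: HMAC-SHA256 integrity tags over an application memory region
// (illustrative analogue of the paper's authentication schemes).
public class IntegrityCheck {
    private final SecretKeySpec key;

    public IntegrityCheck(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Compute the authentication tag for a snapshot of the region.
    public byte[] tag(byte[] region) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return mac.doFinal(region);
    }

    // Constant-time comparison, so the check itself leaks no timing info.
    public boolean verify(byte[] region, byte[] expectedTag) throws Exception {
        return MessageDigest.isEqual(tag(region), expectedTag);
    }
}
```

Because the key lives only with the trusted monitor, a corrupted OS can modify the region but cannot forge a matching tag, which is what lets the monitor detect (and then attempt to repair) the damage rather than trusting the kernel.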


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
ON PRIVACY OF ENCRYPTED SPEECH COMMUNICATIONS
Silence suppression, an essential feature of speech communications over the Internet, saves bandwidth by disabling voice packet transmissions when silence is detected. However, silence suppression enables an adversary to recover talk patterns from packet timing.
In this paper, we investigate privacy leakage through the silence suppression feature. More specifically, we propose a new class of traffic analysis attacks on encrypted speech communications with the goal of detecting the speakers. These attacks are based on packet timing information only, and they can detect speakers of speech communications made with different codecs.
We evaluate the proposed attacks with extensive experiments over different types of networks, including commercial anonymity networks and campus networks. The experiments show that the proposed traffic analysis attacks can detect speakers of encrypted speech communications with high accuracy based on traces 15 minutes long on average.
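The core observation, that silence suppression turns packet timing into an on/off talk pattern, can be sketched as follows (our own illustration; the gap threshold and names are invented, not the paper's parameters): inter-packet gaps longer than a threshold are treated as suppressed silence, splitting the trace into talkspurts even though every payload is encrypted.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: recover a talkspurt/silence pattern from packet timestamps only
// (illustration of the timing side channel; parameters are invented).
public class TalkPattern {
    // Gaps longer than this (ms) are treated as suppressed silence.
    static final double SILENCE_GAP_MS = 150.0;

    // Returns [start, end] pairs of talkspurts, given packet times in ms.
    public static List<double[]> talkspurts(double[] packetTimesMs) {
        List<double[]> spurts = new ArrayList<>();
        if (packetTimesMs.length == 0) return spurts;
        double start = packetTimesMs[0];
        double prev = packetTimesMs[0];
        for (int i = 1; i < packetTimesMs.length; i++) {
            if (packetTimesMs[i] - prev > SILENCE_GAP_MS) {
                spurts.add(new double[]{start, prev}); // silence detected
                start = packetTimesMs[i];
            }
            prev = packetTimesMs[i];
        }
        spurts.add(new double[]{start, prev});
        return spurts;
    }
}
```

The resulting on/off sequence is exactly the speaker-specific "talk pattern" feature an adversary can then match across calls, since silence suppression exposes it regardless of the codec or the encryption used.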


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
MITIGATING DISTRIBUTED DENIAL OF SERVICE ATTACKS IN MULTIPARTY APPLICATIONS IN THE PRESENCE OF CLOCK DRIFTS
Network-based applications commonly open some known communication port(s), making themselves easy targets for (distributed) Denial of Service (DoS) attacks. Earlier solutions to this problem are based on port-hopping between pairs of processes that are synchronous or exchange acknowledgments.
However, acknowledgments, if lost, can cause a port to remain open longer and thus be vulnerable, while time servers can themselves become targets of DoS attacks. Here, we extend port-hopping to support multiparty applications by proposing the BIGWHEEL algorithm, which lets each application server communicate with multiple clients in a port-hopping manner without the need for group synchronization.
Furthermore, we present an adaptive algorithm, HOPERAA, for enabling hopping in the presence of bounded asynchrony, namely, when the communicating parties have clocks with clock drifts. The solutions are simple, based on each client interacting with the server independently of the other clients, without the need of acknowledgments or time server(s).
Further, they do not rely on the application having a fixed port open at the beginning, nor do they require the clients to get a “first-contact” port from a third party. We show analytically the properties of the algorithms and also study their success rates experimentally, confirming the relation with the analytical bounds.
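The port-hopping idea underlying both algorithms can be sketched in a few lines (our own illustration, not BIGWHEEL or HOPERAA themselves; the hash construction and constants are invented, and clock-drift handling is omitted): client and server derive the current contact port from a shared secret and the current time slot, so a flooder never knows which port will be open next.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of time-slotted port hopping (illustration only; BIGWHEEL/HOPERAA
// additionally handle multiple clients and clock drift, omitted here).
public class PortHopper {
    static final int MIN_PORT = 1024;
    static final int PORT_RANGE = 64511; // ports 1024..65534
    static final long SLOT_MS = 1000;    // hop once per second

    // Deterministic port for a given shared secret and time slot; client
    // and server compute this independently, with no acknowledgments.
    public static int portFor(String sharedSecret, long timeMs) throws Exception {
        long slot = timeMs / SLOT_MS;
        MessageDigest d = MessageDigest.getInstance("SHA-256");
        byte[] h = d.digest((sharedSecret + ":" + slot)
                .getBytes(StandardCharsets.UTF_8));
        // Use the first 4 hash bytes as an unsigned value to pick the port.
        long v = ((h[0] & 0xFFL) << 24) | ((h[1] & 0xFFL) << 16)
               | ((h[2] & 0xFFL) << 8) | (h[3] & 0xFFL);
        return MIN_PORT + (int) (v % PORT_RANGE);
    }
}
```

Because both parties compute the port locally from the slot number, no acknowledgments or time server are needed; the hard part the paper addresses is keeping the two clocks' slot counters aligned under drift.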


*------------*------------*------------*------------*------------*------------*
