(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 3, March 2011
Reliability and Security in MDRTS: A Combine Colossal Expression
 
Gyanendra Kumar Gupta
Computer Sc. & Engg. Dept., Kanpur Institute of Technology, Kanpur, UP, India, 208001
gyanendrag@gmail.com
A. K. Sharma
Computer Sc. & Engg. Dept., M.M.M. Engineering College, Gorakhpur, UP, India, 273010
akscse@rediffmail.com
Vishnu Swaroop
Computer Sc. & Engg. Dept., M.M.M. Engineering College, Gorakhpur, UP, India, 273010
rsvsgkp@rediffmail.com
Abstract— Numerous types of Information Systems are broadly used in various fields. With the fast development of computer networks, Information System users care more about data sharing in networks. Sharing of information, and changes made by different users at different permission levels, is controlled by a super user, but the read/write operations must still be performed in a reliable manner. In a conventional relational database, data reliability is enforced by the consistency control mechanism: when a data object is locked in sharing mode, other transactions can only read it, but cannot update it. If the conventional consistency control method is still used unchanged, the system's concurrency is adversely affected. So there are many new requirements for consistency control in the field of Information Systems (MDRTS). In the present era, not only does the volume of information grow enormously, it also brings together data of different natures, such as text, image, picture, graphic and sound. The problem is not limited to the type of data; it also arises in the different database environments in use, such as Mobile, Distributed, Real Time and Multimedia databases. There are many aspects of data reliability problems in a Mobile Distributed Real Time System (MDRTS), such as inconsistency between the attribute and the type of data, and inconsistency of topological relations after objects have been modified. In this paper, many cases of data reliability are discussed for Information Systems. As mobile computing becomes popular and databases grow through information sharing, security is a major issue for researchers. Reliability and security of data are a major challenge, because whenever the data is not reliable and secure, no operation on the data (e.g. a transaction) is useful. This becomes more and more crucial when the data changes from one form to another (i.e. through transactions) in non-traditional environments like Mobile, Distributed, Real Time and Multimedia databases. In this paper we raise the different aspects and analyze the available solutions for reliability and security of databases. Conventional database security has focused primarily on creating user accounts and managing user privileges to database objects. In this paper we also give an overview of present and past database security challenges.
 
Keywords— System Reliability, Sharing, Data Consistency, Data Privileges, Data Loss, Data Recovery, Integrity, Concurrency Control & Recovery, Distributed Databases, Transactions, Security, Authentication, Access Control, Encryption
I. INTRODUCTION

Data reliability summarizes the validity, accuracy, usability and integrity of related data between applications and across the IT infrastructure. It ensures that each user observes a consistent view of the data, including visible changes made by the user's own transactions (read/write) and the transactions of other users or processes [1,2]. Data reliability problems may arise at any time, but are frequently introduced during or following recovery situations, when backup copies of the data are used in place of the original data. Reliability is mostly concerned with consistency [3].

Building reliability into a distributed database system is very important. The failure of a distributed database system can result in anything from easily repairable errors to disastrous meltdowns. A reliable distributed database system is designed to be as fault tolerant as feasible. Fault tolerance deals with making the system function in the presence of faults, and faults can occur in any of the components of a distributed system. This paper gives a brief overview of the different types of faults in a system and some of their solutions.

Various kinds of data consistency have been identified. These include Application Consistency, Transaction Consistency and Point-in-Time Consistency.
 
II. VARIOUS TYPES OF CONSISTENCY
A. Point in Time Consistency
Data is said to be Point in Time consistent if all of the interrelated data components are as they were at any single instant in time. This type of consistency can be visualized by picturing a data center that has experienced a power failure. Before the lights come back on and processing resumes, the data is considered time consistent, because the entire processing environment failed at the same instant of time.

Different types of failures may create a situation where Point in Time consistency is not maintained. For example, consider the failure of a single logical volume containing data from several applications. If the only recovery option is to restore that volume from a backup taken some time earlier, the data contained on the restored volume is not consistent
with the other volumes, and additional recovery steps must be undertaken [10].
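As a minimal sketch of the restored-volume scenario above (an illustration added here, not part of the paper; the volume names and timestamps are assumed), a recovery procedure could compare the capture timestamps recorded with each volume image to detect a broken Point in Time set:

```python
# Sketch: flag volume images whose capture instant differs from the rest of
# the set, i.e. a restore that breaks Point in Time consistency across volumes.
from datetime import datetime

volume_images = {
    "VOL001": datetime(2011, 3, 1, 2, 0, 0),
    "VOL002": datetime(2011, 3, 1, 2, 0, 0),
    "VOL003": datetime(2011, 2, 28, 2, 0, 0),  # restored from an older backup
}

def point_in_time_consistent(images):
    """All interrelated volumes must reflect the same single instant in time."""
    return len(set(images.values())) == 1

if not point_in_time_consistent(volume_images):
    latest = max(volume_images.values())
    stale = [vol for vol, taken in volume_images.items() if taken != latest]
    print("additional recovery steps needed for:", stale)
```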
B. Transaction Consistency
A transaction is a logical unit of work that may include any number of file or database updates. During normal processing, transaction consistency is present only:

• before any transactions have run,
• following the completion of a successful transaction and before the next transaction begins, and
• when the application ends normally or the database is closed.

After a failure of some kind, the data will not be transaction consistent if transactions were in flight at the time of the failure. In most cases, once the application or database is restarted, the incomplete transactions are identified and the updates relating to these transactions are either backed out, or processing resumes with the next dependent write [4].
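The following minimal sketch (not taken from the paper; the table and amounts are assumed) illustrates these conditions with Python's standard sqlite3 module: a funds transfer is one logical unit of work, and an in-flight failure causes the partial update to be backed out, returning the database to a transaction-consistent state.

```python
# Sketch: a transaction either commits completely or is rolled back, so the
# database is transaction consistent before and after the unit of work.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100), (2, 100)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds between two accounts as a single transaction."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE account SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            # A crash or exception here leaves an in-flight transaction; the
            # rollback backs out the partial update.
            conn.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
    except sqlite3.Error:
        pass  # the incomplete transaction has been backed out

transfer(conn, 1, 2, 30)
print(conn.execute("SELECT id, balance FROM account").fetchall())  # [(1, 70), (2, 130)]
```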
C. Application Consistency
It is similar to Transaction Consistency, but on a grander scale. Instead of data consistency within the scope of a single transaction, data must be consistent within the confines of many different transaction streams from one or more applications. An application may be made up of many different types of data, such as multiple database components, various types of files, and data feeds from other applications. Application consistency is the state in which all related files and databases are in sync and represent the true status of the application.

Data consistency refers to the usability of data and is often taken for granted in the single-site environment. Data consistency problems may arise even in a single-site environment during recovery situations, when backup copies of the production data are used in place of the original data [5].

In order to ensure that your backup data is usable, it is necessary to understand the backup methodologies that are in place, as well as how the primary data is created and accessed. Another very important consideration is the consistency of the data once the recovery has been completed and the application is ready to begin processing.

In order to appreciate the integrity of your data, it is important to understand the dependent write process. This occurs within individual programs, applications, application systems and databases. A dependent write is a data update that is not to be written until a previous update has been successfully completed. In large systems environments, the logic that determines the sequence in which systems issue writes is controlled by the application processing flow and supported by basic system functions [6].

By and large we take these synchronization features for granted and do not give much thought to how they all work together to protect both the integrity and consistency of the data. It is the integrity of the data and the various systems that allows applications to restart after a power failure or other unscheduled event.
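As a hedged sketch of the dependent write rule described above (an added illustration; the file names and payload are assumed), the second write below is not issued until the first has been forced to stable storage:

```python
# Sketch: a dependent write is started only after the update it depends on
# has been durably completed.
import json
import os

def durable_write(path, payload):
    """Write payload to path and force it to stable storage before returning."""
    with open(path, "w") as f:
        json.dump(payload, f)
        f.flush()
        os.fsync(f.fileno())  # do not return until the bytes are on disk

def apply_update(data_file, log_file, update):
    durable_write(data_file, update)                     # write 1: the data itself
    durable_write(log_file, {"applied": update["id"]})   # write 2: depends on write 1

apply_update("balance.json", "journal.json", {"id": 42, "balance": 170})
```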
 
III. DATA LOSS VS. DATA CONSISTENCY
How does one reconcile the possibility of lost data against the integrity and consistency of the data? Oftentimes, traditional backups were created while files were being updated. Backups created in this fashion came to be called "fuzzy backups", since neither the consistency nor the integrity of the data could be assured.

Is it a better idea to capture as many updates as possible, even if the end result is not consistent? Let us consider this point within the confines of a "typical" large systems data center. For the sake of discussion, let us assume that there are many applications sharing data on hundreds of logical volumes in many thousands of data sets. What happens to the integrity of the data if some updates are applied and others are not? Should this occur, the data is in an artificial state, one that is neither time, transaction nor application consistent. When the applications are restarted, it is likely that some data will be duplicated, while other data will still be missing. The difficulty here is in identifying which updates were successful, which updates caused erroneous results and which updates are missing. In all cases it is preferable to have time consistent data, even if a few partial transactions are lost or rolled back in the process.

Data loss can be defined as data that is lost and cannot be recovered by other means. Often, individual transactions or files can be restored or recreated, which is inconvenient but does not represent a true loss of data. Even in cases where some transactional data cannot be recreated or recovered by the data center support teams, it can sometimes be re-entered by the end user if necessary.

If considering an asynchronous Business Continuity and Disaster Recovery solution, it is important to understand that some updates may be lost in flight. The greater consideration, however, is that the asynchronous solution you select provides time consistent data for all of your interrelated applications. In this way, recovery is similar to the process necessary to achieve Transaction and Application Consistency following an outage at the primary site.

Data loss does not imply a loss of data integrity. However, given a choice, most organizations will protect data consistency, for example by ensuring that bank deposits and withdrawals occur in the proper sequence so that account balances reflect a consistent picture at any given point in time. This is preferable to processing transactions out of sequence, or, to use the banking example again, to recording the withdrawal and not the preceding deposit [7].
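A small sketch (added here; the amounts are assumed) shows why the ordering matters: replaying the update log in its original sequence keeps every intermediate balance consistent, while applying the withdrawal ahead of its deposit exposes a state that never existed at the primary site.

```python
# Sketch: replaying a bank-account update log in sequence vs. out of sequence.
log = [("deposit", 100), ("withdraw", 80)]  # order produced at the primary site

def replay(updates, balance=0):
    states = []
    for kind, amount in updates:
        balance += amount if kind == "deposit" else -amount
        states.append(balance)  # each intermediate state is visible to readers
    return states

print(replay(log))             # [100, 20] -> consistent at every point in time
print(replay(reversed(log)))   # [-80, 20] -> withdrawal applied before its deposit
```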
IV. THE BACKUP PROBLEM - AN OVERVIEW
For a set of backup data to be of any value, it needs to be consistent in some fashion: Time, Transaction or Application consistency is required. For an individual data set, one with no dependencies on any other data, this can be accomplished by creating a simple Point in Time copy of the data and ensuring that the data is not updated during the backup process [8].

At first glance, this appears to be a relatively simple thing to accomplish, at least for an individual data set. However, if this data set is being updated by a critical on-line application, there may never be an opportunity to create a consistent backup copy without temporarily halting the critical application. With today's dependence on 24x7 processing, the opportunities for even temporarily interrupting critical applications to create a backup "window" are seldom available [9].

As this problem became more prevalent, various methods were used to attempt to address the situation. One of these was to create a "fuzzy" backup of the data, that is, to create the backup copy while updates were allowed to continue. Various utilities were used to perform this "backup while open" (BWO), but they all shared the attribute that the backup copy of the data may or may not be useable. If no additional actions were taken to validate and ensure the consistency of the data, any use of this backup data was predicated on the hope that "some data is better than nothing", and it generally produced unpredictable and/or unrepeatable results.

In fact, there are three different possible outcomes, should this fuzzy backup be restored:
 
1. The data is accidentally consistent and useable. This is a happy circumstance that may or may not be repeatable.
 
2. The data is not consistent and not useable. A subsequent attempt to use the data detects the errors and abnormally ends subsequent processing.
 
3. The data is NOT consistent, but does not cause an ABEND and happens to be useable to the application. It is used by subsequent processing, and any data errors go undetected and uncorrected. This is the worst possible outcome.

One of the first things one might notice when looking at the records contained in the backup is that they are different from the data records that were present in the file both before the backup started and immediately after the backup ended. In fact, the records contained within the backup are a completely artificial construct and do not accurately describe the contents of the file at any Point in Time. This is not a consistent backup of the data. It is neither data-consistent within itself nor time-consistent from any point in time. It is a completely artificial representation of a file that never existed [10].

It is true that different records would have been backed up if the write I/O pattern had been different, or if the backup process had been either faster or slower. The point here is that unless the backup could have been processed instantaneously (or at least in the time between two of the file write I/Os), the backup copy does not represent consistent data within the file.

In order to address this failing, various methods were developed, including transaction logging, transaction back-out and file reload with applied journal transactions, to name just a few. These methods all share the attributes of requiring extra effort (before the backup) and additional time, possibly even manual intervention, before the data can be used. More importantly, the corrective process requires an in-depth understanding of both the application and the data. These requirements dictate that a unique recovery scenario be designed for nearly each and every data set.

The integrity problem is daunting enough when viewed in the context of just these 20 records, but what about when there are interdependencies between thousands of data sets residing on hundreds (or even thousands) of volumes? In this greater context, simple data consistency within individual data sets is no longer sufficient. What is required is time consistency across all of the interdependent data. As this is impossible to achieve with traditional backup methodologies, newer technologies are required to support time consistent data.

Fortunately, there are solutions available today. For a single-site solution, FlashCopy with Consistency Groups can be used to create a consistent Point-in-Time copy that can then be backed up by traditional means [11].

To guarantee the correct results and consistency of databases, the conflicts between transactions can either be avoided, or detected and then resolved. Most of the existing mobile database concurrency control (CC) techniques use (conflict) serializability as the correctness criterion. They are pessimistic if they avoid conflicts at the beginning of transactions, optimistic if they detect and resolve conflicts right before commit time, or hybrid if they mix both. To fulfill this goal, locking, timestamp ordering (TO) and serialization graph testing can be used as either pessimistic or optimistic algorithms.
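As a minimal sketch of the pessimistic (locking) approach named above (an added illustration, not code from the paper), a reader/writer lock lets transactions hold a data object in sharing mode for reads while an update must wait for exclusive access:

```python
# Sketch: shared (read) vs. exclusive (write) locking on one data object,
# the basic building block of pessimistic concurrency control.
import threading

class RWLock:
    """Many readers may hold the lock in sharing mode; a writer needs it alone."""
    def __init__(self):
        self._readers = 0
        self._lock = threading.Lock()
        self._no_readers = threading.Condition(self._lock)

    def acquire_shared(self):
        with self._lock:
            self._readers += 1

    def release_shared(self):
        with self._lock:
            self._readers -= 1
            if self._readers == 0:
                self._no_readers.notify_all()

    def acquire_exclusive(self):
        self._lock.acquire()
        while self._readers > 0:       # wait until no transaction is reading
            self._no_readers.wait()

    def release_exclusive(self):
        self._lock.release()

lock = RWLock()
lock.acquire_shared()      # a reading transaction; others may still read
lock.release_shared()
lock.acquire_exclusive()   # an updating transaction; readers are locked out
lock.release_exclusive()
```

An optimistic variant would defer this conflict check to commit time, as the paragraph above notes for timestamp ordering and serialization graph testing.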
 
V. SECURITY IN DATABASES
Database security refers to the system, processes, and procedures that protect a database from unintended activity. Unintended activity can be categorized as authenticated misuse, malicious attacks, or inadvertent mistakes made by authorized individuals or processes. Database security is also a specialty within the broader discipline of computer security. Databases introduce a number of unique security requirements for their users and administrators. On one hand, databases are designed to promote open and flexible access to data. On the other hand, it is this same open access that makes databases vulnerable to many kinds of malicious activity. These are just a few of the database security problems that exist within organizations. The best way to
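As a hedged sketch of the conventional account-and-privilege approach mentioned in the abstract (an added illustration; the roles, table name and privilege sets are assumed, not a standard API), an application-level check can refuse operations that a role has not been granted:

```python
# Sketch: a minimal role-based privilege check in the application layer.
PRIVILEGES = {
    "analyst": {"accounts": {"SELECT"}},
    "teller":  {"accounts": {"SELECT", "UPDATE"}},
    "dba":     {"accounts": {"SELECT", "INSERT", "UPDATE", "DELETE"}},
}

def authorize(role, table, operation):
    """Return True only if the role holds the named privilege on the table."""
    return operation in PRIVILEGES.get(role, {}).get(table, set())

assert authorize("teller", "accounts", "UPDATE")
assert not authorize("analyst", "accounts", "DELETE")   # authenticated misuse is refused
```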