
IDENTIFYING WORST AND BEST DRUGS USING ASSOCIATION RULE MINING

1. S. Ganeshmoorthy, Assistant Professor, Sree Narayana Guru College, Coimbatore

2. Santhosh Kumar J.P, III BCA & IT, Sree Narayana Guru College, Coimbatore

3. Vishnu Raj P, III BCA & IT, Sree Narayana Guru College, Coimbatore
ABSTRACT

We propose in this paper a segmentation and graph-based video sequence matching method for video copy detection. Specifically, due to the good stability and discriminative ability of local features, we use the SIFT descriptor for video content description. However, matching based on SIFT descriptors is computationally expensive because of the large number of points and their high dimension. Thus, to reduce the computational complexity, we first use the dual-threshold method to segment the videos into segments with homogeneous content and extract keyframes from each segment. SIFT features are extracted from the keyframes of the segments. Then, we propose an SVD-based method to match two video frames with SIFT point set descriptors. To obtain the video sequence matching result, we propose a graph-based method. It converts the video sequence matching into finding the longest path in the frame matching-result graph under a time constraint. Experimental results demonstrate that the segmentation and graph-based video sequence matching method can detect video copies effectively. The proposed method has further advantages: it can automatically find the optimal sequence matching result from the disordered matching results based on spatial features; it can reduce the noise caused by spatial feature matching; it makes full use of the video content to detect copies; and it is adaptive to video frame rate changes. Experimental results also demonstrate that the proposed method can obtain a better tradeoff between the effectiveness and efficiency of detection.

KEY TERMS - Video copy detection, graph, SIFT feature, dual-threshold method, SVD, graph-based matching

INTRODUCTION

With the rapid development and wide application of multimedia hardware and software technologies, the cost of image and video data collection, creation, and storage is becoming increasingly low. Each day tens of thousands of videos are generated and published. Among these huge volumes of videos, there exist large numbers of copies or near-duplicate videos. According to the statistics of [1], on average, 27 percent of the videos in the search results from the Google Video, YouTube, and Yahoo! video search engines are duplicates of, or nearly duplicate to, the most popular version of a video. As a consequence, an effective and efficient method for video copy detection has become more and more important. A valid video copy detection method is based on the fact that "video itself is watermark" [2] and makes full use of the video content to detect copies. To facilitate the discussion of "video copy" in this paper, we use the definition of video copy from the TRECVID 2008 tasks.

Definition of copy video: A video V1 is transformed into another video V2 by means of various transformations such as addition, deletion, modification (of aspect, color, contrast, encoding, and so on), camcording, and so on. In the content-based copy detection task of TRECVID 2008, 10 transformations [3] are defined as below; see [4] for details. T1. Camcording; T2. Picture in picture; T3. Insertion of pattern: different patterns are inserted randomly: captions, subtitles, logo, sliding captions; T4. Strong re-encoding; T5. Change of gamma; T6, T7. Decrease in quality: blur, change of gamma (T5), frame dropping, contrast, compression ratio, white noise; T8, T9. Post production: crop, shift, contrast, caption (text insertion), flip (vertical mirroring), insertion of pattern (T3), picture in picture (the original video is in the background); T10. Combination of five random transformations among those described above.

The objective of video copy detection is to decide whether a query video segment is a copy of a video from the video dataset. A copy can be obtained by various transformations. If a video copy detection system finds a matching video segment, it returns the name of the copy video in the video database and the time stamp where the query was copied from. Fig. 1 shows the framework of content-based video copy detection. It is composed of two parts: 1) An offline step. Keyframes are extracted from the reference video database and features are extracted from these keyframes. The extracted features should be robust and effective against the transformations which the video may undergo. Also, the features can be stored in an indexing structure to make similarity comparison efficient. 2) An online step. Query videos are analyzed. Features are extracted from these videos and compared to those stored in the reference database. The matching results are then analyzed and the detection results are returned.
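The graph-based matching described in the abstract, finding the longest path in the frame matching-result graph under a time constraint, can be sketched as a dynamic program over frame-match candidates. The match list, scores, and jump threshold below are illustrative examples, not the paper's exact formulation.

```python
# Sketch of the graph-based sequence matching idea: nodes are
# (query_time, reference_time) frame matches, and an edge connects
# match a -> b when both timestamps increase and the time jump stays
# within a threshold. The highest-scoring path aligns query and copy.

def longest_match_path(matches, max_jump):
    """matches: list of (q_time, r_time, score) frame-match candidates."""
    if not matches:
        return []
    matches = sorted(matches)                  # order by query time
    n = len(matches)
    best = [m[2] for m in matches]             # best path score ending at i
    prev = [-1] * n
    for i in range(n):
        qi, ri, si = matches[i]
        for j in range(i):
            qj, rj, _ = matches[j]
            # time constraint: both timelines move forward, bounded jump
            if qj < qi and rj < ri and qi - qj <= max_jump and ri - rj <= max_jump:
                if best[j] + si > best[i]:
                    best[i] = best[j] + si
                    prev[i] = j
    # backtrack from the node with the highest accumulated score
    i = max(range(n), key=best.__getitem__)
    path = []
    while i != -1:
        path.append(matches[i])
        i = prev[i]
    return list(reversed(path))

# Disordered matches: the stray (3.0, 9.0) match violates the time
# constraint and is left off the recovered path.
path = longest_match_path(
    [(1.0, 5.0, 0.9), (2.0, 6.0, 0.8), (3.0, 9.0, 0.7), (4.0, 8.0, 0.85)],
    max_jump=2.5)
print([(q, r) for q, r, _ in path])
```

Running the example keeps the temporally consistent matches (1.0, 5.0), (2.0, 6.0), (4.0, 8.0) and skips the stray (3.0, 9.0) candidate, which is how a longest-path search filters noisy frame matches.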
RELATED WORK

As reviewed in [8], many content-based video copy detection methods have been proposed. Furthermore, copy is a subset of near duplicate. Copies have an origin, while near-duplicates may not. Specifically, two news videos on the same event from two broadcasting corporations are not copies, but near duplicates, since they deliver the same information to the audience, although some variations in the scenes may exist. Also, many methods have been proposed for near-duplicate detection.

The methods for copy and near-duplicate detection can be grouped into two types. One type of detection methods uses global descriptors. Methods based on global descriptors are carried out primarily by using spatiotemporal low-level features of the whole image; the features used include the color histogram. Specifically, Hampapur et al. compared distance measures and video sequence matching methods for video copy detection [2], [9]. They employed convolution for the motion direction feature, L1 distance for the ordinal intensity signature (OIS), and histogram intersection for the color histogram feature. The results show that the method using OIS performs better.

[Figure: OIS of a query image and of the corresponding PIP image]

Yuan et al. combined OIS with the color histogram feature as a tool for describing video sequences [10]. The work in [11] and [12], with [9] as the basis, designed a region intensity rank signature along the time sequence. Specifically, they divided each video frame along the time sequence into several blocks and proposed average gray values for each block. Then, they linked the gray values of these divided blocks separately along the time direction before using that sequence information to describe the video content. Shen et al. [13], [14] introduced a real-time near-duplicate video detection system, UQLIPS, which globally summarized each video as a single vector. Huang et al. [15] used global image features such as the color histogram and texture to represent each video frame. Wu et al. [1] adopted the color histogram in HSV color space to detect and remove the majority of duplicates of web videos.

Another type of methods is based on local descriptors. Local descriptors of points, lines, and shapes play an important role in image and video copy detection. Among them, descriptors on points are widely used. Specifically, spatiotemporal interest points were employed to classify human actions and to detect periodic movement [16]. Willems et al. [17] presented a robust content-based video copy detection method based on local spatiotemporal features. Ke et al. used local point features for near-duplicate image detection and subimage detection [18]. Law-To et al. [19] and Joly et al. [20] adopted Harris corner points [21] as feature points in video frames. The difference between their methods lies in how they describe the feature points. Specifically, Law-To et al. [19] selected four different locations in space around the interest points (i.e., the four locations are in the same frame) when describing the feature points, while Joly et al. [20] selected four different locations around the interest points in both the time and spatial domains. Besides, Law-To et al. [19] also described the trajectory characteristic of feature points and used labels (such as "background" or "movement") to label some feature points. This method can effectively improve the robustness and discriminative ability of the video signature. Similarly, Satoh et al. [22] detected duplicate scenes by using the trajectory characteristic of the feature points. Zhou et al. [23] proposed a shot-based interest point selection approach for near-duplicate search.
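The ordinal intensity signature (OIS) that several of the global-descriptor methods above rely on can be sketched in a few lines: partition a gray frame into blocks, average each block, and keep only the rank order of the averages. The 2x2 grid and the toy frame below are illustrative choices, not parameters from the cited papers.

```python
# Sketch of an ordinal intensity signature: block-average a gray frame,
# then replace the averages by their ranks. Ranks are invariant to
# monotonic brightness/contrast changes, unlike the raw averages.
import numpy as np

def ordinal_signature(frame, grid=(2, 2)):
    h, w = frame.shape
    gh, gw = grid
    means = [frame[r * h // gh:(r + 1) * h // gh,
                   c * w // gw:(c + 1) * w // gw].mean()
             for r in range(gh) for c in range(gw)]
    # double argsort turns values into ranks (0 = darkest block)
    return np.argsort(np.argsort(means))

frame = np.array([[10, 10, 200, 200],
                  [10, 10, 200, 200],
                  [90, 90, 40, 40],
                  [90, 90, 40, 40]], dtype=float)
print(ordinal_signature(frame))          # block ranks, darkest = 0
print(ordinal_signature(frame * 0.5))    # same ranks after dimming
```

Because only the ranks are kept, the signature is unchanged when the frame is uniformly dimmed, which is what makes OIS-style features robust to the brightness and contrast transformations listed in Section 1.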
3. EXISTING SYSTEM:

There are several approaches implemented in the existing system to detect adverse drug reactions. The existing system may be challenged to express its knowledge in the form of probability distributions.

 It fails to compare two different mixture treatments along with the temporal occurrence of drugs, drug-challenging reactions, and changes in order to predict the most effective treatment.
 Verification of each drug reaction based on various conditions is very difficult.
 The existing system provides only an approximate solution for treatment selection, so it is not an accurate system.
 It requires prior knowledge and is slow in reaction finding.

Drawbacks:

o A fuzzy-based system has been implemented in the existing work, but it failed to bring accuracy.
o Several top-down and bottom-up approaches proposed to detect rare association rules are based on traditional interestingness measures like support and confidence.
o One pitfall of these measures is that they simply find the statistical correlation between two distribution points. They do not indicate any temporal relationship between X and Y. In addition, they are not able to capture the causal relationships between two event sets.

4. PROPOSED SYSTEM

This chapter discusses the proposed system and its contribution. Finding causal and rare associations between two events or sets of events with relatively low frequency is very useful for various real-world applications such as the medical domain. A drug used at an appropriate dose may cause one or more adverse drug reactions (ADRs), although the probability is low. Providing this kind of causal relationship, together with a suggestion for finding the best drug, can help the user to prevent or correct negative outcomes caused by its antecedents. Mining adverse drug relationships is challenging due to the difficulty of capturing causality among events and the infrequent nature of the events of interest in these applications.

 The system aims to propose a data mining algorithm to mine CADR (chronological adverse drug reaction) signal pairs from an electronic patient database based on the new measure.
 The system performs the association rules for support calculation and performs the temporal mutations.
 The proposed technique applies a prediction method with the existing temporal nodes Bayesian model (TNBM) to data extracted from the patient and drug databases in order to explore the probabilistic and best relationships between drug resistance mutations, analyzing those rules by considering several temporally divided datasets.
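The support and confidence measures that the traditional interestingness-based approaches above rely on can be illustrated with a minimal sketch. The transaction records and the drug/reaction names are invented for illustration and are not drawn from any patient database.

```python
# Minimal support/confidence calculation over drug-reaction transactions.
# Each transaction is the set of items (drugs and reactions) observed
# together for one patient record.

def support(transactions, itemset):
    """Fraction of transactions containing every item in itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Conditional frequency of the consequent given the antecedent."""
    return (support(transactions, antecedent | consequent)
            / support(transactions, antecedent))

records = [
    {"drugA", "nausea"},
    {"drugA", "nausea", "rash"},
    {"drugA"},
    {"drugB", "rash"},
]
print(support(records, {"drugA", "nausea"}))        # 2/4 = 0.5
print(confidence(records, {"drugA"}, {"nausea"}))   # 2/3
```

Here support({drugA, nausea}) = 0.5 and confidence(drugA → nausea) ≈ 0.67; the pitfall noted above is that such counts alone say nothing about temporal order or causality between the drug and the reaction.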
In addition, prior knowledge with a genetic approach is proposed. To improve the classification and prediction accuracy, mutation and crossover functionalities have been proposed, and the proposed system performs P_GA, a prediction scheme with the slotted training dataset changing values. The system works from the existing report, deals with several attributes, improves accuracy, helps to select the best drug, and provides effective ranking.

Mining these associations is very difficult, especially when events of interest occur infrequently. For this, a new interestingness measure has been developed which finds the exclusive causal association and the infrequent association based on an experience-based genetic decision model.

Advantages:

 Finds the best treatment.
 No need of more training data.

5. EXPERIMENTAL RESULTS

In this section, we present the experimental results of the proposed graph-based video sequence matching method for video copy detection. Two key techniques are evaluated in our experiments. The first experiment examines the effectiveness of the SIFT feature point set matching method based on SVD. Second, we examine the effectiveness of the proposed graph-based video sequence matching method, compared with the traditional sequence matching method. Furthermore, we study how to determine an optimal time jump threshold.

5.1 Experiment Setting

We use the TRECVID 2008 data set for evaluation [4], which has been widely used in video copy detection. The video data set includes 438 video files, about 200 GB of data. The query videos are provided and are generated using the method in [50], [51]. Specifically, each query is constructed by taking a segment of variable length from the test video data set. The segment is embedded into a video which is not in the test data set. Then, one or more transformations are applied to the entire query video segment. In the obtained query videos, some may contain no test segment and others may contain the entire segment of one test video segment. In the TRECVID 2008 data set, there are 2,010 query videos, which are generated by extracting 201 video segments and applying 10 transformations to each video segment. In the experiment, we evaluate the detection performance for the 10 transformations. In our experiments, we use an Intel Core 2 Duo 2.53 GHz PC with 1 GB memory.

5.2 Evaluation Criteria

To evaluate the performance of video copy detection, we use the criteria [52] which have been defined by the TRECVID organization committee: minimal normalized detection cost rate (MinNDCR), copy location accuracy, computational time cost, recall, and precision. Also, to intuitively measure the copy localization accuracy, we define another criterion, copy time stamp accuracy (CTSA), as in (5).

MinNDCR: This measure is a tradeoff between the cost of missing a true positive and the cost of dealing with false positives. NDCR is defined as follows:

NDCR = PMiss + β · RFA,   (4)

where PMiss and RFA are the conditional probability of a missed copy and the false alarm rate, respectively [52]. NDCR measures the ability to detect true copies as well as to avoid false alarms. The smaller the NDCR is, the better the detection performance is. In the experiment, NDCRs for different decision thresholds are computed and the minimal NDCR, i.e., MinNDCR, is obtained for each transformation.
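The MinNDCR computation in (4) can be sketched as a sweep of the decision threshold over match scores, keeping the minimum of PMiss + β·RFA. The toy scores, the β weight, and the normalization of the false alarm rate by a trial length in hours are illustrative assumptions, not TRECVID's official parameters.

```python
# Sketch of a MinNDCR sweep: for each candidate threshold, count the
# true copies scored below it (misses) and the non-copies scored at or
# above it (false alarms), then keep the minimum combined cost.

def min_ndcr(copy_scores, noncopy_scores, beta, trial_hours):
    thresholds = sorted(set(copy_scores) | set(noncopy_scores))
    best = float("inf")
    for t in thresholds:
        p_miss = sum(s < t for s in copy_scores) / len(copy_scores)
        r_fa = sum(s >= t for s in noncopy_scores) / trial_hours  # alarms/hour
        best = min(best, p_miss + beta * r_fa)
    return best

print(min_ndcr(copy_scores=[0.9, 0.8, 0.4],
               noncopy_scores=[0.3, 0.5, 0.2],
               beta=2.0, trial_hours=10.0))
```

For these toy inputs the minimum cost, 0.2, is reached at threshold 0.4, where no copies are missed and only one false alarm remains; sweeping the threshold this way is what turns NDCR into the MinNDCR reported per transformation.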
Copy location accuracy: This measure aims to assess the accuracy of finding the exact extent of the copy in the reference video. It is measured by the F1 measure, which is defined as the harmonic mean of precision and recall.

CTSA: This measure aims to show the registration level between the detected copy location and the true copy location. CTSA is defined below, and Fig. 7 illustrates the measure.

Computational time cost: It is measured by the time used for detection. The lower the computational time, the more efficient the method.

Recall and precision: Meanwhile, we also use the standard precision and recall measures to compare the effectiveness of our proposed method with the state-of-the-art duplicate detection approaches.

5.3 Experimental Results with TRECVID 2008 Evaluation Metric

In this test, we evaluate eight copy detection methods for 10 copy types (T1-T10) with the TRECVID 2008 evaluation metric. These 10 transformations were briefly described in Section 1. Table 3 shows the parameters and settings of the eight copy detection methods. Specifically, in feature comparison, Hamming embedding and geometrical verification are described in [53]. BBF-Tree is the SIFT feature matching algorithm described in [35].

6. REFERENCES

[1] X. Wu, C.-W. Ngo, A. Hauptmann, and H.-K. Tan, "Real-Time Near-Duplicate Elimination for Web Video Search with Content and Context," IEEE Trans. Multimedia, vol. 11, no. 2, pp. 196-207, Feb. 2009.

[2] A. Hampapur and R. Bolle, "Comparison of Distance Measures for Video Copy Detection," Proc. IEEE Int'l Conf. Multimedia and Expo (ICME), pp. 188-192, 2001.

[3] TRECVID 2008 Final List of Transformations, http://www-nlpir.nist.gov/projects/tv2008/active/copy.detection/final.cbcd.video.transformations.pdf, 2008.

[4] Final CBCD Evaluation Plan TRECVID 2008 (v1.3), http://www-nlpir.nist.gov/projects/tv2008/Evaluation-cbcd-v1.3.htm, 2008.

[5] O. Küçüktunç, M. Baştan, U. Güdükbay, and Ö. Ulusoy, "Video Copy Detection Using Multiple Visual Cues and MPEG-7 Descriptors," J. Visual Comm. Image Representation, vol. 21, pp. 838-849, 2010.

[6] M. Douze, H. Jégou, and C. Schmid, "An Image-Based Approach to Video Copy Detection with Spatio-Temporal Post-Filtering," IEEE Trans. Multimedia, vol. 12, no. 4, pp. 257-266, June 2010.

[7] M. Douze, A. Gaidon, H. Jégou, M. Marszalek, and C. Schmid, TREC Video Retrieval Evaluation Notebook Papers and Slides: INRIA-LEAR's Video Copy Detection System, http://www-nlpir.nist.gov/projects/tvpubs/tv8.papers/inria-lear.pdf, 2008.

[8] J. Law-To, C. Li, and A. Joly, "Video Copy Detection: A Comparative Study," Proc. ACM Int'l Conf. Image and Video Retrieval, pp. 371-378, July 2007.

[9] A. Hampapur, K. Hyun, and R. Bolle, "Comparison of Sequence Matching Techniques for Video Copy Detection," Proc. SPIE, Storage and Retrieval for Media Databases, vol. 4676, pp. 194-201, Jan. 2002.

[10] J. Yuan, L.-Y. Duan, Q. Tian, S. Ranganath, and C. Xu, "Fast and Robust Short Video Clip Search for Copy Detection," Proc. Pacific Rim Conf. Multimedia (PCM), 2004.

[11] C. Kim and B. Vasudev, "Spatiotemporal Sequence Matching for Efficient Video Copy Detection," IEEE Trans. Circuits and Systems for Video Technology, vol. 15, no. 1, pp. 127-132, Jan. 2005.

[12] L. Chen and F.W.M. Stentiford, "Video Sequence Matching Based on Temporal Ordinal Measurement," Pattern Recognition Letters, vol. 29, no. 13, pp. 1824-1831, Oct. 2008.

[13] H.T. Shen, X. Zhou, Z. Huang, J. Shao, and X. Zhou, "UQLIPS: A Real-Time Near-Duplicate Video Clip Detection System," Proc. 33rd Int'l Conf. Very Large Data Bases (VLDB), pp. 1374-1377, 2007.

[14] R. Cheng, Z. Huang, H.T. Shen, and X. Zhou, "Interactive Near-Duplicate Video Retrieval and Detection," Proc. ACM Int'l Conf. Multimedia, pp. 1001-1002, 2009.

[15] Z. Huang, H.T. Shen, J. Shao, B. Cui, and X. Zhou, "Practical Online Near-Duplicate Subsequence Detection for Continuous Video Streams," IEEE Trans. Multimedia, vol. 12, no. 5, pp. 386-397, Aug. 2010.
[16] I. Laptev and T. Lindeberg, "Space-Time Interest Points," Proc. Int'l Conf. Computer Vision, pp. 432-439, 2003.

[17] G. Willems, T. Tuytelaars, and L.V. Gool, "Spatio-Temporal Features for Robust Content-Based Video Copy Detection," Proc. ACM Int'l Conf. Multimedia Information Retrieval (MIR), pp. 283-290, 2008.

[18] Y. Ke, R. Sukthankar, and L. Huston, "Efficient Near-Duplicate Detection and Sub-Image Retrieval," Proc. Ann. ACM Int'l Conf. Multimedia, pp. 869-876, 2004.

[19] J. Law-To, O. Buisson, V. Gouet-Brunet, and N. Boujemaa, "Robust Voting Algorithm Based on Labels of Behavior for Video Copy Detection," Proc. ACM Int'l Conf. Multimedia, pp. 835-844, 2006.

[20] A. Joly, O. Buisson, and C. Frelicot, "Content-Based Copy Retrieval Using Distortion-Based Probabilistic Similarity Search," IEEE Trans. Multimedia, vol. 9, no. 2, pp. 293-306, Feb. 2007.

[21] C. Harris and M. Stephens, "A Combined Corner and Edge Detector," Proc. Fourth Alvey Vision Conf., pp. 147-151, 1988.

[22] S. Satoh, M. Takimoto, and J. Adachi, "Scene Duplicate Detection from Videos Based on Trajectories of Feature Points," Proc. ACM Int'l Workshop Multimedia Information Retrieval, Sept. 2007.

[23] X. Zhou, X. Zhou, L. Chen, A. Bouguettaya, N. Xiao, and J.A. Taylor, "An Efficient Near-Duplicate Video Shot Detection Method Using Shot-Based Interest Points," IEEE Trans. Multimedia, vol. 11, no. 5, pp. 879-891, Aug. 2009.

[24] K. Mikolajczyk and C. Schmid, "A Performance Evaluation of Local Descriptors," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615-1630, Oct. 2005.

[25] H.T. Shen, J. Shao, Z. Huang, and X. Zhou, "Effective and Efficient Query Processing for Video Subsequence Identification," IEEE Trans. Knowledge and Data Eng., vol. 21, no. 3, pp. 321-334, Mar. 2009.

[26] H. Liu, H. Lu, and X. Xue, "SVD-SIFT for Web Near-Duplicate Image Detection," Proc. IEEE Int'l Conf. Image Processing (ICIP '10), pp. 1445-1448, 2010.

[27] D. Gibbon, "Automatic Generation of Pictorial Transcripts of Video Programs," Multimedia Computing and Networking, vol. 2417, pp. 512-518, 1995.

[28] F. Dufaux, "Key Frame Selection to Represent a Video," Proc. IEEE Int'l Conf. Image Processing, vol. 2, pp. 275-278, 2000.

[29] K. Sze, K. Lam, and G. Qiu, "A New Key Frame Representation for Video Segment Retrieval," IEEE Trans. Circuits and Systems for Video Technology, vol. 15, no. 9, pp. 1148-1155, Sept. 2005.

[30] N. Guil, J.M. González-Linares, J.R. Cózar, and E.L. Zapata, "A Clustering Technique for Video Copy Detection," Proc. Third Iberian Conf. Pattern Recognition and Image Analysis, Part I, pp. 452-458, June 2007.

[31] H. Zhang, J. Wu, and S. Smoliar, System for Automatic Video Segmentation and Key Frame Extraction for Video Sequences Having Both Sharp and Gradual Transitions, US Patent 5,635,982, June 1997.
