
International Journals of Advanced Research in Computer Science and Software Engineering
Research Article, June 2017
ISSN: 2277-128X (Volume-7, Issue-6)

Identification of Ragas in Hindustani Classical Music Using Aaroha and Avaroha

Dr. D. M. Chandwadkar*, K. K. Wagh Institute of Engineering Education & Research, Nashik, India
Dr. M. S. Sutaone, Government College of Engineering, Pune, Maharashtra, India
DOI: 10.23956/ijarcsse/V7I6/0335

Abstract— Hindustani Classical Music is one of the oldest music cultures still being performed actively. Despite the advancements in technologies related to music analysis, very little has been attempted regarding the expressiveness of Hindustani Classical Music. Ragas are the central structure of Hindustani classical music. A raga can be thought of as the sequential arrangement of notes that is capable of invoking the emotion of a song. In this paper we try to identify eighteen ragas played on three string instruments, Santoor, Sarod and Sitar, using signal processing techniques. A database consisting of the recorded Aaroha and Avaroha of these 18 ragas played by three performers is used as input to the system. The notes present in the audio file are obtained using the Harmonic Product Spectrum method of pitch detection. Using this technique we achieve about 85% accuracy, which shows that our approach, though simple, is effective in solving the problem.

Keywords— Hindustani Classical Music, Raga recognition, Aaroha-Avaroha, Swara, Pitch, Harmonic Product
Spectrum

I. INTRODUCTION
Hindustani Classical Music is one of the oldest musical traditions in the world. The subject of classical Indian music is rich, with its historical, cultural, aesthetic, theoretical, and performing facets. For the past fifty years, due to the emigration of Indians and the popularity of Indian artists, it has become widely known to international audiences. Ragas are the building blocks of Hindustani classical music. In its simplest description, a raga is a collection of notes, but it is much more than that: ragas are the melodic modes on which a Hindustani musical performance is based.
RAGA: The Melodic Framework
The most fundamental melodic concept in Hindustani classical music is raga. Raga is a melodic abstraction around
which almost all Hindustani classical music is organized. Raga, in the Sanskrit dictionary, is defined as "the act of
coloring or dyeing" (the mind in this context) and "any feeling or passion especially love, affection, sympathy, vehement
desire, interest, joy, or delight". In music, these descriptions apply to the impressions of melodic sounds on both the
artist(s) and listener(s). A raga consists of required and optional rules governing the melodic movements of notes within
a performance.
The term raga first occurs in a technical context in the Brihaddeshi [1], where it is described as "That which is a special dhwani (tune), is bedecked with swara (notes) and varna, and is colorful or delightful to the minds of the people, is said to be raga". Hence, a raga is neither a tune nor a scale; it is a set of rules which can together be called a melodic framework.
The rules of a raga can be defined by:
• The list of specific notes (swaras) that can be used while playing the raga
• The manner in which the notes are used, i.e. specific ways of ornamenting notes or emphasizing/de-emphasizing them
• The manner in which the scale is ascended (Aaroha) or descended (Avaroha)
• Optional or required musical phrases, the way in which to reveal these phrases, and/or the way to combine them
• The octave or frequency range to emphasize
• The relative pacing between the notes
• The time of day and/or season when the raga may be performed, so as to invoke the emotions of the raga for maximum impact on the mental and emotional state of the performer and listener

Observance of these rules during the performance of a raga does not aspire to be purely a technical or intellectual
exercise, but also to evoke the rasa or bhava (the experience, mood, emotion, or feeling) of the raga in both the artist and
the listener. A raga is best experienced rather than analyzed.
Any raga can be characterized by:
• Aaroha (ascending sequence of notes) and Avaroha (descending sequence of notes)
• The set of unique notes in these sequences (the scale)
• Jaati of the raga (the number of notes in the Aaroha and Avaroha)
• The most stressed note (Vadi swara) and the second most stressed note (Samwadi swara)
• The notes that are not allowed (Varjit swara)
• Pakad (catch/characteristic phrase): a set of one or two sequences
• Thaat (scale type: the swaras that make up the raga)

© www.ijarcsse.com, All Rights Reserved Page | 805


In Hindustani music, swaras are the seven notes in the scale, denoted Sa, Re, Ga, Ma, Pa, Dha and Ni. These are called Shuddha (pure) swaras. Sa and Pa are fixed swaras; the remaining five are mutable, each having one 'vikrut' (altered) version: komal (flat) 're', 'ga', 'dha', 'ni' and teevra (sharp) 'ma'. Together these account for 12 notes in an octave. We use the symbols S, R, G, M, P, D, N for notating shuddha Sa, Re, Ga, Ma, Pa, Dha, Ni respectively; r, g, d, n for komal Re, Ga, Dha, Ni respectively; and M' for teevra Ma.

II. RAGA RECOGNITION


Very little work has been done in the area of applying techniques from computational musicology and artificial
intelligence to Hindustani classical music. In order to identify ragas computationally, swara intonation, scale, note
progressions and pakad/characteristic phrases are used.
Sahasrabuddhe and Upadhye [2] model a raga as a finite automaton constructed using information codified in standard texts on classical music. A finite automaton has a set of states between which transitions take place. This approach was used to generate new samples of a raga which were technically correct and indistinguishable from compositions made by humans. Pandey et al. [3] use hidden Markov models (HMMs) to recognize ragas. They used Aaroha and Avaroha for identification of ragas, and the results were complemented with scores obtained from two pakad matching modules. The approach was tested on two ragas.
Sridhar and Geetha [4] recognized ragas by estimating the scale from a given tune and comparing it with template scales. Their test data consists of 30 tunes in 3 ragas sung by 4 artists. They use the harmonic product spectrum algorithm to extract the pitch and report 67% accuracy. Shetty and Achary [5] use a similar approach for raga recognition based on the individual swaras used in the Aaroha-Avaroha, with neural networks for classification. They report an accuracy of 95% over 90 tunes from 50 ragas, using 60 tunes as training data and the remaining 30 tunes as test data. Sinith and Rajeev [6] also used HMMs of ragas to search for musical patterns in a catalogue of monophonic Carnatic music. They built models for 6 typical music patterns corresponding to 6 ragas and report 100% accuracy in classifying the test tunes into these 6 ragas.
Chordia and Rae [7] use pitch-class profiles and bi-grams of pitches to classify ragas, with 17 ragas played by a single artist on sarod as data. They also use the harmonic product spectrum algorithm to extract the pitch, and show that bi-grams are useful in discriminating ragas having the same scale.
Belle et al [8] used swara intonation to differentiate ragas that share the same scale intervals. They evaluated the
system on 10 tunes, with 4 ragas evenly distributed in 2 distinct scale groups. A detailed survey of computational
analysis of Indian classical music related to automatic recognition of ragas is presented by Koduri et al [9].

III. INSTRUMENTS USED


A string instrument is a musical instrument that produces sound with vibrating strings, amplified by one or more of three main methods:
• Vibration of a sounding board via a bridge
• Resonance of air in a sound box, often through a sound hole
• An electric pickup feeding an instrument amplifier that drives a loudspeaker

The Indian Santoor is an ancient string instrument native to Jammu and Kashmir, with origins in Persia. It is a trapezoid-shaped hammered dulcimer, often made of walnut [10]. In ancient Sanskrit texts it has been referred to as the Shatatantri vina (100-stringed vina). The specially shaped mallets are lightweight and are held between the index and middle fingers. A typical Santoor has two sets of bridges, providing a range of three octaves. The Indian Santoor is more rectangular and can have more strings than its Persian counterpart, which generally has 72 strings. The instrument currently available in the market has 87 strings, grouped in 29 sets of 3 strings each.
The Sitar is a plucked string instrument used mainly in Indian classical music, primarily in India and to some extent in neighboring countries. The name derives from the Persian "seh" (three) and "tar" (string), although a typical sitar used in India has 17-25 strings. It derives its resonance from sympathetic strings, a long hollow neck and a gourd resonating chamber. The Sitar is also said to be derived from an Indian instrument called the Veena [10].
The Sarod is also a string instrument used mainly in Indian classical music. Along with the Sitar, it is among the most popular and prominent instruments in Hindustani (northern Indian, Bangladeshi and Pakistani) classical music. The Sarod is known for a deep, weighty, introspective sound, in contrast with the sweet, overtone-rich texture of the Sitar, and its sympathetic strings give it a resonant, reverberant quality. It is a fretless instrument able to produce the continuous slides between notes known as meend, which are important in Indian music. The Sarod is believed to have descended from the Afghan rubab, a similar instrument originating in Central Asia and Afghanistan [10]. The name Sarod roughly translates to "beautiful sound" or "melody" in Persian. It normally has 25 strings classified into three types: 4 main strings, 6 rhythm and drone strings, and 15 sympathetic strings. The instrument is played with a plectrum (a plucking aid) made from coconut shell.



IV. DATABASE GENERATION
Isolated notes covering the entire range of each instrument were recorded in studio conditions. Eighteen ragas from Hindustani Classical Music were selected, and their Aaroha and Avaroha were recorded on these instruments for raga recognition. The ragas were selected on the basis of stratified random sampling.
A high degree of variability is permitted in Hindustani classical music. The reference note (Aadhar Shadaj/tonic) is not fixed and can have different frequencies. As we are using the equal-tempered scale, the frequencies of all other swaras/notes are calculated from the frequency of the reference note. Since covering this wide expanse of possible compositions is difficult, we recorded the data with some constraints. We used a fixed frequency for the reference note: the Aadhar Shadaj is tuned to 262 Hz, which in the western system corresponds to tuning the A above middle C to 440 Hz. With this tuning, the frequencies of the various notes (swaras) become as shown in Table 1. Only one source of sound (a single instrument, solo performance) is used in each input sample, and ragas having the same scale (Upayojita swaras) are not considered. The audio files are recorded in .wav format with a sampling frequency of 44100 Hz.
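The equal-tempered frequencies in Table 1 follow from this tuning, anchored at A = 440 Hz with Sa at middle C. A minimal sketch (the function name and saptak convention are illustrative, not from the paper):

```python
# Equal-tempered swara frequencies with A4 = 440 Hz, so Sa (middle C) ~ 262 Hz.
SWARAS = ["S", "r", "R", "g", "G", "M", "M'", "P", "d", "D", "n", "N"]
A4_HZ = 440.0

def swara_frequency(swara, saptak=4):
    """Frequency of a swara; saptak 4 is the madhya saptak (Sa = middle C)."""
    semitone = SWARAS.index(swara)          # 0 = Sa ... 11 = shuddha Ni
    # Sa of the madhya saptak lies 9 semitones below A4
    n = (saptak - 4) * 12 + semitone - 9
    return A4_HZ * 2 ** (n / 12.0)

print(round(swara_frequency("P")))      # 392, matching 4_P in Table 1
print(round(swara_frequency("S", 5)))   # 523, matching 5_S
```

Rounding these values reproduces every entry of Table 1, confirming the table is a standard 12-tone equal-tempered grid.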

TABLE 1: SWARAS AND THEIR FREQUENCIES (3 OCTAVES)

Swara   Western    Mandra Saptak (3rd Octave)   Madhya Saptak (4th Octave)   Taar Saptak (5th Octave)
                   Name    Freq (Hz)            Name    Freq (Hz)            Name    Freq (Hz)
S       C          3_S     131                  4_S     262                  5_S     523
r       C# / Db    3_r     139                  4_r     277                  5_r     554
R       D          3_R     147                  4_R     294                  5_R     587
g       D# / Eb    3_g     156                  4_g     311                  5_g     622
G       E          3_G     165                  4_G     330                  5_G     659
M       F          3_M     175                  4_M     349                  5_M     698
M'      F# / Gb    3_M'    185                  4_M'    370                  5_M'    740
P       G          3_P     196                  4_P     392                  5_P     784
d       G# / Ab    3_d     208                  4_d     415                  5_d     831
D       A          3_D     220                  4_D     440                  5_D     880
n       A# / Bb    3_n     233                  4_n     466                  5_n     932
N       B          3_N     247                  4_N     494                  5_N     988
*Frequencies are rounded to the nearest integer value

Apart from the recording of isolated notes, Aaroha and Avaroha of selected 18 Ragas played by all three instruments
were also recorded. The Ragas selected are listed in Table 2.

TABLE 2: LIST OF SELECTED RAGAS WITH THEIR SCALE [11]

Name of Raga      Thaat      Jaati                  Scale (Upayojita Swaras)
Basant            Poorvi     Odhav-Sampoorna        S, r, G, M', P, d, N
Bageshri          Kafi       Shadav-Shadav          S, R, g, M, D, n
Bhairav           Bhairav    Sampoorna-Sampoorna    S, r, G, M, P, d, N
Bhairavi          Bhairavi   Sampoorna-Sampoorna    S, r, g, M, P, d, n
Chandrakauns      Bhairavi   Odhav-Odhav            S, g, M, d, N
Des               Khamaj     Odhav-Sampoorna        S, R, G, M, P, D, n, N
Kafi              Kafi       Sampoorna-Sampoorna    S, R, g, M, P, D, n
Lalit             Poorvi     Shadav-Shadav          S, r, G, M, M', d, N
Madhuwanti        Todi       Odhav-Sampoorna        S, R, g, M', P, D, N
Malkauns          Bhairavi   Odhav-Odhav            S, g, M, d, n
Miyan Malhar      Kafi       Sampoorna-Shadav       S, R, g, M, P, D, n, N
Patdeep           Kafi       Odhav-Sampoorna        S, R, g, M, P, D, N
Piloo             Kafi       Sampoorna-Sampoorna    S, R, g, G, M, P, d, D, n, N
Puria Dhanashri   Poorvi     Sampoorna-Sampoorna    S, r, G, M', P, d, N
Sohani            Marwa      Odhav-Shadav           S, r, G, M', D, N
Tilang            Khamaj     Odhav-Odhav            S, G, M, P, n, N
Todi              Todi       Sampoorna-Sampoorna    S, r, g, M', P, d, N
Yaman             Kalyan     Sampoorna-Sampoorna    S, R, G, M', P, D, N

V. RAGA RECOGNITION TECHNIQUE


Raga is neither a scale, nor a mode. It is, however, a scientific, precise, subtle, and aesthetic melodic form with its own
peculiar ascending and descending movement which consists of either a full octave, or a series of five or six notes. An
omission of a jarring or dissonant note, or an emphasis on a particular note, or the transition from one note to another,
and the use of microtones along with other subtleties, distinguishes one raga from the other.
Though this view does not give complete insight into a raga, for simplicity of analysis a raga performance can be thought of as a sequence of notes. Raga recognition then becomes a sequential pattern classification problem: the data is not an unordered set of samples; data elements occur in a spatial or temporal order, and the probability of the next data element depends crucially on the order of occurrence of the preceding elements.
Here we use a method similar to the technique used in [5]. The recognition problem is treated as a fundamental frequency detection problem. Also, instead of tracking the sequence of occurrence of the swaras in the raga sample (Aaroha-Avaroha), only the set of swaras present in the sample (scale/Upayojita swaras) is found.
Pitch detection / Fundamental frequency detection:
Pitch is a perceptual quality that describes the highness or lowness of a sound. It is related to the frequencies contained in the signal: increasing the frequency causes an increase in perceived pitch.
The pitch frequency, Fp, is defined as the frequency of a pure sine wave which has the same perceived pitch as the
sound of interest. In comparison, the fundamental frequency, F0, is defined as the inverse of the pitch period length, P0,
where the pitch period is the smallest repeating unit of a signal. For a harmonic signal this is the lowest frequency in the
harmonic series. The pitch frequency and the fundamental frequency often coincide and are assumed to be the same for
most purposes.
Pitch detectors fall into two general categories: time-domain and frequency domain [12]. The former analysis
examines the original signal, often applying filters and/or convolution to analyze the signal in its original state, amplitude
vs. time. The latter uses a transform (usually the Fast Fourier Transform, FFT) to break the signal down into its
frequency components, yielding information about its amplitude versus frequency. It then analyzes this to determine the
fundamental frequency. Both of these have advantages and disadvantages when it comes to frequency resolution and
processing time.
Time-domain methods of pitch detection include zero-crossing and autocorrelation methods. In the zero-crossing method, the times at which the signal crosses from negative to positive are stored, and the difference between consecutive crossing times is used as the period. This simple technique fails if the signal contains harmonics other than the fundamental, as they can cause multiple zero-crossings per cycle. Autocorrelation is good for detecting perfectly periodic segments within a signal; however, real instruments and voices do not create perfectly periodic signals, and there are usually fluctuations of some sort, such as frequency or amplitude variations. As we are interested in an accurate value of the pitch frequency and the signal is rich in harmonics, we used a frequency-domain method for pitch detection.
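To illustrate why the zero-crossing method works only for clean signals, a naive sketch (illustrative only; it breaks down once harmonics other than the fundamental are present):

```python
import numpy as np

def zero_crossing_pitch(signal, fs):
    """Naive pitch estimate from negative-to-positive zero crossings.
    Reliable only for clean, near-sinusoidal signals."""
    # sample indices where the signal crosses from negative to non-negative
    crossings = np.flatnonzero((signal[:-1] < 0) & (signal[1:] >= 0))
    if len(crossings) < 2:
        return 0.0
    period = np.mean(np.diff(crossings)) / fs   # average period in seconds
    return 1.0 / period

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 392.0 * t)            # a pure shuddha Pa (392 Hz)
print(round(zero_crossing_pitch(tone, fs)))     # 392
```

Adding even one strong overtone to `tone` creates extra crossings per cycle and wrecks the estimate, which is exactly the failure mode described above.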
The harmonic product spectrum (HPS) is a method for choosing which peak in the frequency domain represents the
fundamental frequency [13]. The basic idea is that if the input signal contains harmonic components then it should form
peaks in the frequency domain positioned along integer multiples of the fundamental frequency. Hence if the signal is
compressed by an integer factor i, then the ith harmonic will align with the fundamental frequency of the original signal.
The HPS involves three steps: calculating the spectrum, downsampling and multiplication. The frequency spectrum,
S1, is calculated using the STFT. S1 is then downsampled by a factor of two using re-sampling to give S2, i.e. resulting
in a frequency domain that is compressed to half its length. The second harmonic peak in S2 now aligns with the first
harmonic peak in S1. Similarly, S3 is created by downsampling S1 by a factor of three, in which the third harmonic peak
aligns with the first harmonic peak in S1. This pattern continues, with Si being S1 downsampled by a factor of i, for i up to the number of desired harmonics to compare. The resulting spectra are multiplied together, producing a maximum peak that corresponds to the fundamental frequency.
One of the limitations of HPS is that it does not perform well with small input windows, i.e. a window containing only two or three periods. Increasing the STFT length, so that the peaks remain separated, improves the result at the cost of time resolution. Overall, this method gives a good balance between pitch detection accuracy and computational time.
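The HPS computation described above can be sketched as follows (an illustrative NumPy implementation; the frame length matches the paper's 16384-sample window, while the synthetic test tone is our own assumption):

```python
import numpy as np

def hps_pitch(frame, fs, n_harmonics=5):
    """Fundamental frequency of one frame via the Harmonic Product Spectrum
    (n_harmonics=2 gives HPS 2, n_harmonics=5 gives HPS 5)."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    hps = spectrum.copy()
    for i in range(2, n_harmonics + 1):
        # compress (downsample) the spectrum by factor i and multiply element-wise,
        # so the i-th harmonic aligns with the fundamental
        ds = spectrum[::i]
        hps[:len(ds)] *= ds
    # search only the region where all harmonics contributed; skip the DC bin
    limit = len(spectrum[::n_harmonics])
    peak = np.argmax(hps[1:limit]) + 1
    return peak * fs / len(frame)

fs = 44100
t = np.arange(16384) / fs
# synthetic harmonic tone at 262 Hz (Sa) with five harmonics
frame = sum(np.sin(2 * np.pi * 262 * k * t) / k for k in range(1, 6))
print(hps_pitch(frame, fs))   # close to 262 Hz (bin resolution fs/16384 ~ 2.7 Hz)
```

The estimate is quantized to the FFT bin spacing, which is why the paper's large window (16384 samples) matters: it keeps the bin width below the spacing between adjacent swara frequencies.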

Test Procedure:
The following procedure is used to obtain the Scale/Upayojita swaras from the raga sample under consideration:
• The input data is windowed with a window size of 16384 samples (371 ms) and a hop size of 8192 samples (185 ms)
• A Hanning window is used
• The fundamental frequency/pitch of each window is obtained using the Harmonic Product Spectrum (HPS) method
• The HPS method is used with two variations. In the first case (HPS 2), the spectrum is downsampled up to the second level only, i.e. the original spectrum is multiplied with the spectrum downsampled by 2. In the second case (HPS 5), downsampling is carried out up to five levels, i.e. the original spectrum is multiplied with the spectra downsampled by 2, 3, 4 and 5
• Using these frequencies and Table 1, the notes/swaras present in the data are identified
• The notes are marked in a predefined format (template):

S r R g G M M' P d D n N

• A 1 is put in the position of each note present in the wave file; a 0 is put if it is absent
• This note sequence is compared with the standard note sequences of the various ragas (template matching) to identify the raga
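The note-mapping and template-matching steps can be sketched as follows. The scales shown are taken from Table 2 (only three of the 18 ragas are listed here), and the closest-match fallback by set difference is our assumption; an exact match gives a difference of zero:

```python
import math

SWARAS = ["S", "r", "R", "g", "G", "M", "M'", "P", "d", "D", "n", "N"]
SA_HZ = 262.0   # the Aadhar Shadaj used for the recordings

# Scales (Upayojita swaras) for a few of the 18 ragas, from Table 2
RAGA_SCALES = {
    "Yaman":    {"S", "R", "G", "M'", "P", "D", "N"},
    "Bhairavi": {"S", "r", "g", "M", "P", "d", "n"},
    "Malkauns": {"S", "g", "M", "d", "n"},
}

def freq_to_swara(freq):
    """Map a detected pitch to the nearest swara, folding all octaves onto one."""
    semitone = round(12 * math.log2(freq / SA_HZ)) % 12
    return SWARAS[semitone]

def identify_raga(pitches):
    """Pick the raga whose scale best matches the set of detected swaras
    (smallest symmetric set difference; 0 means an exact template match)."""
    detected = {freq_to_swara(f) for f in pitches}
    return min(RAGA_SCALES, key=lambda r: len(RAGA_SCALES[r] ^ detected))

# Pitches (Hz) detected from an Aaroha-Avaroha of Malkauns (S g M d n S' ...)
pitches = [262, 311, 349, 415, 466, 523, 466, 415, 349, 311, 262]
print(identify_raga(pitches))   # Malkauns
```

Because the template records only which swaras occur, ragas sharing the same scale cannot be separated this way, which is why such ragas were excluded from the database.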

Results of Raga Recognition:


Table 3 shows the results of raga recognition using these two methods:

TABLE 3: RESULTS OF RAGA RECOGNITION FOR RAGAS PLAYED BY THE THREE INSTRUMENTS

                                  Santoor          Sarod            Sitar
Sr. No.  Raga                     HPS2    HPS5     HPS2    HPS5     HPS2    HPS5
1        Basant                   C       C        C       C        C       C
2        Bageshri                 C       C        C       C        IC      IC
3        Bhairav                  IC      C        C       C        C       C
4        Bhairavi                 C       C        C       C        C       C
5        Chandrakauns             C       C        C       C        IC      IC
6        Des                      C       C        C       C        IC      IC
7        Kafi                     C       C        C       C        C       C
8        Lalit                    C       C        IC      C        IC      C
9        Madhuwanti               C       C        IC      IC       IC      IC
10       Malkauns                 C       C        C       C        C       C
11       Miyan Malhar             C       C        IC      C        IC      IC
12       Patdeep                  C       C        C       C        C       C
13       Piloo                    IC      IC       C       C        IC      IC
14       Puria Dhanashri          C       C        C       C        C       C
15       Sohani                   IC      C        C       IC       IC      IC
16       Tilang                   C       C        C       C        IC      IC
17       Todi                     C       C        C       C        IC      C
18       Yaman                    C       C        C       C        C       C
Correctly recognized / out of     15/18   17/18    15/18   16/18    8/18    10/18
C = Correctly recognized, IC = Incorrectly recognized

The above results were analyzed using the chi-square test at the 5% level of significance. The analysis shows that raga recognition accuracy depends on the recognition method as well as on the instrument playing the raga. The accuracy is best with the HPS 5 method and for ragas played on the Santoor, and poorest for ragas played on the Sitar.
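As a sketch of this analysis, the chi-square statistic for instrument dependence can be computed directly from the counts in Table 3 (pooling HPS 2 and HPS 5 per instrument is our assumption about how the contingency table was formed):

```python
import numpy as np

# Correct / incorrect counts per instrument, pooled over HPS 2 and HPS 5
# (Table 3: Santoor 15+17, Sarod 15+16, Sitar 8+10 correct out of 18 each)
observed = np.array([[32.0, 4.0],    # Santoor
                     [31.0, 5.0],    # Sarod
                     [18.0, 18.0]])  # Sitar

row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()          # expected counts under independence
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
print(f"chi2 = {chi2:.2f}, dof = {dof}")       # chi2 = 18.07, dof = 2
# chi2 exceeds the 5% critical value for 2 dof (5.99), so accuracy
# depends significantly on the instrument
```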
We therefore analyzed the audio files for the Sitar and observed that the accuracy is poor because of the meend and the chikari/intermittent Sa played during the performance. To overcome this problem and improve the recognition accuracy, we first performed note onset detection for the Sitar and then detected the notes using the HPS 5 method. The raga recognition accuracies obtained using the HPS 5 method with and without note onset detection are shown in Table 4.
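The paper does not specify its onset detection method; a simple energy-rise detector of the kind commonly used for plucked strings can be sketched as follows (the frame size, hop size and threshold here are illustrative assumptions):

```python
import numpy as np

def onset_frames(signal, frame_len=2048, hop=512, ratio=1.5, floor=1e-3):
    """Flag frames whose short-time energy jumps by `ratio` over the
    previous frame (a simple onset heuristic for plucked-string notes)."""
    n = (len(signal) - frame_len) // hop
    energy = np.array([np.sum(signal[i*hop:i*hop+frame_len] ** 2)
                       for i in range(n)])
    return [i for i in range(1, n)
            if energy[i] > ratio * energy[i - 1] and energy[i] > floor]

fs = 44100
sig = np.zeros(fs)
# two synthetic plucked notes: Sa (262 Hz) at 0.1 s and Ga (330 Hz) at 0.5 s
for start, f in [(0.1, 262.0), (0.5, 330.0)]:
    i0 = int(start * fs)
    dur = np.arange(fs - i0) / fs
    sig[i0:] += np.sin(2 * np.pi * f * dur) * np.exp(-5 * dur)

onsets = onset_frames(sig)
print([round(i * 512 / fs, 2) for i in onsets])   # clusters around the two note starts
```

Running the pitch detector only on frames just after each detected onset avoids the sustained meend regions and the chikari strokes, which is the intent of the onset step described above.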



TABLE 4: RESULTS OF RAGA RECOGNITION FOR SITAR WITH NOTE ONSET DETECTION

Sr. No.  Raga              HPS5    HPS5 with Note Onset Detection
1        Basant            C       C
2        Bageshri          IC      IC
3        Bhairav           C       C
4        Bhairavi          C       C
5        Chandrakauns      IC      C
6        Des               IC      IC
7        Kafi              C       C
8        Lalit             C       C
9        Madhuwanti        IC      IC
10       Malkauns          C       C
11       Miyan Malhar      IC      C
12       Patdeep           C       C
13       Piloo             IC      IC
14       Puria Dhanashri   C       C
15       Sohani            IC      C
16       Tilang            IC      IC
17       Todi              C       C
18       Yaman             C       C
Correctly recognized / out of     10/18   13/18

The accuracy of raga recognition is better using HPS 5 (with note onset detection for the Sitar) than using HPS 2. The overall accuracies are listed in Table 5.

TABLE 5: RAGA RECOGNITION ACCURACY FOR DIFFERENT METHODS

Sr. No.  Technique  Accurately recognized / out of  % Accuracy
1        HPS 2      38/54                           70.37%
2        HPS 5      46/54                           85.19%

VI. CONCLUSION
Identification of ragas in Hindustani Classical Music is a very challenging problem, as a raga is a very complex structure. In this paper, we have presented a system for automatic raga identification which uses a scale matching technique. Out of the various characteristics of a raga, we analyze the notes used (the scale) for raga identification. The Aaroha-Avaroha pattern is well defined for each raga and hence is a very useful feature for identifying the raga. The system works successfully for monophonic recordings of the Aaroha and Avaroha of these ragas played on three string instruments: Santoor, Sarod and Sitar. The maximum accuracy we obtained is about 85%. The accuracy of raga identification is quite good for the Santoor and Sarod but poor for the Sitar, because of the meend and the chikari/intermittent Sa played during the performance; using note onset detection improves this slightly. The raga recognition method can be further improved by using additional characteristics of ragas, such as pakad.
Acknowledgment
The following artists spared their valuable time for the database generation:
• Santoor: Pandit Dr. Dhananjay Daithankar
• Sarod: Pandit Praashekh Borkar
• Sitar: Pandita Ms. Jaya Jog

Pandit Sharadji Sutaone gave his valuable guidance for the selection of ragas. Mr. Mangeshji Waghmare, All India Radio, Pune, extended all his support for this activity. The recording was done at Studio Saz Sargam, Prabhat Road, Pune by Mrs. Radhika Hangekar.

REFERENCES
[1] P. Sharma and K. Vatsayan, Brihaddeshi of Sri Matanga Muni, South Asian Books, 1992.
[2] H. Sahasrabuddhe and R. Upadhye, "On the computational model of raag music of India," in Workshop on AI and Music: European Conference on AI, 1992.
[3] G. Pandey, C. Mishra, and P. Ipe, "Tansen: A system for automatic raga identification," in Proc. of Indian International Conference on Artificial Intelligence, 2003, pp. 1350-1363.
[4] R. Sridhar and T. Geetha, "Raga identification of Carnatic music for music information retrieval," International Journal of Recent Trends in Engineering, vol. 1, no. 1, 2009, pp. 571-574.
[5] S. Shetty and K. Achary, "Raga mining of Indian music by extracting Arohana-Avarohana pattern," International Journal of Recent Trends in Engineering, vol. 1, no. 1, 2009, pp. 362-366.
[6] M. Sinith and K. Rajeev, "Hidden Markov Model based recognition of musical pattern in South Indian classical music," in IEEE International Conference on Signal and Image Processing, Hubli, India, 2006.
[7] P. Chordia and A. Rae, "Raag recognition using pitch-class and pitch-class dyad distributions," in Proc. of ISMIR, 2007, pp. 431-436.
[8] S. Belle, R. Joshi, and P. Rao, "Raga identification by using swara intonation," Journal of ITC Sangeet Research Academy, vol. 23, 2009.
[9] G. K. Koduri, P. Rao, and S. Gulati, "A survey of raaga recognition techniques and improvements to the state-of-the-art," in Sound and Music Computing, 2011.
[10] Wikipedia.
[11] V. N. Bhatkhande, Hindusthani Sangeet Paddhati, Sangeet Karyalaya, 1934.
[12] P. de la Cuadra, A. Master, and C. Sapp, "Efficient pitch detection techniques for interactive music," in Proceedings of the International Computer Music Conference, 2001, pp. 403-406.
[13] M. R. Schroeder, "Period histogram and product spectrum: New methods for fundamental-frequency measurement," J. Acoust. Soc. Am., vol. 43, no. 4, pp. 829-834, 1968.

