(Major Project)
TOPIC: Speech Synthesizer System
Submitted By:
SPEECH SYNTHESIZER SYSTEM
Objective:
The speech synthesis project aimed to develop a gesture interface for driving
("conducting") a speech synthesis system. Four real-time, gesture-controlled
synthesis systems were developed. These synthesizers are based on formant
synthesis and include refined voice-source components: one is based on an
augmented LF model (including an aperiodic component), another on a
Causal/Anticausal Linear Model of the voice source (CALM), also augmented with
an aperiodic component. All of these systems are controlled by various gesture
devices. Informal testing and public demonstrations showed that very natural
and expressive synthetic voices can be produced in real time by certain
combinations of input device and synthesis system.
Abstract
Speech synthesis is the artificial production of human speech. A system used
for this purpose is called a speech synthesizer, and it can be implemented in
hardware or in software. A common example is a text-to-speech (TTS) converter,
which turns input text into speech; other systems convert symbolic or phonetic
representations into speech. Pieces of recorded speech stored in a database can
be concatenated to produce speech, as in railway-station announcement systems,
where fragments of speech are assembled into a complete announcement. Storing
entire words or sentences gives the best clarity in real-world applications.
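The concatenation step described above can be sketched in a few lines of Java.
This is a minimal illustration only, assuming every clip is a raw PCM fragment
in the same format (sample rate, bit depth, channels); the class and clip names
are hypothetical.

```java
import java.io.ByteArrayOutputStream;

public class AnnouncementBuilder {
    // Join pre-recorded clips back to back into one utterance.
    // Assumes all clips share the same raw PCM format.
    public static byte[] concat(byte[]... clips) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] clip : clips) {
            out.write(clip, 0, clip.length);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] train = {1, 2};     // stand-ins for recorded fragments
        byte[] platform = {3, 4};
        byte[] announcement = concat(train, platform);
        System.out.println(announcement.length);  // prints 4
    }
}
```

A real announcement system would also smooth the joins between clips, but the
core idea is exactly this back-to-back assembly of stored fragments.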
A TTS system consists of two main parts: a language-processing module and a
signal-processing module. In languages such as English, language processing is
a major part of the work. Existing TTS systems let users specify what is to be
spoken, but give little control over how it is to be spoken. The
signal-processing module then produces the speech by making appropriate
variations to the sound database, making it possible for a program to sing or
speak in the fashion one desires.
Requirements
To use the Java Speech API, a user must have certain minimum software and
hardware available. The following is a broad sample of requirements; the
individual requirements of speech synthesizers and speech recognizers vary
greatly, so users should check product requirements closely.
Software Requirements:
Platform Used: Java (JDK 1.6)

Hardware Requirements:
Processor: Intel P-IV CPU, 1.60 GHz
RAM: 1 GB minimum
Hard disk: 40 GB minimum
Network: Wi-Fi intranet in college premises
Along with the other Java Media APIs, the Java Speech API lets developers
incorporate advanced speech-based user interfaces into Java applications. Its
design goals are described in the Java Speech API documentation (reference 1
below).
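As a minimal sketch of how the Java Speech API is used, the program below
allocates a synthesizer and speaks a plain-text string. It assumes a
JSAPI-compliant speech engine (for example FreeTTS) is installed and registered
on the machine, so it will not run without one; the class name and spoken text
are illustrative.

```java
import java.util.Locale;
import javax.speech.Central;
import javax.speech.synthesis.Synthesizer;
import javax.speech.synthesis.SynthesizerModeDesc;

public class HelloSpeech {
    public static void main(String[] args) throws Exception {
        // Ask the Central class for a synthesizer matching US English.
        Synthesizer synth = Central.createSynthesizer(
                new SynthesizerModeDesc(Locale.US));

        synth.allocate();   // acquire engine resources
        synth.resume();     // leave the initial PAUSED state

        // Queue plain text for speaking; no SpeakableListener is needed here.
        synth.speakPlainText("Hello, this is the speech synthesizer system.", null);

        // Block until everything queued has been spoken, then clean up.
        synth.waitEngineState(Synthesizer.QUEUE_EMPTY);
        synth.deallocate();
    }
}
```

The allocate/resume/speak/deallocate sequence is the basic lifecycle of a JSAPI
synthesizer; a real application would typically keep one allocated synthesizer
for its whole run rather than creating one per utterance.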
References:
1. http://java.sun.com/products/java-media/speech/
2. http://tcts.fpms.ac.be/synthesis/maxmbrola/
3. http://www.disc2.dk/tools/SGsurvey/