
Synthesized speech using a small Microcontroller

Chapter 1
1. Introduction
Embedded systems play an important role in our day-to-day life. Because they perform a limited set of tasks, these systems can be highly optimised for particular needs [27]. Some embedded applications control devices with a certain amount of built-in intelligence [1]. Speech synthesis has a wide range of applications in daily life and plays an important role in signal processing. Natural, intelligible synthetic voices are in great demand in the expanding field of speech technology [4]. Speech is the natural form of human communication, and the automatic generation of speech signals on computers is commonly called speech synthesis: the artificial production of human speech [2]. A speech synthesizer can be created by concatenating pieces of recorded speech stored in a database. Text-to-speech synthesis is an important element of a speech interface, because it allows low-bandwidth text to supply the user with information in an easily accessible form. A number of research programmes in speech synthesis have taken place during the last decade [13]. In 1984 an industry expert, Mr Drinkwater, predicted that speech synthesis was going to be the next barn-burning technology [2], and in 1999 an American business magnate said that in a few years personal computers will ... talk back to us [2]. In the past (Allen et al., 1987), text-to-speech synthesis was performed from a complete knowledge of the acoustics of the target language. Different techniques are used to achieve text-to-speech synthesis [5]. The concatenative method is the simplest: speech is generated by linking pre-recorded speech segments to build syllables, words or phrases [2].

The block diagram of the system is shown in Figure 1 below.

[Figure 1 blocks: user interface (input), microcontroller, converter, speaker (output)]

Figure 1: Block diagram of the system

As the figure shows, the system has four main sections. The microcontroller takes commands from the user and controls the converter. The link between the user interface (a computer) and the microcontroller is two-way, because after receiving commands the system also responds with appropriate feedback. After the inputs are processed, the output is produced through the speaker, which is controlled by the microcontroller, as illustrated in Figure 1.

1.1) Aims

The aim of this project is to implement a user interface to a small microcontroller using synthesized speech.

1.2) Objective of the project

The objective of the project was to generate synthesized speech using a small microcontroller with a user interface. To achieve this, a user interface was implemented to control the ISD board and to drive the speech synthesis system.

1.3) Deliverables
1) Design and implement a solid-state audio record and playback system.
2) Identify a method of producing completely synthesized speech and implement it.

1.4) Advantages of the system

This system can be used in the following ways:
I. The speech synthesis system audibly communicates information to the user in cases where audio recordings would be too expensive or too large to store on disk.
II. It allows blind users to hear pre-loaded instructions used in daily life.
III. Text-to-speech is used in games to talk to the user instead of displaying a text message on the screen.
IV. It is used in railway stations to announce information about train timings.
V. It conserves digital storage space: the system is useful for phrases that would occupy too much storage space if they were pre-recorded in digital audio format [6].

Chapter 2


2. Problem Analysis
This chapter highlights the requirements for developing the system. The requirements are divided into two major sections: hardware requirements and software requirements. Based on the project deliverables, the requirements of the system are listed and explained below. The black box tests are also described at the end of this chapter.

2.1) Hardware requirements of the system


This project, synthesized speech using a small microcontroller, is a type of embedded system, so it is a combination of electronic hardware and software. As the following figure shows, the system needs electronic hardware capable of controlling the ISD board.

[Figure 2.1(a) blocks: user, microcontroller, ISD1420 board]

Figure 2.1(a): Block diagram for audio record/playback system

The following figure shows that the system also needs electronic hardware capable of speech synthesis, controlled by the microcontroller. The block diagram for synthesized speech using a microcontroller is shown in Figure 2.1(b).


[Figure 2.1(b) blocks: user, serial communication, microcontroller, speech converter, speaker]

Figure 2.1(b): Text-to-speech synthesis system block diagram

For this system a synthesizer is used to convert text to speech.

2.1.1) Speech synthesizer
A synthesizer is required to obtain speech from text, and it should be controllable by the microcontroller. Selecting the synthesizer is a challenging task: many kinds of synthesizer are available on the market, and both hardware and software techniques exist for converting text to speech. A software approach is possible but quite complex to implement, so a hardware approach was followed in this project. The synthesizer required for this project should be capable of reading English words, numbers and mathematical functions. The synthesizer chip should be lightweight, should communicate with the host processor, and should have a built-in rule database for converting text to speech.

2.1.2) Microcontroller
The microcontroller is the heart of this project; one can say it is the master, as it controls the other devices, so its selection is also a challenging task. Microchip's PIC microcontrollers are available in a large range of sizes and feature sets. In this project, priority was given to a microcontroller that can both control the operation of the ISD board and synthesize the speech itself. A smaller, lightweight microcontroller helps keep the PCB small. Internal features such as an internal oscillator and interrupts are desirable, and the device should have enough I/O pins for connecting external modules. Size and power efficiency should be considered before making a selection.

2.1.3) Speaker
A speaker is a device that converts an electrical signal into sound and is commonly used to hear audio output. In this project the speaker is used to hear the synthesized speech at the output. The speaker required for this project should be small and lightweight, with low impedance and a low power rating.

2.2) Speech synthesis methods

Speech synthesis methods are used to convert text into speech. The synthesis method used in this project should be simple and easy to implement.

2.3) Serial communication to establish a user interface

A module is required to establish a user interface to control the system. The system should be user controlled, so it communicates serially with the host PC. RS232 and the MM232R were considered for the serial communication.

2.4) Software requirements of the system

A software program is necessary for the microcontroller to control the system. MPLAB is the program development tool that enables the user to write programs for PIC microcontrollers, and it provides the C18 compiler for writing the code. The program has to control the ISD board as well as generate the synthesized speech.

2.4.1) Programming language, development and debugging tool
There are two popular languages (assembly and embedded C) available for eight-bit microcontrollers, and either can be used for this system. Embedded C is quite similar to normal C; the main difference is that embedded C programming involves device-specific initialisations and special function registers, whereas the assembly instruction set differs from one microcontroller manufacturer to another. Before writing the program it is therefore necessary to know the special function register (SFR) set of the chosen microcontroller. The MPLAB tool, together with the C18 compiler, provides the assembler and compiler. The microcontroller does not execute assembly or C source directly: the assembler and compiler convert the instructions into machine code, which is stored in the ROM of the PIC as a .hex file. MPLAB also provides debugging features, and with the MPLAB ICD2 programmer and debugger kit the generated hexadecimal file can be loaded into the microcontroller.
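As a rough illustration (not taken from the project code), the kind of SFR-level initialisation that distinguishes embedded C from desktop C looks like the sketch below; the register names are PIC18 SFRs and the delay loop count is purely illustrative.

#include <p18f1320.h>

void main(void)
{
    unsigned int i;

    ADCON1 = 0x7F;              // configure all pins as digital I/O (SFR access)
    TRISBbits.TRISB5 = 0;       // make RB5 an output

    while (1)
    {
        LATBbits.LATB5 ^= 1;    // toggle the pin
        for (i = 0; i < 50000; i++)
            ;                   // crude software delay, value illustrative only
    }
}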

2.5) Black box testing
The output of the system depends on the corresponding input. For this system the input is the user's commands and the output is the speaker. The following black box tests should be carried out:
I. The user should be able to interact with the system through the user interface.
II. The ISD board should be controlled by the microcontroller.
III. The speech should be synthesized within the microcontroller.

[Figure 2.5 blocks: user (input), system, speaker (output)]

Figure 2.5: Black box testing of the system

Chapter 3
3. Problem solutions

Based on the hardware and software requirements described in the previous chapter, solutions are given here for both the hardware and the software. According to the deliverables, the first task was to control the audio record/playback ISD board and the second was to generate synthesized speech within the microcontroller. The hardware and software solutions are given below.

3.1) PIC18F1320 microcontroller

The PIC18F1320 is an 8-bit microcontroller developed by Microchip, and it is power efficient because it uses nanoWatt technology. This microcontroller was chosen for the project because of its oscillator options and many other features [30]. The PIC18F family supports an enhanced USART module with RS485, RS232 and LIN 1.2, including automatic baud-rate detection and auto wake-up on the start bit. The PIC18F1320 offers four crystal oscillator modes (LP, XT, HS and HSPLL) and high computational performance, operating at clock speeds of up to 40 MHz. For these reasons the PIC18F1320 (Figure 3.1) was used rather than the slower PIC16F series. Further details can be found in references [30] and [27]. The device has two ports, A and B, and the port B pins are used in this project. The microcontroller is used to control the audio record/playback device and to generate the synthesized speech.

Figure 3.1: PIC18F1320

3.2) Synthesis method

Speech synthesis technologies can be broadly divided into three categories [29]:

1) Concatenative synthesis
2) Formant synthesis
3) Articulatory synthesis

3.2.1) Concatenative speech synthesis: In this technique the speech is generated by linking pre-recorded speech segments to form syllables, words, phrases or diphones. The waveform segments are stored in a database. It is the simplest method of speech synthesis [14].
3.2.2) Formant synthesis: Formants are the resonant frequencies that occur at the main resonant regions of the vocal tract for a given sound. This method consists of artificially reconstructing the formant characteristics of the sound to be produced.
3.2.3) Articulatory synthesis: Articulators are the speech organs. This method is based on computational models of the human vocal tract and gives the most natural-sounding speech, but it is complex [4].
Finally, the concatenative method was identified as the approach used for converting text to speech [2]. In this project a synthesizer chip is used for synthesizing the speech, and this chip follows the concatenative method because it is the simplest compared with the other methods.
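As a rough sketch of the idea behind concatenative synthesis (not part of the project code): pre-recorded waveform units are kept in a table and streamed to the audio output one after another to form a word or phrase. The unit structure and the play_samples() routine below are hypothetical names introduced only for illustration.

#include <stdint.h>
#include <stddef.h>

typedef struct {
    const int16_t *samples;   // pre-recorded waveform of one unit (e.g. a diphone)
    size_t         length;    // number of samples in the unit
} SpeechUnit;

// Assumed to exist elsewhere: pushes PCM samples to a DAC or audio codec.
extern void play_samples(const int16_t *samples, size_t length);

// Speak a word or phrase by playing its units back to back.
void speak(const SpeechUnit *units, size_t count)
{
    size_t i;
    for (i = 0; i < count; i++)
        play_samples(units[i].samples, units[i].length);
}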

3.3) TTS256 synthesizer chip


Figure 3.3: TTS256 speech synthesizer chip

It was very hard to find a suitable synthesizer for this task because most synthesizer chips (such as the WTS701 and SP0256) are no longer available on the market. After a long search, the TTS256 chip was found at speechchips.com. The TTS256 is based on an 8-bit microprocessor and is programmed with a built-in 600-rule letter-to-sound database. The chip is a 28-pin DIP and is compatible with PIC, BASIC Stamp and OOPic hosts [33]. It generates speech from ASCII text and can read English words, numbers and mathematical functions. For serial communication the baud rate is set to 9600. The datasheet referenced in [33] was studied. The host microcontroller can send up to 128 characters to the TTS256 via the serial input on pin 18 (RX). The text entered by the user is spoken when the TTS256 receives a new-line character or when the maximum number of characters (128) has been entered.
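Based on the behaviour described above (ASCII in, speech out, triggered by a new-line character or a full 128-character buffer), driving the chip amounts to streaming a line of text over the UART. The helper below is only a sketch written against the C18 library calls used elsewhere in this report; tts_say() is a name introduced here for illustration, not part of the TTS256 interface.

#include <usart.h>

#define TTS_MAX_CHARS 128        // TTS256 input buffer size per its datasheet

// Send one line of ASCII text to the TTS256 and trigger it to speak.
void tts_say(const char *text)
{
    unsigned char sent = 0;

    while (*text && sent < TTS_MAX_CHARS)
    {
        while (BusyUSART());     // wait until the transmitter is free
        putcUSART(*text++);
        sent++;
    }
    while (BusyUSART());
    putcUSART('\n');             // a new line makes the chip speak the buffered text
}

A call such as tts_say("hello world"); would then be spoken as soon as the new-line character arrives.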

3.4) Speaker

Figure 3.4: Speaker

The speaker selected for this project has the following features:
1) It is small and lightweight.
2) It has a low power rating (0.5 W) and an impedance of 8 ohms.

3.5) MM232R module

The MM232R is a single-chip asynchronous serial data transfer interface. It is built around the FT232RQ device and converts USB to RS232 / RS422 / RS485 serial signals. RS232 and the MM232R were the two candidate solutions for the serial communication. The MM232R supports 7 or 8 data bits and one or two stop bits, with data rates from 300 baud to 3 Mbaud (RS422/RS485) and from 300 baud to 1 Mbaud (RS232). It supports bus-powered, high bus-powered and self-powered USB configurations. RS232 could also have been used for the serial transfer [33], but it would require a MAX232 IC for the voltage level conversion, which would increase the cost of the PCB; the MM232R was therefore used instead. The signal voltage levels of the MM232R are in the range 3.3 V to 5 V.
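For reference, the standard PIC18 formula for reaching the 9600 baud rate used with this module is sketched below. This is the high-speed (BRGH = 1) asynchronous formula from the PIC18 family datasheets and is shown only as an illustration; the actual register value depends on the oscillator frequency and BRGH setting chosen in the firmware.

// Sketch: PIC18 asynchronous baud-rate generator value for BRGH = 1.
// Baud = Fosc / (16 * (SPBRG + 1))  =>  SPBRG = Fosc / (16 * Baud) - 1
// Example figures below are illustrative (8 MHz clock, 9600 baud).
#define FOSC      8000000UL
#define BAUD      9600UL
#define SPBRG_VAL ((FOSC / (16UL * BAUD)) - 1UL)   // = 51, giving about 9615 baud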

Figure 3.5: MM232R

3.6) Program language

Embedded C was used for programming the PIC, and MPLAB was used for writing both the control program and the speech synthesis program. MPLAB IDE is a Windows-based integrated development environment for programming microcontrollers. It allows the user to create and edit the source code in the editor window, and the C18 compiler within MPLAB is used to assemble, compile and link the source code [31]. The MPLAB ICD 2 debugger and programmer kit was used to load the hexadecimal code into the PIC18F1320 microcontroller.



[Figure 3.6 blocks: source file, assembler, compiler, library files, linker]

Figure 3.6: MPLAB program structure

3.7) Flow chart for the control system

The operation of the audio record and playback system is explained in the form of a flow chart, shown in the following figure.



[Flow chart: start -> initialise the system -> get data from the user -> if the command is 'r' or 'p', record or play accordingly; otherwise report a wrong command]

Figure 3.7: Flow chart for control PCB

3.8) Flow chart for the speech synthesis system



The working of the speech synthesis system is explained in the flow chart below.

[Flow chart: initialise the system and USART -> wait for data from the user -> send the text to the synthesizer and then to the speaker; if the data is 'r' or 'p', record or play accordingly; otherwise report a wrong command]
Figure 3.8: Flow chart for the speech synthesis system

3.9) Gantt chart


The idea of the project Gantt chart is to give a scheduled view of the project: what has to be done, what remains to be done, and what has been completed so far. Tasks such as reading references, analysis and project progress were divided and arranged according to the deliverables. Setting milestones in the Gantt sheet also helped to speed up the implementation of the project objectives. Some tasks were carried out at the same time; for example, the PCB was constructed while the material was being summarised and the components ordered. Ordering of some components was delayed while deciding which parts to use, and writing the software was the most time-consuming activity.

Chapter 4
4. Problem implementation



4.1) Hardware implementation of the control system

The hardware and software solutions explained in the previous chapter were implemented as described in this chapter. For some hardware problems, such as stabilisation, more than one solution was considered, and the outcomes are discussed here because it was necessary to settle on a single approach from this point onward. According to the deliverables in the first chapter, the primary aim was to control the ISD board and the second was to generate the speech synthesis system. The hardware and software for these two tasks are described separately below, and the software programs are listed in the appendices for both the audio record/playback system and the synthesized speech system. Before implementing the control system to control the ISD board, the basic operation of the ISD1420P device is explained below.

4.1.1) Working of the audio record/playback device (ISD1420P board)
The board is configured with a microphone, an on-chip oscillator, a speaker amplifier, a smoothing filter and two push buttons. The basic block diagram of the ISD1420P device is shown in figure 4.1.1(a) [10].

[Figure 4.1.1(a) blocks: play and record buttons, microphone, ISD1420P (audio record/playback), speaker]

Figure 4.1.1(a): Block diagram of the audio record/playback system

The PCB device supplied by the supervisor is shown in figure 4.1.1(b).



[Figure 4.1.1(b) labels: play switch, record switch, microphone (input), speaker (output), ISD1420P]

Figure 4.1.1(b): ISD1420P PCB

The ISD1420P device is used for the audio record/playback system. The following operating sequence demonstrates how the chip works. Recording audio: the device has two push buttons, one for recording and one for playback. When the REC push button is pressed, the active-low REC signal initiates a record cycle from the beginning of the data space [28]. While recording is in progress the red LED glows, and recording continues until the data space is full, at which point it stops. The device powers down automatically when the REC signal is pulled high again. This process is shown in figure 4.1.1(c).



Figure 4.1.1(c): The PCB when the record switch is pressed

Playing back audio: when the PLAY push button is pressed, the active-low PLAY signal initiates a playback cycle from the start of the previously recorded data. When the device reaches the EOM (end-of-message) marker it powers down automatically. While playback is in progress the green LED glows until the EOM marker is reached. This process is shown in figure 4.1.1(d) [28].

Figure 4.1.1(d): The PCB when the PLAY button is pressed
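Because the control PCB described next replaces these push buttons with microcontroller pins, the button presses can be emulated by pulling the same lines low from firmware. The sketch below is illustrative only; it assumes, as the description above suggests, that recording runs while REC is held low and that a short low pulse on PLAY starts playback through to the EOM marker. The pin and delay macros mirror those used in the control program later in this chapter.

#include <p18f1320.h>
#include <delays.h>

#define record PORTBbits.RB2                 // active-low REC net
#define play   PORTBbits.RB3                 // active-low PLAY net
#define DelayMs(ms) Delay1KTCYx(2 * (ms))    // ~1 ms per count at 8 MHz (2000 TCY per ms)

void isd_start_record(void) { record = 0; }  // hold REC low for as long as recording should run
void isd_stop_record(void)  { record = 1; }  // releasing REC ends the record cycle

void isd_play_once(void)
{
    play = 0;         // press PLAY
    DelayMs(50);      // short pulse; playback then runs to the EOM marker
    play = 1;         // release
}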

4.1.2) Design of the control PCB for the audio record/playback system
The ISD1420 device has to be operated without using its PLAY and RECORD push buttons, so a control PCB was designed. The schematic diagram for controlling the audio record and playback device was drawn in Proteus ISIS. Proteus ISIS Professional provides libraries for constructing schematic designs; the PIC18F1320, resistors (10 kΩ and 330 Ω), an LED and a capacitor were picked from the libraries, and the circuit of the control PCB for the ISD kit was constructed.
[Schematic netlist omitted; it shows the PIC18F1320, the MM232R header, the ISD board switch connector, the power-indicator LED with its 330 Ω resistor, the 10 kΩ MCLR pull-up, a 10 µF capacitor and the 5 V supply connector.]
Figure 4.1.2(a): Schematic diagram of the control PCB
1) 10 kΩ resistor to prevent the PIC from resetting
2) MM232R data transmit pin (TXD)
3) MM232R data receive pin (RXD)
4) MM232R
5) PIC18F1320 microcontroller
6) ISD1420 switch board connector
7) 330 Ω resistor to limit the LED current
8) LED power indicator
9) Power supply connector

The PIC18F1320, resistors, LED and capacitor were available in the Proteus ISIS library, but the MM232R was not, so four-pin connectors were used in its place. The 10 kΩ resistor prevents the PIC from resetting and the LED is used as a power indicator. Some additional parts were also selected, such as 2-pin Molex connectors for the power supply and for the ISD board. The PCB below is the implementation of the schematic discussed above; it was designed for controlling the audio record/playback device in Proteus ARES. The PCB layout of the audio record/playback control system is shown in figure 4.1.2(b).

[Figure 4.1.2(b) labels: ISD board connector, LED power indicator, PIC18F1320 microcontroller, filtering capacitor, MM232R, power supply connector]

Figure 4.1.2(b): PCB layout of the control PCB system

Figure 4.1.2(c) shows the top and bottom copper layers of the PCB, and the manufactured PCB is shown in figure 4.1.2(d).


Figure 4.1.2(c): Top layer of the PCB (Left) and bottom layer of the PCB (Right)

Figure 4.1.2(d): Top layer of the actual PCB (left) and bottom layer of the actual PCB (right)

The bitmap of the control PCB is shown in figure 4.1.2(e).

Figure 4.1.2(e): Bitmap of the PCB

Once the PCB artwork was finished, the next step was to drill the holes for the components. Care was taken that most tracks and soldering should go to the bottom layer. The drill diameter used was 0.8 mm. After drilling, soldering was carried out at a temperature of around 325 °C. The control PCB for the ISD board is shown in figure 4.1.2(f).

[Figure 4.1.2(f) labels: MM232R, power connector, PIC18F1320, LED power indicator, ISD connector]

Figure 4.1.2(f): Control PCB for the ISD board

4.1.3) Operation of the control system
The control PCB was designed to control the ISD1420 record/playback device. The MM232R is used for sending and receiving data [32]. Pin 5 of the MM232R is connected to pin 10 (RB4) of the PIC18F1320 so that the PIC can receive data, and pin 7 of the MM232R is connected to pin 9 (RB1) of the PIC for the data transmitted from the PIC. MM232R pins 2 and 3 are connected to the 5 V supply. Based on the command received from the user via the MM232R, the PIC controls the record and playback operations. The RB2 pin of the PIC is connected to pin 27 (active-low REC) of the ISD board and the RB3 pin is connected to pin 23 (active-low PLAY). Whenever the user enters the letter r in HyperTerminal, the MM232R passes it to the RX pin of the PIC; the PIC then drives the REC line low, enabling the board for recording, and at the same time sends the ISD board status back through the MM232R so that it is displayed in the HyperTerminal window. In the same way, if the user enters the letter p, the PIC drives the PLAY line low and the device starts playing. If any other letter is entered, the PIC reports a wrong command through the MM232R.

4.1.4) Testing of the control system
The following hardware tests were carried out on the PCB:
1) The printed circuit board was first tested for short circuits before any components were placed.
2) After the components were placed, the individual pins were checked with a digital CRO.
3) The power to the components was checked with a multimeter.
HyperTerminal was used to create the user interface for controlling the audio record/playback device, and the MM232R was used for the serial link between the host and the microcontroller. The HyperTerminal settings required to create the serial connection (the COM port to which the module is connected and a baud rate of 9600) are shown in figure 4.1.4(a).


Figure 4.1.4(a): HyperTerminal window settings

4.2) Software implementation for the control PCB

The PIC18F1320 microcontroller controls the operation of the audio record/playback device, so the PIC had to be programmed; the PIC C program was written using MPLAB. To write the program, the new project wizard was selected in MPLAB, the PIC18F1320 was chosen as the device and the C18 compiler was added. The complete PIC C program for the control PCB is listed in Appendix A. The program was written in a file saved with the extension .c, which was then added to the project as a source file [27]. For clarity the program is explained here in small sections. Since the code is written in embedded C, the initial section of the program has the header files, followed by some declarations and definitions, then the main and other functions. The MPLAB libraries provide all the header files [31]. In the configuration below HS is used; HS selects the high-speed oscillator configuration, and the oscillator is set to 8 MHz [30]. The baud rate is set to 9600 because the MM232R module works at this rate. The watchdog timer is disabled and MCLRE (master clear enable) is turned on.

/////code for controller board
//VENU GOPAL ALLA
//000540246
//Date: 25-03-2010
//UNIVERSITY OF GREENWICH
/*
1. Initialize the system
2. Initialize the USART
3. Wait for the command to arrive
4. Identify the command
5. Send command to ISD1420 board
6. Play or record sound if required
7. Go to Step 3
*/

#include <p18f1320.h>
#include <delays.h>
#include <usart.h>
#include <portb.h>
#include <stdio.h>

#pragma config OSC = HS     // high-speed oscillator, 8 MHz
#pragma config WDT = OFF    // disable watchdog timer
#pragma config LVP = OFF    // disable low-voltage programming
#pragma config MCLRE = ON   // master clear enabled

#define CLOCK_FREQ (8000000)          // 8 MHz
#define INSTR_FREQ (CLOCK_FREQ/4)     // one instruction takes 4 clock cycles
// At 8 MHz this gives 2,000,000 instruction cycles per second, i.e. 2000 TCY per millisecond,
// so DelayMs(ms) expands to Delay1KTCYx(2*ms).
#define DelayMs(ms) Delay1KTCYx((((INSTR_FREQ/1000)*ms)/1000))
#define record PORTBbits.RB2
#define play PORTBbits.RB3

The program section below is the main function. In it the character variable val is initialised to zero, the analogue inputs are disabled, and the UART serial communication is initialised. The pins RB2 and RB3 of the PIC18F1320 are configured as outputs and then driven high, because the REC and PLAY outputs are active-low signals (high means idle).

//// Main //////
void main()
{
    unsigned char val = 0;  // the character value val initialised as zero

    ADCON1 = 0x7F;          // disable analogue inputs and configure as I/O
    OSCCON = 0x70;          // set internal oscillator to 8 MHz
    INTCON2 = 0x00;         // enable PORT B pull-ups

    OpenUSART(USART_TX_INT_OFF &
              USART_RX_INT_OFF &
              USART_ASYNCH_MODE &
              USART_EIGHT_BIT &
              USART_CONT_RX, 25); // initialisation for serial communication

    TRISBbits.TRISB2 = 0;
    TRISBbits.TRISB3 = 0;

    DelayMs(500);           // power-on system ready indication
    record = 1;
    play = 1;

The program below is the main operation of the control system. Whenever a character arrives at the receiver pin (RB4 of the PIC), the statements in this section are executed. The user sends characters from the computer through the MM232R serial module. When the user enters the character r in the HyperTerminal window, the PIC receives this command and sends an active-low signal to the ISD board, which is then ready to record. The PIC immediately reports the board status ("Recording") to the MM232R, and the message is displayed in HyperTerminal. In the same way, when the user enters the character p, the PIC sends an active-low signal to the ISD board and the board plays back the audio recorded previously, while the PIC reports the status ("Playing Sound"). If any other key is entered, the PIC sends back the message "wrong command". At the end of each pass the statements play = 1, record = 1 and val = 0 are executed, because the output pins and the character variable need to be reset after the operation completes.

while(1)                       // embedded forever loop
{
    while(!DataRdyUSART());    // wait until any data arrives
    val = ReadUSART();         // get data

    switch(val){
    case 'p':
        printf("Playing Sound");
        play = 0;              // start playing
        break;
    case 'r':
        printf("Recording");
        record = 0;            // start recording
        break;
    default:
        printf("wrong command");
    }

    DelayMs(50);               // maintain status
    record = 1;
    play = 1;
    val = 0;
}//while(1)
}//main()

This completes the explanation of the control system program. The PIC C program was compiled with the C18 compiler, and the successful build for the control system is shown in figure 4.2(a).



Figure 4.2(a): Error-free build of the control system code

With the help of the ICD2 kit the program was loaded into the target device; the PIC was programmed on the ICD2 compatibility board. This process is shown in figure 4.2(b).

Figure 4.2(b): PIC programmed by using ICD2 kit


Using the MPLAB ICD2 debugger and programmer kit, the program was successfully loaded into the PIC18F1320. After the loading process completed, the MPLAB window displayed the status of the ICD2 kit, as shown in figure 4.2(c).

MPLAB ICD2 ready for next operation

Figure 4.2(c): Successful loading of the program

After the program was loaded, the microcontroller was placed on the control PCB. The complete hardware setup for controlling the audio record/playback system is shown in figure 4.2(d).



[Figure 4.2(d) labels: ISD board, HyperTerminal window, control PCB]

Figure 4.2(d): Complete hardware connections of the audio record/playback system

4.3) Hardware implementation of the speech synthesis system

The main aim of the project is to generate synthesized speech within a small microcontroller. The schematic diagram for the speech synthesis system was designed in Proteus ISIS, using the component libraries provided by Proteus ISIS Professional. The main PCB handles both tasks: it controls the ISD board and it generates the synthesized speech within the microcontroller. Combining them on one board reduces the cost and the PCB area. The schematic diagram of the speech synthesis system is shown in figure 4.3(a).


[Schematic netlist omitted; it shows the PIC18F1320, the TTS256 socket (four 7-pin SIL connectors), the MM232R header, the ISD board switch connector, the speaker connector, two LEDs with 330 Ω resistors, the 10 kΩ MCLR pull-up, a 10 µF capacitor and the 5 V supply connector.]
Figure 4.3(a): Schematic design of the speech synthesis system
1) TTS256 synthesizer chip (represented by four 7-pin single-in-line connectors because the chip was not available in Proteus)
2) LED indicator
3) Pin 5, data transmit (TX)
4) Pin 7, data receive (RX)
5) MM232R module (represented by a collection of single-in-line connectors because the chip was not available in Proteus)
6) PIC18F1320 microcontroller
7) LED power indicator
8) 2-pin Molex connectors for external connections
9) ISD board connector

The required components were selected using Proteus ISIS Professional. The PIC18F1320, resistors, LED and capacitor were available in the Proteus ISIS library, but the MM232R and the TTS256 synthesizer chip were not, so single-in-line connectors were used to represent the MM232R and four single-in-line connectors were used to construct the TTS256 footprint. The 10 kΩ resistor prevents the PIC from resetting and the LED is used as a power indicator. Some additional parts were also selected, such as 2-pin Molex connectors for the power supply and for the ISD board. The PCB below implements the schematic discussed above; it was designed for the speech generation system in Proteus ARES. The PCB layout of the speech synthesis system is shown in figure 4.3(b).

Figure 4.3(b): PCB layout of the speech synthesis system

To avoid short-circuit connections the PCB was routed on both the top and the bottom layers. For a better understanding of the layout, the top and bottom layers in ARES and of the actual PCB are shown in figures 4.3(c) and 4.3(d).


Figure 4.3(c): Top layer of the PCB (Left side) and Bottom layer of the PCB (Right side)

Figure 4.3(d): Top layer of the actual PCB (left side) and bottom layer of the actual PCB (right side)

The bitmap of the speech synthesis PCB layout is shown in figure 4.3(e).

Figure 4.3(e): Bitmap of the PCB layout



Once the PCB artwork was finished, the next step was to drill the holes for the components. The drill diameter used was 0.8 mm, and after drilling, soldering was carried out at a temperature of around 325 °C. The speech synthesis PCB with all components placed is shown in figure 4.3(f).


Figure 4.3(f): Complete hardware connections of the speech synthesis system
1) TTS256 synthesizer chip
2) LED indicator
3) LED power indicator
4) PIC18F1320 microcontroller
5) Speaker
6) MM232R serial communication module
7) Speaker connector
8) Power connector
9) ISD board connector

4.3.1) Pin allocation for the microcontroller
The microcontroller pins are allocated as follows:
  RB5      - indicator LED
  RB4 (Rx) - serial receive (MM232R)
  RB1 (Tx) - serial transmit (MM232R, TTS256)
  RB2      - recording (ISD REC)
  RB3      - playing (ISD PLAY)

4.4) Software implementation for the speech synthesis system

The PIC18F1320 microcontroller is used to generate the synthesized speech, so it had to be programmed; the PIC C program was written using MPLAB. To write the program, the new project wizard was selected in MPLAB, the PIC18F1320 was chosen as the device and the C18 compiler was added. The complete PIC C program for the speech synthesis system is listed in Appendix B. The program was written in a file saved with the extension .c, which was then added to the project as a source file. For clarity the program is explained here in small sections. As before, the program begins with the header files, followed by declarations and definitions, then the main and other functions; the MPLAB libraries provide all the header files. HS selects the high-speed oscillator configuration and the oscillator is set to 8 MHz. The baud rate is set to 9600 because the MM232R module works at this rate. The watchdog timer is disabled and MCLRE (master clear enable) is turned on.

//////////// code for generating the speech synthesis within a microcontroller///////////////

//VENU GOPAL ALLA
//UNIVERSITY OF GREENWICH
//01-03-2010
/*
1. Initialize the system
2. Initialize the USART
3. Wait for the data to arrive
4. Send the data to TTS256 chip
5. Wait for the command to arrive
6. Identify the command
7. Play or record sound if required
8. Go to Step 3
*/
#include <p18f1320.h>
#include <delays.h>
#include <usart.h>   // USART library
#include <portb.h>
#include <stdio.h>   // for printf

#pragma config OSC = HS     // high-speed oscillator, 8 MHz
#pragma config WDT = OFF    // disable watchdog timer
#pragma config LVP = OFF    // disable low-voltage programming
#pragma config MCLRE = ON   // master clear enabled

#define CLOCK_FREQ (8000000)          // 8 MHz
#define INSTR_FREQ (CLOCK_FREQ/4)     // one instruction takes 4 clock cycles
#define DelayMs(ms) Delay1KTCYx((((INSTR_FREQ/1000)*ms)/1000))

#define record PORTBbits.RB2
#define play PORTBbits.RB3
#define led PORTBbits.RB5



The program section below is the main function. In it the character variable val is initialised to zero, the analogue inputs are disabled, and the UART serial communication is initialised. The pins RB2, RB3 and RB5 of the PIC18F1320 are configured as outputs; RB2 and RB3 are then driven high because the REC and PLAY outputs are active-low signals. The LED on RB5 is used to indicate that the system is ready: at power-up it is switched on for 500 ms.

//// Main //////
void main()
{
    unsigned char val = 0;  // the character value val initialised as zero

    ADCON1 = 0x7F;          // disable analogue inputs and configure as I/O
    OSCCON = 0x70;          // set internal oscillator to 8 MHz
    INTCON2 = 0x00;         // enable PORT B pull-ups

    OpenUSART(USART_TX_INT_OFF &
              USART_RX_INT_OFF &
              USART_ASYNCH_MODE &
              USART_EIGHT_BIT &
              USART_CONT_RX, 25); // UART initialisation

    TRISBbits.TRISB2 = 0;
    TRISBbits.TRISB3 = 0;
    TRISBbits.TRISB5 = 0;   // set as outputs

    led = 1;
    DelayMs(500);           // power-on system ready indication
    led = 0;
    record = 1;
    play = 1;

The program below is the main loop for both the control function and the speech synthesis function; the two tasks (controlling the ISD board and generating synthesized speech) are combined on the main PCB. The microcontroller receives whatever data the user sends through the MM232R. With val = ReadUSART() the PIC reads the data, and with putcUSART(val) it forwards the data to the TTS256 chip, which converts the text to speech and sends it to the speaker. Whenever a character arrives at the receiver pin (RB4 of the PIC), the statements in this section are executed. When the user enters r or R in the HyperTerminal window, the PIC sends an active-low signal to the ISD board to start recording; when the user enters p or P, the PIC sends an active-low signal so that the board plays back the previously recorded audio. At the end of each pass the statements play = 1, record = 1 and val = 0 are executed, because the output pins and the character variable need to be reset after the operation completes.

while(1)                        // embedded forever loop
{
    while(!DataRdyUSART());      // wait until any data arrives
    val = ReadUSART();           // get data
    putcUSART(val);              // send data to TTS256 IC

    if (val == 'p' || val == 'P')
    {
        printf("Playing Sound"); // display on HyperTerminal
        play = 0;                // start playing
    }
    else if (val == 'r' || val == 'R')
    {
        printf("Recording");
        record = 0;              // start recording
    }
    else
    {
        printf("wrong command");
    }

    DelayMs(50);                 // maintain status
    record = 1;
    play = 1;
    val = 0;
}//while(1)
}//main()

This completes the program; it was compiled using the C18 compiler. The error-free build for the speech synthesis system is shown in figure 4.4(a).



Figure 4.4(a): Error-free build of the speech synthesis program (Venu_alla_TTS.c)

The program was loaded into the PIC18F1320 microcontroller with the help of the MPLAB ICD2 debugger and programmer kit provided by the university. Figure 4.4(b) shows the message displayed after the program was loaded into the PIC.

MPLAB ICD 2 ready for next operation

Figure 4.4(b): Successful loading of the program

4.5) Working of the speech synthesis system

HyperTerminal was used to create the user interface for controlling the ISD board and the speech synthesis system, and the MM232R provided the serial link between the host and the microcontroller for sending and receiving data. The HyperTerminal settings required to create the serial connection are shown in the following figure.

Figure 4.5(a): HyperTerminal settings for serial communication

After the program was loaded into the PIC, the microcontroller was placed on the speech synthesis PCB. The complete hardware setup of the speech synthesis system is shown in figure 4.5(b).



Figure 4.5(b): Complete hardware connections of the speech synthesis system (HyperTerminal, speaker, MM232R module, PIC18F1320 microcontroller, TTS256 speech synthesizer)

The MM232R serial communication module provides the user interface: the user types text in the HyperTerminal window and the corresponding speech is produced by the speaker. Two LEDs are used in this system, one for power indication (+5 V) and one to show whether the program has been loaded into the PIC microcontroller. The 10 kΩ resistor prevents the PIC from resetting. The final PCB performs both tasks. The data entered in HyperTerminal is sent to pin 10 (RX) of the PIC18F1320 microcontroller, and the received data is then sent on to pin 18 (RX) of the TTS256 synthesizer chip, which converts it to speech. Pin 24 of the TTS256 is connected to the speaker, pin 14 to ground and pin 28 to the 5 V supply. The text entered by the user is spoken when the TTS256 chip receives a new-line character or when the maximum number of characters (128) has been entered. The same PCB also controls the ISD board: if the user enters the letter r in HyperTerminal, the microcontroller sends an active-low signal to the ISD board and it starts recording; likewise, when the user enters the letter p, the microcontroller sends an active-low signal and the device plays back the sound previously recorded in the ISD chip.

4.6) Project expenditure

The components required to implement the speech system are listed below.

  S. No   Component                  Quantity   Cost (GBP each)
  1       PIC18F1320                 2          2.43
  2       MM232R module              2          9.99
  3       TTS256 synthesizer chip    1          18.00
  4       USB lead M/M 3M            2          2.90

The total spent on electronic components such as the PIC microcontroller, the MM232R module and the USB cable was around 30 pounds, funded by the university. The main component, the TTS256 synthesizer, was purchased personally and cost around 18 pounds. The total expenditure on the project was therefore around 50 pounds.

Chapter 5
5. Results

This chapter explains the outcomes of the system. Initially the control PCB was designed to control the audio record/playback device (ISD board); the results of the main task of the project, the speech synthesis system, are given and explained in the last section.

Based on the command entered by the user in HyperTerminal, the microcontroller controls the ISD board. When the user enters the command r, the microcontroller sends an active-low signal to the ISD board and the system starts recording. The microcontroller sends the status of the ISD board to pin 7 (RX) of the MM232R and it is displayed in HyperTerminal. The red LED on the ISD board stays lit while the system is recording. Figures 5.1(a) and 5.1(b) show this process.

[Figure annotations: 'r' entered by the user; status of the ISD board]

Figure 5.1(a): The computer receives data from the PIC when the system starts recording



[Figure annotations: LED power indicator; the red LED lights while recording]

Figure 5.1(b): ISD board while recording the voice

In the same way, when the user enters the command p in HyperTerminal, the microcontroller sends an active-low signal to the ISD board and the system starts playing. The microcontroller sends the status of the ISD board to pin 7 (RX) of the MM232R and it is displayed in HyperTerminal. The green LED on the ISD board stays lit while the system plays back the voice previously stored in the ISD1420P. Figures 5.1(c) and 5.1(d) show this process. If the user enters an invalid command, the PIC sends a message back to the computer and "wrong command" is displayed in HyperTerminal.

[Figure annotations: 'p' entered by the user; status of the ISD board]

Figure 5.1(c): The computer receives data from the PIC when the system starts playing



[Figure annotations: LED power indicator; the green LED lights during playback]

Figure 5.1(d): ISD board while playing the voice

5.2) Outcome of the speech synthesis system

The final (main) PCB was designed to control the ISD board and to run the speech synthesis system. The following figure shows the two LED indicators: one for power indication and one to check whether the PIC has been programmed.



Figure 5.2: The speech synthesis system

In the figure the LED power indicator is lit, showing that the 5 V supply is reaching the circuit; the second LED is also lit, showing that the program has been loaded into the PIC. According to the software the LED should stay on for only about 500 ms, but it remained continuously lit. In addition, when the user tried to enter characters in the HyperTerminal window nothing was displayed, and no speech came from the speaker. The system did not work, either because the software did not match the hardware connections or because of a problem in the software code. Many changes were made, both modifications to the software and alterations to some hardware connections.

Chapter 6


6) Discussion of Results

The results of the complete system were broadly as expected, but at times it did not pick up the commands entered by the user through HyperTerminal. There was also a problem with the playback of the sound; a possible cause is a compatibility issue with the TTS IC. The embedded C program was logically correct and there were no compilation errors. There was no RJ45 connector on the target board, so it was not possible to run a live debug session in MPLAB. The sound from the speaker was not very good, possibly because of the quality of the device; on the other hand it is not possible to attach a larger speaker directly to the TTS IC, as the chip cannot supply the required output current. Overall the behaviour of the complete system was reasonable, and it was capable, to some extent, of performing the tasks it was designed for. This brings the technical discussion of the project to an end.

Chapter 7


7) Conclusion

After completing this task it is evident that developing a complete system, even one that looks simple at first glance, requires a great deal of effort and dedication. Several areas of knowledge have to be combined: software skills, electronics and analytical ability. A task that appears simple can take a long time to work as intended, as happened while trying to achieve reliable communication between the microcontroller and the TTS IC. Every element has to be looked after. Analysing why something fails, or did not work as required, helps a great deal; it makes the picture clearer, brings success closer and enhances knowledge.

Chapter 8
8) Future Work



The work on the project was quite satisfactory, but there are still some shortcomings, such as the system sometimes not working at all, so some changes to the software and the electronic design are suggested. Another major addition for a future design is an on-board debugging facility, allowing the ICD kit to be attached to the main board; this could easily be done by adding an RJ connector. Other features could be added, such as an audio filter circuit that passes only the audible frequencies to the speaker (see the sketch below), together with an amplifier and volume control. Additional commands could be added in the embedded program, for example for volume control or for setting how long the TTS should stay in play or record mode. Error-monitoring features could also be added, so that any error is shown in the HyperTerminal window and even announced to the user as speech from the TTS. There is essentially no limit to the additions that could be made, since the developed project is only a basic one and many enhancements are possible.
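As a rough illustration of the audio-filter suggestion above, the corner frequency of a simple first-order RC low-pass is given by fc = 1 / (2 * pi * R * C). The short calculation below is only a sketch with illustrative component values, not a tested filter design for this board.

#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979;
    double R = 1000.0;    // 1 kOhm, illustrative value
    double C = 10e-9;     // 10 nF, illustrative value
    double fc = 1.0 / (2.0 * PI * R * C);

    printf("Cut-off frequency: %.0f Hz\n", fc);   // about 15.9 kHz, above the speech band
    return 0;
}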

Chapter 9
9) References
1. Juang, B.H. Tsuhan Chen, The past, present, and future of speech processing Volume 15, Issue 3, ISSN: 1053-5888, Pages 24-48.

2. Henton, Caroline, Challenges and Rewards in Using Parametric or Concatenative Speech Synthesis, ISSN: 1381-2416.
3. Kim, Jongkuk; Hahn, Hernsoo; Yoon, Uei-Joong; Bae, Myungjin, On a pitch alteration for speech synthesis systems, 2009, Volume 50, Issue 4, Pages 435-446, ISSN: 0929-6212.
4. Rahier, M.C.; Defraeye, P.J.; Guebels, P.-P.; Patovan, B., A 3 µm NMOS high performance LPC speech synthesizer chip, Volume 18, Issue 3, Pages 349-359, ISSN: 0018-9200.
5. Henton, Caroline, Challenges and rewards in using parametric or concatenative speech synthesis, 2002, Volume 5, Issue 2, Pages 117-131, ISSN: 1381-2416.
6. Holfelder, Wieland, Interactive remote recording and playback of multicast videoconferences, 1998, Volume 21, Issue 15, Pages 1285-1294, ISSN: 0140-3664.
7. Kim, Jong Kuk; Hahn, Hern Soo; Bae, Myung Jin, On a speech multiple system implementation for synthesis, 2009, Volume 49, Issue 4, ISSN: 0929-6212.
8. Hilt, V.; Mauve, M.; Vogel, J.; Effelsberg, W., Recording and playing back interactive media streams, Oct 2005, Volume 7, Issue 5, Pages 960-971, ISSN: 1520-9210.
9. Rath, A.K.; Meher, P.K., Design of a merged DSP microcontroller for embedded systems using discrete orthogonal transform, 2006, Pages 388-394, ISSN: 1549-3636.
10. Schroeder, E.F.; Platte, H.J.; Krahe, D., MSC: stereo audio coding with CD-quality and 256 kBit/sec, Nov 1987, Volume CE-33, Issue 4, Pages 512-519, ISSN: 0098-3063.
11. Geoff Jackson, Peter Holzmann, A Single-chip Text-to-Speech Synthesis Device Utilizing Analog Nonvolatile Multilevel Flash Storage, Nov 2002, ISSN: 0018-9200.
12. Liu, Fu-Hua; Gu, Liang; Gao, Michael, Applications of Language Modelling in Speech-to-Speech Translation, 2004, ISSN: 1381-2416.
13. Lee, C.-H.; Jung, S.-K.; Kang, H.G., Applying a speaker-dependent speech compression technique to concatenative TTS synthesizers, 2007, ISSN: 1558-7916.
14. Henton, Caroline, Challenges and Rewards in using Parametric or Concatenative Speech Synthesis, 2002, ISSN: 1381-2416.
15. Goldsmith, John, Dealing with Prosody in a text-to-speech system, 1999, ISSN: 1381-2416.
16. Sotiris Karabetsos, Pirros Tsiakoulis, Embedded unit selection Text-to-Speech synthesis, 2009, ISSN: 0098-3063.

17. Kim, Jong Kuk; Hahn, Hern Soo; Bae, Myung Jin, On a speech multiple system implementation for speech synthesis, 2009, ISSN: 0929-6212.
18. Taylor & Francis, Technical notes on Audio Recording, 2000, ISSN: 0835-1813.
19. El-Imam, Yousif A.; Don, Zuraida Mohammed, Text-to-speech conversion of standard Malay, 1999, ISSN: 1381-2416.
20. Steve Heath, Embedded System Design, Second Edition, ISBN 0-7506-5546-1.
21. Holzmann, P.; Jackson, G.B.; Raina, A.; Hung-Chuan Pai; Ming-Bing Chang; Awsare, S.V.; Engh, L.D.; Kao, O.C.; Palmer, C.R.; Chun-Mai Liu; Kordesch, A.V.; Ken Su; Hemming, M., An analog and digital record, playback and processing system using a coarse-fine programming method, 18-20 April 2001, Pages 192-195, ISBN: 0-7803-6412-0.
22. Michael J. Pont, Embedded C, Pearson International, ISBN 0173290532, 2002.
23. Rice, S.V.; Bailey, S.M., An integrated environment for audio recording, editing, and retrieval, Dec 2006, ISBN: 0-7695-2746-9.
24. Kumar, K. Raja; Ramaiah, P. Seetha, DSP and microcontroller based speech processor for auditory prosthesis, 20-23 Dec 2006, Pages 518-522, ISBN: 1-4244-0716-8.
25. Thiang, Implementation of speech recognition on MCS51 microcontroller for controlling wheelchair, 25-28 Nov 2007, Pages 1193-1198, ISBN: 978-1-4244-1355-3.
26. Sarathy, K.P.; Ramakrishnan, A.G., A research bed for unit selection based text to speech synthesis, 15-19 Dec 2008, Pages 229-232, ISBN: 978-1-4244-3471-8.
27. Mazidi, Muhammad Ali, PIC Microcontroller and Embedded Systems, Pearson International, ISBN 0136009026, 2007.
28. ISD1420P datasheet: http://pdf1.alldatasheet.com/datasheet-pdf/view/80296/ETC/ISD1420P.html
29. Speech synthesis methods: www.cse.iitd.ernet.in/esproject/docs/seminars/bakshi/Seminar1.ppt
30. PIC18F1320 datasheet: http://ww1.microchip.com/downloads/en/DeviceDoc/39605F.pdf (used to study the features of this microcontroller).
31. MPLAB IDE features: http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1406&dDocName=en019469&part=SW007002 (used to study the features and operation of MPLAB IDE).
32. MM232R datasheet: http://pdf1.alldatasheet.com/datasheet-pdf/view/202286/FTDI/MM232R.html (used to study the features and operation of the MM232R).
33. TTS256 datasheet: http://www.speechchips.com/downloads/TTS256 Datasheet prelim.pdf

(This datasheet was used to understand the TTS256 synthesizer chip.)
34. Speech synthesis process: http://home.alphalink.com.au/~derekw/pictalker/main.htm

Chapter 10
10) Appendix

10.1) Appendix A: Program code for the control system

/////code for controller board
//VENU GOPAL ALLA
//000540246
//Date: 25-03-2010
//UNIVERSITY OF GREENWICH
/*
1. Initialize the system
2. Initialize the USART
3. Wait for the command to arrive
4. Identify the command
5. Send command to ISD1420 board
6. Play or record sound if required
7. Go to Step 3
*/
#include <p18f1320.h>
#include <delays.h>
#include <usart.h>
#include <portb.h>
#include <stdio.h>

#pragma config OSC = HS     // high-speed oscillator, 8 MHz



#pragma config WDT = OFF    // disable watchdog timer
#pragma config LVP = OFF    // disable low-voltage programming
#pragma config MCLRE = ON   // master clear enabled

#define CLOCK_FREQ (8000000)          // 8 MHz
#define INSTR_FREQ (CLOCK_FREQ/4)     // one instruction takes 4 clock cycles
#define DelayMs(ms) Delay1KTCYx((((INSTR_FREQ/1000)*ms)/1000))
#define record PORTBbits.RB2
#define play PORTBbits.RB3

//// Main //////
void main()
{
    unsigned char val = 0;  // the character value val initialised as zero

    ADCON1 = 0x7F;          // disable analogue inputs and configure as I/O
    OSCCON = 0x70;          // set internal oscillator to 8 MHz
    INTCON2 = 0x00;         // enable PORT B pull-ups

    OpenUSART(USART_TX_INT_OFF &
              USART_RX_INT_OFF &
              USART_ASYNCH_MODE &
              USART_EIGHT_BIT &
              USART_CONT_RX, 25); // initialisation for serial communication

    TRISBbits.TRISB2 = 0;
    TRISBbits.TRISB3 = 0;

    DelayMs(500);           // power-on system ready indication
    record = 1;
    play = 1;

    while(1)                       // embedded forever loop
    {
        while(!DataRdyUSART());    // wait until any data arrives
        val = ReadUSART();         // get data

        switch(val){
        case 'p':
            printf("Playing Sound");
            play = 0;              // start playing
            break;
        case 'r':
            printf("Recording");
            record = 0;            // start recording
            break;
        default:
            printf("wrong command");
        }

        DelayMs(50);               // maintain status
        record = 1;
        play = 1;
        val = 0;

    }//while(1)
}//main()

10.2) Appendix B: Program code for the speech synthesis system

//////////// code for generating the speech synthesis within a microcontroller///////////////
//VENU GOPAL ALLA
//UNIVERSITY OF GREENWICH
//01-03-2010
/*
1. Initialize the system
2. Initialize the USART
3. Wait for the data to arrive
4. Send the data to TTS256 chip
5. Wait for the command to arrive
6. Identify the command
7. Play or record sound if required
8. Go to Step 3
*/
#include <p18f1320.h>
#include <delays.h>
#include <usart.h>   // USART library
#include <portb.h>
#include <stdio.h>   // for printf

#pragma config OSC = HS     // high-speed oscillator, 8 MHz
#pragma config WDT = OFF    // disable watchdog timer
#pragma config LVP = OFF    // disable low-voltage programming
#pragma config MCLRE = ON   // master clear enabled

#define CLOCK_FREQ (8000000)          // 8 MHz
#define INSTR_FREQ (CLOCK_FREQ/4)     // one instruction takes 4 clock cycles

#define DelayMs(ms) Delay1KTCYx((((INSTR_FREQ/1000)*ms)/1000))

#define record PORTBbits.RB2
#define play PORTBbits.RB3
#define led PORTBbits.RB5

//// Main //////
void main()
{
    unsigned char val = 0;  // the character value val initialised as zero

    ADCON1 = 0x7F;          // disable analogue inputs and configure as I/O
    OSCCON = 0x70;          // set internal oscillator to 8 MHz
    INTCON2 = 0x00;         // enable PORT B pull-ups

    OpenUSART(USART_TX_INT_OFF &
              USART_RX_INT_OFF &
              USART_ASYNCH_MODE &
              USART_EIGHT_BIT &
              USART_CONT_RX, 25); // initialisation for serial communication

    TRISBbits.TRISB2 = 0;
    TRISBbits.TRISB3 = 0;
    TRISBbits.TRISB5 = 0;   // set as outputs

    led = 1;
    DelayMs(500);           // power-on system ready indication
    led = 0;
    record = 1;
    play = 1;

    while(1)                        // embedded forever loop
    {
        while(!DataRdyUSART());     // wait until any data arrives
        val = ReadUSART();          // get data
        putcUSART(val);             // send data to TTS256 IC

        if (val == 'p' || val == 'P')

        {
            printf("Playing Sound");  // display on HyperTerminal
            play = 0;                 // start playing
        }
        else if (val == 'r' || val == 'R')
        {
            printf("Recording");
            record = 0;               // start recording
        }
        else
        {
            printf("wrong command");
        }

        DelayMs(50);                  // maintain status
        record = 1;
        play = 1;
        val = 0;
    }//while(1)
}//main()

