
Audio Integration and Customer Support in the Mobile World: What Does It Mean?

Being part of the multimedia audio team for telecom mobile platforms, I can say that one of the most important goals is to ensure quality of service (QoS), as our work is in the hands of many mobile users. With medium-, low- and ultra-low-cost devices, where the budget is small, the audio experience and the quality of sound are vital to proving the QoS. Even if the target is the low-cost and ultra-low-cost market, the audio development effort is comparable with the effort spent on the high-end mobile terminals (smartphones) used as reference. The main responsibility of our team is to solve issues in our area of expertise and to respond to customers' wishes to implement new features, providing both development expertise and continuous support to integrate our software correctly with their applications.

During the integration and development process, issues can occur due to DSP algorithm performance, in which case tuning experts are involved after a rigorous description of the problem. Interfacing the system and its peripherals with physical components from customer product boards raises other issues: from product to product, the customer may want to change external components, so the software must be adapted too, in order to meet the desired quality specifications. Other types of issues are strictly logic-related. The customer may provide a baseband chip equipped with a software suite and several interfaces oriented towards the end client and the integrator, but the interfaces must be understood and matched to the baseband's performance, timing requirements and physical constraints; it becomes obvious that support is needed in this area, as the end client's requests must be in accordance with the customer's products. The audio team can cover the interfacing process, as well as come up with new, compatible approaches to requests that could be categorized as hard to fulfil.

For all issues, the root cause must be precisely identified, so that the software correction or workaround does not impact other modules or the behavior of the entire system. For instance, a pop-noise issue can originate in the digital or in the analog part of the system. To identify its cause, an engineer would most likely disable the analog part of the audio system and check whether the data in the digital part is continuous and a correct representation of what is expected. If spectral analysis in the digital domain reveals discrepancies in the frequency domain, or if there are discontinuities in the expected waveform, the issue can be analyzed further in the faulty domain. Likewise, if no issue is found in the digital domain, the analog part of the system (often outside the baseband chip) can be investigated in detail, because the problem is almost certainly there. Special attention is therefore paid to the calibration and tuning of algorithms and to the timing sequence of power-ups and hardware initializations: switching an electrically loaded output driver will most likely produce an undesired effect. The solution in this case is to use sequencers which activate the analog audio output stages step by step, in the right order.
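To make the sequencer idea concrete, here is a minimal sketch in C. The step functions, register operations and settling delays are hypothetical placeholders, not actual platform code; the point is only the pattern: each analog stage is enabled in a fixed order, with time for the previous stage to settle, so the loaded output driver is never switched abruptly.

/* Simplified sketch of an analog output power-up sequencer.
 * Step names and delays are illustrative; a real driver would use
 * the platform's register map and hardware-specified timings. */

#include <stdint.h>

typedef struct {
    void     (*action)(void);   /* hardware operation for this step */
    uint32_t settle_ms;         /* time to wait before the next step */
} seq_step_t;

/* Hypothetical hardware operations (placeholders). */
static void enable_reference_voltage(void) { /* write bias register */ }
static void enable_charge_pump(void)       { /* write charge-pump register */ }
static void unmute_output_driver(void)     { /* clear the mute bit last */ }

extern void delay_ms(uint32_t ms);          /* platform-provided delay */

/* Enabling the stages in order, with settling time between steps,
 * avoids the pop caused by switching an electrically loaded driver. */
static const seq_step_t power_up_sequence[] = {
    { enable_reference_voltage, 10 },
    { enable_charge_pump,        5 },
    { unmute_output_driver,      0 },
};

void audio_output_power_up(void)
{
    for (unsigned i = 0;
         i < sizeof(power_up_sequence) / sizeof(power_up_sequence[0]); i++) {
        power_up_sequence[i].action();
        delay_ms(power_up_sequence[i].settle_ms);
    }
}

The same table-driven structure can be reversed for power-down, which is typically where the most audible pops occur.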

For issues like wrong software management by the upper layers, where the problem is not present in the analog output stage, on-chip debugging mechanisms are employed and the issue is tracked down to the responsible line(s) of code. Certain DSP algorithms may not be properly controlled, or parameters can be corrupted due to misalignment between the initialization files and the DSP parameter space, so this must always be checked. The same applies to accessory insertion and removal: the audio engineer must make sure that the accessory is correctly detected, that its tuned parameters are loaded correctly, and that exactly its list of algorithms is activated (e.g. some accessories might need different gains and different echo-cancelling parameters than the default onboard speaker and microphone).

If decoding is ongoing, using codecs like AMR or MP3, the buffering mechanism between the different software layers of the baseband must be synchronized, and it is very useful to track the transfer mechanism for debugging purposes. If timing constraints are not met (e.g. decoding takes more time than it is supposed to, or an interrupt mechanism preempts the decoder), these buffers will not behave properly, resulting in discontinuities and an overall bad audio experience; a minimal sketch of such a buffer is given at the end of this section. It is up to the audio engineers to ensure proper data and timing negotiations between the different layers in the software stack.

In the support and debug context, the audio team makes use of a wide range of equipment and software tools: audio analyzers, a head-and-torso simulator, an acoustic chamber for precise measurements, dedicated audio analysis instruments and network simulators. Debugging is faster with professional equipment. Last but not least, an oscilloscope and a logic analyzer are used when needed. For on-chip debugging, JTAG-compliant TRACE32 tools from Lauterbach are used, which offer all the advantages of an in-circuit debugger (stepping, setting breakpoints, memory dumping, etc.) and provide an in-depth look at the system, and at the audio framework in particular. When these debugging options are combined with additional software traces, or with hardwired tracing mechanisms such as those of the DSP, an overview of the whole system under test can be achieved.

As standards compliance is a main goal, the audio team is also involved in the development and maintenance of features that are overlaid on the audio framework and its existing support. For instance, several entities require that mobile devices comply with the eCall/ERA-GLONASS safety standard (automatic emergency call in case of a car accident), or with the TTY (Text Telephony) transport mechanism, which lets people who are deaf, hard of hearing, or speech-impaired use the telephone by typing messages back and forth instead of talking and listening. The frameworks that the audio team handles do not support these features out of the box, but using the already existing algorithm support, extra features can be implemented in line with 3GPP standards or other proprietary requirement sets. These algorithms and transport protocols that use the audio stack can be implemented on a wide range of devices, from mobile phones to embedded machine-to-machine (M2M) devices.
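As an illustration of the buffering problem mentioned above, the following is a minimal sketch of a single-producer/single-consumer ring buffer between a decoder and the audio output interrupt. All names and sizes are assumptions made for the example; a real baseband would derive the buffer size from the codec frame length and the output transfer period. The sketch also shows why underruns are worth counting: a counter readable in a trace is often the fastest way to correlate an audible glitch with the software layer that caused it.

/* Illustrative single-producer/single-consumer ring buffer between a
 * decoder (producer) and the audio driver interrupt (consumer). */

#include <stdint.h>
#include <stdbool.h>

#define RING_SAMPLES 4096   /* power of two, so wrap-around is a cheap mask */

typedef struct {
    int16_t           data[RING_SAMPLES];
    volatile uint32_t write_idx;   /* advanced only by the decoder */
    volatile uint32_t read_idx;    /* advanced only by the driver ISR */
    volatile uint32_t underruns;   /* exposed to the debug trace */
} ring_t;

static uint32_t ring_level(const ring_t *r)
{
    return r->write_idx - r->read_idx;   /* valid with free-running indices */
}

/* Decoder side: returns false if there is no room, meaning the consumer
 * is stalled and timing constraints are already being violated. */
bool ring_push(ring_t *r, const int16_t *samples, uint32_t n)
{
    if (RING_SAMPLES - ring_level(r) < n)
        return false;
    for (uint32_t i = 0; i < n; i++)
        r->data[(r->write_idx + i) & (RING_SAMPLES - 1)] = samples[i];
    r->write_idx += n;
    return true;
}

/* Driver side: an underrun (decoder too slow, or preempted by an
 * interrupt) is rendered as silence but also counted, so the audible
 * discontinuity can be matched against software traces afterwards. */
void ring_pop(ring_t *r, int16_t *out, uint32_t n)
{
    for (uint32_t i = 0; i < n; i++) {
        if (ring_level(r) == 0) {
            out[i] = 0;            /* play silence instead of stale data */
            r->underruns++;
            continue;
        }
        out[i] = r->data[r->read_idx & (RING_SAMPLES - 1)];
        r->read_idx++;
    }
}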
All of the above is achieved in a monitored process, with varying visibility levels for customers, end users and the committed resources. These have an impact on response times and customer satisfaction, which is why, as with most large-scale projects, the audio team uses a specific development environment: a versioning tool from IBM Rational (ClearCase), assuring concurrent work on the source code, and a bug-tracking tool (DDTS) that keeps the history of all requests and changes. This is how stringent market timing demands are met and customer satisfaction is delivered in a controlled process.
