Speech technology basics

Introduction to speech technology


  1. Speech Technology - Basics. Presenter: Eshwari.G
  2. What is DSP? • Digital signal processing is the processing of signals in digital form.
  3. SIGNAL. A signal is a description of how one parameter varies with another parameter. Continuous signals are written x(t); discrete signals are written x[n].
  4. DIGITAL SIGNAL. Digital signals are discrete signals x[n] whose amplitude values are also quantized to a finite set of levels.
  5. ANALOG TO DIGITAL CONVERSION. Analog-to-digital conversion is an electronic process in which a continuously variable (analog) signal is changed, without altering its essential content, into a multi-level (digital) signal. The input to an analog-to-digital converter (ADC) is a voltage that varies among a theoretically infinite number of values; examples are sine waves and the waveforms representing human speech. The output of the ADC, in contrast, has defined levels or states. The simplest digital signals have only two states and are called binary.
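As a small illustration of the sampling and quantization an ADC performs, here is a minimal Python sketch (not from the slides; the 8 kHz rate, 3-bit resolution, and 440 Hz test tone are all assumed values):

```python
# Illustrative sketch: sampling and quantizing an "analog" sine wave.
import numpy as np

fs = 8000                        # sampling rate in Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of samples
x = np.sin(2 * np.pi * 440 * t)  # 440 Hz sine wave, amplitude in [-1, 1]

bits = 3
levels = 2 ** bits               # 8 discrete output states
# Map [-1, 1] onto integer codes 0..levels-1 (uniform quantization).
codes = np.round((x + 1) / 2 * (levels - 1)).astype(int)
print(codes[:10])                # the multi-level digital signal
```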
  6. Advantages of digital signals • First, digital signals can be stored easily. • Second, digital signals can be reproduced exactly; all you have to do is be sure that a zero doesn't get turned into a one or vice versa. • Third, digital signals can be manipulated easily. Since the signal is just a sequence of zeros and ones, and since a computer can do anything specifiable to such a sequence, you can do a great many things with digital signals. What you are doing is called digital signal processing.
  7. BASIC STRUCTURE OF A DIGITAL SIGNAL PROCESSING SYSTEM. [Block diagram: analog input signal → pre-amplifier → amplified analog signal → A/D converter → digitized signal → digital signal processor running software (algorithm) → processed digital signal → D/A converter → processed analog signal → final amplifier → analog output signal.]
  8. DIGITAL TO ANALOG CONVERSION
  9. BASIC STRUCTURE OF A DIGITAL SIGNAL PROCESSING SYSTEM. [The block diagram from slide 7 is shown again.]
  10. Synthesis & Decomposition. The process of combining signals is called synthesis. Decomposition is the inverse operation of synthesis, where a single signal is broken into two or more additive components.
  11. Synthesis & Decomposition. 2041 × 4 = ? The number 2041 can be decomposed into 2000 + 40 + 1. Each of these components can be multiplied by 4, then synthesized to find the final answer: 8000 + 160 + 4 = 8164. The goal of this method is to replace a complicated problem with several easy ones.
  12. Synthesis & Decomposition. • There are infinite possible decompositions for any given signal, but only one synthesis. • For example, the numbers 15 and 25 can only be synthesized (added) into the number 40. • In comparison, the number 40 can be decomposed into 1+39, 2+38, 30+10, etc.
  13. SUPERPOSITION. Divide-and-conquer strategy: the signal being processed is broken into simple components, each component is processed individually, and the results are reunited.
  14. SUPERPOSITION [illustration]
  15. DECOMPOSITION. There are two main ways to decompose signals in signal processing: impulse decomposition and Fourier decomposition.
  16. IMPULSE DECOMPOSITION. Impulse decomposition breaks an N-sample signal into N component signals, each containing N samples. Each of the component signals contains one point from the original signal, with the remainder of the values being zero. A single nonzero point in a string of zeros is called an impulse.
  17. IMPORTANCE OF IMPULSE DECOMPOSITION. Impulse decomposition is important because it allows signals to be examined one sample at a time. Similarly, systems are characterized by how they respond to impulses. By knowing how a system responds to an impulse, the system's output can be calculated for any given input. This approach is called convolution.
  18. Fourier Decomposition. Any N-point signal can be decomposed into N+2 signals, half of them sine waves and half of them cosine waves. The lowest-frequency cosine wave (called xC0[n] in this illustration) makes zero complete cycles over the N samples, i.e., it is a DC signal.
  19. Fourier Decomposition. The next cosine components, xC1[n], xC2[n], and xC3[n], make 1, 2, and 3 complete cycles over the N samples, respectively. Since the frequency of each component is fixed, the only thing that changes for different signals being decomposed is the amplitude of each of the sine and cosine waves.
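As a hedged illustration of this decomposition, NumPy's real FFT returns exactly these cosine and sine amplitudes; the 16-point test signal below is an assumed example:

```python
# Sketch: the real DFT of an N-point signal yields the amplitudes of the
# cosine (real part) and sine (imaginary part) components xC_k and xS_k.
import numpy as np

N = 16
n = np.arange(N)
x = 1.0 + 2.0 * np.cos(2 * np.pi * 2 * n / N)  # DC plus a 2-cycle cosine

X = np.fft.rfft(x)           # N/2 + 1 complex coefficients
cos_amp = X.real * 2 / N     # cosine-wave amplitudes (xC_k)
sin_amp = -X.imag * 2 / N    # sine-wave amplitudes (xS_k)
cos_amp[0] /= 2              # the DC term is not doubled
print(np.round(cos_amp, 3))  # -> [1. 0. 2. 0. ...]
```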
  20. CONVOLUTION & FOURIER ANALYSIS. The two main techniques of signal processing are convolution and Fourier analysis. Strategy: decompose signals into simple additive components, process the components in some useful manner, and synthesize the components into a final result. This is DSP.
  21. CONVOLUTION. Convolution is a mathematical way of combining two signals to form a third signal. Using the strategy of impulse decomposition, systems are described by a signal called the impulse response. Convolution relates the three signals of interest: the input signal, the output signal, and the impulse response. Convolution provides the mathematical framework for DSP.
  22. IMPULSE RESPONSE. The delta function is a normalized impulse; that is, sample number zero has a value of one, while all other samples have a value of zero. The delta function is frequently called the unit impulse.
  23. IMPULSE RESPONSE. The impulse response is the signal that exits a system when a delta function (unit impulse) is the input. If two systems are different in any way, they will have different impulse responses. Just as the input and output signals are often called x[n] and y[n], the impulse response is usually given the name h[n].
  24. IMPULSE RESPONSE. • Any impulse can be represented as a shifted and scaled delta function. • Consider a signal, a[n], composed of all zeros except sample number 8, which has a value of -3. • This is the same as a delta function shifted to the right by 8 samples and multiplied by -3. • In equation form: a[n] = -3δ[n-8]
  25. IMPULSE RESPONSE. If the input to a system is an impulse, such as -3δ[n-8], what is the system's output? Scaling and shifting the input results in an identical scaling and shifting of the output.
  26. IMPULSE RESPONSE. If δ[n] results in h[n], it follows that -3δ[n-8] results in -3h[n-8]. In words, the output is a version of the impulse response that has been shifted and scaled by the same amount as the delta function on the input. If you know a system's impulse response, you immediately know how it will react to any impulse.
  27. How a system changes an input signal into an output signal. First, the input signal can be decomposed into a set of impulses, each of which can be viewed as a scaled and shifted delta function. Second, the output resulting from each impulse is a scaled and shifted version of the impulse response. Third, the overall output signal can be found by adding these scaled and shifted impulse responses. In other words, if we know a system's impulse response, then we can calculate what the output will be for any possible input signal.
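The three steps above can be checked numerically. This sketch (with assumed values for x[n] and h[n]) decomposes the input into impulses, shifts and scales the impulse response for each one, adds the results, and confirms the sum equals direct convolution:

```python
# Sketch: superposition of scaled, shifted impulse responses = convolution.
import numpy as np

h = np.array([1.0, 0.5, 0.25])       # an assumed impulse response h[n]
x = np.array([2.0, 0.0, -3.0, 1.0])  # an assumed input signal x[n]

y = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):           # each sample is a scaled, shifted delta
    y[k:k + len(h)] += xk * h        # ... producing a scaled, shifted h[n]

print(np.allclose(y, np.convolve(x, h)))  # -> True
```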
  28. Advantages over analogue processing • It is able to provide far better levels of signal processing than is possible with analogue hardware alone. • It is able to perform mathematical operations that enable many of the spurious effects of the analogue components to be overcome. • It is possible to easily update a digital signal processor by downloading new software. • Once a basic DSP card has been developed, the same hardware design can operate in several different environments, performing different functions, purely by downloading different software. • It is also able to provide functions that would not be possible using analogue techniques.
  29. Limitations • It is not able to provide perfect filtering, demodulation and other functions because of mathematical limitations. • The processing power of the DSP card may impose some processing limitations. • It is also more expensive than many analogue solutions, and thus may not be cost effective in some applications.
  30. SPEECH ANALYSIS. Extraction of properties or features from a speech signal. It involves a transformation of s(n) into another signal, a set of signals, or a set of parameters. Objectives: simplification and data reduction.
  31. Signal. • A continuous signal (both parameters can assume a continuous range of values). Vertical axis (y-axis): amplitude. Horizontal axis (x-axis): time. The parameter on the y-axis (the dependent variable) is said to be a function of the parameter on the x-axis (the independent variable).
  32. Speech Waveform. The time axis is the horizontal axis, running left to right, and the curve shows how the pressure increases and decreases in the signal: a time-domain representation.
  33. Frequency domain (spectral). [Figure: a 1-ms pulse f(t) and its spectrum f(ω).]
  34. Time domain vs. frequency domain (temporal vs. spectral). [Figure: spectrum at 0.15 seconds into the utterance, at the beginning of the "o" vowel.]
  35. SHORT TIME ANALYSIS. Short segments of the speech signal are isolated and processed as if they were short segments from a sustained sound. This is repeated as often as desired. Each short segment is called an analysis frame. The result is a single number or a set of numbers.
  36. SHORT TIME ANALYSIS. • ASSUMPTION: properties of the speech signal change relatively slowly with time. This assumption leads to a variety of speech processing methods.
  37. TYPES OF SHORT TIME ANALYSIS • Short-time energy (average magnitude) • Short-time average zero-crossing rate • Short-time autocorrelation
  38. Short Time Energy (Average Magnitude). The amplitude of the speech signal varies appreciably with time; the amplitude of unvoiced segments is much lower than the amplitude of voiced segments. Short-time energy provides a convenient representation that reflects these amplitude variations.
  39. Short Time Energy (Average Magnitude). [Figure: (a) 50 ms of a vowel; (b) the squared version of (a); (c) energy for a window length of 5 ms.]
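A minimal sketch of short-time energy in Python, assuming a signal x at sample rate fs and the 5 ms frame length used in the figure:

```python
# Sketch: short-time energy = sum of squared samples in each frame.
import numpy as np

def short_time_energy(x, fs, frame_ms=5.0):
    frame_len = int(fs * frame_ms / 1000)  # e.g. 5 ms window
    n_frames = len(x) // frame_len
    frames = x[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sum(frames ** 2, axis=1)

fs = 8000
x = np.sin(2 * np.pi * 200 * np.arange(fs) / fs)  # a stand-in "vowel"
print(short_time_energy(x, fs)[:5])
```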
  40. Short Time Average Zero-crossing Rate. For a continuous signal, a zero crossing occurs when s(t) = 0. For a discrete signal, a zero crossing occurs if successive samples have different algebraic signs.
  41. Short Time Average Zero-crossing Rate. For sinusoids, F0 = ZCR/2. For speech signals, calculation of F0 from the ZCR is less precise. High ZCR suggests unvoiced speech; low ZCR suggests voiced speech. Drawback: highly sensitive to noise. The ZCR is a simple measure of the frequency content of the signal.
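A sketch of the short-time zero-crossing rate for a discrete signal, counting sign changes between successive samples (the frame length is an assumed value):

```python
# Sketch: ZCR per frame; a pure tone gives a low rate, noise a high one.
import numpy as np

def zero_crossing_rate(x, frame_len=400):
    n_frames = len(x) // frame_len
    rates = []
    for i in range(n_frames):
        frame = x[i * frame_len:(i + 1) * frame_len]
        crossings = np.sum(np.abs(np.diff(np.sign(frame))) > 0)
        rates.append(crossings / frame_len)
    return np.array(rates)

fs = 8000
t = np.arange(fs) / fs
print(zero_crossing_rate(np.sin(2 * np.pi * 100 * t))[0])  # low ZCR (voiced-like)
print(zero_crossing_rate(np.random.randn(fs))[0])          # high ZCR (noise-like)
```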
  42. Short Time Autocorrelation. For a speech signal s(n) with Fourier transform S(e^jω), the energy spectrum is |S(e^jω)|². The inverse Fourier transform of |S(e^jω)|² is the autocorrelation of s(n). It preserves information about harmonic and formant amplitudes in s(n).
  43. Autocorrelation - Significance. The value of the autocorrelation function at zero lag is the energy of the signal. The period can be estimated by finding the location of the first maximum of the autocorrelation function away from zero lag. The autocorrelation function also contains much more information about the detailed structure of the signal.
  44. Autocorrelation - Applications: 1. F0 estimation 2. Voiced/unvoiced determination 3. Linear prediction.
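A hedged sketch of application 1, F0 estimation: the lag of the first strong autocorrelation peak after lag 0 is taken as the pitch period (the 60-400 Hz search range and the square-wave test signal are assumptions):

```python
# Sketch: F0 from the short-time autocorrelation of a voiced frame.
import numpy as np

fs = 8000
f0_true = 125.0
t = np.arange(int(0.03 * fs)) / fs            # a 30 ms frame
x = np.sign(np.sin(2 * np.pi * f0_true * t))  # crude "voiced" waveform

r = np.correlate(x, x, mode='full')[len(x) - 1:]  # autocorrelation, lags >= 0
lo, hi = int(fs / 400), int(fs / 60)          # search a 60-400 Hz pitch range
period = lo + np.argmax(r[lo:hi])             # lag of the strongest peak
print(fs / period)                            # estimated F0, about 125 Hz
```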
  45. Cepstrum. [Block diagram: s(n) → DFT → S(e^jω) → log magnitude → log|S(e^jω)| → IDFT → cepstrum.] The word "cepstrum" was derived by reversing the first four letters of "spectrum". The cepstrum was introduced by Bogert, Healy and Tukey in 1963 for characterizing the seismic echoes resulting from earthquakes. A cepstrum is the result of taking the inverse Fourier transform (IFT) of the log spectrum, as if it were a signal. Originally it was defined as the "spectrum of a spectrum". Operations on cepstra are labelled quefrency analysis, liftering, or cepstral analysis.
  46. Why Cepstrum? • The cepstrum can be seen as information about the rate of change in the different spectrum bands. • It has been used to determine the fundamental frequency of human speech. • Cepstral pitch determination is particularly effective because the effects of the vocal excitation (pitch) and vocal tract (formants) are additive in the logarithm of the power spectrum and thus clearly separate. • The cepstrum is often used as a feature vector for representing the human voice and musical signals.
  47. Cepstral concepts - Quefrency. The independent variable of a cepstral graph is called the quefrency. The quefrency is a measure of time, though not in the sense of a signal in the time domain. For example, if the sampling rate of an audio signal is 44100 Hz and there is a large peak in the cepstrum whose quefrency is 100 samples, the peak indicates the presence of a pitch of 44100/100 = 441 Hz. This peak occurs in the cepstrum because the harmonics in the spectrum are periodic, and the period corresponds to the pitch.
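A minimal cepstral pitch sketch following the reasoning above: a cepstral peak at quefrency q samples implies a pitch of fs/q Hz (the 8 kHz rate and the 200 Hz harmonic-rich test signal are assumed):

```python
# Sketch: cepstrum = IDFT of the log magnitude spectrum; peak -> pitch.
import numpy as np

fs = 8000
t = np.arange(1024) / fs
x = np.sign(np.sin(2 * np.pi * 200 * t))      # harmonic-rich 200 Hz signal

spectrum = np.abs(np.fft.fft(x * np.hamming(len(x))))
cepstrum = np.fft.ifft(np.log(spectrum + 1e-10)).real

lo, hi = int(fs / 400), int(fs / 60)          # plausible pitch-period range
q = lo + np.argmax(cepstrum[lo:hi])           # quefrency of the peak
print(fs / q)                                 # -> about 200 Hz
```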
  48. Cepstral concepts - Rahmonics. • The x-axis of the cepstrum has units of quefrency, and peaks in the cepstrum (which relate to periodicities in the spectrum) are called rahmonics. • To obtain an estimate of the fundamental frequency from the cepstrum, we look for a peak in the quefrency region corresponding to typical pitch periods.
  49. Cepstral concepts - Liftering. A filter that operates on a cepstrum is called a lifter. A low-pass lifter is similar to a low-pass filter in the frequency domain. It can be implemented by multiplying by a window in the cepstral domain; when the result is converted back, a smoother signal is obtained.
  50. Cepstral Analysis • Low-quefrency components (up to about 3 to 4 ms) predominantly correspond to the spectral envelope; these are also called the cepstral coefficients. • High-quefrency components (beyond 4 ms) predominantly correspond to the periodic excitation, or source. • If the signal is periodic, a strong peak is seen in the high-quefrency region at T0, the pitch period. • If the signal is unvoiced, components are distributed over all quefrencies.
  51. The cepstral coefficients • Cepstral coefficients can be derived both from filter-bank and from linear predictive analyses. • By keeping only the first few cepstral coefficients and setting the remaining coefficients to zero, it is possible to smooth the harmonic structure of the spectrum. • Cepstral coefficients are therefore very convenient coefficients to represent the speech spectral envelope. • Cepstral coefficients have rather different dynamics, the higher coefficients showing the smallest variances.
  52. Cepstrum. Formants can be estimated by locating the peaks in the log spectrum. For voiced speech there is a peak in the cepstrum; for unvoiced speech there is no such peak. The position of the peak is a good estimate of the pitch period.
  53. Linear Predictive Coding • Linear Predictive Coding (LPC) is one of the most powerful speech analysis techniques. • It is one of the most useful methods for encoding good quality speech at a low bit rate. • It provides extremely accurate estimates of speech parameters, and is relatively efficient to compute.
  54. Linear Predictive Coding. [Diagram: source (excitation signal) → transfer function → speech.] We can use the LPC coefficients to separate a speech signal into two parts: the transfer function (which contains the vocal quality, the formants) and the excitation (which contains the pitch and the loudness).
  55. • LPC analyzes the speech signal by estimating the formants, removing their effects from the speech signal, and estimating the intensity and frequency of the remaining buzz. • The process of removing the formants is called inverse filtering, and the remaining signal is called the residue.
  56. • The numbers which describe the formants and the residue can be stored or transmitted somewhere else. LPC synthesizes the speech signal by reversing the process: use the residue to create a source signal, use the formants to create a filter (which represents the tube), and run the source through the filter, resulting in speech. • Because speech signals vary slowly with time, this process is done on short chunks of the speech signal, called frames. Usually 30 to 50 frames per second give intelligible speech with good compression.
  57. Linear Predictive Coding - Basic Principle. A speech sample can be approximated as a linear combination of past speech samples. By minimizing the sum of the squared differences between the actual speech samples and the predicted ones, a unique set of predictor coefficients can be determined.
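One common way to realize this principle is the autocorrelation method, solving the Toeplitz normal equations for the predictor coefficients; the following is a sketch under that assumption, not necessarily the exact formulation the presenter had in mind:

```python
# Sketch: LPC via the autocorrelation method. Solving R a = r minimizes the
# squared error between s(n) and its prediction from the p previous samples.
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(x, order=10):
    r = np.correlate(x, x, mode='full')[len(x) - 1:]  # autocorrelation
    a = solve_toeplitz(r[:order], r[1:order + 1])     # Toeplitz normal equations
    return a   # prediction: s_hat[n] = sum_k a[k] * s[n-1-k]

fs = 8000
x = np.sign(np.sin(2 * np.pi * 150 * np.arange(240) / fs))  # 30 ms test frame
print(lpc(x * np.hamming(240), order=10))
```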
  58. Linear Predictive Coding - Applications: 1. F0 estimation 2. Pitch 3. Vocal tract area functions 4. Representing speech for low-bit-rate transmission or storage.
  59. Linear Predictive Coding - Highlights: 1. Extremely accurate estimation of speech parameters 2. High speed of computation 3. Robust, reliable and accurate method.
  60. Ways in which the basic models of analysis and the associated parameters from them are used in an integrated system: • Diagnostic applications (CSL & VAGMI) • Digital transmission of voice communication • Man-machine communication by voice: a. Voice response systems b. Speaker recognition systems c. Speech recognition systems
  61. Pre-emphasis. [Figure: spectrum before and after pre-emphasis.] Pre-emphasis boosts the amount of energy in the high frequencies. For voiced segments like vowels, there is more energy at the lower frequencies than at the higher frequencies; this is called spectral tilt. Boosting the high-frequency energy makes information from the higher formants more available to the acoustic model and improves phone detection accuracy. Pre-emphasis is done with a filter.
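A minimal pre-emphasis sketch: a first-order high-pass filter y[n] = x[n] - a·x[n-1]; the coefficient a = 0.97 is a commonly used value, assumed here rather than taken from the slides:

```python
# Sketch: first-order pre-emphasis filter boosting high frequencies.
import numpy as np

def pre_emphasis(x, a=0.97):
    return np.append(x[0], x[1:] - a * x[:-1])

x = np.random.randn(1000)   # stand-in speech samples
y = pre_emphasis(x)         # high frequencies are boosted
```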
  62. Windowing. [MFCC pipeline: pre-emphasis → window → DFT → mel filter bank → log → IDFT → deltas.] The goal of feature extraction is to provide spectral features. Speech is a non-stationary signal: the spectrum changes very quickly, so we cannot extract spectral features from an entire utterance or conversation. Instead, we want to extract spectral features from a small window of speech that characterizes a particular subphone (its statistical properties are constant within this region). Windowing determines the portion of the speech signal that is to be analyzed by zeroing out the signal outside the region of interest.
  63. Windowing techniques • Rectangular • Bartlett • Hamming • Hanning • Blackman • Kaiser. The most commonly used are the rectangular and Hamming windows.
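A short sketch of windowing before the DFT; NumPy provides several of the window shapes listed above (the 400-sample frame length is an assumed value):

```python
# Sketch: taper a frame with a Hamming window before taking its DFT.
# np.hanning, np.blackman, and np.kaiser cover other shapes listed above.
import numpy as np

frame = np.random.randn(400)        # one 25 ms frame at 16 kHz (assumed)
hamming = np.hamming(len(frame))    # 0.54 - 0.46*cos(2*pi*n/(N-1))
windowed = frame * hamming          # zero-tapered edges reduce leakage
spectrum = np.abs(np.fft.rfft(windowed))
```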
  64. [Figure: Rectangular and Bartlett windows.]
  65. [Figure: Hamming and Hanning windows.]
  66. [Figure: Blackman and Kaiser windows.]
  67. DFT. [Figure: spectrum at 0.15 seconds into the utterance, at the beginning of the "o" vowel.]
  68. The Mel frequency. Human hearing is not equally sensitive at all frequency bands, and modeling this property of human hearing during feature extraction improves speaker recognition performance. The form of the model used in MFCCs is to warp the frequencies output by the DFT onto the mel scale. A mel (Stevens et al., 1937; Stevens and Volkmann, 1940) is a unit of pitch: pairs of sounds that are perceptually equidistant in pitch are separated by an equal number of mels. The mapping between frequency in Hz and the mel scale is linear below 1000 Hz and logarithmic above 1000 Hz. The mel frequency can be computed from the raw acoustic frequency as: Mel(f) = 1127 ln(1 + f/700)
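The mel mapping above, directly in code (the near-1000 check value at 1 kHz is a convenient property of this formula):

```python
# Sketch: the hz-to-mel warping used in MFCC computation.
import numpy as np

def hz_to_mel(f):
    return 1127.0 * np.log(1.0 + f / 700.0)

print(hz_to_mel(1000))   # about 1000 mel near the 1 kHz pivot
```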
  69. Mel filter Bank. During MFCC computation, we implement this intuition by creating a bank of filters that collect energy from each frequency band, with 10 filters spaced linearly below 1000 Hz and the remaining filters spread logarithmically above 1000 Hz. Finally, we take the log of each of the mel spectrum values. In general, the human response to signal level is logarithmic: humans are less sensitive to slight differences in amplitude at high amplitudes than at low amplitudes. In addition, using a log makes the feature estimates less sensitive to variations in input, such as power variations due to the speaker's mouth moving closer to or further from the microphone.
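A hedged sketch of a triangular mel filter bank (the filter count, FFT size, and sample rate are assumed values, and exact edge placement varies between implementations):

```python
# Sketch: triangular filters with centers evenly spaced on the mel scale;
# each filter collects energy from one band of the DFT power spectrum.
import numpy as np

def mel_filter_bank(n_filters=26, n_fft=512, fs=16000):
    mel = lambda f: 1127.0 * np.log(1.0 + f / 700.0)
    mel_inv = lambda m: 700.0 * (np.exp(m / 1127.0) - 1.0)
    # Filter edge frequencies, evenly spaced in mel between 0 and fs/2.
    edges = mel_inv(np.linspace(mel(0), mel(fs / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges / fs).astype(int)
    bank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        bank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        bank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    return bank

fbank = mel_filter_bank()
power = np.abs(np.fft.rfft(np.random.randn(512))) ** 2
log_mel = np.log(fbank @ power + 1e-10)   # log mel-spectrum features
```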
  70. Log magnitude spectrum. [Figure: magnitude spectrum vs. log magnitude spectrum.] Replace each amplitude value in the magnitude spectrum with its log, and visualize the log spectrum as if it were itself a waveform.
  71. IDFT. The cepstrum is the spectrum of the log of the spectrum. By taking the spectrum of the log spectrum, we have left the frequency domain of the spectrum and gone back to the time domain.
  72. Cepstrum. There is a large peak around quefrency 120, corresponding to F0. There are various other components at lower values on the x-axis; these represent the vocal tract filter (the position of the tongue and the other articulators). Thus, if we are interested in detecting phones, we can make use of just the lower cepstral values; if we are interested in detecting pitch, we can use the higher cepstral values.
  73. MFCC: 12 coefficients. For MFCC extraction, we generally just take the first 12 cepstral values. These 12 coefficients represent information solely about the vocal tract filter, cleanly separated from information about the glottal source. It turns out that cepstral coefficients have the extremely useful property that they tend to be uncorrelated with one another. This is not true of the spectrum, where spectral coefficients at different frequency bands are correlated.
  74. Energy. The extraction of the cepstrum with the inverse DFT results in 12 cepstral coefficients for each frame. We next add a 13th feature: the energy of the frame. Energy correlates with phone identity and so is a useful cue for phone detection (vowels and sibilants have more energy than stops, etc.). The energy in a frame is the sum over time of the power of the samples in the frame; thus, for a signal x in a window from time sample t1 to time sample t2, the energy is: Energy = Σ x²[t], summed from t = t1 to t2.
  75. Deltas. The speech signal is not constant from frame to frame. This change, such as the slope of a formant at its transitions, or the nature of the change from a stop closure to a stop burst, can provide a useful cue for phone identity. For this reason, we also add features related to the change in cepstral features over time: for each of the 13 features (12 cepstral features plus energy) we add a delta or velocity feature and a double-delta or acceleration feature. Each of the 13 delta features represents the change between frames in the corresponding cepstral/energy feature, and each of the 13 double-delta features represents the change between frames in the corresponding delta feature.
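A simple sketch of delta and double-delta features as frame-to-frame differences via np.gradient; production systems often use a regression over several frames instead, and the feature matrix here is random stand-in data:

```python
# Sketch: append velocity and acceleration features -> 39 values per frame.
import numpy as np

def add_deltas(features):
    """features: (n_frames, 13) array of cepstral + energy features."""
    delta = np.gradient(features, axis=0)        # velocity
    delta2 = np.gradient(delta, axis=0)          # acceleration
    return np.hstack([features, delta, delta2])  # 39 features per frame

feats = np.random.randn(100, 13)   # stand-in for 100 frames of MFCC+energy
print(add_deltas(feats).shape)     # -> (100, 39)
```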
  76. SPEECH SPECTROGRAPH • A speech spectrograph is a laboratory instrument that displays a graphical representation of the amplitudes of the various component frequencies of speech on a time-based plot. • A tool for analyzing vocal output. • It is used for identifying the formants, and for real-time biofeedback in voice training and therapy.
  77. SPEECH SPECTROGRAPH (Analog) [illustration]
  78. Speech Spectrograph (Digital). [Pipeline: pre-emphasis → window → DFT → plot amplitude vs. frequency → plot spectrogram over time.]
  79. [Figure: the digital spectrograph pipeline from slide 78, illustrated with example plots.]
  80. SPEECH SPECTROGRAPH • There are two main kinds of analysis performed by the spectrograph: wideband (with a bandwidth of 300-500 Hz) and narrowband (with a bandwidth of 45-50 Hz).
  81. WIDEBAND SPECTROGRAPH • When used for normal speech with a fundamental frequency of around 100-200 Hz, a wideband analysis will pick up energy from several harmonics at once and add them together. • F0 (the fundamental frequency) can be determined from the graphic. • Also, the frequencies and relative strengths of the first two formants (F1 and F2) are visible as dark, rather blurry concentrations of energy.
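A sketch of wideband vs. narrowband analysis with SciPy: the analysis bandwidth is set by the window length, so a short window gives the wideband view and a long window the narrowband view (the signal and window lengths are assumed values):

```python
# Sketch: two spectrograms of the same signal with different window lengths.
import numpy as np
from scipy.signal import spectrogram

fs = 10000
t = np.arange(fs) / fs
x = np.sign(np.sin(2 * np.pi * 120 * t))   # harmonic-rich, F0 = 120 Hz

# Wideband: ~3 ms window (roughly 300 Hz bandwidth) shows vertical striations.
f_w, t_w, S_w = spectrogram(x, fs, nperseg=int(0.003 * fs))
# Narrowband: ~20 ms window (roughly 50 Hz bandwidth) resolves the harmonics.
f_n, t_n, S_n = spectrogram(x, fs, nperseg=int(0.020 * fs))
```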
