Thursday, December 9, 2010

APPLICATION OF CONVOLUTION THEORY IN DIGITAL SIGNAL PROCESSING

INTRODUCTION

In mathematics and, in particular, functional analysis, convolution is a mathematical operation on two functions f and g, producing a third function that is typically viewed as a modified version of one of the original functions. Convolution is similar to cross-correlation. It has applications that include statistics, computer vision, image and signal processing, electrical engineering, and differential equations.

The convolution can be defined for functions on groups other than Euclidean space. In particular, the circular convolution can be defined for periodic functions (that is, functions on the circle), and the discrete convolution can be defined for functions on the set of integers. These generalizations of the convolution have applications in the field of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing.

Computing the inverse of the convolution operation is known as deconvolution.

Convolution is a mathematical way of combining two signals to form a third signal. It is the single most important technique in Digital Signal Processing. Using the strategy of impulse decomposition, systems are described by a signal called the impulse response. Convolution is important because it relates the three signals of interest: the input signal, the output signal, and the impulse response. This chapter presents convolution from two different viewpoints, called the input side algorithm and the output side algorithm.
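
To make the input side algorithm concrete, here is a minimal Python sketch (a toy example with illustrative names, not code from the original text): each input sample launches a scaled, shifted copy of the impulse response, and all the copies are summed into the output signal.

```python
# Minimal sketch of the "input side" convolution algorithm: each input sample
# contributes a scaled, shifted copy of the impulse response to the output.

def convolve_input_side(x, h):
    """Convolve the input signal x with the impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)   # full output length is N + M - 1
    for i, xi in enumerate(x):          # for each input sample...
        for j, hj in enumerate(h):      # ...spread a scaled copy of h into y
            y[i + j] += xi * hj
    return y

# Example: smoothing with a 3-point moving-average impulse response
print(convolve_input_side([1.0, 2.0, 3.0, 4.0], [1/3, 1/3, 1/3]))
```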

One of the most important concepts in Fourier theory, and in crystallography, is that of a convolution. Convolutions arise in many guises, as will be shown below. Because of a mathematical property of the Fourier transform, referred to as the convolution theorem, it is convenient to carry out calculations involving convolutions.

But first we should define what a convolution is. Understanding the concept of a convolution operation is more important than understanding a proof of the convolution theorem, but it may be more difficult!

Mathematically, a convolution is defined as the integral over all space of one function at x times another function at u-x. The integration is taken over the variable x (which may be a 1D or 3D variable), typically from minus infinity to infinity over all the dimensions. So the convolution is a function of a new variable u, as shown in the following equations. The cross in a circle is used to indicate the convolution operation.

$$(f \otimes g)(u) = \int_{-\infty}^{\infty} f(x)\, g(u - x)\, dx$$

Note that it doesn't matter which function you take first, i.e. the convolution operation is commutative. We'll prove that below, but you should think about this in terms of the illustration below. This illustration shows how you can think about the convolution, as giving a weighted sum of shifted copies of one function: the weights are given by the function value of the second function at the shift vector. The top pair of graphs shows the original functions. The next three pairs of graphs show (on the left) the function g shifted by various values of x and, on the right, that shifted function g multiplied by f at the value of x.

[Figure: illustration of convolution as a weighted sum of shifted copies of g, with weights given by f]

The bottom pair of graphs shows, on the left, the superposition of several weighted and shifted copies of g and, on the right, the integral (i.e. the sum of all the weighted, shifted copies of g). You can see that the biggest contribution comes from the copy shifted by 3, i.e. the position of the peak of f.

If one of the functions is unimodal (has one peak), as in this illustration, the other function will be shifted by a vector equal to the position of that peak, and smeared out by an amount that depends on how sharp the peak is. Alternatively, we could swap the roles of the two functions and say that the bimodal function g duplicates the unimodal function f at the position of each of its peaks.
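
Commutativity is also easy to check numerically. The following is a small assumed example using numpy (the arrays are arbitrary stand-ins for f and g):

```python
import numpy as np

f = np.array([0.0, 1.0, 2.0, 5.0, 2.0, 1.0])  # unimodal, peak at index 3
g = np.array([1.0, 0.0, 0.0, 0.5])            # bimodal weighting function

# Convolving f with g and g with f gives identical results.
print(np.allclose(np.convolve(f, g), np.convolve(g, f)))  # True
```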

The convolution theorem

Because there will be so many Fourier transforms in the rest of this presentation, it is useful to introduce a shorthand notation. T will be used to indicate a forward Fourier transform, and T⁻¹ to indicate the inverse Fourier transform.


There are two ways of expressing the convolution theorem:

  • The Fourier transform of a convolution is the product of the Fourier transforms.
  • The Fourier transform of a product is the convolution of the Fourier transforms.

$$T(f \otimes g) = T(f) \cdot T(g) \qquad\qquad T(f \cdot g) = T(f) \otimes T(g)$$

The convolution theorem is useful, in part, because it gives us a way to simplify many calculations. Convolutions can be very difficult to calculate directly, but are often much easier to calculate using Fourier transforms and multiplication.
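
Here is a short numerical check of the first form of the theorem (an assumed numpy example): after zero-padding both sequences to the full convolution length, the DFT of their convolution equals the product of their DFTs.

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0, 0.25])
n = len(f) + len(g) - 1                    # full (acyclic) convolution length

lhs = np.fft.fft(np.convolve(f, g))        # transform of the convolution
rhs = np.fft.fft(f, n) * np.fft.fft(g, n)  # product of zero-padded transforms
print(np.allclose(lhs, rhs))               # True
```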

Any signal may be understood as consisting of a sequence of impulses. This is obvious in the case of sampled signals, but can be generalized to continuous signals by representing the signal as a continuous sequence of Dirac impulses. We may construct the response of a linear system to an arbitrary input signal as a sum over suitably delayed and scaled impulse responses. This process is called a convolution:

$$g(t) = \int_0^t f(t')\, h(t - t')\, dt'$$

Here f(t) is the input signal and g(t) the output signal; h(t) characterizes the system. We assume that the signals are causal (i.e. zero at negative time), otherwise the integration would have to start at $t'=-\infty$. Taking $f(t)=\delta(t)$, i.e. using a single Dirac impulse as the input, we get g(t)=h(t), so h(t) is in fact the impulse response of the system.
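
The statement that an impulse input reproduces the impulse response can be verified numerically. In the assumed sketch below, the convolution integral is approximated by a Riemann sum, a tall narrow spike stands in for the Dirac impulse, and an exponential decay stands in for h(t):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)                        # toy impulse response of a decaying system
f = np.zeros_like(t)
f[0] = 1.0 / dt                       # discrete stand-in for a Dirac impulse

g = dt * np.convolve(f, h)[:len(t)]   # Riemann-sum approximation of the integral
print(np.allclose(g, h))              # True: g(t) = h(t) for an impulse input
```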

The response of a linear system to an arbitrary input signal can thus be computed either by convolution with the impulse response in time domain, or by multiplication with the transfer function in the Laplace domain, or by multiplication with the complex frequency response in frequency domain.

A reason for choosing the FFT method is that responses are often specified in the frequency domain (that is, as a function of frequency), so one would anyhow need a Fourier transformation to determine the impulse response. Moreover, the impulse response has an infinite duration, so it can never be used in full length. The FFT method, on the other hand, assumes all signals to be periodic, which introduces certain inaccuracies as well; the signals must in general be tapered to avoid spurious results.

[Figure: the interrelations between signal processing in the time and frequency domains]

In digital processing, these methods translate into convolving discrete time series, or transforming them with the FFT method and multiplying the transforms. For impulse responses with more than 100 samples, the FFT method is usually more efficient. The convolution method is also known as FIR (finite impulse response) filtering. A third method, recursive or IIR (infinite impulse response) filtering, is only applicable to digital signals; it is often preferred for its flexibility and efficiency, although its accuracy requires special attention.
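
To make the distinction concrete, here is a brief sketch (an assumed example with arbitrary toy filters) of FIR filtering as a direct convolution and IIR filtering as a recursion:

```python
import numpy as np

x = np.random.randn(1000)            # arbitrary input signal

# FIR: direct convolution with a finite 5-tap moving-average impulse response
h = np.ones(5) / 5.0
y_fir = np.convolve(x, h)[:len(x)]

# IIR: one-pole recursive smoother, y[n] = a*y[n-1] + (1 - a)*x[n]; its
# effective impulse response (proportional to a**n) is infinite, but each
# output sample costs only a couple of operations
a = 0.8
y_iir = np.zeros_like(x)
for i in range(len(x)):
    y_iir[i] = a * (y_iir[i - 1] if i > 0 else 0.0) + (1.0 - a) * x[i]
```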

DIGITAL SIGNAL PROCESSING

Digital signal processing (DSP) is concerned with the representation of signals by a sequence of numbers or symbols, and with the processing of these signals. Digital signal processing and analog signal processing are subfields of signal processing. DSP includes subfields such as audio and speech signal processing, sonar and radar signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing, signal processing for communications, biomedical signal processing, and seismic data processing.

The world of science and engineering is filled with signals: images from remote space probes, voltages generated by the heart and brain, radar and sonar echoes, seismic vibrations, and countless other applications. Digital Signal Processing is the science of using computers to understand these types of data. This includes a wide variety of goals: filtering, speech recognition, image enhancement, data compression, neural networks, and much more. DSP is one of the most powerful technologies that will shape science and engineering in the twenty-first century.

Since the goal of DSP is usually to measure or filter continuous real-world analog signals, the first step is usually to convert the signal from analog to digital form using an analog-to-digital converter. Often, the required output is another analog signal, which requires a digital-to-analog converter. Even though this process is more complex than analog processing and imposes a discrete value range, the stability of digital signal processing, thanks to error detection and correction and its lower vulnerability to noise, makes it advantageous over analog signal processing for many, though not all, applications.

DSP algorithms have long been run on standard computers, on specialized processors called digital signal processors (DSPs), or on purpose-built hardware such as application-specific integrated circuits (ASICs). Today there are additional technologies used for digital signal processing, including more powerful general-purpose microprocessors, field-programmable gate arrays (FPGAs), digital signal controllers (mostly for industrial applications such as motor control), and stream processors, among others.

APPLICATIONS OF DSP

Applications of DSP include audio signal processing, audio compression, digital image processing, video compression, speech processing, speech recognition, digital communications, radar, sonar, seismology, and biomedicine. Specific examples are speech compression and transmission in digital mobile phones, room-matching equalization of sound in hi-fi and sound reinforcement applications, weather forecasting, economic forecasting, seismic data processing, analysis and control of industrial processes, computer-generated animations in movies, medical imaging such as CAT scans and MRI, MP3 compression, image manipulation, high-fidelity loudspeaker crossovers and equalization, and audio effects for use with electric guitar amplifiers.

Applications of the convolution theorem

Atomic scattering factors

We have essentially seen this before. We can tabulate atomic scattering factors by working out the diffraction pattern of different atoms placed at the origin. Then we can apply a phase shift to place the density at the position of the atom. Our new interpretation of this is that we are convoluting the atomic density distribution with a delta function at the position of the atom.
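
This is easy to verify numerically: convolution with a delta function at position p shifts the density to p, and in Fourier space it is a pure phase shift. The following is an assumed one-dimensional toy example (numpy only):

```python
import numpy as np

n = 64
x = np.arange(n)
density = np.exp(-0.5 * ((x - 10) / 2.0) ** 2)   # toy atomic density centred at x = 10
p = 25
delta = np.zeros(n)
delta[p] = 1.0                                    # delta function at x = 25

# Circular convolution with the delta shifts the density to x = 10 + 25
shifted = np.real(np.fft.ifft(np.fft.fft(density) * np.fft.fft(delta)))

# Equivalently, multiply the transform by a phase factor (the shift theorem)
phase = np.exp(-2j * np.pi * p * np.fft.fftfreq(n))
shifted2 = np.real(np.fft.ifft(np.fft.fft(density) * phase))
print(np.allclose(shifted, shifted2))             # True
```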

B-factors

We can think of thermal motion as smearing out the position of an atom, i.e. convoluting its density by some smearing function. The B-factors (or atomic displacement parameters, to be precise) correspond to a Gaussian smearing function. At resolutions typical of protein data, we are justified only in using a single parameter for thermal motion, which means that we assume the motion is isotropic, or equivalent in all directions. (In crystals that diffract to atomic resolution, more complicated models of thermal motion can be constructed, but we won't deal with them here.)

Above, we worked out the Fourier transform of a 1D Gaussian.

$$T\!\left[e^{-x^2/2\sigma^2}\right](s) = \sigma\sqrt{2\pi}\; e^{-2\pi^2 \sigma^2 s^2}$$

In fact, all that matters is the displacement of the atom in the direction parallel to the diffraction vector, so this equation is suitable for a 3D Gaussian. All we have to remember is that the term corresponding to the standard deviation refers only to the direction parallel to the diffraction vector. Since we are dealing with the isotropic case, the standard deviation (or atomic displacement) is equal in all directions.

The B-factor is used in an equation in terms of sinθ/λ instead of the diffraction vector, because all that matters is the magnitude of the diffraction vector. We replace the variance (standard deviation squared) by the mean-square displacement of the atom in any particular direction. The B-factor can be defined in terms of the resulting equation.

$$e^{-2\pi^2 \langle u^2 \rangle |\mathbf{s}|^2} = e^{-B \sin^2\theta/\lambda^2}, \qquad B = 8\pi^2 \langle u^2 \rangle \quad \left(|\mathbf{s}| = 2\sin\theta/\lambda\right)$$

Note that there is a common source of misunderstanding here. The mean-square atomic displacement refers to displacement in any particular direction. This will be equal along orthogonal x, y and z axes. But often we think of the mean-square displacement as a radial measure, i.e. total distance from the mean position. The mean-square radial displacement will be the sum of the mean-square displacements along x, y and z; if these are equal it will be three times the mean-square displacement in any single direction. So the B-factor has a slightly different interpretation in terms of radial displacements.

$$\langle u_r^2 \rangle = \langle u_x^2 \rangle + \langle u_y^2 \rangle + \langle u_z^2 \rangle = 3\,\langle u^2 \rangle, \qquad B = \frac{8\pi^2}{3}\,\langle u_r^2 \rangle$$
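
As a small worked example of these relations (the value B = 20 Å² is an assumed, typical protein value, not one from the text):

```python
import math

B = 20.0                            # assumed B-factor in Å^2
u2_dir = B / (8 * math.pi ** 2)     # mean-square displacement in one direction
u2_rad = 3 * u2_dir                 # radial mean-square displacement (x + y + z)
print(round(u2_dir, 3))             # ~0.253 Å^2 along any single direction
print(round(math.sqrt(u2_rad), 2))  # ~0.87 Å root-mean-square radial displacement
```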

Diffraction from a lattice

The convolution theorem can be used to explain why diffraction from a lattice gives another lattice – in particular, why diffraction from a lattice of unit cells in real space gives a lattice of structure factors in reciprocal space. The Fourier transform of a set of parallel lines is a set of points, perpendicular to the lines and separated by a distance inversely proportional to the spacing between the lines. (This is related to the idea that diffraction from a set of Bragg planes can be described in terms of a diffraction vector in reciprocal space, perpendicular to the set of planes. In fact, we can increase "n" in Bragg's law (nλ = 2d sinθ) to get a series of diffraction vectors for that set of planes.) In the figure below, one set of closely-spaced horizontal lines gives rise to a widely-spaced vertical row of points. A second set of more widely-spaced diagonal lines gives rise to a more closely-spaced row of points perpendicular to these lines. If we multiply one set of lines by the other, this gives an array of points at the intersections of the lines, as in the bottom part of the figure. The Fourier transform of this lattice of points, which was obtained by multiplying two sets of lines, is the convolution of the two individual transforms (i.e. rows of points), which generates a reciprocal lattice.

[Figure: two sets of parallel lines, their Fourier transforms (perpendicular rows of points), and the lattice formed by their product]

Of course, the same argument can be applied to a 3D lattice.
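
The inverse relationship between the spacings can also be seen numerically in one dimension. The sketch below (an assumed example) builds a lattice of delta functions and shows that its discrete Fourier transform is another lattice with inversely proportional spacing:

```python
import numpy as np

n, spacing = 240, 8                       # 240 samples, a delta every 8 samples
lattice = np.zeros(n)
lattice[::spacing] = 1.0                  # 1D lattice of delta functions

spectrum = np.abs(np.fft.fft(lattice))
peaks = np.nonzero(spectrum > 1e-6)[0]
print(peaks[:5])                          # [0 30 60 90 120]: spacing n/8 = 30
```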

Diffraction from a crystal

A crystal consists of a 3D array of repeating unit cells. Mathematically, this can be generated by taking the content of one unit cell and convoluting it by a 3D lattice of delta functions. The diffraction pattern is thus the product of the Fourier transform of the content of one unit cell and the Fourier transform of the 3D lattice. Since the transform of a lattice in real space is a reciprocal lattice, the diffraction pattern of the crystal samples the diffraction pattern of a single unit cell at the points of the reciprocal lattice.

Resolution truncation

Truncating the resolution of the data used in calculating a density map is equivalent to taking the entire diffraction pattern and multiplying the structure factors by a function which is one inside the limiting sphere of resolution and zero outside it. The effect on the density is equivalent to taking the density that would be obtained with all the data and convoluting it by the Fourier transform of a sphere. Now, the Fourier transform of a sphere has a width inversely proportional to the radius of the sphere, so the smaller the sphere (i.e. the lower the limiting resolution), the more the details of the map will be smeared out (thus reducing the resolution of the features it shows). In addition, the Fourier transform of a sphere has ripples where it goes negative and then positive again, so a map computed with truncated data will also have Fourier ripples. These will be particularly strong around regions of high density, such as heavy atoms.
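
A one-dimensional sketch (an assumed example) of this effect: truncating the Fourier coefficients of a sharp peak both broadens the peak and surrounds it with negative ripples.

```python
import numpy as np

n = 256
density = np.zeros(n)
density[128] = 1.0                       # a sharp peak, like a heavy atom
coeffs = np.fft.fft(density)

mask = np.abs(np.fft.fftfreq(n)) < 0.1   # keep only terms inside the "limiting sphere"
truncated = np.real(np.fft.ifft(coeffs * mask))

print(truncated[128] < 1.0)              # True: the peak is reduced and broadened
print(truncated.min() < 0.0)             # True: Fourier ripples dip below zero
```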

Missing data

Similarly, leaving out any part of the data set (e.g. if you are missing a wedge of data in reciprocal space because the crystal died before the data collection was finished, or you didn't collect the cusp data) will lead to distortions of the electron density that can be understood through the convolution theorem. As for resolution truncation, missing data can be understood as multiplying the full set of data by one where you have included data and zero where it is missing. The convolution theorem tells us that the electron density will be altered by convoluting it by the Fourier transform of the ones-and-zeros weight function. The more systematic the loss of data (e.g. a missing wedge versus randomly missing reflections), the more systematic the distortions will be.

Density modification

Solvent flattening

The presentation on density modification shows that the effect of solvent flattening can be understood in terms of the convolution theorem. Each structure factor after solvent flattening is obtained by taking a weighted average of its neighbours (in the reciprocal lattice) before flattening, with the weights given by the Fourier transform of the mask around the protein. So the smaller the protein mask (i.e. the bigger the solvent fraction), the wider the features of its Fourier transform, and thus the more widely the new phases draw on the old phases in the surrounding region of reciprocal space. Similarly, if the mask is more detailed, then its Fourier transform spreads more widely in reciprocal space and there is more mixing of phase information.

Digital Signal Processing Algorithms describes computational number theory and its applications to deriving fast algorithms for digital signal processing. It demonstrates the importance of computational number theory in the design of digital signal processing algorithms and clearly describes the nature and structure of the algorithms themselves. First, it establishes the properties of discrete-time sequence indices and their corresponding fast algorithms; second, it investigates the properties of the discrete-time sequences and the corresponding fast algorithms for processing these sequences. The book examines three of the most common computational tasks that occur in digital signal processing: cyclic convolution, acyclic convolution, and the discrete Fourier transform. The application of number theory to deriving fast and efficient algorithms for these three and related computationally intensive tasks is clearly discussed and illustrated with examples. Its comprehensive coverage of digital signal processing, computer arithmetic, and coding theory makes Digital Signal Processing Algorithms an excellent reference for practicing engineers. The authors' intent to demystify the abstract nature of number theory and the related algebra is evident throughout the text, providing clear and precise coverage of the quickly evolving field of digital signal processing.

· In statistics, as noted above, a weighted moving average is a convolution.

· In probability theory, the probability distribution of the sum of two independent random variables is the convolution of their individual distributions (see the short sketch after this list).

· In optics, many kinds of "blur" are described by convolutions. A shadow (e.g. the shadow on the table when you hold your hand between the table and a light source) is the convolution of the shape of the light source that is casting the shadow and the object whose shadow is being cast. An out-of-focus photograph is the convolution of the sharp image with the shape of the iris diaphragm. The photographic term for this is bokeh.

· Similarly, in digital image processing, convolutional filtering plays an important role in many important algorithms in edge detection and related processes.

· In linear acoustics, an echo is the convolution of the original sound with a function representing the various objects that are reflecting it.

· In artificial reverberation (digital signal processing, pro audio), convolution is used to map the impulse response of a real room onto a digital audio signal (see the previous and next points for additional information).

· In electrical engineering and other disciplines, the output (response) of a (stationary, or time- or space-invariant) linear system is the convolution of the input (excitation) with the system's response to an impulse or Dirac delta function. See LTI system theory and digital signal processing.

· In time-resolved fluorescence spectroscopy, the excitation signal can be treated as a chain of delta pulses, and the measured fluorescence is a sum of exponential decays from each delta pulse.

· In physics, wherever there is a linear system with a "superposition principle", a convolution operation makes an appearance.

· In spectral formulations of the Navier–Stokes equations (the subject of a Clay Mathematics Institute Millennium Prize Problem), the nonlinear term becomes a convolution in Fourier space.

· In digital signal processing, frequency filtering can be carried out by convolving the data with a filter in the time domain, which is equivalent to multiplying their transforms in the frequency domain.

· In electrical engineering, the convolution of one function (the input) with a second function (the impulse response) gives the output of a linear time-invariant (LTI) system. At any given moment, the output is an accumulated effect of all the prior values of the input function, with the most recent values typically having the most influence (expressed as a multiplicative factor). The impulse response function provides that factor as a function of the elapsed time since each input value occurred.

In digital signal processing and image processing applications, the entire input function is often available for computing every sample of the output function. In that case, the constraint that each output is the effect of only prior inputs can be relaxed.

· Convolution amplifies or attenuates each frequency component of the input independently of the other components.

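
Following up the probability item above, here is a small assumed sketch showing that the distribution of the sum of two fair dice is the convolution of their individual distributions:

```python
import numpy as np

die = np.ones(6) / 6.0             # uniform distribution over faces 1..6
total = np.convolve(die, die)      # distribution of the sum, for totals 2..12
print(total[5])                    # P(sum = 7) = 6/36 ≈ 0.1667
```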

