Fourier Series And Orthogonal Functions
A Fourier series is an expansion of a periodic function in terms of an infinite sum of sines and cosines. Fourier series make use of the orthogonality relationships of the sine and cosine functions. The computation and study of Fourier series is known as harmonic analysis and is extremely useful as a way to break up an arbitrary periodic function into a set of simple terms that can be plugged in, solved individually, and then recombined to obtain the solution to the original problem, or an approximation to it to whatever accuracy is desired or practical. Successive Fourier-series approximations to common functions, such as the square wave discussed below, illustrate this process.
Any set of functions that forms a complete orthogonal system has a corresponding generalized Fourier series analogous to the Fourier series. For example, using the orthogonality of the roots of a Bessel function of the first kind gives a so-called Fourier-Bessel series.
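As a sketch of one standard convention (on the interval $[0,b]$ with weight $x$, where $\alpha_{\nu m}$ denotes the $m$-th positive zero of $J_\nu$; normalizations vary between references), the Fourier-Bessel expansion reads
$$f(x) = \sum_{m=1}^{\infty} c_m\, J_\nu\!\left(\frac{\alpha_{\nu m}}{b}x\right), \qquad c_m = \frac{2}{b^2\, J_{\nu+1}(\alpha_{\nu m})^2}\int_0^b x\, f(x)\, J_\nu\!\left(\frac{\alpha_{\nu m}}{b}x\right) dx,$$
which rests on the orthogonality relation $\int_0^b x\, J_\nu\!\left(\frac{\alpha_{\nu m}}{b}x\right) J_\nu\!\left(\frac{\alpha_{\nu n}}{b}x\right) dx = \frac{b^2}{2}\, J_{\nu+1}(\alpha_{\nu m})^2\, \delta_{mn}$.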
Using the method for a generalized Fourier series, the usual Fourier series involving sines and cosines is obtained by taking the basis functions $\cos(nx)$ and $\sin(nx)$. Since these functions form a complete orthogonal system over $[-\pi,\pi]$, the Fourier series of a function $f(x)$ is given by
$$f(x) = \tfrac{1}{2}a_0 + \sum_{n=1}^{\infty} a_n\cos(nx) + \sum_{n=1}^{\infty} b_n\sin(nx),$$
where
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx \quad (n\ge 0), \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx \quad (n\ge 1).$$
The coefficients for Fourier series expansions of a few common functions are given in Beyer (1987, pp. 411-412) and Byerly (1959, p. 51). One of the functions most commonly analyzed by this technique is the square wave; Fourier series for a few other common functions are tabulated in those references.
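As a concrete illustration of these formulas, here is a minimal Python/NumPy sketch (the helper names `square_wave`, `fourier_coefficients`, and `fourier_partial_sum` are illustrative, not taken from the references above) that estimates $a_n$ and $b_n$ for a square wave by numerical integration and evaluates a partial sum of the series:

```python
import numpy as np

def square_wave(x):
    """Square wave with period 2*pi: +1 on (0, pi), -1 on (-pi, 0)."""
    return np.sign(np.sin(x))

def fourier_coefficients(f, n_max, n_grid=20000):
    """Approximate a_n and b_n on [-pi, pi] with a simple Riemann sum."""
    x = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    dx = x[1] - x[0]
    fx = f(x)
    a = np.array([np.sum(fx * np.cos(n * x)) * dx / np.pi for n in range(n_max + 1)])
    b = np.array([np.sum(fx * np.sin(n * x)) * dx / np.pi for n in range(n_max + 1)])
    return a, b

def fourier_partial_sum(a, b, x):
    """Evaluate a_0/2 + sum_{n>=1} (a_n cos(nx) + b_n sin(nx))."""
    s = np.full_like(x, a[0] / 2)
    for n in range(1, len(a)):
        s += a[n] * np.cos(n * x) + b[n] * np.sin(n * x)
    return s

a, b = fourier_coefficients(square_wave, n_max=9)
x = np.linspace(-np.pi, np.pi, 1001)
approx = fourier_partial_sum(a, b, x)

# For the square wave, b_n -> 4/(pi*n) for odd n and 0 otherwise; all a_n -> 0.
print(b[1], 4 / np.pi)      # both close to 1.2732
print(np.max(np.abs(a)))    # close to 0
```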
In mathematics, orthogonal functions belong to a function space that is a vector space equipped with a bilinear form. When the function space has an interval as the domain, the bilinear form may be the integral of the product of functions over the interval:
$$\langle f,g\rangle = \int f(x)\,g(x)\,dx.$$
The functions $f$ and $g$ are orthogonal when this integral is zero. For example, $\sin(nx)$ and $\sin(mx)$ with integers $n\neq m$ are orthogonal on the interval $(-\pi,\pi)$, and the integral of the product of the two sine functions vanishes.[1] Together with cosine functions, these orthogonal functions may be assembled into a trigonometric polynomial to approximate a given function on the interval with its Fourier series.
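A one-line check of this orthogonality uses the product-to-sum identity:
$$\int_{-\pi}^{\pi}\sin(nx)\sin(mx)\,dx = \frac{1}{2}\int_{-\pi}^{\pi}\bigl[\cos\bigl((n-m)x\bigr)-\cos\bigl((n+m)x\bigr)\bigr]\,dx = 0 \qquad (n\neq m),$$
since the cosine of any nonzero integer multiple of $x$ integrates to zero over a full period.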
Solutions of linear differential equations with boundary conditions can often be written as a weighted sum of orthogonal solution functions (a.k.a. eigenfunctions), leading to generalized Fourier series.
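A standard example (recalled here for concreteness rather than taken from the passage above): the boundary-value problem
$$y'' + \lambda y = 0, \qquad y(0)=y(L)=0,$$
has eigenfunctions $y_n(x)=\sin\!\left(\frac{n\pi x}{L}\right)$ with eigenvalues $\lambda_n=\left(\frac{n\pi}{L}\right)^2$, and these eigenfunctions are pairwise orthogonal on $[0,L]$. A solution of the heat equation $u_t = k\,u_{xx}$ with those boundary conditions can then be written as the weighted sum $u(x,t)=\sum_{n\ge1} c_n e^{-k\lambda_n t}\sin\!\left(\frac{n\pi x}{L}\right)$, which is exactly a generalized Fourier series in the eigenfunctions.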
I have often come across the concept of orthogonality and orthogonal functions, e.g. in Fourier series the basis functions are cosine and sine, and they are orthogonal. For vectors, being orthogonal means that they are actually perpendicular, such that their dot product is zero. However, I am not sure how sine and cosine are actually orthogonal. They are 90° out of phase, but there must be a different reason why they are considered orthogonal. What is that reason? Does being orthogonal really have something to do with geometry, i.e. 90-degree angles?
Why do we want to have orthogonal things so often in maths? Especially with transforms like the Fourier transform, we want to have an orthogonal basis. What does that even mean? Is there something magical about things being orthogonal?
The concept of orthogonality for functions is a more general way of talking about orthogonality for vectors. Orthogonal vectors are geometrically perpendicular because their dot product is equal to zero. When you take the dot product of two vectors you multiply their entries and add them together; but if you wanted to take the "dot" or inner product of two functions, you would treat them as though they were vectors with infinitely many entries, and taking the dot product would become multiplying the functions together and then integrating over some interval. It turns out that for the inner product (for an arbitrary real number $L$) $$\langle f,g\rangle = \frac{1}{L}\int_{-L}^{L}f(x)\,g(x)\,dx$$ the functions $\sin\left(\frac{n\pi x}{L}\right)$ and $\cos\left(\frac{n\pi x}{L}\right)$ with natural numbers $n$ form an orthogonal basis. That is, $\left\langle \sin\left(\frac{n\pi x}{L}\right),\sin\left(\frac{m\pi x}{L}\right)\right\rangle = 0$ if $m \neq n$ and equals $1$ otherwise (the same goes for cosine). So when you express a function as a Fourier series you are really projecting the function orthogonally onto the basis of sine and cosine functions (the same projection step used in the Gram–Schmidt process). I hope this answers your question!
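A quick numerical sanity check of these orthogonality relations (a minimal sketch using NumPy; the helper name `inner` is illustrative) might look like:

```python
import numpy as np

L = 2.0  # any fixed half-period
x = np.linspace(-L, L, 200000, endpoint=False)
dx = x[1] - x[0]

def inner(f, g):
    """<f, g> = (1/L) * integral of f*g over [-L, L], via a Riemann sum."""
    return np.sum(f * g) * dx / L

sin_n = lambda n: np.sin(n * np.pi * x / L)
cos_n = lambda n: np.cos(n * np.pi * x / L)

print(inner(sin_n(2), sin_n(3)))  # ~0: different sine modes are orthogonal
print(inner(sin_n(2), cos_n(2)))  # ~0: sine and cosine modes are orthogonal
print(inner(sin_n(2), sin_n(2)))  # ~1: each mode has unit norm with the 1/L factor
```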
Vectors are not orthogonal because they have a $90$ degree angle between them; that is just a special case. Actual orthogonality is defined with respect to an inner product. It is just the case that for the standard inner product on $\mathbb{R}^3$, if vectors are orthogonal, they have a $90$ degree angle between them. We can define lots of inner products, and whenever we have one we can talk about orthogonality: two elements are orthogonal if their inner product is zero. In the case of Fourier series the inner product is
$$\langle f,g\rangle = \int_{-\pi}^{\pi} f(x)\,g(x)\,dx.$$
And yes, there is something special about things being orthogonal: in the case of the Fourier series we have an orthogonal basis $e_0(x), \dots, e_n(x),\dots$ of all $2\pi$-periodic functions. Given any function $f$, if we want to write $f$ in this basis we can compute the coefficients of the basis elements simply by calculating inner products, since the inner product of $f$ with a fixed basis element singles out exactly one coefficient.
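Spelled out (a standard computation, with $c_n$ denoting the coefficient of $e_n$): writing $f=\sum_n c_n e_n$ and using orthogonality of the basis,
$$\langle f, e_m\rangle = \sum_n c_n \langle e_n, e_m\rangle = c_m \langle e_m, e_m\rangle, \qquad\text{so}\qquad c_m = \frac{\langle f, e_m\rangle}{\langle e_m, e_m\rangle}.$$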
Now, the orthogonality you see when studying Fourier series is a different type again. There is a very common, widely-used concept of a vector space, which is an abstract set with some operations on it, that satisfies something like $9$ axioms, which ensures it works a lot like $\mathbb{R}^n$ in many respects. We can do things like add the "vectors" and scale the "vectors" by some constant, and it all behaves very naturally. The set of real-valued functions on any given set is an example of a vector space. It means we can treat functions much like vectors.
Now, if we can treat functions like vectors, perhaps we can also do some geometry with them, and define an equivalent concept of a dot product? As it turns out, on certain vector spaces of functions, we can define an equivalent notion to a dot product, where we can "multiply" two "vectors" (read: functions) to give back a scalar (a real number). Such a product is called an "inner product", and it too is defined by a handful of axioms, to make sure it behaves how we'd expect. We define two "vectors" to be orthogonal if their inner product is equal to $0$.
When studying Fourier series, you're specifically looking at the space of square-(Lebesgue)-integrable functions $L^2[-\pi, \pi]$, which has an inner product
$$\langle f, g \rangle := \int_{-\pi}^{\pi} f(x)g(x)\, \mathrm{d}x.$$
To say functions $f$ and $g$ are orthogonal means to say the above integral is $0$. Fourier series are just series that express functions in $L^2[-\pi, \pi]$ as infinite sums of orthogonal functions.
Now, we use orthogonality of functions because it actually produces really nice results. Fourier series are a very efficient way of approximating functions, and very easy to work with in terms of calculation. Things like Pythagoras's theorem still hold, and turn out to be quite useful! If you want to know more, I suggest studying Fourier series and/or Hilbert Spaces.
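The "Pythagoras" referred to here is, in this setting, Parseval's identity: if $f\in L^2[-\pi,\pi]$ has Fourier coefficients $a_n$, $b_n$ as above, then
$$\frac{1}{\pi}\int_{-\pi}^{\pi} f(x)^2\,dx = \frac{a_0^2}{2} + \sum_{n=1}^{\infty}\bigl(a_n^2 + b_n^2\bigr),$$
i.e. the squared "length" of $f$ is the sum of the squares of its components in the orthogonal basis.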
Note that $\langle \sin(nx),\sin(mx)\rangle = \delta_{n,m}$ once the inner product is normalized by a factor of $\frac{1}{\pi}$: it is $1$ if $n=m$ and $0$ otherwise. The same holds for cosine, and mixed $\sin$/$\cos$ inner products are $0$. This forms an orthonormal basis, meaning all the basis vectors are orthogonal and the inner product of any basis vector with itself is $1$. This gives us a way of computing Fourier series.
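For example, here is a small Python/NumPy sketch (the helper name `inner` is illustrative) that computes Fourier coefficients as inner products against this basis and checks them against the known sine coefficients $b_n = 2(-1)^{n+1}/n$ of $f(x)=x$:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200000, endpoint=False)
dx = x[1] - x[0]

def inner(f, g):
    """<f, g> = (1/pi) * integral of f*g over [-pi, pi], via a Riemann sum."""
    return np.sum(f * g) * dx / np.pi

f = x  # the function f(x) = x, understood as extended 2*pi-periodically

# Project f onto the (normalized) trigonometric basis to get its Fourier coefficients.
b = [inner(f, np.sin(n * x)) for n in range(1, 6)]
a = [inner(f, np.cos(n * x)) for n in range(1, 6)]

print(np.round(b, 4))                                   # ~ [2, -1, 0.6667, -0.5, 0.4]
print([2 * (-1) ** (n + 1) / n for n in range(1, 6)])   # exact values 2(-1)^(n+1)/n
print(np.round(a, 4))                                   # ~ all zeros, since f is odd
```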
Simply put, in quantum mechanics two functions being orthogonal means that there is no overlap between them. Each function represents an energy state, and for non-degenerate states there is no overlap between the states. In vector terms it means that neither vector has a projection onto the other, just as the unit vectors $i$, $j$, $k$ are mutually orthogonal basis vectors.
The coefficients (2) can also be defined for a function $f$ in the class $L_1=L_1[(a,b),h]$, that is, for functions that are summable with weight function $h$ over $(a,b)$. For a bounded interval $[a,b]$, condition (3) holds if $f\in L_1[(a,b),h]$ and if the sequence $\{P_n\}$ is uniformly bounded on the whole interval $[a,b]$. Under these conditions the series (1) converges at a certain point $x\in[a,b]$ to the value $f(x)$ if $\phi_x\in L_1[(a,b),h]$.
Let $A$ be a part of $(a,b)$ on which the sequence $\{P_n\}$ is uniformly bounded, let $B=[a,b]\setminus A$ and let $L_p(A)=L_p[A,h]$ be the class of functions that are $p$-summable over $A$ with weight function $h$. If, for a fixed $x\in A$, one has $\phi_x\in L_1(A)$ and $\phi_x\in L_2(B)$, then the series (1) converges to $f(x)$.
For the series (1) the localization principle for conditions of convergence holds: if two functions $f$ and $g$ in $L_2$ coincide in an interval $(x-\delta,x+\delta)$, where $x\in A$, then the Fourier series of these two functions in the orthogonal polynomials converge or diverge simultaneously at $x$. An analogous assertion is valid if $f$ and $g$ belong to $L_1(A)$ and $L_2(B)$ and $x\in A$.
is satisfied, then the series (1) converges uniformly to $f$ on the whole interval $[a,b]$. On the other hand, the rate at which the sequence $\{E_n(f)\}$ tends to zero depends on the differentiability properties of $f$. Thus, in many cases it is not difficult to formulate sufficient conditions for the right-hand side of the Lebesgue inequality to tend to zero as $n\to\infty$ (see, for example, Legendre polynomials; Chebyshev polynomials; Jacobi polynomials). In the general case of an arbitrary weight function one can obtain specific results if one knows asymptotic formulas or bounds for the orthogonal polynomials under consideration.
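For reference, the Lebesgue inequality mentioned here has the standard pointwise form (with $S_n(f,x)$ the $n$-th partial sum of the series (1), $L_n(x)$ the Lebesgue function of the orthonormal system, and $E_n(f)$ the best uniform approximation of $f$ by algebraic polynomials of degree at most $n$):
$$|f(x) - S_n(f,x)| \le \bigl(1 + L_n(x)\bigr)\,E_n(f),$$
so the series converges to $f(x)$ at every point where this right-hand side tends to zero.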
See also [a1], Chapt. 4 and [a2], part one. Equiconvergence theorems have been proved more generally for the case of orthogonal polynomials with respect to a weight function $h$ on a finite interval belonging to the Szegö class, i.e. $\log h\in L$, cf. [a2], Sect. 4.12. For Fourier series in orthogonal polynomials with respect to a weight function on an unbounded interval see [a2], part two.
This incisive text deftly combines both theory and practical example to introduce and explore Fourier series and orthogonal functions and applications of the Fourier method to the solution of boundary-value problems. Directed to advanced undergraduate and graduate students in mathematics as well as in physics and engineering, the book requires no prior knowledge of partial differential equations or advanced vector analysis. Students familiar with partial derivatives, multiple integrals, vectors, and elementary differential equations will find the text both accessible and challenging. The first three chapters of the book address linear spaces, orthogonal functions, and the Fourier series. Chapter 4 introduces Legendre polynomials and Bessel functions, and Chapter 5 takes up heat and temperature. The concluding Chapter 6 explores waves and vibrations and harmonic analysis. Several topics not usually found in undergraduate texts are included, among them summability theory, generalized functions, and spherical harmonics. Throughout the text are 570 exercises devised to encourage students to review what has been read and to apply the theory to specific problems. Those preparing for further study in functional analysis, abstract harmonic analysis, and quantum mechanics will find this book especially valuable for the rigorous preparation it provides. Professional engineers, physicists, and mathematicians seeking to extend their mathematical horizons will find it an invaluable reference as well.