0$, (which is the width of the rarest lattice). Let the
values of the random variable $\xi_1$ be concentrated on the numbers
$kh+b$, $k=0,\pm1,\pm2,\dots$, with some real number $b$. Then with the
notations of problem~4
$$
\aligned
P(S_n=kh+nb)&=\frac h{\sqrt{2\pi n}\sigma}\exp\left\{
-\frac{(kh+nb-nm)^2} {2n\sigma^2} \right\}+o\(\frac1{\sqrt n}\),\\
&\qquad\qquad k=0,\pm1,\pm2,\dots,
\endaligned \tag5
$$
where $o(\cdot)$ is uniform in the variable $k$. If also the condition
$E|\xi_1|^3<\infty$ is satisfied, then
$$
\align
P(S_n=kh+nb)&=\frac h{\sqrt{2\pi n}\sigma}\exp
\left\{-\frac{(kh+nb-nm)^2}{2n\sigma^2} \right\}+\e(k,n), \\
&\qquad\qquad k=0,\pm1,\pm2,\dots,
\endalign
$$
where $|\e(k,n)|\le\frac Kn$, and the constant $K$ depends only on the
distribution of the random variable $\xi_1$.
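The following numerical sketch (our own Python illustration, not part of the problem set) checks the approximation appearing in these formulas in the simplest case: the random variables take the values 0 and 1 with probability $\frac12$ each, so that $h=1$, $b=0$, $m=\frac12$, $\sigma^2=\frac14$, and $P(S_n=k)$ is a binomial probability.

```python
import math

# Numerical check of the local limit theorem for Bernoulli(1/2) summands:
# lattice width h = 1, shift b = 0, mean m = 1/2, variance sigma^2 = 1/4.
n = 1000
m, sigma = 0.5, 0.5

def exact(k):
    # P(S_n = k) for the sum of n fair 0-1 random variables
    return math.comb(n, k) / 2**n

def local_approx(k):
    # h / (sqrt(2 pi n) sigma) * exp(-(k - n m)^2 / (2 n sigma^2))
    return math.exp(-(k - n * m) ** 2 / (2 * n * sigma**2)) \
        / (math.sqrt(2 * math.pi * n) * sigma)

max_err = max(abs(exact(k) - local_approx(k)) for k in range(n + 1))
print(max_err)  # of order 1/n, far smaller than the largest probability
```

Here $E|\xi_1|^3<\infty$ holds trivially, so the observed discrepancy of order $\frac1n$ is in accordance with the second statement above.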
\item{7.)} If a sequence of random variables $S_n$, $n=1,2,\dots$,
satisfies relation (5) (it is of no importance what additional
properties these random variables have), then
$$
\lim_{n\to\infty}P\(\frac{S_n-nm}{\sqrt n\sigma}<x\)
=\int_{-\infty}^x\frac1{\sqrt{2\pi}}e^{-u^2/2}\,du
\quad\text{for all real numbers }x.
$$
\itemitem{h.)} If a random variable $\xi$ has gamma distribution with
parameter $s>0$, i.e.,
its density function is $\gamma_s(u)=\frac1{\Gamma(s)}u^{s-1}e^{-u}$,
if $u>0$, and $\gamma_s(u)=0$, if $u<0$, where
$\Gamma(s)=\int_0^\infty u^{s-1} e^{-u}\,du$, then its characteristic
function equals $\varphi_s(t)=\frac1{(1-it)^s}$.
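A quick numerical check of this formula (our own sketch; the parameter $s=2$ and the point $t=1$ are chosen arbitrarily) compares a midpoint Riemann sum for $Ee^{it\xi}$ with the closed expression $\frac1{(1-it)^s}$:

```python
import cmath, math

# Riemann-sum check that the gamma density u^{s-1} e^{-u} / Gamma(s)
# has characteristic function 1 / (1 - it)^s; here s = 2 and t = 1.
s, t = 2.0, 1.0
steps = 50000
du = 50.0 / steps  # the integrand is negligible beyond u = 50

val = sum(cmath.exp(1j * t * u) * u ** (s - 1) * math.exp(-u)
          for u in (du * (j + 0.5) for j in range(steps))) * du
val /= math.gamma(s)

expected = 1 / (1 - 1j * t) ** s
print(abs(val - expected))  # small discretization error only
```

The truncation of the integral at $u=50$ is harmless, since the tail of the integrand is exponentially small there.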
\medskip
It may be worth mentioning that the density function $\gamma_s(u)$,
considered in point~h.), shows some similarity to the Poissonian
distribution, and this can be exploited to prove, similarly to
the solution of Problem~2, a good asymptotic formula for the function
$\Gamma(s)$ as $s\to\infty$. This yields a generalization of
Stirling's formula, since one can prove with the help of partial
integration that $\Gamma(n)=(n-1)!$.
Let us observe that $\gamma_{s_1}(u)*\gamma_{s_2}(u)=\gamma_{s_1+s_2}(u)$,
where $*$ denotes convolution. This can be seen e.g. with the
help of the form of the characteristic function of $\gamma_s$ and
some important properties of the characteristic functions discussed
later. Further, we can write the identity
$\gamma_s(s)=\frac{s^{s-1}e^{-s}}{\Gamma(s)}
=\frac1{2\pi}\int_{-\infty}^\infty \frac{e^{-its}}{(1-it)^s}\,dt$
with the help of the inverse Fourier transform formula for density
functions. Then we get by giving a good asymptotic formula for
the expression at right-hand side of this identity (similarly
to the solution of Problem~2) that
$\Gamma(s)\sim\sqrt{2\pi(s-1)}(\frac{s-1}e)^{s-1}$ if $s\to\infty$.
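This asymptotic relation is easy to test numerically; the following sketch (our own illustration, not part of the text) compares $\Gamma(s)$ with $\sqrt{2\pi(s-1)}\(\frac{s-1}e\)^{s-1}$ for a few values of $s$.

```python
import math

# Compare Gamma(s) with the asymptotic expression
# sqrt(2 pi (s-1)) * ((s-1)/e)**(s-1) obtained above.
def stirling(s):
    return math.sqrt(2 * math.pi * (s - 1)) * ((s - 1) / math.e) ** (s - 1)

ratios = [math.gamma(s) / stirling(s) for s in (10.0, 50.0, 150.0)]
print(ratios)  # decreases towards 1, roughly like exp(1/(12(s-1)))
```

The parameters stop at $s=150$ only because larger arguments overflow the floating-point range of `math.gamma`.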
%\vfill\eject
\medskip\noindent
{\script C.) The definition of the convolution and some of its
important properties.}
\medskip
In this series of problems we investigate the asymptotic behaviour
of the distribution and density function of appropriately normalized
partial sums of independent random variables. These distribution or
density functions can be directly expressed by means of the
distribution or density functions of the random variables in these
partial sums. Hence the limit theorems discussed in this text also can
be expressed in the language of distribution (and density) functions
without speaking of sums of independent random variables. To do this we
have to introduce the notion of the convolution operator. We introduce
this notion in a slightly more general form and define the convolution
of integrable (not necessarily density) functions and of signed measures. We
shall not use the notion of the convolution in this series of problems.
Hence this section could be omitted. Nevertheless, the discussion of
limit theorems without the introduction of the convolution would not
be complete, hence we introduce it. In several investigations in
analysis and probability theory this notion appears in a natural way,
and there it appears in the more general form introduced in this text.
\medskip\noindent
{\bf The definition of the convolution operator.}
{\it If $f(x_1,\dots,x_k)$ and $g(x_1,\dots,x_k)$ are two
measurable functions of $k$ variables which are
integrable, i.e.\ $\int |f(x_1,\dots,x_k)|\,dx_1\dots\,dx_k<\infty$ and
$\int |g(x_1,\dots,x_k)|\,dx_1\dots\,dx_k<\infty$, then the convolution
$f*g$ of the functions $f$ and $g$ is the function of $k$ variables
defined by the formula
$$
f*g(x_1,\dots,x_k)=\int
f(u_1,\dots,u_k)g(x_1-u_1,\dots,x_k-u_k)\,du_1\dots\,du_k \tag8
$$
at all points $(x_1,\dots,x_k)$ where this integral is meaningful.
(At the remaining points we may define the function $f*g$ in an
arbitrary way.)
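In one dimension formula (8) can be tested numerically. In the sketch below (our own illustration) $f=g$ is the exponential density $e^{-u}$, $u>0$, whose convolution with itself is $f*g(x)=xe^{-x}$.

```python
import math

# Midpoint Riemann-sum approximation of the convolution integral (8)
# for f(u) = g(u) = e^{-u} on u > 0; the exact result is x e^{-x}.
def f(u):
    return math.exp(-u) if u > 0 else 0.0

def conv(x, steps=2000):
    du = x / steps
    return sum(f(du * (j + 0.5)) * f(x - du * (j + 0.5))
               for j in range(steps)) * du

print(conv(2.0), 2.0 * math.exp(-2.0))  # the two values agree
```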
Let $\mu$ and $\nu$ be two signed measures of bounded variation on the
measurable sets of the $k$-dimensional Euclidean space $R^k$.
This means that we assume that there exist two representations
$\mu=\mu_1-\mu_2$ and $\nu=\nu_1-\nu_2$ such that $\mu_i$ and $\nu_i$,
$i=1,2$, are finite measures on the measurable subsets of the space
$R^k$, i.e.\ $\mu_i(R^k)<\infty$ and $\nu_i(R^k)<\infty$, $i=1,2$.
Let $\mu\times\nu$ denote the direct product of these signed measures
$\mu$ and $\nu$ on the product space $R^k\times R^k=R^{2k}$. Then the
convolution $\mu*\nu$ is the signed measure on the measurable subsets
of the space $R^k$ defined by the formula
$$
\mu*\nu(A)=\mu\times\nu \{(u,v)\colon\; u+v\in A\}\quad\text{for all
measurable sets } A\subset R^k.
$$
In other words, $\mu*\nu$ is the image of the product (signed)
measure $\mu\times\nu$ under the transformation $\bold T\colon\;
R^k\times R^k\to R^k$ defined by the formula $\bold T(u,v)=u+v$.
Let $f(x_1,\dots,x_k)$ be a measurable and integrable function of $k$
variables, and let $\nu$ be a signed measure of bounded variation on the
measurable subsets of $R^k$. The convolution $f*\nu(x_1,\dots,x_k)$ is the following
function of $k$~variables:
$$
f*\nu(x_1,\dots,x_k)=\int
f(u_1,\dots,u_k)\nu(x_1-\,du_1,\dots,x_k-\,du_k)
$$
at all points $(x_1,\dots,x_k)\in R^k$ where this integral is
meaningful. This means that we integrate the function $f(\cdot)$
with respect to the measure $\bar\nu_{x_1,\dots,x_k}$ defined by the
formula
$$
\bar\nu_{x_1,\dots,x_k}(A)=\nu((x_1,\dots,x_k)-A).
$$
(In the remaining points where this integral is not meaningful we define
the function $f*\nu$ in an arbitrary way.)}
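A small one-dimensional sketch (our own illustration) of the convolution $f*\nu$ for a discrete measure: if $f$ is the uniform density on $[0,1]$ and $\nu=\frac12\delta_0+\frac12\delta_1$, then the integral with respect to $\bar\nu_x$ reduces to a finite sum, and $f*\nu(x)=\frac12 f(x)+\frac12 f(x-1)$, the uniform density on $[0,2]$.

```python
# f is the uniform density on [0, 1); nu puts weight 1/2 on the points
# 0 and 1, so f*nu(x) = (f(x) + f(x - 1)) / 2.
def f(u):
    return 1.0 if 0.0 <= u < 1.0 else 0.0

nu = {0.0: 0.5, 1.0: 0.5}  # atoms of nu with their weights

def f_conv_nu(x):
    # integrating f with respect to nu_bar_x(A) = nu(x - A) is a sum
    # over the atoms of nu
    return sum(weight * f(x - atom) for atom, weight in nu.items())

vals = [f_conv_nu(x) for x in (0.5, 1.5, 2.5)]
print(vals)  # [0.5, 0.5, 0.0]
```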
\medskip
We have not defined the convolution $f*g(x_1,\dots,x_k)$ of two
functions $f$ and $g$ or the convolution $f*\nu(x_1,\dots,x_k)$ of a
function $f$ and a measure $\nu$ in all points $(x_1,\dots,x_k)\in
R^k$. But this restriction is not so disturbing as one might think at
first sight. As we shall see in problem~14, these convolutions
exist for almost all points $(x_1,\dots,x_k)\in R^k$ with respect to
the Lebesgue measure in the $k$-dimensional Euclidean space. On the
other hand, in typical applications these convolutions appear as the
density function of a signed measure which is absolutely continuous
with respect to the Lebesgue measure, and such density functions are
determined only almost everywhere.
\medskip
\item{14.)} If $f(x_1,\dots,x_k)$ and $g(x_1,\dots,x_k)$ are two
measurable and integrable functions on the space $R^k$, then the
integral in formula (8) defining the convolution $f*g(x_1,\dots,x_k)$ is
meaningful for almost all points $(x_1,\dots,x_k)\in R^k$ with respect
to the Lebesgue measure. The convolution $f*g$ is a finite and
integrable function on~$R^k$.
\item{} If $\mu$ and $\nu$ are two signed measures of bounded variation
on the space $R^k$, then their convolution $\mu*\nu$ has the same
property.
\item{} If $\mu$ and $\nu$ are two signed measures of bounded
variation on the space $R^k$, and the measure $\mu$ has a density
function $f(u_1,\dots,u_k)$, that is $\mu(A)=\int_A
f(u_1,\dots,u_k)\,du$ for all measurable sets $A\subset R^k$, then the
convolution of these signed measures, the signed measure $\mu*\nu$ has
a density function, and it equals the function $f*\nu$. In particular,
the function $f*\nu(x)$ is integrable. If both signed measures $\mu$
and $\nu$ have a density function $f(u_1,\dots,u_k)$ and
$g(u_1,\dots,u_k)$, then their convolution, the signed measure $\mu*\nu$
also has a density function, and it equals the convolution~$f*g$.
\item{15.)} If $\xi$ and $\eta$ are two independent random vectors on
the space $R^k$, the distribution of $\xi$ is $\mu$, the distribution
of $\eta$ is $\nu$, then the distribution of the sum $\xi+\eta$ is the
convolution $\mu*\nu$. If the random vector $\xi$ has a density
function $f(u_1,\dots,u_k)$, then the sum $\xi+\eta$ also has a
density function, and it equals $f*\nu$. If $\xi$ has a density
function $f$ and $\nu$ a density function $g$, then the sum $\xi+\eta$
also has a density function, and it equals the convolution $f*g$.
\item{} As a consequence, if $\xi_j$, $j=1,\dots,n$, are independent
random vectors with distribution functions $F_j(x)=F_j(x_1,\dots,x_k)$,
$j=1,\dots,n$, $\bar S_n=\frac{\summ_{j=1}^n\xi_j-A}B$ with some
norming factors $A=(A_1,\dots,A_k)$ and $B>0$, then the distribution
function of $\bar S_n$ equals $F_1*\cdots*F_n(Bx+A)$. If the random
vectors $\xi_j$ have density functions $f_j$, $1\le j\le n$, then
$\bar S_n$ has a density function of the form $Bf_1*\cdots*f_n(Bx+A)$.
\item{} $f*g=g*f$, $\mu*\nu=\nu*\mu$, $(f*g)*h=f*(g*h)$,
$(\mu_1*\mu_2)*\mu_3=\mu_1*(\mu_2*\mu_3)$, i.e. the convolution
operator is commutative and associative.
\medskip
The next problem is about the relation between the convolution operator
and the Fourier transform.
\medskip
\item{16.)} If $f(u_1,\dots,u_k)$ and $g(u_1,\dots,u_k)$ are two
integrable functions on~$R^k$ with Fourier transforms
$$
\varphi(t_1,\dots,t_k)=\int
e^{i(t_1u_1+\cdots+t_ku_k)}f(u_1,\dots,u_k)\,du_1\dots\,du_k
$$
and
$$
\psi(t_1,\dots,t_k)=\int
e^{i(t_1u_1+\cdots+t_ku_k)}g(u_1,\dots,u_k)\,du_1\dots\,du_k,
$$
then the Fourier transform of the convolution $f*g(u_1,\dots,u_k)$
is the function $\varphi(t_1,\dots,t_k)\psi(t_1,\dots,t_k)$.
\item{} If $\mu$ and $\nu$ are two signed measures on $R^k$ of bounded
variation with Fourier transforms (or, in other terminology, with
characteristic functions)
$$
\varphi(t_1,\dots,t_k)=\int
e^{i(t_1u_1+\cdots+t_ku_k)}\mu(\,du_1,\dots,\,du_k)
$$
and
$$
\psi(t_1,\dots,t_k)=\int
e^{i(t_1u_1+\cdots+t_ku_k)}\nu(\,du_1,\dots,\,du_k),
$$
then the Fourier transform of the convolution $\mu*\nu$ equals
$\varphi(t_1,\dots,t_k)\psi(t_1,\dots,t_k)$.
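The multiplicative behaviour of the Fourier transform under convolution has a finite, discrete analogue which is easy to verify directly; the sketch below (our own illustration, not part of the problem) checks that the discrete Fourier transform of the zero-padded convolution of two sequences is the product of their discrete Fourier transforms.

```python
import cmath

def dft(a):
    # discrete Fourier transform with the kernel e^{2 pi i k j / n}
    n = len(a)
    return [sum(a[j] * cmath.exp(2j * cmath.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def conv(a, b):
    # discrete analogue of the convolution integral
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

a, b = [1.0, 2.0, 0.5], [0.5, -1.0, 3.0]
n = len(a) + len(b) - 1           # pad to the length of the convolution
fa = dft(a + [0.0] * (n - len(a)))
fb = dft(b + [0.0] * (n - len(b)))
fc = dft(conv(a, b))
err = max(abs(fc[k] - fa[k] * fb[k]) for k in range(n))
print(err)  # zero up to rounding errors
```

The zero padding matters: it makes the circular convolution implicit in the DFT coincide with the ordinary one.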
\medskip
In an informal way the following problem can be formulated as
the statement that the convolution is a smoothing operator. The
convolution of two smooth functions is an even smoother function.
For the sake of simplicity we shall consider only functions of one
variable.
\item{17.)} Let $f(u)$ and $g(u)$ be two integrable functions. Let
us assume that the derivatives $\frac{d^j f(u)}{du^j}$ exist, they
are integrable functions, and $\limm_{u\to-\infty}\frac{d^j
f(u)}{du^j}=0$ for all integers $0\le j\le k$ with some integer
$k\ge1$. Let us also assume that the derivatives $\frac{d^j
g(u)}{du^j}$ exist, they are also integrable functions, and
$\limm_{u\to-\infty}\frac{d^j g(u)}{du^j}=0$ for all integers $0\le
j\le l$ with some integer $l\ge1$. Then the derivative
$\frac{d^{k+l}f*g(u)}{du^{k+l}}$ also exists, it is an integrable
function, and $\limm_{u\to-\infty}\frac{d^{k+l}f*g(u)}{du^{k+l}}=0$.
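A simple example of this smoothing effect (our own illustration, with functions that do not even satisfy the differentiability conditions above) is that the discontinuous indicator function of $[0,1]$ convolved with itself is the continuous triangle function, equal to $x$ on $[0,1]$ and to $2-x$ on $[1,2]$:

```python
# The indicator of [0, 1) convolved with itself: a discontinuous
# function whose convolution is already continuous.
def box(u):
    return 1.0 if 0.0 <= u < 1.0 else 0.0

def conv(x, steps=2000):
    # midpoint Riemann sum of  int box(u) box(x - u) du  over (0, 2)
    du = 2.0 / steps
    return sum(box(du * (j + 0.5)) * box(x - du * (j + 0.5))
               for j in range(steps)) * du

vals = [round(conv(x), 3) for x in (0.25, 1.0, 1.75)]
print(vals)  # [0.25, 1.0, 0.25], the triangle function
```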
\item{} Let $f(u)$ and $g(u)$ be two integrable functions such that the
function $f(u)$ has an analytic continuation to the domain
$\{z\colon\; |\Im z|<A\}$ with some number $A>0$. Let us further
assume that $\int |f(u+ix)|\,du<\infty$ for all numbers $|x|<A$, and
that for all $\e>0$ and $B>0$ there exists a constant
$K=K(A,B,\e)$ such that $\int_{|u|>K}|f(y-u+ix)g(u)|\,du<\e$ if
$|y|\le B$ and $|x|<A$. Then the convolution $f*g(x)$ has an analytic
continuation to the domain $\{z\colon\; |\Im z|<A\}$.
\item{} Let distribution functions $F_n$ together with the measures
$\mu_n$ induced by them be given, $n=0,1,2,\dots$, i.e.\
$\mu_n([a,b))=F_n(b)-F_n(a)$ for arbitrary pairs of numbers $a<b$.
\item{} For all continuous functions $f(t)$ periodic with period
$2\pi$ and all numbers $\e>0$ there exists a trigonometrical
polynomial $P_n(t)=\summ_{k=-n}^n a_k e^{ikt}$ such that
$$
\sup_{-\infty<t<\infty}|f(t)-P_n(t)|<\e.
$$

0$ this is allowed, since in this case the value of the function $P(t)$ in the interval $[-\e,\e]$ is in a small neighbourhood of the number~1.) Simple calculation yields that $$ \align \frac{d\log P(t)}{dt}&=\frac{P'(t)}{P(t)},\qquad \left.\frac{d\log P(t)}{dt}\right|_{t=0}=im,\\ \frac{d^2\log P(t)}{dt^2}&=\frac{P''(t)P(t)-P'(t)^2}{P^2(t)},\qquad \left.\frac{d^2\log P(t)}{dt^2}\right|_{t=0}=-m_2+m^2=-\sigma^2, \endalign $$ hence a Taylor expansion around the origin yields that $$ \log P(t)=imt-\frac{\sigma^2}2t^2+o(t^2), \quad \text{if }|t|<\e. $$ This estimate together with the relation $|P^n(t)|=e^{n\Re \log P(t)}=e^{-n\sigma^2t^2/2+no(t^2)}\le e^{-n\sigma^2t^2/3}$, if $|t|<\e$ and $n\ge n(\e)$ imply that $$ \align |I_2|\le \frac1{2\pi}\int_{\frac1{\e\sqrt n}<|t|<\e} |P^n(t)|\,dt &\le \frac1{2\pi}\int_{\frac1{\e\sqrt n}<|t|<\e}e^{-n\sigma^2t^2/3}\,dt \\ &\le\frac1{\pi\sqrt n}\int_{-\frac1\e}^\infty e^{-\sigma^2t^2/3}\,dt \le \frac{e^{-\sigma^2/4\e^2}}{\sqrt n}. \endalign $$ Furthermore, $$ \align I_1&=\int_{-\frac1{\e\sqrt n}}^{\frac1{\e\sqrt n}} \frac1{2\pi} e^{-ikt+inmt-n\sigma^2t^2/2+o(nt^2)}\,dt\\ &=\int_{-\frac1\e}^{\frac1\e} \frac1{2\pi\sqrt n} e^{i(mn-k)t/\sqrt n-\sigma^2t^2/2+o(t^2)}\,dt \\ &=\int_{-\frac1\e}^{\frac1\e} \frac1{2\pi\sqrt n} e^{i(mn-k)t/\sqrt n-\sigma^2t^2/2}(1+o(t^2))\,dt \\ &=\int_{-\infty}^{\infty} \frac1{2\pi\sqrt n} e^{i(nm-k)t/\sqrt n-\sigma^2t^2/2}\,dt -\int_{|t|>\frac1\e} \frac1{2\pi\sqrt n} e^{i(nm-k)t/\sqrt n-\sigma^2t^2/2}\,dt \\ &\qquad\qquad +o\(\frac1{\sqrt n}\). 
\endalign
$$
On the other hand
$$
\left|\int_{|t|>\frac1\e} \frac1{2\pi\sqrt n}
e^{i(nm-k)t/\sqrt n-\sigma^2t^2/2}\,dt \right|
\le\frac {e^{-\sigma^2/4\e^2}}{\sqrt n},
$$
and by completing the square in the exponent of the last formula we
get that
$$
\align
&\int_{-\infty}^{\infty} \frac1{2\pi\sqrt n}
e^{i(nm-k)t/\sqrt n-\sigma^2t^2/2}\,dt \\
&\qquad =\frac{e^{-(nm-k)^2/2n\sigma^2}} {2\pi\sqrt n}
\int_{-\infty}^{\infty}
\exp\left\{-\frac{\sigma^2}2\(t-i\frac{nm-k}{\sqrt n}\)^2\right\}\,dt
=\frac{e^{-(nm-k)^2/2n\sigma^2}} {\sqrt{2\pi n}\sigma}
\endalign
$$
by the result of problem~1. These estimates imply that
$$
\left| I_1-\frac1{\sqrt{2\pi n}\sigma}\exp\left\{-\frac{(k-nm)^2}
{2n\sigma^2} \right\}\right|\le \const\frac{e^{-\sigma^2/4\e^2}}
{\sqrt n}
$$
if $n>n(\e)$. As the estimates given for the expressions $I_1$, $I_2$
and $I_3$ hold for all $\e>0$ if $n$ is sufficiently large, they imply
the result of problem~4.
\item{5.)} The solution of this problem is similar to that of
problem~4. Since the random variable $\xi_1$ has three finite moments
in the present case, we can make the following better approximation of
the function $\log P(t)$: $\log
P(t)=imt-\frac{\sigma^2}2t^2+O(t^3)$. Hence
$P^n(t)=e^{imnt-n\sigma^2t^2/2+O(nt^3)}$. Then we can solve problem~5
by making estimations similar to those in the proof of problem~4. The
only essential difference is that now we define the domain of
integration in the definition of the expressions $I_1$ and~$I_2$ in a
different way. Now put $I_1=\int_{|t|<\dots}e^{-ikt}P^n(t)\,dt$.
Indeed, by introducing the quantity $z=\frac{k-np}{\sqrt n}$ we get
the demanded estimate of problem~5a with an error term $\e(n)$ of the
following form:
$$
\e(n)=\e(n,z)\le\frac C{\sqrt n}e^{-K_1z^2}\[\exp\left\{K_2
\frac{|z|^3}{\sqrt n}+K_3\frac{|z|}{\sqrt n}+\frac{K_4}n\right\}-1\]
$$
with appropriate constants $C>0$ and $K_j>0$, $j=1,\dots,4$. Then we
have to show that $\e(n,z)\le\frac{\const}n$ if $|z|\le\gamma \sqrt n$.
This estimate holds for $n^{1/6}\le|z|\le\gamma \sqrt n$, since in
this case $\e(n,z)\le e^{-K_1z^2/2}$. If $|z|\le n^{1/6}$ then
$\e(n,z)\le\frac C{\sqrt n}e^{-K_1z^2}\(\frac {|z|^3+|z|+1}{\sqrt
n}\)\le\frac{\const}n$, that is the demanded estimate holds also in
this case.
\item{} If $|k-np|\ge \gamma n$, then the result of problem~5a follows
from the relations $P(S_n=k)\le \frac{\const}n$ and
$e^{-(k-np)^2/2np(1-p)}<\frac{\const}{n^2}$. Actually, even stronger
estimates could be proved. The first estimate is a consequence of the
Chebyshev inequality, since $P(S_n=k)\le P(|S_n-ES_n|\ge \gamma n)\le
\frac{\text{Var\,}S_n}{\gamma^2n^2}\le\frac{\const}n$. The second
inequality is obvious.
\item{6.)} Let us introduce the random variables
$\bar\xi_j=\frac{\xi_j-b}h$, $j=1,\dots,n$, and $\bar
S_n=\summ_{j=1}^n\bar \xi_j$. Then $E\bar\xi_j=\frac{m-b}h$ and
$\text{Var}\,\bar\xi_j =\frac{\sigma^2}{h^2}$. Since
$P(S_n=kh+nb)=P(\bar S_n=k)$, and the random variable $\bar S_n$ is
concentrated on the lattice of the integers as on the rarest lattice,
the statement formulated in this problem follows from the results of
problems~4 and~5.
\item{7.)} It follows from formula (5) that the relation
$$
\lim_{n\to\infty}P\(A<\frac{S_n-nm}{\sqrt n\sigma}<B\)
=\int_A^B \frac1{\sqrt{2\pi}}e^{-u^2/2}\,du \tag{2.3}
$$
holds for arbitrary numbers $A<B$. Indeed, the probability at the
left-hand side equals the sum of the probabilities $P(S_n=kh+nb)$ for
the indices $k$ in the set $\{k\colon\; A\sqrt n\sigma<kh+nb-nm<B\sqrt
n\sigma\}$. Put $l(k,n)=\frac{kh-nm+nb}{\sqrt n\sigma}$, and
$$
\Cal L(A,B)=\left\{l(k,n)=\frac{kh-nm+nb}{\sqrt
n\sigma},\;k=0,\pm1,\pm2,\dots\right\}\cap (A,B),
$$
i.e.\ the points of the set $\Cal L(A,B)$ are the points of the
lattice of width $\frac h{\sqrt n\sigma}$ and containing the point
$\frac{nb-nm}{\sqrt n\sigma}$ which fall into the interval $(A,B)$.
This implies formula (2.3), since for a fixed number~$n$ the
probability at the left-hand side is an approximating sum of the
integral at the right-hand side plus an error which tends to zero as
$n\to\infty$.
\item{} We prove that relation (2.3) also holds for $A=-\infty$.
Indeed, for all $\e>0$ we can choose a number $K=K(\e)$ such that
$\int_{-K}^K\frac1{\sqrt{2\pi}}e^{-u^2/2}\,du>1-\e$. Then
$\limm_{n\to\infty}P\(\left|\frac{S_n-nm}{\sqrt
n\sigma}\right|<K\)>1-\e$, and
$\limm_{n\to\infty}P\(\frac{S_n-nm}{\sqrt n\sigma}<-K\)<\e$. Then
$$
\limsupp_{n\to\infty}\left| P\(\frac{S_n-nm}{\sqrt n\sigma}<B\)
-\int_{-\infty}^B \frac1{\sqrt{2\pi}}e^{-u^2/2}\,du\right|\le 2\e.
$$
Since this inequality holds for all $\e>0$, it implies the statement
of problem~7.
\item{8.)} Let $\varphi(t)=\int_{-\infty}^{\infty}e^{itx}f(x)\,dx
=Ee^{it\xi_1}$ denote the Fourier transform of the density function of
the random variable $\xi_1$. Then the Fourier transform of the density
function of the random sum $S_n=\xi_1+\cdots+\xi_n$ equals
$Ee^{it(\xi_1+\cdots+\xi_n)}=\(Ee^{it\xi_1}\)^n=\varphi^n(t)$. Since
$|\varphi(t)|\le1$, under the conditions of problem~8 the function
$\varphi^n(t)$ is integrable for $n\ge k$, and the density function
$f_n(t)$ of the random sum can be expressed by formula~(6) if we
replace the function $\varphi(t)$ by $\varphi^n(t)$. Moreover, this
relation also holds if we only assume that the function $\varphi^k(t)$
is integrable, and $n\ge k$. The above calculation makes it possible
to prove problem~8 similarly to problem~4 with some natural
modifications. Now we have to estimate the integral (6) instead of the
integral~(2) (with the modification that we write $\varphi^n(t)$
instead of the function $\varphi(t)$ in~(6)). Further, because of the
condition $E\xi_1^2<\infty$ the Fourier transform $\varphi(t)$ is
twice differentiable, $\varphi'(0)=iE\xi_1$,
$\varphi''(0)=-E\xi_1^2$. This means that the analogs of the relations
applied in the solution of problem~4 hold in this case. (Later we
shall discuss the properties of the Fourier transform $\varphi(t)$ in
the general case.)
\item{} The only essential difference is that instead of the integral
$I_3=\int_{\e<|t|<\pi}e^{-ikt}P^n(t)\,dt$ introduced in the solution
of problem~4 now we have to estimate the integral
$I_3'=I_3'(x)=\int_{\e<|t|<\infty}e^{-itx} \varphi^n(t)\,dt$. Observe
that
$$
\align
|I_3'|\le\int_{\e<|t|<\infty}|\varphi(t)|^n\,dt&\le
\sup_{\e\le |t|<\infty}|\varphi(t)|^{n-k}
\int_{\e<|t|<\infty}|\varphi(t)|^k\,dt \\
&\le \const \sup_{\e\le |t|<\infty}|\varphi(t)|^{n-k},
\endalign
$$
since $\varphi^k(\cdot)$ is an integrable function. For a fixed number
$t$, $t\neq0$, $|\varphi(t)|<1$. Further, by the Riemann lemma
$\limm_{|t|\to\infty}|\varphi(t)|=0$, and $\varphi(t)$ is a continuous
function. (This series of problems also contains the proof of the
Riemann lemma.) These facts imply that
$\supp_{\e\le|t|<\infty}|\varphi(t)|<1$.
\item{12.)} For all $\e>0$ there exists a constant $R=R(\e)$ such that
$P(|\xi|>R)=P\(\summ_{j=1}^k\xi_j^2>R^2\) <\frac\e2$. Put
$\delta=\frac{\e}{2R(\e)}$ and consider such numbers
$t=(t_1,\dots,t_k)$ for which $|t|^2=\summ_{j=1}^kt_j^2<\delta^2$.
Then $|e^{i(t,\xi)}-1|\le|(t,\xi)|\le\frac\e2$ if $|\xi|\le R$, hence
$|\varphi(t)-\varphi(\bar t)|\le 2P(|\xi|>R)
+E|(t-\bar t,\xi)|I(|\xi|\le R)\le\e+\frac \e2\le2\e$ if $|t-\bar
t|\le\delta$, where $I(A)$ denotes the indicator function of a
set~$A$. Hence the function $\varphi(t)$ is uniformly continuous.
\item{} The characteristic function of a random vector $a\xi+m$ in a
point $t\in R^k$, where $a\in R$, $m\in R^k$, is the function
$Ee^{i(t,a\xi+m)}=e^{(it,m)}Ee^{i(at,\xi)}=e^{(it,m)}\varphi(at)$,
where $\varphi$ denotes the characteristic function of the random
vector~$\xi$.
\item{} If $\xi_j$, $j=1,\dots,n$, are independent random vectors with
characteristic functions $\varphi_j(t)$, then the characteristic
function of the random sum $\xi_1+\cdots+\xi_n$ in a point $t\in R^k$
equals $Ee^{i(t,\xi_1+\cdots+\xi_n)}=Ee^{i(t,\xi_1)}\cdots
e^{i(t,\xi_n)}=Ee^{i(t,\xi_1)}\cdots
Ee^{i(t,\xi_n)}=\prodd_{j=1}^n\varphi_j(t)$.
\itemitem{13.)
$\,$a.)} If the random variable $\xi$ has standard normal
distribution, then
$$
Ee^{it\xi}=\int_{-\infty}^\infty \frac1{\sqrt{2\pi}}e^{itu-u^2/2}\,du
=e^{-t^2/2}\int_{-\infty}^\infty \frac1{\sqrt{2\pi}}e^{-(it-u)^2/2}\,du
=e^{-t^2/2}
$$
by the result of problem 1.
\itemitem{b.)} If the random variable $\xi$ has uniform distribution
in the interval $[0,1]$, then
$$
Ee^{it\xi}=\int_0^1e^{itu}\,du=\frac{e^{it}-1}{it}.
$$
\itemitem{c.)} If the random variable $\xi$ has exponential
distribution with parameter $\lambda>0$, then
$$
Ee^{it\xi}=\int_0^\infty \lambda e^{itu-\lambda u}\,du
=\frac{\lambda}{\lambda-it}.
$$
\itemitem{d.)} If $\xi$ is a random variable with Cauchy distribution,
then
$$
Ee^{it\xi}=\int_{-\infty}^\infty \frac1\pi \frac{e^{itu}}{1+u^2}\,du.
$$
This integral can be calculated by means of the residue theorem in the
theory of analytic functions.
\itemitem{} The function $g(z)=g(z,t)=\frac{e^{itz}}{\pi(1+z^2)}$ is
analytic in the plane of complex numbers with two poles $z=\pm i$. The
residue of the function $g(z)$ in the point $i$ equals
$\frac{e^{-t}}{2\pi i}$, and in the point $-i$ it equals
$-\frac{e^t}{2\pi i}$. Let us consider the following contour
integral. Let us first integrate the function $g(z)=g(z,t)$ on the
interval $[-R,R]$ and then on the half-circle $|z|=R$, $\Im z\ge0$, if
$t\ge0$, and on the half-circle $|z|=R$, $\Im z\le 0$, if $t\le0$. The
above contour integral equals $2\pi i$ times the residue of the
function $g(z)$ in the point $i$ if $t>0$, and $-2\pi i$ times the
residue of this function in the point $-i$ if $t<0$. On the other hand
the contribution of the half-circle of radius $R$ to the integral
tends to zero as $R\to\infty$. This implies that
$Ee^{it\xi}=\int_{-\infty}^\infty g(u,t)\,du=e^{-|t|}$.
\itemitem{} The following different, a little bit artificial, but
correct argument also proves this statement.
The characteristic function of the density function
$f(x)=\frac12e^{-|x|}$ equals
$$
\frac12\int_{-\infty}^\infty e^{-|x|+itx}\,dx=\frac12
\(\frac1{1+it}+\frac1{1-it}\)=\frac1{1+t^2}.
$$
Since the function $\frac1{1+t^2}$ is integrable, the inverse Fourier
transformation formula~(6) implies the desired statement.
\itemitem{e.)} If the random variable $\xi$ has Poissonian
distribution with parameter~$\lambda>0$, then
$$
Ee^{it\xi}=e^{-\lambda} \sum_{k=0}^\infty
\frac{\lambda^k}{k!}e^{ikt}=\exp\left\{\lambda(e^{it}-1)\right\}.
$$
\itemitem{f.)} If the random variable $\xi$ has binomial distribution
with parameters~$n$ and~$p$, then
$$
Ee^{it\xi}=\sum_{k=0}^n \binom nkp^ke^{ikt}(1-p)^{n-k}
=\(1-p+pe^{it}\)^n.
$$
\itemitem{g.)} If $\xi$ is a random variable with negative binomial
distribution with parameters $n$ and $p$, then its distribution agrees
with the distribution of the random sum $\xi_1+\cdots+\xi_n$, where
$\xi_j$, $1\le j\le n$, are independent random variables with negative
binomial distribution with parameters~$1$ and~$p$. (To see this
property observe that the random variable~$\xi$ has the following
probabilistic interpretation: If we make independent experiments one
after the other which are successful with probability~$p$, then $\xi$
denotes the number of unsuccessful experiments up to the~$n$-th
successful experiment. If $\xi_j$ denotes the number of the
unsuccessful experiments between the $(j-1)$-th and $j$-th successful
experiments, then we get the above representation.) Hence
$Ee^{it\xi}=\(Ee^{it\xi_1}\)^n$. On the other hand,
$$
Ee^{it\xi_1}=\sum_{k=0}^\infty p(1-p)^ke^{itk}=\frac{p}{1-(1-p)e^{it}}.
$$
\itemitem{h.)} We get with the change of variables $\bar u=(1-it)u$
that
$$
\varphi_s(t)=\frac1{\Gamma(s)}\int_0^\infty e^{-u+itu}u^{s-1}\,du
=\frac1{\Gamma(s)}\frac1{(1-it)^s}\int_0^\infty e^{-\bar u}\bar
u^{s-1}\,d\bar u =\frac1{(1-it)^s}.
$$
In this calculation we applied a complex analysis argument.
At the change of variables step $\bar u=(1-it)u$ of the calculation
the domain of integration became the line $(1-it)u$, $u>0$, instead of
the positive real axis. But we can turn the domain of integration back
to the positive real axis by means of a usual complex analysis
argument, by which the integral of an analytic function on a closed
curve equals zero. At this step we have to exploit that the function
$e^{-z}$ tends to zero fast as $\Re z\to\infty$.
\item{14.)} If $f(x_1,\dots,x_k)$ and $g(x_1,\dots,x_k)$ are two
integrable functions, then
$$
\align
\infty&>\int\int
|f(x_1,\dots,x_k)||g(u_1,\dots,u_k)|\,dx_1\cdots dx_kdu_1\dots du_k \\
&\hskip6truecm (\text{with substitution }\bar u_j=x_j+u_j,
\;j=1,\dots,k) \\
&=\int\int |f(x_1,\dots,x_k)| |g(\bar u_1-x_1,\dots,\bar u_k-x_k)|
\,dx_1\cdots \,dx_k\,d\bar u_1\dots \,d\bar u_k \\
&=\int\(\int |f(x_1,\dots,x_k)| |g(\bar u_1-x_1,\dots,\bar
u_k-x_k)|\,dx_1\cdots dx_k\)d\bar u_1\dots d\bar u_k\\
&=\int |f|*|g|(\bar u_1,\dots,\bar u_k)\,d\bar u_1\dots d\bar u_k.
\endalign
$$
This relation implies that the function $|f*g(x_1,\dots,x_k)|\le
|f|*|g| (x_1,\dots,x_k)$ is finite in almost all points
$(x_1,\dots,x_k)\in R^k$. It also implies that $f*g$ is an integrable
function. In the sequel we write $x$ instead of $(x_1,\dots,x_k)$ and
$u$ instead of $(u_1,\dots,u_k)$.
\item{} If $\mu$ and $\nu$ are two signed measures of bounded
variation, then there exists a representation $\mu=\mu_1-\mu_2$,
$\nu=\nu_1-\nu_2$ such that $\mu_i$ and $\nu_i$, $i=1,2$, are finite
measures. The identity
$\mu*\nu=(\mu_1*\nu_1+\mu_2*\nu_2)-(\mu_1*\nu_2+\mu_2*\nu_1)$ holds.
Since $\mu_i*\nu_j(R^k)<\infty$ for all indices $i,j=1,2$, this
implies that $\mu*\nu$ is a signed measure of bounded variation.
\item{} If the measure $\mu$ has a density function $f$, then we have
for all measurable sets $A\subset R^k$
$$
\allowdisplaybreaks
\align
&\int_A f*\nu(x)\,dx=\int_A\(\int f(u)\nu(x-\,du)\)\,dx
=\int_A\(\int f(x-u)\nu(\,du)\)\,dx\\
&\qquad=\int \(\int_A f(x-u)\,dx\)\nu(\,du)=\int\(\int
I(\{x\colon\;x\in A\})f(x-u)\,dx\)\nu(\,du)\\
&\qquad=\int\(\int I(\{v\colon\;u+v\in A\})f(v)\,dv\)\nu(\,du)\\
&\qquad=\int\int I(\{v\colon\;u+v\in A\})\mu(\,dv)\nu(\,du)\\
&\qquad=\mu\times\nu\(\{(u,v)\colon\; u+v\in A\}\)=\mu*\nu(A)
\endalign
$$
and this means that the function $f*\nu$ is the density function of
the convolution $\mu*\nu$ of the measures $\mu$ and~$\nu$.
\item{} Let us observe that the above calculations also imply that the
function $f*\nu(x)$ is finite in almost all points $x\in R^k$,
moreover it is integrable. Indeed, the above calculation implies this
result with the choice $A=R^k$ if $\mu$ and $\nu$ are (bounded)
positive measures. The general case can be reduced to this case if we
decompose the measures $\mu$ and $\nu$ as the difference of two
positive finite measures. (We may assume that the measures in the
decomposition $\mu=\mu_1-\mu_2$ have density functions.)
\item{} If the measure $\mu$ has a density function $f$ and the
measure $\nu$ has a density function $g$, then let us define the
measures $\bar \nu_x(A)=\nu(x-A)$ for all $x\in R^k$. The density
function of the measure $\bar \nu_x$ equals $g(x-u)$ in the point
$u\in R^k$, and the density function of the measure $\mu*\nu$ in the
point~$x$ equals
$$
\int f(u)\bar \nu_x(du)=\int f(u)g(x-u)\,du=f*g(x)
$$
by the already proved results.
\item{15.)} It follows from the definition of the convolution that if
$\xi$ and $\eta$ are independent random variables with distributions
$\mu$ and $\nu$, then the distribution of the random sum $\xi+\eta$
equals $\mu*\nu$.
It follows from the result of the previous problem that if the
distribution $\mu$ of the random variable $\xi$ has a density function
$f$, then the distribution $\mu*\nu$ of the random variable $\xi+\eta$
has a density function $f*\nu$. If also the measure $\nu$ has a
density function $g$, then this density function equals $f*g$.
\item{} If the distribution of the random variable~$Z$ is $F(x)$, then
the distribution of the random variable $\bar Z=\frac{Z-A}B$ with
$B>0$ equals $F(Bx+A)$. If the random variable $Z$ has a density
function $f(x)$, then the random variable $\bar Z$ has a density
function $Bf(Bx+A)$. The previous results imply the statements
formulated for the distribution and density function of the random sum
$\bar S_n$.
\item{} The relation $\mu*\nu=\nu*\mu$ follows from the definition of
the convolution. The statement
$(\mu_1*\mu_2)*\mu_3=\mu_1*(\mu_2*\mu_3)$ follows from the fact that
the identity
$$
(\mu_1*\mu_2)*\mu_3(A)=\mu_1*(\mu_2*\mu_3)(A)
=\mu_1\times\mu_2\times\mu_3(\{(u,v,w)\colon\;u+v+w\in A\})
$$
holds for all measurable sets~$A$. The analogous statements about the
convolution of functions can be reduced to these statements if we
represent the convolution of functions as the convolution of the
density functions of the corresponding signed measures. These
statements can also be proved by direct calculation.
\item{16.)} If $\mu$ and $\nu$ are two signed measures of bounded
variation with Fourier transforms $\tilde f(t_1,\dots,t_k)$ and
$\tilde g(t_1,\dots,t_k)$, then
$$
\align
&\tilde f(t_1,\dots,t_k)\tilde g(t_1,\dots,t_k) \\
&\qquad=\int e^{i(t_1(u_1+v_1)+\cdots+t_k(u_k+v_k))}
\mu(\,du_1,\dots,\,du_k) \nu(\,dv_1,\dots,\,dv_k).
\endalign
$$
The transformation $\bold T(u_1,\dots,u_k,v_1,\dots,v_k)
=(u_1+v_1,\dots,u_k+v_k)$, $(u_1,\dots,u_k)\in R^k$,
$(v_1,\dots,v_k)\in R^k$, is a measurable transformation from the
space \hfill\break $(R^k\times R^k, \Cal B_{2k},\mu\times\nu)$ to the
space $(R^k,\Cal B_k,\mu*\nu)$, where $\Cal B_{2k}$ and $\Cal B_k$
denote the $\sigma$-algebras in the spaces $R^{2k}$ and~$R^k$. By
applying the measure theoretical result which describes how measurable
transformations transform integrals for the functions
$$
h(x_1,\dots,x_k)=e^{i(t_1x_1+\cdots+t_kx_k)}
$$
and
$$
g(u_1,\dots,u_k,v_1,\dots,v_k)=
h(\bold T(u_1,\dots,u_k,v_1,\dots,v_k))=
e^{i(t_1(u_1+v_1)+\cdots+t_k(u_k+v_k))}
$$
with the above defined transformation $\bold
T(u_1,\dots,u_k,v_1,\dots,v_k)$ we get from the relation written at
the beginning of the solution that
$$
\tilde f(t_1,\dots,t_k)\tilde g(t_1,\dots,t_k)
=\int e^{i(t_1x_1+\cdots+t_kx_k)} \mu*\nu(\,dx_1,\dots,\,dx_k).
$$
This implies the statement about the Fourier transform of the
convolution of measures. The analogous statement about the Fourier
transform of the convolution of density functions follows from this
statement and the relation between the convolution of measures and
their density functions.
\item{17.)} By differentiating the identity $f*g(x)=\int
f(x-u)g(u)\,du$ $k$ times we get that
$$
\frac {d^kf*g(x)}{dx^k}=\int \left.\frac
{d^kf(v)}{dv^k}\right|_{v=x-u}g(u)\,du
=\int \left.\frac {d^kf(v)}{dv^k}\right|_{v=u}g(x-u)\,du.
$$
The conditions of the problem allow the above successive
differentiations. Further, working with the right-hand side of the
last formula we can carry out~$l$ additional differentiations and get
that
$$
\frac {d^{k+l}f*g(x)}{dx^{k+l}}
=\int\left.\frac {d^kf(v)}{dv^k}
\right|_{v=u}\left.\frac{d^lg(v)}{dv^l}\right|_{v=x-u}\,du.
$$
\item{} If $f(u)$ is an analytic function which also satisfies the
other conditions of the problem, then the function
$$
F(z)=\int f(z-u)g(u)\,du
$$
is an analytic continuation of the convolution $f*g(x)$ to the domain
$\{z\colon\; |\Im z|<A\}$.
\item{18.)$\,$a.)} Convergence in distribution implies the
convergence of the integrals: For all $\e>0$ there exists a
$k$-dimensional rectangle $\bold K=\bold K(\e)$ such that
$\mu_F(\bold K)>1-\e$. (Given a distribution function $F$ in the
sequel we shall denote by $\mu_F$ the probability measure on $R^k$
induced by the distribution function $F$.) We also may assume, by
enlarging the rectangle $\bold K$ if it is necessary, that the
boundary of the rectangle $\bold K$ has zero $\mu_F$ measure. Indeed,
the projection of the distribution function $F$ to the $j$-th
coordinate is a one-dimensional distribution function, and as a
consequence it has at most countably many atoms (points with positive
measure with respect to the measure induced by this distribution) for
all indices $j=1,\dots,k$. This implies that we can choose a larger
rectangle $\bold K$, if this is necessary, whose boundary has $\mu_F$
measure zero.
\item{} Because of the boundedness of the function $f$ the relation
$$
\left|\int_{R^k\setminus \bold
K}f(x_1,\dots,x_k)\,dF(x_1,\dots,x_k)\right| <\const\e
$$
holds, and also $\limsupp_{n\to\infty}\left|\int_{R^k\setminus \bold
K}f(x_1,\dots,x_k) \,dF_n(x_1,\dots,x_k)\right|<\const\e$, because the
boundary of the rectangle $\bold K$ has zero $\mu_F$ measure and the
distribution functions $F_n$ converge to the distribution
function~$F$. Furthermore, the function $f$ is uniformly continuous on
the rectangle $\bold K$, hence there exists a constant $\delta>0$ such
that $|f(x)-f(y)|<\e$ if $|x-y|\le\delta$, and $x,y\in \bold K$. The
rectangle $\bold K$ can be decomposed into finitely many rectangles
$\Delta_j$, $j=1,\dots, p(\bold K)$, of diameter less than $\delta$
without joint interior points, and such that all these rectangles
$\Delta_j$ have boundaries with $\mu_F$ measure zero.
These properties imply that $\lim\limits_{n\to\infty} \mu_{F_n}(\Delta_j)=\mu_F(\Delta_j)$ for all indices $j=1,\dots, p(\bold K)$, and because of the uniform continuity of the function~$f$ on the rectangle~$\bold K$ $$ \limsup_{n\to\infty}\left|\int_{\bold K}f\,dF_n-\int_{\bold K}f\,dF\right|<\e. $$ The above inequalities imply that $\limsup\limits_{n\to\infty}\left|\int f\,dF_n-\int f\,dF\right|<\const\e$ with a $\const$ independent of $\e$. Since this inequality holds for all $\e>0$, it implies the statement we wanted to prove. \medskip \item{b.)} The convergence of the integrals implies convergence in distribution: \item{} Let $x=(x_1,\dots,x_k)$ be a point of continuity of the distribution function~$F$. Then for all numbers $\e>0$ there exists a number $\delta=\delta(\e)>0$ such that the points $y=(y_1,\dots,y_k) =(x_1-\delta,\dots,x_k-\delta)$ and $z=(z_1,\dots,z_k) =(x_1+\delta,\dots,x_k+\delta)$ satisfy the inequalities $F(y)>F(x)-\e$ and $F(z)<F(x)+\e$. Let us choose two continuous functions $f_1$ and $f_2$ on $R^k$ with values in $[0,1]$ such that $f_1(v)=1$ if $v_j\le y_j$ for all indices $j$, $f_1(v)=0$ if $v_j\ge x_j$ for some index $j$, and $f_2(v)=1$ if $v_j\le x_j$ for all indices $j$, $f_2(v)=0$ if $v_j\ge z_j$ for some index $j$. (Such functions can be constructed e.g.\ with the help of the Euclidean distance function.) Then $f_1(v)\le I\(v_j<x_j \text{ for all } j\)\le f_2(v)$, hence $$ \liminf_{n\to\infty}F_n(x)\ge\lim_{n\to\infty}\int f_1\,dF_n=\int f_1\,dF\ge F(y)>F(x)-\e, $$ and similarly $\limsup\limits_{n\to\infty}F_n(x)\le\int f_2\,dF\le F(z)<F(x)+\e$. Since these inequalities hold for all numbers $\e>0$, they imply the statement of part~b.). \item{19.)} Since $\bigcupp_{K=1}^\infty \bold K(K)^k= R^k$, and the rectangles $\bold K(K)^k$, $K=1,2,\dots$, constitute a monotone increasing series of sets, we have $\limm_{K\to\infty}\mu(\bold K(K)^k)=\mu(R^k)=1$, i.e.\ $\mu(\bold K(K)^k)\ge 1-\e$ if $K\ge K(\e)$. \item{} To show that the probability measure $\mu$ is determined by its characteristic function let us first observe that the integrals $\int f(u)\,d\mu(u)$ determine the measure $\mu$ if we take all continuous functions $f(\cdot)$ with a bounded support. Indeed, the measures $\mu(\bold P)$ of those rectangles $\bold P=[K_1,L_1)\times\cdots\times [K_k,L_k)$ whose boundary has $\mu$ measure zero determine the measure $\mu$. Besides this, we claim that for all numbers $\e>0$ and rectangles $\bold P$ there exists a function $f_{\e,\bold P}(\cdot)$ such that $0\le f_{\e,\bold P}(u)\le1$ for all points $u\in R^k$, $f_{\e,\bold P}(u)=1$ if $u\in \bold P$, and $f_{\e,\bold P}(u)=0$ if $\rho(u,\bold P)>\e$.
(In the sequel $\rho(\cdot,\cdot)$ denotes the usual Euclidean distance in the space~$R^k$.) Then the relation $\mu(\bold P)=\limm_{\e\to0}\int f_{\e,\bold P}(u)\,d\mu(u)$ implies the above property. \item{} A possible construction of a function $f_{\e,\bold P}$ with the above properties is the following: Put $f_{\e,\bold P}(u) =1-g_{\e,\bold P}(u)$ and $g_{\e,\bold P}(u)=\min\(1,\frac1\e \rho(u,\bold P)\)$. \item{} Given a continuous function $f(\cdot)$ of compact support together with a sufficiently large number $K>0$ for which the cube $[-K,K]\times\cdots\times [-K,K]$ contains the support of the function $f(\cdot)$ let us define the periodic extension of the function $f(\cdot)$ with period $2K$ by the formula $f_K(u_1+2Kj_1,\cdots,u_k+2Kj_k)=f(u_1,\cdots,u_k)$, $-K\le u_j<K$, $j=1,\dots,k$, for all integers $j_1,\dots,j_k$. By the Weierstrass theorem about the approximation by trigonometrical polynomials for all numbers $\e>0$ there exists a trigonometrical polynomial $$ g_{\e}=g_{\e,f_K}(u_1,\cdots,u_k)=\summ c^{\e}_{j_1,\dots,j_k}e^{i\pi(j_1u_1+\cdots+j_ku_k)/K} $$ such that $\supp_{u\in R^k}|f_K(u)-g_\e(u)| \le\e$. Hence $$ \left|\int f_K(u)\,d\mu(u)-\int g_\e(u)\,d\mu(u)\right|\le\e. $$ On the other hand, $\int g_\e(u)\,d\mu(u)=\summ c^{\e}_{j_1,\dots,j_k} \varphi\(\frac{\pi j_1}{K},\dots,\frac{\pi j_k}{K}\)$, that is the above integral can be calculated by means of the characteristic function of the measure~$\mu$. This implies that this characteristic function determines the integrals of the form $\int f_K(u)\,d\mu(u)$. Hence it also determines the measure~$\mu$. \item{} The proof can be generalized without any essential modification to arbitrary signed measures~$\mu$ with bounded variation. \item{20.)} First we show that for all numbers $a>0$ there exists an even density function $f(u)$ whose Fourier transform $\varphi(t)$ is sufficiently smooth, e.g.\ it is twice differentiable, and it equals zero outside the interval $[-a,a]$. \item{} Indeed, let us consider a continuously differentiable function $g(u)$ which is concentrated in the interval $\[-\frac a2,\frac a2\]$, and put $g^-(u)=g(-u)$.
Then put $h(u)=g*g^-(u)$, $f(u)=\frac1{2\pi M}\int e^{-itu}h(t)\,dt$, where $*$ denotes convolution, and $M=h(0)=\int |g(u)|^2\,du$. We claim that this function $f$ is a density function, and its characteristic function is the function $\frac{h^-(t)}M$, $h^-(t)=h(-t)$, which vanishes outside the interval $[-a,a]$. Indeed, the function $h(\cdot)$ is twice differentiable (see problem~17), hence its Fourier transform tends to zero at plus--minus infinity with order $|t|^{-2}$ (see e.g.\ problem~28 of this series of problems discussed later), hence the above defined inverse Fourier transform $f(\cdot)$ of the function $\frac {h(t)}M$ is integrable, and we can apply the inverse Fourier transform for it. Since $f(\cdot)$ is an even function, this means that the function $\frac{h^-(t)}M=\int e^{itu}f(u)\,du$ is the Fourier transform of the function $f(u)$. In particular, $\frac{h(0)}M=1=\int f(u)\,du$. Finally, $f(u)\ge0$ for all numbers $u\in R^1$, since the Fourier transform of the function $g*g^-(\cdot)$ equals $\int e^{itu}g*g^-(u)\,du=\int e^{itu}g(u)\,du\int e^{itu}g^-(u)\,du =\left|\int e^{itu}g(u)\,du\right|^2\ge0$. These properties mean that the function $f(\cdot)$ is a density function. (We shall return to the above problem in the second part of this series of problems where such a construction will be useful in a different context.) \item{} Let us consider an even density function $f(u)$ whose characteristic function $\varphi(t)$ is twice differentiable, and vanishes outside of a finite interval $[-a,a]$. Consider a number $T>a$, and let us define the numbers $a_k=\frac1{2T}\int e^{-i\pi tk/T} \varphi(t)\,dt$, $k=0,\pm1,\pm2,\dots$. Let us put weights $a_k$ in the points $\frac{\pi k}T$, $k=0,\pm1,\pm2,\dots$.
We claim that in such a way we constructed a probability distribution on the lattice $\frac{\pi k}T$, $k=0,\pm1,\pm2,\cdots$, whose characteristic function agrees on the interval $[-T,T]$ with the restriction of the characteristic function $\varphi(t)$ to this interval. The characteristic function of this discrete distribution is the periodic extension of the restriction of $\varphi(t)$ to the interval~$[-T,T]$ with period $2T$. This statement means in particular, that the above defined discrete distribution together with the distribution with density function $f(\cdot)$ yield an example satisfying the statement of problem~20. \item{} The statement formulated for the discrete distribution with weights~$a_k$ holds, because on the one hand a comparison of the definition of the number~$a_k$ with the inverse Fourier transformation formula expressing the function $f(\cdot)$ yields that $a_k=\frac{\pi}T f\(\frac{\pi k}T\)\ge0$. On the other hand, the trigonometrical sum $\summ_{k=-\infty}^\infty a_ke^{i\pi kt/T}$ is the Fourier series of the function $\varphi(t)$ restricted to the interval~$[-T,T]$. In particular, $\varphi(0)=1=\summ_{k=-\infty} ^\infty a_k$. (As $\varphi(\cdot)$ is a twice differentiable function, it equals its Fourier series in all points.) \item{21.)} Let us first show that if the sequence of the probability measures $\mu_n$, $n=1,2,\dots$, is relatively compact, then it is also tight. \item{} Let us assume indirectly that this sequence of measures $\mu_n$, $n=1,2,\dots$, is not tight. Then there exist a constant $\e>0$, a subsequence $\mu_{n_k}$ of the sequence of probability measures $\mu_n$ and a sequence of positive numbers $K_k$, $k=1,2,\dots$, such that $K_k\to\infty$, and $\mu_{n_k}([-K_k,K_k]\times\cdots \times[-K_k,K_k])<1-\e$. We shall show that this subsequence $\mu_{n_k}$ of the sequence of measures $\mu_n$ has no sub-subsequence convergent in distribution. This means that the above formulated indirect assumption leads to a contradiction.
\item{} Indeed, let us assume indirectly that the sequence of measures $\mu_{n_k}$ has a subsequence $\mu_{n_{k_j}}$ which converges in distribution to a probability measure $\mu$. Then there exists a constant $K>0$ such that $\mu([-K,K]\times\cdots\times[-K,K])> 1-\frac\e2$, and the hyperplanes $u_j=\pm K$, $j=1,2,\dots,k$, have $\mu$ measure zero. Then also the relation $\limm_{j\to\infty}\mu_{n_{k_j}} ([-K,K]\times\cdots\times[-K,K]) =\mu([-K,K]\times\cdots\times[-K,K])$ should hold. But this is not possible, since the $\limsup$ of the probabilities at the left-hand side is smaller than $1-\e$, while the right-hand side is greater than $1-\frac\e2$. \item{} Let us show that if the sequence of measures $\mu_n$ is tight then it is also relatively compact. \item{} We have to show that an arbitrary subsequence of the sequence $\mu_n$ has a sub-subsequence convergent in distribution. For the sake of simpler notation let us, by reindexing, denote the elements of the subsequence again by $\mu_n$. We have to show that this (also tight) sequence of probability measures $\mu_n$ has a subsequence convergent in distribution. \item{} Let $F_n(u)=F_n(u_1,\dots,u_k)$ denote the distribution function $F_n(u_1,\dots,u_k)=\mu_n(\{(v_1,\dots,v_k)\colon\; v_j<u_j \text{ for all } j=1,\dots,k\})$. Since the points with rational coordinates constitute a countable set, we can choose with the help of a diagonal procedure a subsequence $F_{n_j}$, $j=1,2,\dots$, of the sequence $F_n$ which converges in all points $r=(r_1,\dots,r_k)$ with rational coordinates. Let $F(r)$ denote this limit, and let us extend the function $F$ to all points $u=(u_1,\dots,u_k)\in R^k$ by the formula $F(u)=\sup\{F(r)\colon\; r_j<u_j,\; j=1,\dots,k,\; r \text{ has rational coordinates}\}$. The tightness of the sequence of measures $\mu_n$ implies that the function $F$ defined in such a way is a distribution function. We show that the distribution functions $F_{n_j}$ converge to $F$ in all points of continuity of the function~$F$. Given a point of continuity $u=(u_1,\dots,u_k)$ of the function $F$ there exists for all $\e>0$ such a constant number $\delta=\delta(\e)>0$ for which $F(u)-\e\le F(u-\delta)\le F(u)\le F(u+\delta)\le F(u)+\e$, where $u\pm\delta=(u_1\pm \delta,\dots,u_k \pm\delta)$. Let us then choose two points $r=(r_1,\dots,r_k)\in R^k$ and $\bar r=(\bar r_1,\dots,\bar r_k)\in R^k$ with rational coordinates such that $u_j-\delta<r_j<u_j<\bar r_j<u_j+\delta$ for all indices $j=1,\dots,k$. Then $$ F(u)-\e\le F(u-\delta)\le \lim_{j\to\infty}F_{n_j}(r)\le \liminf_{j\to\infty}F_{n_j}(u)\le \limsup_{j\to\infty}F_{n_j}(u)\le \lim_{j\to\infty}F_{n_j}(\bar r)\le F(u+\delta)\le F(u)+\e. $$ Since this relation holds for all numbers $\e>0$, it implies that $\limm_{j\to\infty}F_{n_j}(u)=F(u)$.
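The discretization appearing in the solution of problem~20 above can also be checked numerically. The following sketch (an illustration, not part of the original text) uses the Pólya-type function $\varphi(t)=\max(1-|t|,0)$ with density $f(x)=(1-\cos x)/(\pi x^2)$; this $\varphi$ is only piecewise smooth, so it does not satisfy the twice-differentiability assumption made above, but the weight identities can still be verified: the Fourier coefficients $a_k=\frac1{2T}\int e^{-i\pi kt/T}\varphi(t)\,dt$ are nonnegative, sum to~$1$, and equal $\frac\pi T f\(\frac{\pi k}T\)$.

```python
import math

# Weights of the lattice distribution from the solution of problem 20,
# for the illustrative choice phi(t) = max(1-|t|, 0), T = 2.
# Closed form: int_{-1}^{1} (1-|t|) e^{-iwt} dt = 2(1 - cos w)/w^2.
T = 2.0

def a(k):
    # a_k = (1/2T) * int e^{-i pi k t / T} phi(t) dt
    if k == 0:
        return 1.0 / (2.0 * T)
    w = math.pi * k / T
    return (1.0 - math.cos(w)) / (T * w * w)

def polya_density(x):
    # density whose characteristic function is max(1-|t|, 0)
    if x == 0.0:
        return 1.0 / (2.0 * math.pi)
    return (1.0 - math.cos(x)) / (math.pi * x * x)

weights = [a(k) for k in range(-4000, 4001)]
all_nonneg = all(w >= -1e-15 for w in weights)
total = sum(weights)
# the identity a_k = (pi/T) f(pi k / T), checked for a few k
match = max(abs(a(k) - (math.pi / T) * polya_density(math.pi * k / T))
            for k in (0, 1, 2, 3, 5))
```

With $T=2$ the first weights are $a_0=\frac14$ and $a_{\pm1}=\frac2{\pi^2}$; since the weights decay only like $k^{-2}$, the partial sum over $|k|\le4000$ approximates $1$ to about $10^{-3}$.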
\item{22.)} Fix some $\delta>0$ and write the following identity: $$ \aligned &\frac1{2\delta}\int_{-\delta}^\delta \Re[1-\varphi_n(t)]\,dt=\int_{-\delta}^\delta\frac1{2\delta} \int_{-\infty}^\infty \[1-\cos tx\]\,dF_n(x)\,dt\\ &\qquad=\int_{-\infty}^\infty\frac1{2\delta}\int_{-\delta}^\delta [1-\cos tx]\,dt\,dF_n(x) =\int_{-\infty}^\infty\[\frac t{2\delta}-\frac{\sin tx}{2\delta x}\]_{t=-\delta}^{t=\delta}\,dF_n(x)\\ &\qquad=\int_{-\infty}^\infty\(1-\frac{\sin\delta x}{\delta x}\)\,dF_n(x) =\int_{-K}^K\(1-\frac{\sin\delta x}{\delta x}\)\,dF_n(x) \\ &\qquad\qquad+\int_{|x|>K}\(1-\frac{\sin\delta x}{\delta x}\) \,dF_n(x)=I^{\delta}_{1,n}(K)+I^{\delta}_{2,n}(K). \endaligned \tag2.6 $$ \item{} First we show with the help of relation~(2.6) that the validity of formula~(10) implies that the sequence of distribution functions~$F_n$ is tight. Since $\(1-\frac{\sin\delta x}{\delta x}\)\ge0$ for all $x$ and $\delta$, the left-hand side of formula~(2.6) yields an upper bound on the expression $I^{\delta}_{2,n}(K)$ for all numbers $\delta>0$, $n\ge1$ and $K>0$. If formula~(10) holds, then for all $\e>0$ there exist a number $\delta=\delta(\e)>0$ and a threshold index $n_0=n_0(\delta,\e)$ such that $\frac\e2\ge\int_{|x|>K} \(1-\frac{\sin\delta x}{\delta x}\)\,dF_n(x)$ for $n\ge n_0$. Put $K=\frac2\delta$. Then $1-\frac{\sin\delta x}{\delta x}\ge\frac12$ for all $|x|\ge K$. Hence the previous estimate implies that $\frac\e2\ge \int_{|x|>K}\(1-\frac{\sin\delta x}{\delta x}\)\,dF_n(x)\ge \frac12[(1-F_n(K))+F_n(-K)]$, i.e.\ $\e\ge [(1-F_n(K))+F_n(-K)]$ with this number $K$ if $n\ge n_0$. By increasing the number $K>0$ if it is necessary we can achieve that the above inequality holds for all indices $n\ge1$. This means that the sequence of distribution functions $F_n$,~$n=1,2,\dots$, is tight.
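The chain of equalities in (2.6) can be tested numerically. The sketch below (not part of the original solution) checks $\frac1{2\delta}\int_{-\delta}^\delta\Re[1-\varphi_n(t)]\,dt=\int\(1-\frac{\sin\delta x}{\delta x}\)\,dF_n(x)$ for an arbitrarily chosen two-point distribution; both the distribution and the value $\delta=0.5$ are illustrative assumptions.

```python
import math

# Two-point distribution P(xi = 1) = 0.3, P(xi = -2) = 0.7 (illustrative).
xs = [1.0, -2.0]
ps = [0.3, 0.7]
delta = 0.5

def re_one_minus_phi(t):
    # Re[1 - phi(t)] = sum_j p_j (1 - cos(t x_j))
    return sum(p * (1.0 - math.cos(t * x)) for p, x in zip(ps, xs))

# left-hand side of (2.6): trapezoidal rule on [-delta, delta]
m = 200000
dt = 2.0 * delta / m
vals = [re_one_minus_phi(-delta + i * dt) for i in range(m + 1)]
lhs = (sum(vals) - 0.5 * (vals[0] + vals[-1])) * dt / (2.0 * delta)

# right-hand side: exact sum against the discrete distribution
rhs = sum(p * (1.0 - math.sin(delta * x) / (delta * x)) for p, x in zip(ps, xs))
gap = abs(lhs - rhs)
```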
\item{} Let us prove with the help of formula~(2.6) that the tightness of the distribution functions $F_n$ implies formula~(10) and even its slightly stronger version, formula~$(10')$ where $\limsupp_{n\to\infty}$ is replaced by $\supp_{n\ge1}$. Since $\left|1-\frac{\sin\delta x}{\delta x}\right| \le2$, the tightness of the distribution functions $F_n$ makes it possible to choose a number $K=K(\e)>0$ for all $\e>0$ such that $|I^{\delta}_{2,n}(K)|=\left|\int_{|x|>K} \(1-\frac{\sin\delta x}{\delta x}\)\,dF_n(x)\right|\le\frac\e2$ for all numbers $\delta>0$ and $n=1,2,\dots$. After fixing the number $K=K(\e)>0$ we can choose a number $\bar\delta=\bar\delta(\e,K)>0$ such that the inequality $\e\ge1-\frac{\sin\delta x}{\delta x}\ge0$ holds for all numbers $|x|\le K$ if $0<\delta\le\bar\delta$. Then $|I^{\delta}_{1,n}(K)|\le\e$ for all indices $n=1,2,\dots$ and numbers $0<\delta\le\bar\delta$, hence the left-hand side of formula~(2.6) is smaller than $\frac32\e$ for all $n\ge1$ if $0<\delta\le\bar\delta$. Since $\e>0$ was arbitrary, this means that formula~$(10')$ holds. \item{23.)} The coordinates $\xi^{(j)}_n$ of the random vectors considered in this problem have characteristic functions $\varphi^{(j)}_n(t)$ which converge, by the conditions of the problem, to a function $\varphi^{(j)}(t)$ continuous in the origin with $\varphi^{(j)}(0)=1$. Hence for all $\e>0$ there exists a threshold number $\bar\delta=\bar\delta(\e)>0$ such that for all numbers $0<\delta<\bar\delta$ $$ 0\le\frac1{2\delta}\int_{-\delta}^\delta \Re[1-\varphi^{(j)}(t)]\,dt<\e. $$ Furthermore, as $\limm_{n\to\infty}\Re[1-\varphi^{(j)}_n(t)]= \Re[1-\varphi^{(j)}(t)]$ if $|t|<\delta<\bar\delta$ (we choose a smaller threshold $\bar\delta>0$ if it is necessary), and $0\le\Re[1-\varphi^{(j)}_n(t)]\le2$, it follows from Lebesgue's dominated convergence theorem that $0\le\limsupp_{n\to\infty} \frac1{2\delta}\int_{-\delta}^\delta\Re[1-\varphi^{(j)}_n(t)]\,dt<\e$. Hence the result of problem~22 shows that the distribution functions of the random variables $\xi^{(j)}_n$, $n=1,2,\dots$, are tight, i.e.\ for all $\e>0$ there exists a constant $K=K(\e)>0$ such that the inequality $P\(\left|\xi^{(j)}_n\right|>K\) <\frac\e k$ holds for all indices $n=1,2,\dots$. Since this statement holds for all numbers $j=1,\dots,k$, it implies that the distributions of the random vectors $\bar\xi_n= (\xi^{(1)}_n,\dots,\xi^{(k)}_n)$ are tight.
\item{24.)} It follows from the results of problems~21 and~23 that the sequence of distribution functions $F_n(u_1,\dots,u_k)$, $n=1,2,\dots$, is relatively compact, i.e.\ an arbitrary subsequence of the sequence of the distribution functions $F_n$ has a sub-subsequence convergent in distribution if the characteristic functions of the distribution functions $F_n$ converge to a function continuous in the origin. (Moreover, it is enough to assume that this property holds for the restriction of the characteristic functions to the coordinate axes.) To see that under the conditions of the first part of problem~24 the distribution functions $F_n$ converge in distribution it is enough to show that in this case all convergent subsequences of this sequence of distribution functions have the same limit. To justify this reduction of the problem let us choose a convergent subsequence~$F_{n_l}$ which converges to some distribution function $F(u_1,\dots,u_k)$. If this distribution function $F(\cdot)$ were not the limit of the distribution functions $F_n$, then the distribution function $F(u_1,\dots,u_k)$ would have a point of continuity $u=(u_1,\dots,u_k)$ together with a constant $\e>0$ and a sequence of indices $n_j$, $j=1,2,\dots$, such that $|F_{n_j}(u_1,\dots,u_k)-F(u_1,\dots,u_k)|>\e$ for all $j=1,2,\dots$. But this would mean that a convergent subsequence of the sequence of distribution functions $F_{n_j}$, $j=1,2,\dots$, would have a limit different from the distribution function $F(u_1,\dots,u_k)$. \item{} The statement that all convergent subsequences of the distribution functions $F_n$ have the same limit follows from the results of Theorem~A and problem~19. Indeed, it follows from Theorem~A that the characteristic function of the limit distribution function of a convergent subsequence of the distribution functions $F_n$ is the limit of the characteristic functions of the distribution functions in this subsequence.
The limit of these characteristic functions does not depend on which convergent subsequence we have considered. But by the result of problem~19 a distribution function is determined by its characteristic function. Hence the condition that the characteristic functions of a sequence of distribution functions converge to a function continuous in the origin implies that these distribution functions converge in distribution to a distribution function, and besides this the characteristic function of the limit distribution function equals the limit of the characteristic functions of the distribution functions we have considered. \item{} If a sequence of distribution functions $F_n(u_1,\dots,u_k)$ converges in distribution to a distribution function $F_0(u_1,\dots,u_k)$, then it follows from Theorem~A that the characteristic functions $\varphi_n(t_1,\dots,t_k)$ of these distribution functions converge to the characteristic function $\varphi_0(t_1,\dots,t_k)$ of the distribution function $F_0$ in all points $(t_1,\dots,t_k)\in R^k$. To complete the proof of the Fundamental Theorem we have still to show that this convergence is uniform on all compact subsets of the Euclidean space~$R^k$. \item{} To prove this statement let us observe that since the distribution functions $F_n$ converge in distribution they are tight. Hence for all $\e>0$ there exists a constant $K=K(\e)$ such that a sequence of random vectors $\bold\xi_n=(\xi^{(1)}_n,\dots,\xi^{(k)}_n)$, $n=1,2,\dots$, with distribution functions $F_n$ satisfies the inequality $P(|\bold\xi_n|>K)<\frac\e3$ for all indices $n=0,1,2,\dots$. (In the further part of the proof $t\in R^k$ and $u\in R^k$ denote points of the $k$-dimensional space, and $(u,t)$ denotes the scalar product of the vectors $u$ and $t$.)
Let us choose a finite set of points $\bold T=\left\{t^{(1)},\dots,t^{(s)}\right\}\subset \bold K$, $s=s(\bold K,\delta)$, in a compact set $\bold K\subset R^k$ which is $\delta$-dense in the set~$\bold K$, i.e.\ for all $t\in \bold K$ there is a point $t^{(j)}\in \bold T$ such that $\rho(t,t^{(j)})<\delta$, where the number $\delta=\delta(\e,K)>0$ is chosen so small that $\left|e^{i(t-t^{(j)},u)}-1\right|\le\frac\e3$ if $|u|\le K$ and $\rho(t,t^{(j)})<\delta$. Then $$ \align \left|\varphi_n(t)-\varphi_n(t^{(j)})\right|&= \left|Ee^{i(t,\bold\xi_n)}-Ee^{i(t^{(j)},\bold\xi_n)}\right| \\ &\le E\left|e^{i(t-t^{(j)},\bold\xi_n)}-1\right| I\(|\bold\xi_n|\le K\)+2P(|\bold\xi_n|>K) \le \e \endalign $$ for all numbers $n=0,1,2,\dots$. Further we can choose a threshold index $n_0=n_0(\e)$ such that the inequality $\supp_{n\ge n_0}\supp_{t^{(j)}\in\bold T}\left|\varphi_n(t^{(j)})- \varphi_0(t^{(j)})\right|<\frac\e3$ holds. It follows from the last inequalities that $\supp_{t\in\bold K}|\varphi_n(t)-\varphi_0(t)|<3\e$ if $n\ge n_0$. Since $\e>0$ was arbitrary, this means that the convergence $\varphi_n(t)\to\varphi_0(t)$ is uniform on all compact sets~$\bold K$. \item{25.)} Let $\varphi_0(t)$ denote the characteristic function of the uniform distribution in the interval $[-1,1]$, i.e.\ $\varphi_0(t)=\int_{-1}^1\frac12e^{itu}\,du =\frac{e^{it}-e^{-it}}{2it}$. Let us define the characteristic functions $\varphi_n(t)$ as the characteristic functions of the following discretizations $\mu_n$, $n=1,2,\dots$, of the uniform distribution on the interval $[-1,1]$: $\mu_n\(\frac kn\)=\frac1{2n+1}$, $-n\le k\le n$. Then $$ \varphi_n(t)=\frac1{2n+1}\summ_{k=-n}^n e^{ikt/n}=\frac{e^{i(n+1)t/n} -e^{-int/n}}{(2n+1)(e^{it/n}-1)}. $$ Simple calculation shows that $\varphi_n(t)\to\varphi_0(t)$ for all points~$t\in R^1$, and the convergence is uniform in all finite intervals. On the other hand, the convergence is not uniform on the whole real line, since $\limm_{t\to\infty}\varphi_0(t)=0$, while $\varphi_n(t)=1$ in the points of the form $t=2\pi kn$, $k=0,\pm1,\pm2,\dots$.
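The two claims at the end of the solution of problem~25 -- pointwise convergence $\varphi_n(t)\to\frac{\sin t}t$ and the equality $\varphi_n(t)=1$ at the points $t=2\pi kn$ -- can be illustrated numerically. The following sketch is not part of the original text, and the parameter values in it are arbitrary.

```python
import cmath
import math

# phi_n(t): characteristic function of the uniform distribution on the
# lattice points k/n, -n <= k <= n; phi_0(t) = sin(t)/t for uniform [-1,1].
def phi_n(n, t):
    s = sum(cmath.exp(1j * k * t / n) for k in range(-n, n + 1))
    return s / (2 * n + 1)

t = 3.0
err_small_n = abs(phi_n(200, t) - math.sin(t) / t)
err_large_n = abs(phi_n(2000, t) - math.sin(t) / t)

# at t = 2*pi*n every summand equals 1, although phi_0(t) -> 0 as t grows
n = 50
at_period = abs(phi_n(n, 2.0 * math.pi * n))
```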
(The background of this construction: We approximated the characteristic function of a distribution function having a density function by the characteristic functions of finer and finer discretizations of this distribution function. These discretized approximations of the original distribution function have lattice distributions. The characteristic functions of such approximating distributions converge to the characteristic function of the limit distribution by the Fundamental Theorem. Besides this, the characteristic function of a distribution function with a density function tends to zero at infinity by the Riemann lemma. On the other hand the characteristic function of a lattice valued distribution is a periodic function which has absolute value~1 in certain points.) \item{26.)} An example for case a): Let the measures $\mu_n$ have uniform distribution in the interval $[-n,n]$. Then $\varphi_n(t)=\frac1{2n}\int_{-n}^{n}e^{itu}\,du =\frac{e^{itn}-e^{-itn}}{2int}$. Hence $\limm_{n\to\infty}\varphi_n(t)=0$ if $t\neq0$, and $\limm_{n\to\infty}\varphi_n(0)=1$. \item{} An example for case b): Let $\mu_{2n}(\{n\}) =\mu_{2n}(\{-n\})=\frac12$, and $\mu_{2n+1}$ be the probability measure $\mu_n$ defined in case~a). Then $\varphi_{2n}(t)= \frac12(e^{itn}+e^{-itn})$, and it equals~1 in the points of the form $t=\frac{2k\pi}n$. This means that in the points of the form $t=\frac{2k\pi}l$ we have $\limm_{k\to\infty}\varphi_{n_k}(t)=0$ for a certain subsequence $n_k$, and $\limm_{k\to\infty}\varphi_{\bar n_k}(t)=1$ for a certain subsequence $\bar n_k$. \item{27.)} Let $F(x)$ denote the distribution function of the random variable $\xi$. Then $\varphi(t)=\int e^{itu}dF(u)$. By successive differentiation we get that $\frac{d^k\varphi(t)}{dt^k}=i^k\int u^ke^{itu}\,dF(u)$, in particular $\left.\frac{d^k\varphi(t)} {dt^k}\right|_{t=0}=i^k\int u^k\,dF(u)=i^kE\xi^k$ if the order of derivation and integration can be changed in the above calculation.
The above calculation is legitimate if the distribution $F$ is concentrated in the interval $[-K,K]$, because the integrand in the integral expressing the fraction $\frac{\varphi(t+h)-\varphi(t)}h$ satisfies the relation $\frac{e^{i(t+h)u}-e^{itu}}h=iue^{itu}+O(h)$, and for a fixed number $t$ the term $O(h)$ is uniform if $u\in [-K,K]$. \item{} If $E|\xi|=\int|u|\,dF(u)<\infty$, then to prove the statement of problem~27 for the first derivative we introduce the functions $$ G_n(t)=\int_{-n}^n e^{iut}\,dF(u), \quad H_n(t)=i\int_{-n}^n ue^{iut}\,dF(u), \quad n=1,2,\dots, $$ and $G(t)=\int_{-\infty}^\infty e^{iut}\,dF(u)$ and $H(t)=i\int_{-\infty}^\infty ue^{iut}\,dF(u)$. Then $\limm_{n\to\infty}G_n(t)=G(t)$, and by the first part of the solution $\frac{dG_n(t)}{dt}=H_n(t)$ with $\limm_{n\to\infty}H_n(t)=H(t)$. We show with the help of the above statements that the function $G(t)$ is differentiable, and $\frac{dG(t)}{dt}=H(t)$. Indeed, $G(t)=\limm_{n\to\infty}G_n(t)=\limm_{n\to\infty}\left[G_n(0)+ \int_0^t H_n(s)\,ds\right]$, hence the relations $\limm_{n\to\infty}G_n(0)=G(0)$, $\limm_{n\to\infty}H_n(s)=H(s)$, and the validity of the inequality $|H_n(s)|\le E|\xi|$ for all numbers $n$ and $s$ together with Lebesgue's dominated convergence theorem imply that $G(t)=G(0)+\int_0^t H(s)\,ds$. Hence $\frac{dG(t)}{dt}=H(t)$, and this is what we had to prove. \item{} The statement about the $k$-th derivative of the Fourier transform under the condition $E|\xi|^k<\infty$ can be proved similarly by induction with respect to the parameter~$k$ with the help of the identity $\frac{d^kG(t)}{dt^k} =\frac d{dt}\(\frac{d^{k-1}G(t)}{dt^{k-1}}\)$. Only in this case we have to work with the functions $G_n(t)=i^{k-1}\int_{-n}^n u^{k-1}e^{iut}\,dF(u)$, $H_n(t)=i^k\int_{-n}^n u^ke^{iut}\,dF(u)$, $n=1,2,\dots$, and $G(t)=i^{k-1}\int_{-\infty}^\infty u^{k-1}e^{iut}\,dF(u)$, $H(t)=i^k\int_{-\infty}^\infty u^ke^{iut}\,dF(u)$.
We prove the identity $G(t)=G(0)+\int_0^t H(s)\,ds$ also in this case. \item{} If $Ee^{t\xi}<\infty$ with some number $t>0$, then $P(\xi>x)=P(e^{t\xi}>e^{tx})\le e^{-tx}Ee^{t\xi}\le \const e^{-tx}$ for all numbers~$x\ge 0$. Similarly, $P(\xi<-x)\le \const e^{-tx}$, if $Ee^{-t\xi}<\infty$. Hence $P(|\xi|>x)\le\const e^{-tx}$ if $Ee^{u\xi}<\infty$ for $|u|\le t$. Conversely, if $G(x)=P(|\xi|>x) \le\const e^{-\alpha x}$, then we get by partial integration that $Ee^{t|\xi|}=-\int_0^\infty e^{tx}\,dG(x)=-\[e^{tx}G(x)\]_{0}^\infty +\int_0^\infty te^{tx}G(x)\,dx<\infty$ in the case $0<t<\alpha$. \item{} If $P(|\xi|>x)\le \const e^{-\alpha x}$, then the function $G(z)=\int e^{izx}\,dF(x)$ is analytic in the domain $\{z\colon\;|\Im z|<\alpha\}$, because in an arbitrary compact set in the interior of this domain the function $G(z)$ can be represented as the uniform limit of analytic functions (finite sums approximating this integral). This function $G(z)$ is the analytic continuation of the function $\varphi(t)$ to the above domain. \item{28.)} Let us first prove Riemann's lemma. If $g(u)=I([a,b])$ is the indicator function of an interval $[a,b]$, then $\int e^{itu}g(u)\,du=\frac{e^{ibt}-e^{iat}}{it}\to0$ if $t\to\infty$ or $t\to-\infty$. This relation also holds if $g(u)=\summ_{j=1}^k c_j I([a_j,b_j])$, i.e.\ $g(\cdot)$ is a finite linear combination of indicator functions of intervals. Such functions constitute an everywhere dense subset of the integrable functions in $L_1$ norm, that is for all numbers $\e>0$ and integrable functions $f(\cdot)$ there exists a function $g(\cdot)$ of the above form such that $\int|f(u)-g(u)|\,du<\e$. This relation implies that $\left|\int e^{itu} f(u)\,du-\int e^{itu}g(u)\,du\right|<\e$ for all numbers $t\in R^1$. The Riemann lemma is a consequence of the above relations.
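The decay rates appearing in Riemann's lemma and in problem~28 can be illustrated with two explicit Fourier transforms: for the indicator function of $[0,1]$ the transform is $\frac{e^{it}-1}{it}=O(t^{-1})$, while for the smoother triangular density $(1-|u|)^+$ it is $\(\frac{\sin(t/2)}{t/2}\)^2=O(t^{-2})$. The following sketch (an illustration, not part of the original text) checks these bounds at a few points.

```python
import math

def ft_indicator_abs(t):
    # |int_0^1 e^{itu} du| = |(e^{it} - 1)/(it)| = 2|sin(t/2)|/|t|
    return 2.0 * abs(math.sin(t / 2.0)) / abs(t)

def ft_triangle(t):
    # int_{-1}^{1} (1-|u|) e^{itu} du = (sin(t/2)/(t/2))^2
    s = math.sin(t / 2.0) / (t / 2.0)
    return s * s

ts = (10.0, 100.0, 1000.0)
vals_ind = [ft_indicator_abs(t) for t in ts]
vals_tri = [ft_triangle(t) for t in ts]
ind_ok = all(v <= 2.0 / t for v, t in zip(vals_ind, ts))
tri_ok = all(v <= 4.0 / (t * t) for v, t in zip(vals_tri, ts))
```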
\item{} If the function $f(t)$ is $k$ times differentiable, and the first $k$ derivatives are integrable functions on the real line, then we get by successive partial integration that $\varphi(t)=i^kt^{-k}\int e^{itu}\left.\frac {d^kf(s)}{ds^k}\right|_{s=u}\,du$. This relation together with Riemann's lemma implies that in this case $\varphi(t)=o(t^{-k})$ if $t\to\pm\infty$. \item{} If the function $f(\cdot)$ is analytic in a strip $\{z\colon\; \Im z\in [-A,A]\}$ around the real line and the function $f(ia+\cdot)$ is integrable for all numbers $a\in[-A,A]$, then for $t>0$ we can replace the integral $\varphi(t)=\int e^{itu}f(u)\,du$ by an integral along the line $[-\infty+ia,\infty+ia]$, $0<a<A$, and get the estimate $|\varphi(t)|\le e^{-at}\int|f(u+ia)|\,du\le\const e^{-at}$ for all numbers $t>0$. The case $t<0$ can be handled similarly, only the integral has to be replaced by an integral along the line $[-\infty-ia,\infty-ia]$ in the lower half-plane. \item{29.)} The relation $|\varphi(t)|=1$ holds if and only if $\varphi(t)=e^{ita}$, that is $Ee^{it(\xi-a)}=1$ with some real number $a$. This identity holds if and only if $P(t(\xi-a)\in2\pi \{0,\pm1,\pm2,\dots\})=1$. This means that $|\varphi(t)|=1$ holds with some $t\neq0$ if and only if the values of the random variable $\xi$ are concentrated on a lattice $\left\{\frac {2\pi k} t+a,\; k=0,\pm1,\pm2,\dots\right\}$ of width $\frac{2\pi}t$. In particular, the relation $|\varphi(t)|=1$ for all $t\in R^1$ can hold if and only if the random variable takes a constant value with probability~1. If $\xi$ is not a deterministic constant, and it is lattice distributed, then there is a largest $h>0$ such that the distribution of $\xi$ has period~$h$. Indeed, in this case there are two numbers $a$ and $b$, $a\neq b$, such that $P(\xi=a)>0$, and $P(\xi=b)>0$. Then the distribution of $\xi$ may have a period only of the form $\frac{b-a}k$, where $k$ is a positive integer. Since the distribution of $\xi$ has some period $h>0$, and the possible periods constitute a subset of the numbers $\frac{b-a}k$, $k=1,2,\dots$, there is a largest period among them, i.e.\ the above statement holds. \item{} To finish the solution of the problem it is enough to observe that the characteristic function is continuous on the real line, and the characteristic function $\varphi(\cdot)$ of a non-lattice valued random variable satisfies the inequality $|\varphi(t)|<1$ if $t\neq0$.
Hence $\supp_{A\le |t|\le B}|\varphi(t)|<1$ in this case. \item{30.)} The characteristic function of the random variable $\xi$ considered in this problem equals $\varphi(t)=\frac12\(\cos t+\cos (\sqrt2t)\)$. There exist pairs of integers $(p_n,q_n)$, $n=1,2,\dots$, such that $q_n\to\infty$, and $|\sqrt2q_n-p_n|\le \frac1{q_n}$. (Such pairs of integers $(p_n,q_n)$ can be found as the numerators and denominators of the convergents of the continued fraction expansion of the number $\sqrt 2$.) Choose $t_n=2\pi q_n$. Then $\cos t_n=1$, and $\limm_{n\to\infty}\cos (\sqrt2t_n)=1$. Hence $t_n\to\infty$, and $|\varphi(t_n)|\to1$, if $n\to\infty$. On the other hand, $|\varphi(t)|<1$ if $t\neq0$. \item{} If $\xi$ is a random variable whose values are concentrated in a subset of the real line consisting of finitely or countably many points, then for all numbers $\e>0$ there exist an integer $s=s(\e)<\infty$ and points $u_1,\dots, u_s$ on the real line such that $P(\xi\in\{u_1,\dots,u_s\})\ge1-\frac\e3$. Further, by a classical (and simple) result of number theory due to Dirichlet, for all numbers $N\ge1$ there exist an integer $1\le q_N\le N$ and a set of $s$ integers $p_1,\dots,p_s$ such that $|q_N u_k-p_k|\le N^{-1/s}$ for all indices $k=1,\dots,s$. Hence, by choosing the number $N$ sufficiently large we can achieve with the choice $t=2\pi q_N$ that $\Re e^{itu_k}\ge1- \frac\e3$ for all indices $1\le k\le s$. Then $\Re Ee^{it\xi}\ge \summ_{k=1}^s P(\xi=u_k)(1-\frac\e3)-\frac\e3 \ge1-\e$. Since we can make this construction for all $\e>0$, there exists a sequence of positive integers $q_N$, $N=1,2,\dots$, such that the sequence $t_N=2\pi q_N$ satisfies the relation $\limm_{N\to\infty} \varphi(t_N)=1$. Since the random variable $\xi$ is not lattice distributed, we have $\supp_{2\pi\le t\le B}|\varphi(t)|\le q<1$ for any $B>2\pi$ with an appropriate constant $q=q(B)<1$.
This implies that the above constructed sequence of positive real numbers $t_N$ satisfies the relation $t_N\to\infty$ if $N\to\infty$. \item{31.)} Let us denote the distribution function of the random variable $\xi$ by $F(u)$, its characteristic function by $\varphi(t)$ and put $\Re \varphi(t)=u(t)$. Then for all $h\neq0$ $$ \frac{1-u(h)}{h^2}=\int_{-\infty}^\infty \frac{1-\cos hu}{h^2u^2}u^2\,dF(u), $$ and since $\limm_{h\to0}\frac{1-\cos hu}{h^2u^2}=\frac12$, $\frac{1-\cos hu}{h^2u^2}\ge0$ for all numbers $u\in R^1$ and $h\in R^1$, hence $\liminff_{h\to0} \int_{-\infty}^\infty \frac{1-\cos hu}{h^2u^2}u^2\,dF(u)\ge\frac12 \int_{-\infty}^{\infty}u^2\,dF(u) =\frac12E\xi^2$ by Fatou's lemma. Hence, to solve the problem it is enough to show that $\limsupp_{h\to0}\frac{1-u(h)}{h^2}<\infty$ if the function $\varphi(t)$ is twice differentiable in the origin. If the function $\varphi(t)$ is twice differentiable in the origin, then the same relation holds for the function~$u(t)$. Then the derivative $u'(t)=\frac{du(t)}{dt}$ exists in a small neighbourhood of the origin, and $u'(0)=0$, as $u(\cdot)$ is an even function. Further, $u(0)=1$, $u(t)\le 1$ for all numbers $t\in R^1$, hence $0\le\frac{1-u(h)}{h^2}=\frac{u(0)-u(h)}{h^2}=-\frac {u'(\vartheta h)}h=\frac{u'(0)-u'(\vartheta h)}h\le \supp_{0<s\le h} \frac{u'(0)-u'(s)}s<\infty$ for small numbers $h>0$, if $u(\cdot)$ is twice differentiable in the origin, where $0\le\vartheta\le1$ is an appropriate number, and $u'(\cdot)$ denotes the derivative of the function $u(\cdot)$. Hence $\limsupp_{h\to0}\frac{1-u(h)}{h^2}<\infty$ in this case. \item{} By induction with respect to the parameter $k$ we can see that if $\xi$ is a random variable with distribution function $F$, and the $2k$-th derivative of the characteristic function in the origin is finite, then the random variable $\xi$ has finite $2k$-th moment.
Indeed, by the induction hypothesis $m_{2k-2}=\int u^{2k-2}\,dF(u)<\infty$, hence the formula $F^{(k-1)}(\,du)=\frac{u^{2k-2}F(\,du)}{m_{2k-2}}$ defines a distribution function. Further, the characteristic function of the distribution function $F^{(k-1)}$ has a finite second derivative in the origin, since it equals the $(2k-2)$-th derivative of the characteristic function of the distribution function $F$ multiplied by $(-1)^{k-1}m_{2k-2}^{-1}$. Hence by the already proven part of this problem a random variable with distribution function $F^{(k-1)}$ has finite second moment. This is equivalent to the statement that a random variable with distribution function $F$ has finite $2k$-th moment. \item{32.)} The solution of this problem applies a similar idea as the proof of problem~22. In problem~22 we deduced from some kind of continuity of the characteristic function of a distribution function in the origin some sort of estimate about the tail behaviour of the distribution function. In this problem we exploit that if we know more about the smoothness of the characteristic function in the neighbourhood of the origin, then we get sharper estimates about the tail behaviour of the distribution function. \item{} Let $F(x)$ denote the distribution function of the random variable $\xi$, and put $u(t)=\Re \varphi(t)$. The estimate $\left|\frac{1-u(h)}h\right|=|u'(\vartheta h)|\le \const h^\alpha$ holds with an appropriate constant $0<\vartheta<1$ under the conditions of the problem. Then the following analog of formula~(2.6) holds. $$ \allowdisplaybreaks \align \const h^{1+\alpha}&>\int_{-h}^h \frac{1-u(t)}h\,dt= 2\int_{-\infty}^{\infty}\(1-\frac{\sin h x}{hx}\)\,dF(x)\\ &\qquad=2\int_{|x|\le 2/h}\(1-\frac{\sin h x}{hx}\)\,dF(x) +2\int_{|x|>2/h}\(1-\frac{\sin h x}{h x}\) \,dF(x) \\ &\qquad\ge2\int_{|x|>2/h}\frac12 \,dF(x) =P\(|\xi|>\frac2h\).
\endalign $$ This implies that $P(|\xi|>u)\le \const u^{-1-\alpha}$ for all numbers $u>0$. Let us introduce the function $G(u)=P(|\xi|>u)$. Integration by parts yields that $$ E|\xi|=-\int_0^\infty u\,dG(u) =-\[uG(u)\]_0^\infty+\int_0^\infty G(u)\,du=\int_0^\infty G(u)\,du<\infty. $$ \item{} If the characteristic function $\varphi(t)$ is $(2k+1)$-times differentiable in a small neighbourhood of the origin, and the $(2k+1)$-th derivative is a Lipschitz $\alpha$ function, $\alpha>0$, in this small neighbourhood, then let us introduce the distribution function $F^{(k)}(\,du) =\frac{u^{2k}F(\,du)}{m_k}$, where $F(\cdot)$ denotes the distribution function of the random variable $\xi$, and $m_k=\int u^{2k}\,dF(u)$. The argument applied in the solution of the previous problem can be adapted to the present case. The already proven part of this problem can be applied to a random variable with distribution function $F^{(k)}$, and it yields that $E|\xi|^{2k+1}<\infty$. \item{33.)} The relation $(-1)^k E\xi^{2k}=\left.\frac{d^{2k} \varphi(t)}{dt^{2k}}\right|_{t=0} =\frac{(2k)!}{2\pi i} \oint_{|z|=R}\frac{\varphi(z)}{z^{2k+1}}\,dz$ holds for all positive integers~$k$ by the result of problem~31 and the Cauchy integral formula if the circle with center in the origin and radius $R$ is in the domain of analyticity of the function $\varphi(z)$. Since $\supp_{|z|=R}|\varphi(z)|<\infty$ the above relation implies that $E\xi^{2k}\le (ak)^{2k}$ with some constant $a>0$, and $P(|\xi|>x)\le\(\frac{ak}x\)^{2k}$ for all positive integers $k\ge1$. If $x\ge C_0$ with some number $C_0>0$, then let us fix the constant $k=\[\frac x{2a}\]$, where $[u]$ denotes the greatest integer not exceeding~$u$. With this choice $\frac{ak}x\le\frac12$, hence $P(|\xi|>x)\le2^{-2k}\le\const e^{-\alpha x}$ with $\alpha=\frac{\log4}{2a}$, i.e.\ the inequality $P(|\xi|>x)<\const e^{-\alpha x}$ holds for all numbers $x>0$ with an appropriate constant $\alpha>0$.
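The moment-based tail estimate of problem~33 can be made concrete for the standard normal distribution, whose characteristic function $e^{-t^2/2}$ is entire and whose even moments are $E\xi^{2k}=(2k-1)!!$. The sketch below (an illustration, not part of the original solution; the value $x=10$ is arbitrary) minimizes the Markov bound $P(|\xi|>x)\le E\xi^{2k}x^{-2k}$ over $k$ and compares it with the exact tail probability.

```python
import math

x = 10.0
bound = float("inf")
moment = 1.0                      # E xi^{2k}, starting from k = 0
for k in range(1, 61):
    moment *= 2 * k - 1           # E xi^{2k} = (2k-1)!! for N(0,1)
    bound = min(bound, moment / x ** (2 * k))

# exact tail of the standard normal distribution: P(|xi| > x) = erfc(x/sqrt(2))
exact_tail = math.erfc(x / math.sqrt(2.0))
```

The optimal $k$ is close to $x^2/2$, and the resulting bound is exponentially small in $x$, in agreement with the conclusion of problem~33, while the Markov inequality guarantees that it dominates the exact tail.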
\item{} Let us remark that we had to apply the above relatively complicated argument, because at the beginning of the proof we did not know that the analytic continuation of the characteristic function of a random variable $\xi$ is always the function $\varphi(z)=Ee^{iz\xi}$. \item{34.)} If the characteristic function of a random variable $\xi$ is integrable, then the result about the inverse Fourier transform can be applied. It implies that the random variable $\xi$ also has a density function $f(x)$, and the identity $f(x)=\frac{1}{2\pi} \int e^{-itx}\varphi(t)\,dt$ holds. Also the formula $\frac{d^kf(x)}{dx^k}=\frac{(-i)^k}{2\pi} \int t^k e^{-itx}\varphi(t)\,dt$ holds if the order of integration and differentiation can be changed in the inverse Fourier transform formula. In the solution of problem~27 we have proved that this change of order can be carried out if the functions $|t|^j|\varphi(t)|$ are integrable for all indices $0\le j\le k$. Hence the statement of this problem holds if $|\varphi(u)|<\const |u|^{-(k+1+\e)}$ with some number $\e>0$. If $|\varphi(u)|<\const e^{-\alpha|u|}$ with some constant $\alpha>0$, then the function $f(z)=\frac1{2\pi} \int e^{-itz}\varphi(t)\,dt$ is an analytic continuation of the density function $f(x)$ of the random variable~$\xi$. \item{35.)} Let $\varphi(t)=Ee^{it\xi_1}$ denote the characteristic function of the random variable $\xi_1$. The normalized partial sum $\frac{S_n}{\sqrt n}$ has the characteristic function $\varphi^n\(\frac t{\sqrt n}\)$. Hence by the Fundamental Theorem about convergence in distribution and the result of problem~13a) it is enough to show that $\varphi^n\(\frac t{\sqrt n}\)\to e^{-t^2/2}$ for all numbers $t\in R^1$ as $n\to\infty$.
On the other hand, a Taylor expansion of the function $\varphi(t)$ around the point $t=0$ and the result of problem~27 yield that $\varphi(t)=1-\frac{t^2}2+o(t^2)$ if $t=o(1)$, and $\varphi\(\frac t{\sqrt n}\)=1-\frac {t^2}{2n}+o\(\frac1n\) =e^{-t^2/2n+o(n^{-1})}$ if $t=O(1)$. The last relation implies that $\varphi^n\(\frac t{\sqrt n}\)=e^{-t^2/2+o(1)}\to e^{-t^2/2}$ for all fixed numbers~$t$ if $n\to\infty$. \item{36.)} Let us introduce the function $F(t)=e^{it}-\(1+\frac{it}{1!}+\cdots+\frac{(it)^k}{k!}\)$, and let us consider its derivatives $F^{(j)}(t)$, $j=1,2,\dots,k$. Observe that $F^{(j)}(0)=0$ for all $0\le j\le k$, and $|F^{(k+1)}(t)|=|i^{k+1}e^{it}|=1$ for all $t\in R^1$. We get by induction with respect to the parameter~$j$, going downward from $j=k$, that $|F^{(j)}(t)|\le \left|\int_0^t |F^{(j+1)}(s)|\,ds\right|\le\left|\int_0^t \frac {|s|^{k-j}\,ds}{(k-j)!}\right|=\frac{|t|^{k+1-j}}{(k+1-j)!}$ for all indices $j=k,k-1,\dots,0$. In particular, $|F(t)|\le \frac{|t|^{k+1}}{(k+1)!}$, and this is the statement of the problem. \parindent=30pt \item{37.a)} By applying formula (11) with $k=1$ we get that $|e^{it\xi}-1-it\xi|\le \frac{t^2\xi^2}2$. Taking the expected value of the expression between the absolute value signs at the left-hand side, and exploiting that $E\xi=0$, we get that $|\varphi(t)-1|\le \frac{t^2}2E\xi^2$. If $E\xi^2<\e$ with a sufficiently small number $\e=\e(t)>0$, then $|1-\varphi(t)|\le \frac14$, and $|\log\varphi(t)+(1-\varphi(t))|= |\log\(1-(1-\varphi(t))\)+(1-\varphi(t))| \le |1-\varphi(t)|^2\le t^4\(E\xi^2\)^2$. \item{b.)} The sequence of random variables $S_k$ converges in distribution to a normal random variable with expected value $m$ and variance $\sigma^2$ if and only if
$$
\limm_{k\to\infty}\prodd_{j=1}^{n_k}\varphi_{k,j}(t) =e^{-\sigma^2t^2/2+imt} \quad\text{for all } t\in R^1. \tag2.7
$$
Let us take the logarithm in the relation~(2.7). We claim that formula~(2.7) is equivalent to formula $(2.7')$ formulated below.
$$
\limm_{k\to\infty} \summ_{j=1}^{n_k}\log\varphi_{k,j}(t)=-\frac {\sigma^2t^2}2+imt \quad\text{for all } t\in R^1. \tag$2.7'$
$$
The equivalence of relations (2.7) and $(2.7')$ is less obvious than it may seem at first sight. Some difficulty arises because in the space of complex numbers, where we have to work, the equation $e^{z_1}=e^{z_2}$ only implies that $z_1=z_2+2k\pi i$ with some integer $k$, and the numbers $z_1$ and $z_2$ may be different. Hence although the implication $(2.7')\Rightarrow (2.7)$ needed in the proof of the central limit theorem is straightforward, the proof of the implication $(2.7)\Rightarrow (2.7')$ needed in the proof of the converse of the central limit theorem requires a more careful argument. \item{} Before the proof of the implication $(2.7)\Rightarrow (2.7')$ let us remark that because of the uniform smallness condition, for a fixed value $t\in R^1$ the characteristic functions $\varphi_{k,j}(t)$ we consider are in a small neighbourhood of the number~1 for all $1\le j\le n_k$ if the index $k$ is sufficiently large. Hence for a sufficiently large $k\ge k_0(t)>0$ there is a version of the logarithms $\log\varphi_{k,j}(t)$ of the functions $\varphi_{k,j}(t)$ which is in a small neighbourhood of the origin, say $|\log\varphi_{k,j}(t)| <\frac12$. We take this version of the logarithm in formula~$(2.7')$. \item{} To prove the implication $(2.7)\Rightarrow (2.7')$ let us first make the following observation. We have seen in the proof of problem~24 (in the proof of the Fundamental Theorem about convergence of distribution functions) that the convergence in formula (2.7) is uniform in all finite intervals. Beside this, the right-hand side of formula (2.7) is separated both from zero and from infinity in a finite interval.
Hence we get by taking the logarithm in formula (2.7) that for all $\e>0$ and $T>0$ there exists a $k_0=k_0(\e,T)$ such that
$$
\left|\summ_{j=1}^{n_k}\log\varphi_{k,j}(t)-\(-\frac {\sigma^2t^2}2+imt\)+2\pi i\,l_k(t) \right|<\e \quad\text{if } |t|\le T \text{ and } k\ge k_0 \tag$2.7''$
$$
with some integer $l_k(t)$ which may depend both on $k$ and $t$. We have to show that $l_k(t)\equiv0$ in formula~$(2.7'')$. First we prove the weaker statement that $l_k(t)=l_k$, i.e.\ this constant in formula~$(2.7'')$ does not depend on $t$. Indeed, otherwise for all $\delta=\delta_k>0$ a pair of constants $-T\le s,t\le T$, $|t-s|<\delta$, could be found such that $l_k(t)\neq l_k(s)$, i.e.\ $|2\pi i(l_k(t)-l_k(s))|\ge 2\pi$. But this is not possible, because the function $g_k(t)=\summ_{j=1}^{n_k}\log\varphi_{k,j}(t)$ is uniformly continuous in the interval $[-T,T]$. Hence fixing a small $\e>0$ we can write $|g_k(t)-g_k(s)|<\e$ for sufficiently small $\delta=\delta(k,\e,T)>0$. Beside this, also the inequality $\left|\(-\frac{\sigma^2t^2}2+imt\)- \(-\frac{\sigma^2s^2}2+ims\)\right|<\e$ holds if $\delta>0$ is sufficiently small. Let $\e<\frac13$. The above relations together with formula $(2.7'')$ would contradict the assumption $|2\pi i(l_k(t)-l_k(s))|\ge 2\pi$. Hence $l_k(t)=l_k$. Finally, it is easy to see that $l_k=l_k(0)=0$. Hence relation $(2.7'')$ holds for all $\e>0$ with $l_k(t)\equiv0$, and this implies relation $(2.7')$. \item{} Because of the uniform smallness condition of problem~37 and the already proved part~a) of this problem we can write for $k\ge k_0(t)$
$$
\left|\summ_{j=1}^{n_k}\log\varphi_{k,j}(t)- \summ_{j=1}^{n_k}(\varphi_{k,j}(t)-1)\right|\le t^4\summ_{j=1}^{n_k} \(E\xi^2_{k,j}\)^2\le\const t^4 \max_{1\le j\le n_k}E\xi_{k,j}^2,
$$
since $\limm_{k\to\infty}\summ_{j=1}^{n_k}E\xi_{k,j}^2=1$.
Because of this relation and the uniform smallness condition
$$
\limm_{k\to\infty}\left|\summ_{j=1}^{n_k}\log\varphi_{k,j}(t)- \summ_{j=1}^{n_k}(\varphi_{k,j}(t)-1)\right|=0.
$$
These relations also imply part b) of problem 37. \parindent=20pt \item{38.)} Let us fix a number $\e>0$. Then
$$
E\xi_{k,j}^2=E\xi_{k,j}^2I(\{|\xi_{k,j}|<\e\})+ E\xi_{k,j}^2I(\{|\xi_{k,j}|\ge\e\}) \le \e^2+\sum_{j=1}^{n_k} E\xi_{k,j}^2I(\{|\xi_{k,j}|\ge\e\}),
$$
hence by the Lindeberg condition $\limsupp_{k\to\infty}\supp_{1\le j\le n_k}E\xi_{k,j}^2\le\e^2$. Since this formula holds for all numbers $\e>0$, it implies the uniform smallness property. \item{} By formula (12) (with the choice $m=0$ and $\sigma=1$) to prove the central limit theorem it is enough to show that under the conditions of this problem
$$
\lim_{k\to\infty}\sum_{j=1}^{n_k}\(\varphi_{k,j}(t)-1\)= \lim_{k\to\infty}\summ_{j=1}^{n_k} E\(e^{it\xi_{k,j}}-1-it\xi_{k,j}\)=-\frac{t^2}2,
$$
or, since $\limm_{k\to\infty}\summ_{j=1}^{n_k}E\xi^2_{k,j}=1$, it is enough to show that
$$
\lim_{k\to\infty}\summ_{j=1}^{n_k} E\(e^{it\xi_{k,j}}-1-it\xi_{k,j}+\frac{t^2}2\xi_{k,j}^2\)=0.
$$
By applying formula~(11) for $k=2$ if $|tx|\le\e$ and for $k=1$ if $|tx|\ge\e$ together with the Lindeberg condition we get that
$$
\align
\left|\summ_{j=1}^{n_k} E\(e^{it\xi_{k,j}}-1-it\xi_{k,j}+\frac{t^2}2\xi_{k,j}^2\) I(\{|\xi_{k,j}|\le \e\})\right| &\le \summ_{j=1}^{n_k} E\frac{|t\xi_{k,j}|^3}6I(\{|\xi_{k,j}|\le \e\}) \\ &\le\e |t|^3 \summ_{j=1}^{n_k}E\frac{\xi_{k,j}^2}6\le \const\e,
\endalign
$$
and
$$
\align
&\lim_{k\to\infty}\left|\summ_{j=1}^{n_k} E\(e^{it\xi_{k,j}}-1-it\xi_{k,j}+\frac{t^2}2\xi_{k,j}^2\) I(\{|\xi_{k,j}|>\e\})\right| \\ &\qquad\qquad\le \lim_{k\to\infty}\summ_{j=1}^{n_k} Et^2\xi_{k,j}^2 I(\{|\xi_{k,j}|>\e\})=0.
\endalign
$$
Since these relations hold for all $\e>0$, they imply formula (12) with the choice $m=0$ and $\sigma^2=1$. In such a way we have solved problem~38.
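The convergence that drives the above proofs can also be watched numerically. In the simplest triangular array satisfying the Lindeberg condition, $\xi_{k,j}=\pm\frac1{\sqrt k}$ with probability $\frac12$ each (an illustrative choice, not taken from the text), the product of the characteristic functions of the $k$-th row equals $\cos^k\(\frac t{\sqrt k}\)$, and it approaches $e^{-t^2/2}$ as $k\to\infty$:

```python
import math

# Row characteristic function of the array xi_{k,j} = +-1/sqrt(k):
# each factor is cos(t/sqrt(k)), so the row product is cos(t/sqrt(k))^k.
def row_char_function(t, k):
    return math.cos(t / math.sqrt(k)) ** k

for t in (0.5, 1.0, 2.0):
    limit = math.exp(-t * t / 2.0)
    err_small = abs(row_char_function(t, 100) - limit)
    err_large = abs(row_char_function(t, 100000) - limit)
    assert err_large < err_small   # the error decreases as k grows
    assert err_large < 1e-3        # and is already small for k = 100000
```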
\item{39.)} If the conditions of the problem are satisfied, then $\limm_{k\to\infty}\summ_{j=1}^{n_k}\Re(\varphi_{k,j}(t)-1)= -\frac{t^2}2$. Further, since the sum of the variances of the random variables in the $k$-th row is almost~1 for large indices~$k$, we have
$$
\limm_{k\to\infty}\summ_{j=1}^{n_k} E\(\cos (t\xi_{k,j})-1+\frac{t^2\xi_{k,j}^2}2\)=0\quad \text{for all numbers } t\in R^1.
$$
Let us observe that $\cos u-1+\frac{u^2}2\ge 0$ for all numbers $u\in R^1$, since the function $F(u)=\cos u-1+\frac{u^2}2$ satisfies the relations $F(0)=0$, $F'(0)=0$ and $F''(u)=1-\cos u\ge0$ for all numbers $u\in R^1$. Besides, $\cos u-1+\frac{u^2}2\ge \frac {u^2}4$ if $|u|\ge3$, since in this case $\frac{u^2}4\ge2\ge1-\cos u$. The above inequalities imply that
$$
\limm_{k\to\infty}\summ_{j=1}^{n_k} \frac{t^2}4E\xi_{k,j}^2 I\(\left\{|\xi_{k,j}|\ge \frac3{|t|}\right\}\)=0.
$$
We get the solution of problem~39 from this relation with the choice $t=\frac3\e$. \item{40.)} By the Schwarz inequality and the relation $E\xi_{k,j}=0$
$$
\(E\xi_{k,j}I(|\xi_{k,j}|\le\e)\)^2= \(E\xi_{k,j}I(|\xi_{k,j}|>\e)\)^2\le E\xi_{k,j}^2I(|\xi_{k,j}|>\e).
$$
Hence the Lindeberg condition implies that
$$
\limm_{k\to\infty} \summ_{j=1}^{n_k}\(E\xi_{k,j} I(|\xi_{k,j}|\le\e)\)^2\le \limm_{k\to\infty} \summ_{j=1}^{n_k}E\xi_{k,j}^2I(|\xi_{k,j}|>\e)=0,
$$
and
$$
\limm_{k\to\infty}\summ_{j=1}^{n_k}E\xi_{k,j}^2 I(|\xi_{k,j}|\le\e)=1.
$$
These two relations imply the first statement of the problem. \item{} To prove the second statement let us first observe that $E(\bar \xi_{k,j}-\xi_{k,j})=E\bar \xi_{k,j}-E\xi_{k,j}=0$, $1\le j\le n_k$. Hence the Chebyshev inequality and the Lindeberg condition imply that for all $\e>0$
$$
\align
P(|S_k-\bar S_k|>\e)&\le\frac{\text{Var}\,(S_k-\bar S_k)}{\e^2}= \frac1{\e^2}\sum_{j=1}^{n_k} \text{Var}\,(\xi_{k,j}-\bar\xi_{k,j})\\ &\le \frac1{\e^2}\sum_{j=1}^{n_k} E\xi_{k,j}^2I(|\xi_{k,j}|>\e)\to0.
\endalign
$$
Since this relation holds for all $\e>0$, it implies the second statement of the problem.
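The first step of the solution of problem~40 can be checked on a concrete mean-zero distribution. The two-point distribution below is a hypothetical example chosen only for illustration; the computation verifies that the two truncated expectations cancel and that the Schwarz inequality step holds:

```python
# xi takes the value -3 with probability 1/7 and the value 1/2 with
# probability 6/7, so that E xi = 0; the truncation level is eps = 1.
values = [(-3.0, 1.0 / 7.0), (0.5, 6.0 / 7.0)]
eps = 1.0

mean = sum(v * p for v, p in values)
small = sum(v * p for v, p in values if abs(v) <= eps)     # E xi I(|xi| <= eps)
large = sum(v * p for v, p in values if abs(v) > eps)      # E xi I(|xi| > eps)
tail2 = sum(v * v * p for v, p in values if abs(v) > eps)  # E xi^2 I(|xi| > eps)

assert abs(mean) < 1e-12           # E xi = 0
assert abs(small + large) < 1e-12  # the two truncated means cancel each other
assert small ** 2 <= tail2         # (E xi I(|xi|<=eps))^2 <= E xi^2 I(|xi|>eps)
```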
\item{41.)} By the H\"older inequality
$$
\align
\summ_{k=1}^n E\xi_k^2I(|\xi_k|>\e s_n) &\le \sum_{k=1}^n \( E|\xi_k|^{2+\alpha}\)^{2/(2+\alpha)} P(|\xi_k|>\e s_n)^{\alpha/(2+\alpha)}\\ &\le \( \summ_{k=1}^n E|\xi_k|^{2+\alpha}\)^{2/(2+\alpha)}\(\summ_{k=1}^n P(|\xi_k|>\e s_n)\)^{\alpha/(2+\alpha)}.
\endalign
$$
On the other hand, by the Chebyshev inequality $\summ_{k=1}^n P(|\xi_k|>\e s_n)\le\summ_{k=1}^n\frac{E\xi_k^2}{\e^2s_n^2} =\frac1{\e^2}$. Hence if the conditions of part~a) of problem~41 hold, then
$$
\limm_{n\to\infty}\frac1{s_n^2} \summ_{k=1}^n E\xi_k^2I(|\xi_k|>\e s_n)=0,
$$
i.e.\ these conditions imply the Lindeberg condition. \item{} In the case considered at the end of part~a.) $s_n^2\ge \const n$, and $\summ_{k=1}^nE|\xi_k|^{2+\alpha}=o\(n^{(2+\alpha)/2}\)$ if $n\to\infty$, hence in this case the condition formulated in part~a) of problem~41 is satisfied. \item{} If $\xi_1,\xi_2,\dots$ is a sequence of independent and identically distributed random variables, $E\xi_1=0$, $0<E\xi_1^2<\infty$, then $s_n^2=nE\xi_1^2$, and
$$
\frac1{s_n^2}\summ_{k=1}^n E\xi_k^2I(|\xi_k|>\e s_n)=\frac1{E\xi_1^2} E\xi_1^2I\(|\xi_1|>\e \sqrt{nE\xi_1^2}\)\to0
$$
if $n\to\infty$. This means that the Lindeberg condition holds also in this case. \item{42.)} If the point $x$ is a point of continuity of the limit distribution function $F(\cdot)$, then for all $\e>0$ there exists a $\delta>0$ such that $F(x)-\frac\e2<F(x-\delta)\le F(x+\delta)<F(x)+\frac\e2$, and we may choose $\delta>0$ in such a way that the points $x\pm\delta$ are also points of continuity of the function $F(\cdot)$. Then there exists an index $n_0=n_0(\delta,\e)$ such that $P(S_n<x+\delta)<F(x+\delta)+\frac\e4$, $P(S_n>x-\delta) <1-F(x-\delta)+\frac\e4$, and $P(|T_n|\ge \delta)<\frac\e4$ if $n\ge n_0$. Then $P(S_n+T_n<x)\le P(S_n<x+\delta)+P(|T_n|\ge\delta)<F(x)+\e$ and $P(S_n+T_n>x)\le P(S_n>x-\delta)+P(|T_n|\ge\delta)<1-F(x)+\e$ if $n\ge n_0(\e,\delta)$. Since the above statements hold for all $\e>0$, they imply the statement of the problem. \item{43.)} Let the independent random variables $\xi_n$, $n=1,2,\dots$, have the following distribution: $P(\xi_n=n)=P(\xi_n=-n)=\frac1{4n^2}$, $P(\xi_n=1)=P(\xi_n=-1)=\frac14$, and $P(\xi_n=0)=\frac12-\frac1{2n^2}$, $n=1,2,\dots$.
Then $E\xi_n=0$, $E\xi_n^2=1$. Put $X_k=\xi_k I(|\xi_k|\le 2)$, $Y_k=\xi_k I(|\xi_k|>2)$ for all $k=1,2,\dots$. Consider the partial sums $S_n=\summ_{k=1}^n X_k$ and $T_n=\summ_{k=1}^n Y_k$, $n=1,2,\dots$. Then the normalized partial sums $\sqrt{\frac 2n} S_n$ converge in distribution to the standard normal distribution. Indeed, the partial sums of the random variables $X_k$, $k=1,2,\dots$, satisfy the conditions of the central limit theorem, and $EX_k^2=\frac12$ for all $k\ge3$. (The first few terms do not influence the limit behaviour.) On the other hand, the expressions $\sqrt{\frac 2n} T_n$ converge stochastically to zero if $n\to\infty$. Indeed, $\summ_{k=1}^\infty P(Y_k\neq0)<\infty$, hence with probability~1 only finitely many terms $Y_k(\oo)$ do not equal zero, and $\summ_{k=1}^\infty |Y_k(\oo)|\le K(\oo)<\infty$. Since $\sqrt{\frac2n} \summ_{k=1}^n\xi_k=\sqrt{\frac2n}S_n+\sqrt{\frac2n} T_n$, the above calculation and the result of problem~42 imply that the above construction yields an example for the statement of part~a) of problem~43. \item{} Let us make some slight modifications in the construction of the above random variables~$\xi_n$. Let us put, similarly to the previous construction, $P(\xi_n=1)=P(\xi_n=-1)=\frac14$ and $P(\xi_n=n)=\frac1{4n^2}$. Let us define further
$$
P\(\xi_n=\frac1{\sqrt n}\)=\frac12-\frac1{2n^2},\quad\text{and} \quad P\(\xi_n=-n-2n^{3/2}\(1-\frac1{n^2}\)\)=\frac1{4n^2},
$$
$n=1,2,\dots$. Then $E\xi_n=0$, $n=1,2,\dots$. By applying the truncation technique of part~a) and carrying out a natural modification of the calculation following it we get that these random variables $\xi_n$, $n=1,2,\dots$, yield an example for part~b) of problem~43. \item{44.)} Let us choose an arbitrary number $L$ such that $\int u^2 F_0(\,du)>L$. It is enough to show that $\liminff_{n\to\infty}\int u^2 F_n(\,du)\ge L$. There exists a bounded and continuous function $g(u)=g_L(u)$ for which $g(u)\le u^2$, and $\int g(u)F_0(\,du)\ge L$.
Indeed, the function $g(u)=g_L(u)=\min (u^2,K)$ satisfies this property if we choose a sufficiently large constant $K=K(L)>0$. Then the characterization of the convergence in distribution given in Theorem~A implies that $\liminff_{n\to\infty}\int u^2 F_n(\,du)\ge \limm_{n\to\infty}\int g(u) F_n(\,du)=\int g(u)F_0(\,du)\ge L$. \item{45.)} The distribution of the random vector $\bold Z=(Z_1,\dots,Z_m)$ is determined by its characteristic function. (See the result of problem~19.) On the other hand, the characteristic function $Ee^{i(t_1Z_1+\cdots+t_mZ_m)}$ of the random vector $(Z_1,\dots,Z_m)$ in the point $(t_1,\dots,t_m)$ agrees with the characteristic function of the random variable $Z(t_1,\dots,t_m)$ in the point~1. Hence the characteristic function, and as a consequence the distribution function, of the random vector $\bold Z$ is determined by the distributions of the one-dimensional random variables considered above. \item{46.)} First we show with the help of the Fundamental Theorem about the convergence of distribution functions that the random vectors $\bold Z_n=(Z_{1,n},\dots,Z_{m,n})$, $n=1,2,\dots$, converge in distribution to some $m$-dimensional distribution as $n\to\infty$ if the one-dimensional random variables $Z_n=Z_n(a_1,\dots,a_m)$, $n=1,2,\dots$, converge in distribution for all real numbers $a_1,\dots, a_m$ as $n\to\infty$. Indeed, if the one-dimensional random variables considered in this problem converge in distribution, then the characteristic functions of the random vectors $\bold Z_n$, $n=1,2,\dots$, converge to a function $\varphi(t_1,\dots,t_m)$ in all points $(t_1,\dots,t_m)\in R^m$, and the restriction of this limit function to the coordinate axes is continuous. Hence by the Fundamental Theorem the random vectors $\bold Z_n$, $n=1,2,\dots$, also converge in distribution, and the characteristic function of the limit distribution is the above limit function.
The Fundamental Theorem also implies that the convergence of the random vectors $\bold Z_n$ in distribution implies the convergence of the random variables $Z_n(a_1,\dots,a_m)$ in distribution. \item{} If the random vectors $\bold Z_n$ converge in distribution, then the limit distribution is determined by its characteristic function, which is the limit of the characteristic functions of these random vectors. Similarly, the characteristic function of the limit of the one-dimensional random variables we have considered equals the limit of the characteristic functions of these random variables. These facts imply the characterization of the limit distribution $\mu$ given in this problem together with the statement that the above characterization determines the limit distribution in a unique way. \item{47.)} Let $\Sigma=\(D_{j,k}\)$, $1\le j,k\le m$, denote the covariance matrix of an $m$-dimensional random vector $(Z_1,\dots,Z_m)$, i.e.\ let $D_{j,k}=E(Z_j-EZ_j)(Z_k-EZ_k)$, $1\le j,k\le m$. Then the matrix $\Sigma$ is symmetric. Beside this, $\bold x\Sigma \bold x^*=\summ_{j=1}^m\summ_{k=1}^m x_jE(Z_j-EZ_j)(Z_k-EZ_k)x_k= E\(\summ_{j=1}^m x_j(Z_j-EZ_j)\)^2\ge0$ for all vectors $\bold x=(x_1,\dots,x_m)\in R^m$, and this means that the matrix $\Sigma$ is positive semi-definite. \item{} On the other hand, if $\Sigma$ is an arbitrary $m\times m$ positive semi-definite matrix, then the results of linear algebra imply that there exists a matrix $B$ such that $\Sigma=B^*B$. (The matrix $B$ satisfying this relation is not determined in a unique way. A possible construction of a matrix~$B$ satisfying the above relation can be given in the following way: It is known from linear algebra that a symmetric matrix $\Sigma$ can be represented in the form $\Sigma=U\Lambda U^*$, where the matrix $U$ is unitary and the matrix $\Lambda$ is diagonal with some elements $\lambda_1,\dots,\lambda_m$ in the diagonal.
The matrix $\Sigma$ is positive semi-definite if and only if all elements $\lambda_j$, $1\le j\le m$, in the above representation are non-negative. If $\Sigma=U\Lambda U^*$ is a positive semi-definite matrix, then let us define the symmetric matrix $B=U\sqrt \Lambda U^*$, where $\sqrt \Lambda$ is the diagonal matrix with elements $\sqrt {\lambda_j}$, $j=1,\dots,m$, in the diagonal. Then $\Sigma=B^2=B^*B$.) \item{} Let $\Sigma=(D_{j,k})$, $1\le j,k\le m$, be an arbitrary $m\times m$ positive semi-definite matrix, and $\bold M=(M_1,\dots,M_m)\in R^m$ a vector in the Euclidean space~$R^m$. Let $\bold\xi=(\xi_1,\dots,\xi_m)$ be an $m$-dimensional random vector with standard normal distribution, and $B=(b_{j,k})$, $1\le j,k\le m$, an $m\times m$ matrix such that $B^*B=\Sigma$. Let us define the $m$-dimensional random vector $\bold \eta=(\eta_1,\dots,\eta_m) =\bold \xi B+\bold M$. Then $\bold \eta$ has normal distribution, and we claim that it has expected value $\bold M$ and covariance matrix $B^*B=\Sigma$. This implies that for all vectors $\bold M\in R^m$ and $m\times m$ positive semi-definite matrices $\Sigma$ there exists an $m$-dimensional normally distributed random vector with expectation $\bold M$ and covariance matrix $\Sigma$. \item{} Indeed, $E\bold \eta=(E\eta_1,\dots,E\eta_m)=(M_1,\dots, M_m)=\bold M$, and the elements of the covariance matrix of $\bold\eta$ can be calculated in the following way.
$$
E(\eta_j-E\eta_j)(\eta_k-E\eta_k)=E\(\summ_{l=1}^m b_{l,j}\xi_l\)\(\summ_{p=1}^m b_{p,k}\xi_p\)=\summ_{l=1}^m b_{l,j}b_{l,k}=D_{j,k}
$$
for all indices $1\le j,k\le m$, because $E\xi_l\xi_p=0$ if $l\neq p$, and $E\xi_l^2=1$. The last identity means the identity $B^*B=\Sigma$ in coordinate form.
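The construction of a matrix $B$ with $B^*B=\Sigma$ can be illustrated numerically. For a $2\times2$ positive definite matrix the symmetric square root $B=U\sqrt\Lambda U^*$ admits the closed form $B=(\Sigma+sI)/t$ with $s=\sqrt{\det\Sigma}$ and $t=\sqrt{\hbox{trace}\,\Sigma+2s}$, a consequence of the Cayley--Hamilton theorem. The sketch below (an illustration under these assumptions, not part of the text) verifies $B^*B=\Sigma$ on an example matrix:

```python
import math

# For a 2x2 symmetric positive definite Sigma = [[a, b], [b, c]] the
# symmetric square root is B = (Sigma + s*I) / t with
# s = sqrt(det Sigma) and t = sqrt(trace Sigma + 2s) (Cayley-Hamilton).
def sqrt_2x2(a, b, c):
    s = math.sqrt(a * c - b * b)    # sqrt(det Sigma)
    t = math.sqrt(a + c + 2.0 * s)  # sqrt(trace Sigma + 2s)
    return [[(a + s) / t, b / t], [b / t, (c + s) / t]]

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

sigma = [[2.0, 1.0], [1.0, 3.0]]
B = sqrt_2x2(2.0, 1.0, 3.0)
BB = matmul2(B, B)                  # B is symmetric, so B*B equals B^* B
for i in range(2):
    for j in range(2):
        assert abs(BB[i][j] - sigma[i][j]) < 1e-12
```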
\item{} Let us consider an $m$-dimensional random vector $\bold\eta=(\eta_1,\dots,\eta_m)=\bold\xi B+\bold M$ with normal distribution, where the random vector $\bold\xi=(\xi_1,\dots,\xi_m)$ has standard normal distribution, $B$ is an $m\times m$ matrix, and $\bold M=(M_1,\dots,M_m)\in R^m$. Put $B^*B=\Sigma=(D_{j,k})$, $1\le j,k\le m$. Let us calculate the characteristic function $\varphi(t_1,\dots,t_m) =Ee^{i(t_1\eta_1+\cdots+t_m\eta_m)}$ of the random vector $\bold \eta$. To calculate it let us introduce the random variable $\zeta=t_1\eta_1+\cdots+t_m\eta_m$. Then $\zeta$ is a normally distributed random variable with expected value $\bar M=\bar M(t_1,\dots,t_m)=\summ_{k=1}^m t_kM_k=(\bold M,\bold t)$ and variance $\sigma^2=\summ_{j=1}^m\summ_{k=1}^m t_jt_kE(\eta_j-E\eta_j)(\eta_k-E\eta_k) =\summ_{j=1}^m\summ_{k=1}^m t_j t_kD_{j,k}=\bold t \Sigma \bold t^*$, where $\bold t=(t_1,\dots,t_m)$. Hence the characteristic function of the random variable $\zeta$ with normal distribution equals $\psi(u)=Ee^{iu\zeta}=e^{-u^2 \bold t\Sigma \bold t^*/2+iu(\bold M,\bold t)}$. This implies that $\varphi(t_1,\dots,t_m)=Ee^{i(t_1\eta_1+\cdots+t_m\eta_m)} =\psi(1) =e^{-\bold t\Sigma \bold t^*/2+i(\bold M,\bold t)}$, that is, relation~(13) holds. \item{} It follows from formula~(13) that if $\bold \eta$ is an $m$-dimensional random vector with normal distribution, then the characteristic function, and as a consequence the distribution function, of this random vector is determined by its expectation and covariance matrix. Let us remark that there can be given two different $m\times m$ matrices $B_1$ and $B_2$ such that $B_1^*B_1=B_2^*B_2$. Let $\bold \xi$ be an $m$-dimensional random vector with standard normal distribution, $B_1$ and $B_2$ two $m\times m$ matrices such that $B_1^*B_1=B_2^*B_2$, and $\bold M\in R^m$ an arbitrary vector. Let us define the random vectors $\eta_1=\bold \xi B_1+\bold M$ and $\eta_2=\bold \xi B_2+\bold M$.
Then the expectation and covariance matrix, hence the distribution function, of the random vectors $\eta_1$ and $\eta_2$ agree, although this statement is not self-evident because of the relation $B_1\neq B_2$. \item{48.)} We get similarly to the proof of formula~(13) in problem~47 that the characteristic function of the random vector $\eta=(\eta_1,\dots,\eta_l)$ equals $Ee^{i(\bold t,\eta)}=e^{-\bold t\Sigma\bold t^*/2+i(\bold M,\bold t)}$, where $\bold t=(t_1,\dots,t_l)$, and $\Sigma=B^*B$. It follows from this formula that $\eta$ is a normally distributed random vector with covariance matrix $\Sigma$ and expectation vector $\bold M$. \item{} Given a normally distributed random vector $\eta$ of dimension~$m$, let us write it in the form $\eta=\xi B+\bold M$, where $\xi$ is a random vector of dimension $m$ with standard normal distribution. (It can be proved that such a representation is always possible. Actually it would be enough for us to have such a representation of a random vector with the same distribution as $\eta$. The possibility of such a representation follows from the definition of the normally distributed random vectors.) If we omit some of the coordinates of the random vector $\eta$, and we preserve only $l$ coordinates, then the random vector $\eta'$ we obtain after this deletion of coordinates can be represented in the following way. Let us omit those columns of the matrix $B$ which have the same indices as the coordinates of $\eta$ we omitted. Similarly, let us omit the coordinates of the vector $\bold M$ with the same indices as the coordinates omitted from $\eta$. Let us denote the matrix and the vector obtained in such a way by $B'$ and $\bold M'$. Then we have $\eta'=\xi B'+\bold M'$, hence $\eta'$ is normally distributed by the already proved part of the problem. \item{49.)} I shall give two different solutions of the problem, because this may be instructive.
\item{} First solution: We can write the characteristic function of the random vector $\eta$ as $Ee^{i(\bold t,\eta)}=e^{-\bold t\Sigma\bold t^*/2+i(\bold M,\bold t)}$, where $\bold t=(t_1,\dots,t_m)$, and $\bold M=(M_1,\dots, M_m)$ is the expected value of $\eta$, with the introduction of some notations in the following way. Let $\bold t_j$ denote the restriction of the vector $\bold t$, and let $\bold M_j$ denote the restriction of the vector $\bold M$ to the coordinates $p\in L_j$, $1\le j\le k$. Let us define similarly the matrix $\Sigma_j$ as the restriction of the matrix $\Sigma$ to the coordinates $\sigma_{p,q}$, $p\in L_j$ and $q\in L_j$, $1\le j\le k$. With such a notation $Ee^{i(\bold t,\eta)} =\prodd_{j=1}^k e^{-\bold t_j\Sigma_j\bold t_j^*/2+i(\bold M_j,\bold t_j)}$. (We exploited the properties of the matrix $\Sigma$ at this point.) Let $\eta_1',\dots,\eta_k'$ be independent, normally distributed random vectors with covariance matrices $\Sigma_j$ and expected values $\bold M_j$, $1\le j\le k$. The characteristic function of $\eta'_j$ equals $Ee^{i(\bold t_j,\eta'_j)} =e^{-\bold t_j\Sigma_j\bold t_j^*/2+i(\bold M_j,\bold t_j)}$ for all indices $1\le j\le k$. Hence the characteristic functions of the random vectors $\eta'=(\eta'_1,\dots,\eta'_k)$ and $\eta$ agree. Hence the distributions of $\eta$ and $\eta'$ also agree, and as a consequence the random vectors $\bar\eta_j$, $1\le j\le k$, are independent of each other, similarly to the random vectors $\eta'_j$. \item{} Second solution: Let us apply the notations introduced in the previous solution. The random vector $\eta'$ defined there is normally distributed with covariance matrix $\Sigma$ and expected value $\bold M$. As the distribution of a normal random vector is determined by its covariance matrix and expected value, the distributions of $\eta$ and $\eta'$ agree. Therefore the random vectors $\bar\eta_1$,\dots, $\bar\eta_k$ are independent, similarly to the random vectors $\eta_1'$,\dots, $\eta_k'$.
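The key point of the first solution is that for a block-diagonal matrix $\Sigma$ the quadratic form $\bold t\Sigma\bold t^*$ is the sum of the quadratic forms of the blocks, hence the characteristic function $e^{-\bold t\Sigma\bold t^*/2+i(\bold M,\bold t)}$ factorizes. A small numerical sketch with $\bold M=0$ and blocks of sizes 2 and 1 (an illustrative choice, not taken from the text):

```python
import math

# For a block-diagonal Sigma the quadratic form t Sigma t^* splits into
# the quadratic forms of the blocks, so exp(-t Sigma t^*/2) factorizes.
def quad(s, t):
    n = len(t)
    return sum(t[i] * s[i][j] * t[j] for i in range(n) for j in range(n))

sigma1 = [[1.0, 0.5], [0.5, 1.0]]   # block for the coordinates {1, 2}
sigma2 = [[2.0]]                    # block for the coordinate {3}
sigma = [[1.0, 0.5, 0.0],
         [0.5, 1.0, 0.0],
         [0.0, 0.0, 2.0]]           # the block-diagonal matrix Sigma

for t in ((0.3, -1.2, 0.7), (1.0, 1.0, 1.0), (-2.0, 0.5, 0.1)):
    full = math.exp(-quad(sigma, t) / 2.0)
    split = (math.exp(-quad(sigma1, t[:2]) / 2.0)
             * math.exp(-quad(sigma2, t[2:]) / 2.0))
    assert abs(full - split) < 1e-12
```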
\item{50.)} Because of the result of problem~46 it is enough to show that for all vectors $\bold a=(a_1,\dots,a_m)\in R^m$ the normalized partial sums $\frac1{A_n}S_n=\frac1{A_n}S_n(a_1,\dots,a_m) =\frac1{A_n}\summ_{p=1}^m a_pS_{p,n}$, $n=1,2,\dots$, converge in distribution to the normal distribution with expectation zero and variance $\sigma^2=\bold a\Sigma\bold a^*$ if $n\to\infty$. Let us observe that $\frac1{A_n}S_n=\frac1{A_n}\summ_{k=1}^n \eta_k$, $n=1,2,\dots$, where $\eta_k=\summ_{p=1}^m a_p\xi_{p,k}$, $k=1,2,\dots$. Beside this, the random variables $\eta_k$, $k=1,2,\dots$, are independent, $E\eta_k=0$, $E\eta_k^2=\bold a\Sigma_k \bold a^*$. \item{} Let us consider separately the cases $\bold a\Sigma\bold a^*=0$ and $\bold a\Sigma\bold a^*>0$. If $\bold a\Sigma\bold a^*=0$, then $\frac1{A_n^2}ES_n^2=\frac1{A_n^2}\summ_{j=1}^n \bold a\Sigma_j\bold a^*\to0$ if $n\to\infty$, hence $\frac{S_n}{A_n}$ converges to zero stochastically if $n\to\infty$. That is, in this case the random variables $\frac1{A_n}S_n$ converge in distribution to the (degenerate) normal variable with expectation zero and variance zero. In the case $\bold a\Sigma\bold a^*>0$ let us define the triangular array $\eta_{k,j}=\frac{\eta_j}{A_k\sqrt{\bold a\Sigma\bold a^*}}$, $1\le j\le k$, $k=1,2,\dots$. Then $\summ_{j=1}^k E\eta_{k,j}^2=\frac1{A_k^2}\summ_{j=1}^k \frac{\bold a\Sigma_j\bold a^* }{\bold a\Sigma\bold a^*}\to1$ if $k\to\infty$. Hence in the case $\bold a\Sigma\bold a^*>0$ the convergence of the normalized partial sums $\frac1{A_n}S_n$, $n=1,2,\dots$, to the normal distribution with expectation zero and variance $\bold a\Sigma \bold a^*$ follows from the central limit theorem for triangular arrays formulated in problem~38 if we show that the above defined triangular array $\eta_{k,j}$, $1\le j\le k$, $k=1,2,\dots$, satisfies the Lindeberg condition. This implies the statement of the problem.
To prove that the Lindeberg condition is satisfied observe that
$$
\allowdisplaybreaks
\align
0&\le\sum_{j=1}^{k}E\eta_{k,j}^2I\(\{|\eta_{k,j}|>\e\}\)\le \frac1{A_k^2\bold a\Sigma \bold a^*} \sum_{j=1}^{k}E\(\summ_{p=1}^m|a_p\xi_{p,j}|I\(\{|\xi_{p,j}|> \bar\e A_k\}\)\)^2 \\ &\le\frac{m}{A_k^2\bold a\Sigma \bold a^*} \summ_{p=1}^m a_p^2 \(\sum_{j=1}^{k}E\xi_{p,j}^2I\(\{|\xi_{p,j}|> \bar\e A_k\}\)\), \tag2.8
\endalign
$$
where $\bar\e=\frac{\e\sqrt{\bold a\Sigma\bold a^*}}{m\supp_{1\le p\le m}|a_p|}$. The first inequality in formula~(2.8) holds, because the inclusion relation $\{\oo\colon\;|\eta_{k,j}(\oo)|>\e\}\subset\bigcupp_{p=1}^m\{\oo\colon\; |\xi_{p,j}(\oo)| >\bar\e A_k\}$ implies that
$$
\eta_{k,j}^2I\(\{|\eta_{k,j}|>\e\}\)\le \frac1{A_k^2\bold a\Sigma \bold a^*} \(\summ_{p=1}^m|a_p\xi_{p,j}|I\(\{|\xi_{p,j}|> \bar\e A_k\}\)\)^2.
$$
The second relation in formula (2.8) is a consequence of the Schwarz inequality. Since the number $\bar\e$ is a fixed positive constant, the last estimate together with formula~(14) imply that the expression in formula~(2.8) tends to zero. Hence the Lindeberg condition we wanted to prove holds in this case. \item{} Let us finally observe that if $\bold\xi_k=(\xi_{1,k},\dots,\xi_{m,k})$, $k=1,2,\dots$, is a sequence of independent and identically distributed $m$-dimensional random vectors with expectation zero and finite covariance matrix $\Sigma$, then this sequence of random vectors satisfies relation (14) with the choice $A_n^2=n$. This statement was proved in part~b) of problem~41 for those coordinates $p$ for which $E\xi_{p,1}^2>0$. For those coordinates~$p$ for which $E\xi_{p,1}^2=0$ we have $\xi_{p,1}\equiv0$, hence these coordinates can be omitted. \item{51.)} Let us consider first those numbers $p$, $1\le p\le m$, for which the $p$-th element $D_{p,p}$ of the diagonal of the (positive semi-definite) matrix $\Sigma$ satisfies the inequality $D_{p,p}>0$.
Let us define the triangular array $\eta_{k,j}=\eta_{k,j}(p)=\frac{\xi_{p,j}}{A_k \sqrt{D_{p,p}}}$, $1\le j\le k$, $k=1,2,\dots$. Then $\limm_{k\to\infty}\supp_{1\le j\le k}E\eta_{k,j}^2=0$, and the triangular array $\eta_{k,j}$, $1\le j\le k$, $k=1,2,\dots$, satisfies the condition of uniform smallness and the central limit theorem. Hence by the result of problem~39 the Lindeberg condition formulated in formula~(14) holds for all such indices~$p$.
\item{} Since the matrix $\Sigma$ is positive semi-definite, $D_{p,p}\ge0$ for all $1\le p\le m$. Therefore we still have to consider those indices $p$ for which $D_{p,p}=0$. In this case $\limm_{n\to\infty}\frac1{A_n^2}\summ_{k=1}^nE\xi_{p,k}^2=0$. Since $E\xi_{p,k}^2\ge E\xi_{p,k}^2I(|\xi_{p,k}|>\e A_n)$, relation~(14) also holds for such indices~$p$.
%\vfill\eject
\beginsection Appendix

{\bf The proof of the inversion formula for Fourier transforms.}
\medskip\noindent
Let us introduce the function $\hat f(u)=\frac1{2\pi}\int e^{-ivu}\tilde f(v)\,dv$. To prove formula~(6) we have to show that $\hat f(u)=f(u)$ for almost all numbers $u\in R^1$. This statement is equivalent to the identity $\int_0^t f(u)\,du=\int_0^t \hat f(u)\,du$ for all numbers~$t\in R^1$. Since
$$
\int_0^t \hat f(u)\,du=\frac1{2\pi}\int_{-\infty}^\infty \int_0^t e^{-ius}\tilde f(u)\,ds\,du= \frac1{2\pi}\int_{-\infty}^\infty \frac{e^{-itu}-1}{-iu} \tilde f(u)\,du, \tag A1
$$
we have to show the identity
$$
\int_{-\infty}^\infty I_{[0,t]}(u)f(u)\,du=\frac1{2\pi} \int_{-\infty}^\infty \frac{e^{-itu}-1}{-iu} \tilde f(u)\,du, \tag A2
$$
where $I_{[0,t]}(\cdot)$ is the indicator function of the interval $[0,t]$. The identity (A2) is a special case of an important identity of Fourier analysis, the Parseval formula. Let us formulate it.
\medskip\noindent
{\bf Parseval formula.} {\it
$$
\int f(u)\bar g(u)\,du=\frac1{2\pi}\int \tilde f(u)\bar{\tilde g}(u)\,du, \tag A3
$$
where $\tilde f(\cdot)$ denotes the Fourier transform and $\bar f(\cdot)$ the conjugate of a function $f(\cdot)$.
Formula~(A3) holds if one of the following conditions is satisfied:
\medskip
\item{a.)} Both functions $f$ and $g$ are square integrable.
\item{b.)} Both functions $\tilde f$ and $\tilde g$ are square
integrable.
\medskip
If one of the conditions a.) and b.) is satisfied, then also the other
condition holds. In this case the identity
$\int |f(u)|^2\,du=\frac1{2\pi}\int|\tilde f(u)|^2\,du$ holds (because
of the Parseval formula). (The transformation
$\bold T\colon\; f\to\bold T f=\frac1{\sqrt{2\pi}}\tilde f$ is an
automorphism of the space of square integrable functions. This
statement means not only the validity of the identity
$\int\bold Tf(u)\overline {\bold T g}(u)\,du=\int f(u)\bar g(u)\,du$.
It also states that all square integrable functions $f$ can be
represented in the form $f=\bold T h$ with a square integrable
function~$h$.) Let us finally remark that in a complete formulation of
the Parseval formula the notion of the Fourier transform has to be
defined for all square integrable but not necessarily integrable
functions $f(\cdot)$. This definition can be given by means of the
above mentioned $L_2$ isomorphism. For all square integrable functions
$f(\cdot)$ there exists a sequence of integrable and square integrable
functions $f_n(\cdot)$ which converges to the function $f(\cdot)$ in
the $L_2$ norm of square integrable functions, i.e.\
$\int |f_n(u)-f(u)|^2\,du\to0$ if $n\to\infty$. Then the Fourier
transform $\tilde f(\cdot)$ of the function $f(\cdot)$ is the limit of
the functions $\tilde f_n(\cdot)$ in the $L_2$ norm. This limit always
exists, and it does not depend on the choice of the sequence of
functions $f_n(\cdot)$ converging to the function $f(\cdot)$.}
\medskip
In the Parseval formula formulated in this text a norming factor
$\frac1{2\pi}$ is present which does not appear in its formulation in
textbooks. The reason for this difference is that we have chosen a
different normalization in the definition of the Fourier transform.
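The role of this norming factor can be checked numerically. The sketch below approximates both sides of formula (A3) (with $f=g$) by Riemann sums, using the non-normalized transform $\tilde f(t)=\int e^{itu}f(u)\,du$ of this text; the Gaussian test function and all grid parameters are assumptions of the illustration, not part of the text.

```python
import numpy as np

# Numerical check of the Parseval identity in this text's normalization,
#   int |f(u)|^2 du = (1/(2*pi)) int |f~(t)|^2 dt,  f~(t) = int e^{itu} f(u) du.
# Test function (an assumption of this sketch): f(u) = exp(-u^2/2),
# for which both sides equal sqrt(pi).
u = np.linspace(-20, 20, 4001)
du = u[1] - u[0]
f = np.exp(-u**2 / 2)

t = np.linspace(-20, 20, 4001)
dt = t[1] - t[0]
# Fourier transform with the non-normalized convention used in the text
ft = np.array([np.sum(f * np.exp(1j * ti * u)) * du for ti in t])

lhs = np.sum(np.abs(f)**2) * du                  # int |f(u)|^2 du
rhs = np.sum(np.abs(ft)**2) * dt / (2 * np.pi)   # (1/(2*pi)) int |f~(t)|^2 dt
print(lhs, rhs)  # both approximately sqrt(pi) = 1.7725
```

Dropping the factor $\frac1{2\pi}$ on the right-hand side would make the two numbers differ by exactly the factor $2\pi$, which is the discrepancy the remark above explains.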
(We have omitted the factor $\frac1{\sqrt{2\pi}}$ from the
definition.) If the Fourier transform of an integrable function $f$ is
integrable, then it is also square integrable, since it is a bounded
function. Further, the Fourier transform of the function
$g(u)=I_{[0,t]}(u)$ is the square integrable function
$\tilde g(v)=\int_0^t e^{iuv}\,du= \frac{e^{itv}-1}{iv}$. Hence
formula (A2) (and therefore also formula~(6)) is a consequence of the
Parseval formula with the choice of the above functions $f(\cdot)$ and
$g(\cdot)$. We can prove that a finite measure $\mu$ with an
integrable Fourier transform $\tilde f(u)=\int e^{itu}\mu(\,dt)$ has a
density function $f(\cdot)$ defined by formula~(6) with the help of
the following smoothing argument. Let us consider for all numbers
$\e>0$ the Gaussian measure $\nu_\e$ with expectation zero and
variance $\e$. This measure has density function $\varphi_\e(u)
=\frac1{\sqrt{2\pi\e}}e^{-u^2/2\e}$ and Fourier transform
$e^{-\e u^2/2}$. Let us introduce the convolution $\mu_\e=\mu*\nu_\e$,
i.e.\ $\mu_\e(A)=\mu*\nu_\e(A)=\int \mu(A-u)\varphi_\e(u)\,du$. The
measure $\mu_\e$ has a density function
$f_\e(u)=\int \varphi_\e(u-v)\mu(\,dv)$, and the Fourier transform of
this measure is the integrable function
$\tilde f_\e(u)=e^{-\e u^2/2}\tilde f(u)$. Hence the function
$f_\e(u)$ can be expressed as the inverse Fourier transform of the
function $\tilde f_\e(u)$ defined in formula~(6). If $\e\to0$, then
$f_\e(u)\to f(u)$, where $f(u)$ is the function defined in
formula~(6), and this convergence is uniform in the variable~$u$. On
the other hand, the measure $\mu$ is the weak limit of the measures
$\mu_\e$ if $\e\to0$, that is, the probability measures
$\frac {\mu_\e}{\mu(R^1)}$ converge weakly to the probability measure
$\frac {\mu}{\mu(R^1)}$. (Let us remark that $\mu(R^1)=\mu_\e(R^1)$.)
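The statement being proved, that a measure with an integrable Fourier transform has the density given by formula (6), can also be illustrated numerically. In the sketch below the test density, the two-sided exponential $f(u)=\frac12 e^{-|u|}$ with the integrable Fourier transform $\tilde f(t)=\frac1{1+t^2}$, and the grid parameters are assumptions of the illustration.

```python
import numpy as np

# Sketch: the inversion formula (6), f(u) = (1/(2*pi)) int e^{-itu} f~(t) dt,
# tested on the two-sided exponential density f(u) = (1/2) exp(-|u|),
# whose Fourier transform f~(t) = 1/(1 + t^2) is integrable
# (test case and grid chosen for this illustration only).
t = np.arange(-1000.0, 1000.0, 0.01)
dt = 0.01
ft = 1.0 / (1.0 + t**2)

vals = {}
for u in (0.0, 1.0, 2.5):
    vals[u] = np.sum(np.exp(-1j * t * u) * ft).real * dt / (2 * np.pi)
    print(u, vals[u], 0.5 * np.exp(-abs(u)))  # recovered vs. true density
```

The recovered values agree with $\frac12 e^{-|u|}$ up to the truncation of the slowly decaying tail of $\frac1{1+t^2}$.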
Hence we get by taking the limit $\e\to0$ that
$\mu((a,b])=\int_a^b f(u)\,du$ if $\mu(\{a\})=\mu(\{b\})=0$, i.e.\ if
the points $a$ and $b$ are points of continuity of the measure~$\mu$.
This implies that the function $f$ is the density function of the
measure~$\mu$. By some slight modification of the above argument we
can prove the above statement also in the case where $\mu$ is a signed
measure with bounded variation and integrable Fourier transform. Let
us remark that by refining the argument of the above limit procedure
and exploiting the~$L_2$ isomorphism property of the Fourier
transform, the above result about the density function of a measure
$\mu$ and its Fourier transform can be strengthened. It is enough to
assume that the Fourier transform of the (signed) measure $\mu$ is
square integrable. But in this case the inverse Fourier transform
defined in formula~(6) has to be defined by means of the $L_2$
extension of the original integral, and we cannot claim that the
density function of the measure~$\mu$ is continuous.
\medskip\noindent
{\it The proof of the Parseval formula.}\/ Let us first prove the
Parseval formula for pairs of functions $(f,g)$ which vanish outside
a finite interval $[-A,A]$, and which are sufficiently smooth, say
they have two continuous derivatives. Then by considering the
restriction of these functions to some interval
$[-\pi T,\pi T]\supset[-A,A]$, $\pi T\ge A$, and the discrete version
of the Parseval formula we can write that
$$
\int f(u)\bar g(u)\,du=2\pi T \sum_{k=-\infty}^\infty a_k(T)\bar b_k(T),
$$
where $a_k(T)=\frac1{2\pi T} \int e^{iku/T}f(u)\,du=
\frac1{2\pi T} \tilde f\(\frac kT\)$, and $b_k(T)=
\frac1{2\pi T} \tilde g\(\frac kT\)$.
But the above expression
$2\pi T \summ_{k=-\infty}^\infty a_k(T)\bar b_k(T)$ is an
approximating sum of the integral
$\frac1{2\pi}\int\tilde f(u)\bar {\tilde g}(u)\,du$, and the Fourier
transforms $\tilde f(u)$ and $\tilde g(u)$ tend to zero fast as
$|u|\to\infty$ because of the smoothness of the functions $f$
and~$g$. (See for instance the result of problem~28.) Hence the limit
procedure $T\to\infty$ yields formula~(A3) in this special case. The
Parseval formula yields with the choice $f=g$ the identity
$\int |f(u)|^2\,du=\frac1{2\pi}\int |\tilde f(u)|^2\,du$. Further, the
functions $f$ for which we have proved this identity are everywhere
dense in the space of square integrable functions. Hence we get the
proof of the Parseval formula by extending the isometry
$\bold T\colon\; f\to \bold T f=\frac 1{\sqrt {2\pi}}\tilde f$ in the
$L_2$ norm to the space of all square integrable functions. To
complete the proof we still have to show that this extension of the
transformation $\bold T$ maps onto the whole space of square
integrable functions. To prove this missing part let us consider those
functions $f$ which are sufficiently smooth (say they are 10-times
differentiable) and tend to zero sufficiently fast at plus and minus
infinity (say $|f(u)|\le \const \(1+|u|^{100}\)^{-1}$). Since such
functions constitute an everywhere dense set in the space of square
integrable functions, it is enough to show that they are in the image
space of the operator $\bold T$. We will show that the identity
$\bold T\(\frac1{\sqrt{2\pi}} \tilde{f^-}\)=f$, where $f^-(u)=f(-u)$,
follows from the already proved statements. Also the function
$\tilde f$ is smooth, and it tends to zero fast. (This also follows
from the statements of problems~27 and~28. Actually the statement of
problem~27 deals only with the Fourier transform of probability
measures, but it is not difficult to see that this statement also
holds for the Fourier transform of all signed measures with bounded
variation.
We want to exploit that the measures $\mu^{\pm}$,
$\mu^\pm(A)=\int_A f^\pm(u)\,du$, $f^+(u)=\max(f(u),0)$,
$f^-(u)=-\min(f(u),0)$ have at least 8~moments.) Since both the
functions $f(u)$ and the indicator function $I_{[0,t]}(u)$ of the
interval $[0,t]$ are square integrable, formula~(A2) holds by the
already proved part of the Parseval formula. Since the function
$\hat f$ defined with the help of the function $\tilde f(u)$ at the
start of this Appendix is integrable, also formula (A1) holds.
Formulas~(A1) and~(A2) together imply that the pair of functions
$(f,\tilde f)$ satisfies relation~(6). But this relation is equivalent
to the statement that the function $f$ is the Fourier transform of the
function $\frac1{2\pi} \tilde {f^-}$, where $f^-(u)=f(-u)$, and this
is what we wanted to prove.
\vfill\eject
\noindent
{\bf The proof of Weierstrass second approximation theorem.}
\medskip
The functions $\frac1{(2\pi)^{k/2}} e^{i(j_1t_1+\cdots+j_kt_k)}$
constitute a complete orthonormal system in the space of square
integrable functions which are periodic with period $2\pi$ in all
their arguments. This important result of the theory of Fourier series
implies that every sufficiently smooth function which is periodic in
all its arguments is the uniform limit of its Fourier series. Indeed,
in this case the Fourier coefficients tend to zero fast, and this
implies the uniform convergence. Since such functions are everywhere
dense in the supremum norm in the space of continuous functions, this
statement implies Weierstrass second approximation theorem.
Nevertheless, instead of this argument we present a direct proof of
Weierstrass second approximation theorem which does not apply the
completeness of the trigonometrical functions in the $L_2$ space. We
shall prove Fej\'er's theorem, more precisely its multi-dimensional
version. Weierstrass second approximation theorem is a direct
consequence of this result.
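The one-dimensional case of the convergence result to be proved can be illustrated numerically: the Cesàro means of the Fourier sums of a continuous $2\pi$-periodic function approach it in the supremum norm. The test function $f(t)=|t|$ on $[-\pi,\pi)$ and all numerical parameters below are assumptions of this illustration.

```python
import numpy as np

# One-dimensional sketch of Fejer's theorem: the Cesaro means of the
# Fourier sums of a continuous 2*pi-periodic function converge to it
# uniformly.  Test function (assumed for this sketch): f(t) = |t| on
# [-pi, pi), extended periodically -- continuous but not smooth.
M = 4096
u = -np.pi + 2 * np.pi * np.arange(M) / M
f = np.abs(u)

def cesaro_mean(n, tgrid):
    """Cesaro (Fejer) mean A_n(f) evaluated on the points of tgrid."""
    j = np.arange(-n, n + 1)
    # Fourier coefficients A_j = (1/(2*pi)) int e^{-iju} f(u) du (Riemann sums)
    A = np.array([np.mean(f * np.exp(-1j * jj * u)) for jj in j])
    w = 1.0 - np.abs(j) / (n + 1.0)   # Fejer (Cesaro) weights
    return sum(wj * Aj * np.exp(1j * jj * tgrid)
               for wj, Aj, jj in zip(w, A, j)).real

errs = {}
for n in (10, 50, 200):
    errs[n] = np.max(np.abs(cesaro_mean(n, u) - f))
    print(n, errs[n])  # sup-norm error, decreasing in n
```

The error decreases only slowly here, since the test function has corners, but it tends to zero in the supremum norm, in accordance with the theorem.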
\medskip\noindent
{\bf Fej\'er's theorem.} {\it Let $f(x_1,\dots,x_k)$ be a continuous
function of $k$~arguments which is periodic with period $2\pi$ in all
of its arguments. Let us define for all $k$-dimensional vectors
$(n_1,\dots,n_k)$ with non-negative integer coordinates the
trigonometrical sum
$$
\align
s_{n_1,\dots,n_k}(f)(t_1,\dots,t_k)&=\sum_{j_1=-n_1}^{n_1}\cdots
\sum_{j_k=-n_k}^{n_k} A_{j_1,\dots, j_k}e^{i(j_1t_1+\cdots+j_kt_k)},\\
\intertext{where }
A_{j_1,\dots,j_k}&=\frac1{(2\pi)^k}\int_{-\pi}^\pi
\cdots\int_{-\pi}^\pi e^{-i(j_1u_1+\cdots+j_ku_k)}
f(u_1,\dots,u_k)\,du_1\dots\,du_k.
\endalign
$$
Let us also consider the following Ces\`aro means $A_n(f)$,
$n=1,2,\dots$, of the above trigonometrical sums:
$$
A_n(f)(t_1,\dots,t_k)=\frac1{(n+1)^k}\sum \Sb 0\le n_j\le n\\
\text{for all indices }1\le j\le k \endSb
s_{n_1,\dots,n_k}(f)(t_1,\dots,t_k).
$$
Then $\limm_{n\to\infty}A_n(f)(t_1,\dots,t_k)=f(t_1,\dots,t_k)$, and
the above convergence is uniform in the arguments $t_1,\dots,t_k$.}
\medskip\noindent
{\it The proof of Fej\'er's theorem.}\/ The proof of Fej\'er's theorem
is based on the following formula:
$$
A_n(f)(t_1,\dots,t_k)=\int_{-\pi}^\pi\cdots\int_{-\pi}^\pi
f(u_1,\dots,u_k)\bar K_n(t_1-u_1,\dots,t_k-u_k)\,du_1\dots\,du_k,
\tag A4
$$
where
$$
\bar K_n(u_1,\dots,u_k)=K_n(u_1)\cdots K_n(u_k), \tag A5
$$
and
$$
K_n(u)=\frac1{2\pi(n+1)}\sum_{k=-n}^n(n+1-|k|)e^{iuk}
=\frac{\sin^2\(\frac{n+1}2u\)} {2\pi(n+1)\sin^2\(\frac u2\)}.
\tag A$5'$
$$
These relations hold, since by substituting the definition of the
Fourier coefficients $A_{j_1,\dots,j_k}$ into the definitions of the
expressions $A_n(f)$ and $s_{n_1,\dots,n_k}(f)$ we get relation (A4)
together with formula (A5), where
$$
\align
\bar K_n(u_1,\dots,u_k)&=
\frac1{(2\pi(n+1))^k}\sum \Sb 0\le n_j\le n\\
\text{for all indices }1\le j\le k \endSb
\sum \Sb |m_j|\le n_j\\
\text{for all indices }1\le j\le k\endSb e^{i(m_1u_1+\dots+m_ku_k)} \\
&=\frac1{(2\pi(n+1))^k}\prod_{j=1}^k \( \sum_{n_j=0}^n
\sum_{m_j=-n_j}^{n_j} e^{im_j u_j}\)=K_n(u_1)\dots K_n(u_k),
\endalign
$$
and the function $K_n(u)$ is defined in the middle term of
formula~(A$5'$). This sum can be written in a closed form with the
help of the following calculation:
$$
\align
&\frac1{2\pi(n+1)}\sum_{k=-n}^n(n+1-|k|)e^{iuk}=
\frac1{2\pi(n+1)}\(\sum_{k=0}^n e^{iuk}\) \(\sum_{k=0}^n e^{-iuk}\)\\
&\qquad=\frac{1}{2\pi(n+1)}\left|\frac{e^{i(n+1)u}-1}
{e^{iu}-1}\right|^2
=\frac1{2\pi(n+1)}
\frac{\left|e^{i(n+1)u/2}-e^{-i(n+1)u/2}\right|^2}
{\left|e^{iu/2}-e^{-iu/2}\right|^2}\\
&\qquad=\frac{\sin^2\(\frac{n+1}2u\)} {2\pi(n+1)\sin^2\(\frac u2\)}.
\endalign
$$
The function $K_n(u)$ defined in formula (A$5'$) has the following
properties important for us:
\medskip
\item{(i)} $\int_{-\pi}^\pi K_n(u)\,du=1$. This statement follows from
the representation of the functions $K_n(\cdot)$ in the form of a sum.
\item{(ii)} $K_n(u)\ge0$ for all numbers $u\in R^1$.
\item{(iii)} $\limm_{n\to\infty}\supp_{\e\le |u|\le \pi}K_n(u)=0$ for
all numbers $\e>0$.

\noindent
Statements (ii) and (iii) follow from the representation of the
functions $K_n(\cdot)$ given in a closed form. Since a function
continuous on a compact set is uniformly continuous, for all $\e>0$
there exists a constant $\delta=\delta(\e,f)>0$ such that the
continuous and in its coordinates periodic function $f$ satisfies the
inequality $|f(x_1,\dots,x_k)-f(y_1,\dots,y_k)|<\e$ if
$|x_j-y_j|<\delta$ for all indices $j=1,\dots,k$.
(In this relation we identify the points $x_j+2\pi l$,
$l=0,\pm1,\pm2,\dots$, and the inequality $|x_j-y_j|<\delta$ means
that $|x_j-y_j+2\pi l_j|<\delta$ with an appropriate integer $l_j$.)
Let us introduce the notation
$\bold B(\delta,(t_1,\dots,t_k))=\{(u_1,\dots,u_k)\colon\;
|u_j-t_j|<\delta,\;-\pi\le u_j<\pi,\; j=1,\dots,k\}$. Because of
property~(i)
$$
\align
&A_n(f)(t_1,\dots,t_k)-f(t_1,\dots,t_k)\\
&\qquad =\int_{[-\pi,\pi)^k}
\(f(u_1,\dots,u_k)-f(t_1,\dots,t_k)\)
K_n(t_1-u_1)\cdots K_n(t_k-u_k)\,du_1\dots\,du_k \\
&\qquad=\int_{\bold B(\delta,(t_1,\dots,t_k))} [\cdots]\,du_1\dots\,du_k
+ \int_{[-\pi,\pi)^k\setminus\bold B(\delta,(t_1,\dots,t_k))}
[\cdots]\,du_1\dots\,du_k \\
&\qquad =I_{1,n}(t_1,\dots,t_k)+I_{2,n}(t_1,\dots,t_k).
\endalign
$$
It follows from the definition of the set
$\bold B(\delta,(t_1,\dots,t_k))$, the number $\delta$ and
properties~(i) and~(ii) that
$$
\left|I_{1,n}(t_1,\dots,t_k)\right| \le
\e\int_{[-\pi,\pi)^k}
K_n(t_1-u_1)\cdots K_n(t_k-u_k)\,du_1\dots\,du_k \le\e
$$
for all indices $n=1,2,\dots$ and points $(t_1,\dots,t_k)$. On the
other hand, by applying the notation
$\supp_{(u_1,\dots,u_k)}|f(u_1,\dots,u_k)|= L$ and carrying out the
substitutions $t_j-u_j=\bar u_j$ we can show with the help of
relations (i), (ii) and~(iii) that
$$
\left|I_{2,n}(t_1,\dots,t_k)\right|\le 2L
\int_{[-\pi,\pi)^k\setminus\bold B(\delta,(0,\dots,0))}
K_n(\bar u_1)\cdots K_n(\bar u_k)\,d\bar u_1\dots\,d\bar u_k\to0,
$$
if $n\to\infty$, since
$$
\align
&\int\limits\Sb\delta<|\bar u_j|<\pi\\
-\pi\le\bar u_l<\pi,\,l\neq j,\,1\le l\le k\endSb
K_n(\bar u_1)\cdots K_n(\bar u_k)\,d\bar u_1\dots\,d\bar u_k
=\int_{\delta<|u|<\pi} K_n(u)\,du\\
&\qquad\qquad \le 2\pi\sup_{\delta<|u|<\pi} K_n(u)\to0,
\quad \text{if } n\to\infty
\endalign
$$
for all numbers $1\le j\le k$. Since the above estimates hold for all
constants $\e>0$ (together with an appropriate number
$\delta=\delta(\e,f)$), they imply Fej\'er's theorem.
\bye