q$ with some constant
$q>0$ not depending on the number $n$. Moreover, it can be achieved
that this number $q>0$ be arbitrarily close to the number~1.
\item{} Given a sequence $\{x_{j_1},\dots,x_{j_{2n-1}}\}\in A_n$
put $y_1=\summ_{l=1}^{2n-1}x_{j_l}$, $m=m(y_1)=[y_1]$, where $[u]$
denotes the integer part of the number $u$, and introduce the
numbers $M=M(n)=[5m]$ and $y=\frac{y_1+x_M}{\sqrt {2n}}$. Then
$$
P\(\left.\left|\sqrt {\frac2n}\(S_n-\frac12S_{2n}\)\right|
>C\,\right|\,\frac{S_{2n}}{\sqrt{2n}}= y\)\ge \frac q2.
$$
This implies that the distribution function $F$ of the random
variables $X_1,X_2,\dots$ does not satisfy {\it Property~A}.
Moreover, the probability of the event that the normalized sum
$\frac{S_{2n}}{\sqrt{2n}}$ takes such a value $y$ for which
the conditional distribution function $F_n(x|y)$ on the left-hand
side of formula (10) satisfies the inequality
$$
\supp_{|x|\le C}\left|F_n(x|y)-\Phi(x)\right|>\delta
$$
with some number $\delta>0$ is greater than some number $q>0$ not
depending on the number~$n$. (To understand why such a reduced version of
this estimate is sufficient for our purposes we have to remember
that the fluctuations of the Wiener process $W(t,\oo)$ and of the
random broken line process $\bar S_n(t,\oo)$ are relatively small
in small intervals. A detailed calculation shows that the
fluctuations of these processes are sufficiently small for our
purposes in the interval $[0,Kn]$.) I formulate the version of
Property~A we need in this case.
\medskip\noindent
{\bf The definition of the modified version of Property A.} {\it We
say that a sequence of independent identically distributed random
variables $X_1,X_2,\dots,$ with distribution function $F$ satisfies
the modified version of Property~A, if the conditional distribution
functions $F^{(n)}_{\bar n}(x|y)$ defined in formula~(15) satisfy
the following asymptotic relations. There exist some numbers $\e>0$,
$K>0$ and a threshold index $n_0$ such that
$$
\aligned
1-F^{(n)}_{\bar n}(x|y)&=(1-\Phi(x))
\exp\left\{O\(\frac{x^3+x^2|y|+|y|+1}{\sqrt{\bar n}}\)\right\}\\
&\qquad\quad\text{if }\bar n\ge Kn,\; 0\le x\le \e\sqrt{\bar n},\;
\;0\le |y|\le\e\sqrt{\bar n}\\
F^{(n)}_{\bar n}(-x|y)&=(1-\Phi(x))
\exp\left\{O\(\frac{x^3+x^2|y|+|y|+1}{\sqrt{\bar n}}\)\right\}\\
&\qquad\quad\text{if }\bar n\ge Kn,\;0\le x\le \e\sqrt{\bar n},\;
\;0\le |y|\le\e\sqrt{\bar n}, \endaligned \tag10a
$$
hold, where $\Phi(x)$ is the standard normal distribution function,
and the error term $O(\cdot)$ is uniform in the variables $x$, $y$,
$n$ and $\bar n$.}
\medskip
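To see what formula~(10a) yields in a concrete case, it may be worth
substituting $x=0$ into its first relation. Since $\Phi(0)=\frac12$,
we get
$$
1-F^{(n)}_{\bar n}(0|y)=\frac12
\exp\left\{O\(\frac{|y|+1}{\sqrt{\bar n}}\)\right\}
\quad\text{if }\bar n\ge Kn,\;0\le|y|\le\e\sqrt{\bar n},
$$
that is, $F^{(n)}_{\bar n}(0|y)$ is close to $\frac12$ with a
relative error of order $\frac{|y|+1}{\sqrt{\bar n}}$.
\medskip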
To prove the {\it Modified version of Property A}\/ (when the
distribution function $F$ satisfies relation (2) and condition~a))
in the same way as the original {\it Property~A} was proved, it is
enough to show that the density functions $f^{(n)}_{\bar n}(x)$
of the normalized partial sums $\frac{\bar S_{\bar n}}
{\sqrt {\bar n}}=\frac1{\sqrt{\bar n}}\summ_{j=1}^{\bar n}\bar X_j$
satisfy such a version of relations (12a) and (12b) in the {\it Sharp
form of the local central limit theorem}\/ where the number $n$ is
replaced by $\bar n$ everywhere on the right-hand side of these
relations, and it is assumed that $\bar n\ge Kn$.
This version of the {\it Sharp form of the Local Central
Limit Theorem}\/ can be proved by the method of the solution of
Problem~23 in the {\it Theory of Large Deviations~I.}\/ if the
distribution function~$F(x)$ satisfies condition~a).
The main idea of the proof is that the density function we want
to estimate can be expressed by the inverse Fourier transform
of the characteristic function, or by the analytic continuation
of this formula, provided that the characteristic function and its
analytic continuation are integrable functions. Besides this, the
expression we get in this way can be investigated well.
We have to study the expressions in the following identity:
$$
\sqrt {\bar n}\,f^{(n)}_{\bar n}(\sqrt{\bar n}\,x)=\frac1{2\pi}\int
e^{(is-t)x}\frac{\bar R_{\bar n}(s+it)}{\bar R_{\bar n}(it)}\,ds,
$$
where $\bar R_{\bar n}(s+it)=\(R(s+it)e^{2^{-n-1}(t^2-s^2)}\)
^{\bar n}$, and $R(s+it)=\int e^{(is-t)x}F(\,dx)$ is the analytic
continuation of the characteristic function of the $F(x)$
distribution function. This means that $\bar R_{\bar n}(s+it)$ is
the analytic continuation of the characteristic function of the
distribution function $F^{(n)}_{\bar n}(x)$. This function, as a
function of the variable~$s$ with a fixed parameter~$t$, is
integrable, since the function $e^{2^{-n-1}(t^2-s^2)}$ is integrable,
and the function $R(s+it)$ is bounded. But in the proof of the
modified version
of the {\it Sharp form of the Local Central Limit Theorem}\/ we
need some more information. We have to know that the integral
expressing the density function as the inverse Fourier transform
of the characteristic function and its analytic continuation
is essentially localized in a small neighbourhood of the origin,
where the integrand can be well estimated. Condition~a) was imposed
to guarantee this property. The consequence of condition~a) needed
for us is formulated in the following Problem~7.
\medskip
\item{7.)} If the distribution function $F$ satisfies condition~a),
then for all numbers $A>0$ and $B>0$ there exists some number
$\alpha=\alpha(A,B)<1$ such that
$$
\left|\frac{R(s+it)}{R(it)}\right|<\alpha\quad\text{if }
|s|>A\quad\text{and}\quad |t|\le B.
$$
\medskip
{\bf Theorem B.} {\it Let some real numbers $x_1,\dots,x_{2N}$
with variance parameter $\sigma^2$ be given which satisfy
appropriate regularity conditions with some numbers $K>0$ and
$c>0$. Let us choose randomly
one of the permutations $\{\pi(1),\dots,\pi(2N)\}$ of the numbers
$1,\dots,2N$, by choosing all possible permutations with probability
$\frac1{(2N)!}$, and define the random variable
$$
S_N=\(x_{\pi(1)}+\cdots+x_{\pi(N)}\)-
\(x_{\pi(N+1)}+\cdots+x_{\pi(2N)}\).
$$
It satisfies the following form of the central limit theorem and
its large deviation version:
$$
\align
P\(S_N>\sigma x\sqrt N\)&=\(1-\Phi\(\frac x{\sqrt
N}\)\)\exp\left\{O\(\frac{x^3+1}{\sqrt N}\)\right\},\\
P\(S_N<-\sigma x\sqrt N\)&=\Phi\(-\frac x{\sqrt
N}\)\exp\left\{O\(\frac{x^3+1}{\sqrt N}\)\right\}
\endalign
$$
for all numbers $0\le x\le \e\sqrt N$ with some appropriate number
$\e=\e(c,K)>0$, where the error term $O(\cdot)$ means that the
absolute value of the difference of the left-hand side and the main
term on the right-hand side is less than $B\frac{x^3+1}{\sqrt N}$
with a constant $B$ depending only on the parameters $c$ and $K$,
but not on the numbers $x$ and $N$.}
\medskip
The proof of the (non-trivial) Theorem~B will be omitted. It can
be found in the proof of Lemma~3 of the work of J\'anos Koml\'os,
P\'eter Major and G\'abor Tusn\'ady {\it An approximation of
partial sums of independent RV's and the sample DF. II.}\/
Zeitschrift f\"ur Wahrscheinlichkeitstheorie {\bf 34} (1976) 33--58.
Here I only present a heuristic explanation of this result. I also
omit the details of the proof of the {\it Finite Version of the
Approximation Theorem}\/ in the case when condition~b) holds.
Let us make a random pairing $(x_{j_{2k}},x_{j_{2k+1}})$,
$1\le k\le N$, of the numbers $x_1,\dots,x_{2N}$, define
independent, identically distributed random variables
$r_1,\dots,r_N$ such that $P(r_k=1)=P(r_k=-1)=\frac12$,
$1\le k\le N$, and introduce the random variable
$U=\summ_{k=1}^N r_k \(x_{j_{2k}}-x_{j_{2k+1}}\)$. The random
variable $U$ is the sum of independent random variables with
expectation zero, hence we can estimate its distribution well by
means of a normal distribution function with expectation zero and
appropriate variance. But this variance depends on the
pairing $(x_{j_{2k}},x_{j_{2k+1}})$, $1\le k\le N$, of the numbers
we consider. The distribution of the random variable $S_N$
considered in Theorem~B equals the average of the distributions of
the (almost normal) random variables $U$ corresponding to all
possible pairings of the numbers $x_1,\dots,x_{2N}$. To prove that
this average satisfies the statement of Theorem~B it is enough to
show that the variances of the distributions taking part in this
average are typically very close to the number $N\sigma^2$. The
proof of this non-trivial statement is the most important step of
the proof.
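\medskip
For a fixed pairing this argument can be supported by a simple
calculation, which may be worth recording. The random variables
$r_k\(x_{j_{2k}}-x_{j_{2k+1}}\)$, $1\le k\le N$, are independent
with expectation zero, hence
$$
EU=0,\qquad EU^2=\summ_{k=1}^N\(x_{j_{2k}}-x_{j_{2k+1}}\)^2.
$$
Thus the quantity whose typical behaviour has to be understood is
this sum of squares, and the essential step of the proof is to show
that for most pairings it is very close to the number $N\sigma^2$
appearing above.
\medskip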
In the next step I formulate Problem~8 which enables us to reduce
the proof of the {\it Finite Version of the Approximation Theorem}\/
to the two special cases when the distribution function $F$ of the
independent random variables we are investigating satisfies either
condition~a) or condition~b).
\medskip
\item{8.)} Let us fix some distribution functions $F_1$, $F_2$ and
$G_1$, $G_2$. Let $S^{(i)}_n$ and $T^{(i)}_n$, $n=1,2,\dots$, be the
sequences of partial sums of independent, identically distributed
random variables with distribution functions $F_i$ and $G_i$,
$i=1,2$. Let us fix some number $0\le p\le 1$ and define the
distribution functions $F=pF_1+(1-p)F_2$ and $G=pG_1+(1-p)G_2$.
Let us show that some pairs $S_n$ and $T_n$, $n=1,2,\dots$, of
sequences of partial sums of independent, identically distributed
random variables can be constructed with distribution functions $F$
and $G$ in such a way that they satisfy the relation
$$
\align
&P\(\supp_{1\le j\le n}\left|S_j-T_j\right|\ge a+b\) \\
&\qquad \le P\(\supp_{1\le j\le n}\left|S^{(1)}_j-T^{(1)}_j\right|\ge
a\)+ P\(\supp_{1\le j\le n}\left|S^{(2)}_j-T^{(2)}_j\right|\ge b\)
\endalign
$$
for arbitrary real numbers $a>0$, $b>0$ and integer $n>0$.
\item{} Let us reduce with the help of the above statement the proof
of the {\it Finite Version of the Approximation Theorem}\/
to the two special cases when the distribution function
$F$ of the independent random variables we are investigating
satisfies either condition~a) or condition~b).
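\medskip
The construction suggested in Problem~8 can be sketched in the
following way. Take a sequence of independent random variables
$\delta_1,\delta_2,\dots$ with $P(\delta_j=1)=p$ and
$P(\delta_j=2)=1-p$, and given the value of $\delta_j$ let the
$j$-th pair of summands of $S_n$ and $T_n$ be an independent copy
of a pair of summands of the processes $S^{(\delta_j)}_n$ and
$T^{(\delta_j)}_n$. Then the summands of $S_n$ and $T_n$ have
distribution functions $F=pF_1+(1-p)F_2$ and $G=pG_1+(1-p)G_2$, and
the difference $S_j-T_j$ splits into the sum of the differences
taken along the indices with $\delta=1$ and along the indices with
$\delta=2$. Since each of these two parts is a partial sum of at
most $n$ differences of the corresponding pair of processes,
$$
\left\{\supp_{1\le j\le n}|S_j-T_j|\ge a+b\right\}\subset
\left\{\supp_{1\le j\le n}|S^{(1)}_j-T^{(1)}_j|\ge a\right\}\cup
\left\{\supp_{1\le j\le n}|S^{(2)}_j-T^{(2)}_j|\ge b\right\},
$$
and the desired inequality follows from the subadditivity of the
probability.
\medskip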
\medskip
In the next problems we investigate the converse of the above
Approximation Theorem, that is, we are interested in lower bounds
for the accuracy with which partial sums can be approximated by a
Wiener process, or the normalized empirical distribution function
by a Brownian bridge. The proof of these lower bounds is based on
estimates which give a lower bound on how well the distribution
function of partial sums of independent random variables can be
approximated by a normal distribution function. These estimates
belong to the theory of the central limit theorem and the theory of
large deviations. For technical reasons it is more convenient to
work with the moment generating functions of our random variables
instead of their distributions. The result of the next Problem~9
has such a content.
\medskip
\item{9.)} Let $F(x)$ be such a distribution function for which the
moment generating function $R(s)=\int e^{sx}F(\,dx)$
exists in some interval $-a\le s\le a$ with some number $a>0$. Let
us show that the value of the moment generating function $R(s)$ in
the interval $[-a,a]$ uniquely determines the distribution function
$F(x)$. (This number $a>0$ can be chosen sufficiently small.)
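\medskip
The background of this statement may be worth indicating. If $R(s)$
is finite in a neighbourhood of the origin, then all moments of the
distribution function $F$ are finite, and
$$
R(s)=\summ_{k=0}^\infty\frac{s^k}{k!}\int x^kF(\,dx)
$$
for all $s$ in a (possibly smaller) neighbourhood of the origin.
Hence the values of $R(s)$ in an arbitrarily small interval around
zero determine all moments of $F$, and it is known that a
distribution function whose moment generating function exists in a
neighbourhood of the origin is uniquely determined by its moments.
\medskip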
\medskip
In the next problem we prove formula~(5) in the case when the random
variable~$X$ has moment generating function in a small neighbourhood
of the origin.
\medskip
\item{10.)} Let $X_1,X_2,\dots$, be a sequence of independent,
identically distributed random variables which satisfy the relation
$R(2s)=Ee^{2sX_1}<\infty$ with some number $s>0$. Let us fix some
positive integer~$n$ and define the random variables
$S_{k,n}=\summ_{j=kn+1}^{(k+1)n}X_j$, $k=1,2,\dots$. Let us choose
a sufficiently large number
$A>0$ and put $N(n)=e^{An}$. Then the relation
$$
\lim_{n\to\infty}\frac1{N(n)R^n(s)}\sum_{k=1}^{N(n)}
e^{sS_{k,n}}=1\quad\text{with probability 1} \tag17
$$
holds if the number $A>0$ is chosen sufficiently large, where
$R(s)=Ee^{sX_1}$.
\item{} Let $Y_1,Y_2,\dots$, be a sequence of independent random
variables with standard normal distribution and put
$T_{k,n}=\summ_{j=kn+1}^{(k+1)n}Y_j$ with the same positive
integer~$n$. Let us observe that such a version of relation~(17)
holds in which the random variable $S_{k,n}$ is replaced by
$T_{k,n}$ and the moment generating function $R(s)$ by
$\bar R(s)=Ee^{sY_1}=e^{s^2/2}$. Furthermore, by the result of the
previous problem there exists an arbitrarily small number $s>0$ for
which $R(s)\neq\bar R(s)$ if $X_1$ is not a standard normal random
variable. Let us prove formula~(5) with the help of the above
observation if the random variable $X_1$ has a moment generating
function in a small neighbourhood of the origin.
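\medskip
The role of the condition $R(2s)<\infty$ and of the choice of a
large number $A$ in relation~(17) may deserve a short explanation.
The random variables $e^{sS_{k,n}}$, $k=1,2,\dots$, are independent
with expectation $Ee^{sS_{k,n}}=R^n(s)$, and
$$
E\(\frac{e^{sS_{k,n}}}{R^n(s)}-1\)^2=\frac{R^n(2s)}{R^{2n}(s)}-1
\le\(\frac{R(2s)}{R^2(s)}\)^n.
$$
Since $R(2s)\ge R^2(s)$ by the Schwarz inequality, this variance may
grow exponentially fast in~$n$. The average of the $N(n)=e^{An}$
terms in formula~(17) has variance at most
$e^{-An}\(\frac{R(2s)}{R^2(s)}\)^n$, which tends to zero
exponentially fast if $A>\log\frac{R(2s)}{R^2(s)}$, and then
Chebyshev's inequality together with the Borel--Cantelli lemma
yields the convergence with probability~1.
\medskip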
\medskip
The result of the next problem is about approximation of the
normalized empirical distribution function by a Brownian bridge,
and it is the analog of the previous result.
\medskip
\item{11.)} Let $Z_n(t)$, $0\le t\le1$, be a normalized empirical
distribution function with $n$ sample points. (This means that there are
$n$ independent random variables $\xi_1,\dots,\xi_n$ with uniform
distribution on the interval $[0,1]$, and we consider the random
process $Z_n(t)=\frac1{\sqrt n}\(P_n(t)-nt\)$, where
$P_n(t)=\summ_{j=1}^nI(\xi_j**