1 Introduction

Let \(G=\left( \left\{ g_{0},g_{1},\ldots ,g_{n-1}\right\} ,\,\cdot \right) \) be a finite group. Denote by C(G) the set of all permutations \(C=\left( c_{1},c_{2},\ldots ,c_{n}\right) \) of the elements of G. Various properties of groups have been defined in terms of the existence of permutations in C(G) possessing certain properties. For example, G is \( round \) if, for every positive integer k with \((k,n)=1\) and all integers \(m_{1},m_{2},\ldots ,m_{k},\) there exists a \(c\in C(G)\) such that the products \(\left( c_{i+m_{1}}c_{i+m_{2}}\ldots c_{i+m_{k}}\right) _{i=1}^{n}\) are all distinct (where we agree that \(c_{n+1}=c_{1},c_{n+2}=c_{2},\) and so on). This combinatorial property turns out to be equivalent to nilpotence [3]. G is \( sequenceable \) if there exists a \(c\in C(G)\) such that the partial products \(c_{1},c_{1}c_{2},\ldots ,c_{1}c_{2}\ldots c_{n}\) are all distinct [6]. G is \( harmonious \) if there exists a \(c\in C(G)\) such that the products \(c_{1}c_{2},c_{2}c_{3},\ldots ,c_{n-1}c_{n},c_{n}c_{1}\) of pairs of consecutive elements are all distinct [2].

Our starting point is two papers of Lev [4, 5]. His results refer to abelian groups only, and so do our first results (Theorems 1 and 2). Thus, for now, we write the group operation as + (namely \(G=\left( \left\{ g_{0},g_{1},\ldots ,g_{n-1}\right\} ,\,+\right) \)). Given a uniformly random permutation \(C\in C(G)\), consider the set \(D(C)=\left\{ c_{i+1}-c_{i}:1\le i\le n\right\} .\) (Here, \(g-g'=g+(-g')\), where \(-g'\) is the inverse of \(g'\) in G.) More precisely, as we are interested in the number of times each element appears as a difference \(c_{i+1}-c_{i}\), we consider D(C) as a multi-set. For example, if \(G=\mathbf{Z }_{5}\) (where \(\mathbf{Z }_{n}\) will denote the cyclic group of order n) and \(C=\left( 0,1,2,3,4\right) ,\) then \(D(C)=\left\{ 1^{5}\right\} ,\) while if \(C=\left( 3,0,1,4,2\right) ,\) then \(D\left( C\right) =\left\{ 1^{2},2,3^{2}\right\} .\) Here, we denote the fact that an element g has multiplicity k in D(C) by writing it as \(g^{k}.\)
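As a concrete illustration (ours, not part of Lev's papers), the following Python sketch computes D(C) as a multi-set for a permutation of \(\mathbf{Z }_{n}\) and reproduces the two \(\mathbf{Z }_{5}\) examples above; the function name D is our own choice.

```python
from collections import Counter

def D(C, n):
    # Multi-set of cyclic differences c_{i+1} - c_i of a permutation C of Z_n.
    return Counter((C[(i + 1) % len(C)] - C[i]) % n for i in range(len(C)))

print(D((0, 1, 2, 3, 4), 5))  # Counter({1: 5}),             i.e. {1^5}
print(D((3, 0, 1, 4, 2), 5))  # Counter({1: 2, 3: 2, 2: 1}), i.e. {1^2, 2, 3^2}
```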

Lev considered the three quantities

$$\begin{aligned} \delta _{\mathrm {min}}(G)= & {} \mathrm {min}\left\{ |D(C)|:C\in C(G)\right\} ,\\ \delta _{\mathrm {max}}(G)= & {} \mathrm {max}\left\{ |D(C)|:C\in C(G)\right\} ,\\ \delta _{\text {rnd}}(G)= & {} E(|D(C)|), \end{aligned}$$

where |A| denotes the cardinality of a finite multi-set A (not counting multiplicities) and \(E(\cdot )\) denotes the expected value of a random variable. For example, if \(|G|=2\) or \(|G|=3,\) then \(\delta _{\mathrm {min}}(G)=\delta _{\mathrm {max}}(G)=\delta _{\text {rnd}}(G)=1.\) If \(G=\mathbf{Z }_{4},\) then \(\delta _{\mathrm {min}}(G)=1, \, \delta _{\mathrm {max}}(G)=3\) and \(\delta _{\text {rnd}}(G)=7/3.\) If \(G=\mathbf{Z }_{2}\oplus \mathbf{Z }_{2},\) then \(\delta _{\mathrm {min}}(G)=\delta _{\mathrm {max}}(G)=\delta _{\text {rnd}}(G)=2.\)
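These small values can be checked by exhaustive enumeration. Here is a minimal sketch (our own, purely illustrative) that recomputes \(\delta _{\mathrm {min}},\,\delta _{\mathrm {max}}\) and \(\delta _{\text {rnd}}\) for \(G=\mathbf{Z }_{4}\) over all 24 permutations:

```python
from fractions import Fraction
from itertools import permutations

def num_distinct_diffs(C, n):
    # |D(C)|: number of distinct cyclic differences, multiplicities ignored.
    return len({(C[(i + 1) % n] - C[i]) % n for i in range(n)})

n = 4
sizes = [num_distinct_diffs(C, n) for C in permutations(range(n))]
print(min(sizes), max(sizes), Fraction(sum(sizes), len(sizes)))  # 1 3 7/3
```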

Lev [5] found the exact values of the first quantity, and established tight bounds for the other two, for all finite abelian groups G. In particular, he showed that, uniformly over all finite abelian groups of a given order, as the order goes to \(\infty \),

$$\begin{aligned} \delta _{\text {rnd}}(G)=(1-e^{-1})|G|+O(1). \end{aligned}$$

Similarly, consider the set \(S(C)=\left\{ c_{i}+c_{i+1}:1\le i\le n\right\} .\) Lev obtained analogous results on the quantities \(\sigma _{\mathrm {min}}(G):=\mathrm {min}\left\{ |S(C)|:C\in C(G)\right\} ,\) \(\sigma _{\mathrm {max}}(G):=\mathrm {max}\left\{ |S(C)|:C\in C(G)\right\} \) and \(\sigma _{\text {rnd}}(G):=E(|S(C)|).\)

In this paper, we consider only the random case. Lev proved the above result regarding \(\delta _{\text {rnd}}(G)\) by estimating, for each \(h\in G\setminus \left\{ 0\right\} ,\) the probability that h occurs in the set D(C). We start with the following question: how many times does an element \(h\in G\setminus \left\{ 0\right\} \) occur in the multi-set \(D(C)\)? More precisely, we deal with the distribution of this random variable, especially as the order of G increases to \(\infty .\) Next, we ask how these numbers of occurrences behave for several group elements simultaneously.

It turns out that the answers do not depend on G being abelian. Moreover, one may ask the same questions in a much more general combinatorial setting, not necessarily of groups. We get analogous results in this case. In addition, we count the number of occurrences of \(h\in G\) in the set S(C) for any finite group.

In Sect. 2, we state the main results. Section 3 presents the proofs. Since the results for groups are special cases of the combinatorial results, we will prove only the latter. The proofs rely heavily on results of Alon and Lubetzky [1] concerning convergence to the Poisson distribution.

We would like to express our gratitude to the referees for their helpful comments on the first version of the paper.

2 The Main Results

We start with a formulation of our main results in the special case of permutations of abelian groups.

Theorem 1

For a finite abelian group G and a uniformly random permutation C thereof, let \(X_{h}\) denote the number of occurrences of \(h\in G\setminus \left\{ 0\right\} \) in the set D(C). The distribution of \(X_{h}:\)

(a) depends only on the orders of G and h.

(b) converges to a Poisson distribution with parameter 1 as \(|G|\longrightarrow \infty .\)

Thus, \(X_{h}\) is the number of indices \(1\le i\le n\) for which \(c_{i+1}-c_{i}=h\) (for \(i=n,\) the equality reads \(c_{1}-c_{n}=h\)). We now exemplify the first part of the theorem.

Example 1

Let \(G_{1}=\mathbf{Z }_{4},\) \(G_{2}=\mathrm {\mathbf{Z }}_{2}\oplus \mathbf{Z }_{2},\) with \(h_{1}=2\) and \(h_{2}=\left( 1,0\right) \) (or, indeed, any nonzero element of \(G_{2}\)). One verifies easily that \(h_{1}\) does not occur in D(C) for 8 of the permutations of \(G_{1},\) and occurs twice in each of the other 16. The corresponding numbers for \(h_{2}\) are the same.
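This is easy to verify by brute force. A sketch (assumptions ours: we encode \(\mathbf{Z }_{2}\oplus \mathbf{Z }_{2}\) as pairs of bits, so subtraction is componentwise XOR):

```python
from collections import Counter
from itertools import permutations

def X_h(C, h, sub):
    # Number of cyclic indices i with c_{i+1} - c_i = h.
    n = len(C)
    return sum(sub(C[(i + 1) % n], C[i]) == h for i in range(n))

# G1 = Z_4 with h1 = 2
print(Counter(X_h(C, 2, lambda a, b: (a - b) % 4)
              for C in permutations(range(4))))
# G2 = Z_2 + Z_2 with h2 = (1, 0)
elems = [(a, b) for a in (0, 1) for b in (0, 1)]
print(Counter(X_h(C, (1, 0), lambda a, b: (a[0] ^ b[0], a[1] ^ b[1]))
              for C in permutations(elems)))
# both print Counter({2: 16, 0: 8})
```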

Let \(\mathrm {Po}\left( \lambda \right) \) be the Poisson distribution with parameter \(\lambda .\) Part (b) of Theorem 1 asserts that

$$\begin{aligned} X_{h}\xrightarrow [|G|\rightarrow \infty ]{{\mathcal {D}}}\mathrm {Po}\left( 1\right) \end{aligned}$$

uniformly over the elements of \(G\setminus \left\{ 0\right\} .\) Here, given a sequence \((Y_{k})_{k=1}^{\infty }\) of random variables and a distribution law \({\mathcal {L}},\) we write \(Y_{k}\xrightarrow [k\rightarrow \infty ]{{\mathcal {D}}}{\mathcal {L}}\) to mean that the sequence converges in distribution to \({\mathcal {L}}.\)

We can represent \(X_{h}\) as a sum of identically distributed Bernoulli variables, namely

$$\begin{aligned} X_{h}=X_{h1}+X_{h2}+\ldots +X_{hn}, \end{aligned}$$

where

$$\begin{aligned} X_{hi}={\left\{ \begin{array}{ll} 1, &{} c_{i+1}-c_{i}=h,\\ 0, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Now the sum of n iid \(\mathrm {Ber}(p)\) variables, with \(n\longrightarrow \infty ,\) \(p\longrightarrow 0\) and \(np\longrightarrow \lambda ,\) is asymptotically \( \mathrm {Po} \left( \lambda \right) \)-distributed. In our case, it is clear that \(X_{hi}\sim \mathrm {Ber}\left( \dfrac{1}{n-1}\right) \) for each i. The variables \(X_{hi}\) are certainly dependent, but the dependence seems to be weak, and one may therefore guess that \(X_{h}\) is approximately \(\mathrm {B}\left( n,\dfrac{1}{n-1}\right) \)-distributed (where B denotes the binomial distribution), and hence approximately \(\mathrm {Po}\left( 1\right) \)-distributed. Theorem 1 confirms this guess.
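The guess is also easy to probe empirically. A Monte Carlo sketch (the parameters n = 100 and 20000 trials are arbitrary choices of ours) comparing the empirical law of \(X_{h}\) in \(\mathbf{Z }_{100}\) with the \(\mathrm {Po}(1)\) probabilities \(e^{-1}/k!\):

```python
import math
import random
from collections import Counter

n, h, trials = 100, 1, 20000
counts = Counter()
for _ in range(trials):
    C = random.sample(range(n), n)   # uniformly random permutation of Z_n
    counts[sum((C[(i + 1) % n] - C[i]) % n == h for i in range(n))] += 1

for k in range(5):
    print(k, counts[k] / trials, math.exp(-1) / math.factorial(k))
```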

The theorem implies a weak form of Lev’s result [5] regarding \(\delta _{\text {rnd}}(G),\) mentioned above.

The size |D(C)| of the set \(D\left( C\right) ,\) mentioned in the introduction, may be expressed in the form

$$\begin{aligned} |D(C)|=\underset{h\in G}{\sum }Y_{h}, \end{aligned}$$

where \(Y_{h}\) is the truncation of \(X_{h}\) at 1, i.e., \(Y_{h}=\min (X_{h},1).\) This may be shown to yield Lev’s estimate of E(|D(C)|) easily (although with a larger error term). Moreover, we will show that, when several such random variables, corresponding to different elements of the group, are considered simultaneously, they are asymptotically independent. For instance, Theorem 1 implies that if \(G=\mathbf{Z }_{n}\oplus \mathbf{Z }_{n},\) then the probability that (13,17) appears 5 times in D(C) is approximately \(\dfrac{e^{-1}}{5!}\) for large n. Theorem 2 implies that the probability that (13,17) appears 5 times in D(C) and (14,0) appears 3 times is approximately \(\dfrac{e^{-2}}{5!\cdot 3!}\) for large n.
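The event \(\{X=5,\,Y=3\}\) above has probability about \(1.9\cdot 10^{-4}\), too rare for a quick simulation, but the joint Poisson approximation can be probed on a likelier cell such as \(\{X=1,\,Y=1\}\), whose limiting probability is \(e^{-2}\approx 0.135\). A hedged sketch (n = 30 and the trial count are arbitrary choices of ours):

```python
import math
import random
from collections import Counter

n = 30                                  # G = Z_n + Z_n, |G| = n^2
elems = [(a, b) for a in range(n) for b in range(n)]
h1, h2 = (13, 17), (14, 0)
trials, joint = 4000, Counter()

for _ in range(trials):
    C = random.sample(elems, len(elems))
    x = y = 0
    for i in range(len(C)):
        d = ((C[(i + 1) % len(C)][0] - C[i][0]) % n,
             (C[(i + 1) % len(C)][1] - C[i][1]) % n)
        x += (d == h1)
        y += (d == h2)
    joint[(x, y)] += 1

print(joint[(1, 1)] / trials, math.exp(-2))   # both should be near 0.135
```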

Theorem 2

Let G be a finite abelian group of order n and C a uniformly random permutation thereof, let \(h_{j},\) \(1\le j\le r,\) be distinct nonzero elements of G, and let \(X_{j}=X_{j}(n),\,1\le j\le r,\) be the number of indices i in the range \(\left[ 1,n\right] \) such that \(c_{i+1}-c_{i}=h_{j}.\) Then, \((X_{1},X_{2},\ldots ,X_{r})\xrightarrow [|G|\rightarrow \infty ]{{\mathcal {D}}}(Z_{1},Z_{2},\ldots ,Z_{r}),\) where \(Z_{1},Z_{2},\ldots ,Z_{r}\) are independent \(\mathrm {Po}\left( 1\right) \)-distributed random variables.

As mentioned above, we will skip the proofs of Theorems 1 and 2.

We now claim that both Theorems 1 and 2 hold in a much more general setting. Indeed, denote by \(f\,:G\longrightarrow G\) the function defined by \(f(g)=g+h\) for \(g\in G.\) In Theorem 1, we count the number of times we see the elements g and f(g) in succession in a random permutation. Our next result shows that the asymptotics in Theorem 1 holds if the mapping \(g\mapsto g+h\) of G is replaced by any fixed-point-free bijection of an arbitrary finite set S. Denote by C(S) the set of all permutations \(C=\left( c_{1},c_{2},\ldots ,c_{n}\right) \) of the elements of S.

Theorem 3

Let f be a fixed-point-free bijection of an arbitrary finite set S of size n, and let C be a uniformly random permutation of S. Then, the number \(X_{f}\) of indices \(1\le i\le n\) such that \(c_{i+1}=f(c_{i})\) satisfies \(X_{f}\xrightarrow [|S|\rightarrow \infty ]{{\mathcal {D}}}\mathrm {Po}\left( 1\right) .\)

For the following examples (Examples 2 and 3), we recall the notion of a Latin rectangle. An \(r\times n\) Latin rectangle, with \(r\le n\), is an \(r\times n\) matrix, whose entries are integers between 1 and n, such that no row and no column contains any number more than once. Note that, permuting the rows or columns of a Latin rectangle in any way, we still get a Latin rectangle.

Example 2

Let \(L=(\ell _{ij})_{i=1,j=1}^{2,n}\) be a \(2\times n\) Latin rectangle. Reorder the columns of L uniformly at random, and let \(L'=(\ell '_{ij})_{i=1,j=1}^{2,n}\) be the resulting Latin rectangle. Denote by X the number of entries in the first row of \(L'\) having the property that the entry to their right (cyclically, so that the entry to the right of the last column is the one in the first column) is equal to the entry below them. According to Theorem 3, as \(n\rightarrow \infty \), the distribution of X converges to \(\mathrm {Po}(1)\).
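A simulation sketch of this example (our own; the particular rectangle, with second row a cyclic shift of the first, and the parameters are arbitrary):

```python
import math
import random
from collections import Counter

n, trials = 60, 20000
row1 = list(range(1, n + 1))
row2 = row1[1:] + row1[:1]     # a 2 x n Latin rectangle: row 2 is a cyclic shift

counts = Counter()
for _ in range(trials):
    cols = random.sample(range(n), n)          # random column reordering
    r1 = [row1[j] for j in cols]
    r2 = [row2[j] for j in cols]
    # X: entries of row 1 whose right neighbour (cyclically) equals the entry below
    counts[sum(r1[(j + 1) % n] == r2[j] for j in range(n))] += 1

for k in range(4):
    print(k, counts[k] / trials, math.exp(-1) / math.factorial(k))
```

Here the relevant bijection is \(f:\ell _{1j}\mapsto \ell _{2j}\), which is fixed-point-free precisely because no column of a Latin rectangle repeats an entry.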

Our next result is that the asymptotics in Theorem 2 holds in the setting mentioned above.

Theorem 4

Let \(f_{j},\) \(1\le j\le r,\) be fixed-point-free bijections of an arbitrary finite set S of size n, such that \(f_{j}(x)\ne f_{j'}(x)\) for \(1\le j<j'\le r\) and all \(x\in S,\) and let C be a uniformly random permutation of S. Let \(X_{j}=X_{j}(S),\,1\le j\le r,\) be the number of indices \(1\le i\le n\) such that \(c_{i+1}=f_{j}(c_{i}).\) Then, \((X_{1},X_{2},\ldots ,X_{r})\xrightarrow [|S|\rightarrow \infty ]{{\mathcal {D}}}(Z_{1},Z_{2},\ldots ,Z_{r}),\) where \(Z_{1},Z_{2},\ldots ,Z_{r}\) are independent \(\mathrm {Po}\left( 1\right) \)-distributed random variables.

Example 3

Let \(L=(\ell _{ij})_{i=1,j=1}^{r+1,n}\) be an \((r+1)\times n\) Latin rectangle. Reorder the columns of L uniformly at random, and let \(L'=(\ell '_{ij})_{i=1,j=1}^{r+1,n}\) be the resulting Latin rectangle. Denote by \(X_{i},\) \(1\le i\le r\), the number of entries in the first row of \(L'\) having the property that the entry to their right is equal to the entry exactly i rows below them. According to Theorem 4, as \(n\rightarrow \infty \), each \(X_{i}\) converges in distribution to \(\mathrm {Po}(1)\), and \(X_{1},X_{2},\ldots ,X_{r}\) are asymptotically independent, so that \(X_{1}+X_{2}+\ldots +X_{r}\) converges in distribution to \(\mathrm {Po}(r)\). Therefore, the probability that exactly one entry in the first row of \(L'\) has the property that the entry to its right is equal to one of the entries below it tends to \(re^{-r}\) (see the simulation sketch below).
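A sketch of the corresponding simulation (ours; the rectangle with entries \((j+i)\bmod n\), relabeled from 0, and the parameters n = 60, r = 3 are arbitrary):

```python
import math
import random

n, r, trials = 60, 3, 20000
# an (r+1) x n Latin rectangle with entries 0..n-1 for convenience
rows = [[(j + i) % n for j in range(n)] for i in range(r + 1)]

hits = 0
for _ in range(trials):
    cols = random.sample(range(n), n)          # random column reordering
    L = [[row[j] for j in cols] for row in rows]
    total = sum(L[0][(j + 1) % n] == L[i][j]   # X_1 + ... + X_r
                for i in range(1, r + 1) for j in range(n))
    hits += (total == 1)

print(hits / trials, r * math.exp(-r))         # both should be near 0.149
```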

Theorems 3 and 4 show in particular that Theorems 1 and 2 hold for non-abelian groups as well (where the differences \(c_{i+1}-c_{i}\) are replaced by \(c_{i+1}c_{i}^{-1},\) and we count the number of occurrences of \(h\in G\setminus \left\{ e\right\} \) in the multi-set D(C)).

Now we formulate an analogue of Theorem 1 for sums (or products, in the case of non-abelian groups) instead of differences. Thus, let \(C=\left( c_{1},c_{2},\ldots ,c_{n}\right) \) be a permutation of the elements of G, and put \(S(C)=\left\{ c_{i}+c_{i+1}:1\le i\le n\right\} .\) How many times does an element \(h\in G\) occur in S(C)? Despite the similarity between this question and the analogous one for differences, the situation now is more complicated. The number of occurrences of \(h\in G\) in the multi-set D(C) depends on the number of ways one can write h as a difference of two distinct elements of G. Each element \(h\in G\setminus \left\{ 0\right\} \) can be written in the form \(g_{2}-g_{1}\) with \(g_{1}\ne g_{2}\) in n different ways, while 0 cannot be written in this form at all.

In the case of sums, the situation is somewhat different. Namely, for any finite abelian group G and \(h\in G,\) denote by \(d_{G}(h)\) the number of occurrences of h on the main diagonal of the addition table of G (or of the multiplication table in the general case); that is, \(d_{G}(h)\) is the number of solutions of \(x+x=h\) in G.

Example 4

Let \(G=\mathbf{Z }_{2^{n_{1}}}\oplus \mathbf{Z }_{2^{n_{2}}}\oplus \ldots \oplus \mathbf{Z }_{2^{n_{t}}}\oplus G_{\mathrm {odd}},\) where \(G_{\mathrm {odd}}\) is a finite abelian group of odd order. If \(h\in G\) occurs at all on the main diagonal of the addition table of G,  then \(d_{G}(h)=2^{t}.\)

(a)

    For \(G=\mathbf{Z }_{2}^{m},\) each element is its own inverse. Thus, \(d_{G}(0)=2^{m}\) and \(d_{G}(h)=0\) for \(h\ne 0.\)

(b)

    For \(G=\mathbf{Z }_{4}\oplus \mathbf{Z }_{2}^{m},\) the two elements \(h_{1}=(0,0,\ldots ,0)\,\mathrm {and}\,h_{2}=(2,0,\ldots ,0)\) can be written as a sum of two distinct elements of G in \(2^{m+1}\) ways, while the other elements of G can be written in \(2^{m+2}\) ways. Thus, \(d_{G}(h_{1})=d_{G}(h_{2})=2^{m+1},\) and \(d_{G}(h)=0\) for \(h\ne h_{1},h_{2}.\)
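Both cases of this example can be confirmed by listing the main diagonal of the addition table. A small sketch for case (b) (the value m = 3 is an arbitrary choice of ours):

```python
from collections import Counter
from itertools import product

m = 3                                   # a small instance of G = Z_4 + Z_2^m
elems = list(product(range(4), *([range(2)] * m)))

def add(x, y):
    return tuple((a + b) % (4 if i == 0 else 2)
                 for i, (a, b) in enumerate(zip(x, y)))

# d_G(h) = #{x in G : x + x = h}, the diagonal multiplicities
print(Counter(add(x, x) for x in elems))
# Counter({(0, 0, 0, 0): 16, (2, 0, 0, 0): 16}), i.e. d_G(h1) = d_G(h2) = 2^{m+1}
```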

In fact, we will formulate our next result in the case of any finite group, not necessarily abelian. Thus, \(S(C)=\left\{ c_{i}c_{i+1}:1\le i\le n\right\} ,\) and we count the number of occurrences of \(h\in G\) in the set S(C). Hence we need to consider the number of occurrences of elements on the main diagonal of the multiplication table of finite groups.

Example 5

(a)

    If G is any group of odd order n,  each element of G can be written as a product of two distinct elements of G in \(n-1\) ways.

(b)

    For \(G=D_{2m},\) the dihedral group of order 2m,  where m is odd, the elements \(h\in G\) with \(\mathrm {ord}(h)=2\) can be written as a product of two distinct elements of G in 2m ways. The identity element can be written as a product of two distinct elements of G in \(m-1\) ways, and the other elements of G can be written in \(2m-1\) ways.
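These counts are easy to confirm by brute force. A sketch (our encoding: the pair (a, k) stands for \(s^{a}r^{k}\) in \(D_{2m}=\langle r,s\,|\,r^{m}=s^{2}=e,\,srs=r^{-1}\rangle \); m = 7 is arbitrary):

```python
from collections import Counter
from itertools import product

m = 7                                  # D_2m with m odd, order 2m = 14
G = list(product((0, 1), range(m)))    # (a, k) stands for s^a r^k

def mul(x, y):
    # s^a r^k * s^b r^l = s^(a+b) r^((-1)^b k + l)
    return ((x[0] + y[0]) % 2, ((-1) ** y[0] * x[1] + y[1]) % m)

ways = Counter()                       # h -> #{(x, y) : x != y, x y = h}
for x, y in product(G, G):
    if x != y:
        ways[mul(x, y)] += 1

# identity, a reflection (order 2), a nontrivial rotation:
print(ways[(0, 0)], ways[(1, 0)], ways[(0, 1)])   # m-1 = 6, 2m = 14, 2m-1 = 13
```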

Theorem 5

Let \(\left( G_{k}\right) _{k=1}^{\infty }\) be a sequence of finite groups of increasing orders, \(C_{k}\) a uniformly random permutation of \(G_{k}\), and \(h_{k}\) an arbitrary element of \(G_{k}\) for each k, such that \(\dfrac{d_{G_{k}}(h_{k})}{|G_{k}|}\underset{k\rightarrow \infty }{\longrightarrow }\alpha \) for some \(\alpha \in \left[ 0,1\right] .\) Then, the number \(X_{k}\) of occurrences of \(h_{k}\) in the multi-set \(S(C_{k})\) is asymptotically \(\mathrm {Po}\left( 1-\alpha \right) \)-distributed. (For \(\alpha =1,\) the distribution is a point mass at 0.)

Namely, for a large finite group G, a uniformly random permutation C of G, and an element \(h\in G,\) the number of occurrences of h in the multi-set S(C) is approximately \(\mathrm {Po}(\lambda )\)-distributed with \(\lambda =1-\dfrac{d_{G}(h)}{|G|}.\)
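A Monte Carlo sketch of this approximation (ours), for the group \(G=\mathbf{Z }_{4}\oplus \mathbf{Z }_{2}^{m}\) of Example 4(b) and the element \(h_{2}\), where \(\lambda =1-\frac{1}{2}=\frac{1}{2}\); the parameters are arbitrary:

```python
import math
import random
from collections import Counter
from itertools import product

m = 5                                   # G = Z_4 + Z_2^m, |G| = 128
elems = list(product(range(4), *([range(2)] * m)))

def add(x, y):
    return tuple((a + b) % (4 if i == 0 else 2)
                 for i, (a, b) in enumerate(zip(x, y)))

h = (2,) + (0,) * m                     # d_G(h)/|G| = 1/2, so lambda = 1/2
counts, trials = Counter(), 10000
for _ in range(trials):
    C = random.sample(elems, len(elems))
    counts[sum(add(C[i], C[(i + 1) % len(C)]) == h
               for i in range(len(C)))] += 1

lam = 0.5
for k in range(4):
    print(k, counts[k] / trials, math.exp(-lam) * lam ** k / math.factorial(k))
```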

Example 6

(a)

For \(G=\mathbf{Z }_{2}^{m},\) each element is its own inverse. Thus, the number of occurrences of each \(h\in G\setminus \left\{ 0\right\} \) in S(C) is asymptotically \(\mathrm {Po}\left( 1\right) \)-distributed.

(b)

For \(G=\mathbf{Z }_{4}\oplus \mathbf{Z }_{2}^{m},\) with \(h_{1}=(0,0,\ldots ,0)\) and \(h_{2}=(2,0,\ldots ,0)\) as in Example 4, the numbers of occurrences of \(h_{1}\) and \(h_{2}\) in S(C) are each asymptotically \(\mathrm {Po}\left( \dfrac{1}{2}\right) \)-distributed, and those of the other elements are asymptotically \(\mathrm {Po}\left( 1\right) \)-distributed.

(c)

    If G is of odd order, the number of occurrences of each \(h\in G\) in S(C) is asymptotically \(\mathrm {Po\left( 1\right) }\)-distributed.

(d)

For \(G=D_{2m},\) with odd m, the number of occurrences of the identity element in S(C) is asymptotically \(\mathrm {Po}\left( \dfrac{1}{2}\right) \)-distributed, and that of each other element is asymptotically \(\mathrm {Po}\left( 1\right) \)-distributed.

Theorem 6

Let \(\left( G_{k}\right) _{k=1}^{\infty }\) be a sequence of finite groups of increasing orders, \(C_{k}\) a uniformly random permutation of \(G_{k}\), and \(h_{kj},\) \(1\le j\le r,\) distinct elements of \(G_{k}\) such that \(\dfrac{d_{G_{k}}(h_{kj})}{|G_{k}|}\underset{k\rightarrow \infty }{\longrightarrow }\alpha _{j}\) for some \(\alpha _{j}\in \left[ 0,1\right] .\) Let \(X_{kj},\,1\le j\le r,\) be the number of indices \(1\le i\le |G_{k}|\) such that \(c_{i}c_{i+1}=h_{kj}.\) Then, \((X_{k1},X_{k2},\ldots ,X_{kr})\xrightarrow [k\rightarrow \infty ]{{\mathcal {D}}}(Z_{1},Z_{2},\ldots ,Z_{r}),\) where \(Z_{1},Z_{2},\ldots ,Z_{r}\) are independent random variables, \(Z_{j}\sim \mathrm {Po}\left( 1-\alpha _{j}\right) .\)

Example 7

Let \(G=D_{2m}\times S_{3}\) and \(h_{1}=\left( e,e\right) ,\,h_{2}=\left( e,c\right) ,\) where e is the identity (of both \(D_{2m}\) and \(S_{3}\)) and c is a 3-cycle in \(S_{3}.\) One checks that \(d_{G}(h_{1})/|G|\longrightarrow 1/3\) and \(d_{G}(h_{2})/|G|\longrightarrow 1/12\) as \(m\longrightarrow \infty .\) Hence, the numbers of occurrences of \(h_{1}\) and \(h_{2}\) in S(C) are asymptotically \(\mathrm {Po}\left( \dfrac{2}{3}\right) \)- and \(\mathrm {Po}\left( \dfrac{11}{12}\right) \)-distributed, respectively, and are asymptotically independent as \(m\longrightarrow \infty .\)
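The two limits \(d_{G}(h_{1})/|G|\rightarrow 1/3\) and \(d_{G}(h_{2})/|G|\rightarrow 1/12\) can be checked directly; a sketch (our encodings of \(D_{2m}\) and \(S_{3}\); m = 101 is arbitrary):

```python
from collections import Counter
from fractions import Fraction
from itertools import permutations, product

m = 101                                 # G = D_2m x S_3, |G| = 12m
D = list(product((0, 1), range(m)))     # (a, k) stands for s^a r^k
dmul = lambda x, y: ((x[0] + y[0]) % 2, ((-1) ** y[0] * x[1] + y[1]) % m)
S3 = list(permutations(range(3)))       # permutations of {0,1,2} as tuples
smul = lambda p, q: tuple(p[q[i]] for i in range(3))   # composition p after q

# diagonal multiplicities d_G(h) = #{x : x^2 = h}
sq = Counter((dmul(a, a), smul(b, b)) for a, b in product(D, S3))

n = 12 * m
for h in [((0, 0), (0, 1, 2)), ((0, 0), (1, 2, 0))]:   # h1 = (e, e), h2 = (e, c)
    print(Fraction(sq[h], n))    # (m+1)/(3m) ~ 1/3 and (m+1)/(12m) ~ 1/12
```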

The proof of Theorem 6 goes along the same lines as that of Theorem 4, and we will skip the details.

3 Proofs

In the proof of Theorem 3, we will use the following result.

Theorem A

[1]: Let \(X=X(n)\) be a sum of indicator variables, and let \(\mu >0.\) If \(\underset{n\longrightarrow \infty }{\mathrm {lim}}\,E\binom{X}{t}=\dfrac{\mu ^{t}}{t!}\) for every positive integer t, then \(X\xrightarrow [n\rightarrow \infty ]{{\mathcal {D}}}Z,\) where \(Z\sim \mathrm {Po}(\mu ).\)
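The hypothesis of Theorem A can be probed numerically. A hedged sketch (ours) estimating the factorial moments \(E\binom{X_{f}}{t}\) for the fixed-point-free bijection \(f(g)=g+1\) of \(\mathbf{Z }_{80}\); they should be close to \(1/t!\):

```python
import math
import random

n, trials = 80, 30000
f = lambda g: (g + 1) % n              # a fixed-point-free bijection of Z_n
acc = {1: 0.0, 2: 0.0, 3: 0.0}
for _ in range(trials):
    C = random.sample(range(n), n)
    X = sum(C[(i + 1) % n] == f(C[i]) for i in range(n))
    for t in acc:
        acc[t] += math.comb(X, t)      # binomial coefficient binom(X, t)
for t in acc:
    print(t, acc[t] / trials, 1 / math.factorial(t))
```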

Proof of Theorem 3

Write \(X_{f}=X_{1}+X_{2}+\ldots +X_{n},\) where

$$\begin{aligned} X_{i}={\left\{ \begin{array}{ll} 1, &{} c_{i+1}=f(c_{i}),\\ 0, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

In view of Theorem A, it suffices to show that \(\underset{n\longrightarrow \infty }{\mathrm {lim}}\,E\binom{X_{f}}{t}=\dfrac{1}{t!}\) for every positive integer t. Now \(\binom{X_{f}}{t}\) is the number of sets \(\left\{ a_{1},a_{2},\ldots ,a_{t}\right\} \subseteq \left\{ 1,2,\ldots ,n\right\} \) such that \(c_{a_{i}+1}=f(c_{a_{i}})\) for \(1\le i\le t.\) Hence:

$$\begin{aligned} E\binom{X_{f}}{t}=\underset{\left\{ a_{1},\ldots ,a_{t}\right\} \subseteq \left\{ 1,2,\ldots ,n\right\} }{\sum }P\left( c_{a_{i}+1}=f(c_{a_{i}})\,,\,1\le i\le t\right) . \end{aligned}$$
(1)

Let \(A=\left\{ a_{1},a_{2},\ldots ,a_{t}\right\} .\) Order the elements of A so that \(a_{1}<a_{2}<\ldots <a_{t}.\) A \( block \) in A is a subset \(\left\{ a_{i},a_{i+1},\ldots ,a_{j}\right\} \) of A consisting of consecutive elements, namely such that \(a_{l+1}-a_{l}=1\;\mathrm {for}\;i\le l\le j-1;\) consecutiveness is understood cyclically, so that n and 1 count as consecutive. Denote by \(b=b(A)\) the number of maximal blocks (with respect to inclusion) in A. For example, if \(n=10\) and \(t=5,\) and we choose \(A=\left\{ 2,3,4,6,10\right\} ,\) then the maximal blocks are \(\{2,3,4\},\{6\},\{10\},\) so that \(b=3,\) while for \(A=\left\{ 1,2,5,6,10\right\} \) the maximal blocks are \(\{10,1,2\},\{5,6\},\) so that \(b=2.\) (A short computational sketch of b(A) is given below, following (3).) We want to find upper and lower bounds on the summands in (1). We start with an upper bound. We may write A in the form

$$\begin{aligned} A=\left\{ a_{11},a_{12},\ldots ,a_{1k_{1}},a_{21},a_{22},\ldots ,a_{2k_{2}},\ldots ,a_{b1},a_{b2},\ldots ,a_{bk_{b}}\right\} , \end{aligned}$$

where \(a_{i,j+1}-a_{i,j}=1\) for \(1\le i\le b\,,\,1\le j\le k_{i}-1,\) and \(a_{i+1,1}-a_{i,k_{i}}\ge 2\) for \(1\le i\le b-1.\) (It may be the case that \(a_{ij}=n\) and \(a_{i,j+1}=1\) for some i and j.) First we choose the first elements of all maximal blocks, namely \(c_{a_{11}},c_{a_{21}},\ldots ,c_{a_{b1}},\) and then all other elements, namely \(c_{a_{ij}}\) for \(j\ge 2,\) as well as the elements to the right of the blocks, i.e., at the places \(a_{i,k_{i}}+1.\) Clearly, the number of possible choices of these elements is \(n(n-1)\cdot \ldots \cdot (n-(b+t-1)).\) Now we bound from above the number of choices satisfying \(c_{a_{i,j+1}}=f(c_{a_{ij}})\) for \(1\le i\le b,\,1\le j\le k_{i}-1.\) We may choose \(c_{a_{11}}\) in n ways, \(c_{a_{21}}\) in \(n-1\) ways, \(\ldots ,\) \(c_{a_{b1}}\) in \(n-(b-1)\) ways. For the other elements of the blocks, and those adjacent to the blocks on the right, we have at most one choice. Namely, each such element must be the image of its predecessor under f, and must not have been encountered earlier. Thus, an upper bound on the probability that \(c_{a_{i}+1}=f(c_{a_{i}})\) for \(1\le i\le t\) is

$$\begin{aligned} \dfrac{n\cdot \ldots \cdot (n-(b-1))}{n\cdot \ldots \cdot (n-(b+t-1))}\le \dfrac{1}{(n-(b+t-1))^{t}}\le \dfrac{1}{(n-2t)^{t}}. \end{aligned}$$
(2)

Since the number of summands on the right-hand side of (1) is \(\binom{n}{t},\) we get

$$\begin{aligned} E\binom{X_{f}}{t}\le \binom{n}{t}\cdot \dfrac{1}{(n-2t)^{t}}\le \dfrac{1}{t!}\cdot \left( \dfrac{n}{n-2t}\right) ^{t}. \end{aligned}$$

Therefore,

$$\begin{aligned} \underset{n\longrightarrow \infty }{\mathrm {lim\,sup}}\;E\binom{X_{f}}{t}\le \dfrac{1}{t!}. \end{aligned}$$
(3)
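As promised above, here is a small sketch (ours) computing the block statistic b(A), with the cyclic convention that n and 1 are consecutive:

```python
def b(A, n):
    # Number of maximal blocks of cyclically consecutive elements of A in {1,...,n}.
    # Assumes A is a nonempty proper subset, as in the proof (t is fixed, n large).
    S = set(A)
    pred = lambda a: n if a == 1 else a - 1    # cyclic predecessor
    # an element starts a maximal block iff its cyclic predecessor is outside A
    return sum(pred(a) not in S for a in A)

print(b({2, 3, 4, 6, 10}, 10))  # 3: maximal blocks {2,3,4}, {6}, {10}
print(b({1, 2, 5, 6, 10}, 10))  # 2: maximal blocks {10,1,2}, {5,6}
```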

Now we establish a lower bound. We cannot find a uniform lower bound on the summands in (1). Indeed, the probability that \(c_{a_{i}+1}=f(c_{a_{i}})\) for \(1\le i\le t\) may well be 0. For example, if \(f^{2}\) is the identity function, then

$$\begin{aligned} P\left( (c_{a_{1}+1}=f(c_{a_{1}}))\bigcap (c_{a_{1}+2}=f(c_{a_{1}+1}))\right) =0. \end{aligned}$$

(Note that the same situation arises in Theorem 1 if \(\mathrm {ord}(h)=2.\)) Hence, we will find a lower bound on the probability that \(c_{a_{i}+1}=f(c_{a_{i}})\) for \(1\le i\le t\) only for sets \(A=\left\{ a_{1},\ldots ,a_{t}\right\} \) without consecutive elements (i.e., all blocks of A are of length 1). Now we have to choose 2t elements, for the places indexed by A and for those following them. In the same way as previously, with \(b=t,\) we get that the number of all possible choices of these elements is \(n(n-1)\cdot \ldots \cdot (n-(2t-1)).\) Now we bound from below the number of choices satisfying \(c_{a_{i}+1}=f(c_{a_{i}})\,,\,1\le i\le t.\) For \(c_{a_{1}}\) we have n choices, and for \(c_{a_{1}+1}\) one choice. For \(c_{a_{2}}\) we have at least \(n-3\) choices. In fact, we may choose \(c_{a_{2}}\) as any element of S, except for \(c_{a_{1}},f(c_{a_{1}})\) and \(f^{-1}(c_{a_{1}}).\) (The provision “at least” is due to the fact that, if \(f^{2}(c_{a_{1}})=c_{a_{1}},\) then these three elements reduce to two.) For \(c_{a_{2}+1}\) we have one choice. We can choose \(c_{a_{3}}\) as any element of S, except for \(c_{a_{1}},f(c_{a_{1}}),f^{-1}(c_{a_{1}}),c_{a_{2}},f(c_{a_{2}})\) and \(f^{-1}(c_{a_{2}}).\) Thus, for \(c_{a_{3}}\) we have at least \(n-6\) choices. Similarly, for each \(c_{a_{i}}\) we have at least \(n-3(i-1)\) choices and for each \(c_{a_{i}+1}\) just one. Thus, the number of possible choices is at least \(\prod _{i=1}^{t}(n-3(i-1)).\) It follows that a lower bound on the probability that \(c_{a_{i}+1}=f(c_{a_{i}})\) for \(1\le i\le t\) is

$$\begin{aligned} \dfrac{\prod _{i=1}^{t}(n-3(i-1))}{\prod _{i=1}^{2t}(n-i+1)}=\dfrac{n(n-3)\cdot \ldots \cdot (n-3(t-1))}{n(n-1)\cdot \ldots \cdot (n-(2t-1))}\ge \dfrac{(n-3t)^{t}}{n^{2t}}. \end{aligned}$$
(4)

Now we find a lower bound on the number of sets \(A=\left\{ a_{1},a_{2},\ldots ,a_{t}\right\} \) without consecutive elements. For \(a_{1}\) we have n choices. For \(a_{2}\) we have \(n-3\) choices; any element of \(\left\{ 1,2,\ldots ,n\right\} \) except for \(a_{1},a_{1}+1\) and \(a_{1}-1\) will do. In general, for each \(a_{i}\) we have at least \(n-3(i-1)\) choices. Since the order of choosing the elements of A is not important, the number of such sets A is at least

$$\begin{aligned} \dfrac{1}{t!}\prod _{i=1}^{t}(n-3(i-1))=\dfrac{n(n-3)\cdot \ldots \cdot (n-3(t-1))}{t!}\ge \dfrac{(n-3t)^{t}}{t!}. \end{aligned}$$
(5)

By (4) and (5),

$$\begin{aligned} E\binom{X_{f}}{t}\ge \dfrac{\left( n-3t\right) ^{2t}}{t!\,n^{2t}}. \end{aligned}$$

Therefore,

$$\begin{aligned} \underset{n\longrightarrow \infty }{\mathrm {lim\,inf}}\;E\binom{X_{f}}{t}\ge \dfrac{1}{t!}. \end{aligned}$$
(6)

By (3) and (6),

$$\begin{aligned} \underset{n\longrightarrow \infty }{\mathrm {lim}}\,E\binom{X_{f}}{t}=\dfrac{1}{t!}. \end{aligned}$$

\(\square \)

In the proof of Theorem 4, we will use the following result.

Theorem B

[1]: Let \(X_{j}=X_{j}(n),\,1\le j\le r,\) be sums of indicator variables, and let \(\mu _{1},\ldots ,\mu _{r}>0.\) If \(\underset{n\longrightarrow \infty }{\mathrm {lim}}\,E\left[ \prod _{j=1}^{r}\binom{X_{j}}{t_{j}}\right] =\prod _{j=1}^{r}\dfrac{\mu _{j}^{t_{j}}}{t_{j}!}\) for all non-negative integers \(t_{1},\ldots ,t_{r},\) then \((X_{1},X_{2},\ldots ,X_{r})\xrightarrow [n\rightarrow \infty ]{{\mathcal {D}}}(Z_{1},Z_{2},\ldots ,Z_{r}),\) where the \(Z_{j}\)'s are independent \(\mathrm {Po}(\mu _{j})\)-distributed variables.

Proof of Theorem 4

In view of Theorem B, we need to show that

$$\begin{aligned} \underset{n\longrightarrow \infty }{\mathrm {lim}}\,E\left[ \prod _{j=1}^{r}\binom{X_{j}}{t_{j}}\right] =\prod _{j=1}^{r}\dfrac{1}{t_{j}!},\qquad t_{1},\ldots ,t_{r}\ge 0. \end{aligned}$$

Take \(t_{1},\ldots ,t_{r}\ge 0,\) and denote \(T=\sum _{j=1}^{r}t_{j}\). We will bound \(E\left[ \prod _{j=1}^{r}\binom{X_{j}}{t_{j}}\right] \) from both sides. Clearly, \(\prod _{j=1}^{r}\binom{X_{j}}{t_{j}}\) is the number of r-tuples \(\left( A_{1},A_{2},\ldots ,A_{r}\right) \) of subsets of \(\left\{ 1,2,\ldots ,n\right\} \) such that \(\left| A_{j}\right| =t_{j}\) and \(c_{a+1}=f_{j}(c_{a})\) for \(1\le j\le r\) and \(a\in A_{j}.\) Hence:

$$\begin{aligned} E\left[ \prod _{j=1}^{r}\binom{X_{j}}{t_{j}}\right] =\underset{A_{1},A_{2},\ldots ,A_{r}\subseteq \left\{ 1,2,\ldots ,n\right\} }{\sum }P\left( c_{a+1}=f_{j}(c_{a}),\,a\in A_{j},\,1\le j\le r\right) . \end{aligned}$$
(7)

Of course, an r-tuple \(\left( A_{1},A_{2},\ldots ,A_{r}\right) \) may contribute to the sum on the right-hand side of (7) only if the \(A_{j}\)'s are pairwise disjoint. Take such an r-tuple \(\left( A_{1},A_{2},\ldots ,A_{r}\right) \), and denote \(A=\bigcup _{j=1}^{r}A_{j}.\) We find upper and lower bounds on each summand on the right-hand side of (7). We start with an upper bound. When we choose a permutation, suppose we choose first the elements in the places indexed by A. We start with the first element of each maximal block of A. There are n choices for the first element in the first block, \(n-1\) choices for the first element in the second block, \(\ldots ,\) \(n-(b-1)\) choices for the first element in the last block. For the other elements in each block we have at most one choice; indeed, each such element must be obtained from its predecessor by taking its image under the appropriate \(f_{j}\) (and must not have been encountered earlier). We also have at most one choice for the element to the right of each maximal block. Thus, the number of choices for the elements in the places indexed by A, as well as the elements to the right of each maximal block, is at most \(n(n-1)\cdot \ldots \cdot (n-(b-1)).\) The number of all possible choices of the \(c_{a}\)'s, for a in A or immediately to the right of any block of A, when A consists of b maximal blocks, is

$$\begin{aligned} n(n-1)\cdot \ldots \cdot (n-(b+T-1)). \end{aligned}$$

Hence, an upper bound on the probability that \(c_{a+1}=f_{j}(c_{a})\,\mathrm {for}\,a\in A_{j},\,1\le j\le r,\) is

$$\begin{aligned} \dfrac{n(n-1)\cdot \ldots \cdot (n-(b-1))}{n(n-1)\cdot \ldots \cdot (n-(b+T-1))}\le \dfrac{1}{\left( n-2T\right) ^{T}}. \end{aligned}$$
(8)

Since there are \(\binom{n}{t_{1},t_{2},\ldots ,t_{r},n-T}\) summands on the right-hand side of (7), we get

$$\begin{aligned} E\left[ \prod _{j=1}^{r}\binom{X_{j}}{t_{j}}\right]\le & {} \binom{n}{t_{1},t_{2},\ldots ,t_{r},n-T}\cdot \dfrac{1}{\left( n-2T\right) ^{T}}\\= & {} \dfrac{n\cdot \ldots \cdot (n-T+2)(n-T+1)}{\left( \prod _{j=1}^{r}t_{j}!\right) \cdot \left( n-2T\right) ^{T}}=\dfrac{1+o(1)}{\prod _{j=1}^{r}t_{j}!}. \end{aligned}$$

Therefore,

$$\begin{aligned} \underset{n\longrightarrow \infty }{\mathrm {lim\,sup}}\;E\left[ \prod _{j=1}^{r}\binom{X_{j}}{t_{j}}\right] \le \dfrac{1}{\prod _{j=1}^{r}t_{j}!}. \end{aligned}$$
(9)

Now we establish a lower bound. As in the proof of Theorem 3, we will find a lower bound on the probability that \(c_{a+1}=f_{j}(c_{a})\,\mathrm {for}\,a\in A_{j},\,1\le j\le r,\) only for sets \(A=\left\{ a_{1},\ldots ,a_{T}\right\} \) without consecutive elements. We have to choose 2T elements, for the places indexed by A and for those just to their right. The number of all possible choices of these elements is \(n(n-1)\cdot \ldots \cdot (n-(2T-1)).\) Let us estimate the number of choices satisfying \(c_{a+1}=f_{j}(c_{a})\,\mathrm {for}\,a\in A_{j},\,1\le j\le r.\) For \(c_{a_{1}}\) we have n choices, and for \(c_{a_{1}+1}\) one choice. For \(c_{a_{2}}\) we have at least \(n-3\) choices; in fact, we may choose \(c_{a_{2}}\) as any element of S, except for \(c_{a_{1}}\) and the image and preimage of \(c_{a_{1}}\) under the appropriate functions \(f_{j}.\) For \(c_{a_{2}+1}\) we have one choice. We can choose \(c_{a_{3}}\) similarly, avoiding at most six forbidden elements, so that for \(c_{a_{3}}\) we have at least \(n-6\) choices. In general, for each \(c_{a_{i}}\) we have at least \(n-3(i-1)\) choices, and for each \(c_{a_{i}+1}\) just one. Thus, a lower bound on the probability that \(c_{a+1}=f_{j}(c_{a})\,\mathrm {for}\,a\in A_{j},\,1\le j\le r,\) is

$$\begin{aligned} \dfrac{n(n-3)\cdot \ldots \cdot (n-3(T-1))}{n(n-1)\cdot \ldots \cdot (n-(2T-1))}\ge \dfrac{\left( n-3T\right) ^{T}}{n^{2T}}. \end{aligned}$$
(10)

We also need a lower bound on the number of sets \(A_{j}\) without consecutive elements. For \(a_{1}\) we have n choices. For \(a_{2}\) we have \(n-3\) choices; any element of \(\left\{ 1,2,\ldots ,n\right\} \) except for \(a_{1},a_{1}+1\) and \(a_{1}-1\) will do. In general, for each \(a_{i}\) we have at least \(n-3(i-1)\) choices. (The provision “at least” is due to the fact that, for example, if \(a_{2}=a_{1}+2,\) we have \(n-5\) possible choices for \(a_{3}.\)) Since the order of choosing the elements of each \(A_{j}\) is immaterial, the number of all possible choices of sets \(A_{j},\,1\le j\le r,\) is at least

$$\begin{aligned} \dfrac{n(n-3)\cdot \ldots \cdot (n-3(T-1))}{\prod _{j=1}^{r}t_{j}!}\ge \dfrac{\left( n-3T\right) ^{T}}{\prod _{j=1}^{r}t_{j}!}. \end{aligned}$$
(11)

By (10) and (11),

$$\begin{aligned} E\left[ \prod _{j=1}^{r}\binom{X_{j}}{t_{j}}\right] \ge \dfrac{\left( n-3T\right) ^{T}}{n^{2T}}\cdot \dfrac{\left( n-3T\right) ^{T}}{\prod _{j=1}^{r}t_{j}!}=\dfrac{1+o(1)}{\prod _{j=1}^{r}t_{j}!}. \end{aligned}$$

Therefore,

$$\begin{aligned} \underset{n\longrightarrow \infty }{\mathrm {lim\,inf}}\;E\left[ \prod _{j=1}^{r}\binom{X_{j}}{t_{j}}\right] \ge \dfrac{1}{\prod _{j=1}^{r}t_{j}!}. \end{aligned}$$
(12)

By (9) and (12),

$$\begin{aligned} \underset{n\longrightarrow \infty }{\mathrm {lim}}\,E\left[ \prod _{j=1}^{r}\binom{X_{j}}{t_{j}}\right] =\dfrac{1}{\prod _{j=1}^{r}t_{j}!}. \end{aligned}$$

\(\square \)

Proof of Theorem 5

Let G be any of the groups in the sequence \(\left( G_{k}\right) _{k=1}^{\infty }.\) For \(h\in G,\) denote \(\sqrt{h}=\left\{ x\in G\,|\,x^{2}=h\right\} .\) Fix an arbitrary h, and let m be the number of elements of \(G\setminus \sqrt{h},\) that is, the number of group elements whose square is not h. Equivalently, m is the number of ordered pairs of distinct elements of G whose product is h (for each admissible first element, the second one is determined). Let X be the number of indices i such that \(c_{i}c_{i+1}=h.\) In view of Theorem A, we need to show that \(E\binom{X}{t}\) converges to \(\dfrac{(1-\alpha )^{t}}{t!}\) when the order n of G becomes large and \(\dfrac{m}{n}\rightarrow 1-\alpha .\)

We will find upper and lower bounds on \(E\binom{X}{t}\). Now \(\binom{X}{t}\) is the number of sets \(A=\left\{ a_{1},a_{2},\ldots ,a_{t}\right\} \subseteq \left\{ 1,2,\ldots ,n\right\} \) such that \(c_{a_{i}}c_{a_{i}+1}=h\) for \(1\le i\le t.\) Order the elements of such a set A so that \(a_{1}<a_{2}<\ldots <a_{t}.\) We have:

$$\begin{aligned} E\binom{X}{t}=\underset{A=\left\{ a_{1},\ldots ,a_{t}\right\} }{\sum }P\left( c_{a_{1}}c_{a_{1}+1}=c_{a_{2}}c_{a_{2}+1}=\ldots =c_{a_{t}}c_{a_{t}+1}=h\right) . \end{aligned}$$
(13)

We start with a lower bound on \(E\binom{X}{t}\). For \(\alpha =1\) the claimed limiting distribution is a point mass at 0, so that no lower bound is needed; we therefore assume here that \(\alpha <1.\) In particular, m goes to \(\infty \) with n. There is no good uniform bound on the terms on the right-hand side of (13) for all sets A. For instance, if G is abelian and \(a_{j+1}=a_{j}+1\) for some j, then the corresponding term in (13) vanishes. Hence, we will consider only sets \(A=\left\{ a_{1},\ldots ,a_{t}\right\} \) without consecutive elements. The number of possible choices of elements in the t locations indexed by A and the t following locations is \(n(n-1)\cdot \ldots \cdot (n-2t+1).\) Let us estimate the number of these choices for which all products in question are h. For each \(c_{a_{i}}\) we have at least \(m-3(i-1)\) choices, and for each \(c_{a_{i}+1}\) one choice. A lower bound on the probability that all products \(c_{a_{i}}c_{a_{i}+1}\) are h is therefore

$$\begin{aligned} \dfrac{\prod _{i=1}^{t}(m-3i+3)}{\prod _{i=1}^{2t}(n-i+1)}=\dfrac{m(m-3)\cdot \ldots \cdot (m-3t+3)}{n(n-1)\cdot \ldots \cdot (n-2t+1)}\ge \dfrac{(m-3t)^{t}}{n^{2t}}. \end{aligned}$$
(14)

By (5), the number of sets A without consecutive elements is at least \(\dfrac{(n-3t)^{t}}{t!}\). Consequently, using (14),

$$\begin{aligned} E\binom{X}{t}\ge \dfrac{(m-3t)^{t}(n-3t)^{t}}{t!\,n^{2t}}. \end{aligned}$$

Therefore, letting \(|G_{k}|=n_{k},\) \(h_{k}\in G_{k},\) \(\sqrt{h_{k}}=\left\{ x\in G_{k}\,|\,x^{2}=h_{k}\right\} ,\) and \(m_{k}=|G_{k}\setminus \sqrt{h_{k}}|,\) we get

$$\begin{aligned} \underset{k\longrightarrow \infty }{\mathrm {lim\,inf}}\;E\binom{X_{k}}{t}\ge \underset{k\longrightarrow \infty }{\mathrm {lim}}\dfrac{m_{k}^{t}n_{k}^{t}}{t!\,n_{k}^{2t}}=\dfrac{\left( 1-\alpha \right) ^{t}}{t!}. \end{aligned}$$
(15)

Now we establish an upper bound. First, we will consider again sets \(A=\left\{ a_{1},\ldots ,a_{t}\right\} \) without consecutive elements. We have to choose 2t elements, for the places indexed by A and for those following them. The number of all possible choices of these elements is \(n(n-1)\cdot \ldots \cdot (n-(2t-1)).\) Let us estimate the probability that all products in question are h. For each \(c_{a_{i}}\) we have at most m choices, and for each \(c_{a_{i}+1}\) at most one choice. Thus, an upper bound on the probability that all products \(c_{a_{i}}c_{a_{i}+1}\) are h is

$$\begin{aligned} \dfrac{m^{t}}{\prod _{i=1}^{2t}(n-i+1)}=\dfrac{m^{t}}{n(n-1)\cdot \ldots \cdot (n-2t+1)}\le \dfrac{m^{t}}{(n-2t+1)^{2t}}. \end{aligned}$$
(16)

By (5), the number of sets A that do contain consecutive elements is at most

$$\begin{aligned} \binom{n}{t}-\dfrac{n(n-3)\cdot \ldots \cdot (n-3(t-1))}{t!}=O(n^{t-1}). \end{aligned}$$
(17)

Next, consider a set A containing consecutive elements, consisting of b maximal blocks. We first choose the elements of C in the places indexed by A and those following them. The number of all possible choices of these elements is \(n(n-1)\cdot \ldots \cdot (n-(b+t-1)).\) Let us estimate the probability that all products in question are h. First we determine the first element of each maximal block. We have m choices for the first element in the first block, \(m-1\) choices for the first element in the second block, \(\ldots ,\) \(m-(b-1)\) choices for the first element in the last block. For the other elements in each block and the successor of each block we have at most one choice. Thus, an upper bound on the probability that all products \(c_{a_{i}}c_{a_{i}+1}\) are h is

$$\begin{aligned} \dfrac{m(m-1)\cdot \ldots \cdot (m-(b-1))}{n(n-1)\cdot \ldots \cdot (n-(b+t-1))}\le \dfrac{m^{b}}{(n-2t+1)^{b+t}}. \end{aligned}$$
(18)

By (17) and (18),

$$\begin{aligned} E\binom{X}{t}\le & {} \binom{n}{t}\cdot \dfrac{m^{t}}{(n-2t+1)^{2t}}\nonumber \\&+\left( \binom{n}{t}-\dfrac{n(n-3)\cdot \ldots \cdot (n-3t+3)}{t!}\right) \cdot \dfrac{n^{t}}{(n-2t+1)^{2t}} \nonumber \\\le & {} \dfrac{m^{t}n^{t}}{t!(n-2t)^{2t}}+O(n^{t-1})\cdot \dfrac{n^{t}}{(n-2t)^{2t}}. \end{aligned}$$
(19)

Therefore,

$$\begin{aligned} \underset{k\longrightarrow \infty }{\mathrm {lim\,sup}}\;E\binom{X_{k}}{t}\le \underset{k\longrightarrow \infty }{\mathrm {lim}}\dfrac{1}{t!}\left( \dfrac{m_{k}}{n_{k}}\right) ^{t}=\dfrac{\left( 1-\alpha \right) ^{t}}{t!}. \end{aligned}$$
(20)

By (15) and (20),

$$\begin{aligned} \underset{k\longrightarrow \infty }{\mathrm {lim}}\,E\binom{X_{k}}{t}=\dfrac{\left( 1-\alpha \right) ^{t}}{t!}. \end{aligned}$$

\(\square \)