Abstract
We consider the notion of a constant length queue automaton—i.e., a traditional queue automaton with a built-in constant limit on the length of its queue—as a formalism for representing regular languages. We show that the descriptional power of constant length queue automata greatly outperforms that of traditional finite state automata, of constant height pushdown automata, and of straight line programs for regular expressions, by providing optimal exponential and double-exponential size gaps. Moreover, we prove that constant height pushdown automata can be simulated by constant length queue automata paying by only a linear size increase, and that removing nondeterminism in constant length queue automata requires an optimal exponential size blowup, against the optimal double-exponential cost for determinizing constant height pushdown automata. Finally, we investigate the size cost of implementing Boolean language operations on deterministic and nondeterministic constant length queue automata.
Introduction
It is well known that the computational power of computing devices can be tuned by restricting the way they access memory. To catch a glimpse of this phenomenon, one may start from the traditional model of a one-way Turing machine equipped with a potentially unbounded and freely accessible working tape representing memory. If we impose a lifo usage of the working tape, still keeping unboundedness, then we obtain a pushdown automaton, whose computational power is strictly lower than that of a one-way Turing machine. Instead, by imposing a fifo memory access policy, we get a queue automaton, whose computational power gets back to that of a one-way Turing machine. However, for all these (and other) models, by fixing a constant—i.e., not depending on the input length—bound on the amount of available memory, their computational power boils down to that of finite state automata, regardless of the memory access policy in use.
For constant memory machines, it is then worth investigating how the memory access policy affects their descriptional power, that is, their capability of succinctly representing regular languages. (We refer the reader to, e.g., [19] for a thoughtful survey on descriptional complexity theory.)
This line of research was initiated in [17], where the notion of a constant height pushdown automaton is introduced and studied from a descriptional complexity perspective. Roughly speaking, a constant height pushdown automaton is a traditional pushdown automaton with a built-in constant limit on the height of the pushdown. Optimal exponential and double-exponential gaps are proved between the size of constant height deterministic and nondeterministic pushdown automata (dpdas and npdas, respectively) and the size of equivalent deterministic and nondeterministic finite state automata (dfas and nfas, respectively) and classical regular expressions. Moreover, the notion of a straight line program for regular expressions (slp, see Sect. 2) is also introduced in [17], as a formalism equivalent to a constant height npda from a size point of view. In [5], the fundamental problem of removing nondeterminism in constant height npdas is tackled, and a double-exponential size blowup for determinization is established. Finally, the size cost of Boolean operations on constant height dpdas and npdas is analyzed in [3, 4, 6].
In this paper, we investigate the descriptional advantages of replacing the pushdown with a queue storage of fixed size, by considering the notion of a constant length queue automaton. Basically, this device is a traditional queue automaton (see, e.g., [2, 14]), in which the length of the queue cannot grow beyond a fixed constant limit.
As for constant height pushdown automata, in Sect. 3 we single out optimal exponential and double-exponential gaps between the size of constant length deterministic and nondeterministic queue automata (dqas and nqas, respectively) and the size of equivalent dfas and nfas. In addition, differently from constant height pushdown automata, in Sect. 4 we prove that a queue storage enables a size-efficient removal of nondeterminism. Precisely, we show that nfas can be simulated by constant length dqas paying by only a linear size increase. This, in turn, leads us to prove that the optimal size cost of removing nondeterminism in constant length nqas is only exponential, in sharp contrast with the optimal double-exponential size blowup pointed out above for determinizing constant height \(\textsc {npda}\)s.
The higher descriptional power of a queue vs. a pushdown storage for constant memory machines is also emphasized in Sect. 5, where we show that constant height npdas (resp., dpdas) can be simulated by constant length nqas (resp., dqas), paying by only a linear size increase. On the other hand, the opposite simulation features an optimal exponential size cost. This is witnessed by proving, in Sect. 6, that constant length dqas can be exponentially smaller than equivalent slps, this gap being optimal. In Sect. 7, a complete overview of the optimal simulation size costs between finite state automata, constant height pushdown automata, constant length queue automata, and straight line programs is portrayed for the reader’s convenience. In Sect. 8, as typically done in the literature for formalisms defining regular languages (see, e.g., [3, 4, 6, 16, 19, 22,23,24]), the problem of establishing the size cost of implementing Boolean language operations on constant length \(\textsc {dqa}\)s and \(\textsc {nqa}\)s is tackled. As a useful tool in this investigation, analogously to what is done for constant height pushdown automata [3,4,5,6, 17], a normal form for constant length queue automata is defined, in which at most one symbol is enqueued on each move. The size cost of converting constant length queue automata into their normal form is analyzed. Finally, in Sect. 9, we sum up the obtained results and outline possible developments of our work.
Preliminaries: adding memory to finite state automata
We assume the reader is familiar with basics in formal language theory, and we refer to, e.g., [18, 20] for an extensive presentation of the topic. The set of all words (including the empty word \(\varepsilon \)) on a finite alphabet \(\varSigma \) is denoted by \(\varSigma ^*\). The length of a word \(w\in \varSigma ^{*}\) is denoted by \(|w|\), and we let \(\varSigma ^{i}\) be the set of words on \(\varSigma \) of length \(i\ge 0\) (with \(\varSigma ^0=\{\varepsilon \}\)). Moreover, we let \(\varSigma ^{\le k}=\bigcup _{i=0}^k\varSigma ^i\). A language L on \(\varSigma \) is any subset \(L\subseteq \varSigma ^*\). The complement of L is denoted by \(L^c\).
Finite state automata. A nondeterministic finite state automaton (nfa, see, e.g., [18, 20]) is formally defined as a 5-tuple \(A=\langle Q,\varSigma ,\delta ,q_{0},F\rangle \), where Q is the finite set of states, \(\varSigma \) the finite input alphabet, \(q_{0}\in Q\) the initial state, \(F \subseteq Q\) the set of final, or accepting, states, and \(\delta \) is the transition function mapping \(Q\times (\varSigma \cup \{\varepsilon \})\) to finite subsets of Q. The computation of A on an input string \(x=\sigma _1\sigma _2\cdots \sigma _n\in \varSigma ^*\) begins in the initial state \(q_0\) by scanning the first input symbol \(\sigma _1\). Then, the rules established by the transition function \(\delta \) are applied. An application of \(\delta \) is called a move. An input string is accepted if there exists a computation beginning in the state \(q_{0}\) and ending in some final state \(q\in F\) after reading the entire input. If such a computation does not exist, then the input string is rejected. The set of all inputs accepted by A is denoted by L(A) and called the accepted language. The automaton A is deterministic (dfa) if there are no \(\varepsilon \)-transitions in \(\delta \) and, for every \(q\in Q\) and \(a\in \varSigma \), we have \(|\delta (q,a)|\le 1\).
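To make the acceptance condition concrete, the subset simulation of an nfa can be sketched in Python. This is an illustration of ours, not part of the formal development: \(\delta \) is encoded as a dictionary mapping pairs (state, symbol) to sets of states, with the empty string standing for \(\varepsilon \).

```python
def eps_closure(states, delta):
    """All states reachable from `states` via epsilon-transitions,
    encoded as entries with key (q, '')."""
    seen, stack = set(states), list(states)
    while stack:
        q = stack.pop()
        for p in delta.get((q, ''), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def nfa_accepts(x, delta, q0, finals):
    """Accept iff some computation on x ends in a final state."""
    current = eps_closure({q0}, delta)
    for sym in x:
        nxt = set()
        for q in current:
            nxt |= set(delta.get((q, sym), ()))
        current = eps_closure(nxt, delta)
    return bool(current & finals)
```

For instance, a three-state nfa for words over \(\{a,b\}\) ending in ab accepts aab and rejects aba under this simulation.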
The other two computational models we shall be dealing with can be obtained by equipping finite state automata with some auxiliary memory storage. Depending on whether such memory is used in a lifo or fifo mode, we have pushdown or queue automata, respectively. In particular, we will be interested in the case in which the auxiliary memory storage has a fixed constant size (not depending on input length).
Constant height pushdown automata. A nondeterministic pushdown automaton (npda, see, e.g., [18, 20]) is formally defined as a 7-tuple \(A=\langle Q,\!\varSigma , \varGamma , \delta , q_{0}, \perp , F\rangle \), where \(Q, \varSigma , q_{0}\) and F are defined as for nfas, \(\varGamma \) is the pushdown alphabet, \(\perp \ \in \varGamma \) is the initial symbol in the pushdown store, and the transition function \(\delta \) maps \(Q \times (\varSigma \cup \{\varepsilon \}) \times \varGamma \) to finite subsets of \(Q \times \varGamma ^{*}\!\). At any time, the pushdown content may be represented by a string whose leftmost symbol is the top of the pushdown, while the rightmost is its bottom. Let \(\delta (q,\sigma ,X)\ni (p,\gamma )\). Then A, being in the state q, reading the input symbol \(\sigma \) and the pushdown symbol X on the top of the pushdown, can reach the state p, replace X by \(\gamma \), and finally advance input scanning to the next input symbol only if \(\sigma \ne \varepsilon \). An input string is accepted if there exists a computation beginning in the state \(q_{0}\) with \(\perp \) in the pushdown, and ending in some final state \(q\in F\) after reading the entire input. The set of all inputs accepted by A is denoted by L(A). The automaton A is deterministic (dpda) if for any \(q\in Q\), \(\sigma \in \varSigma \cup \{\varepsilon \}\) and \(X\in \varGamma \), we have \(|\delta (q,\sigma ,X)|\le 1\), and if \(\delta (q,\varepsilon ,X)\) is defined then \(|\delta (q,a,X)|=0\) for any \(a\in \varSigma \).
A constant height npda [17] is obtained from a traditional npda by imposing that the pushdown store can never contain more than h pushdown symbols, for a fixed constant \(h\ge 0\) not depending on input length. By definition, any attempt to store more than h symbols in the pushdown results in rejection. This constant memory bounded device is formally specified by an 8-tuple \(A=\langle Q,\varSigma ,\varGamma ,\delta ,q_{0},\perp ,F,h\rangle \), where \(h\ge 0\) is the built-in pushdown height, while all other components are defined as above.
Constant length queue automata. A nondeterministic queue automaton (nqa, see, e.g., [2, 14]) is formally defined as a 7-tuple \(A=\langle Q,\varSigma ,\varGamma ,\delta ,q_{0},\vdash ,F\rangle \), where \(Q,\varSigma ,q_{0},F\) are defined as for nfas, \(\varGamma \) is the queue alphabet, \(\vdash \ \in \varGamma \) is the initial symbol in the queue store, and the transition function \(\delta \) maps \(Q \times (\varSigma \cup \{\varepsilon \}) \times \varGamma \) to finite subsets of \(Q \times \{D,K\}\times \varGamma ^{*}\!\). At any time, the queue content may be represented by a string whose leftmost symbol is the head of the queue, while the rightmost is its tail. Let \(\delta (q,\sigma ,X)\ni (p,\chi ,\omega )\). Then A, being in the state q, reading \(\sigma \) on the input and X as the head of the queue, can reach the state p, delete (resp., keep) X if \(\chi =D\) (resp., \(\chi =K\)), enqueue \(\omega \) (i.e., append \(\omega \) after the tail), and finally advance input scanning to the next input symbol only if \(\sigma \ne \varepsilon \). So, a transition with parameter D (resp., K) deletes (resp., keeps) the symbol at the head of the queue. An input string is accepted if there exists a computation beginning in the state \(q_{0}\) with \(\vdash \) in the queue, and ending in some final state \(q\in F\) after reading the entire input. The set of all inputs accepted by A is denoted by L(A). The automaton A is deterministic (dqa) if for any \(q\in Q\), \(\sigma \in \varSigma \cup \{\varepsilon \}\) and \(X\in \varGamma \), we have \(|\delta (q,\sigma ,X)|\le 1\), and if \(\delta (q,\varepsilon ,X)\) is defined then \(|\delta (q,a,X)|=0\) for any \(a\in \varSigma \).
A quick comment on our definition of a queue automaton is in order. In the definition of several automata models, two special types of moves, namely stationary moves and \(\varepsilon \)-moves, are sometimes allowed. Basically, a stationary move takes place on well-defined “configurations” established by the transition function, in which a particular state must be entered and a particular input symbol must be scanned. Performing such a move may modify the state of the automaton and the content of some auxiliary storage (if any), while the input head has to stay put. On the other hand, an \(\varepsilon \)-move is defined on a particular state without the need of reading a particular input symbol. After an \(\varepsilon \)-move, the state and some storage content (if any) may be changed, while the input head stays put.
In our definition of a queue automaton, we allow \(\varepsilon \)-moves but not stationary moves on the input, the latter feature being usually assumed in the literature (see, e.g., [2, 14]). This choice guarantees direct and fair comparisons between queue automata and pushdown automata, where stationary moves are never considered. However, it is not hard to see that queue automata with \(\varepsilon \)-moves and queue automata with stationary moves are descriptionally equivalent.
A constant length nqa is obtained from a traditional nqa by imposing that the queue can never contain more than h symbols, for a fixed constant \(h\ge 0\) not depending on input length. Any attempt to store more than h symbols in the queue results in rejection. This constant memory bounded device is formally specified by an 8-tuple \(A=\langle Q,\varSigma ,\varGamma ,\delta ,q_{0},\vdash ,F,h\rangle \), where \(h\ge 0\) is the built-in queue length, while all other components are defined as above.
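As an illustration of the semantics just defined, nondeterministic acceptance by a constant length nqa can be rendered as an exhaustive search over configurations (state, queue content, input position). The dictionary encoding of \(\delta \), with triples \((p,\chi ,\omega )\) and the empty string for \(\varepsilon \)-moves, is our own convention.

```python
def nqa_accepts(x, delta, q0, finals, h, bot='|-'):
    """Exhaustive search over configurations of a constant length nqa;
    any branch whose queue would exceed h symbols is discarded (rejection)."""
    start = (q0, (bot,), 0)
    seen, stack = {start}, [start]
    while stack:
        q, queue, i = stack.pop()
        if i == len(x) and q in finals:
            return True
        if not queue:
            continue                       # no head symbol: no move applies
        options = [('', i)]                # epsilon-move: input head stays put
        if i < len(x):
            options.append((x[i], i + 1))  # input-consuming move
        for sym, j in options:
            for (p, chi, omega) in delta.get((q, sym, queue[0]), ()):
                rest = queue[1:] if chi == 'D' else queue
                new_queue = rest + tuple(omega)
                if len(new_queue) <= h:    # built-in length limit
                    cfg = (p, new_queue, j)
                    if cfg not in seen:
                        seen.add(cfg)
                        stack.append(cfg)
    return False
```

For example, a small nqa that enqueues a marker per input a and dequeues one per input b accepts exactly the words \(a^nb^n\) whose queue usage fits within the limit h.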
Some common notions and observations. Throughout the rest of the paper, we will say that two automata A and \(A'\) are equivalent whenever \(L(A)=L(A')\) holds true. Moreover, for the sake of conciseness and when no possible confusion arises, we will be using the designation constant memory automaton to denote either a constant height npda or a constant length nqa. We observe that a constant memory automaton can be replaced by a corresponding “traditional”—i.e., without a built-in limit h on its memory storage—automaton by storing in its finite control states a counter recording permitted memory amounts (i.e., a number ranging within \(\{1,\ldots ,h\}\)). This increases the number of states by the multiplicative factor h (see, e.g., Lemma 3, where this technique is used in the proof). Indeed, notice that, for \(h=0\), the definition of a constant memory automaton exactly coincides with that of a finite state automaton.
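The counter technique mentioned in the remark above can be sketched concretely for queue automata. The sketch below pairs each state with the current queue length, dropping any transition that would exceed the limit; the dictionary encoding of transitions is our own.

```python
def add_length_counter(delta, h):
    """Turn a constant length nqa transition table into one for a
    'traditional' nqa: states become pairs (state, current queue length),
    so the built-in limit h moves into the finite control."""
    new_delta = {}
    for (q, sym, X), moves in delta.items():
        for (p, chi, omega) in moves:
            for n in range(1, h + 1):            # counter ranges within 1..h
                m = n - (chi == 'D') + len(omega)
                if m <= h:                        # moves past h are dropped
                    new_delta.setdefault(((q, n), sym, X), set()).add(
                        ((p, m), chi, omega))
    return new_delta
```

The number of states grows by the multiplicative factor h, as stated in the remark.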
Concerning the acceptance mode, our constant memory automata are defined to accept by final states. However, in the case of nondeterministic devices, at the cost of one more state it is always possible to accept by a single final state and with empty memory at once. This result is proved, e.g., in [17] for constant height npdas; for constant length nqas, the same can be obtained with a similar approach.
For a constant memory automaton, a fair size measure (see, e.g., [3, 17]) should take into account all the components the device consists of, namely: (i) the number of finite control states, (ii) the size of the memory alphabet, and (iii) the builtin memory limit. So, we adopt the following
Definition 1
The size of a constant memory automaton with state set Q, memory alphabet \(\varGamma \), and memory limit h, is specified by measuring \(|Q|\), \(|\varGamma |\), and h.
We observe that this definition immediately implies that the size of an nfa, for which clearly \(|\varGamma |=0=h\) holds true, is completely determined by the number of its states.
Straight line programs for regular expressions. A regular expression over a given alphabet \(\varSigma \) is inductively defined as:

(i)
\(\emptyset \), \(\varepsilon \), or a for any symbol \(a\in \varSigma \),

(ii)
\((r_{1} + r_{2})\), \((r_{1}\cdot r_{2})\), or \(r_{1}^{*}\!\), if \(r_{1}\) and \(r_{2}\) are regular expressions.
The language represented by a given regular expression is defined in the usual way (see, e.g., [18, 20]). With a slight abuse of terminology, we will often identify a regular expression with the language it represents.
A convenient and concise way of specifying regular expressions is provided by straight line programs. Given a set of variables \(X=\{x_{1},\ldots ,x_{\ell }\}\), a straight line program for regular expressions (slp, see [17]) on \(\varSigma \) is a finite sequence of instructions \(P:\ \text {instr}_{1};\,\text {instr}_{2};\,\ldots ;\,\text {instr}_{\ell },\)
where the ith instruction \(\text {instr}_{i}\) has one of the following forms:

(i)
\(x_{i}:=\emptyset \), \(x_{i}:=\varepsilon \), or \(x_{i}:=a\) for any symbol \(a\in \varSigma \),

(ii)
\(x_{i}:=x_{j}\!+\!x_{k}\), \(x_{i}:=x_{j}\!\cdot \!x_{k}\), or \(x_{i}:=x_{j}^{*}\), for \(1\le j,k<i\).
The program P expands to the regular expression in \(x_{\ell }\) (the output variable), obtained by nested macro-expansion of the variables \(x_{1},\ldots ,x_{\ell -1}\), using the right parts of their instructions. Note that point (ii) in the definition of instruction form imposes a loopless structure on slps, naturally leading to their digraph representation analogous to that of Boolean circuits (see, e.g., [1, 17]).
The length of P is defined to be \(\ell \), i.e., the number of its variables or, equivalently, of its instructions. Moreover, we remark that a variable may occur several times in the right parts of the instructions P consists of. Such a number of occurrences is called the fanout of that variable. The fanout of \(x_{\ell }\) is 0, while the fanout of any other variable is at least 1 (if no useless instructions occur). The fanout of P is the maximum fanout of its variables.
Definition 2
The size of an slp is specified by measuring its length and its fanout.
It is easy to see that a regular expression may be seen as an slp with fanout 1. In general, by exploiting fanout, straight line programs can be exponentially more succinct than regular expressions [17].
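The macro-expansion of an slp into a plain regular expression, and the exponential blowup caused by fanout, can be illustrated by the following sketch (the instruction encoding as tuples is ours):

```python
def expand_slp(program):
    """Macro-expand an slp into the regular expression held by its output
    (last) variable.  Instructions are tuples: ('sym', a), ('union', j, k),
    ('concat', j, k), ('star', j), with j, k indices of earlier variables."""
    val = []  # val[i] holds the expansion of variable x_{i+1}
    for instr in program:
        op, *args = instr
        if op == 'sym':
            val.append(args[0])
        elif op == 'union':
            val.append('(' + val[args[0]] + '+' + val[args[1]] + ')')
        elif op == 'concat':
            val.append('(' + val[args[0]] + val[args[1]] + ')')
        elif op == 'star':
            val.append(val[args[0]] + '*')
    return val[-1]
```

A repeated-squaring program of \(\ell \) instructions, each with fanout 2, expands to a regular expression containing \(2^{\ell -1}\) occurrences of the base symbol, which is exactly the succinctness gap mentioned above.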
We end this section with a quick comment on the way we denote the type of size increase when simulating computational models. As the reader has noticed, due to their different “hardware”, the size of the models under consideration here is not always measured by the same parameters. So, with a slight abuse of terminology we say, e.g., that the size blowup is exponential in the simulation of the model A by the model B whenever some parameter measuring the size of B is bounded above by some parameter of A appearing in the exponent. An example of this convention can be seen in the discussion before Proposition 2.
Comparing constant length queue automata and finite state automata
We start comparing the descriptional power of constant memory automata with that of classical finite state automata. In [17], it is proved that any constant height npda (resp., constant height dpda) can be converted into an equivalent nfa (resp., dfa) paying by an exponential size increase, this size cost being optimal, that is, necessary in some cases. An analogous result may be obtained for converting constant length nqas and dqas:
Theorem 1
For each constant length nqa \(A=\langle Q,\varSigma ,\varGamma , \delta , q_0, \vdash , F,h\rangle \), there exists an equivalent nfa \(A'=\langle Q',\varSigma , \delta ', q_0',F'\rangle \) with \(|Q'|\le |Q|\cdot |\varGamma ^{\le h}|\). Moreover, if A is a constant length dqa then \(A'\) is a dfa.
Proof
The key idea is to keep the queue content of A, represented by a string in \(\varGamma ^{\le h}\), in the finite control state. The transitions of \(A'\) reflect stepbystep the evolution of both state and queue content in A. So, we let our nfa \(A'=\langle Q',\varSigma , \delta ', q_0',F'\rangle \) have \(Q'=Q\times \varGamma ^{\le h}\!\), \(q_0'=[q_0,\vdash ]\), \(F'=F\times \varGamma ^{\le h}\!\), and \(\delta '\) defined as follows, for any \(p,q\in Q\), \(\omega \in \varGamma ^{\le h}\!\), \(\sigma \in \varSigma \cup \{\varepsilon \}\), and \(X\in \varGamma \) (recall that \(X^0=\varepsilon \)):

if \((p,\chi ,\omega )\in \delta (q,\sigma ,X)\) and \(\alpha \in \varGamma ^{\le h-1}\), then \([p,X^e\alpha \omega ]\in \delta '([q,X\alpha ],\sigma ),\) provided that \(|X^e\alpha \omega |\le h\), where \(e=0\) (resp., \(e=1\)) if \(\chi =D\) (resp., \(\chi =K\)).
The reader may easily verify that \(L(A')=L(A)\), and that this transformation preserves determinism. \(\square \)
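The construction in the proof of Theorem 1 can be sketched directly: the nfa states \([q, X\alpha ]\) become Python tuples pairing a state with a queue content. The encodings below (transitions as dictionaries, queue contents as tuples) are our own.

```python
from itertools import product

def nqa_to_nfa(gamma, delta, h):
    """Theorem 1 construction (a sketch): build the nfa transition table
    whose states are pairs (state, queue content in Gamma^{<=h}).
    The initial nfa state is (q0, (bot,)); final states pair F with any content."""
    # alpha ranges over Gamma^{<=h-1}, the content behind the head symbol X
    prefixes = [t for n in range(h) for t in product(gamma, repeat=n)]
    nfa_delta = {}
    for (q, sigma, X), moves in delta.items():
        for (p, chi, omega) in moves:
            for alpha in prefixes:
                head = (X,) if chi == 'K' else ()     # X^e with e in {0, 1}
                new_w = head + alpha + tuple(omega)
                if len(new_w) <= h:                   # content must fit in h cells
                    key = ((q, (X,) + alpha), sigma)
                    nfa_delta.setdefault(key, set()).add((p, new_w))
    return nfa_delta
```

Each nfa transition mirrors one move of the nqa, step by step, exactly as in the proof; the number of nfa states is bounded by \(|Q|\cdot |\varGamma ^{\le h}|\).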
Let us now show the optimality of the exponential simulation costs presented in Theorem 1. Consider the following witness language, for each \(h>0\), each alphabet \(\varGamma \), and a separator symbol \(\sharp \not \in \varGamma \): \(D_{\varGamma ,h}=\{\,w\sharp w \mid w\in \varGamma ^{\le h}\,\}\).
Such a language is accepted by small constant length dqas, while any accepting nfa must be exponentially larger:
Theorem 2
For each \(h>0\) and each alphabet \(\varGamma \):

(i)
The language \(D_{\varGamma ,h}\) is accepted by a constant length dqa with 3 states, queue alphabet \(\varGamma \cup \{\sharp ,\vdash \}\), and queue length \(h+1\).

(ii)
Any nfa accepting the language \(D_{\varGamma ,h}\) must have at least \(|\varGamma ^{\le h}|\) states.
Proof
For point (i), we informally describe the behavior of a constant length dqa A for \(D_{\varGamma ,h}\), when accepting the input string \(w\sharp w\in D_{\varGamma ,h}\). First, A stores \(w\sharp \) in its queue by remaining in its initial state and using no more than \(h+1\) queue cells. Then, by switching to another state, A symbol-by-symbol compares the input suffix w against the queue content by dequeuing any matching symbol. Finally, by reading the sole symbol \(\sharp \) in the queue, A reaches a final state by an \(\varepsilon \)-move. The formal definition of A, correctly managing also inputs not in \(D_{\varGamma ,h}\), may be easily worked out by the reader.
For point (ii), assume by contradiction that an nfa B for \(D_{\varGamma ,h}\) exists with less than \(|\varGamma ^{\le h}|\) states. Suppose also that any nonfinal state of B has an outgoing path leading to a final state; otherwise, we can remove the state without altering the accepted language. By counting arguments, there exist \(v, w\in \varGamma ^{\le h}\) such that \(v\ne w\) and some computation of B on v and some computation of B on w end up in the same nonfinal state q. Now, let \(\alpha \) be a word leading B from q to a final state, so that both \(v\alpha \) and \(w\alpha \) belong to \(D_{\varGamma ,h}\). From \(v\alpha \in D_{\varGamma ,h}\) we get \(\alpha =\beta \sharp v\beta \) for some \(\beta \in \varGamma ^{*}\), and from \(w\alpha \in D_{\varGamma ,h}\) we get \(\alpha =\gamma \sharp w\gamma \) for some \(\gamma \in \varGamma ^{*}\). Since neither \(\beta \) nor \(\gamma \) contains \(\sharp \), we obtain \(\beta =\gamma \), hence \(v=w\), against the hypothesis \(v\ne w\). \(\square \)
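The behavior of the dqa described in point (i) can be simulated step by step as follows (a sketch of ours, with \(\sharp \) rendered as '#'; the first move deletes the initial queue symbol \(\vdash \) while starting to store w):

```python
from collections import deque

def copy_dqa_accepts(x, h, sep='#'):
    """Simulate the 3-state constant length dqa of Theorem 2(i) for
    D = { w#w : w over Gamma, |w| <= h }, queue length limit h+1."""
    queue = deque()                       # the initial symbol is deleted at once
    i = 0
    while i < len(x) and x[i] != sep:     # phase 1: enqueue w symbol by symbol
        queue.append(x[i])
        if len(queue) > h + 1:            # built-in limit h+1 exceeded: reject
            return False
        i += 1
    if i == len(x):
        return False                      # no separator on the input: reject
    queue.append(sep)
    if len(queue) > h + 1:
        return False
    i += 1
    while i < len(x):                     # phase 2: match suffix against queue
        if not queue or queue[0] != x[i]:
            return False
        queue.popleft()                   # dequeue each matching symbol
        i += 1
    # the final epsilon-move accepts iff only the separator remains
    return len(queue) == 1 and queue[0] == sep
```

Note that the queue never holds more than \(h+1\) symbols, matching the bound claimed in the theorem.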
Now, we investigate the size tradeoff between constant length nqas and dfas. By Theorem 1, any constant length nqa can be simulated by an equivalent nfa, paying by an exponential size increase. In turn, the classical powerset transformation of nfas into dfas (see, e.g., [18, 20]) induces another exponential size blowup. So, we get
Proposition 1
Any constant length nqa with state set Q, queue alphabet \(\varGamma \), and queue length h can be converted into an equivalent dfa with \(2^{|Q|\cdot |\varGamma ^{\le h}|}\) states.
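The second exponential step, the classical powerset transformation, can be sketched as follows (\(\varepsilon \)-moves are omitted for brevity; the dictionary encoding is ours):

```python
def powerset_dfa(nfa_delta, q0, finals, alphabet):
    """Classical subset construction: dfa states are frozensets of nfa
    states, built on the fly from the reachable part only."""
    start = frozenset({q0})
    dfa_delta, todo, seen = {}, [start], {start}
    while todo:
        S = todo.pop()
        for a in alphabet:
            # successor set: union of nfa successors over all states in S
            T = frozenset(p for q in S for p in nfa_delta.get((q, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_finals = {S for S in seen if S & finals}
    return dfa_delta, start, dfa_finals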
The double-exponential simulation cost pointed out in Proposition 1 is optimal. In fact, for each \(h>0\), each alphabet \(\varGamma \), and a separator symbol \(\sharp \not \in \varGamma \), we define the language \(S_{\varGamma ,h}=\{\,v_1\cdots v_k\,\sharp \,w_1\cdots w_m \mid v_i,w_j\in \varGamma ^{h} \text { and } v_i=w_j \text { for some } i,j\,\}\).
The following theorem proves that \(S_{\varGamma ,h}\) can be accepted by a constant length nqa in which the number of states and the length of the queue are both linear in h, while any equivalent dfa requires a number of states which cannot be less than double-exponential in h:
Theorem 3
For each \(h>0\) and each alphabet \(\varGamma \):

(i)
The language \(S_{\varGamma ,h}\) is accepted by a constant length nqa with O(h) states, queue alphabet \(\varGamma \cup \{\vdash \}\), and queue length h.

(ii)
Any dfa accepting the language \(S_{\varGamma ,h}\) must have at least \(2^{|\varGamma ^{h}|}\) states.
Proof
For point (i), informally an nqa A for \(S_{\varGamma ,h}\) sweeps the first half of the input string while counting modulo h in its finite state control, in order to delimit each block \(v_i\). While doing this, A nondeterministically chooses a block \(v_i\) to be stored in the queue, and skips the others until \(\sharp \) is reached. From this point on, A sweeps the second half of the input string as done for the first half, but this time it nondeterministically chooses a block \(w_j\) to be matched against the block \(v_i\) previously stored in the queue. Clearly, the length of the queue is h, and O(h) states suffice to count modulo h and perform queuing/dequeuing operations as described.
For point (ii), assume by contradiction the existence of a dfa A for \(S_{\varGamma ,h}\) featuring less than \(2^{|\varGamma ^{h}|}\) states. By counting arguments, there exist two different subsets of \(\varGamma ^h\), say \(B=\{x_1,x_2,\ldots ,x_m\}\) and \(C=\{y_1,y_2,\ldots ,y_n\}\), such that A reaches the same state q after processing either the input string \(\alpha =x_1x_2\cdots x_m\) or the input string \(\beta =y_1y_2\cdots y_n\). Since \(B\ne C\), without loss of generality there exists a word u with \(u\in B\) and \(u\notin C\). Clearly, the word \(\alpha \sharp u\) belongs to \(S_{\varGamma ,h}\), while the word \(\beta \sharp u\) does not. However, starting from q and deterministically processing the same suffix \(\sharp u\), we get that A accepts \(\alpha \sharp u\in S_{\varGamma ,h}\) if and only if it accepts \(\beta \sharp u\not \in S_{\varGamma ,h}\), a contradiction. \(\square \)
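For illustration, membership in \(S_{\varGamma ,h}\), read as in the proof above (both halves split into blocks of length h, accepted iff some left block equals some right block), can be decided as follows; the reading and all names are ours:

```python
def in_S(x, h, sep='#'):
    """Membership in S: split both halves into blocks of length h and
    check whether some block occurs on both sides of the separator."""
    if x.count(sep) != 1:
        return False
    left, right = x.split(sep)
    if len(left) % h or len(right) % h:
        return False                   # halves must split into h-blocks
    lefts = {left[i:i + h] for i in range(0, len(left), h)}
    rights = {right[i:i + h] for i in range(0, len(right), h)}
    return bool(lefts & rights)
```

The nqa of point (i) decides exactly this condition by guessing which left block to store and which right block to match.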
The cost of determinizing constant length queue automata
Let us now focus on the cost of removing nondeterminism in constant length \(\textsc {nqa}\)s. We are going to prove an optimal exponential cost, in sharp contrast with the realm of constant height \(\textsc {npda}\)s, where an optimal double-exponential cost is proved in [5].
As a preliminary result, we complete the picture given in the previous section by studying the missing simulation. Precisely, we show that, by having a constant length queue at our disposal, we can remove nondeterminism in finite state automata paying by only a linear size increase.
Theorem 4
For each nfa \(A=\langle Q,\varSigma ,\delta ,q_1,F\rangle \), there exists an equivalent constant length dqa \(A'=\langle Q',\varSigma ,\varGamma ,\delta ',q_0,\vdash ,F',h\rangle \) such that \(|Q'| \le 2\cdot |Q|\cdot |\varSigma |+|\varSigma |+3\), \(|\varGamma |\le |Q|+2\), and \(h \le 2\cdot |Q|+2\).
Proof
The key idea is to simulate the behavior of the powerset automaton of A (i.e., the dfa obtained from A by the powerset construction [18, 20]) by using the queue of \(A'\) to store the set of states in which A might currently be. Each possible transition of A is simulated in \(A'\) by consuming the state q at the head of the queue, and enqueuing the set of successors of q on the current input symbol. Basically, \(A'\) performs a breadthfirst traversal of the computation digraph of A on a given input string, searching for a possible accepting state.
Formally, we define our constant length dqa \(A'=\langle Q',\varSigma ,\varGamma ,\delta ',q_0,\vdash ,\) \(F',h\rangle \) as having:

\( Q' = \{\, e_a^q,\, {\tilde{e}}_a^q \mid q\in Q, a\in \varSigma \,\} \cup \{\, t_a \mid a\in \varSigma \,\} \cup \{\,q_0,r,r_F\,\},\)

\(\varGamma =Q\cup \{\sharp ,\vdash \}\),

\(F'= \{r_F\}\) if \(q_1\notin F\), and \(F'=\{q_0,r_F\}\) otherwise,

\(h=2\cdot |Q|+2\).
Let us now focus on defining the transition function \(\delta '\). The first transition initializes the queue, reads the first input symbol, and stores it in the finite state control. So, for every \(a\in \varSigma \), we let \(\delta '(q_0,a,\vdash )=\{(t_a,D,\,q_1\sharp \vdash )\}\).
Roughly speaking, along the computation of \(A'\), the queue content will be a string in \(\varGamma ^*\) of the form \(\alpha \sharp \beta \!\vdash \), with \(\alpha ,\beta \in Q^*\), having the following meaning: the prefix \(\alpha \) represents the set of states in which A might currently be, while the factor \(\beta \) represents the set of states A might reach upon reading the current input symbol. This queue content is managed by \(A'\) as follows: the first symbol of \(\alpha \) (i.e., the state q at the head of the queue) is consumed and its successor states in A are enqueued. To accomplish this task, a queue rotation is required to avoid storing the same state multiple times in the second part \(\beta \) of the queue. To this aim, the transitions on the state \(e_a^q\) rotate the queue content until \(\sharp \) is reached, while those on \({\tilde{e}}_a^q\) still rotate the queue content, together with deleting the successors of q already occurring in \(\beta \). Formally, let \(Q=\{q_1,q_2,\ldots ,q_n\}\) be the state set of A, and for any \(P=\{q_{i_1},q_{i_2},\ldots ,q_{i_\ell }\}\subseteq Q\) with \(i_1<i_2<\cdots <i_\ell \) define \(w_P\) as the string \(q_{i_1} q_{i_2} \cdots q_{i_\ell }\) \(\in \varGamma ^*\). For any \(a\in \varSigma \) and \(p,q\in Q\), we let
If there is no further state symbol of A in the first part of the queue, i.e., if the head of the queue is \(\sharp \), we need to process \(\beta \) upon the next input symbol. To this aim, the queue content is modified from \(\sharp \beta \!\vdash \) to \(\beta \sharp \!\vdash \). We get this by rotating the queue in the states r and \(r_F\), while checking whether some state in F shows up in the queue. If this is the case, \(A'\) enters the accepting state \(r_F\) at the end of rotation; otherwise, it enters the state r. Formally, for any \(a\in \varSigma \) and \(q\in Q\), we let
To see that \(L(A')=L(A)\), let the dfa \(A''\) be the powerset automaton of A. First, note that \(\varepsilon \) is accepted by \(A'\) if and only if \(\varepsilon \) is accepted by \(A''\). Now, assume that \(A''\) is in some state \(P_1\subseteq Q\), reads an input symbol \(a\in \varSigma \), and goes to the successor state \(P_2\subseteq Q\). Further, assume that \(A'\), after consuming the input symbol a, is in the state \(t_a\), and its queue content is \(w_1\sharp \!\vdash \), for some permutation \(w_1\) of \(w_{P_1}\)—note that this situation holds from the beginning, after the first transition of \(A'\). Then, for each state symbol q in \(w_1\), the dqa \(A'\) deletes this symbol from \(w_1\), and adds the state symbols from \(\delta (q,a)\) to the second part of the queue between the \(\sharp \) and \(\vdash \) symbols. When all state symbols from \(w_1\) are processed, the queue content of \(A'\) is \(\sharp w_2\!\vdash \), where \(w_2\) is some permutation of \(w_{P_2}\). Now, \(A'\) sees the \(\sharp \) symbol in the queue and scans the word \(w_2\), trying to reveal some accepting state symbol \(q\in F\) by rotating the queue content symbol by symbol. When \(A'\) sees the \(\vdash \) symbol, the scanning of \(w_2\) is completed. In this situation, \(A'\) is in the accepting state \(r_F\) if and only if there is an accepting state of A in the state set \(P_2\). Then, the next input symbol is consumed, the new queue content is \(w_2\sharp \!\vdash \), and \(A'\) is ready to simulate the next step of \(A''\). Therefore, after completely sweeping the input string, \(A'\) is in the accepting state \(r_F\) if and only if \(A''\) entered an accepting state.
Clearly, the dqa \(A'\) has \(2\cdot |Q|\cdot |\varSigma |+|\varSigma |+3\) states and \(|Q|+2\) queue symbols. Moreover, notice that at any time the queue contains any state symbol \(q\in Q\) at most once in the first part and at most once in the second part, at most one \(\sharp \) symbol, and at most one \(\vdash \) symbol. So, the queue length never exceeds \(2\cdot |Q|+2\). \(\square \)
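Abstracting away the explicit rotation with the \(\sharp \) and \(\vdash \) markers, the Theorem 4 simulation boils down to the following breadth-first subset tracking (a high-level sketch of ours, not the automaton itself; the nfa transition table is encoded as a dictionary):

```python
def queue_powerset_accepts(x, delta, q1, finals):
    """High-level rendition of the Theorem 4 simulation: the queue holds
    the subset of states the nfa might currently be in; each input symbol
    is processed by dequeuing every old state and enqueuing its successors,
    skipping duplicates (the role of the queue rotation)."""
    queue = [q1]                       # first part of the queue (before #)
    accept = q1 in finals
    for a in x:
        new = []                       # second part of the queue (after #)
        for q in queue:                # consume the head, one state at a time
            for p in delta.get((q, a), ()):
                if p not in new:       # rotation avoids duplicate storing
                    new.append(p)
        queue = new
        accept = any(p in finals for p in queue)
    return accept
```

At any point the two parts together hold at most \(2\cdot |Q|\) state symbols, which is why the built-in queue length \(2\cdot |Q|+2\) suffices.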
As a consequence of Theorem 4, we can settle the claimed optimal exponential size cost for the determinization of constant length nqas:
Theorem 5
For each constant length nqa \(A= \langle Q, \varSigma ,\varGamma , \delta , q_0,\vdash , F,h\rangle \), there exists an equivalent constant length dqa \(A'=\langle Q', \varSigma ,\varGamma ', \delta ', q_0',\vdash ', F',h'\rangle \) which satisfies \(|Q'| \in O(|Q|\cdot |\varGamma ^{\le h}|\cdot |\varSigma |)\), \(|\varGamma '|\in O(|Q|\cdot |\varGamma ^{\le h}|)\), and \(h'\in O(|Q|\cdot |\varGamma ^{\le h}|)\). Furthermore, the size cost of this conversion is optimal.
Proof
First, we use Theorem 1 to transform the given constant length nqa into an equivalent nfa, paying by an exponential size increase. Then, we use the linear transformation in Theorem 4 to obtain the desired constant length dqa.
To get the optimality of this exponential size blowup, assume by contradiction a subexponential size increase. Then, by Theorem 1, we would get a sub-double-exponential size cost for converting constant length nqas into dfas, against the double-exponential optimality pointed out in Theorem 3. \(\square \)
Comparing constant length queue automata and constant height pushdown automata
We begin by showing that constant height pushdown automata can be simulated by constant length queue automata paying by only a linear size increase.
Theorem 6
For any constant height npda \(A=\langle Q,\varSigma ,\varGamma ,\delta ,q_{0},\perp ,F,h\rangle \), there exists an equivalent constant length nqa \(A'\) with \(2\cdot |Q|\) states, \(|\varGamma |+1\) queue symbols, and queue length \(h+1\). Moreover, if A is a constant height dpda then \(A'\) is a constant length dqa.
Proof
The key idea is to maintain the pushdown storage in the queue so that the queue head (resp., tail) represents the symbol at the top (resp., bottom) of the pushdown. The simulation in \(A'\) of a move of A works as follows:

(i)
the queue head (corresponding to the top symbol of A’s pushdown) is consumed,

(ii)
the string to be piled on the top of the pushdown is enqueued, preceded by a special separator symbol \(\sharp \notin \varGamma \),

(iii)
the whole queue content is rotated symbol by symbol, until \(\sharp \) is consumed.
It is easy to see that at the end of this rotation the new queue content reflects the pushdown content in A after the simulated move.
So, we define our constant length nqa \(A'=\langle Q',\varSigma ,\varGamma \cup \{\sharp \},\delta ',q_{0},\perp ,F,h+1\rangle \) as having \(Q'=Q\cup \{r_q:q\in Q\}\), and \(\delta '\) defined as follows. Given in A the move \(\delta (q,\sigma ,Z)\ni (p,\gamma )\), with \(\sigma \in \varSigma \cup \{\varepsilon \}\), we let
$$\begin{aligned} \delta '(q,\sigma ,Z)&\ni (r_p,D,\sharp \gamma ), \\ \delta '(r_p,\varepsilon ,X)&\ni (r_p,D,X) \quad \text {for each } X\in \varGamma , \\ \delta '(r_p,\varepsilon ,\sharp )&\ni (p,D,\varepsilon ). \end{aligned}$$
The first transition enqueues the pushed string and prepares the queue rotation, which is then accomplished by the transitions of the second type. Finally, the third transition switches the state according to \(\delta \).
Clearly, \(|Q'|=|Q|+|\{r_q:q\in Q\}|=2\cdot |Q|\), while the set of queue symbols has cardinality \(|\varGamma |+1\). Also, notice that the queue rotation process increases the length of the queue by 1 at the first step, due to appending \(\sharp \). In conclusion, observe that no nondeterminism is induced by moves from the states in \(\{r_q:q\in Q\}\). So, if A is deterministic, then \(A'\) is deterministic as well. \(\square \)
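The three-phase rotation of Theorem 6 can be sketched as follows. This is a hedged illustration with the queue as a Python list (head first, playing the pushdown top); `'#'` stands for \(\sharp \), and `gamma` is the pushed string given top-first. The function name is an assumption of this sketch.

```python
# One simulated pda move: (i) consume the head (= pushdown top),
# (ii) enqueue '#' followed by the pushed string gamma,
# (iii) rotate symbol by symbol until '#' is consumed.
def simulate_push_move(queue, gamma):
    queue = queue[1:]                   # (i) pop the simulated pushdown top
    queue += ['#'] + list(gamma)        # (ii) separator, then pushed string
    while queue[0] != '#':              # (iii) rotate the old content...
        queue.append(queue.pop(0))
    return queue[1:]                    # ...and consume '#': gamma is at the head

# Pushdown [Z, X] (Z on top); the move pops Z and pushes "AB" (A on top).
print(simulate_push_move(['Z', 'X'], 'AB'))  # ['A', 'B', 'X']
```

After the rotation the queue reads \(\gamma \) followed by the former content below the top, exactly the pushdown after the simulated move.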
For the reverse conversions, i.e., from constant length queue automata to constant height pushdown automata, we notice that Theorem 1 directly implies an exponential upper bound since a finite automaton can be seen as a pushdown automaton that does not use its own pushdown store. Therefore,
Proposition 2
For each constant length nqa \(A=\langle Q,\varSigma ,\varGamma , \delta , q_0, \vdash , F,h\rangle \), there exists an equivalent constant height npda \(A'=\langle Q',\varSigma , \varGamma ', \delta ', q_0',\bot ,F',h'\rangle \) which satisfies \(|Q'|\le |Q|\cdot |\varGamma ^{\le h}|\) and \(|\varGamma '|=h' = 1\). Moreover, if A is a constant length dqa then \(A'\) is a constant height dpda.
A corresponding exponential size cost lower bound for converting constant length queue automata to constant height pushdown automata, that is, the optimality of Proposition 2, will be proved later in Proposition 6, Sect. 6.
We end this section by addressing the size costs for the conversions of constant height npdas to constant length dqas, and of constant length nqas to constant height dpdas. We show an optimal exponential size cost for the former conversion, and a double-exponential size cost for the latter.
Proposition 3
For each constant height npda \(A= \langle Q, \varSigma ,\varGamma , \delta , q_0,\bot , F,h\rangle \), there exists an equivalent constant length dqa \(A'=\langle Q', \varSigma ,\varGamma ', \delta ', q_0',\vdash , F',h'\rangle \) which satisfies \(|Q'| \in O(|Q|\cdot |\varGamma ^{\le h}|\cdot |\varSigma |)\), \(|\varGamma '| \in O(|Q|\cdot |\varGamma ^{\le h}|)\), and \( h' \in O(|Q|\cdot |\varGamma ^{\le h}|)\). Furthermore, the size cost of this conversion is optimal.
Proof
For the size cost upper bound, it suffices to apply Theorem 6 and get from A an equivalent constant length nqa which in turn, according to Theorem 5, can be transformed into an equivalent constant length dqa \(A'\) with the claimed size.
For the optimality, we notice that if every constant height npda could be transformed into an equivalent constant length dqa with a subexponential size blowup, then we could use the exponential transformation from Proposition 2 to convert the resulting constant length dqa into an equivalent constant height dpda. Altogether, this would yield a sub-double-exponential size cost for removing nondeterminism in constant height npdas, against the results proved in [5]. \(\square \)
Proposition 4
For each constant length nqa \(A=\langle Q,\varSigma ,\varGamma , \delta , q_0, \vdash , F,h\rangle \), there exists an equivalent constant height dpda \(A'=\langle Q',\varSigma , \varGamma ', \delta ', q_0',\bot ,F',h'\rangle \) which satisfies \(|Q'|\le 2^{|Q|\cdot |\varGamma ^{\le h}|}\) and \(|\varGamma '|=h' = 1\). Furthermore, the size cost of this conversion is optimal.
Proof
The size cost upper bound directly follows from Proposition 1. For the optimality, we observe that a sub-double-exponential size cost conversion, combined with the linear transformation from constant height npdas to constant length nqas in Theorem 6, would yield a sub-double-exponential cost for removing nondeterminism in constant height npdas, against the results proved in [5]. \(\square \)
Comparing queue automata and straight line programs
Let us state the cost of turning constant length nqas into slps. By composing Proposition 2 with a result in [17], relating the size of a constant height npda with the size of an equivalent slp, we get:
Proposition 5
For each constant length nqa \(A\!=\!\langle Q,\varSigma ,\varGamma , \delta , q_0, \vdash , F,h\rangle \), there exists an equivalent slp of length \(O(|Q|^4 \cdot |\varGamma ^{\le h}|^4 \cdot |\varSigma |)\) and fanout \(O(|Q|^2 \cdot |\varGamma ^{\le h}|^2)\).
Proof
The exponential size cost is obtained by composing the exponential simulation from constant length nqas to constant height npdas in Proposition 2 with the polynomial transformation given in [17], from constant height npdas to slps. \(\square \)
For the optimality of Proposition 5, we consider the following language, for each \(h>0\), each alphabet \(\varGamma \), and separator symbols \(\$,\sharp \notin \varGamma \):
$$L_{\varGamma ,h}=\{(\sharp u)^{n}\,\$ \;:\; n\ge 1,\ u\in \varGamma ^{h}\}.$$
Theorem 7
For each \(h>0\) and each alphabet \(\varGamma \):

(i)
The language \(L_{\varGamma ,h}\) is accepted by a constant length dqa with O(h) states, queue alphabet \(\varGamma '=\varGamma \cup \{\vdash ,\sharp \}\), and queue length \(h+1\).

(ii)
Any slp accepting the language \(L_{\varGamma ,h}\) must have at least \(|\varGamma |^h\) variables.
Proof
A constant length dqa for \(L_{\varGamma ,h}\) stores the prefix \(\sharp u\) of a given input word in its queue, while checking that \(|u| = h\). This requires O(h) states. Then, it matches its queue content against the rest of the input word by rotating the queue symbol by symbol, rejecting as soon as a format violation or a symbol mismatch is detected. Upon reading the input symbol \(\$\) with the symbol \(\sharp \) at the head of the queue, the dqa enters a final state.
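This matching strategy can be sketched as follows. The sketch assumes, as in the lower-bound argument below, that \(L_{\varGamma ,h}\) consists of the words \((\sharp u)^n\$\) with \(u\in \varGamma ^h\); the finite state control is abstracted into plain Python control flow, `'#'` stands for \(\sharp \), and alphabet checks are omitted.

```python
# Store the prefix '#u' in the queue, then match every later block against
# the queue content by rotating it symbol by symbol.
def accepts(word, h):
    if not word or word[0] != '#':
        return False
    queue = ['#'] + list(word[1:1 + h])    # store '#u', checking |u| = h
    if len(queue) != h + 1 or '#' in queue[1:] or '$' in queue[1:]:
        return False
    i = 1 + h
    while i < len(word) and word[i] != '$':
        if word[i] != queue[0]:            # mismatch with the queue head
            return False
        queue.append(queue.pop(0))         # rotate the matched symbol
        i += 1
    # accept iff '$' ends the input while '#' is at the head of the queue
    return i < len(word) and word[i] == '$' and queue[0] == '#' and i == len(word) - 1

print(accepts('#ab#ab$', 2))   # True
print(accepts('#ab#ba$', 2))   # False
```

After each full block \(\sharp u\) is matched, the rotation has restored the queue to \(\sharp u\), so every repetition is checked against the same stored word u.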
Let us now switch to slps for \(L_{\varGamma ,h}\), and first introduce some terminology. In an slp, we call star-variable any variable x occurring on the left-hand side of a star-instruction of the form \(x:=y^*\). Moreover, we denote by L(x) the language represented by the regular expression computed in x.
Thus, let P be an slp computing a regular expression for \(L_{\varGamma ,h}\), and let \(P'\) be the slp obtained from P by replacing every star-instruction \(x:=y^*\) with the instruction \(x:=\varepsilon \). Clearly, \(P'\) describes a finite language, for which we let m be the length of a longest word. It is easy to see that, for any word \(z\in L_{\varGamma ,h}\) with \(|z| > m\), the slp P must make use of a star-variable producing some nonempty factor of z. By applying this observation to the word \(z_u=(\sharp u)^{m}\$ \in L_{\varGamma ,h}\), with \(u\in \varGamma ^h\), we get the existence of a star-variable \(x_u\) in P such that:

(i)
\(z_u=z_{u,1} z_{u,2} z_{u,3}\) with \(\varepsilon \ne z_{u,2}\in L(x_u)\), and

(ii)
for all \(z_{u,2}'\in L(x_u)\), we have \(z_{u,1} z_{u,2}' z_{u,3}\in L_{\varGamma ,h}\).
The nonempty factor \(z_{u,2}\) must contain at least one \(\sharp \) symbol: otherwise, P would describe a word with a factor \(\omega \in \varGamma ^*\) satisfying \(|\omega |>h\), which cannot belong to \(L_{\varGamma ,h}\). If \(z_{u,2}\) contains at least two \(\sharp \) symbols, then it contains the factor \(\sharp u\sharp \). If \(z_{u,2}\) contains only one \(\sharp \) symbol, i.e., if \(z_{u,2} = v_1\sharp v_2\) for some \(v_1,v_2\in \varGamma ^*\), then \(v_1\) (resp., \(v_2\)) is a suffix (resp., prefix) of u. In fact, since \(z_{u,2}^2 \in L(x_u)\), it must be that \(v_2v_1=u\), so that \(z_{u,2}^2\) contains the factor \(\sharp v_2v_1\sharp =\sharp u\sharp \).
Thus, we have shown that, for every word \(u\in \varGamma ^h\), there exists a star-variable \(x_u\) such that \(L(x_u)\) contains a word having \(\sharp u\sharp \) as a factor. If P has fewer than \(|\varGamma |^h\) variables, then clearly there exist \(u,v\in \varGamma ^h\), with \(u\ne v\), satisfying \(x_u=x_v\). This implies that the language \(L(x_u)\) contains a word having \(\sharp u\sharp \) as a factor as well as a word having \(\sharp v\sharp \) as a factor. Moreover, since \(L(x_u)\) is closed under the star operation, we get that P describes words of the form \(\alpha \sharp u\sharp \beta \sharp v\sharp \gamma \not \in L_{\varGamma ,h}\), for \(\alpha ,\beta ,\gamma \in (\varGamma \cup \{\sharp ,\$\})^*\), a contradiction. Thus, P must have at least \(|\varGamma |^h\) variables. \(\square \)
As a consequence of Theorem 7, we get the optimality of the exponential size cost conversions of constant length queue automata to constant height pushdown automata addressed in Proposition 2:
Proposition 6
The exponential conversions from constant length dqas to constant height npdas, and from constant length nqas (resp., dqas) to constant height npdas (resp., dpdas) in Proposition 2 are optimal.
Proof
Assume, by contradiction, a subexponential size cost conversion from constant length dqas to constant height npdas. Then, by the polynomial size cost conversion of constant height npdas to slps shown in [17], one could also convert any constant length dqa to an equivalent slp of subexponential size, against Theorem 7. We can argue similarly for the other conversions. \(\square \)
We conclude by converting slps to constant length nqas or dqas:
Proposition 7
The size blowup for converting slps to equivalent constant length dqas (resp., nqas) is exponential (resp., linear). Furthermore, the former exponential transformation is optimal.
Proof
In [17], a linear size cost transformation from slps to constant height npdas is presented. Together with the linear size cost conversion from constant height npdas to constant length nqas established in Theorem 6, this yields a linear size cost conversion from slps to constant length nqas. In turn, Theorem 5 implies an exponential size cost upper bound for converting slps to constant length dqas.
For the optimality of this latter exponential size blowup, assume by contradiction that a subexponential size cost conversion exists from slps to constant length dqas. Then, by a preliminary polynomial size cost transformation from constant height npdas to slps, plus a successive exponential size cost conversion from constant length dqas to constant height dpdas, one could get a sub-double-exponential size cost for removing nondeterminism in constant height npdas, against results proved in [5]. \(\square \)
Summary of simulation costs
For the reader’s convenience, we sum up in Fig. 1 the main relations among the sizes of the different types of formalisms for regular languages considered in this paper. These size relations are briefly commented on below the figure.
Linear and polynomial size blowups. The linear size costs of the transformations dfa \(\rightarrow \) nfa, dfa \(\rightarrow \) hdqa, nfa \(\rightarrow \) hnqa, dfa \(\rightarrow \) hnqa, hdqa \(\rightarrow \) hnqa, and hdpda \(\rightarrow \) hnpda are trivial. Let us focus on the other linear and polynomial costs:

nfa \(\rightarrow \) hdqa: the linear cost comes from Theorem 4.

slp \(\rightarrow \) hnqa: the linear cost comes from Proposition 7.

hdpda \(\rightarrow \) hdqa, hdpda \(\rightarrow \) hnqa, and hnpda \(\rightarrow \) hnqa: the linear costs come from Theorem 6.

slp \(\rightarrow \) hnpda: the linear cost comes from [17].

hnpda \(\rightarrow \) slp: the polynomial cost comes from [17].
Exponential and doubleexponential size blowups.

nfa \(\rightarrow \) dfa: the exponential cost is known from [31], its optimality from [30].

hdqa \(\rightarrow \) dfa, hdqa \(\rightarrow \) nfa, and hnqa \(\rightarrow \) nfa: the exponential costs come from Theorem 1, their optimality from Theorem 2.

hnqa \(\rightarrow \) dfa: the doubleexponential cost comes from Proposition 1, its optimality from Theorem 3.

hdqa \(\rightarrow \) slp: the exponential cost comes from Proposition 5, its optimality from Theorem 7.

slp \(\rightarrow \) hdqa: the optimal exponential cost comes from Proposition 7.

hnqa \(\rightarrow \) hdqa: the optimal exponential cost comes from Theorem 5.

hnqa \(\rightarrow \) slp: the exponential cost comes from Proposition 5, its optimality from Theorem 7.

hdqa \(\rightarrow \) hdpda, hdqa \(\rightarrow \) hnpda, and hnqa \(\rightarrow \) hnpda: the exponential costs come from Proposition 2, their optimality from Proposition 6.

hnpda \(\rightarrow \) hdqa: the optimal exponential cost comes from Proposition 3.

hnqa \(\rightarrow \) hdpda: the optimal doubleexponential cost comes from Proposition 4.

hnpda \(\rightarrow \) hdpda: the optimal doubleexponential cost comes from [5].
Boolean language operations on constant length queue automata
Let us now study the size cost of implementing boolean language operations on constant length nqas and dqas. This is a commonly investigated topic in descriptional complexity for formalisms defining regular languages. For instance, for constant height \(\textsc {npda}\)s and \(\textsc {dpda}\)s, results can be found in [3, 4, 6]. Several results on the size complexity of boolean language operations have also been established for nonclassical language acceptors, such as different variants of quantum finite automata (see, e.g., [7,8,9,10, 12, 13]).
To simplify our constructions, it will sometimes be useful to assume that our queue automata enqueue at most one symbol per move, that is, any transition \((p,\chi ,\omega )\in \delta (q,\sigma ,X)\) satisfies \(|\omega |\le 1\). In this case, the queue length changes by at most 1 at any move, and we say that the queue automaton is in normal form.
The next technical lemma shows that any constant length queue automaton can be converted into normal form while preserving determinism. To suitably evaluate the size cost of this conversion, we adopt a size measure accounting for the “complexity” of the instructions of the queue automaton.
According to [18], the size of an automaton M can be defined as the length of a string describing its transition function. For a queue automaton, if the ith transition of M is \(\delta (q,\sigma ,X)\ni (p,\chi ,Y_{1}\ldots Y_{k})\), with \(k\ge 0\) and \(Y_{s}\in \varGamma \), it can be written down as the string \(t_i=q\,\sigma \, X\,p\,\chi \, Y_{1}\ldots Y_k\). (Here we assume, without loss of generality, that Q, \(\varSigma \), \(\varGamma \) and \(\{D,K\}\) are mutually disjoint sets. This makes the decoding of transitions unambiguous.) So, globally M can be written down as \(t_1\cdots \ t_m\in (Q\cup \varSigma \cup \varGamma \cup \{D,K\})^*\), a string listing all the machine’s instructions one after another. By charging “1” for the constant part in each transition, we define the size of M as
$$|M| \;=\; \sum _{i=1}^{m}\,(1+k_i),$$
where \(k_i\) is the length of the string enqueued by the ith transition.
As the reader may easily verify, we have that \(|M|\in O(|Q|^2\cdot |\varGamma |^2)\).
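A small computation may clarify the measure: each transition is charged 1 for its constant part plus the length of the string it enqueues. The tuple encoding of transitions below is an assumption of this sketch, not the paper's notation.

```python
# Size of a queue automaton as the charged length of its instruction list.
def automaton_size(transitions):
    """transitions: list of (q, sigma, X, p, chi, gamma) tuples,
    where gamma is the enqueued string (possibly empty)."""
    return sum(1 + len(gamma) for (*_, gamma) in transitions)

transitions = [('q0', 'a', 'Z', 'q1', 'D', 'YZ'),   # contributes 1 + 2
               ('q1', 'b', 'Y', 'q0', 'K', '')]     # contributes 1 + 0
print(automaton_size(transitions))  # 4
```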
We are now ready to establish the size cost of converting constant length queue automata into normal form:
Lemma 1
Given a constant length nqa \(A =\langle Q,\varSigma ,\varGamma , \delta , q_0, \vdash , F,h\rangle \), there exists an equivalent constant length nqa \(A' =\langle Q',\varSigma ,\varGamma , \delta ', q_0, \vdash , F,h\rangle \) in normal form, with \(|Q'|\le |A|+1\). Moreover, if A is deterministic, then \(A'\) is deterministic as well.
Proof
Consider an instruction \(\rho =\delta (q,\sigma ,X)\ni (p,\chi ,\gamma )\) of A, with \(\gamma \in \varGamma ^*\). If \(|\gamma |\le 1\), then we let \(\delta '(q,\sigma ,X)\ni (p,\chi ,\gamma )\). Otherwise, assuming \(\gamma =Y_{1}\ldots Y_k\) with \(k>1\), we add \(k-1\) new states \(s_1, \ldots ,s_{k-1}\) and, for \(1\le i\le k-2\) and all \(Z\in \varGamma \), we translate \(\rho \) into the following instructions:
$$\begin{aligned} \delta '(q,\sigma ,X)&\ni (s_1,\chi ,Y_1), \\ \delta '(s_i,\varepsilon ,Z)&\ni (s_{i+1},K,Y_{i+1}), \\ \delta '(s_{k-1},\varepsilon ,Z)&\ni (p,K,Y_k). \end{aligned}$$
Let us count the number of states required by \(A'\). For the instruction \(\rho \), the automaton \(A'\) uses the states \(q,\, s_1, \ldots ,s_{|\gamma |-1}\,, p\). If q is reachable by some nonempty input string, then it is already counted in some other instruction translation. Therefore, simulating the instruction \(\rho \) requires at most \(\max \{|\gamma |,1\}\) additional states. The only state that might not be counted is the initial state \(q_0\). Altogether, we get \(|Q'|\le |A|+1\). \(\square \)
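The chain-splitting of Lemma 1 can be sketched as follows. This is a hedged illustration: the tuple encoding of transitions, the function name, and the use of `None` for "any queue-head symbol" in the intermediate \(\varepsilon \)-moves are assumptions of the sketch.

```python
# Split an instruction enqueueing gamma = Y1...Yk (k > 1) into a chain of
# k moves, each enqueueing one symbol, through fresh states s_1..s_{k-1}.
def split_instruction(q, sigma, X, p, chi, gamma, fresh):
    k = len(gamma)
    if k <= 1:
        return [(q, sigma, X, p, chi, gamma)]      # already in normal form
    s = [fresh() for _ in range(k - 1)]
    rules = [(q, sigma, X, s[0], chi, gamma[0])]   # queue op + first symbol
    for i in range(1, k - 1):                      # eps-moves, keep the head
        rules.append((s[i - 1], '', None, s[i], 'K', gamma[i]))
    rules.append((s[-1], '', None, p, 'K', gamma[-1]))  # last symbol, enter p
    return rules

names = iter('s%d' % i for i in range(10))
for rule in split_instruction('q', 'a', 'Z', 'p', 'D', 'XYW', lambda: next(names)):
    print(rule)
```

For \(\gamma =XYW\) the printed chain uses two fresh states, matching the \(k-1\) new states in the proof.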
Another simplification we need for combining two constant length nqas, in order to implement binary boolean language operations, is to assume that the two automata have the same queue length. The evaluation of the size blowup needed to implement this assumption is the subject of the following:
Lemma 2
Given a constant length nqa \(A=\langle Q,\varSigma ,\varGamma , \delta , q_0, \vdash , F,h\rangle \) in normal form, for any \(h'>h\) we can construct an equivalent constant length nqa \(A'\) in normal form, having queue length \(h'\), queue alphabet cardinality bounded by \(|\varGamma |+1\), and featuring \(\min \{h\cdot |Q|, \ |Q|\cdot (|\varGamma |+1) + h'-h\}\) states. Moreover, if A is deterministic, then \(A'\) is deterministic as well.
Proof
First, we notice that a trivial step-by-step simulation of A by \(A'\) using a queue of length \(h'\) would not be able to reproduce the situation in which A rejects by attempting to enqueue more than h symbols. Thus, one possibility is to build \(A'\) by integrating an integer counter within the finite state control of A, keeping track of the current queue length and immediately rejecting whenever the counter would exceed h. This clearly implies the usage of \(h\cdot |Q|\) states in \(A'\).
Another possibility is that \(A'\) initially inserts \(h'-h\) dummy symbols in the queue, which will never be deleted. This clearly reduces \(A'\)’s queue capacity to h, and requires \(h'-h\) new states to be implemented. Along the computation, \(A'\) maintains the simulated queue content of A in front of the dummy symbols by using a queue rotation subroutine. For this subroutine, we add new states of the form \(r_p^Y\) storing the state p to be entered and the symbol Y to be moved from the head to the tail of the queue (see, e.g., the proofs of Theorems 4 and 6). This implies the usage of \(|Q|+|Q|\cdot |\varGamma |+h'-h\) states in \(A'\) and a queue alphabet which is \(\varGamma \) extended with a dummy symbol. \(\square \)
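The dummy-padding rotation of the second construction can be sketched as follows, with `'_'` as an assumed dummy symbol and the queue as a Python list (head first). The sketch presumes at least one dummy is present, which \(h'>h\) guarantees.

```python
# Enqueue Y at the end of the simulated (non-dummy) part of the queue by a
# full rotation: the non-dummy prefix is rotated to the tail, Y is appended
# behind it, and the dummies are rotated back to the tail.
def enqueue(queue, Y, dummy='_'):
    while queue[0] != dummy:           # rotate the simulated content...
        queue.append(queue.pop(0))
    queue.append(Y)                    # ...append Y right behind it...
    while queue[0] == dummy:           # ...and send the dummies to the tail
        queue.append(queue.pop(0))
    return queue

print(enqueue(['a', 'b', '_', '_'], 'Y'))  # ['a', 'b', 'Y', '_', '_']
```

The dummy symbol is the only marker needed to detect the boundary between the simulated content and the padding, which is why a single extra queue symbol suffices.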
We are now ready to implement boolean language binary operations on constant length nqas which, by considering Lemmas 1 and 2 above, are assumed to be in normal form and working with the same queue length. We begin by considering the intersection:
Theorem 8
Given two constant length nqas \(A_1=\langle Q_1,\varSigma ,\varGamma _1,\) \(\delta _1, c_1, \vdash , F_1,h\rangle \) and \(A_2=\langle Q_2,\varSigma ,\varGamma _2, \delta _2, c_2, \vdash , F_2,h\rangle \) in normal form, we can construct a constant length nqa A accepting \(L(A_1)\cap L(A_2)\), having queue length \(h+1\), queue alphabet cardinality \((|\varGamma _1|+1)\cdot (|\varGamma _2|+1)+1\), and featuring \(|Q_1|\cdot (|\varGamma _1|+1)^2\cdot |Q_2|\cdot (|\varGamma _2|+1)^2\) states.
Proof
The key idea is that A simulates in parallel the behavior of \(A_1\) and \(A_2\), using its queue divided into two tracks. To this aim, the queue symbols of A consist of pairs of symbols from \(\varGamma _1\times \varGamma _2\). Analogously, the states of A come from \(Q_1\times Q_2\). The two components of the elements of these two sets are responsible for simulating the computation steps of \(A_1\) and \(A_2\) on the corresponding tracks. Since the queue operations of \(A_1\) and \(A_2\) are generally not synchronized, a special blank queue symbol \(b\not \in \varGamma _1\cup \varGamma _2\) is inserted at the end of the shorter track to keep the tracks aligned. Every enqueuing operation requires a rotation subroutine in order to keep the \(b\)’s at the end of the shorter track. To this purpose, a special queue symbol \((\sharp ,\sharp )\), with \(\sharp \not \in \varGamma _1\cup \varGamma _2\), is introduced, together with new elements that will be used as state components of A during the rotation subroutine. Namely, for \(i\in \{1,2\}\), we let:

\(R_i=\{r_p^Y:p\in Q_i, Y\in \varGamma _i\}\). An element \(r_p^Y\) stores the state p to be entered and the queue symbol Y to be enqueued at the end of the nonblank part of the corresponding track.^{Footnote 2}

\(S_i=\{s_p^X:p\in Q_i, X\in \varGamma _i\}\). An element \(s_p^X\) stores the state p to be entered and the queue symbol X to be enqueued at the next rotation step during the rotation subroutine. This is required whenever the operations of \(A_1\) and \(A_2\) on the queue are not synchronized.

\(T_i=\{t_p^{Y,X}:p\in Q_i, X,Y\in \varGamma _i\}\). An element \(t_p^{Y,X}\) stores the state p to be entered, the queue symbol Y to be enqueued at the end of the nonblank part of the corresponding track, and the queue symbol X to be enqueued at the next rotation step during the subroutine.
Formally, we define \(A=\langle Q,\varSigma ,\varGamma , \delta , (c_1,c_2), (\vdash ,\vdash ), F,h+1\rangle \) with

\(Q=\{(p_1,p_2):p_i\in Q_i\cup R_i\cup S_i\cup T_i,\ i\in \{1,2\}\}\),

\(\varGamma =(\varGamma _1\cup \{b\})\times (\varGamma _2\cup \{b\})\cup \{(\sharp ,\sharp )\}\), for \(b,\sharp \notin \varGamma _1\cup \varGamma _2\),

\(F=F_1\times F_2\).
The transition function \(\delta \) simulates \(\delta _1\) and \(\delta _2\) on the two tracks, and performs the rotation subroutine whenever required.
Particular attention must be paid to handling \(\varepsilon \)-moves. If one of the two automata \(A_1\) or \(A_2\) needs to perform an \(\varepsilon \)-move, the other automaton must be synchronized by replicating an \(\varepsilon \)-move too, in which neither the state nor the queue is modified. To this aim, we add the following rules to \(\delta _i\), for \(i\in \{1,2\}\), which can be simulated by \(\delta \). For any \(q\in Q_i\) and \(X\in \varGamma _i\), we let:
$$\delta _i(q,\varepsilon ,X)\ni (q,K,\varepsilon ). \qquad \qquad (1)$$
For the sake of brevity, here we formalize the rules in \(\delta \) for the case delete–keep, i.e., simulating the transitions
$$\delta _1(q_1,\sigma ,X_1)\ni (p_1,D,Y_1) \quad \text {and}\quad \delta _2(q_2,\sigma ,X_2)\ni (p_2,K,Y_2),$$
where \(\sigma \in \varSigma \cup \{\varepsilon \}\) and \(Y_1,Y_2\ne \varepsilon \). We define \(\delta \), for any \((Z_1,Z_2)\in \varGamma _1\times \varGamma _2\), as:
where the last rule takes place whenever the two tracks have the same length. Instead, in case of different queue lengths, we have two different sets of rules depending on whether a \(b\) symbol appears in the track dedicated to \(A_1\) or in that for \(A_2\). So, for any \(X\in \varGamma _2\), we let:
while, for any \(X\in \varGamma _1\), we let:
The reader may fill in the definition of \(\delta \) for the cases \(Y_1=\varepsilon \) and/or \(Y_2=\varepsilon \). We simply notice that for \(Y_2=\varepsilon \), instead of having the second component of the states in the form \(t_{p_2}^{Y_2,Z}\), we have \(s_{p_2}^{Z}\), since we do not record \(Y_2\) to be inserted.
The rules in \(\delta \) for simulating moves of type keep–delete are managed symmetrically, while moves keep–keep and delete–delete are replicated simply by parallel simulations (in these two latter cases, involved states are only from the set \((Q_1\cup R_1)\times (Q_2\cup R_2)\)).
It is easy to see that A is a constant length nqa, with queue length at most \(h+1\). Moreover, since each component of the states of A is from the set \(Q_i\cup R_i\cup S_i\cup T_i\), we have the claimed number of states. \(\square \)
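The track alignment at the heart of the construction can be sketched as follows: the contents of the two simulated queues are zipped into pairs of symbols, with the blank `'b'` padding the shorter track, as in the proof. The function name is illustrative only.

```python
# Combine two simulated queue contents into one two-track queue, padding
# the shorter track with the blank symbol so both tracks stay aligned.
from itertools import zip_longest

def two_track(queue1, queue2, blank='b'):
    return list(zip_longest(queue1, queue2, fillvalue=blank))

print(two_track(['X', 'Y', 'Z'], ['U']))
# [('X', 'U'), ('Y', 'b'), ('Z', 'b')]
```

In the actual automaton this alignment cannot be recomputed from scratch, which is why every enqueue triggers the rotation subroutine keeping the blanks at the end of the shorter track.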
We observe that the two constant length \(\textsc {nqa}\)s considered in Theorem 8 for intersection enjoy particular properties, e.g., being in normal form and operating with the same queue length. However, by considering the transformation size costs in Lemmas 1 and 2, the reader may easily verify that the state cost of implementing the intersection of two general constant length \(\textsc {nqa}\)s is at most the square of the state cost pointed out in Theorem 8, multiplied by the length of the shorter queue of the two involved constant length \(\textsc {nqa}\)s.
Let us now focus on intersecting constant length \(\textsc {dqa}\)s. It is not hard to see that the construction provided in Theorem 8 does not preserve determinism. However, such a construction can be adapted to implement the intersection of two realtime constant length \(\textsc {dqa}\)s by a constant length dqa. We recall that a queue automaton is said to be realtime whenever it performs no \(\varepsilon \)-moves (see, e.g., [14]):
Corollary 1
Given two realtime constant length dqas \(A_1=\langle Q_1,\varSigma ,\varGamma _1, \delta _1, c_1, \vdash , F_1,h\rangle \) and \(A_2=\langle Q_2,\varSigma ,\varGamma _2, \delta _2, c_2, \vdash , F_2,h\rangle \) in normal form, we can construct a constant length dqa A accepting \(L(A_1)\cap L(A_2)\), having queue length \(h+1\), queue alphabet cardinality \((|\varGamma _1|+1)\cdot (|\varGamma _2|+1)+1\), and featuring \(|Q_1|\cdot (|\varGamma _1|+1)^2\cdot |Q_2|\cdot (|\varGamma _2|+1)^2\) states.
Proof
We can reuse the construction provided in the proof of Theorem 8, simply observing that there the only rules introducing nondeterminism are the ones displayed in Eq. (1) for synchronizing \(\varepsilon \)-moves. Since \(A_1\) and \(A_2\) are realtime, here we do not need these additional rules any more. \(\square \)
For general realtime constant length \(\textsc {dqa}\)s, again by Lemmas 1 and 2, the reader may easily verify that the state cost of implementing intersection is at most the square of the state cost pointed out in Corollary 1, multiplied by the length of the shorter queue of the two involved constant length \(\textsc {dqa}\)s.
Instead, in the case of two non-realtime constant length \(\textsc {dqa}\)s, the idea is to turn one of the two devices into an equivalent \(\textsc {dfa}\) according to Theorem 1, remove \(\varepsilon \)-moves from the \(\textsc {dfa}\), and finally embed the dfa into the finite control of the other \(\textsc {dqa}\). The resulting constant length \(\textsc {dqa}\) implements the intersection with a number of states bounded by \(|Q_1|\cdot |Q_2|\cdot |\varGamma _2^{\le h_2}|\), where \(Q_1,Q_2,\varGamma _2,h_2\) have the usual meaning.
Now, we turn to implementing the union. For this operation, the construction is simpler since we can exploit nondeterminism:
Theorem 9
Given two constant length nqas \(A_1=\langle Q_1,\varSigma ,\varGamma _1,\delta _1, c_1, \vdash , F_1,h\rangle \) and \(A_2=\langle Q_2,\varSigma ,\varGamma _2, \delta _2, c_2, \vdash , F_2,h\rangle \), we can construct a constant length nqa A recognizing \(L(A_1)\cup L(A_2)\), with queue length h, queue alphabet \(\varGamma _1\cup \varGamma _2\), and \(|Q_1|+|Q_2|+1\) states.
Proof
Without loss of generality, we can assume that \(Q_1\) and \(Q_2\) are disjoint sets. The states of A are from the set \(Q_1\cup Q_2\), plus a new initial state c, the queue alphabet is \(\varGamma _1\cup \varGamma _2\), and the set of final states is \(F_1\cup F_2\). As a first step, A performs a nondeterministic \(\varepsilon \)move from c leading to enter either \(c_1\) or \(c_2\). Then, A simulates \(\delta _1\) on states from \(Q_1\), and \(\delta _2\) on states from \(Q_2\). It is not hard to see that A accepts whenever either \(A_1\) or \(A_2\) accepts. \(\square \)
In the case of two constant length \(\textsc {nqa}\)s with queues of different lengths, by Lemma 2 one may easily obtain that the state cost for their union is at most \(|Q_1|+h_2\cdot |Q_2|+1\), where \(Q_1,Q_2,h_2\) have the usual meaning.
For implementing the union in the realtime deterministic case, we can use the same construction as for intersection, addressed in the proof of Corollary 1, with small modifications. Such modifications are basically due to handling situations in which one of the two queue automata rejects before consuming the whole input. To this aim, we present a technical lemma:
Lemma 3
Given a realtime constant length dqa \(A=\langle Q,\varSigma ,\varGamma ,\delta , c, \vdash , F,h\rangle \), we can construct an equivalent realtime constant length dqa \(A'=\langle Q',\varSigma ,\varGamma ,\) \(\delta ', c, \vdash , F',h\rangle \) with \(|Q'|\le h\cdot |Q|+2\), which always consumes the whole input before halting. Moreover, if A is in normal form, then \(A'\) is in normal form as well.
Proof
Since A is realtime, there cannot be infinite loops due to \(\varepsilon \)-moves. The only situations where A might halt before the end of the input string are the following:

(i)
queue underflow and overflow,

(ii)
undefined \(\delta \) for some configurations.
To fix situation (i), we store the current length of the queue in the finite state control of \(A'\), i.e., the states of \(A'\) have the form \((q, \ell )\in Q\times \{1,\ldots , h\}\). Underflow and overflow situations can be detected from the value of \(\ell \), and can be managed by redirecting the corresponding computation to a new trap state rej. A particular situation might arise, where the queue length of A is zero, the input has been completely scanned, and A enters an accepting state. To handle this situation, we equip \(A'\) with a new accepting state acc and, for \(\delta (q,a, Z)=(p,D,\varepsilon )\) with \(p\in F\), we let
for any \(\sigma \in \varSigma \) and \(X\in \varGamma \). When entering the state rej, we consume the whole input by the rules \(\delta '(rej,\sigma ,X)=(rej,K,\varepsilon )\).
To fix situation (ii), we complete the transition function by adding new rules \(\delta '(q,\sigma ,X)=(rej,K,\varepsilon )\), whenever \(\delta (q,\sigma ,X)\) is not defined.
Therefore, \(Q'=Q\times \{1,\ldots , h\}\cup \{rej,acc\}\), \(F'=F\times \{1,\ldots , h\}\cup \{acc\}\), and \(\delta '\) is a total function that simulates \(\delta \) on the states in \(Q'\), updating the counter and preventing queue underflow and overflow as addressed above. \(\square \)
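The completion step of Lemma 3 can be sketched as follows. This is a hedged illustration: the dict encoding of \(\delta \) is an assumption, and the special empty-queue acceptance case (handled via acc in the lemma) is simplified away by trapping all illegal queue lengths in rej.

```python
# Pair each state with the current queue length, trap under-/overflows and
# undefined entries in 'rej', and make the transition function total so the
# automaton always consumes the whole input.
def complete(delta, states, sigma, gamma, h):
    """delta: dict (q, a, X) -> (p, op, enq), with op in {'D', 'K'}."""
    new_delta = {}
    for q in states:
        for ell in range(1, h + 1):
            for a in sigma:
                for X in gamma:
                    key = ((q, ell), a, X)
                    if (q, a, X) in delta:
                        p, op, enq = delta[(q, a, X)]
                        ell2 = ell - (op == 'D') + len(enq)
                        if 0 < ell2 <= h:          # queue length stays legal
                            new_delta[key] = ((p, ell2), op, enq)
                        else:                      # under-/overflow: trap
                            new_delta[key] = ('rej', 'K', '')
                    else:                          # undefined entry: trap
                        new_delta[key] = ('rej', 'K', '')
    for a in sigma:                                # rej consumes the input
        for X in gamma:
            new_delta[('rej', a, X)] = ('rej', 'K', '')
    return new_delta

delta = {('q', 'a', 'X'): ('q', 'K', 'X')}         # each move grows the queue
total = complete(delta, ['q'], ['a'], ['X'], h=2)
print(total[(('q', 2), 'a', 'X')])  # ('rej', 'K', '') -- overflow trapped
```

The counted states reproduce the \(h\cdot |Q|\) factor of the lemma, with rej and acc accounting for the additional two states.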
We are now ready to show the size cost of the union operation in the real-time deterministic case:
Corollary 2
Given two real-time constant length dqas \(A_1=\langle Q_1,\varSigma ,\varGamma _1,\delta _1, c_1, \vdash , F_1,h\rangle \) and \(A_2=\langle Q_2,\varSigma ,\varGamma _2, \delta _2, c_2, \vdash , F_2,h\rangle \) in normal form, we can construct a constant length dqa A accepting \(L(A_1)\cup L(A_2)\), having queue length \(h+1\), queue alphabet cardinality \((|\varGamma _1|+1)\cdot (|\varGamma _2|+1)+1\), and \((h\cdot |Q_1|+2)\cdot (|\varGamma _1|+1)^2\cdot (h\cdot |Q_2|+2)\cdot (|\varGamma _2|+1)^2\) states.
Proof
The construction of A works similarly to that in the proof of Corollary 1. However, here we have to handle cases in which one of the two simulated automata \(A_1\) or \(A_2\) halts before consuming the whole input while the other accepts. To this aim, we first transform \(A_1\) and \(A_2\) into \(A'_1\) and \(A'_2\) according to Lemma 3. Then, to get A, we compose \(A'_1\) and \(A'_2\) as in Corollary 1, except that the set of final states is now defined to be \((F'_1\times Q'_2)\cup (Q'_1\times F'_2)\). \(\square \)
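The product idea behind this proof can be illustrated on complete DFAs, which here stand in for the completed queue automata produced by Lemma 3; the function name and encodings are our own assumptions, not the authors' construction.

```python
# Sketch of the product construction with union-style final states.

def product_union(d1, f1, s1, d2, f2, s2, word):
    """Run both complete automata in lockstep from their start states; with
    final states (F1 x Q2) U (Q1 x F2), the product accepts the union."""
    q1, q2 = s1, s2
    for a in word:
        q1, q2 = d1[(q1, a)], d2[(q2, a)]
    return q1 in f1 or q2 in f2  # membership in (F1 x Q2) U (Q1 x F2)

# Example: d1 accepts words over {a, b} with an even number of a's,
# d2 accepts words ending in b; the product then accepts their union.
d1 = {(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1}
d2 = {("n", "a"): "n", ("n", "b"): "y", ("y", "a"): "n", ("y", "b"): "y"}
```

The only difference from the intersection product is the acceptance condition, which is why completing both automata first (so that neither halts early) is essential for correctness.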
For general real-time constant length \(\textsc {dqa}\)s, the reader may easily verify that the state cost of implementing the union turns out to be the same as the one discussed above for intersecting general real-time constant length \(\textsc {dqa}\)s, multiplied by the factor \(h_1\cdot h_2\). This latter factor follows from the construction in Lemma 3, aimed at accepting or rejecting at the end of the input only.
Instead, in the case of non-real-time devices, we use the same construction addressed above for implementing the intersection of non-real-time constant length \(\textsc {dqa}\)s, thus yielding the same state complexity.
Finally, we consider the size cost of implementing the complement operation:
Theorem 10
Given a constant length nqa \(A=\langle Q,\varSigma ,\varGamma ,\delta , q_0, \vdash , F,h\rangle \), we can construct a constant length dqa \(A'\) accepting \(L(A)^c\), with queue length \(2\cdot (|Q|\cdot |\varGamma ^{\le h}|)+4\), queue alphabet cardinality \(|Q|\cdot |\varGamma ^{\le h}|+3\), and \(2\cdot (|Q|\cdot |\varGamma ^{\le h}|+1)\cdot |\varSigma |+|\varSigma | +3\) states.
Proof
First, we transform A into an equivalent nfa, according to Theorem 1. The transition function of the obtained nfa might be partial and might contain \(\varepsilon \)-moves. Therefore, we complete this function by adding transitions to a new trap state in which the remaining part of the input is consumed, and remove \(\varepsilon \)-moves by standard tools (see, e.g., [18, 20]) without increasing the number of states.
Then, we use Theorem 4 to convert the obtained nfa into an equivalent constant length dqa. Considering the above-mentioned properties of the nfa to be converted and how the construction of Theorem 4 works, it is not hard to see that the resulting constant length dqa always sweeps the whole input, accepting or rejecting at the end only. So, by swapping accepting and rejecting states, we obtain a constant length dqa for \(L(A)^c\) with the claimed features. \(\square \)
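The determinize-then-swap route of this proof can be sketched on an NFA abstraction, running the subset construction on the fly and negating acceptance; the encoding and names are our own assumptions.

```python
# Sketch of complementation via the subset construction with swapped finals.

def accepts_complement(delta, starts, finals, word):
    """delta maps (q, a) -> set of successor states (no epsilon-moves, as in
    the completed nfa of the proof).  Returns True iff the NFA rejects word,
    i.e., iff the subset automaton with swapped final states accepts it."""
    current = frozenset(starts)
    for a in word:
        current = frozenset(p for q in current for p in delta.get((q, a), ()))
    return not (current & finals)  # swapped accepting/rejecting states

# Example: an NFA accepting words over {a, b} that contain the factor "ab";
# the complement therefore accepts exactly the words avoiding "ab".
nfa = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2},
       (2, "a"): {2}, (2, "b"): {2}}
```

Swapping acceptance is sound here precisely because the subset automaton is deterministic and scans the whole input, which is the property the proof establishes for the constructed dqa.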
For complementing a real-time constant length \(\textsc {dqa}\), it is enough to use Lemma 3 to get an equivalent automaton consuming the whole input, and then swap the accepting and rejecting states of this automaton. Thus, we have
Corollary 3
Given a real-time constant length dqa \(A=\langle Q,\varSigma ,\varGamma ,\delta , q_0, \vdash , F,h\rangle \), we can construct a real-time constant length dqa for \(L(A)^c\), having queue length h, queue alphabet \(\varGamma \), and featuring \(h\cdot |Q| + 2\) states.
In the case of non-real-time constant length \(\textsc {dqa}\)s, the state cost for the complement remains the same. In fact, before applying the construction used for proving Corollary 3, it suffices to precompute and remove infinite loops due to \(\varepsilon \)-moves.
Conclusions
In this paper, we have considered the notion of a constant length queue automaton—i.e., a traditional queue automaton with a built-in constant limit on the length of its queue—as a formalism for representing regular languages.
We have shown that the descriptional power of constant length queue automata is higher than that of several formalisms for defining regular languages, by providing optimal exponential and double-exponential size gaps.
In particular, we have proved that constant height pushdown automata can be simulated by constant length queue automata paying only a linear size increase, and that removing nondeterminism in constant length queue automata requires an optimal exponential size blowup, against the optimal double-exponential cost for determinizing constant height pushdown automata. This proves the descriptional advantages of using a queue instead of a pushdown in the realm of constant memory machines.
Finally, we have investigated the size cost of implementing Boolean language operations on deterministic and nondeterministic constant length queue automata.
Among possible future lines of research, one may investigate restricted variants of constant memory automata, for instance devices working on unary, i.e., single-letter, input alphabets (see, e.g., [11, 15, 26, 27]), or input-driven devices [25]. We would also like to emphasize the interest in two-way devices [28].
Concerning the investigation of the size cost of language operations on constant length queue automata, it would be interesting to consider further typical operations on regular languages, such as Kleene star, reversal, quotient, shuffle, homomorphisms and inverse homomorphisms, etc. Moreover, one could extend the investigation of language operations to restricted models, such as real-time constant length \(\textsc {nqa}\)s. Finally, for the language operations considered in this paper, size optimality should be proved, or improved constructions should be designed.
Notes
 1.
Actually, even \(o(\log n)\) or \(o(\log \log n)\) space bounds (see, e.g., [29]).
 2.
Rotating one track symbol from head to tail cannot be done in a single move, since two not-necessarily synchronized tracks must be managed. This is the reason why the sets \(R_i\) are provided.
References
 1.
Aho, A.V., Hopcroft, J.E., Ullman, J.D.: The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading (1974)
 2.
Allevi, E., Cherubini, A., Crespi Reghizzi, S.: Breadth-first phrase-structure grammars and queue automata. In: Proceedings of the Mathematical Foundations of Computer Science, Lecture Notes in Computer Science, vol. 324, pp. 162–170. Springer (1988)
 3.
Bednárová, Z., Geffert, V., Mereghetti, C., Palano, B.: The size-cost of Boolean operations on constant height deterministic pushdown automata. Theor. Comput. Sci. 449, 23–36 (2012)
 4.
Bednárová, Z., Geffert, V., Mereghetti, C., Palano, B.: Boolean language operations on nondeterministic automata with a pushdown of constant height. In: Proceedings of the Computer Science Symposium in Russia, Lecture Notes in Computer Science, vol. 7913, pp. 100–111. Springer (2013)
 5.
Bednárová, Z., Geffert, V., Mereghetti, C., Palano, B.: Removing nondeterminism in constant height pushdown automata. Inf. Comput. 237, 257–267 (2014)
 6.
Bednárová, Z., Geffert, V., Mereghetti, C., Palano, B.: Boolean language operations on nondeterministic automata with a pushdown of constant height. J. Comput. Syst. Sci. 90, 99–114 (2017)
 7.
Bertoni, A., Mereghetti, C., Palano, B.: Golomb rulers and difference sets for succinct quantum automata. Int. J. Found. Comput. Sci. 14, 871–888 (2003)
 8.
Bertoni, A., Mereghetti, C., Palano, B.: Trace monoids with idempotent generators and measureonly quantum automata. Nat. Comput. 9, 383–395 (2010)
 9.
Bianchi, M.P., Mereghetti, C., Palano, B.: Complexity of Promise Problems on Classical and Quantum Automata. In: Computing with New Resources—Essays Dedicated to J. Gruska on the Occasion of His 80th Birthday, pp. 161–175. Springer (2014)
 10.
Bianchi, M.P., Mereghetti, C., Palano, B.: On the power of one-way finite automata with quantum and classical states. In: Proceedings of the Implementation and Application of Automata, Lecture Notes in Computer Science, vol. 8587, pp. 84–97. Springer (2014)
 11.
Bianchi, M.P., Mereghetti, C., Palano, B., Pighizzini, G.: On the size of unary probabilistic and nondeterministic automata. Fund. Inf. 112, 119–135 (2011)
 12.
Bianchi, M.P., Mereghetti, C., Palano, B.: Quantum finite automata: advances on Bertoni’s ideas. Theor. Comput. Sci. 664, 39–53 (2017)
 13.
Bianchi, M.P., Palano, B.: Behaviours of unary quantum automata. Fund. Inf. 104, 1–15 (2010)
 14.
Cherubini, A., Citrini, C., Crespi Reghizzi, S., Mandrioli, D.: QRT FIFO automata, breadthfirst grammars and their relations. Theor. Comput. Sci. 85, 171–203 (1991)
 15.
Chrobak, M.: Finite automata and unary languages. Theor. Comput. Sci. 47, 149–158 (1986); Corrigendum, ibid 302, 497–498 (2003)
 16.
Gao, Y., Moreira, N., Reis, R., Yu, S.: A survey on operational state complexity. J. Autom. Lang. Comb. 21, 251–310 (2017)
 17.
Geffert, V., Mereghetti, C., Palano, B.: More concise representation of regular languages by automata and regular expressions. Inf. Comput. 208, 385–394 (2010)
 18.
Harrison, M.A.: Introduction to Formal Language Theory. Addison-Wesley, Reading (1978)
 19.
Holzer, M., Kutrib, M.: Descriptional complexity—an introductory survey. In: Scientific Applications of Language Methods, pp. 1–58. Imperial College Press (2010)
 20.
Hopcroft, J.E., Motwani, R., Ullman, J.D.: Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, Reading (2001)
 21.
Jakobi, S., Meckel, K., Mereghetti, C., Palano, B.: Queue automata of constant length. In: Proceedings of the Descriptional Complexity of Formal Systems, Lecture Notes in Computer Science, vol. 8031, pp. 124–135. Springer (2013)
 22.
Kutrib, M., Malcher, A., Mereghetti, C., Palano, B.: Descriptional complexity of iterated uniform finite-state transducers. In: Proceedings of the Descriptional Complexity of Formal Systems, Lecture Notes in Computer Science, vol. 11612, pp. 223–234. Springer (2019)
 23.
Kutrib, M., Malcher, A., Mereghetti, C., Palano, B.: Iterated uniform finite-state transducers: descriptional complexity of nondeterminism and two-way motion. In: Proceedings of the Descriptional Complexity of Formal Systems, Lecture Notes in Computer Science, vol. 12442, pp. 117–129. Springer (2020)
 24.
Kutrib, M., Malcher, A., Mereghetti, C., Palano, B.: Deterministic and nondeterministic iterated uniform finite-state transducers: computational and descriptional power. In: Proceedings of the Computability in Europe 2020, Lecture Notes in Computer Science, vol. 12098, pp. 87–99. Springer (2020)
 25.
Kutrib, M., Malcher, A., Mereghetti, C., Palano, B., Wendlandt, M.: Deterministic input-driven queue automata: finite turns, decidability, and closure properties. Theor. Comput. Sci. 578, 58–71 (2015)
 26.
Mereghetti, C., Pighizzini, G.: Optimal simulations between unary automata. SIAM J. Comput. 30, 1976–1992 (2001)
 27.
Mereghetti, C., Palano, B.: Quantum automata for some multiperiodic languages. Theor. Comput. Sci. 387, 177–186 (2007)
 28.
Malcher, A., Mereghetti, C., Palano, B.: Descriptional complexity of twoway pushdown automata with restricted head reversals. Theor. Comput. Sci. 449, 119–133 (2012)
 29.
Mereghetti, C.: Testing the descriptional power of small Turing machines on nonregular language acceptance. Int. J. Found. Comput. Sci. 19, 827–843 (2008)
 30.
Meyer, A.R., Fischer, M.J.: Economy of description by automata, grammars, and formal systems. In: Proceedings of the 12th IEEE Symposium on Switching and Automata Theory, pp. 188–191 (1971)
 31.
Rabin, M., Scott, D.: Finite automata and their decision problems. IBM J. Res. Dev. 3, 114–125 (1959)
Acknowledgements
The authors wish to thank the anonymous referees for valuable comments and remarks.
Funding
Open access funding provided by Università degli Studi di Milano within the CRUI-CARE Agreement.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A very preliminary version of this work was presented at the 15th International Conference on Descriptional Complexity of Formal Systems (DCFS), London, Ontario, Canada, July 22–25, 2013, and it is published in [21].
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Jakobi, S., Meckel, K., Mereghetti, C. et al. The descriptional power of queue automata of constant length. Acta Informatica 58, 335–356 (2021). https://doi.org/10.1007/s00236-021-00398-7
Mathematics Subject Classification
 68Q10
 68Q45