Büchi Good-for-Games Automata Are Efficiently Recognizable

Good-for-Games (GFG) automata offer a compromise between deterministic and nondeterministic automata. They can resolve nondeterministic choices in a step-by-step fashion, without needing any information about the remaining suffix of the word. These automata can be used to solve games with ω-regular conditions, and in particular were introduced as a tool to solve Church's synthesis problem. We focus here on the problem of recognizing Büchi GFG automata, which we call the Büchi GFGness problem: given a nondeterministic Büchi automaton, is it GFG? We show that this problem can be decided in P, and more precisely in O(n^4 m^2 |Σ|^2), where n is the number of states, m is the number of transitions, and |Σ| is the size of the alphabet. We conjecture that a very similar algorithm solves the problem in polynomial time for any fixed parity acceptance condition.


Introduction
The fundamental difference between determinism and nondeterminism is one of the deep questions asked by theoretical computer science. The P versus NP problem is an emblematic example of the fact that many basic questions about the power of nondeterminism are still not well understood. In this work, we investigate an automaton model that offers a middle ground between determinism and nondeterminism, while retaining some advantages of both paradigms in the framework of automata theory. Although this model was introduced as a tool to solve a specific problem, Church's synthesis, we believe that it is a natural stepping stone towards a better understanding of the power of nondeterminism in automata theory.

We will start by mentioning the historical motivation for the model of Good-for-Games (shortly GFG) automata. One of the classical problems of automata theory is synthesis: given a specification, decide whether there exists a system that fulfils it and, if there is, automatically construct one. The problem was posed by Church [5] and solved positively by Büchi and Landweber [3] for the case of ω-regular specifications. Henzinger and Piterman [10] have proposed the model of GFG automata, which can be seen as a weakening of determinism while still preserving soundness and completeness when solving the synthesis problem. An automaton is GFG if there exists a strategy that resolves the nondeterministic choices by taking into account only the prefix of the input ω-word read so far. The strategy is supposed to construct an accepting run of the automaton whenever an ω-word from the language is given. The notion of GFG automata was independently discovered in [6] under the name history-determinism, in the more general framework of regular cost functions. It turns out that deterministic cost automata have strictly smaller expressive power than nondeterministic ones, and therefore history-determinism is used whenever a sequential model is needed.
We emphasize the fact that although the model of GFG automata requires the existence of a strategy resolving the nondeterminism, this strategy is not used in algorithms but only in proofs. Therefore, it is not a part of the size of the input in computations based on GFG automata. The model of GFG automata offers a compromise between determinism and nondeterminism: in particular, like deterministic automata, it preserves soundness of composition with games and trees [10, 1], while, like nondeterministic automata, it can exhibit exponential succinctness compared to deterministic automata [15]. Properties of GFG automata are currently being actively investigated, and most of what we know about them has been uncovered only very recently, with several important questions still open. A brief history of recent advances in the understanding of GFG automata is given in Section 1.1 "Related works".
A major challenge in the understanding of GFG automata is to be able to decide efficiently whether an input automaton is GFG. If C is an acceptance condition, for instance C ∈ {Büchi, coBüchi, parity}, we call the C GFGness problem the following decision problem:
Input: A nondeterministic automaton A with acceptance condition C.
Output: Is A a GFG automaton?
This problem has been posed in [10], where an EXPTIME algorithm is given for the general case of parity automata. The algorithm makes use of a deterministic automaton for L(A), which can be built in exponential time. The problem is further studied in [15], where the following results are obtained:
• The coBüchi GFGness problem is in P.
• The Büchi GFGness problem is in NP.
• In general, the C GFGness problem is at least as hard as solving games with winning condition C . This is tight for automata accepting all infinite words.
The precise complexity of the GFGness problem for Büchi and all higher parity conditions remained open. In particular, even for parity conditions using only 3 ranks, the only known upper bound is EXPTIME. In [14], an incremental algorithm to build GFG automata is given. This algorithm uses as a subroutine an algorithm deciding the GFGness problem. This gives an additional motivation to pinpoint the complexity of the GFGness problem, as it is a bottleneck of the algorithm from [14].
In this work, we tackle the Büchi case, and we show that the Büchi GFGness problem is in P. More precisely, we show that for a Büchi automaton A on alphabet Σ with n states and m transitions, we can decide whether A is GFG in O(n^4 m^2 |Σ|^2). We do so by reducing the GFGness problem to a game where 3 tokens move in A. The correctness of the reduction is shown via an intermediate construction using doubly exponentially many tokens.

Related Works
In the survey [7], two important results about GFG automata over finite words are mentioned: first, that every GFG automaton over finite words contains an equivalent deterministic subautomaton; second, that the GFGness problem is in P for automata on finite words. Additionally, a conjecture stating that every parity GFG automaton over ω-words contains an equivalent deterministic subautomaton is posed. In [1], examples were given of Büchi and coBüchi GFG automata that do not contain any equivalent deterministic subautomaton. Moreover, a link between GFG and tree automata was established: an automaton for a language L of ω-words is GFG if and only if its infinite tree version accepts the language of trees that have all their branches in L. Experimental evaluation of GFG automata and their applications to stochastic problems were discussed in [13]. In [15], it is shown that for coBüchi automata (and all higher parity conditions), GFG automata can be exponentially more succinct than deterministic ones. For Büchi automata, this gap is not exponential, and only a quadratic upper bound is known. Typeness properties of GFG automata are established in [2], as well as complexities for changing between several acceptance conditions. In [14], the model of GFG automata is generalized to the notion of width of a nondeterministic automaton, GFG automata corresponding to width 1, and an incremental algorithm is given to build GFG automata from nondeterministic automata. The games with tokens we define in the present work are very similar in spirit to the k-simulation games introduced in [9]. However, our games cannot be seen directly as particular instances of k-simulation games, as the specific dynamics of the games are different.

Definitions
We will use Σ to denote a finite alphabet. The empty word is denoted ε. If i ≤ j, the set [i, j] denotes {i, i + 1, ..., j}. The cardinal of a set X is denoted |X|. If u ∈ Σ* is a word and L ⊆ Σ* is a language, the left quotient of L by u is u⁻¹L = {v | uv ∈ L}.

Automata
A nondeterministic automaton A is a tuple (Q, Σ, q_0, ∆, F) where Q is the set of states, Σ is a finite alphabet, q_0 ∈ Q is the initial state, ∆ : Q × Σ → 2^Q is the transition function, and F ⊆ Q is the set of accepting states. We will often write p -a-> q or p -a-> q ∈ ∆ to signify that q ∈ ∆(p, a), i.e. there is a transition from p to q labelled by a. If for all (p, a) in Q × Σ, ∆(p, a) ≠ ∅, we say that the automaton is complete. In the following, we will assume that all automata are complete, by adding a rejecting sink state ⊥ if needed. If for all (p, a) ∈ Q × Σ, |∆(p, a)| = 1, we say that A is deterministic. If u = a_1 a_2 ... is an infinite word of Σ^ω, a run of A on u is a sequence q_0 q_1 q_2 ... such that for all i > 0, we have q_i ∈ ∆(q_{i−1}, a_i). A run is said to be Büchi accepting if it contains infinitely many accepting states, and coBüchi accepting if it contains finitely many non-accepting states. Automata on infinite words will be called Büchi and coBüchi automata, to specify their acceptance condition. Finally, we define the parity condition on infinite runs: each state q has a rank rk(q) ∈ ℕ, and an infinite run is accepting if the highest rank appearing infinitely often is even. An automaton on infinite words using this acceptance condition is a parity automaton. The language of an automaton A, denoted L(A), is the set of words on which the automaton A has an accepting run. If p is a state of A, the language accepted by A with p as initial state will be denoted L(p). A language is called ω-regular if it is recognized by a nondeterministic Büchi automaton, or
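To make the definitions concrete, here is a minimal sketch (not from the paper; the class name and encoding are ours) of a nondeterministic Büchi automaton, with acceptance checked on ultimately periodic words u·v^ω (v non-empty) by searching for an accepting state that lies on a reachable cycle.

```python
class NBA:
    """Nondeterministic Büchi automaton: initial state, transition dict
    delta[(state, letter)] -> iterable of successors, accepting set."""
    def __init__(self, init, delta, accepting):
        self.init, self.delta, self.accepting = init, delta, accepting

    def accepts_lasso(self, u, v):
        """Is some run on u.v^omega Büchi accepting?  Runs on the periodic
        part are unfolded over nodes (state, position in v); acceptance
        holds iff an accepting node is reachable and lies on a cycle."""
        cur = {self.init}
        for a in u:                      # read the finite prefix u
            cur = {q for p in cur for q in self.delta.get((p, a), ())}
        n = len(v)
        def succ(p, i):
            return {(q, (i + 1) % n) for q in self.delta.get((p, v[i]), ())}
        def reach(srcs):
            seen, todo = set(srcs), list(srcs)
            while todo:
                node = todo.pop()
                for nd in succ(*node):
                    if nd not in seen:
                        seen.add(nd)
                        todo.append(nd)
            return seen
        for node in reach({(q, 0) for q in cur}):
            if node[0] in self.accepting and node in reach(succ(*node)):
                return True
        return False
```

For instance, a toy automaton for (a + b)* a^ω, with a non-accepting state p, an accepting state q and a sink, accepts a b a^ω but rejects (a b)^ω.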

equivalently by a deterministic parity automaton. Two automata are said equivalent if they recognise the same language.
An automaton A is Good-for-Games (GFG) if there exists a function σ : Σ* → Q (called a GFG strategy) that resolves the nondeterminism of A depending only on the prefix of the input word read so far: over every word u = a_1 a_2 a_3 ... (finite or infinite depending on the type of automaton considered), the sequence of states σ(ε) σ(a_1) σ(a_1 a_2) σ(a_1 a_2 a_3) ... is a run of A on u, and it is accepting whenever u ∈ L(A). For instance, every deterministic automaton is GFG. See [1] for more introductory material and examples on GFG automata.

Games
A game G = (V_E, V_A, v_I, E, W) of infinite duration between two players, Eve (Player E) and Adam (Player A), consists of: a finite set of positions V that is a disjoint union of V_E and V_A; an initial position v_I ∈ V; a set of edges E ⊆ V × V; and a winning condition W ⊆ V^ω. A play is an infinite sequence of positions π ∈ V^ω starting in v_I and following edges of E; it is winning for Eve if π ∈ W, and winning for Adam otherwise. A strategy for a player P ∈ {E, A} is a function σ_P : V* × V_P → V describing which edge should be played given the history of the play u ∈ V* and the current position v ∈ V_P. A strategy has to obey the edge relation, i.e. there has to be an edge in E from v to σ_P(u, v). A play π is consistent with a strategy σ_P of a player P ∈ {E, A} if for every n such that π(n) ∈ V_P we have π(n + 1) = σ_P(π(0) ... π(n − 1), π(n)). A strategy for Eve (resp. Adam) is positional if it does not use the history of the play, i.e. it is a function σ : V_E → V (resp. σ : V_A → V). We say that a strategy σ_P of a player P is winning if every play consistent with σ_P is winning for player P. In this case, we say that P wins the game G.
A game is positionally determined if one of the players has a positional winning strategy in the game. A game is half-positionally determined if whenever Eve wins, she has a positional winning strategy.
A finite-memory strategy for Eve is a tuple (M, m_0, σ_M, upd) where
• M is a finite set called the memory, and m_0 ∈ M is the initial memory state;
• σ_M : M × V_E → V is the next-move function;
• upd : M × V → M is the update function.
Such a tuple induces a strategy σ_E : V* × V_E → V for Eve in the original sense as follows. First, the function upd is extended to upd* : V* → M by upd*(ε) = m_0 and upd*(uv) = upd(upd*(u), v); then we set σ_E(u, v) = σ_M(upd*(u), v). A game is finite-memory determined if one of the players has a finite-memory winning strategy.
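As a small illustration (function names are ours), the induced strategy can be computed by folding the update function over the history:

```python
def induced_strategy(m0, sigma_M, upd):
    """Turn a finite-memory strategy (M, m0, sigma_M, upd) into a strategy
    sigma_E : V* x V_E -> V by replaying the history through upd."""
    def sigma_E(history, v):
        m = m0
        for position in history:     # compute m = upd*(history)
            m = upd(m, position)
        return sigma_M(m, v)
    return sigma_E
```

For example, a one-bit memory that flips on every position yields a strategy alternating between two behaviours, independently of the positions seen.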

Remark 1.
In the rest of the paper, for readability purposes, we will define games in a slightly more informal manner. Namely we will allow sequences of moves of Eve and Adam going through implicit states in the game. Note that it is always possible to come back to the formal version defined in this section. We will also use examples of automata where the acceptance condition is defined on transitions rather than on states, for clarity purposes.

Winning Conditions
A parity game is a game where W is a parity condition, i.e. where every position v has rank rk(v) ∈ N , and the winning set W consists of infinite words for which the maximal rank appearing infinitely often is even. The degree of a parity game is the number of ranks used in its parity condition.
We will use the following results on parity games:

Theorem 2 ([12, 4]). Parity games are positionally determined, and can be solved in quasi-polynomial time.
Theorem 3 ([16, 11, 17]). Parity games of fixed degree can be solved in polynomial time. In particular, parity games of degree 3 with n positions and m edges can be solved in O(n · m) .
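For illustration, here is a compact parity-game solver in the classical recursive (Zielonka-style) fashion. It is not the O(n · m) algorithm of Theorem 3, only a self-contained sketch, and it assumes every position has at least one outgoing edge (as is the case for games built from complete automata).

```python
def attractor(V, E, owner, target, player):
    """Positions in V from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in V - attr:
            succs = [w for w in E.get(v, ()) if w in V]
            mine = owner[v] == player
            if (mine and any(w in attr for w in succs)) or \
               (not mine and all(w in attr for w in succs)):
                attr.add(v)
                changed = True
    return attr

def zielonka(V, E, owner, prio):
    """Winning regions (W_Eve, W_Adam); Eve (player 0) wins a play iff the
    maximal priority seen infinitely often is even."""
    if not V:
        return set(), set()
    p = max(prio[v] for v in V)
    player = p % 2
    A = attractor(V, E, owner, {v for v in V if prio[v] == p}, player)
    W0, W1 = zielonka(V - A, E, owner, prio)
    Wopp = W1 if player == 0 else W0
    if not Wopp:
        return (set(V), set()) if player == 0 else (set(), set(V))
    B = attractor(V, E, owner, Wopp, 1 - player)
    W0, W1 = zielonka(V - B, E, owner, prio)
    return (W0, W1 | B) if player == 0 else (W0 | B, W1)
```

On the two-position game where Eve (at priority 1) can visit an Adam-owned position of priority 2 that returns to her, Eve wins everywhere; adding an Adam-owned escape with a priority-1 self-loop flips the whole game to Adam.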
If the winning set W is an ω -regular language of V ω , we say that the game is ω -regular.
Solving an ω-regular game G = (V, E) can be costly: the classical procedure from [3] is to build a deterministic automaton D recognizing W, and to build a new game G • D of size |V| × |D| with winning condition inherited from the acceptance condition of D. Thus, the idea is to simplify the winning condition of the game at the expense of a blowup in the number of positions. Such games are moreover finite-memory determined:

Theorem 4 ([3]). ω-regular games are finite-memory determined.
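The product construction can be sketched as follows (our encoding: positions of G • D are pairs (v, d), and D reads the successive positions of the play):

```python
def product_game(positions, edges, owner, d_states, d_delta, d_init, v_init):
    """Build the game G . D: an edge v -> w of G becomes
    (v, d) -> (w, d_delta[(d, w)]), so D reads the play position by
    position; the winning condition is inherited from D's acceptance."""
    p_positions = [(v, d) for v in positions for d in d_states]
    p_owner = {(v, d): owner[v] for (v, d) in p_positions}
    p_edges = {(v, d): [(w, d_delta[(d, w)]) for w in edges[v]]
               for (v, d) in p_positions}
    p_init = (v_init, d_delta[(d_init, v_init)])
    return p_positions, p_owner, p_edges, p_init
```

The blowup is exactly the |V| × |D| mentioned in the text: every game position is paired with every state of D.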
Remark 5. The original motivation for GFG automata [10] is that it is sufficient, for the correctness of this algorithm, to take D GFG instead of deterministic. This also explains the name "good-for-games" introduced in [10]. Remarkably, D can be used in this algorithm without any knowledge of the GFG strategy witnessing that D is GFG: as long as such a strategy exists, D can be used in place of a deterministic automaton to solve any ω-regular game G with winning condition W.

Game Characterization of GFG Automata
The main goal of this paper is to give an efficient decision procedure for the Büchi GFGness problem. In order to decide whether an input automaton is GFG, it is natural to replace the abstract definition from Section 2.1 with a more operational one, in the form of a game G_GFG(A) between Eve and Adam: at each turn, Adam plays a letter a_i ∈ Σ, and Eve answers with a transition p_{i−1} -a_i-> p_i of A. The winning condition is the following: Eve wins if either the word u = a_1 a_2 a_3 ... chosen by Adam is not in L(A), or if the run ρ = p_0 p_1 p_2 ... she constructed is accepting (i.e. there are infinitely many i such that p_i ∈ F).

The GFG Game
The GFG game actually corresponds to the original definition of GFG automata in [10]: an automaton A is GFG if and only if Eve wins G_GFG(A). It is shown in [1] that the definition we gave in Section 2.1 for GFG automata is equivalent.

Solving the GFG Game
Notice that G_GFG(A) is an ω-regular game. Therefore, by Remark 5, in order to solve it we need a GFG automaton for the language W = {(u, ρ) ∈ Σ^ω × Q^ω | u ∉ L or ρ is Büchi accepting}. The Büchi condition can be recognized easily by a deterministic 2-state automaton, but for the u ∉ L part, we need a GFG automaton for the complement of L. A GFG automaton for L would also do, since we can consider the game where the roles of the players are reversed, thereby complementing the acceptance condition. Thus, in order to decide whether an automaton for L is GFG, we seem to need a GFG automaton for L. In [15], this approach is actually used for the coBüchi GFGness problem: an auxiliary GFG automaton for L is computed, which allows one to decide whether the original input automaton is itself GFG.
In the present work, we will circumvent this issue, and instead consider relaxations of the GFG game G_GFG(A), called token games, that aim to mimic the GFG game while enjoying a simpler winning condition.

Token Games
Suppose we have fixed a Büchi automaton A = (Q, Σ, q 0 , ∆, F) for the rest of this section. We define associated token games that will help deciding whether A is GFG.

First Attempt: the Game G_1
As seen in Section 3.2, the difficulty of solving the GFG game G_GFG(A) comes from the fact that L(A) appears in the winning condition of G_GFG(A). A natural attempt to circumvent this difficulty is to replace the condition "u ∉ L(A)" by "Adam cannot build an accepting run of A on u". This would simplify the winning condition, turning it into a Boolean combination of Büchi conditions, thus making the game solvable in polynomial time by Theorem 3.
We therefore define G_1(A) as a modification of G_GFG(A), where in addition to choosing letters, Adam must additionally build a run witnessing that u ∈ L(A). If he fails to do so, Eve automatically wins the game. Therefore, we can view a play as Adam choosing letters, and both Eve and Adam possessing a token and moving it in the automaton in order to build an accepting run. Here is a formal definition of G_1(A).

Definition 6 (G_1(A)). We define the game G_1(A) as follows. The game is played on arena Q^2, starting from (q_0, q_0). Each turn, from position (p, q):
• Adam plays a letter a ∈ Σ;
• Eve moves her token along a transition p -a-> p';
• Adam moves his token along a transition q -a-> q';
and the game moves to position (p', q'). Eve wins the game if either the run ρ = p_0 p_1 ... she chose is accepting, or the run λ = q_0 q_1 ... chosen by Adam is rejecting.
Notice that at each turn, Eve must choose a transition before Adam does. This is not arbitrary, as the other way around would trivialize the game: if Adam chooses q -a-> q' before Eve chooses p -a-> p', then Eve can simply copy all the choices of Adam, and will always win G_1(A) even if A is not GFG.

Lemma 7. If A is GFG, then Eve wins G_1(A).
Proof. If Eve has a winning strategy in G_GFG(A), she can simply use the same strategy in G_1(A), ignoring the second component of the position. If the run λ built by Adam is accepting, this means the word u that has been played is in L(A), and therefore, by definition of the winning condition of G_GFG(A), the run ρ built by Eve is accepting.
The hope behind the definition of G_1(A) is that if Eve wins, then she can win without using the extra information given by the second component of the position. This would make G_1(A) equivalent to G_GFG(A), and allow us to decide the Büchi GFGness problem in polynomial time by Theorem 3.
Unfortunately, we can show that the converse of Lemma 7 does not hold: there is a Büchi automaton B such that Eve wins G_1(B) but B is not GFG.
Indeed, let B be the following automaton, recognizing L(B) = (a + b)* a^ω (the accepting state is drawn with a double circle):

[Figure: the automaton B, reconstructed from the proof below. Its states are p (initial), q (accepting) and the sink ⊥; its transitions are p -a-> p, p -b-> p, p -a-> q, q -a-> q, q -b-> ⊥, and self-loops on ⊥.]

Lemma 8. The automaton B is not GFG, but Eve wins G_1(B).
Proof. Adam wins G_GFG(B) with the following strategy: play the letter a until Eve decides to move to the state q (if she never moves, she fails to build an accepting run for a^ω, which is accepted by the automaton), then play b a^ω from there; Eve is forced into state ⊥ and cannot build an accepting run for a word of the form a^m b a^ω, which is accepted by the automaton.
On the other hand, the strategy for Eve to win G_1(B) is to simply move her token to where Adam's token currently is.
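The two positional ways of resolving B's only nondeterministic choice (at state p reading a) are both defeated by the two branches of Adam's strategy above; here is a small sanity check (our encoding of B, with the sink written s). Of course, GFG strategies need not be positional, so this does not replace the proof; it only illustrates the argument.

```python
# Transitions of B: the only nondeterministic choice is (p, a) -> {p, q}.
DELTA = {("p", "a"): ["p", "q"], ("p", "b"): ["p"],
         ("q", "a"): ["q"], ("q", "b"): ["s"],
         ("s", "a"): ["s"], ("s", "b"): ["s"]}

def cycle_states(resolve, u, v, reps=4):
    """States Eve visits on the periodic part of u.v^omega, when she
    resolves nondeterminism with `resolve`."""
    state = "p"
    for a in u:
        state = resolve(state, a)
    visited = []
    for _ in range(reps):
        for a in v:
            state = resolve(state, a)
            visited.append(state)
    return visited

stay = lambda s, a: "p" if (s, a) == ("p", "a") else DELTA[(s, a)][0]
move = lambda s, a: "q" if (s, a) == ("p", "a") else DELTA[(s, a)][0]

# Against "stay", Adam plays a^omega (in L(B)): Eve never visits q.
assert "q" not in cycle_states(stay, "", "a")
# Against "move", Adam plays a, then b a^omega (in L(B)): Eve is trapped in s.
assert "q" not in cycle_states(move, "ab", "a")
```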
This means that in general, if A is a Büchi automaton, G_1(A) is not a good enough approximation of G_GFG(A) and does not characterize GFGness of A.

Allowing More Tokens
Since the game G_1(A) is too easy for Eve compared to the GFG game, it is natural to try to make the game harder for Eve, by allowing Adam to build several runs in parallel, some of them being allowed to fail as long as one accepts. Indeed, it is sufficient that one accepting run exists in order to guarantee that the input word chosen by Adam is in L(A).
The game G_k(A) can be summed up as follows: Adam chooses a word, Eve moves one token in the automaton while Adam moves k tokens. After ω moves, if one of Adam's tokens has followed an accepting run, then Eve's token must also have followed an accepting run. We write simply G_k instead of G_k(A) in the rest of the document, in order to lighten notations. Below is a formal definition of the game G_k.

Definition 9 (G_k). For any integer k ≥ 2, we define the game G_k as follows. The game is played on arena Q^{k+1}, starting from (q_0, q_0, ..., q_0). Each turn, from position (p, q_1, ..., q_k):
• Adam plays a letter a ∈ Σ;
• Eve moves her token along a transition p -a-> p';
• Adam moves each of his tokens, along transitions q_1 -a-> q'_1, ..., q_k -a-> q'_k;
and the game moves to position (p', q'_1, ..., q'_k). Eve wins the game if either the run ρ = p_0 p_1 ... she chose is accepting, or all runs λ_1, ..., λ_k chosen by Adam are rejecting.

Lemma 10. G_k can be seen as a parity game of degree 3.
Proof. We define a new parity condition on G_k as follows: a position (p, q_1, ..., q_k) has rank 2 if p ∈ F, rank 1 if p ∉ F and q_i ∈ F for some i, and rank 0 otherwise. A play is won by Eve in G_k iff ρ is accepting or all runs λ_1, ..., λ_k are rejecting, iff the play contains infinitely many positions with rank 2 or finitely many positions with rank 1, iff Eve wins according to the new parity condition. Notice that we use the fact that if infinitely many positions have rank 1, then there is an i ∈ [1, k] such that the (1 + i)-th component of the position (corresponding to the i-th token of Adam) is in F infinitely many times.
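The rank assignment used in this proof can be written down directly (our encoding of positions as tuples (p, q_1, ..., q_k)):

```python
def rank(position, accepting):
    """Parity rank of a G_k position under the max-even convention:
    2 if Eve's token is in an accepting state, 1 if only some token of
    Adam is, 0 otherwise."""
    p, *adam_tokens = position
    if p in accepting:
        return 2
    if any(q in accepting for q in adam_tokens):
        return 1
    return 0
```

The maximal rank seen infinitely often is even exactly when rank 2 occurs infinitely often or rank 1 occurs only finitely often, matching the winning condition of G_k.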
By Theorem 2, this means that G_k is positionally determined for all k. We will now turn to the particular case where k = 2, and obtain a precise upper bound on the complexity of solving G_2. Let n be the number of states of A and m its number of transitions. Since G_2 is a parity game of degree 3, applying Theorem 3 to it yields:

Theorem 12. The game G_2 can be solved in time O(n^4 m^2 |Σ|^2).

Some Consequences of Winning G_2
The main result of the present work will be that Eve winning G_2 on A is equivalent to A being GFG. One direction is actually trivial:

Proposition 13. If A is GFG, then Eve has a winning strategy for all G_k.
Proof. Eve can ignore Adam's tokens and play her GFG strategy. If the word played by Adam is in L(A), she will build an accepting run; if not, Adam will not be able to build one with any of his tokens.
One of the key steps of the proof is to show that if Eve wins against 2 tokens for Adam, she can actually win against any number of tokens.

Theorem 14.
If Eve wins G_2, then she wins G_k for any k.
Proof. Let σ_2 be a positional winning strategy for Eve in G_2. We proceed by induction on k, the idea being that σ_{k+1} will be obtained by having σ_k play against the first k tokens and then σ_2 against the last token and the output of σ_k. More precisely: if Eve wins G_k, by Theorem 2 and Lemma 10, she has a winning positional strategy σ_k : Q^{k+1} × Σ → Q in G_k. Define the finite-memory strategy σ_{k+1} = (M, m_0, µ, upd) where
• the memory M is the set of states Q, and the initial memory state m_0 is q_0;
• the update function is upd(m, (p, q_1, ..., q_{k+1})) = σ_k(m, q_1, ..., q_k);
• µ(m, (p, q_1, ..., q_{k+1})) = σ_2(p, m, q_{k+1}) picks the move actually played by Eve.
In a play using σ_{k+1}, the memory m takes the value of σ_k playing against q_1, ..., q_k, so if any of these tokens follows an accepting run, then so will m. The moves of Eve are chosen by playing σ_2 against m and q_{k+1}, so that if either of these follows an accepting run, so will Eve's token. In the end, if any of Adam's tokens follows an accepting run, then either q_{k+1} or m does as well, and therefore, by correctness of σ_2, the strategy σ_{k+1} is winning for Eve in G_{k+1}. Because G_{k+1} is a parity game, there exists another winning strategy for Eve that is positional. Since Eve wins G_2, she wins G_k for all k ≥ 2 by induction.
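The inductive composition can be sketched as follows (names are ours; the virtual token m is modelled as internal memory, and the letter is passed explicitly):

```python
def compose(sigma_k, sigma_2, q0):
    """Build sigma_{k+1} from sigma_k and sigma_2: a virtual token m plays
    sigma_k against Adam's first k tokens, while the real move is sigma_2
    played against (m, q_{k+1})."""
    m = q0
    def sigma_k1(p, adam_tokens, a):
        nonlocal m
        move = sigma_2(p, (m, adam_tokens[-1]), a)   # Eve's actual move
        m = sigma_k(m, adam_tokens[:-1], a)          # update virtual token
        return move
    return sigma_k1
```

With "copycat" strategies plugged in, one can check that Eve's real move tracks the virtual token with a one-step delay, which is harmless for a Büchi condition.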
Winning G_2 also implies an important property regarding residuals, which will be key in the proof of our main theorem.

Definition 15 (residual automaton). A transition p -a-> q is called residual if L(q) = a⁻¹L(p) (remember that only L(q) ⊆ a⁻¹L(p) holds in general). An automaton is residual if all its transitions are residual. Given an automaton A, we define the associated residual automaton A^r as A where all non-residual transitions have been removed. An automaton is pre-residual if it accepts the same language as its residual automaton.

Lemma 16.
If Eve wins G_2 on A, we have:
1. A is pre-residual;
2. Eve wins G_2 on A^r;
3. if A^r is GFG, then A is GFG.

Proof. Assume that A is not pre-residual, i.e. L(A) ≠ L(A^r). Since A^r is obtained from A by removing transitions, we always have L(A^r) ⊆ L(A). So there is u ∈ L(A) \ L(A^r), i.e. any accepting run for u must take a non-residual transition at some point. Then Adam can win G_2 in the following way: play the letters of u, have the first token follow Eve's token, and have the second one follow an accepting run for u. If Eve never takes any non-residual transition, she cannot build an accepting run for u and loses; if she eventually takes a non-residual transition p -a-> q, then Adam picks another transition p -a-> q' such that there is v ∈ L(q') \ L(q), moves the first token to q' and starts playing the letters of v from there. Adam can build an accepting run for v from q' with the first token, while Eve is unable to do so from q. Therefore, this is a winning strategy for Adam in G_2, a contradiction.
For the second property, we show that if Eve wins G_2(A) with a strategy σ_2, then σ_2 is actually well-defined and winning for G_2(A^r). First note that any reachable position (p, q, r) of G_2(A^r) has the property that L(p) = L(q) = L(r), since the initial position is (q_0, q_0, q_0) and only residual transitions can be taken. But in a position (p, q, r) such that L(p) = L(q) = L(r), σ_2 cannot pick a non-residual transition p -a-> p', otherwise Adam can start playing a word that is not in L(p') and win the game. So σ_2 is a valid strategy to play in G_2(A^r). Moreover, any play of G_2(A^r) is in particular a play of G_2(A), and we showed that L(A) = L(A^r), so σ_2 is a winning strategy in G_2(A^r).
Finally, suppose Eve has a GFG strategy σ for A^r. Then this strategy is also well-defined on A, and it wins the GFG game because L(A) = L(A^r).

Remark 17.
Any GFG automaton is pre-residual, but the converse does not hold.
The proof that any GFG automaton is pre-residual is stated in the appendix of [15]. In our setting, it is a corollary of Proposition 13 together with the first item of Lemma 16.
We give two counter-examples for the converse: a Büchi automaton B on Σ = {a, b, c} with L(B) = (Σ* a b Σ* c)^ω, and a {1, 2, 3}-parity automaton C accepting (a + b)^ω. In both cases, we label transitions with parity ranks. The automata B and C are pre-residual, and in fact residual: all their states accept the language of the automaton, so all transitions are residual. However, they are not GFG: we can give a winning strategy for Adam in the GFG game in both cases. For B, Adam first plays a. If Eve goes to q, then Adam plays abc, bringing Eve back to p. If Eve stays in p, then Adam plays bc, leaving Eve in p. Repeating this process leads Adam to build a word of L(B), while preventing Eve from seeing any Büchi transition.

For C, Adam can play a whenever Eve is in s and b whenever she is in t.

Deciding GFGness
Before we get to the sequence of results leading to the proof of our main result, let us quickly outline the approach.
We already know that if Eve wins G_2 then she wins G_k for any k (Theorem 14), and the main idea is to find a k for which she will be able to move k tokens so that at least one follows an accepting run, and then play σ_k against these virtual tokens. Note that by simply splitting tokens at every nondeterministic choice, she can explore all the possible runs, and as k grows bigger she can keep doing so for a longer time. The results of Section 5.1 (specifically Lemma 19) essentially guarantee that there is a k large enough that, following this approach, she will eventually reach accepting states.
It then remains to use this to precisely formulate Eve's strategy to win against a hypothetical winning strategy for Adam in the GFG game, reaching a contradiction (Theorem 20).

Powerset Automaton
We will assume here that A is residual. We review a few properties of the powerset automaton that will be useful in our setting.
Definition 18. Given a residual Büchi automaton A = (Q, Σ, q_0, ∆, F), we define the powerset automaton of A as 2^A = (2^Q, Σ, {q_0}, ∆', F') where
• ∆'(S, a) = ∪_{p ∈ S} ∆(p, a);
• S ∈ F' when there is q ∈ S such that q ∈ F.
Note that it is well known that, as such, 2^A does not necessarily recognize the same language as A. However, it is always true that L(A) ⊆ L(2^A). More precisely, for any state p ∈ Q, we have L(p) ⊆ L({p}). This property will be sufficient for our purpose. Let us write S • w for the sequence of states visited by 2^A when reading w from the state S.
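The powerset successor and the sequence S • w can be sketched as follows (our function names):

```python
def post(delta, macro, a):
    """Powerset successor: the union of delta(p, a) over p in the macrostate."""
    return frozenset(q for p in macro for q in delta.get((p, a), ()))

def macro_run(delta, macro, w):
    """The sequence of macrostates visited by 2^A reading w from `macro`."""
    seq = [frozenset(macro)]
    for a in w:
        seq.append(post(delta, seq[-1], a))
    return seq
```

For example, with the choice p -a-> {p, q}, the macro run from {p} immediately covers both branches and then stays saturated, which is the "one token per possible run" intuition used below.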
The following lemma will be crucial in the proof of the main theorem: it tells us that if Adam is choosing letters according to a finite-memory winning strategy for the GFG game, then the number of turns before an accepting state can be seen while reading these letters is bounded. This bound will allow us to follow all the possible runs up to that accepting state with a finite number of tokens.

Lemma 19.
If τ is a finite-memory winning strategy for Adam in G_GFG(A), then there exists an integer K_τ such that: if w is a sequence of letters of length K_τ chosen by τ from a state p_0 (reached by playing τ from the starting position), and q is any state with L(p_0) = L(q), then {q} • w contains an accepting state.
Proof. Let M be the memory of τ and let K_τ = |Q × M × 2^Q|. Let w = a_1 a_2 ... a_{K_τ} be a word of length K_τ that can be played by τ in G_GFG(A) from a position p_0 reachable in G_GFG(A) with some memory m_0. Consider the sequence (p_0, m_0, S_0), (p_1, m_1, S_1), ..., (p_{K_τ}, m_{K_τ}, S_{K_τ}), where the p_i's describe the states of A in this play, the m_i's are Adam's memory states, and the S_i's are the states of 2^A reached upon reading the letters of w starting from S_0 = {q}. By the choice of K_τ, there must be i < j such that (p_i, m_i, S_i) = (p_j, m_j, S_j). This means that there is a prefix uv of w such that Eve can force the strategy τ to play u v^ω from (p_0, m_0, {q}), while guaranteeing that on the suffix v^ω, the run of 2^A (corresponding to the third component) repeats the same cycle C from S_i to S_j = S_i.
Because τ is winning for Adam, and A is residual, we must have u v^ω ∈ L(p_0) = L(q). Since L(q) ⊆ L({q}), we have u v^ω ∈ L({q}), and therefore the cycle C must contain an accepting state of 2^A. Since the cycle C is present in {q} • w, this concludes the proof.

Two Tokens Are Enough
Theorem 20. If Eve wins G_2 on a residual automaton A, then A is GFG.
Proof. Assume by contradiction that Eve wins G_2 but Adam wins G_GFG(A). By Theorem 4, he can do so with a finite-memory strategy τ (with memory of size exponential in |Q|). Let K = K_τ be given by Lemma 19, and let c = max{|∆(p, a)| : p ∈ Q, a ∈ Σ} be the degree of nondeterminism of A. Let N = c^K and T = N · |Q|, so that when moving T tokens in A, at least one state holds N or more tokens at any given time. Recall that by Theorem 14, Eve has a positional winning strategy σ_T : Q^{T+1} × Σ → Q in G_T. Notice that T is doubly exponential in |Q|.
We will now define a finite-memory strategy σ = (M, m_0, σ_M, upd) for Eve in G_GFG(A). The strategy σ will be defined according to the following intuition: Eve plays against τ by simulating T tokens moving in A, and chooses her actual moves in G_GFG(A) by playing σ_T against these virtual tokens. The memory M of σ is Q^T, and its initial memory state is m_0 = (q_0, ..., q_0). We now describe the update function upd of σ. This amounts to giving a strategy for moving T tokens in A, when letters are given by the opponent step by step. We will consider that some tokens are active and the others are passive. Tokens are moved according to the following rules:
• Initially, the T tokens are in q_0, and are all active.
• At each nondeterministic choice, active tokens are divided evenly between the possible successors.
• Passive tokens are moved arbitrarily.
• If an accepting state is reached by some token, then choose a state p containing at least N tokens, set the tokens in this state to active, and all the others to passive. We call this a reset point p.
We will also consider that the initial position is a reset point. An illustration of this update strategy is given in Figure 1.
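The even-splitting rule can be sketched by tracking active-token counts per state (our encoding; passive tokens are simply dropped here):

```python
def split_step(delta, tokens, a):
    """One update step: each state's active tokens are divided evenly
    among its successors on letter a (integer division)."""
    new = {}
    for p, count in tokens.items():
        succs = delta[(p, a)]
        for q in succs:
            new[q] = new.get(q, 0) + count // len(succs)
    return new
```

Starting with N = c^K active tokens in one state, every state reached by the powerset automaton within K steps keeps at least one active token, which is exactly what the proof needs: with branching degree c = 2 and K = 3, the 8 tokens spread without any reachable state ever running dry.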
Finally, we define σ_M : M × (Q × Σ) → Q by σ_M(q, p, a) = σ_T(p, q, a), where q ∈ Q^T is the tuple of virtual tokens. We can now consider the play ρ of σ against τ in G_GFG(A). Let p_i, q_i, a_i be respectively the state of Q, the memory state of σ, and the letter played by τ after i moves in ρ. Notice that since A is residual, for all i and every state q in the tuple q_i, we have L(p_i) = L(q). This allows us to use Lemma 19 in the following.
We first show that there are infinitely many i such that q_i contains an accepting state. Consider a reset point p in the play. Starting from at least N = c^K active tokens in p, and dividing them evenly at each step, the update strategy can cover all states reached by 2^A from {p} with active tokens during K steps. By Lemma 19, the memory will reach an accepting state within these K steps, and can therefore restart at another reset point without ever running out of active tokens. This shows that there are infinitely many i such that q_i contains an accepting state. Since q_i is a finite tuple, there is a component j such that the j-th coordinate of q_i is accepting for infinitely many i. By correctness of σ_T, we obtain that there are infinitely many i such that p_i is accepting. This implies that the play ρ of G_GFG(A) is won by Eve, a contradiction with the assumption that τ is winning for Adam.
Corollary 21. On any Büchi automaton A, Eve wins G_2 if and only if A is GFG.

Proof.
This is a consequence of Lemma 16 and the above theorem: if Eve wins G_2 on A, then she also wins it on A^r, which implies that A^r is GFG, and therefore A is GFG as well. We already saw the other direction in Proposition 13.
By Theorem 12, we can now state our main result:

Theorem 22. The Büchi GFGness problem is in P, and more precisely in O(n^4 m^2 |Σ|^2).
Remark 23. Let us briefly discuss the possible extension of this proof to other parity conditions.
On the one hand, Theorem 14 is true regardless of the acceptance condition (the proof does not rely on the automaton being Büchi), which is quite promising. On the other hand, the adaptation of Lemma 19 proves problematic, and without this lemma it seems difficult to find a way to move T virtual tokens so that at least one of them follows an accepting run, which the proof of Theorem 20 relies on critically. Already in the coBüchi case a substitute technique is missing, although our current work focuses on using some of the techniques from [15] to prove Theorem 20 in this case, hoping this will eventually lead to a technique working for any parity condition.

Conclusion
We showed that the Büchi GFGness problem can be decided in P, by introducing new techniques based on token games. While it seems that our proof cannot be directly used to efficiently solve the parity GFGness problem, the game G_2 could still be relevant in this more general setting. We did not find any example of a non-GFG parity automaton A such that Eve wins G_2(A), so in our opinion it is plausible that Eve wins G_2(A) if and only if A is GFG, for any parity automaton A. Since for any fixed acceptance condition (for instance a parity condition of fixed degree) the game G_2 can be solved in P, this would put the GFGness problem in P for any fixed acceptance condition, with an algorithm that is already known.