Some time ago I wrote a few posts in an English-language forum based essentially on a paper by Alexandrov et al. that analyses several conceptual problems of LQG (given the length of the posts, I leave them untranslated)

I would like to continue discussing canonical LQG based on chapter 2 from

http://arxiv.org/abs/1009.4475

Critical Overview of Loops and Foams

Authors: Sergei Alexandrov, Philippe Roche

(Submitted on 22 Sep 2010)

Abstract: This is a review of the present status of loop and spin foam approaches to quantization of four-dimensional general relativity. It aims at raising various issues which seem to challenge some of the methods and the results often taken as granted in these domains. A particular emphasis is given to the issue of diffeomorphism and local Lorentz symmetries at the quantum level and to the discussion of new spin foam models. We also describe modifications of these two approaches which may overcome their problems and speculate on other promising research directions.

As a summary, Alexandrov and Roche present (and discuss in detail) indications that canonical LQG in its present form suffers from quantization ambiguities (ordering ambiguities, unitarily inequivalent quantizations, the Immirzi parameter), insufficient treatment of secondary second class constraints leading to anomalies, missing off-shell closure of the constraint algebra, the lack of a consistent definition of the Hamiltonian constraint, and possibly the loss of full 4-dim diffeomorphism invariance. They question some of the major achievements such as black hole entropy, the discrete area spectrum, and the uniquely defined kinematical Hilbert space (spin networks).

In chapter 3 Alexandrov and Roche discuss spin foam models which may suffer from related issues showing up in different form but being traced back to a common origin (secondary second class constraints, missing Dirac’s quantization scheme, …). I would like to discuss this chapter 3 in a separate thread. For the rest of this post I will try to present the most important sections from chapter 2, i.e. canonical LQG.

Although we have raised the points already known by the experts in the field, they are rarely spelled out explicitly. At the same time, their understanding is crucial for the viability of these theories. ...

Thus, we will pay particular attention to the imposition of constraints in both loop and SF quantizations. Since LQG is supposed to be a canonical quantization of general relativity, in principle, it should be straightforward to verify the constraint algebra at the quantum level. However, in practice, due to peculiarities of the loop quantization this cannot be achieved at the present state of knowledge. Therefore, we use indirect results to make conclusions about this issue. ...

Although LQG can perfectly incorporate the full local Lorentz symmetry, we find some evidence that LQG might have problems maintaining space-time diffeomorphism symmetry at the quantum level. Thus, we argue that it is an anomalous quantization of general relativity which is not physically acceptable. ...

Since the action (2.1) possesses several gauge symmetries, 4 diffeomorphism symmetries and 6 local Lorentz invariances in the tangent space, there will be 10 corresponding first class constraints in the canonical formulation. Unfortunately, as will become clear in section 2.2, there are additional constraints of second class which cannot be solved explicitly in a Lorentz covariant way. To avoid these complications, one usually follows an alternative strategy. It involves the following three steps:
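A quick degrees-of-freedom count makes this constraint structure concrete. The following is a sketch using the covariant phase space of section 2.2 (connection and conjugate momentum valued in sl(2,C), so 18 configuration components per space point); the split of the second class constraints into 6 primary and 6 secondary is the one described later in the text:

```latex
\underbrace{2 \times 18}_{(A_i^{IJ},\; P^i_{IJ})}
\;-\; 2 \times \underbrace{(6 + 3 + 1)}_{\text{first class: Lorentz, diffeo, Hamiltonian}}
\;-\; \underbrace{(6 + 6)}_{\text{second class: primary + secondary}}
\;=\; 4
```

per space point, i.e. two configuration-space degrees of freedom, as appropriate for the two graviton polarizations.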

... the boost part of the local Lorentz gauge symmetry is fixed from the very beginning by

choosing the so called time gauge, which imposes a certain condition on the tetrad field

...the three first class constraints generating the boosts are solved explicitly w.r.t. space components of the spin-connection; ...

An important observation is that the spectrum (2.14) is proportional to the Immirzi parameter. This proportionality arises due to the difference between ... having canonical commutation relations with the connection. It signifies that this parameter, which did not play any role in classical physics, becomes a new fundamental physical constant in quantum theory. ...
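For reference, the spectrum referred to as (2.14) is the standard SU(2) area spectrum of LQG; in common conventions it reads, for a surface S punctured by spin-network edges carrying spins j_i,

```latex
\hat{A}_S \,|S\rangle \;=\; 8\pi \gamma\, \ell_P^2 \sum_i \sqrt{j_i (j_i + 1)}\; |S\rangle ,
```

with the Immirzi parameter gamma appearing as an overall scale, which is exactly the dependence the authors question.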

A usual explanation is that it is similar to the theta-angle in QCD [35]. However, in contrast to the situation in QCD, the formalism of LQG does not even exist for the most natural value corresponding to the usual Hilbert–Palatini action. Moreover, the Immirzi parameter enters the spectra of geometric operators in LQG as an overall scale, which is a quite strange effect. Even stranger is that the canonical transformation (2.6) turns out to be implemented non-unitarily, so that the area operator is sensitive to the choice of canonical variables. To our knowledge, there is no example of such a phenomenon in quantum mechanics. ...

Below we will argue that the dependence on the Immirzi parameter is due to a quantum anomaly in the diffeomorphism symmetry, which in turn is related to a particular choice of the connection used to define quantum holonomy operators (2.7). ...

Alexandrov and Roche do not address all weak points related to Thiemann's construction of the Hamiltonian but refer to

[14] H. Nicolai, K. Peeters, and M. Zamaklar, “Loop quantum gravity: An outside view,” Class. Quant. Grav. 22 (2005) R193, arXiv:hep-th/0501114.

[15] T. Thiemann, “Loop quantum gravity: An inside view,” Lect. Notes Phys. 721 (2007) 185–263, arXiv:hep-th/0608210.

In the following, Alexandrov and Roche present alternative quantization approaches developed by Alexandrov et al.; this does not really solve these issues, but it clarifies the main problems, i.e. quantization ambiguities, inequivalent quantization schemes, and secondary second class constraints missed by using the Poisson structure instead of the Dirac brackets.

The reduction of the gauge group originates from the first two steps in the procedure leading to the AB canonical formulation on page 6. Therefore, it is natural to construct a canonical formulation and to quantize it avoiding any partial gauge fixing and keeping all constraints generating Lorentz transformations in the game. The third step in that list (solution of the second class constraints) can still be done and the corresponding canonical formulation can be found in [51]. However, this necessarily breaks the Lorentz covariance. On the other hand, it is natural to keep it since the covariance usually facilitates analysis both at classical and quantum level. Thus, we are interested in a canonical formulation of general relativity with the Immirzi parameter, which preserves the full Lorentz gauge symmetry and treats it in a covariant way. ...

The presence of the second class constraints is the main complication of the covariant canonical formulation. They change the symplectic structure on the phase space which must be determined by the corresponding Dirac bracket. ...
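For readers not familiar with it, the Dirac bracket is the standard construction for second class constraints phi_a: since the matrix C_ab = {phi_a, phi_b} is invertible, one can define

```latex
\{f, g\}_D \;=\; \{f, g\} \;-\; \{f, \varphi_a\}\,(C^{-1})^{ab}\,\{\varphi_b, g\} ,
\qquad C_{ab} = \{\varphi_a, \varphi_b\} ,
```

which satisfies {f, phi_a}_D = 0 for any f, so the constraints may be set to zero strongly once all brackets are evaluated. This modified symplectic structure is what Alexandrov and Roche insist must be the starting point of quantization.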

They identify a two-parameter family of inequivalent (!) connections; their main results are

...First of all, the connection lives now in the Lorentz Lie algebra so that its holonomy operators belong to a non-compact group. This is a striking distinction from LQG where the compactness of the structure group SU(2) is crucial for the discreteness of geometric operators and the validity of the whole construction.

...The symplectic structure is not anymore provided by the canonical commutation relations of the type (2.3) but is given by the Dirac brackets.

...In addition to the first class constraints generating gauge symmetries, the phase space to be quantized carries second class constraints. Although they are already taken into account in the symplectic structure by means of the Dirac brackets, they lead to a degeneracy in the Hilbert space constructed ignoring their presence ...

(Remarkably some of these issues identified by Alexandrov and Roche in the generalized canonical approach will show up in the "new SF" models which have become popular over the last years)

Thus, we see that the naive generalization of the SU(2) spin networks to their Lorentz analogues is not the correct way to proceed. A more elaborated structure is required. The origin of this novelty can be traced back to the presence of the second class constraints which modified the symplectic structure and invoked a connection different from the usual spin connection. ...

The spectrum (2.40) depends explicitly on the parameters a, b entering the definition of the connection. This implies that the quantizations based on different connections of the two-parameter family are all inequivalent. ...

Finally, we notice that the projected spin networks are obtained by quantizing the phase space of the covariant canonical formulation ignoring the second class constraints. Therefore they form what can be called enlarged Hilbert space and as we mentioned above this space contains many states which are physically indistinguishable. To remove this degeneracy one has to somehow implement the second class constraints at the level of the Hilbert space. ...

(The authors claim that this is what is missed in all SF models)

Alexandrov and Roche present two special choices for the connection (on physical grounds) in order to demonstrate how one could proceed

1) LQG in a covariant form from which the standard time-gauge LQG framework can be recovered

2) CLQG which leads to physically different results!

Although the commutativity of the connection is a nice property, there is another possibility of imposing an additional condition to resolve the quantization ambiguity, which has a clear physical origin. Notice that the Lorentz transformations and spatial diffeomorphisms, which appear in the list of conditions (2.34), do not exhaust all gauge transformations. What is missing is the requirement of correct transformations under time diffeomorphisms generated by the full Hamiltonian. Only the quantity transforming as the spin-connection under all local symmetries of the theory can be considered as a true spacetime connection. ...

In particular, it involves a Casimir of the Lorentz group and hence this spectrum is continuous. But the most striking and wonderful result is that the spectrum does not depend on the Immirzi parameter! Moreover, one can show that this parameter drops out completely from the symplectic structure ...

Thus, it [the Immirzi parameter] remains unphysical as it was in the classical theory, at least at this kinematical level. ...

From our point of view, this is a very important fact which indicates that LQG may have troubles with the diffeomorphism invariance at the quantum level. ...

But as we saw in the previous subsection, there is an alternative choice of connection, suitable for the loop quantization, which respects all gauge symmetries. Besides, the latter approach, which we called CLQG, leads to results which seem to us much more natural. For example, since it predicts an area spectrum independent of the Immirzi parameter, there is nothing special to be explained and there is no need to introduce an additional fundamental constant. Moreover, the spectrum appears to be continuous which is very natural given the non-compactness of the Lorentz group and results from 2+1 dimensions (see below). Although these last results should be taken with great care as they are purely kinematical and obtained ignoring the connection non-commutativity, in our opinion, the comparison of the two possibilities to resolve the quantization ambiguity points in favor of the second choice. ...

The authors have serious doubts that the discrete area spectrum should be taken as a physical result:

In fact, there are two other more general issues which show that the LQG area spectrum is far from being engraved into marble. First, the area operator is a quantization of the classical area function and, as any quantization, is supplied with ordering ambiguities ...

Second, the computation of the area spectrum has been done only at the kinematical level. The problem is that the area operator is not a Dirac observable. It is only gauge invariant, whereas it is not invariant under spatial diffeomorphisms and does not commute with the Hamiltonian constraint. This fact raises questions and suspicions about the physical relevance of its spectrum and in particular about the meaning of its discreteness, even among experts in the field [77, 78]. ...

In the following, Alexandrov and Roche show that there are two different interpretations of the quantization of gravity, namely the Dirac scheme and the relational interpretation:

The difference between the two interpretations and the importance of this issue has been clarified in [77, 81]. Namely, the authors of [77] proposed several examples of low dimensional quantum mechanical constrained systems where the spectrum of the physical observable associated to a partial observable is drastically changed. This is in contradiction with the expectation of LQG that the spectrum should not change. Then in [81] it was argued that one should not stick to the Dirac quantization scheme but to the relational scheme. Accepting this viewpoint allows to keep the kinematical spectra unchanged. Thus, the choice of interpretation for physical observables directly affects predictions of quantum theory and clearly deserves a precise scrutiny. ...

Whereas the relational viewpoint seems to be viable, the work [77] shows that if we adhere only to the first interpretation, which is the most commonly accepted one, then it is of utmost importance to study the spectrum of complete observables. Unfortunately, up to now there are no results on the computation of the spectrum of any complete Dirac observable in full LQG. ...

In our opinion, all these findings and the above mentioned issues clearly make the discreteness found in LQG untrustable and suggest that the CLQG spectrum (2.52) is a reasonable alternative. ...

I don't think that this means that LQG is wrong and CLQG is right. But it definitely means that even canonical LQG is far from being unique!

However, since the spectrum (2.52) is independent of the Immirzi parameter, the challenge now is to find such counting which gives the exact coefficient 1/4, and not just the proportionality to the horizon area. In fact, the last point is the weakest place of the LQG derivation comparing to all other derivations existing in the literature. ...
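For context, the benchmark is the Bekenstein-Hawking formula, and in standard time-gauge LQG the counting of horizon states reproduces it only after tuning the Immirzi parameter (schematically; gamma_0 denotes the numerical constant produced by the counting):

```latex
S_{BH} = \frac{A}{4\,\ell_P^2} ,
\qquad
S_{LQG} = \frac{\gamma_0}{\gamma}\,\frac{A}{4\,\ell_P^2}
\;\;\Longrightarrow\;\;
S_{LQG} = S_{BH} \;\text{ only if }\; \gamma = \gamma_0 .
```

With a gamma-independent spectrum such as (2.52) this tuning option disappears, which is exactly the challenge stated in the quote.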

Besides, there are two other points which make the LQG derivation suspicious. First, it is not generalizable to any other dimension. If one draws direct analogy with the 4-dimensional case, one finds a picture which is meaningless in 3 dimensions and does not allow to formulate any suitable boundary condition in higher dimensions. ...

(This aspect is discussed in a series of papers by Thiemann et al., but I have to admit that I did not check for results relevant in this context)

Last but not least Alexandrov and Roche focus on diffeomorphism invariance, non-separability of LQG Hilbert space and non-uniqueness of the Hamiltonian constraint.

In fact, there still remain some continuous moduli depending on the relative angles of edges meeting at a vertex of sufficiently high valence [96]. Due to this the Hilbert space HGDiff is not separable and if one does not want that the physics of quantum gravity is affected by these moduli, one is led to modify this picture. To remove this moduli dependence, one can extend Diff(M) to a subgroup of homeomorphisms of M consisting of homeomorphisms which are smooth except at a finite number of points [97] (the so called “generalized diffeomorphisms”). If these points coincide with the vertices of the spin networks, the supposed invariance under this huge group will identify spin networks with different moduli and solve the problem. However, this procedure has different drawbacks. First, the generalized diffeomorphisms are not symmetries of classical general relativity. Moreover, they transform covariantly the volume operator of Rovelli–Smolin but not the one of Ashtekar–Lewandowski which is favored by the triad test [39]. This analysis indicates that these generalized diffeomorphisms should not be implemented as symmetries at quantum level and, as a result, we remain with the unsolved problem of continuous moduli. ...

In the Dirac formalism the constraints Hi only generate diffeomorphisms which are connected to the identity. Therefore, there is a priori no need for defining HGDiff to be invariant under large diffeomorphisms. On the other hand, in LQG these transformations, forming the mapping class group, are supposed to act trivially. This is justified in [2] (section I.3.3.2) to be the most practical option given that the mapping class group is huge and not very well understood. ...

Thus, the simplest option taken by LQG might be an oversimplification missing important features of the right quantization. ...

That means that LQG may simply miss the full diffeomorphism symmetry; in particular, all structures related to large diffeomorphisms may be lost. ... Moreover, a huge arbitrariness is hidden in the step suggesting to replace the classical Poisson brackets, as for example (2.22), by quantum commutators. In general, this is true only up to corrections in h and on general grounds one could expect that the Hamiltonian constructed by Thiemann may be modified by such corrections. This is a rather disappointing situation for a would-be fundamental quantum gravity theory. In principle, all this arbitrariness should be fixed by the requirement that the quantum constraints reproduce the closed Dirac constraint algebra. However, the commutators of quantum constraint operators are not under control ...
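The arbitrariness mentioned in the quote is the usual one of canonical quantization: the correspondence between brackets and commutators is fixed only to leading order in hbar,

```latex
[\hat{f}, \hat{g}] \;=\; i\hbar\, \widehat{\{f, g\}} \;+\; \mathcal{O}(\hbar^2) ,
```

so every quantum constraint operator is defined only up to higher-order corrections. In principle only the closure of the quantum constraint algebra could fix these corrections, and that closure is precisely what is not under control.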

Here's the summary of chapter 2

Trying to incorporate the full Lorentz gauge symmetry into the standard LQG framework based on the SU(2) group, we discovered that LQG is only one possible quantization of a two-parameter family of inequivalent quantizations. All these quantizations differ by the choice of connection to be used in the definition of holonomy operators — the basic building blocks of the loop approach. LQG is indeed distinguished by the fact that the corresponding connection is commutative. Nevertheless, a more physically/geometrically motivated requirement selects another connection, which gives rise to the quantization called CLQG. Although the latter quantization has not been properly formulated yet, it predicts an area spectrum which is continuous and independent of the Immirzi parameter, whereas LQG gives a discrete spectrum dependent on it. We argued that these facts lead to suspect that LQG might be an anomalous quantization of general relativity: in our opinion they indicate that it does not respect the 4d diffeomorphism algebra at quantum level. If this conclusion turns out indeed to be true, LQG cannot be physically accepted. At the same time, CLQG is potentially free from these problems. But due to serious complications, it is far from being accomplished and therefore the status of the results obtained so far, such as the area spectrum, is not clear. We also pointed out that some of the main LQG results are incompatible either with other approaches to the same problem or with attempts to generalize them to other dimensions. We consider these facts as supporting the above conclusion that LQG is not, in its present state, a proper quantization of general relativity.

So much for today!

## LQG seriously ill?


Regards

Tom

«No one else could gain admittance here, for this entrance was intended only for you. I am going now and will close it.»


### Re: LQG is seriously ill

I would like to continue discussing SF (i.e. PI) models of LQG based on chapter 3 from

http://arxiv.org/abs/1009.4475

Critical Overview of Loops and Foams

Authors: Sergei Alexandrov, Philippe Roche

(Submitted on 22 Sep 2010)


In chapter 3 Alexandrov and Roche discuss spin foam models which may suffer from related issues showing up in different form but being traced back to a common origin (secondary second class constraints, missing Dirac’s quantization scheme, …).

In a nutshell, LQG is supposed to give a Hamiltonian picture of quantum gravity based on the use of specific variables (connections), whereas spin foam models are a certain type of discretized path-integral approach to the quantization. A priori these are different approaches using different methods and leading to different results. Of course, in the best case their predictions should coincide and they should be just equivalent quantizations. But at present such an agreement has not been achieved yet.

First Alexandrov and Roche start with a new perspective based on SFs

Most of the constructions of SF models of 4-dimensional general relativity heavily rely on the Plebanski formulation and translate the classical relation between BF theory and gravity directly to the quantum level. In other words they all employ the following strategy:

1. discretize the classical theory putting it on a simplicial complex;

2. quantize the topological BF part of the discretized theory;

3. impose the simplicity constraints at the quantum level.

Thus, instead of quantizing the complicated system obtained after imposing the constraints, they first quantize and then constrain. This strategy is behind all the progress achieved in the construction of 4-dimensional SF models. However, at the same time, this is a very dangerous strategy and, as we believe, it is the reason why most of these models cannot be satisfactory models of quantum gravity. As we will show, it is inconsistent with the Dirac rules of quantization and is somewhat misleading.
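Schematically, the classical starting point behind this strategy is the Plebanski action, in which general relativity appears as a constrained BF theory (a sketch; conventions and the trace condition on the multiplier vary between papers):

```latex
S[B, A, \lambda] \;=\; \int \left[ B_{IJ} \wedge F^{IJ}(A)
  \;+\; \lambda_{IJKL}\, B^{IJ} \wedge B^{KL} \right] ,
```

where lambda enforces the simplicity constraints. In the gravitational sector their solution is B^{IJ} = ± (1/2) eps^{IJ}_{KL} e^K ∧ e^L, which turns BF theory into the Hilbert-Palatini action, while dropping the constraint term leaves the topological BF theory quantized in step 2.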

They introduce the models [21] (ELPR) and [20] (FK)

Although the models of [21] (ELPR) and [20] (FK) are in general different from each other and obtained using different ideas, they have several common inputs. First, they both rely on the idea allowing to effectively linearize the simplicity constraints

They discuss the BC model, why it fails to provide a quantization of GR, and how BF theory serves as a basis for the new models; they show that some shortcomings of the BC model are not resolved in [21] (ELPR) and [20] (FK)

But in fact (3.19) is stronger because it excludes the topological sector of Plebanski formulation. Thus, the linearization solves simultaneously the problem of the BC model that it does not distinguish between the gravitational and the topological sectors. The new constraint leads directly to the sector we are interested in. ... Second, both models suggest to quantize an extension of Plebanski formulation which includes the Immirzi parameter. This results in crucial deviations from the results of the BC model already at the level of imposing the diagonal simplicity constraint.

The construction of [21] (ELPR) and [20] (FK) contains some 'quantization ambiguities' as expected from chapter 2.

Then, as has been noted already in [131], the constraint (3.21) does not have solutions except some trivial ones. However, appealing to the ordering ambiguity, the authors of the model [21] adjusted the operator in (3.21) so that the constraint does have solutions.

In the following they stress deviations from well-established standard quantization procedures in the steps used to quantize and modify BF theory, which lead to the new models:

The main suggestion of this model, [EPRL] which distinguishes it from the BC model and was first realized in [128], is that the simplicity constraints should be imposed only in a weak sense, that is, instead of imposing the constraints on the allowed states [annihilating states] one only requires [vanishing of their expectation values sandwiched between physical states!]. This is justified by noting that after identification of the bivectors Bf with generators of the gauge group or a combination thereof (3.20), the simplicity constraints become non-commutative and imposing them strongly leads to inconsistencies, as is well known for any second class constraints. This does not concern the diagonal simplicity constraint which lies in the center of the constraint algebra and therefore can still be imposed strongly leading to the restriction (3.21) on the allowed representations
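The strong vs. weak imposition being quoted can be written compactly (schematic notation, not the paper's):

```latex
\text{strong:}\;\; \hat{C}_a |\psi\rangle = 0
\qquad\text{vs.}\qquad
\text{weak:}\;\; \langle \psi | \hat{C}_a | \psi' \rangle = 0
\;\;\; \forall\, |\psi\rangle, |\psi'\rangle \in \mathcal{H}_{\mathrm{phys}} .
```

For second class constraints the strong version is inconsistent: the commutator [C_a, C_b] is generically an operator with no nontrivial kernel, so demanding that all C_a annihilate a state forces that state to vanish.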

[in FK] one starts again from the partition function for BF theory (3.12) where the simplicity constraints should be implemented as restrictions on the representation labels. However, before doing that one makes a refinement of the decomposition (3.12) using the coherent state techniques developed in [19] Here we concentrate on the Euclidean case. Although the Lorentzian case was also considered in [20], the corresponding construction is much more complicated and even the Immirzi parameter has not been incorporated in it so far.

They doubt that the SFs and canonical LQG agree on the kinematical Hilbert space; the 'proofs' are mostly based on unphysical, i.e. non-gauge-invariant, quantities subject to quantization anomalies.

Due to this fact it was claimed that the boundary states of the new models are the ordinary SU(2) spin networks [128] and it is now widely believed that there is a perfect agreement between the new SF models and LQG at the kinematical level [10]. However, it is easy to see that this is just not true. First of all, the states induced on the boundary of a spin foam are not the ordinary spin networks, but projected ones considered in section 2.2.2

However, on one hand, there is no any fundamental reason to perform such a projection. And on the other hand, this relation shows that the kinematical states of LQG and the boundary states of the EPRL model are indeed physically different and the agreement between their labels is purely formal. The claimed agreement is often justified by comparison of the spectra of geometric operators, area [21] and volume [138]. By appropriately adjusting the ordering, the spectra in spin foams and LQG can be made coinciding. However, the operators, which are actually evaluated in these papers, are not the standard ones, but shifted by constraints.

But in the EPRL approach the boundary states are supposed to be integrated over these normals so that the operator corresponding to (3.39) is simply not defined! On top of that, even if one drops the integration over xt, as we argue below, and gets a well defined operator on a modified state space, we see that the quantization of the geometric operators is not unique. To get the coincidence with LQG requires ad hoc choice of the ordering and of the classical expression to be quantized.

After their discussion of EPRL and FK they focus on the imposition of constraints. To understand these issues one has to be familiar with the Dirac quantization procedure (second class constraints, Dirac brackets instead of Poisson brackets)!

All models presented in the previous section have been derived following the strategy of section 3.1.2: first quantize and then constrain. Now we want to reconsider the resulting constructions taking lessons from the canonical approach. As we showed in the previous section, the spin foam quantization originates in Plebanski formulation of general relativity. The canonical analysis of this formulation has been carried out in [139, 140, 141] and turns out to be essentially equivalent to the Lorentz covariant canonical formulation of the Hilbert–Palatini action [18] once eijkBjk is identified with ~ Pi. The Immirzi parameter is also easily included and appears in the same way. Thus, the canonical structure to be quantized can be borrowed from section 2.2.1. In particular, the role of the simplicity constraints is played by the constraints (2.30).

In section 3.4.1 Alexandrov and Roche show, based on a toy model, how the two quantization strategies

1) à la Dirac and

2) 'first quantize using Poisson brackets, then constrain'

may lead to models which look equivalent at first sight but are definitely inequivalent if one carefully inspects their details.

I will not go into the details here, but I expect that everybody aiming to understand Alexandrov's reasoning will carefully follow his arguments and understand in detail Dirac's constraint quantization procedure! The analysis of secondary second class constraints modifying the symplectic structure on phase space, i.e. replacing Poisson with Dirac brackets, is key to the whole chapter 3!

They compare the two quantization strategies in the toy model:

Comparing the results of the two approaches, one observes a drastic discrepancy: gamma is either quantized or not. In the former approach for a non-rational gamma the quantization simply does not exist, whereas there are no obstructions in the latter ... Taking into account that the second approach represents actually a result of several possible methods, which all follow the standard quantization rules, it is clear that it is the second quantization that is more favorable. The quantization of gamma does not seem to have any physical reason behind itself. In fact, it is easy to trace out where a mistake has been done in the first approach: it takes too seriously the symplectic structure given by the Poisson brackets, whereas it is the Dirac bracket that describes the symplectic structure which has a physical relevance. It is easy to see that this leads to inconsistency of the first quantization. For example, the Hamiltonian H, ... is simply not defined on the subspace spanned by linear combinations of (3.54) ...
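The mechanism described in this quote, the Dirac bracket overriding the naive Poisson structure, can be seen in a minimal sketch. This is not the paper's model (3.42); it is a hypothetical four-dimensional phase space with the second class pair phi1 = q2, phi2 = p2, restricted to linear observables so that all brackets reduce to matrix algebra:

```python
# Minimal sketch: Dirac vs. Poisson brackets for LINEAR observables on a
# 4-dim phase space (q1, p1, q2, p2). A linear observable is represented by
# its coefficient vector, and the Poisson bracket reduces to f^T Omega g.
OMEGA = [[0, 1, 0, 0],
         [-1, 0, 0, 0],
         [0, 0, 0, 1],
         [0, 0, -1, 0]]

def pb(f, g):
    """Canonical Poisson bracket of two linear observables."""
    return sum(f[i] * OMEGA[i][j] * g[j] for i in range(4) for j in range(4))

q1, p1 = [1, 0, 0, 0], [0, 1, 0, 0]
q2, p2 = [0, 0, 1, 0], [0, 0, 0, 1]

# Second class pair: phi1 = q2 = 0 and phi2 = p2 = 0 do not Poisson-commute.
phis = [q2, p2]
C = [[pb(a, b) for b in phis] for a in phis]          # C_ab = {phi_a, phi_b}
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
Cinv = [[C[1][1] / det, -C[0][1] / det],
        [-C[1][0] / det, C[0][0] / det]]

def dirac(f, g):
    """Dirac bracket {f,g}_D = {f,g} - {f,phi_a} (C^-1)^ab {phi_b,g}."""
    corr = sum(pb(f, phis[a]) * Cinv[a][b] * pb(phis[b], g)
               for a in range(2) for b in range(2))
    return pb(f, g) - corr

print(dirac(q1, p1))  # 1.0: the unconstrained pair keeps its canonical bracket
print(dirac(q2, p2))  # 0.0: the constrained directions are projected out
```

Quantizing with the naive Poisson structure would treat {q2, p2} = 1 as a canonical pair; the Dirac bracket shows that the physically relevant symplectic structure is different, which is exactly the mistake the quote attributes to the 'first quantize, then constrain' strategy.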

The next topic is relevant as a response to Rovelli's "we don't need a Hamiltonian":

However, one can take a “minimalistic” point of view and do not require the existence of a well defined Hamiltonian on the constrained state space. (We thank Carlo Rovelli for discussion of this possibility.) After all, spin foam models are designed to compute transition amplitudes. Therefore, we are really interested not in the Hamiltonian itself, but in its matrix elements and the latter can be defined by using the Hamiltonian and the scalar product on the original unconstrained space

However, this expectation turns out to be wrong. As is clear from the derivations in [20, 132] and has been explicitly demonstrated in a simple cosmological model [142], the vertex amplitude actually appears as a matrix element of the evolution operator. This requires to consider expectation values of higher powers of the Hamiltonian for which the property (3.62) does not hold anymore. This leads to deviations of results obtained by the spin foam strategy from those which are based on the well grounded canonical quantization

Let us summarize what we learnt studying the simple model (3.42):

• The strategy based on “first quantize, then constrain” leads to a canonical quantization which is internally inconsistent as the Hamiltonian operator is ill-defined on the constrained state space.

• The origin of the problem as well as the quantization of the parameter gamma can be traced back to the use of the Poisson symplectic structure which does not take into account the presence of the second class constraints.

• Besides, this approach completely ignores the presence of the secondary second class constraint which is crucial for suppressing the fluctuations of non-dynamical variables and producing the right vertex amplitude in discretized theory.

• An attempt to interpret the results of such quantization only as an approach to compute transition amplitudes using (unphysical) Hamiltonian (3.61) does not work as they turn out to be incompatible with the results of the standard (path integral or canonical) quantization. As a result, the transition amplitudes computed in this way do not have any consistent canonical representation.

In our opinion, all these problems are just manifestations of the fact that the rules of the Dirac quantization cannot be avoided. This is the only correct way to proceed leading to a consistent quantum theory.

The example presented above explicitly reveals the main problems of the new SF models and their origin. All these models start from the symplectic structure provided by the simple BF theory, which ignores constraints of general relativity. In particular, they all use the usual identification of the B-field with the generators of the gauge group, or its gamma-dependent version (3.20), when the constraints are translated into quantum level. But this identification does not agree with the symplectic structure of general relativity

Alexandro and Roche pay attentio to the simplicity constraint which is key to the SF formalism and which seems to be the weakest point. It is this constraint which is requred to turn BF into GR - and it is this constraint which is quantized in the wrong way!

In fact, a special care which is paid to the diagonal simplicity, when it is imposed strongly whereas the cross simplicity constraints are imposed only weakly, results from another common confusion. As we explained in section 3.3.1, this is done because the diagonal simplicity is in the center of the non-commutative constraint algebra of all simplicity constraints and thus interpreted as first class. But this classification would be correct only if there were no other constraints to be considered. It completely ignores the presence of the secondary constraints. The latter do not commute with all simplicity and in particular with the diagonal simplicity. As a result, all these constraints are second class and should be quantized via the Dirac bracket. Given all this, we expect that the new SF models suffer from inconsistencies which we met in the previous subsection. They can be summarized by saying that the statistical models defined by the SF amplitudes do not have a consistent canonical quantization picture, where the vertex amplitude appears as a matrix element of an evolution operator determined by a well defined Hamiltonian. In particular, there is no reason to expect that the new models may be in agreement with LQG or any of its modifications. Note that this incompatibly with the canonical quantization manifests itself in the issues involving the Hamiltonian. This is why one does not see it in a semiclassical analysis or in any investigation restricting to the kinematical level. It should be stressed that this critics is not just about face or edge amplitudes, which depend on details of the path integral measure but can be found in principle from consistency on the gluing of simplices [135]. In fact, the ignorance of the secondary second class constraints has much more profound implications and, what is the most important, it affects the vertex amplitude (see the next subsection). 
The standard prescription that the vertex is obtained by evaluating the boundary state of a 4-simplex on a flat connection is a direct consequence of the employed strategy, which starts by quantizing the topological BF theory, and should be modified to take into account all constraints of general relativity. Of course, the SF models are still well defined as statistical models. But, in our opinion, this is not enough to consider them as candidates for quantum gravity. A good candidate should allow a quantum mechanical representation in terms of wave functions, Hamiltonian, etc., especially if one hopes to find a viable loop quantization of gravity. The point we are making here is that the SF models derived using the strategy “first quantize and then constrain” do not satisfy this requirement.

As I stressed a couple of times the Lagrangian PI is a derived object which cannot be seen as a fundamental entity. The main problem is the construction of the measure taking into account the second class constraints. It is this step where the SF construction seems to fail up to now; it is this step where some weak points from the BF model do show up agains

The SF representation of quantum gravity can be seen as an outcome of a Lagrangian path integral for discretized Plebanski formulation of general relativity. However, the Lagrangian or a configuration space path integral is a derived concept. A more fundamental one is the path integral over the phase space. Its measure can be rigorously derived and in particular it contains d-functions of all second class constraints. On the other hand, the Lagrangian path integral can be obtained from the canonical one only at certain very special circumstances.

Therefore, ..., if one wants to calculate transition functions as one does in SF models, one must use the canonical path integral.The main consequence of this conclusion is that, as we mentioned above, the secondary second class constraints should appear explicitly in the integration measure. We believe that this is an important point missed by the present-day SF models.

Alexandro and Roche stress that all these defects of the new models may be invisble in the semiclassical sector. That means that reproducing GR in the IR is not sufficient as a test for successfil quantization. Tis is trivial as there are several inequivalent models having the same IR limit (this applies to any quantum theory, not only to GR)

One might argue that since the secondary constraints appear as stability conditions for the primary ones and the latter are imposed in the path integral at every moment of time, the secondary constraints should follow automatically and need not to be imposed explicitly. For example, in SF models based on Plebanski formulation one could expect that all set of simplicity constraints ensures the simplicity of bi-vectors at all times and thus it is enough. However, this argument works only at the quasiclassical level where the equations of motion are satisfied. Off-shell the quantum fluctuations of degrees of freedom fixed classically by the secondary constraints are not suppressed if the constraints are not inserted in the path integral.

It is also not seen at the quasiclassical level since the missing constraint is obtained on mass shell anyway. Therefore, it is not in contradiction with the fact that the semiclassical asymptotics of the EPRL and FK amplitudes reproduce the Regge action [147, 148, 149], i.e., the correct classical limit. The problem is that the secondary constraints are not imposed strongly at the quantum level and as a result one might expect the appearance of additional quantum degrees of freedom in the new models

http://arxiv.org/abs/1009.4475

Critical Overview of Loops and Foams

Authors: Sergei Alexandrov, Philippe Roche

(Submitted on 22 Sep 2010)

Abstract: This is a review of the present status of loop and spin foam approaches to quantization of four-dimensional general relativity. It aims at raising various issues which seem to challenge some of the methods and the results often taken as granted in these domains. A particular emphasis is given to the issue of diffeomorphism and local Lorentz symmetries at the quantum level and to the discussion of new spin foam models. We also describe modifications of these two approaches which may overcome their problems and speculate on other promising research directions.

In chapter 3 Alexandrov and Roche discuss spin foam models which may suffer from related issues showing up in different form but being traced back to a common origin (secondary second class constraints, missing Dirac’s quantization scheme, …).

In a nutshell, LQG is supposed to give a Hamiltonian picture of quantum gravity based on the use of specific variables (connections), whereas spin foam models are a certain type of discretized path integral approach to the quantization. A priori these are different approaches using different methods and leading to different results. Of course, in the best case their predictions should coincide and they should be just equivalent quantizations. But at present such an agreement has not been achieved yet.

First Alexandrov and Roche start with a new perspective based on SFs

Most of the constructions of SF models of 4-dimensional general relativity heavily rely on the Plebanski formulation and translate the classical relation between BF theory and gravity directly to the quantum level. In other words they all employ the following strategy:

1. discretize the classical theory putting it on a simplicial complex;

2. quantize the topological BF part of the discretized theory;

3. impose the simplicity constraints at the quantum level.

Thus, instead of quantizing the complicated system obtained after imposing the constraints, they first quantize and then constrain. This strategy is behind all the progress achieved in the construction of 4-dimensional SF models. However, at the same time, this is a very dangerous strategy and, as we believe, it is the reason why most of these models cannot be satisfactory models of quantum gravity. As we will show, it is inconsistent with the Dirac rules of quantization and is somewhat misleading.
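For orientation, the classical setting behind this strategy can be sketched as follows (standard Plebanski conventions; signs and normalizations may differ from the paper's equations):

```latex
% Plebanski action: BF theory plus simplicity constraints
S[B,\omega,\varphi] = \int \left[ B_{IJ} \wedge F^{IJ}(\omega)
    - \tfrac{1}{2}\,\varphi_{IJKL}\, B^{IJ} \wedge B^{KL} \right]
% Varying the Lagrange multiplier \varphi (traceless, \epsilon^{IJKL}\varphi_{IJKL}=0)
% yields the simplicity constraints
B^{IJ} \wedge B^{KL} \;\propto\; \epsilon^{IJKL}
% whose gravitational sector is B^{IJ} = \star\,(e^I \wedge e^J),
% turning topological BF theory into general relativity.
```

The three steps above quantize the first term and only afterwards try to impose the second, which is the move the authors criticize.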

They introduce the models [21] (ELPR) and [20] (FK)

Although the models of [21] (ELPR) and [20] (FK) are in general different from each other and obtained using different ideas, they have several common inputs. First, they both rely on the idea allowing to effectively linearize the simplicity constraints

They discuss the BF model, why it fails to provide a quantization of GR, and they discuss how BF serves as a basis for the new model; they show that some shortcomings of the BF models are not resolved in [21] (ELPR) and [20] (FK)

But in fact (3.19) is stronger because it excludes the topological sector of Plebanski formulation. Thus, the linearization solves simultaneously the problem of the BC model that it does not distinguish between the gravitational and the topological sectors. The new constraint leads directly to the sector we are interested in. ... Second, both models suggest to quantize an extension of Plebanski formulation which includes the Immirzi parameter. This results in crucial deviations from the results of the BC model already at the level of imposing the diagonal simplicity constraint.

The construction of [21] (ELPR) and [20] (FK) contains some 'quantization ambiguities' as expected from chapter 2.

Then, as has been noted already in [131], the constraint (3.21) does not have solutions except some trivial ones. However, appealing to the ordering ambiguity, the authors of the model [21] adjusted the operator in (3.21) so that the constraint does have solutions.

In the following they stress deviations from well-established standard quantization procedures in the steps used to quantize and modify BF theory, leading to the new models

The main suggestion of this model, [EPRL] which distinguishes it from the BC model and was first realized in [128], is that the simplicity constraints should be imposed only in a weak sense, that is, instead of imposing the constraints on the allowed states [annihilating states] one only requires [vanishing of their expectation values sandwiched between physical states!]. This is justified by noting that after identification of the bivectors Bf with generators of the gauge group or a combination thereof (3.20), the simplicity constraints become non-commutative and imposing them strongly leads to inconsistencies, as is well known for any second class constraints. This does not concern the diagonal simplicity constraint, which lies in the center of the constraint algebra and therefore can still be imposed strongly, leading to the restriction (3.21) on the allowed representations
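The inconsistency of strong imposition mentioned here is the standard second-class obstruction; a sketch of the argument:

```latex
% Strong imposition demands \hat\phi_a |\psi\rangle = 0 for all constraints.
% For a second class pair with
[\hat\phi_1, \hat\phi_2] = i\hbar\,\hat{C}, \qquad \hat{C}\ \text{invertible},
% the two conditions \hat\phi_1|\psi\rangle = \hat\phi_2|\psi\rangle = 0 would imply
\hat{C}\,|\psi\rangle = 0,
% contradicting invertibility of \hat{C}. Hence only weak imposition,
\langle \psi |\, \hat\phi_a \,| \psi' \rangle = 0,
% remains available for the non-central (cross simplicity) constraints.
```

This is why EPRL treats the diagonal constraint (central, so it commutes) differently from the cross simplicity constraints.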

[in FK] one starts again from the partition function for BF theory (3.12) where the simplicity constraints should be implemented as restrictions on the representation labels. However, before doing that one makes a refinement of the decomposition (3.12) using the coherent state techniques developed in [19]. Here we concentrate on the Euclidean case. Although the Lorentzian case was also considered in [20], the corresponding construction is much more complicated and even the Immirzi parameter has not been incorporated in it so far.

They doubt that the SFs and canonical LQG agree in the kinematical Hilbert space; the 'proofs' are mostly based on unphysical, i.e. non-gauge-invariant, quantities subject to quantization anomalies.

Due to this fact it was claimed that the boundary states of the new models are the ordinary SU(2) spin networks [128] and it is now widely believed that there is a perfect agreement between the new SF models and LQG at the kinematical level [10]. However, it is easy to see that this is just not true. First of all, the states induced on the boundary of a spin foam are not the ordinary spin networks, but projected ones considered in section 2.2.2

However, on one hand, there is no fundamental reason to perform such a projection. And on the other hand, this relation shows that the kinematical states of LQG and the boundary states of the EPRL model are indeed physically different and the agreement between their labels is purely formal. The claimed agreement is often justified by comparison of the spectra of geometric operators, area [21] and volume [138]. By appropriately adjusting the ordering, the spectra in spin foams and LQG can be made to coincide. However, the operators which are actually evaluated in these papers are not the standard ones, but shifted by constraints.

But in the EPRL approach the boundary states are supposed to be integrated over these normals so that the operator corresponding to (3.39) is simply not defined! On top of that, even if one drops the integration over xt, as we argue below, and gets a well defined operator on a modified state space, we see that the quantization of the geometric operators is not unique. To get the coincidence with LQG requires ad hoc choice of the ordering and of the classical expression to be quantized.
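For context, the standard LQG result these comparisons aim to reproduce is the area spectrum of a surface punctured by spin network links with labels j_i (γ the Immirzi parameter, ℓ_P the Planck length); a sketch in common conventions:

```latex
\hat{A}\,|\{j_i\}\rangle \;=\; 8\pi\gamma\,\ell_P^2\, \sum_i \sqrt{j_i(j_i+1)}\;|\{j_i\}\rangle
```

Ordering ambiguities in quantizing the Casimir can shift \sqrt{j(j+1)} to, e.g., j or j+1/2, which is exactly the kind of ad hoc adjustment the quoted passage criticizes.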

After their discussion of EPRL and FK they focus on the imposition of constraints. To understand these issues one has to be familiar with the Dirac quantization procedure (second class constraints, Dirac brackets instead of Poisson brackets)!

All models presented in the previous section have been derived following the strategy of section 3.1.2: first quantize and then constrain. Now we want to reconsider the resulting constructions taking lessons from the canonical approach. As we showed in the previous section, the spin foam quantization originates in Plebanski formulation of general relativity. The canonical analysis of this formulation has been carried out in [139, 140, 141] and turns out to be essentially equivalent to the Lorentz covariant canonical formulation of the Hilbert–Palatini action [18] once ε_ijk B^jk is identified with the momentum P̃_i. The Immirzi parameter is also easily included and appears in the same way. Thus, the canonical structure to be quantized can be borrowed from section 2.2.1. In particular, the role of the simplicity constraints is played by the constraints (2.30).

In section 3.4.1 Alexandrov and Roche show, based on a toy model, how the two quantization strategies

1) a la Dirac and

2) 'first quantize using Poisson - then constrain'

may lead to models which look equivalent at first sight but are definitely inequivalent if one carefully inspects their details.

I do not go into the details here but I expect that everybody aiming to understand Alexandrov's reasoning will carefully follow his arguments and will understand in detail Dirac's constraint quantization procedure! The analysis of secondary second class constraints modifying the symplectic structure on phase space, i.e. replacing Poisson with Dirac brackets, is key to the whole chapter 3!
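To make the role of the Dirac bracket concrete, here is a minimal sketch in Python/sympy for a hypothetical four-dimensional phase space (q1, p1, q2, p2) with the second class pair φ1 = q2, φ2 = p2 (a toy example of mine, not the model (3.42) of the paper):

```python
# Sketch: Dirac bracket for a toy second class system (hypothetical example).
import sympy as sp

q1, p1, q2, p2 = sp.symbols('q1 p1 q2 p2')
coords = [q1, q2]
momenta = [p1, p2]

def poisson(f, g):
    """Canonical Poisson bracket {f,g} = sum_i (df/dq_i dg/dp_i - df/dp_i dg/dq_i)."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(coords, momenta))

# Second class constraints: their bracket matrix C_ab = {phi_a, phi_b}
# is invertible, which is the defining property of a second class pair.
phis = [q2, p2]
C = sp.Matrix(2, 2, lambda a, b: poisson(phis[a], phis[b]))
Cinv = C.inv()

def dirac(f, g):
    """Dirac bracket {f,g}_D = {f,g} - {f,phi_a} (C^-1)^{ab} {phi_b,g}."""
    corr = sum(poisson(f, phis[a]) * Cinv[a, b] * poisson(phis[b], g)
               for a in range(2) for b in range(2))
    return sp.simplify(poisson(f, g) - corr)

print(dirac(q1, p1))  # physical pair keeps its canonical bracket: 1
print(dirac(q2, p2))  # constrained pair drops out: 0
```

The physical pair (q1, p1) keeps its canonical bracket while the constrained pair is consistently set to zero; quantizing the Poisson bracket instead would treat q2, p2 as dynamical, which is exactly the mistake discussed in the toy model below.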

They compare the two quantization strategies in the toy model:

Comparing the results of the two approaches, one observes a drastic discrepancy: gamma is either quantized or not. In the former approach for a non-rational gamma the quantization simply does not exist, whereas there are no obstructions in the latter ... Taking into account that the second approach represents actually a result of several possible methods, which all follow the standard quantization rules, it is clear that it is the second quantization that is more favorable. The quantization of gamma does not seem to have any physical reason behind it. In fact, it is easy to trace where the mistake has been made in the first approach: it takes too seriously the symplectic structure given by the Poisson brackets, whereas it is the Dirac bracket that describes the symplectic structure which has a physical relevance. It is easy to see that this leads to inconsistency of the first quantization. For example, the Hamiltonian H, ... is simply not defined on the subspace spanned by linear combinations of (3.54) ...

The next topic is relevant as a response to Rovelli's "we don't need a Hamiltonian"

However, one can take a “minimalistic” point of view and not require the existence of a well defined Hamiltonian on the constrained state space. (We thank Carlo Rovelli for discussion of this possibility.) After all, spin foam models are designed to compute transition amplitudes. Therefore, we are really interested not in the Hamiltonian itself, but in its matrix elements, and the latter can be defined by using the Hamiltonian and the scalar product on the original unconstrained space

However, this expectation turns out to be wrong. As is clear from the derivations in [20, 132] and has been explicitly demonstrated in a simple cosmological model [142], the vertex amplitude actually appears as a matrix element of the evolution operator. This requires considering expectation values of higher powers of the Hamiltonian, for which the property (3.62) does not hold anymore. This leads to deviations of results obtained by the spin foam strategy from those which are based on the well grounded canonical quantization
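The obstruction can be made explicit by expanding the evolution operator: its matrix elements involve all powers of the Hamiltonian, not only the first. A sketch:

```latex
\langle f|\, e^{-\frac{i}{\hbar}\hat{H}T} \,|i\rangle
  \;=\; \sum_{n=0}^{\infty} \frac{1}{n!}\left(\frac{-iT}{\hbar}\right)^{\!n}
        \langle f|\,\hat{H}^{\,n}\,|i\rangle
```

For n ≥ 2 the intermediate states are no longer confined to the constrained subspace, so a property established only for single insertions of H, such as (3.62), need not survive.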

Let us summarize what we learnt studying the simple model (3.42):

• The strategy based on “first quantize, then constrain” leads to a canonical quantization which is internally inconsistent as the Hamiltonian operator is ill-defined on the constrained state space.

• The origin of the problem as well as the quantization of the parameter gamma can be traced back to the use of the Poisson symplectic structure which does not take into account the presence of the second class constraints.

• Besides, this approach completely ignores the presence of the secondary second class constraint which is crucial for suppressing the fluctuations of non-dynamical variables and producing the right vertex amplitude in discretized theory.

• An attempt to interpret the results of such quantization only as an approach to compute transition amplitudes using (unphysical) Hamiltonian (3.61) does not work as they turn out to be incompatible with the results of the standard (path integral or canonical) quantization. As a result, the transition amplitudes computed in this way do not have any consistent canonical representation.

In our opinion, all these problems are just manifestations of the fact that the rules of the Dirac quantization cannot be avoided. This is the only correct way to proceed leading to a consistent quantum theory.

The example presented above explicitly reveals the main problems of the new SF models and their origin. All these models start from the symplectic structure provided by the simple BF theory, which ignores constraints of general relativity. In particular, they all use the usual identification of the B-field with the generators of the gauge group, or its gamma-dependent version (3.20), when the constraints are translated into quantum level. But this identification does not agree with the symplectic structure of general relativity

Alexandrov and Roche pay attention to the simplicity constraint, which is key to the SF formalism and which seems to be its weakest point. It is this constraint which is required to turn BF into GR - and it is this constraint which is quantized in the wrong way!

In fact, a special care which is paid to the diagonal simplicity, when it is imposed strongly whereas the cross simplicity constraints are imposed only weakly, results from another common confusion. As we explained in section 3.3.1, this is done because the diagonal simplicity is in the center of the non-commutative constraint algebra of all simplicity constraints and thus interpreted as first class. But this classification would be correct only if there were no other constraints to be considered. It completely ignores the presence of the secondary constraints. The latter do not commute with all simplicity and in particular with the diagonal simplicity. As a result, all these constraints are second class and should be quantized via the Dirac bracket. Given all this, we expect that the new SF models suffer from inconsistencies which we met in the previous subsection. They can be summarized by saying that the statistical models defined by the SF amplitudes do not have a consistent canonical quantization picture, where the vertex amplitude appears as a matrix element of an evolution operator determined by a well defined Hamiltonian. In particular, there is no reason to expect that the new models may be in agreement with LQG or any of its modifications. Note that this incompatibility with the canonical quantization manifests itself in the issues involving the Hamiltonian. This is why one does not see it in a semiclassical analysis or in any investigation restricting to the kinematical level. It should be stressed that this criticism is not just about face or edge amplitudes, which depend on details of the path integral measure but can be found in principle from consistency on the gluing of simplices [135]. In fact, the ignorance of the secondary second class constraints has much more profound implications and, what is the most important, it affects the vertex amplitude (see the next subsection).
The standard prescription that the vertex is obtained by evaluating the boundary state of a 4-simplex on a flat connection is a direct consequence of the employed strategy, which starts by quantizing the topological BF theory, and should be modified to take into account all constraints of general relativity. Of course, the SF models are still well defined as statistical models. But, in our opinion, this is not enough to consider them as candidates for quantum gravity. A good candidate should allow a quantum mechanical representation in terms of wave functions, Hamiltonian, etc., especially if one hopes to find a viable loop quantization of gravity. The point we are making here is that the SF models derived using the strategy “first quantize and then constrain” do not satisfy this requirement.

As I stressed a couple of times, the Lagrangian PI is a derived object which cannot be seen as a fundamental entity. The main problem is the construction of the measure taking into account the second class constraints. It is this step where the SF construction seems to fail up to now; it is this step where some weak points of the BF model show up again.

The SF representation of quantum gravity can be seen as an outcome of a Lagrangian path integral for discretized Plebanski formulation of general relativity. However, the Lagrangian or a configuration space path integral is a derived concept. A more fundamental one is the path integral over the phase space. Its measure can be rigorously derived and in particular it contains delta-functions of all second class constraints. On the other hand, the Lagrangian path integral can be obtained from the canonical one only under certain very special circumstances.

Therefore, ..., if one wants to calculate transition functions as one does in SF models, one must use the canonical path integral. The main consequence of this conclusion is that, as we mentioned above, the secondary second class constraints should appear explicitly in the integration measure. We believe that this is an important point missed by the present-day SF models.
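The canonical path integral referred to here has a standard form (the Senjanovic measure); a sketch, with Φ_m the second class constraints, G_a the first class constraints and χ_a gauge conditions:

```latex
Z \;=\; \int \mathcal{D}p\,\mathcal{D}q\;
    \prod_m \delta(\Phi_m)\,\bigl|\det\{\Phi_m,\Phi_n\}\bigr|^{1/2}\;
    \prod_a \delta(G_a)\,\delta(\chi_a)\,\bigl|\det\{\chi_a,G_b\}\bigr|\;
    e^{\frac{i}{\hbar}\int dt\,(p\,\dot q - H)}
% The delta-functions of the second class constraints and their determinant
% factor are exactly the ingredients said to be missing in current SF measures.
```

Only under special circumstances can the momentum integrations be performed to yield a Lagrangian path integral with a trivial measure, which is the point made in the quote above.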

Alexandrov and Roche stress that all these defects of the new models may be invisible in the semiclassical sector. That means that reproducing GR in the IR is not sufficient as a test of successful quantization. This is trivially true, as there are several inequivalent models having the same IR limit (this applies to any quantum theory, not only to GR)

One might argue that since the secondary constraints appear as stability conditions for the primary ones and the latter are imposed in the path integral at every moment of time, the secondary constraints should follow automatically and need not be imposed explicitly. For example, in SF models based on Plebanski formulation one could expect that the full set of simplicity constraints ensures the simplicity of bi-vectors at all times and thus it is enough. However, this argument works only at the quasiclassical level where the equations of motion are satisfied. Off-shell the quantum fluctuations of degrees of freedom fixed classically by the secondary constraints are not suppressed if the constraints are not inserted in the path integral.
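The stability argument mentioned here is the second step of Dirac's algorithm: a primary constraint must be preserved in time, which either fixes a multiplier or generates a secondary constraint. A sketch:

```latex
\dot\phi_a \;=\; \{\phi_a, H_{\rm tot}\}
  \;=\; \{\phi_a, H\} + \lambda^b \{\phi_a, \phi_b\} \;\approx\; 0
% If \{\phi_a,\phi_b\} is invertible this fixes the multipliers \lambda^b
% (second class case); otherwise it produces secondary constraints \psi \approx 0.
% Classically these hold on-shell automatically, but off-shell fluctuations
% are not suppressed unless \psi is inserted in the quantum measure.
```

This is why "the primary constraints are imposed at every time step" does not rescue the SF measure: the secondary constraints only follow when the equations of motion hold.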

It is also not seen at the quasiclassical level since the missing constraint is obtained on mass shell anyway. Therefore, it is not in contradiction with the fact that the semiclassical asymptotics of the EPRL and FK amplitudes reproduce the Regge action [147, 148, 149], i.e., the correct classical limit. The problem is that the secondary constraints are not imposed strongly at the quantum level and as a result one might expect the appearance of additional quantum degrees of freedom in the new models

Regards,

Tom

«No one else could gain admittance here, for this entrance was meant only for you. I am going now and will close it.»


### Re: LQG is seriously ill

Alexandrov proves that there is a two-parameter family of inequivalent theories; one of them is equivalent to the standard LQG, another one (CLQG) differs in the kinematical sector (the area spectrum is not a Dirac invariant, therefore this need not be a serious problem). Then Alexandrov shows that "standard" LQG cannot be formulated in d ≠ 4 dimensions, and he shows that especially in d=3 it is not even defined (the quantization is special to d=4 b/c one uses SO(3,1) ~ SU(2)*SU(2)). Therefore it's reasonable to believe in a family of theories! In addition he shows that the "standard" LQG fails to be in line with the Dirac constraint quantization procedure and that it quantizes the wrong symplectic structure. He shows that the implementation of the constraints is unreasonable and that this may result in anomalies in the diffeomorphism group at the quantum level. He demonstrates that standard LQG misses certain structures of the diffeomorphism group, especially the mapping class group.

All this could be a problem to canonical quantization only, but Alexandrov shows that the new SF models (as of spring 2010) suffer from the same problems b/c again the treatment of the secondary second class constraints is similar to the canonical approach.

The problem is that neither is the derivation of the SFs in-line with well-known QM formalisms, nor do the results prove that GR is recovered in the IR. So I agree that there is one theory, but it is still unclear whether it's consistent and unique. It could very well be that there is no theory at all, or that there is an uncountable family of theories.

Alexandrov and Roche conclude that all these shortcomings are related to the previously discussed problems of the canonical approach. Therefore using the weaker PI formalism and focussing on the semiclassical limit does not help. It's the construction of the theory, not the derivation of its limiting cases and physical results that has to be re-investigated.

Summarizing our analysis of the spin foam models, we see that already the proper consideration of the diagonal simplicity constraint shows that all attempts to include the Immirzi parameter following the usual strategy lead to inconsistent quantizations. All problems of the EPRL and FK models can be traced back to the fact that they quantize the symplectic structure of the unconstrained BF theory and impose (a half of) the second class constraints after quantization. So we conclude that the widely used strategy “first quantize and then constrain” does not work and should be abandoned. ... However, a concrete implementation of the secondary constraints both in the boundary states and in the vertex amplitude (3.73) remains problematic. This is precisely the same problem which plagues the canonical approach. In particular, it is closely related with the non-commutativity of the connection. Thus, although it is possible to obtain a picture similar to the loop quantization, the goal, which is to provide a credible spin foam model and, in particular, a vertex amplitude correctly taking into account all constraints of general relativity, is far from being accomplished.

I would like to summarize some points where Alexandrov and Roche suspect problems


- the problems of the EPRL and FK models can be traced back to the fact that they quantize the symplectic structure of the unconstrained BF theory [Poisson brackets] and impose (a half of) the second class constraints after quantization.
- So we conclude that the widely used strategy “first quantize and then constrain” does not work and should be abandoned.
- As a result, all these constraints are second class and should be quantized via the Dirac bracket.
- These inconsistencies ... can be summarized by saying that the statistical models defined by the SF amplitudes do not have a consistent canonical quantization picture, where the vertex amplitude appears as a matrix element of an evolution operator determined by a well defined Hamiltonian [obtained in correspondence with Dirac's quantization procedure therefore taking into account second class constraints properly]
- In fact, the ignorance of the secondary second class constraints ... affects the vertex amplitude [and the measure].
- Of course, the SF models are still well defined as statistical models. But, in our opinion, this is not enough to consider them as candidates for quantum gravity.
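For reference, "quantized via the Dirac bracket" means the following standard construction (Dirac's general treatment of second class constraints, not anything specific to this paper): given second class constraints $C_i$ whose constraint matrix $\Delta_{ij}$ is invertible, one replaces the Poisson bracket by

```latex
\{A, B\}_D \;=\; \{A, B\} \;-\; \{A, C_i\}\,(\Delta^{-1})^{ij}\,\{C_j, B\},
\qquad \Delta_{ij} \;=\; \{C_i, C_j\},
```

and quantizes $\{\cdot,\cdot\}_D$ instead. Since $\{A, C_i\}_D = 0$ identically, the second class constraints can then be imposed strongly as operator equations. In the LQG context this modifies the symplectic structure of the connection, which is why Alexandrov and Roche stress its non-commutativity.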

Regards,

Tom

«No one else could gain admittance here, for this entrance was meant only for you. I am going now, and I shall close it.»

### Re: LQG is seriously ill

To conclude, I would like to look at one more paper that discusses similar problems.

... Benjamin Bahr, Rodolfo Gambini, Jorge Pullin released a paper on arXiv addressing the issues raised by Alexandrov et al.

http://arxiv.org/abs/1111.1879

Discretisations, constraints and diffeomorphisms in quantum gravity

Authors: Benjamin Bahr, Rodolfo Gambini, Jorge Pullin

(Submitted on 8 Nov 2011)

Abstract: In this review we discuss the interplay between discretization, constraint implementation, and diffeomorphism symmetry in Loop Quantum Gravity and Spin Foam models. To this end we review the Consistent Discretizations approach, which is an application of the master constraint program to construct the physical Hilbert space of the canonical theory, as well as the Perfect Actions approach, which aims at finding a path integral measure with the correct symmetry behavior under diffeomorphisms.

In one section on the canonical approach they say

LQG is a canonical approach, in which the kinematical Hilbert space is well-understood, and the states of which can be written as a generalization of Penrose’s Spin-Network Functions [14]. Although the formalism is inherently continuous, the states carry many discrete features of lattice gauge theories, which rests on the fact that one demands Wilson lines to become observables. The constraints separate into (spatial) diffeomorphism- and Hamiltonian constraints. While the finite action of the spatial diffeomorphisms can be naturally defined as unitary operators, the Hamiltonian constraints exist as quantizations of approximate expressions of the continuum constraints, regularized on an irregular lattice. It is known that the regularized constraints do not close, so that the algebra contains anomalies, whenever the operators are defined on fixed graphs which are not changed by the action of the operators [15]. If defined in a graph-changing way, the commutator is well-defined, even in the limit of the regulator going to zero in a controlled way [16]. However, the choice of operator topologies to choose from in order for the limit to exist is nontrivial [17], and the resulting Hamiltonian operators commute [18]. Since they commute on diffeomorphism-invariant states, the constraint algebra is satisfied in that sense. Furthermore, however, the discretization itself is not unique, and the resulting ambiguities survive the continuum limit [19]. In the light of this, it is non-trivial to check whether the correct physics is encoded in the constraints.
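The closure being discussed here is that of Dirac's hypersurface deformation algebra, which the quantum constraint operators should reproduce without anomalies (schematically, in the standard ADM form with lapse $N$, shift $\vec N$, diffeomorphism constraint $D$ and Hamiltonian constraint $H$):

```latex
\{ D[\vec N],\, D[\vec M] \} \;=\; D[\mathcal{L}_{\vec N}\,\vec M], \qquad
\{ D[\vec N],\, H[M] \} \;=\; H[\mathcal{L}_{\vec N}\, M], \\
\{ H[N],\, H[M] \} \;=\; D\!\left[\, q^{ab}\,(N\,\partial_b M - M\,\partial_b N) \,\right].
```

Note that the last bracket involves the inverse spatial metric $q^{ab}$, so the "structure constants" are actually phase-space functions. This is what makes an anomaly-free operator representation so hard, and why Hamiltonian operators that merely commute on diffeomorphism-invariant states only test the algebra in a weak, on-shell sense.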

Then they comment on Regge-like discretizations

Another problem arises within any attempt to build a quantum gravity theory based on Regge discretizations: Generically, breaking of gauge symmetries within the path integral measure leads to anomalies, in particular problematic in interacting theories [55]. In quantum theories based on Regge calculus, the path integral will, for a very fine triangulation, contain contributions of lots of almost gauge equivalent discrete metrics, each with nearly the same amplitude. Hence the amplitude will not only contain infinities of the usual field theoretic nature, but also coming from the effective integration over the diffeomorphism gauge orbit. This is in particular a problem for the vertex expansion of the spin foam path integral, as advocated in [56]. The triangulations with many vertices will contribute much more to the sum than the triangulations with few vertices, so that, at every order, the correction terms dominate, rendering the whole sum severely divergent.

Their conclusion is not very positive

On the other side, in chapter 3 we have demonstrated in which sense the diffeomorphism symmetry is broken in the covariant setting, in particular in Regge Calculus. We have described how one might hope to restore diffeo symmetry on the discrete level, by replacing the Regge action with a so-called perfect action, which can be defined by an iterative coarse-graining process. On the quantum side, this process resembles a Wilsonian renormalization group flow, and it has been shown how, with this method, one can construct both the classical discrete action, as well as the quantum mechanical propagator, with the correct implementation of diffeomorphism symmetry, in certain mechanical toy models, as well as for 3d GR. Whether a similar construction works in 4d general relativity is still an open question, being related to issues of locality and renormalizability, which are still largely open in this context.

Tom

«No one else could gain admittance here, for this entrance was meant only for you. I am going now, and I shall close it.»

### Re: LQG is seriously ill

Hello everyone.

There have recently been some critical discussions about the status of LQG. I would like to draw your attention to this topic once more - even if the English texts may seem daunting at first. In the future the discussion will of course continue in German.

Here are some interesting links as well: the English Wikipedia now has some quite good overview articles on the subject. So if you are interested in what LQG is actually about, where the current research focuses lie, and where serious problems may be hiding, these would be a good starting point.

http://en.wikipedia.org/wiki/Loop_quantum_gravity

http://en.wikipedia.org/wiki/History_of ... um_gravity

http://en.wikipedia.org/wiki/Ashtekar_variables

http://en.wikipedia.org/wiki/Hamiltonia ... int_of_LQG

http://en.wikipedia.org/wiki/Spin_network

http://en.wikipedia.org/wiki/Spin_foam

Tom

«No one else could gain admittance here, for this entrance was meant only for you. I am going now, and I shall close it.»
