- (Information) Paradox Lost

Tim Maudlin

arXiv:1705.03541 [physics.hist-ph]

Here is the problem. The dynamics of quantum field theories is always reversible. It also preserves probabilities which, taken together (assuming linearity), means the time-evolution is unitary. That quantum field theories are unitary depends on certain assumptions about space-time, notably that space-like hypersurfaces – a generalized version of moments of ‘equal time’ – are complete. Space-like hypersurfaces after the entire evaporation of black holes violate this assumption. They are, as the terminology has it, not complete Cauchy surfaces. Hence, there is no reason for time-evolution to be unitary in a space-time that contains a black hole. What’s the paradox then, Maudlin asks.
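For reference, the standard packaging of these assumptions (a textbook sketch, not taken from Maudlin’s paper): probability preservation on every state, together with linearity, forces the evolution map to be an isometry, and reversibility upgrades it to a unitary.

```latex
% Probability preservation for all states, plus linearity, gives (by
% polarization) preservation of all inner products:
\|U\psi\| = \|\psi\| \;\;\forall\,\psi
\quad\Longrightarrow\quad
\langle U\psi \mid U\phi \rangle = \langle \psi \mid \phi \rangle
\quad\Longrightarrow\quad
U^{\dagger} U = \mathbb{1}.
% Reversibility supplies a two-sided inverse, so U U^{\dagger} = \mathbb{1}
% as well: the time-evolution is unitary.
```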

First, let me point out that this is hardly news. As Maudlin himself notes, this is an old story, though I admit it’s often not spelled out very clearly in the literature. In particular the Susskind-Thorlacius paper that Maudlin picks on is wrong in more ways than I can possibly get into here. Everyone in the field who has their marbles together knows that time-evolution is unitary on “nice slices” – which are complete Cauchy-hypersurfaces – *at all finite times*. The non-unitarity comes from eventually cutting these slices. The slices that Maudlin uses aren’t quite as nice because they’re discontinuous, but they essentially tell the same story.

What Maudlin does not spell out however is that knowing where the non-unitarity comes from doesn’t help much to explain why we observe it to be respected. Physicists are using quantum field theory here on planet Earth to describe, for example, what happens in LHC collisions. For all these Earthlings know, there are lots of black holes throughout the universe and their current hypersurface hence isn’t complete. Worse still, in principle black holes can be created and subsequently annihilated in any particle collision as virtual particles. This would mean then, according to Maudlin’s argument, we’d have no reason to even expect a unitary evolution because the mathematical requirements for the necessary proof aren’t fulfilled. But we do.

So that’s what irks physicists: If black holes violated unitarity all over the place, how come we don’t notice? This issue is usually phrased in terms of the scattering-matrix, which asks a concrete question: If I could create a black hole in a scattering process, how come we never see any violation of unitarity?

Maybe we do, you might say, or maybe it’s just too small an effect. Yes, people have tried that argument, which is the whole discussion about whether unitarity maybe just is violated etc. That’s the place where Hawking came from all these years ago. Does Maudlin want us to go back to the 1980s?

In his paper, he also points out correctly that – from a strictly logical point of view – there’s nothing to worry about, because the information that fell into a black hole can be kept in the black hole forever without any contradictions. I am not sure why he doesn’t mention that this isn’t a new insight either – it’s what goes in the literature as a remnant solution. Now, physicists normally assume that inside of remnants there is no singularity, because nobody really believes the singularity is physical, whereas Maudlin keeps the singularity; but from the outside perspective that’s entirely irrelevant.

It is also correct, as Maudlin writes, that remnant solutions have been discarded on spurious grounds, with the result that research on the black hole information loss problem has grown into a huge bubble of nonsense. The most commonly named objection to remnants – the pair production problem – has no justification because – as Maudlin writes – it presumes that the volume inside the remnant is small, for which there is no reason. This too is hardly news. Lee and I pointed this out, for example, in our 2009 paper. You can find more details in a recent review by Chen *et al*.

The other objection against remnants is that this solution would imply that the Bekenstein-Hawking entropy doesn’t count microstates of the black hole. This idea is very unpopular with string theorists who believe that they have shown the Bekenstein-Hawking entropy counts microstates. (Fyi, I think it’s a circular argument because it assumes a bulk-boundary correspondence ab initio.)

Either way, none of this is really new. Maudlin’s paper is just reiterating all the options that physicists have been chewing on forever: Accept unitarity violation, store information in remnants, or finally get it out.

The real problem with black hole information is that nobody knows what happens with it. As time passes, you inevitably come into a regime where quantum effects of gravity are strong and nobody can calculate what happens then. The main argument we are seeing in the literature is whether quantum gravitational effects become noticeable before the black hole has shrunk to a tiny size.

So what’s new about Maudlin’s paper? The condescending tone by which he attempts public ridicule strikes me as bad news for the – already conflict-laden – relation between physicists and philosophers.

## 1,249 comments:

Maybe we should stop talking of just "information" - which is more akin to communication engineering - and start thinking of "Information Processes", in which case we have a process transformed into another process after some nice free fall exercise. (Think of a "black hole CPU"!)

https://www.youtube.com/watch?v=7zVeOYlhA78

Tnx Bee! Between the two of you, you have provided much clarity for this reader!

Dear Dr. B.

There is a thing I do not understand in the remnant solutions.

As I understand it, black holes evaporate (almost) completely but a lot of information is left in the remnant. I was taught that information requires a carrier that has energy/mass. In the end, the remnant should have a lot of information but very little mass.

So, I gather one or both of these "understandings" of mine must be wrong, but which one is it?

Cosmic background radiation is blackbody at 2.72548 kelvins (2.34864 eV). A black hole (BH) with this Hawking temperature has a mass of 2.263×10^(-8) solar masses and a lifetime of 2.432×10^35 Gyr.

No contemporary BH evaporates - by a huge margin. Low-mass primordial BHs would be exploding. A 13.82 Gyr BH is 8.7×10^(-20) solar masses with radius 2.6×10^(-7) nm, oozing extreme observables. LIGO events GW150914 and GW151226 merged to equilibrium within milliseconds, with 4.6% emitted binding energy in both cases - 2D+ε soap bubbles merging with no interior volume and no wildly gyrating singularities therein.

BH theories give exotic predictions because they model unreal constructs. arXiv:1705.01597

Nothing is empirical. For philosophy, the black hole paradox doesn't matter. It happens at an astronomical scale, very far away from humans' problems. It is a problem for the gods, not for humans.

If spacetime were continuous over the whole time of the existence of the black hole, wouldn't that remove the paradox - since information could eventually escape as the event horizon recedes? Wouldn't that move the question to whether or not spacetime is continuous?

Sabine, I'm having trouble understanding what, if any, substantive dispute there is between you and Maudlin on this.

The main point of the blog post seems to be that there's nothing new in Maudlin's paper. But that's also the main point of Maudlin's paper, whose abstract says, "The resources for resolving the "paradox" are familiar and uncontroversial, as has been pointed out in the literature."

Perhaps there's a disagreement whether it's worth writing something that reminds people of something familiar and uncontroversial. But if these are things to which insufficient attention is being paid, and whose significance hasn't been appreciated, then, it seems to me, it is worth doing.

Also, I think there's an important difference between saying that the significance of these familiar facts hasn't been appreciated---which is what Maudlin says---and saying that there are physicists that don't understand them.

Is the main point of disagreement, then, over Maudlin's claim that the significance of the fact that Sigma_2 is not a Cauchy surface has not been widely appreciated?

Theophanis,

It's been said many times before, but clearly not often enough. The reference to information is a red herring. It is entirely irrelevant exactly what is meant by information here.

Rob,

Yes, very little energy, and potentially lots of bits. This means very little energy per bit.

Wayne,

For all I can tell I don't disagree with Maudlin. I merely think the paper lacks some context and makes physicists look rather stupid by leaving out part of the story. I hope to have provided that part of the story here.

Ambi,

Space-time in GR is continuous. I'm not sure what you mean.

"So what’s new about Maudlin’s paper? The condescending tone by which he attempts public ridicule strikes me as bad news for the – already conflict-laden – relation between physicists and philosophers."

I suspect a lot of this is just Tim Maudlin's style. It's pretty much how he always writes. In philosopher-philosopher discussions too he often comes across as somewhat condescending - a tone that can cross over into contemptuous.

It'd be unfortunate if it aggravated any philosophers vs physicists conflicts. I don't think it's specific to that divide at all.

In my very humble opinion, the conflict-laden relationship between physicists and philosophers is a consequence of the orthogonality of their respective language-based axioms and premises. Hence, conversations between them are hopelessly bogged down by the impedance of their dialog.

Sabine,

I mean, if one has only passed the event horizon, information can still travel in all directions, including outwards. It's just that it cannot escape to infinity anymore. Under the math of GR with classical gravity, information travels outwards slower than spacetime gets stretched by the ongoing collapse, so it would remain in the black hole forever.

If however the stretching of spacetime would be somehow reversed (eg by mass loss due to Hawking radiation), information could continue to travel outwards - removing the paradox.

If one has already passed inside the apparent horizon, information couldn't travel outwards in the first place. But how does one know?

There is a real question in my mind as to whether philosophers have anything useful or interesting to say about science.

I would recommend you read the paper by Prof. Dr. Stefan Hofmann from LMU Munich on "Classical versus Quantum Completeness."

https://arxiv.org/abs/1504.05580

Jillur,

I know the paper. Also, by way of a weird coincidence, I talked to Stefan just yesterday. The paper you mention is relevant to the point I made in my paper with Lee, but not to the one discussed here.

CIP,

Ironically, I've spent the whole week at the Munich Center for Mathematical Philosophy talking to philosophers, so I'm inclined to say the answer is yes. Even on the bh infloss problem I think that's the case. Unfortunately, Maudlin's paper doesn't address what I think are the relevant points. I'm kind of curious to see if anyone else in the community takes note of it at all. Best,

B.

Ambi,

I don't know what you mean by 'stretching' but, yes, if the black hole evaporates there are cases in which an outwards-traveling particle will eventually come out again. I can't see what this has to do with your previous comment though.

There is a conflict between philosophers and physicists only because many students of philosophy (and unfortunately, some professional philosophers also) are of the belief that arguing with words is enough. They therefore find it perplexing that physicists and mathematicians prefer to manipulate symbols. Manipulating symbols does not confer understanding, they say. Utter rubbish.

Hell, philosophers still consider the "Paradox of the Pile" a paradox while those of us who learned mathematics know this is just a failure of proper definition. Humans do not have the cognitive ability to understand nature without the symbolism of mathematics. It's just a fact.

We have already known since the time of Russell that words alone are not sufficient in arguments. Words are self-referential and inherently contradictory. I wish more students of philosophy understood that. Then they could appreciate why manipulating symbols (aka rewriting rules) is such a powerful way of creating understanding.

Sabine, I understand that particles can be in a superposition, so why not spacetime? Isn't it possible that a black hole is in some kind of superposition with a white hole? Only the white hole part has a very low probability amplitude. Can't information escape via the low probability white hole counterpart? Just like the sun has a small probability to teleport 1 lightyear away.

Patat,

Yes, that's possible. It's also exceedingly unlikely though.

I think it is unlikely on a short time interval. On an extremely long time interval I think it is guaranteed. It takes eons for a black hole to evaporate.

Silly question, but what is the status of energy conservation over the lifetime of a black hole?

My understanding is that a physically meaningful conservation law arises from a symmetry of the vacuum (or a nominal background) state (rather than just a mathematical symmetry of the theory).

I see two problems applying this to black holes:

1. It is hard to see what the time-translation-invariant background is that includes a singularity appearing and/or disappearing.

2. For realistic black holes, the universe will expand by a significant factor over their lifetime, and the local region around the black hole deviates from the approximation of a uniformly expanding universe. So presumably we bang our head against the fact that energy is only locally conserved in GR.

Getting back to the information paradox: if there is a 'problem' with conservation of energy for black holes, then loss of information is natural. The lost information can be associated with lost energy, and you can hide both, by either accountancy (put them in a column labeled 'lost') or a philosophy (invent child universes spawned from the black hole).

Bee, I don't see any reason to be annoyed by Maudlin. It is a lucid paper, and if the physicists didn't beat him to writing it, I don't think they can complain, even though they already know everything that Maudlin wrote.

The fragments of Cauchy surfaces we consider for say, analyzing LHC experiments do not extend all the way to a black hole in Andromeda or in the center of the Milky Way and that is why unitarity holds to a sufficiently good approximation in our neighborhood. Because physics is local we don't need Cauchy surfaces that extend all across the universe for our experiments. If I had to know the complete geometry and topology of the universe in order to do experiments in a laboratory, then physics would be impossible to do. Fortunately, nature hasn't put us in that position.

But as to why virtual blackholes don't damage unitarity in the same experiments - that I don't know, and can only wave my hands and say that the effect must be small because of the weakness of gravity. On the other hand maybe it is a significant effect, and the constant degradation of our seemingly complete Cauchy surface by virtual blackholes is what introduces time-asymmetry at our scale.

Arun,

The paper doesn't even mention several points that are relevant to the argument, as I laid out in above blogpost. I wouldn't call that 'lucid'.

Well, I think Tim Maudlin has come to the comments here previously; I hope he does so again. It might be productive.

Bee, you yourself wrote: "It is also correct, as Maudlin writes, that remnant solutions have been discarded on spurious grounds with the result that research on the black hole information loss problem has grown into a huge bubble of nonsense."

It is exactly these huge bubbles of nonsense (black holes are not the only one) that theoretical high energy particle physicists as an internal community have failed to puncture on their own, even though the antidote to the bubbles is known within the community. Maybe it takes outsiders - complete outsiders like Maudlin, and partial outsiders like Woit - to make the theoretical HEP physics community come to order.

Isn't it possible for all the superpositions of spacetimes to create an escape route from the interior of the black hole to the outside world, without breaking causality? And information could escape this way? A sort of spacetime tunneling.

Arun,

Yes, I agree. That's why I found this article so disappointing. It's easy to dismiss, and I seriously doubt anyone in the field will take it seriously because it's obviously missing key points.

Patat,

yes, and somewhere in the multiverse that's exactly what happens. It's exceedingly unlikely though that this happens in the universe we inhabit.

What I understand from you is that the probability of information tunneling out of a black hole is too small to guarantee that all Hawking radiation contains information about the black hole interior. I imagined Hawking radiation as all the information eventually escaping the black hole via tunneling. But it is too unlikely. But... if information escapes a black hole, isn't it a white hole by definition?

I'm probably missing something here, but isn't Maudlin's point (or Maudlin's construal of Wald's point, or what have you) that the usual reasoning regarding unitarity violation in BH evaporation is invalid? I.e. we can only expect unitarity along complete Cauchy surfaces, which the spacelike surface that is usually invoked in claiming that 'unitarity is violated' (his \Sigma_2) fails to be. So when we calculate that the evolution from some pre-evaporation Cauchy surface to \Sigma_2 is non-unitary, then, well, big whoop - there's no reason it ought to be, even in vanilla QM.

If that's indeed the point, and the argument is correct, then I think pointing it out is a tremendously useful thing, at least to me, personally---even if it might be clear to every expert in the field (in which case there seem to be lots of papers written by experts that are less than clear on this point), my understanding of the problem always was, basically, 'the evolution from \Sigma_1 to \Sigma_2 is non-unitary, but should be unitary'. If it's in fact correct to say 'the evolution from \Sigma_1 to \Sigma_2 is non-unitary, and there's no reason to expect it to be', then I think the 'problem' as such is far less pressing than usually presented.

Jochen,

Yes, you are missing something. It is correct that the state on the incomplete surface will generically be non-unitarily related to the earlier complete surface, because you're leaving part of it behind the horizon. It is incorrect to think that this alone solves the problem. Every experiment that we do is located outside of the black hole horizon. The problem is that, for all we can tell, unitarity works just fine. Why, if, as you said, it shouldn't? Now, you could say that maybe it just isn't unitary and we haven't noticed, or information comes out after all, etc etc. That's the very story that Hawking started 40 years ago. Let me say this again: Just noting that there is a mathematical reason why it shouldn't be unitary does *not* remove what is normally considered paradoxical.

Thanks for your answer. So, if I understand you correctly, the puzzle is in fact that the evolution between \Sigma_1 and \Sigma_2 appears to be unitary, and thus, that using data of \Sigma_2 we ought to be able to reconstruct the quantum state at \Sigma_1. But is there actually an experiment we can do in the lab that would probe this? Seems to me that unitarity ought to still hold for any experiment where everything between preparation and measurement is kept well away from black holes, even if it's violated 'globally'. Is that not the case?

Jochen,

Black holes can in principle be produced in any particle collision - that's quantum mechanics for you. If they exist at all, they should be there in intermediate states. I actually explained this in the above blogpost. The question is, what does the scattering-matrix look like. Yes, you might say you can just do without unitarity, and people have tried to make that work - some still believe that's the way to go (see Unruh et al), and so on. I'm not saying that accepting non-unitarity is not an option, I am just saying you have to make it work, and people have tried to make it work rather unsuccessfully. In any case, if Maudlin's point was to say we should reconsider non-unitarity, then he should have explained at least how that's not a problem with observation etc. Which is an argument that can be made and has been made - and yes, maybe there's something new to say about this - but it's not the argument he did make.

Sorry to keep pestering you, I'll let it go after this post, but to me, it's not clear that one should expect any non-unitarity in scattering processes even if the evolution between slices \Sigma_1 and \Sigma_2 is non-unitary---after all, 'intermediate' black holes really are just terms in a perturbation expansion for what's itself a unitary operator; that this perturbation series should introduce any non-unitarity seems odd to me. So I don't see that I should worry about the non-unitarity introduced by virtual black holes any more than I should worry about being sucked into them. ;)

I guess what I'd want to see is that if there is some non-unitarity to be expected in ordinary laboratory scattering processes, how big of an effect it would have to be, and whether it should be obvious to present-day experiments, or within reach of experiment, or completely non-accessible. In short, I'd like to know if there is any actual difference in phenomenology between the case where there just isn't any unitary evolution between \Sigma_1 and \Sigma_2, and the case where there is, and thus, an information paradox exists; because if there isn't such a difference, then I think it wouldn't be unreasonable to conclude there's also no problem.

There seem to be some basic confusions here about experimental bounds on violation of unitarity. There have been some papers about such bounds, but the empirical part has nothing to do with collision experiments. A test for unitarity has to look for interference effects, such as neutron interferometry experiments, which are about as far as you can get from particle collisions. Detailed investigations of observable empirical signatures of violation of unitarity have been most extensively studied for the GRW collapse theory, where we have an exact equation to work with. So far, no detectable effect of the non-unitary evolution in that theory has been found, and people have been looking hard. (Similar comments apply to Penrose's gravitational collapse theory, although it is not as well-defined as GRW.) So the idea that any violation of unitarity must have presently noticeable effects is just wrong. And the idea that one should even be looking at particle collisions is probably wrong.

Arun's comments at the start are correct: evaporating black holes outside the earth would lead to a mixed state of the universal wavefunction on Sigma 2. But (to say the least!) no one has ever made or ever will make an empirical prediction based on the universal wave function. So evaporating black holes outside the lab make absolutely no empirical difference for predictions about what happens in the lab. What about evaporating black holes in the lab? Well, in overwhelmingly most labs, there just aren't any. Where would they come from? Sabine suggests that they might be formed in particle collisions, but that would require enough energy to form them. No reason to think it has ever been done. There would obviously be a signature of particles going in and only thermal radiation coming out. (I would also expect offhand that the huge proton decay experiments would have noticed such a signature if there were microscopic evaporating black holes floating around somehow. There aren't any.)

The next confusion concerns "virtual evaporating black holes". Suffice it to say that what are called "virtual" things are not real. They are mathematical fictions, used to make certain calculations. In addition, no one in history has included "virtual black hole evaporation" in any actual calculation ever made. It would, on any view, be a process with, say, particles going in and thermal radiation coming out. They would only show up in a quantum theory of gravity, which of course does not exist yet.

The idea that people have tried to make this idea work and did not succeed is unfounded. No one has been looking specifically at the sort of pure-to-mixed evolution that falls out of this analysis: pure Cauchy-to-Cauchy evolution followed by tracing out. The papers I am aware of are a paper by Ellis, Hagelin, Nanopoulos and Srednicki, which is pretty careful and looks specifically at experiments like neutron interferometry which are actually relevant, and a sort of silly paper by Banks, Susskind and Peskin that does nothing relevant. Again, it is highly relevant to look at the work on the GRW theory, which has the advantage of being an exact theory that violates unitarity. That work is certainly a refutation of any claim that violating unitarity must lead to some obvious, presently existing empirical problem.

(Con't)

Jochen's comment above is right on target. The reason there has not been much work on the consequences of pure-to-mixed evolution is that there has been the completely incorrect claim that such evolution violates quantum theory somehow, and that if you want to keep the fundamental postulates of quantum theory intact you have to have pure-to-pure evolution from Sigma 1 to Sigma 2. What my paper points out is that this is the opposite of the truth. Quantum theory only implies pure-to-pure evolution for Cauchy-to-Cauchy evolution. If we take the Penrose diagram that Hawking provides seriously, then Sigma 1-to-Sigma 2 is Cauchy-to-non-Cauchy. Not only does quantum theory not predict that this evolution will be pure-to-pure, it predicts that it will be pure-to-mixed if the state on Sigma 2 (= Sigma 2out) is entangled with the state of Sigma 2in, inside the event horizon. If they are entangled, tracing out over Sigma 2in will yield a mixed state on Sigma 2out. Finally, Quantum Field Theory implies that the state on Sigma 2in will be highly entangled with the state on Sigma 2out. So not only is the common claim that "quantum mechanics predicts that the state on Sigma 2 will be pure" not well founded, it is exactly and precisely false. Fundamental principles of quantum mechanics and QFT entail that the state on Sigma 2 will be mixed.

There will be a discussion of the empirical implication of failure of unitarity in the next version of the paper, which should be done soon.
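The tracing-out step in this argument is easy to see in miniature. Here is a toy sketch (two qubits standing in as hypothetical placeholders for the field degrees of freedom on Sigma 2out and Sigma 2in; nothing here is specific to black holes): a globally pure entangled state becomes mixed once the 'interior' factor is traced out.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): globally pure.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())          # full density matrix

# Reshape to two 2-level factors ("out" and "in") and trace out "in":
rho4 = rho.reshape(2, 2, 2, 2)           # indices (i_out, i_in, j_out, j_in)
rho_out = np.trace(rho4, axis1=1, axis2=3)

purity_full = np.trace(rho @ rho).real   # Tr(rho^2) = 1 for a pure state
purity_out = np.trace(rho_out @ rho_out).real

print(round(purity_full, 12))  # 1.0: pure on the full slice
print(round(purity_out, 12))   # 0.5: maximally mixed on the "out" factor
```

If the two factors were unentangled, the reduced state would stay pure; the mixedness of `rho_out` is entirely due to the entanglement across the horizon analogue.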

Sabine, thanks for this blog post bringing the matter to my attention. I've been doing a lot of reading and writing on the subject of late, and this is very timely. As for Maudlin's "condescending tone by which he attempts public ridicule", I rather fear things are going to get worse.

Tim,

Three things. First, as I've said a number of times, I am actually quite sympathetic to your take on the matter. Regarding your assertion that no one has ever looked at virtual black holes, however, you might want to check out this (and the long string of references before and after that).

Second, next thing people will come with is the BH entropy, for reasons see blogpost.

Third, most of them don't believe there's a singularity to begin with and they don't believe in remnants (see second point) meaning the final slice is actually complete.

Let me repeat that I am not telling you this because *I* think this is a good argument, but because I've heard this story forwards and backwards 10 million times.

And once you're at that point, the only thing one can conclude is that some people like it this way and some people like it that way and we'll keep on discussing this forever. (Which is pretty much what I wrote in my recent blogpost on the topic if you recall.)

Jochen,

You might want to have a look at this paper for phenomenological consequences. Please note that my point here is not to say that abandoning unitarity is not an option, but merely to say it's an option that has been discussed and I can't see what new has been added to this discussion.

FWIW, I discuss black hole entropy in the next version.

philosophers of physics trying to do physics reminds me of something i once read that goes "when you see a flying pig, you shouldn't critique how well it flies; you should be impressed that it flies at all". there's also the saying "you can't talk the talk unless you've walked the walk". and finally, there's the Nobel laureate Bob Dylan lyric from "Positively 4th Street" that goes

I wish that for just one time you could stand inside my shoes

And just for that one moment I could be you

Yes, I wish that for just one time you could stand inside my shoes

You'd know what a drag it is to see you

This "philosophers vs. physicists" meme is completely off base. Not many people work in the foundations of physics. Very few physicists do. The community that actually works in foundations consists of philosophers, mathematicians, and physicists. If you think there is an error in this paper you are free to point it out. But if there is going to be actual progress, physicists have to stop being so defensive. Respond to the arguments, not ad hominem. (Sorry: a philosopher's phrase.)

Dylan again: There's something happening and you don't know what it is, do you, Mr. Jones?

Sabine and Tim,

what exactly are the bases of your models of black holes: The static black hole? The dynamic collapse towards a black hole? Or yet other models? And in what coordinate systems are you operating in?

Can I add that when I said I fear things are going to get worse, I wasn't referring to Tim Maudlin's tone. I was thinking of public perception of the black hole physics community. The recent inflation hoo-hah is more of the same.

To lard your paper with gratuitous ad hominem comments, and then complain about someone noticing... Well, we have your measure, Mr. Maudlin.

Ambi Valent,

The model is more or less a sequence of static black holes with successively smaller masses and hence smaller event horizons. The basic idea is to use a static black hole as a fixed background space-time, calculate the Hawking radiation, derive an energy flux from that, use a principle of global conservation of energy to argue that the black hole must lose an equivalent mass, then switch to a static background space-time of a black hole with the new, smaller mass, rinse and repeat. There is no really principled way to deal with the emission of the Hawking radiation and the backreaction on the metric all in one swoop. That is the backreaction problem after which the blog is named.

I should add, although it is not mentioned in this paper, I think that there are conceptual problems with this whole story. But that is the subject of another, even more controversial, paper.
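For what it's worth, the quasi-static recipe just described can be written down in a few lines. This is only a toy sketch with made-up units and a hypothetical emission constant `C` (the real greybody factors and particle species content are absorbed into it); the point is just the structure of the iteration and the resulting mass-cubed lifetime scaling, since dM/dt = -C/M^2 integrates to t_ev = M0^3/(3C).

```python
# Toy quasi-static evaporation: treat the hole as a static background,
# compute the flux for the current mass, subtract the equivalent mass,
# and repeat on the new, smaller static background.
M0 = 1.0   # initial mass (arbitrary units; hypothetical)
C = 1.0    # emission constant (hypothetical; absorbs hbar, G, greybody factors)
dt = 1e-5  # time step

M, t = M0, 0.0
while M > 0.05 * M0:          # stop before the strong quantum-gravity regime
    M -= (C / M**2) * dt      # Hawking flux on the current static background
    t += dt

# Analytic check: M(t)^3 = M0^3 - 3*C*t, so reaching 0.05*M0 takes
# t = (M0^3 - (0.05*M0)^3) / (3*C) ~ 0.3333 in these units.
print(round(t, 3))  # 0.333
```

The loop has to stop by hand once the mass gets small, which mirrors the physical situation: the recipe itself says nothing about the regime where the backreaction can no longer be ignored.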

Ambi Valent,

It's a collapsing black hole with the evaporation-part added as a guess since nobody knows exactly what happens. The considerations in Tim's paper only concern the causal structure and the coordinate system is entirely irrelevant for that. This is the usual situation in that kind of discussion.

Araybold,

Please point out any ad hominem comment in my paper, which you say is larded with them. I am certain there is not a single one. Are you sure you know what the phrase means?

A suggestion for the detractors of Tim:

Is a confirmation of black hole evaporation really missing from Tim's article?

Anyway, I think that Tim "flows" at a ratio of one article per article!

Joke apart, I find Tim's article very interesting, as ever.

Nonetheless I have to thank Sabine for posting it.

Great job!!

I think there is a critical mistake in this Maudlin paper, namely that the crux of the argument -- that Sigma_2 is not a Cauchy slice -- cannot be concluded from the arguments given.

The mistake involves an erroneous over-interpretation of the Penrose diagram for the evaporating black hole. A Penrose diagram suffices only to represent the causal structure of a classical spacetime, i.e. a solution to general relativity or its extensions. It's basically a picture of the metric, the classical field that determines the spacetime geometry. One can attempt to modify the diagram for a particular solution (e.g. a black hole formed from collapse) to account for weak quantum-gravitational effects, such as Hawking radiation, leading to Maudlin's figure 4, but this depiction is just a cartoon, and if read too literally it will lead to incorrect conclusions.

When strong quantum-gravitational effects are important there is no notion of locality since the metric undergoes large quantum fluctuations, just like any other quantum field. At the end stages of black hole evaporation (or perhaps earlier), quantum-gravitational effects dominate. The Penrose diagram does not capture this physics by its very definition, since the diagram is very literally a depiction of the spacetime's causal structure, which it should be emphasized is not even a well-defined concept in quantum gravity. In the paper, Maudlin gives elementary arguments based on the semiclassical causal structure to argue that Sigma_2 cannot be a Cauchy slice, but these arguments are applied precisely to a situation in which classical GR is not valid: the causal structure receives large quantum corrections. One cannot conclude that the geodesics in question fail to make it to Sigma_2 without knowing the full dynamics of quantum gravity.

In fact, we know from AdS/CFT that evolution from Sigma_1 to Sigma_2 is indeed unitary. Maudlin does address holography briefly at the end of the paper, but unless I have missed something, his argument in the second paragraph of that section is identical to the remnant scenario (the idea that the post-evaporation Cauchy slice contains some degrees of freedom that don't escape the horizon). This scenario has long been ruled out by basic physical considerations, as discussed in most comprehensive reviews of the information paradox. It is also ruled out explicitly by AdS/CFT.

While I don't want to wade into the personal waters, I will remark that the provocative tone used throughout the paper could cause offense in several places, and that Tim shouldn't be surprised when some read the paper as larded with derision.

dark star,

At the beginning of the paper, I state explicitly that the outcome of the paper will be one of two things: either the "paradox" will no longer be considered paradoxical (and in particular it will no longer be claimed that there is a fundamental conflict between quantum theory and general relativity), or the exact nature of the paradox will be clarified. I take it that you are suggesting the latter resolution. Let me make some comments before turning to the proposed resolution.

The first comment is that the Penrose diagram that I am commenting on is universally used in presenting the "paradox", from Hawking onward. Given the conventions for Penrose diagrams, it depicts an exact causal structure (i.e. conformal structure) to be analyzed. That is the structure I do analyze. The diagram does not, as you assert, give a picture of the metric, but only of the conformal structure (causal structure). As such, adding the infalling matter does not affect the diagram. A Penrose diagram is not "just a cartoon" that can be variously interpreted; it is a precise depiction of a conformal structure. Of course, I clean up Hawking's diagram, which is a bit vague on certain points (such as whether the EE is in the space-time or not) but I also discuss the situation with and without that specification. So I am taking the diagram seriously.

Now that might be a mistake. Maybe the diagram has never, since Hawking's paper, been meant to be taken seriously. If so, then the clarity of the usual presentations, from Hawking on, has been seriously lacking. There is no warning given that one ought not to take the diagram seriously, or any explanation of how it might be misleading. So at the very least, there has been an extremely serious breakdown in the clarity and precision with which the paradox has been standardly presented. As a side remark, neither George Ellis nor Robert Wald nor Ted Jacobson, all of whom are prominent experts in General Relativity, have made any complaint about the diagram or the resulting analysis. So even on your account there has been some sort of widespread misunderstanding in the professional community.

(Con't)

How does your presentation of the paradox go? Well, you say that "in quantum gravity" the causal structure is not well-defined, so the Penrose diagram should not be taken seriously. There are several puzzles about this. One is this: since no theory of quantum gravity actually exists, how do you know that causal structure is not well-defined? More particularly, how do you know it is ill-defined in a way that renders the diagram incorrect? You seem to be importing results from a non-existent theory, whereas the paradox was supposed to provide some clues to discovering that very theory.

This logical structure seems to be exactly backwards. One gets the paradox by trying to take both GR as we have it and quantum theory as we have it and deriving a contradiction from their conjunction. In the normal presentation, the contradiction is supposed to be something like this: GR implies that the evolution from Sigma 1 to Sigma 2 is pure-to-mixed and loses information (is not retrodictable), while quantum theory demands that the evolution from Sigma 1 to Sigma 2 must be unitary, deterministic, pure-to-pure and retrodictable. On this presentation, which I claim is the usual one, one is forced to choose between GR and quantum theory. I address this presentation by pointing out that in the situation as presented in the Penrose diagram this is a false dichotomy and hence a false paradox: sticking to both quantum theory and GR, and taking the diagram seriously, leads to the conclusion that the state on Sigma 2 should be mixed and non-retrodictable. This does not contradict quantum theory but rather is demanded by quantum theory in this setting.

On your understanding, what is the paradox? We start with a space-time whose conformal and causal structure is somehow undefined in certain places, so the whole concept of a Cauchy surface is not well defined. We are not given anything like a Penrose diagram depicting the situation. On what basis, then, are we to conclude anything about the state on Sigma 2? We know that quantum theory, even in plain vanilla Minkowski spacetime, demands Cauchy-to-Cauchy evolutions that are unitary and predictable and retrodictable, and also that it allows for and generically predicts Cauchy-to-non-Cauchy evolutions that are not unitary, are pure-to-mixed, and are not retrodictable. If the correct "quantum gravity" space-time structure does not allow for the definition of a Cauchy surface, then we have no grounds to expect anything, one way or the other, about the Sigma 1 to Sigma 2 evolution. How is that a paradox? However it comes out, we have not violated any principle.

Now about AdS/CFT. How do you think it follows, if AdS/CFT is true, that the evolution from Sigma 1 to Sigma 2 is unitary? If pure states on the boundary always map to pure states in the bulk, then we know that a pure-to-pure transition on the boundary maps to a pure-to-pure transition in the bulk. (This all assumes that the conformal structure on the boundary is unproblematic.) Fine. And let me even grant that the initial pure state on the boundary maps to a pure state on Sigma 1 in the bulk. But by what argument can one conclude that the final pure state on the boundary maps to *the state on Sigma 2* in the bulk? All we know is that it maps to some pure state in the bulk. Why not the state on Sigma 2 U Sigma 2in in the bulk? According to my analysis, this state ought to be pure, and the state on Sigma 2 alone mixed. Somehow you conclude that CFT says it must be the state on Sigma 2 that is mapped to, but I can't see any argument at all to that conclusion. We would need a full dictionary connecting states on the boundary to states in the bulk to conclude anything, and we have no such dictionary. So my solution is not "ruled out explicitly" by AdS/CFT.

(Con't)

In fact, simple dimensional considerations assure us that the map from states on surfaces on the boundary to states in the bulk must be highly non-trivial and non-local: you are connecting states of different dimensionalities to one another. Why can't the state on a connected surface on the boundary map to a state on a disconnected surface in the bulk? These surface-to-surface mappings cannot be continuous, for the dimensional considerations just given.

Finally, you say that remnant scenarios have "long been ruled out by basic physical considerations". I would dearly love to know what those "basic physical considerations" are, as I have not been able to find them and no one will tell me what they are. Or at least point me to a comprehensive review that, in your view, lays out these considerations in a clear way. I have asked many people for where I might find a clear statement of the paradox, and quite simply have never gotten one. But of course I may have missed it. So a suggestion for where to look would be greatly appreciated.

I also don't want to go into detail about the tone of the paper, but I will say two things. At the beginning I say that the paper is a provocation. It is meant to be. I also explain why it is and what sorts of response would be appropriate. The other is that I have had a prominent physicist say that he thought the tone was not provocative save for the last paragraph, which is a parody of Hume that he recognized, but thought that other physicists might not recognize. Maybe people find that passage offensive. Philosophers, knowing the reference, find it amusing. I may well remove it from the final version. But if you can point to anything else in the paper, from beginning to end, that has some offensive tone, I would appreciate pointing it out. I have been told on this blog that the paper is full of ad hominem arguments, when it does not contain a single one. I am explicitly arguing that the theoretical physics community has been on a wild goose chase for forty years. If that claim is true, then there is not going to be a way to make it that does not raise a lot of hackles. And if it isn't, then at least we can get a clear account of why my argument is incorrect and what the paradox really is. But you ought to at least consider the possibility that some of the hostile reaction is attributable not to the tone of the paper, but to the thesis of the paper. I have tried to present the argument deliberately, clearly, and slowly, and have repeated key points. I do that because of long experience having the key points in papers overlooked or misconstrued. If that comes off as pedantic, that is a drawback. But it makes the target of refutation easy to find. If there is a mistake in the paper, it is somewhere explicitly on the page. For the reasons given above, I do not think you have located any error.

dark star,

It's right that the Penrose diagram that Tim has in his paper is only one possible time-evolution since we don't know what happens in the QG regime. However, it happens to be the case that Tim is looking at, so I can't see how that's a mistake. You may question whether it's *relevant*, but nothing wrong with it.

But you state that remnant scenarios are ruled out and that is wrong. Please point me towards the literature that you refer to. I'd be very surprised if you actually have any argument to back up your claim.

Just to complete the above:

"If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, "Does it contain any abstract reasoning concerning quantity or number?" No. "Does it contain any experimental reasoning concerning matter of fact and existence?" No. Commit it then to the flames: for it can contain nothing but sophistry and illusion." - David Hume, An Enquiry Concerning Human Understanding, 1748.

Tim Maudlin: What is this dogma "virtual things are not real"? I see it everywhere, so they must teach it in physics graduate schools. I totally disagree. You cannot draw a line between "virtual" particles and "real" particles. Every particle is to some degree off-shell, and so virtual. If real black holes violate unitarity, so do virtual ones.

Does anybody have a theory which draws a distinction between "virtual particles" and "real particles", and shows how "real black holes" can violate unitarity while "virtual black holes" don't? I don't think so, and if you know of one, please give a reference.

Sabine,

Thanks for posting this, I found both your article and the ensuing discussion very interesting. I have to say, though, that I find the title of your article more condescending than anything in Tim's article (save, perhaps, the final humorous paragraph, which may be taken amiss by those unfamiliar with the riffed passage in Hume). I don't see that you point to anything that Tim has failed to understand; at most you are criticizing him for not saying more about certain issues. As I see it, Tim is trying to bring a bit of clarity to an area of discussion that has suffered from, at least, a serious lack of it [clarity]. And I take it that you would agree that greater clarity is sorely needed, since you say "... research on the black hole information loss problem has grown into a huge bubble of nonsense." If things have gone off the rails in this way, all efforts to put things back on track should be lauded.

I look forward to (hopefully) seeing followup articles by Tim on the things he hinted at: the original argument for BH evaporation, AdS/CFT, . . .

Sabine,

It may well be a cartoon of the case that Tim is looking at, but it does not accurately depict the spacetime geometry (which probably does not exist in the sense we are accustomed to) near the "evaporation event", hence it is irrelevant for his discussion of the Cauchyness of Sigma_2.

This may surprise you, but my understanding is that the arguments against remnants are strong. I'll point to [9209058], [9304027], [9412159] and [9501106] as examples of basic physical arguments against. Recent reviews discussing AdS/CFT and remnants include [1409.1231] and especially [1703.02143]. I would be interested to know what issues you take with these arguments.

Tim,

Re: post 1.

As I said in my post, a Penrose diagram depicts only the causal structure of the spacetime (which is determined by the metric). In quantum gravity the causal structure is simply not well-defined and therefore in any situation where quantum gravity is strong, the Penrose diagram is no better than a cartoon.

Taking the diagram seriously is your error, and one that has been made by many physicists over the history of the information paradox. Careful reviews will emphasize that the diagram is not meant to be taken literally, though the point is often not made explicit in papers since our ignorance of what happens in the strong QG regime is common knowledge. I agree that one should be more careful in presenting the picture, at least when there is a risk of confusion, though I disagree that there is a widespread misunderstanding on this point in the community, at least among experts. To your side remark, two of your "prominent experts" have views on the information paradox that lie well outside the mainstream (and that I believe are refuted both by AdS/CFT and the boundary nature of the gravity hamiltonian). The other, if pressed, would likely tell you that the diagram can't be taken too seriously. I think we can have a discussion about the merits of your argument without appeal to their opinions, though.

I'm happy to follow the presentation of the paradox in [1409.1231] for the sake of concreteness. I agree that, absent a theory of quantum gravity, we cannot make definite statements about what happens to causal structure in the quantum gravity regime. The general expectation is that it is not well-defined, but I do not need this for the argument. All I was saying is that the geometry receives large quantum corrections in the quantum gravity regime -- which is what we mean by it being quantum -- hence a large departure from the naive picture in figure 4.

In fact, I can even make an argument without invoking quantum effects, though those are certainly relevant too. Near the singularity the curvature is large, and so higher-derivative terms in the gravitational action become important. This means that the spacetime in the high-curvature regime near the singularity (or near the horizon at the end-stages of evaporation) is modified by these classical, post-Einstein-Hilbert gravitational corrections, so that one cannot hope to use figure 4 as a literal spacetime diagram.

By the way, our theory of perturbative quantum gravity is string theory, which certainly "exists", and while I did not need to import any stringy results (all my claims follow from effective field theory without invoking any details of the UV physics), my statements are consistent with our knowledge of the stringy physics.

Re: "I address this presentation by pointing out that in the situation as presented in the Penrose diagram this is a false dichotomy and hence a false paradox: sticking to both quantum theory and GR, and taking the diagram seriously, leads to the conclusion that the state on Sigma 2 should be mixed and non-retrodictable. This does not contradict quantum theory but rather is demanded by quantum theory in this setting."

This is the point, you cannot stick to GR, it breaks down. The question has always been how, and whether it allows information to escape. The so-called paradox is the conflict between the naive GR(+semiclassical quantum fields) prediction and the constraints of quantum mechanics.

My interpretation of your argument is the following: you point out that GR predicts that the state on Sigma_2 is mixed, and then take issue with the claim that QM implies unitary evolution to Sigma_2, since you argue that Sigma_2 cannot be a Cauchy slice (though your argument involves following geodesics through a high-curvature quantum gravity regime, which you cannot possibly do). Even if I believed your argument was well-justified, it would lead you immediately to the remnant or baby universe scenarios, which I addressed in my response to Sabine. (cont'd)

"We start with a space-time whose conformal and causal structure is somehow undefined in certain places, so the whole concept of a Cauchy surface is not well defined... On what basis, then, are we to conclude anything about the state on Sigma 2? If the correct "quantum gravity" space-time structure does not allow for the definition of a Cauchy surface, then we have no grounds to expect anything, one way or the other, about the Sigma 1 to Sigma 2 evolution. How is that a paradox? However it comes out, we have not violated any principle."

The sharpest answer comes from AdS/CFT. We start with a pure state on the boundary in the vacuum, dual to vacuum in the bulk, then act with sources on the boundary to create a black hole in the bulk (if you prefer, you can think of this as evolving the boundary with a time-dependent hamiltonian). At t=0 in the boundary, before we've turned on the sources, the bulk is just empty AdS and there are no obstructions to picking a Cauchy slice. Much later the black hole will have evaporated, and the gravitational field is weak everywhere in the bulk, so we can construct a bulk Cauchy slice dual to the evolved boundary slice.

"Now about AdS/CFT. How do you think it follows, if AdS/CFT is true, that the evolution from Sigma 1 to Sigma 2 is unitary? If pure states on the boundary always map to pure states in the bulk, then we know that a pure-to-pure transition on the boundary maps to a pure-to-pure transition in the bulk. (This all assumes that the conformal structure on the boundary is unproblematic.)"

The last bit is not an assumption, it is trivially true. Time evolution in quantum field theory is by definition unitary regardless of the manifold on which it lives. The CFTs in the correspondence, for example N=4 SYM, are ordinary, unitary field theories on fixed spacetime backgrounds.

"Fine. And let me even grant that the initial pure state on the boundary maps to a pure state on Sigma 1 in the bulk. But by what argument can one conclude that the final pure state on the boundary maps to *the state on Sigma 2* in the bulk? All we know is that it maps to some pure state in the bulk. Why not the state on Sigma 2 U Sigma 2in in the bulk?"

If the bulk dual to Sigma_2 has some piece behind the horizon at late times, it's a remnant, or baby universe, by definition.

"According to my analysis, this state ought to be pure, and the state on Sigma 2 alone mixed. Somehow you conclude that CFT says it must be the state on Sigma 2 that is mapped to, but I can't see any argument at all to that conclusion. We would need a full dictionary connecting states on the boundary to states in the bulk to conclude anything, and we have no such dictionary. So my solution is not "ruled out explicitly" by AdS/CFT."

Knowing that the dictionary exists is different from knowing the details of the mapping. We have very high confidence that the dictionary exists, and the details are irrelevant here: unitary evolution in the bulk follows from the existence of the map alone.

"In fact, simple dimensional considerations assure us that the map from states on surfaces on the boundary to states in the bulk must be highly non-trivial and non-local: you are connecting states of different dimensionalities to one another."

Agreed, this is why we call it "holography". The bulk-boundary map is indeed highly nontrivial and highly nonlocal, but nobody promised you a rose garden.

I gave some refs for remnants in my response to Sabine.

Re: tone, you represent at least one false statement (that Sigma_2 cannot be Cauchy) as trivially true, and then suggest that failure to recognize it as such has led physicists on a wild goose chase for decades. I personally read this as hubris more than anything else, and would have pushed harder to understand why such a claim has not gained more traction in the community before publication.

Carl3,

The purpose of the title is to point out that he tried to understand it, which is arguably true. For all I can tell the whole purpose of his paper is to explain why he doesn't understand why physicists spend time thinking about the problem, so clearly he failed at it. But you're jumping to conclusions about my intention. I'd say that I myself fail to understand why my colleagues discuss a lot of the issues they do discuss (and I wrote about this previously), hence my remark about the bubble. You could have read my title as "philosopher puzzled about insanity in theoretical physics." That you didn't says more about you than about me.

dark star,

If you post 7-digit arxiv numbers, please include the category. But let us take the Preskill review as an example, it's a good starting point. It is full of phrases like "it seems" and "it seems so". If you bother to look at the references quoted, they contain nothing to back up the claims in the paper. The large-volume explanation has never been ruled out. The example mentioned in the paper is a red herring (seriously, go and read the papers). Most troubling though, any such argument implicitly assumes that effective field theory holds *at the Planck scale*, which is clearly unwarranted. The pair production "problem" is a non-problem, both because we have every reason to expect effective field theory to break down and because there's no reason to believe remnants must be long-lived or degenerate at long wavelengths.

This has been said many times (even Tim debunks this claim), so why do you keep bringing this up?

I'm not sure why I would be interested in remnants in AdS/CFT, can you tell me why the papers you mention are important to the issue? Best,

B.

Peter Shor,

As I understand it, "virtual particles" are just mathematical artifacts that arise in doing perturbation theory. Similarly, Feynman diagrams do not depict any real physical events: they are just a handy mnemonic device for keeping track of a bunch of terms that contribute to the exact solution to an equation. Of course, to go into this properly one would have to be precise about the sense in which any particle is "real". As far as I know, none of this is taught in physics graduate school, where the concept of "physical reality" is not much used. That is why physicists cannot agree about whether the wavefunction of a system is "real", or even what that might mean. It is also why most physicists cannot explain how to solve the measurement problem.

Certainly, the idea that there is no fundamental difference between virtual and real particles would need some strong defense. For example, in the GRW theory, real particles suffer GRW collapses. "Virtual particles" do not, and could not, if the theory is to work.

There is some confused talk that ties "virtual particles" to "fluctuations", and that mentions the Heisenberg time/energy uncertainty principle, as if a virtual particle can exist as a short-term fluctuation, but the longer lived it is the less energy it must have. This talk is confused because of a misunderstanding of the term "fluctuation". There are, for example, "quantum fluctuations" in the Minkowski vacuum state, but the state itself is stationary, and does not fluctuate at all. The so-called "fluctuations" are expectation values for certain quantities if they were measured. No physical change corresponds to them.
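The point about "fluctuations" can be made precise with the simplest example, the harmonic-oscillator ground state (this illustration is mine, not from the paper):

```latex
\langle 0 | \hat{x} | 0 \rangle = 0,
\qquad
\langle 0 | \hat{x}^2 | 0 \rangle = \frac{\hbar}{2 m \omega} \neq 0,
\qquad
|0, t\rangle = e^{-i E_0 t / \hbar} |0\rangle .
```

The state is an energy eigenstate, so it evolves only by an overall phase and every expectation value is strictly time-independent; nothing "fluctuates" in time. The nonzero variance of x quantifies the spread of outcomes a measurement would find, not a physical process occurring in the state.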

But in the end, all of this is not really relevant to the paper. I do not say that real evaporating black holes violate unitarity and virtual ones do not, as you relate. I explicitly say that real black holes, including evaporating ones, do not violate unitarity in the only place where we could expect it, namely for Cauchy-to-Cauchy evolution. The same would be true for virtual evaporating black holes, if there were any.

dark star

OK, a lot to go through here. You continue to insist that the diagram is a "cartoon" because quantum gravity. The diagram has been used since Hawking's original paper, and continues to be used, when discussing the "paradox", and no warnings or disclaimers are given. Certainly, Hawking himself thought that there is a fundamental breakdown of unitarity on the basis of the diagram, so he took it seriously. Your claim that "the point is often not made explicit in papers since our ignorance of what happens in the strong QG regime is common knowledge" is impossible to refute and impossible to prove, of course. Since you yourself say that "Taking the diagram seriously is your error, and one that has been made by many physicists over the history of the information paradox", at the very least the paper shows that physicists have been imprecise and sloppy in ways that have misled other physicists. But, as I said, if one is not to take the diagram seriously, what is one to take seriously? What is the paradox supposed to be?

Perhaps the operative paragraph of your post is this:

"This is the point, you cannot stick to GR, it breaks down. The question has always been how, and whether it allows information to escape. The so-called paradox is the conflict between the naive GR(+semiclassical quantum fields) prediction and the constraints of quantum mechanics."

This shows that you have not understood my argument at all. What I have argued is that there is no conflict between naive GR and the constraints of quantum mechanics, which is why there is no paradox. We certainly agree that quantum mechanics does not require that all evolutions be pure-to-pure and preserve information: Wald's example in plain vanilla Minkowski space-time is a counter-example to that. The only constraint we have from quantum mechanics is that Cauchy-to-Cauchy evolution must be pure-to-pure, and the black hole evaporation scenario suggests no violation of that at all. The evolution from Sigma 1 to Sigma 2 U Sigma 2in can perfectly well be pure-to-pure, unitary, and preserve information. The evolution from Sigma 1 to Sigma 2 arises from this pure-to-pure evolution followed by tracing out over Sigma 2in. This leaves a mixed state on Sigma 2 that fails to preserve information. This does not violate quantum theory: it instantiates it.
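The tracing-out step in the argument above is an ordinary partial trace, and its effect can be seen in a two-qubit toy model (the labels "outside"/"inside" standing in for Sigma 2 and Sigma 2in are purely illustrative):

```python
import numpy as np

# Basis vectors for a single qubit.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# A pure, entangled state on the joint system "outside + inside"
# (a Bell pair), standing in for the pure state on Sigma 2 U Sigma 2in.
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_full = np.outer(psi, psi.conj())

# Partial trace over the "inside" factor: reshape the 4x4 density matrix
# into indices (a, b, a', b') and trace over the inside indices b, b'.
rho_out = rho_full.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Purity Tr(rho^2): 1 for a pure state, < 1 for a mixed state.
purity_full = np.trace(rho_full @ rho_full).real  # global state is pure
purity_out  = np.trace(rho_out @ rho_out).real    # reduced state is mixed
```

The joint state is pure (purity 1) while the reduced state on the "outside" factor alone is maximally mixed (purity 1/2), with no violation of unitarity anywhere: the mixedness is generated by restricting a pure global state to a subsystem, which is the structure of the Sigma 1 to Sigma 2 evolution being described.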

We know from Wald's example that not every evolution is pure-to-pure, or unitary, or preserves information. And I have given you a strict criterion about when it must be unitary and preserve information: when it is Cauchy-to-Cauchy. If you insist that the space-time of the evaporating black hole does not have a definite causal structure, so that the very notion of a Cauchy surface is not applicable, then you need to replace this criterion with another one that can be applied. Absent such a criterion we have no grounds to expect anything in particular about the evolution from Sigma 1 to Sigma 2. And the criterion had better reduce to being Cauchy-to-Cauchy in regimes where a space-time emerges. Without the criterion there is no paradox.

Con't

You say that by definition my solution is a remnant solution. You are free to define things as you like: I claim that is the right solution. Looking at the literature, I find most "remnants" not to yield disconnected Cauchy surfaces, and the literature says that to get a remnant the evaporation has to stop at Planck scale and not run to completion. In my solution the evaporation does run to completion. But this is all just semantics: I claim that my solution, whatever you call it, is a consequence of quantum theory and General Relativity. It contradicts neither of them.

My point about not having the dictionary in AdS/CFT is simple: without it, why not conclude that the unitary evolution on the boundary maps to the unitary evolution from Sigma 1 to Sigma 2 U Sigma 2in? Then there is no paradox. In this sense, the details of the map are critical, not irrelevant.

"Re: tone, you represent at least one false statement (that Sigma_2 cannot be Cauchy) as trivially true, and then suggest that failure to recognize it as such has led physicists on a wild goose chase for decades." But your so-called false statement is trivially true in the Penrose diagram! And I don't see how any imaginable correction of the diagram in the high-curvature regime could render it false.

Carl3: Thank you.

Sabine: Perhaps your ear for English is flawed, but anyone would take your title (and even more your Twitter, which added "unsuccessfully") to be derisive of the paper and, by extension, of philosophers in general. It has universally been taken that way to my knowledge. I think I understand the situation with respect to the paradox perfectly well, that I have unravelled the paradox, as it were. It turns on the error described in the section "But it no longer exists". In any case, Carl3's reaction does not say more about him than about you: it says a lot about the natural understanding of the title. And you say, apparently derisively, that there is nothing new in the paper which, as Wayne pointed out above, is exactly what I say in the abstract. To be clear: none of the principles of quantum theory or of General Relativity that I make use of in the paper is new or unknown. But the consequences of these principles have not been appreciated. Nor, to my knowledge, is the way one of Geroch's theorems breaks down and the other doesn't.

Tim,

"Perhaps your ear for English is flawed, but anyone would take your title (and even more your Twitter, which added "unsuccessfully") to be derisive of the paper and, by extension, of philosophers in general. It has universally been taken that way to my knowledge."

No really, lol - I wonder why.

As I see it, Bee has little to no dispute with Tim's physics.

As I see it, Bee's criticism of Tim's paper is that the first version needs to address some more issues.

As I see it Bee's title refers to the fact that the philosopher has not yet understood why physicists spend so much time on this evaporated paradox. That is not a problem of physics but rather perhaps one of sociology. To quote Bee, "For all I can tell the whole purpose of his paper is explain why he doesn't understand why physicists spend time thinking about the problem, so clearly he failed at it."

That is, the problem is: "why is lack of conceptual clarity so acceptable among modern-day physicists?"

The papers below are relevant to some of the issues Maudlin raises. (Apologies for spamming!)

The 2006 paper discusses the Cauchy surface issue. The 2009 paper notes that decoherence (For All Practical Purposes -- FAPP per Bell) mimics pure to mixed evolution. An experiment which can go beyond FAPP to detect BH unitarity violation would also be able to detect Everett branches. Small amounts of pure to mixed evolution are not excluded and perhaps never will be.

Black holes, information and decoherence

https://arxiv.org/abs/0903.2258

We investigate the experimental capabilities required to test whether black holes destroy information. We show that an experiment capable of illuminating the information puzzle must necessarily be able to detect or manipulate macroscopic superpositions (i.e., Everett branches). Hence, it could also address the fundamental question of decoherence versus wavefunction collapse.

Spacetime topology change and black hole information

https://arxiv.org/abs/hep-th/0608175

Topology change -- the creation of a disconnected baby universe -- due to black hole collapse may resolve the information loss paradox. Evolution from an early time Cauchy surface to a final surface which includes a slice of the disconnected region can be unitary and consistent with conventional quantum mechanics. We discuss the issue of cluster decomposition, showing that any violations thereof are likely to be unobservably small. Topology change is similar to the black hole remnant scenario and only requires assumptions about the behavior of quantum gravity in planckian regimes. It does not require non-locality or any modification of low-energy physics.

Arun,

Yes, excellent summary.

Dear Stephen,

Thanks so much for the references. They are both highly relevant, and I will cite them in the next version.

There is a terminological question that I have, which I think (from your paper) you might help me with. dark star above writes: "If the bulk dual to Sigma_2 has some piece behind the horizon at late times, it's a remnant, or baby universe, by definition." Now there is a little confusion here since Sigma 2 would be in the bulk, not on the boundary, so the claim is really about the bulk dual of the final Cauchy surface on the boundary. But the terminological question is this: is there a standard meaning of "remnant" and "baby universe"? I have sometimes been told that the solution in my paper is a remnant solution, but my impression is that remnants require that the evaporation not "run to completion", and hence leave a connected Cauchy surface. I suppose the solution would be a "baby universe", but I'm just not sure how these terms are used. You say the solution is not a remnant solution, so I infer you have something like this criterion in mind. Can you confirm that?

Thanks,

Tim

Sabine,

Sorry about that, everything I listed was hep-th :)

I agree that parts of the Preskill paper are vague. Absent a theory of nonperturbative quantum gravity, it's hard to know the rules of the game, as you say. Perhaps I should not have cited the classic papers against remnants as those are well-known to you, but I did cite two modern reviews that I believe refute the arguments you gave with Smolin (especially the Marolf review I highlighted). For example, we know that there must exist an EFT description of any would-be remnant from AdS/CFT.

You should care about remnants in holography since we can sharply formulate questions about quantum gravity in that context. Many of the lessons also extrapolate to asymptotically flat black holes, by undoing the decoupling limit. Also, small black holes in AdS are almost identical to AF black holes. I really don't understand why one would simply ignore what we've learned from the correspondence. Do you think that the resolution of the info. paradox is fundamentally different in other circumstances?

I also don't understand why you say that remnants can be short-lived or non-degenerate. My understanding of remnants is that they store all the information left over after BH evaporation. If they were short-lived it seems like they would be equivalent to the ordinary evaporation-to-completion scenarios, and if non-degenerate they would not store the required information so would not solve the paradox.

Would you mind explaining the large-volume proposal to me?

My main interest in joining this conversation was to point out the flaw in Tim's naive GR argument, and that his holographic example invokes remnants. I would be happy to have convinced you on these points; the viability of remnants is a separate (but obviously very interesting) issue.

Tim,

I understand your argument just fine, but it seems like mine hasn't made it across. I am pointing out that you are erroneously using classical GR to describe the end-stages of black hole evaporation. It does not matter if you reconcile some naive GR prediction with QM; GR breaks down towards the end of evaporation (if not before), and so its predictions are both irrelevant and inapplicable to any resolution of the paradox.

Maybe another wording would be helpful. Whether or not a statement is true or false in the Penrose diagram could not matter less, since the Penrose diagram is not an accurate description of the physics in the end stages of BH evaporation. It then follows logically that any arguments based on the Penrose diagram are irrelevant.

"And I don't see how any imaginable correction of the diagram in the high-curvature regime could render it false."

I trust this was not meant as an argument -- nothing follows logically from one's lack of imagination. I described two types of corrections that can wildly change the spacetime geometry: quantum, and higher-curvature, both of which are relevant in this situation. Unless you have a physical argument that these corrections do not change the spacetime geometry, it's hard to take this seriously.

I addressed your concerns about the existence of Cauchy surfaces with the holographic example, which you may want to spend some time with. I was careful to stick to a physical setup where we know the details of the map. Also, just to keep things clear: the flaw in your argument based on the Penrose diagram is independent of this point; I only mentioned holography since it furnishes an explicit counterexample.

As for remnants, all the problems I alluded to manifest whether the Cauchy surface is connected or disconnected. The thing that leads to problems is having a lot of entropy in a very low-mass object.

As for AdS/CFT, as I said before, if some piece of the evolved Cauchy surface is stuck behind the horizon, it's a remnant. This is true whether or not we know the details of the map, it's a definition, and yes, it's just semantics. However, this is not: we know enough about the dictionary to conclude that there would be a low-energy state in the field theory for every state of the remnant, while there is strong evidence that this is not the case in any holographic field theory.

dark star,

AdS/CFT presumes the solution; that's why I'm not interested in it as an approach to information loss (though it is interesting for other reasons). It may be self-consistent, but that doesn't help because, to state the obvious, we don't live in AdS. I don't know what you mean by 'results extrapolate'. It's a non-continuous limit from a space with a (conformal) boundary to a space without one.

"I also don't understand why you say that remnants can be short-lived or non-degenerate. My understanding of remnants is that they store all the information left over after BH evaporation. If they were short-lived it seems like they would be equivalent to the ordinary evaporation-to-completion scenarios,"

Sure, that's exactly what they are, except that the strong interpretation of the BH entropy doesn't hold. Maybe one shouldn't call them remnants in this case, you are right. I do that just because most people in the community have no idea what the weak interpretation of the BH entropy is, but they know remnants.

"Would you mind explaining the large-volume proposal to me?"

The proposal is that the volume is large. More seriously, it's explained in the review I mentioned better than I can possibly do here. The point is simply that if the volume is large, there's no reason why the remnant's information should decouple in the EFT limit, hence they're not indistinguishable.

Sabine,

I don't understand at all why you say that AdS/CFT presumes the solution. The nature of the resolution is implied by AdS/CFT, which we believe for a host of completely unrelated reasons, both from the bottom-up and top-down. I would appreciate clarification on your stance here.

By reintroducing the coupling of the supergravity fields in the asymptotically flat region to the CFT on the branes, one undoes the near-horizon limit. Of course the theory that one gets from this is a theory with dynamical gravity, but the dynamics near the horizon are still described by the CFT, though now the radiation can escape to infinity. This is what I mean by extrapolation.

Whether or not we live in AdS seems completely irrelevant to me. If you propose to ignore the resolution of the paradox in AdS, where the nature of the resolution is a consequence (not a presumption) of the duality, you must believe that the resolution of the info. paradox is fundamentally different depending on the boundary conditions at infinity. To my knowledge there is no positive evidence to suggest this, and in addition there's the negative evidence I gave above. Let's suppose it was the case, though. There would have to be some mechanism that would tell the black hole (in its end stages of evaporation when it is far from the boundary and much smaller than the AdS curvature scale) whether or not it lived in AdS. This mechanism would have to be very nonlocal, and stretch into regions with low curvature. Do you have a proposal for it?

I'll spend some more time with your review later but I don't see how distinguishability of the remnant states affects the EFT argument. They would still have to be there in the theory and show up in transition amplitudes as well as the spectrum.

I also posted a separate response to Tim when I posted my response to you, which I don't see above. Let me know if that bit didn't make it through.

dark star,

Sorry, I had missed one of your comments, it should appear now.

In AdS/CFT you only look at fields that can be expanded around the boundary. If you'd want to say something about information loss/preservation, you should look at fields that have *no* expansion around the boundary. (And I know there's been some discussion about this. I am not aware though anything conclusive came out of it.)

In fact I do believe that the solution of the paradox is fundamentally different whether or not you assume you only have fields that can be expanded around the AdS boundary. Hence my reminder that the limit \Lambda \to 0 isn't continuous, and there's no reason to believe it is.

Be that as it may, it doesn't matter what I believe or you believe or anyone believes. There are different mathematically consistent solutions to this problem and we can discuss this forever back and forth and write papers about it and we'll not agree on anything. I don't think this is science any more. Best,

B.

dark star

I asked some questions about your argument, and the supposed flaw in my paper, but I cannot see any answer to them. So let me address what you say directly.

You complain that the Penrose diagram that is universally used when explicating the "paradox" is not to be taken seriously, so nothing can come of analyzing it. That is a very odd position to take. After all, the idea is to actually present a paradox. If some contradiction with basic principles arises from analysis of the diagram, then one can say that there must be something wrong with it, or else abandon a basic principle. Certainly, the "paradox" is often presented this way, as if fundamental GR principles (taking GR as exact) entail that the state on Sigma 2 must be mixed while fundamental QM principles say it must be pure. If this were correct, then we could conclude that either GR or QM has to be modified to deal with this case, and it becomes an important test case for the character of quantum gravity. But what I show is that there is no such conflict between GR and QM, taking the diagram seriously as a representation of the conformal structure. I take it you do not disagree with any of that.

I take it that Wald's simple case also establishes to your satisfaction that sometimes pure states evolve into mixed states even in the complete absence of any exotic or extreme space-time structure. And also that where there is a well-defined conformal structure, the criterion for the different sorts of evolution is clear: Cauchy-to-Cauchy is always pure-to-pure and information-preserving, while Cauchy-to-non-Cauchy always loses information and typically is pure-to-mixed. Do you dispute any of that?
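The Cauchy-to-non-Cauchy case can be made concrete with a minimal toy model (my sketch, not from Maudlin's paper): restricting a pure entangled state to an incomplete surface amounts to a partial trace over the missing degrees of freedom, and the restricted state is mixed.

```python
import numpy as np

# Pure Bell state |phi+> = (|00> + |11>)/sqrt(2) on the full Cauchy surface;
# think of qubit B as the degrees of freedom missing from the later, non-Cauchy surface.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_full = np.outer(psi, psi.conj())

# Purity tr(rho^2) is 1 for the full state: Cauchy-to-Cauchy evolution keeps it pure.
purity_full = np.trace(rho_full @ rho_full).real

# Restrict to qubit A by tracing out B: the state on the incomplete surface.
rho_AB = rho_full.reshape(2, 2, 2, 2)    # indices (A, B, A', B')
rho_A = np.einsum('ibjb->ij', rho_AB)    # partial trace over B

purity_A = np.trace(rho_A @ rho_A).real  # 1/2: maximally mixed

print(round(purity_full, 6), round(purity_A, 6))  # -> 1.0 0.5
```

This is only an analogy for field theory on curved spacetime, but it shows the mechanism: nothing non-unitary happens globally, yet the state accessible on the incomplete surface is mixed.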

If you don't dispute it, then here is the problem with your position. If, as you say, there is no well-defined causal structure in the high curvature area, then there is no longer a distinction between Cauchy and non-Cauchy surfaces. And without such a distinction, we don't have any criterion for when to expect pure-to-pure evolution vs. pure-to-mixed. If the causal structure is not well-defined, then we have no reason to expect any evolution to be pure-to-pure as opposed to pure-to-mixed. So then there just isn't any paradox at all. What, on your telling, is the paradox supposed to be?

Cont'd

You also have not understood my objection to the relevance of AdS/CFT. Again, I will grant that the boundary has an unproblematic conformal/causal structure, so we can identify Cauchy surfaces there. We begin with a pure state on a Cauchy surface C1 on the boundary. Let's grant that this maps to a pure state on Sigma 1 in the bulk. The pure state on the boundary evolves to a pure state on another Cauchy surface C2 on the boundary. Let's grant that this in turn represents some pure state in the bulk. So I am granting you things left and right that have not been proven, including that there is a correspondence at all. But all of this granting still does not get you to your conclusion. The question now is: what state in the bulk corresponds to the new pure state on C2 on the boundary? As far as I can tell, you just assert that it must be a pure state on Sigma 2 in the bulk rather than, say, a pure state on a disconnected surface such as Sigma 2 U Sigma 2in in the bulk. But by what principle are you entitled to that conclusion? Without having a detailed translation manual, you simply cannot draw such a conclusion. In fact, what you have is not a paradox but just a pile of ignorance about what is going on in the bulk.

As I understand it, my solution is not a "remnant" solution but a "baby universe" solution. You seem to think that there is some physical argument against such solutions. The only thing that looks like an argument is this claim: "The thing that leads to problems is having a lot of entropy in a very low-mass object." If that is your objection, then it is answered in the next version of the paper. Short answer: the arguments that attempt to connect entropy to mass and to information are all invalid. You will like that part of the paper even less than this. But it's not hard to show.

Sabine,

I agree about the irrelevance of our beliefs but I'm disappointed that you take this perspective, every positive argument I've given is supported by calculations in a UV-complete theory of quantum gravity. Apart from that, most everything else I've said has been either a logical deduction or a question. I'm not sure at all why you think this isn't science, but if that's your stance then it's probably not productive to discuss further.

Tim,

The idea is to understand what happens in the end stages of black hole evaporation, i.e. resolve whether the information escapes or not, and how. The idea is not to present a paradox for the sake of presenting a paradox -- we're physicists, not philosophers. I agree that taking the diagram seriously as a representation of the causal structure (which is emphatically not correct) would lead one to conclude that there is no paradox.

I also agree that in the absence of a well-defined Cauchy surface, such as in the high-curvature regime near the singularity, it's impossible to use Wald's criterion for pure-to-pure evolution (which itself is certainly true in non-gravitational theories). However this does not imply that pure-to-pure evolution does not occur, only that this justification is lost. This is why I keep repeating my AdS/CFT example: in that case, we do have Cauchy slices on the boundary, and since the boundary evolution preserves information the bulk must too. No paradox, just a lack of understanding of how the information gets out. I believe this is the consensus view in the community. (If you still want to harp on the well-definedness of bulk Cauchy surfaces, go back to my first example: in that case, one can explicitly construct bulk Cauchy surfaces at early and late times dual to the initial and final boundary Cauchy surfaces, since all grav. fields are weak at those times. In this example one can also see explicitly that there is no piece of the bulk Cauchy surface behind the horizon.)

You say you are granting me things left and right that have not been proven. It is often true that things are unproven, but nevertheless there is an overwhelming amount of evidence they are true, and no evidence that they are false. This is the case with AdS/CFT, and if it weren't very few people would take it seriously.

Let me briefly address baby universe vs remnant scenarios. I think the former are even easier to rule out holographically. You seem to be defining a baby universe as something with a disconnected Cauchy surface. This implies that once the baby universe forms, the bulk Cauchy surface must remain disconnected at all future times, otherwise one could retrodict from some point on the future Cauchy surface to some point between the two pieces of the past surface, in contradiction with the initial surface being Cauchy. Now, since the exterior piece of the Cauchy surface is connected to the boundary, the interior piece must be disconnected from the boundary. Therefore no signal can travel from the interior piece to the boundary, so the state on the boundary must be independent of the state of the baby universe. But the boundary state was dual to *everything* in the bulk before the black hole formed, and in this scenario after evaporation it is only dual to the degrees of freedom outside the baby universe, and this implies that the boundary evolution is not pure-to-pure.

If instead there's something more like a remnant that stays in contact with the boundary, then it's ruled out by the entropic reasoning you're about to disprove. It may be worth mentioning that the Bekenstein-Hawking formula has been proven in string theory, via the Strominger-Vafa counting of D-brane states that become black hole microstates at strong coupling. I hope you will also point out their error, or the problem with string theory at least, in your followup :)

dark star,

It seems like we are in agreement on some points. Let's see if we can make further progress.

You say: "This is why I keep repeating my AdS/CFT example: in that case, we do have Cauchy slices on the boundary, and since the boundary evolution preserves information the bulk must too. No paradox, just a lack of understanding of how the information gets out." This is, of course, question-begging since in the scenario implied by the Penrose diagram information is not lost and the Cauchy-to-Cauchy evolution is unitary, but the information doesn't get out. I know you don't want to take the diagram seriously, but it is at least worthy of note that if you do take it seriously there is no paradox, unitarity is not violated dynamically, and information is not lost. So even if the Penrose diagram is inaccurate, it provides an example of a certain kind of solution that you seem to be ignoring. That is, you are equating "information is not lost" with "information escapes", but you can have one without the other. Maybe the correct diagram, or whatever replaces a diagram in the strong gravity regime, affords the same solution.

In any case, if the Cauchy-to-Cauchy criterion cannot be applied in the strong gravity regime, it is worthwhile to figure out what could take its place.

I cannot follow your supposed refutation of the baby universe scenario. The idea, as I understand it, is that there is a duality between the surface and the bulk: for every state on one there is a state on the other such that the dynamics between the one set is isomorphic to the dynamics between the other. There is no need, in implementing such a duality, that any signals "travel from the interior piece to the boundary". It is not that the boundary and the bulk communicate with each other but that at the appropriate level of abstraction they mimic each other. As I have said, the relation between bulk states and boundary states must be extremely complicated and non-intuitive, because you are mapping between spaces of different dimensionality. There won't even exist any 1-to-1 continuous map between points on the boundary and points in the bulk. So there is no reason at all to think that the geometrical features of a set of points on the boundary (such as connectedness) must be carried by the map to similar features in the bulk. Even at early and late times, the dimensionality problem remains.

I must say, the more I look into the original Bekenstein papers that people cite the worse it becomes. FWIW, I was just raising some of these objections and Rovelli said that one shouldn't pay attention to Bekenstein, since the papers are so confused. But when I asked him for a clear, accurate account of the BH entropy/area law, he said he could not think of one off the top of his head. This seems to be a field where a lot of things are taken as well established, but not in the papers that first announced them, and nowhere else either. It is very curious.

I know it's a joke about string theory. But if there was no coherent conceptual foundation for the area/entropy law in the first place, I am not going to think it at all plausible that it is confirmed clearly by string-theoretic arguments.

>> in principle black holes can be created and subsequently annihilated in any particle collision as virtual particles

Indeed, but in any experiment we can do, e.g. at the LHC, the energy of such a virtual b.h. (with any reasonable contribution) would be well below the Planck mass i.e. far from the quasi-classical limit where the information loss problem is discussed.

Wolfgang,

1) Quantum effects of black holes are more pronounced the *lighter* the black hole is, not the heavier it is.

2) If it's virtual it can have any energy/mass.

dark star,

"Bekenstein-Hawking formula has been proven in string theory, via the Strominger-Vafa counting of D-brane states that become black hole microstates at strong coupling. I hope you will also point out their error, or the problem with string theory at least, in your followup :)"

Which presumes a bulk-boundary correspondence. As I said above, it's a circular argument. You put in x, you get out x. Please have a look at this paper. Hyperentropic cases exist in GR. Where are they in AdS/CFT? Answer: They aren't there because you have assumed they aren't there - they're not states that can be expanded around the boundary.

Having said that, that the BH entropy counts microstates also leads to the firewall problem. That's a major issue because it means you'll have to give up the equivalence principle or quantum mechanics, or both, whereas unifying them was what string theory was supposed to do in the first place. But giving up string theory is of course not an option, so it has to be fixed somehow.

Sorry, link to paper got lost, it's here: https://arxiv.org/abs/0706.3239v2

ad 1) In which sense can any particle fall into a microscopic b.h. with radius < Planck if its wavelength is much larger? So in which sense would a virtual b.h. pose an information loss problem?

ad 2) >> any energy

Yes, but the contribution of an off-shell virtual b.h. to any S-matrix element would be strongly suppressed (at least exponentially) for energies much larger than the collision energy, which is well below Planck. Therefore its contribution would for all practical purposes be unmeasurable.

Wolfgang,

1) The probability is non-zero.

2) If you know this so exactly, why don't you write a paper about it and publish it.

Wolfgang,

Thanks for the comment. The real issue, as you say, is the magnitude of any effect. Extensive detailed analysis of the GRW theory looking for testable empirical signatures of its massive failure of unitarity has not found any yet, so there is a very big gap between "There is loss of unitarity in scenario X" and "the loss would result in detectable differences in the phenomena that present technology could confirm". My intuition is exactly that the presence of virtual evaporating black holes in the Feynman diagrams would not contribute to any detectable difference in the predictions for scattering. I don't need a complete paper on the topic, but if you can expand a bit on point 2 for a non-expert that would be much appreciated.

Tim,

without a full understanding of quantum gravity (even with string theory one does not know how to handle black holes yet, fuzzballs vs ER=EPR) one can only make basic estimates.

e.g. Wick rotation suggests that the contribution of a black hole of mass m above the Planck mass to any Feynman diagram is suppressed by a factor exp( -k^2 ) or exp( -(m/E)^2 ) if E is the energy of the collision event.
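To give that factor a sense of scale (my rough numbers, for illustration only, not Wolfgang's): with m the Planck mass and E a typical LHC collision energy, exp( -(m/E)^2 ) is so small that it underflows any floating-point type, so one has to quote its logarithm.

```python
import math

# Assumed illustrative scales in GeV: Planck mass and ~13 TeV LHC collisions.
m_planck = 1.22e19
e_lhc = 1.3e4

# exp(-(m/E)^2) itself underflows to zero, so compute log10 of the suppression.
exponent = (m_planck / e_lhc) ** 2           # roughly 9e29
log10_suppression = -exponent / math.log(10)

print(f"suppression ~ 10^({log10_suppression:.2e})")
```

On these assumed numbers the suppression is of order 10 to the minus 10^29, which is the quantitative sense in which such contributions are unmeasurable for all practical purposes.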

>> complete paper

There are some basic estimates of proton decay due to virtual black holes,

see e.g. arxiv.org/abs/1703.10038

The expected lifetime (depending on assumptions, see table 1) is about 10^45 years for non-SUSY gravity with no extra dimensions - i.e. a factor of 10^11 larger than what we could currently detect.

ps: This comment is a replica of one that seemed lost (feel free to delete it if it is a double).

Bee,

When a Hawking radiation virtual particle pair is created at the event horizon, the particle falling towards the BH should carry information with it, and thus increase the amount of information within the BH.

For me it sounds like the amount of information in the BH is increasing when the BH is shrinking by Hawking radiation, even if no more mass is added to it.

Now, if we surround the BH with a spherical mirror (to make it into thermal equilibrium "with itself"), the escaped particles are re-captured by the BH, and the amount of the information in the BH increases without a limit. This sounds like it should not happen, right?

BR, -Topi

Topi,

The Hawking particles are entangled, which means the pair as a whole is in a pure state, which means if you throw the pair in it's as good as throwing nothing in. If you don't have the math, just imagine they annihilate each other.
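Bee's point can be checked with a small numerical sketch (mine, added for illustration): each member of an entangled pair looks maximally mixed, i.e. thermal, on its own, but the pair as a whole has zero von Neumann entropy, so re-absorbing both members adds no information.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy -sum p ln p over the eigenvalues of a density matrix."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]  # drop numerical zeros
    return float(-np.sum(p * np.log(p)))

# Entangled pair (|01> - |10>)/sqrt(2): outgoing and infalling quanta.
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho_pair = np.outer(psi, psi)

# One particle alone: trace out its partner -> maximally mixed state.
rho_one = np.einsum('ibjb->ij', rho_pair.reshape(2, 2, 2, 2))

print(von_neumann_entropy(rho_pair))  # ~0: the pair is pure
print(von_neumann_entropy(rho_one))   # ~0.693 = ln 2: one member looks thermal
```

The two-level "quanta" here are of course a cartoon of the actual field modes, but the entropy bookkeeping is the same.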

Bee,

Exactly as I thought it should be.

Now, lets change the setup so that we have a system of two black holes orbiting each other from a safe distance. The system is surrounded by a spherical mirror. If the masses equal, we again have thermal equilibrium.

Now a constant (>0) part of Hawking radiation from BH 1 is captured by BH 2, and vice versa. And this radiation is carrying information with it.

And now, if Hawking radiation is purely thermal (what's the exact definition, not entangled with anything else?), then both black holes should be gaining information without gaining mass, right?

Br, -Topi

Topi,

No, now they're entangled. You lack some very basic knowledge and should do some reading. I can't replace an elementary physics lecture on this blog.

Tim,

Sorry for dropping off for a bit. We do indeed seem to agree here and there.

First, let me address "equating "Information is not lost" with "information escapes"". If information is not lost, but does not escape, it has to go somewhere. These scenarios are usually called remnants or baby universes, depending on whether or not the missing info keeps interacting with the exterior of the black hole. It is certainly true that lacking knowledge of the dynamics of quantum gravity we cannot, without more input, rule out such scenarios. But we do have more inputs: effective field theory, and holography, as I discussed above.

"In any case, if the Cauchy-to-Cauchy criterion cannot be applied in the strong gravity regime, it is worthwhile to figure out what could take its place."

Sure, though now you're trying to solve quantum gravity.

"There is no need, in implementing such a duality, that any signals "travel from the interior piece to the boundary". It is not that the boundary and the bulk communicate with each other but that at the appropriate level of abstraction they mimic each other."

This is absolutely true, I was too rough in my description. It is possible (in fact generic) that bulk information in regions causally separated from the boundary is encoded nonetheless via entanglement. One needs the stronger arguments of holography, that remnants and baby universes alike would necessarily exist in an effective field theory as low-energy, high-entropy states, to rule them out definitively. (Without holography one might have argued that these objects had no EFT description, as Sabine appears to. In the absence of an EFT description it's not clear what the rules of the game are; but at any rate we are guaranteed an ordinary field theory description by AdS/CFT.)

"As I have said, the relation between bulk states and boundary states must be extremely complicated and non-intuitive, because you are mapping between spaces of different dimensionality."

Yes, and not just because you are mapping between spaces of different dimensionality; one of them has dynamical spacetime geometry, while the other does not.

"There won't even exist any 1-to-1 continuous map between points on the boundary and points in the bulk. So there is no reason at all to think that the geometrical features of a set of points on the boundary (such as connectedness) must be carried by the map to similar features in the bulk."

Doesn't affect the EFT argument but agreed. The map could in principle be very weird in the QG regime.

"I must say, the more I look into the original Bekenstein papers that people cite the worse it becomes. FWIW, I was just raising some of these objections and Rovelli said that one shouldn't pay attention to Bekenstein, since the papers are so confused. But when I asked him for a clear, accurate account of the BH entropy/ area law, he said he could not think of one off the top of his head. This seems to be a field where a lot of things are taken as well established, but not in the papers that first announced them, and no where else either. It is very curious.

I know it's a joke about string theory. But if there was no coherent conceptual foundation for the area/entropy law in the first place, I am not going to think it at all plausible that it is confirmed clearly by string-theoretic arguments. "

Carlo is right that the original Bekenstein argumentation is somewhat rough, but the conceptual foundation for the area/entropy relation is solid, and rigorous proofs of it in various forms have proliferated in the literature (for example, see the work of Aron Wall on the generalized second law). These proofs involve nothing more than ordinary general relativity and its extensions. It is a great success that string theory reproduces such a thoroughly-tested relation.

dark star,

Excellent! We have a lot of agreement here, and it is also reassuring that you mention Wall. This is work I didn't know and has separately been brought up to me recently, so I know where to look. My usual method is to look at the original papers first, such as Hawking and Bekenstein in this case, to get at the basic reasoning. As you know, my paper is largely a criticism of Hawking. If flaws in his original argument were pointed out at some point, he himself seems not to have noticed because he was still repeating them verbatim 30 years later.

Let's see if we agree about this. One solution to the supposed paradox is a "baby universe" scenario, in which the "missing information" is contained somewhere that cannot interact with the future part of the original space-time that contains the items that did not pass the event horizon. So I am defending such a scenario, of the kind that is acknowledged to solve the "paradox". My main point, then, is that this is exactly the solution one arrives at by taking the usual Penrose diagram seriously and taking both the fundamental principles of QM (pure-to-pure, unitary, information-preserving evolution in the appropriate circumstances) and of GR (information is preserved always and essentially only for Cauchy-to-Cauchy evolution) seriously. There is no conflict between QM and GR here if we take the diagram seriously; rather they jointly imply a "baby universe" solution. So if one uses the diagram in explicating the "paradox" (as is pretty universally done), then the presentation ought to lead to a baby universe scenario rather than anything called a "paradox". And if you don't think the diagram can be right in some essential way, and don't have anything better to use, then there is not really a paradox either: just a place where you have no idea what to say. A paradox usually means a situation where things you have reason to believe lead to conclusions you find hard to accept (e.g. the Banach-Tarski paradox, or Zeno's paradoxes, or Schrödinger's cat paradox), not just a situation where you don't have any clear notion what the right theory is.

Just a word about the Penrose diagram. The singularity in the diagram is a curvature singularity, and it is perfectly reasonable to think that GR must break down there. It is also not impossible that GR doesn't break down, and that the singularity is an indication that space-time just ends. But either way, it is not really the singularity that is the source of the supposed paradox. If you imagine "curing the singularity" by patching on a wormhole or something, you still get the disconnection of the Cauchy surfaces that is characteristic of a "baby universe" scenario. As I show, that does require something unusual at the Evaporation Event—either a point-like naked singularity or a failure of manifold structure—but the rest of the singularity in the diagram is uninvolved in the situation. Between the two possibilities, I would have thought that a breakdown of manifold structure would be the more plausible since most people seem to think that the manifold structure does not really exist anywhere: it is an approximation that breaks down at Planck scale. I myself have never seen a clear argument for that, but if one accepts it then the scenario EE-in does not require any fundamental novelty at all.

Con't.

So it seems to me that the main issue to discuss is whether AdS/CFT, if we accept it, rules out the baby universe scenario. It certainly can't be ruled out in any obvious way on empirical grounds. Unlike remnants, which are part of the connected space in which we do our experiments, the baby universes are disconnected from us. Assuming that the Hamiltonian has interaction terms that require spatio-temporal connectedness of the interacting entities, the baby universes will not molest our experiments or experiences. If we have reason to reject the baby universe scenario implied by the Penrose diagram it is not on straightforward empirical grounds (as it might be for remnants).

So it looks like we need to pursue this comment: "One needs the stronger arguments of holography, that remnants and baby universes alike would necessarily exist in an effective field theory as low-energy, high-entropy states, to rule them out definitively. (Without holography one might have argued that these objects had no EFT description, as Sabine appears to. In the absence of an EFT description it's not clear what the rules of the game are; but at any rate we are guaranteed an ordinary field theory description by AdS/CFT.)"

Of course, I want to argue something stronger: a baby universe, being disconnected from us after the evaporation, is a high-entropy (i.e. high information-bearing) *zero* energy state as far as we are concerned: its existence plays no role at all in energy-balance calculations we do. The usual way that the interior of a black hole enters into considerations of energy is via properties of the event horizon or via the ADM or Bondi masses. But once the event horizon has evaporated, neither of these makes any contribution to energy accounting for us. This is not to say that one cannot apply concepts of energy to the baby universe at all, from the inside rather than the outside as it were. This discussion takes us into the very concept of energy and the appropriate conditions for its deployment, which is an interesting question on which I have views you will probably find even more outrageous. So let's put that aside for the moment and only return to it if necessary.

Con't

So: does holography generally and AdS/CFT in particular rule out baby universe scenarios? Here I just can't see any argument at all. We grant that there is nothing weird about the dynamics on the boundary: the boundary always has a straightforward conformal structure, so the Cauchy-to-Cauchy criterion always applies, and we know that pure Cauchy states on the boundary always evolve into pure Cauchy states without losing unitarity or information. And let's grant (this is a serious concession without justification as far as I can see, but let's make it) that a pure state on some Cauchy surface on the boundary maps to a state like the state on Sigma 1 in the bulk, i.e. maps to a pure state on a Cauchy surface in the bulk. Now let the pure state on the boundary evolve to some pure state on a later Cauchy surface on the boundary. That state in turn maps to some state in the bulk, and we grant it will be a pure state. If we know that the relevant state is the state on Sigma 2 in the bulk then we would have an argument against baby universes. But it is exactly here that I don't see any argument at all. (I have been corresponding with another physicist who says that physicists just *want* the state to be the state on Sigma 2. I said that then we don't have a paradox, but just a disappointment. He granted this characterization.) If the baby universe scenario is correct, one would expect the later state on the boundary to correspond to a state on Sigma 2 U Sigma 2in in the bulk.

Can there be an EFT description of the post-evaporation baby universe on its own? I don't see why not. Can there be one of the post-evaporation disconnected piece outside the event horizon that corresponds to Sigma 2? I don't see why not again. Of course, you don't expect an EFT description everywhere: not at the Evaporation Event, for example. But since EFT is just an approximation that is to be expected. On this reading all that the holography argument buys is an assurance that there is some sort of description of the situation in the bulk that allows for unitary information-preserving evolution. But the description tied to the Cauchy surfaces in the Penrose diagram does exactly that. So it seems that the Penrose diagram describes a perfectly consistent physical story that implies the baby universe scenario, and there is no reason not to accept it. If so, then it is not a paradox at all.

If I have missed something here, let me know.

Cheers,

Tim

Tim,

The question of whether AdS/CFT permits "baby universes" or "disconnected Cauchy surfaces" has come up repeatedly in this context over the years. There is a standard argument that rules this out, which, for whatever reason, one hears in seminar talks and in private discussions, but is not stated clearly in the literature as far as I know. Here is the argument:

Suppose indeed that the final pure state in the CFT is dual to a bulk state defined on the union of Sigma_2 and the slice inside the horizon. One point that will be important here is that energy in gravity is defined on the boundary of a system, so the energy operator in the CFT is dual to bulk operators on the boundary of Sigma_2. Now, you are basically proposing that the bulk Hilbert space at late time is the tensor product of a Hilbert space on Sigma_2 and a Hilbert space on the slice behind the horizon. The trouble here is that the states behind the horizon will then carry no energy. That is, we can consider a bulk state consisting of vacuum on Sigma_2 tensored with some state behind the horizon. These states carry zero energy, and according to your proposal there are many such states if the black hole is large. So this implies that the CFT has a huge number of zero energy states, which is false since we know that the CFT has a unique vacuum state. The bottom line is that baby universes are ruled out in AdS/CFT because they would correspond to zero energy states, and there are no such states in the CFT.
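The counting at the heart of this argument can be made vivid with a toy model (my own illustration, with made-up dimensions, not anything from the thread): if the late-time Hilbert space really factored as H_out tensor H_in, with the Hamiltonian acting only on the outside factor, the ground-state degeneracy would be the full dimension of H_in rather than one.

```python
# Toy model: a Hamiltonian acting only on the "outside" tensor factor.
# Every state |vac_out> (x) |k_in> is then a distinct zero-energy state.
import numpy as np

dim_out, dim_in = 4, 5                     # made-up toy dimensions
H_out = np.diag([0.0, 1.0, 2.0, 3.0])      # outside Hamiltonian: unique ground state
H_full = np.kron(H_out, np.eye(dim_in))    # full H ignores the inside factor

eigenvalues = np.linalg.eigvalsh(H_full)
zero_energy_states = int(np.sum(np.isclose(eigenvalues, 0.0)))
print(zero_energy_states)  # 5 = dim_in, instead of the unique CFT vacuum
```

The degeneracy of the zero eigenvalue equals dim_in, which is the contradiction with the unique CFT vacuum that the argument exploits.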

As I said, I have heard talks at Santa Barbara by, I believe, Wald and Jacobson, where the proposal has been raised and refuted. I could point you to some recordings of the relevant discussion if you are interested.

AdS/CFT really does seem to require that the state on Sigma_2 is pure, but how that comes about in the bulk is quite mysterious.

Dear black hole guy,

Great! So this is exactly the kind of thing I need now. I have been searching through the literature for some good arguments, and am repeatedly coming up dry, and this is clearly stated and helpful. This is exactly what I hoped for when I say at the beginning of the paper that the result may be at least a clear statement of the paradox. It would, of course, still be odd that anyone thought there was a problem before AdS/CFT, but OK. And really odd that it's so hard to find a statement as crisp and clear as this in the literature.

So let's focus down on this claim.

First point:

We need to know that the energy operator in the CFT is dual to the energy operator in the bulk. Is that anything that has been established? And I mean here specifically an energy operator, not whatever it is that implements the dependency on time, or more precisely some t-coordinate. There is really no "time operator" in QM, but a relation is usually postulated between a Hamiltonian operator and "time translation" in some preferred co-ordinate system. All of these connections (between "energy" and some Hamiltonian operator, between a Hamiltonian and time-translation, what is meant by time-translation at all) have to be dealt with delicately here. So can you say a bit more about what is meant by "the energy operator"?

Second point:

The whole notion of "energy" in this context is not at all clear. (It is actually not at all clear in any context where there are no global timelike Killing fields, and there sure as hell aren't any here. That's another rant, but maybe I'll have to go into that as well in this paper.) I am guessing (correct me) that what you have in mind by "energy in gravity is defined on the boundary of a system, so the energy operator in the CFT is dual to bulk operators on the boundary of Sigma_2" is a reference to either the ADM mass or the Bondi mass, which you are then equating to the "energy" of the entire state on Sigma 2. And then you want to play the same game on the disconnected bit Sigma 2in. So what you want to argue (I apologize for the "you": I know you are just reporting this, so don't have to defend it) is that since the Bondi mass of Sigma 2 is equal to the Bondi mass of Sigma 1, and since the "total energy" (i.e. total Bondi mass) is globally conserved through "time", the Bondi mass of Sigma 2in must be zero. Is that it?

Let's pause in the argument here.

If this much is right, then we need to ask: are the Bondi (or ADM, but I think Bondi is better here) masses even defined in this situation? As far as I can tell, the answer is "no". They both require asymptotically flat space-times, and the technical requirements for that are pretty stringent and certainly not met for this case. (I mean: it requires much more than that the space-time becomes "closer to locally flat the further out I go".) So what exactly is this energy operator defined on the boundary supposed to be?

Con't.

Third point:

Whatever the energy operator is supposed to be, you are appealing to a principle of global energy conservation. That is being used even before the disconnection of the Cauchy slice occurs. That is, when you say "The trouble here is that the states behind the [horizon] will then carry no energy" I take it the thought is this: the Hawking radiation creates an energy flux going out from the event horizon. That energy has to "come from" somewhere, and the only place it can come from is behind the horizon. Hence the energy (and mass) behind the horizon must shrink, and hence the horizon itself shrink. By 'you', of course, I here mean Hawking: that is the argument he gives to the conclusion that the black hole shrinks in the first place.

I actually think that this argument cannot be defended, but had planned to wait on a subsequent paper to take that up. That is, I don't think one can in any rigorous way argue that even if black holes radiate they shrink. That's going to be an even more annoying paper than this one, and I had hoped not to take that up as well here. So let's just grant it for the moment.

So as the event horizon shrinks, the energy content on the bit behind the horizon shrinks, and if the event horizon shrinks to zero and disappears entirely, the energy content of Sigma 2in must be zero. The baby universe has zero energy.

I'm going to grant you all of this for the sake of argument. Now look at the next step really carefully. You now ask us to consider a vacuum state on Sigma 2. But that makes no sense at all. Sigma 2 has inherited all of the energy (however that has been defined) from Sigma 1, and that is presumably not zero. But you can't just somehow delete all the energy from Sigma 2 and still have it be Sigma 2! I mean, this is GR after all. If you radically "change the energy" on a slice you have to change the geometry of the slice, but then it's not the same slice. It's just not kosher to think of the slice as somehow "fixed" and the energy content nonetheless variable. The suggestion to "consider a vacuum state on Sigma 2" is self-contradictory.

Fourth point:

Let me grant you the self-contradictory bit (I'm being pretty generous here). OK: so we have a zero energy state on Sigma 2in and a vacuum state (and hence zero energy state, I assume?) on Sigma 2. Furthermore, there are potentially lots of distinct zero energy states that Sigma 2in can be in, so it can carry the information. So there are lots of distinct zero energy states on Sigma 2in U Sigma 2. This assumes that the "energy" attributed to Sigma 2in can just be added to the "energy" attributed to Sigma 2. Further, we have by assumption that Sigma 2in U Sigma 2 is dual to a Cauchy surface on the boundary. At this point it is essential that the dual state in the CFT is also a vacuum state of the CFT. Is there any argument for that? We know the state in the bulk is dual to *something* on the surface. ("Know" granting AdS/CFT, of course, which is not proven.) Do we know it must be a vacuum state in the CFT? Has that much of the "dictionary" been worked out?

Con't

Fifth point:

This whole conflation of vacuum with zero energy with "unique state" appears to contradict the way everyone talks about evaporating black holes in the first place. The usual story about how the event horizon manages to shrink at all is this: paired with the Hawking radiation coming off the event horizon there is, as it were, anti-Hawking radiation going into the event horizon. This anti-Hawking radiation carries negative energy (relative to some definition of "energy") and so "cancels out" some positive energy behind the horizon. This is basically the story of how the global conservation of energy that Hawking presumes in his original paper is implemented. But this very story allows for multiple complex states of zero energy in the bulk, namely all the different states where the "positive energy" content behind the horizon exactly balances the "negative energy" content. There should be lots and lots of such states. So the chain of inferences zero energy -> vacuum -> unique is already rather strongly denied in the bulk. Even if the vacuum state on the boundary is unique, its uniqueness just cannot imply uniqueness of "zero energy" states in the bulk. This observation does not depend on the existence of disconnected Cauchy surfaces in the bulk.
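The point can be put in a two-mode toy form (my own construction, purely illustrative): give one mode energy +w and another energy -w; then every occupation pair (n, n) has total energy zero, so "zero energy" picks out many states, not one.

```python
# Toy count: one positive-energy mode (+w) and one negative-energy mode (-w).
# Zero total energy does not single out a unique state.
w = 1.0
nmax = 10  # truncation of the occupation numbers
zero_energy = [(n_plus, n_minus)
               for n_plus in range(nmax)
               for n_minus in range(nmax)
               if abs(n_plus * w - n_minus * w) < 1e-12]
print(len(zero_energy))  # 10 distinct zero-energy pairs, one per (n, n)
```

Without the truncation the count is infinite, which is the sharpest version of the worry: "zero energy" alone implies nothing like uniqueness.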

Any help on any of these points would be much appreciated.

Thanks,

Tim

a) The vacuum state is never unique but a matter of choice. There are infinitely many different vacuum states for the black hole spacetime. The question is what requirements are necessary to choose a unique vacuum and which one is physically meaningful.

b) The requirement that the stress-energy be regular at the horizon does *not* single out a unique vacuum state, it singles out a spectral distribution of the Hawking radiation. This confusion is the whole reason for the supposed firewall problem. I explained that in detail here, but the important proof of regularity is in Candelas' paper (ref [12]).

Tim,

Thanks for your response -- I think further discussion will be illuminating. As a general comment, I think your paper gives a nice overview of the argument for information loss, but leaves out most of the arguments against. I can elaborate on this later, but one such argument comes from AdS/CFT, as we are now discussing.

First topic: energy. In GR, to make sense of energy you need to consider spacetimes with boundary conditions that admit some asymptotic symmetries including time translation. In our case, we need asymptotically AdS boundary conditions. The full spacetime has no Killing vectors in general, but as you go out to the boundary you have some approximate Killing vectors (all of this can be made rigorous and precise of course). AdS itself is invariant under the conformal group, so this is our asymptotic symmetry group. By a standard procedure, one can then construct operators out of the metric near the boundary whose commutation relations are those of the conformal algebra. One of these operators is the Hamiltonian, and it is rigorously conserved under time evolution. Again, I stress that it is built entirely out of the boundary behavior of the metric. Now, the boundary CFT also has a conformal symmetry group, and corresponding operators which commute to the conformal algebra. So we identify the corresponding gravity and CFT operators. Is this unique? Yes: e.g. anything you add on the gravity side which is built out of the gravitational field solely in the interior will vanish by virtue of the constraint equations. There are no other operators on either side that commute to the conformal algebra and are nonvanishing on-shell.

Next, now that we have a uniquely defined Hamiltonian, we can ask for the ground state of the system. For, say, N=4 super Yang-Mills theory it is definitely the case that there is a unique lowest energy state, i.e. a unique vacuum. The vacuum is invariant under the full conformal group. Similarly, in the bulk there is a unique spacetime invariant under the full group, namely exact AdS. So we identify these states. Furthermore, we understand very well the bulk-boundary mapping for states that in the bulk are small perturbations around AdS, basically a gas of gravitons and other particles. The energy spectra etc. match up. What is not well understood is the mapping when the state become energetic enough that self-gravitation becomes important, e.g. for black holes.

cont..

cont

Coming back to the case of black hole evaporation, starting from pure AdS, we can prepare an initial state of ingoing quanta such that we understand the bulk-boundary mapping. A black hole then forms and evaporates and we examine the system at "late time", allowing for the presence of Sigma_2 and Sigma2_in. By my comments above, the energy computed in the CFT will agree with the energy computed in the bulk as an integral over the boundary of Sigma2. This is a rigorous result that follows because energy is exactly conserved on the two sides and we had agreement to begin with.

Now, the state on Sigma2 is assumed to be some diffuse gas of quanta (I can elaborate on this if necessary, but it follows as long as the total energy is not too large). The question is whether the CFT Hilbert space is somehow dual to a tensor product Hilbert space on Sigma2 plus Sigma2in. If that is the case, I can then perturb the state on Sigma2in in various ways, for example by adding some quanta, while keeping everything on Sigma2 fixed. In particular, the energy on Sigma2 remains fixed simply because the energy is built out of the asymptotic gravitational field on Sigma2, and that has been held fixed. So the prediction is that there is a huge extra degeneracy of states corresponding to adding excitations on Sigma2in. This prediction is not borne out on the CFT side.

To make this even sharper, on Sigma2 I can act with destruction operators to remove all the quanta on Sigma2, and since this is the regime where I well understand the bulk-boundary map, I can do the same on the CFT side. So now the total energy is that of the vacuum, hence the CFT must be in the vacuum state. As above, this is a unique state, so it is not possible to say that the CFT is dual to the above tensor product. It predicts a huge degeneracy that does not exist in the relevant CFTs.

The conclusion is that the CFT is dual to states of Sigma2 alone, and hence the state on Sigma2 must be pure, since the CFT state is pure. There could be some additional Hilbert space on Sigma2in, but the CFT has no access to it.

So this rules out baby universes in the standard AdS/CFT examples (where the CFT has a unique vacuum) in the sense that whatever is going on in the baby universe completely decouples from the part of the physics that the CFT represents.

I don't think I addressed all your points, but perhaps it is best to pause at this stage.

black hole guy,

You write " AdS itself is invariant under the conformal group..." What does this mean? Not every conformal transformation is an isometry of AdS.

And in what sense can any operator that lives near the boundary be identified as the Hamiltonian? It is not generating the temporal evolution of the whole state.

More questions later.

Tim

"What is not well understood is the mapping when the state become energetic enough that self-gravitation becomes important, e.g. for black holes."

black hole guy, when there is a black hole in the bulk, i.e, there is a horizon, what is the corresponding structure in the CFT? e.g., how do we know that the dictionary of states does not require a disconnected boundary component when there is a horizon in the interior?

Tim,

1) The "conformal group" refers to that of the boundary CFT, which is the same group as the isometry group of the dual AdS space. That is, d (space) + 1 (time) dimensional AdS has isometry group SO(d+1,1), which is also the symmetry group of (d-1) space + 1 (time) dimensional CFT.

2) That is how energy works in the canonical formulation of gravity, as originally pointed out by Regge and Teitelboim many years ago. The Hamiltonian takes the form of a volume integral over a Cauchy surface plus a boundary term. The volume integral generates time evolution of local fields via commutation. But the volume terms are also proportional to the constraint equations, so they vanish on-shell. Only the surface term survives on-shell. Therefore, the eigenvalues of the energy operator, and its matrix elements, can be determined by measurements at the boundary.
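The structure just described can be put in symbols (a standard schematic sketch, not tied to any one paper's conventions): in canonical GR the generator of an asymptotic time translation takes the form

```latex
H[N, N^i] \;=\; \int_{\Sigma} d^3x \,\big( N\,\mathcal{H} + N^i\,\mathcal{H}_i \big)
\;+\; \oint_{\partial\Sigma} dS \; B[N, N^i; h_{jk}],
\qquad \mathcal{H} \approx 0, \quad \mathcal{H}_i \approx 0,
```

where \mathcal{H} and \mathcal{H}_i are the Hamiltonian and momentum constraints and B is the boundary term fixed by requiring a well-defined variational principle. On-shell the volume integral vanishes, so the numerical value of the energy is carried entirely by the surface integral, which is the sense in which it is measurable at the boundary.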

b h g,

I went and looked at Regge and Teitelboim from 1975 and at that point they admit that the canonical theory cannot actually be used to calculate anything. Since the results you mention are not derived there I can't tell what the assumptions of such a framework are. Do you have a reference to the papers with the actual results? I would like to understand the assumptions involved.

Thanks,

Tim

I seem to have missed the memo... why does nobody choose standard coordinate time as a time coordinate? And when was that decided? (I suspect it was before Hawking radiation was postulated)

Tim,

I am not sure I understand the question/complaint. R&T explain how the structure of the Hamiltonian I described comes about. This is also explained, for example, in appendix E of Wald's textbook. The corresponding story in AdS was first worked out (I think) in the 1985 paper by Henneaux and Teitelboim, "Asymptotically AdS spaces". A pedagogical review of the general idea (though just touching on the full case of GR) is given in arXiv:1601.03616 by Banados and Reyes, for example. A version of this story is common to all gauge theories and generally covariant theories, the simplest example being the electric charge generator in Maxwell electrodynamics. The charge generator is the sum of a volume integral, which vanishes on-shell using Gauss' law, and a boundary term which measures the electric flux. At a much more technical level, see the 1999 paper by Wald and Zoupas which considers generally covariant theories in great generality and rigor.
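The Maxwell analogue mentioned above can be written in one line (a standard textbook sketch): the generator of a global gauge transformation is

```latex
Q \;=\; \oint_{\partial V} dS_i\, E^i \;-\; \int_{V} d^3x \,\big( \partial_i E^i - \rho \big).
```

On-shell the volume piece vanishes by Gauss' law, leaving Q as the electric flux through the boundary; equivalently, by the divergence theorem, Q = \int_V \rho \, d^3x, the total charge. The GR Hamiltonian has exactly this structure, with the flux integral replaced by the surface term discussed above.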

I haven't read R&T in years, and I am not sure what the comment about nothing being calculable refers to. You can definitely use this formalism to explicitly compute the energy/generator of time translation as a surface integral at infinity.

P.S. In my previous comment, note that I should have written the symmetry group as SO(d,2) rather than SO(d+1,1), assuming we have Lorentzian signature.
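A quick arithmetic cross-check of the group matching (a small script of mine, not from the thread): the isometry group SO(d,2) of (d+1)-dimensional AdS has exactly as many generators as the conformal group of a d-dimensional CFT.

```python
# Consistency check: dim SO(d,2) equals the number of conformal generators
# in d dimensions (translations + rotations + dilatation + special conformal).

def dim_so(p, q):
    """Dimension of SO(p, q): n(n-1)/2 with n = p + q."""
    n = p + q
    return n * (n - 1) // 2

def dim_conformal(d):
    """d translations + d(d-1)/2 rotations + 1 dilatation + d special conformal."""
    return d + d * (d - 1) // 2 + 1 + d

for d in range(2, 11):
    assert dim_so(d, 2) == dim_conformal(d)

print(dim_so(4, 2))  # the AdS_5 / 4d CFT case: 15 generators
```

The counting works for every d, which is the algebraic core of the identification of asymptotic symmetries on the two sides.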

Arun,

Any "disconnected" structure in the bulk runs into the same problem, assuming by "disconnected" you mean that the Hilbert space has a tensor product structure. The degrees of freedom in the "internal" factor of the Hilbert space commute with the Hamiltonian of the "external factor". But the latter Hamiltonian is identified with the CFT Hamiltonian, and in a CFT we don't have degrees of freedom that commute with the Hamiltonian.

"To make this even sharper, on Sigma2 I can act with destruction operators to remove all the quanta on Sigma2, and since this is the regime where I well understand the bulk-boundary map, I can do the same on the CFT side."

Do these destruction operators have to be defined on a complete Cauchy surface? If yes, this argument doesn't work, because we don't know yet that Sigma2 is a complete Cauchy surface.

Ambi Valent,

It doesn't cover the whole space-time.

Black Hole Guy,

I just need a good reference I can get to. (I'm traveling and so don't have Wald to hand.) The R & T paper I did find just did not have the content you seem to be referencing. Maybe here's a quick question: the conditions for being asymptotically flat in Wald are quite rigorous, and would not be fulfilled in any realistic scenario of black hole formation. Are the conditions for being asymptotically AdS similarly rigorous? If so, I don't see how any mathematical results about asymptotically AdS space-times can speak to our situation. I mean, it is pretty hard for there to be a well-defined way to add a boundary to an open space-time in the way one does in an asymptotically flat space-time.

Black Hole Guy,

Above you said to Arun that "this is the regime where I well understand the bulk-boundary map". Can you elaborate? Since it is a map between states of different dimensionalities, it has to be a mess. What do you mean that you well understand it?

Tim

Tim,

This discussion is in danger of fragmenting into too many separate directions so I will try to stay focussed. Let's not get into issues of asymptotic flatness, since they are not directly relevant here. Except I will note that the R&T paper I was referring to is "Role of surface integrals in the Hamiltonian formulation of GR", which, as the title indicates, most definitely contains the content I was referring to. Presumably you were looking at a different paper.

In any event, the case of energy in asymptotically AdS spacetimes is actually easier to handle than in asymptotically flat space. There is a large and careful literature on this, some of which you will find reviewed in arXiv:1211.6347 by Marolf et. al. Yes, there is a well defined way to define the boundary of AdS, and the energy (and other symmetry generators) are given as integrals over a spacelike surface in this boundary. This has all been worked out in great detail and is pretty rigorous by the standards of theoretical physics.

Now here are a few comments regarding my statement that we have a good understanding of the bulk-boundary map in the regime where the bulk state looks like a diffuse gas of quanta. If you look in any reference about large N gauge theories you will see that in this limit local gauge invariant operators act like "generalized free fields" in the sense that their correlation functions factorize into products of 2-point functions. The low energy Hilbert space is obtained by acting with some number of such operators on the vacuum state. Further, each operator can be Fourier expanded (I use Fourier expansion in the general sense, as an expansion in a complete set of functions) and the Fourier modes are all independent, essentially because the operator is a product of "elementary fields" and so doesn't obey a linear wave equation. The total energy is additive due to the factorization property.

Now, in the bulk one has a collection of weakly interacting fields on AdS which one canonically quantizes in the standard manner. The space of normalizable solutions is in one-to-one correspondence with the space of Fourier modes just mentioned. The bulk field does obey a linear wave equation, and this fixes the "radial momentum" in terms of the other Fourier modes. Each mode is associated with a creation operator that one applies to the vacuum to build up the Hilbert space. This establishes the isomorphism between the two descriptions. This is all well known and discussed in many places.

There is no mismatch between the dimensionalities of the two sides for the reason I already stated: in the bulk the space of solutions is cut down by the wave equation, whereas boundary operators do not obey a linear wave equation because they are products of gauge-variant fields. The reason why black holes are harder to understand is that in this regime the relevant CFT states are those in which the gauge invariant operators "deconfine" into their constituents.
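The factorization property of generalized free fields can be checked numerically in the simplest possible setting (a single Gaussian variable, my own toy example, not the CFT itself): for a Gaussian variable the 4-point function is the sum of the three Wick pairings of 2-point functions, so <x^4> = 3 <x^2>^2.

```python
# Numerical Wick check for a Gaussian "field": <x^4> / <x^2>^2 -> 3.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)  # samples of a Gaussian variable

two_point = np.mean(x**2)
four_point = np.mean(x**4)
ratio = four_point / two_point**2
print(ratio)  # close to 3.0, one term per Wick pairing
```

For a genuinely free or generalized free field the same pairing structure holds mode by mode, which is exactly the factorization of correlators into 2-point functions invoked above.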

The following are attempts to create a map between the CFT and the AdS bulk around a black hole:

http://hep.physics.uoc.gr/mideast7/talks/thursday/Papadodimas.pdf

and

https://arxiv.org/abs/1211.6767

----

Also:

https://arxiv.org/pdf/1402.6378.pdf

"The eternal hole in AdS has two asymptotically AdS boundaries, so the usual notion of AdS/CFT duality [9] suggests that the eternal hole spacetime is dual to two CFTs. The two AdS boundaries are not connected, so we have two disconnected CFTs."

Do these two disconnected CFTs vanish when the black hole is not eternal?

I agree with essentially everything black hole guy has said, and I don't want to generate any extraneous threads of discussion. But it might be useful to point out hep-th/0606141, which gives an explicit formula relating a bulk local field to a collection of CFT operators (to leading order in G_Newton, though this can be improved). AFAIK this construction breaks down a short distance (~the scrambling time) inside the horizon, but it is functional and unambiguous for fields on the outside. This is what I had in mind in my holographic example involving throwing stuff into AdS from the boundary, and explicitly provides the bulk-boundary map in the diffuse gas regime mentioned in BHG's argument.

The point is that we understand in some detail the boundary representation of fields outside the horizon, which is enough to conclude that any remnant/baby universe would have to be a low-energy, high-entropy state in the CFT.

dark star,

Yes, in some sense you can't avoid low-energy high-entropy states. I'll go further: in the scenario derived in my paper, the baby universe is high-entropy (in some sense of "entropy") and zero-energy (in some sense of "energy"). But that has nothing to do with the particulars of this solution: it is endemic to the whole set-up, starting with Hawking.

Think of the story about how the black hole manages to lose mass and shrink in the first place. With respect to some way of defining positive and negative energy modes of the field, positive energy Hawking radiation comes from the vicinity outside the event horizon and propagates to null infinity. And negative energy anti-Hawking radiation comes from the vicinity inside the event horizon and propagates to the singularity. The black hole loses mass not by having the matter that originally fell in somehow disappear, but by counteracting its positive energy with negative energy. And in the appropriate sense, a combination of positive energy matter and negative energy radiation has more entropy (more degrees of freedom) than the positive energy matter alone. As the total energy goes down the total entropy (in the relevant sense) goes up. In the baby universe this ends when the horizon disappears, and the resulting baby universe has zero energy (with respect to the definition of energy that has been in use) and high entropy.

Note that the shrinking-total-energy-and-growing-total-entropy is essential to the way the shrinkage is understood to occur. So no matter how the process ends, or where the initial information "goes", you had better be able to deal with high-entropy low-energy states. If you think those are problematic then you have problems from the get-go. If this is a problem for the CFT then everybody has it.

"And in the appropriate sense, a combination of positive energy matter and negative energy radiation has more entropy (more degrees of freedom) than the positive energy matter alone. As the total energy goes down the total entropy (in the relevant sense) goes up."

The interior hawking modes do have negative energy with respect to the time-translation generator at infinity, but just because you can combine positive and negative numbers in many ways to make zero doesn't mean that there are actually a huge number of degrees of freedom inside the small black hole. If you demand that spacetime be vacuum at the horizon, the spectrum of excitations is completely fixed, with states of any given energy populated thermally.

We can be precise about the time-evolution of entropy in Hawking's description of the evaporation scenario. The Hawking process does not produce any entropy since the interior and exterior modes are in a pure state. It is true that the Hawking process leads to growing entanglement between the interior and exterior, since the modes are pairwise entangled.
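The bookkeeping here, a globally pure state with growing interior–exterior entanglement, can be illustrated with a toy numpy calculation, modeling each Hawking pair as a maximally entangled qubit pair (my toy construction, not anything from the thread or the literature):

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy -Tr(rho log rho), ignoring numerically zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

# One Hawking pair as a toy two-qubit state (|00> + |11>)/sqrt(2).
# Qubit A = exterior mode, qubit B = interior partner.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# n pairs: the global state is a tensor product of Bell pairs -> still pure.
n = 3
psi = bell
for _ in range(n - 1):
    psi = np.kron(psi, bell)

rho_global = np.outer(psi, psi.conj())

def reduce_to_exteriors(psi, n_pairs):
    """Partial trace over all interior qubits, ordering (A1,B1,A2,B2,...)."""
    t = psi.reshape([2] * (2 * n_pairs))          # one 2-dim index per qubit
    ext = list(range(0, 2 * n_pairs, 2))          # exterior indices first
    intr = list(range(1, 2 * n_pairs, 2))         # interior indices last
    t = np.transpose(t, ext + intr).reshape(2**n_pairs, 2**n_pairs)
    return t @ t.conj().T                         # rho_ext = Tr_int |psi><psi|

rho_ext = reduce_to_exteriors(psi, n)

print(von_neumann_entropy(rho_global))  # ~0: the global state stays pure
print(von_neumann_entropy(rho_ext))     # n*ln(2): entanglement grows per pair
```

Each emitted pair adds ln 2 to the exterior entanglement entropy while the global state stays exactly pure. In the real Hawking process the pairs are two-mode squeezed states rather than Bell pairs, but the bookkeeping is the same.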

Extrapolating the Hawking process down to the end-stages of BH evaporation, "shrinking-total-energy-and-growing-total-entropy" does naively seem essential, but there are two possibilities for the true physics, depending on whether or not the information escapes. If info does not escape and entropy keeps increasing, then you end up with either a baby universe or a remnant. If the information eventually gets out, the entanglement between inside and outside needs to start going down at some point in the evaporation process, by unitarity. This could happen for example via modification of the Hawking process (which would mean that the state at the horizon would have to be nonvacuum), though there have been other suggestions. But it's not remotely inevitable that you have to deal with high-entropy low-energy states -- which is good, since as discussed above they are ruled out sharply by AdS/CFT.

dark star,

I think you need to slow down a bit. Your post appears to be self-contradictory. If, as you say (and is correct) the exterior and interior modes are entangled, then it is not true that the interior and exterior modes are in a pure state. They are each in mixed states, of course.

This is also a place where you have to be very careful about which entropy you are talking about. You are jumping between the von Neumann entropy and the statistical mechanical entropy and the Shannon entropy and the thermodynamic entropy without keeping track of what is relevant. That's why I put "in the relevant sense" above. This is characteristic of this literature going back to Bekenstein. Of course the universal quantum state is always pure and has von Neumann entropy zero. That says nothing at all about information capacity or degrees of freedom or thermodynamic entropy. Nothing.

I can't follow what you mean by "as discussed above". In what black hole guy posted?

black hole guy,

We seem to have a miscommunication here. The mismatch of dimensionality I am talking about is purely spatio-temporal dimensionality, not the dimensionality of the space of solutions. Take a Cauchy slice on the boundary. It should be in a pure state. Let's grant that, by the translation manual, that maps to some pure state in the bulk. The question was: what is the spatio-temporal structure of that bulk state? Here is a plausible guess: states on Cauchy surfaces in the boundary map to states on Cauchy surfaces in the bulk. Then if the baby universe scenario is correct (which is what I have been arguing is implied by taking the Penrose diagram seriously), all we get is that the state on a connected surface in the boundary maps to a state on a disconnected surface in the bulk. If the relevant state in the bulk is disconnected, then it just won't be a state of a diffuse weakly interacting gas. It may be on Sigma 2out, which is just full of Hawking radiation, but it certainly isn't on Sigma 2in. So your argument begs the question at the outset. If the baby universe scenario is right, you are not in the regime where the bulk/boundary map is well understood in the sense you defend.

Tim,

Sorry, perhaps my language was confusing. A Bell pair is in a pure state. Either half of it is in a mixed state, but the state of both together is pure. Same goes for the Hawking quanta.

It is indeed quick to move between the various entropies, but the entanglement entropy is a good one to use, as it must go to zero at the end of evaporation by unitarity, unless there's a remnant/baby universe. The thermo. entropy is less interesting since it involves coarse-graining by definition and so does not have to decrease, while the others are related to entanglement.

BHG gave an argument from holography that remnants must be low-energy high-entropy objects in the field theory. The problems with low-energy high-entropy objects in field theories are clearly discussed in every paper on remnants I originally linked you and Sabine to.

I also don't follow your counterargument to BHG. You don't need to understand the details of the map to conclude that remnants/baby universes are zero-energy objects, this follows just from the boundary nature of the gravity hamiltonian. As I understand, he was trying to clarify to you how the duality works (in the diffuse gas scenario where we have an explicit construction) given the apparent mismatch between dimensionalities of the Hilbert spaces, since that seems to be a point of concern for you. The argument that remnants/BUs are high-entropy low-energy states requires only the observation that any stuff on Sigma2_in cannot possibly make a contribution to the CFT energy, since the bulk hamiltonian is a boundary term on-shell and you posit that Sigma2_in is geometrically disconnected from the boundary. Any ignorance of the details of the map does not affect this argument.

Tim,

Following up on dark star, let me again summarize your scenario and then explain why it is ruled out:

Scenario: At early time we have a pure state in the CFT and in the bulk, and the bulk Hilbert space lives on a connected Cauchy surface. At "late times" the bulk Hilbert space lives on a disconnected Cauchy surface with components Sigma2 and Sigma2in, and the state is a pure (and entangled) state in the tensor product Hilbert space, with one tensor factor associated to each connected component of the Cauchy surface. Also, at late times the CFT is in a pure state. There is no paradox if the CFT Hilbert space is the same as the tensor product Hilbert space in the bulk, with the pure CFT state mapping to the pure bulk state in the tensor product.

Refutation: The (unique) energy operator in the bulk is built out of the gravitational field on the boundary of Sigma2 (this is a rigorous statement as I have explained) and it is mapped to the energy operator in the CFT. Therefore, excitations localized in Sigma2in carry zero energy. More precisely, I can act with any operator on Sigma2in, and it will commute with the energy operator, since the latter lives on Sigma2. There is therefore a huge degeneracy of states -- many states with the same energy. This contradicts what we know about the CFTs that arise in explicit AdS/CFT examples. This rules out the scenario.
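The degeneracy argument in this refutation can be mimicked in a small numpy toy model (dimensions and matrices invented purely for illustration): if the Hamiltonian acts only on the "boundary" factor of a tensor product, then any operator on the other factor commutes with it, and every energy level is degenerate with multiplicity the dimension of that factor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hamiltonian acting only on the "boundary" factor (dim 4),
# tensored with the identity on a "baby universe" factor (dim 3).
d_bdy, d_baby = 4, 3
h_bdy = rng.normal(size=(d_bdy, d_bdy))
h_bdy = (h_bdy + h_bdy.T) / 2                   # make it Hermitian
H = np.kron(h_bdy, np.eye(d_baby))              # H = h_bdy (x) 1_baby

# Any operator acting only on the baby factor commutes with H ...
A_baby = np.kron(np.eye(d_bdy), rng.normal(size=(d_baby, d_baby)))
print(np.linalg.norm(H @ A_baby - A_baby @ H))  # ~0: exact commutation

# ... and every energy level is (at least) d_baby-fold degenerate.
evals = np.sort(np.linalg.eigvalsh(H))
print(evals.reshape(d_bdy, d_baby))  # each row: one level repeated 3 times
```

A CFT Hamiltonian with a discrete, non-degenerate spectrum cannot look like this `H`, which is the content of the refutation.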

Note that this doesn't rule out baby universes, but it does say that their Hilbert spaces are not part of the CFT, and hence they are irrelevant to the mechanism by which the CFT manages to maintain purity of the state as expressed in bulk language.

Again, this is why there is a paradox: your scenario is indeed what low energy field theory in the bulk suggests, but it cannot be realized in an AdS/CFT setup where the CFT lacks a huge degeneracy of states, as in the cases we understand.

dark star wrote:

"This could happen for example via modification of the Hawking process (which would mean that the state at the horizon would have to be nonvacuum), though there have been other suggestions."

In my very humble opinion, if anyone proved that the state at the horizon had to be nonvacuum in AdS/CFT, I would take it as a reductio ad absurdum proof that the AdS/CFT black hole theory has a big hole in it.

Arun,

"if anyone proved that the state at the horizon had to be nonvacuum in AdS/CFT, I would take it as a reductio ad absurdum proof that the AdS/CFT black hole theory has a big hole in it."

That's what the firewall paper did...

black hole guy,

So let's really focus in on this energy operator that lives on the boundary. There is, on the one hand, some operator that lives on the boundary that serves as the Hamiltonian of the CFT. That operator is dual to some operator in the bulk. You call this the "(unique) energy operator in the bulk". I'm not sure where the "(unique)" comes from: I have been reading Marolf and he does not claim to prove it is unique. Maybe this is where we are supposed to grant that the CFT is dual to the gravity theory in the bulk, in the sense of the existence of an isomorphism between a complete set of states and operators on one and a complete set of states and operators on the other. So what is this dual operator in the bulk?

It is interesting to note that there are some "energy" operators for the bulk that also live on the boundary: the ADM mass and Bondi mass operators. Let's suppose, for the sake of the point I am making, that the Hamiltonian of the CFT is dual to one of these. (As far as I can tell, it could even be one of these operators that *is* the Hamiltonian of the CFT.)

Now: as soon as one allows for a definition of energy that permits negative energy states, one should expect a huge degeneracy of zero energy states: states with exactly as much positive energy as negative energy particles. This sort of scenario is actually essential to the whole black-hole-evaporation narrative. In the baby universe scenario, the baby universe "detaches" (as it were) at the Evaporation event, after which there is zero interaction between the states on Sigma 2in and Sigma 2out. Each will be in a mixed state at that point, and only the joint state will be pure.

There will indeed be a huge—probably infinite—degeneracy of zero-energy states. But so what? If there is no interaction term they won't have observable consequences.

Off the top of my head (and this is really off the top of my head!) I might mathematically model the situation like this. The initial state in the bulk is a pure state on a product Hilbert space. It should be a product of infinitely many Hilbert spaces, but for illustration take just 2. The initial state is a pure product state: Whatever pure state is on Sigma 1 X zero vector on Hilbert space 2. The universal state evolves unitarily, with the state on Hilbert space 2 remaining the zero state until (relative to the right "energy operator") some "negative energy" particles are created, i.e. until the Hawking radiation begins. At that point the universal state—always a pure state—is no longer a product state, and the state on Hilbert space 2 is no longer the zero vector. The state on Hilbert space 2 is a zero-energy eigenstate, but entangled with the state on Hilbert space 1. There is a non-zero interaction Hamiltonian in this phase. Once the Evaporation Event has occurred, the interaction term in the Hamiltonian disappears as well, leaving a pure entangled state on the product Hilbert space. Throughout this whole process, the total ADM or Bondi mass is a constant, an eigenstate of an "energy" operator on the boundary.
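The rough sketch above can be put into a minimal numpy/scipy toy model (two qubits standing in for the two factors; all Hamiltonians and parameters invented for illustration): an interaction term entangles an initially product state, and once the interaction is switched off the entanglement is frozen in, while the global state stays pure throughout:

```python
import numpy as np
from scipy.linalg import expm

def ent_entropy(psi, dA, dB):
    """Entanglement entropy of |psi> in H_A (x) H_B via the Schmidt (SVD) decomposition."""
    s = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

dA = dB = 2
H_A = np.diag([0.0, 1.0])                        # free part on factor 1
H_int = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]),   # entangling coupling
                np.array([[0.0, 1.0], [1.0, 0.0]]))

psi0 = np.kron([1.0, 0.0], [1.0, 0.0]).astype(complex)  # product state

# Phase 1: interaction on -> entanglement builds up.
H_on = np.kron(H_A, np.eye(dB)) + 0.8 * H_int
psi1 = expm(-1j * H_on * 1.0) @ psi0

# Phase 2: interaction off ("after the Evaporation Event") -> the two
# factors evolve independently; the entanglement is frozen in.
H_off = np.kron(H_A, np.eye(dB))
psi2 = expm(-1j * H_off * 5.0) @ psi1

print(ent_entropy(psi0, dA, dB))  # 0: initial product state
print(ent_entropy(psi1, dA, dB))  # > 0: the coupling entangled the factors
print(ent_entropy(psi2, dA, dB))  # unchanged: no interaction, no change
```

The global evolution is unitary at every stage, and after the interaction term disappears the joint state remains pure and entangled, exactly the structure the sketch describes.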

This is just a rough sketch, of course, but I am trying to give a picture of how to deal with negative energy particles. As I said, once you have them you will have a huge degeneracy of zero energy states. That seems inevitable. And without negative energy particles the whole evaporation scenario cannot be implemented.

The firewall paper did not prove that the state at the event horizon has to be nonvacuum. They rather postulated that the state of the whole system is always a product state of a pure state exterior to the event horizon and a pure state in the interior. They then try to show that implementing this requirement entails that the state at the horizon is radically non-vacuum. Like Arun, I would take this to be a reductio of the supposition that it is always a product state.

Tim,

First, there is no notion of Bondi energy in AdS since there is no notion of null infinity: conformal infinity is a timelike surface. The definition of energy in AdS is analogous to the ADM mass in asymptotically flat space, and it can be derived by a similar logic. As I have said, it is built out of the metric near the boundary. There are multiple ways to derive the explicit expression, and they all give the same result, possibly up to an irrelevant constant shift. So it is this operator that is dual to the Hamiltonian of the CFT. Assuming AdS/CFT duality, the spectra of these two operators must then agree. The CFT Hamiltonian does not have the kind of degeneracy you are talking about, and so neither does the gravity Hamiltonian. So baby universes are not part of the Hilbert space that AdS/CFT deals with. I am just restating my argument over and over it seems.

I agree with you that effective field theory in the bulk seems to lead to a different conclusion; so you have to decide whether you are going to give up something about low energy effective field theory, or give up AdS/CFT. The former is what leads people to things like the firewall. You may find the latter more palatable, but this becomes less so when you start appreciating the impressive and varied tests that AdS/CFT has passed in highly nontrivial ways. Which option is more radical is at present a matter of taste. Regarding a breakdown of effective field theory, note that no one has been brave enough to perform experiments inside a black hole horizon, so experimental data here is limited, to say the least.

How does the Hawking-Page phase transition in the bulk manifest itself in the boundary CFT?

black hole guy,

I think the reason we keep coming back to this is that the papers I am reading simply do not assert what you claim. For example, Marolf describes the AdS/CFT conjecture as the conjecture that for every state in the CFT there is a corresponding state in the AdS, but he is quite explicit that it is not part of the conjecture that this corresponding state is unique. If there are multiple states in the AdS that correspond to the unique CFT vacuum, then by operating on any one of these states with boundary operators each will give rise to a folium of states. This is exactly what one would expect in the baby universe scenario: each vacuum state in the CFT corresponds to any state in the bulk which is the vacuum state on Sigma 2out and a zero energy state in whatever baby universes there may be. So there is a massive degeneracy in the bulk of states that correspond to the CFT vacuum. Further, you can't create a zero-energy baby universe in the bulk by the action of creation operators on the boundary (not at all surprising, since the baby universes do not connect to the boundary).

In fact, when he comes to discuss this point (he is doing the bag of gold universes, but it is the same issue) Marolf says that most researchers would understand things this way: there are many bulk states that correspond to any given CFT state. According to him, the AdS/CFT conjecture does not rule this out. You seem to think that it does. So a lot hinges on this. Can you provide a citation to an argument for the stronger claim that each CFT state corresponds to *a unique* bulk state, rather than Marolf's claim that each CFT state corresponds to *some* bulk state?

I had been assuming the stronger claim myself, since the "holographic hypothesis" often seems to be portrayed as the strong claim. And as such, the holographic hypothesis is very, very puzzling just due to dimensional considerations. But if Marolf is correct, the AdS/CFT correspondence simply does not entail the stronger claim, and in fact the dimensional issue goes away if every CFT state corresponds to an infinitude of bulk states. So everything hangs together in a satisfactory way if the correspondence is the weaker claim rather than the stronger. Do you have any grounds to assert the stronger?

Tim,

First, it would be helpful if you would provide the reference. I will assume you are talking about section V of 0810.4886 by Marolf. If so, I am afraid you are not getting his point, since what he is saying is the same thing I am saying. In particular, let me point out that I wrote: "Note that this doesn't rule out baby universes, but it does say that their Hilbert spaces are not part of the CFT, and hence they are irrelevant to the mechanism by which the CFT manages to maintain purity of the state as expressed in bulk language." and "So baby universes are not part of the Hilbert space that AdS/CFT deals with."

Marolf clearly advances the following interpretation. The bulk theory has different superselection sectors. One superselection sector consists of the AdS vacuum and all states that can be obtained from it by acting with boundary operators. In this superselection sector there is a one-to-one map between bulk and boundary states. The vast number of "bag of gold" states lie in different superselection sectors. So if we start in the AdS vacuum and prepare a collapsing shell of matter by acting with boundary operators these bag of gold states (or the closely related baby universe states) are totally irrelevant, since by definition of being in a different superselection sector they will never be accessed in the course of time evolution. This is the same thing I was saying in my comments quoted above. Personally, I would like to say that baby universes can be created during black hole evaporation, but that they will always be created in a unique state, assuming that the black hole was created by boundary operators. So the baby universe will never be entangled with the state on Sigma2.

I can't come up with any sensible interpretation of what you are saying. It reads as if you are saying that the full bulk state can be some entangled state in the tensor product of Sigma2in and Sigma2, and then there is some "projection map" that takes this to a state on Sigma2, and the latter is what the CFT describes. But this projection map would have to be something like a trace over the Sigma2in Hilbert space, which would generically leave a mixed density matrix on Sigma2. This clearly can't work, since the CFT evolution is from pure states to pure states. In any case, while I don't know what you are saying I am certain that Marolf and I agree and that a baby universe resolution is not compatible with standard AdS/CFT.

Tim,

Perhaps the following additional comments will be useful. Suppose the Hilbert space is a product of two factors, with one factor representing the baby universe Hilbert space. CFT operators (equivalently operators localized near the AdS boundary) act purely within the other factor. So there is a superselection sector given by product states. In particular, the Hamiltonian is a CFT operator so product states remain product states under time evolution. Since no entanglement arises, the presence of the baby universe factor has no bearing on the fact that the density matrix for the other factor remains pure, assuming it was pure before the black hole was formed.

"The vast number of "bag of gold" states lie in different superselection sectors. So if we start in the AdS vacuum and prepare a collapsing shell of matter by acting with boundary operators these bag of gold states (or the closely related baby universe states) are totally irrelevant, since by definition of being in a different superselection sector they will never be accessed in the course of time evolution."

As I said ages ago, it's a circular argument. You define these states to not be there, then how is it surprising you conclude they're not there?

black hole guy,

No, that's not it. This is the point. What exactly is the content of AdS/CFT and how does it bear, if true, on the baby universe scenario that arises from taking the usual Penrose diagram seriously? You say that it does not rule the scenario out. OK: then the whole thing is not relevant to the point in my paper. But let's push further. We agree that there is a Hamiltonian operator for the CFT, and that the CFT state remains pure (on Cauchy surfaces of the boundary) under the action of that operator. What does this imply about what is going on in the bulk? Well, that obviously depends on the significance of the bulk operator that is dual to the Hamiltonian of the CFT. Now as far as I can tell, that operator is an operator also defined on the boundary of the bulk, just as the ADM mass is defined by characteristics of the boundary of the space-time, not anything that operates in the interior. So that operator really tells you nothing at all, or very little, about what is going on in the interior. It just characterizes features of the boundary.

But there is no reason to believe that the ADM mass changes at all through the black-hole-formation-and-evaporation in the bulk. Whatever is going on in the bulk, whether baby universes are being spawned or bags of gold formed, or singularities cured, won't affect the ADM mass at all. Hence the fact that the bulk state remains an eigenstate of the ADM mass tells us nothing at all about, e.g., whether the Cauchy surface in the bulk disconnects and the state in the bulk is an entangled state between Sigma 2in and Sigma 2out.

Being in a superselection sector of the ADM mass operator, indeed of every boundary operator, has no bearing I can see on the question under discussion. Why should how the "paradox" is resolved show up on the boundary at all?

It is at this point that issues of "holography" come up. If there were a full isomorphism between the boundary and bulk, that is, an isomorphism of the operators and states, then everything that happens in the bulk would somehow be reflected on the boundary. But as I read Marolf (and this is exactly the paper you mention), the AdS/CFT conjecture isn't that at all. Marolf's statement of the conjecture is so mild that I can't see that it has any bearing on our question at all.

As far as I can tell, your last comment about this commits the following error. There is the CFT Hamiltonian. It corresponds to *some* bulk operator. That bulk operator is not the Hamiltonian of the bulk, but rather some operator on the boundary of the bulk. Let's say that it corresponds to the ADM mass. Now suppose the CFT state is an eigenstate of its Hamiltonian, and so is always an eigenstate. All that follows is that the bulk state is an eigenstate of the ADM mass, and always remains one. But no one denies that. Since the CFT Hamiltonian doesn't correspond with the bulk Hamiltonian, no considerations about it can determine whether product states in the bulk become entangled later. Can you spell out this argument in more detail?

Bee,

"As I said ages ago, it's a circular argument. You define these states to not be there, then how is it surprising you conclude they're not there?"

That is incorrect: whether bag of gold states can be produced during time evolution is a well defined question with a definite yes or no answer -- it is not a definition.

black hole guy,

It's a well-defined question after you have discarded a vast number of states, so then you can conclude there's no vast number of states. Congrats.

Bee,

"It's a well-defined question after you have discarded a vast number of states, so then you can conclude there's no vast number of states. Congrats."

??? Is this supposed to be an argument? To state the obvious, you can't just discard states in a theory by hand. Once you have decided to admit a certain class of states you are compelled to include all those that can be reached from them by Hamiltonian evolution. Conversely, if some states can't be reached by Hamiltonian evolution then whether you choose to include them or not is physically irrelevant. If you have some argument that the bag of gold states can be created by a physical process starting from ordinary low energy states then you should explain that; otherwise this is just noise.

Tim,

I am afraid you are seriously confused about Hamiltonians in GR, as is clear from your statement:

"That bulk operator is not the Hamiltonian of the bulk, but rather some operator on the boundary of the bulk. Let's say that it corresponds to the ADM mass."

The ADM mass and the bulk Hamiltonian are the same thing, as you can read about in Wald's textbook for example. More precisely, the bulk Hamiltonian is the sum of two terms: a volume integral and a boundary term. The volume integral is proportional to the constraints and so vanishes on-shell, while the boundary term is what one calls the ADM mass. So acting on physical states, which are by definition annihilated by the constraints, the Hamiltonian is given entirely by the boundary term. Once you appreciate this point I suspect my argument will become clear.

Also, in your message you talk about being in an eigenstate of the Hamiltonian. This doesn't seem like a good thing to consider: all observables are time independent in such eigenstates, whereas we want to consider the time dependent process of black hole formation and decay.
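For reference, the decomposition described here can be written schematically (this is standard material, the Regge-Teitelboim analysis presented in Wald's textbook; notation mine):

```latex
% Hamiltonian of GR on a slice \Sigma with boundary:
H = \int_\Sigma d^3x \,\big( N\,\mathcal{H} + N^a \mathcal{H}_a \big) \;+\; H_{\partial\Sigma}

% The constraints annihilate physical states,
\mathcal{H}\,|\Psi\rangle = \mathcal{H}_a\,|\Psi\rangle = 0 ,
% so acting on physical states only the boundary term survives:
H\,|\Psi\rangle = H_{\partial\Sigma}\,|\Psi\rangle , \qquad
H_{\partial\Sigma} = M_{\mathrm{ADM}}
```

In asymptotically flat space the boundary term is the ADM mass; in AdS it is the analogous boundary expression referred to earlier in the thread.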

black hole guy,

What you say is correct, but it doesn't address my point at all. The moment you assume that AdS/CFT correctly describes what goes on in our universe (i.e., not AdS) you have discarded the states. Hence you can't use AdS/CFT as an argument that these states don't exist, because you have defined them to not be there qua assumption. How hard can it be to see that?

Bee,

It does address your point. Forget AdS/CFT. Can you form these states by a physical process starting from the kind of states that we do know are present? If not, then you should discard these states, since including them has no effect. This is the physical way of approaching the question: it is not an arbitrary assumption or definition, as you seem to be suggesting.

Since I'm way past the stage of being able to understand this stuff on my own, what is the meaning of this, from https://arxiv.org/abs/hep-th/0106112, Eternal Black Holes in AdS, section 3.

"Let us return to the eternal black holes in AdS. The correlation functions in the boundary field theory clearly cannot decay to zero at large times. The problem is solved once we remember that the AdS/CFT prescription is *to sum over all geometries with prescribed boundary conditions.*"

...

"One could ask how to restore unitarity in the case that we start with a pure state in a single copy of the field theory. In fact we can consider the Z2 quotients we discussed above, which produce black holes with a single boundary. The two point correlation function also decays exponentially in this case. *Once we remember that there are other ways of filling in the geometry we realize that we get non-decaying contributions to the correlation function.*"

It seems to me that the above is saying that there needs to be more than one solution to the equations of motion in the bulk given the boundary conditions, i.e., the history of the CFT; otherwise we end up with paradoxes.

black hole guy,

My best interpretation of your question is whether there is a physical process to form a black hole which doesn't make any sense.

black hole guy,

This may be just the point I need! So if I am understanding correctly, the quantum state of the interior is "governed" by the Wheeler-DeWitt equation, which just annihilates the states rather than providing a time evolution as we expect the Hamiltonian to do. All of this AdS/CFT stuff is built on Wheeler-DeWitt in the bulk? Is that right? So what you are calling the Hamiltonian of the bulk provides no time evolution of the bulk state and does not even in principle help resolve the question of the details of what goes on through the evaporation process? None of this can even in principle provide any account of the bulk, but only the boundary? You are right that I was not thinking of this correctly, if that's the deal.

Could you state clearly then just what the AdS/CFT conjecture says? As I mentioned, Marolf clearly does not take it to imply a 1-1 mapping between states on the boundary and states in the bulk. I take it that every operator on the boundary is supposed to correspond to a (unique?) operator in the bulk? Is it also part of the conjecture that every operator in the bulk corresponds to a (unique?) operator on the boundary? Is there any operator in the bulk that actually generates a time evolution rather than just annihilating states?

Bee,

"My best interpretation of your question is whether there is a physical process to form a black hole which doesn't make any sense."

That wasn't my question, but surely it makes sense (Oppenheimer et al. wrote some famous papers about this). The actual question was about the bag of gold states, whose degeneracy seems to exceed the degeneracy suggested by the area law. Can one produce all these states by physical processes or not? If not, then removing these states from your theory is not an ad-hoc definition, as you were suggesting, but the physical thing to do. I still don't understand your position: can all these states be created this way or not?

Tim,

It would be good to separate out the issues that relate to AdS/CFT from more general issues. Let's consider in general the Wheeler-deWitt description in a theory with a boundary. The WdW wavefunction depends on the boundary time t, the metric of a 3-geometry, and the values of matter fields living on this 3-geometry. The wavefunction is annihilated by the constraint equations, and its t dependence is governed by a Schrodinger equation where the Hamiltonian is the ADM mass operator. Now, contained within this is a description of, say, the "time" dependent process of a star collapsing deep in the bulk. It is just that you have to give physical meaning to time away from the boundary. For instance, as part of your matter you can introduce a collection of radioactive nuclei, and you can ask questions like: conditional on there being a fraction x of the nuclei left, what is the radius of the star? The bottom line is that this Hamiltonian description definitely does provide a description of time evolution in the bulk and allows one to discuss issues regarding black hole evaporation. I think everyone would agree with this, and you can read about it in many places.
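Schematically, the structure described here can be written in two lines (a sketch of the standard setup, with generic labels for the constraints; not a quote from any particular paper):

```latex
% The bulk constraints annihilate the WdW wavefunction Psi[t; g, phi],
\begin{align*}
\mathcal{H}_\perp(x)\,\Psi[t;g_{ij},\phi] &= 0, &
\mathcal{H}_i(x)\,\Psi[t;g_{ij},\phi] &= 0,\\
% while dependence on the boundary time t is an ordinary Schrodinger
% equation generated by the ADM mass, which is a boundary operator:
i\,\partial_t\,\Psi[t;g_{ij},\phi] &= \hat M_{\mathrm{ADM}}\,\Psi[t;g_{ij},\phi].
\end{align*}
```

The point of the second line is that "frozen time" applies only to the bulk constraints; the boundary time evolution is perfectly ordinary.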

Returning to AdS/CFT, I think it's essentially correct to say that the wavefunction of the CFT is to be equated to the WdW wavefunction, after some complicated transformation that relates the variables on the two sides (the transformation is simple for some variables, for example those corresponding to bulk fields localized near the boundary). On the other hand, not every possible WdW wavefunction is necessarily represented in the CFT, but rather there is a consistent smaller space of WdW wavefunctions that evolve into each other under time evolution. This space includes, for example, collapsing shells of matter in AdS undergoing gravitational collapse. So one can use this to address questions regarding black hole evaporation.

black hole guy,

If a black hole collapses to a bag of gold configuration, then there is a physical process to create them. If you assume there's no physical process to create them, then you don't create them. Can't you see that this is a circular argument?

Bee,

Again, I am not assuming anything. I am asking you to justify the existence of N bag of gold states, where N is greater than the corresponding black hole degeneracy, by showing that they can all be produced starting from reasonable initial data. If you can, then I think this is a problem for AdS/CFT. But if not, then you have no argument that these states exist in any physically relevant sense. I am not sure why you keep evading this question...

black hole guy,

You want to argue that bags of gold cannot be created. What I am saying is that if you do so, you shouldn't assume that they can't be created, because that's a circular argument. You say you are not 'assuming anything', but you assume that AdS/CFT is valid, which discards these states from the outset.

I am not the one who has anything to show here because I am not making claims about their existence or non-existence. I am merely saying your argument isn't logically sound. Also, read this in case you don't know it.

Bee,

This is hopeless. You say that I "assume that AdS/CFT is valid", but I explicitly told you that I am not doing that when I wrote "forget AdS/CFT". I am just asking you whether you have any evidence that these states exist in the sense that they can be produced, and for whatever reason you are refusing to answer, which I find odd.

black hole guy,

My present understanding of the situation is that you agree there is no argument to exclude bag of gold states.

black hole guy,

OK let's leave Wheeler-deWitt aside for the moment, although I note that if it all really worked we would not be saying that we need a quantum theory of gravity: we would have one. I think the problems with the theory are much more severe than you acknowledge.

I note also that you seem to be appealing to the Holographic Hypothesis which is, to be charitable, speculative. It would help the conversation if you could specify exactly what you take the AdS/CFT correspondence to precisely say. As I mentioned, Marolf is clear that it does not postulate a 1-1 correspondence of states, and it is not clear there is a 1-1 correspondence of operators either.

Let's assume (because these seem to be the only examples given) that there is a correspondence between the operators on the CFT and some set of boundary operators in the AdS. I don't see how it follows that "there is a consistent smaller space of WdW wavefunctions that evolve into each other under time evolution". What about the other states in the WdW folium? What happened to them? Why not rather say (as Marolf suggests) that there is a degeneracy here: many distinct bulk states "correspond" to the boundary vacuum. If this is right, then counting the boundary states available gives us no direct information about the bulk states available.

And there is a further question. Suppose we choose one of the possible bulk states to correspond to the boundary vacuum state. Then we can operate on it with boundary creation operators to build up a folium of bulk states that are in 1-1 correspondence with the boundary states we get by operating on the boundary vacuum with the corresponding creation operator. Now it is plausible that any boundary state can be created this way. But it is not plausible that every bulk state can. For example, suppose we create in the bulk a state that leads to a black hole, and after the black hole evaporates one is left with a baby universe. No creation operator at the boundary can create such a state.

So the states available in the bulk may vastly outstrip the states available on the boundary for two reasons: 1) the multiplicity of bulk states that can be chosen to correspond to the boundary vacuum (and the different folia that arise from operating on these with creation operators at the boundary), and 2) the existence of bulk states that can't even be created that way but still can arise through dynamical evolution. All of this implies the falsity of the Holographic hypothesis, as well as the falsity of the claim that the black hole entropy goes as the surface area of the event horizon.
So how could AdS/CFT possibly rule out baby universes or bags of gold?

Bee,

No, I don't agree: most (all?) of these bag of gold states can only be formed from data that is singular in the past. I think that is some evidence against their existence, though it is certainly not conclusive. I would be very interested to hear about evidence in their favor, which is why I keep asking... unsuccessfully.

Black Hole Guy,

Going back over Marolf (the paper you suggested), he uses the term "superselection" but in a rather shifty and I think unjustified way. He says that the BH entropy gives the density of states that one can get by acting on the vacuum state with boundary creation operators, and then says that such a set of states forms the "superselection sector defined by the vacuum". But he offers no proof or argument that it is a superselection sector. More bluntly: we grant that acting on the vacuum with boundary creation operators cannot immediately create a baby universe. But we deny that states with baby universes are in a superselection sector separate from the states so created. Use the creation operator to create an incipient black hole. Then let time evolution take that to a state with a baby universe. To say this can't happen due to superselection is to beg the question. How do you prove there is a superselection rule without already determining what black holes evolve to? I see nothing in Marolf to suggest that baby universe scenarios have been ruled out.

Isn't the proof of Hawking radiation derived from effective field theory? (Or is there some non-effective-field-theory proof of Hawking radiation?)

black hole guy,

As I said above, if black holes end up being bag of gold states, then they can very well be formed from non-singular initial data, the process of formation being gravitational collapse. ('End up' meaning geometry must pass through a Planck-curvature phase.) This is what you want to claim isn't possible. I am asking what's your reasoning but you don't seem to have any answer.

Black Hole Guy,

I have been reviewing some literature, and at this point I think it is fair to say that things are very confused. Let me just reiterate some points.

We have both been talking the following way: having chosen a bulk state to correspond to the CFT vacuum (there is no guarantee that this state will be unique), we can act on the CFT state with creation operators. This should correspond to operating on the bulk vacuum state with operators that create particles near the boundary. We both have said (I think) that since these particles in the bulk are created near the boundary, one cannot thereby immediately create either a bag of gold state or a baby universe state. That sounded right, but of course leaves open the question of whether the time evolution from this state creates a bag of gold or baby universe.

But once we say that the bulk state is governed by the WdW equation, all of this talk makes no sense. Since the time evolution in WdW is pure gauge, the "early" state and the "late" state lie in the same gauge orbit, and essentially count as the same state. That is to say, a single state in the WdW setting corresponds to a complete 4-dimensional solution, not to a state on a Cauchy surface or other space-like slice. This is the source of the "problem of time" in WdW. So if, intuitively, an evaporating black hole yields either a bag of gold or a baby universe, then one can create such a state using creation operators on the boundary.

It also makes no sense to claim that the bulk physics has superselection sectors if the AdS/CFT correspondence is supposed to imply a 1-1 isomorphism between a complete set of operators on the boundary and a complete set in the bulk. For in that case, the isomorphism would imply superselection sectors in the CFT as well. As I said, Marolf never really explains or justifies his use of this terminology.

Can I ask, one more time, for a clean statement of exactly what the AdS/CFT correspondence is supposed to be? As I have mentioned, the way Marolf talks about it, it is clearly not as strong as the Holographic Hypothesis. In particular, does AdS/CFT claim any more than that there is an isomorphism between a complete set of operators on the boundary and some (not necessarily complete) set of operators in the bulk? This last claim is quite weak, and will not underwrite certain arguments that try to draw conclusions about the bulk from facts about the boundary.

Tim,

I don't think we need to work too hard to rule out your baby universe scenario within a standard AdS/CFT example. I think we agree on the following:

1) The CFT has a unique vacuum state, and excited states have at most an O(1) degeneracy.

2) The bulk Hamiltonian is expressed in terms of the gravitational field near the AdS boundary, and is equated with the CFT Hamiltonian. More generally, boundary operators in the bulk have a simple correspondence with CFT operators.

3) Baby universes have zero energy but a large number of states.

There is a clear incompatibility with AdS/CFT. Put as simply as possible, baby universes correspond to a large number of zero energy states, but the standard CFTs appearing in AdS/CFT have no such degeneracy. So if baby universes with these properties exist, they must be in a different superselection sector from the states that can be created by boundary/CFT operators. This seems pretty airtight to me, but if you can find a loophole I would be interested to hear it.
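Put in symbols, the incompatibility is just a counting of zero-energy states (a sketch built from assumptions 1)-3) above; N is my label for the putative number of baby-universe states, not a quantity from any paper):

```latex
\begin{align*}
&\text{bulk:} & \dim\{\,\lvert\psi\rangle : H\lvert\psi\rangle = 0\,\} &\;\gtrsim\; N \gg 1
&&\text{(baby universes carry zero energy)}\\
&\text{CFT:} & \dim\{\,\lvert\chi\rangle : H_{\mathrm{CFT}}\lvert\chi\rangle = 0\,\} &= 1
&&\text{(unique vacuum, assumption 1)}
\end{align*}
% Since assumption 2 equates H with H_CFT, the extra bulk states cannot
% correspond to anything in the CFT spectrum: if they exist, they must sit
% in a sector unreachable from states created by boundary/CFT operators.
```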

Now, a person unfamiliar with the AdS/CFT literature would probably argue that this just says that AdS/CFT is an incomplete theory of quantum gravity. However, this becomes hard to sustain once one studies how classical gravity, perturbative loop corrections, black hole entropy, etc., arise very naturally in AdS/CFT.

Bee,

Let's be clear here: you have not presented any argument for why these states exist. If one has to pass through a Planck curvature scale then, given current understanding, that is the same as saying that we have no idea what happens.

I am happy to lay all my cards on the table. I have a hunch that these bag of gold states do not exist in theories that have an AdS/CFT duality. I would like to see if one can disprove this hunch. A conclusive way of doing this would be to show that one can form these states in a controlled process from smooth initial data, with no Planckian curvatures. Note that this line of reasoning is needed to rule out other problematic solutions of GR. For example, we can easily write down solutions with closed timelike curves, but in all cases to form these requires either unphysical matter or spacetime singularities -- there is no controlled way of producing them. Hence the lore is that these CTC spacetimes do not exist in a physical sense. I would like to repeat this line of reasoning for the bag of gold states. So far, I have heard zero evidence in their favor.

black hole guy,

Ok, then, I finally get the impression we may be able to converge on something. I haven't presented evidence they exist, you haven't presented evidence they don't exist. The bottom line is, no one knows.

My point here isn't so much to claim that baby universes or bags of gold exist, I merely wanted to clarify that this option has never been ruled out.

Yes, I think your hunch is correct that they can't exist in AdS/CFT, but I am not sure why you want to make it so complicated. They'd violate the entropy bound, isn't that enough to conclude they can't exist?

I suspect (but don't know) that the reason you can't do it is that the initial data would have to have compact support. I don't know why or whether this should be unphysical, but that's why you don't see these states in the boundary expansion (and why you can't create them 'locally'). There's the usual argument that a wave-function of compact support will immediately spread, but I'm not sure this solves the issue. This kinda has been touched on in the literature here and there (consult Marolf when in doubt), but I don't think it's ever been fully clarified.

Tim,

This is in response to your longer message.

"We both have said (I think) that since these particles in the bulk are created near the boundary, one cannot thereby immediately create either a bag of gold state or a baby universe state. That sounded right, but of course leaves open the question of whether the time evolution from this state creates a bag of gold or baby universe."I agree

"That is to say, a single state in the WdW setting corresponds to a complete 4-dimensional solution, not to a state on a Cauchy surface or other space-like slice"This is incorrect. The WdW wavefunction depends on the boundary time t, and different values of t are not related by a gauge transformation. So you should think of the WdW wavefunction as being a wavefunction on the space of all 3 geometries that at end a fixed location on the boundary.

"It also makes no sense to claim that the bulk physics has superselection sectors if the AdS/CFT correspondence is supposed to imply a 1-1 isomorphism between a complete set of operators on the boundary and a complete set in the bulk."The point here is that we don't know a priori what a "complete set of operators in the bulk" is supposed to mean, particularly in exotic cases where these putative operators lie behind black hole horizons. On the other hand, we do know what a complete set of CFT operators means. So we are trying use the CFT to infer a self-consistent notion of a complete set in the bulk. This might involve throwing out some states/operators in the bulk that you would have "liked" to have kept, but if doing so causes no inconsistencies and leads to standard physics in non-exotic regimes, then your complaint would carry little force.

"Can I ask, one more time, for a clean statement of exactly what the AdS/CFT correspondence is supposed to be? As I have mentioned, the way Marolf talks about it, it is clearly not as strong as the Holographic Hypothesis. In particular, does AdS/CFT claim any more than that there is an isomorphism between a complete set of operators on the boundary and some (not necessarily complete) set of operators in the bulk?"AdS/CFT is a work in progress, so I can't provide a definitive answer. At present, we are essentially saying that the CFT defines a theory of gravity in the bulk. This turns the question into whether the bulk theory so defined is really a "good" theory of quantum gravity. This is why a large part of the literature is pushing on this question, and so far (leaving black hole evaporation out of the picture for a moment) the story holds together: we can reproduce all/most of the desired features of gravity in the regimes where it has been tested or is strongly constrained by general principles. The exception is black hole evaporation. It is clear that AdS/CFT leads to a situation in which the state on Sigma2 is pure, but what isn't clear is how it manages to do this in bulk language, and whether the mechanism involves some huge breakdown in low energy physics. If it does (and its hard to see how it can't) this will emerge as a prediction of AdS/CFT which may or may not have real world implications. Time will tell.

This may be of interest:

https://arxiv.org/abs/1503.08245

"The Persistence of the Large Volumes in Black Holes

Yen Chin Ong

Classically, black holes admit maximal interior volumes that grow asymptotically linearly in time. We show that such volumes remain large when Hawking evaporation is taken into account. Even if a charged black hole approaches the extremal limit during this evolution, its volume continues to grow; although an exactly extremal black hole does not have a "large interior". We clarify this point and discuss the implications of our results to the information loss and firewall paradoxes. "

---

"Secondly, is the observation that, if information can indeed be stored in the interior

volume of a black hole despite its ever shrinking area, we would have to subscribe to the “weak form” interpretation of the Bekenstein-Hawking entropy [that is, contrary to what holography would suggest, the area is not the total measure of the information content ofthe black hole [13,49–51], and so the various entropy bounds [52–56] may also be violated].

If the volume is in fact the true measure of the information content, the Page time of a black hole would be proportional to its volume instead of its horizon area. Nevertheless, unless the volume is infinite [or, if the volume is finite but Hawking radiation does not carry any information], Page time would eventually set in and one has to deal with potential problems such as the firewall [see sec.(3.3) of [13]]. Therefore, the suggestion that black hole remnants may have arbitrarily large interior volumes that help ameliorate both the information loss paradox and the firewall paradox is only feasible if Hawking radiation is — as the original Hawking’s calculation suggests — purely thermal and does not carry any information. For an opposite point of view, see, e.g., [57].

Since other large volume scenarios, such as bubble universes, are most likely not generic [that is, we should not expect these to be inside every black hole [13]], it certainly comes as a relief that general relativity and the simplest model of Hawking radiation with thermal spectrum already provide a generic black hole with large volume that may resolve the information loss paradox".

----

Yen Chin Ong is also the author of a book (2016, Ph.D thesis turned into book) "Evolution of Black Holes in Anti-de Sitter Spacetime and the Firewall Controversy"

The amazon.com blurb reads:

"This thesis focuses on the recent firewall controversy surrounding evaporating black holes, and shows that in the best understood example concerning electrically charged black holes with a flat event horizon in anti-de Sitter (AdS) spacetime, the firewall does not arise.

The firewall, which surrounds a sufficiently old black hole, threatens to develop into a huge crisis since it could occur even when spacetime curvature is small, which contradicts general relativity.

However, the end state for asymptotically flat black holes is ill-understood since their curvature becomes unbounded. This issue is avoided by working with flat charged black holes in AdS. The presence of electrical charge is crucial since black holes inevitably pick up charges throughout their long lifetime. These black holes always evolve toward extremal limit, and are then destroyed by quantum gravitational effects. This happens sooner than the time required to decode Hawking radiation so that the firewall never sets in, as conjectured by Harlow and Hayden.

Motivated by the information loss paradox, the author also investigates the possibility that “monster” configurations might exist, with an arbitrarily large interior bounded by a finite surface area. Investigating such an object in AdS shows that in the best understood case, such an object -- much like a firewall -- cannot exist."

Bee,

I agree that since bag of gold states have too much entropy they can't arise in AdS/CFT. The interesting question is then how the theory manages to avoid them in a self-consistent way. One wants to know whether the mechanism for this implies that the bulk theory behaves in a very special way that we don't expect in a "physical" bulk theory. This is why I think it's important to push on the problem of bag of gold formation. My hunch is that bags of gold can be avoided without giving up any cherished properties of low energy gravitational physics. As far as initial data with compact support is concerned, AdS/CFT can handle this provided the time evolution of this data into the past hits the boundary -- then we can define the state at the earlier time using boundary operators and use the CFT Hamiltonian to run it forward. My (incomplete) understanding of this is that if you try to run such bag of gold data backwards in time you hit singularities. I take this as an indication that GR is telling you that such initial data is unphysical.

The interior of the black hole apparently also has an area-dependent entropy:

https://arxiv.org/abs/1510.02182

Black Hole Guy,

So let's suppose there are baby universe states: states that are disconnected from the piece whose boundary is the space of the CFT and that have zero energy as far as the CFT Hamiltonian is concerned. (If we are doing WdW we get this for free: the Hamiltonian annihilates these states.) OK: so what do we mean by the vacuum state of the CFT? That is a state that is annihilated by any annihilation operator for the quanta. Now supposing there are baby universes, what is the Hilbert space for the bulk? It should be the tensor product of infinitely many connected-universe Hilbert spaces. One of these Hilbert spaces represents the state on the connected universe whose boundary is the CFT space.

So: what do we mean by a bulk state that corresponds to the CFT vacuum? Here we have two choices. We can say that it is any zero-energy state that is annihilated by an annihilation operator for a quantum on the boundary of the piece the CFT lives on, or we can say that it is any zero-energy state such that the connected piece that the CFT lives on is annihilated. According to the first way of thinking, there is a unique bulk vacuum: the vacuum state of the bulk is the vacuum for the connected bit with the CFT boundary, tensored with zero vectors for all the other Hilbert spaces. According to the second way, it is any state that is the vacuum state on the piece connected to the CFT, tensored with any collection of zero-energy states for the other Hilbert spaces. I don't care which choice you make.

The real issue is this: is there a superselection rule that somehow prevents the universal quantum state from evolving from a state with no baby universes to a state with one baby universe? Or two? Or ten billion? Why should there be? If the model of black hole evaporation I am defending is right, then although one can't create a state with a baby universe just by operating on the vacuum of the bulk, one can by creating a state in the bulk that time-evolves to a black hole. Waiting long enough, one gets a baby universe. If you deny that this will lead to a baby universe then your argument is question-begging.
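The two candidate definitions of the bulk vacuum in the comment above can be written out explicitly (a sketch; the notation is mine, introduced only for illustration):

```latex
% Bulk Hilbert space: one factor per connected component of the universe,
\begin{align*}
\mathcal{H}_{\mathrm{bulk}} &= \mathcal{H}_{\mathrm{AdS}} \otimes \bigotimes_i \mathcal{H}_{\mathrm{baby},\,i}\\
% Choice 1: a unique vacuum, the AdS vacuum with "no baby universes":
\lvert\Omega_1\rangle &= \lvert 0\rangle_{\mathrm{AdS}} \otimes \lvert\varnothing\rangle\\
% Choice 2: the AdS vacuum on the connected piece, tensored with any
% collection of zero-energy baby-universe states |b>:
\lvert\Omega_2\rangle &= \lvert 0\rangle_{\mathrm{AdS}} \otimes \lvert b\rangle,
\qquad H\,\lvert b\rangle = 0.
\end{align*}
```

On either choice the question remains whether dynamics can carry a state of the form "AdS excitation, no babies" into one with a nontrivial baby-universe factor.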

So how does your little argument go wrong? It goes wrong exactly where you assume that to reconcile the non-degeneracy of the CFT states with the possibility of baby universes one needs a superselection rule. But it is quite sufficient to have a selection rule. It's not that it is dynamically impossible to get from a state with zero baby universes to a state with one, but rather that it is difficult. In particular, one has to form a black hole and then let it evaporate. That's not a process that happens in everyday life.

You bring up superselection sectors again, but I cannot even begin to respond to that without a clean statement of what AdS/CFT asserts. I have asked for this multiple times, but for some reason you never provide it. I can't figure out why not. Can you at least answer this: are there corresponding superselection sectors in the CFT?

You say that you are not using the AdS/CFT correspondence nor the holographic hypothesis. So how do you establish these superselection rules in the bulk?

black hole guy,

I'm not sure in which sense you are referring to 'singularity' in your last comment. Any function of compact support is singular in the mathematical sense (non-analytic) pretty much by definition, but I suspect that's not what you mean. Do you mean a curvature singularity?

Yes, I agree it would be interesting to see how remnants are avoided in AdS/CFT. Much could be learned from it. If the state can be expanded around the boundary at time t_1, but not at time t_2, how does the time-evolution in the CFT manage to keep track of it? Or do you just mean you change the time-slicing at t_1?

Bee,

Perhaps I misinterpreted your earlier remark. As far as I am aware (see 0803.4212 and 0810.4886) all known initial data for creating bags of gold involve singularities when evolved back in time, where by singularity I mean non-subtle curvature singularities. So while such initial data looks smooth at one instant in time, it appears that it cannot be set up while remaining in the low energy regime that we understand. This suggests that such bags of gold are either excluded from the theory or lie in a separate "superselection sector". Note that I am not invoking AdS/CFT here.

You made a comment about forming these from initial data of compact support, but I'm not sure which explicit construction you had in mind. I am fine with the very mild singularities associated with this (presumably one could smooth this out without changing the conclusion). My point was simply that there is no obstacle to describing such initial data in AdS/CFT.

Tim,

"You bring up superselection sectors again, but I cannot even begin to respond to that without a clean statement of what AdS/CFT asserts. I have asked for this multiple times, but for some reason you never provide it. I can't figure out why not"Ahem. I have been quite occupied with correcting your misconceptions regarding basic GR issues (Hamiltonian, asymptotic structure, WdW etc) before I could turn to the more subtle issues.

"Can you at least answer this: are there corresponding superselection sectors in the CFT?"The CFTs in question obey all the standard properties. There is a state-operator map so the full Hilbert space is isomorphic to the space of local operators. So no superselection sectors in that sense.

Turning to the issue of the statement of AdS/CFT including baby universes, of course I can't give a definitive answer here since none exists at present -- if I could there would be no black hole information paradox and we wouldn't be having this discussion. Before providing a working definition, I should emphasize that there is no formulation I know of that allows your desired baby universe solution of the problem to be compatible with AdS/CFT, since, as I have explained, for this we need only some very general principles.

Anyway, the working definition is as follows. Let's suppose the full bulk Hilbert space is a tensor product of a baby universe part (which could contain any number of babies) and an AdS part. The claim is that to map to CFT we consider only product states and that the CFT Hilbert space is isomorphic to the states in the AdS factor. All bulk AdS operators can be expressed as boundary operators by time evolving them to the boundary if necessary. Time evolution respects the product structure of the bulk state, since the Hamiltonian is a boundary operator. So non-product states lie in a different "superselection sector". The existence of such a superselection sector uses only the assumption that the bulk Hamiltonian acts entirely within the AdS factor; this is an incredibly general property that will hold true in any generally covariant theory, so is hardly any assumption at all.
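The algebraic core of this superselection claim is short enough to display (a sketch of the argument as stated, in notation introduced only for illustration):

```latex
% If the Hamiltonian is a boundary operator, it acts only on the AdS factor:
\begin{align*}
H &= H_{\mathrm{AdS}} \otimes \mathbb{1}_{\mathrm{baby}},\\
% so time evolution factorizes and never entangles the two factors:
e^{-iHt}\big(\lvert\psi\rangle_{\mathrm{AdS}} \otimes \lvert b\rangle\big)
&= \big(e^{-iH_{\mathrm{AdS}}t}\lvert\psi\rangle_{\mathrm{AdS}}\big) \otimes \lvert b\rangle.
\end{align*}
% Product states evolve to product states; the baby-universe factor is frozen.
```

Everything hinges on the first line: whether the exact Hamiltonian really acts as the identity on the baby-universe factor is precisely what is contested later in the thread.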

Note that your statement about creating baby universes via black holes relies on unknown physics at the Planck scale, so this is not so much an argument as a wild guess.

black hole guy,

No, I think you understood me fine. The two things (remnants and initial conditions of compact support) are not related in any way I know of, other than that I suspect they may be. I still don't know, however, why you say there's no problem with such initial data. How do you want to get the information to the boundary if the fields and all their derivatives vanish there?

Bee,

This has been discussed under the general heading of "precursors". The point is that your initial state of compact support can be evolved back in time by the equations of motion until it hits the boundary (this will always happen). At that point I use the usual AdS/CFT dictionary to create the corresponding CFT state. Then I use the CFT Hamiltonian to evolve this state forward to the original time. In general, this construction suffers from the fact that to characterize the final state you need to be able to solve the CFT equations of motion. But for a free scalar field in AdS this construction can be carried out completely explicitly. If you hand me the bulk state I can give you the corresponding CFT state, and it will have vanishing expectation value for the CFT operator dual to the bulk field. This follows from the fact that for free fields in AdS we have a complete and explicit understanding of the mapping of Hilbert spaces between bulk and CFT.
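Schematically, the precursor construction just described has three steps (a sketch; the symbol D stands in for the usual near-boundary dictionary and is my notation, not standard):

```latex
\begin{align*}
&\text{1. evolve the compactly supported bulk data back until it reaches the boundary:}\\
&\qquad \lvert\phi(t_0)\rangle_{\mathrm{bulk}} \;\longrightarrow\; \lvert\phi(t_-)\rangle_{\mathrm{bulk}}
\quad \text{(supported near the boundary at } t_-\text{)}\\
&\text{2. apply the dictionary there:}\qquad
\lvert\chi(t_-)\rangle_{\mathrm{CFT}} = \mathcal{D}\,\lvert\phi(t_-)\rangle_{\mathrm{bulk}}\\
&\text{3. run the CFT forward again:}\qquad
\lvert\chi(t_0)\rangle_{\mathrm{CFT}} = e^{-iH_{\mathrm{CFT}}(t_0-t_-)}\,\lvert\chi(t_-)\rangle_{\mathrm{CFT}}.
\end{align*}
```

Step 3 is where the nonlocality of the "precursor" enters: the resulting CFT state at t_0 need not look like any simple local operator acting on the vacuum.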

So can I take it as proven that a superselection sector that rules out a baby universe does not also rule out a black hole?

black hole guy,

My understanding of precursors was that they are (very nonlocal) descriptions of bulk fields that touch the boundary only in some places. I think it comes down to your comment that the state will always hit the boundary. How come?

black hole guy,

We are making a certain kind of progress! Let me check to make sure that you are committed to these claims.

First, not only is there no proof of the AdS/CFT conjecture, there isn't even a precise statement of what the conjecture says. In particular, it is not (yet) committed to a full isomorphism of structure (both operators and states) between the AdS theory and the CFT. It is not even committed to an isomorphism of just the algebra of operators. There is a commitment to the existence of an AdS operator that corresponds to each CFT operator, but in many cases the correspondence is to an AdS operator at the boundary of the AdS. Operators that represent properties in the interior of the AdS (where the black hole actually forms and evaporates) may not correspond to any CFT operator.

Second, gravity is supposed to be implemented in the bulk by means of canonical quantization of GR in Hamiltonian form, i.e. by the Wheeler-DeWitt equation. All of the interpretive problems that infect WdW therefore infect AdS/CFT.

Third, the key feature of AdS/CFT that is supposed to bear on the evaporation problem is that the boundary does not have any gravity, and so the CFT is supposed to be unproblematic. In particular, pure states on Cauchy surfaces of the CFT evolve to pure states.

Now, how is feature 3 supposed to have implications about the AdS? If one assumes that the pure state of the CFT has to correspond to a pure state on the part of the bulk that is connected to the boundary, then you would get a pure state on Sigma 2. But since the AdS/CFT conjecture is so vague I can't see how such a result is forthcoming. If the state on the CFT corresponds to the state on the whole bulk space (including baby universes) then all we get is that the pure state on the CFT corresponds to a pure state on the bulk. Unless there is some argument that this cannot happen, there seems to me to be no force to this argument that makes trouble for the baby universe solution.

You say:"time evolution respects the product structure of the bulk state". Why do you say that? Time evolution does not usually do such a thing. The Hamiltonian evolutions commonly entangles parts of the wave function that start out in product states. So the question to ask in why we can't expect a time evolution that ends up with a universe pure state, with support both in the usual AdS space and in a baby universe. The Hamiltonian will operate on the whose space, not just the piece connected to the boundary. I still can't see any argument agains that.

You say that my statement "relies on unknown physics at Planck scale". Well, welcome to the club! I arrive at a solution by taking the causal structure of the Penrose diagram seriously. Maybe that's wrong, but it is no less principled than any other argument I have seen.

Tim,

As I have already stated, of course there is not a completely precise statement of what the AdS/CFT conjecture is, since this would require one to first precisely define quantum gravity. But it is precise enough to rule out a lot of things such as this baby universe scenario for resolving the information paradox.

Second, people in this field rarely talk about the WdW wavefunction, and everyone is well aware that while it can offer some conceptual guidance it has major technical problems. I only wrote about WdW because you brought it up and I am trying to be responsive to what you know. The general belief is that the objects that will ultimately have a mathematically rigorous meaning are boundary correlation functions, while everything else, including local bulk physics, will emerge as approximate concepts in some restricted domain.

Regarding these questions:

" If the state on the CFT corresponds to the state on the whole bulk space (including baby universes) then all we get is that the pure state on the CFT corresponds to a pure state on the bulk. Unless there is some argument that this cannot happen, there seems to me to be no force to this argument that makes trouble for the baby universe solution.""You say:"time evolution respects the product structure of the bulk state". Why do you say that? "if you read back through the thread you will see I that I gave sharp answers to these questions, multiple times in fact. Your comment makes it seem as if my previous words do not exist, which is distressing.

Also regarding,

"I arrive at a solution by taking the causal structure of the Penrose diagram seriously. Maybe that's wrong, but it is no less principled than any other argument I have seen."

It seems that my message has not gotten through at all. In the absence of AdS/CFT I would agree that this solution is perhaps as plausible as any other, depending on taste. But in AdS/CFT it is not viable no matter what assumptions you make about Planck scale physics, since it grossly violates some of the most basic entries in the AdS/CFT dictionary. Again, I am frustrated that my arguments are not being acknowledged, and this makes it hard to motivate myself to continue the discussion.

Bee,

The question is whether any initial data of compact support for a free scalar field will hit the AdS boundary at some finite time in the future (or the past, the issue is the same). First note that an outward directed null ray hits the boundary at a finite time. Second, note that the time evolution of the edge of the initial data is governed by the retarded propagator. For a conformally coupled field, the retarded propagator has support on the light cone, since it obeys the same equation as in flat space. But the same is in fact true for a scalar field of any mass, since the propagation of the boundary of the initial data is governed by very short wavelengths where the mass is irrelevant. So the succinct answer is that the retarded propagator for a massive scalar field in AdS has support on the light cone (and its interior, of course).
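To make the first point concrete, here is a sketch in global AdS coordinates (conventions chosen purely for illustration; L is the AdS radius):

```latex
% Global AdS metric, with the conformal boundary at rho = pi/2:
\[
ds^2 \;=\; \frac{L^2}{\cos^2\!\rho}\left(-dt^2 + d\rho^2 + \sin^2\!\rho\, d\Omega^2\right),
\qquad 0 \le \rho < \tfrac{\pi}{2}.
\]
% A radial null ray satisfies dt = d\rho, so a light ray launched outward
% from rho = rho_0 reaches the boundary after the finite coordinate time
\[
\Delta t \;=\; \tfrac{\pi}{2} - \rho_0 \;\le\; \tfrac{\pi}{2},
\]
% no matter how deep inside AdS the initial data sits.
```

This is the familiar statement that AdS acts like a finite box for null rays.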

Yes, if you prepare a wavepacket deep inside AdS and ask me what CFT operator defined at this time creates this state it will indeed be a complicated nonlocal operator. But if you also allow me to create it by using operators defined at an earlier time then I can make it look less complicated. Of course, this is just the statement that Hamiltonian evolution evolves simple states/operators into complicated ones.

black hole guy,

Well what about trapped surfaces?

I take it that the CFT reflects information from the interior of the horizon of a bulk black hole**. I believe that AdS supports the creation of a black hole that is in equilibrium with its Hawking radiation and so never evaporates. So this would be some complicated state in the CFT. I also believe that the (maximal) volume of a hypersurface in the interior of the horizon is continually growing; I'm wondering what property in the CFT reflects that.

**If the CFT doesn't see within the horizon, then it can't know about baby universes or remnants or anything that happens behind the horizon.

Bee,

I was restricting to free scalar fields in pure AdS, since I was trying to make the point that describing data of compact support is not a problem per se. Once you have horizons, trapped surfaces, etc., the story is much more complicated, and we would have to discuss these case by case.

I think Dr Maudlin's paper takes a wrong turn when he approvingly quotes Newton's statement that the present moment is "diffused throughout all spaces." Maudlin adds that Cauchy slices are also diffused across space.

The present does not exist across the universe. Rather the universe exists in the present. In other words, to say that something exists is simply to say it's present. Whereas space-time varies relativistically, time-as-presence is absolute and fundamental. No matter what the clock says, the time is now.

If space-time is subsidiary to time-present, the incompleteness of Cauchy slices due to black holes has no bearing on unitarity. A few jagged edges in space-time cannot impact the completeness of ongoing presence.

Unitarity is preserved despite GR because GR is only a partial rendering of time.

black hole guy,

Above you characterize a certain scheme for incorporating baby universes in the bulk. Since it is not the scheme I have been advocating, I do not see that it rules baby universes out. I understand that you think there is some simple argument here, but having read and reread your posts I can't make it out. Maybe some observations will help.

Observation 1: The original claim, as I understood it, is that the baby universe scenario is inconsistent with AdS/CFT. We leave aside for the rest of the discussion whether AdS/CFT is true. Now it seems to be a common (mis?)conception that AdS/CFT is an instance of the Holographic Hypothesis, and asserts a full 1-to-1 isomorphism of both states and operators between the bulk gravity theory and the boundary CFT. On your telling, this is not so, and indeed the exact import of the conjecture cannot be given. I note two consequences. First, if the common belief of a 1-to-1 isomorphism were accurate, then the bulk could not exhibit superselection sectors without the CFT exhibiting corresponding superselection sectors. Since you want to deny that the CFT has any and to assert that the bulk theory does, you are committed to the failure of a full 1-to-1 correspondence between the theories. Second, without an account of a complete set of observables in the bulk, the question of whether there are any superselection rules cannot even be raised, much less addressed. Yet at various times you have appealed to superselection sectors in your account of how AdS/CFT bears on the information loss paradox. So I am puzzled about how you can make such assured assertions about them.

Observation 2: Above you write: "The claim is that to map to CFT we consider only product states and that the CFT Hilbert space is isomorphic to the states in the AdS factor." That is not, of course, the claim I have been making. Since in the solution I propose the initial pure state on Sigma 1 evolves into a pure state on Sigma 2out U Sigma 2in, with the two parts being entangled, I deny that CFT states map only to product states in the bulk.

Observation 3: We agree that a baby universe cannot be formed by creation operators at the boundary of the bulk. We do not agree about whether a baby universe can be created by time evolution from a state created by creation operators at the boundary of the bulk. That is, indeed, the very question at issue. If there were superselection rules that forbade evolution from the one state to the other, that would, of course, settle the issue. But since, as noted above, the question of superselection rules cannot even be raised yet, no such argument is forthcoming.

Cont'd

Observation 4: Above you state: "The bulk Hamiltonian is expressed in terms of the gravitational field near the AdS boundary, and is equated with the CFT Hamiltonian. More generally, boundary operators in the bulk have a simple correspondence with CFT operators." Now I can understand the claim that the CFT Hamiltonian is supposed to correspond—under the AdS/CFT correspondence—to the bulk Hamiltonian. I can also understand that the AdS/CFT correspondence asserts that some bulk operator corresponds to each CFT operator (but not conversely). And I understand that every CFT operator corresponds to some bulk operator that operates near the boundary. But I do not see why the latter correspondence must be identical to the former. I would have thought that the bulk Hamiltonian would generate time evolution in the entire bulk, not just near the boundary. So while there may be a bulk operator near the boundary that is naturally associated with the CFT Hamiltonian, I don't see why that should be the bulk Hamiltonian. More directly, if we are using WdW, the Hamiltonian of the bulk should annihilate the state. Why should any operator that operates only near the boundary do that? It sounds as if what happens in the interior just doesn't matter. How could that be?

Observation 5: Above you state: "In the absence of AdS/CFT I would agree that this solution is perhaps as plausible as any other, depending on taste. But in AdS/CFT it is not viable no matter what assumptions you make about Planck scale physics, since it grossly violates some of the most basic entries in the AdS/CFT dictionary." I understand that you think you have made this clear, and I am really trying, but I don't see it. Which, exactly, are these "most basic entries"? And how does the baby universe scenario violate them? Can you state the violation without reference to superselection rules, and if not, can you justify talk about what superselection rules there are? I see no reason to postulate a rule that requires product states to evolve only into product states, for example.

black hole guy,

One last observation. The claim that there is some superselection rule forbidding transitions from product states (between the state on the component connected to the boundary and the state on a disconnected baby universe) to entangled states cannot even be formulated without begging the question if WdW is being used. Since the time-evolution in the bulk is pure gauge in WdW, if an evolution such as the one I am advocating does occur, then a given WdW state (i.e. a state indexed to a particular time at the boundary) cannot be characterized as either a "product state" or an "entangled state". On one Cauchy surface with those boundary conditions it will be a product state and on another it will be entangled, because on one the surface will be connected and on another disconnected.

Also note that although the gauge orbit does not contain all possible foliations (because you have nailed down the Cauchy surface "at infinity"), still the Cauchy slices within a single gauge orbit will cover the entire interior space-time. So in effect, the states of the bulk ascribed to different boundary times will all be physically equivalent, and they automatically can exhibit no variation in boundary time. This is the WdW problem of time all over again.

black hole guy,

It occurred to me yesterday that I think you have answered your own question of how AdS/CFT avoids remnants/baby universes. Here's how: If you have remnants, then the boundary-count of the bh microstates must fail somehow. The only way this can happen is if you have states in the bulk that can't be expanded around the boundary at all. This means they must have compact support, not only on one time-slice but on all time-slices. Now, as you have said, the propagator leaks on the lightcone, meaning all fields that have compact support on one slice will reach the boundary somewhere - unless you have a horizon that prevents the cones from getting to the boundary. Now sprinkle some singularity theorem over that (miracle happens here) and conclude that this must have meant the initial state of that wave-function was divergent, hence unphysical. Do you think that makes sense?

Tim,

Here again is the argument that rules out your scenario. Your proposal is that at late times the bulk Hilbert space is a product of a Hilbert space on Sigma2in and a Hilbert space on Sigma2, and we identify this Hilbert space with that of the CFT. Operators in the Sigma2in Hilbert space by definition commute with those in Sigma2. The CFT Hamiltonian is identified with the bulk Hamiltonian, which is built out of the gravitational field on the boundary of Sigma2. Hence operators in Sigma2in commute with the CFT Hamiltonian. Hence the CFT must have a hugely degenerate spectrum, since we can take any state and act with one of these Sigma2in operators without changing the energy. But the CFTs that arise in the AdS/CFT do not have such a hugely degenerate spectrum. So this scenario is impossible.
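In symbols, the degeneracy step is the standard observation that an operator commuting with the Hamiltonian maps energy eigenstates to degenerate eigenstates (a schematic rendering, with O_in standing for any operator supported on Sigma2in):

```latex
\[
[\,H_{\rm CFT},\, \mathcal{O}_{\rm in}\,] = 0
\;\;\Longrightarrow\;\;
H_{\rm CFT}\,\mathcal{O}_{\rm in}\lvert E \rangle
= \mathcal{O}_{\rm in}\,H_{\rm CFT}\lvert E \rangle
= E\,\mathcal{O}_{\rm in}\lvert E \rangle .
\]
```

Every energy eigenstate thus generates a whole family of states at the same energy, which is the huge degeneracy that the CFT spectrum does not have.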

I think what has you stuck are issues having to do with Hamiltonians and time evolution in GR not specific to AdS/CFT. Some of these issues are indeed a bit subtle, but they are very well understood, and have been for a long time. Here are a couple of questions that you might find useful to think about:

1) In a closed universe the WdW wavefunction is governed by a Hamiltonian which is zero on physical states. How do we then recover, in the semiclassical limit, the usual picture of matter fields evolving in time as the universe expands or contracts?

2) For a universe with an asymptotic boundary (e.g. asymptotically flat space or AdS) the GR Hamiltonian acting on physical states is given by a boundary term at spatial infinity. As we turn off gravity by taking Newton's constant to zero, how do we get back the usual picture of a Hamiltonian defined as an integral over a Cauchy surface that acts nontrivially on physical states?
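For reference, the standard fact behind question 2 can be stated schematically (signs and normalizations suppressed): the canonical GR Hamiltonian is a sum of constraints plus a surface term, so on physical states only the surface term survives:

```latex
\[
H[N, N^i] \;=\; \int_\Sigma \left( N\,\mathcal{H} + N^i\,\mathcal{H}_i \right) d^3x \;+\; H_\partial,
\qquad
\mathcal{H}\,\lvert\Psi\rangle = \mathcal{H}_i\,\lvert\Psi\rangle = 0,
\]
% so on physical states H reduces to the surface integral H_boundary.
% In the asymptotically flat case this is the ADM energy:
\[
H_\partial \;=\; \frac{1}{16\pi G}\oint_{S^2_\infty}
\left( \partial_j h_{ij} - \partial_i h_{jj} \right) dS_i .
\]
```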

Bee,

Yes, I think that does sound like it is along the right lines. The conjecture is that any construction that prevents the data from leaking to the boundary in the past or future will necessarily lead to curvature singularities in the past and future, in which case one can't give a controlled argument for the existence of such states without invoking unknown Planckian physics.

black hole guy,

Good, let's go through your argument step by step.

"Your proposal is that at late times the bulk Hilbert space is a product of a Hilbert space on Sigma2in and a Hilbert space on Sigma2,"

Yes.

"and we identify this Hilbert space with that of the CFT"

I'm not sure what "identify" means here. As Marolf points out, there is a trivial sense in which this is true for any pair of infinite-dimensional Hilbert spaces. If you want more than this trivial sense, you aren't getting it from me.

"Operators in the Sigma2in Hilbert space by definition commute with those in Sigma2."

This is true for any pair of operators associated with different points on a Cauchy surface, by the ETCR. Nothing special here.

"The CFT Hamiltonian is identified with the bulk Hamiltonian"

Agreed.

"which is built out of the gravitational field on the boundary of Sigma2."

Not agreed. Why should I believe this? How can the Hamiltonian of the bulk be "built out" of operators on the boundary? If the Hamiltonian generates time development of the bulk state, as it must if it is to determine what happens to the black hole, how can it be constructed out of boundary operators?

" Hence operators in Sigma2in commute with the CFT Hamiltonian"

With the CFT Hamiltonian? Well that's on the boundary, so OK, again by ETCR.

" Hence the CFT must have a hugely degenerate spectrum, since we can take any state and act with one of these Sigma2in operators without changing the energy"

You have lost me here entirely. The operators on Sigma 2in obviously do not literally act on the boundary states at all. Do you mean "The operators in the CFT that correspond to the operators on Sigma 2in"? That would at least make sense if we had a map from the operators in the bulk to the operators on the boundary. But you have not asserted that the AdS/CFT conjecture even conjectures this. And even if it does, the fact that operators on Sigma 2in commute with the Hamiltonian of the CFT does not imply they commute with the bulk operator that corresponds to the CFT Hamiltonian.

"Hence the CFT must have a hugely degenerate spectrum, since we can take any state and act with one of these Sigma2in operators without changing the energy. But the CFTs that arise in the AdS/CFT do not have such a hugely degenerate spectrum. So this scenario is impossible."

We have not established the existence of a 1-to-1 map from the AdS operators to CFT operators, so there can't be any obvious degeneracy in the CFT implied here. Since that seems to be your view, I would like more detail about it. In particular, you have argued that every CFT operator corresponds (in one sense of corresponds) with a bulk operator on the boundary of the bulk. If there is to be a 1-to-1 correspondence between bulk operators and CFT operators, that must therefore be a different correspondence. Since the interior bulk operators just don't operate on the boundary, the argument must be different from what is written: I can't operate on a CFT state with an interior bulk operator, and all the operators on Sigma 2in are interior bulk operators. So if you mean to advert to some mapping from Sigma 2in operators to CFT operators can you specify what you have in mind?

In the original stringy AdS/CFT the CFT was a Yang-Mills theory in the infinite-N color limit. So, yes, while the ground state is probably unique, the spectrum had a huge degeneracy. How that gets resolved is something I still have to learn.

black hole guy wrote: ".... in which case one can't give a controlled argument for the existence of such states without invoking unknown Planckian physics"

But I thought in AdS we could make a stable black hole in equilibrium with its own Hawking radiation, and this semi-eternal black hole has a CFT description, including its singularity, and arose from a well-described-in-the-CFT initial state, and therefore the unknown Planckian physics in the bulk corresponds to some quite-knowable if not yet known CFT state. The only way I know how to rule that out is that the Marolf superselection sector doesn't allow such a black hole to form :) :) :)

I second Arun's comment above. My understanding is that AdS/CFT was originally conjectured for the limit as N goes to infinity for the CFT. That might well lead to infinite degeneracy of the spectrum of non-vacuum states. Can someone explain why this doesn't happen?

I am also puzzled, like Arun, about the exchange between black hole guy and Sabine. Any resolution of the "paradox" requires some new Planckian physics, namely quantum gravity! I thought that was the whole point. If the "information" gets out to Sigma 2out, that requires new physics. If the singularity is avoided, that requires new physics. So while it is true that any solution that prevents the information from reaching the boundary (as I propose) requires new Planckian physics, it is equally true that any theory that allows the information to get to the boundary also requires new Planckian physics. I thought that AdS/CFT was suggesting that the new physics is somehow encoded in the CFT, if only we could recover it. Is that not part of the conjecture?

Arun, Tim,

I checked out of your discussion at some point, so I'm not sure what your puzzlement is about. I found my exchange with black hole guy helpful for clarifying the pathologies of remnants in AdS/CFT. I'm not claiming this solves anything except some of my own confusions.

I myself am somewhat puzzled though that Tim now states

"Any resolution of the "paradox" requires some new Planckian physics, namely quantum gravity!"

because I thought he was claiming there isn't any paradox to begin with.

Sabine:

Of course, that's why I put the word in scare quotes. I think that one can read the baby universe scenario off the Penrose diagram and that there is nothing to contradict that scenario. black hole guy has been arguing that the baby universe scenario is inconsistent with AdS/CFT, and we are continuing to discuss that claim.

Tim,

Thanks for the summary. I think my comment is actually relevant then. I'd say baby universes aren't inconsistent with AdS/CFT per se, but AdS/CFT combined with some restriction on the initial value data. Whether that bears any relevance for asymptotically flat/dS space, I don't know.

I don't really see how it matters for that argument whether the singularity is really a singularity or some quantum gravity fluff. It really only matters that it needs to be hidden behind a horizon to prevent contact (at any time) with the boundary. The point is then that - at least in AdS/CFT - remnants won't work for all initial data and in particular not for initial data that looks 'normal'.

Tim,

I can't see how to make progress here since you are denying well known facts about GR, and I don't have the time or inclination to give an extended tutorial about this. I will just point out two closely related confusions

1)

" "Operators in the Sigma2in Hilbert space by definition commute with those in Sigma2."This is true for any pair of operators associated with different points on a Cauchy surface, by the ETCR. Nothing special here,"

2)

"The CFT Hamiltonian is identified with the bulk Hamiltonian"Agreed.

"which is built out of the gravitational field on the boundary of Sigma2."

Not agreed. Why should I believe this? How can the Hamiltonian of the bulk be "built out" of operators on the boundary? If the Hamiltonian generates time development of the bulk state, as it must if it is to determine what happens to the black hole, how can it be constructed out of boundary operators? "

The fact that the GR Hamiltonian is an operator built out of the metric at spatial infinity is a totally standard statement (see e.g. section II of the Marolf paper and references therein). The reason you are finding it confusing is because of your belief stated in (1), which is false. In GR there are no local operators: e.g. an operator that creates a particle at a point also has to create its gravitational field, otherwise it violates the constraint equations. So bulk operators fail to commute with the Hamiltonian, even if they are associated with points that are spacelike separated from the spatial boundary at which the Hamiltonian is defined. Hence the boundary Hamiltonian perfectly well generates bulk evolution.
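The dressing statement compresses into one line (schematically): because the constraint equations tie the boundary metric to the total energy, a properly dressed bulk operator fails to commute with the boundary Hamiltonian,

```latex
\[
[\,H_\partial,\; \mathcal{O}_{\rm dressed}(x)\,] \;\neq\; 0
\qquad \text{even for } x \text{ deep in the bulk},
\]
% because O_dressed creates a particle together with its gravitational
% field, which extends out to the boundary where H_boundary is defined.
```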

On the other hand, operators in Sigma2in do commute with the Hamiltonian due to the nonconnectedness of the Cauchy surface. Physically, the gravitational field of a particle placed in Sigma2in does not make it out to the boundary.

black hole guy,

I will work through Marolf and the references contained therein, but I think you should be aware that that material is not "well known facts about GR". I have two pretty strong pieces of evidence for this. One is that the relevant "references contained therein" are all references to other papers by Marolf. That is, none of the claims you are making are standard claims of GR that appear in the standard literature; they are rather novel claims by Marolf that he has proposed while trying to draw consequences from AdS/CFT. In fact, the papers of Marolf devote quite a lot of acreage to responding to objections to the claims he is making, which would hardly be the case if those claims were "well known". Second, I happened to be at a conference with George Ellis, and asked him last night about this claim about building the bulk Hamiltonian from boundary operators. He was unfamiliar with such a claim. So for sure this is not a well known fact about GR. I will add that Marolf makes use of the ADM mass in his discussion. The conditions for the ADM mass to even be defined are highly restrictive, and won't hold in this case. Further, as far as I can tell you would want to be using the Bondi mass if anything to cover realistic black holes. The whole process of artificially adding a boundary to an open space-time (as a realistic one will be) is mathematically quite delicate. If you think all this is standard GR then you could provide some standard references (i.e. not Marolf). If this is all Marolf then it isn't standard, and one should be prepared to find out that it isn't right.

black hole guy,

"On the other hand, operators in Sigma2in do commute with the Hamiltonian due to the nonconnectedness of the Cauchy surface. Physically, the gravitational field of a particle placed in Sigma2in does not make it out to the boundary." This is a bit confused. There is no objective fact about whether or not a point inside the event horizon lies on a connected or disconnected Cauchy slice. In some foliations the slice will be connected and in others disconnected. Maybe you mean to say that operators inside the event horizon commute with boundary operators because the gravitational field does not make it out to the boundary, which is of course true because the event horizon is, well, an event horizon. By this criterion, the creation operators for points inside the event horizon fail to commute with the boundary operators not because of the existence of baby universes but because there is an event horizon at all. And that means that the degeneracy you are worried about will arise so long as there is an event horizon, i.e. so long as there is a black hole. But that in turn means that if no baby universe solution is acceptable, no solution at all is acceptable because black holes are not acceptable. So what you put forward as an argument against a certain solution to the problem will hit all solutions. That's not a good result.
