Zulip Chat Archive

Stream: Is there code for X?

Topic: Status of Spectral Theorem?


Kevin Shu (Jul 16 2020 at 20:48):

Hello, are there people working on the spectral theorem for finite dimensional Hilbert spaces (or anything of greater generality)?

Jalex Stark (Jul 16 2020 at 21:04):

we just finished Cayley Hamilton, so defining eigenstuff is definitely in striking distance

Jalex Stark (Jul 16 2020 at 21:05):

@Apurva Nakade was looking to start on a project like this

Jalex Stark (Jul 16 2020 at 21:05):

Anne Baanen's recent linear algebra video is great, have you watched it yet?

Jalex Stark (Jul 16 2020 at 21:06):

https://www.youtube.com/watch?v=EnZvGCU_jpc&feature=youtu.be

Patrick Massot (Jul 16 2020 at 21:06):

Jalex Stark said:

we just finished Cayley Hamilton, so defining eigenstuff is definitely in striking distance

I really don't understand why everybody keeps repeating that. Why would you need CH to define eigenstuff?

Patrick Massot (Jul 16 2020 at 21:07):

This is totally insane.

Jalex Stark (Jul 16 2020 at 21:07):

in my head, the number of eigenvalues with multiplicity is equal to the degree of the characteristic polynomial

Jalex Stark (Jul 16 2020 at 21:07):

but maybe there's a much more efficient path

Patrick Massot (Jul 16 2020 at 21:08):

Even if you don't do "Linear algebra done right", you can define eigenstuff and prove a lot about them without Cayley-Hamilton.

Jalex Stark (Jul 16 2020 at 21:08):

i agree you don't need all this for the definition of eigenstuff

Patrick Massot (Jul 16 2020 at 21:08):

The degree of the characteristic polynomial is the size of the matrix.

Alex J. Best (Jul 16 2020 at 21:09):

@Miguel Raz Guzmán Macedo was also interested in eigenvalues

Jalex Stark (Jul 16 2020 at 21:09):

i should maybe pick up a book and read an efficient development of the theory before commenting further about the best path forward

Heather Macbeth (Jul 16 2020 at 21:10):

We need a #bourbaki, for Patrick's use only.

Apurva Nakade (Jul 16 2020 at 21:18):

Hey! I am very interested in trying to prove the spectral theorem, for normal operators maybe?
Am still browsing the library and trying to figure out what is known about hilbert spaces...

Apurva Nakade (Jul 16 2020 at 21:24):

Looks like inner product spaces are defined under normed spaces but quadratic forms are defined under linear algebra. Hmm... Where would Hilbert spaces go?

Kevin Shu (Jul 16 2020 at 21:26):

One observation is that the existence of eigenvalues for real symmetric matrices can be proved using 'only' calculus, by observing that the corresponding quadratic form attains its minimum on the unit sphere, since it is compact. This is a relatively ugly proof, but I think it's just a computation once you know that the sphere is compact. Do we have that fact?
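In outline, the variational argument being described here (a standard sketch, independent of any particular formalization): set

$$ \lambda_{\min} = \min_{\|x\| = 1} x^{\top} A x . $$

The unit sphere $S^{n-1}$ is compact and $x \mapsto x^{\top} A x$ is continuous, so the minimum is attained at some $v$; differentiating $t \mapsto \frac{(v+tw)^{\top} A (v+tw)}{\|v+tw\|^{2}}$ at $t = 0$ for every direction $w$ (or invoking Lagrange multipliers) gives $A v = \lambda_{\min} v$, so $v$ is an eigenvector.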

Patrick Massot (Jul 16 2020 at 21:30):

Heather, I don't think the exposition style in "Linear algebra done right" is so close to Bourbaki's style...

Patrick Massot (Jul 16 2020 at 21:31):

But the definition of eigenstuff is independent of determinant and Cayley-Hamilton in any case, even in determinant-centric expositions.

Patrick Massot (Jul 16 2020 at 21:31):

Kevin, we know finite dimensional spheres are compact.

Apurva Nakade (Jul 16 2020 at 21:45):

@Kevin Shu I feel like proving the spectral theorem the "standard way" will force us to build the theory of finite dim Hilbert spaces, which will be useful on its own.

Kevin Shu (Jul 16 2020 at 22:13):

@Apurva Nakade That is true, you probably want this standard proof in the end. That said, the proof I described might also be used to show the variational characterization of eigenvalues, which is useful in other places, and it might be thought of as a stopgap. I will give that version a shot, though since I am new, how far I'll get is questionable :)

Jalex Stark (Jul 16 2020 at 22:15):

kevin, if you post a link to a github project, i'll happily follow

Aaron Anderson (Jul 16 2020 at 23:51):

I tried defining eigenstuff a month ago (to be fair, I'm not experienced now, and I was REALLY new then). I found that defining what an eigenvalue/eigenvector is works just fine, but if you want to do anything that involves finsets in the finite-dimensional case, or prove any existence results, Cayley-Hamilton seemed by far the easiest place to start

Scott Morrison (Jul 17 2020 at 01:44):

@Patrick Massot, I only had in mind the existence of eigenvalues for finite dimensional matrices over C, when I've been advertising Cayley-Hamilton. (Pretty much as @Aaron Anderson says.)

Scott Morrison (Jul 17 2020 at 01:45):

I think it would be great if we just do Banach algebra / C*-algebras / functional calculus, and then derive finite dimensional stuff from that (it would fit with how the rest of mathlib goes ...).

Scott Morrison (Jul 17 2020 at 01:46):

I'm not actually sure that that route helps at all for the "finite-dimensional-but-not-necessarily-normal" part of the theory. I guess I've only ever thought about the route to that via CH...

Patrick Massot (Jul 17 2020 at 07:49):

I think that existence of eigenvalues for finite dimensional matrices over C is precisely where the path advocated by "Linear algebra done right" is better without any possible debate.

Apurva Nakade (Jul 17 2020 at 19:50):

As a start, would it make sense to prove that a normal matrix with complex entries can be written as UDU* where U is unitary and D is diagonal?

Apurva Nakade (Jul 17 2020 at 19:57):

And then one day when f.d. complex inner product spaces exist we can produce a more direct proof by proving the existence of an eigenbasis?

Jalex Stark (Jul 17 2020 at 20:19):

mathematically, don't U and the eigenbasis have the same data? if you think you know how to prove the first form, you should do it.

Apurva Nakade (Jul 17 2020 at 20:28):

Yes, theoretically it is the same information. But proving the existence of an eigenbasis for normal operators requires one to set up the theory of complex inner product spaces, whereas I believe all the theory needed for proving the theorem for matrices already exists in mathlib.

Jalex Stark (Jul 17 2020 at 20:30):

got it, so to move between the two forms of the theorem you need... a *-algebra isomorphism between matrix n n R and the linear endomorphisms of (n \to R)? I think we already have this as a linear isomorphism and maybe even an algebra isomorphism

Jalex Stark (Jul 17 2020 at 20:31):

I think the strategy of "first prove any form of the theorem, then see what API you need to convert to other forms" is a good one

Jalex Stark (Jul 17 2020 at 20:32):

once you've proven the first theorem, you reduce the problem from a maths + software engineering challenge to only a software engineering challenge :)

Alexander Bentkamp (Aug 13 2020 at 14:53):

@Kevin Shu I had started working on the Spectral Theorem a year ago. I followed this document: https://www.maa.org/sites/default/files/pdf/awards/Axler-Ford-1996.pdf
I guess this is the same approach as in "Linear algebra done right".

I formalized Theorem 2.1, Proposition 2.2, Lemma 3.1, and Proposition 3.4. I'll try to update the code to the latest version of mathlib and upload it somewhere.

Have you started working on this? Which approach are you taking?

Alexander Bentkamp (Aug 14 2020 at 13:52):

So this is what I currently have, now updated to the latest version of mathlib: https://github.com/abentkamp/spectral

Apurva Nakade (Aug 14 2020 at 18:08):

Oh wow, this is great. I have been trying to get to the spectral theorem in a more direct way:

  1. Show that an eigenvector + eigenvalue exists using Cayley-Hamilton
  2. Construct a unitary matrix whose first column is this eigenvector
  3. Reduce the problem to a lower-dimensional matrix and induct (sketched below).
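In outline, one standard way the reduction in step 3 goes (a sketch only, not necessarily exactly what the linked code does): with the first column of $U$ a unit eigenvector for the eigenvalue $\lambda$ from steps 1-2,

$$ U^{*} A U = \begin{pmatrix} \lambda & w^{*} \\ 0 & B \end{pmatrix} . $$

If $A$ is normal then so is $U^{*} A U$, and comparing the $(1,1)$ entries of $(U^{*}AU)(U^{*}AU)^{*}$ and $(U^{*}AU)^{*}(U^{*}AU)$ gives $|\lambda|^{2} + \|w\|^{2} = |\lambda|^{2}$, so $w = 0$ and $B$ is a normal $(n-1) \times (n-1)$ matrix to which the induction hypothesis applies.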

Apurva Nakade (Aug 14 2020 at 18:09):

Just posted my code here: https://github.com/apurvnakade/spectral-theorem

Apurva Nakade (Aug 14 2020 at 18:09):

I will add a README.md and several comments to the code, now that there seem to be multiple approaches to proving this

Apurva Nakade (Aug 14 2020 at 18:15):

@Alexander Bentkamp since you have already done so much work on eigenvalues and eigenvectors, I will focus on theorems about unitary and normal matrices

Apurva Nakade (Aug 14 2020 at 18:24):

I see that you already have lemma generalized_eigenvector_span :D

Apurva Nakade (Aug 14 2020 at 18:24):

the spectral theorem should be a direct corollary of this

Apurva Nakade (Aug 14 2020 at 18:25):

Can you push this to mathlib?

Patrick Massot (Aug 14 2020 at 20:37):

Alexander Bentkamp said:

So this is what I currently have, now updated to the latest version of mathlib: https://github.com/abentkamp/spectral

Alex, did you see that algebraically closed fields and existence of algebraic closures are now in mathlib?

Patrick Massot (Aug 14 2020 at 20:41):

Why isn't all this in mathlib?

Alexander Bentkamp (Aug 15 2020 at 09:52):

Apurva Nakade: the spectral theorem should be a direct corollary of this

I think we first need a complex inner product, right?

Alexander Bentkamp (Aug 15 2020 at 10:02):

Patrick Massot: Why isn't all this in mathlib?

Yeah, I know I should have merged this earlier, but I have been a bit worried about finishing my main PhD project in time. Now it looks like it's easily doable and I'll have some time to merge it.

Alexander Bentkamp (Aug 15 2020 at 10:31):

I think we first need a complex inner product, right?

Oh, well, in your approach, you'll use the dot product instead, which I just discovered in your files. Then the spectral theorem shouldn't be far away indeed.

Apurva Nakade (Aug 15 2020 at 14:53):

Using your theorem you need two more steps:

  1. Gram-Schmidt orthogonalization
  2. Proving that an upper triangular normal matrix is diagonal

The second should be easy, but the first one will need some work. In any case, your theorems are great! I was daunted by having to prove the existence of an eigenbasis and was trying a workaround using induction, but now we don't need to do that anymore.

Apurva Nakade (Aug 15 2020 at 15:00):

Actually, you don't even need Gram-Schmidt orthogonalization if you are only interested in the linear operator version of the spectral theorem. All you need to show is that every generalized eigenvector of a normal operator is a genuine eigenvector. I don't remember the proof off the top of my head, but it is the linear operator version of "every upper triangular normal matrix is diagonal".
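For reference, the standard argument for that lemma: if $T$ is upper triangular and normal, then

$$ (T T^{*})_{11} = \sum_{j} |t_{1j}|^{2} \quad\text{and}\quad (T^{*} T)_{11} = |t_{11}|^{2} , $$

so $t_{1j} = 0$ for all $j > 1$; deleting the first row and column leaves an upper triangular normal matrix and induction finishes. The operator statement (every generalized eigenvector of a normal operator is a genuine eigenvector) goes the same way, using $\|N v\| = \|N^{*} v\|$ for normal $N$.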

Apurva Nakade (Aug 16 2020 at 14:48):

@Jalex Stark, @Kevin Shu, and I have been trying to brainstorm how to move toward the spectral theorem, given that complex inner products seem controversial.

We were thinking of doing this using complex dot products (that mimic the current dot product) and then proving the spectral theorem for normal matrices using @Alexander Bentkamp 's work. We want to write the code in such a way that one day, when complex inner products exist, the code can be easily upgraded with no extra effort. Does this sound reasonable? Or is it better to wait for complex inner products to exist on mathlib?

Alexander Bentkamp (Aug 16 2020 at 14:55):

My original plan was to prove the spectral theorem without choosing a basis (i.e., only linear maps, no matrices). I'd still find it elegant if we could do it that way. The dot product can only be defined with a basis, right?

Apurva Nakade (Aug 16 2020 at 15:01):

Mathematically, what you say is uncontroversially the best way to do it. But this requires the existence of a complex inner product, which might take a long time to make it to mathlib.

Apurva Nakade (Aug 16 2020 at 15:03):

Although, maybe this has changed recently. It would be really awesome if we could have some complex inner product on mathlib. That would just open up doors to so many more results.

Alexander Bentkamp (Aug 16 2020 at 15:11):

I think we could at least try to push for it by creating a copy of real_inner_product and adapting it to complex numbers. Then create a pull request and try to get it merged. Do you think that would be rejected?

Alexander Bentkamp (Aug 16 2020 at 15:11):

It could still be generalized later if that's what people want...

Heather Macbeth (Aug 16 2020 at 15:11):

@Sebastien Gouezel @Ruben Van de Velde @Chris M :up:

Apurva Nakade (Aug 16 2020 at 15:13):

Alexander Bentkamp said:

I think we could at least try to push for it by creating a copy of real_inner_product and adapting it to complex numbers. Then create a pull request and try to get it merged. Do you think that would be rejected?

This has already been done: https://leanprover.zulipchat.com/#narrow/stream/116395-maths/topic/Complex.20inner.20product/near/200795959

Apurva Nakade (Aug 16 2020 at 15:14):

but it was not PRed to mathlib. maybe with your work now there is enough motivation to have this in the library

Heather Macbeth (Aug 16 2020 at 15:17):

so let's add @Frédéric Dupuis to the tag party :)

Heather Macbeth (Aug 16 2020 at 15:23):

Personally, I wish there were some way to unify the real and Hermitian inner product (maybe along the lines I was brainstorming here). But it seems nobody knows a good way to do it, so maybe it is better to have parallel real and Hermitian files.

Heather Macbeth (Aug 16 2020 at 15:25):

How exactly does to_additive work? Would it be possible to do something like that in this setting, to reduce duplication?

Patrick Massot (Aug 16 2020 at 16:00):

I think it's not too hard to code something like to_additive and I'd like to see more meta-programming in this style, but I'm not sure the specification would be so easy to write. Actually we may need to first write everything by hand with as much duplication as possible before knowing exactly what we want the tactic to do.

Jalex Stark (Aug 16 2020 at 18:17):

the machine learning folks say that what they need is lots of labeled examples. This is also true for "good old-fashioned AI", i.e. metaprogramming. If we write the library in such a way that it's easy to see how to use automation to make the presentation more compact / maintainable, then we'll get more people teaching themselves how to metaprogram.
Maybe a second example of "things like docs#to_additive" is docs#simps_attr, which I just learned has very detailed docstrings.
I guess in addition to heaping praise on to_additive, we should also encourage theory-specific provers like docs#mfld_set_tac.

Jalex Stark (Aug 16 2020 at 18:31):

more on topic...
I agree that the specification is not easy to write. I tried (on the mathlib:complex_dot_product branch) imitating the proof of one real inner product lemma for the complex numbers, and found that we were missing a lemma like

@[simp] lemma complex.conj_sum {α : Type*} (s : finset α) (f : α → ℂ) :
  complex.conj (∑ i in s, f i) = ∑ i in s, complex.conj (f i) := sorry

Jalex Stark (Aug 16 2020 at 18:44):

I'm not sure how one would make sure they've gotten all of the side lemmas like this without effectively doing the whole "manual translation" one declaration at a time

Kevin Buzzard (Aug 16 2020 at 19:19):

This should follow immediately from the fact that complex.conj is an additive group hom. Presumably complex.conj is the "bare function"; the ring homomorphism will probably have nicer properties.

Jalex Stark (Aug 16 2020 at 21:46):

sorry, I didn't mean to imply that the sorry was hard to fill. Just that it seems to be missing, and makes a good simp lemma. Unless there's a way to teach simp about the version for general additive homs?

Kevin Buzzard (Aug 16 2020 at 23:58):

Doesn't simp already know the result for general additive homs? I don't know if this is impossible in your situation but I'm wondering if you can use the ring hom instead of the bare map all the way through. I don't know if this is more sensible than just training the simplifier on all the ring isom lemmas for complex.conj.
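For what it's worth, a minimal sketch of the bundled route being suggested, assuming complex.conj is (or can be packaged as) a ring hom ℂ →+* ℂ and that ring_hom.map_sum is available under that name; both are assumptions about the mathlib of the moment, not guarantees:

import data.complex.basic
import algebra.big_operators.basic

open_locale big_operators

-- if `complex.conj` is bundled as a ring hom `ℂ →+* ℂ`, the generic
-- `ring_hom.map_sum` lemma gives the `conj_sum` statement for free:
example {α : Type*} (s : finset α) (f : α → ℂ) :
  complex.conj (∑ i in s, f i) = ∑ i in s, complex.conj (f i) :=
ring_hom.map_sum complex.conj f s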

Jalex Stark (Aug 17 2020 at 00:40):

I see. Right now, data.matrix.basic models a vector as a bare function type n \to V. I think you want to see a type vector n (roughly as in the sketch below) so that

  • vector n has a ℂ-module instance
  • vector.map takes additive homs to additive homs
  • vector.conj is defined as vector.map complex.conj
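Purely as illustration of that shape, a bare sketch with made-up names (nothing here is, or was, mathlib API; the module instance is omitted, and map takes a bare function here rather than a bundled hom, just to keep the sketch short):

import data.complex.basic

-- `cvector`, `cvector.map` and `cvector.conj` are placeholder names, only
-- meant to illustrate the wrapper shape described above.
structure cvector (n : Type*) := (entries : n → ℂ)

namespace cvector

variables {n : Type*}

-- apply a function entrywise
def map (f : ℂ → ℂ) (v : cvector n) : cvector n := ⟨λ i, f (v.entries i)⟩

-- conjugation is then `map` applied to conjugation, so lemmas about `map`
-- specialize to `conj`
def conj (v : cvector n) : cvector n := v.map (λ z, complex.conj z)

end cvector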

Jalex Stark (Aug 17 2020 at 00:47):

maybe modelling off of data.matrix.basic doesn't make sense. Is it written with bare function types for computational reasons?

Jalex Stark (Aug 17 2020 at 00:59):

Ah, I see Frederic bases it on real_inner_product.lean.

Frédéric Dupuis (Aug 17 2020 at 13:50):

Ah, I'm glad that this is coming up again -- I would also very much like to see complex inner products in mathlib; a lot of what I am interested in formalizing depends on this! I'm not convinced that duplicating the real inner product code (like I did a while back) is the best approach, however; it would be much nicer to have a unified approach to both cases. I'm starting to think that the better way to do this would be to create the "ℝ or ℂ" typeclass that was mentioned in that other thread. I was thinking of something like this:

class is_R_or_C (K : Type*) [normed_field K] [algebra ℝ K] :=
  (re : K → ℝ)
  (im : K → ℝ)
  (conj : K → K)

and then giving that class the same API as the complex numbers, adding whatever it takes to the class definition along the way to make it happen. The result probably won't be very pretty, but then afterwards when using it, it should behave fairly nicely, with re, im, conj, etc, being defeq to the right things in both cases.
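For concreteness, the two intended instances of a three-field draft like the one above might look as follows (a sketch against that draft only; the class that eventually took shape has many more fields and axioms):

instance : is_R_or_C ℝ :=
{ re := id,
  im := λ _, 0,
  conj := id }

instance : is_R_or_C ℂ :=
{ re := complex.re,
  im := complex.im,
  conj := λ z, complex.conj z }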

Does that sound like a reasonable plan?

Patrick Massot (Aug 17 2020 at 13:51):

I fear this will make the real case much less pleasant to work with.

Kenny Lau (Aug 17 2020 at 13:51):

why not just say "a finite extension of R"

Patrick Massot (Aug 17 2020 at 13:51):

Unless you have meta-programming generating the right statements in the real and complex case from statements in this real_or_complex case.

Kevin Buzzard (Aug 17 2020 at 13:52):

@Kenny Lau yeah I said that last time and it derailed the thread

Frédéric Dupuis (Aug 17 2020 at 13:54):

I was hoping that for the real case, you would end up with a bunch of re and conj, etc, being the identity and that it wouldn't be too much of an inconvenience but this might be a bit naive...

Patrick Massot (Aug 17 2020 at 14:00):

You can only hope for definitional equality here, not syntactic one. But again meta-programming could help here. You would have something like:

@[real_or_complex]
lemma inner_symm (x y : α) : inner x y = conj (inner y x) := ...

that would generate two lemmas, real_inner_symm (x y : α) : inner x y = inner y x and cplx_inner_symm (x y : α) : inner x y = conj (inner y x). The proof would be the same, and the statement would simply remove conj and re, and do something smarter about im.

Frédéric Dupuis (Aug 17 2020 at 14:03):

Yes, that would be much nicer to use at the end. I'm guessing it wouldn't be too much harder to program than to_additive but then again I have zero experience with metaprogramming.

Patrick Massot (Aug 17 2020 at 14:06):

I think the meta-programming part isn't the hardest part. If you want to see that happening, the hard work is to first do it by hand for a number of lemmas which illustrate as many situations as possible.

Patrick Massot (Aug 17 2020 at 14:10):

Actually the code of to_additive even contains crucial bits that are ready to serve elsewhere, like docs#tactic.transform_decl_with_prefix_dict

Frédéric Dupuis (Aug 17 2020 at 14:17):

Actually I guess in practice that would be very similar to what I did by coding a complex version of real_inner_product.lean...

Patrick Massot (Aug 17 2020 at 14:20):

I don't remember the status of sesquilinear forms actually. We do have something, right?

Frédéric Dupuis (Aug 17 2020 at 14:24):

It's there, yes, in linear_algebra/sesquilinear_form.lean. It involves a parameter (I : R ≃+* Rᵒᵖ) that plays the role of the complex conjugate -- maybe there's a way to use it to make this parameter go away somehow in the real and complex cases but I don't see how to do it.

Frédéric Dupuis (Aug 17 2020 at 14:27):

Maybe this out_param business would help here -- is this documented somewhere? I tried to follow the refactoring of affine spaces but I didn't understand anything.

Mario Carneiro (Aug 17 2020 at 15:41):

@Patrick Massot In your proposal, does that still require the is_R_or_C typeclass? What is the type of inner_symm?

Patrick Massot (Aug 17 2020 at 16:29):

Yes, I was still thinking about using this type class. But it's not a very serious proposal until we actually see some code.

Jalex Stark (Aug 17 2020 at 22:10):

I feel like the best way forward is to get something merged now which deals with the very basics of the complex case, maximizing the code similarity between the real and complex cases as per one of Patrick's suggestions. Maybe this involves taking a file called real_inner_product_space.lean and changing it to a folder inner_product_space with two files real.lean and complex.lean. A reviewer might choose to literally read the diff of the two files against each other as part of the review.

Jalex Stark (Aug 17 2020 at 22:14):

Doing it this way creates the sort of well-defined and obviously-valuable metaprogramming problem which will cause people to invest effort to learn metaprogramming. The problem comes with a lower bound on how many lines of code you get to remove from mathlib, which sounds like a pretty fun metric to optimize for.

Frédéric Dupuis (Aug 17 2020 at 23:41):

This could be done pretty quickly, but the danger here is to drag our feet on the metaprogramming once this is done and get in too deep with the two separate cases. For instance, once we have complex inner products, it will be very tempting to get started on Hilbert spaces, where there is a lot of "ℝ or ℂ" stuff going on as well. Getting it right the first time could save us a lot of work down the line...

Aaron Anderson (Aug 18 2020 at 00:11):

I've got a probably daft idea for the is_R_or_C typeclass, from reading up on Artin-Schreier:
variables {K : Type*} [field K] [algebra ℝ K] [finite_dimensional ℝ K] would define your K to be probably R or C enough, but then you'd still need a variable for the automorphism that'd either be the identity for R or conjugation for C

Kevin Buzzard (Aug 18 2020 at 00:24):

Right -- the problem with this approach is that you'll surely end up needing that the norm from K down to the reals sends z to z*sigma(z) if K isn't R, i.e. that it's Galois with group of order at most 2. I do not know this area. In number theory (when dealing with infinite places of number fields) we are really happy with the squared norm |a+ib|=a^2+b^2 because this makes a bunch of stuff work out quite nicely. But in analysis I would imagine that you want to take the square root a fair amount of the time. Whilst it is true that there is a more general theory where you assume that the extension has degree 2, I'm finding it hard to unify the degree 1 and 2 cases. In number theory it's quite common to split into two different calculations for the real and the complex case.
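Spelled out, the two "norms" in play here:

$$ N_{\mathbb{C}/\mathbb{R}}(a+ib) = (a+ib)(a-ib) = a^{2}+b^{2} , \qquad |a+ib| = \sqrt{a^{2}+b^{2}} , $$

so the algebraic norm $z \mapsto z\,\sigma(z)$ is the square of the analytic absolute value; number theory is happy working with the former, while analysis usually wants the latter, which is part of why the degree 1 and degree 2 cases are awkward to unify.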

Frédéric Dupuis (Aug 18 2020 at 02:40):

Also, I think Patrick's main point was that having this automorphism lying around all the time would get too annoying in the long run, even if it ends up being defeq to the identity or conjugation depending on the case.

Reid Barton (Aug 18 2020 at 02:45):

One possibility would be to have a general interface with the automorphism and then a specialized API for when the automorphism is the identity, though I don't really know whether there are enough other examples to make this worthwhile

Frédéric Dupuis (Aug 18 2020 at 02:49):

Yes, I think a specialized API is what we want in the end, and I guess the main question is whether we should write it by hand or if it's worth having a to_additive-style mechanism to generate it automatically.

Patrick Massot (Aug 18 2020 at 09:53):

Frédéric Dupuis said:

This could be done pretty quickly, but the danger here is to drag our feet on the metaprogramming once this is done and get in too deep with the two separate cases. For instance, once we have complex inner products, it will be very tempting to get started on Hilbert spaces, where there is a lot of "ℝ or ℂ" stuff going on as well. Getting it right the first time could save us a lot of work down the line...

I'm sure we can try the meta-programming before doing Hilbert spaces.

Frédéric Dupuis (Aug 18 2020 at 13:39):

I agree! Do you know of a reference for metaprogramming? I wouldn't mind giving it a shot, but I'm a total beginner on that front...

Kevin Buzzard (Aug 18 2020 at 14:23):

https://leanprover-community.github.io/learn.html#metaprogramming-and-tactic-writing contains a link to a very nice introductory tutorial (written by Patrick in fact).

Frédéric Dupuis (Aug 18 2020 at 14:26):

Nice -- thanks!

Bryan Gin-ge Chen (Aug 18 2020 at 15:38):

The tutorial videos from LFTCM2020 by Rob Lewis are great too: https://www.youtube.com/watch?v=o6oUjcE6Nz4&list=PLlF-CfQhukNnq2kDCw2P_vI5AfXN7egP2

Johan Commelin (Aug 18 2020 at 16:20):

@Bryan Gin-ge Chen Good point. Here's a PR that adds them to learn.md: https://github.com/leanprover-community/leanprover-community.github.io/pull/115

Frédéric Dupuis (Aug 19 2020 at 02:39):

I've given this some more thought, and I think it's probably better to write at least the statements of the specialized lemmas by hand, especially for the real case. What we want is substantially fancier than to_additive: for instance, we don't just want to replace im's by zeroes, we want to get rid of them completely. This means that if the lemmas are autogenerated, there will be a lot of guesswork involved in figuring out what statement the metaprogram actually produced, which sounds much more painful than having to write them all by hand. However, it might be useful to write a tactic that would turn the proofs of these specialized statements into one-liners, for example by hunting down subexpressions involving im, re, etc, and calling the simplifier on them with a cleverly chosen set of lemmas (i.e. mul_zero and friends).

Scott Morrison (Aug 19 2020 at 03:08):

This might not need any actual metaprogramming, just an appropriate simp-set.

Frédéric Dupuis (Aug 19 2020 at 03:15):

Even better!

Patrick Massot (Aug 19 2020 at 08:00):

That's why I wrote "and do something smarter about im".
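As a rough illustration of the simp-set idea above (the attribute name and the particular lemmas are placeholders, not a proposal):

import tactic

-- declare a dedicated simp set; in the real case, `simp with real_case_simps`
-- would then strip the `im = 0` / `conj = id` noise. All names are placeholders.
mk_simp_attribute real_case_simps "lemmas for specializing ℝ-or-ℂ statements to the real case"

attribute [real_case_simps] mul_zero zero_mul add_zero zero_add neg_zero sub_zero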

Frédéric Dupuis (Aug 24 2020 at 21:57):

I've made some progress on the new typeclass, and I now have a first draft in PR #3934, which I have marked as work in progress for now. Basically I've defined the typeclass and added enough axioms to be able to reproduce most of the API in complex.lean. Towards the end of the file, most of the original proofs just went through unchanged, which seems to bode well.

I'd be very happy to have some feedback on this! In particular, if someone knows a good way to prove this:

lemma thirty_seven : (37 : K) = of_real 37 := sorry
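For what it's worth, one generic trick for numeral goals of this shape is to route the literal through ℕ, where casts of numerals are well understood. A sketch only, assuming of_real is (or can be bundled as) a ring hom ℝ →+* K, written f below; the draft class may not expose that directly:

import data.real.basic
import tactic.norm_cast

-- `f` stands in for a bundled `of_real`; the proof only uses that ring homs
-- commute with ℕ-casts and that ℕ-casts of numeral literals normalize.
example (K : Type*) [semiring K] (f : ℝ →+* K) : (37 : K) = f 37 :=
begin
  have h : ((37 : ℕ) : ℝ) = 37 := by norm_cast,
  rw [← h, f.map_nat_cast],
  norm_cast
end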

Last updated: Dec 20 2023 at 11:08 UTC