Three simple astronomy problems: An unmoved goal post

Ten years ago* I wrote an article entitled "The Singularity and the State of the Art in Artificial Intelligence". It was written just before the deep learning revolution, which I did not in the least foresee, so some of the article has dated pretty badly. My first bullet point in the "Summary" section was particularly short-sighted:
"For most tasks in automated vision and natural language processing, even quite narrowly defined tasks, the quality of the best software tends to plateau out at a level considerably below human abilities, though there are important exceptions. Once such a plateau has been reached, getting further improvements to quality is generally extremely difficult and extremely slow."

Oh well.

But other aspects of the article are still valid, and I think have some more years or decades of life in them. In particular, I proposed three astronomy problems:

  1. Is there ever a lunar eclipse one day and a solar eclipse the next?
  2. To an astronomer near Polaris, which is brighter, the sun or Sirius?
  3. When is the next sunrise over crater Aristarchus?

A few comments about these:

Is there ever a lunar eclipse one day and a solar eclipse the next?

For any human being who has a basic understanding of the relative motions of the earth, sun, and moon, and their relation to eclipses, this is obvious: the answer is no, since a lunar eclipse can occur only at full moon and a solar eclipse only at new moon, and those are always about two weeks apart. The ancient Babylonians knew the answer, and the ancient Greeks both knew the answer and had an adequate explanation (it does not require a heliocentric theory).
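For what it's worth, the arithmetic is a one-liner. Here is a minimal sketch in Python, assuming only the standard average length of the synodic month (about 29.53 days):

    # A lunar eclipse happens at full moon and a solar eclipse at new moon,
    # so the shortest possible gap between the two is roughly half a synodic
    # month (the average new-moon-to-new-moon period).
    SYNODIC_MONTH_DAYS = 29.53  # standard average value

    min_gap_days = SYNODIC_MONTH_DAYS / 2
    print(f"Minimum gap between a lunar and a solar eclipse: ~{min_gap_days:.1f} days")
    # Prints ~14.8 days: a lunar eclipse one day and a solar eclipse the next
    # is impossible.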

As regards current and near-future AI: It might be doable in either of two ways.

First, there is some chance that, in the immense volume of text that large language models like GPT-3 have read, someone has written something close enough to this (e.g. "Necessarily, therefore, a solar eclipse and a lunar eclipse can never be less than about 14 days apart") that the technology could match against the question to get the right answer. But it seems unlikely, for two reasons:

  1. Probably no one has actually written that or anything similar.
  2. Determining the correspondence between "never be less than about 14 days apart" and "ever ... one day apart" involves the kind of reasoning that LLMs do very badly on. In particular, LLMs do badly with quantifiers, with negation, and with Yes/No questions, all of which are involved here (spelled out just after this list).
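To spell out what that correspondence requires (a rough formalization of my own, writing t(L) and t(S) for the times of a lunar and a solar eclipse):

    \forall\, L, S:\ |t(L) - t(S)| \gtrsim 14\ \text{days}
    \quad\Longrightarrow\quad
    \neg\, \exists\, L, S:\ |t(L) - t(S)| = 1\ \text{day}

Reading off the answer "No" means handling a universal quantifier, a negation, and the Yes/No form of the question all at once.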

In any case, a solution of this kind would not be robust, because similar problems can be posed that no one has ever bothered to write down explicitly. Admittedly, these are a little harder, but they should not be very hard.