Text-to-speech (TTS) software has improved dramatically in recent years, driven by the machine learning innovations behind the current AI surge. Despite its popularity, TTS still produces infelicitous utterances in simple sentence structures such as questions. In Mainstream American English, Wh- questions typically have falling pitch, while Yes-No questions have rising pitch. We evaluate how well TTS reproduces these expected patterns on questions drawn from a short story, and compare against a human orator's audiobook reading. We test two modern TTS systems, Kokoro and FastSpeech 2, using automated methods to label the pitch at the end of each question as rising or falling. Kokoro, the better-performing model, produced rising pitch on Yes-No questions 66.7% of the time, compared with 77.8% for the human orator, and rising pitch on Wh- questions 61.9% of the time, compared with 31.6% for the orator. These results demonstrate a mismatch between TTS output and the human orator, and indicate that further work is needed to synthesize speech with naturalistic prosody.
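
The abstract does not specify the automated labeling procedure, so the following is only an illustrative sketch of one plausible approach: extract f0 with pYIN and fit a slope over the final voiced frames of each question. The function name `label_terminal_pitch` and the tail window length are assumptions, not details from the paper.

```python
# Minimal sketch (assumed method, not the paper's exact pipeline):
# label a question's terminal pitch as rising or falling from its f0 contour.
import numpy as np
import librosa

def label_terminal_pitch(wav_path, tail_sec=0.5):
    """Return 'rise' or 'fall' based on the f0 slope at the end of the utterance.

    tail_sec is an assumed analysis window; the paper does not specify one.
    """
    y, sr = librosa.load(wav_path, sr=None)
    f0, voiced, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C6"),
        sr=sr,
    )
    f0 = f0[voiced]                       # drop unvoiced (NaN) frames
    if f0.size < 4:
        return None                       # too little voiced speech to judge
    hop_sec = 512 / sr                    # pyin's default hop_length is 512
    n_tail = max(2, int(tail_sec / hop_sec))
    tail = f0[-n_tail:]
    # Fit a line to the final voiced frames (treated as contiguous, an
    # approximation); a positive slope indicates rising terminal pitch.
    slope = np.polyfit(np.arange(tail.size), tail, 1)[0]
    return "rise" if slope > 0 else "fall"
```

Under this sketch, each question clip from the TTS systems and the audiobook would be passed through the same labeler, and the reported percentages would be the share of clips labeled "rise" per question type.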