We are wondering about the probability that 192 randomly chosen commuters
will overload the escalator.
The population is conceptual: all London commuters who might use the Pimlico
station at rush hour. We may as well take N to be infinite. The variable of
interest is their weight. The population mean is 150 lb; the population SD
is 28 lb; and the population histogram probably has something of a long
right-hand tail (because you can't survive as an adult at a weight much less
than 80 lb or so, but you can eat yourself quite far into the right tail
without expiring immediately). We are thinking of the total weight of the 192
commuters on the escalator, at a moment during rush hour when it is crowded,
as like the *sum* of 192 iid draws (or SRS; it doesn't matter when N is
infinite) from the population, so the sample is the 192 people so chosen,
n=192, and the sample summary of interest is the sum S of the 192 draws. In
terms of S the probability we want is P(S > 29700 lb). To compute this
probability we have to fill in the imaginary dataset, by repeatedly imagining
taking 192 iid draws from the population and computing their sum. The first
time you might get 29400 lb; the next time 28600; and so on. One way to
compute P(S > 29700 lb) is to work out the mean and SD of the imaginary dataset,
approximate the histogram of the sums by the normal curve, and compute the
area under the normal curve to the right of 29700. The long-run mean of the
sums in the imaginary dataset is the expected value of the sum, E(S)=
192*150 lb = 28800 lb. The long-run SD of the sums in the imaginary dataset
is the standard error of the sum, SE(S) = 28*sqrt(192), about 388 lb; in other
words, each sum should be around 28800 lb, give or
take about 388 lb. The long-run histogram of the sums should follow the
normal curve pretty well by the Central Limit Theorem, because 192 is a lot
of draws and the population histogram probably wasn't *that* badly
nonnormal to begin with. So P(S > 29700) can be decently approximated by
converting 29700 to standard units on a normal curve with mean 28800 and
SD 388 -- (29700 - 28800)/388 = 2.32 -- and looking up the area to the right
of 2.32 under the standard normal curve, which is about 1.0% or 1 in 100. In
other words, if they designed the escalator to handle 29700 lb, it would
break down about once every 100 fully loaded trips. 1% sounds like a small
number, but it's not small enough: With 90 one-minute fully loaded trips
every day in the morning and evening rush-hours combined, the escalator would
break about once every 1.1 days, which is much too often.
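
The normal-approximation arithmetic above can be checked in a few lines of
Python (a sketch; `NormalDist` from the standard library plays the role of
the standard normal curve and its table):

```python
from math import sqrt
from statistics import NormalDist

mu, sigma, n = 150.0, 28.0, 192   # population mean (lb), population SD (lb), draws

es = n * mu                       # E(S) = 192*150 = 28800 lb
se = sigma * sqrt(n)              # SE(S) = 28*sqrt(192), about 388 lb

z = (29700 - es) / se             # 29700 lb in standard units, about 2.32
p = 1 - NormalDist().cdf(z)       # area to the right of z, about 1%
print(se, z, p)
```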

Suppose they only wanted it to break once every 10,000 fully loaded trips. That would be like asking for a number of pounds x so that P(S > x) = .0001. The place on the standard normal curve with .0001 as the area to the right of it is about 3.72 (we want 99.98% in the middle), and working backwards from (x - 28800)/388 = 3.72, you get about 30240 lb. This is pretty interesting: to get the failure rate down from 1 in 100 to 1 in 10000 they only have to increase the load tolerance by about 540 pounds, from 29700 to 30240. The reason is that 2.32 is already pretty far out in the right tail of the normal curve, and the curve tails off toward zero very quickly from that point on -- you don't have to go much farther out to make the tail area drop like a rock.
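
The same inverse calculation can be done in Python with `inv_cdf`, the
standard library's version of reading the normal table backwards (a sketch
under the same normal-curve approximation):

```python
from math import sqrt
from statistics import NormalDist

es = 192 * 150.0                    # E(S) = 28800 lb
se = 28.0 * sqrt(192)               # SE(S), about 388 lb

z = NormalDist().inv_cdf(1 - 1e-4)  # standard units with area .0001 to the right, about 3.72
x = es + z * se                     # back to pounds, about 30240 lb
print(z, x)
```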

*Technical note*: we are probably relying too much on the exact behavior
of the normal curve way out in its right tail in making these calculations --
careful engineering work would be based not on the normal curve but on
simulations from the actual weight distribution of Underground rush hour
commuters.
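
Such a simulation might look like the sketch below. The lognormal weight
model here is purely an assumption, chosen only because it has a right-hand
tail and can be fitted to the stated mean of 150 lb and SD of 28 lb; careful
work would draw from measured commuter weights instead.

```python
import random
from math import log, sqrt

# Hypothetical right-skewed weight model: a lognormal matched by moments to
# mean 150 lb and SD 28 lb (an assumption standing in for real weight data).
m, s, n = 150.0, 28.0, 192
sigma2 = log(1 + (s / m) ** 2)   # lognormal shape parameter
mu = log(m) - sigma2 / 2         # lognormal scale parameter

random.seed(1)
trials = 10_000
over = sum(
    sum(random.lognormvariate(mu, sqrt(sigma2)) for _ in range(n)) > 29700
    for _ in range(trials)
)
p_hat = over / trials            # should land near the 1% normal-curve answer
print(p_hat)
```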