It just means using one CPU for one hour. We can probably treat a CPU as a core here, since for CPU-bound work a core is effectively another CPU. It's a similar metric to a man-hour, which is literally one person working for one hour. The actual time to completion doesn't matter and raw performance is ignored; you simply adjust the amount of time you think you'll need if the hardware turns out to be faster or slower than the baseline you estimated against. For instance, if you estimate 10 CPU hours of work to get something done, you can finish in 2 hours of wall-clock time by scheduling 5 CPUs to do the work. Or if you're running on higher-performing machines than when you made the estimate, you might reduce the estimate to 8 CPU hours.
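If it helps, here's that arithmetic as a quick Python sketch (the numbers and the 25% speedup factor are made up purely for illustration):

```python
# Illustrative CPU-hour arithmetic (made-up numbers).

cpu_hours = 10     # estimated total work: 10 CPU hours
num_cpus = 5       # CPUs/cores you schedule in parallel

# Wall-clock time if the work splits evenly across the CPUs
wall_clock_hours = cpu_hours / num_cpus
print(wall_clock_hours)        # 2.0 hours

# If the machines are ~25% faster than the baseline you estimated
# against, scale the estimate down accordingly
speedup = 1.25
adjusted_cpu_hours = cpu_hours / speedup
print(adjusted_cpu_hours)      # 8.0 CPU hours
```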
So basically, it's just saying they spent a cumulative total of 100 million hours crunching numbers.
EDIT: I realized this doesn't really explain why things are measured this way. The basic reason is simple: running stuff on an HPC cluster, server, etc. is billed by the time you use it. HPC systems and the like are typically time-shared; the researchers don't own any of the computers that do the heavy lifting, so to speak. So for accounting purposes, you have to estimate how many hours you think you'll need on a system to do the work. Then the bean counters and proposal people can go "okay, we need $XXXX for compute costs".
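To make that budgeting step concrete, here's a back-of-the-envelope sketch; the $/CPU-hour rate is completely invented, since real rates vary a lot by facility and allocation:

```python
# Hypothetical compute-budget estimate (the rate is made up for illustration).

estimated_cpu_hours = 100_000_000   # e.g. the 100 million CPU hours above
cost_per_cpu_hour = 0.02            # pretend $/CPU hour

compute_budget = estimated_cpu_hours * cost_per_cpu_hour
print(f"~${compute_budget:,.0f} for compute")   # ~$2,000,000 for compute
```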