To properly train a classifier, you need positive and negative examples of whatever you're interested in detecting. I wonder what the negative examples were. Did she inject her own fictitious reports into the training data? If so, how does she know the classifier would generalize to someone else's fictitious reports?
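To make the positive/negative framing concrete, here's a minimal sketch in Python. Everything about it is invented for illustration: the features (speed and altitude), the synthetic distributions, and the idea that ghosts are kinematically implausible are my assumptions, not anything from her actual pipeline.

```python
# Hypothetical sketch of training a binary real-vs-ghost classifier.
# Features, distributions, and labels are all made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic "real" flights: plausible ground speed (kt) and altitude (ft).
real = np.column_stack([rng.normal(450, 40, 200), rng.normal(35000, 3000, 200)])
# Synthetic "ghost" flights: physically implausible kinematics.
ghost = np.column_stack([rng.normal(900, 100, 200), rng.normal(60000, 5000, 200)])

X = np.vstack([real, ghost])
y = np.array([0] * 200 + [1] * 200)  # 0 = real, 1 = ghost

# Scale features so logistic regression converges cleanly.
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
print(clf.score(X, y))  # training accuracy on this toy, well-separated data
```

The point of the toy is the question above: if the ghosts in the training set are all self-generated, the classifier may only learn to spot *her* generator's quirks, not ghosts in general.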
From a cybersecurity perspective, publishing the classifier is a bad idea: an adversary could use it as a discriminator to train their own model for generating plausible ghost readings, and end up with something that's very good at fooling hers.
IMO, the proper way to solve this problem is to integrate ADS-B data with air traffic control radar and link reports back to known radar observations and flight manifests. Authorities could then localize known-false reports and hopefully get law enforcement to use RF direction-finding equipment to track down the transmitters of these false broadcasts.
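The cross-check idea above could be sketched roughly like this: a report with no nearby primary-radar contact at about the same time gets flagged for follow-up. The data, thresholds, and match rule here are all invented for illustration, not any real ATC interface.

```python
# Hypothetical sketch: flag ADS-B reports with no corroborating radar contact.
# Thresholds (5 km, 10 s) and the data structures are invented assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_corroborated(report, radar_contacts, max_km=5.0, max_dt=10.0):
    """True if some radar contact is close to the report in both space and time."""
    return any(
        abs(c["t"] - report["t"]) <= max_dt
        and haversine_km(report["lat"], report["lon"], c["lat"], c["lon"]) <= max_km
        for c in radar_contacts
    )

radar = [{"t": 100.0, "lat": 40.64, "lon": -73.78}]
real_report = {"t": 102.0, "lat": 40.65, "lon": -73.79}
ghost_report = {"t": 102.0, "lat": 41.50, "lon": -72.00}

print(is_corroborated(real_report, radar))   # True: ~1.4 km from a contact
print(is_corroborated(ghost_report, radar))  # False: ~170 km from any contact
```

Uncorroborated reports, clustered by receiver and time, would then give you a search area for the RF tracking step.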