In this talk, I present work that uses deep neural networks trained on raw MEG data to predict the age of children performing a verb-generation task, a monosyllabic speech-elicitation task, and a multisyllabic speech-elicitation task. I argue that the network makes these predictions on the basis of differences in speech development. Previous work has explored using neural networks to classify encephalographic recordings with some success, but these approaches do little to account for the structure of the data, typically relying on a popular contemporary architecture designed for an only loosely related application. They also typically require extensive feature engineering to succeed. I will show that configuring a neural network to mimic the common manual pipeline used to build brain-computer interface classifiers allows it to be trained on raw magnetoencephalography (MEG) and electroencephalography (EEG) recordings and to achieve state-of-the-art accuracy with no hand-engineered features.
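
To make the "manual pipeline" concrete: a minimal sketch of the classical BCI feature chain that such a network architecture might mirror — temporal (band-pass-like) filtering, spatial filtering across sensors, log-variance pooling, and a linear readout. All shapes, filter sizes, and the use of random weights here are illustrative assumptions, not the actual architecture or data from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recording: 64 MEG/EEG channels, 500 time samples.
n_channels, n_samples = 64, 500
x = rng.standard_normal((n_channels, n_samples))

# Stage 1: temporal filtering, here a single 1-D convolution applied per
# channel (the analogue of the hand-tuned band-pass filters in a manual
# pipeline; in a network this kernel would be learned).
kernel = rng.standard_normal(25)
filtered = np.stack([np.convolve(ch, kernel, mode="valid") for ch in x])

# Stage 2: spatial filtering, a linear projection across channels
# (the analogue of CSP or ICA spatial filters).
W = rng.standard_normal((8, n_channels))  # 8 spatial components, assumed
components = W @ filtered

# Stage 3: log-variance pooling, the classic BCI feature.
features = np.log(components.var(axis=1))

# Stage 4: a linear readout over the pooled features (e.g. a regression
# head predicting age).
readout = rng.standard_normal(8)
score = features @ readout
print(features.shape, float(score))
```

Each stage has a direct neural-network counterpart (temporal convolution layer, 1x1 spatial convolution, pooling nonlinearity, dense output layer), which is what lets the whole chain be learned end-to-end from raw recordings instead of being hand-tuned.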