Some reading for today on my research into AI, regarding speech in infancy. I've read the intro and discussion. [ncbi.nlm.nih.gov] Particularly interesting are the five stages of vocal development described in the first paragraph; I have never heard of natural-language AI research even attempting to replicate that. There's also a related abstract about speech and vision in infants. [ncbi.nlm.nih.gov]
My primary question was whether vocal mimicry is indeed something babies do, as I've been assuming. It is, and it has long been my plan to include it in my AI. Something else I found really interesting: infants seem to make a very early connection between visual cues and articulation. That is, seeing people speak helps them learn to articulate.
Long ago, I decided to combine vocalization, hearing, and vision all at once, from the start, in order to pursue generalized AI (as opposed to what most applications seem to be these days: task-oriented AI). Part of that decision stems from a lack of resources, since robotics is an expensive endeavor. The other part is about having a workable framework: a workable set of abilities that can operate in combination to make sense of the world, to express that understanding, and in some cases to question others about it.
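To make the idea concrete, here's a toy sketch in Python (numpy only) of the kind of sensorimotor loop I have in mind. Everything in it is a placeholder of my own: vocal_tract, hear, and see_lips are stand-ins, and the numbers are arbitrary. But it shows vocalization, hearing, and a visual cue working together toward mimicry of a remembered target utterance.

import numpy as np

rng = np.random.default_rng(0)

def vocal_tract(articulation):
    # Stand-in for a real synthesizer: maps articulatory parameters
    # (jaw, tongue, lips, say) to a crude acoustic feature vector.
    W = np.array([[0.9, 0.2, -0.4],
                  [0.1, 1.1,  0.3],
                  [-0.3, 0.5,  0.8]])
    return np.tanh(W @ articulation)

def hear(sound):
    # Stand-in for auditory perception; a real system would extract
    # pitch- and formant-like features here.
    return sound

def see_lips(articulation):
    # Stand-in for vision: pretend the first parameter is visible lip aperture.
    return articulation[0]

# "Caregiver" utterance the infant-like agent tries to mimic.
target_articulation = np.array([0.6, -0.2, 0.4])
target_percept = hear(vocal_tract(target_articulation))
target_lips = see_lips(target_articulation)

def mismatch(articulation):
    # Combined auditory + visual error: what it sounded like and what the lips looked like.
    auditory = np.linalg.norm(hear(vocal_tract(articulation)) - target_percept)
    visual = abs(see_lips(articulation) - target_lips)
    return auditory + 0.5 * visual

# Babbling/mimicry loop: try small articulatory variations and keep
# whatever sounds (and looks) closer to the remembered target.
best = rng.uniform(-1, 1, size=3)
best_error = mismatch(best)
for _ in range(2000):
    candidate = best + rng.normal(scale=0.1, size=3)
    error = mismatch(candidate)
    if error < best_error:
        best, best_error = candidate, error

print(f"final mismatch: {best_error:.4f}")
print(f"recovered articulation: {np.round(best, 3)} vs target {target_articulation}")

In a real system the vocal_tract placeholder would be an articulatory synthesizer (like the one in Praat), and hear/see_lips would be actual acoustic and visual feature extractors, but the loop structure would stay the same: babble, perceive the result, and keep whatever gets closer to the target.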
My son is studying linguistics and is currently enrolled in a linguistic morphology class. Even this early in the term (it started Jan 7), he is learning a lot about how morphemes develop in the very young. You might want to check out some texts in that area, if you haven't already (subject: linguistic morphology in language development). Neat project! I hope it goes well for you.
I'll be trying to use a software program called Praat, which includes an articulatory speech synthesizer based on a physically accurate model of the human vocal tract. I'm not sure at this point how much more I need to study linguistics.
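From what I can tell, Praat drives that articulatory synthesizer from its own Speaker and Artword objects inside the program. The analysis side, at least, can be scripted from Python. Below is a minimal sketch assuming the parselmouth wrapper around Praat and a hypothetical recording (caregiver_utterance.wav); it pulls out the pitch track and the first two formants, the sort of acoustic targets the mimicry loop sketched earlier would compare against.

import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("caregiver_utterance.wav")   # hypothetical recording

# Fundamental frequency (pitch) track: the melody an infant might attend to.
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
voiced = f0[f0 > 0]
if voiced.size:
    print(f"mean F0: {voiced.mean():.1f} Hz over {snd.duration:.2f} s")

# First two formants at the midpoint, via Praat's own "Get value at time..." command.
formant = snd.to_formant_burg()
t_mid = snd.duration / 2
f1 = call(formant, "Get value at time", 1, t_mid, "Hertz", "Linear")
f2 = call(formant, "Get value at time", 2, t_mid, "Hertz", "Linear")
print(f"F1 ~ {f1:.0f} Hz, F2 ~ {f2:.0f} Hz at t = {t_mid:.2f} s")

The call() helper just forwards Praat's own menu commands, so the same route should reach the rest of Praat's functionality as well.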