Can AI save endangered languages? Learning theories, language and AI

There is no doubt that learning new languages is infuriatingly difficult, especially later in life. As the world becomes "smaller" through globalisation, certain languages gain utility and take precedence over others, driving the less "useful" languages towards extinction. According to McWhorter (2009), the roughly 6,000 languages in use today will be reduced to about 600 within the next 100 years. Whether and how to save these endangered languages is a pressing question for the language sciences community. We now live in the age of information and artificial intelligence: the data we need is available in the palm of our hands, and mobile applications like Babbel and Duolingo lower the barriers to entry for learning a new language. So why is language learning still so difficult? Hasn't the wealth of philosophical thought experiments, cognitive theories and neuroscience research, combined with the scale and reach of modern technology, enabled us to make language learning as easy and intuitive as playing a video game? Can we not use this technology to increase the number of speakers of endangered languages? The answers, and further questions, lie in the history of personalised learning and the underlying principles and paradigm shifts that have shaped it over the centuries. The nature of knowing, as represented in contemporary theories of learning such as behaviourism, cognitivism, and constructivism, has provided some insight into how we learn. In this talk, I will walk through a brief history of learning that sheds light not only on whether artificial intelligence can save endangered languages, but also on whether it can make language learning less difficult.
