Does natural language understanding have anything to do with understanding natural language?

Natural language understanding (NLU) and natural language generation (NLG) are the two overarching goals of computational work on human language. Does such work have anything to tell us about the scientific/linguistic goal of understanding how natural languages behave? Can a better understanding of linguistics help the development of practical computational techniques? It is far from obvious that the answer to these questions is yes, since successful computational modelling does not necessarily imply any real understanding. For instance, trajectories of physical objects can be modelled without understanding the underlying physics. Fred Jelinek, in a talk given at an award ceremony in 2004, admitted making the notorious comment 'every time I fire a linguist, our system performance improves' but argued that the goals of practical speech recognition simply did not coincide with the interests of the linguists. However, in this talk, I will suggest some more positive answers to these questions. I will describe DELPH-IN, a long-standing international collaboration involving researchers who are explicitly addressing both linguistic and computational goals, and discuss some of its successes. I'll outline how the development of precision grammars for various languages allows us to investigate language scientifically, and also to build practical systems for end users. I'll conclude by speculating on how this sort of work can further progress in the brave new computational world of deep learning.
