Dan Melamed's Research Interests

I'm interested in pretty much everything, but especially in the enterprise of making machines use language as fluently as people do. This enterprise has many names, including "computational linguistics" (CL), "natural language processing" (NLP), and "human language technology" (HLT). It is a field of research that started before digital computers even existed!

Within this large, multifaceted, and interdisciplinary field of research, I focus on what has long been considered the Holy Grail: automatic ("machine") translation between natural languages. Machine translation (MT) is "AI-complete" in the sense that to do it perfectly, we would have to solve every other problem of artificial intelligence. Therefore, MT researchers must pay attention to almost every other aspect of computational linguistics, plus a variety of other fields like machine learning and systems engineering. That's one of the reasons I enjoy research on MT: it encourages me to constantly broaden my horizons.

Another thing I love about MT research is that it encompasses problems ranging from the highly abstract and theoretical to the very application-oriented and experimental. I like to float between one end of this spectrum and the other, pushing theory into practice. For example, in 2003-2004, my colleagues, my students, and I invented a new class of "translingual" grammars and associated inference algorithms. In 2005, I led a team of scientists and engineers in implementing a new kind of statistical machine translation system, based on the theoretical principles that we had invented. This system is the first fully integrated SMT system to be made publicly available as open source. The diversity of research that is relevant to MT gives my students and me the flexibility to choose research topics that suit our talents and interests, while remaining part of a collegial group working towards a common goal.
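To give a flavor of what "statistical" means in this context: most SMT systems of this era were built around the noisy-channel model, which chooses the target sentence e that maximizes P(f|e)·P(e) for a given source sentence f. The toy sketch below illustrates that principle only; all vocabulary and probabilities are invented for illustration, and this is not the system described above (real systems estimate these distributions from parallel corpora and search over vastly larger candidate spaces).

```python
# Toy noisy-channel SMT decoder: pick the English candidate e that
# maximizes P(f | e) * P(e) for a fixed foreign sentence f.
# All probabilities below are invented for illustration.

# Translation model: P(foreign word | English word), word by word.
translation_model = {
    ("maison", "house"): 0.8,
    ("maison", "home"): 0.6,
    ("bleue", "blue"): 0.9,
    ("bleue", "sad"): 0.1,
}

# Language model: P(e) for whole candidate sentences.  A real system
# would use an n-gram model rather than a lookup table.
language_model = {
    "blue house": 0.02,
    "house blue": 0.001,
    "sad home": 0.005,
}

def tm_prob(foreign, english):
    """P(f | e) under a naive word-for-word channel model."""
    p = 1.0
    for fw, ew in zip(foreign, english):
        p *= translation_model.get((fw, ew), 1e-6)  # smooth unseen pairs
    return p

def decode(foreign, candidates):
    """Return the candidate sentence maximizing P(f | e) * P(e)."""
    def score(e):
        return tm_prob(foreign, e.split()) * language_model.get(e, 1e-9)
    return max(candidates, key=score)

f = ["bleue", "maison"]  # note: word order differs from English
candidates = ["blue house", "house blue", "sad home"]
print(decode(f, candidates))  # prints "blue house"
```

The language model rewards fluent English word order while the translation model rewards faithfulness to the source, which is why "blue house" beats the word-for-word ordering "house blue".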

Ultimately, that goal is nothing short of changing the way that MT research is done. The poor-quality translation technology that abounds on the web today is based on research from many years ago. A number of exciting scientific advances within the last few years, along with the burgeoning information landscape, have made machine translation ripe for a leap forward. The key is to marry the state of the art in machine learning with the state of the art in natural language parsing. Many of the research projects that I lead are aimed at the component technologies that are necessary for this marriage. My approach is language-independent and does not require its practitioners to be fluent in the languages that they happen to be working on.

In addition to MT, I am working on a variety of other NLP problems, some cross-lingual and some not. My research interests often reflect those of my students and collaborators. Current projects range from theoretical work on unbiased evaluation methods to practical work on automatic plagiarism detection. I also like to implement my ideas in software that I give away.


Dan Melamed (melamed at cs dot nyu dot edu)
Last modified: Wed Nov 2 14:38:31 EST 2005