Jason Eisner is a Professor of Computer Science at Johns Hopkins University (JHU). He is recognized for his significant research contributions to natural language processing (NLP), programming languages, and artificial intelligence. His work is characterized by its theoretical depth and practical application, aiming to bridge the gap between current language models and robust reasoning capabilities.
Eisner is a prominent faculty member in the Department of Computer Science at Johns Hopkins University. His affiliations extend across several key research centers and departments within JHU, reflecting the interdisciplinary nature of his work:
- Department of Computer Science, Johns Hopkins University
- Center for Language and Speech Processing (CLSP), Johns Hopkins University
- Mathematical Institute for Data Science (MINDS), Johns Hopkins University
- Data Science and AI Institute, Johns Hopkins University
- Department of Cognitive Science (joint appointment), Johns Hopkins University
In addition to his academic roles, Eisner has also held a significant industry position, serving as Director of Research at Microsoft Semantic Machines from 2019 to 2024. This role highlights the practical relevance and industry impact of his research.
Professor Eisner's research program is centered on developing probabilistic modeling, inference, and learning techniques to build comprehensive models of linguistic structure. His overarching goal is to integrate the strengths of existing language models, particularly Large Language Models (LLMs), with common-sense and formal reasoning. This integration is crucial for advancing downstream applications such as sophisticated chatbot assistants and intelligent AI-driven educational tools.
His research contributions are substantial and span several interconnected areas within computer science and linguistics:
- Natural Language Processing (NLP): Eisner is renowned for his foundational work in NLP. He has developed numerous influential algorithms for core NLP tasks, including:
- Parsing Algorithms: His contributions include the O(n³) dynamic program for projective dependency parsing now commonly called the Eisner algorithm, along with other exact and approximate techniques for analyzing the syntactic structure of sentences (a schematic sketch of the dependency-parsing dynamic program appears after this list).
- Machine Translation: He has made contributions to statistical machine translation, focusing on improving the accuracy and fluency of translated text.
- Weighted Finite-State Transducers (WFSTs): Eisner has extensively utilized and advanced the theory and application of WFSTs for various NLP problems, particularly in speech and language processing.
- Computational Phonology: His research in this area includes formalizations, algorithms, and theorems for modeling phonological processes, alongside empirical evaluations.
- Unsupervised and Semi-supervised Learning: He has developed methods for learning syntactic structures, morphological patterns, and word-sense disambiguation with limited or no labeled data, addressing a critical challenge in NLP.
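The parsing recurrences referred to above can be written down compactly. Below is a minimal, illustrative Python sketch of the standard first-order projective dependency-parsing dynamic program widely attributed to Eisner; the arc-score matrix, the function name, and the omission of back-pointers are simplifications for exposition, not his original code.

```python
import numpy as np

NEG_INF = float("-inf")

def eisner_best_score(score):
    """Best score of any projective dependency tree.

    score[h][m] is the score of an arc from head h to modifier m,
    with token 0 acting as an artificial ROOT.
    """
    n = score.shape[0]                      # tokens including ROOT
    # chart[i][j][d][c]: best score of the span i..j, where
    #   d = 0: the head of the span is j,  d = 1: the head is i
    #   c = 0: incomplete span,            c = 1: complete span
    chart = np.full((n, n, 2, 2), NEG_INF)
    chart[np.arange(n), np.arange(n), :, :] = 0.0     # single-word spans

    for width in range(1, n):
        for i in range(n - width):
            j = i + width
            # Incomplete spans: add the arc between the two endpoints.
            best_split = max(chart[i, k, 1, 1] + chart[k + 1, j, 0, 1]
                             for k in range(i, j))
            chart[i, j, 0, 0] = best_split + score[j, i]   # arc j -> i
            chart[i, j, 1, 0] = best_split + score[i, j]   # arc i -> j
            # Complete spans: merge a complete span with an incomplete one.
            chart[i, j, 0, 1] = max(chart[i, k, 0, 1] + chart[k, j, 0, 0]
                                    for k in range(i, j))
            chart[i, j, 1, 1] = max(chart[i, k, 1, 0] + chart[k, j, 1, 1]
                                    for k in range(i + 1, j + 1))
    return chart[0, n - 1, 1, 1]            # ROOT heads the whole sentence
```

Adding back-pointers to the same chart recovers the best tree itself, and the loop structure makes the O(n³) running time explicit.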
- Declarative Programming for AI: Recognizing the need for robust and flexible tools for AI research, Eisner is the lead designer of Dyna, a declarative programming language designed to provide a powerful and expressive infrastructure for implementing and experimenting with a wide range of artificial intelligence algorithms, particularly in NLP and machine learning (a toy illustration of the declarative style follows this item).
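To give a flavor of the declarative style Dyna targets, the comment at the top of the sketch below shows the kind of weighted CKY rules commonly used to introduce it (schematic and from memory; consult the Dyna project page for the language's actual syntax). The Python function underneath is a hand-written equivalent for a toy grammar in Chomsky normal form; its names and grammar encoding are illustrative assumptions, not part of any Dyna release.

```python
from collections import defaultdict

# Schematic Dyna-style program for weighted CKY (illustrative syntax only):
#
#   phrase(X, I, J) += rewrite(X, W)    * word(W, I, J).
#   phrase(X, I, J) += rewrite(X, Y, Z) * phrase(Y, I, K) * phrase(Z, K, J).
#   goal            += phrase("S", 0, N) * sentence_length(N).

def inside_goal(words, unary, binary, start="S"):
    """Inside probability of `start` covering the whole sentence.

    unary:  {(X, terminal): prob}   e.g. ("NP", "fish") -> 1.0
    binary: {(X, Y, Z): prob}       e.g. ("S", "NP", "VP") -> 1.0
    A Dyna solver would derive this bottom-up computation automatically
    from rules like the ones above; here it is written out by hand.
    """
    n = len(words)
    phrase = defaultdict(float)                 # phrase[(X, i, j)]
    for i, w in enumerate(words):               # first rule: lexical items
        for (X, terminal), p in unary.items():
            if terminal == w:
                phrase[(X, i, i + 1)] += p
    for width in range(2, n + 1):               # second rule: wider spans
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for (X, Y, Z), p in binary.items():
                    phrase[(X, i, j)] += p * phrase[(Y, i, k)] * phrase[(Z, k, j)]
    return phrase[(start, 0, n)]                # the `goal` item
```

For example, inside_goal(["fish", "swim"], {("NP", "fish"): 1.0, ("VP", "swim"): 1.0}, {("S", "NP", "VP"): 1.0}) returns 1.0; changing any rule weight changes the accumulated total exactly as the += rules dictate.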
- Machine Learning Innovations: Eisner's current research actively incorporates cutting-edge machine learning techniques. This includes:
- Creative Applications of Large Language Models (LLMs): He explores novel ways to leverage the capabilities of LLMs while addressing their limitations, particularly in reasoning and grounding.
- Probabilistic Models of Linguistic Structures: He continues to refine and develop probabilistic models that capture the complexities of language, aiming for more interpretable and robust NLP systems (a minimal inference sketch for one such model follows this list).
- Combinatorial Algorithms and Approximate Inference: His work often involves the design and analysis of efficient combinatorial algorithms and approximate inference methods, essential for scaling up NLP models and making them computationally tractable.
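As a concrete instance of the probabilistic sequence models referred to above, the sketch below computes the log-likelihood of an observation sequence under a toy hidden Markov model using the forward algorithm. The parameterization and function names are generic textbook choices made for illustration only.

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log(sum(exp(x)))."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def hmm_log_likelihood(obs, log_pi, log_trans, log_emit):
    """Forward algorithm for a discrete-emission HMM.

    obs            : list of integer symbol ids (length T >= 1)
    log_pi[s]      : log prob of starting in state s
    log_trans[s,t] : log prob of moving from state s to state t
    log_emit[s,o]  : log prob of state s emitting symbol o
    Returns log p(obs), marginalizing over all hidden state sequences.
    """
    n_states = len(log_pi)
    alpha = log_pi + log_emit[:, obs[0]]                  # log alpha_1
    for o in obs[1:]:
        alpha = np.array([logsumexp(alpha + log_trans[:, t])
                          for t in range(n_states)]) + log_emit[:, o]
    return logsumexp(alpha)
```

Swapping the log-sum-exp accumulation for a max turns the same recurrence into Viterbi decoding, the usual route to a most-probable tag sequence.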
A defining characteristic of Eisner's research is the strong emphasis on empirical validation. He and his students rigorously implement and evaluate their proposed methods on naturally occurring language data. This commitment to empirical evaluation ensures that his theoretical contributions translate into practical advancements in the field, consistently pushing the state of the art in NLP.
Professor Eisner's research and teaching excellence have been recognized through numerous prestigious awards and honors:
- Fellow of the Association for Computational Linguistics (ACL): This is a highly selective honor recognizing individuals who have made significant and sustained contributions to the field of computational linguistics.
- Best Paper Awards: He has received multiple Best Paper awards at top-tier NLP conferences, including:
- ACL 2017 Best Paper Award
- EMNLP 2019 Best Paper Award
- NAACL 2021 Best Paper Award
- Outstanding Paper Awards: Further recognition of impactful research includes Outstanding Paper awards:
- ACL 2022 Outstanding Paper Award
- EMNLP 2024 Outstanding Paper Award
- Whiting School of Engineering Awards for Excellence in Teaching, Johns Hopkins University: He has twice received this school-wide teaching award, underscoring his commitment to and effectiveness in teaching and mentorship.
- NSF CAREER Award: This prestigious award from the National Science Foundation supports early-career faculty who have the potential to serve as academic role models in research and education.
These awards collectively highlight the high impact and quality of Professor Eisner's research and his dedication to both research and teaching.
Professor Eisner's academic background is exceptionally strong and interdisciplinary, reflecting his diverse research interests:
- PhD in Computer Science, University of Pennsylvania (2001): His doctoral work laid the foundation for his subsequent research in NLP and related areas.
- BA/MA in Mathematics, University of Cambridge (1993): His mathematics degrees from Cambridge underpin the theoretical and algorithmic rigor of his research.
- AB in Psychology, Harvard University (1990): His undergraduate degree in Psychology provides a valuable perspective on cognitive processes, informing his approach to natural language understanding and AI.
This diverse educational background in mathematics, psychology, and computer science has uniquely positioned him to make interdisciplinary contributions to the field of artificial intelligence.
Professor Eisner and his research group have developed several influential software tools and resources that are used within the research community:
- Dyna Programming Language (Lead Designer): As mentioned earlier, Dyna is a declarative programming language specifically designed for AI algorithm development, intended to simplify the implementation of complex probabilistic models and inference algorithms (see the Dyna project page in the references).
- Dopp Programming Language Parser (Lead Designer): Dopp is described as a parser for programming languages, apparently developed in connection with his work on Dyna and declarative programming; little public documentation about it is available.
- Dynasty Hypergraph Browser (Lead Designer): Dynasty is a visualization and exploration tool for hypergraphs, a data structure commonly used in NLP and constraint satisfaction problems; it aids in understanding and debugging complex models (see the Dynasty GitHub link in the references, and the minimal data-structure sketch below).
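Hypergraphs of the kind such a browser displays have a very simple core representation: each hyperedge links a head item to an ordered list of tail items with a weight, which is how packed parse forests and similar derivation structures are commonly encoded. The classes below are a minimal illustrative encoding, not Dynasty's actual data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Hyperedge:
    head: str             # the item this edge derives, e.g. "NP[2,5]"
    tails: List[str]      # the items it is built from (empty for axioms)
    weight: float         # rule or arc weight

@dataclass
class Hypergraph:
    edges: List[Hyperedge] = field(default_factory=list)

    def add(self, head: str, tails: List[str], weight: float) -> None:
        self.edges.append(Hyperedge(head, tails, weight))

    def incoming(self) -> Dict[str, List[Hyperedge]]:
        """Group edges by the item they derive (handy for inside scores)."""
        by_head: Dict[str, List[Hyperedge]] = {}
        for e in self.edges:
            by_head.setdefault(e.head, []).append(e)
        return by_head
```

A viewer built over this kind of structure can let a user expand each item's incoming hyperedges to inspect how a derivation was assembled.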
These software tools demonstrate Eisner's commitment to not only theoretical research but also to creating practical resources that benefit the wider research community.
Professor Eisner has authored and co-authored numerous highly cited and influential publications in leading NLP and AI conferences and journals. His publications are characterized by their rigor, innovation, and impact on the field. A selection of his notable publications includes:
- Eisner, J. (1996). Three New Probabilistic Models for Dependency Parsing: An Exploration. In Proceedings of the 16th International Conference on Computational Linguistics (COLING-96). [ACL Anthology Link]
- Eisner, J., & Satta, G. (1999). Efficient Parsing for Bilexical Context-Free Grammars and Head Automaton Grammars. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. [ACL Anthology Link]
- Eisner, J. (2002). Parameter estimation for probabilistic finite-state transducers. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (pp. 1-8). Association for Computational Linguistics. [ACL Anthology Link]
- Goodman, M., & Eisner, J. (2006). Code and annotation projection between syntax and semantics. Natural Language Engineering, 12(3), 265-306. [Cambridge Core Link]
- Eisner, J., Goldwater, S., & Smith, N. A. (2005). Breaking the ice: First steps toward unsupervised grammar induction of spoken language. In Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (pp. 803-810). [ACL Anthology Link]
- Jason Eisner - Department of Computer Science - Johns Hopkins University, Computer Science Department Faculty Page. [https://www.cs.jhu.edu/faculty/jason-eisner/]
- Jason Eisner - Bio - Johns Hopkins University, Computer Science Department Biography. [https://www.cs.jhu.edu/~jason/bio.html]
- Jason Eisner - Home Page - Jason Eisner's personal website at Johns Hopkins University. [https://www.cs.jhu.edu/~jason/]
- Jason Eisner - Center for Language and Speech Processing - Johns Hopkins University, Center for Language and Speech Processing Faculty Page. [https://www.clsp.jhu.edu/faculty/eisner-jason/]
- ACL Fellows - Association for Computational Linguistics. [https://aclweb.org/aclwiki/ACL_Fellows]
- NSF Award Search - NSF CAREER Award for Jason Eisner. [https://www.nsf.gov/awardsearch/simpleSearchResult?queryText=Jason+Eisner&ActiveAwards=true]
- Microsoft Semantic Machines - Microsoft Research. [https://www.microsoft.com/en-us/research/group/semantic-machines/]
- Dyna Project Page. [https://dynalang.org/]
- Dynasty tool on GitHub. [https://github.com/jason-eisner/dynasty]
- ACL Anthology Link for Eisner (1996). [https://aclanthology.org/P96-1019/]
- ACL Anthology Link for Eisner and Satta (1999). [https://aclanthology.org/P99-1016/]
- ACL Anthology Link for Eisner (2002). [https://aclanthology.org/P02-1001/]
- Cambridge Core Link for Goodman and Eisner (2006). [https://www.cambridge.org/core/journals/natural-language-engineering/article/code-and-annotation-projection-between-syntax-and-semantics/C36E654287486C844B5473144A780478]
- ACL Anthology Link for Eisner, Goldwater, & Smith (2005). [https://aclanthology.org/H05-1099/]