The goal of this project is to develop methods that enable symmetric communication between people and computers. Machines are not merely receivers of instructions but collaborators, able to harness a full range of natural modes of communication, including language, gesture, and facial or other expressions. Communication is understood as the sharing of complex ideas in collaborative contexts. Complex ideas are assumed to be built from a relatively small set of elementary ideas, and language is thought to specify such complex ideas, though only incompletely, because language is ambiguous and depends in part on context; context can augment language and improve the specification of complex ideas. In the case of collaborative composition, researchers explore how humans and machines might collaborate to assemble a creative product, in this case contributing sentences to create stories. Success in this program would advance a number of application areas, most notably robotics and semi-autonomous systems.
Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines' current inability to explain their decisions and actions to human users. Explainable AI, especially explainable machine learning, will be essential if users are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners. The goal of this project is to produce more explainable models while maintaining a high level of learning performance, enabling human users to understand, appropriately trust, and effectively manage these partners.
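One simple way to see what "explainable" means in practice is an additive model whose prediction decomposes exactly into per-feature contributions. The sketch below is a hypothetical illustration of that idea, not the project's actual technique; the model, feature names, and weights are invented for the example.

```python
# A minimal sketch of one explainability idea: a linear model whose
# prediction is an exact sum of per-feature contributions, so each
# contribution serves as part of the explanation. The weights and
# feature names below are hypothetical.

def explain_linear_prediction(weights, bias, features):
    """Return the prediction and each feature's additive contribution."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Hypothetical model scoring whether an image patch contains a vehicle.
weights = {"edge_density": 2.0, "symmetry": 1.5, "texture_noise": -0.5}
bias = 0.1
features = {"edge_density": 0.8, "symmetry": 0.6, "texture_noise": 0.4}

score, why = explain_linear_prediction(weights, bias, features)
# Sorting by absolute contribution shows which features drove the score.
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"score: {score:.2f}")
```

Richer explainable models (rule lists, attention maps, post-hoc attribution) generalize this same question: which parts of the input, weighted how, produced the decision.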
Capturing complex spatial and temporal structure in high-bandwidth, noisy, ambiguous data streams is a significant challenge for even the most modern signal/image analysis systems. Current computational approaches are overwhelmingly compute intensive and can extract only limited spatial structure from modest quantities of data. The objective of this project is to represent a family of cortical processing models that can handle different data types and continuously optimize their own performance to accommodate new data. Such algorithms would be necessary for a cortical processor, which combines temporal and spatial recognition in a unified architecture with a modular structure. The cortical computational model should be fault tolerant to gaps in data, massively parallel, extremely power efficient, and highly scalable.
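As a toy illustration of fault tolerance to gaps in data, consider a streaming smoother that carries its state across missing samples instead of failing on them. This is a minimal sketch of the general requirement, under the assumption that gaps arrive as None values; it is not the cortical processing model itself.

```python
# A minimal sketch of one gap-tolerance strategy for a data stream:
# an exponential moving average that holds its last estimate across
# missing samples (represented as None) rather than crashing or
# resetting. Illustrative only; not the project's actual model.

def gap_tolerant_ema(stream, alpha=0.5):
    """Smooth a stream, tolerating missing values instead of failing."""
    state = None
    out = []
    for x in stream:
        if x is None:           # gap: hold the last estimate
            out.append(state)
            continue
        state = x if state is None else alpha * x + (1 - alpha) * state
        out.append(state)
    return out

noisy = [1.0, None, 3.0, None, None, 5.0]
print(gap_tolerant_ema(noisy))  # → [1.0, 1.0, 2.0, 2.0, 2.0, 3.5]
```

A real cortical model would of course do far more (prediction, hierarchy, parallelism), but the same principle applies: internal state bridges over dropped or corrupted input.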
The objective of this project is to model social interactions between humans, with a focus on the science of social interactions and human dynamics; the technological and pedagogical design of training tools for developing proficiency in human-dynamics interaction; and the assessment of SSIM training and subsequent performance outcomes.
The objective of the program is to explore and develop methods for scalable autonomous systems capable of understanding scenes and events for learning, planning, and execution of complex tasks. The program is exploring powerful mathematical frameworks for unified knowledge representation applied to shared perception, learning, reasoning, and action, exploiting probabilistic methods such as stochastic grammars to represent and process visual scenes and actions. Data-driven methods for spatial, temporal, and causal parsing of information are being developed for semantic understanding of scenes and events in unstructured environments, along with cognitive processing methods for exploitation and manipulation.
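The stochastic-grammar idea mentioned above can be made concrete with a toy probabilistic grammar over event sequences, where the probability of a derivation is the product of its rule probabilities. The grammar, activity, and event names below are hypothetical illustrations, not the program's actual representation.

```python
# A toy stochastic grammar over event sequences. Nonterminals expand
# into ordered children with attached probabilities; terminals must
# match observed events. The probability of the best derivation is the
# product of the rule probabilities used. All names are hypothetical.

# Rules: nonterminal -> list of (expansion, probability).
GRAMMAR = {
    "MakeTea": [(("Boil", "Pour", "Steep"), 0.7),
                (("Pour", "Boil", "Steep"), 0.3)],
    "Boil":  [(("fill_kettle", "heat"), 1.0)],
    "Pour":  [(("pour_water",), 1.0)],
    "Steep": [(("add_leaves", "wait"), 0.9),
              (("wait",), 0.1)],
}

def derivation_prob(symbol, events, i=0):
    """Best-derivation probability that `symbol` derives events[i:],
    returned with the index just past the events it consumed."""
    if symbol not in GRAMMAR:  # terminal: must match the next event
        matched = i < len(events) and events[i] == symbol
        return (1.0, i + 1) if matched else (0.0, i)
    best, best_j = 0.0, i
    for expansion, p in GRAMMAR[symbol]:
        prob, j = p, i
        for child in expansion:
            cp, j = derivation_prob(child, events, j)
            prob *= cp
            if prob == 0.0:
                break
        if prob > best:
            best, best_j = prob, j
    return best, best_j

events = ["fill_kettle", "heat", "pour_water", "add_leaves", "wait"]
prob, consumed = derivation_prob("MakeTea", events)
print(prob, consumed == len(events))
```

The same machinery, scaled up and learned from data rather than hand-written, is what lets a stochastic grammar parse a visual event stream into a structured, probability-weighted interpretation.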
This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.