IARPA's mission is to promote high-risk, high-payoff research that has the potential to enhance the performance of Intelligence Community (IC) activities. IARPA conducts prize challenges to invite the broader research community of industry and academia worldwide to participate in a convenient, efficient, and non-contractual competition to stimulate breakthroughs in science and technology.

Can you passively characterize the ionosphere using selected digitized radio-frequency (RF) spectrum recordings that contain active sounders? The PINS Challenge presents an opportunity for individuals and teams to earn prizes by developing algorithms that characterize and model the effects of ionospheric variations on high-frequency emissions.

We’re on a mission to improve the accuracy and timeliness of geopolitical forecasts by advancing the science of forecasting. Are you up to the challenge? The second Geopolitical Forecasting Challenge (GF Challenge 2) presents Solvers with questions ranging from political elections to disease outbreaks to macro-economic indicators and asks for innovative, programmatic solutions that can combine any mix of human forecasts, Solvers' own data sources, and models into accurate, timely forecasts. This is your chance to test and showcase your forecasting methods and prove yourself against other state-of-the-art methods. GF Challenge 2 offers Solvers the opportunity to advance their research, contribute to global security and humanitarian activities, and enhance the science of forecasting as part of a collaborative community. And did we mention that up to $250,000 in prizes will be awarded?

Can you develop an algorithm that watches hours of video of a parking lot and automatically detects if someone walks onto the scene? Can your algorithm detect that the person entered a car? Can it detect if they were carrying something heavy? The Activities in Extended Video Prize Challenge invites participants from around the world to create innovative solutions to automatically detect and localize a set of 18 different types of activities in extended video scenes.

Challenge Workshop

Workshop Date: July 18, 2019

Registration Deadline: July 10, 2019

Event Registration
Registration Password: CASE829CFW

Workshop Details

Every day we make decisions about whether the people and information sources around us are reliable, honest, and trustworthy: the person, their actions, what they say, a particular news source, or the actual information being conveyed. Often, the only tools we have to make those decisions are our own judgments based on current or past experiences.

For some in-person and virtual interactions there are tools to aid our judgments. These might include listening to the way someone tells a story, asking specific questions, looking at a user badge or rating system, and asking other people for confirming information, or, in more formal settings, verifying biometrics or recording someone’s physiological responses, as is the case with the polygraph. Each of these examples uses a very different type of tool to augment our ability to evaluate credibility. Yet there are no standardized and rigorous tests to evaluate how accurate such tools really are.

Countless studies have tested a variety of credibility assessment techniques, attempting to determine rigorously when a source and/or a message is credible and, more specifically, when a person is lying or telling the truth. Despite the large and lengthy investment in such research, a rigorous set of validated methods for determining the credibility of a source or their information across different applications remains elusive. The Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI), intends to launch the Credibility Assessment Standardized Evaluation (CASE) Challenge to address this critical question.

Nearly half of all internet websites are in languages other than English. Access to the web and social media continues to spread to communities that speak less commonly used languages, giving those languages an increasingly substantial web presence. For most of these languages, reliable automatic processing software does not exist. Can you develop an application that locates foreign-language text and speech relevant to your needs using queries in English? Participants will be given modest amounts of machine translation and speech recognition training data to develop their solutions and will compete with natural language processing practitioners from around the world. The ultimate goal of the challenge is to advance research and development of human language technologies for lower-resourced and computationally underserved languages.
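To make the task concrete, here is a minimal sketch of one naive cross-language retrieval approach: machine-translate each foreign-language document into English, then rank documents by overlap with the English query terms. The translate callable is a hypothetical stand-in for a model built from the challenge's machine translation training data; this is an illustration only, not a required or recommended method.

    from collections import Counter

    def relevance(english_query, foreign_document, translate):
        # Machine-translate the document, then count English query-term overlap.
        query_terms = english_query.lower().split()
        doc_terms = Counter(translate(foreign_document).lower().split())
        return sum(doc_terms[t] for t in query_terms)

    def search(english_query, foreign_documents, translate, k=10):
        # Return the k foreign-language documents scoring highest against the query.
        ranked = sorted(foreign_documents,
                        key=lambda d: relevance(english_query, d, translate),
                        reverse=True)
        return ranked[:k]

Real systems would retrieve over speech via speech recognition output as well, and would use far stronger ranking models; the sketch only fixes the shape of the problem: English in, relevant foreign-language content out.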

Participant's Day WebEx Resources

Presentation Slides

Meeting Video

Meeting Q&A

Knowing when someone is telling the truth plays a critical role in law enforcement and national security settings, including criminal investigations, screening new employees before hiring, and interviewing potential sources and witnesses. The polygraph is one tool that members of the Intelligence Community (IC) and law enforcement look to for help, but there is a long-standing debate among researchers and polygraph practitioners about the accuracy and reliability of this tool. How can we evaluate how good the polygraph is, and how much better new tools may be? The Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI), intends to launch the Credibility Assessment Standardized Evaluation (CASE) Challenge to address this critical question.

Can you create a method to forecast the future? The Mercury Challenge presents an opportunity for individuals and teams to earn prizes by creating innovative solutions using machine learning and artificial intelligence methods to automatically predict the occurrence of critical events.

Can you leverage the data outputs of multiple face recognition algorithms to improve overall accuracy? There is a large literature on biometric fusion intended to improve accuracy by fusing multiple modalities (e.g., face + fingerprint), multiple algorithms, or multiple samples. However, most of that research has addressed only 1:1 verification at the score level. This prize challenge aims to stimulate research into methods that improve one-to-many (i.e., 1:N) identification accuracy via template-level fusion. Can further accuracy gains be realized by fusing feature-level templates, or through more innovative score-level fusion methods informed by modern data science?
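For readers new to the area, the sketch below shows the conventional score-level baseline the challenge hopes to move beyond: min-max normalize each algorithm's gallery scores so their scales are comparable, then take a weighted average and rank. All scores and names are hypothetical; this is a minimal illustration, not the challenge's evaluation protocol.

    import numpy as np

    def min_max_normalize(scores):
        # Map one algorithm's gallery scores onto [0, 1] so scales are comparable.
        lo, hi = scores.min(), scores.max()
        return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

    def fuse_identification_scores(score_lists, weights=None):
        # Weighted-sum score-level fusion for a single probe's 1:N search.
        normalized = np.stack([min_max_normalize(np.asarray(s, dtype=float))
                               for s in score_lists])
        if weights is None:
            weights = np.full(len(score_lists), 1.0 / len(score_lists))
        fused = np.average(normalized, axis=0, weights=weights)
        return fused, int(np.argmax(fused))  # fused scores, top candidate index

    # Hypothetical scores from two algorithms over a five-subject gallery,
    # reported on deliberately incomparable scales.
    alg_a = [0.20, 0.85, 0.30, 0.10, 0.40]
    alg_b = [110, 140, 180, 90, 120]
    fused, top = fuse_identification_scores([alg_a, alg_b])
    print(fused, "-> top candidate", top)

Template-level fusion, by contrast, would combine the algorithms' feature vectors before any comparison takes place, which is exactly the less-explored territory the challenge targets.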

What is the current state of the art for image restoration and enhancement applied to images acquired under less-than-ideal circumstances? Can applying enhancement algorithms as a pre-processing step improve image interpretability for manual analysis, or improve automatic visual recognition of scene content? The Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI), is sponsoring the UG2+ Prize Challenge. This challenge seeks to answer these important questions for general applications in computational photography and scene understanding. As a well-defined case study, the challenge aims to advance the analysis of images collected by small unmanned aerial vehicles (UAVs) by improving image restoration and enhancement algorithm performance on the UG2 dataset, which includes imagery from UAV, glider, and ground collects.
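As a toy illustration of what "enhancement as a pre-processing step" means, the sketch below cleans up a frame before it would be handed to a classifier. It uses generic Pillow filters and a hypothetical file name; it is not a UG2+ baseline or part of the challenge's evaluation pipeline.

    from PIL import Image, ImageEnhance, ImageFilter

    def enhance_for_recognition(path):
        # Toy pre-processing chain: light denoise, then sharpness and contrast boosts.
        img = Image.open(path).convert("RGB")
        img = img.filter(ImageFilter.MedianFilter(size=3))   # suppress sensor noise
        img = ImageEnhance.Sharpness(img).enhance(1.5)       # recover edge detail
        img = ImageEnhance.Contrast(img).enhance(1.2)        # aid interpretability
        return img

    # enhanced = enhance_for_recognition("uav_frame.png")    # hypothetical UAV frame
    # The enhanced frame would replace the raw frame as the classifier's input.

The open question the challenge poses is whether chains like this (or learned restoration models) measurably improve downstream classification on degraded UAV, glider, and ground imagery, rather than merely making images look better to a human.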