Organization Types:
Academic
Business < 500 Employees
Business > 1000 Employees
Business 500-1000 Employees
Community College
Federally Funded Research and Development Center (FFRDC)
Government Owned and Operated (GOGO)
Historically Black College and University (HBCU)
Indian/Native American Tribal Government
Individual
Large Business
Non-Profit
Small Business
State and/or Local Government
University
|
|
Technology Categories:
Bioenergy |
Building Efficiency |
Conventional Generation (Non-renewable) |
Grid |
None of the above |
Other |
Renewable power (non-bio) |
Transportation |
|
| Organization | Contact | Organization Type |
| Advanced Micro Devices | Jay Owen | Large Business |
Other
| As a leader in the emerging AR/VR markets, and a technology leader in GPU and CPU solutions, AMD is interested in partnering with qualified solution providers on the development and market deployment of AR/VR in wide-ranging applications. Collaboration in the development of photo-realistic virtual presence solutions and algorithms for this and other applications would be managed through AMD Advanced Research, LLC, a wholly owned subsidiary. AAR, LLC also handles AMD's participation in the DOE Exascale Computing Initiative and other agency-funded programs. |
|
| Agility Robotics | Jonathan Hurst | Business < 500 Employees |
None of the above
| Agility Robotics creates legged mobility platforms, which enable teleoperated robotic systems to go where humans can go and co-inhabit human spaces. Our robots can be controlled as one would control a horse: choose a direction, but allow the robot to decide footfalls and path details. We are interested in providing this platform in partnership with companies exploring virtual reality interfaces, to enable a true telepresence experience for various forms of tele-labor, including logistics and package delivery, infrastructure inspection, and real-time data collection in remote locations.
We bring this technology to market from more than a decade of research at Oregon State University and Carnegie Mellon University. Videos of robots walking and running, as well as recorded talks, are posted on our company web site. |
|
| Epic Games | Kim Libreri | Business < 500 Employees |
Other
| Epic Games, founded in 1991 and headquartered in Cary, North Carolina, employs 400+ people worldwide. Epic develops and distributes leading gaming titles for PC, VR, mobile, and console platforms, with a prestigious portfolio that includes the billion-dollar Gears of War franchise as well as leading past and upcoming titles such as Unreal Tournament, Infinity Blade, Robo Recall, Paragon, Fortnite, and SpyJinx.
Epic is the creator of the award-winning Unreal Engine, a highly sophisticated software platform designed to enable users to create and develop video games and other interactive high-fidelity graphical content. Unreal Engine is the most widely recognized and leading high-end engine platform for triple-A and online games, with capabilities across PC/Mac, Xbox, PlayStation, Nintendo, mobile and enterprise applications.
Epic Games and Unreal Engine have led the industry in real-time computer graphics innovation over the last two decades. Unreal Engine is now the premium choice for photorealistic virtual reality experiences and has been a critical component in pushing the boundaries of what is possible with digital humans/avatars and live performance-driven capture.
Some background on our recent developments: https://www.fxguide.com/featured/epic-face-work-with-ninja-theory/ https://www.fxguide.com/featured/epic-win-previs-to-final-in-five-minutes/ |
|
| FoVI 3D | Amy Lessner | Business < 500 Employees |
Other
| FoVI 3D is at the forefront of light-field visualization. As a pioneer in the creation of, and standards for, native light-field display technology, FoVI 3D is interested in collaborating with solution providers to develop and deploy a collaborative, heterogeneous environment for Digital Transportation applications. Our light-field display projects 3D imagery that is visible to the unaided eye (without glasses or head tracking) and allows for perspective-correct visualization within the display’s projection volume. A light-field table provides full 360° viewing with precise spatial registration, independent of the locations of multiple simultaneous viewers. All of the expected depth cues are provided to recreate the 3D visual as one would see it in the natural world. As a result, viewers quickly and naturally understand complex spatial relationships between objects, allowing faster, richer comprehension of the data and synchronized collaboration among multidisciplinary stakeholders over a common operating picture. FoVI 3D currently works with multiple DoD agencies to support the advancement of this groundbreaking technology. |
|
| fxguide, LLC. | Mike Seymour | Business < 500 Employees |
Other
| The California-based fxguide team would be led by Mike Seymour. Mike Seymour is a leading consultant and researcher in digital humans and best practices for their development. His work in digital presence, and in what it means to provide an effective facial and interactive pipeline, has led him to work with some of the best companies and research groups in the world. fxguide's skill set is in understanding the interaction and the implications of a successful working system.
Mike Seymour's own research explores technological developments that bring interactive computer agents and avatars into our everyday lives and routines. These interactive avatars are designed to be the focus of our interactions – we can feel “present” with them. Yet current theories of “presence” do not account for what it means to be present with technology in an experiential sense. In response, fxguide's research draws on both technical implementations and existential philosophy to generate an agenda for conceptualising presence in the context of what we term human-computer engagement. This research suggests a new perspective focusing on the situated interaction rather than an a priori assessment of the entities involved. Our expertise extends to the ethical questions that emerge when technology is experienced as an independent avatar with which one can be present and interact. |
|
| GE Global Research | Peter Tu | Business > 1000 Employees |
Other
| GE Global Research has developed a number of computer vision methods for capturing the social cues of individuals as they interact in group-level activities. We believe these capabilities can be used to "digitize" individuals, allowing for high-fidelity reproduction of a person as an avatar. |
Website: none
Email: tu@ge.com
Phone: 518-387-5838
Address: 1 Research Circle, Niskayuna, NY, 12309
|
| Georgia Institute of Technology | B. H. (Fred) JUANG | University |
Other
| A biosketch of B.H. (Fred) Juang: Professor Juang received his undergraduate degree from National Taiwan University, and his master’s and doctoral degrees from the University of California, Santa Barbara. He was associated with Speech Communication Research Laboratory and Signal Technologies, Inc. on several Government-sponsored projects in the area of speech communication during 1979-1982. In 1982, he joined Bell Laboratories, where he conducted research in signal coding and recognition, acoustic signal processing, and digital audio broadcasting. Professor Juang became Director of Acoustics and Speech Research at Bell Labs in 1996, where he directed research in speech, audio, and multimedia communications. He led a team credited with such important inventions as the electret microphone, the network echo canceller, a series of speech CODECs that are widely used in landline and wireless networks, a number of key computational algorithms for signal/data modeling and automatic speech recognition, and the development of an array of speech-enabled service applications for AT&T. Innovations under his leadership include a world-first real-time full-duplex hands-free stereo teleconferencing system and an early system for digital audio broadcasting in the United States. Prof. Juang holds about 20 patents. He has published extensively, including the textbook “Fundamentals of Speech Recognition”, co-authored with L.R. Rabiner in 1993. Prof. Juang joined Georgia Tech in 2002 as Motorola Foundation Chair Professor.
Prof. Juang was Editor-in-Chief of the IEEE Transactions on Speech and Audio Processing, member of the IEEE SP conference board, and Distinguished Lecturer and Chair of Fellow Evaluation Committee of the IEEE SP Society, in a long record of services for technical societies. Prof. Juang has received numerous distinctions and recognitions, including Bell Labs' President Gold Award, IEEE Signal Processing Society Technical Achievement Award, the IEEE Third Millennium Medal, several Best Paper Awards and Most Influential Signal Processing Magazine Paper Award from the IEEE Signal Processing Society, and the IEEE James L. Flanagan Field Medal on Speech and Audio Processing. Prof. Juang is an IEEE Fellow, a Bell Labs Fellow, a member of the National Academy of Engineering, an Academician of the Academia Sinica, and a Charter Fellow of the National Academy of Inventors. |
|
| HEXXU Corp | Richard Oesterreicher | Business < 500 Employees |
Other
| HEXXU is developing patent-pending technology (HEX VR) for live-action capture and delivery of VR/AR content using minimal bandwidth. The focus of HEXXU's efforts is on four critical components:
- VISUAL FIDELITY: Photorealistic digitizing of humans, objects, and environments.
- MOTION FIDELITY: 3D markerless/sensorless motion capture from 2D cameras.
- BANDWIDTH SAVINGS: The HEX VR motion stream uses less bandwidth than HD video.
- VIEWER-AWARE CHARACTERS: Enabling reactive performances on live or recorded content.*
* HEXXU's reactive performance technology means virtual characters can personally interact with each viewer, with important body language, eye contact, and conversation – even if there are thousands of viewers at the same time. |
|
| High Fidelity | Philip Rosedale | Small Business |
Other
| Philip Rosedale is CEO and co-founder of High Fidelity, Inc., a company devoted to exploring the future of next-generation shared virtual reality. Our team has a deep legacy of expertise in software development, social entertainment, peer-based recognition systems, community development, and workforce mobilization. We believe that both the hardware and the internet infrastructure are now available to give people around the world access to shared virtual worlds that will offer a broad range of capabilities for creativity, education, exploration, and play. Prior to High Fidelity, Rosedale created the virtual civilization Second Life, populated by one million active users generating US$700M in annual transaction volumes. In addition to numerous technology inventions (including the video conferencing product called FreeVue, acquired by RealNetworks in 1996 where Rosedale later served as CTO), Rosedale has also worked on experiments in distributed work and computing. |
|
| Immersio Inc. | Eriks Strals | Business < 500 Employees |
Other
| Immersio is here to make digital transportation FUN. Without an engaging and enjoyable experience, we have no hope of reaching the general consumer audience. That is the audience we believe we should truly focus on if we want to make the most impact on physical transportation.
We are a creative games company focused on building top quality VR and AR experiences. We strive to create the most comfortable and enjoyable experiences possible. Our team consists of experts in the fields of game design, UX/UI design, 3D art/modeling, and game production. We are up to date on all the latest technologies used to build 3D VR worlds, such as the Unreal engine and Unity 3D.
Our background in games development and creative software engineering puts us on the cutting edge of creating immersive interactive experiences with virtual reality as a medium. While others in the interactive content industry might be able to create simple functional experiences, nothing beats gaming for engagement. Just look at the explosive growth of the games and emerging VR industries for proof.
Some of our experiences have led our clients to multi-million dollar acquisitions from companies such as Google and Accenture. |
|
| Invimage LLC | Michele Calogero-Duca | Business < 500 Employees |
Other
| Invimage is a woman-owned software development small business located in Carlsbad, California. We are focused on developing communication software for remote collaboration using the latest virtual/augmented reality technologies. We are actively developing a multi-device collaboration platform that will work with VR/AR devices, mobile devices, and any standard web browser. We are very excited about the version of our collaboration platform running on VR/AR devices, as they give you the feeling of actually being in and participating in a meeting. We think this is a significant advancement in remote meetings: your brain truly connects you to the meeting, which creates more participation, engagement, and productivity. |
|
| Iowa State University - Virtual Reality Applications Center | James Oliver | University |
Other
| Iowa State University’s Virtual Reality Applications Center (VRAC) is an interdisciplinary research center focused on the intersection of humans and technology, aimed broadly at enhancing the productivity and creativity of people.
The VRAC research community spans a wide spectrum of disciplinary experts with particular strengths in state-of-the-art interaction technologies including virtual, augmented and mixed reality (VR/AR/MR) as well as mobile computing, developmental robotics, and haptics interaction. The VRAC community is also skilled at human centered design and user experience (UX) evaluation as well as assessing the effectiveness of new interaction modalities via formal user studies. |
|
| MIT Media Lab | V. Michael Bove, Jr. | University |
Other
| The Media Lab and its predecessor organization have been developing telepresence and surrogate travel since the late 1970s. Current relevant expertise includes true holographic video and other lightfield display technologies, immersive displays including 8K/HDR, augmented reality, visual psychophysics, and free-space haptics. |
|
| National Transportation Center at the University of Maryland | Lei Zhang | University |
Transportation
| Area of Expertise Related to Digital Transportation: "Scholarly validation of travel-replacement criteria."
The National Transportation Center (NTC) at the University of Maryland (UMD) is a National University Transportation Center designated by the U.S. Department of Transportation, and an international leader in travel and transportation data, modeling, and analysis. NTC has an ongoing ARPA-E project on travel behavior as part of the TRANSNET Program. UMD has the most active ARPA-E projects among all universities, with 9 ongoing projects. NTC has conducted a number of previous projects for various federal agencies on telecommuting, travel choices, and in-home/out-of-home activity substitution. NTC is also a leader in transportation system performance monitoring and assessment. We intend to participate in the Digital Transportation Program as a subcontractor providing "scholarly validation of travel-replacement criteria," as specified in the ARPA-E expertise requirement for this program. |
|
| Naval Research Laboratory | Mark A. Livingston | Government Owned and Operated (GOGO) |
Other
| Our research on robotics, virtual reality, augmented reality, and human-robot interaction provides expertise on real-time tele-immersive communication and interaction. We have experience in real-time digitization of objects and environments. Digital data are transmitted in real-time and remotely reconstructed to allow immersive human interaction. Our extensive history in augmented reality includes one of the earliest mobile AR systems, command-and-control stations, and a host of indoor applications. We have conducted numerous quantitative and qualitative user studies in AR, both to generate requirements and to evaluate system performance.
Results from our research in AR may be found on the listed web page. For research results in other areas, please contact us. |
|
| Neuroscape, UCSF | Adam Gazzaley / David Ziegler | University |
Other
| Neuroscape is a translational neuroscience center whose mission is to bridge the gap between technology and neuroscience. Our Core faculty and staff engage in technology innovation and scientific research to develop and validate novel assessment and optimization approaches. Neuroscape’s unique multidisciplinary approach involves the development of custom-designed, closed-loop systems that integrate recent technological advances in software (e.g., 3D video game engines, multimodal recording and brain computer interface algorithms) with the latest innovations in hardware (e.g., virtual reality, motion capture, GPU computing, wearable physiological recordings, and transcranial brain stimulation) and mobile multimodal biosensing technologies (e.g., wireless EEG, MRI, heart rate variability, skin conductance responses, pupillometry, eye movements, facial expression recognition) to obtain quantified metrics of physiology during high-level performance and training. Building effective closed-loop technologies necessitates collaborative development teams of experienced designers, programmers, multimedia engineers, UI experts, and artists/musicians, working closely with our Core scientists to generate engaging interactive experiences complete with adaptivity, rewards, art, music, and story.
Dr. Adam Gazzaley, M.D., Ph.D. is Professor in Neurology, Physiology and Psychiatry at University of California, San Francisco and the Founder / Executive Director of Neuroscape. He designs and develops novel brain assessment and optimization tools to impact education, wellness, and medicine practices. This novel approach involves the development of custom-designed, closed-loop video games integrated with the latest advancements in software (brain computer interfaces, GPU computing, cloud-based analytics) and hardware (virtual/augmented reality, motion capture, mobile physiological recording devices, transcranial electrical brain stimulation). He submits these advances to rigorous research studies that evaluate the impact of the technology on multiple aspects of brain function and physiology. |
|
| NVIDIA | Tom Riley | Business > 1000 Employees |
Other
| THE WORLD LEADER IN GPU ACCELERATED COMPUTING
NVIDIA is the pioneer of GPU-accelerated computing. We specialize in products and platforms for the large, growing markets of gaming, professional visualization, data center, and automotive. Our creations are loved by the most demanding computer users in the world – gamers, designers, and scientists. And our work is at the center of the most consequential mega-trends in technology — virtual reality, artificial intelligence, and self-driving cars.
See more at: http://www.nvidia.com/object/about-nvidia.html |
|
| PARC, a Xerox Company | Victoria Bellotti | Business < 500 Employees |
Other
| PARC, a Xerox Company, has strong competencies in hardware and software technology innovation, with particular strengths in Human-Computer Interaction (HCI), social science, user experience research, and design, and with expertise in the domains of knowledge work, remote collaboration, computer-mediated communication, and interpersonal awareness technologies.
PARC has experience developing and studying remote collaboration technologies, and its social scientists are interested in partnering with other technology developers to bring their expertise in this area to bear on “real-time capture and digitization of extremely detailed communicative information” and the “development of complementary technologies that the community justifies as being necessary for the realization of digital transportation,” as mentioned in the ARPA-E Digital Transportation Teaming Partner List Announcement (DTTPLA). Our social scientists and user experience designers are skilled at eliciting hidden requirements and translating field data into actionable design insights, mock-ups, and prototypes, as well as conducting user-centered technology evaluation.
PARC is also interested in the challenge of running “studies that aim to definitively and quantitatively establish the set of requirements for digital transportation technologies to be preferable to physical travel, and methods to test and validate progress of these technologies towards travel-reduction,” as mentioned in the DTTPLA. PARC has developed cost-effective ways of collecting qualitative and quantitative field data on a national scale from diverse settings, including ethnographic studies in the home as well as in the enterprise (with or without video data collection), the “participant observation” method, interviews, experience sampling, diary studies, surveys, and more. PARC is also well versed in user-centered and participatory design, and in agile and lean methods.
PARC’s most senior social scientists, Dr. Peter Pirolli and Dr. Victoria Bellotti, are both members of the ACM SIGCHI Academy and associate editors of various HCI publications, and Dr. Pirolli is also a Fellow of the American Association for the Advancement of Science (AAAS), the American Psychological Association (APA), the Association for Psychological Science, and the National Academy of Education. |
|
| ROBOTIS INC | Inna Tlisov | Small Business |
Other
| ROBOTIS is a leading developer of smart servos, industrial actuators, manipulators, open-source humanoid platforms, and educational robot kits and platforms. The company specializes in the development of modular, expandable, and reconfigurable robotic hardware.
Our core technology is the modular actuator, DYNAMIXEL; these robot-standard building blocks enable engineers and developers to focus on the development of platforms. We have accumulated a wealth of humanoid platform technology that is based on this core DYNAMIXEL technology. This includes THORMANG, a full-sized open-source platform, based on the DYNAMIXEL PRO industrial actuator. Optimized for field applications, THORMANG was a finalist at the 2015 DARPA Robotics Challenge (DRC). Moreover, eight other finalists out of twenty-three teams were also utilizing ROBOTIS’ DYNAMIXEL PRO or THORMANG as a base platform, highlighting the need for a modular, standardized platform.
ROBOTIS envisions such a platform (i.e. THORMANG) playing a vital role as a field robot. This platform should be easy to dispatch and maintain and should be capable of exploring, monitoring, and maintaining its environment autonomously, identifying hazardous situations, and containing them if containment systems fail, all while capturing and conveying “detailed, [real-time] communicative information...to a remote observer.”
However, the THORMANG humanoid platform is still a research platform in the initial stages of development. We recognize the need to research and develop a field robot platform capable of telepresence and tele-labor technologies that allow the user to accomplish one or more of the following without being physically present: (1) communication (convey or consume information), (2) labor (physically affect the environment), and (3) experience. This requires further research and development through active collaboration and government support. |
|
| SPAWAR Systems Center - Pacific | Heidi Buck | Government Owned and Operated (GOGO) |
Other
| The Battlespace Exploitation of Mixed Reality (BEMR) Lab at the Space and Naval Warfare (SPAWAR) Systems Center – Pacific (SSC-Pacific) is funded to explore and develop applications of virtual reality (VR) and augmented reality (AR), also known as mixed reality (MxR), to every aspect of Navy and Marine Corps missions and tasks. The goals of the BEMR Lab are to:
- Create a Mixed Reality Lab environment to leverage the rapidly evolving MxR technologies and explore the application of MxR technology to Naval tasks,
- Foster the implementation of MxR technology into operational environments by demonstrating those technologies and applications to program managers, Navy and Marine Corps officers, technical specialists, academia, and government program managers and officials, and
- Assess the technical requirements necessary to bring MxR capabilities to the operational, maintenance, and training tasks that would benefit substantially from them. |
|
| Tele-immersion Lab, University of California at Berkeley | Gregorij Kurillo | University |
Other
| The Tele-immersion Lab at UC Berkeley, led by Prof. Ruzena Bajcsy, has been investigating remote interaction via tele-immersion technology for the past decade. We have demonstrated the use of tele-immersive technology in several application areas, including remote dance choreography (with the University of Illinois), remote interaction with geoscientific data (with the University of California, Davis), collaborative archeology (with the University of California, Merced and the University of Tokyo, Japan), and tele-medicine applications (with the University of California Davis Medical Center and the University of the Basque Country, Spain). In the demonstration systems we have built over the years, we have investigated various aspects of interaction, networking, data compression, rendering, and more, while working with multidisciplinary groups of users. We are currently developing a novel platform for remote consultation using immersive computing and augmented reality technologies. In particular, our focus is on telemedicine: facilitating state-of-the-art interaction to achieve better patient triage and emergency care, providing more effective medical care over long distances by connecting patients with general and specialized healthcare providers, and reducing the cost of transportation as an integral part of reducing the total cost of the current healthcare system. Our expertise is in computer vision, virtual reality, human-machine interaction, and human modeling. |
|
| The Enterprise Center | Andrew Rodgers | Non-Profit |
None of the above
| We have been piloting multiple projects in the Chattanooga area that utilize our advanced broadband infrastructure to stream high-resolution (UltraHD), real-time, interactive access to scientific instruments. Our goal is to further the development of these platforms to enable transformative educational experiences for our public education system, and to help further their deployment in the national advanced broadband ecosystem. We are interested in sharing what we've learned through our deployment, both technically and administratively, as well as serving as a pilot location for further innovation in this space.
Specifically, we are looking for projects that utilize the high-speed, extremely low-latency properties of our advanced broadband to deliver innovative solutions to societal issues through digital transportation. |
|
| TIPD, LLC | Lloyd LaComb | Business < 500 Employees |
Other
| TIPD is an early stage startup company that was founded in order to commercialize the extensive research and development of various holographic display systems, photonic materials and devices conducted by the Optoelectronic and Photonics research group at the University of Arizona (UA). TIPD’s operational strategy is focused on development and manufacturing of holographic, 3D and photonic systems and devices. TIPD has active research programs in holographic displays using photorefractive polymers and holographic displays based upon electro-optic (EO) and acousto-optic (AO) scanning subsystems. TIPD has also developed software capable of high speed image generation for holographic and field of light displays. |
|
| Torpedo Labs, Inc | Shvetank Jain | Business < 500 Employees |
Other
| We have built multiple successful entertainment and games businesses. We are a team of Stanford and IIT engineers who have been committed to VR and 3D graphics technologies for over a decade. Our last business was acquired by Disney for $750M. Because we operated large, distributed teams of 500+ people, we both understand and empathize with the necessity of investing in digital transportation, both for efficient communication and for optimizing energy consumption.
I have been committed to this problem for a long time. I believe in the future of energy and communication, and I have been on the boards of a leading solar startup (Aurora Solar) and an enterprise communications startup (Mattermost), both out of Stanford. Given my background in 3D technologies, I want to bring all these efforts together to build the most advanced virtual reality communication platform.
I have a dream: to enable the same quality of communication in virtual reality that we have in person. This will fundamentally change the world! |
|
| Universidad de Zaragoza | Diego Gutierrez | University |
Other
| Diego Gutierrez is a full professor at the Universidad de Zaragoza, where he leads the Graphics and Imaging Lab. He is the editor-in-chief of ACM Transactions on Applied Perception and an associate editor of four other journals, including ACM Transactions on Graphics. Our areas of interest include computer graphics, virtual reality, computational imaging, and perception. Our work has been published in top venues such as SIGGRAPH.
In computer graphics, we have carried out extensive research on the accurate simulation of human skin in real time. Our state-of-the-art separable subsurface scattering model is currently used by many game companies and has been implemented in many game and rendering engines. We have also developed a biophysically based model of skin aging and a practical model for dynamic skin appearance, which automatically produces the subtle color changes linked to the physical or emotional state of the characters.
In virtual reality and perception, our work deals with understanding how humans behave and interact in virtual environments. This is crucial for many applications, such as developing compression algorithms, designing effective cinematic VR content, or ensuring a proper level of presence in different scenarios. By analyzing the interplay between visual stimuli, head orientation, gaze direction, etc., we can predict patterns and biases in how people explore these virtual environments. |
|