
Publications and Presentations

2020

Jason Zurawski, Jennifer Schopf, Hans Addleman, “University of Wisconsin-Madison Campus-Wide Deep Dive”, LBNL Report, May 26, 2020, LBNL 2001325

Inder Monga, Chin Guok, John MacAuley, Alex Sim, Harvey Newman, Justas Balcas, Phil DeMar, Linda Winkler, Tom Lehman, Xi Yang, “Software-Defined Network for End-to-end Networked Science at the Exascale”, Future Generation Computer Systems, April 13, 2020,

Abstract

Domain science applications and workflow processes are currently forced to view the network as an opaque infrastructure into which they inject data and hope that it emerges at the destination with an acceptable Quality of Experience. There is little ability for applications to interact with the network to exchange information, negotiate performance parameters, discover expected performance metrics, or receive status/troubleshooting information in real time. The work presented here is motivated by a vision for a new smart network and smart application ecosystem that will provide a more deterministic and interactive environment for domain science workflows. The Software-Defined Network for End-to-end Networked Science at Exascale (SENSE) system includes a model-based architecture, implementation, and deployment which enables automated end-to-end network service instantiation across administrative domains. An intent-based interface allows applications to express their high-level service requirements, and an intelligent orchestrator and resource control systems allow for custom tailoring of scalability and real-time responsiveness based on individual application and infrastructure operator requirements. This allows the science applications to manage the network as a first-class schedulable resource, as is the current practice for instruments, compute, and storage systems. Deployment and experiments on production networks and testbeds have validated SENSE functions and performance. Emulation-based testing verified the scalability needed to support research and education infrastructures. Key contributions of this work include an architecture definition, reference implementation, and deployment. This provides the basis for further innovation of smart network services to accelerate scientific discovery in the era of big data, cloud computing, machine learning and artificial intelligence.
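
A rough sketch, in Python, of the intent-based interaction described above: the application states what it needs, and an orchestrator decomposes that into per-domain reservations. The ServiceIntent fields, the SenseOrchestrator class, and its submit_intent method are illustrative placeholders only, not the actual SENSE API.

    # Hypothetical intent-based request flow, loosely following the SENSE
    # description in the abstract.  Names and fields are invented for illustration.
    from dataclasses import dataclass
    from typing import List, Optional


    @dataclass
    class ServiceIntent:
        """What the application needs, not how the network should build it."""
        source_endpoint: str
        dest_endpoint: str
        bandwidth_gbps: float
        start: str                      # ISO 8601 window
        end: str
        max_latency_ms: Optional[float] = None


    class SenseOrchestrator:
        """Toy orchestrator: decomposes an intent into per-domain reservations."""

        def __init__(self, resource_managers: List):
            # One resource manager per administrative domain along the path.
            self.resource_managers = resource_managers

        def submit_intent(self, intent: ServiceIntent) -> str:
            # Ask each domain's resource manager to reserve its segment,
            # then hand back a handle the application can poll for status.
            for rm in self.resource_managers:
                rm.reserve(intent)
            domains = "+".join(rm.domain for rm in self.resource_managers)
            return f"service/{intent.source_endpoint}--{intent.dest_endpoint}/{domains}"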

Inder Monga, FABRIC: integration of bits, bytes, and xPUs, JET meeting, March 17, 2020,

Presenting NSF-funded FABRIC project to the JET community

2019

Verónica Rodríguez Tribaldos, Shan Dou, Nate Lindsey, Inder Monga, Chris Tracy, Jonathan Blair Ajo-Franklin, “Monitoring Aquifers Using Relative Seismic Velocity Changes Recorded with Fiber-optic DAS”, AGU Meeting, December 10, 2019,

Dipak Ghosal, Sambit Shukla, Alex Sim, Aditya V. Thakur, and Kesheng Wu, “A Reinforcement Learning Based Network Scheduler For Deadline-Driven Data Transfers”, 2019 IEEE Global Communications Conference, December 9, 2019,

Jason Zurawski, Jennifer Schopf, Hans Addleman, Scott Chevalier, George Robb, “Great Plains Network - Kansas State University Agronomy Application Deep Dive”, LBNL Report, November 11, 2019, LBNL 2001321

Jason Zurawski, Jennifer Schopf, Hans Addleman, Doug Southworth, Scott Chevalier, “Purdue University Application Deep Dive”, LBNL Report, November 1, 2019, LBNL 2001318

Jason Zurawski, Jennifer Schopf, Hans Addleman, Doug Southworth, “Trinity University Campus-Wide Deep Dive”, LBNL Report, November 1, 2019, LBNL 2001319

Jason Zurawski, Jennifer Schopf, Hans Addleman, Doug Southworth, “University of Cincinnati Campus-Wide Deep Dive”, LBNL Report, November 1, 2019, LBNL 2001320

Sambit Shukla, Dipak Ghosal, Kesheng Wu, Alex Sim, and Matthew Farrens, “Co-optimizing Latency and Energy for IoT services using HMP servers in Fog Clusters.”, 2019 Fourth International Conference on Fog and Mobile Edge Computing (FMEC), IEEE, August 15, 2019,

Jason Zurawski, Jennifer Schopf, Hans Addleman, Doug Southworth, “Arcadia University Bioinformatics Application Deep Dive”, LBNL Report, July 8, 2019, LBNL 2001317

Marco Ruffini, Kasandra Pillay, Chongjin Xie, Lei Shi, Dale Smith, Inder Monga, Xinsheng Wang, and Jun Shan Wey, “Connected OFCity Challenge: Addressing the Digital Divide in the Developing World”, Journal of Optical Communications and Networking, June 20, 2019, 11:354-361,

Jason Zurawski, Eli Dart, Lauren Rotman, Paul Wefel, Editors, “Nuclear Physics Network Requirements Review 2019 - Final Report”, ESnet Network Requirements Review, May 8, 2019, LBNL 2001281

Nicholas A Peters, Warren P Grice, Prem Kumar, Thomas Chapuran, Saikat Guha, Scott Hamilton, Inder Monga, Raymond Newell, Andrei Nomerotski, Don Towsley, Ben Yoo, “Quantum Networks for Open Science (QNOS) Workshop”, DOE Technical Report, April 1, 2019,

Qiming Lu, Liang Zhang, Sajith Sasidharan, Wenji Wu, Phil Demar, Chin Guok, John MacAuley, Inder Monga, Se Young Yu, Jim Hao Chen, Joe Mambretti, Jin Kim, Seo Young Noh, Xi Yang, Tom Lehman, and Gary Liu, “BigData Express: Toward Schedulable, Predictable, and High-Performance Data Transfer”, 2018 IEEE/ACM Innovating the Network for Data-Intensive Science (INDIS), Institute of Electrical and Electronics Engineers Inc., February 21, 2019, 75-84,

Jonathan B. Ajo-Franklin, Shan Dou, Nathaniel J. Lindsey, Inder Monga, Chris Tracy, Michelle Robertson, Veronica Rodriguez Tribaldos, Craig Ulrich, Barry Freifeld, Thomas Daley and Xiaoye Li, “Distributed Acoustic Sensing Using Dark Fiber for Near-Surface Characterization and Broadband Seismic Event Detection”, Nature, February 4, 2019,

Mariam Kiran and Anshuman Chhabra, “Understanding flows in high-speed scientific networks: A Netflow data study”, Future Generation Computer Systems, February 1, 2019, 94:72-79,

2018

F Alali, N Hanford, E Pouyoul, R Kettimuthu, M Kiran, B Mack-Crane, “Calibers: A bandwidth calendaring paradigm for science workflows”, Future Generation Computer Systems, December 1, 2018, 89:736-745,

Inder Monga, Chin Guok, John Macauley, Alex Sim, Harvey Newman, Justas Balcas, Phil DeMar, Linda Winkler, Xi Yang, Tom Lehman, “SDN for End-to-end Networked Science at the Exascale (SENSE)”, INDIS Workshop SC18, November 11, 2018,

The Software-defined network for End-to-end Networked Science at Exascale (SENSE) research project is building smart network services to accelerate scientific discovery in the era of ‘big data’ driven by Exascale, cloud computing, machine learning and AI. The project’s architecture, models, and demonstrated prototype define the mechanisms needed to dynamically build end-to-end virtual guaranteed networks across administrative domains, with no manual intervention. In addition, a highly intuitive ‘intent’-based interface, as defined by the project, allows applications to express their high-level service requirements, and an intelligent, scalable model-based software orchestrator converts that intent into appropriate network services, configured across multiple types of devices. The significance of these capabilities is the ability for science applications to manage the network as a first-class schedulable resource akin to instruments, compute, and storage, to enable well-defined and highly tuned complex workflows that require close coupling of resources spread across a vast geographic footprint, such as those used in science domains like high-energy physics and basic energy sciences.

M Gribaudo, M Iacono, Mariam Kiran, “A performance modeling framework for lambda architecture based applications”, Future Generation Computer Systems, November 9, 2018, 86:1032-1041,

Amel Bennaceur, Ampaeli Cano, Lilia Georgieva, Mariam Kiran, Maria Salama, Poonam Yadav, “Issues in Gender Diversity and Equality in the UK”, Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, ACM, July 13, 2018,

Inder Monga, Prabhat, “Big-Data Science: Infrastructure Impact”, Proceedings of the Indian National Science Academy, June 15, 2018,

The nature of science is changing dramatically, from a single researcher at a lab or university working with graduate students to distributed multi-researcher consortiums, across universities and research labs, tackling large scientific problems. In addition, experimentalists and theorists are collaborating with each other by designing experiments to prove the proposed theories. The ‘Big Data’ being produced by these large experiments has to be verified against simulations run on High Performance Computing (HPC) resources.

The trends above are pointing towards

  1. Geographically dispersed experiments (and associated communities) that require data being moved across multiple sites. Appropriate mechanisms and tools need to be employed to move, store and archive datasets from such experiments.

  2. Convergence of simulation (requiring High Performance Computing) and Big Data Analytics (requiring advanced on-site data management techniques) into a small number of High Performance Computing centers. Such centers are key for consolidating software and hardware infrastructure efforts, and achieving broad impact across numerous scientific domains.

The trends indicate that for modern science and scientific discovery, infrastructure support for handling both large scientific data as well as high-performance computing is extremely important. In addition, given the distributed nature of research and big-team science, it is important to build infrastructure, both hardware and software, that enables sharing across institutions, researchers, students, industry and academia. This is the only way that a nation can maximize the research capabilities of its citizens while maximizing the use of its investments in computer, storage, network and experimental infrastructure.

This chapter introduces infrastructure requirements of High-Performance Computing and Networking with examples drawn from NERSC and ESnet, two large Department of Energy facilities at Lawrence Berkeley National Laboratory, CA, USA, that exemplify some of the qualities needed for future Research & Education infrastructure.

RK Shyamasundar, Prabhat Prabhat, Vipin Chaudhary, Ashwin Gumaste, Inder Monga, Vishwas Patil, Ankur Narang, “Computing for Science, Engineering and Society: Challenges, Requirement, and Strategic Roadmap”, Proceedings of the Indian National Science Academy, June 15, 2018,

Ilya Baldin, Tilman Wolf, “The Future of CISE Distributed Research Infrastructure”, ACM SIGCOMM Computer Communication Review, May 1, 2018,

Shared research infrastructure that is globally distributed and widely accessible has been a hallmark of the networking community. We present a vision for a future mid-scale distributed research infrastructure aimed at enabling new types of discoveries. The “lessons learned” from constructing and operating the Global Environment for Network Innovations (GENI) infrastructure are the basis for our attempt to project future concepts and solutions. Our aim is to engage the community to contribute new ideas and to inform funding agencies about future research directions.

Paul Ruth, Mert Cevik, Cong Wang, Yuanjun Yao, Qiang Cao, Rubens Farias, Jeff Chase, Victor Orlikowski, Nick Buraglio, “Toward Live Inter-Domain Network Services on the ExoGENI Testbed”, 2018 IEEE INFOCOM, IEEE, April 15, 2018,

This paper introduces ExoPlex, a framework to improve the QoS of live (real) experiments on the ExoGENI federated testbed. The authors make the case for implementing the abstraction of network service providers (NSPs) as a way of having experimenters specify the performance characteristics they expect from the platform (at the testbed level). An example tenant using this version of ExoGENI enhanced with NSP capabilities is presented, and experimental results show the effectiveness of the approach.

Nick Buraglio, Automation, Orchestration, prototyping, and strategy, Great Plains Network Webinar Series Presentation, March 9, 2018,

Presentation on network automation and orchestration with focus on getting started and options available.

Ralph Koning, Nick Buraglio, Cees de Laat, Paola Grosso, “CoreFlow: Enriching Bro security events using network traffic monitoring data”, Future Generation Comp. Syst., February 1, 2018, 79,

Attacks against network infrastructures can be detected by Intrusion Detection Systems (IDS). Still, reaction to these events is often limited by the lack of larger contextual information about the circumstances in which they occurred. In this paper we present CoreFlow, a framework for the correlation and enrichment of IDS data with network flow information. CoreFlow ingests data from the Bro IDS and augments this with flow data from the devices in the network. By doing this, network providers are able to reconstruct more precisely the route followed by the malicious flows. This enables them to devise tailored countermeasures, e.g. blocking close to the source of the attack. We tested the initial CoreFlow prototype in the ESnet network, using inputs from 3 Bro systems and more than 50 routers.
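
A minimal sketch, in Python, of the correlation-and-enrichment idea described in the abstract: join each IDS alert with the router flow records that share its 5-tuple and order them by observation time to approximate the path taken. This is illustrative only, not the CoreFlow implementation, and the record field names are assumptions.

    # Illustrative enrichment of IDS events with per-router flow records.
    # Record layouts ("src_ip", "router", "timestamp", ...) are invented.
    from collections import defaultdict


    def five_tuple(rec):
        return (rec["src_ip"], rec["src_port"], rec["dst_ip"], rec["dst_port"], rec["proto"])


    def enrich(ids_events, flow_records):
        """Attach to each IDS event the flow records sharing its 5-tuple."""
        flows_by_tuple = defaultdict(list)
        for fr in flow_records:
            flows_by_tuple[five_tuple(fr)].append(fr)

        enriched = []
        for ev in ids_events:
            matches = sorted(flows_by_tuple.get(five_tuple(ev), []),
                             key=lambda fr: fr["timestamp"])
            # The ordered router list approximates the route the flow followed,
            # which is what lets operators block close to the source.
            enriched.append({**ev, "observed_at_routers": [m["router"] for m in matches]})
        return enriched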

2017

M. Gribaudo, M. Iacono, M. Kiran, “A performance modeling framework for lambda architecture based applications”, Future Generation Computer Systems, August 30, 2017,

M Kiran, E Pouyoul, A Mercian, B Tierney, C Guok, I Monga, “Enabling intent to configure scientific networks for high performance demands”, Future Generation Computer Systems, August 2, 2017,

S Khan, T Yairi, M Kiran, “Towards a Cloud-based Machine Learning for Health Monitoring and Fault Diagnosis”, Asia Pacific Conference of the Prognostics and Health Management Society 2017, August 1, 2017,

A Mercian, M Kiran, E Pouyoul, B Tierney, I Monga, “INDIRA: ‘Application Intent’ network assistant to configure SDN-based high performance scientific networks”, Optical Fiber Communication Conference, July 1, 2017,

Kim Roberts, Qunbi Zhuge, Inder Monga, Sebastien Gareau, and Charles Laperle, “Beyond 100 Gb/s: Capacity, Flexibility, and Network Optimization”, Journal of Optical Communication Network, April 1, 2017, Volume 9,

In this paper, we discuss building blocks that enable the exploitation of optical capacities beyond 100 Gb/s. Optical networks will benefit from more flexibility and agility in their network elements, especially from coherent transceivers. To achieve capacities of 400 Gb/s and more, coherent transceivers will operate at higher symbol rates. This will be made possible with higher bandwidth components using new electro-optic technologies implemented with indium phosphide and silicon photonics. Digital signal processing will benefit from new algorithms. Multi-dimensional modulation, of which some formats are already in existence in current flexible coherent transceivers, will provide improved tolerance to noise and fiber nonlinearities. Constellation shaping will further improve these tolerances while allowing a finer granularity in the selection of capacity. Frequency-division multiplexing will also provide improved tolerance to the nonlinear characteristics of fibers. Algorithms with reduced computation complexity will allow the implementation, at speed, of direct pre-compensation of nonlinear propagation effects. Advancement in forward error correction will shrink the performance gap with Shannon's limit. At the network control and management level, new tools are being developed to achieve a more efficient utilization of networks. This will also allow for network virtualization, orchestration, and management. Finally, FlexEthernet and FlexOTN will be put in place to allow network operators to optimize capacity in their optical transport networks without manual changes to the client hardware.

L. Zuo, M. Zhu, C. Wu and J. Zurawski, “Fault-tolerant Bandwidth Reservation Strategies for Data Transfers in High-performance Networks”, Computer Networks, February 1, 2017, 113:1-16,

Ashwin Gumaste, Tamal Das, Kandarp Khandwala, and Inder Monga, “Network Hardware Virtualization for Application Provisioning in Core Networks”, IEEE Communications Magazine, February 1, 2017,

Service providers and vendors are moving toward a network virtualized core, whereby multiple applications would be treated on their own merit in programmable hardware. Such a network would have the advantage of being customized for user requirements and allow provisioning of next generation services that are built specifically to meet user needs. In this article, we articulate the impact of network virtualization on networks that provide customized services and how a provider’s business can grow with network virtualization. We outline a decision map that allows mapping of applications with technology that is supported in network-virtualization-oriented equipment. Analogies to the world of virtual machines and generic virtualization show that hardware supporting network virtualization will facilitate new customer needs while optimizing the provider network from the cost and performance perspectives. A key conclusion of the article is that growth would yield sizable revenue when providers plan ahead in terms of supporting network-virtualization-oriented technology in their networks. To be precise, providers have to incorporate into their growth plans network elements capable of new service deployments while protecting network neutrality. A simulation study validates our NV-induced model.

Mariam Kiran, X-Machines for Agent-Based Modeling: FLAME Perspectives, (January 30, 2017)

Tatyana Eftonova, Mariam Kiran, Mike Stannett, “Long-term Macroeconomic Dynamics of Competition in the Russian Economy using Agent-based Modelling”, International Journal of System Dynamics Applications (IJSDA) 6 (1), 1-20, 2017, January 1, 2017,

2016

M Usman, A Iqbal, M Kiran, “A Bandwidth Friendly Architecture for Cloud Gaming”, 31st International Conference on Information Networking (ICOIN 2017), December 1, 2016,

M Kiran, E Pouyoul, A Mercian, B Tierney, C Guok, I Monga, “Enabling Intent to Configure Scientific Networks for High Performance Demands”, 3rd International Workshop on Innovating the Network for Data Intensive Science (INDIS) 2016, SC16., November 10, 2016,

S. Stepanov, O. Makarov, M. Hilgart, S.B. Pothineni, J. Zurawski, J.L. Smith, R.F. Fischetti, “Integration of Fast Detectors into Beamline Controls at the GM/CA Macromolecular Crystallography Beamlines at the Advanced Photon Source”, The 11th New Opportunities for Better User Group Software (NOBUGS) Conference, Copenhagen Denmark, October 1, 2016,

Brian Tierney, Nathan Hanford, Recent Linux TCP Updates, and how to tune your 100G host, Internet2 Technology Exchange, September 27, 2016,

M. O’Connor, Y. Hines, Amazon Web Services Pilot Report, ESnet Report, September 2016,

This report summarizes an effort called "The ESnet Amazon Web Services (AWS) pilot," which was implemented to determine whether the AWS “Direct Connect” (“DX”) service provides advantages to ESnet customers above and beyond those of ESnet's standard Amazon connections at public Internet exchange points.

B Mohammed, M Kiran, IU Awan, KM Maiyama, “Optimising Fault Tolerance in Real-Time Cloud Computing IaaS Environment”, Future Internet of Things and Cloud (FiCloud), 2016 IEEE 4th International, 2016, September 15, 2016,

S. Stepanov, O. Makarov, M. Hilgart, S.B. Pothineni, J. Zurawski, J.L. Smith, R.F. Fischetti, “Integration of Fast Detectors into Beamline Controls at GM/CA@APS: Pilatus3 6M and Eiger 16M”, 12th International Conference on Biology and Synchrotron Radiation (BSR-16), Palo Alto CA, August 1, 2016,

Mariam Kiran, Anthony Simons, “Testing Software Services in Cloud Ecosystems”, International Journal of Cloud Applications and Computing (IJCAC) 6 (1), 42-58 2016, July 1, 2016,

Chevalier, S., Schopf, J. M., Miller, K., Zurawski, J., “Testing the Feasibility of a Low-Cost Network Performance Measurement Infrastructure”, July 1, 2016, LBNL 1005797

Today's science collaborations depend on reliable, high performance networks, but monitoring the end-to-end performance of a network can be costly and difficult. The most accurate approaches involve using measurement equipment in many locations, which can be both expensive and difficult to manage due to immobile or complicated assets.

The perfSONAR framework facilitates network measurement, making management of the tests more reasonable. Traditional deployments have used over-provisioned servers, which can be expensive to deploy and maintain. As scientific network uses proliferate, there is a desire to instrument more facets of a network to better understand trends.

This work explores low cost alternatives to assist with network measurement. Benefits include the ability to deploy more resources quickly, and reduced capital and operating expenditures. We present candidate platforms and a testing scenario that evaluated the relative merits of four types of small form factor equipment to deliver accurate performance measurements.
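
For a sense of the kind of test such low-cost nodes run, the sketch below drives a single throughput measurement with the standard iperf3 client and reads its JSON output. This is a generic example rather than the perfSONAR toolkit's own scheduling machinery; iperf3 is assumed to be installed, and the target hostname is a placeholder.

    # Minimal throughput probe using the iperf3 CLI; "target.example.net"
    # is a placeholder hostname, not a real measurement endpoint.
    import json
    import subprocess


    def run_throughput_test(server="target.example.net", seconds=10):
        result = subprocess.run(
            ["iperf3", "-c", server, "-t", str(seconds), "-J"],  # -J: JSON report
            capture_output=True, text=True, check=True,
        )
        report = json.loads(result.stdout)
        bps = report["end"]["sum_received"]["bits_per_second"]
        return bps / 1e9  # Gbit/s


    if __name__ == "__main__":
        print(f"Measured throughput: {run_throughput_test():.2f} Gbit/s")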

Alberto Gonzalez, Jason Leigh, Sean Peisert, Brian Tierney, Andrew Lee, and Jennifer M. Schopf, “NETSAGE: Open Privacy-Aware Network Measurement, Analysis and Visualization Service”, TNC16 Networking Conference, June 15, 2016,

Baris Aksanli, Jagannath Venkatesh, Inder Monga, Tajana Rosing, “Renewable Energy Prediction for Improved Utilization and Efficiency in Data Centers and Backbone Networks”, ( May 30, 2016)

The book at hand gives an overview of the state of the art research in Computational Sustainability as well as case studies of different application scenarios. This covers topics such as renewable energy supply, energy storage and e-mobility, efficiency in data centers and networks, sustainable food and water supply, sustainable health, industrial production and quality, etc. The book describes computational methods and possible application scenarios.

Anir Mandal, Paul Ruth, Ilya Baldin, Dariusz Krol, Gideon Juve, Rajiv Mayani, Rafael Ferreira da Silva, Ewa Deelman, Jeremy Meredith, Jeffrey Vetter, Vickie Lynch, Ben Mayer, James Wynne III, Mark Blanco, Chris Carothers, Justin LaPre, Brian Tierney, “Toward an End-to-end Framework for Modeling, Monitoring and Anomaly Detection for Scientific Workflows”, Workshop on Large-Scale Parallel Processing (LSPP), in conjuction with 30th IEEE International Parallel & Distributed Processing Symposium (IPDPS), May 23, 2016,

Sean Peisert, William Barnett, Eli Dart, James Cuff, Robert L Grossman, Edward Balas, Ari Berman, Anurag Shankar, Brian Tierney, “The Medical Science DMZ”, Journal of the American Medical Informatics Association, May 2, 2016,

Nick Buraglio, SDN Best Practices, Great Plains Network Webinar Series Presentation, April 8, 2016,

Presentation of best practices for production SDN deployments, based on experience deploying SDN networks using a variety of technologies and techniques.

Nick Buraglio, SDN: Theory vs. Practice, Invited talk, CODASPY 2016 SDN/NFV workshop, March 11, 2016,

Discussion of research-oriented, software-based networking and the differences between it and real-world, production SDN, presented at the CODASPY SDN/NFV conference workshop.

Nathan Hanford, Vishal Ahuja, Matthew Farrens, Dipak Ghosal, Mehmet Balman, Eric Pouyoul, Brian Tierney, “Improving network performance on multicore systems: Impact of core affinities on high throughput flows”, Future Generation Computer Systems, Vol 56., March 1, 2016,

Inder Monga, Chin Guok, SDN for End-to-End Networking at Exascale, February 16, 2016,

Traditionally, WAN and campus networks and services have evolved independently from each other. For example, MPLS traffic-engineered and VPN technologies have been targeted towards the WAN, while the LAN (or last mile) implementations have not incorporated that functionality. These restrictions have resulted in dissonance in services offered in the WAN vs. the LAN. While OSCARS/NSI virtual circuits are widely deployed in the WAN, they typically only run from site boundary to site boundary, and require painful phone calls, manual configuration, and resource allocation decisions for last mile extension. Such inconsistencies in campus infrastructures, all the way from the campus edge to the data-transfer hosts, often lead to unpredictable application performance. New architectures such as the Science DMZ have been successful in simplifying the variance, but the Science DMZ is not designed or able to solve the end-to-end orchestration problem. With the advent of SDN, the R&E community has an opportunity to genuinely orchestrate end-to-end services - and not just from a network perspective, but also from an end-host perspective. In addition, with SDN, the opportunity exists to create a broader set of custom intelligent services that are targeted towards specific science application use-cases. This proposal describes an advanced deployment of SDN equipment and creation of a comprehensive SDN software platform that will help bring together the missing end-to-end story.

Inder Monga, Plenary Keynote - "Design Patterns: Scaling up eResearch", Web Site, February 9, 2016,

2015

Mariam Kiran, “What is Modelling and Simulation: An introduction”, Encyclopedia of Computer Science and Technology, ( December 24, 2015)

M Kiran, Invited Talk Software Engineering challenges in Smart Cities, Optics Group Arizona University, Oct 2015, December 1, 2015,

M Kiran, M Stannett, “Bitcoin risk analysis, NEMODE Policy Paper”, December 1, 2015,

Vincenzo Capone, Mary Hester, Florence Hudson, Lauren Rotman, “Connecting HPC and High Performance Networks for Scientists and Researchers”, November 2015,

Nathan Hanford, Brian Tierney, Dipak Ghosal, “Optimizing Data Transfer Nodes using Packet Pacing”, Second Workshop on Innovating the Network for Data-Intensive Science, November 16, 2015,

Paper available from SIGHPC website as well.

M Kiran, “Multiple platforms: Issues of porting Agent-Based Simulation from Grids to Graphics cards”, Workshop on Portability Among HPC Architectures for Scientific Applications, SC15: HPC transforms, 2015, Austin Texas., November 15, 2015,

M Kiran, “Women in HPC: Changing the Face of HPC”, SC15: HPC transforms, 2015, Austin Texas, November 15, 2015,

P Yadav, M Kiran, A Bennaceur, L Georgieva, M Salama and A E Cano, “Jack of all Trades versus Master of one”, Grace Hopper 2015 Conference, November, 2015, November 1, 2015,

Mariam Kiran, Peter Murphy, Inder Monga, Jon Dugan, Sartaj Baveja, “Lambda Architecture for Cost-effective Batch and Speed Big Data processing”, First Workshop on Data-Centric Infrastructure for Big Data Science (DIBS), October 29, 2015,

This paper presents an implementation of the lambda architecture design pattern to construct a data-handling backend on Amazon EC2, providing high throughput for dense and intense data demands delivered as services while minimizing the cost of network maintenance. The paper combines ideas from database management, cost models, query management and cloud computing to present a general architecture that could be applied in any scenario where affordable online data processing of Big Datasets is needed. The results are presented with a case study of processing router sensor data from the current ESnet network as a working example of the approach. The results showcase a reduction in cost and argue the benefits of performing online analysis and anomaly detection for sensor data.
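
The lambda pattern referenced above pairs a batch layer computed over the full historical dataset with a speed layer over data that has arrived since, merging the two views at query time. The sketch below is a generic in-memory illustration of that merge in Python, not the paper's EC2 deployment; the router-counter data model is an assumption.

    # Generic lambda-architecture serving-layer merge: batch view + speed view.
    # The per-router counter model is invented for the example.
    from collections import defaultdict


    class LambdaView:
        def __init__(self):
            self.batch_view = {}                  # rebuilt periodically from all history
            self.speed_view = defaultdict(float)  # updated as new samples arrive

        def rebuild_batch(self, historical_samples):
            totals = defaultdict(float)
            for router, value in historical_samples:
                totals[router] += value
            self.batch_view = dict(totals)
            self.speed_view.clear()               # speed layer now only covers newer data

        def ingest(self, router, value):
            self.speed_view[router] += value

        def query(self, router):
            # Serving layer: merge the (stale but complete) batch view with the
            # (fresh but partial) speed view.
            return self.batch_view.get(router, 0.0) + self.speed_view.get(router, 0.0)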

Mariam Kiran, “Legal Issues Surrounding Connected Government Services: A Closer Look at G-Clouds”, Cloud Computing Technologies for Connected Government, ( October 24, 2015)

M Kiran, “A methodology for Cloud Security Risks Management”, Cloud Computing, ( October 20, 2015)

Zhenzhen Yan, Chris Tracy, Malathi Veeraraghavan, Tian Jin, Zhengyang Liu, “A Network Management System for Handling Scientific Data Flows”, Journal of Network and Systems Management, October 11, 2015,

Inder Monga, Network Operating Systems and Intent APIs for SDN Applications, Technology Exchange Conference, October 6, 2015,

Philosophy of Network Operating Systems and Intent APIs

Julian Borrill, Eli Dart, Brooklin Gore, Salman Habib, Steven T. Myers, Peter Nugent, Don Petravick, Rollin Thomas, “Improving Data Mobility & Management for International Cosmology”, CrossConnects 2015 Workshop, October 2, 2015, LBNL 1001456

M Kiran, Platform dependency and cloud use for ABM, CCS Conference, Oct 2015, October 1, 2015,

Inder Monga, ICN roadmaps for the next 2 years, 2nd ACM Conference on Information-Centric Networking (ICN 2015), October 1, 2015,

Panelists: Paul Mankiewich (Cisco), Luca Muscariello (Orange), Inder Monga (ESnet), Ignacio Solis (PARC), GQ Wang(Huawei), Jeff Burke (UCLA)

Inder Monga, Science Data and the NDN paradigm, NDN Community Meeting (NDNcomm 2015): Architecture, Applications, and Collaboration, September 28, 2015,

Eli Dart, Mary Hester, and Jason Zurawski, Editors, “Biological and Environmental Research Network Requirements Review 2015 - Final Report”, ESnet Network Requirements Review, September 18, 2015, LBNL 1004370

M Kiran, “Platform dependency and cloud use for ABM, Satellite Workshop, Computational Transparency in Modeling Complex Systems,”, Conference on Complex Systems, Arizona, USA, 2015., September 5, 2015,

Mariam Kiran, Kabiru Maiyama, Haroon Mir, Bashir Mohammad, Ashraf Al Oun, “Agent-Based Modelling as a Service on Amazon EC2: Opportunities and Challenges”, Utility and Cloud Computing (UCC), 2015 IEEE/ACM 8th International Conference on, September 1, 2015,

M Kiran, S Konur, M Burkitt, “PlatOpen Platform Dependency for Open ABM Complex Model Simulations, Satellite Workshop,”, Conference on Complex Systems, Arizona, USA, 2015., September 1, 2015,

P Yadav, M Kiran, A Bennaceur, L Georgieva, M Salama and A E Cano, “Impact of Gender Diversity and Equality Initiatives”, WomENcourage 2015 Conference, Uppsala, Sweden, October, 2015, September 1, 2015,

ANL – Linda Winkler, Kate Keahey, Caltech – Harvey Newman, Ramiro Voicu, FNAL – Phil DeMar, LBNL/ESnet – Chin Guok, John MacAuley, LBNL/NERSC – Jason Hick, UMD/MAX – Tom Lehman, Xi Yang, Alberto Jimenez, SENSE: SDN for End-to-end Networked Science at the Exascale, August 1, 2015,

Funded Project from DOE

Ranjana Addanki, Sourav Maji, Malathi Veeraraghavan, Chris Tracy, “A measurement-based study of big-data movement”, 2015 European Conference on Networks and Communications (EuCNC), July 29, 2015,

Ewa Deelman, Christopher Carothers, Anirban Mandal, Brian Tierney, Jeffrey S Vetter, Ilya Baldin, Claris Castillo, Gideon Juve, Dariusz Król, Vickie Lynch, Ben Mayer, Jeremy Meredith, Thomas Proffen, Paul Ruth, Rafael Ferreira da Silva, “PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows”, International Journal of High Performance Computing Applications, July 14, 2015,

Michael Smitasen, Brian Tierney, Evaluating Network Buffer Size requirements for Very Large Data Transfers, NANOG 64, San Francisco, June 2, 2015,

M Kiran, “Modelling Cities as a Collection of TeraSystems–Computational Challenges in Multi-Agent Approach”, Procedia Computer Science 52, 974-979, 2015, June 1, 2015,

M Kiran, G Katsaros, J Guitart, J L Prieto, “Methodology for Information Management and Data Assessment in Cloud Environments”, International Journal of Grid and High Performance Computing (IJGHPC), 6(4), 46-71, June 1, 2015,

Jason Zurawski, Bridging the Technical Gap: Science Engagement at ESnet, Great Plains Network Annual Meeting, May 28, 2015,

Nick Buraglio, Bro intrusion detection system (IDS): an overview, Enhancing CyberInfrastructure by Training and Education, May 22, 2015,

Jason Zurawski, Network Monitoring with perfSONAR, BioTeam & ESnet Webinar, May 18, 2015,

Eli Dart, The Science DMZ, BioTeam Science DMZ 101 Webinar, May 18, 2015,

Nick Buraglio, Anita Nikolich, Dale Carder, Secure Layer 3 SDX Concept (Interdomain SDN), May 14, 2015,

A concept framework for Secure Layer 3 Interdomain SDN and ISD/IXP. 

Jason Zurawski, Cybersecurity: Protecting Against Things that go “bump” in the Net, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, May 8, 2015,

Handling Data Challenges in the Capacity Crunch, Royal Society London, May 2015, May 1, 2015,

S Konur, M Kiran, M Gheorghe, M Burkitt, F Ipate, “Agent-based high-performance simulation of biological systems on the GPU”, High Performance Computing and Communications, IEEE, 2015, May 1, 2015,

Eli Dart, Mary Hester, Jason Zurawski, “Advanced Scientific Computing Research Network Requirements Review - Final Report 2015”, ESnet Network Requirements Review, April 22, 2015, LBNL 1005790

Jason Zurawski, Understanding Big Data Trends and the Key Role of the Regionals in Bridging Needs and Solutions, PNWGP Board Meeting, April 21, 2015,

Jason Zurawski, The perfSONAR Effect: Changing the Outcome of Networks by Measuring Them, 2015 KINBER Annual Conference, April 16, 2015,

Eli Dart, The Science DMZ, CDC OID/ITSO Science DMZ Workshop, April 15, 2015,

Jason Zurawski, Improving Scientific Outcomes at the APS with a Science DMZ, Globus World 2015, April 15, 2015,

Shawn McKee, Marian Babik, Simone Campana, Tony Wildish, Joel Closier, Costin Grigoras, Ilija Vukotic, Michail Salichos, Kaushik De, Vincent Garonne, Jorge Alberto Diaz Cruz, Alessandra Forti, Christopher John Walker, Duncan Rand, Alessandro De Salvo, Enrico Mazzoni, Ian Gable, Frederique Chollet, Hsin Yen Chen, Ulf Bobson Severin Tigerstedt, Guenter Duckeck, Andreas Petzold, Fernando Lopez Munoz, Josep Flix, John Shade, Michael O'Connor, Volodymyr Kotlyar, Bruno Heinrich Hoeft, Jason Zurawski, “Integrating network and transfer metrics to optimize transfer efficiency and experiment workflows”, 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015), Okinawa Japan, April 13, 2015,

Jason Zurawski, Cybersecurity: Protecting Against Things that go “bump” in the Net, Southern Partnership in Advanced Networking, April 9, 2015,

Jason Zurawski, The Science DMZ: A Network Design Pattern for Data-Intensive Science, Southern Partnership in Advanced Networking, April 8, 2015,

Jason Zurawski, Science DMZ Architecture and Security, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, April 3, 2015,

Peter Hinrich, P Grosso, Inder Monga, “Collaborative Research Using eScience Infrastructure and High Speed Networks”, Future Generation Computer Systems, April 2, 2015,

B Mohammed, M Kiran, “Analysis of Cloud Test Beds Using OpenSource Solutions,”, Future Internet of Things and Cloud (FiCloud), Rome, Italy, August, 2015, April 1, 2015,

A Al-Ou’n, M Kiran, DD Kouvatsos, “Using Agent-Based VM Placement Policy,”, Future Internet of Things and Cloud (FiCloud), Rome, Italy, August, 2015, April 1, 2015,

Joe Metzger, Jason Zurawski, ESnet's LHCONE Service, 2015 OSG All Hands Meeting, March 23, 2015,

Adrian Lara, Byrav Ramamurthy, Eric Pouyoul, Inder Monga, “WAN Virtualization and Dynamic End-to-End Bandwidth Provisioning Using SDN”, Optical Fiber Communication Conference 2015, March 22, 2015,

We evaluate a WAN-virtualization framework in terms of delay and scalability and demonstrate that adding a virtual layer between the physical topology and the end-user brings significant advantages and tolerable delays.

Nick Buraglio, IPv6 Status; Operating production IPv6 networks, March 22, 2015,

IPv6 Status update and primer on operating production IPv6 networks as of 3/2015

Jason Zurawski, perfSONAR and Network Monitoring, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, March 13, 2015,

Jason Zurawski, Understanding Big Data Trends and the Key Role of the Regionals in Bridging Needs and Solutions, 2015 Quilt Winter Meeting, February 11, 2015,

Jason Zurawski, Wagging the Dog: Determining Network Requirements to Drive Modern Network Design, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, February 6, 2015,

Jason Zurawski, perfSONAR at 10 Years: Cleaning Networks & Disrupting Operation, perfSONAR Focused Technical Workshop, January 22, 2015,

Science Engagement: A Non-Technical Approach to the Technical Divide, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, January 16, 2015,

The Science DMZ and the CIO: Data Intensive Science and the Enterprise, RMCMOA Workshop, January 13, 2015,

2014

Jason Zurawski, The Science DMZ: A Network Design Pattern for Data-Intensive Science, New Mexico Technology in Education (NMTIE) Cyber Infrastructure Day, November 19, 2014,

Nathan Hanford, Vishal Ahuja, Matthew Farrens, Dipak Ghosal, Mehmet Balman, Eric Pouyoul, Brian Tierney, “Analysis of the effect of core affinity on high-throughput flows”, Proceedings of the Fourth International Workshop on Network-Aware Data Management (NDM '14), November 16, 2014,

Karel van der Veldt, Inder Monga, Jon Dugan, Cees de Laat, Paola Grosso, “Carbon-aware path provisioning for NRENs”, International Green Computing Conference, November 3, 2014,

 

National Research and Education Networks (NRENs) are becoming keener in providing information on the energy consumption of their equipment. However there are only few NRENs trying to use the available information to reduce power consumption and/or carbon footprint. We set out to study the impact that deploying energy-aware networking devices may have in terms of CO2 emissions, taking the ESnet network as use case. We defined a model that can be used to select paths that lead to a lower impact on the CO2 footprint of the network. We implemented a simulation of the ESnet network using our model to investigate the CO2 footprint under different traffic conditions. Our results suggest that NRENs such as ESnet could reduce their network’s environmental impact if they would deploy energy- aware hardware combined with paths setup tailored to reduction of carbon footprint. This could be achieved by modification of the current path provisioning systems used in the NREN community. 
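
A minimal sketch of the path-selection idea, assuming each link carries an estimated CO2 weight (for instance derived from device power draw and the local grid's carbon intensity): a shortest-path search over those weights yields the lowest-footprint route. The topology and weights below are invented, and this is not the paper's model.

    # Toy carbon-aware path selection: Dijkstra over per-link CO2 weights.
    import heapq


    def lowest_carbon_path(graph, src, dst):
        """graph: {node: [(neighbor, co2_cost), ...]}; returns (total_co2, path)."""
        queue = [(0.0, src, [src])]
        visited = set()
        while queue:
            co2, node, path = heapq.heappop(queue)
            if node == dst:
                return co2, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, cost in graph.get(node, []):
                if neighbor not in visited:
                    heapq.heappush(queue, (co2 + cost, neighbor, path + [neighbor]))
        return float("inf"), []


    # Invented example topology with per-link CO2 weights.
    topology = {
        "A": [("B", 4.0), ("C", 6.5)],
        "B": [("D", 3.0)],
        "C": [("D", 1.5)],
        "D": [("E", 2.0)],
    }
    print(lowest_carbon_path(topology, "A", "E"))   # -> (9.0, ['A', 'B', 'D', 'E'])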

 

Mariam Kiran, Concerns for how software is distributed through the Cloud, Consultation on Cloud Computing, EU Digital Agenda for Europe, 2014, November 1, 2014,

EC consultation research directions for future European research calls 2015-2016, November 1, 2014,

EPSRC Grand Engineering Challenges, EPSRC meeting in defining the engineering challenges for future UK research, November 2014, November 1, 2014,

Nick Buraglio, Anita Nikolich, Dale Carder, Securing the SDN WAN, October 30, 2014,

SDN has been successfully implemented by large companies and ISPs within their own data centers. However, the focus has remained on intradomain use cases with controllers under the purview of the same authority. Interdomain SDN promises more fine-grained control of data flows between SDN networks but also presents the greater challenges of trust, authentication and policy control between them. We propose a secure method to peer SDN networks and a test implementation.

Michael Smitasin, Brian Tierney, Switch Buffers Experiments: How much buffer do you need to support 10G flows?, 2014 Technology Exchange, October 29, 2014,

Nick Buraglio, Vincent Stoffer, Adam Slagell, Jim Eyrich, Scott Campbell, Von Welch, Securing the Science DMZ: a discussion, October 28, 2014,

The Science DMZ model is a widely deployed and accepted architecture allowing for movement and sharing of large-scale data sets between facilities, resources, or institutions. In order to help assure integrity of the resources served by the science DMZ, a different approach should be taken regarding necessary resources, visibility as well as perimeter and host security. Experienced panelists discuss common techniques, best practices, typical caveats as well as what to expect (and not expect) from a network perimeter that is purpose built to move science data.

Jason Zurawski, Science Engagement: A Non-Technical Approach to the Technical Divide, Cyber Summit 2014: Crowdsourcing Innovation, September 25, 2014,

Jason Zurawski, Mary Hester, Of Mice and Elephants: Supporting Research with the Science DMZ and Software Defined Networking, Cyber Summit 2014: Crowdsourcing Innovation, September 24, 2014,

Eli Dart, Mary Hester, Jason Zurawski, “Basic Energy Sciences Network Requirements Review - Final Report 2014”, ESnet Network Requirements Review, September 2014, LBNL 6998E

M Kiran, A Friesen, A J H Simons and W K R Schwach, “Model-based Testing in Cloud Brokerage Scenarios,”, Proc. 1st. Int. Workshop on Cloud Service Brokerage. Service-Oriented Computing, ICSOC 2013 Workshops, LNCS 8377, 2014, September 1, 2014,

Mariam Kiran, Anthony JH Simons, “Model-Based Testing for Composite Web Services in Cloud Brokerage Scenarios”, Advances in Service-Oriented and Cloud Computing, ESOCC, 2014, September 1, 2014,

Henrique Rodriguez, Inder Monga, Abhinava Sadasivarao, Sharfuddin Sayed, Chin Guok, Eric Pouyoul, Chris Liou, Tajana Rosing, “Traffic Optimization in Multi-Layered WANs using SDN”, 22nd Annual Symposium on High-Performance Interconnects, Best Student Paper Award, August 27, 2014,

Wide area networks (WAN) forward traffic through a mix of packet and optical data planes, composed of a variety of devices from different vendors. Multiple forwarding technologies and encapsulation methods are used for each data plane (e.g. IP, MPLS, ATM, SONET, Wavelength Switching). Despite the standards defined, the control planes of these devices are usually not interoperable, and different technologies are used to manage each forwarding segment independently (e.g. OpenFlow, TL-1, GMPLS). The result is lack of coordination between layers and inefficient resource usage. In this paper we discuss the design and implementation of a system that uses unmodified OpenFlow to optimize network utilization across layers, enabling practical bandwidth virtualization. We discuss strategies for scalable traffic monitoring and for minimizing losses on route updates across layers. We explore two use cases that benefit from multi-layer bandwidth on demand provisioning. A prototype of the system was built using a traditional circuit reservation application and an unmodified SDN controller, and its evaluation was performed on a multi-vendor testbed.
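
As a loose illustration of the monitor-and-rebalance loop such a system might run (not the paper's implementation, and not a real OpenFlow controller API), the sketch below inspects per-link utilization and proposes moving the largest flow off any congested link onto an alternate that still has headroom.

    # Conceptual monitor/rebalance step for multi-layer traffic optimization.
    # The data structures stand in for controller state; no real OpenFlow calls.
    THRESHOLD = 0.8  # rebalance when a link exceeds 80% utilization


    def rebalance(links, flows, alternate_link_for):
        """links: {link_id: (used_gbps, capacity_gbps)}
        flows: {flow_id: (link_id, rate_gbps)}
        alternate_link_for: callable(flow_id) -> link_id or None
        Returns a list of (flow_id, new_link_id) reroute decisions."""
        decisions = []
        for link_id, (used, capacity) in links.items():
            if used / capacity <= THRESHOLD:
                continue
            # Candidate flows currently traversing the congested link.
            on_link = [(rate, fid) for fid, (lid, rate) in flows.items() if lid == link_id]
            if not on_link:
                continue
            rate, flow_id = max(on_link)          # move the heaviest flow first
            alt = alternate_link_for(flow_id)
            if alt is None:
                continue
            alt_used, alt_cap = links[alt]
            if (alt_used + rate) / alt_cap < THRESHOLD:
                decisions.append((flow_id, alt))  # safe to shift without congesting alt
        return decisions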

http://blog.infinera.com/2014/09/05/henrique-rodrigues-wins-best-student-paper-at-ieee-hot-interconnects-for-infinerabrocadeesnet-multi-layer-sdn-demo/

http://esnetupdates.wordpress.com/2014/09/05/esnet-student-assistant-henrique-rodrigues-wins-best-student-paper-award-at-hot-interconnects/

 

 

Jason Zurawski, FAIL-transfer: Removing the Mystery of Network Performance from Scientific Data Movement, XSEDE Campus Champions Webinar, August 20, 2014,

 

Securing the Science DMZ: Best Practices for Securing an Open Perimeter Network, BroCon 2014,

Best practices for securing an open perimeter network or Science DMZ, presented at BroCon 2014. Slides and video are available.

Eli Dart, Mary Hester, Jason Zurawski, “Fusion Energy Sciences Network Requirements Review - Final Report 2014”, ESnet Network Requirements Review, August 2014, LBNL 6975E

Nick Buraglio, Securing the Science DMZ, June 14, 2014,

The Science DMZ model is a widely deployed and accepted architecture allowing for movement and sharing of large-scale data sets between facilities, resources, or institutions. In order to help assure integrity of the resources served by the Science DMZ, a different approach should be taken regarding necessary resources, visibility, and perimeter and host security. Based on proven and existing production techniques and deployment strategies, we provide an operational map and high-level functional framework for securing a Science DMZ utilizing a “defense in depth” strategy including log aggregation, effective IDS filtering and management techniques, black hole routing, flow data and traffic baselining.

Nick Buraglio, Real world IPv6 deployments, June 9, 2014,

Presentation for Westnet conference on Real world IPv6 deployments, lessons learned and expectations.

B Mohammed, M Kiran, “Experimental Report on Setting up a Cloud Computing Environment at the University”, arXiv preprint arXiv:1412.4582 1 2014, June 1, 2014,

K. Djemame, B Barnitzke, M Corrales, M Kiran, M Jiang, D Armstrong, N Forgo, I Nwankwo, “Legal issues in clouds: towards a risk inventory”, Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 371, 1983, The Royal Society, June 1, 2014,

Jason Zurawski, A Brief Overview of the Science DMZ, Open Science Grid Campus Grids Webinar, May 23, 2014,

Malathi Veeraraghavan, Inder Monga, “Broadening the scope of optical circuit networks”, International Conference On Optical Network Design and Modeling, Stockholm, Sweden, May 22, 2014,

 

Advances in optical communications and switching technologies are enabling energy-efficient, flexible, higher-utilization network operations. To take full advantage of these capabilities, the scope of optical circuit networks can be increased in both the vertical and horizontal directions. In the vertical direction, some of the existing Internet applications, transport-layer protocols, and application-programming interfaces need to be redesigned and new ones invented to leverage the high-bandwidth, low-latency capabilities of optical circuit networks. In the horizontal direction, inter-domain control and management-protocols are required to create a global-scale interconnection of optical circuit-switched networks.

 

Jason Zurawski, Brian Tierney, Mary Hester, The Role of End-user Engagement for Scientific Networking, TERENA Networking Conference (TNC), May 20, 2014,

Jason Zurawski, Brian Tierney, An Overview in Emerging (and not) Networking Technologies, TERENA Networking Conference (TNC), May 19, 2014,

Jason Zurawski, Essentials of perfSONAR, NSF CC-NIE PI Workshop, April 30, 2014,

Jason Zurawski, Fundamentals of Data Movement Hardware, NSF CC-NIE PI Workshop, April 30, 2014,

Jason Zurawski, The perfSONAR Project at 10 Years: Status and Trajectory, GN3 (GÉANT) NA3, Task 2 - Campus Network Monitoring and Security Workshop, April 25, 2014,

Jason Zurawski, Network and Host Design to Facilitate High Performance Data Transfer, Globus World 2014, April 15, 2014,

Jason Zurawski, Brian Tierney, ESnet perfSONAR Update, 2014 Winter ESnet Site Coordinators Committee (ESCC) Meeting, February 25, 2014,

Jason Zurawski, Security and the perfSONAR Toolkit, Second NSF Workshop on perfSONAR based Multi-domain Network Performance Measurement and Monitoring (pSW 2014), February 21, 2014,

Overview of recent security breaches and practices for the perfSONAR Toolkit. 

2013

Eli Dart, Lauren Rotman, Brian Tierney, Mary Hester, and Jason Zurawski, “The Science DMZ: A Network Design Pattern for Data-Intensive Science”, SC13: The International Conference for High Performance Computing, Networking, Storage and Analysis, Best Paper Nominee. Denver CO, USA, ACM. DOI:10.1145/2503210.2503245, November 19, 2013, LBNL 6366E.

The ever-increasing scale of scientific data has become a significant challenge for researchers that rely on networks to interact with remote computing systems and transfer results to collaborators worldwide. Despite the availability of high-capacity connections, scientists struggle with inadequate cyberinfrastructure that cripples data transfer performance, and impedes scientific progress. The Science DMZ paradigm comprises a proven set of network design patterns that collectively address these problems for scientists. We explain the Science DMZ model, including network architecture, system configuration, cybersecurity, and performance tools, that create an optimized network environment for science. We describe use cases from universities, supercomputing centers and research laboratories, highlighting the effectiveness of the Science DMZ model in diverse operational settings. In all, the Science DMZ model is a solid platform that supports any science workflow, and flexibly accommodates emerging network technologies. As a result, the Science DMZ vastly improves collaboration, accelerating scientific discovery.

 

Ezra Kissel, Martin Swany, Brian Tierney and Eric Pouyoul, “Efficient Wide Area Data Transfer Protocols for 100 Gbps Networks and Beyond”, The 3rd International Workshop on Network-aware Data Management, in conjunction with SC'13, November 17, 2013,

Nathan Hanford, Vishal Ahuja, Mehmet Balman, Matthew Farrens, Dipak Ghosal, Eric Pouyoul and Brian Tierney, “Characterizing the Impact of End-System Affinities On the End-to-End Performance of High-Speed Flows”, The 3rd International Workshop on Network-aware Data Management, in conjunction with SC'13, November 17, 2013,

Z. Yan, M. Veeraraghavan, C. Tracy, C. Guok, “On How to Provision Virtual Circuits for Network-Redirected Large-Sized, High-Rate Flows”, International Journal on Advances in Internet Technology, vol. 6, no. 3 & 4, 2013, November 1, 2013,

Campana S., Bonacorsi D., Brown A., Capone E., De Girolamo D., Fernandez Casani A., Flix Molina J., Forti A., Gable I., Gutsche O., Hesnaux A., Liu L., Lopez Munoz L., Magini N., McKee S., Mohammad K., Rand D., Reale M., Roiser S., Zielinski M., and Zurawski J., “Deployment of a WLCG network monitoring infrastructure based on the perfSONAR-PS technology”, 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2013), October 2013,

Jason Zurawski, Sowmya Balasubramanian, Aaron Brown, Ezra Kissel, Andrew Lake, Martin Swany, Brian Tierney, Matt Zekauskas, “perfSONAR: On-board Diagnostics for Big Data”, 1st Workshop on Big Data and Science: Infrastructure and Services Co-located with IEEE International Conference on Big Data 2013 (IEEE BigData 2013), October 6, 2013,

Jason Zurawski, The Science DMZ - Architecture, Monitoring Performance, and Constructing a DTN, Operating Innovative Networks (OIN), October 3, 2013,

U Khan, M Oriol, M Kiran, “Threat methodology for securing scalable video in the Cloud”, Internet Technology and Secured Transactions (ICITST), 2013 8th Int. Conf., Pages 428-436, IEEE, 2013, September 1, 2013,

Eli Dart, Mary Hester, Jason Zurawski, Editors, “High Energy Physics and Nuclear Physics Network Requirements - Final Report”, ESnet Network Requirements Workshop, August 2013, LBNL 6642E

Abhinava Sadasivarao, Sharfuddin Syed, Chris Liou, Ping Pan, Andrew Lake, Chin Guok, Inder Monga, “Open Transport Switch - A Software Defined Networking Architecture for Transport Networks”, August 17, 2013,

 

There have been a lot of proposals to unify the control and management of packet and circuit networks but none have been deployed widely. In this paper, we propose a simple programmable architecture that abstracts a core transport node into a programmable virtual switch, that meshes well with the software-defined network paradigm while leveraging the OpenFlow protocol for control. A demonstration use-case of an OpenFlow-enabled optical virtual switch implementation managing a small optical transport network for big-data applications is described. With appropriate extensions to OpenFlow, we discuss how the programmability and flexibility SDN brings to packet-optical backbone networks will be substantial in solving some of the complex multi-vendor, multi-layer, multi-domain issues service providers face today.

 

Abhinava Sadasivarao, Sharfuddin Syed, Ping Pan, Chris Liou, Andy Lake, Chin Guok, Inder Monga, Open Transport Switch: A Software Defined Networking Architecture for Transport Networks, Workshop, August 16, 2013,

Presentation at HotSDN Workshop as part of SIGCOMM 2013

Jason Zurawski, Kathy Benninger, Network Performance Tutorial featuring perfSONAR, XSEDE13: Gateway to Discovery, July 22, 2013,

Jason Zurawski, A Completely Serious Overview of Network Performance for Scientific Networking, Focused Technical Workshop: Network Issues for Life Sciences Research, July 18, 2013,

Jason Zurawski, Site Performance Measurement & Monitoring Best Practices, 2013 Summer ESnet Site Coordinators Committee (ESCC) Meeting, July 16, 2013,

Baris Aksanli, Jagannathan Venkatesh, Tajana Rosing, Inder Monga, “A Comprehensive Approach to Reduce the Energy Cost of Network of Datacenters”, International Symposium on Computers and Communications, Best Student Paper award, July 7, 2013,

Best Student Paper

Several studies have proposed job migration over the wide area network (WAN) to reduce the energy of networks of datacenters by taking advantage of different electricity prices and load demands. Each study focuses on only a small subset of network parameters and thus their results may have large errors. For example,  datacenters usually have long-term power contracts instead of paying market prices. However, previous work neglects these contracts, thus overestimating the energy savings by 2.3x. We present a comprehensive approach to minimize the energy cost of networks of datacenters by modeling performance of the workloads, power contracts, local renewable energy sources, different routing options for WAN and future router technologies. Our method can reduce the energy cost of datacenters by up to 28%, while reducing the error in the energy cost estimation by 2.6x.

Jason Zurawski, Things That Go Bump in the Net: Implementing and Securing a Scientific Network, SSERCA / FLR Summit, June 14, 2013,

William E. Johnston, Eli Dart, Michael Ernst, Brian Tierney, “Enabling high throughput in widely distributed data management and analysis systems: Lessons from the LHC”, TERENA Networking Conference, June 3, 2013,

M Holcombe, S Chin, S Cincotti, M Raberto, A Teglio, S Coakley, C Deissenberg, S vander Hoog, C Greenough, H Dawid, M Neugart, S Gemkow, P Harting, M Kiran, D Worth, “Large-scale modelling of economic systems”, Complex Systems 22 (2) 8 2013, June 1, 2013,

Jason Zurawski, Things That Go Bump in the Net: Implementing and Securing a Scientific Network, Great Plains Network Annual Meeting, May 29, 2013,

Jason Zurawski, Matt Lessins, Things That Go Bump in the Net: Implementing and Securing a Scientific Network, Merit Member Conference 2013, May 15, 2013,

Lauren Rotman, Jason Zurawski, Building User Outreach Strategies: Challenges & Best Practices, Internet2 Annual Meeting, April 22, 2013,

Z. Yan, M. Veeraraghavan, C. Tracy, and C. Guok, “On how to Provision Quality of Service (QoS) for Large Dataset Transfers”, Proceedings of the Sixth International Conference on Communication Theory, Reliability, and Quality of Service, April 21, 2013,

Michael Sinatra, IPv6 Deployment Panel Discussion, Department of Energy Information Managers’ Conference, April 2013,

Jason Zurawski, Network Tools Tutorial, Internet2 Annual Meeting, April 11, 2013,

Bill Johnston, Addressing the Problem of Data Mobility for Data-Intensive Science: Lessons Learned from the data analysis and data management systems of the LHC, The Third International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering, March 2013,

Jason Zurawski, Networking Potpourri, OSG All Hands Meeting, March 11, 2013,

Jason Zurawski, Debugging Network Performance With perfSONAR, eduPERT Performance U! Winter school, March 6, 2013,

Joe Metzger, ESnet5 Network Engineering Group Update, Winter ESCC 2013, January 2013,

Inder Monga, Network Abstractions: The first step towards a programmable WAN, TIP 2013, January 14, 2013,

University campuses, Supercomputer centers and R&E networks are challenged to architect, build and support IT infrastructure to deal effectively with the data deluge facing most science disciplines. Hybrid network architecture, multi-domain bandwidth reservations, performance monitoring and GLIF Open Lightpath Exchanges (GOLE) are examples of network architectures that have been proposed, championed and implemented successfully to meet the needs of science. This talk explores a new "one virtual switch" abstraction leveraging software-defined networking and OpenFlow concepts, that provides the science users a simple, adaptable network framework to meet their future application requirements. The talk will include the high-level design that includes use of OpenFlow and OSCARS as well as implementation details from the demonstration planned for SuperComputing.

Michael Sinatra, DNSSEC: Signing, Validating, and Troubleshooting, TIP 2013: Joint Techs, January 2013,

Eli Dart, Brian Tierney, Raj Kettimuthu, Jason Zurawski, Achieving the Science DMZ, January 13, 2013,

Tutorial at TIP2013, Honolulu, HI

  • Part 1: Architecture and Security
  • Part 2: Data Transfer Nodes and Data Transfer Tools
  • Part 3: perfSONAR

 

 

Joe Metzger, Lessons Learned Deploying a 100G Nationwide Network, TIP 2013, January 2013,

J. van der Ham, F. Dijkstra, R. Łapacz, J. Zurawski, “Network Markup Language Base Schema version 1”, Open Grid Forum, GFD-R-P.206, 2013,

2012

Brian Tierney, Efficient Data Transfer Protocols for Big Data, CineGrid Workshop, December 2012,

M. Boddie, T. Entel, C. Guok, A. Lake, J. Plante, E. Pouyoul, B. H. Ramaprasad, B. Tierney, J. Triay, V. M. Vokkarane, On Extending ESnet's OSCARS with a Multi-Domain Anycast Service, IEEE ONDM 2012, December 2012,

Inder Monga, Introduction to Bandwidth on Demand to LHCONE, LHCONE Point-to-point Service Workshop, December 13, 2012,

Introducing Bandwidth on Demand concepts to the application community of CMS and ATLAS experiments.

Inder Monga, Software Defined Networking for big-data science, Worldwide LHC Grid meeting, December 2012,

Michael Sinatra, Don’t Ignore the Substrate: What Networkers Need to Worry About in the Era of Big Clouds and Big Data, Merit Networkers Workshop, December 2012,

Michael Sinatra, Risks of Not Deploying IPv6 Now, CANS 2012, December 2012,

Michael Sinatra, IPv6 Measurement Related Activities, CANS 2012, December 2012,

Eli Dart, Brian Tierney, Editors, “Biological and Environmental Research Network Requirements Workshop, November 2012 - Final Report”, November 29, 2012, LBNL LBNL-6395E

Inder Monga, Eric Pouyoul, Chin Guok, Software Defined Networking for big-data science, SuperComputing 2012, November 15, 2012,

The emerging era of “Big Science” demands the highest possible network performance. End-to-end circuit automation and workflow-driven customization are two essential capabilities needed for networks to scale to meet this challenge. This demonstration showcases how combining software-defined networking techniques with virtual circuit capabilities can transform the network into a dynamic, customer-configurable virtual switch. In doing so, users are able to rapidly customize network capabilities to meet their unique workflows with little to no configuration effort. The demo also highlights how the network can be automated to support multiple collaborations in parallel.

Wu, Q., Yun, D., Zhu, M., Brown, P., and Zurawski, J., “A Workflow-based Network Advisor for Data Movement with End-to-end Performance Optimization”, The Seventh Workshop on Workflows in Support of Large-Scale Science (WORKS12), Salt Lake City Utah, USA, November 2012,

Gunter D., Kettimuthu R., Kissel E., Swany M., Yi J., Zurawski J., “Exploiting Network Parallelism for Improving Data Transfer Performance”, IEEE/ACM Annual SuperComputing Conference (SC12) Companion Volume, Salt Lake City Utah, USA, November 2012,

Greg Bell, Lead Panel on DOE Computing Resources, National Laboratory Day in Mississippi, November 12, 2012,

Greg Bell, Measuring Success in R&E Networking, The Quilt, November 2012,

Greg Bell, Network as Instrument, CANARIE Users’ Forum, November 2012,

Yufei Ren, Tan Li, Dantong Yu, Shudong Jin, Thomas Robertazzi, Brian L. Tierney, Eric Pouyoul, “Protocols for Wide-Area Data-intensive Applications: Design and Performance Issues”, Proceedings of IEEE Supercomputing 2012, November 12, 2012,

Providing high-speed data transfer is vital to various data-intensive applications. While there have been remarkable technology advances to provide ultra-high-speed network bandwidth, existing protocols and applications may not be able to fully utilize the bare-metal bandwidth due to their inefficient design. We identify that the same problem remains in the field of Remote Direct Memory Access (RDMA) networks. RDMA offloads TCP/IP protocols to hardware devices. However, its benefits have not been fully exploited due to the lack of efficient software and application protocols, particularly in wide-area networks. In this paper, we address the design choices to develop such protocols. We describe a protocol implemented as part of a communication middleware. The protocol has its own flow control, connection management, and task synchronization. It maximizes the parallelism of RDMA operations. We demonstrate its performance benefit on various local and wide-area testbeds, including the DOE ANI testbed with RoCE links and InfiniBand links.

Inder Monga, Eric Pouyoul, Chin Guok, “Software Defined Networking for big-data science (paper)”, SuperComputing 2012, November 11, 2012,

University campuses, supercomputer centers, and R&E networks are challenged to architect, build, and support IT infrastructure that deals effectively with the data deluge facing most science disciplines. Hybrid network architectures, multi-domain bandwidth reservations, performance monitoring, and GLIF Open Lightpath Exchanges (GOLEs) are examples of network architectures that have been proposed, championed, and implemented successfully to meet the needs of science. Most recently, the Science DMZ, a campus design pattern that bypasses traditional performance hotspots in typical campus network implementations, has been gaining momentum. In this paper and corresponding demonstration, we build upon the SC11 SCinet Research Sandbox demonstrator with software-defined networking to explore new architectural approaches. A virtual switch network abstraction is explored that, when combined with software-defined networking concepts, provides science users with a simple, adaptable network framework to meet their upcoming application requirements.

Inder Monga, Programmable Information Highway, November 11, 2012,

Suggested Panel Questions:

- What do you envision will have a dramatic impact on the future of networking and data management? What research challenges do you expect in achieving your vision?

- Do we need to re-engineer existing tools and middleware software? Elaborate on network management middleware in terms of virtual circuits, performance monitoring, and diagnosis tools.

- How do current applications match increasing data sizes and enhancements in network infrastructure? Please list a few network-aware applications. What is the scope of networking in the application domain?

- Resource management and scheduling problems are gaining importance due to current developments in utility computing and high interest in Cloud infrastructure. Explain your vision. What sort of algorithms/mechanisms will practically be used in the future?

- What are the main issues in designing/modelling cutting edge dynamic networks for large-scale data processing? What sort of performance problems do you expect?

- What necessary steps do we need to take to benefit from next-generation high-bandwidth networks? Do you think there will be radical changes such as novel APIs or new network stacks?

I. Monga, E. Pouyoul, C. Guok, Software-Defined Networking for Big-Data Science – Architectural Models from Campus to the WAN, SC12: IEEE HPC, November 2012,

David Asner, Eli Dart, and Takanori Hara, “Belle-II Experiment Network Requirements”, October 2012, LBNL LBNL-6268E

The Belle experiment, part of a broad-based search for new physics, is a collaboration of ~400 physicists from 55 institutions across four continents. The Belle detector is located at the KEKB accelerator in Tsukuba, Japan. The Belle detector was operated at the asymmetric electron-positron collider KEKB from 1999-2010. The detector accumulated more than 1 ab^-1 of integrated luminosity, corresponding to more than 2 PB of data near 10 GeV center-of-mass energy. Recently, KEK has initiated a $400 million accelerator upgrade to be called SuperKEKB, designed to produce instantaneous and integrated luminosity two orders of magnitude greater than KEKB. The new international collaboration at SuperKEKB is called Belle II. The first data from Belle II/SuperKEKB is expected in 2015.

In October 2012, senior members of the Belle-II collaboration gathered at PNNL to discuss the computing and networking requirements of the Belle-II experiment with ESnet staff and other computing and networking experts. The day-and-a-half-long workshop characterized the instruments and facilities used in the experiment, the process of science for Belle-II, and the computing and networking equipment and configuration requirements to realize the full scientific potential of the collaboration’s work.

The requirements identified at the Belle II Experiment Requirements workshop are summarized in the Findings section, and are described in more detail in this report. KEK invited Belle II organizations to attend a follow-up meeting hosted by PNNL during SC12 in Salt Lake City on November 13, 2012. The notes from this meeting are in Appendix C.

Inder Monga, Software-defined networking (SDN) and OpenFlow: Hot topics in networking, Masters Class at CMU, NASA Ames, October 2012,

Brian Tierney, ESnet’s Research Testbeds, GLIF Meeting, October 2012,

Eli Dart, Network expectations, or what to tell your system administrator, ALS user group meeting tomography workshop, October 2012,

Michael Sinatra, DNS Security: Panel Discussion, NANOG 56, October 2012,

Paola Grosso, Inder Monga, Cees de Laat, GreenSONAR, GLIF, October 12, 2012,

C. Guok, E. Chaniotakis, A. Lake, OSCARS Production Deployment Experiences, GLIF NSI Operationalization Meeting, October 2012,

Inder Monga, Bill St. Arnaud, Erik-Jan Bos, Defining GLIF Architecture Task Force, GLIF, October 11, 2012,

12th Annual LambdaGrid Workshop in Chicago

Brian Tierney, Ezra Kissel, Martin Swany, Eric Pouyoul, “Efficient Data Transfer Protocols for Big Data”, Proceedings of the 8th International Conference on eScience, IEEE, October 9, 2012,

Abstract—Data set sizes are growing exponentially, so it is important to use data movement protocols that are the most efficient available. Most data movement tools today rely on TCP over sockets, which limits flows to around 20Gbps on today’s hardware. RDMA over Converged Ethernet (RoCE) is a promising new technology for high-performance network data movement with minimal CPU impact over circuit-based infrastructures. We compare the performance of TCP, UDP, UDT, and RoCE over high latency 10Gbps and 40Gbps network paths, and show that RoCE-based data transfers can fill a 40Gbps path using much less CPU than other protocols. We also show that the Linux zero-copy system calls can improve TCP performance considerably, especially on current Intel “Sandy Bridge”-based PCI Express 3.0 (Gen3) hosts.
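
The abstract notes that Linux zero-copy system calls can considerably improve TCP performance on current hosts. As a purely illustrative sketch (not code from the paper; the receiver host name, port, and file path are hypothetical), the following shows the sendfile() zero-copy path from Python:

    # Illustrative only -- not code from the paper. Demonstrates the Linux
    # zero-copy sendfile() path; receiver address and file path are placeholders.
    import os
    import socket

    def send_file_zero_copy(path, host="receiver.example.org", port=5001):
        """Stream a file over TCP with os.sendfile(), avoiding user-space copies."""
        with socket.create_connection((host, port)) as sock, open(path, "rb") as f:
            remaining = os.fstat(f.fileno()).st_size
            offset = 0
            while remaining > 0:
                # The kernel moves data from the page cache directly to the socket.
                sent = os.sendfile(sock.fileno(), f.fileno(), offset, remaining)
                if sent == 0:
                    break
                offset += sent
                remaining -= sent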

 

Eli Dart, Brian Tierney, editors, “Advanced Scientific Computing Research Network Requirements Review, October 2012 - Final Report”, ESnet Network Requirements Review, October 4, 2012, LBNL LBNL-6109E

Inder Monga, Network Service Interface: Concepts and Architecture, I2 Fall Member Meeting, September 2012,

Bill Johnston, Eli Dart, and Brian Tierney, Addressing the Problem of Data Mobility for Data-Intensive Science: Lessons Learned from the data analysis and data management systems of the LHC, ECT2012: The Eighth International Conference on Engineering Computational Technology, September 2012,

Brian Tierney, High Performance Bulk Data Transfer with Globus Online, Webinar, September 2012,

Greg Bell, Network as Instrument, NORDUnet 2012, September 2012,

Mike Bennett, EEE for P802.3bm, Objective Proposal, IEEE 40G and 100G Next Generation Optics Task Force, IEEE 802.3 Interim meeting, September 2012,

Mike Bennett, An Overview of Energy-Efficient Ethernet, NGBASE-T Study Group, IEEE 802.3 Interim meeting, September 2012,

Mike Bennett, Energy Efficiency in IEEE Ethernet Networks – Current Status and Prospects for the Future, Joint ITU/IEEE Workshop on Ethernet--Emerging Applications and Technologies, September 2012,

Greg Bell, ESnet Dark Fiber Footprint and Testbed, CESNET Customer Empowered Fiber Networks Workshop, September 2012,

Inder Monga, Architecting and Operating Energy-Efficient Networks, September 10, 2012,

The presentation outlines network energy-efficiency challenges, the growth of network traffic, and a simulation use case for building next-generation energy-efficient network designs.

T.Kirkham, D.Armstrong, K.Djemame, M.Corrales, M.Kiran, I.Nwankwo, M.Jiang, N.Forgo, “Assuring Data Privacy in Cloud Transformations”, In: TrustCom 2012, September 1, 2012,

A.U.Khan, M.Kiran, M.Oriol, M.Jiang, K.Djemame, “Security risks and their management in cloud computing”, CloudCom 2012: 121-128, September 1, 2012,

M.Kiran, A.U.Khan, M.Jiang, K.Djemame, M.Oriol, M.Corrales,, “Managing Security Threats in Clouds”, Digital Research 2012, September 1, 2012,

T Kirkham, K Djemame, M Kiran, M Jiang, G Vafiadis, A Evangelinou, “Risk based SLA management in clouds: A legal perspective”, Internet Technology and Secured Transactions, 2012, September 1, 2012,

Brian Tierney, ESnet perfSONAR-PS Plans and Perspective, OSG Meeting, August 2012,

Eli Dart, Networks for Data Intensive Science Environments, BES Neutron and Photon Detector Workshop, August 2012,

Bill Johnston, Eli Dart, and Brian Tierney, Addressing the Problem of Data Mobility for Data-Intensive Science: Lessons Learned from the data analysis and data management systems of the LHC, ARNES: The Academic and Research Network of Slovenia, August 2012,

Greg Bell, ESnet Manifesto, Joint Techs Conference, July 2012,

Inder Monga, Eric Pouyoul, Chin Guok, Eli Dart, SDN for Science Networks, Summer Joint Techs 2012, July 17, 2012,

Greg Bell, ESnet Update, ESnet Coordinating Committee Meeting, July 2012,

Inder Monga, A Data-Intensive Network Substrate for eResearch, eScience Workshop, July 2012,

Inder Monga, Marching Towards …a Net-Zero Network, WIN2012 Conference, July 2012,

Jon Dugan, The MyESnet Portal: Making the Network Visible, Summer 2012 ESCC/Internet2 Joint Techs, July 2012,

Joe Metzger, ANI & ESnet5, Summer ESCC 2012, July 2012,

Inder Monga, Energy Efficiency starts with measurement, Greentouch Meeting, June 2012,

Mehmet Balman, Eric Pouyoul, Yushu Yao, E. Wes Bethel, Burlen Loring, Prabhat, John Shalf, Alex Sim, Brian L. Tierney, “Experiences with 100Gbps Network Applications”, The Fifth International Workshop on Data Intensive Distributed Computing (DIDC 2012), June 20, 2012,

100Gbps networking has finally arrived, and many research and educational institutions have begun to deploy 100Gbps routers and services. ESnet and Internet2 worked together to make 100Gbps networks available to researchers at the Supercomputing 2011 conference in Seattle, Washington. In this paper, we describe two of the first applications to take advantage of this network. We demonstrate a visualization application that enables remotely located scientists to gain insights from large datasets. We also demonstrate climate data movement and analysis over the 100Gbps network. We describe a number of application design issues and host tuning strategies necessary for enabling applications to scale to 100Gbps rates.
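
A common example of the class of host tuning the authors refer to (offered here only as an illustration, not as the specific settings used in the paper) is raising TCP socket buffers toward the path's bandwidth-delay product; the 64 MB figure below is an assumption for roughly a 10 Gbps x 50 ms path, and the kernel caps such requests at the system-wide net.core.rmem_max/wmem_max limits.

    # Illustrative per-socket buffer tuning for long, fast paths.
    # The 64 MB value is an assumption (~10 Gbps x 50 ms BDP), not a value
    # recommended by the paper; system-wide limits must also be raised.
    import socket

    BUF_BYTES = 64 * 1024 * 1024

    def tuned_tcp_socket():
        """Return a TCP socket with enlarged send and receive buffers."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_BYTES)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_BYTES)
        return s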

Chris Tracy, 100G Deployment--Challenges & Lessons Learned from the ANI Prototype & SC11, NANOG 55, June 2012,

Inder Monga, ESnet Update: Networks and Research, JGNx and NTT, June 2012,

M.Holcombe, S.Adra, M.Bicak, S.Chin, S.Coakley, A.I.Graham, J.Green, C.Greenough, D.Jackson, M.Kiran, S.MacNeil, A.Maleki-Dizaji, P.McMinn, M.Pogson, R.Poole, E.Qwarnstrom, F.Ratnieks, M.D.Rolfe, R.Smallwood, T.Sun and D.Worth, “Modelling complex biological systems using an agent-based approach”, Integrative Biology, 2012, June 1, 2012,

Jon Dugan, Gopal Vaswani, Gregory Bell, Inder Monga, “The MyESnet Portal: Making the Network Visible”, TERENA 2012 Conference, May 22, 2012,

ESnet provides a platform for moving large data sets and accelerating worldwide scientific collaboration. It provides high-bandwidth, reliable connections that link scientists at national laboratories, universities and other research institutions, enabling them to collaborate on some of the world's most important scientific challenges including renewable energy sources, climate science, and the origins of the universe.

ESnet has embarked on a major project to provide substantial visibility into the inner-workings of the network by aggregating diverse data sources, exposing them via web services, and visualizing them with user-centered interfaces. The portal’s strategy is driven by understanding the needs and requirements of ESnet’s user community and carefully providing interfaces to the data to meet those needs. The 'MyESnet Portal' allows users to monitor, troubleshoot, and understand the real time operations of the network and its associated services.

This paper will describe the MyESnet portal and the process of developing it. The data for the portal comes from a wide variety of sources: homegrown systems, commercial products, and even peer networks. Some visualizations from the portal are presented, highlighting interesting and unusual cases such as power consumption and flow data. Developing effective user interfaces is an iterative process: when a new feature is released, users are both interviewed and observed using the site. From this process, valuable insights were gained about what is important to users and what other features and services they may want. Open source tools were used to build the portal, and the pros and cons of these tools are discussed.

Brian Tierney, ESnet’s Research Testbed, LSN Meeting, May 2012,

Eli Dart, High Performance Networks to Enable and Enhance Scientific Productivity, WRNP 13, May 2012,

Bill Johnston, Evolution of R&E Networks to Enable LHC Science, Istituto Nazionale di Fisica Nucleare (INFN) and Italian Research & Education Network (GARR) joint meeting, May 2012,

Greg Bell, ESnet Overview, LBNL Advisory Board, May 2012,

Jon Dugan, The MyESnet Portal: Making the Network Transparent, TERENA Networking Conference 2012, May 2012,

Bill Johnston, Some ESnet Observations on Using and Managing OSCARS Point-to-Point Circuit, LHCONE / LHCOPN meeting, May 2012,

Bill Johnston and Eli Dart, The Square Kilometer Array: A next generation scientific instrument and its implications for networks (and possible lessons from the LHC experience), TERENA Networking Conference 2012, May 2012,

Brian Tierney, ESnet, the Science DMZ, and the role of Globus Online, Globus World, April 2012,

Eli Dart, Cyberinfrastructure for Data Intensive Science, Joint Techs: Internet2 Spring Member Meeting, April 2012,

Michael Sinatra, IPv6 Panel: Successes and Setbacks, ARIN XXIX, April 2012,

Von Welch, Doug Pearson, Brian Tierney, and James Williams (eds.), “Security at the Cyberborder Workshop Report”, NSF Workshop, March 28, 2012,

Greg Bell, ESnet Update, National Laboratory CIO Meeting, March 2012,

Greg Bell, ESnet Update, CENIC Annual Conference, March 2012,

McKee S., Lake A., Laurens P., Severini H., Wlodek T., Wolff S., and Zurawski J., “Monitoring the US ATLAS Network Infrastructure with perfSONAR-PS”, 19th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2012), New York, NY, USA, March 2012,

Zurawski J., Ball R., Barczyk A., Binkley M., Boote J., Boyd E., Brown A., Brown R., Lehman T., McKee S., Meekhof B., Mughal A., Newman H., Rozsa S., Sheldon P., Tackett A., Voicu R., Wolff S., and Yang X., “The DYNES Instrument: A Description and Overview”, 19th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2012), New York, NY, USA, March 2012,

C. Guok, I. Monga, IDCP and NSI: Lessons Learned, Deployments and Gap Analysis, OGF 34, March 2012,

Andy Lake, Network Performance Monitoring and Measuring Using perfSONAR, CENIC 2012, March 2012,

Baris Aksanli, Tajana Rosing, Inder Monga, “Benefits of Green Energy and Proportionality in High Speed Wide Area Networks Connecting Data Centers”, Design, Automation and Test in Europe (DATE), March 5, 2012,

Abstract: Many companies deploy multiple data centers across the globe to satisfy the dramatically increased computational demand. Wide-area connectivity between such geographically distributed data centers plays an important role in ensuring quality of service and, as bandwidths increase to 100Gbps and beyond, in efficiently distributing computation dynamically. The energy cost of data transmission is dominated by router power consumption, which is unfortunately not energy proportional. In this paper we not only quantify the performance benefits of leveraging the network to run more jobs, but also analyze its energy impact. We compare the benefits of redesigning routers to be more energy efficient to those obtained by leveraging locally available green energy as a complement to the brown energy supply. Furthermore, we design novel green-energy-aware routing policies for wide-area traffic and compare them to a state-of-the-art shortest-path routing algorithm. Our results indicate that using energy-proportional routers powered in part by green energy, along with our new routing algorithm, results in a 10x improvement in per-router energy efficiency with a 36% average increase in the number of jobs completed.

T. Lehman, C. Guok, Advanced Resource Computation for Hybrid Service and TOpology Networks (ARCHSTONE), DOE ASCR PI Meeting, March 2012,

Eli Dart, Network Impacts of Data Intensive Science, Ethernet Technology Summit, February 2012,

Inder Monga, Enabling Science at 100G, ON*Vector Conference, February 2012,

Bill Johnston and Eli Dart, Design Patterns for Data-Intensive Science--LHC lessons and SKA, Pawsey Supercomputer Center User Meeting, February 2012,

Michael Sinatra, ESnet as an ISP, Winter ESnet Site Coordinators Committee Meeting, January 26, 2012,

Michael Sinatra, Site IPv6 Deployment Status & Issues, Winter ESnet Site Coordinators Committee Meeting, January 26, 2012,

Inder Monga, John MacAuley, GLIF NSI Implementation Task Force Presentation, Winter GLIF Tech Meeting at Baton Rouge, LA, January 26, 2012,

Joe Metzger, ESnet 5 Deployment Plans, Winter ESnet Site Coordinators Committee Meeting, January 25, 2012,

Patty Giuntoli, Sheila Cisko, ESnet Collaborative Services (ECS) / RCWG updates, Winter ESnet Site Coordinators Committee Meeting, January 25, 2012,

Eric Pouyoul, Brian Tierney, Achieving 98Gbps of Cross-country TCP traffic using 2.5 hosts, 10 10G NICs, and 10 TCP streams, Winter 2012 Joint Techs, January 25, 2012,

Greg Bell, Science at the Center: ESnet Update, Joint Techs, January 25, 2012,

In this talk, Acting Director Greg Bell will provide an update on ESnet's recent activities through the lens of its mission to accelerate discovery for researchers in the DOE Office of Science. Topics covered: What makes ESnet distinct? Why does its Science DMZ strategy matter? What are potential 'design patterns' for data-intensive science? Does 100G matter?

Eli Dart, Brent Draney, National Laboratory Success Stories, Joint Techs, January 24, 2012,

Reports from ESnet and National Laboratories that have successfully deployed methods to enhance their infrastructure support for data intensive science.

Chin Guok, Evolution of OSCARS, Joint Techs, January 23, 2012,

On-demand Secure Circuits and Advance Reservation System (OSCARS) has evolved tremendously since its conception as a DOE-funded project at ESnet in 2004. Since then, it has grown from a research project to a collaborative open-source software project with production deployments in several R&E networks, including ESnet and Internet2. In the latest release, version 0.6, the software was redesigned to flexibly accommodate both research and production needs. It is currently being used by several research projects to study path computation algorithms and to demonstrate multi-layer circuit management. Just recently, OSCARS 0.6 was leveraged to support production-level bandwidth management in the ESnet ANI 100G prototype network, SCinet at SC11 in Seattle, and the Internet2 DYNES project. This presentation will highlight the evolution of OSCARS, activities surrounding OSCARS v0.6 and lessons learned, and share with the community the roadmap for future development that will be discussed within the open-source collaboration.

This presentation will discuss the challenges and lessons learned in the deployment of the 100GigE ANI Prototype network and support of 100G circuit services during SC11 in Seattle. Interoperability, testing, measurement, debugging, and operational issues at both the optical and layer-2/3 will be addressed. Specific topics will include: (1) 100G pluggable optics – options, support, and standardization issues, (2) Factors negatively affecting 100G line-side transmission, (3) Saturation testing and measurement with hosts connected at 10G, (4) Debugging and fault isolation with creative use of loops/circuit services, (5) Examples of interoperability problems in a multi-vendor environment, and (6) Case study: Transport of 2x100G waves to SC11.

Joe Breen, Eli Dart, Eric Pouyoul, Brian Tierney, Achieving a Science "DMZ", Winter 2012 Joint Techs, Full day tutorial, January 22, 2012,

There are several aspects to building successful infrastructure to support data intensive science. The Science DMZ Model incorporates three key components into a cohesive whole: a high-performance network architecture designed for ease of use; well-configured systems for data transfer; and measurement hosts to provide visibility and rapid fault isolation. This tutorial will cover aspects of network architecture and network device configuration, the design and configuration of a Data Transfer Node, and the deployment of perfSONAR in the Science DMZ. Aspects of current deployments will also be discussed.
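
As a small illustrative aid to the "well-configured systems for data transfer" component (not material from the tutorial itself), the bandwidth-delay product gives a rough sizing rule for Data Transfer Node buffering; the numbers below are example values only.

    # Illustrative helper: buffering needed to keep a path full is roughly the
    # bandwidth-delay product (BDP). Example values only, not tutorial content.
    def bandwidth_delay_product_bytes(gbps, rtt_ms):
        """Bytes in flight required to sustain `gbps` over a path with `rtt_ms` RTT."""
        return int(gbps * 1e9 / 8 * rtt_ms / 1e3)

    # A 10 Gbps cross-country path with a 50 ms round-trip time needs roughly
    # 62.5 MB of buffering to run at line rate.
    print(bandwidth_delay_product_bytes(10, 50))  # 62500000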

M.Kiran, M.Bicak, S.Maleki-Dizaji, M.Holcombe, “FLAME: A Platform for High Performance Computing of Complex Systems, Applied for Three Case Studies”, Acta Physica Polonica B, Proceedings Supplement, DOI:10.5506/APhysPolBSupp.4.201, PACS numbers: 07.05.Tp, vol 4, no 2, 2011 (Polish Journal), January 1, 2012,

2011

Zurawski J., Boyd E., Lehman T., McKee S., Mughal A., Newman H., Sheldon P, Wolff S., and Yang X., “Scientific data movement enabled by the DYNES instrument”, Proceedings of the first international workshop on Network-aware data management (NDM ’11), Seattle WA, USA, November 2011,

K.Djemame, D.Armstrong, M.Kiran, M.Jiang, “A Risk Assessment Framework and Software Toolkit for Cloud Service Ecosystems”, Cloud Computing 2011: The Second International Conference on Cloud Computing, Grids and Virtualization, pg: 119-126, ISBN: 978-1-61208-153-3, Italy, September 1, 2011,

Steve Cotter, ANI details leading to ESnet5, ESCC, Summer 2011, July 13, 2011,

C. Guok, OSCARS, GENI Project Office Meeting, May 2011,

William E. Johnston, Motivation, Design, Deployment and Evolution of a Guaranteed Bandwidth Network Service, TERENA Networking Conference, 16 - 19 May, 2011, Prague, Czech Republic, May 16, 2011,

Tom Lehman, Xi Yang, Nasir Ghani, Feng Gu, Chin Guok, Inder Monga, and Brian Tierney, “Multilayer Networks: An Architecture Framework”, IEEE Communications Magazine, May 9, 2011,

Neal Charbonneau, Vinod M. Vokkarane, Chin Guok, Inder Monga, “Advance Reservation Frameworks in Hybrid IP-WDM Networks”, IEEE Communications Magazine, May 9, 2011, pp. 132-139,

Steve Cotter, Early Lessons Learned Deploying a 100Gbps Network, Enterprise Innovation Symposium in Atlanta, May 4, 2011,

Inder Monga, Chin Guok, William E. Johnston, and Brian Tierney, “Hybrid Networks: Lessons Learned and Future Challenges Based on ESnet4 Experience”, IEEE Communications Magazine, May 1, 2011,

W.E. Johnston, C. Guok, J. Metzger, B. Tierney, Network Services for High Performance Distributed Computing and Data Management, The Second International Conference on Parallel, Distributed, Grid, and Cloud Computing for Engineering, Ajaccio - Corsica - France, April 12, 2011,

Eli Dart, “ESnet Requirements Workshops Summary for Sites”, ESCC Meeting, Clemson, SC, February 2, 2011,

Brian Tierney, ANI Testbed Project Update, Winter 2011 Joint Techs, Clemson, SC, February 2, 2011,

Steve Cotter, ESnet Update, Winter 2011 Joint Techs Clemson, SC, February 2, 2011,

Eli Dart, The Science DMZ, Winter 2011 Joint Techs, February 1, 2011,

Joe Metzger, DICE Diagnostic Service, Joint Techs - Clemson, South Carolina, January 27, 2011,

M.Kiran, M.Jiang, D.Armstrong and K.Djemame, “Towards a Service Life Cycle-based Methodology for Risk Assessment in Cloud Computing”, CGC 2011, International Conference on Cloud and Green Computing, December, Australia, Proceedings DASC 2011: 449-456, January 2, 2011,

S.F.Adra, M.Kiran, P.McMinn, N.Walkinshaw, “A multiobjective optimisation approach for the dynamic inference and refinement of agent-based model specifications”, IEEE Congress on Evolutionary Computation 2011: 2237-2244, New Orleans, USA, January 2, 2011,

Eli Dart, Brian Tierney, editors, “Fusion Energy Network Requirements Workshop, December 2011 - Final Report”, ESnet Network Requirements Workshop, January 1, 2011, LBNL LBNL-5905E

Eli Dart, Lauren Rotman, Brian Tierney, editors, “Nuclear Physics Network Requirements Workshop, August 2011 - Final Report”, ESnet Network Requirements Workshop, January 1, 2011, LBNL LBNL-5518E

2010

Chaitanya S. K. Vadrevu, Massimo Tornatore, Chin P. Guok, Inder Monga, A Heuristic for Combined Protection of IP Services and Wavelength Services in Optical WDM Networks, IEEE ANTS 2010, December 2010,

Joe Metzger, editor, “General Service Description for DICE Network Diagnostic Services”, December 1, 2010,

Chris Tracy, Introduction to OpenFlow: Bringing Experimental Protocols to a Network Near You, NANOG50 Conference, Atlanta, October 4, 2010,

Chris Tracy, Eli Dart, Science DMZs: Understanding their role in high-performance data transfers, CANS 2010, September 20, 2010,

Kevin Oberman, IPv6 Implementation at a Network Service Provider, 2010 Inter Agency IPv6 Information Exchange, August 4, 2010,

Eli Dart, High Performance Data Transfer, Joint Techs, Summer 2010, July 15, 2010,

Kevin Oberman, Future DNSSEC Directions, ESCC Meeting, Columbus, Ohio, July 15, 2010,

Jon Dugan, Using Graphite to Visualize Network Data, ESCC Meeting, Columbus, Ohio, July 15, 2010,

Evangelos Chaniotakis, Virtual Circuits Landscape, ESCC Meeting, Columbus, Ohio, July 15, 2010,

Joe Metzger, PerfSONAR Update, ESCC Meeting, July 15, 2010,



M.Kiran, P.Richmond, M.Holcombe, L.S.Chin, D.Worth and C.Greenough, “FLAME: Simulating Large Populations of Agents on Parallel Hardware Architectures”, AAMAS 2010: 1633-1636, Toronto, Canada, June 1, 2010,

C. Guok, OSCARS Roadmap, OGF 28; DICE Control Plane WG, May 2010,

Jon Dugan, Network Monitoring and Visualization at ESnet, Joint Techs, Salt Lake City, Utah, February 3, 2010,

Kevin Oberman, IPv6 SNMP Network Management, Joint Techs, Salt Lake City, Utah, February 3, 2010, http://events.internet2.edu/2010/jt-slc/

Steve Cotter, ESnet Update, ESCC Meeting, Salt Lake City, Utah, February 3, 2010,

Kevin Oberman, DNSSEC Implementation at ESnet, Joint Techs, Salt Lake City, Utah, February 2, 2010,

Steve Cotter, ESnet Update, Joint Techs, Salt Lake City, Utah, February 2, 2010,

C. Guok, I. Monga, Composable Network Service Framework, ESCC, February 2010,

Swany M., Portnoi M., Zurawski J., “Information services algorithm to heuristically summarize IP addresses for distributed, hierarchical directory”, 11th IEEE/ACM International Conference on Grid Computing (Grid2010), 2010,

Eli Dart, Brian Tierney, editors, “Basic Energy Sciences Network Requirements Workshop, September 2010 - Final Report”, ESnet Network Requirements Workshop, January 1, 2010, LBNL LBNL-4363E

Office of Basic Energy Sciences, DOE Office of Science; Energy Sciences Network; Gaithersburg, MD — September 22 and 23, 2010

Participants and Contributors; Alan Biocca, LBNL (Advanced Light Source); Rich Carlson, DOE/SC/ASCR (Program Manager); Jackie Chen, SNL/CA (Chemistry/Combustion); Steve Cotter, ESnet (Networking); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ESnet Program Manager); Jim Davenport, DOE/SC/BES (BES Program); Alexander Gaenko, Ames Lab (Chemistry); Paul Kent, ORNL (Materials Science, Simulations); Monica Lamm, Ames Lab (Computational Chemistry); Stephen Miller, ORNL (Spallation Neutron Source); Chris Mundy, PNNL (Chemical Physics); Thomas Ndousse, DOE/SC/ASCR (ASCR Program); Mark Pederson, DOE/SC/BES (BES Program); Amedeo Perazzo, SLAC (Linac Coherent Light Source); Razvan Popescu, BNL (National Synchrotron Light Source); Damian Rouson, SNL/CA (Chemistry/Combustion); Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager); Bobby Sumpter, ORNL (Computer Science and Mathematics and Center for Nanophase; Materials Sciences); Brian Tierney, ESnet (Networking); Cai-Zhuang Wang, Ames Lab (Computer Science/Simulations); Steve Whitelam, LBNL (Molecular Foundry); Jason Zurawski, Internet2 (Networking)

Eli Dart, Brian Tierney, editors, “Biological and Environmental Research Network Requirements Workshop, April 2010 - Final Report”, ESnet Network Requirements Workshop, January 1, 2010, LBNL LBNL-4089E

Office of Biological and Environmental Research, DOE Office of Science Energy Sciences Network Rockville, MD — April 29 and 30, 2010. This is LBNL report LBNL-4089E.

Participants and Contributors: Kiran Alapaty, DOE/SC/BER (Atmospheric System Research) Ben Allen, LANL (Bioinformatics) Greg Bell, ESnet (Networking) David Benton, GLBRC/University of Wisconsin (Informatics) Tom Brettin, ORNL (Bioinformatics) Shane Canon, NERSC (Data Systems) Rich Carlson, DOE/SC/ASCR (Network Research) Steve Cotter, ESnet (Networking) Silvia Crivelli, LBNL (JBEI) Eli Dart, ESnet (Networking) Vince Dattoria, DOE/SC/ASCR (ESnet Program Manager) Narayan Desai, ANL (Networking) Richard Egan, ANL (ARM) Jeff Flick, NOAA (Networking) Ken Goodwin, PSC/NLR (Networking) Susan Gregurick, DOE/SC/BER (Computational Biology) Susan Hicks, ORNL (Networking) Bill Johnston, ESnet (Networking) Bert de Jong, PNNL (EMSL/HPC) Kerstin Kleese van Dam, PNNL (Data Management) Miron Livny, University of Wisconsin (Open Science Grid) Victor Markowitz, LBNL/JGI (Genomics) Jim McGraw, LLNL (HPC/Climate) Raymond McCord, ORNL (ARM) Chris Oehmen, PNNL (Bioinformatics/ScalaBLAST) Kevin Regimbal, PNNL (Networking/HPC) Galen Shipman, ORNL (ESG/Climate) Gary Strand, NCAR (Climate) Brian Tierney, ESnet (Networking) Susan Turnbull, DOE/SC/ASCR (Collaboratories, Middleware) Dean Williams, LLNL (ESG/Climate) Jason Zurawski, Internet2 (Networking)  

Editors: Eli Dart, ESnet; Brian Tierney, ESnet

2009

William E Johnston, Progress in Integrating Networks with Service Oriented Architectures / Grids. The Evolution of ESnet's Guaranteed Bandwidth Service, Cracow ’09 Grid Workshop, October 12, 2009,

Mariam Kiran, Simon Coakley, Neil Walkinshaw, Phil McMinn, Mike Holcombe, “Validation and discovery from computational biology models”, Biosystems, September 1, 2009,

“HEP (High Energy Physics) Network Requirements Workshop, August 2009 - Final Report”, ESnet Network Requirements Workshop, August 27, 2009, LBNL LBNL-3397E

Office of High Energy Physics, DOE Office of Science Energy Sciences Network Gaithersburg, MD. LBNL-3397E.

Participants and Contributors: Jon Bakken, FNAL (LHC/CMS) Artur Barczyk, Caltech (LHC/Networking) Alan Blatecky, NSF (NSF Cyberinfrastructure) Amber Boehnlein, DOE/SC/HEP (HEP Program Office) Rich Carlson, Internet2 (Networking) Sergei Chekanov, ANL (LHC/ATLAS) Steve Cotter, ESnet (Networking) Les Cottrell, SLAC (Networking) Glen Crawford, DOE/SC/HEP (HEP Program Office) Matt Crawford, FNAL (Networking/Storage) Eli Dart, ESnet (Networking) Vince Dattoria, DOE/SC/ASCR (ASCR Program Office) Michael Ernst, BNL (HEP/LHC/ATLAS) Ian Fisk, FNAL (LHC/CMS) Rob Gardner, University of Chicago (HEP/LHC/ATLAS) Bill Johnston, ESnet (Networking) Steve Kent, FNAL (Astroparticle) Stephan Lammel, FNAL (FNAL Experiments and Facilities) Stewart Loken, LBNL (HEP) Joe Metzger, ESnet (Networking) Richard Mount, SLAC (HEP) Thomas Ndousse-Fetter, DOE/SC/ASCR (Network Research) Harvey Newman, Caltech (HEP/LHC/Networking) Jennifer Schopf, NSF (NSF Cyberinfrastructure) Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager) Alan Stone, DOE/SC/HEP (HEP Program Office) Brian Tierney, ESnet (Networking) Craig Tull, LBNL (Daya Bay) Jason Zurawski, Internet2 (Networking)

 

William Johnston, Energy Sciences Network Enabling Virtual Science, TERENA Conference, Malaga, Spain, July 9, 2009,

William E Johnston, The Evolution of Research and Education Networks and their Essential Role in Modern Science, TERENA Conference, Malaga, Spain, June 9, 2009,

“ASCR (Advanced Scientific Computing Research) Network Requirements Workshop, April 2009 - Final Report”, ESnet Networking Requirements Workshop, April 15, 2009, LBNL LBNL-2495E

Office of Advanced Scientific Computing Research, DOE Office of Science Energy Sciences Network Gaithersburg, MD. LBNL-2495E.

Participants and Contributors: Bill Allcock, ANL (ALCF, GridFTP) Rich Carlson, Internet2 (Networking) Steve Cotter, ESnet (Networking) Eli Dart, ESnet (Networking) Vince Dattoria, DOE/SC/ASCR (ASCR Program Office) Brent Draney, NERSC (Networking and Security) Richard Gerber, NERSC (User Services) Mike Helm, ESnet (DOEGrids/PKI) Jason Hick, NERSC (Storage) Susan Hicks, ORNL (Networking) Scott Klasky, ORNL (OLCF Applications) Miron Livny, University of Wisconsin Madison (OSG) Barney Maccabe, ORNL (Computer Science) Colin Morgan, NOAA (Networking) Sue Morss, DOE/SC/ASCR (ASCR Program Office) Lucy Nowell, DOE/SC/ASCR (SciDAC) Don Petravick, FNAL (HEP Program Office) Jim Rogers, ORNL (OLCF) Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager) Alex Sim, LBNL (Storage Middleware) Brian Tierney, ESnet (Networking) Susan Turnbull, DOE/SC/ASCR (Collaboratories/Middleware) Dean Williams, LLNL (ESG/Climate) Linda Winkler, ANL (Networking) Frank Wuerthwein, UC San Diego (OSG)

C. Guok, ESnet OSCARS, DOE Joint Engineering Taskforce, February 2009,

Swany M., Brown A., Zurawski J., “A General Encoding Framework for Representing Network Measurement and Topology Data”, Concurrency and Computation: Practice and Experience, 2009, 21:1069-1086,

Grigoriev M., Boote J., Boyd E., Brown A., Metzger J., DeMar P., Swany M., Tierney B., Zekauskas M., Zurawski J., “Deploying distributed network monitoring mesh for LHC Tier-1 and Tier-2 sites”, 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2009), January 1, 2009,

Tierney B., Metzger J., Boote J., Brown A., Zekauskas M., Zurawski J., Swany M., Grigoriev M., “perfSONAR: Instantiating a Global Network Measurement Framework”, 4th Workshop on Real Overlays and Distributed Systems (ROADS’09) Co-located with the 22nd ACM Symposium on Operating Systems Principles (SOSP), January 1, 2009, LBNL LBNL-1452

2008

Chin Guok, David Robertson, Evangelos Chaniotakis, Mary Thompson, William Johnston, Brian Tierney, A User Driven Dynamic Circuit Network Implementation, IEEE DANMS 2008, November 2008,

William Johnston, Evangelos Chaniotakis, Eli Dart, Chin Guok, Joe Metzger, Brian Tierney, “The Evolution of Research and Education Networks and their Essential Role in Modern Science”, Trends in High Performance & Large Scale Computing, (November 1, 2008)

Published in: "Trends in High Performance & Large Scale Computing" Lucio Grandinetti and Gerhard Joubert, Editors

William Johnston, ESnet Planning, Status, and Future Issues, ASCAC Meeting, August 1, 2008,

A. Baranovski, K. Beattie, S. Bharathi, J. Boverhof, J. Bresnahan, A. Chervenak, I. Foster, T. Freeman, D. Gunter, K. Keahey, C. Kesselman, R. Kettimuthu, N. Leroy, M. Link, M. Livny, R. Madduri, G. Oleynik, L. Pearlman, R. Schuler, and B. Tierney, “Enabling Petascale Science: Data Management, Troubleshooting and Scalable Science Services”, Proceedings of SciDAC 2008, July 1, 2008,

“NP (Nuclear Physics) Network Requirements Workshop, May 2008 - Final Report”, ESnet Network Requirements Workshop, May 6, 2008, LBNL LBNL-1289E

Nuclear Physics Program Office, DOE Office of Science Energy Sciences Network Bethesda, MD. LBNL-1289E.

Participants and Contributors: Rich Carlson, Internet2 (Networking) Eli Dart, ESnet (Networking) Vince Dattoria, DOE/SC/ASCR (ASCR Program Office) Michael Ernst, BNL (RHIC) Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office) William Johnston, ESnet (Networking) Andy Kowalski, JLAB (Networking) Jerome Lauret, BNL (STAR at RHIC) Charles Maguire, Vanderbilt (LHC CMS Heavy Ion) Douglas Olson, LBNL (STAR at RHIC and ALICE at LHC) Martin Purschke, BNL (PHENIX at RHIC) Gulshan Rai, DOE/SC (NP Program Office) Brian Tierney, ESnet (Networking) Chip Watson, JLAB (CEBAF) Carla Vale, BNL (PHENIX at RHIC)

William E Johnston, ESnet4: Networking for the Future of DOE Science, Office of Science, Science Programs Requirements Workshops: Nuclear Physics, May 1, 2008,

“FES (Fusion Energy Sciences) Network Requirements Workshop, March 2008 - Final Report”, ESnet Network Requirements Workshop, March 13, 2008, LBNL LBNL-644E

Fusion Energy Sciences Program Office, DOE Office of Science Energy Sciences Network Gaithersburg, MD. LBNL-644E.

Participants and Contributors: Rich Carlson, Internet2 (Networking) Tom Casper, LLNL (Fusion – LLNL) Dan Ciarlette, ORNL (ITER) Eli Dart, ESnet (Networking) Vince Dattoria, DOE/SC/ASCR (ASCR Program Office) Bill Dorland, University of Maryland (Fusion – Computation) Martin Greenwald, MIT (Fusion – Alcator C-Mod) Paul Henderson, PPPL (Fusion – PPPL Networking, PPPL) Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office) Ihor Holod, UC Irvine (Fusion – Computation, SciDAC) William Johnston, ESnet (Networking) Scott Klasky, ORNL (Fusion – Computation, SciDAC) John Mandrekas, DOE/SC (FES Program Office) Doug McCune, PPPL (Fusion – TRANSP user community, PPPL) Thomas NDousse, DOE/SC/ASCR (ASCR Program Office) Ravi Samtaney, PPPL (Fusion – Computation, SciDAC) David Schissel, General Atomics (Fusion – DIII-D, Collaboratories) Yukiko Sekine, DOE/SC/ASCR (ASCR Program Office), Sveta Shasharina, Tech-X Corporation (Fusion – Computation) Brian Tierney, LBNL (Networking)

William Johnston, Joe Metzger, Mike O'Connor, Michael Collins, Joseph Burrescia, Eli Dart, Jim Gagliardi, Chin Guok, Kevin Oberman, “Network Communication as a Service-Oriented Capability”, High Performance Computing and Grids in Action, Volume 16, Advances in Parallel Computing, (March 1, 2008)

ABSTRACT

In widely distributed systems generally, and in science-oriented Grids in particular, software, CPU time, storage, etc., are treated as “services” – they can be allocated and used with service guarantees that allow them to be integrated into systems that perform complex tasks. Network communication is currently not a service – it is provided, in general, as a “best effort” capability with no guarantees and only statistical predictability.

In order for Grids (and most types of systems with widely distributed components) to be successful in performing the sustained, complex tasks of large-scale science – e.g., the multi-disciplinary simulation of next-generation climate modeling and the management and analysis of the petabytes of data that will come from the next generation of scientific instruments (which is very soon for the LHC at CERN) – networks must provide communication capability that is service-oriented: that is, it must be configurable, schedulable, predictable, and reliable. In order to accomplish this, the research and education network community is undertaking a strategy that involves changes in network architecture to support multiple classes of service; development and deployment of service-oriented communication services; and monitoring and reporting in a form that is directly useful to the application-oriented system so that it may adapt to communications failures.

In this paper we describe ESnet's approach to each of these – an approach that is part of an international community effort to have intra-distributed system communication be based on a service-oriented capability.

Joseph Burrescia, ESnet Update, Joint Techs, Honolulu, Hawaii, January 21, 2008,

Kevin Oberman, The Gathering Storm: The Coming Internet Crisis, Joint Techs, Honolulu, Hawaii, January 21, 2008,

J. Zurawski, D Wang, “Fault-tolerance schemes for clusterheads in clustered mesh networks”, International Journal of Parallel, Emergent and Distributed Systems, 2008, 23:271--287,

Chin P. Guok, Jason R. Lee, Karlo Berket, “Improving The Bulk Data Transfer Experience”, International Journal of Internet Protocol Technology, 2008, Vol. 3, No. 1, pp. 46-53, January 1, 2008,

2007

C. Guok, Impact of ESnet OSCARS and Collaborative Projects, SC07, November 2007,

Joe Metzger, ESnet4: Networking for the Future of DOE Science, ICFA International Workshop on Digital Divide, October 25, 2007,

Dan Gunter, Brian L. Tierney, Aaron Brown, Martin Swany, John Bresnahan, Jennifer M. Schopf, “Log Summarization and Anomaly Detection for Troubleshooting Distributed Systems”, Proceedings of the 8th IEEE/ACM International Conference on Grid Computing, September 19, 2007,

Matthias Vallentin, Robin Sommer, Jason Lee, Craig Leres, Vern Paxson, Brian Tierney, “The NIDS Cluster: Scalable, Stateful Network Intrusion Detection on Commodity Hardware”, Proceedings of the Symposium on Recent Advances in Intrusion Detection (RAID), September 10, 2007,

“BER (Biological and Environmental Research) Network Requirements Workshop, July 2007 - Final Report”, ESnet Network Requirements Workshop, July 26, 2007,

Biological and Environmental Research Program Office, DOE Office of Science Energy Sciences Network Bethesda, MD – July 26 and 27, 2007. LBNL/PUB-988.

Participants and Contributors: Dave Bader, LLNL (Climate) Raymond Bair, ANL (Comp Bio) Anjuli Bamzai, DOE/SC BER Paul Bayer, DOE/SC BER David Bernholdt, ORNL (Earth System Grid) Lawrence Buja, NCAR (Climate) Alice Cialella, BNL (ARM Data) Eli Dart, ESnet (Networking) Eric Davis, LLNL (Climate) Bert DeJong, PNNL (EMSL) Dick Eagan, ANL (ARM) Yakov Golder, JGI (Comp Bio) Dave Goodwin, DOE/SC ASCR Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office) William Johnston, ESnet (Networking) Phil Jones, LANL (Climate) Raymond McCord, ORNL (ARM) Steve Meacham, NSF George Michaels, PNNL (Comp Bio) Kevin Regimbal, PNNL (EMSL) Mike Sayre, NIH Harris Shapiro, LBNL (JGI) Ellen Stechel, ASCAC Brian Tierney, LBNL (Networking) Lee Tsengdar, NASA (Geosciences) Mike Wehner, LBNL (Climate) Trey White, ORNL (Climate)

William E Johnston, “ESnet: Advanced Networking for Science”, SciDAC Review, July 1, 2007,

“BES (Basic Energy Sciences) Network Requirements Workshop, June 2007 - Final Report”, ESnet Network Requirements Workshop, June 4, 2007, LBNL LBNL/PUB-981

Basic Energy Sciences Program Office, DOE Office of Science Energy Sciences Network Washington, DC – June 4 and 5, 2007. LBNL/PUB-981.

Participants and Contributors: Dohn Arms, ANL (Advanced Photon Source) Anjuli Bamzai, DOE/SC/BER (BER Program Office) Alan Biocca, LBNL (Advanced Light Source) Jackie Chen, SNL (Combustion Research Facility) Eli Dart, ESnet (Networking) Bert DeJong, PNNL (Chemistry) Paul Domagala, ANL (Computing and Information Systems) Yiping Feng, SLAC (LCLS/LUSI) David Goodwin, DOE/SC/ASCR (ASCR Program Office) Bruce Harmon, Ames Lab (Materials Science) Robert Harrison, UT/ORNL (Chemistry) Richard Hilderbrandt, DOE/SC/BES (BES Program Office) Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office) William Johnston, ESnet (Networking) Roger Klaffky, DOE/SC/BES (BES Program Office) Michael McGuigan, BNL (Center for Functional Nanomaterials) Stephen Miller, ORNL (Spallation Neutron Source) Richard Mount, SLAC (Linac Coherent Light Source) Jeff Neaton, LBNL (Molecular Foundry) Larry Rahn, SNL/BES (Combustion) Thomas Schulthess, ORNL (CNMS) Ken Sidorowicz, ANL (Advanced Photon Source) Ellen Stechel, SNL (ASCAC) Brian Tierney, LBNL (Networking) Linda Winkler, ANL (Networking) Zhijian Yin, BNL (National Synchrotron Light Source)

Tom Lehman, Xi Yang, Chin P. Guok, Nageswara S. V. Rao, Andy Lake, John Vollbrecht, Nasir Ghani, “Control Plane Architecture and Design Considerations for Multi-Service, Multi-Layer, Multi-Domain Hybrid Networks”, INFOCOM 2007, IEEE (TCHSN/ONTC), May 1, 2007,

“Measurements On Hybrid Dedicated Bandwidth Connections”, INFOCOM 2007, IEEE (TCHSN/ONTC), May 1, 2007,

William Johnston, “The Advanced Networks and Services Underpinning Modern, Large-Scale Science”, SciDAC Review Paper, May 1, 2007,

William E Johnston, ESnet4 - Networking for the Future of DOE Science: High Energy Physics / LHC Networking, ON Vector (ON*Vector) Workshop, February 26, 2007,

2006

Chin Guok, David Robertson, Mary Thompson, Jason Lee, Brian Tierney and William Johnston, “Intra and Interdomain Circuit Provisioning Using the OSCARS Reservation System”, Third International Conference on Broadband Communications Networks, and Systems, IEEE/ICST, October 1, 2006,

Eli Dart, editor, “Science-Driven Network Requirements for ESnet: Update to the 2002 Office of Science Networking Requirements Workshop Report - February 21, 2006”, ESnet Networking Requirements Workshop, February 21, 2006,

Update to the 2002 Office of Science Networking Requirements Workshop Report February 21, 2006. LBNL report LBNL-61832.

Contributors: Paul Adams, LBNL (Advanced Light Source); Shane Canon, ORNL (NLCF); Steven Carter, ORNL (NLCF); Brent Draney, LBNL (NERSC); Martin Greenwald, MIT (Magnetic Fusion Energy); Jason Hodges, ORNL (Spallation Neutron Source); Jerome Lauret, BNL (Nuclear Physics); George Michaels, PNNL (Bioinformatics); Larry Rahn, SNL (Chemistry); David Schissel, GA (Magnetic Fusion Energy); Gary Strand, NCAR (Climate Science); Howard Walter, LBNL (NERSC); Michael Wehner, LBNL (Climate Science); Dean Williams, LLNL (Climate Science).

ESnet On-demand Secure Circuits and Advance Reservation System (OSCARS), Google invited talk; Advanced Networking for Distributed Petascale Science Workshop; IEEE GridNets; QUILT Fall Fiber Workshop, 2008, 2006,

Zurawski, J., Swany M., and Gunter D., “A Scalable Framework for Representation and Exchange of Network Measurements”, 2nd International IEEE/Create-Net Conference on Testbeds and Research Infrastructures for the Development of Networks and Communities (TridentCom 2006), Barcelona, Spain, 2006,

2005

R. Pang, M. Allman, M. Bennett, J. Lee, V. Paxson, B. Tierney, “A First Look at Modern Enterprise Traffic”, Internet Measurement Conference 2005 (IMC-2005), October 19, 2005,

Zurawski J., Swany M., Beck M. and Ding Y., “Logistical Multicast for Data Distribution”, Proceedings of CCGrid, Workshop on Grids and Advanced Networks 2005 (GAN05), Cardiff, Wales, 2005,

Wang D., Zurawski J., “Fault-Tolerance Schemes for Hierarchical Mesh Networks”, The 6th International Conference on Parallel and Distributed Computing Applications and Technologies (PDCAT 2005), Dalian, China, 2005,

Hanemann A., Boote J. , Boyd E., Durand J., Kudarimoti L., Lapacz R., Swany M., Trocha S., and Zurawski J., “PerfSONAR: A Service-Oriented Architecture for Multi-Domain Network Monitoring”, International Conference on Service Oriented Computing (ICSOC 2005), Amsterdam, The Netherlands, 2005,

2004

Gunter D., Leupolt M., Tierney B., Swany M. and Zurawski J., “A Framework for the Representation of Network Measurements”, LBNL Technical Report, 2004,

2003

D. Gunter, B. Tierney, C. E. Tull, V. Virmani, “On-Demand Grid Application Tuning and Debugging with the NetLogger Activation Service”, 4th International Workshop on Grid Computing (Grid2003), November 17, 2003,

“DOE Science Networking Challenge: Roadmap to 2008 - Report of the June 3-5, 2003, DOE Science Networking Workshop”, DOE Science Networking Workshop, June 3, 2003,

Report of the June 3-5, 2003, DOE Science Networking Workshop Conducted by the Energy Sciences Network Steering Committee at the request of the Office of Advanced Scientific Computing Research of the U.S. Department of Energy Office of Science.

Workshop Chair: Roy Whitney; Working Group Chairs: Wu-chun Feng, William Johnston, Nagi Rao, David Schissel, Vicky White, Dean Williams; Workshop Support: Sandra Klepec, Edward May; Report Editors: Roy Whitney, Larry Price; Energy Sciences Network Steering Committee: Larry Price, Chair; Charles Catlett, Greg Chartrand, Al Geist, Martin Greenwald, James Leighton, Raymond McCord, Richard Mount, Jeff Nichols, T.P. Straatsma, Alan Turnbull, Chip Watson, William Wing, Nestor Zaluzec.

D. Agarwal, J. M. González, G. Jin, B. Tierney, “An Infrastructure for Passive Network Monitoring of Application Data Streams”, 2003 Passive and Active Measurement Workshop, April 14, 2003,

Dan Gunter, Brian Tierney, “NetLogger: A Toolkit for Distributed System Performance Tuning and Debugging”, Proceedings of The 8th IFIP/IEEE International Symposium on Integrated Network Management, March 24, 2003,

2002

J. Lee, D. Gunter, M. Stoufer, B. Tierney, “Monitoring Data Archives for Grid Environments”, Proceedings of the IEEE Supercomputing 2002 Conference, November 15, 2002,

A Chervenak, E. Deelman, I. Foster, A. Iamnitchi, C. Kesselman, W. Hoschek, P. Kunszt, M. Ripeanu, H. Stockinger, K. Stockinger, B. Tierney, “Giggle: A Framework for Constructing Scalable Replica Location Services”, Proceedings of the IEEE Supercomputing 2002 Conference, November 15, 2002,

T. Dunigan, M. Mathis and B. Tierney, “A TCP Tuning Daemon”, Proceedings of the IEEE Supercomputing 2002 Conference, November 10, 2002,

“High-Performance Networks for High-Impact Science”, High-Performance Network Planning Workshop, August 13, 2002,

Report of the High-Performance Network Planning Workshop. Conducted by the Office of Science, U.S. Department of Energy, August 13-15, 2002.

Participants and Contributors: Deb Agarwal, LBNL; Guy Almes, Internet 2; Bill St. Arnaund, Canarie, Inc.; Ray Bair, PNNL; Arthur Bland, ORNL; Javad Boroumand, Cisco; William Bradley, BNL; James Bury, AT&T; Charlie Catlett, ANL; Daniel Ciarlette, ORNL; Tim Clifford, Level 3; Carl Cork, LBL; Les Cottrell, SLAC; David Dixon, PNNL; Tom Dunnigan, Oak Ridge; Aaron Falk, USC/Information Sciences Inst.; Ian Foster, ANL; Dennis Gannon, Indiana Univ.; Jason Hodges, ORNL; Ron Johnson, Univ. of Washington; Bill Johnston, LBNL; Gerald Johnston, PNNL; Wesley Kaplow, Qwest; Dale Koelling, US Department of Energy; Bill Kramer, LBNL/NERSC; Maxim Kowalski, JLab; Jim Leighton, LBNL/Esnet; Phil LoCascio, ORNL; Mari Maeda, NSF; Mathew Mathis, Pittsburgh SuperComputing Center; William (Buff) Miner, US Department of Energy; Sandy Merola, LBNL; Thomas Ndousse-Fetter, US Department of Energy; Harvey Newman, Caltech; Peter O'Neil, NCAR; James Peppin, USC/Information Sciences Institute; Arnold Peskin, BNL; Walter Polansky, US Department of Energy; Larry Rahn, SNL; Anne Richeson, Qwest; Corby Schmitz, ANL; Thomas Schulthess, ORNL; George Seweryniak, US Department of Energy; David Schissel, General Atomics; Mary Anne Scott, US Department of Energy; Karen Sollins, MIT; Warren Strand, UCAR; Brian Tierney, LBL; Steven Wallace, Indiana University; James White, ORNL; Vicky White, US Department of Energy; Michael Wilde, ANL; Bill Wing, ORNL; Linda Winkler, ANL; Wu-chun Feng, LANL; Charles C. Young, SLAC.

D. Gunter, B. Tierney, K. Jackson, J. Lee, M. Stoufer, “Dynamic Monitoring of High-Performance Distributed Applications”, Proceedings of the 11th IEEE Symposium on High Performance Distributed Computing (HPDC-11), July 10, 2002,

2001

Brian L. Tierney, Joseph B. Evans, Dan Gunter, Jason Lee, Martin Stoufer, “Enabling Network-Aware Applications”, Proceedings of the 10th IEEE Symposium on High Performance Distributed Computing (HPDC-10), August 15, 2001,

2000

B. Tierney, B. Crowley, D. Gunter, M. Holding, J. Lee, M. Thompson, “A Monitoring Sensor Management System for Grid Environments”, Proceedings of the IEEE High Performance Distributed Computing conference (HPDC-9), August 10, 2000,

1999

Tierney, B., Lee, J., Crowley, B., Holding, M., Hylton, J., Drake, F., “A Network-Aware Distributed Storage Cache for Data Intensive Environments”, Proceedings of the IEEE High Performance Distributed Computing conference (HPDC-8), August 15, 1999,

1998

Tierney, B., W. Johnston, B. Crowley, G. Hoo, C. Brooks, D. Gunter, “The NetLogger Methodology for High Performance Distributed Systems Performance Analysis”, Proceedings of the IEEE High Performance Distributed Computing conference (HPDC-7), July 12, 1998,

2005

Daniel K. Gunter, Keith R. Jackson, David E. Konerding, Jason R. Lee and Brian L. Tierney, “Essential Grid Workflow Monitoring Elements”, The 2005 International Conference on Grid Computing and Applications (GCA'05), 2005,