CHEP 2018 Plenary Speakers

Andrea Ceccanti is a staff member at the INFN National Computing Center (INFN-CNAF) where he works in the Distributed Systems and Software Development group on the design and development of AAI and storage management solutions in support of scientific research.

In the past 15 years, Andrea has been involved in several European projects (EGEE, EMI, INDIGO-DataCloud) working on the design, development, maintenance and evolution of key middleware components in use in WLCG, EGI and other scientific computing infrastructures.

Currently Andrea leads the development and maintenance of key INFN middleware products (VOMS, StoRM, Argus, the INDIGO Identity and Access Management service) and is involved in the WLCG, EOSC-Hub, EOSC-Pilot, Deep-HybridDataCloud and Extreme-DataCloud projects, where he serves as an expert in software development, AAI and distributed computing.

LeManuel Lee Bitsóí (Diné), EdD, is a critical ethnographer and bioethicist who currently serves as Chief Diversity Officer at Stony Brook University, Long Island, NY, where he also maintains a faculty appointment as Research Professor in the Department of Technology and Society.

Dr. Bitsóí has also served in diversity leadership roles at Dartmouth, Harvard, Georgetown and Rush University Medical Center in Chicago. Dr. Bitsóí is an indigenous scholar whose research and publication portfolio includes diversity and inclusion, social justice, access and equity, bioethical concerns, and understanding the impact of intergenerational trauma for indigenous people and communities.

Admirably, Dr. Bitsóí has devoted his career to enhancing opportunities for underrepresented minority students to become scientists, science educators and scientifically-informed community members.

Dr. Bitsóí earned a bachelor of science degree from the University of New Mexico, a master of education degree from Harvard University and a doctoral degree from the University of Pennsylvania.

David Rousseau is senior scientist at Laboratoire de l’Accélérateur Linéaire in Orsay, France (Univ. Paris-Sud CNRS/IN2P3, Université Paris-Saclay).

He obtained his PhD on ALEPH from Université Aix Marseille in 1992, then joined ATLAS in 1997.

He was involved in many software developments (in particular reconstruction algorithms and event data model) during the preparation and commissioning of the experiment and coordinated the offline software developments in 2011-2012.

Then he turned to Machine Learning, organizing the Higgs ML challenge in 2014, and now the tracking ML challenge, both on the Kaggle platform. He is co-coordinator of the ATLAS ML forum.

Daniel S. Katz is Assistant Director for Scientific Software and Applications at the National Center for Supercomputing Applications (NCSA), and Research Associate Professor in Computer Science, Electrical and Computer Engineering, and the School of Information Sciences (iSchool) at the University of Illinois Urbana-Champaign.

He is also Guest Faculty at Argonne National Laboratory and Adjunct Faculty at the Center for Computation & Technology, Louisiana State University.

He obtained his PhD in Electrical Engineering at Northwestern University in 1994.

His technical research interests are in applications, algorithms, fault tolerance, and programming in parallel and distributed computing, including HPC, Grid, Cloud, etc.

He is also interested in policy issues, including citation and credit mechanisms and practices associated with software and data, organization and community practices for collaboration, and career paths for computing researchers.

Imma Riu is scientific staff member of IFAE-BIST Barcelona and currently working in the ATLAS experiment at CERN.

She has been working in the Trigger and Data Acquisition (TDAQ) project since 2006, where she co-coordinated the first integration group before the Run 1 start-up, the trigger menu group during part of Run 1, and the Level-1 topological trigger group in 2016 and this year.

Most recently, she has been one of two main editors of the already approved ATLAS TDAQ Technical Design Report for the Phase-II High Luminosity LHC Upgrade.

She obtained her PhD in Physics at the Universitat Autònoma de Barcelona in 1998 with the W mass measurement in the ALEPH experiment. Later she worked in Hamburg at DESY and at the University of Geneva.

Jakob Blomer is a staff member in the scientific software group at CERN.

He is the original author of the CernVM File System.

Jakob received a PhD in computer science from the Technical University of Munich in 2012.

In 2014, he was a visiting scholar at the RAMCloud research group at Stanford University.

Besides his work on CernVM-FS, Jakob takes care of the CernVM virtual appliance and he recently started working with the ROOT team on the evolution of the I/O subsystem.

Andreas Peters is a member of the CERN IT storage group.

In 1997 he started as a student in the NA48 Collaboration at CERN, developing the first PC-farm-based data acquisition system and a zero suppression system for the electromagnetic calorimeter.

He obtained a PhD in physics at the University of Mainz in 2002, studying direct CP violation in the neutral kaon system.

As a research fellow he joined the ALICE experiment, working mainly on the development of Grid software and data management tools.

During his five years at the European grid project EGEE he focused on the development of end-user tools for distributed analysis and distributed data management.

In 2008 he joined the CERN data management group doing research and development for future data management at CERN.

Since 2010 he has been project leader and core developer of the EOS Open Storage platform, which provides 250 PB of disk space to experiments and a backend for CERNBox.

Karol Hennessy is a core research associate at the University of Liverpool.

He is co-coordinator of the DAQ for the ProtoDUNE-SP project as part of CERN’s neutrino platform.

In this role, he is commissioning the DAQ for the largest test-beam experiment to date, due to take data in autumn 2018.

He is DAQ work package leader for the LHCb VELO Upgrade, and has been working on Controls and DAQ for the LHCb Vertex Locator since its commissioning in 2008.

Prior to his PhD, he spent a year working for UMBC (Maryland) at NASA’s Goddard Space Flight Center.

He obtained his PhD in Particle Physics at University College Dublin and has worked primarily on DAQ and computing projects since then.

Steven Farrell is a Machine Learning Engineer at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (LBNL).

He works on deep learning for science research and on enabling deep learning workloads on HPC facilities.

His current research interests include deep learning applications for HEP such as particle track reconstruction, event classification, and generative models for simulation.

Prior to NERSC, Steven worked on the ATLAS experiment.

He worked on searches for Supersymmetry in his PhD work at University of California, Irvine, and then did software development and machine learning R&D as a Postdoc at LBNL.

T. Daniel Crawford holds the Ethyl Chair of Chemistry at Virginia Tech and is the Director of the Molecular Sciences Software Institute, an NSF-funded center designed to serve as a nexus for science, education, and cooperation for the global community of computational molecular scientists.

He received his Ph.D. in 1996 at the University of Georgia's Center for Computational Quantum Chemistry and subsequently held a postdoctoral appointment at the University of Texas.

Prof. Crawford's research efforts focus on accurate quantum mechanical models for molecular spectroscopy.

He is the recipient of the 2010 Dirac Medal of the World Association of Theoretical and Computational Chemists (WATOC) and a Fellow of the American Chemical Society.

Dr Rosie Bolton is based at the SKA Organisation, where her role is Project Scientist for the SKA Regional Centres - the advanced data centres where SKA data products will be stored and analysed to extract scientific results.

Her task is to establish how a global network of these regional centres should function and cope with the high demands of SKA - both in terms of data rates (up to 300 PBytes per year from each of the two telescopes) and user demands - thousands of users with complex data visualisation and analysis tasks to perform.

Philippe Charpentier is a senior applied physicist at CERN.

For 15 years he led the DELPHI Collaboration's Online project and was responsible for the experiment's data taking until the closure of LEP, when he joined the LHCb experiment.

In LHCb he has been involved in the Offline project (software and distributed computing) since the beginning.

He has represented LHCb in all successive (W)LCG boards (PEB, SC2, OB, GDB, CB) and was for several years LHCb Computing Coordinator.

In the last few years he was responsible for the LHCb data handling and processing subproject, and as such participated very actively in LHCb distributed computing operations as well as in software development in the DIRAC and LHCbDIRAC projects.

He will retire from CERN in Autumn 2018.

Thomas Kuhr is professor at the Ludwig-Maximilians University Munich and works on experimental flavor physics.

In his role as Software Coordinator of the Belle II Collaboration he focuses on collaborative software development.

He has led the Belle II software development from its beginning and initiated the formation of a Belle II distributed computing model.

He obtained his PhD in 2002 at the University of Hamburg, where he also contributed to the development of an object-oriented framework for the H1 experiment.

After two years as CERN Fellow, working on the ALICE software, he went to the University of Karlsruhe and joined the CDF and in 2008 the Belle and Belle II Collaborations.

He is a member of the German HEP Computing and Software Panel and chief referee for the Worldwide LHC Computing Grid in the LHC Experiments Committee.

Michel Jouvin has been working at Laboratoire de l’Accélérateur Linéaire (CNRS/IN2P3) for 25 years. Before becoming head of the Computing Division (30 people involved in software development and computing operations) in 2013, he worked in system management and computing operations. In particular, he was the technical manager of the GRIF grid site (8000 cores, 8 PB of disk storage), a distributed grid site in the Paris region involving 5 laboratories. He has been involved in European grid initiatives since 2001.

He has devoted considerable effort to networking and community building. After chairing the HEPiX forum (2007-2013), a world-wide forum of system/site administrators, he chaired the WLCG GDB for four years starting in 2013.

More recently, he has been one of the founders of the HEP Software Foundation. A member of its coordination team since its beginning in 2015, he was part of the core team that drove the Community White Paper process and did the editorial work.

Andreas Salzburger has been a CERN staff member since 2012. He was a Marie Curie Fellow at CERN from 2009 to 2012 and a post-doctoral scientific member of staff at DESY Zeuthen from 2008 to 2009. Andreas obtained his PhD at the University of Innsbruck in 2008 with the thesis “Track Simulation and Reconstruction for the ATLAS Experiment”, having studied Physics there from 1997 onwards.

Currently, Andreas is ATLAS Software Coordinator (with Upgrade focus) and a co-organiser of the Tracking Machine Learning Challenge. He previously served as ATLAS Reconstruction Coordinator, ATLAS Inner Detector Software Coordinator, offline run coordinator, and Tracking CP convener.

Jean-Yves Le Meur is currently the head of CERN's Digital Memory project, started in 2016.

Its goal is to ensure the long term preservation in digital format of the historical and recent assets of the organization.

He set up his first web server in 1993, the CERN preprint server, before leading, as section leader, the development of the CERN Document, Library and Multimedia/Webcast services in the following years.

In 2002, he launched and managed the underlying open source Institutional Repository software, Invenio, and created its sister application, Indico, dedicated to the capture and management of conference content.

Today, this software is used worldwide, and since 2013, Jean-Yves Le Meur has driven the creation of the CERN spin-off company that sells services on top of the Invenio framework.

Luca dell’Agnello is Director of Technology Research at INFN and currently coordinates the INFN Tier-1 data center located at INFN CNAF in Bologna, and he also represents INFN in the WLCG Management Board.

He graduated in Physics at the University of Firenze in 1992 and has worked on computing since then.

He joined INFN in 1996, working first on the GARR project (the Italian NREN) and then on the European projects Datagrid and Datacloud.

Since 2003 he has been involved in the start-up of the INFN Tier-1.

He was a member of the GARR technical committee.

Axel Naumann started off as a physicist, then took the exit into the land of physics computing more than ten years ago by joining, and now leading, the ROOT team.

He represents CERN and its users on the ISO C++ committee.

Paul Jackson leads the University of Adelaide Experimental Particle Physics group and he is a member of the ATLAS and Belle II collaborations.

His expertise is in searches for beyond Standard Model physics using novel techniques, and in detector readout.

He is also a member of the GAMBIT collaboration, performing global fits to data from multiple sources.

Dr Kamleh is a leading expert in computational physics, with a focus on the application of advanced algorithms and technologies to non-perturbative simulations. He has conducted extensive work in the field of lattice QCD, examining vacuum structure, resonance physics, electromagnetic interactions, and dynamical fermion algorithms. An early adopter of GPU technologies, he has led the transformation of the lattice QCD program at the University of Adelaide onto the GPU accelerator platform.

  • Awarded PhD in 2004 from the University of Adelaide.
  • Post-doc at Trinity College Dublin (Ireland) from 2005 to 2006.
  • Returned to Adelaide in 2007 where he is currently a University Research Fellow.

James Amundson is a senior staff member at Fermilab, where he heads the Scientific Software Infrastructure Department in the Scientific Computing Division.

Since completing his Ph.D. in Theoretical Particle Physics at the University of Chicago, he has worked on all aspects of particle physics, from theory (the production and decay of heavy quarks) to experiment (early grid work for CMS), accelerators (high-performance computing simulation of particle accelerators) and, most recently, applied quantum computing for HEP.

He and his collaborators at Fermilab recently developed a new quantum algorithm for the simulation of interacting fermion-boson systems. He also leads the Community Project for Accelerator Science and Simulation 4 (ComPASS4), a U.S. DOE SciDAC project.

Liz Sexton-Kennedy, former CMS software and computing coordinator and current CIO of Fermilab, will present Jim’s talk.