Embedded CSE (eCSE) support
Through a series of regular calls, Embedded CSE (eCSE) support provides funding for the ARCHER user community to develop software for ARCHER in a sustainable manner.
Details of any current eCSE calls and their remit can be found on the eCSE calls page.
- Application Guidance
- Proposal Template
- eCSE projects funded from previous calls
- Codes worked on during eCSE projects
- eCSE Final report template
- Acknowledgement of eCSE funding
The eCSE programme provides tangible software enhancements to the communities exploiting software on ARCHER. This in turn has led to significant scientific advancements and both economic and social benefits to society.
Reports from completed eCSE projects
The dolfin-adjoint package enables the automated optimisation of problems constrained by partial differential equations (PDEs). These problems are ubiquitous in engineering. Prior to this project, dolfin-adjoint was limited to serial optimisation libraries with no concept of parallel linear algebra. Now, the software is equipped with a Python-based interface to the parallel algorithms included in PETSc, thereby eliminating the performance penalty associated with gathering the data and lifting the upper bound on the size of problems that can be considered.
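The cost of "gathering the data" that this project eliminated can be illustrated with a toy example. Plain Python stands in here for the real MPI/PETSc machinery, and all names are illustrative, not dolfin-adjoint's API:

```python
# Toy illustration: why gathering distributed data onto one process
# limits problem size, while a distributed reduction does not.
# Plain Python stands in for MPI/PETSc; all names are illustrative.

def gathered_dot(chunks_a, chunks_b):
    """Gather all chunks onto one 'rank', then compute the dot product.
    Peak memory on that rank is proportional to the FULL problem size."""
    full_a = [x for chunk in chunks_a for x in chunk]  # O(N) on one rank
    full_b = [x for chunk in chunks_b for x in chunk]
    return sum(x * y for x, y in zip(full_a, full_b))

def distributed_dot(chunks_a, chunks_b):
    """Each 'rank' reduces its own chunk; only scalars are combined,
    so peak memory per rank stays proportional to N / nranks."""
    partials = [sum(x * y for x, y in zip(ca, cb))
                for ca, cb in zip(chunks_a, chunks_b)]
    return sum(partials)  # stands in for an MPI_Allreduce

# Four 'ranks', each owning a slice of two vectors
a = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
b = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
assert gathered_dot(a, b) == distributed_dot(a, b) == 36.0
```

Both routes give the same answer; the distributed route simply never materialises the full vectors on a single process, which is what lifts the upper bound on problem size.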
CASTEP is a UK-developed and widely used program for the quantum mechanical modelling of materials. This project aimed to improve the performance of CASTEP for large calculations on ARCHER, through two key approaches: by reducing memory usage per node, and by improving calculation time per iteration. The project added a new level of parallelism to CASTEP, allowing memory to be shared between CPU cores. The work has laid the foundations for a version of CASTEP that will work well on future exascale supercomputers.
The main objective of this project was to tune the electronic structure code FHI-aims for optimal efficiency on ARCHER's hardware, both as a standalone code and as a key component within two software packages - KLMC and ChemShell - for the benefit of members of the Materials Chemistry Consortium (MCC) and other new or current users. Improving the scalability for a wide range of applications to enable new science was the overarching target. Our work enables and encourages the undertaking of new research using current and emerging UK computer facilities in a wide range of fields including materials and life sciences, chemistry, physics, and engineering.
Large-scale DFT codes such as ONETEP allow us to predict ground-state properties for large systems such as complex biomolecules. This project aimed to enhance, extend and improve the implementation in ONETEP of Linear Response Time-Dependent Density-Functional Theory (LR-TDDFT), the method of choice for computing optical properties of large systems. Previous work by the investigators had proven that the method scaled linearly with system size, enabling simulations to reach the kind of system sizes that can be probed experimentally. However, the method had not yet been widely applied, optimised, or its accuracy extensively tested. This project provided an opportunity to test this functionality and build upon it significantly.
The Community Earth System Model (CESM) is a state-of-the-art coupled climate model for simulating the Earth's climate system. Composed of four separate sub-models simultaneously simulating the Earth's atmosphere, ocean, land surface and sea-ice, and one central coupler component, CESM allows researchers to conduct fundamental research into the Earth's past, present and future climate states. This project has ported and optimised two versions of CESM for ARCHER, and these are now both readily usable by the UK climate research community on ARCHER. This is expected to generate further growth of interest in the model.
Chemical reactions, drug-protein interactions, and many chemical and physical processes on surfaces are examples of technologically important processes that happen in the presence of solvents. The inclusion of electrolytes (salt) in solvents such as water is crucial for biomolecular simulations, as most processes (e.g. protein-protein or protein-drug interactions or DNA mutations) take place in saline solutions. This project aimed to develop the capability to model electrolyte-containing solvents in quantum-mechanical simulations of materials from first principles. Using a linear-scaling code such as ONETEP enables simulations to be performed on entire biomolecules or catalysts that typically involve hundreds or thousands of atoms.
Performance enhancement in RMT codes in preparation for application to circular polarised light fields
The RMT method is a new method for solving the time-dependent Schrödinger equation, the fundamental equation of motion for objects at the atomic and sub-atomic level. The RMT approach demonstrates great efficiency and accuracy, primarily because it employs finite-difference (FD) techniques to model the ejected electron far from the atomic core. This project largely involved the optimisation of the inner-region communication algorithms of the RMT code. The new communications design enabled a new load balancing algorithm, which increased the speed of the code by up to a factor of 5, and reduced the amount of RAM used during initialisation by one or more orders of magnitude.
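The details of RMT's load balancing algorithm are not given here, but the general idea behind such algorithms can be sketched with a classic greedy heuristic: assign the largest pieces of work first, always to the least-loaded process, so no single process becomes the bottleneck:

```python
# Illustrative sketch of static load balancing (NOT the RMT algorithm
# itself, which is not described in the source): assign unevenly sized
# work items to processes so the most-loaded process finishes as early
# as possible, using the greedy longest-processing-time heuristic.
import heapq

def balance(work_costs, nprocs):
    """Return a dict mapping process -> list of work-item indices."""
    heap = [(0.0, p, []) for p in range(nprocs)]  # (load, proc, items)
    heapq.heapify(heap)
    for idx in sorted(range(len(work_costs)),
                      key=lambda i: -work_costs[i]):  # largest first
        load, p, items = heapq.heappop(heap)          # least-loaded proc
        items.append(idx)
        heapq.heappush(heap, (load + work_costs[idx], p, items))
    return {p: items for _load, p, items in heap}

costs = [9, 7, 6, 5, 4, 3, 2]  # uneven work items
plan = balance(costs, 3)
loads = [sum(costs[i] for i in items) for items in plan.values()]
print(max(loads))  # 13 here; naive contiguous chunking gives 16
```

The gain comes from spreading expensive items across processes instead of handing each process a contiguous, and possibly very uneven, block of work.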
Scientists at the University of Hull have developed their simulation software to utilise ARCHER to model complete bones or large sections of bones. This offers the exciting opportunity to model skeletal development and adaptation. The potential benefits are enormous, ranging from a better understanding of both the fundamental biomechanics of bone and the cause and effects of musculoskeletal conditions, to better implant design.
TPLS (Two Phase Level Set) is a powerful 3D Direct Numerical Simulation (DNS) solver that is able to simulate multi-phase flows at unprecedented detail, speed and accuracy with applications in energy (oil/gas pipeline flows, microelectronic cooling via phase-change), environment (carbon capture and cleaning) and health (flows in retinal capillaries).
Scientists at the University of Edinburgh have been working on the code, converting the previously serial I/O to a scalable parallel implementation. This resulted in a halving of the time taken for each simulation, from set-up to complete analysis.
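A halving of end-to-end time is consistent with simple Amdahl-style arithmetic: once compute scales to many processes, a serial I/O stage can come to dominate the runtime, so parallelising it removes most of the remaining cost. The numbers in this toy model are illustrative only, not TPLS measurements:

```python
# Toy Amdahl-style model of the effect of parallelising serial I/O.
# The numbers are illustrative only, not measurements from TPLS.

def total_time(compute, io, nprocs, parallel_io):
    """End-to-end time: compute scales with nprocs; I/O scales only
    if it has been parallelised, otherwise it stays serial."""
    io_time = io / nprocs if parallel_io else io
    return compute / nprocs + io_time

# When serial I/O has grown to half the runtime at scale,
# parallelising it roughly halves the total:
before = total_time(compute=1024.0, io=8.0, nprocs=128, parallel_io=False)
after = total_time(compute=1024.0, io=8.0, nprocs=128, parallel_io=True)
print(before, after)  # 16.0 vs ~8.06: close to a 2x speedup
```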
At Imperial College London, scientists have been working with the Fluidity CFD code to improve file I/O and the performance and scalability of the underlying PETSc library. The resulting performance increase is then utilised to improve the mesh initialisation performance of the Fluidity CFD code through run-time distribution and more efficient data migration. In addition to these performance improvements, the capabilities of PETSc's DMPlexDistribute interface have been extended to include load-balancing and re-distribution of parallel meshes, as well as the ability to generate multiple levels of partition overlap. Furthermore, support for additional mesh file formats has been added to DMPlex and Fluidity, including binary Gmsh and Fluent Case.
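The notion of "partition overlap" (ghost or halo cells bordering each process's owned cells) can be illustrated on a toy 1D mesh. This is a conceptual sketch in plain Python, not PETSc's DMPlex API:

```python
# Conceptual sketch of partition overlap (ghost/halo cells) on a toy
# 1D mesh of 8 cells split across two "ranks". Plain Python, not
# PETSc's DMPlex API; all names are illustrative.

def neighbours(cell, ncells):
    """Cells sharing a face with `cell` in a 1D chain mesh."""
    return [c for c in (cell - 1, cell + 1) if 0 <= c < ncells]

def add_overlap(owned, ncells, levels=1):
    """Grow a partition by `levels` layers of neighbouring cells,
    mimicking the idea of multiple levels of partition overlap."""
    halo = set(owned)
    for _ in range(levels):
        halo |= {n for c in halo for n in neighbours(c, ncells)}
    return sorted(halo)

ncells = 8
rank0, rank1 = [0, 1, 2, 3], [4, 5, 6, 7]
print(add_overlap(rank0, ncells, levels=1))  # [0, 1, 2, 3, 4]
print(add_overlap(rank1, ncells, levels=2))  # [2, 3, 4, 5, 6, 7]
```

Each extra level of overlap pulls in one more layer of cells from neighbouring partitions, which is what stencil-based solvers need to compute near partition boundaries without extra communication.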