Programming Environment (PE) Changes

July 16, 2018

On July 16, 2018 the programming environment was updated in several ways.
To assist with moving to the newer PE and CUDA 9.1, we are providing custom modules for testing. They can be swapped in from the PrgEnv that you are currently using as follows.
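A minimal sketch of the swap, using hypothetical module names (the actual names of the provided test modules differ):

module swap PrgEnv-cray PrgEnv-cray/6.0.4     # hypothetical test PrgEnv module; substitute the provided one
module swap cudatoolkit cudatoolkit/9.1       # hypothetical CUDA 9.1 toolkit module name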
Then add any additional modules as needed.
Python
Troubleshooting
"file "/opt/cray/hdf5-parallel/1.10.0/CRAY/8.3/include/HDF5.mod" contains modules and/or submodules. It must be recompiled as its format is old and unsupported. It is version 94 from release 8.3" then swap the hdf module with modules from
"error while loading shared libraries: libnvidia-fatbinaryloader.so.390.46: cannot open shared object file: No such file or directory" then add the following to your PBS job script:
March 29, 2017

On March 29, 2017 the programming environment defaults were changed from PE 16.04 to PE 16.11.
Prior to March 29, 2017, users were able to test the new configuration by making the following changes.

For all compilers
module swap craype craype/2.5.8
module swap modules modules/3.2.10.5
module swap cray-mpich cray-mpich/7.5.0

CCE
module swap cce cce/8.5.5

GNU (no change to default; 5.x and 6.x versions are available but do not support CUDA)
module swap PrgEnv-cray PrgEnv-gnu

Intel (no change to default Intel compiler)
module swap PrgEnv-cray PrgEnv-intel

PGI
module swap PrgEnv-cray PrgEnv-pgi
module swap pgi/16.3.0 pgi/16.9.0
module swap cray-libsci cray-libsci/16.11.1
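After making the swaps, a quick sanity check before rebuilding; a minimal sketch (the make targets are placeholders for your own build):

module list                 # confirm craype/2.5.8, cray-mpich/7.5.0, and the chosen compiler are loaded
cc -V                       # report the back-end compiler version (use cc --version under PrgEnv-gnu)
make clean && make          # rebuild the application against the test environment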
If you need to use other modules, cray-libsci for example, then please look at the list of modules above for the new default values.

June 7, 2016

To test the changes to the programming environment that will take place on June 20, 2016, moving from CUDA 7.0 to CUDA 7.5, we recommend trying the CPU modules that are needed to support the programming environment for CUDA 7.5. On June 20 the system defaults will be set to support CUDA 7.5, but until then you will not be able to test CUDA 7.5, only the CPU and communication modules.
To test the updated compiler and MPI on CPU code, please use the following module swaps in addition to your usual modules.
For all compilers
module swap craype/2.5.0 craype/2.5.4
module swap cray-mpich/7.3.0 cray-mpich/7.3.3

CCE
module swap cce/8.4.2 cce/8.4.6

GNU
module swap PrgEnv-cray PrgEnv-gnu   # No change in default GCC compiler

Intel
module swap PrgEnv-cray PrgEnv-intel
module swap intel/15.0.3.187 intel/16.0.3.210

PGI
module swap PrgEnv-cray PrgEnv-pgi
module swap pgi/15.10.0 pgi/16.3.0
If you need to use other modules, cray-libsci for example, then please look at the list of modules below for the new default values.
Defaults after maintenance June 20th, 2016.
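The new defaults can also be queried directly from the module system; a minimal sketch:

module avail cce            # versions marked "(default)" are the post-maintenance defaults
module avail cray-mpich
module show craype          # full details (paths, conflicts) for the default craype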
January 18, 2016

CLE 5.2 UP04
The Cray Linux Environment (CLE) 5.2 will be upgraded from UP02 to UP04. Included in this upgrade are a newer version of the 3.0.101 kernel and a newer version of the Lustre 2.5 client, as well as other system-related packages and modules. The upgrade also includes a set of security, stability, and functionality patches. In future maintenance events the Sonexion Lustre servers will be upgraded.

PE 15.12 and CUDA 7.0
The upgrade to UP04 allows the system to upgrade to NVIDIA CUDA 7.0, with a necessary upgrade of the Programming Environment (PE) for both XE and XK nodes. No recompilation is expected if standard linking practices have been used. If issues are encountered, please rebuild/relink or set your runtime environment to match the one used to originally build the application.

CrayPE
CrayPE will now detect when the compiler/linker has been called via configure and will not pass a link option to the compiler, deferring to its native link mode, e.g. gcc defaults to dynamic linking. This can speed up 'configure' runs by not forcing every compiler invocation to be linked statically, which is the default on Blue Waters.

CCE
CCE 8.4 supports CUDA 6.5 and 7.0. Applications using CUDA 5.5 can continue to use CCE releases prior to 8.4. CCE 8.4 now supports certain GNU features by default with the new default setting of -hgnu. Use -hnognu to disable this feature.

GCC
GCC 4.9.0 supports CUDA 7.0. GCC 5 is available on the system but does not support CUDA.

MPICH
A new feature was added to the Cray MPI library that allows users to display the high-water mark of the memory used by MPI. See the MPICH_MEMORY_REPORT environment variable in the intro_mpi man page for more information. A new Lustre file locking mode was also added that, when used with MPI-IO collective buffering, can increase write performance by up to 2x for large contiguous writes. The new MPI-IO hint is cray_cb_write_lock_mode=1. See the intro_mpi man page for more information (a usage sketch follows the module lists below). Please see the Programming Environment Changes page on the Blue Waters portal for more information.

Changed modules or packages of interest

Programming Environment Modules
Cray Linux Environment Modules
Complete List of updated User modules
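The two MPI features noted above, memory high-water-mark reporting and the new MPI-IO write lock mode, can be enabled from the job environment without code changes; a minimal sketch with a hypothetical executable my_app:

export MPICH_MEMORY_REPORT=1                             # print the MPI memory high-water mark when the job finalizes
export MPICH_MPIIO_HINTS="*:cray_cb_write_lock_mode=1"   # apply the new Lustre lock-mode hint to all MPI-IO files
aprun -n 2048 ./my_app

The hint can also be set in source with MPI_Info_set if per-file control is needed; the environment-variable route avoids recompiling.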
May 18, 2015

In support of CUDA 6.5 on Blue Waters, some changes to the default programming environment were made. The changes to cce, cray-mpich, and cray-libsci_acc are the most significant module version changes. The following table provides a list of changed default module versions and can be used to recreate the earlier environment if needed.
January 13, 2015

As discussed in recent User Calls, the Blue Waters software stack will be upgraded from CLE 4.2 to CLE 5.2 as part of the maintenance. Changes to the Programming Environment should be mostly transparent and not require action by the user. There are some exceptions.
It is recommended that statically linked binaries that use MPI, SHMEM, PGAS, Coarray Fortran, or CHARM++ be relinked. The DMAPP and uGNI libraries are tied to specific kernel versions, and no backward or forward compatibility is provided. Relinking is not needed for dynamically linked binaries, such as those using CUDA or OpenACC. We have not observed issues in tests of some user applications, but it is recommended that jobs be checked and binaries relinked as needed.
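To check how an existing binary was linked before deciding whether to relink, a minimal sketch (my_app is a placeholder):

file ./my_app               # reports "statically linked" or "dynamically linked"
ldd ./my_app                # for dynamic binaries, lists the shared libraries that will be loaded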
The Lustre client will be upgraded from 1.8.6 to 2.5.1. Wide striping (beyond 160 OSTs) will not be enabled due to an existing bug that should be resolved soon.
The Linux kernel will be upgraded from 2.6.32 to 3.0.101 with SLES 11 SP3.
August 18, 2014

The next default change in the Programming Environment (PE) will require rebuilding of MPI-based applications. With Cray's MPT 7.0.0 release of MPICH, when a swap to cray-mpich-compat/v7 is performed, the latest version of MPT 7.x.x and compatible libraries and products will be swapped in, based on the modules that were previously loaded. If any compatible libraries or products are not available, a message will be displayed and no additional modules will be swapped. To go back to the latest version of MPT 6.x.x and the latest compatible versions of the libraries, a user can module swap back to cray-mpich-compat/v6. Note: the modulefile does not remember the specific previous library and product versions; rather, it uses the latest installed versions that are compatible. Type "module help cray-mpich-compat/v7" for more information. The cray-mpich-compat/v7 module ensures that the modules loaded at the time it is loaded are compatible with MPICH 7.x.x.
Module versions with cray-mpich-compat/v7:
cray-mpich/7.0.2
cray-shmem/7.0.2
cray-ga/5.1.0.5
cray-libsci/13.0.0
cray-libsci_acc/3.0.2
cray-tpsl/1.4.1
cray-petsc/3.4.4.0
cray-hdf5-parallel/1.8.9
cray-netcdf-hdf5parallel/4.3.2
cray-parallel-netcdf/1.4.1
cray-trilinos/11.8.1.0
cce/8.3.2
perftools/6.2.0
For PrgEnv-gnu, CUDA applications need to use nvcc with gcc/4.8.2.
Module versions with cray-mpich-compat/v6:
cray-mpich/6.3.1
cray-shmem/6.3.1
cray-ga/5.1.0.4
cray-libsci/12.2.0
cray-libsci_acc/3.0.1
cray-tpsl/1.4.0
cray-petsc/3.4.3.1
cray-hdf5-parallel/1.8.12
cray-netcdf-hdf5parallel/4.3.1
cray-parallel-netcdf/1.4.0
cray-trilinos/11.6.1.0
cce/8.2.6
perftools/6.1.4
For PrgEnv-gnu please use gcc/4.8.2
Keeping cray-mpich 6.x compatibility

To continue to use the cray-mpich 6.x compatible release of the programming environment, you can add the cray-mpich-compat/v6 module to your ~/.modules file.
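A minimal sketch of the ~/.modules entry, assuming a plain module load of the compat module is sufficient:

# in ~/.modules
module load cray-mpich-compat/v6    # keep the MPT 6.x.x-compatible library stack after the default change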
Known Issues

- rca and cudatoolkit: If you are using rca functionality with a CUDA application, you will need to add the following to your environment when compiling: export PE_PKGCONFIG_LIBS=cray-rca:$PE_PKGCONFIG_LIBS
- CUDA and GCC 4.9.0: Applications using CUDA need to use gcc/4.8.2 when using nvcc.
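Putting the two known issues together, a minimal sketch of a CUDA-plus-rca build under PrgEnv-gnu (source file names are placeholders):

module swap PrgEnv-cray PrgEnv-gnu
module swap gcc gcc/4.8.2                              # nvcc does not work with gcc/4.9.0
module load cudatoolkit rca
export PE_PKGCONFIG_LIBS=cray-rca:$PE_PKGCONFIG_LIBS   # let the compiler wrappers pull in the rca link flags
nvcc -c kernel.cu                                      # placeholder CUDA source
cc -o my_app main.c kernel.o                           # wrapper links the MPI, rca, and CUDA pieces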