
Programming Environment (PE) Changes 

July 16, 2018

On July 16, 2018 the programming environment was updated in several ways.

  • New compilers and associated libraries from a new programming environment (PE) were installed as non-default. Macro modules are provided to move to the new PE; see below.
  • CUDA 9.1 drivers were loaded onto the XK nodes. 
  • The system was returned to service with gcc/4.9.3 and cudatoolkit/9.1.85_3.10-1.0502.df1cc54.3.1 as the defaults. Tests have shown that this combination is stable and lets existing XK and XE binaries run as is.
  • To get the full benefit of CUDA 9.1, use gcc/6.3.0.
  • Note: gcc/7.3.0 is not supported by CUDA 9.1.
  • Note: There is a C++ ABI change when moving from gcc/4.9.3 to gcc/6.3.0. Users are encouraged to rebuild their software stack when moving to gcc/6.3.0.

To assist with moving to the newer PE and CUDA 9.1, we are providing custom modules for testing. They can be swapped in from the PrgEnv module you are currently using as follows; a combined example is sketched after the lists below. Note that the cudatoolkit/9.1.85_3.10-1.0502.df1cc54.3.1 module is not loaded by these modules.

  • module unload PrgEnv-cray; module load PrgEnv/cray-18_06-cuda-9.1
  • module unload PrgEnv-gnu; module load PrgEnv/gnu-6.3.0-cuda-9.1
  • module unload PrgEnv-intel; module load PrgEnv/intel-18.0.3.222-cuda-9.1
  • module unload PrgEnv-pgi; module load PrgEnv/pgi-18.3.0-cuda-9.1

Then add as needed

  • OpenACC
    • module add craype-accel-nvidia35

  • CUDA
    • module add cudatoolkit/9.1.85_3.10-1.0502.df1cc54.3.1
  • HDF5
    • module add cray-hdf5/1.10.2.0
    • module add cray-hdf5-parallel/1.10.2.0
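For example, a complete swap to the GNU-based test environment with CUDA, OpenACC, and parallel HDF5 support could look like the following (a sketch only; load just the pieces your application actually needs):

module unload PrgEnv-gnu
module load PrgEnv/gnu-6.3.0-cuda-9.1
module add craype-accel-nvidia35
module add cudatoolkit/9.1.85_3.10-1.0502.df1cc54.3.1
module add cray-hdf5-parallel/1.10.2.0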

Python

  • At this time, cudatoolkit/7.5.18-1.0502.10743.2.1 and gcc/4.9.3 should be used when building against bwpy/1.2.X until bwpy/2.0.0 is released.
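For example, a build environment against the current bwpy could be set up as follows (a sketch; it assumes the bwpy module is loaded by name and that the session defaults have not been changed):

module load bwpy                                    # currently the 1.2.X series
module load cudatoolkit/7.5.18-1.0502.10743.2.1     # CUDA 7.5, not 9.1
module list                                         # confirm gcc/4.9.3 is the GCC in use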

Troubleshooting

  • If you see messages like 

"file "/opt/cray/hdf5-parallel/1.10.0/CRAY/8.3/include/HDF5.mod" contains modules and/or submodules.  It must be recompiled as its format is old and unsupported.  It is version 94 from release 8.3"

then swap the HDF5 module for the corresponding module from the 1.10.2.0 release.
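For example, assuming the parallel HDF5 module produced the message, the swap would be:

module swap cray-hdf5-parallel cray-hdf5-parallel/1.10.2.0

or, for the serial module:

module swap cray-hdf5 cray-hdf5/1.10.2.0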

  • If you see messages about missing NVIDIA libraries, or about being unable to load NVIDIA libraries, such as

 "error while loading shared libraries: libnvidia-fatbinaryloader.so.390.46: cannot open shared object file: No such file or directory"

then add the following to your PBS job script:

export LD_LIBRARY_PATH=/opt/cray/nvidia/default/lib64:$LD_LIBRARY_PATH
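A minimal sketch of where the export fits in an XK job script (the resource request and executable name are placeholders):

#!/bin/bash
#PBS -l nodes=1:ppn=16:xk
#PBS -l walltime=00:30:00
cd $PBS_O_WORKDIR
export LD_LIBRARY_PATH=/opt/cray/nvidia/default/lib64:$LD_LIBRARY_PATH
aprun -n 1 ./my_gpu_app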

  • The CUDA 7.5 toolkit (cudatoolkit/7.5.18-1.0502.10743.2.1) is available and can be used with gcc/4.9.3 and current default modules.

 

March 29, 2017

On March 29, 2017 the programming environment defaults were changed from PE 16.04 to PE 16.11.

 

module new default old default
atp 2.0.4 2.0.1
cce 8.5.5 8.4.6
cray-ccdb 2.0.3 2.0.0
cray-ga 5.3.0.7 5.3.0.6
cray-hdf5 1.10.0 1.8.16
cray-hdf5-parallel 1.10.0 1.8.16
cray-lgdb 3.0.4 3.0.0
cray-libsci 16.11.1 16.03.1
cray-mpich 7.5.0 7.3.3
cray-mpich-abi 7.5.0 7.3.3
cray-netcdf 4.4.1 4.4.0
cray-netcdf-hdf5parallel 4.4.1 4.4.0
cray-petsc 3.7.2.1 3.6.3.0
cray-petsc-64 3.7.2.1 3.6.3.0
cray-petsc-complex 3.7.2.1 3.6.3.0
cray-petsc-complex-64 3.7.2.1 3.6.3.0
cray-shmem 7.5.0 7.3.3
cray-tpsl 16.07.1 16.03.1
cray-tpsl-64 16.07.1 16.03.1
cray-trilinos 12.6.3.3 12.2.1.0
craype 2.5.8 2.5.4
craypkg-gen 1.3.4 1.3.3
darshan 3.1.3 2.3.0
ddt 6.0.6.1 6.0.3.1
ddt-memdebug 6.0.6.1 6.0.3.1
fftw 3.3.4.10 3.3.4.7
forge 6.0.6.1 6.0.3.1
iobuf 2.0.7 2.0.6
modules 3.2.10.5 3.2.10.4
papi 5.5.0.1 5.4.3.1
perftools 6.4.3 6.3.2
perftools-base 6.4.3 6.3.2
perftools-lite 6.4.3 6.3.2
pgi 16.9.0 16.3.0
stat 2.2.0.3 2.2.0.2

Prior to March 29, 2017, users were able to test the new configuration by making the following changes.

For all compilers 

 

module swap craype craype/2.5.8

module swap modules modules/3.2.10.5

module swap cray-mpich  cray-mpich/7.5.0

CCE

module swap cce cce/8.5.5
module swap cray-libsci cray-libsci/16.11.1

GNU (no change to the default compiler; 5.x and 6.x versions are available but do not support CUDA)

module swap PrgEnv-cray PrgEnv-gnu
module swap cray-libsci cray-libsci/16.11.1

Intel (no change to default Intel compiler)

 

module swap PrgEnv-cray PrgEnv-intel

PGI

module swap PrgEnv-cray PrgEnv-pgi

module swap pgi/16.3.0 pgi/16.9.0

module swap cray-libsci cray-libsci/16.11.1

 

If you need to use other modules, cray-libsci for example, then please look at the list of modules above for the new default values.
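For instance, to pick up the new default cray-hdf5 ahead of the change (a sketch; substitute whichever modules from the table your application actually uses):

module swap cray-hdf5 cray-hdf5/1.10.0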

June 7, 2016

On June 20, 2016 the programming environment will move from CUDA 7.0 to CUDA 7.5. To test these changes in advance, we recommend trying the CPU modules that are needed to support the CUDA 7.5 programming environment.

On June 20 the system defaults will be set to support CUDA 7.5, but until then you will not be able to test CUDA 7.5 itself, only the CPU and communication modules.

 

In order to test the updated compiler and MPI on CPU code, please load the following in addition to your usual modules.

 

For all compilers 

 

module swap craype/2.5.0 craype/2.5.4

module swap cray-mpich/7.3.0  cray-mpich/7.3.3

 

CCE

module swap cce/8.4.2  cce/8.4.6

GNU

module swap PrgEnv-cray PrgEnv-gnu

# No change in default GCC compiler

Intel

module swap PrgEnv-cray PrgEnv-intel

module swap intel/15.0.3.187  intel/16.0.3.210

PGI

module swap PrgEnv-cray PrgEnv-pgi

module swap pgi/15.10.0 pgi/16.3.0

 

If you need to use other modules, cray-libsci for example, then please look at the list of modules below for the new default values.
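For instance, to test against the new default cray-libsci ahead of the change (a sketch; substitute whichever modules from the table your application actually uses):

module swap cray-libsci cray-libsci/16.03.1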

 

Defaults after maintenance June 20th, 2016.

module  new default old default
atp 2.0.1 1.8.3
cce 8.4.6 8.4.2
cray-ccdb 2.0.0 1.0.7
cray-ga 5.3.0.6 5.3.0.4
cray-hdf5 1.8.16 1.8.14
cray-hdf5-parallel 1.8.16 1.8.14
cray-lgdb 3.0.0 2.4.5
cray-libsci 16.03.1 13.3.0
cray-libsci_acc 16.03.1 13.3.0
cray-mpich 7.3.3 7.3.0
cray-mpich-abi 7.3.3 7.3.0
cray-netcdf 4.4.0 4.3.3.1
cray-netcdf-hdf5parallel 4.4.0 4.3.3.1
cray-parallel-netcdf 1.7.0 1.6.1
cray-petsc 3.6.3.0 3.6.1.0
cray-petsc-64 3.6.3.0 3.6.1.0
cray-petsc-complex 3.6.3.0 3.6.1.0
cray-petsc-complex-64 3.6.3.0 3.6.1.0
cray-shmem 7.3.3 7.3.0
cray-tpsl 16.03.1 1.5.2
cray-tpsl-64 16.03.1 1.5.2
cray-trilinos 12.2.1.0 11.12.1.5
craype 2.5.4 2.5.0
craypkg-gen 1.3.3 1.3.2
cudatoolkit 7.5.18-1.0502.10743.2.1 7.0.28-1.0502.10742.5.1
ddt 6.0.3.1 5.1.0.2
ddt-memdebug 6.0.3.1 5.1.0.2
fftw 3.3.4.7 3.3.4.6
forge 6.0.3.1 5.1.0.2
intel 16.0.3.210 15.0.3.187
modules 3.2.10.4 3.2.10.3
papi 5.4.3.1 5.4.1.3
perftools 6.3.2 6.3.1
perftools-base 6.3.2 6.3.1
perftools-lite 6.3.2 6.3.1
pgi 16.3.0 15.10.0
stat 2.2.0.2 2.2.0.1

 

January 18, 2016

CLE 5.2 UP04

The Cray Linux Environment (CLE) 5.2 will be upgraded from UP02 to UP04. Included in this upgrade are a newer version of the 3.0.101 kernel, a newer version of the Lustre 2.5 client, and other system-related packages and modules. The upgrade also includes a set of security, stability, and functionality patches. In future maintenance events the Sonexion Lustre servers will be upgraded.

PE15.12 and CUDA 7.0

The upgrade to UP04 allows the system to move to NVIDIA CUDA 7.0, along with a necessary upgrade of the Programming Environment (PE) for both XE and XK nodes. No recompilation is expected if standard linking practices have been used. If issues are encountered, please rebuild/relink or set your runtime environment to match the one used to originally build the application.

CrayPE

CrayPE will now detect when the compiler/linker has been called via configure and will not pass a link option to the compiler, deferring to the compiler's native default (e.g. gcc defaults to dynamic linking). This can speed up invocations of 'configure' by not forcing everything it builds to be linked statically, which is the default on Blue Waters.
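If you prefer to control the link type explicitly rather than rely on this detection, the CRAYPE_LINK_TYPE environment variable honored by the CrayPE compiler wrappers can be set before building (a sketch; the source and binary names are placeholders):

export CRAYPE_LINK_TYPE=dynamic   # or "static", the Blue Waters default
cc -o myapp myapp.c               # the cc/CC/ftn wrappers pick up the link type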

CCE

CCE 8.4 supports CUDA 6.5 and 7.0. Applications using CUDA 5.5 can continue to use CCE releases prior to 8.4. CCE 8.4 now enables certain GNU features by default via the new default setting of -hgnu. Use -hnognu to disable this behavior.
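For example, to build with CCE 8.4 while explicitly opting out of the GNU extensions mentioned above (the source file name is hypothetical):

ftn -hnognu -o app app.f90    # disable the new -hgnu default
ftn -o app app.f90            # accepts GNU extensions by default with CCE 8.4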

GCC

GCC 4.9.0 supports CUDA 7.0.  GCC 5 is available on the system but does not support CUDA.

MPICH

A new feature was added to the Cray MPI library that allows users to display the high-water mark of the memory used by MPI. See the MPICH_MEMORY_REPORT environment variable in the intro_mpi man page for more information. A new Lustre file locking mode was also added to the Cray MPI library that, when used with MPI-IO collective buffering, can increase write performance by up to 2x for large contiguous writes. The new MPI-IO hint is cray_cb_write_lock_mode=1. See the intro_mpi man page for more information.
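A sketch of how these two features could be enabled in a job script (the rank count and executable name are placeholders; see the intro_mpi man page for the exact semantics and accepted values, and note that MPI-IO hints can also be set via MPI_Info in code):

export MPICH_MEMORY_REPORT=1                             # report the MPI memory high-water mark
export MPICH_MPIIO_HINTS="*:cray_cb_write_lock_mode=1"   # apply the new hint to all files
aprun -n 1024 ./my_mpi_app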

Please see the Programming Environment Changes page on the Blue Waters portal for more information. 

Changed modules or packages of interest

Programming Environment Modules

Default Modules after-upgrade current
PrgEnv-xyz 5.2.82 5.2.40
cce 8.4.2 8.3.10
cray-libsci 13.3.0 13.0.3
cray-mpich 7.3.0 7.2.0
craype 2.5.0 2.3.0
pgi 15.10.0 14.2.0
gcc 4.9.3 4.8.2
intel 15.0.3.187 15.0.3.187

 

Cray Linux Environment Modules

System Modules post-upgrade pre-upgrade
cudatoolkit 7.0.28-1.0502.10742.5.1 6.5.14-1.0502.9613.6.1
pmi 5.0.10-1.0000.11050.179.3 5.0.6-1.0000.10439.140.3
rca 1.0.0-2.0502.60530.1.63 1.0.0-2.0502.53711.3.125
xpmem 0.1-2.0502.64982.5.3 0.1-2.0502.55507.3.2
dmapp 7.0.1-1.0502.11080.8.74 7.0.1-1.0502.9501.5.211
gni-headers 4.0-1.0502.10859.7.8 3.0-1.0502.9684.5.2
ugni 6.0-1.0502.10863.8.28 5.0-1.0502.9685.4.24

 

Complete List of updated User modules

User Modules post-upgrade pre-upgrade
PrgEnv-cray 5.2.82 5.2.40
PrgEnv-gnu 5.2.82 5.2.40
PrgEnv-intel 5.2.82 5.2.40
PrgEnv-pgi 5.2.82 5.2.40
acml 5.3.1 5.3.1
atp 1.8.3 1.8.1
cce 8.4.2 8.3.10
chapel 1.11.0 1.8.0
cray-ccdb 1.0.7 1.0.6
cray-ga 5.3.0.4 5.3.0.1
cray-hdf5 1.8.14 1.8.13
cray-hdf5-parallel 1.8.14 1.8.13
cray-lgdb 2.4.5 2.4.2
cray-libsci 13.3.0 13.0.3
cray-libsci_acc 3.3.0 3.1.1
cray-mpich 7.3.0 7.2.0
cray-netcdf 4.3.3.1 4.3.2
cray-netcdf-hdf5parallel 4.3.3.1 4.3.2
cray-parallel-netcdf 1.6.1 1.6.0
cray-petsc 3.6.1.0 3.5.3.0
cray-petsc-64 3.6.1.0 N/A
cray-petsc-complex 3.6.1.0 3.5.3.0
cray-petsc-complex-64 3.6.1.0 N/A
cray-shmem 7.3.0 7.2.0
cray-tpsl 1.5.2 1.4.4
cray-tpsl-64 1.5.2 N/A
cray-trilinos 11.12.1.5 11.12.1.2
craype 2.5.0 2.3.0
ddt 5.1.0.2 5.0.1.4_43050
ddt-memdebug 5.1.0.2 5.0.1.4_43050
fftw 3.3.4.6 3.3.4.1
gcc 4.9.0 4.8.2
iobuf 2.0.6 2.0.5
java jdk1.8.0_51 jdk1.7.0_45
papi 5.4.1.3 5.4.0.1
perftools 6.3.1 6.2.3
perftools-lite 6.3.1 6.2.3
pgi 15.10.0 14.2.0

 

 

May 18, 2015

In support of CUDA 6.5 on Blue Waters, some changes to the default programming environment were made. The changes to cce, cray-mpich, and cray-libsci_acc represent the larger changes to module versions. The following table provides a list of changed default module versions and can be used to recreate the earlier environment if needed.

module before after
atp 1.7.5 1.8.1
cce 8.3.3 8.3.10
chapel 1.10.0 1.8.0
cray-ccdb 1.0.3 1.0.6
cray-ga 5.1.0.5 5.3.0.1
cray-lgdb 2.3.2 2.4.2
cray-libsci 13.0.1 13.0.3
cray-libsci_acc 3.0.2 3.1.1
cray-mpich 7.0.3 7.2.0
cray-parallel-netcdf 1.5.0 1.6.0
cray-petsc 3.5.1.0 3.5.3.0
cray-petsc-complex 3.5.1.0 3.5.3.0
cray-shmem 7.0.3 7.2.0
cray-tpsl 1.4.1 1.4.4
cray-trilinos 11.8.1.0 11.12.1.2
craype 2.2.1 2.3.0
cudatoolkit 5.5.51-1.0502.9594.3.1 6.5.14-1.0502.9613.6.1
ddt 4.2.1.4_37994 5.0.0.2_40932
fftw 3.3.4.0 3.3.4.1
papi 5.3.2 5.4.0.1
perftools 6.2.1 6.2.3
perftools-lite 6.2.1 6.2.3
pmi 5.0.5-1.0000.10300.134.8.gem 5.0.6-1.0000.10439.140.3.gem
rca 1.0.0-2.0502.53711.3.125.gem-get-fix 1.0.0-2.0502.53711.3.125.gem-rca-fix
 

January 13, 2015

As discussed in recent User Calls, the Blue Waters software stack will be upgraded from CLE 4.2 to CLE 5.2 as part of the maintenance. Changes to the Programming Environment should be mostly transparent and not require action by the user. There are some exceptions.

 

It is recommended that statically linked binaries that use MPI, SHMEM, PGAS, Coarray Fortran, or CHARM++ be relinked. The DMAPP and uGNI libraries are tied to specific kernel versions, and no backward or forward compatibility is provided. Relinking is not needed for dynamically linked binaries such as those using CUDA or OpenACC. We have not observed issues in tests of some user applications, but it is recommended that jobs be checked and binaries relinked as needed.
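A quick way to check which of your binaries are affected, and a sketch of the remedy (the binary name and link line are placeholders):

ldd ./my_app        # statically linked binaries report "not a dynamic executable"
# For affected binaries, rerun your usual link step after the maintenance, e.g.:
# ftn -o my_app my_app.o ...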

 

The Lustre client will be upgraded from 1.8.6 to 2.5.1. Wide striping (beyond 160 OSTs) will not be enabled due to an existing bug that should be resolved soon.

 

The Linux kernel will be upgraded from 2.6.32 to 3.0.101 with SLES 11 SP3.

 

August 18, 2014

The next default change in the Programming Environment (PE) will require rebuilding of MPI-based applications.

Cray's MPT 7.0.0 release of MPICH is not binary compatible with the MPT 6.x.x releases, which is why rebuilding is required. When a "module load cray-mpich-compat/v7" is performed, the latest version of MPT 7.x.x and compatible libraries and products will be swapped in, based on the modules that were previously loaded. If any compatible libraries or products are not available, a message will be displayed and no additional modules will be swapped. To go back to the latest version of MPT 6.x.x and the latest compatible versions of the libraries, a user can module swap back to cray-mpich-compat/v6. Note: the modulefile does not remember the specific previous library and product versions; rather, it uses the latest installed versions that are compatible. Type "module help cray-mpich-compat/v7" for more information.

The cray-mpich-compat/v7 module ensures that the currently loaded modules, at the time it is loaded, are compatible with cray-mpich 7.x.x.
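In practice the compatibility modules are used as follows (a sketch based on the description above):

module load cray-mpich-compat/v7                          # move to MPT 7.x.x and compatible products
module help cray-mpich-compat/v7                          # list what will be swapped
module swap cray-mpich-compat/v7 cray-mpich-compat/v6     # revert to MPT 6.x.x if needed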


The following modules on this system are compatible (as of August 18, 2014):

        cray-mpich/7.0.2

        cray-shmem/7.0.2

        cray-ga/5.1.0.5

        cray-libsci/13.0.0

        cray-libsci_acc/3.0.2

        cray-tpsl/1.4.1

        cray-petsc/3.4.4.0

        cray-hdf5-parallel/1.8.9

        cray-netcdf-hdf5parallel/4.3.2

        cray-parallel-netcdf/1.4.1

        cray-trilinos/11.8.1.0

        cce/8.3.2

        perftools/6.2.0

 

For PrgEnv-gnu please use gcc/4.9.0, except for CUDA applications, which need to use nvcc with gcc/4.8.2.


The cray-mpich-compat/v6 module ensures that the currently loaded modules, at the time it is loaded, are compatible with cray-mpich 6.x.x.

The following modules on this system are compatible: 

        cray-mpich/6.3.1

        cray-shmem/6.3.1

        cray-ga/5.1.0.4

        cray-libsci/12.2.0

        cray-libsci_acc/3.0.1

        cray-tpsl/1.4.0

        cray-petsc/3.4.3.1

        cray-hdf5-parallel/1.8.12

        cray-netcdf-hdf5parallel/4.3.1

        cray-parallel-netcdf/1.4.0

        cray-trilinos/11.6.1.0

        cce/8.2.6

        perftools/6.1.4

 

For PrgEnv-gnu please use gcc/4.8.2.

 

Keeping cray-mpich 6.x compatibility

To continue to use the cray-mpich 6.x-compatible release of the programming environment, you can add:

module load cray-mpich-compat/v6

to your ~/.modules file.

 

Known Issues

 - rca and cudatoolkit: If you are using rca functionality with a CUDA application, then you will need to add the following to your environment when compiling:

export PE_PKGCONFIG_LIBS=cray-rca:$PE_PKGCONFIG_LIBS
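A sketch of a build that uses this setting (the source file name is a placeholder):

export PE_PKGCONFIG_LIBS=cray-rca:$PE_PKGCONFIG_LIBS
cc -o app app.c     # the compiler wrapper now also pulls in the cray-rca package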

- CUDA and GCC 4.9.0: Applications using CUDA need to use gcc/4.8.2 when using nvcc.