https://github.com/E3SM-Project/E3SM

9971ec9 Removed if for listid (modal_aer_opt) and fixed bug in calcsize Prognostic and diagnostic calls now invoke calcsize and wateruptake in the same way; the list-id check is inside the calcsize routine. There was a bug in the original implementation where the last mode was always used in size calculations in the accum<->ait transfer. That bug is fixed in this commit, along with some minor cleanup. 30 September 2020, 17:55:16 UTC
cfa9adb Changed code so that the old state and pbuf are not used, so the code is NBFB I removed all instances of the old state and pbuf, which were there to maintain BFB. calcsize now uses the current state and pbuf, so it is not BFB with the original code. 30 September 2020, 04:05:28 UTC
b41a456 Reverted the ipair change and added it back for diagnostic ait->accum transfers; the test is still BFB 29 September 2020, 21:28:59 UTC
1f0b3c6 Removed ipair; everything works and is BFB 24 September 2020, 00:43:54 UTC
ff0ca5d Minor changes for ait->accum transfer; everything works and is BFB 23 September 2020, 23:54:09 UTC
81bace2 Moved allocations so that they happen only once, along with some cleanup and more refactoring 19 September 2020, 04:40:53 UTC
8be6743 Simulations were not BFB with history verbose turned off; now they are. The "qsrflx" variable was non-BFB, which was causing history variables to differ: I was not passing information about whether the call was for cloud-borne or interstitial aerosols. With that fixed, the code is BFB for ne4 and ne30 simulations (5 days). 18 August 2020, 21:49:07 UTC
21f2934 Further cleanup; still BFB 16 August 2020, 09:10:55 UTC
96e3739 Major cleanup, still in progress 16 August 2020, 06:42:41 UTC
83e0338 Partial refactor; everything BFB 07 August 2020, 05:21:09 UTC
7adcaad Changed some variable names and did some cleanup; still everything BFB 05 August 2020, 05:46:57 UTC
63037dc BFB with rad_diag_3 as well. 26 July 2020, 06:35:29 UTC
d2ae18c At this point, diags with all aerosols and with no aerosols are BFB. Diags with one omitted aerosol (e.g. bc) are still NBFB. I am using the following user_nl_cam, and rad_diag_3 is still NBFB:
nhtfrq = -120
mode_defs = 'mam4_mode1:accum:=',
 'A:num_a1:N:num_c1:num_mr:+',
 'A:so4_a1:N:so4_c1:sulfate:/compyfs/inputdata/atm/cam/physprops/sulfate_rrtmg_c080918.nc:+',
 'A:pom_a1:N:pom_c1:p-organic:/compyfs/inputdata/atm/cam/physprops/ocpho_rrtmg_c130709.nc:+',
 'A:soa_a1:N:soa_c1:s-organic:/compyfs/inputdata/atm/cam/physprops/ocphi_rrtmg_c100508.nc:+',
 'A:bc_a1:N:bc_c1:black-c:/compyfs/inputdata/atm/cam/physprops/bcpho_rrtmg_c100508.nc:+',
 'A:dst_a1:N:dst_c1:dust:/compyfs/inputdata/atm/cam/physprops/dust_aeronet_rrtmg_c141106.nc:+',
 'A:ncl_a1:N:ncl_c1:seasalt:/compyfs/inputdata/atm/cam/physprops/ssam_rrtmg_c100508.nc:+',
 'A:mom_a1:N:mom_c1:m-organic:/compyfs/inputdata/atm/cam/physprops/poly_rrtmg_c130816.nc',
 'mam4_mode1_no_bc:accum:=',
 'A:num_a1:N:num_c1:num_mr:+',
 'A:so4_a1:N:so4_c1:sulfate:/compyfs/inputdata/atm/cam/physprops/sulfate_rrtmg_c080918.nc:+',
 'A:pom_a1:N:pom_c1:p-organic:/compyfs/inputdata/atm/cam/physprops/ocpho_rrtmg_c130709.nc:+',
 'A:soa_a1:N:soa_c1:s-organic:/compyfs/inputdata/atm/cam/physprops/ocphi_rrtmg_c100508.nc:+',
 'A:dst_a1:N:dst_c1:dust:/compyfs/inputdata/atm/cam/physprops/dust_aeronet_rrtmg_c141106.nc:+',
 'A:ncl_a1:N:ncl_c1:seasalt:/compyfs/inputdata/atm/cam/physprops/ssam_rrtmg_c100508.nc:+',
 'A:mom_a1:N:mom_c1:m-organic:/compyfs/inputdata/atm/cam/physprops/poly_rrtmg_c130816.nc',
 'mam4_mode2:aitken:=',
 'A:num_a2:N:num_c2:num_mr:+',
 'A:so4_a2:N:so4_c2:sulfate:/compyfs/inputdata/atm/cam/physprops/sulfate_rrtmg_c080918.nc:+',
 'A:soa_a2:N:soa_c2:s-organic:/compyfs/inputdata/atm/cam/physprops/ocphi_rrtmg_c100508.nc:+',
 'A:ncl_a2:N:ncl_c2:seasalt:/compyfs/inputdata/atm/cam/physprops/ssam_rrtmg_c100508.nc:+',
 'A:mom_a2:N:mom_c2:m-organic:/compyfs/inputdata/atm/cam/physprops/poly_rrtmg_c130816.nc',
 'mam4_mode3:coarse:=',
 'A:num_a3:N:num_c3:num_mr:+',
 'A:dst_a3:N:dst_c3:dust:/compyfs/inputdata/atm/cam/physprops/dust_aeronet_rrtmg_c141106.nc:+',
 'A:ncl_a3:N:ncl_c3:seasalt:/compyfs/inputdata/atm/cam/physprops/ssam_rrtmg_c100508.nc:+',
 'A:so4_a3:N:so4_c3:sulfate:/compyfs/inputdata/atm/cam/physprops/sulfate_rrtmg_c080918.nc:+',
 'A:bc_a3:N:bc_c3:black-c:/compyfs/inputdata/atm/cam/physprops/bcpho_rrtmg_c100508.nc:+',
 'A:pom_a3:N:pom_c3:p-organic:/compyfs/inputdata/atm/cam/physprops/ocpho_rrtmg_c130709.nc:+',
 'A:soa_a3:N:soa_c3:s-organic:/compyfs/inputdata/atm/cam/physprops/ocphi_rrtmg_c100508.nc:+',
 'A:mom_a3:N:mom_c3:m-organic:/compyfs/inputdata/atm/cam/physprops/poly_rrtmg_c130816.nc',
 'mam4_mode3_no_bc:coarse:=',
 'A:num_a3:N:num_c3:num_mr:+',
 'A:dst_a3:N:dst_c3:dust:/compyfs/inputdata/atm/cam/physprops/dust_aeronet_rrtmg_c141106.nc:+',
 'A:ncl_a3:N:ncl_c3:seasalt:/compyfs/inputdata/atm/cam/physprops/ssam_rrtmg_c100508.nc:+',
 'A:so4_a3:N:so4_c3:sulfate:/compyfs/inputdata/atm/cam/physprops/sulfate_rrtmg_c080918.nc:+',
 'A:pom_a3:N:pom_c3:p-organic:/compyfs/inputdata/atm/cam/physprops/ocpho_rrtmg_c130709.nc:+',
 'A:soa_a3:N:soa_c3:s-organic:/compyfs/inputdata/atm/cam/physprops/ocphi_rrtmg_c100508.nc:+',
 'A:mom_a3:N:mom_c3:m-organic:/compyfs/inputdata/atm/cam/physprops/poly_rrtmg_c130816.nc',
 'mam4_mode4:primary_carbon:=',
 'A:num_a4:N:num_c4:num_mr:+',
 'A:pom_a4:N:pom_c4:p-organic:/compyfs/inputdata/atm/cam/physprops/ocpho_rrtmg_c130709.nc:+',
 'A:bc_a4:N:bc_c4:black-c:/compyfs/inputdata/atm/cam/physprops/bcpho_rrtmg_c100508.nc:+',
 'A:mom_a4:N:mom_c4:m-organic:/compyfs/inputdata/atm/cam/physprops/poly_rrtmg_c130816.nc',
 'mam4_mode4_no_bc:primary_carbon:=',
 'A:num_a4:N:num_c4:num_mr:+',
 'A:pom_a4:N:pom_c4:p-organic:/compyfs/inputdata/atm/cam/physprops/ocpho_rrtmg_c130709.nc:+',
 'A:mom_a4:N:mom_c4:m-organic:/compyfs/inputdata/atm/cam/physprops/poly_rrtmg_c130816.nc'
rad_climate = 'A:Q:H2O', 'N:O2:O2', 'N:CO2:CO2', 'A:O3:O3', 'N:N2O:N2O', 'N:CH4:CH4', 'N:CFC11:CFC11', 'N:CFC12:CFC12',
 'M:mam4_mode1:/compyfs/inputdata/atm/cam/physprops/mam4_mode1_rrtmg_aeronetdust_c141106.nc',
 'M:mam4_mode2:/compyfs/inputdata/atm/cam/physprops/mam4_mode2_rrtmg_c130628.nc',
 'M:mam4_mode3:/compyfs/inputdata/atm/cam/physprops/mam4_mode3_rrtmg_aeronetdust_c141106.nc',
 'M:mam4_mode4:/compyfs/inputdata/atm/cam/physprops/mam4_mode4_rrtmg_c130628.nc'
rad_diag_1 = 'A:Q:H2O', 'N:O2:O2', 'N:CO2:CO2', 'A:O3:O3', 'N:N2O:N2O', 'N:CH4:CH4', 'N:CFC11:CFC11', 'N:CFC12:CFC12',
 'M:mam4_mode1:/compyfs/inputdata/atm/cam/physprops/mam4_mode1_rrtmg_aeronetdust_c141106.nc',
 'M:mam4_mode2:/compyfs/inputdata/atm/cam/physprops/mam4_mode2_rrtmg_c130628.nc',
 'M:mam4_mode3:/compyfs/inputdata/atm/cam/physprops/mam4_mode3_rrtmg_aeronetdust_c141106.nc',
 'M:mam4_mode4:/compyfs/inputdata/atm/cam/physprops/mam4_mode4_rrtmg_c130628.nc'
rad_diag_2 = 'A:Q:H2O', 'N:O2:O2', 'N:CO2:CO2', 'A:O3:O3', 'N:N2O:N2O', 'N:CH4:CH4', 'N:CFC11:CFC11', 'N:CFC12:CFC12'
rad_diag_3 = 'A:Q:H2O', 'N:O2:O2', 'N:CO2:CO2', 'A:O3:O3', 'N:N2O:N2O', 'N:CH4:CH4', 'N:CFC11:CFC11', 'N:CFC12:CFC12',
 'M:mam4_mode1_no_bc:/compyfs/inputdata/atm/cam/physprops/mam4_mode1_rrtmg_aeronetdust_c141106.nc',
 'M:mam4_mode2:/compyfs/inputdata/atm/cam/physprops/mam4_mode2_rrtmg_c130628.nc',
 'M:mam4_mode3_no_bc:/compyfs/inputdata/atm/cam/physprops/mam4_mode3_rrtmg_aeronetdust_c141106.nc',
 'M:mam4_mode4_no_bc:/compyfs/inputdata/atm/cam/physprops/mam4_mode4_rrtmg_c130628.nc'
26 July 2020, 03:25:08 UTC
e092bc9 Removed some print statements, everything is still BFB 24 July 2020, 21:48:59 UTC
4ecc153 Everything is BFB for the first time step with the default model I am sending a copy of pbuf (only cloud-borne mixing ratios) along with the copy of state. Also, cloud-borne mixing ratio updates are not done from calcsize. 24 July 2020, 21:48:59 UTC
527efb9 BFB at this point (dryballi and wetballi vars) Changes made: 1. Passed a state copy and time step (dt) to modal_aer_opt for both sw and lw. 2. Set both do_adjust and do_aitaccum_transfer logicals to false in calcsize for this branch and branch singhbalwinder/default-for-remove-calcsize-diags. 3. Added some history output to compare dgnumdry. 24 July 2020, 21:48:59 UTC
2f43cd8 Model compiles fine with NBFB answers; next step: make it BFB 24 July 2020, 21:48:58 UTC
42c9873 Merge branch 'worleyph/cam/assign_chunks_opt' (PR #3689) Improve load balancing when assigning chunks The current EAM physics load balancing scheme attempts to create chunks that all have approximately the same computational cost, and then assigns the chunks to processes in such a way as to minimize the number of columns that need to be sent to other processes during d_p_coupling and p_d_coupling (hopefully decreasing communication overhead). However, if not all chunks have the same number of columns, or if the columns are not all of equal cost, then the more expensive chunks are not necessarily spread equally between the processes, and unnecessary load imbalance can result. Here, modifications are made to improve the load balance in these situations, while still preserving some of the communication-avoiding capability of the original algorithm. To enable this optimization, the threading schedule for chunks in physpkg is specified as SCHEDULE(static,1), so that chunks are assigned in a wrap-map fashion among the threads on a process. Given that there are typically relatively few chunks assigned to each thread, and that the cost of the chunks can be estimated pretty well and assigned appropriately among the threads (if we know the schedule), this is unlikely to hurt performance. Experiments indicate that performance improves with this change (in conjunction with the rest of the PR). As part of the optimization to the assignment of chunks to threads, the algorithm to calculate the number of chunks to assign to processes is modified to calculate the number of chunks to assign per thread, and then use this to determine the number of chunks to assign to each process. This supports future heterogeneous systems where not all processes have the same number of threads, though it still assumes that all threads are equally capable.
To document the load balance and associated performance, performance per process over the chunk loops in physpkg.F90 is collected and compared to the cost per chunk (which was already collected). The output of print_cost_p is modified to include these new data. The output of print_cost_p is also now archived with the other performance data at the end of the run. Finally, there is currently a check whether a runtime pcols specification (phys_chnk_fdim) disagrees with a compile-time specification. If so, a warning message is output and the model continues running using the compile-time specification. This is even triggered by the default setting of phys_chnk_fdim, which means that the warning message is always output when pcols is specified at compile-time. This is modified so that a warning message is only generated when the runtime pcols namelist variable (phys_chnk_fdim) specifies a specific pcols that is not the same as the compile-time set value. A value phys_chnk_fdim < 1 indicates that the model should decide what value of pcols to use, and setting pcols at compile-time implicitly defines what this default value is, so the two settings are not in conflict. [BFB] * worleyph/cam/assign_chunks_opt: Eliminate inconsistency in process min heap Minimize number of columns communicated during coupling Change variable names and also eliminate arbitrary "magic numbers" Eliminate unuseful warning when specifying pcols at compile-time Modify comments to match style used in create_chunks Improve load balancing when chunks have different costs Modify names of variables and routine assoc, with min heap of chunks Improve load balancing when processes have different numbers of threads Add OpenMP scheduling and augment performance diagnostic output 24 July 2020, 18:44:07 UTC
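The wrap-map assignment implied by SCHEDULE(static,1) in the merge above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the Fortran implementation in phys_grid.F90; the function name and example costs are invented.

```python
# Hedged sketch (not E3SM code): with SCHEDULE(static,1), chunk i on a
# process lands on thread i % nthreads (a "wrap map"). Knowing that
# schedule lets the chunk-assignment phase estimate per-thread cost
# before the run.

def wrap_map_thread_costs(chunk_costs, nthreads):
    """Estimated cost per thread when chunks are dealt out round-robin."""
    thread_cost = [0.0] * nthreads
    for i, cost in enumerate(chunk_costs):
        thread_cost[i % nthreads] += cost
    return thread_cost

# The process's estimated cost is its slowest thread, since threads
# run in parallel.
costs = [5.0, 4.0, 3.0, 2.0, 1.0]            # chunks in decreasing cost order
per_thread = wrap_map_thread_costs(costs, 2)  # thread 0: 5+3+1, thread 1: 4+2
proc_cost = max(per_thread)
```

Sorting chunks by decreasing cost before dealing them out is what lets the wrap map approximate an even split across threads.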
e84b826 Merge branch 'mark-petersen/ocean/improveGMdrhodx' (PR #3730) Update mpas-source: improve GM drhodx This PR brings in a new mpas-source submodule with changes only to the ocean core. Currently the GM horizontal buoyancy gradient is computed at a fixed vertical index. At a number of places in the ocean this is problematic where layer thickness changes quickly between cells (especially near the bottom for partial bottom cells). See MPAS-Dev/MPAS-Model#633 [non-BFB] 24 July 2020, 17:30:28 UTC
abbfb64 Merge branch 'jonbob/mosart/fix-coupling_period' (PR #3724) Make coupling_period calculation dependent on NCPL_BASE_PERIOD This PR modifies the MOSART build-namelist so that the coupling frequency it computes, in the form of variable coupling_period, is dependent on the value of NCPL_BASE_PERIOD. Previously it was hard-wired to assume the base period was one day and ROF_NCPL was the number of couplings per day. This update was necessary for getting IG cases working, where the NCPL_BASE_PERIOD is often a year so that the landice code can run on order of days. [BFB] 24 July 2020, 17:22:57 UTC
4ea385c update mpas-source: improve GM drhodx 23 July 2020, 18:15:55 UTC
045cb66 Merge branch 'jgfouca/cime/update_sub' into master (PR #3728) Update CIME submodule [NML] adds IOP setting * jgfouca/cime/update_sub: Update CIME submodule 23 July 2020, 17:11:22 UTC
1b48c3b Merge branch 'mark-petersen/ocean/remove_scratch_allocate_init_analysis' (PR #3717) Update mpas-source: remove scratch allocates, analysis and init This PR brings in a new mpas-source submodule with changes only to the ocean core. Those changes represent two MPAS-Ocean PRs with similar improvements but to two different parts of the model: * Remove scratch allocates and config. Part 1: analysis members (MPAS-Dev/MPAS-Model#539) * Remove scratch allocates and config. Part 2: Init mode (MPAS-Dev/MPAS-Model#553) These are each part of MPAS-Dev/MPAS-Model#457, which replaces scratch allocates throughout the model. We felt it was safer to convert the model in smaller pieces. [BFB] 23 July 2020, 15:15:53 UTC
8393a70 Merge branch 'jonbob/elm/fix-snicar-h2osno' (PR #3718) Change source of h2osno and frac_sno in SNICAR_AD_RT This PR changes the source of two associated arrays, h2osno and frac_sno, in subroutine SNICAR_AD_RT. Previously the two arrays had pointed at the waterstate_vars derived type, but subroutine SNICAR_RT, which it optionally replaces, pointed those arrays to derived type col_ws. When using the waterstate_vars derived type and use_snicar_ad set to true (meaning CLM was using SNICAR_AD_RT instead of SNICAR_RT), tests would fail after a couple of years with a traceback into subroutine TwoStream. Debugging indicated the problem was due to a 0 snow albedo in a cell whose snow fraction was 1, and further analysis found the 0 albedo was coming from SNICAR_AD_RT. Apparently the waterstate_vars value for h2osno for that cell was 0 and subsequently no albedo was calculated, though the col_ws value was non-zero and is used elsewhere in the calculation. With the current patch, the same test that had previously failed ran for 20 years. [BFB] 23 July 2020, 15:09:38 UTC
fc26d9d Eliminate inconsistency in process min heap The previous commit allowed the process min heap to lose the min heap property for short periods. While this did not cause problems in test cases, it might lead to problems in the future. Here this problem is resolved by calculating an updated process cost when thread 0 is assigned a new chunk, but not changing the cost associated with the process in the min heap until after new chunks are also assigned to the other threads. [BFB] 23 July 2020, 01:04:00 UTC
7018490 Minimize number of columns communicated during coupling The initial implementation of the new algorithm to assign chunks to processes in a more load balanced way missed some opportunities to minimize the number of columns that need to be copied during coupling between the physics and dynamics. Here a change is made to require approximately the same number of column transfers as with the old chunk assignment algorithm when the chunks all have the same cost. Another change is to delay updating the process min heap after the estimated cost of a process changes (when a chunk is added to thread 0) until all other threads have had a chunk assigned to them. Since chunks are assigned in order of decreasing cost, the estimated cost of the process will not increase again until this occurs. The delay allows for additional opportunities to minimize communication overhead by continuing to regard this process as having lower cost than it actually has. Finally, the format of the output in the file atm_chunk_costs.txt is modified to include the number of columns in each chunk that do not have to be transferred to other processes during coupling. [BFB] 23 July 2020, 01:03:59 UTC
946f292 Change variable names and also eliminate arbitrary "magic numbers" Changed names of variables in physpkg.F90 and phys_grid.F90 to be more easily distinguishable and self-documenting. For example, variables with the following elements in their names were expanded as noted (in new code - did not examine all of physpkg.F90 or phys_grid.F90):
pcount => proc_cnt
ccount => chnk_cnt
pcost => proc_cost
ccest => chnk_estcost
ccost => chnk_cost
tcest => thrd_estcost
pcest => proc_estcost
pheap => proc_heap
cheap => chnk_heap
cdex => chnk_dex or cdex => col_dex
udex => used_col_dex
Fixed typo in a variable description in the print_cost_p routine (in phys_grid.F90). Reworked logic to avoid division by zero, or division resulting in an infinity, so as to not use arbitrary "magic" numbers. [BFB] 23 July 2020, 01:03:39 UTC
8c308d6 Eliminate unuseful warning when specifying pcols at compile-time There is currently a check whether a runtime pcols specification (phys_chnk_fdim) disagrees with a compile-time specification. If so, a warning message is output and the model continues running using the compile-time specification. This is even triggered by the default setting of phys_chnk_fdim, which means that the warning message is always output when pcols is specified at compile-time. The default (phys_chnk_fdim == 0) implies that the model should determine what pcols should be set to. When defining pcols at compile-time, that changes what the model will determine pcols to be (at runtime), so phys_chnk_fdim == 0 does not conflict with a compile-time setting of pcols (with this interpretation). Here the logic is changed to remove the warning when phys_chnk_fdim <= 0 and pcols is also specified at compile-time. [BFB] 23 July 2020, 01:03:39 UTC
c755ceb Modify comments to match style used in create_chunks Reviewers for an earlier PR requested a change in the style of comments used in the routine create_chunks. Here the same style is implemented in the routines assign_chunks, pheap_adjust, and cheap_adjust. [BFB] 23 July 2020, 01:02:02 UTC
c7e3171 Improve load balancing when chunks have different costs The assign_chunks algorithm is modified to balance the total estimated cost per process while retaining the current approach of attempting to minimize the communication costs in dp_coupling. This is implemented by introducing a min heap of processes, ranked by the estimated cost of the chunks assigned to each process. Chunks are assigned in order of decreasing estimated cost, and the estimated process cost is adjusted when assigning a chunk to thread 0 (assuming a wrap map of chunks to threads within a process and perfectly parallel computation between threads). If there are multiple candidate processes to which a chunk can be assigned, the chunk is assigned to the process which would minimize the communication cost in d_p_coupling and p_d_coupling. [BFB] 23 July 2020, 01:02:02 UTC
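The min-heap idea in the commit above can be sketched as follows. This is an illustrative Python sketch, not the Fortran assign_chunks routine; the thread-0 cost-update subtlety and the communication-cost tie-breaking described in the commit are deliberately omitted, and all names are invented.

```python
# Hedged sketch of min-heap chunk assignment: chunks, visited in order of
# decreasing estimated cost, each go to the process with the smallest
# estimated cost so far, tracked in a min heap.
import heapq

def assign_chunks(chunk_costs, nprocs):
    """Map chunk id -> process id, greedily balancing estimated cost."""
    heap = [(0.0, p) for p in range(nprocs)]   # (estimated cost, process id)
    heapq.heapify(heap)
    owner = {}
    for cid, cost in sorted(enumerate(chunk_costs), key=lambda c: -c[1]):
        proc_cost, p = heapq.heappop(heap)     # least-loaded process
        owner[cid] = p
        heapq.heappush(heap, (proc_cost + cost, p))
    return owner

owners = assign_chunks([4.0, 3.0, 3.0, 2.0], nprocs=2)
```

With these example costs, both processes end up with a total estimated cost of 6.0, which is the balance the greedy heuristic aims for.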
6347fd9 Modify names of variables and routine assoc, with min heap of chunks In future changes we will use a min heap of processes (with weights defined by estimated cost of chunks assigned to a process). Here the min heap of chunks used when assigning columns to chunks is renamed to cheap. Names of associated variables and routines are likewise modified, as is the corresponding documentation. [BFB] 23 July 2020, 01:02:01 UTC
643a319 Improve load balancing when processes have different numbers of threads In the case when processes do not all have the same number of threads, the logic in assign_threads may not calculate the number of chunks to be assigned to each process in a load-balanced way when the total number of chunks is not evenly divisible by the total number of threads. Here, the logic is modified to assign the "extra" chunks, mod(total number of chunks, total number of threads), in a round-robin fashion: first to thread 0 in each process, then to thread 1, etc. If a process does not have a particular thread id, then it is skipped in this assignment. This approximately load balances across the threads, which is the more accurate metric for load balancing performance. [BFB] 23 July 2020, 01:02:01 UTC
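The "extra chunk" round-robin described above can be sketched like this. An illustrative Python sketch only, not the assign_threads code; the function name and example thread counts are invented.

```python
# Hedged sketch: each thread gets nchunks // total_threads chunks, and the
# remaining mod(nchunks, total_threads) extras go round-robin, first to
# thread 0 of each process, then thread 1, etc., skipping any process
# that has no such thread id.

def chunks_per_thread(nchunks, nthreads_per_proc):
    """Return counts[p][t] = number of chunks for thread t of process p."""
    total_threads = sum(nthreads_per_proc)
    base = nchunks // total_threads
    extras = nchunks % total_threads
    counts = [[base] * n for n in nthreads_per_proc]
    tid = 0
    while extras > 0:
        for p, n in enumerate(nthreads_per_proc):
            if tid < n and extras > 0:   # skip processes lacking this thread id
                counts[p][tid] += 1
                extras -= 1
        tid +=  1
    return counts

# 3 processes with 2, 4, and 2 threads; 11 chunks -> base of 1 each, 3 extras.
counts = chunks_per_thread(11, [2, 4, 2])
```

Here the three extras land on thread 0 of each of the three processes, so no thread carries more than one chunk above the base.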
7413594 Add OpenMP scheduling and augment performance diagnostic output Added a SCHEDULE(static,1) to the OpenMP loops over chunks in physpkg.F90 so that the assignment to threads is round-robin. This is in order to be consistent with the new algorithm being implemented in assign_chunks (and also demonstrates improved performance in validation experiments). Added phys_pcost (public) to phys_grid.F90, where phys_pcost is the measured physics cost for the threaded loops over chunks in physpkg.F90 (bc_physics and ac_physics), and added timing logic to physpkg.F90 to capture this data. Updated print_cost_p to calculate and output data on process-level performance for the threaded loops over chunks in physpkg.F90: PROC id nthrds nchnks estcost (norm) cost (norm) cost (seconds) 'speed-up' where the estimated process cost is the maximum over the estimated thread costs, and the estimated thread cost is calculated from the chunks assigned to each thread, assuming a static round-robin distribution. Here process 'speed-up' is the sum of local chunk costs divided by local process cost. This is a measure of load balance, not traditional speed-up, which is relative to the serial cost. Reworked print_cost_p to rename variables in a consistent fashion. Added CHNK to the beginning of output lines with chunk-related performance data, to easily differentiate from the process-related performance data: CHNK owner lcid cid pcols ncols estcost (norm) cost (norm) cost (seconds) [BFB] 23 July 2020, 01:02:00 UTC
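The 'speed-up' diagnostic defined in the commit above reduces to a one-line ratio. An illustrative sketch, not print_cost_p itself; the function name and numbers are invented.

```python
# Hedged sketch: process 'speed-up' = sum of local chunk costs divided by
# the measured process cost. It measures load balance on the process, not
# speed-up relative to a serial run.

def process_speedup(chunk_costs, proc_cost):
    return sum(chunk_costs) / proc_cost

# Perfect balance on 2 threads: two 3-second chunks overlap fully, so the
# process takes 3 seconds and the ratio equals the thread count.
su = process_speedup([3.0, 3.0], proc_cost=3.0)   # -> 2.0
```

A ratio well below the thread count would indicate that one thread's chunks dominate the process cost.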
f3f7d6a Update CIME submodule [NML] adds IOP setting 22 July 2020, 22:39:13 UTC
12f9e10 update mpas-source: remove scratch allocates, analysis & init 22 July 2020, 21:05:06 UTC
ae90a7e Merge branch 'origin/oksanaguba/homme/clean-output' into master (PR #3592) Removing redundant code in computing levels for min and max print statements. 22 July 2020, 19:39:55 UTC
2249b67 Merge branch 'whannah/mmf/update-mmf-tests' (#3677) The MMF "test" compsets were originally created to shorten the run time of tests, but without any special development needs these compsets are unnecessary and just clutter up the compset configuration. If we decide we need smaller tests again it will be better to add a specific test mod rather than a special compset. [non-BFB] only for MMF tests since they will change 22 July 2020, 16:02:50 UTC
2362bed Merge branch 'akturner/seaice/snow_layer_check' (PR #3720) Update mpas-source: Add warning messages for snow and ice layers This PR brings in a new mpas-source submodule with changes only to the seaice core. It compares config options with initial condition values and adds a critical error if they differ. see MPAS-Dev PR #612 Fixes #3669 [BFB] 22 July 2020, 15:32:12 UTC
07e4df3 Merge branch 'whannah/atm/add_ne30pg1-3-4_support' (PR #3568) Add mapping/domain/topo/drydep files for running with certain tri-grid+physgrid configurations, and removes support for "pg1" grids. The new grids are: ne30pg3_r05_oECv3 ne30pg4_r05_oECv3 ne120pg2_r05_oECv3 conusx4v1_r05_oECv3 conusx4v1pg2_r05_oECv3 [BFB] Conflicts: cime 21 July 2020, 17:38:11 UTC
d5f7185 Add missing ECPP compset definition 21 July 2020, 15:10:41 UTC
e6750a1 Add missing quotes Added missing quotes to one line 21 July 2020, 00:13:05 UTC
fe66d97 Make coupling_period calculation dependent on NCPL_BASE_PERIOD 20 July 2020, 23:07:05 UTC
7ebefbf Merge remote-tracking branch 'origin/master' into whannah/mmf/update-mmf-tests 20 July 2020, 21:41:04 UTC
7bae560 Merge wlin/atm/updte-clubb-releaseotes (PR #3714) Update URL to the clubb site in release notes The link to the clubb website is updated in the comment section at top of advance_clubb_core_module.F90 to https://carson.math.uwm.edu/larson-group/clubb_site/ [BFB] 20 July 2020, 21:26:04 UTC
816e9d8 Rename a test that got missed 20 July 2020, 21:23:48 UTC
3b83b4a Update mpas-source: Add warning messages for snow and ice layers 20 July 2020, 20:57:32 UTC
0be16f3 Merge branch 'rljacob/test/remove-mali' (PR #3715) Remove SMS.f09_g16_a.MALI from integration suite until MALI is working with new Albany and new Albany has been installed on test machines. 20 July 2020, 20:22:07 UTC
3ea6c6d Change source of h2osno and frac_sno in SNICAR_AD_RT 20 July 2020, 15:14:47 UTC
29dc02d Remove SMS.f09_g16_a.MALI from integration Remove SMS.f09_g16_a.MALI from integration suite until it's working with new Albany. 19 July 2020, 23:56:44 UTC
85b29ae Update URL to the clubb site in release notes The link to the clubb website is updated in the comment section at top of advance_clubb_core_module.F90 to https://carson.math.uwm.edu/larson-group/clubb_site/ [BFB] 19 July 2020, 19:56:23 UTC
b6507c7 Merge 'MT/homme-gcc10' into master (PR #3693) gfortran with gcc10 has much stricter type checking and no longer allows logicals to be mixed with integers. Add an option to standalone HOMME builds to relax this type checking. Added cmake necessary to build cprnc from the latest version of CIME. Add HOMME machine file for "mappy", the new machine replacing melvin. Remove pio dependency from unit tests. 17 July 2020, 21:32:54 UTC
a5e9392 Merge remote-tracking branch 'MAIN/master' into homme-gcc10 17 July 2020, 21:16:30 UTC
faadb3e removing pio lib from unit tests 17 July 2020, 20:54:39 UTC
fe140f3 Merge branch 'jgfouca/cime/becomes_a_submodule' into master (PR #3708) Cime becomes a submodule This is a big day and a big change for E3SM, but impacts should be very minimal since the cime code should be identical to what's currently in the E3SM cime subdirectory. From here on, E3SM/CIME integration should be as simple as updating this submodule. [BFB] * jgfouca/cime/becomes_a_submodule: Make cime a submodule Remove cime Update CIME to ESMCI cime5.8.28 (PR #3696) 17 July 2020, 19:42:24 UTC
fc8c1bc Make cime a submodule 16 July 2020, 21:38:36 UTC
cbb95f7 Remove cime 16 July 2020, 21:37:56 UTC
44d5187 Merge pull request #3701 from E3SM-Project/jayeshkrishna/scorpio_v1.1.2 Updating the version of Scorpio to 1.1.2 . This new release includes, * Fix for a memory corruption issue with the BOX rearranger This fix is required for ne30 pg2 production runs (see #3684), ne1024 pg2 runs (see E3SM-Project/scorpio#323) and standalone HOMME (see E3SM-Project/scorpio#324) * Increases the range of I/O decomposition ids allowed. This fix is required for some long running simulations (see E3SM-Project/scorpio#315) Also changing the default I/O library in HOMME to Scorpio. Fixes #3684 [BFB] 16 July 2020, 19:29:32 UTC
94a1067 Merge branch 'akturner/seaice/carbon_conservation' (PR #3702) Seaice carbon conservation bug This PR brings in a new mpas-source submodule with changes only to the seaice core. It fixes a bug with carbon conservation in the sea ice BGC. see MPAS-Dev PR #610 Fixes #3661 [non-BFB] only for configurations with seaice BGC 16 July 2020, 17:31:08 UTC
2a343fd Update CIME to ESMCI cime5.8.28 (PR #3696) Update CIME to ESMCI cime5.8.28 Squash merge of jgfouca/branch-for-to-acme-2020-07-13 Features: * improve error message for create newcase test option * Clean up screen output & logging in check_input_data * cpl7 mct driver: updates to dry deposition data * User mods config grids2 * Allow gptl build to recognize ESMF_LIBDIR Bug fixes: * Make cprnc standalone build more robust to current env by using rpath * Fix non-py3 compatible change in simple_compare.py * set PIO_REARR_COMM_TYPE: coll for mpi-serial * fix force_build_smp, did not work after case.setup * Give the file name that results in the "empty read-only file" error * share/streams: fix print format issue and turn off debug flags * Improvement to baseline generation and comparison for the PET multi-submit test. * When writing .env_mach_specific files, only write settings for main job Features that don't affect E3SM: * remove nuopc data models to new repo CDEPS * major refactor of nuopc datamodel caps and share code * pio2 update * improved error message for incorrect setting of cime model * couples HYCOM with data atmosphere for UFS HAFS application * Start adding option for MizuRoute [non-BFB] for any full-atmosphere case because of dry-deposition changes. 15 July 2020, 20:05:18 UTC
7f5acb0 Merge branch E3SM-Project/worleyph/machines/remove_sqs (PR #3698) Replace 'sqs -f' call with 'scontrol show jobid' on NERSC systems 15 July 2020, 18:14:15 UTC
53c26c7 Merge branch 'mark-petersen/ocean/fixCVMixInterpolation' (PR #3695) Update mpas-source: Fixes CVMix surface layer averaging This PR brings in a new mpas-source submodule with changes only to the ocean core. Currently when cvmix computes averages for the surface layer (needed for finding boundary layer depth) it simply does an arithmetic mean. This is reasonable for the 60-layer mesh (with near-constant 10 m resolution through 200 m), but is not correct for stretched grids. In this PR the averaging is changed to a layerThickness-weighted average. See MPAS-Dev/MPAS-Model#608 [non-BFB] 15 July 2020, 17:18:49 UTC
5e3b9ad Seaice carbon conservation bug 15 July 2020, 15:32:39 UTC
ba70645 Switch to Scorpio for standalone HOMME Switch the default I/O library used by standalone HOMME from Scorpio classic to Scorpio 15 July 2020, 01:12:46 UTC
bd38ab6 Updating Scorpio to v1.1.2 Updating the Scorpio library from version 1.1.1 to 1.1.2. This version includes: * Fix for a memory corruption issue with the BOX rearranger. This fix is required for ne30 pg2 production runs (see #3684), ne1024 pg2 runs (see E3SM-Project/scorpio#323) and standalone HOMME (see E3SM-Project/scorpio#324) * Fix for heap overflow during logging * Increases the range of I/O decomposition ids allowed. This fix is required for some long running simulations (see E3SM-Project/scorpio#315) * Better load balancing for the SUBSET rearranger The Scorpio classic library is not updated (has no new changes) 15 July 2020, 00:55:02 UTC
ec21e40 machine file for mappy 14 July 2020, 20:56:43 UTC
ae91cb2 Update MMF tests & remove MMF "test" compsets modified: cime/config/e3sm/tests.py modified: components/cam/cime_config/config_component.xml modified: components/cam/cime_config/config_compsets.xml 14 July 2020, 19:17:23 UTC
7338890 Replace 'sqs -f' call with 'scontrol show jobid' on NERSC systems A recent update to NERSC system software removed the '-f' option from the sqs command. 'sqs -f' is called from provenance.py for both cori-knl and cori-haswell, causing job submission to fail on those systems. 'sqs -f' is also called within the job progress monitoring scripts syslog.cori-knl and syslog.cori-haswell, which will cause these scripts to abort (though this will not affect a running model). 'sqs -f' is equivalent to 'scontrol show jobid', so here 'sqs -f' is replaced by 'scontrol show jobid' in provenance.py and in the job progress monitoring scripts. Note that the output from 'scontrol show jobid' that is captured in provenance.py is still saved in a file with the original name (sqsf_jobid.$lid), because this name is looked for in the performance archiving postprocessing scripts. [BFB] 14 July 2020, 17:49:30 UTC
14899a6 Merge branch 'jonbob/datm/reimplement-CLMMOSARTTEST' (PR #3668) Reimplement CLMMOSARTTEST support in datm This PR reimplements CLMMOSARTTEST support in datm for use in the IM20TRNLDASCNPECACNTBC compset, as well as other future compsets. This capability had been brought in by PR #3478 but was subsequently overwritten in a squash merge. Fixes #3665 [BFB] for all tested configurations 14 July 2020, 16:20:56 UTC
8ca9a53 update mpas-source: Fixes CVMix surface layer averaging 13 July 2020, 19:06:00 UTC
b787c16 adapt HOMME to work with updated cprnc cmake build 12 July 2020, 19:35:39 UTC
ae2197c new option needed for gfortran 10 11 July 2020, 21:42:07 UTC
31b650d Merge branch 'apcraig/mosart/usrdat' (PR #3642) - Adds new usrdat resolutions for ELM/MOSART and MOSART. - Removes RMOSARTNLDAS compset because it's not working - Adds four new tests [BFB] 10 July 2020, 21:07:52 UTC
1a5e081 Merge branch 'jgfouca/cime/disable_test_archiving_on_mappy' into master (PR #3690) Disable test archiving on mappy Disk space is at a premium on mappy, especially until /home is mounted on a bigger SSD. [BFB] * jgfouca/cime/disable_test_archiving_on_mappy: Disable test archiving on mappy 10 July 2020, 19:27:46 UTC
3fabcdf Disable test archiving on mappy Disk space is at a premium on mappy, especially until /home is mounted on a bigger SSD. [BFB] 10 July 2020, 19:25:18 UTC
4c0329a Merge branch 'mark-petersen/ocean/vert_mix_max_index_fix' (PR #3682) Turn on ocean Redi mixing. Includes minor bug fixes This PR brings in a new mpas-source submodule, with changes only to the ocean core. It turns on Redi mixing by default, making this non-bfb for all configurations with an active ocean. It includes several bug fixes uncovered during testing with Redi on: * Fixes Redi decomposition error in k33 (see MPAS-Dev/MPAS-Model#620); * Improves Redi Kappa variable names and descriptions; * Replaces the use of N with k after a k loop (this was a typo, but turns out to be a bit-for-bit change; see MPAS-Dev/MPAS-Model#567); and * Fixes indices to remove warnings from analysis members [NML] [non-BFB] 10 July 2020, 17:02:52 UTC
bafac43 Merge branch 'jonbob/scripts/add-lowres-trigrids' (PR #3606) Add low-res tri-grid configurations for testing This PR adds support for ne4pg2-r05-oQU480 and ne16pg2-r05-oQU240 configurations, including all mapping and domain files. All new files have been placed on the inputdata server and should now be available. [non-BFB] for ne4pg2 configurations 10 July 2020, 16:35:35 UTC
3b3b117 Update to reflect new name for the CAM ne4pg2 topo file 09 July 2020, 20:25:00 UTC
8bb1d22 Make the config_dt value a float to match previous namelists 09 July 2020, 18:48:20 UTC
3270a4c Merge wlin/atm/ne30ne120_r0125_oRRS18to6v3 (PR #3663) Add support for grids using r0125_oRRS18to6v3 Following grids and their pg2 counterparts are added: ne30_r0125_oRRS18to6v3 and ne120_r0125_oRRS18to6v3. The mapfiles between the components are also populated. [BFB] 09 July 2020, 18:39:38 UTC
a37f716 Merge branch 'rljacob/cime/move-e3sm-config' into master (PR #3657) Move E3SM config files out of cime directory Move all the e3sm config files from cime/config/e3sm to E3SM/cime_config. This is preparation for converting cime to a submodule. [BFB] * rljacob/cime/move-e3sm-config: Move tests.py Move e3sm machines directory out of cime Move e3sm config files to cime_config dir 09 July 2020, 17:30:59 UTC
d775c76 Removes another MOSART test 08 July 2020, 23:03:50 UTC
2ca222c Merge remote-tracking branch 'origin/master' into rljacob/cime/move-e3sm-config * origin/master: Intelgpu updates Use I/O libraries from /home/wuda on JLSE for both jlse and jlse-iris configurations Switch to using module use command for intel and intelgpu on JLSE. Improved support for latest GCC on JLSE. Set NETCDF and PNETCDF variables for intel and intelgpu compilers. 08 July 2020, 22:35:54 UTC
f37bf3e Merge branch 'jlse_update_module_fix_netcdf' (PR #3676) JLSE update modules and fix netcdf Updates and fixes for JLSE for intel and intelgpu compilers. Fixes #3670 [BFB] 08 July 2020, 19:34:45 UTC
9e912bc Merge pull request #3 from E3SM-Project/azamat/jlse/intelgpu-updates Intelgpu updates 08 July 2020, 18:49:39 UTC
9379326 Removes the CLMMOS_USRDAT test 08 July 2020, 18:07:32 UTC
ed2355f Turn on Redi 08 July 2020, 15:07:14 UTC
1b53ff2 update mpas-source: Redi decomposition and threading 08 July 2020, 15:06:56 UTC
4bd735e update map files to and from ne30 and ne120 pg2 grids 08 July 2020, 03:31:44 UTC
e312352 Adds missing testmod file and a few fixes The missing testmod for clmmos_usrdat is added. The shell commands for the new tests are also updated. 08 July 2020, 02:32:06 UTC
034d93d Move tests.py 07 July 2020, 22:32:57 UTC
9d7d88e Merge remote-tracking branch 'origin/master' into rljacob/cime/move-e3sm-config * origin/master: (90 commits) Remove unused testlist XML files and prefer mappy to melvin in some places Add support for new mappy machine Turn Redi off by default Fix typo in test def Add test for MMF RCE compset Adding XML doc for io2comp max pend req = 0 case Update log msgs related to max pending reqs Modify style of comments in create_chunks again Log rearranger options after CIME updates Modify style of comments in create_chunks Change the default value of config_GM_visbeck_alpha Re-introduce CIME auto settings for max pend reqs Update changes to phys_chnk_fdim_max defaults Change phys_chnk_fdim_max defaults Eliminate unused space Add runtime default pcol calculation Update bld scripts to match changes to Registry file Update mpas-source: alter 3D GM Add variables to control new runtime default pcols calculation Fix uninitialized LW optical depths ... 07 July 2020, 22:27:45 UTC
92733f6 Merge branch 'jgfouca/misc/cleanup_and_mappy' into master (PR #3685) Remove unused testlist XML files and prefer mappy to melvin in some places [BFB] * jgfouca/misc/cleanup_and_mappy: Remove unused testlist XML files and prefer mappy to melvin in some places 07 July 2020, 19:12:24 UTC
37d8fdd Remove unused testlist XML files and prefer mappy to melvin in some places [BFB] 07 July 2020, 19:04:55 UTC
496ca34 Merge branch 'worleyph/cam/runtime_pcols_part2' (PR #3671) Implement runtime calculation of pcols in EAM In PR #3620, EAM was modified to allow setting pcols, the first dimension of the chunk data structure, at runtime. See the discussion in that PR and in the associated issue #3538 for why this decision was made. Here, support for setting pcols at runtime is generalized to allow a default to be calculated that a) minimizes the number of chunks, subject to constructing chunks all with approximately the same cost (based on either the number of columns or a user-specified relative cost per column) and assigning the same number of chunks to each computational thread, constrained by the physics load balancing option (-1, 0, 1, 2, 3, 4, or 5) and the target number of chunks per thread; and b) minimizes the difference between pcols and ncols, where ncols is the actual number of columns assigned to a chunk, thus minimizing wasted space and increasing cache locality. This minimization is done per process, as pcols only needs to be the same for chunks assigned to a process. The current options are still supported: set pcols to a specific value at compile time for all processes by adding -pcols to CAM_CONFIG_OPTS in env_build.xml, or set pcols to a specific value at runtime for all processes by adding phys_chnk_fdim = <pcols value> to user_nl_cam. However, the new default is phys_chnk_fdim = 0, which causes phys_grid_init to compute a pcols value satisfying criteria (a) and (b) above, subject to constraints defined by two new namelist variables: phys_chnk_fdim_max, an upper bound on pcols, and phys_chnk_fdim_mult, a requirement that pcols be a multiple of this value. Note that if phys_chnk_fdim > 1 or if -pcols is specified in CAM_CONFIG_OPTS, then these namelist variables are ignored. There is now also support for specifying the defaults for phys_chnk_fdim_max and phys_chnk_fdim_mult by system, dycore, and physics grid type. 
The following rules are included in this PR (from namelist_defaults_cam.xml):

<!-- physics chunk first dimension upper bound, encoding empirically-determined defaults -->
<phys_chnk_fdim_max > 16 </phys_chnk_fdim_max>
<phys_chnk_fdim_max dyn="se" npg="2"> 16 </phys_chnk_fdim_max>
<phys_chnk_fdim_max mach="compy" dyn="se" npg="0"> 24 </phys_chnk_fdim_max>
<phys_chnk_fdim_max mach="cori-knl" dyn="se" npg="0"> 128 </phys_chnk_fdim_max>

<!-- physics chunk first dimension factor, encoding empirically-determined defaults -->
<phys_chnk_fdim_mult > 1 </phys_chnk_fdim_mult>
<phys_chnk_fdim_mult mach="cori-knl" dyn="se"> 2 </phys_chnk_fdim_mult>

Based on these, the current default (pcols = 16) is still approximated by the "generic" default for phys_chnk_fdim_max, in that the calculated pcols will be <= 16, and this will determine the total number of chunks, so there may necessarily be more than one chunk per thread. This is simply meant to be a conservative choice, given that pcols=16 has been working in CAM/EAM for a long time. However, this is now an upper bound, and pcols could be smaller for a given process for a given case (grid resolution and number of computational threads). The other rules are the results of extensive empirical experiments on Compy and Cori-KNL for the indicated set of options. Appropriate defaults for other systems or problem settings can be evaluated and then implemented in the same way. However, these can always be overridden by setting these variables in user_nl_cam. There were some subtleties in implementing this capability, requiring, for example, reordering steps in the atmosphere initialization to allow pcols to be set in phys_grid_init, and adding the system name to the EAM/CAM configure command. See the commit messages for the details. 
[BFB] [NML] Fixes #3538 Fixes #3485 * worleyph/cam/runtime_pcols_part2: Modify style of comments in create_chunks again Modify style of comments in create_chunks Update changes to phys_chnk_fdim_max defaults Change phys_chnk_fdim_max defaults Eliminate unused space Add runtime default pcol calculation Add variables to control new runtime default pcols calculation Reorder init. logic for EUL, FV, and SLD dycores Reorder init. logic to allow pcols to be set in phys_grid_init Conflicts: components/cam/src/dynamics/se/dyn_comp.F90 07 July 2020, 17:34:04 UTC
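Criterion (b) above, picking the smallest valid pcols for the chunks on one process, can be illustrated with a small sketch (Python, purely illustrative: the real logic lives in phys_grid_init in Fortran and also constructs and rebalances the chunks themselves; the function and argument names here are assumptions):

```python
def default_pcols(chunk_ncols, fdim_max=16, fdim_mult=1):
    """Pick the smallest pcols that is a multiple of fdim_mult,
    does not exceed fdim_max, and holds the largest chunk on this
    process, so wasted space (pcols - ncols) is minimized."""
    largest = max(chunk_ncols)
    # round the largest chunk size up to the next multiple of fdim_mult
    pcols = -(-largest // fdim_mult) * fdim_mult
    if pcols > fdim_max:
        # in the real code the chunking step would have kept every
        # chunk within the upper bound before this point
        raise ValueError("chunk exceeds phys_chnk_fdim_max")
    return pcols

# e.g. three chunks of 13, 14, and 16 columns with mult=2 yields pcols=16,
# while a single 5-column chunk with mult=2 yields pcols=6.
```

Because the minimization is per process, two processes with different chunk sizes can end up with different pcols, which is exactly what saves the wasted space a single global compile-time value would force.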
24affef Add corresponding pg2 grids and associated maps 06 July 2020, 22:39:40 UTC
b852972 Intelgpu updates 06 July 2020, 22:12:43 UTC
805d605 Update to use mono maps to and from ne120np4 06 July 2020, 22:11:43 UTC
5238426 Merge pull request #3672 from E3SM-Project/jayeshkrishna/cime_max_pend_req_fix The maximum I/O pending requests setting in env_run.xml, PIO_REARR_COMM_MAX_PEND_REQ_COMP2IO, is by default set to 0. CIME needs to set the variable to an appropriate value in this case (when it's set to 0). Re-introducing this behavior that was disabled by ESMCI/cime#3499. Also updating the logging of the I/O rearranger options to make it more intuitive to the user. Also see ESMCI/cime#3597, which fixes the disabling of automatic setting of I/O requests in CIME Fixes #3658 [BFB] 02 July 2020, 22:20:52 UTC
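The "0 means auto" behavior described above can be sketched as follows (Python, illustrative only: the function name and the particular fallback value, here the number of I/O tasks, are assumptions for the sketch, not the actual CIME heuristic):

```python
def resolve_max_pend_req(user_value, num_io_tasks):
    # 0 is the sentinel meaning "let CIME choose": fall back to an
    # automatic value instead of passing 0 through to the rearranger.
    # (The fallback shown, num_io_tasks, is an assumed placeholder.)
    if user_value == 0:
        return num_io_tasks
    # any explicit non-zero setting from env_run.xml is respected
    return user_value
```

The point of the fix is only that the sentinel must be resolved somewhere before the value reaches the I/O layer; PR #3672 restores that resolution after it was accidentally disabled.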
bbd0ce2 Merge branch 'jgfouca/cime/add_mappy' into master (PR #3683) Add support for new mappy machine [BFB] * jgfouca/cime/add_mappy: Add support for new mappy machine 02 July 2020, 21:31:25 UTC
f168e77 Add support for new mappy machine [BFB] 02 July 2020, 21:29:23 UTC