Flerchinger USEd in NEW version. Iterations= 10


Re: Flerchinger USEd in NEW version. Iterations= 10

Postby bebop » Thu Nov 08, 2012 12:15 pm

Hello, I had the same problem recently. I decreased the timestep in the namelist.input file from 180 to 100 s and the problem disappeared.

But I discovered today that when I run simulations with mpirun using 16 processes, everything goes fine. Unfortunately, when I increase to 22 processes or more, the Flerchinger message is back.

Any idea about this ?

Thank you,
Best regards
B.
bebop
 
Posts: 4
Joined: Thu Nov 08, 2012 12:08 pm

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby phowarth » Thu Jul 25, 2013 3:28 am

I have the same problem: it works fine with 20 or fewer processes, but crashes with Flerchinger errors if the number of processes is increased any further.

Has anyone come up with a cause or solution yet?
phowarth
 
Posts: 7
Joined: Tue Mar 08, 2011 11:21 pm

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby nh_modeler » Mon Aug 12, 2013 8:27 am

Has anyone found a solution to this yet? It happens for me when trying to use the Noah LSM initialized from GFS. I have not modified Noah and am using a short time step.
nh_modeler
 
Posts: 2
Joined: Thu Aug 08, 2013 7:05 pm

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby herbert12345 » Sun Sep 08, 2013 9:28 am

Let me contribute some of the things I found:

- The Flerchinger message is not itself an error but simply a status message from the Noah LSM. However, it tends to appear when the model state is abnormal. You can therefore get rid of the message by changing the LSM, but the underlying problem will remain and may still cause the model to crash, write NaNs, or misbehave in other ways. The Flerchinger message does not point to the source of the problem.

In my specific case, checking rsl.error showed that the model crashed in the longwave radiation code but emitted Flerchinger messages earlier. That was not the root of the problem either: looking deeper into the logs, I found error messages about violations of the vertical CFL condition. It turned out there was an instability that could be fixed by setting epssm in the dynamics namelist to 0.3 (instead of the default of 0.1).
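For reference, this fix amounts to one extra line in the &dynamics section of namelist.input. The fragment below is illustrative only (epssm controls the off-centering of vertically propagating sound waves; 0.1 is the WRF default):

```
 &dynamics
 epssm = 0.3,   ! default is 0.1; larger values damp vertical
                ! acoustic instability at some cost in accuracy
 /
```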
herbert12345
 
Posts: 16
Joined: Tue Jun 30, 2009 8:45 am

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby ronbeag » Fri Nov 01, 2013 11:56 am

The Flerchinger message is a notification about how soil moisture is being calculated, so I would conclude that crashes accompanied by it are likely to be connected to the soil moisture in the initial conditions.

I've found that when WPS interpolates my soil moisture data to the model domain, erroneous negative soil moisture values sometimes appear. These can cover very small areas, maybe just one grid cell in the entire domain, and are difficult to see unless you view your initial conditions with an appropriately ranged colour scale. Obviously, negative soil moisture values make no sense. And even though it may be just one grid cell, the problem quickly propagates across the domain, giving NaN or missing values all over the place. Set the model to write output every timestep to see this.

The solution in my case was simply to edit my wrfinput_d0* files so that there were no negative soil moisture values in the ICs. This is easy with NCL: load the wrfinput_d0* file, read the soil moisture, apply SMOIS = where(SMOIS.gt.0, SMOIS, 0.005), and write it back to the wrfinput_d0* file. Note that, as far as I can gather, 0.005 should be the lowest soil moisture value; simply setting the negative values to zero didn't stop the error in my case.
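The clamping step in the NCL one-liner above can be sketched in Python with numpy; this is a minimal illustration of the array operation only (reading and writing the wrfinput file, e.g. with netCDF4, is left out):

```python
import numpy as np

def clamp_soil_moisture(smois, floor=0.005):
    """Replace non-positive soil moisture values with a small positive
    floor, mirroring the NCL where() call described above."""
    return np.where(smois > 0.0, smois, floor)

# Example: two bad cells (negative and zero) among valid values
smois = np.array([0.27, -0.01, 0.31, 0.0])
fixed = clamp_soil_moisture(smois)
# fixed is [0.27, 0.005, 0.31, 0.005]
```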

Another solution may be to change the interpolation scheme used for soil moisture in METGRID.TBL to a more elaborate one, perhaps like the scheme used for TSK. I haven't tried this yet, though, so I can't recommend anything in particular.

Also, assuming this is an initial conditions problem, one could try starting the simulation 6 hours earlier or later, to avoid any problems that may be in a particular wrfinput_d0* file.

EDIT: I'll also add that I've noticed similar problems with the interpolation of SST data to the WRF domain. In some tiny places (one or two grid cells) I found negative SSTs, even though the SSTs were in K; these all seemed to be around the sea-ice boundary (which is presumably how they affected the LSM). This also led to the Flerchinger message and a model crash soon after initialisation. Editing the wrflowinp_d0* files so that all SSTs were greater than or equal to 0 K made the message and the resulting crash go away.

Another source of Flerchinger errors I've found is sea ice in the domain, either in the initial conditions or through wrflowinp_d01. SEAICE is interpolated to the WRF domain against the LANDSEA mask provided by the source data; SST interpolation does not use this mask. In many cases the source data's LANDSEA mask is quite coarse, so SEAICE is not interpolated smoothly to the grid, especially at the edges between the sea ice and the domain's land mask. Changing "interp_mask = LANDSEA(1)" to "interp_mask = LANDMASK(1)" in METGRID.TBL.ARW (for the SEAICE entry) gives a much better interpolation around the edges of the sea ice and, in my case, was enough to eliminate the Flerchinger error.
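Concretely, the change described above is a one-line edit to the SEAICE entry in METGRID.TBL.ARW. The surrounding lines below are illustrative; the exact options in your table may differ:

```
name=SEAICE
        interp_option=four_pt+average_4pt
        interp_mask=LANDMASK(1)      # was: interp_mask=LANDSEA(1)
        masked=land
        fill_missing=0.
```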
ronbeag
 
Posts: 12
Joined: Thu Jul 26, 2012 12:12 pm

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby Arshal Wang » Thu Nov 03, 2016 9:57 pm

Dear All,

I am doing some research using the TC bogussing scheme and get the same problem. I tried changing the sf_surface_physics scheme but still got "Segmentation fault (core dumped)". As dbh409 said, I checked the input files for real.exe, and I found that the output file from tc.exe is different from the met_em file produced by WPS.
I suspect there is a problem with the TC Bogus scheme, but I do not know how to solve it.
Please help me.
Thanks in advance,
Arshal Wang
Arshal Wang
 
Posts: 3
Joined: Fri Sep 30, 2016 5:13 am

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby kwthomas » Fri Nov 04, 2016 4:20 pm

The general rule is that the timestep in seconds should be no larger than 6 times the grid spacing in kilometers. Anything more will probably go unstable, and usually quickly.
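As a quick sanity check, this rule of thumb can be written as a one-line calculation (an illustrative sketch, not a WRF utility):

```python
def max_stable_timestep(dx_meters, factor=6.0):
    """Rule-of-thumb maximum WRF timestep in seconds: roughly
    `factor` times the horizontal grid spacing in kilometers."""
    return factor * dx_meters / 1000.0

# A 10 km grid allows roughly a 60 s timestep under this rule;
# a 3 km convection-permitting grid roughly 18 s.
dt_10km = max_stable_timestep(10000.0)   # 60.0
dt_3km = max_stable_timestep(3000.0)     # 18.0
```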

If your run is initialized with severe convection in progress (assuming radar data is used as part of the initial conditions), you may need to reduce the timestep.

Vertical levels are critical too. Normally the CAPS spring runs (HWT experiment) have no problems, but for 2016 we switched to the experimental HRRR vertical level scheme, and some runs were failing even at a 4x timestep.
Kevin W. Thomas
Center for Analysis and Prediction of Storms
University of Oklahoma
kwthomas
 
Posts: 220
Joined: Thu Aug 07, 2008 6:53 pm

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby ociurana » Mon Feb 05, 2018 6:59 am

Hello,

I have the same problem in some regions when using the Noah land surface model (sf_surface_physics = 2). For example, a single parent domain over Greece works fine, but the same configuration over the Philippines gives the "Flerchinger USEd in NEW version. Iterations= 10" message, followed by a bad termination of wrf.exe (exit code 139). Tested in WRF 3.8.1, 3.9.1 and 3.9.1.1. Moreover, changing time_step does not help.

Namelist.input:

Code: Select all
 &time_control
 start_year                          = 2017,
 start_month                         = 09,
 start_day                           = 06,
 start_hour                          = 00,
 start_minute                        = 00,
 start_second                        = 00,
 end_year                            = 2017,
 end_month                           = 09,
 end_day                             = 09,
 end_hour                            = 00,
 end_minute                          = 00,
 end_second                          = 00,
 interval_seconds                    = 3600
 input_from_file                     = T,
 history_interval                    = 60,
 history_outname                     = "wrfout_FIL0P10_d<domain>_<date>"
 frames_per_outfile                  = 1,
 restart                             = F,
 restart_interval                    = 60,
 io_form_history                     = 2
 io_form_restart                     = 2
 io_form_input                       = 2
 io_form_boundary                    = 2
 debug_level                         = 0
 /

 &domains
 use_adaptive_time_step              = T,
 step_to_output_time                 = T,
 time_step                           = 30,
 time_step_fract_num                 = 0,
 time_step_fract_den                 = 1,
 max_dom                             = 1,
 e_we                                = 101,
 e_sn                                = 91,
 e_vert                              = 60,
 p_top_requested                     = 5000,
 num_metgrid_levels                  = 32,
 num_metgrid_soil_levels             = 4,
 dx                                  = 11117.748,
 dy                                  = 11117.748,
 grid_id                             = 1,
 parent_id                           = 1,
 i_parent_start                      = 1,
 j_parent_start                      = 1,
 parent_grid_ratio                   = 1,
 parent_time_step_ratio              = 1,
 feedback                            = 0,
 smooth_option                       = 0,
 use_adaptive_time_step              = T,
 step_to_output_time                 = T,
 target_cfl                          = 1.2,
 target_hcfl                         = 0.84,
 max_step_increase_pct               = 10,
 starting_time_step                  = 10,
 max_time_step                       = 60,
 min_time_step                       = 10,
/

 &physics
 sst_update                          = 0
 mp_physics                          = 8,
 ra_lw_physics                       = 4,
 ra_sw_physics                       = 4,
 radt                                = 10,
 sf_sfclay_physics                   = 1,
 sf_surface_physics                  = 2,
 bl_pbl_physics                      = 1,
 bldt                                = 0,
 cu_physics                          = 1,
 cudt                                = 5,
 isfflx                              = 1,
 ifsnow                              = 1,
 icloud                              = 1,
 surface_input_source                = 1,
 num_soil_layers                     = 4,
 num_land_cat                        = 21,
 sf_urban_physics                    = 0,
 /

 &fdda
 /

 &dynamics
 w_damping                           = 0,
 diff_opt                            = 1,
 km_opt                              = 4,
 diff_6th_opt                        = 0,
 diff_6th_factor                     = 0.12,
 base_temp                           = 290.
 damp_opt                            = 0,
 zdamp                               = 5000.,
 dampcoef                            = 0.2,
 khdif                               = 0,
 kvdif                               = 0,
 non_hydrostatic                     = T,
 moist_adv_opt                       = 1,     
 scalar_adv_opt                      = 1,     
 /

 &bdy_control
 spec_bdy_width                      = 5,
 spec_zone                           = 1,
 relax_zone                          = 4,
 specified                           = T,
 nested                              = F,
 /

 &grib2
 /

 &namelist_quilt
 nio_tasks_per_group = 0,
 nio_groups = 1,
 /


And this is the wrf.exe output:

Code: Select all
 Ntasks in X            1 , ntasks in Y            1
--- WARNING: traj_opt is zero, but num_traj is not zero; setting num_traj to zero.
--- NOTE: sst_update is 0, setting io_form_auxinput4 = 0 and auxinput4_interval = 0 for all domains
--- NOTE: grid_fdda is 0 for domain      1, setting gfdda interval and ending time to 0 for that domain.
--- NOTE: both grid_sfdda and pxlsm_soil_nudge are 0 for domain      1, setting sgfdda interval and ending time to 0 for that domain.
--- NOTE: obs_nudge_opt is 0 for domain      1, setting obs nudging interval and ending time to 0 for that domain.
--- NOTE: bl_pbl_physics /= 4, implies mfshconv must be 0, resetting
Need MYNN PBL for icloud_bl = 1, resetting to 0
*************************************
No physics suite selected.
Physics options will be used directly from the namelist.
*************************************
--- NOTE: RRTMG radiation is in use, setting:  levsiz=59, alevsiz=12, no_src_types=6
--- NOTE: num_soil_layers has been set to      4
WRF V3.9.1.1 MODEL
 *************************************
 Parent domain
 ids,ide,jds,jde            1         101           1          91
 ims,ime,jms,jme           -4         106          -4          96
 ips,ipe,jps,jpe            1         101           1          91
 *************************************
DYNAMICS OPTION: Eulerian Mass Coordinate
 alloc_space_field: domain            1 ,             421459652  bytes allocated
med_initialdata_input: calling input_input
Timing for processing wrfinput file (stream 0) for domain        1:    0.70088 elapsed seconds
Max map factor in domain 1 =  1.05. Scale the dt in the model accordingly.
INPUT LandUse = "MODIFIED_IGBP_MODIS_NOAH"
 LANDUSE TYPE = "MODIFIED_IGBP_MODIS_NOAH" FOUND          33  CATEGORIES           2  SEASONS WATER CATEGORY =           17  SNOW CATEGORY =           15
INITIALIZE THREE Noah LSM RELATED TABLES
Skipping over LUTYPE = USGS
 LANDUSE TYPE = MODIFIED_IGBP_MODIS_NOAH FOUND          20  CATEGORIES
 INPUT SOIL TEXTURE CLASSIFICATION = STAS
 SOIL TEXTURE CLASSIFICATION = STAS FOUND          19  CATEGORIES
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
 Flerchinger USEd in NEW version. Iterations=          10
ThompMP: read qr_acr_qg.dat stead of computing
ThompMP: read qr_acr_qs.dat instead of computing
ThompMP: read freezeH2O.dat stead of computing
 mediation_integrate.G        1728 DATASET=HISTORY
 mediation_integrate.G        1729  grid%id            1  grid%oid            1
Timing for Writing wrfout_FIL0P10_d01_2017-09-06_00:00:00 for domain        1:    0.94326 elapsed seconds
Timing for processing lateral boundary for domain        1:    0.46027 elapsed seconds
WRF NUMBER OF TILES FROM OMP_GET_MAX_THREADS =  16
 Tile Strategy is not specified. Assuming 1D-Y
WRF TILE   1 IS      1 IE    101 JS      1 JE      6
WRF TILE   2 IS      1 IE    101 JS      7 JE     12
WRF TILE   3 IS      1 IE    101 JS     13 JE     18
WRF TILE   4 IS      1 IE    101 JS     19 JE     24
WRF TILE   5 IS      1 IE    101 JS     25 JE     30
WRF TILE   6 IS      1 IE    101 JS     31 JE     36
WRF TILE   7 IS      1 IE    101 JS     37 JE     41
WRF TILE   8 IS      1 IE    101 JS     42 JE     46
WRF TILE   9 IS      1 IE    101 JS     47 JE     51
WRF TILE  10 IS      1 IE    101 JS     52 JE     56
WRF TILE  11 IS      1 IE    101 JS     57 JE     61
WRF TILE  12 IS      1 IE    101 JS     62 JE     67
WRF TILE  13 IS      1 IE    101 JS     68 JE     73
WRF TILE  14 IS      1 IE    101 JS     74 JE     79
WRF TILE  15 IS      1 IE    101 JS     80 JE     85
WRF TILE  16 IS      1 IE    101 JS     86 JE     91
WRF NUMBER OF TILES =  16
Timing for main (dt= 10.00): time 2017-09-06_00:00:10 on domain   1:    6.00981 elapsed seconds

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 5001 RUNNING AT glmeteoi01
=   EXIT CODE: 139
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions


If I change sf_surface_physics from 2 to 1 (5-layer thermal diffusion), everything works fine, but I want to understand what causes this error over the Philippines and why there is no error over Greece.

Thanks in advance.
ociurana
 
Posts: 1
Joined: Mon Feb 05, 2018 6:30 am

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby hmacintyre » Wed May 23, 2018 11:55 am

Hi, I'm reviving this thread as I have come across this error and am not sure I have fixed it. Let me explain.

I changed from USGS to MODIS land surface data after updating from v3.6.1 to v3.9. I have run the model successfully over the UK (at 1 km in the inner domain d04 over the West Midlands), making use of the Noah LSM as well as the BEP urban canopy model, for which I have land categories 31, 32 and 33.

However, I found that there were some boxes where the urban category in MODIS (cat 13) was not completely overwritten by the urban categories (cats 31, 32, 33) from the more detailed land cover data. So I modified the met_em file to take the land fraction assigned to cat 13, distribute it over the other land category fractions in the box, and recalculate LU_INDEX. For example, if a box has one third each of cat 13, cat 12 and cat 5, I end up with half cat 12 and half cat 5 after the correction.
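The correction described above can be sketched for a single grid box as follows. This is a hypothetical helper, not code from the post; it assumes the per-category fractions sum to 1 and that LU_INDEX is simply the dominant category:

```python
import numpy as np

def drop_category(fractions, cat):
    """Zero out one land-use category, redistribute its fraction
    proportionally over the remaining categories, and recompute the
    dominant (1-based) category index."""
    f = np.asarray(fractions, dtype=float).copy()
    f[cat - 1] = 0.0
    total = f.sum()
    if total > 0:
        f /= total                    # renormalize the remaining fractions
    lu_index = int(np.argmax(f)) + 1  # dominant category, 1-based
    return f, lu_index

# The example from the post: one third each of cats 13, 12 and 5
frac = np.zeros(33)
frac[[4, 11, 12]] = 1.0 / 3.0         # 0-based indices for cats 5, 12, 13
fixed, lu = drop_category(frac, 13)   # fixed has 0.5 in cats 5 and 12
```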

I then ran the model again and encountered the "Flerchinger USEd in NEW version. Iterations= 10" message. I checked the soil moisture, and it was all fine (range 0.26 to 0.28).
There was another error, "in upward_rad", where a temperature was printed as some extreme value. I have had this error before (see viewtopic.php?f=6&t=9355&p=26109&hilit=roof#p26109), when the roughness of the roof was wrong and caused the roof temperature to drop below 100 K(!). However, I can't see what caused it here.

More confusingly, I just set the model running again and this time it somehow skipped over the errors and is running OK.

Has anyone seen this behaviour before, where these types of errors simply don't happen the second time round?
hmacintyre
 
Posts: 12
Joined: Thu Dec 04, 2014 4:31 pm
