Flerchinger USEd in NEW version. Iterations= 10

Any issues with the actual running of WRF.

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby bebop » Thu Nov 08, 2012 12:15 pm

Hello, I had the same problem recently. I decreased the timestep in the namelist.input file from 180 to 100 sec and the problem disappeared (see the sketch below).
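For anyone unsure where that lives: time_step is set in the &domains section of namelist.input. A minimal sketch of the change (all other entries omitted):

    &domains
     time_step = 100,    ! model timestep in seconds for the outermost domain; was 180
    /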

But I discovered today that when I run simulations with mpirun using 16 processes, everything goes fine. Unfortunately, when I increase to 22 processes or more, the Flerchinger message is back.

Any ideas about this?

Thank you,
Best regards
B.

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby phowarth » Thu Jul 25, 2013 3:28 am

I have the same problem: it works fine with 20 or fewer processes, but crashes with Flerchinger errors if the number of processes is increased any further.

Has anyone come up with a cause or solution yet?

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby nh_modeler » Mon Aug 12, 2013 8:27 am

Has anyone found a solution to this yet? It happens for me when I try to use the Noah LSM initialized from GFS. I have not made any modifications to Noah and am using a short timestep.

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby herbert12345 » Sun Sep 08, 2013 9:28 am

Let me contribute some of the things I found:

- The Flerchinger message in itself is not an error but simply a status message emitted by the Noah LSM. However, it appears to occur when the model state is abnormal. For this reason, you can get rid of the message by changing the LSM, but the underlying error will remain and may still cause the model to crash, write NaNs or do other bad things. The Flerchinger message does not point to the source of the problem.

In my specific case, checking the rsl.error files I found that the model would crash in the longwave radiation code but emit Flerchinger messages earlier. That was not the root of the problem either. Looking deeper into the logs, I found error messages about violations of the vertical CFL condition. It turned out that there was an instability that could be fixed by setting epssm in the dynamics namelist to 0.3 (instead of the default of 0.1); see the sketch below.
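In case it helps, a minimal sketch of that change in namelist.input (all other entries omitted):

    &dynamics
     epssm = 0.3,    ! time off-centering for vertically propagating sound waves; default 0.1
    /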

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby ronbeag » Fri Nov 01, 2013 11:56 am

The Flerchinger message is a notification about how soil moisture is being calculated. So I would conclude that crashes accompanied by the Flerchinger message are likely to be connected to the soil moisture in the initial conditions.

I've found that when WPS interpolates my soil moisture data to the model domain, erroneous negative soil moisture values sometimes appear on the domain. These can cover very small areas, maybe just one grid cell in the entire domain, and are difficult to see unless you view your initial conditions with an appropriately ranged colour scale. Obviously, negative values of soil moisture don't make any sense. Also, even though it may be just one grid cell, the problem will quickly propagate across the domain, giving NaN or missing values all over the place. Set the model to write output every timestep to see this happen (a namelist sketch follows).
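One way to get per-timestep output (a sketch; history_interval_s takes seconds, and the 100 s here assumes a 100 s timestep as an example, so adjust it to match your own time_step):

    &time_control
     history_interval_s = 100,    ! write history every 100 s, i.e. every timestep in this example
     frames_per_outfile = 1000,   ! keep many frames in one file rather than one file per write
    /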

The solution in my case was to simply edit my wrfinput_d0* files so that there were no negative soil moisture values in the ICs. This is easy with NCL: load the wrfinput_d0* file, read the soil moisture, apply something like SMOIS = where(SMOIS.gt.0, SMOIS, 0.005) to it, and write it back to the wrfinput_d0* file (a full sketch is below). Also note that, as far as I can gather, 0.005 should be the lowest soil moisture value; simply setting the negative values to zero didn't stop the error in my case.
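A minimal NCL sketch of that edit (it assumes a single wrfinput_d01 file and modifies it in place, so work on a copy):

    ; clamp_smois.ncl -- raise non-positive soil moisture to a small positive floor
    begin
      f = addfile("wrfinput_d01", "w")           ; open read/write for in-place modification
      smois = f->SMOIS                           ; soil moisture (Time, soil_layers_stag, south_north, west_east)
      smois = where(smois .gt. 0, smois, 0.005)  ; keep positive values, floor the rest at 0.005
      f->SMOIS = smois                           ; write the corrected field back
    end

Run it with: ncl clamp_smois.ncl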

Another solution may be to change the interpolation scheme used for soil moisture in METGRID.TBL to a more robust scheme, perhaps like the one used for TSK. I haven't tried this yet, though, so I can't recommend anything in particular (the sketch below just shows where the setting lives).
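For orientation only, a METGRID.TBL entry for one of the soil moisture fields looks roughly like this (the field name SM000010 and the option values are examples from memory; check the defaults shipped with your WPS version before editing):

    ========================================
    name = SM000010
            interp_option = sixteen_pt+four_pt+wt_average_4pt+wt_average_16pt+search
            masked = water
            interp_mask = LANDSEA(0)
            fill_missing = 1.
    ========================================

The interp_option line is the one you would experiment with.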

Also, assuming this is an initial-conditions problem, one could try starting the simulation 6 hours earlier or later, to avoid any problems that may be present in a particular wrfinput_d0* file.

EDIT: I'll also add that I've noticed similar problems with the interpolation of SST data to the WRF domain. That is to say, in some tiny places (one or two grid cells) I noticed negative SSTs, even though the SSTs were in Kelvin; these all seemed to be around the sea-ice boundary (which is presumably how they affected the LSM). This also led to the Flerchinger message and a model crash soon after initialisation. Editing the wrflowinp_d0* files so that all SSTs were greater than or equal to 0 K made the message and the resultant crash go away.
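The same NCL pattern as above works here (a sketch; flooring at 0 K mirrors what I did, but it is obviously unphysical for open water, so replacing bad cells with a nearby valid SST would be better):

    ; clamp_sst.ncl -- remove negative SSTs from a lower-boundary file (work on a copy)
    begin
      f = addfile("wrflowinp_d01", "w")   ; open read/write for in-place modification
      sst = f->SST                        ; sea surface temperature in K (Time, south_north, west_east)
      sst = where(sst .ge. 0, sst, 0.0)   ; floor negative values at 0 K
      f->SST = sst
    end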

Another source of Flerchinger errors I've found is sea ice in the domain, either in the initial conditions or through wrflowinp_d01. SEAICE is interpolated to the WRF domain against the LANDSEA mask provided by the source data; SST interpolation does not use this mask. In many cases the source data's LANDSEA mask can be quite coarse, meaning that SEAICE is not interpolated at all smoothly to the grid, especially at the edges between the sea ice and the land mask of the domain. If you change "interp_mask = LANDSEA(1)" to "interp_mask = LANDMASK(1)" in the SEAICE entry of METGRID.TBL.ARW (shown below), you get a much better interpolation around the edges of the sea ice; in my case, this was enough to eliminate the Flerchinger error.
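For concreteness, the edit looks like this in the SEAICE entry of METGRID.TBL.ARW (only the interp_mask line changes; leave the other lines of the entry as they are in your file):

    name = SEAICE
    # ... other lines of the SEAICE entry unchanged ...
    # was: interp_mask = LANDSEA(1)
            interp_mask = LANDMASK(1)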

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby Arshal Wang » Thu Nov 03, 2016 9:57 pm

Dear All,

I am doing some research using the TC bogussing scheme and get the same problem. I tried changing the sf_surface_physics scheme but still got a "Segmentation fault (core dumped)". As dbh409 said, I checked the input files for real.exe, and I found that the output file from tc.exe is different from the met file from WPS.
I guess there may be some problem with the TC Bogus scheme, and I do not know how to solve it.
Please help me.
Thanks in advance,
Arshal Wang

Re: Flerchinger USEd in NEW version. Iterations= 10

Postby kwthomas » Fri Nov 04, 2016 4:20 pm

The general rule is that the timestep (in seconds) should be no larger than 6x the grid spacing (in km). Anything larger will probably go unstable, and usually quickly. A worked example is below.
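For example (the numbers are illustrative): with 9 km grid spacing, the rule gives a maximum timestep of roughly 6 x 9 = 54 seconds, so in namelist.input you might set:

    &domains
     dx        = 9000,   ! grid spacing in metres (9 km)
     dy        = 9000,
     time_step = 54,     ! seconds; no more than 6x the grid spacing in km
    /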

If your run is initialized with severe convection in progress (assuming radar data is used as part of the initial conditions), you may need to reduce the timestep.

Vertical levels are critical too. Normally, the CAPS spring runs (HWT experiment) have no problems. For 2016, we went to the experimental HRRR vertical level scheme, and some runs were failing even at a timestep of 4x the grid spacing.
Kevin W. Thomas
Center for Analysis and Prediction of Storms
University of Oklahoma
