The Flerchinger error is a notification about how soil moisture is being calculated, so I would conclude that crashes accompanied by a Flerchinger error are likely to be connected to the soil moisture in the initial conditions.
I've found that when WPS interpolates my soil moisture data to the model domain, erroneous negative soil moisture values sometimes appear. These can cover very small areas, maybe just one grid cell in the entire domain, and are difficult to see unless you view your initial conditions with an appropriately ranged colour scale. Obviously, negative values of soil moisture don't make any physical sense. And even though it may be just one grid cell, the problem quickly propagates across the domain, giving NaN or missing values all over the place. Set the model to write output every timestep to see this happening.
The solution in my case was simply to edit my wrfinput_d0* files so that there were no negative soil moisture values in the ICs. This is easy with NCL: load the wrfinput_d0* file, get the soil moisture, apply something like SMOIS = where(SMOIS.gt.0, SMOIS, 0.005) to it, and write it back to the wrfinput_d0* file. Note that, as far as I can gather, 0.005 should be the lowest soil moisture value; simply setting the negative values to zero didn't stop the error in my case.
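In case NCL isn't to hand, the same clipping logic can be sketched in plain Python (a hypothetical stand-in for the NCL where(...) call above; the function name is made up, and the 0.005 floor is the value mentioned above):

```python
# Hypothetical sketch of the clipping step described above:
# replace any non-positive soil moisture value with a small
# positive floor, mirroring NCL's where(SMOIS.gt.0, SMOIS, 0.005).
MIN_SMOIS = 0.005  # lowest sensible soil moisture value, as noted above

def clip_soil_moisture(smois):
    """Return a copy of smois with non-positive values set to MIN_SMOIS."""
    return [v if v > 0 else MIN_SMOIS for v in smois]

# One row of grid cells containing a spurious negative value:
print(clip_soil_moisture([0.25, -0.01, 0.30]))  # -> [0.25, 0.005, 0.3]
```

You would of course apply this to the SMOIS array read from the wrfinput_d0* file (e.g. via a netCDF library) rather than to a plain list, but the clipping rule is the same.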
Another solution may be to change the interpolation scheme used for soil moisture in METGRID.TBL to a more complex one, like the scheme used for TSK perhaps. I haven't tried this yet though, so can't recommend anything in particular.
Also, assuming this is an initial-conditions problem, you could try starting the simulation 6 hours earlier or later, to avoid any problems that may be in a particular wrfinput_d0* file.
EDIT: I'll also add that I've noticed similar problems with the interpolation of SST data to the WRF domain. That is to say, in some tiny areas (one or two grid cells) I noticed negative SSTs, even though the SSTs were in Kelvin. These all seemed to be around the sea-ice boundary (which is presumably how they impacted the LSM). This also led to the Flerchinger message and a model crash soon after initialisation. Editing the wrflowinp_d0* files so that all SSTs were greater than or equal to 0 K made the message and the resulting crash go away.
Another source of Flerchinger errors I've found is sea ice in the domain, whether in the initial conditions or through wrflowinp_d01. SEAICE is interpolated to the WRF domain against the LANDSEA mask provided by the source data; SST interpolation does not use this mask. In many cases the source data's LANDSEA mask can be quite coarse, meaning that SEAICE is not interpolated at all smoothly onto the grid, especially at the edges between the sea ice and the land mask of the domain. Changing "interp_mask = LANDSEA(1)" to "interp_mask = LANDMASK(1)" in METGRID.TBL.ARW (for the SEAICE entry) gives a much better interpolation around the edges of the sea ice and, in my case, was enough to eliminate the Flerchinger error.
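For reference, the edited SEAICE entry in METGRID.TBL.ARW would then look something like this. This is only a sketch: the interp_option, masked, and fill_missing lines are typical of default tables and may differ in your WPS version; the only change being suggested here is the interp_mask line.

```
========================================
name=SEAICE
        interp_option=five_pt+four_pt+average_4pt+search
        interp_mask=LANDMASK(1)
        masked=land
        fill_missing=0.
========================================
```

Remember to re-run metgrid.exe (and real.exe) after editing the table so the new mask actually takes effect.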