Much to my chagrin, I've been charged again with running a captain's promotional exam. Those circumstances are another story entirely. However, whatever the irony and backstory, it's happening.
As I begin the process of assembling a team, updating the test, etc., a question forms, as it always does when testing time arrives: does our testing process choose the best potential person for the job, or does it help us choose the best test taker? I don't pose that question because I see today's testing process producing a deficient crop of officers. The top scorers are typically driven and dedicated, working hard for what they get. Still, I come back to the question - is there a better way? Because driven and dedicated also describes many, if not most, of those who don't score high enough for promotion.
As a person who has taken many different tests, with varying results, I would reply that it depends. Of course, when I didn't get the promotion, the test sucked, I was robbed, the other folks were far inferior to me but the test didn't indicate that, etc. etc. etc. When I was successful, boy was that test awesome. As a relatively normal human being, most everything in life runs through that little ego-centric analysis. Things go well for me? Brilliant. Not the best outcome for me? There's probably something wrong with - everyone else, the test, and so on. Maybe I didn't perform all that well, but that seems quite unlikely, knowing myself as I do.
Back in the objective real world, I always wonder about and challenge how we design and run our promotional exams. Over the history of the fire service, and my own career, these exams have been approached in many ways, and they're handled somewhat differently throughout the country as well. The goal is always the same: find the best candidate, amongst many, who will make a good officer. For probably the last 20 years in my organization, and others locally, the process for company officer has been this: a minimum amount of time on the job and some college classes, a written test, an oral board, a tactical exercise, and some kind of job simulation. That last part may include personnel interactions, an in-basket exercise, and/or a variety of other behavioral performances - collectively described as an assessment center. All of it is reduced to numerical scores and served up to the chief, and there are your officers - leaders - managers. It's a way to choose that is pretty fair (same test for all) and satisfies the human resources folks, with all their silly rules about legality and other stuff we like to whine about as useless. Even though - chief talk here - it's critically important stuff for a professional organization, we still whine.
To further expand upon my first question: does this process really give us the best potential officer? How do we know? In a world of measurement, especially when it relates to budget items, what do we use to measure the results of our carefully crafted testing other than a numerical result? I feel pretty confident stating that anyone reading this has worked for or with folks who are either gods of the fire service, or people who make you wonder who shows them the way to work each day, because even that simple task might be a challenge for them. Both types have gone through some kind of testing process. How did the hyper-challenged make it through, and why can't number 27 on the list (a position most smaller departments never reach), who would probably be one of those leader/gods if given the chance, seem to get there? Does anyone actually measure what we get on the results side of the promotional process other than numerical ranking? Is there a way to measure it? Yes, there's a probationary time frame. But is it meaningful, does it measure anything, is it really part of the process? I would submit that it rarely does much more than make an individual keep their head down and quietly make it through probation somehow. That's a bit cynical, but I have seen it work that way.
Back to the measurement piece. Back in the day - a little before my time - promotion was a simple process. Seniority was the test. Jimmy's retiring. Bob, you're the next senior guy, so next shift you're in charge of engine two. Some departments may still do this; I don't know. That's going back quite a few years. Next came an interview along with some kind of tactical exercise - maybe written, maybe with some crude simulator. Then came written exams, oral boards, and tacticals. Then it seems the cops started in with assessment centers, stolen/borrowed from the private sector, so little acting vignettes were developed to try to observe behaviors. Which is where we sit today. Some of the chosen-by-seniority guys were gods; some were turds. Some of the full-blown, weeks-long assessment centers produced gods, and some produced turds as well. Has anyone measured whether we're doing better now? Can it be measured? Are there too many variables? Can what we want from our leaders be measured effectively and fairly? Are we even clear about what it is we want specifically from our officers? Did the old date-of-hire-based program give us the same ratio of gods to turds? It would certainly eliminate those young smart-a**** who test well but lack meaningful years of experience. And that is certainly an unfair blanket statement, because many of those younger folks are, or become, talented and awesome leaders. Actual experience, especially with fires and major emergencies, is becoming more and more rare. Those items are the low-frequency leadership challenges. Just so much more whining by a dinosaur...maybe I'm now thinking that seniority-only thing isn't so bad. Seniority is a great thing, when you have some.
I have no answers to my questions, because I can't figure out how to measure these things - that's the real dilemma in my mind. I really don't rail against the current process. It "seems" to be the best way. As I described, it has evolved over the years, is validated by the requisite legalities, and does provide a competitive process. It is universally agreed that the company officer is the most critical part of the fire service leadership/management/delivery model. Good or bad, choosing this way is the best method we've got, without any quantitative or qualitative measurement of the success of our process - only a string of subjective observations of who is good and effective, and who is merely wearing out a La-Z-Boy (BarcaLounger, or whatever) and the channel-changer.
Yet another argument: what exactly is the definition of a good and effective leader? Seems that should come from the big chief, and his immediate minions - the folks who are supposed to have the vision we all support. While access to, and an understanding of, that vision may be available, I've never had it communicated to me in a meaningful way. Maybe I was gone that day, or I should have sought it out myself with a little more enthusiasm. As usual, I will take no blame for that shortcoming, even if I deserve some. That's the way I like to roll, if I can get anyone to believe....
As we prepare for our upcoming test, I've thrown this question out to everyone (bottom to top) in our department: here's what we've always done - are there any suggestions, thoughts, or innovative approaches out there? Give me something at least equal to, if not better than, our current gauntlet to promotion. If it's reasonable and doable, maybe we can give it a try. If not, we'll do it the dreaded (and not necessarily bad) way we've always done it. Either way, we're going to do it again. We'll end up with some brand new company officers, ready to do good for and with the people they serve and lead - or to change channels at a higher pay grade.
I would really like to hear if anyone else has wondered any of this stuff. If so, does anyone have any answers? Or maybe more questions. Who we are as an organization is shaped significantly by our leaders. How they're chosen, especially since we choose most of them ourselves, is pretty important stuff.