Managing Your Research Writing For Success: Passing The "Gatekeepers"



Charles C. Fischer (Chuck@pittstate.edu) is University Professor and Chair, Department of Economics, Finance and Banking, Pittsburg State University. He is also the Editor in Chief of the Journal of Managerial Issues.


 

ABSTRACT

This article is aimed at academic researchers, as well as practitioners, who wish to publish their work successfully. Though it is intended mainly for those seeking publication in peer-reviewed (i.e., refereed) academic journals, much of it is also of value to those targeting other outlets (e.g., trade publications, magazines). The goal is to provide authors with practical guidance on how to improve their odds of publishing success. The guidance offered is based on: (1) referee review data from a successful management journal, (2) insights from the Editor in Chief of that journal, and (3) key insights from the literature. A main theme that emerges is that successful publishing is not difficult if one takes a strategic approach.


 

Introduction

In 1989, the inaugural issue of The Journal of Managerial Issues (JMI) was launched. Three years later, based partly on my experience as founding and current Editor in Chief (hereafter referred to as Editor) of JMI, I wrote a research article on the common pitfalls that lead to publishing failure [Fischer, 1992]. This article was based on three years of quantitative and qualitative data from JMI, as well as a review of the extant literature. The current article builds on the earlier study, utilizing a larger referee dataset, as well as 16 years of experience as Editor. A main theme to emerge from this study is that being a successful researcher/publisher in quality academic journals is largely a matter of understanding what referees look for in a quality manuscript and taking a strategic approach in the development and targeting of one's submission.

In light of the important role publishing plays in faculty tenure and promotion decisions, as well as in the AACSB International accreditation process for business schools, academic journal publishing matters both to the professional success of faculty and to the academic prestige of their employing institutions. Yet it is not uncommon for new faculty just out of graduate school to be unprepared to publish in academic journals. While the research and writing skills required for the doctoral dissertation are important tools for successful authorship, there is much to be learned beyond these for academic journal publishing. Many times, as Editor, I have seen a referee's comment along the lines of "...this looks like something lifted from the author's dissertation." And, often, it is painfully obvious.

This article is intended to help those new to academe better understand the determinants of successful journal publishing, as well as help experienced authors to be more effective (e.g., fewer rejections, less revision work).

Referees' "Looking Glass"

Faculty in many business schools are expected to publish in "peer-reviewed" academic journals. This usually means "refereed" journals. Referees' recommendations are an important consideration in the Editor's decision on whether to publish a manuscript. In fact, referees' recommendations often are the determining factor, except in special cases (e.g., when a paper is invited by the Editor, or when the Editor serves as a "tie-breaker" in manuscript evaluation). Thus, it is not surprising that journal referees are often viewed as "gatekeepers," separating wheat from chaff. The big question for aspiring authors is how to get past these gatekeepers.

A strategic approach to making it past the gatekeepers is to tailor one's research and writing to what referees typically look for in a submission. This calls for knowing the referees' "looking glass," if you will [Lee, 1995; Hamermesh, 1994]. While I had some intuitive understanding of this as an experienced author when we started JMI, I felt I needed a more rigorous frame of reference. Thus, I surveyed Editors of similar journals in an attempt to identify a common set of manuscript evaluation criteria. The goal was to identify those elements of a quality paper that management journals tend to rely upon in evaluating manuscript submissions.

Using Cabell's Directory of Publishing Opportunities in Management [1990], seventy-five Editors of comparable management journals were identified. Each received a letter of inquiry asking whether he or she required the journal's referees to use formal evaluation criteria and, if so, to identify them. Forty-six usable responses were received (an effective response rate of 61 percent); of those, 32 (70 percent) answered "yes" and provided the criteria. Selecting those criteria mentioned by 50 percent or more of the respondents yielded seventeen composite criteria [Fischer, 1992]. At JMI, I selected from this list of 17 those criteria I felt were most appropriate for our Editorial mission. This yielded the eleven criteria listed in Table 1.

It is noteworthy that, in our sample, a majority of Editors gave their referees formal criteria to use, and that a common set of criteria emerged from their referee policies. These criteria help identify the domain of manuscript evaluation by referees, and they provide a good checklist for appraising the quality of one's own work. A word of caution is in order, however: some journals and some referees may have rather unique criteria for evaluating manuscripts (e.g., the conclusions reached in a paper). The referee process can be fickle at times, and success may be heavily influenced by the "luck of the draw." Nonetheless, paying attention to the evaluation criteria presented here will likely enhance one's chances of publishing success.

Referees' "Looking Glass": A Case Study

To better flesh out the nature of the referees' "looking glass," I now turn to an extensive case study covering thirteen years of referee data from JMI. Referees for JMI are asked to provide written comments and to complete an evaluation matrix (Reviewer Report Form; RRF). This matrix resembles Table 1, but includes additional adjectival ratings (ranging from "Completely Inadequate" to "Superior" and from "None" to "Path-Breaking"). Since the data reported here were taken from those RRFs related to rejected manuscripts, only the problem adjectival ratings for each criterion are of interest (see Table 1).

When JMI receives a submission, it is first reviewed by the Editor for fit, overall quality, etc. Those that pass this in-house review (about 40 percent) are then sent to two external referees. Care is taken to ensure a good match between the manuscript and the referees' interests and expertise. A database of referees' reviewing preferences is used to develop a list of referee candidates for a particular submission. The Editor selects two individuals from this list, and each receives an e-mail asking if he or she is interested in reviewing the manuscript, based on its title and abstract (included in the e-mail). This process continues until two referee acceptances are obtained. The manuscript is then mailed to each referee, along with a detailed letter setting forth our expectations, evaluation forms, a prototype example of a helpful review, and a detailed statement of our review philosophy and review process (all materials available upon request). In the letter, the referee is asked to promptly return the manuscript if he or she does not feel qualified to review it. The initial e-mail to the referee, plus this reminder, minimizes the odds of a referee reviewing a paper he or she is not interested in or not competent to review.

Table 1 presents a summary of referee evaluations of the manuscripts rejected by JMI (N = 285), based largely on referees' recommendations. The percentage reported for each criterion was calculated by dividing the frequency of responses for the adjectival rating by the number of completed RRFs (563 rather than 570, because a few manuscripts were reviewed by three referees or by only one). These percentages, as well as the relative rankings of the eleven criteria, are shown in Table 1. To rank the criteria by level of importance to the referees, I added the two percentage scores for each criterion to obtain a combined score, and then ranked the criteria from highest (1 = most frequently noted by referees) to lowest (11).
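To make the scoring arithmetic concrete, here is a minimal sketch in Python (illustrative only, not JMI's actual tabulation procedure; the dictionary simply transcribes the 1989-1991 percentages from Table 1) that reproduces the combined scores and the first rank column:

```python
# Minimal sketch: reproduce Table 1's 1989-1991 combined scores and ranks.
# Each criterion maps to its two problem-rating response rates (in percent):
# "Completely Inadequate"/"Major Problems", or "None"/"Trivial" for the
# significance criteria. Combined score = sum of the two percentages;
# rank 1 = most frequently noted by referees.

ratings_1989_91 = {
    "Clarity of Objectives": (4, 22),
    "Conceptual Rigor": (12, 37),
    "Methodological Rigor": (13, 47),
    "Logical Organization": (6, 15),
    "Treatment of Relevant Literature": (7, 25),
    "Discussion of Results": (13, 28),
    "Readability": (3, 19),
    "Length/Contribution Ratio": (10, 29),
    "Significance of Topic": (1, 44),
    "Significance of Contribution (in current form)": (15, 50),
    "Significance of Contribution (potentially, if revised)": (6, 22),
}

# Combined score per criterion.
combined = {c: a + b for c, (a, b) in ratings_1989_91.items()}

# Rank from highest combined score (1) to lowest (11) and print.
for rank, (criterion, score) in enumerate(
        sorted(combined.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank:2d}. {criterion}: {score}%")

# First two lines of output, matching Table 1's 1989-1991 rank column:
#  1. Significance of Contribution (in current form): 65%
#  2. Methodological Rigor: 60%
```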

The referees selected for manuscript review at JMI come from a large and diverse pool. Our available pool of referees numbers a little over 600, representing all regions of the U.S. (48 states) and many countries throughout the world. Our referees are mostly from academe, with about 40 from industry or government. With few exceptions, this latter group, like its counterparts in academe, holds doctoral degrees. Referees from academe represent a wide range of institutions, from large "flagship" research institutions to small regional colleges, in both the public and private academic arenas. Yet, out of such a large and diverse pool, some clear and prominent themes emerge as to how referees evaluated the manuscripts rejected at JMI.

In Table 1, data are presented for the initial three-year study (1989-91) [Fischer, 1992] and the last full ten years (1994-2003). (Referee data for 1992-93 are not available.) Several important insights can be drawn from these data. Most striking is the consistency of the top three areas of concern noted (most frequently cited) by referees over both periods: (1) significance of contribution, (2) methodological rigor, and (3) conceptual rigor. Drawing upon referees' comments to the authors and Editor, it is clear that the main problems regarding a lack of significance are the authors' failure to go beyond the extant literature and, closely related, findings that are of little value or interest. While it is not necessary that research be path-breaking (which is quite rare and often difficult to get published), it should go beyond what is commonly known (e.g., a more extensive demographic sample than used in previous studies, to verify or refute the generality of findings). Unfortunately, however, there may be some truth in the notion that "A 'safe' paper that lines up the tenth duck in some row will [likely] be refereed by the progenitors of the fifth and eighth ducks, whose reactions will be: 'Goody, another duck, and my name is in the references'" [Shepherd, 1995: 115]. It is not surprising that "Significance of Contribution" often causes authors problems with referees, for it requires a delicate balance between adding value to the literature and remaining within orthodoxy (except for those journals that champion non-orthodox research; e.g., the Review of Radical Political Economics).

The main issues regarding methodological rigor were sample problems and limitations (e.g., generality, single-source bias) and inappropriate statistical analysis (e.g., inadequate control variables, inappropriate technique). The most common problem regarding conceptual rigor was a lack of adequate foundation work for the proposed hypotheses. (Keep in mind that JMI focuses on empirical research; the concerns noted by its referees thus apply to that type of research and may not be relevant to other types.)

We also see considerable stability, with one glaring exception, in the rankings of the remaining criteria in Table 1: "Discussion of Results" (5/4), "Length/Contribution Ratio" (6/5), "Treatment of Relevant Literature" (7/6), "Significance of Contribution (potentially, if revised)" (8/7), "Clarity of Objectives" (9/9), "Readability" (10/10), and "Logical Organization" (11/8). Usually these are not fatal flaws and can be remedied by addressing the referees' objections. Even so, they can lead to a rejection recommendation if too many of these problem areas are evident and/or if they are particularly flagrant (e.g., a very dated literature review). The only evaluation criterion to flip-flop over time is "Significance of Topic" (4/11). Perhaps, as the journal established its identity over time, authors targeting it had a better grasp of what appealed to its referees than when it was new.


Table 1

JMI Reviewer Report Data Summary:

1989-1991 (N = 68)/1994-2003 (N = 217)

 

Adjectival ratings show referee response rates, 1989-1991/1994-2003.

Evaluation Criteria                                       Completely    Major       Combined    Rank
                                                          Inadequate    Problems    Score
Clarity of Objectives                                     4%/3%         22%/24%     26%/27%     9/9
Conceptual Rigor                                          12%/14%       37%/39%     49%/53%     3/3
Methodological Rigor                                      13%/16%       47%/44%     60%/60%     2/2
Logical Organization                                      6%/5%         15%/26%     21%/31%     11/8
Treatment of Relevant Literature                          7%/11%        25%/28%     32%/39%     7/6
Discussion of Results                                     13%/8%        28%/37%     41%/45%     5/4
Readability                                               3%/3%         19%/17%     22%/20%     10/10
Length/Contribution Ratio                                 10%/14%       29%/30%     39%/44%     6/5

                                                          None          Trivial
Significance of Topic                                     1%/3%         44%/7%      45%/10%     4/11
Significance of Contribution (in current form)            15%/24%       50%/42%     65%/66%     1/1
Significance of Contribution (potentially, if revised)    6%/4%         22%/28%     28%/32%     8/7
Note: The "Length/Contribution Ratio" speaks to the strength of the paper's contribution relative to its length ("bang per buck," if you will), and is important due to the relative scarcity of publication space.


It is noteworthy that studies of similar journals suggest that the problems identified above are common to manuscript submissions in general [Thompson, 1981; Betts & Penbera, 1989; Townsend, 1988; Beyer, Chanove, & Fox, 1995]. The Journal of Small Business Management (JSBM) found that the reasons for unfavorable recommendations fell into three broad categories: (1) deficiencies in scholarship, (2) shortcomings in writing quality, and (3) lack of applicability to the journal [Thompson, 1981]. At JMI, the last two problem areas are responsible for a majority of submissions not passing the in-house review. The data in Table 1, as well as similar studies reported in the literature, indicate how important it is to think through the research design carefully, lay the proper theoretical foundations, and understand thoroughly what has been done in the literature when attempting to make a value-added contribution [Germano, 2001]. Much time and effort can be lost by authors who fail to give adequate attention to these potentially fatal problem areas.

The In-house Review: Qualitative Insights from the Editor

Over the last 16 years as Editor of JMI, I have reviewed nearly 2,000 manuscript submissions in-house to determine their suitability for review by our referees (with approximately 40 percent passing the in-house review and going on to external review). While some Editors send nearly all submissions out for review, my philosophy is to send out only those that seem to have good potential for acceptance, thereby saving the time of both authors and referees. In this regard, I am the initial "gatekeeper" (though I turn to "blind partners," the referees, for those submissions that show merit).

The reasons for submissions not passing the in-house review at JMI boil down to a few crucial problem areas:

1. Does Not Fit the Journal's Editorial Mission. It is important to know the journal selected for submission. At a minimum, an author should understand the journal's Editorial mission (e.g., topics of interest, research methodologies required or preferred). Better yet, the author should become familiar with the nature and style of the journal's articles and tailor his or her submission accordingly. Shepherd stresses journal targeting from the very beginning: "Don't just write the paper and then start thinking of possible journals" [1995: 120]. "Some journals tend to have revealed preferences ... about the area of the research that they publish .... Give this some thought ...." [Shepherd, 1995: 112]. It is apparent when a paper is written for JMI, since it is a hybrid academic-practitioner publication and has particular "markers" in its articles (e.g., an emphasis on managerial implications). An author attuned to such matters enhances his or her chances of success.

2. Sloppy Writing. Sloppy writing, unfortunately, is a common problem. Chris A. Betts and Joseph J. Penbera interviewed journal Editors, AACSB accreditation representatives, and business faculty in an attempt to identify the key reasons for manuscript rejection. They concluded that one of the main reasons for publishing failure is "... the submission is poorly written" [1989: 1-3]. Editor James H. Thompson reports such specific criticisms as "... shortcomings in organization and paragraphing, poor word choice, lack of variety in sentence structure, wordiness, lack of clarity, grammatical and typographical errors, and overly pedantic (or, conversely, overly 'journalistic') tone" [1981: 9]. Such problems raise more than one concern about a submission. Most apparent is the difficulty of (and the Editorial time and resources required for) turning it into a quality paper. Further, sloppy writing can cast doubt on other aspects of the research (e.g., sloppy data collection and statistical analysis). With adequate care, however, sloppy writing is avoidable. As Betts and Penbera argue, poor writing is often more a matter of haste and carelessness than a lack of writing expertise. In their words, "While very few are born writers, a poor manuscript is often the result of premature submission rather than a lack of competence" [1989: 1].

3. Out-of-Date Literature. An out-of-date literature base is a poor foundation for any research effort. It also raises the question of whether the submission is a previously rejected paper. While it is permissible to submit a rejected paper to another journal (as long as the research is still relevant), a badly out-of-date literature base suggests the paper may have been "shopped around" for some time. It is usually best to revise and update before resubmitting a rejected paper.

4. Does Not Meet Adequate Levels of Scholarship. Submissions sometimes lack necessary academic rigor (e.g., an opinion or thought piece) and scholarship (e.g., little or no literature support). For a journal with high standards of scholarship, this problem is a variant of poor targeting.

5. Unwieldy. Some submissions are so unwieldy (e.g., overly complex, poorly organized, disjointed) as to make them, from an Editorial standpoint, more work than they are worth. Editors facing scarce time and resources will not likely "fight" an unwieldy submission, especially when there are many good ones to select from.

For JMI, the first two problem areas above--poor fit and sloppiness--have been the most frequently encountered. This is both unfortunate and unnecessary, as these problems can be avoided by careful targeting of one's research and "sweating the details" in manuscript preparation.

Strategies for Success

Publication success largely involves avoiding the common pitfalls (i.e., passing the gatekeepers) and being strategic in the development of one's research, writing, and submission targets. In this section, strategies for publishing success are presented. These strategies are drawn from our referees' comments, from discussions with other journal Editors and successful authors, and from the literature [see Pasco, 2002; Thompson, 1981; Locker, 1994; Turk & Kirkman, 1982; Lewis, 1982; Gilman, 1961; Betts & Penbera, 1989; Tichy, 1966; Day, 1989; Davis, 1985; Gilliland & Cortina, 1997; Hernon, 1993; Maske, 2003].

1. Pick Your Level and Build Up. For an inexperienced author, it may be best to start with less competitive publication outlets and build upon success at that level. This may mean starting with monographs, working papers, and conference proceedings. The next level might be regional journals with a fairly high acceptance rate (30-50 percent), and then ranked national journals (e.g., the top 20 by number of citations). This progression may run all the way to the top two or three journals in the discipline (e.g., Academy of Management Journal). Where one starts and how far one goes up the ladder is an individual matter, depending on ability and ambition. Interestingly, this strategy runs counter to that of those who argue for submitting to a top-level journal first and, if that fails, working one's way down. The latter may be a sound approach if one realistically has a chance of getting published in a journal with, say, a 5 percent acceptance rate. Otherwise, it wastes Editors' and referees' time and, given the long review times at many journals, can threaten the timeliness of a piece of research.

2. Diversify Your Portfolio of Submissions. This strategy can be a hybrid of the above, where one pursues several levels at a time with a variety of research projects. This is a good strategy for faculty tenure candidates, who must have several publications by the end of the candidacy period. It is a way of hedging one's bets, instead of hoping for one or two "big hits." Even so, it is important to be realistic about one's chances: the portfolio should be diversified across outlets where success is realistically probable.

3. Follow Your Comparative Advantage. Rather than go solo as an author, team up with co-authors selected strategically for their expertise (e.g., "word smithing," questionnaire design, statistical analysis). With each co-author focusing on his/her comparative advantage as a researcher/writer, the team can reap the benefits of specialization and division of labor.

4. Apprenticeship/Mentor. Work with experienced authors. Successful authors are often willing to take on less experienced co-authors to perform such "lesser" tasks as literature research and data "crunching." These opportunities may arise in a graduate program for students working with experienced professors, or on the job after graduate school, especially in departments that have formal mentoring programs (strongly encouraged by AACSB International). The advantage of an apprenticeship or mentoring arrangement for inexperienced authors is "learning while doing" under the guidance of those more experienced.

5. Network. Develop professional relationships with other researchers in the field (facilitating strategies #3 and #4 above). One good way to do this is by participating in professional conferences (e.g., paper presenter, discussant, session chair), where one can meet those with similar interests and co-authoring potential.

6. Learn From the Best. A good strategy is to study conference award papers and journal "best papers" issues. If one carefully analyzes these papers, common elements of sound scholarship and effective writing should become apparent.

7. Get Critical Feedback. Take advantage of the expertise of colleagues and get critical feedback from them. This is one reason why paper presentations can be a crucial step toward publication in a refereed journal. An experienced paper discussant can offer valuable comments to the author.

8. Learn Critical Evaluation Skills. Learn to look at your own work critically. One can hone critical evaluation skills by reviewing for various journals in the discipline. (Perhaps, a more difficult task is to be objective about one's own work.)

9. Market Your Submission to the Editor. This is where a good cover letter is important. The cover letter can be strategic: one that succinctly sets forth the value-added contribution of the paper, answers the "so-what" question, and explains why the paper would be of interest to the journal's readership [Gump, 2004]. A well-crafted abstract can play a similar role. A good cover letter and abstract make the Editor's initial screening more efficient and should have a positive impact on his or her first impression of the submission.

These strategies are aimed at developing an author's skills, utilizing the expertise of colleagues, and effectively "marketing" one's research. Such a three-pronged approach holds the promise of publishing success.

Conclusion

Managing your research writing for publishing success is not difficult, but it does require a methodical approach. It is important to know how referees tend to approach manuscript evaluation (e.g., the common criteria used), so that you have a better chance of passing these gatekeepers. The data and insights offered here should be useful, as they identify common pitfalls to avoid and suggest strategies to pursue. One key strategy is to know well the journal you wish to target. Many journals follow an essentially formulaic approach in their articles, and an article that reflects the scholarly nuances of the journal is likely to be appreciated by its Editor and referees. And, since the competition can be keen, it is important to be strategic not only in journal targeting, but also with respect to your research efforts (e.g., following your comparative advantage among your co-authors) and your portfolio of submissions (diversification reduces the risk of failure). Also, sweat the details [see McCrimmon, 1983; Day, 1989; Daniel, 1993; Gilman, 1961; Lewis, 1982]. Editors and referees appreciate a paper that has been finely crafted and is largely free of errors. A conscientious effort stands out. Finally, do not get discouraged when all does not go well [see Bochanski, 1995]. Take seriously the feedback you receive from the referees, solicit advice from colleagues, and refine and resubmit (when appropriate). Inexperienced authors should keep in mind that nearly everybody receives rejections. They are not necessarily a sign of incompetence, and they are as much a part of publishing as strikes are a part of baseball. You strike out, you go back to bat.


References

Betts, C. A. and J. J. Penbera (1989). A Guide to 100 Publishing Opportunities for Business Faculty. Fresno, CA: The Press at California State University.

Beyer, Janice M., Chanove, Roland G. and Fox, William B. (1995). "The Review Process and the Fates of Manuscripts Submitted to AMJ." Academy of Management Journal, 38 (5), 1219-1260.

Bochanski, Frank X. (1995). "You've Failed Peer Review...Have You Really?" Pennsylvania CPA Journal, 66 (1), 26-27.

Cabell, David W. E. (Ed.) (1990). Cabell's Directory of Publishing Opportunities in Management, Beaumont, TX: Cabell Publishing Co.

Daniel, Hans-Dieter (1993). A Guide to Academic Writing, Oxford, UK: Greenwood Press.

Davis, Richard M. (1985). "Publication in Professional Journals: A Survey of Editors." IEEE Transactions on Professional Communication, PC-28, 34-43.

Day, Robert A. (1989). "The Development of Research Writing." Scholarly Publishing, 20 (2), 107-115.

Fischer, Charles C. (1992). "Writing for Management Journals: Common Pitfalls." Journal of Business & Entrepreneurship, 4 (1), 115-123.

Germano, William (2001). "Surviving the Review Process." Scholarly Publishing, 33 (1), 53-69.

Gilliland, Stephen W. and Cortina, Jose M. (1997). "Reviewer and Editor Decision Making in the Journal Review Process." Personnel Psychology, 50 (4), 427-452.

Gilman, William (1961). The Language of Science: A Guide to Effective Writing. New York: Harcourt, Brace & World, Inc.

Gump, Steven E. (2004). "Writing Successful Cover Letters for Unsolicited Submissions to Academic Journals." Scholarly Publishing, 34 (2), 92-102.

Hamermesh, Daniel S. (1994). "Facts and Myths About Refereeing." Journal of Economic Perspectives, 8 (1), 153-163.

Hernon, Peter (1993). "Publication in 'College and Research Libraries'--Accepted, Rejected, and Published Papers." College and Research Libraries, 54 (4), 303-321.

Lee, Allen S. (1995). "Reviewing a Manuscript for Publication." Journal of Operations Management, 13 (1), 87-92.

Lewis, David V. (1982). Secrets of Successful Writing, Speaking, and Listening, New York: AMACOM.

Locker, Kitty O., Reinsch, N. L., Dulek, Ronald, and Flatley, Marie E. (1994). "What Makes an Article Publishable." Bulletin of the Association for Business Communication, 57 (2), 59-66.

Maske, Kellie L., Durden, Gary C. and Gaynor, Patricia E. (2003). "Determinants of Scholarly Productivity Among Male and Female Economists." Economic Inquiry, 41 (4), 555-564.

McCrimmon, J. M. (1983). Writing With a Purpose, Boston, MA: Houghton-Mifflin.

Pasco, Allan H. (2002). "Basic Advice for Novice Authors." Scholarly Publishing, 33 (2), 75-89.

Shepherd, George B. (Ed.) (1995). Rejected: Leading Economists Ponder the Publication Process. Sun Lakes, AZ: Thomas Horton and Daughters.

Thompson, J. H. (1981). "Essential Qualities of a Good Manuscript." Journal of Small Business Management, 19 (1), 8-9.

Tichy, H. J. (1966). Effective Writing: For Engineers, Managers, and Scientists, New York: John Wiley & Sons, Inc.

Townsend, G. B. (1988). "Writing Effectively for Professional Journals." Journal of Management in Engineering, 4, 60-64.

Turk, Christopher and Kirkman, John (1982). Effective Writing: Improving Scientific, Technical & Business Communication. New York: E. & F. N. Spon.



