
The process of a Magic Quadrant
by Lydia Leong

There have been a number of ongoing dialogues on Twitter, on Quora, and on various people's blogs about the Magic Quadrant. I thought, however, that it would be helpful to talk about the process of doing a Magic Quadrant, before I write another blog post that explains how we came to our current cloud-and-hosting MQ. We lay this process out formally in the initiation letters sent to vendors when we start MQ research, so I'm not giving away any secrets here, just exposing a process that's probably not well known to people outside of analyst relations roles.

A Magic Quadrant starts its life with a market definition and inclusion criteria. It's proposed to our group of chief analysts (each sector we cover, like Software, has a chief), who are in charge of determining what markets are MQ-worthy, whether the market is defined in a reasonable way, and so forth. In other words, analysts can't arbitrarily decide to write an MQ; there's oversight, a planning process, and an editorial calendar that lays out MQ publication schedules for the entire year.

The next thing you do is decide your evaluation criteria, and the weights for those criteria: in other words, how you are going to quantitatively score the MQ. These go out to the vendors near the beginning of the MQ process (usually about three months before the target publication date), and are also usually published well in advance in a research note describing the criteria in detail. (We didn't do a separate criteria note for this past MQ, for the simple reason that we were much too busy to do the necessary writing.) Gartner's policy is to make analysts decide these things in advance for the sake of fairness: settling your criteria and their weighting up front makes it clear to vendors (hopefully) that you didn't jigger things around to favor anyone.

In general, when you're doing an MQ in a market, you are expected to already know the vendors well. The research process is useful for gathering metrics, letting the vendors tell you about small things they might not have thought to brief you on previously, and getting a summary briefing of what the vendor thought were the important business changes of the last year. Vendors get an hour to tell you what they think you need to know. We contact three to five reference customers provided by each vendor, but we also rely heavily upon what we've heard from our own clients. There should generally not be any surprises for either the analysts or the vendors, assuming the vendors have done a decent job of analyst relations.

Client status and whatnot makes no difference whatsoever on the MQ. (Gartner gets 80% of its revenue from IT buyers who rely on us to be neutral evaluators. Nothing a vendor could pay us would ever be worth risking that revenue stream.) However, it generally helps vendors if they've been transparent with us over the previous year. That doesn't require a client relationship, although I suspect most vendors are more comfortable being transparent if they have an NDA in place with us and can discuss these things in inquiry, rather than in the non-NDA context of a briefing (though we always keep things confidential if asked to). Ongoing contact tends to mean that we're more likely to understand not just what a vendor has done, but why they've done it. Transparency also helps us understand a vendor's apparent problems and bad decisions, and the ways they're working to overcome them. It leads to an evaluation that takes into account not just what the vendor is visibly doing, but also the thought process behind it.
Once the vendors have gone through their song and dance, we enter our numeric scores for the defined criteria into a tool that then produces the Magic Quadrant graph. We cannot arbitrarily move vendors around; you can't say, "well, gosh, that vendor seems like they ought to be a Leader / Challenger / Visionary / Niche Player, let's put them in that box," or "X vendor is superior to Y vendor and should come out higher." The only way to change where a vendor is placed is to change its scores on the criteria. We do decide the boundaries of the MQ (the scale of the overall graph compared to the whitespace in the graph) and thus where the axes fall, but since a good MQ is basically a scatterplot, any movement of axis placement alters the quadrant placement of not just one vendor but a bunch.
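To make the scoring mechanics concrete, here is a minimal sketch, in Python, of how a weighted-criteria model like the one described above could work: weights fixed in advance, a weighted average computed per axis, and quadrant membership falling out of wherever the boundaries are drawn on the scatterplot. The criteria names, weights, and scores are invented for illustration; this is not Gartner's actual tool, though the axis labels reflect the standard MQ axes (Completeness of Vision and Ability to Execute).

# Hypothetical illustration only; criteria, weights, and scores are invented.
def weighted_score(scores, weights):
    """Collapse per-criterion scores (e.g. 1-5) into one weighted average."""
    assert set(scores) == set(weights)
    total = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total

# Weights are decided (and disclosed) before any vendor is scored.
vision_weights = {"market_understanding": 0.3, "product_strategy": 0.4, "innovation": 0.3}
execute_weights = {"product_capability": 0.4, "business_viability": 0.3, "customer_experience": 0.3}

# One vendor's per-criterion scores, as entered into the scoring tool.
vendor_vision = {"market_understanding": 4, "product_strategy": 3, "innovation": 5}
vendor_execute = {"product_capability": 4, "business_viability": 5, "customer_experience": 3}

x = weighted_score(vendor_vision, vision_weights)      # Completeness of Vision
y = weighted_score(vendor_execute, execute_weights)    # Ability to Execute

# Quadrant membership is a side effect of where the axis boundaries are drawn;
# moving a boundary re-buckets every vendor near it, not just one.
x_mid, y_mid = 3.0, 3.0
quadrant = {
    (True, True): "Leader",
    (True, False): "Visionary",
    (False, True): "Challenger",
    (False, False): "Niche Player",
}[(x >= x_mid, y >= y_mid)]
print(f"({x:.2f}, {y:.2f}) -> {quadrant}")   # (3.90, 4.00) -> Leader

The point of the sketch is the property the post describes: with the weights fixed up front, a vendor's dot can move only if its underlying criterion scores change, and shifting a quadrant boundary re-buckets every vendor near it at once.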

Once the authoring analysts are done with that, and have written all of the accompanying text, it goes into a peer review process. It's formally presented at a research meeting, any analyst can comment, and we get challenged to defend the results. Content gets clarified, and in some cases text as well as ratings get altered as people point out things that we might not have considered.

Every vendor then gets a fact-check review: they receive a copy of the MQ graphic, plus the text we've written about them, and they're entitled to a phone call. They beg and plead; the ones who are clients call their account executives and make promises or threats. Vendors are also entitled to escalate into our management chain, and to the Ombudsman. We never change anything unless the vendor can demonstrate that something is erroneous or unclear. MQs also get management and methodologies review, ensuring, basically, that the process has been followed and that we haven't done anything we could get sued for. Then, and only then, does it go to editing and publication.

Theoretically, the process takes four months. It consumes an incredible amount of time and effort.

(Please note that, per Gartner's social media policies for analysts, I am posting this strictly as an individual; I am not speaking on behalf of the company.)
