Diffusion of DSM-5 usage amongst clinicians in Europe: Towards a uniform classification system for mental disorders

(Exemplary research proposal)

Author: D. Janssen
Affiliation: Maastricht University
Date: 11-03-2015

Section 1: Introduction

1.1 Background

Although health care governance within the European Union remains a competence of the individual Member States, comparative evaluation of Member States’ performance is encouraged through the open method of coordination (OMC) (Mossialos, Permanand, Baeten & Hervey, 2010). Comparative evaluation facilitates the process of setting benchmarks, developing guidelines, and exchanging good and best practices (Mossialos et al., 2010). Through comparative evaluation, both between and within countries, one can produce scientific knowledge about the causes of ill-health and consequently develop more effective methods to cure disease, care for the sick and promote health (Czabanowska, 2014). This valuable process of comparison is operationalized by agreeing on indicators that reflect, or are related to, the health status of a population. Informative and useful indicators include the prevalence, incidence and disease burden of various disorders and diseases. These indicators can be incorporated in prominent databases and monitoring tools, such as the WHO’s Health for All database (HFA-DB) (World Health Organization (WHO), 2014), the European Community Health Indicators (ECHI) (European Commission, 2009), and the WHO’s Global Health Estimates (WHO, 2013), giving researchers who are interested in health-related cross-country comparisons easy access to relevant data. However, adherence to adequate, systematic, valid and reliable ways of measuring these health-related indicators is a prerequisite for a reasonable and meaningful degree of comparison (Smith, Mossialos & Papanicolas, 2008). Heterogeneity in definitions, classifications and coding practices of indicators across Member States endangers both the validity of comparisons and the inferences drawn from them (National Institute for Health and Care Excellence, 2008). The field of mental health in particular has faced, and is still facing, challenges in establishing uniformity and homogeneity in the way mental health indicators are defined and measured throughout Europe (Polanczyk, de Lima & Horta, 2007). Not only do the definitions of some mental disorders differ across Member States (Ginter, 2014), but the willingness of clinicians to establish diagnoses also seems to vary between countries when specific disorders or patient groups are involved (Wedge, 2012). The way mental disorders are defined and measured in a country, and the willingness or reluctance of health care professionals to establish a diagnosis, potentially affect a country’s performance on certain health indicators, such as prevalence, incidence and disease burden rates. In an attempt to harmonize the way mental disorders are defined and to objectively guide the diagnostic process, comprehensive classification systems have been developed in recent decades, such as the ‘Diagnostic and Statistical Manual of Mental Disorders’ (DSM) (American Psychiatric Association (APA), 1980, 1994, 2000, 2013) and the ‘ICD-10 classification of mental and behavioural disorders’ (WHO, 1992). These classification systems offer clear definitions of various mental disorders, incorporating relevant symptomatology and morbidity characteristics, and can be used by clinicians to guide the diagnostic process. Widespread usage of a classification system increases the uniformity in the way disorders are defined, consequently adding a dimension of objectivity and homogeneity to the way patients are diagnosed.
In turn, this allows for more representative morbidity statistics and more valid comparisons between countries. Figure 1 summarizes how classification systems, clinical diagnoses, databases and monitoring tools, and the open method of coordination relate to each other in the field of mental health.

Figure 1. A visualisation of how classification systems, clinical diagnoses, databases and monitoring tools, and the open method of coordination relate to each other in the field of mental health (own figure)

1.2 The DSM-5 as an innovative classification system

The Diagnostic and Statistical Manual of Mental Disorders (DSM) and the International Classification of Diseases (ICD) are the two leading classification systems in the world for mental disorders (Ginter, 2014). Both classification systems receive praise and endure criticism, both have their advocates and adversaries, and both are perceived as having their weaknesses and their strengths (Derksen, 2012; First, 2009; Ginter, 2014; Wedge, 2012). Both the ICD and the DSM are descriptive categorical classification systems that categorize mental disorders based upon a constellation of symptoms and signs (Ginter, 2014), and both systems cluster related mental disorders in diagnostic blocks (ICD) or chapters (DSM). Collaboration between the authors and developers of the ICD and DSM has led to more overlap in terminology and diagnostic descriptions over the years (Ginter, 2014). Still, significant discrepancies between the ICD and DSM persist. An example is the multiaxial system the DSM has used since the release of its third edition (APA, 1980). Next to relevant mental disorders and medical conditions, this multiaxial system incorporates other important diagnostic information such as environmental factors (Axis IV) and level of functioning (Axis V) (Ginter, 2014). In contrast, the ICD has always used a more simplistic system that merely lists mental disorders alongside medical conditions and other health ailments. Furthermore, where the DSM uses very detailed and specific criteria, the ICD depends more on prototype descriptions with less detailed criteria and minimal background information to guide the diagnostic process (Ginter, 2014). The complexity of the DSM reflects its purpose to serve as a classification tool for highly educated and licensed mental health professionals, while the relative simplicity of the ICD reflects the intention of providing an easily accessible classification tool for health care professionals worldwide, regardless of educational background (Kupfer, Kuhl & Wulsin, 2013; WHO, 1992). One of the main concerns from a public health perspective, however, is that despite harmonization attempts, the criteria for various mental disorders still differ significantly between the DSM and ICD (First, 2009), which greatly jeopardizes the comparability of health data when more than one designated classification system is used. For instance, patients meeting the criteria for bulimia nervosa or post-traumatic stress disorder (PTSD) under the ICD-10 criteria might not qualify for a similar diagnosis when adhering to the DSM-IV criteria (Ginter, 2014). These differences can make comparisons of bulimia nervosa and PTSD related data a precarious and troublesome venture.

If we want to ensure the comparability of mental health-related data on a European level, we should strive to adopt one designated classification system to be applied in all Member States, and that is where the DSM-5 (APA, 2013) has a potential role to play. The DSM-5 is the most recent revision of the DSM classification system, in which substantial alterations to specific disorders and fundamental conceptual changes were introduced (McCarron, 2013). Its development has been a deliberate, extensive effort to further harmonize the way mental disorders are defined and categorized, while incorporating the most recent empirical insights from clinical psychology and psychiatry. Various points of criticism on past editions of the DSM were taken into account and addressed in the development process (Nemeroff et al., 2013), and through close collaboration with representatives of the WHO (responsible for the ICD-10), the development team of the DSM-5 aimed to create a handbook that represents the best of both worlds, implementing the perceived favourable characteristics of the ICD classification system while preserving the strengths of past DSM editions (Ginter, 2014; Regier, Kuhl & Kupfer, 2013). Although we acknowledge that the DSM-5, just like previous editions, is still very much contested (Frances, 2013; Lane, 2013; Nemeroff et al., 2013), one can substantiate the claim that the DSM-5 is a promising innovation for bringing uniformity to the classification of mental disorders on a European level. Since the DSM-5 can be described as a novel way of improving administrative efficiency in health systems, it also meets the definition of an innovation (Greenhalgh, Robert, Macfarlane, Bate & Kyriakidou, 2004).

1.3 Research setting, aim and objectives          

The main aim of our study will be to evaluate which factors contribute towards the adoption and rejection of the DSM-5 amongst clinicians in three European countries, namely France, Germany and the Netherlands. Together these countries form an interesting research setting, since they all have very different traditions in the usage of mental disorder classification systems. In the Netherlands the DSM-IV-TR (APA, 2000) has been the leading classification system in clinical practice in recent years, facilitated by the fact that health insurance companies require a DSM-IV-TR diagnosis before treatment costs are covered (Nederlandse Vereniging voor Psychiatrie, 2014). In Germany the ICD-10-GM (the German modification of the ICD-10) is the official classification system for the encoding of diagnoses in inpatient and outpatient medical care (Deutsches Institut für Medizinische Dokumentation und Information, 2014). Lastly, we incorporate France, a country where strong anti-DSM sentiments have surfaced amongst psychologists and psychiatrists in recent years (Lane, 2012). As a response to the unwanted influence of past DSM editions, French psychiatrists and psychologists have developed some of their own classification systems, such as the ‘Classification Française des Troubles Mentaux de L’Enfant et de L’Adolescent’ (CFTMEA) (Wedge, 2012). This lack of uniformity in the usage of classification systems between France, Germany and the Netherlands hampers comparative evaluation. The current study sets out to investigate which factors play a role in guiding French, Dutch and German clinicians’ decisions to either adopt or reject the new, innovative DSM-5 classification system. Based on the study’s outcomes, our objective is to develop a model that illustrates, for each country separately, the relative contribution of certain determinants to either the rejection or adoption of the DSM-5 amongst clinicians. Identifying why the new DSM-5 is being adopted or rejected produces valuable insights into what further action to take in working towards the realization of a uniform classification system for mental disorders in Europe.

In addition to a cross-country comparison, a cross-discipline comparison will be incorporated in the current study; our second research objective is to examine whether tendencies to adopt or reject the DSM-5 differ between clinicians of different clinical disciplines. A large-scale study in the United States found distinct differences in DSM-5 adoption rates between different groups of clinicians (Cassels, 2014). Since a similar study in a European setting has not yet been conducted, this research scope will be included in the current study as a secondary research objective.

1.4 Research questions

The four main research questions (RQs) of the current study are:

– RQ1. What are the differences between clinicians in France, Germany and the Netherlands in their overall tendency to adopt or reject the DSM-5?
– RQ2. In France, how strongly do specific indicators contribute towards either the rejection or adoption of the DSM-5 amongst clinicians?
– RQ3. In the Netherlands, how strongly do specific indicators contribute towards either the rejection or adoption of the DSM-5 amongst clinicians?
– RQ4. In Germany, how strongly do specific indicators contribute towards either the rejection or adoption of the DSM-5 amongst clinicians?

For the final research questions, we do not distinguish between countries but between clinical disciplines. We set out to investigate whether, within France, Germany and the Netherlands, different factors are at play in influencing the adoption of the DSM-5 when comparing clinicians of different disciplines (clinical psychologists, psychiatrists, primary care physicians, paediatricians, occupational therapists, neurologists, social workers and psychiatric nurses). The final research questions are:

– RQ5. What are the differences between clinicians of various clinical disciplines in the influence specific indicators have on adopting or rejecting the DSM-5?
– RQ6. What are the differences between clinicians of various clinical disciplines in their overall tendency to adopt or reject the DSM-5?

Section 2: Theory and Conceptual Model

2.1 Indicators hypothesized to contribute to the adoption / rejection of the DSM-5

In 2004, a comprehensive, influential model on the diffusion of innovations was introduced (Greenhalgh et al., 2004). For the current study, various determinants from that model will be incorporated in a new model that aims to specifically describe the relative contribution of several determinants towards either the adoption or rejection of the DSM-5 amongst Dutch, German and French clinicians. We narrow the scope of the current study by not incorporating all the determinants Greenhalgh et al. (2004) hypothesized as having a potential influence on the adoption decision. Instead, we focus on specific determinants that Greenhalgh et al. (2004) categorized under ‘the innovation’ and ‘communication and influence’ in their model. For the purpose of developing our own model, some of the determinants of Greenhalgh et al. (2004) have to be specified, altered or split into multiple new determinants to reflect the strong DSM-5 focus of our own model. We split Greenhalgh et al.’s (2004) concept of relative advantage by incorporating relative advantage for the patient and for the clinician separately. There are indications in the literature that the advantages (and disadvantages) for clinicians of using classification systems in general, and the DSM in particular, differ distinctly from the advantages (and disadvantages) for patients (Derksen, 2012). We also split Greenhalgh et al.’s (2004) concept of task issues: firstly, we use the concept in the traditional way as intended by Greenhalgh et al. (2004), i.e. the innovation being relevant for task performance. Additionally, we set out to investigate whether clinicians view it as their task, and therefore their responsibility, to contribute to the usage of a uniform classification system in Europe, and if so, whether this influences their tendency to adopt or reject the DSM-5. Table 1 shows which of the determinants of Greenhalgh et al.’s (2004) model we will adopt and how we will transform them into new indicators for our DSM-5 model. The left side displays the determinants/indicators as used in Greenhalgh et al.’s (2004) model, while the right side shows their equivalents in our DSM-5 model.

Table 1. The determinants of Greenhalgh et al.’s (2004) model on the diffusion of innovation, and their equivalents that will be used in the new DSM-5 model (own figure)

In total our model incorporates 21 determinants. In developing our model, we assume that each of these 21 determinants can contribute towards either the rejection or adoption of the DSM-5. The provisional basis of our model, prior to data analysis, is illustrated in Figure 2; for pragmatic reasons, our developed indicators are expressed as their corresponding numbers from Table 1.

Figure 2. The 21 determinants of our model (see table 1 for a description) and their influence on the rejection or adoption of the DSM-5, expressed by the arrows (own figure)

2.2 Further developing separate models for France, Germany and the Netherlands

The aim is to further develop our proposed model for France, Germany and the Netherlands separately and to develop an understanding of which determinants contribute to either the adoption or rejection of the DSM-5, and how strong the influence of each determinant is. In the eventual model(s), which will be further developed after the data have been gathered and analysed, the sizes of the arrows will vary (small, medium and large), as will their colours (red, green and white). The size of the arrow will illustrate the strength of the effect (the larger the arrow, the stronger that indicator’s influence on the decision to adopt or reject the DSM-5), while the colour will illustrate the direction of the effect (a green arrow signals that an indicator contributes to adoption, and a red arrow signals that it contributes to rejection). A small white arrow indicates that we found no distinct overall effect for that indicator on the tendency to either adopt or reject the DSM-5. An example of what the eventual model could look like is presented in Figure 3.

Figure 3. An example of what our model could look like when for each determinant the strength and direction of its influence have been established (own figure)

Once the relevant data have been gathered, we can develop three of these models: one for France, one for the Netherlands, and one for Germany. By juxtaposing the models, one will be able to quickly compare the countries in terms of which indicators/determinants play a role in the adoption or rejection of the DSM-5, and how strong the influence of each of those indicators is. An example of what the comparison of the different country models could look like, based on fictional research results, is illustrated in Figure 4.

Figure 4. An example of what the different country models could look like after data analysis (own figure)

In sections 3.3 and 3.4 we explain in more detail how we will determine the strength (visualized by the size of the arrows) and direction (visualized by the colour of the arrows) of the influence of each determinant. 

2.3 Adoption and rejection of the DSM-5 amongst clinicians of various disciplines

In the current study we not only aim to compare three different countries regarding the prevalent tendencies amongst clinicians to adopt or reject the DSM-5, but we also set out to examine whether the tendencies to adopt or reject the DSM-5 differ between clinicians of different professional disciplines. For this, we aim to include clinicians from all relevant disciplines that potentially play a decisive role in the diagnostic process of mental disorders in France, Germany and the Netherlands. This includes mental health care professionals with backgrounds in clinical psychology and psychiatry, but also primary care physicians, paediatricians, occupational therapists, neurologists, social workers and psychiatric nurses (APA, 2014; Cassels, 2014; Schmitz, Kruse, Heckrath, Alberti & Tress, 1999). As the development team of the DSM-5 predominantly consisted of psychiatrists (APA, 2013), we hypothesize that psychiatrists will display more favourable tendencies towards adopting the DSM-5 than clinicians of other disciplines, based on the assumption that psychiatrists might embrace the underlying principles and theoretical frameworks of mental disorders as incorporated in the DSM-5 more easily. For instance, some clinical psychologists have argued that past DSM editions, and also the DSM-5, overemphasize the medical model and the biological components in the etiology of mental disorders at the expense of psychological and social components (Derksen, 2012; Zur & Nordmarken, 2013). This could lead to clinical psychologists being less prone to adopt the DSM-5 than psychiatrists, the latter group generally having less difficulty adopting a more biologically oriented model, as it corresponds more closely with their educational background (Derksen, 2012). The model we developed to compare France, Germany and the Netherlands can similarly be used to compare clinicians of various disciplines in terms of which indicators play a role in their decision to adopt or reject the DSM-5, and how strong the influence of each of those indicators is.

Section 3: Methodology

3.1 Research type and design         

For our study we will adopt a post-positivistic research paradigm, which is a value-free research paradigm based on empirical observations (Neumann, 2014). A (post-)positivistic research paradigm assumes the existence of an objective, universal truth, and embraces the assumption that knowledge can be obtained through a process of quantifying concepts, followed by the verification and falsification of hypotheses by conducting statistical analyses (Denzin & Lincoln, 2011). This research paradigm is generally deemed to fit a quantitative research approach best (Pavlova, 2015), which is the approach we will adopt in the current study. Advantages of a quantitative research approach include the possibility to incorporate a large number of participants, and the possibility to make substantiated claims about hypothesized relations based on statistical significance levels (Ellis, 2003). Our study has a cross-sectional research design in which data are collected at one point in time. While cross-sectional research can be exploratory, descriptive, or explanatory, it is most consistent with a descriptive approach (Neumann, 2014, p. 44). The current study adopts such a descriptive approach, as we aim to create a clear picture of the factors that play a role in the tendency to adopt or reject the DSM-5 by presenting country profiles and profiles of clinicians of various disciplines, visualized in the form of our new model.

3.2 Research population and sample selection

The research population of the current study will consist of clinicians from France, Germany and the Netherlands. We aim to include clinicians from all disciplines that, according to the literature, might play a substantial role in the diagnostic process of mental disorders (APA, 2014; Cassels, 2014; Schmitz, Kruse, Heckrath, Alberti & Tress, 1999). We therefore include clinical psychologists, psychiatrists, primary care physicians, paediatricians, occupational therapists, neurologists, social workers and psychiatric nurses. Where possible, we will approach these clinicians with the assistance of the official health care professional registries, such as the BIG-registry in the Netherlands (Ministerie van Volksgezondheid, Welzijn en Sport, n.d.). Alternatively, we will approach participants through the national professional associations of the various disciplines. We will apply a purposive sampling method (Neumann, 2014) that aims to include as many eligible clinicians as possible by utilising the aforementioned registries and associations. We aim to include a large and representative sample of clinicians, to ensure a certain degree of generalizability when making claims about the tendencies to adopt or reject the DSM-5 on a country level. We will approach potential participants via e-mail or postal mail, briefly explaining the purpose of our survey and asking for their cooperation. We will explicitly state that in order to qualify for participation, clinicians have to be involved in the diagnostic process of mental disorders, to ensure that their participation is relevant for the purposes of the current study. Each participant will receive a personal verification number. With this verification number, the participant can access an online survey, which can be filled in and submitted through a dedicated website that we will set up for the purposes of the current study. The personal verification numbers are used to ensure that each participant can fill in the survey only once, and that external parties that are not part of our research sample cannot contaminate our data pool by accessing the website and filling in surveys.
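The single-use logic behind these verification numbers can be illustrated with a minimal sketch (a simple in-memory example; the names issued_codes and redeem_code are illustrative and not part of any existing survey platform):

```python
# Minimal sketch of single-use verification codes for survey access.
# Assumption: codes are pre-generated per invited clinician; a real
# implementation would persist them in the survey website's database.
import secrets

issued_codes = {}  # code -> {"used": bool}

def issue_code() -> str:
    """Generate and register a new personal verification code."""
    code = secrets.token_urlsafe(8)
    issued_codes[code] = {"used": False}
    return code

def redeem_code(code: str) -> bool:
    """Grant survey access only for known, not-yet-used codes."""
    entry = issued_codes.get(code)
    if entry is None or entry["used"]:
        return False      # unknown code, or survey already submitted
    entry["used"] = True  # each participant can submit only once
    return True
```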

3.3 Data collection

Prior to data collection, we need to operationalize the 21 indicators of our model, as displayed in Table 1, into measurable, empirical and quantitative entities. This operationalization process will be guided by the principles of Gregory (2011) on test construction. For each indicator we want to know how strongly it contributes towards the tendency to adopt or reject the DSM-5. We aim to capture the essence of each indicator in an open-ended statement that directly applies to the participants’ personal experience. The participant then completes this statement by choosing one of seven alternatives on a Likert scale (Neumann, 2014, p. 230). These seven alternatives are also assigned numerical values, from 0 to 6, to facilitate data analysis later on. We will illustrate what a survey item could potentially look like with the following example:
[Example survey item: a statement rated on a 7-point Likert scale]
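As a minimal sketch of how such an item and its 0-6 coding could be represented in the data (the statement is taken from Table 1 as discussed below; the wording of the response anchors is an illustrative assumption):

```python
# Illustrative representation of one survey item. The anchor wordings are
# assumptions; the 0-6 numerical coding follows the proposal (0 = strongly
# contributes to rejection, 6 = strongly contributes to adoption).
likert_item = {
    "indicator": "Trialability",
    "statement": "The possibility to experiment with the DSM-5 ...",
    "responses": {
        0: "strongly contributes to my tendency to reject the DSM-5",
        3: "does not influence my tendency to adopt or reject the DSM-5",
        6: "strongly contributes to my tendency to adopt the DSM-5",
        # values 1, 2, 4 and 5 represent the intermediate gradations
    },
}
```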

As can be seen in Table 1, we have already made some provisional steps in operationalizing Greenhalgh et al.’s (2004) indicators into assessable statements. For instance, where Greenhalgh et al. (2004) use the indicator ‘Trialability’, we formulated this as ‘The possibility to experiment with the DSM-5’, which could be incorporated as one of the statements to be completed by the participants using the 7-point Likert scale. However, deciding on which statements to use to capture the essence of each indicator will be subject to a more thorough evaluation by a multilingual research team. As the current study includes participants from France, the Netherlands and Germany, we need to find adequate ways of phrasing each statement in several languages. The way a statement can be interpreted has to be as unambiguous as possible (Gregory, 2011), and each statement has to be understood in the same way regardless of the language the survey is conducted in. Once we have agreed on which statements to use and how to translate them into multiple languages, surveys will be developed in Dutch, French and German. When accessing the survey online, participants will be able to select the language they want to complete the survey in. In addition to the 21 statements that represent our indicators, we will inquire about the participants’ professional backgrounds (clinical discipline) and ask in which country they practise.
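A simple way to store and serve each statement in the three survey languages could look like the sketch below (the translations shown are illustrative placeholders, not the validated wordings that the multilingual research team will agree upon):

```python
# Illustrative storage of one statement in the three survey languages.
# The translations are placeholders; the validated wordings will be agreed
# upon by the multilingual research team and refined through pilot testing.
statements = {
    "trialability": {
        "nl": "De mogelijkheid om met de DSM-5 te experimenteren",
        "de": "Die Möglichkeit, mit dem DSM-5 zu experimentieren",
        "fr": "La possibilité d'expérimenter avec le DSM-5",
    },
}

def get_statement(indicator: str, language: str) -> str:
    """Return the statement in the language the participant selected online."""
    return statements[indicator][language]
```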

3.4 Data analysis

Determining which specific procedures of data analysis are appropriate depends on both the research question we want to answer and the data we have available for this purpose (Ellis, 2003). First, our survey data will be imported into the statistical program SPSS, version 22. Using this program, descriptive analyses (Ellis, 2003) suffice for answering the following research questions:

– RQ2. In France, how strongly do specific indicators contribute towards either the rejection or adoption of the DSM-5 amongst clinicians?
– RQ3. In the Netherlands, how strongly do specific indicators contribute towards either the rejection or adoption of the DSM-5 amongst clinicians?
– RQ4. In Germany, how strongly do specific indicators contribute towards either the rejection or adoption of the DSM-5 amongst clinicians?
– RQ5. What are the differences between clinicians of various clinical disciplines in the influence specific indicators have on adopting or rejecting the DSM-5?

For each country separately, we will assess per indicator which response was most frequently given on the 7-point Likert scale. As stated before, the responses on the Likert scale will be given a numerical value, varying from a score of 0 (meaning that participants stated that this indicator strongly contributes to their tendency to reject the DSM-5) to a score of 6 (meaning that participants stated that this indicator strongly contributes to their tendency to adopt the DSM-5). Per indicator, the most frequently given response will be expressed as an arrow and incorporated in the individual country models as illustrated in section 2.2 of this research proposal. Figure 5 illustrates which outcomes on the 7-point Likert scale correspond with the different arrows in the model.

Figure 5. The arrows of our model and the corresponding scores on the Likert scale (own figure)

Even though we assign numerical values to the answers on our Likert scale, in essence they still represent ordinal values and have to be treated accordingly, meaning we cannot simply apply a quantitative analysis in which we treat the numerical values as continuous variables (Gregory, 2011). That is why we use the most frequently given answers on the 7-point Likert scale to create the country profiles. In the same manner as for the individual countries, we will also assess, for each clinical discipline separately, which response was most frequently given per indicator on the 7-point Likert scale. We will determine the colour and size of the corresponding arrows and visualize this in eight additional models, one for each clinical discipline. Juxtaposing these models will allow for a quick comparison of different groups of clinicians and the factors they deem to contribute to the adoption or rejection of the DSM-5.
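A minimal sketch of this modal-score procedure and its translation into arrows is shown below (assuming each completed survey is a record with a grouping key, country or discipline, and the 21 indicator scores; the field names and the score-to-arrow thresholds are illustrative assumptions, the authoritative mapping being Figure 5):

```python
# Sketch: per-group modal Likert score per indicator (group = country or
# clinical discipline) and its translation into an arrow for the model.
from collections import Counter, defaultdict

def modal_scores(responses, key="country"):
    """Return {group: {indicator: most frequently given score (0-6)}}."""
    counts = defaultdict(lambda: defaultdict(Counter))
    for r in responses:  # r = one completed survey, e.g. {"country": "NL", "scores": {1: 4, ...}}
        for indicator, score in r["scores"].items():
            counts[r[key]][indicator][score] += 1
    return {
        group: {ind: c.most_common(1)[0][0] for ind, c in per_ind.items()}
        for group, per_ind in counts.items()
    }

def score_to_arrow(score):
    """Map a modal score to a (colour, size) arrow; thresholds are assumed, see Figure 5."""
    if score in (0, 1):
        return ("red", "large")    # strongly contributes to rejection
    if score == 2:
        return ("red", "medium")
    if score == 3:
        return ("white", "small")  # no distinct overall effect
    if score == 4:
        return ("green", "medium")
    return ("green", "large")      # 5 or 6: strongly contributes to adoption
```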

This leaves us with two remaining research questions to answer:

– RQ1. What are the differences between clinicians in France, Germany and the Netherlands in their overall tendency to adopt or reject the DSM-5?
– RQ6. What are the differences between clinicians of various clinical disciplines in their overall tendency to adopt or reject the DSM-5?

To answer these questions, we first refer to Neumann (2014, p. 234): “Likert scale measures are at the ordinal level of measurement because responses indicate only a ranking. The numbers we assign to the response categories are arbitrary. Only when we combine several ranked items, we get a more comprehensive multiple indicator measurement.” This means that when we sum up the 21 numerical scores on the Likert scales, we can reasonably treat the outcome of this summation as a continuous variable, which can be subjected to statistical analysis. By assigning numerical values to the ordinal answer categories on the 7-point Likert scale, we can calculate an overall ‘tendency to adopt the DSM-5’ score by taking the sum score of the 21 indicators. Since the answers on the 7-point Likert scale contain an ordinal hierarchy, ranging from 0 (strongly contributing to the tendency to reject the DSM-5) to 6 (strongly contributing to the tendency to adopt the DSM-5), the sum score across all indicators provides information on the overall tendency to adopt or reject the DSM-5; the higher this sum score, the greater the tendency to adopt the DSM-5. Using these sum scores, we will conduct two GLM-univariate analyses (Ellis, 2003). Table 2 provides more information on the remaining statistical procedures that will be used to answer the two final research questions.

Table 2. An overview of the GLM-univariate analyses to be conducted
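As an illustration of these procedures, a compact sketch of the sum-score calculation and of one such group comparison is given below (shown as a one-way ANOVA in scipy, an illustrative stand-in for the GLM-univariate analysis planned in SPSS; function and field names are assumptions):

```python
# Sketch: overall 'tendency to adopt the DSM-5' sum score (0-126 for 21
# indicators scored 0-6) and a one-way ANOVA across groups as an
# illustrative stand-in for the GLM-univariate analysis planned in SPSS.
from scipy import stats

def sum_score(scores):
    """Sum of the 21 indicator scores; higher = stronger tendency to adopt."""
    return sum(scores.values())

def compare_groups(responses, key="country"):
    """F-test of sum-score differences between groups (countries or disciplines)."""
    groups = {}
    for r in responses:
        groups.setdefault(r[key], []).append(sum_score(r["scores"]))
    f_stat, p_value = stats.f_oneway(*groups.values())
    return f_stat, p_value
```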

3.5 Validity and reliability

In the following subsections we will address possible issues with regard to validity and reliability, and how we aim to address them in the current study.

3.5.1 Measurement validity

Measurement validity refers to the notion that a test truly measures what it purports to measure, to the extent that inferences made from it are appropriate, meaningful and useful (Gregory, 2011). Based on the findings of Greenhalgh et al. (2004) on the variety of indicators playing a role in the diffusion of an innovation, it can be assumed that there are other indicators that contribute to the adoption or rejection of the DSM-5 in addition to the indicators we set out to investigate in the current study. Since we use the sum scores of our 21 indicators to calculate an overall ‘tendency to adopt the DSM-5’ score, it is important to realize that content validity (a specific type of measurement validity) might be jeopardized (Neumann, 2014). This means that the ‘tendency to adopt the DSM-5’ score most likely does not incorporate all factors that truly contribute to the decision to adopt or reject the DSM-5. Awareness of a possible breach of measurement validity is helpful when making inferences based on research findings (Gregory, 2011).

Furthermore, our way of establishing the country profiles and the clinical discipline profiles by using mode scores (Neumann, 2014, p. 399) has one major pitfall: since the strength and direction of the contribution of each indicator towards the adoption or rejection of the DSM-5 are determined solely on the basis of the most frequently given response to the corresponding statement, a lot of potentially valuable information is lost. We will therefore check for skewed or non-normal response distributions and make inferences accordingly (Ellis, 2003).
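Such a check could be sketched as follows (scipy’s skewness measure is used here as an illustrative alternative to the checks available in SPSS; names and the threshold are assumptions):

```python
# Sketch: flag indicators whose Likert response distributions are markedly
# skewed, so that inferences based on modal scores can be qualified.
from scipy.stats import skew

def skewed_indicators(scores_per_indicator, threshold=1.0):
    """Return {indicator: skewness} for indicators with |skewness| > threshold."""
    flagged = {}
    for indicator, values in scores_per_indicator.items():
        s = float(skew(values))
        if abs(s) > threshold:
            flagged[indicator] = round(s, 2)
    return flagged
```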

3.5.2 Measurement reliability

Measurement reliability means that the numerical results an indicator produces do not vary because of characteristics of the measurement process or measurement instrument itself (Neumann, 2014, p. 212). Reliability improves when concepts are clearly conceptualized, a precise level of measurement is used, multiple indicators are incorporated and pilot tests are conducted (Neumann, 2014, p. 213). To ensure reliability, we aim to work with a multilingual team of researchers to clearly conceptualize the 21 indicators of our model. This clear conceptualization is needed to formulate the statements that will be used in the survey. We will test our survey items in a pilot study; this pilot study will be conducted with German, French and Dutch participants separately, to see if any ambiguities prevail in the interpretation of the survey statements and questions. If this is the case, we will rephrase statements and conduct additional pilot tests. By incorporating a 7-point Likert scale to measure participants’ tendencies to adopt or reject the DSM-5, we deem that an adequate level of measurement precision will be obtained. Furthermore, there are indications that participants struggle with determining their standpoint on Likert scales with more than seven answer alternatives (Gregory, 2011). Since each indicator is explicitly meant to measure a separate construct, the concepts of internal consistency and split-half reliability (Gregory, 2011) do not apply.

3.5.3 Internal validity

A high degree of internal validity is established when the independent variable, and nothing else, influences the dependent variable (Neumann, 2014, pp. 298-299). It can be achieved through the minimization of possible confounding variables and biases (Gregory, 2011). Internal validity will be safeguarded by carefully monitoring all procedural steps of the study, in an effort to minimize bias. Despite the linguistic differences between the countries, we aim to create surveys that are as similar as possible, to further safeguard internal validity.

3.5.4 External validity

A high level of external validity means that the results of a study can be generalized to the population. If a study lacks external validity, the findings may hold true only for a specific research setting (Neumann, 2014). One potential threat to external validity is sampling bias (Gregory, 2011). Since registration in a national registry is mandatory for most health care professionals in France, Germany and the Netherlands, either directly or indirectly via local associations (Bundesärztekammer, 2015; Council for Healthcare Regulatory Excellence, 2012; Rijksoverheid, n.d.), utilizing these registries will enable us to identify the vast majority of relevant clinicians, minimizing sampling bias in the process. Another potential threat to external validity is non-response bias (Gregory, 2011). To minimize the number of non-responders, we will pre-test our survey website; it has to be easily accessible and function properly on various devices (PC, laptop, tablet). Participants will be given a five-week time frame in which they can fill in the survey. If they initially do not respond to the participation request (which can be monitored via the personal verification codes), they will receive automated reminder messages after 15 days and after 25 days. In these messages we will stress both the importance and the ease of participation, and we will ensure confidentiality. We will also actively seek the assistance of the national professional associations in engaging and convincing clinicians to participate. The aforementioned steps can all contribute towards the minimization of non-response bias (Penwarden, 2013). The impact of the eventual non-response rates on potential non-response bias will be assessed using the findings of Peytchev (2013) and Groves and Peytcheva (2008).
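The response window and reminder schedule described above can be expressed as a small sketch (the 15-day and 25-day reminders and the five-week window follow the text; the date handling itself is illustrative):

```python
# Sketch: reminder and closing dates per invited participant, following the
# five-week response window with automated reminders after 15 and 25 days.
from datetime import date, timedelta

def reminder_schedule(invitation_date: date) -> dict:
    return {
        "first_reminder": invitation_date + timedelta(days=15),
        "second_reminder": invitation_date + timedelta(days=25),
        "survey_closes": invitation_date + timedelta(weeks=5),
    }

# Example: a clinician invited on 1 June 2015
print(reminder_schedule(date(2015, 6, 1)))
```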

Section 4: Dissemination and self-evaluation

4.1 Dissemination of project results

The dissemination strategy for conveying the outcomes of the current study will be based on the key elements of dissemination as established by the Consumers, Health, Agriculture and Food Executive Agency (CHAFEA, 2012a). The purpose of dissemination is twofold. The first purpose is to inform the community about the outcomes of the current study. The second purpose is to engage and unite various stakeholders, to obtain feedback from them while creating a basis for dialogue and further research. For this purpose, relevant stakeholders first need to be identified, which is done by conducting a stakeholder analysis (CHAFEA, 2012a). This stakeholder analysis will be conducted collaboratively by our international research team, consisting of researchers from the ‘École des Hautes Études en Santé Publique’ (EHESP) in Rennes (France), Maastricht University in Maastricht (the Netherlands) and Bielefeld University in Bielefeld (Germany). Our international research team will then set out to further develop the dissemination strategy and sustainability strategy (CHAFEA, 2012a). The outcomes of our study can contribute towards creating an understanding of what clinicians in France, Germany and the Netherlands seek in a classification system for mental disorders, and to what extent the DSM-5 fits these criteria. The outcomes of the current study can also guide the debate on what role the DSM-5 should play in realizing a uniform classification system for mental disorders in Europe. The methodology and model we developed can be used to investigate the tendencies to adopt or reject the DSM-5 in additional European Member States. Further research can then set out to investigate what action is needed to further facilitate uptake of the DSM-5 in Europe, what potential alterations to the DSM-5 are required to increase clinicians’ willingness to adopt it, and what alternatives to the DSM-5 classification system potentially have better chances of becoming the standard in Europe. In order to achieve all this, it is important to utilize the right communication channels in disseminating the outcomes of our study. We aim to publish the results of our study in the form of a scientific article in the ‘European Journal of Public Health’. Alternatively, we aim to publish our results in ‘Innovation: The European Journal of Social Science Research’. Another concise article with the key research findings will be published on the website www.europeanpublichealth.com. All the participants of our study will receive a letter or e-mail, thanking them for their participation and providing them with a uniform resource locator (URL) leading to the research article on www.europeanpublichealth.com. Accompanying the article, a designated online discussion forum will be available to discuss the findings of the study. The forum can also facilitate the further development of ideas for future research and the creation of research alliances. Depending on the allocation of a grant, a conference might be organized, to which representatives of the DSM and ICD, national professional associations of clinicians, politicians from the national ministries of health, participants from our study and other relevant stakeholders (as identified in the stakeholder analysis) will be invited. At this conference we will present the outcomes of our study, as well as provide a platform for further discussions and brainstorm sessions on how to create more unity in the classification of mental disorders in Europe.
It is important to involve all relevant stakeholders in such an endeavour, since stakeholders need to be engaged from the outset of a project to ensure ownership and partnerships for effective action. Decisions made in collaboration with stakeholders are more likely to be durable and effective (National Public Health Partnership, 2000).

4.2 Self-evaluation

Evaluation can be defined as the systematic appraisal of the success and quality of a project. Success refers to whether the project objectives have been achieved, and quality refers to whether the needs of the stakeholders have been met (CHAFEA, 2012b). A designated project manager will be appointed to be in charge of project evaluation. The project manager will be responsible for leading the research team, monitoring research progress and controlling scope creep (Payne et al., 2011). Collaborating and communicating with different stakeholders, assessing risks and addressing potential issues, and publication and project scheduling will also fall under the project manager’s responsibilities. The project manager will be assisted by a steering committee, consisting of three researchers; one from each of the universities involved in the study. The steering committee will assist the project manager and carry joint responsibility for the scientific and quality management aspects of the project (Payne et al., 2011). At the beginning of the project, an evaluation plan will be developed by the project manager and the steering committee. This evaluation plan will incorporate key evaluation points and specific evaluation questions, indicators and targets for different stages in the research process. The suggestions on how to conduct a project evaluation as developed by CHAFEA (2012b; 2012c) will guide the process of developing the evaluation plan further. An external agency will be consulted to assess the quality and comprehensiveness of the developed evaluation plan.

4.2.1 DSM-5 controversy

It is important to elaborate to some extent on the controversy that surrounds the DSM-5. Where some would label the DSM-5 a promising innovation for classifying mental disorders (McCarron, 2013), others are opposed to using the DSM-5 altogether (Frances, 2013), arguing that it contributes to the medicalization of normal behaviour (Spence, 2012) as well as to the stigmatization of people with mental disorders (Ben-Zeev, Young & Corrigan, 2010). Some have argued that using the DSM-5 can even be dangerous (Kelland, 2012). An additional aspect of self-evaluation specifically applicable to the current study is awareness of this controversy surrounding the DSM-5 and of how it should be accounted for in the research procedures. An objective approach by the research team throughout the different stages of the study is necessary to safeguard the truthfulness of the study’s outcomes. The objectivity of the research team should furthermore be made explicit to all relevant stakeholders and participants. As a research team, our goal is not to advocate usage of the DSM-5, nor do we seek to promote its diffusion and adoption across Europe. We deem that the decision to adopt or reject the DSM-5 is best left to experienced clinicians in the field of mental health. They are the ones who potentially have to work with it and who are best capable of assessing what consequences DSM-5 usage will have for everyday practice and for patient welfare. That is why these clinicians are also the ones who will be included as participants in the current study. Our study aims to obtain insights into what moves these clinicians to adopt or reject the DSM-5. If a substantial number of clinicians indeed agree that the DSM-5 is an unwelcome innovation, this will resonate in the outcomes of the study. Indicators such as the ‘compatibility of the DSM-5 with the clinician’s values, norms and needs‘ and ‘the level of perceived risk from using the DSM-5’ can provide valuable insights into how clinicians evaluate the DSM-5. The knowledge obtained from our study has an important role to play in guiding the debate on how to classify mental disorders, with the eventual public health oriented goal in mind of working towards a uniform classification system on a European level.

4.2.2 Ethical considerations

Approval to conduct the current study will be sought from relevant ethics committees in France, Germany and the Netherlands separately. Informed consent will be obtained from participants once the survey is accessed through the online platform. Despite using verification codes to access the online surveys, data will be processed anonymously. All researchers will declare to have no conflicting interests.

 4.2.3 Timetable

The current research project will have a timespan of 11 months, as illustrated in Figure 6.

Figure 6. Timetable of the current research project (own figure)

References

  • American Psychiatric Association (APA). (1980). Diagnostic and statistical manual of mental disorders (3rd ed.). Washington, DC: Author.
  • American Psychiatric Association (APA). (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.
  • American Psychiatric Association (APA). (2000). Diagnostic and statistical manual of mental disorders (4th ed., text rev.). Washington, DC: Author.
  • American Psychiatric Association (APA). (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.
  • American Psychiatric Association (APA). (2014). About DSM-5. Retrieved 2 March 2015, from http://www.dsm5.org/about/pages/default.aspx.
  • Ben-Zeev, D., Young, M.A., & Corrigan, P.W. (2010). DSM-V and the stigma of mental illness. Journal of Mental Health, 19 (4), 318-327. 
  • Bundesärztekammer. (2015). About the German Medical Association. Retrieved 8 March 2015, from http://www.bundesaerztekammer.de/page.asp?his=4.3569.
  • Cassels, C. (2014). One third of psychiatrists not using DSM-5. Retrieved 2 March 2015, from http://www.medscape.com/viewarticle/830099.
  • Consumers, Health, Agriculture and Food Executive Agency (CHAFEA). (2012a). Managing projects. Disseminating project results. Retrieved 8 March 2015, from http://ec.europa.eu/chafea/management/Fact_sheet_2010_10.html.
  • Consumers, Health, Agriculture and Food Executive Agency (CHAFEA). (2012b). Managing project. Performing a project evaluation. Retrieved 9 March 2015, from http://ec.europa.eu/chafea/management/Fact_sheet_2010_09.html.
  • Consumers, Health, Agriculture and Food Executive Agency (CHAFEA). (2012c). Managing project. Elaborating an evaluation plan. Retrieved 9 March 2015, from http://ec.europa.eu/chafea/management/Fact_sheet_2010_05.html.
  • Council for Healthcare Regulatory Excellence. (2012). The regulation of doctors in France: a summary. Retrieved 8 March 2015, from http://www.professionalstandards.org.uk/docs/default-source/psa-library/120802-france-doctors-draft.pdf?sfvrsn=0.
  • Czabanowska, K. (18 September 2014). Diversity Compared. Diversity Recognised, Explored and Compared: Measuring and Comparing Health and Health Care. Lecture conducted from Maastricht University, Maastricht.
  • Denzin, N.K., & Lincoln, Y.S. (2011). The SAGE Handbook of Qualitative Research. Thousand Oaks, CA: SAGE Publications.
  • Department of Health Statistics and Information Systems. (2013). WHO methods and data sources for global burden of disease estimates 2000-2011. Retrieved 17 February 2015, from http://www.who.int/healthinfo/statistics/GlobalDALYmethods_2000_2011.pdf.
  • Derksen, J. (2012). Bevrijd de psychologie: Uit de greep van de hersenmythe. Amsterdam: Bakker.
  • Deutsches Institut für Medizinische Dokumentation und Information. (2014). ICD-10-GM. Retrieved 2 March 2015, from  http://www.dimdi.de/static/de/klassi/icd-10-gm/index.htm.
  • Ellis, J. (2003). Statistiek voor de psychologie. GLM en non-parametrische toetsen. Amsterdam: Boom Lemma.
  • European Commission. (2009). Discussion paper – European health information – objectives and organization. Retrieved 19 February 2015, from http://ec.europa.eu/health/strategy/docs/ev_20090428_rd01_en.pdf.
  • First, M.B. (2009). Harmonisation of ICD-11 and DSM-V: Opportunities and challenges. The British Journal of Psychiatry, 195, 382-390. doi:10.1192/bjp.bp.108.060822.
  • Frances, A. (2013). Essentials of psychiatric diagnosis: Responding to the challenges of DSM-5. New York, NY: Guilford Press.
  • Ginter, G. (2014). DSM-5 Conceptual Changes: Innovations, Limitations and Clinical Implications. The Professional Counselor, 4 (3), 179-190.
  • Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: systematic review and recommendations. The Milbank Quarterly, 82 (4), 581-629. doi:10.1111/j.0887-378X.2004.00325.x.
  • Gregory, R.J. (2011). Psychological Testing: History, Principles and Applications. Boston: Pearson Education.
  • Groves, R.M., & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias. Public Opinion Quarterly, 72 (2), 167-189.
  • Kelland, K. (2012). New mental health manual is “dangerous” say experts. Retrieved 9 March 2015, from http://www.reuters.com/article/2012/02/09/us-mental-illness-diagnosis-idUSTRE8181WX20120209.
  • Kupfer, D.J., Kuhl, E.A., & Wulsin, L. (2013). Psychiatry’s integration with medicine: The role of DSM-5. Annual Review of Medicine, 64, 385-392. doi:10.1146/annurev-med-050911-161945.
  • Lane, C. (2012). Anti-DSM Sentiment Rises in France. Retrieved 21 February 2015, from https://www.psychologytoday.com/blog/side-effects/201209/anti-dsm-sentiment-rises-in-france.
  • Lane, C. (2013). Why DSM-5 Concerns European Psychiatrists. Retrieved 21 February 2015, from https://www.psychologytoday.com/blog/side-effects/201303/why-dsm-5-concerns-european-psychiatrists.
  • McCarron, R.M. (2013). The DSM-5 and the art of medicine: Certainly uncertain. Annals of Internal Medicine, 159, 360-361. doi:10.7326/0003-4819-159-7-201310010-00688.
  • Mossialos, E., Permanand, G., Baeten, R., & Hervey, T.K. (2010). Health systems governance in Europe: The Role of European Law and Policy. Cambridge: Cambridge University Press.
  • Ministerie van Volksgezondheid, Welzijn en Sport. (n.d.). BIG-register. Retrieved 3 March 2015, from https://www.bigregister.nl/.
  • National Institute for Health and Care Excellence. (2008). Attention Deficit Hyperactivity Disorder: Diagnosis and Management of ADHD in Children, Young People and Adults. NICE clinical guideline 72. Retrieved 19 February 2015, from http://www.nice.org.uk/guidance/CG72.
  • National Public Health Partnership. (2000). A Planning Framework for Public Health Practice. Melbourne: National Public Health Partnership.
  • Nederlandse Vereniging voor Psychiatrie. (2014). Vijftien veel gestelde vragen over DSM-5. Retrieved 2 March 2015, from http://www.pam.nl/wp-content/uploads/QA-lijst.pdf.
  • Nemeroff, C.B., Weinberger, D., Rutter, M., MacMillan, H.L., Bryant, R.A., Wesseley, S., . . . Lysaker, P. (2013). DSM-5: a collection of psychiatrists views on the changes, controversies, and future directions. BMC Medicine, 11:202. doi:10.1186/1741-7015-11-202.
  • Neumann, W.L. (2014). Social Research Methods: Qualitative and Quantitative Approaches. Harlow: Pearson Education Limited.
  • Pavlova, M. (9 January 2015). Research paradigms: Overview of methodologies. Research Methods. Lecture conducted from Maastricht University, Maastricht.
  • Payne, J.M., France, K.E., Henley, N., D’Antoine, H.A., Bartu, A.E., Elliot, E.J., & Bower, C. (2011). Researchers’ experience with project management in health and medical research: Results from a post-project review. BMC Public Health 2011, 11:424.
  • Penwarden, R. (2013). How to avoid nonresponse error. Retrieved 8 March 2015, from http://fluidsurveys.com/university/how-to-avoid-nonresponse-error/.
  • Peytchev, A. (2013). Consequences of survey nonresponse. Annals of the American Academy of Political and Social Science, 645, 88-111.
  • Polanczyk, G., de Lima, M.S., & Horta, B.L. (2007). The worldwide prevalence of ADHD: a systematic review and meta-regression analysis. American Journal of Psychiatry, 146, 942-948.
  • Regier, D.A., Kuhl, E.A., & Kupfer, D.J. (2013). The DSM-5: Classification and criteria changes. World Psychiatry, 12, 92-98. doi:10.1002/wps.20050.
  • Rijksoverheid. (n.d.). Registratie in BIG-register van beroepen. Retrieved 8 March 2015, from http://www.rijksoverheid.nl/onderwerpen/personeel-in-de-zorg/registratie-in-big-register-van-beroepen.
  • Schmitz, N., Kruse, J., Heckrath, C., Alberti, L., & Tress, W. (1999). Diagnosing mental disorders in primary care: the General Health Questionnaire (GHQ) and the Symptom Check List (SCL-90-R) as screening instruments. Social Psychiatry and Psychiatric Epidemiology, 34 (7),  360-366.                
  • Smith, P.C., Mossialos, E., & Papanicolas, I. (2008). Performance measurement for health system improvement: experiences, challenges and prospects. Background document. WHO European Ministerial Conference on Health Systems: “Health Systems. Health and Wealth”. Copenhagen, WHO Regional Office for Europe. Retrieved 19 February 2015, from http://www.who.int/management/district/performance/PerformanceMeasurementHealthSystemImprovement2.pdf.
  • Spence, D. (2012). The psychiatric oligarchs who medicalise normality. British Medical Journal, 344. doi:http://dx.doi.org/10.1136/bmj.e3135.
  • Wedge, M. (2012). Why French kids don’t have ADHD. Retrieved 20 February 2015, from https://www.psychologytoday.com/blog/suffer-the-children/201203/why-french-kids-dont-have-adhd.
  • World Health Organization (WHO). (1992). The ICD-10 classification of mental and behavioural disorders: Clinical descriptions and diagnostic guidelines. Geneva, WHO Press.
  • World Health Organization (WHO). (2013). Global Health Estimates for 2000-2012. Geneva, WHO Press. Retrieved 19 February 2015, from http://ec.europa.eu/health/strategy/docs/ev_20090428_rd01_en.pdf.
  • World Health Organization (WHO). (2014). Prevalence of mental disorders (statistics). European Health for All database (HFA-DB). Copenhagen, WHO Regional Office for Europe. Retrieved 19 February 2015, from http://www.euro.who.int/en/data-and-evidence/databases/european-health-for-all-database-hfa-db.
  • Zur, O., & Nordmarken, N. (2013). DSM: Diagnosing for Status and Money - Summary Critique of the DSM-5. Retrieved 22 February 2015, from http://www.zurinstitute.com/dsmcritique.html.