In Phase 1, a two-day workshop was held in April 2016 in the Swedish town of Sigtuna, a key trading and meeting point on the Baltic at the time of the Vikings and hence the name of the principles. Starting from the participants’ current understanding (e.g., Nielsen & Abildgaard, 2013; Nielsen & Randall, 2013; Reed, Howe, Doyle & Bell, 2018; von Thiele Schwarz et al., 2016), a broad range of best practices was identified and explored through reflexive conversations inspired by Open Space Technology (OST) (Owen, 2008). OST is a participant-driven, real-time approach that relies on self-organization to explore topics of interest to participants. We used this approach to allow participants to move freely in and out of smaller groups, gathering around emerging principles visualized on flipchart papers. Discussions were captured on the flipcharts as they developed. At this stage, the set of principles was allowed to expand and contract, with flipcharts combined or added as needed. The flipcharts were then examined by the whole group and, through discussions of similarities and differences, condensed into a first set of 15 principles. These were further amended and condensed over the following year (see Table 1). In Phase 2, the principles were refined and validated with external experts, including both academics and practitioners, through a series of meetings and workshops (e.g., a symposium at the EAWOP 2017 Conference). Written and oral feedback revealed overall agreement on the relevance and importance of the principles, but also that some were ambiguous. We therefore refined the principles over the following five months, including an additional workshop in October 2017, to finalize them.
Organizational interventions often consist of three phases: 1) design, 2) implementation, and 3) evaluation (Tafvelin et al., 2019 ). The principles cut across the three phases, as illustrated in Figure 1 .
Ten principles for how to design, implement, and evaluate organizational interventions
Principle 1: Ensure active participation and engagement among key stakeholders
This principle recognizes that employees and organizations are not passive recipients of organizational interventions (Nielsen, 2013 ). They need to shape, manage, and own interventions. Participatory approaches are currently recommended by national and international policy bodies for managing psychosocial risk and for organizational interventions (Nielsen, 2017 ). Participation is relevant to consider across the design, implementation and evaluation of interventions, and among employees as well as managers at all levels of the organization. The latter includes ensuring senior management support and ownership over the intervention at the appropriate level of the organization (Hasson et al., 2014 ).
In the design phase, participation can increase the appropriateness of the intervention by ensuring that participants’ expertise is considered in the development of the intervention, e.g., what changes are feasible and appropriate in their workplace (Storkholm, Savage et al., 2019). During implementation, participants are more likely to be committed to the intervention if they have had a chance to influence it (Rosskam, 2009). Participation can also facilitate integration into existing work practices and procedures (Principle 6) (Tsutsumi et al., 2009). For evaluation, participation increases the likelihood that stakeholders will accept the validity of any findings the evaluation yields and commit to acting on them (i.e., evaluability) (Leviton et al., 2010).
What is meant by participation varies greatly, both in terms of the degree of influence and in terms of what the participants gain influence over (i.e., the content, the process, or the goal of the intervention) (Abildgaard et al., 2018 ). Based on the substantive evidence supporting active engagement, our proposition is for active forms of participation where researchers and organizational stakeholders, including employees, work closely together throughout the design, implementation, and evaluation of the intervention, enabling influence over all aspects of the intervention, including as co-creative partners (Brydon-Miller et al., 2003 ; Storkholm, Mazzocato et al., 2019 ).
Although this principle acknowledges the value of close collaboration and power-sharing (Brydon-Miller et al., 2003), it also acknowledges that the appropriate level of participation varies. For example, the optimal level of participation will vary with the characteristics of the intervention (e.g., its aim) and with contextual factors. These may include cultural differences that shape expectations about the degree of participation. For example, participation will be less challenging where it does not deviate from cultural norms, such as in the European Nordic countries, where there is a long-standing tradition of collaboration and participation between employer and employees (Gustavsen, 2011).
The degree of participation will also vary during the course of the change process: participation will be required from different stakeholders at different time points and for different purposes. For example, senior management involvement may be needed when the overall project is designed, to ensure the fulfilment of Principles 2 and 3 (Hasson et al., 2018), whereas employee involvement may be most important when the intervention is implemented at the local level, giving employees and line managers space and time to integrate the intervention into their work context. Thus, the proposition here is to find the appropriate level of participation across multiple stakeholders. Doing so entails understanding the embedded power structures in the organization (formal/stable hierarchies and informal/fluctuating structures), because they affect the level of influence that different stakeholders have on the intervention. Not everyone will feel comfortable speaking up, not everyone’s voice will count (Wåhlin-Jacobsen, 2019), and there will be information asymmetries in what people in the organization know, which affect the willingness and opportunity to participate and, thus, who has influence over or benefits most from the intervention. Such power structures may be difficult for an outsider to notice, so it may be better to assume by default that power issues are at play.
Principle 2: Understand the situation (starting points and objectives)
As outlined in the introduction, organizational interventions are embedded in the organizational context. Therefore, this principle suggests that researchers acknowledge that organizations are social systems, with their own unique dynamics and histories. We propose that the likelihood of a successful outcome is greater when organizational contexts are actively considered in the design, implementation, and evaluation (Nielsen & Randall, 2013 ; von Thiele Schwarz et al., 2016 ). Thus, building on disciplines such as engineering and quality improvement, we argue that researchers need to understand the context and take it into account before finalizing the design and starting to implement an organizational intervention (Edwards & Jensen, 2014 ). In its most basic form, this principle encourages researchers to refrain from conducting organizational interventions if they have not ensured that the organization needs it. For example, introducing a physical exercise intervention may not be the most appropriate in a context where work overload is the main issue.
Understanding the current situation includes considering the work systems, working conditions, history, challenges, and problems, as well as the implicit and explicit goals and intended outcomes of the intervention (e.g., the ideal end state). Such understanding can be achieved through recurrent conversations and negotiations between stakeholders, as well as through more formal assessments describing the situation (e.g., surveys and risk assessments). Knowledge about the organizational context matters for the design and implementation, as well as the evaluation, of organizational interventions. First, it helps identify or design the most suitable intervention by clarifying the direction of the change (from where, to where) (Aarons et al., 2011). Second, clarifying the gap between present and desired states may create a sense of urgency for change (i.e., “creative tension”), supporting engagement and participation (Principle 1) (Senge, 2006). An understanding of the context can also uncover organizational factors that can make or break the implementation (barriers and facilitators, e.g., financial constraints or staff turnover) so that these can be managed. Finally, this knowledge provides information about the starting point (“baseline”), which is essential for evaluation because it makes it possible to track changes over time (Batalden & Davidoff, 2007).
Different stakeholders may not understand the situation in the same way. We do not suggest that all stakeholders must have a fully shared understanding of the situation (i.e., what the problem, the intervention, and the desired outcome are), although it helps to have a mutual agreement on the need for and purpose of the change (Frykman et al., 2014 ; Storkholm et al., 2017 ) (i.e., shared sense-making) (Weick, 1995 ). It is, however, important to understand that there are different perspectives. Research on perceptual distance has shown that a lack of congruence, for example, between managers and employees, has an independent, negative effect on intervention outcomes (e.g., Hasson et al., 2016 ; Tafvelin et al., 2019 ). Thus, even if stakeholders do not have a shared understanding of the situation, it is essential that they know if that is the case, so that any misunderstandings can be managed upfront.
Principle 3: Align the intervention with existing organizational objectives
As described in the introduction, organizations are not neutral settings for testing research hypotheses, and organizational interventions therefore need to benefit the organization as well as the researchers. This requirement for dual benefit means interventions need to be designed and implemented with consideration of how they contribute to organizational objectives as well as the researchers’ objectives. Alignment with the organization’s objectives serves several purposes. First, alignment helps create engagement by demonstrating how the intervention can contribute to important organizational outcomes. It can also reduce the risk of contradictions that emerge when the aim of an intervention is not in line with other objectives (Ipsen et al., 2015; Ipsen & Jensen, 2012). Second, it reduces the risk of unintended side effects that emerge when an intervention is designed, implemented, and evaluated without consideration of how it may affect other areas (Bamberger et al., 2016), e.g., when an intervention benefits one employee group at the expense of another. Finally, aligning objectives is essential to minimize the risk of the intervention becoming a time-limited ancillary project, discontinued once the researchers or a key change agent in the organization moves on. Thus, this principle is central to the sustainability of organizational interventions.
Striving for alignment also involves trade-offs and challenges. First, for researchers, it may mean adjusting their research agenda to ensure it benefits the organization – or refraining from doing the research in a particular organization where there is no alignment. With regard to the different organizational stakeholders, aligning the intervention with organizational objectives means that the intervention is placed in a landscape of (perceived or real) contradictory and competing organizational objectives, such as those between safety and production (von Thiele Schwarz & Hasson, 2013). This may amplify tensions between stakeholder groups, which, in turn, may pose a barrier to the implementation of the intervention. It may also create an ethical dilemma when researchers and change agents need to favour one organizational objective over another.
Aligning the intervention with organizational objectives does not suggest that the objectives of the intervention automatically change, for example, from focusing on employee health and well-being to focusing on efficiency. Instead, it suggests that discussions about how an intervention might affect various organizational objectives should be considered during the design of the intervention and continually revisited to avoid unexpected agendas derailing the intervention at a later stage. Thus, at a minimum, this principle points to the need to disclose any competing objectives so that they can be managed or monitored to avoid derailment and unsustainable improvements.
Principle 4: Explicate the program logic
Given that organizational interventions are dependent on their context, it is essential for the design, implementation, and evaluation to explicate how they are supposed to work. This involves outlining the logical links between the intervention activities and immediate, short-, and long-term outcomes (e.g., Pawson, 2013 ; Rogers, 2008 ) including identifying multiple possible intervention activities as well as multiple pathways (Abildgaard et al., 2019 ). Drawing on the field of program evaluation, this principle suggests explicating a program logic (also known as a program theory, logic model, impact pathway, or theory of intervention) as a way to clarify the proposed theoretical mechanisms that explain why certain activities are expected to produce certain outcomes (Pawson & Tilley, 1997 ). Program logic focuses on the theory of change, i.e., the how and why intervention activities may work (Havermans et al., 2016 ; Kristensen, 2005 ; Nielsen & Miraglia, 2017 ), rather than, for example, theories of health (i.e., the relationship between exposure to work factors and employee health).
Program logic is used in the design as well as the implementation and evaluation of an intervention. First, it identifies which intervention activities are most likely to close the gap between the current and desired state. An important part of this is utilizing best available knowledge. Secondly, it guides implementation by clarifying the mechanisms, thereby explicating the implementation strategies needed to support behavioural change. Finally, it is a blueprint for the evaluation, as it describes when and what to measure.
To explicate the program logic, multiple sources of information are needed, so it may benefit from co-creation with stakeholders (von Thiele Schwarz et al., 2018 ). The development process may differ depending on the extent to which intervention activities are predefined, such as when the change involves implementation of guidelines or an evidence-based intervention. When intervention activities are predefined, they become the starting point for logically identifying appropriate proximal and distal outcomes. When intervention activities are not predefined, the program logic becomes an important part of identifying the intervention activities. This is done by starting from the outcomes and working backwards so that activities that could lead to the desired outcomes are identified (Reed et al., 2014 ; Saunders et al., 2005 ). In both cases, the program logic should be considered a hypothesis to be continuously tested and revised throughout implementation.
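To make the idea of a program logic concrete, the hypothesized chains from activities through mechanisms to proximal and distal outcomes can be written down explicitly, for example as a simple data structure that stakeholders can inspect and revise as the intervention unfolds. The sketch below is purely illustrative; all entries (the mailmen example extends Principle 5's illustration) are invented for demonstration, not drawn from the cited studies.

```python
# Hypothetical sketch: a program logic captured as a plain data structure so
# the activity -> mechanism -> outcome chains can be shared and revised.
# All entries are invented examples.

program_logic = {
    "problem": "Low social support among mailmen working alone",
    "activities": ["Provide mobile phones", "Share contact lists"],
    "mechanisms": ["Colleagues can be reached while en route"],
    "proximal_outcomes": ["More peer-to-peer calls during shifts"],
    "distal_outcomes": ["Higher perceived social support", "Lower strain"],
}

# When activities are not predefined, one works backwards from outcomes:
for outcome in program_logic["distal_outcomes"]:
    print(f"Desired outcome: {outcome}")
print("Candidate activities:", ", ".join(program_logic["activities"]))
```

Whatever form it takes (table, diagram, or structure like the above), the point is that the logic is explicit enough to be treated as a hypothesis and tested against data during implementation.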
Principle 5: Prioritize intervention activities based on effort-gain balance
Once the program logic has helped to explicate the goals of the intervention and the possible intervention activities, it may be necessary to prioritize between different activities. This principle suggests that the decision of which activities to prioritize should be based on an effort-gain balance analysis. This prioritization involves considering the anticipated impact of an intervention on the one hand and the expected effort needed to realize it on the other (Batalden & Stoltz, 1993 ; Kotter, 1996 ; Wilder et al., 2009 ). Prioritizing activities, therefore, entails striving to strike a balance between the investment (in effort) that an organization is ready to commit to and the potential gains that can be achieved. Understanding this ”return on investment” balance for each intervention activity can inform the prioritization between potential intervention activities and is therefore a calculation integral to the design phase (Cox et al., 2000 ).
The potential gains of candidate activities need to be identified in congruence with the goals of the intervention. Potential gains are often evident from the goals of the intervention and the alignment process (Principle 3), or can be illuminated by previous studies. For example, a gain might be improved social support through an intervention providing mailmen with mobile phones so they can call each other while en route. Subsequently, the efforts needed to achieve these gains need to be considered. Efforts include the resources (time, money, emotional and cognitive effort) involved in bringing about the changes and mitigating barriers to the design and implementation; e.g., it is not only the financial cost of buying mobile phones, but also ensuring that mailmen have each other’s phone numbers and the skills to use the phones (i.e., implementation efforts).
Prioritizing and conducting effort-gain analyses is not straightforward. Limited prior experience with implementation or the lack of an organizational learning culture will require additional effort (Kaplan et al., 2011). The advantage of effort-gain balance analyses is that they help prioritize some activities (low effort-high gain) over others (high effort-low gain). Activities that are low effort-low gain may, however, at times be a feasible starting point from a motivational perspective, because they can build momentum by showing immediate, albeit limited, results (Cox et al., 2002). High effort-high gain activities may be prioritized when they offer a solution to a central problem, and when the complexity of the solution matches the complexity of the problem (Storkholm et al., 2019). Implementation of some activities may also be postponed until organizational members have further developed their capability to manage change. Overall, using the knowledge of various stakeholders (Principle 1) is vital for ensuring a balanced understanding of efforts and gains.
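As a minimal sketch of what such a prioritization might look like in practice, the snippet below scores hypothetical activities on stakeholder-rated effort and gain and sorts them into the four quadrants discussed above. The activity names, scales, and scores are invented assumptions for illustration; in a real intervention the ratings would come from the stakeholder dialogue described in Principle 1.

```python
# Illustrative effort-gain prioritization with invented activities and
# 1-5 ratings; not a prescription from the source text.

def quadrant(effort, gain, cutoff=3):
    """Classify an activity into an effort-gain quadrant."""
    e = "low effort" if effort < cutoff else "high effort"
    g = "low gain" if gain < cutoff else "high gain"
    return f"{e}-{g}"

activities = [
    ("Provide mobile phones to mailmen", 2, 4),  # (name, effort, gain)
    ("Redesign the shift schedule",      5, 4),
    ("Add a weekly check-in meeting",    1, 2),
    ("Replace the routing software",     5, 2),
]

# Sort so that low-effort, high-gain candidates surface first.
for name, effort, gain in sorted(activities, key=lambda a: (a[1], -a[2])):
    print(f"{quadrant(effort, gain):22s} {name}")
```

A simple ordering like this is of course only a starting point for the stakeholder discussion; as the text notes, a low effort-low gain activity may still be chosen first to build momentum.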
Principle 6: Work with existing practices, processes, and mindsets
During the design, implementation, and evaluation of an organizational intervention, piggybacking on what is already known, already in place, and already done, can help integrate the intervention with the organizational practices, processes, and individual mindsets (Sørensen & Holman, 2014 ; von Thiele Schwarz et al., 2017 ). Thus, this principle addresses both organizational (practices and processes) and individual factors. Following this principle involves making the intervention fit with organizational logics and daily work. This fit may reduce the risk of conflict with existing organizational procedures, practices, and mindsets (Storkholm et al., 2017 ) and facilitate stakeholder engagement (Bauer & Jenny, 2012 ; Nielsen & Abildgaard, 2013 ).
This principle is particularly important when planning the implementation because the creation of separate implementation structures is costly, hinders synergies, and prevents the intervention activities from becoming an integrated part of everyday work (von Thiele Schwarz et al., 2015 ). New structures are easily abandoned once the project is over, which hinders sustainability (Ipsen et al., 2015 ).
The principle draws on developments in work and organizational psychology (e.g., Stenfors Hayes et al., 2014; Zwetsloot, 1995), which in turn build on the integrated management system movement in quality management (Jørgensen et al., 2006; Wilkinson & Dale, 1999, 2002). As an alternative to the conventional praxis of trying to minimize contextual influence, this principle is an example of how the interrelatedness between an intervention and its context should be embraced. For example, if an organization already has a process for working with quality improvement, it may be possible to extend it to include the implementation of the intervention activities (von Thiele Schwarz et al., 2017). Other examples of working with existing practices include using groups, meetings, and communication pathways that are already in place, rather than creating new ones (Malchaire, 2004).
Nevertheless, it is not always feasible to follow this principle. For example, it is not applicable when the existing processes are part of the problem. That may be the case when the content of the intervention calls for changes of the system, rather than within the system. The implication is that this principle calls for the same careful consideration as when balancing quality improvement, i.e., improvement within a system, and innovation, which challenges the system by breaking new ground (March, 1991 ; Palm et al., 2014 ). Thus, it is vital to acknowledge that it may very well be the existing practices, processes, and mindsets that are the root causes of the problems, which in turn makes changing them a core intervention objective. What we are proposing is that the effort involved in breaking new ground such as challenging existing practices, processes, and mindsets should never be underestimated. Thus, challenging them should be done with intention, not by accident.
Principle 7. Iteratively observe, reflect, and adapt
Based on the premise that organizational interventions are complex, researchers and organizations need to iteratively observe, reflect, and (frequently) make adaptations to the planned intervention, implementation, or context as the intervention unfolds. This principle calls for ongoing monitoring and evaluation of the intervention progress, as well as the use of that information to improve the intervention content to ensure goal attainment. It also calls for increased attention to factors related to the change process, for example, the frequency of use of problem-solving tools in an intervention, in contrast to only focusing on the intervention’s outcomes.
The principle contrasts with traditional ways of designing, implementing, and evaluating organizational interventions as if they were episodic changes in a static system, with a clearly delineated beginning and end (Nielsen et al., 2010 ). It builds upon a shift from focusing solely on solving specific problems without questioning the solution (i.e., the intervention) (single-loop learning) to focusing on double-loop learning, which allows the solution, process, and goal to be questioned and modified based on continual monitoring and evaluation (Argyris & Schön, 1996 ).
The ability of interventions to achieve intended outcomes is mediated by a number of factors related to the interactions between content (intervention activities), process, and context (Pettigrew & Whipp, 1993). Thus, although the program logic (Principle 4) provides a hypothesized model for how this may play out, it remains a hypothesis: how it actually plays out cannot be fully anticipated beforehand, particularly in interrelated systems where changes in one part of the system can have unintended consequences in another. Therefore, interventions can seldom be fixed and implemented exactly as planned (Chambers et al., 2013). This principle calls for careful attention to what the core components of the intervention are, so that their evolution can be continually monitored, evaluated, and adapted to achieve the intended outcomes. This achievement is, after all, what matters to organizations; they care less about whether the intervention is implemented exactly as planned, as long as it works.
Data and analysis are key to ensuring rigour in the process of observing, refining, and adapting an intervention (Storkholm et al., 2019). We suggest iterative cycles in which data concerning the intervention, implementation, context, and outcomes are monitored and used to inform potential adaptations (e.g., Shewhart’s plan-do-study-act cycle) (Taylor et al., 2014). Organizations and researchers would therefore benefit from a systematic approach to evaluating progress using pragmatic scientific principles (Savage et al., 2018). To ensure this is done rigorously, we suggest: 1) using formal and informal methods (surveys, interviews, observations, documents, conversations) to collect data; 2) minding the time lags derived from the program logic; 3) using the information to adapt the design or implementation of the intervention to the context; 4) conducting small-scale rapid tests of activities and increasing the scale as data accumulate; 5) identifying new systemic challenges that may require the focus of the intervention activities to be revisited; and 6) considering how intervention activities may adversely affect other parts of the system. This makes change a dynamic process and positions evaluation as ongoing, managed locally by the organization, rather than as the domain of the researcher after design and implementation (von Thiele Schwarz et al., 2016; Woodcock et al., 2020).
Principle 8. Develop organizational learning capabilities
This principle broadens the scope of researching organizational interventions from a narrow focus on specific study objectives to a broader commitment to facilitate a learning capability within the organization. Building a learning capability ensures that lessons are harvested within the organization to support future change efforts. This includes lessons from designing, implementing, and evaluating an intervention, as well as the tools, infrastructures, and practices developed. Organizational interventions tend to become finite projects which are not sustained over time even though they are often costly (Bernerth et al., 2011 ). Therefore, researchers involved in organizational interventions need to ensure that individual and organizational benefits are optimized. This principle also highlights the potential added value for organizations collaborating with researchers by ensuring that at least some of the know-how stays in the organization when the researchers leave. For example, it may involve engaging Human Resources staff or internal consultants to deliver intervention components rather than using external experts, or adding components that facilitate transfer of intervention-specific learning to other situations.
This principle is rooted in the disciplines of organizational learning, organizational behaviour, pragmatism, and systems theory, as well as in management concepts such as lean management. Developing a learning capability is essential to an organization’s ability to address future challenges and continually learn from change processes (Nielsen & Abildgaard, 2013). This principle builds on the double-loop learning of Principle 7 and expands it to include triple-loop learning (i.e., the organization becomes aware of the processes and structures needed to improve how learning is constructed, captured, disseminated, and incorporated) (McNicholas et al., 2019; Visser, 2007).
Principle 9: Evaluate the interaction between intervention, process, and context
If organizational interventions are to be conducted as outlined in the previous principles, this has implications for evaluation, both in terms of evaluation design and analytic approaches. Conceptually, this principle calls for a move away from answering research questions about whether an intervention works (isolated from context) towards focusing on for whom, when, and why it works, and how it can be improved, building on theories and practice in change management, evaluation science, and organizational science (Pawson, 2013; Pettigrew & Whipp, 1993). By applying this principle, the evaluation sheds light on how a change was brought about: how the intervention interacted with the context (including participants and structures), and how this enabled certain mechanisms to trigger intended outcomes (Pawson, 2013). It contributes to theory building as well as to answering the kinds of questions practitioners ask.
The evaluation needs to capture the iterative changes to the intervention outlined in Principle 7, as well as the reasons for those changes and their impact on outcomes. Yet, to meet the objective of contributing both to science and practice, this needs to be done in a way that allows causal inferences as well as the accumulation of data across cases. Process evaluation is an important first step, addressing research questions such as whether employee participation, leadership support, or facilitation explains variation in the outcomes of an intervention (Biron & Karanika-Murray, 2013). It also calls for research designs beyond pre- and post-measurement, e.g., stepped-wedge designs, propensity scores, and regression discontinuity (Schelvis et al., 2015).
Realist evaluation is another example of how some of the complexities of organizational interventions can be captured (Pawson & Tilley, 1997). It allows hypothesized configurations derived from a program logic (Principle 4) to be tested. For example, using multi-group structural equation modelling, one study tested whether the impact of a new participatory and structured problem-solving approach (kaizen) on employee wellbeing was explained by whether the kaizen work also included an occupational health perspective, and found that it was (von Thiele Schwarz et al., 2017).
There may be a need to move beyond traditional variable-oriented methods and case studies. One example is statistical process control charts, which build on rigorous time-series analyses to detect whether an outcome changes over time, over and above the expected natural variation, in a pattern that can be attributed to “special causes” – including intervention activities (Benneyan et al., 2003; Shewhart, 1930). This analysis allows research questions to be tested, for example, whether graphical feedback can have a positive impact on hospital infection trends, or whether variation in performance can be reduced by eliminating waste in the work process (Thor et al., 2007).
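The core logic of such a control chart can be sketched briefly: control limits are derived from the natural variation in a baseline period, and later observations outside those limits are flagged as special-cause variation. The example below uses the common individuals (XmR) chart rule with 3-sigma limits estimated from the average moving range; the infection counts are invented for illustration and do not come from the cited studies.

```python
# Minimal Shewhart individuals (XmR) chart sketch with invented data.
# Limits come from the baseline mean and average moving range; later
# points outside the limits signal "special-cause" variation, e.g.,
# a change attributable to an intervention activity.

def control_limits(baseline):
    """3-sigma limits for an individuals chart, via the moving range."""
    mean = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128  # d2 constant for subgroups of size 2
    return mean - 3 * sigma, mean + 3 * sigma

# Hypothetical monthly infection counts: 12 baseline months, then
# 6 months after graphical feedback was introduced.
baseline = [14, 16, 15, 13, 17, 15, 14, 16, 15, 14, 16, 15]
post = [13, 12, 11, 9, 8, 7]

lcl, ucl = control_limits(baseline)
special_cause = [x for x in post if x < lcl or x > ucl]
print(f"limits: ({lcl:.1f}, {ucl:.1f}); special-cause points: {special_cause}")
```

With these invented numbers the last three post-intervention months fall below the lower limit, i.e., the kind of signal that would support attributing the improvement to the intervention rather than to chance variation.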
A third example is configurational comparative methodology (Thiem, 2017), which belongs to the person-/case-oriented family of methods rather than the variable-oriented approaches that most evaluations of organizational interventions rely on. One such method, coincidence analysis, allows assessment of multiple pathways to the same outcome. For example, one study showed that a positive attitude among staff was always needed for high intention to change, whereas behaviour control mattered only under some circumstances (Straatmann et al., 2018). These three examples are very different; yet, they are all evaluation methodologies that combine sensitivity to what works for whom and in which circumstances with scientific rigour (Pawson, 2013; Pawson & Tilley, 1997).
Principle 10: Transfer knowledge beyond the specific organization
A cornerstone of organizational research, and what sets it apart from consulting, is the ambition not only to induce change in a single setting but to transfer knowledge from the specific to the general by accumulating learning that can be generalized, abstracted into theory, disseminated, and scaled up. Dissemination and scaling up of organizational interventions differ from evaluations that aim to draw generalizable conclusions about the effects of an intervention, where knowledge is accumulated through replication of (the same) intervention. Accumulation through replication requires interventions to be fixed over time and isolated from the context of application. When organizational interventions are approached as outlined in these principles, accumulation through replication is not feasible, because the intervention is integrated into, and interacts with, specific organizational contexts and changes over time through ongoing adaptations. This principle builds on the assumption that interventions seldom have an effect independent of the context in which they are used (Semmer, 2006).
In these cases, scalability cannot rest on statistical generalization and the accumulation of knowledge about specific interventions independent of context. Knowledge needs to be developed in other ways. This principle outlines that generalization, dissemination, and scalability should rely on analytical generalization, drawing on case study research (Flyvbjerg, 2006 ; Yin, 2013 ). This includes addressing research questions such as “What is required to reproduce the impact of the intervention in different settings?” We therefore encourage striving for cumulative knowledge, that is, a gradual refinement of the understanding of phenomena, by focusing on the aspects included in the principles, including how various factors interact to produce an outcome, and comparing and contrasting these across studies (Pawson & Tilley, 1997 ). This accumulation can, for example, be done using literature review methodologies such as qualitative metasynthesis (Docherty & Emden, 1997 ; Nielsen & Miraglia, 2017 ). Thus, rather than striving for generalizability, this principle suggests striving for specificity: a gradual increase in the precision of the knowledge of what works for whom and when (Pawson, 2013 ).
This article set out to propose principles for how to design, implement, and evaluate organizational interventions based on expertise from multiple disciplines, offering suggestions for how organizational interventions can be researched in a way that makes the end result both scientifically rigorous and practically applicable. We propose a way to move the field of organizational interventions forward. Using a Mode 2 knowledge production approach, we draw on our expertise and the literature from multiple disciplines to propose principles for further empirical testing and development.
In this paper, the principles are presented as discrete entities and in a linear fashion. This is a necessary simplification of a complex process for presentational purposes. The principles may overlap, their order is not self-evident, and they are interrelated. Further research is needed into the contribution of each individual principle, their timing, and the interrelatedness between principles; we hope this paper sparks an interest to advance this agenda.
Viewed one by one, the principles are not unique. They reflect the available evidence and/or best practice in one or more research disciplines concerned with changes in organizations. Some of them, for example, Principle 1 (Ensure active participation and engagement among key stakeholders), rest on substantial empirical evidence and are common across many disciplines. Others, like Principle 4 (Explicate the program logic), represent established methodological practices in some disciplines (e.g., evaluation science), but not in many others. Because the principles originate from various disciplinary backgrounds, they are reflected in existing discipline-specific frameworks. For example, engaging various stakeholders (Principle 1), understanding the situation (e.g., conducting a needs assessment) (Principle 2), and developing program logic models (Principle 4) are part of the Centers for Disease Control and Prevention (CDC) Framework for Program Evaluation in Public Health (Centers for Disease Control and Prevention, 1999 ). However, the CDC framework does not reflect the ambition to meet both research and organizational objectives, or the dynamic character of organizational interventions. Another example is Brown and Gerhardt’s integrative practice model (Brown & Gerhardt, 2002 ). The model focuses on the design and formative evaluation of training programs, and it emphasizes the need for alignment with both strategy (Principle 3) and work procedures (Principle 6), as well as iterative development (Principle 7) of training material. Yet, it does not discuss principles such as the use of program logics, choosing activities based on effort-gain balance, or going beyond the training of a specific skill to developing learning capabilities (Brown & Gerhardt, 2002 ).
Instead of claiming that each principle is unique on its own, we argue that the contribution lies in the convergence of principles across multiple disciplines, and in how they represent a common understanding across a group of experts from various disciplines and research fields, despite their differences in theoretical, empirical, and epistemological backgrounds. Thus, the Sigtuna Principles represent common denominators for researching improvements in organizations that go beyond specific disciplines and may be one step towards a more general theory of organizational interventions.
Do all the principles need to be followed for organizational interventions to be successful? This is an empirical question that calls for further exploration. Our expectation is that the more principles are followed, the better. The degree to which it is feasible to do so is likely to differ between occasions and studies. For example, sometimes, the intervention is predefined, as when guidelines or new legislation is to be implemented, meaning that some principles are not applicable. The number of principles employed will also depend on the mandate that the researcher has in the organization. Sometimes the mandate is restricted to parts of the process, such as when the researcher is only involved in the design or the evaluation phases. This restriction, too, will affect which principles are applicable.
Nevertheless, when combined, these principles offer the potential for a transformative change in the way research into organizational interventions is conducted, in at least two interrelated ways. First, they change the role of researchers and the relationship between researchers and the organization towards a partnership between equals, where both perspectives are needed to negotiate the inherent contradictions between the objectives. For researchers, adopting a dual perspective implies a change from considering the intervention in isolation, mainly judging the content based on theory or what would yield the highest effect sizes, to thinking about how the practical impact of the change can be maximized. Such an approach includes considering the intervention in relation to the constraints and possibilities of the context in which the intervention is set, and the change process, and determining which activities would produce the best solution given that context (von Thiele Schwarz et al., 2019 ).
Second, the combination of Principles 1–9 on the one hand and Principle 10 on the other implies a change in the relationship between science and practice. This change involves moving from a one-way street from evidence to practice, where evidence is first established and then disseminated, implemented, and expected to have an impact, to a constructivist view on knowledge development, where the science base is gradually refined in interaction with practice (Greenhalgh & Wieringa, 2011 ). The principles encourage researchers to consider impact upstream, by asking how value for organizational stakeholders can be optimized throughout the design, implementation, and evaluation of the intervention, not just after the research is finished.
The change inherent in applying the principles is not easy, but neither is researching organizations without such considerations: there are whole books dedicated to the pitfalls involved (e.g., Karanika-Murray & Biron, 2015a ). The reasons for derailment include factors related to the intervention itself (e.g., incorrect content), the context (e.g., concurrent organizational changes), and the process (e.g., conflicts and power struggles) (Karanika-Murray & Biron, 2015b ), all well known to organizational researchers. These principles do not solve all of these challenges, but they encourage researchers to build relationships with organizational stakeholders so that stakeholders can be involved in troubleshooting and solving problems that might threaten to derail the change process – and the research.
The target group for this paper is researchers, yet it is not limited to this group. The principles encourage a way of working in partnership between research and practice, and therefore, the principles are relevant for practitioners, too. In fact, the principles may be of value to practitioners whether a researcher is involved or not. All but the last few principles are related to issues that are of common interest to both practitioners and researchers. The principles are also potentially applicable to other fields, given their development as part of an aspiration to find synergies across communities of practice in various research fields.
The development of the principles followed a structured process focused on harvesting learning from experts from various fields, which increases the trustworthiness of the result. However, they were developed by a relatively small group of people, and although many research fields were represented, not all fields relevant to organizational interventions were. There is still a risk that the principles do not reflect a broader understanding of the phenomena. A thorough validation process with other researchers and practitioners was employed to mitigate this risk.
This paper presents ten principles that could contribute to the transformation of how organizational interventions are researched, and thereby increase the potential real-world impact. We hope these principles spark interest in the entire intervention process, from the design and implementation to evaluation, and towards a mutually beneficial relationship between the need for robust research and the flexibility needed to achieve change in practice.
The authors would like to acknowledge the input from practitioners and researchers participating in the workshops and validation sessions and in particular, thank Gregory A Aarons, Hanna Augustsson, Marit Christensen, Kevin Daniels, Christian Dyrlund Wåhlin-Jacobsen, Désirée Füllemann, Liv Gish, Peter Hagedorn-Rasmussen, Sara Ingvarsson, Sara Korlen, Robert Larsson, Andreas Hellström, Michael Munch-Hansen, Monika Oswaldsson, Rachael Potter, Signe Poulsen, Thim Prætorius, Raymond Randall, Henk van Rhee, Ole Henning Sørensen, Andreas Stenling, Christian Ståhl, Susanne Tafvelin, and Johan Thor.
This work was supported by grants from the following agencies and grants. Funding for the meetings and the writing of this paper was generously provided by a network grant from the Joint Committee for Nordic research councils in the Humanities and Social Sciences (grant number 2016-00241/NOS-HS). In addition, UvTS was funded by the Swedish Research Council (2016-01261). JR was funded by the National Institute for Health Research (NIHR) under the Collaborations for Leadership in Applied Health Research and Care (CLAHRC) programme for North West London, and The Health Foundation. The views expressed in this publication are those of the authors and not necessarily those of the funders.
1. Owen ( 2008 ).
2. Priles ( 1993 ).
The authors declare no conflicts of interest.
To adapt and thrive in today’s business world, organizations need to implement effective organizational development (OD) interventions to improve performance and effectiveness at the individual, group, and organizational levels. OD interventions involve people, trust, support, shared power, conflict resolution, and stakeholder participation, to name a few elements, and they usually have a broad scope that can affect the whole organization. OD practitioners or change agents must have a solid understanding of different OD interventions to select the most appropriate one for the client’s needs, yet there is little precise information or research about how to design OD interventions or how they can be expected to interact with organizational conditions to achieve specific results. Notably, roughly 65% of organizational change projects fail, one reason being that the changes are not effectively implemented. This book therefore focuses on how to implement organizational change successfully. Designed for OD practitioners, management, and human resources professionals, it offers a step-by-step approach to implementing OD interventions, with illustrative case studies, practical tools, and guidelines, showing how OD professionals can actually get the work done and what the step-by-step OD effort should be. It looks at how to choose and implement a range of interventions at different levels and, unlike other books currently on the market, goes beyond the individual, group, and organizational levels to address OD intervention efforts at the industry and community levels as well. Essentially, this book provides a practical guide for OD interventions.
Each chapter provides practical information about general OD interventions, supplies best practice examples and case studies, summarizes the results of best practices, provides at least one case scenario, and offers at least one relevant tool for practitioners.
| Original language | English (US) |
| --- | --- |
| Title of host publication | Organization Development Interventions |
| Subtitle of host publication | Executing Effective Organizational Change |
| Publisher | |
| Pages | 1-340 |
| Number of pages | 340 |
| ISBN (Electronic) | 9781000418347 |
| ISBN (Print) | 9781032049137 |
| DOIs | |
| State | Published - Jan 1 2021 |
Published on 12.9.2024 in Vol 26 (2024)
Authors of this article:
1 Department of Health Services Research, Institute for Public Health and Nursing Research, University of Bremen, Bremen, Germany
2 Leibniz ScienceCampus Digital Public Health, Bremen, Germany
3 Department of Prevention and Health Promotion, Institute for Public Health and Nursing Research, University of Bremen, Bremen, Germany
4 Leibniz Institute for Prevention Research and Epidemiology, Bremen, Germany
5 Institute for Information, Health and Medical Law, University of Bremen, Bremen, Germany
6 Institute for Philosophy, University of Bremen, Bremen, Germany
7 Department of Health Care Management, Institute for Public Health and Nursing Research, University of Bremen, Bremen, Germany
8 Digital Media Lab, University of Bremen, Bremen, Germany
9 Human-Computer Interaction Group, University of Konstanz, Konstanz, Germany
10 Department for Health Services Research, Institute for Public Health and Nursing Research, University of Bremen, Bremen, Germany
Tina Jahnel, BA, MA, PhD
Department of Health Services Research
Institute for Public Health and Nursing Research
University of Bremen
Grazer Str 4
Bremen, 28359
Phone: 49 042121868808
Email: [email protected]
Background: Digital public health (DiPH) interventions may help us tackle substantial public health challenges and reach historically underserved populations, in addition to presenting valuable opportunities to improve and complement existing services. However, DiPH interventions are often triggered through technological advancements and opportunities rather than public health needs. To develop and evaluate interventions designed to serve public health needs, a comprehensive framework is needed that systematically covers all aspects with relevance for public health. This includes considering the complexity of the technology, the context in which the technology is supposed to operate, its implementation, and its effects on public health, including ethical, legal, and social aspects.
Objective: We aimed to develop such a DiPH framework with a comprehensive list of core principles to be considered throughout the development and evaluation process of any DiPH intervention.
Methods: The resulting digital public health framework (DigiPHrame) was based on a scoping review of existing digital health and public health frameworks. After extracting all assessment criteria from these frameworks, we clustered the criteria. During a series of multidisciplinary meetings with experts from the Leibniz ScienceCampus Digital Public Health, we restructured each domain to represent the complexity of DiPH. In this paper, we used a COVID-19 contact-tracing app as a use case to illustrate how DigiPHrame may be applied to assess DiPH interventions.
Results: The current version of DigiPHrame consists of 182 questions nested under 12 domains. Domain 1 describes the current status of health needs and existing interventions; domains 2 and 3, the DiPH technology under assessment and aspects related to human-computer interaction, respectively; domains 4 and 5, structural and process aspects, respectively; and domains 6-12, contextual conditions and the outcomes of the DiPH intervention from broad perspectives. In the use case of the German Corona-Warn-App (CWA), a number of questions relevant during the app’s development, but also important for assessors once the CWA was available, were highlighted.
Conclusions: DigiPHrame is a comprehensive framework for the development and assessment of digital technologies designed for public health purposes. It is a living framework and will, therefore, be updated regularly and as new public health needs and technological advancements emerge.
The overarching goal of public health is to promote and improve the health and well-being of people and communities. In recent years, digital interventions specifically designed for public health purposes have emerged on a large scale. Digital public health (DiPH) interventions may help us tackle substantial public health challenges, including aging populations [ 1 ], the dual burden of noncommunicable and communicable diseases [ 2 ], and the health impacts of climate change [ 3 ]. Moreover, DiPH interventions present valuable opportunities to improve and complement existing health care services and reach historically underserved populations.
With the COVID-19 pandemic, we have seen how digital technologies may accelerate responses to public health emergencies. For example, digital contact-tracing apps have become a major component to monitor community transmission and curb the spread of the virus in a population [ 4 ]. Further, the development of information platforms for international real-time public health data has supported policy and decision makers in planning and executing containment strategies. Another relevant field that became more visible during the pandemic concerns public health education. Digital platforms of health authorities and national agencies played a critical role in rapidly engaging and educating the population through prompt dissemination of trusted and tailored public health information, while limiting the visibility of information from unreliable sources [ 5 ].
As with other health technologies, DiPH interventions need to be developed through an iterative process, considering a multitude of factors right from the beginning of the conceptualization process. However, these factors (eg, acceptability, usability, data security, and sustainability) are sometimes not well thought out during development, or not considered at all, often resulting in low-value interventions that are ineffective and burdensome and that reduce both quality and efficiency. In turn, the development of DiPH interventions is often triggered by technological advancements (ie, what is possible) rather than current public health needs [ 6 ].
Although vast numbers of new health apps are launched in app stores regularly, the number of downloads for many of these apps stays notoriously low [ 7 ]. Individual decisions around the initial use, adoption, rejection, and continued use of an app might be influenced by concerns regarding data security and data protection, the cost of purchasing the app, or its user-friendliness for different user groups [ 8 , 9 ]. Other societal aspects, such as sustainable financing and regulatory requirements, are described as challenges to fulfilling public health functions. Thus, these aspects may influence the design of a DiPH intervention and need to be considered from the beginning of the development process [ 10 ].
During the development and evaluation process, a number of different stakeholders assess the potential impact of DiPH interventions (eg, tech companies, health insurers, governments, and health organizations). As such, for each DiPH intervention, a great variety of potential users and user environments must be considered. To systematically develop and evaluate DiPH interventions, a comprehensive framework is needed that systematically covers all aspects with relevance for public health. This includes considering the complexity of the technology, the context in which the technology is supposed to operate, its implementation, and its effects on public health (eg, ethical, legal, and social aspects). Such a comprehensive framework would cover all phases, from conceptualization to evaluation, of all types of DiPH interventions and all parties [ 11 ].
Interventions are often developed without a systematic method and without drawing on evidence and theory. This point was made by Martin Eccles, Emeritus Professor of Clinical Effectiveness in the United Kingdom, in referring to a frequently used principle of intervention design, the ISLAGIATT (It Seemed Like A Good Idea At The Time!) principle. In other words, we jump straight to the intervention and crucially miss out on understanding the behaviors we are trying to change, or fail to consider contextual facilitators of, and barriers to, successful implementation. Frameworks that integrate a wide range of domains allow us to think ahead and help us avoid potential pitfalls before they occur, so that we can design appropriate interventions based on this analysis [ 12 ].
Although frameworks for digital health interventions, health technologies, and public health interventions have been developed previously, to the best of our knowledge, no framework for the systematic development and assessment of digital interventions for public health purposes exists today. Assessment criteria for health-related technologies have been developed previously, although their focus generally lies on either health technology [ 13 , 14 ] or digital health [ 15 ] aspects.
One prominent example of assessing various health technologies is health technology assessment (HTA). “HTA is a multidisciplinary process that uses explicit methods to determine the value of a health technology at different points in its lifecycle. The purpose is to inform decision-making in order to promote an equitable, efficient, and high-quality health system” [ 16 ]. Based on this methodology, various organizations have developed frameworks with different foci [ 13 , 14 , 17 ]. For instance, the European Network for Health Technology Assessment (EUnetHTA) developed the health technology core model for assessing the dimensions of value to facilitate the production and sharing of HTA information, such as evidence on efficacy, effectiveness, and patient aspects, to inform decisions. The model has a broad scope and offers a common ground to various stakeholders by providing a standard structure and a transparent set of proposed HTA questions [ 13 ]. HTA frameworks are generally applied to already developed technologies rather than providing standards for evaluation aspects to be considered throughout development. Considering evaluation aspects during development is important, however, because an already developed intervention would likely be outdated by the time its assessment is finished.
Assessment frameworks specifically designed for the evaluation of digital health technology also exist. The National Institute for Health and Care Excellence (NICE) recently developed an Evidence Standards Framework (ESF) for digital health technologies [ 15 ], aiming to provide standards for clinical evidence of (novel) health technologies’ (cost-)effectiveness within the UK health and care system. Similar to other frameworks [ 18 - 21 ], the ESF lacks applicability to public health technologies due to its focus on clinical outcomes. Other frameworks focus on evaluation and assessment criteria along the life cycle of digital health interventions but still lack a public health focus [ 22 ].
Digital interventions heavily rely on user interaction and engagement. However, public health frameworks generally do not include specific measures to assess usability, user experience, and the design aspects crucial for promoting sustained user engagement [ 23 ]. Furthermore, DiPH interventions often require integration into existing health care systems, which can be complex and fraught with interoperability, data security, and data protection challenges—issues that are often not properly addressed in public health frameworks [ 24 ]. Although these are just a few examples, they illustrate how unique aspects of DiPH interventions may fall short in existing public health frameworks.
Together, we identified the following gaps:
- No existing framework supports the systematic development and assessment of digital interventions designed specifically for public health purposes.
- HTA frameworks are generally applied to already developed technologies rather than throughout their development.
- Frameworks for digital health technologies, such as the ESF, focus on clinical outcomes and are therefore of limited applicability to public health technologies.
- Public health frameworks generally do not assess usability, user experience, and design aspects crucial for sustained user engagement, nor the interoperability, data security, and data protection challenges of integration into existing health care systems.
Addressing these gaps requires the development of a comprehensive framework specifically tailored for digital interventions in public health, integrating diverse domains and considering usability, user experience, and integration challenges throughout the development and assessment process so that developers and assessors need not draw on multiple frameworks. The main focus of this paper was to present the current form of the digital public health framework (DigiPHrame) and describe its development process, followed by a use case to illustrate its application. More detailed information on the scoping review that served as a starting point to develop DigiPHrame can be found in the protocol that we preregistered on the Open Science Framework (OSF) [ 25 ]. The German contact-tracing app Corona-Warn-App (CWA), as a digital public warning system with a clear public health focus, was deemed as a suitable use case to illustrate the application of DigiPHrame.
We developed DigiPHrame in several steps, as shown in Figure 1 . First, we conducted a scoping review to identify existing frameworks for public health and digital health interventions (see the protocol and registration on the OSF [ 26 ]). See Table 1 for the eligibility criteria. As information sources, we searched journal papers in the electronic literature databases MEDLINE (via PubMed), Scopus, IEEE, CINAHL (via EBSCO), and PsycINFO (via Ovid). Our search strategy was first developed around our core concepts as primary search keywords combined with Boolean operators: (“Public Health”[Title/Abstract] OR “Digital Health”[Title/Abstract]) AND Evaluation[Title] AND Framework[Title]. The search syntax was then expanded to include synonyms, wildcards, and relevant subject terms of the primary keywords to increase the sensitivity of our searches. Next, we modified the subject terms and search fields of the syntax to adapt it to each database (see Multimedia Appendix 1 ). We also manually searched the reference lists of relevant reviews. The final search was completed on April 12, 2022, with no publication date limitations.
| Criterion | Inclusion | Exclusion |
| --- | --- | --- |
| Framework | Development or evaluation framework for health interventions related to public health or digital health | No framework or guidance in the report |
| Report | Framework or guidance outlining the standards, principles, criteria, or properties needed to support the systematic development or evaluation of health interventions aimed at health promotion or prevention with or without digital technologies | Framework or guidance not focusing on developing, monitoring, validating, or evaluating health interventions; not providing specific standards, principles, criteria, or properties; only designed for 1 specific tool and not applicable to other health interventions; only applicable to pharmaceutical/surgical/clinical/rehabilitation interventions |
| Publication type | Journal papers; study/policy/program reported in gray literature | Comment, correction, letter, editorial, protocol, oral presentation, poster |
| Language | English | Other language than English |
| Access to full text | Access to full text of studies selected for data coding | No access to full text |
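As a sketch of how the core search string described above can be assembled programmatically (straight quotes are used here; the exact syntax and spacing used by the authors in each database may differ):

```python
# Assemble the core PubMed-style query from the concept blocks.
# The field tags [Title/Abstract] and [Title] follow PubMed conventions.
concepts = ['"Public Health"[Title/Abstract]', '"Digital Health"[Title/Abstract]']

query = "({}) AND {} AND {}".format(
    " OR ".join(concepts),   # synonyms within a concept are OR-ed
    "Evaluation[Title]",     # concepts are AND-ed across blocks
    "Framework[Title]",
)
print(query)
```

Keeping each concept as a list makes it straightforward to add synonyms, wildcards, and subject terms later, as the authors did when adapting the syntax to each database.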
After deduplication, 4830 titles and abstracts were screened by 2 researchers independently, resulting in 433 (9%) full texts, which were then assessed by 2 independent researchers. Disagreements between researchers were resolved through dialogue, with the involvement of a third party, if necessary, although a definitive agreement score was not established. In total, 68 (15.7%, see Multimedia Appendix 4 ) papers were included for data extraction (see the Preferred Reporting Items for Systematic reviews and Meta-Analyses [PRISMA] flowchart in Multimedia Appendix 2 [ 27 ]).
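The reported screening proportions can be checked with a line of arithmetic:

```python
# Screening figures reported above: records screened after deduplication,
# full texts assessed, and papers included for data extraction.
screened, fulltext, included = 4830, 433, 68

fulltext_rate = fulltext / screened * 100   # share of screened records read in full
inclusion_rate = included / fulltext * 100  # share of full texts included

print(round(fulltext_rate), round(inclusion_rate, 1))  # matches the reported 9% and 15.7%
```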
We extracted all pertinent assessment criteria from the frameworks identified through the scoping review. Initially, these criteria were assigned to HTA domains/subdomains [ 13 ], although several criteria could not be assigned due to thematic misfit (akin to deductive coding). Subsequently, new categories were formed (akin to inductive coding). One researcher performed the coding initially, followed by a collaborative examination of the coded sections by 2 researchers, leading to adjustments during the discussion process (eg, reassignment to other domains, reassignment to other subdomains within a domain, summarization of subdomains, and deletion of irrelevant domains or subdomains). Questions describing the subdomains were devised by us based on the criteria (here, too, a proposal was made by one person, followed by verification by a second person). We consulted additional literature for the categorization of ethical principles [ 28 ].
A group of multidisciplinary experts from the Leibniz ScienceCampus Digital Public Health (LSC DiPH) was assigned to the domains corresponding to their expertise to provide advice. Each domain was restructured based on their expert input to represent the complexity of DiPH. Where necessary, additional literature was consulted, especially when the included frameworks fell short of offering criteria specific to DiPH.
A first draft of the proposed framework was sent to an interdisciplinary expert panel consisting of 105 members of the LSC DiPH. Feedback was gathered as unrestricted comments on the domains we developed. We reached out to experts from diverse fields, including medicine, public health, global health, psychology, sociology, human-computer interaction, (health) economics, informatics, sports science, medical biometry, architecture, urban planning, statistics, ethics, policy analytics, and law, assigning them domains based on their respective expertise. A deadline for feedback submission was set for July 18, 2022. Additionally, the same members of the LSC DiPH were invited to partake in a consensus meeting held on July 19, 2022. Participants were grouped into domain-specific discussions according to their areas of expertise, with these discussions being moderated by the DigiPHrame team. This resulted in the first version of the framework [ 29 ].
Shortly after the onset of the COVID-19 pandemic in 2020, numerous digital contact-tracing apps were developed or proposed, with official government support in some territories and jurisdictions. The rationale was that contact tracing is an important tool in infectious disease control, but as the number of cases rises, time constraints make it more challenging to trace transmissions effectively. Digital contact tracing, especially if widely deployed, may be more effective than traditional methods of contact tracing [ 30 ].
COVID-19 apps include mobile apps for digital contact tracing (ie, identifying persons, or “contacts,” who may have been in contact with an infected individual) deployed during the COVID-19 pandemic. Privacy concerns have been raised, especially about systems tracking users’ geographical location. Alternatives include co-opting Bluetooth signals to log a user’s proximity to other smartphones. For example, the open source CWA funded by the German government was based on proximity tracing using Bluetooth signals. The app provides a function for users to warn other users by uploading their positive test results anonymously on a voluntary basis to the CWA server. Users are then notified about any contacts with infected persons and can get tested on a voluntary basis.
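A highly simplified sketch of the decentralized matching idea described above (illustrative only; the real CWA builds on the Apple/Google Exposure Notification framework, which derives rotating identifiers cryptographically from daily keys and distributes keys rather than raw identifiers):

```python
import secrets

def new_ephemeral_id():
    # Each phone periodically broadcasts a fresh random identifier over
    # Bluetooth, so observers cannot link broadcasts to a person or place.
    return secrets.token_hex(16)

class Phone:
    def __init__(self):
        self.broadcast_ids = []    # identifiers this phone has sent out
        self.observed_ids = set()  # identifiers heard from nearby phones

    def broadcast(self):
        eid = new_ephemeral_id()
        self.broadcast_ids.append(eid)
        return eid

    def observe(self, eid):
        self.observed_ids.add(eid)

    def check_exposure(self, published_ids):
        # Matching happens on the device; the server only distributes
        # identifiers uploaded voluntarily after a positive test.
        return bool(self.observed_ids & set(published_ids))

alice, bob, carol = Phone(), Phone(), Phone()
bob.observe(alice.broadcast())   # Bob was near Alice
carol.observe(bob.broadcast())   # Carol was near Bob, not Alice

# Alice tests positive and anonymously uploads her broadcast identifiers.
server_published = list(alice.broadcast_ids)

print(bob.check_exposure(server_published))    # Bob is warned
print(carol.check_exposure(server_published))  # Carol is not
```

The design choice worth noting is that exposure matching is computed locally on each device against a published list, which is what allows warning contacts without the server ever learning who met whom.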
The same experts were invited to a workshop on February 23, 2023, where the proposed framework was applied to a case study, the CWA, and tested for face validity. Following several revision meetings of the framework team between February and May 2023, the second version of the proposed framework was finalized in May 2023 [ 31 ].
Ethical approval is not applicable for this study since it did not involve human subjects.
DigiPHrame comprises a set of criteria framed as open-ended questions clustered within domains that lead interested parties through a broad spectrum of crucial elements when developing and evaluating DiPH interventions. The evolution of domains and subdomains through the stepwise process, including the number of questions per subdomain in each version, can be found in Multimedia Appendix 3 . The framework in its current form was uploaded to the LSC DiPH website and the OSF [ 32 ] in May 2023 and is a revised version of the original framework that was first published in July 2022. In total, DigiPHrame consists of 182 questions, structured by 12 domains ( Figure 2 ).
Domain 1 describes the current status of health needs and existing interventions; domains 2 and 3 address the DiPH technology under assessment and aspects of human-computer interaction, respectively; domains 4 and 5 address structural and process aspects, respectively; and the assessment criteria in domains 6-12 address contextual conditions and the outcomes of the DiPH intervention from broad perspectives.
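For orientation, the 12-domain structure summarized above can be captured as a small lookup table. The titles below are paraphrased from the text and are not the official domain wording:

```python
# Paraphrased domain themes of DigiPHrame (not the official domain titles).
DOMAINS = {
    1: "health needs and existing interventions",
    2: "technical aspects of the DiPH technology",
    3: "human-computer interaction and usability",
    4: "structural aspects and stakeholders",
    5: "implementation and process aspects",
    6: "intended and unintended health-related effects",
    7: "societal, cultural, and intersectional dimensions",
    8: "ethical considerations",
    9: "legal aspects",
    10: "data protection and data security",
    11: "economic aspects",
    12: "environmental, social, and economic sustainability",
}

# DigiPHrame structures its 182 questions into these 12 domains.
assert len(DOMAINS) == 12
```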
Next, we defined the domains and illustrated how DigiPHrame can be applied using the CWA as a use case. The CWA is a digital public warning system that was designed and developed during the COVID-19 pandemic and has a clear public health focus. We briefly outlined the purpose and characteristics of the CWA. From each domain, we applied 1 assessment question as an example.
Domain 1 involves background information for DiPH interventions, describing the population, conditions, and observance of health inequities. Furthermore, this domain addresses current public health interventions and common alternatives.
Question 1.5 asks, “What is the expected level of digital literacy of the target population?” In the case of the CWA, the target population comprises the entire population within a geographically delimited space. Therefore, the entire spectrum of digital literacy is to be expected. Thus, different forms of representation (eg, graphics, text, and sound) of risk exposure and other information related to the COVID-19 pandemic should be available; this was not the case, which might have prevented some people from using the app.
Domain 2 guides one through assessing general technical aspects of the health technology of interest. The questions focus on what digital tools are applied and how aspects such as interoperability, data integration, internet connectivity, and others are integrated.
Question 2.17 is, “Does the software require an internet connection (eg, all the time, once in a while, or once)?” In the case of the CWA, an internet connection is necessary, as the major functionality of warning people is distributed via the internet. Only a fraction of the available functions, such as the contact diary, work completely without an internet connection. Generally, the app does not need a continuous internet connection. However, the device on which the app is installed needs to be connected to the internet, ideally multiple times a day but at least once a day, to sync the contacts and update test results.
Domain 3 focuses on how usable the health technology system is in order to ensure that its users can perform the required tasks (ie, the intended function) safely, effectively, efficiently, and with satisfaction. Therefore, accessibility, user empowerment, credibility, and trustworthiness are also considered in this domain.
Question 3.3 asks, “Are the health technology and DiPH intervention available in relevant languages?” When the CWA was first launched, it was available only in German and English. Russian, 1 of the most spoken immigrant languages in Germany, was not provided in the CWA until much later versions. Since version 2.20.0 for iOS and version 2.20.4 for Android, the CWA is available in German, English, Turkish, Bulgarian, Polish, Romanian, and Ukrainian.
Domain 4 on structural aspects considers the structure of the context in which a DiPH intervention is developed and implemented, as well as the involved stakeholders. Question 4.4 asks, “Is the DiPH intervention flexible to suit local, cultural, or social needs?” Initially, the German government promoted centralized storage of user data, which, according to the Federal Ministry of Health, would allow it to better track the spread of infections. However, this led to resistance from digital experts and data protectionists. As a consequence, the CWA was developed, with decentralized data collection across various servers. This approach ensured that the data could be decoupled, thereby hindering any potential tracing of app users.
Domain 5 describes aspects to consider before and during integration of a DiPH intervention into the health care system to ensure that the intervention is delivered properly. The domain focuses on the theory used for implementing the DiPH intervention, infrastructure, process, and agents, as well as implementation outcomes and dissemination.
Question 5.9 asks, “Which implementation difficulties (eg, duration, scope, disruptivity, centrality, complexity, and the number of steps required) did the DiPH intervention encounter?” In the case of the CWA, necessary features (eg, sharing test results and embedding vaccination certificates) were not available when the app was first launched in June 2020 but had to be continuously added to the app.
Domain 6: intended and unintended health-related effects.
Domain 6 considers the positive and negative effects of a DiPH intervention on physical, mental, and social health; the quality of life and well-being; and the knowledge, beliefs, and behavior of individuals and the population in the short, intermediate, and long terms.
Question 6.2 asks, “To what extent is the DiPH intervention expected to impact the physical, mental, and social health of the individual and the population?” With its goal to prevent infections, the CWA was expected to positively affect individuals’ and, ultimately, population health. It is unclear how the large red warning sign displayed on users’ smartphones when a high-risk contact with an infected person occurs would affect their mental health. Although generally accepted, the CWA was not used by the majority of the population and was widely discussed in terms of data privacy concerns prior to the launch of the app. With some individuals using the CWA and some not (including, sometimes, strong opinions in favor or against the benefit of the app within a social circle), this may have affected an individual’s relationships and social health.
Domain 7 examines the societal, cultural, and intersectional dimensions pertinent to communities and groups of individuals, such as ethnic or demographic groups, people residing in the same neighborhood, those sharing common interests, or individuals with specific physical or mental conditions.
Question 7.5 asks, “Which factors in the society/community are relevant for DiPH intervention implementation?” In the case of the CWA, these factors include the availability of compatible smartphones (eg, older smartphones are not compatible); trust that data will be protected and not used for other purposes (eg, analog data from restaurant guests, not data from the app, were used to identify theft suspects); and the willingness to enter one’s data in the case of infection.
Domain 8 addresses the moral considerations that arise from the implementation of DiPH interventions. The categorization of ethical principles is based on the influential Principles of Biomedical Ethics by Beauchamp and Childress [ 28 ].
Question 8.20 asks, “Does the DiPH intervention discriminate against particular segments of the target population?” Although successive efforts to avoid discrimination were visible, it took a long time to offer the app in languages frequently spoken in Germany. People using phones with older operating systems were also excluded from using the app.
Domain 9 generates awareness about which areas of law must be considered when developing or evaluating DiPH interventions. It is not the purpose of the domain to pose every specific legal question that has to be answered in order to develop or evaluate DiPH interventions. Since laws differ from country to country, the domain helps detect fields of law and typical problems in those fields that could be relevant for developers and evaluators. The applicable law and its requirements depend on the country.
Question 9.6 asks, “Have you considered the potential reimbursement of DiPH interventions in a national health system (some countries may have specific requirements for reimbursement)?” This raises awareness about the requirements for reimbursement of the DiPH interventions in a national health system or for other payers. Regarding the CWA, the provider offered the intervention for free (without a reimbursement option) because the free-of-charge offer of the CWA promised a broader and quicker distribution of the app.
Domain 10 focuses on the technological protection of data and, therefore, combines the aspects of data confidentiality, data integrity, data authenticity, data availability, and data controllability. Data protection relates to whether the system is allowed to process personal data.
If personal data are transferred to third parties, question 10.25 asks, “Is there a legal basis for the transfer, and are the requirements of the legal basis fulfilled?” Regarding the case of the CWA, T-Systems International GmbH and SAP Deutschland SE & Co. KG are acting on the Robert Koch Institute’s behalf. The legal basis is a contract that is binding on the processor with regard to the controller and that sets out the subject matter and duration of the processing, the nature and purpose of the processing, the type of personal data and categories of data subjects, and the obligations and rights of the controller (Art. 28(3) of the General Data Protection Regulation [GDPR]). Otherwise, the Robert Koch Institute only passes on data to third parties if it is legally obliged to do so or if this is necessary for legal action or criminal prosecution in the case of attacks on the app’s technical infrastructure.
Domain 11 assesses DiPH interventions regarding whether they can be considered a rational use of scarce resources. Question 11.1 asks, “Which relevant costs and effects can be identified?” Considering the costs and effects of a DiPH intervention from the beginning could help compare it with other interventions and show that it is economically dominant (ie, it is at least as effective as but costs less than the alternative interventions). Further, this information might be the basis for health economic evaluation (see questions 11.4-11.6) to see whether what it costs per health gain is considered acceptable by the payer. In the case of the CWA, there are various relevant costs of the intervention itself, such as development and operation (2020: €52.8, or US $57.5, million; 2021: €63.5, or US $69.1, million) and promotion (2020 and 2021: €13.7, or US $14.9, million) [ 16 ]. Taking a broader (societal) perspective, there might be further costs, such as costs of further testing when the CWA receives a warning and costs of unrelated survival gains or benefits, such as a reduction in the loss of earnings, reduction in hospitalizations and rehabilitation measures, and reduction in deaths [ 32 ]. However, to the best of our knowledge, the pandemic context and the decision process about the CWA led to a situation where a decision was made without formally considering cost-effectiveness in comparison with alternative decision options.
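Summing the CWA cost figures cited above (development and operation: €52.8 million in 2020 and €63.5 million in 2021; promotion: €13.7 million across 2020 and 2021) gives a total of roughly €130 million, as a quick arithmetic check:

```python
# CWA cost figures cited above, in millions of euros.
dev_ops = {"2020": 52.8, "2021": 63.5}
promotion_2020_2021 = 13.7

total_m_eur = sum(dev_ops.values()) + promotion_2020_2021
print(f"total 2020-2021 cost: ~€{total_m_eur:.1f} million")  # ~€130.0 million
```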
Domain 12 assesses environmental, social, and economic sustainability. Given the goal to reduce carbon emissions in health care, question 12.1 asks, “Which resources are necessary to develop and maintain the DiPH intervention?” In the case of the CWA, servers need to run, which produces carbon emissions, and computers need to be obtained to ensure compatibility of health offices with the CWA. Measuring and evaluating these resource consumptions also allows decision makers to consider more climate-friendly design alternatives for DiPH interventions.
In the use case of the CWA, we highlighted a number of questions relevant during the development of the CWA but also important for assessors once the CWA was available. For example, developers needed to consider how the data would be collected and shared without interfering with data privacy and data protection laws. Similarly, assessors needed to find ways of evaluating the effectiveness of the CWA (eg, did the CWA prevent infections?) without relevant data (due to decentralized data storage, data from different individuals could not be connected, and thus, only estimates could be determined). In future scenarios, DigiPHrame can serve as a checklist for both developers and assessors to help them avoid overlooking key issues with relevance to the performance of the intervention. Although for some questions, it might be enough to use common sense (in the case of the CWA, it could be questions surrounding the usability of the app), for others, specialist expertise may be necessary (eg, questions regarding legal and regulatory issues).
DigiPHrame is agile and primarily user led ( Textbox 1 ). We deliberately included the option of feedback loops in the framework to support the agile development process. Although it is advised to consider all domains and respective questions, developers may decide which domains are assessed at what stage of their development process and which questions are relevant for the respective DiPH intervention. For an intervention under development, a first orientation might be enough to understand whether it is worth continuing along the determined path or whether adjustments might be necessary. Developers may also decide to put specific questions on hold and revise them at a later stage in case any changes or additions need to be made to the DiPH intervention. Similarly, assessors may delay answering certain questions in case no robust evidence is available at the time to answer the questions.
Users of DigiPHrame are encouraged to first answer a list of questions regarding a general description of their digital public health (DiPH) intervention. Providing general characteristics will help assessors better understand the DiPH intervention under assessment. DigiPHrame is further equipped with a standardized answer scheme to help developers in answering the questions and, if necessary, plan the next steps in the development process. For assessors, the answer scheme can serve as a checklist to tick off all relevant criteria.
DigiPHrame users can respond to each question using the provided answer scheme. The first 2 assessment indicators are “not applicable” when the question is irrelevant to the particular DiPH intervention and “assessment result” to provide the answer or additional information to the assessor. The last 3 columns of the answer scheme focus on the current status of the DiPH intervention during the assessment. These columns include “Assessment completed and sufficient” when the assessment is finished and satisfactory, “Assessment done but improvement needed” when the assessment is complete but indicates the need for improvements or changes to the DiPH intervention, and “Assessment only partially done or not possible yet” when the assessment is incomplete or not feasible at the moment.
As an example of the answer scheme, consider the criterion “population” with the question “Who is the target population of the DiPH intervention?”, followed by the assessment indicator columns described above.
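A minimal sketch of how the answer scheme described above could be represented in code; the class and field names are our own, not part of DigiPHrame:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    """The three status columns of the DigiPHrame answer scheme."""
    COMPLETED_SUFFICIENT = "Assessment completed and sufficient"
    IMPROVEMENT_NEEDED = "Assessment done but improvement needed"
    PARTIAL_OR_NOT_YET = "Assessment only partially done or not possible yet"

@dataclass
class AnswerRecord:
    criterion: str
    question: str
    not_applicable: bool = False     # question irrelevant to this DiPH intervention
    assessment_result: str = ""      # answer or additional information for the assessor
    status: Optional[Status] = None  # current status of the assessment

# Example from the text: the "population" criterion, answered for the CWA.
record = AnswerRecord(
    criterion="population",
    question="Who is the target population of the DiPH intervention?",
    assessment_result="The entire population within a geographically delimited space.",
    status=Status.COMPLETED_SUFFICIENT,
)
print(record.status.value)  # prints "Assessment completed and sufficient"
```

For assessors, iterating over a list of such records and filtering on `status` would reproduce the checklist use described above.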
A unified framework for digital interventions with a public health focus.
Although health-related digital technologies hold great potential for enhancing public health and addressing health-related inequalities at a relatively low cost, new developments are often driven by technological advancements, and assessments primarily revolve around clinical aspects of health. To the best of our knowledge, no existing frameworks consider digital interventions specifically designed for public health purposes. Additionally, previous frameworks primarily emphasize clinical aspects when addressing digital health technologies, neglecting the public health perspective. As an example, although the ESF [ 15 ] emphasizes clinical outcomes, crucial for any intervention’s success, it omits essential aspects, such as sociocultural, ethical, legal, and sustainability factors, vital for effectively implementing DiPH interventions. DigiPHrame includes aspects regarding clinical outcomes (eg, domain 6: intended and unintended health-related effects), among others derived from the ESF, but also the above-mentioned factors. Moreover, although HTA frameworks [ 13 ] are often designed for evaluating existing technologies, our objective was to devise a comprehensive framework applicable across all stages of development and evaluation. DigiPHrame adopts a comprehensive public health perspective and can serve as a guide, specifically for developers and assessors throughout the entire development and assessment of DiPH interventions. DigiPHrame provides users with criteria concerning clinical effectiveness, technical functions, and usability, as well as organizational, legal, ethical, economic, and sociocultural aspects. Users have the flexibility to determine the relevant domains and assessment questions based on their specific needs and the stage of the process without relying on multiple development and assessment frameworks.
Furthermore, the users of DigiPHrame are encouraged to take a broader view and may be inspired to include other perspectives that were not initially within their scope (eg, sociocultural aspects, ethics, and sustainability).
Additionally, a deeper understanding of contextual factors is necessary to assess what will work in one country versus another. These factors can either enhance or hinder the adoption and diffusion of DiPH technologies. Although many frameworks tend to overemphasize technical aspects, it is essential to acknowledge that various other factors influence success or failure. These factors include disparities in health expenditure, demographic conditions, health infrastructure, information and communication technology (ICT) skill levels, digital health literacy, clinical and patient engagement, and many more. Recognizing and understanding these key differences within and across countries is crucial for policy makers and other stakeholders in public health and DiPH. Although our framework considers these factors, future work needs to apply DigiPHrame in diverse contexts and countries to validate and continuously update the current version of the framework. In addition, although our framework aims to be universally applicable to various DiPH technologies, it will require revision as new public health needs and DiPH technologies emerge. Therefore, our framework can serve as the foundation for a development and assessment toolkit that developers, decision makers, and other users alike can use.
As we illustrated with the German contact-tracing app CWA, which was launched during the first wave of the COVID-19 pandemic, DigiPHrame can be applied at all stages, including design, implementation, and evaluation. Applying the framework from the beginning may help avoid pitfalls that would otherwise surface later in the development process.
Our framework has several key strengths that set it apart. First, it is based on a comprehensive scoping review of digital health and public health frameworks (OSF [ 25 ]), ensuring a robust foundation. Additionally, we conducted scientific consensus meetings involving interdisciplinary experts, ensuring a breadth of perspectives in its development. Second, the assessment themes within our framework were derived from existing frameworks developed in various Western countries, including Germany, the United Kingdom, and the United States. This demonstrates the broad applicability of DigiPHrame across different geographical contexts, making it adaptable to diverse settings in high-income countries. Another strength of our framework is its universality. It is not limited to specific types of DiPH interventions and, therefore, can be applied to any digital intervention with the overarching aim of improving public health outcomes. This flexibility allows for its widespread application across a wide range of interventions. Furthermore, DigiPHrame is designed as a living framework that will evolve and adapt as technology advances. To do so, we will continue to revise the domains and questions and regularly test any changes for face validity using a variety of use cases. This will ensure that it remains relevant and up to date in the fast-paced DiPH landscape, accommodating emerging technologies and methodologies. Lastly, we incorporated input and expertise from various research fields throughout the entire development process of DigiPHrame. We fostered an interdisciplinary perspective by involving experts from different disciplines, including public health, epidemiology, psychology, philosophy, law, economics, human-computer interaction, and sociology, enriching the framework with diverse insights and knowledge.
Although our framework has several strengths, it is important to acknowledge certain limitations. First, going through the proposed framework might require significant time and expertise due to its complexity and depth. Nevertheless, it is flexible; it is up to the assessor to decide which domains and criteria are applicable to their specific case. This flexibility is advantageous, allowing the framework to be adapted to diverse contexts and DiPH interventions. However, it may also introduce subjectivity in the evaluation process, as different assessors may choose different domains and criteria, leading to varying outcomes. Ensuring transparency and consistency in domain selection could help mitigate this concern. Additionally, we intend to develop a condensed version of the framework focusing on the most critical domains and questions. Second, we engaged experts from diverse research fields to address potential inconsistencies during the development process. However, it is worth noting that the majority of our consultations did not extend to a broader geographical range, particularly in terms of incorporating specific aspects from low- and middle-income countries. It is crucial to recognize that contexts may differ significantly, including factors such as technology accessibility, digital health literacy, and legal requirements. Although DigiPHrame aims to be applicable across different geographical contexts, users of the framework are advised to consider and adhere to their local requirements and nuances. Furthermore, in our scoping review, we focused on primary prevention and health promotion but not on secondary and tertiary prevention (eg, rehabilitation). This could have limited the frameworks and criteria we found. Although, as per our definition, DiPH focuses on primary prevention and health promotion, future research may also include frameworks focused on secondary and tertiary prevention.
Lastly, we did not provide any evaluation methods along with the framework. As DigiPHrame evolves, however, our goal is to provide suitable existing methods and develop novel evaluation methods for DiPH interventions.
DigiPHrame is a comprehensive framework for the development and assessment of digital technologies designed for public health purposes. Our framework may assist in designing and evaluating DiPH interventions that serve public health needs rather than displaying technological advancements. Moreover, DigiPHrame may help avoid overlooking important aspects, such as acceptability, usability, data security, and sustainability, which would otherwise result in low-value interventions that are not user friendly, violate (data protection) laws, or are not sustainable. We aim to revise and improve DigiPHrame as new technologies emerge, and encourage developers and assessors to use and contribute to improving DigiPHrame.
The authors gratefully acknowledge the Leibniz ScienceCampus Bremen Digital Public Health support, jointly funded by the Leibniz Association (W4/2018), the Federal State of Bremen, and the Leibniz Institute for Prevention Research and Epidemiology (BIPS).
The authors would also like to thank Dorothee Jürgens, Sarah Janetzki, Sarah Forberger, and Jonathan Kolschen for their contributions to conducting the scoping review and data extraction for developing the first version of the framework.
The data collected and analyzed during this study are available from the corresponding author upon reasonable request.
TJ and AG conceived the concept of the manuscript. TJ drafted the first version of the manuscript. All authors contributed to the literature search and writing and editing of the manuscript. All authors have read and approved the final manuscript.
None declared.
Multimedia Appendix 1: Search syntax.
Multimedia Appendix 2: Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) flow diagram.
Multimedia Appendix 3: Evolution of domains and subdomains.
Multimedia Appendix 4: Included reports.
Keywords: Corona-Warn-App; digital public health; digital public health framework; Evidence Standards Framework; health technology assessment; Leibniz ScienceCampus Digital Public Health; Open Science Framework
Edited by A Mavragani; submitted 03.11.23; peer-reviewed by L Maaß, G Humphreys, BC Silenou, V Zander; comments to author 29.01.24; revised version received 27.03.24; accepted 27.06.24; published 12.09.24.
©Tina Jahnel, Chen-Chia Pan, Núria Pedros Barnils, Saskia Muellmann, Merle Freye, Hans-Henrik Dassow, Oliver Lange, Anke V Reinschluessel, Wolf Rogowski, Ansgar Gerhardus. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 12.09.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.