Getting started with Improving Together

Small changes often have the biggest impact in improving services. This programme provides you with free online improvement tools to help you make these small changes.

Improving Together will:

  • help you provide an even better service to people
  • assist you in finding new ways of working that save time and reduce stress
  • inspire you to motivate and lead others in improvement
  • empower you to find solutions to work problems you face every day
  • support you in your future career

Several quality improvement approaches are used globally, including the Model for Improvement, Lean and Six Sigma. The core skills presented in this programme's e-learning modules are based on the Model for Improvement. It is a simple yet powerful tool for accelerating improvement: it looks at setting aims, understanding the system, identifying changes, testing those changes and spreading improvements.

By taking part in Improving Together you will help make the New Zealand social sector a better place to work while delivering the best possible service for the people of New Zealand.

  • Enrol for Improving Together

    The Improving Together e-learning modules will help you take a new look at the work you do and find ways to make it even better. They are suitable for staff at every level.

    So, whatever you do and whatever level of responsibility you have, you will gain foundation-level knowledge of quality improvement through these e-learning modules.

    The Improving Together e-learning programme is made up of four e-learning modules (each about 30 minutes long).

    Enrol now

  • Understanding the Model for Improvement

    The Model for Improvement provides a framework to structure improvement efforts. The model is based on three key questions:

    • What are we trying to accomplish?
    • How will we know that a change is an improvement?
    • What changes can we make that will result in improvement?

    These questions are then used in conjunction with small-scale testing – the 'doing' component known as Plan, Do, Study, Act (PDSA) cycles.

    1. What are we trying to accomplish?

    Improvement requires effort, so it is important to direct our efforts to the right problem. The first thing we have to do is be clear about what we want to achieve. For example, is the aim to reduce errors, save time, reduce risk or improve the way in which we work?

    This sounds obvious, but is often hard to answer precisely. Without this clarity, it is impossible to decide what action to take or to know whether the outcome is an improvement. So the vital question is: “What outcome do we want?”

    2. How will we know that a change is an improvement?

    Once we are clear about the desired outcome, the next task is to choose a standard to measure the outcome against. Ideally, this measure will be simple and easy to use, but a perfect measure is often hard to find. We may need to accept that our measures are imperfect and that collecting the necessary information may be difficult.

    Use a measure which:

    • is well-defined
    • allows comparison between sites and over time
    • is already in use, if possible.

    Whether you use an existing measure or create a new one, be clear about how it is defined. An existing measure is likely to have been developed for a different purpose, so take time to understand how it was put together.

    Make sure that everyone involved in collecting information for new measures knows why they are doing it.

    Improvement work is not an experiment to prove the value of an action; it is about adopting and adapting practice based on evidence. For this reason, and also because it can take a long time for any change in outcome to be recognised, we should also have at least one process measure in place.

    3. What changes can we make that will result in improvement?

    It is essential to link outcome measures to ‘interventions’ – the systems and processes that will help us achieve the desired outcome. We will not make consistent progress towards improving outcomes by focusing on outcome measures alone.

    There are two parts to this question – 'What is wrong with the system now?' and 'What works?'

    What is wrong with the system now?

    The experience of our staff, the evidence of our own eyes and feedback from our service users will all help us identify where to concentrate our efforts. We need to consider the following:

    • What will deliver the biggest benefit? This often means addressing the things that are done most often, or the areas where the most waste occurs.
    • What do typical cases tell us about the system?
    • Are demand and need understood properly? How much demand is repeat work or work caused by another part of the service?
    • What is the high-value part of the system (the part that delivers real benefit)? Is it the same as the part which has the highest costs?
    • What can simplify the process?
    • How can we use the knowledge of service users and people in other parts of the process?

    What works?

    To find out what works we first need to gather evidence of how a good system should work. Don’t make this too difficult by going into too much detail. We use the evidence gathered to produce driver diagrams to summarise desired outcomes and how they can be achieved.

    Conclusion

    Answering the three questions raised by the Model for Improvement will help you see ways that you can begin to make positive changes.

  • Using the Plan, Do, Study, Act cycle

    Changes are introduced using the PDSA cycle – Plan, Do, Study, Act – so that small tests of change can be carried out in an actual workplace setting. In improvement work, we have learnt that the most reliable way to try something new is to start small – one person, one setting, one service provider.

    Even if something has been shown to work in other settings, take the time to do a small-scale trial. There are almost no solutions that work in all situations. Testing allows us to adapt actions to particular settings. To test a new procedure or technique, we need to ‘plan, do, study and act’ as explained below.

    PDSA cycle - plan

    Plan what you are going to do differently – ‘who, what, where and when’.

    PDSA cycle - do

    Carry out the plan and collect information on what worked well and what issues need tackling.

    PDSA cycle - study

    Gather relevant team members as soon as possible after the test for a short informal meeting. Analyse the information gathered and review the aim of the new procedure or technique against what actually happened.

    PDSA cycle - act

    Use this new knowledge to plan the next test. Agree the changes and amend the outcome measures if necessary. Continue testing with the PDSA cycle to refine the new procedure or technique until it is ready to be fully introduced – and do it quickly (think in days, not weeks). When the change is 90 to 95% reliable, adopt it, share it with others and spread it to similar work areas.

    But don’t assume that a change can simply be ‘rolled out’ once it has been successfully tested. The introduction needs to be managed at every stage. There is no hard and fast rule for how quickly to introduce the change. Once it has been introduced in a new area, test the change again.
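    To make the testing loop concrete, here is a minimal sketch in Python of how a team might record PDSA cycles and check the 90 to 95% reliability threshold mentioned above. The cycle descriptions and numbers are hypothetical – this is just one way to keep track, not part of the programme itself.

    ```python
    from dataclasses import dataclass

    @dataclass
    class PDSACycle:
        """One small test of change: what was planned and what happened."""
        plan: str        # who, what, where and when
        attempts: int    # times the new procedure was tried in this cycle
        successes: int   # times it worked as intended

    def reliability(cycle: PDSACycle) -> float:
        """Share of attempts in which the change worked as intended."""
        return cycle.successes / cycle.attempts

    # Three quick cycles, refined between tests (days, not weeks).
    cycles = [
        PDSACycle("One person, one setting, first week", 10, 6),
        PDSACycle("Same setting, revised checklist", 12, 10),
        PDSACycle("Two people, same setting", 20, 19),
    ]

    for number, cycle in enumerate(cycles, start=1):
        r = reliability(cycle)
        status = "ready to adopt and spread" if r >= 0.90 else "keep testing"
        print(f"Cycle {number}: {r:.0%} reliable - {status}")
    ```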

    To see the PDSA cycle in practice, watch the Early Learning South Auckland case study video on the improvement stories page.

  • Driver diagrams

    The first step to producing a driver diagram is to gather evidence of what works. The best evidence is published accounts of controlled experiments or, better still, systematic reviews of several publications. If that evidence is not available, professional guidelines, national service frameworks and evidence of good practice may be useful, but we need to be aware of their limitations.

    Below is an example of a driver diagram:

    [Image: driver diagram]

    When producing driver diagrams, there are some basic rules to follow:

    • The first column – ‘Aim’ – shows the desired outcome of the service (the simpler the better).
    • The second column – ‘Drivers’ – shows the factors that affect the outcome.
    • The third column – ‘Interventions’ – shows the actions that have been shown to make a difference and bring about improvements.

    Your project improvement team will need to agree on the driver diagram. It should be brief and simple and contain only evidence-based and important interventions.
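    For teams working digitally, the three-column structure translates naturally into a simple data structure. The sketch below, in Python, uses a hypothetical aim, drivers and interventions purely for illustration:

    ```python
    # A driver diagram as a nested mapping: one aim, with each driver
    # linked to the evidence-based interventions that support it.
    driver_diagram = {
        "aim": "Reduce application processing time",  # hypothetical aim
        "drivers": {
            "Complete applications at first contact": [
                "Checklist of required documents",
                "Pre-submission phone check",
            ],
            "Less repeat work": [
                "Single point of contact per case",
            ],
        },
    }

    print(f"Aim: {driver_diagram['aim']}")
    for driver, interventions in driver_diagram["drivers"].items():
        print(f"  Driver: {driver}")
        for intervention in interventions:
            print(f"    Intervention: {intervention}")
    ```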

  • Measuring improvement

    Improvement cannot happen without measurement.

    • We cannot try a solution until we understand the problem.
    • We cannot test a solution unless we are measuring its effect.

    There is no substitute for looking at the system personally, seeing where any measurements come from and how they are made. Run charts are a simple way to analyse information, and a statistical process control chart will help you understand any variation in the process you want to improve. 'Plotting the dots' is very effective because it helps us spot trends and patterns.

    Frequent measurement – often weekly – is a major difference between measurement for improvement and more traditional forms of measurement.

    Traditionally, figures are smoothed to reveal 'the real underlying trend' by taking an average over the period. The problem comes when the previous average is compared with the current one to see whether there has been an improvement. One of the two numbers will almost always be bigger than the other, so even a stable process has roughly a 50 percent chance of appearing better (or worse) purely by chance.
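    A quick simulation illustrates the point. The sketch below (Python, with made-up numbers) measures a stable process – one where nothing has actually changed – and compares the averages of two successive 12-week periods; the second period 'improves' about half the time:

    ```python
    import random

    random.seed(1)

    # A stable process: weekly values drawn from the same distribution,
    # so any difference between periods is pure chance.
    trials = 10_000
    second_period_looks_better = 0
    for _ in range(trials):
        before = [random.gauss(100, 10) for _ in range(12)]
        after = [random.gauss(100, 10) for _ in range(12)]
        if sum(after) / 12 > sum(before) / 12:
            second_period_looks_better += 1

    print(f"'Improved' by chance alone: {second_period_looks_better / trials:.0%}")
    # Prints roughly 50% - two averages alone cannot distinguish genuine
    # improvement from random variation.
    ```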

    In contrast, run charts and statistical process control charts have rules which provide confidence that when a change has been spotted, it reflects a genuine improvement.
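    As an illustration of 'plotting the dots', here is a minimal run-chart sketch in Python using matplotlib and hypothetical weekly data. It applies one commonly cited run-chart rule – six or more consecutive points on the same side of the median suggest a genuine shift rather than random variation:

    ```python
    import statistics
    import matplotlib.pyplot as plt

    # Hypothetical weekly measure, e.g. average days to process a case.
    weeks = list(range(1, 17))
    values = [14, 15, 13, 16, 14, 15, 13, 14, 11, 10, 11, 9, 10, 11, 10, 9]

    median = statistics.median(values)

    def has_shift(data, centre, run_length=6):
        """Detect a run of points on one side of the median.
        Points exactly on the median neither extend nor break a run."""
        run, last_side = 0, 0
        for value in data:
            side = (value > centre) - (value < centre)  # +1 above, -1 below
            if side != 0 and side == last_side:
                run += 1
            elif side != 0:
                run, last_side = 1, side
            if run >= run_length:
                return True
        return False

    print(f"Median: {median}, shift detected: {has_shift(values, median)}")

    plt.plot(weeks, values, marker="o")   # plot the dots
    plt.axhline(median, linestyle="--")   # median reference line
    plt.xlabel("Week")
    plt.ylabel("Days to process")
    plt.title("Run chart")
    plt.show()
    ```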

    The videos 'Measuring for improvement' and '7 steps for measurement for your project' provide more information. 

  • Generating and spreading ideas

    Quality improvement at all levels means encouraging, testing and spreading ideas about alternative ways of doing things. Those ideas need to be good enough to form the basis of new working systems.

    Teams undertaking improvements together should meet regularly to generate new ideas through:

    • brainstorming exercises
    • adapting strategies from other sectors or industries
    • adapting ‘best practices’ from other services or conferences
    • identifying trends by analysing service user stories/complaints
    • visiting the sites of other services.

    Successful sites regularly involve new people in these meetings, which helps keep the group open to new views. New members of the group often generate some of the best ideas.

    Asking frontline staff about the biggest challenges they face each day, then looking for ways to tackle them, quickly involves staff in finding solutions for issues they are most concerned about changing.

    Encouraging and spreading ideas about alternative ways of doing things offers a new and different way to improve our services. It is a proven way for frontline staff to help drive improvement rather than having to accept other people’s ideas.

  • Successfully introducing change

    Achieving change requires consistently embedding a range of improvement initiatives in the daily work of the organisation. Driver diagrams are an excellent way of demonstrating how local actions align with organisation-wide priorities, so they should be developed and used at all levels.

    A second essential component of successfully introducing change is clear accountability – all the way from the frontline team to the most senior level of the organisation. The role of executive lead for an improvement initiative is not a passive ‘figurehead’ role. It requires positive action to support, challenge, allocate appropriate resources and overcome barriers to change.

    The third essential component is a commitment to developing staff at all levels in the skills needed to lead and deliver improvement initiatives.

Last updated 19/09/2018