To comment on the full breadth of available evidence, this review could not apply strict exclusion criteria; however, study designs with greater scientific credibility, such as randomised controlled trials, received higher rating scores, while studies that were more open to bias or based solely on expert opinion received lower scores. One of the major obstacles was identifying appropriate literature, research and reviews relating to the principal area of core stability. To identify a larger volume of relevant papers, it was necessary to extend the keyword search to include other terms that are often used synonymously or in association with core stability: trunk stability, spinal stability, core control, abdominal muscles, back pain and exercise, injury prevention and exercise screening.
An electronic database search included Medline (1966-2005), Athens Data Base, the Cochrane Controlled Trials Register, EMBASE (from 1988), Sport Discus (1975-2004), the Cochrane Musculoskeletal Group's Trials Register, PEDro, BioMed Central, the British Library catalogue and peer-reviewed internet discussion pages, together with a hand/manual search of current contents and bibliographic data. The date of the last search of these databases was April 2005.
Studies or other bodies of work were selected if their content related to an aspect of the fundamental research questions. Key research questions can be seen in Table 1. These questions were considered to cover the key elements of the core stability concept. Question 1 was the foundation question, as it investigated whether anatomical, biomechanical and physiological evidence existed to support the core stability concept. If evidence for this question was weak, it would undermine the credibility of the whole concept. Five other questions were considered, covering objective measures, training techniques and the relationship between core stability and injury, performance and lumbopelvic dysfunction. Studies were selected if they helped to answer, or related to a component of, a key question. For example, a study on the influence of the multifidus muscle on spinal stability would be considered for Question 1, as it related to the anatomical principles behind the core stability concept.
Key Research Questions
The concept of core stability was considered in its entirety, and key elements were then extracted for closer scrutiny to determine how sound the evidence base behind accepted concepts was. An attempt was made to investigate the key foundations of the concept, firstly by examining the anatomical, biomechanical and physiological principles to determine whether the concept had physical credibility. For core stability to have genuine impact as a concept, it would need valid, reproducible and objective measures. These would be necessary for the initial assessment of patients or athletes, which could potentially guide treatment or training plans. The reported impact that core stability can have on performance, injury rates and spinal dysfunction was also considered. Much time is invested by clinicians, athletes, patients, sports trainers and coaches in teaching or performing core stability programmes, and the evidence behind these principles was analysed because much emphasis has been placed on the importance of core stability training. Finally, as core stability training programmes have been the subject of much publicity and of varied recommendations on appropriate exercise type, the variety of existing principles and programmes was investigated and the evidence behind them scrutinised.
Table 1. Key Research Questions
What anatomical, biomechanical and physiological evidence exists that supports the concept of core stability?
Is there any valid objective measure for core stability? Can any components of core stability be measured to provide a useful clinical measure?
Does evidence suggest that core stability can influence human performance?
Is there any correlation between core stability and peripheral injury rates?
Does core stability training decrease the recurrence or extent of lumbo-pelvic dysfunction?
Is there any evidence to support specific exercise programmes to enhance core stability?
The Process of Evaluation
All papers were analysed according to the guidelines established by the Scottish Intercollegiate Guidelines Network (SIGN). Consideration was given to a number of systems for grading the quality of evidence and developing subsequent guidelines. Atkins et al. (2004) appraised six prominent systems for grading levels of evidence and the strength of subsequent recommendations. They concluded that all the currently used approaches for grading levels of evidence and the strength of recommendations have important shortcomings. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) Working Group (2004) stated that clinical guidelines were only as good as the evidence and judgements they were based on. Schünemann et al. (2003) acknowledged on behalf of the GRADE Working Group that there was little or no evidence of how well various grading systems were understood by clinicians. It is ironic that the systems used to identify evidence-based practices lack evidence of their own efficacy (Upshur, 2003). Some of this is due to the use of different letters, numbers, symbols and words to communicate grades of evidence and recommendations (Schünemann et al., 2003). There has been a movement for the different grading systems to use consistent nomenclature to aid universal understanding of the guidelines (Schünemann et al., 2003).
In a study investigating the use of guidelines associated with the Cochrane Collaboration, it was found that guideline-driven care can be effective in influencing the outcome of care (Thomas et al., 1999). In reaching a decision on which grading system to use, SIGN 50 (2004) was chosen because it had been recently reviewed and had considered the strengths and weaknesses of guidelines developed by the US Agency for Healthcare Research and Quality, the Cochrane Methods Working Group and the New South Wales Department of Health (Harbour and Miller, 2001). The SIGN 50 (2004) system provided a clear guideline that explicitly linked recommendations to the strength of supporting evidence (Harbour and Miller, 2001).
Following the literature search each study or piece of work was analysed according to an appropriate checklist recommended by the SIGN guidelines (SIGN 50, 2004) (see Appendix A). Each study was analysed according to its suitability to provide evidence to key questions. Factors that could potentially introduce heterogeneity into study findings such as type of study design, quality of studies or variations in population were considered when grading each paper with a score for level of evidence. The guidelines used when scoring levels of evidence can be seen in Table 2. The validity of the research was assessed by analysing the methodology of each study design according to a consistent checklist of questions. Each type of study design had a specific checklist of questions customised to investigate any influences on validity and quality. The checklists compiled by SIGN have been widely investigated by other research bodies such as MERGE (Method for Evaluating Research and Guideline Evidence) in conjunction with the New South Wales Department of Health, Australia (Harbour and Miller, 2001).
Table 2. SIGN grading system for levels of evidence
|Level|Levels of evidence|
|---|---|
|1++|High quality meta-analyses, systematic reviews of RCTs, or RCTs with a very low risk of bias|
|1+|Well conducted meta-analyses, systematic reviews of RCTs, or RCTs with a low risk of bias|
|1-|Meta-analyses, systematic reviews of RCTs, or RCTs with a high risk of bias|
|2++|High quality systematic reviews of case-control or cohort studies, or high quality case-control or cohort studies with a very low risk of confounding, bias, or chance and a high probability that the relationship is causal|
|2+|Well conducted case-control or cohort studies with a low risk of confounding, bias, or chance and a moderate probability that the relationship is causal|
|2-|Case-control or cohort studies with a high risk of confounding, bias, or chance and a significant risk that the relationship is not causal|
|3|Non-analytic studies, e.g. case reports, case series|
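The grading scheme in Table 2 is essentially a fixed classification that maps an evidence level to its defining criteria. As an illustrative sketch only (the structure and function names below are hypothetical, not part of SIGN), it can be represented as a simple lookup:

```python
# Hypothetical sketch: the SIGN levels of evidence from Table 2 encoded
# as a lookup table, so an assigned grade can be validated and its
# description retrieved. Descriptions are abridged from Table 2.
SIGN_LEVELS = {
    "1++": "High quality meta-analyses, systematic reviews of RCTs, "
           "or RCTs with a very low risk of bias",
    "1+":  "Well conducted meta-analyses, systematic reviews of RCTs, "
           "or RCTs with a low risk of bias",
    "1-":  "Meta-analyses, systematic reviews of RCTs, or RCTs with a "
           "high risk of bias",
    "2++": "High quality systematic reviews of case-control or cohort "
           "studies, or such studies with a very low risk of "
           "confounding, bias, or chance",
    "2+":  "Well conducted case-control or cohort studies with a low "
           "risk of confounding, bias, or chance",
    "2-":  "Case-control or cohort studies with a high risk of "
           "confounding, bias, or chance",
    "3":   "Non-analytic studies, e.g. case reports, case series",
}

def describe(level: str) -> str:
    """Return the Table 2 description for a valid SIGN evidence level."""
    if level not in SIGN_LEVELS:
        raise ValueError(f"Not a SIGN level: {level!r}")
    return SIGN_LEVELS[level]
```

Encoding the table this way makes it easy to reject grades that fall outside the published classification, which matters because (as noted later) some study designs are not covered by the SIGN criteria at all.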
As there were no checklists in the SIGN 50 (2004) guideline for non-analytical systematic reviews or expert opinion, a simple checklist was created to assist data analysis and reporting; these can be seen in Appendix A. Due to the nature of these works, there was no need to analyse the research methodology and content with the same rigour as a randomised controlled trial. The evaluation checklist for these works is as subjective as their content; however, a checklist was considered necessary to ensure consistency in the analysis.
While the SIGN 50 guideline appeared to be comprehensive and arguably provided the most reproducible results available, a degree of subjectivity in judgement remained (Harbour and Miller, 2001). Bias was limited in this study by having two independent researchers review the author's classification and grading process. This is in accordance with the SIGN 50 (2004) guideline development criteria, which state that "each study is independently evaluated by at least two individuals and consensus reached on the rating before it is included in any evidence table" (SIGN 50, 2004). These researchers were familiarised with the SIGN 50 (2004) classification guideline. All studies were analysed individually by the author of this paper, and the graded levels of evidence for each paper were then compared with those of the two independent researchers. If there was any disagreement in classification, a third researcher cast the deciding vote.
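The consensus rule described above, where independent gradings are compared and any disagreement is settled by a third researcher's deciding vote, can be sketched as follows. This is an illustrative reading of the procedure, not an implementation used in the study; the function and parameter names are hypothetical.

```python
# Hypothetical sketch of the grading consensus rule: each study is
# graded by the author and two independent researchers. If a majority
# of the three gradings agree, that grade stands; otherwise a third
# (tie-breaking) researcher casts the deciding vote.
from collections import Counter

def consensus_grade(author, reviewer_a, reviewer_b, tiebreaker=None):
    votes = Counter([author, reviewer_a, reviewer_b])
    grade, count = votes.most_common(1)[0]
    if count >= 2:
        # at least two of the three gradings already agree
        return grade
    # three-way disagreement: defer to the deciding vote
    return tiebreaker()
```

For example, `consensus_grade("1+", "1+", "1-")` resolves to `"1+"` without the tiebreaker, while a three-way split calls the `tiebreaker` function. The sketch makes explicit that the deciding vote is only needed when no majority exists.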
Once the checklists were completed, evidence tables were created to summarise all the validated studies relating to each key question. The validated studies for each key question were summarised in a standardised table format as recommended by SIGN 50 (2004). This provided an excellent framework for comparing results across studies and therefore facilitated the decision-making process for grades of recommendation as used by SIGN 50 (2004). A limitation of the SIGN 50 (2004) system is its failure to acknowledge some forms of study in its levels of evidence grading criteria. Study designs such as cross-sectional studies, test-retest studies and non-human biomechanical models were not mentioned in the classification criteria. Levels of evidence for these study designs were determined by interpretation of the literature on grading methods (Harbour and Miller, 2001; Upshur, 2003; Tevaarwerk, 2004) and discussions with the Quality and Information Director at SIGN.