Hanzhou Li1,3, Jeffrey L. Jauregui, PhD4, Cagla Fenton RD, LDN1,2, Claire M. Chee RD, BS1, A.G. Christina Bergqvist M.D1,5
1Division of Neurology, Department of Pediatrics; 2Department of Clinical Nutrition, The Children’s Hospital of Philadelphia, Philadelphia, PA 19104; 3Department of Biology University of Pennsylvania, Philadelphia, PA 19104; 4Department of Mathematics, Union College, Schenectady, NY 12308; 5The Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104
Corresponding Author: email@example.com
Journal MTM 3:2:11–15, 2014
Background: The Ketogenic Diet (KD) is an effective, alternative treatment for refractory epilepsy. This high fat, low protein and carbohydrate diet mimics the metabolic and hormonal changes that are associated with fasting.
Aims: To maximize the effectiveness of the KD, each meal is precisely planned, calculated, and weighed to within 0.1 gram for the average three-year duration of treatment. Managing the KD is time-consuming and may deter caretakers and patients from pursuing or continuing this treatment. Thus, we investigated methods of planning KD faster and making the process more portable through mobile applications.
Methods: Nutritional data were gathered from the United States Department of Agriculture (USDA) Nutrient Database. User-selected foods are converted into a system of linear equations with n variables and three constraints: prescribed fat content, prescribed protein content, and prescribed carbohydrate content. Depending on the number of foods chosen, different techniques are applied to derive an exact or approximate solution to the system.
Results: The method was implemented on an iOS device and tested with a variety of foods and different numbers of foods selected. In each case, the application’s constructed meal plan was within 95% precision of the KD requirements.
Conclusion: In this study, we attempt to reduce the time needed to calculate a meal by automating the computation of the KD via a linear algebra model. We improve upon previous KD calculators by offering optimal suggestions and incorporating the USDA database. We believe this mobile application will help make the KD and other dietary treatment preparations less time consuming and more convenient.
Treatment-resistant epilepsy is a national health problem that affects up to 40% of patients with epilepsy and results in significant morbidity, reduced quality of life, and cost to society1–3. For these patients, the Ketogenic Diet (KD), a high fat, low protein and carbohydrate diet, remains an effective therapy4. More than half achieve significant seizure reduction and up to 20% become seizure-free5–8. Use of the KD also allows for reduction in medications and antiepileptic drug-related side effects9. As a result, the use of the KD is increasing across the world10.
Management of the KD in children is time-consuming for both the families who have to prepare the meals and the registered dietitian (RD) who supervises the diet. In order to maximize success of the KD, caretakers of the patient must follow the prescribed fat, protein, carbohydrate, and caloric intake-per-day guidelines set by their KD team7. During the average three-year duration of treatment, these requirements adjust as the child grows. Each meal is precisely weighed to within 0.1 of a gram11. To maintain the 4:1 (in grams, fat: carbohydrates plus protein) ratio, exact proportions are either computed by hand or with the assistance of one of the currently available Internet KD calculators or homemade parent calculators12,13. This requires an organized caretaker, the ability to perform some algebra, and/or a large time commitment by the RD in creating menus or in checking the accuracy of menus created with these Internet KD calculators. This can directly deter a family from trying the KD or physicians from referring patients to a KD program, or result in discontinuation of the treatment despite positive results14.
In this study, we investigate an algorithm to compute the quantities of food to be prepared for any ratio KD meal. We assume the user has selected n foods, and our goal is to compute the amount of each food, in grams, needed to fulfill the KD plan. These weights (in grams) are denoted by the unknown variables x1, x2, …, xn. For each i=1, 2, …, n, we consult the USDA Nutrient Database to ascertain numbers ai, bi, and ci, which denote the fat per gram, protein per gram, and carbohydrate per gram, respectively, of the ith food. Finally, we let d1, d2, and d3 denote the prescribed fat, protein, and carbohydrate content required, which are determined by the size of the meal and the parameters of the diet. Setting the total amounts of fat, protein, and carbohydrate contributed by the food choices equal to those prescribed by the diet, we arrive at the following system of linear equations:

a1x1 + a2x2 + … + anxn = d1
b1x1 + b2x2 + … + bnxn = d2
c1x1 + c2x2 + … + cnxn = d3
which can alternatively be written in matrix form as:

⎡ a1 a2 … an ⎤ ⎡ x1 ⎤   ⎡ d1 ⎤
⎢ b1 b2 … bn ⎥ ⎢ x2 ⎥ = ⎢ d2 ⎥
⎣ c1 c2 … cn ⎦ ⎢  ⋮ ⎥   ⎣ d3 ⎦
               ⎣ xn ⎦
We denote by A the coefficient matrix, x the vector of unknowns, and b the vector consisting of d1, d2, and d3; thus, the system can be written simply as Ax=b. We apply the following methodology (Figure 1), based on the number of distinct food choices, to determine a solution or approximate solution x to the above system. To reiterate, the entries of the vector x are the respective amounts of the n foods to be given to the patient.
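As a concrete sketch of assembling this system (written here in Python with NumPy rather than the authors’ iOS implementation; the per-gram nutrient values are illustrative placeholders, not USDA figures):

```python
import numpy as np

# Hypothetical per-gram (fat, protein, carbohydrate) values for three
# user-selected foods; the real application looks these up in the
# USDA Nutrient Database.
foods = {
    "olive oil":      (1.000, 0.000, 0.000),
    "chicken breast": (0.035, 0.310, 0.000),
    "tomato":         (0.002, 0.009, 0.039),
}

# Rows of A are the nutrients (fat, protein, carb) and columns are the
# foods, so A[i, j] is the amount of nutrient i per gram of food j.
A = np.array(list(foods.values())).T

# Prescribed fat, protein, and carbohydrate for one meal, in grams
# (illustrative numbers for a 4:1 ratio meal).
b = np.array([32.0, 5.0, 3.0])

print(A.shape, b.shape)
```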
Figure 1: Algorithm For Solving Equations Depending on the Number of Foods Chosen
Less than three distinct food choices
If the number of foods selected is less than three, the system of equations is overdetermined (i.e., there are more constraints than variables). Thus, a solution will generally not exist, so we use the least-squares method to determine an approximate solution x by the formula:

x = (ATA)⁻¹ATb
where AT is the matrix transpose of A. In practice, ATA has full rank and so its inverse exists. If an entry of x is negative, the user is warned that no feasible solution was found, so more food choices must be selected.
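A minimal sketch of this least-squares step (NumPy, with illustrative rather than USDA nutrient values):

```python
import numpy as np

# Two foods, three nutrient constraints: A is 3x2, so the system is
# overdetermined. Per-gram values below are illustrative.
A = np.array([[0.05, 1.00],     # fat per gram: tofu, sesame oil
              [0.08, 0.00],     # protein per gram
              [0.02, 0.00]])    # carbohydrate per gram
b = np.array([32.0, 5.0, 3.0])  # prescribed fat, protein, carb (grams)

# Normal equations x = (A^T A)^(-1) A^T b; A^T A is 2x2 and full rank here.
x = np.linalg.solve(A.T @ A, A.T @ b)

if np.any(x < 0):
    print("No feasible solution; ask the user to select more foods.")
else:
    print(x)  # suggested grams of each food
```

In production code, `np.linalg.lstsq` is the numerically safer route to the same least-squares solution.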
Three distinct food choices
If three foods are selected, the matrix A is 3-by-3. Moreover, since the food choices are distinct, A has full rank in practice and is therefore invertible. In particular, there exists a unique solution x, given by the equation:

x = A⁻¹b
Again, if an entry of x is negative, the user is warned and prompted to select more food choices.
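Sketched in NumPy (illustrative nutrient values; the application would use USDA data):

```python
import numpy as np

# Three distinct foods give a square 3x3 system with a unique solution.
A = np.array([[0.26, 0.11, 1.00],    # fat per gram: tenders, egg, olive oil
              [0.14, 0.13, 0.00],    # protein per gram
              [0.08, 0.01, 0.00]])   # carbohydrate per gram
b = np.array([40.0, 7.0, 3.0])       # prescribed fat, protein, carb (g)

# Solve A x = b directly; np.linalg.solve is preferred over forming
# the inverse A^(-1) explicitly.
x = np.linalg.solve(A, b)
print(x)

if np.any(x < 0):
    print("Infeasible: warn the user and prompt for more food choices.")
```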
More than three distinct food choices
If the number of foods selected exceeds three, the system of equations is underdetermined. Generally, this means there are infinitely many combinations of the selected foods that would yield a KD appropriate meal. To determine a well-defined, positive solution, we use the Ordered Subsets Expectation Maximization (OSEM) method15.
We briefly explain the details of our OSEM implementation. We begin with the vector x(0) whose entries xj(0), j = 1, 2, …, n, are all 1. The vector x(k) denotes the iterate at the kth step. At each iteration we compute the current nutrient totals

yi(k) = Σj Aij xj(k), where i = 1, 2, 3,

followed by the multiplicative update step

xj(k+1) = (xj(k) / Σi Aij) Σi Aij di / yi(k).

We halt the iteration when the difference between x(k) and x(k+1) is sufficiently small, and use x(k+1) as our solution. By construction, all of its entries are nonnegative and correspond to the suggested amount of each food.
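Assuming the update takes the standard Hudson–Larkin multiplicative form (OSEM with a single ordered subset reduces to MLEM), the iteration can be sketched as follows; the foods and prescription are illustrative:

```python
import numpy as np

def osem_solve(A, b, tol=1e-9, max_iter=20000):
    """MLEM-style multiplicative update (OSEM with one ordered subset).

    Starting from all ones, each entry is rescaled by a weighted ratio
    of prescribed to current nutrient totals; since every factor is
    nonnegative, the iterates stay nonnegative by construction.
    """
    x = np.ones(A.shape[1])
    col_sums = A.sum(axis=0)           # sum_i A_ij, per-food normaliser
    for _ in range(max_iter):
        y = A @ x                      # current fat/protein/carb totals
        x_new = x / col_sums * (A.T @ (b / y))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Four illustrative foods, three constraints: an underdetermined system.
A = np.array([[0.220, 0.002, 0.006, 1.00],    # fat per gram
              [0.220, 0.009, 0.032, 0.00],    # protein per gram
              [0.022, 0.039, 0.026, 0.00]])   # carbohydrate per gram
b = np.array([45.0, 8.0, 4.0])

x = osem_solve(A, b)
print(np.round(x, 2))  # nonnegative gram amounts for each food
```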
The methods were implemented in an iOS application and the results are depicted in Figure 2. The diet plan used consisted of a 4:1 ratio KD with 70 mg of heavy cream. Because the heavy cream amount is static, its nutritional values are simply subtracted from the totals. Figure 2A demonstrates the application processing two food choices: tofu and sesame oil. Figure 2B demonstrates three food choices: chicken tenders, hard-boiled egg, and olive oil. Figure 2C demonstrates four food choices: mozzarella cheese, tomatoes, basil, and olive oil. In each case, the total nutrition of the meal closely matches the recommended KD values. The green progress bar indicates that the application’s constructed meal plan is within 95% precision of the KD requirements. These results demonstrate the robustness of the methods in generating an appropriate, precise meal plan given a variety of input food choices.
Figure 2: Example Calculations Using the Mobile Application
The KD is rewarding, but managing it is very time-consuming. Most KD RDs have therefore moved from hand computation and pre-calculated meal plans to homemade Excel spreadsheets or programs available on the Internet such as the KetoCalculator or the Stanford Keto Calculator12,13. Many RDs also allow direct caretaker access to these programs but require meal plan reviews. Other KD programs have adopted an exchange system that simplifies calculations at the expense of nutritional precision16.
Current applications such as the KetoCalculator and KD Excel spreadsheets require the user to match the KD requirements manually: the user increments the amount of each food until the total nutrition matches the prescribed amounts. This takes time and might not achieve the accuracy demanded by the KD. In comparison, our mobile application algorithmically and instantaneously calculates the exact suggested amount of each food item.
Additionally, currently available calculators only control for the ratio, not for the individual macronutrients. For example, they allow the user to maintain a 4:1 KD meal by replacing protein with carbohydrate, adjusting according to the expression (fat) / (protein + carbohydrate). This can potentially result in insufficient protein intake, which over time can cause protein deficiency and poor growth, as we have seen in patients referred from other centers seeking second opinions. Our application’s algorithm does not allow any deviation from the prescribed amounts of the macronutrients, and it functions for any ratio higher or lower than the standard 4:1. Finally, the application utilizes the USDA database as the source of nutritional information, ensuring accuracy and a large selection of food choices. With this program, meal planning takes only a few food selections and the click of a button. The flexibility and convenience of a mobile application make the KD much more manageable.
The Ketogenic Diet has great potential in treating refractory epilepsy but places a heavy demand on the KD staff and the caretakers of patients. To make the KD more manageable, we developed and implemented a mobile application that simplifies its planning. We also demonstrated that the methodology generates a medically acceptable meal plan whenever a solution exists for the given food choices. By utilizing mobile technology, we are able to provide effective medical guidance at the users’ convenience. We intend to gather further data from patients regarding how they determine preferences among the foods they choose, and then to create additional optimizing constraints to improve the algorithm. Finally, we hope this study will make the KD and other dietary treatments a practical possibility for more caretakers.
We thank the Children’s Hospital of Philadelphia Ketogenic Diet program for supporting this work. The Ketogenic diet patients inspired us to create this program, which we hope will assist them in their daily management of the Ketogenic diet. The first author would like to thank Dr. Joshua Plotkin for encouraging him to pursue this interest.
1. Begley CE, Famulari M, Annegers JF, Lairson DR, Reynolds TF, Coan S, et al. The cost of epilepsy in the United States: an estimate from population-based clinical and survey data. Epilepsia. 2000 Mar;41(3):342–51.
2. Kotsopoulos IA, Evers SM, Ament AJ, de Krom MC. Estimating the costs of epilepsy: an international comparison of epilepsy cost studies. Epilepsia. 2001 May;42(5):634–40.
3. Park C, Wethe JV, Kerrigan JF. Decreased quality of life in children with hypothalamic hamartoma and treatment-resistant epilepsy. J Child Neurol. 2013 Jan;28(1):50–5.
4. Bergqvist AG, Schall JI, Gallagher PR, Cnaan A, Stallings VA. Fasting versus gradual initiation of the ketogenic diet: a prospective, randomized clinical trial of efficacy. Epilepsia. 2005 Nov;46(11):1810–9.
5. Henderson CB, Filloux FM, Alder SC, Lyon JL, Caplin DA. Efficacy of the ketogenic diet as a treatment option for epilepsy: meta-analysis. J Child Neurol. 2006 Mar;21(3):193–8.
6. Lefevre F, Aronson N. Ketogenic diet for the treatment of refractory epilepsy in children: A systematic review of efficacy. Pediatrics. 2000 Apr;105(4):E46.
7. Neal EG, Chaffe H, Schwartz RH, Lawson MS, Edwards N, Fitzsimmons G, et al. The ketogenic diet for the treatment of childhood epilepsy: a randomised controlled trial. Lancet Neurol. 2008 Jun;7(6):500–6.
8. Levy RG, Cooper PN, Giri P. Ketogenic diet and other dietary treatments for epilepsy. Cochrane Database Syst Rev. 2012;(3).
9. Nam SH, Lee BL, Lee CG, Yu HJ, Joo EY, Lee J, et al. The role of ketogenic diet in the treatment of refractory status epilepticus. Epilepsia. 2011 Nov;52(11):e181–4.
10. Kossoff EH, McGrogan JR. Worldwide use of the ketogenic diet. Epilepsia. 2005 Feb;46(2):280–9.
11. Mike EM. Practical guide and dietary management of children with seizures using the ketogenic diet. Am J Clin Nutr. 1965 Dec;17(6):399–409.
12. Zupec-Kania B. KetoCalculator: a web-based calculator for the ketogenic diet. Epilepsia. 2008 Nov;49 Suppl 8:14–6.
13. Stanford Medical Center – Ketogenic Diet Program. Stanford University; Available from: http://www.stanford.edu/group/ketodiet.
14. Lightstone L, Shinnar S, Callahan CM, O’Dell C, Moshe SL, Ballaban-Gil KR. Reasons for failure of the ketogenic diet. J Neurosci Nurs. 2001 Dec;33(6):292–5.
15. Hudson HM, Larkin RS. Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans Med Imaging. 1994;13(4):601–9.
16. Carroll J, Koenigsberger D. The ketogenic diet: a practical guide for caregivers. J Am Diet Assoc. 1998 Mar;98(3):316–21.
The Editorial Board at the Journal of Mobile Technology in Medicine is proud to present Volume 3, Issue 2, published in July 2014. Mobile technology in Medicine is a rapidly developing area, and we hope to continue accelerating research in the field. We look forward to your submissions for Issue 3.
Francoise A. Marvel, MD1,2, Julie Chase, BFA3, Mario Madruga, MD FACP4
1DocTechMD LLC, 4622 15th St., NW, Washington DC 20010; 2Johns Hopkins Bayview Internal Medicine Residency Program, 4940 Eastern Ave, Baltimore, MD 21224; 3Mini Monster Media LLC, Mini Monster Media, 22A Hilton Street, Belleville, NJ 07109; 4Orlando Health Internal Medicine Residency Program, 21 W. Columbia St, Orlando, FL 32806
Corresponding Author: FrancoiseMarvel@gmail.com
Journal MTM 3:2:35–41, 2014
Smartphones have the potential to impact clinical decision-making because of their portability and prevalence of mobile medical applications (“apps”). In this case report, we present a 10-step approach for physicians to develop apps. The process is organized into four phases: (1) App Vision includes conceptualization, defining objectives, and establishing its innovativeness; (2) Creation defines the project’s feature scope and encompasses app development followed by beta testing; (3) Dissemination provides pre-release clearance of patient safety issues followed by distribution to virtual marketplaces and social media outlets; (4) Determining Utility involves the research and development process. This article serves as a roadmap for future physician app developers based on current guidelines and the experience of physicians and developers in creating Madruga and Marvel’s Medical Black Book App.
Pavindran A Gounder, MBBS1, Eliza Cole, MBBS2, Stephen Colley, MBBS, FRANZCO3,
David M Hille, MSc(Oxon)4
1Fremantle Hospital; 2Fremantle Hospital; 3Fremantle Hospital; 4Medical Student, University of Western Australia, WA, Australia
Corresponding author: firstname.lastname@example.org
Journal MTM 3:2:35–39, 2014
Background: The use of tablet devices and smartphones in medicine as assessment tools is becoming more widespread. These devices now run mobile applications or “apps” that have traditionally been the domain of desktop computers or more dedicated hardware. It is important that health professionals have confidence in the accuracy of measurements obtained from these new tools. The “EyeSnellen” app for the iPhone/iPad (running Apple Inc’s iOS operating system) allows users to measure visual acuity using a portable Snellen chart installed on a tablet device.
Aims: To compare the visual acuity measurements obtained from EyeSnellen iPad app with a standard illuminated Snellen Chart.
Methods: Participants were recruited from a tertiary level eye clinic in Western Australia. Visual acuity was measured using the Snellen light box chart, and a second measurement was obtained using the EyeSnellen app installed on an Apple iPad mini, with an Apple iPhone connected via Bluetooth serving as a remote.
Results: 122 eyes were tested. Bland-Altman analysis revealed a mean difference of 0.001 logMAR units between the visual acuity measurements obtained from EyeSnellen app and those taken on the light box chart with 95% limits of agreement of –0.169 to 0.171.
Conclusion: The Snellen Chart function on EyeSnellen app is equivalent to the traditional Snellen chart at measuring visual acuity at a test distance of 6 metres.
Measurement of visual acuity provides a screening tool for the diagnosis of underlying disease and can be used as a predictor of the functional consequences of visual loss1. It is the first of the “vital signs” of ophthalmology. The original Snellen chart was developed in 1862 by Dr Herman Snellen and since that time many variations have been proposed and considered.
Since the advent of smartphones and tablet devices, ‘apps’ have been used to simplify many existing daily tasks. In medicine there are an increasing number of uses for these devices, and apps are now used widely as resources for learning and as tools for improving clinical assessment and treatment. Currently there are multiple apps available worldwide for testing visual acuity; however, few have been standardised and validated for use.
The EyeSnellen app was developed by Dr Stephen Colley (www.eyeapps.com.au), a Western Australian Ophthalmologist, and released on the iTunes App store in December 20122. It uses an iPad to display the Snellen chart and an iPhone or iPod as a remote device via Bluetooth. There have been regular updates with new features and the app is currently version 1.6 (as of December 2013). There have been over 9500 downloads as of March 2014.
To date, there have been two published studies comparing visual acuity estimates using a standard eye chart and an eye chart on an iPad/tablet device. The first, a study conducted in a Chinese ophthalmic centre, compared an iOS app (Eye Chart Pro) against a tumbling E light box chart3. Their study collected measurements from 240 eyes and concluded that the Eye Chart Pro app was reliable for visual acuity testing when the Snellen visual acuity was better than a decimal visual acuity of 0.1. The second study, conducted in New Zealand, collected visual acuity measurements on patients without ocular pathology4. The study concluded that tablet computer devices were only suitable for use in situations where sources of glare could be eliminated. There has not been a study validating the use of a Snellen chart on a tablet device.
The portability of tablet devices also makes them ideal for remote and rural health care settings and for mobile screening units.
We hypothesized that the EyeSnellen iPad tool was comparable to the traditional Snellen chart at measuring visual acuity at a test distance of 6 metres.
The study was approved by the South Metropolitan Health Service Human Research Ethics Committee. All participants provided informed consent before participating in the study.
Participants were recruited from presentations to the Fremantle Hospital Eye Clinic over a period of two weeks. Patients were excluded from participating if they were below the age of 16, if English was their second language, or if their visual acuity was worse than measurable on the Snellen Chart.
Visual acuity was assessed using the Snellen Chart function of the EyeSnellen iOS app (ver 1.6) installed on a second generation Apple iPad mini and using a traditional Snellen light box chart. The Snellen Chart function was chosen as it is the most commonly used chart for testing visual acuity in Western Australian ophthalmology clinics.
EyeSnellen was installed on a first generation iPad mini (163 pixels per inch, 160 mm × 120 mm screen size), and an Apple iPhone 5S was used as a wireless remote control for the chart on the iPad mini. The brightness was set to 75% using an in-app control, which gave an illumination of 200 lux when measured with a light meter. Visual acuity intervals provided by the EyeSnellen app were 6/60, 6/36, 6/24, 6/18, 6/12, 6/9, 6/7.5, 6/6 and 6/4.5. The iPad mini was mounted with Velcro onto a light box chart using a Belkin Shield Sheer Matte Case. (Figure 1, Figure 2)
Figure 1: EyeSnellen iOS application displayed on an iPad mini that was mounted to a traditional lightbox with the use of Velcro and a case
Figure 2: Screenshot from Apple iPhone 5S with EyeSnellen remote installed
The retro-illuminated Snellen box chart provided an illumination of 600 lux. The measurable visual acuity intervals provided by the box chart were 6/60, 6/36, 6/24, 6/18, 6/12, 6/9, 6/6, 6/5 and 6/4. (Figure 3)
Figure 3: Snellen Light Box Chart
Visual acuity measurements were assessed and recorded by two resident medical officers.
Patients were instructed to stand 6 metres from both charts. A spectacle vision occluder was used to first test the right then left eye of patients. Patients were instructed to read each line until they were no longer able to resolve the optotype. A visual acuity measurement was recorded if the patient was able to read more than half the optotypes of a given line. Visual acuity was first assessed using EyeSnellen app and followed by a measurement using the traditional Snellen Chart. Neither the assessors nor the patients were masked for the outcome of the vision test. The same refractive correction was maintained for measurements with both charts (either unaided, habitual correction or pinholes).
Visual acuity measurements were recorded as decimals. Results were then converted to logMAR visual acuity for statistical analysis. R (Ver 3.0.2), a freely available statistical computer package5, was used to calculate the results.
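The conversion used here is logMAR = −log10(decimal acuity); equivalently, for a 6/x Snellen fraction, logMAR = log10(x/6). A small illustrative helper (not the authors’ code):

```python
import math

def snellen_to_logmar(numerator, denominator):
    """Convert a Snellen fraction (e.g. 6/12) to logMAR.

    Decimal acuity is numerator/denominator and logMAR is -log10 of it,
    written here as log10(denominator/numerator): 6/6 gives 0.0 and
    worse acuity gives larger logMAR values.
    """
    return math.log10(denominator / numerator)

print(round(snellen_to_logmar(6, 6), 3))   # 0.0
print(round(snellen_to_logmar(6, 12), 3))  # 0.301
print(round(snellen_to_logmar(6, 60), 3))  # 1.0
```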
A total of 67 participants (average age 57, range 19–89) were recruited for the trial. From these 67 participants, 122 eyes were tested. Main diagnoses were 19 eyes with corneal pathology (16%), glaucoma in 13 eyes (11%), 7 postoperative eyes (6%), cataract in 6 eyes (5%), and 4 eyes with dry eye syndrome (3%). There were 29 eyes (24%) without documented pathology.
The median logMAR visual acuity measured using the Snellen Chart function on the EyeSnellen app was 0.097. The range measured was –0.125 to 1.000, which is equivalent to a decimal range of 0.100 to 1.333. The median logMAR visual acuity measured using the Snellen light box chart was 0.176. The range measured was –0.176 to 1.000, which is equivalent to a decimal range of 0.100 to 1.500.
Bland-Altman analysis revealed a mean difference of 0.001 logMAR units between the visual acuity results from the iOS app and the light box chart with 95% limits of agreement of –0.169 to 0.171. (Figure 4)
Figure 4: Bland Altman plot of the difference versus mean logMAR visual acuity recorded using a traditional Snellen light box chart and the Snellen chart function on EyeSnellen app (n = 122 eyes)
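The Bland-Altman statistics reported above (bias and 95% limits of agreement) can be computed as follows; the paired logMAR values here are synthetic stand-ins for the study data:

```python
import numpy as np

# Synthetic paired measurements standing in for the 122 studied eyes.
rng = np.random.default_rng(0)
chart = rng.uniform(-0.1, 1.0, 122)        # light box chart, logMAR
app = chart + rng.normal(0.0, 0.087, 122)  # app readings with small noise

# Bland-Altman: plot difference against the pairwise mean; the bias is
# the mean difference and the 95% limits of agreement are bias +/- 1.96 SD.
diff = app - chart
pair_mean = (app + chart) / 2
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(f"bias = {bias:.3f} logMAR, 95% LoA = ({loa[0]:.3f}, {loa[1]:.3f})")
```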
Bland-Altman analysis demonstrated agreement between visual acuity measured by Snellen chart on EyeSnellen and visual acuity measured by the Snellen light box chart. This result demonstrates that EyeSnellen can be used as an alternative to the traditional Snellen light box chart when vision is tested at 6 metres.
The difference in median visual acuity measured between the EyeSnellen app and the Snellen light box chart may be explained by a limitation of the study. The 6/7.5 and 6/4.5 visual acuity intervals were absent on the Snellen light box chart, and the 6/5 and 6/4 intervals were absent on the EyeSnellen app. The calculated median result for EyeSnellen equated to the 6/7.5 interval (which was not a provided interval on the light box chart), whereas the median result calculated for the light box chart was 6/9. Given that 6/7.5 and 6/9 are neighbouring intervals, it is likely that eyes assessed as 6/9 on the light box chart would have tested as 6/7.5 had that interval been available.
A possible source of bias is present due to the lack of masking of the patient or tester, a situation arising from clinic workflow constraints.
Our findings differ slightly from recent studies investigating the reliability of visual acuity measurements on a tablet device. Zhang et al6 concluded that the Eye Chart Pro iOS app is reliable for testing visual acuity only when the decimal Snellen visual acuity is better than 0.1. Our results suggest the EyeSnellen iOS app is reliable for all visual acuities measurable on the Snellen Chart. Although we minimised glare by mounting the tablet device vertically, our results suggest that the antiglare screen recommended by Black et al7 to obtain accurate visual acuity measurements is not necessary.
Interestingly, although the illumination of the iPad mini screen was measured at 200 lux (below many recommended national standards8,9), its visual acuity measurements were still comparable to those of the light box chart, which had a measured illumination of 600 lux. The difference in illumination between the two charts may have influenced visual acuity measurements. A study comparing different chart luminance levels suggests that doubling the luminance level within a range of 40 to 600 lux improves measured visual acuity by approximately one letter on a five letter row10.
Some advantages of the EyeSnellen app were noticed during testing. The remote function allowed randomisation of optotypes, which removed the chance of patients recalling optotypes from memory. The app also allowed assessors to observe the letters and the visual acuity interval on the remote, which made recording visual acuity easier.
The Snellen chart function on EyeSnellen app can be reliably used to measure visual acuity in clinical settings. Furthermore, the application may be more advantageous than traditional light box charts due to its portability and the ability to randomise optotypes.
The authors would like to thank the staff and patients of the Fremantle Hospital Ophthalmology Department for their patience and support in conducting this research study.
1. Colenbrander A. The historical evolution of visual acuity measurement. The Smith-Kettlewell Eye Research Institute. 2001. http://www.ski.org/Colenbrander/Images/History_VA_Measuremnt.pdf (Accessed June 2013).
2. Eye Apps website; Stephen Colley, 2013. Available at: http://www.eyeapps.com.au (Accessed March 2014).
3. Zhang ZT, Zhang SC, Huang XG, et al. A pilot trial of the iPad tablet computer as a portable device for visual acuity testing. J Telemed Telecare 2013 Jan;19(1):55–9.
4. Black JM, Jacobs RJ, Phillips G et al. An assessment of the iPad as a testing platform for distance visual acuity in adults [Internet]. BMJ Open. 2013 [cited 2014 Mar 6];3(6). Available from: BMJ
5. The R Project for Statistical Computing [Internet]. www.r-project.org (Accessed March 2014)
6. Zhang ZT, Zhang SC, Huang XG, et al. A pilot trial of the iPad tablet computer as a portable device for visual acuity testing. J Telemed Telecare 2013 Jan;19(1):55–9.
7. Black JM, Jacobs RJ, Phillips G et al. An assessment of the iPad as a testing platform for distance visual acuity in adults [Internet]. BMJ Open. 2013 [cited 2014 Mar 6];3(6). Available from: BMJ.
8. New Zealand Government. Medical Aspects of fitness to drive. New Zealand Transport Agency, 2009 Jul. 139p.
9. Canadian Medical Association. CMA driver’s guide: Determining Medical Fitness to Operate Motor Vehicles, 8th Edition. 2012. 134p.
10. Sheedy JE, Bailey IL, Raasch TW. Visual acuity and chart luminance. Am J Optom Physiol Opt 1984 Sep;61(9):595–600.
TH Carter, (BSc Hons, MBChB)1, MA Rodrigues, (BSc Hons, MBChB)1, AGN Robertson, (MRCS (Ed), PhD)1, RRW Brady, (MBChB, MRCS (Ed))1, on behalf of Scottish Surgical Research Group (SSRG)
1Department of Clinical Surgery, Royal Infirmary of Edinburgh, Little France, Edinburgh UK
Corresponding Author: email@example.com
Journal MTM 3:2:2–10, 2014
Background: Smartphones provide a diverse range of functions, including the ability to communicate rapidly, store information and consult online medical applications (apps). Whilst their use by doctors is popular, there is little data on their clinical use and application by surgical trainees.
Aims: Here we assess smartphone ownership, usage in clinical environments, medical app download patterns, and knowledge of current app regulation by surgical trainees.
Methods: An online questionnaire was distributed to all core and specialty NHS general surgical trainees working in Scotland.
Results: Thirty-three percent (76/233) of trainees responded. Ninety-two percent owned a smartphone. Trainees used smartphones at work for email (96%), calls (85%), SMS/MMS (81%), Internet browsing (76%) and medical app access (55%). Eighty-two percent of respondents had downloaded at least one app, including clinical guidelines (70%), medical calculators (59%), anatomy guides (50%) and study aids (32%). There was no statistically significant association between demographics and smartphone use or app downloads. Thirty-five percent had used apps to help make clinical decisions. Thirteen percent felt they had encountered erroneous outputs, according to their own judgement and/or calculation. Fifty-eight percent felt apps should be compulsorily regulated; however, only one trainee could name a regulatory body.
Conclusion: Smartphone possession amongst NHS surgical trainees is high. Knowledge of app regulation is poor, with potential safety concerns regarding inaccurate outputs. Apps developed and approved by an appropriate authority may improve confidence when integrating them into training and healthcare delivery.
Margaret Agnes Perrott, M Sports Physio, M App Sci1, Tania Pizzari, PhD2, Jill Cook, PhD3
1Department of Physiotherapy, La Trobe University, Bundoora, Vic. 3086; 2Department of Physiotherapy, La Trobe University, Bundoora, Vic. 3086; 3Faculty of Medicine, Nursing and Health Sciences, Monash University, Frankston, Vic. 3199
Corresponding author: firstname.lastname@example.org
Journal MTM 3:2:46–54, 2014
Background: Lumbopelvic stability exercises are commonly prescribed for athletes to prevent sports injury; however, there is limited evidence that exercises are effective. Exercise trials are time consuming and costly to implement when teaching exercises or providing feedback directly to participants. Delivery of exercise programs using mobile technology potentially overcomes these difficulties.
Aims: To evaluate the qualitative clinical changes and quantitative movement pattern changes in lumbopelvic stability, and injury rates, in recreational athletes following exercise. It is hypothesised that athletes who complete the stability training program will improve their clinical rating of lumbopelvic stability, quantitatively improve their movement patterns, and have fewer injuries compared to those who complete the stretching program.
Methods: One hundred and fifty recreational athletes will be recruited for the trial. Direct contact with researchers will be limited to three movement test sessions at baseline, 12 weeks and 12 months after baseline. Videoed performance of the tests will be accessed from an internet data storage site by researchers for clinical evaluation of lumbopelvic stability. Those without good stability at baseline will be randomly allocated to one of two exercise groups. The exercise programs will be delivered via the internet. Feedback on correct performance of the exercises will be provided using a smartphone software application. Injury will be monitored weekly for 12 months using text messages.
Conclusion: The trial protocol will establish whether an exercise training program improves lumbopelvic stability and reduces injury. If lumbopelvic stability improves following an exercise program delivered with mobile technology, such programs can be provided to athletes who are geographically remote from their exercise provider, and the method will be available to researchers and health professionals designing exercise programs for individuals with other health conditions.
Trial Registration: ACTRN12614000095662
Lumbopelvic stability (LPS) has been defined as the ability of an individual to maintain optimal alignment of the spine, pelvis, and the thigh in both a static position and during dynamic activity1. Clinically, there is a perception that LPS is an essential component of injury prevention, and training LPS is thought to aid recovery from injury and improve performance2. Deficits in LPS have been associated with injury or pain in the back, groin and knee3–10 and exercise for the lumbopelvic region can reduce the risk of muscle strain injury11 and improve the gold standard quantitative measure of movement: three-dimensional kinematics12,13.
Although evidence demonstrates that the performance of the single leg squat (SLS), a key measure of LPS, can be changed by exercise12, it is uncertain whether a training program focused solely on LPS can improve an athlete’s qualitative clinical rating of LPS as assessed by physiotherapists, or whether any improvement will be corroborated by kinematic measures. It is also uncertain whether isolated LPS training reduces the risk of injury. This trial aims to establish whether an LPS exercise program improves an athlete’s qualitative and quantitative performance of specific LPS tests and whether injury is reduced by improvement in LPS.
A barrier to implementing randomised controlled clinical exercise trials is the time consuming and costly nature of teaching exercises directly to research participants14. The use of mobile technology has the potential to overcome these barriers and to standardise the exercises that are taught15. This trial will use mobile technology, both internet and smartphone, in delivery of exercise programs, for providing feedback on exercise technique and for injury monitoring.
A single-blinded parallel randomised controlled trial (Figure 1) will compare the effect of two exercise programs in participants who have deficient LPS. The trial protocol has been approved by the La Trobe University Faculty Human Ethics Committee (Reference: FHC13/121) and registered with the Australian New Zealand Clinical Trials Registry (ACTRN12614000095662). All participants will give informed consent before taking part.
Figure 1: Participant flow chart
Rating of Lumbopelvic Stability
One hundred and fifty healthy male and female recreational athletes will be recruited for a randomised controlled clinical trial. They will complete baseline movement testing of eight movement tests. Performance of two tests, the SLS and the dip test, will be videoed by the lead researcher (MP) and uploaded to a Dropbox™ folder shared with two other researchers (T.P., J.C.). To protect the security of data, Dropbox uses Secure Sockets Layer (SSL) and AES-256 bit encryption to transfer and store data16, making this an ethically acceptable way for the researchers to view the videoed performances.
The researchers will rate the individual’s LPS as good, poor or neither good nor poor. The rating classification system has been previously validated17. Rating LPS using video eliminates the need for the raters to be present at each movement test or for the participants to perform the tests multiple times for individual raters. This method has been used previously by these researchers17,18. Individuals classified as having good lumbopelvic stability will continue their usual training. All other participants will be randomly allocated to one of two exercise groups focused on the lumbopelvic region: stability training or stretching program. The exercise programs run for 12 weeks and are performed 3 times per week at home. The exercises take less than 15 minutes to perform. Allocation to exercise groups will be performed immediately after the clinical rating of LPS. Group allocation will be concealed by using an off-site trial administrator who holds the randomisation schedule. This administrator will not have any other role in the trial.
Stratified-block randomisation in groups of 20 will be performed using a random sequence generator at http://www.random.org/sequences. Stratification will be based on clinical rating of LPS: poor or neither good nor poor. This randomisation will ensure that similar numbers of participants with poor LPS or neither good nor poor LPS will be randomised to each exercise group. Differences in baseline LPS may influence the outcome of the trial rather than the intervention alone19.
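The stratified block randomisation described above can be sketched as follows. This is a minimal illustration only: the trial itself uses a sequence generated at random.org and held by an off-site administrator, and the function and stratum names here are hypothetical.

```python
import random

def block_allocations(block_size=20, arms=("stability", "stretching"), seed=None):
    """Generate one randomly ordered block with equal numbers per arm."""
    assert block_size % len(arms) == 0, "block size must divide evenly across arms"
    block = list(arms) * (block_size // len(arms))
    random.Random(seed).shuffle(block)
    return block

def stratified_schedule(strata, block_size=20, seed=None):
    """Build a separate allocation schedule for each stratum,
    e.g. 'poor' and 'neither good nor poor' clinical LPS ratings."""
    rng = random.Random(seed)
    return {stratum: block_allocations(block_size, seed=rng.random())
            for stratum in strata}
```

Because each stratum gets its own balanced blocks, similar numbers of participants with poor LPS and with neither good nor poor LPS end up in each exercise group, which is the purpose of the stratification described above.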
The researchers rating the LPS of participants at the 12 week and 12 month post intervention testing will be blinded to group allocation. The researchers assessing the outcomes and analysing the results data will also be blinded to group allocation.
Participants will attend three testing sessions, at baseline, at the completion of the intervention at 12 weeks, and 12 months after baseline testing, to evaluate movement patterns in eight movement tests. This testing will be performed using the Organic Motion system (Organic Motion, New York, USA). This system records movement with grayscale cameras (120 Hz), develops a morphological and kinematic model of the participant, generates a body shape and matches it with a joint centre model from which angular changes in body segments can be extracted20. The system can report details of movement characteristics known to discriminate between good and poor LPS17.
Eight movement tests have been chosen for the trial as they challenge control of the lumbopelvic region and their performance may be influenced by improvement in LPS. Six have previously been described: balance on one leg with eyes closed21,22, SLS23, dip test24, hurdle step and in-line lunge25 and side-to-side hopping26. Two additional tests will be performed: a turning manoeuvre and a pelvic leveling test. The turning manoeuvre will replicate typical sporting activity27 with the participants performing a running v-shaped turn. The pelvic leveling test is based on tests of postural control28 where the participant stands on one leg, raises and lowers one side of their pelvis and attempts to return their pelvis to a level position. Participants will warm up with 5 minutes walking at a comfortable speed on a treadmill while watching a video on correct performance of the tests, and then practice each test. The tests will be performed on each leg in random order.
1. Clinical assessment
The performance of the SLS and dip test will be rated for LPS. Three other tests (balance, hurdle step and in-line lunge) will be videoed and a clinical score recorded using validated rating systems. The balance test is scored with a point for each of 6 possible error types using the Balance Error Scoring System (BESS), with zero being the best possible score21. Hurdle step and in-line lunge are both scored from zero to three, with three being the best possible score25.
2. Kinematic assessment
Kinematic measures of three planes of movement of the back, pelvis and thigh will be recorded during the eight movement tests using the Organic Motion markerless motion capture system.
The same assessment of clinical rating of LPS, clinical scores from five movement tests and kinematic measures from all movement tests will be performed for all participants at 12 weeks and 12 months after their inclusion in the trial, including those with good LPS who are continuing their usual training.
Adherence and Injury Monitoring with Mobile Technology
Mobile telephone technology (text messaging) will be used to collect data on exercise adherence and to monitor sporting injuries during the 12 months of the trial. Weekly text messages will be sent to all participants. During the exercise programs the participants will be asked via text message how many times they have performed the exercises that week, with the options of replying “0”, “1”, “2”, or “3”. Also throughout the trial they will be asked if they have sustained a sports injury during the week, with the option to reply “injury” or “no injury”. Therefore, for example, they may reply “3 no injury”. This simple text message response mechanism will assist in keeping participants engaged in the trial with encouragement for prompt reply being rewarded by entry into a weekly prize draw. External observation by text message communication is expected to increase the commitment of participants to perform the exercises29. If participants reply that they have been injured the lead researcher will contact them via phone to identify the nature of the injury and refer them to an appropriate health practitioner for treatment.
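The weekly reply format described above (“0”–“3” plus “injury”/“no injury”) lends itself to simple automated tallying. A hedged sketch follows, assuming replies arrive as plain text in roughly the stated format; the protocol does not specify any parsing software, and real SMS traffic would likely need more forgiving handling.

```python
import re

# Pattern reflects the stated reply options; both parts are optional
# because a participant may report only sessions or only injury status.
_REPLY = re.compile(r"([0-3])?\s*(no injury|injury)?", re.IGNORECASE)

def parse_weekly_reply(text):
    """Return (sessions, injured) from a reply such as '3 no injury'.
    Either element is None when that part of the reply is absent."""
    m = _REPLY.fullmatch(text.strip())
    if m is None or not (m.group(1) or m.group(2)):
        raise ValueError(f"unrecognised reply: {text!r}")
    sessions = int(m.group(1)) if m.group(1) else None
    injured = None if m.group(2) is None else m.group(2).lower() == "injury"
    return sessions, injured

def adherence_percent(weekly_sessions, expected=36):
    """Adherence as a percentage of the expected exercise sessions
    (3 sessions/week over the 12-week program gives 36)."""
    return 100 * sum(weekly_sessions) / expected
```

For example, `parse_weekly_reply("3 no injury")` yields `(3, False)`, and a participant reporting 3 sessions every week for 12 weeks would score 100% adherence, the figure later used as a covariate in the analysis.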
Mobile Delivery of Exercise Programs and Feedback
After LPS rating, participants will be randomised to an exercise group. The exercise programs will be delivered to the participants with a link to one of two Dropbox internet sites: one for stability exercise and one for stretching exercise. At the site participants will access two types of video file: first, preliminary instructions and second, video of each exercise routine. The preliminary instructions include examples of correct technique and the number of repetitions to be performed. The stability exercise video also includes instructions on how to progress the exercises through four levels of difficulty. The exercise routine videos show exact timing and technique and allow the participant to exercise in conjunction with the video, providing a model to match. Participants will also be given written instructions and a poster showing either the stability exercises or the stretching exercises and the numbers of exercises and sets to be performed.
Feedback on correct exercise technique will be provided using an app, Coach’s Eye (TechSmith Corporation, Michigan, USA), that can be downloaded to smartphones and tablets, including iPads. The app provides visual and verbal feedback on exercise technique delivered directly to the participant’s smartphone, and is operational on iOS, Android and Windows operating systems. The app provider has established a list of recommended devices on which the app is fully operational. If a participant has a smartphone that does not function correctly with the app, the participant will be able to video their performance on their phone, send it to the lead researcher and receive visual feedback via an emailed image that is indistinguishable from the Coach’s Eye app output. Written feedback will also be given in the email. Consistency of feedback across participants is regarded as important so that participants are able to access the same level of involvement in the project29. Feedback on exercise technique will be available at any time during the 12 week exercise program and will give participants the opportunity to report difficulty with performance of the exercises.
Stability Training Program
The participants allocated to this group will be asked to perform a 12 week LPS training program 3 times per week at home (Table 1). They will perform 1–2 sets of 5–12 repetitions of the exercises.
Table 1: Stability training program
The stability training program comprises four exercises, each of which has four levels. The exercises are SLS, arabesque, side plank and prone plank (Figures 2a–d). The exercises commence in well-supported positions, performing only small movements, and progress to increasingly challenging exercises with larger ranges of movement in positions that challenge LPS. Each exercise has criteria describing competent performance. The participants will progress at their own rate to the next level when competent at that level. Participants may not reach the highest level of each exercise during the 12 weeks.
Figure 2: Stability exercises a. Single leg squat, b. Arabesque, c. Side plank, d. Prone plank
Stretching Training Program
The participants allocated to this group will be asked to perform a 12 week stretching training program 3 times per week at home. The stretching training program comprises stretches for six muscle groups attached to the lumbopelvic region: hamstrings, quadriceps, adductors, gluteals, trunk rotators and hip flexors (Figures 3a–f), which have been described previously30. The participants should feel a strong but comfortable stretch and hold each stretch for 30 seconds. The stretches will be performed on each side.
Figure 3: Stretching exercises a. Hamstrings, b. Quadriceps, c. Adductors, d. Gluteals, e. Trunk rotators, f. Hip flexors
Power calculation: sample size
One hundred and fifty recreational athletes will be recruited. The sample size is based on the clinically relevant ability to detect change in lumbopelvic stability after stability training in those with poor stability. Previous research shows a range of sample sizes from 21 to 42 in which stability training changed isolated aspects of LPS7,31,32, or reduced pain and disability33.
This sample size range is supported by a power calculation based on research investigating the effect of a stability and agility program compared to a stretching program on recurrent hamstring strain34. To detect differences between the two interventions in the current study and achieve a power of 0.8 at an alpha level of 0.05 (chi-square, df = 1), a sample size of 19 with poor LPS would be required35. This sample size is likely to be insufficient for the current study, since the hamstring study was limited to a specific population with a high risk of re-injury who were closely supervised in their performance of the exercise program. Therefore a larger sample size will be chosen for the current trial.
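The kind of two-group, chi-square (df = 1) sample-size arithmetic described above can be sketched with the usual normal approximation for comparing two proportions. The injury proportions below are purely hypothetical, chosen for illustration; they are not the effect sizes from the hamstring study, and the trial's actual calculation used Lenth's software35.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a difference between
    two proportions (normal approximation to the chi-square test, df = 1)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)           # power requirement
    p_bar = (p1 + p2) / 2                       # pooled proportion
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar)) +
           z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical proportions, for illustration only: a very large effect
# (10% vs 60% injury) needs far fewer participants per group than a
# more modest one (10% vs 30%).
large_effect = n_per_group(0.1, 0.6)
modest_effect = n_per_group(0.1, 0.3)
```

The sketch makes the point in the paragraph above concrete: a calculation anchored to a closely supervised, high-risk population implies a large effect and hence a small required sample, so a more loosely supervised trial expecting a smaller effect must recruit more participants.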
A sample size of 150 participants should yield 34 participants with poor LPS. This is based on a study by the current researchers that yielded 14 individuals with poor LPS, 9 with good LPS and 39 with neither good nor poor stability from a population of 62 recreational athletes17. This should ensure a large enough sample size to detect change in LPS in those with poor LPS. The power of the trial is increased by basing the sample size only on detecting change in those with poor LPS, as change in LPS in those with neither good nor poor stability will also be examined in this trial.
Data analysis: clinical rating
Clinical rating of LPS (good, poor or neither good nor poor) will be compared before and after intervention using the chi-square test. Performance scores for balance, hurdle step and in-line lunge will be compared before and after intervention using the Friedman two-way analysis of variance by ranks.
The correlation between clinical LPS rating and performance scores will be analysed using Spearman rho at baseline, 12 weeks and 12 months to establish if there is an association between clinical rating and performance scores on other tests. The alpha level will be set at p ≤ 0.05 for all statistical tests.
Data analysis: kinematic measures
Kinematic measures related to lumbopelvic stability will be compared before and after intervention using mixed two-way ANOVA (group by time). This comparison will be made at baseline, 12 weeks and 12 months to determine whether an exercise program changes how much athletes move during the tests. Movement patterns will be analysed on each leg, with skill and stance legs36 analysed separately.
Data analysis: injury rate and adherence
The association between baseline rating of LPS and subsequent sports injury will be analysed using the chi-square test. Adherence to the exercise programs will be reported as a percentage of the 36 expected exercise sessions. Exercise adherence will be used as a covariate in the analysis of change in clinical rating and injury rate.
This randomised controlled trial will examine the effectiveness of an exercise program designed to improve LPS compared to a control exercise program in recreational athletes. It is expected that the stability program will be more effective in improving LPS, changing movement patterns and reducing injury than the stretching program.
The trial is dependent on the use of mobile technology, both internet and smartphone, to deliver the exercise program instructions and technique, to provide feedback on exercise technique and to monitor exercise adherence and injury. Exercise trials that rely on teaching exercise programs face to face, or that require participants to attend exercise groups, are expensive and time consuming for both researchers and participants. Text messages simplify the monitoring of adherence and injury compared with exercise/injury diaries.

The ability to deliver the randomised controlled trial in a time and cost effective manner has implications first for the specific outcome of this trial on lumbopelvic stability, and second for exercise trials for other health conditions. If the LPS exercise program is successful in changing LPS and in reducing injury, this provides an effective method for making the exercise program available to the general sporting community. It would also be possible for individuals to perform the key movement tests that classify them as having good, poor or neither good nor poor LPS at home and send the videos via the Coach’s Eye app to be assessed. Those without good LPS could be provided with the stability exercise program via the internet and receive feedback through the app. This enables athletes who are geographically remote from skilled physiotherapists to access proven exercise techniques for their LPS.

In addition to the direct outcome of this trial, other researchers or health professionals can use the methods in this protocol to establish exercise programs for other health conditions, videoing correct performance of exercise technique to deliver the programs and providing feedback using mobile technology.
The trial will be reported in accordance with the CONSORT group statement.
At the time of manuscript submission recruitment of participants had not commenced.
General Disclosure Statement
Ms Perrott and Dr. Pizzari have nothing to disclose. Prof. Cook reports a relevant financial activity outside the submitted work as a director of a company that has interests in tendon imaging and management.
1. Perrott M, Pizzari T, Cook J, Opar MS. Development of clinical rating criteria for tests of lumbo-pelvic stability. Rehabilitation Research and Practice [Internet]. 2011 [cited 2012 May 29]. Available from: http://www.hindawi.com/journals/rerp/2012/803637/.
2. Willardson JM. Core stability training: Applications to sports conditioning programs. J Strength Cond. 2007;21(3):979–85.
3. Biering-Sorenson F. Physical measurements as risk indicators for low-back trouble over a one year period. Spine. 1984;9(2):108–19.
4. Bolgla LA, Malone TR, Umberger BR, Uhl TL. Hip strength and hip and knee kinematics during stair descent in females with and without patellofemoral pain syndrome. J Orthop Sports Phys Ther. 2008;38(1):12–8.
5. Cowan SM, Schache AG, Brukner P, Bennell KL, Hodges PW, Coburn P, et al. Delayed onset of transversus abdominus in long-standing groin pain. Med Sci Sports Exerc. 2004;36(12):2040–5.
6. Evans C, Oldreive W. A study to investigate whether golfers with a history of low back pain show a decreased endurance of transversus abdominus. J Man Manip Ther. 2000;8(4):162–74.
7. Hides JA, Richardson CA, Jull GA. Multifidus muscle recovery is not automatic after resolution of acute, first-episode low back pain. Spine. 1996;21(23):2763–9.
8. Hodges PW, Richardson CA. Inefficient muscular stabilization of the lumbar spine associated with low back pain. Spine. 1996;21(22):2640–50.
9. Leetun DT, Ireland ML, Willson JD, Ballantyne BT, Davis IM. Core stability measures as risk factors for lower extremity injury in athletes. Med Sci Sports Exerc. 2004;36(6):926–34.
10. Zazulak BT, Hewett TE, Reeves NP, Goldberg B, Cholewicki J. Deficits in neuromuscular control of the trunk predict knee injury risk: a prospective biomechanical-epidemiologic study. Am J Sports Med. 2007;35(7):1123–30.
11. Perrott MA, Pizzari T, Cook J. Lumbopelvic exercise reduces lower limb muscle strain injury in recreational athletes. Phys Ther Rev. 2013;18(1):24–33.
12. Baldon RD, Lobato DF, Carvalho LP, Wun PY, Santiago PR, Serrao FV. Effect of functional stabilization training on lower limb biomechanics in women. Med Sci Sports Exerc. 2012;44(1):135–45.
13. Shirey M, Hurlbutt M, Johansen N, King GW, Wilkinson SG, Hoover DL. The influence of core musculature engagement on hip and knee kinematics in women during a single leg squat. Int J Sports Phys Ther. 2012;7(1):1–12.
14. McTiernan A, Schwartz RS, Potter J, Bowen D. Exercise clinical trials in cancer prevention research: a call to action. Cancer Epidemiol Biomarkers Prev. 1999;8(3):201–7.
15. Parker M. Use of a tablet to enhance standardisation procedures in a randomised trial. Journal of MTM. 2012;1(1):24–6.
16. How secure is Dropbox? 2014 [cited 2014 May 17]. Available from: https://www.dropbox.com/help/27/en.
17. Perrott MA. Development and evaluation of rating criteria for clinical tests of lumbo-pelvic stability [electronic resource] [Masters Thesis]. Melbourne: La Trobe University; 2010.
18. Perrott MA, Cook J, Pizzari T, editors. Clinical rating of poor lumbo-pelvic stability is associated with quantifiable, distinct movement patterns. Australian Conference of Science and Medicine in Sport; 2009; Brisbane: Sports Medicine Australia.
19. Kernan WN, Viscoli CM, Makuch RW, Brass LM, Horwitz RI. Stratified randomization for clinical trials. J Clin Epidemiol. 1999;52(1):19–26.
20. Mundermann A, Mundermann L, Andriacchi TP. Amplitude and phasing of trunk motion is critical for the efficacy of gait training aimed at reducing ambulatory loads at the knee. J Biomech Eng. 2012;134(1):011010.
21. Finnoff JT, Peterson VJ, Hollman JH, Smith J. Intrarater and interrater reliability of the Balance Error Scoring System (BESS). PM R. 2009;1(1):50–54.
22. Riemann BL, Guskiewicz K, Shields EW. Relationship between clinical and forceplate measures of postural stability. JSR. 1999;8(2):71–82.
23. Zeller BL, McCrory JL, Kibler WB, Uhl TL. Differences in kinematics and electromyographic activity between men and women during the single-legged squat. Am J Sports Med. 2003;31(3):449–56.
24. Harvey D, Mansfield C, Grant M. Screening test protocols: pre-participation screening of athletes. Canberra: Australian Sports Commission; 2000.
25. Cook G, Burton L, Hoogenboom B. Pre-participation screening: the use of fundamental movements as an assessment of function – part 1. N Am J Sports Phys Ther. 2006;1(2):62–72.
26. Itoh H, Kurosaka M, Yoshiya S, Ichihashi N, Mizuno K. Evaluation of functional deficits determined by four different hop tests in patients with anterior cruciate ligament deficiency. Knee Surg Sport Tr A. 1998;6(4):241–5.
27. Muller C, Sterzing T, Lake M, Milani TL. Different stud configurations cause movement adaptations during a soccer turning movement. Footwear Sci. 2010;2(1):21–8.
28. Stevens VK, Bouche KG, Mahieu NN, Cambier DC, Vanderstraeten GG, Danneels LA. Reliability of a functional clinical test battery evaluating postural control, proprioception and trunk muscle activity. Am J Phys Med Rehabil. 2006;85(9):727–36.
29. Lied TR, Kazandjian VA. A Hawthorne strategy: implications for performance measurement and improvement. Clin Perform Qual Health Care. 1998;6(4):201–4.
30. Herbert RD, Gabriel M. Effects of stretching before and after exercising on muscle soreness and risk of injury: systematic review. BMJ. 2002;325(7362):468.
31. Hides JA, Stanton W, McMahon S, Sims K, Richardson CA. Effect of stabilization training on multifidus muscle cross-sectional area among young elite cricketers with low back pain. J Orthop Sports Phys Ther. 2008;38(3):101–8.
32. Stanton R, Reaburn PR, Humphries B. The effect of short-term Swiss ball training on core stability and running economy. J Strength Cond Res. 2004;18(3):522–8.
33. O’Sullivan PB, Twomey LT, Allison GT. Evaluation of specific stabilising exercises in the treatment of chronic low back pain with radiological diagnosis of spondylolisis or spondylolisthesis. Spine. 1997;22(24):2959–67.
34. Sherry MA, Best TM. A comparison of 2 rehabilitation programs in the treatment of acute hamstring strains. J Orthop Sports Phys Ther. 2004;34(3):116–25.
35. Lenth RV. Java Applets for Power and Sample Size [Computer software]. 2006–2009 [cited 2014 Apr 4]. Available from: http://www.stat.uiowa.edu/~rlenth/Power.
36. Bullock-Saxton JE, Wong JE, Hogan N. The influence of age on weight-bearing joint reposition sense of the knee. Exp Brain Res. 2001;136(3):400–6.