Objective assessment in continuing medical education (CME): Medical objective assessment in system control

Dmytro D Ivanov

Shupyk National Medical Academy of Postgraduate Education, Kiev, Ukraine

Yuriy V Voronenko

Shupyk National Medical Academy of Postgraduate Education, Kiev, Ukraine

Ozar P Mintser

Shupyk National Medical Academy of Postgraduate Education, Kiev, Ukraine

Larysa Yu Babintseva

Shupyk National Medical Academy of Postgraduate Education, Kiev, Ukraine

DOI: 10.15761/RRI.1000135

Abstract

Existing examinations in postgraduate education and continuing medical education (CME) are not perfect. Modern assessment does not allow for the disadvantages of older respondents, who need more time to reply. Specialists with wide clinical experience may see more than one plausible answer in alternative questions. Reduced memory in older people makes examination difficult without additional sources of information.

We offer an individualised system for testing doctors. It provides a personalised choice of examination questions, multiple-choice questions with weight characteristics and no distractors, interactive cooperation in the case of negative answers, and a final expert decision on the person tested. A special algorithm is proposed for typical questions that combines the advantages of known approaches to testing. The questioning system is more demanding for test creators but more convenient and objective for medical doctors than existing systems.

Key words

postgraduate education, continuing medical education, examinations in medicine, multiple-choice questions with weighted answers, question weight, individual approach to medical testing, personalised examination

Introduction

Computerised examinations may be obligatory for candidates for a medical specialty. Multiple-choice questions are widely used in traditional objective assessments. Respondents are asked to select the single correct answer from a list of choices, so-called single-best-answer questions [1,2]. This test task is considered a "closed" type of question: it offers multiple or alternative choices and may also ask the respondent to match or sequence the answers to each question. The number of possible answers usually varies between three and five. The correct answer is called the key and the incorrect answers are called distractors [3].

This contrasts sharply with the "multiple response question", another closed-ended type in which more than one answer may be correct. Each correct answer earns a set number of points or a percentage of the total mark, called the "weight" of the multiple response question [4], which usually varies from 20 to 100%.
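
As a minimal illustration of such weighted scoring, consider the sketch below, in which each correct option carries a weight and the candidate earns the sum of the weights of the correct options selected. The function name and the convention that incorrect selections simply earn nothing (rather than incurring a penalty) are assumptions for illustration, not the cited scheme.

```python
# Hypothetical sketch of weighted multiple-response scoring: `weights` maps
# each CORRECT option to its weight; options outside the map are incorrect
# and, by assumption, earn nothing (no penalty).
def score_multiple_response(selected: set, weights: dict) -> float:
    total = sum(weights.values())
    earned = sum(weights.get(option, 0.0) for option in selected)
    return earned / total if total else 0.0

# Options B and D are correct, weighted 60% and 40% of the item's mark.
print(score_multiple_response({"B"}, {"B": 0.6, "D": 0.4}))       # 0.6
print(score_multiple_response({"B", "D"}, {"B": 0.6, "D": 0.4}))  # 1.0
```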

The Scholastic Assessment Test (SAT Reasoning Test, or simply the SAT) historically penalised applicants for incorrect answers with a fractional deduction, while unanswered questions earned neither credit nor penalty. A redesigned version of the SAT, which seems very promising, was announced in 2014 by the College Board [5] and launched in 2016.

An advanced "open" or open-ended type of question, which invites a free response or additions to an existing answer, is already used in some modern assessments. However, multiple response questions with weighted data, SAT-style scoring and open question types have not been widely used in CME examinations or medical specialist assessments [6]. Even major platforms such as Pearson VUE and ExamSoft University, for example, do not employ the modern approach in medical specialty exams [7].

Taking into account the different levels of training in different countries [8,9] and the poorer performance of older candidates under current formats, a new approach to assessing knowledge is highly advisable.

Methodology

We have developed a new technical solution for medical testing at the Shupyk National Medical Academy of Postgraduate Education (Department of Nephrology). It is based on the clinical knowledge, experience and psychophysiological features of the physician undergoing assessment. The basis of this approach is an integrated examination system with personalisation for each respondent. The algorithm has three parts (A, B, C): A) choice of questions depending on the individual features of the doctor being examined; B) multiple-choice questions with weight characteristics and an open-step algorithm for each response; and C) individual analysis of responses (Figure 1).

Figure 1. Algorithm of personalised assessment for candidate doctors
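
To make the three stages concrete, here is a minimal, self-contained sketch of the A-B-C flow in Figure 1. All names, the filtering rule in stage A and the 0.5 review threshold in stage C are hypothetical; the paper describes the stages, not an implementation.

```python
from dataclasses import dataclass

@dataclass
class Response:
    item_id: str
    score: float  # fraction of the item's weight earned (0.0-1.0)

def select_items(profile: dict, bank: list) -> list:
    # A: personalised choice -- e.g. restrict the bank to the doctor's specialty
    return [item for item in bank if item["specialty"] == profile["specialty"]]

def flag_for_expert(responses: list, threshold: float = 0.5) -> list:
    # C: low-weight responses go to an expert for individual analysis
    return [r for r in responses if r.score < threshold]

bank = [{"id": "q1", "specialty": "nephrology"},
        {"id": "q2", "specialty": "cardiology"}]
items = select_items({"specialty": "nephrology"}, bank)  # A
responses = [Response("q1", 0.4)]                        # B would yield these weighted scores
print([r.item_id for r in flag_for_expert(responses)])   # C: ['q1'] is referred to the expert
```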

Individualisation of the testing procedure concerns both of its parts: the choice of questions and the analysis of the answers (Figure 2). The first part takes into account age, work experience and the reaction of the tested person to the questions (the evaluation is performed automatically on the basis of physiognomic signs, delayed response time, the readiness of the respondent to start the exam, etc.). Some categories, including women, people above 50 years of age, inexperienced candidates and non-native speakers, receive an additional 15% of examination time (see the sketch after Figure 2). Physiognomic features are currently being tested. The beta version covers the mood and readiness of the respondent to start an examination, as well as individual examinations for persons with limited mental abilities.

Figure 2. Individual approaches in examination testing
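
The extra-time rule above lends itself to a one-function sketch. The 15% figure and the trigger categories come from the text; the profile keys and the cut-off for "inexperienced" are assumptions.

```python
def exam_time_minutes(base_minutes: float, profile: dict) -> float:
    needs_extra = (
        profile.get("sex") == "female"
        or profile.get("age", 0) > 50
        or profile.get("years_experience", 99) < 3    # "inexperienced": assumed cut-off
        or not profile.get("native_speaker", True)
    )
    return base_minutes * 1.15 if needs_extra else base_minutes  # +15% per the text

print(exam_time_minutes(120, {"sex": "male", "age": 56,
                              "years_experience": 25, "native_speaker": True}))  # 138.0
```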

It should be emphasised that in most cases (up to 80% of questions) the possible answers contain no distractors, which distinguishes testing for doctors from testing students' knowledge.

The fundamental background is based on a well-known platform [10]. We have used up-to-date knowledge, with the stem formed from clinical cases. Each clinical case is divided into 6-20 blocks representing onset, progression, management, follow-up and prophylaxis. Each block includes extended or ancillary classical material [11]: a detailed description of a clinical case study, morphological or virtual pictures, laboratory analyses, and dynamic graphs or tables, all referring to the case presented in the previous block.
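
One plausible data shape for such case-based stems, assuming the block phases and attachment kinds named above; the class and field names are illustrative only:

```python
from dataclasses import dataclass, field

@dataclass
class CaseBlock:
    phase: str        # "onset", "progression", "management", "follow-up" or "prophylaxis"
    narrative: str    # detailed description of this step of the case
    attachments: list = field(default_factory=list)  # pictures, lab results, graphs, tables

@dataclass
class ClinicalCase:
    title: str
    blocks: list      # 6-20 CaseBlock objects per the text; each refers back to the previous

first = CaseBlock("onset", "A 52-year-old presents with oedema and proteinuria.",
                  ["urinalysis.png"])
case = ClinicalCase("Nephrotic syndrome", [first])  # a real case would carry 6-20 blocks
```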

The stem ends with a lead-in question explaining how the respondent should reply. If the answer to the key question is negative (false), the question is formulated once more in a different manner; the question thus has adaptive characteristics, giving the respondent a chance at a correct reply. Correct replies may include groups of keywords or phrases, and in the case of an incomplete answer the person being tested still earns partial credit.

The most important feature of the proposed approach is its friendly communication interface. If the doctor does not answer the key question (from which the branched algorithm of the examination begins), it is formulated differently, on the assumption that the doctor simply did not understand the question. In this way the adaptability of knowledge testing is realised; a sketch of both adaptive behaviours follows.
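
This is a minimal sketch of the two adaptive behaviours just described, assuming a case-insensitive substring match for keyword groups and a fixed list of alternative wordings per key question; the matching rule and all names are assumptions, not the authors' algorithm.

```python
def keyword_credit(reply: str, keyword_groups: list) -> float:
    # Partial credit: the fraction of keyword groups matched in a free-text reply.
    reply_lower = reply.lower()
    hits = sum(any(kw.lower() in reply_lower for kw in group) for group in keyword_groups)
    return hits / len(keyword_groups)

def ask_adaptively(wordings: list, answer_is_correct) -> bool:
    # Re-ask with the next wording after each wrong answer to the key question.
    return any(answer_is_correct(w) for w in wordings)

groups = [["proteinuria", "protein in urine"], ["ace inhibitor", "acei"]]
print(keyword_credit("Start an ACE inhibitor", groups))  # 0.5 -> partial credit

wordings = ["What is the key finding?", "Which laboratory result drives the diagnosis?"]
print(ask_adaptively(wordings, lambda w: "laboratory" in w))  # True on the reformulation
```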

The following categories of question structure are used in clinical case questions; they are mixed within each clinical case depending on the content [12,13]. A scoring sketch for one of these types follows the two lists.

CLOSED type questions

  • Alternative answer - "yes" or "no"
  • One choice answer - one correct answer out of the five presented or, conversely, one wrong reply among 4-5 correct statements
  • Multiple choice answer - two or more correct answers out of five statements (choose the correct answers) or two or more incorrect replies (choose the incorrect keys); recall that a distractor is an incorrect option in a multiple-choice question
  • Summative scales (Likert-type scale)
  • Matching - the respondent is invited to match the elements of two lists
  • Sequencing - the candidate needs to arrange the elements of a list in sequence
  • Creation of information content - the subject must choose the most significant elements from the provided libraries
  • Creation of complexity - the subject needs to assemble (like a puzzle) the correct answer from the list or library provided, which includes distractors with a negative value

OPEN type questions

  • Free presentation - the subject must formulate a response to an essay-type question; the answer is structured as an essay or free communication
  • Supplement - the subject may give additional explanations or comments
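
As promised above, here is a scoring sketch for the "creation of complexity" type, in which distractors carry negative values. The specific weights and the clamping of the whole item to zero are assumptions for illustration.

```python
def score_puzzle(selected: list, library: dict) -> float:
    # Sum the values of the chosen elements; distractors are negative.
    raw = sum(library.get(elem, 0.0) for elem in selected)
    best = sum(w for w in library.values() if w > 0)   # score of a perfect assembly
    return max(raw, 0.0) / best                        # assumed: the item never goes below zero

library = {"restrict salt": 0.4, "start ACE inhibitor": 0.6, "start NSAID": -0.5}
print(score_puzzle(["restrict salt", "start NSAID"], library))          # 0.0 (distractor erased the credit)
print(score_puzzle(["restrict salt", "start ACE inhibitor"], library))  # 1.0
```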

Each clinical case starts and finishes with either an alternative or a one-choice question. This helps the respondent and makes the clinical case easier to work through. The main body of the stem consists of multiple-choice questions.

The items of a multiple-choice test are often colloquially referred to as "questions", but this is a misnomer: an item may also be an incomplete statement, an analogy or a mathematical expression rather than a question. The more general term "item" is therefore more appropriate. Items are stored in a bank [14].
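
One plausible shape for a bank entry, reflecting the point that an item need not be a question; all field names and values here are illustrative.

```python
item_bank = [
    {"id": "neph-001", "form": "incomplete_statement",
     "stem": "First-line therapy for proteinuric chronic kidney disease is ...",
     "type": "one_choice", "weight": 1.0},
    {"id": "neph-002", "form": "question",
     "stem": "Which of the following findings support the diagnosis?",
     "type": "multiple_choice", "weight": 2.0},
]
# Stage A of the algorithm would draw a personalised subset from such a bank.
```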

How does it work? (Screenshots 1-3, Figure 6)

Figure 6. Last remark

The integrated adaptive examination is aimed at the final diagnosis and treatment in each case. Each assessment is based on integrated knowledge derived from an academic discipline, taking into account the doctor's own experience [15,16]. Interestingly, there are also online tests that allow doctors to check whether they have chosen the right specialty [17].

Sample screenshots 1,2 (Figure 3,4)

Figure 3. Sample screenshot 1

Figure 4. Sample screenshot 2

For a very experienced person, the results may look as follows.

Sample screenshot 3 (Figure 5)

Figure 5. Sample screenshot 3

The pilot project will include:
  • Experienced doctors above the age of 45 years
  • Volunteers

Conclusion

The existing system for assessing knowledge in postgraduate medical education and CME is not perfect. It does not take into account the doctor's personal characteristics (age, sex, professional experience, etc.) or psycho-emotional features, so assessments may not fully evaluate the doctor's knowledge and professional skills.

We suggest a more comprehensive and friendly examination that gives additional opportunities to older doctors, to those with limited abilities or disabilities, and to those who have difficulty settling on a single correct solution.

We provide an individualised approach to testing based on integrated analysis of respondents' personal features, multiple-choice questions with weight characteristics, an open-step algorithm for each response, adaptive questions, free explanations, and individual expert analysis of low-weight responses.

We also propose allowing electronic devices to be used during a time-limited examination; this may help specialists achieve a satisfactory result.

Glossary

Assessment - the wide variety of methods or tools that educators use to evaluate, measure and document academic readiness, learning progress, skill acquisition or educational needs (http://edglossary.org/assessment)

Examination - a spoken or written test of knowledge, especially an important one (http://www.ldoceonline.com/Education-topic/exam)

Conflict of interest

None declared.

References

  1. Multiple choice. Merriam-Webster Dictionary. http://www.merriam-webster.com/dictionary/multiple%20choice
  2. Origins and Purposes of Multiple Choice Tests. San Francisco State University. Accessed April 2, 2016.
  3. Kehoe J (1995) Writing multiple-choice test items. Practical Assessment, Research & Evaluation 4. Retrieved February 12, 2008.
  4. Multiple choice questions and their weight. TechExams forum. http://www.techexams.net/forums/network/3681-multiple-choice-questions-their-weight.html
  5. Lewin T (2014) A New SAT Aims to Realign with Schoolwork. The New York Times. Retrieved May 14, 2014.
  6. NBME Item-Writing Manual. http://www.nbme.org/publications/item-writing-manual-download.html
  7. ExamSoft University. http://university.examsoft.com/h/
  8. Voronenko YV, Mintser OP, Ivanov DD (2015) Promissory concept of medical education. J Eur CME 4.
  9. Griebenow R, Campbell C, McMahon GT, Regnier K, Gordon J, et al. (2017) Roles and responsibilities in the provision of accredited continuing medical education/continuing professional development. J Eur CME 6: 1314416. [Crossref]
  10. Types of exam questions. Open Polytechnic of New Zealand. https://www.openpolytechnic.ac.nz/current-students/study-tips-and-techniques/studying-for-exams/types-of-exam-questions/
  11. Multiple choice. Wikipedia. https://en.wikipedia.org/wiki/Multiple_choice
  12. http://testobr.narod.ru/3.htm
  13. http://www.psciences.net/main/sciences/computer_sciences/articles/komptesty.html
  14. Mintser OP, Krasnov VV (2000) Questions of systematization of automated certification systems in medicine (methodical recommendations). Ternopil: Ukrmedkniga, 56 p.
  15. Schaffer M, Weisshardt I (2013) Beyond accreditation systems - the identification of different implementation models for CME across Europe. J Eur CME 2: 22602.
  16. Voronenko YV, Mintser OP, Ivanov DD (2017) Computer-based exam and clinical thinking: a modern assessment of doctor's knowledge. Kidneys 6(3), in press.
  17. https://www.med-ed.virginia.edu/specialties/Home.cfm

Editorial Information

Article Type

Research Article

Publication history

Received: June 08, 2018
Accepted: June 25, 2018
Published: June 28, 2018

Copyright

©2018 Ivanov DD. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Citation

Ivanov DD, Voronenko YV, Mintser OP, Babintseva LY (2018) Objective assessment in continuing medical education (CME): Medical objective assessment in system control. Res Rev Insights 2. DOI: 10.15761/RRI.1000135

Corresponding author

Dmytro D Ivanov

Shupyk National Medical Academy of Postgraduate Education, Kiev, Ukraine
