Analysis of trial testing in geography. Based on trial testing results


Analysis of trial testing results for 10.28.09, State Institution "Secondary School No. 6," 11th grade.

1. 30 students out of 30 took part in the testing, which is 100% of the graduating students.

2. The average score is 77.8, which is 0.1 points higher than the last testing result; the quality of knowledge is 20%, which is 10% lower than the last testing result; academic performance is 100%.

A) Positive dynamics of average score and quality of knowledge in subjects:

Russian language: the average score is 19, which is 1 point higher than the previous testing result; quality is 90%; academic performance is 100%; teacher: Dmitrieva T.M.
Weak topics:

Section on morphology and spelling;

Syntax and punctuation;

Speech styles;

Causes:

The stage of introductory repetition of the basic school Russian language course has not been completed.

  • Geography: the average score is 11.9, which is 1.9 points higher than the last testing result; quality is 42.9%, which is 42.9% higher than the last testing result; academic performance is 100%; teacher: Zhezher L.S.
    Weak topics:

Natural areas of the world and the Republic of Kazakhstan;

"Structure of the Earth's Crust"

Causes:

Many students have not fully decided on the choice of the fifth subject;

The stage of introductory repetition of the material has not been completed.

B) Negative dynamics of average score and quality of knowledge in subjects:

• Kazakh language: the average score is 18, which is 0.2 lower than the previous result; quality is 90%, which is 3.3% higher than the previous result; academic performance is 100%; teachers: Mamelbaeva G.S., Kuishinova Zh.T.

Weak topics:

  • understanding of linguistic terms;
  • phraseological units;
  • syntax;

Causes:

Limited vocabulary.

• Mathematics: the average score is 11.5, which is 0.5 lower than the previous testing result; quality is 50%, which is 3.3% lower than the last testing result; academic performance is 100%; teacher: Matais T.V.
Weak topics:

Word problems on setting up equations and systems of equations;

Progression;

Irrational equations and inequalities;

Logarithms, logarithmic expressions, equations and inequalities;

Causes:

Material not yet covered;

The introductory repetition stage has not been completed;

Lack of motivation to learn mathematical formulas.

• History of Kazakhstan: the average score is 14; quality is 53.3%, which is 10% lower than the previous testing result; academic performance is 100%; teacher: Guseva E.E.
Weak topics:

Soviet period in the history of Kazakhstan;

Administrative-territorial reforms of the 19th century;

Tribes and tribal unions on the territory of ancient Kazakhstan.
Causes:

Insufficient level of self-training of students;

Low level of student motivation;

Unreasonably wide variation in the content of questions across test collections from different years;

A large number of overly detailed questions whose answers differ from source to source, while minor details are brought to the fore.

• Physics: the average score is 10.1, which is 0.6 lower than the previous testing result; quality is 20%, which is 5% lower than the last testing result; academic performance is 100%; teacher: Galoton T.I.
Weak topics:

First Law of Thermodynamics;

Reading graphs;

Coulomb's Law;

Working with graphs;

Basic equation of molecular kinetic theory (MKT).
Causes:

Lack of self-training system;

Increased level of anxiety;

• Biology: the average score is 17; quality is 87.5%, which is 3.4% lower than the last testing result; academic performance is 100%; teacher: Boyko G.S.

Weak topics:

- “Development of life on Earth”;

- “Root, stem, leaf”;
- “Lichens”;

- “Higher nervous activity”;

Causes:

The course material has not been fully covered;

Errors in tests;

5. Percentage distribution of choices of the 5th subject:

Physics - 33.3%, biology - 26.6%, geography - 23.5%, foreign language - 16.6%, reflecting the professional plans of the graduates.

6. Distribution of students by number of points scored:
0-49: 1; 50-59: 3; 60-70: 8; 71-89: 11; 90-100: 2; 101 and above: 4.

8. The number of students achieving "4" and "5" at the end of last year was 15; based on the results of trial testing it is 6. The following received a "3": Belozor I. in history; Bushuev D. in mathematics, history of Kazakhstan, and physics; Kolesnichenko K. in mathematics; Pleshakova E. in physics and history of Kazakhstan; Solovyova E. in physics; Shumyakova N. in physics and history of Kazakhstan; Yatsenko V. in physics; Kulinich K. in mathematics; Vakalov A. in physics and history of Kazakhstan.

Ways to solve problems:

1. Given the increasing complexity of test tasks, continue analyzing the system of repetition of the covered material and students' work with the new 2009 test collection. Categorize the mistakes made by students and organize thematic repetition. Adjust the UNT preparation plans.

2. Introduce rational techniques for memorizing formulas. Mathematics and physics teachers will conduct a series of thematic consultations on developing techniques and skills for self-checking completed work. Continue the workshop on solving problems that require repeated sequential operations; draw students' attention to changes in the content of tasks; orient them toward choosing a rational algorithm for completing a test task; and review rational computational techniques.

3. Improve the quality of consultations for students.



4. Develop a rational algorithm for using reference materials in the course:

  1. In history of Kazakhstan and geography, subject teachers should review the
    library's holdings of periodicals, offer students these sources of
    information, and ensure the study and note-taking of new statistical
    data for each program section.
  2. Conduct systematic, targeted work with the category of students
    who have a single "3" in the test results.

5. The school administration should strengthen control over the organization and quality of subject teachers' remedial work with students and over the effectiveness of systematic repetition of educational material.

6. Continue systematic preparation for testing in the new format, take into account the shortcomings revealed by trial testing, and adjust preparation plans to reflect the features of the new-format tests.

Prepared by Miroshnik N.V.

An important condition for conducting trial testing is obtaining statistically reliable results, which is ensured by compliance with a number of conditions:

Trial testing should be carried out in several parallel groups. It is recommended to conduct it twice in each group, provided that the subjects receive test variants with tasks they have not answered before. It is advisable that the repeated trial testing in the same group be carried out on a different day;

The number of subjects in groups should be large enough (at least 20 people);

All parallel groups must be tested under the same conditions (the time allotted for testing, and the place and time at which it is held);

All subjects within one group must also be in identical conditions, without any "discounts" or "indulgences" for individual subjects. All subjects should receive tasks of approximately equal complexity (parallel tasks);

The time allocated for trial testing should be such that the most prepared have time to answer all the test questions;

To obtain reliable results, the possibility of prompting among subjects should be minimized.

Trial testing using a specialized program on a PC is subject to approximately the same requirements as testing "on paper".

Analysis of trial testing results and selection of test tasks

Test results matrix

After the trial testing, the test takers' answers are checked and the test results are processed. Processing should begin with compiling matrices of test results (in a computer version of testing, such matrices should be created automatically by the program). It is very important that the number of the test variant and the number of a task within it unambiguously identify which tasks the test taker performed. This is necessary in order, as noted above, to exclude from the overall pool precisely those tasks that cannot be called test tasks.



It is recommended to prepare test result matrices on a PC, for example with the Excel spreadsheet processor, which greatly facilitates data processing and checking the statistical properties of each task. An example of such a matrix is given in Table 3.4.

Table 3.4 – Matrix of test results in a group of 10 people

(scoring rule for all tasks: correct answer – 1, incorrect – 0)

Columns: item no.; surname; test variant no.; scores on tasks No. 1-10 (tasks are numbered and arranged in order of increasing difficulty: No. 1 /easiest/ → No. 10 /most difficult/); total test score.

Rows (test takers): Abramov, Dmitriev, Vasiliev, Borisov, Shchetinin, Zykov, Grigoriev, Kirillov, Ivanov, Zhukov. The bottom row (Σ) contains the sum of scores for each task.

In the matrix of test results (Table 3.4), the rows with the test takers' results should be arranged in descending order of the total score, i.e., the strongest student in the first row and the weakest in the last. In the columns containing the test takers' scores for each task, the tasks should be arranged in order of increasing difficulty, i.e., from the easiest to the most difficult.

The bottom row of Table 3.4 contains the sum of points scored by all subjects on each of the 10 test tasks. It is this sum (more precisely, its inverse) that in the general case serves as the measure of task difficulty and the criterion by which a particular task receives its place (ordinal number) in the system of test tasks. Initially, as mentioned above, the teacher estimates the difficulty of the tasks based on his own experience. The bottom row of the test results matrix provides a more objective assessment of task difficulty, which in some cases may not coincide with the teacher's initial opinion. In such a situation, a task whose difficulty differs from that originally expected should be moved to a different place and assigned a new number corresponding to its difficulty. Responses to the same task obtained in other (parallel) groups of subjects should also be taken into account.
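
As an illustration, the following Python sketch (not part of the original text) builds such a matrix and orders it by the rules just described; the names and 0/1 scores are invented for the example.

```python
# A minimal illustrative sketch of building a test results matrix and
# ordering it: rows (test takers) by descending total score, columns
# (tasks) by increasing difficulty, i.e. by descending number of
# correct answers.

scores = {
    "Abramov": [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
    "Ivanov":  [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    "Zhukov":  [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
}

n_tasks = len(next(iter(scores.values())))

# Column sums: number of correct answers per task. Their "inverse" is the
# measure of task difficulty: the smaller the sum, the harder the task.
column_sums = [sum(row[j] for row in scores.values()) for j in range(n_tasks)]

# Tasks from easiest (largest sum) to hardest (smallest sum).
task_order = sorted(range(n_tasks), key=lambda j: -column_sums[j])

# Test takers from strongest (largest total) to weakest.
taker_order = sorted(scores, key=lambda name: -sum(scores[name]))

for name in taker_order:
    row = [scores[name][j] for j in task_order]
    print(f"{name:8s}", row, "total =", sum(row))
print("sums    ", [column_sums[j] for j in task_order])
```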

Statistical analysis of test results and selection of tasks for inclusion in tests

Table 3.5 shows some of the indicators calculated from the results of the trial testing.

Table 3.5 - Analysis of test results

Indicator                              Task number
                                 1     2     3     4     5     6     7     8     9    10
Number of correct answers       10     8     7     6     5     5     4     3     2     0
Number of incorrect answers      0     2     3     4     5     5     6     7     8    10
Proportion of correct
answers, p_j                   1.0   0.8   0.7   0.6   0.5   0.5   0.4   0.3   0.2   0.0
Proportion of incorrect
answers, q_j                   0.0   0.2   0.3   0.4   0.5   0.5   0.6   0.7   0.8   1.0
Potential difficulty, q_j/p_j  0.00  0.25  0.43  0.67  1.00  1.00  1.50  2.33  4.00   -
Dispersion of scores, p_j*q_j  0.00  0.16  0.21  0.24  0.25  0.25  0.24  0.21  0.16  0.00
Correlation of task scores
with total test scores          -    0.41  0.62  0.75  0.82  0.82  0.75  0.62  0.41   -

(The values for tasks No. 1 and No. 10 follow from the other rows: every subject in the group of 10 solved task No. 1, and none solved task No. 10.)

The most important of the indicators listed in Table 3.5 are:

1) potential difficulty;

2) score dispersion;

3) the correlation coefficient of scores on the task with the total scores for the entire test.

These indicators are the criteria by which one can judge whether a task in test form can be used in tests, i.e., be called a test task.

The first indicator corresponds to the requirement of known task difficulty (see Section 1.2). As can be seen from Table 3.5, tasks No. 1 and No. 10 do not meet this requirement, which means the developer needs to "revise" them to identify the reasons (the task is too easy or too difficult, is incorrectly formulated, contains a "hint" in the answer options, is misread by the test takers, etc.). After the "revision," the task is either reworked or eliminated and is not used in tests.

Equally important is the dispersion of scores, which can serve as an indicator of a task's differentiating ability, i.e., its ability to divide a group of subjects into strong and weak. The greater the dispersion of scores, the better the task's differentiating ability. However, tasks with a low dispersion value (for example, tasks No. 2 and No. 9) can also be used in tests (taking into account the value of their correlation coefficient with the total test scores). Such tasks make it possible to separate more clearly those who are completely unprepared from those who know the material at a "3" and, correspondingly, those who know it at a "5" from those who fall short of the maximum score.

The third indicator, the correlation coefficient of task scores with total test scores, is the most important. If its value is small, the corresponding task can apparently be dropped from the test. Conversely, tasks with a large value of this coefficient (above 0.7) can be considered "leading" or "test-forming" tasks, "key" for a given discipline or its section. It is recommended to include a task in the test provided that its correlation coefficient is no lower than 0.25-0.3.

To calculate the correlation coefficient, the most convenient formula in our case is

$$ r = \frac{n\sum_{i} x_i y_i - \sum_{i} x_i \sum_{i} y_i}{\sqrt{n\sum_{i} x_i^{2} - \left(\sum_{i} x_i\right)^{2}}\,\sqrt{n\sum_{i} y_i^{2} - \left(\sum_{i} y_i\right)^{2}}}, $$

where $x_i$ is the score for the task, $y_i$ is the total score on the test, and $n$ is the number of subjects in the group.

As an example, let us calculate the correlation coefficient of scores on task No. 5 with total test scores for the matrix of test results given in Table 3.4 (see Table 3.6).

Table 3.6 – Calculation of the correlation coefficient

When using computer technology to analyze test results, it is advisable to calculate correlation coefficients with the corresponding built-in function of the Excel spreadsheet processor.
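
Outside Excel, the same quantities can be computed directly. The sketch below (an illustration, not part of the original text) reproduces the Table 3.5 indicators for a 0/1 score matrix and applies the selection criteria discussed above (known difficulty, correlation of at least 0.25).

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation by the formula given above."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    num = n * sum(a * b for a, b in zip(x, y)) - sx * sy
    den = sqrt(n * sum(a * a for a in x) - sx ** 2) * \
          sqrt(n * sum(b * b for b in y) - sy ** 2)
    return num / den if den else float("nan")

def analyze(matrix):
    """matrix: one 0/1 row per test taker; prints Table 3.5-style indicators."""
    n = len(matrix)
    totals = [sum(row) for row in matrix]          # total score per test taker
    for j in range(len(matrix[0])):
        task = [row[j] for row in matrix]          # scores on task j
        p = sum(task) / n                          # proportion of correct answers
        q = 1 - p                                  # proportion of incorrect answers
        qp = q / p if p > 0 else float("inf")      # potential difficulty
        r = pearson(task, totals)                  # correlation with total scores
        usable = 0 < p < 1 and r >= 0.25           # selection criteria from the text
        print(f"task {j + 1}: p={p:.2f} q/p={qp:.2f} pq={p * q:.2f} "
              f"r={r:.2f} usable={usable}")
```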

No less important is the comparison of test results obtained in parallel (different) groups. It is carried out by comparing the difficulty potentials, dispersions, and correlation coefficients of task scores with total test scores, which ideally should differ only slightly. Significant differences in these indicators may indicate either low reproducibility of the test results (i.e., in groups of the same level the same test gives different results) or a significantly different level of preparedness of the test takers in the different groups (i.e., the groups are not parallel).

The parallelism of groups can be checked by assessing the homogeneity of the variances of the total test scores using the appropriate statistical criteria - Fisher, Cochran, or Bartlett. Using these criteria, one can check at a sufficiently high significance level (0.05 is recommended) whether the parallel groups differ in level of preparedness.
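
A minimal sketch of such a check, assuming SciPy is available, using Bartlett's test on the total scores of two groups (the scores here are invented for the example):

```python
from scipy import stats

# Total test scores in two supposedly parallel groups (illustrative data).
group_a = [18, 15, 14, 12, 11, 10, 9, 9, 8, 6]
group_b = [17, 16, 14, 13, 11, 10, 10, 8, 7, 6]

# Bartlett's test for homogeneity of variances at the 0.05 level.
stat, p_value = stats.bartlett(group_a, group_b)
print(f"statistic={stat:.3f}, p={p_value:.3f}")
if p_value > 0.05:
    print("Variances are homogeneous: combining the groups is admissible.")
else:
    print("Variances differ significantly: the groups may not be parallel.")
```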

Another technique that can be used when processing test results is combining the results from parallel groups. It is recommended when the number of subjects in the individual groups is small, but before "combining" the results it is advisable to check the homogeneity of variances using the criteria mentioned above.

A. S. Kusainova, B. S. Imasheva, K. K. Dosmambetova, L. Z. Karsakbaeva, G. A. Derbisalina, A. U. Baidusenova, A. A. Kauysheva, A. K. Eshmanova

ANALYSIS OF THE RESULTS OF TRIAL TESTING OF INTERN DOCTORS

Karaganda State Medical University, Republican Center for Innovative Technologies of Medical Education and Science

In order to implement the state program "Salamatty Kazakhstan," the Republican Center for Innovative Technologies of Medical Education and Science (RCITMOiN), together with the Department of Human Resources and Science of the Ministry of Health of the Republic of Kazakhstan, has over the past two years conducted trial independent testing of intern doctors of the medical universities of the Republic of Kazakhstan.

Previously, graduates were examined only by the universities themselves, so the assessment of the quality of interns' training was often subjective. The main reason for the inflated indicators was the lack of effective mechanisms for external quality control of education. As international experience shows, in countries with effective education systems, quality control remains strictly centralized even as decentralization increases.

Unified external independent control and monitoring of the quality of educational services in medical education organizations can have a significant impact on improving the quality of education.

In 2011, the Ministry of Education of the Republic of Kazakhstan introduced changes and additions to the Law of the Republic of Kazakhstan "On Education," supplementing Article 55 with clauses 4, 5, and 6 as follows:

"External assessment educational achievements is one of the types of monitoring of the quality of education independent of organizations. External assessment of educational achievements is carried out in order to assess the quality of educational services and determine the level of students’ mastery of educational curricula, provided for by the State Standard of Higher Education."

In accordance with the State Program for the Development of Education in the Republic of Kazakhstan for 2011-2020, a system of external assessment of students' educational achievements (EAEA) will be introduced. The procedure is planned to be carried out as computer testing using new information technologies. The system is being introduced in order to assess the quality of educational services and determine the level of students' mastery of educational programs.

For two years, RCITMOiN conducted computer testing in all internship specialties: therapy, surgery, obstetrics and gynecology, pediatrics, pediatric surgery, general medical practice, and dentistry. The main purpose of such an independent examination is to improve the quality of medical education.

A comparative analysis of the testing results over the two years showed an improvement in the theoretical knowledge of graduates. This was due to the quality of the test task bank and the increased responsibility of both interns and teachers. Comparing average scores across universities, the stability of Karaganda State Medical University's indicators stands out: its average score in 2010 was 74.0, the best result among the medical universities, and in 2011 it was 80.3.

In 2011, the number of unsatisfactory grades across all universities decreased sharply (from 40% in 2010 to 7.5% in 2011). The lowest percentage of unsatisfactory grades (0.5%) was recorded in 2011 at the State Medical University of Semey.

Fig. 1. Test results (average score by university)

Fig. 2. Test results (% of unsatisfactory grades)

Fig. 3. Test results (average score by specialty)

Fig. 4. Test results (% of unsatisfactory grades by specialty)

An analysis of the testing results by specialty showed that the worst indicators of theoretical knowledge were recorded among interns studying in the specialties "general medical practice" and "dentistry."

According to the results of the unified computer testing, in the 2009-2010 academic year the percentage of unsatisfactory grades across all internship specialties was 34.8% and the average score was 63.0. In the 2010-2011 academic year, the percentage of unsatisfactory grades across all specialties decreased to 7.1%, and the average score rose to 80.0.


In 2011, the appeal commission was centralized and operated under RCITMOiN, which improved the quality of its work. Whereas in 2010 an appeal commission was created at each university, formed from the teaching staff of that same institution, and almost all submitted applications were satisfied, in 2011 representatives of all medical universities became members of a single centralized appeal commission. With this organization of the commission's work, only 16.9% of appeals were satisfied.

Computer testing also revealed some problems: an insufficient number of trained testologists and experts; the danger of a "bias" toward training interns to solve test problems at the expense of clinical training; the use of communication devices by interns during testing; and the lack of a legitimate regulatory framework.

According to the authors, solutions to the identified problems should be sought in holding training seminars for testologists and experts, approving a list of trained experts, conducting independent certification of practical skills (the examination should not be limited to testing alone), and purchasing and using equipment that jams cellular communications during testing.

An analysis of the results of two years of computer testing showed that such examinations of interns can become an effective tool for independent assessment of a university graduate's knowledge. On this basis, the assessment of knowledge can serve as the foundation for further development of a system of external assessment of graduates' educational achievements in medical education. Increasing the responsibility of the teaching staff for the quality training of specialists is of great importance here.

Received 05/07/2012


Test results

Test results need to be interpreted in a way that is consistent with the purpose of testing (see Table 4.1).

Table 4.1 - Areas of application of tests, purpose of testing, and interpretation of results

Professional selection
Purpose: selection of those who best meet the requirements, with knowledge and skills critical to the profession.
Interpretation: ranking of subjects by level of professional suitability and competence; comprehensive analysis of results.

Entrance testing
Purpose: selection of the most prepared (determination of a passing score); identification of gaps in the structure of knowledge.
Interpretation: ranking of subjects by level of preparedness; statistical processing of results.

Rating (determining a place in a group)
Purpose: determination of each subject's "place" in the group according to the selected criteria.
Interpretation: ranking of subjects by the measured parameter; statistical processing of results.

Current control, monitoring
Purpose: tracking the progress of the educational process; identification of gaps in the subjects' knowledge structure and of possible reasons for their occurrence.
Interpretation: analysis of the structure and profile of knowledge; statistical processing of results.

Distance education
Purpose: stimulation of students' cognitive activity; increasing motivation to learn; tracking the progress of the educational process; identification of gaps in the subjects' knowledge structure and of possible reasons for their occurrence.
Interpretation: ranking of subjects by level of preparedness; analysis by the teacher (tutor) of the structure and profile of knowledge; statistical processing of results.

Self-study (multimedia textbooks, training programs, etc.)
Purpose: stimulation of students' cognitive activity.
Interpretation: test results are interpreted by the test subjects independently or with "prompts" from the program.

As follows from Table 4.1, a test should be considered as a unity of: 1) a method; 2) the results obtained by that method; 3) the interpretation of those results.

The interpretation of test results is based mainly on the arithmetic mean, on indicators of the variation of test scores, and on so-called percentile norms, which show what percentage of subjects have a test result worse than that of the subject of interest.
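
A percentile norm of this kind can be computed in a few lines; the sketch below (with invented scores, not part of the original text) illustrates the definition:

```python
def percentile_norm(all_scores, score):
    """Percentage of subjects whose test result is worse than `score`."""
    worse = sum(1 for s in all_scores if s < score)
    return 100.0 * worse / len(all_scores)

totals = [6, 8, 9, 9, 10, 10, 11, 12, 14, 15, 18]  # illustrative total scores
print(percentile_norm(totals, 12))  # 63.63...: 7 of 11 subjects scored worse
```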

In entrance testing, professional selection, or determining a rating in a group, the main task in interpreting the results is to rank the test takers by level of preparedness. In monitoring or current control, the more important task is to analyze the structure and profile of knowledge. In independent work (distance learning, learning with multimedia textbooks, etc.), the main purpose of tests is to stimulate students' cognitive activity, give them the opportunity to assess their own progress, and reveal gaps in their knowledge.

Regardless of the scope of the test, test results must be subjected to statistical processing in order to determine the main characteristics of the test items and to check the reliability of the measurements and the validity of the test results.

Entrance testing. Primary processing of the results obtained in entrance testing comes down to compiling a table (matrix) of test results according to the rules described earlier (see Table 3.4). This makes it possible not only to assess clearly the level and structure of the test takers' preparedness, but also to identify the "strongest" in the tested group.

As noted in Chapter 3, the distribution of scores on well-designed tests should ideally be close to normal (for reasonably large groups of at least 20 people). Figure 4.1 shows, as an example, the distribution of points scored during entrance testing in a group of 80 people. The task was to select the 50 most prepared people from this group. The test contained 24 tasks, with 1 point awarded for each correct answer. Based on the total points scored, the selection committee identified the first 50 people with the most points and determined the passing score (in this example, 11 points).

Fig. 4.1 - Determining the passing score for entrance testing (example).

The maximum possible number of points in this example is 24.

The example shown in Fig. 4.1 is in a sense "ideal." If, in the same example, it had been necessary to select not 50 but 52 people (or, say, 47), certain difficulties would have arisen in setting the passing score: at a lower value (10 points) more people would pass than required, and vice versa. In this situation, the following solution can be proposed: the admissions committee sets a higher passing score, at which fewer people pass than required, and then selects the missing number from among those who fell slightly short of the passing score. Preference is given to those who best meet the requirements (for example, those with work experience in the chosen specialty, admission benefits, or a higher average score in their basic education documents). These same applicants may also be offered preparatory courses and the like for an additional fee.
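
A minimal sketch of this selection logic, with invented scores rather than the actual data of Fig. 4.1, shows both the cutoff and the tie problem described above:

```python
def passing_score(scores, places):
    """Cutoff = score of the last admitted candidate among the top `places`."""
    ranked = sorted(scores, reverse=True)
    cutoff = ranked[places - 1]
    admitted = sum(1 for s in scores if s >= cutoff)
    return cutoff, admitted

scores = [23, 21, 19, 18, 17, 15, 14, 13, 12, 11, 11, 11, 10, 9, 7, 5]
cutoff, admitted = passing_score(scores, places=10)
print(cutoff, admitted)  # 11 12: ties at the cutoff admit more than 10 people
```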

During entrance testing, in addition to determining the passing score, analysis of the structure and profile of knowledge is quite important (discussed below).

Current control (monitoring). Tests for current control and monitoring are created according to the same principles as tests for other purposes. But the main purpose of testing in this case is to track the progress of the educational process, to identify gaps in the structure of knowledge and distortions in the knowledge profile of each test taker, and to clarify the possible reasons for their appearance.

By knowledge structure one should understand, in general, a degree of completeness of a student's knowledge and skills that evenly covers all sections of the discipline (or several disciplines) and allows the subject to complete test tasks successfully regardless of which section of the discipline they belong to.

If a subject completes tasks (including quite difficult ones) in one section of the discipline but cannot complete tasks in another section (including easy ones), this indicates violations (gaps) in the structure of knowledge. Obviously, such violations can be individual or can be observed in a sufficiently large number of subjects. In the latter case, it is necessary to analyze the reasons for the gaps (poor presentation of the section or individual discipline, lack of methodological support or literature, etc.) and take measures to eliminate them.

A necessary condition for obtaining reliable information about the structure of knowledge is the representativeness of the test items with respect to the body of knowledge being tested. In other words, the tasks included in the test must cover all sections of the discipline, course, etc. sufficiently fully and evenly. It is desirable that each section of the discipline be represented by several tasks of varying difficulty.

For convenience in analyzing the structure of knowledge, it is advisable to arrange the test results in a matrix as shown in the example (Table 4.2). In this example, each section of the discipline is represented in the test by five tasks of varying difficulty. The results of subject No. 2, who completed the test tasks of variant No. 7, show an almost complete lack of knowledge of Section 2 of the discipline, whereas he more or less coped with the tasks of Section 1. In such cases one speaks of gaps in the structure of knowledge.

The term knowledge profile, by which testologists mean the set of scores in a row of the test results table, can be illustrated by the example in Table 4.3 (a fragment of the matrix from Table 3.4).

Table 4.2 - Analysis of the structure of knowledge based on the matrix of test results

Columns: test variant no.; scores on test tasks by section of the discipline (within each section, tasks are arranged in order of increasing difficulty): Section 1, Section 2.

Table 4.3 - Distorted (row No. 6) and undistorted (rows No. 5 and No. 7) knowledge profiles

Columns: test variant no.; scores on test tasks (tasks are arranged in order of increasing difficulty: No. 1 /easiest/ → No. 10 /most difficult/); total test score.

As can be seen from the example, the subjects whose results are in rows 5 and 6 scored the same number of points on the test; however, subject No. 5 coped with the first five, easiest tasks and failed the rest. The results of subject No. 6 are somewhat illogical: having failed relatively easy tasks at the beginning of the test, he managed to complete more difficult ones. In such cases one speaks of a distorted (inverted) knowledge profile.
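
With tasks ordered by increasing difficulty, such an inversion can be detected automatically. The sketch below (an illustration, not from the original text) flags a profile as distorted if any failed task is followed by a solved, harder one; the profiles are made up, analogous to rows 5 and 6:

```python
def is_distorted(profile):
    """profile: 0/1 scores with tasks ordered from easiest to hardest.
    A logical (Guttman-like) profile has no failed task followed by a
    solved, more difficult one."""
    return any(earlier == 0 and later == 1
               for i, earlier in enumerate(profile)
               for later in profile[i + 1:])

profile_5 = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # first five easiest tasks solved
profile_6 = [0, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # same total, inverted pattern
print(is_distorted(profile_5))  # False
print(is_distorted(profile_6))  # True
```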

The reasons for distortions in the knowledge profile can vary widely: a poorly designed test, individual psychological characteristics of the test taker, low quality of teaching, lack of methodological support and literature, and so on. According to Prof. V.S. Avanesov and other testing specialists, the task of good education is to produce correct (undistorted) knowledge profiles.

Analysis of the structure and profile of knowledge during entrance testing and current control (monitoring) allows teachers to form a general idea of the test takers' level of preparedness, promptly identify gaps in knowledge and errors in teaching methods, and take appropriate measures. In educational institutions implementing quality management systems, constant monitoring of the learning process using testing technologies should be one of the main tools for continuous adjustment (improvement) of the educational process.

Distance learning. Existing distance learning systems (the "Prometheus" LMS, "Web-class KhPI", Lotus Learning Space, etc.) as a rule provide current and final control of the mastery of educational material. Control can be carried out with a separate testing program, or testing modules (programs) are built directly into the distance courses. In the latter case, a distance course can be used for independent work, without the participation of a teacher.

Distance learning systems, or the distance courses themselves, must be equipped with programs that can not only save each subject's test results but also enable the teacher (tutor) or course developer to perform statistical processing of them with minimal effort, in order to determine the reliability of the pedagogical measurement and the validity of the test results. Unfortunately, not all distance learning systems in use provide this capability.

Tests developed for use in distance learning are subject to the same requirements as tests for ongoing control (monitoring).

Independent work. As experts note, well-developed tests have high learning potential and can significantly increase motivation to learn and, accordingly, its effectiveness. Recently, teaching aids such as training courses, multimedia textbooks, and electronic simulators, which can collectively be called educational electronic publications (EEP), have been used increasingly in the educational process. Their main advantage is the possibility of independent learning with minimal teacher involvement. An EEP must be provided with tests for current and final control, preferably ones that not only show the student exactly what he does not know, but also "explain" why a given answer is incorrect and "recommend" returning to the appropriate section for further study.

Tests for EEP, like tests for other purposes, must be representative of the body of knowledge and skills being tested. Equally important is preliminary trialing of the tasks included in such tests to determine their difficulty and other characteristics. With information about the difficulty of each task, the developer of the EEP can ensure that during testing the program presents tasks to the subject "from the easiest to the most difficult." It is desirable to have a sufficiently large number of parallel tasks so that, in repeated testing, the test taker receives new tasks that he has not completed before.

In multimedia textbooks and other educational publications, as a rule, there is no need to save test results, much less to process them statistically. The main task of the tests used in an EEP is to stimulate the student's cognitive activity and adjust his individual "learning trajectory."

REPORT

based on the analysis of the results of trial diagnostic testing in the Unified State Exam format in mathematics, the Russian language, and elective subjects

In accordance with the plan for preparing 11th grade graduates for the state (final) certification, approved by Gymnasium order No. 353 of September 20, 2012, and order No. 406 of October 20, 2012, "On conducting trial diagnostic testing in the Unified State Exam format for 11th grade graduates," and in order to prepare 11th grade graduates for the state final certification and to practice skills in working with Unified State Exam forms and with tests, 11th grade students took part in diagnostic testing in the Russian language, mathematics, and elective subjects.

Staffing




Subject             Teacher                            Category
Mathematics         Safonova L.G.                      1st qualification category
Russian language    Ziyatdinova A.I.                   highest qualification category
Physics             Gilmanova N.N.                     1st qualification category
Social science      Kuzyukova O.V.                     highest qualification category
Computer science    Salakhieva E.M.                    1st qualification category
History             Karametdinova R.F.                 1st qualification category
English language    Ismagilova G.I., Shamseeva A.D.    1st qualification category
Biology             Kropacheva L.L.                    highest qualification category
Chemistry           Yuskaeva Ch.M.                     1st qualification category

Thus, the 11th grade parallel is taught by qualified teachers.

In the 11th grade parallel in the 2012/2013 academic year, 45 graduates are studying in two classes. A total of 40 students (89% of the total) took part in the diagnostic testing.

41 of the 45 11th grade students took part in the diagnostic testing in the Russian language (91% of the total number). Amirova R., Badertdinova L., Yakubova A., and Bilyalov A. did not take part.

The results of the trial Unified State Examination in the Russian language are presented in the table:


Russian language (minimum – 36)

Class   Below min. level (0-35)   Satisfactory (36-63)   Good (64-79)   Excellent (80-100)   Academic performance, %   Quality, %   Average score
11A              0                        12                   3                2                     100                  29            63.9
11B              0                         6                  11                7                     100                  75            72.8
Total            0                        18                  14                9                     100                  56            69.1

Thus, the success rate in the Russian language test is 100% and the quality is 56%. The average trial test score in the Russian language is 69.1 points: 63.9 in 11A and 72.8 in 11B. Let me remind you that under the contract assignment the average score in the Russian language must be at least 72 points. As you can see, this target was not met.
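
For reference, the summary percentages in these tables follow directly from the band counts; a small sketch using the totals row above illustrates the arithmetic:

```python
# Band counts from the totals row of the Russian language table above.
below_min, satisfactory, good, excellent = 0, 18, 14, 9
total = below_min + satisfactory + good + excellent          # 41 participants

performance = 100 * (total - below_min) / total  # share at or above the minimum
quality = 100 * (good + excellent) / total       # share at "good" or "excellent"
print(f"performance = {performance:.0f}%, quality = {quality:.0f}%")  # 100%, 56%
```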

In the 2012/2013 academic year, for the first time in the practice of the Unified State Exam, the order of Rosobrnadzor No. 3499-10 of August 29, 2012 established a minimum number of points in all Unified State Exam subjects, confirming that exam participants have mastered the basic general education programs of secondary (complete) general education in accordance with the requirements of the corresponding federal state educational standard. In the Russian language, the minimum is 36 points. In our trial testing, all 11th grade graduates completed the work with scores above the threshold. The lowest scores were earned by three students of class 11A: Fassakhova A. (52 points), Galiullin B. (52 points), and Khalimova A. (53 points).

9 graduates completed the work with scores above 80 points; the highest results were achieved by Plaksin V., 11B – 95 points; Gulyaeva T., 11B – 95 points; Nechaeva Y., 11B – 90 points; and Sitdikov D., 11B – 90 points.

A comparison with the results of the diagnostic testing in the Russian language written by the same students last academic year is shown in the diagram. Last year 97% of students completed the work satisfactorily; this year, 100%. This is understandable: last year, at the time of the work, the students were not yet prepared, as many topics had not been studied. The quality of work, unfortunately, remains at the same level.

Comparative results are also presented for each class separately. They show that in the humanities class the quality of work is lower than last year, even though this subject is studied there at the specialized level.

The analysis of errors showed that in the Russian language students made mistakes in tasks on text analysis, punctuation marks, combined and separate spelling of words, determining methods of word formation, choosing linguistic means of expression, and determining the types of complex sentences. In the Part C task, due to inattentive reading of the text, students were unable to correctly formulate and comment on the problem or select arguments.
36 of the 45 students took part in the diagnostic testing in mathematics (80% of the total number). The following did not take part: Badertdinova L., Farkhutdinova I., Yakubova A., Bilyalov A., Kiyko D., Nechaeva Y., Sklyarov A., Sklyarova V.

The results of the trial testing in mathematics are presented in the table:


Mathematics (minimum – 24)

Class   Below min. level (0-23)   Satisfactory (24-46)   Good (47-64)   Excellent (65-100)   Academic performance, %   Quality, %   Average score
11A              1                        10                   5                0                      94                  33            37.8
11B              0                         7                   9                4                     100                  65            52
Total            1                        17                  14                4                      97                  53            45.6

Thus, the success rate in the mathematics test is 97% and the quality is 53%. The average trial test score in mathematics is 45.6 points: 37.8 in 11A and 52 in 11B. Let me remind you that under the contract assignment the average score in mathematics must be at least 57 points. As you can see, this target was not met.

In mathematics, the minimum is set at 24 points. In the trial testing, one graduate of class 11A did not clear the threshold: Vakhitova V., 11A – 20 points. One graduate of class 11A, Khalimova A., scored exactly 24 points, i.e., her performance is on the verge of a "2". Only 4 graduates performed at an excellent level, of whom only one scored above 80 points: Plaksin V., 11B – 81 points.

A comparison with the results of the diagnostic testing in mathematics written by the same students last academic year is shown in the diagram. Last year 76% of students completed the work above the threshold; this year, 97%. This is understandable: last year, at the time of the work, the students were not yet prepared, as many topics had not been studied. But the results might have been different had all test participants taken part (I do not presume to judge whether for better or worse).

Comparative results are also presented for each class separately. They show that in the humanities class the success rate has increased.

Analysis of errors showed that in mathematics students made mistakes in the following tasks: the derivative and investigation of a function, transformation of expressions, planimetry and stereometry problems, and solving word problems. In the tasks of Part C:

Only 35% coped with solving trigonometric equations and systems of equations; the main shortcoming in this task was that the solution was entirely correct but the answer was written down incorrectly;

43% completed task C3 – solving inequalities;

19% completed geometry tasks;

No one completed the task with parameters, and only two students attempted it (Sitdikov D., Plaksin V.);

Task C6 was also completed only by D. Sitdikov and V. Plaksin.
Trial diagnostic tests in elective subjects were conducted for 11th grade students. For the Unified State Exam in subjects of their choice, students selected the following: physics – 15 people (33% of all 11th grade graduates), social studies – 20 people (44%), history – 11 people (24%), English – 10 people (22%), literature – 8 people (18%), chemistry – 9 people (20%), biology – 8 students (18%), computer science – 9 students (20%).

11 graduates chose only one elective subject, 25 graduates - 2 subjects and 9 graduates - 3 subjects.

Data on the number of people who took part in trial testing are presented in the table:


Subject            Chose for the State Exam   Took part in testing   % taking part   Absentees
physics                    15                         14                  93%        Kiiko D.
social science             20                         17                  85%        Badertdinova L., Farkhutdinova I., Yakubova A.
history                    11                          8                  73%        Ivanova K., Senkina E., Tsaturyan R.
English language           10                         10                 100%        –
literature                  6                          5                  83%        Amirova R.
chemistry                   9                          8                  89%        Bilyalov A.
biology                     8                          3                  37%        Bilyalov A., Nechaeva Yu., Sklyarov A., Sklyarova V.


The choice of specialized subjects was also analyzed in relation to the specialized classes. In the social and humanities class, 15 students chose social studies (more than half of the class) and 6 chose history. In the physics and mathematics class, half the class, 12 students, chose physics. The choice of subjects indicates that students are following their chosen profile. Some students did not choose any of the core subjects apart from the compulsory ones. In 11A these are Safina I. (biology, chemistry) and Gainutdinov D. (physics). In 11B they are Govorukhina I. and Tsybulya K. (social studies, history, English), Ivanova K. (history, English, literature), Ignatieva A. and Kaimakov M. (social studies), Senkina E. (history, English), and Tsaturyan R. (social studies, history).

The results of the trial testing are presented in the tables:


Physics (minimum – 36)

Class      Below min. level (0-35)   Satisfactory (36-52)   Good (53-67)   Excellent (68-100)   Academic performance, %   Quality, %   Average score
11A (3)             1                        2                   0                0                      67                   0            38.7
11B (12)            0                        3                   6                2                     100                  73            57.7
Total               1                        5                   6                2                      93                  62            53.6

Kiiko D., a student in grade 11B, did not take part in the testing.

The success rate of the diagnostic testing in physics is 93% and the quality is 62%. The average test score is 53.6.

According to the order of Rosobrnadzor No. 3499-10 of August 29, 2012, the minimum in physics is set at 36 points. In the trial testing, one graduate of class 11A did not clear the threshold: Khalimova A., 11A – 30 points. One graduate of class 11A, Ibragimova A., scored exactly 36 points with a very weak paper, i.e., her performance is on the verge of a "2". Moreover, Albina Ibragimova chose this subject "just in case," just as Gazetdinov Albert chose computer science "just in case" in 2011 and accordingly completed the work with a "2". Two graduates completed the work at an excellent level and scored above 80 points: Sitdikov D., 11B – 81 points, and Plaksin V., 11B – 86 points.


Social studies (minimum – 39)

Class      Below min. level (0-38)   Satisfactory (39-54)   Good (55-66)   Excellent (67-100)   Academic performance, %   Quality, %   Average score
11A (15)            0                        7                   5                0                     100                  42            52.8
11B (5)             1                        1                   3                0                      80                  75            55.6
Total               1                        8                   8                0                      94                  50            53.6

Badertdinova L., Farkhutdinova I., Yakubova A., students of grade 11A, did not take part in the testing.

The success rate of the diagnostic testing in social studies is 94% and the quality is 50%. The average trial test score is 53.6 points.

According to the order of Rosobrnadzor No. 3499-10 of August 29, 2012, the minimum in social studies is 39 points. In the trial testing, one graduate of grade 11B did not clear the threshold: Kaymakov M. – 37 points. Vakhitova V., a student of grade 11A, scored exactly 39 points.

The analysis of errors showed that in social studies students made mistakes in tasks related to economics (factors of production, reference to social realities and graphic information). Defining terms and concepts caused difficulties. In the tasks of Part C, difficulties arose in listing features and phenomena, applying concepts to a given text, and illustrating theoretical positions with examples.

The teacher also noted a peculiarity of the work: students coped with tasks of increased difficulty but made mistakes in basic-level tasks.


History (minimum – 32)

Class      Below min. level (0-31)   Satisfactory (32-49)   Good (50-67)   Excellent (68-100)   Academic performance, %   Quality, %   Average score
11A (6)             0                        4                   1                1                     100                  33            50.3
11B (5)             0                        0                   1                1                     100                 100            66
Total               0                        4                   2                2                     100                  50            54.3

Ivanova K., Senkina E., Tsaturyan R., students of grade 11B did not take part in the testing.

The success rate of the diagnostic testing in history is 100% and the quality is 50%. The average trial test score is 54.3 points.

According to the order of Rosobrnadzor No. 3499-10 of August 29, 2012, the minimum in history is 32 points. Ibragimova A., a student of grade 11A, scored the lowest number of points – 37. As a result, Albina is prepared to pass the Unified State Exam in neither physics nor history. Two students wrote their work at an excellent level, but no one scored above 80 points. The highest score among the history test participants is 69 points (Saifullina A., Tsybulya K.).

Analysis of errors showed that in history students made mistakes in tasks on establishing the chronological sequence of events. All test participants had difficulty working with various sources of information. In the tasks of Part C, there were difficulties in formulating one's own position on the issues under discussion, using historical information for argumentation, and presenting the results of historical study in free form.


English language (minimum – 20)

Class      Below min. level (0-19)   Satisfactory (20-58)   Good (59-83)   Excellent (84-100)   Academic performance, %   Quality, %   Average score
11A (6)             0                        2                   3                1                     100                  67            66.2
11B (4)             0                        0                   1                3                     100                 100            88
Total               0                        2                   4                4                     100                  80            74.9

100% of graduates who chose this subject took part in the testing.

The success rate of the diagnostic testing in English is 100% and the quality is 80%. The average test score is 74.9 points.

According to the order of Rosobrnadzor No. 3499-10 of August 29, 2012, the minimum in English is 20 points. Vakhitova V., a student of grade 11A, scored the lowest number of points – 48. Four students wrote their work at an excellent level, three of them with high scores: Govorukhina I., 11B – 97 points; Senkina E., 11B – 93 points; Ivanova K., 11B – 92 points. Farkhutdinova I., grade 11A, completed the work with 85 points.

All the mistakes made stem from inattentive reading of the text and insufficient knowledge of the vocabulary found in it. Difficulties also arose in comprehending the listening passage.


Literature (minimum – 32)

Class      Below min. level (0-31)   Satisfactory (32-54)   Good (55-66)   Excellent (67-100)   Academic performance, %   Quality, %   Average score
11A (5)             0                        1                   1                2                     100                  75            62
11B (1)             0                        0                   1                0                     100                 100            60
Total               0                        1                   2                2                     100                  80            61.6

Amirova R. did not take part in testing.

The success rate of the diagnostic testing in literature is 100% and the quality is 80%. The average test score is 61.6 points.

According to the order of Rosobrnadzor No. 3499-10 of August 29, 2012, the minimum in literature is 32 points. Bagautdinov A., a student of class 11A, scored the lowest number of points – 43. Two students wrote their work at an excellent level, though not above 80 points: Zateeva N., 11A – 73 points, and Saifullina A., 11A – 73 points.

Mistakes were made in identifying the means of expressiveness of a lyrical work. In the tasks of Part C, students were unable to provide the necessary arguments.


Chemistry (minimum – 36)

Class      Below min. level (0-35)   Satisfactory (36-55)   Good (56-72)   Excellent (73-100)   Academic performance, %   Quality, %   Average score
11A (1)             0                        1                   0                0                     100                   0            47
11B (8)             0                        5                   2                0                     100                28.5            52.6
Total               0                        6                   2                0                     100                  25            51.9

Bilyalov A., Gulyaeva T. did not take part in testing.

The success rate of the diagnostic testing in chemistry is 100%, and the quality is 25% (the lowest of all subjects). The average trial test score is 51.9 points.

According to the order of Rosobrnadzor No. 3499-10 of August 29, 2012, the minimum in chemistry is set at 36 points. Almost all test participants wrote at a weak satisfactory level. Mistakes were made on many tasks from the chemistry course. Students did not attempt many problems because the material is still to be studied in the 11th grade course.


Biology (minimum – 36)

Class      Below min. level (0-35)   Satisfactory (36-54)   Good (55-71)   Excellent (72-100)   Academic performance, %   Quality, %   Average score
11A (1)             0                        0                   1                0                     100                 100            68
11B (7)             0                        0                   1                1                     100                 100            67
Total               0                        0                   2                1                     100                 100            67.5

Only 4 out of 8 students took part in testing in this subject (A. Bilyalov, Yu. Nechaeva, A. Sklyarov, V. Sklyarova did not participate).

The success rate and quality of the diagnostic testing in biology are both 100%. The average trial test score is 67.5 points.

According to the order of Rosobrnadzor No. 3499-10 of August 29, 2012, the minimum in biology is set at 36 points. Test participants made mistakes on topics from grades 8-9, i.e., topics that the students had not reviewed.
In computer science, school trial testing was not carried out because, shortly before, by order of the Ministry of Education and Science of the Republic of Tatarstan, all schools in the republic, including ours, participated in an experiment in conducting the Unified State Exam in computer science in computerized form. On October 23, 26, and 30, 27 11th grade graduates, including those who chose computer science, wrote the computerized Unified State Exam. The results were compiled in a specially installed program and sent to the IMC. The test results have not yet been reported.
Based on the results of the trial Unified State Examination, a rating of graduates was compiled based on the total score and average Unified State Exam score.

12 graduates have a score higher than 220 points (since prestigious universities require a passing score of at least 220 points): Govorukhina I., Saifullina A., Tsybulya K., Ivanova K., Gulyaeva T., Salikova S., Khojakhanov B., Zateeva N., Plaksin V., Butakova K., Rafikova L., Sitdikov D., Sadykova A., Khasanshina G.

2 graduates have an average score across all Unified State Exams above 80: Plaksin V. – 87.3 points and Sitdikov D. – 83.3 points.

12 graduates have an average score across all Unified State Exams below 50. The lowest average scores belong to two graduates: Vakhitova V. – 40.8 points and Khalimova A. – 37.8 points.
When it comes to filling out Unified State Exam forms, 11th grade graduates were more responsible than 9th graders. Still, some participants did not enter their passport details on the form or did not sign in the appropriate box. I ask subject teachers of both grades 9 and 11 to draw students' attention to how the letter "C" is written on the forms: unclear or incorrect writing of letters will result in the student's answers not being credited, which will affect the final result.
Based on the above, CONCLUSIONS and RECOMMENDATIONS:


  1. All subject teachers should take control of preparing students for the Unified State Exam, develop a plan for eliminating gaps in knowledge, and work more often with test tasks that involve entering answers on special forms, both in class and in extracurricular activities.

  2. Prepare graduates for the diagnostic work conducted through the StatGrad system (December 12 – Russian language, December 18 – mathematics).

  3. Prepare graduates for the paid city diagnostic work (December 19 – Russian language for grades 9 and 11, December 20 – mathematics for grades 9 and 11).

  4. Class teachers Shamseeva A.D. and Ziyatdinova A.I. are to bring the results of the trial tests to the attention of the parents of 11th grade students at the parent meeting on November 26, 2012.

  5. Deputy Director for SD Krasnoperova A.R. and class teachers Shamseeva A.D. and Ziyatdinova A.I. are to conduct individual conversations with students who failed the tests in particular subjects and with their parents.
The report was compiled by Deputy Director for SD A.R. Krasnoperova.

The report was read out at a meeting with the director on November 19, 2012.


