At the high school where I work, I currently wear a lot of hats and have filled multiple roles in the past few years. While most know me as a coach of physical preparation for football, I've also worked in the following roles:

  1. Driver education teacher – This was short-lived. The unit was cut because there weren't enough students to justify two driver education teachers.
  2. In-school suspension supervisor – In-school suspension functions as a holding area for any student kicked out of class. Students are placed here until school administration can figure out what to do with them. It may be an all-day punishment where students have to sit and work. They aren't allowed to talk, sleep or have any electronics. The supervisor monitors the room, makes sure the rules are followed and refers anyone who breaks them back to Student Affairs.
  3. Credit recovery teacher – Credit recovery is a computer-based program in which students retake courses they failed through online classes.
  4. Physical education teacher – This covers a lot of areas, but currently I teach three sections of weight training. I've also taught a course called team sports, which is pretty much all of the classic physical education games (basketball, flag football, soccer, volleyball, softball).
  5. Tutor – I've been tutoring more this year and working with students who are on the border of passing graduation benchmarks for reading.
  6. SERVE coordinator – This position involves clearing volunteers through a background check as well as booking guest speakers for certain events.
  7. Mentor for at-risk students – This is a role that I recently acquired. The head football coach heads this committee and asked me to assist. Basically, students with a long history of poor academic and behavioral standing are placed on a list and are periodically checked on by those involved with this group.
  8. Testing coordinator – Of all the titles I've held, this was the most intensive assignment with the most that could go wrong. This position involves everything related to state-mandated standardized tests. Every year, more of these tests become mandatory, and the coordinator must book testing locations, designate test administrators and proctors, train everyone involved in test administration, prepare reports of any irregularities or score invalidations, and work with guidance, administration, district and state officials to make sure testing regulations are upheld.

As you can see, the public school system has a way of finding work for those who are willing to do it. Honestly, I never minded any of these roles, and the seemingly least desirable positions (in-school suspension supervisor, credit recovery teacher) were often the ones I minded least. However, the one position that made me start questioning both the educational system and the norms of training in America was testing coordinator.

In this article, I'll draw some parallels between the education system and the sports training system and point out one fairly common and critical flaw: the notion that constant testing, assessment or competition is a good thing for development.


Education and the Obsession with Testing

Many underlying factors have contributed to the amount of testing students undergo, but examining all of them is outside the scope of this article. In particular, standardized tests in subjects such as reading, math and science are being given at staggering rates.

In Florida, at the high school level, we have subject area tests called End of Course exams (EOC) for history, algebra I, algebra II, geometry and biology. Of these, the algebra I exam is a graduation benchmark, meaning that in addition to earning a passing grade in the course, a student must also earn a certain scale score to pass the benchmark required for graduation. There are also graduation benchmarks in reading and writing. In the past, these fell under the umbrella of the infamous FCAT exam, but they're now part of a new test called the FSA, which is pretty much a relabeling of similar tests.

These benchmarks are built on the Common Core standards for what a student "should" be able to do. However, every student is assessed on the same exam regardless of individual circumstances. Did you just move to this country last school year without speaking any English? That's fine, but this is your benchmark and you had better pass it. Students with learning disabilities face similar situations.

If these students don't pass the exam during their benchmark year, they must take it two to four more times the following year. In most cases, they will be given an intensive math or reading class to assist them, but if they fail again the following year, they may not receive another remediation course because, depending on the subject, one may not be available. They may still be tested two to four times without any remediation at all. Administrators say that practicing the exam helps students learn which areas they need to work on for the next attempt, but without remediation, it may not make much of a difference. Administrators also claim that students are assessed often because it provides data. While this may be true, how is that data actually being used to help students succeed?


This proves to be a wildly inefficient approach with many inconsistencies and low success rates. One of these EOC exams is given in September as a retake for students who failed, but many of those students had no remediation. So in this situation, the student failed, did nothing over the summer and is then expected to come back and test in the hopes that he does better. It doesn't make any sense.

Teachers also begin to teach to the test, covering only the material that appears on the exam or teaching test-taking strategies to work the exam for a higher score. While this may help with short-term success on these exams, what about the long-term development of the student? What about actually learning what is necessary to succeed on a greater level?

Training and Testing

Education isn't the only place where testing is a hot item. In sports training, and training in general, many are obsessed with testing to see what gains have been made. All too often, we see short, abrupt training cycles capped with some kind of assessment of the results. Additionally, to get those results, many attempt to peak, taper or specialize in drills that serve much the same purpose as the teaching-to-the-test described above.

In training for sport, all that matters on the large scale is results in competition. Everything that contributes to these results is a means to an end. So regardless of whether you're talking weights, sprints, jumps, cone drills or so on, these things don't win games on their own. They can't exist on their own or be the deciding factor as to why one team is a conference champion and another is 0-10. However, from the standpoint of creating a better athlete, many things such as designated test days or tests for things that may or may not have relevance to the actual sport are largely unnecessary.

One example of this is testing one-rep max lifts in the early off-season or upon the arrival of incoming athletes. Dedicating large amounts of time to this is unnecessary, and it can be dangerous: many of these athletes have just come off a long season and aren't prepared to perform maxes, or they're returning to school from a break and didn't train adequately on their own. Some coaches still do it because they want data on which to base training percentages. In reality, that data can be gathered through other measures, such as observing what athletes can or can't do during movements or monitoring weekly progress through submaximal efforts. If an athlete performs a set number of repetitions with a certain weight in week one of a training program and handles a weight that is 10–15 percent higher for the same repetitions in week six, it can be said that the athlete got stronger. We don't need to peak, taper and test a 1RM. This may not apply to an advanced strength athlete, but for athletes in sports that don't require lifting weights as part of the game, this works fine as an ongoing assessment.
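To make the submaximal monitoring idea concrete, here is a minimal sketch in Python. It uses the well-known Epley formula (estimated 1RM = weight × (1 + reps/30)) to estimate strength from submaximal sets; the weights and rep counts are hypothetical, and the formula is one common estimator among several, not something the article prescribes:

```python
def estimated_1rm(weight, reps):
    """Epley estimate of a one-rep max from a submaximal set."""
    return weight * (1 + reps / 30)

def percent_change(old, new):
    """Percent change between two estimates."""
    return (new - old) / old * 100

# Hypothetical athlete: 5 reps at 200 lb in week one,
# 5 reps at 225 lb in week six -- no max-out day required.
week1 = estimated_1rm(200, 5)
week6 = estimated_1rm(225, 5)
print(round(percent_change(week1, week6), 1))  # 12.5
```

Because the rep count is held constant, the estimated gain here simply tracks the 12.5 percent jump in working weight, which is exactly the kind of week-over-week comparison the paragraph above describes.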


On a similar note, specializing field time in cone agility drills is another form of teaching to the test. Sure, we could specialize in this, shave some time off a pro agility and tell everyone that our athletes became so much more agile. However, if they can't change direction in an open environment and react to opponents, it really doesn't matter. Rather than break out the cones, I would rather watch what athletes do during change-of-direction work. Are they more fluid and less labored? Do they use better mechanics? A checklist can be used in session to monitor this on a daily, weekly or monthly basis. Even in linear sprint work, a designated test day isn't necessary; times can be taken on a session-to-session basis while observing mechanics.

Even in strength sports, some people constantly feel the need to go heavy or "feel the weight." The problem is that constantly going heavy cuts into total workload, even when strength gains are being made regularly. Lifting heavy is necessary to gain strength in these disciplines, but the loads don't always need to be maximal, and we don't always need to test whether strength was gained.

Many cite the classic example of the Bulgarians and their "maxing out" as an argument against building up the total amount of work over time and in favor of simply going heavy. Sure, the Bulgarians lifted to a training max in sessions that weren't overly long or high in volume. However, they also did this multiple times a day, multiple times a week. When all the work is added up, it is much higher than maxing out on two or three lifts once a week. Even in a relatively low-volume system built around maximal effort and dynamic effort training, the cumulative volume of all the lifts needs to be counted. Usually, this includes work in secondary movements that fall in the submaximal category (specialized developmental lifts), accessory work (specialized preparatory lifts), submaximal percentages used while working up on maximal effort days, and submaximal percentages used on dynamic effort days.


If all you do is test, when do you actually study and perfect your craft? In education, the process of learning is disrupted by testing, which blocks the students' ability to actually process information and learn skills that can be used later in the educational process. In training, we use testing for similar reasons—to view where our athletes are currently and what gains they have made. We then analyze the data to see where they need work. However, with a skilled set of eyes as well as coherently designed training, testing may not need to be done with regularity. I'm not saying that you should never assess, but constantly attempting to assess disrupts the long-term development of an athlete. Assessment can be part of the process without having to designate it as a separate entity.
