BAD GRAMMAR CAN INDICATE GOOD THINKING
– Charles James
This research project started as an examination of a simple question
“Is there a correlation between poor spelling and grammar of students
at the beginning of a course and ultimate academic outcomes?”
The answer is “Yes – but not what you think!”
In every Higher Education staff-room across the English speaking world lecturers bemoan the quality of the written English of their students. The inability of many students to compose a complex sentence diminishes the quality of their work.
I believed there was a correlation between the quality of spelling and grammar of students at the beginning of a course and ultimate academic outcomes. I could not find any reports of empirical research to support my belief.
I had to undertake the empirical research myself.
My then employer Bradford College kindly granted me timetable hours towards the research project.
An obvious approach would have been to take a cohort of students entering the LLB course and to track their progress. This was my first thought.
However, there were three problems.
First – If I believe that poor grammar will impede a student’s progress, then as his teacher I am duty bound to tell the student. If as a consequence the student obtains help to improve his grammar that he would not otherwise seek, this will be good for the student, but it will reduce the value of the research project.
Second – The observer effect from the famous Hawthorne experiments and the known impact of teacher expectation on performance mean that my cohort could be influenced by the fact that research was being conducted.
Third – I would have to wait 3 years for the cohort to graduate!
I therefore decided to take the cohort that had just completed their degrees, the 2005/2008 LLB (Bachelor of Laws) cohort. None of the above problems would affect the cohort.
Our Registry department kindly sent me an Excel spreadsheet with all the information Registry has about the 2005/2008 LLB cohort – date of birth, gender, ethnicity, prior attainment, learning difficulties etc.
My first concern was that students should be anonymous for reasons of privacy and data security. I deleted the surname and forenames columns, so students are only identified by enrolment numbers.
There were no international students.
I was only interested in the outcomes, so I deleted information from years other than the final year of the student. It happened that no members of the cohort were identified as having a learning disability, so that column was also removed.
As a Law lecturer, I had access to past Law assignments and examination scripts. The first written work of the First Year in 2005/06 was a 2,000-word English Legal Process assignment written by students in late November of their first year. Teaching does not begin until late September, and grammar was not then high on the academic agenda.
I decided I could safely take the work submitted in Week 8 of the First Year as a proxy for the state of the student’s grammar and spelling at the outset of the course.
My first problem was that English Legal Process is taken by many students, and each group has distinctly different characteristics. Our evening LLB classes are generally mature students, our Accountancy and Law and Marketing and Law students are not pure Law students, and the LLB (Advanced Standing) students are of lower initial academic attainment.
To narrow the research I only used students studying the full-time LLB.
Then came the sheer grind of assessing the grammar and spelling of the students’ work. I was daunted by the idea of analysing a large number of 2,000-word assignments for grammar and spelling. I explored computer analysis, but found that none of the existing diagnostic systems seemed to be able to do what I needed.
I was rescued by a DfES document, “Writing in English as an Additional Language” by Professor Lynne Cameron and Dr Sharon Besser. These researchers took 100-word samples from the scripts and analysed those samples. I therefore only had to analyse 100 or so words from each script.
Cameron and Besser’s analyses were much more complex than mine. For my purposes the 100 word extracts proved sufficient to generate significant results.
Some students begin an assignment by repeating the question set as their first paragraph. To avoid this complication I decided to take the first paragraph beginning after line 15. I decided to take enough words to make complete sentences, and so accepted the sample might occasionally run to perhaps 120 words.
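A minimal sketch of this sampling rule in Python (the function name, parameters, and sentence-boundary test are my own devising, not part of the original study):

```python
def take_sample(text, skip_lines=15, target=100, cap=120):
    """Take roughly `target` words starting after `skip_lines` lines,
    extending to the end of a sentence but never beyond `cap` words.
    (Thresholds mirror the rule described in the text; the mechanics
    are illustrative.)"""
    lines = text.splitlines()
    body = " ".join(lines[skip_lines:])   # skip the opening lines
    words = body.split()
    n = target
    # Extend past the target until the current sentence is complete.
    while n < len(words) and n < cap and not words[n - 1].endswith((".", "!", "?")):
        n += 1
    return " ".join(words[:n])
```

A real implementation would need to cope with abbreviations and quotation marks at sentence ends, which this sketch ignores.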
Selecting the sample was not straightforward. When I collected the scripts, they were in order with the lowest marks at the front and the highest marks at the back. I decided that taking the scripts in this order might influence me in my analysis. Subconsciously I might feel that the low marked scripts had more grammar errors than they really had, and I might have been lenient when analysing the higher scoring scripts. Some other order was needed. Alphabetical order was possible, but some ethnicities congregate around certain letters of the alphabet.
I decided on enrolment number as the order. Full of enthusiasm I started at the lowest number and worked through them. After analysing half the cohort, 16 scripts, I decided a 50% sample would meet my needs.
The initial result was so surprising that a colleague suggested my sample might be skewed, because the keener students registered early and the poorer students coming in on Clearing registered last. I therefore analysed the last 6 scripts. There was no difference between the results from the first registrants and the last.
The number of students making spelling errors was very small, thanks to spellcheckers. Had I been analysing examination scripts I suspect I would have had very different results. I decided analysis of word processed scripts for spelling was unlikely to generate data of value as it did not reflect the true spelling of the students.
The assignment I was analysing for the research was about the development of the courts of Common Law and the courts of Equity. To me as a lawyer, the difference between “equity” and “Equity” is enormous; “equity” means “fairness” and “Equity” refers specifically to the system of law originally practised in the courts of Equity and now practised in all civil courts. I recorded every error rigorously.
Then I reflected that I might be assessing failures of comprehension as failures of grammar. Law students in their first term are not “lawyers”. Would it be obvious to the ordinary person that “equity” and “Equity” were so very different? I decided to ignore “equity” and “Equity” and likewise “common law” and “Common Law”. I had to reanalyse the scripts.
The next issue was my own grammar. It is many years since my written grammar has been subject to scrutiny. I cannot recall being rigorously trained in the description of grammar or the parsing of sentences.
I leaned heavily on “Children’s Writing and Reading: Analysing Classroom Language” by Katharine Perera (but all errors are my own!).
I draw comfort from a passage in “Helping Students with Study Problems” by Moira Peelo. She quotes from Carey:
“I should define punctuation as being governed two-thirds by rule and one-third by personal taste. I shall endeavour not to stress the former to the exclusion of the latter, but I will not knuckle under to those who apparently claim for themselves complete freedom to do what they please in the matter…”
Peelo then says:
“Every generation appears to feel that they were the last to be taught “proper” English and see punctuation as a sign of all the evils which bedevil their world. One wishes to progress, but without spelling, language structure and use to be attached to all these emotive matters. But the academic medium is a means of communication and academic writers, however new, need to keep to some of the rules of its ilk.”
The personal grammar weakness of which I am most acutely aware is my propensity to use unnecessary commas. I readily concede that another person might analyse the scripts differently, and might achieve marginally different results.
It would be possible to analyse the scripts in infinitely more detail. I focused on “and” and “but” as the co-ordinating words – they are the simplest.
I did not enter into the complexities of “front loading” or count the number of words in the sentence before reaching the verb.
My intention was to establish whether there is a correlation between grammar and academic outcomes. If such a correlation exists, I or someone else can investigate more deeply. If there is no correlation at all then deep analysis would be a waste of effort.
Recognising that a later researcher might wish to check my work, or I might wish to revisit it, I decided to record the actual words used, the errors I had identified, and certain other information about sentence length and grammatical structures. Annexe A is a sample completed analysis sheet. The others are available for inspection. There are 22 in total out of a cohort of 32. Were I to embark upon a similar project again I would amend the form to place the sample and commentary above the results table. I would also state that we are analysing rather than marking.
Of the 22 samples, 19 are from one ethnic group, and the other three are each from a different ethnic group. I decided that an ethnic group of one person is not statistically meaningful, and so I jettisoned those three individuals.
The First Chart
My first chart was a simple chart of correlation between initial grammar at entry and academic outcomes. A close observer might note that I appear to have lost students between the 19 students in my reduced cohort and the numbers appearing on the scatter graphs. The explanation is that some students have identical results and appear at the same point on the scatter graph. Their results are however incorporated in the trend line.
The axes were fairly straightforward. The X-axis is the number of grammar errors in 100 words starting at 0 and going to 14. The Y-axis is academic attainment. A First Class Honours Degree would be valued at 75, a 2:1 at 65, a 2:2 at 55 and a Third at 45. There were some students who were still active but had not graduated yet. Illness, academic failure, or other personal circumstances meant that they had fallen behind. They were likely to graduate and so they were assigned a value of 35. Had any left in Year 3 their value would have been 25. Those who left during Year 2 were assigned a value of 15 and those who left in Year 1 had a value of 5.
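The outcome coding just described can be written as a simple lookup. The numeric values are those given above; the label strings are my own shorthand, not terms from the study:

```python
# Numeric coding of academic outcomes used for the Y-axis.
# Values follow the scheme described in the text; labels are illustrative.
OUTCOME_VALUES = {
    "First": 75,
    "2:1": 65,
    "2:2": 55,
    "Third": 45,
    "Active, not yet graduated": 35,
    "Left in Year 3": 25,
    "Left in Year 2": 15,
    "Left in Year 1": 5,
}

def outcome_value(outcome):
    """Map an outcome label to its Y-axis value."""
    return OUTCOME_VALUES[outcome]
```

Spacing the bands 10 points apart keeps the degree classifications and the non-completion outcomes on one continuous scale.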
I expected that students who made few errors of grammar in the beginning would ultimately do well, and students who made many errors of grammar would achieve poor academic outcomes. This would produce a slope running downward from left to right.
There clearly is a correlation, but not what I expected. Poor grammar now seems to be a predictor of academic success!
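For readers who want to see how such a trend line is obtained, a least-squares slope can be computed in a few lines of Python. The data points below are invented purely to illustrate the shape of the finding (more errors, higher attainment); they are not the study’s figures:

```python
def trend_slope(points):
    """Least-squares slope of y on x for a list of (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    return num / den

# Invented illustrative data: (grammar errors per 100 words, attainment value).
sample = [(1, 45), (2, 55), (4, 55), (6, 65), (9, 65), (12, 75)]
```

A positive slope corresponds to the upward trend line described here; the expected result would have been a negative one.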
Some exploration is required. The LLB students at Bradford College are not of the highest initial quality in terms of A Level results. The admission requirement for this cohort was 2 Cs at A level.
Most of the students are British citizens of Pakistani ethnicity, educated in Bradford. Many have English as an additional language, and some discuss class work in Punjabi on occasion. Many of the cohort come from wards that HEFCE (the Higher Education Funding Council for England) recognises as socially deprived.
Could this contribute to the unexpected result? If everyone in a cohort has weak grammar, then what would variations in the number of grammar errors mean? Those attempting to express complex thought, trying to string two ideas together, may make more grammar errors than students who are not trying to express complex thought. The better students might use language outside or beyond their comfort zone or zone of confident competence and so continue to develop their English language competence through a willingness to take risks and make errors.
The number of grammar errors could be an indication of attempts to explore more complex language in order to express more complex thought in their very first assignment. I confess that I had not recognised this possibility.
To test it, we should look at whether there is a correlation between the use of co-ordination words and academic outcomes. I had recognised only “and” and “but” as coordination words when undertaking the analysis, for the reasons explained above.
For the second chart, academic attainment stays on the Y-axis and the use of coordinating words (and/but only) is on the X-axis.
Students using coordination (and/but) to string two ideas together are the future better students. Grammar errors may sometimes be a by-product of their willingness to leave their comfort zone.
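The coordination count that feeds the second chart’s X-axis amounts to something like the following sketch; the tokenisation and punctuation-stripping details are my own assumptions:

```python
def count_coordination(sample_text):
    """Count occurrences of the co-ordinating words 'and' and 'but'
    (the only two counted in the analysis), case-insensitively."""
    words = sample_text.lower().split()
    return sum(1 for w in words if w.strip(".,;:!?") in ("and", "but"))
```

Only these two words were counted for the reasons explained above; subordinating conjunctions and other connectives were deliberately left out.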
In this group of students at least, there is a correlation between poor grammar at the outset and ultimate academic outcomes. The fine detail of the correlation matters less than the discovery that a correlation exists.
Teachers should perhaps view errors of grammar amongst weak students as not only a problem, but also as a possible sign of promise for the future. The use of coordinating words may be a better indicator than errors of grammar.
Extension of the Research
The logical extension is to carry out the same exercise in other parts of Higher Education where the ethnic composition, the social mix, and the prior attainment are different. Are my findings true for students of lower prior attainment only, or do they have wider application?
In 2009 I took voluntary redundancy and I have left teaching. Colleagues who do carry out similar research are requested to keep me posted. I may be contacted at
Following the conclusion of this research, and as a result of other discussions, two significant changes have been made at Bradford Law School. The entry requirement has been raised, excluding the poorest students. The first-year diet now includes small writing tasks from the outset, with one-to-one tutorials to help students generate significantly better work during Year 1.