College Quarterly
Fall 2013 - Volume 16 Number 4
Literacy and Learning
By Greg McKenna and Audrey J. Penner
Abstract

This study evaluates the influence of literacy on outcomes in college programs with defined course requirements. This approach overcomes the limitations of previous research by contextualizing literacy according to program requirements. Results suggest (a) learner literacy varies considerably among programs, (b) some socio-demographic variables are predictive of low literacy, (c) “magnet” programs exist that enrol more learners with low literacy skills, and (d) the influence of literacy, though significant, is not the only factor in successful outcomes. Practical implications are discussed, including the need to consider balanced approaches to assessing learner outcomes, to embed accommodative supports within some programs, and for institutions to identify “magnet” programs.


Introduction

In Canada, 8.9 million people aged 16-65 function below the minimum level for effective functioning in an information society, representing 42% of the Canadian labour force (Brink, 2007). The ability of post-secondary institutions to effectively train this relatively large percentage of the adult population is a concern given the relationship between literacy skills and educational outcomes (Desjardins, 2005). This issue is particularly relevant to community colleges, which remain an important contributor to Canadian labour force development because they typically provide training with direct employment links (McKenna, 2010). However, little research detailing links between the literacy levels of community college students, college programs of study and educational outcomes has been undertaken in a Canadian context. Research that is available focuses on community colleges in the United States of America (USA), where the mandate and type of education is often different from that found in Canada. Nonetheless, research from the USA can still provide a general framework from which to build studies to address Canadian needs.

These USA studies often evaluate differences in program outcomes between learners assigned to remedial or developmental courses, based upon various screening measures used by the respective colleges. Bettinger and Long (2005) and Calcagno (2007) found educational outcomes are essentially the same for academically prepared students and those who required developmental programming once institutional variation in remedial placement policy was taken into account. Calcagno (2007) further noted that 37% of their sample graduated high school without a college preparatory curriculum and that 25% of those who completed a preparatory high school curriculum functioned below expected levels in terms of foundational skills (e.g., writing, reading, math). In this and a variety of other studies (Crane, McKay, & Poziemski, 2002; Crews & Aragon, 2007), underprepared students who attended remedial programming tended to have more positive outcomes than underprepared students who did not. Though some variation in findings is noted, research generally supports the contention that students entering programs with adequately developed basic skills outperform those who do not have such skills. However, this research tends to compare learners across college programs and does not typically differentiate between groups enrolled in different programs that may have different academic demands.

Some studies have attempted to determine if a learner’s level of preparedness influences outcomes in particular programs of study. Using a descriptive approach, Seybert and Soltz (1992) compared the performance of learners who participated in remedial programming to college-wide averages in courses with a high demand for writing. They found those who participated in remedial programming tended to have lower grades and pass rates. However, without the use of inferential statistics, it is difficult to rely on this report. Goldstein and Perin (2008) attempted to address this apparent gap in the research by conducting a study that analyzed the relationship between:

  1. literacy, demographic and academic variables and achievement in college content courses, and
  2. the outcomes of those who were placed in and completed remedial programming and those who were not placed in remedial programming.

Using data from the college screening measure and participation in remedial programming (reading and writing), Goldstein and Perin compared students enrolled in a literacy-intensive course, Introduction to Psychology. All variables were coded and analyzed as categorical. Results indicated that groups of students with college-level English skills showed significantly higher achievement than underprepared students who did not complete remediation. Demographic characteristics of those in remedial programming were also predictive of success. Of the variables relevant to the current study, those aged 30 and older and those who had completed high school were more likely to be successful, while gender and primary language were not predictive of outcome. Results also demonstrated there was no significant difference between the achievement of those who were underprepared but participated in remedial programming and those who were fully prepared upon college entry.

Illich, Hagan and McCallister (2004) explored the relationship between participation in remedial programming and outcomes in specific courses; some courses were directly related to the learner’s domain of weakness, while others were seemingly unrelated to it. Using data from three academic years, the authors determined that significant differences were present between three groups of learners: those who were (1) enrolled in college courses only, (2) concurrently enrolled in remedial and college courses and passed all remedial courses, and (3) concurrently enrolled in remedial and college courses and failed at least one of the remedial courses. Overall, the results supported the view that those who do not complete remedial courses perform more poorly than learners who do not require remedial intervention and learners who require remedial intervention but successfully complete remedial programming. These results held even after controlling for type of college course taken and scores on the screening measure.

Taken as a whole, the majority of research suggests that prepared learners, those who already have basic literacy skills or those who acquire these skills through remedial intervention, typically outperform unprepared learners in college settings. The present study was designed to evaluate the influence of literacy on community college outcomes across various college programs by measuring outcomes based upon a cluster of courses required for each program of study rather than a single course or a broad college comparison. The specific research questions were: (1) What are the literacy levels of learners at this college? (2) Is literacy level a predictor of successful performance at this college? (3) What socio-demographic variables (including chosen program of study) are associated with lower literacy levels for learners at this college? This study was also implemented as a pilot project to determine if a larger scale “census” style evaluation of the literacy levels of all college learners was viable.

Method
Institutional Setting

The data were collected during the 2010-2011 academic year at a college in Canada. The college provides programming across a variety of domains including, but not limited to, Business, Health and Community, Industrial Technology and Trades, Culinary, Applied Sciences and Engineering, Media, and Computer Studies. There were approximately 2000 post-secondary learners enrolled at the main College centers where the study took place. Learners enrolled in a specific program of study leading to a diploma or an applied degree. Each program had its own set of courses and there were no overlapping courses between programs. Programs, given their isolation from each other, varied considerably in terms of literacy demands/requirements. Programs were typically one or two years in length, with each academic year beginning in September and ending in June.

Sample

Learners enrolled in a post-secondary program were informed of the study. Two methods of recruitment were used: (1) program managers were informed and asked to encourage learners to participate, and (2) announcements were placed throughout the college providing information on the project and how individuals could participate. Those who opted to participate were provided with information on the study, and consent to obtain final marks and to release literacy test scores to instructors was requested. There were 346 individuals who agreed to participate. Table 1 outlines socio-demographic characteristics of the sample and the college.

Table 1 Socio-demographic variables for participants

Domain         Variable                              % in Study   % in College
Gender         Male                                  55%          57%
               Female                                45%          43%
Age            16-25                                 76%          73%
               26-35                                 14%          27%
               36-45                                 5%
               46-55                                 5%
               56-65                                 1%
Education      Primary                               >1%          1%
               Some High School                      2.3%
               High School – vocational/technical    1.7%         58%
               High School – general/academic        69.4%
               Beyond High School                    26.3%        31%
Employment     Full Time                             4.1%         3%
               Part Time                             27.2%        53%
               Not Employed                          68.8%        44%
Health         Excellent                             19.9%        No data available
               Very Good                             44.5%
               Good                                  31.8%
               Fair                                  3.5%
               Poor                                  >1%
1st Language   English                               97.6%        No data available
               Other                                 2.4%

Please note that some data for the college as a whole were not available and, in some cases, the categorization of the data was slightly different; where the college categorized data in broader groupings, a single value is shown in the college column for the group. Seventy-six percent of participants were between the ages of 16-25, 14% were 26-35, 5% were 36-45, 5% were 46-55, and less than 1% were 56-65. Given the small numbers in the 36-45, 46-55 and 56-65 age groups, these groups were combined for analysis.

Measures

The Canadian Literacy Evaluation (CLE) is a measure of literacy skills. It is based upon tests used as part of the International Adult Literacy and Skills Survey (IALSS) and has similar content, scoring metric and interpretive scale. This allows scores to be compared to regional, national and international data collected as part of IALSS. Results are provided for three types of literacy: prose (reading connected text), document (reading charts, graphs and similar material) and numeracy (word-based mathematical problems). Scores are expressed as derived scores and classified from level 1 (lowest) to level 4/5. Level 3 is considered the minimum level for effective functioning in a knowledge-based society. The CLE, like the measures used in IALSS, is untimed and delivered online with the help of a facilitator. The test consists of a brief series of socio-demographic questions followed by questions designed to evaluate literacy skills. Questions cover a variety of contents and contexts considered relevant to adults, including home and family, health and safety, community and citizenship, consumer economics, work, and leisure and recreation. To answer the questions, participants read material and responded by identifying critical elements in the text, selecting from a set of options or filling in short answers. The answer format was pre-determined and based upon the type of question.

Information from the registrar’s office was collected on each participant. Data included name, gender, identification number, program of study, courses taken, courses completed and course marks. Course marks were typically provided in percentages. Transfer credits and courses designated as pass/fail did not have a percentage. Other mark designations included incomplete and discontinued.

Data Collection and Coding

Learners volunteered to participate. The CLE was delivered online through their program’s designated computer lab. Learners were provided with the web site address, an access code for the CLE, and a brief verbal description of the measure and what to expect. They were encouraged to complete the CLE tutorial prior to beginning. The CLE begins with 8-10 demographic questions and then proceeds to the first set of literacy questions. Learners’ literacy was evaluated within the first 6 weeks of the academic year. Program managers facilitated participation in different ways: in some cases whole classes were taken to the computer lab and logged onto the test, while in other cases individuals came to the lab and were logged on individually. Incentives were used as a means of encouraging participation; participants were entered into a draw to win an iPod or an e-reader.

CLE results were released to instructors to facilitate the provision of supports for those who appeared to have literacy levels below level 3. Resources made available to all learners included Strategic Transitions’ WordQ and SpeakQ software to accommodate and enhance writing skills and Kurzweil to accommodate reading. WordQ and SpeakQ are integrated word prediction and speech recognition tools designed to support writing activities. Kurzweil is an assistive technology tool that provides literacy support through a variety of means and is most commonly used for its text-to-speech function. The purpose of the pilot project was to assess the viability of a broader research initiative and to determine literacy levels and, as such, the use of accommodations by learners was not formally tracked. However, anecdotal reports from instructors and support services, using demand for service in previous years as a reference point, suggested there was little uptake of available interventions despite need being identified.

CLE results were provided on a continuous scale ranging from 0 to 500. Level 1 (0–225) and level 2 (226–275) are considered below the expected range of literacy to function effectively in an information society; level 3 (276–325) is considered the minimum acceptable level of functioning; and level 4 (326–375) and level 5 (376–500) are considered high-functioning. CLE results were downloaded by the researcher and these data were merged with information gathered from the registrar’s office after the academic year was complete.
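To make the level classification concrete, the cutoffs above translate directly into a simple scoring function. The sketch below was written for this article and is not part of the CLE software:

```python
def cle_level(score: float) -> int:
    """Classify a CLE scale score (0-500) into the levels described above."""
    if score <= 225:
        return 1   # level 1: 0-225 (lowest)
    if score <= 275:
        return 2   # level 2: 226-275 (still below the expected range)
    if score <= 325:
        return 3   # level 3: 276-325 (minimum acceptable level of functioning)
    if score <= 375:
        return 4   # level 4: 326-375
    return 5       # level 5: 376-500

# Example: a prose score of 290 falls in level 3.
assert cle_level(290) == 3
```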

Course marks were the outcome measure. The average mark for each learner was calculated using the percentages on the transcript for each course. This provided a representation of ability that took into account the literacy demands of the particular program of study in which the learner was enrolled. After reviewing the transcripts, it was determined that courses designated with a Pass/Fail marking scheme were typically “clinical” or “on-the-job” type courses designed to evaluate “hands-on” skills required in a particular field. These were not included in the calculated percentage, nor were transfer credits, for which a percentage was not provided. The presence of discontinued and incomplete designations presented more of a challenge for coding. Based upon discussions with the registrar’s office and faculty, it was believed that, in the vast majority of cases, learners would discontinue a course as a means of avoiding a failing grade (cf. Goldstein & Perin, 2008) and that “incomplete” was often used by faculty as a means of providing struggling students with the option to continue working on a course after the semester was complete, though, in practice, few followed through. Incompletes are converted by the registrar to a fail after year end.

To provide the least-biased perspective, the second research question, “Is literacy level (as measured by the CLE) a predictor of successful performance at this college?” was answered by calculating cluster/program averages for each learner using only courses where a mark was provided; “discontinues” and “incompletes” were not factored into the calculation. However, 31% of participants had one or more discontinued or incomplete courses on their transcripts. This relatively high percentage led to an unanticipated supplementary question being addressed within the context of the second research question: What variables are predictive of having discontinued and/or incomplete courses on a transcript?
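The coding rules above (percentage-graded courses only; pass/fail, transfer, discontinued and incomplete designations excluded from the average) can be expressed as a short data-preparation step. The sketch below assumes a hypothetical transcript extract with one row per learner-course and a mark column holding either a percentage or a designation; the file name, column names and designation codes are illustrative, not the registrar's actual export format:

```python
import pandas as pd

# Hypothetical transcript extract: columns learner_id, course_id, mark.
transcripts = pd.read_csv("transcripts.csv")

# Designations that carry no percentage and are excluded from the average
# (pass/fail courses, transfer credits, discontinued and incomplete courses).
EXCLUDED = {"P", "F", "TR", "DIS", "INC"}

graded = transcripts[~transcripts["mark"].astype(str).str.upper().isin(EXCLUDED)].copy()
graded["mark"] = pd.to_numeric(graded["mark"], errors="coerce")
graded = graded.dropna(subset=["mark"])

# Cluster/program average mark per learner, using only percentage-graded courses.
average_mark = graded.groupby("learner_id")["mark"].mean().rename("average_mark")

# Indicator for the supplementary question: at least one discontinued or
# incomplete course on the transcript.
dis_inc = (
    transcripts["mark"].astype(str).str.upper().isin({"DIS", "INC"})
    .groupby(transcripts["learner_id"]).any()
    .rename("dis_inc")
)
```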

The goal was to evaluate results by program of study; however, in order to have sufficient cell size and to protect anonymity, some programs of study were clustered into groups. The clustering process involved two steps. The first was to cluster similar programs based upon a career theory developed by American psychologist John Holland (1997). This theory classifies work environments and/or people into one of six categories. The categories are Realistic (R) — working with objects, machines, tools; Investigative (I) — observe, analyze, evaluate; Artistic (A) — artistic, imaginative, creative; Social (S) — interpersonal problem solvers and educators; Enterprising (E) — sales, management, persuaders; and Conventional (C) — organizing, clerical, numerical. The second was to “validate” these clusters using course descriptions to estimate the literacy demands of the programs. Table 2 shows the program, the code, the percentage of participants from that program and the final program clusters.

Table 2 Programs, program clusters and percentage of participants
Code Program Percentage of Sample Clustering Status
Conventional
C Accounting Technology 7% Stand-Alone Program
C Computer Information 3.8% Stand-Alone Program
Enterprising
E Business Management 11.3% Stand-Alone Program
E Retail Management 2.3% Retail Management + Hotel Management + Marketing Advertising + Travel Tourism = Management
E Hotel Management 1.0%
E Marketing/Advertising 1.2%
E Travel Tourism 1.2%
Investigative
I Environmental Applied Science 2.3% Environmental Applied Science + Wild Life Conservation = Environment
I Wild Life Conservation 1.0%
I Bio Sciences 4.4% Stand-Alone Program
Social
S Early Childhood Care 3.2% Stand-Alone Program
S Human Services 8.4% Stand-Alone Program
Artistic
A Video Game 5.2% Stand-Alone Program
A Journalism >1% Dropped
A Culinary Arts 1.0% Culinary Arts + Basic Visual Arts = Creative
A Basic Visual Arts 2.6%
Realistic
R Electrical 3.2% Stand-Alone Program
R Industrial Electrical 4.6% Stand-Alone Program
R Electrical Mechanical 2.6% Electrical Mechanical + Electrical Engineering Technology = Advanced Electrical
R Electrical Engineering Technology 2.3%
Rc Carpentry 8.7% Carpentry + Wood Work = Building
Ra Wood Work 1.7%
R Architectural Technology 2.6% Architectural Technology + Construction Technology = Building Technology
R Construction Technology 2.6%
R Heating Ventilation Air Conditioning (HVAC) 2.9% HVAC + Machinist + Gas Turbine = Machine Trades
R Machinist 2.0%
R Gas Turbine 2.6%
Unclassified
  Basic Skills 8.4% Stand-Alone Program
Total   100%  

Programs that represented less than 3% of the total sample were considered for clustering. In an effort to retain as many stand-alone programs as possible, attempts were made to cluster two or more programs that had less than 3% of the total sample. In some cases, it was necessary to cluster programs with less than 3% of the total sample with programs that had more than 3% of the sample. In one case, Journalism, it was not possible to cluster the program despite it sharing a categorization (Artistic) with other programs, as the literacy requirements appeared to be dramatically different. This, combined with its very small sample size, resulted in it being dropped from the analysis. In addition, the Basic Skills program was designed to provide learners who did not meet the minimum requirements for entry to other college programs the opportunity to improve their skills. As such, it cannot be easily assigned a code and was left unclassified.

Results

Consistent with expectations, the majority of participants had completed high school or education beyond high school, most were unemployed or employed part-time, and the majority reported reasonable health. Over 95% were from the province in which the study took place and more than 97% had English as their first language (Table 1 presents participant socio-demographic information).

Question 1 — What are the literacy levels of learners at this college? — was answered by calculating mean literacy scores for participants in each of the clusters/programs with an n>10. The overall average for all participants was also included for comparison purposes. Table 3 presents the results.
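As a sketch of the aggregation behind Table 3 (means and standard deviations per cluster/program, retaining only groups with n>10, plus an overall row), assuming a merged data file and the illustrative column names below:

```python
import pandas as pd

# Hypothetical merged CLE/registrar file: one row per participant, with
# columns cluster, prose, document, numeracy, average_mark.
data = pd.read_csv("merged_cle_registrar.csv")

cols = ["prose", "document", "numeracy", "average_mark"]
summary = data.groupby("cluster")[cols].agg(["count", "mean", "std"])

# Keep only clusters/programs with more than 10 participants (as in Table 3).
summary = summary[summary[("prose", "count")] > 10]

# Overall means and standard deviations for comparison (the "Overall" row).
overall = data[cols].agg(["mean", "std"])
```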

Table 3 Literacy scores and average marks for clusters and programs. Literacy scores are reported as Mean (SD).

Cluster/Program          N     Prose            Document         Numeracy         Average Mark
Accounting               24    292.3 (51.1)     299.4 (31.8)     316.9 (31.4)     83%
Computer Info Sys        13    305 (43.4)       333.5 (36.3)     328.5 (43.6)     87%
Business Management      39    290 (49.0)       294.7 (43.5)     313.5 (40.0)     82%
Management               19    276.6 (54.8)     303.9 (35.7)     308.9 (49.8)     74%
Environment              11    302.7 (89.5)     321.8 (56.1)     338.2 (21.1)     81%
Bioscience               15    337.3 (50.8)     337.7 (35.0)     362.3 (39.2)     86%
Early Childhood Ed       11    200.9* (80.1)    191.8* (111.0)   215.5* (75.4)    75%
Human Services           29    256.9* (63.2)    270.7* (48.1)    293.1 (45.6)     78%
Video Game               18    320.8 (53.7)     323.9 (42.1)     334.7 (47.8)     70%
Creative                 12    262.1* (65.4)    279.6 (65.6)     300.8 (40.2)     81%
Electrical               11    263.2* (49.1)    288.6 (60.7)     319.5 (50.1)     81%
Industrial Electrical    16    285.9 (47.4)     315.3 (45.3)     338.4 (48.2)     82%
Advanced Electrical      17    275.6 (67.0)     303.5 (63.3)     322.6 (55.7)     83%
Building                 36    267.8* (56.0)    301 (54.9)       306.5 (42.9)     81%
Building Technology      18    264.2* (72.4)    280.8 (83.5)     297.5 (82.0)     82%
Machine Trades           26    291 (44.2)       322.5 (44.5)     315.8 (39.2)     86%
Basic Skills             29    272.1* (34.5)    281.4 (46.2)     290.3 (34.7)     75%
Overall                  344   280.6 (60.0)     297.5 (58.1)     311.2 (51.7)     81%

* indicates a mean score below level 3

Seven clusters/programs had mean scores below level 3 in prose literacy, while two had mean scores below level 3 in document literacy and one had scores below level 3 in numeracy.

Question 2 — Is literacy level (as measured by the CLE) a predictor of successful performance at this college? — was answered using multiple regression. Average mark was entered as the dependent variable and prose, gender, age range, education level, employment status and health as the independent variables. Only one of the CLE measures was included, since there are statistically significant correlations among all three CLE measures, ranging from .61 to .75. Prose was chosen because reading connected text is the ability used in related research and is likely the most common demand across programs, as all programs require at least some reading. In contrast, numeracy and document use would vary more dramatically across programs, and some programs may require very little or no demonstration of these abilities. Results indicated that prose and age range were statistically significant predictors. Prose was a significant (p<.05) predictor of average mark, where higher prose levels were associated with higher averages. Age range was a statistically significant predictor, where those aged 36 years and older performed better (p<.01) than those aged 16-25. The regression results are presented in Table 4, with statistically significant coefficients marked with asterisks.
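A minimal sketch of the kind of model summarized in Table 4, using the statsmodels formula interface; the variable names are illustrative and assume the categorical predictors are coded as in Table 1, with the 16-25 age range, high school education, full-time employment and excellent health as reference categories:

```python
import statsmodels.formula.api as smf

# "data" is the merged participant data frame from the earlier sketch (assumed).
# OLS regression of average mark on prose literacy and socio-demographic variables.
# C(...) treats a variable as categorical; its first level serves as the reference group.
marks_model = smf.ols(
    "average_mark ~ prose + C(gender) + C(age_range) + C(education)"
    " + C(employment) + C(health)",
    data=data,
).fit()

print(marks_model.summary())   # coefficients, standard errors, adjusted R-squared
```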

Table 4 Summary of regression predicting average mark
  Average Mark
  Coefficient Standard Error
Prose 0.037** 0.019
Gender 0.799 1.457
Age 26-35 2.325 2.383
Age 36 + 4.800*** 1.718
Post High School 2.129 1.840
Employed Part Time 3.045 6.486
Not Employed 3.892 6.026
Health Very Good -0.759 2.039
Health Good -0.906 2.188
Health Fair / Poor -2.091 2.749
Constant 65.781*** 8.522
Number of observations 319
Adjusted R2 0.035
Log-Likelihood -1,280.19

note:  *** p<0.01, ** p<0.05, * p<0.1

Given that 31% of learners had discontinued or incomplete courses on their transcripts, it was decided to explore which variables predicted this outcome. A probit regression was run with discontinue/incomplete status dummy coded and entered as the dependent variable. Prose, gender, age range, education level, employment status, health, and cluster/program were entered as independent variables. Regression results are presented in Table 5; statistically significant variables are marked with asterisks.
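The corresponding probit model for this supplementary question might look like the sketch below, with the same illustrative variable names; dis_inc is a 0/1 indicator for having at least one discontinued or incomplete course:

```python
import statsmodels.formula.api as smf

# Probit regression of the discontinue/incomplete indicator on prose literacy,
# socio-demographic variables and cluster/program ("data" as in the earlier sketches).
dis_inc_model = smf.probit(
    "dis_inc ~ prose + C(gender) + C(age_range) + C(education)"
    " + C(employment) + C(health) + C(cluster)",
    data=data,
).fit()

print(dis_inc_model.summary())   # coefficients, standard errors, pseudo R-squared
```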

Table 5 Summary of regression predicting discontinue/incomplete
  Discontinue/Incomplete
Coefficient Standard Error
Prose -0.001* 0.000
Gender -0.093 0.063
Age 26-35 -0.129* 0.074
Age 36 + 0.064 0.114
Post High School -0.133** 0.055
Employed Part Time 0.059 0.151
Not Employed 0.029 0.133
Health Very Good -0.045 0.078
Health Good 0.005 0.083
Health Fair / Poor 0.197 0.184
Business Management 0.538*** 0.124
Management 0.517*** 0.152
BioScience 0.182 0.199
Video Game 0.154 0.185
Creative -0.020 0.162
Computer Information -0.077 0.164
Advanced Electrical 0.230 0.182
Building Technology -0.085 0.140
Industrial Electrical -0.030 0.153
Building -0.168* 0.090
Machine Trades -0.062 0.125
Electrical -0.093 0.135
Human Services -0.074 0.119
Basic Skills 0.296* 0.157
Number of observations 296
Adjusted R2 0.219
Log-Likelihood -140.06

note:  *** p<0.01, ** p<0.05, * p<0.1

Those with higher prose scores and those with higher levels of education were less likely to have discontinues/incompletes on their transcripts. With respect to clusters/programs, learners in the Building cluster were less likely to have discontinues/incompletes on their transcripts, despite mean prose scores being below level 3. Learners in the Business Management program, the Management cluster and the Basic Skills program were more likely to have discontinues/incompletes on their transcripts. The Business Management program had a mean prose score solidly at level 3 (290); the Management cluster had a mean prose score in level 3, though just on the cusp of level 2 (276.6); and the Basic Skills program had a mean prose score below level 3 (272.1). Two programs were removed from the analysis because they perfectly predicted the dependent variable: the Environment cluster, where no participants had discontinues/incompletes on their transcripts, and the Early Childhood Education program, where all participants had discontinues/incompletes on their transcripts.

Question 3 — What socio-demographic variables are associated with lower literacy levels for learners at this college? — was addressed through multiple regression. Table 6 summarizes the regression results; statistically significant variables are marked with asterisks.
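Question 3 uses the same regression machinery, this time with prose as the dependent variable and cluster/program among the predictors (an illustrative sketch with the same assumed variable names as above):

```python
import statsmodels.formula.api as smf

# OLS regression of prose literacy score on socio-demographic variables and cluster/program.
prose_model = smf.ols(
    "prose ~ C(gender) + C(age_range) + C(education) + C(cluster)"
    " + C(employment) + C(health)",
    data=data,
).fit()

print(prose_model.summary())
```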

Table 6 Summary of regression predicting prose literacy
  Prose
Coefficient Standard Error
Gender 0.006 6.926
Age 26-35 23.273** 9.780
Age 36 + 37.367*** 10.678
Post High School -0.668 8.014
Business Management -4.147 12.753
Management -18.438 15.995
BioScience 39.198** 16.242
Video Game 29.424* 16.527
Creative -26.662 20.910
Computer Information 0.407 16.508
Environment -3.596 26.886
Advanced Electrical -9.703 18.726
Building Technology -28.014 17.997
Industrial Electrical -11.511 16.543
Building -23.904* 13.317
Machine Trades -7.061 14.442
Electrical -32.261** 14.943
Early Childhood Education -93.252*** 27.019
Human Services -32.963** 15.018
Basic Skills -12.822 11.330
Employed Part Time 37.818** 15.011
Not Employed 32.603** 13.924
Health Very Good 2.499 8.069
Health Good -11.619 8.814
Health Fair / Poor -36.640*** 13.925
Constant 257.625*** 18.314
Number of observations 344
Adjusted R2 0.174
Log-Likelihood -1,850.25

note: *** p<0.01, ** p<0.05, * p<0.1

Regression results suggested that predictors of prose literacy levels include age range, employment status, health and program of study. It should be noted that the n for some of the variables was less than 20 and, therefore, results needed to be interpreted with caution. Nonetheless, results were in keeping with previous research.

Discussion

Findings from this study added to the current body of research, allowing for a better understanding of the relationship between college programming and prose literacy levels. Answering question 1 — What are the literacy levels of learners at this college? — provided an overview of literacy levels across those enrolled in this college as well as within specific clusters/programs. The importance of looking at literacy results specific to the cluster/program was apparent given that the overall mean for the college was at level 3, and yet a number of clusters/programs had mean scores that were lower, falling in level 2. This was not surprising, given the differing entrance requirements for various programs and what might be referred to as the publicly “perceived” level of difficulty of the various programs. Both entry requirements and perceived level of difficulty would likely act as a self-imposed screening-out process, leading certain programs to attract fewer low-literacy learners, while others may attract more.

Despite a clear pattern of some programs enrolling a larger number of learners with low literacy levels, it was determined in answering question 2 — Is literacy level (as measured by the CLE) a predictor of successful performance at this college? — that, although literacy level was predictive of average mark, it was not the only predictor. Age range was a predictor of marks, with the 36-and-older age range predictive of higher marks; a trend in the data suggested older age ranges tended to have higher marks compared to the youngest age group. The predictive power of age may be associated with issues of maturity, readiness to learn, motivation and other non-cognitive factors (Noonan & Sedlacek, 2005). The supplementary question regarding variables associated with discontinues/incompletes determined that prose, age, level of education and certain programs were linked to the presence of discontinues/incompletes.

As such, the importance of literacy cannot be overstated; however, it is not the sole determinant of success. In the context of the present study, a possible explanation was linked to the relative importance of literacy to individual clusters/programs. In certain clusters/programs, the ability to perform specific activities within specified contexts that are unrelated to prose literacy may lead to positive outcomes. For instance, within the Building cluster, prose scores were below level 3, yet the analysis indicated that learners in this cluster were less likely to have discontinues/incompletes on their transcripts and the average mark within this cluster was 81%. Document literacy, numeracy and/or visual-spatial skills may be of paramount importance for successful program completion in this cluster and, as such, the predictive power of prose literacy may be reduced. In addition, there was a relatively high success rate in programs where the vast majority of learners were functioning below level 3 prose literacy. The Early Childhood Education program, for example, demonstrated prose literacy well below level 3 and those in the program were more likely to have discontinues/incompletes, yet the program’s average mark was 75%. This may be related to the process of attaining marks, such as project-based and group work, and/or the influence of non-cognitive factors (Allen, Robbins, & Sawyer, 2010; O’Connell & Sheikh, 2009; Noonan & Sedlacek, 2005). The Business Management program and the Management cluster had prose literacy scores at level 3 and average marks of 82% and 74% respectively, but were also associated with higher rates of discontinues/incompletes. This may be related to program demands, standards and expectations; that is, some programs are simply more challenging than others.

Exploration of question 3 — What socio-demographic variables are associated with lower literacy levels for learners at this college? — yielded results consistent with related studies. In particular, age, self-reported health status and employment status were all linked to prose literacy skills. Weaker skills were noted among those in the younger age cohort (Goldstein & Perin, 2008; Penner, 2011; Penner, McKenna, & Audet, 2011), those with self-reported poor health (Kirsch & von Davier, 2005) and those who were employed full-time, though this latter finding needs to be interpreted with caution given the small number who reported working full-time.

The other notable finding was related to prose literacy level and cluster/program of study: it would appear that prose literacy level can be predicted based upon program of study. This supported the earlier observation that certain clusters/programs appeared to be enrolling a larger number of low-literacy learners. In effect, there appeared to be “magnet” programs where learners with low prose literacy were more likely to be accepted, and perhaps more likely to apply. This issue was particularly concerning given the extent to which prose literacy would appear to be an important skill in at least some of these clusters/programs (e.g., Early Childhood Education and Human Services).

There are important limitations to the present study. First, data were collected at a single college, which may limit generalizability. Second, assessment methods and benchmarks may differ between clusters/programs, which may have had an effect on outcomes as measured by marks. In fact, this is highly plausible, as it would appear that many individuals functioning below level 3 attained “average” marks in, for instance, the Early Childhood Education program. Despite these limitations, important implications for those working in the field are apparent.

Implications for Practice

The implications of this research may have a direct impact at the administrative and instructional levels. At the administrative level, it would be important for each college to identify the “magnet” programs for learners with low literacy. Doing so would provide the opportunity to (1) put accommodative strategies into place for incoming learners and (2) review budget allocations in light of accommodative supports and the provision of support to faculty whose workloads may be disproportionately heavy given the presence of learners with additional needs.

At the instructional level, faculty should be encouraged to implement strategies that could be embedded into the curriculum and delivery of programming as a means of enhancing uptake of supports. For instance, reading and writing software could be introduced and utilized in some programs, such as Human Services, as part of a learning strategy with the goals of (1) familiarizing learners with the software so they would be able to effectively “teach” their clients how to use these tools once they move into the labour force, and (2) requiring learners to utilize these tools while in their program of study. In effect, learners would have to use the tools in their courses, which would enhance the learning experiences of those with low literacy levels while also preparing learners to deal with the increasing use of technology in the field in which they plan to work. Approaching the issue in this manner may also reduce the stigma that may be associated with individual learners seeking out and utilizing supports.

An additional consideration at the faculty level relates to the minimum level of literacy required for effective functioning in the labour force in each individual field. If these literacy levels can be determined, it may then be important for faculty to seriously consider the nature of the assessment techniques used within each program of study. As an example, in the field of Early Childhood Education, it may be common practice to evaluate skills by using project- and/or group-based assessment as the primary means of gaining marks, which would be in keeping with the type of activities taking place within work settings. However, relying on this type of assessment for a “critical mass” of marks may have the unintended effect of allowing those with literacy levels below industry requirements to graduate and move into the work force. If a relatively high percentage of learners were graduating with low literacy skills, this may have a long-term negative impact on an industry that requires better developed literacy. Given the large percentage of learners with low literacy in the present study who were enrolled in the Early Childhood Education and Human Services programs, it may be important to address this issue by implementing a more balanced means of assessment, one that requires each learner to demonstrate a minimum acceptable level of literacy in order to pass key courses.

The opportunities for future research are numerous. Some key areas for exploration include: (1) replicating the present study in different colleges, an important first step to ensure the results can be generalized; (2) measuring the literacy levels of graduating high school students and tracking application/entry into post-secondary programming, to offer insight into learning trajectories; and (3) measuring literacy in “magnet” programs and linking it to quality measures in the workplace, to provide insight into the impact of graduating learners with low literacy on work settings.

In general, these results added to the current body of research by extending the link between literacy and college success to specific clusters/programs. The results strongly suggested that future research needs to take into account participants’ programs of study, rather than focusing on overall college averages or on single courses within a broader program of study. Both of these extremes may influence results and interpretation in ways that seriously limit the generalizability of the research.

References

Allen, J., Robbins, S. B., & Sawyer, R. (2010). Can measuring psychosocial factors promote college success? Applied Measurement in Education, 23, 1-22.

Bettinger, E. P., & Long, B. T. (2005). Addressing the needs of under-prepared students in higher education: Does college remediation work? NBER Working Paper Series. Working Paper 11325. Cambridge: National Bureau of Economic Research.

Brink, S. (2007). Literacy in P.E.I.: Implications of findings from IALSS, 2003. Human Resources and Social Development Canada.

Calcagno, J. C. (2007). Evaluating the impact of developmental education in community colleges: A quasi-experimental regression-continuity design (Doctoral dissertation, Columbia University). Dissertation Abstracts International, 68 (06).

Crane, L. R., McKay, E. R., & Poziemski, C. (2002). Pieces of the puzzle: Success of remedial and non-remedial students. Paper presented at the annual meeting of the Association for Institutional Research, Toronto, Canada. (ERIC Document Reproduction Service No. ED 474 034)

Crews, D. M., & Aragon, S. R. (2007). Developmental education writing: Persistence and goal attainment among community college students. Community College Journal of Research and Practice, 31, 637-652.

Desjardins, R. (2005). Education and Skills. In Learning a Living: First Results of the Adult Literacy and Life Skills Survey (chap. 3). Paris: Ministry of Industry, Canada, & Organization for Economic Cooperation and Development.

Goldstein, M. T., & Perin, D. (2008). Predicting performance in a community college content-area course from academic skill level. Community College Review, 36, 89-115.

Holland, J. L. (1997). Making Vocational Choices: A Theory of Vocational Personalities and Work Environments (3rd ed.). Odessa, FL: Psychological Assessment Resources.

Illich, P. A., Hagan, C., & McCallister, L. (2004). Performance in college-level courses among students concurrently enrolled in remedial courses: Policy implications. Community College Journal of Research and Practice, 28, 435-453.

Kirsch, I. S., & von Davier, M. (2005). Skills and Health. In Learning a Living: First Results of the Adult Literacy and Life Skills Survey (chap. 3). Paris: Ministry of Industry, Canada, & Organization for Economic Cooperation and Development.

McKenna, G. S. (2010). Can Learning Disability Explain Low Literacy Performance? (Departmental Catalogue No. SP-959-07-10E). Gatineau, QC: Publication Services, Human Resources Development Canada.

Noonan, B. M., & Sedlacek, W. E. (2005). Employing noncognitive variables in admitting and advising community college students. Community College Journal of Research and Practice, 29, 463-469.

O’Connell, M., & Sheikh, H. (2009). Non-cognitive abilities and early school dropout: Longitudinal evidence from NELS. Educational Studies, 35, 475-479.

Penner, A. J. (2011). Comparison of College Performance of General Educational Development (GED) and High School Diploma Students in Nova Scotia and PEI. Gatineau, QC: Publication Services, Human Resources Development Canada.

Penner, A.J., McKenna, G. S., & Audet, M. (2011). Investing in Effective Adult Learning for Island Prosperity: Back to Basics. Gatineau, QC: Publication Services, Human Resources Development Canada.

Seybert, J. A., & Soltz, D. F. (1992). Assessing the outcomes of developmental courses at Johnson County Community College. Overland Park, KS: Johnson County Community College, Office of Institutional Research. (ERIC Document Reproduction Service No. ED349052).


Gregory S. McKenna is the Senior Diagnostic Examiner at Holland College Assessment and Counseling Service. Audrey J. Penner is the Director of Learner Supports, Applied Research and Transitions at Holland College. They can be reached by contacting gmckenna@hollandcollege.com.