Graduate Education and the Liberal Arts
Louis Menand

I am no better at predicting the future than anyone else is, and I generally try to avoid doing so. If I could predict the future, then my retirement account would be doing a lot better than it is. What I can try to do is what historians do, which is to predict the present. History is about connecting the dots such that present conditions can be understood as a predictable outcome of past events. It is about trying to extract a logic from the higgledy-piggledy of data that randomly survive, trying to recreate the back-story of who we are now.

My view about the future is that it is good to think of it as in our hands. If the future were not in our hands, there wouldn’t be much purpose in having these discussions. We have them because we hope to shape the future, as opposed to having the future shape us. And in starting from the belief that the future is in our hands, it is helpful to know how the conditions in which we find ourselves came to be.

Most human beings, of course, take current conditions to be natural. They tend to think that the way things are is either the way they have always been or the way they are supposed to be. For some people, “current conditions” mean the conditions that obtained when they were in high school. But it is the same idea: people tend to have some conception of the way things ought to be that is based on some experience of the way things either are right now or were once in their own lives.

Academics, of course, are trained to see the way things are as contingent. Most of us can appreciate this fact in the abstract. What is harder for us sometimes is seeing why things are the way they contingently are. And this is especially true for the conditions of our own professional life, our role as people who work in universities. This may sound odd, to say that academics find it especially difficult to grasp the contingent character of present professional arrangements. But this is true of any professional and any profession, and the reason is that professionals are socialized into those arrangements. That is the essence of professional training. We internalize the norms that inform our practices. We really couldn’t do our jobs if we did not.

This is what often makes it difficult for us to countenance challenges to those norms. It is part of the tuna fish salad syndrome—the realization you first have in childhood that not everyone’s mother makes tuna fish salad the way your mother makes it. This can be a very unsettling experience, even vertiginous, the first time it happens. Sensitive children have been known to throw up on discovering, for example, bits of celery in their best friend’s mother’s tuna fish salad. It is a look into the abyss. We professors are made of tougher stuff, of course. But it is helpful to see that the system in which we work is the way it is for certain reasons, since that helps us to decide whether those reasons are still relevant.

The modern system of higher education in the United States emerged in the fifty years between the Civil War and the First World War. You could even say that the system was invented in those years, so completely did it supplant what had been there before. One thing that strikes you looking back at that period is the role played by a small group of individuals—university presidents. They were Titans in their world. They had power, and they used it. Some built institutions from nothing: Daniel Gilman at Johns Hopkins, William Rainey Harper at Chicago, David Starr Jordan at Stanford, G. Stanley Hall at Clark. Others completely transformed existing institutions: Nicholas Murray Butler at Columbia, Timothy Dwight at Yale, James Angell at the University of Michigan. Twenty-first-century university presidents, who have so little control over the academic missions of their institutions, must look back at those men and weep bitter tears of frustration. Of these Titans, the first, the longest-serving, and, for our purposes, the most important was Charles William Eliot, of Harvard.

Eliot became president of Harvard in 1869. His academic field was chemistry, but he was not a particularly accomplished chemist. In fact, he had resigned from the Harvard faculty in 1863 after being passed over for a new chair in chemistry. When the Harvard Overseers chose him, he was working at what many at the time would have regarded as a vocational school, the Massachusetts Institute of Technology. The Overseers were taking a radical step. Eliot’s appointment constituted recognition that American higher education was changing, and that Harvard was in danger of losing its prestige. Harvard picked Eliot because it wanted to be reformed. Eliot did not disappoint. He was inaugurated in the fall of 1869, and he served for forty years.

By the time he retired, Eliot had become identified with almost everything that distinguishes the modern research university from the antebellum college: the abandonment of the role of in loco parentis, the abolition of required coursework, the introduction of the elective system for undergraduates, the establishment of graduate schools with doctoral programs in the arts and sciences, and the emergence of pure and applied research as principal components of the university’s mission. Eliot played a prominent part in all these developments. He was, after all, a prominent figure at a prominent school.

But he was not their originator. Other colleges instituted many of these reforms well before Harvard did. Yale had been awarding doctorates since 1861, for example, and the trend toward applied research was kicked off by the Morrill Land-Grant College Act, passed by the wartime Congress in 1862. The reform that Eliot was most closely associated with was the elective system: by 1899 he had gotten rid of all required courses for Harvard undergraduates except first-year English and a foreign language requirement. Cornell and Brown, however, had tried free elective curricula well before Eliot. (Until his appointment, Eliot had actually been somewhat dubious of electives; he seems to have changed his mind partly because of his own reflections on the advantages of an elective system and partly, perhaps, because a committee of the Harvard Overseers had drawn up a report recommending more of them before he was hired.)

So Eliot’s role was to some extent reactive. He was a quick student of trends and an aggressive implementer of change. He adopted a “there’s a new sheriff in town” attitude toward his faculty (an attitude that has not always proved effective among presidents at Harvard). But he did bring one original and revolutionary idea with him when he came into office. This was to make the bachelor’s degree a prerequisite for admission to professional school. It may seem a minor reform, but it was possibly the key element in the transformation of American higher education in the decades after the Civil War.

Before Eliot, students could choose between college and professional school—law, medicine, and science, which in the nineteenth century was taught in a school separate from the college (as at MIT, for example). Harvard had its own science school, called the Lawrence Scientific School. In 1869, Eliot’s first year as president, half of the students at Harvard Law School and nearly three-quarters of the students at Harvard Medical School had not attended college and did not hold undergraduate degrees.

These were, comparatively, respectable numbers. Only nineteen of the 411 medical students at the University of Michigan, and none of the 387 law students there, had prior degrees of any kind. There were no admissions requirements at Harvard Law School, beyond evidence of “good character” and the ability to pay the hundred-dollar tuition, which went into the pockets of the law professors. There were no grades or exams, and students often left before the end of the two-year curriculum to go to work. They received their degrees on schedule anyway. Standards at medical schools were only a little less amorphous. To get an MD at Harvard, students were obliged to take a ninety-minute oral examination, during which nine students rotated among nine professors, all sitting in one large room, spending ten minutes with each. When the ninety minutes were up, a bell was sounded, and the professors, without consulting one another, marked pass or fail for their fields on a chalkboard. Any student who passed five of the nine fields became a doctor.

Eliot considered the situation scandalous. He published an article about it in The Atlantic Monthly in 1869, just a few months before being offered the presidency, and that article was almost certainly a factor in the decision to appoint him. Harvard wanted a reformer because there was alarming evidence in the 1860s that college enrollments were in decline, and the existence of an easy professional school option was one of the reasons. Once installed, Eliot immediately set about instituting admission and graduation requirements at Harvard’s schools of medicine, law, divinity, and science, and forcing those schools to develop meaningful curricula. It took some time: a bachelor’s degree was not required for admission to the Harvard Medical School until 1900.

Eliot had two goals in mind: one was to raise the value of the professional degree, but the other was to save the college from going out of business. His reform had several long-term effects on American education and American society. To begin with, it professionalized the professions. It erected a hurdle on what had been a fairly smooth path, compelling future doctors and lawyers to commit to four years of liberal education before entering what are, essentially, professional certification programs. This made the professions more selective and thereby raised the social status of law, medicine, and science and engineering. Law students were no longer teenagers looking for a shortcut to a comfortable career; they were college graduates, required to demonstrate that they had acquired specific kinds of knowledge. People who could not clear the hurdles could not advance to practice. Eliot’s reform helped put universities in the exclusive business of credentialing professionals.

The emergence of pure research as part of the university’s mission—the notion that professors should be paid to produce work that might have no practical application—was a development that Eliot had relatively little enthusiasm for. He believed in the importance of undergraduate teaching—as a champion of electives, he always insisted that the subject was less important than the teacher—and he believed in the social value of professional schools. But he was too utilitarian to believe in research whose worth could not be measured in the marketplace, and Harvard did not formally establish a graduate school in arts and sciences until 1890, which was rather late in the history of graduate education. The push toward doctoral-level education came from elsewhere.

Still, as Eliot quickly realized, graduate schools perform the same function as professional schools. Doctoral programs, and the requirement that college teachers hold a PhD, professionalized the professoriate. The standards for scholarship, like the standards for law and medicine, became systematized: everyone had to clear the same hurdles and to demonstrate competence in a scholarly specialty. People who could not clear the hurdles, or who had never joined the race, were pushed to the margins of their fields. The late-nineteenth-century university was really (to adopt a mid-twentieth-century term) a multiversity; it had far less coherence than the antebellum college, since it was essentially a conglomeration of non-overlapping specialties.

But Eliot’s reform also saved the liberal arts college. In 1870, one out of every sixty men between eighteen and twenty-one years old was a college student; by 1900, one out of every twenty-five was in college. Eliot understood that in an expanding nation, social and economic power would pass to people who, regardless of birth and inheritance, possessed specialized expertise. If a liberal education remained an optional luxury for these people, then the college would wither away.

By making college the gateway to the professions, Eliot linked the college to the rising fortunes of this new professional class. But he also enabled college to preserve its anti-utilitarian ethos in an increasingly secular and utilitarian age. For Eliot insisted on keeping liberal education separate from professional and vocational education. He thought that utility should be stressed everywhere in the professional schools but nowhere in the college. The collegiate ideal, he explained in his Atlantic Monthly article, is “the enthusiastic study of subjects for the love of them without any ulterior objects.” College is about knowledge for its own sake—hence the free elective system, which let students roam across the curriculum without being shackled to the requirements of a major.

Effectively, Eliot struck a bargain: professional schools would require a bachelor’s degree for admission. In return, colleges would not provide pre-professional instruction. The college curriculum would be non-vocational. And this is the system we have inherited: liberalization first, then professionalization. The two types of education are kept separate. We can call this dispensation Eliot’s bargain. It has proved remarkably durable.

Where do master’s programs fit into this system? The answer seems to be that for most of the history of American higher education, nobody really gave that question much thought. The master’s degree evolved more or less under the academic radar, and this explains a lot of its characteristics.

There is a prehistory of the modern master’s degree—Harvard awarded master’s degrees in the seventeenth century—but that need not concern us. The modern master’s degree emerged in the last decades of the nineteenth century as a credential for teachers. In the beginning, when there were far fewer doctorates, the master’s degree could qualify the recipient for a university teaching appointment. By the 1920s, though, the master’s degree was earned either as a credential for secondary school teaching, or as a way-station, or consolation prize, in PhD programs. And around that time, in the 1920s, new master’s programs started to appear in fields unrelated to the liberal arts and sciences: agriculture, art, business, city planning, engineering, forestry, music, pharmacy, public health, and social work.

Over the course of the first half of the twentieth century, enrollment in master’s programs tracked enrollment in higher education as a whole. That is, the number of master’s degrees awarded increased at the same rate as the numbers of baccalaureates and doctorates. This changed after 1945. Between the 1940s and the 1960s, the number of master’s degrees awarded tripled; the number of institutions offering master’s degrees doubled. Still, the numbers remained relatively small. The big jump came after 1970, and this is when the current landscape started to emerge.

Three things happened after 1970. First, there was a rapid increase in the number of master’s degrees. Between 1970 and 1990, there was a 48 percent rise in the number of degrees awarded annually. More than half the master’s degrees ever awarded in the entire history of American higher education up to 1990 were awarded in that twenty-year period. Keep in mind that after 1970 undergraduate enrollment, having doubled in the 1960s, was relatively flat.

Second, after 1970, new master’s programs were created in many fields. These include applied anthropology, applied history, applied philosophy, environmental studies, urban problems, health care for the aged, genetic counseling, avian medicine, international marketing, dental hygiene, physical therapy, building construction, and advertising management. It was in this period, between 1970 and 1990, that the master’s degree became overwhelmingly a terminal professional degree, with business and education the leading fields. By 1989, half of all master’s degrees were in those two fields. Only 16 percent were in the liberal arts and sciences.

Finally, this was a period of curricular and pedagogical innovation within master’s programs. Programs became less research-centered, a shift manifested by the elimination in many programs of a thesis requirement. Technologies were developed to enable distance learning. And programs became more interdisciplinary, for the fields in terminal master’s programs tend not to map onto traditional academic disciplines. They tend to be interdisciplinary almost by definition.

There are a number of explanations for this expansion. A common one is the transformation of the American economy into an information and knowledge-based economy, a development that raised the demand for better-educated workers. Another is the growing need of universities for researchers and for teaching assistants. Master’s students, like PhD students, provide this labor cheaply. And advanced degree programs are a reflection partly of institutions’ desire to enhance their profiles by adding graduate programs and partly of professions’ desire to enhance their profiles by requiring a post-baccalaureate degree for entrance. The bulk of the expansion in American higher education between 1945 and 1970 had been in the public sector. Adding master’s programs gave state colleges a bigger profile within the system as a whole. It not only increased enrollment figures; it produced alumni with professional careers. In institutions without doctoral programs, master’s programs might also constitute an asset in faculty recruitment.

Concern about the master’s degree has been expressed in official academic circles as far back as 1909. For much of the twentieth century, the dominant complaints had to do, first, with the lack of consistency among master’s programs, and, second, with the belief that master’s programs are less rigorous than doctoral programs, even that the teaching in these programs is often effectively at an undergraduate level.

On the first point, there is an unusual degree of differentiation among master’s programs in terms of requirements for the degree. Virtually every doctoral program requires a dissertation, some in the form of monographs and others in the form of scholarly articles. Some master’s programs require a thesis, but most do not, and the nature of requirements differs widely. This inconsistency has troubled university administrators. And the fact that the typical master’s student is different from the typical doctoral student, tending to be older and in many cases part-time, has contributed to the impression that master’s programs are not academically rigorous.

How valid are these concerns? In the 1980s, the Council of Graduate Schools sponsored a study of master’s programs. The study was run from the Center for Educational Research at the University of Wisconsin, and it involved surveys of hundreds of students, faculty, and administrators. The report was published as a book by Johns Hopkins University Press in 1993. I think it is fair to say that the three authors of the report were surprised by the results of their survey:

In recording the perspectives of those directly involved in master’s education—including students, alumni, and employers as well as faculty and administrators—we came to understand that the experiences these individuals had with master’s education were, for the most part, very positive and inconsistent with the largely negative views of master’s education portrayed in the literature. Despite being relegated by some of the educators we interviewed to second-class status, we conclude that master’s education in the United States has been a silent success—for degree holders, for employers, for society in general....

Throughout the study, we were often impressed—sometimes even astonished—by the extent to which students, program alumni, and faculty valued their master’s experiences....

Equally important, we learned that there are important social benefits associated with master’s education—benefits that have been largely invisible in the literature and to many people in higher education....

Many students and alumni told us that their master’s education greatly enhanced their knowledge and understanding, sharpened their ability to connect theory and professional practice, developed a big-picture perspective, refined their analytic ability, made them more critical questioners of knowledge, and honed their communication and professional practice skills.1

You don’t see prose like that very often in the academic literature on higher education.

Since 1990, the number of master’s degrees has continued to increase at a rate that outpaces both bachelor’s degrees and first professional degrees, such as JDs. Between 1990 and 2000, the number of master’s degrees increased by 39 percent. In the next seven years, between 2001 and 2008, it increased by another 33 percent. In that latter period, the number of bachelor’s degrees increased by only 25 percent; the number of first professional degrees by only 14 percent. There was a striking rise in the number of PhDs, though, which has now turned into a professional disaster. The increase in PhDs between 2000 and 2008 was 42 percent. On the other hand, in 2007–2008, there were almost ten times as many master’s degrees as PhDs awarded, and more than six times as many master’s degrees as first professional degrees. There were almost as many master’s degrees awarded in 2007–2008 as there were associate’s degrees. Master’s programs constitute a big chunk of the higher education system.

As this history suggests, master’s programs have a number of characteristics that are not only different from the characteristics of bachelor’s and doctoral programs, but that can seem anomalous and even inappropriate and sub-academic. We’ve noted that master’s students are demographically nontraditional. They tend to be older. Their graduate education is often not full-time; it is not the immersion experience that most PhD programs are. There is more use of distance learning technology and less emphasis on academic research. And, of course, there is a much tighter fit between educational programs and career opportunities. The ethic of disinterestedness that is the core feature of undergraduate and graduate education in the liberal arts and sciences in America is less prominent in master’s programs. There are, of course, well-established and popular master’s programs in the liberal arts and sciences, notably the Master of Arts in Liberal Studies. But the growth and proliferation of master’s programs was mainly a response to the needs of professions and employers, in business and in government.

It is important to see, I think, that master’s programs would not work unless they maintained these characteristics. The non-traditional nature of the programs gives them something that undergraduate and doctoral programs notoriously lack: nimbleness. This is a stratum of the educational system that can respond quickly to student and market demand. The return on investment is readily calculated in a way that it is manifestly not for liberal arts education. Master’s programs run against the institutional grain of mainstream academia. They grew up, so to speak, between the cracks of the divide between undergraduate and graduate education that we inherited from the nineteenth century, and they developed more or less autonomously, a derivative of neither. The report of the Council of Graduate Schools study was called A Silent Success. That is why the master’s program is the celery in the traditional academic’s tuna fish salad.

Was Eliot’s bargain a devil’s bargain? Eliot’s reform left a question mark in the undergraduate experience. What, if nothing they were learning was intended to have real-world utility, were undergraduates supposed to learn? The free elective system that Eliot instituted at Harvard basically said, “It doesn’t matter; you will learn what you really need to know in graduate school.” And abuses of the free elective system, a problem that was much debated in higher education circles in the late nineteenth century, led to a reaction against it after the turn of the century and the institution of the undergraduate major. But the idea that liberal education is by its nature divorced from professional or vocational education persisted.

This separation is one of the chief characteristics of elite institutions of higher learning. In a system that associates college with the ideals of the love of learning and knowledge for its own sake, a curriculum designed with real-world goals in mind seems utilitarian, instrumentalist, vocational, presentist, anti-intellectual, and illiberal. Those are words that trigger the academic autoimmune system.

Since these terms might characterize terminal master’s degree programs, let us consider them for a moment. There is a little self-deception in complaints about vocationalism, since there is one vocation, after all, for which a liberal education is not only useful but is deliberately designed: the vocation of professor. The undergraduate major is essentially a preparation for graduate work in the field, which leads to a professional position. The major is set up in such a way that the students who receive the top marks are the ones who show the greatest likelihood of going on to graduate school and becoming professors themselves.

And it seems strange to accuse any educational program of being instrumentalist. Knowledge just is instrumental: it puts us into a different relationship with the world. And it is what is going on in the world that makes colleges need continually to redefine what they do. Faculty in liberal arts colleges always need to ask, “Are we preparing our students for the world they are about to face?” If the faculty think that a curriculum in which students spend most of four years being trained in an academic specialty is not going to do it, then they usually try to implement a set of general education requirements that ensure that all students will receive some education that prepares them for life in the twenty-first century.

Liberal education today does face a danger, and it is the same as the danger it faced in Eliot’s day: that it will be marginalized by the proliferation, and the attraction, of non-liberal alternatives. There are data to support this anxiety. Most of the roughly 2,500 four-year colleges in the United States award less than half of their degrees in the liberal arts. Even in the leading research universities, only about half the bachelor’s degrees are awarded in liberal arts fields. The biggest undergraduate major by far in the United States is business. Twenty-two percent of all bachelor’s degrees are awarded in that field. Ten percent of all bachelor’s degrees are awarded in education. Seven percent are awarded in the health professions. Those are not liberal arts fields. There are almost twice as many bachelor’s degrees conferred every year in social work as there are in all foreign languages and literatures combined. Only 4 percent of college graduates major in English. Just 2 percent major in history. In fact, the proportion of undergraduate degrees awarded annually in the liberal arts and sciences has been declining for a hundred years, apart from a brief rise during the great expansion between 1955 and 1970. Except for those fifteen exceptional years, the more American higher education has expanded, the more the liberal arts sector has shrunk in proportion to the whole.

The instinctive response of liberal educators is to pull up the drawbridge, to preserve the college’s separateness at any price. But maybe this is not the most intelligent strategy. What are the liberal arts and sciences? They are simply fields in which knowledge is pursued disinterestedly—that is, without regard to political, economic, or practical benefit. Disinterestedness doesn’t mean that the professor is equally open to any view. Professors are hired because they have views about their subjects that exclude other views. Disinterestedness just means that whatever views a professor holds, they have been arrived at unconstrained, or as unconstrained as possible, by anything except the requirement of honesty.

But disinterestedness has its uses. What does liberal education teach? I think basically three things. The first is methods of inquiry. Undergraduate liberal education teaches students how to assemble, interpret, and evaluate data. The methods range from statistics to hermeneutics. Second, a lot of liberal education is historical. As we were discussing earlier, liberal education gives students the back-story of present arrangements, and in a way that allows them to appreciate the contingent nature of those arrangements, and so see possibilities for change. It helps them see, ideally, that the future is in their hands. And finally, a great deal of liberal education is theoretical or philosophical. It helps students apprehend the structure of assumptions that underwrite our practices.

Professional schools teach methods of inquiry (though one wonders how much students in law school learn about hermeneutics). But they do not teach historically or theoretically. The purpose of professional education—and this includes doctoral education in the liberal arts and sciences—is to deliberalize students. It is to get them to think within the channels of the profession, not to achieve a critical distance on those channels. The aim of law school, as law professors will all tell you, is to teach students how to think like lawyers.

But we don’t need to keep the peas of liberal education from mixing with the mashed potatoes of professional training. Liberal education is enormously useful in its anti-utilitarianism. Almost any liberal arts field can be made non-liberal by turning it in the direction of some practical skill with which it is already associated. English departments can become writing programs, even publishing programs; pure mathematics can become applied mathematics, even engineering; sociology shades into social work; biology shades into medicine; political science and social theory lead to law and political administration; and so on. But conversely, and more importantly, any practical field can be made liberal simply by teaching it historically or theoretically. Many economics departments refuse to offer courses in accounting, despite student demand for them. It is felt that accounting is not a liberal art. Maybe not, but one must always remember the immortal dictum: Garbage is garbage, but the history of garbage is scholarship. Accounting is a trade, but the history of accounting is a subject of disinterested inquiry—a liberal art. And the accountant who knows something about the history of accounting will be a better accountant. That knowledge pays off in the marketplace. Similarly, future lawyers benefit from learning about the philosophical aspects of the law, just as literature majors learn more about poetry by writing poems.

This gives a clue to the value-added potential of liberal education. Historical and theoretical knowledge is knowledge that helps students unearth the a prioris buried in present assumptions; it shows students the man behind the curtain; it provides a glimpse of what is outside the box. It encourages students to think for themselves. The goal of teaching students to think for themselves is not an empty sense of self-satisfaction. The goal is to enable students, after they leave college, to make a more enlightened contribution to the common good.

Master’s degree programs that are situated within institutions with a liberal arts faculty therefore ought to ensure that the special perspectives that liberal education provides be part of their curricula. I don’t think of these perspectives as humanizing the material. I think of them as enhancing the value of what is learned, of giving students a perspective that mere professional training does not normally provide. I think it makes us better professionals.

And ideally, undergraduate education in liberal arts colleges can benefit from some explicit attention to practical and real-world issues, as well. The divorce between liberalism and professionalism as educational missions rests on a superstition: that the practical is the enemy of the true. This is nonsense. Disinterestedness is perfectly consistent with practical learning, and practical learning is perfectly consistent with disinterestedness. We will not get hives if we try to bring these missions of higher education into closer alliance.

I want to raise one further issue. This is the issue of academic freedom. The principle of academic freedom was established in the United States by the founding of the American Association of University Professors, by John Dewey and Arthur Lovejoy, in 1915. The AAUP was a response to precisely those presidential Titans who put the modern American research university on the world map. It was designed to protect faculty from the power of people like David Starr Jordan and Nicholas Murray Butler.

Academic freedom does not mean that everything professors say is given equal standing. This is manifestly not the case. The academic profession is all about separating the worthwhile from the worthless—that is what we spend much of our time doing—and there are many legitimate ways to reward the former and penalize the latter. The essence of the principle of academic freedom is faculty self-governance. Academic freedom means that only faculty get to determine what is worthless and what is not. Faculty set the requirements for graduation. Faculty set the criteria for entrance into the academic profession. Faculty decide on the curriculum. The academic mission of the college or university is the faculty’s business.

It is crucial, therefore, that the faculty feel that it has authority over any academic program that its institution offers. Nimbleness is a problem for faculty governance. Faculties are not nimble. So governance can be a problem when programs are designed to adjust swiftly to market forces, or when they use part-time faculty or practitioner-professors, or when student or employer demand influences requirements and curricula. Faculty may not always be the most efficient governors of academic programs. But the first rule of academic administration is that you have to play with the cards that are in the deck. Universities are not corporations: their bottom line is the intellectual integrity of their product. It is the business of the faculty to ensure it. That is what we are here for.

Louis Menand is the Anne T. and Robert M. Bass Professor of English at Harvard University. His book The Metaphysical Club (Farrar, Straus and Giroux, 2001) won the Pulitzer Prize. This essay is based on his lecture to the Valparaiso University Faculty Workshop on 20 August 2010.

Notes

1. Clifton F. Conrad, Jennifer Grant Haworth, and Susan Bolyard Millar, A Silent Success: Master’s Education in the United States (Baltimore: Johns Hopkins University Press, 1993).
