A History in Sum: 150 Years of Mathematics at Harvard

In the twentieth century, American mathematicians began to make critical advances in a field previously dominated by Europeans. Harvard's mathematics department was at the center of these developments. A History in Sum is an inviting account of the pioneers who trailblazed a distinctly American tradition of mathematics—in algebraic geometry and topology, complex analysis, number theory, and a host of esoteric subdisciplines that have rarely been written about outside of journal articles or advanced textbooks. The heady mathematical concepts that emerged, and the men and women who shaped them, are described here in lively, accessible prose.

The story begins in 1825, when a precocious sixteen-year-old freshman, Benjamin Peirce, arrived at the College. He would become the first American to produce original mathematics—an ambition frowned upon in an era when professors largely limited themselves to teaching. Peirce's successors—William Fogg Osgood and Maxime Bôcher—undertook the task of transforming the math department into a world-class research center, attracting to the faculty such luminaries as George David Birkhoff. Birkhoff produced a dazzling body of work, while training a generation of innovators—students like Marston Morse and Hassler Whitney, who forged novel pathways in topology and other areas. Influential figures from around the world soon flocked to Harvard, some overcoming great challenges to pursue their elected calling.

A History in Sum elucidates the contributions of these extraordinary minds and makes clear why the history of the Harvard mathematics department is an essential part of the history of mathematics in America and beyond.

A HISTORY IN SUM

A History in Sum
150 YEARS OF MATHEMATICS
AT HARVARD (1825–1975)

Steve Nadis and Shing-Tung Yau

Harvard University Press
Cambridge, Massachusetts
London, England
2013

Copyright © 2013 by the President and Fellows of Harvard College
All rights reserved
Printed in the United States of America
Library of Congress Cataloging-in-Publication Data
Nadis, Steven J.
A history in sum : 150 years of mathematics at Harvard (1825–1975) /
Steve Nadis and Shing-Tung Yau.
pages cm
Includes bibliographical references and index.
ISBN 978-0-674-72500-3 (alk. paper)
1. Mathematics—Study and teaching—Massachusetts—History. 2. Harvard
University. Dept. of Mathematics. I. Yau, Shing-Tung, 1949– II. Title.
QA13.5.M43H376 2013
510.71'17444—dc23
2012049485

To Harvard mathematicians—past, present, and future—
and to mathematicians everywhere who have
contributed to this beautiful subject.

CONTENTS

Preface  ix
Prologue: The Early Days—A "Colledge" Riseth in the Cowyards  1
1. Benjamin Peirce and the Science of "Necessary Conclusions"  7
2. Osgood, Bôcher, and the Great Awakening in American Mathematics  32
3. The Dynamical Presence of George David Birkhoff  56
4. Analysis and Algebra Meet Topology: Marston Morse, Hassler Whitney, and Saunders Mac Lane  86
5. Analysis Most Complex: Lars Ahlfors Gives Function Theory a Geometric Spin  116
6. The War and Its Aftermath: Andrew Gleason, George Mackey, and an Assignation in Hilbert Space  141
7. The Europeans: Oscar Zariski, Richard Brauer, and Raoul Bott  166
Epilogue: Numbers and Beyond  204
Notes  211
Index  241

PREFACE

An esteemed colleague in our department recently asked about the motivation for writing a book about the history of mathematics at Harvard.
He didn’t see the history of our department—or of any department, for
that matter—as constituting a worthy end in itself, or at least not worthy of a book. “I don’t see history as an end,” he said. “I see it as a
means. But if it’s supposed to be an end, you really ought to explain why
you consider it important and something that others, outside of this
place, might actually find interesting.”
I must admit to being taken aback by his remarks, as I had pretty
much assumed from the get-go that the topic was, without question,
meritorious. But as the person who initiated this project—at a time
when I was still the department chair—I am grateful that he asked, since
it forced my coauthor and me to think long and hard about the book’s
premise. After a good deal of reflection on this matter, I must take issue
with my esteemed colleague, for in the tales of Harvard mathematicians
over the years and decades, I see both a means and an end. First, there’s
the potential educational value in reading about the remarkable feats of
remarkable people—people who at various junctures in history did indeed change the course of mathematics. Second, there are some very
good stories here—stories that deserve to be told about individuals who
came to mathematics through varied and singular routes, in some cases
overcoming considerable adversity to pursue their respective callings.
But beyond that, I truly believe that good mathematicians (as well
as good scientists in general) really need to understand their origins. By
looking at the contributions of the great men and women from the past,
we can trace a path showing how the important ideas in math evolved.
And by looking at that path, we may gain helpful clues regarding avenues
that are likely to be fruitful in the years ahead. One hope for this book,
to put it another way, is that in celebrating this department’s storied
past, we may pave the way toward accomplishments in the future,
thereby helping to ensure that its future may eventually be as storied as
its past, if not more so.
You can be the most brilliant mathematician in the world, but if
you try to prove a theorem without knowing anything of its history,
your chances for success may be limited. It’s pretty much a given that
one person, no matter how tremendous a genius he or she may be, cannot go far in mathematics without taking advantage of the cumulative
knowledge of those who have come before.
You might think that the theorem you just proved is the greatest
thing ever—a guaranteed “game changer” and an instant classic. But in
the grand scheme of things, it’s merely one discrete achievement—a drop
in the vast bucket we call mathematics. When you combine that “drop”
with all of your other achievements, you might have produced a small
volume of water—perhaps a cupful (or pitcherful)—altogether. That cup
of water, as I see it, doesn’t just sit in an engraved mug, alongside the
degrees and awards lining our office walls. Instead, it’s part of a great
river that’s been flowing for a long time and, I hope, will continue to flow
into the indefinite future. When I do mathematics, I like to know, whenever possible, where that river has come from and where it is headed.
Once I know that, I can have a better sense of what I ought to try next.
These are a few thoughts concerning the value that I see in delving
into the past, in mathematics as well as in other intellectual endeavors.
That still leaves the question of why we chose to write about mathematics at Harvard per se, as opposed to somewhere else, and why we consider this place significant enough to warrant such treatment. Apart
from the obvious fact that I work at Harvard, and have been fortunate
enough to have been employed here for the past quarter century, it is
also a fact that this university has helped drive the development of
mathematics in America and beyond. Until a hundred or so years ago,
the field was dominated almost exclusively by Europeans. But American
mathematicians have made their mark in the past century, and Harvard
has been at the center of many critical advances, with our scholars continuing to play a leading role.
I won’t stick my neck out and claim that Harvard is the best, which
would make me unpopular in some quarters and would be hard to prove
in any case. It might not even be true. But I think most objective observers
would agree that the school's math department is at least one of the best.
What I can say, without hesitation, is that it has produced and attracted
some tremendous mathematicians, and that’s been the case for more than
a century. It’s also an environment that has spawned some truly amazing
work, and I’ve been struck by Harvard’s illustrious tradition—and sometimes even awed by it—ever since coming here in 1987.
The rooms, libraries, and hallways of our university have been host
to the exploits of legendary individuals—folks with names like Peirce,
Osgood, Bôcher, Birkhoff, Morse, Whitney, Mac Lane, Ahlfors, Mackey,
Gleason, Zariski, Brauer, Bott, and Tate. The influence of these scholars
is still quite palpable, and their legacy is inspiring. In a half dozen or so
separate fields—such as analysis, differential geometry and topology,
algebraic geometry and algebraic topology, representation theory, group
theory, and number theory—Harvard has led the way.
In telling the story of the pioneers in these fields, my coauthor and I
aimed for something far broader than merely recounting the most notable successes to have emerged from this department. Instead, we hope
we have provided a guide to a broad swath of modern mathematics,
explaining concepts to nonspecialists that even mathematics students
are not normally introduced to until graduate-level courses. Although
lay readers will not be able to master these advanced subjects from our
comparatively brief accounts, they can at least get a flavor of the work
and perhaps get the gist of what it’s about. As far as we know, this kind
of discussion—on such abstruse topics as Stiefel-Whitney class, quasiconformal mappings, étale cohomology, and Kleinian groups—has never
before been made available to anyone but advanced students or professional mathematicians. In these pages, we hope to provide a general, and
gentle, introduction to concepts that people may have heard about but
have no idea what they really are.
But we also felt, from the very outset of this project, that we could
not write about “important” Harvard mathematicians without explaining, at some level, what these people did that makes them important far
beyond the confines of Harvard itself. Their mathematical contributions
are told as part of their life stories—an approach that we hope humanizes and enlivens what might otherwise be a dry treatment of the
subject.
A math department, of course, is more than just an assortment of
people, lumped together through a more-or-less common academic pursuit. A department has a history, too, and its origins at Harvard were
anything but grandiose. One could say that the department officially
began in 1727, when the first Hollis Professor of “Mathematics and
Natural Philosophy,” Isaac Greenwood, was appointed. (Although the
mathematics department consisted of just a single person at the time,
earlier in the school’s history there were no departments at all. At the
school’s very beginning, one instructor was responsible for teaching all
subjects.) A man of obvious mathematical acumen, and a Harvard graduate to boot, Greenwood retained his position for eleven years, until his
career at the college—and eventually his life—was done in by a weakness for alcohol.
While the department started with just one (talented but flawed)
individual almost three hundred years ago, it has evolved to the point
where it’s now a major force in a variety of mathematical subjects, despite its relatively modest size of about two dozen junior and senior
faculty members. Having grown from next to nothing to its current
stature as a world leader, Harvard math can serve as a model for what it
takes to build and maintain a first-class department—what it takes to be
successful and productive in mathematics.
Part of the department’s success, pretty much since the dawn of the
twentieth century, stems from maintaining an atmosphere that encourages research among the faculty and students—even among young students, including undergraduates. Decisions over the hiring of tenured
faculty are of paramount importance and can take years to be finalized,
with the goal being to appoint an individual who is considered the best
in a given field. Efforts are also made to keep a mix of professors, spread
across different age groups, to establish balance and a continuing sense
of renewal.
To the extent that the department has realized its goals and achieved
distinction in various fields—and perhaps even preeminence—its history
may constitute an important part of the history of mathematics in this
country and, in some cases, the world. In those instances, particularly
since the early twentieth century, when Harvard has played a trailblazing role, the school’s math department—the people it’s drawn, the avenues they’ve explored, and the advances they’ve made—has left a lasting
mark on the development of mathematics everywhere. The history of
our department, in other words, is a part—and I’d say a significant
one—of the history of contemporary mathematics as a whole.
A department, as stated before, is more than a bunch of names
listed together on a web page or catalog, or the building that these

Preface

people occupy. It’s like a family, with its own past, a unique genealogy,
and complex dynamics—a mixture of camaraderie, goodwill, and cooperation, along with the inevitable rivalries, grudges, and power struggles.
Mathematics, of course, is a broad subject, and no single person can stay
on top of it all. That’s one of the reasons we collaborate, interacting
with people all over the country and the world. But many of our most
intimate relations are with people in our own department, who may all
be housed under the same roof (which, thankfully, happens to be our
present circumstance)—save for the visiting scholars, scattered across
the globe, who periodically come to enrich this place and expose us to
fresh approaches. We learn from all of these people in important ways;
they keep us abreast of developments we’re not aware of and complement our knowledge and skills.
For example, George David Birkhoff (about whom much more will
be said) sparked the interest of his graduate students Marston Morse
and Hassler Whitney, which led them to forge novel pathways in topology. Morse, in turn, influenced Raoul Bott, who took topology into
previously unforeseen areas, some of which have since given rise to key
developments in both math and physics. Bott’s student Stephen Smale
carried this work, which built upon Morse’s theory, even further still. It’s
all part of that “river” I referred to before. The flow is not confined to a
single department, of course, but important tributaries may run through
it, gaining strength (and additional “water”) during the passage.
Charting the flow of this river—tracing it back to the headwaters
and following the many branches at their points of confluence and
divergence—is not an easy task. “Doing mathematical research is known
to be hard,” claimed Saunders Mac Lane, a mathematician who spent
much of his early career at Harvard before moving to the University of
Chicago. “Writing on the history of mathematics is not hard in the same
way, but it is difficult. Part of the difficulty is that of picking the right
things to bring out.” History is also difficult, Mac Lane wrote, “because
the connections that matter are usually numerous, often hidden, and
then subsequently neglected” (“Addendum,” in A Century of Mathematics in America, Part 3, 1989).
In keeping with the river analogy, a mathematics department, as
with the field itself, is fluid rather than fixed. People are dynamic players, constantly coming and going, which means that our story is by no
means limited to Cambridge, Massachusetts. Some of the mathematicians discussed here might have come to Harvard as undergraduates
and returned as junior faculty, only to move on to other institutions, or
they might have come for graduate school or as already-established senior faculty members. Similarly, top scholars regularly visit from other
American institutions or from Europe, Asia, and elsewhere to exchange
ideas with our students and faculty and to engage in research partnerships that sometimes span decades. The people based here, conversely,
also travel, often collaborating with other researchers spread across the
country and the globe—all of which means that our focus is far less parochial than the topic of Harvard mathematics might initially suggest.
The mathematical discussion, as a result, is not restricted to the goings-on at a particular campus but is instead more international in scope.
In taking on a sprawling topic like this, one of the big challenges
then is “picking the right things,” as Mac Lane put it. At pivotal moments in this book, we had to make some tough decisions about whose
stories would figure most prominently among many worthy contenders,
while also making decisions about the time frame under consideration.
Although we’ve tried to focus on those Harvard researchers (primarily
faculty members) that made the greatest contributions to mathematics,
there is, I admit, a degree of arbitrariness and subjectivity at play here.
Because of limitations of time, space, and knowledge (on the part of the
authors), many outstanding individuals may have been given short shrift
in our account, and for that we humbly apologize.
The time frame, too, is also somewhat arbitrary. One hundred and
fifty years—a century and a half—seems like a nice, round number;
1825 was singled out as the “official” starting point (though earlier
years get brief mention), for that was the year in which Benjamin Peirce
first came to Harvard, enrolling as a freshman at the age of sixteen. Many
regard Peirce as the first American to have produced original work in
the realm of pure mathematics. Within months of his appointment to
the Harvard faculty in 1831, for example, Peirce proved a theorem (discussed in Chapter 1) concerning the minimum number of prime factors
that an odd “perfect number”—assuming such a thing exists—must
have.
Unfortunately, university officials did not reward Peirce for these
efforts. They urged him to devote his energies, instead, toward the writing of textbooks, which was deemed the appropriate and, indeed, loftiest objective for a Harvard professor. In fact, little original mathematics
research was being done at Harvard (or at any other American univer-

Preface

sity, for that matter) until the late nineteenth and early twentieth centuries. This transition—the coming of age of mathematics at Harvard in
concert with parallel developments elsewhere in the country—is the
subject of Chapter 2. It was also taken up in an excellent 2009 article by
Steve Batterson, “Bôcher, Osgood, and the Ascendance of American Mathematics at Harvard,” which was published in the Notices of the American
Mathematical Society. While I found Batterson’s account fascinating, I
felt that it ended just as the story was getting interesting—just when our
department was starting to hit its stride. That, indeed, was part of the
motivation for this book—to write about what happened once mathematics really took hold at Harvard at the turn of the twentieth century.
As I see it, a tradition of excellence has been established here that is
self-perpetuating, having taken on a life of its own. The story is constantly unfolding, with faculty members, research fellows, graduate students, and undergraduates continuing to do impressive research, proving
new theorems, and winning prestigious awards. Since there’s no obvious
cutoff point to this work, we made the decision (again somewhat arbitrarily) to essentially cap our chronicle in the year 1975 or thereabouts—
the rationale being that it takes some time, perhaps a matter of decades, to
accurately appraise developments in mathematics. There are many theorems that people initially get excited about, but then twenty to thirty years
later we find that some of them do not loom quite so large after all.
One consequence of our decision regarding the time frame is that,
with a few exceptions, we are writing about people who are no longer in
the department and most of whom are deceased. That makes it easier
when drafting a history like this, since it’s hard to identify a person’s
most salient achievements while his or her career is still in midstream.
It’s helpful to have the benefit of time in assessing the weight of one’s
accomplishments. And there’s always the chance that, at any given moment, the next thing that he or she does may eclipse everything that
preceded it.
The downside of this strategy is that we inevitably omit a lot of
extraordinary mathematics, because it’s clear that Harvard scholars
have had many successes in the years since 1975. Perhaps, someday,
there will be a sequel to this narrative in which we read about their stories and accomplishments as well.
—Shing-Tung Yau

When first approached by my coauthor to take on this project, I must
confess that I didn’t know what I was getting into. (That, I’m embarrassed to admit, is the case with most of the literary endeavors I get involved with.) Although I’d been in the math department on countless
occasions before—having met many faculty members, students, and
postdocs during those visits—I’d never given much thought, if any, to
the setting or context in which these people worked. I had no sense of
how they fit into the bigger fabric here. Popping in and out, as I often
did, one can easily overlook the fact that this place is steeped in tradition. Upon a bit of digging, however, I was pleased to find a rousing cast
of characters, over the decades and centuries, who’d done so much—far
more than I’d realized—to further the cause of mathematics in this
country and throughout the world. I was eager to learn more about
them and what they had achieved, and I hoped that others—who, like
me, had no formal connection with this place—would find their stories
engaging as well.
A mathematician I spoke with, the editor of a prominent mathematics journal, told me that Harvard was special—“a beacon in mathematics,” as he put it. “Almost every mathematician who comes to the
U.S. from afar wants to stop at Harvard sometime during his or her
visit.” Before embarking on this project, I’d never heard anyone make a
statement like that, and the fact that someone unaffiliated with Harvard
would say that is certainly a tribute to the department. But it’s also true
that Harvard mathematics, regardless of its present standing, was not
always a beacon. For a long time, Harvard mathematicians, as well as
American mathematicians in general, were not making lasting contributions to their field. That has changed, of course, which is why my coauthor and I considered writing a book on this subject. We thought it might
be instructive to see how the department rose from humble beginnings—
like most of its American counterparts—to its current position of prominence. Our focus is not on the evolution of course curricula, innovations
in math education, or shifts in administrative policies but, rather, on
noteworthy achievements in mathematics—spectacular results, made by
fascinating people, that have stood the test of time.
That quest has involved a fair amount of research, interviews, and
general investigation—for which we have relied on the help of a large
number of people, both inside and outside the department. We’d now
like to thank as many of them as we can, apologizing to anyone whose
efforts were overlooked amidst this frantic activity, which (as the word
"frantic" implies) did not always proceed in the most orderly fashion.
Thanks are owed to Michael Artin, Michael Atiyah, Michael Barr, Ethan
Bolker, Joe Buhler, Paul Chernoff, Chen-Yu Chi, John Coates, Charles
Curtis, David Drasin, Clifford Earle, Noam Elkies, Carl Erickson, John
Franks, David Gieseker, Owen Gingerich, Daniel Goroff, Fan Chung
Graham, Robert Greene, Benedict Gross, Michael Harris, Dennis Hejhal,
Aimo Hinkkanen, Eriko Hironaka, Heisuke Hironaka, Roger Howe, Yi
Hu, Norden Huang, Lizhen Ji, Yunping Jiang, Irwin Kra, Steve Krantz,
Bill Lawvere, Peter Lax, Jun Li, Bong Lian, David Lieberman, Albert
Marden, Brian Marsden, Barry Mazur, Colin McLarty, Calvin Moore,
Dan Mostow, David Mumford, Richard Palais, Wilfried Schmid, Caroline Series, Joseph Silverman, Robert Smith, Joel Smoller, Shlomo Sternberg, Dennis Sullivan, Terence Tao, John Tate, Richard Taylor, Andrey
Todorov, Howell Tong, Henry Tye, V. S. Varadarajan, Craig Waff, Hung-Hsi Wu, Deane Yang, Lo Yang, Horng-Tzer Yau, Lai-Sang Young, and
Xin Zhouping. In particular, Antti Knowles, Jacob Lurie, and Loring Tu
were extremely generous with their time, and the authors are grateful for
their invaluable input. Maureen Armstrong, Lily Chan, Susan Gilbert,
Susan Lively, Rima Markarian, Roberta Miller, and Irene Minder provided vital administrative assistance. The librarians at the Harvard University Archives have been extremely helpful, as was Nancy Miller of
Harvard’s Birkhoff Library and Gail Oskin and others at Harvard Photographic Services. We’d also like to thank our editor, Michael Fisher,
and his colleagues at Harvard University Press—including Lauren Esdaile, Tim Jones, Karen Peláez, and Stephanie Vyce—for taking on this
project and converting our electronic files into such a handsome volume.
Brian Ostrander and the folks at Westchester Publishing Services, along
with copy editor Patricia J. Watson, helped put the finishing touches on
our book; we appreciate the services as well as the closure.
The authors benefitted from the kind support of Gerald Chan, Ronnie Chan, and the Morningside Foundation, without which we would
not have been able to complete this project. We owe them a debt of gratitude and will not forget their generosity.
Finally, we’d like to pay tribute to our families, who always have to
put up with a lot when one member of the clan decides to abandon reason and get involved in something as all-consuming as writing a book.
My coauthor thanks his wife, Yu-Yun, and his sons, Isaac and Michael; I
thank my wife, Melissa, my daughters, Juliet and Pauline, and my parents, Lorraine and Marty, for their support and unwavering patience.
They heard more about Harvard mathematics than the average person
will be exposed to, never once hinting that the contents of that discussion were anything less than riveting.
—Steve Nadis

Mathematics is the science which draws necessary conclusions.
—BENJAMIN PEIRCE, 1870

PROLOGUE

The Early Days—A “Colledge” Riseth in the Cowyards
The beginnings of Harvard University (originally called “Harvard
Colledge” in the vernacular of the day) were certainly humble, betraying
little hints of what was in store in the years, and centuries, to come. The
school was established in 1636 by decree of the Great and General
Court of the Massachusetts Bay Colony, but in that year it was more of
an abstraction than an actual institution of higher learning, consisting of
neither a building nor an instructor and not a single student. In 1637, or
thereabouts, a house and tiny parcel of cow pasture were purchased in
“Newetowne” (soon to be renamed Cambridge) from Goodman Peyntree, who had resolved to move to Connecticut, which was evidently the
fashionable thing to do at the time among his more prosperous neighbors. In that same year, the college’s first master was hired—Nathaniel
Eaton, who had been educated at the University of Franeker in the
Netherlands, where he had written a dissertation on the perennially enthralling topic of the Sabbath. At first, it was just Eaton, nine students,
and a farmhouse on little more than an acre of land. John Harvard, a
minister in nearby Charlestown, who was a friend of Eaton’s and “a godly
gentleman and lover of learning,”1 died in 1638, having bequeathed the
fledgling school half of his estate and his entire four-hundred-volume
library.
Some 375 years later, the university that bears John Harvard’s name
still stands on that former cow patch—albeit with some added real
estate—the oldest institution of higher learning in the United States. The
school’s libraries collectively hold more than sixteen million books, compared with the few hundred titles in the original collection. The number
of students has similarly grown from a handful to the more than 30,000
that are presently enrolled on a full- or part-time basis. In place of the
lone schoolmaster of the 1630s, there are now about 9,000 faculty members (including those with appointments at Harvard-affiliated teaching
hospitals), plus an additional 12,000 or so employees. Eight U.S. presidents have graduated from the university, and its faculty has produced
more than forty Nobel laureates. Harvard’s professors and its graduates
have won seven Fields Medals—sometimes called the mathematics equivalent of a Nobel Prize—and account for more than one-quarter of the
sixty-two presidents of the American Mathematical Society. In addition,
Harvard scholars have earned many other prestigious mathematics honors about which more will be said in the pages to come.
None of this, of course, could have been foretold when the school
was started by Puritan colonists, who—in contrast to today’s liberal arts
philosophy—were fearful, perhaps above all else, of leaving “an illiterate Ministry to the Churches, when our present Ministers shall lie in the
Dust.”2 The founders felt a pressing need to train new ministers and to
produce a citizenry capable of reading the Bible and hymnbooks—
among other literature—placed before them.
What the founders had in mind, in other words, was something
along the lines of a glorified Bible study school, and the “colledge” they
launched for this purpose surely got off to a rocky start. The school’s
first hire, Eaton, had a tendency to “drive home lessons with the rod.” In
1639, the second year of his tenure, he beat his assistant with "a walnut-tree cudgel big enough to have killed a horse," and the assistant might
have died had it not been for the timely intervention of a minister from
the church nearby. Eaton was hauled into court for assault and dismissed from his position in that same year—partly owing to his fondness
for corporal punishment and partly owing to his wife’s substandard cooking, which left the students ill fed and ill tempered. Evidently, she offered
them too little beef (or none at all) and bread “sometimes made of heated,
sour meal,” and—perhaps the gravest offense of all—she sometimes made
the boarders wait a week between servings of beer. In the absence of any
headmaster or teacher of any sort, the school closed its doors during the
1639–40 academic year, and students were sent elsewhere—some back to
the farms whence they came—prompting many to wonder whether the
school would ever reopen.3
Harvard’s overseers had better luck with Eaton’s successor, a University of Cambridge graduate named Henry Dunster, who put the school
on a sounder course, both fiscally and academically, during his fourteen
years as master and president. Dunster devised a three-year, three-pronged
educational plan that revolved around liberal arts, philosophies, and
languages (or “learned tongues,” as they were called). Dunster’s program remained largely intact long after he resigned in 1654, extending
well into the eighteenth century.
While the curriculum offered students a reasonably broad foundation, it was, in the words of the historian Samuel Eliot Morison, “distinctly weak” in mathematics and the natural sciences—in keeping with
the example of English universities of the era upon which Harvard was
modeled.4 (“The fountain could not rise higher than its source,” another
historian once explained, in reference to the paucity of mathematics instruction to be found on campus.)5 Since “arithmetic and geometry were
looked upon . . . as subjects fit for mechanics rather than men of learning,” Morison adds,6 exposure to these subjects was limited to pupils
in the first three quarters of their third and final year of study, with
the fourth quarter of that year reserved for astronomy. Students met at
10 a.m. on Mondays and Tuesdays for the privilege of honing their mathematical skills. These times were apparently etched in stone, or etched
into the school’s bylaws, which stated that the hours were not subject to
change “unless experience shall show cause to alter.”7
In the first one hundred or so years, mathematics instructors, who
held the title of tutors, had little formal training in the subject—consistent
with the general sentiment that the subject itself hardly warranted a more
serious investment. Students, similarly, had to demonstrate proficiency in
Latin (“sufficient to understand Tully, or any like classical author”) to
gain admittance to Harvard but faced no entrance examinations in
mathematics and “were required to know not even the multiplication
table.”8
Evidence suggests there was little change in mathematics education
at Harvard for another eighty to ninety years after Dunster introduced
his original course of study. “Arithmetic and a little geometry and astronomy constituted the sum total of the college instruction in the exact
sciences,” wrote Florian Cajori in an 1890 review of mathematics training in this country. “Applicants for the master’s degree only had to go
over the same ground more thoroughly.”9
Algebra, for example, probably did not show up in the Harvard
curriculum until the 1720s or 1730s, Cajori contended, even though the
French mathematician and philosopher René Descartes introduced
modern algebraic notation in 1637. A textbook on the subject, Elements
of That Mathematical Art Commonly Called Algebra by an English
schoolteacher, John Kersey, was published in two volumes in 1673 and
1674, nearly a half century before Harvard saw fit to expose its students
to algebra.
Based on senior thesis titles of the day, the mathematics scholarship
that took place was hardly earth-shattering, Morison writes, “consisting
largely of such obvious propositions as: ‘Prime numbers are indivisible
by any factor’ and ‘In any triangle the greater side subtends the greater
angle.’ ”10 It seems evident that no new earth was being tilled, nor new
treasures dug up, in this agrarian milieu.
A turning point came in 1726, when the first mathematics professor, Isaac Greenwood, was appointed. Greenwood, a Harvard graduate,
did much to raise the level of pedagogy in science and math, offering
private lessons on various advanced topics. He also gave a series of lectures and demonstrations on the discoveries of Isaac Newton, who coincidentally died in the same year, 1727, that Greenwood became the first
occupant of a newly endowed chair, the Hollis Professorship of Mathematics and Natural Philosophy—named after Thomas Hollis, a wealthy
London-based merchant and Harvard benefactor. Greenwood was responsible for many other firsts, as well, authoring the first mathematics
text written in English by a native-born American and being the first
mathematics professor to teach calculus in the colonies. He also taught
algebra and was possibly the first to introduce the subject to Harvard
students.
Despite these virtues, Greenwood let his taste for alcohol get the
better of him. After repeated bouts of drunkenness and failures to abstain from liquor, despite being granted many opportunities to mend his
ways, he was permanently discharged from his position in 1738, fired
for “gross intemperance.”11 His dismissal was described in an early history of the university as an “excision of a diseased limb from the venerable trunk of Harvard.”12 Greenwood became a traveling lecturer after
leaving Harvard and, sadly, drank himself to death seven years later.
John Winthrop, Greenwood’s twenty-four-year-old successor, fared
considerably better, holding the Hollis professorship for forty-one years.
He was “the first important scientist or productive scholar on the teaching staff at Harvard College,” according to Morison, who compared
Winthrop with Benjamin Franklin in terms of versatility: “With the time
and means at his disposal, he was able to carry investigation deeper than
Franklin on many subjects." Winthrop studied electricity, sunspots, and
seismology, "proving that earthquakes were purely natural phenomena,
and not manifestations of divine wrath,” thereby incurring the (undivine)
wrath of some clergymen.13
Although Winthrop was a first-rate scientist and, by all accounts, an
excellent teacher, Julian Coolidge (a member of Harvard’s math faculty
from 1899 to 1940) could not say “that his interest in pure mathematics
was outstanding”—perhaps a symptom of the times.14 As a general rule,
Cajori noted, “the study of pure mathematics met with no appreciation
and encouragement. Original work in abstract mathematics would have
been looked upon as useless speculations of idle dreamers.”15
The next two occupants of the Hollis mathematics professorship,
Samuel Williams and Samuel Webber, were less distinguished, according
to Coolidge, who claimed “there was certainly a retrocession in . . . interest in mathematics during these years.”16 Williams, who conducted
research on astronomy, meteorology, and magnetism, was an active socialite with an extravagant lifestyle that put him in serious debt—and
ultimately out of his Harvard job in 1788. Webber, described as “a
man without friends or enemies,” assumed the Hollis chair in 1789, becoming president of the college in 1806, though Morison characterized
him as “perhaps the most colorless President in our history.” He died in
1810, long before his dreams of establishing an astronomical observatory at Harvard were realized, with his only tangible accomplishment
on that front being rather modest: the construction of an “erect, declining sundial.”17
In 1806, the Hollis chair was offered to Nathaniel Bowditch, a self-taught mathematician of growing repute, who turned down the offer to
pursue other interests. A year later, the mathematics and natural philosophy chair was filled by John Farrar, a scientist and Harvard graduate
who would later transform our conception of hurricanes, writing that
the great gale that struck New England in 1815 “appears to have been a
moving vortex and not the rushing forward of a great body of the atmosphere.”18 Although Farrar did not complete any original mathematics
research of note, he was an inspired lecturer who brought modern mathematics into the Harvard curriculum, personally translating the works
of French mathematicians such as Jean-Baptiste Biot, Étienne Bézout,
Sylvestre Lacroix, and Adrien-Marie Legendre.
Harvard undergraduates began studying Farrar’s formulation of
Bézout's calculus in 1824. A year later, a precocious freshman named
Benjamin Peirce, who had already studied mathematics with Bowditch,
enrolled in the school. His father, also named Benjamin Peirce, was the
university librarian who would soon write the history of Harvard.19 His
son, meanwhile, would soon rewrite the history of mathematics—both
at Harvard and beyond.

1
BENJAMIN PEIRCE AND THE SCIENCE OF
"NECESSARY CONCLUSIONS"

Benjamin Peirce came to Harvard at the age of sixteen and essentially
never left, all the while clinging to the heretical notion that mathematicians ought to do original mathematics, which is to say, they should
prove new theorems and solve problems that have never been solved
before. That attitude, sadly, was not part of the orthodoxy at Harvard,
nor was it embraced at practically any institution of higher learning in
the United States. At Harvard and elsewhere, the emphasis was on
teaching math and learning math but not on doing math. This approach
never sat well with Peirce, who was unable, or unwilling, to be just a
passive recipient of mathematical doctrine. He felt, and rightfully so,
that he had something more to contribute to the field than just being a
good reader and expositor. Consequently, he was driven to advance
mathematical knowledge and disseminate his findings, even though the
university he worked for did not share his enthusiasm for research or
mathematics journals. (The “publish or perish” ethic, evidently, had not
yet taken hold.)
When Peirce was just twenty-three years old, newly installed as a
tutor at Harvard, he published a proof about perfect numbers: positive
integers that are equal to the sum of all of their factors smaller than themselves, including 1.
(Six, for instance, is a perfect number: its factors, 3, 2, and 1, add up to 6.
Twenty-eight is another example: 28 = 14 + 7 + 4 + 2 + 1.) All the perfect numbers known at that time—and still to this day—were even.
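
For readers who want to see the definition in action, here is a minimal Python sketch (our own illustration, not taken from the book; the helper name is_perfect and the search bound are ours):

    def is_perfect(n):
        # A number is perfect if it equals the sum of its factors smaller than itself.
        return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

    # The four perfect numbers below 10,000, every one of them even:
    print([n for n in range(2, 10000) if is_perfect(n)])  # [6, 28, 496, 8128]
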
Peirce wondered whether odd perfect numbers might exist, and his proof,
which is discussed later in this chapter, placed some constraints on their
existence. Despite the fact that this work turned out to be more than fifty
years ahead of its time, it did not garner international acclaim—or any
notice, for that matter—mainly because the leading European scholars
did not take American mathematics journals seriously, nor did they expect them to publish anything of note. Nevertheless, Peirce’s accomplishment did signal, to anyone who might have been paying attention,
that a new era of mathematics was starting at Harvard—one that the
school’s administration could not suppress, even though it did nothing
to encourage Peirce in this direction.
Peirce had, however, received strong encouragement from Nathaniel Bowditch, who was considered one of the preeminent mathematicians in the United States. Bowditch helped cultivate Peirce’s interest in
“real,” cutting-edge mathematics, and had Bowditch made a different
career decision, he might have played an even more direct role in his
protégée’s education. In 1806, Harvard offered Bowditch the prestigious
Hollis Chair of Mathematics and Natural Philosophy. Bowditch turned
down that offer, just as he turned down subsequent offers from West
Point and the University of Virginia. But he did not entirely turn his back
on Harvard; he later served as a fellow to the Harvard Corporation during a term that overlapped with Peirce’s years there as a student, tutor, and
faculty member.
As leading mathematicians go, Bowditch was something of an anomaly. He was almost entirely self-educated; he had never gone to college,
nor did he attend high school. Instead, he left school at the age of ten to
join the workforce, assisting his father in the cooper trade, making barrels, casks, and other wooden vessels. He helped his father for two years
and then joined the shipping industry. After voyaging to distant places
like Sumatra and the Philippines, he returned to Massachusetts where he
entered the insurance business, while resuming his mathematical studies
on the side. Although his exposure to formal education was brief, he had
learned enough math on his own to know that a university could never
offer him as much money as he came to earn in his job as president of
the Essex Fire and Marine Insurance Company.
Bowditch nevertheless continued to pursue his interest in mathematics, focusing on celestial mechanics—the branch of astronomy that
involves the motions of stars, planets, and other celestial objects. By
1806, the year Bowditch was recruited by Harvard, he had read all four
volumes of Pierre-Simon Laplace’s treatise Mécanique Céleste. (The fifth
volume came out in 1825.) Bowditch, in fact, did a good deal more than
just read it; he set about the task of translating the first four volumes of
Laplace’s great work. His efforts went beyond mere translation—no
mean task in itself—and included a detailed commentary that helped
bring Laplace within the grasp of American astronomers and mathematicians, who, for the most part, had not been able to understand his
treatise before. Bowditch not only brought Laplace’s work up to date
but also filled in many steps that the original author had omitted. “I
never came across one of Laplace’s ‘thus it plainly appears’ without feeling sure that I have hours of hard work before me to fill up the chasm
and find out and show how it plainly appears,” Bowditch said.1 The
French mathematician Adrien-Marie Legendre praised Bowditch’s efforts: “Your work is not merely a translation with a commentary; I regard
it as a new edition, augmented and improved, and such a one as might
have come from the hands of the author himself if he had consulted
his true interest, that is, if he had been solicitously studious of being
clear.”2
Peirce, who was born in Salem, Massachusetts, in 1809, would
probably have met Bowditch eventually, given Peirce’s manifest talent in
mathematics and Bowditch’s growing reputation in the field. But they
met earlier than they might have otherwise because Peirce went to a
grammar school in Salem where he was a classmate and friend of Henry
Ingersoll Bowditch, Nathaniel’s son. The story has it that Henry showed
Peirce a mathematical problem that his father had been working on.
Peirce uncovered an error, which the son brought to his father’s attention.
“Bring me the boy who corrects my mathematics,” Bowditch reportedly
said, and their relationship blossomed from there.3
Bowditch moved from Salem to Boston in 1823. Two years later,
the sixteen-year-old Peirce moved to nearby Cambridge to enter Harvard, following in the footsteps of his father, Benjamin Peirce Sr., who
attended the college and later worked as the school librarian and historian. By the time the younger Peirce arrived on campus, he already had
a mentor—not some street-smart upperclassman, but Bowditch himself,
who was then a nationally known figure. Hard at work on his Laplace
translation at the time, Bowditch enlisted the keen eye and proofreading
services of the young Peirce. The improvements suggested by Peirce were
reportedly “numerous.”4 The first volume of Bowditch’s translation was
published in 1829, the year that Peirce graduated from Harvard. The
other three volumes were published in 1832, 1834, and 1839, respectively. (Independently, a separate translation of Laplace’s work came out
in 1831. That book, titled The Mechanism of the Heavens, was written
by Mary Somerville, a British woman who, like Bowditch, had mostly
taught herself mathematics and endeavored to make Laplace accessible.
Her book, too, went beyond a mere translation, containing detailed explanations that put his treatise into more familiar language.)5
Peirce continued to review Bowditch’s manuscripts during his tenure as a Harvard professor. “Whenever one hundred and twenty pages
were printed, Dr. Bowditch had them bound in a pamphlet form and
sent them to Professor Peirce, who, in this manner, read the work for the
first time,” wrote Nathaniel Ingersoll Bowditch, another of Nathaniel
Bowditch’s sons, in a memoir about his father. “He returned the pages
with the list of errata, which were then corrected with a pen or otherwise in every copy of the whole edition.”6
In this way, Peirce was exposed from an early age to mathematics
more advanced than could be found in any American curriculum—
writings that other undergraduates simply were not privy to. Scholars
have speculated that the excitement of reading and mastering Laplace’s
work may have drawn Peirce to mathematical research. It is evident that
Laplace’s writings made a deep impression on him. Decades later, in the
pre-Civil War era, a student told Peirce that he risked incarceration for
helping to rescue a runaway slave; the only consolation about being
locked up in prison, the student said, was that he would finally have
time to read Laplace’s magnum opus. “In that case, I sincerely wish you
may be,” Peirce quipped.7
Peirce had, of course, an even deeper reverence for his mentor than
he did for Laplace. Bowditch, in turn, was convinced that his young
charge would go far, claiming that, as an undergraduate, Peirce already
knew more mathematics than John Farrar, who then held the Hollis
professorship.8 Peirce returned the favor decades later, calling Bowditch
the “father of American geometry” in a treatise he wrote on analytical
mechanics that was dedicated to his mentor.9 Before long, a similar
term, “father of American mathematics,” was applied to Peirce (by the
British mathematician Arthur Cayley, among others). Through the
force of his personality and the originality of his work, Peirce came to
be known as the leading American mathematician of his generation and,
more generally, as the initiator of mathematical research at American
universities.10
On that score, Peirce faced little competition. Before he entered the
scene, no one thought that “mathematical research was one of the things
for which a mathematical department existed,” Harvard mathematician
Julian Coolidge wrote in 1924. It was certainly not a job prerequisite
since there were not nearly as many people qualified to conduct high-level research, or inclined to do so, as there were available teaching slots.
“Today it is commonplace in all the leading universities,” Coolidge
added. “Peirce stood alone—a mountain peak whose absolute height
might be hard to measure, but which towered above all the surrounding
country.”11
Despite the abilities Peirce exhibited at an early age, it was not obvious that he would have the opportunity to attain the aforementioned
heights. After receiving his bachelor’s degree from Harvard in 1829,
Peirce had essentially no options for advanced studies of mathematics
in the United States, because no Ph.D. programs in math existed at the
time. One could go to Europe—Göttingen, Germany, was a popular destination for mathematically inclined young Americans—but this was not
a realistic possibility for Peirce, mainly for financial reasons. It appears
that his family could not afford the luxury of sending him to school
abroad; instead, he had to start earning a living soon after graduation.
He taught for two years at Round Hill School, a preparatory school
in Northampton, Massachusetts, before returning to Harvard in 1831
to work as a tutor. But with Farrar, the Hollis chair, away in Europe at
the time, Peirce was immediately placed at the head of the department. For health reasons, Farrar never resumed his full duties. Peirce
continued to run the department, first as University Professor of Mathematics and Natural Philosophy, starting in 1833, and later as the Perkins Professor of Mathematics and Astronomy, starting in 1842. He retained the Perkins chair until he died in 1880—almost fifty years after
joining the Harvard faculty.
Within months of his original appointment, Peirce submitted his
aforementioned proof on perfect numbers to the New York Mathematical Diary, one of many journals to which he contributed and through which he had gained a growing reputation as a talent to be reckoned with.12
Peirce took the position that people needed to solve actual mathematical
problems in order to earn the title of mathematician. “We are too prone
to consider the mere reader of mathematics as a mathematician, whereas
he does not much more deserve the name than the reader of poetry deserves that of poet,” wrote Peirce, by way of promoting Mathematical
Miscellany, a journal that he contributed to frequently and of which he
eventually (though briefly) became editor.13
His 1832 paper on perfect numbers concerned a topic that had attracted attention since antiquity. Euclid proved in the Elements, which he
wrote around 300 b.c., that if 2^n − 1 is a prime number, then 2^(n−1)(2^n − 1)
is a perfect number. Roughly 2,000 years later, Leonhard Euler proved
that every even perfect number must be of this form. “But I have never
seen it satisfactorily demonstrated that this form includes all perfect
numbers,” Peirce wrote.14 He was alluding to the question of whether
odd perfect numbers might exist. This was among the oldest open problems in mathematics, and it remains unsolved to this day. But Peirce
gave a partial answer to that question, proving that an odd perfect
number—if there is one—must have at least four distinct prime factors.
A perfect number with fewer than four prime factors (such as 6) has to
be even.
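
To make Euclid's construction and Euler's converse concrete, here is a short Python sketch (again our own illustration, not drawn from Peirce's paper or from the book) that generates the first few even perfect numbers by testing which values of 2^n − 1 are prime:

    def is_prime(p):
        # Simple trial division; adequate for the small exponents used here.
        if p < 2:
            return False
        d = 2
        while d * d <= p:
            if p % d == 0:
                return False
            d += 1
        return True

    # Euclid: if 2**n - 1 is prime, then 2**(n - 1) * (2**n - 1) is perfect.
    # Euler: every even perfect number arises in exactly this way.
    for n in range(2, 14):
        mersenne = 2**n - 1
        if is_prime(mersenne):
            print(n, 2**(n - 1) * mersenne)
    # Output: n = 2, 3, 5, 7, 13 paired with 6, 28, 496, 8128, and 33550336.

No analogous generator is known for odd perfect numbers, which is precisely the gap that Peirce's theorem begins to address.
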
In achieving this result, modest though it may seem, Peirce was far
ahead of his contemporaries. The British mathematician James J. Sylvester,
who happened to be a good friend of Peirce's, and the French mathematician Cl. Servais proved the exact same thing—that any odd perfect number must
have at least four distinct prime factors—in 1888, fifty-six years after
Peirce had established that very fact.15 Sylvester and Servais clearly had
not seen Peirce’s paper, which was published in the Mathematical Diary—a
journal that was not widely read in the United States, let alone followed
by many readers outside the country. Peirce would run into this problem
again and again, as he had set up shop in what was regarded by many
Europeans as a mathematical backwater of the highest rank.
Later, in 1888, Sylvester proved that an odd perfect number must
have at least five distinct prime factors and subsequently conjectured
that there must be at least six. As of this writing, more than a century
later, the minimum number of distinct prime factors now stands at nine.16
If nothing else, Peirce started a cottage industry that persists to this day.
And even after all this time, no one yet knows whether odd perfect numbers exist. But odd numbers up through 10300 have already been checked
without success, making the prospect of finding an odd perfect number
seem increasingly dim.
Curiously, Peirce’s employers did not appreciate his accomplishment: proving a new theorem in number theory that related to a legendary problem. Harvard president Josiah Quincy pushed Peirce in a more
conventional direction: writing textbooks. Peirce, however, had greater
ambitions than that, asking whether the Harvard Corporation wanted
him to “undertake a task that must engross so much time and is so elementary in its nature and so unworthy of one that aspires to anything
higher in science.” But the corporation agreed with Quincy’s directive.
The notion of doing original research in mathematics was so novel then
as to have been practically unheard of in the United States; hardly anyone was qualified to even attempt it, which is why Peirce's entreaties fell
on deaf ears. Instead, he published seven textbooks over the next ten
years, on such subjects as plane trigonometry, spherical trigonometry,
sound, plane and solid geometry, algebra, and An Elementary Treatise
on Curves, Functions, and Forces (in two volumes). His text on analytical mechanics came much later, in 1855. And in keeping with the line
taken by the Harvard administration, he published no further papers on
number theory, though he did not stop pursuing original work in mathematics and science.17
While his textbooks were original in presentation and mathematically elegant, they were too concise for most students—largely stripped
bare of exposition—which made them difficult to understand. Simply put,
the texts were too demanding for all but the most exceptional students.
They were “so full of novelties,” explained former Harvard president
Thomas Hill, “that they never became widely popular, except, perhaps,
the trigonometry; but they have had a permanent influence upon mathematical teaching in this country; most of their novelties have now become
commonplaces in all textbooks.”18
Peirce’s 1855 treatment of analytical mechanics, for example, did
attract some favorable notice. Soon after its publication, an American
student in Germany asked an eminent German professor what book he
should read on that subject. The professor replied, “There is nothing
fresher and nothing more valuable than your own Peirce’s quarto.”19
Despite such praise from those well versed in mathematics, the
works were generally unpopular among students, some of whom wrote
in their books: “He who steals my Peirce steals trash.”20 In fact, enough
students complained about the impenetrability of Peirce’s texts that they
were investigated by the Harvard Committee for Examination in Mathematics, which concluded that “the textbooks were abstract and difficult, that few could comprehend them without much explanation, that
Peirce’s works were symmetrical and elegant, and could be perused with
pleasure by the adult mind, but that books for young students should be
more simple.” The report, in other words, was all over the map, but in
the end Peirce’s textbooks continued to be used in Harvard classrooms
for many more years.21
Peirce’s lectures were a mixed bag as well. The average person
found them almost impossible to follow—some saying that the speed of
Peirce’s mental processes made it difficult for him to put things in a way
that others could comprehend. “In his explanations, he would take giant
strides,” said Dr. A. P. Peabody, a tutor during the 1832–33 academic
year. “And his frequent ‘you see’ indicated what he saw clearly, but that
of which his pupils could hardly get a glimpse.”22
For advanced students who could keep pace with Peirce’s rapid
train of thought, the talks could be inspiring. “Although we rarely could
follow him, we sat up and took notice,” said William Elwood Byerly,
Peirce’s student both in college and in graduate school. Byerly earned
the first Ph.D. granted by Harvard, becoming an assistant professor at
the university in 1876.23
A Cambridge woman had a similar experience when she attended
one of Peirce’s lectures. “I could not understand much that he said; but
it was splendid,” she reported. “The only thing I now remember in the
whole lecture is this—‘Incline the mind to an angle of 45 degrees, and
periodicity becomes non-periodicity, and the ideal becomes real.’ ”24
While Ralph Waldo Emerson once asserted that “to be great is to be
misunderstood,” Peirce’s example at Harvard offered a variant on that
dictum: to be great is to be incomprehensible. At a presentation before
the National Academy of Sciences, Peirce once spent an hour filling a
blackboard with dense equations. Upon turning to see the perplexed
faces among the attendees, he said, “There is only one member of the
Academy who can understand my work, and he is in South America.”25
Coolidge regarded Peirce as a rousing, if opaque, lecturer: “His great
mathematical talent and originality of thought, combined with a total
inability to put anything clearly, produced among his contemporaries a
feeling of awe that amounted almost to dread.”26
Not only were the lectures difficult to follow, Byerly noted, but also
they were often ill prepared. “The work with which he rapidly covered
the blackboard was very illegible, marred with frequent erasures, and
not infrequent mistakes (he worked too fast for accuracy). When the
college bell announced the close of the hour . . . , we filed out, leaving
him abstractedly staring at his work, still with chalk and eraser in his
hands, entirely oblivious of his departing class.”27
Despite the ostensible drawbacks to Peirce’s pedagogy—the words
“wretched”28 and “lamentable”29 have been applied—former Harvard
president Abbott Lawrence Lowell said that in his fifty-year association
with the college, “Benjamin Peirce still impresses me as the most massive
intellect with which I have ever come into close contact, and as being the
most profoundly inspiring teacher that I ever had."30 Yet Lowell did admit that Peirce's blackboard presentations left something to be desired:
“He was impatient of detail, and sometimes the result would not come
out right; but instead of going over his work to find the error, he would
rub it out, saying that he had made a mistake in a sign somewhere, and
that we should find it when we went over our notes.”31
His “boardside” (as opposed to bedside) manner was also suspect,
according to Oliver Wendell Holmes, who was a college classmate of
Peirce’s as well as a fellow faculty member. “If a question interested him,
he would praise the questioner, and answer it in a way, giving his own
interpretation to the question,” Holmes said. “If he did not like the form
of the student’s question, or the manner in which it was asked, he would
not answer it at all.”32
In an anecdote of this sort recounted in the Harvard Crimson, a
student took Peirce up on his offer to answer questions after class, in the
event that any of his explanations on higher mathematics were not crystal clear. But after raising his question, the student received no response.
He repeated the question and still received no response. “ ‘But did you
not invite us to ask you questions in regard to your lecture, sir?’ inquired
the student. ‘Oh, certainly,’ replied Professor Peirce, with an air of surprise, ‘but I meant intelligent questions.’ ”33
Students attending the college from 1860 on had some relief, because those who found Peirce’s soliloquies lacking in clarity could get
help from his oldest son, James Mills Peirce, who filled in for his father
that year and became an assistant professor a year later. (Another of his
four sons, Charles Sanders Peirce, ultimately became far more famous
than James, with achievements rivaling—if not exceeding—those of his
father.) As Benjamin Peirce lightened his course load late in his career,
James took over more and more of the teaching responsibilities in the
department, ultimately taking over the Perkins chair after his father’s
death. The quality of instruction improved under the leadership of
James, who was “a much better teacher even though he lacked the spark
of originality” and made “negligible” contributions to the field of mathematics. But his contributions to the Harvard department were deeply
felt, Coolidge wrote.34
Long before relief for students arrived in the form of his son James,
the senior Peirce exhibited little patience for the mathematically dim. He
preferred to spend his time with more talented pupils. Unfortunately,
individuals of that sort tended to be scarce at Harvard, as well as at
other American universities of the time. In 1835, Peirce proposed that
students should not have to take mathematics beyond their first year unless they so chose. The university adopted his plan in 1838. “This allowed Peirce to teach more advanced mathematics than was being
taught elsewhere in the United States,” wrote Peirce biographer Edward
Hogan.35
Peirce believed, further, that professors should spend no more than two hours a day on teaching, leaving the balance of their time free for research and original investigation. Years
later, Peirce found a strong ally in Harvard president Thomas Hill—a
former student who, according to historian Samuel Eliot Morison, “was
said to have been the only undergraduate of his generation to comprehend” Peirce’s higher mathematical demonstrations.36 Like Peirce, Hill
believed that “our best Professors are so much confined with the onerous duties of teaching and preparing lectures that they have no time nor
strength for private study and the advancement of science and learning.”
The system’s failing was especially pronounced in mathematics education,
Hill said, owing to the “inverted method” adopted in so many schools of
“exercising the memory, loading it with details . . . , but not illuminating
the imagination with principles to guide its flight.”37 Peirce, of course,
concurred wholeheartedly with Hill’s assessment: “I cannot believe it to
be injudicious to reduce the time which the instructor is to devote to his
formal teaching to a couple of hours each day, or even less.”38
One area in which Peirce spent considerable time outside of class
was astronomy, which was not uncommon for mathematicians in that
era. Both Laplace and Bowditch, as mentioned, put much effort into
that area. Another contemporary, Carl Friedrich Gauss, widely regarded
as one of the greatest mathematicians of all time, spent nearly the last
fifty years of his life as a professor of astronomy at the Göttingen Observatory. Peirce himself played a pivotal role in the founding of the Harvard College Observatory in 1839, although it did not become a fully
functioning observatory until 1847, when the first telescope, the fifteen-inch "great refractor," was installed. "This was the first great and efficient
observatory to be established in the United States,” wrote T. J. J. See, an
astronomer at the University of Chicago. “And its value to American science may be judged from the fact that only a few years before Dr.
Bowditch had declared: ‘America has as yet no observatory worthy of
mention.’ ”39
In the years since the observatory’s founding, astronomy took up
an expanding portion of Peirce’s time and attention. In fact, many of his
contemporaries thought of him first and foremost as an astronomer.
Peirce took advantage, for instance, of the "Great Comet of 1843" (formally known as C/1843 D1 and 1843 I), which was visible at midday, to
give a series of public lectures aimed at sparking public interest in astronomy. At the same time, Peirce embarked on elaborate calculations
regarding the comet’s orbit. This exercise would prove handy when
Peirce engaged in even more involved calculations concerning the orbit
of the newly discovered planet Neptune, a high-profile and contentious
matter.
The story burst to the fore in 1846, when Johann Gottfried Galle
of the Berlin Observatory pointed his telescope to a predetermined spot
in the sky and discovered Neptune, the eighth planet from the sun. Prior
to Galle’s observations, two mathematician-astronomers, Urbain Jean
Joseph Le Verrier of France and John Couch Adams of England, had
both predicted the position in the sky of a more distant, and as yet unknown, planet in the solar system that was responsible for perturbations
in the orbit of Uranus. Of the two, Le Verrier was fortunate in having
access to an astronomer, Galle, who was well equipped to take on the
job. Sure enough, Galle found a planet in the expected place, to within a
degree or so of Le Verrier’s and Adams’s predicted values. Because Galle’s
observations came at Le Verrier’s behest, most credit for the detection
fell to him rather than to Adams.
This discovery was one of the most celebrated events in the history
of science—the first time mathematics had been used to correctly ascertain the position of an unknown planet, opening up a whole new approach in astronomy. But matters did not end there, for it is not enough
simply to pick out the right spot in the sky—or the right “ephemeris.”
One would also like to know the orbit of the body in question. And
this is where Peirce entered the fray. The brash Yankee praised the work
of Le Verrier and Adams, which led to Neptune’s discovery, while suggesting that they got the right spot but the wrong planet, so to speak.
Le Verrier initially believed that Neptune was about twice as massive as
Uranus and lay about thirty-six astronomical units from the sun (one
astronomical unit being the mean distance between Earth and the sun).
After further analysis, Peirce supported the view that Neptune was less
massive and less distant, lying thirty or so astronomical units from the
sun—a conclusion drawn, in part, from the computations of Sears Cook
Walker, an American astronomer based at the U.S. Naval Observatory.
There was more than one possible solution, Peirce argued, including the
one Le Verrier had originally advocated and the solution that Peirce had
later come around to. In 1846, Peirce contended, these two solutions
happened to line up in more or less exactly the same place, which is why
he labeled Galle’s discovery, based on Le Verrier’s prediction, “a happy
accident.”40
This, as one might imagine, left Le Verrier none too pleased. He was
among the world’s preeminent mathematical astronomers, whereas Peirce,
the wild-eyed American, was a relative unknown, especially in Europe,
which ruled science in that day. Peirce took a bold stance that some
would have characterized as outrageous. When Peirce announced at a
meeting of the American Academy of Arts and Sciences in Cambridge
that the discovery of Neptune had been accidental, Harvard president
Edward Everett, who was present at the meeting, urged that a declaration so utterly improbable should not be presented to the world without
the academy’s backing. “It may be utterly improbable,” Peirce replied,
“but one thing is more improbable—that the law of gravitation and the
truth of mathematical formulas should fail.”41
Peirce held his ground without backing off. But the question remains:
Was he right? Was Le Verrier really the beneficiary of a happy accident, as
Peirce insisted? There are many ways of looking at this skirmish. In the
end, Peirce’s suggestion that Neptune was located about thirty astronomical units from the sun rather than thirty-six astronomical units turned out
to be much closer to the truth. But Peirce’s statements came after Le Verrier’s and Adams’s predictions and Galle’s detection, as well as coming
after subsequent work by Walker and others. Given the data initially
available to Le Verrier and Adams, it simply was not possible to work
out the orbital elements right off the bat. In the beginning, there is always a range of solutions—a range of possible distances and masses.
One homes in on the precise orbit through an iterative process, as data accumulate showing the planet's position at various junctures in
history. Neptune’s orbit, moreover, depended on Neptune’s mass, which
could not be calculated directly until a satellite of Neptune was discovered. In the end, many of Peirce’s hunches were borne out, but that did
not really take anything away from the achievements of Le Verrier and
Adams. They worked out the ephemeris in an acceptable manner and
could not possibly have pinned down the orbit from the outset. To some
extent, both sides won, and none really lost.
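The iterative refinement described above can be seen in a deliberately simplified setting. The Python sketch below is a toy, with invented observations and a circular-orbit assumption, and bears no resemblance to the perturbation analyses that Le Verrier, Adams, Walker, and Peirce actually carried out; it simply fits an orbital radius and a starting longitude to a short arc of positional data, beginning from a guess of thirty-six astronomical units and settling back toward thirty:

    import numpy as np
    from scipy.optimize import least_squares

    # Toy model: a body on a circular orbit of radius r (in astronomical
    # units) has period P = r**1.5 years (Kepler's third law in solar-system
    # units), so its heliocentric longitude advances linearly with time.
    def longitude(t, r, lam0):
        return lam0 + 360.0 * t / r**1.5   # degrees

    # Invented "observations": a planet at 30 AU, tracked for three years
    # with a little measurement noise.
    rng = np.random.default_rng(0)
    t_obs = np.linspace(0.0, 3.0, 12)
    lam_obs = longitude(t_obs, 30.0, 10.0) + rng.normal(0.0, 0.01, t_obs.size)

    def residuals(params):
        r, lam0 = params
        return longitude(t_obs, r, lam0) - lam_obs

    # Start from a deliberately wrong guess of 36 AU and refine iteratively.
    fit = least_squares(residuals, x0=[36.0, 0.0])
    print(fit.x)   # the fitted radius settles back near 30 AU

Shorten the observation window or increase the noise and the recovered radius wanders badly, a toy version of why the earliest solutions for Neptune could accommodate such a wide range of distances and masses.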
Yet most Americans believed that Peirce came out ahead in the exchange, even if he had, in reality, just played to a draw.42 Perhaps some
validation can be drawn from the fact that four years after Galle’s
discovery, Peirce was admitted to the Royal Astronomical Society of
London, the first American so elected since his mentor, Nathaniel Bowditch,
had been similarly honored in 1818. That Peirce had taken on the scientific
elite from Europe, emerging unscathed from those debates, “gave standing
to both the scholar and his country,” writes Emory mathematician and
historian Steve Batterson. “The latter was important to him.”43
As Hogan puts it, “Peirce was a scientific patriot. He saw the glory
of America not in terms of Manifest Destiny or military might, but in
terms of the nation’s becoming a world leader in science and education.”
In the 1840s, many Americans were trying to win respect for their nation’s scientists, and none worked harder to that end than Peirce himself.44 Although Peirce was by no means lacking in ego, much of the
work he did was not for self-aggrandizement, because he often did not
bother to publish completed papers, or else let others take the credit.
Beyond his individual accomplishments, Peirce was intent on showing
that, when it came to science, Americans deserved a place on the world
stage. Neither he nor his fellow countrymen were yet shaping mathematics on an international basis, but he was at least helping them get
into the game.
With his credentials more firmly established in mathematical astronomy, in 1849 Peirce was named consulting geometer and astronomer to the American Ephemeris and Nautical Almanac. He made many
and varied contributions to the publication over the next thirty years.
For example, Peirce devised novel ways to take advantage of occultations of the Pleiades—instances when the moon obscures our views of
the famous star cluster (also known as the Seven Sisters). He showed
how detailed observations of the cluster, under these conditions, could
reveal features about the shape and surface of both Earth and the moon.
In the 1850s, Peirce turned his attention to the rings of Saturn. Late
in the eighteenth century, Laplace had suggested that Saturn had a large
number of solid rings. In 1850, the American astronomer George P. Bond
discovered a gap in the rings of Saturn during observations made with
Harvard’s great refractor. Bond believed the rings must be fluid rather
than solid, as Laplace and others had maintained. Peirce undertook a
detailed mathematical analysis of the rings’ constitution, concluding
that they were fluid. He showed, moreover, that the presence of Saturn
alone would not keep the rings stable. But Saturn, along with its eight
satellites, could keep a fluid ring in equilibrium. Peirce presented his
findings in 1851 at a national science meeting in Cincinnati. A Boston
newspaper lauded Peirce for putting forth “the most important communication yet presented” at the conference, as well as making “the most
important contribution to astronomical science . . . since the discovery
of Neptune.”45
Unfortunately, Peirce’s conclusion turned out to be incorrect. In
1859, the great physicist James Clerk Maxwell published a paper, “On
the Stability of the Motion of Saturn’s Rings,” in which he argued that
the rings were neither solid nor fluid but were instead composed of a
countless number of small particles independently orbiting the planet.
Maxwell’s theory was confirmed in 1895, when astronomers James E.
Keeler and William W. Campbell showed that the inner portion of the
rings orbits more rapidly than the outer portion. Even though Peirce’s
idea did not prevail in the end, his analyses helped advance science by
spurring Maxwell. In an 1857 letter to the physicist William Thomson
(better known as Lord Kelvin), Maxwell wrote: “As for the rigid ring I
ought to first speak of Prof. Peirce. He communicated a large mathematical paper to the American Academy on the Constitution of Ring, but up
to the present year he has no intention of publishing it.”46 Lord Kelvin
evidently had a high opinion of the aforementioned professor, for in an
address before the British Association for the Advancement of Science
(of which Lord Kelvin became president in 1871) he called Peirce the
“Founder of High Mathematics in America.”47
At this stage in his career, Peirce was already shifting his attention
to geodesy—the branch of science concerned with measuring, monitoring, and representing Earth’s shape and size, while also determining
the precise location of points on the surface. From 1852 to 1867,
Peirce served as director of longitude determinations for the U.S. Coast
Survey—a job that technically involved making east-west measurements
but was, of course, more broadly defined. At that time, Alexander Dallas
Bache, a charismatic individual who dominated U.S. science in that era,
headed the survey. Peirce and Bache became close friends, and when
Bache died in 1867, Peirce took over his position as superintendent. A
survey of Alaska, which had recently been purchased from Russia, was
undertaken during Peirce’s term—a period that saw a general improvement in scientific methods. Peirce surprised many people by his success
at the survey’s helm, given his lack of administrative experience, and he
used his scientific reputation to secure more funding from Congress for
basic research than even Bache had been able to obtain. Both Peirce and
Bache had established the U.S. Coast Survey as the most important federal agency for American science by supporting the research of people
both inside the survey and outside of it.
Charles Sanders Peirce, Benjamin’s son, worked intermittently on
the survey from 1859 to 1891, but his accomplishments extended far
beyond that. A brilliant polymath, Charles made contributions in mathematics, astronomy, chemistry, and other areas, but his principal achievements lay in the realm of logic and philosophy. Indeed, the philosopher
Paul Weiss called Charles “the most original and versatile of American
philosophers and America’s greatest logician.”48 In some circles, Benjamin
Peirce’s greatest contribution to scholarly thought was bringing his son
Charles into this world and helping to train him. While there is no denying that Charles made deep and lasting marks in his field—arguably
leaving behind a greater and more durable legacy than that of his
father—it is also true that Benjamin Peirce was himself one of those
larger-than-life figures who could not help but attract the spotlight. And
during his lifetime he cast a very big shadow indeed.
As an intellectual, with no false modesty about his cognitive abilities, the elder Peirce was not shy about sharing his opinion on all manner of subjects—science, arts, politics, and literature. As such, he did not
feel obliged to restrict his pronouncements to mathematics alone. In one
departure from his conventional duties with the math department and
U.S. Coast Survey, Peirce attended a séance in Boston in 1857 to judge
whether the participants could successfully communicate with spirits—a
proposition of which he was highly skeptical. Attending as an observer charged with appraising the proceedings, he was not surprised when the three-day event yielded no positive results. On a separate
occasion, Peirce investigated the spiritualistic claims of a woman who
said she came in contact with a universal force called “Od” in the presence of powerful magnets. In an experiment, Peirce showed those claims
to be fraudulent: the woman exhibited the same reaction when exposed
to a genuine magnet as she did when exposed to a piece of wood that
was painted like a magnet.49
The séance- and spiritualism-busting activities were part of a broader
effort that Peirce was involved in, along with other prominent friends
and scientists, including Bache, as well as Louis Agassiz, an eminent zoologist and geologist at Harvard, and Joseph Henry, one of the country’s
leading scientists who served as the first secretary of the Smithsonian
Institution. This group, which was part social club and part lobbying
arm, called itself the Lazzaroni. Its principal aims were to rid American science of quacks and charlatans and, ultimately, to make the
country the world leader in science. The name was intended to be
humorous—a play on the Italian term lazzaroni, which referred to street
beggars, since their American counterparts saw themselves as constantly
begging to secure financial support for the nation’s fledgling scientific
establishment. The group’s collective efforts led to the founding of the
American Association for the Advancement of Science in 1848, of
which Henry (1849), Bache (1850), Agassiz (1851), and Peirce (1852)
all served as early presidents. The Lazzaroni also helped start the National Academy of Sciences, and Peirce was one of its most active early
members.
In another departure from his usual routine, Peirce submitted testimony in 1867 as an expert witness in a celebrated court case. At issue
was a will that left $2 million to Hetty Robinson, the niece of Sylvia Ann
Howland, who died in 1865. The executor of the estate, Thomas
Mandell, contested Robinson’s claim, maintaining that the will was a forgery. Two of three signatures on the will, Mandell argued, had been traced.
Peirce and his son Charles (who was then working on the U.S. Coast Survey with his father) both testified on Mandell’s behalf, using statistical
reasoning to demonstrate that the signatures were so close—with the "down strokes" matching so precisely—that the odds of such close agreement occurring by chance between genuine signatures, rather than by tracing, were one in 2,666,000,000,000,000,000,000.
“Professor Peirce’s demeanor and reputation as a mathematician must
have been sufficiently intimidating to deter any serious mathematical rebuttal,” wrote Paul Meier and Sandy Zabell in the Journal of the American Statistical Association. “He was made to confess a lack of any general expertise in judging handwriting, but he was not cross-examined at
all on the numerical and mathematical parts of his testimony.”50
Agassiz and Oliver Wendell Holmes testified for Robinson, saying
they could find no signs of pencil marks that would have been evidence
of tracing. Mandell prevailed in the end, although the extent to which
the Peirces’ arguments influenced the final verdict is not clear. (If nothing
else, they are likely to have shaken confidence in the validity of the signatures on the will.) “Although Peirce’s methods would be criticized by
modern mathematicians, they were an early and ingenious use of statistical methods applied to a practical problem,” Hogan writes. “Peirce’s
testimony may well be the earliest instance of probabilistic and statistical evidence in American law.”51
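The arithmetic behind a figure of that size is simply a product of probabilities under an assumption of independence. The Python lines below use placeholder values (a per-stroke agreement probability of one in five and a count of thirty downstrokes, chosen for illustration rather than taken from the 1867 testimony) to show how such astronomical odds arise, and where the assumption that later critics questioned enters:

    from fractions import Fraction

    # Hypothetical inputs, not the figures from the Howland trial: assume
    # that any one downstroke of two genuine signatures coincides with
    # probability p, and that k downstrokes are compared.
    p = Fraction(1, 5)
    k = 30

    # If the strokes were independent, the chance that all k coincide
    # between two genuine signatures would be p**k.
    chance_all_match = p ** k
    print(f"about 1 in {int(1 / chance_all_match):,}")   # roughly 1 in 9.3e20

    # Strokes in a single person's hand are correlated, so multiplying the
    # per-stroke probabilities this way overstates the improbability.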
While Peirce squared off against Agassiz on this particular occasion, they were united in their general desire to promote the national
science agenda. Hogan believes that Peirce’s scientific accomplishments,
primarily in astronomy and mathematics, were overshadowed by “his
efforts to organize American scientists into a professional body and to
make educational reforms at Harvard . . . The development of an institutional base, centered in the universities, and the emergence of a professional scientific community were the most important developments in
American science during the 19th century.”52
Although Peirce devoted countless hours to this cause, he did not
neglect his personal interests altogether. Of those, he considered pure
mathematics his first love, even though he was not able to spend as
much time on it as he might have liked. Indeed, only a small fraction of
his published papers were in that area, with the bulk lying in more applied realms. That, however, may merely reflect the practical demands
placed on his career, rather than his true intellectual leanings.
It is often said that a mathematician makes his or her most significant contribution early in life—typically by the age of thirty or so. Peirce
defied the conventional wisdom in this area, as he did in many other
areas, saving his greatest triumph in mathematics until 1870, when he
hit the ripe age of sixty-one. That was the year in which he presented his
treatise Linear Associative Algebra.
Apparently, Peirce had saved the best for last. That is the prevailing
judgment of history, and Peirce felt that way as well. In a letter he sent
in 1870, along with the manuscript, to George Bancroft, the U.S. ambassador to Germany and cofounder of Round Hill School (Peirce’s
first employer), Peirce spoke modestly of the enclosed work: “Humble
though it be, it is that upon which my future reputation must chiefly
rest.”53 In his dedication to Linear Associative Algebra, Peirce characterized the tract as “the pleasantest mathematical effort of my life. In no
other have I seemed to myself to have received so full a reward for my
mental labor in the novelty and breadth of the results. I presume that to
the uninitiated the formulae will appear cold and cheerless. But let it be
remembered that, like other mathematical formulae, they find their origin in the divine source of all geometry. Whether I shall have the satisfaction of taking part in their exposition, or whether that will remain
for some more profound expositor, will be seen in the future.”54
In some ways, Linear Associative Algebra seems to have come out
of the blue, because Peirce had not done much original work in algebra
before. In another sense, though, Peirce’s efforts were not entirely surprising, since they grew out of Sir William Rowan Hamilton’s invention of “quaternions” in 1843. Hamilton delivered his first lectures on
quaternions in 1848, and Peirce was deeply impressed. “I wish I was
young again,” he said, though he was still in his thirties, “that I might
get such power in using it as only a young man can get.”55 The subject
clearly occupied his thoughts for many years, and he worked on it when
he could, even interspersing his original contributions to algebra with
his administrative responsibilities as head of the U.S. Coast Survey.
“Every now and then I cover a sheet of paper with diagrams, or formulae, or figures,” he told U.S. Treasury secretary Hugh McCulloch, “and
I am happy to say that this work again relieves me from the petty annoyances which are sometimes caused by my receiving friends who do
but upset.”56
Before discussing Hamilton’s work on quaternions and the effect it
had on Peirce, it is worth saying a few words about algebra in general
and how it began to change, and open up, in the early nineteenth century. That is when British mathematicians began transforming mathematics from the “science of quantity” to a much more liberalized and
abstract system of thought. In the 1830s, for example, George Peacock
of Cambridge University proposed that, in addition to arithmetical algebra, which involves the basic arithmetical operations and nonnegative
numbers, there was also symbolic algebra: “The science which treats of
the combinations of arbitrary signs and symbols by means of defined
though arbitrary laws.” Arithmetical algebra, Peacock averred, was really just a special case of the more general symbolic algebra.57
Hamilton carried this a step further by introducing complex numbers into his algebra. Complex numbers assume the form of a + bi,
where a and b are real numbers and i, the square root of −1, is an
imaginary number. Quaternions are four-dimensional representations of
the form (a,b,c,d) or a + bi + cj + dk, where a, b, c, and d are real and i,
j, and k are imaginary. These numbers obey various rules, such as
i² = j² = k² = −1, and ij = −ji. Whereas Peacock held that the same rules
applied to both symbolic and arithmetic algebra, this was not the case in
Hamilton’s system: in arithmetic algebra, a × b is always equal to b × a,
in adherence to the commutative law of multiplication, but the commutative law does not always apply to quaternions, since i × j, by definition, does not equal j × i. Hamilton believed that algebraists were not
bound to set rules but were instead free to write their own rules as they
saw fit. “Hamilton’s work on quaternions revealed what has since come
to be known as the freedom of mathematics, essentially the right of
mathematicians to determine somewhat arbitrarily the rules of mathematics,” writes Helena Pycior, a historian at the University of Wisconsin,
Milwaukee.58
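Hamilton's rules are concrete enough to check by machine. The small Python sketch below was written for this account (it is not anything of Hamilton's or Peirce's); it multiplies quaternions using the standard table i² = j² = k² = −1, ij = k, jk = i, ki = j, and confirms that the product is associative but not commutative:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Quaternion:
        a: float  # real part
        b: float  # coefficient of i
        c: float  # coefficient of j
        d: float  # coefficient of k

        def __mul__(self, q):
            # Hamilton's product, expanded from i*i = j*j = k*k = -1,
            # i*j = k, j*k = i, k*i = j (reversed products pick up a sign).
            return Quaternion(
                self.a*q.a - self.b*q.b - self.c*q.c - self.d*q.d,
                self.a*q.b + self.b*q.a + self.c*q.d - self.d*q.c,
                self.a*q.c - self.b*q.d + self.c*q.a + self.d*q.b,
                self.a*q.d + self.b*q.c - self.c*q.b + self.d*q.a,
            )

    i = Quaternion(0, 1, 0, 0)
    j = Quaternion(0, 0, 1, 0)
    k = Quaternion(0, 0, 0, 1)

    print(i * j)                        # equals k
    print(j * i)                        # equals -k, so i*j != j*i
    print(i * i)                        # equals -1
    print((i * j) * k == i * (j * k))   # True: multiplication is associative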
Fascinated by quaternions, Peirce discussed them in a course in
1848 (the year of Hamilton’s first lectures on the topic), and he often
called them his favorite subject. If anything, he was too enamored with
quaternions, in the opinion of his son Charles, who complained that his
father was a “creature of feeling” with “a superstitious reverence for the
square root of minus one.”59 But Benjamin’s preoccupation eventually
paid off. He identified and provided multiplication tables for 163 different algebras up to the “sixth order,” that is, containing six or fewer
terms. These algebraic systems obeyed the associative law, a(bc) = (ab)c,
and the distributive law, a(b + c) = ab + ac, but were not required to obey the commutative law. Of these 163 algebras, only three had been in common use at the
time: ordinary (arithmetic) algebra, the calculus of Newton and Leibniz,
and Hamilton’s quaternions.
Peirce went beyond Hamilton by insisting that the coefficients of
his algebras, such as a, b, c, and d, could be complex numbers and need
not be limited to real numbers. “Peirce became so infected with the freedom of mathematics,” Pycior writes, that he accused Hamilton of
“mathematical conservatism.” As a result, his algebras “diverged even
farther from arithmetic than the quaternions.”60
Peirce had a great insight, demonstrating that "in every linear associative algebra, there is at least one idempotent or nilpotent expression."61 A nilpotent is defined as the element a for which a positive integer n (greater than or equal to 2) exists such that aⁿ = 0. An idempotent is the element b for which a positive integer m (greater than or equal to 2) exists such that bᵐ = b.
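Square matrices form a linear associative algebra, so they offer a convenient place to see these definitions in action. The 2 × 2 examples below are chosen purely for illustration and appear nowhere in Peirce's memoir, but they exhibit a nilpotent element, an idempotent element, and, as a by-product, the divisors of zero discussed next:

    import numpy as np

    # A nilpotent element: N is nonzero, yet N @ N is the zero matrix,
    # so n = 2 in the definition.
    N = np.array([[0, 1],
                  [0, 0]])
    print(N @ N)                      # [[0 0], [0 0]]

    # An idempotent element: E @ E equals E itself, so m = 2.
    E = np.array([[1, 0],
                  [0, 0]])
    print(np.array_equal(E @ E, E))   # True

    # Because N @ N = 0 with N nonzero, N is a divisor of zero,
    # something impossible in ordinary arithmetic algebra.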
The nilpotent was a somewhat controversial notion in algebra since
a, by definition, is a divisor of zero, which is forbidden in the standard
arithmetic version of algebra. Divisors of zero are nonzero numbers, a
and b, such that a × b = 0. Hamilton’s system did not admit the possibility of divisors of zero, whereas Peirce’s algebras allowed for it and were
therefore even more general. (The introduction of divisors of zero was,
in fact, a consequence of Peirce’s use of complex coefficients rather than
just real number coefficients.) As if anticipating criticism on this score,
Peirce wrote in his paper:
However incapable of interpretation the nilfactorial and nilpotent
expressions may appear, they are obviously an essential element of
the calculus of linear algebras. Unwillingness to accept them has retarded the progress of discovery and the investigation of quantitative
algebras. But the idempotent basis seems to be equally essential to actual interpretation. The purely nilpotent algebra may therefore be regarded as an ideal abstraction, which requires the introduction of an
idempotent basis to give it any position in the real universe.62

By introducing new concepts in such a comprehensive manner,
Peirce laid out a broad new terrain for future study—scores of algebras
that had never before been considered, let alone explored. Through that
accomplishment, Peirce laid “a just claim to be considered an eminent
mathematician,” according to George David Birkhoff, whom many regarded as Harvard’s dominant mathematician in the first half of the
twentieth century, as well as, by many accounts, the greatest American
mathematician of his time.63 “Peirce saw more deeply into the essence of
quaternions than his contemporaries, and so was able to take a higher,
more abstract point of view, which was algebraic rather than geometric.” Peirce thus appears, Birkhoff adds, “as a kind of father of pure
mathematics in our country.”64 That said, it should be stressed that the
most groundbreaking work in the field at the time was still, overwhelmingly, occurring in Europe.
As a historian, Pycior agrees with Birkhoff’s appraisal, calling Linear Associative Algebra “a pioneer work” in American mathematics.
“Peirce deserves recognition not only as a founding father of American
mathematics,” she writes, “but also as a founding father of modern abstract algebra.”65
The recognition that Peirce received for his work in algebra, like
that bestowed by Birkhoff and Pycior, was long in coming, partly because of the manner in which his results were communicated to the
mathematical community. At first Peirce presented his work orally, reading his “memoir” before the National Academy of Sciences in 1870,
with earlier installments coming before then. This was not the ideal way
of conveying such abstruse material. Agassiz, who sat through an earlier
presentation on the subject, spoke for other confused audience members
in saying: “I have listened to my friend with great attention and have
failed to comprehend a single word of what he has said. If I did not
know him to be a man of great mind . . . , I could have imagined that I
was listening to the vagaries of a madman.”66
Peirce had a better chance of getting his message across by publishing his paper, but that did not happen—except on an extremely limited
basis—during his lifetime. The National Academy of Sciences intended
to publish it but never got around to it. One hundred lithograph copies
were produced, however, with the help of the U.S. Coast Survey staff.
The work was done, in particular, by “a lady without mathematical
training but possessing a fine hand . . . who could both read his ghastly
script and write out the entire text 12 pages at a time on lithograph
stones.”67 Most of the copies were sent to Peirce’s U.S. colleagues and
friends, who, unfortunately, lacked the expertise to appreciate his accomplishment. The paper had a reasonably good reception in England,
where William Spottiswoode, the outgoing president of the London
Mathematical Society, summarized Peirce’s results in an 1872 talk to the
society. But Peirce was unable to get the mathematical community in
Germany—then the world leader in the field—to take stock of his work.
In 1881, a year after Peirce died, Linear Associative Algebra finally
appeared in its entirety in a more accessible venue, the American Journal
of Mathematics—an outcome that occurred through the initiative of his
son Charles. An introduction described the work as one that “may almost be entitled to take rank as the Principia of the philosophical study
of the laws of algebraic operation,” thus comparing Peirce’s feat—
perhaps hyperbolically—to Isaac Newton’s famous treatise, which contained his laws of motion and gravitation.68
Linear Associative Algebra was Peirce’s first major contribution to
pure mathematics, as most of his work prior to that time had been in
astronomy, physics, and geodesy. It was, arguably, the first major contribution to mathematics by any American. Bowditch's translation of
Laplace, though a prodigious feat, was still, at heart, an explanation
of another’s work. And Peirce’s paper on odd perfect numbers was not
of the same stature, since it merely put a constraint—albeit the first
constraint—on the problem, unlike his later paper, which helped lay the
groundwork for future inquiries into abstract algebra.
Ironically, just as Sylvester and Servais had duplicated Peirce’s
efforts on odd perfect numbers, because his work was not widely
known, so too was much of Peirce's algebraic work duplicated
twenty years later by two German mathematicians, Eduard Study and
Georg Scheffers, who had either overlooked Peirce’s paper or not taken
it seriously. In two papers published in 1902, Columbia University
mathematician Herbert Hawkes maintained that the theorems stated by
Peirce “are in every case true, though in some cases his proofs are invalid.” Hawkes went through the proofs, making corrections or clarifications when necessary, to place the entire work “on a clear and rigorous basis . . . Using Peirce’s principles as a foundation,” he argued, “we
can deduce a method more powerful than those hitherto given for enumerating all number systems of the types Scheffers has considered.” The
reason Peirce’s memoir was “subject to neglect or adverse criticism,”
Hawkes speculated, owed in part to “the extreme generality of the point
of view from which his memoir sprang, namely a ‘philosophic study of
the laws of algebraic operation.’ ”69
Writing in 1881, the Yale mathematician Hubert Anson Newton
also suggested that while the paper broke new ground, it definitely had
a philosophical edge, offering what he thought would become “the solid
basis of a wide extension of the laws of thinking.”70
It is doubtful that Peirce would have resented the characterization
of his work as “philosophic,” since the very first sentence of the paper—as
well as the first two pages—provides a general discussion of what mathematics is all about. He starts the discourse by writing:
Mathematics is the science which draws necessary conclusions. This
definition of mathematics is wider than that which is ordinarily given,
and by which its range is limited to quantitative research . . . The
sphere of mathematics is here extended, in accordance with the derivation of its name, to all demonstrative research, so as to include all
knowledge strictly capable of dogmatic teaching . . . Mathematics,
under this definition, belongs to every enquiry, moral as well as physical. Even the rules of logic, by which it is rigidly bound, could not be
deduced without its aid.71

Peirce thus rejected the notion that mathematics is merely the science of
quantity in favor of the much broader notion of mathematics being a
science based on inference and deduction. In an earlier address to the
American Association for the Advancement of Science, Peirce called mathematics “the great master-key, which unlocks every door of knowledge,
and without which no discovery—no discovery, which deserves the name,
which is law and not isolated fact—has been or ever can be made.”72
Peirce’s views about mathematics were deeply colored by his fervent religious convictions. He considered mathematics one of the highest forms of human expression and, as such, a manifestation of God’s
infinite wisdom. Like Plato and Aristotle before him, Peirce believed that
“God wrote the universe in the language of mathematics.”73 Peirce
made no attempt, moreover, to conceal his religious feelings and was,
instead, quite open about them. Indeed, in the introductory paragraph
to his 1870 paper, he noted that the mathematical formulas contained
therein had a divine origin—something that he considered to be true of
mathematics in general. While hoping to have a role in the “exposition”
and advancement of these ideas, he admitted that his ultimate contribution to this effort remained to be seen.74
As it turned out, Peirce did not participate much in the further investigation of the algebras that he had laid out, so systematically, in his
paper, though the “exposition” of which he spoke has been central to
the development of modern abstract algebra. Peirce clearly had other
interests in mind, though some were related to his work on algebraic
systems. He gave a series of lectures late in his life, posthumously published in a volume entitled Ideality in the Physical Sciences, in which he
argued that every physical phenomenon could be expressed through
mathematics and, conversely, that every mathematical idea had an expression in the physical world.75 “There is no physical manifestation
which has not its ideal representation in the mind of math,” he affirmed.76 During the 1879–80 academic year, Peirce was engaged in the
study of cosmology—or “cosmical physics,” as he called it. He planned
to teach a course on the subject in the following year, but his health
failed him. Peirce died on October 6, 1880, before he had the chance
to explain the universe—its origins, formation, and evolution—to
those Harvard students capable of following his often vexing style of
discourse.
Peirce approached death stoically, girded by his lifelong faith. He
expressed no great sorrow, for instance, when his father died in 1831 at
the age of fifty-three. Had he lived longer, Peirce wrote to his father’s
physician, “would he have been happier? Thank God, no. He is in
heaven, and I will not regret that no human power could ward off that
fatal blow.”77
Peirce was similarly resigned to the prospect of his own mortality.
“Distinguished throughout his life by his freedom from the usual abhorrence of death, which he never permitted himself either to mourn when
it came to others, or to dread for himself, he kept this characteristic
temper to the end,” wrote F. P. Matz, a professor of mathematics and
astronomy at New Windsor College in Maryland, in 1895. “Two days
before he ceased to breathe, it struggled into utterance in a few faintly whispered words, which expressed and earnestly inculcated a cheerful
and complete acceptance of the will of God.”78
Although Peirce may not have feared the exercise of God’s will,
when it finally came to him, he was apprehensive about the accolades
that were likely to be bestowed upon him after his death. More than a
decade and a half earlier, in the spring of 1864, Peirce thought he was
terminally ill, requesting of his friend Bache that, “if I should be taken,
dearest Chief, exert all your influence to save me from eulogistic biographers.”79 But Peirce’s worst fears were realized on this score when the
end came in 1880, with eulogies and memorial tributes pouring in from
all quarters. In fact, an entire book (albeit a small one), Benjamin Peirce:
A Memorial Collection, was published a year later to honor the man
who had served Harvard for nearly half a century. The contents of the
volume, the editor wrote, “but feebly reflect the life of one who ranks
among the few men whose names have been imperishably recorded
in the annals of science and religion in this century.” The collection
contained poems, sermons, tributes, and obituaries from various newspapers and magazines, including the Springfield Republican, which
declared: “America has nothing to regret in his career but that it must
now be closed; while her people have much to learn from his long and
honorable life.”80
In 1880, the Atlantic Monthly published a poem by Peirce’s Harvard classmate Oliver Wendell Holmes:
Through voids unknown to worlds unseen
His clear vision rose unseen . . .
How vast the workroom where he brought
The viewless implements of thought!
The wit, how subtle, how profound,
That Nature’s tangled webs unwound.81

“By the death of Professor Benjamin Peirce, last week, the University
loses its greatest light in science, and perhaps the most distinguished of
its professors,” the Harvard Crimson wrote.82 In the wake of Peirce’s
death, the Harvard mathematics department entered a "period of retrocession," according to Coolidge, "a great slump in . . . scientific activity"
that would take years to dig out of. The good news, he wrote, is that “a
renaissance” of mathematics at Harvard would come more than a decade later, led by newly appointed faculty members whom he referred to
as the “great twin brethren . . . A momentous revival in American mathematics,” as Coolidge put it, was about to begin.83
2
OSGOOD, BÔCHER, AND THE GREAT AWAKENING IN AMERICAN MATHEMATICS

As the first person to conduct significant mathematical research at the
school, Benjamin Peirce was truly a Harvard pioneer, even if he was
practically the only one doing such work, and even if he had to fit in
those efforts between the chores that actually paid his bills: teaching and
writing textbooks. The next leap forward for the Harvard mathematics department was made around the turn of the twentieth century by
William Fogg Osgood and Maxime Bôcher, who turned Harvard into a
powerhouse in the field of analysis—a branch of pure mathematics that
includes calculus, as well as the study of functions and limits. But from
a long-term perspective, two other accomplishments of Osgood and
Bôcher may loom even larger: they brought mathematical research to
the center of the department's mission, rather than leaving it the hobby of an incorrigible nonconformist like Peirce, and what is more, they
transformed Harvard’s department into what was arguably the strongest in the country, in the face of considerable competition, during a
time of great progress in American mathematics.1
That transition did not come easily, however. Following Peirce’s
death in 1880, Harvard mathematics underwent a major decline on the
research front (although perhaps not on the teaching front, since molding young minds—especially rather ordinary young minds—was never
one of Peirce’s fortes). “The state of mathematical scholarship at Harvard in the 1880s . . . had reverted back to that at the beginning of the
century,” explains Steve Batterson, a mathematician and math historian
at Emory University. “No one was