Why Do We Use Symbols in Math?
Sometimes it is the little things that matter most, and mathematical symbols fall squarely into this category. Symbols not only enhance understanding but also provide a universally recognizable way to denote a mathematical operation or illustrate a sequence. This is not a new idea; it has been around in math since ancient times, and probably existed in one form or another even in the Stone Age.
The fundamental need in math is to represent the relationship between a sign and the number or value it refers to. Certain ideas and concepts can be clearly illustrated only by creating and using symbols. Expressing the relationship between numbers symbolically not only simplifies the process but also conveys the concept better than a wordy description could. This is where the issue of language comes in.
In simpler terms, the plus sign, the minus sign, and the multiplication sign are all symbols. We need them for a very simple reason: we have to express what we are doing clearly. When adding, it would be ridiculous to always write out "one plus one equals two" when we can express it symbolically as 1 + 1 = 2. Imagine trying to do calculus if you had to write a lengthy equation out in several paragraphs. Not only would such prose be voluminous, it would be confusing and prone to error. Besides, what language would you write it in? Remember, math is universal, but languages are incredibly varied. Simply put, without proper symbols math becomes next to impossible. In fact, you could look at it this way: the symbols of math form a mathematical language.
Math is built primarily from two things: numbers and symbols. Symbols appear in arithmetic, algebra, geometry, calculus, statistics, and beyond. A symbol is essentially a representation of a value. Decimals and fractions, for example, are symbols for parts of a whole, and they allow us to work with those parts abstractly. Without symbols you simply could not do math. Remember, much of math is abstract. How could you possibly do simple algebra, much less calculus, without the symbol "x"? Could you even imagine doing geometry without symbolic representations of triangles, squares, and rectangles? It simply cannot be done, or if it could, it would be so laborious as to be hopelessly inefficient.
It is important to understand that the key to comprehending math lies in interpreting the concepts, not in the sheer number of symbols or the role they play. To understand the concepts, however, one must have a good grasp of what the symbols mean and appreciate how much simpler they make math to understand and to reproduce. The logic of signs and symbols in math is undeniable, and it is often stressed as a vital part of what makes math a universal science.
Because symbols are so common in math, we sometimes take them for granted. We take them for granted precisely because they make math so easy to perform (indeed, they make math performable at all) that we never tip our hat to their true value. That hardly seems a fitting way to treat the very thing that makes expressing math possible. Without symbols you would be forced to go back to counting on your fingers and toes, and you don't want to do that again, do you?
Where and When Did the Symbols “+” and “–” Originate?
The symbols for the arithmetic operations of addition (plus; “+”) and subtraction (minus; “–”) are so common today that we hardly ever think about the fact that they didn’t always exist. Someone first had to invent these symbols (or at least earlier ones that evolved into the current forms), and some time surely had to pass before they were universally adopted. When I started looking into the history of these signs, I discovered to my surprise that they did not originate in antiquity. Much of what we know is based on a comprehensive and still unsurpassed study, published in 1928–1929, entitled A History of Mathematical Notations, by the Swiss-American historian of mathematics Florian Cajori (1859–1930).
The ancient Greeks expressed addition mostly by juxtaposition, but sporadically used the slash symbol “/” for addition and a semi-elliptical curve for subtraction. In the famous Egyptian Ahmes papyrus, a pair of legs walking forward marked addition, and walking away, subtraction. The Hindus, like the Greeks, usually had no mark for addition, except that “yu” was used in the Bakhshali manuscript Arithmetic (which probably dates to the third or fourth century). Towards the end of the fifteenth century, the French mathematician Chuquet (in 1484) and the Italian Pacioli (in 1494) used “p̄” or “p” (indicating plus) for addition and “m̄” or “m” (indicating minus) for subtraction.
There is little doubt that our + sign has its roots in one of the forms of the word “et,” meaning “and” in Latin. The first person who may have used the + sign as an abbreviation for et was the astronomer Nicole d’Oresme (author of The Book of the Sky and the World) in the middle of the fourteenth century. A manuscript from 1417 also has the + symbol (although the downward stroke is not quite vertical) as a descendant of one of the forms of et.
The origin of the – sign is much less clear, and speculations range all the way from hieroglyphic or Alexandrian grammar ancestry, to a bar symbol used by merchants to separate the tare from the total weight of goods.
In Italy, the symbols + and – were adopted by the astronomer Christopher Clavius (a German who lived in Rome) and the mathematicians Gloriosi and Cavalieri at the beginning of the seventeenth century.
The first appearance of + and – in English was in the 1557 book on algebra The Whetstone of Witte by the Oxford mathematician Robert Recorde, who also introduced the equal sign (though his version, “═,” was rather longer than today’s symbol). In describing the plus and minus signs Recorde wrote: “There be other 2 signes in often use of which the first is made thus + and betokeneth more: the other is thus made – and betokeneth lesse.”
As a historical curiosity, I should note that even once adopted, not everybody used precisely the same symbol for +. Johannes Widman, whose 1489 arithmetic book contained the first appearance of + and – in print, introduced it as a Greek cross + (the sign we use today), with the horizontal stroke sometimes a bit longer than the vertical one. Mathematicians such as Recorde, Harriot, and Descartes used this form. Others (e.g., Hume, Huygens, and Fermat) used the Latin cross “†,” sometimes placed horizontally, with the crossbar at one end or the other. Finally, a few (e.g., De Hortega, Halley) used a more ornamental cross.
The practices of denoting subtraction were somewhat less fanciful, but perhaps more confusing (to us at least), since instead of the simple –, German, Swiss, and Dutch books sometimes used the symbol “÷,” which we now use for division. A few seventeenth century books (e.g., by Descartes and by Mersenne) used two dots “∙∙” or three dots “∙∙∙” for subtraction.
Overall, what is perhaps most impressive in this story is the fact that symbols which first appeared in print only about five hundred years ago have become part of what is perhaps the most universal “language.” Whether you do science or finance, in Kentucky or in Siberia, you know precisely what these symbols signify.
Followers soon adopted the notation for addition and subtraction. The sixteenth-century Dutch mathematician Giel Vander Hoecke used the plus and minus signs in his Een sonderlinghe boeck in dye edel conste Arithmetica, and the Briton Robert Recorde used the same symbols in his 1557 publication, The Whetstone of Witte (Washington State Mathematics Council). It is important to note that even though the Egyptians did not use the + and – notation, the Rhind Papyrus does use a pair of legs walking to the right to mean addition and a pair of legs walking to the left to mean subtraction (Weaver and Smith). Similarly, the Greeks and Arabs never used the + sign even though they performed the operation in their daily calculations (Guedj, 81).
The division and multiplication signs have an equally interesting past. The symbol for division, "÷", called an obelus, was first used in 1659 by the Swiss mathematician Johann Heinrich Rahn in his work entitled Teutsche Algebra. The symbol was later introduced to London when the English mathematician Thomas Brancker translated Rahn's work (Cajori, A History of Mathematics, 140). Before explaining how the letter "x" came to mean multiplication, a short biography must be presented for the man who contributed so much, both directly and indirectly, to mathematical notation: William Oughtred. Oughtred lived in England during the late 1500s and into the early 1600s and was educated at Eton and King's College, Cambridge. He went on to teach some very studious pupils, one of whom was John Wallis, whose name will come up again in the history of mathematical notation (O'Connor and Robertson). Oughtred is credited with using 150 different symbols in his work; however, one of the few modern survivors is the "×" of multiplication, known as Oughtred's cross (Weaver and Smith).
It was not all smooth sailing for Oughtred, as he received some opposition from Leibniz, who wrote: "I do not like (the cross) as a symbol for multiplication, as it is easily confounded with x; … often I simply relate two quantities by an interposed dot and indicate multiplication by ZC·LM" (Weaver and Smith). It wasn't until the 1800s that the symbol "×" became popular in arithmetic. Even then, its confusion with the letter "x" in algebra led the dot to become more widely accepted for multiplication (Weaver and Smith). Oughtred's name will appear again in this history, for his contributions were significant and widespread.
Equality and Congruence
The contributions of Oughtred's fellow countryman, Robert Recorde, are also notably profound. In his 1557 book on algebra, The Whetstone of Witte, Recorde wrote about his invention of the equal sign: "To avoide the tediouse repetition of these woordes: is equalle to: I will settle as I doe often in woorke use, a paire of paralleles, or gemowe [twin] lines of one lengthe: =, bicause noe .2. thynges, can be moare equalle" (Smoller).
A similar-looking symbol, "≡", meaning "congruent," is credited to Carl Friedrich Gauss in 1801. He wrote "−16 ≡ 9 (mod. 5)," which means that negative sixteen is congruent to nine modulo five (Cajori, A History of Mathematical Notations, 34). During the same period, Adrien-Marie Legendre tried to employ his own notation for congruence. However, he was a bit careless, using "=" twice to mean congruence and once to mean equality, which, needless to say, angered Gauss (Cajori, A History of Mathematical Notations, 34). Gauss's notation stuck, and it is still used today in number theory and other branches of mathematics.
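As a modern aside (my own illustration, not part of the historical record), Gauss's congruence translates directly into a few lines of Python: a ≡ b (mod n) simply means that a − b is divisible by n.

    # Check Gauss's congruence a = b (mod n): a - b must be divisible by n.
    def congruent(a: int, b: int, n: int) -> bool:
        return (a - b) % n == 0

    # Gauss's own example: -16 = 9 (mod 5), since -16 - 9 = -25 = 5 * (-5).
    print(congruent(-16, 9, 5))  # True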
Inequalities
Three British mathematicians, Harriot, Oughtred, and Barrow, popularized the early symbols ">" and "<", meaning strictly greater than and strictly less than. They were first used in Thomas Harriot's The Analytical Arts Applied to Solving Algebraic Equations, which was published in 1631, after his death (Weaver and Smith). In 1647, Oughtred used his own pair of symbols for greater than and less than, and in 1674 Isaac Barrow used yet another notation in his Lectiones Opticae & Geometricae, meaning "A minor est quam B," that is, A is less than B (the variant symbols, reproduced in Weaver and Smith, are not shown here).
Almost one hundred years later, in 1734, the French mathematician Pierre Bouguer put a line under the inequalities to form the symbols representing less than or equal to and greater than or equal to, "≤" and "≥" (UC Davis, 2007). Bouguer's notation, like variations of the British inequality signs, is still in use today.
Factorial
The factorial, like other symbols in math, has a multinational background, with roots in Switzerland, Germany, and France. In 1751 Euler represented the product (1)(2)(3)…(m) by the letter M, and in 1774 the German Johann Bernhard Basedow used a star, writing 5* = (5)(4)(3)(2)(1). It wasn't until 1808, with Christian Kramp's contribution, that n! came to mean n(n−1)(n−2)…(3)(2)(1) (Cajori, A History of Mathematical Notations, 72).
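As another aside (my own sketch, not Kramp's), his definition of n! is easy to state in Python:

    # Kramp's factorial: n! = n(n-1)(n-2)...(3)(2)(1), with the convention 0! = 1.
    def factorial(n: int) -> int:
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    print(factorial(5))  # 120, the product Basedow wrote as 5*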
Radical
The radical sign, originating in Italy and Germany, has a Middle Eastern connection as well. It was first used by the Italian mathematician Rafael Bombelli, who lived in the sixteenth century, in his l'Algebra. He wrote that "R.q.[2]" is the square root of 2 and "R.c.[2]" is the cube root of 2 (Derbyshire, 84). During this time, Arab mathematicians had their own symbol for the square root; however, it was not widely adopted elsewhere (Weaver and Smith). It wasn't until the seventeenth century, with the help of Descartes, that the symbol we still use today was employed (Weaver and Smith).
Descartes, who lived in the early part of the 1600s, turned the German Cossists' "√" into the square root symbol we now have by putting a bar (the vinculum) over it (Derbyshire, 92-93).
Infinity
The symbol "∞", meaning infinity, was first introduced by Oughtred's student John Wallis in his 1655 book De Sectionibus Conicis (UC Davis). It is hypothesized that Wallis borrowed the symbol from a Roman numeral form that meant 1,000 (Cajori, A History of Mathematical Notations, 44). Long before this, Aristotle (384-322 BC) is noted for saying three things about infinity: i) the infinite exists in nature and can be identified only in terms of quantity, ii) if infinity exists it must be defined, and iii) infinity cannot exist in reality. From these three statements Aristotle came to the conclusion that mathematicians had no use for infinity (Guedj, 112). This idea was later refuted by the German mathematician Georg Cantor, who lived from 1845 to 1918 and is quoted as saying: "I experience true pleasure in conceiving infinity as I have, and I throw myself into it… And when I come back down toward finiteness, I see with equal clarity and beauty the two concepts [of ordinal and cardinal numbers] once more becoming one and converging in the concept of finite integer" (Guedj, 115). Cantor not only accepted infinity but used aleph (א), the first letter of the Hebrew alphabet, as the symbol for what he called the "transfinite" (Reimer; Guedj, 120). Another interesting fact is that Euler, while accepting the concept of infinity, did not use the familiar "∞" symbol, but instead wrote a sideways "s".
Constants
One of the most studied constants of all time, π, the ratio of the circumference of a circle to its diameter, 3.141592654…, has long been studied and closely approximated. The ratio was originally written by Oughtred as π/δ, where π was the periphery and δ was the diameter. In 1689, J. C. Sturm, from the University of Altdorf in Bavaria, used the letter e to represent the ratio of the length of a circle to its diameter; however, it did not catch on. π was introduced again in 1706 by William Jones. Jones was looking over the work of John Machin and found that he used π to mean the ratio of circumference to diameter. In Jones's book, Synopsis Palmariorum Matheseos, he praises "the Truly Ingenious Mr. John Machin," who states that "in the Circle, the Diameter is to the Circumference as 1 is to 16/5 − 4/239 − (1/3)(16/5³ − 4/239³) + (1/5)(16/5⁵ − 4/239⁵) − … = 3.14159… = π" (Arndt, Haenel, 166). In subsequent years Johann Bernoulli used "c" to represent the constant, and Euler used "p" in 1734 and then "c" in 1736. Euler then changed his mind again: later in 1736 he used π in his Mechanica sive motus scientia analytice exposita, and he cemented it into mathematical culture with his 1748 work entitled Introductio in analysin infinitorum (Arndt, Haenel, 166).
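Machin's series converges remarkably fast. The following Python sketch (my own, purely illustrative) evaluates the relation behind Jones's quotation, π = 16·arctan(1/5) − 4·arctan(1/239), using the arctangent power series:

    import math

    # Approximate arctan(x) by its power series x - x^3/3 + x^5/5 - ...
    def arctan_series(x: float, terms: int) -> float:
        return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

    # Machin's relation: pi = 16*arctan(1/5) - 4*arctan(1/239).
    pi_approx = 16 * arctan_series(1 / 5, 10) - 4 * arctan_series(1 / 239, 10)
    print(pi_approx)  # 3.14159265..., the digits Jones printed
    print(math.pi)    # for comparison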
Another important mathematical constant is e, 2.718281828…. This irrational number, the base of the natural logarithms studied by John Napier, was originally called M by the English mathematician Roger Cotes, who lived from 1682 to 1716 (Trinity). Newton first wrote the exponential notation with the letter a (as in a²), and Leonhard Euler replaced the "a" with an "e," most likely because e comes after a in the procession of vowels (Trinity). His "e" appeared in Mechanica and was later used by Daniel Bernoulli and Lambert (Cajori, A History of Mathematical Notations, 13). Euler's choice of letter went down in history.
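One standard way to see e emerge (my own aside, not drawn from the sources above) is through the factorial series e = 1/0! + 1/1! + 1/2! + …, which ties this constant back to Kramp's symbol:

    from math import factorial

    # e as the sum of reciprocal factorials: 1/0! + 1/1! + 1/2! + ...
    e_approx = sum(1 / factorial(k) for k in range(15))
    print(e_approx)  # 2.718281828...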
The square root of negative one is another important constant, with a simpler, less varied background. Again, Euler's choice of notation has stood the test of time. In 1777 he wrote, in his Institutionum calculi integralis, that i is the square root of negative one, and the notation has been undisputed ever since (UC Davis).
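Euler's i even survives in modern programming languages; in Python (a present-day aside, not part of the history) the same constant is written 1j:

    import cmath

    # Python spells Euler's i as 1j; squaring it gives -1 (as a complex number).
    print(1j ** 2)         # (-1+0j)
    print(cmath.sqrt(-1))  # 1j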
The mathematical symbols discussed here have long and convoluted pasts: quarreled over by mathematicians spanning the ages, and some revised at later dates. Certain representations came into existence through mercantile records, and others were born of the need to give mathematicians convenient shorthand for repetitious calculations. Although their creators have passed with time, their notations are still prevalent today and continue to play an integral part in our mathematical world.