1% Copyright (c) 2000  The PARI Group
2%
3% This file is part of the PARI/\kbd{gp} documentation
4%
5% Permission is granted to copy, distribute and/or modify this document
6% under the terms of the GNU General Public License
7
8% This should be compiled with plain TeX
9\def\TITLE{A Tutorial for Pari/GP}
10\input parimacro.tex
11
12\chapno=0
13\begintitle
14\vskip2.5truecm
15\centerline{\mine A Tutorial}
16\vskip1.truecm
17\centerline{\mine for}
18\vskip1.truecm
19\centerline{\mine PARI / GP}
20\vskip1.truecm
21\centerline{\sectiontitlebf (version \vers)}
22\vskip1.truecm
23\authors
24\endtitle
25
26\copyrightpage
27\tableofcontents
28
29\noindent This booklet is a guided tour and a tutorial to the \kbd{gp}
30calculator. Many examples will be given, but each time a new function is
31used, the reader should look at the appropriate section in the \emph{User's
32Manual to PARI/GP} for detailed explanations. This chapter can be read
33independently, for example to get acquainted with the possibilities of
34\kbd{gp} without having to read the whole manual. At this point.
35
36\section{Greetings!}
37
38So you are sitting in front of your workstation (or terminal, or PC\dots),
39and you type \kbd{gp} to get the program started (or click on the relevant
40icon, or select some menu item). It says hello in its particular manner, and
41then waits for you after its \kbd{prompt}, initially \kbd{?} (or something
42like {\bf gp}~\kbd{>}). Type
43\bprog
44  2 + 2
45@eprog\noindent What happens? Maybe not what you expect. First of all, of
46course, you should tell \kbd{gp} that your input is finished, and this is
47done by hitting the \kbd{Return} (or \kbd{Newline}, or \kbd{Enter}) key. If
48you do exactly this, you will get the expected answer. However some of you
49may be used to other systems like Gap, Macsyma, Magma or Maple. In this case,
50you will have subconsciously ended the line with a semicolon ``\kbd{;}''
51before hitting \kbd{Return}, since this is how it is done on those systems.
52In that case, you will simply see \kbd{gp} answering you with a smug
53expression, i.e.~a new prompt and no answer!  This is because a semicolon at
54the end of a line tells \kbd{gp} not to print the result (it is still stored
55in the result history). You will certainly want to use this feature if the
56output is several pages long. Try
57\bprog
58  27 * 37
59@eprog\noindent Wow! even multiplication works. Actually, maybe those
60spaces are not necessary after all. Let's try \kbd{27*37}. Seems to be ok. We
61will still insert them in this document since it makes things easier to read,
62but as \kbd{gp} does not care about them, you don't have to type them all.
63
64Now this session is getting lengthy, so the second thing one needs to learn
65is to quit. Each system has its quit signal. In \kbd{gp}, you can use
66\kbd{quit} or \b{q} (backslash q), the \kbd{q} being of course for quit.
67Try it.
68
69Now you've done it! You're out of \kbd{gp}, so how do you want to continue
70studying this tutorial? Get back in please.
71
72Let's get to more serious stuff. I seem to remember that the decimal
73expansion of $1/7$ has some interesting properties. Let's see what \kbd{gp}
74has to say about this. Type
75\bprog
76  1 / 7
77@eprog\noindent
78What? This computer is making fun of me, it just spits back to me my own
79input, that's not what I want!
80
81Now stop complaining, and think a little. Mathematically, $1/7$ is an element
82of the field $\Q$ of rational numbers, so how else but $1/7$ can the computer
83give the answer to you? Well maybe $2/14$ or $7^{-1}$, but why complicate
84matters? Seriously, the basic point here is that PARI, hence \kbd{gp}, will
85almost always try to give you a result which is as precise as possible (we
86will see why ``almost'' later). Hence since here the result can be
87represented exactly, that's what it gives you.
88
89But I still want the decimal expansion of $1/7$. No problem. Type one of
90the following:
91\bprog
92  1./ 7
93  1 / 7.
94  1./ 7.
95  1 / 7 + 0.
96@eprog\noindent
97Immediately a number of decimals of this fraction appear, 38 on most systems,
9828 on the others, and the repeating pattern is $142857$. The reason is that
99you have included in the operations numbers like \kbd{0.}, \kbd{1.} or \kbd{7.}
100which are \emph{imprecise} real numbers, hence \kbd{gp} cannot give you an
101exact result.
102
103Why 28 / 38 decimals by the way? Well, it is the default initial precision.
104This has been chosen so that the computations are very fast, and gives
105already 12 decimals more accuracy than conventional double precision floating
106point operations. The precise value depends on a technical reason: if your
107machine supports 64-bit integers (the standard C library can handle integers
108up to $2^{64}$), the default precision is 38 decimals, and 28 otherwise.
109For definiteness, we will assume the former henceforth. Of course, you can
110extend the precision (almost) as much as you like as we will see in a moment.
111
112I'm getting bored, why don't we get on with some more exciting stuff?  Well,
113try \kbd{exp(1)}. Presto, comes out the value of $e$ to 38 digits. Try
114\kbd{log(exp(1))}. Well, we get a floating point number and not an exact $1$,
115but pretty close! That's what you lose by working numerically.
116
117What could we try now? Hum, \kbd{pi}? The answer is not that
118enlightening. \kbd{Pi}? Ah. This works better. But let's remember that
119\kbd{gp} distinguishes between uppercase and lowercase letters. \kbd{pi} was
120as meaningless to it as \kbd{stupid garbage} would have been: in both cases
121\kbd{gp} will just create a variable with that funny unknown name you just
122used. Try it! Note that it is actually equivalent to type
123\kbd{stupidgarbage}: all spaces are suppressed from the input. In the
124\kbd{27~*~37} example  it was not so conspicuous as we had an operator to
125separate the two operands. This has important consequences for the writing of
126\kbd{gp} scripts. More about this later.
127
128By the way, you can ask \kbd{gp} about any identifier you think it might know
129about: just type it, prepending a question mark ``\kbd{?}''. Try \kbd{?Pi}
130and \kbd{?pi} for instance. On most systems, an extended online help should
131be available: try doubling the question mark to check whether it's the case
132on yours: \kbd{??Pi}. In fact the \kbd{gp} header already gave you that
133information if it was the case, just before the copyright message. As well,
134if it says something like ``\kbd{readline enabled}'' then you should have a
135look at the \kbd{readline} introduction in the User's Manual before you go
136on: it will be much easier to type in examples and correct typos after you've
137done that.
138
139Now try \kbd{exp(Pi * sqrt(163))}. Hmmm, we suspect that the last digit may
140be wrong, can this really be an integer? This is the time to change
141precision. Type \kbd{\b{p} 50}, then try \kbd{exp(Pi * sqrt(163))} again. We
142were right to suspect that the last decimal was incorrect, since we get quite
143a few nines in its place, but it is now convincingly clear that this is not
144an integer. Maybe it's a bug in PARI, and the result is really an integer?
145Type
146\bprog
147  (log(%) / Pi)^2
148@eprog\noindent
149immediately after the preceding computation; \kbd{\%} means the result of the
150last computed expression. More generally, the results are numbered \kbd{\%1,
151\%2, \dots} \emph{including} the results
152that you do not want to see printed by putting a semicolon at the end of the
153line, and you can evidently use all these quantities in any further
154computations. The result seems to be indistinguishable from $163$, hence it
155does not seem to be a bug.
156
157In fact, it is known that $\exp(\pi*\sqrt{n})$ not only is not an integer or
158a rational number, but is even a transcendental number when $n$ is a nonzero
159rational number.
160
161So \kbd{gp} is just a fancy calculator, able to give me more decimals than I
162will ever need? Not so, \kbd{gp} is incredibly more powerful than an ordinary
163calculator, independently of its arbitrary precision possibilities.
164
165\misctitle{Additional comments} (you are supposed to skip this at first,
166and come back later)
167
1681) If you are a PARI old timer, say the last version of PARI you used was
169released around 1996, you have certainly noticed already that many many
170things changed between the older 1.39.xx versions and this one.
171Conspicuously, most function names have been changed. To know how a specific
172function was changed, type \kbd{whatnow({\rm function})}.
173
1742) It seems that the text implicitly says that as soon as an imprecise number
175is entered, the result will be imprecise. Is this always true? There is a
176unique exception: when you multiply an imprecise number by the exact number
1770, you will get the exact 0. Compare \kbd{0 * 1.4} and \kbd{0.~*~1.4}.
178\smallskip
179%
1803) Not only can the number of decimal places of real numbers be large, but
181the number of digits of integers also. Try \kbd{1000!}. It is never necessary
182to tell \kbd{gp} in advance the size of the integers that it will encounter.
183The same is true for real numbers, although most computations with floating
184point assume a default precision and truncate their results to this accuracy;
185initially 38 decimal digits, but we may change that with \b{p} of course.
186\smallskip
187%
1884) Come back to 38 digits of precision (\kbd{\b{p} 38}), and type
189\kbd{exp(100)}. As you can see the result is printed in exponential format.
190This is because \kbd{gp} never wants you to believe that a result is correct
191when it is not. We are working with 38 digits of precision, but the integer
192part of $\exp(100)$ has 44 decimal digits. Hence if \kbd{gp} had dutifully
193printed out 44 digits, the last few digits would have been wrong. Hence
194\kbd{gp} wants to print only 38 significant digits, but to do so it has to
195print in exponential format. \smallskip
196%
1975) There are two ways to avoid this. One is of course to increase the
198precision. Let's try it. To give it a wide margin, we set the precision to 50
199decimals. Then we recall our last result (\kbd{\%}
200or \kbd{\%n} where \kbd{n} is the number of the result). What? We still have
201an exponential format! Do you understand why?
202
203Again let's try to see what's happening. The number you recalled had been
204computed only to 38 decimals, and even if you set the precision to 1000
205decimals, \kbd{gp} knows that your number has only 38 digits of accuracy but
206an integral part with 44 digits. So you haven't improved things by increasing
207the precision. Or have you? What if we retype \kbd{exp(100)} now that we
208have 50 digits? Try it. Now we no longer have an exponential format.
209\medskip
210%
2116) What if I forget what the current precision is and I don't feel like
212counting all the decimals? Well, you can type \b{p} by itself. You may also
213learn about \kbd{gp} internal variables (and change them!) using
214\kbd{default}. Type \kbd{default(realprecision)}, then
215\kbd{default(realprecision, 38)}. Huh? In fact this last command is strictly
216equivalent to \kbd{\b{p} 38}! (Admittedly more cumbersome to type.) There are
217more ``defaults'' than just \kbd{format} and \kbd{realprecision}: type
218\kbd{default} by itself now, they are all there. \smallskip
219%
2207) Note that the \kbd{default} command reacts differently according to the
221number of input arguments. This is not an uncommon behavior for \kbd{gp}
222functions. You can see this from the online help, or the complete description
223in Chapter~3: any argument surrounded by braces \kbd{\obr\cbr} in the
224function prototype is optional, which really means that a \emph{default}
225argument will be supplied by \kbd{gp}. You can then check out from the text
226what effect a given value will have, and in particular the default one.
227\smallskip
228%
2298) Try the following: starting in precision 38, type first
230\kbd{default(format, "e0.100")}, then \kbd{exp(1)}. Where are my 100
231significant digits? Well, \kbd{default(format,)} only changes the output
232format, but \emph{not} the default precision. On the other hand, the \b{p}
233command changes both the precision and the output format.
234
235\section{Warming up}
236
237Another thing you better get used to pretty fast is error messages. Try
238typing \kbd{1/0}. Could not be clearer. But why has the prompt
239become funny, turning from \kbd{?} to \kbd{break>} ? When an error occurs, we
240enter a so-called \emph{break loop}, where you get a chance, e.g to inspect
241(and save!) values of variables before the prompt returns and all
242computations so far are lost. In fact you can run an arbitrary command at
243this point, and this mechanism is a tremendous help in debugging. To get out
244of the break loop, type \kbd{break}, as instructed in the error message
245last line.
246
247\misctitle{Comment} You can enter the break loop at any time using
248\kbd{Control-C}: this freezes the current computation and gets you a new
249prompt so that you may e.g., increase debugging level, inspect or modify
250variables (again, run arbitrary commands), before letting the program go
251on.
252\medskip
253
254Now, back to our favorite example, in precision 38, type
255\bprog
256  floor(exp(100))
257@eprog\noindent
258\kbd{floor} is the mathematician's integer part, not to be confused with
259\kbd{truncate}, which is the computer scientist's: \kbd{floor(-3.4)} is equal
260to $-4$ whereas \kbd{truncate(-3.4)} is equal to $-3$.  You get a more
261cryptic error message, which you would immediately understand if you had read
262the additional comments of the preceding section. Since you were told not to
263read them, here's the explanation: \kbd{gp} is unable to compute the
264integer part of \kbd{exp(100)} given only 38 decimals of accuracy, since
265it has 44 digits.
266
267Some error messages are more cryptic and sometimes not so easy to understand.
268For instance, try \kbd{log(x)}. It simply tells you that \kbd{gp} does not
269understand what \kbd{log(x)} is, although it does know the \kbd{log}
270function, as \kbd{?log} will readily tell us.
271
272Now let's try \kbd{sqrt(-1)} to see what error message we get now. Haha!
273\kbd{gp} even knows about complex numbers, so impossible to trick it that
274way. Similarly, try typing \kbd{log(-2)}, \kbd{exp(I*Pi)}, \kbd{I\pow
275I}\dots\ So we have a lot of real and complex analysis at our disposal.
276There always is a specific branch of multivalued complex transcendental
277functions which is taken, specified in the manual. Again, beware that
278\kbd{I} and \kbd{i} are not the same thing. Compare \kbd{I\pow2} with
279\kbd{i\pow2} for instance.
280
281Just for fun, let's try \kbd{6*zeta(2) / Pi\pow2}. Pretty close, no?
282
283\medskip
284Now \kbd{gp} didn't seem to know what \kbd{log(x)} was, although it did know
285how to compute numerical values of \kbd{log}. This is annoying. Maybe it
286knows the exponential function? Let's give it a try. Type \kbd{exp(x)}.
287What's this? If you had any experience with other computer algebra systems,
288the answer should have simply been \kbd{exp(x)} again. But here the answer is
289the Taylor expansion of the function around $\kbd{x}=0$, to 17 terms. Note
290the \kbd{O(x\pow17)} which ends the series, and which is trademark of power
291series in \kbd{gp}. It is the familiar ``big--oh'' notation of analysis.
292Why 17 terms? This is governed by the \kbd{seriesprecision}, which
293can be changed by typing \kbd{\b{ps} $n$} or \kbd{default(seriesprecision,
294$n$)} where $n$ is the number of terms that you want in your power series
295and is $16$ by default. Converting a polynomial or rational function to a
296power series will yield 16 significant terms: so $x$ gets converted to $x +
297O(x^{17})$; this is completely analogous to \kbd{realprecision} when an eact
298integer or rational number is converted to a floating point real number.
299Then we take the exponential of this new object and, since it has positive
300valuation, we can actually deduce $17$ significant terms from the given $16$.
301This is in keeping with PARI's philosophy of always returning a result which
302is as precise as possible from a given input.
303
304You thus automatically get the Taylor expansion of any function that can be
305expanded around $0$, and incidentally this explains why we weren't able to do
306anything with \kbd{log(x)} which is not defined at $0$. (In fact \kbd{gp}
307knows about Laurent series, but \kbd{log(x)} is not meromorphic either at
308$0$.) If we try \kbd{log(1+x)}, then it works, but this time we \emph{lose}
309one significant term: the result is $x - 1/2*x^2 + \dots + 1/15*x^15 +
310O(x^16))$. (Do you understand why ?)
311
312But what if we wanted the expansion around a point different from 0? Well,
313you're able to change $x$ into $x+a$, aren't you? So for instance you can
314type \kbd{log(x+2)} to have the expansion of \kbd{log} around $\kbd{x}=2$. As
315exercises you can try
316\bprog
317  cos(x)
318  cos(x)^2 + sin(x)^2
319  exp(cos(x))
320  gamma(1 + x)
321  exp(exp(x) - 1)
322  1 / tan(x)
323@eprog\noindent
324for different values of \kbd{serieslength} (change it using \b{ps}
325\var{newvalue}).
326
327Let's try something else: type \kbd{(1 + x)\pow 3}. No \kbd{O(x)} here, since
328the result is a polynomial.  Haha, but I have learnt that if you do not take
329exponents which are integers greater or equal to 0, you obtain a power series
330with an infinite number of nonzero terms. Let's try.  Type
331\kbd{(1 + x)\pow (-3)} (the parentheses around \kbd{-3} are not necessary but
332make things easier to read). Surprise! Contrary to what we expected, we don't
333get a power series but a rational function. Again this is for the same reason
334that \kbd{1 / 7} just gave you $1/7$: the result being exact, PARI doesn't see
335any reason to make it inexact.
336
337But I still want that power series. To obtain it, you can do as in the $1/7$
338example and force a conversion using the $O(x^n)$ notation:
339\bprog
340  (1 + x)^(-3) + O(x^16)
341  (1 + x)^(-3) * (1 + O(x^16))
342  (1 + x + O(x^16))^(-3)
343@eprog\noindent
344(Not on this example, but there is a difference between the first $2$
345methods. Do you spot it?) Better yet, use the series constructor which
346transforms any object into a power series, using the current
347\kbd{seriesprecision}, and simply type
348\bprog
349  Ser( (1 + x)^(-3) )
350@eprog
351
352Now try \kbd{(1 + x)\pow (1/2)}: we obtain a power series, since the
353result is an object which PARI does not know how to represent exactly. (We
354could teach PARI about algebraic functions, but then take \kbd{(1 + x)\pow Pi}
355as another example.) This gives us still another solution to our preceding
356exercise: we can type \kbd{(1 + x)\pow (-3.)}. Since \kbd{-3.} is not an exact
357quantity, PARI has no means to know that we are dealing with a rational
358function, and will instead give you the power series, this time with real
359instead of integer coefficients.
360\smallskip
361
362To summarize, in this section we have seen that in addition to integers, real
363numbers and rational numbers, PARI can handle complex numbers, polynomials,
364rational functions and power series. A large number of functions exist which
365handle these types, but in this tutorial we will only look at a few.
366
367\misctitle{Additional comments} (as before, you are supposed to skip this
368at first reading)
369
3701) In almost all cases, there is no loss of information in PARI output: what
371you see is all that PARI knows about the object, and you can happily
372copy-paste it into another session. There are exceptions, though. Type
373\kbd{n = 3 + 0*x}, then \kbd{n} is not the integer 3 but a constant polynomial
374equal to $3 x^0$. Check it with \kbd{type(n)}.
375
376However, it \emph{looks} like an integer without being one, and this may
377cause some confusion in programs which actually expect integers. Hence if you
378try to \kbd{factor(n)}, you obtain an empty factorization ! (Because, once
379considered as a polynomial, \kbd{n} is a unit in $\Q[x]$.)
380
381If you try to apply more general arithmetic functions, say the Euler totient
382function (known as \kbd{eulerphi} to \kbd{gp}), you get an error message
383worrying about integer arguments. You would have guessed yourself, but the
384message is difficult to understand since 3 looks like a genuine integer!
385Please make sure you understand the above, it is a common source of
386incomprehension.
387
3882) If you want the final expression to be in the simplest form possible (for
389example before applying an arithmetic function, or simply because things will
390go faster afterwards), apply the function \kbd{simplify} to the result.
391This is done automatically at the end of a \kbd{gp} command, but
392\emph{not} in intermediate expressions. Hence \kbd{n} above is not an
393integer, but the final result stored in the output history is! So
394if you type \kbd{type(\%)} instead of \kbd{type(n)} the answer is
395\typ{INT}, adding to the confusion.
396
3973) As already stated, power series expansions are always implicitly around
398$\kbd{x} = 0$. When we wante them around $\kbd{x} = \kbd{a}$, we replace
399\kbd{x} by \kbd{z + a} in the function we want to expand. For complicated
400functions, it may be simpler to use the substitution function \kbd{subst}.
401For example, if \kbd{p~= 1 / (x\pow 4 + 3*x\pow 3 + 5*x\pow 2 - 6*x + 7)},
402you may not want to retype this, replacing \kbd{x} by \kbd{z~+ a}, so you can
403write \kbd{subst(p, x, z+a)} (look up the exact description of the
404\kbd{subst} function).
405
4064) The valuation at $\kbd{x} = 0$ for a power series \kbd{p} is obtained
407as \kbd{valuation(p, x)}.
408
409\section{The Remaining PARI Types}
410Let's talk some more about the basic PARI types.
411
412Type \kbd{p = x * exp(-x)}. As expected, you get the power series expansion
413to 17 terms (if you have not changed the default). Now type
414\kbd{pr = serreverse(p)}. You are asking here for the \emph{reversion} of the
415power series \kbd{p}, in other words the expansion of the inverse function.
416This is possible only for power series whose first nonzero coefficient is
417that of $x^1$.  To check the correctness of the result, you can type
418\kbd{subst(p, x, pr)} or \kbd{ subst(pr, x, p)} and you should get back
419\kbd{x + O(x\pow 18)}.
420
421Now the coefficients of \kbd{pr} obey a very simple formula. First, we would
422like to multiply the coefficient of \kbd{x\pow n} by \kbd{n!} (in the case of
423the exponential function, this would simplify things considerably!). The PARI
424function \kbd{serlaplace} does just that. So type \kbd{ps = serlaplace(pr)}.
425The coefficients now become integers, which can be immediately recognized by
426inspection. The coefficient of $x^n$ is now equal to
427$n^{n-1}$. In other words, we have
428%
429$$\kbd{pr} = \sum_{n\ge1}\dfrac{n^{n-1}}{n!} X^{n}.$$
430%
431Do you know how to prove this? (The proof is difficult.)
432\smallskip
433%
434Of course PARI knows about vectors (rows and columns are distinguished, even
435though mathematically there is no difference) and matrices. Type for example
436\kbd{[1,2,3,4]}. This gives the row vector whose coordinates are 1, 2, 3 and
4374.  If you want a column vector, type \kbd{[1,2,3,4]\til}, the tilde meaning
438of course transpose. You don't see much difference in the output, except for
439the tilde at the end. However, now type \b{B}: lo and behold, the column
440vector appears as a proper vertical thingy now. The \b{B} command is used
441mainly for this purpose. The length of a vector is given by, well
442\kbd{length} of course. The shorthand ``cardinality'' notation \kbd{\#v} for
443\kbd{length(v)} is also available, for instance \kbd{v[\#v]} is the last
444element of \kbd{v}.
445
446Type \kbd{m = [a,b,c; d,e,f]}. You have just entered a matrix with 2 rows and
4473 columns. Note that the matrix is entered by \emph{rows} and the rows are
448separated by semicolons ``\kbd{;}''. The matrix is printed naturally in a
449rectangle shape. If you want it printed horizontally just as you typed it,
450type \b{a}, or if you want this type of printing to be the permanent default
451type \kbd{default(output, 0)}. Type \kbd{default(output, 1)} if you want to
452come back to the original output mode.
453
454Now type \kbd{m[1,2]}, \kbd{m[1,]}, \kbd{m[,2]}. Are explanations necessary?
455(In an expression such as \kbd{m[j,k]}, the \kbd{j} always refers to the
456row number, and the \kbd{k} to the column number, and the first index is
457always 1, never 0. This default cannot be changed.)
458
459Even better, type \kbd{m[1,2] = 5; m}. The semicolon also allows us to put
460several instructions on the same line; the final result is the output of
461the last statement on the line. Now type \kbd{m[1,] = [15,-17,8]}. No
462problem. Finally type \kbd{m[,2] = [j,k]}. You have an error message since you
463have typed a row vector, while \kbd{m[,2]} is a column vector. If you type
464instead \kbd{m[,2] = [j,k]\til} it works. \smallskip
465%
466\label{se:types}
467Type now \kbd{h = mathilbert(20)}. You get the so-called ``Hilbert matrix''
468whose coefficient of row $i$ and column $j$ is equal to $(i+j-1)^{-1}$.
469Incidentally, the matrix \kbd{h} takes too much room. If you don't want to
470see it, simply type a semi-colon ``\kbd{;}'' at the end of the line
471(\kbd{h = mathilbert(20);}). This is an example of a ``precomputed'' matrix,
472built into PARI. We will see a more general construction later.
473
474What is interesting about Hilbert matrices is that first their inverses and
475determinants can be computed explicitly (and the inverse has integer
476coefficients), and second they are numerically very unstable, which make them
477a severe test for linear algebra packages in numerical analysis.  Of course
478with PARI, no such problem can occur: since the coefficients are given as
479rational numbers, the computation will be done exactly, so there cannot be
480any numerical error. Try it. Type \kbd{d~=~matdet(h)}. The result is a
481rational number (of course) of numerator equal to 1 and denominator having
482226 digits. How do I know, by the way? Well, type \kbd{sizedigit(1/d)}. Or
483\kbd{\#Str(1/d)}. (The length of the character string representing the
484result.)
485
486Now type \kbd{hr = 1.* h;} (do not forget the semicolon, we don't want to see
487the result!), then \kbd{dr = matdet(hr)}. You notice two things. First the
488computation, is much faster than in the rational case. (If your computer is
489too fast for you to notice, try again with \kbd{h = mathilbert(40)}, or
490even some larger value.) The reason for this is that PARI is handling real
491numbers with 38 digits of accuracy, while in the rational case it is
492handling integers having up to 226 decimal digits.
493
494The second, more important, fact is that the result is terribly wrong. If you
495compare with \kbd{1.$*$d} computed earlier, which is the correct answer, you
496will see that few decimals agree! (None agree if you replaced 20 by 40 as
497suggested above.) This catastrophic instability is as already mentioned one
498of the characteristics of Hilbert matrices. In fact, the situation is
499worse than that. Type \kbd{norml2(1/h - 1/hr)} (the function \kbd{norml2}
500gives the square of the $L^2$ norm, i.e.~the sum of the squares of the
501coefficients). The result is larger than $10^{32}$, showing that some
502coefficients of \kbd{1/hr} are wrong by as much as $10^{16}$. To obtain the
503correct result after rounding for the inverse, we have to use a default
504precision of 57 digits (try it).
505
506Although vectors and matrices can be entered manually, by typing explicitly
507their elements, very often the elements satisfy a simple law and one uses a
508different syntax. For example, assume that you want a vector whose $i$-th
509coordinate is equal to $i^2$. No problem, type for example
510\kbd{vector(10,i, i\pow 2)} if you want a vector of length 10. Similarly, if
511you type
512\bprog
513  matrix(5,5, i,j, 1 / (i+j-1))
514@eprog\noindent
515you will get the Hilbert matrix of order 5, hence the \kbd{mathilbert}
516function is in fact redundant.  The \kbd{i} and \kbd{j} represent dummy
517variables which are used to number the rows and columns respectively (in
518the case of a vector only one is present of course). You must not forget,
519in addition to the dimensions of the vector or matrix, to indicate
520explicitly the names of these variables. You may omit the variables and
521the final expression to get zero entries, as in \kbd{matrix(10,20)}.
522
523\misctitle{Warning} The letter \kbd{I} is reserved for the complex number
524equal to the square root of $-1$. Hence it is forbidden to use it as a
525variable. Try typing \kbd{vector(10,I, I\pow 2)}, the error message that you
526get clearly indicates that \kbd{gp} does not consider \kbd{I} as a variable.
527There are other reserved variable names: \kbd{Pi}, \kbd{Euler},
528\kbd{Catalan} and \kbd{oo}. All function names are forbidden as well. On the
529other hand there is nothing special about \kbd{i}, \kbd{pi}, \kbd{euler} or
530\kbd{catalan}.
531
532When creating vectors or matrices, it is often useful to use Boolean
533operators and the \kbd{if()} statement. Indeed, an \kbd{if} expression has a
534value, which is of course equal to the evaluated part of the \kbd{if}. So for
535example you can type
536\bprog
537  matrix(8,8, i,j, if ((i-j)%2, 1, 0))
538@eprog\noindent
539to get a checkerboard matrix of \kbd{1} and \kbd{0}. Note however
540that a vector or matrix must be \emph{created} first before being used. For
541example, it is possible to write
542\bprog
543  v = vector(5);
544  for (i = 1, 5, v[i] = 1/i)
545@eprog\noindent
546but this would fail if the vector \kbd{v} had not been created beforehand.
547Of course, the above example is better written as
548\bprog
549  v = vector(5, i, 1/i);
550@eprog
551
552Another useful way to create vectors and matrices is to extract them from
553larger ones. For instance, if \kbd{h} is the $20\times 20$ Hilbert matrix as above,
554\bprog
555  h = mathilbert(20);
556  h[11..20, 11..20]
557@eprog\noindent is its lower right quadrant.
558
559\medskip The last PARI types which we have not yet played with are closely
560linked to number theory. People not interested in number theory can skip
561ahead.
562
563The first is the type ``integer--modulo''. Let us see an example. Type
564\bprog
565  n = 10^15 + 3
566@eprog
567We want to know whether this number is prime or not. Of course we could make
568use of the built-in facilities of PARI, but let us do otherwise. We first
569trial divide by the built-in table of primes. We slightly cheat here and use
570a variant of the function \kbd{factor} which does exactly this. So type
571\kbd{factor(n, 200000)}. The last argument tells \kbd{factor} to trial divide
572up to the given bound and stop at this point. Set it to 0 to trial divide by
573the full set of built-in primes, which goes up to $500000$ by default.
574
575As for all factoring functions, the result is a 2 column matrix: the first
576column gives the primes and the second their exponents. Here we get a single
577row, telling us that if primes stopped at $200000$ as we made \kbd{factor}
578believe, \kbd{n} would be prime. (Or is that a contradiction?) More
579seriously, \kbd{n} is not divisible by any prime up to $200000$.
580
581We could now trial divide further, or cheat and call the PARI function
582\kbd{factor} without the optional second argument, but before we do this let
583us see how to get an answer ourselves.
584
585By Fermat's little theorem, if $n$ is prime we must have $a^{n-1}\equiv 1
586\pmod{n}$ for all $a$ not divisible by $n$. Hence we could try this with $a=2$
587for example. But $2^{n-1}$ is a number with approximately $3\cdot10^{14}$
588digits, hence impossible to write down, let alone to compute. But instead type
589\kbd{a = Mod(2,n)}. This creates the number $2$ considered now as an element
590of the ring $R = \Z/\kbd{n}\Z$. The elements of $R$, called intmods, can
591always be represented by numbers smaller than \kbd{n}, hence small. Fermat's
592theorem can be rewritten
593%
594$\kbd{a}^{n-1} = \kbd{Mod(1,n)}$
595%
596in the ring $R$, and this can be computed very efficiently. Elements of $R$
597may be lifted back to $\Z$ with either \kbd{lift} or \kbd{centerlift}. Type
598\kbd{a\pow (n-1)}. The result is definitely \emph{not} equal to
599\kbd{Mod(1,n)}, thus \emph{proving} that \kbd{n} is not a prime. If we had
600obtained \kbd{Mod(1,n)} on the other hand, it would have given us a hint that
601\kbd{n} is maybe prime, but not a proof.
602
603To find the factors is another story. In this case, the integer $n$ is small
604 enough to let trial division run to completion. Type \kbd{\#} to turn on the
605\kbd{gp} timer, then
606\bprog
607  for (i = 2, ceil(sqrt(n)), if (n%i==0, print(i); break))
608@eprog\noindent
609This should take less than 5 seconds. In general, one must use less naive
610techniques than trial division, or be very patient. Type \kbd{fa = factor(n)}
611to let the factoring engine find all prime factors. You may stop the timer by
612typing \kbd{\#} again.
613
614Note that, as is the case with most ``prime''-producing functions, the
615``prime'' factors given by \kbd{factor} are only strong pseudoprimes, and not
616\emph{proven} primes.  Use \kbd{isprime( fa[,1] )} to rigorously prove
617primality of the factors. The latter command applies \kbd{isprime} to all
618entries in the first column of \kbd{fa}, i.e to all pseudoprimes, and returns
619the column vector of results: all equal to 1, so our pseudoprimes were
620true primes. All arithmetic functions can be applied in this way to the entries
621of a vector or matrix. In fact, it has been checked that the strong
622pseudoprimes output by \kbd{factor} (Baillie-Pomerance-Selfridge-Wagstaff
623pseudoprimes, without small divisors) are true primes at least up to
624$2^{64}$, and no explicit counter-example is known.\smallskip
625
626The second specifically number-theoretic type is the $p$-adic numbers. I have
627no room for definitions, so please skip ahead if you have no use for such
628beasts. A $p$-adic number is entered as a rational or integer valued
629expression to which is added \kbd{O(p\pow n)}, or simply \kbd{O(p)} if
630$\kbd{n}=1$, where \kbd{p} is the prime and \kbd{n} the $p$-adic precision.
631Note that you have to explicitly type in \kbd{3\pow 2} for instance, \kbd{9}
632will not do. Unless you want to cheat \kbd{gp} into believing that \kbd{9}
633is prime, but you had better know what you are doing in this case: most
634computations will yield a wrong result.
635
636Apart from the usual arithmetic operations, you can apply a number of
637transcendental functions. For example, type \kbd{n = 569 + O(7\pow 8)}, then
638\kbd{s~=~sqrt(n)}, you obtain one of the square roots of \kbd{n}; to check
639this, type \kbd{s\pow 2 - n}). Type now \kbd{s = log(n)}, then \kbd{e =
640exp(s)}. If you know about $p$-adic logarithms, you will not be surprised
641that \kbd{e} is not equal to \kbd{n}. Type \kbd{(n/e)\pow 6}: \kbd{e} is in
642fact equal to \kbd{n} times the $(p-1)$-st root of unity \kbd{teichmuller(n)}.
643
644Incidentally, if you want to get back the integer 569 from the $p$-adic
645number \kbd{n}, type \kbd{lift(n)} or \kbd{truncate(n)}.
646\smallskip
647
648The third number-theoretic type is the type ``quadratic number''. This type
649is specially tailored so that we can easily work in a quadratic extension of
650a base field, usually $\Q$. It is a generalization of the type
651``complex''. To start, we must specify which quadratic field we want to work
652in. For this, we use the function \kbd{quadgen} applied to the
653\emph{discriminant} \kbd{d} (as opposed to the radicand) of the quadratic
654field. This returns a number equal to
655$(\kbd{d}+a) / 2$ where $a$ is equal to 0 or 1 according to whether \kbd{d} is
656even or odd. The function \kbd{quadgen} takes an extra parameter which is how
657the number will be printed. To avoid confusion, this number should be
658set to a variable of the same name, i.e. do \kbd{w = quadgen(d, 'w)}.
659
660So type \kbd{w = quadgen(-163,'w)}, then \kbd{charpoly(w)} which asks for the
661characteristic polynomial of \kbd{w}. The result shows what \kbd{w} will
662represent. You may ask for \kbd{1.*w} to see which root of the quadratic has
663been taken, but this is rarely necessary. We can now play in the field
664$\Q(\sqrt{-163})$. Type for example \kbd{w\pow 10}, \kbd{norm(3 + 4*w)},
665\kbd{1 / (4+w)}. More interesting, type \kbd{a = Mod(1,23) * w} then \kbd{b =
666a\pow 264}. This is a generalization of Fermat's theorem to quadratic fields.
667If you do not want to see the modulus 23 all the time, type \kbd{lift(b)}.
668
669Another example: type \kbd{p = x\pow 2 + w*x + 5*w + 7}, then \kbd{norm(p)}. We
670thus obtain the quartic equation over $\Q$ corresponding to the relative
671quadratic extension over $\Q(\kbd{w})$ defined by \kbd{p}.
672
673On the other hand, if you type \kbd{wr  = sqrt(w\pow 2)}, do not expect to get
674back \kbd{w}. Instead, you get the numerical value, the function \kbd{sqrt}
675being considered as a ``transcendental'' function, even though it is
676algebraic. Type \kbd{algdep(wr,2)}: this looks for algebraic relations
677involving the powers of \kbd{w} up to degree 2. This is one way to get
678\kbd{w} back. Similarly, type \kbd{algdep(sqrt(3*w + 5), 4)}. See the user's
679manual for the function \kbd{algdep}.\smallskip
680
681The fourth number-theoretic type is the type ``polynomial--modulo'', i.e.
682polynomial modulo another polynomial. This type is used to work in general
683algebraic extensions, for example elements of number fields (if the base
684field is $\Q$), or elements of finite fields (if the base field is
685$\Z/p\Z$ for a prime $p$). In a sense it is a generalization of the type
686quadratic number. The syntax used is the same as for intmods. For example,
687instead of typing \kbd{w = quadgen(-163,'w)}, you can type
688\bprog
689  w = Mod(x, quadpoly(-163))
690@eprog\noindent
691Then, exactly as in the quadratic case, you can type \kbd{w\pow 10},
692\kbd{norm(3 + 4*w)}, \kbd{1 / (4+w)}, \kbd{a = Mod(1,23)*w}, \kbd{b = a\pow
693264}, obtaining of course the same results. (Type \kbd{lift(\dots)} if you
694don't want to see the polynomial \kbd{x\pow 2 - x + 41} repeated all the
695time.) Of course, you can work in any degree, not only quadratic. For the
696latter, the corresponding elementary operations will be slower than
697with quadratic numbers. Start the timer, then compare
698\bprog
699  w = quadgen(-163,'w); W = Mod(x, quadpoly(-163));
700  a = 2 + w;         A = 2 + W;
701  b = 3 + w;         B = 3 + W;
702  for (i=1,10^5, a+b)
703  for (i=1,10^5, A+B)
704  for (i=1,10^5, a*b)
705  for (i=1,10^5, A*B)
706  for (i=1,10^5, a/b)
707  for (i=1,10^5, A/B)
708@eprog\noindent
709Don't retype everything, use the arrow keys!
710
711There is however a slight difference in behavior. Keeping our polmod \kbd{w},
712type \kbd{1.*w}. As you can see, the result is not the same. Type
713\kbd{sqrt(w)}. Here, we obtain a vector with 2 components, the two components
714being the principal branch of the square root of all the possible embeddings
715of \kbd{w} in $\C$. More generally, if
716\kbd{w} was of degree $n$, we would get an $n$-component vector, and similarly
717for all transcendental functions.
718
719We have at our disposal the usual arithmetic functions, plus a few others.
720Type \kbd{a = Mod(x, x\pow 3 - x - 1)} defining a cubic extension. We can for
721example ask for \kbd{b = a\pow 5}. Now assume we want to express \kbd{a}
722as a polynomial in \kbd{b}. This is possible since \kbd{b} is also a
723generator of the same field. No problem, type \kbd{modreverse(b)}. This gives
724a new defining polynomial for the same field, i.e.~the characteristic
725polynomial of \kbd{b}, and expresses \kbd{a} in terms of this new polmod,
726i.e.~in terms of \kbd{a}. We will see this in more detail in the number
727field section.
728
729An important special case of the above construction allows to work in finite
730fields, by choosing an irreducible polynomial $T$ of degree $f$ over $\F_p$
731and considering $\F_p[t]/(T)$. As in
732\bprog
733  T = ffinit(5, 6, 't); \\ @com degree 6, irreducible over $\F_5$
734  g = Mod(t, T)
735@eprog\noindent Try a few elementary operations involving $g$, such as
736$g^{100}$. This special case of \typ{POLMOD}s is in fact so important that we
737now introduce a final dedicated number theoretical type \typ{FFELT}, for
738``finite field element'', to simplify work with finite fields: \kbd{g =
739ffgen(5\pow6, 't)} computes a suitable polynomial $T$ as above and returns
740the generator $t \mod T(t)$. This has major advantages over the generic
741\typ{POLMOD} solution: elements are printed in a simplified way (in lifted
742form), and functions can assume that $T$ is indeed irreducible. A few dedicated
743functions  \kbd{ffprimroot} (analog of \kbd{znprimroot}), \kbd{fforder}
744(analog of \kbd{znorder}), \kbd{fflog} (analog of \kbd{znlog}) are available.
745Rational expressions in the variable $t$ can be mapped to such a finite
746field by substituting $t$ by $g$, for instance
747\bprog
748  ? g = ffgen(5^6, 't);
749  ? g.mod  \\ @com irreducible over $\F_5$, defines $\F_{5^6}$
750  %2 = t^6 + t^5 + t^4 + t^3 + t^2 + t + 1
751  ? Q = x^2 + t*x + 1
752  ? factor(subst(Q,t,g))
753  %3 =
754  [    x + (t^5 + 3*t^4 + t^3 + 4*t + 1) 1]
755
756  [x + (4*t^5 + 2*t^4 + 4*t^3 + 2*t + 4) 1]
757@eprog\noindent factors the polynomial $Q \in \F_{5^6}[x]$, where
758$\F_{5^6} = \F_5[t]/(\kbd{g.mod})$.
759
760\section{Elementary Arithmetic Functions}
761
762Since PARI is aimed at number theorists, it is not surprising that there
763exists a large number of arithmetic functions; see the list by typing
764\kbd{?5}. We have already seen several, such as \kbd{factor}. Note that
765\kbd{factor} handles not only integers, but also univariate polynomials.
766Type for example \kbd{factor(x\pow 200 - 1)}. You can also ask to factor a
767polynomial modulo a finite field or a number field !
768
769Evidently, you have functions for computing GCD's (\kbd{gcd}), extended GCD's
770(\kbd{bezout}), solving the Chinese remainder theorem (\kbd{chinese}) and so
771on.
772
773In addition to the factoring facilities, you have a few functions related to
774primality testing such as \kbd{isprime}, \kbd{ispseudoprime},
775\kbd{precprime}, and \kbd{nextprime}. As previously mentioned, only strong
776pseudoprimes are produced by the latter two (they pass the
777\kbd{ispseudoprime} test); the more sophisticated primality tests in
778\kbd{isprime}, being so much slower, are not applied by default.
779
780We also have the usual multiplicative arithmetic functions: the M\"obius $\mu$
781function (\kbd{moebius}), the Euler $\phi$ function (\kbd{eulerphi}), the
782$\omega$ and $\Omega$ functions (\kbd{omega} and \kbd{bigomega}), the
783$\sigma_k$ functions (\kbd{sigma}), which compute sums of $k$-th powers of the
784positive divisors of a given integer, etc\dots
785
786You can compute continued fractions. For example, type \kbd{\b{p} 1000}, then
787\kbd{contfrac(exp(1))}: you obtain the continued fraction of the base of
788natural logarithms, which as you can see obeys a very simple pattern. Can
789you prove it?
790
791In many cases, one wants to perform some task only when an arithmetic
792condition is satisfied. \kbd{gp} gives you the following functions: \kbd{isprime}
793as mentioned above, \kbd{issquare}, \kbd{isfundamental} to test whether an
794integer is a fundamental discriminant (i.e.~$1$ or the discriminant of a
795quadratic field), and the \kbd{forprime}, \kbd{fordiv} and \kbd{sumdiv}
796loops. Assume for example that we want to compute the product of all the
797divisors of a positive integer \kbd{n}. The easiest way is to write
798\bprog
799  p = 1; fordiv(n,d, p *= d); p
800@eprog\noindent
801(There is a simple formula for this product in terms of $n$ and the number of
802its divisors: find and prove it!) The notation \kbd{p *= d} is just a
803shorthand for \kbd{p = p * d}.
804
805If we want to know the list of primes $p$ less than 1000 such that 2 is a
806primitive root modulo $p$, one way would be to write:
807\bprog
808  forprime(p=3,1000, if (znprimroot(p) == 2, print(p)))
809@eprog\noindent
810%
811Note that this assumes that \kbd{znprimroot} returns the smallest primitive
812root, and this is indeed the case. Had we not known about this, we could
813have written
814\bprog
815  forprime(p=3,1000, if (znorder(Mod(2,p)) == p-1, print(p)))
816@eprog\noindent
817%
818(which is actually faster since we only compute the order of $2$ in $\Z/p\Z$,
819instead of looking for a generator by trying successive elements whose orders
820have to be computed as well.) Once we know a primitive root $g$, we can write
821any nonzero element of $\Z/p\Z$ as $g^x$ for some unique $x$ in $\Z/(p-1)\Z$.
822Computing such a discrete logarithm is a hard problem in general, performed
823by the function \kbd{znlog}.
824
825Arithmetic functions related to quadratic fields, binary quadratic forms and
826general number fields will be seen in the next sections.
827
828\section{Performing Linear Algebra}
829The standard linear algebra routines are available: \kbd{matdet},
830\kbd{mateigen} (eigenvectors), \kbd{matker}, \kbd{matimage}, \kbd{matrank},
831\kbd{matsolve} (to solve a linear system), \kbd{charpoly} (characteristic
832polynomial), to name a few. Bilinear algebra over $\R$ is also there:
833\kbd{qfgaussred} (Gauss reduction), \kbd{qfsign} (signature). You may also
834type \kbd{?7}. Can you guess what each of these do?
835
836Let us see how this works. First, a vector space (or module) is given by a
837generating set of vectors (often a basis) which are represented as
838\emph{column} vectors. This set of vectors is in turn represented by the
839columns of a matrix. Quadratic forms are represented by their Gram matrix.
840The base field (or ring) can be any ring type PARI supports. However, certain
841operations are specifically written for a real or complex base field, while
842others are written for $\Z$ as the base ring.
843
844We had some fun with Hilbert matrices and numerical instability a while back,
845but most of the linear algebra routines are generic. If as before \kbd{h =
846mathilbert(20)}, we may compute
847\bprog
848  matdet(h * Mod(1,101))
849  matdet(h * (1 + O(101^100)))
850@eprog\noindent
851in $\Z/101\Z$ and the $p$-adic ring $\Z_{101}$ (to $100$ words of accuracy)
852respectively. Let \kbd{H = 1/h} the inverse of \kbd{h}:
853\bprog
854  H = 1/h;  \\ @com integral
855  L = primes([10^5, 10^5 + 1000]);  \\ @com pick a few primes
856  v = vector(#L, i, matdet(H * Mod(1,L[i])));
857  centerlift( chinese(v) )
858@eprog\noindent
859returns the determinant of \kbd{H}. (Assuming it is an integer
860less than half the product of elements of \kbd{L} in absolute value, which
861it is.)
862In fact, we computed an homomorphic image of the determinant in a few small
863finite fields, which admits a single integer representative given the size
864constraints. We could also have made a single determinant computation modulo
865a big prime (or pseudoprime) number, e.g \kbd{nextprime(2 * B)} if we know
866that the determinant is less than \kbd{B} in absolute value.
867(Why is that $2$ necessary?)
868
869By the way, this is how you insert comments in a script: everything
870following a double backslash, up to the first newline character, is ignored.
871If you want comments which span many lines, you can brace them between
872\kbd{/* ... */} pairs. Everything in between will be ignored as well. For
873instance as a header for the script above you could insert the
874following:
875\bprog
876  /* Homomorphic imaging scheme to compute the determinant of a classical
877   * integral matrix.
878   * TODO: Look up the explicit formula
879   */
880@eprog\noindent
881(I hope you did not waste your time copying this nonsense, did you?)
882\medskip
883
884In addition, linear algebra over $\Z$, i.e.~work on lattices, can also be
885performed. Let us now consider the lattice $\Lambda$ generated by the columns
886of \kbd{H} in $\Z^{20}\subset\R^{20}$. Since the determinant is nonzero, we
887have in fact a basis. What is the structure of the finite abelian group
888$\Z^{20}/\Lambda$? Type \kbd{matsnf(H)}. Wow, 20 cyclic factors.
889
890
891There is a triangular basis for $\Lambda$ (triangular when expressed in
892the canonical basis), perhaps it looks better than our initial one? Type
893\kbd{mathnf(H)}. Hum, what if I also want the unimodular transformation
894matrix? Simple : \kbd{z = mathnf(H, 1);} \kbd{z[1]} is the triangular HNF
895basis, and \kbd{z[2]} is the base change matrix from the canonical basis to
896the new one, with determinant $\pm 1$. Try \kbd{matdet(z[2])},
897then \kbd{H * z[2] == z[1]}. Fine, it works. And \kbd{z[1]} indeed looks
898better than \kbd{H}.
899
900Can we do better? Perhaps, but then we'd better drop the requirement that
901the basis be triangular, since the latter is essentially canonical. Type
902\bprog
903  M = H * qflll(H)
904@eprog
905Its columns give an LLL-reduced basis for $\Lambda$ (\kbd{qflll(H)} itself
906gives the base change matrix). The LLL algorithm outputs a nice basis for a
907lattice given by an arbitrary basis, where nice means the basis vectors are
908almost orthogonal and short, with precise guarantees on their relations to
909the shortest vectors. Not really spectacular on this example, though.
910
911Let us try something else, there should be an integer relation between
912$\log 3$, $\log 5$ and $\log 15$. How to detect it?
913\bprog
914  u = [log(15), log(5), log(3)];
915  m = matid(3); m[3,] = round(u * 10^25);
916  v = qflll(m)[,1] \\@com first vector of the LLL-reduced basis
917  u * v
918@eprog\noindent
919Pretty close. In fact, \kbd{lindep} automates this kind of search for integer
920relations; try \kbd{lindep(u)}.
921
922Let us come back to $\Lambda$ above, and our LLL basis in \kbd{M}. Type
923\bprog
924  G = M~*M  \\@com Gram matrix
925  m = qfminim(G, norml2(M[,1]), 100, 2);
926@eprog\noindent
927This enumerates the vectors in $\Lambda$ which are shorter than the first LLL
928basis vector, at most 100 of them. The final argument $2$ instructs the
929function to use a safe (slower) algorithm, since the matrix entries are
930rather large; trying to remove it should produce an error, in this case.
931There are $\kbd{m[1]} = 6$ such vectors, and \kbd{m[3]} gives half of them
932(\kbd{-m[3]} would complete the lot): they are the first 3 basis vectors! So
933these are optimally short, at least with respect to the Euclidean length. Let
934us try
935\bprog
936  m = qfminim(G, norml2(M[,4]), 100, 2);
937@eprog\noindent
938(The flag $2$ instructs \kbd{qfminim} to use a different enumeration
939strategy, which is much faster when we expect more short vectors than we want
940to store. Without the flag, this example requires several hours. This is an
941exponential time algorithm, after all!) This time, we find a slew of short
942vectors; \kbd{matrank(m[3])} says the 100 we have are all included in a
9432-dimensional space. Let us try
944\bprog
945  m = qfminim(G, norml2(M[,4]) - 1, 100000, 2);
946@eprog\noindent
947This time we find 50886 vectors of the requested length, spanning a
948$4$-dimensional space, which is actually generated by \kbd{M[,1]},
949\kbd{M[,2]} \kbd{M[,3]} and \kbd{M[,5]}.
950
951\section{Using Transcendental Functions}
952
953All the elementary transcendental functions and several higher transcendental
954functions are provided: $\Gamma$ function, incomplete $\Gamma$ function, error
955function, exponential integral, Bessel functions ($H^1$, $H^2$, $I$, $J$,
956$K$, $N$), confluent hypergeometric functions, Riemann $\zeta$ function,
957polylogarithms, Weber functions, theta functions. More will be written if the
958need arises.
959
960In this type of functions, the default precision plays an essential role.
961In almost all cases transcendental functions work in the following way.
962If the argument is exact, the result is computed using the current
963default precision. If the argument is not exact, the precision of the
964argument is used for the computation. A note of warning however: even in this
965case the \emph{printed} value is the current real format, usually the
966same as the default precision. In the present chapter we assume that your
967machine works with 64-bit long integers. If it is not the case, we leave it
968to you as a good exercise to make the necessary modifications.
969
970Let's assume that we have 38 decimals of default precision (this is what we
971get automatically at the start of a \kbd{gp} session on 64-bit machines). Type
972\kbd{e = exp(1)}. We get the number $e=2.718\dots$ to 38 decimals. Let us check
973how many correct decimals we really have. Change the precision to a
974substantially higher value, for example by typing \kbd{\b{p} 100}. Then type
975\kbd{e}, then \kbd{exp(1)} once again. This last value is the correct value
976of the mathematical constant $e$ to 100 decimals, while the variable \kbd{e}
977shows the value that was computed to 38 decimals. Clearly they coincide to
978exactly 38 significant digits.
979
980So 38 digits are printed, but how many significant digits are actually
981contained in the variable \kbd{e}? Type \kbd{\#e} which indicates we have
982exactly $2$ mantissa words. Since $2\ln(2^{64}) / \ln(10)\approx38.5$ we see
983that we have 38 or 39 significant digits (on 64-bit machines).
984
985\smallskip
986Come back to 38 decimals (\kbd{\b{p} 38}). If we type \kbd{exp(1.)}
987you can check that we also obtain 38 decimals. However, type
988\kbd{f = exp(1 + 1E-40)}. Although the default precision is still 38,
989you can check using the method above that we have in fact 96 significant
990digits! The reason is that \kbd{1 + 1E-40} is computed according to the PARI
991philosophy, i.e.~to the best possible precision. Since \kbd{1E-40} has 39
992significant digits and 1 has ``infinite'' precision, the number \kbd{1 +
9931E-40} will have at least $79=39+40$ significant digits, hence \kbd{f} also.
994
995Now type \kbd{cos(1E-19)}. The result is printed as $1.0000\dots$, but
996is of course not exactly equal to 1. Using \kbd{\#\%}, we see that the
997result has 4 mantissa words, giving us the possibility of having 77
998correct significant digits. PARI gives you as much as it can, and since 3
999mantissa words would have given you only 57 digits, it uses 4. But why does
1000it give so precise a result? Well, it is the same reason as before. When $x$
1001is close to 1, $\cos(x)$ is close to $1-x^2/2$, hence the precision is going
1002to be approximately the same as when computing this quantity, here
1003$1-0.5*10^{-38}$ where $0.5*10^{-38}$ is considered with 38 significant digit
1004accuracy. Hence the result will have approximately $38+38=76$ significant
1005digits.
1006
1007This philosophy cannot go too far. For example, when you type \kbd{cos(0)},
1008\kbd{gp} should give you exactly 1. Since it is reasonable for a program to
1009assume that a transcendental function never gives you an exact result,
1010\kbd{gp} gives you $1.000\dots$ with as many mantissa word as the current
1011precision.
1012\medskip
1013Let's see some more transcendental functions at work. Type
1014\kbd{gamma(10)}. No problem (type \kbd{9!} to check). Type \kbd{gamma(100)}.
1015The number is now written in exponential format because the default accuracy
1016is too small to give the correct result. To get all digits, the most natural
1017solution is to increase the precision; since \kbd{gamma(100)} has 156 decimal
1018digits, type \kbd{\b{p} 170} to be on the safe side, then \kbd{gamma(100)}
1019once again. Another one is to compute \kbd{99!} directly.
1020
1021Try \kbd{gamma(1/2 + 10*I)}. No problem, we have the complex $\Gamma$ function.
1022Now type
1023\bprog
1024  t = 1000;
1025  z = gamma(1 + I*t) * t^(-1/2) * exp(Pi/2*t) / sqrt(2*Pi)
1026  norm(z)
1027@eprog\noindent The latter is very close to 1, in accordance with the complex
1028Stirling formula.
1029\smallskip
1030
1031Let's play now with the Riemann zeta function. First turn on the timer (type
1032\kbd{\#}). Type \kbd{zeta(2)}, then \kbd{Pi\pow 2/6}. This seems correct. Type
1033\kbd{zeta(3)}. All this takes essentially no time at all. However, type
1034\kbd{zeta(3.1)}. You will notice that the time is substantially larger; if
1035your machine is too fast to see the difference, increase the precision to
1036\kbd{\b{p}1000}. This is because PARI uses special formulas to compute
1037\kbd{zeta(n)} when \kbd{n} is an integer.
1038
1039Type \kbd{zeta(1 + I)}. This also works. Now for fun, let us compute in a
1040naive way the first complex zero of \kbd{zeta}. We know that it is
1041of the form $1/2 + i*t$ with $t$ between 14 and 15. Thus, we can use the
1042following series of instructions. But instead of typing them directly, write
1043them into a file, say \kbd{zeta.gp}, then type \kbd{\b{r} zeta.gp} under
1044\kbd{gp} to read it in:
1045\bprog
1046  {
1047    t1 = 1/2 + 14*I;
1048    t2 = 1/2 + 15*I; eps = 1E-50;
1049    z1 = zeta(t1);
1050    until (norm(z2) < eps,
1051      z2 = zeta(t2);
1052      if (norm(z2) < norm(z1),
1053        t3 = t1; t1 = t2; t2 = t3; z1 = z2
1054      );
1055      t2 = (t1+t2) / 2.;
1056      print(t1 ": " z1)
1057    )
1058  }
1059@eprog\noindent
1060Don't forget the braces: they tell \kbd{gp} that a sequence of instructions
1061is going to span many lines. We thus obtain the first zero to 25 significant
1062digits.
1063
1064By the way, you don't need to type in the suffix~\kbd{.gp} in the \b{r}
1065command: it is supplied by \kbd{gp} if you forget it. The suffix is not
1066mandatory either, but it is convenient to have all GP scripts labeled in the
1067same distinctive way. Also, some text editors, e.g. Emacs or Vim, will
1068recognize GP scripts as such by their suffix and load special colourful
1069modes. \medskip
1070%
1071As mentioned at the beginning of this tutorial, some transcendental functions
1072can also be applied to $p$-adic numbers. This is as good a time as any to
1073familiarize yourself with them. Type
1074\bprog
1075  a = exp(7 + O(7^10))
1076  log(a)
1077@eprog\noindent All seems in order.
1078\bprog
1079  b = log(5 + O(7^10))
1080  exp(b)
1081@eprog\noindent Is something wrong? We don't recover the number we started
1082with? This is normal: type
1083\bprog
1084  exp(b) * teichmuller(5 + O(7^10))
1085@eprog\noindent
1086and we indeed recover our initial number. The Teichm\"uller
1087character \kbd{teichmuller(x)} on $\Z_p^*$ is the unique \hbox{$(p-1)$-st}
1088root of unity which is congruent to \kbd{x} modulo $p$, assuming that \kbd{x}
1089is a $p$-adic unit.\smallskip
1090%
1091Let us come back to real numbers for the moment. Type \kbd{agm(1,sqrt(2))}.
1092This gives the arithmetic-geometric mean of 1 and $\sqrt2$, and is the basic
1093method for computing complete elliptic integrals. In fact, type
1094
1095\kbd{Pi/2 / intnum(t=0,Pi/2, 1 / sqrt(1 + sin(t)\pow 2))},
1096
1097\noindent and the result is the same. The elementary transformation
1098\kbd{x = sin(t)} gives the mathematical equality
1099$$\int_0^1 \dfrac{dx}{\sqrt{1-x^4}} = \dfrac{\pi}{2\text{AGM}(1,\sqrt2)}
1100\enspace,$$
1101which was one of Gauss's remarkable discoveries in his youth.
1102
1103Now type \kbd{2 * agm(1,I) / (1+I)}. As you see, the complex AGM also works,
1104although one must be careful with its definition. The result found is
1105almost identical to the previous one. Do you see why?
1106
1107Finally, type \kbd{agm(1, 1 + 7 + O(7\pow 10))}. So we also have $p$-adic
1108AGM. Note however that since the square root of a $p$-adic number is not
1109in general an element of the same $p$-adic field,
1110only certain $p$-adic AGMs can be computed. In addition,
1111when $p=2$, the congruence restriction is that \kbd{agm(a,b)} can be computed
1112only when \kbd{a/b} is congruent to 1 modulo $16$, and not 8 as could be
1113expected.\smallskip
1114%
1115Now type \kbd{?8}. This gives you the list of all transcendental functions.
1116Instead of continuing with more examples, we suggest that you experiment
1117yourself with this list. Try integer, real, complex and $p$-adic arguments.
1118You will notice that some have not been implemented (or do not have a
1119reasonable definition).
1120
1121\section{Using Numerical Tools}
1122
1123 Although not written to be a numerical analysis package, PARI can
1124nonetheless perform some numerical computations. Since linear algebra and
1125polynomial computations are treated somewhere else, this section focuses on
1126solving equations and various methods of summation.
1127
1128You of course know the formula $\pi = 4(1-\dfrac13+\dfrac15-\dfrac17+\cdots)$
1129which is deduced from the power series expansion of \kbd{atan(x)}. You also
1130know that $\pi$ cannot be computed from this formula, since the convergence
1131is so slow. Right? Wrong! Type
1132\bprog
1133  \p 100
1134  4 * sumalt(k=0, (-1)^k/(2*k + 1))
1135@eprog\noindent In a split second, we get $\pi$ to 100 significant digits
1136(type \kbd{Pi} to check).
1137
1138Similarly, try
1139\bprog
1140  sumpos(k=1, k^-2)
1141@eprog\noindent Although once again the convergence is slow, the summation is
1142rather fast; compare with the exact result \kbd{Pi\pow 2/6}. This is less
1143impressive because a bit slower than for alternating sums, but still useful.
1144
1145Even better, \kbd{sumalt} can be used to sum divergent series! Type
1146\bprog
1147  zet(s) = sumalt(k=1, (-1)^(k-1) / k^s) / (1 - 2^(1-s))
1148@eprog\noindent
1149Then for positive values of \kbd{s} different from 1, \kbd{zet(s)} is equal
1150to \kbd{zeta(s)} and the series converges, albeit slowly; \kbd{sumalt}
1151doesn't care however. For negative \kbd{s}, the series diverges, but
1152\kbd{zet(s)} still gives the correct result! (Namely, the value of a suitable
1153analytic continuation.) Try \kbd{zet(-1)}, \kbd{zet(-2)}, \kbd{zet(-1.5)},
1154and compare with the corresponding values of \kbd{zeta}. You should not push
1155the game too far: \kbd{zet(-100)}, for example, gives a completely wrong
1156answer.
1157
1158Try \kbd{zet(I)}, and compare with \kbd{zeta(I)}. Even (some) complex values
1159work, although the sum is not alternating any more! Similarly, try
1160\bprog
1161  sumalt(n=1, (-1)^n / (n+I))
1162@eprog
1163\medskip
1164More traditional functions are the numerical integration functions. Try
1165\kbd{intnum(t=1,2, 1/t)} and presto! you get 100 decimals of $\log(2)$. Look
1166at Chapter 3 to see the available integration functions.
1167
1168With PARI, however, you can go further since complex types are allowed.
1169For example, assume that we want to know the location of the zeros of the
1170function $h(z)=e^z-z$. We use Cauchy's theorem, which tells us that the
1171number of zeros in a disk of radius $r$ centered around the origin is
1172equal to
1173$$\dfrac{1}{2i\pi}\int_{C_r}\dfrac{h'(z)}{h(z)}\,dz\enspace,$$
1174where $C_r$ is the circle of radius $r$ centered at the origin.
1175The function we want to integrate is
1176\bprog
1177  fun(z) = my(u = exp(z)); (u-1) / (u-z)
1178@eprog\noindent
1179(Here \kbd{u} is a local variable to the function \kbd{f}: whenever
1180a function is called, \kbd{gp} fills its argument list with the actual arguments
1181given, and initializes the other declared parameters and local variables to
11820. It will then restore their former values upon exit. If we had not declared
1183\kbd{u} in the function prototype, it would be considered as a global
1184variable, whose value would be permanently changed. It is not mandatory to
1185declare in this way all parameters, but beware of side effects!)
1186
1187Type now:
1188\bprog
1189  zero(r) = r/(2*Pi) * intnum(t=0, 2*Pi, real( fun(r*exp(I*t)) * exp(I*t) ))
1190@eprog
1191The function \kbd{zero(r)} will count the number of zeros of \kbd{fun}
1192whose modulus is less than \kbd{r}: we simply made the change of variable
1193$z = r*\exp(i*t)$, and took the real part to avoid integrating the
1194imaginary part. Actually, there is a built-in function \kbd{intcirc}
1195to integrate over a circle, yielding the much simpler:
1196\bprog
1197  zero2(r) = intcirc(z=0, r, fun(z))
1198@eprog
1199(This is a little faster than the previous implementation, and no less
1200accurate.)
1201
1202We may type \kbd{\b{p} 9} since we know that the result is a small integer
1203(but the computations should be instantaneous even at \b{p} 100 or so),
1204then \kbd{zero(1)}, \kbd{zero(1.5)}. The result tells us that there are
1205no zeros inside the unit disk, but that there are two (necessarily
1206complex conjugate) in the annulus $1<|z|<1.5$. For the sake of
1207completeness, let us compute them. Let $z = x+iy$ be such a zero, with
1208$x$ and $y$ real. Then the equation $e^z-z=0$ implies, after elementary
1209transformations, that $e^{2x}=x^2+y^2$ and that $e^x\cos(y)=x$. Hence
1210$y=\pm\sqrt{e^{2x}-x^2}$ and hence $e^x\cos(\sqrt{e^{2x}-x^2})=x$.
1211Therefore, type
1212\bprog
1213  fun(x) = my(u = exp(x)); u * cos(sqrt(u^2 - x^2)) - x
1214@eprog\noindent
1215Then \kbd{fun(0)} is positive while \kbd{fun(1)} is negative. Come
1216back to precision 38 and type
1217\bprog
1218  x0 = solve(x=0,1, fun(x))
1219  z = x0 + I*sqrt(exp(2*x0) - x0^2)
1220@eprog\noindent
1221which is the required zero. As a check, type \kbd{exp(z) - z}.
1222
1223Of course you can integrate over contours which are more complicated than
1224circles, but you must perform yourself the variable changes, as we have done
1225above to reduce the integral to a number of integrals on line segments.
1226\smallskip
1227%
1228The example above also shows the use of the \kbd{solve} function. To use
1229\kbd{solve} on functions of a complex variable, it is necessary to reduce the
1230problem to a real one. For example, to find the first complex zero of the
1231Riemann zeta function as above, we could try typing
1232
1233\kbd{solve(t=14,15, real( zeta(1/2 + I*t) ))},
1234
1235\noindent but this does not work because the real part is positive for
1236$\kbd{t}=14$ and $15$. As it happens, the imaginary part works. Type
1237
1238\kbd{solve(t=14,15, imag( zeta(1/2 + I*t) ))},
1239
1240\noindent and this now works. We could also narrow the search interval and
1241type for instance
1242
1243\kbd{solve(t=14,14.2, real( zeta(1/2 + I*t) ))}
1244
1245\noindent which would also work.
1246
1247\section{Polynomials}
1248
1249First a word of warning: it is essential to understand the difference between
1250exact and inexact objects. Try
1251\bprog
1252  gcd(x - Pi, x^2 - 6*zeta(2))
1253@eprog\noindent
1254We return a trivial GCD because the notion of GCD for inexact polynomials
1255doesn't make much sense. A better quantitative approach is to use
1256\bprog
1257  polresultant(x - Pi, x^2 - 6*zeta(2))
1258@eprog\noindent
1259A result close to zero shows that the GCD is nontrivial for small
1260deformations of the inputs. Without telling us what it is, of course. This
1261being said, we will mostly use polynomials (and power series) with exact
1262coefficients in our examples.\smallskip
1263
1264The simplest way to input a polynomial, is to simply write it down,
1265or use an explicit formula for the coefficients and the function \kbd{sum}:
1266\bprog
1267  T = 1 + x^2 + 27*x^10;
1268  T = sum(i = 1, 100, (i+1) * x^i);
1269@eprog\noindent but it is in much more efficient to create a vector of
1270coefficients then convert it to a polynomial using \kbd{Pol} or \kbd{Polrev}
1271(\kbd{Pol([1,2])} is $x+2$, \kbd{Polrev([1,2]) is $2x + 1$}) :
1272\bprog
1273  T = Polrev( vector(100, i, i) );
1274
1275  for (i=1, 10^4, Polrev( vector(100, i, i) ) )   \\@com time: 60ms
1276  for (i=1, 10^4, sum(i = 1, 100, (i+1) * x^i) ) \\@com time: 1,74ms
1277@eprog\noindent The reason for the discrepancy is that the explicit summation
1278(of densely encoded polynomials) is quadratic in the degree, whereas creating
1279a vector of coefficients then converting it to a polynomial type is linear.
1280
1281We also have a few built-in classical polynomial families. Consider the
1282$15$-th cyclotomic polynomial,
1283\bprog
1284  pol = polcyclo(15)
1285@eprog\noindent
1286which is of degree $\varphi(15)=8$. Now, type
1287\bprog
1288  r = polroots(pol)
1289@eprog\noindent We obtain the 8 complex roots of \kbd{pol}, given to 38
1290significant digits. To see them better, type \b{B}: they are given as pairs
1291of complex conjugate roots, in some order. In fact, the
1292function \kbd{polroots} returns the real roots first, in increasing order,
1293then the other roots by increasing absolute value of their imaginary parts
1294(so that pairs of complex conjugate roots appear together).
1295
1296The roots of \kbd{pol} are by definition the primitive $15$-th roots of unity.
1297To check this, simply type \kbd{rc = r\pow 15}. Why, we get an error message!
1298Fair enough, vectors cannot be multiplied, even less raised to a power.
1299However, type
1300\bprog
1301  rc = r^15.0
1302@eprog\noindent without forgetting the `\kbd{.}' at the end. Now it works,
1303because powering to a nonintegral exponent is a transcendental function and
1304hence is applied termwise. Note that the fact that $15.0$ is a real number
1305which is representable exactly as an integer has nothing to do with the
1306problem.
1307
1308We see that the components of the result are very close to 1. It is however
1309tedious to look at all these real and imaginary parts. It would be impossible
1310if we had many more. Let's do it automatically. Type
1311\bprog
1312  rr = round(rc)
1313  exponent(rc - rr)
1314@eprog\noindent We see that \kbd{rr} is indeed all 1's, and that the
1315sup-norm of \kbd{rc - rr} is around $2^{-125}$, reasonable enough when we
1316work with 128 bits of accuracy! In fact, \kbd{round} itself already provides
1317a built-in rough approximation of the error:
1318\bprog
1319  rr = round(rc, &e)
1320@eprog\noindent Again, \kbd{e} contains the number of error bits when rounding
1321\kbd{rc} to \kbd{rr}; in other words the sup norm of $\kbd{rc} - \kbd{rr}$
1322is bounded by $2^{-\kbd{e}}$.
1323%
1324\smallskip
1325Now type
1326\bprog
1327  pol = x^5 + x^4 + 2*x^3 - 2*x^2 - 4*x - 3
1328  factor(pol)
1329  factor( poldisc(pol) )
1330  fun(p) = factor(pol, O(p^10));
1331@eprog\noindent The polynomial \kbd{pol} factors over $\Q$ (or $\Z$) as a
1332product of two factors, and the primes dividing its discriminant are
1333$11$, $23$ and $37$. We also created a function \kbd{fun(p)} which factors
1334\kbd{pol} over $\Q_p$ to $p$-adic precision 10. Type
1335\bprog
1336  fun(5)
1337  fun(11)
1338  fun(23)
1339  fun(37)
1340@eprog\noindent to see different splittings. You can use \kbd{lift} to
1341convert $p$-adic numbers to neighbouring rational numbers.
1342
1343Similarly, type
1344\bprog
1345  lf(p) = lift( factor(pol, p) );
1346  lf(2)
1347  lf(11)
1348  lf(23)
1349  lf(37)
1350@eprog\noindent which show the different factorizations, this time over
1351$\F_p$. In fact, even better: type successively
1352\bprog
1353  T = ffgen(3^3, 't) \\@com we want \kbd{t} to be a free variable
1354  factor(pol, t)
1355@eprog\noindent
1356The element $T$ is printed as \kbd{t} but is actually defined modulo the
1357irreducible polynomial \kbd{t\pow 3 + 2t + 2} over $\F_{3}$ (see
1358\kbd{t.mod}): it defines the finite field $\F_{27}$. Note that we introduced
1359a new variable $t$ to express elements in this nonprime field.
1360More generally, the generic \kbd{factor} function allows a second argument
1361which describes the domain over which we want to factor, here $\F_{27}$
1362(and, above, $\Q_p$ to $10$ digits of accuracy and $\F_p$).
1363Typing \kbd{factor(pol)} directly would factor it over $\Q$, not what we
1364wanted.
1365
1366Similarly, type
1367\bprog
1368  pol2 = x^4 - 4*x^2 + 16
1369  fn = lift( factor(pol2, t^2 + 1) )
1370@eprog\noindent and we get the factorization of the polynomial \kbd{pol2}
1371over the number field $\Q[t]/(t^2+1)$, i.e.~over $\Q(i)$. Without the
1372\kbd{lift}, the result would involve number field elements as \typ{POLMOD}s
1373of the form \kbd{Mod(1+t, t\pow2+1)}, which are more explicit but less
1374readable.
1375\smallskip
1376
1377To summarize, in addition to being able to factor integers, you can factor
1378polynomials over $\C$ using \kbd{polroots}, over $\R$ using
1379\kbd{factor(,1.0)}, over finite fields using
1380\kbd{factor(, p)}, over $\Q_p$ using \kbd{factor(, O(p\pow $n$))},
1381over $\Q$ using \kbd{factor}, and over the number field $\Q[t]/(T)$ using
1382\kbd{factor(, T)}.
1383Note however that \kbd{factor} itself will guess intelligently over
1384which ring you want to factor: try
1385\bprog
1386  pol = x^2 + 1;
1387  factor(pol)
1388  factor(pol *1.)
1389  factor(pol * (1 + 0*I))
1390  factor(pol * (1 + 0.*I))
1391  factor(pol * Mod(1,2))
1392  factor(pol * Mod(1, Mod(1,3)*(t^2+1)))
1393  pol2 = x^2 + y^2;
1394  factor(pol2)
1395  factor(pol2 * Mod(1,5))
1396@eprog\noindent
1397In the present version \vers{}, it is not possible to factor over other
1398base rings than the ones mentioned above, but multivariate polynomials over
1399those rings are allowed as shown in the last examples. Other functions
1400related to factoring are \kbd{padicappr}, \kbd{polrootsmod},
1401\kbd{polrootspadic}, \kbd{polsturm}. Play with them a little.
1402
1403Finally, type
1404\bprog
1405  polsym(pol2, 20)
1406@eprog\noindent where \kbd{pol2} was defined above. This gives the sum of the
1407$k$-th powers of the roots of \kbd{pol2} up to $k=20$, of course computed
1408using Newton's formula and not using \kbd{polroots}. You notice that every
1409odd sum is zero (expected, since the polynomial is even), but also that
1410the signs follow a regular pattern and that the (nonzero) absolute values
1411are powers of 2. This is true: prove it, and more precisely find an explicit
1412formula for the $k$-th symmetric power not involving (nonrational) algebraic
1413numbers.
1414
1415\section{Power series}
1416
1417Now let's play with power series as we have done at the beginning.  Type
1418\bprog
1419  N = 39;
1420  8*x + prod(n=1,N, if(n%4, 1 - x^n, 1), 1 + O(x^(N+1)))^8
1421@eprog\noindent
1422Apparently, only even powers of \kbd{x} appear. This is surprising, but can
1423be proved using the theory of modular forms. Note that we initialize
1424the product to \kbd{1 + O(x\pow (N+1))}, otherwise the whole computation would
1425be done with polynomials; this would first have been slightly slower and also
1426totally useless since the coefficients of \kbd{x\pow (N+1)} and above are
1427irrelevant anyhow if we stop the product at $\kbd{n} = \kbd{N}$.
1428
1429While we are on the subject of modular forms (which, together with Taylor
1430series expansions are another great source of power series), type
1431\bprog
1432  \ps 122     \\@com shortcut for \kbd{default(seriesprecision, 122)}
1433  d = x * eta(x)^24
1434@eprog\noindent This gives the first 122 terms of the Fourier series
1435expansion of the modular discriminant function $\Delta$ of Ramanujan. Its
1436coefficients give by definition the Ramanujan $\tau$ function, which has a
1437number of marvelous properties (look at any book on modular forms for
1438explanations). We would like to see its properties modulo 2. Type \kbd{d\%2}.
1439Hmm, apparently PARI doesn't like to reduce coefficients of power series, or
1440polynomials for that matter, directly. Can we do it without writing a little
1441program? No problem. Type instead
1442\bprog
1443  lift(Mod(1,2) * d)
1444  centerlift(Mod(1,3) * d)
1445@eprog\noindent and now this works like a charm. The pattern in the first
1446result is clear; the pattern is less clear in the second result, but
1447nonetheless there is one. Of course, it now remains to prove it (see Antwerp
1448III or your resident modular forms guru).
1449
1450\section{Working with Elliptic Curves}
1451
1452Now we are getting to more complicated objects. Just as with number fields
1453which we will meet later on, the first thing to do is to initialize them.
1454That's because a lot of data will be needed repeatedly, and it's much more
1455convenient to have it ready once and for all. Here, this is done with the
1456function \kbd{ellinit}.
1457
1458So type
1459\bprog
1460  e0 = ellinit([6,-3,9,-16,-14])
1461@eprog
1462This computes a number of things
1463about the elliptic curve defined by the affine equation
1464%
1465$$ y^2+6xy+9y = x^3-3x^2-16x-14\enspace. $$
1466%
1467It is not that clear what all these funny numbers mean, except that we
1468recognize the first few of them as the coefficients we just input. To
1469retrieve meaningful information from such complicated objects (and number
1470fields will be much worse), one uses so-called \emph{member
1471functions}. Type \kbd{?.} to get a complete list. Whenever \kbd{ell} appears
1472in the right hand side, we can apply the corresponding function to an object
1473output by \kbd{ellinit}. (I'm sure you know how the other \kbd{init} functions
1474will be called now, don't you? Oh, by the way, neither \kbd{clgpinit} nor
1475\kbd{pridinit} exist.)
1476
1477  Let's try it. The discriminant \kbd{e0.disc} is equal to 37, hence
1478the conductor of the curve is 37. Of course in general it is not so
1479trivial. In fact, although the equation of the curve is clearly
1480minimal (since the discriminant is $12$th-power-free), it is not in
1481standard reduced form, so type
1482\bprog
1483  e = ellminimalmodel(e0)
1484@eprog\noindent which
1485gives the \kbd{ell} structure attached to the standard model,
1486exactly as if we had used \kbd{ellinit} on a reduced equation. For
1487some related data, type
1488\bprog
1489  gr = ellglobalred(e0)
1490@eprog\noindent The first
1491component \kbd{gr[1]} tells us that the conductor is 37 as we already
1492knew.  The second component is a 4-component vector which allows us to
1493get the minimal equation: in fact \kbd{e} is \kbd{ellchangecurve(e0, gr[2])}.
1494Type
1495\bprog
1496  q0 = [-2,2]
1497  ellisoncurve(e0, q0)
1498  q = ellchangepoint(q0,gr[2])
1499  ellisoncurve(e, q)
1500@eprog\noindent
1501The point \kbd{q0} is on the curve, as checked by \kbd{ellisoncurve}, and we
1502transferred it onto the minimal model \kbd{e}, using \kbd{ellchangepoint} and
1503the change of variable computed above. Note that \kbd{ellchangepoint()} is
1504unusual among the elliptic curve functions in that it does not take an
1505\kbd{ell} structure as its first argument: in \kbd{gp}, points do not
1506``know'' which curve they are on, but to move a point from one model to
1507another we only need to know the coordinates of the point and the
1508transformation data here stored in \kbd{gr[2]}.  Also, the point at infinity
1509is represented as \kbd{[0]} on all elliptic curves; this is the identity for
1510the group law.
1511
1512Here, \kbd{q=[0,0]} obviously lies on \kbd{e}, which has equation
1513$y^2+y = x^3-x$.  Let us now play a little with points on \kbd{e}.
1514The group law on an elliptic curve is implemented with the functions
1515\kbd{elladd} for addition, \kbd{ellsub} for subtraction and
1516\kbd{ellmul} for multiplication by an integer.  For
1517example, the negative of \kbd{q} is \kbd{ellsub(e,[0],q)}, and the
1518double is obtained either as \kbd{ellmul(e,q,2)} or as
1519\kbd{elladd(e,q,q)}.
1520
1521Now \kbd{q} may be a torsion point. Type \kbd{ellheight(e, q)}, which
1522computes the canonical Neron-Tate height of \kbd{q}. Note that
1523\kbd{ellheight} does not assume that \kbd{e} is \emph{minimal}! (Although
1524it is, making things a little faster.)
1525This is nonzero, hence \kbd{q} is not torsion. To see this even better,
1526type
1527\bprog
1528  for(k = 1, 20, print(ellmul(e, q, k)))
1529@eprog\noindent and we see the characteristic parabolic explosion of the size
1530of the points. (And another proof that \kbd{q} is not torsion, assuming
1531Mazur's bound on the size of the rational torsion.) We could can also type
1532\kbd{ellorder(e, q)} which returns 0, telling us yet again that \kbd{q} is
1533nontorsion. As a consistency check, type
1534\bprog
1535  ellheight(e, ellmul(e, q,20)) / ellheight(e, q)
1536@eprog\noindent
1537We indeed find $400=20^2$ as it should be.
1538
1539Notice how (almost) all those \kbd{ell}--prefixed functions take our
1540elliptic curve as a first argument? This will be true with number
1541fields as well: whatever object was initialized by an $ob$--\kbd{init}
1542function will have to be used as a first argument of all the
1543$ob$--prefixed functions. Conversely, you won't be able to use any
1544such high-level function before you correctly initialize the relevant
1545object. \smallskip
1546
1547Ok, let's try another curve. Type
1548\bprog
1549  E = ellinit([0,-1,1,0,0])
1550  q = [0,0]; ellheight(E, q)
1551@eprog\noindent This corresponds to the equation $y^2+y = x^3-x^2$ and an
1552obvious rational point on it. Again from the discriminant we see that the
1553conductor is equal to 11, and if you type \kbd{ellminimalmodel(E)} you will
1554see that the equation  for \kbd{E} is minimal. This time the height is
1555exactly zero, hence \kbd{q} must be a torsion point. Indeed, typing
1556\bprog
1557  for(k=1, 5, print(ellmul(E, q,k)))
1558  ellorder(E, q)   \\@com simpler
1559@eprog\noindent
1560we see in two different ways that \kbd{q} is a point of order 5. Moreover,
1561typing
1562\bprog
1563  elltors(E)
1564@eprog\noindent shows that \kbd{q} generates all the torsion of \kbd{E},
1565which is cyclic of order~$5$. \smallskip
1566
1567Let's try still another curve, $y^2+y = x^3-7x+6$:
1568\bprog
1569  e = ellinit([0,0,1,-7,6])
1570  ellglobalred(e)
1571@eprog\noindent As before, this is a minimal equation; now the conductor is
15725077. There are some trivial integral points on this curve, but let's try to
1573be more systematic. Typing
1574\bprog
1575  elltors(e)
1576@eprog\noindent
1577shows that the torsion subgroup is trivial, so we don't have to worry about
1578torsion points. Next, the function \kbd{ellratpoints} allows us to find
1579rational points of small height
1580\bprog
1581  v = ellratpoints(e,1000)
1582@eprog\noindent The vector $v$ contains all 130 rational points $(x,y)$ on the
1583curve whose $x$-coordinate is $n/d$ with $|n|$ and $|d|$ both less than $1000$.
1584Note that \kbd{ellratpoints(e,10\pow6)} takes less than 1 second, and
1585produces 344 points. Of course, these are grouped by pairs:
1586if $(x,y)$ is on the curve, its opposite is $(x,-y-1)$ as
1587\bprog
1588  ellneg(e, ['x,'y])
1589@eprog\noindent shows. Note that there is no problem with manipulating points
1590with formal coordinates. This is large for a curve having such a small
1591conductor. So we suspect (if we do not know already, since this curve is
1592quite famous!) that the rank of this curve must be large. Let's try and put
1593some order into this. First, we eliminate one element in each pair
1594of opposite points:
1595\bprog
1596  v = vecsort(v, 1, 8)
1597@eprog\noindent
1598The argument $1$ specifies a comparison function: we sort the points by first
1599coordinate only, in particular two points with the same $x$-coordinate
1600compare as equal; the $8$ flag eliminates ``duplicates''. The same effect
1601could be obtained in a more verbose way using an inline anonymous function
1602\bprog
1603  v = vecsort(v, (P,Q) -> sign(P[1]-Q[1]), 8)
1604@eprog\noindent We now order the points according to their canonical height:
1605\bprog
1606  hv = [ ellheight(e,P) | P <- v ];
1607  v = vecextract(v, vecsort(hv,,1)) \\ indirect sort wrt h, then permute
1608@eprog\noindent
1609It seems reasonable to take the numbers with smallest height as
1610possible generators of the Mordell-Weil group. Let's try the first
1611four: type
1612\bprog
1613  m = ellheightmatrix(e, v[1..4]); matdet(m)
1614@eprog\noindent
1615Since the curve has no torsion, the determinant being close to zero
1616implies that the first four points are dependent. To find the
1617dependency, it is enough to find the kernel of the matrix \kbd{m}. So
1618type \kbd{matker(m)}: we indeed get a nontrivial kernel, and the
1619coefficients are close to integers. Typing \kbd{elladd(e, v[1],v[3])} does
1620indeed show that it is equal to \kbd{v[4]}.
1621
1622Taking any other four points, we seem to always find a dependency.
1623Let's find all dependencies. Type
1624\bprog
1625  vp = v[1..3];
1626  m = ellheightmatrix(e,vp);
1627  matdet(m)
1628@eprog\noindent This is now clearly nonzero so the first 3 points are
1629linearly independent, showing that the rank of the curve is at least equal to
16303. (In fact, \kbd{e} is the curve of smallest conductor having rank 3.)
1631We would like to see whether the other points are dependent:
1632if \kbd{Q} is some point which is dependent on \kbd{v[1],v[2]} and \kbd{v[3]}
1633and
1634\bprog
1635  c = [ellheight(e, P, Q) | P <- vp]~
1636@eprog\noindent then \kbd{m\pow(-1) * c} will give the coefficients of the
1637dependence relation. If these coefficients are not close to integers,
1638then there is no dependency, otherwise we can round and check rigourously
1639using a function such as the following:
1640\bprog
1641  ellcomb(e, P, L) =
1642  { my (Q = [0]);
1643    for (i = 1, #P, Q = elladd(e, Q, ellmul(e, P[i], L[i])));
1644    return (Q);
1645  }
1646@eprog\noindent This is safer than using the \kbd{matker} function. Thus, type
1647\bprog
1648  mi = m^(-1);
1649  w = vector(#v, k, mi * [ellheight(e, P, v[k]) | P <- vp]~)
1650  wr = round(w, &e)
1651@eprog\noindent
1652We ``see'' that the coefficients are all very close to integers, and we
1653quantified it with the last instruction: \kbd{wr} is the vector expressing
1654all the components of \kbd{v} on its first 3, and $2^{-e}$ gives an upper
1655bound on the maximum distance to an integer. The rigorous check is positive:
1656\bprog
1657  for (i=1, #w, if (v[i] != ellcomb(e,vp,wr[i]), error(i)));
1658@eprog No error! We are thus led to strongly believe that the curve has rank
1659exactly 3, with generators \kbd{v[1]}, \kbd{v[2]} and \kbd{v[3]}, and this
1660can be proved to be the case.
1661
1662Two remarks: (1) Using the height pairing to find dependence relations as we
1663have done only finds relations modulo torsion; but in this example, the curve
1664has trivial torsion, as we checked earlier. (2) In the above calculation we
1665were lucky that all the \kbd{v[j]} were $\Z$-linear combinations of
1666\kbd{v[1]}, \kbd{v[2]} and \kbd{v[3]} and not just $\Q$-linear combinations;
1667in general the results in $w$ might have given a vector of rationals: if
1668$k\ge2$ is minimal such that $kQ$ is in the subgroup generated by \kbd{v[1]},
1669\kbd{v[2]} and \kbd{v[3]}, then the entries of \kbd{matsolve(m, ellbil(e,
1670vp,Q))} will be rationals with common denominator~$k$. This can be detected
1671by using \kbd{bestappr} instead of \kbd{round} in the above.
1672
1673\smallskip
1674
1675Let us explore a few more elliptic curve related functions. Keep our
1676curve \kbd{e} of rank 3, and type
1677\bprog
1678  v1 = [1,0]; z1 = ellpointtoz(e, v1)
1679  v2 = [2,0]; z2 = ellpointtoz(e, v2)
1680@eprog\noindent
1681We thus get the complex parametrization of the curve. To add the points
1682\kbd{v1} and \kbd{v2}, we should of course type \kbd{elladd(e, v1,v2)},
1683but we can also type \kbd{ellztopoint(e, z1 + z2)} which has the disadvantage
1684of giving complex numbers, but illustrates how the group law on \kbd{e} is
1685obtained from the addition law on $\C$.
1686
1687Type
1688\bprog
1689  f = x * Ser(ellan(e, 30))
1690@eprog\noindent This gives a power series which is the Fourier expansion of a
1691modular form of weight 2 for $\Gamma_0(5077)$. (This has been proved
1692directly, before Wiles's general result.) In fact, to find the modular
1693parametrization of the curve, type
1694\bprog
1695  modul = elltaniyama(e)
1696  u = modul[1];
1697  v = modul[2];
1698@eprog\noindent
1699We can check in various ways that this indeed parametrizes the curve:
1700\bprog
1701  (v^2 + v) - (u^3 - 7*u + 6)
1702@eprog\noindent
1703is close to $0$ for instance, or simply that \kbd{ellisoncurve(e,modul)}
1704returns~$1$. Now type
1705\bprog
1706  x * u' / (2*v + 1)
1707@eprog\noindent and we see that this is equal to the modular form \kbd{f}
1708found above; the quote \kbd{'} tells \kbd{gp} to take the derivative of the
1709expression with respect to its main variable. The functions \kbd{u} and
1710\kbd{v}, considered on the upper half plane with $x=e^{2i\pi\tau}$, are in
1711fact modular \emph{functions} for $\Gamma_0(5077)$. \smallskip
1712
1713The function \kbd{ellan(e, 30)} gives the first~$30$ coefficients
1714$a_n$ of the $L$-series of \kbd{e}.  One can also ask for a single
1715coefficient: the millionth is \kbd{ellak(e, 10\pow 6)}.  Note however
1716that calling \kbd{ellan(e,100000)} is much faster than
1717the equivalent \kbd{vector(100000,k,ellak(e,k))}.  For a
1718prime~\kbd{p},
1719\kbd{ellap(e,p)} is equivalent to \kbd{ellak(e,p)};  this is the
1720integer $a_p$ such that the number of points on \kbd{e} over $\F_p$ is
1721$1+p-a_p$. (With the standard PARI distribution, \kbd{ellap} is the only way
1722to obtain the order of an elliptic curve over $\F_p$ in \kbd{gp}. The
1723external package \kbd{ellsea} provides much more efficient routines.)
1724
1725Finally, let us come back to the curve \kbd{E} defined above by
1726\kbd{E = ellinit([0,-1,1,0,0])}, which had an obvious rational $5$-torsion
1727point. The sign of the functional equation, given by \kbd{ellrootno(E)}, is
1728$+1$. Assuming the rank parity conjecture, it follows that the Mordell-Weil
1729group of $E$ has even rank. The value of the L-function of \kbd{E} at 1
1730\bprog
1731  ls = lfun(E, 1)
1732@eprog\noindent is definitely nonzero, so \kbd{E} has rank $0$. According to
1733the Birch and Swinnerton-Dyer conjecture, which is proved for this curve,
1734\kbd{ls} is given by the following formula (in this case):
1735%
1736\def\sha{\hbox{III}}
1737$$L(E,1)=\dfrac{\Omega\cdot c\cdot|\sha|}{|E_{\text{tors}}|^2}\enspace,$$
1738%
1739where $\Omega$ is the real period of $E$, $c$ is the global Tamagawa number,
1740$|\sha|$ is the order of the Tate-Shafarevich group, and $E_{\text{tors}}$ is the
1741torsion group of $E$.
1742
1743Now we know many of these quantities: $\Omega$ is equal to \kbd{E.omega[1]}
1744The global Tamagawa number $c$ is given by \kbd{elltamagawa(E)} and is $1$
1745We already know that the torsion subgroup of $E$ contains a point of order 5,
1746and typing \kbd{elltors(E)} shows that it is of order exactly 5. So type
1747\bprog
1748  ls * 25/E.omega[1]
1749@eprog\noindent This shows that $\sha$ must be the trivial group.
1750A short hand for $\dfrac{\Omega\cdot c}{|E_{\text{tors}}|^2}$ is \kbd{ellbsd(E)},
1751so the previous line can be written as
1752\bprog
1753 ls / ellbsd(E)
1754@eprog
1755
1756For more detailed information on the local reduction of an elliptic curve at
1757a specific prime~\kbd{p}, use the function \kbd{elllocalred(E,p)}; the second
1758component gives the Kodaira symbol in an encoded form.  See the manual or
1759online help for details.
1760
1761\section{Working in Quadratic Number Fields}
1762
1763The simplest of all number fields outside $\Q$ are quadratic fields and are
1764the subject of the present section. We shall deal in the next one with
1765general number fields (including $\Q$ and quadratic fields!), and one should
1766be aware that all we will see now has a more powerful, in general easier to
1767use, equivalent in the general context. But possibly much slower.
1768
1769Such fields are characterized by their discriminant. Even better, any
1770nonsquare integer $D$ congruent to 0 or 1 modulo 4 is the discriminant of a
1771specific order in a quadratic field. We can check whether this order is
1772maximal with \kbd{isfundamental(D)}. Elements of a quadratic field are of the
1773form $a+b\omega$, where $\omega$ is chosen as $\sqrt{D}/2$ if $D$ is even and
1774$(1+\sqrt{D})/2$ if $D$ is odd, and are represented in PARI by quadratic
1775numbers. To initialize working in a quadratic order, one starts by the
1776command \kbd{w = quadgen($D$,'w)}.
1777
1778This sets \kbd{w} equal to $\omega$ as above, and is printed \kbd{w}.
1779If you need several quadratic orders at the same time, you can
1780use different variable names:
1781\bprog
1782  w1 = quadgen(-23,'w1)
1783  w2 = quadgen(-15,'w1)
1784@eprog\noindent
1785\smallskip
1786%
1787In addition to elements of a quadratic order, we also want to be able to
1788handle ideals of such orders. In the quadratic case, it is equivalent to
1789handling binary quadratic forms. For negative discriminants, quadratic forms
1790are triples $(a,b,c)$ representing the form $ax^2+bxy+cy^2$. Such a form will
1791be printed as, and can be created by, \kbd{Qfb($a$,$b$,$c$)}.
1792
1793Such forms can be multiplied, divided, powered as many PARI objects using
1794the usual operations, and they can also be reduced using the function
1795\kbd{qfbred} (it is not the purpose of this tutorial to explain what all
1796these things mean). In addition, Shanks's NUCOMP algorithm has been
1797implemented (functions \kbd{qfbnucomp} and \kbd{qfbnupow}), and this is
1798usually a little faster.
1799
1800Finally, you have at your disposal the functions \kbd{qfbclassno} which
1801(\emph{usually}) gives the class number, the function \kbd{qfbhclassno}
1802which gives the Hurwitz class number, and the much more sophisticated
1803\kbd{quadclassunit} function which gives the class number and class group
1804structure.
1805
1806Let us see examples of all this at work.
1807
1808Type \kbd{qfbclassno(-10007)}. \kbd{gp} tells us that the result is 77. However,
1809you may have noticed in the explanation above that the result is only
1810\emph{usually} correct. This is because the implementers of the algorithm
1811have been lazy and have not put the complete Shanks algorithm into PARI,
1812causing it to fail in certain very rare cases. In practice, it is almost
1813always correct, and the much more powerful \kbd{quadclassunit} program, which
1814\emph{is} complete (at least for fundamental discriminants) can give
1815confirmation; but now, under the Generalized Riemann Hypothesis!
1816
1817So we may be a little suspicious of this class number. Let us check it.
1818First, we need to find a quadratic form of discriminant $-10007$. Since this
1819discriminant is congruent to 1 modulo 8, we know that there is an ideal of
1820norm equal to 2, i.e.~a binary quadratic form $(a,b,c)$ with $a=2$. To
1821compute it we type \kbd{f = qfbprimeform(-10007, 2)}. OK, now we have a form.
1822If the class number is correct, the very least is that this form raised to
1823the power 77 should equal the identity. Type \kbd{f\pow 77}. We get a form
1824starting with 1, i.e.~the identity. Raising \kbd{f} to the powers 11 and 7
1825does not give the identity, thus we now know that the order of \kbd{f} is
1826exactly 77, hence the class number is a multiple of 77. But how can we be
1827sure that it is exactly 77 and not a proper multiple? Well, type
1828\bprog
1829  sqrt(10007)/Pi * prodeuler(p=2,500, 1./(1 - kronecker(-10007,p)/p))
1830@eprog\noindent
1831This is nothing else than an approximation to the Dirichlet class number
1832formula. The function \kbd{kronecker} is the Kronecker symbol, in this case
1833simply the Legendre symbol. Note also that we have written \kbd{1./(1 - \dots)}
1834with a dot after the first 1. Otherwise, PARI may want to compute the whole
1835thing as a rational number, which would be terribly long and useless. In fact
1836PARI does no such thing in this particular case (\kbd{prodeuler} is always
1837computed as a real number), but you never know. Better safe than sorry!
1838
1839We find 77.77, pretty close to 77, so things seem in order. Explicit bounds
1840on the prime limit to be used in the Euler product can be given which make
1841the above reasoning rigorous.
1842
1843Let us try the same thing with $D=-3299$. \kbd{qfbclassno} and the Euler
1844product convince us that the class number must be 27. However, we get stuck
1845when we try to prove this in the simple-minded way above. Indeed, we type
1846\kbd{f = qfbprimeform(-3299, 3)} (2 is not the norm of a prime ideal but
18473 is), and we see that \kbd{f} raised to the power 9 is equal to the identity.
1848This is the case for any other quadratic form we choose. So we suspect that the
1849class group is not cyclic. Indeed, if we list all 9 distinct powers of \kbd{f},
1850we see that \kbd{qfbprimeform(-3299, 5)} is not on the list, although its cube
1851is as it must. This implies that the class group is probably equal to a
1852product of a cyclic group of order 9 by a cyclic group of order 3. The Euler
1853product plus explicit bounds prove this.
1854
1855Another way to check it is to use the \kbd{quadclassunit} function by typing
1856for example
1857\bprog
1858  quadclassunit(-3299)
1859@eprog\noindent
1860Note that this function cheats a little and could still give a wrong answer,
1861even assuming GRH: the forms given could generate a strict subgroup of the
1862class group. If we want to use proven bounds under GRH, we have to type
1863\bprog
1864  quadclassunit(-3299,,[1,6])
1865@eprog\noindent
1866The double comma \kbd{,,} is not a typo, it means we omit an optional second
1867argument. As we want to use the optional \emph{third} argument, we have to
1868indicate to \kbd{gp} we skipped this one.
1869
1870Now, if we believe in GRH, the class group is as we thought (see Chapter 3
1871for a complete description of this function).
1872
1873  Note that using the even more general function \kbd{bnfinit} (which handles
1874general number fields and gives more complicated results), we could
1875\emph{certify} this result, i.e~remove the GRH assumption. Let's do it, type
1876\bprog
1877  bnf = bnfinit(x^2 + 3299); bnfcertify(bnf)
1878@eprog
1879
1880  A nonzero result (here 1) means that everything is ok. Good, but what did
1881we certify after all? Let's have a look at this \kbd{bnf} (just type it!).
1882Enlightening, isn't it? Recall that the \kbd{init} functions (we've already
1883seen \kbd{ellinit}) store all kind of technical information which you
1884certainly don't care about, but which will be put to good use by higher level
1885functions. That's why \kbd{bnfcertify} could not be used on the output of
1886\kbd{quadclassunit}: it needs much more data.
1887
1888  To extract sensible information from such complicated objects, you must use
1889one of the many \emph{member functions} (remember: \kbd{?.} to get a complete
1890list). In this case \kbd{bnf.clgp} which extracts the class group structure.
1891This is much better. Type \kbd{\%.no} to check that this leading 27 is indeed
1892what we think it is and not some stupid technical parameter. Note that
1893\kbd{bnf.clgp.no} would work just as well, or even \kbd{bnf.no}!
1894
1895As a last check, we can request a relative equation for the Hilbert class
1896field of $\Q(\sqrt{-3299})$: type \kbd{quadhilbert(-3299)}. It is indeed of
1897degree 27 so everything fits together.
1898
1899\medskip
1900%
1901Working in real quadratic fields instead of complex ones, i.e.~with $D>0$, is
1902not very different.
1903
1904The same \kbd{quadgen} function is used to create elements. Ideals are again
1905represented by binary quadratic forms $(a,b,c)$, this time indefinite. However,
1906the Archimedean valuations of the number field start to come into play, hence
1907in fact quadratic forms with positive discriminant will be represented as a
1908quadruplet $(a,b,c,d)$ where the quadratic form itself is $ax^2+bxy+cy^2$
1909with $a$, $b$ and $c$ integral, and $d\in \R$ is a ``distance''
1910component, as defined by Shanks and Lenstra.
1911
1912To create such forms, one uses the same function as for definite ones, but
1913you can add a fourth (optional) argument to initialize the distance:
1914\kbd{q = Qfb($a$, $b$, $c$, $d$)}.
1915If the discriminant of \kbd{poldisc(q)} is negative, $d$ is silently
1916discarded. If you omit it, this component is set to \kbd{0.} (i.e.~a real
1917zero to the current precision).
1918
1919Again these forms can be multiplied, divided, powered, and they can be
1920reduced using \kbd{qfbred}. This function is in fact a succession of
1921elementary reduction steps corresponding essentially to a continued fraction
1922expansion, and a single one of these steps can be achieved by adding an
1923(optional) flag to the arguments of \kbd{qfbred}. Since handling the
1924fourth component $d$ usually involves computing expensive logarithms, the
1925same flag may be used to ignore the fourth component. Finally, it is
1926sometimes useful to operate on forms of positive discriminant without
1927performing any reduction (this is useless in the negative case), the
1928functions \kbd{qfbcompraw} and \kbd{qfbpowraw} do exactly that.
1929
1930Again, the function \kbd{qfbprimeform} gives a prime form, but the form which
1931is given corresponds to an ideal of prime norm which is usually not reduced.
1932If desired, it can be reduced using \kbd{qfbred}.
1933
1934Finally, you still have at your disposal the function \kbd{qfbclassno} which
1935gives the class number (this time \emph{guaranteed} correct),
1936\kbd{quadregulator} which gives the regulator, and the much more sophisticated
1937\kbd{quadclassunit} giving the class group's structure and its generators,
1938as well as the regulator. The \kbd{qfbclassno} and \kbd{quadregulator}
1939functions use an algorithm which is $O(\sqrt D)$, hence become very slow for
1940discriminants of more than 10 digits. \kbd{quadclassunit} can be used on a
1941much larger range.
1942
1943Let us see examples of all this at work and learn some little known number
1944theory at the same time. First of all, type
1945\bprog
1946  d = 3 * 3299; qfbclassno(d)
1947@eprog\noindent We see that the class number is 3. (We know
1948in advance that it must be divisible by 3 from the \kbd{d = -3299} case above
1949and Scholz's mirror theorem.) Let us create a form by typing
1950\bprog
1951  f = qfbred(qfbprimeform(d,2), 2)
1952@eprog\noindent
1953(the last 2 tells \kbd{qfbred} to ignore the Archimedean component). This
1954gives us a prime ideal of norm equal to 2. Is this ideal principal? Well, one
1955way to check this, which is not the most efficient but will suffice for now,
1956is to look at the complete cycle of reduced forms equivalent to \kbd{f}. Type
1957\bprog
1958 g = f; for(i=1,20, g = qfbred(g, 3); print(g))
1959@eprog\noindent
1960(this time the 3 means to do a single reduction step, still not using
1961Shanks's distance). We see that we come back to the form \kbd{f} without
1962having the principal form (starting with $\pm1$) in the cycle, so the ideal
1963corresponding to \kbd{f} is not principal.
1964
1965Since the class number is equal to 3, we know however that \kbd{f\pow 3} will
1966be a principal ideal $\alpha\Z_K$. How do we find $\alpha$? For this, type
1967\bprog
1968  f3 = qfbpowraw(f, 3)
1969@eprog
1970This computes the cube of \kbd{f}, without
1971reducing it. Hence it corresponds to an ideal of norm equal to $8=2^3$, so we
1972already know that the norm of $\alpha$ is equal to $\pm8$. We need more
1973information, and this will be given by the fourth component of the form.
1974Reduce your form until you reach the unit form (you will have to type
1975\kbd{qfbred(\%,~1)} exactly 6 times), then extract the Archimedean component,
1976say $c$. By definition of this distance, we know that
1977$${\alpha\over{\sigma(\alpha)}}=\pm e^{2c},$$
1978where $\sigma$ denotes real conjugation in our quadratic field. This can be
1979automated:
1980\bprog
1981  q = f3;
1982  while(abs(component(q,1)) != 1, print(q); q = qfbred(q, 1))
1983  c = component(q,4);
1984@eprog\noindent
1985Thus, if we type
1986\bprog
1987  a = sqrt(8 * exp(2*c))
1988  sa = 8 / a
1989@eprog\noindent
1990we know that up to sign, \kbd{a} and \kbd{sa} are
1991numerical approximations of $\alpha$ and $\sigma(\alpha)$. Of course,
1992$\alpha$ can always be chosen to be positive, and a quick numerical check
1993shows that the difference of \kbd{a} and \kbd{sa} is close to an integer, and
1994not the sum, so that in fact the norm of $\alpha$ is equal to $-8$ and the
1995numerical approximation to $\sigma(\alpha)$ is \kbd{$-$sa}. Thus we type
1996\bprog
1997  p = x^2 - round(a-sa)*x - 8
1998@eprog\noindent
1999and this is the characteristic polynomial of $\alpha$. We can check that the
2000discriminant of this polynomial is a square multiple of \kbd{d}, so $\alpha$
2001is indeed in our field. More precisely, solving for $\alpha$ and using the
2002numerical approximation that we have to resolve the sign ambiguity in the
2003square root, we get explicitly $\alpha=(15221+153\sqrt d)/2$. Note that this
2004can also be done automatically using the functions \kbd{polred} and
2005\kbd{modreverse}, as we will see later in the general number field case, or
2006by solving a system of 2 linear equations in 2 variables. (Exercise: now that
2007we have $\alpha$, check that it is indeed a generator of the ideal
2008corresponding to the form \kbd{f3}.)
2009
2010\medskip Let us now play a little with cycles. Type
2011\bprog
2012  D = 10^7 + 1
2013  quadclassunit(D)
2014@eprog\noindent
2015We get as a result a 5-component vector, which tells us that (under GRH) the
2016class number is equal to 1, and the regulator is approximately
2017equal to $2641.5$. You may certify this with
2018\bprog
2019  bnf = bnfinit(x^2 - D, 1);  \\ @com insist on finding fundamental unit
2020  bnfcertify(bnf);
2021@eprog\noindent
2022although it's a little inefficient. Indeed \kbd{bnfcertify} needs the
2023fundamental unit which is so large that \kbd{bnfinit} will have a hard time
2024computing it: it needs about $R/\log(10)\approx 1147$ digits of precision!
2025(So that it would have given up had we not inserted the flag $1$.)
2026See \kbd{bnf.fu}. On the other hand, you can try \kbd{quadunit(D,'w)}.
2027Impressive, isn't it? (You can check that its logarithm is indeed equal to
2028the regulator.)
2029
2030Now just as an example, let's assume that we want the regulator to 500
2031decimals, say. (Without cheating and computing the fundamental unit exactly
2032first!) I claim that simply from the crude approximation above, this can be
2033computed with no effort.
2034
2035This time, we want to start with the unit form. Type:
2036\bprog
2037  u = qfbred(qfbprimeform(D, 1), 2)
2038@eprog\noindent
2039We use the function \kbd{qfbred} with no distance since we want the initial
2040distance to be equal to~0. Now type  \kbd{f = qfbred(u, 1)}. This is the
2041first form encountered along the principal cycle. For the moment, keep the
2042precision low, for example the initial default precision. The distance from
2043the identity of \kbd{f} is around 4.253. Very crudely, since we want a
2044distance of $2641.5$, this should be encountered approximately at
2045$2641.5/4.253=621$ times the distance of \kbd{f}. Hence, as a first try, we
2046type \kbd{f\pow 621}. Oops, we overshot, since the distance is now $3173.02$.
2047Now we can refine our initial estimate and believe that we should be close to
2048the correct distance if we raise \kbd{f} to the power $621*2641.5/3173$ which
2049is close to $517$. Now if we compute \kbd{f\pow 517} we hit the principal
2050form right on the dot. Note that this is not a lucky accident: we will always
2051land extremely close to the correct target using this method, and usually at
2052most one reduction correction step is necessary. Of course, only the distance
2053component can tell us where we are along the cycle.
2054
2055Up to now, we have only worked to low precision. The goal was to obtain this
2056unknown integer $517$. Note that this number has absolutely no mathematical
2057significance: indeed the notion of reduction of a form with positive
2058discriminant is not well defined since there are usually many reduced forms
2059equivalent to a given form. However, when PARI makes its computations, the
2060specific order and reductions that it performs are dictated entirely by the
2061coefficients of the quadratic form itself, and not by the distance component,
2062hence the precision used has no effect.
2063
2064Hence we now start again by setting the precision to (for example) 500,
2065we retype the definition of \kbd{u} (why is this necessary?), and then
2066\kbd{qfbred(u, 1)\pow 517}. Of course we know in advance that we land on the
2067unit form, and the fourth component gives us the regulator to 500 decimal
2068places with no effort at all.
2069
2070In a similar way, we could obtain the so-called \emph{compact representation}
2071of the fundamental unit itself, or $p$-adic regulators. I leave this as
2072exercises for the interested reader.
2073
2074You can try the \kbd{quadhilbert} function on that field but, since the class
2075number is $1$, the result won't be that exciting. If you try it on our
2076preceding example ($3*3299$) it should take about 2 seconds.
2077\medskip
2078
2079Time for a coffee break?
2080
2081\section{Working in General Number Fields}
2082\subsec{Elements}
2083
2084The situation here is of course more difficult. First of all, remembering
2085what we did with elliptic curves, we need to initialize data linked to our
2086base number field, with something more serious than \kbd{quadgen}. For
2087example assume that we want to work in the number field $K$ defined by one of
2088the roots of the equation $x^4+24x^2+585x+1791=0$. This is done by typing
2089\bprog
2090  T = x^4 + 24*x^2 + 585*x + 1791
2091  nf = nfinit(T)
2092@eprog\noindent
2093We get an \kbd{nf} structure but, thanks to member functions, we do not need
2094to know anything about it. If you type \kbd{nf.pol}, you will get the
2095polynomial \kbd{T} which you just input. \kbd{nf.sign} yields the signature
2096$(r_1,r_2)$ of the field, \kbd{nf.disc} the field discriminant, \kbd{nf.zk}
2097an integral basis, etc\dots.
2098
2099The integral basis is expressed in terms of a generic root \kbd{x} of \kbd{T}
2100and we notice it's very far from being a power integral basis, which is a
2101little strange for such a small field. Hum, let's check that: \kbd{poldisc(T)}?
2102Ooops, small wonder we had such denominators, the index of the power order
2103$\Z[x]/(T)$ in the maximal order $\Z_K$ is, well,
2104\kbd{nf.index}. Note that this is also given by
2105\bprog
2106  sqrtint(poldisc(nf.pol) / nf.disc)
2107@eprog\noindent
2108Anyway, that's $3087$, we don't want to work with such a badly skewed
2109polynomial! So, we type
2110\bprog
2111  P = polred(T)
2112@eprog\noindent
2113We see from the third component that the
2114polynomial $x^4-x^3-21x^2+17x+133$ defines the same field with much smaller
2115coefficients, so type
2116\bprog
2117  A = P[3]
2118@eprog\noindent
2119The \kbd{polred} function usually gives a simpler polynomial, and also
2120sometimes some information on the existence of subfields. For example in this
2121case, the second component of \kbd{polred} tells us that the field defined by
2122$x^2-x+1=0$, i.e.~the field generated by the cube roots of unity, is a subfield
2123of~$K$. Note this is incidental information and that the list of subfields
2124found in this way is usually far from complete. To get the complete list, one
2125uses \kbd{nfsubfields} (we shall do that later on).
2126
2127  Type \kbd{poldisc(A)}, this is much better, but maybe not optimal yet
2128(the index is still $7$). Type \kbd{polredabs(A)} (the \kbd{abs} stands for
2129absolute). Since it seems that we won't get anything better, we'll stick with
2130\kbd{A} (note however that \kbd{polredabs} finds a smallest generating
2131polynomial with respect to a bizarre norm which ensures that the index will
2132be small, but not necessarily minimal). In fact, had you typed
2133\kbd{nfinit(T, 3)}, \kbd{nfinit} would first have tried to find a good
2134polynomial defining the same field (i.e.~one with small index) before
2135proceeding.
2136
2137  It's not too late, let's redefine our number field:
2138\bprog
2139  NF = nfinit(nf, 3)
2140@eprog\noindent
2141The output is a two-component vector. The first component is the new
2142\kbd{nf}: type
2143\bprog
2144  nf = NF[1];
2145@eprog\noindent
2146If you type \kbd{nf.pol}, you notice that \kbd{gp} indeed replaced your bad
2147polynomial \kbd{T} by a much better one, which happens to be \kbd{A}. (Small
2148wonder, \kbd{nfinit} internally called \kbd{polredabs}.) The second component
2149enables you to switch conveniently to our new polynomial.
2150
2151Namely, call $\theta$ a root of our initial polynomial \kbd{T}, and $\alpha$
2152a root of the one that \kbd{polred} has found, namely \kbd{A}. These are
2153algebraic numbers, and as already mentioned are represented as polmods. For
2154example, in our special case $\theta$ and $\alpha$ are equal to the polmods
2155\bprog
2156  THETA = Mod(x, x^4 + 24*x^2 + 585*x + 1791)
2157  ALPHA = Mod(x, x^4 - x^3 - 21*x^2 + 17*x + 133)
2158@eprog\noindent respectively. Here we are considering only the algebraic
2159aspect, and hence ignore completely \emph{which} root $\theta$ or $\alpha$ is
2160chosen.
2161
2162Now you may have a number of elements of your number field which are
2163expressed as polmods with respect to your old polynomial, i.e.~as polynomials
2164in $\theta$. Since we are now going to work with $\alpha$ instead, it is
2165necessary to convert these numbers to a representation using $\alpha$. This
2166is what the second component of \kbd{NF} is for: type \kbd{C = NF[2]}, you get
2167\bprog
2168  Mod(-10/7*x^3 + 43/7*x^2 + 73/7*x - 64, x^4 - x^3 - 21*x^2 + 17*x + 133)
2169@eprog\noindent
2170meaning that $\theta =
2171-\dfrac{10}{7}\alpha^3+\dfrac{43}{7}\alpha^2+\dfrac{73}{7}\alpha-64$, and hence the conversion from a
2172polynomial in $\theta$ to one in $\alpha$ is easy, using \kbd{subst}. (We
2173could get this polynomial from \kbd{polred} as well, try \kbd{polred(T, 2)}.)
2174If we want the reverse, i.e.~to go back from a representation in $\alpha$ to
2175a representation in $\theta$, we use the function \kbd{modreverse} on this
2176polmod \kbd{C}. Try it. The result has a big denominator (1029) essentially
2177because our initial polynomial \kbd{T} was so bad. By the way, to get that
21781029,
2179you should type \kbd{denominator(content(C))}. Trying \kbd{denominator} by
2180itself would not work since the denominator of a polynomial is defined to be
21811 (and its numerator is itself). The reason for this is that we think of a
2182polynomial as a special case of a rational function.\smallskip
2183
2184From now on, we forget about \kbd{T}, and use only the polynomial
2185\kbd{A} defining $\alpha$, and the components of the vector \kbd{nf} which
2186gives information on our number field $K$. Type
2187\bprog
2188  u = Mod(x^3 - 5*x^2 - 8*x + 56, A) / 7
2189@eprog\noindent
2190This is an element in $K$. There are three equivalent
2191representations for number field elements: polmod, polynomial, and column
2192vector giving a decomposition in the integral basis \kbd{nf.zk} (\emph{not} on
2193the power basis $(1,x,x^2,\dots)$). All three are equally valid when the
2194number field is understood (is given as first argument to the function).
2195You will be able to use any one of them as long as the function you call
2196requires an \kbd{nf} argument as well. However, most PARI functions will
2197return elements as column vectors. It's an important feature of number
2198theoretic functions that, although they may have a preferred format for
2199input, they will accept a wealth of other different formats. We already saw
2200this for \kbd{nfinit} which accepts either a polynomial or an \kbd{nf}. It
2201will be true for ideals, congruence subgroups, etc.
2202
2203  Let's stick with elements for the time being. How does one go from one
2204representation to the other? Between polynomials and polmods, it's easy:
2205\kbd{lift} and \kbd{Mod} will do the job. Next, from polmods/polynomials to
2206column vectors: type \kbd{v = nfalgtobasis(nf, u)}. So $\kbd{u} = \alpha^3-
2207\alpha^2 - \alpha + 8$, right? Wrong! The coordinates of \kbd{u} are given
2208with respect to the \emph{integral basis}, not the power basis
2209$(1,\alpha,\alpha^2,\alpha^3)$ (and they don't coincide, type \kbd{nf.zk} if
2210you forgot what the integral basis looked like). As a polynomial in $\alpha$,
2211we simply have $\kbd{u} = {1\over7}(\alpha^3 - 5\alpha^2-8\alpha+56)$, which
2212is trivially deduced from the original polmod representation!
2213
2214Of course \kbd{v = nfalgtobasis(nf, lift(u))} would work equally well. Indeed
2215we don't need the polmod information since \kbd{nf} already provides the
2216defining polynomial. To go back to polmod representation, use
2217\kbd{nfbasistoalg(nf, v)}. Notice that \kbd{u} is an algebraic integer since
2218\kbd{v} has integer coordinates (try \kbd{denominator(v) == 1}, which is of
2219course overkill here, but not so in a program).
2220
2221Let's try this out. We may for instance compute \kbd{u\pow 3}. Try it. Or, we
2222can type \kbd{1/u}. Better yet, if we want to know the norm from $K$ to $\Q$
2223of \kbd{u}, we type \kbd{norm(u)} (what else?); \kbd{trace(u)} works as well.
2224Notice that none of this would work on polynomials or column vectors since
2225you don't have the opportunity to supply \kbd{nf}! But we could use
2226\kbd{nfeltpow(nf,u,3)}, \kbd{nfeltdiv(nf,1,u)} (or \kbd{nfeltpow(nf,u,-1)})
2227which would work whatever representation was chosen. Of course,
2228there is also an \kbd{nfeltnorm} function (and \kbd{nfelttrace} as well).
2229You can also consider $(u)$ as a principal ideal, and just type
2230\bprog
2231  idealnorm(nf,u)
2232@eprog\noindent
2233Of course, in this way, we lose the \emph{sign} information. We will talk
2234about ideals later on.
2235
2236  If we want all the symmetric functions of \kbd{u} and not only the norm, we
2237type \kbd{charpoly(u)}. Note that this gives the characteristic polynomial of
2238\kbd{u}, and not in general the minimal polynomial. We have \kbd{minpoly(u)}
2239for this.
2240\misctitle{Exercise} Find a simpler expression for \kbd{u}. \smallskip
2241
2242  Now let's work on the field itself. The \kbd{nfinit} command already
2243gave us some information. The field is totally complex (its signature
2244\kbd{nf.sign} is $[0,2]$), its discriminant \kbd{nf.disc} is $18981$ and
2245\kbd{nf.zk} is an integral basis. The Galois group of its Galois closure
2246can be obtained by typing \kbd{polgalois(A)}. The answer (\kbd{[8,-1,1]}, or
2247\kbd{[8,-1,1,"D(4)"]} if the \kbd{galdata} package is installed) shows that
2248it is equal to $D_4$, the dihedral group with 8 elements, i.e.~the group of
2249symmetries of a square.
2250
2251This implies that the field is ``partially Galois'', i.e.~that there exists
2252at least one nontrivial field isomorphism which fixes $K$, exactly one in
2253this case. Type \kbd{nfgaloisconj(nf)}. The result tells us that, apart from
2254the trivial automorphism, the map
2255$$\alpha \mapsto {1\over7}(-\alpha^3+5\alpha^2+\alpha-49)$$
2256is the only field automorphism.
2257\bprog
2258  nfgaloisconj(nf);
2259  s = Mod(%[2], A)
2260  charpoly(s)
2261@eprog\noindent
2262and we obtain \kbd{A} once again.
2263
2264Let us check that \kbd{s} is of order 2: \kbd{subst(lift(s), x, s)}. It is. We
2265may express it as a matrix:
2266\bprog
2267  w = Vec( matid(4) ) \\@com canonical basis
2268  v = vector(#w, i, nfgaloisapply(nf, s, w[i]))
2269  M = Mat(v)
2270@eprog\noindent
2271The vector \kbd{v} contains the images of the integral basis elements (as
2272column vectors). The last statement concatenates them into a square matrix.
2273So, \kbd{M} gives the action of \kbd{s} on the integral basis. Let's check
2274\kbd{M\pow2}. That's the identity all right.
2275
2276The fixed field of this automorphism is going to be the only nontrivial
2277subfield of $K$. I seem to recall that \kbd{polred} told us this was the
2278third cyclotomic field. Let's check this: type \kbd{nfsubfields(nf)}. Indeed,
2279there's a quadratic subfield, but it's given by \kbd{T = x\pow 2 + 22*x + 133
2280} and I don't recognize it. But \kbd{nfisisom(T, polcyclo(3))} indeed tells
2281us that the fields $\Q[x]/(T)$ and $\Q[x]/(x^2+x+1)$ are isomorphic.
2282(In fact, \kbd{polred(T)} would tell us the same, but does not correspond to
2283a foolproof test: \kbd{polred} could have returned some other polynomials.)
2284
2285We may also check that \kbd{k = matker(M-1)} is two-dimensional, then \kbd{z
2286= nfbasistoalg(nf, k[,2])} generates the quadratic subfield. Notice that 1,
2287\kbd{z} and \kbd{u} are $\Q$-linearly dependent, and in fact $\Z$-linearly as
2288well. Exercise: how would you check these two assertions in general? (Answer:
2289\kbd{concat}, then respectively \kbd{matrank} or \kbd{matkerint} (or
2290\kbd{qflll})). \kbd{z = charpoly(z)}, \kbd{z = gcd(z,z')} and \kbd{polred(z)}
2291tell us that we found back the same subfield again (as we ought to!).
2292
2293Final check: type \kbd{nfrootsof1(nf)}. Again we find that $K$ contains
2294a cube root of unity, since the torsion subgroup of its unit group
2295has order 6. The given generator happens to be equal to \kbd{u}.
2296
2297\misctitle{Additional comment} (you are no longer supposed to skip this,
2298but do as you wish):
2299
2300Before working with ideals, let us note one more thing. The main part of the
2301work of \kbd{polred} or \kbd{nfinit}$(T)$ is to compute an integral basis,
2302i.e.~a $\Z$-basis of the maximal order $\Z_K$ of $K$. This implies factoring
2303the discriminant of the polynomial $T$, which is often not feasible. The
2304situation may be improved in many ways:
2305
23061) First, it is often the case that our number field is of quite a special
2307type. For example, one may know in advance some prime divisors of the
2308discriminant. Hence we can ``help'' PARI by giving it that information. More
2309precisely, we can use the function \kbd{addprimes} to inform PARI to keep on
2310eye for these prime numbers. Do it only for big primes ! (Say, larger than
2311\kbd{primelimit}.)
2312
23132) The second way in which the situation may be improved is that often we do
2314not need the complete information on the maximal order, but only require that
2315the order be $p$-maximal for a certain number of primes $p$ --- but then, we
2316may not be able to use functions which require a genuine \kbd{nf}. The
2317function \kbd{nfbasis} specifically computes the integral basis and is not
2318much quicker than \kbd{nfinit} so is not very useful in its standard use. But
2319we can optionally provide a list of primes: this returns a basis of an order
2320which is $p$-maximal at the given primes. For example coming back to our
2321initial polynomial $T$, the discriminant of the polynomial is
2322$3^7\cdot7^6\cdot19\cdot37$. If we only want a $7$-maximal order, we simply
2323type
2324\bprog
2325  nfbasis([T, [7]])
2326@eprog\noindent Of course, \kbd{nfbasis([T, [2,3,5,7]])} would return an
2327order which is maximal at $2,3,5,7$. A variant offers a nice generalization:
2328\bprog
2329  nfbasis([T, 10^5])
2330@eprog\noindent will return an order which is maximal at all primes less than
2331$10^5$.
2332
23333) Building on the previous points, \emph{if} the field discriminant is
2334$y$-smooth (never mind the polynomial discriminant), up to a few big primes
2335known to \kbd{addprimes}, then \kbd{bas = nfbasis(T, y)} returns a basis for
2336the maximal order! We can then input the resulting basis to \kbd{nfinit}, as
2337\kbd{nfinit([T, bas])}. Better: the $[T, \var{listP}]$ format can be
2338directly used with nfinit, where \var{listP} specifies a finite list of
2339primes in one of the above ways (explicit list or primes up to some bound),
2340and the result can be unconditionally certified, independently of the
2341\var{listP} parameter:
2342\bprog
2343  T = polcompositum(x^7-2, polcyclo(5))[1];
2344  K = nfinit( [T, [2,5,7]] );
2345  nfcertify(K)
2346@eprog\noindent The output is a list of composite integers whose complete
2347factorization must be computed in order to certify the result (which may be
2348very hard, hence is not done on the spot). When the list is empty, as here,
2349the result is unconditional
2350
2351\kbd{nfcertify(nf)}
2352
2353\subsec{Ideals}
2354
2355We now want to work with ideals and not only
2356with elements. An ideal can be represented in many different ways. First, an
2357element of the field (in any of the various guises seen above) will be
2358considered as a principal ideal. Then the standard representation is a
2359square matrix giving the Hermite Normal Form (HNF) of a $\Z$-basis of the
2360ideal expressed on the integral basis \kbd{nf.zk}. Standard means that most
2361ideal related functions will use this representation for their output.
2362
2363Prime ideals can be represented in a special form as well (see
2364\kbd{idealprimedec}) and all ideal-related functions will accept them. On the
2365other hand, the function \kbd{idealtwoelt} can be used to find a two-element
2366$\Z_K$-basis of a given ideal (as $a\Z_K + b\Z_K$, where $a$ and $b$ belong
2367to $K$), but this is \emph{not} a valid representation for an ideal under
2368\kbd{gp}, and most functions will choke on it (or worse, take it for
2369something else and output a meaningless result). To be able to use such an
2370ideal, you will first have to convert it to HNF form.
2371
2372Whereas it's very easy to go to HNF form (use \kbd{idealhnf(nf,id)} for valid
2373ideals, or \kbd{idealhnf(nf,a,b)} for a two-element representation as above),
2374it's a much more complicated problem to check whether an ideal is principal
2375and find a generator. In fact an \kbd{nf} does not contain enough data for
2376this particular task. We'll need a Buchmann Number Field, or \kbd{bnf}, for
2377that. In particular, we need the class group and fundamental units, at least
2378in some approximate form. More on this later (which will trivialize the end
2379of the present section).\smallskip
2380
2381Let us keep our number field $K$ as above and its \kbd{nf} structure. Type
2382\bprog
2383  P = idealprimedec(nf,7)
2384@eprog\noindent
2385This gives the decomposition of the prime number 7 into prime ideals. We have
2386chosen 7 because it divides \kbd{nf.index} (in fact, is equal to it), hence
2387is the most difficult case to treat.
2388
2389The result is a vector with 4 components, showing that 7 is totally split in
2390the field $K$ into prime ideals of norm 7 (you can check:
2391\kbd{idealnorm(nf,P[1])}). Let us take one of these ideals, say the first, so
2392type
2393\bprog
2394  pr = P[1]
2395@eprog
2396We obtain its inertia and residue degree as \kbd{pr.e}
2397and \kbd{pr.f}, and its two generators as \kbd{pr.gen}. One of them is
2398$\kbd{pr.p} = 7$, and the other is guaranteed to have valuation $1$ at
2399\kbd{pr}. What is the Hermite Normal Form of this ideal? No problem:
2400\bprog
2401  idealhnf(nf,pr)
2402@eprog\noindent and we have the desired HNF. Let's now perform ideal
2403operations. For example type
2404\bprog
2405  idealmul(nf, pr, idealmul(nf, pr,pr))
2406@eprog\noindent or more simply
2407\bprog
2408  pr3 = idealpow(nf, pr,3)
2409@eprog\noindent
2410to get the cube of the ideal \kbd{pr}. Since the norm of this ideal is equal
2411to $343=7^3$, to check that it is really the cube of \kbd{pr} and not of
2412other ideals above 7, we can type
2413\bprog
2414  for(i=1, #P, print( idealval(nf, pr3, P[i]) ))
2415@eprog\noindent
2416and we see that the valuation at \kbd{pr} is equal to 3, while the others are
2417equal to zero. We could see this as well from \kbd{idealfactor(nf, pr3)}.
2418
2419Let us now work in the class group ``by hand'' (we shall see simpler ways
2420later). We shall work with \emph{extended ideals}: an extended ideal is a pair
2421\kbd{[A, t]}, where $A$ is an ordinary ideal as above, and $t$ a number field
2422element; this pair represents the ideal $(t) A$.
2423\bprog
2424  id3 = [pr3, 1]
2425  r0 = idealred(nf, id3)
2426@eprog\noindent
2427The input \kbd{id3} is an extended ideal: pr3 together with 1 (trivial
2428factorization). The new extended ideal \kbd{r0} is equal to the old one,
2429in the sense that the products $(t)A$ are the same. It contains a ``reduced''
2430ideal equivalent to \kbd{pr3} (modulo the principal ideals), and a generator
2431of the principal ideal that was factored out.
2432
2433Now, just for fun type
2434\bprog
2435  r = r0; for(i=1,3, r = idealred(nf,r, [1,5]); print(r))
2436@eprog\noindent
2437The ideals in the third \kbd{r} and initial \kbd{r0} are equal, say
2438$(t) A = (t_0) A$: this means we
2439have found a unit $(t_0/t)$ in our field, and it is easy to extract this unit
2440given the extended component:
2441\bprog
2442  t0 = r0[2]; t = r[2];
2443  u = nfeltdiv(nf, t0, t)
2444  u = nfbasistoalg(nf, u)
2445@eprog\noindent The last line recovers the unit as an algebraic number.
2446Type
2447\bprog
2448  ch = charpoly(u)
2449@eprog\noindent
2450and we obtain the characteristic polynomial \kbd{ch} of $u$ again. (Whose
2451constant coefficient is $1$, hence $u$ is indeed a unit.)
2452
2453There is of course no reason for $u$ to be a fundamental unit. Let us see if
2454it is a square. Type
2455\bprog
2456  F = factor(subst(ch, x, x^2))
2457@eprog\noindent
2458We see that \kbd{ch(x\pow2)} is a product of 2 polynomials of degree 4, hence
2459$u$ is a square. (Why?) We now want to find its square root. A simple method
2460is as follows:
2461\bprog
2462  NF = subst(nf,x,y);
2463  r = F[1,1] % (x^2 - nfbasistoalg(NF, u))
2464@eprog\noindent
2465to find the remainder of the characteristic polynomial of \kbd{u2} divided by
2466\kbd{x\pow 2 - $u$}. This is a polynomial of degree 1 in \kbd{x}, with polmod
2467coefficients, and we know that \kbd{u2}, being a root of both polynomials,
2468is the root of \kbd{r}, hence can be obtained by typing
2469\bprog
2470  u2 = -polcoef(r,0) / polcoef(r,1)
2471@eprog\noindent There is an important technicality in the above: why did we
2472need to substitute \kbd{NF} to \kbd{nf}? The reason is that data related to
2473\kbd{nf} is given in terms of the variable \kbd{x}, seen modulo \kbd{nf.pol};
2474but we need \kbd{x} as a free variable for our polynomial divisions. Hence
2475the substitution of \kbd{x} by \kbd{y} in our \kbd{nf} data.
2476
2477The most natural method is to try directly
2478\bprog
2479  nffactor(nf, y^2 - u)
2480@eprog\noindent
2481Except that this won't work for the same technical reason as above: the main
2482variable of the polynomial to be factored must have \emph{higher} priority
2483than the number field variable. This won't be possible here since \kbd{nf}
2484was defined using the variable \kbd{x} which has the highest possible
2485priority. So we need to substitute variables around:
2486\bprog
2487  nffactor(NF, x^2 - nfbasistoalg(NF, subst(lift(u),x,y)))
2488@eprog\noindent
2489(Of course, with better planning, we would just have defined \kbd{nf} in
2490terms of the \kbd{'y} variable, to avoid all these substitutions.)
2491\smallskip
2492
2493A much simpler approach is to consider the above as instances of a
2494\emph{discrete logarithm} problem, where we want to express some elements an
2495abelian group (of finite type) in terms of explicitly given generators, and
2496transfer all computations from abstract groups like $\text{Cl}(K)$ and
2497$\Z_K^*$ to products of simpler groups like $\Z^n$ or $\Z/d\Z$. We shall do
2498exactly that in the next section.
2499
2500Before that, let us mention another famous (but in fact, simpler)
2501\emph{discrete logarithm} problem, namely the one attached to the
2502invertible elements modulo an ideal: $(\Z_K / I)^*$. Just use \kbd{idealstar}
2503(this is an \kbd{init} function) and \kbd{ideallog}.
2504
2505Many more functions on ideals are available. We mention here the complete
2506list, referring to Chapter 3 for detailed explanations:
2507
2508\kbd{idealadd}, \kbd{idealaddtoone}, \kbd{idealappr}, \kbd{idealchinese},
2509\kbd{idealcoprime}, \kbd{idealdiv}, \kbd{idealfactor}, \kbd{idealhnf},
2510\kbd{idealintersect}, \kbd{idealinv}, \kbd{ideallist}, \kbd{ideallog},
2511\kbd{idealmin}, \kbd{idealmul}, \kbd{idealnorm}, \kbd{idealpow},
2512\kbd{idealprimedec}, \kbd{idealred},
2513\kbd{idealstar}, \kbd{idealtwoelt}, \kbd{idealval},
2514\kbd{nfisideal}.
2515
2516We suggest you play with these to get a feel for the algebraic number theory
2517package. Remember that when a matrix (usually in HNF) is output, it is always
2518a $\Z$-basis of the result expressed on the \emph{integral basis} \kbd{nf.zk}
2519of the number field, which is usually \emph{not} a power basis.
2520
2521\subsec{Class groups and units, \kbd{bnf}}
2522
2523Apart from the above functions you have at your disposal the powerful
2524function \kbd{bnfinit}, which initializes a \kbd{bnf} structure, i.e.~a
2525number field with all its invariants (including class group and units), and
2526enough technical data to solve discrete logarithm problems in the class and
2527unit groups.
2528
2529First type \kbd{setrand(1)}: this resets the random seed (to make sure we
2530and you get the exact same results). Now type
2531\bprog
2532  bnf = bnfinit(NF);
2533@eprog\noindent
2534where \kbd{NF} is the same number field as before. You do not want to see the
2535output clutter a number of screens so don't forget the semi-colon. (Well if
2536you insist, it is about three screenful in this case, but may require several
2537Megabytes for larger degrees.) Note that \kbd{NF} is now expressed in terms
2538of the variable \kbd{y}, to avoid later problems with variable priorities.
2539
2540A word of warning: both the \kbd{bnf} and all results obtained from it are
2541\emph{conditional} on a Riemann Hypothesis at this point; the \kbd{bnf} must
2542be certified before the following statements become actual theorems.
2543\smallskip
2544
2545Member functions are still available for \kbd{bnf} structures. So, let's try
2546them: \kbd{bnf.pol} gives \kbd{A}, \kbd{bnf.sign}, \kbd{bnf.disc},
2547\kbd{bnf.zk}, ok nothing really exciting. In fact, an \kbd{nf} is included
2548in the \kbd{bnf} structure: \kbd{bnf.nf} should be identical to
2549\kbd{NF}. Thus, all functions which took an \kbd{nf} as first argument, will
2550equally accept a \kbd{bnf} (and a \kbd{bnr} as well which contains even more
2551data).
2552
2553Anyway, new members are available now: \kbd{bnf.no} tells us the class number
2554is 4, \kbd{bnf.cyc} that it is cyclic (of order 4 but that we already knew),
2555\kbd{bnf.gen} that it is generated by the ideal \kbd{g = bnf.gen[1]}. If you
2556\kbd{idealfactor(bnf, g)}, you recognize \kbd{P[2]}. (You may also play in
2557the other direction with \kbd{idealhnf}.) The regulator \kbd{bnf.reg} is
2558equal to $3.794\dots$. \kbd{bnf.tu} tells us that the roots of unity in $K$
2559are exactly the sixth roots of 1 and gives a primitive root $\zeta =
2560{1\over7}(\alpha^3 - 5\alpha^2 - 8\alpha + 56)$, which we have seen already.
2561Finally \kbd{bnf.fu} gives us a fundamental unit $\epsilon =
2562{1\over7}(\alpha^3 - 5\alpha^2 - \alpha + 28)$, which must be linked to the
2563units \kbd{u} and \kbd{u2} found above since the unit rank is~1. To find
2564these relations, type
2565\bprog
2566  bnfisunit(bnf, u)
2567  bnfisunit(bnf, u2)
2568@eprog\noindent
2569Lo and behold, \kbd{u = $\zeta^2\epsilon^2$} and \kbd{u2 =
2570$\zeta^{4}\epsilon^1$}.
2571
2572\misctitle{Note} Since the fundamental unit obtained depends on the random
2573seed, you could have obtained another unit than $\epsilon$, had you not reset
2574the random seed before the computation. This was the purpose of the initial
2575\kbd{setrand} instruction, which was otherwise unnecessary.\medskip
2576
2577We are now ready to perform operations in the class group. First and
2578foremost, let us certify the result: type \kbd{bnfcertify(bnf)}. The
2579output is \kbd{1} if all went well; in fact no other output is possible,
2580whether the input is correct or not, but you can get an error message (or in
2581exceedingly rare cases an infinite loop) if it is incorrect.
2582
2583It means that we now know the class group and fundamental units
2584unconditionally (no more GRH then!). In this case, the certification process
2585takes a very short time, and you might wonder why it is not built in as a
2586final check in the \kbd{bnfinit} function. The answer is that as the
2587regulator gets bigger this process gets increasingly difficult, and becomes
2588soon impractical, while \kbd{bnfinit} still happily spits out results. So it
2589makes sense to dissociate the two: you can always check afterwards, if the
2590result is interesting enough. Looking at the tentative regulator, you know in
2591advance whether the certification can possibly succeed: if \kbd{bnf.reg} is
2592large, don't waste your time.
2593
2594
2595Now that we feel safe about the \kbd{bnf} output, let's do some real work.
2596For example, let us take again our prime ideal \kbd{pr} above 7. Since we
2597know that the class group is of order 4, we deduce that \kbd{pr} raised to
2598the fourth power must be principal. Type
2599\bprog
2600  pr4 = idealpow(nf, pr, 4)
2601  v = bnfisprincipal(bnf, pr4)
2602@eprog\noindent
2603The first component gives the factorization of the ideal in the class group.
2604Here, \kbd{[0]} means that it is up to equivalence equal to the 0-th power of
2605the generator \kbd{g} given in \kbd{bnf.gen}, in other words that it is a
2606principal ideal. The second component gives us the algebraic number $\alpha$
2607such that $\kbd{pr4}=\alpha\Z_K$, $\alpha$ being as usual expressed on the
2608integral basis. Type \kbd{alpha = v[2]}. Let us check that the result is
2609correct: first, type \kbd{idealnorm(bnf, alpha)}. (Note that we can use a
2610\kbd{bnf} with all the \kbd{nf} functions; but not the other way round, of
2611course.) It is indeed equal to $7^4 = 2401$, which is the norm of \kbd{pr4}.
2612This is only a first check. The complete check is obtained by computing the
2613HNF of the principal ideal generated by \kbd{alpha}. To do this, type
2614\kbd{idealhnf(bnf, alpha) == pr4}.
2615
2616Since the equality is true, \kbd{alpha} is correct (not that there was any
2617doubt!). But \kbd{bnfisprincipal} also gives us information for nonprincipal
2618ideals. For example, type
2619\bprog
2620  v = bnfisprincipal(bnf, pr)
2621@eprog\noindent
2622The component \kbd{v[1]} is now equal to \kbd{[3]}, and tells us that \kbd{pr}
2623is ideal-equivalent to the cube of the generator \kbd{g}. Of course we
2624already knew this since the product of \kbd{P[3]} and \kbd{P[4]} was
2625principal (generated by \kbd{al}), as well as the product of all the
2626\kbd{P[$i$]} (generated by 7), and we noticed that \kbd{P[2]} was equal
2627to \kbd{g}, which has order 4. The second component \kbd{v[2]} gives us
2628$\alpha$ on the integral basis such that $\kbd{pr}=\alpha \kbd{g}^3$. Note
2629that if you \emph{don't} want this $\alpha$, which may be large and whose
2630computation may take some time, you can just add the flag $1$ (see the online
2631help) to the arguments of \kbd{bnfisprincipal}, so that it only returns the
2632position of \kbd{pr} in the class group. \smallskip
2633
2634\subsec{Class field theory, \kbd{bnr}}
2635
2636We now survey quickly some class field theoretic routines. We must first
2637initialize a Buchmann Number Ray, or \kbd{bnr}, structure, attached to a
2638\kbd{bnf} base field and a modulus. Let's keep $K$, and try a finite modulus
2639${\goth f} = 7\Z_K$. (See the manual for how to include infinite places in
2640the modulus.) Since $K$ will now become a base field over which we want to
2641build relative extensions, the attached \kbd{bnf} needs to have variables
2642of lower priority than the polynomials defining the extensions. Fortunately,
2643we already took care that, but it would have been easy to deal with the
2644problem now (as easy as \kbd{bnf = subst(bnf, x, y)}). Then type
2645\bprog
2646  bnr = bnrinit(bnf, 7, 1);
2647  bnr.cyc
2648@eprog\noindent
2649tells us the ray class group modulo ${\goth f}$ is isomorphic to
2650$\Z/24\Z \times \Z/6\Z \times \Z/2\Z $. The attached generators are
2651\kbd{bnr.gen}.  Just as a \kbd{bnf} contained an \kbd{nf}, a \kbd{bnr}
2652contains a \kbd{bnf} (hence an \kbd{nf}), namely \kbd{bnr.bnf}. Here
2653\kbd{bnr.clgp} refers to the ray class group, while \kbd{bnr.bnf.clgp} refers
2654to the class group.
2655\bprog
2656  rnfkummer(bnr,, 2)
2657  rnfkummer(bnr,, 3)
2658@eprog\noindent
2659outputs defining polynomials for the $2$ abelian extensions of $K$ of degree
2660$2$ (resp.~$3$), whose conductor is exactly equal to ${\goth f}$ (the modulus
2661used to define \kbd{bnr}). (In the current implementation of \kbd{rnfkummer},
2662these degrees must be \emph{prime}.) What about other extensions of degree
2663$2$ for instance?
2664\bprog
2665  L0= subgrouplist(bnr, [2])
2666  L = subgrouplist(bnr, [2], 1)
2667@eprog\noindent
2668\kbd{L0}, resp.~\kbd{L} is the list of those subgroups of the full ray class
2669group mod $7$, whose index is $2$, and whose conductor is $7$,
2670resp.~arbitrary. (Subgroups are given by a matrix of generators, in terms of
2671\kbd{bnr.gen}.) \kbd{L0} has $2$ elements, attached to the $2$ extensions
2672we already know. \kbd{L} has $7$ elements, the $2$ from \kbd{L0}, and
2673$5$ new ones:
2674\bprog
2675  L1 = setminus(Set(L), Set(L0))
2676@eprog\noindent
2677The conductors are
2678\bprog
2679  vector(#L1, i, bnrconductor(bnr, L1[i]))
2680@eprog\noindent
2681among which one sees the identity matrix, i.e. the trivial ideal. (It is
2682\kbd{L1[3]} in my session, maybe not in yours. Take the right one!) Indeed,
2683the class group was cyclic of order $4$ and there exists a unique unramified
2684quadratic extension. We could find it directly by recomputing a \kbd{bnr}
2685with trivial conductor, but we can also use
2686\bprog
2687  rnfkummer(bnr, L1[3]) \\ @com pick the subgroup with trivial conductor!
2688@eprog\noindent
2689directly which outputs the (unique by Takagi's theorem) class field
2690attached to the subgroup \kbd{L1[3]}. In fact, it is of the form
2691$K(\sqrt{-\epsilon})$. We can check this directly:
2692\bprog
2693  rnfconductor(bnf, x^2 + bnf.fu[1])
2694@eprog\noindent
2695
2696\subsec{Galois theory over $\Q$}
2697
2698PARI includes a nearly complete set of routines to compute with Galois
2699extensions of $\Q$. We start with a very simple example.
2700Let $\zeta$ a $8$th-root of unity and $K=\Q(\zeta)$. The minimal
2701polynomial of $\zeta$ is the 8$th$ cyclotomic polynomial, namely
2702\kbd{polcyclo(8)} (=$x^4+1$).
2703
2704We issue the command
2705\bprog
2706G = galoisinit(x^4 + 1);
2707@eprog\noindent
2708to compute $G=\text{Gal}(K/\Q)$.  The command \kbd{galoisisabelian(G)}
2709returns \kbd{[2,0;0,2]} so $G$ is an abelian group, isomorphic to $(\Z/2\Z)^2$, generated by
2710$\sigma$=\kbd{G.gen[1]} and $\tau$=\kbd{G.gen[2]}. These automorphisms are
2711given by their actions on the roots of $x^4+1$ in a suitable $p$-adic
2712extension. To get the explicit action on $\zeta$, we use
2713\kbd{galoispermtopol(G,G.gen[i])} for $i=1,2$ and get $\sigma(\zeta)=-\zeta$
2714and $\tau(\zeta)=\zeta^3$. The last nontrivial automorphism is
2715$\sigma\tau$=\kbd{G.gen[1]*G.gen[2]} and we have
2716$\sigma\tau(\zeta)=-\zeta^3$. (At least in my version, yours may return a
2717different set of generators, rename accordingly.)
2718
2719We compute the fixed field of $K$ by the subgroup generated by $\tau$ with
2720\bprog
2721galoisfixedfield(G, G.gen[2], 1)
2722@eprog\noindent
2723and get $x^2 + 2$. Now we want the factorization of $x^4+1$ over that
2724subfield. Of course, we could use \kbd{nffactor}, but here we have a much
2725simpler option: \kbd{galoisfixedfield(G, G.gen[1], 2)} outputs
2726\bprog
2727[x^2 + 2, Mod(x^3 + x, x^4 + 1), [x^2 - y*x - 1, x^2 + y*x - 1]]
2728@eprog\noindent
2729which means that
2730$x^4+1=(x^2-\alpha\*x-1)(x^2+\alpha\*x-1)$ where $\alpha$ is a root of $x^2+2$,
2731and more precisely, $\alpha=\zeta^3+\zeta$. So we recover the well-known
2732factorization:
2733
2734$$x^4+1=(x^2-\sqrt{-2}\*x-1)(x^2+\sqrt{-2}\*x-1)$$
2735
2736For our second example, let us take the field $K$ defined by the polynomial
2737\bprog
2738  P = x^18 - 3*x^15 + 115*x^12 + 104*x^9 + 511*x^6 + 196*x^3 + 343;
2739  G = galoisinit(P);
2740@eprog\noindent Since \kbd{galoisinit} succeeds,
2741the extension $K/\Q$ is Galois. This time \kbd{galoisisabelian(G)} returns
2742$0$, so the extension is not abelian; however we can still put a name on the
2743underlying abstract group. Use \kbd{galoisidentify(G)}, which returns $[18,
27443]$. By looking at the GAP4 classification we find that $[18, 3]$ is
2745$S_3\times\Z/3\Z$. This time, the subgroups of $G$ are not obvious,
2746fortunately we can ask PARI : \kbd{galoissubgroups(G)}.
2747
2748Let us look for a polynomial $Q$ with the property that $K$ is the splitting
2749field of $Q(x^2)$. For that purpose, let us take $\sigma$=\kbd{G.gen[3]}.  We
2750check that \kbd{G.gen[3]\^{}2} is the identity, so $\sigma$ is of order $2$. We now compute the fixed field $K^\sigma$ and the relative factorization of $P$ over
2751$K^\sigma$:
2752\bprog
2753F = galoisfixedfield(G, G.gen[3], 2);
2754@eprog\noindent
2755So $K$ is a quadratic extension of $K^\sigma$ defined by the polynomial
2756\kbd{R=F[3][1]}. It is well-known that $K$ is also defined by $x^2-D$,
2757where $D$ is the discriminant of $R$ (over $K^\sigma$).
2758To compute $D$, we issue:
2759\bprog
2760D = poldisc(F[3][1]) * Mod(1,subst(F[1],x,y));
2761@eprog\noindent
2762Note that since \kbd{y} in \kbd{F[3][1]} denotes a root of \kbd{F[1]}, we
2763have to use \kbd{subst(,x,y)}.  Now we hope that $D$ generates $K^\sigma$ and
2764compute \kbd{Q=charpoly(D)}. We check that $Q=x^9+270\*x^6+12393\*x^3+19683$ is
2765irreducible with \kbd{polisirreducible(Q)}. (Were it not the case, we would
2766multiply $D$ by a random square.) So $D$ is a generator of $K^\sigma$ and
2767$\sqrt{D}$ is a generator of $K$. The result is that $K$ is the splitting
2768field of $Q(x^2)$.  We can check that with
2769\kbd{nfisisom(P,subst(Q,x,x\^{}2))}.
2770
2771\section{Working with associative algebras}
2772
2773Beyond the realm of number fields, we can perform operations with more
2774general associative algebras, that need not even be commutative! Of course
2775things become more complicated. We have two different structures: the first
2776one allows us to manipulate any associative algebra that is
2777finite-dimensional over a prime field ($\Q$ or $\F_p$ for some prime~$p$),
2778and the second one is dedicated to central simple algebras over number
2779fields, which are some nice algebras that behave a lot like number fields.
2780Like in other parts of~\kbd{gp}, every function that has to do with
2781associative algebras begins with the same prefix:~\kbd{alg}.
2782
2783\subsec{Arbitrary associative algebras}
2784
2785In order to create an associative algebra, you need to tell~\kbd{gp} how to
2786multiply elements. We do this by providing a \emph{multiplication table} for the
2787algebra, in the form of the matrix of left multiplication by each basis element,
2788and use the function~\kbd{algtableinit}.
2789
2790For instance, let us work in~$\F_3[x]/(x^2)$. Of course, we
2791could use polmods of intmods to represent elements in this algebra, but let's
2792introduce the general mechanism  with this simple example!
2793This algebra has a basis with two elements:~$1$
2794and~$\epsilon$, the image of~$x$ in the quotient. By the way,~\kbd{gp} will
2795only accept your multiplication table if the first basis vector is~$1$. The
2796multiplication matrix of~$1$ is the~$2\times 2$ identity matrix, and
2797since~$\epsilon\cdot 1 = \epsilon$ and~$\epsilon\cdot\epsilon = 0$, the left
2798multiplication table of~$\epsilon$ is~$\pmatrix{0 & 0 \cr 1 & 0}$. So we
2799use the following multiplication table:
2800\bprog
2801mt1 = [matid(2), [0,0;1,0]]
2802@eprog\noindent
2803Since
2804we want our algebra to be over~$\F_3$, we have to specify the characteristic
2805and create the algebra with
2806\bprog
2807al1 = algtableinit(mt1, 3);
2808@eprog\noindent
2809Let's create another one: the algebra of upper-triangular~$2\times 2$
2810matrices over~$\Q$. This algebra has dimension~$3$, with basis~$1$, $a =
2811\pmatrix{0 & 1 \cr 0 & 0}$ and~$b = \pmatrix{0 & 0 \cr 0 & 1}$, and these
2812elements satisfy~$a^2=0$, $ab = a$, $ba=0$ and~$b^2=b$. Watch out, even
2813though~$a$ and~$b$ are~$2\times 2$ matrices, their left multiplication tables
2814are~$3\times 3$ matrices! The left multiplication tables of~$a$
2815and~$b$ are respectively
2816\bprog
2817ma = [0,0,0; 1,0,1; 0,0,0];
2818mb = [0,0,0; 0,0,0; 1,0,1];
2819@eprog\noindent
2820The multiplication table of our second algebra is therefore
2821\bprog
2822mt2 = [matid(3), ma, mb];
2823@eprog\noindent
2824and we can create the algebra with
2825\bprog
2826al2 = algtableinit(mt2,0);
2827@eprog\noindent
2828In fact, we can omit the second argument and type
2829\bprog
2830al2 = algtableinit(mt2);
2831@eprog\noindent
2832and the
2833characteristic of the algebra will implicitly be~$0$. Warning: in
2834characteristic~$0$, \kbd{algtableinit} expects an integral multiplication table.
2835
2836In fact, \kbd{gp} does not check that the multiplication table you provided
2837really defines an associative algebra. You can check it a posteriori
2838with
2839\bprog
2840algisassociative(al2)
2841@eprog\noindent
2842or before creating the algebra
2843with
2844\bprog
2845algisassociative(mt1,3)
2846@eprog\noindent
2847After creating the algebra, you can get back the
2848multiplication table that you provided with~\kbd{algmultable(al2)}, the
2849characteristic with~\kbd{algchar(al1)} and the dimension with~\kbd{algdim(al2)}.
2850
2851\subsubsec{Elements}
2852
2853In an associative algebra, we represent elements as column vectors expressing them
2854on the basis of the algebra. For instance, in~\kbd{al1} $=\F_3[\epsilon]$, the
2855element~$1-\epsilon$ is represented as~\kbd{[1,-1]\til}. Similarly, in~\kbd{al2}
2856we can define~$a$, $b$ and~$c = \pmatrix{2 & 1\cr 0 & 1}$ by
2857\bprog
2858a = [0,1,0]~
2859b = [0,0,1]~
2860c = [2,1,-1]~
2861@eprog\noindent
2862We can also draw random elements in a box using
2863\bprog
2864algrandom(al2, 10)
2865@eprog\noindent
2866You can compute any elementary operation: try various combinations
2867of~\kbd{algadd}, \kbd{algsub}, \kbd{algneg}, \kbd{algmul}, \kbd{alginv},
2868\kbd{algsqr}, \kbd{algpow}, using the syntax
2869\bprog
2870algmul(al2, b, a)
2871algpow(al2, c, 10)
2872@eprog\noindent
2873and the natural variants. In every algebra we have the left regular
2874representation, which sends every element~$x$ to the matrix of left
2875multiplication by~$x$. In~\kbd{gp} we access it by calling
2876\bprog
2877algtomatrix(al2, c)
2878@eprog\noindent
2879For every element~$x$ in an associative algebra, the trace of that matrix is
2880called the \emph{trace} of~$x$, the determinant is called the \emph{norm}
2881of~$x$, and the characteristic polynomial is called the \emph{characteristic
2882polynomial} of~$x$. We can compute them:
2883\bprog
2884algtrace(al2, a)
2885algnorm(al2, c)
2886algcharpoly(al2, b)
2887@eprog
2888
2889\subsubsec{Properties}
2890
2891Now let's try to compute some interesting properties of our algebras. Maybe the
2892simplest one we want to test is whether an algebra is commutative;
2893\kbd{algiscommutative} does that for us: you can use it to check that~\kbd{al1}
2894is of course commutative, but~\kbd{al2} is not since for instance~$ab=a\neq 0 =
2895ba$. More precisely, we can compute a basis of the center of an algebra
2896with~\kbd{algcenter}. Since~\kbd{al1} is commutative, we obtain the identity
2897matrix, and
2898\bprog
2899algcenter(al2)
2900@eprog\noindent
2901tells us that the center of~\kbd{al2} is
2902one-dimensional and generated by the identity.
2903
2904An important object in the structure of associative algebras is the
2905\emph{Jacobson radical}: the set of elements that annihilate every simple left
2906module. It is a nilpotent two-sided ideal. An algebra is \emph{semisimple} if
2907its Jacobson radical is zero, and for every algebra~$A$ with radical~$J$, the
2908quotient~$A/J$ is semisimple. You can compute a basis for the Jacobson radical
2909using~\kbd{algradical}. For instance, the radical of~\kbd{al1} is generated by
2910the element~$\epsilon$. Indeed, in a commutative algebra the Jacobson
2911radical is equal to the set of nilpotent elements. The radical of~\kbd{al2} has
2912basis~$a$: it is the subspace of strictly upper-triangular~$2\times 2$ matrices.
2913You can also directly test whether an algebra is semisimple
2914using~\kbd{algissemisimple}. Let's compute the semisimplification of our
2915second algebra by quotienting out its radical:
2916\bprog
2917al3 = algquotient(al2, algradical(al2));
2918@eprog\noindent
2919Check that~\kbd{al3} is indeed semisimple now that you know how to do it!
2920
2921Group algebras provide interesting examples: when~$G$ is a finite group and~$F$
2922is a field, the group algebra~$F[G]$ is semisimple if and only if the
2923characteristic of~$F$ does not divide the order of~$G$. Let's try it!
2924
2925\bprog
2926K = nfsplitting(x^3-x+1); \\ Galois closure -> S_3
2927G = galoisinit(K)
2928al4 = alggroup(G, 5) \\ F_5[S_3]
2929algissemisimple(al4)
2930@eprog\noindent
2931Check what happens when you change the characteristic of the algebra.
2932
2933The building blocks of semisimple algebras are \emph{simple} algebras:
2934algebras with no nontrivial two-sided ideals. Since the Jacobson radical is a
2935two-sided ideal, every simple algebra is semisimple. You can check whether an
2936algebra is simple using~\kbd{algissimple}. For instance, you can check that
2937\bprog
2938algissimple(al1)
2939@eprog\noindent
2940returns~\kbd{0}, but this is not very interesting since~\kbd{al1} is not even
2941semisimple. Instead we can test whether~\kbd{al3} is simple, and since we
2942already know that it is semisimple we can prevent~\kbd{gp} from checking it
2943again by using the optional second argument:
2944\bprog
2945algissimple(al3, 1)
2946@eprog\noindent
2947We see that~\kbd{al3} is not simple, and this implies that we can decompose it
2948further. Indeed, every semisimple algebra is isomorphic to a product of simple algebras.
2949We can obtain this decomposition with
2950\bprog
2951dec3 = algsimpledec(al3)[2];
2952apply(algdim, dec3)
2953@eprog\noindent
2954We see that~\kbd{al3} is isomorphic to~$\Q\times\Q$. Similarly, we can
2955decompose~\kbd{al4}.
2956\bprog
2957dec4 = algsimpledec(al4)[2];
2958apply(algdim, dec4)
2959@eprog\noindent
2960We see that~\kbd{al4} is isomorphic to~$\F_5\times\F_5\times A$ where~$A$ is a
2961mysterious 4-dimensional simple algebra. However every simple algebra over a
2962finite field is isomorphic to a matrix algebra over a possibly larger finite
2963field. By computing the center
2964\bprog
2965algcenter(dec4[3])
2966@eprog\noindent
2967we see that this algebra has center~$\F_5$, so that~\kbd{al4} is isomorphic
2968to~$\F_5\times\F_5\times M_2(\F_5)$.
2969
2970\subsec{Central simple algebras over number fields}
2971
2972As we saw, simple algebras are building blocks for more complicated algebras.
2973The center of a simple algebra is always a field, and we say that an
2974algebra over a field~$F$ is \emph{central} if its center is~$F$. The most
2975natural noncommutative generalization of a number field is a \emph{division
2976algebra} over~$\Q$: an algebra in which every nonzero element is invertible.
2977Since the center of a division algebra over~$\Q$ is a number field, we do not
2978lose any generality by considering central division algebras over number
2979fields. However, the tensor product of two central division algebras is not
2980always a division algebra, but division algebras are always simple and central
2981simple algebras are closed under tensor products, giving a much nicer global
2982picture. This is why we choose central simple algebras over number fields as our
2983noncommutative generalization of number fields.
2984
2985\subsubsec{Creation}
2986
2987Let's create our first central simple algebras! A well-known construction is
2988that of \emph{quaternion algebras}, which proceeds as follows.
2989Let~$F$ be a number field, and let~$a,b\in F^\times$; the quaternion
2990algebra~$(a,b)_F$ is the $F$-algebra generated by two elements~$i$ and~$j$
2991satisfying~$i^2=a$, $j^2=b$ and~$ij=-ji$. Hamilton's quaternions
2992correspond to the choice~$a=b=-1$, but it is not the only possible
2993one! Here is our first quaternion algebra:
2994\bprog
2995Q = nfinit(y);
2996al1 = alginit(Q,[-2,-1]); \\  (-2,-1)_Q
2997@eprog\noindent
2998Note that we represented the rationals~$\Q$ with an~\kbd{nf} structure and used
2999the variable~$y$ in that structure. The reason for this variable choice will
3000be clearer after looking at the next construction. You can see from the
3001definition of a quaternion algebra that it has dimension~$4$ over its center, a
3002fact that you can check in our example:
3003\bprog
3004algdim(al1)
3005@eprog\noindent
3006Cyclic algebras generalize the quaternion algebra construction.
3007Let~$F$ be a number field and~$K/F$ a cyclic extension of degree~$d$
3008with~$\sigma\in\text{Gal}(K/F)$ an automorphism of order~$d$, and let~$b\in
3009F^\times$; the \emph{cyclic algebra}~$(K/F,\sigma,b)$ is the
3010algebra~$\bigoplus_{i=0}^{d-1}u^iK$ with~$u^d=b$ and~$k u = u \sigma(k)$
3011for all~$k\in K$. It is a central simple algebra of dimension~$d^2$ over~$F$.
3012Let's construct one. First, start with a cyclic extension of number fields. A
3013simple way of obtaining such an extension is to take a cyclotomic field
3014over~$\Q$.
3015\bprog
3016T = polcyclo(5)
3017K = rnfinit(Q,T);
3018aut = Mod(x^2,T)
3019@eprog\noindent
3020Here the variable of~\kbd{T} must have higher priority than that of~\kbd{Q} to
3021build the~\kbd{rnf} structure. Now choose an element~$b\in\Q^\times$, say~$3$.
3022\bprog
3023b = 3
3024@eprog\noindent
3025Now we can create the algebra and check that it has the right dimension:~$16$.
3026\bprog
3027al2 = alginit(K, [aut,b]);
3028algdim(al2)
3029@eprog\noindent
3030We can recover the field~\kbd{K}, the automorphism~\kbd{aut} and the
3031element~\kbd{b} respectively as follows.
3032\bprog
3033algsplittingfield(al2)
3034algaut(al2)
3035algb(al2)
3036@eprog\noindent
3037In order to see how we recover quaternion algebras with this construction, let's
3038look at
3039\bprog
3040algsplittingfield(al1).pol
3041algaut(al1)
3042algb(al1)
3043@eprog\noindent
3044We see that the quaternion algebra~$(-2,-1)_{\Q}$ constructed by~\kbd{gp} is
3045represented as a cyclic algebra~$(\Q(\sqrt{-2})/\Q, \sigma, -1)$ by
3046writing~$\Q+\Q i+\Q j+\Q ij = \Q(i) + j\Q(i)$.
3047
3048A nice feature of central simple algebras over a given number field is that they
3049are completely classified up to isomorphism, in terms of certain invariants. We
3050therefore provide functions to create an algebra directly from its invariants.
3051The first basic invariant is the dimension of the algebra over its center.
3052In fact, the
3053dimension of a central simple algebra is always a square, and we call the square
3054root of the dimension the \emph{degree} of the algebra. For instance, a
3055quaternion algebra has degree~$2$. We can access the degree with
3056\bprog
3057algdegree(al1)
3058algdegree(al2)
3059@eprog\noindent
3060Let~$F$ be a number field and~$A$ a central simple algebra of degree~$d$
3061over~$F$. To every place~$v$ of~$F$ we can attach a \emph{Hasse invariant}
3062$h_v(A)\in(\dfrac{1}{d}\Z)/\Z\subset\Q/\Z$, with
3063the additional restrictions that the invariant is~$0$ at every complex place and
3064in~$(\dfrac{1}{2}\Z)/\Z$ at every real place. These
3065invariants are~$0$ at all but finitely many places. For instance we can compute
3066the invariants for our algebras:
3067\bprog
3068alghassei(al1)
3069alghassef(al1)
3070alghassei(al2)
3071alghassef(al2)
3072@eprog\noindent
3073The output of~\kbd{alghassei} (infinite places) is a vector of integers of length the number
3074of real places of~$F$, and the corresponding Hasse invariants are these integers
3075divided by~$d$. Here we learn that the invariant of~\kbd{al1} at~$\infty$
3076is~$1/2$ and that of~\kbd{al2} is~$0$. Similarly, the output of~\kbd{alghassef}
3077(finite places) is a pair of vectors, one containing primes of the base field,
3078and a vector of integers of the same length representing the Hasse invariants at
3079finite places. We learn that~\kbd{al1} has invariant~$1/2$ at~$2$ and~$0$ at
3080every other finite place, and that~\kbd{al2} has invariant~$-1/3$ at~$3$,
3081invariant~$1/3$ at~$5$, and~$0$ at every other finite place.
3082These invariants give a complete classification of central simple
3083algebras over~$F$:
3084
3085 \item two central simple algebras over~$F$ of the same degree are isomorphic if
3086   and only if they have the same invariants;
3087
3088 \item the sum of the Hasse invariants is~$0$ in $\Q/\Z$;
3089
3090 \item for every degree~$d$ and finite collection of Hasse invariants satisfying
3091   the conditions above, there exists a central simple algebra over~$F$ having
3092   those invariants.
3093
3094Let's use~\kbd{gp} to construct an algebra from its invariants. First let's
3095construct a number field and a few places.
3096\bprog
3097nf = nfinit(y^2-2);
3098p3 = idealprimedec(nf,3)[1]
3099p17 = idealprimedec(nf,17)[1]
3100@eprog\noindent
3101Now let's construct an algebra of degree~$6$. The following Hasse invariants
3102satisfy the correct conditions since~$1/3+1/6+1/2=0$ in~$\Q/\Z$:
3103\bprog
3104hf = [[p3,p17],[1/3,1/6]]; hi = [1/2,0]; \\ nf has 2 real places
3105@eprog\noindent
3106Finally we can create the algebra:
3107\bprog
3108al3 = alginit(nf,[6,hf,hi]);
3109@eprog\noindent
3110This will require less than half a minute but a lot of memory:
3111this is an algebra of dimension~$2\times 6^2 = 72$ over~$\Q$ and we
3112are doing a nontrivial computation in the initialization! You can check that the
3113dimension over~$\Q$ is what we expect:
3114\bprog
3115algdim(al3,1)
3116@eprog\noindent
3117During the initialization, \kbd{gp} computes an integral multiplication table
3118for the algebra. This allows us to recreate a table version of the algebra:
3119\bprog
3120mt3 = algmultable(al3);
3121al4 = algtableinit(mt3);
3122@eprog\noindent
3123We can then check that the algebra is simple as expected:
3124\bprog
3125algissimple(al4)
3126@eprog\noindent
3127Finally, an important test for a central simple algebra is whether it is a
3128division algebra.
3129\bprog
3130algisdivision(al3)
3131@eprog
3132
3133\subsubsec{Elements}
3134
3135In a cyclic algebra~$(K/F,\sigma,b)$ of degree~$d$, we represent elements as
3136column vectors of length~$d$, expressed on the basis of the~$K$-vector
3137space~$\bigoplus_{i=0}^{d-1}u^iK$:
3138\bprog
3139a = [x^3-1, -x^2, 3, -1]~*Mod(1,T) \\represents an element in al2
3140@eprog\noindent
3141To represent elements in the quaternion algebra~\kbd{al1}, we must view it as a
3142cyclic algebra, and therefore use the representation~\kbd{al1} $ =
3143\Q(i)+j\Q(i)$. The standard basis elements~$1,i,j,k=ij=-ji$ become
3144\bprog
3145one = [1,0]~
3146i = [x,0]~
3147j = [0,1]~
3148k = [0,-x]~
3149@eprog\noindent
3150The expected equalities hold:
3151\bprog
3152algsqr(al1,i) == -2*one
3153algsqr(al1,j) == -1*one
3154algsqr(al1,k) == -2*one
3155algmul(al1,i,j) == k
3156algmul(al1,j,i) == -k
3157@eprog\noindent
3158
3159Like~\kbd{nfinit} for number fields, the~\kbd{alginit} function computes an
3160integral basis of
3161the algebra being initialized. More precisely, an \emph{order} in
3162a~$\Q$-algebra~$A$ is a subring~${\cal O}\subset A$ that is finitely generated
3163as a~$\Z$-module and such that~$\Q{\cal O} = A$. In an algebra
3164structure
3165computed with~\kbd{alginit}, we store a basis of an order~${\cal O}_0$, which we
3166will call the integral basis of the algebra. There is no canonical choice for
3167such a basis, and not every integral element has integral coordinates with
3168respect to that basis (the set of integral elements in~$A$ does not form a
3169ring in general). By default, ${\cal O}_0$ is a maximal order and hence behaves
3170in a way similar to the ring of integers in a number field. You can disable the
3171(costly) computation of a maximal order with an optional argument:
3172\bprog
3173al5 = alginit(nf,[6,hf,hi],,0);
3174@eprog\noindent
3175This command should be faster than the initialization of~\kbd{al3}.
3176As in number fields, you can represent elements of central simple algebras in
3177\emph{algebraic form}, which means the cyclic algebra representation we
3178described above, or in \emph{basis form}, which means as a~$\Q$-linear
3179combination of the integral basis. You can switch between the two
3180representations:
3181\bprog
3182algalgtobasis(al2, a)
3183algalgtobasis(al1, j)
3184algbasistoalg(al3, algrandom(al3,1))
3185@eprog\noindent
3186As usual you can compute any elementary operation: try various combinations
3187of~\kbd{algadd}, \kbd{algsub}, \kbd{algneg}, \kbd{algmul}, \kbd{alginv},
3188\kbd{algsqr}, \kbd{algpow}.
3189
3190Every central simple algebra~$A$ over a number field~$F$ admits a \emph{splitting
3191field}, i.e. an extension~$K/F$ such that~$A\otimes_F K\cong M_d(K)$. We always
3192store such a splitting in an~\kbd{alginit} structure, and you can access it
3193using
3194\bprog
3195algsplittingfield(al1) \\ K as an rnf structure
3196algtomatrix(al1, k) \\ image of k by the splitting isomorphism
3197@eprog\noindent
3198For every~$x\in A$, the trace (resp. determinant, characteristic polynomial)
3199of that matrix is in~$K$ (resp.~$K$,~$K[X]$) and is called the
3200\emph{reduced trace} (resp. \emph{reduced norm}, \emph{reduced characteristic
3201polynomial}) of~$x$. You can compute them using
3202\bprog
3203algtrace(al3, vector(72,i,i==3)~)
3204algnorm(al2, a)
3205algcharpoly(al1, -1+i+j+2*k)
3206@eprog
3207
3208\section{Plotting}
3209
3210PARI supports high and low-level graphing functions, on a variety of output
3211devices: a special purpose window under standard graphical environments (the
3212\kbd{X Windows} system, Mac OS X, Microsoft Windows), or a \kbd{PostScript}
3213file ready for the printer. These functions use a multitude of flags, which
3214are mostly power-of-2. To simplify understanding we first give these flags
3215symbolic names.
3216\bprog
3217  /* Relative positioning of graphic objects: */
3218  nw       = 0;  se       = 4;  relative = 1;
3219  sw       = 2;  ne       = 6;
3220
3221  /* String positioning: */
3222  /* V */ bottom  =  0;   /* H */  left   = 0;   /* Fine tuning */ hgap = 16;
3223          vcenter =  4;            center = 1;                     vgap = 32;
3224          top     =  8;            right  = 2;
3225@eprog\noindent
3226We also decrease drastically the default precision.
3227\bprog
3228  \p 9
3229@eprog\noindent
3230This is very important, since plotting involves calculation of functions at
3231a huge number of points, and a relative precision of 38 significant digits
3232is an obvious overkill: the output device resolution certainly won't reach
3233$10^{38} \times 10^{38}$ pixels!
3234
3235Start with a simple plot:
3236\bprog
3237  ploth(X = -2, 2, sin(X^7))
3238@eprog\noindent
3239You can see the limitations of the ``straightforward'' mode of plotting:
3240while the first several cycles of \kbd{sin} reach $-1$ and $1$, the cycles
3241which are closer to the left and right border do not. This is understandable,
3242since PARI is calculating $\sin(X^7)$ at many (evenly spaced) points, but
3243these points have no direct relationship to the ``interesting'' points on
3244the graph of this function.  No value close enough to the maxima and minima
3245are calculated, which leads to wrong turning points in the graph. To fix
3246this, one may use variable steps which are smaller where the function varies
3247rapidly:
3248\bprog
3249  ploth(X = -2, 2, sin(X^7), "Recursive")
3250@eprog\noindent
3251The precision near the edges of the graph is much better now.
3252However, the recursive plotting (named so since PARI subdivides intervals
3253until the graph becomes almost straight) has its own pitfalls.  Try
3254\bprog
3255  ploth(X = -2, 2, sin(X*7), "Recursive")
3256@eprog\noindent The graph looks correct far away, but it has a straight
3257interval near the origin, and some sharp corners as well.  This happens
3258because the graph is symmetric with respect to the origin, thus the middle 3
3259points calculated during the initial subdivision of $[-2,2]$ are exactly on
3260the same line.  To PARI this indicates that no further subdivision is needed,
3261and it plots the graph on this subinterval as a straight line.
3262
3263There are many ways to circumvent this.  Say, one can make the right limit
32642.1.  Or one can ask PARI for an initial subdivision into 16 points instead
3265of default 15:
3266\bprog
3267  ploth(X = -2, 2, sin(X*7), "Recursive", 16)
3268@eprog\noindent
3269All these arrangements break the symmetry of the initial subdivision, thus
3270make the problem go away.  Eventually PARI will be able to better detect such
3271pathological cases, but currently some manual intervention may be required.
3272
3273The function \kbd{ploth} has some additional enhancements which allow
3274graphing in situations when the calculation of the function takes a lot of
3275time.  Let us plot $\zeta({1\over 2} + it)$:
3276\bprog
3277  ploth(t = 100, 110, real(zeta(0.5+I*t)), /*empty*/, 1000)
3278@eprog\noindent
3279This can take quite some time.  (1000 is close to the default for many
3280plotting devices, we want to specify it explicitly so that the result does
3281not depend on the output device.)  Try the recursive plot:
3282\bprog
3283  ploth(t = 100, 110, real(zeta(0.5+I*t)), "Recursive")
3284@eprog\noindent
3285It takes approximately the same time.  Now try specifying fewer points,
3286but make PARI approximate the data by a smooth curve:
3287\bprog
3288  ploth(t = 100, 110, real(zeta(0.5+I*t)), "Splines", 100)
3289@eprog\noindent
3290This takes much less time, and the output is practically the same.  How to
3291compare these two outputs?  We will see it shortly.  Right now let us plot
3292both real and complex parts of $\zeta$ on the same graph:
3293\bprog
3294  f(t) = my(z = zeta(0.5+I*t)); [real(z),imag(z)];
3295  ploth(t = 100, 110, f(t), , 1000)
3296@eprog\noindent (Note the use of the temporary variable \kbd{z}; \kbd{my}
3297declares it local to the function's body.)
3298
3299Note how one half of the roots of the real and imaginary parts coincide.
3300Why did we define a function \kbd{f(t)}?  To avoid calculation of
3301$\zeta({1\over2} + it)$ twice for the same value of t.  Similarly, we can
3302plot parametric graphs:
3303\bprog
3304  ploth(t = 100, 110, f(t), "Parametric", 1000)
3305@eprog\noindent In that case (parametric plot of the real and imaginary parts
3306of a complex function), we can also use directly
3307\bprog
3308  ploth(t = 100, 110, zeta(0.5+I*t), "Complex", 1000)
3309  ploth(t = 100, 110, zeta(0.5+I*t), "Complex|Splines", 100)
3310@eprog
3311
3312If your plotting device supports it, you may ask PARI to show the points
3313in which it calculated your function:
3314\bprog
3315  ploth(t = 100, 110, f(t), "Parametric|Splines|Points_too", 100)
3316@eprog
3317
3318As you can see, the points are very dense on the graph.  To see some crude
3319graph, one can even decrease the number of points to 30.  However, if you
3320decrease the number of points to 20, you can see that the approximation to
3321the graph now misses zero.  Using splines, one can create reasonable graphs
3322for larger values of t, say with
3323\bprog
3324  ploth(t = 10000, 10004, f(t), "Parametric|Splines|Points_too", 50)
3325@eprog
3326
3327How can we compare two graphs of the same function plotted by different
3328methods?  Documentation shows that \kbd{ploth} does not provide any direct
3329method to do so.  However, it is possible, and even not very complicated.
3330
3331The solution comes from the other direction.  PARI has a power mix of high
3332level plotting function with low level plotting functions, and these functions
3333can be combined together to obtain many different effects.  Return back
3334to the graph of $\sin(X^7)$.  Suppose we want to create an additional
3335rectangular frame around our graph.  No problem!
3336
3337First, all low-level graphing work takes place in some virtual drawing
3338boards (numbered from 0 to 15), called ``rectangles'' (or ``rectwindows'').
3339So we create an empty ``rectangle'' and name it rectangle 2 (any
3340number between 0 and 15 would do):
3341\bprog
3342  plotinit(2)
3343  plotscale(2, 0,1, 0,1)
3344@eprog
3345This creates a rectwindow whose size exactly fits the size of the output
3346device, and makes the coordinate system inside it go from 0 to 1 (for both
3347$x$ and $y$). Create a rectangular frame along the boundary of this rectangle:
3348\bprog
3349  plotmove(2, 0,0)
3350  plotbox(2, 1,1)
3351@eprog
3352Suppose we want to draw the graph inside a subrectangle of this with upper
3353and left margins of $0.10$ (so 10\% of the full rectwindow width), and
3354lower and top margins of $0.02$, just to make it more interesting. That
3355makes it an $0.88 \times 0.88$ subrectangle; so we create another rectangle
3356(call it 3) of linear size 0.88 of the size of the initial rectangle and
3357graph the function in there:
3358\bprog
3359  plotinit(3, 0.88, 0.88, relative)
3360  plotrecth(3, X = -2, 2, sin(X^7), "Recursive")
3361@eprog
3362(nothing is output yet, these commands only fills the virtual drawing
3363boards with PARI graphic objects). Finally, output rectangles 2 and 3 on
3364the same plot, with the required offsets (counted from upper-left corner):
3365\bprog
3366  plotdraw([2, 0,0,  3, 0.1,0.02], relative)
3367@eprog
3368\noindent The output misses something comparing to the output of
3369\kbd{ploth}: there are no coordinates of the corners of the internal
3370rectangle.  If your output device supports mouse operations (only
3371\kbd{gnuplot} does), you can find coordinates of particular points of the
3372graph, but it is nice to have something printed on a hard copy too.
3373
3374However, it is easy to put $x$- and $y$-limits on the graph.  In the
3375coordinate system of the rectangle 2 the corners are $(0.1,0.1)$,
3376$(0.1,0.98)$, $(0.98,0.1)$, $(0.98,0.98)$.  We can mark lower $x$-limit by
3377doing
3378\bprog
3379  plotmove(2, 0.1,0.1)
3380  plotstring(2, "-2.000", left+top+vgap)
3381@eprog\noindent
3382Computing the minimal and maximal $y$-coordinates might be trickier, since
3383in principle we do not know the range in advance (though for $\sin(X^7)$ it
3384is easy to guess!). Fortunately, \kbd{plotrecth} returns the $x$- and
3385$y$-limits.
3386
3387Here is the complete program:
3388\bprog
3389  plotinit(3, 0.88, 0.88, relative)
3390  lims = plotrecth(3, X = -2, 2, sin(X^7), "Recursive")
3391  \p 3          \\ @com $3$ significant digits for the bounding box are enough
3392  plotinit(2);      plotscale(2, 0,1, 0,1)
3393  plotmove(2, 0,0); plotbox(2, 1,1)
3394  plotmove(2, 0.1,0.1);
3395  plotstring(2, lims[1], left+top+vgap)
3396  plotstring(2, lims[3], bottom+vgap+right+hgap)
3397  plotmove(2, 0.98,0.1); plotstring(2, lims[2], right+top+vgap)
3398  plotmove(2, 0.1,0.98); plotstring(2, lims[4], right+hgap+top)
3399  plotdraw([2, 0,0,  3, 0.1,0.02], relative)
3400@eprog
3401
3402We started with a trivial requirement: have an additional frame around
3403the graph, and it took some effort to do so.  But at least it was possible,
3404and PARI did the hardest part: creating the actual graph.
3405Now do a different thing: plot together the ``exact'' graph of
3406$\zeta({1/2}+it)$ together with one obtained from splines approximation.
3407We can emit these graphs into two rectangles, say 0 and 1, then put these
3408two rectangles together on one plot.  Or we can emit these graphs into one
3409rectangle 0.
3410
3411However, a problem arises: note how we
3412introduced a coordinate system in rectangle 2 of the above example, but we
3413did not introduce a coordinate system in rectangle 3.  Plotting a
3414graph into rectangle 3 automatically created a coordinate system
3415inside this rectangle (you could see this coordinate system in action
3416if your output device supports mouse operations).  If we use two different
3417methods of graphing, the bounding boxes of the graphs will not be exactly
3418the same, thus outputting the rectangles may be tricky.  Thus during
3419the second plotting we ask \kbd{plotrecth} to use the coordinate system of
3420the first plotting.  Let us add another plotting with fewer
3421points too:
3422\bprog
3423plotinit(0, 0.9,0.9, relative)
3424plotrecth(0, t=100,110, f(t), "Parametric",300)
3425plotrecth(0, t=100,110, f(t), "Parametric|Splines|Points_too|no_Rescale",30);
3426plotrecth(0, t=100,110, f(t), "Parametric|Splines|Points_too|no_Rescale",20);
3427plotdraw([0, 0.05,0.05], relative)
3428@eprog
3429
3430This achieves what we wanted: we may compare different ways to plot a graph,
3431but the picture is confusing: which graph is what, and why there are multiple
3432boxes around the graph?  At least with some output devices one can control
3433how the output curves look like, so we can use this to distinguish different
3434graphs.  And the mystery of multiple boxes is also not that hard to solve:
3435they are bounding boxes for calculated points on each graph.  We can disable
3436output of bounding boxes with appropriate options for \kbd{plotrect}.
3437With these frills the script becomes:
3438\bprog
3439plotinit(0, 0.9,0.9, relative)
3440plotrecth(0, t=100,110, f(t), "Parametric|no_Lines", 300)
3441opts="Parametric|Splines|Points_too|no_Rescale|no_Frame|no_X_axis|no_Y_axis";
3442plotrecth(0, t=100,110,f(t), opts, 30);
3443plotdraw([0, 0.05,0.05], relative)
3444@eprog
3445
3446\noindent Plotting axes on the second graph would not hurt, but
3447is not needed either, so we omit them.  One can see that the discrepancies
3448between the exact graph and one based on 30 points exist, but are pretty
3449small.  On the other hand, decreasing the number of points to 20 would make
3450quite a noticeable difference.
3451
3452Additionally, one can ask PARI to convert a plot to PS (PostScript)
3453or SVG (Scalable Vector Graphics) format: just use the command
3454\kbd{plotexport} instead of \kbd{plotdraw} in the above examples (or
3455\kbd{plothexport} instead of \kbd{ploth}). This returns a character string
3456which you can then write to a file using \kbd{write}.
3457
3458Now suppose we want to join many different small graphs into one picture.
3459We cannot use one rectangle for all the output as we did in the example
3460with $\zeta({1/2}+it)$, since the graphs should go into different places.
3461Rectangles are a scarce commodity in PARI, since only 16 of them are
3462user-accessible.  Does it mean that we cannot have more than 16 graphs on
3463one picture?  Thanks to an additional operation of PARI plotting engine,
3464there is no such restrictions.  This operation is \kbd{plotcopy}.
3465
3466The following script puts 4 different graphs on one plot using 2 rectangles
3467only, \kbd{A} and \kbd{T}:
3468\bprog
3469  A = 2;   \\@com accumulator
3470  T = 3;   \\@com temporary target
3471  plotinit(A);         plotscale(A, 0, 1, 0, 1)
3472
3473  plotinit(T, 0.42, 0.42, relative);
3474  plotrecth(T, x= -5, 5, sin(x), "Recursive")
3475  plotcopy(T, 2, 0.05, 0.05, relative + nw)
3476
3477  plotmove(A, 0.05 + 0.42/2, 1 - 0.05/2)
3478  plotstring(A,"Graph", center + vcenter)
3479
3480  plotinit(T, 0.42, 0.42, relative);
3481  plotrecth(T, x= -5, 5, [sin(x),cos(2*x)], 0)
3482  plotcopy(T, 2, 0.05, 0.05, relative + ne)
3483
3484  plotmove(A, 1 - 0.05 - 0.42/2, 1 - 0.05/2)
3485  plotstring(A,"Multigraph", center + vcenter)
3486
3487  plotinit(T, 0.42, 0.42, relative);
3488  plotrecth(T, x= -5, 5, [sin(3*x), cos(2*x)], "Parametric")
3489  plotcopy(T, 2, 0.05, 0.05, relative + sw)
3490
3491  plotmove(A, 0.05 + 0.42/2, 0.5)
3492  plotstring(A,"Parametric", center + vcenter)
3493
3494  plotinit(T, 0.42, 0.42, relative);
3495  plotrecth(T, x= -5, 5, [sin(x), cos(x), sin(3*x),cos(2*x)], "Parametric")
3496  plotcopy(T, 2, 0.05, 0.05, relative + se)
3497
3498  plotmove(A, 1 - 0.05 - 0.42/2, 0.5)
3499  plotstring(A,"Multiparametric", center + vcenter)
3500
3501  plotmove(A, 0, 0)
3502  plotbox(A, 1, 1)
3503
3504  plotdraw(A)
3505  \\ s = plotexport(A, relative); write("foo.ps", s) \\ @com if hard copy needed
3506@eprog
3507
3508The rectangle \kbd{A} plays the role of accumulator, rectangle \kbd{T} is
3509used as a target to \kbd{plotrecth} only.  Immediately after plotting into
3510rectangle \kbd{T} the contents is copied to accumulator.  Let us explain
3511numbers which appear in this example: we want to create 4 internal rectangles
3512with a gap 0.06 between them, and the outside gap 0.05 (of the size of the
3513plot).  This leads to the size 0.42 for each rectangle.  We also
3514put captions above each graph, centered in the middle of each gap.  There
3515is no need to have a special rectangle for captions: they go into the
3516accumulator too.
3517
3518To simplify positioning of the rectangles, the above example uses relative
3519``geographic'' notation: the last argument of \kbd{plotcopy} specifies the
3520corner of the graph (say, northwest) one counts offset from. (Positive
3521offsets go into the rectangle.)
3522
3523To demonstrate yet another useful plotting function, design a program to
3524plot Taylor polynomials for a $\sin x$ about 0.  For simplicity, make the
3525program good for any function, but assume that a function is odd, so only
3526odd-numbered Taylor series about 0 should be plotted.  Start with defining
3527some useful shortcuts
3528\bprog
3529  xlim = 13;  ordlim = 25;  f(x) = sin(x);
3530  default(seriesprecision,ordlim)
3531  farray(t) = vector((ordlim+1)/2, k, truncate( f(1.*t + O(t^(2*k+1)) )));
3532  FARRAY = farray('t);  \\@com\kbd{'t} to make sure \kbd{t} is a free variable
3533@eprog\noindent
3534\kbd{farray(x)} returns a vector of Taylor polynomials for $f(x)$, which we
3535store in \kbd{FARRAY}.  We want to plot $f(x)$ into a rectangle, then make
3536the rectangle which is 1.2 times higher, and plot Taylor polynomials into the
3537larger rectangle.  Assume that the larger rectangle takes 0.9 of the final
3538plot.
3539
3540First of all, we need to measure the height of the smaller rectangle:
3541\bprog
3542  plotinit(3, 0.9, 0.9/1.2, 1);
3543  opts = "Recursive | no_X_axis|no_Y_axis|no_Frame";
3544  lims = plotrecth(3, x= -xlim, xlim, f(x), opts,16);
3545  h = lims[4] - lims[3];
3546@eprog\noindent
3547Next step is to create a larger rectangle, and plot the Taylor polynomials
3548into the larger rectangle:
3549\bprog
3550  plotinit(4, 0.9,0.9, relative);
3551  plotscale(4, lims[1], lims[2], lims[3] - h/10, lims[4] + h/10)
3552  plotrecth(4, x = -xlim, xlim, subst(FARRAY,t,x), "no_Rescale");
3553@eprog
3554
3555Here comes the central command of this example:
3556\bprog
3557  plotclip(4)
3558@eprog\noindent
3559What does it do?  The command \kbd{plotrecth(\dots, "no\_Rescale")} scales the
3560graphs according to coordinate system in the rectangle, but it does not pay
3561any other attention to the size of the rectangle.  Since \kbd{xlim} is 13,
3562the Taylor polynomials take very large values in the interval
3563\kbd{-xlim...xlim}.  In particular, significant part of the graphs is going
3564to be \emph{outside} of the rectangle. \kbd{plotclip} removes the parts of
3565the plottings which fall off the rectangle boundary
3566\bprog
3567  plotinit(2)
3568  plotscale(2, 0.0, 1.0, 0.0, 1.0)
3569  plotmove(2,0.5,0.975)
3570  plotstring(2,"Multiple Taylor Approximations",center+vcenter)
3571  plotdraw([2, 0, 0,  3, 0.05, 0.05 + 0.9/12,  4, 0.05, 0.05], relative)
3572@eprog\noindent
3573These commands draw a caption, and combine 3 rectangles (one with the
3574caption, one with the graph of the function, and one with graph of Taylor
3575polynomials) together. The plots are not very beautiful with the default
3576colors. See \kbd{examples/taylor.gp} for a user function encapsulating the
3577above example, and a colormap generator.
3578
3579This finishes our survey of PARI plotting functions, but let us add
3580some remarks.  First of all, for a typical output device the picture is
3581composed of small colored squares (pixels), as a very large checkerboard.
3582Each output rectangle is a disjoint union of such squares.  Each drop
3583of paint in the rectangle will color a whole square in it.  Since the
3584rectangle has a coordinate system, it is important to know how this
3585coordinate system is positioned with respect to the boundaries of these
3586squares.
3587
3588The command \kbd{plotscale} describes a range of $x$ and $y$ in the
3589rectangle.  The limit values of $x$ and $y$ in the coordinate system are
3590coordinates \emph{of the centers} of corner squares.  In particular,
3591if ranges of $x$ and $y$ are $[0,1]$, then the segment which connects (0,0)
3592with (0,1) goes along the \emph{middle} of the left column of the rectangle.
3593In particular, if we made tiny errors in calculation of endpoints of this
3594segment, this will not change which squares the segment intersects, thus
3595the resulting picture will be the same.  (It is important to know such details
3596since many calculations are approximate.)
3597
3598Another consideration is that all examples we did in this section were
3599using relative sizes and positions for the rectangles.  This is nice, since
3600different output devices will have very similar pictures, while we
3601did not need to care about particular resolution of the output device.
3602On the other hand,
3603using relative positions does not guarantee that the pictures will be
3604similar.  Why?  Even if two output devices have the same resolution,
3605the picture may be different.  The devices may use fonts of different
3606size, or may have a different ``unit of length''.
3607
3608The information about the device in PARI is encoded in 6 numbers: resolution,
3609size of a character cell of the font, and unit of length, all separately
3610for horizontal and vertical direction.  These sizes are expressed as
3611numbers of pixels.  To inspect these numbers one may use the function
3612\kbd{plothsizes}.  The ``units of length'' are currently used to calculate
3613right and top gaps near graph rectangle of \kbd{ploth}, and gaps for
3614\kbd{plotstring}.  Left and bottom gaps near graph rectangle are calculate
3615using both units of length, and sizes of character boxes (so that there
3616is enough place to print limits of the graphs).
3617
3618What does it show?  Using relative sizes during plotting produces
3619\var{approximately} the same plotting on different devices, but does not
3620ensure that the plottings ``look the same''.  Moreover, ``looking the
3621same'' is not a desirable target, ``looking tuned for the environment''
3622will be much better.  If you want to produce such fine-tuned plottings,
3623you need to abandon a relative-size model, and do your plottings in
3624pixel units.  To do this one removes flag \kbd{relative} from the above
3625examples, which will make size and offset arguments interpreted this way.
3626After querying sizes with \kbd{plothsizes} one can fine-tune sizes and
3627locations of subrectangles to the details of an arbitrary plotting
3628device.
3629
3630The last two elements of the array returned by \kbd{plothsizes} are the
3631dimensions of the display, if applicable. If there is no real display, like
3632in svg or postscript plots, the width and height of display are set to $0$.
3633
3634To check how good your fine-tuning is, you may test your graphs with a
3635medium-resolution plotting (as many display output devices are), and
3636with a low-resolution plotting (say, with \kbd{plotterm("dumb")} of gnuplot).
3637
3638\section{GP Programming}
3639
3640Do we really need such a section after all we have learnt so far? We now
3641know how to write scripts and feed them to \kbd{gp}, in particular how to
3642define functions. It's possible to define \emph{member} function as well, but
3643we trust you to find them in the manual. We have seen most control
3644statements: the missing ones (\kbd{while}, \kbd{break}, \kbd{next},
3645\kbd{return} and various \kbd{for} loops) should be straightforward. (You
3646won't forget to look them up in the manual, will you?)
3647
3648Output is done via variants of the familiar \kbd{print} (to screen),
3649\kbd{write} (to a file). Input via \kbd{read} (from file), \kbd{input}
3650(querying user), or \kbd{extern} (from an external auxiliary program).
3651
3652To customize \kbd{gp}, e.g.~increase the default stack space or load your
3653private script libraries on startup, look up \kbd{The preferences file}
3654section in the User's manual. We strongly advise to set \kbd{parisizemax} to
3655a large nonzero value, about what you believe your machine can stand: this
3656both limits the amount of memory PARI will use for its computation (thereby
3657keeping your machine usable), and let PARI increases its stack size (up to
3658this limit) to accommodate large computations. If you regularly see \kbd{PARI
3659stack overflows} messages, think about this one!
3660
3661For clarity, it is advisable  to declare local variables in user functions
3662(and in fact, with the smallest possible scope), as we have done so far with
3663the keyword \kbd{my}. As usual, one is usually better off avoiding global
3664variables altogether.
3665
3666\emph{Break loops} are more powerful than we saw: look up \kbd{dbg\_down} /
3667\kbd{dbg\_up} (to get a chance to inspect local variables in various scopes)
3668and \kbd{dbg\_err} (to access all components of an error context).
3669
3670To reach grandwizard status, you may need to understand the all powerful
3671\kbd{install} function, which imports into \kbd{gp} an (almost) arbitrary
3672function from the PARI library (and elsewhere too!), or how to use the
3673\kbd{gp2c} compiler and its extended types. But both are beyond the scope of
3674the present document.
3675
3676Have fun!
3677\bye
3678