The Mathematics of Coding Theory: Information, Compression, Error Correction, and Finite Fields 1st Edition by Paul B. Garrett (PDF)


Ebook Info

  • Published: 2003
  • Number of pages: 398 pages
  • Format: PDF
  • File Size: 13.53 MB
  • Authors: Paul B. Garrett

Description

This book provides a very accessible introduction to an important contemporary application of number theory, abstract algebra, and probability. It contains numerous computational examples throughout, giving learners the opportunity to apply, practice, and check their understanding of key concepts. Coverage starts from scratch in treating probability, entropy, compression, Shannon's theorems, cyclic redundancy checks, and error correction. For enthusiasts of abstract algebra and number theory.
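As a small taste of the entropy-and-compression side of those topics, here is a minimal Python sketch (our own illustration, not code from the book) computing the Shannon entropy of a finite probability distribution, the quantity that limits how far a source can be losslessly compressed on average:

# Shannon entropy of a finite probability distribution, in bits:
#   H(p) = -sum_i p_i * log2(p_i)
# Illustrative only; the function name and example distributions are ours.
from math import log2

def shannon_entropy(probs):
    """Entropy in bits of a distribution given as a list of probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A biased binary source with P(0) = 0.9, P(1) = 0.1 carries about
# 0.469 bits per symbol, so lossless compression cannot do better than
# roughly 0.47 output bits per input symbol on average.
print(shannon_entropy([0.9, 0.1]))   # ~ 0.4690
print(shannon_entropy([0.5, 0.5]))   # = 1.0 (a fair coin cannot be compressed)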

User’s Reviews

Editorial Reviews: Excerpt. © Reprinted by permission. All rights reserved.

This book is intended to be accessible to undergraduate students with two years of typical mathematics experience, most likely meaning calculus with a little linear algebra and differential equations. Thus, specifically, there is no assumption of a background in abstract algebra or number theory, nor of probability, nor of linear algebra. All these things are introduced and developed to a degree sufficient to address the issues at hand.

We will address the fundamental problem of transmitting information effectively and accurately. The specific mode of transmission does not really play a role in our discussion. On the other hand, we should mention that the importance of the issues of efficiency and accuracy has increased largely due to the advent of the internet and, even more so, due to the rapid development of wireless communications. For this reason it makes sense to think of networked computers or wireless devices as archetypical fundamental practical examples. The underlying concepts of information and information content of data make sense independently of computers, and are relevant in looking at the operation of natural languages such as English, and of other modes of operation by which people acquire and process data.

The issue of efficiency is the obvious one: transmitting information costs time, money, and bandwidth. It is important to use as little as possible of each of these resources. Data compression is one way to pursue this efficiency. Some well known examples of compression schemes are commonly used for graphics: GIFs, JPEGs, and more recently PNGs. These clever file format schemes are enormously more efficient in terms of filesize than straightforward bitmap descriptions of graphics files. There are also general-purpose compression schemes, such as gzip, bzip2, ZIP, etc.

The issue of accuracy is addressed by detection and correction of errors that occur during transmission or storage of data. The single most important practical example is the TCP/IP protocol, widely used on the internet: one basic aspect of this is that if any of the packets composing a message is discovered to be mangled or lost, the packet is simply retransmitted. The detection of lost packets is based on numbering the collection making up a given message. The detection of mangled packets is by use of 16-bit checksums in the headers of IP and TCP packets. We will not worry about the technical details of TCP/IP here, but only note that email and many other types of internet traffic depend upon this protocol, which makes essential use of rudimentary error-detection devices.

And it is a fact of life that dust settles on CD-ROMs, static permeates network lines, etc. That is, there is noise in all communication systems. Human natural languages have evolved to include sufficient redundancy so that usually much less than 100% of a message need be received to be properly understood. Such redundancy must be designed into CD-ROM and other data storage protocols to achieve similar robustness.

There are other uses for detection of changes in data: if the data in question is the operating system of your computer, a change not initiated by you is probably a sign of something bad, either failure in hardware or software, or intrusion by hostile agents (whether software or wetware). Therefore, an important component of systems security is implementation of a suitable procedure to detect alterations in critical files.
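To make the 16-bit checksum idea in the excerpt concrete, here is a minimal Python sketch of an Internet-style ones'-complement checksum (our own illustration in the spirit of RFC 1071, not code from the book; real TCP and IP checksums also cover a pseudo-header and header fields, which are omitted here):

# Sum the data as 16-bit big-endian words with end-around carry, then
# take the ones' complement of the result. Illustrative only.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF                          # ones' complement

packet = b"example payload"
print(hex(internet_checksum(packet)))

A receiver recomputes the sum over the received data together with the transmitted checksum; flipping even a single bit almost always changes the result, which is how a mangled packet is detected and scheduled for retransmission.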
In pre-internet times, various schemes were used to reduce the bulk of communication without losing the content: this influenced the design of the telegraphic alphabet, traffic lights, shorthand, etc. With the advent of the telephone and radio, these matters became even more significant. Communication with exploratory spacecraft having very limited resources available in deep space is a dramatic example of how the need for efficient and accurate transmission of information has increased in our recent history. In this course we will begin with the model of communication and information made explicit by Claude Shannon in the 1940s, after some preliminary forays by Hartley and others in the preceding decades.

Many things are omitted due to lack of space and time. In spite of their tremendous importance, we do not mention convolutional codes at all. This is partly because there is less known about them mathematically. Concatenated codes are mentioned only briefly. Finally, we also omit any discussion of the so-called turbo codes. Turbo codes have been recently developed experimentally. Their remarkably good behavior, seemingly approaching the Shannon bound, has led to the conjecture that they are explicit solutions to the fifty-year-old existence results of Shannon. However, at this time there is insufficient understanding of the reasons for their good behavior, and for this reason we will not attempt to study them here. We do give a very brief introduction to geometric Goppa codes, attached to algebraic curves, which are a natural generalization of Reed-Solomon codes (which we discuss), and which exceed the Gilbert-Varshamov lower bound for performance.

The exercises at the ends of the chapters are mostly routine, with a few more difficult exercises indicated by single or double asterisks. Short answers are given at the end of the book for a good fraction of the exercises, indicated by ‘(ans.)’ following the exercise.

I offer my sincere thanks to the reviewers of the notes that became this volume. They found many unfortunate errors, and offered many good ideas about improvements to the text. While I did not choose to take absolutely all the advice given, I greatly appreciate the thought and energy these people put into their reviews: John Bowman, University of Alberta; Sergio Lopez, Ohio University; Navin Kashyap, University of California, San Diego; James Osterburg, University of Cincinnati; LeRoy Bearnson, Brigham Young University; David Grant, University of Colorado at Boulder; Jose Voloch, University of Texas.

Paul Garrett
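For reference, the Gilbert-Varshamov lower bound mentioned in the excerpt can be stated as follows (a standard textbook form, not quoted from Garrett): the maximal size $A_q(n,d)$ of a $q$-ary code of block length $n$ and minimum distance $d$ satisfies

$$A_q(n,d) \;\ge\; \frac{q^n}{\sum_{j=0}^{d-1} \binom{n}{j}\,(q-1)^j}.$$

Geometric Goppa codes built from algebraic curves with many rational points yield families whose asymptotic trade-off between rate and relative distance improves on the asymptotic form of this bound when the alphabet size $q$ is sufficiently large.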

Reviews from Amazon users, collected at the time this book was published on the website:

⭐I took this course at the U of Minn (where the author is a professor). He has a reputation of being a good professor and a good guy (and I have no reason to doubt it). Unfortunately, his book is very hard to understand. While packed chock full of information, it is written in a **very, very** dense style. It makes a lot of assumptions about your prior knowledge and there are few examples to illustrate the theory. While this may be OK for a grad student in math (or even a bright senior), it is definitely not sufficient for a non-math major and most undergrads. For a much better book (although of more limited scope and rigor) check out Roman.

⭐The Mathematics of Coding Theory written by Paul Garrett is the lecture textbook for Math 5251 at the U of Minnesota-Twin Cities. This book is well designed and the printing quality is also pretty nice. Prof. Andrew Odlyzko lectures this course every Spring; he is very strong in mathematics and knowledgeable in coding. I think the lectures based on this textbook are wonderful. Three in-class midterms together with a term paper and no final will make you happy. Some chapters are a bit difficult to understand, but on the whole this book is well written and suitable for both math undergraduates and engineering graduate students. You can search for the lecture notes from past semesters and come to office hours to get hints for solving the textbook problems. Great deal!

⭐I took this course at the University of Minnesota (the author is a professor here) using this textbook. While the author does provide some good proofs of the mathematics behind coding theory, he provides very little in the way of practical, useful information. The chapters that actually have example problems have very few, and they are incredibly brief with little explanation given. There are a few answers in the back of the book, but often they are just the answer alone without any work or proof. Some of the exercises at the end of each chapter are impossible to complete using only the information available in the book.

Keywords

Free Download The Mathematics of Coding Theory: Information, Compression, Error Correction, and Finite Fields 1st Edition in PDF format
