
Buy new: $14.36
Buy used: $7.56

On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines Paperback – August 1, 2005
Purchase options and add-ons
From the inventor of the PalmPilot comes a new and compelling theory of intelligence, brain function, and the future of intelligent machines
Jeff Hawkins, the man who created the PalmPilot, Treo smart phone, and other handheld devices, has reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one stroke, with a new understanding of intelligence itself.
Hawkins develops a powerful theory of how the human brain works, explaining why computers are not intelligent and how, based on this new theory, we can finally build intelligent machines.
The brain is not a computer, but a memory system that stores experiences in a way that reflects the true structure of the world, remembering sequences of events and their nested relationships and making predictions based on those memories. It is this memory-prediction system that forms the basis of intelligence, perception, creativity, and even consciousness.
In an engaging style that will captivate audiences from the merely curious to the professional scientist, Hawkins shows how a clear understanding of how the brain works will make it possible for us to build intelligent machines, in silicon, that will exceed our human ability in surprising ways.
Written with acclaimed science writer Sandra Blakeslee, On Intelligence promises to completely transfigure the possibilities of the technology age. It is a landmark book in its scope and clarity.
- Print length: 272 pages
- Language: English
- Publisher: St. Martin's Griffin
- Publication date: August 1, 2005
- Dimensions: 5.4 x 1.2 x 8.25 inches
- ISBN-10: 0805078533
- ISBN-13: 978-0805078534
Editorial Reviews
Review
“On Intelligence will have a big impact; everyone should read it. In the same way that Erwin Schrödinger's 1943 classic What is Life? made how molecules store genetic information then the big problem for biology, On Intelligence lays out the framework for understanding the brain.” ―James D. Watson, president, Cold Spring Harbor Laboratory, and Nobel laureate in Physiology or Medicine
“Brilliant and imbued with startling clarity. On Intelligence is the most important book in neuroscience, psychology, and artificial intelligence in a generation.” ―Malcolm Young, neurobiologist and provost, University of Newcastle
“Read this book. Burn all the others. It is original, inventive, and thoughtful, from one of the world's foremost thinkers. Jeff Hawkins will change the way the world thinks about intelligence and the prospect of intelligent machines.” ―John Doerr, partner, Kleiner Perkins Caufield & Byers
About the Author
Jeff Hawkins, co-author of On Intelligence, is one of the most successful and highly regarded computer architects and entrepreneurs in Silicon Valley. He founded Palm Computing and Handspring, and created the Redwood Neuroscience Institute to promote research on memory and cognition. Also a member of the scientific board of Cold Spring Harbor Laboratories, he lives in northern California.
Sandra Blakeslee has been writing about science and medicine for The New York Times for more than thirty years and is the co-author of Phantoms in the Brain by V. S. Ramachandran and of Judith Wallerstein's bestselling books on psychology and marriage. She lives in Santa Fe, New Mexico.
Excerpt. © Reprinted by permission. All rights reserved.
On Intelligence
By Jeff Hawkins and Sandra Blakeslee
St. Martin's Press
Copyright © 2005 Jeff Hawkins. All rights reserved.
ISBN: 978-0-8050-7853-4
Excerpt
From On Intelligence: Let me show why computing is not intelligence. Consider the task of catching a ball. Someone throws a ball to you, you see it traveling towards you, and in less than a second you snatch it out of the air. This doesn't seem too difficult, until you try to program a robot arm to do the same. As many a graduate student has found out the hard way, it seems nearly impossible. When engineers or computer scientists try to solve this problem, they first try to calculate the flight of the ball to determine where it will be when it reaches the arm. This calculation requires solving a set of equations of the type you learn in high school physics. Next, all the joints of a robotic arm have to be adjusted in concert to move the hand into the proper position. This whole operation has to be repeated multiple times, for as the ball approaches, the robot gets better information about its location and trajectory. If the robot waits to start moving until it knows exactly where the ball will land, it will be too late to catch it. A computer requires millions of steps to solve the numerous mathematical equations to catch the ball. And although it's imaginable that a computer might be programmed to successfully solve this problem, the brain solves it in a different, faster, more intelligent way.
(Continues...) Excerpted from On Intelligence by Jeff Hawkins. Copyright © 2005 by Jeff Hawkins. Excerpted by permission of St. Martin's Press.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.
Product details
- Publisher : St. Martin's Griffin; Reprint edition (August 1, 2005)
- Language : English
- Paperback : 272 pages
- ISBN-10 : 0805078533
- ISBN-13 : 978-0805078534
- Item Weight : 8.6 ounces
- Dimensions : 5.4 x 1.2 x 8.25 inches
- Best Sellers Rank: #236,768 in Books (See Top 100 in Books)
- #375 in Medical Cognitive Psychology
- #421 in Artificial Intelligence & Semantics
- #686 in Cognitive Psychology (Books)
About the authors
Sandra (aka Sandy) Blakeslee. I am a science writer with endless curiosity and interests but have spent the past 35 years or so writing about the brain, mostly for the New York Times, where I started my career back in the dark ages (late '60s). I've been writing books for the past few years (The Body Has a Mind of Its Own, On Intelligence, Sleights of Mind, Dirt Is Good, and more). As for back story: I graduated from Berkeley in 1965 (Free Speech Movement major), went to the Peace Corps in Borneo, joined the NYT in 1968 as a staff writer, then took off on my own, raised a family, lived in many parts of the world, and now live in Santa Fe, NM, and even have grandchildren. To quote Churchill, so much to do....
Jeff Hawkins is a well-known scientist and entrepreneur, considered one of the most successful and highly regarded computer architects in Silicon Valley. He is widely known for founding Palm Computing and Handspring Inc. and for being the architect of many successful handheld computers. He is often credited with starting the entire handheld computing industry.
Despite his successes as a technology entrepreneur, Hawkins’ primary passion and occupation has been neuroscience. From 2002 to 2005, Hawkins directed the Redwood Neuroscience Institute, now located at U.C. Berkeley. He is currently co-founder and chief scientist at Numenta, a research company focused on neocortical theory.
Hawkins has written two books, "On Intelligence" (2004 with Sandra Blakeslee) and "A Thousand Brains: A new theory of intelligence" (2021). Many of his scientific papers have become some of the most downloaded and cited papers in their journals.
Hawkins has given over one hundred invited talks at research universities, scientific institutions, and corporate research laboratories. He has been recognized with numerous personal and industry awards. He is considered a true visionary by many and has a loyal following – spanning scientists, technologists, and business leaders. Jeff was elected to the National Academy of Engineering in 2003.
Customer reviews
Top reviews from the United States
Hawkins defines intelligence as the ability to make predictions. I think this is an excellent definition of intelligence.
He says the cortex makes predictions via memory. The rat in the maze has a memory which includes both the motor activity of turning right and the experience of food. This activates turning right again, which is equivalent to the prediction that if he turns right, food will occur.
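The rat-in-the-maze idea can be sketched as a toy sequence memory (all names here are invented for illustration; this is not Hawkins' actual model): prediction is simply recall of what followed a given event before.

```python
# Toy sketch: a sequence memory where "prediction" is recall of the
# event that previously followed the current one, like the rat
# recalling that "turn right" was followed by "food".

class SequenceMemory:
    def __init__(self):
        self.transitions = {}  # event -> event observed right after it

    def observe(self, sequence):
        """Store each adjacent pair of events as a learned transition."""
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current] = nxt

    def predict(self, event):
        """Prediction is just recall of what followed this event before."""
        return self.transitions.get(event)

memory = SequenceMemory()
memory.observe(["enter maze", "turn right", "food"])
print(memory.predict("turn right"))  # -> food
```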
The primate visual system, which is the sense best understood, has four cortical areas that are in a hierarchy. In the lowest area, at the back of the head, cells respond to edges in particular locations, sometimes to edges moving in specific directions. In the highest area you can find cells that respond to faces, sometimes particular faces, such as the face of Bill Clinton.
But the microscopic appearance of the cortex is basically the same everywhere. There is not even much difference between motor cortex and sensory cortex. The book makes sense of the connections found in all areas of the cortex.
The cortex is a sheet covering the brain composed of small adjacent columns of cells, each with six layers. Information from a lower cortical area excites layer 4 of a column. Layer 4 cells excite cells in layers 2 and 3 of the same column, which in turn excite cells in layers 5 and 6. Layers 2 and 3 have connections to the higher cortical area. Layer 5 has motor connections (the visual area affects eye movements), and layer 6 connects to the lower cortical area. Layer 6 projects to the long fibers in layer 1 of the area below, which can excite layers 2 and/or 3 in many columns.
So there are two ways of exciting a column. Either by the area below stimulating layer 4, or by the area above stimulating layers 2 and 3. The synapses from the area above are far from the cell bodies of the neurons, but Hawkins suggests that synapses far from the cell body may fire a cell if several synapses are activated simultaneously.
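The two routes into a column can be sketched schematically (a toy model with invented names, not neurophysiology code): feedforward input drives layer 4 and cascades upward through the column, while feedback from above primes layers 2 and 3 directly.

```python
# Schematic sketch of the two excitation routes described above.
# Feedforward input from the lower area drives layer 4, which drives
# layers 2/3, then 5/6. Feedback from the higher area (via layer 1
# fibers) can drive layers 2/3 directly, without bottom-up input.

class Column:
    def __init__(self):
        self.active_layers = set()

    def feedforward(self):
        # Input from the lower area arrives at layer 4 ...
        self.active_layers.add(4)
        # ... layer 4 excites layers 2 and 3 of the same column ...
        self.active_layers.update({2, 3})
        # ... which in turn excite layers 5 and 6.
        self.active_layers.update({5, 6})

    def feedback(self):
        # Input from the higher area puts the column in a "predicted"
        # state by exciting layers 2/3 directly.
        self.active_layers.update({2, 3})

col = Column()
col.feedback()
print(sorted(col.active_layers))  # [2, 3]
col.feedforward()
print(sorted(col.active_layers))  # [2, 3, 4, 5, 6]
```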
The lowest area, at the back of the head, is not actually the beginning of processing. It receives input from the thalamus, in the middle of the brain (which receives input from the eyes). Cells in the thalamus respond to small circles of light, and the first stage of cortical processing is to convert this response to spots into a response to moving edges.
And the highest visual area is not the end of the story. It connects to multisensory areas of the cortex, where vision is combined with hearing and touch, etc.
The very highest area is not cortex at all, but the hippocampus.
Perception always involves prediction. When we look at a face, our fixation point is constantly shifting, and we predict what the result of the next fixation will be.
According to Hawkins, when an area of the cortex knows what it is perceiving, it sends to the area below information on the name of the sequence, and where we are in the sequence. If the next item in the sequence agrees with what the higher area thought it should be, the lower area sends no information back up. But if something unexpected occurs, it transmits information up. If the higher area can interpret the event, it revises its output to the lower area, and sends nothing to the area above it.
But truly unexpected events will percolate all the way up to the hippocampus. It is the hippocampus that processes the truly novel, eventually storing the once novel sequence in the cortex. If the hippocampus on both sides is destroyed, the person may still be intelligent, but can learn nothing new (at least, no new declarative memory).
When building an artificial auto-associative memory, which can learn sequences, it is necessary to build in a delay so that the next item will be predicted when it will occur. Hawkins suggests that the necessary delay is embodied in the feedback loop between layer 5 and the nonspecific areas of the thalamus. A cell in a nonspecific thalamic area may stimulate many cortical cells.
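A minimal version of this delayed recall loop can be sketched as follows (an invented illustration, not Numenta code); the one-step delay stands in for the layer-5/thalamus feedback loop, so that each recalled item arrives in time to cue the next.

```python
# Toy auto-associative sequence replay with an explicit delay line:
# each recalled item is fed back, one step later, as the cue for the
# next item, so a stored sequence unfolds in time.

from collections import deque

def learn(sequence):
    """Store the sequence as cue -> next-item transitions."""
    return {a: b for a, b in zip(sequence, sequence[1:])}

def replay(memory, start, steps):
    out = [start]
    delayed_cue = deque([start])      # the "delay line"
    for _ in range(steps):
        cue = delayed_cue.popleft()   # cue arrives one step late
        nxt = memory.get(cue)
        if nxt is None:
            break                     # end of the stored sequence
        out.append(nxt)
        delayed_cue.append(nxt)       # feed the result back as a cue
    return out

song = ["do", "re", "mi", "fa"]
mem = learn(song)
print(replay(mem, "do", 3))  # ['do', 're', 'mi', 'fa']
```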
I think this theory of how the cortex works makes a lot of sense, and I am grateful to Hawkins and Blakeslee for writing it in a book that is accessible to people with limited background in AI and neuroscience.
But I am not convinced that the mammalian cortex is the only way to achieve intelligence. Hawkins suggests that the rat walks and sniffs with its "reptilian brain", but needs the cortex to learn the correct turn in the maze. But alligators can learn mazes using only their reptilian brains. I would have been quite surprised if they could not.
Even bees can predict, using a brain of one cubic millimeter. Not only can they learn to locate a bowl of sugar water, if you move the bowl a little further away each day, the bee will go to the correct predicted location rather than to the last experienced location.
And large-brained birds achieve primate levels of intelligence without a cortex. The part of the forebrain that is enlarged in highly intelligent birds has a nuclear rather than a laminar (layered) structure. The parrot Alex had language and intelligence equivalent to a two-year-old human, and Aesop's fable of the crow that got what it wanted by dropping stones into the water to raise the water level has been replicated by real crows presented with the same problem.
Taking his lead from Johns Hopkins neuroscience researcher Vernon Mountcastle back in the seventies, Hawkins presumes that the remarkably uniform appearance of the cortex (it basically consists, he tells us, of six layers of neuronal cells throughout) suggests that the various areas of the cortex, shown by researchers to be responsible for different functions (vision, touch, hearing, conceptualizing, etc.), really do everything they do by performing the same process. He is careful to emphasize that he is not talking about other things brains presumably do, including emotions, instinctual drives, and somatic sensations, which he assigns to the lizard brain. It's just the intelligence part he is interested in, though he's certainly aware that for intelligence to work as it does in us it must be integrated with the broad range of other features found in consciousness, including those produced in the lizard brain. So his argument is not that the cortex, in its special capacity, is a stand-alone, but that it is a significant and inextricable add-on to the rest of our brain and works only with and in support of the other features.
For Hawkins, the key to understanding how the cortex does intelligence comes down to understanding the pertinent algorithm. He argues that neuronal groups work hierarchically in two ways. First, vertically: signals run up and down linked columns spanning the six layers of neurons found more or less uniformly throughout the cortex. Second, horizontally: different cortical areas (responsible for different functions, e.g., shapes, colors, sound, touch, taste, smell, language, motor control) are combined and linked into other, non-physically-contiguous hierarchies via myriad axons traveling transversely across the cortical areas and to other parts of the lizard brain. Each axon makes multiple connections through the tree-like dendrites at its end points, resulting in a number of connections that is difficult to estimate but likely in the hundreds of millions or more.
The basic cortical algorithm, performed by all these interconnecting neurons in the cortex, on Hawkins' view, is one of patterning and of the capture and retention of so-called "invariant representations". He argues that human memory is not precise, the way computational memory is (a case made, as well, by Gerald Edelman in his own work). But, where Edelman ( Bright Air, Brilliant Fire: On The Matter Of The Mind ) emphasizes the dynamic and incomplete quality of human recollections, Hawkins emphasizes their general nature. We don't remember things precisely, in detail, he says, but, rather, in only general patterns (adumbrations rather than precise images).
This, he suggests, is because of the basic patterning algorithm of the neuronal group operations in the cortex.
When information flows in, he says, various neurons in the affected groups fire, in very fine detail, much as our taste buds operate in the tongue with different nerves for the different tastes which then pass the captured information up the line to combine further upstream via the brain's more comprehensive processes. In the vision parts of the cortex for instance, Hawkins notes that some cortical cells at the input end of the relevant cellular columns will fire in response to vertical lines, others to horizontals or diagonals, while others, nearby, presumably pick up color information, etc. The various firings pass up the line in increasingly broad (and more generalized) combinations, eventually losing much of the detail but generating patterns driven by the lower level details received.
At the highest level of the cortex, Hawkins reasons we have only the broadest, most general pictures, combining the increasingly broad and more general patterns passed up from below with related general patterns from other areas (say visual patterns with touch patterns and sound patterns, etc.) to give us still larger patterns via associative linkage. When new inputs come in (as they are constantly doing) the passage of the information up the line encounters the stored general patterns higher up which respond by sending signals down the same routes (and also down our motor routes if and when actions are called for).
The ability of the incoming inputs to match stored generic patterns higher up (when the information coming down the line matches the information heading up) is successful prediction. When there is no match, prediction fails and new general patterns form at the higher end of the cortical columns to replace the previous patterns. Thus memory in us is seen as an ongoing adjusting process with repetitive matches producing stronger and stronger traces of previously stored patterns.
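The match/mismatch rule described above can be sketched as a toy hierarchy (hedged illustration with invented names): a region holds a predicted pattern, absorbs matching input silently, and on a mismatch passes the surprise upward and adopts the new pattern.

```python
# Toy sketch of the match/mismatch update rule: matching input is
# absorbed silently; a mismatch percolates to the region above and
# replaces the stored pattern, so memory is an ongoing adjustment.

class Region:
    def __init__(self, predicted, higher=None):
        self.predicted = predicted
        self.higher = higher          # the region one level up, if any

    def receive(self, pattern):
        if pattern == self.predicted:
            return "match"            # nothing is sent upward
        # Mismatch: forward the surprise, then adopt the new pattern.
        if self.higher is not None:
            self.higher.receive(pattern)
        self.predicted = pattern
        return "mismatch"

top = Region(predicted="novel scene")
low = Region(predicted="familiar face", higher=top)
print(low.receive("familiar face"))   # match
print(low.receive("stranger"))        # mismatch, percolates to `top`
print(top.predicted)                  # stranger
```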
Because patterning happens at every level, a kind of pyramid of patterns from the lowest level in the cortex to the highest is seen. At all levels, associative mechanisms are utilized and, at the highest levels, these connect and combine multiple specialized patterns into still larger overarching representational patterns. The capacity to retain invariant representations at all levels, until adjustments are made, gives us the invariant representational capability that forms the basis of human memory and underlies prediction which, he thinks, is what we mean by "intelligence" (i.e., the dynamic process of matching old patterns to new inputs where the more successful the matching, the more "intelligent" we deem the operations performed).
So the cortex, on this view, is a "memory machine" (as Hawkins puts it), using a patterning and matching mechanism to constantly fit the stored representations held in the cortex to the world. And intelligence is seen as the outcome of this massive process that is constantly going on in our brains, i.e., the ability to quickly adjust to incoming information and make successful predictions about it. It's this increasingly complex and generalizing capacity of cortexes, he argues, that gives us the ability to construct and use massively complex pictures of the world around us (the source of our sensory inputs)*.
Hawkins thinks that this is a whole different way of conceiving of intelligent machines, replacing the notion prevalent in mainstream AI that the way to build machine intelligence is to construct massive systems of complex algorithms to perform intelligent functions typical of human capability. Instead, of that, he proposes, we need to concentrate on building chips that will be hardwired to work like cortical neurons in picking up, storing and matching/adjusting a constant inflow of sensory information and which can then be linked in a cortex-like architecture matching the cortical arrangements found in human brains.
Such machines, he proposes, will learn about their world in a way that is analogous to how we do it, build pictures based on sensory information received, recognize patterns and connections and think out of the more confining algorithm-intensive computational box.
Hawkins notes that we don't have to give such machines the kinds of sensory information available to humans and suggests that there is a whole range of different kinds of sensory inputs that might make more sense for such machines, depending on what complex operations they are built to perform (which may include security monitoring, weather prediction, automobile control or work in areas outside ordinary human safety zones, say in outer space, in high radiation areas or at great depths on the ocean floor). Nor does he think we have to worry about such machine intelligences supplanting us (a la The Matrix ) since there is no reason, he argues, that we would have to give such machines drives or feelings, or even a sense of selves such as we have, any of which might make them competitors to humans in our own environment. (Of course, it bears noting that we don't really have any idea of how brains produce drives and selves, per se, so it's at least a moot question whether we can simply, as Hawkins suggests, resolve not to provide these to such machines. After all, what if the synthetic cortical array he envisions turns out to have some or all of the capabilities Hawkins now thinks are seated beyond the cortex in human brains? In such a case, mere resolve not to give such capabilities to the proposed cortical array machines might not be enough!)
One of the main reasons Hawkins argues for a simple hardwired algorithm configured in a cortex-like architecture, versus a massively computational AI application (as envisioned in many AI circles), is that he believes even the most powerful computers today, with far faster processing capacities than any human brain, cannot hope to keep up with this kind of cortical architecture. He comes to this conclusion because he believes too many steps are involved in order to program intelligence comparable to what humans have, thus requiring a computational platform of vast, likely unwieldy, size, and detailed programming that must prove too monumental to undertake and maintain error-free. Nature, he argues, chose a simpler, more elegant and, in the end, superior way: a simple patterning/predicting algorithm.
In many ways Hawkins is much better than Gerald Edelman in dealing with the brain since Edelman gets lost in complexities, vagueness and what look like linguistic confusions in trying to describe brain process or argue against the AI thesis. Hawkins, though he limits his scope to intelligence rather than the full range of consciousness features, gives us a much more detailed and structured picture of how the mechanism under consideration might actually work.
In the end he gives us a picture best understood as arrays of firing cells (think flashing lights) that constantly do what they do in response to incoming and outgoing signal flows, with the incoming reflecting the array of sensory inputs we get from the world outside and the outgoing the stored general patterns that serve as our world "pictures" (not unlike Plato's forms, as he suggests, albeit without the platonistic mysticism) which are built up by the constant inflow.
Thus, he envisions a constant upward and downward flow of signals in the cortical system which is not only dynamic based on the interplay of the dual directional flow of the signals but is reflective of the facts beyond the brain in the world through the compound construction of invariant representations (occurring at every level of cortical activity). To the extent the invariant representations he describes successfully match incoming signals, they are predicting effectively and the organism depending on them is more likely to succeed in its environment. To the extent they are unable to generate effective prediction, the organism depending on them suffers.
A key weakness of Hawkins' explanation lies in his failure to show exactly how the pattern matching and adjusting of the neuronal group hierarchies become the world of which we are consciously aware, in all its rich detail (how mere physical inputs become mind, the components of our mental lives), or how the cortex integrates the many inputs of the rest of the brain. As John Searle (Minds, Brains and Science (1984 Reith Lectures) and Mind, Language, and Society: Philosophy in the Real World) has noted, our idea of intelligence is very much intertwined with our idea of being aware, being a subject, having experience of the inputs we receive, etc. If we understand something, it's not just that we can produce effective responses to the stimuli received but that we are aware of the meanings of what we're doing, what is going on, etc.
Hawkins' "intelligence" looks to be a very much truncated form of this, albeit deliberately so, because he wants to argue for intelligent machines that will be "smarter" than computers but not quite smart enough to be a threat to us. Still, despite the fact that he has offered an intriguing possibility, which may well be an important step forward in the process of understanding minds and brains and of building real artificial intelligence, one can't escape the feeling he has still missed something along the way by distancing himself from the question of what it is to be aware -- to understand what one is doing when one is doing it.
SWM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* One of the critical differences between us and mammals lower down the development scale, he suggests, is the relative size of our cortexes. Many mammals with smaller brains just have smaller cortexes and, thus, fewer cells there, while some mammals, e.g., dolphins, actually have larger brains but less dense cortexes -- three layers vs. our six. Thus, says Hawkins, the intelligence we have reflects a greater capacity to form representations (covering more inputs, including past and present and a greater capacity for abstraction).
Top reviews from other countries


If you've read this far, I invite you to buy the book and read it in full.
The author comes across as a friend: he recounts his successes (without arrogance) and his mistakes (what he wrongly believed for many years and the "no"s he received, admitting them bluntly and without shame).
This book contains some very interesting key concepts that are not easily found elsewhere.
It flows well and is rich in examples; you can feel the scientific ferment and the discussions across multiple disciplines.
Highly recommended.
(Read the bibliography. There are several gems in it.)

Still relevant and groundbreaking in 2018 as deep neural net AI proves Hawkins right.
This book will change the way you think about your mind. When you understand sequence memory prediction you'll see the world in a different way - a true paradigm shift in the same league as the theory of evolution.


Most of the book is devoted to describing the neocortex, its role, and the way it interacts with and builds a model of the outside world.
Auto-association, memory, and prediction within a hierarchical organization are the key words of this fascinating book, which is not reserved solely for those interested in AI, but is also for those seeking to understand what human intelligence is.