My book, THE RAVENOUS BRAIN, officially comes out today (in the US at least – 13th Sep in the UK/Europe in hardback, but the Kindle edition is available everywhere now, I think).
While I’m writing, I thought I’d take the opportunity to update you on some of the events surrounding the launch.
The book has already received some very positive reviews, including in New Scientist.
Another review to pick out is the Times Higher Education magazine, where it is book of the week.
All the reviews I know of are included on my book page, with links where available to the full reviews.
And it’s also been chosen as the September main selection for the Scientific American Book Club, and as a second selection at the Book of the Month Club – both bits of news I feel really honoured by.
If you want to read a snippet or two, there is an abridged excerpt of the introduction, titled Consciousness: The Currency of Life on Huffington Post.
There is also an extended excerpt of chapter three, titled Touring The Brain, on Salon.
I’ve also written a couple of articles on issues related to the book, in various places.
For instance, I wrote an article in Wired UK magazine about the perils and limits of unconscious decisions and learning (NB the online version of the article might take a few days to turn up, but it’s out in the old-fashioned paper version right now).
And there’s an article called When Do We Become Truly Conscious in Slate magazine, which right now is the most read story on the site, pushing a feature article about a New York dominatrix into second place! So maybe there’s hope for science yet!
The book publicist has been doing a wonderful job on my behalf, and I’ve also been, or will be, interviewed by various radio stations about the book, consciousness or the brain, and you can find out details about those here.
So please grab a copy of The Ravenous Brain – and if you want to tell me what you think of it, I’d love to hear from you.
Daniel, I just bought the Kindle version, my ravenous brain eager to ingest its wisdom and comment on its impact on my hungry noodle. Sounds like you’ve got a winner.
Author
Thanks a lot Tony! I hope you enjoy the book.
It’s going on my book review shortlist for BioJournalism.com. Glad to hear of the good reviews!
Author
Thanks Kallen, please let me know if/when you publish a review on your site.
Congrats and thanks for a marvelous and insightful book, Daniel! I have posted my thoughts and review on my blog … Hope you enjoy it!
Author
Thanks! And thank you for such a lovely review as well – miles better quality than one or two of the official magazine reviews I received!
Thanks so much, Daniel, I am very glad you enjoyed it! Minor philosophical differences aside, I consider your book an absolute must-read!
In fact, I am promoting it at my college and hope to have it included in our library soon. It is already being considered at the public library here. Anyhow, best of luck, and I look forward to your next endeavour!
Have you considered the views of Alva Noe? I think much is missing on the origin of a thought. Perhaps we should look in the direction of quantum physics, such as string theory and harmonics from yet-to-be-discovered dimensions, or entanglement. I am sure your book is not the last word on the mind and the brain.
Author
Thanks for the comment, Lisa.
Re Noe, I completely agree that interactions with the external world are a vital, substantive ingredient of our conscious selves. However, under-estimating the role that the brain plays in generating consciousness isn’t so helpful, nor is dismissing or largely ignoring current neuroscientific progress in consciousness!
Appealing to areas such as quantum physics, as I mention in the book, is basically following the specious syllogism: consciousness is really mysterious, quantum mechanics is really mysterious, therefore quantum mechanics explains consciousness. That’s a bit facetious, but only a bit! I argue in the book that the best approach is to stop assuming that consciousness is some intrinsically mysterious, potentially unknowable process, and go out and test this, via science.
Already, after just two decades of solid work, it’s looking far less mysterious and more tractable. We’re definitely not at the last word yet, but we’re making a surprising degree of scientific progress, and I suggest you have a look at my book and then, once you’ve read more of the science, judge for yourself whether consciousness is necessarily so mysterious or not.
I have read your book and think you are overconfident and dismiss the origin of a thought. There is nothing mysterious, just things not yet defined to the point of certainty. Theories need measured proof, and fMRI does not measure the workings of the brain at the level of its atoms. fMRI leaves much to be questioned. Keep an open mind, like a child would.
Just came across your book on Amazon. Read your article in Slate magazine and now I’m set to purchase myself a copy. Looking forward to your insights.
Congratulations Daniel,
This morning I was reading the Spanish online journal “ElConfidencial.com”, where I found your article about neurons and the meaning of life, and I completely agree with you.
I have always said that to better understand everything we must understand how our brain works, and how real reality is from the point of view of a human brain.
How our brain makes our reality, our awareness and our consciousness, how we are subject to physical laws, and how we are alive only when we know and feel that we are alive.
Anyway, there’s another issue that I have not yet managed to explain, and that is premonitory dreams. There must be a link between different states of our consciousness and reality. I’m sure there’s no spiritual or religious meaning, only physics and nature.
So thanks for your work, Daniel; I will read your book.
Has this blog died following the book publication?
Author
Not dead. I’ve just been unbelievably busy with book promotion, then catching up on research and a few other writing tasks. I have a few blog posts in my head that I keep meaning to add. As soon as I have a spare moment (not honestly sure when this will be), I’ll put them up.
I’ve been intensely interested in consciousness for several decades, and coming across your book at a book sale felt rather like stumbling across a gold mine. Rarely have I devoured a book as intensely and thoroughly as I did yours.
One thing I wish you had explained in a bit more detail is the subject of the “neuronal rhythm”, or brain waves. You say that high gamma waves are initiated by the thalamus, and “perpetuate through the relevant parts of the cortex and bind together all components of an object in consciousness.” This gives me a vague idea of the process, but it is awfully vague.
Understanding that individual neurons fire in response to some stimulus, and that each neuron’s firing communicates its signals to many other neurons, I’m still left wondering how these neuronal firings relate to the overall neuronal rhythm you speak of. Do all of the brain’s neurons fire (or not fire) simultaneously and at the frequency represented by the neuronal rhythm? Is this rhythm more or less analogous to the clock rate of a computer processor, such that whatever happens in the brain happens in discrete steps?
Author
Thanks so much for the compliments and comments, David.
Neuronal rhythms are a particularly hot topic at the moment, but they’re a tricky thing to study precisely. What you need is to record simultaneously from a large set of neurons. Emerging techniques are beginning to get at this and to see these waves first-hand, as one set of neurons triggers others nearby, just like in a Mexican wave. But it’s relatively early days.
In terms of “discrete” steps, it seems that the neuronal rhythms related to visual perception do work something like this, in the alpha range, and you can enhance or interfere with detection of a stimulus, for instance, by presenting stimuli at the peaks or troughs of these waves.
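(Purely as an illustration of the phase idea – not a description of any actual experiment – here is a rough Python sketch in which a 10 Hz alpha rhythm is assumed and the probability of detecting a weak stimulus is made to depend on the oscillation’s phase at stimulus onset; all the numbers are invented for the sketch.)

```python
# Toy illustration only: assume the detectability of a weak stimulus is
# modulated by the phase of a 10 Hz "alpha" oscillation at stimulus onset.
# Every constant here is made up for the sketch, not taken from real data.
import math
import random

ALPHA_HZ = 10.0   # assumed alpha frequency
BASE_P   = 0.5    # baseline detection probability
DEPTH    = 0.3    # how strongly phase modulates detection

def detection_probability(onset_time_s):
    """Detection probability as a function of when the stimulus arrives."""
    phase = 2 * math.pi * ALPHA_HZ * onset_time_s
    return BASE_P + DEPTH * math.cos(phase)   # higher at the peak, lower at the trough

def detection_rate(onset_time_s, trials=10000):
    """Simulate many trials at a fixed onset time and count the detections."""
    p = detection_probability(onset_time_s)
    return sum(random.random() < p for _ in range(trials)) / trials

print("stimulus at wave peak:  ", detection_rate(0.0))    # onset at phase 0
print("stimulus at wave trough:", detection_rate(0.05))   # half a cycle later
```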
Thanks for the reply. I guess we’ll have to wait to see what new research reveals about neuronal rhythms.
I’ve got another, bigger question. In your book, you equate the term “experience” with consciousness. (Or at least, the index entry for “experience” just says “See consciousness.”) I would like to use the word “experience” in a different way, though: meaning approximately what you refer to as “subjectivity,” or in Nagel’s words, that it “is like something” to be a conscious human – or presumably, a bat, or a monkey, or an octopus.
Your explanation of consciousness – its purpose, and how the brain creates it – is fascinating and makes a lot of sense. I’m 98% convinced that you’re correct that the mind, and our conscious experience, are simply products of the physical brain. (I’ll leave the other 2% for another day.) But this still leaves what you call “the abiding mystery of subjectivity” dangling there like a tantalizingly ripe piece of fruit just out of reach.
It seems very plausible to me that all of the computation needed for animals, including humans, to successfully survive and carry on their lives COULD be accomplished without any whiff of experience.
I’ve been a software developer for over 40 years, so I have some insight into how computers work. The feats that are now being accomplished with technology are astounding: recognizing human faces, driving cars through traffic, beating chess grandmasters, winning at Jeopardy. Yet I’m pretty sure that the computers that do these things, like the one sitting on my desk, have no experience of doing them. It’s not “like something” to be Watson, the Jeopardy champion.
In your book you point out that our brains are massively parallel systems, while today’s computers operate almost entirely in serial fashion. Clearly this is a very significant difference. Still, to me it leaves open the question of whether experience is an emergent property specifically of biological brains, or whether experience can also emerge out of silicon devices.
Note that I’m not arguing that experience can NOT arise artificially. It’s just that, even with all that we’ve learned about consciousness, the phenomenon of experience seems almost as mysterious as ever, and I’m not convinced that we know enough to be confident that it can be constructed in any way other than the way it’s been done for millions of years. I really hope that I live long enough to see a definitive answer to this question. Whichever way it’s answered, it’s sure to have a profound impact on our understanding of what it means to be alive and aware.
I’ve written more about this in my own blog, including a thought experiment that I believe is significantly different from the “Chinese Room” proposed by John Searle and described in your book. I won’t repeat it here, but if you have time I’d appreciate it if you’d take a look: http://just.thinkofit.com/christof-koch-could-the-internet-learn-to-feel/
On a related note, here’s my criticism of a recent article in Scientific American by Christof Koch and Giulio Tononi that proposes a test to determine whether a machine has reached a state of conscious awareness: http://just.thinkofit.com/how-will-we-know-when-a-computer-becomes-conscious/
Thanks again for your wonderful book.
Author
Hi David,
Thanks for the very thoughtful comments.
Unlike you, though, I don’t find it plausible that the complexity of human thought could occur “without any whiff of experience”: loads of evidence suggests (see my book) that if we are going to have complex thoughts, we have to have consciousness too, which in turn suggests that consciousness is necessary, in some way, for complex thought. And if we built a computer with a similar kind of architecture and complexity to a human brain, so that it too could come up with lots of complex thoughts, I think it highly plausible that the computer would be conscious, in similar ways to us.
I agree that the Koch/Tononi article might not have given the most watertight test for artificial consciousness. But Tononi’s Information Integration Theory is an intriguing, popular theory (I describe it somewhat in the book), and it simply takes the leap that consciousness is integrated information – that subjectivity comes from a closed network of a particular type of architecture which can’t communicate with other networks. His theory applies equally to artificial and to real networks. If his theory, or something resembling it, is right, we could simply quantify an artificial being’s level of consciousness and the problem would be solved. There is already some empirical confirmation for this theory, although researchers suspect the confirmation would work for a large bunch of other similar theories too.
My approach at this stage would be more cautious – if there’s a robot that behaves like us, with language and physical movements reflecting the breadth of the kinds of complex thought that we have, and it tells us that it thinks it is conscious, then I think it’s very likely that this robot is conscious. We can then work down from there, and look at slightly simpler robots, with a similar artificial architecture but somewhat fewer nodes, which might not have the capacity for language but still show complex, flexible behaviours. Then we can say they are probably conscious, but we aren’t so sure, and so on.
As for your thought experiment, I’m afraid I didn’t quite understand it: so there’s no logical or mechanical relationship between the rocks? That’s obviously wildly different from the relationships between neurons in a brain, so the simulation seems to be missing something crucial, perhaps? But aside from this, I agree that it’s critical that whatever network/brain/computer you have, it has to be “embodied,” meaning it has to interact with the world via some kind of sensory system. Our own neural meaning comes from interaction with the world, and is constantly shaped by it. I think thought experiments like the Chinese Room argument, and perhaps your rocks, fail as valid thought experiments partly because of the lack of intimate connection between the internal information, in whatever form, and the environment.
First, I want to reiterate that I draw a distinction between consciousness as you very clearly define it in your book (essentially, as a particular form of information processing) and experience, or what you refer to as “subjectivity” – the something-it-is-like-to-be. I realize that in your model, and Tononi’s, experience is either equivalent to consciousness, or inevitably arises from it. I grant that this is a very real and intriguing possibility but I’m not quite ready to accept it as an established truth.
In the original “Bunch of Rocks” cartoon at xkcd (http://xkcd.com/505/) the character creates a computer simulation of the entire universe with accuracy down to the quantum level. The twist is that instead of using electronic circuitry, his “computer” consists of a bunch of rocks that he moves around in the sand according to rules that match the laws of physics. In principle, the computers we use in real life and this imaginary character’s bunch of rocks are both Turing machines, and hence formally equivalent. The logic is the same, it’s just that the data is represented as patterns of rocks rather than patterns of electronic bits.
In my version, I’m saying let’s simplify this by choosing to simulate just a single human brain rather than the entire universe. Maybe I should expand on that a little and include not just a brain, but an entire live human body, plus enough of an immediate environment so the simulated human has some sensory input.
In order for this to be an accurate simulation, of course the simulated neurons must interact with one another in the same way they do in a real brain. Since the brain is a massively parallel system, we would want the rocks representing neurons to be flipped or moved simultaneously rather than serially. So perhaps we need to imagine that instead of just one guy moving rocks around, we have a billion guys, each responsible for moving one rock according to a given set of rules, so they can all move in parallel. Maybe we need a bunch of rocks to represent the state of each neuron. And maybe we can replace the guys with a bunch of dumb machines that move the rocks. After all, they are just following the rules in a rulebook, moving rocks around based on each neuron’s function and its inputs. For my purposes, these details don’t really matter.
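(In case it helps to make the parallel updating concrete, here is a minimal Python sketch of the kind of synchronous step I have in mind; the network size, random wiring and threshold rule are invented purely for illustration and aren’t taken from the book.)

```python
# Toy sketch of the "billion rock movers" idea: every simulated neuron is
# updated from the same snapshot of the network, i.e. synchronously / in
# parallel, rather than one after another. The wiring and the threshold
# rule below are arbitrary illustrations, not a model of any real brain.
import random

N = 1000   # number of simulated "neurons" (rocks)
K = 20     # inputs per neuron

# Random wiring, weights and initial firing pattern, purely for illustration.
inputs  = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
weights = [[random.uniform(-1, 1) for _ in range(K)] for _ in range(N)]
state   = [random.random() < 0.1 for _ in range(N)]

def step(state):
    """One synchronous update: every 'rock mover' reads the old snapshot
    and writes into a new one, so the order of the loop doesn't matter."""
    new_state = [False] * N
    for i in range(N):
        drive = sum(w for j, w in zip(inputs[i], weights[i]) if state[j])
        new_state[i] = drive > 0.5   # arbitrary firing threshold
    return new_state

for t in range(100):
    state = step(state)
print("active neurons after 100 steps:", sum(state))
```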
As you explain in your book, the Chinese Room experiment could never be carried out in real life because the number of possible inputs and corresponding outputs is simply too enormous. Likewise, the bunch-of-rocks simulation would be completely impractical to carry out. But it seems to me it’s still worth considering, from two different perspectives.
The first is at sort of a gut level: it just seems wrong somehow to imagine that a bunch of rocks being moved around would generate a conscious entity that experiences itself and its surroundings. I admit that this gut reaction doesn’t prove anything, but it’s something to consider.
The second perspective is perhaps more significant. In a brain, the action of a given neuron has an inherent meaning. It’s easiest to recognize the inherent meaning of neurons involved with the lowest level of sensory perception: detecting colors, edges of objects, etc., but presumably this carries up to higher levels of organization, such that certain patterns of firing have inherent meaning in the brain and to the organism as a whole. I would maintain that this is a fundamental difference between an actual brain and any digital simulation of a brain, be it a simulation based in electronic circuitry or one based on rocks in the sand. A given pattern of bits in a computer can represent an infinite number of different things: an image, a sound, text, numbers in a spreadsheet, whatever. The meaning of any particular pattern depends entirely and arbitrarily on how some programmer decided to encode data or program instructions. The same physical bits might hold the text of an email in one instant, a picture of a walrus the next instant, and the digits of Pi a moment later. I don’t believe this is true of brains.
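(As a small illustration of that arbitrariness, here is a Python snippet in which the same four bytes are read as text, as an integer and as a floating-point number; the bytes themselves are just an arbitrary example.)

```python
# The same four bytes, read three different ways. Nothing about the bytes
# themselves says which interpretation is "the real one"; that depends
# entirely on the convention the programmer chooses.
import struct

raw = b"Walr"   # arbitrary example bytes

as_text    = raw.decode("ascii")          # interpreted as ASCII characters
as_integer = int.from_bytes(raw, "big")   # interpreted as a 32-bit integer
as_float   = struct.unpack(">f", raw)[0]  # interpreted as a 32-bit float

print(as_text, as_integer, as_float)
```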
But again, I’m not claiming that this proves a machine can never be conscious or have an experience of itself. I just think it’s reason enough to maintain some healthy skepticism about whether it’s possible. I’m eager for future research to determine the answer, and I think whichever way it goes, it will be equally momentous.